
picekl

Name: Lukas Picek
Organization: University of West Bohemia
Location: Rokycany, CZ



Challenges Entered

Classify images of snake species from around the world

Badges

Gold: 1 · Silver: 0 · Bronze: 1

  • EulerLearner — May 16, 2020
  • Trustable — Has filled their profile page (May 16, 2020)
  • Gold badge — Great work! You're one of the top participants in this challenge. Here's a gold badge to celebrate the achievement.
    Challenge: Snake Species Identification Challenge (May 16, 2020)

Snake Species Identification Challenge

Tried pushing a non-debug mode version but was detected to have debug mode active

5 months ago

Hi,

just remove the “debug”: “false” line from your config file. It should work fine.

Lukas

System confirmation for submissions

6 months ago

@gokuleloop

From my experience, this depends on your environment setup / Dockerfile.

Lukas

Evaluation Error but Image built successfully

6 months ago

Dear Akash,

it looks like a problem with your environment file, especially with “cudatoolkit=10.1.243”.

@shivam Can you please look into it?

Best,
Lukas

SnakeCLEF how to submit, when and how many?

6 months ago

@gokuleloop @christophmf

Dear participants, we have decided to postpone the deadline by 12 hours, to 23:59 UTC.

Lukas

Error in submission

6 months ago

Can you please tell me your timezone? I’m available tomorrow from 11:00 am CEST till ~11:00 pm CEST. Is there a time that works for you?

Please write to me at lukaspicek@gmail.com.

Best,
Lukas

SnakeCLEF how to submit, when and how many?

6 months ago

Dear Christoph,

To the best of my knowledge, the deadline is 5 June, 12:00 UTC (2 pm CEST).

Best,
Lukas

Error in submission

6 months ago

Hi there,

I have looked into your last commits, and it’s hard to tell what could be wrong as there are a lot of mistakes.

How about a private Skype / Zoom call where I could help you more easily?

Best,
Lukas

Read state_dict in my submission

6 months ago

Hi,

in this case you have to use Git LFS (https://docs.gitlab.com/ee/topics/git/lfs/).

Please let me know if you manage to upload it.

Lukas
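As a rough pre-flight check before pushing, a sketch like the following can list files that are probably too large to push without Git LFS. The ~100 MB threshold is an assumption (check your GitLab instance’s actual push limit), and the helper name is made up:

```python
import os

def files_needing_lfs(root, threshold=100 * 1024 * 1024):
    """Return files under `root` larger than `threshold` bytes -- candidates for Git LFS.

    The 100 MB default is a hypothetical limit; adjust it to your server's setting."""
    large = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > threshold:
                large.append(path)
    return large
```

Anything the check reports (typically model weight files) should be tracked with `git lfs track` before committing.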

Submission is taking really long

6 months ago

@shivam Can you please look into it?

Lukas

Submission is taking really long

6 months ago

Hi Eric,

sadly, I can not help you with this error.

@shivam can you please look into it?

Best,
Lukas

Submission is taking really long

6 months ago

We have to wait for Shivam. I don’t have such rights.

@shivam Can you do that?

Best,
Lukas

Submission is taking really long

6 months ago

Hi Eric,

My guess is that you are not using the GPU.

Put “gpu”: true into your aicrowd.json file.

Like this -> https://gitlab.aicrowd.com/nikhil_rayaprolu/food-recognition/blob/master/aicrowd.json

Best,
Lukas
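For reference, flipping the flag programmatically might look like this. A sketch only: the “gpu” key and filename come from the thread and the linked example, while the helper name is made up:

```python
import json

def enable_gpu(path="aicrowd.json"):
    """Set "gpu": true in an aicrowd.json-style config so the evaluator runs on a GPU node."""
    with open(path) as f:
        config = json.load(f)
    config["gpu"] = True  # the evaluator reads this flag when scheduling the job
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```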

How to unzip fastly the data?

6 months ago

Hi,

If you are a Linux user, please follow the instructions on this site.

For the Windows users:

Best,
Lukas
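Since the Windows instructions were left out above, one portable fallback is Python’s built-in zipfile module, which behaves the same on Linux and Windows. A sketch; the paths and helper name are illustrative:

```python
import zipfile

def extract_dataset(archive_path, target_dir):
    """Extract every member of a .zip archive into target_dir and return the member names."""
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(target_dir)
        return zf.namelist()
```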

New Submission does not appear in Leaderboard

7 months ago

Hi Eric,

please see the comment below your latest submission.
Hope it helps.

Anyway, you should run the script locally first to prevent such failures.

Best,
Lukas

Can we add our own images in the training set?

7 months ago

Hello,

the answer is no, since you could download data that are present in the test set.

What is your motivation behind that? Based on my initial experiments, you can achieve a really high score with just the provided data.

Please keep in mind that your results should be replicable at any time.

Best,
Lukas

New Submission does not appear in Leaderboard

7 months ago

Let me please point out one more thing.

You are supposed to run your model directly on GitLab. It looks like you are doing some strange things there.

Please let me know if there is anything I can help you with.

Best,
Lukas

New Submission does not appear in Leaderboard

7 months ago

Hi Eric,

the Leaderboard shows only the best submission for each participant.

You should be able to see your latest scores on GitLab or on the submissions page.

Best,
Lukas

SnakeCLEF how to submit, when and how many?

7 months ago

Dear Christoph,

after our discussion with the CLEF organizers, followed by our internal one, we have decided to keep the “Kaggle-like” submission workflow.

That means:

  • You are allowed to submit a maximum of 5 submissions per day.
  • Any submission after 5 June 2020 won’t be accepted for CLEF.
  • The whole evaluation is done on our servers, without any access to the “test set”.

Kind Regards,
Lukas

SnakeCLEF how to submit, when and how many?

7 months ago

Hi Christoph,

Thank you for contacting us.
As I have participated in CLEF before, I know what you are talking about and what exactly confused you.

We are currently looking into this and I will let you know once I know the answer.
Hopefully later today.

Best,
Lukas

Introduction - New Technical Support / Community Manager

7 months ago

Hi everyone,

my name is Lukas Picek and I’m the latest „hombre“ here at AICrowd.

My main responsibility is to make your user experience as good as possible while supporting you with any issue related to the one and only Snake Species Identification Challenge. :wink:

Briefly about me:

  • Originally from Pilsen, Czech Republic.
  • Computer Scientist / PhD student with a focus on Computer Vision, especially Fine-Grained Visual Categorisation.
  • CV competitions addict:
    • Have won multiple challenges across all the platforms, including Kaggle, AICrowd, DrivenData and ZINDI.
    • Have won the Snake Species Identification Challenge P3 .)

Hope to hear from you soon!

My Best,
Lukas

Submissions Failed

About 1 year ago

Hi all,

I was trying to submit some preliminary results and failed multiple times.
Now I’m feeling pretty frustrated, as I’m not able to identify what is wrong. There is just a “FAILED” label on the GitLab issue.

Is there any way to find out what the error was? Or is there anyone on the AIcrowd side open to investigating it?

@mohanty

Thank you in advance.
LP

ImageCLEF 2020 Coral - Pixel-wise parsing

"Create Submission" not working

6 months ago

Works fine now. :+1:

"Create Submission" not working

6 months ago

@shivam @albagarciaseco

Looks like there is some error related to the deadline extension.
CreateSubmission returns 500.

EDIT: The Annotation & localisation challenge is having the same issue.

Lukas

ImageCLEF 2020 DrawnUI

Exploit like score - 0.998!?

6 months ago

This is up to the organisers to decide. From my perspective, it’s really hard to track the number of people on a single team, and it’s expected that one team will have 10 submissions in total. Otherwise, you would be motivated to accumulate a huge number of “contributors” to increase your number of submissions. In our case, we are 3 and we are going to submit only 10 submissions. Only one of us has signed the EULA.

Exploit like score - 0.998!?

6 months ago

I’m afraid that you are supposed to submit only 10 submissions per team. Using multiple accounts to increase the number of submissions could be considered a rules violation. At least, other platforms (e.g. Kaggle) work this way.

Lukas

Exploit like score - 0.998!?

6 months ago

I’m not sure if this is possible. This is how CLEF is evaluated -> secretly.

Sadly, since the best score is visible, it’s not the same.

Exploit like score - 0.998!?

6 months ago

Thank you @dimitri.fichou for the clarification. I would also like to hear from @OG_SouL about their solution. I could be wrong, and they might have a solution that is super accurate.

Best,
Lukas

Exploit like score - 0.998!?

6 months ago

Dear Organisers,

It seems like @OG_SouL is using the metric exploit I mentioned previously on the forum.

How can we (the honest participants) be sure that such submissions won’t be considered in the competition? For your information, our submissions are constructed to optimise the F1 score, the harmonic mean of precision and recall.

Thank you in advance for your answers.
Lukas

About the evaluation metric

6 months ago

Dear Alba & Dimitry,

to prove my point, I have submitted one submission today. I achieved an overall Precision of 0.997 and a Recall of 10.276. Note that the maximum value for both Precision and Recall is 1.

I’m begging you, PLEASE change the evaluation metric. Without the change, it will end badly.

Kind Regards,
Lukas

@dimitri.fichou
@albagarciaseco
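To illustrate why a recall of 10.276 signals a broken evaluation script: with the standard definitions, precision, recall, and their harmonic mean (F1) are all bounded by 1. A minimal sketch with hypothetical counts:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard definitions from true positives, false positives, and false negatives.

    All three values always lie in [0, 1], so a recall of 10.276 is impossible."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, with 90 true positives, 10 false positives, and 30 missed objects, precision is 0.9 and recall 0.75.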

About the evaluation metric

6 months ago

Dear Dimitry,

Again, I don’t agree with you. Looking at the paper from last year --> http://ceur-ws.org/Vol-2380/paper_200.pdf, you can see that the metric was different. They had to use a different script.

With the proposed metric, the competition could look really amateurish, and related articles probably won’t be well accepted by the CV/ML community.

Furthermore, it looks like your script ignores classes while calculating IoU; can you confirm or deny that?

PS: Considering that the competition has been running for 2 months already, changing the final metric two weeks before the deadline is a bit unfair.

Best,
Lukas

About the evaluation metric

6 months ago

Dear Dimitry,

I need to disagree with you. Other ImageCLEF challenges, such as ImageCLEF-Coral, use mAP0.5 as the evaluation metric. The same metric was also used last year in this very challenge.

All the big datasets use mAP as the main metric. I don’t see any reasonable argument for using anything different. The metric you are using does not make sense for your long-tailed “class presence” distribution. It will ignore the least present classes.

Best,
Lukas
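The concern about ignoring classes can be made concrete: in a class-aware evaluation (as in mAP), a detection is only matched against ground truth of the same class, and the overlap itself is IoU. A sketch; the box format and helper names are my own, not the challenge script’s:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def class_aware_iou(detection, ground_truth):
    """Score overlap only when the class labels agree; a class-ignoring script skips this check."""
    (cls_d, box_d), (cls_g, box_g) = detection, ground_truth
    return iou(box_d, box_g) if cls_d == cls_g else 0.0
```

A script that drops the label check would credit a perfectly placed box of the wrong class with IoU 1.0 instead of 0.0.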
