Mengdi Song (Mengdi)
0 Followers · 0 Following
Badges: 2 · 1 · 0
Activity
[Contribution calendar omitted]
Ratings Progression: [chart not captured]
Challenge Categories: [chart not captured]
Challenges Entered
3D Seismic Image Interpretation by Machine Learning
Latest submissions:
graded | 108915
graded | 108910
graded | 108703
Mengdi has not joined any teams yet...
Seismic Facies Identification Challenge
Which average method is used in the calculation of f1-score?
Almost 4 years ago
Thank you very much!
Which average method is used in the calculation of f1-score?
Almost 4 years ago
Hello,
If I understand correctly, the principal evaluation metric is the multi-class F1-score. I would like to ask which averaging method is used in the calculation of this F1-score? For example, if we use sklearn.metrics.f1_score, there are options of 'micro', 'macro' or 'weighted'.
If someone knows the answer, that would help me a lot.
Thank you very much!
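For reference, the difference between those averaging options can be seen with a small toy sketch using sklearn (illustrative only; this is not the challenge's actual evaluation code, and the labels below are made up):

```python
# Toy comparison of the f1_score averaging options mentioned above.
# The labels are arbitrary; this is not the official evaluation script.
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 2, 2, 2]   # 3-class ground truth (made up)
y_pred = [0, 1, 1, 1, 2, 2, 0]   # 3-class predictions (made up)

for avg in ("micro", "macro", "weighted"):
    print(avg, f1_score(y_true, y_pred, average=avg))
```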
Mengdi has not provided any information yet.
Notebooks
- 3rd place solution: image segmentation, DeepLabV3+, efficientnet-b3, PyTorch · Mengdi · Over 3 years ago
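The notebook title above names the architecture; below is a minimal sketch of assembling such a model with the segmentation_models_pytorch package (an assumption on my part; the actual notebook may build it differently, and the channel and class counts are guesses):

```python
# Hypothetical sketch of the model named in the notebook title:
# DeepLabV3+ with an efficientnet-b3 encoder in PyTorch, built with
# segmentation_models_pytorch. Not taken from the notebook itself.
import torch
import segmentation_models_pytorch as smp

model = smp.DeepLabV3Plus(
    encoder_name="efficientnet-b3",   # EfficientNet-B3 backbone
    encoder_weights="imagenet",       # pretrained encoder weights
    in_channels=1,                    # single-channel seismic amplitude (assumption)
    classes=6,                        # number of facies classes (assumption)
)

x = torch.randn(2, 1, 256, 256)       # dummy batch: (N, C, H, W)
with torch.no_grad():
    logits = model(x)                 # per-pixel class logits, (2, 6, 256, 256)
print(logits.shape)
```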
Availability of testset 1 labels
Almost 4 years ago
Hello,
For round 2: since testset 2 (the testset of round 2) is close to the trainset and testset 1, adding testset 1 data to the training set would probably help to improve model performance. However, we do not have the testset 1 labels. If one wants to use trainset + testset 1 in training, they would first need to train a good 'round 1' model. This means that if someone wants to succeed in round 2, their round 1 model needs to be the best (or among the best).
My question is: do you plan to release the ground-truth of testset 1, so that round 2 becomes independent of round 1?
Thank you in advance for your answer.
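The workaround described in the post (using testset 1 in training without its labels) amounts to pseudo-labeling with a round 1 model. A rough sketch follows, where round1_model, train_images, train_labels and test1_images are hypothetical placeholders rather than names from the challenge starter kit:

```python
# Rough pseudo-labeling sketch for the strategy discussed above.
# All variable names are hypothetical placeholders; array shapes are
# assumed to be (N, C, H, W) for images and (N, H, W) for labels.
import numpy as np
import torch

def pseudo_label(model, images, batch_size=8, device="cuda"):
    """Predict per-pixel class indices for unlabeled testset 1 tiles."""
    model.eval().to(device)
    preds = []
    with torch.no_grad():
        for i in range(0, len(images), batch_size):
            batch = torch.from_numpy(images[i:i + batch_size]).float().to(device)
            logits = model(batch)                      # (n, classes, H, W)
            preds.append(logits.argmax(dim=1).cpu().numpy())
    return np.concatenate(preds, axis=0)               # (N, H, W)

# test1_pseudo = pseudo_label(round1_model, test1_images)
# combined_images = np.concatenate([train_images, test1_images], axis=0)
# combined_labels = np.concatenate([train_labels, test1_pseudo], axis=0)
```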