wangbot

0 Followers · 0 Following

Location: US

Badges: 2 · 1 · 1

Activity

[Contribution calendar; only month and weekday axis labels were captured.]


Novartis DSAI Challenge

How to use conda-forge or CRAN for packages in evaluation?

Over 4 years ago

@bjoern.holzhauer Hi Bjoern, were you successful in using glmnet during evaluation by adding "- r-glmnet 2.0_16" to the environment.yml file? Thanks.
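
For reference, CRAN packages are usually pulled in through the conda-forge channel with an r- prefix in environment.yml; the channel list and version pin below are my own assumptions for illustration, not the challenge's confirmed evaluation setup:

    name: evaluation-env
    channels:
      - conda-forge
      - defaults
    dependencies:
      - r-base
      - r-glmnet=2.0_16   # conda-forge's packaging of CRAN's glmnet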

RAW DATA now available in shared folder

Over 4 years ago

Thanks for making the raw data available in this more approachable manner. Is there a way to share the data dictionary of the raw data, or instructions on how to get the data dictionary through the Informa API?

I ask because the master data dictionary provided under "Resources" is not complete. Many variables' descriptions are missing or not informative. Examples:

  • "strTerminationReason": what does it mean when this variable is "" (an empty string)?

  • "TherapyDescription", "strStudyDesign", "strPrimaryEndpoint", "strPatientPopulation", "DrugDeliveryRouteDescription": all are missing variable descriptions.

It'd be helpful to know how these variables are collected: free text in some trial registry, or, hopefully, more structured metadata from which we could engineer features out of the long text values, e.g. to create a new endpoint category such as biomarker, survival, etc.

  • "intpriorapproval": missing description. What do "[]" and "[0]" mean?

  • "drugCountryName": missing description, not in the script documentation, and every row is one long string of values separated by "|" (see screenshot below; a rough parsing sketch follows this list).
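
As a rough illustration of the post-processing we currently have to do ourselves, the Python sketch below splits a pipe-delimited drugCountryName cell into one country per row and derives a very crude endpoint category from the free text; the sample values and keyword list are my own assumptions, not part of the official data dictionary.

    import pandas as pd

    # Toy frame mimicking the raw export; the values are made up for illustration.
    df = pd.DataFrame({
        "strPrimaryEndpoint": ["Overall survival at 24 months", "Change in HbA1c (biomarker)"],
        "drugCountryName": ["US|Germany|Japan", "US|Canada"],
    })

    # One country per row instead of one "|"-separated string per trial.
    countries = df["drugCountryName"].str.split("|").explode().rename("country")

    # Very crude endpoint categorisation from free text (keyword list is a guess).
    def endpoint_category(text: str) -> str:
        text = text.lower()
        if "survival" in text:
            return "survival"
        if "biomarker" in text or "hba1c" in text:
            return "biomarker"
        return "other"

    df["endpoint_category"] = df["strPrimaryEndpoint"].map(endpoint_category)
    print(countries)
    print(df[["strPrimaryEndpoint", "endpoint_category"]])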

Copy/Paste into the workspace example:

Over 4 years ago

This link seems to point to an XML file that says "AccessDenied".

What is being evaluated during submission?

Over 4 years ago

Is the predict.R or predict.py script run in the queue when we submit a solution through SSH? If that's the case, how do we know where the test dataset is located in the evaluation environment?

Also, when the GitLab issue says the evaluation failed, is there a way to get the log file so we can debug what might have gone wrong?

Thanks.
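
For context, a common pattern in containerised evaluation setups (an assumption on my part, not something the organizers have confirmed for this challenge) is for predict.py to read the test-set location and output location from environment variables, with conventional fallback paths:

    import os
    import pandas as pd

    # Hypothetical convention: the evaluator exports TEST_DATA_PATH and OUTPUT_PATH;
    # both fallback paths are illustrative guesses, not confirmed by the organizers.
    test_path = os.environ.get("TEST_DATA_PATH", "/data/test.csv")
    out_path = os.environ.get("OUTPUT_PATH", "/data/predictions.csv")

    test_df = pd.read_csv(test_path)

    # ... load the trained model and score test_df here ...
    predictions = pd.DataFrame({"row_id": test_df.index, "prediction": 0.0})

    predictions.to_csv(out_path, index=False)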
