Foundation Model Prompting for
Medical Image Classification Challenge 2023
News
[2023.09.01] We recommend that participants read the submission instructions on the submission page carefully before submitting final evaluation results. Each participant has only THREE chances to submit.
[2023.08.28] To keep the evaluation phase fair, all participants must follow the official few-shot split (the data file lists can be found here), which is also detailed in the submission instructions.
[2023.08.25] The test phase is open; the validation set annotations and test set images have been released.
[2023.07.15] Online evaluation submission is open; the submission tutorial and example can be found here.
[2023.06.25] The validation phase submission pipeline is being debugged; we will release the submission interface, along with the corresponding tutorial and examples, as soon as possible.
[2023.05.26] The MedFM 2023 website is now fully open. Please check the timeline.
[2023.05.25] MedFM 2023 has been accepted as a NeurIPS'2023 Competition! More details will be announced soon.
Timeline
1. Release of training data and validation data: May 26th (12:00 AM GMT), 2023;
2. Validation phase submission opens: July 15th (12:00 AM GMT), 2023;
3. Evaluation phase submission opens: Aug 25th (12:00 AM GMT), 2023;
4. Submission closing date: Sep 15th (12:00 AM GMT), 2023;
5. Short paper and source code submission deadline: Sep 15th (12:00 AM GMT), 2023;
6. NeurIPS'2023 workshop: Dec 15th (12:00 AM GMT), 2023.
How to Participate
Note:
- Participants are not allowed to use external medical image datasets during this challenge. Foundation models from other areas, such as natural images and natural language processing, are allowed.
- In our task setting, the few-shot number is counted by the number of patients rather than the number of images. Participants may only use the corresponding few-shot samples from the training set; using the full training set is not allowed.
- When making the final submission, all participants should provide links to the pre-trained models they used in this challenge in their methodology paper.
- Please do not upload your Docker image to Docker Hub during the challenge.
- Note that some images from the same patients exist in both the training and validation sets; this is acceptable because our main task is few-shot learning. Validation submission results are important but are not counted in the final testing submission. The reserved testing set is different from the public training and validation sets, so we urge all participants to treat validation submissions carefully.
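The patient-level counting rule above can be sketched as follows. This is only an illustration of how the few-shot budget works; for actual submissions you must use the official few-shot split files, not your own selection. The tuple format and file names here are assumptions, not the challenge's data format.

```python
from collections import defaultdict

def select_few_shot(records, n_shot):
    """Select all images belonging to the first n_shot patients.

    `records` is a list of (patient_id, image_path) tuples. The
    few-shot budget counts patients, not images, so every image of a
    chosen patient is kept.
    """
    by_patient = defaultdict(list)
    for patient_id, image_path in records:
        by_patient[patient_id].append(image_path)
    chosen = sorted(by_patient)[:n_shot]
    return [img for pid in chosen for img in by_patient[pid]]

# Patient "p1" has two images, so a 1-shot budget still yields both.
records = [("p1", "a.png"), ("p1", "b.png"), ("p2", "c.png")]
print(select_few_shot(records, 1))  # ['a.png', 'b.png']
```

The key point is that adding a patient to the support set brings in all of that patient's images at once, which is why 1-shot here can mean more than one training image.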
Stage 1. Join the competition and download the data
- Register on the website and verify your account.
- Click the green 'Join' button to participate in the Challenge. Please make sure that your Grand Challenge profile is complete (e.g., Name, Institution, Department, and Location).
Stage 2. Develop your model and make validation submissions
- We provide an official baseline on GitHub; you can follow it to go through the whole process of the MedFM Challenge.
- In the validation submission phase, participants only need to submit CSV files containing predictions on the validation set of each dataset under the 1/5/10-shot settings. We offer one submission opportunity per day throughout the Validation Phase.
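A minimal sketch of producing such a prediction CSV is below. The column layout, file name, and class names are hypothetical; the exact format required is defined in the submission instructions on the challenge page.

```python
import csv

def write_predictions(path, image_ids, probs, class_names):
    """Write per-image class probabilities to a CSV file.

    One row per image: the image identifier followed by one
    probability column per class. This layout is illustrative only.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image_id"] + list(class_names))
        for image_id, p in zip(image_ids, probs):
            writer.writerow([image_id] + [f"{x:.4f}" for x in p])

write_predictions(
    "chestdr_1-shot_validation.csv",   # hypothetical file name
    ["img_001.png", "img_002.png"],
    [[0.91, 0.09], [0.20, 0.80]],
    ["disease", "normal"],             # hypothetical class names
)
```

Writing probabilities rather than hard labels is the safer default, since ranking-based metrics such as AUC need scores; check the submission instructions for what the evaluator actually expects.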
Stage 3. Make testing submission
- To avoid overfitting the reserved dataset, we offer only three successful submission opportunities throughout the Testing Phase. In this phase, participants train their models on the few-shot samples of the public MedFMC dataset and evaluate performance on the reserved MedFMC testing dataset (see Important Dates). More details can be found here.
Awards
1. Monetary awards for the top-3 winners (with the best ranking over all three tasks): 1st place: $1,000; 2nd place: $600; 3rd place: $400.
2. Outstanding winners with groundbreaking solutions will be invited to submit their work to our special issue in the prestigious Medical Image Analysis Journal.
3. The top-10 winners will be invited to submit their groundbreaking solutions (as coauthors) in a summarization paper.
4. Student participants in the winning teams will be considered for admission and scholarships at the organizers' institutes.