How to participate
Participation involves both challenge registration/submission and the MEDICON conference participation requirements.
- Registration is only considered complete after sending the team registration email to medicon_challenge2026@dei.uc.pt using the template below.
- Each team is allowed up to 3 submission attempts in total.
- Download the public participant package and use the released training data to develop your model.
- Prepare your ZIP package in the official structure and submit runnable inference code.
- Organizers execute submitted code on private hidden test data and compute the official ranking.
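The packaging step can be scripted so the ZIP always contains the required files. A minimal sketch using only the Python standard library (the folder layout follows the official submission structure; the team name argument is a placeholder):

```python
import shutil
from pathlib import Path

def package_submission(src_dir: str, team_name: str) -> str:
    """Zip a submission folder containing inference.py, requirements.txt,
    README.md, and model/, producing <team_name>_submission.zip."""
    src = Path(src_dir)
    required = ["inference.py", "requirements.txt", "README.md", "model"]
    missing = [name for name in required if not (src / name).exists()]
    if missing:
        raise FileNotFoundError(f"Submission incomplete, missing: {missing}")
    # Creates the archive in the current working directory and returns its path.
    return shutil.make_archive(f"{team_name}_submission", "zip", root_dir=src)
```

Running the check before zipping catches a missing requirements.txt or model/ folder locally rather than during organizer-side execution.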
- Use a single team identity across all submissions.
- Keep contact details updated for organizer notifications.
- Store submission metadata consistently (team name, submission ID, timestamp).
- Register for the challenge through the website https://medicon2026.unisi.it/ with team name, contact email, and team members.
- Submit to the MEDICON conference, Special Session: Scientific Challenge.
- At least one team member should complete conference registration and attend according to MEDICON rules.
Follow official conference instructions for format, submission template, and registration constraints.
- Hidden test data and the respective ground truth are never distributed to participants.
- External data is allowed; teams should disclose external sources and preprocessing in submission README.
- Submissions must run offline in organizer infrastructure and comply with the official output contract.
- Teams are responsible for legal, licensing, and ethical compliance of all datasets and resources used.
By registering, teams confirm agreement with challenge rules, data confidentiality obligations, and organizer-side evaluation procedures.
Task
Build an AI model for automated classification of face/neck pigmented lesions using dermoscopy images and metadata.
- Dermoscopy image
- Clinical metadata (e.g., age, sex, lesion site, acquisition context)
Official classes
- Atypical nevus
- Lentigo maligna
- Lentigo maligna melanoma
- Pigmented actinic keratosis
- Seborrheic keratosis
- Seborrheic-Lichenoid keratosis
- Solar lentigo
Dataset description
Participants receive training assets only. Hidden test assets remain private and are used exclusively for organizer-side scoring.
01_public_package/data/
    images/
        train/
    trainData.csv
    dataset_description.txt
Hidden test assets (images/test/ and testData.csv) remain organizer-private.
- id: unique sample identifier. Each image filename matches the id field in the corresponding CSV row.
- Patient/lesion attributes: age, sex, lesion site, and acquisition metadata.
- label (training only): target diagnostic class in {1, 2, 3, 4, 5, 6, 7}.
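Assuming the integer labels 1-7 follow the order of the official class list above (verify against dataset_description.txt, which is authoritative), a lookup table might look like:

```python
# Hypothetical mapping; confirm the actual label order in dataset_description.txt.
LABEL_TO_CLASS = {
    1: "Atypical nevus",
    2: "Lentigo maligna",
    3: "Lentigo maligna melanoma",
    4: "Pigmented actinic keratosis",
    5: "Seborrheic keratosis",
    6: "Seborrheic-Lichenoid keratosis",
    7: "Solar lentigo",
}

def class_name(label: int) -> str:
    """Map an integer label from trainData.csv to its class name."""
    return LABEL_TO_CLASS[label]
```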
Use dataset_description.txt and trainData.csv as authoritative references for field names and class labels.
Validation checklist
- Submission ZIP includes inference.py, requirements.txt, README.md, and model/.
- CLI runs with python inference.py --input_dir ... --output_file ...
- predictions.csv must be a matrix with shape (N, 9), where N = 335 on the hidden test.
- Columns of predictions.csv are: id, the probability of belonging to each class, and predicted_class.
- All probabilities are in [0, 1] and row sums equal 1.
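A pre-submission self-check against this checklist can be automated. A minimal sketch using only the Python standard library, with the column layout of the output contract (id, seven class probabilities, predicted_class):

```python
import csv

def validate_predictions(csv_path: str, expected_n: int = 335) -> None:
    """Check a predictions.csv against the challenge output contract:
    9 columns, one row per ID, no duplicate IDs, probabilities in
    [0, 1] summing to 1 per row. Raises AssertionError on violation."""
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))
    header, body = rows[0], rows[1:]
    assert len(header) == 9, f"expected 9 columns, got {len(header)}"
    assert len(body) == expected_n, f"expected {expected_n} rows, got {len(body)}"
    ids = [row[0] for row in body]
    assert len(set(ids)) == len(ids), "duplicate IDs found"
    for row in body:
        probs = [float(p) for p in row[1:8]]
        assert all(0.0 <= p <= 1.0 for p in probs), f"probability out of range for {row[0]}"
        assert abs(sum(probs) - 1.0) < 1e-6, f"row {row[0]} sums to {sum(probs):.6f}"
```

Running this on a dry-run output catches format errors before they cost one of the three submission attempts.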
Submission contract
Each team submits one ZIP package with code, dependencies, documentation, and model artifacts.
teamname_submission.zip
    inference.py
    requirements.txt
    README.md
    model/...

Expected command
python inference.py --input_dir <test_images_dir> --output_file <predictions_csv>

Output contract
predictions.csv columns:
id, Probability_Atypical_nevus, Probability_Lentigo_maligna, Probability_Lentigo_maligna_melanoma, Probability_Pigmented_actinic_keratosis, Probability_Seborrheic_keratosis, Probability_Seborrheic-Lichenoid_keratosis, Probability_Solar_lentigo, predicted_class
Output example (2-3 rows)
id,Probability_Atypical_nevus,Probability_Lentigo_maligna,Probability_Lentigo_maligna_melanoma,Probability_Pigmented_actinic_keratosis,Probability_Seborrheic_keratosis,Probability_Seborrheic-Lichenoid_keratosis,Probability_Solar_lentigo,predicted_class
img_0001,0.02,0.07,0.01,0.03,0.05,0.02,0.80,Solar lentigo
img_0002,0.10,0.66,0.06,0.04,0.03,0.02,0.09,Lentigo maligna
img_0003,0.58,0.14,0.05,0.04,0.07,0.03,0.09,Atypical nevus
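A skeleton inference.py matching the expected CLI and output contract could look like the following. The predict function here is a placeholder emitting random probabilities; each team replaces it with real inference using weights loaded from model/:

```python
import argparse
import csv
import random
from pathlib import Path

CLASSES = ["Atypical_nevus", "Lentigo_maligna", "Lentigo_maligna_melanoma",
           "Pigmented_actinic_keratosis", "Seborrheic_keratosis",
           "Seborrheic-Lichenoid_keratosis", "Solar_lentigo"]

def predict(image_path: Path) -> list:
    """Placeholder: replace with real model inference loaded from model/."""
    probs = [random.random() for _ in CLASSES]
    total = sum(probs)
    return [p / total for p in probs]  # normalized so the row sums to 1

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--input_dir", required=True)
    parser.add_argument("--output_file", required=True)
    args = parser.parse_args()

    header = ["id"] + [f"Probability_{c}" for c in CLASSES] + ["predicted_class"]
    with open(args.output_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        for img in sorted(Path(args.input_dir).iterdir()):
            probs = predict(img)
            # Human-readable class name for predicted_class, as in the example above.
            best = CLASSES[probs.index(max(probs))].replace("_", " ")
            writer.writerow([img.stem] + probs + [best])

if __name__ == "__main__":
    import sys
    if len(sys.argv) > 1:  # run the CLI only when arguments are supplied
        main()
```

Writing the probabilities at full precision (rather than rounding) keeps row sums at 1 as the contract requires.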
- Organizer-side local execution in isolated environment.
- No internet access during execution.
- Timeout and resource limits may be enforced.
- Dependencies must be declared in requirements.txt.
- README.md: briefly describe how to run inference in the organizers' environment, including required dependencies, execution command, and expected file structure.
Example workflow
- Receive and unzip submission package
- Run inference.py on private hidden test images
- Validate predictions.csv format and IDs
- Compute official metrics and global score
- Update participant results and final leaderboard
python inference.py --input_dir <hidden_test_images> --output_file predictions.csv
Evaluation criteria
Official ranking uses organizer-computed metrics on the hidden test set.
- Macro-AUC
- Averaged Recall
- Macro F1-Score
- Brier Score
- Accuracy
S_total = (100 / 38) * ( 10 * AUC + 9 * AvgRecall + 8 * F1 + 6 * (1 - BS/2) + 5 * Acc )
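Since the weights 10 + 9 + 8 + 6 + 5 sum to 38, the 100/38 factor scales a perfect result to 100. A direct transcription of the formula (metric values are assumed already computed on the hidden test; the multiclass Brier score ranges over [0, 2], hence the BS/2 term):

```python
def global_score(auc: float, avg_recall: float, f1: float,
                 brier: float, acc: float) -> float:
    """S_total per the official formula. AUC, recall, F1, and accuracy
    are in [0, 1] (higher is better); Brier score is in [0, 2] (lower
    is better), so (1 - BS/2) converts it to a higher-is-better term."""
    return (100 / 38) * (10 * auc + 9 * avg_recall + 8 * f1
                         + 6 * (1 - brier / 2) + 5 * acc)
```

A perfect submission (all metrics 1, Brier score 0) scores exactly 100; the worst case (all metrics 0, Brier score 2) scores 0.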
- One output row per hidden-test ID is mandatory.
- No duplicate IDs and no unknown IDs are allowed.
- Class probabilities must be numeric, within [0,1], and sum to 1 per row.
- Invalid format, execution failure, or contract mismatch may lead to disqualification.
Tie-break order: lower Brier Score, then higher Macro-AUC, then earlier submission timestamp.
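The tie-break order maps naturally onto a lexicographic sort key. A sketch, assuming each leaderboard entry is a dict with hypothetical field names s_total, brier, macro_auc, and timestamp:

```python
def rank(entries: list) -> list:
    """Sort leaderboard entries: higher S_total first; ties broken by
    lower Brier score, then higher Macro-AUC, then earlier timestamp."""
    return sorted(entries, key=lambda e: (-e["s_total"], e["brier"],
                                          -e["macro_auc"], e["timestamp"]))
```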
Important dates
- Challenge opens: 15/03/2026
- Challenge deadline: 15/05/2026
- Full paper submission (challenge papers): until 22/05/2026
- Full paper final decision: 27/05/2026
- End of early-bird registration: 31/05/2026
Contact information
- Conference website: MEDICON 2026 Official Website
- Paper submission portal: MEDICON 2026 Paper Submission
- Challenge registration/contact details are published in official MEDICON communication channels.
Frequently asked questions (FAQ)
- Do teams receive test data? No. Only training data is distributed.
- Do teams submit predictions or code? Code + model artifacts. Organizers run inference privately.
- Can submissions require internet? No. Execution is offline.
- Can I change output format? No. Output must match the official predictions.csv contract.
- Are external datasets allowed? Yes; teams should document sources in the README.
- How many submission attempts are allowed? Up to 3 attempts per team.
- How are score ties resolved? Lower Brier Score, then higher Macro-AUC, then earlier submission timestamp.
Organizers
Official organizing entities