MEDICON 2026 Scientific Challenge

Multiclass Differential Diagnosis of Facial Pigmented Lesions

The objective of this challenge is to develop an Artificial Intelligence system for the automated classification of skin lesions located exclusively on the face and neck. The challenge requires a multimodal approach: models should integrate dermoscopic images with the provided clinical metadata (age, sex, specific localization). This is a multiclass classification problem with seven distinct diagnostic classes.

Task

7-class classification using dermoscopy images and metadata.

Submission

Runnable Python script that generates predictions.csv.

Evaluation

Official weighted score from Macro-AUC, Averaged Recall, Macro-F1-Score, Brier Score, and Accuracy.

  • Training data only for participants
  • Hidden test kept private
  • Organizer-side execution
  • Single output contract
Quick Start
  1. Download public package and prepare training pipeline.
  2. Train your model and prepare the package submission files.
  3. Submit a ZIP package; the organizers will run inference on hidden tests and score it.

How to participate

Participation has two components: challenge registration and submission, and the conference participation requirements.

1| Challenge registration and submission
  • Registration is only considered complete after sending the team registration email to medicon_challenge2026@dei.uc.pt using the template below.
  • Each team is allowed up to 3 submission attempts in total.
  • Download the public participant package and use the released training data to develop your model.
  • Prepare your ZIP package in the official structure and submit runnable inference code.
  • Organizers execute submitted code on private hidden test data and compute the official ranking.

Registration details
  • Use a single team identity across all submissions.
  • Keep contact details updated for organizer notifications.
  • Store submission metadata consistently (team name, submission ID, timestamp).
2| Conference submission
  • Register to the challenge through the website https://medicon2026.unisi.it/ with your team name, contact email, and team members.
  • Submit your paper to the MEDICON conference, Special Session: Scientific Challenge.
  • At least one team member should complete conference registration and attend according to MEDICON rules.
Conference details

Follow official conference instructions for format, submission template, and registration constraints.

Important notes
  • Hidden test data and the respective ground truth are never distributed to participants.
  • External data is allowed; teams should disclose external sources and preprocessing in the submission README.
  • Submissions must run offline in organizer infrastructure and comply with the official output contract.
  • Teams are responsible for legal, licensing, and ethical compliance of all datasets and resources used.

By registering, teams confirm agreement with challenge rules, data confidentiality obligations, and organizer-side evaluation procedures.

Task

Build an AI model for automated classification of face/neck pigmented lesions using dermoscopy images and metadata.

Inputs
  • Dermoscopy image
  • Clinical metadata (e.g., age, sex, lesion site, acquisition context)
Classes
7 (official class set listed below)
Output
Patient ID + 7 class probabilities + Predicted_class

Official classes

  1. Atypical nevus
  2. Lentigo maligna
  3. Lentigo maligna melanoma
  4. Pigmented actinic keratosis
  5. Seborrheic keratosis
  6. Seborrheic-Lichenoid keratosis
  7. Solar lentigo
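The label codes in trainData.csv are documented as {1, …, 7}. Assuming they follow the order of the class list above (an assumption — verify against dataset_description.txt), a lookup table can translate numeric labels into class names:

```python
# ASSUMPTION: label codes 1-7 follow the order of the official class list;
# check dataset_description.txt for the authoritative coding.
CLASS_NAMES = {
    1: "Atypical nevus",
    2: "Lentigo maligna",
    3: "Lentigo maligna melanoma",
    4: "Pigmented actinic keratosis",
    5: "Seborrheic keratosis",
    6: "Seborrheic-Lichenoid keratosis",
    7: "Solar lentigo",
}

def label_to_name(label: int) -> str:
    """Translate a numeric label code into its official class name."""
    return CLASS_NAMES[label]
```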

Dataset description

Participants receive training assets only. Hidden test assets remain private and are used exclusively for organizer-side scoring.

Public package files
01_public_package/data/
  images/
    train/
  trainData.csv
  dataset_description.txt

Hidden test assets (images/test/ and testData.csv) remain organizer-private.

Training labels: N = 777 (publicly available)
Hidden test: N = 335 (organizer-private evaluation set)
Metadata examples
  • id: unique sample identifier
  • Each image filename matches the id field in the corresponding CSV row.
  • Patient/lesion attributes such as age, sex, lesion site and acquisition metadata
  • label (training only): target diagnostic class = {1,2,3,4,5,6,7}
Metadata details

Use dataset_description.txt and trainData.csv as authoritative references for field names and class labels.
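Since each image filename matches the id field of its CSV row, a quick sanity check that every row has a matching image can catch packaging mistakes early. A sketch assuming an `id` column (field names should be verified against dataset_description.txt):

```python
import csv
from pathlib import Path

def check_ids_have_images(ids, image_filenames):
    """Return the ids with no matching image (filename stem == id)."""
    stems = {Path(name).stem for name in image_filenames}
    return [i for i in ids if i not in stems]

def load_training_index(csv_path, images_dir):
    """Read trainData.csv and flag rows whose image is missing on disk.

    ASSUMPTION: the CSV has an 'id' column, as described above.
    """
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    files = [p.name for p in Path(images_dir).iterdir()]
    missing = check_ids_have_images([r["id"] for r in rows], files)
    return rows, missing
```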

Validation checklist

  • Submission ZIP includes inference.py, requirements.txt, README.md, and model/.
  • CLI runs with python inference.py --input_dir ... --output_file ....
  • predictions.csv must contain N rows and 9 columns, where N = 335 on the hidden test.
  • Columns of predictions.csv are: ID, the probability of belonging to each of the seven classes, and Predicted_class.
  • All probabilities are in [0,1] and row sums equal 1.
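The checklist above can be automated before packaging. A minimal validator over already-parsed CSV content (it checks the contract's shape, id set, and probability constraints, not the exact column names):

```python
def validate_predictions(header, rows, expected_ids, tol=1e-6):
    """Check parsed predictions.csv content against the contract:
    9 columns (id + 7 probabilities + predicted class), one row per
    expected id, probabilities in [0, 1] summing to 1 per row."""
    errors = []
    if len(header) != 9:
        errors.append("expected 9 columns, got %d" % len(header))
    ids = [r[0] for r in rows]
    if len(set(ids)) != len(ids):
        errors.append("duplicate ids")
    if set(ids) != set(expected_ids):
        errors.append("row ids do not match the hidden-test ids")
    for r in rows:
        probs = [float(x) for x in r[1:8]]
        if any(p < 0.0 or p > 1.0 for p in probs):
            errors.append("%s: probability outside [0, 1]" % r[0])
        if abs(sum(probs) - 1.0) > tol:
            errors.append("%s: probabilities sum to %.6f" % (r[0], sum(probs)))
    return errors
```

Read the file with `csv.reader`, pass the header row and remaining rows, and fix anything reported before zipping the submission.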

Submission contract

Each team submits one ZIP package with code, dependencies, documentation, and model artifacts.

ZIP structure
teamname_submission.zip
inference.py
requirements.txt
README.md
model/...
Expected command
python inference.py --input_dir <test_images_dir> --output_file <predictions_csv>
Output contract
predictions.csv
id,
Probability_Atypical_nevus,
Probability_Lentigo_maligna,
Probability_Lentigo_maligna_melanoma,
Probability_Pigmented_actinic_keratosis,
Probability_Seborrheic_keratosis,
Probability_Seborrheic-Lichenoid_keratosis,
Probability_Solar_lentigo,
predicted_class
Output example (2-3 rows)
id,Probability_Atypical_nevus,Probability_Lentigo_maligna,Probability_Lentigo_maligna_melanoma,Probability_Pigmented_actinic_keratosis,Probability_Seborrheic_keratosis,Probability_Seborrheic-Lichenoid_keratosis,Probability_Solar_lentigo,predicted_class
img_0001,0.02,0.07,0.01,0.03,0.05,0.02,0.80,Solar lentigo
img_0002,0.10,0.66,0.06,0.04,0.03,0.02,0.09,Lentigo maligna
img_0003,0.58,0.14,0.05,0.04,0.07,0.03,0.09,Atypical nevus
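A sketch of producing rows in the contract's column order, assuming the `Probability_`-prefixed headers listed above and a predicted class equal to the argmax class name (underscores rendered as spaces, as in the example rows):

```python
import csv

CLASS_NAMES = [
    "Atypical_nevus", "Lentigo_maligna", "Lentigo_maligna_melanoma",
    "Pigmented_actinic_keratosis", "Seborrheic_keratosis",
    "Seborrheic-Lichenoid_keratosis", "Solar_lentigo",
]

def prediction_row(sample_id, probs):
    """One CSV row: id, 7 probabilities, then the name of the
    highest-probability class (underscores shown as spaces)."""
    best = CLASS_NAMES[max(range(7), key=lambda i: probs[i])]
    return [sample_id] + list(probs) + [best.replace("_", " ")]

def write_predictions(path, results):
    """Write (sample_id, probabilities) pairs in contract column order."""
    header = ["id"] + ["Probability_" + c for c in CLASS_NAMES] + ["predicted_class"]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        for sample_id, probs in results:
            writer.writerow(prediction_row(sample_id, probs))
```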
Execution constraints
  • Organizer-side local execution in isolated environment.
  • No internet access during execution.
  • Timeout and resource limits may be enforced.
  • Dependencies must be declared in requirements.txt.
  • README.md must briefly describe how to run inference in the organizers’ environment, including required dependencies, the execution command, and the expected file structure.
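A minimal inference.py skeleton matching the expected command. The `predict` function here is a uniform-probability placeholder; a real submission would replace it with model loading from model/ and per-image inference:

```python
import argparse
import csv
from pathlib import Path

CLASS_COLUMNS = [
    "Probability_Atypical_nevus", "Probability_Lentigo_maligna",
    "Probability_Lentigo_maligna_melanoma",
    "Probability_Pigmented_actinic_keratosis",
    "Probability_Seborrheic_keratosis",
    "Probability_Seborrheic-Lichenoid_keratosis",
    "Probability_Solar_lentigo",
]

def predict(image_path):
    """PLACEHOLDER: replace with real model inference loaded from model/.
    Returns 7 class probabilities that sum to 1."""
    return [1.0 / 7.0] * 7

def main():
    # Matches: python inference.py --input_dir <dir> --output_file <csv>
    parser = argparse.ArgumentParser()
    parser.add_argument("--input_dir", required=True)
    parser.add_argument("--output_file", required=True)
    args = parser.parse_args()

    with open(args.output_file, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["id"] + CLASS_COLUMNS + ["predicted_class"])
        for path in sorted(Path(args.input_dir).glob("*")):
            probs = predict(path)
            best = CLASS_COLUMNS[probs.index(max(probs))]
            name = best.removeprefix("Probability_").replace("_", " ")
            writer.writerow([path.stem] + probs + [name])

if __name__ == "__main__":
    main()
```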

Example workflow

Organizer execution sequence
  1. Receive and unzip submission package
  2. Run inference.py on private hidden test images
  3. Validate predictions.csv format and IDs
  4. Compute official metrics and global score
  5. Update participant results and final leaderboard
python inference.py --input_dir <hidden_test_images> --output_file predictions.csv

Evaluation criteria

Official ranking uses organizer-computed metrics on the hidden test set.

Metrics
  • Macro-AUC
  • Averaged Recall
  • Macro F1-Score
  • Brier Score
  • Accuracy
Official score
Weighted combination: Macro-AUC × 10, Averaged Recall × 9, Macro F1-Score × 8, Brier Score × 6, Accuracy × 5.
S_total = (100 / 38) * (
  10 * AUC + 9 * AvgRecall + 8 * F1 +
  6 * (1 - BS/2) + 5 * Acc
)
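For self-evaluation, the formula can be implemented directly. The (1 - BS/2) term reflects that the multiclass Brier score ranges over [0, 2], so it is rescaled to [0, 1] with higher values better; the weights sum to 38, giving a 0-100 scale:

```python
def brier_score(y_true_onehot, y_prob):
    """Multiclass Brier score: mean over samples of the squared
    differences between one-hot targets and predicted probabilities,
    summed over classes. Ranges over [0, 2]; lower is better."""
    n = len(y_prob)
    return sum(
        sum((p - t) ** 2 for p, t in zip(row_p, row_t))
        for row_t, row_p in zip(y_true_onehot, y_prob)
    ) / n

def official_score(auc, avg_recall, f1, brier, acc):
    """Weighted challenge score; perfect metrics give 100."""
    return (100.0 / 38.0) * (
        10 * auc + 9 * avg_recall + 8 * f1 + 6 * (1 - brier / 2) + 5 * acc
    )
```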
Validation and disqualification
  • One output row per hidden-test ID is mandatory.
  • No duplicate IDs and no unknown IDs are allowed.
  • Class probabilities must be numeric, within [0,1], and sum to 1 per row.
  • Invalid format, execution failure, or contract mismatch may lead to disqualification.

Tie-break order: lower Brier Score, then higher Macro-AUC, then earlier submission timestamp.
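The tie-break order can be expressed as a single sort key (the dictionary field names here are illustrative, not part of the official contract):

```python
def rank_leaderboard(entries):
    """Sort entries by official score (descending), breaking ties by
    lower Brier score, then higher Macro-AUC, then earlier timestamp."""
    return sorted(
        entries,
        key=lambda e: (-e["score"], e["brier"], -e["macro_auc"], e["timestamp"]),
    )
```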

Important dates

  • Challenge opens: 15/03/2026
  • Challenge deadline: 15/05/2026
  • Full paper submission (challenge papers): until 22/05/2026
  • Full paper final decision: 27/05/2026
  • End of early-bird registration: 31/05/2026

Contact information

For registration and challenge questions, email medicon_challenge2026@dei.uc.pt.
Frequently asked questions (FAQ)

  • Do teams receive test data? No. Only training data is distributed.
  • Do teams submit predictions or code? Code + model artifacts. Organizers run inference privately.
  • Can submissions require internet? No. Execution is offline.
  • Can I change output format? No. Output must match official predictions.csv contract.
  • Are external datasets allowed? Yes. External data is allowed; teams should document sources in README.
  • How many submission attempts are allowed? Up to 3 attempts per team.
  • How are score ties resolved? Lower Brier Score, then higher Macro-AUC, then earlier submission timestamp.

Organizers

Official organizing entities