F.A.Q.¶
This page contains the answers to Frequently Asked Questions (F.A.Q.) and will be populated gradually.
General Questions¶
Q1: Is the direction information stored in the NIfTI files physically accurate?
A1: No, it isn't, but all the images selected for the Final Test Phase will share the same orientation and voxel spacing as the training set.
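Since the stored direction information is not physically accurate, you may still want to verify that all your volumes share a consistent orientation before training. The sketch below is purely illustrative: it hand-rolls an axis-code helper so it has no dependencies, but in practice a library such as nibabel provides this functionality (e.g. `nib.aff2axcodes`).

```python
# Minimal sketch: derive anatomical axis codes (e.g. R/A/S) from a
# NIfTI-style 4x4 affine. Illustrative only; use nibabel in real code.

def axis_codes(affine):
    """Return one orientation code per voxel axis from a 4x4 affine."""
    labels = (("L", "R"), ("P", "A"), ("I", "S"))  # negative/positive ends
    codes = []
    for col in range(3):
        column = [affine[row][col] for row in range(3)]
        # Dominant world axis for this voxel axis:
        world = max(range(3), key=lambda r: abs(column[r]))
        codes.append(labels[world][1] if column[world] > 0 else labels[world][0])
    return tuple(codes)

identity = [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]
print(axis_codes(identity))  # -> ('R', 'A', 'S')
```

Comparing the axis codes of every training and test volume is a quick sanity check that the shared orientation assumption holds for your local copies of the data.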
Q2: Will the paper describing my challenge submission be included in the proceedings?
A2: Yes. The challenge is part of the ODIN Workshop, taking place in South Korea during MICCAI 2025. The workshop will have its own post-event Springer proceedings, which will include both papers submitted to the workshop and papers describing challenge submissions. All papers will undergo a review process, and accepted ones will be included in the proceedings. You can find more details on the submission webpage. Additionally, all challenge participants will be invited to co-author a journal publication in line with the policy described in the structured submission document.
Q3: Are we allowed to use any additional data to train deep learning models? For example: (a) architectures pre-trained on public datasets such as ImageNet, or (b) additional CBCT images obtained from our own sources?
A3: You are allowed to use any additional publicly available data sources, with the following conditions:
(a) Pre-trained architectures are permitted as long as their weights are publicly available or were trained using publicly accessible data.
(b) Data from your own research group may be used only if it is publicly available to the research community.
Task 1: Fast Multi-class Segmentation¶
Q1: Data starting with "P" seems to be missing information related to the upper part of the oral cavity, including the upper teeth, maxilla, maxillary sinus, etc. Could you please confirm if this is indeed the case for the dataset?
A1: Yes, that is correct: handling such different fields of view is part of the challenge. The training dataset can be divided into "Set A" (samples beginning with the letter "P"), "Set B" (samples beginning with the letter "F"), and "Set C" (samples beginning with the letter "S"). Sets A and B were acquired with the same machine, while Set C was acquired with a different one. The field of view of Set C lies between those of the other two. For more details, please refer to Ditto. Test data are guaranteed to have the same field of view as the "F" cases.
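If you want to stratify your training or validation splits by acquisition setting, the prefix rule above can be applied directly to the case identifiers. A minimal sketch, assuming case IDs start with "P", "F", or "S" as described (the set names are the ones used in this answer):

```python
# Group case IDs into Set A/B/C by their first letter, following the
# convention described above: "P" -> Set A, "F" -> Set B, "S" -> Set C.
from collections import defaultdict

def split_by_prefix(case_ids):
    sets = {"P": "Set A", "F": "Set B", "S": "Set C"}
    groups = defaultdict(list)
    for cid in case_ids:
        groups[sets.get(cid[0], "Unknown")].append(cid)
    return dict(groups)

cases = ["P001", "F010", "S003", "P007"]
print(split_by_prefix(cases))
# -> {'Set A': ['P001', 'P007'], 'Set B': ['F010'], 'Set C': ['S003']}
```

Because the test data share the "F" field of view, keeping Set B cases in your validation split is a reasonable way to estimate test-time performance.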
Q2: As mentioned on the dataset homepage, the "Dataset classes" are listed as 77. However, in the dataset.json file, I find a total of more than 100 classes.
A2: There are 77 classes, as specified on our webpage. The numbering in the dataset.json file goes up to 148, but if you look at the classes, you'll see that many of them are empty. This choice keeps the tooth identifiers aligned with the standard FDI dental notation. Pulp cavities have the class ID of the corresponding tooth plus 100. Submitted algorithms should predict the 77 classes only.
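The "+100" convention above makes it easy to convert between tooth and pulp-cavity class IDs programmatically. A small illustrative sketch (the function names are hypothetical, not from the challenge code):

```python
# Illustrative helpers, assuming the convention stated above: a pulp cavity's
# class ID equals the corresponding tooth's FDI-aligned ID plus 100.

def pulp_to_tooth(class_id):
    """Map a pulp-cavity class ID back to its tooth class ID."""
    return class_id - 100 if class_id > 100 else class_id

def tooth_to_pulp(class_id):
    """Map a tooth class ID to its pulp-cavity class ID."""
    return class_id + 100

print(pulp_to_tooth(111))  # -> 11 (pulp cavity of tooth FDI 11)
print(tooth_to_pulp(48))   # -> 148 (matches the maximum ID in dataset.json)
```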
Task 2: IAC Interactive Segmentation¶
Q1: How are the simulated clicks generated?
A1: The simulated clicks are generated using the script `simulate_clicks.py`, which is available in our official GitHub repository. You can review the code there to see the implementation details and customize it if needed.
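For the authoritative logic, always refer to `simulate_clicks.py` in the repository. As a rough illustration of the general idea behind click simulation (not the challenge's actual implementation), one common strategy is to place a foreground click near the center of the target mask:

```python
# Simplified, illustrative click simulation: return the foreground pixel
# closest to the foreground centroid of a binary mask. The challenge's real
# logic lives in simulate_clicks.py; this sketch only conveys the concept.

def simulate_click(mask):
    """mask: 2D list of 0/1 values. Returns (row, col) or None if empty."""
    fg = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    if not fg:
        return None
    cr = sum(r for r, _ in fg) / len(fg)  # centroid row
    cc = sum(c for _, c in fg) / len(fg)  # centroid column
    # Snap to the nearest actual foreground pixel:
    return min(fg, key=lambda p: (p[0] - cr) ** 2 + (p[1] - cc) ** 2)

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(simulate_click(mask))  # -> (1, 1)
```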