A Multi-Task Method for Immunofixation Electrophoresis Image Classification

Application 🔬
This project provides a tool for classifying immunofixation electrophoresis (IFE) images, aiding medical diagnostics in detecting protein abnormalities.

Contributors 👥
Developed by researchers dedicated to advancing medical image analysis through machine learning.

Contact ✉️
For questions or support, please raise an issue on the GitHub repository.

Free Download 💻
Access the code and sample data on GitHub.

Progress 📈
The model has been implemented and evaluated; performance results are detailed in our MICCAI-2023 paper.

Resources 🔗
View the project on GitHub for more information.

This project offers a PyTorch implementation of our MICCAI-2023 paper, “A Multi-Task Method for Immunofixation Electrophoresis Image Classification.” It focuses on classifying immunofixation electrophoresis (IFE) images using a multi-task learning approach.

Key Features

  • Multi-Task Learning: Predicts multiple labels at once for comprehensive analysis.
  • User-Friendly: Simple code structure for easy use and customization.
  • Flexible Labeling: Supports different label types derived from the main label.
  • PyTorch-Based: Built with PyTorch for flexibility and high performance.
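
In practice, this kind of multi-task setup is usually a shared image encoder feeding one classification head per label. The sketch below is an illustrative PyTorch approximation under assumed choices (a ResNet-18 backbone and a simple summed cross-entropy loss), not the exact architecture or loss from the paper:

```python
import torch
import torch.nn as nn
from torchvision import models


class MultiTaskIFENet(nn.Module):
    """Shared backbone with one head per label: main (9), heavy chain (4), light chain (3).

    Illustrative only: the backbone and head sizes are assumptions, not the
    architecture reported in the paper.
    """

    def __init__(self, n_main=9, n_heavy=4, n_light=3):
        super().__init__()
        backbone = models.resnet18(weights=None)  # assumed backbone (torchvision >= 0.13 API)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()               # keep the pooled features only
        self.backbone = backbone
        self.head_main = nn.Linear(feat_dim, n_main)    # "label"
        self.head_heavy = nn.Linear(feat_dim, n_heavy)  # "label1"
        self.head_light = nn.Linear(feat_dim, n_light)  # "label2"

    def forward(self, x):
        feats = self.backbone(x)
        return self.head_main(feats), self.head_heavy(feats), self.head_light(feats)


def multi_task_loss(outputs, targets):
    """Sum of per-task cross-entropies; task weighting is a design choice."""
    return sum(nn.functional.cross_entropy(o, t) for o, t in zip(outputs, targets))
```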

Getting Started

Prepare your IFE data with the following structure (refer to the example data in the repository):

  • ./data (IFE images)
  • ./label (CSV file with the label annotations described below)
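
As a rough illustration of how these folders might be consumed, the dataset sketch below assumes one image file per sample and a CSV whose filename, image-path column, and label columns (label, label1, label2, described below) are guesses to be adapted to the example data in the repository:

```python
import os

import pandas as pd
from PIL import Image
from torch.utils.data import Dataset


class IFEDataset(Dataset):
    """Minimal dataset sketch: images in ./data, annotations in a CSV under ./label.

    The CSV filename and column names are assumptions; adapt them to the
    example data shipped with the repository.
    """

    def __init__(self, image_dir="./data", csv_path="./label/labels.csv", transform=None):
        self.image_dir = image_dir
        self.table = pd.read_csv(csv_path)
        self.transform = transform

    def __len__(self):
        return len(self.table)

    def __getitem__(self, idx):
        row = self.table.iloc[idx]
        image = Image.open(os.path.join(self.image_dir, row["image"])).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, int(row["label"]), int(row["label1"]), int(row["label2"])
```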

There are three types of labels:

  • Main Label (“label”): Nine classes represented by numbers 0-8:
    • Non-M, κ, λ, IgG-κ, IgG-λ, IgA-κ, IgA-λ, IgM-κ, IgM-λ.
  • Label 1 (“label1”): Four classes:
    • Neg, G, A, M (co-location between the ELP lane and the heavy-chain lanes).
  • Label 2 (“label2”): Three classes:
    • Neg, κ, λ (co-location between the ELP lane and the light-chain lanes).

For example, a sample labeled “IgG-κ” has heavy-chain label “G” (label1) and light-chain label “κ” (label2).
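
Because the auxiliary labels are determined by the main label, they can be derived mechanically. The mapping below is an illustrative assumption based on the class lists above, not necessarily the repository's exact integer encoding:

```python
# Derive label1 (heavy chain) and label2 (light chain) from the 9-class main label.
# The integer encodings follow the class orderings listed above and are assumptions;
# check ./label and the code in ./code for the encoding actually used.

MAIN_CLASSES = ["Non-M", "κ", "λ", "IgG-κ", "IgG-λ", "IgA-κ", "IgA-λ", "IgM-κ", "IgM-λ"]
HEAVY_CLASSES = ["Neg", "G", "A", "M"]   # label1
LIGHT_CLASSES = ["Neg", "κ", "λ"]        # label2


def derive_auxiliary_labels(main_label):
    """Map a main-label index (0-8) to a (label1, label2) index pair."""
    name = MAIN_CLASSES[main_label]
    heavy = next((h for h in ("G", "A", "M") if f"Ig{h}" in name), "Neg")
    light = "κ" if "κ" in name else ("λ" if "λ" in name else "Neg")
    return HEAVY_CLASSES.index(heavy), LIGHT_CLASSES.index(light)


# Example: class 3 ("IgG-κ") maps to heavy-chain G and light-chain κ.
assert derive_auxiliary_labels(3) == (1, 1)
```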

Dependencies

All required software dependencies are listed in dependencies.txt; install them before running the code (for example, with pip install -r dependencies.txt if the file follows the pip requirements format).

Training and Evaluation

To train the model, navigate to the ./code directory and run:

```bash
python main.py
```

This will train the model, evaluate it, and save the model’s state dictionary.
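
For intuition, a single joint training epoch looks roughly like the sketch below, which reuses the illustrative model, loss, and dataset from earlier sections; the optimizer, learning rate, batch size, and checkpoint filename are assumptions, not values from the repository:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import transforms

# Illustrative joint training loop over the three labels; hyperparameters are assumed.
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_loader = DataLoader(IFEDataset(transform=transform), batch_size=16, shuffle=True)

model = MultiTaskIFENet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, y_main, y_heavy, y_light in train_loader:
    outputs = model(images)                                    # three logit tensors
    loss = multi_task_loss(outputs, (y_main, y_heavy, y_light))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

torch.save(model.state_dict(), "model_state.pth")              # hypothetical checkpoint name
```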

To evaluate a pre-trained model without retraining, run:

```bash
python main.py --no_train
```

The script will load the saved state dictionary and evaluate the model on your test set.
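
Conceptually, the evaluation-only path restores the saved state dictionary and runs a forward pass over the test split, along the lines of the sketch below (the checkpoint filename, transform, and accuracy metric are assumptions):

```python
import torch
from torch.utils.data import DataLoader
from torchvision import transforms

# Sketch of evaluation with a restored state dict; reuses the illustrative
# MultiTaskIFENet and IFEDataset defined earlier. Filenames are assumed.
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
test_loader = DataLoader(IFEDataset(transform=transform), batch_size=16, shuffle=False)

model = MultiTaskIFENet()
model.load_state_dict(torch.load("model_state.pth", map_location="cpu"))
model.eval()

correct = total = 0
with torch.no_grad():
    for images, y_main, _, _ in test_loader:
        logits_main, _, _ = model(images)
        correct += (logits_main.argmax(dim=1) == y_main).sum().item()
        total += y_main.numel()
print(f"Main-label accuracy: {correct / total:.3f}")
```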

Dataset

Due to privacy concerns, we cannot share the original IFE dataset used in the paper. However, some synthetic images and their labels are available in ./data and ./label.
