Workshop@EMBC-2025#

2:30 PM, 16 July | B3 M7-8, Bella Center, Copenhagen, Denmark

Welcome#

Open Biomedical Multimodal AI Research#

This workshop offers practical, hands-on tutorials in open biomedical multimodal AI research, introducing machine learning techniques for leveraging multimodal data in biomedical applications. It brings together three core themes:

  1. Skills in multimodal learning: Developing knowledge and practical skills in multimodal AI for biomedical applications. Specifically, the tutorials cover three techniques for leveraging multimodal data: regularization, fusion (intermediate and late), and interaction.

  2. Open research: Enabling impactful and high-quality research through open-access data and code. All tutorials used in this workshop are openly available in the GitHub repository.

  3. Reproducible pipelines with PyKale: The core machine learning library behind this workshop's tutorials is PyKale, which supports standardized machine learning pipelines and configurable experimentation without extra coding (see the sketch below)—making AI research more reproducible, reusable, and recyclable.
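
As a taste of configurable experimentation, below is a minimal sketch in the yacs configuration style used across PyKale's examples. All keys, default values, and file names here are illustrative assumptions, not the exact schema of any tutorial.

```python
# Minimal sketch of config-driven experimentation in the yacs style.
# All keys and defaults below are illustrative assumptions, not the
# exact schema of any PyKale tutorial.
from yacs.config import CfgNode as CN

_C = CN()
_C.DATASET = CN()
_C.DATASET.NAME = "ABIDE"      # hypothetical: which tutorial dataset to load
_C.DATASET.ROOT = "./data"
_C.SOLVER = CN()
_C.SOLVER.LR = 1e-3
_C.SOLVER.MAX_EPOCHS = 50
_C.SOLVER.SEED = 2025

def get_cfg_defaults():
    """Clone the defaults so each experiment starts from a clean copy."""
    return _C.clone()

# A new experiment is a small YAML override, not new code:
cfg = get_cfg_defaults()
# cfg.merge_from_file("experiments/lr_sweep.yaml")  # hypothetical file
cfg.freeze()
print(cfg.SOLVER.LR)
```

With this pattern, swapping a dataset or a learning rate means editing a configuration file rather than touching the pipeline code.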


Overview of the Materials#

This workshop covers tutorials on four biomedical applications using the PyKale multimodal AI library. You are welcome to focus on one application and its exploration tasks, or to explore all four. The workshop post and slides introduce multimodal AI and PyKale and provide a summary of each tutorial.

Tutorial Topics (a code sketch of the four multimodal approaches follows this list):

  1. Brain Disorder Diagnosis

    • Modalities: Neuroimaging (fMRI) and phenotypic features (e.g., age, gender, IQ)

    • Task: Use neuroimaging and phenotypic data for autism classification

    • Multimodal approach: Regularization - using phenotypic features to regularize the feature embedding, reducing phenotypic effects (e.g., the site effect) in neuroimaging data and improving cross-site classification performance

    • Dataset: ABIDE (Autism Brain Imaging Data Exchange)

  2. Cardiovascular Disease Assessment

    • Modalities: Chest X-ray images and ECG signals

    • Task: Integrate imaging and physiological signals to classify healthy cases versus cardiothoracic abnormalities

    • Multimodal approach: Intermediate fusion - combining chest X-ray (CXR) and ECG features at the embedding level

    • Dataset: MIMIC Chest X-rays and ECG signals

  3. Cancer Classification

    • Modalities: DNA methylation, mRNA expression, and miRNA expression

    • Task: Combine genomics and transcriptomics data for cancer classification

    • Multimodal approach: Late fusion - constructing a cross-omics tensor for probability fusion

    • Dataset: TCGA (The Cancer Genome Atlas)

  4. Drug–Target Interaction Prediction

    • Modalities: Protein structures (3D) and molecular graphs (from SMILES)

    • Task: Predict molecular interactions from structural and textual features

    • Multimodal approach: Interaction - bilinear interaction between protein and molecular embeddings

    • Dataset: BindingDB and BioSNAP
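
To make the four approaches above concrete, here is a minimal PyTorch sketch of each. Dimensions, losses, and variable names are illustrative assumptions; they show the shape of each idea, not the tutorials' actual implementations.

```python
# Minimal PyTorch sketches of the four multimodal strategies above.
# All shapes, losses, and names are illustrative assumptions.
import torch
import torch.nn as nn

B, D = 8, 32                    # batch size, embedding dimension
z_a = torch.randn(B, D)         # modality A embedding (e.g., fMRI, CXR)
z_b = torch.randn(B, D)         # modality B embedding (e.g., ECG, omics)

# 1. Regularization (Tutorial 1): penalize dependence of the embedding on
#    a nuisance variable such as acquisition site.
site = torch.randn(B, 1)        # hypothetical continuous site covariate
corr = (z_a - z_a.mean(0)).T @ (site - site.mean(0))
reg_loss = corr.pow(2).mean()   # added to the classification loss

# 2. Intermediate fusion (Tutorial 2): concatenate embeddings, classify jointly.
logits_fused = nn.Linear(2 * D, 2)(torch.cat([z_a, z_b], dim=1))

# 3. Late fusion (Tutorial 3): per-modality probabilities combined via a
#    cross-modal (outer-product) tensor, then flattened for a final classifier.
p_a = nn.Linear(D, 2)(z_a).softmax(dim=1)
p_b = nn.Linear(D, 2)(z_b).softmax(dim=1)
cross_tensor = torch.einsum("bi,bj->bij", p_a, p_b)
late_features = cross_tensor.flatten(1)      # input to a final classifier

# 4. Interaction (Tutorial 4): a bilinear map scores pairwise interactions
#    between the two embeddings.
logits_inter = nn.Bilinear(D, D, 2)(z_a, z_b)

print(logits_fused.shape, late_features.shape, logits_inter.shape)
```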

The core functions of each tutorial are implemented with PyKale APIs that follow a standardized machine learning pipeline:

Data loading → Preprocessing → Embedding → Prediction → Evaluation → Interpretation
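
For illustration, the runnable toy example below walks through these six stages on synthetic data, using scikit-learn as a stand-in. In the tutorials each stage is backed by the PyKale module named in the comments (kale.loaddata, kale.prepdata, kale.embed, kale.predict, kale.evaluate, kale.interpret); this sketch does not use PyKale's actual API.

```python
# Toy walk-through of the six pipeline stages on synthetic data.
# scikit-learn stands in for the corresponding PyKale modules.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Data loading (kale.loaddata): synthetic features and binary labels.
X = rng.normal(size=(200, 50))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Preprocessing (kale.prepdata): standardize features.
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Embedding (kale.embed): learn a low-dimensional representation.
pca = PCA(n_components=10).fit(X_tr)
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

# Prediction (kale.predict): fit a classifier on the embedding.
clf = LogisticRegression().fit(Z_tr, y_tr)

# Evaluation (kale.evaluate): score held-out predictions.
print("accuracy:", accuracy_score(y_te, clf.predict(Z_te)))

# Interpretation (kale.interpret): inspect classifier weights per component.
print("weights on first 3 components:", clf.coef_[0, :3])
```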

Schedule#

Chair: Peter Charlton | Co-Chair: Xianyuan Liu

| Duration | Activity |
| --- | --- |
| 5 mins | Welcome |
| 20 mins | Opening Talk: Towards Deployment-Centric Multimodal AI (Prof. Haiping Lu) |
| 20 mins | Introduction to the Tutorials (Speakers: Jiayang Zhang, Zixuan (Kelly) Ding, Xianyuan Liu, Sina Tabakhi) |
| 10 mins | Interactive Tutorial |
| 1 hour | Hands-on Session (Round 1) |
| 10 mins | Open Sharing and Discussion |
| 1 hour | Hands-on Session (Round 2) |
| 15 mins | Post-tutorial Discussion |
| 10 mins | Closing Remarks |

Discussion Forum for Q&A and More#

We use a discussion forum as our primary communication channel for Q&A, information sharing, and discussion. Please ask questions or post information there rather than emailing the organisers directly; this lets others benefit from the answers and helps build an engaging community. You will need a GitHub account to post.

Questions to Consider#

Before We Begin#

Consider the following questions to help familiarize yourself with the key concepts of this workshop:

  • Have you used publicly available data or code in your work?

  • Have you ever shared data or code from your own research?

  • Have you applied any practices to make your research more reproducible?

  • Do you currently face any challenges when working with multimodal data or methods?

Post-Tutorial Reflection#

To reflect on your experience, consider the following questions, choosing those that apply to you:

  • Were there any tools or techniques introduced in the tutorials that you found particularly useful or innovative?

  • Which aspects of the tutorials did you find most helpful, and which were most challenging?

  • How do you see multimodal AI contributing to or shaping your own research going forward?