
DC4: Split learning over distributed heterogeneous devices

Task: Split learning over distributed heterogeneous devices (WP2)

Host institution: TU Delft

Country: Netherlands

Supervisor: Prof. Q. Wang [TU Delft]

Co-supervisors: Dr. J. Beysens [CSEM]; Dr. E. Rocco [A5]

Objectives: 1) To investigate splitting neural network models on a per-layer basis across distributed devices; 2) To extend split learning by splitting neurons within the same layer of a neural network model across multiple devices; 3) To investigate the optimal trade-off by modelling the cost of connection updates, integrating results from dynamic sparse training.

Expected Results: 1) Successful distribution of neural network layers across devices for efficient parallel processing and reduced computational load per device. 2) Achievement of splitting neurons within the same layer across devices, enhancing model scalability. 3) Development of an optimal trade-off strategy, balancing connection update costs and performance, informed by dynamic sparse training insights.
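The per-layer splitting targeted in Objective 1 follows the standard split-learning pattern: each device holds a contiguous block of layers, and only the cut-layer activation and its gradient cross the device boundary. The following is a minimal illustrative sketch of that idea, assuming PyTorch and a toy two-device setup simulated in a single process; the model sizes and variable names are hypothetical and not part of the project specification.

```python
# Minimal sketch of per-layer split learning (Objective 1), assuming PyTorch.
# "Device A" and "Device B" are simulated in one process for illustration.
import torch
import torch.nn as nn

# Device A holds the early layers; Device B holds the remaining layers.
part_a = nn.Sequential(nn.Linear(16, 32), nn.ReLU())                   # runs on device A
part_b = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 2))   # runs on device B

opt_a = torch.optim.SGD(part_a.parameters(), lr=0.01)
opt_b = torch.optim.SGD(part_b.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 16)            # raw data stays on device A
y = torch.randint(0, 2, (4,))     # labels are held on device B

# Device A computes up to the split point and sends the cut-layer activation.
smashed = part_a(x)
sent = smashed.detach().requires_grad_(True)   # what device B receives

# Device B finishes the forward pass, computes the loss, and backpropagates
# down to the cut layer; only the activation gradient is sent back to device A.
loss = loss_fn(part_b(sent), y)
opt_b.zero_grad()
loss.backward()
opt_b.step()

# Device A resumes backpropagation from the returned gradient and updates its layers.
opt_a.zero_grad()
smashed.backward(sent.grad)
opt_a.step()
```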

PhD enrolment: Doctoral School of TU Delft

Planned secondments: 

  • CSEM (5 months, M15-M19): Per-layer model splitting with early exit on distributed devices, with Dr. J. Beysens (KPI: joint paper)

  • A5 (4 months, M26-M28): Integration of dynamic sparse training into the use case of underwater environment monitoring, with Dr. E. Rocco (KPI: joint paper)

Candidate profile: computer science, telecommunication engineering, electrical engineering (in order of preference)

Desirable skills/interests: machine learning, tinyML, embedded systems/AI, signal processing, applied optimization (the applicant should be proficient in at least two of these areas)

Application Deadline: February 14, 2025, AoE
