
Fast and fair split computing for accelerating deep neural network (DNN) inference

Abstract

Conventional split computing approaches for AI models that generate large outputs suffer from long transmission and inference times. Moreover, because of the edge server's limited resources and the selfish behavior of mobile devices (MDs), some MDs cannot offload their tasks and must sacrifice performance. To address these issues, we formulate an optimization problem that determines one or two split points to minimize inference latency while ensuring fair offloading among MDs. We also devise a low-complexity heuristic algorithm called fast and fair split computing (F2SC). Evaluation results demonstrate that F2SC reduces inference time by 3.8%–20.1% compared with conventional approaches while maintaining fairness.
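The core idea of split-point selection can be illustrated with a minimal sketch: enumerate candidate layers, and for each, sum on-device compute time, activation transmission time, and server-side compute time. Note this is only an illustrative brute-force baseline under assumed per-layer profiles, not the paper's F2SC heuristic, and it omits the fairness constraint and the two-split-point case.

```python
# Illustrative single-split-point selection for split computing.
# All per-layer times, output sizes, and the bandwidth value are
# hypothetical; this is NOT the F2SC algorithm from the paper.

def best_split(device_ms, server_ms, out_bytes, bandwidth_bps):
    """Return the split index minimizing end-to-end inference latency.

    Splitting after layer k means layers 0..k run on the mobile device,
    layer k's output activation is transmitted, and layers k+1.. run on
    the edge server.
    """
    n = len(device_ms)
    best_k, best_latency = None, float("inf")
    for k in range(n):
        latency = (
            sum(device_ms[: k + 1])                      # on-device compute
            + out_bytes[k] * 8 / bandwidth_bps * 1000    # transmission (ms)
            + sum(server_ms[k + 1 :])                    # server compute
        )
        if latency < best_latency:
            best_k, best_latency = k, latency
    return best_k, best_latency

# Toy 4-layer network (all numbers illustrative).
device_ms = [5.0, 8.0, 12.0, 20.0]             # per-layer time on the device
server_ms = [1.0, 1.5, 2.0, 3.0]               # per-layer time on the server
out_bytes = [400_000, 100_000, 20_000, 4_000]  # activation sizes (bytes)
k, latency = best_split(device_ms, server_ms, out_bytes, 100e6)  # 100 Mbps
```

In this toy setting the early layers produce large activations, so the optimizer picks a later split (k = 1, about 26 ms) where the activation has shrunk enough that transmission no longer dominates.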
