How to Cite Our Papers
Using the BibTeX record generated for each paper, you can produce a citation in whatever style you need.
Copy the generated BibTeX code and convert it to a plain citation string with a BibTeX parser. You can also do the conversion on the web, for example at the site below.
bibtex.online
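If you would rather do the conversion locally than on the web, the short sketch below parses a BibTeX record and assembles a plain citation string. It assumes the third-party Python package bibtexparser and its v1 API (pip install bibtexparser); the citation format itself is just an example, not a prescribed style.

# A minimal sketch of turning a BibTeX record into a plain citation string,
# assuming the bibtexparser package and its v1 API (pip install bibtexparser).
import bibtexparser

BIBTEX = """@article{Juwon2022AMBLE,
  title   = {AMBLE: Adjusting Mini-Batch and Local Epoch for Federated Learning with Heterogeneous Devices},
  author  = {Juwon Park and Daegun Yoon and Sangho Yeo and Sangyoon Oh},
  journal = {Journal of Parallel and Distributed Computing},
  year    = {2022},
  doi     = {10.1016/j.jpdc.2022.07.009},
}"""

# Parse the record; entries are plain dicts keyed by lowercase field names.
entry = bibtexparser.loads(BIBTEX).entries[0]

# Assemble a simple author-year citation; the exact style is up to you.
citation = '{}. "{}". {}, {}. doi:{}'.format(
    entry["author"].replace(" and ", ", "),
    entry["title"],
    entry["journal"],
    entry["year"],
    entry["doi"],
)
print(citation)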
2022

1.
Park, Juwon; Yoon, Daegun; Yeo, Sangho; Oh, Sangyoon
AMBLE: Adjusting Mini-Batch and Local Epoch for Federated Learning with Heterogeneous Devices (Journal Article)
In: Journal of Parallel and Distributed Computing, 2022, ISSN: 0743-7315.
Tags: federated learning, Local mini-batch SGD, System heterogeneity
@article{Juwon2022AMBLE,
title = {AMBLE: Adjusting Mini-Batch and Local Epoch for Federated Learning with Heterogeneous Devices},
author = {Juwon Park and Daegun Yoon and Sangho Yeo and Sangyoon Oh},
url = {https://www.sciencedirect.com/science/article/pii/S0743731522001757},
doi = {10.1016/j.jpdc.2022.07.009},
issn = {0743-7315},
year = {2022},
date = {2022-07-21},
urldate = {2022-07-21},
journal = {Journal of Parallel and Distributed Computing},
abstract = {As data privacy becomes increasingly important, federated learning applied to the training of deep learning models while ensuring the data privacy of devices is entering the spotlight. Federated learning makes it possible to process all data at once while processing data independently from various devices without collecting distributed local data in a central server. However, there are still challenges to overcome for the system of devices in federated learning such as communication overheads and the heterogeneity of the system. In this paper, we propose the Adjusting Mini-Batch and Local Epoch (AMBLE) approach, which adaptively adjusts the local mini-batch and local epoch size for heterogeneous devices in federated learning and updates the parameters synchronously. With AMBLE, we enhance the computational efficiency by removing stragglers and scaling the local learning rate to improve the model convergence rate and accuracy. We verify that federated learning with AMBLE is a stably trained model with a faster convergence speed and higher accuracy than FedAvg and adaptive batch size scheme for both identically and independently distributed (IID) and non-IID cases.},
keywords = {federated learning, Local mini-batch SGD, System heterogeneity},
pubstate = {published},
tppubtype = {article}
}
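The abstract in the record above outlines AMBLE's core idea: give each heterogeneous device a local mini-batch size and local epoch count matched to its speed, so that synchronous rounds have no stragglers, and scale the local learning rate accordingly. As a heavily hedged illustration only, the sketch below implements one simple proportional rule; the function name, the speed_ratio parameter, and the linear learning-rate scaling are all assumptions for illustration, not AMBLE's actual update rules from the paper.

# Illustrative sketch only: one simple proportional rule for adapting a
# client's local workload to its device speed, in the spirit of AMBLE.
# The exact AMBLE formulas are defined in the paper; everything here
# (constants, rounding, linear LR scaling) is an assumption.

def adjust_workload(base_batch, base_epochs, base_lr, speed_ratio):
    """speed_ratio: this device's throughput relative to the slowest
    device (>= 1.0). Faster devices get proportionally more local work,
    so all devices finish a synchronous round at roughly the same time."""
    local_batch = max(1, round(base_batch * speed_ratio))
    local_epochs = max(1, round(base_epochs * speed_ratio))
    # Linear learning-rate scaling with the enlarged batch: a common
    # heuristic, assumed here, not necessarily AMBLE's exact rule.
    local_lr = base_lr * (local_batch / base_batch)
    return local_batch, local_epochs, local_lr

# Example: a device 2.5x faster than the slowest straggler.
print(adjust_workload(base_batch=32, base_epochs=1, base_lr=0.01, speed_ratio=2.5))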