유미리
Miri Yu
zztiok at ajou.ac.kr
Research interests
Cloud Computing, Distributed Systems, Federated Learning
Introduction
Miri Yu is a master's student in the Department of Artificial Intelligence at Ajou University.
Publications
2024
Yu, Miri; Choi, Jiheon; Lee, Jaehyun; Oh, Sangyoon
Staleness Aware Semi-asynchronous Federated Learning 🌏 International Journal Article
In: Journal of Parallel and Distributed Computing, 2024.
@article{miri2024staleness,
title = {Staleness Aware Semi-asynchronous Federated Learning},
author = {Miri Yu and Jiheon Choi and Jaehyun Lee and Sangyoon Oh},
url = {https://www.sciencedirect.com/science/article/pii/S074373152400114X},
year = {2024},
date = {2024-07-01},
urldate = {2024-07-01},
journal = {Journal of Parallel and Distributed Computing},
abstract = {As attempts to distribute deep learning over personal data have increased, the importance of federated learning (FL) has also grown. Synchronous and asynchronous protocols have been used to address the core challenges of FL (i.e., statistical and system heterogeneity); however, stragglers reduce training efficiency in terms of latency and accuracy, respectively, under each protocol. A semi-asynchronous protocol that combines the two can mitigate the straggler problem, but effectively handling the staleness of local models remains difficult. We propose SASAFL to resolve the training inefficiency caused by staleness in semi-asynchronous FL. SASAFL enables stable training by considering the quality of the global model when synchronising the server and clients. In addition, it achieves high accuracy and low latency by adjusting the number of participating clients in response to changes in the global loss and by immediately processing clients that did not participate in the previous round. An evaluation was conducted under various conditions to verify the effectiveness of SASAFL. SASAFL achieved 19.69%p higher accuracy than the baseline, 2.32 times better round-to-accuracy and 2.24 times better latency-to-accuracy. Additionally, SASAFL always reached target accuracies that the baseline could not.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
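The abstract above centers on weighting stale client updates during semi-asynchronous aggregation. As a rough illustration of the general staleness-weighting idea only (not the SASAFL algorithm itself: the decay function, the 0.5 mixing factor, and all names below are hypothetical), a semi-asynchronous server might aggregate like this:

# Illustrative sketch of staleness-weighted aggregation in semi-asynchronous
# FL. The decay form and the blend factor are hypothetical, not from SASAFL.
import numpy as np

def staleness_weight(staleness: int, alpha: float = 0.5) -> float:
    # Down-weight updates computed against an older global model.
    return 1.0 / (1.0 + staleness) ** alpha

def aggregate(global_model, updates):
    # updates: list of (client_model, staleness) pairs received this round.
    weights = np.array([staleness_weight(s) for _, s in updates])
    weights /= weights.sum()
    mixed = sum(w * m for w, (m, _) in zip(weights, updates))
    # Blend the staleness-weighted average into the current global model.
    return 0.5 * global_model + 0.5 * mixed

# Example: three clients, one of them two rounds stale.
g = np.zeros(4)
updates = [(np.ones(4), 0), (2 * np.ones(4), 0), (10 * np.ones(4), 2)]
print(aggregate(g, updates))

A full implementation would also adjust how many clients are admitted per round based on the global-loss trend, as the abstract describes.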
Paulo, C. Sergio; 유미리; 최지헌; 오상윤
Dynamic Programming-Based Multilevel Graph Partitioning for Large-Scale Graph Data 🇰🇷 Domestic Conference
2024년도 한국통신학회 동계종합학술발표회, 한국통신학회, 2024.
@conference{2024kics-2,
title = {Dynamic Programming-Based Multilevel Graph Partitioning for Large-Scale Graph Data},
author = {C. Sergio Paulo and 유미리 and 최지헌 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11737048},
year = {2024},
date = {2024-03-27},
booktitle = {2024년도 한국통신학회 동계종합학술발표회},
publisher = {한국통신학회},
abstract = {Multilevel graph algorithms are used to create optimal partitions of large graphs. However, dynamic changes to the graph structure during partitioning increase memory usage, as temporary data is added to arrays or queues during intermediate operations. To enhance efficiency and minimize memory usage, we integrated dynamic programming. Experimental results demonstrate the improved scalability and effectiveness of the proposed approach in terms of memory usage.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
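The entry above integrates dynamic programming into multilevel partitioning to cut memory use. As a flavor-only illustration of DP-based partitioning (this is the classic linear partition problem, not the paper's multilevel algorithm), consider:

# Classic linear partition problem solved with memoized DP: split an ordered
# sequence of vertex weights into k contiguous blocks, minimizing the largest
# block sum. Included only to illustrate the DP flavor.
from functools import lru_cache

def linear_partition(weights: tuple, k: int) -> int:
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)

    @lru_cache(maxsize=None)
    def best(i: int, parts: int) -> int:
        # best(i, parts): minimal max-block-sum for weights[:i] in `parts` blocks.
        if parts == 1:
            return prefix[i]
        return min(max(best(j, parts - 1), prefix[i] - prefix[j])
                   for j in range(parts - 1, i))

    return best(len(weights), k)

print(linear_partition((9, 2, 6, 3, 8, 5, 8, 1, 7, 3), 3))  # -> 19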
2023
Yu, Miri; Kwon, Oh-Kyoung; Oh, Sangyoon
Addressing Client Heterogeneity in Synchronous Federated Learning: The CHAFL Approach 🌏 International Conference
The 29th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2023), 2023.
@conference{yu2023chafl,
title = {Addressing Client Heterogeneity in Synchronous Federated Learning: The CHAFL Approach},
author = {Miri Yu and Oh-Kyoung Kwon and Sangyoon Oh},
year = {2023},
date = {2023-11-10},
urldate = {2023-11-10},
booktitle = {The 29th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2023)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
유미리; 윤대건; 오상윤
Design of a Heterogeneous-Device Experiment Platform Supporting Performance Evaluation of Federated Learning Techniques (연합학습 기법들의 성능평가를 지원하는 이기종 기반의 실험 플랫폼 설계) 🇰🇷 Domestic Conference
2023년도 한국통신학회 하계종합학술발표회, 한국통신학회, 2023.
@conference{연합학습기법들의성능평가를지원하는이기종기반의실험플랫폼설계,
title = {연합학습 기법들의 성능평가를 지원하는 이기종 기반의 실험 플랫폼 설계},
author = {유미리 and 윤대건 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11487802},
year = {2023},
date = {2023-06-21},
urldate = {2023-06-21},
booktitle = {2023년도 한국통신학회 하계종합학술발표회},
organization = {한국통신학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Lee, Seungjun; Yu, Miri; Yoon, Daegun; Oh, Sangyoon
Can hierarchical client clustering mitigate the data heterogeneity effect in federated learning? 🌏 International Conference
2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 2023, ISBN: 979-8-3503-1200-3.
@conference{lee2023ccfed,
title = {Can hierarchical client clustering mitigate the data heterogeneity effect in federated learning?},
author = {Seungjun Lee and Miri Yu and Daegun Yoon and Sangyoon Oh},
url = {https://doi.org/10.1109/IPDPSW59300.2023.00134},
doi = {10.1109/IPDPSW59300.2023.00134},
isbn = {979-8-3503-1200-3},
year = {2023},
date = {2023-05-15},
urldate = {2023-05-15},
booktitle = {2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)},
abstract = {Federated learning (FL) was proposed for training a deep neural network model using millions of user data. The technique has attracted considerable attention owing to its privacy-preserving characteristic. However, two major challenges exist. The first is the limit on the number of simultaneously participating clients: as the number of clients increases, the single parameter server easily becomes a bottleneck and is prone to stragglers. The second is data heterogeneity, which adversely affects the accuracy of the global model. Because data must remain on user devices to preserve privacy, we cannot use data shuffling, which homogenizes training data in traditional distributed deep learning. We propose a client clustering and model aggregation method, CCFed, to increase the number of simultaneously participating clients and mitigate the data heterogeneity problem. CCFed improves learning performance by using set-partition modeling to distribute data evenly between clusters and mitigate the effect of a non-IID environment. Experiments show that CCFed achieves 2.7-14% higher accuracy than FedAvg while requiring approximately 50% fewer rounds when training on benchmark datasets.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
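CCFed's key step is partitioning clients into clusters so that data is evenly distributed. A toy greedy sketch of that balancing idea follows (an LPT-style heuristic over data sizes only; the paper's set-partition model also considers label distributions, and all names here are illustrative):

# Toy sketch: greedily assign clients to clusters so per-cluster data volume
# stays balanced. Not CCFed itself, which solves a richer set-partition model.
import heapq

def balance_clients(client_sizes: dict, num_clusters: int) -> list:
    # Each heap entry is (current load, tiebreak id, member list).
    clusters = [(0, i, []) for i in range(num_clusters)]
    heapq.heapify(clusters)
    # Largest clients first, always into the currently lightest cluster.
    for cid, size in sorted(client_sizes.items(), key=lambda kv: -kv[1]):
        load, i, members = heapq.heappop(clusters)
        members.append(cid)
        heapq.heappush(clusters, (load + size, i, members))
    return [members for _, _, members in clusters]

sizes = {"c0": 500, "c1": 300, "c2": 450, "c3": 120, "c4": 380, "c5": 200}
print(balance_clients(sizes, 2))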
최지헌; 유미리; 윤대건; 오상윤
Analysis of Security Vulnerabilities in Federated Learning (연합학습에서의 보안 취약점 분석) 🇰🇷 Domestic Conference
2023년도 한국통신학회 동계종합학술발표회 논문집, vol. 80, 한국통신학회, 2023, ISSN: 2383-8302.
@conference{최지헌2023연합학습에서의,
title = {연합학습에서의 보안 취약점 분석},
author = {최지헌 and 유미리 and 윤대건 and 오상윤},
url = {https://www.dbpia.co.kr/pdf/pdfView.do?nodeId=NODE11227811},
issn = {2383-8302},
year = {2023},
date = {2023-02-28},
urldate = {2023-02-28},
booktitle = {2023년도 한국통신학회 동계종합학술발표회 논문집},
volume = {80},
pages = {1201-1202},
organization = {한국통신학회},
abstract = {Federated learning was proposed to enable distributed machine learning without infringing on the privacy of personal data. As new techniques emerge that aim to improve the accuracy and convergence speed of existing federated learning methods, security guidelines for them are needed. In this paper, we classify the security vulnerabilities that arise from the characteristics of the federated learning architecture by attack type and discuss countermeasures.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2022
정현석; 유미리; 윤대건; 이승준; 오상윤
An MLOps-Based Approach to the Data Drift Problem in Disaster-Response Machine Learning Models (재난 대응 기계학습 모델의 Data Drift 문제에 대한 MLOps 기반 대응 기법) 🇰🇷 Domestic Conference
2022 한국차세대컴퓨팅학회 춘계학술대회, 한국차세대컴퓨팅학회, 2022.
@conference{정현석2022재난,
title = {재난 대응 기계학습 모델의 Data Drift 문제에 대한 MLOps 기반 대응 기법},
author = {정현석 and 유미리 and 윤대건 and 이승준 and 오상윤},
url = {https://www.earticle.net/Article/A412404},
year = {2022},
date = {2022-05-21},
urldate = {2022-05-21},
booktitle = {2022 한국차세대컴퓨팅학회 춘계학술대회},
pages = {473-476},
publisher = {한국차세대컴퓨팅학회},
abstract = {In machine learning, data drift is a critical problem that strongly affects accuracy, and it is even more important in fields such as disaster response, where the damage from a model's incorrect prediction is severe. In this paper, we propose using MLOps techniques and tools to effectively retrain models against the data drift problem in the disaster domain, and we validate this claim through accuracy experiments based on Kaggle data and MLflow.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
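The paper above retrains models when data drift is detected. A minimal sketch of such a drift check (a single-feature two-sample Kolmogorov-Smirnov test with an arbitrary threshold; the paper's MLflow-based pipeline is not reproduced here, and this requires SciPy):

# Minimal MLOps-style drift check: compare a feature's live distribution
# against the training distribution and flag retraining. The threshold and
# single-feature test are simplifications, not the paper's actual setup.
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(train_feature, live_feature, p_threshold=0.01) -> bool:
    # Two-sample KS test: a small p-value means the distributions likely differ.
    stat, p_value = ks_2samp(train_feature, live_feature)
    return p_value < p_threshold

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)
live = rng.normal(0.8, 1.0, 500)      # mean-shifted: drifted data
print(needs_retraining(train, live))  # True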
2021
Yu, Miri; Lee, Seungjun; Oh, Sangyoon
Energy-aware container migration scheme in edge computing for fault-tolerant fire-disaster response system 🌏 International Conference 📃 In press
The 7th International Conference on Next Generation Computing 2021, 2021.
@conference{Yu2021container,
title = {Energy-aware container migration scheme in edge computing for fault-tolerant fire-disaster response system},
author = {Miri Yu and Seungjun Lee and Sangyoon Oh},
year = {2021},
date = {2021-11-05},
urldate = {2021-11-05},
booktitle = {The 7th International Conference on Next Generation Computing 2021},
abstract = {In light of recent advancements in IT, many researchers are exploring ways to minimize damage from fire disasters using artificial intelligence and cloud technology. With the introduction of edge computing, fire-disaster response software systems have made significant progress. However, existing studies often do not consider the response to a sudden power-supply cut-off caused by fire. In this study, we propose a container migration scheme based on the first-fit-decreasing algorithm for the bin-packing problem and the 0-1 knapsack algorithm, to provide fault tolerance for containers running on edge servers that are powered off.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
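The scheme above packs containers from a powered-off edge server onto surviving servers. A minimal sketch of the first-fit-decreasing step under a one-dimensional capacity model (server names, demands, and the capacity model are illustrative; the paper's 0-1 knapsack stage for prioritizing containers is omitted):

# First-fit-decreasing (FFD) placement sketch: migrate containers from a
# failing edge server onto remaining servers with limited capacity.
def first_fit_decreasing(containers: dict, capacities: dict) -> dict:
    # Map each container to a target server, or None if nothing fits.
    free = dict(capacities)
    placement = {}
    for name, demand in sorted(containers.items(), key=lambda kv: -kv[1]):
        target = next((s for s, cap in free.items() if cap >= demand), None)
        placement[name] = target
        if target is not None:
            free[target] -= demand
    return placement

containers = {"detector": 4, "notifier": 1, "streamer": 3, "logger": 2}
capacities = {"edge-1": 5, "edge-2": 6}
print(first_fit_decreasing(containers, capacities))

Sorting demands in decreasing order is what distinguishes FFD from plain first-fit: placing the largest containers first leaves small ones to fill the remaining capacity gaps.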