이재현
Jaehyun Lee
Email
dlwogus8888@ajou.ac.kr
Research interests
Distributed Systems, Big Data Processing, Cloud Computing
Introduction
Hello, I’m Jaehyun Lee.
I am studying software in the Department of Software at Ajou University.
I plan to conduct research on the topics listed above.
Publications
2025
7.
Choi, Jiheon; Lee, Jaehyun; Yoon, Taeyoung; Choo, Minsol; Kwon, Oh-Kyoung; Oh, Sangyoon
When HPC Scheduling Meets Active Learning: Maximizing The Performance with Minimal Data (International Conference)
The International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2025), 2025.
@conference{choi-hpc-active,
title = {When HPC Scheduling Meets Active Learning: Maximizing The Performance with Minimal Data},
author = {Jiheon Choi and Jaehyun Lee and Taeyoung Yoon and Minsol Choo and Oh-Kyoung Kwon and Sangyoon Oh},
year = {2025},
date = {2025-02-20},
urldate = {2025-02-20},
booktitle = {The International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2025)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
6.
Lee, Jaehyun; Oh, Sangyoon
A Shared Data Repository for HPC: Throughput Analysis of Communication Protocol and Database Combinations (Domestic Conference)
2025 KICS Winter Conference, Korean Institute of Communications and Information Sciences (KICS), 2025.
@conference{KICS-Winter-Conference-2025,
title = {HPC를 위한 공유 데이터 레포지토리: 통신 프로토콜과 데이터 베이스 조합의 처리량 분석},
author = {이재현 and 오상윤},
year = {2025},
date = {2025-02-06},
urldate = {2025-02-06},
booktitle = {2025년도 한국통신학회 동계종합학술발표회},
organization = {한국통신학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2024
5.
Choo, Minsol; Yoon, Seokhyun; Lee, Jaehyun; Oh, Sangyoon
A ytopt-Based Autotuning Technique for HPC Applications Using TPE (Domestic Conference)
Korea Software Congress (KSC 2024), Korean Institute of Information Scientists and Engineers (KIISE), 2024.
@conference{ksc2024-winter-3,
title = {TPE를 적용한 ytopt 기반의 HPC 응용 Autotuning 기법},
author = {추민솔 and 윤석현 and 이재현 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE12042231},
year = {2024},
date = {2024-12-18},
urldate = {2024-12-18},
booktitle = {한국소프트웨어종합학술대회 (KSC2024)},
publisher = {한국정보과학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
4.
Yu, Miri; Choi, Jiheon; Lee, Jaehyun; Oh, Sangyoon
Staleness Aware Semi-asynchronous Federated Learning (International Journal Article)
In: Journal of Parallel and Distributed Computing, 2024.
@article{miri2024staleness,
title = {Staleness Aware Semi-asynchronous Federated Learning},
author = {Miri Yu and Jiheon Choi and Jaehyun Lee and Sangyoon Oh},
url = {https://www.sciencedirect.com/science/article/pii/S074373152400114X},
year = {2024},
date = {2024-07-01},
urldate = {2024-07-01},
journal = {Journal of Parallel and Distributed Computing},
abstract = {As the attempts to distribute deep learning using personal data have increased, the importance of federated learning (FL) has also increased. Attempts have been made to overcome the core challenges of federated learning (i.e., statistical and system heterogeneity) using synchronous or asynchronous protocols. However, stragglers reduce training efficiency in terms of latency and accuracy in each protocol, respectively. To solve straggler issues, a semi-asynchronous protocol that combines the two protocols can be applied to FL; however, effectively handling the staleness of the local model is a difficult problem. We proposed SASAFL to solve the training inefficiency caused by staleness in semi-asynchronous FL. SASAFL enables stable training by considering the quality of the global model to synchronise the servers and clients. In addition, it achieves high accuracy and low latency by adjusting the number of participating clients in response to changes in global loss and immediately processing clients that did not participate in the previous round. An evaluation was conducted under various conditions to verify the effectiveness of SASAFL. SASAFL achieved 19.69%p higher accuracy than the baseline, 2.32 times higher round-to-accuracy and 2.24 times higher latency-to-accuracy. Additionally, SASAFL always achieved target accuracy that the baseline can't reach.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
3.
Ahn, Seongbae; Lee, Jaehyun; Park, Jongwon; Paulo, C. Sergio; Oh, Sangyoon
Surrogate Model Selection for Autotuning to Optimize HPC Job Execution (Domestic Conference)
Korea Computer Congress (KCC 2024), Korean Institute of Information Scientists and Engineers (KIISE), 2024.
@conference{kcc2024-1,
title = {HPC 작업 수행 최적화를 위한 Autotuning 기법의 Surrogate Model 선택},
author = {안성배 and 이재현 and 박종원 and C. Sergio Paulo and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11862282},
year = {2024},
date = {2024-06-27},
urldate = {2024-06-27},
booktitle = {한국컴퓨터종합학술대회 (KCC 2024)},
publisher = {한국정보과학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2.
Ahn, Seongbae; Lee, Jaehyun; Park, Bohyun; Oh, Sangyoon
A Comparative Analysis of Reinforcement Learning and Heuristic Algorithms for Scheduling in HPC Environments (Domestic Conference)
2024 KICS Winter Conference, Korean Institute of Communications and Information Sciences (KICS), 2024.
@conference{kics2024-1,
title = {HPC 환경에서의 스케줄링을 위한 강화학습 및 휴리스틱 알고리즘의 비교 분석},
author = {안성배 and 이재현 and 박보현 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11737204},
year = {2024},
date = {2024-03-27},
urldate = {2024-03-27},
booktitle = {2024년도 한국통신학회 동계종합학술발표회},
publisher = {한국통신학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2023
1.
Lee, Jaehyun; Jung, Hyunseok; Oh, Sangyoon
A DDQN-Based Task Scheduling Algorithm for VM Placement (Domestic Conference)
2023 KICS Summer Conference, Korean Institute of Communications and Information Sciences (KICS), 2023.
@conference{nokey,
title = {VM 배치를 위한 DDQN 기반 태스크 스케줄링 알고리즘},
author = {이재현 and 정현석 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11487081},
year = {2023},
date = {2023-06-21},
urldate = {2023-06-21},
booktitle = {2023년도 한국통신학회 하계종합학술발표회},
organization = {한국통신학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}