Citing Our Papers
Each paper below comes with a generated BibTeX entry, which you can use to produce a citation in whatever style you prefer.
Copy the generated BibTeX code and convert it into a plain citation string with a BibTeX parser, as in the sketch below, or use a web-based converter such as the site listed after it.
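As a minimal sketch (not a tool provided by this page), the Python snippet below parses a BibTeX entry with the bibtexparser package and prints a plain citation string; it assumes the 1.x API of that package (bibtexparser.loads), and the sample entry is abridged from the CCGrid 2024 item further down this page.

# Minimal sketch: turn a BibTeX entry from this page into a plain citation string.
# Assumes bibtexparser 1.x (pip install bibtexparser); not an official tool of this page.
import bibtexparser

bibtex_str = """
@conference{icpp2024yoon,
  title     = {Preserving Near-Optimal Gradient Sparsification Cost for Scalable Distributed Deep Learning},
  author    = {Daegun Yoon and Sangyoon Oh},
  booktitle = {The 24th IEEE/ACM international Symposium on Cluster, Cloud and Internet Computing (CCGrid 2024)},
  year      = {2024},
}
"""

entry = bibtexparser.loads(bibtex_str).entries[0]        # each entry is a plain dict
authors = ", ".join(a.strip() for a in entry["author"].split(" and "))
citation = f'{authors}. "{entry["title"]}." {entry["booktitle"]}, {entry["year"]}.'
print(citation)
# Daegun Yoon, Sangyoon Oh. "Preserving Near-Optimal Gradient Sparsification Cost ..." ... (CCGrid 2024), 2024.

Any entry on this page can be pasted into bibtex_str the same way; journal articles use a journal field instead of booktitle, so adjust the format string accordingly.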
bibtex.online

2025
Choi, Jiheon; Lee, Jaehyun; Yoon, Taeyoung; Choo, Minsol; Kwon, Oh-Kyoung; Oh, Sangyoon
When HPC Scheduling Meets Active Learning: Maximizing The Performance with Minimal Data 🌏 International Conference, Forthcoming
The International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2025), Forthcoming.
@conference{choi-hpc-active,
title = {When HPC Scheduling Meets Active Learning: Maximizing The Performance with Minimal Data},
author = {Jiheon Choi and Jaehyun Lee and Taeyoung Yoon and Minsol Choo and Oh-Kyoung Kwon and Sangyoon Oh},
year = {2025},
date = {2025-02-20},
urldate = {2025-02-20},
booktitle = {The International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2025)},
keywords = {},
pubstate = {forthcoming},
tppubtype = {conference}
}
2024
정재윤; 안성배; 오상윤
WSGI 웹 애플리케이션 서버 성능 최적화를 위한 BO기반 튜닝 기법 🇰🇷 Domestic Conference
한국소프트웨어종합학술대회 (KSC2024), 한국정보과학회, 2024.
@conference{Ksc2024-winter-1,
title = {WSGI 웹 애플리케이션 서버 성능 최적화를 위한 BO기반 튜닝 기법},
author = {정재윤 and 안성배 and 오상윤},
year = {2024},
date = {2024-12-18},
booktitle = {한국소프트웨어종합학술대회 (KSC2024)},
publisher = {한국정보과학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
윤태영; 최지헌; 권오경; 오상윤
HPC 환경에서 데이터 불균형을 고려한 작업 응용 예측 기법 🇰🇷 Domestic Conference
한국소프트웨어종합학술대회 (KSC2024), 한국정보과학회, 2024.
@conference{ksc2024-winter-2,
title = {HPC 환경에서 데이터 불균형을 고려한 작업 응용 예측 기법},
author = {윤태영 and 최지헌 and 권오경 and 오상윤},
year = {2024},
date = {2024-12-18},
booktitle = {한국소프트웨어종합학술대회 (KSC2024)},
publisher = {한국정보과학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
추민솔; 윤석현; 이재현; 오상윤
TPE를 적용한 ytopt 기반의 HPC 응용 Autotuning 기법 🇰🇷 Domestic Conference
한국소프트웨어종합학술대회 (KSC2024), 한국정보과학회, 2024.
@conference{ksc2024-winter-3,
title = {TPE를 적용한 ytopt 기반의 HPC 응용 Autotuning 기법},
author = {추민솔 and 윤석현 and 이재현 and 오상윤},
year = {2024},
date = {2024-12-18},
urldate = {2024-12-18},
booktitle = {한국소프트웨어종합학술대회 (KSC2024)},
publisher = {한국정보과학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Park, Sangjun; Kim, Youngjoo; Oh, Sangyoon; Jeong, Chanki
Robust Bare-Bone CNN Applying for Tactical Mobile Edge Devices 🌏 International Journal Article
In: IEEE Access, pp. 122671-122683, 2024, ISSN: 2169-3536.
@article{park2024barebone,
title = {Robust Bare-Bone CNN Applying for Tactical Mobile Edge Devices},
author = {Sangjun Park and Youngjoo Kim and Sangyoon Oh and Chanki Jeong},
url = {https://ieeexplore.ieee.org/document/10639408},
doi = {10.1109/ACCESS.2024.3445911},
issn = {2169-3536},
year = {2024},
date = {2024-08-19},
urldate = {2024-08-19},
journal = {IEEE Access},
pages = {122671 - 122683},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yu, Miri; Choi, Jiheon; Lee, Jaehyun; Oh, Sangyoon
Staleness Aware Semi-asynchronous Federated Learning 🌏 International Journal Article
In: Journal of Parallel and Distributed Computing, 2024.
Tags: federated learning
@article{miri2024staleness,
title = {Staleness Aware Semi-asynchronous Federated Learning},
author = {Miri Yu and Jiheon Choi and Jaehyun Lee and Sangyoon Oh},
url = {https://www.sciencedirect.com/science/article/pii/S074373152400114X},
year = {2024},
date = {2024-07-01},
urldate = {2024-07-01},
journal = {Journal of Parallel and Distributed Computing},
abstract = {As the attempts to distribute deep learning using personal data have increased, the importance of federated learning (FL) has also increased. Attempts have been made to overcome the core challenges of federated learning (i.e., statistical and system heterogeneity) using synchronous or asynchronous protocols. However, stragglers reduce training efficiency in terms of latency and accuracy in each protocol, respectively. To solve straggler issues, a semi-asynchronous protocol that combines the two protocols can be applied to FL; however, effectively handling the staleness of the local model is a difficult problem. We proposed SASAFL to solve the training inefficiency caused by staleness in semi-asynchronous FL. SASAFL enables stable training by considering the quality of the global model to synchronise the servers and clients. In addition, it achieves high accuracy and low latency by adjusting the number of participating clients in response to changes in global loss and immediately processing clients that did not participate in the previous round. An evaluation was conducted under various conditions to verify the effectiveness of the SASAFL. SASAFL achieved 19.69%p higher accuracy than the baseline, 2.32 times higher round-to-accuracy and 2.24 times higher latency-to-accuracy. Additionally, SASAFL always achieved target accuracy that the baseline can't reach.},
keywords = {federated learning},
pubstate = {published},
tppubtype = {article}
}
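The abstract above describes staleness-aware aggregation in semi-asynchronous federated learning. As a rough, generic sketch of that idea only (the weighting scheme, function and parameter names below are illustrative assumptions, not SASAFL's actual algorithm), a server might down-weight each client update by how many rounds old the model it started from is:

# Generic staleness-aware aggregation sketch; illustrative only, not the SASAFL
# algorithm from the paper. Stale client updates are down-weighted before averaging.
import numpy as np

def aggregate(global_weights, client_updates, current_round, alpha=0.5):
    # client_updates: list of (weights, round_when_the_client_pulled_the_model)
    agg = np.zeros_like(global_weights)
    total = 0.0
    for weights, pulled_round in client_updates:
        staleness = current_round - pulled_round      # rounds since the client synced
        w = (1.0 + staleness) ** (-alpha)             # older updates contribute less
        agg += w * weights
        total += w
    return agg / total if total > 0 else global_weights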
안성배; 이재현; 박종원; Paulo, C. Sergio; 오상윤
HPC 작업 수행 최적화를 위한 Autotuning 기법의 Surrogate Model 선택 🇰🇷 Domestic Conference
한국컴퓨터종합학술대회 (KCC 2024), 한국정보과학회, 2024.
@conference{kcc2024-1,
title = {HPC 작업 수행 최적화를 위한 Autotuning 기법의 Surrogate Model 선택},
author = {안성배 and 이재현 and 박종원 and C. Sergio Paulo and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11862282},
year = {2024},
date = {2024-06-27},
urldate = {2024-06-27},
booktitle = {한국컴퓨터종합학술대회 (KCC 2024)},
publisher = {한국정보과학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
이민서; 최지헌; 윤태영; 오상윤
스트리밍 기반의 고성능 데이터 파일 병합 기법 🇰🇷 Domestic Conference
한국컴퓨터종합학술대회 (KCC 2024), 한국정보과학회, 2024.
@conference{2024kcc-2,
title = {스트리밍 기반의 고성능 데이터 파일 병합 기법},
author = {이민서 and 최지헌 and 윤태영 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11862265},
year = {2024},
date = {2024-06-26},
urldate = {2024-06-26},
booktitle = {한국컴퓨터종합학술대회 (KCC 2024)},
publisher = {한국정보과학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
안성배; 이재현; 박보현; 오상윤
HPC 환경에서의 스케줄링을 위한 강화학습 및 휴리스틱 알고리즘의 비교 분석 🇰🇷 Domestic Conference
2024년도 한국통신학회 동계종합학술발표회, 한국통신학회, 2024.
@conference{kics2024-1,
title = {HPC 환경에서의 스케줄링을 위한 강화학습 및 휴리스틱 알고리즘의 비교 분석},
author = {안성배 and 이재현 and 박보현 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11737204},
year = {2024},
date = {2024-03-27},
urldate = {2024-03-27},
booktitle = {2024년도 한국통신학회 동계종합학술발표회},
publisher = {한국통신학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Paulo, C. Sergio; 유미리; 최지헌; 오상윤
Dynamic Programming-Based Multilevel Graph Partitioning for Large-Scale Graph Data 🇰🇷 Domestic Conference
2024년도 한국통신학회 동계종합학술발표회, 한국통신학회, 2024.
@conference{2024kics-2,
title = {Dynamic Programming-Based Multilevel Graph Partitioning for Large-Scale Graph Data},
author = {C. Sergio Paulo and 유미리 and 최지헌 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11737048},
year = {2024},
date = {2024-03-27},
booktitle = {2024년도 한국통신학회 동계종합학술발표회},
publisher = {한국통신학회},
abstract = {Multilevel graph algorithms are used to create optimal partitions for large graphs. However, the dynamic changes to the graph structures during partitioning lead to increased memory. These changes involve adding temporal data to arrays or queues during intermediary operations. To enhance efficiency and minimize memory usage, we integrated dynamic programming. Experimental results demonstrate the improved scalability and effectiveness of the proposed approach in terms of memory usage.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Yoon, Daegun; Oh, Sangyoon
Preserving Near-Optimal Gradient Sparsification Cost for Scalable Distributed Deep Learning 🌏 International Conference
The 24th IEEE/ACM international Symposium on Cluster, Cloud and Internet Computing (CCGrid 2024), 2024.
@conference{icpp2024yoon,
title = {Preserving Near-Optimal Gradient Sparsification Cost for Scalable Distributed Deep Learning},
author = {Daegun Yoon and Sangyoon Oh},
url = {https://arxiv.org/abs/2402.13781},
year = {2024},
date = {2024-02-13},
urldate = {2024-02-13},
booktitle = {The 24th IEEE/ACM international Symposium on Cluster, Cloud and Internet Computing (CCGrid 2024)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2023
Jung, Hyunseok; Choi, Jiheon; Park, Jongwon; Baek, Sehui; Oh, Sangyoon
Conditional LSTM-VAE-based Data Augmentation for Disaster Classification Prediction 🌏 International Conference
The 9th International Conference on Next Generation Computing 2023, 2023.
@conference{jung2023lstmvae,
title = {Conditional LSTM-VAE-based Data Augmentation for Disaster Classification Prediction},
author = {Hyunseok Jung and Jiheon Choi and Jongwon Park and Sehui Baek and Sangyoon Oh },
year = {2023},
date = {2023-11-24},
urldate = {2023-11-24},
booktitle = {The 9th International Conference on Next Generation Computing 2023},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Yu, Miri; Kwon, Oh-Kyoung; Oh, Sangyoon
Addressing Client Heterogeneity in Synchronous Federated Learning: The CHAFL Approach 🌏 International Conference
The 29th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2023), 2023.
Tags: federated learning
@conference{yu2023chafl,
title = {Addressing Client Heterogeneity in Synchronous Federated Learning: The CHAFL Approach},
author = {Miri Yu and Oh-Kyoung Kwon and Sangyoon Oh},
year = {2023},
date = {2023-11-10},
urldate = {2023-11-10},
booktitle = {The 29th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2023)},
keywords = {federated learning},
pubstate = {published},
tppubtype = {conference}
}
Yoon, Daegun; Oh, Sangyoon
MiCRO: Near-Zero Cost Gradient Sparsification for Scaling and Accelerating Distributed DNN Training 🌏 International Conference
30th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC 2023), 2023.
Tags: distributed deep learning, gradient sparsification
@conference{yoon2023micro,
title = {MiCRO: Near-Zero Cost Gradient Sparsification for Scaling and Accelerating Distributed DNN Training},
author = {Daegun Yoon and Sangyoon Oh},
url = {https://ieeexplore.ieee.org/abstract/document/10487098},
year = {2023},
date = {2023-10-02},
urldate = {2023-10-02},
booktitle = {30th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC 2023)},
keywords = {distributed deep learning, gradient sparsification},
pubstate = {published},
tppubtype = {conference}
}
Yoon, Daegun; Oh, Sangyoon
DEFT: Exploiting Gradient Norm Difference between Model Layers for Scalable Gradient Sparsification 🌏 International Conference
International Conference on Parallel Processing (ICPP) 2023, 2023.
Tags: distributed deep learning, gradient sparsification
@conference{yoon2023deft,
title = {DEFT: Exploiting Gradient Norm Difference between Model Layers for Scalable Gradient Sparsification},
author = {Daegun Yoon and Sangyoon Oh},
url = {https://dl.acm.org/doi/10.1145/3605573.3605609},
year = {2023},
date = {2023-08-07},
urldate = {2023-08-07},
booktitle = {International Conference on Parallel Processing (ICPP) 2023},
abstract = {Gradient sparsification is a widely adopted solution for reducing the excessive communication traffic in distributed deep learning. However, most existing gradient sparsifiers have relatively poor scalability because of considerable computational cost of gradient selection and/or increased communication traffic owing to gradient build-up. To address these challenges, we propose a novel gradient sparsification scheme, DEFT, that partitions the gradient selection task into sub tasks and distributes them to workers. DEFT differs from existing sparsifiers, wherein every worker selects gradients among all gradients. Consequently, the computational cost can be reduced as the number of workers increases. Moreover, gradient build-up can be eliminated because DEFT allows workers to select gradients in partitions that are non-intersecting (between workers). Therefore, even if the number of workers increases, the communication traffic can be maintained as per user requirement. To avoid the loss of significance of gradient selection, DEFT selects more gradients in the layers that have a larger gradient norm than the other layers. Because every layer has a different computational load, DEFT allocates layers to workers using a bin-packing algorithm to maintain a balanced load of gradient selection between workers. In our empirical evaluation, DEFT shows a significant improvement in training performance in terms of speed in gradient selection over existing sparsifiers while achieving high convergence performance.},
keywords = {distributed deep learning, gradient sparsification},
pubstate = {published},
tppubtype = {conference}
}
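The DEFT abstract above describes two mechanisms: giving layers with larger gradient norms a larger selection budget, and assigning whole layers to workers so that the selection load stays balanced. The sketch below is only a loose illustration of those two ideas using a greedy largest-first heuristic; it is not the authors' implementation, and the function name and density parameter are assumptions.

# Illustrative sketch of norm-proportional per-layer budgets plus greedy
# layer-to-worker allocation, in the spirit of the DEFT abstract above.
# Not the authors' code; names and parameters are assumptions.
import numpy as np

def allocate_layers(layer_grads, num_workers, density=0.01):
    total_params = sum(g.size for g in layer_grads)
    budget = int(density * total_params)                       # global top-k budget
    norms = np.array([np.linalg.norm(g) for g in layer_grads])
    per_layer_k = (budget * norms / norms.sum()).astype(int)   # larger norm -> larger budget

    # Greedy largest-first balancing of layers across workers (a simple
    # bin-packing-style heuristic): each layer goes to the least-loaded worker.
    loads = [0] * num_workers
    assignment = {w: [] for w in range(num_workers)}
    for layer in sorted(range(len(layer_grads)), key=lambda i: -layer_grads[i].size):
        w = loads.index(min(loads))
        assignment[w].append((layer, int(per_layer_k[layer])))
        loads[w] += layer_grads[layer].size
    return assignment                                          # worker -> [(layer_id, k), ...]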
유미리; 윤대건; 오상윤
연합학습 기법들의 성능평가를 지원하는 이기종 기반의 실험 플랫폼 설계 🇰🇷 Domestic Conference
2023년도 한국통신학회 하계종합학술발표회, 한국통신학회, 2023.
Tags: federated learning
@conference{yu2023flplatform,
title = {연합학습 기법들의 성능평가를 지원하는 이기종 기반의 실험 플랫폼 설계},
author = {유미리 and 윤대건 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11487802},
year = {2023},
date = {2023-06-21},
urldate = {2023-06-21},
booktitle = {2023년도 한국통신학회 하계종합학술발표회 },
organization = { 한국통신학회},
keywords = {federated learning},
pubstate = {published},
tppubtype = {conference}
}
이재현; 정현석; 오상윤
VM 배치를 위한 DDQN 기반 태스크 스케줄링 알고리즘 🇰🇷 Domestic Conference
2023년도 한국통신학회 하계종합학술발표회, 한국통신학회, 2023.
Tags: deep reinforcement learning
@conference{lee2023ddqn,
title = {VM 배치를 위한 DDQN 기반 태스크 스케줄링 알고리즘},
author = {이재현 and 정현석 and 오상윤 },
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11487081},
year = {2023},
date = {2023-06-21},
urldate = {2023-06-21},
booktitle = {2023년도 한국통신학회 하계종합학술발표회 },
organization = {한국통신학회},
keywords = {deep reinforcement learning},
pubstate = {published},
tppubtype = {conference}
}
Baek, Minseok; Paulo, C. Sergio; Oh, Sangyoon
Analysis of the In-Memory Checkpointing Approach in Apache Flink 🇰🇷 Domestic Conference
2023년도 한국통신학회 하계종합학술발표회, 한국통신학회, 2023.
@conference{baek2023flink,
title = {Analysis of the In-Memory Checkpointing Approach in Apache Flink},
author = {Minseok Baek and C. Sergio Paulo and Sangyoon Oh},
url = {https://www.dbpia.co.kr/pdf/pdfView.do?nodeId=NODE11487634},
year = {2023},
date = {2023-06-21},
urldate = {2023-06-21},
booktitle = {2023년도 한국통신학회 하계종합학술발표회 },
organization = { 한국통신학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
정현석; 최지헌; 박종원; 백세희; 오상윤
화재 감지 시스템을 위한 MLOps 시스템 구조 🇰🇷 Domestic Conference
2023 한국차세대컴퓨팅학회 춘계학술대회, 한국차세대컴퓨팅학회, 2023.
Tags: edge computing, MLOps
@conference{MLOps,
title = {화재 감지 시스템을 위한 MLOps 시스템 구조},
author = {정현석 and 최지헌 and 박종원 and 백세희 and 오상윤 },
url = {https://www.earticle.net/Article/A433574},
year = {2023},
date = {2023-05-31},
urldate = {2023-05-31},
booktitle = {2023 한국차세대컴퓨팅학회 춘계학술대회 },
pages = {313-315},
organization = {한국차세대컴퓨팅학회 },
keywords = {edge computing, MLOps},
pubstate = {published},
tppubtype = {conference}
}
Lee, Seungjun; Yu, Miri; Yoon, Daegun; Oh, Sangyoon
Can hierarchical client clustering mitigate the data heterogeneity effect in federated learning? 🌏 International Conference
2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 2023, ISBN: 979-8-3503-1200-3.
Tags: federated learning
@conference{lee2023ccfed,
title = {Can hierarchical client clustering mitigate the data heterogeneity effect in federated learning?},
author = {Seungjun Lee and Miri Yu and Daegun Yoon and Sangyoon Oh},
url = {https://doi.org/10.1109/IPDPSW59300.2023.00134},
doi = {10.1109/IPDPSW59300.2023.00134},
isbn = {979-8-3503-1200-3},
year = {2023},
date = {2023-05-15},
urldate = {2023-05-15},
booktitle = {2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)},
abstract = {Federated learning (FL) was proposed for training a deep neural network model using millions of user data. The technique has attracted considerable attention owing to its privacy-preserving characteristic. However, two major challenges exist. The first is the limitation of simultaneously participating clients. If the number of clients increases, the single parameter server easily becomes a bottleneck and is prone to have stragglers. The second is data heterogeneity, which adversely affects the accuracy of the global model. Because data should remain at user devices to preserve privacy, we cannot use data shuffling, which is used to homogenize training data in traditional distributed deep learning. We propose a client clustering and model aggregation method, CCFed, to increase the number of simultaneously participating clients and mitigate the data heterogeneity problem. CCFed improves the learning performance using set partition modeling to let data be evenly distributed between clusters and mitigate the effect of a non-IID environment. Experiments show that we can achieve a 2.7-14% higher accuracy using CCFed compared with FedAvg, where CCFed requires approximately 50% less number of rounds compared with FedAvg training on benchmark datasets.},
keywords = {federated learning},
pubstate = {published},
tppubtype = {conference}
}
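The abstract above frames client clustering as a set partition problem so that each cluster's combined data is close to evenly distributed. As a rough illustration only (a greedy heuristic with assumed names, not the CCFed algorithm from the paper), clients could be assigned one by one to whichever cluster's combined label distribution ends up closest to uniform:

# Greedy client-clustering sketch, illustrative only; the paper's CCFed method
# uses set partition modeling and may differ substantially.
import numpy as np

def cluster_clients(label_hists, num_clusters):
    # label_hists: (num_clients, num_classes) array of per-client label counts.
    num_clients, num_classes = label_hists.shape
    cluster_hist = np.zeros((num_clusters, num_classes))
    clusters = [[] for _ in range(num_clusters)]
    uniform = np.full(num_classes, 1.0 / num_classes)

    # Assign clients with the most samples first; each joins the cluster whose
    # combined label distribution becomes closest to uniform after adding it.
    for c in np.argsort(-label_hists.sum(axis=1)):
        gaps = []
        for k in range(num_clusters):
            combined = cluster_hist[k] + label_hists[c]
            gaps.append(np.abs(combined / combined.sum() - uniform).sum())
        best = int(np.argmin(gaps))
        clusters[best].append(int(c))
        cluster_hist[best] += label_hists[c]
    return clusters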