Citing Our Papers
Each paper below comes with a generated BibTeX entry, which you can use to produce a citation in whatever style you need.
Copy the generated BibTeX code and convert it into a plain citation string with a BibTeX parser; the conversion can also be done on the web, for example with the site linked below.
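As a concrete illustration, the same conversion can be scripted locally. The sketch below is only an example of the workflow described above: it assumes the third-party Python package bibtexparser (not something this page provides) and reuses the HPC Asia 2025 entry from the list as sample input.

# Minimal sketch: turn a copied BibTeX entry into a plain citation string.
# Assumes the third-party "bibtexparser" package (pip install bibtexparser).
import bibtexparser

bibtex = """
@conference{choi-hpc-active,
  title     = {When HPC Scheduling Meets Active Learning: Maximizing The Performance with Minimal Data},
  author    = {Jiheon Choi and Jaehyun Lee and Taeyoung Yoon and Minsol Choo and Oh-Kyoung Kwon and Sangyoon Oh},
  booktitle = {The International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2025)},
  year      = {2025},
}
"""

entry = bibtexparser.loads(bibtex).entries[0]        # first (and only) entry as a dict
authors = ", ".join(entry["author"].split(" and "))  # "A and B and C" -> "A, B, C"
print(f'{authors}. "{entry["title"]}." {entry["booktitle"]}, {entry["year"]}.')

Reference managers such as Zotero or JabRef will also import the copied BibTeX entries directly.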
2025
양채원; 추민솔; 오상윤
자율주행 시계열 데이터에 대한 그래프 모델링 기반 최적 경로 탐색 방법 Conference
2025년도 한국인터넷정보학회 춘계학술발표대회 논문집 제26권 1호, 2025.
@conference{KSII-Spring-Conference-2025,
title = {자율주행 시계열 데이터에 대한 그래프 모델링 기반 최적 경로 탐색 방법},
author = {양채원 and 추민솔 and 오상윤},
year = {2025},
date = {2025-04-24},
booktitle = {2025년도 한국인터넷정보학회 춘계학술발표대회 논문집 제26권 1호},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Choi, Jiheon; Lee, Jaehyun; Yoon, Taeyoung; Choo, Minsol; Kwon, Oh-Kyoung; Oh, Sangyoon
When HPC Scheduling Meets Active Learning: Maximizing The Performance with Minimal Data Conference
The International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2025), 2025.
@conference{choi-hpc-active,
title = {When HPC Scheduling Meets Active Learning: Maximizing The Performance with Minimal Data},
author = {Jiheon Choi and Jaehyun Lee and Taeyoung Yoon and Minsol Choo and Oh-Kyoung Kwon and Sangyoon Oh},
url = {https://dl.acm.org/doi/full/10.1145/3712031.3712334},
year = {2025},
date = {2025-02-20},
urldate = {2025-02-20},
booktitle = {The International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2025)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
추민솔; 윤태영; 오상윤
다중 목표 최적화를 위한 Autotuning 프레임워크 벤치마크 Conference
2025년도 한국통신학회 동계종합학술발표회, 한국통신학회, 2025.
@conference{KICS-Winter-Conference-2025-1,
title = {다중 목표 최적화를 위한 Autotuning 프레임워크 벤치마크},
author = {추민솔 and 윤태영 and 오상윤},
url = {https://dbpia.co.kr/journal/articleDetail?nodeId=NODE12132445},
year = {2025},
date = {2025-02-06},
urldate = {2025-02-06},
booktitle = {2025년도 한국통신학회 동계종합학술발표회},
publisher = {한국통신학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
윤석현; 정재윤; 안성배; 오상윤
효율적 동적그래프 처리를 위한 중요도 기반 Event 선별 기법 Conference
2025년도 한국통신학회 동계종합학술발표회, 한국통신학회, 2025.
@conference{KICS-Winter-Conference-2025-2,
title = {효율적 동적그래프 처리를 위한 중요도 기반 Event 선별 기법},
author = {윤석현 and 정재윤 and 안성배 and 오상윤},
url = {https://dbpia.co.kr/journal/articleDetail?nodeId=NODE12132829},
year = {2025},
date = {2025-02-06},
urldate = {2025-02-06},
booktitle = {2025년도 한국통신학회 동계종합학술발표회},
publisher = {한국통신학회},
keywords = {graph},
pubstate = {published},
tppubtype = {conference}
}
이재현; 오상윤
HPC를 위한 공유 데이터 레포지토리: 통신 프로토콜과 데이터 베이스 조합의 처리량 분석 Conference
2025년도 한국통신학회 동계종합학술발표회, 한국통신학회, 2025.
@conference{KICS-Winter-Conference-2025-3,
title = {HPC를 위한 공유 데이터 레포지토리: 통신 프로토콜과 데이터 베이스 조합의 처리량 분석},
author = {이재현 and 오상윤},
url = {https://dbpia.co.kr/journal/articleDetail?nodeId=NODE12132830},
year = {2025},
date = {2025-02-06},
urldate = {2025-02-06},
booktitle = {2025년도 한국통신학회 동계종합학술발표회},
publisher = {한국통신학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
최지헌; 오상윤
대규모 언어 모델 RAG 시스템의 벡터 인덱싱 확장성 Conference
2025년도 한국통신학회 동계종합학술발표회, 한국통신학회, 2025.
@conference{KICS-Winter-Conference-2025-4,
title = {대규모 언어 모델 RAG 시스템의 벡터 인덱싱 확장성},
author = {최지헌 and 오상윤},
url = {https://dbpia.co.kr/journal/articleDetail?nodeId=NODE12132022},
year = {2025},
date = {2025-02-06},
urldate = {2025-02-06},
booktitle = {2025년도 한국통신학회 동계종합학술발표회},
publisher = {한국통신학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2024
정재윤; 안성배; 오상윤
WSGI 웹 애플리케이션 서버 성능 최적화를 위한 BO기반 튜닝 기법 Conference
한국소프트웨어종합학술대회 (KSC2024), 한국정보과학회, 2024.
@conference{ksc2024-winter-1,
title = {WSGI 웹 애플리케이션 서버 성능 최적화를 위한 BO기반 튜닝 기법},
author = {정재윤 and 안성배 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE12041798},
year = {2024},
date = {2024-12-18},
urldate = {2024-12-18},
booktitle = {한국소프트웨어종합학술대회 (KSC2024)},
publisher = {한국정보과학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
윤태영; 최지헌; 권오경; 오상윤
HPC 환경에서 데이터 불균형을 고려한 작업 응용 예측 기법 Conference
한국소프트웨어종합학술대회 (KSC2024), 한국정보과학회, 2024.
@conference{ksc2024-winter-2,
title = {HPC 환경에서 데이터 불균형을 고려한 작업 응용 예측 기법},
author = {윤태영 and 최지헌 and 권오경 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE12042232},
year = {2024},
date = {2024-12-18},
urldate = {2024-12-18},
booktitle = {한국소프트웨어종합학술대회 (KSC2024)},
publisher = {한국정보과학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
추민솔; 윤석현; 이재현; 오상윤
TPE를 적용한 ytopt 기반의 HPC 응용 Autotuning 기법 Conference
한국소프트웨어종합학술대회 (KSC2024), 한국정보과학회, 2024.
@conference{ksc2024-winter-3,
title = {TPE를 적용한 ytopt 기반의 HPC 응용 Autotuning 기법},
author = {추민솔 and 윤석현 and 이재현 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE12042231},
year = {2024},
date = {2024-12-18},
urldate = {2024-12-18},
booktitle = {한국소프트웨어종합학술대회 (KSC2024)},
publisher = {한국정보과학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Park, Sangjun; Kim, Youngjoo; Oh, Sangyoon; Jeong, Chanki
Robust Bare-Bone CNN Applying for Tactical Mobile Edge Devices Journal Article
In: IEEE Access, vol. 12, pp. 122671 – 122683, 2024, ISSN: 2169-3536.
@article{park2024barebone,
title = {Robust Bare-Bone CNN Applying for Tactical Mobile Edge Devices},
author = {Sangjun Park and Youngjoo Kim and Sangyoon Oh and Chanki Jeong},
url = {https://ieeexplore.ieee.org/document/10639408},
doi = {10.1109/ACCESS.2024.3445911},
issn = {2169-3536},
year = {2024},
date = {2024-08-19},
urldate = {2024-08-19},
journal = {IEEE Access},
volume = {12},
pages = {122671--122683},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yu, Miri; Choi, Jiheon; Lee, Jaehyun; Oh, Sangyoon
Staleness Aware Semi-asynchronous Federated Learning Journal Article
In: Journal of Parallel and Distributed Computing, 2024.
@article{miri2024staleness,
title = {Staleness Aware Semi-asynchronous Federated Learning},
author = {Miri Yu and Jiheon Choi and Jaehyun Lee and Sangyoon Oh},
url = {https://www.sciencedirect.com/science/article/pii/S074373152400114X},
year = {2024},
date = {2024-07-01},
urldate = {2024-07-01},
journal = {Journal of Parallel and Distributed Computing},
abstract = {As the attempts to distribute deep learning using personal data have increased, the importance of federated learning (FL) has also increased. Attempts have been made to overcome the core challenges of federated learning (i.e., statistical and system heterogeneity) using synchronous or asynchronous protocols. However, stragglers reduce training efficiency in terms of latency and accuracy in each protocol, respectively. To solve straggler issues, a semi-asynchronous protocol that combines the two protocols can be applied to FL; however, effectively handling the staleness of the local model is a difficult problem. We proposed SASAFL to solve the training inefficiency caused by staleness in semi-asynchronous FL. SASAFL enables stable training by considering the quality of the global model to synchronise the servers and clients. In addition, it achieves high accuracy and low latency by adjusting the number of participating clients in response to changes in global loss and immediately processing clients that did not participate in the previous round. An evaluation was conducted under various conditions to verify the effectiveness of SASAFL. SASAFL achieved 19.69%p higher accuracy than the baseline, 2.32 times higher round-to-accuracy and 2.24 times higher latency-to-accuracy. Additionally, SASAFL always achieved target accuracy that the baseline can't reach.},
keywords = {federated learning},
pubstate = {published},
tppubtype = {article}
}
안성배; 이재현; 박종원; Paulo, C. Sergio; 오상윤
HPC 작업 수행 최적화를 위한 Autotuning 기법의 Surrogate Model 선택 Conference
한국컴퓨터종합학술대회 (KCC 2024), 한국정보과학회, 2024.
@conference{kcc2024-1,
title = {HPC 작업 수행 최적화를 위한 Autotuning 기법의 Surrogate Model 선택},
author = {안성배 and 이재현 and 박종원 and C. Sergio Paulo and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11862282},
year = {2024},
date = {2024-06-27},
urldate = {2024-06-27},
booktitle = {한국컴퓨터종합학술대회 (KCC 2024)},
publisher = {한국정보과학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
이민서; 최지헌; 윤태영; 오상윤
스트리밍 기반의 고성능 데이터 파일 병합 기법 Conference
한국컴퓨터종합학술대회 (KCC 2024), 한국정보과학회, 2024.
@conference{2024kcc-2,
title = {스트리밍 기반의 고성능 데이터 파일 병합 기법},
author = {이민서 and 최지헌 and 윤태영 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11862265},
year = {2024},
date = {2024-06-26},
urldate = {2024-06-26},
booktitle = {한국컴퓨터종합학술대회 (KCC 2024)},
publisher = {한국정보과학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
안성배; 이재현; 박보현; 오상윤
HPC 환경에서의 스케줄링을 위한 강화학습 및 휴리스틱 알고리즘의 비교 분석 Conference
2024년도 한국통신학회 동계종합학술발표회, 한국통신학회, 2024.
@conference{kics2024-1,
title = {HPC 환경에서의 스케줄링을 위한 강화학습 및 휴리스틱 알고리즘의 비교 분석},
author = {안성배 and 이재현 and 박보현 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11737204},
year = {2024},
date = {2024-03-27},
urldate = {2024-03-27},
booktitle = {2024년도 한국통신학회 동계종합학술발표회},
publisher = {한국통신학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Paulo, C. Sergio; 유미리; 최지헌; 오상윤
Dynamic Programming-Based Multilevel Graph Partitioning for Large-Scale Graph Data Conference
2024년도 한국통신학회 동계종합학술발표회, 한국통신학회, 2024.
@conference{2024kics-2,
title = {Dynamic Programming-Based Multilevel Graph Partitioning for Large-Scale Graph Data},
author = {C. Sergio Paulo and 유미리 and 최지헌 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11737048},
year = {2024},
date = {2024-03-27},
booktitle = {2024년도 한국통신학회 동계종합학술발표회},
publisher = {한국통신학회},
abstract = {Multilevel graph algorithms are used to create optimal partitions for large graphs. However, the dynamic changes to the graph structures during partitioning lead to increased memory. These changes involve adding temporal data to arrays or queues during intermediary operations. To enhance efficiency and minimize memory usage, we integrated dynamic programming. Experimental results demonstrate the improved scalability and effectiveness of the proposed approach in terms of memory usage.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Yoon, Daegun; Oh, Sangyoon
Preserving Near-Optimal Gradient Sparsification Cost for Scalable Distributed Deep Learning Conference
The 24th IEEE/ACM international Symposium on Cluster, Cloud and Internet Computing (CCGrid 2024), 2024.
@conference{ccgrid2024yoon,
title = {Preserving Near-Optimal Gradient Sparsification Cost for Scalable Distributed Deep Learning},
author = {Daegun Yoon and Sangyoon Oh},
url = {https://arxiv.org/abs/2402.13781},
year = {2024},
date = {2024-02-13},
urldate = {2024-02-13},
booktitle = {The 24th IEEE/ACM international Symposium on Cluster, Cloud and Internet Computing (CCGrid 2024)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2023
Jung, Hyunseok; Choi, Jiheon; Park, Jongwon; Baek, Sehui; Oh, Sangyoon
Conditional LSTM-VAE-based Data Augmentation for Disaster Classification Prediction Conference
The 9th International Conference on Next Generation Computing 2023, 2023.
@conference{jung2023lstmvae,
title = {Conditional LSTM-VAE-based Data Augmentation for Disaster Classification Prediction},
author = {Hyunseok Jung and Jiheon Choi and Jongwon Park and Sehui Baek and Sangyoon Oh},
url = {https://www.earticle.net/Article/A448155},
year = {2023},
date = {2023-11-24},
urldate = {2023-11-24},
booktitle = {The 9th International Conference on Next Generation Computing 2023},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Yu, Miri; Kwon, Oh-Kyoung; Oh, Sangyoon
Addressing Client Heterogeneity in Synchronous Federated Learning: The CHAFL Approach Conference
The 29th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2023), 2023.
@conference{yu2023chafl,
title = {Addressing Client Heterogeneity in Synchronous Federated Learning: The CHAFL Approach},
author = {Miri Yu and Oh-Kyoung Kwon and Sangyoon Oh},
year = {2023},
date = {2023-11-10},
urldate = {2023-11-10},
booktitle = {The 29th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2023)},
keywords = {federated learning},
pubstate = {published},
tppubtype = {conference}
}
Yoon, Daegun; Oh, Sangyoon
MiCRO: Near-Zero Cost Gradient Sparsification for Scaling and Accelerating Distributed DNN Training Conference
30th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC 2023), 2023.
@conference{yoon2023micro,
title = {MiCRO: Near-Zero Cost Gradient Sparsification for Scaling and Accelerating Distributed DNN Training},
author = {Daegun Yoon and Sangyoon Oh},
url = {https://ieeexplore.ieee.org/abstract/document/10487098},
year = {2023},
date = {2023-10-02},
urldate = {2023-10-02},
booktitle = {30th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC 2023)},
keywords = {distributed deep learning, gradient sparsification},
pubstate = {published},
tppubtype = {conference}
}
Yoon, Daegun; Oh, Sangyoon
DEFT: Exploiting Gradient Norm Difference between Model Layers for Scalable Gradient Sparsification Conference
International Conference on Parallel Processing (ICPP) 2023, 2023.
@conference{yoon2023deft,
title = {DEFT: Exploiting Gradient Norm Difference between Model Layers for Scalable Gradient Sparsification},
author = {Daegun Yoon and Sangyoon Oh},
url = {https://dl.acm.org/doi/10.1145/3605573.3605609},
year = {2023},
date = {2023-08-07},
urldate = {2023-08-07},
booktitle = {International Conference on Parallel Processing (ICPP) 2023},
abstract = {Gradient sparsification is a widely adopted solution for reducing the excessive communication traffic in distributed deep learning. However, most existing gradient sparsifiers have relatively poor scalability because of considerable computational cost of gradient selection and/or increased communication traffic owing to gradient build-up. To address these challenges, we propose a novel gradient sparsification scheme, DEFT, that partitions the gradient selection task into subtasks and distributes them to workers. DEFT differs from existing sparsifiers, wherein every worker selects gradients among all gradients. Consequently, the computational cost can be reduced as the number of workers increases. Moreover, gradient build-up can be eliminated because DEFT allows workers to select gradients in partitions that are non-intersecting (between workers). Therefore, even if the number of workers increases, the communication traffic can be maintained as per user requirement. To avoid the loss of significance of gradient selection, DEFT selects more gradients in the layers that have a larger gradient norm than the other layers. Because every layer has a different computational load, DEFT allocates layers to workers using a bin-packing algorithm to maintain a balanced load of gradient selection between workers. In our empirical evaluation, DEFT shows a significant improvement in training performance in terms of speed in gradient selection over existing sparsifiers while achieving high convergence performance.},
keywords = {distributed deep learning, gradient sparsification},
pubstate = {published},
tppubtype = {conference}
}