오상윤
Sangyoon Oh
syoh at ajou.ac.kr
Research interests
Distributed Computing, Grid/Cloud Computing, Data-Intensive Computing, Web Systems
Introduction
Sangyoon Oh is a Professor in the Department of Software at Ajou University, Republic of Korea. Prior to joining Ajou, he worked at SK Telecom from 2006 to 2007. He received his Ph.D. in Computer Science from Indiana University (IU) Bloomington (advisor: Dr. Geoffrey C. Fox).
Publications
2024
Park, Sangjun; Kim, Youngjoo; Oh, Sangyoon; Jeong, Chanki
Robust Bare-Bone CNN Applying for Tactical Mobile Edge Devices🌏 International Journal Article
In: IEEE Access, pp. 122671-122683, 2024, ISSN: 2169-3536.
@article{park2024robust,
title = {Robust Bare-Bone CNN Applying for Tactical Mobile Edge Devices},
author = {Sangjun Park and Youngjoo Kim and Sangyoon Oh and Chanki Jeong},
url = {https://ieeexplore.ieee.org/document/10639408},
doi = {10.1109/ACCESS.2024.3445911},
issn = {2169-3536},
year = {2024},
date = {2024-08-19},
urldate = {2024-08-19},
journal = {IEEE Access},
pages = {122671 - 122683},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yu, Miri; Choi, Jiheon; Lee, Jaehyun; Oh, Sangyoon
Staleness Aware Semi-asynchronous Federated Learning🌏 International Journal Article
In: Journal of Parallel and Distributed Computing, 2024.
@article{miri2024staleness,
title = {Staleness Aware Semi-asynchronous Federated Learning},
author = {Miri Yu and Jiheon Choi and Jaehyun Lee and Sangyoon Oh},
url = {https://www.sciencedirect.com/science/article/pii/S074373152400114X},
year = {2024},
date = {2024-07-01},
urldate = {2024-07-01},
journal = {Journal of Parallel and Distributed Computing},
abstract = {As the attempts to distribute deep learning using personal data have increased, the importance of federated learning (FL) has also increased. Attempts have been made to overcome the core challenges of federated learning (i.e., statistical and system heterogeneity) using synchronous or asynchronous protocols. However, stragglers reduce training efficiency in terms of latency and accuracy in each protocol, respectively. To solve straggler issues, a semi-asynchronous protocol that combines the two protocols can be applied to FL; however, effectively handling the staleness of the local model is a difficult problem. We proposed SASAFL to solve the training inefficiency caused by staleness in semi-asynchronous FL. SASAFL enables stable training by considering the quality of the global model to synchronise the servers and clients. In addition, it achieves high accuracy and low latency by adjusting the number of participating clients in response to changes in global loss and immediately processing clients that did not to participate in the previous round. An evaluation was conducted under various conditions to verify the effectiveness of the SASAFL. SASAFL achieved 19.69%p higher accuracy than the baseline, 2.32 times higher round-to-accuracy and 2.24 times higher latency-to-accuracy. Additionally, SASAFL always achieved target accuracy that the baseline can't reach.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
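The staleness handling that SASAFL targets can be illustrated with a minimal sketch (our own simplification, not the paper's code): client updates computed against an old global model are merged with weights that decay in their staleness. The function names and the polynomial decay rule are assumptions for illustration.

```python
# Illustrative staleness-aware aggregation for semi-asynchronous federated
# learning. A client update computed at round t_client but applied at round
# t_now is down-weighted according to its staleness (t_now - t_client).

def staleness_weight(t_now, t_client, alpha=0.5):
    """Polynomial decay: weight = (staleness + 1) ** -alpha."""
    staleness = t_now - t_client
    return (staleness + 1) ** -alpha

def aggregate(global_model, updates, t_now, lr=1.0):
    """Apply a list of (t_client, delta) updates to a scalar 'model'.

    Real models are tensors; a scalar keeps the sketch minimal.
    """
    total_w = sum(staleness_weight(t_now, t) for t, _ in updates)
    merged = sum(staleness_weight(t_now, t) * d for t, d in updates) / total_w
    return global_model + lr * merged
```

Fresh updates (staleness 0) keep full weight, while stale ones contribute proportionally less, which is the intuition behind mitigating straggler-induced accuracy loss.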
안성배; 이재현; 박종원; Paulo, C. Sergio; 오상윤
Surrogate Model Selection for Autotuning to Optimize HPC Job Execution🇰🇷 Domestic Conference
Korea Computer Congress (KCC 2024), Korean Institute of Information Scientists and Engineers (KIISE), 2024.
@conference{ahn2024autotuning,
title = {HPC 작업 수행 최적화를 위한 Autotuning 기법의 Surrogate Model 선택},
author = {안성배 and 이재현 and 박종원 and C. Sergio Paulo and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11862282},
year = {2024},
date = {2024-06-27},
urldate = {2024-06-27},
booktitle = {한국컴퓨터종합학술대회 (KCC 2024)},
publisher = {한국정보과학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
이민서; 최지헌; 윤태영; 오상윤
A Streaming-Based High-Performance Data File Merging Technique🇰🇷 Domestic Conference
Korea Computer Congress (KCC 2024), Korean Institute of Information Scientists and Engineers (KIISE), 2024.
@conference{2024kcc-stream,
title = {스트리밍 기반의 고성능 데이터 파일 병합 기법},
author = {이민서 and 최지헌 and 윤태영 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11862265},
year = {2024},
date = {2024-06-26},
urldate = {2024-06-26},
booktitle = {한국컴퓨터종합학술대회 (KCC 2024)},
publisher = {한국정보과학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
안성배; 이재현; 박보현; 오상윤
A Comparative Analysis of Reinforcement Learning and Heuristic Algorithms for Scheduling in HPC Environments🇰🇷 Domestic Conference
2024 KICS Winter Conference, Korean Institute of Communications and Information Sciences (KICS), 2024.
@conference{kics2024-1,
title = {HPC 환경에서의 스케줄링을 위한 강화학습 및 휴리스틱 알고리즘의 비교 분석},
author = {안성배 and 이재현 and 박보현 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11737204},
year = {2024},
date = {2024-03-27},
urldate = {2024-03-27},
booktitle = {2024년도 한국통신학회 동계종합학술발표회},
publisher = {한국통신학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Paulo, C. Sergio; 유미리; 최지헌; 오상윤
Dynamic Programming-Based Multilevel Graph Partitioning for Large-Scale Graph Data🇰🇷 Domestic Conference
2024 KICS Winter Conference, Korean Institute of Communications and Information Sciences (KICS), 2024.
@conference{2024kics-2,
title = {Dynamic Programming-Based Multilevel Graph Partitioning for Large-Scale Graph Data},
author = {C. Sergio Paulo and 유미리 and 최지헌 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11737048},
year = {2024},
date = {2024-03-27},
booktitle = {2024년도 한국통신학회 동계종합학술발표회},
publisher = {한국통신학회},
abstract = {Multilevel graph algorithms are used to create optimal partitions for large graphs. However, the dynamic changes to the graph structures during partitioning lead to increased memory. These changes involve adding temporal data to arrays or queues during intermediary operations. To enhance efficiency and minimize memory usage, we integrated dynamic programming. Experimental results demonstrate the improved scalability and effectiveness of the proposed approach in terms of memory usage.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Yoon, Daegun; Oh, Sangyoon
Preserving Near-Optimal Gradient Sparsification Cost for Scalable Distributed Deep Learning🌏 International Conference
The 24th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGrid 2024), 2024.
@conference{ccgrid2024yoon,
title = {Preserving Near-Optimal Gradient Sparsification Cost for Scalable Distributed Deep Learning},
author = {Daegun Yoon and Sangyoon Oh},
url = {https://arxiv.org/abs/2402.13781},
year = {2024},
date = {2024-02-13},
urldate = {2024-02-13},
booktitle = {The 24th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGrid 2024)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2023
Jung, Hyunseok; Choi, Jiheon; Park, Jongwon; Baek, Sehui; Oh, Sangyoon
Conditional LSTM-VAE-based Data Augmentation for Disaster Classification Prediction🌏 International Conference
The 9th International Conference on Next Generation Computing 2023, 2023.
@conference{jung2023cvae,
title = {Conditional LSTM-VAE-based Data Augmentation for Disaster Classification Prediction},
author = {Hyunseok Jung and Jiheon Choi and Jongwon Park and Sehui Baek and Sangyoon Oh },
year = {2023},
date = {2023-11-24},
urldate = {2023-11-24},
booktitle = {The 9th International Conference on Next Generation Computing 2023},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Yu, Miri; Kwon, Oh-Kyoung; Oh, Sangyoon
Addressing Client Heterogeneity in Synchronous Federated Learning: The CHAFL Approach🌏 International Conference
The 29th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2023), 2023.
@conference{yu2023chafl,
title = {Addressing Client Heterogeneity in Synchronous Federated Learning: The CHAFL Approach},
author = {Miri Yu and Oh-Kyoung Kwon and Sangyoon Oh},
year = {2023},
date = {2023-11-10},
urldate = {2023-11-10},
booktitle = {The 29th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2023)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Yoon, Daegun; Oh, Sangyoon
MiCRO: Near-Zero Cost Gradient Sparsification for Scaling and Accelerating Distributed DNN Training🌏 International Conference
30th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC 2023), 2023.
@conference{yoon2023micro,
title = {MiCRO: Near-Zero Cost Gradient Sparsification for Scaling and Accelerating Distributed DNN Training},
author = {Daegun Yoon and Sangyoon Oh},
url = {https://ieeexplore.ieee.org/abstract/document/10487098},
year = {2023},
date = {2023-10-02},
urldate = {2023-10-02},
booktitle = {30th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC 2023)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Yoon, Daegun; Oh, Sangyoon
DEFT: Exploiting Gradient Norm Difference between Model Layers for Scalable Gradient Sparsification🌏 International Conference
International Conference on Parallel Processing (ICPP) 2023, 2023.
@conference{yoon2023deft,
title = {DEFT: Exploiting Gradient Norm Difference between Model Layers for Scalable Gradient Sparsification},
author = {Daegun Yoon and Sangyoon Oh},
url = {https://dl.acm.org/doi/10.1145/3605573.3605609},
year = {2023},
date = {2023-08-07},
urldate = {2023-08-07},
booktitle = {International Conference on Parallel Processing (ICPP) 2023},
abstract = {Gradient sparsification is a widely adopted solution for reducing
the excessive communication traffic in distributed deep learning.
However, most existing gradient sparsifiers have relatively poor
scalability because of considerable computational cost of gradient
selection and/or increased communication traffic owing to gradient
build-up. To address these challenges, we propose a novel gradient
sparsification scheme, DEFT, that partitions the gradient selection
task into sub tasks and distributes them to workers. DEFT differs
from existing sparsifiers, wherein every worker selects gradients
among all gradients. Consequently, the computational cost can
be reduced as the number of workers increases. Moreover, gradient build-up can be eliminated because DEFT allows workers to
select gradients in partitions that are non-intersecting (between
workers). Therefore, even if the number of workers increases, the
communication traffic can be maintained as per user requirement.
To avoid the loss of significance of gradient selection, DEFT
selects more gradients in the layers that have a larger gradient
norm than the other layers. Because every layer has a different
computational load, DEFT allocates layers to workers using a binpacking algorithm to maintain a balanced load of gradient selection
between workers. In our empirical evaluation, DEFT shows a significant improvement in training performance in terms of speed
in gradient selection over existing sparsifiers while achieving high
convergence performance.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
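The partitioning idea in the DEFT abstract — each worker selects gradients only within its own disjoint set of layers, with layers assigned to balance the selection load — can be sketched as follows. The greedy largest-first assignment stands in for the paper's bin-packing step, and all names here are hypothetical.

```python
# Sketch of disjoint gradient-selection partitions: layers are assigned to
# workers greedily (largest layer first, into the currently lightest worker),
# so each worker runs top-k only on its own layers and the selected index
# sets never intersect, avoiding gradient build-up.

def assign_layers(layer_sizes, num_workers):
    """Return a list: worker -> list of layer ids, balancing total elements."""
    bins = [[] for _ in range(num_workers)]
    loads = [0] * num_workers
    for lid in sorted(range(len(layer_sizes)), key=lambda i: -layer_sizes[i]):
        w = loads.index(min(loads))  # lightest worker so far
        bins[w].append(lid)
        loads[w] += layer_sizes[lid]
    return bins
```

Because the partitions are non-intersecting, the per-worker selection cost shrinks as workers are added, which is the scalability argument of the abstract.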
유미리; 윤대건; 오상윤
Design of a Heterogeneous Experimental Platform Supporting Performance Evaluation of Federated Learning Methods🇰🇷 Domestic Conference
2023 KICS Summer Conference, Korean Institute of Communications and Information Sciences (KICS), 2023.
@conference{연합학습기법들의성능평가를지원하는이기종기반의실험플랫폼설계,
title = {연합학습 기법들의 성능평가를 지원하는 이기종 기반의 실험 플랫폼 설계},
author = {유미리 and 윤대건 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11487802},
year = {2023},
date = {2023-06-21},
urldate = {2023-06-21},
booktitle = {2023년도 한국통신학회 하계종합학술발표회 },
organization = { 한국통신학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
이재현; 정현석; 오상윤
A DDQN-Based Task Scheduling Algorithm for VM Placement🇰🇷 Domestic Conference
2023 KICS Summer Conference, Korean Institute of Communications and Information Sciences (KICS), 2023.
@conference{lee2023ddqn,
title = {VM 배치를 위한 DDQN 기반 태스크 스케줄링 알고리즘},
author = {이재현 and 정현석 and 오상윤 },
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11487081},
year = {2023},
date = {2023-06-21},
urldate = {2023-06-21},
booktitle = {2023년도 한국통신학회 하계종합학술발표회 },
organization = {한국통신학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Baek, Minseok; Paulo, C. Sergio; Oh, Sangyoon
Analysis of the In-Memory Checkpointing Approach in Apache Flink🇰🇷 Domestic Conference
2023 KICS Summer Conference, Korean Institute of Communications and Information Sciences (KICS), 2023.
@conference{baek2023flink,
title = {Analysis of the In-Memory Checkpointing Approach in Apache Flink},
author = {Minseok Baek and C. Sergio Paulo and Sangyoon Oh},
url = {https://www.dbpia.co.kr/pdf/pdfView.do?nodeId=NODE11487634},
year = {2023},
date = {2023-06-21},
urldate = {2023-06-21},
booktitle = {2023년도 한국통신학회 하계종합학술발표회 },
organization = { 한국통신학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
정현석; 최지헌; 박종원; 백세희; 오상윤
An MLOps System Architecture for a Fire Detection System🇰🇷 Domestic Conference
2023 Korean Institute of Next Generation Computing Spring Conference, Korean Institute of Next Generation Computing, 2023.
@conference{MLOps,
title = {화재 감지 시스템을 위한 MLOps 시스템 구조},
author = {정현석 and 최지헌 and 박종원 and 백세희 and 오상윤 },
url = {https://www.earticle.net/Article/A433574},
year = {2023},
date = {2023-05-31},
urldate = {2023-05-31},
booktitle = {2023 한국차세대컴퓨팅학회 춘계학술대회 },
pages = {313-315},
organization = {한국차세대컴퓨팅학회 },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Lee, Seungjun; Yu, Miri; Yoon, Daegun; Oh, Sangyoon
Can hierarchical client clustering mitigate the data heterogeneity effect in federated learning?🌏 International Conference
2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 2023, ISBN: 979-8-3503-1200-3.
@conference{lee2023ccfed,
title = {Can hierarchical client clustering mitigate the data heterogeneity effect in federated learning?},
author = {Seungjun Lee and Miri Yu and Daegun Yoon and Sangyoon Oh},
url = {https://doi.org/10.1109/IPDPSW59300.2023.00134},
doi = {10.1109/IPDPSW59300.2023.00134},
isbn = {979-8-3503-1200-3},
year = {2023},
date = {2023-05-15},
urldate = {2023-05-15},
booktitle = {2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)},
abstract = {Federated learning (FL) was proposed for training a deep neural network model using millions of user data. The technique has attracted considerable attention owing to its privacy-preserving characteristic. However, two major challenges exist. The first is the limitation of simultaneously participating clients. If the number of clients increases, the single parameter server easily becomes a bottleneck and is prone to have stragglers. The second is data heterogeneity, which adversely affects the accuracy of the global model. Because data should remain at user devices to preserve privacy, we cannot use data shuffling, which is used to homogenize training data in traditional distributed deep learning. We propose a client clustering and model aggregation method, CCFed, to increase the number of simultaneously participating clients and mitigate the data heterogeneity problem. CCFed improves the learning performance using set partition modeling to let data be evenly distributed between clusters and mitigate the effect of a non-IID environment. Experiments show that we can achieve a 2.7-14% higher accuracy using CCFed compared with FedAvg, where CCFed requires approximately 50% less number of rounds compared with FedAvg training on benchmark datasets.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
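CCFed's goal of letting data "be evenly distributed between clusters" can be illustrated with a greedy sketch (our assumption, not the paper's set-partition model): each client is placed in the cluster where its label histogram most reduces that cluster's imbalance, with cluster size as a tie-breaker.

```python
# Greedy client clustering for federated learning: keep each cluster's
# aggregate label distribution close to uniform, so clusters see roughly
# IID data even when individual clients are highly non-IID.

def cluster_clients(client_label_counts, num_clusters):
    """client_label_counts: list of per-client label histograms (lists)."""
    num_labels = len(client_label_counts[0])
    clusters = [[] for _ in range(num_clusters)]
    totals = [[0] * num_labels for _ in range(num_clusters)]

    def imbalance(hist):
        mean = sum(hist) / len(hist)
        return sum(abs(h - mean) for h in hist)

    # Assign each client where it reduces label imbalance most; prefer
    # smaller clusters on ties to keep cluster sizes even.
    for cid, counts in enumerate(client_label_counts):
        best = min(
            range(num_clusters),
            key=lambda k: (imbalance([t + c for t, c in zip(totals[k], counts)]),
                           len(clusters[k])),
        )
        clusters[best].append(cid)
        totals[best] = [t + c for t, c in zip(totals[best], counts)]
    return clusters
```

With two clients holding only label 0 and two holding only label 1, the sketch pairs one of each per cluster, giving each cluster a balanced label mix.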
최지헌; 유미리; 윤대건; 오상윤
An Analysis of Security Vulnerabilities in Federated Learning🇰🇷 Domestic Conference
Proceedings of the 2023 KICS Winter Conference, vol. 80, Korean Institute of Communications and Information Sciences (KICS), 2023, ISSN: 2383-8302.
@conference{최지헌2023연합학습에서의,
title = {연합학습에서의 보안 취약점 분석},
author = {최지헌 and 유미리 and 윤대건 and 오상윤},
url = {https://www.dbpia.co.kr/pdf/pdfView.do?nodeId=NODE11227811},
issn = {2383-8302},
year = {2023},
date = {2023-02-28},
urldate = {2023-02-28},
booktitle = {2023년도 한국통신학회 동계종합학술발표회 논문집},
volume = {80},
pages = {1201-1202},
organization = {한국통신학회},
abstract = {개인 데이터에 대한 프라이버시 침해 없이 분산 기계학습을 구현하기 위해 연합학습이 제안되었다. 기존 연합학습 기법의 개선을 통해 정확도향상 및 수렴속도 향상을 목표로 하는 새로운 기법들이 등장하고 있어서, 이에 대한 보안 가이드라인이 필요한 상황이다. 본 논문에서는연합학습 구조의 특징으로 나타나는 보안 취약점을 공격형태 별로 구분하고 이에 대한 대응방안을 고찰한다.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
백민석; 정현석; 공은빈; 오상윤
Design of a Distributed Database for Effective Processing of Autonomous Driving Data🇰🇷 Domestic Conference
Proceedings of the 2023 KICS Winter Conference, vol. 80, Korean Institute of Communications and Information Sciences (KICS), 2023, ISSN: 2383-8302.
@conference{baek2023autonomous,
title = {자율주행 데이터의 효과적인 처리를 위한 분산 데이터베이스 설계},
author = {백민석 and 정현석 and 공은빈 and 오상윤},
url = {https://www.dbpia.co.kr/pdf/pdfView.do?nodeId=NODE11227933},
issn = {2383-8302},
year = {2023},
date = {2023-02-28},
urldate = {2023-02-28},
booktitle = {2023년도 한국통신학회 동계종합학술발표회 논문집},
volume = {80},
pages = {1411-1412},
organization = {한국통신학회},
abstract = {자율주행 기술 고도화를 위해서는 관련 데이터의 효과적인 관리를 지원하는 시스템이 반드시 필요하다. 본 논문에서는, 비정형 대용량의 자율주행 데이터를 처리하기 위한 HDFS 와 HBase 기반의 분산 데이터베이스의 설계를 소개하며, 공개 자율주행 데이터의 ETL 과정을 통해 실증적인 효과를 분석한다. },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Yoon, Daegun; Jeong, Minjoong; Oh, Sangyoon
SAGE: toward on-the-fly gradient compression ratio scaling🌏 International Journal Article
In: The Journal of Supercomputing, pp. 1–23, 2023.
@article{yoon2023sage,
title = {SAGE: toward on-the-fly gradient compression ratio scaling},
author = {Daegun Yoon and Minjoong Jeong and Sangyoon Oh},
url = {https://link.springer.com/article/10.1007/s11227-023-05120-7},
doi = {10.1007/s11227-023-05120-7},
year = {2023},
date = {2023-02-25},
urldate = {2023-02-25},
journal = {The Journal of Supercomputing},
pages = {1--23},
abstract = {Gradient sparsification is widely adopted in distributed training; however, it suffers from a trade-off between computation and communication. The prevalent Top-k sparsifier has a hard constraint on computational overhead while achieving the desired gradient compression ratio. Conversely, the hard-threshold sparsifier eliminates computational constraints but fail to achieve the targeted compression ratio. Motivated by this tradeoff, we designed a novel threshold-based sparsifier called SAGE, which achieves a compression ratio close to that of the Top-k sparsifier with negligible computational overhead. SAGE scales the compression ratio by deriving an adjustable threshold based on each iteration’s heuristics. Experimental results show that SAGE achieves a compression ratio closer to the desired ratio than a hard-threshold sparsifier without exacerbating the accuracy of model training. In terms of computation time for gradient selection, SAGE achieves a speedup of up to 23.62×
over the Top-k sparsifier.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
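SAGE's threshold-based alternative to sorting can be sketched in a few lines (the update rule and names below are our assumptions, not the paper's exact heuristic): keep gradients above a threshold, then nudge the threshold each iteration toward the target compression ratio.

```python
# Threshold-based gradient sparsification with an adaptive threshold:
# no sorting is needed (unlike Top-k), yet the achieved compression
# ratio is steered toward a user-set target over iterations.

def sparsify(grads, threshold):
    """Keep gradients whose magnitude meets the threshold; zero the rest."""
    return [g if abs(g) >= threshold else 0.0 for g in grads]

def adjust_threshold(threshold, kept, total, target_ratio, gain=0.5):
    """Raise the threshold when too many gradients were kept, lower it
    when too few; a multiplicative update keeps it positive."""
    achieved = kept / total
    return threshold * (1.0 + gain * (achieved - target_ratio) / target_ratio)
```

Selecting by threshold is O(n) per iteration versus the sort-dominated cost of Top-k, which is the source of the reported speedup in gradient selection.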
2022
Yoon, Daegun; Jeong, Minjoong; Oh, Sangyoon
WAVE: designing a heuristics-based three-way breadth-first search on GPUs🌏 International Journal Article
In: The Journal of Supercomputing, 2022, (2).
@article{Yoon2022WAVE,
title = {WAVE: designing a heuristics-based three-way breadth-first search on GPUs},
author = {Daegun Yoon and Minjoong Jeong and Sangyoon Oh},
doi = {10.1007/s11227-022-04934-1},
year = {2022},
date = {2022-11-17},
urldate = {2022-11-17},
journal = {The Journal of Supercomputing},
abstract = {Breadth-first search (BFS) is a building block for improving the performance of many iterative graph algorithms. In addition to conventional BFS (push), a novel method that traverses a graph in the reverse direction (pull) has emerged and gained popularity because of its enhanced processing performance. Several frameworks have recently used a hybrid approach known as direction-optimizing BFS, which utilizes both directions. However, these frameworks are mostly interested in optimizing the procedure in each direction, instead of designing sophisticated methods for determining the appropriate direction between push and pull at each iteration. Owing to the lack of in-depth discussion on this decision, state-of-the-art direction-optimizing BFS algorithms cannot demonstrate their comprehensive performance despite improvements in the design of each direction because they select ineffective directions at each iteration. We identified that the current frameworks suffer from high computational overheads for their decisions and make decisions that are overfitted to several graph datasets used for tuning their direction selection process. Based on observations from state-of-the-art limitations, we designed a direction-optimizing method for BFS called WAVE. WAVE minimizes the computational overhead to near zero and makes more appropriate direction selection decisions than the state-of-the-art heuristics based on the characteristics extracted from the input graph dataset. In our experiments on 20 graph benchmarks, WAVE achieved speedups of up to 4.95×, 5.79×, 46.49×, and 149.67× over Enterprise, Gunrock, Tigr, and CuSha, respectively.},
note = {2},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
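The push/pull decision that WAVE optimizes can be made concrete with a small CPU sketch. The switching rule below is the classic frontier-size heuristic, used here only as an illustrative assumption; WAVE's contribution is a lower-overhead, better-informed version of this decision.

```python
# Direction-optimizing BFS on an undirected graph (adjacency lists):
# push (forward) while the frontier is small, pull (reverse) once the
# frontier covers a noticeable fraction of the vertices.

def bfs_levels(adj, src, pull_fraction=0.05):
    """Return the BFS level of every vertex (-1 if unreachable)."""
    n = len(adj)
    level = [-1] * n
    level[src] = 0
    frontier = [src]
    depth = 0
    while frontier:
        depth += 1
        if len(frontier) < pull_fraction * n:
            # Push: the frontier expands along its outgoing edges.
            nxt = list(dict.fromkeys(
                v for u in frontier for v in adj[u] if level[v] == -1))
        else:
            # Pull: every unvisited vertex checks whether it has a
            # neighbor on the previous level.
            nxt = [v for v in range(n) if level[v] == -1
                   and any(level[u] == depth - 1 for u in adj[v])]
        for v in nxt:
            level[v] = depth
        frontier = nxt
    return level
```

Pull avoids the contention of a huge frontier writing to many neighbors, while push avoids scanning all unvisited vertices when the frontier is tiny; choosing poorly at any iteration wastes work, which is the failure mode the paper attributes to overfitted heuristics.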
백민석; 오상윤
An Analysis of Public Transit Population Movement in Seoul Using Hadoop MapReduce and PageRank🇰🇷 Domestic Conference
Annual Conference of KIPS (ACK 2022), Korea Information Processing Society (KIPS), 2022, (Best Paper Award).
@conference{백민석2022하둡,
title = {하둡 맵리듀스와 페이지 랭크를 이용한 서울시 대중 교통 인구 이동 분석 },
author = {백민석 and 오상윤},
url = {https://kiss.kstudy.com/Detail/Ar?key=3988407},
year = {2022},
date = {2022-11-04},
urldate = {2022-11-04},
booktitle = {추계학술대회 Annual Conference of KIPS (ACK 2022)},
organization = {한국정보처리학회 },
note = {우수논문상},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
여상호; 배민호; 정민중; 권오경; 오상윤
Crossover-SGD: A gossip-based communication in distributed deep learning for alleviating large mini-batch problem and enhancing scalability🌏 International Journal Article
In: Concurrency and Computation: Practice and Experience, 2022.
@article{여상호2022Crossover-SGD,
title = {Crossover-SGD: A gossip-based communication in distributed deep learning for alleviating large mini-batch problem and enhancing scalability},
author = {여상호 and 배민호 and 정민중 and 권오경 and 오상윤},
url = {https://arxiv.org/abs/2012.15198},
doi = {10.48550/arXiv.2012.15198},
year = {2022},
date = {2022-11-01},
urldate = {2022-11-01},
journal = {Concurrency and Computation: Practice and Experience},
abstract = { Distributed deep learning is an effective way to reduce the training time of deep learning for large datasets as well as complex models. However, the limited scalability caused by network overheads makes it difficult to synchronize the parameters of all workers. To resolve this problem, gossip-based methods that demonstrates stable scalability regardless of the number of workers have been proposed. However, to use gossip-based methods in general cases, the validation accuracy for a large mini-batch needs to be verified. To verify this, we first empirically study the characteristics of gossip methods in a large mini-batch problem and observe that the gossip methods preserve higher validation accuracy than AllReduce-SGD(Stochastic Gradient Descent) when the number of batch sizes is increased and the number of workers is fixed. However, the delayed parameter propagation of the gossip-based models decreases validation accuracy in large node scales. To cope with this problem, we propose Crossover-SGD that alleviates the delay propagation of weight parameters via segment-wise communication and load balancing random network topology. We also adapt hierarchical communication to limit the number of workers in gossip-based communication methods. To validate the effectiveness of our proposed method, we conduct empirical experiments and observe that our Crossover-SGD shows higher node scalability than SGP(Stochastic Gradient Push). },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
윤대건; 노병희; 오상윤
Design of a Policy Engine Based on Performance-Indicator Analysis for Improving Routing Performance in Tactical Networks🇰🇷 Domestic Journal Article
In: Journal of the Korean Institute of Communications and Information Sciences (J-KICS), vol. 47, no. 9, pp. 1353-1359, 2022.
@article{윤대건2022전술망,
title = {전술망의 라우팅 성능 개선을 위한 성능 지표 분석 기반 정책 엔진 설계},
author = {윤대건 and 노병희 and 오상윤},
url = {https://www.kci.go.kr/kciportal/ci/sereArticleSearch/ciSereArtiView.kci?sereArticleSearchBean.artiId=ART002877193},
year = {2022},
date = {2022-10-31},
urldate = {2022-10-31},
journal = {한국통신학회 논문지},
volume = {47},
number = {9},
issue = {9},
pages = {1353-1359},
abstract = {컴퓨팅 관련 기술 발달에 따라 군 작전 수행에서 발생하는 데이터의 규모가 매우 커지고 있으며, 이에 따라 이를 처리하기 위한 군 전술망의 성능 향상에 대한 요구 또한 점점 늘어나고 있다. 군 전술망의 특성 상 다양한 장비로 구성된 네트워크를 활용해야 하며, 이러한 상황에서 민간에서 활발히 적용되는 Software-Defined Network (SDN) 기술을 적용한다면 장비를 제공 벤더로부터 자유로운 손쉬운 네트워크 관리가 가능하다. 본 논문에서는SDN 기반 네트워크 환경에서 패킷 전송 성능 향상을 목적으로 하는 네트워크 정책 엔진 구조 설계를 소개한다.
정책 엔진은 Flow table의 Flow들이 나타내는 라우팅 경로를 수정하도록 하는 알고리즘을 포함하며 성능 개선 여부는 본 연구에서 정의한 종합 성능 지표를 통해 판단한다. 추후 본 연구에서 제안하는 전술망 라우팅 성능 개량을 위한 성능 지표 분석 기반 정책 엔진 기반의 소프트웨어를 실제 네트워크 운용 상황에 적용하고, 네트워크 성능 향상을 검증하도록 할 계획이다.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Lee, Seungjun; Jeong, Minjoong; Oh, Sangyoon
Is Ant Colony System better than FFD for VM placement in a heterogeneous cluster?🌏 International Conference
2022 IEEE International Conference on Cloud Engineering (IC2E), 2022, ISBN: 978-1-6654-9116-7.
@conference{Seungjun2022Ant,
title = {Is Ant Colony System better than FFD for VM placement in a heterogeneous cluster?},
author = {Seungjun Lee and Minjoong Jeong and Sangyoon Oh},
url = {https://ieeexplore.ieee.org/document/9946320},
doi = {10.1109/IC2E55432.2022.00038},
isbn = {978-1-6654-9116-7},
year = {2022},
date = {2022-09-22},
urldate = {2022-06-11},
booktitle = {2022 IEEE International Conference on Cloud Engineering (IC2E)},
pages = {277-278},
abstract = {First fit decreasing (FFD) is the most popular heuristic for virtual machine (VM) placement problems. However, FFD does not perform well in a heterogeneous cluster environment in which physical machines have different capacities. Moreover, FFD and other heuristics, such as best fit decreasing (BFD), do not effectively handle the VM placement problem if multiple resources are considered together. In this study, we analyze the reason why the ant colony system performs better than FFD for VM placement in a heterogeneous cluster. We verified our logical observations through experimental comparisons with other heuristics.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
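The FFD baseline that the paper compares against the ant colony system can be sketched in a few lines (single-resource for brevity; the paper's point is precisely that multi-resource, heterogeneous clusters break this simplicity):

```python
# First fit decreasing (FFD) for VM placement: sort VMs by decreasing
# demand, then place each on the first host with enough free capacity.

def ffd_place(vm_demands, host_capacities):
    """Return a dict vm -> host index, or -1 if a VM fits nowhere."""
    free = list(host_capacities)
    placement = {}
    for vm in sorted(range(len(vm_demands)), key=lambda i: -vm_demands[i]):
        for h, cap in enumerate(free):
            if vm_demands[vm] <= cap:
                placement[vm] = h
                free[h] -= vm_demands[vm]
                break
        else:
            placement[vm] = -1  # no host can accommodate this VM
    return placement
```

FFD's fixed host ordering ignores how well a VM's demand vector matches a host's remaining capacity, which is where search-based methods such as the ant colony system gain ground on heterogeneous clusters.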
Yoon, Daegun; Oh, Sangyoon
Empirical Analysis on Top-k Gradient Sparsification for Distributed Deep Learning in a Supercomputing Environment🌏 International Conference
The 8th International Conference on Next Generation Computing (ICNGC) 2022, 2022.
@conference{yoon2022empirical,
title = {Empirical Analysis on Top-k Gradient Sparsification for Distributed Deep Learning in a Supercomputing Environment},
author = {Daegun Yoon and Sangyoon Oh},
doi = {10.48550/arXiv.2209.08497},
year = {2022},
date = {2022-09-19},
booktitle = {The 8th International Conference on Next Generation Computing (ICNGC) 2022},
abstract = {To train deep learning models faster, distributed training on multiple GPUs is the very popular scheme in recent years. However, the communication bandwidth is still a major bottleneck of training performance. To improve overall training performance, recent works have proposed gradient sparsification methods that reduce the communication traffic significantly. Most of them require gradient sorting to select meaningful gradients such as Top-k gradient sparsification (Top-k SGD). However, Top-k SGD has a limit to increase the speed up overall training performance because gradient sorting is significantly inefficient on GPUs. In this paper, we conduct experiments that show the inefficiency of Top-k SGD and provide the insight of the low performance. Based on observations from our empirical analysis, we plan to yield a high performance gradient sparsification method as a future work. },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
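Top-k gradient sparsification, the scheme whose GPU inefficiency this paper measures, can be sketched with plain Python lists (real implementations operate on GPU tensors, where the sort/selection step is the bottleneck under study):

```python
# Top-k gradient sparsification: keep only the k largest-magnitude
# gradients and zero the rest, reducing communication traffic at the
# cost of a selection (sort) step every iteration.

def topk_sparsify(grads, k):
    """Keep the k largest-magnitude gradients, zero the rest."""
    keep = set(sorted(range(len(grads)), key=lambda i: -abs(grads[i]))[:k])
    return [g if i in keep else 0.0 for i, g in enumerate(grads)]
```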
이승준; 윤대건; 오상윤
An Analysis-Request Definition Language for User Modules of an SDN Policy Engine🇰🇷 Domestic Journal Article
In: Journal of the Korean Institute of Communications and Information Sciences (J-KICS), vol. 47, no. 9, pp. 1360-1369, 2022.
@article{이승준2022SDN,
title = {SDN 정책엔진의 사용자 모듈을 위한 분석 요청 정의 언어},
author = {이승준 and 윤대건 and 오상윤},
url = {https://www.kci.go.kr/kciportal/ci/sereArticleSearch/ciSereArtiView.kci?sereArticleSearchBean.artiId=ART002877194},
year = {2022},
date = {2022-09-01},
urldate = {2022-09-01},
journal = {한국통신학회 논문지},
volume = {47},
number = {9},
pages = {1360-1369},
abstract = {현대전에서 작전 수행은 네트워크 중심전의 양상을 띄고 있으며, 이에 따라 군 전술망 자원을 효과적으로 사용하기 위한 여러 연구가 진행되고 있다. 단일 연구 결과가 아닌 여러 연구의 결과를 복합적으로 적용했을 때의 효과를 분석하기 위한 노력의 일환으로 통합 테스트베드가 구축되고, 여기에서 여러 네트워크 알고리즘을 동시에 수행하고 성능을 분석하기 위한 정책 엔진도 설계되었다. 하지만 이종 네트워크 환경에서는 사용자들이 요구하는 서른 다른 데이터 구조의 성능 지표와 이를 처리할 각 알고리즘의 서로 다른 실행 환경에 적응적으로 대응하기 어려운 문제가 있었다. 이에 본 연구에서는 프로그래밍 언어와 실행 환경 등 특정 기술에 종속되지 않는 정책 엔진을 위한 XML 기반의 인터페이스 포맷을 정의하고 그 스키마를 제안한다. 제안된 스키마를 사용하여 메시지는 특정 프로그래밍 언어에 종속되지 않고 인코딩과 디코딩을 할 수 있으며 Open Container Initiative 표준을 기반으로실행 환경을 정의하는 컨테이너를 기술할 수 있다.
In modern warfare environment, the well defined networks becomes important to the operations. Thus, researchers study on how to use the military tactical network resources effectively. To analyze effectiveness of the results from multiple studies together, an integrated testbed is critical as well as the design and the implementation of a policy engine that performs multiple network algorithms and analyze performance simultaneously. However, when the network environment is heterogeneous, it is hard to respond adaptively to the performance indicators of the different data structures and the different execution environments of each user algorithm. To address this issue, we propose an XML-based interface format and its schema for the policy engine, which is independent from specific technologies such as programming languages and execution environments. A message from and to the policy engine and the testbed can be encoded and decoded regardless of the programming language. Furthermore, it can describe containers of the execution environment based on the Open Container Initiative standard.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Park, Juwon; Yoon, Daegun; Yeo, Sangho; Oh, Sangyoon
AMBLE: Adjusting Mini-Batch and Local Epoch for Federated Learning with Heterogeneous Devices🌏 InternationalJournal Article
In: Journal of Parallel and Distributed Computing, 2022, ISSN: 0743-7315.
@article{Juwon2022AMBLE,
title = {AMBLE: Adjusting Mini-Batch and Local Epoch for Federated Learning with Heterogeneous Devices},
author = {Juwon Park and Daegun Yoon and Sangho Yeo and Sangyoon Oh},
url = {https://www.sciencedirect.com/science/article/pii/S0743731522001757},
doi = {10.1016/j.jpdc.2022.07.009},
issn = {0743-7315},
year = {2022},
date = {2022-07-21},
urldate = {2022-07-21},
journal = {Journal of Parallel and Distributed Computing},
abstract = {As data privacy becomes increasingly important, federated learning, which trains deep learning models while preserving the data privacy of devices, is entering the spotlight. Federated learning makes it possible to train on all the data while each device processes its local data independently, without collecting distributed local data in a central server. However, challenges remain for the participating devices, such as communication overhead and system heterogeneity. In this paper, we propose the Adjusting Mini-Batch and Local Epoch (AMBLE) approach, which adaptively adjusts the local mini-batch and local epoch size for heterogeneous devices in federated learning and updates the parameters synchronously. With AMBLE, we enhance computational efficiency by removing stragglers and scaling the local learning rate to improve the model convergence rate and accuracy. We verify that federated learning with AMBLE trains stably, with a faster convergence speed and higher accuracy than FedAvg and an adaptive batch-size scheme, for both identically and independently distributed (IID) and non-IID cases.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
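AMBLE's core idea, per the abstract, is to scale each device's local mini-batch and epoch count to its speed so that stragglers disappear from synchronous rounds, and to scale the local learning rate with the batch size. A minimal sketch of that scheduling idea, assuming per-device throughputs (samples/sec) are known; the function and its exact scaling rule are my illustration, not the paper's algorithm:

```python
def amble_schedule(throughputs, base_batch=32, base_epochs=5):
    """Scale each device's local workload to its relative speed so that
    all devices finish a synchronous round at roughly the same time."""
    fastest = max(throughputs)
    plan = []
    for t in throughputs:
        ratio = t / fastest                       # 1.0 for the fastest device
        local_epochs = max(1, round(base_epochs * ratio))
        local_batch = max(1, round(base_batch * ratio))
        # linear learning-rate scaling with the local batch size
        lr_scale = local_batch / base_batch
        plan.append((local_epochs, local_batch, lr_scale))
    return plan

# three devices: fast, half speed, quarter speed
plan = amble_schedule([100.0, 50.0, 25.0])
```

The slow devices do proportionally less local work per round instead of being dropped, which is what lets AMBLE keep synchronous updates without straggler stalls.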
Yoon, Daegun; Oh, Sangyoon
SURF: Direction-Optimizing Breadth-First Search Using Workload State on GPUs🌏 InternationalJournal Article
In: Sensors, vol. 22, no. 13, pp. 4899, 2022.
@article{yoon2022surf,
title = {SURF: Direction-Optimizing Breadth-First Search Using Workload State on GPUs},
author = {Daegun Yoon and Sangyoon Oh},
url = {https://www.mdpi.com/1424-8220/22/13/4899},
doi = {10.3390/s22134899},
year = {2022},
date = {2022-06-29},
urldate = {2022-01-01},
journal = {Sensors},
volume = {22},
number = {13},
pages = {4899},
publisher = {Multidisciplinary Digital Publishing Institute},
abstract = {Graph data structures have been used in a wide range of applications, including scientific and social network applications. Engineers and scientists analyze graph data to discover knowledge and insights by using various graph algorithms. Breadth-first search (BFS) is one of the fundamental building blocks of complex graph algorithms, and its implementation is included in graph libraries for large-scale graph processing. In this paper, we propose a novel direction selection method, SURF (Selecting directions Upon Recent workload of Frontiers), to enhance the performance of BFS on GPUs. A direction optimization that selects the proper traversal direction of a BFS execution between the push and pull phases is crucial to performance as well as to efficient handling of the varying workloads of the frontiers. However, existing works select the direction using condition statements based on predefined thresholds, without considering the changing workload state. To address this drawback, we define several metrics that describe the state of the workload and analyze their impact on BFS performance. To show that SURF selects the appropriate direction, we implement the direction selection method with a deep neural network model that adopts those metrics as input features. Experimental results indicate that SURF achieves higher direction prediction accuracy and reduced execution time in comparison with existing state-of-the-art methods that support a direction-optimizing BFS. SURF yields up to a 5.62× speedup.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
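For context on what SURF improves: the threshold-based push/pull switch the abstract criticizes is the classic direction-optimizing BFS heuristic. A compact CPU sketch of that baseline follows; SURF itself replaces the threshold test with a neural model over workload-state metrics, and the `alpha` constant and example graph here are illustrative:

```python
def bfs_direction_opt(adj, src, alpha=2):
    """Direction-optimizing BFS: traverse top-down (push) while the
    frontier is small, bottom-up (pull) once the frontier's outgoing
    edges dominate the edges of still-unvisited vertices."""
    n = len(adj)
    dist = [-1] * n
    dist[src] = 0
    frontier, level = [src], 0
    while frontier:
        frontier_edges = sum(len(adj[v]) for v in frontier)
        unvisited = [v for v in range(n) if dist[v] == -1]
        if frontier_edges * alpha < sum(len(adj[v]) for v in unvisited):
            # push phase: expand every frontier vertex's neighbors
            nxt = []
            for v in frontier:
                for w in adj[v]:
                    if dist[w] == -1:
                        dist[w] = level + 1
                        nxt.append(w)
        else:
            # pull phase: each unvisited vertex looks for a frontier parent
            nxt = []
            for v in unvisited:
                if any(dist[w] == level for w in adj[v]):
                    dist[v] = level + 1
                    nxt.append(v)
        frontier = nxt
        level += 1
    return dist

# small undirected graph: 0-1, 0-2, 1-3, 2-3
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
dist = bfs_direction_opt(adj, 0)
```

On a GPU, the same decision is made once per level for the whole frontier, which is why a mispredicted direction is costly and worth a learned predictor.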
최지헌,; 송봉섭,; 오상윤,
자율주행 데이터 관리를 위한 백엔드 아키텍처 연구🇰🇷 DomesticConference
2022년도 한국통신학회 하계종합학술발표회, 2022.
@conference{최지헌2022자율주행,
title = {자율주행 데이터 관리를 위한 백엔드 아키텍처 연구},
author = {최지헌 and 송봉섭 and 오상윤},
url = {https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11108453},
year = {2022},
date = {2022-06-01},
urldate = {2022-01-01},
booktitle = {2022년도 한국통신학회 하계종합학술발표회},
journal = {한국통신학회 학술대회논문집},
pages = {1719--1720},
abstract = {With the overall advancement of the autonomous driving industry, research on verifying the safety of autonomous driving algorithms is active. To address the difficulty of managing and effectively querying the data used for safety verification, we conducted a study that integrates various kinds of vehicle driving data and the unstructured data collected as simulation results, and loads them into a data warehouse. We analyze the limitations of existing data warehouses based on structured data and propose a system design that can efficiently load and query data from newly added file types or sources.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
정현석,; 유미리,; 윤대건,; 이승준,; 오상윤,
재난 대응 기계학습 모델의 Data Drift 문제에 대한 MLOps 기반 대응 기법🇰🇷 DomesticConference
2022 한국차세대컴퓨팅학회 춘계학술대회, 한국차세대컴퓨팅학회, 2022.
@conference{정현석2022재난,
title = {재난 대응 기계학습 모델의 Data Drift 문제에 대한 MLOps 기반 대응 기법},
author = {정현석 and 유미리 and 윤대건 and 이승준 and 오상윤},
url = {https://www.earticle.net/Article/A412404},
year = {2022},
date = {2022-05-21},
urldate = {2022-05-21},
booktitle = {2022 한국차세대컴퓨팅학회 춘계학술대회},
pages = {473--476},
publisher = {한국차세대컴퓨팅학회},
abstract = {In machine learning, data drift is an important problem that strongly affects accuracy, and it matters even more in fields such as disaster response, where a model's mispredictions are costly. This paper proposes using MLOps techniques and tools to effectively retrain models against the data drift problem in the disaster domain, and validates the claim through accuracy experiments based on Kaggle data and MLflow.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2021
Lee, Seungjun; Yoon, Daegun; Yeo, Sangho; Oh, Sangyoon
Mitigating Cold Start Problem in Serverless Computing with Function Fusion🌏 InternationalJournal Article
In: Sensors, vol. 21, no. 24, 2021, ISSN: 1424-8220.
@article{s21248416,
title = {Mitigating Cold Start Problem in Serverless Computing with Function Fusion},
author = {Seungjun Lee and Daegun Yoon and Sangho Yeo and Sangyoon Oh},
url = {https://www.mdpi.com/1424-8220/21/24/8416},
doi = {10.3390/s21248416},
issn = {1424-8220},
year = {2021},
date = {2021-12-23},
urldate = {2021-12-16},
journal = {Sensors},
volume = {21},
number = {24},
abstract = {As Artificial Intelligence (AI) is becoming ubiquitous in many applications, serverless computing is also emerging as a building block for developing cloud-based AI services. Serverless computing has received much interest because of its simplicity, scalability, and resource efficiency. However, due to the trade-off with resource efficiency, serverless computing suffers from the cold start problem, that is, a latency between a request arrival and function execution. The cold start problem significantly influences the overall response time of workflow that consists of functions because the cold start may occur in every function within the workflow. Function fusion can be one of the solutions to mitigate the cold start latency of a workflow. If two functions are fused into a single function, the cold start of the second function is removed; however, if parallel functions are fused, the workflow response time can be increased because the parallel functions run sequentially even if the cold start latency is reduced. This study presents an approach to mitigate the cold start latency of a workflow using function fusion while considering a parallel run. First, we identify three latencies that affect response time, present a workflow response time model considering the latency, and efficiently find a fusion solution that can optimize the response time on the cold start. Our method shows a response time of 28%–86% of the response time of the original workflow in five workflows.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
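The trade-off this abstract describes (fusing two sequential functions removes the second cold start, but fusing parallel branches serializes them) can be captured in a toy response-time model. The model and all numbers below are illustrative, not the paper's formulation:

```python
def response_time(durations, cold_start, fused=False, parallel=False):
    """Toy response time for a two-function workflow (times in ms).
    Unfused: each function pays its own cold start.
    Fused: one cold start, but parallel branches run sequentially."""
    a, b = durations
    if not fused:
        if parallel:
            # branches start together; their cold starts overlap
            return cold_start + max(a, b)
        return (cold_start + a) + (cold_start + b)
    # fused into one function: a single cold start, work runs back-to-back
    return cold_start + (a + b)

# sequential chain: fusion removes one cold start -> faster
seq_unfused = response_time((100, 100), cold_start=500)
seq_fused = response_time((100, 100), cold_start=500, fused=True)

# parallel branches: fusion serializes the work -> can be slower
par_unfused = response_time((400, 400), cold_start=100, parallel=True)
par_fused = response_time((400, 400), cold_start=100, fused=True, parallel=True)
```

A fusion optimizer like the one in the paper searches over which pairs to fuse so that total modeled response time is minimized, rather than fusing everything.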
오도현,; 오상윤,
컨테이너 기반 엣지-클라우드 협업 구조의 공군 C4I 체계 적용🇰🇷 DomesticConference
2021 한국소프트웨어종합학술대회, 2021.
@conference{오도현2021컨테이너,
title = {컨테이너 기반 엣지-클라우드 협업 구조의 공군 C4I 체계 적용},
author = {오도현 and 오상윤},
url = {https://www.dbpia.co.kr/pdf/pdfView.do?nodeId=NODE11035600&mark=0&useDate=&ipRange=N&accessgl=Y&language=ko_KR&hasTopBanner=true},
year = {2021},
date = {2021-11-26},
urldate = {2021-11-26},
booktitle = {2021 한국소프트웨어종합학술대회},
abstract = {Advances in ICT led to the computer- and Internet-based information age, which has accelerated in the 21st century and is being widely applied to the defense sector. The Air Force C4I system is a core system enabling network-centric warfare (NCW), whose key concept is Sensor-to-Shooter, but in practice it remains at the level of providing information. This paper proposes applying a container-based edge-cloud collaboration architecture to the Air Force C4I system. The proposed approach is expected to support command decisions through fast, accurate processing of the heterogeneous, massive data generated on the battlefield, along with task scheduling, efficient resource management, and efficient development, testing, deployment, and management of the provided services.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Yu, Miri; Lee, Seungjun; Oh, Sangyoon
Energy-aware container migration scheme in edge computing for fault-tolerant fire-disaster response system🌏 InternationalConference 📃 In press
The 7th International Conference on Next Generation Computing 2021, 2021.
@conference{Yu2021container,
title = {Energy-aware container migration scheme in edge computing for fault-tolerant fire-disaster response system},
author = {Miri Yu and Seungjun Lee and Sangyoon Oh},
year = {2021},
date = {2021-11-05},
urldate = {2021-11-05},
booktitle = {The 7th International Conference on Next Generation Computing 2021},
abstract = {In light of recent advancements in IT, many researchers are exploring ways to minimize damage from fire disasters using artificial intelligence and cloud technology. With the introduction of edge computing, fire-disaster response software systems have made significant progress. However, existing studies often do not consider the response to a sudden power supply cut-off due to fire. In this study, we propose a container migration scheme based on the first-fit-decreasing bin-packing algorithm and the 0-1 knapsack algorithm to provide fault tolerance for containers running on edge servers that lose power.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
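The migration scheme above builds on first-fit-decreasing bin packing. A minimal sketch of that placement step, with hypothetical container demands and server capacities (the paper additionally applies a 0-1 knapsack algorithm, omitted here):

```python
def ffd_migrate(containers, capacities):
    """First-fit-decreasing: place each container (by resource demand,
    largest first) on the first surviving server that still fits it."""
    free = list(capacities)
    placement = {}
    for name, demand in sorted(containers.items(), key=lambda kv: -kv[1]):
        for i, cap in enumerate(free):
            if demand <= cap:
                free[i] -= demand
                placement[name] = i
                break
        else:
            placement[name] = None   # no surviving server can host it
    return placement

# containers from the powered-off server; two surviving servers with spare CPU
placement = ffd_migrate({"fire-ml": 4, "stream": 2, "db": 3}, [5, 5])
```

Sorting by decreasing demand is what gives FFD its good packing guarantee; an energy-aware variant would additionally weight the choice of target server by its power budget.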
Yoon, Daegun; Oh, Sangyoon
Traversing Large Road Networks on GPUs with Breadth-First Search🌏 InternationalConference 📃 In press
The 7th International Conference on Next Generation Computing, 2021.
@conference{Yoon2021Traversing,
title = {Traversing Large Road Networks on GPUs with Breadth-First Search},
author = {Daegun Yoon and Sangyoon Oh},
year = {2021},
date = {2021-11-05},
urldate = {2021-11-05},
booktitle = {The 7th International Conference on Next Generation Computing},
journal = {The 7th International Conference on Next Generation Computing 2021},
abstract = {Breadth-first search (BFS) is one of the most widely used graph kernels and substantially affects overall performance when processing various graphs. Since graph data are frequently used in real life, for example road networks in navigation systems, high-performance graph processing is increasingly critical. In this study, we aim to execute the BFS algorithm efficiently on road network data. We propose BARON, a BFS algorithm designed for road networks. To accelerate graph traversal, BARON reduces the occurrence of branch and memory divergence by exploiting warp-cooperative work sharing and atomic operations. With this design approach, BARON outperforms the BFS kernels of state-of-the-art graph processing frameworks that run stably on the latest GPU architectures. For various graphs, BARON yields speedups of up to 2.88x and 5.43x over Gunrock and CuSha, respectively.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
여상호,; 오상윤,
도시 화재 시뮬레이션에서의 효과적인 화재 대응을 위한 강화학습 적용 솔루션의 설계 및 구현🇰🇷 DomesticConference
ACK 2021, vol. 28, no. 2, 2021.
@conference{여상호2021도시2,
title = {도시 화재 시뮬레이션에서의 효과적인 화재 대응을 위한 강화학습 적용 솔루션의 설계 및 구현},
author = {여상호 and 오상윤},
url = {https://kiss.kstudy.com/thesis/thesis-view.asp?key=3921079},
year = {2021},
date = {2021-11-04},
urldate = {2021-11-04},
booktitle = {ACK 2021},
volume = {28},
number = {2},
pages = {104--106},
abstract = {As urban population density increases, building density per unit area also increases, so urban fires are more likely to grow into large-scale fires. Simulation-based fire response measures are widely studied to minimize the casualties and economic damage of large-scale urban fires, and recent studies apply reinforcement learning to explore effective fire response strategies in simulation. However, as the simulation scale grows, the sizes of the state information and of the action space for fire response increase, raising the complexity of reinforcement learning and degrading its scalability. This paper proposes a technique that transforms fire-situation information and the disaster-response action space to preserve learning scalability as the simulation scale grows. Experiments show that with this technique, a reinforcement learning model could be trained on a large-scale urban disaster simulation where training was previously infeasible, achieving a fire-response fitness of 99.2% relative to a no-fire-damage baseline defined as 100%.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Lee, Seungjun; Yoon, Daegun; Oh, Sangyoon
Imitation learning for VM placement problem using demonstration data generated by heuristics🌏 InternationalConference 📃 In press
17th Int. Conference on Data Science (ICDATA’21), 2021.
@conference{lee2021imitation,
title = {Imitation learning for VM placement problem using demonstration data generated by heuristics},
author = {Seungjun Lee and Daegun Yoon and Sangyoon Oh},
url = {https://youtu.be/CmG3E1rWroQ},
year = {2021},
date = {2021-07-26},
urldate = {2021-07-26},
booktitle = {17th Int. Conference on Data Science (ICDATA’21)},
abstract = {Data centers are key components of cloud computing that run virtual machines. To reduce the cost of operating data centers, it is important to decide how to allocate each virtual machine to a physical machine. Because the virtual machine placement problem is NP-hard, many heuristics exist to obtain near-optimal solutions as quickly as possible. Reinforcement learning can also be applied to the virtual machine placement problem; however, as the problem size grows, its convergence slows. A possible solution is for the agent to imitate the behavior of a given demonstration, called imitation learning. In this paper, we propose a method combining reinforcement learning with imitation learning, in which the demonstration data are generated by simple heuristics rather than human experts.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
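The paper's key point is generating demonstration data from simple heuristics rather than human experts. A sketch of that data-generation step for VM placement, using a first-fit "expert" (the state encoding and names are my illustration, not the paper's):

```python
def first_fit_action(free_capacity, vm_demand):
    """Heuristic 'expert': index of the first physical machine that fits."""
    for i, cap in enumerate(free_capacity):
        if vm_demand <= cap:
            return i
    return -1  # no machine fits: reject the request

def generate_demonstrations(vm_requests, capacities):
    """Roll the heuristic over a request stream, recording (state, action)
    pairs as demonstration data for behavior-cloning an RL agent."""
    free = list(capacities)
    demos = []
    for demand in vm_requests:
        state = (tuple(free), demand)
        action = first_fit_action(free, demand)
        demos.append((state, action))
        if action >= 0:
            free[action] -= demand
    return demos

# three VM requests arriving at two physical machines with 4 CPUs each
demos = generate_demonstrations([2, 3, 4], [4, 4])
```

The agent is pre-trained to match these heuristic actions, then fine-tuned with reinforcement learning, which is what speeds up convergence on large instances.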
윤대건,; 노병희,; 오상윤,
전술망 성능 개량을 위한 정책 엔진 인터페이스 설계🇰🇷 DomesticConference 📃 In press
2021 한국군사과학기술학회 종합학술대회, 2021.
@conference{윤대건2021전술망,
title = {전술망 성능 개량을 위한 정책 엔진 인터페이스 설계},
author = {윤대건 and 노병희 and 오상윤},
year = {2021},
date = {2021-06-10},
booktitle = {2021 한국군사과학기술학회 종합학술대회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
여상호,; 이승준,; 오상윤,
도시 재난 대응을 위한 Multi Objective 강화학습 모델 설계🇰🇷 DomesticConference
2021 한국차세대컴퓨팅학회 춘계학술대회, 한국차세대컴퓨팅학회, 2021.
@conference{여상호2021도시,
title = {도시 재난 대응을 위한 Multi Objective 강화학습 모델 설계},
author = {여상호 and 이승준 and 오상윤},
url = {https://www.earticle.net/Article/A409315},
year = {2021},
date = {2021-05-13},
urldate = {2021-05-13},
booktitle = {2021 한국차세대컴퓨팅학회 춘계학술대회},
pages = {11--15},
publisher = {한국차세대컴퓨팅학회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
이승준,; 여상호,; 오상윤,
Edge AI의 추론 과정을 위한 계층적 작업 분할 배치 기법🇰🇷 DomesticConference
2021 한국차세대컴퓨팅학회 춘계학술대회, 한국차세대컴퓨팅학회, 2021.
@conference{이승준2021edge,
title = {Edge AI의 추론 과정을 위한 계층적 작업 분할 배치 기법},
author = {이승준 and 여상호 and 오상윤},
url = {https://www.earticle.net/Article/A409319},
year = {2021},
date = {2021-05-13},
urldate = {2021-05-13},
booktitle = {2021 한국차세대컴퓨팅학회 춘계학술대회},
pages = {26--29},
publisher = {한국차세대컴퓨팅학회},
abstract = {Conventional cloud-based deployment of machine learning models degrades the quality of machine learning services for edge devices because of high latency, and transmitting input data for inference risks leaking private information. Solving these problems requires defining an inference process that uses edge servers and edge devices to avoid privacy leakage and communication overhead. To define an effective inference process, we propose a model partitioning scheme between edge servers and devices for a single inference model, based on the model- and data-parallel pipelining techniques of distributed deep learning, together with an effective scheduling scheme for the independent concurrent tasks requested at the edge.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Zhu, Jiang; Wang, Lizan; Xie, Guoqi; Pei, Tingrui; Oh, Sangyoon; Li, Zhetao
A low redundancy and high time efficiency large-scale task assignment strategy for heterogeneous service-oriented cloud computing systems🌏 InternationalJournal Article
In: The Journal of Supercomputing, vol. 77, no. 4, pp. 3450–3483, 2021.
@article{zhu2021low,
title = {A low redundancy and high time efficiency large-scale task assignment strategy for heterogeneous service-oriented cloud computing systems},
author = {Jiang Zhu and Lizan Wang and Guoqi Xie and Tingrui Pei and Sangyoon Oh and Zhetao Li},
url = {https://link.springer.com/article/10.1007/s11227-020-03403-x},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
journal = {The Journal of Supercomputing},
volume = {77},
number = {4},
pages = {3450--3483},
publisher = {Springer},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yoon, Daegun; Li, Zhetao; Oh, Sangyoon
Balanced content space partitioning for pub/sub: a study on impact of varying partitioning granularity🌏 InternationalJournal Article
In: The Journal of Supercomputing, pp. 1–27, 2021.
@article{yoon2021balanced,
title = {Balanced content space partitioning for pub/sub: a study on impact of varying partitioning granularity},
author = {Daegun Yoon and Zhetao Li and Sangyoon Oh},
url = {https://link.springer.com/article/10.1007/s11227-021-03821-5},
year = {2021},
date = {2021-01-01},
journal = {The Journal of Supercomputing},
pages = {1--27},
publisher = {Springer},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yeo, Sangho; Naing, Ye; Kim, Taeha; Oh, Sangyoon
Achieving Balanced Load Distribution with Reinforcement Learning-Based Switch Migration in Distributed SDN Controllers🌏 InternationalJournal Article
In: Electronics, vol. 10, no. 2, pp. 162, 2021.
@article{yeo2021achieving,
title = {Achieving Balanced Load Distribution with Reinforcement Learning-Based Switch Migration in Distributed SDN Controllers},
author = {Sangho Yeo and Ye Naing and Taeha Kim and Sangyoon Oh},
url = {https://www.mdpi.com/2079-9292/10/2/162},
year = {2021},
date = {2021-01-01},
journal = {Electronics},
volume = {10},
number = {2},
pages = {162},
publisher = {Multidisciplinary Digital Publishing Institute},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Kim, Taeha; Oh, Sangyoon
Metadata Replication with Synchronous OpCodes Writing for Namenode Multiplexing in Hadoop🌏 InternationalConference
2021 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS), IEEE 2021, ISBN: 978-1-6654-4067-7.
@conference{kim2021metadata,
title = {Metadata Replication with Synchronous OpCodes Writing for Namenode Multiplexing in Hadoop},
author = {Taeha Kim and Sangyoon Oh},
url = {https://ieeexplore.ieee.org/abstract/document/9422639},
doi = {10.1109/IEMTRONICS52119.2021.9422639},
isbn = {978-1-6654-4067-7},
year = {2021},
date = {2021-01-01},
booktitle = {2021 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS)},
pages = {1--7},
organization = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
김대현,; 여상호,; 오상윤,
분산 딥러닝에서 통신 오버헤드를 줄이기 위해 레이어를 오버래핑하는 하이브리드 올-리듀스 기법🇰🇷 DomesticJournal Article
In: 정보처리학회논문지. 컴퓨터 및 통신시스템, vol. 10, no. 7, pp. 191–198, 2021.
@article{김대현2021분산,
title = {분산 딥러닝에서 통신 오버헤드를 줄이기 위해 레이어를 오버래핑하는 하이브리드 올-리듀스 기법},
author = {김대현 and 여상호 and 오상윤},
url = {https://kiss.kstudy.com/thesis/thesis-view.asp?key=3898298},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
journal = {정보처리학회논문지. 컴퓨터 및 통신시스템},
volume = {10},
number = {7},
pages = {191--198},
abstract = {Distributed deep learning requires synchronizing the local parameters updated at each node. For effective parameter synchronization, we propose an all-reduce communication/computation overlapping technique that considers per-layer characteristics. Synchronization of an upper layer's parameters can overlap communication with computation until the next propagation pass of the lower layers. In a typical deep learning model for image classification, the upper layers are convolution layers and the lower layers are fully connected layers. Convolution layers have fewer parameters than fully connected layers and sit higher in the model, so their allowed network-overlap window is short; for them, a butterfly all-reduce, which shortens network latency, is effective. When the allowed overlap window is longer, a ring all-reduce, which exploits network bandwidth, is used instead. To validate the proposed method, we implemented it on the PyTorch platform and evaluated performance across batch sizes. Experiments show that the proposed technique shortens training time by up to 33% compared with the baseline PyTorch approach.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
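The hybrid rule in the abstract, butterfly all-reduce for small upper (convolution) layers with short overlap windows and ring all-reduce for large lower (fully connected) layers, can be illustrated with a standard alpha-beta communication cost model. The constants and the per-layer selection function below are illustrative assumptions, not the paper's measured model:

```python
import math

def allreduce_cost(algorithm, n_workers, n_bytes, alpha=1e-4, beta=1e-9):
    """Alpha-beta model: alpha = per-message latency (s),
    beta = per-byte transfer time (s)."""
    p = n_workers
    if algorithm == "ring":        # 2(p-1) steps, bandwidth-optimal
        return 2 * (p - 1) * (alpha + (n_bytes / p) * beta)
    if algorithm == "butterfly":   # log2(p) steps, latency-optimal
        return math.log2(p) * (alpha + n_bytes * beta)
    raise ValueError(algorithm)

def pick_algorithm(layer_bytes, n_workers):
    """Per layer, pick whichever all-reduce finishes first under the model."""
    ring = allreduce_cost("ring", n_workers, layer_bytes)
    butterfly = allreduce_cost("butterfly", n_workers, layer_bytes)
    return "ring" if ring < butterfly else "butterfly"

# small conv layer (latency-bound) vs large FC layer (bandwidth-bound), 8 workers
conv_choice = pick_algorithm(64 * 1024, 8)
fc_choice = pick_algorithm(256 * 1024 * 1024, 8)
```

Small messages are dominated by the per-step latency term, so fewer steps (butterfly) win; large messages are dominated by the per-byte term, so the bandwidth-optimal ring wins, matching the layer-wise choice the paper describes.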
여상호,; 이승준,; 오상윤,
학습 성능 향상을 위한 차원 축소 기법 기반 재난 시뮬레이션강화학습 환경 구성 및 활용🇰🇷 DomesticJournal Article
In: 정보처리학회논문지. 소프트웨어 및 데이터 공학, vol. 10, no. 7, pp. 263–270, 2021.
@article{여상호2021학습,
title = {학습 성능 향상을 위한 차원 축소 기법 기반 재난 시뮬레이션강화학습 환경 구성 및 활용},
author = {여상호 and 이승준 and 오상윤},
url = {https://kiss.kstudy.com/thesis/thesis-view.asp?key=3898295},
year = {2021},
date = {2021-01-01},
journal = {정보처리학회논문지. 소프트웨어 및 데이터 공학},
volume = {10},
number = {7},
pages = {263--270},
abstract = {Reinforcement learning, which searches for an optimal action policy through learning, is widely applied to effective rescue and disaster-response problems in disaster situations. However, existing reinforcement learning approaches for disaster response have been evaluated on relatively simple grid or graph environments, or on self-developed environments, so their practicality has not been sufficiently validated. To make reinforcement learning usable in real-world settings, this paper presents the construction and use of a reinforcement learning environment that exploits the complex properties of an existing disaster simulation environment. We built a communication channel and interface between the disaster simulation and the reinforcement learning agent, and applied an image-conversion scheme to non-image feature vectors to exploit the high-dimensional property information the simulation provides. Experiments show that the proposed approach recorded the lowest building fire damage among the compared methods.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2020
박주원,; 김태하,; 오상윤,
HDFS 이기종 스토리지 전략을 위한 스몰파일 아카이빙🇰🇷 DomesticConference
한국차세대컴퓨팅학회 하계학술대회 MR-IoT 융합 재난대응 인공지능 응용기술 워크샵, 2020.
@conference{박주원2020HDFS,
title = {HDFS 이기종 스토리지 전략을 위한 스몰파일 아카이빙},
author = {박주원 and 김태하 and 오상윤},
url = {https://www.youtube.com/watch?v=xDjfVcu4Txw},
year = {2020},
date = {2020-12-08},
urldate = {2020-01-01},
booktitle = {한국차세대컴퓨팅학회 하계학술대회 MR-IoT 융합 재난대응 인공지능 응용기술 워크샵},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Yoon, Daegun; Park, Gyudong; Oh, Sangyoon
Exploring a system architecture of content-based publish/subscribe system for efficient on-the-fly data dissemination🌏 InternationalJournal Article
In: Concurrency and Computation: Practice and Experience, pp. e6090, 2020.
@article{yoon2020exploring,
title = {Exploring a system architecture of content-based publish/subscribe system for efficient on-the-fly data dissemination},
author = {Daegun Yoon and Gyudong Park and Sangyoon Oh},
url = {https://onlinelibrary.wiley.com/doi/full/10.1002/cpe.6090},
year = {2020},
date = {2020-01-01},
journal = {Concurrency and Computation: Practice and Experience},
pages = {e6090},
publisher = {Wiley Online Library},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
윤대건,; 오상윤,
Software-Defined Network 에서의 Conflict Resolution 을 위한 정책엔진 구조 및 전략 분석🇰🇷 DomesticConference
한국통신학회 학술대회논문집, 2020.
@conference{윤대건2020software,
title = {Software-Defined Network 에서의 Conflict Resolution 을 위한 정책엔진 구조 및 전략 분석},
author = {윤대건 and 오상윤},
url = {https://www.dbpia.co.kr/pdf/pdfView.do?nodeId=NODE10498659&mark=0&useDate=&ipRange=N&accessgl=Y&language=ko_KR&hasTopBanner=false},
year = {2020},
date = {2020-01-01},
urldate = {2020-01-01},
booktitle = {한국통신학회 학술대회논문집},
journal = {한국통신학회 학술대회논문집},
pages = {566--567},
abstract = {This paper introduces and analyzes existing work toward a new design of the policy engine that resolves conflicts between newly added policies and existing policies in a Software-Defined Network, and proposes conflict-resolution strategies for the policy engine based on that analysis. We assume a policy engine composed of a conflict detector and a conflict handler to detect and resolve conflicts that can occur when a new policy is added to the SDN. The conflict detector determines whether a newly added policy conflicts with existing policies, and the conflict handler removes the offending policy through conflict resolution. This paper presents our analysis of how the conflict handler removes the offending policy using a recency-oriented strategy and a priority-oriented strategy.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
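The detector/handler split and the two deletion strategies described in the abstract can be sketched as follows; the policy representation and field names are hypothetical:

```python
def resolve_conflicts(policies, new_policy, strategy="recency"):
    """Add new_policy to the rule set, removing any existing policy that
    conflicts with it (same match field, different action).
    recency: the newest rule always wins.
    priority: a higher-priority existing rule rejects the new one."""
    def conflicts(p, q):          # the 'conflict detector'
        return p["match"] == q["match"] and p["action"] != q["action"]

    kept = []
    for p in policies:            # the 'conflict handler'
        if not conflicts(p, new_policy):
            kept.append(p)
        elif strategy == "priority" and p["priority"] > new_policy["priority"]:
            return policies       # existing higher-priority rule wins
        # otherwise: drop the old conflicting rule
    return kept + [new_policy]

rules = [{"match": "dst=10.0.0.1", "action": "drop", "priority": 1}]
new = {"match": "dst=10.0.0.1", "action": "forward", "priority": 2}
updated = resolve_conflicts(rules, new, strategy="priority")
```

Under the recency strategy the new rule always displaces the old one; under the priority strategy the outcome depends on which rule carries the higher priority.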
김대현,; 오상윤,
분산 딥러닝 최적화를 위한 Layer 별 동기화 기법🇰🇷 DomesticConference
한국차세대컴퓨팅학회 하계학술대회, 2020.
@conference{김대현2020,
title = {분산 딥러닝 최적화를 위한 Layer 별 동기화 기법},
author = {김대현 and 오상윤},
year = {2020},
date = {2020-01-01},
booktitle = {한국차세대컴퓨팅학회 하계학술대회},
journal = {한국차세대컴퓨팅학회 하계학술대회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
김용표,; 오상윤,
빅데이터 처리를 위한 Spark Mllib 성능 Benchmark🇰🇷 DomesticConference
한국통신학회 하계학술대회, 2020.
@conference{김용표2020,
title = {빅데이터 처리를 위한 Spark Mllib 성능 Benchmark},
author = {김용표 and 오상윤},
year = {2020},
date = {2020-01-01},
booktitle = {한국통신학회 하계학술대회},
journal = {한국통신학회 하계학술대회},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}