How to Cite Our Papers

Each paper comes with a generated BibTeX entry, which you can use to produce a citation in whatever style you need.

Copy the generated BibTeX code and convert it to a plain-text citation with a BibTeX parser. You can also convert it on the web, for example at the site below.

bibtex.online
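As an illustration, the conversion can also be scripted. Below is a minimal sketch in Python using only the standard library; the entry is reconstructed from the MiCRO listing on this page, and the citation key `yoon2023micro` is a hypothetical example (the actual BibTeX provided for each paper may use different keys and fields).

```python
import re

def bibtex_fields(entry: str) -> dict:
    """Extract simple `key = {value}` fields from one BibTeX entry."""
    return dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", entry))

def plain_citation(entry: str) -> str:
    """Format a BibTeX conference entry as a plain-text citation string."""
    f = bibtex_fields(entry)
    return f"{f['author']}. {f['title']}. {f['booktitle']}, {f['year']}."

entry = """@inproceedings{yoon2023micro,
  author    = {Yoon, Daegun and Oh, Sangyoon},
  title     = {MiCRO: Near-Zero Cost Gradient Sparsification for Scaling and Accelerating Distributed DNN Training},
  booktitle = {30th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC 2023)},
  year      = {2023},
}"""

print(plain_citation(entry))
```

This only handles flat `key = {value}` fields; nested braces or `"`-quoted values would need a real parser such as the `bibtexparser` package.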


2023

7.

Yoon, Daegun; Oh, Sangyoon

MiCRO: Near-Zero Cost Gradient Sparsification for Scaling and Accelerating Distributed DNN Training
🌏 International Conference

30th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC 2023), 2023.

Links | BibTeX | Tags: distributed deep learning, gradient sparsification

6.

Yoon, Daegun; Oh, Sangyoon

DEFT: Exploiting Gradient Norm Difference between Model Layers for Scalable Gradient Sparsification
🌏 International Conference

International Conference on Parallel Processing (ICPP) 2023, 2023.

Abstract | Links | BibTeX | Tags: distributed deep learning, gradient sparsification

5.

Yoon, Daegun; Jeong, Minjoong; Oh, Sangyoon

SAGE: toward on-the-fly gradient compression ratio scaling
🌏 International Journal Article

In: The Journal of Supercomputing, pp. 1–23, 2023.

Abstract | Links | BibTeX | Tags: distributed deep learning, gradient sparsification

2022

4.

Yeo, Sangho; Bae, Minho; Jeong, Minjoong; Kwon, Oh-Kyoung; Oh, Sangyoon

Crossover-SGD: A gossip-based communication in distributed deep learning for alleviating large mini-batch problem and enhancing scalability
🌏 International Journal Article

In: Concurrency and Computation: Practice and Experience, 2022.

Abstract | Links | BibTeX | Tags: deep learning, distributed deep learning

3.

Yoon, Daegun; Oh, Sangyoon

Empirical Analysis on Top-k Gradient Sparsification for Distributed Deep Learning in a Supercomputing Environment
Conference

The 8th International Conference on Next Generation Computing (ICNGC) 2022, 2022.

Abstract | Links | BibTeX | Tags: distributed deep learning, GPU, gradient sparsification

2021

2.

Lee, Seungjun; Yeo, Sangho; Oh, Sangyoon

A Hierarchical Task Partitioning and Placement Scheme for Edge AI Inference
🇰🇷 Domestic Conference

2021 Spring Conference of the Korea Institute of Next Generation Computing, Korea Institute of Next Generation Computing, 2021.

Abstract | Links | BibTeX | Tags: deep learning, distributed deep learning, edge computing, neural network

1.

Kim, Daehyun; Yeo, Sangho; Oh, Sangyoon

A Hybrid All-Reduce Scheme with Layer Overlapping for Reducing Communication Overhead in Distributed Deep Learning
🇰🇷 Domestic Journal Article

In: KIPS Transactions on Computer and Communication Systems, vol. 10, no. 7, pp. 191–198, 2021.

Abstract | Links | BibTeX | Tags: all-reduce, deep learning, distributed deep learning, layer overlapping, synchronization