Citing Papers

Each paper comes with a generated BibTeX entry, which you can use to produce a citation in whatever style you need.

Copy the generated BibTeX code and convert it to a plain-text string with a BibTeX parser. You can also do the conversion on the web, for example at the site below.

bibtex.online
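If you prefer to do the conversion in code rather than on the web, the minimal sketch below shows one way to turn a copied BibTeX entry into a plain-text citation string. It assumes the third-party Python package bibtexparser (v1.x API) is installed; the sample entry and its citation key are typed up from the SAGE journal article in the list below for illustration, not copied from the site's actual generated BibTeX, and the output is an informal author-title-venue-year string rather than any particular citation standard.

```python
# Minimal sketch: convert a copied BibTeX entry into a plain-text citation.
# Assumes the third-party package `bibtexparser` (v1.x): pip install bibtexparser
import bibtexparser

# Illustrative entry based on the SAGE article listed below; the key is hypothetical.
BIBTEX = """
@article{yoon2023sage,
  title   = {SAGE: toward on-the-fly gradient compression ratio scaling},
  author  = {Yoon, Daegun and Jeong, Minjoong and Oh, Sangyoon},
  journal = {The Journal of Supercomputing},
  pages   = {1--23},
  year    = {2023}
}
"""

def to_plain_citation(entry: dict) -> str:
    """Format one parsed BibTeX entry as a simple author-title-venue-year string."""
    authors = entry.get("author", "").replace(" and ", "; ")
    venue = entry.get("journal") or entry.get("booktitle", "")
    return f'{authors}. "{entry.get("title", "")}." {venue}, {entry.get("year", "")}.'

db = bibtexparser.loads(BIBTEX)   # parse the BibTeX string into a database object
for entry in db.entries:          # one dict per @article / @inproceedings entry
    print(to_plain_citation(entry))
```

Running this prints something like: Yoon, Daegun; Jeong, Minjoong; Oh, Sangyoon. "SAGE: toward on-the-fly gradient compression ratio scaling." The Journal of Supercomputing, 2023.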


2023

4.

Yoon, Daegun; Oh, Sangyoon

MiCRO: Near-Zero Cost Gradient Sparsification for Scaling and Accelerating Distributed DNN Training
🌏 International Conference

30th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC 2023), 2023.

Links | BibTeX | Tags: distributed deep learning, gradient sparsification

3.

Yoon, Daegun; Oh, Sangyoon

DEFT: Exploiting Gradient Norm Difference between Model Layers for Scalable Gradient Sparsification
🌏 International Conference

International Conference on Parallel Processing (ICPP) 2023, 2023.

Abstract | Links | BibTeX | Tags: distributed deep learning, gradient sparsification

2.

Yoon, Daegun; Jeong, Minjoong; Oh, Sangyoon

SAGE: toward on-the-fly gradient compression ratio scaling
🌏 International Journal Article

In: The Journal of Supercomputing, pp. 1–23, 2023.

Abstract | Links | BibTeX | Tags: distributed deep learning, gradient sparsification

2022

1.

Yoon, Daegun; Oh, Sangyoon

Empirical Analysis on Top-k Gradient Sparsification for Distributed Deep Learning in a Supercomputing Environment
Conference

The 8th International Conference on Next Generation Computing (ICNGC) 2022, 2022.

Abstract | Links | BibTeX | Tags: distributed deep learning, GPU, gradient sparsification