{"id":873,"date":"2021-06-30T12:09:59","date_gmt":"2021-06-30T03:09:59","guid":{"rendered":"https:\/\/wise.ajou.ac.kr:9605\/?page_id=873"},"modified":"2023-09-07T22:27:41","modified_gmt":"2023-09-07T13:27:41","slug":"%ec%9c%a0%eb%af%b8%eb%a6%ac","status":"publish","type":"page","link":"https:\/\/wise.ajou.ac.kr\/?page_id=873","title":{"rendered":"\uc720\ubbf8\ub9ac"},"content":{"rendered":"<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/wise.ajou.ac.kr\/wp-content\/uploads\/2023\/09\/\uc99d\uba85\uc0ac\uc9c4-233x300-1.jpg\" alt=\"\" class=\"wp-image-2313\" style=\"width:291px;height:291px\" width=\"291\" height=\"291\"\/><\/figure>\n<\/div>\n\n\n<h1 class=\"wp-block-heading has-text-align-center\">\uc720\ubbf8\ub9ac<br><sup>Miri Yu<\/sup><\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">Email<\/h2>\n\n\n\n<p>zztiok at ajou.ac.kr<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Research interests<\/h2>\n\n\n\n<p>Cloud Computing, Distributed System, Federated Learning<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Introduction<\/h2>\n\n\n\n<p>Miri Yu is a master&#8217;s course student of the Department of Artificial Intelligence of Ajou University<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Publications<\/h2>\n\n\n<div class=\"teachpress_pub_list\"><form name=\"tppublistform\" method=\"get\"><a name=\"tppubs\" id=\"tppubs\"><\/a><\/form><div class=\"teachpress_publication_list\"><h3 class=\"tp_h3\" id=\"tp_h3_2024\">2024<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">4.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Yu, Miri;  Choi, Jiheon;  Lee, Jaehyun;  Oh, Sangyoon<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('182','tp_links')\" style=\"cursor:pointer;\">Staleness Aware Semi-asynchronous Federated Learning<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p 
class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Parallel and Distributed Computing, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_182\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('182','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_182\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('182','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_182\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('182','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_182\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{miri2024staleness,<br \/>\r\ntitle = {Staleness Aware Semi-asynchronous Federated Learning},<br \/>\r\nauthor = {Miri Yu and Jiheon Choi and Jaehyun Lee and Sangyoon Oh},<br \/>\r\nurl = {https:\/\/www.sciencedirect.com\/science\/article\/pii\/S074373152400114X},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-07-01},<br \/>\r\nurldate = {2024-07-01},<br \/>\r\njournal = {Journal of Parallel and Distributed Computing},<br \/>\r\nabstract = {As the attempts to distribute deep learning using personal data have increased, the importance of federated learning (FL) has also increased. Attempts have been made to overcome the core challenges of federated learning (i.e., statistical and system heterogeneity) using synchronous or asynchronous protocols. However, stragglers reduce training efficiency in terms of latency and accuracy in each protocol, respectively. 
To solve straggler issues, a semi-asynchronous protocol that combines the two protocols can be applied to FL; however, effectively handling the staleness of the local model is a difficult problem. We proposed SASAFL to solve the training inefficiency caused by staleness in semi-asynchronous FL. SASAFL enables stable training by considering the quality of the global model to synchronise the servers and clients. In addition, it achieves high accuracy and low latency by adjusting the number of participating clients in response to changes in global loss and immediately processing clients that did not participate in the previous round. An evaluation was conducted under various conditions to verify the effectiveness of SASAFL. SASAFL achieved 19.69%p higher accuracy than the baseline, 2.32 times higher round-to-accuracy and 2.24 times higher latency-to-accuracy. Additionally, SASAFL always achieved target accuracy that the baseline cannot reach.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('182','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_182\" style=\"display:none;\"><div class=\"tp_abstract_entry\">As the attempts to distribute deep learning using personal data have increased, the importance of federated learning (FL) has also increased. Attempts have been made to overcome the core challenges of federated learning (i.e., statistical and system heterogeneity) using synchronous or asynchronous protocols. However, stragglers reduce training efficiency in terms of latency and accuracy in each protocol, respectively. To solve straggler issues, a semi-asynchronous protocol that combines the two protocols can be applied to FL; however, effectively handling the staleness of the local model is a difficult problem. 
We proposed SASAFL to solve the training inefficiency caused by staleness in semi-asynchronous FL. SASAFL enables stable training by considering the quality of the global model to synchronise the servers and clients. In addition, it achieves high accuracy and low latency by adjusting the number of participating clients in response to changes in global loss and immediately processing clients that did not participate in the previous round. An evaluation was conducted under various conditions to verify the effectiveness of SASAFL. SASAFL achieved 19.69%p higher accuracy than the baseline, 2.32 times higher round-to-accuracy and 2.24 times higher latency-to-accuracy. Additionally, SASAFL always achieved target accuracy that the baseline cannot reach.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('182','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_182\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S074373152400114X\" title=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S074373152400114X\" target=\"_blank\">https:\/\/www.sciencedirect.com\/science\/article\/pii\/S074373152400114X<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('182','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2023\">2023<\/h3><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">3.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Yu, Miri;  Kwon, Oh-Kyoung;  Oh, Sangyoon<\/p><p class=\"tp_pub_title\">Addressing Client Heterogeneity in Synchronous Federated Learning: The CHAFL Approach <span class=\"tp_pub_type tp_  conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span 
class=\"tp_pub_additional_booktitle\">The 29th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2023), <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_171\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('171','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_171\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{nokey,<br \/>\r\ntitle = {Addressing Client Heterogeneity in Synchronous Federated Learning: The CHAFL Approach},<br \/>\r\neditor = {Miri Yu and Oh-Kyoung Kwon and Sangyoon Oh},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-11-10},<br \/>\r\nurldate = {2023-11-10},<br \/>\r\nbooktitle = {The 29th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2023)},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('171','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">2.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Lee, Seungjun;  Yu, Miri;  Yoon, Daegun;  Oh, Sangyoon<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('172','tp_links')\" style=\"cursor:pointer;\">Can hierarchical client clustering mitigate the data heterogeneity effect in federated learning?<\/a> <span class=\"tp_pub_type tp_  conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), <\/span><span class=\"tp_pub_additional_year\">2023<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 
979-8-3503-1200-3<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_172\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('172','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_172\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('172','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_172\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('172','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_172\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{lee2023ccfed,<br \/>\r\ntitle = {Can hierarchical client clustering mitigate the data heterogeneity effect in federated learning?},<br \/>\r\nauthor = {Seungjun Lee and Miri Yu and Daegun Yoon and Sangyoon Oh},<br \/>\r\nurl = {https:\/\/dx.doi.org\/10.1109\/IPDPSW59300.2023.00134},<br \/>\r\ndoi = {10.1109\/IPDPSW59300.2023.00134},<br \/>\r\nisbn = {979-8-3503-1200-3},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-05-15},<br \/>\r\nurldate = {2023-05-15},<br \/>\r\nbooktitle = {2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)},<br \/>\r\nabstract = {Federated learning (FL) was proposed for training a deep neural network model using millions of user data. The technique has attracted considerable attention owing to its privacy-preserving characteristic. However, two major challenges exist. The first is the limitation of simultaneously participating clients. If the number of clients increases, the single parameter server easily becomes a bottleneck and is prone to have stragglers. The second is data heterogeneity, which adversely affects the accuracy of the global model. 
Because data should remain at user devices to preserve privacy, we cannot use data shuffling, which is used to homogenize training data in traditional distributed deep learning. We propose a client clustering and model aggregation method, CCFed, to increase the number of simultaneously participating clients and mitigate the data heterogeneity problem. CCFed improves the learning performance using set partition modeling to let data be evenly distributed between clusters and mitigate the effect of a non-IID environment. Experiments show that we can achieve 2.7-14% higher accuracy using CCFed compared with FedAvg, where CCFed requires approximately 50% fewer rounds compared with FedAvg training on benchmark datasets.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('172','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_172\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Federated learning (FL) was proposed for training a deep neural network model using millions of user data. The technique has attracted considerable attention owing to its privacy-preserving characteristic. However, two major challenges exist. The first is the limitation of simultaneously participating clients. If the number of clients increases, the single parameter server easily becomes a bottleneck and is prone to have stragglers. The second is data heterogeneity, which adversely affects the accuracy of the global model. Because data should remain at user devices to preserve privacy, we cannot use data shuffling, which is used to homogenize training data in traditional distributed deep learning. We propose a client clustering and model aggregation method, CCFed, to increase the number of simultaneously participating clients and mitigate the data heterogeneity problem. 
CCFed improves the learning performance using set partition modeling to let data be evenly distributed between clusters and mitigate the effect of a non-IID environment. Experiments show that we can achieve 2.7-14% higher accuracy using CCFed compared with FedAvg, where CCFed requires approximately 50% fewer rounds compared with FedAvg training on benchmark datasets.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('172','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_172\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/IPDPSW59300.2023.00134\" title=\"10.1109\/IPDPSW59300.2023.00134\" target=\"_blank\">10.1109\/IPDPSW59300.2023.00134<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/IPDPSW59300.2023.00134\" title=\"Follow DOI:10.1109\/IPDPSW59300.2023.00134\" target=\"_blank\">doi:10.1109\/IPDPSW59300.2023.00134<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('172','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2021\">2021<\/h3><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">1.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Yu, Miri;  Lee, Seungjun;  Oh, Sangyoon<\/p><p class=\"tp_pub_title\">Energy-aware container migration scheme in edge computing for fault-tolerant fire-disaster response system <span class=\"tp_pub_type tp_  conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">The 7th International Conference on Next Generation Computing 2021, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_147\" class=\"tp_show\" 
onclick=\"teachpress_pub_showhide('147','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_147\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('147','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_147\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Yu2021container,<br \/>\r\ntitle = {Energy-aware container migration scheme in edge computing for fault-tolerant fire-disaster response system},<br \/>\r\nauthor = {Miri Yu and Seungjun Lee and Sangyoon Oh},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-11-05},<br \/>\r\nurldate = {2021-11-05},<br \/>\r\nbooktitle = {The 7th International Conference on Next Generation Computing 2021},<br \/>\r\nabstract = {In light of the recent advancements made in IT, many researchers are studying and exploring ways to minimize damage from fire disasters using artificial intelligence and cloud technology. With the introduction of edge computing, fire-disaster response software systems have made significant progress. However, existing studies often do not consider the response to a sudden power supply cut-off due to fire. 
In this study, we propose a container migration scheme based on the first-fit-decreasing algorithm for the bin-packing problem and the 0-1 knapsack algorithm to provide fault tolerance for containers running on edge servers that are powered off.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('147','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_147\" style=\"display:none;\"><div class=\"tp_abstract_entry\">In light of the recent advancements made in IT, many researchers are studying and exploring ways to minimize damage from fire disasters using artificial intelligence and cloud technology. With the introduction of edge computing, fire-disaster response software systems have made significant progress. However, existing studies often do not consider the response to a sudden power supply cut-off due to fire. 
In this study, we propose a container migration scheme based on the first-fit-decreasing algorithm for the bin-packing problem and the 0-1 knapsack algorithm to provide fault tolerance for containers running on edge servers that are powered off.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('147','tp_abstract')\">Close<\/a><\/p><\/div><\/div><\/div><\/div><\/div>\n","protected":false},"excerpt":{"rendered":"<p>\uc720\ubbf8\ub9acMiri Yu Email zztiok at ajou.ac.kr Research interests Cloud Computing, Distributed Systems, Federated Learning Introduction Miri Yu is a master&#8217;s student in the Department of Artificial Intelligence at Ajou University Publications<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":785,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_uag_custom_page_level_css":"","footnotes":""},"class_list":["post-873","page","type-page","status-publish","hentry"],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false,"twentyseventeen-featured-image":false,"twentyseventeen-thumbnail-avatar":false},"uagb_author_info":{"display_name":"wise","author_link":"https:\/\/wise.ajou.ac.kr\/?author=1"},"uagb_comment_info":0,"uagb_excerpt":"\uc720\ubbf8\ub9acMiri Yu Email zztiok at ajou.ac.kr Research interests Cloud Computing, Distributed Systems, Federated Learning Introduction Miri Yu is a master&#8217;s student in the Department of Artificial Intelligence at Ajou University 
Publications","_links":{"self":[{"href":"https:\/\/wise.ajou.ac.kr\/index.php?rest_route=\/wp\/v2\/pages\/873","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wise.ajou.ac.kr\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/wise.ajou.ac.kr\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/wise.ajou.ac.kr\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/wise.ajou.ac.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=873"}],"version-history":[{"count":22,"href":"https:\/\/wise.ajou.ac.kr\/index.php?rest_route=\/wp\/v2\/pages\/873\/revisions"}],"predecessor-version":[{"id":2314,"href":"https:\/\/wise.ajou.ac.kr\/index.php?rest_route=\/wp\/v2\/pages\/873\/revisions\/2314"}],"up":[{"embeddable":true,"href":"https:\/\/wise.ajou.ac.kr\/index.php?rest_route=\/wp\/v2\/pages\/785"}],"wp:attachment":[{"href":"https:\/\/wise.ajou.ac.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=873"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}