An Energy-aware Multi-Criteria Federated Learning Model for Edge Computing
Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. ORCID iD: 0000-0001-6061-0861
Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. ORCID iD: 0000-0002-3118-5058
Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. ORCID iD: 0000-0003-3128-191X
2021 (English). In: Proceedings - 2021 International Conference on Future Internet of Things and Cloud, FiCloud 2021 / [ed] Younas M., Awan I., Unal P., IEEE, 2021, p. 134-143. Conference paper, Published paper (Refereed)
Abstract [en]

The successful convergence of Internet of Things (IoT) technology and distributed machine learning has been leveraged to realise the concept of Federated Learning (FL) through the collaborative efforts of a large number of low-powered and small-sized edge nodes. In Wireless Networks (WN), energy-efficient transmission is a fundamental challenge since the energy resources of edge nodes are restricted. In this paper, we propose an Energy-aware Multi-Criteria Federated Learning (EaMC-FL) model for edge computing. The proposed model enables the collaborative training of a shared global model by aggregating locally trained models from selected representative edge nodes (workers). The involved workers are initially partitioned into a number of clusters with respect to the similarity of their local model parameters. At each training round, a small set of representative workers is selected on the basis of a multi-criteria evaluation that scores each node's representativeness (importance) by taking into account the trade-off among the node's local model performance, consumed energy and battery lifetime. We have demonstrated through experimental results that the proposed EaMC-FL model is capable of reducing the energy consumed by the edge nodes by lowering the amount of transmitted data.
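
To make the round structure described in the abstract concrete, the Python sketch below illustrates one possible reading of the EaMC-FL selection step: workers are clustered by the similarity of their local model parameters, scored on local model performance, consumed energy and battery lifetime, and one representative per cluster uploads its model for aggregation. The use of k-means, the min-max normalisation and the criteria weights are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of an EaMC-FL-style round (not the authors' exact implementation).
import numpy as np
from sklearn.cluster import KMeans

def normalise(x):
    """Min-max scale a 1-D array to [0, 1]; constant arrays map to 0.5."""
    x = np.asarray(x, dtype=float)
    rng = x.max() - x.min()
    return np.full_like(x, 0.5) if rng == 0 else (x - x.min()) / rng

def select_representatives(local_params, accuracy, energy, battery,
                           n_clusters=3, weights=(0.5, 0.25, 0.25)):
    """Return the index of one representative worker per cluster.

    local_params : (n_workers, n_params) flattened local model parameters
    accuracy     : local model performance (higher is better)
    energy       : energy consumed in the last round (lower is better)
    battery      : remaining battery lifetime (higher is better)
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(local_params)
    # Multi-criteria score: reward accuracy and battery lifetime, penalise energy use.
    w_acc, w_energy, w_batt = weights
    score = (w_acc * normalise(accuracy)
             - w_energy * normalise(energy)
             + w_batt * normalise(battery))
    # Pick the best-scoring worker inside each cluster.
    return [int(np.argmax(np.where(labels == c, score, -np.inf)))
            for c in range(n_clusters)]

def aggregate(local_params, selected):
    """FedAvg-style aggregation over the selected representatives only."""
    return np.mean(local_params[selected], axis=0)

# Toy usage: 10 workers with 8-dimensional "models".
rng = np.random.default_rng(0)
params = rng.normal(size=(10, 8))
reps = select_representatives(params,
                              accuracy=rng.uniform(0.6, 0.95, 10),
                              energy=rng.uniform(1.0, 5.0, 10),
                              battery=rng.uniform(0.2, 1.0, 10))
global_model = aggregate(params, reps)
```

Because only one model per cluster is transmitted each round, the volume of uploaded data (and hence transmission energy) shrinks roughly from the number of workers to the number of clusters, which is the trade-off the paper targets.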

Place, publisher, year, edition, pages
IEEE, 2021. p. 134-143
Keywords [en]
Federated Learning, Clustering Analysis, Energy consumption, battery lifetime, Human Activity Recognition
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:bth-22236
DOI: 10.1109/FiCloud49777.2021.00027
Scopus ID: 2-s2.0-85119667934
ISBN: 9781665425742 (print)
OAI: oai:DiVA.org:bth-22236
DiVA, id: diva2:1605945
Conference
8th International Conference on Future Internet of Things and Cloud, FiCloud 2021, Virtual, Online, 23 August 2021 through 25 August 2021
Available from: 2021-10-26 Created: 2021-10-26 Last updated: 2024-04-05 Bibliographically approved
In thesis
1. Resource-Aware and Personalized Federated Learning via Clustering Analysis
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Today’s advancements in Artificial Intelligence (AI) enable training Machine Learning (ML) models on the data produced daily by connected edge devices. To make the most of the data stored on the devices, conventional ML approaches require gathering all individual data sets and transferring them to a central location to train a common model. However, centralizing data incurs significant costs related to communication, network resource utilization, high traffic volumes, and privacy issues. To address the aforementioned challenges, Federated Learning (FL) is employed as a novel approach to train a shared model on decentralized edge devices while preserving privacy. Despite the significant potential of FL, it still requires considerable resources such as time, computational power, energy, and bandwidth availability. More importantly, the computational capabilities of the training devices may vary over time. Furthermore, the devices involved in the training process of FL may have distinct training datasets that differ in size and distribution. As a result, the convergence of the FL models may become unstable and slow. These differences can influence the FL process and ultimately lead to suboptimal model performance within a heterogeneous federated network.
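
As a point of reference for the shared-model training described above, the following Python sketch shows a minimal FedAvg-style round: each client trains locally on its own data and the server averages the returned parameters weighted by local dataset size, so raw data never leaves the device. The linear-regression clients and all numeric settings are illustrative assumptions, not the thesis implementation.

```python
# Minimal FedAvg-style round: local training plus dataset-size-weighted averaging.
import numpy as np

def local_update(global_model, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps of linear regression."""
    w = global_model.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(client_models, client_sizes):
    """Server aggregation: average client models weighted by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    return np.average(np.stack(client_models), axis=0, weights=sizes)

# Toy round with 3 clients holding different amounts of data.
rng = np.random.default_rng(2)
w_true = np.array([1.0, -2.0])
clients = []
for n in (20, 50, 80):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + rng.normal(0, 0.1, n)
    clients.append((X, y))

w_global = np.zeros(2)
local_models = [local_update(w_global, X, y) for X, y in clients]
w_global = fedavg(local_models, [len(y) for _, y in clients])
```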

In this thesis, we have tackled several of the aforementioned challenges. Initially, an FL algorithm is proposed that utilizes cluster analysis to address the problem of communication overhead. This issue poses a major bottleneck in FL, particularly for complex models, large-scale applications, and frequent updates. The next research conducted in this thesis extended the previous study to wireless networks (WNs). In WNs, achieving energy-efficient transmission is a significant challenge due to their limited resources. This motivated us to continue with a comprehensive overview and classification of the latest advancements in context-aware edge-based AI models, with a specific emphasis on sensor networks. The review also investigated the associated challenges and motivations for adopting AI techniques, along with an evaluation of current areas of research that need further investigation. To optimize the aggregation of the FL model and alleviate communication expenses, the initial study addressing communication overhead is extended with an FL-based cluster optimization approach. Furthermore, to reduce the detrimental effect of data heterogeneity among edge devices on FL, a new study of group-personalized FL models has been conducted. Finally, taking inspiration from the previously mentioned FL models, techniques for assessing clients' contributions by monitoring and evaluating their behavior during training are proposed. In comparison with most existing contribution evaluation solutions, the proposed techniques do not require significant computational resources.
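
The group-personalized idea mentioned in the paragraph above can be sketched as follows: clients are grouped by the direction of their model updates (a proxy for their data distribution), and each group aggregates its own model, so non-IID clients no longer pull a single global model in conflicting directions. Clustering unit-normalised updates with k-means and the choice of group count are assumptions made for illustration; they are not the thesis's exact method.

```python
# Sketch of a group-personalized aggregation round under the stated assumptions.
import numpy as np
from sklearn.cluster import KMeans

def group_personalized_round(client_updates, n_groups=2):
    """Cluster client updates, then average within each group.

    client_updates : (n_clients, n_params) flattened local updates (or models)
    Returns (labels, group_models) where group_models[g] is group g's model.
    """
    # Normalise each update to unit length so clustering is driven by its
    # direction rather than its magnitude.
    norms = np.linalg.norm(client_updates, axis=1, keepdims=True)
    directions = client_updates / np.clip(norms, 1e-12, None)
    labels = KMeans(n_clusters=n_groups, n_init=10,
                    random_state=0).fit_predict(directions)
    # Each group keeps its own aggregated (personalized) model.
    group_models = {g: client_updates[labels == g].mean(axis=0)
                    for g in range(n_groups)}
    return labels, group_models

# Toy usage: two latent data distributions among 6 clients.
rng = np.random.default_rng(1)
updates = np.vstack([rng.normal(+1.0, 0.1, size=(3, 4)),
                     rng.normal(-1.0, 0.1, size=(3, 4))])
labels, models = group_personalized_round(updates)
```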

The FL algorithms presented in this thesis are assessed on a range of real-world datasets. The extensive experiments demonstrated that the proposed FL techniques are effective and robust. These techniques improve communication efficiency, resource utilization, model convergence speed, and aggregation efficiency, and also mitigate the impact of data heterogeneity, when compared to other state-of-the-art methods.

Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2024. p. 260
Series
Blekinge Institute of Technology Doctoral Dissertation Series, ISSN 1653-2090 ; 2024:04
Keywords
Federated Learning, Clustering Analysis, Eccentricity Analysis, Non-IID Data, Model Personalization
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:bth-26081
ISBN: 978-91-7295-478-6
Public defence
2024-05-17, C413A, Karlskrona, 10:00 (English)
Opponent
Supervisors
Available from: 2024-04-05 Created: 2024-04-05 Last updated: 2024-04-22 Bibliographically approved

Open Access in DiVA

fulltext (4883 kB), 589 downloads
File information
File name: FULLTEXT01.pdf
File size: 4883 kB
Checksum (SHA-512):
3a1eacf1479daf09cf1159e0768dfc8847a4a8b5b46d98c2bfe89ca2bb2c7dfb768ad2c8ef2e9e7f1d695706035f88c0fdc6959dd55d010aa66500e625a0774b
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Al-Saedi, Ahmed Abbas Mohsin; Casalicchio, Emiliano; Boeva, Veselka
