Search results

Article
Publication date: 14 July 2022

Pradyumna Kumar Tripathy, Anurag Shrivastava, Varsha Agarwal, Devangkumar Umakant Shah, Chandra Sekhar Reddy L. and S.V. Akilandeeswari

Abstract

Purpose

This paper aims to provide security and privacy for federated learning against Byzantine clients and different types of attacks.

Design/methodology/approach

In this paper, the authors use a federated learning algorithm based on matrix mapping for data privacy over edge computing.

Findings

By using the probability distribution of the model's softmax layer, Byzantine tolerance can be increased from 40% to 45% under the blocking-convergence attack, and the edge backdoor attack can be stopped.

Originality/value

The test results show that, by using the probability distribution of the model's softmax layer, the aggregation method can tolerate at least 30% of clients being Byzantine.
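
The abstract names the mechanism (the softmax layer's probability distribution drives a Byzantine-tolerant aggregation) but not the exact rule. The sketch below is one plausible reading, not the authors' algorithm: each client model is run on a small server-side probe batch, its softmax distribution is compared with a robust (median) reference, and the most deviant updates are dropped before weighted averaging. The function names, the total-variation distance, the probe batch and the tol parameter are all illustrative assumptions, and the paper's matrix-mapping step for data privacy is not modelled here.

    import numpy as np

    def softmax(z, axis=-1):
        # numerically stable softmax over the class axis
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def filter_byzantine(client_logits, tol=0.3):
        # client_logits: one (batch, classes) logit array per client,
        # produced by running each client's model on a shared probe batch
        probs = np.stack([softmax(l) for l in client_logits])
        ref = np.median(probs, axis=0)  # robust per-example reference distribution
        # mean total-variation distance of each client from the reference
        dist = 0.5 * np.abs(probs - ref).sum(axis=-1).mean(axis=-1)
        k = int(np.ceil(tol * len(client_logits)))  # assume at most `tol` Byzantine
        return np.argsort(dist)[: len(client_logits) - k]  # keep the closest clients

    def aggregate(updates, keep, sizes):
        # FedAvg-style weighted average over the clients that survived filtering
        w = np.array([sizes[i] for i in keep], dtype=float)
        w /= w.sum()
        return sum(wi * updates[i] for wi, i in zip(w, keep))

Dropping a fixed fraction of the most deviant clients matches the reported 30% tolerance figure only under this assumed reading of the abstract.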

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 28 November 2023

Tingting Tian, Hongjian Shi, Ruhui Ma and Yuan Liu

Abstract

Purpose

For privacy protection, federated learning based on data separation allows machine learning models to be trained on remote devices or on isolated data silos. However, because local devices have limited resources such as bandwidth and power, communication in federated learning can be much slower than local computation. This study aims to improve communication efficiency by reducing both the number of communication rounds and the size of the information transmitted in each round.

Design/methodology/approach

This paper allows each user node to perform multiple local training steps and then upload its local model parameters to a central server. The central server updates the global model parameters by a weighted average of the uploaded parameters. Building on this aggregation, user nodes first cluster the parameter values to be uploaded and then replace each value with the mean of its cluster. Considering the asymmetry of the federated learning framework, the method adaptively selects the optimal number of clusters required to compress the model information.
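
The paragraph above is concrete enough to sketch: cluster the parameter values to be uploaded, replace each value with the mean of its cluster, and adaptively pick the smallest number of clusters that still represents the update well. The NumPy sketch below assumes a power-of-two search over cluster counts and a relative-error stopping rule (rel_err); both are illustrative choices, not details from the paper.

    import numpy as np

    def kmeans_1d(values, k, iters=20):
        # plain Lloyd iterations on the flattened parameter vector,
        # initialised at evenly spaced quantiles
        centroids = np.quantile(values, np.linspace(0.0, 1.0, k))
        for _ in range(iters):
            assign = np.abs(values[:, None] - centroids[None, :]).argmin(axis=1)
            for j in range(k):
                members = values[assign == j]
                if members.size:
                    centroids[j] = members.mean()
        return centroids, assign

    def compress_update(params, max_k=32, rel_err=1e-2):
        # Replace each parameter with its cluster mean; the client then uploads
        # only k centroids plus a log2(k)-bit cluster index per parameter.
        flat = params.ravel()
        for k in (2 ** p for p in range(1, int(np.log2(max_k)) + 1)):
            centroids, assign = kmeans_1d(flat, k)
            # stop at the smallest k whose quantization error is acceptable
            if np.linalg.norm(flat - centroids[assign]) <= rel_err * np.linalg.norm(flat):
                break
        return centroids, assign.reshape(params.shape)

    def decompress(centroids, assign):
        # server side: reconstruct the quantized update before weighted averaging
        return centroids[assign]

With k = 32, each 32-bit parameter shrinks to a 5-bit index, roughly a 6x reduction in uplink traffic before any further entropy coding.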

Findings

While maintaining a loss convergence rate similar to that of federated averaging, the proposed method does not significantly reduce test accuracy.

Originality/value

By compressing uplink traffic, this work improves communication efficiency on dynamic networks with limited resources.

Details

International Journal of Web Information Systems, vol. 20 no. 1
Type: Research Article
ISSN: 1744-0084
