Robust Federated Learning With Contrastive Learning and Meta-Learning

Authors

Huan Zhang, Yuxiang Chen, Kuanching Li, Yuhui Li, Sisi Zhou, Wei Liang, and Aneta Poniszewska-Maranda

DOI:

https://doi.org/10.9781/ijimai.2025.09.004

Keywords:

Contrastive Learning, Federated Learning, Meta-Learning, Non-Independent and Identically Distributed (Non-IID)
Supporting Agencies

This work was supported in part by the Joint Key Project of the National Natural Science Foundation of China under Grant U2468205; in part by the National Natural Science Foundation of China under Grant 62202156 and Grant 62472168; in part by the Hunan Provincial Key Research and Development Program under Grant 2023GK2001 and Grant 2024AQ2028; in part by the Hunan Provincial Natural Science Foundation of China under Grant 2024JJ6220; and in part by the Research Foundation of the Education Bureau of Hunan Province under Grant 23B0487.

Abstract

Federated learning is regarded as an effective approach to addressing data privacy issues in the era of artificial intelligence, yet it faces the challenges of unbalanced data distribution and client vulnerability to attacks. Existing research addresses these challenges but overlooks the situation where abnormal updates account for a large proportion of all updates, in which case the aggregated model may absorb so much abnormal information that it deviates from the normal update direction, degrading model performance. Other approaches are unsuited to non-Independent and Identically Distributed (non-IID) settings, where the scarcity of information on minority classes can lead to inaccurate predictions. In this work, we propose a robust federated learning architecture, called FedCM, which integrates contrastive learning and meta-learning to mitigate the impact of poisoned client data on global model updates. The approach refines features through contrastive learning, combining the extracted data characteristics with the previous round of local models to improve accuracy. Additionally, a meta-learning method based on Gaussian-noise-perturbed model parameters is employed to fine-tune the local model with the global model, addressing the challenges posed by non-IID data and enhancing the model's robustness. Experimental validation is conducted on real datasets, including CIFAR10, CIFAR100, and SVHN. The results show that FedCM achieves the highest average model accuracy across all proportions of attacked clients. Under a non-IID distribution with parameter 0.5 on CIFAR10 and attacked-client proportions of 0.2, 0.5, and 0.8, FedCM improves the average accuracy over the baseline methods by 8.2%, 7.9%, and 4.6%, respectively. Across different proportions of attacked clients, FedCM achieves at least 4.6%, 5.2%, and 0.45% improvements in average accuracy on the CIFAR10, CIFAR100, and SVHN datasets, respectively. FedCM also converges faster in all training groups, with a particularly clear advantage on the SVHN dataset, where the number of training rounds required for convergence is reduced by approximately 34.78% compared to the other methods.
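
As a reading aid, the following is a minimal, hypothetical PyTorch sketch of the two mechanisms named in the abstract, not the paper's actual implementation. It assumes a MOON-style model-contrastive regularizer (pulling the local representation toward the global model's and away from the previous-round local model's) and a simple Gaussian perturbation of the global weights before local fine-tuning as a stand-in for the noise-based meta-learning step. The function names (contrastive_regularizer, perturbed_copy, local_update), the hyperparameters tau, sigma, and mu, and the assumption that the model's forward pass returns a (features, logits) pair are all illustrative.

import copy
import torch
import torch.nn.functional as F

def contrastive_regularizer(z_local, z_global, z_prev, tau=0.5):
    # Pull the local representation toward the global model's representation
    # (positive pair) and away from the previous-round local model's (negative pair).
    pos = F.cosine_similarity(z_local, z_global, dim=-1) / tau
    neg = F.cosine_similarity(z_local, z_prev, dim=-1) / tau
    logits = torch.stack([pos, neg], dim=1)  # column 0 holds the positive pair
    labels = torch.zeros(z_local.size(0), dtype=torch.long, device=z_local.device)
    return F.cross_entropy(logits, labels)

def perturbed_copy(global_model, sigma=0.01):
    # Initialize the local model from Gaussian-perturbed global weights
    # (assumed stand-in for the paper's noise-based meta-learning step).
    local = copy.deepcopy(global_model)
    with torch.no_grad():
        for p in local.parameters():
            p.add_(sigma * torch.randn_like(p))
    return local

def local_update(global_model, prev_local_model, loader, epochs=1, mu=1.0, lr=0.01):
    # One client's round: supervised loss plus the contrastive regularizer.
    # Assumes the model's forward pass returns a (features, logits) pair.
    model = perturbed_copy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            features, logits = model(x)
            with torch.no_grad():
                global_features, _ = global_model(x)
                prev_features, _ = prev_local_model(x)
            loss = F.cross_entropy(logits, y) + mu * contrastive_regularizer(
                features, global_features, prev_features)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model.state_dict()

A full round would then aggregate the returned state dictionaries on the server; the aggregation rule and the paper's exact loss definitions are outside this sketch.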


Author Biographies

Huan Zhang, Hunan University

Huan Zhang received her bachelor's degree from Changsha University in 2019. She is currently pursuing a master's degree at Hunan University of Science and Technology. Her current research interests include artificial intelligence, network security, and anomaly detection.

Yuxiang Chen, Hunan University

Yuxiang Chen received the Ph.D. degree from Hunan University, Changsha, China, in 2021. He is currently an Assistant Professor with Hunan University of Science and Technology, Xiangtan, China. His research interests include network monitoring, network security, big data, and AI.

Kuanching Li, Hunan University

Kuanching Li is a Professor at the School of Computer Science and Engineering, Hunan University of Science and Technology. Dr. Li has co-authored over 150 conference and journal papers, holds several patents, and serves as an associate and guest editor for various scientific journals. He has also held chair positions at several prestigious international conferences. His research interests include cloud and edge computing, big data, and blockchain technologies. Dr. Li is a Fellow of the IET.

Yuhui Li, Hong Kong Polytechnic University

Yuhui Li is a first-year PhD student at The Hong Kong Polytechnic University. He received an M.Eng. degree from Hunan University in 2024. Prior to that, he received a B.Eng. degree from Shantou University in 2021. He has published several high-quality peer-reviewed papers in top journals and conferences, including IEEE TC, IEEE INFOCOM, IEEE TDSC, IEEE TITS, IEEE ICME, IEEE TCBB, and IEEE SCC/SSE. His research interests include network measurements, service computing, network security, and deep learning.

Sisi Zhou, Hunan University

Sisi Zhou received her bachelor's and master's degrees in 2009 and 2012, respectively, and is currently pursuing a Ph.D. degree at Hunan University of Science and Technology. Her research interests include information security and privacy protection, with a focus on blockchain technology and federated learning.

Wei Liang, Hunan University

Wei Liang (Senior Member, IEEE) received the Ph.D. degree from Hunan University, Changsha, China, in 2013. He was a Postdoctoral Scholar with Lehigh University, Bethlehem, PA, USA, from 2014 to 2016. He is currently a Professor with the School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan, China. His research interests include intelligent transportation, security of IoV, blockchain, embedded systems and hardware IP protection, and security management in wireless sensor networks.

Aneta Poniszewska-Maranda, Lodz University of Technology

Aneta Poniszewska-Maranda received the M.Sc. degree in computer science from Lodz University of Technology in 1998, the Ph.D. degree in computer science from the University of Artois, France, in 2003, and the D.Sc. degree in computer science from Czestochowa University of Technology, Poland, in 2014. Her research interests include software engineering, information systems security, analysis and design of information systems, multi-agent-based systems, cloud computing, the Internet of Things, mobile security, blockchain, data analysis, machine learning, data processing, distributed systems, and optimization. She has published more than 160 research papers in journals, conference proceedings, and books. She is a reviewer for more than 40 international research journals, a member of the editorial and reviewer boards of research journals, and has served as Chair, Vice-Chair, and PC member of many international scientific conferences around the world. She is also a member of the ACM, IEEE, AIS, and INSTICC research organizations.


Published

2025-10-09

How to Cite

Zhang, H., Chen, Y., Li, K., Li, Y., Zhou, S., Liang, W., and Poniszewska-Maranda, A. (2025). Robust Federated Learning With Contrastive Learning and Meta-Learning. International Journal of Interactive Multimedia and Artificial Intelligence. https://doi.org/10.9781/ijimai.2025.09.004

Issue

Section

Articles