A Robustness Study of Multi-Layer Perceptrons and Logistic Regression to Data Perturbation: MNIST Dataset

Authors

  • Muhammad Thahiruddin, Universitas Annuqayah
  • Siti Khotijah, Universitas Annuqayah
  • Moh. Fajar, Universitas Annuqayah
  • Adib El Farras, Universitas Annuqayah

DOI:

https://doi.org/10.31102/zeta.2025.10.1.39-50

Keywords:

Machine Learning Robustness, Data Perturbation, Multi-Layer Perceptrons, Logistic Regression, MNIST Dataset

Abstract

This study systematically evaluates the robustness of Multi-Layer Perceptrons (MLPs) and Logistic Regression (LR) models against data perturbations using the MNIST handwritten digit dataset. While MLPs and LR are foundational in machine learning, their comparative resilience to diverse perturbations (noise, geometric distortions, and adversarial attacks) remains underexplored, despite implications for real-world applications with imperfect data. We test three perturbation categories: noise (Gaussian, σ = 0.1 to 1.0; salt-and-pepper, p = 0.1 to 0.5), rotational distortions (5° to 30°), and adversarial attacks (FGSM, ϵ = 0.005 to 0.30). Both models were trained on 60,000 MNIST samples and tested on 10,000 perturbed images. Results demonstrate that MLPs exhibit superior robustness under moderate noise and rotations, achieving a baseline accuracy of 97.07% (vs. LR's 92.63%). For Gaussian noise (σ = 0.5), the MLP retained 35.35% accuracy compared to LR's 23.91%. However, adversarial attacks (FGSM, ϵ = 0.30) reduced MLP accuracy to 0.20%, revealing critical vulnerabilities. Statistical analysis (paired t-test, p < 0.05) confirmed significant performance differences across perturbation levels. A linear regression (R² = 0.98) further quantified the MLP's predictable accuracy decline with Gaussian noise intensity. These findings underscore the MLP's suitability for noise-prone environments but highlight the urgent need for adversarial defense mechanisms. Practitioners are advised to prioritize MLPs for tasks with moderate distortions, while future work should integrate robustness enhancements such as adversarial training.
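
The perturbation pipeline summarized above can be reproduced with a few lines of NumPy. The sketch below is illustrative only, not the authors' code: it assumes flattened 28×28 MNIST images scaled to [0, 1], uses the parameter ranges quoted in the abstract, and implements FGSM in closed form for a linear softmax (logistic-regression) classifier, whose input gradient of the cross-entropy loss is (p − onehot(y))·W.

```python
# Illustrative sketch only (not the authors' released code).
# Assumes x is an (n, 784) array of flattened 28x28 MNIST images scaled to [0, 1].
import numpy as np
from scipy.ndimage import rotate

def add_gaussian_noise(x, sigma=0.5, rng=None):
    """Zero-mean Gaussian pixel noise with std sigma (0.1 to 1.0 in the study)."""
    rng = np.random.default_rng(0) if rng is None else rng
    return np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)

def add_salt_pepper_noise(x, p=0.3, rng=None):
    """Corrupt a fraction p of pixels (0.1 to 0.5 in the study), randomly to 0 or 1."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = x.copy()
    mask = rng.random(x.shape) < p
    x[mask] = rng.integers(0, 2, size=int(mask.sum())).astype(x.dtype)
    return x

def rotate_images(x, angle_deg=15.0):
    """Rotate each image by angle_deg (5 to 30 degrees in the study), keeping the 28x28 frame."""
    imgs = x.reshape(-1, 28, 28)
    rotated = np.stack([rotate(im, angle_deg, reshape=False, order=1) for im in imgs])
    return np.clip(rotated, 0.0, 1.0).reshape(x.shape)

def fgsm_linear_softmax(x, y, W, b, eps=0.1):
    """FGSM (Goodfellow et al.) for a linear softmax classifier such as LR:
    x_adv = x + eps * sign(d CE-loss / d x).  W has shape (n_classes, n_features),
    b shape (n_classes,); eps spans 0.005 to 0.30 in the study."""
    logits = x @ W.T + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    onehot = np.eye(W.shape[0])[y]
    grad = (probs - onehot) @ W                   # gradient of the loss w.r.t. the input
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)
```

Accuracies of an MLP (e.g. scikit-learn's MLPClassifier) and of LogisticRegression on each perturbed test set can then be compared across perturbation levels with a paired t-test (scipy.stats.ttest_rel) and summarized by an ordinary least-squares fit of accuracy against σ, mirroring the R² reported above. Note that attacking the MLP itself requires its input gradient (e.g. via automatic differentiation), which scikit-learn does not expose.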


References

I. Goodfellow, Y. Bengio, and A. Courville, “Deep Learning.” MIT Press, 2016.

M. Kuhn and K. Johnson, “Feature Engineering and Selection: A Practical Approach for Predictive Models,” 1st ed. doi: 10.4324/9781315108230.

M. Goldblum, A. Schwarzschild, A. Patel, and T. Goldstein, “Adversarial attacks on machine learning systems for high-frequency trading,” in ICAIF 2021 - 2nd ACM International Conference on AI in Finance, Association for Computing Machinery, Inc, Nov. 2021. doi: 10.1145/3490354.3494367.

A. Serban, E. Poll, and J. Visser, “Learning to Learn from Mistakes: Robust Optimization for Adversarial Noise,” Aug. 2020, [Online]. Available: http://arxiv.org/abs/2008.05247

L. Ayachi, “Assessing Forecasting Model Robustness Through Curvature-Based Noise Perturbations,” in International Joint Conference on Computational Intelligence, Science and Technology Publications, Lda, 2024, pp. 488–495. doi: 10.5220/0013061600003837.

H. Kim, J. Park, Y. Choi, and J. Lee, “Fantastic Robustness Measures: The Secrets of Robust Generalization.”

A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards Deep Learning Models Resistant to Adversarial Attacks,” Jun. 2017, [Online]. Available: http://arxiv.org/abs/1706.06083

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-Based Learning Applied to Document Recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.

D. Hendrycks and T. Dietterich, “Benchmarking Neural Network Robustness to Common Corruptions and Perturbations,” Mar. 2019, [Online]. Available: http://arxiv.org/abs/1903.12261

D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” Dec. 2014, [Online]. Available: http://arxiv.org/abs/1412.6980

I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and Harnessing Adversarial Examples,” Dec. 2014, [Online]. Available: http://arxiv.org/abs/1412.6572

C. Szegedy et al., “Intriguing properties of neural networks,” Dec. 2013, [Online]. Available: http://arxiv.org/abs/1312.6199

A. Fawzi, S.-M. Moosavi-Dezfooli, and P. Frossard, “Robustness of classifiers: from adversarial to random noise.”

P. Y. Simard, D. Steinkraus, and J. C. Platt, “Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis,” 2003.

P. Y. Chen, H. Zhang, Y. Sharma, J. Yi, and C. J. Hsieh, “ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models,” in AISec 2017 - Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, co-located with CCS 2017, Association for Computing Machinery, Inc, Nov. 2017, pp. 15–26. doi: 10.1145/3128572.3140448.

J. Demšar, “Statistical Comparisons of Classifiers over Multiple Data Sets,” 2006.


Published

2025-05-27

How to Cite

Thahiruddin, M., Khotijah, S., Fajar, M., & Farras, A. E. (2025). A Robustness Study of Multi-Layer Perceptrons and Logistic Regression to Data Perturbation: MNIST Dataset. Zeta - Math Journal, 10(1), 39–50. https://doi.org/10.31102/zeta.2025.10.1.39-50

Issue

Vol. 10 No. 1 (2025)

Section

Articles