A survey on adversarial attack and defense of deep learning models for medical image recognition

Jipeng Hu, Jinyu Wen, Meie Fang

Article ID: 2156
Vol 4, Issue 1, 2023

Abstract

The advancement of hardware and computing power has enabled deep learning to be applied in a variety of fields, particularly in AI medical applications for intelligent medicine and the medical metaverse. Deep learning models now assist in many clinical medical image analysis tasks, including fusion, registration, detection, classification, and segmentation. In recent years, many deep learning-based approaches have been developed for medical image recognition, covering both classification and segmentation. However, these models are susceptible to adversarial samples, which threatens their real-world application and makes them unsuitable for clinical use. This paper provides an overview of the adversarial attack strategies that have been proposed against medical image models and of the defense methods used to protect them. We assess the advantages and disadvantages of these strategies and compare their effectiveness. We then examine the current state and limitations of research on adversarial attack and defense for deep learning models in medical image recognition. Finally, we offer several suggestions for enhancing the robustness of medical image deep learning models in intelligent medicine and the medical metaverse.
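To make the threat concrete, the sketch below shows a projected gradient descent (PGD) attack in the style of Madry et al. (reference 42), one of the attack families the survey covers, applied to a hypothetical PyTorch classifier. The model, the [0, 1] pixel range, and the hyperparameters eps, alpha, and steps are illustrative assumptions, not settings taken from any surveyed paper.

```python
# Minimal PGD sketch (after Madry et al., reference 42), assuming a PyTorch
# classifier over images normalized to [0, 1]. Model and hyperparameters are
# hypothetical; put the model in eval mode before calling.
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=2/255, alpha=0.5/255, steps=10):
    """Return adversarial copies of `images` within an L-infinity ball of radius eps."""
    images = images.detach()
    # Random start inside the epsilon ball, as in Madry et al.
    adv = images + torch.empty_like(images).uniform_(-eps, eps)
    adv = adv.clamp(0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                  # one ascent step on the loss
            adv = images + (adv - images).clamp(-eps, eps)   # project back into the ball
            adv = adv.clamp(0, 1)                            # keep pixels in valid range
    return adv.detach()
```

A defense such as adversarial training (also reference 42) would reuse this routine, generating the perturbed batch inside the training loop and fitting the model on it.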

Keywords

adversarial samples; attack; defense; deep learning; medical image

References

1. Ma X, Niu Y, Gu L, et al. Understanding adversarial attacks on deep learning based medical image analysis systems. Pattern Recognition 2021; 110: 107332.

2. Li X, Zhu D. Robust detection of adversarial attacks on medical images. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); 2020 Apr. 3-7; Iowa City, IA, USA. New York: IEEE; 2020. p. 1154–1158. doi: 10.1109/ISBI45749.2020.9098628.

3. Paul R, Schabath M, Gillies R, et al. Mitigating adversarial attacks on medical image understanding systems. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); 2020 Apr. 3-7; Iowa City, IA, USA. New York: IEEE; 2020. p. 1517–1521. doi: 10.1109/ISBI45749.2020.9098740.

4. Park H, Bayat A, Sabokrou M, et al. Robustification of segmentation models against adversarial perturbations in medical imaging. In: Rekik I, Adeli E, Park SH, et al. (editors). Predictive Intelligence in Medicine. PRIME 2020. Lecture Notes in Computer Science (vol. 12329). Cham: Springer; 2020. p. 46–57. doi: 10.1007/978-3-030-59354-4_5.

5. Li X, Pan D, Zhu D. Defending against adversarial attacks on medical imaging AI system, classification or detection? In: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI); 2021 Apr. 13-16; Nice, France; New York: IEEE; 2021. p. 1677–1681.

6. Minagi A, Hirano H, Takemoto K. Natural images allow universal adversarial attacks on medical image classification using deep neural networks with transfer learning. Journal of Imaging 2022; 8(2): 38. doi: 10.3390/jimaging8020038.

7. Apostolidis KD, Papakostas GA. Digital watermarking as an adversarial attack on medical image analysis with deep learning. Journal of Imaging 2022; 8(6): 155. doi: 10.3390/jimaging8060155.

8. Winter TC. Malicious adversarial attacks on medical image analysis. American Journal of Roentgenology 2020; 215(5): W55. doi: 10.2214/AJR.20.23250.

9. Desjardins B, Ritenour ER. Reply to “Malicious Adversarial Attacks on Medical Image Analysis”. American Journal of Roentgenology 2020; 215(5): W56.

10. Yao Q, He Z, Zhou SK. Medical Aegis: Robust adversarial protectors for medical images. arXiv:2111.10969v1. 2021.

11. Zhou Q, Zuley M, Guo Y, et al. A machine and human reader study on AI diagnosis model safety under attacks of adversarial images. Nature Communications 2021; 12(1): 1–11.

12. Selvakkumar A, Pal S, Jadidi Z. Addressing adversarial machine learning attacks in smart healthcare perspectives. In: Suryadevara NK, George B, Jayasundera KP, et al. (editors). Sensing Technology. Lecture Notes in Electrical Engineering (vol. 886). Cham: Springer; 2022. p. 269–282. doi: 10.1007/978-3-030-98886-9_21.

13. Khowaja SA, Lee IH, Dev K, et al. Get your foes fooled: Proximal gradient split learning for defense against model inversion attacks on IoMT data. arXiv:2201.04569v3. 2022. doi: 10.48550/arXiv.2201.04569.

14. Rodriguez D, Nayak T, Chen Y, et al. On the role of deep learning model complexity in adversarial robustness for medical images. BMC Medical Informatics and Decision Making 2022; 22(2): 160. doi: 10.1186/s12911-022-01891-w.

15. Jin R, Li X. Backdoor attack is a devil in federated GAN-based medical image synthesis. In: Zhao C, Svoboda D, Wolterink JM, et al. (editors). Simulation and Synthesis in Medical Imaging. SASHIMI 2022. Lecture Notes in Computer Science; Cham: Springer; 2022. p. 154–165. doi: 10.1007/978-3-031-16980-9_15.

16. Kos J, Song D. Delving into adversarial attacks on deep policies. arXiv:1705.06452v1. 2017. doi: 10.48550/arXiv.1705.06452.

17. Dong Y, Liao F, Pang T, et al. Boosting adversarial attacks with momentum. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2018 Jun. 18-23; Salt Lake City, UT, USA. New York: IEEE; 2018. p. 9185–9193. doi: 10.1109/CVPR.2018.00957.

18. Naseer M, Khan SH, Porikli F. Local gradients smoothing: Defense against localized adversarial attacks. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV); 2019 Jan. 7-11; Waikoloa Village, HI, USA. New York: IEEE; 2019. p. 1300–1307. doi: 10.1109/WACV.2019.00143.

19. Dai H, Li H, Tian T, et al. Adversarial attack on graph structured data. In: Proceedings of the 35th International Conference on Machine Learning; 2018 Jul. 10-15; Stockholm, Sweden. PMLR; 2018; 80: 1115–1124.

20. Lin Z, Shi Y, Xue Z. IDSGAN: Generative adversarial networks for attack generation against intrusion detection. In: Gama J, Li T, Yu Y, et al. (editors). Advances in Knowledge Discovery and Data Mining. PAKDD 2022. Lecture Notes in Computer Science (vol. 13282). Cham: Springer; 2022. p. 79–91. doi: 10.1007/978-3-031-05981-0_7.

21. Qiu S, Liu Q, Zhou S, et al. Review of artificial intelligence adversarial attack and defense technologies. Applied Sciences 2019; 9(5): 909. doi: 10.3390/app9050909.

22. Dong Y, Pang T, Su H, et al. Evading defenses to transferable adversarial examples by translation-invariant attacks. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2019 Jun. 15-20; Long Beach, CA, USA. New York: IEEE; 2019. p. 4312–4321. doi: 10.1109/CVPR.2019.00444.

23. Morris JX, Lifland E, Yoo JY, et al. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP. arXiv:2005.05909v4. 2020. doi: 10.48550/arXiv.2005.05909.

24. Feinman R, Curtin RR, Shintre S, et al. Detecting adversarial samples from artifacts. arXiv:1703.00410v3. 2017. doi: 10.48550/arXiv.1703.00410.

25. Hirano H, Minagi A, Takemoto K. Universal adversarial attacks on deep neural networks for medical image classification. BMC Medical Imaging 2021; 21(1): 1–13.

26. Dong Y, Liao F, Pang T, et al. Boosting adversarial attacks with momentum. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2018 Jun. 18-23; Salt Lake City, UT, USA; New York: IEEE; 2018. p. 9185–9193. doi: 10.1109/CVPR.2018.00957.

27. Ozbulak U, Van Messem A, De Neve W. Impact of adversarial examples on deep learning models for biomedical image segmentation. In: Medical Image Computing and Computer Assisted Intervention-MICCAI 2019; 2019 Oct. 13-17; Shenzhen, China. Lecture Notes in Computer Science (vol. 11765). Cham: Springer; 2019. p. 300–308. doi: 10.1007/978-3-030-32245-8_34.

28. Pena-Betancor C, Gonzalez-Hernandez M, Fumero-Batista F, et al. Estimation of the relative amount of hemoglobin in the cup and neuroretinal rim using stereoscopic color fundus images. Investigative Ophthalmology & Visual Science 2015; 56(3): 1562–1568. doi: 10.1167/iovs.14-15592.

29. Codella NCF, Gutman D, Celebi ME, et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC). In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018); 2018 Apr. 4-7; Washington, DC, USA. New York: IEEE; 2018. p. 168–172. doi: 10.1109/ISBI.2018.8363547.

30. Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks. arXiv:1312.6199. 2013. doi: 10.48550/arXiv.1312.6199.

31. Xie C, Wang J, Zhang Z, et al. Adversarial examples for semantic segmentation and object detection. In: 2017 IEEE International Conference on Computer Vision (ICCV); 2017 Oct. 22-29; Venice, Italy; New York: IEEE; 2017. p. 1369–1378. doi: 10.1109/ICCV.2017.153.

32. Shao M, Zhang G, Zuo W, et al. Target attack on biomedical image segmentation model based on multiscale gradients. Information Sciences 2021; 554: 33–46. doi: 10.1016/j.ins.2020.12.013.

33. Eladawi N, Elmogy MM, Ghazal M, et al. Classification of retinal diseases based on OCT images. Front Biosci (Landmark Ed) 2018; 23(2): 247–264. doi: 10.2741/4589. PMID: 28930545.

34. Chen H, Liang J, Chang S, et al. Improving adversarial robustness via guided complement entropy. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV); 2019 Oct. 27 - Nov. 2; Seoul, Korea (South). New York: IEEE; 2019. p. 4881–4889. doi: 10.1109/ICCV.2019.00498.

35. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells W, et al. (editors). Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015; 2015 Oct. 5-9; Munich, Germany. Lecture Notes in Computer Science (vol. 9351). Cham: Springer; 2015. p. 234–241. doi: 10.1007/978-3-319-24574-4_28.

36. He X, Yang S, Li G, et al. Non-local context encoder: Robust biomedical image segmentation against adversarial attacks. In: Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press; 2019; 33(01): 8417–8424. doi: 10.1609/aaai.v33i01.33018417.

37. Reda I, Ayinde BO, Elmogy M, et al. A new CNN-based system for early diagnosis of prostate cancer. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018); 2018 Apr. 4-7; Washington, DC, USA. New York: IEEE; 2018. p. 207–210. doi: 10.1109/ISBI.2018.8363556.

38. Qin Y, Zheng H, Huang X, et al. Pulmonary nodule segmentation with CT sample synthesis using adversarial networks. Medical Physics 2019; 46(3): 1218–1229. doi: 10.1002/mp.13349.

39. Stanforth R, Fawzi A, Kohli P, et al. Are labels required for improving adversarial robustness? In: 33rd Conference on Neural Information Processing Systems (NeurIPS 2019); 2019 Dec. 8-14; Vancouver, Canada; 2019. p. 1–10.

40. Taghanaki SA, Abhishek K, Azizi S, et al. A kernelized manifold mapping to diminish the effect of adversarial perturbations. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2019 Jun. 15-20; Long Beach, CA, USA; New York: IEEE; 2019. p. 11332–11341. doi: 10.1109/CVPR.2019.01160.

41. Carlini N, Wagner D. Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP); 2017 May 22-24; San Jose, CA, USA. New York: IEEE; 2017. p. 39–57. doi: 10.1109/SP.2017.49.

42. Madry A, Makelov A, Schmidt L, et al. Towards deep learning models resistant to adversarial attacks. arXiv:1706.06083v4. 2017. doi: 10.48550/arXiv.1706.06083.

43. Zhang H, Wang J. Towards adversarially robust object detection. arXiv: 1907.10310v1. 2019. doi: 10.48550/arXiv.1907.10310.

44. Arnab A, Miksik O, Torr PHS. On the robustness of semantic segmentation models to adversarial attacks. arXiv:1711.09856v1. 2018. doi: 10.48550/arXiv.1711.09856.

45. Cisse M, Adi Y, Neverova N, et al. Houdini: Fooling deep structured prediction models. arXiv:1707.05373. 2017. doi: 10.48550/arXiv.1707.05373.

46. Isensee F, Jaeger PF, Full PM, et al. nnU-Net for brain tumor segmentation. arXiv:2011.00848. 2020. doi: 10.48550/arXiv.2011.00848.

47. Milletari F, Navab N, Ahmadi SA. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV); 2016 Oct. 25-28; Stanford, CA, USA. New York: IEEE; 2016. p. 565–571. doi: 10.1109/3DV.2016.79.

48. Tang H, Zhang C, Xie C. NoduleNet: Decoupled false positive reduction for pulmonary nodule detection and segmentation. In: Medical Image Computing and Computer-Assisted Intervention-MICCAI 2019; 2019 Oct. 13-17; Shenzhen, China. Lecture Notes in Computer Science (vol. 11769). Cham: Springer; 2019. doi: 10.1007/978-3-030-32226-7_30.

49. Zhu Z, Xia Y, Shen W, et al. A 3D coarse-to-fine framework for volumetric medical image segmentation. In: 2018 International Conference on 3D Vision (3DV); 2018 Sep. 5-8; Verona, Italy. New York: IEEE; 2018. p. 682–690. doi: 10.1109/3DV.2018.00083.

50. Carlini N, Athalye A, Papernot N, et al. On evaluating adversarial robustness. arXiv:1902.06705. 2019.

51. Moosavi-Dezfooli SM, Fawzi A, Frossard P. DeepFool: A simple and accurate method to fool deep neural networks. arXiv:1511.04599v3. 2016. doi: 10.48550/arXiv.1511.04599.

52. Yu Q, Yang D, Roth H, et al. C2FNAS: Coarse-to-fine neural architecture search for 3D medical image segmentation. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2020 Jun. 13-19; Seattle, WA, USA. New York: IEEE; 2020. p. 4125–4134. doi: 10.1109/CVPR42600.2020.00418.


DOI: https://doi.org/10.54517/m.v4i1.2156