Mitigation and Identification of Camouflage Attack Over Computer Vision Applications

Authors

  • Chandra Sekhar Sanaboina, Assistant Professor, Department of Computer Science and Engineering, Jawaharlal Nehru Technological University, Kakinada, India.
  • Sree Bharathi Garikiparti, PG Student, Department of Computer Science and Engineering, Jawaharlal Nehru Technological University, Kakinada, India.

DOI:

https://doi.org/10.55524/

Keywords:

Camouflage attack, Image Scaling, Computer Vision, Dhash

Abstract

Computer vision technologies are now widely used in real-time image and video recognition applications built on deep neural networks. Image scaling is a basic input pre-processing step in these implementations. Scaling methods are designed to preserve visual content before and after resizing and are used throughout visual and image-processing applications. Content-disguising (camouflage) attacks exploit this scaling mechanism: the content the machine extracts after scaling differs drastically from the input it was given before scaling. In this project, an original input image A of dimension m×n is resized to an attacked output image O of dimension m1×n1, which misleads the classifier because what it actually processes is the attacker's hidden content. To illustrate the threat posed by such camouflage attacks, several computer vision applications are taken as target victims, including multiple image classification applications built on popular libraries such as OpenCV and Pillow. The attack also goes undetected when tested against the YOLOv4 demo. To defend against such attacks, this project uses dHash (difference hash). If an image is benign, the dHash values of the original image and the resized image are the same; if the image is camouflaged, its content changes after scaling, so its dHash value changes as well. Comparing the two hash values therefore detects whether an image has been camouflaged.
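To make the mechanism above concrete, the sketch below first crafts a simple camouflage image for nearest-neighbour downscaling (overwriting only the source pixels the resampler will sample, so the image looks unchanged at full resolution) and then applies the dHash comparison described in the abstract to flag it. This is a minimal illustrative sketch, not the paper's implementation: it assumes OpenCV's INTER_NEAREST sampling rule, and the function names, the 8-row hash size, the 224×224 target dimensions, the placeholder file names, and the Hamming-distance threshold of 10 are all assumptions, not taken from the paper.

```python
import cv2


def dhash(image, hash_size=8):
    """Difference hash: each bit records whether a pixel is brighter
    than its right-hand neighbour in a (hash_size+1) x hash_size
    grayscale thumbnail."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (hash_size + 1, hash_size))  # (width, height)
    bits = small[:, 1:] > small[:, :-1]
    return sum(1 << i for i, b in enumerate(bits.flatten()) if b)


def hamming(h1, h2):
    """Number of differing bits between two hash integers."""
    return bin(h1 ^ h2).count("1")


def craft_camouflage(source, target):
    """Embed `target` in `source` so that nearest-neighbour downscaling
    to target's size reveals `target`. Only the pixels the resampler
    will sample are overwritten, so the full-resolution image still
    looks like `source`."""
    attacked = source.copy()
    sh, sw = source.shape[:2]
    th, tw = target.shape[:2]
    for i in range(th):
        for j in range(tw):
            # OpenCV's INTER_NEAREST samples floor(dst_index * scale);
            # other libraries or interpolation modes use different rules.
            attacked[min(i * sh // th, sh - 1),
                     min(j * sw // tw, sw - 1)] = target[i, j]
    return attacked


def is_camouflaged(image, target_size, threshold=10):
    """Flag a scaling camouflage attack: a benign image keeps nearly
    the same dHash after resizing, an attacked one does not."""
    scaled = cv2.resize(image, target_size,
                        interpolation=cv2.INTER_NEAREST)
    return hamming(dhash(image), dhash(scaled)) > threshold


if __name__ == "__main__":
    # Placeholder file names; the source should be much larger
    # than the target (e.g. 1024x1024 versus 224x224).
    source = cv2.imread("source.jpg")
    target = cv2.resize(cv2.imread("target.jpg"), (224, 224))
    attacked = craft_camouflage(source, target)

    print("benign flagged?  ", is_camouflaged(source, (224, 224)))
    print("attacked flagged?", is_camouflaged(attacked, (224, 224)))
```

A benign image keeps nearly the same dHash after resizing because dHash itself operates on a tiny thumbnail, so even a small Hamming-distance threshold separates benign from camouflaged inputs.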




Published

2022-01-30

How to Cite

Mitigation and Identification of Camouflage Attack Over Computer Vision Applications. (2022). International Journal of Innovative Research in Computer Science & Technology, 10(1), 73–78. https://doi.org/10.55524/