The great success of deep neural networks (DNNs) is threatened by their vulnerability to adversarial examples. Recently, adversarial attacks in the physical domain, for instance using a laser beam as an adversarial perturbation, have been shown to be powerful attacks against DNNs.
A recent paper, posted on arXiv.org, studies a new type of optical adversarial example in which the perturbations are generated by a shadow. The researchers choose traffic sign recognition as the target task and propose feasible optimization methods to generate digitally and physically realizable adversarial examples perturbed by shadows.
Experimental results confirm that shadows can mislead a machine learning-based vision system into making an erroneous decision. The researchers also propose a defense mechanism that can improve the model's robustness and increase the difficulty of the attack.
Estimating the risk level of adversarial examples is essential for safely deploying machine learning models in the real world. One popular approach for physical-world attacks is to adopt the "sticker-pasting" strategy, which however suffers from some limitations, including difficulty of access to the target and printing by valid colors. A new type of non-invasive attack emerged recently, which attempts to cast perturbation onto the target with optics-based tools, such as a laser beam or projector. However, the added optical patterns are artificial rather than natural. Thus, they are still conspicuous and attention-grabbing, and can be easily noticed by humans. In this paper, we study a new type of optical adversarial example, in which the perturbations are generated by a very common natural phenomenon, shadow, to achieve a naturalistic and stealthy physical-world adversarial attack under the black-box setting. We extensively evaluate the effectiveness of this new attack in both simulated and real-world environments. Experimental results on traffic sign recognition demonstrate that our algorithm can generate adversarial examples effectively, reaching 98.23% and 90.47% success rates on the LISA and GTSRB test sets respectively, while continuously misleading a moving camera over 95% of the time in real-world scenarios. We also offer discussions about the limitations and the defense mechanism of this attack.
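To make the idea concrete, the sketch below shows the core mechanics of such an attack: a polygonal "shadow" region darkens part of the image, and a black-box search over the polygon's vertices looks for a placement that flips the classifier's prediction. This is a minimal illustration, not the paper's implementation: the triangular shadow shape, the `darkness` factor, the simple random search (standing in for the authors' optimizer), and the `predict` interface are all assumptions made here for the example.

```python
import numpy as np

def apply_shadow(image, vertices, darkness=0.45):
    """Darken pixels inside a triangular region to simulate a cast shadow.

    image: HxWx3 float array in [0, 1]; vertices: 3x2 array of (x, y) points.
    darkness is a hypothetical brightness scale applied inside the triangle.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

    # Point-in-triangle test: a point is inside iff the cross products
    # for all three edges share the same sign.
    def side(p, a, b):
        return (b[0] - a[0]) * (p[:, 1] - a[1]) - (b[1] - a[1]) * (p[:, 0] - a[0])

    a, b, c = vertices
    s1, s2, s3 = side(pts, a, b), side(pts, b, c), side(pts, c, a)
    inside = ((s1 >= 0) & (s2 >= 0) & (s3 >= 0)) | ((s1 <= 0) & (s2 <= 0) & (s3 <= 0))
    shadowed = image.copy()
    shadowed[inside.reshape(h, w)] *= darkness
    return shadowed

def random_search_attack(image, true_label, predict, n_queries=200, rng=None):
    """Black-box attack: sample triangle vertices until the (opaque) model's
    prediction changes. `predict` maps one image to a class id."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape[:2]
    for _ in range(n_queries):
        verts = rng.uniform([0, 0], [w - 1, h - 1], size=(3, 2))
        candidate = apply_shadow(image, verts)
        if predict(candidate) != true_label:
            return candidate, verts  # misclassification achieved
    return None, None  # attack failed within the query budget
```

Only the classifier's output label is used here, which is what makes the setting black-box; a real attack would replace the random search with a stronger optimizer and constrain the shadow to lie on the sign's surface.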
Research paper: Zhong, Y., Liu, X., Zhai, D., Jiang, J., and Ji, X., "Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon", 2022. Link: https://arxiv.org/abs/2203.03818