Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon

The great success of deep neural networks (DNNs) is threatened by their vulnerability to adversarial examples. Recently, adversarial attacks in the physical domain, for instance, using a laser beam as an adversarial perturbation, have been shown to be powerful attacks against DNNs.

Automatic recognition of road signs can be difficult because of various external factors, such as changes in lighting, shadows, or environmental conditions. Image credit: pic_drome via Pixnio, CC0 Public Domain

A recent paper studies a new type of optical adversarial examples in which the perturbations are produced by a shadow. The researchers choose traffic sign recognition as the target task and present feasible optimization methods to generate digitally and physically realizable adversarial examples perturbed by shadows.

Experimental results confirm that shadows can mislead a machine learning-based vision system into making an erroneous decision. The researchers also propose a defense mechanism that can improve model robustness and increase the difficulty of the attack.

Estimating the risk level of adversarial examples is essential for safely deploying machine learning models in the real world. One popular approach for physical-world attacks is to adopt the "sticker-pasting" strategy, which however suffers from some limitations, including difficulties in access to the target or printing by valid colors. A new type of non-invasive attacks emerged recently, which attempt to cast perturbation onto the target by optics-based tools, such as a laser beam or projector. However, the added optical patterns are artificial rather than natural. Thus, they are still conspicuous and attention-grabbing, and can be easily noticed by humans. In this paper, we study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow, to achieve a naturalistic and stealthy physical-world adversarial attack under the black-box setting. We extensively evaluate the effectiveness of this new attack in both simulated and real-world environments. Experimental results on traffic sign recognition demonstrate that our algorithm can generate adversarial examples effectively, reaching 98.23% and 90.47% success rates on the LISA and GTSRB test sets respectively, while continuously misleading a moving camera over 95% of the time in real-world scenarios. We also offer discussions about the limitations and the defense mechanism of this attack.
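To make the idea concrete, the sketch below simulates such a shadow attack in a toy setting: a triangular image region is darkened to imitate a cast shadow, and a black-box search over the triangle's vertices tries to minimize a classifier's confidence in the true label. This is a simplified illustration, not the paper's method: the shadow model, the `confidence_fn` interface, and the use of plain random search (the paper uses particle swarm optimization) are all assumptions made for this sketch.

```python
import numpy as np

def apply_shadow(image, triangle, darkness=0.6):
    """Darken pixels inside a triangular region to simulate a cast shadow.
    `image` is an HxWx3 float array in [0, 1]; `triangle` is three (x, y) vertices."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Point-in-triangle test: a pixel is inside when the three edge
    # cross-products all share the same sign.
    (x0, y0), (x1, y1), (x2, y2) = triangle
    d1 = (xs - x1) * (y0 - y1) - (x0 - x1) * (ys - y1)
    d2 = (xs - x2) * (y1 - y2) - (x1 - x2) * (ys - y2)
    d3 = (xs - x0) * (y2 - y0) - (x2 - x0) * (ys - y0)
    inside = ((d1 >= 0) & (d2 >= 0) & (d3 >= 0)) | ((d1 <= 0) & (d2 <= 0) & (d3 <= 0))
    shadowed = image.copy()
    shadowed[inside] *= darkness
    return shadowed

def shadow_attack(image, confidence_fn, true_label, iters=200, seed=0):
    """Black-box random search over triangle vertices that minimizes the
    classifier's confidence in the true label. `confidence_fn(image, label)`
    is a hypothetical query interface returning a float score."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    best_tri, best_conf = None, confidence_fn(image, true_label)
    for _ in range(iters):
        tri = [(rng.uniform(0, w), rng.uniform(0, h)) for _ in range(3)]
        conf = confidence_fn(apply_shadow(image, tri), true_label)
        if conf < best_conf:
            best_tri, best_conf = tri, conf
    return best_tri, best_conf
```

The attack only ever queries `confidence_fn`, never the model's gradients, which is what makes it black-box; in the paper the queried model is a traffic sign classifier, and the physical version reproduces the optimized shadow with a real occluder.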

Research paper: Zhong, Y., Liu, X., Zhai, D., Jiang, J., and Ji, X., "Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon", 2022. Link:

Maria J. Danford
