Realistic Adversarial Examples in 3D Meshes
Dawei Yang*, Chaowei Xiao*, Bo Li, Jia Deng, Mingyan Liu
Paper on arXiv.org
In this paper, we consider adversarial behaviors in practical scenarios by manipulating the shape and texture of a given 3D mesh representation of an object.
Our goal is to project the optimized "adversarial meshes" to 2D with a photorealistic renderer such that the rendered images can still mislead different machine learning models.
Extensive experiments show that by generating unnoticeable 3D adversarial perturbations on the shape or texture of a 3D mesh, the corresponding projected 2D instances
can either lead classifiers to misclassify the victim object as an arbitrary malicious target, or hide any target object within the scene from object detectors.
In addition to applying subtle perturbations to a given 3D mesh, we also propose to synthesize a realistic 3D mesh and place it in a scene that mimics similar rendering conditions, thereby
attacking different machine learning models. In-depth analyses of transferability across various 3D renderers and of vulnerable mesh regions are provided to
help better understand adversarial behaviors in the real world.
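The attack described above can be sketched as a gradient-based optimization of a vertex perturbation that flows through a rendering step into a classifier. The sketch below is a minimal toy illustration, not the paper's method: the "renderer" and "classifier" are hypothetical linear stand-ins (the paper uses a photorealistic differentiable renderer and real victim models), and the L-infinity bound `eps` stands in for the unnoticeability constraint:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a "renderer" that linearly projects flattened
# vertex coordinates to a pixel vector, and a linear "classifier" on top.
N_VERTS, N_PIX, N_CLASSES = 30, 16, 5
W_render = rng.normal(size=(N_PIX, 3 * N_VERTS))
W_clf = rng.normal(size=(N_CLASSES, N_PIX))

def render(vertices):
    """Project 3D vertices of shape (N, 3) to a 2D 'image' (pixel vector)."""
    return W_render @ vertices.reshape(-1)

def logits(vertices):
    """Classifier scores for the rendered mesh."""
    return W_clf @ render(vertices)

vertices = rng.normal(size=(N_VERTS, 3))      # the victim mesh's vertices
y_orig = int(np.argmax(logits(vertices)))     # original prediction
y_target = (y_orig + 1) % N_CLASSES           # arbitrary malicious target

# Targeted attack: ascend the margin (target logit minus original logit)
# w.r.t. a vertex perturbation, keeping it small via an L-inf bound.
M = W_clf @ W_render                          # end-to-end linear map
grad = (M[y_target] - M[y_orig]).reshape(N_VERTS, 3)  # exact gradient here
eps, lr = 0.05, 0.01
delta = np.zeros_like(vertices)
for _ in range(100):
    delta = np.clip(delta + lr * np.sign(grad), -eps, eps)

adv_logits = logits(vertices + delta)         # scores for the adversarial mesh
```

In the paper's setting, the gradient is instead back-propagated through a photorealistic differentiable renderer, so the same loop optimizes shape (and texture) perturbations against the rendered 2D views.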