Textile Patterns Generated by Adversarial Examples
We launched a joint textile label, “UNLABELLED”, with Dentsu Lab Tokyo and developed a camouflage pattern to protect the wearer from AI-driven surveillance. In this project, we focused on the ability of AI to extract information such as gender, age, race, and appearance from images and video, and attempted to create camouflage that makes the wearer difficult for AI to recognize as a person. Qosmo was responsible for developing the camouflage pattern generation system, applying techniques that trigger AI misrecognition by adding specific patterns to images. We will participate in DESIGNART TOKYO 2021, one of the largest design and art festivals in Japan, which opens on Friday, October 22, and will hold our first new exhibition there, “Camouflage Against the Machines.”
The camouflage pattern generation system developed by Qosmo reduces the rate at which a person is recognized when the generated camouflage pattern is shown to a surveillance camera. The pattern is trained to fool a deep-learning object detection model (YOLOv2).
There is a method called the Adversarial Example that triggers misrecognition in classification models by introducing a small amount of noise that is almost imperceptible to humans. By adding this noise to the original input image, measuring how much the model's recognition rate drops, and iteratively optimizing the image so that the recognition rate decreases further, we can generate an image (an Adversarial Example) that fools the classification model.
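The iterative idea above can be sketched on a toy model. The snippet below is a minimal illustration, not the project's actual code: it attacks a tiny logistic-regression "classifier" (a hypothetical stand-in for a deep model like YOLOv2) with a single gradient-sign step, nudging each input value slightly against the gradient of the model's confidence.

```python
import numpy as np

# Toy "classifier": logistic regression with fixed random weights.
# (A stand-in for a deep model; the real project attacked YOLOv2.)
rng = np.random.default_rng(0)
w = rng.normal(size=16)

def predict(x):
    """Confidence that input x belongs to the target class."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# An input the model classifies confidently (it points along w).
x = w / np.linalg.norm(w)
eps = 0.05  # maximum per-feature perturbation (a small, chosen budget)

# Gradient of the confidence with respect to the input:
# d sigmoid(x @ w) / dx = p * (1 - p) * w
p = predict(x)
grad = p * (1.0 - p) * w

# Take one step against the gradient's sign, bounded by eps,
# so the perturbation stays small in every coordinate.
x_adv = x - eps * np.sign(grad)

print(predict(x), predict(x_adv))  # confidence drops after the attack
```

Repeating such steps, while keeping the total perturbation small, is what "iteratively optimizing the image" amounts to.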
On the other hand, there is a method called the Adversarial Patch, which triggers misrecognition by placing a patch of pixels into the input image, rather than adding noise across the whole image. The key difference from the Adversarial Example is that the patch can actually be printed and used in the real world, since it only needs to appear somewhere in the input image. In this project, we focused on this approach and developed the Adversarial Patch as a piece of clothing that can actually be worn.
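The difference is easy to see in code. This hypothetical sketch shows that a patch, unlike additive noise, simply replaces a region of pixels, which is exactly why it can also be printed and placed in the physical scene:

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Paste a patch of pixels into an image at (top, left).
    Unlike additive noise, the patch overwrites the pixels outright,
    so its content does not depend on the underlying image."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

image = np.zeros((8, 8))   # toy 8x8 grayscale "camera frame"
patch = np.ones((3, 3))    # toy 3x3 patch (values would be trained in practice)
patched = apply_patch(image, patch, top=2, left=2)
```

Training then optimizes the patch's pixel values so that, wherever it is pasted, the detector's confidence drops.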
To make real-world wearable clothes work as an Adversarial Patch, simply overlaying patches on top of 2D human images during training is not enough, because the wrinkles and folds of the fabric must be taken into account. In addition, since the patch is a single image, we need to account for how it will look when applied to a garment, that is, which position in the image corresponds to which position on the clothing. To solve this, we created 3D models of the clothes using 3D fashion design software, loaded them into a game engine, pasted the generated patches onto them, and used the captured renderings as training data. This lets us train under the same conditions as if the patch were actually deployed on the clothes. (Based on research conducted mainly by members of the Nao Tokui Laboratory at Keio University SFC: Makoto Amano, Yuka Sai, Ryosuke Nakajima, and Hanako Hirata.)
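The rendering-based training loop can be caricatured as follows. Everything here is a toy stand-in of our own devising: `render` simulates one captured frame (random brightness plus noise in place of the game-engine renderings of wrinkled fabric), and `person_score` is a tiny linear surrogate for a detector's confidence. The loop minimizes the score averaged over many random renderings, which is the essence of training a patch that survives real-world variation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "person detector": a fixed linear score over a 4x4 patch.
# (Hypothetical stand-in for YOLOv2; real training backpropagates
# through the detector and the rendered frames.)
W = rng.normal(size=(4, 4))

def render(patch):
    """Simulate one captured frame of the worn patch: random
    brightness plus noise standing in for fabric wrinkles."""
    scale = rng.uniform(0.7, 1.3)
    return scale * patch + 0.05 * rng.normal(size=patch.shape)

def person_score(frame):
    """Toy detection confidence: higher means 'person detected'."""
    return float(np.sum(W * frame))

def train_patch(steps=100, lr=0.05, samples=8):
    """Gradient-descend the patch to minimize the detection score
    averaged over random renderings."""
    patch = np.full((4, 4), 0.5)
    for _ in range(steps):
        # For this linear toy, d score / d patch for one rendering
        # with brightness s is simply s * W; average over samples.
        grad = np.mean([rng.uniform(0.7, 1.3) for _ in range(samples)]) * W
        patch = np.clip(patch - lr * grad, 0.0, 1.0)  # keep printable range
    return patch

patch = train_patch()
```

In the real pipeline, the random renderings come from the game engine's views of the 3D-clothed figure, so the patch learns to work despite folds, pose, and lighting.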
The design of the garment itself is also a major factor. To control the look of the generated patches and expand the range of designs, we incorporated a deep-learning style transfer algorithm, which enables us to generate various camouflage patterns based on arbitrary design images.
Nao Tokui（Qosmo, Inc.）, Naoki Tanaka（Dentsu Lab Tokyo）
Yusuke Koyanagi（Dentsu Lab Tokyo）
Yuma Shingai（Dentsu Lab Tokyo）
Risako Kawashima（Dentsu Lab Tokyo）
Makoto Amano（Keio University SFC）
Hanako Hirata（Keio University SFC）
Yuka Sai（Keio University SFC）
Ryosuke Nakajima（Keio University SFC/Qosmo, Inc.）
Shoya Dozono（Qosmo, Inc.）, Robin Jungers（Qosmo, Inc.）
Ryotaro Omori（Dentsu Craft Tokyo）, Kohei Ai（Dentsu Lab Tokyo）, Miyuki Fujishima（Dentsu Lab Tokyo）
Naoki Ise （Qosmo, Inc.）, Takumi Saito（Dentsu Lab Tokyo）
Sota Suzuki（Dentsu Craft Tokyo）
Yuki Tanabe（Dentsu Craft Tokyo）
Kei Murayama, Yusuke Yamagiwa