Research on Noise Injection, Perturbative Adversarial Methods, and Related Techniques

In the effectiveness testing phase...

Thoughts and leads:

Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models (https://arxiv.org/pdf/2302.04222.pdf) provides the foundation for the idea of raising the cost of AI training on artwork through perturbative adversarial techniques. However, its effectiveness is still open to question, particularly in preserving the original quality of the images. The method's transferability is also unclear: it appears to be tuned for specific models, and its general applicability has yet to be determined.
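To make the underlying idea concrete, the following is a simplified sketch rather than the paper's implementation: it assumes some differentiable image feature extractor and optimizes a small, bounded perturbation that pulls the artwork's feature representation toward a different target style. The function names, budget, and optimizer settings are illustrative assumptions.

```python
import torch

def style_cloak(feature_extractor, image, target_style_image,
                budget=8/255, steps=100, lr=0.01):
    """Simplified sketch of a style cloak: optimize a small perturbation
    that pulls the image's features toward a target style while keeping
    the pixel change within a tight budget. `feature_extractor` is assumed
    to be any differentiable image encoder."""
    delta = torch.zeros_like(image, requires_grad=True)
    target_feat = feature_extractor(target_style_image).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        cloaked = (image + delta).clamp(0.0, 1.0)
        # Move the cloaked image's features toward the target style.
        loss = torch.nn.functional.mse_loss(feature_extractor(cloaked), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation imperceptible (L-infinity budget).
        delta.data.clamp_(-budget, budget)
    return (image + delta).detach().clamp(0.0, 1.0)
```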

Explaining and Harnessing Adversarial Examples (https://arxiv.org/pdf/1412.6572.pdf) introduces adversarial noise: small perturbations added to images that induce misclassifications in AI models. One possible approach is to generate such perturbations with Generative Adversarial Networks (GANs). A GAN consists of a generator and a discriminator: the generator tries to produce fake data that looks real, while the discriminator tries to tell real data from fake. This competition could be harnessed to produce noise that is sufficient to confuse AI models yet imperceptible to human vision. However, it may degrade the quality of the original artwork and style, and the lack of available source code is a further limitation.
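As a concrete illustration, the fast gradient sign method from this paper crafts such a perturbation in a single step. Below is a minimal sketch assuming a differentiable PyTorch classifier; the epsilon value and the 0-1 pixel range are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: take one step of size epsilon in the
    direction of the sign of the loss gradient w.r.t. the input, pushing
    the model toward misclassification while the image barely changes."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```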

Segmenting Unseen Industrial Components in a Heavy Clutter Using RGB-D Fusion and Synthetic Data (https://arxiv.org/pdf/2002.03501.pdf) relates to countering deepfakes and follows the principles of perturbative adversarial methods, so it can serve as a reference.

Adversarial Attacks and Defences: A Survey (https://arxiv.org/pdf/1810.00069.pdf) and Improved Techniques for Training GANs (https://arxiv.org/pdf/1606.03498.pdf) suggest that research on this kind of perturbative adversarial technique is still relatively limited, but the approach appears effective. Adversarial examples are most commonly used to improve the robustness of AI models: models are trained on perturbed inputs so that they resist the same kind of attack, as sketched below.
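For reference, a typical adversarial-training step mixes clean and perturbed examples in one update. The sketch below is an illustrative assumption about how that is commonly done, not a procedure taken from either paper.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step on a mix of clean and adversarially perturbed inputs,
    the standard way adversarial examples are used to harden a classifier."""
    # Craft FGSM perturbations for the current batch.
    images = images.clone().detach()
    images.requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    adv_images = (images + epsilon * images.grad.sign()).clamp(0.0, 1.0).detach()

    # Train on clean and adversarial examples together.
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(images.detach()), labels) \
         + 0.5 * F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```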

Mist (https://mist-documentation.readthedocs.io/zh_CN/latest/content/mode.html) and its source code (https://github.com/mist-project/mist) form a project similar to Glaze that raises the cost of AI training on artwork through perturbative adversarial techniques. Unlike Glaze, Mist is open source and, according to its team's own validation, claims some degree of transferability and generality. There may still be concerns about how general it remains once tailored to specific art models, so further investigation and expert interpretation of the source code are needed; contacting the development team could also be considered. Nevertheless, this project provides valuable input for the study, and the rough sketch below illustrates the general family of techniques it belongs to.
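The following is not Mist's actual implementation but a simplified, assumption-laden sketch of one way a perturbation could target the encoder of a latent diffusion model; the vae_encoder, budget, and step sizes are all hypothetical.

```python
import torch

def latent_disrupt(vae_encoder, image, budget=8/255, steps=40, step_size=1/255):
    """Sketch of the general idea (not Mist's code): projected gradient ascent
    finds a small pixel perturbation that pushes the image's latent code as far
    as possible from the clean latent, so a latent-diffusion model trained on
    the image picks up degraded features. `vae_encoder` is assumed to map
    pixels into the model's latent space."""
    clean_latent = vae_encoder(image).detach()
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        perturbed = (image + delta).clamp(0.0, 1.0)
        # Maximize the distance between perturbed and clean latents.
        loss = torch.nn.functional.mse_loss(vae_encoder(perturbed), clean_latent)
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()
            delta.clamp_(-budget, budget)  # project back onto the L-inf ball
        delta.grad.zero_()
    return (image + delta).detach().clamp(0.0, 1.0)
```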
