Project Overview
UI Experience: https://modao.cc/app/nn7Mv1Ifryl9u2A0YIreIh #No-AI Zone-Share
Last updated
Throughout history, the technological demands and innovations of human society have been crucial forces driving progress. With the explosive development of AI technology, however, we face an unprecedented challenge: striking a balance between the advancement of artificial intelligence and the protection of human creativity.
Under the pressure of rapid, across-the-board AI development, the creative space for artists and other creators has been severely squeezed. AI's progress not only encroaches on their creative territory but also infringes their intellectual property through unauthorized imitation. Most existing social applications have failed to recognize the severity of this problem, and "Cyber Sanctuary" aims to fill that gap. Cyber Sanctuary is a community concept dedicated to redressing the imbalance between AI and creators: a platform that helps artists and other creators protect their works and resist both displacement by AI-generated content and unauthorized imitation. For our application, the key is to find a fair and equitable balance point at which AI's development no longer comes at the expense of human creators.
Based on our research, artists and other creators are deeply dissatisfied with the current situation of AI infringing on their rights; they need a platform that protects those rights fairly and impartially. Cyber Sanctuary is such a community. Our solution is summarized as follows. First, we will introduce copyright protection strategies such as digital watermarking, alongside image protection techniques such as noise injection and adversarial perturbations, to raise the cost of imitating creators' images with AI models and thereby protect their works. (We do not oppose model training as such, only unauthorized training: we believe training can proceed with the consent and agreement of creators, who can contribute their image datasets voluntarily.) Second, Cyber Sanctuary is conceived as a platform that rejects AI creations: AI-generated works will be detected and labeled to ensure that human creators' works receive fair treatment. Finally, we hope this concept can promote a balance between AI works and creator works and give the creative community a new direction to consider. After all, creators are your greatest treasure.
The app provides convenient registration and login. Your identity is verified through a simple email or phone-number verification flow, in which you receive a one-time electronic code to confirm.
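As a hedged sketch of how such a one-time code flow might work, the snippet below generates, issues, and verifies a 6-digit code. The function names, the 5-minute validity window, and the in-memory store are illustrative assumptions, not the app's actual implementation:

```python
import secrets
import time

CODE_TTL_SECONDS = 300  # assumed 5-minute validity window

def generate_code() -> str:
    """Generate a cryptographically random 6-digit verification code."""
    return f"{secrets.randbelow(10**6):06d}"

def issue_code(store: dict, address: str) -> str:
    """Issue a code for an email/phone address and record its expiry time."""
    code = generate_code()
    store[address] = (code, time.time() + CODE_TTL_SECONDS)
    return code

def verify_code(store: dict, address: str, submitted: str) -> bool:
    """Check a submitted code against the stored one, enforcing expiry.
    The entry is removed on any attempt, so each code is single-use."""
    entry = store.pop(address, None)
    if entry is None:
        return False
    code, expires_at = entry
    return time.time() < expires_at and secrets.compare_digest(code, submitted)
```

A real deployment would persist codes server-side and rate-limit attempts; the single-use, constant-time comparison shown here is the minimum needed to avoid replay and timing issues.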
The community uses a fairly conventional, comprehensive layout, with a navigation bar exposing the following sections: Community, Favorites, Messages, and My Page. Each page contains secondary and tertiary functionality. For a visual representation, refer to the mind map of the frontend architecture in the Community App's technical framework.
As mentioned in the background, a core idea of the community is to protect creative works from unauthorized use by AI models and to provide accompanying copyright protection features. This is reflected in the following areas: AI work detection, anti-AI protection (digital watermarking, noise injection, and adversarial perturbations), and copyright modules (commercial-use and reprint permissions). In addition, the community offers basic image search, sharing, and publishing capabilities through features such as tags.
As described in the background, to ensure fair treatment of human creators' works, AI-generated works will be detected and labeled, and may even be barred from publication where there is clear style-imitation infringement. AI image generation has reached a very high level, making AI-generated artworks visually difficult to distinguish from human-created ones; differences still exist, however, in creative details and randomness (commonly referred to as the "one-eyed AI" phenomenon). Given this, a feasible approach is to train AI to identify and exclude AI-generated works (fighting fire with fire); throughout, our intention remains to ensure fair treatment of creators' works.
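To make the "fighting fire with fire" idea concrete, the sketch below trains a tiny logistic-regression detector over placeholder image statistics. Everything here is a simplified stand-in: the features, the model, and the synthetic training setup are assumptions for illustration, and a real AI-work detector would use a trained deep network with far richer forensic features.

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Placeholder feature vector: simple global statistics of the image.
    A production detector would use learned forensic features instead."""
    return np.array([image.mean(), image.std(),
                     np.abs(image - image.mean()).max()])

def train_detector(feats, labels, lr=0.5, epochs=500):
    """Fit a tiny logistic-regression detector (label 1 = AI-generated).
    Features are standardized so gradient descent converges quickly."""
    mu, sigma = feats.mean(axis=0), feats.std(axis=0) + 1e-9
    x = (feats - mu) / sigma
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted probabilities
        g = p - labels                           # gradient of the log loss
        w -= lr * x.T @ g / len(labels)
        b -= lr * g.mean()
    return w, b, mu, sigma

def is_ai_generated(image, w, b, mu, sigma, threshold=0.5) -> bool:
    """Score a single image with the trained detector."""
    z = (extract_features(image) - mu) / sigma
    p = 1.0 / (1.0 + np.exp(-(z @ w + b)))
    return bool(p > threshold)
```

The decision threshold would in practice be tuned to keep false positives (human works flagged as AI) rare, since mislabeling a human creator is the costlier error for this community.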
The digital watermarking feature is not a direct anti-AI measure; it is a well-established technique that embeds recognizable markers in digital content to protect creators' copyright. While watermarking may not directly prevent models from recognizing and learning from images, it significantly improves the ability to trace and identify pirated or unauthorized content. Removing a watermark is usually costly: a would-be infringer must invest considerable time and resources in finding and stripping it, with no guarantee of complete success. This makes digital watermarking an effective method for tracking and proving copyright infringement.
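As an illustration of the embedding idea only (not the scheme the platform would actually use), the sketch below hides a binary mark in the least significant bit of each pixel. LSB marks are fragile and do not survive compression or resizing; production watermarks are typically embedded in the frequency domain for robustness. This toy version just shows the principle of an imperceptible, recoverable marker:

```python
import numpy as np

def embed_watermark(image: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Overwrite the least significant bit of each uint8 pixel with one
    watermark bit; each pixel value changes by at most 1 (imperceptible)."""
    return (image & np.uint8(0xFE)) | mark.astype(np.uint8)

def extract_watermark(image: np.ndarray) -> np.ndarray:
    """Read the embedded watermark back from the LSB plane."""
    return image & np.uint8(1)
```

Because extraction needs no reference image, a marked copy found elsewhere can be checked directly, which is what makes watermarks useful as infringement evidence.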
Noise injection is a secondary approach that disrupts AI learning by adding random noise to the input data, with the goal of reducing the accuracy and reliability of models trained on it. The scheme has limitations, however: it can degrade image quality, and it cannot completely prevent AI learning.
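A minimal sketch of noise injection, assuming images are float arrays normalized to [0, 1]; the `sigma` value is an illustrative placeholder, and the PSNR helper makes the quality trade-off described above measurable:

```python
import numpy as np

def inject_noise(image: np.ndarray, sigma: float = 0.05, seed=None) -> np.ndarray:
    """Add zero-mean Gaussian noise and clip back to the valid [0, 1] range."""
    rng = np.random.default_rng(seed)
    return np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, 1.0)

def psnr(clean: np.ndarray, distorted: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB (peak value 1.0); lower values
    mean more visible degradation."""
    mse = float(np.mean((clean - distorted) ** 2))
    return float("inf") if mse == 0.0 else 10.0 * np.log10(1.0 / mse)
```

Raising `sigma` disturbs a model's view of the data more strongly but lowers the PSNR, which is exactly the image-quality limitation noted above.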
Adversarial perturbation is a technique commonly used in model training to improve robustness; in our context, we apply the same principle to confuse AI training. The core idea is to introduce small, intentionally designed perturbations into the input data that alter its features, making it difficult for an AI model to recognize and classify the content accurately. The perturbations are designed to be imperceptible to human perception while significantly affecting the model's output. A similar approach is used in the Glaze project at the University of Chicago.
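The Fast Gradient Sign Method (FGSM) is one classic way to craft such perturbations; the sketch below applies it to a toy linear scorer. This is a generic illustration of the principle, not Glaze's actual algorithm, and the epsilon value is an assumed bound on what counts as imperceptible:

```python
import numpy as np

def fgsm_perturb(x: np.ndarray, grad: np.ndarray, epsilon: float = 0.03) -> np.ndarray:
    """Fast Gradient Sign Method: move every pixel by ±epsilon along the
    sign of the loss gradient, then clip to the valid [0, 1] range.
    The change per pixel is bounded by epsilon, so it stays imperceptible."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

def score(w: np.ndarray, x: np.ndarray) -> float:
    """Toy linear 'model': a single dot-product score, standing in for a
    real network. Its gradient with respect to x is simply w."""
    return float(w.ravel() @ x.ravel())
```

Stepping each pixel against the model's gradient shifts the score as much as an epsilon-bounded change can, which is why a barely visible perturbation can still flip a model's decision.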
This switch controls and indicates whether a work may be used commercially and reposted. When the reposting switch is off, the community's repost and save functions are disabled for that work. We are also considering a series of further protections, such as anti-screenshot measures, to guard works against unauthorized use.
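A hedged sketch of how such switches might gate community actions; the class and function names are hypothetical, not the app's real API:

```python
from dataclasses import dataclass

@dataclass
class WorkPermissions:
    """Hypothetical per-work permission switches, defaulting to the most
    protective setting (everything off)."""
    allow_commercial_use: bool = False
    allow_reposting: bool = False

def can_repost(perms: WorkPermissions) -> bool:
    """Reposting is available only while the repost switch is on."""
    return perms.allow_reposting

def can_save(perms: WorkPermissions) -> bool:
    """Saving follows the same switch: turning reposting off disables
    both reposting and saving, as described above."""
    return perms.allow_reposting
```

Defaulting both switches to off means a newly published work is fully protected until its creator explicitly opts in to reuse.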