Here’s how to protect your content from AI training.
In the digital gold rush of artificial intelligence, data is the new oil. And for many AI models like ChatGPT, Midjourney, or Stable Diffusion, this data comes directly from us: bloggers, artists, photographers, and journalists.
The problem? Often, this happens without permission, without compensation, and without credit. This wholesale “data scraping” raises an existential question for creatives: How do I keep control over my intellectual property?
Here’s the current state of the art and the tactics you need to protect your content from the tech giants’ hungry bots.

The first line of defense: Technical barriers (“opt-out”)
The simplest way is often the technical one. Many AI companies have started implementing mechanisms that allow website operators to signal: “Please do not train here.”
Adjusting robots.txt
If you own a website (e.g., a portfolio or blog), the robots.txt file acts as your gatekeeper. You can block specific bots.
- GPTBot (OpenAI): OpenAI states that this crawler respects robots.txt blocks.
- CCBot (Common Crawl): Common Crawl is one of the largest data sources for AI training. Blocking this bot keeps your content out of future crawls that feed many models.
- Google-Extended: Tells Google not to use your content to train Gemini and Vertex AI models (it does not affect your normal search ranking).
Code snippet for your robots.txt:

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
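Before deploying rules like these, it is worth checking that they actually do what you intend. Python's standard-library urllib.robotparser evaluates robots.txt rules the same way a well-behaved crawler would; the rules string below mirrors the blocking pattern shown above:

```python
from urllib import robotparser

# Block GPTBot entirely while leaving everything open for other crawlers.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

blocked_for_gptbot = not rp.can_fetch("GPTBot", "https://example.com/my-post")
allowed_for_browser = rp.can_fetch("Mozilla/5.0", "https://example.com/my-post")
print(blocked_for_gptbot, allowed_for_browser)  # True True
```

Remember that robots.txt is only a polite request: well-behaved crawlers honor it, but nothing technically forces a scraper to.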
Use platform settings
Many platforms are responding to creator pressure. Check the settings on sites like:
- DeviantArt / ArtStation: Look for checkboxes like “NoAI” or “Opt-out of AI datasets”.
- Instagram / Facebook: Meta has introduced options (often hidden in the privacy settings) to opt out of data use for “Generative AI”.
The robots.txt snippet above is ready to copy and paste. All that’s left is to put it in the right place:
How to insert the file
The procedure depends on how your website is built. Here are the instructions for the most common systems:
1. WordPress
WordPress creates a virtual robots.txt file by default. The easiest way to edit it is with an SEO plugin.
- Yoast SEO: Go to Yoast SEO -> Tools -> File Editor. There you can edit the contents of the robots.txt file. Simply add the blocking rules from the snippet above.
- Rank Math: Go to Rank Math -> General Settings -> Edit robots.txt.
- Without a plugin: You can create a text file named robots.txt on your computer, insert the code, and upload this file to your website’s root directory via FTP (e.g., FileZilla).
2. Wix
- Go to your dashboard, then Marketing & SEO -> SEO -> SEO Settings.
- Scroll down to robots.txt and click Edit.
- Add the “Disallow” lines. (Note: Wix often has predefined settings; don’t delete anything important, just add the bots.)
3. Squarespace
Squarespace is a bit more restrictive. You can’t directly edit the robots.txt file.
- However, Squarespace recently added a global setting: Go to Settings -> Crawlers & Bots (or Site Visibility, depending on your version) and activate the “Block Artificial Intelligence” toggle. This will handle most of it automatically.
4. Shopify
- You can edit the robots.txt file via the admin panel by editing the robots.txt.liquid template in your theme code. This is a bit more technical.
- Often, it’s easier to use an app like “Easy Robots.txt Editor” from the Shopify Store.
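If you go the FTP route mentioned under WordPress above, the upload can also be scripted. Here is a minimal sketch using Python's standard ftplib; the hostname and login details are placeholders you would replace with your own hosting credentials:

```python
import ftplib
from io import BytesIO

# The blocking rules from the snippet earlier in this article.
ROBOTS_RULES = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
"""

def upload_robots(host: str, user: str, password: str,
                  content: str = ROBOTS_RULES) -> None:
    """Upload a robots.txt file to the FTP login directory."""
    with ftplib.FTP(host) as ftp:
        ftp.login(user, password)
        ftp.storbinary("STOR robots.txt", BytesIO(content.encode("utf-8")))

# upload_robots("ftp.example.com", "username", "password")  # placeholder credentials
```

Depending on your host, the FTP login directory may not be the web root; in that case change into it first (e.g. ftp.cwd("public_html")) before storing the file.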
Poison pills for image AI: Nightshade and Glaze
For visual artists, simply opting out is often insufficient, as images are frequently already included in datasets (like LAION-5B). This is where tools come into play that modify the image at the pixel level so that it looks normal to humans but is “toxic” to AI.
Glaze: This tool overlays an invisible “veil” on your image. If an AI tries to copy your style, it will be confused. The model then learns, for example, that your impressionistic style actually looks like an abstract doodle. It protects against style theft.
Nightshade: This is the offensive option. Nightshade manipulates data so that the AI model learns false associations. For example, an image of a dog is coded as a cat for the AI. If enough of these “poisoned” images are used for training, the model will start generating cats when “dog” is input. This sabotages the model’s training.
Important: These tools are currently available for free through the University of Chicago, but require computing power to use.
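To build intuition for why poisoned training data is so damaging, here is a deliberately tiny sketch. It is not how Nightshade works internally (Nightshade computes adversarial pixel perturbations rather than swapping labels), but it shows how a learner trained on mislabeled points flips its answer. The 1-nearest-neighbour “model” and the coordinates are purely illustrative:

```python
import math

def nearest_label(query, dataset):
    """Return the label of the training point closest to `query` (1-nearest-neighbour)."""
    return min(dataset, key=lambda point: math.dist(query, point[0]))[1]

# Clean training data: 2D stand-ins for "dog" and "cat" images.
clean = [((1, 1), "dog"), ((1, 2), "dog"), ((5, 5), "cat"), ((5, 6), "cat")]

# Poisoned data: the same points near the dog cluster now carry the label "cat".
poisoned = [((1, 1), "cat"), ((1, 2), "cat"), ((5, 5), "cat"), ((5, 6), "cat")]

query = (1.5, 1.5)  # clearly next to the original "dog" cluster
print(nearest_label(query, clean))     # dog
print(nearest_label(query, poisoned))  # cat
```

Scale this intuition up to millions of subtly perturbed images and you get the effect Nightshade aims for: the model's internal association between a word and what it looks like is corrupted.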
Watermarks and metadata (C2PA)
The Content Authenticity Initiative (CAI) and the C2PA standard aim to create transparency.
- Invisible watermarks: Tools like Digimarc or Imatag add invisible noise that remains even when the image is cropped or compressed. This at least allows you to prove that the image belongs to you.
- Metadata: Ensure that your copyright information is firmly embedded in the IPTC metadata of your files. While many AI scrapers currently ignore this, future legislation could require them to read this data.
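To make the idea of an invisible watermark concrete, here is a toy least-significant-bit (LSB) scheme in plain Python. Real products like Digimarc use far more robust, compression-resistant techniques; a simple LSB mark is destroyed by JPEG re-encoding, so treat this strictly as an illustration of the principle that a message can hide in pixel values without visibly changing the image:

```python
def embed_watermark(pixels, message):
    """Hide each bit of an ASCII `message` in the least significant bit of one pixel value."""
    bits = [(byte >> i) & 1 for byte in message.encode("ascii") for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract_watermark(pixels, length):
    """Read back `length` characters from the pixels' least significant bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    ).decode("ascii")

pixels = list(range(100, 180))            # stand-in for grayscale pixel values
marked = embed_watermark(pixels, "(c)JD")
print(extract_watermark(marked, 5))       # (c)JD
```

Note that every pixel changes by at most 1 brightness level, which is why the mark is invisible to the eye.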
The “paywall” strategy: premium content
If bots are scanning everything that’s publicly accessible, the logical consequence is: Don’t make it public.
The trend is strongly shifting back towards closed communities and gated content:
- Newsletters & Substack: Texts land directly in the readers’ inboxes, not on an indexable website.
- Patreon / Ko-fi: High-resolution images or exclusive texts are only available for a fee behind a registration barrier. Bots (usually) can’t get in here.
This not only protects against AI but often also strengthens the bond with “real” fans.
However, this only works if you have already built a sufficiently loyal community!
Legal action: What does the future hold?
Technology is a constant cat-and-mouse game. In the long run, creators need legal certainty.
- EU AI Act: The European Union requires providers of general-purpose AI models to publish summaries of the copyrighted material used in training. This transparency is a first step toward being able to prove copyright infringement.
- Class action lawsuits: In the US, major lawsuits are currently underway by authors (including George R.R. Martin) and artists against OpenAI and Midjourney. The outcome of these lawsuits will determine whether AI training falls under “fair use” or constitutes copyright infringement.
Conclusion: A multi-layered protective shield
There is (still) no foolproof way to protect your work. Anyone who shares their art online takes a risk. But you’re not defenseless.
Your checklist for today:
- Block bots: Update your robots.txt file.
- Use cloaking tools: Download Glaze if you create visual art.
- Diversify: Consider putting your most valuable content behind a paywall.
The battle for intellectual property has only just begun – and knowledge is your best weapon.