Here’s how to protect your content from AI training.
In the digital gold rush of artificial intelligence, data is the new oil. And for many AI models like ChatGPT, Midjourney, or Stable Diffusion, this data comes directly from us: bloggers, artists, photographers, and journalists.
The problem? Often, this happens without permission, without compensation, and without credit. This “data scraping” plunder poses an existential question for creatives: How do I maintain control over my intellectual property?
Here’s the current state of the art and the tactics you need to protect your content from the tech giants’ hungry bots.

The first line of defense: Technical barriers (“opt-out”)
The simplest way is often the technical one. Many AI companies have started implementing mechanisms that allow website operators to signal: “Please do not train here.”
Adjusting robots.txt
If you own a website (e.g., a portfolio or blog), the robots.txt file acts as your gatekeeper. You can block specific bots.
- GPTBot (OpenAI): OpenAI states that it respects robots.txt blocks for this crawler.
- CCBot (Common Crawl): Common Crawl's archive is one of the largest data sources for AI training; blocking it keeps your content out of a dataset many models are built on.
- Google-Extended: A control token that stops Google from using your content for Gemini (formerly Bard) and Vertex AI training, without affecting normal search indexing.
Code snippet for your robots.txt:

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
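If you want to confirm the rules behave as intended before deploying them, Python's standard-library robots.txt parser can check them locally. A minimal sketch (the paths are illustrative):

```python
from urllib.robotparser import RobotFileParser

# The same rules as in the robots.txt snippet above.
RULES = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
"""

parser = RobotFileParser()
parser.parse(RULES.splitlines())

# The AI crawlers are blocked site-wide ...
print(parser.can_fetch("GPTBot", "/blog/my-post"))     # False
print(parser.can_fetch("CCBot", "/portfolio/"))        # False
# ... while ordinary search indexing is unaffected.
print(parser.can_fetch("Googlebot", "/blog/my-post"))  # True
```

This also demonstrates why the block only applies to the bots named in the file: any crawler without a matching User-agent group is allowed by default.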
Use platform settings
Many platforms are responding to creator pressure. Check the settings on sites like:
- DeviantArt / ArtStation: Look for checkboxes like “NoAI” or “Opt-out of AI datasets”.
- Instagram / Facebook: Meta has introduced options (often hidden in the privacy settings) to opt out of data use for “Generative AI”.
The snippet above is a ready-made robots.txt: you can simply copy and paste it.
How to insert the file
The procedure depends on how your website is built. Here are the instructions for the most common systems:
1. WordPress
WordPress creates a virtual robots.txt file by default. The easiest way to edit it is with an SEO plugin.
- Yoast SEO: Go to Yoast SEO -> Tools -> File Editor. There you can edit the contents of the robots.txt file. Simply add the snippet from above.
- Rank Math: Go to Rank Math -> General Settings -> Edit robots.txt.
- Without a plugin: You can create a text file named robots.txt on your computer, insert the code, and upload this file to your website’s root directory via FTP (e.g., FileZilla).
2. Wix
- Go to your dashboard, then Marketing & SEO -> SEO -> SEO Settings.
- Scroll down to robots.txt and click Edit.
- Add the “Disallow” lines. (Note: Wix often has predefined settings; don’t delete anything important, just add the bots.)
3. Squarespace
Squarespace is a bit more restrictive. You can’t directly edit the robots.txt file.
- However, Squarespace recently added a global setting: Go to Settings -> Crawlers & Bots (or Site Visibility, depending on your version) and activate the “Block Artificial Intelligence” toggle. This will handle most of it automatically.
4. Shopify
- You can edit the robots.txt file via the admin panel by editing the robots.txt.liquid template in your theme code. This is a bit more technical.
- Often, it’s easier to use an app like “Easy Robots.txt Editor” from the Shopify Store.
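For the template route, Shopify's documented approach is to keep the platform's default rules and append your own group at the end of robots.txt.liquid. The sketch below follows the loop pattern from Shopify's docs; treat it as a starting point and verify the output at yourstore.com/robots.txt afterwards:

```liquid
{%- comment -%}
  robots.txt.liquid — render Shopify's default rules,
  then append the AI-bot blocks as literal text.
{%- endcomment -%}
{% for group in robots.default_groups %}
  {{- group.user_agent }}
  {%- for rule in group.rules -%}
    {{ rule }}
  {%- endfor -%}
  {%- if group.sitemap != blank -%}
    {{ group.sitemap }}
  {%- endif -%}
{% endfor %}

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Keeping the default loop intact matters: Shopify's generated rules protect checkout and cart pages from search crawlers, and deleting them can hurt your store's SEO.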
Poison pills for image AI: Nightshade and Glaze
For visual artists, simply opting out is often insufficient, as images are frequently already included in datasets (like LAION-5B). This is where tools come into play that modify the image at the pixel level so that it looks normal to humans but is “toxic” to AI.
Glaze: This tool overlays an invisible “veil” on your image. If an AI tries to copy your style, it will be confused. The model then learns, for example, that your impressionistic style actually looks like an abstract doodle. It protects against style theft.
Nightshade: This is the offensive option. Nightshade manipulates data so that the AI model learns false associations. For example, an image of a dog is coded as a cat for the AI. If enough of these “poisoned” images are used for training, the model will start generating cats when “dog” is input. This sabotages the model’s training.
Important: These tools are currently available for free through the University of Chicago, but require computing power to use.
Watermarks and metadata (C2PA)
The Content Authenticity Initiative (CAI) and the C2PA standard aim to create transparency.
- Invisible watermarks: Tools like Digimarc or Imatag add invisible noise that remains even when the image is cropped or compressed. This at least allows you to prove that the image belongs to you.
- Metadata: Ensure that your copyright information is firmly embedded in the IPTC metadata of your files. While many AI scrapers currently ignore this, future legislation could require them to read this data.
The “paywall” strategy: premium content
If bots are scanning everything that’s publicly accessible, the logical consequence is: Don’t make it public.
The trend is strongly shifting back towards closed communities and gated content:
- Newsletters & Substack: Texts land directly in the readers’ inboxes, not on an indexable website.
- Patreon / Ko-fi: High-resolution images or exclusive texts are only available for a fee behind a registration barrier. Bots (usually) can’t get in here.
This not only protects against AI but often also strengthens the bond with “real” fans.
However, this only works if a correspondingly stable community has been built!
Legal action: What does the future hold?
Technology is a constant cat-and-mouse game. In the long run, creators need legal certainty.
- EU AI Act: The European Union requires AI companies to be more transparent about what they have used to train their models. This is the first step in being able to prove copyright infringement.
- Class action lawsuits: In the US, major lawsuits are currently underway: authors (including George R.R. Martin) are suing OpenAI, and visual artists are suing Midjourney and Stability AI. The outcome of these cases will help determine whether AI training falls under "fair use" or constitutes copyright infringement.
Conclusion: A multi-layered protective shield
There is (still) no foolproof way to protect your work. Anyone who shares their art online takes a risk. But you’re not defenseless.
Your checklist for today:
- Block bots: Update your robots.txt file.
- Use cloaking tools: Download Glaze if you create visual art.
- Diversify: Consider putting your most valuable content behind a paywall.
The battle for intellectual property has only just begun – and knowledge is your best weapon.