Here’s how to protect your content from AI training.
In the digital gold rush of artificial intelligence, data is the new oil. And for many AI models like ChatGPT, Midjourney, or Stable Diffusion, this data comes directly from us: bloggers, artists, photographers, and journalists.
The problem? Often this happens without permission, without compensation, and without credit. This wholesale "data scraping" raises an existential question for creatives: how do I keep control over my intellectual property?
Here’s the current state of the art and the tactics you need to protect your content from the tech giants’ hungry bots.

The first line of defense: Technical barriers (“opt-out”)
The simplest way is often the technical one. Many AI companies have started implementing mechanisms that allow website operators to signal: “Please do not train here.”
Adjusting robots.txt
If you own a website (e.g., a portfolio or blog), the robots.txt file acts as your gatekeeper. You can block specific bots.
- GPTBot (OpenAI): OpenAI states that its crawler respects robots.txt rules, so a block here keeps your pages out of future GPT training crawls.
- CCBot (Common Crawl): Common Crawl is one of the largest data sources for AI training. Blocking its bot keeps your pages out of future Common Crawl snapshots, which many models are built on.
- Google-Extended: A control that tells Google not to use your content for training Gemini (formerly Bard) and Vertex AI generative models; it does not affect your ranking in normal Google Search.
Code snippet for your robots.txt:
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
Use platform settings
Many platforms are responding to creator pressure. Check the settings on sites like:
- DeviantArt / ArtStation: Look for checkboxes like “NoAI” or “Opt-out of AI datasets”.
- Instagram / Facebook: Meta has introduced options (often hidden in the privacy settings) to opt out of data use for “Generative AI”.
We have created a ready-made robots.txt file for you that you can simply copy and paste:
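It combines the three entries from the snippet above with two other AI crawlers that publish their own opt-out user agents, Anthropic's ClaudeBot and Apple's Applebot-Extended. The list of crawlers changes over time, so treat this as a starting point and check each vendor's documentation before relying on it:

# Block common AI crawlers

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Applebot-Extended
Disallow: /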
How to insert the file
The procedure depends on how your website is built. Here are the instructions for the most common systems:
1. WordPress
WordPress creates a virtual robots.txt file by default. The easiest way to edit it is with an SEO plugin.
- Yoast SEO: Go to Yoast SEO -> Tools -> File Editor. There you can edit the contents of the robots.txt file. Simply add the code from above.
- Rank Math: Go to Rank Math -> General Settings -> Edit robots.txt.
- Without a plugin: You can create a text file named robots.txt on your computer, insert the code, and upload this file to your website’s root directory via FTP (e.g., FileZilla).
2. Wix
- Go to your dashboard, then Marketing & SEO -> SEO -> SEO Settings.
- Scroll down to robots.txt and click Edit.
- Add the “Disallow” lines. (Note: Wix often has predefined settings; don’t delete anything important, just add the bots.)
3. Squarespace
Squarespace is a bit more restrictive. You can’t directly edit the robots.txt file.
- However, Squarespace recently added a global setting: Go to Settings -> Crawlers & Bots (or Site Visibility, depending on your version) and activate the “Block Artificial Intelligence” toggle. This will handle most of it automatically.
4. Shopify
- You can edit the robots.txt file via the admin panel by editing the robots.txt.liquid template in your theme code. This is a bit more technical.
- Often, it’s easier to use an app like “Easy Robots.txt Editor” from the Shopify Store.
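Whichever system you use, it's worth verifying that the published file actually blocks what you think it blocks. Here's a minimal sketch using Python's standard library; the domain is a placeholder, so swap in your own:

from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"  # placeholder, use your own domain

robots = RobotFileParser()
robots.set_url(SITE + "/robots.txt")
robots.read()  # downloads and parses the live robots.txt

# can_fetch() returns False if the bot is disallowed for the given path.
for bot in ["GPTBot", "CCBot", "Google-Extended"]:
    allowed = robots.can_fetch(bot, SITE + "/")
    print(bot, "allowed" if allowed else "blocked")

Keep in mind that robots.txt is a polite request, not a hard barrier: well-behaved crawlers honor it, but it cannot physically stop a bot that decides to ignore it.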
Poison pills for image AI: Nightshade and Glaze
For visual artists, simply opting out is often insufficient, as images are frequently already included in datasets (like LAION-5B). This is where tools come into play that modify the image at the pixel level so that it looks normal to humans but is “toxic” to AI.
Glaze: This tool overlays an invisible “veil” on your image. If an AI tries to copy your style, it will be confused. The model then learns, for example, that your impressionistic style actually looks like an abstract doodle. It protects against style theft.
Nightshade: This is the offensive option. Nightshade manipulates data so that the AI model learns false associations. For example, an image of a dog is coded as a cat for the AI. If enough of these “poisoned” images are used for training, the model will start generating cats when “dog” is input. This sabotages the model’s training.
Important: Both tools are currently available for free from the University of Chicago, but they need a fair amount of local computing power and some processing time per image.
Watermarks and metadata (C2PA)
The Content Authenticity Initiative (CAI) and the C2PA standard aim to create transparency about where a file comes from and how it has been edited ("content credentials").
- Invisible watermarks: Tools like Digimarc or Imatag add invisible noise that remains even when the image is cropped or compressed. This at least allows you to prove that the image belongs to you.
- Metadata: Ensure that your copyright information is firmly embedded in the IPTC metadata of your files. While many AI scrapers currently ignore this, future legislation could require them to read this data.
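If you have a lot of files to tag, you can also write these ownership fields with a script instead of clicking through an editor. Below is a minimal sketch using the Python library piexif, one option that covers the EXIF Artist and Copyright fields; full IPTC/XMP embedding is usually done with a dedicated tool such as ExifTool, and the path and name here are placeholders:

import piexif

IMAGE = "portfolio/photo.jpg"  # placeholder path

# Load the existing EXIF block (if any) and set the authorship fields.
exif_dict = piexif.load(IMAGE)
exif_dict["0th"][piexif.ImageIFD.Artist] = b"Jane Doe"
exif_dict["0th"][piexif.ImageIFD.Copyright] = b"(c) Jane Doe, all rights reserved"

# Write the updated EXIF block back into the image file in place.
piexif.insert(piexif.dump(exif_dict), IMAGE)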
The “paywall” strategy: premium content
If bots are scanning everything that’s publicly accessible, the logical consequence is: Don’t make it public.
The trend is strongly shifting back towards closed communities and gated content:
- Newsletters & Substack: Texts land directly in readers’ inboxes, and subscriber-only posts don’t sit on an openly indexable page.
- Patreon / Ko-fi: High-resolution images or exclusive texts are only available for a fee behind a registration barrier. Bots (usually) can’t get in here.
This not only protects against AI but often also strengthens the bond with “real” fans.
However, this only works if you have already built a sufficiently loyal community!
Legal action: What does the future hold?
Technology is a constant cat-and-mouse game. In the long run, creators need legal certainty.
- EU AI Act: The European Union requires providers of AI models to publish a summary of the content they used for training. This is a first step toward being able to prove copyright infringement.
- Class action lawsuits: In the US, authors (including George R.R. Martin) and visual artists are currently pursuing major lawsuits against OpenAI and Midjourney. The outcome of these cases will help determine whether AI training falls under “fair use” or constitutes copyright infringement.
Conclusion: A multi-layered protective shield
There is (still) no foolproof way to protect your work. Anyone who shares their art online takes a risk. But you’re not defenseless.
Your checklist for today:
- Block bots: Update your robots.txt file.
- Use cloaking tools: Download Glaze if you create visual art.
- Diversify: Consider putting your most valuable content behind a paywall.
The battle for intellectual property has only just begun – and knowledge is your best weapon.