Robots.txt generator

Create crawl rules visually, block AI scrapers when needed, and export a ready-to-ship robots.txt file.

Rule editor

Use * for every crawler, or enter a specific crawler name such as Googlebot.

One path per line. Start each path with /.

Use allow rules to override a broader disallow rule.

Point crawlers to your sitemap so they can discover pages faster.
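Taken together, the tips above produce a file along these lines (the /private/ path and sitemap URL are placeholders for illustration):

```text
# Default group matching every crawler
User-agent: *
Disallow: /private/
# Allow overrides the broader disallow for one file
Allow: /private/image.jpg

Sitemap: https://www.example.com/sitemap.xml
```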

File preview: robots.txt

Robots.txt reference

User-agent: *

Matches every crawler and is typically used as the default rule group.

Disallow: /private/

Blocks crawlers from crawling everything inside the /private/ directory.

Allow: /private/image.jpg

Lets crawlers access a specific file even when a parent path is disallowed.

Sitemap: URL

Points crawlers to the sitemap so they can discover your public URLs more efficiently.

Frequently asked questions

Do I need a robots.txt file?

It is not mandatory, but it is strongly recommended. Without one, crawlers assume they may crawl every URL they discover, including admin areas, duplicate content, and utility paths you may prefer to keep out of search results. Note that robots.txt controls crawling, not indexing: a disallowed page can still appear in search results if other sites link to it.

Can I upload this file directly even if I am not technical?

Yes. The generator follows the standard Robots Exclusion Protocol, so the exported file works as-is; just upload it to the root of your domain so it is reachable at /robots.txt. For a typical marketing site, the default allow-all preset plus a sitemap URL is often enough.
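As a sketch, the allow-all preset with a placeholder sitemap URL can be as short as:

```text
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml
```

An empty Disallow line blocks nothing, so every crawler may visit every page.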

How do I block ChatGPT or other AI crawlers?

Use the Block AI crawlers preset. It adds disallow rules for common bots such as GPTBot, ChatGPT-User, Google-Extended, and CCBot.
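The preset's output looks roughly like this, with one rule group per bot (the exact list the preset includes may vary):

```text
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```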