Kordu Tools

Robots.txt Generator

Runs in browser

Generate and validate your robots.txt file with a form builder, CMS presets, and instant error checking.

Last updated 02 Apr 2026

Build a complete robots.txt file using our free generator. Add User-agent rules, Allow and Disallow directives, Sitemap URLs, and Crawl-delay settings. Use CMS presets for WordPress, Shopify, or Next.js. Validate any existing robots.txt for syntax errors, conflicting rules, and missing sitemaps — instant, browser-based.

How to use

  1. Add crawl rules

     Choose a User-agent (or use * for all bots) and add Allow or Disallow directives for specific paths like /admin/ or /checkout/.

  2. Apply a CMS preset

     Click a preset button for WordPress, Shopify, or Next.js to auto-fill common crawl rules for that platform, including admin paths and duplicate content patterns.

  3. Add your sitemap URL

     Enter your sitemap URL (e.g. https://example.com/sitemap.xml) so search engines can find and index your content efficiently.

  4. Review the live preview

     The generated robots.txt file updates in real time as you make changes. Check it carefully before copying.

  5. Copy or download

     Click Copy to clipboard or Download as robots.txt and upload the file to the root of your domain.
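Following the steps above for a typical site might produce a file like the one below. The paths and sitemap URL are illustrative, not recommendations for any specific platform:

```
User-agent: *
Disallow: /admin/
Disallow: /checkout/

User-agent: Bingbot
Crawl-delay: 5

Sitemap: https://example.com/sitemap.xml
```

Upload the file as /robots.txt at the root of your domain; crawlers only look for it there.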

Frequently asked questions

What is a robots.txt file?
A robots.txt file tells search engine crawlers which pages of your site they can and cannot access. It lives at the root of your domain (e.g. https://yourdomain.com/robots.txt) and is checked by crawlers before they begin indexing.
Which User-agent should I use?
Use * (asterisk) to apply rules to all crawlers. Add specific rules for individual bots — Googlebot, Bingbot, GPTBot (OpenAI), CCBot (Common Crawl), and others — if you need different behaviour per crawler.
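For example, a file can mix a general group with a bot-specific one. Per the Robots Exclusion Protocol, a crawler follows the most specific group that matches its name and ignores the * group when a named match exists:

```
# Applies to every crawler without a more specific group
User-agent: *
Disallow: /private/

# Googlebot matches this group instead, which allows everything
User-agent: Googlebot
Disallow:
```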
Does Disallow remove pages from Google's index?
No. Disallow prevents crawling but does not remove already-indexed pages. To deindex a page, use a noindex meta tag or the Removals tool in Google Search Console.
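To use noindex, the page is tagged in its HTML head (or via an X-Robots-Tag response header). Note the interaction with robots.txt: if the page is also Disallowed, crawlers cannot fetch it and will never see the noindex tag, so leave the page crawlable until it drops out of the index:

```
<meta name="robots" content="noindex">
```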
What is Crawl-delay?
Crawl-delay tells crawlers how many seconds to wait between requests to reduce server load. Google ignores this directive — use Google Search Console to control Googlebot's crawl rate instead. Other crawlers like Bingbot do respect it.
Should I block my admin and login pages?
Yes. Paths like /admin/, /wp-admin/, /login/, /dashboard/, and /checkout/ should be blocked to prevent crawlers from wasting crawl budget on pages that should never be indexed.
What does the validator check for?
The validator checks for syntax errors, conflicting Allow/Disallow rules for the same path, missing Sitemap directives, unusually high Crawl-delay values, and unknown or misspelled directives.
What is the difference between Allow and Disallow?
Disallow blocks a crawler from accessing a path. Allow explicitly permits a path even if a broader Disallow rule covers it. Allow takes precedence over Disallow for the same path.
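The interaction between Allow and Disallow can be sketched with Python's standard-library urllib.robotparser. One caveat: Python's parser applies rules in file order (first match wins), whereas Google resolves conflicts by longest-match precedence, so the Allow rule is listed first here to get the same outcome under both schemes:

```python
from urllib import robotparser

rules = """\
User-agent: *
Allow: /admin/help
Disallow: /admin/
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# /admin/help matches the Allow rule, so it may be crawled
print(parser.can_fetch("*", "https://example.com/admin/help"))

# /admin/users only matches the broader Disallow rule
print(parser.can_fetch("*", "https://example.com/admin/users"))
```

This is a quick way to sanity-check a generated file against the URLs you care about before deploying it.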
Is my robots.txt file sent to a server?
No. All generation and validation happens in your browser. Your file content is never transmitted to any server.
How do I block AI training bots?
Add rules for specific AI crawlers: Disallow: / for GPTBot (OpenAI), CCBot (Common Crawl), anthropic-ai, and Google-Extended. The WordPress and Next.js presets include these by default.
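Spelled out as robots.txt groups, blocking the AI crawlers named above looks like this (each bot needs its own User-agent group):

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: Google-Extended
Disallow: /
```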

Your robots.txt file is the first thing search engine crawlers read when they visit your site. Getting it wrong can block your entire site from Google or waste crawl budget on pages that should never be indexed.

This tool gives you a visual form builder for robots.txt — no need to memorise directives or worry about syntax. Add rules for any search engine crawler (or use * for all bots), set Allow and Disallow path patterns, declare your Sitemap URL, and apply Crawl-delay settings. One-click CMS presets auto-fill sensible defaults for WordPress, Shopify, and Next.js, covering admin paths, login pages, duplicate content patterns, and bot-specific rules for GPTBot and other AI crawlers.

The Validate mode lets you paste an existing robots.txt and check it for syntax errors, conflicting Allow/Disallow rules, missing Sitemap directives, overly aggressive Crawl-delay values, and unknown directives. All checking runs in your browser with no file upload required.
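The kinds of checks described can be sketched in a few lines. This is a simplified illustration, not the tool's actual implementation (which runs in the browser); the directive list and the 30-second Crawl-delay threshold are assumptions chosen for the example:

```python
# Directives commonly accepted by major crawlers (illustrative set)
KNOWN = {"user-agent", "allow", "disallow", "sitemap", "crawl-delay"}

def validate(text):
    """Return a list of (line_number, message) warnings for a robots.txt body."""
    warnings = []
    saw_sitemap = False
    for n, raw in enumerate(text.splitlines(), start=1):
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        if ":" not in line:
            warnings.append((n, "syntax error: missing ':'"))
            continue
        field, value = (part.strip() for part in line.split(":", 1))
        key = field.lower()
        if key not in KNOWN:
            warnings.append((n, f"unknown directive '{field}'"))
        elif key == "sitemap":
            saw_sitemap = True
        elif key == "crawl-delay":
            try:
                if float(value) > 30:  # assumed threshold for "unusually high"
                    warnings.append((n, "unusually high Crawl-delay"))
            except ValueError:
                warnings.append((n, "Crawl-delay is not a number"))
    if not saw_sitemap:
        warnings.append((0, "no Sitemap directive found"))
    return warnings
```

Running it over a file with a misspelled directive and no sitemap surfaces both problems, while a well-formed file comes back clean.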

Who is this for? SEO specialists auditing crawl settings, developers deploying new sites, e-commerce managers protecting admin and checkout paths, and content teams managing crawl budget on large sites.
