Tabelog Robots.txt

If you've ever tried to crawl Tabelog (食べログ), Japan's most authoritative restaurant review platform, you've met its first line of defense. It's not a CAPTCHA. It's not an IP ban. It's a deceptively simple text file: https://tabelog.com/robots.txt

At first glance, it looks like a standard robots.txt. But look closer. It tells a fascinating story about data protection, competitive moats, and Japan's unique web culture.

User-agent: *
Disallow: /search/
Disallow: /rgsearch/
Disallow: /kw/
Disallow: /syop/
Disallow: /rr/
Disallow: /list/
Disallow: /rvw/
Disallow: /photo/
Disallow: /map/
Disallow: /guide/
Disallow: /sitemap/
Disallow: /navi/
Disallow: /rank/
Disallow: /shop/%A5%EA%A5%B9%A5%C8
Disallow: /bshop/
Disallow: /rstd/
Disallow: /west/
Disallow: /tokyo/
Disallow: /osaka/
Disallow: /aichi/
Disallow: /kyoto/
Disallow: /hyogo/
Disallow: /hokkaido/
Disallow: /fukuoka/
Disallow: /miyagi/
Disallow: /chiba/
Disallow: /saitama/
Disallow: /kanagawa/
Disallow: /shizuoka/
Disallow: /hiroshima/

What Tabelog is really saying

1. "Search results are off-limits." The /search/ and /list/ paths are blocked. Blocking search pages is common on large sites to prevent infinite crawl loops, but for Tabelog it's also strategic: search result pages contain ranked restaurant lists, their core IP. Letting search engines index those pages would let competitors reverse-engineer the ranking algorithm.

The list of Disallow: /tokyo/, /osaka/, /kyoto/, and the other prefecture paths is unusual. Most sites want their city landing pages indexed; Tabelog explicitly blocks them. Why? Possibly because those pages are thin, auto-generated, or contain internal navigation that leads to disallowed content. More likely, Tabelog prefers to control how its regional authority is presented: via its own sitemap and internal linking, not via open-ended crawler access.
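You can check these rules against concrete URLs with Python's standard urllib.robotparser. A minimal sketch, using a small subset of the directives above parsed directly from a string (no network access; the user agent name MyCrawler is a made-up stand-in):

```python
from urllib.robotparser import RobotFileParser

# A subset of Tabelog's published rules, taken from the file above
RULES = """\
User-agent: *
Disallow: /search/
Disallow: /list/
Disallow: /tokyo/
Disallow: /osaka/
""".splitlines()

rp = RobotFileParser()
rp.parse(RULES)  # parse the rules directly instead of fetching the live file

# Search results and regional landing pages are blocked for every crawler...
print(rp.can_fetch("MyCrawler", "https://tabelog.com/search/"))  # False
print(rp.can_fetch("MyCrawler", "https://tabelog.com/tokyo/"))   # False
# ...but any path not listed remains crawlable under robots.txt rules
print(rp.can_fetch("MyCrawler", "https://tabelog.com/help/"))    # True
```

Because the file declares only a wildcard `User-agent: *` group, the same verdicts apply regardless of which crawler name you pass to can_fetch.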
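One directive stands out: Disallow: /shop/%A5%EA%A5%B9%A5%C8. Those percent-escapes are not UTF-8; they decode as EUC-JP, the legacy encoding still common on older Japanese sites, which suggests this rule dates back years. A quick sketch with the Python standard library:

```python
from urllib.parse import unquote_to_bytes

# Percent-escapes from the robots.txt rule -> raw bytes
raw = unquote_to_bytes("%A5%EA%A5%B9%A5%C8")

# Decodes cleanly as EUC-JP: リスト ("list"), likely an old listing URL
print(raw.decode("euc-jp"))

# The same bytes are invalid UTF-8, which is why the path looks opaque today
try:
    raw.decode("utf-8")
except UnicodeDecodeError:
    print("not UTF-8")
```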