# robots.txt adapted from http://www.wikipedia.org/ and friends
#
# Please note: There are a lot of pages on this site, and there are
# some misbehaved spiders out there that go _way_ too fast. If you're
# irresponsible, your access to the site may be blocked.
#
# USE THE VALIDATOR!!!
## https://technicalseo.com/tools/robots-txt/

# Observed spamming large amounts of https://en.wikipedia.org/?curid=NNNNNN
# and ignoring 429 ratelimit responses, claims to respect robots:
# http://mj12bot.com/
User-agent: MJ12bot
Disallow: /

# Wikipedia work bots:
User-agent: IsraBot
Disallow:

User-agent: Orthogaffe
Disallow:

# Crawlers that are kind enough to obey, but which we'd rather not have
# unless they're feeding search engines.
User-agent: UbiCrawler
Disallow: /

User-agent: DOC
Disallow: /

User-agent: Zao
Disallow: /

# Some bots are known to be trouble, particularly those designed to copy
# entire sites. Please obey robots.txt.
User-agent: sitecheck.internetseer.com
Disallow: /

User-agent: Zealbot
Disallow: /

User-agent: MSIECrawler
Disallow: /

User-agent: SiteSnagger
Disallow: /

User-agent: WebStripper
Disallow: /

User-agent: WebCopier
Disallow: /

User-agent: Fetch
Disallow: /

User-agent: Offline Explorer
Disallow: /

User-agent: Teleport
Disallow: /

User-agent: TeleportPro
Disallow: /

User-agent: WebZIP
Disallow: /

User-agent: linko
Disallow: /

User-agent: HTTrack
Disallow: /

User-agent: Microsoft.URL.Control
Disallow: /

User-agent: Xenu
Disallow: /

User-agent: larbin
Disallow: /

User-agent: libwww
Disallow: /

User-agent: ZyBORG
Disallow: /

User-agent: Download Ninja
Disallow: /

# Misbehaving: requests much too fast:
User-agent: fast
Disallow: /

#
# Sorry, wget in its recursive mode is a frequent problem.
# Please read the man page and use it properly; there is a
# --wait option you can use to set the delay between hits,
# for instance.
#
User-agent: wget
Disallow: /

#
# The 'grub' distributed client has been *very* poorly behaved.
#
User-agent: grub-client
Disallow: /

#
# Doesn't follow robots.txt anyway, but...
#
User-agent: k2spider
Disallow: /

#
# Hits many times per second, not acceptable
# http://www.nameprotect.com/botinfo.html
User-agent: NPBot
Disallow: /

# A capture bot, downloads gazillions of pages with no public benefit
# http://www.webreaper.net/
User-agent: WebReaper
Disallow: /

#
# Friendly, low-speed bots are welcome viewing article pages, but not
# dynamically-generated pages please.
#
# Inktomi's "Slurp" can read a minimum delay between hits; if your
# bot supports such a thing using the 'Crawl-delay' or another
# instruction, please let us know.
#
# There is a special exception for API mobileview to allow dynamic
# mobile web & app views to load section content.
# These views aren't HTTP-cached but use parser cache aggressively
# and don't expose special: pages etc.
#
# Another exception is for REST API documentation, located at
# /api/rest_v1/?doc.
#
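# As an illustration of the mobileview exception (the page and section
# parameters below are an example, not rules taken from this file), the
# Allow line is meant to cover requests of the form:
#   /w/api.php?action=mobileview&page=Main_Page&sections=0
#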
User-agent: *
Allow: /w/api.php?action=mobileview&
Allow: /w/load.php?
Allow: /api/rest_v1/?doc
Disallow: /w/
Disallow: /api/
Disallow: /trap/
Disallow: /wiki/Special:
Disallow: /wiki/Spezial:
Disallow: /wiki/Spesial:
Disallow: /wiki/Special%3A
#
# T14111
Disallow: /wiki/Wikipedia:Checkuser/
# T15961
Disallow: /wiki/Dragon_Mania_Legends_Wiki:Spam-Blacklist-Log
Disallow: /wiki/Dragon_Mania_Legends_Wiki%3ASpam-Blacklist-Log
#
# T16075
Disallow: /wiki/MediaWiki:Spam-blacklist
Disallow: /wiki/MediaWiki%3ASpam-blacklist
Disallow: /wiki/MediaWiki_talk:Spam-blacklist
Disallow: /wiki/MediaWiki_talk%3ASpam-blacklist

#
# Throttle YandexBot
User-Agent: YandexBot
Crawl-Delay: 2.5

# Throttle BingBot
User-agent: bingbot
Crawl-delay: 1

# Block SemrushBot
User-Agent: SemrushBot
Disallow: /

# Throttle MJ12Bot
User-agent: MJ12bot
Crawl-Delay: 10

# Prevent bots from crawling action views
# Manual:Short_URL/Prevent_bots_from_crawling_index.php
# https://www.mediawiki.org/wiki/Manual:Robots.txt#With_short_URLs
# BE CAREFUL with this
User-agent: *
Disallow: /w/index.php?

Sitemap: https://www.dragon-mania-legends.wiki/sitemap/sitemap-index-dragon-mania-legends-wikiwiki.xml
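
# Note: the /w/index.php? rule above is aimed at action views. For
# illustration only (example URLs, not additional directives), it blocks
# requests such as:
#   /w/index.php?title=Main_Page&action=history
#   /w/index.php?title=Main_Page&action=edit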