The robots.txt file
A robots.txt file tells search engine crawlers which URLs the crawler can access on your site. This is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google. To keep a web page out of Google, block indexing with noindex or password-protect the page. You can use a robots.txt file to manage crawling traffic. If your web page is blocked with a robots.txt file, its URL can still appear in search results, but the search result will not have a description. If you see this search result for your page and want to fix it, remove the robots.txt entry blocking the page.

If you want to hide the page completely from Search, use another method, such as noindex or password protection. Use a robots.txt file to manage crawl traffic, and to prevent image, video, and audio files from appearing in Google search results. This won't prevent other pages or users from linking to your image, video, or audio file. Before you create or edit a robots.txt file, you should know the limits of this URL blocking method. Depending on your goals and situation, you might want to consider other mechanisms to ensure your URLs are not findable on the web.
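As a sketch, a rule keeping a single media file out of Google's image results might look like the following; the crawler name is Google's image crawler, and the file path is a hypothetical example:

```text
# Prevent Google Images from crawling one specific photo.
User-agent: Googlebot-Image
Disallow: /images/private-photo.jpg
```

Note that this only stops crawling of the file itself; other pages can still link to it.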

If you decided that you need one, learn how to create a robots.txt file.


Create a robots.txt file

Here is a simple robots.txt file with two rules: the first blocks one crawler from a single directory, and the second allows all other user agents to crawl the entire site. The second rule could have been omitted and the result would be the same; the default behavior is that user agents are allowed to crawl the entire site.
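A sketch of such a file; the crawler name, blocked directory, and sitemap URL are illustrative placeholders:

```text
# Rule 1: this crawler may not crawl anything under /nogooglebot/.
User-agent: Googlebot
Disallow: /nogooglebot/

# Rule 2: all other user agents may crawl the entire site
# (this is also the default behavior).
User-agent: *
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```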

See the syntax section for more examples.

Basic guidelines for creating a robots.txt file: create a file named robots.txt, add rules to the file, upload the file to your site, and test it.

Format and location rules: The file must be named robots.txt. Your site can have only one robots.txt file. The robots.txt file must be located at the root of the website host to which it applies. If you're unsure about how to access your website root, or need permissions to do so, contact your web hosting service provider. If you can't access your website root, use an alternative blocking method such as meta tags.

A robots.txt file must be a UTF-8 encoded text file. Google may ignore characters that are not part of the UTF-8 range, potentially rendering robots.txt rules invalid. A robots.txt file consists of one or more groups. Each group consists of multiple rules or directives (instructions), one directive per line. Each group begins with a User-agent line that specifies which crawler the group applies to. A group gives the following information: who the group applies to (the user agent); which directories or files that agent can access; and which directories or files that agent cannot access. Crawlers process groups from top to bottom.
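As an illustration, a file with two groups might look like this; the crawler name and paths are examples, and the lines starting with # are comments:

```text
# Group 1: applies only to Googlebot.
User-agent: Googlebot
Disallow: /archive/

# Group 2: applies to every other crawler.
User-agent: *
Disallow: /tmp/
```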

A user agent can match only one rule set, which is the first, most specific group that matches a given user agent. The default assumption is that a user agent can crawl any page or directory not blocked by a disallow rule. Rules are case-sensitive. The # character marks the beginning of a comment.

Google's crawlers support the following directives in robots.txt files:

user-agent: Required, one or more per group. This is the first line for any rule group and names the crawler the rules apply to. Google user agent names are listed in the Google list of user agents.

disallow: A directory or page that the named user agent should not crawl. If the rule refers to a page, it must be the full page name as shown in the browser.

allow: A directory or page that the named user agent may crawl. This is used to override a disallow directive to allow crawling of a subdirectory or page in a disallowed directory. For a single page, specify the full page name as shown in the browser.

sitemap: Optional. The location of a sitemap for this website.
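A sketch of the allow-within-disallow pattern; the directory and page names are hypothetical:

```text
User-agent: *
# Block the whole /docs/ directory...
Disallow: /docs/
# ...but still allow one subdirectory and one specific page inside it.
Allow: /docs/public/
Allow: /docs/getting-started.html
```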

Sitemaps are a good way to indicate which content Google should crawl, as opposed to which content it can or cannot crawl. Learn more about sitemaps.

Lines that don't match any of these directives are ignored. Once your file is in place, test the robots.txt markup to confirm it blocks and allows the URLs you intend.
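One way to sanity-check robots.txt rules before deploying them is Python's standard urllib.robotparser module; this sketch parses rules from a string rather than a live site, and the site and paths are made up for illustration. Note that Python's parser applies rules in file order rather than by most-specific match, so the narrower Allow line is listed before the broader Disallow line here:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content to test (not a real site's file).
rules = """\
User-agent: *
Allow: /private/public-page.html
Disallow: /private/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# The blocked directory is not fetchable; the explicitly allowed
# page inside it, and any path with no matching rule, are fetchable.
print(parser.can_fetch("*", "https://example.com/private/secret.html"))       # False
print(parser.can_fetch("*", "https://example.com/private/public-page.html"))  # True
print(parser.can_fetch("*", "https://example.com/other.html"))                # True
```

Testing against a live site works the same way, using set_url() and read() instead of parse().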


