{"id":104,"date":"2022-04-12T17:21:13","date_gmt":"2022-04-12T11:51:13","guid":{"rendered":"http:\/\/sh107.global.temp.domains\/~rinoblas\/?p=104"},"modified":"2022-09-29T15:13:14","modified_gmt":"2022-09-29T09:43:14","slug":"robots-txt-file-in-seo","status":"publish","type":"post","link":"https:\/\/rinoblast.com\/robots-txt-file-in-seo\/","title":{"rendered":"3 reason why should we use robots.txt in seo?"},"content":{"rendered":"\n

Have you ever gone on a tour and found yourself needing a guide to visit the important places? A search engine crawler needs a guidebook for your website in just the same way, and that guidebook is the robots.txt file, which tells the crawler which pages to crawl and which to skip.<\/p>\n\n\n\n

What is robots.txt?<\/h2>\n\n\n\n

Robots.txt is a file that the site owner creates to tell search engine spiders which URLs to crawl and which not to crawl. It is also known as the robots exclusion standard. The file is generally placed in the root directory of the website.<\/p>\n\n\n\n

You can simply visit https:\/\/yourdomainname.com\/robots.txt <\/strong>to view a site\u2019s robots.txt file.<\/p>\n\n\n\n

All major search engines follow this robots exclusion standard.<\/p>\n\n\n\n

Why do we use robots.txt file?<\/h2>\n\n\n\n

Most websites on the internet need a robots.txt file. If you don\u2019t have one, spider bots will simply crawl the website and index all of its pages.<\/p>\n\n\n\n

So if the crawler will crawl the site anyway, why do we need a robots.txt file, and when should you use one?<\/p>\n\n\n\n

You will need a robots.txt file for one of the following three reasons.<\/p>\n\n\n\n

Keeping some folders or pages private<\/h3>\n\n\n\n

There may be some pages on your site that you do not want indexed, such as the admin login page. You probably don\u2019t want random users landing on the login page, and exposing it can also be a security risk for your website.<\/p>\n\n\n\n

For that, you can use the robots.txt protocol so that the page is not crawled and is less likely to be indexed.<\/p>\n\n\n\n
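
For example, to keep a login area away from crawlers, you could add rules like these (the paths below are hypothetical WordPress-style paths; use your site\u2019s actual login path):<\/p>\n\n\n\n

user-agent: *\u00a0
disallow: \/wp-login.php\u00a0
disallow: \/wp-admin\/<\/p>\n\n\n\n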

More pages need to be indexed as quickly as possible<\/h3>\n\n\n\n

If you have a big site, such as an ecommerce website<\/a> with thousands of pages, and you want all of those pages indexed as quickly as possible, then you need to optimize your crawl budget. The best way to do that is to instruct the crawler not to crawl unimportant pages, media files, videos, and so on. <\/p>\n\n\n\n

If you are a WordPress user, you may find that some pages, such as tag archives, are created automatically, which can cause duplicate content issues. In that case too, you can use the robots.txt file to keep crawlers away from these pages.<\/p>\n\n\n\n
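
For example, WordPress tag archives usually live under the \/tag\/ path, so a rule like this (adjust the path to your own permalink structure) keeps crawlers away from them:<\/p>\n\n\n\n

user-agent: *\u00a0
disallow: \/tag\/<\/p>\n\n\n\n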

Decreasing the possibility of server overload<\/h3>\n\n\n\n

If crawlers and visitors keep hitting unimportant pages while your site already gets high traffic, the server can become overloaded, which might crash your site and hence hurt the user experience.<\/p>\n\n\n\n

Cons of robots.txt files<\/h2>\n\n\n\n

Some pages may still get indexed<\/h3>\n\n\n\n

The crawler may still index some pages that your robots.txt file tells it not to crawl. This happens when your URL is also linked from other websites.<\/p>\n\n\n\n

Different crawlers may interpret the rules differently<\/h3>\n\n\n\n

Although the robots.txt file provides the same instructions to every web crawler, some search engine spiders may interpret those instructions differently.<\/p>\n\n\n\n

You can catch instructions that a crawler has misinterpreted by checking the page indexing data in Google Search Console.<\/p>\n\n\n\n

Some symbols to know before creating a robots.txt file<\/h2>\n\n\n\n

user-agent<\/strong>: Specifies which search engine crawler the rules that follow apply to.<\/p>\n\n\n\n

\u2018*\u2019 asterisk<\/strong>: A universal sign, meaning all search engine crawlers.<\/p>\n\n\n\n

disallow:<\/strong> Instructs search engine crawlers not to crawl the given path.<\/p>\n\n\n\n

allow:<\/strong> Instructs search engine crawlers to crawl a path. It is used to override a disallow rule, for example to allow a sub-directory inside a disallowed directory.<\/p>\n\n\n\n

sitemap<\/strong>: Points to your sitemap, which contains all the URLs that you want indexed. Crawlers follow the sitemap for indexing.<\/p>\n\n\n\n
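
Putting these symbols together, a minimal robots.txt might look like the sketch below (the paths and the sitemap URL are placeholders, not rules to copy as-is):<\/p>\n\n\n\n

user-agent: *\u00a0
disallow: \/private\/\u00a0
allow: \/private\/public-page.html\u00a0
sitemap: https:\/\/yourdomainname.com\/sitemap.xml<\/p>\n\n\n\n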

How to create robots.txt?<\/h2>\n\n\n\n

Before creating a robots.txt file, first check whether your website already has one.<\/p>\n\n\n\n

You can create the file either with an online tool or manually. <\/p>\n\n\n\n

Many online tools can help you create a robots.txt file, but they only generate simple instructions and give you little freedom to customize the rules as you want.<\/p>\n\n\n\n

How to create a robots.txt file manually?<\/h3>\n\n\n\n

The best way is to create it manually. For that, you only need a text editor such as Notepad.<\/p>\n\n\n\n

So, log in to cPanel and open the File Manager.<\/p>\n\n\n\n

\"open<\/figure>\n\n\n\n

Open the public_html folder and look for the robots.txt file. If the file exists, edit it; if not, create a new one.<\/p>\n\n\n\n

\"open<\/figure>\n\n\n\n

Type the instructions that you want the crawler to follow, then save the changes.<\/p>\n\n\n\n

\"in<\/figure>\n\n\n\n

How to add robots.txt in WordPress<\/a>?<\/h3>\n\n\n\n

If you are a WordPress user, you can create and add the robots.txt file using an SEO plugin such as All in One SEO, Yoast SEO, or Rank Math Pro.<\/p>\n\n\n\n

In this blog, we will show you how to do it using Rank Math Pro.<\/p>\n\n\n\n

First of all, log in to the WordPress dashboard, select the Rank Math icon in the menu, and then select the General Settings option.<\/p>\n\n\n\n

\"open<\/figure>\n\n\n\n

Open the Edit robots.txt option, place your instructions there, and save the changes. That\u2019s done.<\/p>\n\n\n\n

\"select<\/figure>\n\n\n\n

How to check robots.txt?<\/h2>\n\n\n\n

You can simply check the robots.txt file in Google Search Console. I am also providing the link to the robots.txt checker<\/a>. If you don\u2019t have a Google Search Console account, create one first and then run the check.<\/p>\n\n\n\n

\"robots.txt<\/figure>\n\n\n\n

If it shows no errors, then it\u2019s good to go.<\/p>\n\n\n\n

Which one is better: robots.txt or meta directives?<\/h2>\n\n\n\n

It depends on the size of the website. If you have a small website, use meta directives instead of robots.txt. But if you have a bigger website, go for robots.txt.<\/p>\n\n\n
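
For reference, a robots meta directive is a tag placed in the head section of an individual page; for example, a noindex rule looks like this:<\/p>\n\n\n

&lt;meta name=&quot;robots&quot; content=&quot;noindex&quot;&gt;<\/p>\n\n\n

Unlike robots.txt, this works page by page, which is why it suits smaller websites.<\/p>\n\n\n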

\n
\n
\n

What happens if your robots.txt file has some errors?<\/h3>\n
\n\n

These mistakes are generally made by beginners. If the file contains an error, it can mean that a page you want indexed will not be crawled. Sometimes a small error in the robots exclusion protocol can even prevent the whole website from being indexed.<\/p>\n\n<\/div>\n<\/div>\n

\n

Which mistake in the instructions can block the whole website?<\/h3>\n
\n\"\"\n

If the robots.txt file gives an instruction like this:<\/p>\n

user-agent: *\u00a0
disallow: \/<\/p>\n

This discourages search engines from crawling and indexing the entire site.<\/p>\n

For WordPress users, make sure the \u201cDiscourage search engines from indexing this site\u201d option (under Settings &gt; Reading) stays unchecked, so that spiders can index the site.<\/p>\n\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n\n\n

If you need more information about the robots.txt file, make sure to check the Google Search Central<\/a> blog.<\/p>\n","protected":false},"excerpt":{"rendered":"

Have you ever gone out for a tour, and found yourself in a situation where you need a guide to visit important places. Then, the search engine crawler also needs one guide book to crawl your website and here it comes the robots.txt file which guides the search engine crawler which page to crawl and …<\/p>\n

3 reason why should we use robots.txt in seo?<\/span> Read More »<\/a><\/p>\n","protected":false},"author":1,"featured_media":178,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"default","ast-global-header-display":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","footnotes":""},"categories":[3],"tags":[],"class_list":["post-104","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-seo-basic"],"_links":{"self":[{"href":"https:\/\/rinoblast.com\/wp-json\/wp\/v2\/posts\/104","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rinoblast.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rinoblast.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rinoblast.com\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/rinoblast.com\/wp-json\/wp\/v2\/comments?post=104"}],"version-history":[{"count":5,"href":"https:\/\/rinoblast.com\/wp-json\/wp\/v2\/posts\/104\/revisions"}],"predecessor-version":[{"id":167,"href":"https:\/\/rinoblast.com\/wp-json\/wp\/v2\/posts\/104\/revisions\/167"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rinoblast.com\/wp-json\/wp\/v2\/media\/178"}],"wp:attachment":[{"href":"https:\/\/rinoblast.com\/wp-json\/wp\/v2\/media?parent=104"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rinoblast.com\/wp-json\/wp\/v2\/categories?post=104"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rinoblast.com\/wp-json\/wp\/v2\/tags?post=104"}],"curies
":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}