How to Index Pages on Google with a robots.txt File | 2022 TTG Training

If you're wondering how to index pages on Google, you can check your setup with the robots.txt tester in Google Search Console. While this may sound awfully technical, it's actually something anyone can do. In this TTG Search Console tutorial for beginners, we're going to show you how to check your site's robots.txt file, which helps ensure your pages are indexed and searchable so your customers can find you.

Basically, when a search engine finds your website, it looks for your robots.txt file for directives that tell it which of your site's pages to crawl so it can show those pages in the relevant search results. If your robots.txt file isn't providing the right directives, it can hurt your site's visibility.

In this video, we break down all of the nuances of the robots.txt file and how to use it to boost your business in search results and get more customers!

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

🗺️Looking for SEO help but don't know where to start? We offer an SEO for beginners course to get you on the right path to building your organic traffic: https://technologytherapy.com/product...

✋Feeling overwhelmed and need a hand? TTG's website support packages will free up time for you and your team while also developing your site into an effective engine that gets results: https://technologytherapy.com/web-sup...

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

🖨️ Transcript 🖨️

We're going to break down what a robots.txt file is, why it's important to your site's search performance, and how you can check your site's robots.txt file.

So what exactly is a robots.txt file? Great question. A robots.txt file is a text file that lives in your website's root directory. It contains one or more sets of directives that tell search engine robots which pages of a website to crawl and which pages to ignore. Robots.txt files use the “allow” directive to tell search engines like Google, “Yes, we want you to see this website and to index it in search results,” or the “disallow” directive to say, “No, we do not want you to see this page or this website, or to index it in the search results.”
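For example, a minimal robots.txt file (a hypothetical sketch — your own file will reflect your site's structure) might look like this:

User-agent: *
Allow: /
Disallow: /private/

Here, “User-agent: *” means the rules apply to all crawlers, “Allow: /” invites them to crawl everything, and “Disallow: /private/” asks them to skip anything under the /private/ path.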

You may be thinking, wow, that's a lot of work for a simple text file. And you're right. Robots.txt is a powerful tool in shaping your website's visibility. When a search engine robot, or crawler, lands on your website, before it crawls a single page, it first looks for a robots.txt file. Based on the instructions it receives from the directives in the robots.txt file, the crawler will either crawl and index your site's pages and show them in relevant search results, or it will hide them from search engines. This means your robots.txt file can be a boon to your SEO efforts or significantly hinder them.

Of course, disallowing search engine robots from crawling some site pages may be necessary. For example, if your website is built on a content management system like WordPress, you want to disallow crawlers from parsing pages of your website that are viewed from the administrator dashboard, or any URL that contains “wp-admin”. In this case, disallowing user agents from crawling back-end versions of your web pages will keep irrelevant content inside your CMS from impacting your website's search performance.
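As a sketch, a WordPress site's robots.txt often looks something like this (an assumption — your host or SEO plugin may generate different rules):

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

The “Allow” line is a common exception: it lets crawlers reach admin-ajax.php, which some themes and plugins use to load front-end content, while the rest of the admin area stays off-limits.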

So how do you know if your site's robots.txt file is allowing search engine crawlers to see the content you want them to and disallowing the content you don't want indexed? There are two easy ways to quickly check your own robots.txt file.

The first way is the easiest. Simply type your website's URL into the browser's address bar and add /robots.txt to the end of the URL. Hit enter and you should be taken to your site's very own robots.txt file, where you will clearly be able to see how the allow and disallow directives are set up for your site.

If you have access to the Search Console property for your website, Google offers a tool called the robots.txt tester that outputs the contents of the robots.txt file associated with that Search Console property. This is a legacy Google tool and can sometimes be a bit tricky to find, so we will leave a link in the description below. From the tester's dashboard, choose a verified property: select the “Please select a property” drop-down and choose the Search Console property connected to your site. Note that if you only have one Search Console property, you will not see this drop-down, and the tester will select your site's property by default. Once your property is selected, you will automatically be redirected to a window showing you the content of your robots.txt file.

If you find any errors, or if your file is disallowing your website's pages from being crawled by search engines, let your developer or web team know as soon as possible so that you can work with them to ensure that your robots.txt file is working for your website, not against it.
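For instance, if your website lived at https://www.example.com (a placeholder domain), you would visit:

https://www.example.com/robots.txt

Whatever loads at that address is exactly what search engine crawlers see when they visit your site.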
