Video tutorial showing how to create robots.txt file on a web server. Learn how to use Robots Exclusion Protocol directives to control user-agent web crawlers like Googlebot crawling.
IMPORTANT: many website owners believe that robots.txt directives will stop Google from indexing certain URLs. This is NOT true. The only real use cases for robots.txt rules are:
1. to prevent certain parts of a website from being crawled by Googlebot
2. to reduce server load when Googlebot is requesting too many URLs and causing performance issues.
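As an illustration of these two use cases, a minimal robots.txt (served at the site root, e.g. /robots.txt) might look like the sketch below; the /private/ path is a hypothetical example, not from the video:

```
# Use case 1: stop Googlebot crawling a hypothetical /private/ section
User-agent: Googlebot
Disallow: /private/

# Use case 2: ask supporting crawlers to slow down (Googlebot ignores
# Crawl-delay; use Search Console crawl-rate settings for Google instead)
User-agent: *
Crawl-delay: 10
Disallow:
```

Remember: a URL disallowed here can still appear in Google's index if other pages link to it.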
In short, to stop Google from indexing certain URLs on a website, you need to use other methods such as the noindex directive.
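A quick sketch of the noindex approach mentioned above: place a robots meta tag in the page's HTML head (or send an equivalent X-Robots-Tag HTTP header). The page must remain crawlable, because if robots.txt blocks the URL, Googlebot never fetches the page and never sees the noindex:

```
<!-- In the <head> of the page you want kept out of Google's index -->
<meta name="robots" content="noindex">
```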
Introduction to robots.txt
https://developers.google.com/search/...
More robots.txt file examples and how to's can be found here:
https://www.rankya.com/seo/robots-txt...
RankYa is a passionate digital marketer, website optimizer, content creator, and fully qualified web developer helping businesses of all sizes achieve better results online.
We love sharing our passion through freely available how to videos and courses related to Google (Search Console, Ads, Analytics, YouTube), SEO, HTML5, Structured Data and WordPress on RankYa YouTube channel.
Rest assured that these proven insights will serve you well, too.
Securely purchase private SEO courses online (instant access), with insights that come with guaranteed results for your business website:
https://www.rankya.com/courses/