But before we jump in, let’s start with a few definitions of terms you may come across in this article.
What is crawling?
Crawling, or web crawling, describes the process by which a search engine spider systematically discovers content on a website. Web crawlers do this by following links on a website from one page to the next.
What is rendering?
Rendering is the process whereby a browser (or a search engine's renderer) executes a page's HTML, CSS and JavaScript to produce the final page that users see. For search engines, rendering is what makes content generated by JavaScript visible and indexable.
What is indexing?
Indexing is the process whereby search engine spiders store and categorise the information and content found on websites, so that it can be displayed in SERPs (Search Engine Results Pages).
When Googlebot then fetches a page from the crawl queue, it reads the robots.txt file to make sure that you allow crawling. If a URL is marked as disallowed, Googlebot will skip making an HTTP request to that URL.
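For illustration, a minimal robots.txt that disallows one path while allowing everything else (the paths here are hypothetical):

```
User-agent: Googlebot
Disallow: /private/
Allow: /
```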
Googlebot will then parse the response for other URLs in the href attribute of HTML links and add these URLs to the crawl queue.
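Only URLs exposed in an href attribute are reliably discovered this way. A short sketch contrasting a crawlable link with a JavaScript-only one (the URL is a placeholder):

```html
<!-- Crawlable: the URL is available in the href attribute -->
<a href="/products/blue-widget">Blue widget</a>

<!-- Not reliably crawlable: the URL only exists inside JavaScript -->
<span onclick="location.href='/products/blue-widget'">Blue widget</span>
```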
- Use unique titles and meta descriptions (example below)
- Write compatible code (feature-detection sketch below)
- Carefully use meta robots tags (example below)
- Use long-lived caching (fingerprinting example below)
- Use meaningful HTTP status codes (redirect and 404 sketch below)
- Properly inject the rel="canonical" tag (injection sketch below)
- Use structured data (JSON-LD example below)
- Design for accessibility
- Fix images and lazy-load content (lazy-loading example below)
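Unique titles and meta descriptions: every page should describe its own content rather than repeating boilerplate. A minimal sketch for a hypothetical product page:

```html
<head>
  <!-- A title and description specific to this one page -->
  <title>Blue Widgets – Sizes, Prices and Reviews | Example Store</title>
  <meta name="description"
        content="Compare blue widget sizes and prices, and read customer reviews before you buy.">
</head>
```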
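Compatible code: not every browser engine supports every API, so feature-detect before relying on newer features. A hedged sketch, assuming a hypothetical /api/widgets endpoint and renderWidgets function:

```html
<script>
  // Hypothetical renderer, for illustration only
  function renderWidgets(data) { console.log(data); }

  // Feature-detect fetch and fall back to XMLHttpRequest in older engines
  if (window.fetch) {
    fetch('/api/widgets').then(function (r) { return r.json(); }).then(renderWidgets);
  } else {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/widgets');
    xhr.onload = function () { renderWidgets(JSON.parse(xhr.responseText)); };
    xhr.send();
  }
</script>
```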
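Meta robots tags: the tag controls whether a page is indexed and its links followed. Note that if the initial HTML already says noindex, Googlebot may skip rendering the page entirely, so removing the tag later with JavaScript will not help. A minimal example:

```html
<!-- Keep this page out of the index but still follow its links -->
<meta name="robots" content="noindex, follow">
```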
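Long-lived caching: fingerprint file names with a content hash, so that when a file's content changes its URL changes too and a stale cached copy is never served. The hash below is made up:

```html
<!-- The fingerprint changes whenever the file's content changes -->
<script src="/static/main.2bb85551.js"></script>
```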
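Meaningful HTTP status codes: tell Googlebot when a page has moved or no longer exists, instead of serving a "not found" page with status 200 (a so-called soft 404). A hedged sketch using Express, with hypothetical routes:

```js
const express = require('express');
const app = express();

// Permanently moved: a 301 lets Google transfer signals to the new URL
app.get('/old-widgets', (req, res) => res.redirect(301, '/widgets'));

// Genuinely missing pages get a real 404, not a 200
app.use((req, res) => res.status(404).send('Page not found'));

app.listen(3000);
```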
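The rel="canonical" tag: ideally it is served in the static HTML, but it can also be injected with JavaScript. A minimal sketch (the URL is a placeholder):

```html
<script>
  // Create and inject a canonical link element at runtime
  const link = document.createElement('link');
  link.rel = 'canonical';
  link.href = 'https://www.example.com/widgets';
  document.head.appendChild(link);
</script>
```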
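Structured data: usually added as a JSON-LD script block so search engines can understand what the page is about. A sketch describing a hypothetical product:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Blue Widget",
  "description": "A medium-sized blue widget."
}
</script>
```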
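Images and lazy-loading: keep images in proper img tags with alt text so they can be indexed, and prefer native lazy-loading over scroll-event tricks that Googlebot may never trigger. For example:

```html
<!-- Native lazy-loading; the alt text also helps the image get indexed -->
<img src="/images/blue-widget.jpg" alt="A blue widget" loading="lazy" width="400" height="300">
```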