JavaScript is one of the core technologies of the web. It adds dynamic behaviour to websites, making them more interactive and user-friendly. We know how Google processes HTML, but how does it process JavaScript?
In this article, we’re going to look at how Google processes JavaScript, as well as some tips and best practices for improving JavaScript websites and web apps for Google Search.
But before we jump in, let’s start with a few definitions of terms you may come across in this article.
What is JavaScript?
JavaScript is a programming language that is used to create dynamic and interactive websites and web applications. It is one of the most popular programming languages, used by more than 95% of websites.
What is crawling?
Crawling, or web crawling, is the term used to describe the process by which a website spider systematically discovers content/data on a website. Web crawlers do this by following links on a website from one page to the next.
What is rendering?
In a nutshell, rendering is the process whereby web crawlers retrieve your pages, run your code and assess the content to understand the structure and layout of a page/site. This is generally done through a browser where the HTML, CSS and JavaScript are ‘run’ to see what the page would look like to a user.
What is indexing?
Indexing is the process whereby search engine spiders store and categorise information and content that is found on websites, to display in SERPs (Search Engine Results Pages).
Now that we’ve gotten that out of the way, let’s get to the part where we demystify how Google processes JavaScript.
Google & JavaScript Processing
Google processes JavaScript in three phases: crawling, rendering and indexing.
When Googlebot fetches a page from the crawl queue, it first reads the robots.txt file to make sure that you allow crawling. If a URL is marked as disallowed, Googlebot will skip making an HTTP request to that URL.
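For example, a robots.txt rule like the following (the paths shown are hypothetical) would cause Googlebot to skip requesting any URL under that directory:

```
User-agent: Googlebot
Disallow: /private/
```

Be careful not to disallow JavaScript or CSS files this way, as Googlebot needs them to render your pages.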
Googlebot will then parse the response for other URLs in the href attribute of HTML links and add these URLs to the crawl queue.
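As a rough sketch of that link-discovery step, the function below pulls href values out of an HTML string with a regular expression. This is illustrative only: Googlebot uses a full HTML parser, and the function name and sample markup are our own.

```javascript
// Minimal sketch of link extraction from fetched HTML.
// Real crawlers use a proper HTML parser, not a regex.
function extractLinks(html) {
  const links = [];
  const re = /<a\b[^>]*\bhref="([^"]+)"/g;
  let match;
  while ((match = re.exec(html)) !== null) {
    links.push(match[1]); // the captured href value
  }
  return links;
}

const html = '<a href="/about">About</a> <a href="/contact">Contact</a>';
console.log(extractLinks(html)); // -> [ '/about', '/contact' ]
```

Note that only links with a real href attribute are discoverable this way; links generated purely through click handlers would be missed.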
Once a webpage has been crawled and processed, it then sits in the render queue. Once Google’s resources allow, a headless Chromium instance renders the page and executes the JavaScript. Googlebot then parses the rendered HTML for links and queues any URLs it finds for crawling.
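The render phase matters because, for client-side rendered pages, the initial HTML response can be an empty shell. A simplified, hypothetical example:

```html
<!-- What the crawl phase sees: an empty container -->
<div id="app"></div>
<script>
  // This content only exists after the render phase runs the script
  document.getElementById('app').textContent = 'Hello from JavaScript!';
</script>
```

Content injected like this is only visible to Google after rendering, which is why pages stuck in the render queue can take longer to be indexed fully.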
Now that we know how Google processes JavaScript, here are some JavaScript SEO tips to help improve your JS website or web app in Search.
Top JavaScript SEO tips
Here are some JavaScript SEO tips to help you maximise your visibility in Search.
- Use unique titles and meta descriptions
- Write compatible code
- Carefully use meta robots tags
- Use long-lived caching
- Use meaningful HTTP status codes
- Properly inject the rel="canonical" tag
- Use structured data
- Design for accessibility
- Fix images and lazy-load content
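To illustrate the structured data tip, here is a minimal JSON-LD block of the kind Google can read from a page. The values shown (headline, author) are placeholders, not a prescribed schema for your site:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How Google Processes JavaScript",
  "author": { "@type": "Person", "name": "Example Author" }
}
</script>
```

Because Google processes structured data in the rendered HTML, it is fine to inject a block like this with JavaScript, as long as it is present once rendering completes.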