Google Clarifies Googlebot Doesn’t Always Crawl Entire Webpages

In an update to the help documentation for Googlebot, the search engine's crawling tool, Google explained that it only crawls the first 15 MB of a webpage's HTML file. Anything after that initial 15 MB is not considered for indexing and will not influence your webpage's rankings.

As the Googlebot help document states:

“After the first 15 MB of the file, Googlebot stops crawling and only considers the first 15 MB of the file for indexing.

The file size limit is applied on the uncompressed data.”

Though this may initially raise concerns, since images and videos can easily exceed this size, the help document makes clear that media and other externally referenced resources do not count toward this Googlebot limit:

“Any resources referenced in the HTML such as images, videos, CSS, and JavaScript are fetched separately.”
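
For site owners who want a quick way to see where a page stands against this limit, here is a minimal Python sketch. It is illustrative only: the 15 MB figure comes from Google's documentation, but interpreting it as 15 × 1024 × 1024 bytes and the example URL are assumptions.

  import gzip
  import urllib.request

  # Googlebot's documented limit; treating 15 MB as 15 * 1024 * 1024 bytes is an assumption.
  GOOGLEBOT_LIMIT_BYTES = 15 * 1024 * 1024

  def uncompressed_html_size(url: str) -> int:
      """Fetch a page and return the size of its HTML after any gzip decompression."""
      request = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
      with urllib.request.urlopen(request) as response:
          body = response.read()
          # The limit applies to the uncompressed payload, so decompress before measuring.
          if response.headers.get("Content-Encoding") == "gzip":
              body = gzip.decompress(body)
      return len(body)

  size = uncompressed_html_size("https://example.com/")  # hypothetical URL
  status = "within" if size <= GOOGLEBOT_LIMIT_BYTES else "over"
  print(f"HTML size: {size / (1024 * 1024):.2f} MB ({status} the 15 MB limit)")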

What This Means For Your Website

If you’ve been following common best practices for web design and content management, your website should be largely unaffected. Specifically, the best practices to follow include:

  • Keeping the most relevant SEO-related information relatively close to the start of any HTML file. 
  • Compressing images (a brief example follows this list).
  • Referencing images and videos as separate files rather than encoding them directly into the HTML (for example, as base64 data URIs) whenever possible.
  • Keeping HTML files small – typically less than 100 KB.
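
On the image-compression point, here is a minimal sketch using the Pillow library; the library choice and the file names are assumptions for illustration, since the article does not recommend any specific tool.

  from PIL import Image  # Pillow; assumed here, not named in the article

  def compress_image(src_path: str, dest_path: str, quality: int = 80) -> None:
      """Re-save an image as an optimized JPEG to reduce the bytes a browser must download."""
      with Image.open(src_path) as img:
          # Convert to RGB so PNGs with alpha channels can be written as JPEG.
          img.convert("RGB").save(dest_path, format="JPEG", optimize=True, quality=quality)

  compress_image("hero-banner.png", "hero-banner.jpg")  # hypothetical file names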