Tag Archive for: SEO

Google is rolling out a new addition to its “About this result” feature in search results which will explain why the search engine chose a specific result to rank.

The new section, called “Your search & this result,” explains the specific factors that made Google believe a particular page may have what you’re looking for.

This can include a number of SEO factors, ranging from the keywords that matched the page (including related but not directly matching terms) to backlink details, related images, location-based information, and more.

How Businesses Can Use This Information

For users, this feature can help them understand why they are seeing specific search results and even provide tips for refining their search to get better results.

The unspoken utility of this tool for businesses is glaringly obvious, however. 

This feature essentially provides an SEO report card, showing exactly where you are doing well on ranking for important keywords. By noting what is not included, you can also get an idea of what areas could be improved to help you rank better in the future.

Taking this even further, you could explore the details for other pages ranking for your primary keywords, helping you better strategize to overtake your competition.

What It Looks Like

Below, you can see a screenshot of what the feature looks like in action:

The information box provides a quick bullet-point list of several factors that led the search engine to return the specific result.

While Google has only detailed a few of the possibilities, users around the web have reported seeing all of the following factors included:

  • Included search terms: Google can show which exact search terms were matched with the content or HTML of the page, including elements that are not normally visible to users, such as the title tag or metadata (see the sketch after this list).
  • Related search terms: Along with the keywords which were directly matched with the related page, Google can also show “related” terms. For example, Google knew to include results related to the Covid vaccine based on the keyword “shot”.
  • Other websites link to this page: The search engine may highlight a page that might otherwise appear unrelated because several pages using the search terms link to it.
  • Related images: If the images are properly optimized, Google may be able to identify when images on a page are related to your search.
  • This result is [Language]: Obviously, users who don’t speak or read your language are unlikely to have much use for your website or content. This essentially notes that the page is in the same language you use across the rest of Google.
  • This result is relevant for searches in [Region]: Lastly, the search engine may note if locality helped influence its search result based on other contextual details. For example, it understood that a user in Vermont searching “get the shot” was likely looking for nearby results.
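
As a rough illustration of where those matched terms can live, here is a minimal HTML sketch. The title, meta description, file name, and alt text are hypothetical examples loosely based on the vaccine search described above, not taken from any real result:

    <head>
      <!-- Matched search terms often appear in elements users never see on the page itself -->
      <title>Where to Get a COVID-19 Vaccine Shot in Vermont</title>
      <meta name="description"
            content="Find nearby clinics offering COVID-19 vaccine shots in Vermont.">
    </head>

    <!-- Descriptive alt text is one way Google can tell an image relates to a search -->
    <img src="clinic-sign.jpg"
         alt="Sign outside a Vermont clinic offering COVID-19 vaccine shots">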

The expanded “About this result” section is rolling out to English-language U.S. users already and is expected to be widely available across the country within a week. From there, Google says it will work to bring the feature to more countries and languages soon.

One of the most frustrating aspects of search engine optimization is the time it takes to see results. In some cases, you can see changes start to appear in Google’s search results in just a few hours. In others, you can spend weeks waiting for new content to be indexed with no indication of when Google will get around to your pages.

In a recent AskGooglebot session, Google’s John Mueller said this huge variation in the time it takes for pages to be indexed is to be expected for a number of reasons. However, he also provided some tips for speeding up the process so you can start seeing the fruits of your labor as soon as possible.

Why Indexing Can Take So Long

In most cases, Mueller says sites that produce consistently high quality content should expect to see their new pages get indexed within a few hours to a week. In some situations, though, even high quality pages can take longer to be indexed due to a variety of factors.

Technical issues can pop up which can delay Google’s ability to spot your new pages or prevent indexing entirely. Additionally, there is always the chance that Google’s systems are just tied up elsewhere and need time to get to your new content.

Why Google May Not Index Your Page

It is important to note that Google does not index everything. In fact, there are plenty of reasons the search engine might not index your new content.

For starters, you can just tell Google not to index a page or your entire site. It might be that you want to prioritize another version of your site or that your site isn’t ready yet. 
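
For reference, the standard way to tell Google not to index a specific page is a robots meta tag in that page’s head. A minimal, hypothetical sketch:

    <head>
      <!-- Asks search engines not to add this page to their index -->
      <meta name="robots" content="noindex">
    </head>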

The search engine also excludes content that doesn’t bring sufficient value. This includes duplicate content, malicious or spammy pages, and websites which mirror other existing sites.

How To Speed Up Indexing

Thankfully, Mueller says there are ways to help speed up indexing your content.

  • Prevent server overloading by ensuring your server can handle the traffic coming to it. This ensures Google can get to your site in a timely manner. 
  • Use prominent internal links to help Google’s systems navigate your site and understand which pages are most important (see the sketch after this list).
  • Avoid unnecessary URLs to keep your site well organized and make it easy for Google to spot new content.
  • Publish consistently high-quality content that provides real value for users. The more important Google thinks your site is to people online, the higher priority your new pages will be for indexing and ranking.
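
As a rough sketch of the internal-linking tip above, prominent links with descriptive anchor text, such as those in your main navigation or on a well-linked hub page, help Google see which pages matter most. All URLs and labels below are hypothetical:

    <!-- Hypothetical main navigation: prominent, crawlable links to the pages you most want indexed -->
    <nav>
      <a href="/services/">Our Services</a>
      <a href="/blog/">Blog</a>
      <a href="/blog/how-to-speed-up-indexing/">How to Speed Up Indexing</a>
    </nav>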

For more about how Google indexes web pages and how to speed up the process, check out the full AskGooglebot video below:

We all know that the search results you get on mobile and the ones you get on desktop devices can be very different – even for the same query, made at the same time, in the same place, logged into the same Google account. 

Have you ever found yourself asking exactly why this happens?

One site owner did and recently got the chance to ask one of Google’s Senior Webmaster Trends Analysts, John Mueller.

In the recent SEO Office Hours session, Mueller explained that a wide range of factors, including the device you are using, determine which search results are returned for a query – and why this happens.

Why Are Mobile Search Rankings Different From Desktop?

The question put to Mueller specifically sought to clarify why there is still a disparity between mobile and desktop search results after the launch of mobile-first indexing for all sites. Here’s what was asked:

“How are desktop and mobile ranking different when we’ve already switched to mobile-first indexing?”

Indexing and Ranking Are Different

In response to the question, Mueller first tried to clarify that indexing and rankings are not exactly the same thing. Instead, they are more like two parts of a larger system. 

“So, mobile-first indexing is specifically about that technical aspect of indexing the content. And we use a mobile Googlebot to index the content. But once the content is indexed, the ranking side is still (kind of) completely separate.”

Although the mobile-first index was a significant shift in how Google brought sites into their search engine and understood them, it actually had little direct effect on most search results. 

Mobile Users and Desktop Users Have Different Needs

Beyond the explanation about indexing vs. ranking, John Mueller also said that Google returns different rankings for mobile and desktop searches because they reflect potentially different needs in the moment.

“It’s normal that desktop and mobile rankings are different. Sometimes that’s with regards to things like speed. Sometimes that’s with regards to things like mobile-friendliness.

“Sometimes that’s also with regards to the different elements that are shown in the search results page.

“For example, if you’re searching on your phone then maybe you want more local information because you’re on the go. Whereas if you’re searching on a desktop maybe you want more images or more videos shown in the search results. So we tend to show …a different mix of different search results types.

“And because of that it can happen that the ranking or the visibility of individual pages differs between mobile and desktop. And that’s essentially normal. That’s a part of how we do ranking.

“It’s not something where I would say it would be tied to the technical aspect of indexing the content.”

With this in mind, there’s little need to be concerned if you aren’t showing up in the same spot for the same exact searches on different devices.

Instead, watch for big shifts in which devices people are using to access your site. If your users are overwhelmingly on phones, it may be a sign you should assess how your site serves desktop users. Likewise, a majority of traffic coming from desktop devices may indicate you need to assess your site’s speed and mobile-friendliness.

If you want to hear Mueller’s full explanation and even more discussion about search engine optimization, check out the SEO Office Hours video below:

Despite the differences in how the pages are used, created, and generally thought about, Google’s John Mueller says the search engine sees no difference between “blog posts” and “web pages.”

In a recent SEO hangout, site owner Navin Adhikari asked Mueller why the blog section of his site wasn’t getting the same amount of traffic as the rest of his site. This, combined with the way Google emphasizes content within its guidelines, made Adhikari suspect that the search engine may be ranking blog content differently. That would explain why the rest of his site was performing consistently well while the blog was underperforming.

However, Mueller says this isn’t the case. In fact, he explained that the distinction between blog content and other areas of a site is not something the search engine can even see, and it is not something the company would heavily factor into results if it could.

Google’s John Mueller Says Google Sees All Pages Similarly

In most cases, Mueller says the distinction between “blog posts” and “web pages” is entirely artificial. It is something provided for convenience by a website’s content management system (CMS) to help creatives generate content without needing coding skills and to keep pages organized.

So, while the blog part of your site may seem entirely separate to you while you are creating posts, from Google’s perspective it is just another subsection of your site.

“I don’t think Googlebot would recognize that there’s a difference. So usually that difference between posts and pages is something that is more within your backend within the CMS that you’re using, within WordPress in that case. And it wouldn’t be something that would be visible to us.

“So we would look at these as if it’s an HTML page and there’s lots of content here and it’s linked within your website in this way, and based on that we would rank this HTML page.

“We would not say oh it’s a blog post, or it’s a page, or it’s an informational article. We would essentially say it’s an HTML page and there’s this content here and it’s interlinked within your website in this specific way.”

Why A Blog May Underperform

If Google wasn’t ranking Adhikari’s blog differently, why would his blog specifically underperform? Mueller has some ideas.

Without access to in-depth data about the site, Mueller speculated that the most likely issue in this case would be how the blog is linked to from other pages on the site.

“I think, I mean, I don’t know your website so it’s hard to say. But what might be happening is that the internal linking of your website is different for the blog section as for the services section or the other parts of your website.

“And if the internal linking is very different then it’s possible that we would not be able to understand that this is an important part of the website.

“It’s not tied to the URLs, it’s not tied to the type of page. It’s really like we don’t understand how important this part of the website is.”

One way to address this is to feature a feed of links to new content on your site’s homepage. This helps quickly establish that your blog content is an important part of your site.
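
A minimal sketch of that kind of homepage feed, assuming a hypothetical blog structure and post titles, might look like this:

    <!-- Hypothetical "latest posts" feed on the homepage, linking straight to new blog content -->
    <section>
      <h2>Latest From Our Blog</h2>
      <ul>
        <li><a href="/blog/page-experience-update/">What the Page Experience Update Means for Your Site</a></li>
        <li><a href="/blog/zero-click-searches/">Zero-Click Searches Are Rising: How to Respond</a></li>
      </ul>
    </section>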

To hear Mueller’s full response and more discussion of the best search engine optimization practices for Google, check out the full SEO Office Hours video below:

Have you gotten your brand’s website ready for the upcoming Google Page Experience ranking signal update? 

If not, Google Developer Martin Splitt says there’s no need to panic. 

In an interview on the Search Engine Journal Show on YouTube, host Loren Baker asks Splitt what advice he would give to anyone worried their site isn’t prepared for the update set to launch in mid-June. 

Giving a rare peek at the expected impact of the impending update, Splitt reveals the Page Experience signal update isn’t going to be a massive game-changer. Instead, it is more of a “tiebreaker.”

As a “lightweight ranking signal”, just optimizing your site’s Page Experience metrics isn’t going to launch you from the back of the pack to the front. If you are competing with a site with exactly the same performance in every other area, however, this will give you the leg up to receive the better position in the search results. 

Don’t Ignore The Update

While the Page Experience update isn’t set to radically change up the search results, Splitt says brands and site owners should still work to optimize their site with the new signals in mind. 

Ultimately, making your page faster, more accessible on a variety of devices, and easier to use is always a worthwhile effort – even if it’s not a major ranking signal. 

As Splitt says:

“First things first, don’t panic. Don’t completely freak out, because as I said it’s a tiebreaker. For some it will be quite substantial, for some it will not be very substantial, so you don’t know which bucket you’ll be in because that depends a lot on context and industry and niche. So I wouldn’t worry too much about it.

“I think generally making your website faster for users should be an important goal, and it should not just be like completely ignored. Which is the situation in many companies today that they’re just like ‘yeah, whatever.’”

As for how he thinks brands should approach the update, Splitt recommended focusing on new projects and content rather than prioritizing revamping your entire site upfront. 

“… For new projects, definitely advise them to look into Core Web Vitals from the get-go. For projects that are already in maintenance mode, or are already actively being deployed, I would look into making some sort of plan for the mid-term future — like the next six months, eight months, twelve months — to actually work on the Core Web Vitals and to improve performance. Not just from an SEO perspective, but also literally for your users.”

Much of the discussion focuses on the perspective of SEO professionals, but it includes several bits of relevant information for anyone who owns or manages a website for their business. 

To hear the full conversation, check out the video below from Search Engine Journal:

For many small-to-medium businesses, appearing in search results around their local area is significantly more important than popping up in the results for someone halfway across the country. 

This raises the question, though: how many of the countless searches made every day are actually locally based?

We now have the answer to that question thanks to a new tool released by LocalSEOGuide.com and Traject Data.

What Percent Of Searches Are Local?

Working together, the companies analyzed over 60 million U.S. search queries and found that over a third (approx. 36%) of all queries returned Google’s local pack – indicating the search was location-based. 

Perhaps the biggest surprise from the data is that locally-based searches remained largely consistent throughout the year. Following an uptick in early 2020 (likely driven by the coronavirus pandemic), the rate stayed around 36% over the course of the year. The only notable exception came in September, when the data shows a significant decrease in locally-driven searches.

This data shows just how important it is for even strictly local brands to establish themselves online and optimize for search engines. Otherwise, you might be missing out on a big source of potential business.

Other Features In The Local Pack-O-Meter

Along with data on the appearance of local packs in Google search results, the Local Pack-O-Meter includes information on several other search features. These include:

  • Knowledge Graphs
  • “People Also Ask” Panels
  • Image Boxes
  • Shopping Boxes
  • Ads
  • Related Searches
  • And more

Though the current form of the tool doesn’t include ways to filter the information more selectively, there is plenty to glean when planning which search features you need to prioritize and which can be put on the back burner.

To explore the Local Pack-O-Meter for yourself, click here.

Throughout 2020, approximately 65% of searches made on Google were “zero-click searches”, meaning that the search never resulted in an actual website visit.

Zero-click searches have been steadily on the rise, reaching 50% in June 2019 according to a study published by online marketing expert Rand Fishkin and SimilarWeb.

The steep rise in these types of searches between January and December 2020 is particularly surprising because it was widely believed zero-click searches were largely driven by mobile users looking for quick-answers. Throughout 2020, however, most of us were less mobile than ever due to Covid restrictions, social distancing, and quarantines.

The findings of this latest report don’t entirely disprove this theory, though. Mobile devices still saw the majority of zero-click Google searches. On desktop, less than half (46.5%) were zero-click searches, while more than three-fourths (77.2%) of searches from mobile devices did not result in a website visit.

Study Limitations

Fishkin acknowledges that his reports do come with a small caveat. Each analysis used different data sources and included different searching methods, which may explain some of the variance. Additionally, the newer study – which included data from over 5.1 trillion Google searches – had access to a significantly larger data pool compared to the approximately one billion searches used in the 2019 study.

“Nonetheless, it seems probable that if the previous panel were still available, it would show a similar trend of increasing click cannibalization by Google,” Fishkin said in his analysis.

What This Means For Businesses

The most obvious takeaway from these findings is that people are increasingly finding the information they are looking for directly on the search results pages, rather than needing to visit a web page for more in-depth information.

It also means that attempts to regulate Google are largely failing.

Many have criticized and even pursued legal action (with varying levels of success) against the search engine for abusing its access to information on websites by showing that information in “knowledge panels” on search results pages.

The argument is that Google is stealing copyrighted information and republishing it on its own site. Additionally, this practice can give searchers less reason to click through to the sites that produced the information, meaning Google contributes to falling click-through rates while still making money off the content.

Ultimately, Google is showing no signs of slowing down on its use of knowledge panels and direct answers within search results. To adjust to the rise of zero-click searches, brands should put more energy into optimizing their content to appear in knowledge panels (increasing brand awareness) and diversifying their web presence with social media activity to reach customers directly.

In a Google Search Central SEO session recently, Google’s John Mueller shed light on a way the search engine’s systems can go astray – keeping pages on your site from being indexed and appearing in search. 

Essentially, the issue comes from Google’s predictive approach to identifying duplicate content, which can incorrectly flag unique pages as duplicates based on their URL patterns alone.

Google uses this predictive system to make its crawling and indexing more efficient by skipping over content that is just a copy of another page. By leaving these pages out of the index, Google is less likely to show repetitious content in its search results, and its indexing systems can reach other, more unique content more quickly.

Obviously the problem is that content creators could unintentionally trigger these predictive systems when publishing unique content on similar topics, leaving quality content out of the search engine. 

John Mueller Explains How Google Could Misidentify Duplicate Content

In a response to a question from a user whose pages were not being indexed correctly, Mueller explained that Google uses multiple layers of filters to weed out duplicate content:

“What tends to happen on our side is we have multiple levels of trying to understand when there is duplicate content on a site. And one is when we look at the page’s content directly and we kind of see, well, this page has this content, this page has different content, we should treat them as separate pages.

The other thing is kind of a broader predictive approach that we have where we look at the URL structure of a website where we see, well, in the past, when we’ve looked at URLs that look like this, we’ve seen they have the same content as URLs like this. And then we’ll essentially learn that pattern and say, URLs that look like this are the same as URLs that look like this.”

He also explained how these systems can sometimes go too far and Google could incorrectly filter out unique content based on URL patterns on a site:

“Even without looking at the individual URLs we can sometimes say, well, we’ll save ourselves some crawling and indexing and just focus on these assumed or very likely duplication cases. And I have seen that happen with things like cities.

I have seen that happen with things like, I don’t know, automobiles is another one where we saw that happen, where essentially our systems recognize that what you specify as a city name is something that is not so relevant for the actual URLs. And usually we learn that kind of pattern when a site provides a lot of the same content with alternate names.”

How Can You Protect Your Site From This?

While Google’s John Mueller wasn’t able to provide a foolproof solution or prevention for this issue, he did offer some advice for sites that have been affected:

“So what I would try to do in a case like this is to see if you have this kind of situations where you have strong overlaps of content and to try to find ways to limit that as much as possible.

And that could be by using something like a rel canonical on the page and saying, well, this small city that is right outside the big city, I’ll set the canonical to the big city because it shows exactly the same content.

So that really every URL that we crawl on your website and index, we can see, well, this URL and its content are unique and it’s important for us to keep all of these URLs indexed.

Or we see clear information that this URL you know is supposed to be the same as this other one, you have maybe set up a redirect or you have a rel canonical set up there, and we can just focus on those main URLs and still understand that the city aspect there is critical for your individual pages.”
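
For reference, the rel canonical Mueller mentions is a single link element in the head of the near-duplicate page, pointing at the URL you want Google to treat as the main version. A minimal sketch with hypothetical city pages:

    <!-- Placed on the small-city page to point Google at the big-city page as the preferred version -->
    <head>
      <link rel="canonical" href="https://www.example.com/locations/big-city/">
    </head>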

It should be clarified that duplicate content or pages impacted by this problem will not hurt the overall SEO of your site. So, for example, having several pages tagged as being duplicate content won’t prevent your home page from appearing for relevant searches. 

Still, the issue has the potential to gradually decrease the efficiency of your SEO efforts, not to mention making it harder for people to find the valuable information you are providing. 

To see Mueller’s full explanation, watch the video below:

Blog comments are a tricky issue for many business websites. 

On one hand, everyone dreams of building a community of loyal customers who follow every post and regularly have a healthy discussion in the comments. Not only can those discussions be helpful for other potential customers, but comments tend to help Google rankings and can inspire future content for your site.

On the other hand, most business-based websites receive significantly more spam than genuine comments. Even the best anti-spam measures can’t prevent every sketchy link or comment on every post. For the most part, these are more of an annoyance than an actual problem. However, if left completely unmonitored, spam could build up and potentially hurt your rankings.

This can make it tempting to just remove comments from your blog entirely. If you do, you don’t have to worry about monitoring comments, responding to trolls, or weeding out spam. After all, your most loyal fans can still talk about your posts on your Facebook page, right?

Unfortunately, as Google’s John Mueller recently explained, removing comments from your blog is likely to hurt more than it helps. 

John Mueller Addresses Removing Blog Comments

In a Google Search Central SEO hangout on February 5, Google’s John Mueller explored a question from a site owner about how Google factors blog comments into search rankings. Specifically, they wanted to remove comments from their site but worried about potentially dropping in the search results if they did. 

While the answer was significantly more complicated, the short version is this:

Google does factor blog comments into where they decide to rank web pages. Because of this, it is unlikely that you could remove comments entirely without affecting your rankings. 

How Blog Comments Impact Search Rankings

Google sees comments as a separate but significant part of your content. So, while Google recognizes that comments may not be directly reflective of your content, they do reflect things like engagement and occasionally provide helpful extra information.

This also means that removing blog comments is essentially removing a chunk of information, keywords, and context from every blog post on your site in the search engine’s eyes. 

However, John Mueller didn’t go so far as to recommend keeping blog comments over removing them. The right choice depends on several factors, including how many comments you’ve received, what type of comments you’ve gotten, and how much they have added to your SEO.

As Mueller answered:

“I think it’s ultimately up to you. From our point of view we do see comments as a part of the content. We do also, in many cases, recognize that this is actually the comment section so we need to treat it slightly differently. But ultimately if people are finding your pages based on the comments there then, if you delete those comments, then obviously we wouldn’t be able to find your pages based on that.

So, that’s something where, depending on the type of comments that you have there, the amount of comments that you have, it can be the case that they provide significant value to your pages, and they can be a source of additional information about your pages, but it’s not always the case.

So, that’s something where I think you need to look at the contents of your pages overall, the queries that are leading to your pages, and think about which of these queries might go away if comments were not on those pages anymore. And based on that you can try to figure out what to do there.

It’s certainly not the case that we completely ignore all of the comments on a site. So just blindly going off and deleting all of your comments in the hope that nothing will change – I don’t think that will happen.”

It is clear that removing blog comments entirely from your site is all but certain to affect your search rankings on some level. Whether this means a huge drop in rankings or potentially a small gain, though, depends entirely on what type of comments your site is actually losing. 

To watch Mueller’s full answer, check out the video below:

When Google releases a major algorithm update, it can take weeks or months to fully understand the effect. Google itself tends to be tight-lipped about the updates, preferring to point website owners and businesses to its general webmaster guidelines for advice on an update. 

Because of all this, we are just starting to grasp what Google’s recent algorithm updates did to search results. One thing that has become quickly apparent, though, is that one of the biggest losers from Google’s 2020 algorithm updates has consistently been online piracy.

This is most clear in a new end-of-year report from TorrentFreak and piracy tracking company MUSO.

How Google’s Algorithm Updates Affected Digital Piracy

Overall, the analysis shows that site traffic to piracy sites from search engines has fallen by nearly a third from December 2019 to November 2020. Notably, the two big periods leading to this loss of traffic line up perfectly with Google’s algorithm updates earlier this year. 

In January 2020, piracy traffic began dwindling shortly after the January 13th core update. 

After experiencing a short uptick at the start of the COVID pandemic in March, the May 4th core update then hit online pirates even harder, sending piracy traffic plummeting. 

Early indications from the public and some analysts suggest the December 2020 core update continued this trend, though it is too early to know for sure. 

Interestingly, TorrentFreak and MUSO say they corroborated the findings of their report with operators of one of the largest torrent websites online:

“To confirm our findings we spoke to the operator of one of the largest torrent sites, who prefers to remain anonymous. Without sharing our findings, he reported a 35% decline in Google traffic over the past year, which is in line with MUSO’s data.”

Is Google Completely Responsible?

It should be noted that while Google’s algorithm updates likely played a large role in the decline of search traffic to piracy sites, other factors almost certainly contributed as well. 

TorrentFreak’s report shows that direct traffic to piracy-related sites experienced a gradual 10% decline over the course of the year. This may suggest overall interest in pirating content may have fallen somewhat on its own. 

Additionally, 2020 was a unique year with less content coming out than usual. The COVID pandemic disrupted pretty much every industry, including creative industries. Music releases were pushed back or cancelled as it became difficult to safely record in studios. The closing of theaters led to the delay of many major movies, and TV creators had to completely rework how they wrote and filmed their shows. 

With less content from major studios and artists, it is highly likely users just had less available content that they were interested in pirating. 

Why This Matters

The good news is that the vast majority of business-related websites have absolutely nothing to do with online piracy and therefore should be safe from these effects of Google’s most recent algorithm updates. 

The less good news is that Google’s core algorithm updates are designed to impact a huge portion of websites around the globe, and certainly had impacts outside the realm of digital piracy. 

Still, we felt it was important to highlight a real-world way a major Google algorithm update can impact an entire industry on a wide scale within search results.

Ultimately, the takeaway for most website owners is that keeping an eye on your analytics is essential.

If you are watching, you can respond to major shifts like this with new strategies and optimization, and even ask Google to recrawl your site. If you aren’t monitoring your analytics, however, you could lose a huge chunk of your traffic from potential customers with no idea why.