
It is no secret that Google knows the price you, your competitors, and even shady third-party companies charge for your products or services. In some cases, you might even tell the company directly how much you charge through Google’s Merchant Center. So, it is reasonable to think that the search engine might also use that information when ranking brands or product pages in search results.

In a recent livestream, however, Google Webmaster Trends Analyst, John Mueller, denied the idea.

What John Mueller Has To Say About Price as a Google Ranking Signal

The question arose during an SEO Office-Hours hangout on October 8, which led to Mueller explaining that while Google can access this information, it does not use it when ranking traditional search results.

As he says in the recording of the discussion:

“Purely from a web search point of view, no, it’s not the case that we would try to recognize the price on a page and use that as a ranking factor.

“So it’s not the case that we would say we’ll take the cheaper one and rank that higher. I don’t think that would really make sense.”

At the same time, Mueller says he can’t speak on how products in shopping results (which may be shown in regular search results) are ranked. 

Within shopping search results, users can manually sort their results by price. Whether price is used as a ranking factor the rest of the time isn’t something Mueller can answer:

“A lot of these products also end up in the product search results, which could be because you submit a feed, or maybe because we recognize the product information on these pages, and the product search results I don’t know how they’re ordered.

“It might be that they take the price into account, or things like availability, all of the other factors that kind of come in as attributes in product search.”

Price Is And Isn’t A Ranking Factor

At the end of the day, Mueller doesn’t work in the areas related to product search, so he really can’t say whether price is a ranking factor there. This potentially includes product results shown within normal search results pages.

What he can say for sure is that within traditional web search results, Google does not use price to rank results:

“So, from a web search point of view, we don’t take price into account. From a product search point of view it’s possible.

“The tricky part, I think, as an SEO, is these different aspects of search are often combined in one search results page. Where you’ll see normal web results, and maybe you’ll see some product review results on the side, or maybe you’ll see some mix of that.”

You can hear Mueller’s full response in the recording from the October 8, 2021, Google SEO Office Hours hangout below:

We all know that the search results you get on mobile and the ones you get on desktop devices can be very different – even for the same query, made at the same time, in the same place, logged into the same Google account. 

Have you ever found yourself asking exactly why this happens?

One site owner did and recently got the chance to ask one of Google’s Senior Webmaster Trends Analysts, John Mueller.

In a recent SEO Office Hours session, Mueller explained that a wide range of factors – including the device you are using – determines which search results are returned for a query, and why that is.

Why Are Mobile Search Rankings Different From Desktop?

The person asking specifically wanted to clarify why there is still a disparity between mobile and desktop search results now that mobile-first indexing has launched for all sites. Here’s what was asked:

“How are desktop and mobile ranking different when we’ve already switched to mobile-first indexing?”

Indexing and Ranking Are Different

In response to the question, Mueller first tried to clarify that indexing and rankings are not exactly the same thing. Instead, they are more like two parts of a larger system. 

“So, mobile-first indexing is specifically about that technical aspect of indexing the content. And we use a mobile Googlebot to index the content. But once the content is indexed, the ranking side is still (kind of) completely separate.”

Although the mobile-first index was a significant shift in how Google brought sites into their search engine and understood them, it actually had little direct effect on most search results. 

Mobile Users and Desktop Users Have Different Needs

Beyond the explanation about indexing vs. ranking, John Mueller also said that Google returns different rankings for mobile and desktop searches because they reflect potentially different needs in the moment.

“It’s normal that desktop and mobile rankings are different. Sometimes that’s with regards to things like speed. Sometimes that’s with regards to things like mobile-friendliness.

“Sometimes that’s also with regards to the different elements that are shown in the search results page.

“For example, if you’re searching on your phone then maybe you want more local information because you’re on the go. Whereas if you’re searching on a desktop maybe you want more images or more videos shown in the search results. So we tend to show …a different mix of different search results types.

“And because of that it can happen that the ranking or the visibility of individual pages differs between mobile and desktop. And that’s essentially normal. That’s a part of how we do ranking.

“It’s not something where I would say it would be tied to the technical aspect of indexing the content.”

With this in mind, there’s little need to be concerned if you aren’t showing up in the same spot for the same exact searches on different devices.

Instead, watch for big shifts in what devices people are using to access your page. If your users are overwhelmingly using phones, assess your site’s speed and mobile-friendliness. Likewise, a majority of traffic coming from desktop devices may indicate you need to better serve the needs of desktop users – for example, with richer images and video.

If you want to hear Mueller’s full explanation and even more discussion about search engine optimization, check out the SEO Office Hours video below:

In a Google Search Central SEO session recently, Google’s John Mueller shed light on a way the search engine’s systems can go astray – keeping pages on your site from being indexed and appearing in search. 

Essentially the issue comes from Google’s predictive approach to identifying duplicate content based on URL patterns, which has the potential to incorrectly identify duplicate content based on the URL alone. 

Google uses this predictive system to increase the efficiency of its crawling and indexing by skipping over content that is just a copy of another page. By leaving these pages out of the index, Google is less likely to show repetitive content in its search results and can reach other, more unique content more quickly.

Obviously the problem is that content creators could unintentionally trigger these predictive systems when publishing unique content on similar topics, leaving quality content out of the search engine. 

John Mueller Explains How Google Could Misidentify Duplicate Content

In a response to a question from a user whose pages were not being indexed correctly, Mueller explained that Google uses multiple layers of filters to weed out duplicate content:

“What tends to happen on our side is we have multiple levels of trying to understand when there is duplicate content on a site. And one is when we look at the page’s content directly and we kind of see, well, this page has this content, this page has different content, we should treat them as separate pages.

The other thing is kind of a broader predictive approach that we have where we look at the URL structure of a website where we see, well, in the past, when we’ve looked at URLs that look like this, we’ve seen they have the same content as URLs like this. And then we’ll essentially learn that pattern and say, URLs that look like this are the same as URLs that look like this.”

He also explained how these systems can sometimes go too far and Google could incorrectly filter out unique content based on URL patterns on a site:

“Even without looking at the individual URLs we can sometimes say, well, we’ll save ourselves some crawling and indexing and just focus on these assumed or very likely duplication cases. And I have seen that happen with things like cities.

I have seen that happen with things like, I don’t know, automobiles is another one where we saw that happen, where essentially our systems recognize that what you specify as a city name is something that is not so relevant for the actual URLs. And usually we learn that kind of pattern when a site provides a lot of the same content with alternate names.”
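Mueller’s description suggests a simple heuristic, sketched below in Python. This is purely a hypothetical illustration – Google’s actual systems are not public – but it captures the idea he describes: if every previously crawled URL sharing a path pattern carried identical content, new URLs matching that pattern are predicted to be duplicates and skipped.

```python
from urllib.parse import urlparse

def url_shape(url):
    """Reduce a URL to a coarse pattern by wildcarding its final path
    segment (e.g. the city name in /cars/springfield -> /cars/*)."""
    segments = [s for s in urlparse(url).path.split("/") if s]
    if segments:
        segments[-1] = "*"
    return "/" + "/".join(segments)

def predict_duplicates(seen, candidates):
    """seen: {url: content_hash} for already-crawled pages.
    Returns the candidate URLs predicted to be duplicates because every
    seen URL with the same shape carried identical content."""
    by_shape = {}
    for url, content_hash in seen.items():
        by_shape.setdefault(url_shape(url), set()).add(content_hash)
    # A shape is duplicate-prone only if we saw it more than once and
    # every occurrence had the same content hash.
    dup_shapes = {
        shape for shape, hashes in by_shape.items()
        if len(hashes) == 1
        and sum(url_shape(u) == shape for u in seen) > 1
    }
    return [u for u in candidates if url_shape(u) in dup_shapes]
```

In this toy model, two city pages with identical content would teach the crawler that the whole `/cars/*` pattern is duplicated – exactly the failure mode Mueller warns about when the city name actually matters.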

How Can You Protect Your Site From This?

While Google’s John Mueller wasn’t able to provide a foolproof solution or prevention for this issue, he did offer some advice for sites that have been affected:

“So what I would try to do in a case like this is to see if you have this kind of situations where you have strong overlaps of content and to try to find ways to limit that as much as possible.

And that could be by using something like a rel canonical on the page and saying, well, this small city that is right outside the big city, I’ll set the canonical to the big city because it shows exactly the same content.

So that really every URL that we crawl on your website and index, we can see, well, this URL and its content are unique and it’s important for us to keep all of these URLs indexed.

Or we see clear information that this URL you know is supposed to be the same as this other one, you have maybe set up a redirect or you have a rel canonical set up there, and we can just focus on those main URLs and still understand that the city aspect there is critical for your individual pages.”
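In markup terms, the canonical setup Mueller describes is a single tag in the duplicate page’s head. The URLs below are hypothetical, purely to illustrate the small-city/big-city example:

```html
<!-- Hypothetical duplicate page: /locations/small-city -->
<!-- Its content mirrors the big-city page, so it declares that page canonical -->
<head>
  <link rel="canonical" href="https://example.com/locations/big-city">
</head>
```

As Mueller notes, a redirect to the main URL accomplishes the same consolidation when the duplicate page doesn’t need to exist at its own address.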

It should be clarified that duplicate content or pages impacted by this problem will not hurt the overall SEO of your site. So, for example, having several pages tagged as being duplicate content won’t prevent your home page from appearing for relevant searches. 

Still, the issue has the potential to gradually decrease the efficiency of your SEO efforts, not to mention making it harder for people to find the valuable information you are providing. 

To see Mueller’s full explanation, watch the video below:

Blog comments are a tricky issue for many business websites. 

On one hand, everyone dreams of building a community of loyal customers who follow every post and regularly hold healthy discussions in the comments. Not only can comments be helpful for other potential customers, they also tend to help Google rankings and inspire future content for your site.

On the other hand, most business websites receive significantly more spam than genuine comments. Even the best anti-spam measures can’t prevent every sketchy link or comment on every post. For the most part, these are more annoying than an actual problem. However, if left completely unmonitored, spam could build up and potentially hurt your rankings.

This can make it tempting to just remove comments from your blog entirely. If you do, you don’t have to worry about monitoring comments, responding to trolls, or weeding out spam. After all, your most loyal fans can still talk about your posts on your Facebook page, right?

Unfortunately, as Google’s John Mueller recently explained, removing comments from your blog is likely to hurt more than it helps. 

John Mueller Addresses Removing Blog Comments

In a Google Search Central SEO hangout on February 5, Google’s John Mueller explored a question from a site owner about how Google factors blog comments into search rankings. Specifically, they wanted to remove comments from their site but worried about potentially dropping in the search results if they did. 

While the answer was significantly more complicated, the short version is this:

Google does factor blog comments into where they decide to rank web pages. Because of this, it is unlikely that you could remove comments entirely without affecting your rankings. 

How Blog Comments Impact Search Rankings

Google sees comments as a separate but significant part of your content. So, while it recognizes that comments may not directly reflect your content, they do reflect things like engagement and occasionally provide helpful extra information.

This also means that removing blog comments is essentially removing a chunk of information, keywords, and context from every blog post on your site in the search engine’s eyes. 

However, John Mueller stopped short of recommending that you keep blog comments rather than remove them. The right choice depends on several factors, including how many comments you’ve received, what type of comments they are, and how much they have added to your SEO.

As Mueller answered:

“I think it’s ultimately up to you. From our point of view we do see comments as a part of the content. We do also, in many cases, recognize that this is actually the comment section so we need to treat it slightly differently. But ultimately if people are finding your pages based on the comments there then, if you delete those comments, then obviously we wouldn’t be able to find your pages based on that.

So, that’s something where, depending on the type of comments that you have there, the amount of comments that you have, it can be the case that they provide significant value to your pages, and they can be a source of additional information about your pages, but it’s not always the case.

So, that’s something where I think you need to look at the contents of your pages overall, the queries that are leading to your pages, and think about which of these queries might go away if comments were not on those pages anymore. And based on that you can try to figure out what to do there.

It’s certainly not the case that we completely ignore all of the comments on a site. So just blindly going off and deleting all of your comments in the hope that nothing will change – I don’t think that will happen.”

It is clear that removing blog comments entirely from your site is all but certain to affect your search rankings on some level. Whether this means a huge drop in rankings or potentially a small gain, though, depends entirely on what type of comments your site is actually losing. 

To watch Mueller’s full answer, check out the video below:

It’s a question we all have dealt with at least once or twice, and one that rarely has a satisfying answer: “Why did my Google rankings suddenly drop?”

Sometimes, a simple audit will reveal a technical hiccup or issue that is dragging down your rankings. Just as often, though, everything appears to be working as it should, yet you are suddenly further down the page or not even on the first page anymore.

In this situation, Google’s John Mueller says there are four major reasons for sites to lose rankings. 

John Mueller Explains Why Sites Lose Rankings

In a recent Google Webmaster Central chat, Mueller was asked why a publisher who had ranked well for “seven or eight years” had suddenly lost rankings for three different sites. Notably, the person asking the question couldn’t find any signs of problems in their inbound or outbound links, and all the sites used the same keywords (they sell similar products by different brands). 

Of course, Mueller couldn’t get too specific with his answer because he didn’t have actual data or analytics on the sites. Still, he did his best to address four general reasons sites may suddenly rank worse.

1) Rankings Are Temporary

Once a site is ranking at the top for its ideal keywords, many site owners feel like they have accomplished their mission and will continue to rank there. Unfortunately, John Mueller says that rankings are malleable and change constantly.

Mueller explained:

“In general, just because the site was appearing well in search results for a number of years does not mean that it will continue to appear well in search results in the future.

These kinds of changes are essentially to be expected on the web, it’s a very common dynamic environment”

2) The Internet Is Always Changing

The reason why rankings are so prone to fluctuations is that the internet itself is always changing. New sites are being created every day, links might die, competitors might improve their own SEO, and people’s interests change.

Each and every one of these can have a big impact on the search results people see at any given time. 

As Mueller put it:

“On the one hand, things on the web change with your competitors, with other sites…”

3) Google Changes Its Algorithms

To keep up with the constantly changing internet, Google itself has to regularly overhaul how its search engine interprets and ranks websites. 

To give you one idea how this plays out, a few years ago search results were absolutely dominated by “listicles” (short top 5 or top 10 lists). Over time, people got tired of the shallow information these types of lists provided and how easily they could be abused as clickbait. Google recognized this and tweaked its algorithm to better prioritize in-depth information hyper-focusing on a single topic or issue. Now, though a listicle can still rank on Google, it is considerably harder than it used to be.

As Mueller simply explained:

“On the other hand, things on our side change with our algorithms in search.”

4) People Change

This is one that has been touched upon throughout the list Mueller gave, but it really gets to the heart of what Google does. What people expect out of the internet is constantly changing, and it is Google’s job to keep up with these shifts. 

In some cases, this can mean that people outright change how they search. For example, simple keywords like “restaurants near me” or “fix Samsung TV” were the main tool people used to find information for years and years. As voice search has become widespread and people have gotten more accustomed to using search engines all the time, queries have expanded to frequently include full sentences or phrases like “What is the best Chinese restaurant in midtown?”

At the same time, what people expect out of the same queries is also shifting with technological innovation and content trends. 

Mueller describes the situation by saying:

“And finally on the user side as well, the expectations change over time. So, just because something performed well in the past doesn’t mean it will continue to perform well in search in the future.”

Always Be Monitoring and Improving

The big theme behind all of these reasons sites lose rankings is that they are standing still while the world moves past them. To maintain your high rankings, your site has to be constantly in motion – moving with the trends and providing the content users want and expect from sites at any given time. 

This is why successful sites are also constantly monitoring their analytics to identify upcoming shifts and respond to any drops in rankings as soon as they happen.

If you want to see the full response, watch the video below (it starts with Mueller’s response but you can choose to watch the entire Webmaster Central office-hours discussion if you wish).

When creating content to help your SEO, many people believe they should aim for an “ideal” word count. The perfect number has ranged from 300 to 1,500 words per post depending on when and who you ask. There’s just one problem – Google’s leading experts say there is no perfect word count.

Why Do Word Counts Seem Important?

Since Google is relatively tight-lipped about the exact recipe it uses to rank sites, SEO experts have traditionally had to rely on their own data to understand the inner workings of the search engine.

Sometimes, this information is later confirmed. Marketing experts had long believed that site speed was an important ranking signal for Google before the company confirmed its impact.

The problem is that this approach relies heavily on correlation – which can be unreliable or lead to incorrect conclusions.

This is also why the “ideal” word counts recommended by “experts” tend to vary so wildly. When we have to rely on relatively limited data (at least, compared to Google’s), the conclusions drawn from it can be skewed.

This is where Google’s John Mueller comes in.

What Google Has To Say

The company’s leading experts have repeatedly denied that they consider word counts to be an important ranking signal. Some have suggested it is lightly considered, but the impact is negligible compared to other factors like keyword relevance or backlinks to the page.

The latest Googler to speak out about the issue is John Mueller, Webmaster Trends Analyst at Google.

In a recent tweet, Mueller used a simple analogy to explain why focusing on word counts is the wrong approach.

Simply put, focusing on how long each piece of content is puts the attention in the wrong place. If you write long posts simply for the sake of hitting a total word count, there is a high risk of drifting off-topic or including irrelevant details.

The better approach is to create content with the goal of answering a specific question or responding to a specific need. Then, write until you’ve provided all the relevant information – whether it takes 300 or 1,500 words to do so.

A lot of people have come to think of search engine optimization and content marketing as separate strategies these days, but Google’s John Mueller wants to remind webmasters that both are intrinsically linked. Without great content, even the most well-optimized sites won’t rank as high as they should.

The discussion was brought up during a recent Google Webmaster Central hangout where one site owner asked about improving rankings for his site.

Specifically, he explained that there were no technical issues he could find using Google’s tools, and that he wasn’t sure what else he could do to improve performance.

Here’s the question that was asked:

“There are zero issues on our website according to Search Console. We’re providing fast performance in mobile and great UX. I’m not sure what to do to improve rankings.”

Mueller responded by explaining that it is important to not forget about the other half of the equation. Just focusing on the technical details won’t always lead to high rankings because the content on the site still needs to be relevant and engaging for users.

The best way to approach the issue, in Mueller’s opinion, is to ask what issues users might be having with your products or services and what questions they might ask. Then, use content to provide clear and easily available answers to these questions.

In addition to these issues, Mueller noted that some industries have much stronger competition for rankings than others. If you are in one of these niches, you may still struggle to rank as well as you’d like against competition which has been maintaining an informative and well-designed site for longer.

You can read or watch Mueller’s answer in full below, starting at 32:29 in the video:

“This is always kind of a tricky situation where you’re working on your website for a while, then sometimes you focus on a lot of the technical details and forget about the bigger picture.

So what I would recommend doing here is taking your website and the queries that you’re looking [to rank] for, and going to one of the webmaster forums.

It could be our webmaster forum, there are lots of other webmaster forums out there where webmasters and SEOs hang out. And sometimes they’ll be able to look at your website and quickly pull out a bunch of issues. Things that you could be focusing on as well.

Sometimes that’s not so easy, but I think having more people look at your website and give you advice, and being open to that advice, I think that’s an important aspect here.

Another thing to keep in mind is that just because something is technically correct doesn’t mean that it’s relevant to users in the search results. That doesn’t mean that it will rank high.

So if you clean up your website, and you fix all of the issues, for example, if your website contains lots of terrible content then it still won’t rank that high.

So you need to, on the one hand, understand which of these technical issues are actually critical for your website to have fixed.

And, on the other hand, you really need to focus on the user aspect as well to find what are issues that users are having, and how can my website help solve those issues. Or help answer those questions.”

If you operate a website that is frequently creating or changing pages – such as an e-retail or publishing site – you’ve probably noticed it can take Google a while to update the search engine with your new content.

This has led to widespread speculation about just how frequently Google indexes pages and why it seems like some types of websites get indexed more frequently than others.

In a recent Q&A video, Google’s John Mueller took the time to answer this directly. He explains how Google’s indexing bots prioritize specific types of pages that are more “important” and limit excessive stress on servers. But, in typical Google fashion, he isn’t giving away everything.

The question posed was:

“How often does Google re-index a website? It seems like it’s much less often than it used to be. We add or remove pages from our site, and it’s weeks before those changes are reflected in Google Search.”

Mueller starts by explaining that Google takes its time to crawl the entirety of a website, noting that if it were to continuously crawl entire sites in short periods of time it would lead to unnecessary strain on the server. Because of this, Googlebot actually has a limit on the number of pages it can crawl every day.

Instead, Googlebot focuses on pages that should be crawled more frequently like home pages or high-level category pages. These pages will get crawled at least every few days, but it sounds like less-important pages (like maybe blog posts) might take considerably longer to get crawled.

You can watch Mueller’s response below or read the quoted statement underneath.

“Looking at the whole website all at once, or even within a short period of time, can cause a significant load on a website. Googlebot tries to be polite and is limited to a certain number of pages every day. This number is automatically adjusted as we better recognize the limits of a website. Looking at portions of a website means that we have to prioritize how we crawl.

“So how does this work? In general, Googlebot tries to crawl important pages more frequently to make sure that most critical pages are covered. Often this will be a website’s home page or maybe higher-level category pages. New content is often mentioned and linked from there, so it’s a great place for us to start. We’ll re-crawl these pages frequently, maybe every few days, maybe even much more frequently depending on the website.”


With Google’s extensive personalization of search results, it has gotten harder and harder to tell when a major change to Google’s algorithms has shaken things up. That hasn’t stopped people from speculating that a major algorithm shift has occurred when they notice significant changes in how sites are performing across the board.

This happened last week when many major authorities in SEO speculated Google unleashed a major algorithm update. Of course, Google won’t confirm that any major changes happened, but Webmaster Trends Analyst for Google, John Mueller, did take the time to remind everyone “we make changes almost every day.”

Google’s Gary Illyes took the stance even further, tweeting “we have 3 updates in a day average. I think it’s pretty safe to assume there was one recently…”

The truth is, the days of major Google algorithm updates like Penguin and Panda upending the search world overnight are largely over. Instead, Google has shifted to a model of constant evolution, tweaking and changing things perpetually.

When there is a new important algorithm, such as recent mobile-friendliness algorithms, the company tends to warn businesses ahead of time. Even then, these recent algorithm updates have been benign, only affecting a small number of websites.

The best plan isn’t to keep constant watch for unannounced shifts and react to each one. Instead, take a proactive stance by making sure your site follows all of Google’s latest best practices and provides value to searchers. If you do that, you should make it through any changes Google throws your way.

Yesterday, we reported that a significant number of websites had been hit with Google penalties over the weekend for “unnatural outbound links.” Since then, Google has clarified that the manual penalties issued this weekend were specifically related to bloggers giving links to websites in exchange for free products or services.

Google had issued a warning a few weeks ago urging bloggers to disclose free product reviews and to nofollow links in their blog posts related to those products. Now, it has taken action against sites that ignored the warning.

In the warning, Google told bloggers to “nofollow the link, if you decide to link to the company’s site, the company’s social media accounts, an online merchant’s page that sells the product, a review service’s page featuring reviews of the product or the company’s mobile app in an app store.”

As Barry Schwartz reports, John Mueller from Google explained the penalties in several threads on the Google support forums, pointing people to Google’s recently published guidance, “Best practices for bloggers reviewing free products they receive from companies.” In one comment, Mueller went on to say:

In particular, if a post was made because of a free product (or free service, or just paid, etc.), then any links placed there because of that need to have a rel=nofollow attached to them. This includes links to the product itself, any sales pages (such as on Amazon), affiliate links, social media profiles, etc. that are associated with that post. Additionally, I imagine your readers would also appreciate it if those posts were labeled appropriately. It’s fine to keep these kinds of posts up, sometimes there’s a lot of useful information in them! However, the links in those posts specifically need to be modified so that they don’t pass PageRank (by using the rel=nofollow).

Once these links are cleaned up appropriately, feel free to submit a reconsideration request, so that the webspam team can double-check and remove the manual action.
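In practice, the required change is the `rel` attribute on each affected link. A hypothetical example of what a cleaned-up link in a sponsored review post would look like:

```html
<!-- Link placed because of a free product: must not pass PageRank -->
<a href="https://example.com/product" rel="nofollow">Check out the product here</a>

<!-- The same link without rel="nofollow" is what triggers the manual action -->
```

(Google has since introduced `rel="sponsored"` as a more specific alternative for paid or incentivized links, with `nofollow` still accepted.)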

If you are a blogger or company who has participated in an agreement to give free products to reviewers, be sure to check your Google Search Console messages to see if you’ve been hit by the latest round of manual penalties.