Tag Archive for: John Mueller

Just last week, Google Search Liaison, Danny Sullivan, once again took to Twitter to dispel a longstanding myth about word counts and search engine optimization (SEO). 

The message reads:

“Reminder. The best word count needed to succeed in Google Search is … not a thing! It doesn’t exist. Write as long or short as needed for people who read your content.”

Sullivan also linked to long-existing help pages and included a screencap of a statement from these pages which says:

“Are you writing to a particular word count because you’ve heard or read that Google has a preferred word count? (No, we don’t.)”

Of course, this is not a new message from Google. Even so, many of the most popular SEO tools and experts still claim that somewhere between 300 and 1,500 words is ideal for ranking in Google search results.

Incidentally, a day later Google’s John Mueller also responded to an SEO professional who asked whether there was a “correlation between word count and outranking competition?” In a brief reply, Mueller said, “Are you saying the top ranking pages should have the most words? That’s definitely not the case.”

Most likely, this myth of an ideal SEO word count will persist so long as search engine optimization exists in its current form. Still, it is always good to get a clear reminder from major figures at Google that content should be as long as necessary to share valuable information with your audience, whether that takes a couple of sentences or several thousand words.

One of Google’s most visible spokespeople, John Mueller, made a rare appearance on Reddit to answer a series of “dumb” SEO questions covering everything from geotagging images to how often you should blog.

In a thread on the r/BigSEO subreddit called “incoming dumb question barrage”, a user asked a series of five questions:

  1. Should we be geotagging images? Does Google even care?
  2. Blogging. If we do it, is it everyday or once a week with some seriously solid stuff?
  3. Google Business Profile posting: Everyday, once a week, or why bother?
  4. Since stuff like Senuke died 10 years ago, is it all about networking with webmasters of similar and same niche sites for links?
  5. Piggybacking off #4, what about PBNs? Are they back? If so, does it have to be a group of completely legit looking websites vs some cobbled together WP blogs?

Mueller provided a series of candid answers which we will get into below:

Geotagging Images

Here Mueller kept it short and sweet: “No need to geotag images for SEO.”

How Often Should You Blog?

As always, Google won’t provide a specific post frequency that is “best” for SEO blog content. Rather, Mueller says to post “as often as you have something unique & compelling to say.”

However, the Google Search Advocate admits that more frequent posting can bring more traffic if you are able to maintain the quality of your content.

“The problem with trying to keep a frequency up is that it’s easy to end up with mediocre, fluffy content, which search engine quality algorithms might pick up on.”

Additionally, he indicates that those who are using AI to create a lot of content quickly are unlikely to be rewarded.

Google Business Profile Posting Frequency

Unfortunately, this is not Mueller’s area of expertise. His answer was a simple “no idea.”

Outdated Linkbuilding Strategies

The last two questions asked whether older link building methods are still relevant at all. Clearly, this tickled Mueller, as he largely dismissed both approaches.

“SENuke, hah, that’s a name I haven’t heard in ages, lol. Sorry. Giggle. I have thoughts on links, but people love to take things out of context to promote their link efforts / tools, so perhaps someone else will say something reasonable, or not.

“OMG, PBNs too. What is this thread even. Now I won’t say anything without a lawyer present.”

No Shortcuts To Online Riches

Of course, there is an underlying current connecting all of these questions. Mueller takes note of this as well, saying:

“Reading between the lines, it seems you want to find a short-cut to making money online.”

The truth is, there are no real shortcuts to online success these days. However, there are a lot of questionable people willing to take your money to provide tools and courses that often get you nowhere. 

“Unfortunately, there’s a long line of people trying to do the same, and some have a lot of practice. Some will even sell you tools and courses on how to make money online (and *they* will be the ones making the money, fwiw, since people pay them for the tools and courses). The good tools cost good money, and they’re not marketed towards people who just want to make money online — they’re targeted at companies who need to manage their online presence and report on progress to their leadership chain.”

At the same time, Mueller encourages individuals such as the person who started the thread to keep learning and practicing SEO:

“… learn HTML, learn a bit of programming, and go for it. 90% of the random tricks you run across won’t work, 9% of the remaining ones will burn your sites to the ground, but if you’re lucky & persistent (is that the same?), you’ll run across some things that work for you.

“If you want to go this route, accept that most – or all – of the things you build will eventually blow up, but perhaps you’ll run into some along the way that make it worthwhile.

“And … after some time, you might notice that actually building something of lasting value can also be intriguiing [sic], and you’ll start working on a side-project that does things in the right way, where you can put your experience to good use and avoid doing all of the slash & burn site/spam-building.”

If you’re still unclear on how Google thinks about marketing agencies that offer negative SEO linkbuilding services or link disavowal services, the latest comments from John Mueller should help clarify the company’s stance. 

In a conversation that popped up on Twitter between Mueller and several marketing experts, Mueller clearly and definitively slammed companies offering these types of services by saying that they are “just making stuff up and cashing in from those who don’t know better.”

This is particularly notable, as some have accused Google of being unclear about how it handles link disavowals made through its tools.

The post that started it all came from Twitter user @RyanJones who said, “I’m still shocked at how many seos regularly disavow links. Why? Unless you spammed them or have a manual action you’re probably doing more harm than good.”

In response, one user began talking about negative SEO, which caught Mueller’s attention. The user claimed that “agencies know what kind of links hurt the website because they have been doing this for a long time. It’s only hard to down for very trusted sites. Even some agencies provide a money back guarantee as well. They will provide you examples as well with proper insights.”

In response, Mueller gave what is possibly his clearest statement on this type of “service” yet:

“That’s all made up & irrelevant. These agencies (both those creating, and those disavowing) are just making stuff up, and cashing in from those who don’t know better.”

Instead of spending time and effort on any of this, Mueller instead recommended something simple:

“Don’t waste your time on it; do things that build up your site instead.”

Product pages may receive a temporary reduction in their visibility in Google search results if the product is listed as out of stock, according to Google’s Search Advocate John Mueller during the most recent Google Search Central SEO Office Hours session.

Surprisingly, though, this is not always the case.

As Mueller answered questions about how product stock affects rankings, he explained that Google has a few ways of handling out-of-stock product pages.

How Google Handles Out-of-Stock Products

Mueller says that, in most cases, Google treats an out-of-stock listing much like a soft 404 or an unavailable page:

“Out of stock – it’s possible. That’s kind of simplified like that. I think there are multiple things that come into play when it comes to products themselves in that they can be shown as a normal search result.

They can also be shown as an organic shopping result as well. If something is out of stock, I believe the organic shopping result might not be shown – I’m not 100% sure.

And when it comes to the normal search results, it can happen that when we see that something is out of stock, we will assume it’s more like a soft 404 error, where we will drop that URL from the search results as well.

Theoretically, it could affect the visibility in search if something goes out of stock.”

In some situations, though, Google will essentially override this decision and continue to show a page if it is considered particularly relevant for users.

For example, if the product page also includes helpful information about the product in general, it may still be worth keeping in search results despite the lack of stock.

As Mueller explains:

“It doesn’t have to be the case. In particular, if you have a lot of information about that product anyway on those pages, then that page can still be quite relevant for people who are searching for a specific product. So it’s not necessarily that something goes out of stock, and that page disappears from search.”
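Mueller doesn’t prescribe a specific fix in this session, but one common way to make stock status explicit to search engines is schema.org Product structured data with an availability value on the offer. The snippet below is only a minimal, hypothetical sketch (the product name and values are placeholders), not something Mueller recommends here:

```html
<!-- Hypothetical product page markup: the availability property states that the
     item is out of stock, so crawlers don't have to infer it from the page copy. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "sku": "EX-123",
  "offers": {
    "@type": "Offer",
    "price": "49.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/OutOfStock"
  }
}
</script>
```

Keeping the rest of the product details on the page intact is what gives Google a reason to treat the URL as still relevant, as Mueller notes above.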

Out-of-Stock Products Don’t Hurt Your Entire Site

While it is true that listing one product as unavailable can keep that specific page from appearing in search results, Mueller reassures site owners that this should not impact the rest of your website:

“The other thing that’s also important to note here is that even if one product goes out of stock, the rest of the site’s rankings are not affected by that.

So even if we were to drop that one specific product because we think it’s more like a soft 404 page, then people searching for other products on the site, we would still show those normally. It’s not that there would be any kind of negative effect that swaps over into the other parts of the site.”

You can watch the entire discussion with Google’s John Mueller in a recording of the SEO Office Hours session below:

Any small-to-medium-sized business owner or operator is all too aware that it often feels like the odds are stacked against them – especially when it comes to competing with larger companies on Google. 

It’s something Google rarely addresses outright, but it seems clear that big companies have several advantages which can make it hard to compete. This is why one person decided to ask the Google Search Advocate, John Mueller, about the situation during a recent Office Hours hangout.

As Mueller acknowledges, Google is well aware that big brands often receive natural competitive advantages. But he also had some advice for smaller brands trying to rank against massive ones: big sites face their own unique problems and limitations, which can give you a chance to gain the upper hand.

John Mueller’s Advice For Small Companies On Google

The original question posed to Mueller included two parts, but it was the second half that the Search Advocate decided to focus on. Specifically, he was asked:

“Do smaller organizations have a chance in competing with larger companies?”

From the outset, he says it’s a bit of a broader “philosophical” question, but he does his best to show how smaller companies have consistently been able to turn the tables against larger brands. For example, Mueller points to how many larger companies were so invested in using Macromedia Flash that they stuck with it long after it became clear it was not helping their SEO. Meanwhile, smaller sites often knew better and were able to use this against their competition.

“One of the things that I’ve noticed over time is that in the beginning, a lot of large companies were, essentially, incompetent with regards to the web and they made terrible websites.

And their visibility in the search results was really bad.

And it was easy for small websites to get in and kind of like say, well, here’s my small website or my small bookstore, and suddenly your content is visible to a large amount of users.

And you can have that success moment early on.

But over time, as large companies also see the value of search and of the web overall, they’ve grown their websites.

They have really competent teams, they work really hard on making a fantastic web experience.

And that kind of means for smaller companies that it’s a lot harder to gain a foothold there, especially if there is a very competitive existing market out there.

And it’s less about large companies or small companies.

It’s really more about the competitive environment in general.”

While it is true that it can seem very difficult to compete with the seemingly unlimited resources of bigger brands, history has shown time and time again that bigger brands face their own challenges. 

As Mueller concludes:

“As a small company, you should probably focus more on your strengths and the weaknesses of the competitors and try to find an angle where you can shine, where other people don’t have the ability to shine as well.

Which could be specific kinds of content, or specific audiences or anything along those lines.

Kind of like how you would do that with a normal, physical business as well.”

In the end, smaller brands competing against big brands are much like David facing down Goliath; if they know how to use their strengths and talents to their advantage, they can overcome seemingly unbeatable challengers.

You can watch Mueller’s answer in the video below, starting around 38:14.

Most people these days understand the general idea of how search engines work. Search engines like Google send out automated bots to scan or “crawl” all the pages on a website, before using their algorithms to sort through which sites are best for specific search queries. 

What few outside Google knew until recently was that the search engine has begun using two different methods to crawl websites – one which specifically searches out new content and another to review content already within its search index.

Google Search Advocate John Mueller revealed this recently during one of his regular Search Central SEO office-hours chats on January 7th.

During this session, an SEO professional asked Mueller about the behavior he has observed from Googlebot crawling his website. 

Specifically, the user said Googlebot previously crawled his site daily when it was frequently publishing content. Since publishing has slowed on this site, he has seen Googlebot crawling his website less often.

As it turns out, Mueller says this is quite normal and is the result of how Google approaches crawling web pages.

How Google Crawls New vs. Old Content

Mueller acknowledges there are several factors that can contribute to how often Google crawls different pages on a website, including what type of pages they are, how new they are, and how Google understands your site.

“It’s not so much that we crawl a website, but we crawl individual pages of a website. And when it comes to crawling, we have two types of crawling roughly.

One is a discovery crawl where we try to discover new pages on your website. And the other is a refresh crawl where we update existing pages that we know about.”

These different types of crawling target different types of pages, so it is reasonable that they also occur more or less frequently depending on the type of content.

“So for the most part, for example, we would refresh crawl the homepage, I don’t know, once a day, or every couple of hours, or something like that.

And if we find new links on their home page then we’ll go off and crawl those with the discovery crawl as well. And because of that you will always see a mix of discover and refresh happening with regard to crawling. And you’ll see some baseline of crawling happening every day.

But if we recognize that individual pages change very rarely, then we realize we don’t have to crawl them all the time.”

The takeaway here is that Google adapts to your site according to your own publishing habits. Which type of crawling it is using or how frequently it is happening are not inherently good or bad indicators of your website’s health, and your focus should be (as always) on providing the smoothest online sales experience for your customers. 

Nonetheless, it is interesting to know that Google has made this adjustment to how it crawls content across the web and to speculate about how this might affect its ranking process.

To hear Mueller’s full response (including more details about why Google crawls some sites more often than others), check out the video below:

When it comes to ranking a website in Google, most people agree that high-quality content is essential. But, what exactly is quality content? 

For a number of reasons, most online marketers have assumed that Google defines high-quality content as something very specific: text-based content that clearly and engagingly communicates valuable information to readers.

Recently, though, Google’s John Mueller shot down that assumption during a video chat. 

While he still emphasizes that great content should inform or entertain viewers, Mueller explained that the search engine actually has a much broader view of “content quality” than most thought.

What Google Means When They Say “Quality Content”

In response to a question about whether SEO content creators should prioritize technical improvements to content or expand the scope of content, Mueller took a moment to talk about what content quality means to Google.

“When it comes to the quality of the content, we don’t mean like just the text of your articles. It’s really the quality of your overall website, and that includes everything from the layout to the design.”

This is especially notable, as Mueller specifically highlights two factors that many continue to ignore – images and page speed. 

“How you have things presented on your pages? How you integrate images? How you work with speed? All of those factors, they kind of come into play there.”
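Mueller doesn’t go into implementation detail, but the image and speed factors he alludes to usually come down to basics such as descriptive alt text, explicit dimensions (so the layout doesn’t shift while loading), and deferred loading for below-the-fold images. The snippet below is a generic, hypothetical example of those basics rather than anything Google prescribes:

```html
<!-- Hypothetical article image: alt text describes the image, width/height reserve
     space in the layout, and loading="lazy" defers the download until it is needed. -->
<img
  src="/images/blue-widget-front.jpg"
  alt="Blue widget shown from the front on a white background"
  width="800"
  height="600"
  loading="lazy">
```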

Ultimately, Mueller’s response emphasizes taking a much more holistic view of your content and focusing on providing an all-around great experience for users on your website. 

There is an unspoken aspect to what Mueller says which should be mentioned: he subtly shows that Google still prefers text-based content over video or audio-only formats. While the company wants to integrate even more types of content, the simple fact is that the search engine still struggles to parse these formats without additional information.

Still, Mueller’s statement broadens the concept of “quality content” from what is often understood. 

“So it’s not the case that we would look at just purely the text of the article and ignore everything else around it and say, oh this is high-quality text. We really want to look at the website overall.”

It is no secret that Google knows the price you, your competitors, and even the shady third-party companies charge for your products or services. In some cases, you might even directly tell the company how much you charge through Google’s Merchant Center. So, it is reasonable to think that the search engine might also use that information when it is ranking brands or product pages in search results.

In a recent livestream, however, Google Webmaster Trends Analyst, John Mueller, denied the idea.

What John Mueller Has To Say About Price as a Google Ranking Signal

The question arose during an SEO Office-Hours hangout on October 8, which led to Mueller explaining that while Google can access this information, it does not use it when ranking traditional search results.

As he says in the recording of the discussion:

“Purely from a web search point of view, no, it’s not the case that we would try to recognize the price on a page and use that as a ranking factor.

“So it’s not the case that we would say we’ll take the cheaper one and rank that higher. I don’t think that would really make sense.”

At the same time, Mueller says he can’t speak on how products in shopping results (which may be shown in regular search results) are ranked. 

Within shopping search results, users can manually select to sort their results by price. Whether it is used as a factor the rest of the time isn’t something Mueller can answer:

“A lot of these products also end up in the product search results, which could be because you submit a feed, or maybe because we recognize the product information on these pages, and the product search results I don’t know how they’re ordered.

“It might be that they take the price into account, or things like availability, all of the other factors that kind of come in as attributes in product search.”

Price Is And Isn’t A Ranking Factor

At the end of the day, Mueller doesn’t work in the areas related to product search, so he really can’t say whether price is a ranking factor within those areas of Google. This potentially includes product results shown within normal search results pages.

What he can say for sure, is that within traditional web search results, Google does not use price to rank results:

“So, from a web search point of view, we don’t take price into account. From a product search point of view it’s possible.

“The tricky part, I think, as an SEO, is these different aspects of search are often combined in one search results page. Where you’ll see normal web results, and maybe you’ll see some product review results on the side, or maybe you’ll see some mix of that.”

You can hear Mueller’s full response in the recording from the October 8, 2021, Google SEO Office Hours hangout below:

We all know that the search results you get on mobile and the ones you get on desktop devices can be very different – even for the same query, made at the same time, in the same place, logged into the same Google account. 

Have you ever found yourself asking exactly why this happens?

One site owner did, and recently got the chance to ask one of Google’s Senior Webmaster Trends Analysts, John Mueller.

In the recent SEO Office Hours session, Mueller explained why this happens and how a wide range of factors, including what device you are using, decide which search results get returned for a query.

Why Are Mobile Search Rankings Different From Desktop?

The question posed to Mueller specifically sought to clarify why there is still a disparity between mobile and desktop search results after the launch of mobile-first indexing for all sites. Here’s what was asked:

“How are desktop and mobile ranking different when we’ve already switched to mobile-first indexing?”

Indexing and Ranking Are Different

In response to the question, Mueller first tried to clarify that indexing and rankings are not exactly the same thing. Instead, they are more like two parts of a larger system. 

“So, mobile-first indexing is specifically about that technical aspect of indexing the content. And we use a mobile Googlebot to index the content. But once the content is indexed, the ranking side is still (kind of) completely separate.”

Although the mobile-first index was a significant shift in how Google brought sites into their search engine and understood them, it actually had little direct effect on most search results. 

Mobile Users and Desktop Users Have Different Needs

Beyond the explanation about indexing vs. ranking, John Mueller also said that Google returns unique rankings for mobile and desktop search results because they reflect potentially different needs in-the-moment. 

“It’s normal that desktop and mobile rankings are different. Sometimes that’s with regards to things like speed. Sometimes that’s with regards to things like mobile-friendliness.

“Sometimes that’s also with regards to the different elements that are shown in the search results page.

“For example, if you’re searching on your phone then maybe you want more local information because you’re on the go. Whereas if you’re searching on a desktop maybe you want more images or more videos shown in the search results. So we tend to show …a different mix of different search results types.

“And because of that it can happen that the ranking or the visibility of individual pages differs between mobile and desktop. And that’s essentially normal. That’s a part of how we do ranking.

“It’s not something where I would say it would be tied to the technical aspect of indexing the content.”

With this in mind, there’s little need to be concerned if you aren’t showing up in the same spot for the same exact searches on different devices.

Instead, watch for big shifts in what devices people are using to access your page. If your users are overwhelmingly using phones, assess how your site can better serve the needs of desktop users. Likewise, a majority of traffic coming from desktop devices may indicate you need to assess your site’s speed and mobile friendliness.
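If that assessment points to mobile-friendliness problems, the usual first check is whether the page declares a responsive viewport at all. The tag below is a standard, generic example rather than advice Mueller gives in this session:

```html
<!-- Standard responsive viewport declaration, placed in the page <head>: the layout
     width follows the device width instead of defaulting to a fixed desktop width. -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```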

If you want to hear Mueller’s full explanation and even more discussion about search engine optimization, check out the SEO Office Hours video below:

In a Google Search Central SEO session recently, Google’s John Mueller shed light on a way the search engine’s systems can go astray – keeping pages on your site from being indexed and appearing in search. 

Essentially, the issue comes from Google’s predictive approach to identifying duplicate content based on URL patterns, which can incorrectly flag unique pages as duplicates based on the URL alone.

Google uses the predictive system to increase the efficiency of its crawling and indexing of sites by skipping over content that is just a copy of another page. By leaving these pages out of the index, Google’s engine is less likely to show repetitious content in its search results and its indexing systems can reach other, more unique content more quickly.

Obviously the problem is that content creators could unintentionally trigger these predictive systems when publishing unique content on similar topics, leaving quality content out of the search engine. 

John Mueller Explains How Google Could Misidentify Duplicate Content

In a response to a question from a user whose pages were not being indexed correctly, Mueller explained that Google uses multiple layers of filters to weed out duplicate content:

“What tends to happen on our side is we have multiple levels of trying to understand when there is duplicate content on a site. And one is when we look at the page’s content directly and we kind of see, well, this page has this content, this page has different content, we should treat them as separate pages.

The other thing is kind of a broader predictive approach that we have where we look at the URL structure of a website where we see, well, in the past, when we’ve looked at URLs that look like this, we’ve seen they have the same content as URLs like this. And then we’ll essentially learn that pattern and say, URLs that look like this are the same as URLs that look like this.”

He also explained how these systems can sometimes go too far and Google could incorrectly filter out unique content based on URL patterns on a site:

“Even without looking at the individual URLs we can sometimes say, well, we’ll save ourselves some crawling and indexing and just focus on these assumed or very likely duplication cases. And I have seen that happen with things like cities.

I have seen that happen with things like, I don’t know, automobiles is another one where we saw that happen, where essentially our systems recognize that what you specify as a city name is something that is not so relevant for the actual URLs. And usually we learn that kind of pattern when a site provides a lot of the same content with alternate names.”

How Can You Protect Your Site From This?

While Google’s John Mueller wasn’t able to provide a foolproof solution or prevention for this issue, he did offer some advice for sites that have been affected:

“So what I would try to do in a case like this is to see if you have this kind of situations where you have strong overlaps of content and to try to find ways to limit that as much as possible.

And that could be by using something like a rel canonical on the page and saying, well, this small city that is right outside the big city, I’ll set the canonical to the big city because it shows exactly the same content.

So that really every URL that we crawl on your website and index, we can see, well, this URL and its content are unique and it’s important for us to keep all of these URLs indexed.

Or we see clear information that this URL you know is supposed to be the same as this other one, you have maybe set up a redirect or you have a rel canonical set up there, and we can just focus on those main URLs and still understand that the city aspect there is critical for your individual pages.”
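Mueller’s rel canonical suggestion is straightforward to implement. As a hypothetical sketch, a page for a small suburb that shows the same content as a nearby big-city page could point its canonical at the big-city URL (the domain and paths below are placeholders):

```html
<!-- Hypothetical: placed in the <head> of example.com/locations/smallville, which
     shows the same content as the big-city page its canonical points to. -->
<link rel="canonical" href="https://www.example.com/locations/big-city">
```

Where the duplicate URL doesn’t need to stay live at all, the redirect Mueller also mentions is the stronger signal.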

It should be clarified that duplicate content or pages impacted by this problem will not hurt the overall SEO of your site. So, for example, having several pages tagged as being duplicate content won’t prevent your home page from appearing for relevant searches. 

Still, the issue has the potential to gradually decrease the efficiency of your SEO efforts, not to mention making it harder for people to find the valuable information you are providing. 

To see Mueller’s full explanation, watch the video below: