Tag Archive for: Danny Sullivan

A lead Google spokesperson gave a surprising response to claims that the search engine stole content from a publisher without providing any benefit to the publisher’s website. 

Google’s rich search results have been controversial since their launch, as some feel these results simply copy information from other websites instead of sending users to the sites where the content was originally posted. 

The search engine has largely ignored these criticisms by saying that rich results improve the search experience and include links to the original content. 

That’s what makes it so surprising that Google Search Liaison Danny Sullivan recently publicly responded to one publisher’s complaints directly.

The Original Complaint

In several recent tweets, a representative for travel brand Travel Lemming posted:

“Google is now stealing Travel Lemming’s own brand searches (even via site search).

They take our list — INCLUDING MY ORIGINAL PHOTOS 📸 — and present it in a rich result so people don’t click through.

I am literally IN that Red Rocks photo!…

They are doing this across all travel searches – unbranded and branded alike.

Example: “Mexico Travel Tips” – they have an AI answer & also a rich result that basically just re-creates an entire blog post, including our stolen photos.

Again, I am IN that Mexico packing photo!

Like how is it legal for Google to just essentially create entire blog posts from creators’ content and images?

I literally have a law degree from the top law school in the world, and even I can’t figure it out!

Fair use does NOT apply if you’re using the content to compete directly against the creator, which they clearly are.

I can’t sit outside a movie theatre, project the movie on a wall, earn money from it, and claim fair use.

I spent SO much time taking those photos in Denver.

It was 10+ full days worth of work for me and partner Clara, going around the city to photograph everything. $100s of money spent in attraction admission fees, gas, parking.

Now Google just gets to extract all that value?

How much does Google get to take before creators say “enough is enough”?

How hard does the water have to boil before the frog jumps?

The comments show it is a prisoner’s dilemma as long as Google has a monopoly on search …”

Google’s Response

Danny Sullivan, Google’s Search Liaison, provided a lengthy response that delves specifically into what is happening, why, and ways they are hoping to improve the situation. 

Not only does Sullivan give insight into the company’s perspective, he also shares his own opinions about the feature. Importantly, Sullivan doesn’t disregard Travel Lemming’s complaints and is sympathetic to how rich search results impact publishers:

“Hey Nate, this got flagged to my attention. I’ll pass along the feedback to the team. Pretty sure this isn’t a new feature. Elsewhere in the thread, you talk about it being an AI answer, and I’m pretty sure that’s not the case, either. It’s a way to refine an initial query and browse into more results.

With the example you point out, when you expand the listing, your image is there with a credit. If you click, a preview with a larger view comes up, and that lets people visit the site. Personally, I’m not a fan of the preview-to-click.

I think it should click directly to the site (feedback I’ve shared internally before, and I’ll do this again). But it’s making use of how Google Images operates, where there’s a larger preview that helps people decide if an image is relevant to their search query. Your site is also listed there, too. Click on that, people get to your site.

If you don’t want your images to appear in Google Search, this explains how to block them:

https://developers.google.com/search/docs/crawling-indexing/prevent-images-on-your-page

I suspect you’d prefer an option to not have them appear as thumbnails in particular features. We don’t have that type of granular control, but I’ll also pass the feedback on. 

I appreciate your thoughts and concerns. I do. The intention overall is to make search better, which includes ensuring people do indeed continue to the open web — because we know for us to thrive, the open web needs to thrive.

But I can also appreciate that this might not seem obvious from how some of the features display.

I’m going to be sharing these concerns with the search team, because they’re important.

You and other creators that are producing good content (and when you’re ranking in the top results, that’s us saying it’s good content) should feel we are supporting you.

We need to look at how what we say and how our features operate ensure you feel that way.

I’ll be including your response as part of this.”
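For publishers weighing the option Sullivan mentions, the help page he links to covers blocking Google’s image crawler with robots.txt. A minimal sketch, assuming the site’s photos live under a hypothetical /photos/ directory:

```
# robots.txt at the site root (the /photos/ path is a placeholder)
# Stop Googlebot-Image from crawling anything under /photos/,
# which removes those images from Google Images results.
User-agent: Googlebot-Image
Disallow: /photos/
```

Google also supports a noimageindex robots meta tag for keeping a single page’s images out of the index. Note, though, that this is all-or-nothing: as Sullivan says above, there is no granular control, so blocked images disappear from image results entirely rather than just from particular rich result features.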

I doubt Sullivan will change many minds about Google’s rich search results, but this rare interaction reveals how Google sees the situation as it tries to walk a tightrope between providing a seamless search experience and sustaining the sites it relies on.

Google’s Search Liaison, Danny Sullivan, raised some eyebrows over the weekend by saying that “major changes” are coming to Google’s search results. 

The statement came during a live talk, where Sullivan reportedly told the crowd to “buckle up” because major changes were on the way.

As the public voice for Google’s Search team, Sullivan is uniquely positioned to speak on what the search engine’s developers are working on behind the scenes. For businesses, this means that he is one of the only people who can give advance notice about upcoming shifts to search results that could impact your online visibility and sales. 

What Did Sullivan Say?

Since the talk wasn’t livestreamed or recorded, there’s been some discussion about exactly what Sullivan told the crowd, though posts on X agree on a few details. 

While attendees agree Sullivan specifically used the phrase “buckle up”, a few users provided longer versions of the quote that paint a slightly different picture. 

One person, Andy Simpson, says the entire quote was “There’s so much coming that I don’t want to say to buckle up because that makes you freak out because if you’re doing good stuff, it’s not going to be an issue for you.”

This is likely the case, as Sullivan has since clarified:

“I was talking about various things people have raised where they want to see our results improve, or where they think ‘sure, you fixed this but what about….’ And that these things all correspond to improvements we have in the works. That there’s so much coming that I don’t want to say buckle up, because those who are making good, people-first content should be fine. But that said, there’s a lot of improvements on the way.”

Either way, it is important for businesses to take note of these statements and watch their site’s search results performance for any signs of major shifts in the near future. 

A recent article from Gizmodo has lit up the world of SEO, drawing a rebuff from Google and sparking extensive conversation about when it’s right to delete old content from your website. 

The situation kicked off when Gizmodo published an article detailing how CNET had supposedly deleted thousands of pages of old content to “game Google Search.” 

What makes this so interesting is that deleting older content that is not performing well is a long-recognized part of search engine optimization called “content pruning.” By framing its article as “exposing” CNET for dirty tricks, Gizmodo sparked a discussion about when content pruning is effective and whether SEO is inherently negative for a site’s health.

What Happened

The trigger for all of this occurred when CNET appeared to redirect, repurpose, or fully remove old pages based on analytics data including pageviews, backlink profiles, and how long a page has gone without an update. 

An internal memo obtained by Gizmodo shows that CNET did this believing that deprecating and removing old content “sends a signal to Google that says CNET is fresh, relevant, and worthy of being placed higher than our competitors in search results.”
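Stripped of the framing, the audit the memo describes amounts to a filter over page-level analytics. Here is a minimal sketch in Python; the field names and thresholds are invented for illustration, since CNET’s exact criteria weren’t published:

```python
from datetime import datetime, timedelta

def pruning_candidates(pages, min_monthly_views=100, max_age_days=730):
    """Flag pages for human review (not automatic deletion) when they
    draw little traffic, have no backlinks, and are long unrefreshed."""
    cutoff = datetime.now() - timedelta(days=max_age_days)
    return [
        page for page in pages
        if page["monthly_views"] < min_monthly_views
        and page["backlinks"] == 0
        and page["last_updated"] < cutoff
    ]

# Hypothetical records: only the stale, unlinked, low-traffic page is flagged.
pages = [
    {"url": "/2013-gadget-review", "monthly_views": 8,
     "backlinks": 0, "last_updated": datetime(2014, 1, 6)},
    {"url": "/how-to-back-up-photos", "monthly_views": 4200,
     "backlinks": 63, "last_updated": datetime(2022, 5, 19)},
]
print([page["url"] for page in pruning_candidates(pages)])  # ['/2013-gadget-review']
```

The point, echoed by Sullivan below, is that age and low traffic make a page a candidate for review, not proof that it should be deleted.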

What’s The Problem?

First, simply deleting old content does not send a signal that your site is fresh or relevant. The only way to do this is by ensuring your content itself is fresh and relevant to your audience. 

That said, there can be benefits to removing old content if it is not actually relevant or high-quality. 

The biggest issue here seems to be that CNET treated old content as inherently bad, but there is no such “penalty” – no harm comes from leaving older content on your site if it may still be relevant to users.

As Google Search Liaison Danny Sullivan posted on X (formerly Twitter):

“Are you deleting old content from your site because you somehow believe Google doesn’t like ‘old’ content? That’s not a thing! Our guidance doesn’t encourage this. Old content can still be helpful, too.”

Which Is It?

The real takeaway from this is a reminder that Google isn’t as concerned with “freshness” as many may think. 

Yes, the search engine prefers sites that appear to be active and up-to-date, which includes posting relevant new content regularly. That said, leaving old content on your site won’t hurt you unless it’s low-quality. Removing low-quality or irrelevant content can improve your overall standing with search engines by showing that you recognize when content isn’t up to snuff. Just don’t delete content solely because it is ‘old’.

Just last week, Google Search Liaison, Danny Sullivan, once again took to Twitter to dispel a longstanding myth about word counts and search engine optimization (SEO). 

The message reads:

“Reminder. The best word count needed to succeed in Google Search is … not a thing! It doesn’t exist. Write as long or short as needed for people who read your content.”

Sullivan also linked to long-existing help pages and included a screencap of a statement from these pages which says:

“Are you writing to a particular word count because you’ve heard or read that Google has a preferred word count? (No, we don’t.)”

Of course, this is not a new message from Google. Even so, many of the most popular SEO tools and experts claim that anywhere from 300 to 1,500 words is ideal for ranking in Google search results. 

Incidentally, a day later Google’s John Mueller also responded to an SEO professional who asked whether there was a “correlation between word count and outranking competition.” In a short but simple reply, Mueller said, “Are you saying the top ranking pages should have the most words? That’s definitely not the case.”

Most likely, the myth of an ideal SEO word count will persist as long as search engine optimization exists in its current form. Still, it is always good to get a clear reminder from major figures at Google that content should be as long as necessary to share valuable information with your audience – whether you can do that in a couple of sentences or in exhaustive multi-thousand-word pieces. 

Google Discover will not show content or images that would normally be blocked by the search engine’s SafeSearch tools. 

Though not surprising, this is the closest we have come to seeing it confirmed by someone at Google. Google Search Liaison Danny Sullivan responded to a question on Twitter from SEO professional Lily Ray. In a recent tweet, Ray posed the question:

“Is the below article on SafeSearch filtering the best place to look for guidance on Google Discover? Seems that sites with *some* adult content may be excluded from Discover entirely; does this guidance apply?”

In his initial response, Sullivan wasn’t completely certain but stated: “It’s pretty likely SafeSearch applies to Discover, so yes. Will update later if that’s not the case.”

While Sullivan never came back to state this was not the case, he later explained that “our systems, including on Discover, generally don’t show content that might be borderline explicit or shocking etc. in situations where people wouldn’t expect it.”

Previously, other prominent figures at Google including Gary Illyes and John Mueller had indicated this may be the case, also suggesting adult language may limit the visibility of content in Discover. 

For most brands this won’t be an issue, but more adult-oriented brands may struggle to appear in the Discover feed, even with significant optimization.

Google continues to be relatively tight-lipped about its stance on AI-generated content, but a new statement from Google’s Danny Sullivan suggests the search engine may not be a fan.

Artificial Intelligence has become a hot-button issue over the past year, as AI tools have become more complex and widely available. In particular, the use of AI to generate everything from highly-detailed paintings to articles posted online has raised questions about the viability of AI content.

In the world of SEO, the biggest question about AI-generated content has been how Google would react to content written by AI systems.

Now, we have a bit of insight into the search engine’s stance on AI-created content – as well as any content created solely for the purpose of ranking in search results.

In a Twitter thread, Google Search Liaison, Danny Sullivan, addressed AI-generated content, saying:

“Content created primarily for search engines, however it is done, is against our guidance. If content is helpful & created for people first, that’s not an issue.”

“Our spam policies also address spammy automatically-generated content, where we will take action if content is ‘generated through automated processes without regard for quality or user experience.’”

Lastly, Sullivan says:

“For anyone who uses *any method* to generate a lot of content primarily for search rankings, our core systems look at many signals to reward content clearly demonstrating E-E-A-T (experience, expertise, authoritativeness, and trustworthiness).”

In other words, while it is possible to use AI to create your content and get Google’s stamp of approval, you are walking a very thin line. In most cases, having content produced by experts with experience providing useful information to those who want it will continue to be the best option for content marketing – no matter how smart the AI tool is.

Google is making some changes to its image search results pages by removing details about image sizes and replacing them with icons indicating what type of content the image is taken from.

For example, images pulled from recipes show an icon of a fork and knife, those from product pages show a price tag icon, and pictures pulled from videos include a “play” icon.

Google’s Search Liaison Danny Sullivan says the change is coming later this week for desktop search results and shared a few examples of what the icons look like in action.

By mousing over the icons, users can get additional details, including the length of a video.

Where To Find Image Size Details

To make room for these new icons, Google is removing the traditional image dimension information provided in the search results.

However, the information is still available to users after clicking on a specific thumbnail and mousing over the larger image preview.

Sullivan also shared an example of this.

Licensing Icons In Beta

Along with the announcement, Sullivan provided an update on a test to include licensing information alongside photos.

Currently, the company is beta testing the ability to pull licensing information from structured data on a website, though it is unclear if or when this feature will be widely available. Interested image owners can find out more about how to mark up their images in Google’s guide.
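For those curious what that markup involves, the beta pulls license details from schema.org fields such as license and acquireLicensePage on an ImageObject. A minimal, hypothetical sketch as JSON-LD – every URL below is a placeholder, not a real property of any site:

```html
<!-- Hypothetical example: all URLs are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ImageObject",
  "contentUrl": "https://example.com/photos/red-rocks-1200.jpg",
  "license": "https://example.com/image-license-terms",
  "acquireLicensePage": "https://example.com/licensing-options"
}
</script>
```

The license URL points to the terms governing the image’s use, while acquireLicensePage points to where a visitor could license it.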

Last week, Google confirmed they would be pulling all authorship information from their search results pages, but confusion between Google Authorship and Author Rank has been causing some chaos in the SEO world.

Before you start burning bridges that feed into Author Rank and can legitimately help your site, take the time to check out Danny Sullivan’s explanation of the situation. It helps clear up how Authorship can be dead while Author Rank remains alive and as important to search as ever.

Every month, comScore releases a “U.S. Search Engine Rankings” report illustrating the market shares of the most commonly used search engines. From month to month the results have stayed largely the same for over a year, with Google taking in almost exactly two-thirds of the market and the other search engines like Bing and Yahoo slowly growing and shrinking by minuscule percentages.

ComScore’s report is widely trusted by most of the online marketing community, but analysts from Conductor recently attempted to challenge comScore’s findings with their own report claiming Google actually rakes in a significantly larger percentage of search traffic. They even went as far as to title their report “Why You Shouldn’t Trust comScore’s Numbers for Search Engine Market Share.”


For such an obvious attack on another analytics firm, you would assume Conductor was publishing new information or at least comparing the same factors. As Danny Sullivan of Search Engine Land shows in his article reviewing the study, however, Conductor’s findings shouldn’t be news to anyone paying attention, and they don’t disprove comScore’s numbers.

The issue is that, when people hear Google controls two-thirds of the search market, many publishers assume they should see close to the same proportion of their traffic coming from the search engine. Instead, most publishers see significantly more traffic from Google than its market share seemingly indicates. But market share isn’t a measurement of the traffic sites receive.

The monthly report from comScore reflects the number of actual searches conducted from the major search engines. Most importantly, their report isn’t affected by where the user goes after clicking on a search listing. Sullivan refers to this type of measurement as “before-the-click” behavior. Every search gets counted equally, no matter what the destination is.

Conductor’s analysis instead focuses on “post-click” behavior, or the traffic publishers receive from search engines. In their report, the information that matters most is the post-click activity. If someone does a search and clicks on a link that leads them back into the search engine, it isn’t measured in Conductor’s report.

The discrepancy between these two types of reports isn’t anything new. In fact, Sullivan cites 2006 as the last time it received significant attention, when Rich Skrenta wrote that Google’s “true market share” was 70% while most measurement services were estimating it at 40%. Most entertainingly, Sullivan’s response from back then still perfectly explains why such a gap forms; so much changes in search on a daily basis that it is always noteworthy when something remains admirably accurate after eight years. As Danny Sullivan wrote at the time:

“But a search for something on Yahoo Sports? That might be counted as a “search” and it is – but it’s not the type of search that would register with site-based metrics. The searcher might stay entirely inside Yahoo.”

The “gap” here is the difference between an engine’s before-the-click share of searches and the post-click share of traffic it sends to publishers. Search engines with the largest gaps favor their own services more than others, which suggests that Bing’s 13% gap indicates it directs searchers to its own services and platforms more than any other search engine. Surprisingly, Google appears to favor itself the least, with a -18% gap.
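To make the arithmetic concrete, here is a minimal sketch in Python; the traffic-share inputs are illustrative values consistent with the gaps quoted above, not figures taken from either report:

```python
def share_gap(search_share, traffic_share):
    """Gap in percentage points: an engine's before-the-click share of
    searches minus the post-click share of traffic it sends publishers."""
    return search_share - traffic_share

# Illustrative figures only: comScore put Google at roughly two-thirds of
# searches, and a -18 point gap implies publishers saw about 85% of their
# search traffic arrive from Google. The Bing inputs are likewise assumed.
print(share_gap(67, 85))  # -18: sends out more traffic than its search share
print(share_gap(18, 5))   #  13: keeps more searchers on its own properties
```

A positive gap means searchers tend to stay within the engine’s own properties; a negative gap means the engine delivers more traffic to publishers than its raw search share would suggest.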

Of course, there is always the possibility that this gap could be created or exacerbated by factors that weren’t in play back in 2006. When Sullivan asked comScore for its opinion on the difference between its reports and Conductor’s recent study, he was told mobile search could also be an influence: Google has a higher share of mobile search than of desktop search, and comScore’s reports only include data from desktop users.

Both reports serve their own purposes, but both also highlight the same issue: Google has a huge hold on search traffic that should be recognized and planned for. Those who buy into Conductor’s study, though, may be tempted to ignore the other search engines entirely. To each their own, but my opinion still favors an approach that puts the most weight on Google without cutting the other search engines out too much.

We try to keep our readers and clients updated with all of Google’s biggest news, whether it be redesigns, guideline changes, or newsworthy penalties. It makes sense, as Google currently receives over half of all searches made every day.

But, even those of us who keep a careful eye on the best guidelines and policy trends of the biggest search engine can end up outright confused by Google occasionally. A story reported by Danny Sullivan yesterday happens to be one of those situations.

Google has been outspoken against guest blogging or guest posts being used “for SEO purposes”, and they have even warned that sites using these questionable guest posts could be subject to penalties. However, the latest story claims that Google has penalized a moderately respected website for a single guest post. Most interestingly, the post was published well before the guidelines were put into place and appears relevant to the site it was posted on.

The penalty was placed against DocSheldon.com, which is run by Doc Sheldon, a long-time SEO professional. Recently, Sheldon was notified that a penalty was placed against his entire site. The penalty report informed Sheldon that Google determined there were “unnatural links” from his site.

So far, this is the typical penalty put against those who are attempting to run link schemes of some form. But, obviously someone who has been around as long as Sheldon knows better than that. So what were the “unnatural links”?

It took an open letter from Doc Sheldon to Google, which he then tweeted to Matt Cutts, one of Google’s most distinguished engineers, to get some answers.

Cutts mentions one blog post published to Sheldon’s site, which appears to have been written in March 2013.

The post is exactly what the title suggests it would be (“Best Practices for Hispanic Social Networking”), but it contains two links at the end, within the author’s bio. One of the links takes you to the author’s LinkedIn page. The other, however, claims to take people to a “reliable source for Hispanic data”, which leads to a page that appears to be closer to a lead-generation pitch about big data.

Source: Search Engine Land

Now, there are a few issues with the link. The page it leads to is suspect, and some would say the anchor text “Hispanic data” is potentially too keyword-rich. But Cutts seems to imply that the content of the blog post was as much an issue as the links. As Sullivan puts it, “Apparently, he fired up some tool at Google to take a close look at Sheldon’s site, found the page relating to the penalty and felt that a guest post on Hispanic social networking wasn’t appropriate for a blog post about SEO copywriting.”

That would be a fair criticism, but if you take a closer look at the top of Sheldon’s site, he doesn’t claim the site to be limited to SEO copywriting. In fact, the heading outright states that the site relates to “content strategy, SEO copywriting, tools, tips, & tutorials”. You may take note that social practices for any demographic could certainly be relevant to the topic of content strategy.

So, as the story stands, Google has levied a large penalty against an entire site over a single blog post with one questionable link, all because it decided the post wasn’t on-topic. Does that mean Google is now the search police, judge, and jury? Sadly, it appears so for the moment. Little has changed since the story broke yesterday: DocSheldon.com is still dealing with the penalty, and Google hasn’t backed down one bit.

It goes without saying that these events have sparked a large amount of debate in the SEO community, especially following the widely followed penalty placed against the guest blog network MyBlogGuest. The wide majority agree this penalty seems questionable, but for the moment it appears best to stay under the radar by following Google’s policies to the letter. Hopefully Google will become a bit more consistent with its penalties in the meantime.