Google Discover will not show content or images that would normally be blocked by the search engine’s SafeSearch tools.
Though not surprising, this is the closest we have come to seeing this confirmed by someone at Google. In a recent tweet, SEO professional Lily Ray posed a question to Google Search Liaison Danny Sullivan:
“Is the below article on SafeSearch filtering the best place to look for guidance on Google Discover? Seems that sites with *some* adult content may be excluded from Discover entirely; does this guidance apply?”
In his initial response, Sullivan wasn’t completely certain but stated: “It’s pretty likely SafeSearch applies to Discover, so yes. Will update later if that’s not the case.”
While Sullivan never came back to state this was not the case, he later explained that “our systems, including on Discover, generally don’t show content that might be borderline explicit or shocking etc. in situations where people wouldn’t expect it.”
Previously, other prominent figures at Google including Gary Illyes and John Mueller had indicated this may be the case, also suggesting adult language may limit the visibility of content in Discover.
For most brands this won’t be an issue, but more adult-oriented brands may struggle to appear in the Discover feed, even with significant optimization.
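For publishers who want to stay on the right side of this filtering proactively, Google’s SafeSearch documentation describes a page-level meta tag for labeling explicit content yourself. A minimal sketch of that markup looks like:

```html
<!-- Page-level signal from Google's SafeSearch documentation:
     labeling a page as explicit so SafeSearch can filter it
     rather than guessing (and possibly misjudging the whole site). -->
<head>
  <meta name="rating" content="adult">
</head>
```

Labeling only the genuinely explicit pages this way may help keep the rest of a site eligible for surfaces like Discover.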
Content Flagged By Google SafeSearch Won’t Appear In Discover — Taylor Ball, 2023-05-04
Google continues to be relatively tight-lipped about its stance on AI-generated content, but a new statement from Google’s Danny Sullivan suggests the search engine may not be a fan.
Artificial Intelligence has become a hot-button issue over the past year, as AI tools have become more complex and widely available. In particular, the use of AI to generate everything from highly-detailed paintings to articles posted online has raised questions about the viability of AI content.
In the world of SEO, the biggest question about AI-generated content has been how Google would react to content written by AI systems.
Now, we have a bit of insight into the search engine’s stance on AI-created content – as well as on any content created solely for the purpose of ranking in search results.
In a Twitter thread, Google Search Liaison, Danny Sullivan, addressed AI-generated content, saying:
“Content created primarily for search engines, however it is done, is against our guidance. If content is helpful & created for people first, that’s not an issue.”
“Our spam policies also address spammy automatically-generated content, where we will take action if content is “generated through automated processes without regard for quality or user experience.”
Lastly, Sullivan says:
“For anyone who uses *any method* to generate a lot of content primarily for search rankings, our core systems look at many signals to reward content clearly demonstrating E-E-A-T (experience, expertise, authoritativeness, and trustworthiness).”
In other words, while it is possible to use AI to create your content and get Google’s stamp of approval, you are walking a very thin line. In most cases, having content produced by experts with experience providing useful information to those who want it will continue to be the best option for content marketing – no matter how smart the AI tool is.
Google’s Danny Sullivan Addresses Using AI-Generated Content To Rank — Taylor Ball, 2023-01-12
Google is making some changes to its image search results pages by removing details about image sizes and replacing them with icons indicating what type of content the image is taken from.
For example, images pulled from recipes show an icon of a fork and knife, those from product pages show a price tag icon, and pictures pulled from videos include a “play” icon.
Google’s Search Liaison Danny Sullivan says the change is coming later this week for desktop search results and shared a few examples of what the icons look like in action:

“Later this week, Google Images will show new icons on desktop that provide useful information to indicate if images lead to pages with products for sale, recipes or video content. Mousing-over icons expands them to show the icons with text or length of video.”
As you can see, by mousing over the icons users can get additional details including the length of a video.
Where To Find Image Size Details
To make room for these new icons, Google is removing the traditional image dimension information provided in the search results.
However, the information is still available to users after clicking on a specific thumbnail and mousing over the larger image preview.
Sullivan also shared an example of this:
Licensing Icons In Beta
Along with the announcement, Sullivan provided an update on a test to include licensing information alongside photos.
Currently, the company is beta testing the ability to pull licensing information from structured data on a website, though it is unclear if or when this feature will be widely available. Interested image owners can find out more about how to mark up your images in Google’s guide.
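Google’s image license feature builds on schema.org’s ImageObject type, which supports `license` and `acquireLicensePage` properties. A minimal hedged sketch of that markup, with placeholder URLs standing in for a real site’s pages, might look like:

```html
<!-- Sketch of image license metadata using schema.org's ImageObject type.
     All URLs below are placeholders; consult Google's guide for the
     current required and recommended properties. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ImageObject",
  "contentUrl": "https://example.com/photos/photo.jpg",
  "license": "https://example.com/image-license",
  "acquireLicensePage": "https://example.com/how-to-license-my-images"
}
</script>
```

The `license` property points to the terms themselves, while `acquireLicensePage` points to where a user can actually obtain rights to use the image.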
Google Introduces Icons For Products, Recipes, and Videos To Image Search — Taylor Ball, 2020-02-27
Last week, Google confirmed they would be pulling all authorship information from their search results pages, but confusion between Google Authorship and Author Rank has been causing some chaos in the SEO world.
Before you start burning bridges that feed into Author Rank and can legitimately help your site, take the time to check out the explanation on the situation from Danny Sullivan. The explanation helps clear up how authorship can die and Author Rank is still alive and as important to search as ever.
Google Authorship Is Still Dead, But Author Rank Is Alive and Well — Taylor Ball, 2014-09-03
Every month, comScore releases a “U.S. Search Engine Rankings” report illustrating the market shares of the most commonly used search engines. From month to month the results have stayed largely the same for over a year, with Google taking in almost exactly two-thirds of the market and the other search engines like Bing and Yahoo slowly growing and shrinking by minuscule percentages.
Recently, analytics firm Conductor released a study suggesting those market share numbers dramatically understate how much traffic Google actually sends to websites. For such an obvious attack on another analytics firm, you would assume Conductor was publishing new information or at least comparing the same factors. As Danny Sullivan of Search Engine Land shows in his article reviewing the study, however, Conductor’s findings shouldn’t be news to anyone paying attention, and they don’t disprove comScore’s findings.
The issue is that, when people hear Google controls two-thirds of the search market, many publishers assume they should see close to the same proportion of traffic coming from the search engine. Instead, most publishers see significantly more traffic from Google than its market share seemingly indicates. But market share isn’t a measurement of the traffic sites receive.
The monthly report from comScore reflects the number of actual searches conducted from the major search engines. Most importantly, their report isn’t affected by where the user goes after clicking on a search listing. Sullivan refers to this type of measurement as “before-the-click” behavior. Every search gets counted equally, no matter what the destination is.
Conductor’s analysis instead focuses on “post-click” behavior, or the traffic publishers receive from search engines. In their report, the information that matters most is the post-click activity. If someone does a search and clicks on a link that leads them back into the search engine, it isn’t measured in Conductor’s report.
The discrepancy between these two types of reports isn’t anything new. In fact, Sullivan cites 2006 as the last time it received significant attention, when Rich Skrenta wrote that Google’s “true market share” was 70% while most measurement services were estimating it at 40%. Most entertainingly, Sullivan’s response then still perfectly explains why a gap might form. So much changes in search on a daily basis that it is always noteworthy when something manages to remain admirably accurate after eight years. As Danny Sullivan wrote at the time:
“But a search for something on Yahoo Sports? That might be counted as a “search” and it is – but it’s not the type of search that would register with site-based metrics. The searcher might stay entirely inside Yahoo.”
Search engines with the largest gaps favor their own services more than others do, which would suggest that Bing’s 13% gap indicates it directs searchers to its own services and platforms more than any other search engine. Surprisingly, Google appears to favor itself the least, with a -18% gap.
Of course, there is always the possibility that this gap could be created or exacerbated by other factors that may not have been in play at the time. When Sullivan asked comScore for its opinion on the difference between its reports and Conductor’s recent study, he was told mobile search could also be an influence. Google has a higher share of mobile search than of desktop search, and comScore’s reports only include data from desktop users.
Both reports serve their own purposes, but both also highlight the same issue: Google has a huge hold on search traffic that should be recognized and planned for. Those who buy into Conductor’s study may be tempted to ignore the other search engines entirely. To each their own, but my opinion still favors an approach that puts the most weight on Google without cutting the other search engines out entirely.
Just How Much of the Market Does Google Actually Control? Making Sense of Conflicting Reports — Taylor Ball, 2014-05-29
We try to keep our readers and clients updated with all of Google’s biggest news, whether it be redesigns, guideline changes, or newsworthy penalties. It makes sense, as Google currently receives over half of all searches made every day.
But, even those of us who keep a careful eye on the best guidelines and policy trends of the biggest search engine can end up outright confused by Google occasionally. A story reported by Danny Sullivan yesterday happens to be one of those situations.
Google has been outspoken against guest blogging or guest posts being used “for SEO purposes”, and they have even warned that sites using these questionable guest posts could be subject to penalties. However, the latest story claims that Google has penalized a moderately respected website for a single guest post. Most interestingly, the post was published well before the guidelines were put into place and seems relevant to the site it was posted on.
The penalty was placed against DocSheldon.com, which is run by Doc Sheldon, a long-time SEO professional. Recently, Sheldon was notified that a penalty was placed against his entire site. The penalty report informed Sheldon that Google determined there were “unnatural links” from his site.
So far, this is the typical penalty put against those who are attempting to run link schemes of some form. But, obviously someone who has been around as long as Sheldon knows better than that. So what were the “unnatural links”?
It took an open letter from Doc Sheldon to Google, which he then tweeted to Matt Cutts, one of Google’s most distinguished engineers, to get some answers.
@DocSheldon what "Best Practices For Hispanic Social Networking" has to do with an SEO copywriting blog? Manual webspam notice was on point.
Cutts mentions one blog post published to Sheldon’s site, which appears to have been written in March 2013.
The post is exactly what the title suggests it would be (“Best Practices for Hispanic Social Networking”), but it contains two links at the end, within the author’s bio. One of the links takes you to the author’s LinkedIn page. The other, however, claims to take people to a “reliable source for Hispanic data”, which leads to a page that appears to be closer to a lead-generation pitch about big data.
Source: Search Engine Land
Now, there are a few issues with the link. The page it leads to is suspect, and some would say that the words “Hispanic data” in the anchor text could be potentially too keyword rich. But, Cutts seems to imply that the content of the blog post was as much an issue as the links. As Sullivan puts it, “Apparently, he fired up some tool at Google to take a close look at Sheldon’s site, found the page relating to the penalty and felt that a guest post on Hispanic social networking wasn’t appropriate for a blog post about SEO copywriting.”
That would be a fair criticism, but if you take a closer look at the top of Sheldon’s site, he doesn’t claim the site to be limited to SEO copywriting. In fact, the heading outright states that the site relates to “content strategy, SEO copywriting, tools, tips, & tutorials”. You may take note that social practices for any demographic could certainly be relevant to the topic of content strategy.
So, as the story stands, Google has levied a large penalty against an entire site for a single blog post with one questionable link, all because they decided it wasn’t on-topic. Does that mean Google is now the search police, judge, and jury? Sadly, it appears so for the moment. Little appears to have changed since the story broke yesterday. DocSheldon.com is still dealing with the penalties, and Google hasn’t backed down one bit since the penalty was sent.
It goes without saying, the events have sparked a large amount of debate in the SEO community, especially following the widely followed penalty placed against the guest blog network MyBlogGuest. The wide majority agree this penalty seems questionable, but for the moment it appears it is best to stay under the radar by following Google’s policies to the letter. Hopefully they will become a bit more consistent with their penalties in the meantime.
Yesterday we reported on the mass hijacking of thousands of Google+ Local listings. In short, over a short period of time, a huge number of hotels had their business listings for Google Maps and Search hijacked. The story was broken open by Danny Sullivan of Search Engine Land, who attempted to track down the source of the spam attack but found no concrete evidence to suggest who the culprit actually is.
While the issue could have a big effect on many businesses in the hotel sector, it is more notable for showing that other attacks could happen in the future. Even worse, no one outside of Google has been able to explain how this could occur, especially with the number of big hotel chains affected. The hotels hit with the spam weren’t mom-and-pop bed and breakfast places. Most of the listings were for huge hotel chains, such as the Marriott hotel shown in the example of a hijacked link below.
If Google does know how this was able to happen, they aren’t telling. In fact, Google has been terribly quiet on the issue. They’ve yet to issue an official public statement, aside from confirming to Sullivan that they were aware of the problem and working to resolve it.
The only direct word from Google on the hijackings is a simple response in an obscure Google Business Help thread from Google’s Community Manager, Jade Wang. If it weren’t for Barry Schwartz’s watchful eye, it is possible the statement would never have been widely seen. Wang said:
We’ve identified a spam issue in Places for Business that is impacting a limited number of business listings in the hotel vertical. The issue is limited to changing the URLs for the business. The team is working to fix the problem as soon as possible and prevent it from happening again. We apologize for any inconvenience this may have caused.
Yesterday, thousands of hotels with Google+ Local listings had their pages manipulated to replace their links to official sites with links leading to third-party booking services. Google+ Local listings are what Google uses to provide local results in Google Maps and Google Search.
It currently appears to be isolated entirely to hotels, and Google has already said they are aware of and fixing the problem, but Danny Sullivan’s research into who is responsible for the hijacking has yet to turn up anything concrete. What we do know is that thousands of listings were changed to point to either RoomsToBook.Info, RoomsToBook.net, or HotelsWhiz.com.
Source: Search Engine Land
The problem is, we can’t be sure any of these companies are actually directly responsible. Only one person responded to Sullivan’s inquiries. Karim Miwani, listed on LinkedIn as the director of HotelsWhiz.com, replied saying (sic):
We have recently seen this issue and have reported to Google webmaster already. If you have seen any links please forward it to me and I will submit the request.
Our team is already in the process of blocking list of certain domains and IP addresses from back-linking us.
Thank you for pointing this out if you have any more external domains acting in aboce manner please report it to us on
You can get all the details on the hijacking from Danny Sullivan’s investigative report into the issue, but this event has a broader relevance outside of the hotel industry. The mass hijacking of Google’s local listings suggests there is a security flaw in the Google+ Local listings which needs to be addressed and resolved. It may explain why Google has largely remained mum on the subject aside from confirming that it occurred.
You most likely have nothing to worry about with your own local business’s listings, so long as you don’t work in the hotel industry. However, it could have implications for the future of Google+ Local listings. Either the security flaw that allowed this to happen will be fixed, or issues like these could affect other industries on a larger scale.
Considering how important these listings are to Google Maps and Search, a larger attack could be a serious problem for Google.
If you have been reading up on SEO, blogging, or content marketing, chances are you’ve been told to “nofollow” certain links. If you’re like most, you probably didn’t quite understand what that means, and you may or may not have followed the advice blindly.
But, even if you’ve been using the nofollow tag for a while, if you don’t understand what it is or how it works you may be hurting yourself as much as you’re helping.
The nofollow tag is how publishers can tell search engines to ignore certain links to other pages. Normally, these links count as votes in favor of the linked content, but in some circumstances they can make search engines think you are abusing optimization or blatantly breaking their guidelines. Nofollowing the right links prevents search engines from thinking you are trying to sell your influence or are involved in link schemes.
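In practice, “nofollowing” a link just means adding a `rel="nofollow"` attribute to the anchor tag. A minimal sketch, with a placeholder URL:

```html
<!-- A normal link passes ranking credit to the destination page: -->
<a href="https://example.com/sponsor">Followed link (counts as a vote)</a>

<!-- Adding rel="nofollow" tells search engines not to count this
     link as an endorsement of the destination: -->
<a href="https://example.com/sponsor" rel="nofollow">Nofollowed link</a>
```

The page itself renders identically either way; the attribute only changes how search engine crawlers treat the link.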
To help webmasters and content creators understand exactly when to nofollow, and how it affects their online presence, the team from Search Engine Land put together an infographic explaining when and how to use the tag. They also created a comprehensive guide to the tag for those who prefer long walls of text to nice and easy infographics.
What Is a Nofollow Tag and How To Use It [Infographic] — TMO, 2013-10-22
Google made waves last week when they announced the expansion of how “Shared Endorsements” are used in ads, as well as the change to their terms of service to reflect this. The funny thing is, most people don’t understand what is actually changing.
The majority were simply confused when they heard that Google was implementing the use of social information into ads, because that has been going on for about two years now. But, as Danny Sullivan explains, the devil is in the details.
Throughout 2011, Google made changes which allowed advertisers to begin integrating images of people who liked their pages on Google+ into text and display ads. All that really showed was a small profile picture, and the phrase “+1’d this page.”
Starting on November 11, that won’t quite be the case. Ads will show more than just the people who +1’d a page. For example, if you comment, leave a review, or even follow a particular brand, those types of actions can be shown in ads on Google. A mockup of how it will appear is below.
These changes won’t take place until November, but don’t expect a prompt roll-out. It is possible you may start seeing the changes starting the 11th, but more likely it will gradually appear over the span of a few days or even a couple of weeks.
Not much else is known about how advertisers will be able to create these types of ads yet. Most likely, Google would not have announced the update this early, except they had to get the terms of service updated before they could even begin to implement this feature.
If you don’t want to appear in any of these types of ads, you can go to this page and click the tickbox at the bottom to opt out for all ads in the future.