Guest Posts, or User Submitted Posts, are content written by another author not working for UltimateSEO. Content submitted to our site may be syndicated on as many as 300 other sites that we maintain. That can potentially deliver hundreds of backlinks from multiple domains. We offer this feature free of charge at this time, but may charge in the future.
Guest posting is a win-win scenario for us, for you, and for your site. You can write an original article and we’ll post it if it is about SEO or SEM in general; specific niche SEO topics are also welcome. Writers can include relevant backlinks in their posts, though we recommend no more than one link per 250 words. If there is an issue we’ll let you know. We also ask that your post include an image, ideally one image per 500 words. So in a 2,000-word SEO post we’d like to see no more than 8 links and about 4 images.
Submit For Review
Ultimate SEO reserves the right to remove or edit posts on our site. We will also provide credit to content creators, who retain ownership of their content but license its display to us by using this form.
Submitted Guest Posts
Much of the info in this post is from an article on Engadget. I became aware of the situation while working to build an IFTTT applet that would change my office lights to red if a website went down and I got an email about it from Uptime Robot. That applet can probably still be made, just no longer using Gmail as the trigger.
Google’s push to tighten third-party API access is already going to cost the world Google+, but a change that more of you might notice is coming to IFTTT. The service sent out emails alerting users that their “recipe” scripts involving Gmail triggers and an action that could create a draft will go away as of March 31st. According to Google, the shift is a result of the Project Strobe sweep it announced last October.
IFTTT said it worked with Google to keep the integrations that support the “Send an email” and “Send yourself an email” actions, but the coming API lockdown would’ve required too much work to adapt its other Gmail services. Otherwise, integrations with Google will stay the same, but anyone relying heavily on automated scripts may want to double-check things before they get a surprise in a few days.
First and foremost, the most important aspect of your Private Blog Network (PBN) is randomness. Consider what pattern or footprint your PBN might have and avoid that commonality.
Good PBNs Are Random, Start With Different Name Registrars
First off, you need private domain registration; if not private, then you’ll need people and addresses from all over. If you always use GoDaddy, you’re going to have to try out others to avoid a pattern. Incidentally, if you always use GoDaddy you’re getting ripped off, as they charge you for privacy and many others don’t. Some popular name registrars are 1and1.com, namesilo.com, namecheap.com, and cosmotown.com. Each of these can save you a considerable amount over GoDaddy, considering they offer free private registration, and using more than one breaks a pattern.
Each time you add a new site to your PBN, approach it from the beginning as if you’re playing a character in a story who has never made a website before. What I mean is: if you know you have a site on Host A and you like that host, you’re making decisions based on previous sites and are more likely to create a pattern. Forget Host A. How would you find a host for the first time? Google popular web hosts and pick a cheap new partner.
One thing that’s really beneficial about building PBNs, and more helpful to you in the long run, is the forced exploration. After you’ve built ten sites on ten hosts using ten registrars and ten WordPress themes, you’ll be able to write three top-ten lists and rank the best of the 720 combinations that were available to you. It’s a lot of practice, and as you avoid patterns and repetition you’ll find yourself stepping out of your norm.
Vary Your Web Hosts
The speed of a web host is normally important, but not necessarily when you’re building a PBN. While you want your primary or money site to load in under 3 seconds, it’s perfectly fine if a PBN site loads in 7 seconds, and that opens the door to all manner of generic no-name web hosts. Your primary goal with multiple web hosts is to utilize different IP addresses.
The only two big issues with this model …
Organization Of PBN Resources
What site is down? Oh… well, which domain registrar did I use? Am I using their nameservers or someone else’s? Where did I point that to be hosted? Sure, these questions aren’t that annoying to answer with a 10-site network, but try answering them when you’ve built and scaled up to 200 sites using 7 registrars, 20 name servers, and 150 different IPs. It becomes unmanageable as you find yourself searching for your sites more than you are building new ones. And why does this matter? Maintaining a site is essential: as updates roll out to WordPress, plugins get updated and hackers exploit new vulnerabilities. If you log into every site you own and spend 5 minutes on each, your 200-domain network will take over 16 hours, or about two working days a week. And consider that at only 5 minutes per site, you likely didn’t fix any issues and took no breaks! It’s time to consider an apprentice, or spreadsheets that fully document every aspect of your network, or both.
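The time math above is easy to verify; the 5-minute visit and the 200-site count are the article’s own working assumptions:

```python
# Rough maintenance-time estimate for a PBN (numbers from the article).
SITES = 200
MINUTES_PER_SITE = 5          # a bare-minimum login-and-check visit
WORKDAY_MINUTES = 8 * 60      # one 8-hour working day

total_minutes = SITES * MINUTES_PER_SITE
hours = total_minutes / 60
workdays = total_minutes / WORKDAY_MINUTES

print(f"{total_minutes} minutes = {hours:.1f} hours = {workdays:.1f} workdays")
# 1000 minutes = 16.7 hours = 2.1 workdays
```

Scale the site count up or the per-site minutes down and you can see how quickly the weekly burden grows past what one person can handle.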
Somewhere around 100 domains, I figured out I needed to approach this like an enterprise would and set up actual uptime monitoring, allowing me to see the state of the network easily. UptimeRobot allows you to set up 50 monitors on a free account.
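As an illustration, monitor data from UptimeRobot’s `getMonitors` endpoint can be reduced to a quick network health summary. The field names and status codes below follow UptimeRobot’s v2 API, but treat the exact request and response shape as an assumption to verify against their documentation:

```python
import json
from urllib import request, parse

def summarize(monitors):
    """Count up/down monitors. Assumption per UptimeRobot's v2 API docs:
    status 2 = up, 8 or 9 = down; anything else is paused/unchecked."""
    up = sum(1 for m in monitors if m.get("status") == 2)
    down = sum(1 for m in monitors if m.get("status") in (8, 9))
    return {"up": up, "down": down, "other": len(monitors) - up - down}

def fetch_monitors(api_key):
    # Sketch of the live call; requires a real UptimeRobot API key.
    data = parse.urlencode({"api_key": api_key, "format": "json"}).encode()
    resp = request.urlopen("https://api.uptimerobot.com/v2/getMonitors", data)
    return json.load(resp).get("monitors", [])

# Example with canned data standing in for a live response:
sample = [{"friendly_name": "pbn-site-1", "status": 2},
          {"friendly_name": "pbn-site-2", "status": 9}]
print(summarize(sample))  # {'up': 1, 'down': 1, 'other': 0}
```

A summary like this, run on a schedule, turns “which of my 200 sites is down?” from a scavenger hunt into a one-line answer.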
In the real world, 94% uptime is horrible. Consider that in the last 30 days I recorded 104,765 minutes of downtime across this sample of sites. I had issues with a server getting attacked by someone using 1,700 servers in a DDoS attack. Why? Anyone’s guess. Usually it’s a game to them, and they aren’t paying for those 1,700 servers; they’re other people’s hacked resources being used to grow the attacker’s network.
You may be interested in MainWP or InfiniteWP, and GoDaddy provides GoDaddy Pro. Be mindful that these only work when they work, and will they give away a signature pattern? They can likely create an easier management solution, but easier is dangerous.
Costs Balloon And Randomness Prevents Savings
As you scale up from 10 to 20 to 50 sites, you’re going to wake up one day and realize you’re spending hundreds of dollars a month on infrastructure, and all of your time will now be consumed with maintaining your network. Adding someone to help you is going to increase costs and take your time to train them to be effective at maintaining the network. Be careful who you bring in to help: friends are obvious choices, but when they get upset about something unrelated to the network they could leave you high and dry. Worse yet, they are the most likely to teach you a lesson by bailing on you for a couple of weeks. Trust the people who are in it for the money, and pay them more than they can get at a retail job to build loyalty to your mission. They need not be technical people, but they need to understand that if a site is down, Google can’t index it and that backlink is missing now. They need to be able to follow a logical progression and understand the parts that are in play to help you maintain the sites.
The obvious answer to addressing costs is to bundle services and make sure you’re utilizing resources in the most effective manner, but that is accomplished by making patterns. You can’t find cost savings without creating the very patterns that give your sites away.
Cloudflare Allows Consolidation And The Pattern Is Indistinguishable
Cloudflare offers the ability to hide among the masses. Who is Cloudflare? They stand in front of your server and take the brunt of the internet’s crap. Upwork.com, Medium.com, Themeforest.net, and Chaturbate.com are among the names using Cloudflare’s services. Some estimates suggest that Cloudflare serves about 8% of the entire internet. That’s huge! At one point they found themselves protecting the Israeli government’s network as well as the PLO’s.
Using Cloudflare is hiding in plain sight, and it’s free. I recommend it, but in a mixed capacity: keep some sites outside of their network just to avoid any one bottleneck. It would seem odd if 100% of the sites linking to a domain were using Cloudflare. Remember, they are 8% of the internet; while the largest chunk, they aren’t the whole internet.
This article has focused mainly on the external and infrastructure concerns of building a PBN. This is really a third of the topic; in the coming weeks I’ll publish two more posts that address on-site content issues of building a PBN and site design considerations for a network of sites.
This post was originally published on the STAT blog.
Featured snippets, a vehicle for voice search and the answers to our most pressing questions, have doubled on the SERPs — but not in the way we usually mean. This time, instead of appearing on two times the number of SERPs, two snippets are appearing on the same SERP. Hoo!
In all our years of obsessively stalking snippets, this is one of the first documented cases of them doing something a little different. And we are here for it.
While it’s still early days for the double-snippet SERP, we’re giving you everything we’ve got so far. And the bottom line is this: double the snippets mean double the opportunity.
Google’s case for double-snippet SERPs
The first time we heard mention of more than one snippet per SERP was at the end of January in Google’s “reintroduction” to featured snippets.
Since the feature hadn’t launched yet, details were a little sparse. We learned that they’re “to help people better locate information” and “may also eventually help in cases where you can get contradictory information when asking about the same thing but in different ways.”
Thankfully, we only had to wait a month before Google released them into the wild and gave us a little more insight into their purpose.
Calling them “multifaceted” featured snippets (a definition we’re not entirely sure we’re down with), Google explained that they’re currently serving “‘multi-intent’ queries, which are queries that have several potential intentions or purposes associated,” and will eventually expand to queries that need more than one piece of information to answer.
With that knowledge in our back pocket, let’s get to the good stuff.
The double snippet rollout is starting off small
Since the US-en market is Google’s favorite testing ground for new features and the largest locale being tracked in STAT, it made sense to focus our research there. We chose to analyze mobile SERPs over desktop because of Google’s (finally released) mobile-first indexing, and also because that’s where Google told us they were starting.
After waiting for enough two-snippet SERPs to show up so we could get our (proper) analysis on, we pulled our data at the end of March. Out of the mobile keywords currently tracking in the US-en market in STAT, 122,501 had a featured snippet present, and of those, 1.06 percent had more than one to their name.
With only 1,299 double-snippet SERPs to analyze, we admit that our sample size is smaller than our big data nerd selves would like. That said, it is indicative of how petite this release currently is.
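As a quick sanity check, the reported percentage and the sample size line up:

```python
# Verify the double-snippet share reported above.
keywords_with_snippet = 122_501   # mobile US-en keywords with a snippet
double_snippet_serps = 1_299      # SERPs showing two snippets

share = double_snippet_serps / keywords_with_snippet * 100
print(f"{share:.2f}%")  # 1.06%
```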
Two snippets appear for noun-heavy queries
Our first order of business was to see what kind of keywords two snippets were appearing for. If we can zero in on what Google might deem “multi-intent,” then we can optimize accordingly.
By weighting our double-snippet keywords by tf-idf, we found that nouns such as “insurance,” “computer,” “job,” and “surgery” were the primary triggers — like in [general liability insurance policy] and [spinal stenosis surgery].
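The weighting step can be sketched with a tiny pure-Python tf-idf, treating each keyword as its own document. The keywords below are illustrative stand-ins, not STAT’s actual dataset:

```python
import math
from collections import Counter

def tfidf(docs):
    """Score each term in each document by tf-idf; docs are token lists."""
    n = len(docs)
    # Document frequency: how many docs contain each term at least once.
    df = Counter(t for doc in docs for t in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (c / len(doc)) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores

keywords = [
    "general liability insurance policy".split(),
    "spinal stenosis surgery".split(),
    "business insurance policy cover".split(),
]
scores = tfidf(keywords)
# Terms appearing in several keywords ("insurance", "policy") are damped by
# idf, while terms unique to one keyword ("surgery") stand out, so
# aggregating these weights across a large keyword set surfaces the
# distinctive trigger nouns.
```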
It’s important to note that we don’t see this mirrored in single-snippet SERPs. When we refreshed our snippet research in November 2017, we saw that snippets appeared most often for “how,” followed closely by “does,” “to,” “what,” and “is.” These are all words that typically compose full sentence questions.
Essentially, without those interrogative words, Google is left to guess what the actual question is. Take our [general liability insurance policy] keyword as an example — does the searcher want to know what a general liability insurance policy is or how to get one?
Because of how vague the query is, it’s likely the searcher wants to know everything they can about the topic. And so, instead of having to pick, Google’s finally caught onto the wisdom of the Old El Paso taco girl — why not have both?
Better leapfrogging and double duty domains
Next, we wanted to know where you’d need to rank in order to win one (or both) of the snippets on this new SERP. This is what we typically call “source position.”
On a single-snippet SERP and ignoring any SERP features, Google pulls from the first organic rank 31 percent of the time. On double-snippet SERPs, the top snippet pulls from the first organic rank 24.84 percent of the time, and the bottom pulls from organic ranks 5–10 more often than solo snippets.
What this means is that you can leapfrog more competitors in a double-snippet situation than when just one is in play.
And when we dug into who’s answering all these questions, we discovered that 5.70 percent of our double-snippet SERPs had the same domain in both snippets. This begs the obvious question: is your content ready to do double duty?
Snippet headers provide clarity and keyword ideas
In what feels like the first new addition to the feature in a long time, there’s now a header on top of each snippet, which states the question it’s set out to answer. With reports of headers on solo snippets (and “People also search for” boxes attached to the bottom — will this madness never end?!), this may be a sneak peek at the new norm.
Instead of relying on guesses alone, we can turn to these headers for what a searcher is likely looking for — we’ll trust in Google’s excellent consumer research. Using our [general liability insurance policy] example once more, Google points us to “what is general liabilities insurance” and “what does a business insurance policy cover” as good interpretations.
Because these headers effectively turn ambiguous statements into clear questions, we weren’t surprised to see words like “how” and “what” appear in more than 80 percent of them. This trend falls in line with keywords that typically produce snippets, which we touched on earlier.
So, not only does a second snippet mean double the goodness that you usually get with just one, it also means more insight into intent and another keyword to track and optimize for.
Both snippets prefer paragraph formatting
Next, it was time to give formatting a look-see to determine whether the snippets appearing in twos behave any differently than their solo counterparts. To do that, we gathered every snippet on our double-snippet SERPs and compared them against our November 2017 data, back when pairs weren’t a thing.
While Google’s order of preference is the same for both — paragraphs, lists, and then tables — paragraph formatting was the clear favorite on our two-snippet SERPs.
It follows, then, that the most common pairing of snippets was paragraph-paragraph — this appeared on 85.68 percent of our SERPs. The least common, at 0.31 percent, was the table-table coupling.
We can give two reasons for this behavior. One, if a query can have multiple interpretations, it makes sense that a paragraph answer would provide the necessary space to explain each of them, and two, Google really doesn’t like tables.
We saw double-snippet testing in action
When looking at the total number of snippets we had on hand, we realised that the only way everything added up was if a few SERPs had more than two snippets. And lo! Eleven of our keywords returned anywhere from six to 12 snippets.
For a hot minute we were concerned that Google was planning a full-SERP snippet takeover, but when we searched those keywords a few days later, we discovered that we’d caught testing in action.
Here’s what we saw play out for the keyword [severe lower back pain]:
After testing six variations, Google decided to stick with the first two snippets. Whether this is a matter of top-of-the-SERP results getting the most engagement no matter what, or the phrasing of these questions resonating with searchers the most, is hard for us to tell.
The multiple snippets appearing for [full-time employment] left us scratching our heads a bit:
Our best hypothesis is that searchers in Florida, New York State, Minnesota, and Oregon have more questions about full-time employment than other places. But, since we’d performed a nationwide search, Google seems to have thought better of including location-specific snippets.
Share your double-snippet SERP experiences
It goes without saying — but here we are saying it anyway — that we’ll be keeping an eye on the scope of this release and will report back on any new revelations.
In the meantime, we’re keen to know what you’re seeing. Have you had any double-snippet SERPs yet? Were they in a market outside the US? What keywords were surfacing them?
In my last post, I explained how using network visualization tools can help you massively improve your content marketing PR/outreach strategy — understanding which news outlets have the largest syndication networks empowers your outreach team to prioritize high-syndication publications over low-syndication publications. The result? The content you are pitching enjoys significantly more widespread link pickups.
Today, I’m going to take you a little deeper — we’ll be looking at a few techniques for forming an even better understanding of the publisher syndication networks in your particular niche. I’ve broken this technique into two parts:
- Technique One — Leveraging BuzzSumo influencer data and Twitter scraping to find the most influential journalists writing about any topic
- Technique Two — Leveraging the Gdelt dataset to reveal deep story syndication networks between publishers using in-context links
Why do this at all?
If you are interested in generating high-value links at scale, these techniques provide an undeniable competitive advantage — they help you to deeply understand how writers and news publications connect and syndicate to each other.
In our opinion at Fractl, producing data-driven content stories with strong news hooks, finding writers and publications who would find the content compelling, and pitching them effectively is the single highest-ROI SEO activity possible. Done correctly, it is entirely possible to generate dozens, sometimes even hundreds or thousands, of high-authority links with one or a handful of content campaigns.
Let’s dive in.
Using Buzzsumo to understand journalist influencer networks on any topic
First, you want to figure out who your top influencers are for a topic. A very handy feature of BuzzSumo is its “Influencers” tool. You can locate it on the Influencers tab, then follow these steps:
- Select only “Journalists.” This will limit the result to only the Twitter accounts of those known to be reporters and journalists of major publications. Bloggers and lower authority publishers will be excluded.
- Search using a topical keyword. If it is straightforward, one or two searches should be fine. If it is more complex, create a few related queries and collate the Twitter accounts that appear in all of them. Alternatively, use Boolean “and/or” operators in your search to narrow your results. It is critical to be sure your search results return journalists that match your target criteria as closely as possible.
- Ideally, you want at least 100 results. More is generally better, so long as you are sure the results represent your target criteria well.
- Once you are happy with your search result, click export to grab a CSV.
The next step is to grab all of the people each of these known journalist influencers follows — the goal is to understand which of these 100 or so influencers impacts the other 100 the most. Additionally, we want to find people outside of this group that many of these 100 follow in common.
To do so, we leveraged Twint, a handy Twitter scraper available on GitHub, to pull all of the people each of these journalist influencers follows. Using our scraped data, we built an edge list, which allowed us to visualize the result in Gephi.
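The scrape-to-edge-list step can be sketched as follows. The Twint call is shown only in a comment (it hits Twitter live and needs the library installed, so treat its exact usage as an assumption); the edge-list builder itself is plain Python, writing the Source/Target CSV that Gephi imports:

```python
import csv
import io

# Collecting "who follows whom" with Twint (assumption: Twint's config API
# as described on its GitHub page) would look roughly like:
#   import twint
#   c = twint.Config()
#   c.Username = "maiasz"
#   twint.run.Following(c)
# repeated for each journalist, capturing the accounts each one follows.

def build_edge_list(following, out):
    """following: dict mapping each influencer to the accounts they follow.
    Writes a Source,Target CSV suitable for Gephi's spreadsheet importer."""
    writer = csv.writer(out)
    writer.writerow(["Source", "Target"])
    for source, targets in following.items():
        for target in targets:
            writer.writerow([source, target])

# Toy data standing in for real scrape results:
follows = {"maiasz": ["radleybalko", "davidkroll"],
           "radleybalko": ["maiasz"]}
buf = io.StringIO()
build_edge_list(follows, buf)
print(buf.getvalue())
```

Each row is one directed edge (influencer follows account); Gephi reads the header and builds the directed graph from there.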
Here is an interactive version for you to explore, and here is a screenshot of what it looks like:
This graph shows us which nodes (influencers) have the most In-Degree links. In other words: it tells us who, of our media influencers, is most followed.
These are the top 10 nodes:
- Maia Szalavitz (@maiasz) Neuroscience Journalist, VICE and TIME
- Radley Balko (@radleybalko) Opinion journalist, Washington Post
- Johann Hari (@johannhari101) New York Times best-selling author
- David Kroll (@davidkroll) Freelance healthcare writer, Forbes Health
- Max Daly (@Narcomania) Global Drugs Editor, VICE
- Dana Milbank (@milbank) Columnist, Washington Post
- Sam Quinones (@samquinones7), Author
- Felice Freyer (@felicejfreyer), Boston Globe Reporter, Mental health and Addiction
- Jeanne Whalen (@jeannewhalen) Business Reporter, Washington Post
- Eric Bolling (@ericbolling) New York Times best-selling author
Who is the most influential?
Using the “Betweenness Centrality” score given by Gephi, we get a rough understanding of which nodes (influencers) in the network act as hubs of information transfer. Those with the highest “Betweenness Centrality” can be thought of as the “connectors” of the network. These are the top 10 influencers:
- Maia Szalavitz (@maiasz) Neuroscience Journalist, VICE and TIME
- David Kroll (@davidkroll) Freelance healthcare writer, Forbes Health
- Jeanne Whalen (@jeannewhalen) Business Reporter, Washington Post
- Travis Lupick (@tlupick), Journalist, Author
- Johann Hari (@johannhari101) New York Times best-selling author
- Radley Balko (@radleybalko) Opinion journalist, Washington Post
- Sam Quinones (@samquinones7), Author
- Eric Bolling (@ericbolling) New York Times best-selling author
- Dana Milbank (@milbank) Columnist, Washington Post
- Mike Riggs (@mikeriggs) Writer & Editor, Reason Mag
@maiasz, @davidkroll, and @johannhari101 are standouts. There’s considerable overlap between the winners in “In-Degree” and “Betweenness Centrality,” but the two lists are still quite different.
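Both metrics are easy to reproduce at small scale. The sketch below counts in-degree directly and implements Brandes’ algorithm for betweenness centrality on an unweighted directed graph (Gephi computes the same measures; the four-node graph here is purely illustrative):

```python
from collections import Counter, deque

def in_degree(graph):
    """graph: dict node -> list of nodes it points to (outgoing edges)."""
    counts = Counter(t for targets in graph.values() for t in targets)
    return {node: counts.get(node, 0) for node in graph}

def betweenness(graph):
    """Brandes' algorithm for unweighted directed graphs."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack, preds = [], {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in graph}; dist[s] = 0
        queue = deque([s])
        while queue:                                   # BFS from s
            v = queue.popleft(); stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                                   # accumulate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

g = {"a": ["b"], "b": ["c"], "c": [], "d": ["b"]}
print(in_degree(g))   # {'a': 0, 'b': 2, 'c': 1, 'd': 0}
print(betweenness(g)) # "b" sits on the a->c and d->c paths, so it scores highest
```

The node “b” is the connector here: it is both the most followed and the bridge every shortest path crosses, which is exactly the dual signal the influencer lists above capture.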
What else can we learn?
The middle of the visualization holds many of the largest sized nodes. The nodes in this view are sized by “In-Degree.” The large, centrally located nodes are disproportionately followed by other members of the graph and enjoy popularity across the board (from many of the other influential nodes). These are journalists commonly followed by everyone else. Sifting through these centrally located nodes will surface many journalists who behave as influencers of the group initially pulled from BuzzSumo.
So, if you had a campaign about a niche topic, you could consider pitching to an influencer surfaced from this data. According to our visualization, an article shared in their network would have the most reach and potential ROI.
Using Gdelt to find the most influential websites on a topic with in-context link analysis
The first example was a great way to find the best journalists in a niche to pitch, but top journalists are often the most pitched overall. Oftentimes, it can be easier to get a pickup from lesser-known writers at major publications. For this reason, understanding which major publishers are most influential, and enjoy the widest syndication on a specific theme, topic, or beat, can be majorly helpful.
By using Gdelt’s massive and fully comprehensive database of digital news stories, along with Google BigQuery and Gephi, it is possible to dig even deeper to yield important strategic information that will help you prioritize your content pitching.
We pulled all of the articles in Gdelt’s database that are known to be about a specific theme within a given timeframe. In this case (as with the previous example) we looked at behavioral health. For each article we found in Gdelt’s database that matched our criteria, we also grabbed only the links found within the context of the article.
Here is how it is done:
- Connect to Gdelt on Google BigQuery — you can find a tutorial here.
- Pull data from Gdelt. You can use this command: SELECT DocumentIdentifier, V2Themes, Extras, SourceCommonName, DATE FROM [gdelt-bq:gdeltv2.gkg] WHERE (V2Themes LIKE '%Your Theme%').
- Select any theme you find here; just replace the part between the percent signs.
- Extract the links found in each article and build an edge file. This can be done with a relatively simple Python script that pulls out all of the <PAGE_LINKS> from the results of the query, cleans the links to only show their root domain (not the full URL), and puts them into an edge file format.
Note: The edge file is made up of Source–>Target pairs. The Source is the article and the Target are the links found within the article. The edge list will look like this:
- Article 1, First link found in the article.
- Article 1, Second link found in the article.
- Article 2, First link found in the article.
- Article 2, Second link found in the article.
- Article 2, Third link found in the article.
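A minimal version of that extraction script might look like this. The `<PAGE_LINKS>` tag name comes from the steps above; treat the exact delimiter inside it (semicolons here) as an assumption about the GKG Extras format to verify against the Gdelt codebook:

```python
import re
from urllib.parse import urlparse

LINKS_RE = re.compile(r"<PAGE_LINKS>(.*?)</PAGE_LINKS>", re.S)

def root_domain(url):
    """Reduce a URL to its bare hostname. Simplification: only strips a
    leading "www.", so multi-part TLDs are left as-is."""
    host = urlparse(url).netloc or urlparse("//" + url).netloc
    return host.lower().removeprefix("www.")

def extract_edges(rows):
    """rows: (source_domain, extras_xml) pairs from the BigQuery export.
    Yields (Source, Target) edges for the Gephi edge file."""
    for source, extras in rows:
        match = LINKS_RE.search(extras or "")
        if not match:
            continue
        for link in match.group(1).split(";"):
            link = link.strip()
            if link:
                yield (root_domain(source), root_domain(link))

# One fabricated row standing in for a real query result:
rows = [("example-news.com",
         "<PAGE_LINKS>https://www.cdc.gov/page;https://nih.gov/x</PAGE_LINKS>")]
print(list(extract_edges(rows)))
# [('example-news.com', 'cdc.gov'), ('example-news.com', 'nih.gov')]
```

Writing the yielded pairs out as a Source,Target CSV gives you exactly the Article/link edge list described above, ready for Gephi.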
From here, the edge file can be used to build a network visualization where the nodes are publishers and the edges between them represent the in-context links found in our Gdelt data pull around whatever topic we desired.
This final visualization is a network representation of the publishers who have written stories about addiction, and where those stories link to.
What can we learn from this graph?
This tells us which nodes (publisher websites) have the most in-degree links. In other words: which sites are the most linked to. We can see that the most linked-to sites for this topic are:
Which publisher is most influential?
Using the “Betweenness Centrality” score given by Gephi, we get a rough understanding of which nodes (publishers) in the network act as hubs of information transfer. The nodes with the highest “Betweenness Centrality” can be thought of as the “connectors” of the network. Getting pickups from these high-betweenness centrality nodes gives a much greater likelihood of syndication for that specific topic/theme.
What else can we learn?
As in the first example, nodes with higher betweenness centrality scores, more in-degree links, and a more central location in the graph can generally be considered more “important.” Using this as a guide, the most important pitching targets can be easily identified.
Understanding some of the edge clusters gives additional insight into other potential opportunities, including a few clusters specific to regional or state-level local news and a few foreign-language publication clusters.
I’ve outlined two different techniques we use at Fractl to understand the influence networks around specific topical areas, both in terms of publications and the writers at those publications. The visualization techniques described are not obvious guides, but instead, are tools for combing through large amounts of data and finding hidden information. Use these techniques to unearth new opportunities and prioritize as you get ready to find the best places to pitch the content you’ve worked so hard to create.
Do you have any similar ideas or tactics to ensure you’re pitching the best writers and publishers with your content? Comment below!
With the new year in full swing and an already busy first quarter, our 2019 predictions for SEO in the new year are hopping onto the scene a little late — but fashionably so, we hope. From an explosion of SERP features to increased monetization to the key drivers of search this year, our SEO experts have consulted their crystal balls (read: access to mountains of data and in-depth analyses) and made their predictions. Read on for an exhaustive list of fourteen things to watch out for in search from our very own Dr. Pete, Britney Muller, Rob Bucci, Russ Jones, and Miriam Ellis!
1. Answers will drive search
People Also Ask boxes exploded in 2018, and featured snippets have expanded into both multifaceted and multi-snippet versions. Google wants to answer questions, it wants to answer them across as many devices as possible, and it will reward sites with succinct, well-structured answers. Focus on answers that naturally leave visitors wanting more and establish your brand and credibility. [Dr. Peter J. Meyers]
2. Voice search will continue to be utterly useless for optimization
3. Mobile is table stakes
This is barely a prediction. If your 2019 plan is to finally figure out mobile, you’re already too late. Almost all Google features are designed with mobile-first in mind, and the mobile-first index has expanded rapidly in the past few months. Get your mobile house (not to be confused with your mobile home) in order as soon as you can. [Dr. Peter J. Meyers]
4. Further SERP feature intrusions in organic search
Expect Google to find more and more ways to replace organic with solutions that keep users on Google’s property. This includes interactive SERP features that replace, slowly but surely, many website offerings in the same way that live scores, weather, and flights have. [Russ Jones]
5. Video will dominate niches
Featured Videos, Video Carousels, and Suggested Clips (where Google targets specific content in a video) are taking over the how-to spaces. As Google tests search appliances with screens, including Home Hub, expect video to dominate instructional and DIY niches. [Dr. Peter J. Meyers]
6. SERPs will become more interactive
We’ve seen the start of interactive SERPs with People Also Ask Boxes. Depending on which question you expand, two to three new questions will generate below that directly pertain to your expanded question. This real-time engagement keeps people on the SERP longer and helps Google better understand what a user is seeking. [Britney Muller]
7. Local SEO: Google will continue getting up in your business — literally
Google will continue asking more and more intimate questions about your business to your customers. Does this business have gender-neutral bathrooms? Is this business accessible? What is the atmosphere like? How clean is it? What kind of lighting do they have? And so on. If Google can acquire accurate, real-world information about your business (your percentage of repeat customers via geofencing, price via transaction history, etc.) they can rely less heavily on website signals and provide more accurate results to searchers. [Britney Muller]
8. Business proximity-to-searcher will remain a top local ranking factor
In Moz’s recent State of Local SEO report, the majority of respondents agreed that Google’s focus on the proximity of a searcher to local businesses frequently emphasizes distance over quality in the local SERPs. I predict that we’ll continue to see this heavily weighting the results in 2019. On the one hand, hyper-localized results can be positive, as they allow a diversity of businesses to shine for a given search. On the other hand, with the exception of urgent situations, most people would prefer to see best options rather than just closest ones. [Miriam Ellis]
9. Local SEO: Google is going to increase monetization
10. Monetization tests for voice
Google and Amazon have been moving towards voice-supported displays in hopes of better monetizing voice. It will be interesting to see their efforts to get displays in homes and how they integrate the display advertising. Bold prediction: Amazon will provide sleep-mode display ads similar to how Kindle currently displays them today. [Britney Muller]
11. Marketers will place a greater focus on the SERPs
I expect we’ll see a greater focus on the analysis of SERPs as Google does more to give people answers without them having to leave the search results. We’re seeing more and more vertical search engines like Google Jobs, Google Flights, Google Hotels, Google Shopping. We’re also seeing more in-depth content make it onto the SERP than ever in the form of featured snippets, People Also Ask boxes, and more. With these new developments, marketers are increasingly going to want to report on their general brand visibility within the SERPs, not just their website ranking. It’s going to be more important than ever for people to be measuring all the elements within a SERP, not just their own ranking. [Rob Bucci]
12. Targeting topics will be more productive than targeting queries
2019 is going to be another year in which we see the emphasis on individual search queries start to decline, as people focus more on clusters of queries around topics. People Also Ask queries have made the importance of topics much more obvious to the SEO industry. With PAAs, Google is clearly illustrating that they think about searcher experience in terms of a searcher’s satisfaction across an entire topic, not just a specific search query. With this in mind, we can expect SEOs to increasingly want to see their search queries clustered into topics so they can measure their visibility and the competitive landscape across these clusters. [Rob Bucci]
13. Linked unstructured citations will receive increasing focus
I recently conducted a small study in which there was a 75% correlation between organic and local pack rank. Linked unstructured citations (the mention of partial or complete business information + a link on any type of relevant website) are a means of improving organic rankings which underpin local rankings. They can also serve as a non-Google dependent means of driving traffic and leads. Anything you’re not having to pay Google for will become increasingly precious. Structured citations on key local business listing platforms will remain table stakes, but competitive local businesses will need to focus on unstructured data to move the needle. [Miriam Ellis]
14. Reviews will remain a competitive difference-maker
A Google rep recently stated that about one-third of local searches are made with the intent of reading reviews. This is huge. Local businesses that acquire and maintain a good and interactive reputation on the web will have a critical advantage over brands that ignore reviews as fundamental to customer service. Competitive local businesses will earn, monitor, respond to, and analyze the sentiment of their review corpus. [Miriam Ellis]
We’ve heard from Mozzers, and now we want to hear from you. What have you seen so far in 2019 that’s got your SEO Spidey senses tingling? What trends are you capitalizing on and planning for? Let us know in the comments below (and brag to friends and colleagues when your prediction comes true in the next 6–10 months).
Retail clients are battling tough economics offline and tough competitors online. They need every bit of help your agency can give them.
I was heartened when 75 percent of the 1,400+ respondents to the Moz State of Local SEO Industry Report 2019 shared that they contribute to offline strategy recommendations either frequently or at least some of the time. I can’t think of a market where good and relatively inexpensive experiments are more needed than in embattled retail. The ripple effect of a single new idea, offered up generously, can spread out to encompass new revenue streams for the client and new levels of retention for your agency.
That’s why win-win seemed written all over three statistics from a 2018 Yes Marketing retail survey when I read it: they speak to motivating roughly one-quarter to one-half of the 1,000 polled customers without going to any extreme expense. Take a look:
I highly recommend downloading Yes Marketing’s complete survey which is chock-full of great data, but today, let’s look at just three valuable stats from it to come up with an actionable strategy you can gift your offline retail clients at your next meeting.
Getting it right: A little market near me
For the past 16 years, I’ve been observing the local business scene with a combination of professional scrutiny and personal regard. I’m inspired by businesses that open and thrive and am saddened by those that open and close.
Right now, I’m especially intrigued by a very small, independently owned grocery store which set up shop last year in what I’ll lovingly describe as a rural, half-a-horse town not far from me. This locale has a single main street with fewer than 20 businesses on it, but I’m predicting the shop’s ultimate success based on several factors. A strong one is that the community is flanked by several much larger towns with lots of through traffic, and the market is several miles from any competitor. But other factors, which match point-for-point with the data in the Yes Marketing survey, make me feel especially confident that this small business is going to “get it right”.
Encourage your retail clients to explore the following tips.
1) The store is visually appealing
43–58 percent of Yes Marketing’s surveyed retail customers say they’d be motivated to shop with a retailer who has cool product displays, murals, etc. Retail shoppers of all ages are seeking appealing experiences.
At the market near me, many things are working in its favor. The building is historic on the outside and full of natural light on the inside, and the staff sets up creative displays, such as all of the ingredients you need to make a hearty winter soup gathered up on a vintage table. The Instagram crowd can have selfie fun here, and more mature customers will appreciate the aesthetic simplicity of this uncluttered, human-scale shopping experience.
For your retail clients, it won’t break the bank to become more visually appealing. Design cues are everywhere!
Share these suggestions with a worthy client:
Basic cleanliness is the starting point
This is an old survey, but I think we’re safe to say that at least 45 percent of retail customers are still put off by dirty premises — especially restrooms. Janitorial duties are already built into the budget of most businesses and only need to be accomplished properly. I continuously notice how many reviewers proclaim the word “clean” when a business deserves it.
Inspiration is affordable
Whatever employees are already being paid is the cost of engaging them to lend their creativity to creating merchandise displays that draw attention and/or solve problems. My hearty winter soup example is one idea (complete with boxed broth, pasta, veggies, bowls, and cookware).
For your retail client? It might be everything a consumer needs to recover from a cold (medicine, citrus fruit, electric blanket, herbal tea, tissue, a paperback, a sympathetic stuffed animal, etc.). Or everything one needs to winterize a car, take a trip to a beach, build a beautiful window box, or pamper a pet. Retailers can inexpensively encourage the hidden artistic talents in staff.
Feeling stuck? The Internet is full of free retail display tips, design magazines cost a few bucks, and your clients’ cable bills already cover a subscription to channels like HGTV and the DIY network that trade on style. A client who knows that interior designers are all using grey-and-white palettes and that one TV ad after another features women wearing denim blue with aspen yellow right now is well on their way to catching customers’ eyes.
Aspiring artists live near your client and need work
The national average cost to have a large wall mural professionally painted is about $8,000, with much less expensive options available. Some retailers even hold contests surrounding logo design, and an artist near your client may work quite inexpensively if they are trying to build up their portfolio. I can’t predict how long the Instagram mural trend will last, but wall art has been a crowd-pleaser since Paleolithic times. Any shopper who stops to snap a photo of themselves has been brought in close proximity to your front door.
While your clients’ industries and aesthetics will vary, tell them they can aim for a similar, positive response from at least 49 percent of their customers with a little more care put into the shopping environment.
2) The store offers additional services beyond the sale of products
19–40 percent of survey respondents are influenced by value-adds. Doubtless, you’ve seen the TV commercials in which banks double as coffee houses to appeal to the young, and small hardware chains emphasize staff expertise over loneliness in a warehouse. That’s what this is all about, and it can be done at a smaller scale, without overly-strapping your retail clients.
At the market near me, reviews like this are coming in:
The market has worked out a very economic arrangement with a massage therapist, who can build up their clientele out of the deal, so it’s a win for everybody.
For your retail clients, sharing these examples could inspire appealing added services:
The cost of these efforts is either covered by an employee’s existing salary, nominal, or free.
3) The store hosts local events
20–36 percent of customers feel the appeal of retailers becoming destinations for things to learn and do. Coincidentally, this corresponds with two of the tasks Google dubbed micro-moments a couple of years back, and while not everyone loves that terminology, we can at least agree that large numbers of people use the Internet to discover local resources.
At the market near me, they’re doing open-mic readings, and this is a trend in many cities to which Google Calendar attests:
For your clients, the last two words of that event description are key. When there’s a local wish to build community, retail businesses can lend the space and the stage. This can look like:
- Any type of class, such as ones teaching how to operate an appliance or machinery, how to re-skill at something like wilderness survival, or how to cook or make things.
- Any type of event, like the open-mic night I’ve cited above, or celebrations, or appearances by well-known locals such as authors, or ongoing club meetups.
- Any type of special appeal, like a recycling deal gifting participants $20 off new jeans if they donate their old ones, or housing a drop-off point for light bulbs, batteries, or charitable giving, or hosting the kick-off of a neighborhood cleanup with some added benefit to participants like a breakfast or discount.
Again, costs here can be quite modest and you’ll be bringing the community together under the banner of your business.
Putting it in writing
The last item on the budget for any of these ventures is whatever it costs to publicize it. For sure, your client will want:
- A homepage announcement and/or one or more blog posts
- Google Posts, Q&A, photos and related features
- Social mentions
- If the concept is large enough (or the community is small), some outreach to local news in hopes of a write-up and inclusion on local/social calendars
- Link building would be great if the client can afford a reasonable investment in your services, where necessary
- And, of course, be sure your client’s local business listings are accurate so that newcomers aren’t getting lost on their way to finding the cool new offering
Getting the word out about events, features, and other desirable attributes doesn’t have to be exorbitant, but it will put the finishing touch on ensuring the community knows the business is ready to offer the desired experience.
Sometimes, you’ll find yourself in a client meeting and things will be a bit flat. Maybe the client has been disengaged from your contract lately, or sales have been leveling out for lack of new ideas. That’s the perfect time to put something fresh on the table, demonstrating that you’re thinking about the client’s whole picture beyond CTR and citations.
One thing that I find to be an inspiring practice for agencies is to do an audit of competitors’ reviews looking for “holes.” In many communities, shopping is really dull, and reviews reflect that, with few shoppers feeling genuinely excited by a particular vertical’s local offerings. Your client could be the one to change that, with a little extra attention from you.
Every possibility won’t be the perfect match for every business, but if you can help the company see a new opportunity, the few minutes spent brainstorming could benefit you both.
From irrelevant, off-topic backlinks to cookie-cutter anchor text, there are more than a few clues hidden in your backlink profile that something spammy is going on. Alone they might not be something to worry about, but in conjunction, common red flags can spell trouble when you’re performing an audit on your backlink profile. In this week’s Whiteboard Friday, Kameron Jenkins shares her best advice from years working with clients on what to watch out for in a link profile audit.
Hey, guys. Welcome to this week’s edition of Whiteboard Friday. My name is Kameron Jenkins, and I work here at Moz. Today we’re going to be talking about auditing your backlink profile, why you might want to do it, when you should do it, and then how to do it. So let’s just dive right in.
It might be kind of confusing to talk about auditing your backlink profile. When I say auditing your backlink profile, I’m specifically talking about trying to diagnose whether there’s anything funky or manipulative going on. There’s been quite a bit of debate among SEOs: in a post-Penguin 4.0 world, if Google can ignore spammy, low-quality backlinks, why would we also need to disavow, which essentially tells Google the same thing: “Just ignore these links.”
Here are three reasons why we might still want to consider this in some situations.
Why should you audit your backlink profile?
Disavow is still offered
Disavow is still an option — you can go and submit a disavow file right now if you wanted to.
You can still get manual penalties
Google still has guidelines that outline all of the link schemes and types of link manipulation. If you violate those, you could get a manual penalty. In your Google Search Console, it will say something like “unnatural links to your site detected,” total or partial. You can still get those. That’s another reason I would say that the disavow is still something you could consider doing.
Google says their stance hasn’t changed
I know there’s like a little bit of back-and-forth about this, but technically Google has said, “Our stance hasn’t changed. Still use the disavow file carefully and when it’s appropriate.” So we’ll talk about when it might be appropriate, but that’s why we consider that this is still a legitimate activity that you could do.
When should you audit your backlink profile?
Look for signs of a link scheme or link manipulation
I would say that, in today’s climate, it’s probably best to do this only when you see overt signs of a link scheme or link manipulation, something that looks very wrong or very concerning. Because Google is so much better at uncovering manipulative links and just ignoring them rather than penalizing a whole site for them, it’s not as important, I think, to be as aggressive as we used to be. But if, say, you inherit a client, look at their link profile for the first time, and notice something sketchy in there, that’s when I’d consider doing it. You’re an SEO. You can detect the signs of whether there’s a link scheme going on.
How do you audit your backlink profile?
Check for red flags in Moz Link Explorer
But if you’re not quite sure how to diagnose that, check for red flags in Moz Link Explorer, and that’s the second part of this. We’re going to go through some red flags that I have noticed. But huge disclaimer: these are seven possible red flags. Please don’t just take one of these and say, “Oh, I found this,” and immediately disavow.
These are just things that I have noticed over time. I started in SEO in 2012, right around the time of Penguin, and so I did a lot of cleanup of so many spammy links. I kind of just saw patterns, and this is the result of that. I think that’s stayed true over the last couple of years for links that haven’t been cleaned up. Some people are still doing these kinds of low-quality link building techniques that could actually get you penalized.
These are some things that I have noticed. They should just pique your interest. If you see something like this, if you detect one of these red flags, it should prompt you to look into it further, not immediately write off those links as “those are bad.” They’re just things to spark your interest so that you can explore further on your own. So with that big disclaimer, let’s dive into the red flags.
7 possible red flags
1. Irrelevant links
Countries you don’t serve
A couple of examples of this. Maybe you are working on a client. They are US-based, and all of their locations are in the US. Their entire audience is US-based. But you get a quick glimpse of the inbound links. Maybe you’re on Link Explorer and you go to the inbound links report and you see a bunch of domains linking to you that are .ru and .pl, and that’s kind of confusing. Why is my site getting a huge volume of links from other countries that we don’t serve and we don’t have any content in Russian or Polish or anything like that? So that might spark my interest to look into it further. It could be a sign of something.
Another thing is off-topic. My favorite example, just because it was so ridiculous, was I was working with an Atlanta DUI attorney, and he had a huge chunk of backlinks that were from party planning, like low-quality party planning directories, and they didn’t make any sense. I clicked on them just to see what it was. You can go to it and see okay, yes, there really is no reason they should be linking to each other. It was clear he just went to Fiverr and was like, “$5, here build me links,” and he didn’t care where they came from. So you might notice a lot of totally off-topic, irrelevant stuff.
But obviously a disclaimer, it might look irrelevant, but then when you dive in further, they are in the same market and they kind of have a co-marketing relationship going on. Just be careful with that. But it could be a sign that there is some link manipulation going on if you have totally off-topic links in there.
2. Anchor text
The second red flag is anchor text. Again, this is another cool report in Moz Link Explorer. You can go in there and see the anchor text report. When I notice that there’s link manipulation going on, usually what I see is that a huge majority of their backlinks come with the same exact anchor text, and usually it’s the exact match keyword that they want to rank for. That’s usually a huge hallmark of “hey, they’ve been doing some shady linking.”
The example I like to use for this and why that is concerning — and there’s no percentage that’s like, whoa, that’s manipulative. But if you see a really disproportionate percentage of links coming with the same exact anchor text, it might prompt you to look into it further. The example I like to use is, say you meet with five different friends throughout the course of your day, different occasions. They’re not all in the same room with you. You talk to each of them and they all say, “Hey, yeah, my weekend was great, but like I broke my foot.” You would be suspicious: “What, they all broke their foot? This is weird. What’s going on?”
Same thing with anchor text. If you’re earning links naturally, they’re not all going to look the same and mechanical. Something suspicious is probably going on if they’re all linking with the exact same anchor text. So that’s that.
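As a rough illustration of this red flag, here is a minimal sketch of how you might measure anchor-text concentration, assuming you've exported your anchor text report (e.g., from Link Explorer) as a plain list of anchor strings. The function name and the sample data are hypothetical, not part of any tool's API.

```python
from collections import Counter

def anchor_text_concentration(anchors):
    """Return the most common anchor text and the share of all
    backlinks that use it. A very high share for one exact-match
    keyword is a prompt to investigate, not proof of a scheme."""
    counts = Counter(anchors)
    top_anchor, top_count = counts.most_common(1)[0]
    return top_anchor, top_count / len(anchors)

# Hypothetical export: 100 backlinks, 80 sharing one exact-match anchor
anchors = (["atlanta dui attorney"] * 80
           + ["Example Firm"] * 12
           + ["https://example.com"] * 8)
top, share = anchor_text_concentration(anchors)
# share of 0.8 (80%) for a single money keyword would look mechanical
```

There is no magic threshold, as noted above; the point is simply to surface a disproportionate distribution so you can look at those links by hand.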
3. Nofollow-to-follow ratio
Nofollow to follow, this is another one — please don’t use this as a sweeping rule, because I think even Russ Jones has come out and said that at a mass scale it’s not a good predictor of spamminess. But what I have tended to see is that usually, if they also have spammy anchor text and they’re irrelevant, there’s a really, really disproportionate ratio of nofollow to follow. Use these red flags in conjunction with each other. When they start to pile on, it’s even more of a sign to me that there’s something fishy going on.
Nofollow to follow, you might see something ridiculous. Again, it’s something you can see in Link Explorer. Maybe like 99% of all of their backlinks are follow, which are the ones that pass PageRank. If you’re going to do a link scheme, you’re going to go out and get the ones that you think are going to pass PageRank to your site. Then one percent or no percent is nofollow. It may be something to look into.
4. Links-to-domains ratio
Same thing with links to domains. Again, not an overt sign of spamminess. There’s no magic ratio here. But sometimes when I notice all of these other things, I will also notice a really disproportionate ratio of, say, 10,000 inbound links coming from only 5 domains. Sometimes this happens. An example of this: I was auditing a client’s backlink profile, and they had set up five different websites, and on those websites they had put site-wide links to all of their other websites. They had created their own little network. By linking to each other, they were hoping to bolster all of their sites’ authority. Obviously, be careful with something like that. It could indicate that you’re self-creating follow links, which is a no-no.
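The two ratios above can be computed from any backlink export. This is a hedged sketch, assuming each exported backlink carries at least a source domain and a follow/nofollow flag; the field names here are invented for illustration.

```python
def link_profile_ratios(links):
    """Compute the follow share and the links-per-domain ratio from a
    list of backlinks, each a dict like
    {"domain": "example.com", "follow": True}.
    Neither number is a verdict on its own; they matter most when
    several red flags pile up together."""
    total = len(links)
    follow = sum(1 for link in links if link["follow"])
    domains = {link["domain"] for link in links}
    return follow / total, total / len(domains)

# Toy profile mirroring the example above: 10,000 links, 99% follow,
# all coming from only 5 domains (a self-made network pattern)
links = [{"domain": f"site{i % 5}.com", "follow": i % 100 != 0}
         for i in range(10000)]
follow_share, links_per_domain = link_profile_ratios(links)
# follow_share -> 0.99, links_per_domain -> 2000.0
```

A natural profile usually shows a healthy mix of nofollow links and far fewer links per unique domain; extreme values on both axes together are what should prompt a closer look.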
5. Domain naming
“DIR” or “directory”
This one is just kind of like the eyeball test, which I’ll get to later. If you go to your inbound links, you can start to notice domain names that just look weird, and they’ll start to look off the more you look into stuff like this. When I was doing a lot of backlink auditing, what I noticed was that a lot of these spammier links came from low-quality directory submission sites. A lot of those tend to have “directory” or “DIR” in the domain name, so like bestlinkdir.co, whatever. A lot of times when they have naming conventions like that, I have noticed that those tend to be low-quality directory submission sites. You could even eyeball or scan and see if there are any “DIR” directory-type links.
Same thing with articles. Back in the day, when people used to submit e-zine articles or post to Article Base or something like that, if a domain has the word “article” in its name, it might be something to look into. Maybe they were doing some low-quality article submission with backlinks to their site.
Then if you tend to see a lot of links in their backlink profile that have like SEO link type naming conventions, unless you’re working on a site that is in the SEO space, they shouldn’t have a bunch of links that say like bestSEOsite.com or bestlinksforyou.com. I’ve seen a lot of that. It’s just something that I have noticed. It’s something to maybe watch out for.
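To speed up this kind of naming-convention scan, a small script can pre-filter a domain list before you eyeball anything. This is a sketch under the assumptions above; the token list is illustrative, and a match is only ever a prompt to visit the site, never a reason to disavow by itself.

```python
import re

# Tokens that often appear in low-quality directory, article-submission,
# or SEO link-scheme domain names (hypothetical shortlist)
SUSPECT_TOKENS = re.compile(r"(dir|directory|article|seo|link)", re.IGNORECASE)

def flag_suspect_domains(domains):
    """Return the linking domains whose names contain a suspect token,
    for manual review."""
    return [d for d in domains if SUSPECT_TOKENS.search(d)]

domains = ["bestlinkdir.co", "citynews.com",
           "seoarticlebase.net", "example.org"]
flagged = flag_suspect_domains(domains)
# flagged -> ["bestlinkdir.co", "seoarticlebase.net"]
```

Remember the disclaimer that runs through this whole list: a legitimate site can have "link" or "article" in its name, so the output is a review queue, not a disavow file.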
6. Tool metrics
These can be super helpful. If you see a tool metric like a really high Spam Score, it’s something to look into. Helpfully, Moz’s Help Hub has a list of all 27 criteria that go into evaluating a site’s spamminess, which can shed light on how Moz’s Spam Score calculates it.
DA and PA: just a note on Domain Authority and Page Authority. If you see links coming from low-DA or low-PA URLs, make sure you don’t write those off right off the bat. It could just be that those domains are very new. Maybe they haven’t engaged in a lot of marketing yet. It doesn’t necessarily mean they’re spammy; it just means they haven’t done much to earn any authority. Watch out for writing off links as spammy just because they have a low DA or PA. Just something to consider.
7. Eyeball test
Then finally we have the eyeball test. Like I said, the more you do this (and it’s not something you should be engaging in constantly nowadays), you’ll start to notice patterns if you are working on clients with spammier link profiles. These kinds of low-quality sites tend to share the same template. You’ll have 100 sites that are all blue, with the exact same navigation and the exact same logo; they’re all on the same network. You’ll start to notice themes like that. A lot of times they don’t have any contact information, because no one maintains these things. They’re just up for the purpose of links. Nobody cares about them, so no phone number, no contact information, no email address, nothing. Another telltale sign, which I tend to notice on these self-submission types of link sites, is that they’ll have a big PayPal button at the top saying “Pay to Submit Links,” or even worse, “Use this PayPal to get your links removed from this site,” because they know it’s low-quality and people ask them all the time. Just something to consider on the eyeball test front.
I hope this was helpful. Hopefully it helped you understand when you might want to do this, when you might not want to do this, and then if you do try to engage in some kind of link audit, some things to watch out for. So I hope that was helpful. If you have any tips for this, if you’ve noticed anything else that you think would be helpful for other SEOs to know, drop it in the comments.
That’s it for this week’s Whiteboard Friday. Come back again next week for another one.
It’s finally here, for your review and feedback: Chapter 7 of the new Beginner’s Guide to SEO, the last chapter. We cap off the guide with advice on how to measure, prioritize, and execute on your SEO. And if you missed them, check out the drafts of our outline, Chapter One, Chapter Two, Chapter Three, Chapter Four, Chapter Five, and Chapter Six for your reading pleasure. As always, let us know what you think of Chapter 7 in the comments!
Set yourself up for success.
They say if you can measure something, you can improve it.
In SEO, it’s no different. Professional SEOs track everything from rankings and conversions to lost links and more to help prove the value of SEO. Measuring the impact of your work and ongoing refinement is critical to your SEO success, client retention, and perceived value.
It also helps you pivot your priorities when something isn’t working.
Start with the end in mind
While it’s common to have multiple goals (both macro and micro), establishing one specific primary end goal is essential.
The only way to know what a website’s primary end goal should be is to have a strong understanding of the website’s goals and/or client needs. Good client questions are not only helpful in strategically directing your efforts, but they also show that you care.
Client question examples:
- Can you give us a brief history of your company?
- What is the monetary value of a newly qualified lead?
- What are your most profitable services/products (in order)?
Keep the following tips in mind while establishing a website’s primary goal, additional goals, and benchmarks:
Goal setting tips
- Measurable: If you can’t measure it, you can’t improve it.
- Be specific: Don’t let vague industry marketing jargon water down your goals.
- Share your goals: Studies have shown that writing down and sharing your goals with others boosts your chances of achieving them.
Now that you’ve set your primary goal, evaluate which additional metrics could help support your site in reaching its end goal. Measuring additional (applicable) benchmarks can help you keep a better pulse on current site health and progress.
How are people behaving once they reach your site? That’s the question that engagement metrics seek to answer. Some of the most popular metrics for measuring how people engage with your content include:
Conversion rate – The number of conversions (for a single desired action/goal) divided by the number of unique visits. A conversion rate can be applied to anything, from an email signup to a purchase to account creation. Knowing your conversion rate can help you gauge the return on investment (ROI) your website traffic might deliver.
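The formula above is simple enough to sanity-check by hand. Here is a minimal sketch (the function name is just illustrative):

```python
def conversion_rate(conversions, unique_visits):
    """Conversion rate = conversions for a single desired action/goal
    divided by the number of unique visits."""
    if unique_visits == 0:
        return 0.0  # avoid division by zero when a page has no traffic yet
    return conversions / unique_visits

# e.g. 40 email signups out of 2,000 unique visits
rate = conversion_rate(40, 2000)  # 0.02, i.e. a 2% conversion rate
```

The same calculation applies whether the goal is an email signup, a purchase, or an account creation; only the definition of "conversion" changes.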
In Google Analytics, you can set up goals to measure how well your site accomplishes its objectives. If your objective for a page is a form fill, you can set that up as a goal. When site visitors accomplish the task, you’ll be able to see it in your reports.
Time on page – How long did people spend on your page? If you have a 2,000-word blog post that visitors are only spending an average of 10 seconds on, the chances are slim that this content is being consumed (unless they’re a mega-speed reader). However, if a URL has a low time on page, that’s not necessarily bad either. Consider the intent of the page. For example, it’s normal for “Contact Us” pages to have a low average time on page.
Pages per visit – Was the goal of your page to keep readers engaged and take them to a next step? If so, then pages per visit can be a valuable engagement metric. If the goal of your page is independent of other pages on your site (ex: visitor came, got what they needed, then left), then low pages per visit are okay.
Bounce rate – “Bounced” sessions indicate that a searcher visited the page and left without browsing your site any further. Many people try to lower this metric because they believe it’s tied to website quality, but it actually tells us very little about a user’s experience. We’ve seen cases of bounce rate spiking for redesigned restaurant websites that are doing better than ever. Further investigation discovered that people were simply coming to find business hours, menus, or an address, then bouncing with the intention of visiting the restaurant in person. A better metric to gauge page/site quality is scroll depth.
Scroll depth – This measures how far visitors scroll down individual webpages. Are visitors reaching your important content? If not, test different ways of providing the most important content higher up on your page, such as multimedia, contact forms, and so on. Also consider the quality of your content. Are you omitting needless words? Is it enticing for the visitor to continue down the page? Scroll depth tracking can be set up in your Google Analytics.
Ranking is a valuable SEO metric, but measuring your site’s organic performance can’t stop there. The goal of showing up in search is to be chosen by searchers as the answer to their query. If you’re ranking but not getting any traffic, you have a problem.
But how do you even determine how much traffic your site is getting from search? One of the most precise ways to do this is with Google Analytics.
Using Google Analytics to uncover traffic insights
Google Analytics (GA) is bursting at the seams with data — so much so that it can be overwhelming if you don’t know where to look. This is not an exhaustive list, but rather a general guide to some of the traffic data you can glean from this free tool.
Isolate organic traffic – GA allows you to view traffic to your site by channel. This will mitigate any scares caused by changes to another channel (ex: total traffic dropped because a paid campaign was halted, but organic traffic remained steady).
Traffic to your site over time – GA allows you to view total sessions/users/pageviews to your site over a specified date range, as well as compare two separate ranges.
How many visits a particular page has received – Site Content reports in GA are great for evaluating the performance of a particular page — for example, how many unique visitors it received within a given date range.
Traffic from a specified campaign – You can use UTM (Urchin Tracking Module) codes for better attribution. Designate the source, medium, and campaign, then append the codes to the end of your URLs. When people start clicking on your UTM-tagged links, that data will start to populate in GA’s “campaigns” report.
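Appending the codes is just query-string construction, so it can be scripted. A sketch using Python's standard library (the example URL and campaign names are made up):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit, parse_qsl

def add_utm(url, source, medium, campaign):
    """Append the three core UTM parameters (utm_source, utm_medium,
    utm_campaign) to a URL, preserving any existing query string."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    params = parse_qsl(query)
    params += [("utm_source", source),
               ("utm_medium", medium),
               ("utm_campaign", campaign)]
    return urlunsplit((scheme, netloc, path, urlencode(params), fragment))

tagged = add_utm("https://example.com/sale",
                 "newsletter", "email", "spring_sale")
# https://example.com/sale?utm_source=newsletter&utm_medium=email&utm_campaign=spring_sale
```

Building links programmatically like this helps keep source/medium/campaign naming consistent, which matters because GA reports treat "Email" and "email" as different mediums.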
Click-through rate (CTR) – Your CTR from search results to a particular page (meaning the percent of people that clicked your page from search results) can provide insights on how well you’ve optimized your page title and meta description. You can find this data in Google Search Console, a free Google tool.
In addition, Google Tag Manager is a free tool that allows you to manage and deploy tracking pixels to your website without having to modify the code. This makes it much easier to track specific triggers or activity on a website.
Additional common SEO metrics
- Domain Authority & Page Authority (DA/PA) – Moz’s proprietary authority metrics provide powerful insights at a glance and are best used as benchmarks relative to your competitors’ Domain Authority and Page Authority.
- Keyword rankings – A website’s ranking position for desired keywords. This should also include SERP feature data, like featured snippets and People Also Ask boxes that you’re ranking for. Try to avoid vanity metrics, such as rankings for competitive keywords that are desirable but often too vague and don’t convert as well as longer-tail keywords.
- Number of backlinks – Total number of links pointing to your website or the number of unique linking root domains (meaning one per unique website, as websites often link out to other websites multiple times). While these are both common link metrics, we encourage you to look more closely at the quality of backlinks and linking root domains your site has.
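The gap between those two link metrics is easy to see in code. A rough sketch in Python; note that it approximates "linking root domains" by hostname, whereas a production version would use the Public Suffix List to collapse subdomains like blog.example.com down to example.com:

```python
from urllib.parse import urlparse

def unique_linking_hosts(backlink_urls):
    # One entry per linking hostname; a true root-domain count would also
    # collapse subdomains (blog.example.com -> example.com) via the Public Suffix List.
    return {urlparse(u).hostname for u in backlink_urls}

backlinks = [
    "https://blog.example.com/post-1",
    "https://blog.example.com/post-2",
    "https://news.example.org/story",
]
hosts = unique_linking_hosts(backlinks)
print(len(backlinks), "backlinks from", len(hosts), "unique linking hosts")
# → 3 backlinks from 2 unique linking hosts
```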
How to track these metrics
There are lots of different tools available for keeping track of your site’s position in SERPs, site crawl health, SERP features, and link metrics, such as Moz Pro and STAT.
The Moz and STAT APIs (among other tools) can also be pulled into Google Sheets or other customizable dashboard platforms for clients and quick at-a-glance SEO check-ins. This also allows you to provide more refined views of only the metrics you care about.
Dashboard tools like Data Studio, Tableau, and PowerBI can also help to create interactive data visualizations.
Evaluating a site’s health with an SEO website audit
By having an understanding of certain aspects of your website — its current position in search, how searchers are interacting with it, how it’s performing, the quality of its content, its overall structure, and so on — you’ll be able to better uncover SEO opportunities. Leveraging the search engines’ own tools can help surface those opportunities, as well as potential issues:
- Google Search Console – If you haven’t already, sign up for a free Google Search Console (GSC) account and verify your website(s). GSC is full of actionable reports you can use to detect website errors, opportunities, and user engagement.
- Bing Webmaster Tools – Bing Webmaster Tools has similar functionality to GSC. Among other things, it shows you how your site is performing in Bing and opportunities for improvement.
- Lighthouse Audit – Google’s automated tool for measuring a website’s performance, accessibility, progressive web apps, and more. This data improves your understanding of how a website is performing. Gain specific speed and accessibility insights for a website here.
- PageSpeed Insights – Provides website performance insights using Lighthouse and Chrome User Experience Report data from real user measurement (RUM) when available.
- Structured Data Testing Tool – Validates that a website is using schema markup (structured data) properly.
- Mobile-Friendly Test – Evaluates how easily a user can navigate your website on a mobile device.
- Web.dev – Surfaces website improvement insights using Lighthouse and provides the ability to track progress over time.
- Tools for web devs and SEOs – Google often provides new tools for web developers and SEOs alike, so keep an eye on any new releases here.
While we don’t have room to cover every SEO audit check you should perform in this guide, we do offer an in-depth Technical SEO Site Audit course for more info. When auditing your site, keep the following in mind:
Crawlability: Are your primary web pages crawlable by search engines, or are you accidentally blocking Googlebot or Bingbot via your robots.txt file? Does the website have an accurate sitemap.xml file in place to help direct crawlers to your primary pages?
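You can answer crawlability questions like these programmatically with Python's built-in robots.txt parser. A minimal sketch, with hypothetical rules standing in for your real robots.txt file:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules; in practice, fetch https://yoursite.com/robots.txt.
robots_txt = """
User-agent: *
Disallow: /admin/

User-agent: Googlebot
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Check whether the major crawlers can reach your primary pages.
for bot in ("Googlebot", "Bingbot"):
    for path in ("/blog/post", "/private/report"):
        print(f"{bot} may fetch {path}: {rp.can_fetch(bot, path)}")
```

Running this against your own file quickly surfaces pages you are accidentally blocking.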
Indexed pages: Can your primary pages be found using Google? Doing a site:yoursite.com OR site:yoursite.com/specific-page check in Google can help answer this question. If you notice some are missing, check to make sure a meta robots=noindex tag isn’t excluding pages that should be indexed and found in search results.
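The noindex check can also be automated across a list of pages. A rough sketch using only Python's standard library (the sample page below is illustrative; in practice you would feed it each page's fetched HTML):

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    # Sets .noindex when the page carries <meta name="robots" content="...noindex...">.
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = {k: (v or "") for k, v in attrs}
            if d.get("name", "").lower() == "robots" and "noindex" in d.get("content", "").lower():
                self.noindex = True

page = '<html><head><meta name="robots" content="noindex, follow"></head><body>Hi</body></html>'
detector = NoindexDetector()
detector.feed(page)
print("noindex present:", detector.noindex)
# → noindex present: True
```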
Check page titles & meta descriptions: Do your titles and meta descriptions do a good job of summarizing the content of each page? How are their CTRs in search results, according to Google Search Console? Are they written in a way that entices searchers to click your result over the other ranking URLs? Which pages could be improved? Site-wide crawls are essential for discovering on-page and technical SEO opportunities.
Page speed: How does your website perform on mobile devices and in Lighthouse? Which images could be compressed to improve load time?
Content quality: How well does the current content of the website meet the target market’s needs? Is the content 10X better than other ranking websites’ content? If not, what could you do better? Think about things like richer content, multimedia, PDFs, guides, audio content, and more.
Pro tip: Website pruning!
Removing thin, old, low-quality, or rarely visited pages from your site can help improve your website’s perceived quality. Performing a content audit will help you discover these pruning opportunities. Three primary ways to prune pages include: deleting the page (and serving a 404 or 410 status), 301-redirecting its URL to a more relevant page, or applying a noindex tag so search engines drop it while visitors can still reach it.
Keyword research and competitive website analysis (performing audits on your competitors’ websites) can also provide rich insights on opportunities for your own website.
- Which keywords are competitors ranking on page 1 for, but your website isn’t?
- Which keywords is your website ranking on page 1 for that also have a featured snippet? You might be able to provide better content and take over that snippet.
- Which websites link to more than one of your competitors, but not to your website?
Discovering website content and performance opportunities will help you devise a more data-driven SEO plan of attack! Keep an ongoing list in order to prioritize your tasks effectively.
Prioritizing your SEO fixes
In order to prioritize SEO fixes effectively, it’s essential to first have specific, agreed-upon goals established between you and your client.
While there are a million different ways you could prioritize SEO, we suggest you rank them in terms of importance and urgency. Which fixes could provide the most ROI for a website and help support your agreed-upon goals?
Stephen Covey, author of The 7 Habits of Highly Effective People, developed a handy time management grid that can ease the burden of prioritization:
Putting out small, urgent SEO fires might feel most effective in the short term, but this often leads to neglecting non-urgent important fixes. The not urgent & important items are ultimately what often move the needle for a website’s SEO. Don’t put these off.
SEO planning & execution
“Without strategy, execution is aimless. Without execution, strategy is useless.”
– Morris Chang
Much of your success depends on effectively mapping out and scheduling your SEO tasks. You can use free tools like Google Sheets to plan out your SEO execution (we have a free template here), but you can use whatever method works best for you. Some people prefer to schedule out their SEO tasks in their Google Calendar, in a kanban or scrum board, or in a daily planner.
Use what works for you and stick to it.
Measuring your progress along the way via the metrics mentioned above will help you monitor your effectiveness and allow you to pivot your SEO efforts when something isn’t working. Say, for example, you changed a primary page’s title and meta description, only to notice that the CTR for that page decreased. Perhaps you changed it to something too vague or strayed too far from the on-page topic — it might be good to try a different approach. Keeping an eye on drops in rankings, CTRs, organic traffic, and conversions can help you manage hiccups like this early, before they become a bigger problem.
Communication is essential for SEO client longevity
Many SEO fixes are implemented without being noticeable to a client (or user). This is why it’s essential to employ good communication skills around your SEO plan, the time frame in which you’re working, and your benchmark metrics, as well as frequent check-ins and reports.
Late last week (Feb 28 – Mar 1), we saw a sizable two-day spike in Google rankings flux, as measured by MozCast. Temperatures on Friday reached 108°F. The original temperature on Thursday was 105°F, but that was corrected down to 99°F (more on that later).
Digging in on Friday (March 1st), we saw a number of metrics shift, but most notable was a spike in page-one Google SERPs with more than 10 organic results. Across the 10,000 keywords in MozCast, here’s what we observed at the high end:
Counting “organic” results in 2019 is challenging — some elements, like expanded site-links (in the #1 position), Top Stories, and image results can occupy an organic position. In-depth Articles are particularly challenging (more on that in a moment), and the resulting math usually leaves us with page-one SERPs with counts from 4 to 12. Friday’s numbers were completely beyond anything we’ve seen historically, though, with organic counts up to 19 results.
Dissecting the 19-result SERP
Across 10K keywords, we saw 9 SERPs with 19 results. Below is one of the most straightforward (in terms of counting). There was a Featured Snippet in the #0 position, followed by 19 results that appear organic. This is a direct screenshot from a result for “pumpkin pie recipe” on Google.com/US:
Pardon the long scroll, but I wanted you to get the full effect. There’s no clear marker here to suggest that part of this SERP is a non-organic feature or in some way different. You’ll notice, though, that we transition from more traditional recipe results (with thumbnails) to what appear to be a mix of magazine and newspaper articles. We’ve seen something like this before …
Diving into the depths of in-depth
You may not think much about In-depth Articles these days. That’s in large part because they’re almost completely hidden within regular, organic results. We know they still exist, though, because of deep source-code markers and a mismatch in page-one counts. Here, for example, are the last 6 results from today (March 4th) on a search for “sneakers”:
Nestled among the more traditional e-commerce results at the end of page one (like Macy’s), you can see articles from FiveThirtyEight, Wired, and The Verge. It’s hard to tell from the layout, but this is a 3-pack of In-depth Articles, which takes the place of a single organic position. So, this SERP appears to have 12 page-one results. Digging into the results on March 1st, we saw a similar pattern, but those 3-packs had expanded to as many as 10 articles.
We retooled the parser to more flexibly detect In-depth Articles (allowing for packs with more than 3 results), and here’s what we saw for prevalence of In-depth Articles over the past two weeks:
Just under 23% of MozCast SERPs on the morning of March 1st had something similar to In-depth Articles, an almost 4X increase from the day before. This number returned to normal (even slightly lower) the next day. It’s possible that our new definition is too broad, and these aren’t really traditional “In-depth” packs, but then we would expect the number to stay elevated. We also saw a large spike in SERP “real-estate” shares for major publications, like the New York Times, which typically dominate In-depth Articles. Something definitely happened around March 1st.
By the new method (removing these results from organic consideration), the temperature for 2/28 dropped from 105°F to 99°F, as some of the unusual results were treated as In-depth Articles and removed from the weather report.
Note that the MozCast temperatures are back-dated, since they represent the change over a 24-hour period. So, the prevalence of In-depth articles on the morning of March 1st is called “3/1” in the graph, but the day-over-day temperature recorded that morning is labeled “2/28” in the graph at the beginning of this post.
Sorting out where to go from here
Is this a sign of things to come? It’s really tough to say. On March 1st, I reached out to Twitter to see if people could replicate the 19-result SERPs and many people were able to, both on desktop and mobile:
This did not appear to be a normal test (which we see roll out to something like 1% or less of searchers, typically). It’s possible this was a glitch on Google’s end, but Google doesn’t typically publicize temporary glitches, so it’s hard to tell.
It appears that the 108°F was, in part, a reversal of these strange results. On the other hand, it’s odd that the reversal was larger than the original rankings flux. At the same time, we saw some other signals in play, such as a drop in image results on page one (about 10.5% day-over-day, which did not recover the next day). It’s possible that an algorithm update rolled out, but there was a glitch in that update.
If you’re a traditional publisher or someone who generally benefits from In-depth Articles, I’d recommend keeping your eyes open. This could be a sign of future intent by Google, or it could simply be a mistake. For the rest of us, we’ll have to wait and see. Fortunately, these results appeared mostly at the end of page one, so top rankings were less impacted, but a 19-result page one would certainly shake up our assumptions about organic positioning and CTR.
Moz’s Domain Authority is requested over 1,000,000,000 times per year, it’s referenced millions of times on the web, and it has become a veritable household name among search engine optimizers for a variety of use cases, from determining the success of a link building campaign to qualifying domains for purchase. With the launch of Moz’s entirely new, improved, and much larger link index, we recognized the opportunity to revisit Domain Authority with the same rigor as we did keyword volume years ago (which ushered in the era of clickstream-modeled keyword data).
What follows is a rigorous treatment of the new Domain Authority metric. What I will not do in this piece is rehash the debate over whether Domain Authority matters or what its proper use cases are. I have and will address those at length in a later post. Rather, I intend to spend the following paragraphs addressing the new Domain Authority metric from multiple directions.
Correlations between DA and SERP rankings
The most important component of Domain Authority is how well it correlates with search results. But first, let’s get the correlation-versus-causation objection out of the way: Domain Authority does not cause search rankings. It is not a ranking factor. Domain Authority predicts the likelihood that one domain will outrank another. That being said, its usefulness as a metric is tied in large part to this value. The stronger the correlation, the more valuable Domain Authority is for predicting rankings.
Determining the “correlation” between a metric and SERP rankings has been accomplished in many different ways over the years. Should we compare against the “true first page,” top 10, top 20, top 50 or top 100? How many SERPs do we need to collect in order for our results to be statistically significant? It’s important that I outline the methodology for reproducibility and for any comments or concerns on the techniques used. For the purposes of this study, I chose to use the “true first page.” This means that the SERPs were collected using only the keyword with no additional parameters. I chose to use this particular data set for a number of reasons:
- The true first page is what most users experience, thus the predictive power of Domain Authority will be focused on what users see.
- By not using any special parameters, we’re likely to get Google’s typical results.
- By not extending beyond the true first page, we’re likely to avoid manually penalized sites (which can impact the correlations with links.)
- We did NOT use the same training set or training set size as we did for this correlation study. That is to say, we trained on the top 10 but are reporting correlations on the true first page. This prevents us from the potential of having a result overly biased towards our model.
I randomly selected 16,000 keywords from the United States keyword corpus for Keyword Explorer. I then collected the true first page for all of these keywords (completely different from those used in the training set). I extracted the URLs, and I also chose to remove duplicate domains (i.e., if the same domain occurred one after another). For a time, Google used to cluster domains together in the SERPs under certain circumstances. It was easy to spot these clusters, as the second and later listings were indented. No such indentations are present any longer, but we can’t be certain that Google never groups domains. If they do group domains, it would throw off the correlation, because it’s the grouping, and not the traditional link-based algorithm, doing the work.
I collected the Domain Authority (Moz), Citation Flow and Trust Flow (Majestic), and Domain Rank (Ahrefs) for each domain and calculated the mean Spearman correlation coefficient for each SERP. I then averaged the coefficients for each metric.
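For anyone who wants to reproduce this kind of measurement, here is a minimal sketch of a per-SERP Spearman coefficient in Python, using invented position and DA numbers; the actual study averaged these coefficients across 16,000 SERPs. Note the sign: a higher DA at a better (lower-numbered) position yields a negative coefficient, matching the note that the actual coefficients are negative.

```python
def spearman(xs, ys):
    # Spearman rank correlation (no-ties formula): rank both lists, then
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)).
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# One invented SERP: positions 1..5 vs. each result's Domain Authority.
positions = [1, 2, 3, 4, 5]
da_scores = [72, 65, 68, 40, 35]
print(round(spearman(positions, da_scores), 2))
# → -0.9
```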
Moz’s new Domain Authority has the strongest correlations with SERPs of the competing strength-of-domain link-based metrics in the industry. The sign (-/+) has been inverted in the graph for readability, although the actual coefficients are negative (and should be).
Moz’s Domain Authority scored ~.12, or roughly 6% stronger than the next best competitor (Domain Rank by Ahrefs). Domain Authority performed 35% better than Citation Flow and 18% better than Trust Flow. This isn’t surprising, in that Domain Authority is trained to predict rankings while our competitors’ strength-of-domain metrics are not. It shouldn’t be taken as a negative that our competitors’ strength-of-domain metrics don’t correlate as strongly as Moz’s Domain Authority; rather, it’s simply exemplary of the intrinsic differences between the metrics. That being said, if you want a metric that best predicts rankings at the domain level, Domain Authority is that metric.
Note: At first blush, Domain Authority’s improvements over the competition are, frankly, underwhelming. The truth is that we could quite easily increase the correlation further, but doing so would risk over-fitting and compromising a secondary goal of Domain Authority…
Handling link manipulation
Historically, Domain Authority has focused on one goal: maximizing the predictive capacity of the metric. All we wanted were the highest correlations. However, Domain Authority has become, for better or worse, synonymous with “domain value” in many sectors, such as among link buyers and domainers. Subsequently, as bizarre as it may sound, Domain Authority has itself been targeted with spam in order to bolster the score and sell domains at a higher price. While these crude link manipulation techniques didn’t work so well in Google, they were sufficient to increase Domain Authority. We decided to rein that in.
The first thing we did was compile a series of data sets that corresponded with industries we wished to impact, knowing that Domain Authority was regularly manipulated in these circles.
- Random domains
- Moz customers
- Blog comment spam
- Low-quality auction domains
- Mid-quality auction domains
- High-quality auction domains
- Known link sellers
- Known link buyers
- Domainer network
- Link network
While it would be my preference to release all the data sets, I’ve chosen not to in order to not “out” any website in particular. Instead, I opted to provide these data sets to a number of search engine marketers for validation. The only data set not offered for outside validation was Moz customers, for obvious reasons.
For each of the above data sets, I collected both the old and new Domain Authority scores. This was conducted all on February 28th in order to have parity for all tests. I then calculated the relative difference between the old DA and new DA within each group. Finally, I compared the various data set results against one another to confirm that the model addresses the various methods of inflating Domain Authority.
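The relative-difference step is simple enough to sketch. A toy example in Python with invented DA scores (not the actual study data):

```python
def relative_change(old_scores, new_scores):
    # Average relative difference between old and new DA across one data set.
    changes = [(new - old) / old for old, new in zip(old_scores, new_scores)]
    return sum(changes) / len(changes)

# Invented old/new DA pairs for three domains in a hypothetical data set.
old = [40, 50, 60]
new = [37, 47, 56]
print(f"{relative_change(old, new):+.1%}")
# → -6.7%
```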
In the above graph, blue represents the Old Average Domain Authority for that data set and orange represents the New Average Domain Authority for that same data set. One immediately noticeable feature is that every category drops; even the random domains set drops. This is a re-centering of the Domain Authority score and should cause no alarm to webmasters. There is, on average, a 6% reduction in Domain Authority for randomly selected domains from the web. Thus, if your Domain Authority drops a few points, you are well within the range of normal. Now, let’s look at the various data sets individually.
Random domains: -6.1%
Using the same methodology for finding random domains that we use for collecting comparative link statistics, I selected 1,000 domains and determined that there is, on average, a 6.1% drop in Domain Authority. It’s important that webmasters recognize this, as the shift is likely to affect most sites and is nothing to worry about.
Moz customers: -7.4%
Of immediate interest to Moz is how our own customers perform in relation to the random set of domains. On average, the Domain Authority of Moz customers lowered by 7.4%. This is very close to the random set of URLs and indicates that most Moz customers are likely not using techniques to manipulate DA to any large degree.
Link buyers: -15.9%
Surprisingly, link buyers only lost 15.9% of their Domain Authority. In retrospect, this seems reasonable. First, we looked specifically at link buyers from blog networks, which aren’t as spammy as many other techniques. Second, most of the sites paying for links are also optimizing their site’s content, which means the sites do rank, sometimes quite well, in Google. Because Domain Authority trains against actual rankings, it’s reasonable to expect that the link buyers data set would not be impacted as highly as other techniques because the neural network learns that some link buying patterns actually work.
Comment spammers: -34%
Here’s where the fun starts. The neural network behind Domain Authority was able to drop comment spammers’ average DA by 34%. I was particularly pleased with this one because of all the types of link manipulation addressed by Domain Authority, comment spam is, in my honest opinion, no better than vandalism. Hopefully this will have a positive impact on decreasing comment spam — every little bit counts.
Link sellers: -56%
I was actually quite surprised, at first, that link sellers on average dropped 56% in Domain Authority. I knew that link sellers often participated in link schemes (normally interlinking their own blog networks to build up DA) so that they can charge higher prices. However, it didn’t occur to me that link sellers would be easier to pick out because they explicitly do not optimize their own sites beyond links. Subsequently, link sellers tend to have inflated, bogus link profiles and flimsy content, which means they tend to not rank in Google. If they don’t rank, then the neural network behind Domain Authority is likely to pick up on the trend. It will be interesting to see how the market responds to such a dramatic change in Domain Authority.
High-quality auction domains: -61%
One of the features I’m most proud of in the new Domain Authority is that it addresses link manipulation in proportion to our intuition about quality. I created three different data sets out of one larger data set (auction domains), using qualifiers like price, TLD, and archive.org status to label each domain as high-quality, mid-quality, or low-quality. In theory, if the neural network does its job correctly, we should see the high-quality domains impacted the least and the low-quality domains impacted the most. That is exactly the pattern the new model rendered. High-quality auction domains dropped an average of 61% in Domain Authority. That seems really high for “high-quality” auction domains, but even a cursory glance at the backlink profiles of domains that are up for sale in the $10K+ range shows clear link manipulation. The domainer industry, especially the domainer-for-SEO industry, is rife with spam.
Link network: -79%
There is one network on the web that troubles me more than any other. I won’t name it, but it’s particularly pernicious because the sites in this network all link to the top 1,000,000 sites on the web. If your site is in the top 1,000,000 on the web, you’ll likely see hundreds of root linking domains from this network no matter which link index you look at (Moz, Majestic, or Ahrefs). You can imagine my delight to see that it drops roughly 79% in Domain Authority, and rightfully so, as the vast majority of these sites have been banned by Google.
Mid-quality auction domains: -95%
Continuing with the pattern regarding the quality of auction domains, you can see that “mid-quality” auction domains dropped nearly 95% in Domain Authority. This is huge. Bear in mind that these drastic drops are not combined with losses in correlation with SERPs; rather, the neural network is learning to distinguish between backlink profiles far more effectively, separating the wheat from the chaff.
Domainer networks: -97%
If you spend any time looking at dropped domains, you have probably come upon a domainer network where there are a series of sites enumerated and all linking to one another. For example, the first site might be sbt001.com, then sbt002.com, and so on and so forth for thousands of domains. While it’s obvious for humans to look at this and see a pattern, Domain Authority needed to learn that these techniques do not correlate with rankings. The new Domain Authority does just that, dropping the domainer networks we analyzed on average by 97%.
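Patterns this mechanical are easy to flag programmatically. A toy sketch in Python, using the sbt001.com-style naming described above (the sample domain list is invented):

```python
import re
from collections import Counter

def enumerated_groups(domains):
    # Group domains that share a letter prefix and differ only by a trailing
    # counter, e.g. sbt001.com, sbt002.com, ... as in a typical domainer network.
    pattern = re.compile(r"^([a-z]+)\d+\.(\w+)$")
    groups = Counter()
    for d in domains:
        m = pattern.match(d)
        if m:
            groups[(m.group(1), m.group(2))] += 1
    # Report only prefixes that appear more than once.
    return {f"{prefix}*.{tld}": n for (prefix, tld), n in groups.items() if n > 1}

sample = ["sbt001.com", "sbt002.com", "sbt003.com", "example.com", "moz.com"]
print(enumerated_groups(sample))
# → {'sbt*.com': 3}
```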
Low-quality auction domains: -98%
Finally, the worst offenders — low-quality auction domains — dropped 98% on average. Domain Authority just can’t be fooled in the way it has in the past. You have to acquire good links in the right proportions (in accordance with a natural model and sites that already rank) if you wish to have a strong Domain Authority score.
What does this mean?
For most webmasters, this means very little. Your Domain Authority might drop a little bit, but so will your competitors’. For search engine optimizers, especially consultants and agencies, it means quite a bit. The inventories of known link sellers will probably diminish dramatically overnight. High DA links will become far more rare. The same is true of those trying to construct private blog networks (PBNs). Of course, Domain Authority doesn’t cause rankings so it won’t impact your current rank, but it should give consultants and agencies a much smarter metric for assessing quality.
What are the best use cases for DA?
- Compare changes in your Domain Authority with your competitors. If you drop significantly more, or increase significantly more, it could indicate that there are important differences in your link profile.
- Compare changes in your Domain Authority over time. The new Domain Authority will update historically as well, so you can track your DA. If your DA is decreasing over time, especially relative to your competitors, you probably need to get started on outreach.
- Assess link quality when looking to acquire dropped or auction domains. Those looking to acquire dropped or auction domains now have a much more powerful tool in their hands for assessing quality. Of course, DA should not be the primary metric for assessing the quality of a link or a domain, but it certainly should be in every webmaster’s toolkit.
What should we expect going forward?
We aren’t going to rest. An important philosophical shift has taken place at Moz with regards to Domain Authority. In the past, we believed it was best to keep Domain Authority static, rarely updating the model, in order to give users an apples-to-apples comparison. Over time, though, this meant that Domain Authority would become less relevant. Given the rapidity with which Google updates its results and algorithms, the new Domain Authority will be far more agile as we give it new features, retrain it more frequently, and respond to algorithmic changes from Google. We hope you like it.
Be sure to join us on Thursday, March 14th at 10am PT at our upcoming webinar discussing strategies & use cases for the new Domain Authority:
Hands down, The Best Site To Buy Your Domain Name is NameSilo.com. They deal with all manner of TLDs (the letters after the last dot in a name, e.g. .com), they are consistently cheap to register, and they come with FREE privacy protection (which GoDaddy charges about $8 for). AND here’s the kicker … when you need to renew the next year:
GoDaddy’s .com is $17.99 + $8.99 for privacy, or $26.98 … NameSilo’s is $8.99 + free privacy.
Now that might not be a big deal if you have one .com address and you just like paying three times the price for stuff. But we are currently operating about 200 domains, and NameSilo.com will save Ultimate SEO about $3,598! There isn’t any other trade-off I can think of. GoDaddy piles on add-on services you don’t need: email (it usually comes with your hosting), website builders (whose sites look like website-builder sites), and hosting (I prefer AWS, which is what they use anyway).
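For the curious, that $3,598 figure is just the per-domain renewal gap multiplied out. A quick sanity check in Python, using the prices quoted above:

```python
# Renewal prices quoted above (per year, per .com domain).
godaddy = 17.99 + 8.99      # .com renewal plus paid privacy protection
namesilo = 8.99             # .com renewal with free privacy included
domains = 200               # roughly what Ultimate SEO operates

savings = (godaddy - namesilo) * domains
print(f"${savings:,.2f} saved per year across {domains} domains")
# → $3,598.00 saved per year across 200 domains
```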
Scroll to the bottom and you’ll find the complete listing of NameSilo’s renewal costs for domains.
Network Solutions is the absolute worst registrar I’ve used in twenty years of buying domain names. My experience ran from 2018 to 2019 with only one domain name; I was skeptical of them, so I bought just the one address as a test, and they failed at every opportunity and then some. I say “and then some” because three or four months before my name had to be renewed, they couldn’t bill the card on file (I didn’t want them to), so they emailed me. I ignored it … and they suspended the domain pending email verification!
I had been tipped off to issues from the start, when their website kept crashing and their forms kept forgetting what was entered into them; it took me multiple attempts to order the single name. I then pointed the DNS where I wanted, but nothing happened for three days, so I created a ticket and got the runaround until they finally turned it on. I lost a couple of weeks there. The domain then magically stopped working a week later and was down another week. They also have auto opt-in services you need to be watchful for, like email.
I had months of paid time that I didn’t get from them. Because I was ignoring their promotional crap, they turned me off until I verified my email again. Outrageous. I immediately started the transfer process out. Here are some more people’s Network Solutions stories … the worst place to buy a domain name.
The Second Worst Place To Buy Your Domain Name
Is with your hosting company. I prefer checks and balances, and your hosting company already holds all your files over your head … don’t give them your name too. It may be free, but for 8 bucks pay someone else, so that no matter what your hosting company says or does, you can take your name and run.
BTW, no one paid us or gave us anything for free; this is what Ultimate SEO actually thinks. Here are the prices from our pick for the Best Site To Buy Your Domain Name.