Why You Should Track Traffic, Not Rankings – SEO Theory


Rank Sculpting has become the new obsession among Web marketers. I’m not talking about PageRank Sculpting, which is the misinformed practice of using “nofollow” link attributes on your internal links to try to squeeze more PageRank toward certain pages.

No, Rank Sculpting is the practice of seeking out “the best rank trackers”. I see requests for opinions on which rank trackers work best in many online discussion groups every week. You’d think with so many people expressing frustration with their rank trackers, they would begin to realize there is something rotten in the rank tracking mythology.

Once upon a time, each search engine assigned about 1 ranking per Website per query. No matter where you were in the world or what device you were using to run a query, a search engine showed you the same thing it showed to everyone else. All that ended about 15 years ago (somewhere around 2005).

Unfortunately, despite the endless recycling of SEO tutorials and presentations about local search, mobile search, contextual search, entity search, and search ad nauseam, a sizable percentage of the Web marketing community continues to “watch their rankings” through various tools.

And yet these rankings are not real. I won’t call them made up or imaginary numbers (like the traffic estimates the rank checking tools often provide). But these rankings are not real rankings.

So choosing the best tool means sculpting the ranking report you want to see, even though it has nothing to do with reality.

Bing and Google Only Show You Average Rankings for Good Reason

Some people reject any average position report from Bing, Google, or other search engines out of hand for being unusable. These same people always cite the unreal ranking reports they pay for. They are quite proud of – or totally devastated by – whatever these ranking reports show them, even though those ranking reports don’t reflect reality.

If every city, every browser, every operating system, and every personal search history influences what each person who types in a query is shown, and those results are all different in some way, then how can a search engine report anything BUT an average position?

You wouldn’t be happy with the lowest position in a ranking report. Nor should you accept a highest position report. The average position for a query is about as close as you can get to reality.
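To see why an average is the only honest number, consider a minimal sketch of how such a figure could be computed. The data shape is an assumption for illustration — search engines don’t publish their aggregation internals — but the arithmetic is the point: one query produces many different observed positions, and only an impression-weighted mean summarizes them.

```python
# Hypothetical illustration: an "average position" report is an
# impression-weighted mean over many differing result pages.
def average_position(impressions):
    """impressions: list of (position, impression_count) pairs recorded
    across locations, devices, and personalization states."""
    total = sum(count for _, count in impressions)
    if total == 0:
        return None
    return sum(pos * count for pos, count in impressions) / total

# One query seen at position 3 by most users, position 8 by a few,
# and position 1 for a handful of personalized results.
ranks = [(3, 700), (8, 200), (1, 100)]
avg = average_position(ranks)  # (2100 + 1600 + 100) / 1000 = 3.8
```

No single one of those three positions is “the” ranking; 3.8 is as close to reality as any report can get.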


But you need to be careful about how you interpret these average position reports.

The Average Position Changes Based on Reporting Criteria

If you change the scope of the report – say, from 7 days to 7 months, then your average position data may change, too.

If you look only at the average position reported by query, you may be looking at an average computed from the positions of 2 or more pages on your site. And it’s no accident that some SEO tool vendors have whipped up a frenzy of fear and trepidation over imaginary keyword cannibalization.
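The blending effect is easy to demonstrate. In this sketch (the rows and URLs are hypothetical, but the shape mirrors a typical performance-report export), grouping by query alone merges two of your pages into one misleading number, while grouping by query and page keeps them separate:

```python
# Sketch (assumed data shape): a query-level average can blend the
# positions of several of your own pages ranking for the same query.
from collections import defaultdict

rows = [
    # (query, page, position, impressions) - hypothetical export rows
    ("blue widgets", "/widgets/blue", 4, 500),
    ("blue widgets", "/blog/widget-guide", 9, 300),
]

def avg_by(rows, key_fn):
    agg = defaultdict(lambda: [0.0, 0])
    for query, page, pos, imp in rows:
        k = key_fn(query, page)
        agg[k][0] += pos * imp
        agg[k][1] += imp
    return {k: s / n for k, (s, n) in agg.items()}

per_query = avg_by(rows, lambda q, p: q)      # blends both pages
per_page = avg_by(rows, lambda q, p: (q, p))  # keeps them separate
# per_query["blue widgets"] -> (2000 + 2700) / 800 = 5.875
```

Neither page actually ranks at 5.875 — that number exists only in the report.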

I have yet to see people agree on a consistent definition for the phrase keyword cannibalization, but the general trend of these alarmist warnings is toward forbidding any site from having more than 1 page appear in a search result.

If a search algorithm consistently includes more than 1 page from your site in the results for the same query, why would you think the search engine is going to penalize you?

Average position may change because of people’s locations, especially if they are searching for a brick and mortar business from different locations.

The Search System Naturally Jumbles Up Data Used to Create Search Results

Search engines constantly crawl the Web to refresh their indexes. That means they constantly dump data they no longer want or need – and that data includes older images of pages they’ve crawled, information about links they extracted from those older pages, and other “signals” they use to choose content for inclusion in search results and for ranking that content.

On the other side of the table, Website publishers change content, delete or redirect content, and add new content all day long. Even if you don’t touch a Website for months or years, other Websites that appear in the same search results change over time. Some sites go offline, some sites abandon old topics, and new sites come along all the time.

And the link graph expands and contracts as if it’s breathing. In fact, all the data that search engines use expand and contract. Data respiration is a fact of life for search engines – they draw it in and spit it out all day long.

A search index is a vast ocean of information and information about information. There are tides, waves, crests, whirlpools, and other fluidic phenomena in the data. I’m not being metaphorical here. When you collect data about trillions of URLs on a continual basis, you can use the changes in that data to create a bobbing image of an ocean-like massive super data structure. The visualization of data respiration is quite dramatic. In fact, if you’ve ever watched a graphic equalizer while listening to a song or podcast, you’ve seen a very simplistic measurement of data respiration.

Imagine millions, billions of graphic equalizers pulsating every day in a vast pool – that is what a search system’s data looks like as it respirates.

Data Respiration Invalidates Real-time Measurement

You will never find a reliable rank tracking system because there is no system that can capture all the rankings at the same time. Well, the search engines extrapolate an average position based on all the rankings they record, but even Bing and Google don’t promise 100% accuracy.

If the people with the actual ranking data cannot give you what you want, why are you paying others to substitute unrealistic numbers for what you cannot have?

The data respiration problem is also the reason why you cannot reverse engineer Google’s search system (or Bing’s, Baidu’s, Yandex’s, or anyone else’s). If you’re only trying to analyze search systems based on the results they provide in-between algorithmic updates, you have at best a window of maybe 30 minutes to a few days, based on how often the search engineers change something.

And that’s just addressing the algorithmic changes. They may add new document classifiers on any given day – or modify one, or remove one from the system. All of these events count as algorithmic changes.

You can’t measure rankings, you can’t measure algorithmic signals, and you can’t measure the flow of value between documents in a search index. The system is in constant flux.

The Google that existed 1 hour before you read this article no longer exists. It will never exist again. Regardless of how much or how little data you collected 1 hour ago, it’s no longer synchronized with the search engine from which you obtained it.

The Only Reliable Metric You Have Is Traffic To Your Website

And unfortunately too many marketers obsess over a single source of traffic: Google search referrals. And yet, you don’t even have a single Google to deal with.

In the last 30 days, SEO Theory has received visitors from dozens of Google Websites in many different countries. People from all over Europe, Africa, and Asia read this blog every month. And they don’t all arrive here via one of Google’s country or language-specific searches.

There are a few Google search partners out there, like Ask.Com and several ISP search portals.

Bing’s partner network is even larger than Google’s partner network. Microsoft has done a great job of poaching search partners away from Google over the past 10 years – or maybe Google simply let those relationships end because it had grown so big.

Either way, if you want to measure your Bing search referrals you need a calculator to add up the numbers from all their search partners. Most analytics packages at least group the majority of Google search portals together, but Bing’s network partners are all reported separately.

Many Web marketers earn far more traffic from Bing than they realize, and yet they keep asking why Bing is such a bad search engine. Bing isn’t the problem with your reporting.
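Rolling those scattered partner referrers up into one bucket is straightforward once you decide which hostnames belong to the network. A minimal sketch follows — the hostname list is an assumption for illustration; real partner rosters change over time and vary by analytics package:

```python
# A minimal sketch of grouping separately-reported partner referrers
# into one "Bing network" total. The hostname set is an assumption;
# consult your own analytics data for the actual partner referrers.
BING_NETWORK = {"bing.com", "duckduckgo.com", "search.yahoo.com", "ecosia.org"}

def referral_totals(visits):
    """visits: list of (referrer_host, visit_count) pairs."""
    totals = {"bing_network": 0, "other": 0}
    for host, count in visits:
        bucket = "bing_network" if host in BING_NETWORK else "other"
        totals[bucket] += count
    return totals

visits = [("duckduckgo.com", 120), ("search.yahoo.com", 80), ("example.org", 40)]
# referral_totals(visits) -> {"bing_network": 200, "other": 40}
```

Do that addition once and the “Bing sends me nothing” complaint often evaporates.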

You Should Optimize for All Kinds of Search

Bing, Google, and their partners and competitors are not the only search engines you should be concerned with. Generally speaking, Google only drives somewhere between 5% and 10% of the Web’s traffic.

Most of the Web’s traffic comes from site search (mainly on ecommerce sites, but any large enough Website forces its visitors to use site search for navigation).

Social media search is another massive source of traffic. Every time you click on a hashtag on one of the social media platforms you trigger a query on their search engine. But social media search is about more than just hashtags and trending topics. The major platforms have very sophisticated advanced search features and they process billions of queries every day.

If you’re not tracking performance on search platforms other than Google, I guarantee you’re not getting very good performance from those platforms. And I’ve worked with companies that earn millions of visitors from non-Google sources. Those visitors drive those companies’ revenues.

It’s not humanly possible for one person to manage all those different search environments, but the Web is far greater than Google and you should be mindful of that. More importantly, you should cultivate as much non-Google search as you can. You never know when you’ll need those alternative traffic channels.


SEO tool vendors will continue selling whatever bells and whistles they can add to their services. They’ll share their honest opinions with you about how good and reliable their tools are. They – like all companies with products or services to sell – have a right to promote themselves.

But you don’t owe them anything. You’re not obligated to buy a car every time you drive by a car dealership. You can live a long, happy, productive life without ever climbing Mount Everest, going on a cruise, or buying a horse.

Just because someone tells you whatever they are selling is what you need doesn’t mean it is what you need.

In search engine optimization, rankings and page or domain quality metrics are at best estimates or opinions. They’re not facts. They’re not the data by which you should live. And they won’t provide you with insight into how the search systems work.

Worst of all, if you only look at rankings when you think there was a major algorithmic event – and you don’t look at your search referral data – you might as well be serving squash in an apple pie baking contest. Rankings are as relevant to Website traffic as camels are to fuel injection.

That’s an indisputable fact, not an opinion, and you don’t have to pay anyone for it.
