
Posted by randfish

In early June of this year, SEOmoz released some ranking correlation data about Google’s web results and how they mapped against specific metrics. This exciting work gave us valuable insight into Google’s rankings system and both confirmed many assumptions as well as opened up new lines of questions. When Google announced their new Places Results at the end of October, we couldn’t help but want to learn more.

In November, we gathered data for 220 search queries – 20 US cities and 11 business "types" (different kinds of queries). This dataset is smaller than our web results dataset, and was intended to be an initial data gathering project before we dove deeper, but our findings proved surprisingly significant (from a statistical standpoint) and thus, we’re making the results and report publicly available.

As with our previous collection and analysis of this type of data, it’s important to keep a few things in mind:

  1. Correlation ≠ Causation – the findings here are merely indicative of what high ranking results are doing that lower ranking results aren’t (or, at least, are doing less of). It’s not necessarily the case that any of these factors cause the higher rankings; they could merely be a side effect of pages that perform better. Nevertheless, it’s always interesting to know what higher ranking sites/pages are doing that their lower ranking peers aren’t.
  2. Statistical Significance – the report specifically highlights results that are more than two standard errors away from zero (a 98%+ chance of a non-zero correlation). Many of the factors we measured fall into this category, which is why we’re sharing despite the smaller dataset. In terms of the correlation numbers, remember that 0.00 is no correlation and 1.0 is perfect correlation. In our opinion, in algorithms like Google’s, where hundreds of factors are supposedly at play together, data in the 0.05-0.1 range is interesting and data in the 0.1-0.3 range is potentially worth more significant attention.
  3. Ranked Correlations – the correlations compare pages that ranked higher vs. those that ranked lower, and the datasets in the report and below report average correlations across the entire dataset (except where specified), with standard error as a metric for accuracy (a minimal sketch of this computation follows this list).
  4. Common Sense is Essential – you’ll see some datapoints, just like in our web results set, suggesting that sites not following the commonly held "best practices" (like using the name of the queried city in your URL) rank better. We strongly urge readers to use this data as a guideline, but not a rule (for example, it could be that many results using the city name in the URL are national chains with multiple "city" pages, and thus aren’t as "local" in Google’s eyes as their peers).
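
For the statistically inclined, here is a minimal sketch (with invented numbers, not our actual analysis code) of the ranked-correlation approach described in point 3: for each query, correlate ranking position with a candidate metric using Spearman’s rho, then average across queries and report the standard error of that mean.

```python
# Illustrative sketch only (invented numbers): per-query Spearman correlations,
# averaged across queries, with the standard error of the mean.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical metric values for the results ranked 1..7 on each query
queries = {
    "seattle restaurants": [64, 58, 61, 40, 37, 29, 25],
    "austin car wash":     [31, 44, 22, 19, 21, 12, 9],
    "denver attorneys":    [88, 72, 80, 55, 43, 47, 30],
}

correlations = []
for metric_values in queries.values():
    ranks = np.arange(1, len(metric_values) + 1)   # 1 = top ranked result
    # Negate ranks so a positive rho means "ranks higher -> more of the metric"
    rho, _ = spearmanr(-ranks, metric_values)
    correlations.append(rho)

correlations = np.array(correlations)
mean_corr = correlations.mean()
std_err = correlations.std(ddof=1) / np.sqrt(len(correlations))
print(f"mean correlation: {mean_corr:.3f}  standard error: {std_err:.3f}")
# A mean more than ~2 standard errors from zero is what the report treats as significant.
```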

With those out of the way, let’s dive into the dataset, which you can download a full version of here:

  • The 20 cities included:
    • Indianapolis
    • Austin
    • Seattle
    • Portland
    • Baltimore
    • Boston
    • Memphis
    • Denver
    • Nashville
    • Milwaukee
    • Las Vegas
    • Louisville
    • Albuquerque
    • Tucson
    • Atlanta
    • Fresno
    • Sacramento
    • Omaha
    • Miami
    • Cleveland
  • The 11 Business Types / Queries included:
    • Restaurants
    • Car Wash
    • Attorneys
    • Yoga Studio
    • Book Stores
    • Parks
    • Ice Cream
    • Gyms
    • Dry Cleaners
    • Hospitals

Interestingly, the results we gathered seem to indicate that the Google Places ranking algorithm doesn’t differ much across cities, but when business/query types are considered, there are indications that Google may indeed be changing how the rankings are calculated (an alternative explanation is that different business segments simply have dramatically different weights on the factors depending on their type).

For this round of correlation analysis, we contracted Dr. Matthew Peters (who holds a PhD in Applied Math from Univ. of WA) to create a report of his findings based on the data. In discussing the role that cities/query types played, he noted:

City is not a significant source of variation for any of the variables, suggesting that Google’s algorithm is the same for all cities. However, for 9 of the 24 variables we can reject the null hypothesis that business type is not a significant source of variation in the correlation coefficients at a=0.05. This is highly unlikely to have occurred by chance. Unfortunately there is a caveat to this result. The results from ANOVA assume the residuals to be normally distributed, but in most cases the residuals are not normal as tested with a Shapiro-Wilk test.

You can download his full report here.
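
To make the quoted methodology concrete, here is a rough, hypothetical illustration (not Dr. Peters’ actual code, and all numbers are invented) of a one-way ANOVA on correlation coefficients grouped by business type, followed by the Shapiro-Wilk normality check on the residuals that he flags as the caveat.

```python
# Hypothetical illustration: does business type explain variation in
# per-query correlation coefficients? All numbers here are made up.
import numpy as np
from scipy.stats import f_oneway, shapiro

by_type = {
    "restaurants": [0.21, 0.18, 0.25, 0.19],
    "attorneys":   [0.08, 0.12, 0.05, 0.10],
    "hospitals":   [0.15, 0.22, 0.17, 0.20],
}

f_stat, p_value = f_oneway(*by_type.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> reject the null

# ANOVA assumes normally distributed residuals; Shapiro-Wilk tests that assumption.
residuals = np.concatenate([np.asarray(v) - np.mean(v) for v in by_type.values()])
w_stat, p_norm = shapiro(residuals)
print(f"Shapiro-Wilk on residuals: W = {w_stat:.2f}, p = {p_norm:.4f}")
# A small p here means the residuals look non-normal -- the caveat noted above.
```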

Next, let’s look at some of the more interesting statistical findings Matt discovered. These are split into 4 unique sections, and we’re looking only at the correlations with Places results (though the data and report also include web results).

Correlation with Page-Specific Link Popularity Factors

Google Places Correlations with Page-Specific Link Popularity Elements

With the exception of PageRank, all data comes via SEOmoz’s Linkscape data API.

NOTE: In this data, mozRank and PageRank are not significantly different than zero.

Domain-Wide Link Popularity Factors

Google Places Domain Link Factor Correlations

All data comes via SEOmoz’s Linkscape data API.

NOTE: In this data, all of the metrics are significant.

Keyword Usage Factors

Google Places Keyword Usage Correlations 

All data comes directly from the results page URL or the Places page/listing. Business keyword refers to the type, such as "ice cream" or "hospital," while city keyword refers to the location, such as "Austin" or "Portland." The relatively large, negative correlation with the city keyword in URLs is an outlier (as no other element we measured for local listings had a significant negative correlation). My personal guess is that nationwide sites trying to rank individually on city-targeted pages don’t perform as well as local-only results in general, and this could cause that bias, but we don’t have evidence to prove that theory and other explanations are certainly possible.

NOTE: In this data, correlations for business keyword in the URL and city keyword in the title element were not significantly different than zero.

Places Listings, Ratings + Reviews Factors

Google Places Listings Correlations

All data comes directly from Google Places’ page about the result.

NOTE: In this data, all of the metrics are significant. 

Interesting Takeaways and Notes from this Research:

  • In Places results, domain-wide link popularity factors seem more important than page-specific ones. We’ve heard that links aren’t as important in local/places and the data certainly suggest that’s accurate (see the full report to compare correlations), but they may not be completely useless, particularly on the domain level.
  • Using the city and business type keyword in the page title and the listing name (when claiming/editing your business’s name in the results) may give a positive boost. Results using these keywords seem to frequently outrank their peers. For example: Portland Attorneys Places Results
     
  • More is almost always better when it comes to everything associated with your Places listing – more related maps, more reviews, more "about this place" results, etc. However, this metric doesn’t appear as powerful as we’d initially thought. It could be that the missing "consistency" metric is a big part of why the correlations here weren’t higher.
  • Several things we didn’t measure in this report are particularly interesting and it’s sad we missed them. These include:
    • Proximity to centroid (just tough to gather for every result at scale)
    • Consistency of listings (supposedly a central piece of the Local rankings puzzle) in address, phone number, business name, type
    • Presence of specific listing sources (like those shown on GetListed.org for example)
  • This data isn’t far out of whack with the perception/opinions of Local SEOs, which we take to be a good sign, both for the data, and the SEOs surveyed :-)

Our hope is to do this experiment again with more data and possibly more metrics in the future. Your suggestions are, of course, very welcome.


As always, we invite you to download the report and raw data and give us any feedback or feel free to do your own analyses and come to your own conclusions. It could even be valuable to use this same process for results you (or your clients) care about and find the missing ingredients between you and the competition.

p.s. Special thanks to Paris Childress and Evgeni Yordanov for help in the data collection process.



SEOmoz Daily SEO Blog
Google the Answer Engine

One of the topics that emerged from Pubcon was "Should SEOs Focus on Where Google is Heading," and I’m going to agree with Aaron that focusing on short term algorithmic holes isn’t a smart thing for most people (churn and burn folks: you keep on keeping on). I agree that most publishers should focus on where Google is going. However, the one thing I think publishers need to be aware of, and wary of, is Google’s transition to becoming an answer engine.

this represents a clear and present danger to every web publisher…

When I refer to Google trying to become an answer engine, what exactly do I mean? I mean that Google will provide the answer right on the SERP itself if possible and, more frequently, from a Google-owned or Google-maintained property. What exactly do I mean by that? I would be willing to bet that at least one Googler is hunched over a monitor somewhere trying to figure out how to convert voice searches into standardized results. Get out your best Jean-Luc Picard impersonation, grab your Android phone, and say “COMPUTER … Show me airline prices from Los Angeles to Las Vegas on March 15th.” Now imagine that Google, using its recently acquired ITA travel data, could show you the 5 cheapest flights without needing to send you to the airline, Travelocity, or any of the other intermediaries. Good for Google, good for the user … but scary if you are a publisher.

Google has been moving in this direction for years with queries like [what is george washington's birthday]

There’s no need for the person performing that query to visit any website because Google became the answer machine.  Earlier this year, they began making inroads in commercial searches for things like [mortgage rates]

Google’s latest incursion into becoming the answer machine came from its local results when they began stealing … err aggregating … reviews from other sites and mixing them with their own on place pages.

IMHO this represents a clear and present danger to every web publisher. For a while, Google will be content to let publishers keep serving the information that Google hasn’t figured out how to gather efficiently/profitably, even if that means referring users to low quality, Demand Media-style pages from About.com and eHow.com. However, there’s no doubt in my mind that once Google thinks they can do better, they will scrape your data and throw you under the bus without a second thought … ’cause it’s all about the users, right?

The one exception that may leave you a leg to stand on is if you are a brand and are building some sense of brand loyalty. If users type in [<brand name> + <keyword phrase>], Google will show fewer “Google answers.” For example, [george washington's birthday wikipedia] and [bank of america mortgage rates] contain none of the Google properties. Of course, it would seem to me that this is a massive conflict of interest as far as Google is concerned, but I’m not a legislator, so what do I know.

The days of being a pure affiliate and building sites without any thought to branding are coming to a close. They will never disappear completely, but there will be fewer of them. Purely keyword-based traffic without a hint of branding is going to become more competitive and, in some cases, you will be competing with Google itself or with Google-owned properties like Boutiques.com. Heed these warnings, Caesar, and fear the Ides of March …
Creative Commons License photo credit: Michal Osmenda



Michael Gray – Graywolf’s SEO Blog

Generally I have not been a huge fan of registering all your websites with Google (profiling risks, etc.), but they keep using the carrot nicely to lead me astray. :D … So much so that I want to find a Googler and give them a hug.

Google recently decided to share some more data in their webmaster tools. And for many webmasters the data is enough to make it worth registering (at least 1 website)!

AOL Click Data

When speaking of keyword search volume breakdown data, people have typically shared information from the leaked AOL search data.

The big problem with that data is it is in aggregate. It is a nice free tool, and a good starting point, but it is fuzzy.

Types of Searches

There are 3 well known search classifications: navigational, transactional, and informational. Each type of query has a different traffic breakdown profile.

  • In general, for navigational searches people click the top result more often than they would on an informational search.
  • In general, for informational searches people tend to click throughout the full set of search results at a more even distribution than they would for navigational or transactional searches.
  • The only solid, recently shared public data on those breakdowns is from Dogpile [PDF], a meta search engine. But given how polluted meta search services tend to be (with ads mixed into their search results), those numbers were quite a bit off from what one might expect. And once again, they are aggregate numbers.

Other Stuff in the Search Results

Further, anecdotal evidence suggests that the appearance of vertical / universal results within the search results set can impact search click distribution. Google shows maps on 1 in 13 search results, and they have many other verticals they are pushing – video, updates, news, product search, etc. And then there are AdWords ads – which many searchers confuse as being the organic search results.

Pretty solid looking estimates can get pretty rough pretty fast. ;)

The Value of Data

If there is one critical piece of marketing worth learning above all others it is that context is important.

My suggestions as to what works, another person’s opinions or advice on what you should do, and empirical truth collected by a marketer who likes to use numbers to prove his point … well all 3 data sets fall flat on their face when compared against the data and insights and interactions that come from running your own business. As teachers and marketers we try to share tips to guide people toward success, but your data is one of the most valuable things you own.

A Hack to Collect Search Volume Data & Estimated CTR Data

In their Excel plug-in, Microsoft shares the same search data they use internally, but it’s not certain that once the Yahoo! Search deal is integrated Microsoft will keep sharing as much data as they do now.

Google offers numerous keyword research tools, but getting them to agree with each other can be quite a challenge.

There have been some hacks to collect organic search clickthrough rate data on Google. One of the more popular strategies was to run an AdWords ad for the exact match version of a keyword and bid just enough to appear on the first page of results. Keep the ad running for a while and then run an AdWords impression share report. With that data in hand you can estimate how many actual searches there were, and then compare your organic search clicks against that to get an effective clickthrough rate.
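
As a rough, hypothetical illustration of that arithmetic (the figures below are placeholders, not real campaign data):

```python
# Back-of-the-envelope version of the impression share hack described above.
# Plug in your own AdWords and analytics numbers; these are placeholders.

adwords_impressions = 1200    # impressions your exact-match ad received
impression_share = 0.15       # from the AdWords impression share report (15%)
organic_clicks = 640          # organic visits for that keyword from analytics

# If your ad was shown for 15% of eligible searches, total searches were roughly:
estimated_searches = adwords_impressions / impression_share   # ~8,000

effective_ctr = organic_clicks / estimated_searches
print(f"Estimated searches: {estimated_searches:,.0f}")
print(f"Effective organic CTR: {effective_ctr:.1%}")          # ~8.0%
```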

The New Solution

Given search personalization and localization and the ever-changing result sets with all the tests Google runs, even the above can be rough. So what is a webmaster to do?

Well, Google upgraded the data they share inside their webmaster tools, which now includes (on a per-keyword level):

  • keyword clickthrough rate
  • clickthrough rate at various ranking positions
  • the URL that was clicked on

Trophy Keywords vs Brand Keywords

Even if your site is rather well known, going after some of the big keywords can be a bit self-defeating in terms of the value delivered. Imagine ranking #6 or #7 for SEO. Wouldn’t that send a lot of search traffic? Nope.

When you strip away the ego searches, the rank checkers, etc., it turns out that there isn’t a ton of search volume to be had ranking on page 1 of Google for SEO.

With only a 2% CTR, the core keyword SEO drives less than half the traffic driven by our 2 most common brand search keywords. Our brand might not seem like it is getting lots of traffic with only a few thousand searches a month, but when you have a > 70% CTR that can still add up to a lot of traffic. More importantly, that is the kind of traffic which is more likely to buy from you than someone searching for a broad discovery or curiosity type of keyword.
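
To illustrate the arithmetic with made-up volumes (these are not our actual numbers):

```python
# Illustrative math only -- invented search volumes showing why a high-CTR
# brand keyword can beat a low-CTR "trophy" keyword on actual visits.

trophy_searches, trophy_ctr = 30_000, 0.02   # e.g. ranking mid-page-1 for a big keyword
brand_searches, brand_ctr   = 3_000, 0.70    # a branded query you dominate

print("trophy keyword visits:", int(trophy_searches * trophy_ctr))   # 600
print("brand keyword visits: ", int(brand_searches * brand_ctr))     # 2100
```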

The lessons for SEOs in that data?

  • Core keywords & raw mechanical SEO are both frequently over-rated in terms of value.
  • Rather than sweating over trying to rank well for the hardest keywords, first focus on more niche keywords that are easy to rank for.
  • If you have little ranking work to do, then there is lots of time to focus on giving people reasons to talk about you and reference you.
  • Work on building up brand & relationships. This not only gives your link profile more karma, but it sends you a steady stream of leads for if/when you fall out of favor a bit with the search engines. Those who perceive you well will seek you out and buy from you. But it is much harder to sell to someone who sees you as just another choice amongst many results.

Search is becoming the default navigational tool for the web. People go to Google and then type in “yahoo.” If you don’t have a branded keyword as one of your top keywords that might indicate long-term risk to your business. If a competitor can clone most of what you are doing and then bake in a viral component you are toast.

Going After the Wrong Brand Keywords

Arbitraging 3rd party brands is an easy way to build up distribution quickly. This is why there are 4,982 Britney Spears fan blogs (well 2 people are actually fans, but the other 4,980 are marketers).

But if you want to pull in traffic you have to go after a keyword that is an extension of the brand. Ranking for “eBay” probably won’t send you much traffic (as their clickthrough rate on their first result is probably even higher than the 70% I had above). Though if you have tips on how to buy or sell on eBay those kinds of keywords might pull in a much higher clickthrough rate for you.

To confirm the above I grabbed data for a couple SEO tool brands we rank well for. A number 3 ranking (behind a double listing) and virtually no traffic!

Different keyword, same result

Informational Keywords

Link building is still a bit of a discovery keyword, but I think it is perhaps a bit later-stage than just the acronym “SEO.” Here the click volume distribution is much flatter / less consolidated than it was in the above brand-oriented examples.

If, when Google lowers your rank, you still pull in a fairly high CTR, that might be a signal to them that your site should rank a bit higher.

Enough Already!

Enough about our keywords, what does your keyword data tell you? How can you better integrate it to grow your business?

SEO Book.com – Learn. Rank. Dominate.

Posted by randfish

As of yesterday, both Bing and Google have confirmed (via an excellent interview by Danny Sullivan) that links shared through Twitter and Facebook have a direct impact on rankings (in addition to the positive second-order effects they may have on the link graph). This has long been suspected by SEOs (in fact, many of us posited it was happening as of November of last year following Google + Bing’s announcements of partnerships with Twitter), but getting this official confirmation is a substantive step forward.

In addition to that revelation, another piece of critical data came via yesterday’s announcement:

Danny Sullivan: If an article is retweeted or referenced much in Twitter, do you count that as a signal outside of finding any non-nofollowed links that may naturally result from it?

Bing: We do look at the social authority of a user. We look at how many people you follow, how many follow you, and this can add a little weight to a listing in regular search results. It carries much more weight in Bing Social Search, where tweets from more authoritative people will flow to the top when best match relevancy is used.

Google: Yes, we do use it as a signal. It is used as a signal in our organic and news rankings. We also use it to enhance our news universal by marking how many people shared an article.

Danny Sullivan: Do you try to calculate the authority of someone who tweets that might be assigned to their Twitter page. Do you try to “know,” if you will, who they are?

Bing: Yes. We do calculate the authority of someone who tweets. For known public figures or publishers, we do associate them with who they are. (For example, query for Danny Sullivan)

Google: Yes we do compute and use author quality. We don’t know who anyone is in real life :-)

Danny Sullivan: Do you calculate whether a link should carry more weight depending on the person who tweets it?

Bing: Yes.

Google: Yes we do use this as a signal, especially in the “Top links” section [of Google Realtime Search]. Author authority is independent of PageRank, but it is currently only used in limited situations in ordinary web search.

We now know that those link sharing activities on Twitter + Facebook are evaluated based on the person/entity sharing them through a score Google calls "Author Authority," and Bing calls "Social Authority."

We can probably predict a lot of the signals the search engines care about when it comes to social sharing; some of my guesses include:

  • Diversity of Sources – having 50 tweets of a link from one account, like having 50 links from one site, is not nearly as valuable as 50 tweets from 50 unique accounts.
  • Timing – sharing that occurs when an RSS feed first publishes a story may be valuable in QDF, but tweets/shares of older pieces could be seen as more indicative of lasting value and interest (rather than just sharing what’s new).
  • Surrounding Content – the message(s) accompanying the link may give the engines substantive information about their potential relevance and topic; it could even fill the gap that’s left by the lack of anchor text, particularly on Twitter.
  • Engagement Level – the quantity of clicks, retweets, likes, etc. (if/when measurable) could certainly impact how much weight is given to the link.

We can probably also take a stab at some of the signals Google + Bing use for Author/Social Authority in the context of the sharing/tweeting source (a toy scoring sketch follows this list):

  • Quantity of Friends/Followers – like links, it’s likely the case that more is better, though there will likely be caveats; low quality bots and inauthentic accounts are likely to be filtered (and may be much easier to spot than spammy links, due to the challenge they find in getting any "legitimate" friends/followers).
  • Importance of Friends/Followers – the friends/followers you have, like the link sources you have, are also probably playing a role. Earn high "authority" followers and you yourself must be a high authority person.
  • Analysis of Friends/Followers Ratios – Much like the engines’ analysis of the editorial nature of links, consideration of whether a social user is engaging in following/follower behavior purely out of reciprocity vs. true interest and engagement may be part of authority scoring. If you have 100K followers and follow 99K of them, but the engagement between you and your followers is slim, you’re likely not as authoritative as an account with 100K followers + 5K following, but those followers are constantly engaged, retweeting, liking, sharing, etc. 
  • Topic Focus / Relevance – The consistency or patterns between your sharing behaviors could also be a consideration, using topic analysis, patterns in the sources of shared/tweeted links, etc. Being an "authority" could even be subject-specific, such that when a prominent SEO tweets links to celebrity news it has less of an impact than when they tweet links to a web marketing resource.
  • Association Bias – I suspect Google and Bing do a good job of associating social authors with the sites/domains they’re "part of" vs. independent from. Sometimes, this might be as easy as looking at the URL associated with the account, other times it could be based on patterns like where you most often tweet/share links to or whether your account is listed on pages from that site. Basically, if @randfish tweets links to *.seomoz.org, that probably means less than when I tweet links to bitlynews or when someone outside the company tweets links to SEOmoz.
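
Purely as a thought experiment, here is a toy scoring function showing how signals like these might be combined; the weights and formula are entirely my own invention and say nothing about how Google or Bing actually compute authority.

```python
import math

# Toy, speculative "author authority" score -- arbitrary weights for illustration,
# not how Google or Bing actually compute anything.
def toy_author_authority(followers: int, following: int,
                         engagement_rate: float, topic_relevance: float) -> float:
    reach = min(math.log10(followers + 1) / 7.0, 1.0)   # dampen raw follower counts
    ratio = followers / (followers + following + 1.0)    # penalize pure reciprocity
    return 0.4 * reach + 0.2 * ratio + 0.25 * engagement_rate + 0.15 * topic_relevance

# 100K followers / 99K following with weak engagement vs. 100K / 5K with strong engagement
print(toy_author_authority(100_000, 99_000, engagement_rate=0.05, topic_relevance=0.6))
print(toy_author_authority(100_000, 5_000, engagement_rate=0.60, topic_relevance=0.8))
```

Under these made-up weights, the engaged account with the healthier follower/following ratio scores noticeably higher, which mirrors the reasoning in the list above.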

These signals represent my opinions only, and while it’s very likely that at least some are being used, it’s even more likely that there are many more that aren’t listed above. Over time, hopefully we’ll discover more about the impact of social sharing on web rankings and how we can best combine SEO + social media marketing.

To me, the most exciting part about this is the potential to reduce webspam and return to a more purely editorial model. While people often link to, read and enjoy sources that link out manipulatively, very few of us will be likely to follow a Twitter account, friend someone on Facebook, or "like" something in a social site that’s inauthentic, manipulative or spammy.
The social graph isn’t necessarily cleaner, but the complexity of spam is far lower.

Here’s to the evolution of organic marketing – search, social, content, blogs, links – it’s all coming together faster than ever before, and that’s a very good thing for holistically minded web marketers.



SEOmoz Daily SEO Blog

Google is growing *far* more complex.

Page x can rank based primarily on the criteria for page y from that same site. So if you analyze the data behind the page which is showing up in the search results, in some cases you will be looking at the wrong data sets!

Google has typically displayed page titles in the search results. However there are now numerous exceptions to that.

Google has aggressively pushed into regionalization and localization, but sometimes they miscategorize a website’s target market or a user’s location … delivering irrelevant search results.

Sometimes Google pulls data from 3rd party data sources and lists that next to your listing. I mean, sure they have used DMOZ historically, but why exactly are they showing my site as having Russian text on it?

As Google grows in complexity, the number of bugs in their system multiply. Sometimes you don’t rank because you screwed up. But sometimes you don’t rank because Google screwed up. Typically Google sees minimal difference either way, as there will always be another website to fill up the search results. But as a business owner, when Google gets it wrong you can be screwed pretty bad, particularly if you stock physical inventory and have to tightly manage your supply chain & cash flow.

Recently Google created a news source attribution tag. If it works, it might be a good idea. But (even outside of spam) there are ways it can backfire.

Consider Google’s webmaster verification tags. One of our customers had a falling out with an old webmaster, who in turn did a sneaky change of address inside of Google Webmaster Tools over the weekend. After seeing traffic fall off a cliff, we figured out what happened & registered the site in Google Webmaster Tools. There are instructions on how to remove the former registered user; however, the option does not appear in my client’s account.

The redirect will allegedly be in place for 180 days! The only way to get around it is to ask for a review by the Google engineering team.

In the past, SEO was primarily about encouraging (and, perhaps, in some nefarious cases, coercing) Google into giving you a little more love. But now a big part of SEO is identifying & fixing Google bugs. And this is before you consider things like universal search, custom vertical result sets, search personalization, social layers, traffic shaping, changing crawling priorities, sites getting hacked, filters hit by aggressive SEO or competitive sabotage, more random filters, changing related words in search vocabularies, search query chains, Google Instant shifting keyword demand profiles, search engines competing against SEOs by monetizing the organic search results, basic/straightforward SEO concerns, technical crawling & indexing related issues, improper server configurations, new signals being added to search – like sentiment analysis – etc.

A big part of brand building in terms of SEO is not only to help you build up passive organic links (and to be able to charge a premium for your product), but it is also something which helps establish a bit of a competitive moat against algorithmic errors & makes it harder for Google to accidentally dump you. Further, if you have a brand & catch a Google bug, there is little risk in asking for a review. But if you do not have a brand, then even asking for a review could be risky.

Anyone who tells you that ‘SEO is easy’ across the board is either ignorant or exceptionally ignorant.

SEO Book.com – Learn. Rank. Dominate.

Posted by randfish

 If you use Google Analytics, you’ve undoubtedly seen a report like this:

Google Analytics Pie Chart

The problem is, there’s no breakdown of "social media" in this view of traffic sources, and with the dramatic rise of social media marketing, marketers need an easy way to segment and "see" this traffic separately from the rest of their referrers. We know it’s mixed in with "referring sites" and "direct traffic" but luckily, there’s a way to extract that data in just a few simple steps.

Step 1: Create a Custom Segment

Custom segments are the way to go for separating traffic into filter-able buckets for deeper analysis. GA makes this fairly painless:

Step 1

From any of the "Traffic Sources" sections, just click the "Advanced Segments" in the upper-right hand corner and then the link to "Create a new advanced segment."

Step 2: Add Social Sources

This is the most crucial part, and requires that you have a full list of the sites/words to include. I don’t recommend using just the domain names or URLs of the most popular social sites, but instead, some clever "catch-all" words using the "source" condition, as shown below:

Step 2

Make sure to continue adding "OR" statements, not "AND" statements – the latter would require that both conditions be met vs. any one of the "ORs". Here’s the list of words I used, though you can certainly feel free to add to it (a small matching sketch follows the list):

  • twitter
  • tweet
  • facebook
  • linkedin
  • youtube
  • reddit
  • digg
  • delicious
  • stumbleupon
  • ycombinator
  • flickr
  • myspace
  • hootsuite
  • popurls
  • wikipedia

Depending on your niche, it might be valuable to run through your top 2-500 referring domains looking for any obvious matches. You could also refer to Wikipedia’s list of popular social sites.

Step 3: Test & Name Your Segment

In order to create a fully functional segment, you’ll want to test the logic you’ve created to be sure results are returning. Before you do that, though, GA requires naming your segment (I used "social media"):

Step 3

Once it’s complete and working properly, click "save segment." You’ll be returned to the prior screen with the segment ready to rumble.

Step 4: Filter Traffic by "Social Media"

Your new segment is ready to be applied. You can now filter social media exclusively or see it in comparison to other traffic sources on any report in GA. Just use the advanced segments drop-down and choose "social media" under the custom segments list like so:

Of course, just having data is useless unless there’s some action you can take from it. Segmenting social traffic is useful for reporting, particularly to gauge value (if you have action tracking on commercial activities set up in GA, for example) and see growth/impact over time. But, there’s more you can learn than just raw traffic and conversions numbers.

Here are some examples of reports I ran, along with the value/intelligence extracted from the data:

It can be tough to "see" the social sites among the other referring domains, but once they’re broken out, combing through and finding the sites where your efforts are working is vastly simpler. If you then compare this against traffic "opportunity" from these sites (using a combination of traffic data and gut check), you’ll be able to find which sites have the greatest chance to improve. For SEOmoz, Facebook, LinkedIn, Reddit and Wikipedia stand out to me as places where we likely have more opportunity than we’re currently capturing.

This next chart compares search vs. social traffic over time:

If I’m looking to evaluate progress and make comparisons, this view is fairly useful. I can tell if my share of social media is growing or shrinking and how it compares to overall traffic and search individually. I’m only looking at a short timeframe here, but over the course of weeks or months, I can quickly gauge whether my efforts in social are paying off with traffic and whether they’re improving my performance in search engines (through new links, citations, etc). When someone asks if social helps search, showing these two segments over time can be persuasive.

Next, I’m reviewing the level of engagement of social media visitors:

At first, I can compare this against other segments (like "search" or "direct") as a measure of comparative value. But, I also want to compare this over time, particularly if I’m making tweaks to my site to encourage greater engagement and click-through to see if those efforts are successful.

Just because I’m curious, I’ll check out some browser stats:

 

Admittedly, this isn’t especially actionable, but it is fascinating to see the browser "savvy" of social users. Dominated by Firefox and Chrome with very little Internet Explorer use. If I’m trying to see what the cutting edge users are shifting towards, this is where to look. I suspect Rockmelt will soon be joining the list. (BTW – I love that 5 people came with the user-agent "Googlebot" – awesome).

Last, let’s peek at the pages social visitors see:

These are all potential opportunities to create more customized landing experiences based on the referrer path, and the report can also give me insight about what content I need to continue producing if I want to draw in more social traffic. 


If social media marketing is a focus of your organization, segmenting that traffic in reporting is critical to determining the value of your efforts and improving. So get into GA, segment, and start seeing your traffic for what it really is. 



SEOmoz Daily SEO Blog

Interesting little snippet from Mr Cutts:

“Matt recommends SEOs do not “chase the algorithm” and instead try to predict where Google will be going in the future”. Matt was addressing PubCon.

Good advice, methinks.

Trying to predict where Google is going is something we do a lot of at SEOBook.com. Whilst no one has a crystal ball, it’s good practice to keep one eye on the search horizon.

So, where do we think Google might be heading?

Google Will Continue To Dominate Search

Easy one, huh.

Their biggest competitors appear clueless when it comes to search. Bing may make some inroads. Maybe. It’s hard to imagine anyone eating Google’s lunch when it comes to search, for many years to come.

Is Facebook a threat? I doubt it. Search is difficult, and I can see no reason why Facebook – which has a media focus – could own the search channel any more than Yahoo could.

Search is, after all, an infrastructure problem. Google’s infrastructure would be very difficult to replicate.

Google Won’t Be Doing All That Much About Blackhat Sites

A search result set only really contains spam if the Google users think it contains spam i.e. they don’t see the answer they were expecting.

The fact a website may fall outside Google’s guidelines might get competing webmasters’ knickers in a knot, but it probably doesn’t matter that much to Google, or anyone else.

Even though Matt Cutts says Google will devote more resources to this, I suspect Google’s efforts will largely remain focused on outright deception i.e. misrepresentation, hijacking and malware.

The Web Reflects Power Structures

We can forget the San Fran techno-hippy ethos of the web. It will not be a free-for-all democracy, if it ever was. History shows us that power tries to centralize control in order to maintain it.

Google may try to keep users on Google for longer. They do this by owning more and more verticals, and extracting data and reformatting it. When they send visitors away from Google, they’ll try to do so more and more on their own terms. Watch very carefully what type of sites Google rewards, as opposed to what they may say they reward.

Expect less competition in the market as a result. Some people are already getting angry about it.

Be Where Your Users Are

Google follows users. So does Facebook. Anywhere your users are, you’ve got to be there, too. On Google Maps. On YouTube. Wherever and whenever. Think beyond your website. Think in terms of getting your data out there.

As Rich Skrenta pointed out in a recent interview:

Social media can drive tons of attention, awareness and traffic. But the search box is the best way to navigate to stuff you want. Now what will drive those results – if I type in “pizza”, what should I get? The answer can be very different depending on whether the results are coming from the web, Yelp, or Facebook. So I guess my answer is that I still see search being the core way to navigate, but I think what gets searched is going to get a lot more structured and move away from simple keyword matches against unstructured web pages

A Shift To Localization

Microsoft Research found that people tend to organize their memories in geographic terms i.e. where they were when something happened.

If you want to know where Google is heading, then watch Marissa Mayer. Marissa has been responsible for much of what you see in Google in terms of how it is organized. Marissa has just moved to head Geographic and Location Services.

Google Earth. Google Maps. Google Local. Google Street View. Mobile location data and targeting. Expect more data to be organized around locality.

Everything Changes, But Not That Fast

Aaron talked about TechCrunch’s tendency to over-hype new developments:

“…but this changes everything…”

SEO hasn’t changed all that much in years. We still find an audience (keyword research), we publish content, we build links to the content, and then we repeat it all over again.

The changes come around the edges, especially for big companies like Google. There is a lot of risk to Google in making radical changes. Shareholders don’t like it. Why risk breaking something that makes so much money, and is so popular?

The biggest changes in the way we do things on the web are probably going to come from the upstarts. They’re probably hard at work in their garage right now.

SEO Book.com – Learn. Rank. Dominate.

Ben Edelman did it again :)

This time he highlighted how Google hard codes their search results:

[When] we roll[ed] out Google Finance, we did put the Google link first. It seems only fair right, we do all the work for the search page and all these other things, so we do put it first… That has actually been our policy, since then, because of Finance. So for Google Maps again, it’s the first link. – Marissa Mayer

If they gain certain privileges in the marketplace by claiming not to abuse their power and that their algorithmic results are neutral, yet those algorithmic results may be pushed below the fold, then is it “only fair” for them to put themselves in a default market-leading position in any category they feel they can make money from by selling ads? Or is that an abuse of power?

As Google adds features, collects more data, puts ads everywhere, and pushes into being a publisher on more fronts, at some point there will be a straw that breaks the camel’s back. Big money is paying attention and the list of “evidence” grows weekly. Sometimes they still think like a start up. And that will lead to their demise.

It might not be anytime soon, but eventually they will hit a whammy.

SEO Book.com – Learn. Rank. Dominate.
The Google Holiday Update Conspiracy Theory

For the past few years, right before the prime 4th quarter holiday selling period, Google has pushed out an update that causes an upheaval in the organic SERPs. This has led to talk of a Google Holiday Update Conspiracy, with the goal of making SEO look bad to drive up AdWords spending. Is there any truth to these rumors, or is it just people upset about ranking losses? Let’s take a deeper look at the issue …

First let me preface this with a little background. When I worked as an in house SEO for a large regional retailer many moons ago we had a policy: no new programming goes live on the website between November 1st and January 1st. That was when the company made its greatest profit and, like any smart business, we didn’t want to “break” the checkout process or have some other programming issue mess things up. Later on, we became concerned about not messing with our search engine rankings as well. We wanted to make sure we had our best foot forward, and this is a sentiment shared by businesses today … including Google.

Google doesn’t really care about your rankings … only you do

Google has the same goals you do: giving their customers (i.e. users) the best product (the best SERPs) during a time of peak search volume. If more people believe they will get better results, more people will use Google and the advertisers will get more exposure for their ads. Your goal of maintaining your rankings during the holiday period doesn’t play a role in Google’s decision making. Google doesn’t really care about your rankings … only you do. Google just wants to give users the overall best results.

So is this a conspiracy on Google’s part to make SEOs look bad, or to drive up revenue? I’m going to uncharacteristically disagree with Aaron and Peter and say … no, I don’t think the holiday update is designed to shake up SEO or drive up AdWords revenue. Those are just nice side bonuses ;-)

As I said before, Google knows search volume increases right before the holidays, and they want to make sure they are putting out the best product they can (their SERPs). Your specific website’s ranking or traffic doesn’t play a role in the decision making process. From Google’s perspective, they want to eliminate spam and be as resistant to manipulation as possible, so it makes sense that some “bad,” “aggressive,” or “guideline violating” tactics will get negated. The sad fact is that, with any change, there will likely be some collateral damage to “innocent” websites in the short term, but that will get sorted out down the road. I know that’s not much consolation for a business owner who has seen a 90% drop in traffic, but it’s not personal. There’s no shortage of websites to fill the SERPs. Sometimes the players just get shuffled around.

Google makes its decisions based on what it thinks is best for users … not publishers …

Does Google engineer these changes to occur right before the holidays to drive up ad revenue? I don’t think so. Google made Instant Search to give users results more quickly and hopefully a better user experience. The fact that it helped them make millions of extra dollars in revenue is just a bonus. Google makes its decisions based on what it thinks is best for users … not publishers … The sooner you understand that concept the better. From Google’s perspective, Google’s job isn’t to drive traffic to your website, Google’s job is to give users the best results. It’s your job to build a brand or create some other unique service offering that makes people look for you … not just for the generic keywords.



Michael Gray – Graywolf’s SEO Blog

When Bing launched, one of the interesting things they did to make the organic search results appear more relevant was to use link anchor text to augment page titles (where relevant). This would mean if people searched for a phrase that was mostly in your title (but maybe your page title was missing a word or 2 from the search) then Bing might insert those words into the page title area of your listing if they were in some of the link anchor text pointing into your page.

Before being switched over to Bing, Yahoo! would sometimes display the H1 heading as the clickable link to your site (rather than the page title). Bing also uses on page headings to augment page titles.

Historically, if Google thought it would appear more relevant to searchers, they have sometimes shown a relevant machine-generated piece of your page displaying those keywords in context rather than the meta description in the snippet, but typically Google has been far more conservative with page titles. Sometimes Google would list the ODP title as the title of a page, but until recently they have generally just listed the page title as the clickable link to your site.

Recently Google has grown more experimental on this front, being willing to use link anchor text and on-page headings as part of the listing. In addition, if the page title is short, Google may add the site’s name at the end of the title.

Here is an example in Google of the page title being replaced by part of an on-page heading & also showing the site’s name being added to the end of the link

And here is the associated on-page heading for the above

I have also seen a few examples of the link anchor text being added to the page title in Google; however, it was on a client project & the client would prefer that I didn’t share his new site on an SEO blog with tens of thousands of readers. :D

Last November Matt Cutts did a video on the topic of Google editing page titles for relevancy & how it was a fairly new thing for Google. Even back then Google was quite conservative in editing the clickable link … I think they have only grown more aggressive on that front in the past month or so.

SEO Book.com – Learn. Rank. Dominate.