Tags Archives

You are currently viewing all posts tagged with Google.

Posted by RobOusbey

This post begins with a particular dilemma that SEOs have often faced:

  • websites that use AJAX to load content into the page can be much quicker and provide a better user experience
  • BUT: these websites can be difficult (or impossible) for Google to crawl, and using AJAX can damage the site’s SEO.

Fortunately, Google has made a proposal for how webmasters can get the best of both worlds. I’ll provide links to Google documentation later in this post, but it boils down to some relatively simple concepts.

Although Google made this proposal a year ago, I don’t feel that it’s attracted a great deal of attention – even though it ought to be particularly useful for SEOs. This post is targeted at people who haven’t explored Google’s AJAX crawling proposal yet – I’ll try to keep it short, and not too technical!

I’ll explain the concepts and show you a famous site where they’re already in action. I’ve also set up my own demo, which includes code that you can download and look at.

The Basics

Essentially, sites following this proposal are required to make two versions of their content available:

  1. Content for JS-enabled users, at an ‘AJAX style’ URL
  2. Content for the search engines, at a static ‘traditional’ URL – Google refers to this as an ‘HTML snapshot’

Historically, developers had made use of the ‘named anchor’ part of URLs on AJAX-powered websites (this is the ‘hash’ symbol, #, and the text following it). For example, take a look at this demo – clicking menu items changes the named anchor and loads the content into the page on the fly. It’s great for users, but search engine spiders can’t deal with it.

Rather than using a hash, #, the new proposal requires using a hash and an exclamation point: #!

The #! combination has occasionally been called a ‘hashbang’ by people geekier than me; I like the sound of that term, so I’m going to stick with it.

Hashbang Wallop: The AJAX Crawling Protocol

As soon as you use the hashbang in a URL, Google will spot that you’re following their protocol, and interpret your URLs in a special way – they’ll take everything after the hashbang, and pass it to the site as a URL parameter instead. The name they use for the parameter is: _escaped_fragment_

Google will then rewrite the URL, and request content from that static page. To show what the rewritten URLs look like, here are some examples:

  • www.demo.com/#!seattle/hotels becomes www.demo.com/?_escaped_fragment_=seattle/hotels
  • www.demo.com/users#!name=rob becomes www.demo.com/users?_escaped_fragment_=name=rob

As long as you can get the static page (the URL on the right in these examples) to display the same content that a user would see (at the left-hand URL), then it works just as planned.

Two Suggestions about Static URLs

For now, it seems that Google is returning static URLs in its index – this makes sense, since they don’t want to damage a non-JS user’s experience by sending them to a page that requires Javascript. For that reason, sites may want to add some Javascript that will detect JS-enabled users, and take them to the ‘enhanced’ AJAX version of the page they’ve landed on.

In addition, you probably don’t want your indexed URLs to show up in the SERPs with the ‘_escaped_fragment_’ parameter in them. This can easily be avoided by having your ‘static version’ pages at more attractive URLs, and using 301 redirects to guide the spiders from the _escaped_fragment_ version to the more attractive URL.

E.G.: In my first example above, the site may choose to implement a 301 redirect from
www.demo.com/?_escaped_fragment_=seattle/hotels to www.demo.com/directory/seattle/hotels
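To make this concrete, here’s a minimal PHP sketch of how a server might handle the rewritten requests. It assumes the hypothetical www.demo.com URL structure from the examples above – it isn’t code from any real implementation.

<?php
// Minimal sketch: handle Google's rewritten AJAX URLs.
// Assumes the hypothetical www.demo.com structure used in the examples above.
if (isset($_GET['_escaped_fragment_'])) {
    $fragment = $_GET['_escaped_fragment_'];   // e.g. "seattle/hotels"

    // Option 1: 301 the spider to a friendlier static URL, so the ugly
    // parameter never shows up in the index. (Validate $fragment in real code.)
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: http://www.demo.com/directory/' . $fragment);
    exit;

    // Option 2 (instead of the redirect): render the HTML snapshot right here -
    // the same content a JS-enabled user would see at /#!seattle/hotels.
}
?>

Googlebot requesting www.demo.com/?_escaped_fragment_=seattle/hotels would then end up at www.demo.com/directory/seattle/hotels, which serves the snapshot.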

 

A Live Example

Fortunately for us, there’s a great demonstration of this proposal already in place on a pretty big website: the new version of Twitter.

If you’re a Twitter user, logged-in, and have Javascript, you’ll be able to see my profile here:

However, Googlebot will recognize that as a URL in the new format, and will instead request this URL:

Sensibly, Twitter want to maintain backward compatibility (and not have their indexed URLs look like junk) so they 301 redirect that URL to:

(And if you’re a logged-in Twitter user, that last URL will actually redirect you back to the first one.)

 

Another Example, With Freely Downloadable Code

I’ve set up a demo of these practices in action, over at: www.gingerhost.com/ajax-demo

Feel free to have a play and see how that page behaves. If you’d like to see how it’s implemented from a ‘backend’ perspective, hit the download link on that page to grab the PHP code I used. (N.B.: I’m not a developer; if anyone spots any glaring errors, please feel free to let me know so I can correct them!)

 

More Examples, Further Reading

The Google Web Toolkit showcase adheres to this proposal; experimenting with removing the hashbang is left as an exercise for the reader.

The best place to begin further reading on this topic is definitely Google’s own help pages. They give information about how sites should work to fit with this proposal, and have some interesting implementation advice, such as using server-side DOM manipulation to create the snapshot (though I think their focus on this ‘headless browser’ may well have put people off implementing this sooner.)

Google’s Webmaster Central blog has the official announcement of this, and John Mueller invited discussion in the WMC Forums.

Between Google’s blog, forum and help pages, you should find everything you need to turn your fancy AJAX sites into something that Google can love, as well as your users. Have fun!

 



SEOmoz Daily SEO Blog

Some sites have seen pretty drastic drops in Google search traffic recently, related to indexing issues. Google maintains that it is a glitch:

Just to be clear, the issues from this thread, which I have reviewed in detail, are not due to changes in our policies or changes in our algorithms; they are due to a technical issue on our side that will be visibly resolved as soon as possible (it may take up to a few days to be visible for all sites though). You do not need to change anything on your side and we will continue to crawl and index your content (perhaps not as quickly at the moment, but we hope that will be resolved for all sites soon). I would not recommend changing anything significantly at this moment (unless you spot obvious problems on your side), as these may result in other issues once this problem is resolved on our side.

For an example of one site’s search traffic that was butchered by this glitch, see the images below. Note that before the bug, Google traffic is roughly 10x what Yahoo! or Bing drive, and after the bug the traffic from the three is roughly even.

Not that long ago I saw another site with over 500 unique linking domains which simply disappeared from the index for a few days, then came right back 3 days later. Google’s push to become faster and more comprehensive has perhaps made them less stable, as digging into social media highlights a lot of false signals & often promotes a copy over the original. Add in any sort of indexing issues and things get really ugly really fast.

Now this may just be a glitch, but as Tedster points out, many such “glitches” often precede or coincide with major index updates. For as long as I have been in the SEO field, I think Google has done a major algorithmic change just before the holidays every year except last year.

I think the reasons they do it are likely three- or four-fold:

  • they want to make SEO unpredictable & unreliable (which ultimately means less resources are spent on SEO & the results are overall less manipulated)

  • they want to force businesses (who just stocked up on inventory) to enter the AdWords game in a big way
  • by making changes to the core relevancy algorithms (and having the market discuss those) they can slide in more self promotion via their vertical search services without it drawing much anti-trust scrutiny
  • the holidays are when conversion rates are the highest, so if they want to make changes to seek additional yield it is the best time to do it, and the holidays give them an excuse to offer specials or beta tests of various sorts

As an SEO with clients, the unpredictability is a bad thing, because it makes it harder to manage expectations. Sharp drops in rankings from Google “glitches” erode customer trust in the SEO provider. Sometimes Google will admit to major issues happening, and other times they won’t until well *after* the fact. Being proven right after the fact still doesn’t take back 100% of the uncertainty unleashed into the marketplace weeks later.

Even if half your clients double their business while 1/3 lose half their search traffic, as an SEO business you typically don’t get to capture much of the additional upside…whereas you certainly capture the complaints from those who just fell behind. Ultimately this is one of the reasons why I think being a diversified web publisher is better than being an SEO consultant… if something takes off & something else drops then you can just pour additional resources into whatever is taking off and capture the lift from those changes.

If you haven’t been tracking rankings now would be a great time to get on it. It is worth tracking a variety of keywords (at various levels of competition) daily while there is major flux going on, because that gives you another lens through which to view the relevancy algorithms, and where they might be headed.

SEO Book.com – Learn. Rank. Dominate.

Friday fun. Search for Matt Cutts and Google recommends paid links. ;)

SEO Book.com – Learn. Rank. Dominate.

So the conversation in tech media of late is that Facebook is set to become a bigger cash cow than Google.

Why?

People spend more time on Facebook. Facebook has users locked-in (kinda). Facebook “owns” the social map. Facebook is popular. Facebook is everywhere. Facebook is big.

Uh-huh.

Facebook may be all those things, but when it comes to translating “viewers” into revenue, Google currently wins hands down.

Google wins because Google’s advertising is closely aligned with the user’s primary activity, which is to seek topics and click links. The primary activity of a user on Facebook is to socialize. Translating this activity into a commercial imperative, in a way advertisers find profitable, is the challenge Facebook faces.

The primary user activity on Facebook isn’t yet as conducive to effective advertising as the topic-matching system used by Google. This shows up in the revenue data.

Google’s revenue, with supposedly fewer users than Facebook, is .531 billion – and rising. Facebook, with more users, who reportedly spend more time on the site, has estimated revenues around b. Admittedly a bit of an apples-and-oranges comparison, but useful to get the two entities in perspective. Facebook is nowhere near Google in terms of advertiser revenue.

In short, being popular doesn’t necessarily translate into revenue, or marketing value. Ask any popular blogger who is blogging on a non-commercial topic. It can be difficult to convert some audiences, and some activities, into revenue and advertiser value.

As a commenter, Chris Norstrom, on the TechCrunch page I linked to above pointed out:

500 Millions users does not mean those users want to accomplish EVERYTHING on your site. Facebook already tried their own version of “yahoo.Answers” and it failed. People come to facebook to lol with friends and waste time, nothing more. Not to check inboxes, not to ask questions, not to participate in groups, not to rate stores or check into places, not to send or receive money, not to edit documents.

Is he right, do you think?

Like Button Replacing The Link

Some commentators have suggested that the “like” button on Facebook will replace the link:

Enter the Like button, the social solution to search, and the replacement of the link as a voting mechanism. The people as a whole are more effective at determining what content is relevant and most of those people are unfortunately not effective at creating links

A “thumbs up” system doesn’t say much. It may help people find out what is most popular amongst the herd on any given day, but as anyone can see from Digg, exploding pancakes doesn’t mean much, popular as the topic may be. I suspect Facebook users will use the Like button even less when they come to realise it’s a form of permission marketing.

Google, on the other hand, is oriented around topical queries. Relevance is decided by algorithms that measure over a hundred different factors. It’s fair to say that if a simple “Like” button worked as a means to determine relevance, Google would have implemented it years ago. They pretty much have one, but who really uses it?

In short, user voting is fraught with problems. It won’t replace sophisticated algorithms. The link, the basis of the web, isn’t going away.

Fit The Message To The Medium

Which, in a rather long-winded way, brings me around to my point.

The Google vs Facebook contest doesn’t really matter as far as marketing is concerned. Both environments are valuable to marketers. Both need to be approached in different ways.

As we discussed in Google Keyword Research Tool: Not Popular, search is suited to concepts and services of which the searcher is already aware. Facebook is better suited to distraction media, viral campaigns, and marketing targeted at specific demographic groups.

Facebook may be useful at introducing people to new concepts – especially if those concepts fit into an existing social activity, as defined by members of a specific demographic, e.g. the group “Porsche Owners Club” may be interested in new Porsche merchandise, whether they’re actively seeking it or not.

Keep in mind the core function of Facebook. The Facebook user isn’t likely to be actively hunting for something. They are killing time, or socializing. As a result, Facebook is less suited to direct sales, as it is difficult to determine which phase the buyer is at in the sales funnel. Facebook is more suited to brand building and awareness campaigns. It is suited to relationship building. Adjust your marketing approach accordingly.

For further reading on the specifics of Facebook marketing, SEOMoz offers a great overview of marketing approaches on Facebook.

SEO Book.com – Learn. Rank. Dominate.

Posted by Tom_C

1) Regex for Counting " " and "/"

Regex is teh awesome. I don’t claim to be amazing at it but there are a few common regex strings I use all the time in my analysis.

Length of Keyword

To quickly filter your keywords report by the length of keyword, I use some regex to count the number of spaces in the keyword like this:

^([^ ]+ ){5,50}[^ ]+$

The above regex matches keywords that have between 5 and 50 spaces in them. You can also narrow it to a single number, as shown below. This image is a search for all keywords with 6 spaces in them for the distilled site (i.e. 7 words):
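If you want to sanity-check the pattern outside Analytics before using it as a filter, here’s a quick PHP sketch (the keywords are made up):

<?php
// Quick check of the word-count regex outside Google Analytics.
$pattern = '/^([^ ]+ ){5,50}[^ ]+$/';   // 5-50 spaces = 6-51 words

$keywords = array(
    'seo',                                        // 1 word  - no match
    'how to do keyword research for a new site',  // 9 words - match
);

foreach ($keywords as $kw) {
    echo $kw . ' => ' . (preg_match($pattern, $kw) ? 'match' : 'no match') . "\n";
}
?>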

Depth of Page

Very similar to the above regex, but when I’m looking at top landing pages I use regex like this to count the number of slashes in a URL:

^/([^/]+/){3}[^/]*$

Note that because I’m not a full regex ninja this actually counts those URLs that have 4 slashes in (i.e. n+1). So the following image is showing all traffic to those pages with 5 slashes in them:

See how useful this search is? Pretty much all of these pages are low quality – pagination, or blog pages that have multiple categories assigned. For large sites, if you construct the regex correctly this can be a great way to analyse where traffic is landing on the site and identify low quality pages to remove from the index.
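The same sanity-check approach works for the depth regex – a quick PHP sketch with made-up paths, showing the n+1 behaviour described above:

<?php
// Quick check of the URL-depth regex: with {3} it matches paths containing 4 slashes.
$pattern = '#^/([^/]+/){3}[^/]*$#';

$paths = array(
    '/blog/',                   // 2 slashes - no match
    '/blog/2010/11/some-post',  // 4 slashes - match
    '/blog/2010/11/page/2/',    // 6 slashes - no match
);

foreach ($paths as $path) {
    echo $path . ' => ' . (preg_match($pattern, $path) ? 'match' : 'no match') . "\n";
}
?>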

If you’re new to regex – this is my go-to guide for using regex in Google Analytics (PDF).

2) Check Your Analytics Code Is Correctly Installed

This is a super easy one, but definitely one worth running on any new site you take a look at. SiteScan will crawl your site and check for the analytics code which is pretty nifty. It even intelligently checks for the old and new versions of the GA code. Nice. Unfortunately the free version only checks 100 pages but it’s definitely a solid resource for smaller sites:

Another quick check for correctly installed Google Analytics is to look for referrals from your own domain. Any referral from your own domain indicates that there are pages not correctly tagged (and will even show you which ones!). Nice.

3) 5 Ways to Segment your Funnel

Segmenting your funnel is not something you can do natively in Google Analytics, which annoys the hell out of me. I’m hopeful that Google will be adding this feature sometime in the near future. In the meantime, there are a few ways to segment your funnel:

Why do you care about segmenting your funnel? Well I give a detailed run-down of why this is important over here but hopefully this image should explain itself (the output of segmenting the funnel using my method):

4) Track SEO Variables In Google Analytics

This is a nifty use of custom variables which I recently started using on a few sites. Imagine you’re running a hotel reviews website. Some of your hotel pages have 100s of reviews and are lovely content-rich pages. But some of your hotels are awaiting their first review. In that case, your hotel page might be very light on content and might only have the name and address of the hotel on the page (which is duplicated on 100s of other sites). Wouldn’t it be nice to be able to segment your Google traffic by how many reviews your hotel page had? Well, using page-level custom variables this is as easy as the following code:

_gaq.push(['_setCustomVar',
  1,                   // This custom var is set to slot #1. Required.
  'Num_Reviews',       // The name of the custom variable. Required.
  0,                   // Sets the value of "Num_Reviews" to 0. Required.
  3                    // Sets the scope to page-level. Optional.
]);

You don’t have to limit yourself to just using this for the number of reviews – you could look at other factors that you think might be affecting your pages’ ability to rank and pass those into GA. For example, you could pass the length of the description of a page. Or the number of tweets it has, or anything you can think of really!
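For instance, here’s a hypothetical PHP sketch that passes the length of a page’s description into a second, page-level custom variable. The slot number, variable name and $hotel array are all made up for illustration:

<?php
// Hypothetical sketch: pass the length of the page description into a
// page-level custom variable. Don't overwrite a slot you already use.
$description = $hotel['description'];   // however your CMS exposes it

echo "_gaq.push(['_setCustomVar', 2, 'Desc_Length', '" . strlen($description) . "', 3]);";
?>

The echoed line goes in your tracking snippet between the _setAccount and _trackPageview calls, just like the Num_Reviews example above.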

Learn more about page level custom variables over here.

5) Track Form Abandonment

This one comes from a blog post Duncan wrote a little while back, but I love how simple this is to use and how useful the insight is. Basically, using jQuery it becomes very easy to track how far through a form people get. The idea was prompted by Sam’s post from some time ago, but uses events instead of virtual page views.

You should read the full write-up on Duncan’s post but the code looks something like this:

$(document).ready(function() {
  var currentPage = jQuery.url.attr("path");
  $(':input').blur(function () {
    if($(this).val().length > 0){
      pageTracker._trackEvent("Form: " + currentPage, "input_exit", $(this).attr('name'));
    }
  });
});

Bonus!

While writing this post, one of Dave Naylor’s gang posted about a new interface for in-page analytics which replaces the old site overlay. I’m quite excited about this, I think it paves the way for all kinds of cool things (not least of which is heatmaps as David points out…)



SEOmoz Daily SEO Blog

At a recent SMX conference, Baris Gultekin, Group Product Manager for Google AdWords, put the cat amongst the pigeons when he said the Google Keyword Tool only provides keyword data for the terms Google deems “commercial”.

Teething problems? New policy? Bit of both? Regardless, it’s fair to say there has been a backlash against the changes made to the keyword tool.

For example, Marty Weinberg points out:

“Facebook” Must Not Be “Commercial”. Do Google users really only articulate 12 semantic permutations of “Facebook” at phrase, broad and exact match? Eeesh… Obviously that’s a laughable proposition. These 12 keywords are what Google wants to sell as they productize Facebook related queries into AdWords inventory.

Google’s Business

It shouldn’t come as a surprise that Google is only showing webmasters what it wants webmasters to see. Google will show data that works to Google’s advantage.

There’s no advantage to Google in revealing all their keyword data – a valuable asset – especially the data that Google thinks can’t be monetized as profitably via AdWords. AdWords research is, after all, what the Keyword Tool is for, at least as far as Google is concerned. As much as SEOs like keyword data, Google isn’t there to make SEOs’ lives easier.

Adwords advertisers might argue that we know which terms provide value, but that’s a slightly different issue. Google may prefer to force more bid competition on keyword terms Google deems work best – in terms of searcher relevance, clickability, and for Google’s bottom line. There’s some merit in this, given their number crunching ability, although they don’t have end revenue data for sites using Adwords. Well, not unless you give it to them.

There may well be bugs Google are working out, or we’re seeing a change in the PPC game – i.e. encouraging advertisers towards the most profitable terms. At SES San Jose last year Google’s Nicholas Fox highlighted that Google had about 30 million words in their ad auction. For advertising purposes, Google figures they do not need to give you a deep set of data, just the core relevant keywords and the ability to taste them via a broad match or phrase match AdWords campaign and refine with negative keywords.

As predicted, Google instant has had a significant impact on keyword diversity in some markets: “While organic traffic levels have risen about 5% for all Drive users since Instant was introduced, keyword variety has fallen more than 15%!”

However, there is still a big keyword tail, and the Google keyword tool is but one keyword resource. ;)

Other Ways To Research Keywords

There are many ways to discover keywords. But first, let’s back up and focus on the user.

In a user-driven environment, like search, everything centers on typical user behavior, or, more specifically, what’s in their head. Those who don’t understand this seemingly innocuous piece of information often go wrong in SEO.

For a user to conduct a search, they must already be aware of a concept. In this respect, search is reactive. It is difficult – although not impossible – to break a new idea or brand using the search channel, as the searcher isn’t already aware of the new concept and is therefore unlikely to search on it. These types of “awareness generation” campaigns are generally better suited to interruption media, such as banners, videos and such.

Is your product/service/concept already known? Is it a brand? If so, it’s a good candidate for search marketing. Listen to the way your customers talk. What phrases do they use? What questions do they ask? What problems do they have? Read the sites/magazines/publications they read and look for common terminology and reference points. Keep an eye on social networks and see what news they discuss. Feed all this information – the phrases, questions and terminology – back into your keyword list. Chances are, many of these terms will not appear on keyword research tools.

The next step is to consider searcher behavior.

82% of searchers will rephrase their query if they don’t find what they are looking for on their first attempt. Combine this with the fact that 55% of queries use more than three terms, and a staggering 20 to 25% of the queries have never been seen before i.e. they are unique.

This means that there are many more keyword permutations than a keyword tool will ever give you.

If you focus on multiple low traffic terms, this can result in more traffic than can be gained from a single high traffic term. You can often achieve this simply by knowing the topics your audience are interested in, and writing about them. Is this SEO? Of course. Your language matches that of your intended audience.

So publish often. Each page you publish is a keyword net.

Look deep into your web analytics / log files. Use keyword terms found in your logs as topic/title/starter ideas for new pages. Repeat indefinitely. You’ll eventually build your own unique body of keyword data that people using keyword research tools are unlikely to find.
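If you prefer to pull this straight from raw logs rather than an analytics interface, the search phrase usually sits in the referrer’s q= parameter. A rough PHP sketch (the example referrer is made up):

<?php
// Rough sketch: pull the search phrase out of a Google referrer string.
function keyword_from_referrer($referrer) {
    $parts = parse_url($referrer);
    if (empty($parts['query'])) {
        return null;
    }
    parse_str($parts['query'], $params);
    return isset($params['q']) ? $params['q'] : null;
}

echo keyword_from_referrer('http://www.google.com/search?q=boutique+hotels+seattle&hl=en');
// prints: boutique hotels seattle
?>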

Always listen and adapt to your audience. Always listen and adapt to your site’s analytics, as it is the purest (and most relevant) data you will ever get to use in your search marketing campaigns.

Free Keyword Research Tools

We’re going to blow our own horn here and recommend the SEOBook keyword tool, powered by Wordtracker. It’s free, and provides a lot of data across various search services. The SEOBook members section has some very cool tools, too, including a Competitive Research tool based on SEMRush data. This data can list keyword value distribution i.e. keyword value * estimated traffic. Aaron did a thorough review of SEMRush here.

But enough about us…. :)

Google still offer a range of great freebie tools, including:

  • Google Trends
  • Google Trends for Websites
  • Insights for Search
  • Google Sets

Microsoft’s Ad Intelligence is too good to not mention.

Don’t forget to use a Thesaurus – such as Thesaurus.com. A Thesaurus can often cough up synonyms the keyword research tools miss. Aaron has a video and a few more keyword tools listed here.

And virtually anything can be a source of data to explore.

The well is deep!

There is a ton of data out there, whether Google chooses to share it or not.

The very best keyword data is seldom shared intentionally ;) though sometimes when people sell their site they do offer “free milk.”

SEO Book.com – Learn. Rank. Dominate.

Posted by MikeCP

Today I want to talk about tracking organic ranking in Google Analytics. Previously, we were able to determine the page from which a Google organic click was coming (detailed on the Distilled blog by Will Critchlow). This was nice because we could append this to the end of our keywords in Google Analytics for some interesting data (André Scholten’s post at Yoast.com has a step by step), as seen below.

Keyword Page Rankings
Image courtesy of Yoast.com

This solution provides limited utility, and if you’re like me, you implemented it, maybe checked it out once in a while, but never really turned this into actionable or otherwise meaningful data. I’m going to detail how rank tracking in Google Analytics can be made a lot more useful thanks to custom variables and a change in Google’s referring URLs. But first…

Some History

When Google began testing an AJAX search interface in early 2009 there was a flurry of concern that it could mean the end of traditional rank tracking, web analytics packages, and I’m sure someone said SEO was dead, too. The concern wasn’t without merit; Google was serving results in AJAX with the URL pattern as http://www.google.com/#q=keyword, and most analytics packages ignored the hash and everything after.

Fast forward to September 8th, when Google introduced Google Instant. The AJAX SERP usage had been steadily increasing over time, but shot up in usage when Instant was rolled out. Fortunately for Omniture, WebTrends, and other third party analytics packages, Google worked out a way to pass the referrer information from the SERPs, rank tracking still works, and I’m still working as an SEO in a living industry.

As it turns out, Google includes even more information in the AJAX SERPs than they previously did, including one really interesting parameter: "cd=". The cd= parameter contains the exact ranking position of the search listing, which makes for some really awesome possibilities, especially when paired with Google Analytics’ custom variables.
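To make that concrete, a referrer from the AJAX SERPs looks roughly like this (the values are made up and most of the other parameters have been trimmed):

http://www.google.com/url?sa=t&source=web&cd=4&url=http%3A%2F%2Fwww.example.com%2F&q=example+keyword

Here cd=4 would mean the click came from the fourth listing for that query.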

Why Custom Variables?

Custom variables are a bit of an enigma to even advanced Analytics users. I’ll admit that I never really made much use of them in the past. You’ll often see examples where custom variables are used to track logged-in vs. logged-out users, which is definitely a great use. Rob Ousbey’s 6 cool things YOU can do with custom variables is a good set of examples to get your feet wet.

In André Scholten’s example above we’re using Google Analytics’ user defined value – isn’t that just as good as a custom variable? Well, the difference depends on how you intend to use your data. With custom variables, you’re granted much more flexibility within Google Analytics for slicing and dicing data. For instance, through the use of either custom reporting or advanced segments with custom variables, I can pretty easily track how much revenue a keyword has brought in when ranked in the 2nd position, as opposed to the 4th. While this may be possible with the user defined variable, it would require quite a bit of work after an Excel data dump.

Now, let’s get to business:

The How

Getting this properly set up was remarkably easy for me, and I have so very little programming knowledge, so I would imagine most wouldn’t have much issue. I used PHP, as I was working with a WordPress site, but I’m sure you crazy hackers can do the same in most any language.

Step One – Extract cd= Value from Referrer String

I used this snippet to do this.

<?php
preg_match("/cd=(\d+)/", $_SERVER['HTTP_REFERER'], $matches);  // grab "cd=N" from the referrer
$str = $matches[0];
preg_match("/(\d+)/", $str, $matches);                         // pull out just the number
$rank = $matches[0];
?>

Please don’t make fun of my hacky coding

This assigns the cd= value to the $rank variable. We’ll reference this in…

Step 2 – Call cd= Value in our Google Analytics snippet

Now, we want to insert the custom variable call between the setAccount and trackPageview lines in our Analytics snippet (shown below using the asynchronous code):

var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXXX-X']);
  _gaq.push(['_setCustomVar',1,'Google_Rank','$rank',2]);
  _gaq.push(['_trackPageview']);

We’ve set the custom variable slot to 1, and the scope to the session-level (the last argument, set as 2). If you are already making use of custom variables, be sure to not overwrite a previously occupied slot. For more information on how the custom variable is formatted, see Google’s help page on the topic.

Step 3 – Create an IF Statement so the CustomVar isn’t Called Every Time

We only want to include this line when we have a cd= value, otherwise every new click will overwrite the last value. To do this, I used the following IF statement, again coded in PHP. This is the final step, and the complete Google Analytics snippet:

<?php if ($rank != '' ) {
echo "<script type=\"text/javascript\">\n
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXXX-X']);
  _gaq.push(['_setCustomVar',1,'Google_Rank','$rank',2]);
  _gaq.push(['_trackPageview']);";
echo "\n";
  }
else {
echo "<script type=\"text/javascript\">\n

  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXXX-X']);
  _gaq.push(['_trackPageview']);";
    }
echo "\n";
?>

  (function() {
    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
  })();

</script>

Here we’re checking if $rank has a value. If it does, we’ll include the custom variable call with that $rank value; if not, we’ll print the Google Analytics code as normal. Also included in the above are some line breaks (\n), so that the code formats correctly.

The Most Important Part – Analyzing Our Data

What’s the point of going through all this effort if it doesn’t provide you with any analytical insight? None, of course. But this rank tracking solution has some added benefits over the traditional rank tracking software that may be really useful to some SEOs. These include:

Rankings by City, Region, Country

Traditional rank tracking software suffers in that its ranking results are dependent on the location of the servers. With custom variable rank tracking and a little spreadsheet pivot table magic it’s pretty easy to get your site’s rank for any location.

Historical, Definite Data

Once this is properly set up you’ve got access to definite rankings within your Analytics data from that point on. So as holiday season 2011 rolls around, it’s easy enough to review where your site ranked during the 2010 holidays, helping to set budgets, goals, and expectations.

Bounce Rate/eCommerce Data/etc. by Rank

Whatever your KPI, you can compare it against search ranking. Reporting the ROI of link building efforts or on site optimization becomes much easier when you’ve got rankings included in your dataset.

Some of the quick ideas I had around this include:

  • Average rank over time for top 50 keywords
  • Average rank over time for 4+ word keyphrases
  • Bounce rate for 2nd+ page clicks
  • Revenue % increase for Keyword X when ranking increases from 2 to 1

I should note that getting averages is a lot easier in Excel with a pivot table, as seen below:

Average rank pivot table
This can also be adjusted to show your minimum rank, as well

Creating Custom Reports and Advanced Segments

Custom variables aren’t included in the default reports for Google Analytics, so unless you do all your work in Excel, you’ll probably want to create some custom reports or advanced segmentation to work with the data directly in Analytics.

Advanced segmentation is great for this data. Below is the segment one would use to track rankings between 11 and 15, which might be strong candidates for on-page optimization that could provide the boost onto the first page:

Advanced Segmentation
You can apply this particular advanced segment with this link.
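In case the screenshot doesn’t come through, the heart of that segment is just a regular expression applied to the value of the Google_Rank custom variable (the exact dimension labels depend on your Analytics interface), something like:

^1[1-5]$

which matches the values 11, 12, 13, 14 and 15.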

The Downsides

The most obvious downside is that you’re only receiving a ranking when a listing is being clicked on, so for very small sites there may be limited utility. Ranking data will be spotty past the 2nd page, as well.

Additionally, the AJAX SERPs are not being served to all users in all locations. Small sample size warning here, but I’m seeing about 40% of organic Google traffic coming from the AJAX SERPs (done through a simple calculation of visits with our custom variable divided by total Google organic visits over the same time period). Michael Whitaker is seeing this number over 50% in his data. This number is likely going to increase as Instant is rolled out further.

The #-pack local listings can really throw things off, too. If a particular query gets one of these at the start of the SERP, the cd= count continues after it:

cd= rankings

Lastly, there does exist the possibility that Google discontinues its use of the cd= variable for whatever reason.

Go Analyze

I hope some of you can make some good use out of this functionality. I’ve only had it installed on my sites for a short time, but I’ve definitely found it interesting to play around with. If you don’t already have Excellent Analytics installed in your Excel I would highly recommend doing so, even if you don’t implement this tracking, and especially if you do.

I’d like to thank Michael Whitaker of Monitus for his help. He’s been installing this setup for his clients for a bit now. Monitus offers proper eCommerce Google Analytics installation for Yahoo! stores, which is surprisingly difficult without Monitus.

If you’ve got any other ideas for working with this data, sound off in the comments or let me know on Twitter @MikeCP. Personally, I’m really excited to have this data rolling in and the possibilities are nearly endless. I’ll be sure to report any interesting ways to manipulate the data in future blog posts. Cheers!



SEOmoz Daily SEO Blog

Posted by Justin Briggs

Hey everyone! My name is Justin Briggs, and I’m an SEO consultant at Distilled. A few weeks ago, I packed up and moved across the country to come to Seattle. Some of you might know me better as "seozombie" on Twitter. This is my first post on SEOmoz, but you can expect to see more from me here and on our blog at Distilled.

With the transition of Yahoo! to Microsoft’s Bing backend, webmasters have lost the ability to perform advanced searches using the link: and linkdomain: parameters. Rand Fishkin wrote a post about replacing the Yahoo! linkdomain: data with other data sources. Although Linkscape and Open Site Explorer provide a great data source, there is some functionality that Yahoo! had that isn’t present in other tools yet. The primary functionality I missed was the ability to perform searches against page content; not just page title, URL, and anchor text.

These link searches can help you identify link opportunities from other websites’ (such as competitors) backlinks.

Searching Content of Backlinks

To solve this problem, I set up a Google Custom Search Engine using data from Open Site Explorer. There are two exports of data you can use: links and linking domains. I’ll briefly go over the pros and cons of each as a data source in GCSE.

Linking URLs

Pros

  • Only search content that has links
  • Less noise

Cons

  • Limited to top links
  • Limited to 25 URLs per domain
  • Multiple links per domain reduces domain diversity
  • Limited content (5,000 annotations = 5,000 URLs)

Linking Domains

Pros

  • Search all indexed content on a linking domain
  • Find linking sources not included in OSE export
  • Greater domain diversity
  • More content (5,000 annotations = 5,000 domains of content)

Cons

  • More noise
  • Large linking domains like WordPress.com and Blogger.com have subdomains (lots of noise)
  • Results that don’t have a link

Setup of Custom Search Engine

Setup of your custom search engine is very easy. For this example, I’m going to use linking domains from OSE.

1) Perform search in Open Site Explorer

Search Open Site Explorer

2) Pull linking domains for all pages on the root domain, and export to CSV

Link Domains in OSE

3) Get list from Excel

Domains in Excel

I used Find & Replace to add a * to the end of all URLs, for matching. You can sort by DA or linking domains. Google Custom Search Engine only allows 5,000 annotations, so only copy up to 5,000 domains.
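If you’d rather script that step than do it in Excel, here’s a rough PHP sketch. The file names, and the assumption that the domain sits in the first column, are mine – adjust them for the actual export:

<?php
// Rough sketch: turn an Open Site Explorer linking-domains CSV export into
// a list of site patterns for a Google Custom Search Engine.
$in  = fopen('linking_domains.csv', 'r');
$out = fopen('gcse_sites.txt', 'w');

fgetcsv($in);                                   // skip the header row
$count = 0;
while (($row = fgetcsv($in)) !== false && $count < 5000) {   // GCSE annotation limit
    $domain = trim($row[0], " \t\"/");
    if ($domain !== '') {
        fwrite($out, $domain . "/*\n");         // trailing wildcard so every page on the domain matches
        $count++;
    }
}

fclose($in);
fclose($out);
?>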

4) Create Custom Search Engine

Go to Google Custom Search Engine.

How to Create Google Custom Search Engine

5) Perform your searches

So here are the pages on domains that link to distilled.co.uk, that include “link building” in the content and “resources” in the title.

Replace Yahoo linkdomain with GCSE

This solution gives you a new way to mine for backlink opportunities using your competitors’ backlinks. You can also include linking domains from multiple competitors at the same time. However, you can only include up to 5,000 annotations at a time, so you might want to use some Excel filters to remove noise and duplicate entries.

Tips

Here are a few quick tips to speed things up.

  • Remove massive domains – Large domains like wordpress.com and blogspot.com can produce a lot of noise.
  • Use the -site: search to reduce noise – If a particular domain is creating a lot of noise in your search, use a negative site search to remove it.
  • Search brand mentions – A search for the brand can help find the linking pages on these domains.
  • Search top anchors from OSE – Find the pages that include the anchors the site is targeting.

Example Queries

"powered by wordpress" "distilled"

Find pages that mention the brand “Distilled” and include “Powered by WordPress”. This is an easy way to find the blogs linking to Distilled.

“guest blogger” OR “guest post” OR “guest article” OR “guest column” -site:blogspot.com -site:wordpress.com -wordpress.org

Find guest blogging opportunities, but filter out domains that may create a significant amount of noise.

"powered by vbulletin" AND seo

Find vBulletin powered forums mentioning SEO.

“link building” intitle:resources

Find link building resource pages.

Give it a Try & Search SEOmoz’s Backlinks

A few queries to try:
"top seo tools"
“link building” intitle:resources
"open site explorer" "powered by wordpress"
allinurl:seomoz

Go ahead, try it, you know you want to!


I removed linking domains with a DA greater than 90, just to remove some noise from larger domains. (Selecting this value to filter by was completely arbitrary and is just to make the example easier to use.)

Need More Queries?

  • Long List of Link Searches (SEOmoz)
  • 21 Link Builders Share Advanced Link Building Queries
  • 74 B2B Link Building Queries
  • 106 Sponsorship-Based Link Building Queries

I hope this helps everyone replace some of the functionality of the Yahoo! linkdomain: command. If you’ve got more link searches or ideas to add, please share.




SEOmoz Daily SEO Blog

Posted by randfish

Ugh… Part of me just wants to link to this old blog post and leave it at that.

But, since there’s actually a bit of data to share helping to show that (at least so far) Google Instant changes less than your average algorithmic rankings update, let’s share.

880,000 Search Visits Analyzed

Conductor released some nice research from anonymized data of sites on their software platform making a compelling case:

Search Term Keyword Length for Visits Post-Google-Instant
If Conductor keeps putting out this kind of stuff, they’ll be a "must-read" in no time

Hmm… Looks pretty darn similar to me. A tiny increase in 4, 5 and 6 word phrases would seem to go against many of the prognostications and fears that this move would decimate the long tail (though, to be fair, plenty of savvier search folks predicted a slight increase as Google’s "Suggest" function would be more obvious/visible to searchers and push them to perform more specific queries).

Google Search Traffic for SEOmoz & Open Site Explorer 

While I don’t have as much data to share as Conductor, I can show you some tidbits from SEOmoz.

Here’s SEOmoz.org’s traffic from Google in the past week compared to the week prior:

SEOmoz.org's Traffic Pre-and-Post Google Instant

 

And here’s a similar look at OpenSiteExplorer’s Google traffic:

 

OpenSiteExplorer Traffic Pre-and-Post Google Instant

 

There’s a suspiciously small amount of change in the keyword demand, and although these are certainly un-representative of the broader web, we can be relatively confident that lots and lots of folks in our industry, performing queries that might lead them to these two sites, have awareness of and are using Google Instant.

One change that did catch my eye (thanks to some Tweets on the topic) is that Google’s Suggest itself seems to have changed a bit:

Querying for SEOmoz in Google Instant

Hard to complain about that :-)

Other Sources Worth Reading on the Topic

I was a bit dismayed to see so many in the SEO field taking this as a serious threat or even touting the massive "changes" that would be coming soon to SEO best practices or even search query demand. We’re usually pretty good about shrugging off Google’s pressbait around technical changes that don’t have much of an impact, but this one seemed to have more legs than usual.

That said, there are a few pieces I think warrant a read-through (or at least, knowledge of):

Very much looking forward to the discussion, but I’m leaving for Social Media Week Milan and will be hard pressed to contribute at normal levels until my return next week. Until then – Buona notte!

p.s. If you have data to share on how Instant has or hasn’t impacted your traffic-driving queries, that would be awesome. If you blog/upload it, we’ll be happy to update the post with links.



SEOmoz Daily SEO Blog
Can Google Detect an Affiliate Website?

One of the questions that often comes up is: does Google hate affiliate websites, and are they penalized in the algorithm?


The answer to that is slightly nuanced but, for simplicity’s sake, they don’t hate affiliate websites. Nor have I seen any evidence that shows affiliate sites are penalized. What Google does hate is thin affiliate websites with little or no trust. However, better questions to ask are: can Google detect affiliate websites, and can they make it harder for affiliate websites to rank? But those are entirely different questions.

If you’ve read the leaked quality rater guide from 2009, you’ll see that Google has set up a lot of hurdles specifically making it harder for affiliate websites to “pass” the sniff test. One of the quickest and easiest ways that Google can determine an affiliate website is through “naked” links to common affiliate programs like Linkshare, CJ, ShareASale, and others. But, really, how good can Google be at detecting those links? Well, here’s a publicly available free tool put out by Sitonomy that checks what types of programming tools are being used by a website.

Now if the folks at Sitonomy can detect that 4% of the links on a page are from CJ, I’m positive that Google can as well. I’m sure Google can tell at the page level and for the site as a whole. I’m also quite sure Google has an idea at what point, whether by percentage or by total number of links, a site becomes an affiliate website. It would also be fairly easy to say, once you cross that threshold, you need a higher level of trust to rank for competitive terms. This is one of the reasons I strongly disagree with Lori Weiman, who says affiliates should never cloak links.

So what are the takeaways here:

  • Use a tool like Sitonomy to check your most important pages and see what they are able to find as far as affiliate links
  • Look into redirection tools that mask your links, and make sure you block them from search engine spiders (a minimal sketch follows this list)
  • Obfuscate some of your other links as well even if they aren’t affiliate links: people should always be unsure of your intent
  • Always make sure you comply with FTC regulations for disclosure. If needed, use a nice non-machine-readable graphic for maximum stealthiness
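To sketch that second point, a masking redirect might look something like this – the /go.php name and the link list are made up, and you’d block the script in robots.txt so spiders never follow it:

<?php
// go.php - hypothetical affiliate-link redirect script.
// Link to /go.php?id=blue-widget on your pages, and add to robots.txt:
//   User-agent: *
//   Disallow: /go.php
$links = array(
    'blue-widget' => 'http://www.example-network.com/click?pid=123&offer=456',
);

$id = isset($_GET['id']) ? $_GET['id'] : '';

if (isset($links[$id])) {
    header('Location: ' . $links[$id], true, 302);   // temporary redirect to the affiliate URL
} else {
    header('Location: /', true, 302);                // unknown id - send visitors home
}
exit;
?>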


Michael Gray – Graywolf’s SEO Blog