
Posted by fabioricotta

Hi SEOmoz folks,

A few weeks ago my coworker Leandro Riolino published on our blog an experiment he had been working on. The idea was to link from page B to page A with 3 different anchor texts and see whether Google would credit page A with the value of all of those anchor texts.

The setup is simple: we chose 3 random keywords, created an internal page, and created 3 links to different URLs that each have a canonical tag pointing to the main page. You can see the idea illustrated below:

Canonical Tag Experiment

So, after choosing the 3 keywords, we searched for each one to check whether Google had any occurrences of them:

Keyword 1

Keyword 2

Keyword 3

Then we bought a new domain with no backlinks and, as you can see below, Google showed that this website wasn’t in the index:

New Website

Creating the Index Page

To start the experiment my coworker downloaded a random template from the Internet with some random content inside, changing only the page title, meta description and H1 tag, all targeting the main website keyword “jogos online de corrida” (“online racing games” in English). The major change he made to the template was a PHP conditional that inserts the canonical tag if the requested URL has the keyword parameter:

<?php if (isset($_GET['keyword'])) { ?>

<link rel="canonical" href="http://www.jogosonlinedecorrida.com.br" />

<?php } ?>

For those who know some PHP, this code checks whether the keyword parameter is set in $_GET. If the check returns true, the code inserts the canonical tag line into the HTML.

It’s important to say that we do not mention any of those 3 keywords on the index page. So this page can’t rank on the strength of a keyword mention; instead, Google has to rely on its backlinks.

Internal Page

The next step was to create the internal page. We created it with 3 links in 3 different page positions: one in the header, another in the content area and the last one in the footer area, with the following anchor texts: "nanuoretfcvds ksabara1", "esjstisfdfkf aasjdkwer" and "gisrterssia fdswreasfs". Each link had a different target:

  • http://www.jogosonlinedecorrida.com.br/?keyword=key1
  • http://www.jogosonlinedecorrida.com.br/?keyword=key2
  • http://www.jogosonlinedecorrida.com.br/?keyword=key3

It’s important to say that we used the meta tag <meta name="robots" content="noindex,follow" /> on this internal page, so the page itself would not rank for those 3 keywords.
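Putting the pieces together, the internal page’s markup would have looked roughly like this (a reconstruction from the description above – the pairing of each anchor text with each keyword parameter is my assumption, and the surrounding markup is illustrative):

<html>
<head>
<title>Internal page</title>
<!-- keep this page out of the index, but let its links pass value -->
<meta name="robots" content="noindex,follow" />
</head>
<body>
<div id="header">
<a href="http://www.jogosonlinedecorrida.com.br/?keyword=key1">nanuoretfcvds ksabara1</a>
</div>
<div id="content">
<a href="http://www.jogosonlinedecorrida.com.br/?keyword=key2">esjstisfdfkf aasjdkwer</a>
</div>
<div id="footer">
<a href="http://www.jogosonlinedecorrida.com.br/?keyword=key3">gisrterssia fdswreasfs</a>
</div>
</body>
</html>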

Indexing the Content

In order to have the pages indexed by Google, my coworker created an XML sitemap with the 2 pages (home and internal) and submitted it to Google Webmaster Tools. It is important to say that we did not link to these pages from anywhere else and did not submit them to any bookmarking service.

After 2 weeks, our website was showing the 2 pages when we used the "site:" operator. After one more week, Google was showing the 2 pages along with links to their cache.

After this waiting time we searched Google for the 3 keywords we had created and noticed that the main page was appearing for ALL of them, as you can see below:

Keyword 1 - Home

Keyword 2 - Home

Keyword 3 - Home

So, with this small experiment we saw that Google was giving a page the value of 3 anchor texts when we used the canonical tag as a funnel.

Conclusions and Applications

With this small experiment we have a hint about how Google treats the anchor text of links into a page that uses the rel=canonical tag, and now we can design some new experiments (e.g., use a parameter in the logo link to your main page so the page receives the anchor text of a second link – relevant because we know that normally only the first anchor text counts).

We know this is a single experiment and we need to see whether it works on a real website; Google understands page segments, so this may not work exactly as presented in this article. We still need to test that.

I can’t end this article without congratulating my coworker Leandro, who gave me a huge amount of knowledge with this experiment – thank you.

Hope you liked this article!



SEOmoz Daily SEO Blog

Posted by kieronhughes

This isn’t a post about keyword research for video content on YouTube, but about exploring the ways in which YouTube can be used to find relevant data, information and keywords in topics where you have little understanding of the services and industry.

Now for an example…

You’ve just taken on the SEO contract for a private speech therapist based in the UK, and need to carry out industry and keyword research into the sector to best understand the opportunities available, and to structure their in-development website accordingly.

You don’t know much about speech therapy, but you’ve been given some information by your client and now you’re on the hunt for keywords. Heading over to the Google Adwords Keyword Tool, you put in "speech therapy" to see what suggestions/volumes are thrown up.

Keyword Research Results for Speech Therapy

Now although the results displayed by the Google Keyword Tool might be relevant to what you are looking for, they don’t provide the bigger picture – and the bigger picture is what you should be looking for.

  • Are there any common phrases used by experts/consumers in the industry?
  • Are there any related (similar) services that the client could offer, but has not mentioned?
  • What are the conditions that most commonly lead to speech problems?

All of this information can no doubt be found by carrying out your own research on Google/your other favourite search engine, but it can still be difficult to sift through the results to find the correct information you are looking for… and this is where YouTube can really help.

Using YouTube as a Keyword Tool

Unlike creating a web page, uploading a video to YouTube is very accessible to anybody with a video file and an internet connection. The great advantage of the upload process is that Google prompts people to provide descriptive content about the video, such as explanatory text (description), a relevant title, and appropriate tags – so not only is it easier for the videos to be sorted, it means more data is available for us to mine.
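Back then this mining had to be done by eye; today you could script the same step with the YouTube Data API. Here’s a rough PHP sketch, assuming a v3 API key – the endpoint and field names come from the current v3 API, not anything the original research used:

<?php
// Pull the titles and tags of the top videos matching a seed phrase,
// as raw material for keyword research. Assumes a YouTube Data API v3 key.
$apiKey = 'YOUR_API_KEY';
$query  = urlencode('speech therapy');

$search = json_decode(file_get_contents(
    "https://www.googleapis.com/youtube/v3/search?part=snippet&type=video&maxResults=10&q=$query&key=$apiKey"
), true);

foreach ($search['items'] as $item) {
    $videoId = $item['id']['videoId'];
    $video = json_decode(file_get_contents(
        "https://www.googleapis.com/youtube/v3/videos?part=snippet&id=$videoId&key=$apiKey"
    ), true);
    $snippet = $video['items'][0]['snippet'];
    echo $snippet['title'] . "\n";
    // Tags are the uploader's own vocabulary – often the richest source.
    if (!empty($snippet['tags'])) {
        echo '  tags: ' . implode(', ', $snippet['tags']) . "\n";
    }
}
?>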

A search on YouTube for "speech therapy" provides 3,870 results, and at random I chose a video recording of a speech therapy class.

Scrolling past the video to look at the related information provides a great initial insight:

YouTube Related Video Data

 

From the above information we can get the following keywords:

  • down syndrome (40,500 searches)
  • oral motor therapy (58 searches – low, but still relevant)
  • speech therapist (we already knew this one – 12,100 searches)
  • apraxia (2,900 searches)

Even with a somewhat basic knowledge of speech therapy it would have been difficult to know that "oral motor therapy" and "apraxia" were related to problems with speech – and it’s a great stepping stone in our keyword research.

Scrolling further down the page to analyse the comments, we see:

YouTube Comment on Speech Therapy Video

Which provides us with further keywords and opportunities:

  • autism (201,000 searches)
  • oral motor exercise (91 searches)
  • chewy tube (the "t-shaped orange material" – answered in a further comment – 390 searches)

Other comments on the page are equally as helpful, with examples such as "decreasing hyper sensitivity" being another useful research avenue.

Looking around YouTube, comments can actually be much more helpful than the published video data. Conversations often arise between people, and this is an essential place to look if you want to know more information.

Once you have taken the time to browse the videos, note down some of the related information and make a list of possible opportunities, you can go back to the Google Keyword Tool (or whatever tool you might be using) with a whole load of ammo for your next stage of research. The advantage of this second iteration is that you will also have used YouTube for what it is intended for – watching videos and finding out more about the services and conditions related to speech therapy (a greater understanding of your client’s business means you can ultimately do a better job for them).

Also note that you should research any of the phrases you find in more detail before optimising a website for them – more about this point at the end of the article.

From just the initial view carried out on YouTube above, we have gone from a sitemap looking similar to this:

Example Simple Website Structure

To something more representative of this:

Expanded Website Structured Based on Speech Therapy Keyword Research

The above sitemap was generated after looking at only one YouTube video on the subject – imagine the level of detail you could get into if you carried out full research.

By following an iterative process of looking at YouTube, understanding the opportunities, and analysing the search volumes, you can begin to form a visual picture of how products and services are related – something that can be then portrayed back to the client for approval and additional ideas. Clients can often be too familiar with their business, and miss out a basic level of information when attempting to explain what it is they offer – and this is usually the most valuable detail from an SEO perspective.

Carrying out your own research is vital to covering all of the best angles when working with a new website, and you shouldn’t just stop at the list of service offerings that the client provides you with. Take things one step further, and you’ll no doubt find the website is in a greater position to dominate search visibility than some of the key competitors in the industry.

Use All of the Data Available

With people of all ages, backgrounds, specialties and even personal experiences uploading and contributing to YouTube, it really is a gold mine of information and can help a great deal with search marketing campaigns. If you’re researching a particular service/product/industry, why wouldn’t you use all of the information that is freely available – especially when you have a user-updated resource such as YouTube at your fingertips?

The one point I’ll leave on, is that YouTube shouldn’t be used for all of your research on a particular subject, as it is, after all, open to mislabelling, incorrect information and of course the efforts by fellow SEOs to promote video content ;)

I’m in no way affiliated with anyone in speech therapy, but I’m sure the children’s communication charity I CAN would appreciate a donation if you’re feeling generous: http://www.ican.org.uk/Support%20Us/Donate/Donate%20Now



SEOmoz Daily SEO Blog

Posted by MikeCP

Today I want to talk about tracking organic rankings in Google Analytics. Previously, we were able to determine the results page from which a Google organic click came (detailed on the Distilled blog by Will Critchlow). This was nice because we could append it to the end of our keywords in Google Analytics for some interesting data (André Scholten’s post at Yoast.com has a step by step), as seen below.

Keyword Page Rankings
Image courtesy of Yoast.com

This solution provides limited utility, and if you’re like me, you implemented it, maybe checked it out once in a while, but never really turned this into actionable or otherwise meaningful data. I’m going to detail how rank tracking in Google Analytics can be made a lot more useful thanks to custom variables and a change in Google’s referring URLs. But first…

Some History

When Google began testing an AJAX search interface in early 2009 there was a flurry of concern that it could mean the end of traditional rank tracking, web analytics packages, and I’m sure someone said SEO was dead, too. The concern wasn’t without merit; Google was serving results in AJAX with the URL pattern as http://www.google.com/#q=keyword, and most analytics packages ignored the hash and everything after.

Fast forward to September 8th, when Google introduced Google Instant. The AJAX SERP usage had been steadily increasing over time, but shot up in usage when Instant was rolled out. Fortunately for Omniture, WebTrends, and other third party analytics packages, Google worked out a way to pass the referrer information from the SERPs, rank tracking still works, and I’m still working as an SEO in a living industry.

As it turns out, Google includes even more information in the AJAX SERPs than they previously did, including one really interesting parameter: "cd=". The cd= parameter contains the exact ranking position of the search listing, which makes for some really awesome possibilities, especially when paired with Google Analytics’ custom variables.

Why Custom Variables?

Custom variables are a bit of an enigma to even advanced Analytics users. I’ll admit that I never really made much use of them in the past. You’ll often see examples where custom variables are used to track logged-in vs. logged-out users, which is definitely a great use. Rob Ousbey’s 6 cool things YOU can do with custom variables is a good set of examples to get your feet wet.

In André Scholten’s example above we’re using Google Analytics’ user defined value – isn’t that just as good as a custom variable? Well, the difference depends on how you intend to use your data. With custom variables, you’re granted much more flexibility within Google Analytics for slicing and dicing data. For instance, through the use of either custom reporting or advanced segments with custom variables, I can pretty easily track how much revenue a keyword has brought in when ranked in the 2nd position, as opposed to the 4th. While this may be possible with the user defined variable, it would require quite a bit of work after an Excel data dump.

Now, let’s get to business:

The How

Getting this properly set up was remarkably easy for me, and I have very little programming knowledge, so I would imagine most people wouldn’t have much trouble. I used PHP, as I was working with a WordPress site, but I’m sure you crazy hackers can do the same in most any language.

Step One – Extract cd= Value from Referrer String

I used this snippet to do this.

<?php preg_match("/cd=(\d+)/", $_SERVER['HTTP_REFERER'], $matches);
$str = $matches[0];
preg_match("/(\d+)/", $str, $matches);
$rank = $matches[0] ?>

Please don’t make fun of my hacky coding

This assigns the cd= value to the $rank variable. We’ll reference this in…

Step 2 – Call cd= Value in our Google Analytics snippet

Now, we want to insert the custom variable call between the setAccount and trackPageview lines in our Analytics snippet (shown below using the asynchronous code):

var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXXX-X']);
  _gaq.push(['_setCustomVar',1,'Google_Rank','$rank',2]);
  _gaq.push(['_trackPageview']);

We’ve set the custom variable slot to 1, and the scope to the session-level (the last argument, set as 2). If you are already making use of custom variables, be sure to not overwrite a previously occupied slot. For more information on how the custom variable is formatted, see Google’s help page on the topic.

Step 3 – Create an IF Statement so the CustomVar isn’t Called Every Time

We only want to include this line when we have a cd= value, otherwise every new click will overwrite the last value. To do this, I used the following IF statement, again coded in PHP. This is the final step, and the complete Google Analytics snippet:

<?php if ($rank != '') {
echo "<script type=\"text/javascript\">\n
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXXX-X']);
  _gaq.push(['_setCustomVar',1,'Google_Rank','$rank',2]);
  _gaq.push(['_trackPageview']);";
echo "\n";
}
else {
echo "<script type=\"text/javascript\">\n

  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXXX-X']);
  _gaq.push(['_trackPageview']);";
}
echo "\n";
?>

  (function() {
    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
  })();

</script>

Here we’re checking whether $rank has a value. If it does, we include the custom variable call with that $rank value; if not, we print the Google Analytics code as normal. Also included in the above are some line breaks (\n) so that the output formats correctly.

The Most Important Part – Analyzing Our Data

What’s the point of going through all this effort if it doesn’t provide you with any analytical insight? None, of course. But this rank tracking solution has some added benefits over the traditional rank tracking software that may be really useful to some SEOs. These include:

Rankings by City, Region, Country

Traditional rank tracking software suffers in that its ranking results are dependent on the location of the servers. With custom variable rank tracking and a little spreadsheet pivot table magic it’s pretty easy to get your site’s rank for any location.

Historical, Definite Data

Once this is properly set up, you’ve got access to definite rankings within your Analytics data from that point on. So as holiday season 2011 rolls around, it’s easy enough to review where your site ranked during the 2010 holidays, helping to set budgets, goals, and expectations.

Bounce Rate/eCommerce Data/etc. by Rank

Whatever your KPI, you can compare it against search ranking. Reporting the ROI of link building efforts or on site optimization becomes much easier when you’ve got rankings included in your dataset.

Some of the quick ideas I had around this include:

  • Average rank over time for top 50 keywords
  • Average rank over time for 4+ word keyphrases
  • Bounce rate for 2nd+ page clicks
  • Revenue % increase for Keyword X when ranking increases from 2 to 1

I should note that getting averages is a lot easier in Excel with a pivot table, as seen below:

Average rank pivot table
This can be adjusted to show your minimum rank as well

Creating Custom Reports and Advanced Segments

Custom variables aren’t included in the default reports for Google Analytics, so unless you do all your work in Excel, you’ll probably want to create some custom reports or advanced segmentation to work with the data directly in Analytics.

Advanced segmentation is great for this data. Below is the function one would use to track rankings between 11 and 15, which might be strong candidates for on-page optimization that could provide the boost onto the first page:

Advanced Segmentation
You can apply this particular advanced segment with this link.

The Downsides

The most obvious downside is that you’re only receiving a ranking when a listing is being clicked on, so for very small sites there may be limited utility. Ranking data will be spotty past the 2nd page, as well.

Additionally, the AJAX SERPs are not being served to all users in all locations. Small sample size warning here, but I’m seeing about 40% of organic Google traffic coming from the AJAX SERPs (done through a simple calculation of visits with our custom variable divided by total Google organic visits over the same time period). Michael Whitaker is seeing this number over 50% in his data. This number is likely going to increase as Instant is rolled out further.

The #-pack local listings can really throw things off, too. If a particular query gets one of these to start the SERP, the cd= continues after:

cd= rankings

Lastly, there does exist the possibility that Google discontinues its use of the cd= variable for whatever reason.

Go Analyze

I hope some of you can make some good use out of this functionality. I’ve only had it installed on my sites for a short time, but I’ve definitely found it interesting to play around with. If you don’t already have Excellent Analytics installed in your Excel I would highly recommend doing so, even if you don’t implement this tracking, and especially if you do.

I’d like to thank Michael Whitaker of Monitus for his help. He’s been installing this setup for his clients for a bit now. Monitus offers proper eCommerce Google Analytics installation for Yahoo! stores, which is surprisingly difficult without Monitus.

If you’ve got any other ideas for working with this data, sound off in the comments or let me know on Twitter @MikeCP. Personally, I’m really excited to have this data rolling in and the possibilities are nearly endless. I’ll be sure to report any interesting ways to manipulate the data in future blog posts. Cheers!



SEOmoz Daily SEO Blog

Posted by caseyhen

Earlier this year, jtkaczuk wrote a YOUmoz post about “Using Twitter as a Sitemap”.  After reading it I began to think about the power of Twitter and whether using Twitter more can help indexation.  Many Twitter users tweet about new posts or products on their account, hoping to draw attention and links from their followers.  What if this process could also help get more pages indexed, and indexed faster?  I was surprised by the results of this quick little experiment that I ran over a few months.

Experiment Setup

The experiment started with 15 local clients of mine who often tweet about new products or posts on their Twitter accounts. These accounts vary in followers from 75 to about 1,500. While I did not have direct control over these accounts, I was able to track when a new product was added, tweeted, crawled by GoogleBot, and indexed by Google via a PHP script I wrote and installed on their CMS. Along with those, I monitored the number of RTs, when the product was indexed, and whether it stayed indexed for at least 48 hours after it was launched.
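The tracking script itself isn’t shown here, but the heart of such a logger is small. A minimal sketch (the log file path is hypothetical, and note that a claimed-GoogleBot user agent can be spoofed – production code should confirm it with a reverse DNS lookup):

<?php
// Append a line to a log whenever a visitor claiming to be GoogleBot
// requests a page; the timestamps let you measure tweet-to-crawl delay.
if (isset($_SERVER['HTTP_USER_AGENT'])
    && stripos($_SERVER['HTTP_USER_AGENT'], 'Googlebot') !== false) {
    $line = sprintf("%s\t%s\t%s\n",
        date('c'),                  // when the crawler arrived
        $_SERVER['REQUEST_URI'],    // which product/post URL it fetched
        $_SERVER['REMOTE_ADDR']);   // crawler IP, for later verification
    file_put_contents('/var/log/googlebot-visits.log', $line, FILE_APPEND);
}
?>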
 
For each product or post that was launched, they were placed in one of three categories for 48 hours:
 
  • Twitter Links Only – 48 hours with no internal links and 1 tweet only from the original account
  • Site Architecture Links Only – No tweets about product or post, only internal links and sitemaps generated
  • Both Twitter & Site Architecture Links – Both tweets, internal links, and sitemaps to post or product
After the 48 hour observation period was over, the products or posts were launched as normal, with tweets, internal links, and anything else my clients might do to promote them.  We also stopped collecting data at that point.
 

Experiment Warning

As Rand and Ben always say, correlation does not imply causation. Nor do I encourage you to SPAM Twitter with a whole bunch of links to content that is not useful to your followers. Take the results of this experiment and try to find where they can fit in your business without upsetting and losing your followers.
 

Experiment Data Summary

Over the course of the experiment, 120 products or posts were published – 40 in each of the categories above. These attracted over 96 RTs, over 350 GoogleBot visits, and an 87% indexation rate. Here are some quick highlights of the findings:
  • Twitter Only Launch
    • GoogleBot averaged its first visit within 78 seconds of the first tweet.
    • Tweets with 3 or more RTs had their pages indexed 325% faster, and 125% more of their products and posts indexed, than tweets with no RTs.
    • Average indexation time of the post or product differed with the number of RTs:
      • 3 or more RTs had an average indexation time of 8 hours, with 86% indexed.
      • 1 or no RTs had an average indexation time of 26 hours, with 69% indexed.
  • Internal Links Only Launch
    • GoogleBot averaged its first visit within 2.5 hours of the first internal link on the site.
    • Average indexation of the post or product was 8.5 hours, with 85% indexed.
  • Both Internal Links and Twitter Launch
    • GoogleBot averaged its first visit within 82 seconds of being launched.
    • Average indexation of the post or product again differed depending on the number of RTs the Twitter update received.
      • 3 or more RTs had an average indexation time of 4.25 hours, with 88% indexed.
      • 1 or no RTs had an average indexation time of 5 hours, with 79% indexed.

Experiment Raw Data

Twitter Only

Internal Links Only

Both Internal Links and Twitter

Experiment Conclusion

The data suggests that launching your new product or post with internal links, along with a tweet that gets 3 or more RTs, will help increase the speed and rate at which it gets indexed. While the data may show evidence that this technique will help your site improve its indexation and crawl time, I would advise you to apply it with caution and care. All of my clients took care not to launch more than 1 product a day and continued to tweet about other things besides the new products launched.  My personal warning is to remember that Twitter is designed for your clients and not as a launching pad for Google; it would be horrible to see your account lose its following due to mass product tweeting.  What are your feelings or experiences on using Twitter to increase your indexation?



SEOmoz Daily SEO Blog

Posted by RobOusbey

Let’s start with a sneaky tactic.

I know that SEOmoz blog readers are an internet-savvy crowd, so many of you are probably familiar with the ‘browser history sniffing’ techniques that exist. (Bear with me, we’ll get to internet marketing advice in a moment.)

In case you’ve not come across the concept before, it’s probably best exemplified by the site Start Panic – just hit the ‘Let’s start!’ button to watch it trawl through your browser history and start listing sites that even you had forgotten you’d visited.
 
StartPanic uses Javascript to do the dirty work, but it’s also possible to do this completely using CSS, and without Javascript. (There’s advice about implementing the technical side of this in a popular post by Niall Kennedy.)
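The CSS-only variant boils down to one idea: style :visited links with a background image that points at your own logging script, and the image request itself becomes the “yes” signal. A bare-bones sketch (the /log-history.php endpoint is made up for illustration; browsers of that era allowed this, and it has since been patched):

<style>
/* The background image is requested only if digg.com is in the
   visitor's history, so the hit in our logs is the signal. */
a#check-digg:visited { background-image: url(/log-history.php?site=digg); }
</style>
<a id="check-digg" href="http://digg.com/">Digg</a>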
 
I wanted to show how you can use this to help your website perform better – let’s begin with the least controversial use, and work up from there.
 

1 – Customize the User Experience

Niall’s post – linked above – suggests one very sensible use of this technique: offering your users links to the social sites they use, and hiding the ones they don’t. At the bottom of this live example page, you’ll see a ‘Digg It’ button if you’ve been to Digg, a ‘Share on Facebook’ button if you’ve been there, etc. By limiting the set of sharing buttons, you remove the ‘social clutter’ that is prevalent on some sites – this doesn’t just give the user a cleaner page, but may produce a much higher ‘sharing’ rate for your page.
 
You can also customize the content of your site: use the browser history sniffing technique to see the kinds of blogs and news sites your visitors read, and then adjust your content based on the results. For example: I might consider writing a weekly post about PPC for the Distilled blog. We could check how many Distilled visitors had looked at PPC Hero, the AdWords blog, and the AdWords support pages. If the number was high enough, we might add content to satisfy that niche.
 
Likewise, if you find that a high proportion of your readers visit KittenWar, then you might consider adding a little more ‘cute’ to your posts.
 

2 – Retarget Your Publicity

 
Traditional ad-network retargeting works in the following way:
  • a visitor comes to your site, and leaves without making a purchase
  • your advertising network drops a cookie onto that user’s computer
  • the user visits a different site which displays ads from that network
  • the network recognizes the user, and shows them an ad for your product
  • hopefully they’re reminded of you, and come back to the site to make a purchase.
However, this retargeting only works when you can cookie people once they’ve visited your site. I’d propose using this technique to alter the copy on your site, based on what the user has already seen about you elsewhere.
 
For example: check for new posts about your brand each morning (or can I assume you do this already?) If your company had three product reviews on blogs and news sites today, then record these URLs, and check to see if each visitor to your site has already read one of them. You could then display a prominent content box on the front page with information about the exact product they saw reviewed, and a link to your page for that product. You might even acknowledge they’d seen the review: "Initech wants to offer you a 10% discount, as a reader of The Daily Bugle"
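Mechanically, this is the same sniff pointed at today’s review URLs. A hedged sketch of the client side (the review URL, the element ids and the promo markup are all hypothetical – you set known colors in your own stylesheet so the check is reliable):

<style>
#bugle-review { color: #000000; }
#bugle-review:visited { color: #ff0000; }
</style>
<a id="bugle-review" href="http://dailybugle.example/initech-review">&nbsp;</a>
<div id="bugle-promo" style="display: none;">
Initech wants to offer you a 10% discount, as a reader of The Daily Bugle.
</div>
<script type="text/javascript">
// 2010-era trick (since patched): :visited styling leaked through
// getComputedStyle, revealing whether a URL is in the browser history.
var link = document.getElementById('bugle-review');
var color = getComputedStyle(link, null).color;
if (color === 'rgb(255, 0, 0)') {
    document.getElementById('bugle-promo').style.display = 'block';
}
</script>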
 
You could use the same technique for reputation management. If a site has published a negative article about you, there’s a chance that people will come to your site to find out more. You may not want a message on your front page that reads "The Bluth Company has NOT committed treason – read more" for everyone – but you could choose to display this headline only to people who’ve already read about the story.
 

3 – Find Your Competitors’ Customers

This is where you could really up the ante with your CRO efforts.
 
I recently saw a bank that offered a cash bonus to people who closed their account at a competitor’s bank and switched over. This would be a perfect opportunity to sniff each visitor’s browser history to see whether you should promote the offer to them on your site. You can even avoid showing it to people who are merely shopping around (and looking at every bank’s homepage) by checking whether they’ve visited the URLs for logging in and out of the competitor’s online banking – a good sign they’re actually a customer of that company.
 
For e-commerce sites, you could check to see if your visitor has visited your competitor’s site, but could also check if they’ve looked at the competitor’s product on Amazon or other retailers. Your product page could then include a comparison between the two products. That could increase conversions, but you’d avoid comparing your product to a competitor’s for anyone who’d never seen the competing product.
 

To Conclude

So, the practice of checking whether a visitor has already been to particular pages might seem a little shady at first – but this is part of the way the web and web browsers are designed, and people can clear or block their browser history if they’d prefer.
 
Executed in the right way, it could be a very powerful technique for creating high performing, high converting websites. Use it wisely.
 
 

(Thanks for reading; you can follow me on Twitter: @RobOusbey, and I’m pleased to be speaking alongside some of the best SEO practitioners around at this year’s Pro Training Seminar – tickets are still available.)



SEOmoz Daily SEO Blog

Perhaps part of the “interesting data” Richard Rosenblatt was talking about was link anchor text on expired domains & cybersquatting efforts that he could redirect in bulk at high earning eHow pages.

Not to fear: Demand Media is a trusted Google partner, so the algorithm and engineers are prohibited from taking action against the same activity that would get your website removed from the search results.

Google’s blind eye and double standards toward the large MFA spam sites are becoming such a big issue that they look to be at the core of the marketing strategy for new search engines!

SEO Book.com – Learn. Rank. Dominate.

Posted by Tom_C

Wouldn’t it be great if you could somehow spot those SEO opportunities on your site which were low effort and high value? Well this post gives you a few ways you can do that! Sweet.

I’m going to be digging around in the recently released search queries report in Google Webmaster Tools:

Step 1 – Gathering The Fruit

The first thing we need to do is gather all the fruit (aka keyphrases). So within GWT, select Search queries, choose just "web" queries and, in this case, I’ve selected "United States", since that’s the main target market for SEOmoz. The more we can narrow this down the better the data we get; if we leave image search in there, and countries like Serbia, our data will be less accurate:

Once we have filtered the data we then want to download the data to Excel:

Step 2 – Identify The Low Hanging Fruit

Once we have the data in Excel we can do some monkeying around to get some meaningful insights. When you download the data you will be presented with a lot of placeholder values like this:

 So I run a find and replace on the following two items:

  1. Find "<10" and replace with 1
  2. Find "-" and replace with 0

Be sure to only run these over columns B,C,D to avoid stripping out anything from your queries column in A!

Now, once we have this data, how you want to slice it will depend on the number of impressions and keyphrases you have. Analysing the data for SEOmoz, I found that selecting all average positions not equal to 1 with impressions over 200 gave me a sample set of 97 keyphrases, which wouldn’t take very long at all to whiz through. If you have more time, or more keyphrases, you might want a longer or shorter list.
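If you’d rather script the cleanup and filter than monkey around in Excel, here’s a rough PHP equivalent (a sketch – it assumes the export’s columns run query, impressions, clicks, average position, which you should verify against your own download):

<?php
// Clean a GWT search queries export and keep the low hanging fruit:
// average position not equal to 1, with more than 200 impressions.
$in = fopen('gwt-search-queries.csv', 'r');
fgetcsv($in); // skip the header row
while (($row = fgetcsv($in)) !== false) {
    list($query, $impressions, $clicks, $position) = $row;
    // the same substitutions as the Excel find-and-replace
    $impressions = ($impressions === '<10') ? 1 : (int) str_replace(',', '', $impressions);
    $position    = ($position === '-') ? 0 : (float) $position;
    if ($position != 1 && $impressions > 200) {
        echo "$query\t$impressions\t$position\n";
    }
}
fclose($in);
?>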

 

As I mentioned this gives me a list of 97 keyphrases for the SEOmoz site. Let’s take a look at what some of those opportunities are!

An Example: SEOmoz’s Low Hanging Fruit

In this post I not only wanted to show you how to get the data but also to give you an insight into what kinds of actions you could take and what sorts of keyphrases you might look at so I ran the above process for the SEOmoz site and found the following low hanging fruit. Bear in mind that there weren’t that many really easy wins in the data since SEOmoz generally has fairly good SEO (unsurprisingly!). Still, it gives you an idea of the thought process.

Keyphrase: SEO   |   Ranking: 9.4   |   Impressions: 49,500   |   Clicks: 590

Ranking URL: http://www.seomoz.org

Now, I’ve shot myself in the foot a little here by picking a keyphrase which isn’t really low hanging fruit – it’s actually a highly competitive keyphrase! That said, it’s useful information to have. Without having rank tracking set up for the site, it instantly tells me that SEOmoz are highly competitive for this term, especially as some of the sites that rank above them are Google and Wikipedia. Still, there’s room for improvement. Maybe time for some more badge-bait, Rand?!

Keyphrase: Social media marketing  |   Ranking: 7.9   |   Impressions: 8,100   |   Clicks: 320

Ranking URL: http://www.seomoz.org/article/social-media-marketing-tactics

Again, this is a highly competitive keyphrase, but one which I feel SEOmoz could perform better for. The current ranking is working OK for them but could certainly improve dramatically if they could shift it from 7.9 into the top 3. Digging around, we see that the page is linked internally from every page in the navigation with good anchor text, and it has a total of 255 root domains linking to it, so there’s clearly a fair amount of work already done here. That said, I feel like there’s an opportunity waiting, since SEOmoz links out to lots of other sites from here, and most of those blogs would likely link back to the SEOmoz guide if they received a nicely written email. Incidentally, if you’re looking for a link from SEOmoz and have a top notch guide for one of the sites which doesn’t have any resources attached, then get in touch! So long as you link back to the page ;-)

Also, BONUS TIP – while researching backlinks in this space I stumbled across the fact that Amazon authors can get links from Amazon.com. Check out Darren Rowse on Amazon and then compare with Rand Fishkin on Amazon, and you’ll see that Rand has missed an opportunity to get his blog posts imported and hence get clean, followed links from Amazon. Sweet!

Keyphrase: What is seo  |   Ranking: 3.9   |   Impressions: 1,900   |   Clicks: 210

Ranking URL: http://guides.seomoz.org/beginners-guide-to-search-engine-optimization (Note here that this URL isn’t reported in GWT, it’s the old beginner’s guide URL which now redirects but the same keyphrase stands).

Here, I see the answer being a little easier than the above keyphrases. The term is less competitive and the title of the page doesn’t even mention "what is seo"! My actions would be to reword the title tag to be "What is SEO? The Free Beginner’s Guide to SEO from SEOmoz" and to mention "What is SEO" on the page at least once (currently it only mentions "what is search engine optimisation" and although Google knows they’re the same phrase I’d still like to see the exact phrase on the page somewhere). Also, there is no navigation link on the site to the beginner’s guide so slipping a few links into the next few blog posts with the anchor text "what is seo" will help boost the rankings for that phrase.

Keyphrase: Free seo tools  |   Ranking: 4.2   |   Impressions: 480   |   Clicks: 73

Ranking URL: http://www.seomoz.org/tools

The term "seo tools" is fiercely competitive but the "free seo tools" term seems like it would be a lot easier to go after, in fact SEOmoz is one of only 2 of the top 10 results which doesn’t mention the term free in the title tag of the page. This could be rectified easily and in addition to that the page doesn’t even mention "free seo tools" on the page. Personally, since this is something people search for I’d be tempted to re-des
ign the page to add a "Free SEO Tools" sub-header to differentiate between the free and PRO tools. That way it’s a good user experience and also gets the phrase on the page.

Keyphrase: Keyword research  |   Ranking: 19   |   Impressions: 110   |   Clicks: 12

Ranking URL: http://www.seomoz.org/blog/10-steps-to-advanced-keyword-research

Note here that the impression numbers are so low because the page is ranking on the 2nd page. Not having a page ranking in the top 10 here is a mistake for SEOmoz, I think (sorry, I mean opportunity!). The correct page is clearly the page on keyphrase research from the new beginner’s guide, and the best way to make that page rank is to throw some more internal links at it. Currently there are basically no internal links to that page except from other beginner’s guide pages; linking to it from elsewhere should be able to get it ranking. One idea for getting internal links to pages of the beginner’s guide would be to automatically link to the keyword research page from any blog post in the keyphrase research category. That way the individual pages of the beginner’s guide would essentially get more deep internal links.

A Data Warning

A warning: I’m still not sure how much I trust this impression and click data. I wrote a post over on Distilled about how the reported numbers are way out from analytics numbers. To be honest, if I were doing SEO full-time for SEOmoz I’d like to think I’d have better resources for keyphrase research, ranking data and visitor numbers, but Google’s Webmaster Tools search queries report can be a nice quick way to identify keyphrase opportunities for a site you’re less familiar with, or for which you don’t have these things set up.



SEOmoz Daily SEO Blog

When the iPad was announced in January, it was the very first time Apple created something I immediately wanted. I’ve never been an Apple fanboy; I don’t work on Macs for either my primary workstations or my laptop – I use them for testing and ensuring compatibility only. But the iPad was different. I immediately saw its purpose as a consumption device and its promise as a productivity tool.

The naysayers who tagged the iPad as “the iFail” seemed to focus on how it fails as a laptop or netbook, and how it lacks the features expected of those devices. While there are definite limitations to what you can do, and to how easily certain tasks can be accomplished, I gave Apple the benefit of the doubt, based on their track record and knowing that this is a first-generation device. As Warner Music Group CEO Edgar Bronfman noted, “No one’s got rich betting against Steve Jobs.”

Having never purchased a Kindle or netbook, I was looking forward to the 3G iPad enabling me to essentially ditch my laptop while traveling and be a complete solution for my media consumption needs (reading, music, movies). Most of my colleagues derided the “working exclusively from the iPad” notion as wishful thinking and told me to be sure to pack the laptop as well. After using the iPad for over a month now, I’m looking forward to proving them wrong this week as I thoroughly road test it during SMX Advanced. I’ve spent the last few weeks assembling the various tools (apps) that I’ll need, setting up accounts, passwords, etc., and can honestly say that, with the benefit of the cloud, there’s nothing I’d need to take care of while on the road that I can’t do on the iPad.

Most of the apps that I use or need relate to development – managing sites, updating sites, and keeping the trains running on time. To that end, these are the tools I employ on the iPad:

The Tools

The lack of multitasking (for now) on the iPad means accomplishing some tasks can be a little tricky or, at best, not intuitive. One such task involves taking an attachment from an email and getting it onto a server to use in a post, on a page, or otherwise link to. To accomplish this you really just need one $0.99 app – GoodReader.

Step-by-Step: How To FTP Email Attachments on the iPad

  1. Download the GoodReader app
  2. Configure your email account(s) under “Connect to Servers” in the right pane
  3. Configure your FTP account(s) under “Connect to Servers”
  4. Connect to the email account with the attachment by selecting that account under “Connect to Servers”
  5. Find the attachment you need (GoodReader will automatically poll your account for any messages with attachments) and download it by selecting it in the email window
  6. Close the email dialog box – the downloaded file should now appear under “My Documents” in the left pane
  7. Click “Manage Files” in the right pane
  8. Select the attachment in the left pane, then in the right pane, click “copy”
  9. Click “Connect to Servers” and select the FTP server
  10. In the pop up window for the now-connected server, navigate to the folder you want to upload the file to
  11. Click “Paste” (button in lower right corner of server window)
  12. …and you’re done!




How to FTP Files From Email Using an iPad




Michael Gray – Graywolf’s SEO Blog

For designers and developers, CSS has long been a powerful part of the web design process, but with the emergence of CSS3 and the growth of both the spec and the number of browsers that support it, amazing options are being opened up for designers.

It’s now possible to do all sorts of cool things just using CSS — even animation. Although not all of the effects work across browsers (even browsers that have CSS3 support) and some things are more for show and tell than actual use, there is seemingly no shortage of cool things you can do, just with the stylesheet.

Here are seven of our favorite CSS3 techniques that have a real-world use and that you can start experimenting with right now.



1. Awesome CSS3 Buttons


One of the coolest things about CSS3 is that you can create scalable elements, like buttons, without having to create brand new images every time! ZURB highlights how to make some super excellent CSS3 buttons, complete with RGBA for awesome shading.
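The flavor of the technique, for reference (illustrative values, not ZURB’s exact code) – layered rgba() highlights and shadows mean the button scales to any size or color without cutting new images:

.awesome-button {
  background-color: #4a90d9;
  color: #fff;
  padding: 8px 16px;
  border-radius: 14px;
  border: 1px solid rgba(0, 0, 0, 0.25);               /* darkened edge */
  text-shadow: 0 -1px 1px rgba(0, 0, 0, 0.3);          /* inset label */
  box-shadow: inset 0 1px 0 rgba(255, 255, 255, 0.4);  /* top highlight */
}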

Also check out ZURB’s Playground which has tons of really cool CSS3 examples and coding techniques.



2. CSS3 Bar Graph


Ben Lister details a bevy of practical CSS3 techniques for web designers, including this awesome image-less 3D bar chart. Using the -webkit-transform or -moz-transform calls and a designated skew, you can do some really impressive image-free effects completely in CSS!
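The 3D illusion comes from skewing plain divs – roughly like this (values illustrative, not Ben’s exact code):

.bar {
  width: 40px;
  background-color: #86c440;   /* the front face of the bar */
}
.bar .side {
  /* slant the side face to fake perspective – no images involved */
  -webkit-transform: skew(0deg, -45deg);
  -moz-transform: skew(0deg, -45deg);
}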



3. CSS3 Dropdown Menu


Nick La from WebDesignerWall shows off how to make a really great looking CSS3 dropdown menu that really pops off the page. He has instructions for both gradient-image and pure-CSS versions, so depending on your audience, you might be able to do the whole thing, gradient effects and all, completely in CSS.



4. CSS3 Speech Bubbles


Nicolas Gallagher has created some absolutely stellar pure CSS speech bubbles. These things not only look great, they can even be modified to be used as blockquote elements.
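The core trick, if you haven’t seen it: the bubble is a single element with rounded corners, and the pointer is a triangle built purely from borders on a generated pseudo-element. A simplified version of the idea (not Gallagher’s exact code):

.bubble {
  position: relative;
  padding: 15px;
  background: #5a8f00;
  color: #fff;
  border-radius: 10px;
}
.bubble:after {
  content: "";
  position: absolute;
  bottom: -15px;   /* hang the pointer below the box */
  left: 30px;
  border-width: 15px 15px 0;
  border-style: solid;
  /* only the top border is colored, which renders as a triangle */
  border-color: #5a8f00 transparent;
}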

If you want to see my favorite implementation, check out Gallagher’s demo page and look at the Twitter example. It seriously beats the pants off of Twitter’s official embed method.



5. CSS3 Border Radius Hover States


At Impressive Webs, Louis Lazaris put together a fantastic tutorial for recreating a really great looking effect he saw on Media Temple’s website. Using CSS3 border radius, he shows how you can create awesome looking hover effects that look really, really sharp.



6. CSS3 Box Shadow


Speaking of CSS3 borders, the gang at Carsonified has put together a really stellar video tutorial showing off some advanced CSS3 box shadow techniques for creating depth in your projects, just using CSS3.

Following the tutorial, you can see how to create that great letterpressed effect in elements of your design.
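The letterpressed look is mostly a single declaration – an inset shadow that darkens the top inner edge, as if the element were pressed into the page (a minimal sketch):

.pressed-panel {
  background: #e3e3e3;
  border-radius: 6px;
  /* inset shadow along the top inner edge creates the pressed-in depth */
  box-shadow: inset 0 2px 4px rgba(0, 0, 0, 0.35);
}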



7. CSS3 Photo Gallery


Apple just launched a new page showing off some of the cool things that HTML5 can do. Because HTML5 and CSS3 are often linked together, there is some overlap in techniques. One of my favorite demos was this Polaroid-style photo gallery/lightbox created using CSS3.

Some of the transitions and 3D effects won’t work in all browsers, but it’s a really cool example of some of the WebKit-specific CSS3 transitions that Chrome, Safari and other WebKit-based browsers can benefit from.



Your Thoughts


If you’re looking for more inspiration, keep an eye on Neven Mrgan’s HTML5 Watch blog. It’s a great way to find examples of CSS3 and HTML5 in the wild. What are some of your favorite examples of CSS3? Let us know!















Mashable!