Posted by fabioricotta
This post was originally in YOUmoz, and was promoted to the main blog because it provides great value and interest to our community. The author’s views are entirely his or her own and may not reflect the views of SEOmoz, Inc.
Hi SEOmoz folks,
Some weeks ago my coworker Leandro Riolino published on our blog an experiment he had been working on. The idea of the experiment was to link to a page A from a page B with 3 different anchor texts, and check whether all of those anchor texts passed value.
The idea is simple: we chose 3 random keywords, created an internal page, and created 3 links to different URLs, each of which has a canonical tag pointing to the main page. You can see this idea illustrated below:
So, after choosing the 3 keywords, we searched Google for each one to confirm there were no existing occurrences of them:
Then we bought a new domain with no backlinks, and as you can see below, Google shows us that this website isn't in the index:
Creating the Index Page
To start the experiment my coworker downloaded a random template from the Internet with some random content inside, changing only the page title, meta description and H1 tag, focusing all of them on the main website keyword "jogos online de corrida" ("online racing games" in English). The major change he made to the template was to add a PHP conditional that inserts the canonical tag if the requested URL had any parameter:
<?php if (isset($_GET['keyword'])) { ?>
<link rel="canonical" href="http://www.jogosonlinedecorrida.com.br" />
<?php } ?>
For those who know some PHP: this code checks whether the keyword parameter exists in $_GET. If it does, the code inserts the canonical tag line into the HTML.
It's important to say that we do not mention any of those 3 keywords on the Index Page. So, this page can't rank simply for containing a keyword; instead, Google needs to check its backlinks.
The next step was to create the internal page. We created it with 3 links in 3 different page positions: one in the header, another in the content area and the last in the footer, with the following anchor texts: "nanuoretfcvds ksabara1", "esjstisfdfkf aasjdkwer" and "gisrterssia fdswreasfs". Each link had a different target:
It's important to say that we used the meta tag <meta name="robots" content="noindex,follow" /> on this internal page, so the page itself would not rank for those 3 keywords.
Indexing the Content
In order to have the pages indexed by Google, my coworker created a sitemap.xml with the 2 pages (home and internal) and submitted it to Google Webmaster Tools. It is important to say that we did not share these pages on any webpage and did not submit them to any bookmarking service.
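The sitemap itself is tiny, just two URLs. As a rough sketch of building one programmatically (the internal page path below is a hypothetical placeholder, since the post doesn't give the real one):

```javascript
// A rough sketch of the two-URL sitemap submitted to Webmaster Tools.
// The internal page path is a hypothetical placeholder, not the real one.
function buildSitemap(urls) {
  const entries = urls
    .map((u) => "  <url><loc>" + u + "</loc></url>")
    .join("\n");
  return '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    entries + "\n</urlset>";
}

const sitemapXml = buildSitemap([
  "http://www.jogosonlinedecorrida.com.br/",
  "http://www.jogosonlinedecorrida.com.br/internal.php", // placeholder path
]);
console.log(sitemapXml);
```

Writing the file to the site root and submitting it in Webmaster Tools is all that's needed for a site this small.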
After 2 weeks, our website was showing the 2 pages when we used the "site:" operator. After one more week, Google was showing the 2 pages along with links to their cache.
After this waiting period we searched Google for the 3 keywords we created and noticed that the main page was appearing for ALL of them, as you can see below:
So, with this small experiment we noticed that Google credits a page with all 3 anchor text values when the canonical tag is used as a funnel.
Conclusions and Applications
With this small experiment we have a hint at how Google treats the anchor text of a page that uses the rel=canonical tag, and now we can try to create some new experiments (e.g., use a parameter in the logo link to your main page, and then receive the anchor text of the second link – because we know that only the first anchor text counts).
We know that this is a single experiment and we need to see if this works on a real website; since Google understands page segments, this may not work exactly as we presented in this article. We still need to test and verify this.
I can't end this article without congratulating my coworker Leandro, who provided me with a huge amount of knowledge through this experiment – thank you.
Hope you liked this article!
Posted by kieronhughes
This isn't a post about keyword research for video content on YouTube, but about exploring the ways in which YouTube can be used to find relevant data, information and keywords in topics where you have little understanding of the services and industry.
Now for an example…
You’ve just taken on the SEO contract for a private speech therapist based in the UK, and need to carry out industry and keyword research into the sector to best understand the opportunities available, and to structure their in-development website accordingly.
You don’t know much about speech therapy, but you’ve been given some information by your client and now you’re on the hunt for keywords. Heading over to the Google Adwords Keyword Tool, you put in "speech therapy" to see what suggestions/volumes are thrown up.
Now, although the results displayed by the Google Keyword Tool might be relevant to what you are looking for, they don't provide the bigger picture – and the bigger picture is what you should be after.
All of this information can no doubt be found by carrying out your own research on Google/your other favourite search engine, but it can still be difficult to sift through the results to find the correct information you are looking for… and this is where YouTube can really help.
Using YouTube as a Keyword Tool
Unlike creating a web page, uploading a video to YouTube is very accessible to anybody with a video file and an internet connection. The great advantage of the upload process is that Google prompts people to provide descriptive content about the video, such as explanatory text (description), a relevant title, and appropriate tags – so not only is it easier for the videos to be sorted, it means more data is available for us to mine.
A search on YouTube for "speech therapy" provides 3,870 results, and at random I chose a video recording of a speech therapy class.
Scrolling past the video to look at the related information provides a great initial insight:
From the above information we can get the following keywords:
Even with a somewhat basic knowledge of speech therapy it would have been difficult to know that "oral motor therapy" and "apraxia" were related to problems with speech – and it’s a great stepping stone in our keyword research.
Scrolling further down the page to analyse the comments, we see:
Which provides us with further keywords and opportunities:
Other comments on the page are equally as helpful, with examples such as "decreasing hyper sensitivity" being another useful research avenue.
Looking around YouTube, comments can actually be much more helpful than the published video data. Conversations often arise between people, and this is an essential place to look if you want to learn more.
Once you have taken the time to browse the videos, note down some of the related information and make a list of possible opportunities, you can go back to the Google Keyword Tool (or whatever tool you might be using) with a whole load of ammo for your next stage of research. The advantage of this second iteration is that you will also have used YouTube for what it is intended – to watch videos, and to find out more information about the services and conditions related to speech therapy (greater understanding of your client's business means you can ultimately do a better job for them).
Also note that you should research any of the phrases you find in more detail before optimising a website for them – more about this point at the end of the article.
From just the initial view carried out on YouTube above, we have gone from a sitemap looking similar to this:
To something more representative of this:
The above sitemap has been generated after looking at only one YouTube video on the subject – imagine the level of data you could get into if you carried out full research.
By following an iterative process of looking at YouTube, understanding the opportunities, and analysing the search volumes, you can begin to form a visual picture of how products and services are related – something that can then be presented back to the client for approval and additional ideas. Clients can often be too familiar with their business, and leave out a basic level of information when attempting to explain what it is they offer – and this is usually the most valuable detail from an SEO perspective.
Carrying out your own research is vital to covering all of the best angles when working with a new website, and you shouldn't just stop at the list of service offerings that the client provides you with. Take things one step further, and you'll no doubt find the website is in a greater position to dominate search visibility than some of the key competitors in the industry.
Use All of the Data Available
With people of all ages, backgrounds, specialties and even personal experiences uploading and contributing to YouTube, it really is a gold mine of information and can help a great deal with search marketing campaigns. If you're researching a particular service/product/industry, why wouldn't you use all of the information that is freely available? – especially when you have a user-updated resource such as YouTube at your fingertips.
The one point I'll leave on is that YouTube shouldn't be used for all of your research on a particular subject, as it is, after all, open to mislabelling, incorrect information and, of course, the efforts of fellow SEOs to promote video content.
I'm in no way affiliated with anyone in speech therapy, but I'm sure the children's communication charity I CAN would appreciate a donation if you're feeling generous: http://www.ican.org.uk/Support%20Us/Donate/Donate%20Now
Posted by MikeCP
Today I want to talk about tracking organic ranking in Google Analytics. Previously, we were able to determine the page from which a Google organic click was coming (detailed on the Distilled blog by Will Critchlow). This was nice because we could append this to the end of our keywords in Google Analytics for some interesting data (André Scholten's post at Yoast.com has a step by step) as seen below.
Image courtesy of Yoast.com
This solution provides limited utility, and if you’re like me, you implemented it, maybe checked it out once in a while, but never really turned this into actionable or otherwise meaningful data. I’m going to detail how rank tracking in Google Analytics can be made a lot more useful thanks to custom variables and a change in Google’s referring URLs. But first…
When Google began testing an AJAX search interface in early 2009 there was a flurry of concern that it could mean the end of traditional rank tracking, web analytics packages, and I’m sure someone said SEO was dead, too. The concern wasn’t without merit; Google was serving results in AJAX with the URL pattern as http://www.google.com/#q=keyword, and most analytics packages ignored the hash and everything after.
Fast forward to September 8th, when Google introduced Google Instant. AJAX SERP usage had been steadily increasing over time, but shot up when Instant was rolled out. Fortunately for Omniture, WebTrends, and other third-party analytics packages, Google worked out a way to pass referrer information from the SERPs; rank tracking still works, and I'm still working as an SEO in a living industry.
As it turns out, Google includes even more information in the AJAX SERPs than they previously did, including one really interesting parameter: "cd=". The cd= parameter contains the exact ranking position of the search listing, which makes for some really awesome possibilities, especially when paired with Google Analytics’ custom variables.
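To make the cd= idea concrete, here is a minimal JavaScript sketch of pulling the position out of a referrer string. The sample referrers are simplified stand-ins; real Google referrers carry many more parameters:

```javascript
// Extract the ranking position from a Google AJAX-SERP referrer.
// The sample URLs below are simplified stand-ins for real referrers.
function rankFromReferrer(referrer) {
  const match = /[?&#]cd=(\d+)/.exec(referrer || "");
  return match ? parseInt(match[1], 10) : null;
}

console.log(rankFromReferrer("http://www.google.com/url?sa=t&q=seo+tools&cd=4")); // 4
console.log(rankFromReferrer("http://www.google.com/#q=keyword")); // null
```

The null case matters: as covered later, not every visit carries a cd= value, so the tracking code has to handle its absence gracefully.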
Custom variables are a bit of an enigma to even advanced Analytics users. I'll admit that I never really made much use of them in the past. You'll often see examples where custom variables are used to track logged-in vs. logged-out users, which is definitely a great use. Rob Ousbey's 6 cool things YOU can do with custom variables is a good set of examples to get your feet wet.
In André Scholten's example above we're using Google Analytics' user defined value; isn't that just as good as a custom variable? Well, the difference depends on how you intend to use your data. With custom variables, you're granted much more flexibility within Google Analytics for slicing and dicing data. For instance, through the use of either custom reporting or advanced segments with custom variables, I can pretty easily track how much revenue a keyword has brought in when ranked in the 2nd position, as opposed to the 4th. While this may be possible with the user defined variable, it would require quite a bit of work after an Excel data dump.
Now, let’s get to business:
Getting this properly set up was remarkably easy for me, and I have very little programming knowledge, so I imagine most wouldn't have much trouble. I used PHP, as I was working with a WordPress site, but I'm sure you crazy hackers can do the same in most any language.
I used this snippet to do this.
<?php preg_match("/cd=(\d+)/", $_SERVER['HTTP_REFERER'], $matches); $rank = isset($matches[1]) ? $matches[1] : ''; ?>
Please don’t make fun of my hacky coding
This assigns the cd= value to the $rank variable. We’ll reference this in…
Now, we want to insert the custom variable call between the setAccount and trackPageview lines in our Analytics snippet (shown below using the asynchronous code):
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXX-X']);
_gaq.push(['_setCustomVar', 1, 'Google_Rank', '$rank', 2]);
_gaq.push(['_trackPageview']);
We’ve set the custom variable slot to 1, and the scope to the session-level (the last argument, set as 2). If you are already making use of custom variables, be sure to not overwrite a previously occupied slot. For more information on how the custom variable is formatted, see Google’s help page on the topic.
We only want to include this line when we have a cd= value, otherwise every new click will overwrite the last value. To do this, I used the following IF statement, again coded in PHP. This is the final step, and the complete Google Analytics snippet:
Here we're checking if $rank has a value. If it does, we include the custom variable call with that $rank value; if not, we print the Google Analytics code as normal. Also included in the above are some line breaks (\n) so that the code formats correctly.
What’s the point of going through all this effort if it doesn’t provide you with any analytical insight? None, of course. But this rank tracking solution has some added benefits over the traditional rank tracking software that may be really useful to some SEOs. These include:
Traditional rank tracking software suffers in that its ranking results are dependent on the location of the servers. With custom variable rank tracking and a little spreadsheet pivot table magic it’s pretty easy to get your site’s rank for any location.
Once this is properly set up you've got access to definite rankings within your Analytics data from that point on. So as holiday season 2011 rolls around, it's easy enough to review where your site ranked during the 2010 holidays, helping to set budgets, goals, and expectations.
Whatever your KPI, you can compare it against search ranking. Reporting the ROI of link building efforts or on site optimization becomes much easier when you’ve got rankings included in your dataset.
Some of the quick ideas I had around this include:
I should note that getting averages is a lot easier in Excel with a pivot table, as seen below:
This can also be adjusted to show your minimum rank, as well.
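For anyone who would rather script it than pivot it, the same average can be sketched in JavaScript; the keyword/rank rows below are invented sample data, not a real export:

```javascript
// Pivot-table-style aggregation: mean ranking per keyword from exported rows.
// The rows passed in below are invented sample data, not real export output.
function averageRankByKeyword(rows) {
  const sums = {};
  for (const row of rows) {
    if (!sums[row.keyword]) sums[row.keyword] = { total: 0, count: 0 };
    sums[row.keyword].total += row.rank;
    sums[row.keyword].count += 1;
  }
  const averages = {};
  for (const keyword of Object.keys(sums)) {
    averages[keyword] = sums[keyword].total / sums[keyword].count;
  }
  return averages;
}

const averages = averageRankByKeyword([
  { keyword: "seo tools", rank: 4 },
  { keyword: "seo tools", rank: 6 },
  { keyword: "seomoz", rank: 1 },
]);
// "seo tools" averages (4 + 6) / 2 = 5; "seomoz" stays at 1
console.log(averages);
```

Swapping the average for Math.min over the same grouping gives the minimum-rank variant mentioned above.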
Custom variables aren’t included in the default reports for Google Analytics, so unless you do all your work in Excel, you’ll probably want to create some custom reports or advanced segmentation to work with the data directly in Analytics.
Advanced segmentation is great for this data. Below is the function one would use to track rankings between 11 and 15, which might be strong candidates for on-page optimization that could provide the boost onto the first page:
You can apply this particular advanced segment with this link.
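The segment's matching rule boils down to a regular expression over the custom variable value. A sketch of the same logic, with hypothetical visit rows standing in for exported GA data:

```javascript
// Keep only visits whose Google_Rank custom variable is 11-15:
// page-two listings that a little on-page work might push onto page one.
// The visit rows are hypothetical sample data.
const visits = [
  { keyword: "seo tools", rank: "12" },
  { keyword: "seo", rank: "3" },
  { keyword: "link building", rank: "15" },
];

const pageTwoCandidates = visits.filter((v) => /^1[1-5]$/.test(v.rank));
console.log(pageTwoCandidates.map((v) => v.keyword)); // "seo tools", "link building"
```

The anchored pattern ^1[1-5]$ is the important part: without the anchors, a rank of "115" or "1" could match by accident.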
The most obvious downside is that you’re only receiving a ranking when a listing is being clicked on, so for very small sites there may be limited utility. Ranking data will be spotty past the 2nd page, as well.
Additionally, the AJAX SERPs are not being served to all users in all locations. Small sample size warning here, but I’m seeing about 40% of organic Google traffic coming from the AJAX SERPs (done through a simple calculation of visits with our custom variable divided by total Google organic visits over the same time period). Michael Whitaker is seeing this number over 50% in his data. This number is likely going to increase as Instant is rolled out further.
The #-pack local listings can really throw things off, too. If a particular query gets one of these at the start of the SERP, the cd= count continues after it:
Lastly, there does exist the possibility that Google discontinues its use of the cd= variable for whatever reason.
I hope some of you can make some good use out of this functionality. I’ve only had it installed on my sites for a short time, but I’ve definitely found it interesting to play around with. If you don’t already have Excellent Analytics installed in your Excel I would highly recommend doing so, even if you don’t implement this tracking, and especially if you do.
I’d like to thank Michael Whitaker of Monitus for his help. He’s been installing this setup for his clients for a bit now. Monitus offers proper eCommerce Google Analytics installation for Yahoo! stores, which is surprisingly difficult without Monitus.
If you’ve got any other ideas for working with this data, sound off in the comments or let me know on Twitter @MikeCP. Personally, I’m really excited to have this data rolling in and the possibilities are nearly endless. I’ll be sure to report any interesting ways to manipulate the data in future blog posts. Cheers!
Posted by caseyhen
Earlier this year, jtkaczuk wrote a YOUmoz post about "Using Twitter as a Sitemap". After reading it I began to think about the power of Twitter and whether using Twitter more can help indexation. Many Twitter users will tweet about new posts or products on their accounts, hoping to draw attention and links from their followers. What if this process could also help get more pages indexed, and indexed faster? I was surprised by the results of this quick little experiment that I threw together in a few months.
Posted by RobOusbey
Let’s start with a sneaky tactic.
I know that SEOmoz blog readers are an internet-savvy crowd, so many of you are probably familiar with the ‘browser history sniffing’ techniques that exist. (Bear with me, we’ll get to internet marketing advice in a moment.)
(Thanks for reading; you can follow me on Twitter: @RobOusbey, and I’m pleased to be speaking alongside some of the best SEO practitioners around at this year’s Pro Training Seminar – tickets are still available.)
Perhaps part of the “interesting data” Richard Rosenblatt was talking about was link anchor text on expired domains & cybersquatting efforts that he could redirect in bulk at high earning eHow pages.
Not to fear, Demand Media is a trusted Google partner, so the algorithm and engineers are prohibited from taking action against the same activity that would get your website removed from the search results.
Google's blind eye and double standards toward the large MFA spam sites are becoming such a big issue that it looks to be at the core of the marketing strategy for new search engines!
Posted by Tom_C
Wouldn’t it be great if you could somehow spot those SEO opportunities on your site which were low effort and high value? Well this post gives you a few ways you can do that! Sweet.
I’m going to be digging around in the recently released search queries report in Google Webmaster Tools:
The first thing we need to do is gather all the fruit (aka keyphrases). Within GWT, select search queries, select just "web" queries, and in this case I've selected "United States", since that's the main target market for SEOmoz. The more we can narrow this down, the better our data gets; if we leave image search and countries like Serbia in there, our data will be less accurate:
Once we have filtered the data we then want to download the data to Excel:
Once we have the data in Excel we can do some monkeying around to get some meaningful insights. When you download the data you will be presented with a lot of dummy data like this:
So I run a find and replace on the following two items:
Be sure to only run these over columns B,C,D to avoid stripping out anything from your queries column in A!
Now, how exactly you want to slice the data will depend on the number of impressions and the number of keyphrases. Analysing the data for SEOmoz, I found that selecting all avg positions not equal to 1 and impressions over 200 gave me a sample set of 97 keyphrases, which wouldn't take very long at all to whiz through. If you have more time or more keyphrases you might want a longer or shorter list.
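The same filter is easy to express in code. A JavaScript sketch, with invented rows standing in for the downloaded GWT export:

```javascript
// Select "low hanging fruit": queries not already ranking #1
// that still get meaningful impressions (over 200).
// The rows are invented stand-ins for a GWT search queries export.
const rows = [
  { query: "seo", impressions: 49500, avgPosition: 9.4 },
  { query: "seomoz", impressions: 60500, avgPosition: 1 },
  { query: "free seo tools", impressions: 480, avgPosition: 4.2 },
  { query: "rare phrase", impressions: 110, avgPosition: 19 },
];

const lowHangingFruit = rows.filter(
  (r) => r.avgPosition !== 1 && r.impressions > 200
);
console.log(lowHangingFruit.map((r) => r.query)); // "seo", "free seo tools"
```

The two thresholds are the knobs to turn: loosen the impressions cutoff for smaller sites, or tighten it when the export runs to thousands of queries.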
As I mentioned this gives me a list of 97 keyphrases for the SEOmoz site. Let’s take a look at what some of those opportunities are!
In this post I not only wanted to show you how to get the data but also to give you an insight into what kinds of actions you could take and what sorts of keyphrases you might look at so I ran the above process for the SEOmoz site and found the following low hanging fruit. Bear in mind that there weren’t that many really easy wins in the data since SEOmoz generally has fairly good SEO (unsurprisingly!). Still, it gives you an idea of the thought process.
Keyphrase: SEO | Ranking: 9.4 | Impressions: 49,500 | Clicks: 590
Ranking URL: http://www.seomoz.org
Now, I've shot myself in the foot a little here by picking a keyphrase which isn't really low hanging fruit; it's actually a highly competitive keyphrase! That said, it's useful information to have. Without having rank tracking set up for the site, it instantly tells me that SEOmoz is highly competitive for this term, especially as some of the sites that rank above them are Google and Wikipedia. Still, there's room for improvement. Maybe time for some more badge-bait, Rand?!
Keyphrase: Social media marketing | Ranking: 7.9 | Impressions: 8,100 | Clicks: 320
Again, this is a highly competitive keyphrase, but one for which I feel SEOmoz could perform better. The current ranking is working OK for them, but things could certainly improve dramatically if they could shift the ranking from 7.9 into the top 3. Digging around, we see that the page is linked internally from every page in the navigation with good anchor text, and it has a total of 255 root domains linking to it, so there's clearly a fair amount of work already done here. That said, I feel like there's an opportunity waiting, since SEOmoz links out to lots of other sites from this page, and most of those blogs would likely link back to the SEOmoz guide if they received a nicely written email. Incidentally, if you're looking for a link from SEOmoz and have a top notch guide for one of the areas which doesn't have any resources attached, then get in touch! So long as you link back to the page.
Also, BONUS TIP – while researching backlinks in this space I stumbled across the fact that Amazon authors can get links from Amazon.com: check out Darren Rowse on Amazon and then compare to Rand Fishkin on Amazon, and you'll see that Rand has missed an opportunity to get blog posts imported and hence get clean followed links from Amazon. Sweet!
Keyphrase: What is seo | Ranking: 3.9 | Impressions: 1,900 | Clicks: 210
Ranking URL: http://guides.seomoz.org/beginners-guide-to-search-engine-optimization (Note here that this URL isn’t reported in GWT, it’s the old beginner’s guide URL which now redirects but the same keyphrase stands).
Here, I see the answer being a little easier than the above keyphrases. The term is less competitive and the title of the page doesn’t even mention "what is seo"! My actions would be to reword the title tag to be "What is SEO? The Free Beginner’s Guide to SEO from SEOmoz" and to mention "What is SEO" on the page at least once (currently it only mentions "what is search engine optimisation" and although Google knows they’re the same phrase I’d still like to see the exact phrase on the page somewhere). Also, there is no navigation link on the site to the beginner’s guide so slipping a few links into the next few blog posts with the anchor text "what is seo" will help boost the rankings for that phrase.
Keyphrase: Free seo tools | Ranking: 4.2 | Impressions: 480 | Clicks: 73
Ranking URL: http://www.seomoz.org/tools
The term "seo tools" is fiercely competitive but the "free seo tools" term seems like it would be a lot easier to go after, in fact SEOmoz is one of only 2 of the top 10 results which doesn’t mention the term free in the title tag of the page. This could be rectified easily and in addition to that the page doesn’t even mention "free seo tools" on the page. Personally, since this is something people search for I’d be tempted to re-des
ign the page to add a "Free SEO Tools" sub-header to differentiate between the free and PRO tools. That way it’s a good user experience and also gets the phrase on the page.
Keyphrase: Keyword research | Ranking: 19 | Impressions: 110 | Clicks: 12
Note here that the impression numbers are so low because the page is ranking on the 2nd page. Not having a page ranking in the top 10 here is a mistake for SEOmoz, I think (sorry, I mean opportunity!). The correct page is clearly the page on keyphrase research from the new beginner's guide, and the best way to make that page rank is to throw some more internal links at it. Currently there are basically no internal links to that page except from other beginner's guide pages. Linking to it from elsewhere should be able to get that page ranking. One idea for getting internal links to pages of the beginner's guide would be to automatically link to the keyword research page from any blog post in the keyphrase research category. That way the individual pages of the beginner's guide would essentially get more deep internal links.
A warning here that I'm still not sure how much I trust this impression and click data. I wrote a post over on Distilled about how the reported numbers are way out from analytics numbers. To be honest, if I was doing SEO full-time for SEOmoz I'd like to think I'd have better resources for keyphrase research, ranking data and visitor numbers, but using Google's Webmaster Tools search queries report can be a nice quick way to identify keyphrase opportunities for a site which you're less familiar with, or for which you don't have these things set up.
When the iPad was announced in January, it was the very first time Apple created something that I immediately wanted. I've never been an Apple fanboy; I don't use Macs for either my primary workstations or laptop – I use them for testing and ensuring compatibility only. But the iPad was different. I immediately saw its purpose as a consumption device and its promise as a productivity tool.
The naysayers who tagged the iPad as "the iFail" seemed to focus on how it fails as a laptop or netbook, and how it lacks the features expected of those devices. While there are definitely limitations to what you can do, and how easily certain tasks can be accomplished, I gave Apple the benefit of the doubt, based on their track record and knowing that this is a first generation device. As Warner Music Group CEO Edgar Bronfman noted, "No one's got rich betting against Steve Jobs."
Having never purchased a Kindle or netbook, I was looking forward to the 3G iPad enabling me to essentially ditch my laptop while traveling, and be a complete solution for any media (reading, music, movies) consumption needs. Most of my colleagues derided the “working exclusively from the iPad” notion as wishful thinking, and told me to be sure to pack the laptop as well. After using the iPad for over a month now, I’m looking forward to proving them wrong this week, as I thoroughly road test it during SMX Advanced. I’ve spent the last few weeks assembling the various tools (apps) that I’ll need, setting up accounts, passwords, etc. and can honestly say, with the benefit of the cloud, there’s nothing that I’d need to take care of while on the road, that I can’t do on the iPad.
Most of the apps that I use or need, relate to development – managing sites, updating sites, and keeping the trains running on time. To that end, these are the tools I employ on the iPad:
The lack of multitasking (for now) on the iPad means accomplishing some tasks can be a little tricky, or at best not intuitive. One such task involves taking an attachment from an email and getting it onto a server to use in a post, on a page, or otherwise link to. To accomplish this you really just need one $0.99 app – GoodReader.