According to Matt Cutts, speaking at a recent PubCon, Google will be looking at why exact domain matches rank so well. For example, if you have a site at blue-widgets.com it may rank a bit too well for the keyword phrase [blue widgets].
Don’t Google know?
More likely, Matt simply would not commit to a statement one way or the other. “Yes, exact match domains rank better!” is not something Matt is likely to say.
Secondly, there is the implication that exact match domains are a problem.
Exact match domain names, as the name suggests, are domain names that match the search keyword term, e.g. Hotels.com, shoes.net, planetickets.org, etc.
Is it a good idea to adopt this strategy for SEO? Ask ten different SEOs and you’ll likely get ten different answers.
On the plus side, an exact match may help you target one, specific keyword phrase. Your link text and domain name match up naturally. The domain name will likely be highlighted in Google’s search results, thus giving the listing more visibility. There may be ranking advantages, depending on who you ask.
On the negative side, an exact match only “helps” you target one keyword. It may be too generic for wider applications, such as brand building. Exact match domains may be over-hyped, and not worth a premium. There are, after all, many domains ranking #1 that aren’t exact match, so it is debatable how much SEO advantage they actually provide, particularly as Google keeps pushing brand.
So why would Matt imply exact match domain names might be a problem?
It is understandable that some in the SEO community – perhaps an SEO working on client sites, or those who don’t own any exact match domains and see others ranking above them – would have a vested interest in making a noise about the competition. If webmasters make enough noise about it, then Matt Cutts may feel a need to respond.
The supposed ranking power of exact match is probably a red herring. The problem Google may be hinting at is that exact match may be more likely to be involved with spam, thin affiliate, or other low value content than other types of domains. In other words, it becomes a quality signal.
If that is the case – and I’m not saying it is – then that may be the reason Google would look closer at exact match domains, not the fact that a domain matching a keyword is somehow evil.
Because it isn’t.
There is nothing wrong with owning an exact match domain.
Aaron covered this question in an earlier post, Why Exact Match Domains Aren’t As Important As Many SEOs Believe.
In summary, it depends.
It comes down to business fundamentals. If you’re trying to build a unique brand, and the resulting keyword stream, then an exact match domain name will be a hindrance rather than a help. You’ll forever be competing with generic search traffic. Keyword domain names aren’t particularly memorable.
The premium that an exact-match domain name commands, when sold on the after-market, may not be worth it. You don’t need an exact-match domain name to rank well, so the money may be better spent getting a new domain name to rank. Or, alternatively, you could buy an existing site that already ranks well for your keyword, and others, for similar money as an inflated exact match domain.
Finally, if you’re competing with a clear market leader, then generic isn’t going to help you much, i.e. owning searchengine.com isn’t going to make Google lose any sleep. You may also be overlooking an opportunity to differentiate your offering against the market leader in terms of brand. Think Blekko vs searchengine.com.
Interesting little snippet from Mr Cutts:
“Matt recommends SEOs do not “chase the algorithm” and instead try to predict where Google will be going in the future”. Matt was addressing PubCon.
Good advice, methinks.
Trying to predict where Google is going is something we do a lot of at SEOBook.com. Whilst no one has a crystal ball, it’s good practice to keep one eye on the search horizon.
So, where do we think Google might be heading?
Easy one, huh.
Google’s biggest competitors appear clueless when it comes to search. Bing may make some inroads. Maybe. But it’s hard to imagine anyone eating Google’s lunch in search for many years to come.
Is Facebook a threat? I doubt it. Search is difficult, and I can see no reason why Facebook – which has a media focus – could own the search channel any more than Yahoo could.
Search is, after all, an infrastructure problem. Google’s infrastructure would be very difficult to replicate.
A search result set only really contains spam if Google’s users think it contains spam, i.e. they don’t see the answer they were expecting.
The fact a website may fall outside Google’s guidelines might get competing webmasters’ knickers in a knot, but it probably doesn’t matter that much to Google, or anyone else.
Even though Matt Cutts says Google will devote more resources to this, I suspect Google’s efforts will largely remain focused on outright deception i.e. misrepresentation, hijacking and malware.
We can forget the San Fran techno-hippy ethos of the web. It will not be a free-for-all democracy, if it ever was. History shows us that power tries to centralize control in order to maintain it.
Google may try to keep users on Google for longer. They do this by owning more and more verticals, and extracting data and reformatting it. When they send visitors away from Google, they’ll try to do so more and more on their own terms. Watch very carefully what type of sites Google rewards, as opposed to what they may say they reward.
Expect less competition in the market as a result. Some people are already getting angry about it.
Google follows users. So does Facebook. Anywhere your users are, you’ve got to be there, too. On Google Maps. On YouTube. Wherever and whenever. Think beyond your website. Think in terms of getting your data out there.
As Rich Skrenta pointed out in a recent interview:
Social media can drive tons of attention, awareness and traffic. But the search box is the best way to navigate to stuff you want. Now what will drive those results – if I type in “pizza”, what should I get? The answer can be very different depending on whether the results are coming from the web, Yelp, or Facebook. So I guess my answer is that I still see search being the core way to navigate, but I think what gets searched is going to get a lot more structured and move away from simple keyword matches against unstructured web pages.
Microsoft Research found that people tend to organize their memories in geographic terms i.e. where they were when something happened.
If you want to know where Google is heading, then watch Marissa Mayer. Marissa has been responsible for much of what you see in Google in terms of how it is organized. She has just moved to head of Geographic and Location Services.
Google Earth. Google Maps. Google Local. Google Street View. Mobile location data and targeting. Expect more data to be organized around locality.
“…but this changes everything…”
SEO hasn’t changed all that much in years. We still find an audience (keyword research), we publish content, we build links to the content, and then we repeat it all over again.
The changes come around the edges, especially for big companies like Google. There is a lot of risk to Google in making radical changes. Shareholders don’t like it. Why risk breaking something that makes so much money, and is so popular?
The biggest changes in the way we do things on the web are probably going to come from the upstarts. They’re probably hard at work in their garage right now.
Posted by Paddy_Moogan
There has been quite a lot of discussion lately about the use of rel=canonical and we’ve certainly seen a decent amount of Q&A from SEOmoz members on the subject. Dr. Pete of course blogged about his rel=canonical experiment which had somewhat interesting results and Lindsay wrote a great guide to rel=canonical. Additionally, a few common questions keep coming up, mostly variations on the same theme: when should you use a 301 redirect, and when should you use rel=canonical?
I’m going to attempt to answer these questions here.
A 301 redirect is designed to help users and search engines find pieces of content that have moved to a new URL. Adding a 301 redirect means that the content of the page has permanently moved somewhere else.
Users will probably never notice that the URL redirects to a new one unless they spot the change in URL in their browser. Even if they do spot it, as long as the content is still what they were originally looking for, they’re unlikely to be affected. So in terms of keeping visitors happy, 301 redirects are fine as long as you are redirecting to a URL which doesn’t confuse them.
In theory, if a search engine finds a URL with a 301 redirect on it, they will follow the redirect to the new URL then de-index the old URL. They should also pass across any existing link juice to the new URL, although they probably will not pass 100% of the link juice or the anchor text. Google have said that a 301 can pass anchor text, but they don’t guarantee it.
In theory a search engine should also remove the old page from their index so that their users can’t find it. This can take a little bit of time, but usually no longer than a few weeks. I’ve seen pages removed within a few days for some clients, but it’s never set in stone.
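For anyone handling redirects at the application level rather than in server config, here is a minimal sketch of issuing a 301 in Python with Flask. The paths are hypothetical, and on most sites the same thing is done in Apache or nginx configuration instead:

```python
from flask import Flask, redirect

app = Flask(__name__)

# Hypothetical example: content has permanently moved from /old-page to /new-page.
@app.route("/old-page")
def old_page():
    # code=301 marks the move as permanent, so search engines should
    # de-index the old URL and pass (most of the) link juice to the new one.
    return redirect("/new-page", code=301)

@app.route("/new-page")
def new_page():
    return "The content now lives here."
```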
Where it can go wrong
Not knowing your 301s from your 302s
The classic one, which I’ve seen more than once, is developers getting mixed up and using a 302 redirect instead. A 302 is meant to be used when content is temporarily moved somewhere else, so the link juice and anchor text are unlikely to be passed across. I highlighted an example of this in a previous blog post: if you go to http://www.dcsf.gov.uk/ you’ll see a 302 is used. I first spotted this several months ago and it still hasn’t been fixed, so I’d assume it isn’t a genuine temporary redirect.
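If you want to check what a server actually returns, rather than trusting the CMS, a couple of lines of Python with the requests library will show the real status code. The URL here is just the example from the paragraph above:

```python
import requests

# Fetch the URL without following redirects, so we can inspect
# the status code the server actually sends.
resp = requests.get("http://www.dcsf.gov.uk/", allow_redirects=False)

print(resp.status_code)               # 301 = permanent, 302 = temporary
print(resp.headers.get("Location"))   # where the redirect points
```

A 301 here would pass link juice and anchor text along; the 302 the site actually returns most likely does not.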
Redirecting all pages in one go to a single URL
Another common mistake I see involves site migration. Say your website has 500 pages which are moving somewhere else. You should really put 301 redirects on all 500 pages, each pointing to the most relevant page on the new site. However, I’ve often seen people redirect all 500 pages to a single URL, usually the homepage. Although the intention may not be manipulative, there have been cases of people doing this to try and consolidate all the link juice from loads of pages into one page, to make that page stronger. This can sometimes put up a flag to Google, who may come and take a closer look at what’s going on.
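As a rough sketch of the per-page approach (the URL mapping is entirely made up), each old URL gets its own 301 to the most relevant new page rather than everything collapsing onto the homepage:

```python
from flask import Flask, abort, redirect

app = Flask(__name__)

# Hypothetical mapping built during the migration: every old URL points
# at the most relevant page on the new site, not just the homepage.
REDIRECT_MAP = {
    "old-about": "/company/about",
    "old-widgets": "/products/widgets",
    "old-contact": "/company/contact",
}

@app.route("/<path:old_path>")
def migrated(old_path):
    new_url = REDIRECT_MAP.get(old_path)
    if new_url:
        return redirect(new_url, code=301)
    # Pages that are genuinely gone should 404 (or 410),
    # not get funnelled into a single URL.
    abort(404)
```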
Matt Cutts talks about this in a Webmaster Tools video.
You should certainly use 301 redirects if you are moving your website to a new location or changing your URLs to a new structure. In this situation, you don’t want users or search engines to see the old site, especially if the move is happening because of a new design or structural changes. Google give clear guidelines on this and advise the use of 301s in this situation.
You should also use a 301 if you have expired content on your website, such as old terms and conditions, old products or news items which are no longer relevant and of no use to your users. There are a few things to bear in mind, though, when removing old content from your website.
Multiple Versions of the Homepage
This is another common mistake. Depending on how the site has been built, a homepage could potentially be accessed through several URLs, for example variants like these:

http://seomoz.org/
http://www.seomoz.org/index.html
http://www.seomoz.org/default.aspx
If the homepage can be accessed via these types of URLs, they should all 301 to the correct URL, which in this case would be www.seomoz.org.
Quick caveat – the only exception would be if these multiple versions of the homepage served a unique purpose, such as being shown to users who are logged in or who have cookies dropped. In that case, you’d be better off using rel=canonical instead of a 301.
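As a sketch of that homepage normalization (the hostname and framework are my assumptions, not from the post), a small before-request hook can 301 the non-canonical variants onto one URL:

```python
from flask import Flask, redirect, request

app = Flask(__name__)

CANONICAL_HOST = "www.example.com"  # hypothetical canonical hostname

@app.before_request
def normalize_homepage():
    # Send bare-domain requests (example.com) to the www host.
    if request.host != CANONICAL_HOST:
        return redirect(
            f"http://{CANONICAL_HOST}{request.full_path.rstrip('?')}", code=301
        )
    # Collapse index-page aliases onto the root URL.
    if request.path in ("/index.html", "/index.php", "/default.aspx"):
        return redirect(f"http://{CANONICAL_HOST}/", code=301)
```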
This is a relatively new tool for SEOs to use – it was first announced back in February 2009. Wow, was it really that long ago?!
As I mentioned above, we get a lot of Q&A around the canonical tag and I can see why. We’ve had some horror stories of people putting the canonical tag on all their pages pointing to their homepage (like Dr. Pete did) and Google aggressively took notice of it and de-indexed most of the site. This is surprising, as Google say they may take notice of the tag but make no promises. However, experience has shown that they take notice of it most of the time – sometimes despite pages not being duplicates, which was the whole point of the tag!
Where 301s may not be possible
There are unfortunate situations where the implementation of 301 redirects can be very tricky: perhaps the developers of the site don’t know how to do it (I’ve seen this), perhaps they just don’t like you, perhaps the CMS doesn’t let you do it. Either way, this situation does happen. Technically, a rel=canonical tag is a bit easier to implement as it doesn’t involve doing anything server side. It’s just a case of editing the <head> section of a page.
Rand illustrated this quite well in a diagram in his very first post on rel=canonical.
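To make the “just edit the <head>” point concrete, here is a minimal sketch in Python using Flask with an inline Jinja2 template; the route, page, and canonical URL are all hypothetical:

```python
from flask import Flask, render_template_string

app = Flask(__name__)

# The canonical link element lives in <head>, so no server-side
# redirect logic is needed, which is why it is often easier to ship.
PAGE = """
<html>
  <head>
    <link rel="canonical" href="{{ canonical_url }}" />
    <title>Blue Widgets</title>
  </head>
  <body>Blue widgets content here.</body>
</html>
"""

@app.route("/widgets")
def widgets():
    # Every variant of this page declares /widgets as the canonical version.
    return render_template_string(PAGE, canonical_url="http://www.example.com/widgets")
```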
Multiple Ways of Navigating to a Page
This is a common problem on large ecommerce websites. Some categories and sub-categories can be combined in the URL; for example, you could have (hypothetical URLs):

www.example.com/shoes/nike/
www.example.com/nike/shoes/
In theory, both of these URLs could return the same set of results, and therefore search engines would see duplicate pages. A 301 wouldn’t be appropriate, as you’d want to keep the URL in the same format the visitor navigated to. Therefore a rel=canonical pointing at your preferred version would work fine in this situation.
Again, if this situation can be avoided in the first place, then that’s the ideal solution, as opposed to using the canonical tag.
When dynamic URLs are generated on the fly
By this I mean URLs which tend to be database driven and can vary depending on how the user navigates through the site. The classic example is session IDs, which are different every time for every user, so it isn’t practical to add a 301 for each of them. Another example could be if you add tracking code to the end of URLs to measure paths to certain URLs or clicks on certain links, such as (hypothetical example): www.example.com/product?utm_source=newsletter&sessionid=abc123
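One common way to handle this (a sketch; the parameter names are assumptions, so adjust the list to whatever your site actually appends) is to compute the canonical URL by stripping session and tracking parameters before rendering the tag:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Hypothetical set of parameters that never change the page content.
TRACKING_PARAMS = {"sessionid", "sid", "utm_source", "utm_medium", "utm_campaign", "ref"}

def canonical_url(url: str) -> str:
    """Strip tracking/session parameters so every variant shares one canonical URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k.lower() not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

print(canonical_url("http://www.example.com/widgets?utm_source=email&sessionid=abc123"))
# -> http://www.example.com/widgets
```

Every session or tracking variant then points rel=canonical at the same clean URL, which is exactly the situation the tag was designed for.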
On New Websites
I’ve seen a few instances where rel=canonical is being used on brand new websites – this is NOT what the tag was designed for. If you are in the fortunate position of helping out with the structure of a new website, take the chance to make sure you avoid situations where you could get duplicate content. Ensure they don’t happen right from the start, and there should be no need for the rel=canonical tag.
On Pagination – maybe! At least use with caution
Having said that, John Mu has made a point of not ruling it out totally. He just advises caution, which should be the case for any implementation of the canonical tag really – unless you’re Dr. Pete!
Across your entire site to one page
Just a quick note on this one, as it is one way the rel=canonical tag can actively hurt you. As I’ve mentioned above, Dr. Pete did this as an experiment and killed most of his site. He set the rel=canonical tag across his entire site pointing back to his homepage, and Google de-indexed a large chunk of his website as a result. The traffic snapshot from Google Analytics in his post pretty much sums up the effect.
In summary, you should use caution with both 301s and the canonical tag. These types of changes have the potential to go wrong if you don’t do them right, and can hurt your website. If you’re not 100% confident, do some testing on a small set of URLs first and see what happens. If everything looks OK, roll out the changes slowly across the rest of the site.
In terms of choosing the best method, it’s best to bear in mind what you want users to see. Then think about the search engines: which content do you want them to index, and which pages do you want authority and link juice passed to?
While a lot of the attention on AT&T and Apple in the past few weeks has been focused on the release of the iPad and new iPhone, the elimination of unlimited data plans is an equally important development, especially for website owners and publishers.
In prior years, AT&T offered unlimited monthly data plans; however, they maintained that a small percentage of users were using a disproportionate share of data. To compensate, they announced two new data plans and eliminated the unlimited plan. As I understand it, existing customers are grandfathered in until they renew; upon renewal, they have to choose one of the new plans. Engadget has an excellent breakdown of the details of the plans.
So what does this mean to website owners and publishers? IMHO if you are a publisher, you really need to evaluate your use of rich media and use of a mobile version of your site. If you think that AT&T dropping the unlimited plan is an aberration, you might want to reconsider that position. While free wifi may be on the rise, it’s not as ubiquitous as many in the valley would have you believe. I can find open free hot spots if I really need one, but it isn’t easy. So it’s not unreasonable to expect consumers to start being more conscious of their data use. Additionally, while smart phones and devices like the iPad, Blackberry, or Android can handle some rich media, studies have shown that many users prefer “lite” or mobile websites when on these devices.
From an SEO perspective, creating a mobile website has a few pitfalls to watch out for. In my experience, it’s best to avoid using a separate subdomain or subfolder for a mobile version; instead, you want to serve a different CSS version or serve modified content based on mobile user agents. Again this strategy is tricky if you don’t want to look like you are cloaking; however, as long as you serve the same content to Google’s mobile crawler as you do to mobile browsers, you will be fine (for more info, see this post from Google’s webmaster central team).
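A sketch of that user-agent approach in Python with Flask (the user-agent substrings are illustrative only, and I’m assuming desktop and mobile templates already exist): serve the same underlying content to Googlebot-Mobile that mobile browsers get, and send a Vary header so caches keep the versions separate:

```python
from flask import Flask, render_template, request

app = Flask(__name__)

# Illustrative substrings only; real detection lists are much longer.
MOBILE_AGENTS = ("iphone", "ipad", "android", "blackberry", "googlebot-mobile")

def is_mobile() -> bool:
    ua = request.headers.get("User-Agent", "").lower()
    return any(token in ua for token in MOBILE_AGENTS)

@app.route("/")
def home():
    # Same underlying content either way; only the presentation differs,
    # which keeps this on the right side of the cloaking guidelines.
    template = "home_mobile.html" if is_mobile() else "home.html"
    resp = app.make_response(render_template(template))
    resp.headers["Vary"] = "User-Agent"  # tell caches the response varies by UA
    return resp
```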
While using WordPress as a CMS has its issues, this is one area where it works to your advantage: there are multiple plugins to help you address the problem. I use WP Touch, but you can also use WP Mobile. I’m sure there are other plugins or adapters for other CMS systems. Make sure they can handle mission-critical functions like shopping and ordering. In the month I’ve owned my iPad, I’ve made a dozen purchases from it, which I suspect reflects a growing trend.
To wrap up, here is what I would concentrate on as a publisher: evaluate your use of rich media, get a mobile-friendly version of your site in place, and make sure mission-critical functions like shopping and ordering work on mobile devices.
This post originally came from Michael Gray, who is an SEO consultant.