For years Google has championed the concept of an open web. Some took it as an altruistic sign, while others thought it was a convenient angle to commoditize complementary business models.

Google pushed for net neutrality but made wireless connections an exception. Why would they do that? Could it be they are invested in disrupting that market elsewhere?

As Google started to reach the bulk of potential returns from direct response ads, they started to lean on brands as a signal of quality & recommend brands more in their search results. When you search for Amazon you might get 8 or 9 links from the official site & even on some generic keywords Google recommends associated brands.

When you think about what brand is, it is a mental shorthand for a concept. It leads to increased recall, fatter sustained profit margins, and thus the ability to spend more on marketing. If Google is to put more weight on reviews and look at sentiment analysis then of course that will benefit the larger players who invested into establishing positive associations, even at a young age. The results of such branding efforts are quite powerful.

And even more so if you don’t use them for evil, Pepsi! :D

In the past Google has positioned affiliates as evil (the body language says it all, IMHO), and Google’s remote quality rater documents provide further proof to anyone with lingering doubts.

As Google is becoming the affiliate they are getting direct signals into what consumers like most & are able to serve them a personalized recommendation engine. New extended ad formats & using location data will allow Google to further drive down the organic results.

Not only does Google sell CPA priced product ads on their search results, but they also allow your Google Base account to drive additional product links, which gives them over 150 million products to advertise. The name of the game is to give Google a bit more data to get a higher clickthrough rate & thus have a higher quality score & be enabled for additional profitable opportunities sold at below fair market rates. That seems like a free lunch and works great, up until the day Google decides to use the aggregate data to compete directly against you. ;)

Google now runs a thin affiliate site in Boutiques.com. Google’s ability to recommend consumption behaviors not only impacts ecommerce, but every type of media in the world. They control the ad rates of various advertisers & can create custom ad integration opportunities.

Youtube offers related videos, a never-ending personalized streaming service in LeanBack and ads which users can select from.

When Google started scanning books it was supposed to be for search, rather than to have ebooks for sale. A couple lawsuits later and today Google finally opened up their ebook marketplace.

One of the leading features of Google’s ‘open’ marketplace is DRM: “Publishers can choose whether or not to lock down their books with DRM. Google also says it will have a strict privacy policy that forbids it from using your book buying habits to advertise to, or profile readers.” If you are outside of the United States the store is simply unavailable. That same article states that “Google hopes to layer on social features into the service in the near-future and says the infrastructure is in place to let people buy both a digital and paper copy of a book in a bundle.”

Would that be Google moving from pushing bits & people to pushing physical products?

Google announced their copyright “improvements” in front of the Viacom vs Youtube copyright lawsuit appeal.

Meanwhile Google is the same company which published this & recommends keygens and serials when you search for a brand. Google promises to fix that latter issue – something that has taken them a few YEARS to do, even though they were blocking porn words (& other words that could have earned them negative press) much sooner.

In much the same way that Google has captured most of the revenue streams they will be able to with direct response ads, I think they realize that they will need to work better at managing property rights of big media & other publishers if they really want to drive brand advertising revenues. This will likely lead to a decline of the “anything goes” web.

If you think of the whole reason Google was so liberal in their approach to supporting (and even funding) copyright violation it was so that they could weaken the positions of the companies that hold those rights, such that Google can eventually negotiate a deal with them. But the main thing holding back Google music is that based on Google’s past performance the labels do not trust the idea of a digital music locker hosted by Google. After all, Google AdSense ads are what allow sites dedicated to downloading MP3s from Youtube to be monetized today.

Google offers promotional links on Youtube & knows how much money they are missing out on. Google’s boondoggle of using public relations to paint a clean show publicly while using legal loopholes to circumvent the intent of the law was good for getting them into a strong market position, but if they want to have a leadership position in more big media markets they will need to get buy in from established players.

Google wants to get big into television ads. And that is going to mean having better respect for copyright. To some degree, as we see the Google business model change, we will see their approach of “paying anyone to steal anything & wrap it in Google ads” (to soften up copyright) change to a model where they put themselves as a gatekeeper on DRM content & push the “official” sources of the media (and try to make a cut of the profits). Already on Youtube, if you view certain content from outside the United States, they will tell you that it is unavailable in your area.

Google’s first video store was a complete failure. Once again they are pushing hard on video with Google TV & by buying the DRM company Widevine. Big media companies have opted out of Google TV awaiting more favorable deals.

Part of such deals will ultimately rely on backroom payouts coupled with hard coded promotions. There will be a lot of collateral damage as entire categories become unprofitable. I think we are seeing the organic search results take a page from the ad book: pay to play.

Google’s old model of paying people to scrape content & wrap it in ads was leading to a market for lemons, driven by Gresham’s law. Much like how the most fraudulent banks could afford to buy out less fraudulent ones, and how Chinese milk cut with melamine was cheap enough to drive honest dairies into bankruptcy, the search results were suffering from the age of scrape/mash/publish. Given the surrounding economic forces crushing newspapers, Google was making things worse than they needed to be.

Those who are creating original high-quality content have real business costs. Google paying scraper sites like Mahalo and Ask to borrow your content & wrap it in ads means that you are sometimes getting outranked for scraped duplications of your own content. That drives down publisher margins and pushes marginally profitable publishers into losing money.

Google news has been described as a sewage factory plagued with nefarious players & is now undergoing clean up as well.

Slowly but surely the search results will fill up with official hotel sites, official music sources, official video sources, official ebook sources, etc etc etc … with Google putting a big foot on the gas & accepting whatever cut they can get. If they want to avoid regulatory scrutiny they need to work with the official sources (which are every bit as politically connected as Google is).

As that shift happens the longtail spam model will lose out on its profitability because it will be forced to compete with higher quality content that is automatically mixed into the search results. (The whole point of universal search was to allow Google to short cut certain types of information right into the core search results…as they start making money from micro-payments and such look for that trend to accelerate).

Ultimately what has doomed most portals was that they were willing to produce second-rate placeholder content that filled out a vertical and was ‘good enough.’ What makes Google so powerful with the stealth backdoor portal approach is that it allows them to mix in 3rd party content without being forced to either absorb the cost of creating it or produce low quality backfill on their own.

One area that is ripe for ad innovations is books:

I’m genuinely glad to have Google enter this market because it will be reaching potential customers at a unique point in their book-buying journey: at the point of web search, not at the point of searching the bookstore. This means many things you didn’t realize a book can help you with—overcoming depression, remodeling a bathroom, making friends and influencing people—will now be surfaced alongside all the YouTube and other results Google will offer. This is a net plus for books.

But the ultimate effect of Google e-books, if Google knows what’s good for it, will be the creation of an ad-supported publishing model.

Now that books are digital & Google has rights to sell them I would expect in the next year or 2 that Google starts to display them in the organic search results more aggressively. The free samples can be monetized via ads & upsells of the whole book. That endless stream of editorially vetted content could put a dent in the content farm business model.

SEO Book.com – Learn. Rank. Dominate.

Posted by randfish

First off, let me just say that there are a lot of people smarter and more experienced in scalably attacking web spam than I am working in the Search Quality division at Google and specifically on the Spam team. However, as a search enthusiast, a Google fan and an SEO, it seems to me that, with all due respect, they’re getting played – hard.

Word is, the Spam team’s key personnel had some time off working on other projects and supposedly they’re coming back together to renew the fight. I hope that’s the case because the uproar about black/gray hat SEO gaming the results is worse than ever, and deservedly so. It’s getting bad enough to where I actually worry that early adopters might stop using Google for commercial queries and start looking for alternatives because of how manipulative the top results feel. That behavior often trickles down over time.

Thus, I’m going to expound a bit on a tactic I discussed in my interview with Aaron for fighting what I see as a large part of the manipulation of results in Google – the abuse of anchor text rich links.

The basic problem is that if you want to rank well in Google for a high value, commercial search query like discount printer cartridges or home security camera systems, getting links with anchor text containing those words, preferably exact matches, is invaluable to rankings. Unfortunately, natural, editorially given links are extremely unlikely to use anchor text like that. They’re more likely to use the business or website name, possibly a single relevant word or two, but finding dozens or hundreds of domains that will link with this kind of anchor text without push-marketing intervention from an SEO is next to impossible.

That means sites that earn the natural, editorial links fall behind, while those who find ways to grab the anchor text match links and evade Google’s spam detection systems nab those top spots. It’s been going on for 10 years like this, and it’s insane. It needs to stop. Just as Google’s said they’ll be taking a hard look at exact match domain names, they need to take a hard look at precise matches for commercial anchor text links.

Here’s the methodology I like:

Step 1: Create a list of oft-spammed, commercially-directed anchor text. With Google’s resources, this won’t be hard at all. In fact, a good starting point might be one of the top AdSense keyword lists (this one was readily available).

Top AdSense Keywords
Just a sample of some of the 3,400+ phrases in one file I found

I suspect Google’s Webspam team would have no trouble compiling hundreds of thousands of phrases like this that have a high potential for gaming and are found in large quantities of anchor text links.

Step 2: Locate any page on the web containing 3+ links with any of these anchor text phrases linking to different sites. An obvious example might look something like this:

Anchor Text Match Links

But, any collection of exact-match anchor, followed links to pages on multiple domains could be flagged by the system.
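Step 2 can be sketched as a simple scan over a page’s extracted links: count followed, exact-match commercial anchors pointing at distinct domains and flag the page once a threshold is crossed. A minimal Python illustration – the phrase list, link tuples, and threshold of 3 are all invented for the example, not anything Google has confirmed using:

```python
from urllib.parse import urlparse

# Hypothetical list of oft-spammed commercial phrases (from step 1)
SPAMMY_PHRASES = {
    "discount printer cartridges",
    "home security camera systems",
    "car insurance quotes",
}

def flag_page(links, threshold=3):
    """links: list of (anchor_text, target_url, is_followed) tuples
    extracted from one page. Returns True when the page carries
    `threshold` or more followed, exact-match anchors pointing
    at distinct domains."""
    domains = set()
    for anchor, url, followed in links:
        if followed and anchor.lower().strip() in SPAMMY_PHRASES:
            domains.add(urlparse(url).netloc)
    return len(domains) >= threshold

page_links = [
    ("discount printer cartridges", "http://site-a.com/ink", True),
    ("home security camera systems", "http://site-b.com/cams", True),
    ("car insurance quotes", "http://site-c.com/quotes", True),
    ("read more", "http://site-a.com/blog", True),
]
print(flag_page(page_links))  # → True
```

The same page with natural anchors ("read more", the site’s name) would never trip the threshold, which is exactly the asymmetry the filter exploits.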

Step 3: Have manual spam raters spot check a significant sample of the pages flagged by this filtration process (maybe 5-10,000) and record the false positives (pages where Google would legitimately want to count those links).

Step 4: If the false positives follow some easily identifiable pattern, write code to exclude them and their ilk from the filtration system. If the pattern is tougher to detect, machine learning could be applied to the sample, running across the positives and false positives to identify features that give an accurate algorithmic method for filtration.
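Steps 3 and 4 boil down to a precision check on the filter: draw a sample of flagged pages, collect rater verdicts, and measure the false-positive rate before deciding between hand-written exclusions and a learned model. A toy sketch with simulated rater labels – the 8% false-positive rate and 10% cutoff are invented for illustration:

```python
import random

random.seed(42)  # deterministic toy data

# Simulated rater verdicts for 5,000 flagged pages:
# True = false positive (the links were legitimate), False = real spam.
rater_labels = [random.random() < 0.08 for _ in range(5000)]

fp_rate = sum(rater_labels) / len(rater_labels)
print(f"false-positive rate: {fp_rate:.1%}")

# A low rate with an obvious shared trait among the false positives
# suggests a hand-written exclusion rule; a messier pattern calls for
# training a classifier on the labeled sample instead.
ACCEPTABLE = 0.10
needs_ml = fp_rate >= ACCEPTABLE
print("needs machine learning" if needs_ml else "hand-written exclusions OK")
```

In practice the labels would come from the rater spot checks in step 3, and the flagged pages plus verdicts become the training set if machine learning turns out to be necessary.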

Step 5: Devalue the manipulative links by applying the equivalent of a rel="nofollow" on them behind the scenes.

Step 6: Create a notification in Webmaster Tools saying "we’ve identified potentially manipulative links on pages on your site and have removed the value these links pass." Add this notification to 60-75% of the sites engaged in this activity AND write a blog post saying "we’ve applied this to 65% of the sites we’ve found engaging in this activity." If webmasters send re-consideration requests that they believe the filter caught false positives, you can send these back through Step 4 for evaluation and refinement.

Step 7: Create a flag in the PageRank toolbar for these same 60-75%, making the PR bar appear red on all the pages of the site. Announce this on the Webmaster Blog as well, noting that "65% of the sites we know about have been flagged with this."

Red PageRank Bar
That’s gonna scare a lot of webmasters

Step 8: Watch as search quality improves from the algorithmic filtration of manipulative link power and less spam is created as link buyers and spammers realize their efforts are going to waste.

Is this plan foolproof? No. Are there loopholes and messiness and ways clever spammers will work around it? Absolutely. But the folks I’ve talked to about this agree that for a huge quantity of the most "obvious" webspam via link manipulation, this could have a big, direct, fast and scalable impact. The addition of steps 6 and 7 would also send a much needed message that site owners and content creators would hear and feel loud and clear, while creating enough uncertainty about the value of the non "marked" sites to cause a behavioral shift.

Maybe Google’s already thought of this and shot it down, maybe they’ve already implemented it and we just think all those anchor text rich links are helping, but maybe, this thing has legs, and if it does, I hope Google does something. I’m bombarded so often with the question of "isn’t Google irrelevant now?" and "hasn’t SEO ruined Google?" that I’m fairly certain action’s needed. This type of manipulation seems to me the most obvious, most painful and most addressable.

Looking forward to your comments, suggestions and ideas – undoubtedly my concept is riddled with holes, but perhaps with your help, we can patch it up.

p.s. Yes, conceptually we could create a metric like this with Linkscape and show it in the mozBar and via Open Site Explorer and/or the Web App, but I’m not sure how accurate we could be, nor do I think it’s the best way to help web marketers through software (given our dozens of priorities). However, the fact that our engineering team thinks it’s relatively simple to build means it must be equally (if not more) simple for Google.


SEOmoz Daily SEO Blog

Posted by number1george

As you probably noticed, last week Google did a pretty big makeover of its local search results page, incorporating the local results directly within the organic results. In some cases it appeared that the old “7-Pack” was just given larger real estate on the SERP. In others, it looked like the websites were simply given links to their Places page. And sometimes, it looked like an entirely new SERP, different from both the original organic rankings and the lettered, local results. But what was the real effect this change had on local search results?

How I Got My Data

Visually, the new local search results page includes information from both the website and the business’s Places page. The title and description are taken from the website, but select information from the Places page is also included, as well as a direct link to the Places page in Maps. Here we see an example of a search for "tanning salon seattle wa" and how the combined results are displayed.

An example of a new local SERP in Google

To find out the effects of combining the results, I grabbed the rankings of 50 somewhat random websites we’ve been tracking. As an initial criterion, I tried to use sites we’d been tracking for at least 2 months. I also eliminated sites with substantial fluctuations in their rankings within the prior few weeks, since there would be no way to attribute those changes to any particular factor. Lastly, though I originally intended to use a completely random sampling, I eventually skipped over several sites that had no change, since several of these were in non-competitive areas where they pretty much dominated all other websites for their searches.

After I had my sample, I did some quick research, comparing the organic rankings of several websites prior to the change to their rankings after the change. I then performed the same search in Google Maps in order to determine how their Places pages were ranking individually.

Example of Google Maps SERP

With a few exceptions, the top 7 ranked results in Maps are what were displayed in the old 7-pack for the same search. These listings were ranked independently of the organic results beneath them. By comparing their former organic ranking to their current organic ranking, I was able to see if a change could be correlated to their Places page’s ranking in Maps.
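The comparison described above can be expressed as a simple correlation check: does the size of a site’s organic rank change track how well its Places page ranks in Maps? A toy version in Python with invented rankings (the actual 50-site dataset isn’t published):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, hand-rolled to stay stdlib-only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented sample: (organic rank before, organic rank after, Maps rank)
sites = [
    (5, 2, 1), (8, 4, 2), (6, 5, 3), (9, 7, 4),
    (4, 4, 6), (7, 8, 9), (10, 10, 12), (3, 5, 15),
]

improvement = [before - after for before, after, _ in sites]  # positive = moved up
maps_rank = [m for _, _, m in sites]

# A negative r means a better (numerically lower) Maps rank goes
# with a bigger organic improvement.
r = pearson(maps_rank, improvement)
print(f"r = {r:.2f}")
```

With real data you would also want a larger sample and rank-based correlation (Spearman), since SERP positions aren’t evenly spaced in value, but the basic before/after framing is the same.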

So, Was There Any Change?

Of the 50 websites examined, 30 of them had an improvement in the new, “combined” results while 6 of them dropped. In most cases, this shift in their ranking could definitely be attributed to the performance of their local listings.

The Good

First, let’s look at the ones that improved. I did eliminate 4 outliers but, for the most part, you can see a direct correlation between the sites’ improved ranking and their local ranking in Maps. Obviously, I can’t publish any actual websites or keyword searches, but the searches all used a typical local query consisting of “business/service city st”.

Local searches with a positive change

Generally, it can be said that sites performing well in both organic and local perform even better in the new consolidated SERP. In several cases you can directly see how a well-performing Google Places listing now pulls up your organic ranking.

In some instances, the combined performance of a business with both a decently ranking website and Places page was enough to push it up a rank or two in the new results. In others, it appears that a well-optimized Places page was able to significantly improve a decently performing website and increase its ranking by several spots. Basically, your local listing’s performance appears to be a significant ranking factor in the new organic results.

The Bad

Since a business’s local listing has the ability to positively affect its website’s performance in organic results, let’s look at the ones that dropped in ranking to determine if there is a negative factor associated with the new SERP.

Example of searches with a negative change

First, the fact that the sample size I was able to obtain was so small already implies that a poorly performing business listing doesn’t seem to have much of an effect on a website’s performance. Looking at the original rankings, you can also see that 3 of these sites weren’t doing that great to begin with. In fact, it would probably be fair to assume that their drop was due to an already negative trend. But what about the websites that were doing well but dropped after the update?

Digging deeper into these, I soon discovered that this wasn’t really a direct result of the poorly performing business listings dragging the websites down, but rather that, due to the local results being buried so deep in Maps, Google didn’t associate a business’s Places page with their website. As a result, other websites that did have strong Places pages were ranking higher. So, while having a poorly ranked local listing didn’t penalize the website, it was a whole category of optimization that the website was lacking. Almost like having a great inbound linking strategy but no content structure.

Other Observations

While going through dozens of various local searches, there were a few things that stood out:

  • Directory listings appear to be showing up more frequently in local results, in some cases taking up the top 3 spots in results.
  • The 7-Pack, or rather one-line business listings similar to the old 7-Pack, aren’t gone entirely. Lettered results still tend to show up when Google isn’t entirely sure you’re trying to do a local search. Typically, this happens in searches for smaller cities or regions.
  • When using rank-checking tools, the one-lined, lettered listings won’t be counted – just like before. The larger results being discussed here, however, are treated just as normal organic results were prior to the change, completely disregarding the letter and local information assigned to them.
  • Lastly, while I encountered plenty of websites on the first page without a Places page, I encountered very few Places page ranking on the first page without a website. Prior to the change, it was not uncommon to regularly see local listings with no associated website ranking in the 7-pack. Now it appears that, without a website, it is nearly impossible to be in the first page of Google’s general SERP for most searches.

What Does This Mean?

So what can we learn from all this? Basically, it’s just what Google said all along – everything is important. Your best bet is to have both a terrifically optimized website and an optimized, claimed Places page to associate with it.

Not only does Google seem to use a Places page as an organic ranking factor, but having one also gives you nearly twice the real estate devoted to your business in the results. Instead of just having a few words in your title tag and meta description to sell your business, you now have your address, phone number, reviews, lists of other websites that mention you, and even a picture to draw attention to your website.

Bottom line: all those old debates about whether it was better to have the top-ranking website in organic or have your business at the top of the 7-pack are over. Even if this isn’t the final layout, it’s clear that Google intends to make both count.



Posted by Danny Dover

As people in relationships spend time with each other they start to leverage each other’s natural strengths to efficiently store information about the world around them. "Honey, what is the name of my aunt’s employer?" "Babe, what do you call that thing that heats bread?" They rely on each other to store information that is mutually beneficial. Some believe this process is one of the reasons breakups are so hard. “I feel like when s/he left, s/he took a part of me.” It is common to hear statements similar to this because when it comes to memory, it is more true than many may realize.

Sharing Information

While this phenomenon has historically happened between two people offline, it is now happening online between people and technology. How many times have you checked Google for a fact that you once knew? How many times have you Googled for a resource that you have already read? Like it or not, Google is quickly becoming a second brain in much the same way loved ones have done in the past. While this search engine has benefits that humans don’t (ubiquity), it does have some severe limitations that should be examined.

The self-declared mission of the people who run Google is to “organize the world’s information…”. While they have done a remarkable job of this online, they have failed to do so offline in the tangible world. To understand these unspoken failures, all you need to do is examine the five major senses humans use to organize the world’s information.

Sight

“Who is that guy?” “I recognize that place, where was that scene filmed?” “What is the name of that color?” For most people, sight is the primary sense for experiencing the world. While technology does exist for identifying objects within images (facial recognition algorithms, OCR, color detection, etc.), you can’t utilize these tools directly through Google. This may be the biggest limitation of Google. Be it remembering the name of a person after a date or an entire government agency trying to identify a suspect, identifying someone or something by sight is critical for organizing the world’s information.

Smell

Smell is the closest sense tied to memory. Have you ever walked by a stranger and instantly been flooded with memories of a significant other who happened to wear the same perfume or cologne? It can be a jarring experience. Want to identify that scent? Google can’t help you. While the technology exists for detecting smells and there are databases for identifying smells, a method to easily cross reference and identify a smell online is not available.

Sound

You are watching How I Met Your Mother and you recognize the voice in the opening sequence. Whose voice is that? You hear an obscure tune as a car blaring loud music drives by your home. What is the name of that song? Like the situation with sight, the technology for identifying sounds exists (Shazam, SoundHound, etc…) but it is not available through Google.  While you can search via verticals for text, video and images, you can’t search for sounds. This is almost certainly a legal limitation rather than a technology one. (After all, Google can identify audio clips in YouTube videos.)

Taste

You are traveling in Greece and you order the most interesting looking item on the menu. When it arrives, it looks like nothing you have ever seen. You bite into it and instantly recognize the flavors but can’t remember the name of the meal, as it is hidden by a “unique” texture. Again, Google won’t help you (although a napkin might). The same problem happens more frequently with allergies. Want to make sure a meal a friend made for you doesn’t have an ingredient you are allergic to? Some technology can help, but Google isn’t one of them.

Touch

BEEP BEEP BEEP! It is 6:00 AM and your alarm clock is screaming. Eyes still closed and crusty, you reach across your bed and use touch to identify the snooze button on your alarm clock. Later that same day, you reach into your bag and navigate its contents by touch to pull out your cell phone. Although more subtly tied to memory than the other senses, touch can also help you identify objects.

But why would you need to search for something by touch if the object is already at arm’s length? Good question… unless you are blind. Many blind people use their sense of touch to catalogue the world. Imagine you are not able to see and you find something new and want to know what it is. A friend might be able to help but Google won’t.

And these major limitations are only the beginning:


Where are you?

In the United States, the most common text message is “where are you”. While other websites (Twitter, Facebook, Foursquare) have been getting better at answering this question, Google has largely remained stagnant.

Where did I put that?

You are getting ready for work but can’t for the life of you remember where you put your favorite shirt. This type of situation happens daily. Be it car keys, shoes or your little sister, countless man hours have been spent looking for things. When it comes to finding the location of personal items, again Google can’t help.

So Google has some major limitations, why is that a big deal?

These limits are worth writing a blog post about for two reasons: context and awareness.

The great thing about being alive is that everyone is constantly at the forefront of human progress. Right now we are the most evolved we have ever been. And right now, we are even more evolved than when you read that last sentence. It is very likely that while you have read this post, someone, somewhere has invented something that will make your life better moving forward. Google is a great example of that. The limitations I listed above could be fixed with the creation of new features. That is not the point. The point is that while we are currently living in the most technologically advanced time that has ever existed, we still have a long way to go. The Google of today is not the end-all-be-all, it is only a milepost on a much longer stretch of highway.

The second reason I am writing this post is to promote awareness. Whether you like it or not, Google is becoming an important factor in how you experience the world. Just like a person wearing glasses literally sees the world through predefined frames, humans are seeing the Internet through the limits of Google.

Think about that.

If you were a fish living in a fish bowl, would you know the bowl existed? You would certainly know there was an edge to your environment (the glass) but having been enclosed in a bowl throughout your entire existence, you wouldn’t be able to “organize your world’s information” beyond what you could sense. Google is not sensing the world like we do. It can’t see, smell, hear, taste or touch. Yet at the same time, it is largely defining how we experience the Internet. As the Internet becomes an increasingly essential part of our world, the search engine’s limitations become our limitations. These limitations, whether noticed or not, are limiting your potential to experience the world.


If you have any other related limitations that you think are worth sharing, feel free to post them in the comments. This post is very much a work in progress. As always, feel free to e-mail me if you have any suggestions on how I can make my posts more useful. All of my contact information is available on my profile. Thanks!



Some of you may have been hit by Google’s 20 October algorithm change.

And some of you wouldn’t have noticed any difference.

On 20 October, a number of sites got trashed. Rankings, and traffic, plummeted through the floor. The webmaster forums lit up. Aaron noticed it. I noticed it. Yet, other webmasters wondered what all the fuss was about.

As many of you know, there is not just one ranking algorithm. There are many algorithms. What affects one site may not affect another. Rather interestingly, Google’s John Mu dipped into this thread on Google’s support forum, offering these words of wisdom (HatTip: Barry):

It looks like the changes you’re seeing here may be from an algorithmic change. As part of our recent algorithmic changes (which the outside world sometimes refers to as the “May Day update” because it happened primarily in May), our algorithms are assessing the site differently. This is a ranking change, not any sort of manual spam penalty, and not due to any technical issues with regards to crawling or indexing your content. You can hear more about this change in Matt’s video.


Various parts of our algorithms can apply to sites at different times, depending on what our algorithms find. While we initially rolled out this change earlier this year, the web changes, sites change, and with that, our algorithms will continually adapt to the current state on the web, on those sites. While it might be confusing to see these changes at the same time as this issue, they really aren’t related, nor is this a general algorithm change (so if other sites have seen changes recently, it probably doesn’t apply to them as well).

Matt’s video, made four months ago, was talking about the algorithmic MayDay change. John Mu adds: “Various parts of our algorithms can apply to sites at different times.” In other words, whatever happened in May may not affect your site in May, or June, or July, but might hit you many months later. This implies that your site may trip a threshold, and be judged quite differently than it was the day before.

This still doesn’t completely explain why so many sites were hit on the same day, but then Google don’t typically explain things in detail.

To complicate matters, there was an acknowledged indexing problem, affecting new content, particularly on blogs. Again, John appears to suggest this was a separate issue.

Forget About Search Engines, Just Publish

Now, all SEOs are used to algorithm changes. Nothing new. But this one has me genuinely perplexed, mainly because of the type of sites that got hit.

Time for some self-searching Q&A about one of my own sites:

Q: So, how many links did you buy?
A: None.
Q: Are you selling links?
A: Nope.
Q: Linking to “bad neighborhoods”?
A: Not that I’m aware of…
Q: Did you link-build in an aggressive manner?
A: No. I did no link building, whatsoever.
Q: Huh?
A: That’s not a question.
Q: So you just published content?
A: Right.
Q: And people linked to your site, of their own accord?
A: Yep. I guess they liked it.
Q: Was your content heavily SEO’d?
A: No. In fact, I gave writers specific instructions not to do anything resembling “SEO copywriting”. It ruins the flow for readers.
Q: All original content?
A: All original. Hand written. No machines involved anywhere.
Q: So this site conforms to Google’s Webmaster Guidelines?
A: I’d say it lies well within them. “Be useful to end users”, was the guiding principle.

Yet it got hit hard.

What’s also interesting is the nature of the sites that replaced it. I checked keyword after keyword, and found script-driven, aggressive black-hat, content-free sites in top positions. Not in all cases – there are certainly useful sites that deserve to be there, and deserve to appear above mine. Fair play. However, there were plenty of sites of – shall we say – dubious merit – occupying high positions.


Be Useful. Perhaps

Now, I believe in publishing useful, unique content, and not paying too much attention to SEO, other than covering the basics. SEO is one strategy amongst many, and sites should, first and foremost, prove useful to people.

Clearly, no site is immune. You can stay within Google’s Webmaster guidelines, and get taken out. I knew that anyway, but when the sites that don’t follow the guidelines replace you…

….I’ll admit – it grates.

Presumably, Google rewards the sites it likes with high rankings, and if we see a lot of aggressive sites filling the top page, should we therefore assume that aggressive sites are what Google actually wants?

I’d like to think not.

Perhaps they are just trying to mess with our heads?

Or they messed up?

Or the changes are still bedding in?

Or they really do want it this way?

I’m still watching, and considering. Perhaps the site will just pop back up in due course. Or perhaps I need to go back to the drawing board. I’ll let you know how I get on.

If you’ve noticed something similar on your sites, chime in on the comments.

SEO Book.com – Learn. Rank. Dominate.

Marin Software manages about 5% of Google AdWords spend for clients, and they noticed that since Google Instant was unveiled, AdWords ad clicks are up 5%. Since the launch, Google’s Jonathan Rosenberg has mentioned that the impact on AdWords was “not material.”

I found the repeated use of those exact words suspicious and diversionary, and, as it turned out, with good reason! When Google Instant launched I highlighted what Google was doing to screen real estate & predicted this shift.

Turns out that the “tin foil hat wearing SEOs” were right once again.

And that 5% lift in AdWords clicks is on top of the lift Google has seen from

  • creating a 4th ad slot for comparison ads (in high paying verticals like “credit cards” and “mortgage”)

  • sitelinks, merchant ratings, and other ad extensions, which gave Google another lift. On the last quarterly call Jonathan Rosenberg stated: “These ads appear on more than 10% of the queries where we show ads and people like them. We see this because click-through rates are up for some formats as much as 10% and up more than 30% on some others.”

It is thus no surprise that Google’s move into other verticals is met with resistance. The travel industry recently put together the Fair Search site to oppose Google’s purchase of ITA Software.

The Google as Monopoly meme continues to grow.

Is Google a Monopoly? Graphic by Scores.org

As Google continues to make enemies this is a great time for the launch of a back to the basics approach to core algorithmic search. Blekko is launching publicly on November 1st.


Posted by randfish

Last week at our annual mozinar, Ben Hendrickson gave a talk on a unique methodology for improving SEO. The reception was overwhelming – I’ve never previously been part of a professional event where thunderous applause broke out not once but multiple times in the midst of a speaker’s remarks.

Ben Hendrickson of SEOmoz speaking at the London Distilled/SEOmoz PRO Training
Ben Hendrickson speaking last fall at the Distilled/SEOmoz PRO Training London
(he’ll be returning this year)


I doubt I can recreate the energy and excitement of the 320-person room that day, but my goal in this post is to help explain the concepts of topic modeling and vector space models as they relate to information retrieval, as well as the work we’ve done on LDA (Latent Dirichlet Allocation). I’ll also try to explain the relationship and potential applications to the practice of SEO.

A Request: Curiously, prior to the public release of this post and our research, there have been a number of negative remarks and criticisms from several folks in the search community suggesting that LDA (or topic modeling in general) is definitively not used by the search engines. We think there’s a lot of evidence to suggest engines do use these techniques, but we’d be excited to see contradicting evidence presented. If you have such work, please do publish!

The Search Rankings Pie Chart

Many of us are likely familiar with the ranking factors survey SEOmoz conducts every two years (we’ll have another one next year and I expect some exciting/interesting differences). Of course, we know that this aggregation of opinion is likely missing out on many factors and may over- or under-emphasize the ones it does show.

Here’s an illustration I created for a recent presentation to show the major categories in the overall results:

Illustration of Ranking Factors Survey Data

This suggests that many SEOs don’t ascribe much weight to on-page optimization.

I myself have often felt, from all the metrics, tests and observations of Google’s ranking results, that the importance of on-page factors like keyword usage or TF*IDF (explained below) is fairly small. Certainly, I’ve not observed many results, even in low-competition spaces, where one can simply add in a few more repetitions of the keyword, maybe toss in a few synonyms or "related searches" and improve rankings. This experience, which many SEOs I’ve talked to share, has led me to believe that linking signals are an overwhelming majority of how the engines order results.

But, I love to be wrong.

Some of the work we’ve been doing around topic modeling, specifically using a process called LDA (Latent Dirichlet Allocation), has shown some surprisingly strong results. This has made me (and I think a lot of the folks who attended Ben’s talk last Tuesday) question whether it was simply a naive application of the concept of "relevancy" or "keyword usage" that gave us this biased perspective.

Why Search Engines Need Topic Modeling

Some queries are very simple – a search for "wikipedia" is non-ambiguous, straightforward and can be effectively returned by even a very basic web search engine. Other searches aren’t nearly as simple. Let’s look at how engines might order two results – a simple problem most of the time that can be somewhat complex depending on the situation.

Query for Batman

Query for Chief Wiggum

Query for Superman

Query for Pianist

For complex queries or when relating large quantities of results with lots of content-related signals, search engines need ways to determine the intent of a particular page. Simply because it mentions the keyword 4 or 5 times in prominent places or even mentions similar phrases/synonyms won’t necessarily mean that it’s truly relevant to the searcher’s query.

Historically, lots of SEOs have put effort into this process, so what we’re doing here isn’t revolutionary, and topic models, LDA included, have been around for a long time. However, no one in the field, to our knowledge, has made a topic modeling system public or compared its output with Google rankings (to help see how potentially influential these signals might be). The work Ben presented, and the really exciting bit (IMO), is in those numbers.

Term Vector Spaces & Topic Modeling

Term vector spaces, topic modeling and cosine similarity sound like tough concepts, and when Ben first mentioned them on stage, a lot of the attendees (myself included) felt a bit lost. However, Ben (along with Will Critchlow, whose Cambridge mathematics degree came in handy) helped explain these to me, and I’ll do my best to replicate that here:

Simplistic Term Vector Model

In this imaginary example, every word in the English language is related to either "cat" or "dog," the only topics available. To measure whether a word is more related to "dog," we use a vector space model that creates those relationships mathematically. The illustration above does a reasonable job showing our simplistic world. Words like "bigfoot" are perfectly in the middle, no closer to "cat" than to "dog." But words like "canine" and "feline" are clearly closer to one than the other, and the degree of the angle in the vector model illustrates this (and gives us a number).

BTW – in an LDA vector space model, topics wouldn’t have exact label associations like "dog" and "cat" but would instead be things like "the vector around the topic of dogs."

Unfortunately, I can’t really visualize beyond this step, as it relies on taking the simple model above and scaling it to thousands or millions of topics, each of which would have its own dimension (and anyone who’s tried knows that drawing more than 3 dimensions in a blog post is pretty hard). Using this construct, the model can compute the similarity between any word or group of words and the topics it has created. You can learn more about this from Stanford University’s posting of Introduction to Information Retrieval, which has a specific section on Vector Space Models.
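The cat/dog toy world above is easy to sketch in code. Here’s a minimal illustration (the words and weights are invented purely for this example) of how cosine similarity turns "closeness of angle" into a number:

```python
import math

# Hypothetical two-topic space: each word is a vector of
# (dog-relatedness, cat-relatedness).  The weights are invented.
word_vectors = {
    "canine":  (0.9, 0.1),
    "feline":  (0.1, 0.9),
    "bigfoot": (0.5, 0.5),  # no closer to "cat" than to "dog"
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

dog_axis = (1.0, 0.0)
for word in word_vectors:
    print(word, round(cosine_similarity(word_vectors[word], dog_axis), 3))
```

A real LDA model does exactly this, just in thousands of topic dimensions instead of two.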

Correlation of our LDA Results w/ Google.com Rankings

Over the last 10 months, Ben (with help from other SEOmoz team members) has put together a topic modeling system based on a relatively simple implementation of LDA. While it’s certainly challenging to do this work, we doubt we’re the first SEO-focused organization to do so, though possibly the first to make it publicly available.

When we first started this research, we didn’t know what kind of an input LDA/topic modeling might have on search engines. Thus, on completion, we were pretty excited (maybe even ecstatic) to see the following results:


Correlation Between Google.com Rankings and Various Single Metrics
Spearman Correlation of LDA, Linking IPs and TF*IDF


(the vertical blue bars indicate standard error in the diagram, which is relatively low thanks to the large sample set)

Using the same process we did for our release of Google vs. Bing correlation/ranking data at SMX Advanced (we posted much more detail on the process here), we’ve shown the Spearman correlations for a set of metrics familiar to most SEOs against some of the LDA results, including:

  • TF*IDF – the classic term weighting formula, TF*IDF measures keyword usage in a more accurate way than a more primitive metric like keyword density. In this case, we just took the TF*IDF score of the page content that appeared in Google’s rankings
  • Followed IPs – this is our highest correlated single link-based metric, and shows the number of unique IP addresses hosting a website that contains a followed link to the URL. As we’ve shown in the past, with metrics like Page Authority (which uses machine learning to build more complex ranking models) we can do even better, but it’s valuable in this context to just think and compare raw link numbers.
  • LDA Cosine – this is the score produced from the new LDA labs tool. It measures the cosine similarity of topics between a given page or content block and the topics produced by the query.
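For the curious, TF*IDF itself is simple to compute. This sketch uses one common formulation (search engines, and our study, may weight things differently); the tiny corpus is invented:

```python
import math

def tf_idf(term, doc, corpus):
    """One classic TF*IDF formulation: term frequency in the document,
    damped by how common the term is across the corpus.
    Assumes the term appears in at least one document."""
    tf = doc.count(term) / len(doc)
    docs_with_term = sum(1 for d in corpus if term in d)
    return tf * math.log(len(corpus) / docs_with_term)

# Tiny invented corpus of tokenized pages.
corpus = [
    ["seo", "tools", "free"],
    ["seo", "guide"],
    ["cooking", "recipes"],
]
# "seo" appears in two of three docs, so it discriminates less than "free".
```

This is why TF*IDF beats raw keyword density: a term that appears everywhere earns a low weight no matter how often a page repeats it.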

The correlation of the LDA scores with rankings is uncanny. Certainly, it’s not a perfect correlation, but that shouldn’t be expected given the supposed complexity of Google’s ranking algorithm and the many factors therein. But, seeing LDA scores show this dramatic result made us seriously question whether there was causation at work here (and we hope to do additional research via our ranking models to attempt to show that impact). Perhaps good links are more likely to point to pages that are more "relevant" via a topic model, or some other aspect of Google’s algorithm that we don’t yet understand naturally biases towards these.
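As an aside, Spearman correlation is just correlation computed on ranks rather than raw values; with no ties it reduces to a one-line formula. A toy sketch with invented numbers (a real analysis should use a statistics library that handles ties):

```python
def ranks(values):
    """Rank positions (1 = smallest value); assumes no ties for simplicity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rho via the classic no-ties formula:
    1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical numbers: LDA scores vs. ranking positions for five results.
lda_scores = [0.9, 0.7, 0.8, 0.4, 0.2]
positions  = [1, 3, 2, 4, 5]   # 1 = top of the SERP

rho = spearman(lda_scores, positions)  # -1.0: higher score, better position
```

Because only rank order matters, a 0.33 correlation says "higher LDA scores tend to sit higher in the results," not that scores predict exact positions.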

However, given that many SEO best practices (e.g. keywords in title tags, static URLs) have dramatically lower correlations and the same difficulties proving causation, we suspect a lot of SEO professionals will be deeply interested in trying this approach.

The LDA Labs Tool Now Available; Some Recommendations for Testing & Use

We’ve just recently made the LDA Labs tool available. You can use this to input a word, phrase, chunk of text or an entire page’s content (via the URL input box) along with a desired query (the keyword term/phrase you want to rank for) and the tool will give back a score that represents the cosine similarity in a percentage form (100% = perfect, 0% = no relationship).

LDA Topics Tool

When you use the tool, be aware of a few issues:

  • Scores Change Slightly with Each Run
    This is because, like a pollster interviewing 100 voters in a city to get a sense of the local electorate, we check a sample of the topics a content+query combo could fit with (checking every possibility would take an exceptionally long time). You can, therefore, expect the percentage output to fluctuate 1-5% each time you check a page/content block against a query.
  • Scores are for English Only
    Unfortunately, because our topics are built from a corpus of English language documents, we can’t currently provide scores for non-English queries.
  • LDA isn’t the Whole Picture
    Remember that while the average correlation is in the 0.33 range, we shouldn’t expect scores for any given set of search results to go in precisely descending order (a correlation of 1.0 would suggest that behavior).
  • The Tool Currently Runs Against Google.com in the US only
    You should be able to see the same results the tool extracts from by using a personalization-agnostic search string like http://www.google.com/xhtml?q=my+search&pws=0
  • Using Synonyms, "Related Searches" or Wonder Wheel Suggestions May Not Help
    Term vector models are more sophisticated representations of "concepts" and "topics." While many SEOs have long recommended using synonyms or adding "related searches" as keywords on their pages, and others have suggested the importance of "topically relevant content," there haven’t been great ways to measure these or show their correlation with rankings. The scores you see from the tool are based on a much less naive interpretation of the connections between words than these classic approaches.
  • Scores are Relative (20% might not be bad)
    Don’t presume that getting a 15% or a 20% is always a terrible result. If the folks ranking in the top 10 all have LDA scores in the 10-20% range, you’re likely doing a reasonable job. Some queries simply won’t produce results that fit remarkably well with given topics (which could be a weakness of our model or a weirdness about the query itself).
  • Our Topic Models Don’t Currently Use Phrases
    Right now, the topics we construct are around single word concepts. We imagine that the search engines have probably gone above and beyond this into topic modeling that leverages multi-word phrases, too, and we hope to get there someday ourselves.
  • Keyword Spamming Might Improve Your LDA Score, But Probably Not Your Rankings
    Like anything else in the SEO world, manipulatively applying the process is probably a terrible idea. Even if this tool worked perfectly to measure keyword relevance and topic modeling in Google, it would be unwise to simply stuff 50 words over and over on your page to get the highest LDA score you could. Quality content that real people actually want to find should be the goal of SEO, and Google is almost certainly sophisticated enough to determine the difference between junk content that matches topic models and real content that real users will like (even if the tool’s scoring can’t do that).

If you’re trying to do serious SEO analysis and improvement, my suggested methodology is to build a chart something like this:

Analysis of "SEO" SERPs in Google
SERPs analysis of "SEO"
 in Google.com w/ Linkscape Metrics + LDA (click for larger)

Right now, you can use Keyword Difficulty’s export function and then add in some of these metrics manually (though in the future, we’re working towards building this type of analysis right into the web app beta).

Once you’ve got a chart like this, you can get a better sense of what’s propping up your competitors’ rankings – anchor text, domain authority, or maybe something related to topic modeling relevancy (which the LDA tool could help with).

Undoubtedly, Google’s More Sophisticated than This

While the correlations are high, and the excitement around the tool both inside SEOmoz and from a lot of our members and community is equally high, this is not us "reversing the algorithm." We may have built a great tool for improving the relevancy of your pages and helping to judge whether topic modeling is another component in the rankings, but it remains to be seen if we can simply improve scores on pages and see them rise in the results.

What’s exciting to us isn’t that we’ve found a secret formula (LDA has been written about for years and vector space models have been around for decades), but that we’re making a potentially valuable addition to the parts of SEO we’ve traditionally had little measurement around.

BTW – Thanks to Michael Cottam, who suggested the reference of research work by a number of Googlers on pLDA. There are hundreds of papers from Google and Microsoft (Bing) researchers around LDA-related topics, too, for those interested. Reading through some of these, you can see that major search engines have almost certainly built more advanced models to handle this problem. Our correlation and testing of the tool’s usefulness will show whether a naive implementation can still provide value for optimizing pages.

How Do I Explain this to the Boss/Client?

The simplest method I’ve found is to use an analogy like:

If we want to rank well for "the rolling stones" it’s probably a really good idea to use words like "Mick Jagger," "Keith Richards," and "tour dates." It’s also probably not super smart to use words like "rubies," "emeralds," "gemstones," or the phrase "gathers no moss," as these might confuse search engines (and visitors) as to the topic we’re covering.

This tool tries to give a best guess number about how well we’re doing on this front vs. other people on the web (or sample blocks of words or content we might want to try). Hopefully, it can help us figure out when we’ve done something like writing about the Stones but forgetting to mention Keith Richards.

As always, we’re looking forward to your feedback and results. We’ve already had some folks write in to us saying they used the tool to optimize the contents of some pages and seen dramatic rankings boosts. As we know, that might not mean anything about the tool itself or the process, but it certainly has us hoping for great things.

p.s. The next step, obviously, is to produce a tool that can make recommendations on words to add or remove to help improve this score. That’s certainly something we’re looking into.

p.p.s. We’re leaving the Labs LDA tool free for anyone to use for a while, as we’d love to hear what the community thinks of the process and want to get as broad input as possible. Future iterations may be PRO-only.



As Google reached the limits of returns in direct marketing they started pushing the value of branding (because, hey, if you can chalk it up to latent branding value there is no cap on your max bid). Surprisingly, they even got many big brands to buy their own brands AND buy sitelinks on the AdWords ads. Some went so far as providing case studies for how much of their own brand traffic they were now willing to pay for, which they previously got free. :D

Sure that can make sense for seasonal promotions, but you could do the same thing by having subdomains and sister websites. Dell.com can be the main site, Dell.net (or deals.dell.com) can be the deals & promotions website, and Dell.org can be the good karma charity site. No paying someone else for a brand you already spent to build. Beautiful. But I digress…

In October of 2008 Google’s CEO revealed which ad dollars they were chasing, and what loophole they were opening up in their relevancy algorithms:

“Brands are the solution, not the problem,” Mr. Schmidt said. “Brands are how you sort out the cesspool.”

That led to the brand update, and now Google even recommends specific brand modifiers when you search for words like “digital cameras.”

…and here is the problem…

Less than 2 years after Mr. Schmidt’s prophetic brand drivel, the Financial Times is doing a series on Google, in which Google’s Amit Singhal is blaming brands as being a major issue:

Companies with a high page rank are in a strong position to move into new markets. By “pointing” to this new information from their existing sites they can pass on some of their existing search engine aura, guaranteeing them more prominence.

Google’s Mr Singhal calls this the problem of “brand recognition”: where companies whose standing is based on their success in one area use this to “venture out into another class of information which they may not be as rich at”. Google uses human raters to assess the quality of individual sites in order to counter this effect, he adds.

No mention (of course) that it was Google which put excessive emphasis on domain authority, or how Google gutted the link graph, or how Google funds most of the content mills with AdSense.

Those are all irrelevant details, just beyond Google’s omniscient view. :D

The other thing which is absurd, is that if you listen to Google’s SEO tips, they will tell you to dominate a small niche then expand. Quoting Matt Cutts: “In general, I’ve found that starting with a small niche and building your way up is great practice.”

And now brand extension is somehow a big deal worth another layer of arbitrary manual inspection and intervention?

Meanwhile scraper sites are still clogging up Google, and they claim they need to write better algorithms to detect them. It isn’t hard to see the sun at noon!

If sites which expand in scope deserve more scrutiny then why is there so much scrape & mash flotsam in the search results? What makes remixed chunks of content better than the original source? A premium AdSense feed? Brand?


Posted by Danny Dover

  Want happier website visitors and higher rankings? This week’s Whiteboard Friday is about how and why to speed up your website. It is more technical than previous videos, so I tried to spice it up with an ode to one of my favorite canceled TV shows, Pop-up Video. Can’t stand the content? At least the added commentary is entertaining. (It is the perfect plan ;-p)

7 Ways to Take Advantage of Google’s Site Speed Algorithm

The following are seven proven techniques well known websites use to boost their site speed.

1. Enable Gzip

Gzip is an open-source compression algorithm that can be used to compress your website’s content before your server sends the data to a visitor’s browser. This makes your server’s job easier and makes pages load faster for your users. You can learn how to enable Gzip here.
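Enabling Gzip is a server configuration change (for example, Apache’s mod_deflate module or nginx’s gzip directive), but the size win is easy to demonstrate with Python’s gzip module; the sample markup here is invented:

```python
import gzip

# A crude stand-in for a web page: markup is repetitive, so it compresses well.
html = ("<div class='post'><p>Lorem ipsum dolor sit amet.</p></div>" * 200).encode("utf-8")

compressed = gzip.compress(html)
savings = 1 - len(compressed) / len(html)
print(f"{len(html)} bytes -> {len(compressed)} bytes ({savings:.0%} smaller)")
```

Real pages won’t compress this dramatically, but text-heavy HTML, CSS and JavaScript routinely shrink by well over half.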

2. Minify Javascript/CSS

Minification is the process (Minify is also the name of a popular tool) of removing unnecessary formatting characters from code. This makes your files smaller and your visitors happier. You can learn all about this process here.
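To illustrate what minification does, here is a toy example; it is nowhere near a real minifier (tools like YUI Compressor handle strings, calc() and many other edge cases), but it shows the idea of stripping characters that only humans need:

```python
import re

def naive_minify_css(css):
    """Toy CSS minifier: strips comments and collapses whitespace.
    Real minifiers handle far more edge cases."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # drop /* comments */
    css = re.sub(r"\s+", " ", css)                   # collapse runs of whitespace
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)     # tighten around punctuation
    return css.strip()

before = """
/* main styles */
body {
    margin: 0;
    color: #333;
}
"""
after = naive_minify_css(before)  # "body{margin:0;color:#333;}"
```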

3. Use a CDN (Content Distribution Network)

CDNs are systems of interconnected server resources that spread content and assets around the globe to shorten the distance between server and prospective user. They are commonly used by the Web’s most popular websites. You can find a list of free CDNs here.

4. Optimize Images

You can take advantage of the countless man hours that have been devoted to image compression and make your users happier by simply saving your images as the appropriate type. As a very general rule of thumb, I recommend saving photos as JPEGs and graphics as PNGs.

5. Use External Javascript/CSS

When a browser requests a website from a server, it can only download a limited number of files from the same hostname at any given point. While the exact limit varies by browser, it is a good enough reason to host applicable files on alternative subdomains so more can be fetched in parallel. This is only recommended for sites where the pros of speed will outweigh the SEO cons of creating a new subdomain.

6. Avoid Using Excess Redirects

While redirects can be extremely useful, it is important to know that implementing them forces your servers to do slightly more work per applicable request. Always avoid redirect chains (301 -> 301 -> 200 or, even worse, 301 -> 302 -> 200) and use redirects sparingly.
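A quick way to audit for chains is to record the status codes of each hop and flag anything with more than one redirect. This sketch works on hypothetical status sequences rather than live requests:

```python
def audit_redirect_chain(statuses):
    """Classify the HTTP status codes seen while resolving a URL.
    Inputs here are hypothetical; a real audit would fetch the URL and
    record each hop.  [301, 301, 200] means two redirects before the page."""
    redirects = [s for s in statuses if s in (301, 302, 307, 308)]
    if not redirects:
        return "ok: no redirects"
    if len(redirects) == 1:
        return "ok: single redirect"
    if 302 in redirects:
        return "fix: chained redirects with a temporary (302) hop"
    return "fix: chained 301s - point the first URL straight at the target"
```

The fix for any flagged chain is the same: update the first URL to point directly at the final destination.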

7. Use Fewer Files

The most straightforward way to speed up your website is to simply use fewer files. Fewer files means less data. My favorite method of doing this is utilizing CSS sprites. You can read how popular websites are using this trick here.

Google’s Mission to Speed Up the Web

Fueled by the massive potential of the Internet, Googlers are working on many projects in their attempt to speed up the Web.

Follow me on Twitter, Fool!
Follow SEOmoz on Twitter (who is slightly less blunt)

If you have any other advice that you think is worth sharing, feel free to post it in the comments. This post is very much a work in progress. As always, feel free to e-mail me if you have any suggestions on how I can make my posts more useful. All of my contact information is available on my SEOmoz profile. Thanks!



Posted by Tom_C

Wouldn’t it be great if you could somehow spot those SEO opportunities on your site which were low effort and high value? Well this post gives you a few ways you can do that! Sweet.

I’m going to be digging around in the recently released search queries report in Google Webmaster Tools:

Step 1 – Gathering The Fruit

The first thing we need to do is gather all the fruit (aka keyphrases). So within GWT, select Search queries, filter to just "web" queries, and in this case I’ve selected "United States" since that’s the main target market for SEOmoz. The more we can narrow this down, the better our data; if we leave image search in there, and countries like Serbia, the data will be less accurate:

Once we have filtered the data we then want to download the data to Excel:

Step 2 – Identify The Low Hanging Fruit

Once we have the data in Excel we can do some monkeying around to get some meaningful insights. When you download the data you will be presented with a lot of dummy data like this:

So I run a find and replace on the following two items:

  1. Find "<10" and replace with 1
  2. Find "-" and replace with 0

Be sure to only run these over columns B,C,D to avoid stripping out anything from your queries column in A!

Now, once we have this data, how exactly you want to slice it will depend on the number of impressions and keyphrases you have. Analysing the data for SEOmoz, I found that selecting all avg positions not equal to 1, with impressions over 200, gave me a sample set of 97 keyphrases, which wouldn’t take very long at all to whiz through. If you have more time or more keyphrases, you might want a longer or shorter list.
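The find-and-replace and filtering steps above can also be scripted. This sketch assumes rows shaped like the GWT export (query, impressions, clicks, average position, all as strings); the sample data is invented:

```python
def clean_cell(value):
    """Mirror the two find-and-replace steps: '<10' -> 1, '-' -> 0."""
    if value == "<10":
        return 1
    if value == "-":
        return 0
    return float(value)

def low_hanging_fruit(rows, min_impressions=200):
    """rows: (query, impressions, clicks, avg_position) tuples of strings,
    column order assumed from a typical GWT export.  Keep queries that
    aren't already ranking #1 and have enough impressions to matter."""
    fruit = []
    for query, impressions, clicks, avg_pos in rows:
        impressions = clean_cell(impressions)
        clicks = clean_cell(clicks)
        avg_pos = clean_cell(avg_pos)
        if avg_pos != 1 and impressions > min_impressions:
            fruit.append((query, impressions, clicks, avg_pos))
    return fruit

rows = [
    ("seo", "49500", "590", "9.4"),
    ("seomoz", "120000", "-", "1"),      # already #1: skip
    ("what is seo", "1900", "210", "3.9"),
    ("rare phrase", "<10", "1", "4.0"),  # too few impressions: skip
]
```

Running `low_hanging_fruit(rows)` keeps only the "seo" and "what is seo" rows, which is exactly the shortlist the manual Excel filter produces.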


As I mentioned this gives me a list of 97 keyphrases for the SEOmoz site. Let’s take a look at what some of those opportunities are!

An Example: SEOmoz’s Low Hanging Fruit

In this post I not only wanted to show you how to get the data but also to give you an insight into what kinds of actions you could take and what sorts of keyphrases you might look at so I ran the above process for the SEOmoz site and found the following low hanging fruit. Bear in mind that there weren’t that many really easy wins in the data since SEOmoz generally has fairly good SEO (unsurprisingly!). Still, it gives you an idea of the thought process.

Keyphrase: SEO   |   Ranking: 9.4   |   Impressions: 49,500   |   Clicks: 590

Ranking URL: http://www.seomoz.org

Now, I’ve shot myself in the foot a little here by picking a keyphrase which isn’t really low hanging fruit – it’s actually a highly competitive keyphrase! That said, it’s useful information to have. Without having rank tracking set up for the site, it instantly tells me that SEOmoz are highly competitive for this term, especially as some of the sites that rank above them are Google and Wikipedia. Still, there’s room for improvement. Maybe time for some more badge-bait, Rand?!

Keyphrase: Social media marketing  |   Ranking: 7.9   |   Impressions: 8,100   |   Clicks: 320

Ranking URL: http://www.seomoz.org/article/social-media-marketing-tactics

Again, this is a highly competitive keyphrase but one which I feel SEOmoz could perform better for. The current ranking is working OK for them but could certainly improve dramatically if they could shift from 7.9 into the top 3. Digging around, we see that the page is linked internally from every page in the navigation with good anchor text, and it has a total of 255 root domains linking to it, so there’s clearly a fair amount of work already done here. That said, I feel like there’s an opportunity waiting, since SEOmoz links out to lots of other sites from here and most of those blogs would likely link back to the SEOmoz guide if sent a nicely written email. Incidentally, if you’re looking for a link from SEOmoz and have a top-notch guide for one of the sites which doesn’t have any resources attached, then get in touch! So long as you link back to the page ;-)

Also, BONUS TIP – while researching backlinks in this space I stumbled across the fact that Amazon authors can get links from Amazon.com. Check out Darren Rowse on Amazon, then compare to Rand Fishkin on Amazon, and you’ll see that Rand has missed an opportunity to get blog posts imported and hence get clean followed links from Amazon. Sweet!

Keyphrase: What is seo  |   Ranking: 3.9   |   Impressions: 1,900   |   Clicks: 210

Ranking URL: http://guides.seomoz.org/beginners-guide-to-search-engine-optimization (Note that this URL isn’t reported in GWT; it’s the old beginner’s guide URL, which now redirects, but the same keyphrase stands.)

Here, I see the answer being a little easier than for the above keyphrases. The term is less competitive, and the title of the page doesn’t even mention "what is seo"! My actions would be to reword the title tag to "What is SEO? The Free Beginner’s Guide to SEO from SEOmoz" and to mention "What is SEO" on the page at least once (currently it only mentions "what is search engine optimisation", and although Google knows they’re the same phrase, I’d still like to see the exact phrase on the page somewhere). Also, there is no navigation link on the site to the beginner’s guide, so slipping a few links into the next few blog posts with the anchor text "what is seo" will help boost the rankings for that phrase.

Keyphrase: Free seo tools  |   Ranking: 4.2   |   Impressions: 480   |   Clicks: 73

Ranking URL: http://www.seomoz.org/tools

The term "seo tools" is fiercely competitive but the "free seo tools" term seems like it would be a lot easier to go after, in fact SEOmoz is one of only 2 of the top 10 results which doesn’t mention the term free in the title tag of the page. This could be rectified easily and in addition to that the page doesn’t even mention "free seo tools" on the page. Personally, since this is something people search for I’d be tempted to re-des
ign the page to add a "Free SEO Tools" sub-header to differentiate between the free and PRO tools. That way it’s a good user experience and also gets the phrase on the page.

Keyphrase: Keyword research  |   Ranking: 19   |   Impressions: 110   |   Clicks: 12

Ranking URL: http://www.seomoz.org/blog/10-steps-to-advanced-keyword-research

Note here that the impression numbers are so low because the page is ranking on the second page. Not having a page ranking in the top 10 here is a mistake for SEOmoz, I think (sorry, I mean opportunity!). The correct page to rank is clearly the page on keyphrase research from the new beginner’s guide, and the best way to make that page rank is to throw some more internal links at it. Currently there are basically no internal links to that page except from other beginner’s guide pages, so linking to it from elsewhere should be able to get it ranking. One idea for getting internal links to pages of the beginner’s guide would be to automatically link to the keyword research page from any blog post within the keyphrase research category. That way the individual pages of the beginner’s guide would essentially pick up more deep links internally.
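To make the category-link idea concrete, here’s a minimal sketch (my own illustration, not SEOmoz’s actual blog code; the category name, anchor text and target URL are all hypothetical placeholders) of how a blog template could append a deep link to any post filed under a given category:

```python
# Hypothetical sketch of category-based internal deep linking.
# The target URL and anchor text below are made-up placeholders.

TARGET_CATEGORY = "keyword research"
DEEP_LINK = ('<p>New to this topic? See the '
             '<a href="/beginners-guide/keyword-research">keyword research</a> '
             'chapter of the Beginner\'s Guide.</p>')

def append_deep_link(post_html, categories):
    """Append the guide link to posts filed under the target category."""
    if TARGET_CATEGORY in {c.lower() for c in categories}:
        return post_html + "\n" + DEEP_LINK
    return post_html

print(append_deep_link("<p>Post body...</p>", ["Keyword Research"]))
```

Every new post in that category then ships with a followed internal link, so the deep pages accumulate links without anyone having to remember to add them by hand.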

A Data Warning

A warning here: I’m still not sure how much I trust this impression and click data. I wrote a post over on Distilled about how the reported numbers are way out from analytics numbers. To be honest, if I were doing SEO full-time for SEOmoz I’d like to think I’d have better resources for keyphrase research, ranking data and visitor numbers, but Google’s webmaster tools search queries report can be a nice, quick way to identify keyphrase opportunities for a site which you’re less familiar with, or for which you don’t have those things set up.
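As a quick illustration of how I triage a queries report like this one (my own sketch, not part of any Google tool; the figures are simply the ones quoted in this post), you can rank keyphrases by the clicks they’re leaving on the table:

```python
# Sketch: triage GWT search-query data by "missed clicks" -- the
# impressions that didn't convert into a visit. Figures are the
# ones quoted in this post.

rows = [
    # (keyphrase, avg ranking, impressions, clicks)
    ("seo",                    9.4, 49500, 590),
    ("social media marketing", 7.9,  8100, 320),
    ("what is seo",            3.9,  1900, 210),
    ("free seo tools",         4.2,   480,  73),
    ("keyword research",      19.0,   110,  12),
]

def ctr(clicks, impressions):
    """Click-through rate as a percentage."""
    return 100.0 * clicks / impressions

# Biggest untapped audiences first: impressions minus clicks.
for phrase, rank, imp, clk in sorted(rows, key=lambda r: r[2] - r[3], reverse=True):
    print(f"{phrase:24s} rank {rank:5.1f}  CTR {ctr(clk, imp):5.2f}%  missed {imp - clk:6d}")
```

Unsurprisingly, "seo" tops the list on raw missed impressions, but the low-CTR rows with a page-one ranking are usually where a title-tag tweak pays off fastest.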


SEOmoz Daily SEO Blog