Google pushed for net neutrality but made wireless connections an exception. Why would they do that? Could it be they are invested in disrupting that market elsewhere?
As Google captured the bulk of the potential returns from direct response advertising, they started to lean on brands as a signal of quality & recommend brands more in their search results. When you search for Amazon you might get 8 or 9 links from the official site, & even on some generic keywords Google recommends associated brands.
When you think about what a brand is, it is a mental shorthand for a concept. It leads to increased recall, fatter sustained profit margins, and thus the ability to spend more on marketing. If Google is to put more weight on reviews and look at sentiment analysis, then of course that will benefit the larger players who invested in establishing positive associations, even at a young age. The results of such branding efforts are quite powerful.
In the past Google has positioned affiliates as evil (the body language says it all, IMHO), though Google’s remote quality rater documents provide further proof to anyone with lingering doubts.
As Google is becoming the affiliate they are getting direct signals into what consumers like most & are able to serve them a personalized recommendation engine. New extended ad formats & using location data will allow Google to further drive down the organic results.
Not only does Google sell CPA priced product ads on their search results, but they also allow your Google Base account to drive additional product links, which gives them over 150 million products to advertise. The name of the game is to give Google a bit more data to get a higher clickthrough rate & thus have a higher quality score & be enabled for additional profitable opportunities sold at below fair market rates. That seems like a free lunch and works great, up until the day Google decides to use the aggregate data to compete directly against you.
Google now runs a thin affiliate site in Boutiques.com. Google’s ability to recommend consumption behaviors not only impacts ecommerce, but every type of media in the world. They control the ad rates of various advertisers & can create custom ad integration opportunities.
YouTube offers related videos, a never-ending personalized streaming service in LeanBack, and ads which users can select from.
Would that be Google moving from pushing bits & people to pushing physical products?
Meanwhile Google is the same company which published this & recommends keygens and serials when you search for a brand. Google promises to fix that latter issue – something that took them only a few YEARS to do, even though they were blocking porn words (& other words that could have earned them negative press) much sooner.
In much the same way that Google has captured most of the revenue streams they will be able to with direct response ads, I think they realize that they will need to work better at managing property rights of big media & other publishers if they really want to drive brand advertising revenues. This will likely lead to a decline of the “anything goes” web.
If you think about it, the whole reason Google was so liberal in their approach to supporting (and even funding) copyright violation was to weaken the positions of the companies that hold those rights, such that Google could eventually negotiate a deal with them. But the main thing holding back Google Music is that, based on Google’s past performance, the labels do not trust the idea of a digital music locker hosted by Google. After all, Google AdSense ads are what allow sites dedicated to downloading MP3s from YouTube to be monetized today. Google offers promotional links on YouTube & knows how much money they are missing out on. Google’s strategy of using public relations to paint a clean image publicly while using legal loopholes to circumvent the intent of the law was good for getting them into a strong market position, but if they want a leadership position in more big media markets they will need to get buy-in from established players.
Google wants to get big into television ads. And that is going to mean having better respect for copyright. To some degree, as the Google business model changes, we will see their approach of “paying anyone to steal anything & wrap it in Google ads” (to soften up copyright) shift to a model where they position themselves as a gatekeeper on DRM content & push the “official” sources of the media (and try to take a cut of the profits). Already on YouTube, if you view certain content from outside the United States they will tell you that it is unavailable in your area. Google’s first video store was a complete failure. Once again they are pushing hard on video with Google TV & by buying the DRM company Widevine. Big media companies have opted out of Google TV while awaiting more favorable deals.
Part of such deals will ultimately rely on backroom payouts coupled with hard coded promotions. There will be a lot of collateral damage as entire categories become unprofitable. I think we are seeing the organic search results take a page from the ad book: pay to play.
Google’s old model of paying people to scrape content & wrap it in ads was leading to a market for lemons, driven by Gresham’s law. Much like how the most fraudulent banks could afford to buy out less fraudulent ones, and how Chinese milk adulterated with melamine was cheaper than real milk & drove honest dairies into bankruptcy, the search results were suffering from the age of scrape/mash/publish. Given the surrounding economic forces crushing newspapers, Google was making things worse than they needed to be.
Those who are creating original high-quality content have real business costs. Google paying scraper sites like Mahalo and Ask to borrow your content & wrap it in ads means that you are sometimes getting outranked for scraped duplications of your own content. That drives down publisher margins and pushes marginally profitable publishers into losing money.
Slowly but surely the search results will fill up with official hotel sites, official music sources, official video sources, official ebook sources, etc etc etc … with Google putting a big foot on the gas & accepting whatever cut they can get. If they want to avoid regulatory scrutiny they need to work with the official sources (which are every bit as politically connected as Google is).
As that shift happens, the longtail spam model will lose its profitability because it will be forced to compete with higher quality content that is automatically mixed into the search results. (The whole point of universal search was to allow Google to shortcut certain types of information right into the core search results… as they start making money from micro-payments and such, look for that trend to accelerate.)
Ultimately what has doomed most portals was their willingness to produce second-rate placeholder content that filled out a vertical and was ‘good enough.’ What makes Google so powerful with the stealth backdoor portal approach is that it allows them to mix in 3rd party content without having to either absorb the cost or create low quality backfill on their own.
One area that is ripe for ad innovations is books:
I’m genuinely glad to have Google enter this market because it will be reaching potential customers at a unique point in their book-buying journey: at the point of web search, not at the point of searching the bookstore. This means many things you didn’t realize a book can help you with—overcoming depression, remodeling a bathroom, making friends and influencing people—will now be surfaced alongside all the YouTube and other results Google will offer. This is a net plus for books.
But the ultimate effect of Google e-books, if Google knows what’s good for it, will be the creation of an ad-supported publishing model.
Now that books are digital & Google has rights to sell them, I would expect that within the next year or two Google starts to display them more aggressively in the organic search results. The free samples can be monetized via ads & upsells of the whole book. That endless stream of editorially vetted content could put a dent in the content farm business model.

SEO Book.com – Learn. Rank. Dominate.
Posted by randfish
First off, let me just say that there are a lot of people smarter and more experienced in scalably attacking web spam than I am working in the Search Quality division at Google and specifically on the Spam team. However, as a search enthusiast, a Google fan and an SEO, it seems to me that, with all due respect, they’re getting played – hard.
Word is, the Spam team’s key personnel had some time off working on other projects and supposedly they’re coming back together to renew the fight. I hope that’s the case because the uproar about black/gray hat SEO gaming the results is worse than ever, and deservedly so. It’s getting bad enough to where I actually worry that early adopters might stop using Google for commercial queries and start looking for alternatives because of how manipulative the top results feel. That behavior often trickles down over time.
Thus, I’m going to expound a bit on a tactic I discussed in my interview with Aaron for fighting what I see as a large part of the manipulation of results in Google – the abuse of anchor text rich links.
The basic problem is that if you want to rank well in Google for a high value, commercial search query like discount printer cartridges or home security camera systems, getting links with anchor text containing those words, preferably exact matches, is invaluable to rankings. Unfortunately, natural, editorially given links are extremely unlikely to use anchor text like that. They’re more likely to use the business or website name, possibly a single relevant word or two, but finding dozens or hundreds of domains that will link with this kind of anchor text without push-marketing intervention from an SEO is next to impossible.
That means sites that earn the natural, editorial links fall behind, while those who find ways to grab the anchor text match links and evade Google’s spam detection systems nab those top spots. It’s been going on for 10 years like this, and it’s insane. It needs to stop. Just as Google’s said they’ll be taking a hard look at exact match domain names, they need to take a hard look at precise matches for commercial anchor text links.
Here’s the methodology I like:
Step 1: Create a list of oft-spammed, commercially-directed anchor text. With Google’s resources, this won’t be hard at all. In fact, a good starting point might be some top AdSense keyword lists (this one was readily available).
Just a sample of some of the 3,400+ phrases in one file I found
I suspect Google’s Webspam team would have no trouble compiling hundreds of thousands of phrases like this that have a high potential for gaming and are found in large quantities of anchor text links.
Step 2: Locate any page on the web containing 3+ links with any of these anchor text phrases linking to different sites. An obvious example might look something like this:
But, any collection of exact-match anchor, followed links to pages on multiple domains could be flagged by the system.
Step 3: Have manual spam raters spot check through a significant sample size of the pages flagged by this filtration process (maybe 5,000-10,000) and record the false positives (pages where Google would legitimately want to count those links).
Step 4: If the false positives follow some easily identifiable pattern, write code to exclude them and their ilk from the filtration system. If the pattern is tougher to detect, machine learning could be applied to the sample, running across the positives and false positives to identify features that give an accurate algorithmic method for filtration.
Step 5: Devalue the manipulative links by applying the equivalent of a rel="nofollow" on them behind the scenes.
Step 6: Create a notification in Webmaster Tools saying "we’ve identified potentially manipulative links on pages on your site and have removed the value these links pass." Add this notification to 60-75% of the sites engaged in this activity AND write a blog post saying "we’ve applied this to 65% of the sites we’ve found engaging in this activity." If webmasters send reconsideration requests saying they believe the filter caught false positives, you can send these back through Step 4 for evaluation and refinement.
Step 7: Create a flag in the PageRank toolbar for these same 60-75%, making the PR bar appear red on all the pages of the site. Announce this on the Webmaster Blog as well, noting that "65% of the sites we know about have been flagged with this."
That’s gonna scare a lot of webmasters
Step 8: Watch as search quality improves from the algorithmic filtration of manipulative link power and less spam is created as link buyers and spammers realize their efforts are going to waste.
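To make Steps 2 through 5 concrete, here is a minimal sketch of the flagging-and-devaluing filter in Python. The phrase list, the 3-domain threshold, and the `(anchor, url, followed)` link representation are all illustrative assumptions on my part, not anything Google has disclosed:

```python
# Sketch of Steps 2-5: flag pages carrying 3+ followed, exact-match
# commercial-anchor links pointing at different domains, then treat those
# links as if they were nofollowed. Phrase list and threshold are assumptions.
from urllib.parse import urlparse

SPAMMY_ANCHORS = {
    "discount printer cartridges",
    "home security camera systems",
    "payday loans",
}  # in practice: hundreds of thousands of phrases (Step 1)

def flagged_links(links, min_domains=3):
    """links: iterable of (anchor_text, target_url, is_followed) tuples.
    Returns the suspicious links if they target >= min_domains distinct
    domains (Step 2's trigger), else an empty list."""
    suspects = [(anchor, url) for anchor, url, followed in links
                if followed and anchor.strip().lower() in SPAMMY_ANCHORS]
    domains = {urlparse(url).netloc for _, url in suspects}
    return suspects if len(domains) >= min_domains else []

def devalue(links):
    """Step 5: behind the scenes, mark flagged links as not followed,
    leaving legitimate links on the page untouched."""
    flagged = set(flagged_links(links))
    return [(anchor, url, followed and (anchor, url) not in flagged)
            for anchor, url, followed in links]
```

A page with three exact-match commercial anchors pointing at three different domains trips the filter, while the same two links repeated to one domain would not; Steps 3 and 4 (rater spot checks and machine-learned exclusions) would then refine what lands in the flagged set.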
Is this plan foolproof? No. Are there loopholes and messiness and ways clever spammers will work around it? Absolutely. But the folks I’ve talked to about this agree that for a huge quantity of the most "obvious" webspam via link manipulation, this could have a big, direct, fast and scalable impact. The addition of steps 6 and 7 would also send a much needed message that site owners and content creators would hear and feel loud and clear, while creating enough uncertainty about the value of the non "marked" sites to cause a behavioral shift.
Maybe Google’s already thought of this and shot it down, maybe they’ve already implemented it and we just think all those anchor text rich links are helping, but maybe, this thing has legs, and if it does, I hope Google does something. I’m bombarded so often with the questions "isn’t Google irrelevant now?" and "hasn’t SEO ruined Google?" that I’m fairly certain action’s needed. This type of manipulation seems to me the most obvious, most painful and most addressable.
Looking forward to your comments, suggestions and ideas – undoubtedly my concept is riddled with holes, but perhaps with your help, we can patch it up.
p.s. Yes, conceptually we could create a metric like this with Linkscape and show it in the mozBar and via Open Site Explorer and/or the Web App, but I’m not sure how accurate we could be, nor do I think it’s the best way to help web marketers through software (given our dozens of priorities). However, the fact that our engineering team thinks it’s relatively simple to build means it must be equally (if not more) simple for Google.
Posted by number1george
This post was originally in YOUmoz, and was promoted to the main blog because it provides great value and interest to our community. The author’s views are entirely his or her own and may not reflect the views of SEOmoz, Inc.
As you probably noticed, last week Google did a pretty big makeover of its local search results page, incorporating the local results directly within the organic results. In some cases it appeared that the old “7-Pack” was just given larger real estate on the SERP. In others, it looked like the websites were simply given links to their Places pages. And sometimes, it looked like an entirely new SERP, different from both the original organic rankings and the lettered local results. But what was the real effect this change had on local search results?
How I Got My Data
Visually, the new local search results page includes information from both the website and the business’s Places page. The title and description are taken from the website, but select information from the Places page is also included, as well as a direct link to the Places page in Maps. Here we see an example of a search for "tanning salon seattle wa" and how the combined results are displayed.
To find out the effects of combining the results, I grabbed the rankings of 50 somewhat random websites we’ve been tracking. As an initial criterion, I tried to use sites we’d been tracking for at least 2 months. I also eliminated sites with substantial fluctuations in their rankings within the prior few weeks, since there would be no way to attribute those changes to any particular factor. Lastly, though I originally intended to use a completely random sampling, I eventually skipped over several sites that had no change, since several of these were in non-competitive areas where they pretty much dominated all other websites for their searches.
After I had my sample, I did some quick research, comparing the organic rankings of several websites prior to the change to their rankings after the change. I then performed the same search in Google Maps in order to determine how their Places pages were ranking individually.
With a few exceptions, the top 7 ranked results in Maps are what were displayed in the old 7-pack for the same search. These listings were ranked independently of the organic results beneath them. By comparing their former organic ranking to their current organic ranking, I was able to see if a change could be correlated to their Places page’s ranking in Maps.
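The comparison described above can be sketched in a few lines of Python. The field names and grouping rule here are my own illustrative assumptions about how the collected data might be organized, not the author's actual spreadsheet:

```python
# Sketch of the methodology: for each tracked site, compute the organic rank
# change across the update and pair it with the site's Places ranking in Maps,
# then compare sites inside vs. outside the old top-7 Maps results.
def rank_changes(sites):
    """sites: list of dicts with 'old_organic', 'new_organic', and 'maps'
    ranks (1 = top spot; maps is None if the site is unranked in Maps).
    Returns (maps_rank, change) pairs; positive change = improvement."""
    return [(s["maps"], s["old_organic"] - s["new_organic"])  # #9 -> #4 is +5
            for s in sites]

def summarize(pairs):
    """Average the organic rank change for sites with a top-7 Places listing
    versus sites without one."""
    in_pack = [c for m, c in pairs if m is not None and m <= 7]
    out_pack = [c for m, c in pairs if m is None or m > 7]
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(in_pack), avg(out_pack)
```

On a toy sample, a site that moved from #9 to #4 organically while holding the #2 Maps spot would register as a +5 change in the top-7 group, which mirrors the correlation the post goes on to describe.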
So, Was There Any Change?
Of the 50 websites examined, 30 of them had an improvement in the new, “combined” results while 6 of them dropped. In most cases, this shift in their ranking could definitely be attributed to the performance of their local listings.
First, let’s look at the ones that improved. I did eliminate 4 outliers but, for the most part, you can see a direct correlation between the sites’ improved ranking and their local ranking in Maps. Obviously, I can’t publish any actual websites or keyword searches, but the searches all used a typical local query consisting of “business/service city st”.
Generally, it can be said that sites performing well in both organic and local perform even better in the new consolidated SERP. In several cases you can directly see how a well-performing Google Places listing now pulls up your organic ranking.
In some instances, the combined performance of a business with both a decently ranking website and Places page was enough to push it up a rank or two in the new results. In others, it appears that a well-optimized Places page was able to significantly improve a decently performing website and increase its ranking by several spots. Basically, your local listing’s performance appears to be a significant ranking factor in the new organic results.
Since a business’s local listing has the ability to positively affect its website’s performance in the organic results, let’s look at the ones that dropped in ranking to determine if there is a negative factor associated with the new SERP.
First, the fact that the sample size I was able to obtain was so small already implies that a poorly performing business listing doesn’t seem to have much of an effect on a website’s performance. Looking at the original rankings, you can also see that 3 of these sites weren’t doing that great to begin with. In fact, it would probably be fair to assume that their drop was due to an already negative trend. But what about the websites that were doing well but dropped after the update?
Digging deeper into these, I soon discovered that this wasn’t really a direct result of the poorly performing business listings dragging the websites down, but rather that, due to the local results being buried so deep in Maps, Google didn’t associate a business’s Places page with their website. As a result, other websites that did have strong Places pages were ranking higher. So, while having a poorly ranked local listing didn’t penalize the website, it was a whole category of optimization that the website was lacking. Almost like having a great inbound linking strategy but no content structure.
While going through dozens of various local searches, there were a few things that stood out:
What Does This Mean?
So what can we learn from all this? Basically, it’s just what Google said all along – everything is important. Your best bet is to have both a terrifically optimized website and an optimized, claimed Places page to associate with it.
Not only does Google seem to use a Places page as an organic ranking factor, but having one also gives you nearly twice the real estate devoted to your business in the results. Instead of just having a few words in your title tag and meta description to sell your business, you now have your address, phone number, reviews, lists of other websites that mention you, and even a picture to draw attention to your website.
Bottom line: all those old debates about whether it was better to have the top-ranking website in organic or have your business at the top of the 7-pack are over. Even if this isn’t the final layout, it’s clear that Google intends to make both count.
Posted by Danny Dover
As people in relationships spend time with each other they start to leverage each other’s natural strengths to efficiently store information about the world around them. "Honey, what is the name of my Aunt’s employer?" "Babe, what do you call that thing that heats bread?" They rely on each other to store information that is mutually beneficial. Some believe this process is one of the reasons breakups are so hard. “I feel like when s/he left, s/he took a part of me.” It is common to hear statements similar to this because when it comes to memory, it is more true than many may realize.
While this phenomenon has historically happened between two people offline, it is now happening online between people and technology. How many times have you checked Google for a fact that you once knew? How many times have you Googled for a resource that you have already read? Like it or not, Google is quickly becoming a second brain in much the same way loved ones have done in the past. While this search engine has benefits that humans don’t (ubiquity), it does have some severe limitations that should be examined.
The self declared mission of the people who run Google is to “organize the world’s information…”. While they have done a remarkable job of this online, they have failed to do this offline in the tangible world. To understand these unspoken failures, all you need to do is examine the five major senses humans use to organize the world’s information.
“Who is that guy?” “I recognize that place, where was that scene filmed?” “What is the name of that color?” For most people, sight is the primary sense for experiencing the world. While technology does exist for identifying objects within images (facial recognition algorithms, OCR, color detection, etc.), you can’t utilize these tools directly through Google. This may be the biggest limitation of Google. Be it remembering the name of a person after a date or an entire government agency trying to identify a suspect, identifying someone or something by sight is critical for organizing the world’s information.
Smell is the closest sense tied to memory. Have you ever walked by a stranger and instantly been flooded with memories of a significant other who happened to wear the same perfume or cologne? It can be a jarring experience. Want to identify that scent? Google can’t help you. While the technology exists for detecting smells and there are databases for identifying smells, a method to easily cross reference and identify a smell online is not available.
You are watching How I Met Your Mother and you recognize the voice in the opening sequence. Whose voice is that? You hear an obscure tune as a car blaring loud music drives by your home. What is the name of that song? Like the situation with sight, the technology for identifying sounds exists (Shazam, SoundHound, etc…) but it is not available through Google. While you can search via verticals for text, video and images, you can’t search for sounds. This is almost certainly a legal limitation rather than a technology one. (After all, Google can identify audio clips in YouTube videos.)
You are traveling in Greece and you order the most interesting looking item on the menu. When it arrives, it looks like nothing you have ever seen. You bite into it and instantly recognize the flavors but can’t remember the name of the meal as it is hidden by a “unique” texture. Again, Google won’t help you (although a napkin might). The same problem happens more frequently with allergies. Want to make sure a meal a friend made for you doesn’t have an ingredient you are allergic to? Some technologies can help, but Google isn’t one of them.
BEEP BEEP BEEP! It is 6:00 AM and your alarm clock is screaming. Eyes still closed and crusty, you reach across your bed and use touch to identify the snooze button on your alarm clock. Later that same day, you reach into your bag and navigate its contents by touch to pull out your cell phone. Although more subtly tied to memory than the other senses, touch can also help you identify objects.
But why would you need to search for something by touch if the object is already at arm’s length? Good question… unless you are blind. Many blind people use their sense of touch to catalogue the world. Imagine you are not able to see and you find something new and want to know what it is. A friend might be able to help but Google won’t.
In the United States, the most common text message is “where are you”. While other websites (Twitter, Facebook, Foursquare) have been getting better at answering this question, Google has largely remained stagnant.
You are getting ready for work but can’t for the life of you remember where you put your favorite shirt. This type of situation happens daily. Be it car keys, shoes or your little sister, countless man hours have been spent looking for things. When it comes to finding the location of personal items, again Google can’t help.
These limits are worth writing a blog post about for two reasons; context and awareness.
The great thing about being alive is that everyone is constantly at the forefront of human progress. Right now we are the most evolved we have ever been. And right now, we are even more evolved than when you read that last sentence. It is very likely that while you have read this post, someone, somewhere has invented something that will make your life better moving forward. Google is a great example of that. The limitations I listed above could be fixed with the creation of new features. That is not the point. The point is that while we are currently living in the most technologically advanced time that has ever existed, we still have a long way to go. The Google of today is not the end-all-be-all, it is only a milepost on a much longer stretch of highway.
The second reason I am writing this post is to promote awareness. Whether you like it or not, Google is becoming an important factor in how you experience the world. Just like a person wearing glasses literally sees the world through predefined frames, humans are seeing the Internet through the limits of Google.
Think about that.
If you were a fish living in a fish bowl, would you know the bowl existed? You would certainly know there was an edge to your environment (the glass) but having been enclosed in a bowl throughout your entire existence, you wouldn’t be able to “organize your world’s information” beyond what you could sense. Google is not sensing the world like we do. It can’t see, smell, hear, taste or touch. Yet at the same time, it is largely defining how we experience the Internet. As the Internet becomes an increasingly essential part of our world, the search engine’s limitations become our limitations. These limitations whether noticed or not are limiting your potential to experience the world.
If you have any other related limitations that you think are worth sharing, feel free to post them in the comments. This post is very much a work in progress. As always, feel free to e-mail me if you have any suggestions on how I can make my posts more useful. All of my contact information is available on my profile. Thanks!
Some of you may have been hit by Google’s 20 October algorithm change.
And some of you wouldn’t have noticed any difference.
On 20 October, a number of sites got trashed. Rankings, and traffic, plummeted through the floor. The webmaster forums lit up. Aaron noticed it. I noticed it. Yet, other webmasters wondered what all the fuss was about.
As many of you know, there is not just one ranking algorithm. There are many algorithms. What affects one site may not affect another. Rather interestingly, Google’s John Mu dipped into this thread on Google’s support forum, offering these words of wisdom (hat tip: Barry):
It looks like the changes you’re seeing here may be from an algorithmic change. As part of our recent algorithmic changes (which the outside world sometimes refers to as the “May Day update” because it happened primarily in May), our algorithms are assessing the site differently. This is a ranking change, not any sort of manual spam penalty, and not due to any technical issues with regards to crawling or indexing your content. You can hear more about this change in Matt’s video.
Various parts of our algorithms can apply to sites at different times, depending on what our algorithms find. While we initially rolled out this change earlier this year, the web changes, sites change, and with that, our algorithms will continually adapt to the current state on the web, on those sites. While it might be confusing to see these changes at the same time as this issue, they really aren’t related, nor is this a general algorithm change (so if other sites have seen changes recently, it probably doesn’t apply to them as well).
Matt’s video, made four months ago, was talking about the algorithmic MayDay change. John Mu adds: “Various parts of our algorithms can apply to sites at different times.” In other words, whatever happened in May may not affect your site in May, or June, or July, but might hit you many months later. This implies that your site may trip a threshold and be judged quite differently than it was the day before.
This still doesn’t completely explain why so many sites were hit on the same day, but then Google don’t typically explain things in detail.
To complicate matters, there was an acknowledged indexing problem, affecting new content, particularly on blogs. Again, John appears to suggest this was a separate issue.
Now, all SEOs are used to algorithm changes. Nothing new. But this one has me genuinely perplexed, mainly because of the type of sites that got hit.
Time for some self-searching Q&A about one of my own sites:
Q: So, how many links did you buy?
Q: Are you selling links?
Q: Linking to “bad neighborhoods”?
A: Not that I’m aware of…
Q: Did you link-build in an aggressive manner?
A: No. I did no link building, whatsoever.
A: That’s not a question.
Q: So you just published content?
Q: And people linked to your site, of their own accord?
A: Yep. I guess they liked it.
Q: Was your content heavily SEO’d?
A: No. In fact, I gave writers specific instructions not to do anything resembling “SEO copywriting”. It ruins the flow for readers.
Q: All original content?
A: All original. Hand written. No machines involved anywhere.
Q: So this site conforms to Google’s Webmaster Guidelines?
A: I’d say it lies well within them. “Be useful to end users”, was the guiding principle.
Yet it got hit hard.
What’s also interesting is the nature of the sites that replaced it. I checked keyword after keyword, and found script-driven, aggressive black-hat, content-free sites in top positions. Not in all cases – there are certainly useful sites that deserve to be there, and deserve to appear above mine. Fair play. However, there were plenty of sites of – shall we say – dubious merit – occupying high positions.
Now, I believe in publishing useful, unique content, and not paying too much attention to SEO, other than covering the basics. SEO is one strategy amongst many, and sites should, first and foremost, prove useful to people.
Clearly, no site is immune. You can stay within Google’s Webmaster guidelines, and get taken out. I knew that anyway, but when the sites that don’t follow the guidelines replace you…
….I’ll admit – it grates.
Presumably, Google rewards the sites it likes with high rankings, and if we see a lot of aggressive sites filling the top page, should we therefore assume that aggressive sites are what Google actually wants?
I’d like to think not.
Perhaps they are just trying to mess with our heads?
Or they messed up?
Or the changes are still bedding in?
Or they really do want it this way?
I’m still watching, and considering. Perhaps the site will just pop back up in due course. Or perhaps I need to go back to the drawing board. I’ll let you know how I get on.
If you’ve noticed something similar on your sites, chime in on the comments.
SEO Book.com – Learn. Rank. Dominate.
Marin Software manages about 5% of Google AdWords spend for its clients, and they noticed that since Google Instant was unveiled, AdWords ad clicks are up 5%. Since the launch, Google’s Jonathan Rosenberg has said that the impact on AdWords was “not material.”
I found the repeated use of those exact words suspicious and diversionary, and, as it turned out, with good reason! When Google Instant launched I highlighted what Google was doing to screen real estate & predicted this shift.
Turns out that the “tin foil hat wearing SEOs” were right once again.
And that 5% lift in AdWords clicks is on top of the lift Google has seen from
The Google as Monopoly meme continues to grow. (Graphic by Scores.org.)
As Google reached the limits of returns in direct marketing they started pushing the value of branding (because, hey, if you can chalk it up to latent branding value there is no cap on your max bid). Surprisingly, they even got many big brands to buy their own brands AND buy sitelinks on the AdWords ads. Some went so far as providing case studies for how much of their own brand traffic they were now willing to pay for, which they previously got free.
Sure that can make sense for seasonal promotions, but you could do the same thing by having subdomains and sister websites. Dell.com can be the main site, Dell.net (or deals.dell.com) can be the deals & promotions website, and Dell.org can be the good karma charity site. No paying someone else for brand you already spent to build. Beautiful. But I digress…
In October of 2008 Google’s CEO revealed which ad dollars they were chasing, and what loophole they were opening up in their relevancy algorithms:
“Brands are the solution, not the problem,” Mr. Schmidt said. “Brands are how you sort out the cesspool.”
…and here is the problem…
Less than 2 years after Mr. Schmidt’s prophetic brand drivel, the Financial Times is doing a series on Google, in which Google’s Amit Singhal is blaming brands as being a major issue:
Companies with a high page rank are in a strong position to move into new markets. By “pointing” to this new information from their existing sites they can pass on some of their existing search engine aura, guaranteeing them more prominence.
Google’s Mr Singhal calls this the problem of “brand recognition”: where companies whose standing is based on their success in one area use this to “venture out into another class of information which they may not be as rich at”. Google uses human raters to assess the quality of individual sites in order to counter this effect, he adds.
No mention (of course) that it was Google which put excessive emphasis on domain authority, or how Google gutted the link graph, or how Google funds most of the content mills with AdSense.
Those are all irrelevant details, just beyond Google’s omniscient view.
The other thing which is absurd, is that if you listen to Google’s SEO tips, they will tell you to dominate a small niche then expand. Quoting Matt Cutts: “In general, I’ve found that starting with a small niche and building your way up is great practice.”
And now brand extension is somehow a big deal worth another layer of arbitrary manual inspection and intervention?
Meanwhile scraper sites are still clogging up Google, and they claim they need to write better algorithms to detect them. It isn’t hard to see the sun at noon!
If sites which expand in scope deserve more scrutiny, then why is there so much scrape & mash flotsam in the search results? What makes remixed chunks of content better than the original source? A premium AdSense feed? Brand?
Posted by Danny Dover
Want happier website visitors and higher rankings? This week’s Whiteboard Friday is about how and why to speed up your website. It is more technical than previous videos so I tried to spice it up with an ode to one of my favorite canceled TV Shows, Pop-up Video. Can’t stand the content? At least the added commentary is entertaining. (It is the perfect plan ;-p)
The following are seven proven techniques well known websites use to boost their site speed.
Gzip is an open source compression algorithm that can be used to compress your website’s content before your server sends the data to a visitor’s browser. This makes your server’s job easier and makes pages load faster for your users. You can learn how to enable Gzip here.
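To get a feel for what gzip buys you, here is a minimal sketch using Python’s standard library. Real servers compress responses on the fly (the page content here is just a made-up example), but the algorithm and the savings are the same:

```python
import gzip

# Hypothetical page body: HTML is repetitive, which is why it compresses so well.
html = b"<html><body>" + b"<p>Hello, visitor!</p>" * 200 + b"</body></html>"

compressed = gzip.compress(html)
ratio = len(compressed) / len(html)
print(f"original: {len(html)} bytes, gzipped: {len(compressed)} bytes ({ratio:.0%})")
```

On markup like this the gzipped payload comes out at a small fraction of the original size, which is bandwidth your visitors never have to wait for.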
Minification is the process (and Minify is one tool for it) of removing unnecessary formatting characters from code. This makes your files smaller and your visitors happier. You can learn all about this process here.
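As a rough illustration of the idea, here is a deliberately crude CSS minifier sketch. Production tools handle many edge cases (strings, media queries, hacks) that this toy version ignores:

```python
import re

def minify_css(css: str) -> str:
    """Crude CSS minifier sketch: strips comments and collapses whitespace.
    Real minifiers handle many more edge cases than this."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.DOTALL)  # drop /* comments */
    css = re.sub(r"\s+", " ", css)                        # collapse runs of whitespace
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)          # tighten around punctuation
    return css.strip()

print(minify_css("body {\n  color: #333;  /* text color */\n  margin: 0;\n}"))
# → body{color:#333;margin:0;}
```

The comment and every formatting character disappear, yet the browser parses the result identically.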
CDNs are systems of interconnected server resources that spread content and assets around the globe to shorten the distance between server and prospective user. They are commonly used by the Web’s most popular websites. You can find a list of free CDNs here.
You can take advantage of the countless man hours that have been devoted to image compression and make your users happier by simply saving your images as the appropriate type. As a very general rule of thumb, I recommend saving photos as JPEGs and graphics as PNGs.
When a browser requests a website, it can only open a limited number of simultaneous connections to any one hostname, which caps how many files it downloads in parallel. Hosting applicable files on alternative subdomains works around this limit. This is only recommended for sites where the pros of speed will outweigh the SEO cons of creating a new subdomain.
While redirects can be extremely useful, it is important to know that each one forces your server to do slightly more work per applicable request. Always avoid redirect chains (301 -> 301 -> 200, or even worse 301 -> 302 -> 200) and use redirects sparingly.
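To see why chains hurt, here is a toy checker over a hypothetical redirect map: every hop in the chain is an extra round trip before the visitor sees any content, so two chained 301s should be collapsed into one:

```python
def chain_length(url, redirects, limit=10):
    """Follow a redirect map and count the hops before a final URL.
    The limit guards against accidental redirect loops."""
    hops = 0
    while url in redirects and hops < limit:
        url = redirects[url]
        hops += 1
    return hops, url

# Hypothetical chain: old domain -> www -> https. Each hop costs a round trip.
redirects = {
    "http://old.example.com/page": "http://www.example.com/page",
    "http://www.example.com/page": "https://www.example.com/page",
}
print(chain_length("http://old.example.com/page", redirects))
# → (2, 'https://www.example.com/page')
```

Pointing the first URL straight at the final destination would cut this to a single hop.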
The most straightforward way to speed up your website is to simply use fewer files. Fewer files mean fewer requests and less data. My favorite method of doing this is utilizing CSS sprites. You can read how popular websites are using this trick here.
Fueled by the massive potential of the Internet, Googlers are working on many projects in their attempt to speed up the Web:
If you have any other advice that you think is worth sharing, feel free to post it in the comments. This post is very much a work in progress. As always, feel free to e-mail me if you have any suggestions on how I can make my posts more useful. All of my contact information is available on my SEOmoz profile under Danny. Thanks!
Posted by Tom_C
Wouldn’t it be great if you could somehow spot those SEO opportunities on your site which were low effort and high value? Well this post gives you a few ways you can do that! Sweet.
I’m going to be digging around in the recently released search queries report in Google Webmaster Tools:
The first thing we need to do is gather all the fruit (aka keyphrases). Within GWT, select search queries, choose just "web" queries, and in this case select "United States", since that’s the main target market for SEOmoz. The more we can narrow this down, the better the data we get; if we leave image search in there, and countries like Serbia, our data will be less accurate:
Once we have filtered the data we then want to download the data to Excel:
Once we have the data in Excel we can do some monkeying around to get some meaningful insights. When you download the data you will be presented with a lot of dummy data like this:
So I run a find and replace on the following two items:
Be sure to only run these over columns B,C,D to avoid stripping out anything from your queries column in A!
Now, once we have this data, how exactly you want to slice it will depend on the number of impressions and the number of keyphrases. Analysing the data for SEOmoz, I found that selecting all average positions not equal to 1, with impressions over 200, gave me a sample set of 97 keyphrases which wouldn’t take very long at all to whiz through. If you have more time, or more keyphrases, you might want a longer or shorter list.
As I mentioned this gives me a list of 97 keyphrases for the SEOmoz site. Let’s take a look at what some of those opportunities are!
In this post I not only wanted to show you how to get the data but also to give you an insight into what kinds of actions you could take and what sorts of keyphrases you might look at so I ran the above process for the SEOmoz site and found the following low hanging fruit. Bear in mind that there weren’t that many really easy wins in the data since SEOmoz generally has fairly good SEO (unsurprisingly!). Still, it gives you an idea of the thought process.
Keyphrase: SEO | Ranking: 9.4 | Impressions: 49,500 | Clicks: 590
Ranking URL: http://www.seomoz.org
Now, I’ve shot myself in the foot a little here by picking a keyphrase which isn’t really a low hanging fruit, it’s actually a highly competitive keyphrase! That said, it’s useful information to have. Without having rank tracking set up for the site it instantly tells me that SEOmoz are highly competitive for this term, especially as some of the sites that rank above them are Google and Wikipedia. That said, there’s room for improvement. Maybe time for some more badge-bait Rand?!
Keyphrase: Social media marketing | Ranking: 7.9 | Impressions: 8,100 | Clicks: 320
Again, this is a highly competitive keyphrase, but one which I feel SEOmoz could perform better for. The current ranking is working OK for them, but it could improve dramatically if they could shift from 7.9 into the top 3. Digging around, we see that the page is linked internally from every page in the navigation with good anchor text, and it has a total of 255 root domains linking to it, so there’s clearly a fair amount of work already done here. That said, I feel there’s an opportunity waiting: SEOmoz links out to lots of other sites from this page, and most of those blogs would likely link back to the SEOmoz guide if sent a nicely written email. Incidentally, if you’re looking for a link from SEOmoz and have a top-notch guide for one of the sites which doesn’t have any resources attached, then get in touch! So long as you link back to the page
Also, BONUS TIP: while researching backlinks in this space I stumbled across the fact that Amazon authors can get links from Amazon.com. Check out Darren Rowse on Amazon and then compare to Rand Fishkin on Amazon, and you’ll see that Rand has missed an opportunity to get blog posts imported and hence get clean followed links from Amazon. Sweet!
Keyphrase: What is seo | Ranking: 3.9 | Impressions: 1,900 | Clicks: 210
Ranking URL: http://guides.seomoz.org/beginners-guide-to-search-engine-optimization (Note here that this URL isn’t reported in GWT, it’s the old beginner’s guide URL which now redirects but the same keyphrase stands).
Here, I see the answer being a little easier than the above keyphrases. The term is less competitive and the title of the page doesn’t even mention "what is seo"! My actions would be to reword the title tag to be "What is SEO? The Free Beginner’s Guide to SEO from SEOmoz" and to mention "What is SEO" on the page at least once (currently it only mentions "what is search engine optimisation" and although Google knows they’re the same phrase I’d still like to see the exact phrase on the page somewhere). Also, there is no navigation link on the site to the beginner’s guide so slipping a few links into the next few blog posts with the anchor text "what is seo" will help boost the rankings for that phrase.
Keyphrase: Free seo tools | Ranking: 4.2 | Impressions: 480 | Clicks: 73
Ranking URL: http://www.seomoz.org/tools
The term "seo tools" is fiercely competitive, but "free seo tools" seems like it would be a lot easier to go after; in fact, SEOmoz is one of only 2 of the top 10 results which doesn’t mention "free" in the title tag of the page. This could be rectified easily, and in addition the page doesn’t even mention "free seo tools" anywhere on it. Personally, since this is something people search for, I’d be tempted to re-design the page to add a "Free SEO Tools" sub-header to differentiate between the free and PRO tools. That way it’s a good user experience and also gets the phrase on the page.
Keyphrase: Keyword research | Ranking: 19 | Impressions: 110 | Clicks: 12
Note here that the impression numbers are so low because the page is ranking on the second page. Not having a page in the top 10 here is a mistake for SEOmoz, I think (sorry, I mean opportunity!). The correct page is clearly the page on keyword research from the new beginner’s guide, and the best way to make it rank is to throw more internal links at it. Currently there are basically no internal links to that page except from other beginner’s guide pages; linking to it from elsewhere should get it ranking. One idea to help get internal links to pages of the beginner’s guide would be to automatically link to the keyword research page from any blog post within the keyword research category. That way the individual pages of the beginner’s guide would essentially pick up more deep internal links.
A warning here: I’m still not sure how much I trust this impression and click data. I wrote a post over on Distilled about how the reported numbers are way out from analytics numbers. To be honest, if I was doing SEO full-time for SEOmoz, I’d like to think I’d have better sources of keyphrase research, ranking data and visitor numbers, but Google Webmaster Tools’ search queries report can be a nice quick way to identify keyphrase opportunities for a site which you’re less familiar with, or for which you don’t have these things set up.