Tags Archives

You are currently viewing all posts tagged with World.

Posted by Kate Morris

Meta tags are the beginning of most SEO training, for better and worse. I contemplated how to introduce this topic, because we always hear the bad side of meta tags, namely the keywords meta tag. One of the first things dissected in any site review is the misuse of meta tags, mainly because they sit at the top of every page in the header and are therefore the first thing seen. But let's not get too negative: meta tags are some of the best tools in a search marketer’s repertoire.

There are more meta tags than just description and keywords, though those two are picked on the most. I’ve broken down the most used (in my experience) into the good, the bad, and the indifferent. You’ll notice that the list gets longer as we get to the bad ones. I couldn’t cover every possible meta tag, but there is a great Meta Tag resource you should check out if you’re interested in what else is out there.

My main piece of advice is to stick to the core minimum: don’t add meta tags you don’t need, as they just take up code space, and the less code you have the better. Think about it like this: your page code is a set of step-by-step directions for the browser. Extraneous meta tags are the annoying 200-foot line items in those directions that tell you to stay on the same road you were already on!

The Good Meta Tags

These are the meta tags that should be on every page, no matter what. Notice that this is a small list; these are the only two that are required, so if you can work with just these two, please do.

  • Meta Content Type – This tag is necessary to declare your character set for the page and should be present on every page. Leaving this out could impact how your page renders in the browser. A few options are listed below, but your web designer should know what is best for your site.
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
  • Meta Description – The infamous meta description tag serves one major purpose: to describe the page to searchers as they read through SERPs. This tag does not influence ranking, but it is very important regardless. It is your ad copy, and it determines whether the user clicks on your result. Keep it within 160 characters, and write it to get the user’s attention. Sell the page; get them to click on the result. Rand wrote a great article in 2007 on meta descriptions that goes into more detail.
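To make the ad-copy point concrete, here is what a head section might look like with just the two required tags in place (the description text and title are only samples, not a formula):

```html
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <meta name="description" content="The good, the bad and the indifferent meta tags: which ones belong in your header and which are just taking up code space." />
  <title>The Good, the Bad and the Indifferent Meta Tags</title>
</head>
```

Note that the description reads like ad copy and stays under 160 characters; that is the whole job of the tag.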

The Indifferent

Different sites will need to use these in specific circumstances, but if you can go without, please do.

  • Robots – One of the biggest misconceptions is that you have to have a robots meta tag. Let’s make this clear: in terms of indexing and link following, if you don’t specify a meta robots tag, search engines read that as index, follow. You only need to add meta robots if you want to change one of those two commands. So if you want to noindex the page but follow its links, you would add the following tag with only the noindex, as the follow is implied. Only declare what you want to differ from the default.
    <meta name="robots" content="noindex" />
  • Specific Bots (Googlebot) – These tags give a specific bot instructions like noodp (telling it not to use your DMOZ listing information) and noydir (the same, but for your Yahoo Directory listing). Generally the search engines are really good at this kind of thing, but if you think you need it, feel free. I’ve seen some cases where it’s necessary, but if possible, consider using the general robots tag listed above instead.
  • Language – The only reason to use this tag is if you operate internationally and need to declare the main language used on the page. Check out this meta languages resource for a full list of languages you can declare.
  • Geo – These meta tags, last I heard, are supported by Bing but not Google (you can target by country inside Webmaster Tools). There are three kinds: placename, position (latitude and longitude), and region.
    <meta name="geo.position" content="latitude; longitude" />
    <meta name="geo.placename" content="Place Name" />
    <meta name="geo.region" content="Country Subdivision Code" />
  • Source – The new kid on the block, this is a tag (really two tags) that Google is testing out (thanks for the tip from my coworker Justin Briggs). It’s mainly for news providers, so they can help the search engines understand who the original news source is and which ones are syndicating it. The news world and search world are trying so very hard to get along; I’m glad to see this one pop up.
  • Keywords – Yes, I put this in the indifferent list because, while no good SEO is going to recommend spending time on this tag, there is a small possibility it could help you somewhere. So please leave it out if you’re building a site, but if it’s already automated, there is no reason to take it out.
  • Refresh – This is the poor man’s redirect and really should not be used if at all possible; you should always use a server-side 301 redirect instead. I know that sometimes things need to happen now, but Google is NOT a fan.
  • Site Verification – Your site is verified with Google and Bing, right? Then who has the verification meta tags on their homepage? These are sometimes necessary when you can’t load the other forms of site verification, but if at all possible, try to verify another way. Google allows you to verify by DNS, external file, or by linking your Google Analytics account. Bing still only allows verification by XML file or meta tag, so go with the file if you can.
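Since the syntax for these varies, here is a hypothetical header fragment showing several of the indifferent tags side by side (all values are placeholders; only include the ones you actually need):

```html
<!-- bot-specific: skip the DMOZ (Google) and Yahoo Directory descriptions -->
<meta name="googlebot" content="noodp" />
<meta name="slurp" content="noydir" />
<!-- declare the page language -->
<meta http-equiv="content-language" content="en" />
<!-- the poor man's redirect: sends the visitor to example.com after 5 seconds -->
<meta http-equiv="refresh" content="5;url=http://www.example.com/" />
<!-- site verification (placeholder tokens) -->
<meta name="google-site-verification" content="your-google-token" />
<meta name="msvalidate.01" content="your-bing-token" />
```

Again, the advice above stands: if you can verify by file or DNS and redirect server side, leave most of these out.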

The Bad Meta Tags

Let me be clear: nothing bad will happen to your site if you use these. They are a waste of space though; even Google says so (and that was 5-6 years ago!). If you’re ready and willing, it might be time for some spring cleaning of your <head> area.

  • Author/Web Author – This tag is used to name the author of the page. It’s just not necessary on the page.
  • Revisit After – This meta tag is a command to robots to return to the page after a specific period of time. It’s not followed by any major search engine.
  • Rating – This tag is used to denote the maturity rating of content. I wrote a post about how to tag a page with adult images using a very confusing system that has since been updated (See the comments). It seems as if the best way to note bad images is to place them on a separate directory from other images on your site and alert Google.
  • Expiration/Date – Expiration notes when the page expires, and date is when the page was made. Are any of your pages going to expire? Just remove them when they do (but please don’t – keep updating content; even a contest can become an annual contest!). And for date, make an XML sitemap and keep it up to date; that is so much more useful!
  • Copyright – That Google article disagrees with me here, but look at the footer of your site. I would guess it says "Copyright 20xx" in some form. Why say it twice?
  • Abstract – This tag is sometimes used to hold an abstract of the content and is used mainly by educational sites.
  • Distribution – The distribution value supposedly controls who can access the document, and is typically set to global. If the page is open (not password-protected, like on an intranet), it’s inherently available to the world. Go with that, and leave the tag off the page.
  • Generator – This is used to note what program created the page. Like author, useless. 
  • Cache Control – This tag is set in hopes of controlling when and how often a page is cached in the browser. It’s best to do this in the HTTP header instead.
  • Resource Type – This is used to name the type of resource the page is, like "document." Save yourself time, as the DTD declaration does it for you. 
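If you are doing that spring cleaning, these are the kinds of lines you can safely strip from your <head> (sample values shown only so you recognize them in the wild):

```html
<meta name="author" content="Jane Doe" />
<meta name="revisit-after" content="7 days" />
<meta name="rating" content="general" />
<meta name="expires" content="never" />
<meta name="copyright" content="Copyright 2010 Example Inc." />
<meta name="abstract" content="A short abstract of the page." />
<meta name="distribution" content="global" />
<meta name="generator" content="WordPress" />
<meta http-equiv="cache-control" content="no-cache" />
<meta name="resource-type" content="document" />
```

Deleting every one of these changes nothing for the search engines; it just slims down your code.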
Stock Photo by Shutterstock


SEOmoz Daily SEO Blog
Forbes AdVoice, a new Forbes editorial strategy that does away with the traditional barriers between advertising and editorial content:

The pitch is this: We’ll sell you a blog, and your content will live alongside that of Forbes’ journalists and bloggers. This isn’t the “sponsored post” of yore; rather, it is giving advocacy groups or corporations such as Ford or Pfizer the same voice and same distribution tools as Forbes staffers, not to mention the Forbes brand.

“In this case the marketer or advertiser is part of the Forbes environment, the news environment,” Mr. DVorkin said.

If that stuff has legs & spreads across most of the major media sites, then Google’s “authority first” relevancy algorithm strategy is dead.

Google has always considered paid links bad (as Forbes certainly knows) but as paid content spreads how will Google fight it? And if that content contains links then is it still a paid link? Will Google once more end up purging the payola?

The other question is … when media has tons of press releases alongside the articles, what value add is there for consumers to pay attention to the media? And if the media teaches advertisers to create their own media, won’t many of those advertisers do so on their own websites & cut the mainstream media out of the loop?

SEO Book.com – Learn. Rank. Dominate.

Anyone over the age of 30 who spends any amount of time online has to be aware of the gradual erosion of online privacy and how we are slowly moving to a point where our online life has become opted-in by default.

… sacrifice your privacy on the altar faster than an Aztec Priest could ever rip out your still-beating heart …

A disturbing trend among people in the valley and other technology-centric locations is the subjugation or near-complete surrender of one’s work, one’s copyright, intellectual property, and sometimes complete self to the great computer singularity (a concept I am shamelessly stealing from Jaron Lanier). All of your work and data becomes part of the collective sum of information. The value of the individual is subservient to the singularity of data. Your work, even if it’s under copyright, is used and often abused in mashups by others building and growing this singularity. Your sense of self-worth and individuality are, as Mark Zuckerberg would have you believe, not your own: they belong to the collective sum of human knowledge. And if he sells it all to the highest bidder, well, you shouldn’t have done it if you didn’t want that to happen.


… Google will find out about it (when they invade your privacy by snooping your wifi networks, emails, passwords …

This isn’t just a Facebook-only trend. Google has been opting you into things automatically whenever they wanted to for years; in fact, at this point it’s standard operating procedure. If they opt you into a program that puts your life in jeopardy with your abusive ex-husband … oops! Sorry, our bad. We’ll try harder next time to not invade your privacy. But really, we are working towards this singularity of storing all the world’s information. And not to worry: our leader Eric Schmidt says you just shouldn’t do things you don’t want online, like have an extramarital girlfriend. Don’t worry – Google will find out about it (when they invade your privacy by snooping your wifi networks, emails, passwords, and lord knows what else).

Now bloggers, who often fancy themselves reporters, are considering your tips as opt-in by default. Send them a tip and, unless you explicitly tell them otherwise, they will sell you down the river almost as fast as Mark Zuckerberg will. When I grew up, we were taught the value of trust, and confidentiality was the normal rule when dealing with the press and reporters. That’s not the case anymore. Silicon Valley wingnuts, who have a zealot-like, almost religious devotion to building and becoming part of the computer singularity they are constructing, will sacrifice your privacy on the altar faster than an Aztec priest could ever rip out your still-beating heart and spill your blood on the steps of a ziggurat.

Welcome to the age of enlightenment and progress …!

This post originally came from Michael Gray who is an SEO Consultant. Be sure not to miss the Thesis WordPress Theme review.

The Loss of Privacy in An Opt-Out World

Michael Gray – Graywolf’s SEO Blog

I’ve long stated that I prefer full feeds over partial feeds. Now that I’ve spent the past few weeks using an iPad, I feel even more strongly that full feeds are the way to go and that offering partial feeds is an obstacle to getting your posts read by as many people as possible.
Two sites I read on a regular basis that offer partial feeds are Search Engine Land and Wired Magazine. I like both websites and read them almost daily, using either Google Reader or the NewsRack iPad app. In the interest of full disclosure, I am also a regular columnist for Search Engine Land.

As use of blackberries, iPhones, iPads, android, smart phones, and other Internet consumption appliances increases, I think it’s time that publishers rethink the use of partial feeds …

My typical routine includes scanning the list of feeds 1-3 times per day. If an article is interesting and something I want to read, I’ll send it off to Instapaper to read on the iPad while I’m at the gym later that day or (if absolutely necessary) on the laptop. The iPad has really affected the way I consume. It’s much easier to read on the iPad than it is on a laptop or desktop computer.

When I encounter a partial feed, it’s problematic because I have to send it through a middleman service provided by Google, which scrapes the content and provides a “light” version. You can see an example at this URL or in the screen shot below.

As use of blackberries, iPhones, iPads, android, smart phones, and other Internet consumption appliances increases, I think it’s time that publishers rethink the use of partial feeds. Providing content that has barriers to consumption isn’t a smart long term solution.

I think it’s time for publishers to rethink using partial feeds as consumption habits change …

The most common argument revolves around advertising and not being able to include it in the feeds. To be honest, this really isn’t a valid argument. There are plenty of off-the-shelf solutions available right now for free. You can use a feed footer to randomly insert advertising links at the bottom of each post. Want the ads at the top? Use the RSS Footer plugin and you’re all set. If you need a more sophisticated solution, you should have enough of a budget to build a custom plugin.

The next biggest argument is that the posts will get scraped. Getting scraped sucks but, to be honest, it’s a non-issue most of the time, since Google is pretty good at figuring out the original. They aren’t perfect, but they are right more often than they are wrong. Lastly, in most cases getting scraped actually works to your advantage.

Another solution would be to turn full feeds into a revenue-generating opportunity. You could offer partial feeds for free and publish full feeds using a subscription model. Give each subscriber a unique feed URL that redirects to the full feed published at a secret URL; if the subscription has expired, redirect to the partial feed instead. Change the full feed’s secret URL every month to keep people from sharing it or retaining access after their subscription expires. Concerned about people republishing? Embed a unique identifier in each feed in the form of a tracking bug. It’s not a perfect or foolproof solution, but it’s a big step forward.

I think it’s time for publishers to rethink using partial feeds as consumption habits change and devices allow content to be read in new ways. Publishers have to adjust and make changes.

PS: I’ve read a lot of reports about mobile consumption and, unless you are delivering rich media, you should consider providing a “lite” or “mobile” version. If you view this website on an iPhone, iPad, BlackBerry, or even a Wii, you’ll see a slightly different version. I use the WPtouch plugin with some custom user agent settings, and it works pretty well.
Creative Commons License photo credit: Yutaka Tsutano


A Look at Full and Partial Feeds in an Increasingly Mobile World


Michael Gray – Graywolf’s SEO Blog

A lot of people who are well known as SEOs spend too much time on self promotion and not enough time on business development. BTW I would classify myself as being in that camp, though I have been slowly migrating since meeting my wife ;)

So much of SEO stuff is sorta ego in place of performance IMHO. And the problem when you hire top SEOs is that even if they have a strong brand and do great work on their own sites, the market pricing for services tends to be so dysfunctionally under-priced that…

  • it is mostly an exercise in back patting to even do any client services after you have a good amount of capital, cashflow, and leverage online
  • even if you think you are hiring one of the best SEOs you still rarely get to work with them because the people who are out there being really well known are by and large lead generation tools for the company, and the bigger the company is the more likely you are to have an intern servicing your account

Getting serious cashflow out of servicing the SEO market is akin to squeezing water out of a rock, especially when compared against running your own websites.

To me, the measure of an SEO’s success is not in their knowledge, but in their ability to leverage their knowledge to build cashflow. I know money isn’t everything, but we live in a world where the algorithms grow more complex every day. So each day you are working for less than your market value is a day closer you are to being broke!

Spamming and jamming can get you some paydays, but it’s not easy to *consistently* pull down 7 or 8 figures a year in profit if you are not building at least 1 or 2 properties with serious staying power and momentum behind them.

Given the complexity of SEO and the lack of liquidity in the SEO market, I think that by and large the best SEOs who generate the greatest profits derive most of those profits from publishing. With that in mind, I thought I would highlight some of the people I would view as top SEOs (and why).

Danny Sullivan

Few people have Danny’s knowledge about the history of and trends in search. Even fewer have that type of knowledge while being accessible. And even fewer yet would have been able to put a decade in building up momentum for a brand and website in the industry, stop, start over from scratch, and compete against what they had built for a decade.

Imagine the strongest site you have, giving it a decade of effort, and then one day trying to start from scratch competing directly against it with a similar business model. And yet he pulled it off.

Greg Boser & David Naylor

Greg is probably the first name that comes to mind when someone says “old SEO” (yes even before Bruce Clay). His knowledge is much like Danny’s in being rich with historical context. The thing that Greg has done to make consulting actually worth doing is tie payment to performance. Doing SEO in that manner is like becoming an affiliate, but one with few competitors and a huge advantage in the marketplace.

Dave is the UK version of Greg (or maybe Greg is the US version of Dave?), and they have done some successful projects together for some of the biggest brands in the world.

Stephan Spencer

Stephan Spencer branded himself as being an expert at ecommerce SEO. And, rare amongst SEOs, he has the technical chops *and* the marketing skills to sell to big companies (speaking their language & touring the world speaking at dozens of conferences each year).

His company, NetConcepts, built a software program which is almost as sweet as cloaking would be (if you could get away with doing it constantly with no risk), but partnered with the right kinds of (big brand) companies and branded their GravityStream solution appropriately, such that it was never viewed by Google through a negative lens. This created a business model where they could get paid based on performance (like many affiliates do) but be paid for the performance of the core brand website! :D

NetConcepts was sold to the SEM company Covario, which will be able to benefit from tying the GravityStream technology to their predictive analytics and Google’s quick-indexing caffeine search results.

Patrick Gavin & Andy Hagans

As people, at this point I don’t really trust or respect them (and feel that those who do might be in for some eventual bad news). But as far as being efficient at running businesses, few can compare. Patrick took a gamble and built the Text Link Ads link brokerage into a company he was able to sell for mid 8-figures. And his latest venture in the SEO space was so bold as to call “ensure you are not buying any links” an advanced SEO tip. Meanwhile, on Andy’s personal site he recommends iAcquire for your link buying needs :D

Not content with sitting on the results from TLA, they invested the proceeds (and other investor funds) into building a domain portfolio that even Kevin Ham or Frank Schilling would admire. But they also turned those domain names into functional websites, and have kept cost structures low, while creating blogs with more top x lists than the rest of the web combined and sending out millions of “congrats” emails at potential link sources. The net result? They have built a lead generation business that has been rumored to be pulling in 8 figures a year.

Wherever there is an economic distortion in the economy leading to a large bubble you can bet these guys have at least a half dozen to a few hundred sites, chipping away at the markets 24/7/365. And the only thing increasing faster than their scale is their efficiency!

Matt Cutts

I always hate when I see Matt Cutts listed on top SEO lists and think “hey, he is not even an SEO.” But how many SEOs have seen Google’s source code? How many have written a good chunk of it? As one of the top few search engineers at Google, Matt not only has a pulse on what is changing with the web, but he constantly tracks & battles the evolution of spam. His knowledge and experience allow him to look at a search result and spot the algorithmic weaknesses & exploits at a glance.

Further, Matt Cutts is better at public relations than 99% of public relations experts are. He is able to constantly promote Google products and engage in issue shaping while rarely being called out for it. And he rarely makes *any* mistakes on the public relations front, even when defending some of Google’s most bogus & hypocritical policies.

Imagine if your company had a b/s slogan of “don’t be evil” while operating with the above strategy. And yet he somehow manages to make it work.

Jason Callus Anus

Imagine entering an industry pulling in attention by calling everyone in the industry a bunch of scumbags – stating that you will clean things up through the use of manual intervention. Then imagine using the economic downturn to fire almost all your editorial employees and leveraging your built-up domain authority to create a low-quality automated general-purpose web scraper, which stuffs Google’s index with your own search results (heavily wrapped in ads). And then imagine link farming to build authority, then using the leverage of that platform to start selling SEO services to corporate clients & selling links!

When Matt Cutts described scraper sites a few years back, he said they were “shoot-on-sight”. And yet Jason’s crappy site keeps gaining traffic while almost never adding any value anywhere.

Whenever I think of Mr. Anus, I picture a used car salesman who moved to the state which doesn’t have a lemon law just so he could get the enjoyment of duping people with broken cars. And yet somehow he manages to pull it off. For public relations brilliance he gets a +1. And the same goes for claiming ignorance of SEO and claiming to be anti-spam so he can get away with passing his spam garbage off onto everyone else while rendering Google’s spam team flaccid.

Richard Rosenblatt

In 1999 Richard Rosenblatt was able to sell iMall (have you ever heard of it?) for over a half-billion dollars. He then sold MySpace near the top for $580 million. Trying to strike gold once more, he formed Demand Media and bought eHow.com to build a search-arbitrage content farm. Once growth rates began to slow, he created a controversy by trying to legitimize his model in the media, building his site tons more links. He then used that platform as a success story to get other publishing websites to engage in profit-sharing partnerships where he posts articles on huge, trusted, authoritative domains like USAToday.com.

Now Demand Media is rumored to be gearing up for an IPO or sale:

Demand Media, a closely watched startup that mines online search engine data to generate thousands of videos and web stories a day, has hired Goldman Sachs to explore an initial public offering.

People familiar with the plans say the company could file for an IPO as early as August. Details have yet to be finalised, but the discussions involve pricing shares around November in an offering valuing the company at about $1.5bn.

A little-known fact in the SEO industry is that Richard is also the chairman of iCrossing, which is currently rumored to be for sale to Hearst Publishing for ~$320 million:

Under the deal, which is in the final stages of negotiations, iCrossing, one of the nation’s biggest independent digital-marketing shops, is likely to fetch about $325 million, plus bonus payments if it reaches certain targets, these people said.

One person familiar with the matter cautioned that iCrossing, which is based in Scottsdale, Ariz., could still decide to remain independent if it doesn’t attract the right price.

Nice side gig!

That guy flat out prints money. If he keeps it up, in a few years he might put Ben Bernanke to shame. :D

Honorable Mentions

Over the past few years certainly Jeremy Shoemaker, Brian Clark, and SugarRae have built up some nice empires – each with a vastly different approach. The Caveman is great at tying SEO metrics into real-world marketing advice, and has the cashflow to prove it. In terms of being great at building on the consulting model, Bruce Clay comes to mind. Tim Armstrong is tasked with turning around AOL, and if he is successful he would deserve a mention. I would also put Cygnus high on any SEO list, but he tends to be a bit shy and is not very boastful about what he has accomplished. John Andrews would make the list too, but then he doesn’t like lists! :D

SEO Book.com – Learn. Rank. Dominate.

Chris Saad is VP of strategy at Echo, the world’s leading provider of comment/conversation technology to Tier 1 publishers. He is also the co-author of the Synaptic Web Strawman, co-author of the Attention Profiling Markup Language (APML) specification, and co-founder of the DataPortability Project.

Last week, I wrote a post calling out Mark Zuckerberg for making a correlation between Facebook’s privacy issues and their altruistic pursuit of data portability.

Today I’d like to do something a little different. I’d like to examine some of the reasons why Facebook is finding it challenging to either a) implement data portability, or b) communicate about issues of privacy or data portability — and why ultimately, that might be OK, in the sense that it leaves room for the rest of the web to innovate.

I will examine the issue in three parts. These parts, I believe, make up the crux of any reasonable criticism of Facebook at this time.

Data Portability vs. Interoperability

First, Facebook tends to discuss data portability without acknowledging the significance of open standards and interoperability.

Data portability is not just allowing users to access their data. There is a silent “interoperable” before the phrase, meaning the data, and the protocols to get it, need to be based on open standards and be interchangeable. Letting people code against your API is not the same as building the API in such a way that others (e.g. social aggregators or networks) could participate in the transactions just as easily.

Facebook’s challenge, however, is that they are pioneering many of these interactions and can’t necessarily wait for standards to emerge or crystallize before acting.

Standards emerge after a good pattern gets established. In many ways, Facebook’s leadership in demonstrating these patterns at scale shows the way (and provides the market momentum and emotional fuel) for standards to emerge. See the OExchange announcement earlier this week, for example.

The challenge for Facebook here is to balance their need (and obvious appetite) for innovation with architectural choices that allow for open standards and interoperable protocols to be absorbed by their system. To show true commitment, they must also quickly adopt standards as they emerge.

Taking these steps would put them in good position to avoid the Internet-scale forces that often reject proprietary platforms as fast as they emerge. In addition to decisive action, if Facebook clearly acknowledged this to stakeholders, it would go a long way.

Will they make these decisions rather than pay lip service? That seems unlikely.

Social Contracts vs. Open Culture


Second, Facebook confuses privacy concerns with anger over a violation of their social contract with users, specifically with regard to people pushing back on the general goal of human openness.

The potential for social media to make the world a more transparent and connected place is real and significant. I don’t think that any rational person who seriously considers the trends could argue against the idea that people sharing more in public, and reading more about others, could reduce fear born from any sense of ‘us’ vs ‘them’.

I don’t think that anyone really challenges Mark Zuckerberg on this point when he makes it. They shouldn’t.

The challenge for Facebook, however, is they began with a social contract — a promise to their users — of a private place for sharing amongst mutually confirmed friendships.

To their credit as a company, they want to continue to push the boundaries, and innovate and evolve as quickly as the market. And the market has evolved. Thanks to Flickr, Twitter and others, public is the new default, and Facebook needs to keep up. For them to change direction towards these open defaults, however, they must declare this intention clearly and unequivocally.

The team at Facebook tries to talk about the need for an open and connected society, and has continued to make incremental steps in this direction. But they have really failed to put it in the context of their history and our future. They need to tell the story of a cultural trend toward openness and explain that while they started as a private place, their (new) unashamed goal is to move toward a public one.

Doing it in small steps without explaining the end goal or the reason, or even acknowledging the fundamental shift, is a big (and unnecessary) source of criticism.

Open Culture vs. Open Technology

Third, Facebook tends to use the word ‘open’ without drawing a distinction between culture and technology.

When the team at Facebook talks about wanting to make the world more “open,” I, and others who care about such things, hear them cloaking themselves in the mantle of Open Technologies when they are actually talking about open/transparent/public culture.

Trying to draw the distinction between Open Technology vs. Open Culture is difficult for Facebook, considering that their audience is not that of our little echo chamber but rather a very mainstream community that does not understand such nuance. So in some regards, glossing over these distinctions is understandable given their target market.

The problem, though, is that making the world a more open place using closed technologies all routed through a single company is not the way it can or should work.

A more interoperable, peer-to-peer method of achieving cultural transparency is critical for there to be true openness (of both kinds). There is an architecture that would allow each node (read: user/site/service) to be a first class citizen on the interoperable social web and to choose how public or private it is when it comes to sharing.

Facebook might know this, but their job is not to create an interoperable social web. They are the market leader — the clear winner — and as Dave McClure likes to say, “Open is for losers.” The nice thing about the web, though, is there are a lot of big, important, well funded and motivated losers who will ensure that Open will win in the end.

Facebook’s job, however, is to capitalize on their momentum and network effects to create maximum value for their shareholders. This is not ‘evil’ or even necessarily undesirable. Because, as with open standards, until someone shows the industry how it’s done (and frankly lights a fire under its butt), it’s often hard for us all to collectively imagine what the future might look like.

The challenge for Facebook — one they are more than capable of facing — is keeping up with the inevitable opening of the walls and the peering of the nodes. They are the most agile and innovative company at scale that has ever existed on the Internet. If anyone can do it, they can. The only question is, can the rest of the community execute as well?


In these three areas — Data Portability vs. Interoperability, Social Contracts vs. Open Culture, Open Culture vs. Open Technology — Facebook has made big moves and very high profile blunders. Their challenges are great and their ability to execute with ambition first, ask questions later and quickly clean up the mess is the very reason for their success.

Are these blunders a series of accidental missteps (a combination of ambition, scale and hubris) or a calculated risk to force their world view on unsuspecting users (easier to ask for forgiveness)? Only the executives at Facebook can ever truly answer this question.

What’s clear, though, is that their platform is tightly coupled with countless other websites and applications across the web, and their financial success is aligned with many influential investors and actors. At this stage, and at this rate, their continued success is all but assured.

But so is the success of the rest of the web. Countless social applications emerge every day and the rest of the web is, and always will be, bigger than any proprietary platform. Through its action and inaction, Facebook offers opportunities for us all. And in the dance between their moves and the rest of the web’s, innovation can be found.

The only thing that can truly hurt the web is a monopoly on ideas, and the only ones who can let that happen are web users themselves.

For more social media coverage, follow Mashable Social Media on Twitter or become a fan on Facebook

More Facebook resources from Mashable:

Why Facebook’s Privacy War Is Not Over
Why Facebook Must Get Serious About Privacy
4 Tips for B2B Marketing on Facebook
In Defense of Facebook
The Local Advertising War Will Be a Clash of the Internet Titans

Image courtesy of iStockphoto, alwyncooper


Tags: data portability, editorial, facebook, open technology, privacy, social contracts, social media


Call it whatever you like — football, soccer, the game with the black and white checkered ball that you kick around the field. Pick a name and get ready for the 2010 FIFA World Cup in South Africa. Every four years soccer fans go wild and neighborhood bars are packed to the brim in broad daylight with devoted viewers.

While the BBC and ITV apps that promise to stream the games and video highlights have yet to be released, there are still some great, free iPhone apps to satisfy your World Cup appetite.

1. World Cup Countdown

We know you’ve been counting the days until the non-stop soccer action. This app counts down the excitement by the days, hours, minutes and seconds until it all begins on June 11th. The app also has an interactive photo slideshow with a series of cool pictures of South Africa. You can click on the photos to learn more about each scene.

There are even interesting video and audio options, like a spectacular overview of the countryside and some local music and dance moves. It’s a great way to learn about the country hosting the games, especially if you are planning to attend in person.

2. ESPN 2010 FIFA World Cup

This app has just about everything you need to prepare for the tournament. With a full schedule, you’ll never miss your team in action. The app allows you to select your favorite team or teams so you can easily keep track of all the news surrounding them. It also has full biographies and stats for each player for all 32 teams, so you’ll know exactly who is playing the game.

Super-fans will appreciate the app for its extensive history of the tournament, which dates back to 1930, and includes an overview, trivia and controversies for each year. Sign in with your myESPN account to post comments about your team or participate in the “Fantasy Bracket Predictor,” where you can attempt to forecast the outcome of each World Cup match.

3. World Football Live!

This app gives you the latest news about all things football/soccer from BBC Sports, Yahoo! Euro Sports, and ESPN. News is updated in real time and you can bookmark your favorite articles or e-mail them to your friends. The app also lets users browse offline: once you’ve refreshed the news, you can read all the downloaded articles without a WiFi or 3G connection.

4. AP 2010 World Cup Coverage

The Associated Press has been covering the games since the very beginning and is committed to giving fans full coverage. Once the games start, the app will provide up-to-the-minute news from more than 100 journalists in South Africa who will be covering the action. It has a multi-language platform and users can choose to get their soccer fix in English, Spanish, French or Portuguese.

With customizable skins for each team, you can show everyone (at least everyone you show your phone to, anyway) who you are rooting for. There is an easy-to-follow schedule feature that shows which teams are playing and at what time. Also, a photo and video feature will be updated once the tournament starts. The app also has a “venue” feature that lists a bit of history and interesting facts about all 10 stadiums where the games will be played.

5. World Cup Factoids and History

If you really want to know what you’re talking about when you say that Germany is going to kick some butt, this is the app for you. With lists of every winner, host nation, defending champion and number of appearances since 1930, you’ll be able to make some informed predictions as to who will take home this year’s big title.

History buffs will swoon for the detailed chronicles of pre-tournament dates, starting with the world’s first international soccer match between Scotland and England in 1872. The app also connects you to news from BBC, ESPN and the official FIFA site. You can help spread World Cup fever (not that it really needs the help) by pressing the button that lets you tweet about the application.

6. World Cup Trivia Challenge Lite

Once you’ve brushed up on your facts and history, you’ll be ready for this game. Set on a soccer field, the lite version of this app has 50 questions that will test your soccer hooligan mettle. It’s a fast-paced game. With 90 seconds on the timer, you must answer questions like “How many nations appeared for the first time in 2006?”

Once you get three questions right, you score a “goal.” For every question you get wrong, your opponent (who isn’t actually answering any questions) scores a point. There are three stages to work through and it includes a penalty shoot-out for a tie game. You can even listen to the sounds of whistles, cheers and music from a real life game, or choose to turn them off.

Once you’ve mastered all 50 questions, you can upgrade to the paid version for $0.99, where 600 more questions are ready to test your fandom.

For more mobile coverage, follow Mashable Mobile on Twitter or become a fan on Facebook

More iPhone resources from Mashable:

Top 10 iPhone Apps for TV Fanatics
10 Great iPhone Apps for Growing a Garden
5 Fantastic Free iPhone E-book Reader Apps
10 Essential Money-Saving iPhone Apps
10 Best iPhone Apps for Dog Lovers

Image courtesy of iStockphoto, zentilia


Tags: ESPN, fans, football, iphone, iphone application, iphone applications, iphone apps, Lists, Mobile 2.0, MyESPN, soccer, South Africa, sports, world cup, world cup 2010