Slate.com, “It’s time to move beyond those squiggly letter tests”:
Any solution that could replace CAPTCHAs en masse would have to be free, work across a wide variety of platforms, and be easy for the average blogger or Web admin to install. One of the reasons that CAPTCHAs have spread like kudzu, I suspect, is that they’re so easy to implement—in some cases, as simple as checking a box on a site that helps you set up an input form. The more a bot-fighting algorithm can insinuate itself behind the scenes, the better. In the meantime, we’ll all have to keep debating the eternal question: Is that a W, or is it a V and an I attached at the hip?
Why not just have a system like OpenID? Google knows you're a person: you send and receive a ton of Gmail every day; you make a bunch of human-like searches; you've been registered with them for years. Why don't they just let us choose to log in (like OpenID), and then send the website their estimate of how spammy we are, on a scale from 0% to 100%?
Indeed, Google doesn't need to be the only one doing this. Yahoo could do it too. So could Microsoft. Etc. Think OpenID.
The website you want to comment on could keep a list of providers whose recommendations it trusts. It tells your browser about the providers on that list, and then, if you decide you want to, you click a button in your browser (or, in early versions of this, just fill in a form) to give Google permission to send the website your spamminess rating.
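The handshake above can be sketched in a few lines. This is a toy, not a real protocol: the provider names, the score database, and the 0.5 cutoff are all invented for illustration, and the network round-trips are replaced with plain function calls.

```python
# Minimal sketch of the handshake, with made-up names throughout:
# the website advertises which reputation providers it trusts, the
# user opts in, and the provider returns a spamminess score
# (0.0 = definitely human, 1.0 = definitely spam).

TRUSTED_PROVIDERS = {"google.example", "yahoo.example"}

# Stand-in for each provider's private spamminess database.
PROVIDER_SCORES = {
    ("google.example", "alice"): 0.02,       # long-time, active account
    ("google.example", "spambot9000"): 0.97,
}

def request_rating(provider: str, user: str, user_consents: bool):
    """Browser-side step: only contact the provider if the user opted in."""
    if not user_consents:
        return None
    return PROVIDER_SCORES.get((provider, user))

def accept_comment(provider: str, user: str, user_consents: bool,
                   threshold: float = 0.5) -> bool:
    """Website-side step: trust the provider's score only if it's on our list."""
    if provider not in TRUSTED_PROVIDERS:
        return False
    score = request_rating(provider, user, user_consents)
    if score is None:   # no consent, or the provider doesn't know this user
        return False    # (a real site might fall back to a CAPTCHA here)
    return score < threshold

print(accept_comment("google.example", "alice", user_consents=True))        # True
print(accept_comment("google.example", "spambot9000", user_consents=True))  # False
```

Note that consent sits in the browser-side step: if the user never clicks the button, the website learns nothing at all, which is the whole point of making it opt-in.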
Google could do different things on their end, like figure that people who want to comment on a thousand pages in a day are probably spammers. As are people who just signed up and haven't done anything human-like yet. It could take months of work for a spammer to build one account that Google considers possibly human — and then that account gets labeled spam again after a single spamming spree!
There could also be a feedback loop: if a website ends up marking your comment as spam on its side, it could report that back to Google, which would also hurt your humanness rating.
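To make the last two paragraphs concrete, here is a toy provider-side scoring function combining those heuristics — comment rate, account age, and spam reports fed back from websites. Every threshold and weight is invented for illustration; a real provider would obviously use far richer signals.

```python
# Toy provider-side spamminess rating (all numbers invented).
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    comments_today: int = 0
    spam_reports: int = 0   # fed back by websites that flagged this user

def spamminess(acct: Account) -> float:
    """Return a score in [0, 1]; higher means more likely a spammer."""
    score = 0.0
    if acct.comments_today > 1000:    # commenting on a thousand pages a day
        score += 0.6
    if acct.age_days < 30:            # brand-new, nothing human-like yet
        score += 0.3
    score += 0.2 * acct.spam_reports  # each report from a website hurts
    return min(score, 1.0)

veteran = Account(age_days=2000, comments_today=3)
print(spamminess(veteran))   # 0.0

# Months of careful aging, undone by one spamming spree:
spree = Account(age_days=200, comments_today=5000, spam_reports=4)
print(spamminess(spree))     # 1.0
```

The asymmetry is the point: aging an account into "possibly human" is slow, but a single spree plus a few website reports maxes the score out immediately.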
This idea is sort of like the web-of-trust model that Semantic Web people sometimes talk about… except, as I see it, it has some chance of actually working.
Or am I crazy?