Spam is now passing the Turing Test
The Turing Test is supposed to be a way to distinguish a machine from human intelligence. When I say spam is passing the Turing Test, I mean it's difficult or impossible for me to tell, looking at some of the spam I get, whether a comment on something I wrote was typed by a human or by a spam bot.
I don't know what the spammers are up to, but I have a couple of guesses.
Some of the spam targets certain kinds of common comments. There are a lot of posts on the internet about the Xbox 360's Red Ring of Death problem, for example. A spam bot can search for blog posts that mention it, and then post a spam comment written from the perspective of a user who had the same problem and found a solution. At a glance, it's hard to tell whether a comment like that is spam or not.
And there's other spam that really is commenting on what I wrote, but not in a deep or useful way, and which links to a personal blog that in turn links to spam. Again, there's no way to tell at a glance whether it's spam. If it's autogenerated, those bots are smart. But I think it's more likely that some spammers are simply employing people, somewhere labour is cheap, to write spammy comments.
And in that case, of course the spam passes the Turing Test, because it is written by humans.
Unfortunately for bloggers, this means every comment needs to be scrutinized. It takes time to read them, click the links, and make sure a comment is valuable before approving it. I don't think this is something we're going to be able to automate. It's become part of the cost of blogging.