Soon, Only Bots Will Be Able to Complete CAPTCHA
In the ongoing saga of CAPTCHA cracks, progress tends to be incremental: cracks are released with success rates of one or two percent, and CAPTCHA products are quickly patched to defeat them. Not so with Monday’s news that AI startup Vicarious claims to have cracked most popular CAPTCHAs—including reCAPTCHA—with a success rate of over 90%. Since CAPTCHA-solving computer networks can make thousands of attempts per minute, even a success rate as low as 1% is considered a functional crack!
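To see why even a tiny per-attempt success rate matters, here's a quick back-of-the-envelope calculation. The attempt rate of 5,000 per minute is an illustrative assumption (the post says only "thousands"), not a measured figure:

```python
# Back-of-the-envelope: expected CAPTCHAs solved per hour by a solver network.
# The 5,000 attempts/minute figure is an illustrative assumption.

def solved_per_hour(attempts_per_minute, success_rate):
    """Expected CAPTCHAs solved per hour at a given per-attempt success rate."""
    return attempts_per_minute * 60 * success_rate

# A "weak" 1% crack still yields thousands of solves per hour:
print(solved_per_hour(5000, 0.01))  # 3000.0
# Vicarious' reported 90%+ rate at the same volume:
print(solved_per_hour(5000, 0.90))  # 270000.0
```

At that scale, the difference between a 1% crack and a 90% crack is one of degree, not kind: both defeat the CAPTCHA's purpose.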
In Vicarious’ video (shown below), their software scans various CAPTCHAs and identifies the letters they contain, often getting most or all of the letters correct on the first try. (And since many CAPTCHAs, including reCAPTCHA, only require that users get one of the two words correct, partial accuracy is often enough.)
Were Vicarious’ tool to be released into the wild, it could enable hackers and other nefarious actors to bring CAPTCHA systems worldwide to their knees. Luckily, that’s not going to happen here—Vicarious developed their CAPTCHA crack as part of a broader artificial intelligence system, and they have no plans to make it publicly available. But if a small company like Vicarious were able to crack CAPTCHA so effectively, how long before the spammers and scammers are able to as well?
The latest scam: creepy stock photo people who pop directly out of your monitor!
Of course, in all likelihood this crack won’t work for long: the CAPTCHA creators will update their CAPTCHAs to make them more difficult, and all will be well—or so they’ll claim. But in the war between the CAPTCHA-makers and the CAPTCHA-crackers, it’s us, the regular humans, who suffer. Every time CAPTCHAs are updated to become more effective at stopping bots and cracks, they become harder for humans¹. What do we do once bots are able to solve CAPTCHAs that look like this?
This humans vs. bots arms race is just one of the reasons we designed PlayThru to be different. Since PlayThru determines humanity by analyzing user interaction, we can increase (or decrease) security without making the games any more difficult to play. In fact, we develop our humanness-scoring algorithm using the same types of machine learning that Vicarious uses. Essentially, we’re letting the bots fight it out, while the humans go on with their lives unscathed.
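The interaction-analysis idea can be sketched in a few lines. Everything below is a hypothetical illustration—the feature names, weights, and threshold are invented for this example and are not PlayThru's actual algorithm:

```python
# A minimal sketch of interaction-based humanness scoring.
# Features, weights, and threshold are hypothetical illustrations,
# NOT PlayThru's actual algorithm.
import math

def humanness_score(features, weights, bias):
    """Logistic score in [0, 1] over interaction features (e.g. mouse-path
    curvature, drag-timing variance). Higher means more human-like."""
    z = bias + sum(w * features[name] for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights, as if learned offline from labeled game sessions.
weights = {"path_curvature": 2.0, "timing_variance": 1.5, "instant_drops": -3.0}

# Humans move the mouse in curved, irregularly timed paths; naive bots
# teleport elements with zero variance.
human_like = {"path_curvature": 1.2, "timing_variance": 0.8, "instant_drops": 0.0}
bot_like   = {"path_curvature": 0.0, "timing_variance": 0.0, "instant_drops": 1.0}

print(humanness_score(human_like, weights, bias=-1.0))  # high (human-like)
print(humanness_score(bot_like, weights, bias=-1.0))    # low (bot-like)
```

The key point the paragraph makes survives in the sketch: security lives in the scoring threshold and model, not in the difficulty of the game itself, so tightening one doesn't have to make the other harder.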
Score one for Team Humans!
1. Yes, we know about the recently announced reCAPTCHA update that uses other kinds of analysis to give likely humans easier CAPTCHAs. But this only raises the question: if they know we’re most likely human, why are they showing us a CAPTCHA at all? Besides, it’s only a matter of time before the bots figure out how to exploit these techniques and we’re back to badly distorted text.
In fact, some other CAPTCHA companies have had similar systems for a while. They attempt to guess whether or not you’re a human before showing you a CAPTCHA, and serve easier CAPTCHAs to suspected humans. And it sort of works, sometimes… but when it doesn’t, you get gobbledygook like this: