Caveat: I’m part of the Security and Risk Management Strategies (SRMS) team, and not part of Identity and Privacy Strategies (IdPS). Also, fair warning… this is an incomplete thought and a bit rambling…
I’ve been mulling over authentication (and, to a degree, privacy) over the last couple of weeks and wanted to toss a couple of thoughts out there, especially in light of Apple’s iPhone announcements this week, which included revealing their Touch ID fingerprint scanner for authentication. All of this further combines with my reading Aaron Pogue’s Ghost Targets series, in which he imagines a future where humanity has opted into a low-privacy, high-assurance total-surveillance society for the up-sides (I’ll leave the privacy side of this debate for another day ;). But enough caveats and preamble… let’s get to the good stuff!
It seems to me that in the not-too-distant future, we will be able to realize the next generation of authentication: a future in which we’ll be able to establish what I call “strong positive ID.” Such a feat will be accomplished not through some magically super-strong single factor, but by shifting toward a risk-based analysis (a confidence score) computed across the many factors that are simultaneously available, considering as many as possible (or as appropriate) to fix identity with enough confidence to match the sensitivity of the interaction or transaction.
Tying this into both the iPhone and Pogue’s vision of a voluntarily accepted total-surveillance society, consider this: with the fingerprint scanner, the iPhone can now potentially authenticate you based on fingerprint, facial recognition (potentially in both visible and non-visible light spectra), iris scan, voice print, GPS-based location, usage patterns, and the good old-fashioned password/PIN options. With a couple of small additions, it’s conceivable that other factors could be included, too, such as the recently revealed heartbeat-based authentication from Bionym. And let’s not forget that the quantified-self movement has already led many people to voluntarily track (and share) a LOT of this information.
Having all these authentication options is intriguing to me because it opens the door to many-factor authentication and risk scoring. Instead of relying on one, or maybe two, factors and hoping they’ll be enough, we can shift toward weighted confidences. Any one of those factors may (nay, does) carry less than 100% confidence that the person presenting it truly is who they claim to be; false-positive and false-negative rates, especially with biometrics, can be a particular pain. But now consider the picture in aggregate. Maybe I combine fingerprint with facial recognition with a PIN or trace pattern. Maybe the fingerprint doesn’t serve so much as an authentication factor, but as a signature (handwriting is already a dying art, no? Just don’t tell my kids! :). Maybe I’m using wearable tech, or I always have the phone on me and it can determine when it has left my possession, so that the device has a strong fix on who I am and can represent that dynamically and transparently to other systems (including if I’m suddenly not alone, or if it senses I’m under duress based on monitoring audio, heart rate, etc.).
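To make that “weighted confidences” idea a bit more concrete, here’s a minimal sketch (Python, with completely made-up false-accept numbers) of how per-factor confidences might be rolled up into a single identity-confidence score, under the simplifying assumption that the factors fail independently. In reality some of these signals are correlated, so treat it as back-of-the-envelope only:

```python
# Minimal sketch: roll per-factor false-accept rates up into one identity-confidence score.
# All numbers below are illustrative, not published error rates for any real sensor.

FACTOR_FALSE_ACCEPT = {
    "fingerprint": 0.0005,  # chance the sensor accepts someone else's finger
    "face": 0.01,
    "voice": 0.02,
    "pin": 0.001,           # rough odds of guessing within the allowed attempts
    "location": 0.10,       # weak signal: "is the phone where its owner usually is?"
}

def identity_confidence(observed_factors):
    """Return a 0..1 confidence that the user is who they claim to be,
    assuming the observed factors fail independently (a strong assumption)."""
    p_impostor_passes_all = 1.0
    for factor in observed_factors:
        p_impostor_passes_all *= FACTOR_FALSE_ACCEPT[factor]
    return 1.0 - p_impostor_passes_all

# One weak factor alone isn't very convincing...
print(identity_confidence(["location"]))                         # 0.9
# ...but stacking a few modest factors gets very strong, very fast.
print(identity_confidence(["fingerprint", "face", "location"]))  # ~0.9999995
```

The point isn’t the particular math; it’s that no single factor has to be perfect when several imperfect ones can be weighed together.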
If you start from the assumption that you can now fix an identity with reasonably high confidence, then everything else having to do with access and authorization quickly falls into place. Moreover, this allows us to make better risk-based decisions. If I’m buying a cup of coffee, then the required confidence level for the transaction is fairly low, and even more so if the transaction occurs at the same place every (week)day around the same time and my trip originated from my known place of residence.
On the flip side, if I’m trying to make a large purchase or get access to a sensitive system at work, then the confidence level that I am who I claim to be must be commensurately higher. The beauty of this “many factor” future, though, is that simply having the device on or near you can help lock in your identity with a very high level of confidence. And even if the confidence level slips, it will soon be trivial to refresh it at a higher level, such as by picking the phone up and looking into the camera while holding your thumb over the fingerprint scanner.
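Sketching that out, again with invented thresholds and a made-up decay rate: imagine each kind of transaction demands a minimum confidence, and the device’s confidence in its holder decays over time until it gets refreshed by sampling another factor. None of this reflects any real product; it’s just one way the mechanics could hang together:

```python
import time

# Illustrative thresholds: how much identity confidence each kind of
# transaction demands before it can proceed without extra prompting.
REQUIRED_CONFIDENCE = {
    "buy_coffee": 0.80,
    "large_purchase": 0.999,
    "sensitive_system_access": 0.99999,
}

class IdentitySession:
    """Tracks the device's confidence that its holder is the enrolled user.
    Confidence decays over time and is refreshed when new factors are sampled."""

    def __init__(self, confidence, decay_per_hour=0.05):
        self.confidence = confidence
        self.decay_per_hour = decay_per_hour
        self.last_update = time.time()

    def current_confidence(self):
        hours = (time.time() - self.last_update) / 3600.0
        return self.confidence * (1.0 - self.decay_per_hour) ** hours

    def refresh(self, new_confidence):
        # e.g., the owner picks the phone up, glances at the camera, and touches
        # the sensor; feed in the resulting score from the earlier sketch.
        self.confidence = new_confidence
        self.last_update = time.time()

    def authorize(self, transaction):
        return self.current_confidence() >= REQUIRED_CONFIDENCE[transaction]

session = IdentitySession(confidence=0.95)
print(session.authorize("buy_coffee"))               # True: low bar for coffee
print(session.authorize("sensitive_system_access"))  # False: needs a refresh first
session.refresh(0.9999995)                           # thumb + glance at the camera
print(session.authorize("sensitive_system_access"))  # True
```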
BTW, as luck would have it, MasterCard has already been thinking about risk-based authentication in the context of their “Three Domain (3-D) Secure” protocol. In the cited document, they talk about evaluating transactions against a variety of factors and then making a snap risk decision about whether the transaction has been adequately authenticated or whether additional authentication steps will be required, based on a contextual risk assessment. I expect this sort of practice to become more ingrained and automatic, just as I think strong positive identification will become implicit, transparent, and automatic. That is to say, quite literally, it might start happening without you even realizing it at first, aside from the occasional self-enrollment prompt when you get a new device or service.
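I don’t know the internals of MasterCard’s implementation, but the general shape of that kind of contextual approve / step-up / decline decision might look something like this (the signals, weights, and thresholds here are all invented for illustration):

```python
def transaction_decision(confidence, amount, location_matches_history, merchant_seen_before):
    """Toy contextual risk check: decide whether the current authentication is
    adequate, whether to ask for another factor, or whether to decline outright."""
    # Contextual risk, driven by the transaction itself rather than the identity.
    risk = 0.4 if amount > 500 else 0.1
    risk += 0.0 if location_matches_history else 0.3
    risk += 0.0 if merchant_seen_before else 0.2

    # Confidence in the asserted identity offsets the contextual risk.
    residual_risk = risk * (1.0 - confidence)

    if residual_risk < 0.05:
        return "approve"   # existing authentication is adequate
    elif residual_risk < 0.25:
        return "step_up"   # prompt for another factor (PIN, fingerprint, glance)
    else:
        return "decline"

# My usual coffee shop, strong identity fix: sails through.
print(transaction_decision(0.999, 4, location_matches_history=True, merchant_seen_before=True))     # approve
# Big purchase, unfamiliar merchant and location, so-so confidence: ask for more.
print(transaction_decision(0.80, 2000, location_matches_history=False, merchant_seen_before=False)) # step_up
```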
Much, much, much more can and should be said about this… From an SRMS perspective, it means that now is the time to start thinking about how to apply risk-based scoring to authentication and authorization in your environment. It also means you should probably start talking to the good folks in IdPS to get a feel for current and emerging technologies. This puts BYOD in an interesting light, too. Consider, if you will, that BYOD may in fact be the gateway to strong positive ID, and that you may no longer have to worry about authenticating people yourself, but simply about gauging the confidence of the asserted ID and linking it into your environment. It probably means that federation will quickly become incredibly important. And then there are the privacy concerns… which may or may not prove to be significant issues over time, depending on how technologies evolve along with cultural values.
What do you think? Are you ready for a world where you’re always positively identified and tracked? How much control should you be able to assert over that data, or will you even have a choice in the end?