Ramblings on Identity Verification in the Future

Recently I read a post on Hacker News (https://news.ycombinator.com/item?id=18180153) about how a PSN account got poached by another user. Around the same time, I got a too-good-to-be-true email and questioned whether it was generated by a computer. Because AI has gotten really good…

This led me to question how identity verification will be done in the future. So far, the surest way seems to be two-factor authentication: any suspicious behavior on a website, such as logging in from a highly unusual location, triggers a second check, usually via SMS.
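Authenticator apps do the second factor without the SMS channel at all. Here's a minimal sketch of the TOTP algorithm (RFC 6238) they use, assuming a shared secret was established with the server at enrollment:

```python
import hashlib
import hmac
import struct
import time


def totp(secret, t=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238): HMAC the current
    30-second time counter with the shared secret, then dynamically
    truncate the digest down to a short numeric code."""
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides derive the code from the secret and the clock, nothing secret ever crosses the wireless network, which sidesteps the SMS-interception problem entirely (though not a stolen phone).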

For phone calls, it’s usually done with PINs, Social Security numbers, addresses, and so on.

But all of this information can be compromised. Even SMS can be hijacked: SIM-swapping and SS7 attacks have been used in the wild to intercept verification codes.

With the advent of machine intelligence, and compromises of our data, will there be a surefire way to guarantee that the person using the device or on the phone is who they claim to be?

After all, lyrebird.ai can already mimic your voice, and there are projects like deepfakes that can replace your face. Sure, they are not generated in real time, but they could be with sufficient computing power. For example, if it takes a human 300 milliseconds to respond, then a computer with enough power could generate a response and time it to land at exactly 300 milliseconds. Even if you establish a connection between two parties, time the responses, and set the bounds so tight that it’s near impossible for computers to process and relay the information back and forth, what if the connection was hijacked from the beginning? (Side question: how would one figure out the absolute lower bound on a computation’s time? Is this impossible?)
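The timing idea above can be sketched as a challenge-response loop: send a random challenge and reject any answer that arrives too slowly. The `respond` callback and the echo-back "protocol" here are placeholders of my own; a real scheme would demand a signature over the challenge, not an echo:

```python
import os
import time


def timed_challenge(respond, bound_s):
    """Send a random challenge and require a correct answer within
    bound_s seconds. Echoing the challenge back stands in for a real
    response; the point is only the elapsed-time check."""
    challenge = os.urandom(16)
    start = time.monotonic()
    answer = respond(challenge)
    elapsed = time.monotonic() - start
    return answer == challenge and elapsed <= bound_s
```

The catch the paragraph already points out: if an attacker sits on the line from the start, they can pass the timing test themselves, so latency bounds alone prove speed, not identity.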

The only sensible solution I can come up with for now is that the captured data itself must be impossible to fake. For example, if I am speaking through a phone, the voice recording that I transfer over the network has to be guaranteed to be from me, and guaranteed not to have been tampered with.

First, there must be some kind of coding scheme embedded in our devices that guarantees the captured data is authentic. No more of this metadata that can be easily changed by programs when you capture photos, recordings, etc. It has to be tamper-proof. Maybe, via public/private key cryptography, we can engrave the private key into the device, in silicon or something, and everyone would have access to the public key. The device would sign everything it captures with the private key; anyone with the public key could then verify the signature, and the private key, hopefully held by only that one device, would guarantee where the data came from.
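The sign-then-verify flow looks roughly like this. The key pair below is a textbook-sized toy RSA pair purely for illustration; a real device would keep something like a 2048-bit RSA or Ed25519 key in a secure element:

```python
import hashlib

# Toy RSA parameters (far too small for real use -- illustration only).
P, Q = 61, 53
N = P * Q   # public modulus (3233)
E = 17      # public exponent, known to everyone
D = 2753    # private exponent, "engraved in silicon" on the device


def sign(data):
    """The device signs a digest of the captured data with its private key."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(h, D, N)


def verify(data, sig):
    """Anyone holding the public key (N, E) can check that the data
    really came from the device and wasn't altered in transit."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(sig, E, N) == h
```

Changing even one byte of the recording changes the digest, so the old signature no longer verifies, which is exactly the tamper-evidence the paragraph is after.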

Now the question is how to protect that key, which leads me to wonder if there’s some way we can use our biological features to generate a consistent key. Can we take some biological signal that cannot change no matter how much we change, feed it into some function, and always get the same private key out, guaranteeing that this person is who they claim to be? Who knows. Maybe we can measure our telomeres at that instant, down to the nearest microsecond? I have no idea…
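The "some function" part at least is well understood: a key derivation function turns any stable input into a fixed-size key. The sketch below assumes a perfectly stable biometric reading, which is the hard, unsolved part; real biometric signals are noisy, so a fuzzy extractor with error correction would have to sit in front of this step:

```python
import hashlib


def derive_key(biometric_reading, salt):
    """Derive a reproducible 32-byte key from a (hypothetically stable)
    biometric signal using PBKDF2. The same reading and salt always
    produce the same key; any change in the reading produces a
    completely different one."""
    return hashlib.pbkdf2_hmac("sha256", biometric_reading, salt, 100_000)
```

The same determinism that makes this convenient is also its weakness: unlike a password, a biometric can never be rotated once it leaks.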

The other thing I remembered was quantum entanglement. Maybe, just maybe, we can create quantum-entangled devices, somehow embed them in ourselves, and use them to communicate. (Strictly speaking, entanglement alone can’t transmit information, but quantum key distribution could at least guarantee that nobody eavesdropped on the channel.)

But now the other question is: how do you guarantee that the agent on the other side doesn’t get their device stolen?

Anyhoo, this is an interesting question to think about. How do we guarantee that the person we are talking to is who they claim to be? And how do we guarantee that the stuff made in our society is genuine and real, when computers keep getting better and better?