
Cryptography and Privacy: An Impactful Milestone Ahead

Two things are about to happen: (1) communicators yank control from professional cryptographers, (2) the advantage held today by smarter mathematicians and faster computers will be toned down. A new landscape is about to dominate cyber reality.

When you use Word, you control the contents of your writings, not Microsoft. But when you send your writings over the Internet in confidence to your reader, you do not control how much security protects the transmitted data. It is Microsoft, or another cryptographic vendor, who chooses one cipher or another. At best, you can switch ciphers, but you certainly cannot tweak the algorithm. You cannot decide to encrypt your data with AES-Plus, or AES-Prime. You are stuck with AES, or with any other similar cipher you may choose.

So what, you say. AES-256 enjoys a stellar reputation. No breach was ever published!


Bear with me, please. Imagine that the President of the United States summons you to head the National Security Agency. You find out that everyone trusts AES to protect their secrets, and it is your job to pry open secrets. You would clearly assign your best talent and your most powerful computers to the mission of cracking the cipher that houses what you are after. And while striving to crack it with everything you have got, you would add your voice to the choir that announces that "AES is unbreakable, safe, secure." After all, the only way to profit from cracking AES is to keep people trusting it as unbreakable.


In fact, if you did not do so, you would be negligent. Other national agencies are doing the same, little doubt. Has anyone cracked AES or any other trusted cipher? Those who know the answer are surely not talking. Whether AES is already an open book or not yet, this story points out a simple fact most of us choose to ignore: modern-day ciphers, all reliant on mathematical complexity, are only effective if their attacker is not smarter and not better equipped than expected. The history of cryptography is replete with adversaries who were smarter and better equipped than expected -- and carried the day.


Maybe so, you say, but what can be done? Every cipher one uses can be compromised by an attacker smart enough to overcome its cryptographic defense.


That is true if this cryptographic defense is formed by mathematical complexity. The greater the complexity, the greater the prospect that it can be overcome by advanced intelligence. So whether you use AES, Blowfish, IDEA, or RC4, you operate under the shadow cast by the threat of superior intelligence, which these days extends to artificial intelligence.


This mathematical dominance creates a cryptographic hierarchy: the NSA at the top, other national agencies closing in, then large corporations and criminal organizations, all ranked according to their mathematical talent and their sophisticated computing machines, with ordinary citizens thrown in at the very bottom.


It took a conceptual realignment and some new technology to come up with another modality, projecting security on a different foundation, one that would be in the hands of the communicator, the prime stakeholder in the security of the message. We are talking here about randomness: a means to generate data that is completely free of pattern, and absolutely unpredictable.


Randomness is not new to cryptography. In fact, it is fundamental to every cryptographic procedure. But there is a profound difference. Today's common ciphers resort to randomness to select a cryptographic key in the form of a bit string of known size. The way this bit string is handled by the cryptographic algorithm makes it prohibitive to opt for a larger key: the computational burden grows exponentially. As a result, we settle today for small random strings of fixed, known size as cryptographic keys.
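The fixed-key constraint described above can be sketched in a few lines. The 256-bit size below is dictated by the cipher's design, not by the user; asking the algorithm for more randomness than that is simply not an option:

```python
import secrets

# Conventional ciphers accept keys of one fixed, small size.
# For AES-256 the key is exactly 32 bytes -- the user cannot
# choose to "pump in" more randomness than the algorithm allows.
AES_KEY_BYTES = 32

key = secrets.token_bytes(AES_KEY_BYTES)  # cryptographically strong randomness
assert len(key) == AES_KEY_BYTES          # any other size is rejected by the cipher
```

The `secrets` module draws from the operating system's strongest available randomness source; the point of the sketch is only that the quantity of randomness is fixed by the cipher, not by its user.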


The new modality divorces the computational burden of both encryption and decryption from the size of the key, so the key can be as large as one desires. Furthermore, the new ciphers are 'decoy tolerant': the transmitted stream can carry meaningless data mixed with the meaningful data. The intended reader distinguishes between these two categories and decrypts only the meaningful bits, while an attacker must regard the entire flow of bits as potentially meaningful, and thereby faces a cryptanalytic barrier that grows larger and larger as more and more meaningless bits are involved.
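The decoy idea can be illustrated with a deliberately simplified toy, which is my own sketch and not any actual decoy-tolerant cipher: transmitter and reader pre-share a mask marking which positions in the stream carry meaningful bytes, and every other position is filled with fresh random decoys. The reader filters by the mask; an attacker, lacking it, must treat every byte as potentially meaningful.

```python
import secrets

def transmit(message: bytes, mask: list) -> bytes:
    # Place the message bytes at the mask's 1-positions;
    # fill the 0-positions with random decoy bytes.
    msg = iter(message)
    return bytes(next(msg) if m else secrets.randbits(8) for m in mask)

def receive(stream: bytes, mask: list) -> bytes:
    # The intended reader keeps only the masked (meaningful) positions.
    return bytes(b for b, m in zip(stream, mask) if m)

mask = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1]  # pre-shared: 6 meaningful slots
sent = transmit(b"secret", mask)             # 12 bytes on the wire, half decoys
assert receive(sent, mask) == b"secret"
```

Note what the toy captures and what it omits: the attacker's search space grows with the number of decoy positions, but a real decoy-tolerant cipher would also encrypt the meaningful bytes rather than send them in the clear.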


The key itself is pre-shared between the transmitter and the recipient; the decoy-tolerant procedure involves unilateral randomness, which is not pre-shared with the recipient. All in all, the divorce between key size and computational load, together with the dilution of the ciphertext with meaningless bits, creates an alternative cryptographic structure, one built not on mathematical complexity but on random extensity.


Today we have plenty of sources of high-quality randomness, including on a commercial basis. We also have the technology to store large quantities of randomness, and to communicate large bit loads over modern 5G networks. Say, then, that we have reached a milestone where randomness can replace mathematical complexity as the means to generate security. Unlike mathematical complexity, which is untouchable by everyday users, randomness is a push-button away. It is the user who determines how much randomness to deploy. Granted, large keys and long ciphertexts are inconvenient. This inconvenience is the price one pays for security. The important point is that the user is the one who decides how much inconvenience to put up with in order to achieve the desired level of security. The user is the stakeholder of the data. The user is the one to suffer if the communication is compromised, so the user is the one in whom the security should be vested. This new user-centric cryptographic modality shifts control from the remote designer to the engaged user, who is the rightful claimant thereto.


A natural question that comes up is how much security can be achieved by lavish use of randomness. The answer is quite surprising. In the middle of the 20th century, Claude Shannon proved that a cipher based on randomness of sufficient quantity, relative to its level of usage, can be mathematically secure -- that is, secure against any attacker, however smart and however well equipped. This proof remained academic because we did not have the conditions that allowed us to conveniently use sufficiently large keys to neuter any and all cryptanalytic assaults. But conditions have changed; technology has advanced. Today we can handle enough randomness with our cipher to achieve the Shannon standard and remove the shadow we currently operate under. Indeed, while today's modality, which is based on mathematical complexity, is under a persistent threat from a superior adversary, the new randomness-based cipher modality leaves it to the user to decide whether to pump in enough randomness to ensure mathematical security, or to use less randomness, appraising the retained security as sufficient to meet the expected threat. The user decides; the user is in control.
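Shannon's benchmark cipher is the one-time pad: when the key is truly random, at least as long as the message, and never reused, the ciphertext reveals nothing about the plaintext to any attacker, regardless of talent or computing power. A minimal sketch:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Shannon's conditions for perfect secrecy: the key is truly
    # random, at least as long as the message, and never reused.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))  # key as large as the message demands
ct = otp_encrypt(msg, key)
assert otp_decrypt(ct, key) == msg
```

The cost is exactly the inconvenience the article names: a fresh key bit for every message bit, which is why the proof stayed academic until storing and moving bulk randomness became cheap.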


This new modality is embryonic. Today there are but a few effective and potent ciphers that are randomness-powered, user-centric, and threat-responsive. The drive towards this milestone is pioneered by BitMint, which has been called the 'Tesla of Cryptography'. Much as Tesla pioneered the switch from gas to electricity, so does BitMint for the switch from complexity to randomness. The former takes us from one point to another; the latter takes our messages to where we want them sent -- in confidence.





