After more than a decade of full-time work in cryptography, I am sometimes challenged to summarize my work in a form "for mere mortals." While I can give a general overview of what I have done, most of that consists of fairly detailed improvements to existing technology. Seeing each improvement requires at least a general understanding of the technical problem which each improvement solves. Even academic cryptographers may have difficulty perceiving these issues since they do not address common academic problems.
No Ciphers for Sale, Just Technology and Consulting
I offer no current cipher programs, mainly because I am not happy with current computers as a security platform. The issue is not just Windows, but any system which is hidden or too complex for individuals to analyze and validate. It is unlikely that any cipher system could operate securely on an insecure computer (at least not to commonly expected cryptographic levels of security). There is also no point in ciphering when the plaintext itself is fundamentally at risk. Moreover, no machine connected to The Net can be considered secure, and it seems doubtful that any firewall could be certified to cryptographic levels of assurance.
Personally, I use one computer on The Net, and a different computer for my work. The work computer is not, and never has been, connected to The Net or even any LAN. Only the work computer can support secure information, and only very limited information is passed back and forth on floppy or CDR. Having two isolated computers is one way to build substantial security, but it is awkward.
My current crypto business consists of promoting my novel technology for license, and, of course, crypto consulting.
Cryptography is the art and science of hiding information while in storage or transit. The basic idea is to transform or encipher data into a different form (ciphertext), so that it seems to lack meaning to those who do not know how to transform it back. Normally, a cipher will support a plethora of different transformations, far too many to try them all, each selected by a different key value. The goal is to create a needle in a haystack, where the correct deciphering is just one of trillions of possible decipherings, all but one wrong.
Understanding cryptography can be difficult because it is fundamentally unlike anything else we design and build. Almost universally, we build something for an effect we can measure, and we can see how well we do. But cryptography "works" only to the extent that it keeps our information secret from opponents we do not know and who will not tell us when we fail. In fact, we can expect opponents to moan about how difficult it would be to break a cipher which they actually can break easily, to get us to use a cipher they can expose. In practice, cryptography is a contest, and the environment is deception and misdirection. That is completely different from what we normally find in science and technology, and calculated deception is something for which few are prepared.
After a decade and more in cryptography, I find myself increasingly distressed by the state of the field and many of those who supposedly represent it. Various examples exist:
By far the major issue in cryptography during the past couple of decades has been public key cryptography. With a conventional or secret key cipher, we use some key value to select an arbitrary transformation into ciphertext, and then use exactly the same key value to reverse the transformation. Anyone who enciphers a message also can decipher it.
In a public key cipher, there are two (related) key values: One can be made public so anyone can use that transformation, while the other key is kept secret. So anyone can encipher a message, but only the party holding the hidden private key can expose the enciphered messages.
A public key transformation is very, very slow, so most "public key" ciphers actually are hybrid ciphers. These ciphers use public key technology just to transfer one arbitrary value, which is then used as the key for a much faster secret key cipher.
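The hybrid structure can be sketched in a few lines. This is a deliberately toy illustration, assuming textbook RSA with tiny illustrative parameters (n = 61 * 53) and a toy keyed XOR stream standing in for the fast secret-key cipher; none of it is secure, and none of it is any particular product's design.

```python
import random

# Toy textbook RSA key pair (illustrative only; real parameters are
# thousands of bits and use randomized padding): e * d = 1 mod 3120.
N, E, D = 3233, 17, 2753

def wrap_session_key(k):
    # anyone holding the public key (N, E) can do this
    return pow(k, E, N)

def unwrap_session_key(c):
    # only the private-key holder knows D
    return pow(c, D, N)

def stream_cipher(data, key):
    # fast secret-key layer: toy keyed XOR stream (XOR undoes itself)
    rng = random.Random(key)
    return bytes(b ^ rng.randrange(256) for b in data)

# Sender: pick a session key, wrap it with the slow public-key
# transformation, and encipher the bulk data with the fast cipher.
session_key = 1234
wrapped = wrap_session_key(session_key)
ciphertext = stream_cipher(b"the bulk message", session_key)

# Receiver: unwrap the session key, then decipher the bulk data.
k = unwrap_session_key(wrapped)
plaintext = stream_cipher(ciphertext, k)
```

The expensive public-key operation touches only the short session key; everything else runs at secret-key speed.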
One problem with public key technology is that, if someone can intercept messages in transit, and also get the far party to use a new key (quite plausible, since public keys supposedly can be freely distributed), the person "in the middle" can expose messages without breaking any cipher at all. Thus, even a public key cipher which was mathematically proven unbreakable (which is probably impossible anyway) could expose information to a man-in-the-middle attack. Only public key ciphers have that weakness, and that is not a weakness I am willing to accept.
Basically I work on automated secret-key ciphers, which, of course, are major components in most public key ciphers anyway. But the well-known state of the art has come a long, long way from its origin in classic or hand ciphering.
Background: Modern Secret-Key Ciphering
Modern automatic ciphering benefits greatly both from the availability of computing machinery, and the resulting growth of information-processing knowledge. Thus:
Consequently, in addition to generalizing the older concepts like:
I believe I have made a few fundamental advances beyond the state of the art in cryptographic mechanisms. Since these mechanisms were intended to address fundamental problems in practical cryptography, they probably make no sense unless we first understand the problems.
To address the problem of stream cipher weakness, academic cryptography created ever more complex RNG designs. Since each RNG is just some computer state which is combined or mixed in some way to produce a new state value, the obvious idea was to have a multiplicity of RNG's, and mix them together in some way to gain exponential strength. Surprisingly, that almost never works (see: The Story of Combiner Correlation: A Literature Survey).
My take on the problem was different: I saw the negative outcome as the result of both exclusive-OR weakness and RNG weakness. Since RNG strength was already being addressed, my response was to look at the combining process, to see if that could be improved. At the time (around 1988), it was unclear whether unbiased, reversible, nonlinear combining could even exist.
Ultimately, I called my answer Dynamic Substitution. Dynamic Substitution can be seen as a modified form of Simple Substitution used to replace the conventional exclusive-OR stream cipher combiner.
In Dynamic Substitution, each current plaintext character is substituted through a table and becomes ciphertext as usual, but then the just-used substitution element is exchanged with some other element, as selected by the RNG sequence; that exchange is the novel step. In this way, the more often a particular transformation is used, the more often it changes.
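The mechanism just described can be sketched directly. This is a minimal illustration, not any shipped design: the 256-entry table, the use of Python's `random.Random` as a stand-in for a keyed cryptographic RNG, and the swap-selection rule are all assumptions for demonstration.

```python
import random

class DynSubEncipher:
    """Substitute each byte through a keyed table, then swap the
    just-used table entry with one chosen by the keyed RNG."""
    def __init__(self, key):
        self.rng = random.Random(key)      # stand-in for a keyed RNG
        self.table = list(range(256))
        self.rng.shuffle(self.table)       # keyed initial substitution
    def step(self, p):
        c = self.table[p]                  # ordinary substitution
        j = self.rng.randrange(256)        # RNG selects the swap partner
        self.table[p], self.table[j] = self.table[j], self.table[p]
        return c

class DynSubDecipher:
    """Mirrors the enciphering state by running the same keyed RNG,
    maintaining an inverse table to recover plaintext."""
    def __init__(self, key):
        self.rng = random.Random(key)
        self.table = list(range(256))
        self.rng.shuffle(self.table)
        self.inv = [0] * 256
        for p, c in enumerate(self.table):
            self.inv[c] = p
    def step(self, c):
        p = self.inv[c]
        j = self.rng.randrange(256)        # same swap as the encipherer
        self.table[p], self.table[j] = self.table[j], self.table[p]
        self.inv[self.table[p]] = p
        self.inv[self.table[j]] = j
        return p
```

Because the table changes after every use, repeated plaintext bytes generally produce different ciphertext bytes, which is the point of the construction.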
Dynamic Substitution is a keyed, nonlinear combiner with state. In addition to simply replacing exclusive-OR, it can be used in a sequence of combinings, or in a selection among different combiners, neither of which makes sense with exclusive-OR. Dynamic Substitution thus also enables various architectures which were previously unhelpful.
Also see the Dynamic Substitution section on my home page.
Dynamic Transposition has two parts: first, each plaintext block is bit-balanced, so that it contains exactly as many 1-bits as 0-bits; then the bits of each block are shuffled into a new order by a keyed RNG sequence, with each block permuted independently. The strength of the technique is in the fact that the ciphertext result could have been produced by a plethora of different shuffling permutations, even given the exact same plaintext. Known plaintext thus does not disclose the keyed shuffling sequence, which is what we are hiding. Dynamic Transposition is thus largely immune to conventional known plaintext attack in a way that exclusive-OR simply cannot be.
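A minimal sketch of one Dynamic Transposition round, assuming its two usual parts: bit-balance the block, then shuffle its bits with a keyed RNG. The balancing scheme here (simply appending padding bits of the minority value) is a simplification chosen for clarity, not the specific scheme of any published design.

```python
import random

def balance(bits):
    """Pad a bit list so it holds equal counts of 0s and 1s
    (simplified balancing: padding bits are appended)."""
    ones = bits.count(1)
    zeros = len(bits) - ones
    if ones > zeros:
        return bits + [0] * (ones - zeros)
    return bits + [1] * (zeros - ones)

def keyed_permutation(n, rng):
    # Fisher-Yates shuffle driven by the keyed RNG
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def encipher_block(bits, rng):
    perm = keyed_permutation(len(bits), rng)
    return [bits[p] for p in perm], perm

def decipher_block(ct, perm):
    # invert the permutation to restore the original bit order
    pt = [0] * len(ct)
    for i, p in enumerate(perm):
        pt[p] = ct[i]
    return pt
```

Since a balanced block is just a multiset of equally many 0s and 1s, an enormous number of distinct permutations map any given plaintext block to the observed ciphertext block, so seeing both reveals almost nothing about which permutation the RNG actually selected.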
Dynamic Transposition also offers a theoretical clarity which cannot exist in conventional block ciphers: conventional block ciphers cannot begin to key-select from among every possible huge emulated simple substitution table, because there are far, far too many potential tables, making a key which could select any of them far too large to use. In contrast, Dynamic Transposition can produce any possible bit-permutation, given only an RNG with large enough state. There is thus no anxiety about a particular selected set of transformations which may or may not have useful statistical properties for opponents. I see Dynamic Transposition as a useful basis for provable strength in practice (although I have no such proof).
See the 1991 Cryptologia article Transposition Cipher with Pseudo-Random Shuffling: The Dynamic Transposition Combiner, and the 2001 sci.crypt article clarifying an earlier discussion: Dynamic Transposition Revisited Again.
A conventional block cipher is nothing more than an emulated huge substitution table. The substitution must be computed as needed, because such a table would be far too large to construct and store completely. In my view, what was needed was a construction technique which could connect keyed, randomly-constructed modest-size tables to emulate a larger table. If we could do that efficiently, we could repeat it using the emulated tables, to get even larger tables, and thus eventually support huge block sizes. (Huge blocks reduce the probability that the same plaintext block will re-occur, thus perhaps avoiding the need for an initialization vector, and thus allowing out-of-sequence ciphering as can occur in network packet processing.)
The elemental Balanced Block Mixer (BBM) is a two-input, two-output mixing function which is reversible, and in which the mixing can be keyed. The implementation consists of two orthogonal Latin squares; the mixing may consist of keyed, explicit, nonlinear tables, or of simple linear computations when that is acceptable (for example, see: Fenced DES).
If we put two small blocks or values of data into a BBM, we get two values out, where each output value provably depends in a balanced way upon both of the input values. Moreover, the elemental oLs transformations can be substituted for the arithmetic computations in an FFT or Walsh transformation. By using FFT-like patterns, we can quickly mix the results from many small substitution tables such that each result provably depends upon every table, which is exactly what we need to build an emulated larger table. And, in marked contrast to conventional FFT or Walsh transformations, BBM computations do not expand the data range (see: ciphertext expansion).
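A linear BBM and its use in an FFT-like mixing network can be sketched as follows. The mixing pair X = 2a + 3b, Y = 3a + 2b over GF(2^8) is one known linear BBM construction; the reduction polynomial 0x11B used here is an assumption for illustration, and this particular mixing happens to be its own inverse.

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) with reduction polynomial 0x11B
    (one common choice, assumed here for illustration)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def bbm(a, b):
    """Linear Balanced Block Mixer: X = 2a + 3b, Y = 3a + 2b in GF(2^8).
    The two output maps form a pair of orthogonal Latin squares."""
    return gf_mul(2, a) ^ gf_mul(3, b), gf_mul(3, a) ^ gf_mul(2, b)

def mix(block):
    """Apply BBM butterflies in an FFT-like pattern so every output
    byte depends on every input byte (block length a power of two)."""
    data = list(block)
    span = 1
    while span < len(data):
        for i in range(0, len(data), span * 2):
            for j in range(i, i + span):
                data[j], data[j + span] = bbm(data[j], data[j + span])
        span *= 2
    return data

def unmix(block):
    """Run the butterfly levels in reverse order; each bbm() call
    undoes itself, so the whole network is reversed."""
    data = list(block)
    span = len(data) // 2
    while span >= 1:
        for i in range(0, len(data), span * 2):
            for j in range(i, i + span):
                data[j], data[j + span] = bbm(data[j], data[j + span])
        span //= 2
    return data
```

With 2, 3, and their combinations all nonzero in the field, a change on either input of any butterfly changes both of its outputs, so a single changed input byte propagates to every output byte of the network.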
The BBM structure thus allows us to build a large block cipher out of small substitution tables or S-boxes, which we then mix together. And, independent of whether or not we key-select the mixing, we can key-select the tables from among all possible tables. (Also see the Balanced Block Mixing area of my home page.)
One way to avoid the last block problem entirely is to dynamically adjust block size so that the last block is never very short and can be enciphered as a full block. The ability to adjust block size is novel; I personally originated the term "Variable Size Block Cipher" (VSBC) in 1995 (see: VSBC Newsgroup Discussion); the term was then picked up without attribution for more popular academic use.
VSBC's have the usual conventional block cipher characteristics, but can cipher blocks of virtually arbitrary size (beyond some minimum), to the byte, as dynamically selected at ciphering time. This occurs in the context of a "constant depth" process, which needs no additional computation per byte, even with huge blocks, so there is no motive to cipher data in small blocks. Even when plenty of message remains, VSBC blocks can vary in size to help resist cryptanalysis; by adding random amounts of random data (nulls), simply knowing the original plaintext is not enough to match it with the resulting ciphertext. Thus, VSBC's can avoid known plaintext attacks in ways that go beyond conventional block ciphering strength, so that some of the conventional assumptions of cryptanalysis do not apply. In summary:
Also see the Variable Size Block Ciphers section on my home page.
Analysis is central to what I do, and has been for all of my professional life. Analysis is understanding in depth, and is far more than just particular choreographed cryptographic attacks. Current attacks are the result of a quarter-century of research, mostly on conventional block ciphers. For my novel cipher designs, when conventional attacks are not appropriate, we simply do not have a similar depth of understanding, and we cannot present what we do not have.
Before Windows, in the days of DOS (the late 80's and early 90's), I did personally design, implement, and offer several Dynamic Substitution cipher systems for sale, including CLOAK2 and Penknife.
One can see the design of my 1993-1995 DOS-based CLOAK2 secret key stream cipher in The Cloak2 Design. The CLOAK2 cipher uses the novel Dynamic Substitution combiner technology which I invented, patented, own, and described in Cryptologia in 1991. It also uses the huge Additive RNG's which were part of my article The Efficient Generation of Cryptographic Confusion Sequences, which also appeared in Cryptologia in 1991.
In my view, the CLOAK2 security arguments are clearer and more believable than those for modern block ciphers. CLOAK2 does not use, and thus does not rely upon, public key technology, and so is far closer to our normal experience with metal keys and locks and doors. CLOAK2 has substantial secret key management methodology often overlooked by new cipher designers. See, for example: CLOAK2 Features: Key Management.
CLOAK2 strength arguments are addressed in The Cloak2 Design: Strength Arguments. The attacks discussed include: Brute Force on RNG, Brute Force on User Key, Chosen Key, Differential Cryptanalysis, Ciphertext Only, Known Plaintext, Key-Stream Repetition, Birthday Attack on Message Keys, and Chosen Plaintext. This list is more than we usually see in cipher presentations. The discussions themselves are simplified by the fact that CLOAK2 is a stream cipher, not a block cipher, and by the use of Dynamic Substitution technology, which is widely discussed elsewhere on my pages, and which itself directly opposes the usual stream-cipher attacks.
The development of BBM technology was a classic multi-year R&D project with multiple open reports. BBM technology is not just one idea, and certainly not just yet another "cipher." The original approach used the idea of counted balance in the mixing functions. Subsequent articles give other constructions and analysis from different directions. One of those is an actual proof that even a single input bit-change would produce a bit-change on all output ports. That proof thus guarantees that all next-level S-boxes would be "active," and avoids the need for any such measurement or optimization.
A particularly innovative analytic approach was my 1997 experimental statistical analysis of Boolean function nonlinearity distributions in Mixing constructions. In 1998 I found how to create keyed orthogonal Latin squares, and extended the results to nonlinear mixing. Those experiments support the idea that a nonlinearity distribution similar to that of ideal S-boxes can be created by mixing boxes of half the bit size (and then half again, and so on). In this way we can create huge emulated boxes -- conventional block ciphers -- by mixing small boxes in sufficient levels. In practice, the exact same code can handle legacy 64-bit blocks, modern 256-bit blocks, and 4K-byte large blocks.
My latest cipher is a description, not an implementation: Dynamic Transposition Revisited Again, an article which is now almost two years old. It is actually a detailed elaboration of the Dynamic Transposition combiner I also discussed in Cryptologia in 1991. About the last fifth of the article discusses "ATTACKS," and it starts off with an extended analysis that then continues for page after page, in something like 19 more paragraphs.
Personally, I think Dynamic Transposition is one of the most important ciphers around, for several reasons:
I have also found a couple of interesting combinatoric relationships:
Basically a reviewed index to the literature on various topics. Now a bit old, but still useful for those who do not know the older literature:
From my crypto era:
A few of the better articles from a much larger body of work:
For some time now, I have been involved in trying to firstly expose, and secondly address, some aspects of cryptography which are fundamentally unscientific:
Ciphers appear to differ from all other constructed devices in our society in that we cannot know whether or not they "work." We demand that a cipher not expose our information to an opponent, but we have no way to know whether our cipher actually succeeds. It is unreasonable to hope that our opponents will inform us of their successes. So we have no way to know when we must improve our cipher, or use a completely different one. This fundamental inability to measure "success" has massive implications with respect to cryptographic engineering.
We cannot know our opponents, and so also cannot know their capabilities. Since strength in practice is inherently contextual (the weakness of a cipher being dependent upon the knowledge of particular opponents), the inability to know the context is a fundamental problem in the discussion of practical cryptographic strength. Ideally we would have a strength proof that would apply to even the most capable opponent; but only rarely are current proofs useful in practice.
These issues go to the heart of the scientific investigation of practical cryptography, as opposed to academic cryptography. Although many things about cryptography are measurable and so can be scientific, the overall success of a cipher is not one of those things. (It seems especially ironic that this is the only thing we really need.) Academic work which fails to clearly make the distinction between theory and practice further confuses an already widely-misunderstood situation.
Once we accept that practical cryptography is a uniquely unscientific situation, we can seek to address it. For example, if we assume that some "strong enough" cipher designs exist (although we may not be able to identify them), we may gain an advantage by enciphering multiple times with different ciphers and keys. (For example, see Shannon's proposal: the algebra of secrecy systems.)
In addition, using a "stack" of ciphers means that known plaintext information for any particular cipher simply would not be available to an opponent. Since known plaintext is the basis for many if not most attacks, preventing known-plaintext exposure for the individual ciphers is a big advantage.
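The idea of stacking structurally different ciphers can be sketched with a toy three-layer stack: a keyed XOR stream, a keyed byte substitution, and a keyed transposition. Every component here is an illustrative stand-in, not a vetted cipher; the point is only the architecture, in which deciphering applies the inverse layers in reverse order.

```python
import random

def xor_layer(data, key):
    # keyed XOR stream layer; XOR is its own inverse
    rng = random.Random("xor:" + key)
    return bytes(b ^ rng.randrange(256) for b in data)

def sub_layer(data, key, decipher=False):
    # keyed simple substitution on byte values
    table = list(range(256))
    random.Random("sub:" + key).shuffle(table)
    if decipher:
        inv = [0] * 256
        for i, t in enumerate(table):
            inv[t] = i
        table = inv
    return bytes(table[b] for b in data)

def transpose_layer(data, key, decipher=False):
    # keyed transposition of byte positions
    perm = list(range(len(data)))
    random.Random("perm:" + key).shuffle(perm)
    if decipher:
        out = bytearray(len(data))
        for i, p in enumerate(perm):
            out[p] = data[i]
        return bytes(out)
    return bytes(data[p] for p in perm)

def stack_encipher(data, key):
    # layer order: XOR stream, then substitution, then transposition
    return transpose_layer(sub_layer(xor_layer(data, key), key), key)

def stack_decipher(data, key):
    # inverse layers, applied in reverse order
    return xor_layer(sub_layer(transpose_layer(data, key, True), key, True), key)
```

An opponent attacking any single layer sees only the output of another layer, never matched plaintext-ciphertext pairs for that layer alone.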
We can carry the war to our opponents by using a continuously expanding set of ciphers, thus requiring opponents to identify, obtain, analyze, and break each cipher we might use. Currently, we all tend to use "standard" ciphers, which thus protect a huge amount of potentially profitable data. The common use of a standard cipher sets up a profitable and fixed target for the opponent, and thus increases the risk for the user.
Unlike most fields concerned with danger, cryptography seems remarkably sanguine about deliberately creating a single point of failure situation, where the failure of just one cipher can have catastrophic results.
I have discussed these issues extensively, including 1996 discussions on email cipher weakness (see: Cryptography is War!), extensive 1999 Usenet discussions (see: Fixing Strength Problems in Cipher Use), and a 1999 article published in IEEE Computer (Cryptography: Is Staying with the Herd Really Best?).
Old Crypto Problems / New Crypto Mechanisms
"Complex and hard-to-refute conventional block-cipher attacks, such as Differential and Linear Cryptanalysis, would seem to be completely irrelevant to ciphers of the Dynamic Transposition type. Where there are no S-boxes, there are no linear or differential relations in S-boxes. Where there is no emulated table, there is no linear or differential relationship in that table. And where there is no Feistel structure, there is no ability to peel off Feistel layers. To have a block cipher where we can make these statements is really quite remarkable."
General-Use and Reference Pages
Peer-Reviewed and Other Journal and Magazine Articles