A discussion of noise in plaintext, homophonic ciphers, large blocks and other constructions. Also "What is a block cipher?" and FFT mixing and Mixing Cipher descriptions.
(Unfortunately, the first message in this thread did not arrive or was not saved.)
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Wed, 30 Sep 1998 11:03:14 -0700 From: "Harvey Rook" <RedRook@ZippyTheYahoo.com> Message-ID: <6utrjf$jk1@news.dns.microsoft.com> References: <36125A50.47C3@sundialservices.com> Newsgroups: sci.crypt Lines: 28

Check out Rivest's Chaffing and Winnowing scheme. On a more practical note, the point of vulnerability in almost all well-designed crypto-systems is the password, not the stream. Right now we (probably, but not provably) have ciphers that can only be attacked by brute force, yet these ciphers don't increase the size of the stream. So, if the real weakness in a crypto-system is the password, then increasing the stream size by adding random noise doesn't increase your security; it only wastes bandwidth.

Harvey Rook
Spam Guard. Remove the "Zippy The" to send email.
RedRook@ZippyTheYahoo.com

Sundial Services wrote in message <36125A50.47C3@sundialservices.com>...
>One thing that puzzles me about all of the encryption designs that I
>know of is that none of them seem to use the notion of deliberately
>injecting noise into the stream... stuff that is really not part of the
>plaintext at all but that is enciphered along with it.
>
>It seems to me that, with the obvious and acceptable side-effect of
>making the ciphertext larger than the plaintext, this would add greatly
>to the troubles of a cryptanalyst. Why isn't this done?
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Wed, 30 Sep 1998 19:34:00 -0600 From: jgfunj@EnqvbSerrGrknf.pbz (W T Shaw) Message-ID: <jgfunj-3009981934000001@dialup178.itexas.net> References: <6utpvj$32c$1@quine.mathcs.duq.edu> <36125A50.47C3@sundialservices.com> Newsgroups: sci.crypt Lines: 32

In article <6utpvj$32c$1@quine.mathcs.duq.edu>, juola@mathcs.duq.edu (Patrick Juola) wrote:
> In article <36125A50.47C3@sundialservices.com>,
> Sundial Services <info@sundialservices.com> wrote:
> >One thing that puzzles me about all of the encryption designs that I
> >know of is that none of them seem to use the notion of deliberately
> >injecting noise into the stream... stuff that is really not part of the
> >plaintext at all but that is enciphered along with it.
>
> Where do you get the noise? With a well-designed cypher, all the
> "noise" you need is part of the key.

A minor increase in length, a few percent at most with an appropriate algorithm, can include information that allows much stronger encryption than keeping the plaintext/ciphertext block size at unity. This has nothing to do with noise such as nulls meant to confuse the analyst, or with inefficient algorithms where the block size is doubled or more.

Such an algorithm is worth considering for producing a MAC, where losing a minimal amount of information also ensures that the strength of the algorithm is maintained in that mode. Nothing is really lost, as the few missing characters are simply integrated into the key structure, a part of it predetermined by the operator. This may be as hard to convey as following only oral instructions on how to tie your shoes.

--
----
Show me a politician who does not lie through his teeth,
and.....I'll show you one who can't find his dentures.
----
Decrypt with ROT13 to get correct email address.
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: 2 Oct 1998 02:11:13 GMT From: olson@umbc.edu (Bryan G. Olson; CMSC (G)) Message-ID: <6v1co1$ip5$1@news.umbc.edu> References: <36130001.33B5045A@null.net> <36125A50.47C3@sundialservices.com> Newsgroups: sci.crypt Lines: 26 Douglas A. Gwyn (DAGwyn@null.net) wrote: : Sundial Services wrote: : > One thing that puzzles me about all of the encryption designs that I : > know of is that none of them seem to use the notion of deliberately : > injecting noise into the stream... : This *is* done in certain schemes, but it isn't so simple as you : suggest. : In general, either the "noise" needs to be cryptographically generated, : using a key so that the intended recipient can undo it, : or the recipient needs to apply some sort of general (unkeyed) noise : filter. I suppose it's not exactly simple, but a technique proposed by Rivest and Sherman falls into the second category, and it isn't all that complex. They suggest applying an error correction code to the plaintext, then randomly changing as many bits as the code can correct. : Your apparent intent can be met better by reducing redundancy before : encrypting, which also makes better use of the channel bandwidth. Not if the intent is to harden the system against known or chosen plaintext attacks. --Bryan
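In Python, the Rivest/Sherman technique Olson describes might look like the following sketch. A 3x repetition code (which corrects one flipped bit per triple) stands in for a real error-correcting code; the helper names and the code choice are assumptions of this sketch, but the structure -- encode, add exactly as much noise as the code can correct, and let the receiver's unkeyed decoder strip it -- is the one described above.

    import secrets

    def ecc_encode(bits):
        # Repetition code: each plaintext bit is sent three times.
        return [b for b in bits for _ in range(3)]

    def ecc_decode(coded):
        # Majority vote over each triple corrects any single flipped bit.
        return [1 if sum(coded[i:i+3]) >= 2 else 0
                for i in range(0, len(coded), 3)]

    def randomize(coded):
        # Flip exactly one bit per triple -- as much noise as the code
        # can correct. This randomized string is what gets encrypted.
        noisy = coded[:]
        for i in range(0, len(noisy), 3):
            noisy[i + secrets.randbelow(3)] ^= 1
        return noisy

    plain = [1, 0, 1, 1, 0, 0, 1, 0]
    noisy = randomize(ecc_encode(plain))
    assert ecc_decode(noisy) == plain   # receiver recovers plaintext, unkeyed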
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Fri, 02 Oct 1998 16:19:08 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <3614EEEC.482F00A@stud.uni-muenchen.de> References: <6v1co1$ip5$1@news.umbc.edu> Newsgroups: sci.crypt Lines: 12 Bryan G. Olson; CMSC (G) wrote: > I suppose it's not exactly simple, but a technique proposed > by Rivest and Sherman falls into the second category, and it > isn't all that complex. They suggest applying an error > correction code to the plaintext, then randomly changing as > many bits as the code can correct. Could you say whether this is different in principle (spirit) from the classical technique of homophones? M. K. Shen
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Sat, 10 Oct 1998 18:47:27 GMT From: scott@helsbreth.org (Scott Nelson) Message-ID: <361fa045.3520234@news.inreach.com> References: <36125A50.47C3@sundialservices.com> Newsgroups: sci.crypt Lines: 31

On Wed, 30 Sep, Sundial Services <info@sundialservices.com> wrote:
>One thing that puzzles me about all of the encryption designs that I
>know of is that none of them seem to use the notion of deliberately
>injecting noise into the stream... stuff that is really not part of the
>plaintext at all but that is enciphered along with it.
>
>It seems to me that, with the obvious and acceptable side-effect of
>making the ciphertext larger than the plaintext, this would add greatly
>to the troubles of a cryptanalyst. Why isn't this done?
>

It is done. In particular, Cipher Block Chaining starts with 1 block of IV, in essence increasing the ciphertext by one block, and randomizing all the plaintext.

Adding noise to each and every block isn't done mainly because it isn't necessary. In some cases it can actually be harmful. Consider: the encryption is DES with 56 bits of random noise in the first 7 bytes, and 1 byte of real data. Mallet collects 256 unique plaintexts, one for each possible value of the data byte. He can now forge any message he wants to, simply by splicing the corresponding ciphertext blocks together.

-----------------------------------
Shameless plug for random web site:
http://www.helsbreth.org/random
Scott Nelson <scott@helsbreth.org>
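Nelson's warning is easy to make concrete. In the toy sketch below (all names hypothetical), the "block cipher" is a secret random permutation over two-byte blocks -- one noise byte plus one data byte, scaled down from his 7-bytes-noise-plus-1-byte-data example -- and the noise byte is simply discarded on deciphering. Once Mallet owns one valid ciphertext per possible data byte, he can splice together a valid encryption of anything:

    import secrets

    # A secret random permutation over 16-bit blocks stands in for keyed DES.
    perm = list(range(65536))
    for i in range(65535, 0, -1):               # Fisher-Yates shuffle
        j = secrets.randbelow(i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    inv = [0] * 65536
    for i, p in enumerate(perm):
        inv[p] = i

    def encrypt_byte(b):
        # One random noise byte is packed beside the single data byte.
        return perm[(secrets.randbelow(256) << 8) | b]

    def decrypt_byte(c):
        return inv[c] & 0xFF                    # noise byte thrown away

    # Mallet collects one valid ciphertext block per possible data byte...
    codebook = {b: encrypt_byte(b) for b in range(256)}
    # ...and splices blocks together to forge any message he likes.
    forged = [codebook[b] for b in b"launch at dawn"]
    assert bytes(decrypt_byte(c) for c in forged) == b"launch at dawn"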
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Thu, 22 Oct 1998 13:39:21 GMT From: bo.doemstedt@mbox200.swipnet.se (Bo Dömstedt) Message-ID: <362f2d50.515707890@nntpserver.swip.net> References: <36125A50.47C3@sundialservices.com> Newsgroups: sci.crypt Lines: 54

Sundial Services <info@sundialservices.com> wrote:
>One thing that puzzles me about all of the encryption designs that I
>know of is that none of them seem to use the notion of deliberately
>injecting noise into the stream... stuff that is really not part of the
>plaintext at all but that is enciphered along with it.
>
>It seems to me that, with the obvious and acceptable side-effect of
>making the ciphertext larger than the plaintext, this would add greatly
>to the troubles of a cryptanalyst. Why isn't this done?

Your question refers to "old stuff" that is now very well understood. One example would be the breaking of the German Enigma by the small Polish bureau [1], exploiting the non-randomness of initialization vectors. A systematic study of ciphers and randomness can be found in [2].

>Why isn't this done?

Well, if you don't, could your system be broken using depth reading?

Bo Dömstedt
Cryptographer
Protego Information AB
Malmoe, Sweden
SG100 true random noise generator
http://www.protego.se/sg100_en.htm
--- Run your own OTP! 750 MB/24hrs!
--- Generate initialization vectors
--- Generate session keys
--- Generate encryption keys
--- Provide randomness for public key generation.

[1] Rejewski, Marian. "How Polish Mathematicians Deciphered the Enigma." Annals of the History of Computing, Vol. 3, No. 3, July 1981, pp. 213-229. Translated by the American Federation of Information Processing from "Jak matematycy polscy rozszyfrowali Enigme," Annals of the Polish Mathematical Society, Series II, Wiadomosci Matematyczne, Vol. 23, 1980, pp. 1-28.

[2] Rivest, Ronald L. and Sherman, Alan T. "Randomized Encryption Techniques." In Chaum, David; Rivest, Ronald L.; and Sherman, Alan T. (editors), Advances in Cryptology: Proceedings of Crypto 82, pp. 145-164. Plenum Press, New York, 1982. ISBN 0-306-41366-3.
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Fri, 02 Oct 1998 01:56:44 GMT From: bryan.olson@uptronics.com Message-ID: <6v1bss$ir3$1@nnrp1.dejanews.com> References: <36125A50.47C3@sundialservices.com> Newsgroups: sci.crypt Lines: 26

info@sundialservices.com wrote:
> One thing that puzzles me about all of the encryption designs that I
> know of is that none of them seem to use the notion of deliberately
> injecting noise into the stream... stuff that is really not part of the
> plaintext at all but that is enciphered along with it.

As I recall (i.e. I haven't looked it up), the proceedings of Crypto 82 contained three papers that proposed adding randomness to symmetric ciphers. The most comprehensive was Rivest and Sherman's "Randomized Encryption Techniques". Adding randomness seems especially powerful for resisting chosen plaintext attacks.

> It seems to me that, with the obvious and acceptable side-effect of
> making the ciphertext larger than the plaintext, this would add greatly
> to the troubles of a cryptanalyst. Why isn't this done?

Primarily because there's little evidence that it's needed. Modern ciphers used with a simple unique IV seem to resist every attack that randomized ciphers resist.

--Bryan

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
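Olson's point about a unique IV can be shown in a few lines. The sketch below assumes the third-party Python "cryptography" package and uses AES-CBC; the only size cost is the one IV block, yet equal messages under the same key become unlinkable ciphertexts:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def cbc_encrypt(key, plaintext):            # plaintext: whole blocks
        iv = os.urandom(16)                     # fresh random IV per message
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return iv + enc.update(plaintext) + enc.finalize()

    key = os.urandom(32)
    msg = b"ATTACK AT DAWN!!"                   # exactly one 16-byte block
    c1, c2 = cbc_encrypt(key, msg), cbc_encrypt(key, msg)
    assert c1 != c2                  # same plaintext and key, new ciphertext
    assert len(c1) == len(msg) + 16  # expansion: one IV block, nothing more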
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Sat, 03 Oct 1998 19:39:15 GMT From: ritter@io.com (Terry Ritter) Message-ID: <36167d50.2983728@news.io.com> References: <6v5mcq$cj1$1@nnrp1.dejanews.com> <3613D2D3.9F597176@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 83

On Sat, 03 Oct 1998 17:20:28 GMT, in <6v5mcq$cj1$1@nnrp1.dejanews.com>, in sci.crypt dianelos@tecapro.com wrote:

>[...]
> The original scheme of including noise in the encryption process
> may have important practical value. I believe that the following
> is true:
>
> If a) one true random bit is injected for each bit encrypted and
> b) the encryption function fulfills some "weak" mathematical
> condition, then the resulting encryption method cannot be broken
> by *any* chosen plaintext, known plaintext or ciphertext-only
> attack, in other words offers practically perfect security. The
> only attacks that work would be exhaustive key search and, maybe,
> related key, an attack difficult to carry out in practice.
>
> The "weak" mathematical condition mentioned above is, roughly,
> that there exists no efficient method to compute R based on E( R ) and
> E( T xor R ) when T is known or chosen and R is a random number.
> This seems to be a much weaker condition to prove than what cipher
> designers would have to prove for an almost perfect cipher: that
> even if a large number of T, E(T) pairs are known there is no way
> to compute the secret key. Observe that for no cipher has this
> last condition been proved - cipher designers can only claim that
> a cipher has resisted much effort to find such a method, not that
> such a method does not exist.
>
> I think this "weak" mathematical condition is so weak that
> probably any modern cipher fulfills it. If somebody would prove
> this then we could design practical, virtually impregnable
> security systems, albeit doubling the size of the ciphertext.

Allow me to point out that something like this is achieved in the homophonic block ciphers I have been describing here for the past several years (the basic idea is Feistel, mid-70's):

1. Reserve an authentication / keying field in the data block (this of course reduces the amount of data which will fit in the block).

2. Place a random value in that field.

3. Encipher.

Note that each possible value in the a/k field produces a completely different ciphertext block. But if we decipher *any* of those blocks, we find that the data field is the same. We can use this in lots of ways:

a. Simply as a homophonic block cipher, using truly random a/k. (We can assure that the Opponents will not collect a useful codebook from ECB operation, so we can use ECB, and we don't have to transport or create an IV to do it.)

b. As a block self-authentication, by using an arbitrary a/k value. (Every time we decipher a block, we can compare the a/k we get to the one used originally, or just to the a/k from each block, so if The Opponent changes a block, the a/k won't check out.)

c. As a dynamically-keyed cipher, with zero keying setup. (Essentially change the key on a block-by-block basis.)

Note that (b) & (c) can be accomplished simultaneously, and will effectively produce (a) as well.

I claim this keying is like any block keying in that enough is enough. If we have 64 bits of a/k (or 80 or whatever), we don't need more. Of course, a 64-bit a/k field is 50% of a 128-bit block, but just 12.5% of a 64-byte block. This is one of the advantages of a large block construction.
I think the essence of this sort of field is the "block-ness" of the transformation, the guarantee that changing the a/k field will change the transformation from any tractable subset of bits to the output. If the transformation does not change for, say, a byte, that byte has not been dynamically keyed. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary 1998-08-27: http://www.io.com/~ritter/GLOSSARY.HTM
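Ritter's construction scales down readily to a runnable sketch. Here a 16-byte AES block carries 8 bytes of data plus an 8-byte a/k field (his examples use much larger blocks; the field sizes, helper names, and the third-party "cryptography" package are assumptions of this sketch, not part of his design). Uses (a) and (b) above both appear: ECB without a usable codebook, and per-block authentication against an expected a/k value:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def aes_ecb(key):
        return Cipher(algorithms.AES(key), modes.ECB())

    def homophonic_encrypt(key, data8, ak=None):
        ak = ak if ak is not None else os.urandom(8)   # 2**64 homophones
        enc = aes_ecb(key).encryptor()
        return enc.update(data8 + ak) + enc.finalize()

    def homophonic_decrypt(key, block, expected_ak=None):
        dec = aes_ecb(key).decryptor()
        plain = dec.update(block) + dec.finalize()
        data8, ak = plain[:8], plain[8:]
        if expected_ak is not None and ak != expected_ak:
            raise ValueError("a/k mismatch: block modified or out of place")
        return data8

    key = os.urandom(16)
    c1 = homophonic_encrypt(key, b"SAMEDATA")
    c2 = homophonic_encrypt(key, b"SAMEDATA")
    assert c1 != c2      # ECB mode, yet repeated data yields new ciphertext
    assert homophonic_decrypt(key, c1) == homophonic_decrypt(key, c2)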
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Mon, 05 Oct 1998 06:48:47 GMT From: dianelos@tecapro.com Message-ID: <6v9q4e$n10$1@nnrp1.dejanews.com> References: <36167d50.2983728@news.io.com> Newsgroups: sci.crypt Lines: 75

In article <36167d50.2983728@news.io.com>, ritter@io.com (Terry Ritter) wrote:
>
> On Sat, 03 Oct 1998 17:20:28 GMT, in
> <6v5mcq$cj1$1@nnrp1.dejanews.com>, in sci.crypt dianelos@tecapro.com
> wrote:
>
> >[...]
> > The original scheme of including noise in the encryption process
> > may have important practical value. I believe that the following
> > is true:
> >
> > If a) one true random bit is injected for each bit encrypted and
> > b) the encryption function fulfills some "weak" mathematical
> > condition, then the resulting encryption method cannot be broken
> > by *any* chosen plaintext, known plaintext or ciphertext-only
> > attack, in other words offers practically perfect security. The
> > only attacks that work would be exhaustive key search and, maybe,
> > related key, an attack difficult to carry out in practice.
> >
> > The "weak" mathematical condition mentioned above is, roughly,
> > that there exists no efficient method to compute R based on E( R ) and
> > E( T xor R ) when T is known or chosen and R is a random number.
> > This seems to be a much weaker condition to prove than what cipher
> > designers would have to prove for an almost perfect cipher: that
> > even if a large number of T, E(T) pairs are known there is no way
> > to compute the secret key. Observe that for no cipher has this
> > last condition been proved - cipher designers can only claim that
> > a cipher has resisted much effort to find such a method, not that
> > such a method does not exist.
> >
> > I think this "weak" mathematical condition is so weak that
> > probably any modern cipher fulfills it. If somebody would prove
> > this then we could design practical, virtually impregnable
> > security systems, albeit doubling the size of the ciphertext.
>
> Allow me to point out that something like this is achieved in the
> homophonic block ciphers I have been describing here for the past
> several years (the basic idea is Feistel, mid-70's):
>
> 1. Reserve an authentication / keying field in the data block (this
> of course reduces the amount of data which will fit in the block).
>
> 2. Place a random value in that field.
>
> 3. Encipher.
>
> Note that each possible value in the a/k field produces a completely
> different ciphertext block. But if we decipher *any* of those blocks,
> we find that the data field is the same. We can use this in lots of
> ways: [...]

Yes. Still, the method you describe is not sufficient as a defense against known or chosen plaintext attacks, because the rest of the data block can be known or chosen. Even if the a/k field is half the block-size and is xor-ed to the other half before encryption, you still leak information. For example, if an attacker chooses data filled with zeros, he knows that the block that will be enciphered has two identical halves.

Compare this to the method I propose:

   E( k1, E( k2, R ) )
   E( k3, E( k4, T xor R ) )

It is inefficient both in time and space, but can you see any way to mount an attack on this apart from exhaustive key search?

-- http://www.tecapro.com email: dianelos@tecapro.com

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
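The proposed scheme is simple enough to state in code. This sketch uses AES single-block encryption as a stand-in for the generic E (dianelos later suggests E=DES; nothing here depends on the choice) and the third-party "cryptography" package, with the ciphertext double the plaintext size as he says:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def E(k, block):
        enc = Cipher(algorithms.AES(k), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    def D(k, block):
        dec = Cipher(algorithms.AES(k), modes.ECB()).decryptor()
        return dec.update(block) + dec.finalize()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    k1, k2, k3, k4 = (os.urandom(16) for _ in range(4))

    def encrypt_block(T):
        R = os.urandom(16)             # one true-random block per data block
        return E(k1, E(k2, R)), E(k3, E(k4, xor(T, R)))

    def decrypt_block(C1, C2):
        R = D(k2, D(k1, C1))           # recover the noise block first
        return xor(D(k4, D(k3, C2)), R)

    C1, C2 = encrypt_block(b"sixteen byte msg")
    assert decrypt_block(C1, C2) == b"sixteen byte msg"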
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Mon, 05 Oct 1998 18:09:41 GMT From: ritter@io.com (Terry Ritter) Message-ID: <36190b22.6139167@news.io.com> References: <6v9q4e$n10$1@nnrp1.dejanews.com> Newsgroups: sci.crypt Lines: 138

On Mon, 05 Oct 1998 06:48:47 GMT, in <6v9q4e$n10$1@nnrp1.dejanews.com>, in sci.crypt dianelos@tecapro.com wrote:

>In article <36167d50.2983728@news.io.com>,
> ritter@io.com (Terry Ritter) wrote:
>> [...]
>> Allow me to point out that something like this is achieved in the
>> homophonic block ciphers I have been describing here for the past
>> several years (the basic idea is Feistel, mid-70's):
>>
>> 1. Reserve an authentication / keying field in the data block (this
>> of course reduces the amount of data which will fit in the block).
>>
>> 2. Place a random value in that field.
>>
>> 3. Encipher.
>>
>> Note that each possible value in the a/k field produces a completely
>> different ciphertext block. But if we decipher *any* of those blocks,
>> we find that the data field is the same. We can use this in lots of
>> ways: [...]
>
> Yes. Still, the method you describe is not sufficient as a defense
> against known or chosen plaintext attacks, because the rest of the
> data block can be known or chosen.

Obviously I have been unable to communicate what the a/k stuff is. Maybe:

   C[i] = E( k, (P[i] << 64) + a/k[i] )

   where E      is a 64-byte block cipher
         C[i]   is the 64-byte ciphertext
         P[i]   is 56 bytes of plaintext
         a/k[i] is an 8-byte (64-bit) authentication / keying value

We can assume that every possible value for a/k will produce a different ciphertext, and that every ciphertext bit will have an equal opportunity to change. This means there will be a multiplicity of ciphertext representations for the exact same data: this is a homophonic block cipher. And that means:

1) there *is* no known-plaintext or defined-plaintext attack in the usual sense; the opponent can only see or set the subset of the block which is the data field; the a/k field is internal;

2) if a/k is essentially random-like, we can use Electronic CodeBook (ECB) mode, instead of a chaining mode (this means that blocks can be enciphered and deciphered independently); (here we discard the a/k field upon deciphering);

3) if we produce a sequence of a/k values, we are essentially providing dynamic keying for each block, and if we can reproduce the a/k sequence in deciphering (just like any stream cipher), we can *also* check the validity of each block and the sequence of blocks.

> Even if the a/k field is half the block-size and is xor-ed to the
> other half before encryption,

No, a/k functions differently. It is related to your idea by concept, not implementation.

> you still leak information. For
> example, if an attacker chooses data filled with zeros, he knows
> that the block that will be enciphered has two identical halves.

No. This is not a/k.

> Compare this to the method I propose:
>
>    E( k1, E( k2, R ) )
>    E( k3, E( k4, T xor R ) )
>
> It is inefficient both in time and space, but can you see any
> way to mount an attack on this apart from exhaustive key search?

At the top we encipher R twice, under two different keys. At the bottom we encipher T xor R twice, under yet two other keys. But if enciphering is effective, why do we do it twice? And if it is not effective, why is twice enough? And if we are going to attack E(), we will have to know what E() is: We have passed the point of notational utility.

Summary: If we had a wide block cipher we could effectively add block keying to the data stream. And that would make size restrictions on the primary key essentially useless.

--- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary 1998-08-27: http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Tue, 06 Oct 1998 04:23:20 GMT From: dianelos@tecapro.com Message-ID: <6vc5vn$6fs$1@nnrp1.dejanews.com> References: <36190b22.6139167@news.io.com> Newsgroups: sci.crypt Lines: 92

In article <36190b22.6139167@news.io.com>, ritter@io.com (Terry Ritter) wrote:
>[...]
>Dianelos wrote:
> > Yes. Still, the method you describe is not sufficient as a defense
> > against known or chosen plaintext attacks, because the rest of the
> > data block can be known or chosen.
>
> Obviously I have been unable to communicate what the a/k stuff is. Maybe:
>
>    C[i] = E( k, (P[i] << 64) + a/k[i] )
>
>    where E      is a 64-byte block cipher
>          C[i]   is the 64-byte ciphertext
>          P[i]   is 56 bytes of plaintext
>          a/k[i] is an 8-byte (64-bit) authentication / keying value
>
> We can assume that every possible value for a/k will produce a
> different ciphertext, and that every ciphertext bit will have an equal
> opportunity to change. This means there will be a multiplicity of
> ciphertext representations for the exact same data: this is a
> homophonic block cipher. And that means:
>
> 1) there *is* no known-plaintext or defined-plaintext attack in the
> usual sense; the opponent can only see or set the subset of the block
> which is the data field; the a/k field is internal;

O.K. I suppose the usual sense of known-plaintext is that the whole plaintext is known to the attacker. Ciphertext-only attacks will be possible though, because these only require partial knowledge of the plaintext.

>[...]
> > Compare this to the method I propose:
> >
> >    E( k1, E( k2, R ) )
> >    E( k3, E( k4, T xor R ) )
> >
> > It is inefficient both in time and space, but can you see any
> > way to mount an attack on this apart from exhaustive key search?
>
> At the top we encipher R twice, under two different keys. At the
> bottom we encipher T xor R twice, under yet two other keys.
>
> But if enciphering is effective, why do we do it twice? And if it is
> not effective, why is twice enough?

Because if single encryption is used then the noise block R can be canceled out. Suppose we use:

   E( k1, R )       = C1
   E( k2, T xor R ) = C2

where C1, C2 are the two ciphertext blocks. Now the attacker computes:

   R       = D( k1, C1 )
   T xor R = D( k2, C2 )

   =>  T xor D( k1, C1 ) = D( k2, C2 )
   =>  T = D( k1, C1 ) xor D( k2, C2 )

The noise factor has now disappeared. If the plaintexts are known, the attacker can accumulate a large number of relations where the only unknowns are k1 and k2. For similar reasons, the following method will not work either:

   E( k1, E( k2, R ) ) = C1
   E( k3, T xor R )    = C2

It seems we need two double encryptions.

> And if we are going to attack E(), we will have to know what E() is:
> We have passed the point of notational utility.

Suppose it is E=DES. My point is that any good cipher will probably work. At the cost of quadrupling time and doubling space it appears we can get a cipher where the only possible attack is exhaustive key search. If E=DES this is an impossible 2^224 work factor.

-- http://www.tecapro.com email: dianelos@tecapro.com

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
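The cancellation dianelos describes can be checked directly. In this sketch (AES again standing in for E, via the "cryptography" package), the attacker never sees R, yet the final relation holds for the correct keys regardless of R -- so known plaintext drives a brute-force search exactly as if no noise had been added:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def E(k, block):
        enc = Cipher(algorithms.AES(k), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    def D(k, block):
        dec = Cipher(algorithms.AES(k), modes.ECB()).decryptor()
        return dec.update(block) + dec.finalize()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    k1, k2 = os.urandom(16), os.urandom(16)
    T = b"known plaintext!"
    R = os.urandom(16)                     # the sender's injected noise
    C1, C2 = E(k1, R), E(k2, xor(T, R))    # the weak single-encryption form

    # R cancels out of the relation, leaving only k1 and k2 as unknowns:
    assert xor(D(k1, C1), D(k2, C2)) == T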
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Tue, 06 Oct 1998 16:11:08 GMT From: jsavard@tenMAPSONeerf.edmonton.ab.ca (John Savard) Message-ID: <361a3f1a.5668671@news.prosurfr.com> References: <6vc5vn$6fs$1@nnrp1.dejanews.com> Newsgroups: sci.crypt Lines: 43 dianelos@tecapro.com wrote, in part: > O.K. I suppose the usual sense of known-plaintext is that the > whole plaintext is known to the attacker. Ciphertext only attacks > will be possible though, because these only require partial > knowledge of the plaintext. You can, however, think of the padding step as an additional encryption step - so you know the plaintext before that step, even if you can't do a true known-plaintext attack on the following step by itself. So a known-plaintext attack remains possible in a technical or terminological sense, while operationally one *may* be able to prevent a known-plaintext attack on a portion of the encipherment process that is vulnerable to it. However, let us say one is facing single-DES as a cipher. A ciphertext-only attack is not known, but a known-plaintext one is: brute force. Now then, if I use CBC mode, I fail to eliminate a known-plaintext attack, because if one knows the plaintext, one knows the input to DES, the XOR of the plaintext and the previous ciphertext block. However, I could use a different mode that does eliminate a direct known-plaintext attack on DES, and yet is still insecure: send, enciphered, a random 64-bit block, then XOR that block to every block in the message before encryption. I don't know, now, the value of any plaintext block. But I can still brute-force search: decipher a *pair* of blocks for every key, and when the difference between them, deciphered, is the same as that between the two plaintext blocks, I probably have the right key. This is a trivial example, but it illustrates the principle that a method that _apparently_ eliminates a true known-plaintext attack may not actually give security to a cipher, even if that cipher is not vulnerable in the case of unknown plaintext. John Savard http://members.xoom.com/quadibloc/index.html
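Savard's mode and its weakness fit in a short sketch (AES standing in for DES; names are illustrative). No single input to the cipher is known, yet deciphering a *pair* of blocks under a key guess gives a test that is independent of the secret mask S:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def E(k, block):
        enc = Cipher(algorithms.AES(k), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    def D(k, block):
        dec = Cipher(algorithms.AES(k), modes.ECB()).decryptor()
        return dec.update(block) + dec.finalize()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    key, S = os.urandom(16), os.urandom(16)   # S: the enciphered random block
    P1, P2 = b"first blocktext!", b"second blocktxt!"
    C1, C2 = E(key, xor(P1, S)), E(key, xor(P2, S))   # S masks every block

    def key_test(guess):
        # True for the right key whatever S was: the masks cancel.
        return xor(D(guess, C1), D(guess, C2)) == xor(P1, P2)

    assert key_test(key)
    assert not key_test(os.urandom(16))       # wrong keys fail (w.h.p.)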
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Tue, 06 Oct 1998 19:15:48 GMT From: ritter@io.com (Terry Ritter) Message-ID: <361a6bfd.13557766@news.io.com> References: <361a3f1a.5668671@news.prosurfr.com> Newsgroups: sci.crypt Lines: 37 On Tue, 06 Oct 1998 16:11:08 GMT, in <361a3f1a.5668671@news.prosurfr.com>, in sci.crypt jsavard@tenMAPSONeerf.edmonton.ab.ca (John Savard) wrote: >[...] >This is a trivial example, but it illustrates the principle that a >method that _apparently_ eliminates a true known-plaintext attack may >not actually give security to a cipher, even if that cipher is not >vulnerable in the case of unknown plaintext. One could presumably make a similar argument that not all keying is worthwhile, because some forms of keying are ineffective. Surely we can imagine taking half of every 64-bit DES block and filling that with noise. Now, for any 32-bit data (in the other half of the block), there are 2**32 different ciphertexts which represent the exact same data under the exact same DES key! Known-plaintext is now 32 bits of data with 64 bits of ciphertext. Then the question is: "How many different DES keys can produce the same plaintext-to-ciphertext transformation (given some homophonic keying value)?" Well, *some* plaintext must produce a given ciphertext under *every* key, so the chance that it has our particular data is just the size of that field, 1 in 2**32. So with a 56-bit DES key, we expect that fully 2**24 different DES keys will have the exact same plaintext-to-ciphertext transformation. Placing keying in the data block is the homophonic construction, and if we have a bigger block, we can have more keying and less overhead. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary 1998-08-27: http://www.io.com/~ritter/GLOSSARY.HTM
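Ritter's count is just an expected-value calculation, shown here as arithmetic:

    # With a 32-bit noise field in a 64-bit block and a 56-bit key, a wrong
    # key decrypts a block to the observed 32 data bits with probability
    # 2**-32, so the expected number of keys consistent with one observed
    # block is:
    key_bits, data_bits = 56, 32
    survivors = 2 ** key_bits // 2 ** data_bits
    print(survivors)   # 2**24 = 16777216 keys share the same transformation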
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Wed, 07 Oct 1998 11:09:55 -0400 From: Jerry Leichter <leichter@smarts.com> Message-ID: <361B8443.32DF@smarts.com> References: <361a6bfd.13557766@news.io.com> Newsgroups: sci.crypt Lines: 51

| Surely we can imagine taking half of every 64-bit DES block and
| filling that with noise. Now, for any 32-bit data (in the other half
| of the block), there are 2**32 different ciphertexts which represent
| the exact same data under the exact same DES key! Known-plaintext is
| now 32 bits of data with 64 bits of ciphertext.
|
| Then the question is: "How many different DES keys can produce the
| same plaintext-to-ciphertext transformation (given some homophonic
| keying value)?" Well, *some* plaintext must produce a given
| ciphertext under *every* key, so the chance that it has our particular
| data is just the size of that field, 1 in 2**32. So with a 56-bit DES
| key, we expect that fully 2**24 different DES keys will have the exact
| same plaintext-to-ciphertext transformation.

Fine. Now how many of those keys also give a reasonable decryption of the next block? Let's continue with your worst-case analysis: 1 in 2^32 keys gives the right plaintext for one block. Since everything here is independent, only 1 in 2^64 keys gives the right plaintext for any two distinct blocks. Since there are only 2^56 keys, it's highly likely that the only key that actually gives the right plaintext for two blocks is the one being sought. Note that this is true even assuming the block key value is chosen completely at random for each block.

So, it would seem as if you've doubled the known plaintext required. In fact, however, that's an illusion: What you've doubled is the *amount of ciphertext corresponding to known plaintext*. However, each block of ciphertext now only encodes half the plaintext it did before. Originally, you needed one 64-bit block of known plaintext. Now you need two 32-bit blocks of known plaintext. You haven't gained anything at all!

This argument is independent of the number of bytes you reserve for keying material, and the size of the block. All you're doing is spreading the known material out through a number of blocks. There's a bit more work, but not all that much. (At the very worst, if you use k blocks, you make the opponent do k times as many trial decryptions - actually, less, since he can often stop early. However, the user of the system also has to do k times as many encryptions and decryptions, so you haven't increased the work factor for the cryptanalyst at all.)

The underlying problem is fundamental: The cleartext and the random filler material are trivially separable once the external encipherment has been removed. So they don't really add anything in defending against a brute-force attack: In a brute-force attack, the attacker has his hands on the results of removing the external encipherment, along with a lot of random junk that he has to discard. You haven't changed the work needed in the brute force step - the external encipherment keyspace is exactly the size it originally was - and you haven't made it significantly harder for the attacker to discard the junk.

-- Jerry
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Thu, 08 Oct 1998 07:09:31 GMT From: ritter@io.com (Terry Ritter) Message-ID: <361c6524.53243250@news.io.com> References: <361B8443.32DF@smarts.com> Newsgroups: sci.crypt Lines: 138 On Wed, 07 Oct 1998 11:09:55 -0400, in <361B8443.32DF@smarts.com>, in sci.crypt Jerry Leichter <leichter@smarts.com> wrote: >| Surely we can imagine taking half of every 64-bit DES block and >| filling that with noise. Now, for any 32-bit data (in the other half >| of the block), there are 2**32 different ciphertexts which represent >| the exact same data under the exact same DES key! Known-plaintext is >| now 32 bits of data with 64 bits of ciphertext. >| >| Then the question is: "How many different DES keys can produce the >| same plaintext-to-ciphertext transformation (given some homophonic >| keying value)?" Well, *some* plaintext must produce a given >| ciphertext under *every* key, so the chance that it has our particular >| data is just the size of that field, 1 in 2**32. So with a 56-bit DES >| key, we expect that fully 2**24 different DES keys will have the exact >| same plaintext-to-ciphertext transformation. > >Fine. Now how many of those keys also give a reasonable decryption of >the next block? *Any* time we have more data than key, *any* cipher system is "theoretically solvable." If you were expecting this construction to repeal Shannon, I am sorry to say it does not. But if you just want more keyspace, that is simple enough to do: Just add another cipher with its own large keyspace and multi-cipher. >Let's continue with your worst-case analysis: 1 in >2^32 keys give the right plaintext for one block. Since everything here >is independent, only 1 in 2^64 keys gives the right plaintext for any >two distinct blocks. Since there are only 2^56 keys, it's highly likely >that the only key that actually gives the right plaintext for two blocks >is the one being sought. Note that this is true even assuming the block >key value is chosen completely at random for each block. > >So, it would seem as if you've doubled the known plaintext required. In >fact, however, that's an illusion: What've you've doubled is the >*amount of ciphertext corresponding to known plaintext*. However, each >block of ciphertext now only encodes half the plaintext it did before. >Originally, you needed one 64-bit block of known plaintext. Now you >need to 32-bit blocks of known plaintext. You haven't gained anything >at all! Since this was not the intended gain, we should not be surprised to not achieve it. If adding keyspace to DES was this easy, we would have heard about it 20 years ago. The real "illusion" here is the assumption that homophonic keying is in all cases equivalent to the cipher main key. But if one is going to use every new construction exactly like the old one, it is going to be very difficult to deliver any useful new features. There is more to cryptography than just doing DES again, with a bigger block and a larger key. The homophonic field can oppose codebook attacks. Is this "keying"? Well, we normally think that each key value will generate a different ciphertext, and that is exactly what the homophonic field does. So the homophonic field *is* "keying," even if it is a little different than usual. >This argument is independent of the number of bytes you reserve for >keying material, and the size of the block. All you're doing is >spreading the known material out through a number of blocks. 
There's a >bit more work, but not all that much. (At the very worst, if you use k >blocks, you make the opponent do k times as many trial decryptions - >actually, less, since he can often stop early. However, the user of the >system also has to do k times as many encryptions and decryptions, so >you haven't increased the work factor for the cryptanalyst at all.) > >The underlying problem is fundamental: The cleartext and the random >filler material are trivially separable once the external encipherment >has been removed. So they don't really add anything in defending >against a brute-force attack: In a brute-force attack, the attacker has >his hands on the results of removing the external encipherment, along >with a lot of random junk that he has to discard. You haven't changed >the work needed in the brute force step - the external encipherment >keyspace is exactly the size it originally was - and you haven't made it >significantly harder for the attacker to discard the junk. We should note that you somehow failed to take note of the next step, as it was in my original: >>Placing keying in the data block is the homophonic construction, >>and if we have a bigger block, we can have more keying and less >>overhead. The DES example was explanatory; the advantages accrue mainly with use in block ciphers with large blocks (which minimize homophonic overhead) and large main keys (which eliminate the need to expand the main keyspace). I see two real advantages in the homophonic construction: 1. First, homophonic keying is a randomization which is effective against codebook attack: When a plaintext block is repeated with a different homophonic value it will have a different ciphertext. (The homophonic keying value can be really random in this use.) And codebook attacks are important enough that most systems defend against them with something beyond the cipher per se, typically "CBC mode." The homophonic keying field thus provides a way to avoid CBC chaining and the sequentiality that implies. (Only huge blocks -- say 64 bytes or larger -- would have enough uniqueness from text to avoid codebook attack without randomizing the plaintext or the transform.) Sequentiality means that throughput generally cannot be improved with multiple ciphering hardware, and that lost or delayed packets will delay ciphering even if later packets are already received and ready. And the longer the round trip, the worse the consequences. These are problems we can avoid with homophonic keying. 2. Next, a homophonic field also can supply "authentication" -- provided we know what value the field should have upon deciphering. Possibly the authentication value could be related to the message key, so that any block received for that message would give the same authentication value. Perhaps another part of the field could hold the message block number. Indeed, with transport level ciphering, the authentication value might even be used as the network error-detection code (for ciphered packets only), thus avoiding perhaps *two* error-detection passes across the data. There are advantages to the homophonic construction (and large blocks and variable-size blocks) to the extent that the cipher is allowed to do what it can do and is not forced into the cramped and primitive DES-style box by which all ciphers are currently judged. The new advantages *can* be significant -- in particular applications. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary 1998-08-27: http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Thu, 08 Oct 1998 10:54:41 -0400 From: Jerry Leichter <leichter@smarts.com> Message-ID: <361CD231.2B3B@smarts.com> References: <361c6524.53243250@news.io.com> Newsgroups: sci.crypt Lines: 76

Terry Ritter wrote: [A number of interesting things.]

Basically, I don't disagree with any of it. The *context* in which it was posted made it appear as if Mr. Ritter was proposing to use homophonic keying as a way to increase the effective cost of a brute-force attack (which *is* a goal of Rivest's all-or-nothing construction). As I pointed out, that won't work.

| I see two real advantages in the homophonic construction:
|
| 1. First, homophonic keying is a randomization which is effective
| against codebook attack: When a plaintext block is repeated with a
| different homophonic value it will have a different ciphertext. (The
| homophonic keying value can be really random in this use.)
|
| And codebook attacks are important enough that most systems defend
| against them with something beyond the cipher per se, typically "CBC
| mode." The homophonic keying field thus provides a way to avoid CBC
| chaining and the sequentiality that implies. (Only huge blocks -- say
| 64 bytes or larger -- would have enough uniqueness from text to avoid
| codebook attack without randomizing the plaintext or the transform.)

This has been known for many years (*not* a criticism). A common thing to throw at the inexperienced, after they've convinced themselves that RSA, say, is "mathematically unbreakable independent of message content", is: "OK, so you want to send me a one-word message - either YES or NO. How do you do it?" The "obvious" answer - pad with 0's to the block size and encrypt - of course fails miserably against a trivial codebook attack. So, where did the "mathematical security" go?

Note that for the particular proposed method to work, your encryption algorithm needs to have rather strong properties - "semantic security" and other similar notions. (At the least, all the bits must be "equally strong". *Some* of the bits of the input are known to have this property in RSA - the low bit, something like the top log n bits. Combinatorial ciphers *seem* on their face to treat all bits the same, and security against differential crypto guarantees you that certain kinds of information about the input must be hidden to some known degree, but beyond that ... it's an interesting question.)

| 2. Next, a homophonic field also can supply "authentication" --
| provided we know what value the field should have upon deciphering.
| Possibly the authentication value could be related to the message key,
| so that any block received for that message would give the same
| authentication value. Perhaps another part of the field could hold
| the message block number.

However, uses 1 and 2 contradict each other to some degree. If the homophonic field is constant within a session, an attacker can build a dictionary for that session. Not nearly as good as a global dictionary, but it may well reveal useful information. Then again, if you're using a session key, even *without* the HF, the dictionary is only good for one session - so what have you gained? So the HF has to change fairly regularly. You could, of course, change the session key for the same reason - though admittedly for many algorithms this is much more expensive.

I'm not sure I understand how the homophonic field would be used as an authentication value. If it's a pre-agreed constant, then you get no help against a dictionary attack. If it's computed by some method, and that method is attackable, then if an attacker manages to break one block, he may be able to attack the HF generation algorithm and thus forge further blocks. Could you talk more about how you'd compute an HF field for authentication?

| Indeed, with transport level ciphering, the authentication value might
| even be used as the network error-detection code (for ciphered packets
| only), thus avoiding perhaps *two* error-detection passes across the
| data.

On the other hand, this *helps* brute-force (or brute-force-like) attacks: It provides a quick test of whether you've gotten the right key, without knowing anything at all about the plaintext.

-- Jerry
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Thu, 08 Oct 1998 22:10:49 GMT From: ritter@io.com (Terry Ritter) Message-ID: <361d385a.9708199@news.io.com> References: <361CD231.2B3B@smarts.com> Newsgroups: sci.crypt Lines: 190

On Thu, 08 Oct 1998 10:54:41 -0400, in <361CD231.2B3B@smarts.com>, in sci.crypt Jerry Leichter <leichter@smarts.com> wrote:

>Terry Ritter wrote:
>[A number of interesting things.]
>
>Basically, I don't disagree with any of it. The *context* in which it
>was posted made it appear as if Mr. Ritter was proposing to use
>homophonic keying as a way to increase the effective cost of a
>brute-force attack (which *is* a goal of Rivest's all-or-nothing
>construction). As I pointed out, that won't work.

Upon rereading, I agree that it was misleading. Part of my point was that block ciphers already function similarly to all-or-nothing, with respect to mixing within their particular block. And if we can inject noise into the plaintext block, known-plaintext effectively goes away. I think the idea of injecting noise into the plaintext was part of this thread and that prompted my entry into the discussion.

>| I see two real advantages in the homophonic construction:
>|
>| 1. First, homophonic keying is a randomization which is effective
>| against codebook attack: When a plaintext block is repeated with a
>| different homophonic value it will have a different ciphertext. (The
>| homophonic keying value can be really random in this use.)
>|
>| And codebook attacks are important enough that most systems defend
>| against them with something beyond the cipher per se, typically "CBC
>| mode." The homophonic keying field thus provides a way to avoid CBC
>| chaining and the sequentiality that implies. (Only huge blocks -- say
>| 64 bytes or larger -- would have enough uniqueness from text to avoid
>| codebook attack without randomizing the plaintext or the transform.)
>
>This has been known for many years (*not* a criticism).

Fine, let's see some references (*not* a criticism). My reference is one of the Feistel patents, mid 70's, which an examiner was kind enough to point out as prior art to one of my claims. As far as I know I introduced the idea on sci.crypt several years ago in discussions about my large block designs. You participated in some of those discussions, and some of those are archived on my pages. I am not aware of a text reference.

The main problem with the homophonic construction is that it eats bits in the block. So unless we have a sizable block, we can't have a sizable homophonic field, and even then it means per-block overhead, so we need a pretty big block to make all this efficient.

>A common thing
>to throw at the inexperienced, after they've convinced themselves that
>RSA, say, is "mathematically unbreakable independent of message
>content", is: "OK, so you want to send me a one-word message - either
>YES or NO. How do you do it?" The "obvious" answer - pad with 0's to
>the block size and encrypt - of course fails miserably against a
>trivial codebook attack. So, where did the "mathematical security" go?

This is why DES is almost always used in some form of plaintext randomization such as CBC. The reason for using CBC is also not covered well in texts.

>Note that for the particular proposed method to work, your encryption
>algorithm needs to have rather strong properties - "semantic security"
>and other similar notions. (At the least, all the bits must be "equally
>strong". *Some* of the bits of the input are known to have this
>property in RSA - the low bit, something like the top log n bits.
>Combinatorial ciphers *seem* on their face to treat all bits the same,
>and security against differential crypto guarantees you that certain
>kinds of information about the input must be hidden to some known
>degree, but beyond that ... it's an interesting question.)

Well, this seems to be what we expect from a block cipher anyway. Presumably, if we can get a statistical bias which is independent of key, we can look into the plaintext. But the real problem is a statistical bias to the key, and if there is any, we can start to develop that key. So this is bad, but I am unaware of any reasoning which would prove statistical key independence for any block cipher. The possibility of doing this is one of the hoped-for but as yet unrealized advantages of the Mixing cipher construction.

Personally, I think "counter mode" is dangerous for just this reason in any Feistel structure, and would always work to avoid the situation where the total plaintext change could be a single increment.

>| 2. Next, a homophonic field also can supply "authentication" --
>| provided we know what value the field should have upon deciphering.
>| Possibly the authentication value could be related to the message key,
>| so that any block received for that message would give the same
>| authentication value. Perhaps another part of the field could hold
>| the message block number.
>
>However, uses 1 and 2 contradict each other to some degree.

Sure.

>If the
>homophonic field is constant within a session, an attacker can build a
>dictionary for that session.

Yes. Don't do that.

>Not nearly as good as a global dictionary,
>but it may well reveal useful information. Then again, if you're using
>a session key, even *without* the HF, the dictionary is only good for
>one session - so what have you gained? So the HF has to change fairly
>regularly.

Well, *part* of it does, by which I imply that part need not. I guess the problem here is that one field, with a single homophonic property, can be partitioned and used in different ways, and so becomes conceptually different fields. We can use one, the other, or both simultaneously. The whole concept is like that; the only fields in the data block are what we agree to have. The cipher sees it as just more data. And we just interpret that data differently.

>You could, of course, change the session key for the same
>reason - though admittedly for many algorithms this is much more
>expensive.
>
>I'm not sure I understand how the homophonic field would be used as an
>authentication value. If it's a pre-agreed constant, then you get no
>help against a dictionary attack.

Then just imagine that we have a separate field for authentication. We may also have a field for homophonic keying.

>If it's computed by some method, and
>that method is attackable, then if an attacker manages to break one
>block, he may be able to attack the HF generation algorithm and thus
>forge further blocks. Could you talk more about how you'd compute an
>HF field for authentication?

I think there are many possibilities with various tradeoffs.

For authentication only, I like a hash of the message key, or part of that key itself. This gives a particular authentication value which is fixed for all blocks in a particular message, and we can check that value on each block after deciphering.

For randomization, I like some sort of really random or at least unknowable value, different for each block. We just throw that away when deciphering. Both authentication and randomization can be used simultaneously in separate fields.

A different authentication alternative is to use a really random value which is fixed for all blocks, and then just require that each block have the same value -- so we compare block to block, but without inducing sequentiality. Again, the authentication value is fixed for all blocks.

Another alternative is to use an LFSR or LCG to produce values for subsequent blocks, and this means that the same field can be used for both authentication and randomization, but this returns us to hated sequentiality.

It may be that in many cases, with a large enough block, only authentication is needed.

>| Indeed, with transport level ciphering, the authentication value might
>| even be used as the network error-detection code (for ciphered packets
>| only), thus avoiding perhaps *two* error-detection passes across the
>| data.
>
>On the other hand, this *helps* brute-force (or brute-force-like)
>attacks: It provides a quick test of whether you've gotten the right
>key, without knowing anything at all about the plaintext.

Well, yeah, but the only time we can really use this stuff is if we have large blocks (and presumably large keys). We are thus past the brute-force threat anyway.

--- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary 1998-08-27: http://www.io.com/~ritter/GLOSSARY.HTM
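One way to read Ritter's separate fields, continuing the earlier AES-scaled sketch: split the a/k field into a 4-byte authentication subfield (here a truncated hash of the message key -- only one of the options he lists) and a 4-byte random subfield that is discarded on deciphering. All sizes and helper names are assumptions of this sketch:

    import hashlib, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def aes_ecb(key):
        return Cipher(algorithms.AES(key), modes.ECB())

    def encrypt_block(key, data8, auth4):
        noise = os.urandom(4)               # randomization: discarded later
        enc = aes_ecb(key).encryptor()
        return enc.update(data8 + auth4 + noise) + enc.finalize()

    def decrypt_block(key, block, auth4):
        dec = aes_ecb(key).decryptor()
        plain = dec.update(block) + dec.finalize()
        if plain[8:12] != auth4:            # fixed per-message auth value
            raise ValueError("authentication field mismatch")
        return plain[:8]                    # noise at [12:16] thrown away

    message_key = os.urandom(16)
    auth = hashlib.sha256(message_key).digest()[:4]  # hash of the message key
    c = encrypt_block(message_key, b"DATADATA", auth)
    assert decrypt_block(message_key, c, auth) == b"DATADATA"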
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Fri, 09 Oct 1998 15:17:18 -0400 From: Jerry Leichter <leichter@smarts.com> Message-ID: <361E613E.1862@smarts.com> References: <361d385a.9708199@news.io.com> Newsgroups: sci.crypt Lines: 128 | >[Homophonic keying] | >This has been known for many years (*not* a criticism). | | Fine, let's see some references (*not* a criticism). We may have different notions of what "This" refers to. However ... the classic paper on this topic is: Shafi Goldwasser, Silvio Micali, and Po Tong. Why and how to establish a private code on a public network (extended abstract). In 23rd Annual Symposium on Foundations of Computer Science, pages 134-144, Chicago, Illinois, 3-5 November 1982. IEEE. Citations. This is among the earliest crypto papers written by two people who later became very active in the crypto community, Shafi Goldwasser and Silvio Micali. I don't know what happened to Po Tong - it's the only paper with his name on it ever to appear at FOCS. It's been years since I read this paper, so I may be mis-remembering it, but I believe one of the thing it discusses is the problem of sending a single bit securely. This is a kind of least-common-denominator problem; if you can do this, you can do many things safely. And it immediately forces you to look at attacks on the system, not just the mathematics. Their proposal to deal with it is "probabilistic encryption", which is homophonic keying under another name: For each plaintext, there must be (very) many possible cryptotexts. The actual way they *implement* this is very different from Mr. Ritter's proposal, however. The same idea obviously occurs the "salt" used in Unix password encryption, and for that matter in the choice of a random IV in CBC encryption. All of these are related to the indicators used in systems going back at least to World War I. (An indicator was an extra bit of keying material, chosen - nominally at random - for each message, and sent along with the message in the clear. It's whole purpose is to keep the opponent from seeing any two messages encrypted with exactly the same key. As best I can recall, Kahn mentions this in The Code- breakers.) I used such a thing in a stream cipher system I developed in the early '80's, and never thought I was inventing anything. The idea of using random *padding* has been proposed too many times to track down; and nonces are used in many protocols for authentication in the style Mr. Ritter proposes. However, I will not claim that I've previously seen the *specific* configuration Mr. Ritter discusses, in which the "indicator" is used, not to change the key, but to perturb each block of a block cipher. This may very well be new. | >I'm not sure I understand how the homophonic field would be used as | >an authentication value. If it's a pre-agreed constant, then you get | >no help against a dictionary attack. | | Then just imagine that we have a separate field for authentication. | We may also have a field for homophonic keying. | | >If it's computed by some method, and | >that method is attackable, then if an attacker manages to break one | >block, he may be able to attack the HF generation algorithm and thus | >forge further blocks. Could you talk more about how you'd compute an | >HF field for authentication? | | I think there are many possibilities with various tradeoffs.... | | For authentication only, I like a hash of the message key, or part of | that key itself. 
This gives a particular authentication value which | is fixed for all blocks in a particular message, and we can check that | value on each block after deciphering.... | | A different authentication alternative is to use a really random value | which is fixed for all blocks, and then just require that each block | have the same value -- so we compare block to block, but without | inducing sequentiality. Again, the authentication value is fixed for | all blocks. Someone who doesn't know the session key can't produce cryptotext that will decrypt to anything reasonable. (If, in such a protocol, it's hard to tell if something is "reasonable" - i.e., there is almost no redundancy in valid messages - then there are many ways to add such redundancy. This is certainly one way, though perhaps not the best.) On the other hand, someone who *does* have the key can decrypt a message and will then know the authentication value. This strikes me as a very weak approach. (BTW, recall Matt Blaze's attack against the Clipper LEAF algorithm. It used this kind of approach, though it was vulnerable because its authenticator was only 16 bits long.) | Another alternative is to use an LFSR or LCG to produce values for | subsequent blocks, and this means that the same field can be used for | both authentication and randomization, but this returns us to hated | sequentiality.... If you're going to do that, why not just use a MAC? That has its own internal strength; if it uses a key independent of the session key, it provides protection even if the attacker manages to get the session key. "Sequentiality" is an issue only if you make it one - the MAC can be over as large or as small a part of the data as you like. In practice, for most purposes there's a semantic "message size" that's much longer than one block, and little point to using a MAC over things shorter than one meaningful message. However, if you really want it, a per-block MAC is indeed reasonable if the block size is large enough. | >| Indeed, with transport level ciphering, the authentication value | >|might even be used as the network error-detection code (for ciphered | >|packets only), thus avoiding perhaps *two* error-detection passes | >|across the data. | > | >On the other hand, this *helps* brute-force (or brute-force-like) | >attacks: It provides a quick test of whether you've gotten the right | >key, without knowing anything at all about the plaintext. | | Well, yeah, but the only time we can really use this stuff is if we | have large blocks (and presumably large keys). We are thus past | the brute-force threat anyway. Pure brute-force, yes - independent of block size, any cryptosystem designed today can easily, and should, be designed to make pure brute-force attacks impossible. However, speaking abstractly, there may very well be attacks that limit the possible keyspace by analytic means, but still require a huge (but now possible) search over what's left. All sorts of mathematical results come down to "case analysis", and computers have already been used where there are a huge number of cases to try out. Differential crypto has this form, of course - though it doesn't look at a large number of possible keys, but at a large number of key *differentials*. Anyway, if there were such an attack on an algorithm, an internal checksum would help it. (On the other hand, an internal *MAC* with a secret key wouldn't help at all. 
There's an interesting tradeoff between an internal MAC (of the cleartext) and an external MAC (of the ciphertext): If the opponent learns the MAC key, but not the cipher key, for an internal MAC, he might have a brute-force-like attack; while for an external MAC, he could create fake ciphertext (though he wouldn't be able to control what it decrypted to).) -- Jerry
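The external-MAC alternative Leichter mentions is easy to state in code. A minimal sketch using Python's standard hmac module; the function names and the choice of SHA-256 are illustrative, and the essential point is only that the MAC key is independent of the session (cipher) key:

import hmac, hashlib

def mac_tag(mac_key: bytes, ciphertext: bytes) -> bytes:
    # External MAC: computed over the ciphertext, with a key
    # independent of the session key.
    return hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

def mac_check(mac_key: bytes, ciphertext: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking where the tags differ.
    return hmac.compare_digest(mac_tag(mac_key, ciphertext), tag)

tag = mac_tag(b"independent mac key", b"some ciphertext bytes")
assert mac_check(b"independent mac key", b"some ciphertext bytes", tag)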
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Sat, 10 Oct 1998 08:03:43 GMT From: ritter@io.com (Terry Ritter) Message-ID: <361f145f.3782561@news.io.com> References: <361E613E.1862@smarts.com> Newsgroups: sci.crypt Lines: 55 On Fri, 09 Oct 1998 15:17:18 -0400, in <361E613E.1862@smarts.com>, in sci.crypt Jerry Leichter <leichter@smarts.com> wrote: >| >[Homophonic keying] >| >This has been known for many years (*not* a criticism). >| >| Fine, let's see some references (*not* a criticism). >[...] >The same idea obviously occurs in the "salt" used in Unix password >encryption, and for that matter in the choice of a random IV in CBC >encryption. All of these are related to the indicators used in systems >going back at least to World War I. (An indicator was an extra bit of >keying material, chosen - nominally at random - for each message, and >sent along with the message in the clear. Its whole purpose is to keep >the opponent from seeing any two messages encrypted with exactly the >same key. As best I can recall, Kahn mentions this in The >Codebreakers.) I used such a thing in a stream cipher system I developed in >the early '80's, and never thought I was inventing anything. This last smirking comment would be a lot more amusing if: a) Horst Feistel had not thought the homophonic construction was new; b) IBM patent lawyers had not thought it was new; c) The PTO examiner had not thought it was new; and d) A patent had not been granted on it. This of course happened in the mid-70's, so if Jerry had been just a few years earlier, he could have intervened, because surely anybody who ever made a stream cipher with a key indicator would know how to make a homophonic block cipher. There is a fundamental difference between the homophonic construction and the simple randomization of a salt, or a message key, or the CBC IV: In the homophonic construction, each possible plaintext "letter" has multiple unique ciphertext "codes" which represent that letter. This is the classic meaning of a "homophonic" cipher, and the implication is that each "letter" has its own unique subset of ciphertext codes. But CBC and other randomization techniques are not confined in this way, and in general can take any plaintext to any possible ciphertext. This is a striking difference, and one implication is that a randomized cipher cannot be uniquely deciphered without extra information (such as the preceding-block ciphertext, in CBC mode), but a homophonic cipher *can*. This is block independence, and it can be useful. If someone else has comments on this, perhaps we could have a more reasonable discussion. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary 1998-08-27: http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Mon, 12 Oct 1998 11:16:45 -0400 From: Jerry Leichter <leichter@smarts.com> Message-ID: <36221D5D.1628@smarts.com> References: <361f145f.3782561@news.io.com> Newsgroups: sci.crypt Lines: 47 | >The same idea obviously occurs in the "salt" used in Unix password | >encryption, and for that matter in the choice of a random IV in CBC | >encryption. All of these are related to the indicators used in | >systems going back at least to World War I. (An indicator was an | >extra bit of keying material, chosen - nominally at random - for each | >message, and sent along with the message in the clear. Its whole | >purpose is to keep the opponent from seeing any two messages | >encrypted with exactly the same key. As best I can recall, Kahn | >mentions this in The Codebreakers.) I used such a thing in a stream | >cipher system I developed in the early '80's, and never thought I was | >inventing anything. | | This last smirking comment would be a lot more amusing if: | [Irrelevant stuff] As always with Mr. Ritter, attempts at reasoned discussion eventually turn into selective quotations and insults. I never thought what *I* did was original because I simply applied ideas taken from public sources, many of which I listed. The *very next paragraph*, which Mr. Ritter chose not to quote, read: The idea of using random *padding* has been proposed too many times to track down; and nonces are used in many protocols for authentication in the style Mr. Ritter proposes. However, I will not claim that I've previously seen the *specific* configuration Mr. Ritter discusses, in which the "indicator" is used, not to change the key, but to perturb each block of a block cipher. This may very well be new. Mr. Ritter indicates that it is, indeed, new. Fine; congratulations to Mr. Ritter on developing a clever, original idea. (Absolutely *no* condescension intended, though I'm sure Mr. Ritter will manage to take offense, regardless of what I say here.) The very *first* paragraph of my message - also not quoted - read: We may have different notions of what "This" refers to. Given all the additional discussion in Mr. Ritter's message about CBC, I *still* don't understand exactly what it is *he* thinks our disagreement is about. Since he has a patent, perhaps he can cite the patent number - or, even better, provide the text of the patent or one of his white papers that describes what he has that's new. I will say nothing further on this subject. -- Jerry
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: 11 Oct 1998 23:55:29 GMT From: olson@umbc.edu (Bryan G. Olson; CMSC (G)) Message-ID: <6vrghh$roq$1@news.umbc.edu> References: <361d385a.9708199@news.io.com> Newsgroups: sci.crypt Lines: 30 Terry Ritter (ritter@io.com) wrote: : My reference is one of the Feistel patents, mid 70's, which an : examiner was kind enough to point out as prior art to one of my : claims. As far as I know I introduced the idea on sci.crypt several : years ago in discussions about my large block designs. You : participated in some of those discussions, and some of those are : archived on my pages. I am not aware of a text reference. I'm not sure what specific result you're addressing, but I'll once again recommend the paper "Randomized Encryption Techniques" by Ronald Rivest and Alan Sherman from Crypto 82. Of course it post-dates Feistel, but pre-dates discussions on sci.crypt. [...] : Personally, I think "counter mode" is dangerous for just this reason : in any Feistel structure, and would always work to avoid the situation : where the total plaintext change could be a single increment. I think the same intuition supports Rivest and Sherman's idea of adding randomness by applying an error correction code, and then flipping as many bits as the code can correct. Unlike the simple random field, it doesn't tell the attacker the location of known and unknown plaintext bits. --Bryan
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Mon, 12 Oct 1998 14:09:53 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <3621FFA1.5C10765A@stud.uni-muenchen.de> References: <6vrghh$roq$1@news.umbc.edu> Newsgroups: sci.crypt Lines: 18 Bryan G. Olson; CMSC (G) wrote: > > Terry Ritter (ritter@io.com) wrote: > > ................................... > I think the same intuition supports Rivest and Sherman's idea > of adding randomness by applying an error correction code, and > then flipping as many bits as the code can correct. Unlike > the simple random field, it doesn't tell the attacker the location > of known and unknown plaintext bits. This is clearly a homophonic substitution. All the code words that error-correct to a given code word map to that one code word, i.e. a many-to-one (homophonic) mapping. (Note that one assumes the transmission itself is error-free, which is certainly the case at the application level.) M. K. Shen
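The Rivest-Sherman randomization that Olson describes, and that Shen identifies as homophonic substitution, can be sketched with the smallest standard example. This is an illustration only, under the assumption of a Hamming(7,4) code (the paper does not mandate any particular code): each 4-bit plaintext has eight homophones -- the clean codeword plus its seven single-bit corruptions -- and all of them decode back to the same data:

import random

def hamming74_encode(d1, d2, d3, d4):
    # Codeword layout (1-based positions): p1 p2 d1 p3 d2 d3 d4
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    # The syndrome is the 1-based position of a single flipped bit (0 = none).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    c = c[:]
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

# Randomize before enciphering: flip any one bit (or none). Every choice
# yields a distinct 7-bit homophone that decodes to the same 4 data bits,
# and the flip position reveals nothing about which bits carry plaintext.
for nibble in range(16):
    bits = [(nibble >> i) & 1 for i in range(4)]
    cw = hamming74_encode(*bits)
    flip = random.randrange(8)      # 0 = no flip, 1..7 = flip that position
    if flip:
        cw[flip - 1] ^= 1
    assert hamming74_decode(cw) == bits

The many-to-one decode step is exactly the homophonic mapping Shen describes; the block cipher would then be applied to the randomized codeword.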
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Tue, 13 Oct 1998 04:54:23 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3622dc88.6129964@news.io.com> References: <6vrghh$roq$1@news.umbc.edu> Newsgroups: sci.crypt Lines: 89 On 11 Oct 1998 23:55:29 GMT, in <6vrghh$roq$1@news.umbc.edu>, in sci.crypt olson@umbc.edu (Bryan G. Olson; CMSC (G)) wrote: >Terry Ritter (ritter@io.com) wrote: > >: My reference is one of the Feistel patents, mid 70's, which an >: examiner was kind enough to point out as prior art to one of my >: claims. As far as I know I introduced the idea on sci.crypt several >: years ago in discussions about my large block designs. You >: participated in some of those discussions, and some of those are >: archived on my pages. I am not aware of a text reference. > >I'm not sure what specific result you're addressing, but >I'll once again recommend the paper "Randomized Encryption >Techniques" by Ronald Rivest and Alan Sherman from Crypto 82. >Of course it post-dates Feistel, but pre-dates discussions >on sci.crypt. Thanks for the reference; until I can get to it, I will assume that it does indeed cover the homophonic use of a block cipher (the use of a field in the plaintext block for keying or noise). Presumably it references the Feistel patent as a prior publication in the field -- if not, the paper must have been rather a hoot for the boys at IBM. Again -- as far as I know -- I did introduce the homophonic block cipher construction into a previous discussion on sci.crypt, a discussion to which you were also a party. Perhaps you would care to review the archives and tell us exactly what message did present this construction. I think that message was my own, but I could be wrong. In either case, who cares? That comment was a reply to the earlier message, and I think it was appropriate in context. As far as I remember, the paper mentioned in that discussion of two years ago was Probabilistic Encryption, 1984, Goldwasser and Micali. That basically deals with number-theoretic homophonic ciphering, but without using that name, without giving the homophonic block cipher construction and also without reference to the classic techniques. That paper is, therefore, different from the technique I was trying to discuss before the conversation took its negative turn. But a paper is not a text, and the fact that a 1982 reference has not made it into the crypto texts should tell us just how well "known" that reference and this construction really are. (This refers to a deleted part of my earlier message, which also nobody cares about.) >[...] >: Personally, I think "counter mode" is dangerous for just this reason >: in any Feistel structure, and would always work to avoid the situation >: where the total plaintext change could be a single increment. > >I think the same intuition supports Rivest and Sherman's idea >of adding randomness by applying an error correction code, and >then flipping as many bits as the code can correct. Unlike >the simple random field, it doesn't tell the attacker the location >of known and unknown plaintext bits. It seems to me that adding noise to an error-correcting code is in itself the construction of a homophonic code. It does seem odd to go to that trouble, only to do it again in a block cipher field. I guess this would make more sense if they were using the whole plaintext block, and thus just using the block cipher for strength, but I don't have the article at hand. 
Surely we can agree that doing the necessary error-correction on each deciphering will add overhead to what otherwise would be a fast process. As for my intuition, either "counter mode" can be a problem for Feistel structures, or it cannot: * If counter mode *is* a problem, there are a lot of proposals around that recommend it and so need to be changed. * But if counter mode is *not* a problem, then it is also no problem for the homophonic construction. As I see it, the problem with counter mode is that a counting value produces a strong and potentially unique change signal for each different bit position. And when the counting field is at the right side of the block, the bit with the strongest signal is at the edge of the block, where supposedly "ideal" block cipher properties may be at their weakest. This is certainly something to avoid in any construction, including the homophonic block cipher construction. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary 1998-08-27: http://www.io.com/~ritter/GLOSSARY.HTM
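Ritter's "single increment" worry can be checked directly: consecutive counter values differ in only a couple of low-order, highly structured bit positions, which is exactly the kind of tightly controlled input difference a differential attack wants. A few lines of Python, purely illustrative:

def hamming_distance(a, b):
    return bin(a ^ b).count("1")

# Average bit-difference between consecutive counter values is about
# 2 bits, always concentrated at the low-order end of the block.
dists = [hamming_distance(i, i + 1) for i in range(1 << 16)]
print(sum(dists) / len(dists))   # prints ~2.0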
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: 14 Oct 1998 08:47:19 GMT From: olson@umbc.edu (Bryan G. Olson; CMSC (G)) Message-ID: <701oen$8go$1@news.umbc.edu> References: <3622dc88.6129964@news.io.com> Newsgroups: sci.crypt Lines: 57 Terry Ritter wrote: : Bryan G. Olson wrote: [...] : >I'm not sure what specific result you're addressing, but : >I'll once again recommend the paper "Randomized Encryption : >Techniques" by Ronald Rivest and Alan Sherman from Crypto 82. : >Of course it post-dates Feistel, but pre-dates discussions : >on sci.crypt. : Thanks for the reference; until I can get to it, I will assume that it : does indeed cover the homophonic use of a block cipher (the use of a : field in the plaintext block for keying or noise). Presumably it : references the Feistel patent as a prior publication in the field -- : if not, the paper must have been rather a hoot for the boys at IBM. The "use of a field" is one way to do it. The paper cites 30 other works, but not the Feistel patent. : Again -- as far as I know -- I did introduce the homophonic block : cipher construction into a previous discussion on sci.crypt, a : discussion to which you were also a party. Yes, it's come up several times, and I think you're right that no one cares who put it on Usenet first. I usually point people to Rivest and Sherman because I think it covers the options nicely. (Maybe I'm biased since Dr. Sherman was my grad advisor). : As far as I remember, the paper mentioned in that discussion of two : years ago was Probabilistic Encryption, 1984, Goldwasser and Micali. : That basically deals with number-theoretic homophonic ciphering, but : without using that name, without giving the homophonic block cipher : construction and also without reference to the classic techniques. Correct. The Rivest and Sherman paper considers both secret key and public key methods, and references classic ciphers. Rivest and Sherman cite a 1982 version of the Goldwasser and Micali paper. : But a paper is not a text, and the fact that a 1982 reference has not : made it into the crypto texts should tell us just how well "known" : that reference and this construction really are. They get one sentence in the Handbook. [...] : * But if counter mode is *not* a problem, then it is also no problem : for the homophonic construction. True, but if counter mode is secure, then homophonic block ciphers are not needed. --Bryan
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Wed, 14 Oct 1998 15:44:03 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3624c694.3976180@news.io.com> References: <701oen$8go$1@news.umbc.edu> Newsgroups: sci.crypt Lines: 73 On 14 Oct 1998 08:47:19 GMT, in <701oen$8go$1@news.umbc.edu>, in sci.crypt olson@umbc.edu (Bryan G. Olson; CMSC (G)) wrote: >[...] >: But a paper is not a text, and the fact that a 1982 reference has not >: made it into the crypto texts should tell us just how well "known" >: that reference and this construction really are. > >They get one sentence in the Handbook. Actually there are a couple of references to "randomized encryption" in the Handbook of Applied Cryptography. Paragraph 7.3 is a definition of "randomized encryption," which is basically a homophonic cipher with random keying. There is no use of the term "homophonic" nor reference to the classical systems or literature, nor is there any hint of authentication. Paragraph 8.22 names various ciphers which use randomization. They give three advantages: "(i) increasing the effective size of the plaintext message space. "(ii) precluding or decreasing the effectiveness of chosen-plaintext attacks by virtue of a one-to-many mapping of plaintext to ciphertext; and "(iii) precluding or decreasing the effectiveness of statistical attacks by leveling the a priori probability distribution of inputs." But none of these says anything about authentication, and so fails to describe one of the main opportunities and advantages. One might well wonder whether the Rivest-Sherman reference is similar. >[...] >True, but if counter mode is secure, then homophonic block >ciphers are not needed. False. The homophonic construction has more utility than counter mode: * Counter mode is dangerous specifically because it is not randomly keyed. But homophonic mode can be randomly keyed. * Homophonic mode uses one or more fields in the plaintext block. This means that The Opponent simply does not have the full plaintext block to use in known plaintext or defined plaintext attacks. * Randomized plaintext ciphers (like CBC mode) need other information (like the ciphertext of the previous block) to support deciphering. So when a packet / block is delayed, the next received CBC block can't be immediately deciphered, but instead must wait for delivery. But homophonic blocks can be deciphered independently and immediately. * Counter mode does not provide authentication, but a homophonic block cipher can be cryptographically self-authenticating. Homophonic mode provides a cryptographic keyed nonlinear error check coding that cannot be reproduced without both the authentication field value and the key. * The general idea of placing a hidden identifier in each block of plaintext is similar to putting a sending number on a fax, or the ID code that IBM used to have on their copy machines; it can be a way to distinguish different users who have the same key access. * It is often possible to combine several of these advantages (although perhaps not all) in the same homophonic field. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary 1998-08-27: http://www.io.com/~ritter/GLOSSARY.HTM
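The block-independence point in Ritter's list is easy to demonstrate. In the following toy sketch (illustrative only: the 16-bit "block cipher" is just a keyed random permutation, and the 12/4 split between data and keying bits is arbitrary), CBC deciphering of a block needs the previous ciphertext block, while the homophonic blocks decipher alone, in any order of arrival:

import random

def make_cipher(key):
    # Toy stand-in for a block cipher: a keyed random permutation
    # of the 16-bit block space. Not secure; merely invertible.
    rng = random.Random(key)
    perm = list(range(1 << 16))
    rng.shuffle(perm)
    inv = {c: p for p, c in enumerate(perm)}
    return (lambda p: perm[p]), (lambda c: inv[c])

enc, dec = make_cipher(key=1234)

def cbc_encrypt(blocks, iv):
    out, prev = [], iv
    for p in blocks:
        prev = enc(p ^ prev)
        out.append(prev)
    return out

def cbc_decrypt_one(c_i, c_prev):
    # CBC: cannot run until the previous ciphertext block has arrived.
    return dec(c_i) ^ c_prev

def hom_encrypt_one(data12):
    # Homophonic block: 12 data bits plus 4 random keying bits.
    return enc((data12 << 4) | random.randrange(16))

def hom_decrypt_one(c):
    return dec(c) >> 4              # the keying bits are simply discarded

blocks = [0x123, 0x456, 0x789]
cbc_ct = cbc_encrypt(blocks, iv=0xBEEF)
assert cbc_decrypt_one(cbc_ct[2], cbc_ct[1]) == blocks[2]  # needs block 1 first

ct = [hom_encrypt_one(b) for b in blocks]
# Homophonic blocks decipher out of order, with no dependence on neighbors:
assert [hom_decrypt_one(c) for c in reversed(ct)] == blocks[::-1]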
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: 14 Oct 1998 18:55:39 GMT From: olson@umbc.edu (Bryan G. Olson; CMSC (G)) Message-ID: <702s3b$qld$1@news.umbc.edu> References: <3624c694.3976180@news.io.com> Newsgroups: sci.crypt Lines: 55 Terry Ritter (ritter@io.com) wrote: : On 14 Oct 1998 08:47:19 GMT, in <701oen$8go$1@news.umbc.edu>, in : sci.crypt olson@umbc.edu (Bryan G. Olson; CMSC (G)) wrote: : >They get one sentence in the Handbook. : Actually there are a couple of references to "randomized encryption" : in the Handbook of Applied Cryptography. By "they" I meant Rivest and Sherman for the Crypto 82 paper. : Paragraph 7.3 is a definition of "randomized encryption," which is : basically a homophonic cipher with random keying. There is no use of : the term "homophonic" nor reference to the classical systems or : literature, nor is there any hint of authentication. : Paragraph 8.22 names various ciphers which use randomization. They : give three advantages: : "(i) increasing the effective size of the plaintext message space. : "(ii) precluding or decreasing the effectiveness of chosen-plaintext : attacks by virtue of a one-to-many mapping of plaintext to ciphertext; : and : "(iii) precluding or decreasing the effectiveness of statistical : attacks by leveling the a priori probability distribution of inputs." Those are adapted from Rivest and Sherman. : But none of these says anything about authentication, and so fails to : describe one of the main opportunities and advantages. One might well : wonder whether the Rivest-Sherman reference is similar. That's because it's the redundancy, not the randomness that provides authentication. For authentication we need a cipher in which for any given key the vast majority of ciphertext blocks do not map to any legal plaintext. That has nothing to do with homophones. : >[...] : >True, but if counter mode is secure, then homophonic block : >ciphers are not needed. : False. The homophonic construction has more utility than counter : mode: Did you miss the direction of the implication? Randomized block ciphers are at least as secure as counter mode. The other advantages you cite are due to adding a redundant field (except the one that compares to CBC mode instead of counter mode). --Bryan
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Thu, 15 Oct 1998 03:31:51 GMT From: ritter@io.com (Terry Ritter) Message-ID: <36256c79.6945376@news.io.com> References: <702s3b$qld$1@news.umbc.edu> Newsgroups: sci.crypt Lines: 67 On 14 Oct 1998 18:55:39 GMT, in <702s3b$qld$1@news.umbc.edu>, in sci.crypt olson@umbc.edu (Bryan G. Olson; CMSC (G)) wrote: >[...] >: But none of these says anything about authentication, and so fails to >: describe one of the main opportunities and advantages. One might well >: wonder whether the Rivest-Sherman reference is similar. > >That's because it's the redundancy, not the randomness >that provides authentication. For authentication we need >a cipher in which for any given key the vast majority of >ciphertext blocks do not map to any legal plaintext. That >has nothing to do with homophones. On the contrary, that is what homophones do, provided we can label them and distinguish between them. The data channel is homophonic. The authentication channel is not. >: >[...] >: >True, but if counter mode is secure, then homophonic block >: >ciphers are not needed. > >: False. The homophonic construction has more utility than counter >: mode: > >Did you miss the direction of the implication? Randomized >block ciphers are at least as secure as counter mode. The >other advantages you cite are due to adding a redundant >field (except the one that compares to CBC mode instead of >counter mode). That counts. In starting this off, I responded to the idea of mixing noise with data. I pointed out that this idea could be extended to do both authentication and a form of keying. I cited mid-70's Feistel, who was cited to me by the PTO. Then I got various learned responses that this was "known." Well, sure it was, and it was "known" from Feistel. None of the references cited were before that. All of which is irrelevant to the real issue, which is that the homophonic block cipher construction -- essentially "noise" in the plaintext "data" -- can be advantageous, but nobody really knows about it. The various popular references *I* have don't say anything about doing authentication like this. And the texts *I* have don't describe that. Is the homophonic block cipher construction presented in Applied Cryptography? Is it in the Handbook of Applied Cryptography? If not, the technique is hardly likely to be known by the ordinary reader of those texts. I claim there can be real advantages to using the homophonic block cipher construction, especially when we have large blocks. Surely it must be hard to disagree with this, yet based on extensive prior experience, I suspect that you somehow will. But if so, let's see some quotes. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary 1998-08-27: http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Thu, 15 Oct 1998 17:42:51 GMT From: jsavard@tenMAPSONeerf.edmonton.ab.ca (John Savard) Message-ID: <3626307e.4493624@news.prosurfr.com> References: <36256c79.6945376@news.io.com> Newsgroups: sci.crypt Lines: 58 ritter@io.com (Terry Ritter) wrote, in part: >On 14 Oct 1998 18:55:39 GMT, in <702s3b$qld$1@news.umbc.edu>, in >sci.crypt olson@umbc.edu (Bryan G. Olson; CMSC (G)) wrote: >>[...] >>: But none of these says anything about authentication, and so fail to >>: describe one of the main opportunities and advantages. One might well >>: wonder whether the Rivest-Sherman reference is similar. >>That's because it's the redundancy, not the randomness >>that provides authentication. For authentication we need >>a cipher in which for any given key the vast majority of >>ciphertext blocks do not map to any legal plaintext. That >>has nothing to do with homophones. >On the contrary, that is what homophones do, provided we can label >them and distinguish between them. The data channel is homophonic. >The authentication channel is not. >Is the homophonic block cipher construction presented in Applied >Cryptography? Is it in the Handbook of Applied Cryptography? If not, >the technique is hardly likely to be known by the ordinary reader of >those texts. Applied Cryptography does indeed refer to a system where there are many collisions in one direction but no collisions in the other direction, although it may not use the same terminology as you do for it. Simple authentication - what Bryan Olson is referring to - involves redundancy before encryption - i.e., an encrypted checksum of the message. One could say that if one replaced the checksum by random bits, one had homophones, but since the wrong checksum combinations aren't valid, one could indeed quibble about the term being applicable in that case, and I'd have to agree with him that there is a risk of causing confusion by using the term here. (OTOH, if the authenticator were encrypted by an extra level of encryption, and if messages with invalid authentication were deliberately generated to create confusion to attackers, then the invalid messages would be homophones at one level.) However, what Applied Cryptography was referring to that sounds more like what you may be talking about was a digital signature method in which collisions of one type made it likely that a forgery would be detected by denying the information needed for an attack. I shall have to look at my copy this evening to refresh my memory here. AC doesn't mention, however, the idea of adding a few extra bits to every block, either for variation (like an IV) or for authentication. These techniques are only discussed on a per-message basis (of course, a communications block, say 1024 bytes, could be handled as a message, but the idea of doing it for each 64-bit or 128-bit encryption block is definitely not noted). John Savard http://members.xoom.com/quadibloc/index.html
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Sat, 17 Oct 1998 17:20:10 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3628d1c6.6287748@news.io.com> References: <3626307e.4493624@news.prosurfr.com> Newsgroups: sci.crypt Lines: 126 On Thu, 15 Oct 1998 17:42:51 GMT, in <3626307e.4493624@news.prosurfr.com>, in sci.crypt jsavard@tenMAPSONeerf.edmonton.ab.ca (John Savard) wrote: >[...] >>Is the homophonic block cipher construction presented in Applied >>Cryptography? Is it in the Handbook of Applied Cryptography? If not, >>the technique is hardly likely to be known by the ordinary reader of >>those texts. > >Applied Cryptography does indeed refer to a system where there are >many collisions in one direction but no collisions in the other >direction, although it may not use the same terminology as you do for >it. So the never-ending argument continues about whether or not the technique of adding "extra information" to the plaintext block of a block cipher for keying and authentication is well known. Jeez, I guess we could just take a poll. Did *you* know that one could authenticate individual blocks in exactly this way? If so, where did you learn it? (Certainly I have discussed this here before, and described it also on my pages and my Crypto Glossary.) This technique has little to do with a "system," but instead is confined to individual blocks. That is the meaning of block independence, which is one of the advantages of the technique. And the technique is most useful with *large* blocks, which is one of the advantages of having *large* blocks. >Simple authentication - what Bryan Olson is referring to - involves >redundancy before encryption - i.e., an encrypted checksum of the >message. One could say that if one replaced the checksum by random >bits, one had homophones, but since the wrong checksum combinations >aren't valid, I believe I pointed out previously that the error-correcting-code-with-errors example was indeed a homophonic coding. But it is not a *secret* coding; it is not a homophonic cipher. But we can accomplish the same end *without* using an error-correcting code. Which means we have an advance, folks. >one could indeed quibble about the term being applicable >in that case, and I'd have to agree with him that there is a risk of >causing confusion by using the term here. Oh, please. The only real confusion here is that some people cannot stand their crypto background or favorite books to be shown up as not including a simple technique of significant value, and they are willing to confuse and distort the conversation to hide that bitter truth. If that were not the case, we would see direct quotes that we could all evaluate -- but we don't. Or we might see arguments confined to the worth of the technique (instead of whether we all know about it) -- but we don't. We might see issues formulated to be clear and testable so they could actually be resolved -- but we don't. There is nothing new about this. It occurs all the time on sci.crypt. It really is very embarrassing. The reason I sometimes use the same term for these fields is that exactly the same process occurs. To the cipher there is no difference between any of these fields. When one labels these fields as having a particular purpose, one is simply labeling one's own interpretations. And while that can be useful, believing the joke and demanding that everyone else recognize it is something else again. 
>(OTOH, if the authenticator >were encrypted by an extra level of encryption, and if messages with >invalid authentication were deliberately generated to create confusion >to attackers, then the invalid messages would be homophones at one >level.) Simply having the ability to select from among a wide array of ciphertext code values for a particular plaintext is homophonic. Please feel free to see: http://www.io.com/~ritter/GLOSSARY.HTM#Homophonic http://www.io.com/~ritter/GLOSSARY.HTM#HomophonicSubstitution Does a classical homophonic cipher not become homophonic until the first homophone goes out? When we confine the input data to part of a block we get the homophonic effect. There is very little here to discuss about agreeing or not agreeing or seeing things one way or the other. This construction creates homophonic blocks from normal block ciphers, it can be useful, and it is generally unknown. If you disagree, give quotes. >However, what Applied Cryptography was referring to that sounds more >like what you may be talking about was a digital signature method in >which collisions of one type made it likely that a forgery would be >detected by denying the information needed for an attack. I shall have >to look at my copy this evening to refresh my memory here. > >AC doesn't mention, however, the idea of adding a few extra bits to >every block, either for variation (like an IV) or for authentication. Since this last comment *is* what I described (except for the "few" part), I fail to see the point of your previous comments. If the technique is not known in common texts, it is not well known. My point always has been that this useful and valuable technique is *not* well known, and is *not* in the common texts. Please address the point. Include quotes. >These techniques are only discussed on a per-message basis (of course, >a communications block, say 1024 bytes, could be handled as a message, >but the idea of doing it for each 64-bit or 128-bit encryption block >is definitely not noted). And, if those techniques *are* only discussed on a per-message basis, they are obviously *not* the technique of this discussion. So your point would be? --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Sun, 18 Oct 1998 00:14:28 -0600 From: jgfunj@EnqvbSerrGrknf.pbz (W T Shaw) Message-ID: <jgfunj-1810980014280001@207.22.198.205> References: <3628d1c6.6287748@news.io.com> Newsgroups: sci.crypt Lines: 25 In article <3628d1c6.6287748@news.io.com>, ritter@io.com (Terry Ritter) wrote: > On Thu, 15 Oct 1998 17:42:51 GMT, in > <3626307e.4493624@news.prosurfr.com>, in sci.crypt > jsavard@tenMAPSONeerf.edmonton.ab.ca (John Savard) wrote: > > >These techniques are only discussed on a per-message basis (of course, > >a communications block, say 1024 bytes, could be handled as a message, > >but the idea of doing it for each 64-bit or 128-bit encryption block > >is definitely not noted). > > And, if those techniques *are* only discussed on a per-message basis, > they are obviously *not* the technique of this discussion. I read in several places where a specified signature length is considered. There need not be any particular size. Indeed, the MACs that I generate are additive; I seamlessly add the results from sequential blocks together. One could hash this down to any length, but that is not particularly helpful in finding where a corruption of the original might have occurred, which I would consider most important in some cases. -- --- Insanity means doing the same thing over and over again and expecting different results...like CDA2. --- Decrypt with ROT13 to get correct email address.
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: 17 Oct 1998 08:53:11 GMT From: olson@umbc.edu (Bryan G. Olson; CMSC (G)) Message-ID: <709ltn$g09$1@news.umbc.edu> References: <36256c79.6945376@news.io.com> Newsgroups: sci.crypt Lines: 72 Terry Ritter (ritter@io.com) wrote: : On 14 Oct 1998 18:55:39 GMT, in <702s3b$qld$1@news.umbc.edu>, in : sci.crypt olson@umbc.edu (Bryan G. Olson; CMSC (G)) wrote: : >[...] : >: But none of these says anything about authentication, and so fails to : >: describe one of the main opportunities and advantages. One might well : >: wonder whether the Rivest-Sherman reference is similar. : > : >That's because it's the redundancy, not the randomness : >that provides authentication. For authentication we need : >a cipher in which for any given key the vast majority of : >ciphertext blocks do not map to any legal plaintext. That : >has nothing to do with homophones. : On the contrary, that is what homophones do, provided we can label : them and distinguish between them. The data channel is homophonic. : The authentication channel is not. Cryptologists talk about the sender encrypting a plaintext into a ciphertext which gets transmitted to the receiver. You seem to have this two-channel model. Where did you get it? Why do you need it? What do you think "channel" means? Homophones are multiple ciphertexts induced by the same key and the same plaintext. That's what they are; what they do is increase the entropy of the ciphertext. They do not provide authentication, since each homophone has a corresponding legitimate plaintext. Suppose we expand a plaintext block by n bits. Say the n bit expansion adds k bits of entropy. Then on average each plaintext has 2^k homophones. A random text has a probability of 1/2^(n-k) of corresponding to some expanded plaintext, and we can use this for authentication by rejecting all those that don't decrypt to a possible block. Under the random cipher model an adversary has the 1/2^(n-k) chance of successful forgery on each block. That n-k number is the measure of redundancy. The number of truly random bits doesn't affect the chance at all. That qualification "provided we can label them and distinguish between them" hides what's really going on. You have to add redundancy to get authentication, and then you can throw out the randomness and the authentication doesn't go away. That's not to say the randomness is worthless - it does just what Rivest and Sherman said it does. In practice our ciphers are not random permutations, so the advantages cited may increase security of both privacy and authentication. [...] : I claim there can be real advantages to using the homophonic block : cipher construction, especially when we have large blocks. Surely it : must be hard to disagree with this, yet based on extensive prior : experience, I suspect that you somehow will. But if so, let's see : some quotes. What are you talking about? I've been recommending the Rivest and Sherman paper on this group for years. It's the major paper advocating the technique and analyzing the advantages. As Rivest and Sherman state "The goal of randomized encryption is increased security". That security may apply to both privacy and authentication. But homophones do not provide a means to authenticate. Redundancy, not randomness, offers authentication. --Bryan
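Olson's 1/2^(n-k) figure is easy to check numerically under his random cipher model. A toy simulation, with arbitrary illustrative parameters: an 8-bit data field plus an n = 8 bit expansion field carrying only k = 3 bits of entropy, so a random forged block should be accepted about 2^-(8-3) = 1/32 of the time:

import random

DATA_BITS, FIELD_BITS, ENTROPY_BITS = 8, 8, 3
N = 1 << (DATA_BITS + FIELD_BITS)

# The expansion field may take only 2^k of its 2^n possible values.
allowed = set(random.sample(range(1 << FIELD_BITS), 1 << ENTROPY_BITS))

# Random cipher model: the key selects a random permutation of the block space.
perm = list(range(N))
random.shuffle(perm)
decrypt = [0] * N
for p, c in enumerate(perm):
    decrypt[c] = p

def accepted(c):
    # A forged ciphertext block passes only if it deciphers to a legal
    # expanded plaintext, i.e. the field holds one of the 2^k values.
    return (decrypt[c] & ((1 << FIELD_BITS) - 1)) in allowed

trials = 200_000
hits = sum(accepted(random.randrange(N)) for _ in range(trials))
print(hits / trials)    # close to 2**-(FIELD_BITS - ENTROPY_BITS) = 0.03125

As Olson says, only the n-k redundant bits matter here: enlarging the truly random part (raising k toward n) raises the forgery rate, whatever the field is called.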
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Sat, 17 Oct 1998 17:20:20 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3628d1cd.6295356@news.io.com> References: <709ltn$g09$1@news.umbc.edu> Newsgroups: sci.crypt Lines: 172 On 17 Oct 1998 08:53:11 GMT, in <709ltn$g09$1@news.umbc.edu>, in sci.crypt olson@umbc.edu (Bryan G. Olson; CMSC (G)) wrote: >Terry Ritter (ritter@io.com) wrote: > >: On 14 Oct 1998 18:55:39 GMT, in <702s3b$qld$1@news.umbc.edu>, in >: sci.crypt olson@umbc.edu (Bryan G. Olson; CMSC (G)) wrote: > >: >[...] >: >: But none of these says anything about authentication, and so fails to >: >: describe one of the main opportunities and advantages. One might well >: >: wonder whether the Rivest-Sherman reference is similar. >: > >: >That's because it's the redundancy, not the randomness >: >that provides authentication. For authentication we need >: >a cipher in which for any given key the vast majority of >: >ciphertext blocks do not map to any legal plaintext. That >: >has nothing to do with homophones. > >: On the contrary, that is what homophones do, provided we can label >: them and distinguish between them. The data channel is homophonic. >: The authentication channel is not. > >Cryptologists talk about the sender encrypting a plaintext >into a ciphertext which gets transmitted to the receiver. >You seem to have this two-channel model. Where did you get >it? Why do you need it? What do you think "channel" means? Electrical Engineers have some modest acquaintance with the term "channel." Perhaps you should expand your view. In general, a "channel" is bandwidth reserved for a particular connection or information type. Here we take the second tack. For example, some systems use a "control channel" along with a "data channel" to control data flow over the same connection. When we partition the plaintext block into a section for message data and one or more other sections for authentication data and keying data, each of these sections can be considered a separate "channel." >Homophones are multiple ciphertexts induced by the same >key and the same plaintext. That's what they are; what >they do is increase the entropy of the ciphertext. They do >not provide authentication, since each homophone has a >corresponding legitimate plaintext. I can agree with the definition. But you seem to have gone quite a ways beyond that. The homophonic block cipher constructs homophonic ciphertext simply by partitioning the plaintext block into message and keying fields (or channels). Each different value in the keying field generates a different ciphertext for exactly the same data in the message field. This is homophonic operation, by your definition. In classical cryptography, homophonic systems may not distinguish between homophones at all. And all that means is that the modern-day homophonic block cipher construction goes beyond the classical designs, just as all modern systems do. Your definition says nothing about an inability to distinguish between homophones, and if it did, that would define a type of homophone system, not homophones themselves. >Suppose we expand a plaintext block by n bits. Say the >n bit expansion adds k bits of entropy. Then on average >each plaintext has 2^k homophones. A random text has a >probability of 1/2^(n-k) of corresponding to some expanded >plaintext, and we can use this for authentication by >rejecting all those that don't decrypt to a possible block. 
>Under the random cipher model an adversary has the >1/2^(n-k) chance of successful forgery on each block. I accept this. I was just going to explain it to you. >That n-k number is the measure of redundancy. The number >of truly random bits doesn't affect the chance at all. >That qualification "provided we can label them and >distinguish between them" hides what's really going on. You >have to add redundancy to get authentication, and then you >can throw out the randomness and the authentication doesn't >go away. Generally right. So what a surprise that -- as I have been saying all along -- the homophonic block cipher construction is mainly useful in ciphers with large blocks, in which we can afford some overhead. >That's not to say the randomness is worthless - it does just >what Rivest and Sherman said it does. The source of this idea was Feistel, years earlier. It just does what Feistel said it does. >In practice our ciphers >are not random permutations, so the advantages cited may >increase security of both privacy and authentication. > >[...] >: I claim there can be real advantages to using the homophonic block >: cipher construction, especially when we have large blocks. Surely it >: must be hard to disagree with this, yet based on extensive prior >: experience, I suspect that you somehow will. But if so, let's see >: some quotes. > >What are you talking about? I've been recommending the Rivest >and Sherman paper on this group for years. Quote please. >It's the major >paper advocating the technique and analyzing the advantages. >As Rivest and Sherman state "The goal of randomized encryption >is increased security". That security may apply to both >privacy and authentication. Quote please. >But homophones do not provide a >means to authenticate. Redundancy, not randomness, offers >authentication. Try to follow along: 1) The homophonic block cipher construction partitions the plaintext into multiple "fields" or "communications sub-channels." 2) One field is used for message data. 3) Another field can be used for keying data. Each value in this field creates a different ciphertext for exactly the same message data. Each of these ciphertexts *is* a homophonic code for the same message. In a sense, each of these homophonic ciphertexts is "numbered" by the keying field. When we decipher, we can throw that number away if we so decide. This is classical homophonic cryptography. 4) Yet another field can be used for authentication. Each value in this field *also* creates a different ciphertext for exactly the same message data. Each of these ciphertexts is *also* a homophonic code for the same message. But, since the homophones are numbered, we *can* distinguish between them if we wish. We can thus identify the "correct" homophone, by whatever way we wish to identify a correct authentication value. This process is analogous to other forms of error detection used for authentication like CRC, but CRC is weak, whereas the homophonic form can be strong cryptographic authentication. Now, each of these different fields may be *used* in different ways. Yet they have *exactly* *the* *same* *construction*. Whatever words you want to use for it, the same thing is happening in every field. Homophones are created and distinguished in two fields, both of which are treated by the cipher in exactly the same way. The only thing that changes is our *interpretation* of the field, and if you wish to put two different names on that same operation, well, fine. Certainly I call them "keying" and "authentication." 
If you want to call them "randomness" and "redundancy," feel free. But don't imagine that you have the only two terms that can be properly used. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: 18 Oct 1998 01:21:19 GMT From: olson@umbc.edu (Bryan G. Olson; CMSC (G)) Message-ID: <70bfqf$3om$1@news.umbc.edu> References: <3628d1cd.6295356@news.io.com> Newsgroups: sci.crypt Lines: 170 Terry Ritter (ritter@io.com) wrote: : Bryan G. Olson; CMSC (G)) wrote: : >Terry Ritter (ritter@io.com) wrote: : > : >: Bryan G. Olson wrote: : >: >it's the redundancy, not the randomness : >: >that provides authentication. For authentication we need : >: >a cipher in which for any given key the vast majority of : >: >ciphertext blocks do not map to any legal plaintext. That : >: >has nothing to do with homophones. : > : >: On the contrary, that is what homophones do, provided we can label : >: them and distinguish between them. The data channel is homophonic. : >: The authentication channel is not. [...] : >Homophones are multiple ciphertexts induced by the same : >key and the same plaintext. That's what they are; what : >they do is increase the entropy of the ciphertext. They do : >not provide authentication, since each homophone has a : >corresponding legitimate plaintext. : I can agree with the definition. But you seem to have gone quite a : ways beyond that. Well, that's something. : The homophonic block cipher constructs homophonic ciphertext simply by : partitioning the plaintext block into message and keying fields (or : channels). Each different value in the keying field generates a : different ciphertext for exactly the same data in the message field. : This is homophonic operation, by your definition. Absolutely, though it is "A", not "The", homophonic block cipher construction. : In classical cryptography, homophonic systems may not distinguish : between homophones at all. And all that means is that the modern-day : homophonic block cipher construction goes beyond the classical : designs, just as all modern systems do. Your definition says nothing : about an inability to distinguish between homophones, and if it did, : that would define a type of homophone system, not homophones : themselves. : >Suppose we expand a plaintext block by n bits. Say the : >n bit expansion adds k bits of entropy. Then on average : >each plaintext has 2^k homophones. A random text has a : >probability of 1/2^(n-k) of corresponding to some expanded : >plaintext, and we can use this for authentication by : >rejecting all those that don't decrypt to a possible block. : >Under the random cipher model an adversary has the : >1/2^(n-k) chance of successful forgery on each block. : I accept this. I was just going to explain it to you. : >That n-k number is the measure of redundancy. The number : >of truly random bits doesn't affect the chance at all. : >That qualification "provided we can label them and : >distinguish between them" hides what's really going on. You : >have to add redundancy to get authentication, and then you : >can throw out the randomness and the authentication doesn't : >go away. : Generally right. : So what a surprise that -- as I have been saying all along -- the : homophonic block cipher construction is mainly useful in ciphers with : large blocks, in which we can afford some overhead. This is a surprise why? : >: Surely it : >: must be hard to disagree with this, yet based on extensive prior : >: experience, I suspect that you somehow will. But if so, let's see : >: some quotes. : > : >What are you talking about? I've been recommending the Rivest : >and Sherman paper on this group for years. 
: Quote please. Quote of what? I have to document that I don't disagree? [...] : Try to follow along: O.K. I read it and I think I followed. I'll intersperse my comments. : 1) The homophonic block cipher construction partitions the plaintext : into multiple "fields" or "communications sub-channels." We agreed on what homophonic means. All (invertible) homophonic block ciphers expand the ciphertext, but they may or may not do so by adding fields to the plaintext before encryption. All constructions that add fields to the plaintext before encryption also expand the ciphertext. They may or may not be homophonic. Specifically, they are homophonic if and only if for a given key and plaintext, the construction admits more than one possible field value (or if it was already homophonic without the extra field). : 2) One field is used for message data. I follow, but conventional terminology is that the message data _is_ the plaintext. See Applied Cryptography page 1, or the Handbook page 11. Fields added by the cryptographic process are not properly plaintext. : 3) Another field can be used for keying data. And that gives us _one_ way to produce a homophonic block cipher. It is not synonymous with homophonic block ciphers in general. : Each value in this : field creates a different ciphertext for exactly the same message : data. Each of these ciphertexts *is* a homophonic code for the same : message. In a sense, each of these homophonic ciphertexts is : "numbered" by the keying field. When we decipher, we can throw that : number away if we so decide. This is classical homophonic : cryptography. Agreed. : 4) Yet another field can be used for authentication. Each value in : this field *also* creates a different ciphertext for exactly the same : message data. If, only if, and to the degree that the content of the field is nondeterministic. If your construction always puts the same value in that field for the same values elsewhere, then it cannot induce homophones. : Each of these ciphertexts is *also* a homophonic code : for the same message. But, since the homophones are numbered, we : *can* distinguish between them if we wish. We can thus identify the : "correct" homophone, by whatever way we wish to identify a correct : authentication value. This process is analogous to other forms of : error detection used for authentication like CRC, but CRC is weak, : whereas the homophonic form can be strong cryptographic : authentication. : Now, each of these different fields may be *used* in different ways. : Yet they have *exactly* *the* *same* *construction*. But you've confused the extra field construction with "homophonic". You can make the cipher homophonic by putting in an extra random field. You can use an extra field for authentication by filling it with redundant data (whether or not there's another nondeterministic expansion). That doesn't mean extra redundant fields are homophonic. : Whatever words : you want to use for it, the same thing is happening in every field. The issue is what goes into the field, not what we call it. If it's random - additional entropy - nondeterministic, that means it induces homophones. If it's redundant - deterministic, then and only then is it useful for authentication. --Bryan
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Sun, 18 Oct 1998 00:02:30 -0600 From: jgfunj@EnqvbSerrGrknf.pbz (W T Shaw) Message-ID: <jgfunj-1810980002300001@207.22.198.205> References: <709ltn$g09$1@news.umbc.edu> Newsgroups: sci.crypt Lines: 102 In article <709ltn$g09$1@news.umbc.edu>, olson@umbc.edu (Bryan G. Olson; CMSC (G)) wrote: > > Homophones are multiple ciphertexts induced by the same > key and the same plaintext. That's what they are; what > they do is increase the entropy of the ciphertext. They do > not provide authentication, since each homophone has a > corresponding legitimate plaintext. Actually, there is a need for more information, some source to choose amongst the possible outputs. (This can also be recovered intact in decryption. In fact, the nature of the choices can amount to a second channel, even being the main channel itself with the rest representing noise.) The selections can be driven more or less by an input of randomness, a picking of alternatives beyond user definition. If the nature of these selections is recovered, it can be used in the authentication process to modify the results; each homophone could produce a different plaintext, even with the same key; legitimacy is merely a subjective evaluation of the results of a deductive process. > > Suppose we expand a plaintext block by n bits. Say the > n bit expansion adds k bits of entropy. Then on average > each plaintext has 2^k homophones. A random text has a > probability of 1/2^(n-k) of corresponding to some expanded > plaintext, and we can use this for authentication by > rejecting all those that don't decrypt to a possible block. > Under the random cipher model an adversary has the > 1/2^(n-k) chance of successful forgery on each block. > > That n-k number is the measure of redundancy. The number > of truly random bits doesn't affect the chance at all. > That qualification "provided we can label them and > distinguish between them" hides what's really going on. You > have to add redundancy to get authentication, and then you > can throw out the randomness and the authentication doesn't > go away. You need not add the redundancy to get authentication, merely treat the string as if it had had it added. Using an algorithm especially tuned to demonstrate this, taking the above sentence, I will encrypt it, adding randomness, with 2^54 possible versions. I will use the same key to get a MAC from both the encrypted text, and the plaintext: Encrypted: {zqfz*,7*gn\m3,h(_^r4<8min@9)c_z9w+0g-avp!e7yep*@*g\ 2@o$17?v|;iye=)(qy4r`b\&[3er_#atq@0kt`<!&e|-d8y^#zd qa(__n0s.ce>t~ } MAC from Encrypted: `you|need|not|add|the|redundancy|to|get|authentication,|mere ly|treat|the|string|as|if|it|had|had|it|added.| MAC from Plaintext: su)/]+,>0j%.*h,;iz)w$/q)8_5o6e%?f>vy"y,i!vv^$>qu=mjea]+yh|'? 2d%!kc7x6o#|7i*`txsk4@(a,j"nh6%/@b6%8| Double MAC, MAC of above MAC: zk*[@;8@4)]^jnp$^*+pf072*4$-i,t6d;&n()emg-d#)"%e9!<4,-vv-0/% 4]o&+d)^&;0"y!x<<t-'#n[i(d$+x5| All homophonic encryptions using the same user-defined keys would produce the same MAC appearing as recognizable text, while a simple change in the keys would make them all different. I say *keys* because with the cryptosystem at hand, it takes two; hereinafter, I will consider an immediate combination as a single key. A MAC based on plaintext is shorter than the plaintext by the same amount as the normal encryption of the text would be longer than the plaintext. The Double MAC is shorter by twice that amount. 
There may be even more possibilities than I show. To get the same length of MAC as plaintext, encrypt with one key and generate a MAC from the ciphertext with another. Since the versions of the encryptions are all going to be different, one of 2^54 in this case, changing the key can produce a similar quantity of different MAC's. A differently keyed MAC would only be useful to authenticate a particular encryption since, for all practical purposes, this is a one-way process. > ...As Rivest and Sherman state "The goal of randomized encryption > is increased security". That security may apply to both > privacy and authentication. But homophones do not provide a > means to authenticate. Redundancy, not randomness, offers > authentication. > I believe that the example I gave demonstrates that it can. In the system used, a private key, known only to a single user, could be used to render a MAC of any plaintext or ciphertext, even another MAC made by another key. The requirement to make this work unconditionally is in using different keys. It is obvious to me that an algorithm that produces output variability can use the same characteristic quite easily in authentication. It seemed natural to me to combine them, which I have done. -- --- Insanity means doing the same thing over and over again and expecting different results...like CDA2. --- Decrypt with ROT13 to get correct email address.
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Wed, 14 Oct 1998 23:46:32 -0600 From: jgfunj@EnqvbSerrGrknf.pbz (W T Shaw) Message-ID: <jgfunj-1410982346320001@dialup115.itexas.net> References: <702s3b$qld$1@news.umbc.edu> Newsgroups: sci.crypt Lines: 20 In article <702s3b$qld$1@news.umbc.edu>, olson@umbc.edu (Bryan G. Olson; CMSC (G)) wrote: > > That's because it's the redundancy, not the randomness > that provides authentication. For authentication we need > a cipher in which for any given key the vast majority of > ciphertext blocks do not map to any legal plaintext. That > has nothing to do with homophones. > The strength of an algorithm in encryption can be related to its strength in producing a MAC. In authentication using a MAC, the mapping can be to only one plaintext: jumbled garbage if ciphertext is used as the input with a wrong key, or if plaintext is used as the input with any key. In short, the more output possibilities in encryptive mode, the stronger it is in MAC mode. -- --- Decrypt with ROT13 to get correct email address.
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Mon, 12 Oct 1998 14:25:26 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36220346.9B039373@stud.uni-muenchen.de> References: <361a3f1a.5668671@news.prosurfr.com> Newsgroups: sci.crypt Lines: 22 John Savard wrote: > However, I could use a different mode that does eliminate a direct > known-plaintext attack on DES, and yet is still insecure: > > send, enciphered, a random 64-bit block, then XOR that block to every > block in the message before encryption. > > I don't know, now, the value of any plaintext block. But I can still > brute-force search: decipher a *pair* of blocks for every key, and > when the difference between them, deciphered, is the same as that > between the two plaintext blocks, I probably have the right key. > > This is a trivial example, but it illustrates the principle that a > method that _apparently_ eliminates a true known-plaintext attack may > not actually give security to a cipher, even if that cipher is not > vulnerable in the case of unknown plaintext. If the random 64-bit block is encrypted with another key, wouldn't that be a good scheme? M. K. Shen
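Savard's attack is easy to confirm at toy scale. A minimal sketch, assuming a 12-bit block and an 8-bit keyspace so the search finishes instantly (the keyed permutation merely stands in for DES; every name and size here is illustrative): the secret mask M cancels out of the difference of two decipherments, so two known plaintexts still identify the key.

    import random

    BITS, KEYS = 12, 256          # toy sizes; real DES would be 64 / 2^56

    def make_cipher(key):
        # keyed pseudo-random permutation standing in for the block cipher
        p = list(range(1 << BITS))
        random.Random(key).shuffle(p)
        inv = [0] * len(p)
        for x, y in enumerate(p):
            inv[y] = x
        return p, inv

    # Sender: secret key, random mask M; send E(M), then E(P_i xor M).
    key = 183
    E, _ = make_cipher(key)
    M = random.randrange(1 << BITS)
    P1, P2 = 0x123, 0xBEE                       # known to the attacker
    C0, C1, C2 = E[M], E[P1 ^ M], E[P2 ^ M]

    # Attacker: M is unknown, but for the right key
    #   D(k,C1) xor D(k,C2) = (P1 xor M) xor (P2 xor M) = P1 xor P2,
    # so the mask drops out and brute force proceeds as usual.
    for k in range(KEYS):
        _, D = make_cipher(k)
        if D[C1] ^ D[C2] == P1 ^ P2:
            print("candidate key:", k)          # prints 183, rarely any other

Encrypting the mask block under a second key, as Shen asks, does not disturb this search at all: the attacker never needs C0.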
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Mon, 12 Oct 1998 14:57:21 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36220AC1.AEFFD28E@stud.uni-muenchen.de> References: <36190b22.6139167@news.io.com> Newsgroups: sci.crypt Lines: 32 Terry Ritter wrote: > Obviously I have been unable to communicate what the a/k stuff is. > > Maybe: > > C[i] = E( k, (P[i] << 64) + a/k[i] ) > > where E is a 64 byte block cipher > C[i] is 64 byte ciphertext > P[i] is 56 byte plaintext > a/k[i] is 8 byte (64 bit) authentication / keying value > > We can assume that every possible value for a/k will produce a > different ciphertext, and that every ciphertext bit will have an equal > opportunity to change. > > This means there will be a multiplicity of ciphertext representations > for the exact same data: this is a homophonic block cipher. And that This is certainly a convenient way of obtaining homophones of (56-byte) entities. I think another way would be this: Expand each byte (8 bit entities) to 9 bit homophones. This gives 63 bytes. The 64th byte could be random. I guess this would be more difficult to analyse (though it does not provide the 'authentication'). In a previous post you mentioned the advantages of large blocks. I agree with you on the desirability of larger block sizes but wonder why it has apparently not been stressed in the literature as it deserves. (Because of hardware implementations?) M. K. Shen
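Ritter's a/k construction is easy to exercise at reduced scale. A minimal sketch, assuming a toy 24-bit block (16 plaintext bits plus an 8-bit a/k field) and a small Feistel network standing in for the 64-byte cipher; the round function, round count, and key handling are all illustrative, not anyone's proposed design.

    HALF = 12
    MASK = (1 << HALF) - 1

    def f(k, r):
        # toy keyed round function; a Feistel network inverts for any f
        x = ((r ^ k) * 0x9E3779B1) & 0xFFFFFFFF
        return (x >> 13) & MASK

    def encrypt(key, block, rounds=8):
        l, r = block >> HALF, block & MASK
        for i in range(rounds):
            l, r = r, l ^ f(key + i, r)
        return (l << HALF) | r

    def decrypt(key, block, rounds=8):
        l, r = block >> HALF, block & MASK
        for i in reversed(range(rounds)):
            l, r = r ^ f(key + i, l), l
        return (l << HALF) | r

    # C = E( k, (P << 8) + a/k ): one plaintext, 256 distinct ciphertexts.
    key, p = 42, 0xCAFE
    homophones = {encrypt(key, (p << 8) | ak) for ak in range(256)}
    print(len(homophones), "ciphertexts for one plaintext")    # 256

    c = homophones.pop()
    x = decrypt(key, c)
    print(hex(x >> 8), x & 0xFF)   # plaintext 0xcafe, plus the a/k label

Shen's byte-to-9-bit variant spreads the same extra space through the block instead of collecting it in one field, which, as Ritter notes in the reply that follows, makes it harder to set and check.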
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Mon, 12 Oct 1998 18:20:34 GMT From: ritter@io.com (Terry Ritter) Message-ID: <36224706.5167449@news.io.com> References: <36220AC1.AEFFD28E@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 55 On Mon, 12 Oct 1998 14:57:21 +0100, in <36220AC1.AEFFD28E@stud.uni-muenchen.de>, in sci.crypt Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >[...] >This is certainly a convenient way of obtaining homophones (of 56 >byte) entities. I think another way would be this: Expand each byte >(8 bit entities) to 9 bit homophones. This gives 63 bytes. The 64th >byte could be random. I guess this would be more difficult to >analyse (though it does not provide the 'authentication'). Sure. Some benefit flows from simply having more data space than message space. But if we don't collect the extra space in one place, it will be more difficult to both set and check, and we may be more tempted to say that the extra space is not worth the trouble. >In a previous post you mentioned the advantages of large blocks. >I agree with you on the desirability of larger block sizes but >wonder why it has apparently not been stressed in the literature as >it deserves. (Because of hardware implementations?) My guess is that the avoidance of large blocks is first related to the use of Feistel constructions for block ciphers. Feistel constructions need a half-block-wide f() function, which ideally is 1:1 (a permutation) as in DES. It is just difficult to construct ciphers with large blocks using "normal" technology. Even here on sci.crypt I have often seen the opinion expressed that a large block cipher simply could not provide the necessary mixing or diffusion. The presentation of FFT-style mixing with log n levels is at least a plausible counterexample, but it has not appeared in the texts. Another problem is a general lack of knowledge of the advantages of large blocks. When you have a hammer, you tend to see all your problems as nails, and here people have a particular vision of a DES-like block cipher. Changing that view requires changing many other things about the way ciphers operate, things which had been thought fully resolved. Some of the advantages of large blocks are best realized by system re-design, as opposed to simply dropping a new implementation into the DES-box slot. I suppose tradeoffs are inherently more engineering than general academics, but we don't have a good engineering text on crypto, and the academic texts have uniformly failed to explore the possibilities of new structures and new designs. Perhaps after AES we will see texts with wider horizons. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary 1998-08-27: http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: 13 Oct 1998 09:35:48 +0100 From: Paul Crowley <paul@hedonism.demon.co.uk> Message-ID: <8767do3llp.fsf@hedonism.demon.co.uk> References: <36224706.5167449@news.io.com> Newsgroups: sci.crypt Lines: 34 ritter@io.com (Terry Ritter) writes: > My guess is that the avoidance of large blocks is first related to the > use of Feistel constructions for block ciphers. Feistel constructions > need a half-block-wide f() function, which ideally is 1:1 (a > permutation) as in DES. It is just difficult to construct ciphers > with large blocks using "normal" technology. I'm in the process of designing a very fast large block cipher based around a balanced Feistel network, and my current favourite design does use a bijective F function. I've found the use of a Feistel construction has made the cipher easier to think about, and I'd be interested to know what disadvantages you see. http://www.hedonism.demon.co.uk/paul/mercy > Even here on sci.crypt I have often seen the opinion expressed that a > large block cipher simply could not provide the necessary mixing or > diffusion. The presentation of FFT-style mixing with log n levels is > at least a plausible counterexample, but it has not appeared in the > texts. One design I've played with uses an F function with data-dependent indexing into the right half; this isn't used in the cipher at the moment, but does appear in the current revision of the key schedule. I suspect that with work such a construction could provide diffusion as rapidly as the rotates in RC5; however, it pretty much rules out a bijective F-function so far as I can tell. You could even shuffle the words in the left half using randomising information taken from the right half, though I suspect that data-dependent indexing provides the same advantages. -- __ \/ o\ paul@hedonism.demon.co.uk Edinburgh fetish club Permission \ / /\__/ Paul Crowley Nov 8 http://www.hedonism.demon.co.uk/permission /~\
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Tue, 13 Oct 1998 15:31:39 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <3623644B.167B2050@stud.uni-muenchen.de> References: <8767do3llp.fsf@hedonism.demon.co.uk> Newsgroups: sci.crypt Lines: 28 Paul Crowley wrote: > http://www.hedonism.demon.co.uk/paul/mercy I only very quickly went through your Web page. Naive questions: What do you check in 'key validation'? Do you mean the key has to consist of characters in the English alphabet and nothing else? I suppose that the task of turning a passphrase into a key arises because the system can accept only a key of a fixed format. Is this right? > One design I've played with uses an F function with data-dependent > indexing into the right half; this isn't used in the cipher at the > moment, but does appear in the current revision of the key schedule. > I suspect that with work such a construction could provide diffusion > as rapidly as the rotates in RC5; however, it pretty much rules out a > bijective F-function so far as I can tell. You could even shuffle the I believe that this is advantageous. This comes from my experience with a design of my own where I chain blocks using some hash value of a previous block to modify the encryption process of the current block. There are some parallels in the ideas. I don't understand your 'it pretty much rules out a bijective F-function'. Doesn't the possibility of construction depend on the type of construction that you envisage? M. K. Shen
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Tue, 13 Oct 1998 15:19:17 GMT From: ritter@io.com (Terry Ritter) Message-ID: <36236dfe.5386998@news.io.com> References: <8767do3llp.fsf@hedonism.demon.co.uk> Newsgroups: sci.crypt Lines: 104 On 13 Oct 1998 09:35:48 +0100, in <8767do3llp.fsf@hedonism.demon.co.uk>, in sci.crypt Paul Crowley <paul@hedonism.demon.co.uk> wrote: >ritter@io.com (Terry Ritter) writes: >> My guess is that the avoidance of large blocks is first related to the >> use of Feistel constructions for block ciphers. Feistel constructions >> need a half-block-wide f() function, which ideally is 1:1 (a >> permutation) as in DES. It is just difficult to construct ciphers >> with large blocks using "normal" technology. > >I'm in the process of designing a very fast large block cipher based >around a balanced Feistel network, and my current favourite design >does use a bijective F function. I've found the use of a Feistel >construction has made the cipher easier to think about, Easier than what? To what are you comparing it, and why should Feistel be easier to think of than, say, FFT-style mixings? In particular, why should a complex design with many different component types arranged in irregular ways be easier to think about than a design with few component types arranged in regular ways? Suppose we have a 1 bit change in the input: Feistel must propagate that change through a complex f() with properties which do not directly apply to reasoning about this case, or diffusion in general. But FFT-style mixings are *guaranteed* to propagate that 1-bit change to all intermediate results. And, although this is not exactly what we want, each of these selects a new entry in its own random table, which, when mixed one more time, *does* give us what we want. This is a scalable and measurable result, from a construction using 2 small component types in a simple structure very similar to that used in the well-known FFT. What is difficult about this? >and I'd be >interested to know what disadvantages you see. Before seeing your proposal, I would first question some part of: speed, quality of f(), number of rounds, and quality of diffusion. > >http://www.hedonism.demon.co.uk/paul/mercy This certainly is a nice presentation. It is easy to get around in, and probably much easier to maintain than my web stuff. I like the generally sparse nature of the articles, which seem qualitatively different than paper articles translated to html. I do think the variety of components in this design make it hard to understand at first shot, however, and I think the articles could also better help the reader to reason about the ciphering process. After seeing your proposal, the first thing I would like to know concerns actual measurements: What kind of speeds are you seeing (on what platform), and have you done any diffusion testing? In what sense and to what extent does your construction emulate a large Simple Substitution? Also it would seem that Mixing ciphers do have the property described in the justification, have been described many times right here, and were recommended to you particularly, so one might well wonder why they are not cited as an alternative. Also it looks like the main nonlinearity in the system is the single 8-to-32-bit "T" table. This is 1KB of total state, and the differential input to and output from that table are directly visible in the last round. That makes me nervous. 
>> Even here on sci.crypt I have often seen the opinion expressed that a >> large block cipher simply could not provide the necessary mixing or >> diffusion. The presentation of FFT-style mixing with log n levels is >> at least a plausible counterexample, but it has not appeared in the >> texts. > >One design I've played with uses an F function with data-dependent >indexing into the right half; this isn't used in the cipher at the >moment, but does appear in the current revision of the key schedule. >I suspect that with work such a construction could provide diffusion >as rapidly as the rotates in RC5; however, it pretty much rules out a >bijective F-function so far as I can tell. You even could shuffle the >words in the left half using randomising information taken from the >right half, though I suspect that data-dependent indexing provides the >same advantages. Yes. I am aware of other mixing approaches which avalanche. But I suspect that we can sort them into 4 piles: * n log n effort "full" balanced mixing, * more effort, * less mixing, * n effort "constant depth" mixing. We ought to be able to more or less measure this stuff, in scaled-down implementations. But if we can't scale it down, and can't measure it, a complex design has very serious believability problems. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary 1998-08-27: http://www.io.com/~ritter/GLOSSARY.HTM
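Both points, the log n mixing and the call for measurement on scaled-down implementations, can be illustrated together. A minimal Python sketch, assuming 8 byte-wide elements, a linear 2x2 mixer standing in for a keyed orthogonal Latin square pair, and random keyed tables for the fencing layers; nothing here is Ritter's or Crowley's actual code, and all parameters are illustrative.

    import random

    N = 8                                  # elements per block; log2(8) = 3
    rng = random.Random(1)

    def bbm(a, b):
        # Linear stand-in for a balanced block mixer: (a,b) -> (a+b, a+2b)
        # mod 256 has determinant 1, so it is invertible and each output
        # depends on both inputs.  A real Mixing cipher would use a keyed
        # pair of orthogonal Latin squares here, possibly nonlinear.
        return (a + b) & 0xFF, (a + 2 * b) & 0xFF

    def mix(block):
        # FFT-style mixing: log2(N) sublayers of N/2 butterflies each,
        # after which every output element depends on every input element.
        block = list(block)
        for s in range(N.bit_length() - 1):            # sublayers 0, 1, 2
            stride = 1 << s
            for i in range(N):
                if not i & stride:
                    block[i], block[i + stride] = bbm(block[i],
                                                      block[i + stride])
        return block

    def make_fence():
        # one keyed, invertible 8-bit substitution per element position
        tables = []
        for _ in range(N):
            t = list(range(256))
            rng.shuffle(t)
            tables.append(t)
        return tables

    fences = [make_fence() for _ in range(3)]   # fence-mix-fence-mix-fence

    def cipher(block):
        for fence in fences[:-1]:
            block = mix([t[x] for t, x in zip(fence, block)])
        return [t[x] for t, x in zip(fences[-1], block)]

    # scaled-down avalanche measurement: flip one input bit, count changes
    base = [rng.randrange(256) for _ in range(N)]
    poked = base[:]
    poked[0] ^= 1
    changed = sum(bin(a ^ b).count("1")
                  for a, b in zip(cipher(base), cipher(poked)))
    print(changed, "of", 8 * N, "output bits changed")  # about 32 on average

The structure matches the Fig. 1 schematic quoted later in the thread: three fencing layers with two full mixings between them, and the whole thing scales by changing N alone.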
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Fri, 16 Oct 1998 18:47:07 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <3627869B.DA0DA918@stud.uni-muenchen.de> References: <36224706.5167449@news.io.com> Newsgroups: sci.crypt Lines: 59 Terry Ritter wrote: > My guess is that the avoidance of large blocks is first related to the > use of Feistel constructions for block ciphers. Feistel constructions > need a half-block-wide f() function, which ideally is 1:1 (a > permutation) as in DES. It is just difficult to construct ciphers > with large blocks using "normal" technology. To say it more plainly, I suppose the cause is that at the time of the invention of the Feistel cipher fast software encryption was not yet available. So one concentrated on hardware, and hardware was also expensive then. This led to very small block sizes, and Feistel apparently succeeded in making the best of that by operating on the bit level in an ingenious way. (If one uses a fairly large block size, operating on the bit level would be too costly or complicated and one would choose larger units for operations.) > > Even here on sci.crypt I have often seen the opinion expressed that a > large block cipher simply could not provide the necessary mixing or > diffusion. The presentation of FFT-style mixing with log n levels is > at least a plausible counterexample, but it has not appeared in the > texts. In Schneier's book there is only a reference to FFT-Hash. Evidently you mean something different. Could you give the relevant references? > Another problem is a general lack of knowledge of the advantages of > large blocks. When you have a hammer, you tend to see all your > problems as nails, and here people have a particular vision of a > DES-like block cipher. Changing that view requires changing many > other things about the way ciphers operate and which had been thought > fully resolved. Some of the advantages of large blocks are best > realized by system re-design, as opposed to simply dropping a new > implementation into the DES-box slot. But even some classical transpositions can be considered to operate on block sizes much larger than 64 bits. So one seems to have been somehow dazzled by the intensity of light emitted from DES. Further, the distinction between stream and block cipher is not absolute but a matter of terminological convenience, if I don't err. > I suppose tradeoffs are inherently more engineering than general > academics, but we don't have a good engineering text on crypto, and > the academic texts have uniformly failed to explore the possibilities > of new structures and new designs. Perhaps after AES we will see > texts with wider horizons. If there are no new structures and new designs invented, or better deployment of old stuff discovered, you can't blame the authors of the texts for failing to treat them. The volume of publications in cryptology is yet small compared to most fields of computer science. There are only a couple of journals. Your opinion on the influence of AES in the said direction appears to me to be an over-estimation. M. K. Shen
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Sat, 17 Oct 1998 04:07:29 GMT From: ritter@io.com (Terry Ritter) Message-ID: <362817e5.3263247@news.io.com> References: <3627869B.DA0DA918@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 64 On Fri, 16 Oct 1998 18:47:07 +0100, in <3627869B.DA0DA918@stud.uni-muenchen.de>, in sci.crypt Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >[...] >> Even here on sci.crypt I have often seen the opinion expressed that a >> large block cipher simply could not provide the necessary mixing or >> diffusion. The presentation of FFT-style mixing with log n levels is >> at least a plausible counterexample, but it has not appeared in the >> texts. > >In Schneier's book there is only a reference to FFT-Hash. Evidently >you mean something different. Could you give the relevant references? http://www.io.com/~ritter/ARTS/NONLBBM.HTM#1998092201 http://www.io.com/~ritter/#MixTech >[...] >Further the distinction between stream and block cipher is not >absolute but a matter of terminological convenience, if I don't err. Most crypto scientists see a block cipher as an emulated huge Simple Substitution. So if a "stream cipher" had that characteristic, it would not *be* a stream cipher, no matter what the convenience. It is certainly tempting to use this definition, because keyed or randomized Simple Substitution has a diffusion signature which a stream cipher cannot have. This would be a practical and conclusive experimental difference. Nevertheless, I use a slightly different definition: I see a block cipher as one which requires the accumulation of data before ciphering can occur. This intuitive definition is consistent with the term "block," but it also means that transposition is a form of block cipher, and transposition does not have the diffusion signature. Either definition allows us to distinguish a stream cipher mainly as "not a block cipher," in particular, a cipher which does *not* require the accumulation of data before ciphering can occur. We can also see that stream ciphers only diffuse changes to later cipherings. Then I see block cipher "chaining modes" (like CBC) as stream meta-ciphers which use a block cipher as a component. This provides a welcome unity between the concepts used in classical stream cipher designs and current block cipher usage. >[...] >If there are no new structures and new designs invented or better >deployment of old stuffs discovered, you can't blame the authors of >the texts for failing to treat them. This is one sentiment to which I am particularly sensitive. I have been inventing fundamentally new cryptography and writing about it (sometimes in peer-reviewed ink-on-paper articles) for about a decade. But I have yet to see any of this stuff appear in a text. So you tell me: Should I *not* blame the authors? --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary 1998-08-27: http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Sat, 17 Oct 1998 22:53:42 -0600 From: jgfunj@EnqvbSerrGrknf.pbz (W T Shaw) Message-ID: <jgfunj-1710982253430001@dialup175.itexas.net> References: <362817e5.3263247@news.io.com> Newsgroups: sci.crypt Lines: 36 In article <362817e5.3263247@news.io.com>, ritter@io.com (Terry Ritter) wrote: > > Most crypto scientists see a block cipher as an emulated huge Simple > Substitution. So if a "stream cipher" had that characteristic, it > would not *be* a stream cipher, no matter what the convenience. It is > certainly tempting to use this definition, because keyed or randomized > Simple Substitution has a diffusion signature which a stream cipher > cannot have. This would be a practical and conclusive experimental > difference. Severing a stream of data into blocks for the convenience of handling should be included, in particular where a restarting of some process with each such chunk is involved. > > Nevertheless, I use a slightly different definition: I see a block > cipher as one which requires the accumulation of data before ciphering > can occur. This intuitive definition is consistent with the term > "block," but it also means that transposition is a form of block > cipher, and transposition does not have the diffusion signature. Neither would the keyed substitutions I use, where no overlapping transposition is involved. The block construction, or text segmentation, is more of a convenience, keeping the keys from being too large to be easily implemented. > > Either definition allows us to distinguish a stream cipher mainly as > "not a block cipher," in particular, a cipher which does *not* require > the accumulation of data before ciphering can occur. We can also see > that stream ciphers only diffuse changes to later cipherings. I accept that as a great definition. -- --- Insanity means doing the same thing over and over again and expecting different results...like CDA2. --- Decrypt with ROT13 to get correct email address.
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Mon, 19 Oct 1998 16:20:33 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <362B58C1.45F3D896@stud.uni-muenchen.de> References: <362817e5.3263247@news.io.com> Newsgroups: sci.crypt Lines: 70 Terry Ritter wrote: > > >> The presentation of FFT-style mixing with log n levels is > >> at least a plausible counterexample, but it has not appeared in the > >> texts. > http://www.io.com/~ritter/ARTS/NONLBBM.HTM#1998092201 > http://www.io.com/~ritter/#MixTech I have a couple of tiny questions: Where is the link (in meaning) between orthogonal Latin squares and FFT (which has an established definition in numerical mathematics)? Where does the 'log n levels' come into the encryption process? > Nevertheless, I use a slightly different definition: I see a block > cipher as one which requires the accumulation of data before ciphering > can occur. This intuitive definition is consistent with the term > "block," but it also means that transposition is a form of block > cipher, and transposition does not have the diffusion signature. I prefer simply to think that a block cipher operates on units of blocks (bunches of characters or bits (which are the fundamental units)). This (larger) unit, the block, must of course be complete before being operated on. So your definition is in line with mine. A transposition is certainly a block cipher then (independent of whether diffusion is achieved). If we XOR two character streams, it would according to my definition be a stream cipher if the fundamental unit is character and a block cipher if the fundamental unit is bit. The choice of the fundamental unit and the larger unit (block) is essentially arbitrary, depending on the standpoint one prefers. (One can also choose to consider a word of 32 bits or a group of 64 bits to be a fundamental unit.) This is what I meant by there being no absolute distinction between stream and block cipher. > Then I see block cipher "chaining modes" (like CBC) as stream > meta-ciphers which use a block cipher as a component. This provides a > welcome unity between the concepts used in classical stream cipher > designs and current block cipher usage. Taking the block as the fundamental unit, CBC is a stream cipher. So we agree. > >If there are no new structures and new designs invented or better > >deployment of old stuffs discovered, you can't blame the authors of > >the texts for failing to treat them. > > This is one sentiment to which I am particularly sensitive. I have > been inventing fundamentally new cryptography and writing about it > (sometimes in peer-reviewed ink-on-paper articles) for about a decade. > But I have yet to see any of this stuff appear in a text. So you tell > me: Should I *not* blame the authors? Apologies for having been provocative. If I remember correctly, Bruce Schneier wrote recently in a post somewhere that cryptology is still a black art. Given this fact, and remembering the history of the development of the natural sciences, one can nonetheless have some understanding of the phenomenon that hurts you. Anyway, Web publication has helped to disseminate certain ideas that would otherwise have been totally suppressed (particularly to the satisfaction of certain crypto bureaucrats). So the time is certainly much better than that of our far ancestors. Discussion groups like sci.crypt and some mailing lists have further helped quite a bit. 
I think, though, that there is a need for a site (or sites) that gives comprehensive links to Web publications on cryptology (not software packages, nor simply reprints from journals) so that one can conveniently find them, since the common search engines are ineffective, I believe. M. K. Shen
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Mon, 19 Oct 1998 11:50:52 -0600 From: jgfunj@EnqvbSerrGrknf.pbz (W T Shaw) Message-ID: <jgfunj-1910981151080001@207.101.116.84> References: <362B58C1.45F3D896@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 106 In article <362B58C1.45F3D896@stud.uni-muenchen.de>, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: > > I prefer simply to think that a block cipher operate on units of blocks > (bunches of characters or bits (which are the fundamental units)). I wrote some block ciphers that have nothing to do with bits some time ago; they are based on trits. One version, Trinity 27, will retain capitalization and handle word spacing. Punctuation is substituted with rare characters, X, J, and Q. I'll use the above couple of sentences as the source material. Hashing the text yields these keys, two substitution and one transposition. Sub1: k/tpxevqo fbnrcmdls hguwyiazj Sub2: azdmougin fvpcq/jwh bkxstrley Trans: jneaiqkxl ucbvfdroy tmzwspg/h 27 characters targeted in the set means each one is coded to 3 trits; the values are assigned according to placement in the first substitution key. Groups of 9 characters mean 27 trits per block. Each block is rearranged according to the transposition key. The 9 groups of 3 trits in each block are converted to strings using the second substitution key. Here is the plaintext after automatic conversion of numbers to words, and preformatting the bulk of the text: I/wrote/s ome/block /ciphers/ that/have /nothing/ to/do/wit h/bits/so me/time/a go/it/is/ based/on/ tritsX/On e/version /will/ret ain/capit alization j/and/han dle/word/ spacingj/ Trinity/t wo/sevenX /Punctuat ion/is/su bstituted /with/rar e/charact ersj/Xj/J j/and/QX/ Here are the encrypted groups: z Zyjlcczzy cjzdywfqj vmonvsstb ljggbdldz jmmdzhmyl dynzflwgf ibgtdtxmr gekvqwaby isukflymj euu/bba/j jywnzBbVc iija/tlpl r/infbqvf svirhyfja /dhyjsykl gnjlgmehf czuphmstb oeojgkeys Jeinffiks nkuk/lzmI gQinaaejl meyruebss wsppaypay lmvnfasev angotqhta hjeutFndH gnmllcVPc Here is the recovered text: I wrote some block ciphers that have nothing to do with bits some time ago it is based on tritsX One version will retain capitalizationj and handle word spacingj Trinity two sevenX Punctuation is substituted with rare charactersj Xj Jj and QX To keep it simple, I made some compromises, but there is nary a bit in the whole thing. Adding a trit to each character length means having a character set of 81, upper, lower, and symbol case. The transposition key for Trinity 36 is a little longer since there are 36 trits in each block, and I will not go into all the details, but this scaled-up version of Trinity 27 does work. Using the above text to generate the keys, they are: Sub1: aijpsmbxu qyztkcfng odveh/lwr Sub2: hrkymqias wnb/ptlcf oduegzjvx Trans: g2sl985vw hxnt6iyaq f/7pduoeb 1j3mk4zcr The preformed text is: Adding!a! trit!to!e ach!chara cter!leng th!means! having!a! character !set!of!8 1,!upper, !lower,!a nd!symbol !case.!Th e!transpo sition!ke y!for!Tri nity!36!i s!a!littl e!longer! since!the re!are!36 !trits!in !each!blo ck,!and!I !will!not !go!into! all!the!d etails,!b ut!this!s caled-up! version!o f!Trinity !27!does! 
work.fill The encrypted groups are: +yW,jK8K; orsKHHq2( WIiO+Ko9A uNe2,kscV edoclkhT~ +qw,v,8by ,njR,hiT7 dU*rIkqKc rw0XlsyAD 5b6_+k8h- +glIrkiU+ dk@,wo;$C YBo88aht5 OH/2rr;Ty d3id4vHi9 4DfRRd,Oy Owyr4ijt8 +(i)0shf; o+jnNDyTl osqXxk9g1 N,+Bw,9rR oz0hWohs3 MRs*$RoWM eu9r!OyR- Ht9_9ko,Q N+sK+kok* EbfRoh9wR G/siBkyoN J7h8*uhtw y:frk,hk3 MMe;8ahg5 o(8f91oqs e,l8D,sx= The recovered text is: Adding a trit to each character length means having a character set of 81, upper, lower, and symbol case. The transposition key for Trinity 36 is a little longer since there are 36 trits in each block, and I will not go into all the details, but this scaled-up version of Trinity 27 does work.fill ---- Bits are particularly unfriendly to the realization of this version of the Trinity cipher. Now, this was not meant to be the strongest thing around, but it is very useful for something fairly short, much better than most old classics. I classify these types of algorithms as *neoclassical*, utilizing calculations that are best done on a computer but could be clearly demonstrated, if laboriously so, with physical models. Something simple like this could be extended into a trit-version takeoff of common round-based ciphers, even something modified from a Feistel construction, but my sole purpose was to make the point, which many do not cheerfully accept, that a bit is just another unit of logic, nothing special or superior about it at all. What makes the Trinity Ciphers block ciphers has nothing to do with the bit, and the concept in them is somewhat scalable as a bonus. -- --- Insanity means doing the same thing over and over again and expecting different results...like CDA2. --- Decrypt with ROT13 to get correct email address.
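The mechanics Shaw describes are simple enough to sketch in Python: 3 trits per character via one substitution key, a 27-trit transposition per 9-character block, and output through the second key, using the Trinity 27 keys quoted above. The post does not spell out exactly how the transposition key orders the trits, so the rank-ordering below is a guess, and this sketch will not reproduce Shaw's sample ciphertext; the uppercase handling is omitted as well.

    SUB1  = "k/tpxevqofbnrcmdlshguwyiazj"        # keys quoted in the post,
    SUB2  = "azdmouginfvpcq/jwhbkxstrley"        # spaces removed
    TRANS = "jneaiqkxlucbvfdroytmzwspg/h"

    # hypothetical reading of the transposition key: output trit slot i
    # takes the input trit ranked by TRANS[i]'s position in SUB1
    ORDER = sorted(range(27), key=lambda i: SUB1.index(TRANS[i]))

    def to_trits(chars, key):
        trits = []
        for ch in chars:
            v = key.index(ch)                    # 0..26, i.e. 3 trits
            trits += [v // 9, (v // 3) % 3, v % 3]
        return trits

    def from_trits(trits, key):
        return "".join(key[trits[i] * 9 + trits[i + 1] * 3 + trits[i + 2]]
                       for i in range(0, 27, 3))

    def encrypt_block(nine_chars):
        t = to_trits(nine_chars, SUB1)
        return from_trits([t[ORDER[i]] for i in range(27)], SUB2)

    def decrypt_block(nine_chars):
        t = to_trits(nine_chars, SUB2)
        undone = [0] * 27
        for i in range(27):
            undone[ORDER[i]] = t[i]
        return from_trits(undone, SUB1)

    group = "i/wrote/s"                          # one 9-character group
    ct = encrypt_block(group)
    assert decrypt_block(ct) == group
    print(ct)

Note that the transposition moves individual trits, not characters, which is what lets a single substituted character influence three output characters; no bit is involved anywhere.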
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Mon, 19 Oct 1998 19:02:22 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <362B7EAE.181E7245@stud.uni-muenchen.de> References: <jgfunj-1910981151080001@207.101.116.84> Newsgroups: sci.crypt Lines: 19 W T Shaw wrote: > > In article <362B58C1.45F3D896@stud.uni-muenchen.de>, Mok-Kong Shen > <mok-kong.shen@stud.uni-muenchen.de> wrote: > > > > I prefer simply to think that a block cipher operate on units of blocks > > (bunches of characters or bits (which are the fundamental units)). > > I wrote some block ciphers that have nothing to do with bits some time > ago; it is based on trits. One version will retain capitalization, and > handle word spacing, Trinity 27. Punctuation is substituted with rare > characters, X, J, and Q. Just an addendum to my sentence: I mean that both character and bit, depending on one's preference, can serve as the fundamental unit in discourse. Generalizing, ideographs (the Chinese characters) or similar entities could also be considered units. M. K. Shen
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Mon, 19 Oct 1998 17:54:28 GMT From: ritter@io.com (Terry Ritter) Message-ID: <362b7c6f.5322932@news.io.com> References: <362B58C1.45F3D896@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 220 On Mon, 19 Oct 1998 16:20:33 +0100, in <362B58C1.45F3D896@stud.uni-muenchen.de>, in sci.crypt Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >Terry Ritter wrote: >> >> >> The presentation of FFT-style mixing with log n levels is >> >> at least a plausible counterexample, but it has not appeared in the >> >> texts. > >> http://www.io.com/~ritter/ARTS/NONLBBM.HTM#1998092201 >> http://www.io.com/~ritter/#MixTech > >I have a couple of tiny questions: Where is the link (in meaning) >between orthogonal Latin squares and FFT (which has an established >definition in numerical mathematics)? Where does the 'log n levels' >come into the encryption process? OK, I'm going to explain this in different words, but first here is the section from the recent OLS article, and if that does not make this clear on its own, please try to help me understand why: "A TYPICAL MIXING CIPHER" "In figure 1 we have a typical Mixing cipher in schematic form, with 3 "fencing" layers of 8 invertible substitutions each, and two full "FFT-style" mixings between them. If these are 8-bit substitutions, we have a 64-bit block cipher. Each substitution (and now each mixing operation also) is individually keyed. "The vertical dotted lines represent typically 8-bit-wide data paths, and the data flow is from top to bottom. Each S is a substitution table, and *--* represents the "butterfly" action of a single BBM. For 8 elements we have log2(8) = 3 mixing FFT sublayers (Mix0, Mix1, Mix2) of 8/2 = 4 BBM operations each. All BBM's in the same sublayer are independent and can occur simultaneously in parallel hardware. The wider the block, the more BBM's needed, which also means that more computation can occur simultaneously. | A 64-Bit Mixing Cipher Fig. 1 | | : : : : : : : : <- Input Block | S S S S S S S S <- Fencing | : : : : : : : : | *---* *---* *---* *---* Mix0 0..0, 0..3 | : : : : : : : : | *-------* : *-------* : Mix1 0..0, 0..1 | : *-------* : *-------* Mix1 1..1, 0..1 | : : : : : : : : | *---------------* : : : Mix2 0..0, 0..0 | : *---------------* : : Mix2 1..1, 0..0 | : : *---------------* : Mix2 2..2, 0..0 | : : : *---------------* Mix2 3..3, 0..0 | : : : : : : : : | S S S S S S S S <- Fencing | : : : : : : : : | *---------------* : : : | : *---------------* : : | : : *---------------* : | : : : *---------------* | : : : : : : : : | *-------* : *-------* : | : *-------* : *-------* | : : : : : : : : | *---* *---* *---* *---* | : : : : : : : : | S S S S S S S S <- Fencing | : : : : : : : : <- Output Block "Each of the mixing BBM's uses an orthogonal pair of Latin squares. If these are nonlinear, we must of course process the FFT-style "sublayers" in reverse order when deciphering. So if we use the opposite orders in the top and bottom mixing sections, we can use the exact same structure for both enciphering and deciphering; only the table contents then need be changed." OK, that was the old section, now for some more: Take a look at any of the diagrams, including the nicer .GIF ones on my pages, which start at the top left of my main page. (Click on that and you get to the bigger one some ways down. Click on that and you get to the html article on this mixing and an even bigger diagram.) 
These diagrams show data flowing from top to bottom on lines which transport one element each (that would be a single value, possibly represented in binary on 8 copper wires). Now, see the horizontal lines? They each connect two lines or elements which are to be mixed together. Each of these mixings is one BBM or pair of orthogonal Latin squares, and takes two elements, mixes them, and returns two mixed values. The mixed results then replace the original values in the same positions just like the "butterfly" operations used in some FFT's. So we can mix each element with another, and then each pair with another pair and so on until every element is mixed with every other element. This occurs in log2( n ) "sub-levels," because each BBM mixing is "dyadic" in the sense of taking 2 inputs, and so each mixing sublevel has the mixed results from twice as many elements as the sublevel before it. So we can obviously produce a pair of elements which each contain "an equal contribution" from n original input values in m sublevels, where n = 2**m. And if we do this across the full width of the block, we get a full block of mixed elements in which each of the original block elements is represented "equally" in each element of the result. This block is decipherable to the same extent that each of the mixings is reversible. And this is guaranteed by the nature of the mixings. The pattern of these mixings is exactly like some implementations of the FFT, which is thus "FFT-style." >> Nevertheless, I use a slightly different definition: I see a block >> cipher as one which requires the accumulation of data before ciphering >> can occur. This intuitive definition is consistent with the term >> "block," but it also means that transposition is a form of block >> cipher, and transposition does not have the diffusion signature. > >I prefer simply to think that a block cipher operates on units of blocks >(bunches of characters or bits (which are the fundamental units) Bits are the fundamental units of *information*, but not necessarily the fundamental units of *operation*, which could be any representation, such as "trits" or whatever. >). This >(larger) unit, the block, must of course be complete before being >operated on. So your definition is inline with mine. A transposition >is certainly a block cipher then (independent of whether diffusion >is achieved). If we XOR two character streams, it would according to >my definition be a stream cipher if the fundamental unit is character >and a block cipher if the fundamental unit is bit. The choice of the >fundamental unit and the larger unit (block) is essentially arbitrary, >depending on the standpoint one prefers. (One can also choose to >consider a word of 32 bits or a group of 64 bits to be a fundamental >unit.) This is what I meant by there being no absolute distinction >between stream and block cipher. I still think this would generally be considered a wrong interpretation of streaming. When true streaming occurs it is boundless. Now, we might have a system which for its own reasons inserts boundaries and so streams in "chunks" (this is *not* a block cipher), but the streaming itself is boundless. If the distinction between "block" and "stream" is going to have any meaning, streaming across a 64-bit block is still streaming: From the point of view of the streaming operation, the 64-bit block of data did not need to all be available at one time. A "true" block cipher must have a full block at once for operations to proceed. 
I note that the inability to make this distinction would essentially invalidate the expanded form of the block definition, and so return us to the emulated Simple Substitution definition (which is already the dominant understanding). The problem with *that* is that it says "block" for a distinction which is *not* based on the "block" but instead based on *a* *particular* *type* of block transformation. And *that* means that we need a name for "block" transformations of *other* types, and *that* will be very, very confusing. >> Then I see block cipher "chaining modes" (like CBC) as stream >> meta-ciphers which use a block cipher as a component. This provides a >> welcome unity between the concepts used in classical stream cipher >> designs and current block cipher usage. > >Taking the block as the fundamental unit CBC is a stream cipher. >So we agree. Great! >I think there is a need though of a >site(s) that gives comprehensive links to Web publications (not >software packages, nor simply reprints from journals) on cryptology >so that one can conveniently find them, Well, possibly, but that site would be very difficult to maintain. Ideally we might think that for some reason every author would want to make sure that every one of their works was there, but that won't happen. On the other hand, every author nowadays should have some sort of home page which links to their particular articles. Each author should have the responsibility of making their particular articles available. So then the problem becomes finding those home pages. I have links to a few lists of crypto people's home pages in my crypto links. I think a better idea would be to have short reviews of each article, organized by topic. I have been doing some of this for ink-on-paper articles in the "Literature Surveys" section on my pages: http://www.io.com/~ritter/#LiteratureSurveys In this way one can find the articles *on a particular topic* in one place, and from the quoted text make a meaningful selection of those which might be worth pursuing, whether on-line or in a library. Doing this for every topic in cryptography would, however, be a very big and generally thankless job. >since the common search >machines are ineffective, I believe. Yeah, I don't know what happened. It seemed like they were far more useful even just a year ago. Now it seems like one can hardly find anything. Odd. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: 20 Oct 1998 04:26:57 GMT From: jsavard@freenet.edmonton.ab.ca () Message-ID: <70h3eh$ek6$2@news.sas.ab.ca> References: <362b7c6f.5322932@news.io.com> Newsgroups: sci.crypt Lines: 21 Terry Ritter (ritter@io.com) wrote: : Yeah, I don't know what happened. It seemed like they were far more : useful even just a year ago. Now it seems like one can hardly find : anything. Odd. I have noticed things have gotten worse since I've started using the Internet - but since I started recently, and even more recently had a Web page, I think I know why. For one thing, many web pages are not permanent, and move to new locations. For another, the number of web sites is growing at a rate the search engines are finding it hard to keep up with. As a result, they're indexing a smaller proportion of existing Web sites, and a larger number of out-of-date links. Plus, many of the services are being funded, or advertisers are trying them, on a speculative basis. If banner ads on AltaVista or Yahoo! don't bring the results advertisers expect, eventually the funds for the elaborate computer installations needed to provide these free services will dry up. John Savard
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Tue, 20 Oct 1998 11:00:06 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <362C5F26.D347BAB3@stud.uni-muenchen.de> References: <362b7c6f.5322932@news.io.com> Newsgroups: sci.crypt Lines: 120 Terry Ritter wrote: > > <mok-kong.shen@stud.uni-muenchen.de> wrote: .................... > Bits are the fundamental units of *information*, but not necessarily > the fundamental units of *operation*, which could be any > representation, such as "trits" or whatever. We indeed agree. A unit of operation can consist of several (arbitrarily defined) fundamental units of information. Such a unit I call a block. > > This > >(larger) unit, the block, must of course be complete before being > >operated on. So your definition is inline with mine. A transposition > >is certainly a block cipher then (independent of whether diffusion > >is achieved). If we XOR two character streams, it would according to > >my definition be a stream cipher if the fundamental unit is character > >and a block cipher if the fundamental unit is bit. The choice of the > >fundamental unit and the larger unit (block) is essentially arbitrary, > >depending on the standpoint one prefers. (One can also choose to > >consider a word of 32 bits or a group of 64 bits to be a fundamental > >unit.) This is what I meant by there being no absolute distinction > >between stream and block cipher. > > I still think this would generally be considered a wrong > interpretation of streaming. When true streaming occurs it is > boundless. Now, we might have a system which for its own reasons > inserts boundaries and so streams in "chunks" (this is *not* a block > cipher), but the streaming itself is boundless. We seem to agree here too. My sentences nowhere contradict the thesis that a stream is boundless. A stream in my definition is simply an unspecified number of (chosen) fundamental units of information. It has no boundaries or 'chunks'. If we impose an (again arbitrary) block structure on it (see my definition) then there are boundaries of the blocks (delimiting those fundamental units belonging to each individual block) but the stream itself has no such boundaries. A practical stream of course has on the other hand a start and an end (message begin and message end). > > If the distinction between "block" and "stream" is going to have any > meaning, streaming across a 64-bit block is still streaming: From the > point of view of the streaming operation, the 64-bit block of data did > not need to all be available at one time. A "true" block cipher must > have a full block at once for operations to proceed. I don't see disagreement between us. As I tried to explain, the same information flow can be regarded as either a stream or blocks. If you are doing a streaming operation, you operate on each (of your defined) fundamental units at a time. If that unit is the bit then all you need is one bit at one time. If the supplier gives you more bits simultaneously, that's fine, but you don't need that extra service. A block cipher operates on one block at a time and by definition must have all the fundamental units belonging to it available before the operation can work. On this last point we evidently fully agree. 
> I note that the inability to make this distinction would essentially > invalidate the expanded form of the block definition, and so return us > to the emulated Simple Substitution definition (which is already the > dominant understanding). The problem with *that* is that it says > "block" for a distinction which is *not* based on the "block" but > instead based on *a* *particular* *type* of block transformation. And > *that* means that we need a name for "block" transformations of > *other* types, and *that* will be very, very confusing. Having said the above, I would like to add only that the grouping of fundamental units into blocks is 'syntactic'. The operation done on a block is 'semantic' and is entirely dependent on the cipher in question. It can consist of more or less complex (sub-)operations on the constituent fundamental units. To look at the mapping from the (pre-)operated block to the block after the enciphering process simply as a substitution is a legitimate but extremely oversimplified view of the matter, because it renders all the (sub-)operations carried out in the processing invisible. > >I think there is a need though of a > >site(s) that gives comprehensive links to Web publications (not > >software packages, nor simply reprints from journals) on cryptology > >so that one can conveniently find them, > > Well, possibly, but that site would be very difficult to maintain. > Ideally we might think that for some reason every author would want to > make sure that every one their works was there, but that won't happen. Well, I tend to think that in the present situation something of that sort is still better than nothing. Of course the site cannot link to just anything. Some more or less liberal policies such as those commonly practiced by some mailing lists suffice to avoid certain undesirable problems, I guess. Unfortunately for technical reasons I am unable to offer that service on my site, otherwise I would have started such a project. Maintenance wouldn't be a big issue. If some links become unavailable with time one can just forget about them until someone complains. > > On the other hand, every author nowadays should have some sort of home > page which links to their particular articles. Each author should > have the responsibility of making their particular articles available. > So then the problem becomes finding those home pages. I have links to > a few lists of crypto people's home pages in my crypto links. > > I think a better idea would be to have short reviews of each article, > organized by topic. I have been doing some of this for ink-on-paper > articles in the "Literature Surveys" section on my pages: > > http://www.io.com/~ritter/#LiteratureSurveys > > In this way one can find the articles *on a particular topic* in one > place, and from the quoted text make a meaningful selection of those > which might be worth pursuing, whether on-line or in a library. Doing > this for every topic in cryptography would, however, be a very big and > generally thankless job. Your recommendations are certainly very reasonable. I see you have done a good job. M. K. Shen
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Thu, 08 Oct 1998 11:31:10 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <361C946E.9BC8501E@stud.uni-muenchen.de> References: <6v5mcq$cj1$1@nnrp1.dejanews.com> <3613D2D3.9F597176@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 47 dianelos@tecapro.com wrote: > > In article <3613D2D3.9F597176@stud.uni-muenchen.de>, > Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: > >[...] > > I am not yet quite clear of how your scheme compares with the > > following: > > > > use a key (known to the partner) and PRNG to generate R > > but do not transmit R. > > compute and transmit C = Encrypt( T xor R ) > > > Well, the two methods are clearly different. Your method does use > a larger key but does not inject noise and therefore does not > increase the size of the ciphertext. In fact, if the PRNG is I see that I haven't yet properly understood your idea. Let me therefore reproduce your previous post: > Noise can easily be used with any cipher. Here is a possibility: > Use noise to initialize a RNG and XOR its output with plaintext > blocks before encrypting. Repeat this process periodically. If > your RNG has an internal state of 128 bits (16 bytes) and you > re-initialize it with noise every 512 bytes of plaintext then you > increase the size of the ciphertext only 3.1% > produce and transmit noise R > compute and transmit C = Encrypt( T xor R ) You repeatedly use noise as the seed of a PRNG to obtain its output R and XOR the plaintext T with R. (The phrase 'noise R' above seems to mean 'output R of PRNG using some noise as seed'.) You encrypt (T XOR R) to C and send both R and C. So you double the volume of transmission. My point is that R is available to the analyst and this is undesirable. So I think one of your sentences above probably should read: use some noise N via PRNG to generate R and transmit N compute and transmit C = Encrypt( T XOR R ) But then I think one should transmit not N but U = Encrypt( N ). Anyway I am much confused and would very much appreciate your explanation. M. K. Shen
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Thu, 08 Oct 1998 19:24:31 GMT From: dianelos@tecapro.com Message-ID: <6vj3hf$jii$1@nnrp1.dejanews.com> References: <361C946E.9BC8501E@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 78 In article <361C946E.9BC8501E@stud.uni-muenchen.de>, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: > dianelos@tecapro.com wrote: > > produce and transmit noise R > > compute and transmit C = Encrypt( T xor R ) > > You use noise often times as seed to a PRNG to obtain its output R > and XOR the plaintext T with R. (The phrase 'noise R' above seems to > be 'output R of PRNG using some noise as seeds'.) You encrypt (T XOR R) > to C and send both R and C. So you double the volume of transmission. > My point is that R is available to the analyst and this is > undesirable. Yes and no. Suppose a new random R is produced with each encryption. Then this simple method at least stops all *chosen* plaintext attacks. R is known after the encryption but an attacker can only choose T before the encryption. This method does not stop chosen ciphertext attacks, where the attacker can specify both R and C. > So I think one of your sentences above probably should read: > > use some noise N via PRNG to generate R and transmit N > compute and transmit C = Encrypt( T XOR R ) > > But then I think one should transmit not N but U = Encrypt( N ). > > Anyway I am much confused and should very much appreciate your > explanation. Here is what I want: A method that uses ciphers as a primitive and stops all known plaintext or ciphertext only attacks. I don't see why such a method cannot exist; if it turns out that such a method does exist and is practical it would be extremely important for many security systems where speed is not critical. Terry Ritter proposes to include a random field in the text block. This does have advantages but it does not stop ciphertext only attacks which are based on partial knowledge of the plaintext. The method I proposed earlier in this thread (with the double encryption excised - as Terry said, using double encryption cannot be functionally different from using single encryption) is this: For each plaintext T generate random R and compute: C1 = E( k1, R ) C2 = E( k2, R xor T ) The idea is that the second encryption uses plaintexts that are in principle unknown to the attacker. Observe that it is possible to cancel out the random variable R: R = D( k1, C1 ) => C2 = E( k2, D( k1, C1 ) xor T ) This last expression can be rewritten thus: C2 = F( k1, k2, C1, T ) where F is a new cipher with a key that is formed by the secret but constant k1, k2, and the known but variable C1. Therefore this method is equivalent to using a cipher with variable keys, which I understand does stop known plaintext or ciphertext only analysis. So this method may work for most ciphers E but I don't know how to prove it. -- http://www.tecapro.com email: dianelos@tecapro.com
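The two-encryption method is easy to state in code. A minimal sketch, with keyed pseudo-random permutations standing in for the cipher E and a 16-bit toy block (all key values and sizes here are illustrative): each plaintext block costs two ciphertext blocks, and the receiver cancels R out exactly as in the algebra above.

    import random

    def make_cipher(key):
        # keyed pseudo-random permutation standing in for E
        p = list(range(1 << 16))
        random.Random(key).shuffle(p)
        inv = [0] * len(p)
        for x, y in enumerate(p):
            inv[y] = x
        return p, inv

    E1, D1 = make_cipher(1001)        # k1
    E2, D2 = make_cipher(2002)        # k2

    def rand_encrypt(t):
        r = random.randrange(1 << 16)          # fresh random R per block
        return E1[r], E2[r ^ t]                # (C1, C2): size is doubled

    def rand_decrypt(c1, c2):
        return D1[c1] ^ D2[c2]                 # R xor (R xor T) = T

    t = 0x4D4B
    pairs = {rand_encrypt(t) for _ in range(5)}
    print(len(pairs), "ciphertext pairs for the same plaintext block")
    assert all(rand_decrypt(c1, c2) == t for c1, c2 in pairs)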
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Mon, 12 Oct 1998 13:45:54 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <3621FA02.9C7D2394@stud.uni-muenchen.de> References: <6vj3hf$jii$1@nnrp1.dejanews.com> Newsgroups: sci.crypt Lines: 63 dianelos@tecapro.com wrote: > > Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: > > dianelos@tecapro.com wrote: > > > produce and transmit noise R > > > compute and transmit C = Encrypt( T xor R ) > > > > You repeatedly use noise as the seed to a PRNG to obtain its output R > > and XOR the plaintext T with R. (The phrase 'noise R' above seems to > > mean 'the output R of a PRNG seeded with some noise'.) You encrypt (T XOR R) > > to C and send both R and C. So you double the volume of transmission. > > My point is that R is available to the analyst, and this is > > undesirable. > > Yes and no. Suppose a new random R is produced with each encryption. > Then this simple method at least stops all *chosen* plaintext > attacks. R is known after the encryption, but an attacker can > only choose T before the encryption. This method does not stop > chosen ciphertext attacks, where the attacker can specify both R > and C. > > > So I think one of your sentences above probably should read: > > > > use some noise N via PRNG to generate R and transmit N > > compute and transmit C = Encrypt( T XOR R ) > > > > But then I think one should transmit not N but U = Encrypt( N ). > > > > Here is what I want: A method that uses ciphers as a primitive > and stops all known plaintext or ciphertext only attacks. I don't > see why such a method cannot exist; if it turns out that such a > method does exist and is practical, it would be extremely important > for many security systems where speed is not critical. I don't yet fully understand. Aren't your produce and transmit noise R and my use some noise N via PRNG to generate R and transmit N functionally equivalent (leaving security aside)? In your case R is simply known to the analyst. In my case, if the PRNG is not known (say there is a bunch of these to choose from), R is unknown to him; or else one transmits U = Encrypt( N ) as I suggest. In the second case the volume of transmission is reduced. As to the topic of chosen plaintext attack, I think it is the reference point that is essential. You said that the attack is impossible because R is known only after encryption. Your attack refers to your 'Encrypt'. But if you consider 'XOR with R' to be an encryption function 'Encrypt1' (even though in your case R happens to be known to the analyst), then one sees that the attack is a chosen plaintext attack against the product cipher 'Encrypt1'+'Encrypt'. Whether this is possible or not is, I think, dependent upon the strength of this product cipher. If you think that attack is impossible, then the same should be all the more true of the alternative scheme I suggested. Could we perhaps discuss a little bit more? M. K. Shen
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Mon, 12 Oct 1998 21:11:55 GMT From: dianelos@tecapro.com Message-ID: <6vtrar$7lq$1@nnrp1.dejanews.com> References: <3621FA02.9C7D2394@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 98 In article <3621FA02.9C7D2394@stud.uni-muenchen.de>, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: > dianelos@tecapro.com wrote: > > Here is what I want: A method that uses ciphers as a primitive > > and stops all known plaintext or ciphertext only attacks. I don't > > see why such a method cannot exist; if it turns out that such a > > method does exist and is practical, it would be extremely important > > for many security systems where speed is not critical. > > I don't yet fully understand. Aren't your > > produce and transmit noise R > and my > > use some noise N via PRNG to generate R and transmit N > functionally equivalent (leaving security aside)? In your > case R is simply known to the analyst. In my case, if the PRNG is > not known (say there is a bunch of these to choose from), R is unknown > to him; or else one transmits U = Encrypt( N ) as I suggest. In the > second case the volume of transmission is reduced. O.K. Let us suppose we have two encryption boxes, each of which includes a true random generator. For each plaintext T_i, the first box produces random R_i and outputs R_i as well as E( T_i xor R_i ). The second box produces random N_0 and outputs E( N_0 ) - then for each plaintext it transmits E( PRNG(N_i) xor T_i ), where N_i denotes the generator's internal state at step i. Now, the first box cannot be attacked by a chosen plaintext attack. Even if the attacker could discover the secret key K used by E by choosing only *one* plaintext processed by E, he cannot determine what data will actually be processed by E, because each time the plaintext gets mixed with a new random R_i. The same goes for an attack that works if the attacker can choose any condition for a sufficiently large number of blocks. By having the box mix one random bit with each plaintext bit, the attacker has zero control over the data block that will be processed by E. I think that, in principle at least, the second box can be attacked with chosen plaintext. First let us suppose without loss of generality that the same PRNG is always used. (If there is a choice of several, repeat the attack several times.) Every PRNG displays some kind of regularity that can be used in an attack. An extreme example, but sufficient for our discussion of principle, is that every PRNG eventually loops back, say after M iterations. Then PRNG( N_i ) = PRNG( N_(i+M) ). Now an attacker can choose T_i and T_(i+M) with a particular difference, which will be preserved after XORing with the identical PRNG value. Thus a differential attack is in principle possible against the second box. Now I agree that this discussion about principle may not be too important in practice. Suppose somebody proved that for a particular E the following method cannot be attacked by *any* imaginable chosen plaintext attack: C1_i = E( R_i ) C2_i = E( T_i xor R_i ) Then we would have a quite practical, provably secure method, albeit one that doubles the size of the ciphertext. After all, chosen plaintext is the most intrusive method of attack - if that attack will not work in principle, neither will known plaintext or ciphertext only attacks. Now if space or transmission speed is a limiting factor, then the true random variable R could be replaced by short PRNG sequences based on true random seeds N. 
For example, a method that produced a new random seed every 8 blocks and used it to generate good pseudo-random R_i for the next 8 encryptions would appear very secure too, and would increase ciphertext size by only 12.5%. Any weakness in the PRNG would have to be exploited within only 8 blocks. The same method injecting noise every 100 blocks would increase ciphertext size by only 1% and still look very good. This last method is what you propose. Other possibilities exist. If E can encrypt large blocks, another possibility would be to use Terry's random fields. One would get an advantage if that random field were used as the seed to a PRNG and the resulting sequence were XORed with the rest of the data block. For example, suppose E can encrypt 128-byte blocks: for each 112-byte plaintext T_i, generate a true random 16-byte N_i, use it as the seed to a good PRNG to produce 112 bytes of pseudo-random R_i, and finally encrypt the 128-byte concatenation of N_i with ( T_i xor R_i ). Space inefficiency is now about 12.5% and security is probably much greater. Only, this method, as well as all its variants, can at least in principle be attacked by a chosen plaintext attack. On the other hand it is an open question how this method of "internal randomization" compares to the method of "external randomization" described above, which has the same space inefficiency of 12.5%. -- http://www.tecapro.com email: dianelos@tecapro.com -----------== Posted via Deja News, The Discussion Network ==---------- http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
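(The 128-byte "internal randomization" layout can be sketched the same way; E_big below is a toy large-block cipher, XOR with a key-derived pad, standing in for a real one. The 16-byte random field N travels inside the block, so the overhead is 16 bytes per 128-byte ciphertext block, i.e. 12.5%.)

  import hashlib, os

  def prng(seed: bytes, n: int) -> bytes:
      # Simple hash-counter PRNG, for illustration only.
      out, i = b"", 0
      while len(out) < n:
          out += hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
          i += 1
      return out[:n]

  def E_big(key: bytes, block: bytes) -> bytes:
      # Toy 128-byte "large block cipher": XOR with a key-derived pad.
      return bytes(a ^ b for a, b in zip(block, prng(b"E" + key, len(block))))

  def encrypt_internal(key: bytes, T: bytes) -> bytes:
      assert len(T) == 112           # 112 bytes of plaintext per block
      N = os.urandom(16)             # 16-byte true-random field
      R = prng(N, 112)               # expand the field to a whitening mask
      masked = bytes(a ^ b for a, b in zip(T, R))
      return E_big(key, N + masked)  # one 128-byte ciphertext block

  def decrypt_internal(key: bytes, C: bytes) -> bytes:
      P = E_big(key, C)              # the toy E_big is its own inverse
      N, masked = P[:16], P[16:]
      return bytes(a ^ b for a, b in zip(masked, prng(N, 112)))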
Subject: Re: Thought question: why is encrypted output routinely the same size as the input? Date: Tue, 13 Oct 1998 16:06:39 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36236C7F.3D87CBAF@stud.uni-muenchen.de> References: <6vtrar$7lq$1@nnrp1.dejanews.com> Newsgroups: sci.crypt Lines: 35 dianelos@tecapro.com wrote: > O.K. Let us suppose we have two encryption boxes, each of which > includes a true random generator. For each plaintext T_i, the > first box produces random R_i and outputs R_i as well as E( T_i xor > R_i ). The second box produces random N_0 and outputs E( N_0 ) - then > for each plaintext it transmits E( PRNG(N_i) xor T_i ). You are right. A PRNG can hardly compete with a real TRNG. (There are some difficulties in obtaining and employing really good TRNGs though, in my humble opinion.) > Now I agree that this discussion about principle may not be too > important in practice. Suppose somebody proved that for a > particular E the following method cannot be attacked by *any* > imaginable chosen plaintext attack: > > C1_i = E( R_i ) > C2_i = E( T_i xor R_i ) > > Then we would have a quite practical, provably secure method, > albeit one that doubles the size of the ciphertext. After all, > chosen plaintext is the most intrusive method of attack - if that > attack will not work in principle, neither will known plaintext or > ciphertext only attacks. Given the earlier assumption that a TRNG is available, I think one could do something practically useful by preventing (or at least effectively reducing the probability of) the analyst knowing the correspondence between C1_i and C2_i, e.g. by sending the two streams through different channels or intermixing their bytes in some way. (BTW, one could certainly choose different algorithms for the two encryptions.) M. K. Shen
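(One way to read "intermixing the bytes of both streams" is a key-driven permutation of the combined stream; a fixed interleave would not help, since the analyst could undo it by inspection. The sketch below is one interpretation, not Shen's specification: Python's random module is used only to make the shuffle deterministic from a shared key, and a real design would need a cryptographic generator here.)

  import random

  def keyed_order(key: bytes, n: int) -> list:
      rng = random.Random(key)   # deterministic shuffle from the shared key
      order = list(range(n))
      rng.shuffle(order)
      return order

  def mix(key: bytes, c1: bytes, c2: bytes) -> bytes:
      # Permute the bytes of C1 || C2 so the C1_i / C2_i pairing is hidden.
      # Assumes len(c1) == len(c2).
      data = c1 + c2
      return bytes(data[i] for i in keyed_order(key, len(data)))

  def unmix(key: bytes, mixed: bytes):
      order = keyed_order(key, len(mixed))
      data = bytearray(len(mixed))
      for pos, i in enumerate(order):
          data[i] = mixed[pos]
      half = len(data) // 2
      return bytes(data[:half]), bytes(data[half:])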