Known-Plaintext and Compression

A Ciphers By Ritter Page

When we talk about attacking a cipher, we normally expect the opponents to have ciphertext. So known-plaintext is the information condition of having some amount of both the plaintext and the related ciphertext, for use in an attack. (The point of such an attack might be to expose the key, thus eventually exposing plaintext not otherwise known. In practice, the "known" plaintext might consist of some real knowledge and some guesses.)

Anyone who has actually attacked a real cipher in practice must know the irreplaceable advantage of known-plaintext: A cipher is a key-selected transformation between plaintext and ciphertext. To attack a cipher is to use knowledge of a particular transformation to expose the key which selected that transformation. But the very concept of a "transformation" implies knowledge of both input (plaintext) and output (ciphertext).

Obviously, ciphers can have various weaknesses which can be attacked in different ways. Normally, however, we expect ciphertext to be random-like, without peculiar statistical characteristics. In that case, nobody can mount an attack on the cipher transformation without knowing -- or assuming -- something about the plaintext. Absent actual plaintext -- or some identifiable plaintext or ciphertext characteristic -- it is impossible to expose the transformation, and the correct key cannot be identified by any means whatsoever without knowing both plaintext and ciphertext.

Considering the general lack of proof of strength in practice in cryptography, leveraging an actual impossibility would seem very useful. But, in practice, it is very difficult to prevent some amount of plaintext from escaping, because the information in that plaintext may not have any worth, and so will not be protected.

Nevertheless, if some way could be found to prevent known-plaintext exposure, we could eliminate all attacks of any sort -- known or unknown -- on the transformation itself. And we can approach that happy state by multi-ciphering with a stack of three ciphers using independent keys.

It is true that the three ciphers in a stack could be considered a single cipher, which might well have some exposed plaintext. But we expect the overall cipher -- the Shannon addition of three ciphers -- to be vastly more complex than any particular component cipher used alone. The stack thus protects against attacks on each of the component transformations themselves. It also protects against the case where the component cipher we might have used by itself is actually weak.
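The three-cipher stack can be sketched in Python. The keystream "ciphers" below are toy stand-ins derived from SHA-256 (hypothetical, with no real strength); only the cascading structure -- three independently keyed transformations, so no single component sees both the true plaintext and the final ciphertext -- is the point:

```python
import hashlib

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher keyed via SHA-256 in counter mode.
    A stand-in for a real cipher; NOT secure."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

def triple_encipher(keys, plaintext: bytes) -> bytes:
    """Multi-cipher: pass the text through three ciphers with
    independent keys."""
    text = plaintext
    for key in keys:
        text = toy_stream_cipher(key, text)
    return text

def triple_decipher(keys, ciphertext: bytes) -> bytes:
    text = ciphertext
    for key in reversed(keys):          # undo in reverse order
        text = toy_stream_cipher(key, text)  # XOR is its own inverse
    return text

keys = [b"key-one", b"key-two", b"key-three"]
ct = triple_encipher(keys, b"attack at dawn")
assert triple_decipher(keys, ct) == b"attack at dawn"
```

With real block or stream ciphers in place of the toy function, an attacker of any single component lacks either its true input or its true output.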

Contents

• 2001-06-19 lcs Mixmaster Remailer: "Inexperienced users of crypto systems are often concerned about known plaintext." "We need to communicate clearly that known plaintext is no longer an issue." "If your cipher is weak against known plaintext, you should be using a different cipher."
• 2001-06-19 Terry Ritter: "No." "Modern ciphers are indeed designed to resist known-plaintext. That is a noble design goal. Whether or not it has been achieved is, of course, of some continuing interest."
• 2001-06-19 John Myre: "Many experts believe that it has . . . ."
• 2001-06-19 lbudney-usenet@nb.net: "I would agree . . . if you'd said that *extreme* efforts to eliminate known plaintext is just silly."
• 2001-06-19 John Myre: "I just think it's wrong to bring security into the decision, because it's too easy to get your priorities wrong."
• 2001-06-20 David Hopwood: "Not doing compression is not "just silly". Compression has a performance cost . . . ."
• 2001-06-20 Tim Tyler: ". . . if the plaintexts are all fixed size forms, then compressing them may result in the attacker getting more information than not doing so."
• 2001-06-20 lbudney-usenet@nb.net: ". . . deprecating compression is just silly. It has its place."
• 2001-06-19 Terry Ritter: "Not knowing both the input and the output acts to hide the ciphering function itself."
• 2001-06-19 John Savard: ". . . if the application tends to leave certain fields blank, does it make sense to separate this from, say, the _compression_ stage?"
• 2001-06-20 John Myre: "Complexity is the enemy of correctness. Therefore, it will most often be best to KISS your encryption modules."
• 2001-06-19 Tim Tyler: "While tying yourself in knots to avoid known plaintext may be over the top, avoiding it *is* desirable."
• 2001-06-19 John Myre: "Exactly how much effort is it worth?"
• 2001-06-19 Terry Ritter: "How can it be *wrong* to have a suspicion of weakness, when the assertion of strength is not based on proven fact?" "We need to consider the consequences of our hopes being wrong."
• 2001-06-19 John Savard: ". . . he meant you would be "wrong" if the weakness did not in fact exist, regardless of the fact that you are right that there is no proof it does not." "It is possible to have reasons for confidence in something that fall short of proof . . . ."
• 2001-06-20 Terry Ritter: "Cryptography is fundamentally different . . . we have no way of knowing whether any particular cipher is successful."
• 2001-06-20 SCOTT19U.ZIP_GUY: "I agree basically with what you said. But I don't think you have ever completely described the nature of the three ciphers that would be used in series."
• 2001-06-20 Terry Ritter: "Without some knowledge of the plaintext itself, or of structure in the plaintext, breaking a cipher becomes very difficult."
• 2001-06-26 John Savard: "I agree that the ciphers in a cascade should not expand the data being enciphered." "However, if an exception is made to allow the cipher steps to use a random IV, they should be forced to get that IV from the overall 'system' and not produce it for themselves."
• 2001-06-26 SCOTT19U.ZIP_GUY: ". . . in your cascading are you limiting to only methods that have matching block sizes . . . ."
• 2001-06-27 John Savard: "Either they all have the same alignment restriction (say, to whole bytes) or they must all be able to preserve the exact number of bits."
• 2001-06-27 SCOTT19U.ZIP_GUY: "I think you're starting to get there."
• 2001-06-21 Bryan Olson: ". . . if we believe we cannot have confidence without proof, then we have the same problem we started with. We are just as motivated to triple the ciphers that are themselves tripled ciphers."
• 2001-06-21 Terry Ritter: "Using a sequence of different ciphers is how we multiply ciphers both in number and complexity; this is Shannon multiplication of secrecy systems."
• 2001-06-21 John Myre: "The *logical argument*, however, goes to Bryan. From the premise that we don't know the strength of our ciphers we can only conclude that we don't know the strength of any construction based on them, either."
• 2001-06-21 Paul Pires: "Is there any known "known plaintext attack" that works without knowledge of the ciphertext? Is there any ciphertext only attack that can work when the plaintext is unknowable and Pseudo-random?"
• 2001-06-22 John Savard: ". . . the individual ciphers it is being proposed to triple up are *already* immune to any known attacks to date that is faster than brute force!"
• 2001-06-22 Mark Wooding: ". . . we can /prove/ that cascading ciphers doesn't introduce any weaknesses . . . ." "Proving that it actually helps seems very hard."
• 2001-06-22 John Savard: "That is not true when the second cipher in the chain is vulnerable to a known-plaintext attack . . . ."
• 2001-06-21 Terry Ritter: "That "logical argument" is a "red herring":" ". . . multiciphering with three different ciphers and independent keys makes an overall "cipher" that is arguably much stronger than its component parts, and also prevents the exposure of information which might otherwise have been used to attack those parts."
• 2001-06-22 Mok-Kong Shen: ". . . a good block cipher needs a sufficient number of rounds, which are put together in a cascading fashion not 'entirely' different in principle to cascading multiple ciphers"
• 2001-06-22 John Savard: "True. Of course, an important difference is that block ciphers usually repeat the same round many times, although with different subkeys, and this could be bad if the round is weak in certain ways."
• 2001-06-21 Alexis Machado: "You discovered why cipher chaining is not necessary."
• 2001-06-22 Terry Ritter: ". . . the point of multiciphering is not strength, per se. If we could trust the strength we supposedly get from a single cipher, there would be no point. Unfortunately there is no way to know how a cipher will fare against real opponents; there is no feedback, thus no way to know when we have succeeded or failed."
• 2001-06-22 John Savard: ". . . even a weak cipher, as long as it does have a variable key, will indeed prevent a known-plaintext attack on the next cipher in the chain."
• 2001-06-22 David Wagner: "If we don't know for certain whether our ciphers are secure (and we don't know), we can't know this for certain, either."
• 2001-06-22 John Savard: ". . . one might note that DESX doesn't gain any resistance to differential cryptanalysis, which is usually considered a "known-plaintext" attack."
• 2001-06-22 Tim Tyler: "You barely need to say anything to claim that it's *stronger* than any of the individual encryptions."
• 2001-06-21 Bryan Olson: "That mistake - evaluating all ciphers as the same - is the error you commit when you disagreed with Savard's quote above."
• 2001-06-19 Tim Tyler: "Known plaintext will never be irrelevant."
• 2001-06-19 John Savard: "What if quantum computers start appearing at corner computer stores near you? Won't simple keysearch get a lot simpler for all those back intercepts?"
• 2001-06-22 Thomas J. Boschloo: "What I wonder about is if compression does really 'reduce' plain-text, or if it does only add to it?"
• 2001-06-19 lcs Mixmaster Remailer: "For too long have naive and inexperienced users been taught to fear known plaintext . . . . It is an abrogation of the responsibility of professional cryptographers to allow users of this technology to suffer under such a misconception."
• 2001-06-19 SCOTT19U.ZIP_GUY: "Actually any known plaintext is harmful and will weaken an encryption system."
• 2001-06-20 Tim Tyler: "Known plaintext is rightly considered harmful. It is the assertion that it is harmless that is the "sad error"."
• 2001-06-20 David Wagner: ". . . I would argue that it is a better use of your energies to spend time making sure you defend against those failure modes than to worry about known plaintext."
• 2001-06-20 Tim Tyler: "Eliminating known plaintext may not help very much - or may only help sometimes, but it can and does help."
• 2001-06-20 David Wagner: "If you're a designer of an encryption device, you have no control over the plaintext being fed to you, so you can't plausibly do anything about known plaintext anyway." "I contend, though, that the cost is very high and the benefit is low. The cost is complexity, and complexity is the worst enemy of security."
• 2001-06-20 Ben Cantrick: "What's to stop you from, say, putting a block of 16 or 32 random bytes at the beginning of any plaintext fed to you? Or putting in a random byte every 3rd byte or something."
• 2001-06-20 Tim Tyler: "This thread seems to have as a motif XML encryption." "In that domain, it seems to me that there /is/ something that can be done about known plaintext."
• 2001-06-20 John Myre: ". . . I contend that complexity is the single greatest obstacle to secure systems."
• 2001-06-20 Tim Tyler: "Ignorance, stupidity, carelessness, lack of education and laziness probably ought to be factored in somewhere."
• 2001-06-20 John Myre: "If there is *any* way to verify a key guess, then brute force will always work, if we have enough time."
• 2001-06-20 Tim Tyler: "Compression can make a brute force search impossible from a situation where a brute force search before compression would yield a unique plaintext." ". . . all it has to do is lower the message size below the unicity distance."
• 2001-06-20 David Wagner: ". . . one of the fantastic advantages of modern ciphers, which are designed to be secure against all of these far-fetched academic attacks, is that they give you an entire set of failure modes that you no longer have to worry about when designing a system."
• 2001-06-20 Tim Tyler: "To some extent the remedy for doing things wrong is to learn how to do them right."
• 2001-06-21 David Wagner: "As you add complexity, it can become harder to verify whether the system is secure."
• 2001-06-20 David Wagner: ". . . it would be a mistake to expect to defend against this failure mode by, say, compressing."
• 2001-06-20 Tim Tyler: "If you're /always/ transmitting lots of information the defense doesn't help." "If you /ever/ transmit small quantities of information under a key, it can help."
• 2001-06-20 David Wagner: "And my claim is that this case almost never happens . . . ."
• 2001-06-21 Tim Tyler: ". . . it is important to recall that compression can increase the unicity distance - and can sometimes increase it dramatically."
• 2001-06-21 Tim Tyler:
• 2001-06-20 Douglas A. Gwyn: "Stereotypes do provide an opening wedge in cracking many systems. If a collection of messages are stacked "in depth" where the stereotype recurs, it can facilitate cryptanalysis."
• 2001-06-20 lcs Mixmaster Remailer: "Hundreds of man years have been spent analyzing 3DES and probably close to that for AES. Users don't have to worry about avoiding known plaintext when using a strong algorithm like this. The most brilliant cryptographers in the world have failed to find the smallest weakness in these ciphers." "The danger from known plaintext when using a modern cipher like these is so small, so hypothetical, so unrealistic, that it is unprofessional for a cryptographer to allow the contrary belief to stand."
• 2001-06-20 Tim Tyler: "You don't know that. Indeed, it's quite likely that you have never read anything by the world's best living cryptanalysts." "Even *if* the cypher were known to be unbroken, there would still be a good case for reducing known-plaintext, as a precaution to help in dealing with other possible problems."
• 2001-06-20 John Savard: ". . . if keeping one's secrets is felt to be important, phrasing even good advice in terms of deliberately neglecting a precaution, instead of spending one's time on more important things . . . is like waving a red flag at a bull."
• 2001-06-20 David Wagner: "Even if you use compression, you'll still be using the key far past the unicity distance."
• 2001-06-20 Tim Tyler: "Even if you ignore the unicity distance there are still some remaining security plus points for compressing. The attacker gets less plaintext/cyphertext pairs, for example."
• 2001-06-20 John Savard: "3DES is DES repeated three times, and while DES is nearly as strong as its key size, small weaknesses have been found in it, most specifically against linear cryptanalysis."
• 2001-06-20 SCOTT19U.ZIP_GUY: "No one in the open community has any idea of the work of the most brilliant cryptographers. You're making the same mistake the Germans and Japanese made by falsely assuming strength. That is precisely what one should not do."
• 2001-06-20 Douglas A. Gwyn: ". . . if you know that a modern cryptosystem will be used, you needn't be concerned about the possibility of a known-plaintext attack."
• 2001-06-20 Tim Tyler: "Compare with: "Modern windows are designed not to break. Therefore, if you fit your shop with modern windows, you needn't be concerned about the possibility of them breaking.""
• 2001-06-21 Douglas A. Gwyn: "Assuming that modern windows were indeed unbreakable."
• 2001-06-21 Terry Ritter: "Only if we ASSUME (that is, make an ASS out of U and ME), that modern ciphers really *are* "indeed unbreakable." But every generation thought their ciphers were unbreakable." "Simply being designed to be unbreakable is hardly the same thing. That's a wish and a hope, not reality."
• 2001-06-22 SCOTT19U.ZIP_GUY: "Every generation seemed to think they were unbreakable and they were wrong."
• 2001-06-22 John Savard: ". . . if we actually let ourselves go . . . instead of using algorithms designed to work at blazing speed on minimal hardware, we might well have grounds for a limited degree of confidence or even complacency"
• 2001-06-22 SCOTT19U.ZIP_GUY: "That same chip cuts both ways. Who would want to not use one to break a cipher. All a chip does is compress time."
• 2001-06-22 Tim Tyler: ". . . our strongest publicly-known cyphers appear to be very strong, and resist all publicly-known attacks." ". . . it is possible that the hidden world is well in advance of public knowledge."
• 2001-06-22 Douglas A. Gwyn: "That was also true 10, 50, 200 years ago."
• 2001-06-22 Tim Tyler: "Certainly for a rather long time."
• 2001-06-22 Douglas A. Gwyn: "Shannon knew more about cryptanalysis than you think."
• 2001-06-22 John Savard: ". . . making a cipher many times more complicated makes it many *more* times harder to solve, so advances in the speed of serial computation _do_ have a one-sided payoff for the cryptographer."
• 2001-06-22 wtshaw: ". . . for Shannon, I am a critic but also a fan."
• 2001-06-22 DJohn37050: "Known plaintext *at the least* allows a dictionary to be built, which is not a good thing. So it should not be considered harmless with zero effect. It might have a negligible effect or it might be large but it is not zero."
• 2001-06-22 David Wagner: "Fortunately, we have proofs for our favorite modes of operation that, so long as you don't try to get too close to the birthday, the effect will be negligible."
• 2001-06-22 Tim Tyler: "If you don't see broken cyphers, don't conclude that cyphers don't get broken."
• 2001-06-20 David Wagner: "Far more systems have failed from too much complexity than have failed because there was a practical cryptanalytic known-plaintext attack on a well-chosen cipher. I'd gladly trade known plaintext for simplicity, any day."
• 2001-06-20 Tim Tyler: ". . . cryptanalytic known-plaintext attacks are *not* the only type of failure that eliminating known plaintext helps defend against."
• 2001-06-21 David Hopwood: "Your (Tim Tyler's) argument seems to be that there are a few situations where the message might already be short enough that compression will reduce the length to below the unicity distance. I think this will happen in so few cases that it is not worth bothering with as a security measure."
• 2001-06-21 Tim Tyler: "Recall that compression increases the unicity distance - and can do so arbitrarily far. "Perfect compression" results in a unicity distance which is infinite." "We may not yet be anywhere near perfect compressors for the domain of English text, but in other domains, much better can be done - and compressors are improving all the time."
• 2001-06-22 David Hopwood: "My assumption is that the compressed text still has a significant amount of redundancy - say, at least 0.1 bits/bit." ". . . I don't consider a factor of at most 10 to be "much" larger . . . ."
• 2001-06-22 SCOTT19U.ZIP_GUY: "Actually if the unicity distance increased by a factor of 10 then the amount of plain text needed goes up much higher since you really are talking about the amount of "cipher text" . . . ."
• 2001-06-22 Tim Tyler: "Consider email encryption. Say you use Rijndael with a 256-bit key. That gives a unicity distance of some 37 characters with English text." "English text has 84% redundancy. Taking your (admittedly optimistic) example, say that is reduced to 10%. That gives a unicity distance of 320 characters of *cyphertext*. Decompress those 320 characters with your (admittedly rather good) compressor and the message will be quite significant - over 2.6K on average - if my sums based on your entropy figures are correct." "This is not a fantastically and unrealistically short email message."
• 2001-06-22 Joseph Ashwood: "I agree with Hopwood in saying that the likelihood of the compressed value being below the unicity distance will be extremely rare . . . ."
• 2001-06-23 Tim Tyler: "Firstly, compression increases the unicity distance. The better the compression, the more the increase - with *no* theoretical upper limit to what increase is possible." "Secondly, the unicity distance is a measure of cyphertext. The cyphertext is then decompressed into the plaintext, resulting in another increase in length."
• 2001-06-26 Bryan Olson: "No it doesn't. I think in your previous post you assumed a 256-bit key." ". . . I know of no users of complexity-based ciphers who use independent keys for each 2.6K of data."
• 2001-06-27 Tim Tyler: "That's correct. Unicity distance = key size / redundancy of input. 256/0.1 = 2560 bits = 320 bytes."
• 2001-06-23 wtshaw: "The obvious solution is to use a cipher with a native estimated unicity distance that is much longer than any simple or pile of messages."
• 2001-06-24 Tim Tyler: "The unicity distance is a function of the key size and the language input." "The only things you can do are increase the key size or decrease the redundancy of the inputs (e.g. via compression)." "The properties of the cypher itself are not normally considered relevant . . . ."
• 2001-06-24 wtshaw: "I do not mean to diminish what you know but ask you to open your eyes lest you continue to ever learn to enjoy stumbling."
• 2001-06-26 Tim Tyler: "You imply I said something wrong. Explain yourself."
• 2001-06-20 lcs Mixmaster Remailer: "Our advice as professional cryptographers MUST BE to use professional-quality crypto."
• 2001-06-20 John Savard: "This is all very sensible, but do we know that even the good crypto really is good?"
• 2001-06-20 David Wagner: "We don't. But we look at the most likely failures."
• 2001-06-20 David Wagner: "There are better ways to accomplish this than to suggest using ECB mode."
• 2001-06-20 John Savard: "The 'conventional wisdom' is fundamentally sound; I'm not trying to dispute that. The anonymous poster here, though, is advocating it in such a broad, categorical fashion as to invite suspicions of trolling . . . ."
• 2001-06-20 lcs Mixmaster Remailer: "I hope the well-earned reputation of David Wagner in this forum will carry weight even if the arguments of an anonymous voice do not."
• 2001-06-21 lcs Mixmaster Remailer: "Wasn't it claimed that the early versions of PGP failed to encrypt the length fields of the private key numeric values because of this reason? They were thought to be too guessable and would provide known plaintext."
• 2001-06-20 John Savard: "Should we consider ourselves justified in feeling complete confidence in the security of our encryption, once we've settled on the "industry standard"?"
• 2001-06-20 Mark Wooding: ". . . a construction based on a block cipher can only be considered a 'modern cipher' if used in an appropriate mode (e.g., CBC, CTR, OCB, whatever)."
• 2001-06-21 Mark Wooding:
• 2001-06-20 Sean Furey: "As I see it . . . known plaintext is still a problem."
• 2001-06-21 David Wagner: "All good modern algorithms make brute-force keysearch attacks totally infeasible, simply by making the key space large enough to resist such attacks."
• 2001-06-21 Terry Ritter: "It is the knowledge of the plaintext, or of some structure in the plaintext, plus the ciphertext, which allows someone to know when the correct key has been found *by* *any* *approach* *at* *all*."
• 2001-06-22 Tim Tyler: "Known-plaintext /can/ be a problem, because it makes rejecting keys practical - and this is useful, if the keyspace can be reduced to a searchable region by other means".
• 2001-06-22 Joseph Ashwood: "We can take this argument (the original one that we can ignore known-plaintext) and show that in some nearly extreme cases ignoring it is a bad thing, it may not compromise the key, but it will at times allow other forms of attack."
• 2001-06-22 David Wagner: "The problem here isn't the presence of known plaintext. The problem is the lack of message authentication."
• 2001-06-22 David Hopwood: "Without authentication, that's an obviously flawed design."
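Much of the exchange above turns on the unicity-distance formula Tim Tyler cites: unicity distance = key entropy / plaintext redundancy. The figures quoted in the thread can be checked directly, assuming 8-bit characters and the redundancy values the posters themselves use (84% for English, 10% residual after good compression):

```python
def unicity_distance_chars(key_bits: float, bits_per_char: float,
                           redundancy: float) -> float:
    """Shannon unicity distance: the amount of ciphertext (in
    characters) beyond which a unique decryption is expected.
    'redundancy' is the redundant fraction of each character."""
    return key_bits / (bits_per_char * redundancy)

# Rijndael, 256-bit key, 8-bit ASCII English at ~84% redundancy:
english = unicity_distance_chars(256, 8, 0.84)     # ~38 characters

# Same key, text compressed down to ~10% residual redundancy:
compressed = unicity_distance_chars(256, 8, 0.10)  # 320 characters

print(round(english), round(compressed))
```

The first result matches the "some 37 characters" in the thread (the small difference comes from rounding the redundancy figure), and the second reproduces the 320-byte figure in the 2001-06-27 post.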

Subject: Known plaintext considered harmless
Date: 19 Jun 2001 05:20:32 -0000
From: lcs Mixmaster Remailer <mix@anon.lcs.mit.edu>
Message-ID: <20010619052032.804.qmail@nym.alias.net>
Newsgroups: sci.crypt
Lines: 34

Inexperienced users of crypto systems are often concerned about known
plaintext.  They have heard that known plaintext can give a cryptanalyst
an entry point into breaking ciphers.  Accordingly they fear known
plaintext and try to avoid it.

Sometimes they will even complicate their data structures or protocols
in order to avoid known plaintext.  They will incorporate compression
for no good reason, or restructure headers, or make sure that fields
set aside for future expansion hold random values rather than zeros.
All this complicates their overall system design and introduces new
possibilities for errors.

We need to communicate clearly that known plaintext is no longer an issue.
With modern ciphers and a reasonable chaining mode, you don't have
to worry about known plaintext.  If your cipher is weak against known
plaintext, you should be using a different cipher.

When designing data which will be protected by a cipher, the only
consideration should be the needs to which the data will be put.
Design for clarity and convenience.

NO CONSIDERATION WHATSOEVER should be given to manipulating or
constraining the data structures in the hopes of making the encryption
stronger!

In your overall system, the cipher is there to do the job of protecting
the plaintext.  The job of the plaintext is to represent whatever data
is being communicated.  Don't be misled into blurring these boundaries.
Modern ciphers are fully capable of providing confidentiality with any
and every plaintext.  Let the cipher do its job, don't try to "help"
it in the other parts of your design.

Let us all agree that it is time to put concerns about known plaintext
behind us.  Recommendations to avoid it are obsolete.

Subject: Re: Known plaintext considered harmless
Date: Tue, 19 Jun 2001 07:16:42 GMT
From: ritter@io.com (Terry Ritter)
Message-ID: <3b2efc49.1170309@news.io.com>
References: <20010619052032.804.qmail@nym.alias.net>
Newsgroups: sci.crypt
Lines: 24

On 19 Jun 2001 05:20:32 -0000, in
<20010619052032.804.qmail@nym.alias.net>, in sci.crypt lcs Mixmaster
Remailer <mix@anon.lcs.mit.edu> wrote:

>[...]
>Let us all agree that it is time to put concerns about known plaintext
>behind us.  Recommendations to avoid it are obsolete.

No.

"Known-plaintext" is an information condition; it is the condition of
knowing both the input to -- and the output from -- the secret
transformation.  The value of such knowledge to any real attack should
be obvious.

Modern ciphers are indeed designed to resist known-plaintext.  That is
a noble design goal.  Whether or not it has been achieved is, of
course, of some continuing interest.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM

Subject: Re: Known plaintext considered harmless
Date: Tue, 19 Jun 2001 11:08:59 -0600
From: John Myre <jmyre@sandia.gov>
Message-ID: <3B2F872B.1F4DAC8F@sandia.gov>
References: <3b2efc49.1170309@news.io.com>
Newsgroups: sci.crypt
Lines: 21

Terry Ritter wrote:
<snip>
> "Known-plaintext" is an information condition; it is the condition of
> knowing both the input to -- and the output from -- the secret
> transformation.  The value of such knowledge to any real attack should
> be obvious.
>
> Modern ciphers are indeed designed to resist known-plaintext.  That is
> a noble design goal.  Whether or not it has been achieved is, of
> course, of some continuing interest.

Which is the point.  Many experts believe that it has, which would
mean that the "obvious" value of known plaintext is in fact "very
little".

The lack of a proof for such strength is, in my view, insufficient
evidence for weakness such that plaintext manipulations specifically
to help the cipher are warranted.  It's an engineering point of
view: separate the encryption itself from the rest of the application.

JM

Subject: Re: Known plaintext considered harmless
Date: 19 Jun 2001 14:29:13 -0400
From: lbudney-usenet@nb.net
Message-ID: <m3r8wg8gsm.fsf@peregrine.swoop.local>
References: <3B2F872B.1F4DAC8F@sandia.gov>
Newsgroups: sci.crypt
Lines: 25

John Myre <jmyre@sandia.gov> writes:
>
> The lack of a proof for such strength is, in my view, insufficient
> evidence for weakness such that plaintext manipulations specifically
> to help the cipher are warranted.

Note, however, that reducing known plaintext is trivial; compression
helps quite a lot. The cipher should be strong enough not to depend on
compression (or anything else), but not doing compression is just silly!

(1) It makes encryption easier by shrinking the file.
(2) It saves bandwidth by shrinking the ciphertext.
(3) Reducing known plaintext certainly can't hurt.

I would agree (for what that's worth!) if you'd said that *extreme*
efforts to eliminate known plaintext is just silly. But I wouldn't
deprecate an inexpensive measure which probably helps and definitely
has other benefits.

Len.

--
Wow. Another unbiased evaluation from Nick ``claims against Exim's
security ... firmly grounded in prejudice'' Maclaren.
-- Dan Bernstein, author of qmail
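Budney's three points amount to a compress-then-encrypt pipeline, which can be sketched as follows. zlib does the compression; the "cipher" here is a toy XOR keystream standing in for a real one, so this shows only the ordering of the two steps:

```python
import hashlib
import zlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """Placeholder XOR keystream from SHA-256; NOT a real cipher."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

def compress_then_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # (1) shrink the file, (2) shrink the ciphertext,
    # (3) strip the stereotyped structure an attacker could exploit
    return toy_encrypt(key, zlib.compress(plaintext, level=9))

def decrypt_then_decompress(key: bytes, ciphertext: bytes) -> bytes:
    return zlib.decompress(toy_encrypt(key, ciphertext))  # XOR self-inverse

# Highly stereotyped (hence redundant) traffic compresses well:
msg = b"TO: HEADQUARTERS. NOTHING TO REPORT. " * 20
ct = compress_then_encrypt(b"secret", msg)
assert decrypt_then_decompress(b"secret", ct) == msg
assert len(ct) < len(msg)
```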

Subject: Re: Known plaintext considered harmless
Date: Tue, 19 Jun 2001 13:46:27 -0600
From: John Myre <jmyre@sandia.gov>
Message-ID: <3B2FAC13.547765F1@sandia.gov>
References: <m3r8wg8gsm.fsf@peregrine.swoop.local>
Newsgroups: sci.crypt
Lines: 23

lbudney-usenet@nb.net wrote:
<snip>
> Note, however, that reducing known plaintext is trivial; compression
> helps quite a lot. The cipher should be strong enough not to depend on
> compression (or anything else), but not doing compression is just silly!
>
> (1) It makes encryption easier by shrinking the file.
> (2) It saves bandwidth by shrinking the ciphertext.
> (3) Reducing known plaintext certainly can't hurt.
>
> I would agree (for what that's worth!) if you'd said that *extreme*
> efforts to eliminate known plaintext is just silly. But I wouldn't
> deprecate an inexpensive measure which probably helps and definitely
> has other benefits.
<snip>

All true.  But note that (the best) two of the three reasons
have nothing to do with cryptographic security.  It will of
course be rare that compression cannot be justified, in
practice.  I just think it's wrong to bring security into the
decision, because it's too easy to get your priorities wrong.

JM

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 00:33:24 +0100
From: David Hopwood <david.hopwood@zetnet.co.uk>
Message-ID: <3B2FE144.C841DE65@zetnet.co.uk>
References: <m3r8wg8gsm.fsf@peregrine.swoop.local>
Newsgroups: sci.crypt
Lines: 62

-----BEGIN PGP SIGNED MESSAGE-----

lbudney-usenet@nb.net wrote:
> John Myre <jmyre@sandia.gov> writes:
> >
> > The lack of a proof for such strength is, in my view, insufficient
> > evidence for weakness such that plaintext manipulations specifically
> > to help the cipher are warranted.
>
> Note, however, that reducing known plaintext is trivial; compression
> helps quite a lot. The cipher should be strong enough not to depend on
> compression (or anything else), but not doing compression is just silly!

Not doing compression is not "just silly". Compression has a performance
cost (which typically isn't outweighed by encrypting less data [*]), and
requires fairly large buffers. Whether compression is of benefit to a
protocol depends on the relative costs of bandwidth and storage vs the
processing needed for compression. If an evaluation of those costs suggests
that it would not be worthwhile for cleartext, then it probably won't be
worthwhile when encryption is used either.

OTOH, since just before encryption is the "last chance" to compress,
there is a good case for supporting compression in cryptography standards.
It shouldn't be expected to improve security, though; if anything, it
complicates the security analysis. (For instance, do you authenticate
the compressed data or the uncompressed data? There is a case for doing
either.)

[*] I'm assuming relatively fast ciphers like Rijndael or RC4, with a
fast MAC if applicable.

> (1) It makes encryption easier by shrinking the file.
> (2) It saves bandwidth by shrinking the ciphertext.
> (3) Reducing known plaintext certainly can't hurt.

It can't hurt security (assuming it's integrated correctly with the
security protocol), but you might not want to use it for other reasons.

--
David Hopwood <david.hopwood@zetnet.co.uk>

RSA 2048-bit; fingerprint 71 8E A6 23 0E D3 4C E5  0F 69 8C D4 FA 66 15 01
Nothing in this message is intended to be legally binding. If I revoke a
public key but refuse to specify why, it is because the private key has been
seized under the Regulation of Investigatory Powers Act; see www.fipr.org/rip

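An editorial aside: Hopwood's ordering point -- just before encryption is the
"last chance" to compress -- is easy to demonstrate, because ciphertext should
look random, and random-looking data no longer compresses. A minimal Python
sketch; the SHA-256 keystream below is a stand-in assumption for a real
cipher, used only to keep the example self-contained:

```python
import hashlib
import zlib

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream -- a stand-in for a real
    cipher, good enough to make the output look random.  The same
    operation decrypts."""
    pad = b""
    counter = 0
    while len(pad) < len(data):
        pad += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, pad))

plaintext = b"known plaintext considered harmless " * 100
key = b"an independent key"

# Compress, then encrypt: redundancy is removed while it still exists.
compress_first = toy_stream_cipher(key, zlib.compress(plaintext))

# Encrypt, then try to compress: the "last chance" has been missed,
# and the random-looking ciphertext no longer shrinks.
encrypt_first = zlib.compress(toy_stream_cipher(key, plaintext))

print(len(plaintext), len(compress_first), len(encrypt_first))
```

The receiving side mirrors the first ordering (decrypt, then decompress); as
Hopwood notes, whether to authenticate the compressed or the uncompressed
data is a separate design decision not settled by this sketch.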

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 10:13:06 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF84Du.BBC@bath.ac.uk>
References: <3B2FE144.C841DE65@zetnet.co.uk>
Newsgroups: sci.crypt
Lines: 49

David Hopwood <david.hopwood@zetnet.co.uk> wrote:
: lbudney-usenet@nb.net wrote:

:> Note, however, that reducing known plaintext is trivial; compression
:> helps quite a lot. The cipher should be strong enough not to depend on
:> compression (or anything else), but not doing compression is just silly!

: Not doing compression is not "just silly". Compression has a performance
: cost (which typically isn't outweighed by encrypting less data [*])

Compression and encryption can sometimes be done in parallel.  *Then* it's
almost always faster, because less data is encrypted.

Only on a serial machine is performance much of a concern.

: OTOH, since just before encryption is the "last chance" to compress,
: there is a good case for supporting compression in cryptography standards.
: It shouldn't be expected to improve security, though; if anything, it
: complicates the security analysis. (For instance, do you authenticate
: the compressed data or the uncompressed data? There is a case for doing
: either.)

I'm not sure I can see much case for authenticating the data after
compression.  I suppose signatures won't compress well - but apart
from that...

: [*] I'm assuming relatively fast ciphers like Rijndael or RC4, with a
:     fast MAC if applicable.

:> (1) It makes encryption easier by shrinking the file.
:> (2) It saves bandwidth by shrinking the ciphertext.
:> (3) Reducing known plaintext certainly can't hurt.

: It can't hurt security (assuming it's integrated correctly with the
: security protocol), but you might not want to use it for other reasons.

I think there are /some/ cases where it can damage security.

For example, if the plaintexts are all fixed-size forms, then compressing
them may give the attacker more information than not doing so.

Forms with lots of additional data in them will not compress so well --
so you can see how much data there is in a form from the length of the
compressed file -- whereas before compression they were all
indistinguishable.
--
__________
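Tyler's fixed-size-forms leak is worth making concrete. In this hypothetical
sketch (the form layout is invented for illustration), two forms of identical
plaintext length compress to visibly different lengths -- and since encryption
roughly preserves length, the ciphertext lengths would leak how full each
form is:

```python
import zlib

FIELD = 95  # bytes of data space per field (arbitrary for the sketch)

# Two fixed-size "forms": one mostly blank, one with fields filled in.
blank_form = (b"FIELD:" + b" " * FIELD) * 10
full_form = (b"FIELD:" + bytes(range(FIELD))) * 10
assert len(blank_form) == len(full_form)  # indistinguishable by length

# After compression the lengths differ, so a length-preserving cipher
# passes the difference straight through to the ciphertext.
c_blank = len(zlib.compress(blank_form))
c_full = len(zlib.compress(full_form))
print(c_blank, c_full)
```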

Subject: Re: Known plaintext considered harmless
Date: 20 Jun 2001 06:41:54 -0400
From: lbudney-usenet@nb.net
Message-ID: <m3hexb5t71.fsf@peregrine.swoop.local>
References: <3B2FE144.C841DE65@zetnet.co.uk>
Newsgroups: sci.crypt
Lines: 22

David Hopwood <david.hopwood@zetnet.co.uk> writes:
> lbudney-usenet@nb.net wrote:
>> ...not doing compression is just silly!
>
> Not doing compression is not "just silly". Compression has a performance
> cost...

1,000 pardons. I never intended the statement to be sweepingly true of
every context in the universe where encryption is used. There may indeed
be circumstances in which compression is clearly not the best idea. Are
you happy now?

The fact remains that deprecating compression is just silly. It has
its place.

Len.

--
Frugal Tip #55:
Get yourself a realistic-looking mongoose costume. Then, rent yourself
out to somebody who wants a rented mongoose.

Subject: Re: Known plaintext considered harmless
Date: Tue, 19 Jun 2001 19:58:07 GMT
From: ritter@io.com (Terry Ritter)
Message-ID: <3b2fabca.1848659@news.io.com>
References: <3B2F872B.1F4DAC8F@sandia.gov>
Newsgroups: sci.crypt
Lines: 40

On Tue, 19 Jun 2001 11:08:59 -0600, in <3B2F872B.1F4DAC8F@sandia.gov>,
in sci.crypt John Myre <jmyre@sandia.gov> wrote:

>Terry Ritter wrote:
><snip>
>> "Known-plaintext" is an information condition; it is the condition of
>> knowing both the input to -- and the output from -- the secret
>> transformation.  The value of such knowledge to any real attack should
>> be obvious.
>>
>> Modern ciphers are indeed designed to resist known-plaintext.  That is
>> a noble design goal.  Whether or not it has been achieved is, of
>> course, of some continuing interest.
>
>Which is the point.  Many experts believe that it has, which would
>mean that the "obvious" value of known plaintext is in fact "very
>little".

The "obvious" value to which I referred is the process of
cryptanalysis itself:

Some keyed function takes plaintext to ciphertext.  Consider trying to
expose that function while knowing only the plaintext but not the
ciphertext, or knowing the ciphertext but not the plaintext.  We can
create toy examples for which this is easy, but in general, this is
very tough.  Not knowing both the input and the output acts to hide
the ciphering function itself.

Now consider trying to expose the ciphering function knowing both the
plaintext and the ciphertext.  We expect this to be tough anyway, if
we assume (that is, make an ASS out of U and ME)  the cipher design is
strong.  But we wouldn't have to hope as hard if there were no known
plaintext in the first place.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM

Subject: Re: Known plaintext considered harmless
Date: Tue, 19 Jun 2001 23:17:16 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3b2fdc9d.29036540@news.powersurfr.com>
References: <3B2F872B.1F4DAC8F@sandia.gov>
Newsgroups: sci.crypt
Lines: 24

On Tue, 19 Jun 2001 11:08:59 -0600, John Myre <jmyre@sandia.gov>
wrote, in part:

>It's an engineering point of
>view: separate the encryption itself from the rest of the application.

That does have validity, although there are also cases when the
encryption _is_ the application, in which case that concern does not
apply.

Also, a "reasonable chaining mode" leaves some latitude for doing, in
the encryption, things that might be done elsewhere.

However, if the application tends to leave certain fields blank, does
it make sense to separate this from, say, the _compression_ stage? If
there is nothing wrong about a compressor receiving side information
about its input, so as to be able to compress it more efficiently,
then can't we make similar allowances for an encryption stage? (Thus,
part of the decryption will be to restore the blank fields and so on,
with some short indicator handling the possibility that they will stop
being blank in future expansion.)

John Savard
http://home.ecn.ab.ca/~jsavard/frhome.htm

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 09:57:52 -0600
From: John Myre <jmyre@sandia.gov>
Message-ID: <3B30C800.654F81F0@sandia.gov>
References: <3b2fdc9d.29036540@news.powersurfr.com>
Newsgroups: sci.crypt
Lines: 22

John Savard wrote:
<snip>
> If
> there is nothing wrong about a compressor receiving side information
> about its input, so as to be able to compress it more efficiently,
> then can't we make similar allowances for an encryption stage? (Thus,
> part of the decryption will be to restore the blank fields and so on,
> with some short indicator handling the possibility that they will stop
> being blank in future expansion.)
<snip>

The only thing wrong with approaches like this is the practical
side effects of such design decisions on the implementation effort.

It's *easy* to f*** up your security by making mistakes in the
implementation.  It's *hard* to be "sure" you haven't done so.
Complexity is the enemy of correctness.  Therefore, it will most
often be best to KISS your encryption modules.  If you want to do
compression, that's fine, but beware of compromising security by
implementation (or design!) problems.

JM

Subject: Re: Known plaintext considered harmless
Date: Tue, 19 Jun 2001 09:41:56 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF689w.H76@bath.ac.uk>
References: <20010619052032.804.qmail@nym.alias.net>
Newsgroups: sci.crypt
Lines: 19

lcs Mixmaster Remailer <mix@anon.lcs.mit.edu> wrote:

: NO CONSIDERATION WHATSOEVER should be given to manipulating or
: constraining the data structures in the hopes of making the
: encryption stronger! [...]

: Let us all agree that it is time to put concerns about known plaintext
: behind us.  Recommendations to avoid it are obsolete.

You go far too far.  While tying yourself in knots to avoid known
plaintext may be over the top, avoiding it *is* desirable.

You presume that cryptanalytic attack is the only possible method of
getting information relating to the key of a cypher.  This is not the
case.  If the number of possible keys can be reduced - by any means -
known-plaintext attacks can become a practical issue.
--
__________

Subject: Re: Known plaintext considered harmless
Date: Tue, 19 Jun 2001 10:46:22 -0600
From: John Myre <jmyre@sandia.gov>
Message-ID: <3B2F81DE.901777B1@sandia.gov>
References: <GF689w.H76@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 37

Tim Tyler wrote:
<snip>
> You go far too far.  While tying yourself in knots to avoid known
> plaintext may be over the top, avoiding it *is* desirable.

But the question is: *how* desirable?  Exactly how much effort
is it worth?  I'd agree with (anonymous) that the proper solution
is to reinforce the cipher.  Failure of software systems due to
unwarranted complexity is a *bad* problem.

> You presume that cryptanalytic attack is the only possible method of
> getting information relating to the key of a cypher.

?

> This is not the
> case.  If the number of possible keys can be reduced - by any means -
> known-plaintext attacks can become a practical issue.

That is at best an exaggeration.  You can reduce the number of
possible keys quite easily by brute force: guess a few keys,
decrypt the entire message for each guess, and discard the ones
that are clearly nonsense.  (We must assume that any keyless
transformations (e.g., compression) are known to the attacker.)
In practice, this is simply not an issue.

The point is that if a simple keysearch can be made practical,
because the entropy (unknown bits in the key) is small enough,
then using obfuscation on the source text as a way to prevent
that attack is almost certainly pointless.
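Myre's keysearch above (guess keys, invert the known keyless transforms,
discard the nonsense) can be sketched directly. Everything here -- the 16-bit
key, the SHA-256-keystream toy cipher -- is an illustrative assumption, not a
real system:

```python
import hashlib
import zlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy cipher: XOR with a SHA-256 keystream.  The same operation
    decrypts."""
    pad = b""
    n = 0
    while len(pad) < len(data):
        pad += hashlib.sha256(key + n.to_bytes(4, "big")).digest()
        n += 1
    return bytes(a ^ b for a, b in zip(data, pad))

secret = b"attack at dawn, attack at dawn, attack at dawn"
key = (31337).to_bytes(2, "big")   # deliberately tiny: 16-bit keyspace

# The keyless compression step is assumed known to the attacker.
ciphertext = toy_encrypt(key, zlib.compress(secret))

def brute_force(ct):
    for k in range(2 ** 16):
        trial = toy_encrypt(k.to_bytes(2, "big"), ct)
        try:
            pt = zlib.decompress(trial)   # invert the keyless transform
        except zlib.error:
            continue                      # clearly nonsense: discard
        if pt.isascii():                  # plausible plaintext survives
            return pt
    return None
```

Compressing first does not stop the search; the attacker simply decompresses
each trial decryption, and the failures even make wrong keys easier to reject.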

Terry's response is more reasoned.  The suspicion that maybe we
are all fooling ourselves, that there *aren't* any ciphers that
we can trust are as strong as we think, is at least defensible.
I think he's wrong, but of course I can't prove it.

JM

Subject: Re: Known plaintext considered harmless
Date: Tue, 19 Jun 2001 19:44:26 GMT
From: ritter@io.com (Terry Ritter)
Message-ID: <3b2fab57.1733188@news.io.com>
References: <3B2F81DE.901777B1@sandia.gov>
Newsgroups: sci.crypt
Lines: 28

On Tue, 19 Jun 2001 10:46:22 -0600, in <3B2F81DE.901777B1@sandia.gov>,
in sci.crypt John Myre <jmyre@sandia.gov> wrote:

>[...]
>Terry's response is more reasoned.  The suspicion that maybe we
>are all fooling ourselves, that there *aren't* any ciphers that
>we can trust are as strong as we think, is at least defensible.

I think that viewpoint is not only defensible but also *appropriate*
for security analysis.  (Also, the issue is not so much that no strong
ciphers exist, but that the particular cipher we propose to use may
have weakness.)

>I think he's wrong, but of course I can't prove it.

Wrong?  How can it be *wrong* to have a suspicion of weakness, when
the assertion of strength is not based on proven fact?

Absent proof of strength in practice, suspicion of weakness is
entirely appropriate.  We need to consider the consequences of our
hopes being wrong.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM

Subject: Re: Known plaintext considered harmless
Date: Tue, 19 Jun 2001 23:33:37 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3b2fde73.29506471@news.powersurfr.com>
References: <3b2fab57.1733188@news.io.com>
Newsgroups: sci.crypt
Lines: 45

On Tue, 19 Jun 2001 19:44:26 GMT, ritter@io.com (Terry Ritter) wrote,
in part:
>On Tue, 19 Jun 2001 10:46:22 -0600, in <3B2F81DE.901777B1@sandia.gov>,
>in sci.crypt John Myre <jmyre@sandia.gov> wrote:

>>I think he's wrong, but of course I can't prove it.

>Wrong?  How can it be *wrong* to have a suspicion of weakness, when
>the assertion of strength is not based on proven fact?

>Absent proof of strength in practice, suspicion of weakness is
>entirely appropriate.  We need to consider the consequences of our
>hopes being wrong.

Of course, he meant you would be "wrong" if the weakness did not in
fact exist, regardless of the fact that you are right that there is no
proof it does not.

In a way, that isn't fair, since in effect it does make it appear as
though you had done what David A. Scott did when he made his first
appearance on this newsgroup - assert for a fact that one or more
popular block ciphers (in his case, IDEA) is not secure.

It is possible to have reasons for confidence in something that fall
short of proof, and it is even possible for such reasons to be
sufficient to make some forms of precaution unwarranted.

On the one hand, I certainly could imagine cases where designing a
file format around eventual encryption of the files in question could
create serious problems in developing and debugging an application. On
the other, under most circumstances, the computational cost of
encryption is trivial these days, and making extra effort over and
above what is considered the "standard" seems difficult to argue
against.

As for the specific issue: known plaintext + small key = brute-force
search possible. Hence, any flaw in our ciphers that makes their
effective keys smaller, or any advance in computation, is more
threatening when the plaintext is known. But an answer is available
that doesn't involve the equivalent of redesigning the English
language to make it less redundant. Use larger keys, if your response
to known plaintext must be confined to the cipher stage.

John Savard
http://home.ecn.ab.ca/~jsavard/frhome.htm

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 05:20:57 GMT
From: ritter@io.com (Terry Ritter)
Message-ID: <3b30322a.7707648@news.io.com>
References: <3b2fde73.29506471@news.powersurfr.com>
Newsgroups: sci.crypt
Lines: 69

On Tue, 19 Jun 2001 23:33:37 GMT, in
<3b2fde73.29506471@news.powersurfr.com>, in sci.crypt
jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote:

>[...]
>It is possible to have reasons for confidence in something that fall
>short of proof, and it is even possible for such reasons to be
>sufficient to make some forms of precaution unwarranted.

Not in cryptography.

In the normal world where we build our intuition, we can see the
consequences of design error:  If some make of car has problems, we
know (a human gets out and reports, or gets squished).  If a computer
program has problems doing what we want, we know, because it is not
doing what we want (it may of course be doing *more* than we want, but
that is a different issue).  If a medicine does not do what we want,
we know that too.  In the normal world we know when things don't work.

Cryptography is fundamentally different:  In cryptography, we want to
protect our information from opponents, but we have no way of knowing
whether any particular cipher is successful.  We don't know who our
opponents are, and they don't tell us of their successes.  There is
thus no feedback to the design and development process from the real
mission.

The feedback we have is from practice missions.  And that's fine, as
far as it goes.  But that has little to do with the real mission, and
so should not build confidence.

While we cannot affect that per se, we *can* develop systems which
reduce the consequences of weakness, and also expose as little
information as possible.

>[...]
>As for the specific issue: known plaintext + small key = brute-force
>search possible. Hence, any flaw in our ciphers that makes their
>effective keys smaller, or any advance in computation, is more
>threatening when the plaintext is known. But an answer is available
>that doesn't involve the equivalent of redesigning the English
>language to make it less redundant. Use larger keys, if your response
>to known plaintext must be confined to the cipher stage.

I disagree.  The problem is not keyspace.  We already have keyspace.

The problem is the improved ability to analyze a ciphering function
when one has both the input to -- and output from -- that function
(this is known-plaintext, as opposed to having ciphertext-only).

Some ciphers can be broken with ciphertext only.  But the *reason* for
this is that their plaintext is structured, or correlated.  Knowing
the plaintext, or something about it, is how we solve ciphers.  When
that knowledge is not available, even simple, supposedly-"weak"
ciphers can be strong in practice.

The obvious approach to minimizing the risk of known-plaintext is to
encipher at least twice with different ciphers and keys, so that the
plaintext to the last cipher is both randomized and hidden even from
the authorized user.  I always recommend using three ciphers, which
allows one to be completely broken without exposing the others.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM
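Ritter's cascade of independently keyed ciphers can be sketched in a few
lines. The toy XOR stream cipher below is a stand-in assumption chosen only
to keep the sketch self-contained; note that three XOR layers commute, so a
real cascade would mix cipher types (say, a block cipher between two stream
ciphers) so that breaking one layer reveals nothing about the others:

```python
import hashlib

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream.
    A stand-in for any real cipher; encryption and decryption coincide."""
    pad = b""
    n = 0
    while len(pad) < len(data):
        pad += hashlib.sha256(key + n.to_bytes(8, "big")).digest()
        n += 1
    return bytes(a ^ b for a, b in zip(data, pad))

def multi_encrypt(keys, plaintext):
    """Cascade: each layer uses an independent key, so the 'plaintext'
    seen by layers 2 and 3 is already randomized."""
    data = plaintext
    for k in keys:
        data = stream_xor(k, data)
    return data

def multi_decrypt(keys, ciphertext):
    data = ciphertext
    for k in reversed(keys):
        data = stream_xor(k, data)
    return data

keys = [b"key-one", b"key-two", b"key-three"]   # independent keys
ct = multi_encrypt(keys, b"the plaintext to the last cipher is hidden")
```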

Subject: Re: Known plaintext considered harmless
Date: 20 Jun 2001 12:10:59 GMT
From: david_a_scott@emailv.com (SCOTT19U.ZIP_GUY)
Message-ID: <90C6325BDH110W296LC45WIN3030R@207.36.190.226>
References: <3b30322a.7707648@news.io.com>
Newsgroups: sci.crypt
Lines: 89

ritter@io.com (Terry Ritter) wrote in <3b30322a.7707648@news.io.com>:

>[...]
>Some ciphers can be broken with ciphertext only.  But the *reason* for
>this is that their plaintext is structured, or correlated.  Knowing
>the plaintext, or something about it, is how we solve ciphers.  When
>that knowledge is not available, even simple, supposedly-"weak"
>ciphers can be strong in practice.
>
>The obvious approach to minimizing the risk of known-plaintext is to
>encipher at least twice with different ciphers and keys, so that the
>plaintext to the last cipher is both randomized and hidden even from
>the authorized user.  I always recommend using three ciphers, which
>allows one to be completely broken without exposing the others.
>

I agree basically with what you said.  But I don't think
you have ever completely described the nature of the three ciphers
that would be used in series.  Do you agree they should be fully
bijective, meaning false keys would lead to something in the input
message space, so no information is given to the attacker to break the
system?

David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSION"
http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
MY Compression Page http://members.nbci.com/ecil/compress.htm
**TO EMAIL ME drop the roman "five" **
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be drugged.
As a famous person once said, "any cryptographic
system is only as strong as its weakest link."

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 17:29:26 GMT
From: ritter@io.com (Terry Ritter)
Message-ID: <3b30dd41.6478879@news.io.com>
References: <90C6325BDH110W296LC45WIN3030R@207.36.190.226>
Newsgroups: sci.crypt
Lines: 58

On 20 Jun 2001 12:10:59 GMT, in
<90C6325BDH110W296LC45WIN3030R@207.36.190.226>, in sci.crypt
david_a_scott@emailv.com (SCOTT19U.ZIP_GUY) wrote:

>ritter@io.com (Terry Ritter) wrote in <3b30322a.7707648@news.io.com>:
>[...]
>>Some ciphers can be broken with ciphertext only.  But the *reason* for
>>this is that their plaintext is structured, or correlated.  Knowing
>>the plaintext, or something about it, is how we solve ciphers.  When
>>that knowledge is not available, even simple, supposedly-"weak"
>>ciphers can be strong in practice.
>>
>>The obvious approach to minimizing the risk of known-plaintext is to
>>encipher at least twice with different ciphers and keys, so that the
>>plaintext to the last cipher is both randomized and hidden even from
>>the authorized user.  I always recommend using three ciphers, which
>>allows one to be completely broken without exposing the others.
>>
>
>  I agree basically with what you said. But I don't think
>you have ever completely described the nature of the three ciphers
>that would be used in series. Do you agree they should be fully
>bijective, meaning false keys would lead to something in the input
>message space, so no information is given to the attacker to break the
>system?

I don't like using the term "bijective" for this.  Maybe "plaintext
complete" comes closer, as in every possible plaintext should be at
least technically valid.  But whatever we call it, it certainly is a
desirable and possibly important property that the ideal system would
assure before invoking a cipher.  Language does not have this
property, which is exactly why we can solve some simple ciphers.

Without some knowledge of the plaintext itself, or of structure in the
plaintext, breaking a cipher becomes very difficult.  If we
communicate in distinct messages, and if the "plaintext" includes a
length field, that "length" had better be taken modulo the block size, or we
will be describing in plaintext a size the opponents can see as
ciphertext.  This is a form of known plaintext.  But known plaintext
is also common in salutations, signatures, HTML codes and so on.

On the other hand, even if a length is added to the input of the first
cipher, the input to the second and third ciphers should be well
distributed -- such that every possible plaintext seems possible --
and so not a problem.  That sounds to me like a reason for using a
sequence or "cascade" of multiple ciphers, and is related to hiding
known plaintext.

As I see it, the problem is not so much the "bijective" nature of the
last cipher, but instead the known structure in the original
plaintext.  This is a known-plaintext issue.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM

Subject: Re: Known plaintext considered harmless
Date: Tue, 26 Jun 2001 23:10:20 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3b391527.19647451@news.powersurfr.com>
References: <90C6325BDH110W296LC45WIN3030R@207.36.190.226>
Newsgroups: sci.crypt
Lines: 25

On 20 Jun 2001 12:10:59 GMT, david_a_scott@emailv.com
(SCOTT19U.ZIP_GUY) wrote, in part:

>  I agree basically with what you said. But I don't think
>you have ever completely described the nature of the three ciphers
>that would be used in series. Do you agree they should be fully
>bijective, meaning false keys would lead to something in the input
>message space, so no information is given to the attacker to break the
>system?

Whether or not, in the compression stage, one has used a bijective
compressor so that arbitrary bit strings of length N decompress to
valid messages - if that's even possible -

I agree that the ciphers in a cascade should not expand the data being
enciphered. So they will be bijective: every N bit input leads to one
of the possible N bit outputs.

However, if an exception is made to allow the cipher steps to use a
random IV, they should be forced to get that IV from the overall
'system' and not produce it for themselves. This prevents a weak
cipher layer from leaking its own key.

John Savard
http://home.ecn.ab.ca/~jsavard/index.html

Subject: Re: Known plaintext considered harmless
Date: 26 Jun 2001 23:51:11 GMT
From: david_a_scott@emailv.com (SCOTT19U.ZIP_GUY)
Message-ID: <90CCB31B0H110W296LC45WIN3030R@207.36.190.226>
References: <3b391527.19647451@news.powersurfr.com>
Newsgroups: sci.crypt
Lines: 55

jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote in
<3b391527.19647451@news.powersurfr.com>:

>On 20 Jun 2001 12:10:59 GMT, david_a_scott@emailv.com
>(SCOTT19U.ZIP_GUY) wrote, in part:
>
>>  I agree basically with what you said. But I don't think
>>you have ever completely described the nature of the three ciphers
>>that would be used in series. Do you agree they should be fully
>>bijective, meaning false keys would lead to something in the input
>>message space, so no information is given to the attacker to break the
>>system?
>
>Whether or not, in the compression stage, one has used a bijective
>compressor so that arbitrary bit strings of length N decompress to
>valid messages - if that's even possible -

Actually, Mr. J, it is possible, so what are you trying to say?

>
>I agree that the ciphers in a cascade should not expand the data being
>enciphered. So they will be bijective: every N bit input leads to one
>of the possible N bit outputs.

But what you neglect here is that ciphers may have different
boundaries.  Some work in 8-byte units; others may work in 10- or
15-byte units.  So in your cascading, are you limiting yourself to
methods that have matching block sizes, or do you allow transforms
to change the block size if bijective?

>
>However, if an exception is made to allow the cipher steps to use a
>random IV, they should be forced to get that IV from the overall
>'system' and not produce it for themselves. This prevents a weak
>cipher layer from leaking its own key.

I think you don't really need to call it an exception if
you think of the random IV, if used, as input data that is
part of the unique decryption of the resulting encrypted file.

David A. Scott

Subject: Re: Known plaintext considered harmless
Date: Wed, 27 Jun 2001 12:59:39 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3b39d840.1040971@news.powersurfr.com>
References: <90CCB31B0H110W296LC45WIN3030R@207.36.190.226>
Newsgroups: sci.crypt
Lines: 20

On 26 Jun 2001 23:51:11 GMT, david_a_scott@emailv.com
(SCOTT19U.ZIP_GUY) wrote, in part:

>    But what you neglect here is that ciphers may have different
>boundaries.  Some work in 8-byte units; others may work in 10- or
>15-byte units.  So in your cascading, are you limiting yourself to
>methods that have matching block sizes, or do you allow transforms
>to change the block size if bijective?

Either they all have the same alignment restriction (say, to whole
bytes) or they must all be able to preserve the exact number of bits.
Block ciphers can do so through "ciphertext stealing", as long as the
message is at least one block long.

Thus, I assume that an encryption program of this type will require
that messages to be encrypted be at least 256 bits long, but will make
no other restriction on length.

John Savard
http://home.ecn.ab.ca/~jsavard/index.html
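Savard's ciphertext-stealing point is concrete enough to sketch. Below is
ECB-mode ciphertext stealing around a toy invertible block function (XOR with
a fixed pad, then reverse -- a placeholder assumption, not a real cipher); the
output is exactly as long as the input for any message of at least one block:

```python
BLOCK = 8
PAD = b"\x5a\xc3\x96\x0f\x71\x2d\xe8\x44"  # toy fixed key material

def toy_block_encrypt(block: bytes) -> bytes:
    """Toy invertible block 'cipher': XOR with a pad, then reverse.
    Any real block cipher could slot in here."""
    return bytes(a ^ b for a, b in zip(block, PAD))[::-1]

def toy_block_decrypt(block: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(block[::-1], PAD))

def cts_encrypt(msg: bytes) -> bytes:
    """ECB with ciphertext stealing: output length equals input length
    for any message at least one block long."""
    assert len(msg) >= BLOCK
    r = len(msg) % BLOCK
    blocks = [toy_block_encrypt(msg[i:i + BLOCK])
              for i in range(0, len(msg) - r, BLOCK)]
    if r:
        penultimate = blocks.pop()
        # Steal the tail of the previous ciphertext block to fill out
        # the final partial block, then truncate the previous block.
        blocks.append(toy_block_encrypt(msg[-r:] + penultimate[r:]))
        blocks.append(penultimate[:r])
    return b"".join(blocks)

def cts_decrypt(ct: bytes) -> bytes:
    assert len(ct) >= BLOCK
    r = len(ct) % BLOCK
    if r == 0:
        return b"".join(toy_block_decrypt(ct[i:i + BLOCK])
                        for i in range(0, len(ct), BLOCK))
    head, tail = ct[:-(BLOCK + r)], ct[-(BLOCK + r):]
    blocks = [toy_block_decrypt(head[i:i + BLOCK])
              for i in range(0, len(head), BLOCK)]
    d = toy_block_decrypt(tail[:BLOCK])    # last partial + stolen tail
    penultimate = tail[BLOCK:] + d[r:]     # reassemble the full block
    blocks.append(toy_block_decrypt(penultimate))
    blocks.append(d[:r])
    return b"".join(blocks)
```

The standardized versions of this trick (CBC-CS1/CS2/CS3) wrap a chaining
mode rather than ECB, but the length-preservation property is the same.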

Subject: Re: Known plaintext considered harmless
Date: 27 Jun 2001 14:18:41 GMT
From: david_a_scott@emailv.com (SCOTT19U.ZIP_GUY)
Message-ID: <90CD59D08H110W296LC45WIN3030R@207.36.190.226>
References: <3b39d840.1040971@news.powersurfr.com>
Newsgroups: sci.crypt
Lines: 59

jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote in
<3b39d840.1040971@news.powersurfr.com>:

>On 26 Jun 2001 23:51:11 GMT, david_a_scott@emailv.com
>(SCOTT19U.ZIP_GUY) wrote, in part:
>
>>    But what you neglect here is that ciphers may have different
>>boundaries.  Some work in 8-byte units; others may work in 10- or
>>15-byte units.  So in your cascading, are you limiting yourself to
>>methods that have matching block sizes, or do you allow transforms
>>to change the block size if bijective?
>
>Either they all have the same alignment restriction (say, to whole
>bytes) or they must all be able to preserve the exact number of bits.
>Block ciphers can do so through "ciphertext stealing", as long as the
>message is at least one block long.
>
>Thus, I assume that an encryption program of this type will require
>that messages to be encrypted be at least 256 bits long, but will make
>no other restriction on length.
>

   I think you're starting to get there.  If you used encryption
that was fully reversible with any key, and did not allow the
size of the file to change, the system would be stronger than the
weakest link -- especially if one is looking at 3 separate
encryption routines.  However I think you're wrong about the
extreme length constraints, just as the misguided souls were
about how to achieve perfect encryption with an OTP.
   Example: if each of your methods uses normal encryption and
ECB, CBC or standard chaining modes, there is very little
propagation between blocks.  If between each layer of
encryption you expand the file with h2unc.exe, then
reversed the file and compressed it with, say, arib.exe, then
used the next encryption with ciphertext stealing as you
state above, and did the same between the last two ciphers,
the length of the final output would not likely be the
length of the input message, and the actual encrypted message
would be more like an all-or-nothing transform, where the
whole file is needed to decrypt it at all.  While in your
trusted method, if the last half of the file is missing, an
attacker still only needs the front half of the file to break
the encryption, since all the information needed may be there.
   Now which do you really think would be more secure?
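The layered arrangement described above can be illustrated with a toy sketch. Plain byte reversal stands in for the expand/compress steps (the h2unc.exe and arib.exe tools are not reproduced here), and a keyed-XOR layer stands in for each cipher. Reversal adds no secrecy by itself, but it keeps the cipher layers from lining up block-for-block, which is the point being argued.

```python
# Toy sketch of a multi-layer stack with a bijective whole-file transform
# between cipher layers.  Byte reversal is an assumed stand-in for the
# compress/expand steps of the original post; the XOR layer is a stand-in
# for a real cipher.  Every transform here is a bijection, so the stack
# as a whole is fully reversible with any keys and never changes length.

def xor_layer(key, data):                     # toy keyed cipher layer
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_stack(keys, data):
    for i, k in enumerate(keys):
        data = xor_layer(k, data)
        if i < len(keys) - 1:                 # bijective shuffle between layers
            data = data[::-1]
    return data

def decrypt_stack(keys, data):
    for i, k in enumerate(reversed(keys)):
        if i > 0:                             # undo the shuffle first
            data = data[::-1]
        data = xor_layer(k, data)
    return data
```

Because each step is a bijection on whole files, decryption simply runs the same steps in reverse order, and a damaged or truncated file disturbs every layer rather than just the last one.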

David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSION"
http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
MY Compression Page http://members.nbci.com/ecil/compress.htm
**TO EMAIL ME drop the roman "five" **
Disclaimer:I am in no way responsible for any of the statements
made in the above text. For all I know I might be drugged.
As a famous person once said "any cryptographic
system is only as strong as its weakest link"

Subject: Re: Known plaintext considered harmless
Date: Thu, 21 Jun 2001 01:50:22 -0700
From: Bryan Olson <nospam@nospam.net>
Message-ID: <3B31B54E.AE0A120D@nospam.net>
References: <3b30322a.7707648@news.io.com>
Newsgroups: sci.crypt
Lines: 29

Terry Ritter wrote:
>
> John Savard) wrote:
>
> >[...]
> >It is possible to have reasons for confidence in something that fall
> >short of proof, and it is even possible for such reasons to be
> >sufficient to make some forms of precaution unwarranted.
>
> Not in cryptography.

[...]

> The obvious approach to minimizing the risk of known-plaintext is to
> encipher at least twice with different ciphers and keys, so that the
> plaintext to the last cipher is both randomized and hidden even from
> the authorized user.  I always recommend using three ciphers, which
> allows one to be completely broken without exposing the others.

Then the attacker gets known plaintext against a cipher that happens
to be built from two or three other ciphers.  And if we believe we
cannot have confidence without proof, then we have the same problem
we started with.  We are just as motivated to triple the ciphers
that are themselves tripled ciphers.

--Bryan

Subject: Re: Known plaintext considered harmless
Date: Thu, 21 Jun 2001 18:18:33 GMT
From: ritter@io.com (Terry Ritter)
Message-ID: <3b32391a.9437472@news.io.com>
References: <3B31B54E.AE0A120D@nospam.net>
Newsgroups: sci.crypt
Lines: 72

On Thu, 21 Jun 2001 01:50:22 -0700, in <3B31B54E.AE0A120D@nospam.net>,
in sci.crypt Bryan Olson <nospam@nospam.net> wrote:

>Terry Ritter wrote:
>>
>> John Savard) wrote:
>>
>> >[...]
>> >It is possible to have reasons for confidence in something that fall
>> >short of proof, and it is even possible for such reasons to be
>> >sufficient to make some forms of precaution unwarranted.
>>
>> Not in cryptography.
>
>[...]
>
>> The obvious approach to minimizing the risk of known-plaintext is to
>> encipher at least twice with different ciphers and keys, so that the
>> plaintext to the last cipher is both randomized and hidden even from
>> the authorized user.  I always recommend using three ciphers, which
>> allows one to be completely broken without exposing the others.
>
>Then the attacker gets known plaintext against a cipher that happens
>to be built from two or three other ciphers.  And if we believe we
>cannot have confidence without proof, then we have the same problem
>we started with.  We are just as motivated to triple the ciphers
>that are themselves tripled ciphers.

But that argument makes sense only if every "cipher" is the same as
every other "cipher," something we all know to be false.  That is a
debating ploy, not an argument.

The "cipher" composed of a sequence of different ciphers is a vastly
more complex transformation than any component cipher, as we can see
from the expanded keyspace.  Using a sequence of different ciphers is
how we multiply ciphers both in number and complexity; this is Shannon
multiplication of secrecy systems.
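The stack-of-ciphers idea can be sketched as below. The three toy transforms (XOR stream, byte-wise affine map, bit rotation) are hypothetical stand-ins chosen only to be invertible and structurally different from one another; a real system would use three genuinely different ciphers, each with its own independently generated key.

```python
# Sketch of a three-cipher stack with independent keys.  The component
# "ciphers" are toy, structurally different invertible transforms; none
# is secure on its own.  The key for the composite is the concatenation
# of three independent keys, illustrating the expanded keyspace.

def xor_cipher(key, data, decrypt=False):     # keyed XOR stream (self-inverse)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def affine_cipher(key, data, decrypt=False):  # byte-wise affine map mod 256
    a = key[0] | 1                            # odd multiplier => invertible
    if decrypt:
        a_inv = pow(a, -1, 256)
        return bytes(((b - key[1]) * a_inv) % 256 for b in data)
    return bytes((b * a + key[1]) % 256 for b in data)

def rot_cipher(key, data, decrypt=False):     # key-driven bit rotation
    r = key[0] % 8 or 1
    if decrypt:
        r = 8 - r
    return bytes(((b << r) | (b >> (8 - r))) & 0xFF for b in data)

STACK = [xor_cipher, affine_cipher, rot_cipher]

def stack_encrypt(keys, data):
    for cipher, k in zip(STACK, keys):        # independent key per cipher
        data = cipher(k, data)
    return data

def stack_decrypt(keys, data):
    for cipher, k in zip(reversed(STACK), reversed(keys)):
        data = cipher(k, data, decrypt=True)
    return data
```

Note that the middle transform never sees either the true plaintext or the final ciphertext: its input is the output of one keyed transform and its output is hidden by another, which is the known-plaintext-denial argument made above.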

Even just a single tripling addresses practical problems that
conventional ciphering wisdom otherwise leaves open:

As one example, when we use one cipher alone, that cipher may actually
be already broken, in the context of the opponents.  What we can do
about that is to multi-cipher.  That hides known-plaintext
information.  Knowing both clear ciphertext
and something about plaintext is how ciphers are broken; if we can
really hide either or both, a cipher becomes far stronger in practice.
Of course, even if one of the ciphers *is* broken in situ, the others
continue to protect our data.  That is Shannon multiplication of
secrecy systems.

As another example, when we use one cipher alone, if our cipher is
already broken, we will continue to use it and so expose our data
day-after-day, until we do something about it.  What we can do about
that is to change ciphers, and so have a new chance of getting a
strong one.  And if we change ciphers frequently, we also have the
advantage of reducing the amount of data under any one cipher, which
reduces both the motivation and the ability to invest in attacks on
that cipher.  Selecting among different ciphers is Shannon addition of
secrecy systems.

Clearly, if we do both -- use, say, three ciphers in sequence and
change them frequently -- we gain the advantage of changing ciphers,
with protection against using weak ciphers.  And that is Shannon's
"algebra of secrecy systems."

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM

Subject: Re: Known plaintext considered harmless
Date: Thu, 21 Jun 2001 14:17:45 -0600
From: John Myre <jmyre@sandia.gov>
Message-ID: <3B325669.A6F389E6@sandia.gov>
References: <3b32391a.9437472@news.io.com>
Newsgroups: sci.crypt
Lines: 38

Terry Ritter wrote:
>
> On Thu, 21 Jun 2001 01:50:22 -0700, in <3B31B54E.AE0A120D@nospam.net>,
> in sci.crypt Bryan Olson <nospam@nospam.net> wrote:
>
> >Terry Ritter wrote:
> >>
<snip>
> >> The obvious approach to minimizing the risk of known-plaintext is to
> >> encipher at least twice with different ciphers and keys, so that the
> >> plaintext to the last cipher is both randomized and hidden even from
> >> the authorized user.  I always recommend using three ciphers, which
> >> allows one to be completely broken without exposing the others.
> >
> >Then the attacker gets known plaintext against a cipher that happens
> >to be built from two or three other ciphers.  And if we believe we
> >cannot have confidence without proof, then we have the same problem
> >we started with.  We are just as motivated to triple the ciphers
> >that are themselves tripled ciphers.
>
> But that argument makes sense only if every "cipher" is the same as
> every other "cipher," something we all know to be false.  That is a
> debating ploy, not an argument.
<snip>

It might be right to construct a cipher out of three component
ciphers, different in structure.  It's incredibly difficult to
believe that it wouldn't be stronger.  The *logical argument*,
however, goes to Bryan.  From the premise that we don't know
the strength of our ciphers we can only conclude that we don't
know the strength of any construction based on them, either.

If we want to conclude that tripling is right, we have to assume
something like "most of our presumed-strong ciphers actually are
strong, we just don't know which ones," or some other premise
that says *something* about the component cipher strengths.

JM

Subject: Re: Known plaintext considered harmless
Date: Thu, 21 Jun 2001 14:03:40 -0700
From: "Paul Pires" <diodude@got.net>
Message-ID: <cjtY6.104428$Ne5.3793518@e420r-sjo3.usenetserver.com>
References: <3B325669.A6F389E6@sandia.gov>
Newsgroups: sci.crypt
Lines: 72

John Myre <jmyre@sandia.gov> wrote in message
news:3B325669.A6F389E6@sandia.gov...
> Terry Ritter wrote:
> >
> > On Thu, 21 Jun 2001 01:50:22 -0700, in <3B31B54E.AE0A120D@nospam.net>,
> > in sci.crypt Bryan Olson <nospam@nospam.net> wrote:
> >
> > >Terry Ritter wrote:
> > >>
> <snip>
> > >> The obvious approach to minimizing the risk of known-plaintext is to
> > >> encipher at least twice with different ciphers and keys, so that the
> > >> plaintext to the last cipher is both randomized and hidden even from
> > >> the authorized user.  I always recommend using three ciphers, which
> > >> allows one to be completely broken without exposing the others.
> > >
> > >Then the attacker gets known plaintext against a cipher that happens
> > >to be built from two or three other ciphers.  And if we believe we
> > >cannot have confidence without proof, then we have the same problem
> > >we started with.  We are just as motivated to triple the ciphers
> > >that are themselves tripled ciphers.
> >
> > But that argument makes sense only if every "cipher" is the same as
> > every other "cipher," something we all know to be false.  That is a
> > debating ploy, not an argument.
> <snip>
>
> It might be right to construct a cipher out of three component
> ciphers, different in structure.  It's incredibly difficult to
> believe that it wouldn't be stronger.  The *logical argument*,
> however, goes to Bryan.  From the premise that we don't know
> the strength of our ciphers we can only conclude that we don't
> know the strength of any construction based on them, either.

Playing Devil's advocate: if the individual threat to all three
ciphers was something that involved known plaintext, wouldn't
it be obvious that combining the ciphers must be more secure,
since the intermediate ciphertexts (AKA the intermediate plaintexts)
are not preserved?

Cipher 1 may have a known plaintext, but the corresponding ciphertext
is gone.  Cipher 2 has no known plaintext or ciphertext.  Cipher 3 has
a known ciphertext but no known plaintext.  Is there any known "known
plaintext attack" that works without knowledge of the ciphertext?  Is
there any ciphertext-only attack that can work when the plaintext is
unknowable and pseudo-random?

Forget the philosophy of multi-encryption and check the assumptions
for all known attacks.  Sure, this only applies to known attacks;
what about the unknown ones?  Who cares?  It may be possible to
*prove* that a method is unbreakable to all known attacks to date,
faster than brute force.  That has to be worth something.

This is the one part of this multi-encryption argument that I just
can't get a grip on.  Folks seem to be arguing the gotta-be-there
weaknesses of the individual ciphers, but I can't see how the
requirements for the gotta-be-there attacks are provided by the
multi-cipher example.  It looks like a no-go from front to back,
middle out, and back to front.  Am I too far gone here or just
missing a clue?

Paul

> If we want to conclude that tripling is right, we have to assume
> something like "most of our presumed-strong ciphers actually are
> strong, we just don't know which ones," or some other premise
> that says *something* about the component cipher strengths.
>
> JM

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 01:00:35 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3b329843.3659104@news.powersurfr.com>
References: <cjtY6.104428$Ne5.3793518@e420r-sjo3.usenetserver.com>
Newsgroups: sci.crypt
Lines: 19

On Thu, 21 Jun 2001 14:03:40 -0700, "Paul Pires" <diodude@got.net>
wrote, in part:

>Sure this only applies to Known attacks, what about the unknown ones?
>Who cares? It may be possible to *prove* that a method is unbreakable
>to all known attacks to date, faster than brute force.
>That has to be worth something.

>Am I too far gone here or just missing a clue?

The only thing missing here is that the individual ciphers it is being
proposed to triple up are *already* immune to any known attacks to
date that is faster than brute force!

Still, immunity to any unknown attack remotely similar to the kinds of
attacks that are known must be worth something...

John Savard
http://home.ecn.ab.ca/~jsavard/index.html

Subject: Re: Known plaintext considered harmless
Date: 22 Jun 2001 13:26:14 GMT
From: mdw@nsict.org (Mark Wooding)
Message-ID: <slrn9j6hrm.clo.mdw@daryl.nsict.org>
References: <cjtY6.104428$Ne5.3793518@e420r-sjo3.usenetserver.com>
Newsgroups: sci.crypt
Lines: 77

Paul Pires <diodude@got.net> wrote:

> Playing Devil's advocate: if the individual threat to all three
> ciphers was something that involved known plaintext, wouldn't
> it be obvious that combining the ciphers must be more secure,
> since the intermediate ciphertexts (AKA the intermediate plaintexts)
> are not preserved?

I think there's very little that's 'obvious' around here.  However, we
can /prove/ that cascading ciphers doesn't introduce any weaknesses,
assuming that they're keyed independently.  (The proof is rather dull,
and included below for the sake of completeness.  See Bellare, Desai,
Jokipii and Rogaway for background information.)

Proving that it actually helps seems very hard.  It's intuitively
fairly clear that inventing a new key, pretending we don't know what
it is, and gluing a cipher with that key before or after the
encryption (which is basically what the proof below does) isn't the
most direct or efficient way of breaking an encryption scheme.  But
this seems very hard to prove.

The proof
---------

Suppose that X = (K, E, D) and Y = (K', E', D') are symmetric
encryption systems.  Define Seq(X, Y) = (K^*, E^*, D^*) as follows:

  * K^*(1^k) computes \kappa_X <- K(1^k) and \kappa_Y <- K'(1^k), and
    returns the pair (\kappa_X, \kappa_Y).

  * E^*(\kappa, x), where \kappa = (\kappa_X, \kappa_Y), computes
    c_X <- E(\kappa_X, x) and returns c <- E'(\kappa_Y, c_X).

  * D^*(\kappa, x), where \kappa = (\kappa_X, \kappa_Y), computes
    c_X <- D'(\kappa_Y, x) and returns p <- D(\kappa_X, c_X).

Suppose A is an adversary which runs in time t, performs q_e queries
to a left-or-right encryption oracle totalling \mu_e bits, performs
q_d queries to a decryption oracle totalling \mu_d bits, and has
advantage \epsilon distinguishing Seq(X, Y) in the left-or-right
sense.  We construct the adversary A_X as follows:

  Adversary A_X^{\mathcal{LR}(., .), \mathcal{D}(.)}
    Choose \kappa <- K'(1^k)
    Get b <- A^{\mathcal{LR}'(., .), \mathcal{D}'(.)}
      where \mathcal{LR}'(l, r) = E'(\kappa, \mathcal{LR}(l, r))
            \mathcal{D}'(c) = \mathcal{D}(D'(\kappa, c))
    return b

This adversary distinguishes X with the same advantage as A has
against Seq(X, Y), and makes the same number of queries to its
oracles.  The additional running time is only the time taken to run
K'.  The additional size of the queries is related only to the
ciphertext expansion in E'.

Similarly, we construct an adversary A_Y:

  Adversary A_Y^{\mathcal{LR}(., .), \mathcal{D}(.)}
    Choose \kappa <- K(1^k)
    Get b <- A^{\mathcal{LR}'(., .), \mathcal{D}'(.)}
      where \mathcal{LR}'(l, r) = \mathcal{LR}(E(\kappa, l), E(\kappa, r))
            \mathcal{D}'(c) = D(\kappa, \mathcal{D}(c))
    return b

with the same properties against Y.

This proves that the combination Seq(X, Y) is no less secure than the
stronger of X and Y (in the left-or-right sense).  If we define, by
induction, Seq(A, B, C, ...) = Seq(A, Seq(B, Seq(C, ...))), we see
that we can build a cascade of ciphers which is at least as strong
(in the left-or-right sense) as the strongest cipher in the cascade,
whichever one that is.

(Left-or-right security is at most a factor of 2 weaker than
real-or-random security, and much easier to use in this instance.)

-- [mdw]

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 16:10:45 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3b336d3f.2356247@news.powersurfr.com>
References: <slrn9j6hrm.clo.mdw@daryl.nsict.org>
Newsgroups: sci.crypt
Lines: 13

On 22 Jun 2001 13:26:14 GMT, mdw@nsict.org (Mark Wooding) wrote, in
part:

>I think there's very little that's 'obvious' around here.  However, we
>can /prove/ that cascading ciphers doesn't introduce any weaknesses,
>assuming that they're keyed independently.

That is not true when the second cipher in the chain is vulnerable to
a known-plaintext attack, particularly when the input to the first
cipher is either compressible, or is expanded by the first cipher.

John Savard
http://home.ecn.ab.ca/~jsavard/index.html

Subject: Re: Known plaintext considered harmless
Date: Thu, 21 Jun 2001 23:40:30 GMT
From: ritter@io.com (Terry Ritter)
Message-ID: <3b3285bc.5858111@news.io.com>
References: <3B325669.A6F389E6@sandia.gov>
Newsgroups: sci.crypt
Lines: 67

On Thu, 21 Jun 2001 14:17:45 -0600, in <3B325669.A6F389E6@sandia.gov>,
in sci.crypt John Myre <jmyre@sandia.gov> wrote:

>Terry Ritter wrote:
>>
>> On Thu, 21 Jun 2001 01:50:22 -0700, in <3B31B54E.AE0A120D@nospam.net>,
>> in sci.crypt Bryan Olson <nospam@nospam.net> wrote:
>>
>> >Terry Ritter wrote:
>> >>
><snip>
>> >> The obvious approach to minimizing the risk of known-plaintext is to
>> >> encipher at least twice with different ciphers and keys, so that the
>> >> plaintext to the last cipher is both randomized and hidden even from
>> >> the authorized user.  I always recommend using three ciphers, which
>> >> allows one to be completely broken without exposing the others.
>> >
>> >Then the attacker gets known plaintext against a cipher that happens
>> >to be built from two or three other ciphers.  And if we believe we
>> >cannot have confidence without proof, then we have the same problem
>> >we started with.  We are just as motivated to triple the ciphers
>> >that are themselves tripled ciphers.
>>
>> But that argument makes sense only if every "cipher" is the same as
>> every other "cipher," something we all know to be false.  That is a
>> debating ploy, not an argument.
><snip>
>
>It might be right to construct a cipher out of three component
>ciphers, different in structure.  It's incredibly difficult to
>believe that it wouldn't be stronger.  The *logical argument*,
>however, goes to Bryan.  From the premise that we don't know
>the strength of our ciphers we can only conclude that we don't
>know the strength of any construction based on them, either.

That "logical argument" is a "red herring": I made no claim that
cipher stacks have provable known strength.  But since we are
comparing the situation where we have one cipher against having a
stack of three ciphers, the issue would seem to be the same in both
cases.

I claim that, simply by using a cipher stack, known-plaintext
information is not available to the opponent for use in attacks
against the component ciphers.  Not exposing known-plaintext is a
very serious advantage.

>If we want to conclude that tripling is right, we have to assume
>something like "most of our presumed-strong ciphers actually are
>strong, we just don't know which ones," or some other premise
>that says *something* about the component cipher strengths.

Well, if we suppose that *all* our ciphers are weak, there might seem
to be no point in using ciphers at all.

On the other hand, multiciphering with three different ciphers and
independent keys makes an overall "cipher" that is arguably much
stronger than its component parts, and also prevents the exposure of
information which might otherwise have been used to attack those
parts.  So even if every fundamental block cipher really is weak, the
multiciphering stack of three ciphers may *not* be.
---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 02:03:41 +0200
From: Mok-Kong Shen <mok-kong.shen@t-online.de>
Message-ID: <3B328B5D.9BB0AA37@t-online.de>
References: <3b3285bc.5858111@news.io.com>
Newsgroups: sci.crypt
Lines: 19

Terry Ritter wrote:
>
> [snip]
> On the other hand, multiciphering with three different ciphers and
> independent keys makes an overall "cipher" that is arguably much
> stronger than its component parts, and also prevents the exposure of
> information which might otherwise have been used to attack those
> parts.  So even if every fundamental block cipher really is weak, the
> multiciphering stack of three ciphers may *not* be.

I guess that it could be useful in this connection to recall the fact
that a good block cipher needs a sufficient number of rounds, which
are put together in a cascading fashion not 'entirely' different in
principle to cascading multiple ciphers.

M. K. Shen

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 00:42:12 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3b329348.2383501@news.powersurfr.com>
References: <3B328B5D.9BB0AA37@t-online.de>
Newsgroups: sci.crypt
Lines: 15

On Fri, 22 Jun 2001 02:03:41 +0200, Mok-Kong Shen
<mok-kong.shen@t-online.de> wrote, in part:

>I guess that it could be useful in this connection to
>recall the fact that a good block cipher needs a sufficient
>number of rounds, which are put together in a cascading
>fashion not 'entirely' different in principle to cascading
>multiple ciphers.

True.  Of course, an important difference is that block ciphers
usually repeat the same round many times, although with different
subkeys, and this could be bad if the round is weak in certain ways.
John Savard
http://home.ecn.ab.ca/~jsavard/index.html

Subject: Re: Known plaintext considered harmless
Date: Thu, 21 Jun 2001 23:52:10 -0300
From: "Alexis Machado" <alexismachado@ieg.com.br>
Message-ID: <tj5cfo5dsdl1ca@corp.supernews.com>
References: <3B328B5D.9BB0AA37@t-online.de>
Newsgroups: sci.crypt
Lines: 28

"Mok-Kong Shen" <mok-kong.shen@t-online.de> wrote in message
news:3B328B5D.9BB0AA37@t-online.de...
>
> Terry Ritter wrote:
> >
> [snip]
> > On the other hand, multiciphering with three different ciphers and
> > independent keys makes an overall "cipher" that is arguably much
> > stronger than its component parts, and also prevents the exposure of
> > information which might otherwise have been used to attack those
> > parts.  So even if every fundamental block cipher really is weak, the
> > multiciphering stack of three ciphers may *not* be.
>
> I guess that it could be useful in this connection to
> recall the fact that a good block cipher needs a sufficient
> number of rounds, which are put together in a cascading
> fashion not 'entirely' different in principle to cascading
> multiple ciphers.
>

You discovered why cipher chaining is not necessary.

---
Alexis

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 03:35:14 GMT
From: ritter@io.com (Terry Ritter)
Message-ID: <3b32bc2f.5448490@news.io.com>
References: <tj5cfo5dsdl1ca@corp.supernews.com>
Newsgroups: sci.crypt
Lines: 53

On Thu, 21 Jun 2001 23:52:10 -0300, in
<tj5cfo5dsdl1ca@corp.supernews.com>, in sci.crypt "Alexis Machado"
<alexismachado@ieg.com.br> wrote:

>"Mok-Kong Shen" <mok-kong.shen@t-online.de> wrote in message
>news:3B328B5D.9BB0AA37@t-online.de...
>>
>> Terry Ritter wrote:
>> >
>> [snip]
>> > On the other hand, multiciphering with three different ciphers and
>> > independent keys makes an overall "cipher" that is arguably much
>> > stronger than its component parts, and also prevents the exposure of
>> > information which might otherwise have been used to attack those
>> > parts.  So even if every fundamental block cipher really is weak, the
>> > multiciphering stack of three ciphers may *not* be.
>>
>> I guess that it could be useful in this connection to
>> recall the fact that a good block cipher needs a sufficient
>> number of rounds, which are put together in a cascading
>> fashion not 'entirely' different in principle to cascading
>> multiple ciphers.
>>
>
>You discovered why cipher chaining is not necessary.

First of all, "chaining" is the worst possible description of
multiciphering: A "chain" is only as strong as its weakest link;
multiciphering is as strong as its strongest cipher.

Next, the point of multiciphering is not strength, per se.  If we
could trust the strength we supposedly get from a single cipher, there
would be no point.  Unfortunately, there is no way to know how a
cipher will fare against real opponents; there is no feedback, thus no
way to know when we have succeeded or failed.  We cannot know how good
a design really is with respect to real opponents.  The reason for
multiciphering is to address this uncertainty as best we can.

Using structurally different ciphers and different keys is
fundamentally different than just using more rounds.  Usually we have
a fixed design and don't have the opportunity to triple the number of
rounds anyway.  And we can't just go around re-designing standard
ciphers under the illusion that more rounds and an arbitrarily
expanded key schedule is necessarily better; it is not.
---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 00:57:43 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3b3294be.2757790@news.powersurfr.com>
References: <3B325669.A6F389E6@sandia.gov>
Newsgroups: sci.crypt
Lines: 58

On Thu, 21 Jun 2001 14:17:45 -0600, John Myre <jmyre@sandia.gov>
wrote, in part:

>From the premise that we don't know
>the strength of our ciphers we can only conclude that we don't
>know the strength of any construction based on them, either.

True.  But even a weak cipher, as long as it does have a variable key,
will indeed prevent a known-plaintext attack on the next cipher in the
chain.  Similarly, a weak cipher with a variable key will deny known
ciphertext to the attacker of the preceding cipher in the chain.

So, in a certain case - where one of the three ciphers is weak only
against a known-plaintext attack - something has been achieved, hence
improving the odds.

The point is, though, without proof of strength in either case, we
don't know either that using three ciphers is warranted (any one
recognized modern cipher may be strong enough, as generally believed)
or that it will be sufficient to solve the problem (our unknown
opponents may have even more powerful techniques than we guess).
Thus, using three ciphers in a row will only improve security under a
certain set of conditions.  This is a valid criticism.

But on the other hand, it's an easy enough thing to do, and, since
many messages need to stay secret for extended periods, and since
computers are getting faster at such a rate (even quantum computers
may be on the horizon), it's just possible that even if modern
ciphers are quite secure at present, it is really going to be worth
tripling up against threats that will emerge within the next 100
years.

To me, though, the killer criticism of using, say, a
Rijndael/MARS/SAFER+ sandwich for encryption is that the danger of a
breakthrough against block ciphers is - obviously - small in
comparison to the danger of solving far less messy problems of far
greater general utility and with a wider array of mathematical tools
to attack them.  Factoring.  Discrete logarithm.  If people are going
to insist on using public-key cryptosystems to distribute their keys,
possible weaknesses in block ciphers in general seem like a very
small threat.  (However, the risk that any single block cipher could
have a severe, unnoticed flaw is, I have to admit, large enough to be
real, which is a strong argument for multiciphering.)

Thus, I tend to think that any serious attempt to upgrade the
security of conventional cryptography - and such attempts are quite
possible, and don't require inordinate amounts of computer time to
implement - is quite properly viewed as pointless, except for those
who are willing to give up the convenience of public-key
cryptography.  (They might still *use* PKC, however, to augment the
security of those hand-transported secret keys.)

John Savard
http://home.ecn.ab.ca/~jsavard/index.html

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 01:48:27 +0000 (UTC)
From: daw@mozart.cs.berkeley.edu (David Wagner)
Message-ID: <9gu85b$2261$1@agate.berkeley.edu>
References: <3b3294be.2757790@news.powersurfr.com>
Newsgroups: sci.crypt
Lines: 7

John Savard wrote:
>But even a weak cipher, as long as it does have a variable key,
>will indeed prevent a known-plaintext attack on the next cipher in the
>chain.

How do you know?  If we don't know for certain whether our ciphers are
secure (and we don't know), we can't know this for certain, either.
Subject: Re: Known plaintext considered harmless Date: Fri, 22 Jun 2001 06:15:02 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3b32dfaf.1885474@news.powersurfr.com> References: <9gu85b$2261$1@agate.berkeley.edu> Newsgroups: sci.crypt Lines: 43 On Fri, 22 Jun 2001 01:48:27 +0000 (UTC), daw@mozart.cs.berkeley.edu (David Wagner) wrote, in part: >John Savard wrote: >>But even a weak cipher, as long as it does have a variable key, >>will indeed prevent a known-plaintext attack on the next cipher in the >>chain. >How do you know? If we don't know for certain whether our ciphers are >secure (and we don't know), we can't know this for certain, either. Maybe I just defined my terms badly. One can have a cipher that isn't secure, but is still known to (usually) produce different ciphertext for any plaintext if the key is changed. Of course, in that connection, one might note that DESX doesn't gain any resistance to differential cryptanalysis, which is usually considered a "known-plaintext" attack. Because you don't really need to know the plaintext itself with certainty, only the XOR between pairs of plaintexts, to do differential cryptanalysis. Thus, a cascade of three ciphers that are very weak against a differential attack can still be submitted to a differential attack; as long as one can cascade the characteristics. (If they're very weak, there will be multiple possibilities.) So if some unknown attack has similar properties... I do agree that it is only for a very specific circumstance that triple encipherment (for example) is neither unnecessary nor useless, and there is no particularly compelling evidence to suggest that the strength of contemporary block ciphers is in that narrow range. But I also noted that, if triple encipherment is unnecessary at present, as most experts believe, then within the next 75-100 years, advances in computer power just might cause their strength to at least enter that zone. 
Of course, on http://home.ecn.ab.ca/~jsavard/crypto/co041205.htm I suggest rather more extreme measures for the worriers among us than mere triple encipherment. John Savard http://home.ecn.ab.ca/~jsavard/index.html Subject: Re: Known plaintext considered harmless Date: Fri, 22 Jun 2001 08:03:15 GMT From: Tim Tyler <tt@iname.com> Message-ID: <GFBnpF.19o@bath.ac.uk> References: <3B325669.A6F389E6@sandia.gov> Newsgroups: sci.crypt Lines: 19 John Myre <jmyre@sandia.gov> wrote: : If we want to conclude that tripling is right, we have to assume : something like "most of our presumed-strong ciphers actually are : strong, we just don't know which ones," or some other premise : that says *something* about the component cipher strengths. That would be what was needed if we wanted to claim that a cypher stack was *strong*. You barely need to say anything to claim that it's *stronger* than any of the individual encryptions. No doubt the proper point of comparison for a cypher stack would be a purpose-designed algorithm with three times the keyspace of conventional cyphers. -- __________ |im |yler http://rockz.co.uk/ http://alife.co.uk/ http://atoms.org.uk/ Subject: Re: Known plaintext considered harmless Date: Thu, 21 Jun 2001 22:14:41 -0700 From: Bryan Olson <nospam@nospam.net> Message-ID: <3B32D441.CFF98E11@nospam.net> References: <3b32391a.9437472@news.io.com> Newsgroups: sci.crypt Lines: 52 Terry Ritter wrote: > > On Thu, 21 Jun 2001 01:50:22 -0700, in <3B31B54E.AE0A120D@nospam.net>, > in sci.crypt Bryan Olson <nospam@nospam.net> wrote: > > >Terry Ritter wrote: > >> > >> John Savard) wrote: > >> > >> >[...] > >> >It is possible to have reasons for confidence in something that fall > >> >short of proof, and it is even possible for such reasons to be > >> >sufficient to make some forms of precaution unwarranted. > >> > >> Not in cryptography. > > > >[...] 
> >
> >> The obvious approach to minimizing the risk of known-plaintext is to
> >> encipher at least twice with different ciphers and keys, so that the
> >> plaintext to the last cipher is both randomized and hidden even from
> >> the authorized user.  I always recommend using three ciphers, which
> >> allows one to be completely broken without exposing the others.
> >
> >Then the attacker gets known plaintext against a cipher that happens
> >to be built from two or three other ciphers.  And if we believe we
> >cannot have confidence without proof, then we have the same problem
> >we started with.  We are just as motivated to triple the ciphers
> >that are themselves tripled ciphers.
>
> But that argument makes sense only if every "cipher" is the same as
> every other "cipher," something we all know to be false.  That is a
> debating ploy, not an argument.

Oh spare us the nonsense.  That mistake - evaluating all ciphers as the
same - is the error you committed when you disagreed with Savard's quote
above.

> The "cipher" composed of a sequence of different ciphers is a vastly
> more complex transformation than any component cipher, as we can see
> from the expanded keyspace.

But you don't even look at the component ciphers.  You blindly advocate
composing three with no regard to their complexity.  Whatever the right
extent of conservative design is, it makes no sense to say it's
automatically three times what any other cipher designers choose.

--Bryan

Subject: Re: Known plaintext considered harmless
Date: Tue, 19 Jun 2001 23:09:17 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF79nH.E3H@bath.ac.uk>
References: <3B2F81DE.901777B1@sandia.gov>
Newsgroups: sci.crypt
Lines: 56

John Myre <jmyre@sandia.gov> wrote:
: Tim Tyler wrote:
: <snip>

:> You go far too far.  While tying yourself in knots to avoid known
:> plaintext may be over the top, avoiding it *is* desirable.

: But the question is: *how* desirable?  Exactly how much effort
: is it worth?
: I'd agree with (anonymous) that the proper solution
: is to reinforce the cipher. [...]

Reinforcing the cypher is not a solution.  Consider, for example, the
case where there is no known attack on the cypher, but the key generator
is broken.

:> You presume that cryptanalytic attack is the only possible method of
:> getting information relating to the key of a cypher.

: ?

Attackers can get hold of information about keys by other means besides
cryptanalysis.

:> This is not the case.  If the number of possible keys can be reduced -
:> by any means - known-plaintext attacks can become a practical issue.

: That is at best an exaggeration.  You can reduce the number of
: possible keys quite easily by brute force: guess a few keys,
: decrypt the entire message for each guess, and discard the ones
: that are clearly nonsense. [...]

Known-plaintext attacks can *sometimes* become a practical issue, then.

: In practice - this is simply not an issue.

So cryptosystems are never compromised by regularities and lack of
entropy in key generators?

: The point is that if a simple keysearch can be made practical,
: because the entropy (unknown bits in the key) is small enough,
: then using obfuscation on the source text as a way to prevent
: that attack is almost certainly pointless.

I believe I can see the point - but where is the argument supporting it?

: Terry's response is more reasoned.  The suspicion that maybe we
: are all fooling ourselves, that there *aren't* any ciphers that
: we can trust are as strong as we think, is at least defensible.
: I think he's wrong, but of course I can't prove it.

Even in fantasy-land - where the cyphers are known to be theoretically
invulnerable - there are still other ways things can go wrong.  Known
plaintext will never be irrelevant.
--
__________
 |im |yler  tt@iname.com  Home page: http://alife.co.uk/tim/

Subject: Re: Known plaintext considered harmless
Date: Tue, 19 Jun 2001 23:20:53 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3b2fddcd.29340382@news.powersurfr.com>
References: <3B2F81DE.901777B1@sandia.gov>
Newsgroups: sci.crypt
Lines: 19

On Tue, 19 Jun 2001 10:46:22 -0600, John Myre <jmyre@sandia.gov> wrote,
in part:

>The point is that if a simple keysearch can be made practical,
>because the entropy (unknown bits in the key) is small enough,
>then using obfuscation on the source text as a way to prevent
>that attack is almost certainly pointless.

It's true that the old days of 40-bit keys are now behind us... finding
a way to achieve security under such a constraint may well be an
impossible challenge, but not entirely without value.  Particularly if
those days might be coming back - in effect.  What if quantum computers
start appearing at corner computer stores near you?  Won't simple
keysearch get a lot simpler for all those back intercepts?

John Savard
http://home.ecn.ab.ca/~jsavard/frhome.htm

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 19:33:42 +0200
From: "Thomas J. Boschloo" <nospam@multiweb.nl>
Message-ID: <3B338176.539BDD09@multiweb.nl>
References: <GF689w.H76@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 73

-----BEGIN PGP SIGNED MESSAGE-----

Tim Tyler wrote:
>
> lcs Mixmaster Remailer <mix@anon.lcs.mit.edu> wrote:
>
> : NO CONSIDERATION WHATSOEVER should be given to manipulating or
> : constraining the data structures in the hopes of making the
> : encryption stronger! [...]
>
> : Let us all agree that it is time to put concerns about known plaintext
> : behind us.  Recommendations to avoid it are obsolete.
>
> You go far too far.  While tying yourself in knots to avoid known
> plaintext may be over the top, avoiding it *is* desirable.
>
> You presume that cryptanalytic attack is the only possible method of
> getting information relating to the key of a cypher.  This is not the
> case.  If the number of possible keys can be reduced - by any means -
> known-plaintext attacks can become a practical issue.

I am in no way a cryptographer, so I'll try to keep this short.  What I
wonder about is whether compression really 'reduces' plaintext, or
whether it only adds to it.  I am at the moment reading an interesting
article on practical low-tech attacks on steganography <C'T magazine 7/8
2001>, and the randomness of certain stego applications is what gives
them away, so it might be a problem in crypto also.

If you get eight zeros of plaintext inside a PGP(tm) encrypted packet,
you know for sure that it is not PGP encrypted, because it is clearly
not compressed.  So you can discard possible plaintexts that you could
otherwise not discard at all.  There may not be many situations in which
this will happen, but my motto has always been 'Keep It Simple
(stupid)'.

Compression also results in various 'end of data' markers and maybe even
checksums to see if it decompressed right.  And if compression is so
important to crypto security, then why isn't it incorporated into new
encryption algorithms to make them more secure?

Another interesting question is where to start compressing.  If you
start at the beginning, your LZ buffer (is it called that?) might still
be empty, and certainly a given plaintext will always be compressed the
same way by most algorithms (if you can choose random redundancy, you
could and should have compressed it further).  If you start at the end,
you might as well have run an extra layer of encryption backwards to
solve any problems there were to begin with.

BTW I like the fixed plaintext size argument someone made.
Very Anon-server like (and I like Anon servers, like the one the
original message was posted from, and which for once made anonymous
messages look good for a change; it might have been a renowned
cryptographer who made the original statement for all we know, which
would thus have polluted the argument (or ruined his reputation,
depending on how you look at it ;-)).

Happy regards, sorry if I didn't make any sense, I am only a crypto
implementer wannabe, not a specialist.

Thomas J. Holland

-----BEGIN PGP SIGNATURE-----
Version: PGPfreeware 5.5.3i for non-commercial use <http://www.pgpi.com>
Comment: My homepage <http://home.soneraplaza.nl/mw/prive/boschloo>

iQB5AwUBOzNzZAEP2l8iXKAJAQFR2wMfZLgE2lMaAMBO32cDeO5RSkJr0dIr1BJ/
WidiS/MvPqWo2WndZWq1vdwfo6VoKeRcua2ikGR635o1eEZ+gnwCA7CwnPiJHQ8i
9LEG8qPzF0DP8PfkdhVmuYHV/3wJjX8YMUzbSA==
=8xF2
-----END PGP SIGNATURE-----
--
"Only flawed software is more secure with a closed source policy", me

Subject: Re: Known plaintext considered harmless
Date: 19 Jun 2001 22:40:34 -0000
From: lcs Mixmaster Remailer <mix@anon.lcs.mit.edu>
Message-ID: <20010619224034.10622.qmail@nym.alias.net>
References: <20010619052032.804.qmail@nym.alias.net>
Newsgroups: sci.crypt
Lines: 52

Here's an example of known-plaintext-phobia at its worst, from a post
which appeared a few hours ago:

: From: Thierry Nivon <tnivon@waika9.com>
: Date: Tue, 19 Jun 2001 22:50:33 +0200
: Message-ID: <9godml$2ou$2@fenris.isp.9tel.net>
:
: because the beginning of the document is always (nearly) the same
: <xml ....
: which brings a little (very little) bit of information
:
: Thierry Nivon
:
: Mok-Kong Shen wrote:
:
: >
: > I read somewhere an article stating that the possibility
: > in the new standard for XML-security of user selective determination of
: > parts of an XML document to be encrypted
: > is essential, for otherwise one could encrypt only the
: > whole document and that would be bad for security.
: > Could
: > someone explain why the encryption of the whole document
: > is bad in clear-cut terms?  Thanks in advance.
: >
: > M. K. Shen

We see a claim that piecewise encryption of XML documents is essential,
with the suggested reason being that otherwise there is known plaintext,
in that every document starts with "<xml".

Of course, it's unlikely that this is the true reason why the XML
encryption effort is supporting encryption of subparts.  But it is
disturbing that it is offered as an explanation, that someone presumably
at least slightly knowledgeable in cryptography believes that it is
unsafe to encrypt XML documents because they all start the same way.

This is exactly the error which the cryptographic community must
correct.  For too long have naive and inexperienced users been taught to
fear known plaintext and to produce elaborate monstrosities in a futile
attempt to avoid weakening their encryption.  It is an abrogation of the
responsibility of professional cryptographers to allow users of this
technology to suffer under such a misconception.

Imagine if this attitude were allowed to go uncorrected, and if the XML
encryption group had actually adopted the need to encrypt parts, a
requirement which would have added tremendous complexity to the
specification, purely out of a misguided fear about known plaintext!
This disaster would be the fault, ultimately, of the cryptographic
community for failing to speak with a clear and united voice on this
issue.

It is time to put this sad error to an end.  Known plaintext is not
harmful, and will not weaken a properly-applied encryption transform.
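The "<xml" prefix debated above generalizes further than XML: compression itself can contribute predictable bytes.  A minimal sketch, assuming Python's zlib as a stand-in for any deflate-style compressor:

```python
import zlib

# Every zlib stream produced with default settings begins with the same
# two header bytes (0x78 0x9c) -- a fixed, attacker-predictable prefix,
# i.e. "known plaintext" contributed by the compressor itself.
for sample in (b"<xml version='1.0'?>", b"attack at dawn", b"\x00" * 64):
    compressed = zlib.compress(sample)
    assert compressed[:2] == b"\x78\x9c"
```

So whether compression removes known plaintext or merely relocates it depends on the format details, exactly as the posts below argue.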
Subject: Re: Known plaintext considered harmless
Date: 19 Jun 2001 23:05:17 GMT
From: david_a_scott@emailv.com (SCOTT19U.ZIP_GUY)
Message-ID: <90C5AFD90H110W296LC45WIN3030R@207.36.190.226>
References: <20010619224034.10622.qmail@nym.alias.net>
Newsgroups: sci.crypt
Lines: 27

mix@anon.lcs.mit.edu (lcs Mixmaster Remailer) wrote in
<20010619224034.10622.qmail@nym.alias.net>:

>
>It is time to put this sad error to an end.  Known plaintext is not
>harmful, and will not weaken a properly-applied encryption transform.
>

Actually any known plaintext is harmful and will weaken an encryption
system.  But if you properly compress to remove the common known parts,
then you apply the encryption; since the parts are known, they can be
added back in during the decryption-decompression process.

David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSION"
http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**TO EMAIL ME drop the roman "five" **
Disclaimer: I am in no way responsible for any of the statements made in
the above text.  For all I know I might be drugged.  As a famous person
once said, "any cryptographic system is only as strong as its weakest
link".

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 10:34:09 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF85Cx.Ctp@bath.ac.uk>
References: <20010619224034.10622.qmail@nym.alias.net>
Newsgroups: sci.crypt
Lines: 16

lcs Mixmaster Remailer <mix@anon.lcs.mit.edu> wrote:

: It is time to put this sad error to an end.  Known plaintext is not
: harmful, and will not weaken a properly-applied encryption transform.

How do you know when you have one of them?

How do you know how much entropy there is in your key generator?

How do you know whether your keybook has been quietly copied?

You don't know any of these things.
Known plaintext is rightly considered harmful.  It is the assertion
that it is harmless that is the "sad error".
--
__________
 |im |yler  tt@iname.com  Home page: http://alife.co.uk/tim/

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 16:24:31 +0000 (UTC)
From: daw@mozart.cs.berkeley.edu (David Wagner)
Message-ID: <9gqinv$96v$3@agate.berkeley.edu>
References: <GF85Cx.Ctp@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 20

Tim Tyler wrote:
>lcs Mixmaster Remailer <mix@anon.lcs.mit.edu> wrote:
>: It is time to put this sad error to an end.  Known plaintext is not
>: harmful, and will not weaken a properly-applied encryption transform.
>
>How do you know when you have one of them?
>
>How do you know how much entropy there is in your key generator?
>
>How do you know whether your keybook has been quietly copied?
>
>You don't know any of these things.

If any of these failures occur, all bets are off.  Whether or not you've
tried to minimize known plaintext, getting those wrong puts your system
in great danger.  Therefore, I would argue that it is a better use of
your energies to spend time making sure you defend against those failure
modes than to worry about known plaintext.

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 19:24:44 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF8tx8.Apw@bath.ac.uk>
References: <9gqinv$96v$3@agate.berkeley.edu>
Newsgroups: sci.crypt
Lines: 40

David Wagner <daw@mozart.cs.berkeley.edu> wrote:
: Tim Tyler wrote:
:>lcs Mixmaster Remailer <mix@anon.lcs.mit.edu> wrote:
:>: It is time to put this sad error to an end.  Known plaintext is not
:>: harmful, and will not weaken a properly-applied encryption transform.
:>
:>How do you know when you have one of them?
:>
:>How do you know how much entropy there is in your key generator?
:>
:>How do you know whether your keybook has been quietly copied?
:>
:>You don't know any of these things.

: If any of these failures occur.
: Whether or not
: you've tried to minimize known plaintext, getting those wrong puts your
: system in great danger.

Indeed.

: Therefore, I would argue that it is a better use of your energies to
: spend time making sure you defend against those failure modes than to
: worry about known plaintext.

It depends on who you are, and what the relative expenditure is.  As a
designer of an encryption device, you may have no control over the way
others feed keys to you.  It may not be up to you to control how
codebooks are destroyed upon enemy capture.

However, you /may/ be in a position to eliminate some classes of known
plaintexts.

Eliminating known plaintext may not help very much - or may only help
sometimes, but it can and does help.  Whether you consider it worth
doing should be a function of the perceived costs and benefits.

Known plaintext *can* be harmful, and *can* weaken the security of
messages transmitted under encryption applied with great care.
--
__________
 |im |yler  tt@iname.com  Home page: http://alife.co.uk/tim/

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 19:45:07 +0000 (UTC)
From: daw@mozart.cs.berkeley.edu (David Wagner)
Message-ID: <9gqug3$fpf$3@agate.berkeley.edu>
References: <GF8tx8.Apw@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 28

Tim Tyler wrote:
>David Wagner <daw@mozart.cs.berkeley.edu> wrote:
>: Therefore, I would argue that it is a better use of your energies to
>: spend time making sure you defend against those failure modes than to
>: worry about known plaintext.
>
>It depends on who you are, and what the relative expenditure is.  As a
>designer of an encryption device, you may have no control over the way
>others feed keys to you.

If you're a designer of an encryption device, you have no control over
the plaintext being fed to you, so you can't plausibly do anything about
known plaintext anyway.  In any case, I thought we were talking about
protocol and systems design.
>Eliminating known plaintext may not help very much - or may only help
>sometimes, but it can and does help.  Whether you consider it worth
>doing should be a function of the perceived costs and benefits.

Sure.  I contend, though, that the cost is very high and the benefit is
low.  The cost is complexity, and complexity is the worst enemy of
security.  The benefit is only applicable if the cipher is weak (but not
too weak---it has to be just weak enough that there are practical known
plaintext attacks, but no practical attacks with the reduced amount of
knowledge on the text), and given the state of modern cryptography, this
failure mode currently seems to have a relatively low probability when
compared to the other types of failures of security, empirically
speaking.

Subject: Re: Known plaintext considered harmless
Date: 20 Jun 2001 14:10:15 -0600
From: mackys+usenet@dim.com (Ben Cantrick)
Message-ID: <9gqvv7$ehk@flatland.dimensional.com>
References: <9gqug3$fpf$3@agate.berkeley.edu>
Newsgroups: sci.crypt
Lines: 30

In article <9gqug3$fpf$3@agate.berkeley.edu>,
David Wagner <daw@mozart.cs.berkeley.edu> wrote:
>Tim Tyler  wrote:
>>David Wagner <daw@mozart.cs.berkeley.edu> wrote:
>>: Therefore, I would argue that it is a better use of your energies to
>>: spend time making sure you defend against those failure modes than to
>>
>>It depends on who you are, and what the relative expenditure is.  As a
>>designer of an encryption device, you may have no control over the way
>>others feed keys to you.
>
>If you're a designer of an encryption device, you have no control
>over the plaintext being fed to you, so you can't plausibly do anything

I'm not sure I always agree. What's to stop you from, say, putting a
block of 16 or 32 random bytes at the beginning of any plaintext fed to
you? Or putting in a random byte every 3rd byte or something. Yes, you
pay in complexity of device design (good random number gens aren't cheap)
and bandwidth, but if you're realy worried about known plaintext attacks,
maybe it's worth it.

-Ben
--
Ben Cantrick (mackys@dim.com)        |   Yes, the AnimEigo BGC dubs still suck.
BGC Nukem:     http://www.dim.com/~mackys/bgcnukem.html
The Spamdogs:  http://www.dim.com/~mackys/spamdogs
http://www.wins.uva.nl/~mes/jargon/a/AppendixB.html
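Ben's random-prefix idea can be sketched in a few lines.  This is a toy illustration only: the repeating-hash keystream stands in for a real cipher, and the names (PREFIX_LEN, keystream, and so on) are invented for the sketch.

```python
import hashlib
import os

PREFIX_LEN = 16  # Ben's "block of 16 or 32 random bytes"

def keystream(key: bytes, n: int) -> bytes:
    # Toy counter-mode keystream from SHA-256; NOT a real cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Prepend random bytes so the true plaintext no longer sits at a
    # predictable position relative to the start of the message.
    padded = os.urandom(PREFIX_LEN) + plaintext
    return bytes(a ^ b for a, b in zip(padded, keystream(key, len(padded))))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    ks = keystream(key, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))[PREFIX_LEN:]
```

The bandwidth cost Ben mentions is visible directly: every message grows by PREFIX_LEN bytes.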

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 20:36:31 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF8x8v.GDv@bath.ac.uk>
References: <9gqug3$fpf$3@agate.berkeley.edu>
Newsgroups: sci.crypt
Lines: 56

David Wagner <daw@mozart.cs.berkeley.edu> wrote:
: Tim Tyler  wrote:
:>David Wagner <daw@mozart.cs.berkeley.edu> wrote:

:>: Therefore, I would argue that it is a better use of your energies to
:>: spend time making sure you defend against those failure modes than to
:>
:>It depends on who you are, and what the relative expenditure is.  As a
:>designer of an encryption device, you may have no control over the way
:>others feed keys to you.

: If you're a designer of an encryption device, you have no control
: over the plaintext being fed to you, so you can't plausibly do anything

: In any case, I thought we were talking about protocol and systems design.

This thread seems to have as a motif XML encryption.

In that domain, it seems to me that there /is/ something that can be done.

:>Eliminating known plaintext may not help very much - or may only help
:>sometimes, but it can and does help.  Whether you consider it worth
:>doing should be a function of the percieved costs and benefits.

: Sure.  I contend, though, that the cost is very high and the benefit
: is low.  The cost is complexity, and complexity is the worst enemy of
: security. [...]

The costs of /not/ compressing can include signals being sent
for longer than necessary (resulting in increased possibility of
interception and being traced), signals taking longer to arrive,
and other problems.

Complexity is a factor to be weighed - but I don't think it's
necessarily overwhelming.  It's not as though compression is rocket
science.

: The benefit is only applicable if the cipher is weak (but not
: too weak---it has to be just weak enough that there are practical known
: plaintext attacks, but no practical attacks with the reduced amount of
: knowledge on the text) [...]

This is something I dispute.  As an example, another failure mode in
which known-plaintext can help is a random number generator with
insufficient entropy - or something like a hashed password as a key.

If cryptanalytic attack was the /only/ consideration, correspondingly less
effort should be put into eliminating known plaintext - but it is only one
of a number of possible ways that the attacker can reduce the work
required for a brute force search.
--
__________

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 15:22:37 -0600
From: John Myre <jmyre@sandia.gov>
Message-ID: <3B31141D.4E148B8B@sandia.gov>
References: <GF8x8v.GDv@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 12

Tim Tyler wrote:
<snip>
> Complexity is a factor to be weighed - but I don't think it's
> necessarily overwhelming.
<snip>

Remind me not to buy any important software from you.

Ok, ok, that was harsh.  Still, I contend that complexity
is the single greatest obstacle to secure systems.

JM

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 23:16:13 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF94n1.2Ix@bath.ac.uk>
References: <3B31141D.4E148B8B@sandia.gov>
Newsgroups: sci.crypt
Lines: 9

John Myre <jmyre@sandia.gov> wrote:

: I contend that complexity is the single greatest obstacle to secure systems.

Perhaps.  Ignorance, stupidity, carelessness, lack of education
and laziness probably ought to be factored in somewhere.
--
__________

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 15:36:27 -0600
From: John Myre <jmyre@sandia.gov>
Message-ID: <3B31175B.6557690A@sandia.gov>
References: <GF8x8v.GDv@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 32

Tim Tyler wrote:
>
> David Wagner <daw@mozart.cs.berkeley.edu> wrote:
<snip>
> : The benefit is only applicable if the cipher is weak (but not
> : too weak---it has to be just weak enough that there are practical known
> : plaintext attacks, but no practical attacks with the reduced amount of
> : knowledge on the text) [...]
>
> This is something I dispute.  As an example, another failure mode in
> which known-plaintext can help is a random number generator with
> insufficient entropy - or something like a hashed password as a key.

But the "just weak enough" objection applies here, as well.

> If cryptanalytic attack was the /only/ consideration, correspondingly less
> effort should be put into eliminating known plaintext - but it is only one
> of a number of possible ways that the attacker can reduce the work
> required for a brute force search.

If there is *any* way to verify a key guess, then brute force
will always work, if we have enough time.  Plaintext obfuscation
techniques (compression, large block encryption) can only multiply
the work factor by a relatively small amount.  It is *possible*
that this amount is enough to make the difference between secure
and not secure.  It seems extraordinarily *unlikely* in any real,
practical system.  If nothing else, note that the adversary's
capabilities, a moving target at best, factor in to the equation.

JM
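The "any way to verify a key guess" point above can be sketched with a toy single-byte XOR cipher (an editor's illustration, not anything from the thread): any redundancy test, here "is the decryption printable ASCII?", lets brute force confirm or discard candidates.

```python
def xor1(data: bytes, key: int) -> bytes:
    # Toy cipher: XOR every byte with a single key byte.
    return bytes(b ^ key for b in data)

plaintext = b"the quick brown fox jumps over the lazy dog"
ciphertext = xor1(plaintext, 0x5A)

# Brute force all 256 keys, keeping those whose decryption passes the
# redundancy test (printable ASCII).  The true key always survives.
candidates = [
    k for k in range(256)
    if all(32 <= b < 127 for b in xor1(ciphertext, k))
]
assert 0x5A in candidates
```

This is also why the "relatively small multiplier" claim above is usually right: obfuscating the plaintext can shrink the candidate list or force a second test, but as long as *some* distinguishing redundancy remains anywhere, the search still terminates.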

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 23:12:36 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF94H0.2EI@bath.ac.uk>
References: <3B31175B.6557690A@sandia.gov>
Newsgroups: sci.crypt
Lines: 46

John Myre <jmyre@sandia.gov> wrote:
: Tim Tyler wrote:
:> David Wagner <daw@mozart.cs.berkeley.edu> wrote:

:> : The benefit is only applicable if the cipher is weak (but not
:> : too weak---it has to be just weak enough that there are practical known
:> : plaintext attacks, but no practical attacks with the reduced amount of
:> : knowledge on the text) [...]
:>
:> This is something I dispute.  As an example, another failure mode in
:> which known-plaintext can help is a random number generator with
:> insufficient entropy - or something like a hashed password as a key.

: But the "just weak enough" objection applies here, as well.

"the cipher is weak (but not too weak---it has to be just weak enough [])"

The "just weak enough" objection referred explicitly to the cypher.

If you wanted to apply the original statement to RNG failure, it would
need rephrasing.

: If there is *any* way to verify a key guess, then brute force
: will always work, if we have enough time.  Plaintext obfuscation
: techniques (compression, large block encryption) can only multiply
: the work factor by a relatively small amount. [...]

You are mistaken here.  Compression can make a brute force search
impossible from a situation where a brute force search before
compression would yield a unique plaintext.

: It is *possible* that this amount is enough to make the difference
: between secure and not secure.  It seems extraordinarily *unlikely* in
: any real, practical system.

Not really - all it has to do is lower the message size below the unicity
distance.

: If nothing else, note that the adversary's
: capabilities, a moving target at best, factor in to the equation.

The adversary's computational abilities don't enter into it, if
finding a unique solution has become theoretically impossible.
--
__________
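The unicity-distance argument above can be made concrete with Shannon's classical estimate U = H(K)/D, where H(K) is the key entropy in bits and D the per-character redundancy of the plaintext.  A back-of-envelope sketch (the 3.2 bits/char figure for English redundancy is a common approximation, not an exact value):

```python
def unicity_distance(key_bits: float, redundancy_per_char: float) -> float:
    # Shannon's estimate of how much ciphertext an attacker needs before
    # only one key/plaintext pair remains consistent with it.
    return key_bits / redundancy_per_char

# English at roughly 3.2 bits/char redundancy, 128-bit key: about 40
# characters of ciphertext theoretically determine the key uniquely.
assert round(unicity_distance(128, 3.2)) == 40

# Compression works by removing redundancy.  At 0.4 bits/char, the same
# key needs eight times as much ciphertext before a unique solution exists.
assert round(unicity_distance(128, 0.4)) == 320
```

If a message is shorter than the (compression-inflated) unicity distance, no amount of computation yields a unique solution, which is exactly the point about the adversary's capabilities not entering into it.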

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 21:46:55 +0000 (UTC)
From: daw@mozart.cs.berkeley.edu (David Wagner)
Message-ID: <9gr5kf$k2b$1@agate.berkeley.edu>
References: <GF8x8v.GDv@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 37

Tim Tyler  wrote:
>Complexity is a factor to be weighed - but I don't think it's
>necessarily overwhelming.  It's not as though compression is rocket
>science.

Compression isn't rocket science.  Security in the presence of complex
systems, on the other hand, is rocket science.

Consider the following, intended only as a random illustrative example
(since you picked compression as your example of a technique that adds
to complexity but reduces known plaintext).  If you add compression,
you have to start worrying about the following new attack.

Consider: After the receiver decrypts a packet and verifies its
authenticity, and then subsequently decompresses the result, how do we
know that the decompressed result is also authentic?  Well, in some cases,
it need not be.  For example, if the decompressor has any side inputs
(let's say, a compression window, or a 'compression factor' tuning knob)
that have not been authenticated, then an attacker might be able to
affect the results of the decompression, despite the presence of a MAC
on the encrypted ciphertext.  This could, in some (hopefully uncommon)
situations, potentially lead to a failure of the message authentication
property.

I'm not saying that this attack will always apply whenever you use
compression.  Rather, I'm saying that it is an example of a new failure
mode that you have to worry about if you introduce the extra complexity
associated with measures for reducing known plaintext.  The question is,
is this extra complexity worth the cost?  In the case of reducing known
plaintext, I assert that the answer is almost always an emphatic "No!".

For me, one of the fantastic advantages of modern ciphers, which
are designed to be secure against all of these far-fetched academic
attacks, is that they give you an entire set of failure modes that you
no longer have to worry about when designing a system.  As a result,
you can better concentrate on avoiding the resulting failure modes,
and that is a big win.

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 23:06:07 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF9467.282@bath.ac.uk>
References: <9gr5kf$k2b$1@agate.berkeley.edu>
Newsgroups: sci.crypt
Lines: 41

David Wagner <daw@mozart.cs.berkeley.edu> wrote:
: Tim Tyler  wrote:

:>Complexity is a factor to be weighed - but I don't think it's
:>necessarily overwhelming.  It's not as though compression is rocket
:>science.

: Compression isn't rocket science.  Security in the presence of complex
: systems, on the other hand, is rocket science.

: Consider the following, intended only as a random illustrative example
: (since you picked compression as your example of a technique that adds
: to complexity but reduces known plaintext).  If you add compression,
: you have to start worrying about the following new attack.

: Consider: After the receiver decrypts a packet and verifies its
: authenticity, and then subsequently decompresses the result, how do we
: know that the decompressed result is also authentic? [...]

Signatures should normally be attached to the document being signed.
Usually this occurs prior to compression.

: I'm not saying that this attack will always apply whenever you use
: compression.  Rather, I'm saying that it is an example of a new failure
: mode that you have to worry about if you introduce the extra complexity
: associated with measures for reducing known plaintext.  The question is,
: is this extra complexity worth the cost?  In the case of reducing known
: plaintext, I assert that the answer is almost always an emphatic "No!".

There are always ways to do things wrong.  Avoiding complexity at all
costs leads towards CTR mode, and ultimately ROT-N - surely not where
we should be going.

To some extent the remedy for doing things wrong is to learn how to do
them right.  In many cases, this only needs to be done once.  Today
combined compress-encrypt systems exist, and they can be used much like
any cypher.  There are a *few* security differences because of the
compression, but I would not say they were much to write home about.
--
__________

Subject: Re: Known plaintext considered harmless
Date: Thu, 21 Jun 2001 00:08:38 +0000 (UTC)
From: daw@mozart.cs.berkeley.edu (David Wagner)
Message-ID: <9grdu6$ouo$1@agate.berkeley.edu>
References: <GF9467.282@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 8

Tim Tyler  wrote:
>Signatures should normally be attached to the document being signed.
>Usually this occurs prior to compression.

In application-layer encryption, yes.  In network-layer encryption, no.
As I said before, I don't claim this applies to all systems.  Rather,
it is an example of a more general point: As you add complexity, it can
become harder to verify whether the system is secure.

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 21:50:07 +0000 (UTC)
From: daw@mozart.cs.berkeley.edu (David Wagner)
Message-ID: <9gr5qf$k2b$2@agate.berkeley.edu>
References: <GF8x8v.GDv@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 23

Tim Tyler  wrote:
>: The benefit is only applicable if the cipher is weak (but not
>: too weak---it has to be just weak enough that there are practical known
>: plaintext attacks, but no practical attacks with the reduced amount of
>: knowledge on the text) [...]
>
>This is something I dispute.  As an example, another failure mode in
>which known-plaintext can help is a random number generator with
>insufficient entropy - or something like a hashed password as a key.

How?

Maybe you're imagining that a failed random number generator can make it
easy for the attacker to narrow down the keyspace to, say, only 2^20
possibilities, and then exhaustive keysearch is a terrible threat.
I certainly agree that this is a possible failure mode.

But it would be a mistake to expect to defend against this failure mode
by, say, compressing.  After all, exhaustive keysearch will be possible
whenever you can find a total of 20 bits of redundancy spread out in any
way whatsoever among the entire set of messages encrypted under that
key, and it seems extremely difficult in practice to reduce the level
of redundancy enough that exhaustive keysearch becomes impossible.
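[Editor's sketch] Wagner's scenario can be made concrete. The following minimal Python sketch is hypothetical: a toy SHA-256 keystream stands in for the weak-keyed cipher, and the "failed RNG" keyspace is scaled down from 2^20 to 2^16 keys so the search runs quickly. It shows how a redundancy predicate on the decrypted text substitutes for known plaintext in an exhaustive keysearch:

```python
import hashlib

def keystream(seed: int, n: int) -> bytes:
    """Toy keystream generator (illustration only, not a real cipher)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(seed.to_bytes(4, "big") +
                              ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def redundant(pt: bytes) -> bool:
    # ~1 bit of redundancy per byte: every byte is 7-bit ASCII.
    return all(b < 0x80 for b in pt)

# A failed RNG leaves only 2^16 possible keys (scaled down from 2^20).
true_seed = 0x1B2A
msg = b"Attack at dawn."          # ordinary ASCII, hence redundant
ct = xor(msg, keystream(true_seed, len(msg)))

# With no known plaintext at all, the attacker still searches the
# reduced keyspace, keeping any seed whose decryption "looks like"
# plaintext -- the redundancy predicate stands in for known plaintext.
candidates = [s for s in range(2**16)
              if redundant(xor(ct, keystream(s, len(ct))))]
print(true_seed in candidates, len(candidates))
```

A handful of wrong seeds may pass the 15-bit redundancy test, but the true key is always among the survivors, and more ciphertext collapses the survivors to one: a few bits of redundancy anywhere among the messages suffice, which is Wagner's point.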

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 22:41:28 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF9314.16v@bath.ac.uk>
References: <9gr5qf$k2b$2@agate.berkeley.edu>
Newsgroups: sci.crypt
Lines: 42

David Wagner <daw@mozart.cs.berkeley.edu> wrote:
: Tim Tyler  wrote:

:>: The benefit is only applicable if the cipher is weak (but not
:>: too weak---it has to be just weak enough that there are practical known
:>: plaintext attacks, but no practical attacks with the reduced amount of
:>: knowledge on the text) [...]
:>
:>This is something I dispute.  As an example, another failure mode in
:>which known-plaintext can help is a random number generator with
:>insufficient entropy - or something like a hashed password as a key.

: How?

: Maybe you're imagining that a failed random number generator can make it
: easy for the attacker to narrow down the keyspace to, say, only 2^20
: possibilities, and then exhaustive keysearch is a terrible threat.

Yes.

: But it would be a mistake to expect to defend against this failure mode
: by, say, compressing.  After all, exhaustive keysearch will be possible
: whenever you can find a total of 20 bits of redundancy spread out in any
: way whatsoever among the entire set of messages encrypted under that
: key, and it seems extremely difficult in practice to reduce the level
: of redundancy enough that exhaustive keysearch becomes impossible.

Varying quantities of information may be transmitted under keys.

If you're /always/ transmitting lots of information the defense doesn't
help.

If you /ever/ transmit small quantities of information under a key, it
can help.

This is essentially the same situation as cryptanalytic attack.  I think
if you are regarding compression as an attempt to defend against broken
cyphers, you should equally consider it as a defense against low entropy
key generators.
--
__________

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 23:33:06 +0000 (UTC)
From: daw@mozart.cs.berkeley.edu (David Wagner)
Message-ID: <9grbri$n8q$3@agate.berkeley.edu>
References: <GF9314.16v@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 11

Tim Tyler  wrote:
>If you /ever/ transmit small quantities of information under a key, it
>can help.

Right.  And my claim is that this case almost never happens (it is so
rare that it should have minimal weight in any decision).  But I'll
certainly concede that my claim is purely empirical in nature, and can
only be settled by looking at real systems and doing some measurements.
I haven't done the measurements; I'm just going on my experience with
how crypto tends to be used, in the systems I've looked at.  I freely
admit that I don't have any evidence for my claim that I could show you.

Subject: Re: Known plaintext considered harmless
Date: Thu, 21 Jun 2001 09:15:05 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF9wD5.E21@bath.ac.uk>
References: <9grbri$n8q$3@agate.berkeley.edu>
Newsgroups: sci.crypt
Lines: 29

David Wagner <daw@mozart.cs.berkeley.edu> wrote:
: Tim Tyler  wrote:

:>If you /ever/ transmit small quantities of information under a key, it
:>can help.

: Right.  And my claim is that this case almost never happens (it is so
: rare that it should have minimal weight in any decision). [...]

I guess it depends on how small we are thinking.

The "small" refers to messages whose length is roughly equal to the
unicity distance for the system.

However, it is important to recall that compression can increase the
unicity distance - and can sometimes increase it dramatically.

For example if random ASCII (i.e. 7-bit) plaintexts were being transmitted
under a 128-bit key, then compression could increase the unicity distance
from thirty-something bytes to *infinity* - no matter how long your
messages were they would still be within the unicity distance.

Now we may not have access to such compressors for the domain of English
text messages - but it seems worth noting that in some domains, the
unicity distance after compression has been applied can be quite large -
and will only grow as compression algorithms increase in sophistication.
--
__________
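[Editor's sketch] The figures in Tyler's example (and his later correction) can be checked against Shannon's standard estimate U = H(K)/D, where H(K) is the key entropy and D the redundancy per character. A small sketch, assuming the usual textbook values for the redundancy of English and of random 7-bit ASCII:

```python
import math

def unicity_distance(key_bits: float, redundancy_bits_per_byte: float) -> float:
    """Shannon's estimate U = H(K)/D: the ciphertext length (in bytes here)
    beyond which only one key is expected to yield a plausible decryption."""
    if redundancy_bits_per_byte <= 0:
        return math.inf          # no redundancy: every key stays plausible
    return key_bits / redundancy_bits_per_byte

# English-like text carries roughly 1.5 bits of information per 8-bit byte:
print(unicity_distance(128, 8 - 1.5))   # about 20 bytes
# Random 7-bit ASCII in 8-bit bytes has just 1 redundant bit per byte:
print(unicity_distance(128, 8 - 7))     # 128 bytes
# Perfect compression drives the redundancy to zero:
print(unicity_distance(128, 0))         # inf
```

On these assumptions the 7-bit ASCII case comes out near 128 bytes rather than "thirty-something", and zero redundancy pushes the unicity distance to infinity, matching the argument above.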

Subject: Re: Known plaintext considered harmless
Date: Thu, 21 Jun 2001 09:21:13 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF9wnC.Ent@bath.ac.uk>
References: <GF9wD5.E21@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 12

Tim Tyler <tt@iname.com> wrote:

: For example if random ASCII (i.e. 7-bit) plaintexts were being transmitted
: under a 128-bit key, then compression could increase the unicity distance
: from thirty-something bytes [...]

Apologies for this gross under-estimate.

This math makes no odds to the argument, though.
--
__________

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 02:00:50 GMT
From: "Douglas A. Gwyn" <DAGwyn@null.net>
Message-ID: <3B300419.EE578CDF@null.net>
References: <20010619052032.804.qmail@nym.alias.net>
Newsgroups: sci.crypt
Lines: 13

lcs Mixmaster Remailer wrote:
> Inexperienced users of crypto systems are often concerned about
> known plaintext.  They have heard that known plaintext can give
> a cryptanalyst an entry point into breaking ciphers.
> Accordingly they fear known plaintext and try to avoid it.

(1) What evidence is there for any of the above claims?
(2) Resistance to a "known plaintext" attack is an essential
requirement for a general-purpose encryption system, because
in fact quite often the enemy *will* know or guess plaintext.
(3) Stereotypes do provide an opening wedge in cracking many
systems.  If a collection of messages are stacked "in depth"
where the stereotype recurs, it can facilitate cryptanalysis.

Subject: Re: Known plaintext considered harmless
Date: 20 Jun 2001 05:40:04 -0000
From: lcs Mixmaster Remailer <mix@anon.lcs.mit.edu>
Message-ID: <20010620054004.24435.qmail@nym.alias.net>
References: <20010619052032.804.qmail@nym.alias.net>
Newsgroups: sci.crypt
Lines: 121

Douglas A. Gwyn writes:
> lcs Mixmaster Remailer wrote:
> > Inexperienced users of crypto systems are often concerned about
> > known plaintext.  They have heard that known plaintext can give
> > a cryptanalyst an entry point into breaking ciphers.
> > Accordingly they fear known plaintext and try to avoid it.
>
> (1) What evidence is there for any of the above claims?

Already an example appeared on sci.crypt the day after the original
posting, which has been quoted elsewhere.  Someone thought that XML
encryption was unsafe because every message began with the same string
"<xml".

Here is an example from RFC1889, a protocol for audio-video transport:

sequence number: 16 bits
The sequence number increments by one for each RTP data packet
sent, and may be used by the receiver to detect packet loss and
to restore packet sequence. The initial value of the sequence
number is random (unpredictable) to make known-plaintext attacks
on encryption more difficult, even if the source itself does not
encrypt, because the packets may flow through a translator that
does. Techniques for choosing unpredictable numbers are
discussed in [7].

Here is a perfect example from the W3C workgroup on forms,
http://lists.w3.org/Archives/Public/www-forms/2001Mar/0052.html:

This approach, however appealing it may be, lacks in two key areas.

The first is plain security.  If you encrypt the whole form, then you
make yourself vulnerable to known-plaintext attacks.  Since parts
of the form are known (e.g. tag names), they can be used to break the
encryption if it's vulnerable to known-plaintext attacks.  Encrypting
just the sensitive information gives an attacker less to play with.

Here is an example from the IETF TLS (aka SSL) design group from 1996.
They flirt with the error but in the end they make the right decision.
http://lists.w3.org/Archives/Public/ietf-tls/1996OctDec/0170.html:

I think this is a good suggestion, except that I would specify the
contents of the block data rather than using random data. I don't
believe that random data adds significantly to the security of the
protocol, as any significant communication will be well beyond the
unicity distance; thus, the verifiable plaintext of the padding
won't aid attackers. The benefit of using predictable data is that
the peer can verify that padding is being done as expected, which
allows a greater rigidity around implementations, which I believe
aids security analysis.

If you'd like to avoid the padding data being known plaintext,
you could specify it in some way such that it is unknown, yet
verifiable. However, the protocol should be strong against
known-plaintext attacks anyway, as such attacks are commonly available
to attackers.

An example from the XML encryption draft,
http://lists.w3.org/Archives/Public/xml-encryption/2000Dec/att-0024/01-XMLEncryption_v01.html:

5. It has been noted by Hal Finney, Steve Wiley, et al that encrypting
XML raises concerns over attacks based on known plaintext as well
as known length of the encrypted plaintext.  This issue, in regards
to encrypting enumerated attributes values, is one reason for not
supporting attribute-value encryption.

A message from the XML encryption archives recommending compression
(aka "redundancy removal") to thwart known plaintext attacks,
http://lists.w3.org/Archives/Public/xml-encryption/2000Nov/0059.html:

>People didn't seem keen on compression.
Maybe we should use the term "redundancy removal" instead of
compression.  This should make it possible to harden encrypted docs against
known-plaintext attacks.  How this is technically achieved should be
considered later.  But it may be achieved via compression.

The original XML encryption organizational ("BoF") meeting notes
raise a point similar to the one which came up earlier today, concerns
about known plaintext due to the stereotypical nature of XML documents,
http://lists.w3.org/Archives/Public/xml-encryption/2000Sep/att-0014/01-XML_Encryption_BoF.htm:

Known plaintext attacks

E.g. if DTD is given, attackers can know an element to be encrypted
starts with a particular tag name. Good reason to not use stream
ciphers.

I can find more examples if you like.  Or are you convinced now that

> (2) Resistance to a "known plaintext" attack is an essential
> requirement for a general-purpose encryption system, because
> in fact quite often the enemy *will* know or guess plaintext.

Absolutely.

> (3) Stereotypes do provide an opening wedge in cracking many
> systems.  If a collection of messages are stacked "in depth"
> where the stereotype recurs, it can facilitate cryptanalysis.

True in principle, but this is not something that a user should be
concerned about.  Hundreds of man years have been spent analyzing
3DES and probably close to that for AES.  Users don't have to worry
about avoiding known plaintext when using a strong algorithm like this.
The most brilliant cryptographers in the world have failed to find the
smallest weakness in these ciphers.

The danger from known plaintext when using a modern cipher like these
is so small, so hypothetical, so unrealistic, that it is unprofessional
for a cryptographer to allow the contrary belief to stand.  The examples
above prove that it is a widespread mistake, quoted frequently by people
who think they know something about cryptography.  The community has to
take on the responsibility of setting the record straight.

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 10:29:24 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF8550.CIH@bath.ac.uk>
References: <20010620054004.24435.qmail@nym.alias.net>
Newsgroups: sci.crypt
Lines: 26

lcs Mixmaster Remailer <mix@anon.lcs.mit.edu> wrote:

: Hundreds of man years have been spent analyzing
: 3DES and probably close to that for AES.  Users don't have to worry
: about avoiding known plaintext when using a strong algorithm like this.
: The most brilliant cryptographers in the world have failed to find the
: smallest weakness in these ciphers.

Really?  How do you know that?  You don't know that.  Indeed, it's quite
likely that you have never read anything by the world's best living
cryptanalysts.

: The danger from known plaintext when using a modern cipher like these
: is so small, so hypothetical, so unrealistic, that it is unprofessional
: for a cryptographer to allow the contrary belief to stand. [...]

Again, you assume that cryptanalytic attack is the only possible means of
key recovery.  This is false.  Even *if* the cypher were known to
be unbroken, there would still be a good case for reducing known-plaintext,
as a precaution to help in dealing with other possible problems.

XML's high redundancy decreases the unicity distance for it - making
measures that increase it again more desirable than usual.
--
__________

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 11:03:12 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3b30814c.1270021@news.powersurfr.com>
References: <GF8550.CIH@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 30

On Wed, 20 Jun 2001 10:29:24 GMT, Tim Tyler <tt@iname.com> wrote, in
part:
>lcs Mixmaster Remailer <mix@anon.lcs.mit.edu> wrote:

>: Hundreds of man years have been spent analyzing
>: 3DES and probably close to that for AES.  Users don't have to worry
>: about avoiding known plaintext when using a strong algorithm like this.
>: The most brilliant cryptographers in the world have failed to find the
>: smallest weakness in these ciphers.

>Really?  How do you know that?  You don't know that.  Indeed, it's quite
>likely that you have never read anything by the world's best living
>cryptanalysts.

Well, he *is* using an anonymous remailer. So we don't _know_ that he
doesn't work for the NSA - or that his motive in writing this post is
not to get us to leave the known plaintext in to make his job easier.

I think we can leave that out of our thinking, though. The problem
isn't that Rijndael and Triple-DES are particularly likely to be less
strong than "advertised". As far as that is concerned, I have to admit
the 'conventional wisdom' is likely quite right. The problem is,
though, that if keeping one's secrets is felt to be important,
phrasing even good advice in terms of deliberately neglecting a
precaution, instead of spending one's time on more important things
(like possible side channels, exposure of one's plaintext data to
computer viruses) is like waving a red flag at a bull.

John Savard
http://home.ecn.ab.ca/~jsavard/index.html

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 16:26:02 +0000 (UTC)
From: daw@mozart.cs.berkeley.edu (David Wagner)
Message-ID: <9gqiqq$96v$4@agate.berkeley.edu>
References: <GF8550.CIH@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 7

Tim Tyler  wrote:
>XML's high redundancy decreases the unicity distance for it

Even if you use compression, you'll still be using the key
far past the unicity distance.  It seems hard to precisely
justify why the unicity distance should be relevant to systems
based on computational security.

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 19:36:51 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF8uHF.BMw@bath.ac.uk>
References: <9gqiqq$96v$4@agate.berkeley.edu>
Newsgroups: sci.crypt
Lines: 32

David Wagner <daw@mozart.cs.berkeley.edu> wrote:
: Tim Tyler  wrote:

:>XML's high redundancy decreases the unicity distance for it

: Even if you use compression, you'll still be using the key
: far past the unicity distance. [...]

That depends on exactly what you are doing.

XML is not known for its conciseness - but then again it /is/ pretty
redundant, and a dedicated compressor could no doubt perform wonders.

However, I expect those wonders would only rarely get you anywhere
near the unicity distance.  Rarely is better than never, though.

: It seems hard to precisely justify why the unicity distance should be
: relevant to systems based on computational security.

Block cyphers are based mainly on computational security - but partly
on information-theoretic security.  The former provides no guarantee of
secrecy, while the latter does.  If you can get a guarantee that
even with a powerful cryptanalytic attack - or some super-duper quantum
computer - knowledge of the cyphertext cannot possibly reveal the key,
then that is something which might be considered desirable.

Even if you ignore the unicity distance there are still some
remaining security plus points for compressing.  The attacker
gets fewer plaintext/cyphertext pairs, for example.
--
__________

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 11:31:21 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3b3087b8.2913870@news.powersurfr.com>
References: <20010620054004.24435.qmail@nym.alias.net>
Newsgroups: sci.crypt
Lines: 32

On 20 Jun 2001 05:40:04 -0000, lcs Mixmaster Remailer
<mix@anon.lcs.mit.edu> wrote, in part:

>Hundreds of man years have been spent analyzing
>3DES and probably close to that for AES.  Users don't have to worry
>about avoiding known plaintext when using a strong algorithm like this.
>The most brilliant cryptographers in the world have failed to find the
>smallest weakness in these ciphers.

As has been noted, you are laying it on just a little bit thick here.

Yes, Rijndael has enough rounds that the Square attack seems to be
useless. But even a mere dabbler such as myself was able to note that
the number of rounds with the most common key sizes gave it disturbing
structural similarities to DES with an odd number of rounds, which is
known to be weak.

3DES is DES repeated three times, and while DES is nearly as strong as
its key size, small weaknesses have been found in it, most
specifically against linear cryptanalysis.

And, in any case, someone taking your statements above too closely to
heart might commit the error of forgetting to use a "reasonable
chaining mode" with one of these wonderful block ciphers.

The problem, though, isn't so much that the message you want the
cryptological community to convey is not true. In general, it is
indeed valid. It's just that an appearance of dogmatism is the last
thing one needs to get the message across effectively.

John Savard
http://home.ecn.ab.ca/~jsavard/index.html

Subject: Re: Known plaintext considered harmless
Date: 20 Jun 2001 12:21:22 GMT
From: david_a_scott@emailv.com (SCOTT19U.ZIP_GUY)
Message-ID: <90C6481C6H110W296LC45WIN3030R@207.36.190.226>
References: <20010620054004.24435.qmail@nym.alias.net>
Newsgroups: sci.crypt
Lines: 49

mix@anon.lcs.mit.edu (lcs Mixmaster Remailer) wrote in
<20010620054004.24435.qmail@nym.alias.net>:

>
>True in principle, but this is not something that a user should be
>concerned about.  Hundreds of man years have been spent analyzing
>3DES and probably close to that for AES.  Users don't have to worry
>about avoiding known plaintext when using a strong algorithm like this.
>The most brilliant cryptographers in the world have failed to find the
>smallest weakness in these ciphers.
>

This is where you're full of shit.  No one in the open community
has any idea of the work of the most brilliant cryptographers.  You're
making the same mistake the Germans and Japanese made by falsely
assuming strength.  That is precisely what one should not do.

For XML they should use a bijective compressor tuned to XML
and then encrypt with a properly mated RIJNDAEL, since that
is going to be the AES standard.  If RIJNDAEL is weak, the compression
still may save them till a new standard is adopted.  For my own
use I would not use AES, but for a standard of this type I am not
sure you really will have much choice.

>The danger from known plaintext when using a modern cipher like these
>is so small, so hypothetical, so unrealistic, that it is unprofessional
>for a cryptographer to allow the contrary belief to stand.  The examples
>above prove that it is a widespread mistake, quoted frequently by people
>who think they know something about cryptography.  The community has to
>take on the responsibility of setting the record straight.

Bullshit.  The dangers of known plaintext are always present, and to
forget that is to make a serious mistake.

David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSIOM"
http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
MY Compression Page http://members.nbci.com/ecil/compress.htm
**TO EMAIL ME drop the roman "five" **
Disclaimer:I am in no way responsible for any of the statements
made in the above text. For all I know I might be drugged.
As a famous person once said "any cryptograhic
system is only as strong as its weakest link"
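[Editor's sketch] The compress-then-encrypt pipeline Scott describes can be sketched with the standard-library zlib as a stand-in compressor (the encryption stage is omitted, and the XML fragment is invented for illustration). Note that zlib, unlike the bijective compressors discussed in this thread, is not bijective: its fixed stream header is itself known plaintext, which illustrates why "properly mated" compression matters:

```python
import zlib

# Stereotyped XML: highly redundant, classic known-plaintext material.
xml = (b"<record><name>example</name>"
       b"<value>0123456789</value></record>") * 50

compressed = zlib.compress(xml, 9)
print(len(xml), len(compressed))   # most of the redundancy is removed

# But an ordinary (non-bijective) compressor reintroduces structure:
# every zlib stream at this compression level begins with the same two
# header bytes, handing the analyst a known-plaintext crib before the
# encryption stage even starts.
print(compressed[:2].hex())

# The round trip is lossless, so nothing is given up on decryption.
assert zlib.decompress(compressed) == xml
```

A bijective compressor avoids the header problem because every byte string decompresses to some valid plaintext, leaving the analyst no quick way to reject a trial key.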

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 15:20:22 GMT
From: "Douglas A. Gwyn" <DAGwyn@null.net>
Message-ID: <3B30BF36.63D4AE7@null.net>
References: <20010620054004.24435.qmail@nym.alias.net>
Newsgroups: sci.crypt
Lines: 19

lcs Mixmaster Remailer wrote:
> The danger from known plaintext when using a modern cipher like these
> is so small, so hypothetical, so unrealistic, that it is unprofessional
> for a cryptographer to allow the contrary belief to stand.  The examples
> above prove that it is a widespread mistake, quoted frequently by people
> who think they know something about cryptography.  The community has to
> take on the responsibility of setting the record straight.

I think the right way to present it is "Modern cryptosystems are
specifically designed for resistance to known-plaintext attacks,
among other things.  Therefore, if you know that a modern
cryptosystem will be used, you needn't be concerned about the
possibility of a known-plaintext attack.  However, if you can't
be sure what kind of encryption will be applied, then all bets
are off, and it *might* be useful to vary the plaintext somewhat;
however, there are many other protective measures that would be
more useful (like wrapping the plaintext in a modern encryption
that you have control over, before submitting it to be encrypted
with the unknown system)."

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 19:42:41 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF8ur5.C5I@bath.ac.uk>
References: <3B30BF36.63D4AE7@null.net>
Newsgroups: sci.crypt
Lines: 16

Douglas A. Gwyn <DAGwyn@null.net> wrote:

: I think the right way to present it is "Modern cryptosystems are
: specifically designed for resistance to known-plaintext attacks,
: among other things.  Therefore, if you know that a modern
: cryptosystem will be used, you needn't be concerned about the
: possibility of a known-plaintext attack. [...]

Compare with: "Modern windows are designed not to break.  Therefore,
if you fit your shop with modern windows, you needn't be concerned
about the possibility of them breaking."

Alas, this leaves the shoplifters out of the picture.
--
__________

Subject: Re: Known plaintext considered harmless
Date: Thu, 21 Jun 2001 22:41:19 GMT
From: "Douglas A. Gwyn" <DAGwyn@null.net>
Message-ID: <3B32780F.EEA34717@null.net>
References: <GF8ur5.C5I@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 16

Tim Tyler wrote:
> Douglas A. Gwyn <DAGwyn@null.net> wrote:
> : I think the right way to present it is "Modern cryptosystems are
> : specifically designed for resistance to known-plaintext attacks,
> : among other things.  Therefore, if you know that a modern
> : cryptosystem will be used, you needn't be concerned about the
> : possibility of a known-plaintext attack. [...]
> Compare with: "Modern windows are designed not to break.  Therefore,
> if you fit your shop with modern windows, you needn't be concerned
> about the possibility of them breaking."

Assuming that modern windows were indeed unbreakable.

> Alas, this leaves the shoplifters out of the picture.

It also leaves purple unicorns out of the picture.

Subject: Re: Known plaintext considered harmless
Date: Thu, 21 Jun 2001 23:46:44 GMT
From: ritter@io.com (Terry Ritter)
Message-ID: <3b328729.6223104@news.io.com>
References: <3B32780F.EEA34717@null.net>
Newsgroups: sci.crypt
Lines: 28

On Thu, 21 Jun 2001 22:41:19 GMT, in <3B32780F.EEA34717@null.net>, in
sci.crypt "Douglas A. Gwyn" <DAGwyn@null.net> wrote:

>Tim Tyler wrote:
>> Douglas A. Gwyn <DAGwyn@null.net> wrote:
>> : I think the right way to present it is "Modern cryptosystems are
>> : specifically designed for resistance to known-plaintext attacks,
>> : among other things.  Therefore, if you know that a modern
>> : cryptosystem will be used, you needn't be concerned about the
>> : possibility of a known-plaintext attack. [...]
>> Compare with: "Modern windows are designed not to break.  Therefore,
>> if you fit your shop with modern windows, you needn't be concerned
>> about the possibility of them breaking."
>
>Assuming that modern windows were indeed unbreakable.

Only if we ASSUME (that is, make an ASS out of U and ME), that modern
ciphers really *are* "indeed unbreakable."  But every generation
thought their ciphers were unbreakable.

Simply being designed to be unbreakable is hardly the same thing.
That's a wish and a hope, not reality.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM

Subject: Re: Known plaintext considered harmless
Date: 22 Jun 2001 00:18:41 GMT
From: david_a_scott@emailv.com (SCOTT19U.ZIP_GUY)
Message-ID: <90C7B8866H110W296LC45WIN3030R@207.36.190.226>
References: <3b328729.6223104@news.io.com>
Newsgroups: sci.crypt
Lines: 48

ritter@io.com (Terry Ritter) wrote in <3b328729.6223104@news.io.com>:

>
>On Thu, 21 Jun 2001 22:41:19 GMT, in <3B32780F.EEA34717@null.net>, in
>sci.crypt "Douglas A. Gwyn" <DAGwyn@null.net> wrote:
>
>>Tim Tyler wrote:
>>> Douglas A. Gwyn <DAGwyn@null.net> wrote:
>>> : I think the right way to present it is "Modern cryptosystems are
>>> : specifically designed for resistance to known-plaintext attacks,
>>> : among other things.  Therefore, if you know that a modern
>>> : cryptosystem will be used, you needn't be concerned about the
>>> : possibility of a known-plaintext attack. [...]
>>> Compare with: "Modern windows are designed not to break.  Therefore,
>>> if you fit your shop with modern windows, you needn't be concerned
>>> about the possibility of them breaking."
>>
>>Assuming that modern windows were indeed unbreakable.
>
>Only if we ASSUME (that is, make an ASS out of U and ME), that modern
>ciphers really *are* "indeed unbreakable."  But every generation
>thought their ciphers were unbreakable.
>

One great teacher I had said: look at history.  Only a fool thinks
our frame of reference is different or that we are better.  History
has shown: every generation seemed to think they were unbreakable,
and they were wrong.  There is no reason to believe we are special
and that we know they are unbreakable.  Because if we do, in 50 years
they will laugh at our stupidity.  I am already laughing, since
Shannon taught us some of the goals we should strive towards,
and the fools say it is of no concern with modern ciphers.  They are
wrong, and quantum computers for one thing will prove that.

David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSIOM"
http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
MY Compression Page http://members.nbci.com/ecil/compress.htm
**TO EMAIL ME drop the roman "five" **
Disclaimer:I am in no way responsible for any of the statements
made in the above text. For all I know I might be drugged.
As a famous person once said "any cryptograhic
system is only as strong as its weakest link"

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 06:18:27 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3b32e2b9.2662727@news.powersurfr.com>
References: <90C7B8866H110W296LC45WIN3030R@207.36.190.226>
Newsgroups: sci.crypt
Lines: 17

On 22 Jun 2001 00:18:41 GMT, david_a_scott@emailv.com
(SCOTT19U.ZIP_GUY) wrote, in part:

>Every generation seemed to think they were unbreakable
>and they were wrong. There is no reason to believe we are special
>and that we know they are unbreakable.

Actually, there _is_ a reason. The microchip. It lets us use
arbitrarily complicated ciphers.

So if we actually let ourselves go, and really made it work to
encipher our messages, instead of using algorithms designed to work at
blazing speed on minimal hardware, we might well have grounds for a
limited degree of confidence or even complacency.

John Savard
http://home.ecn.ab.ca/~jsavard/index.html

Subject: Re: Known plaintext considered harmless
Date: 22 Jun 2001 11:50:42 GMT
From: david_a_scott@emailv.com (SCOTT19U.ZIP_GUY)
Message-ID: <90C83AB47H110W296LC45WIN3030R@207.36.190.226>
References: <3b32e2b9.2662727@news.powersurfr.com>
Newsgroups: sci.crypt
Lines: 32

jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote in
<3b32e2b9.2662727@news.powersurfr.com>:

>On 22 Jun 2001 00:18:41 GMT, david_a_scott@emailv.com
>(SCOTT19U.ZIP_GUY) wrote, in part:
>
>>Every generation seemed to think they were unbreakable
>>and they were wrong. There is no reason to believe we are special
>>and that we know they are unbreakable.
>
>Actually, there _is_ a reason. The microchip. It lets us use
>arbitrarily complicated ciphers.
>

   That same chip cuts both ways.  Who would want to not use one
to break a cipher?  All a chip does is compress time.

David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSIOM"
http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
MY Compression Page http://members.nbci.com/ecil/compress.htm
**TO EMAIL ME drop the roman "five" **
Disclaimer:I am in no way responsible for any of the statements
made in the above text. For all I know I might be drugged.
As a famous person once said "any cryptograhic
system is only as strong as its weakest link"

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 15:50:04 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GFC9BF.Hvz@bath.ac.uk>
References: <90C83AB47H110W296LC45WIN3030R@207.36.190.226>
Newsgroups: sci.crypt
Lines: 31

SCOTT19U.ZIP_GUY <david_a_scott@emailv.com> wrote:
: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote in
:> david_a_scott@emailv.com (SCOTT19U.ZIP_GUY) wrote, in part:

:>>Every generation seemed to think they were unbreakable
:>>and they were wrong. There is no reason to believe we are
:>>special and that we know they are unbreakable.
:>
:>Actually, there _is_ a reason. The microchip. It lets us use
:>arbitrarily complicated ciphers.

:   That same chip cuts both ways. Who would want to not use one
: to break a cipher. All a chip does is compress time.

Another reason would be that knowledge has improved -
today's cryptographers stand on the shoulders of giants.

At one time, nobody knew how to make a reasonable cypher that no-one could
break.

Now that situation is no longer with us - our strongest publicly-known
cyphers appear to be very strong, and resist all publicly-known attacks.

It seems possible that as time passes, the apparent gap between
public code-makers and public code-breakers will widen further.

This is not to encourage complacency, though.  For one thing, it is
possible that the hidden world is well in advance of public knowledge.
--
__________

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 16:55:07 GMT
From: "Douglas A. Gwyn" <DAGwyn@null.net>
Message-ID: <3B33786B.7826214C@null.net>
References: <GFC9BF.Hvz@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 7

Tim Tyler wrote:
> At one time, nobody knew how to make a reasonable cypher that no-one could
> break.
> Now that situation is no longer with us - our strongest publicly-known
> cyphers appear to be very strong, and resist all publicly-known attacks.

That was also true 10, 50, 200 years ago.

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 22:06:35 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GFCqqz.HsA@bath.ac.uk>
References: <3B33786B.7826214C@null.net>
Newsgroups: sci.crypt
Lines: 14

Douglas A. Gwyn <DAGwyn@null.net> wrote:
: Tim Tyler wrote:

:> At one time, nobody knew how to make a reasonable cypher that no-one could
:> break.
:> Now that situation is no longer with us - our strongest publicly-known
:> cyphers appear to be very strong, and resist all publicly-known attacks.

: That was also true 10, 50, 200 years ago.

Certainly for a rather long time.
--
__________

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 21:51:54 GMT
From: "Douglas A. Gwyn" <DAGwyn@null.net>
Message-ID: <3B33BDFA.5B68F94@null.net>
References: <YFNY6.2059$Mf5.748008@news3.rdc1.on.home.com> <GFC9BF.Hvz@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 8

Tom St Denis wrote:
> I seriously doubt that. Shannon knew very little if anything
> about modern cryptanalysis techniques in 1949.

Apart from wondering why you were disputing the "shoulders of giants"
claim: Shannon knew more about cryptanalysis than you think.

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 18:04:16 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3b33882f.9254035@news.powersurfr.com>
References: <90C83AB47H110W296LC45WIN3030R@207.36.190.226>
Newsgroups: sci.crypt
Lines: 13

On 22 Jun 2001 11:50:42 GMT, david_a_scott@emailv.com (SCOTT19U.ZIP_GUY)
wrote, in part:

>That same chip cuts both ways. Who would want to not use one
>to break a cipher. All a chip does is compress time.

As David Kahn pointed out in one of the articles reprinted in his book
"Kahn on Codes", making a cipher many times more complicated makes it
many *more* times harder to solve, so advances in the speed of serial
computation _do_ have a one-sided payoff for the cryptographer.

John Savard
http://home.ecn.ab.ca/~jsavard/index.html

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 12:35:18 -0600
From: jgfunj@vgrknf.arg (wtshaw)
Message-ID: <jgfunj-2206011235180001@dial-246-034.itexas.net>
References: <90C7B8866H110W296LC45WIN3030R@207.36.190.226>
Newsgroups: sci.crypt
Lines: 23

In article <90C7B8866H110W296LC45WIN3030R@207.36.190.226>,
david_a_scott@emailv.com (SCOTT19U.ZIP_GUY) wrote:

> One great teacher I had said look at history. Only a fool thinks
> our frame of reference is different or that we are better. History
> has shown. Every generation seemed to think they were unbreakable
> and they were wrong. There is no reason to believe we are special
> and that we know they are unbreakable. Because if we do, in 50 years
> they will laugh at our stupidity. I am already laughing.
> since Shannon has taught some of the goals we should strive towards,
> and the fools say it's of no concern with modern ciphers. They are wrong,
> and quantum computers for one thing will prove that.

Shannon's work is limited without a view of some recent advances, but
many are limited also by neglecting the validity of what he said as it
applies still today. So, for Shannon, I am a critic but also a fan.
--
Everything about life is so simple...
Problems belong to those that think...
Just surrender to those responsible...
So you will not be called the weakest link.

Subject: Re: Known plaintext considered harmless
Date: 22 Jun 2001 00:28:05 GMT
From: djohn37050@aol.com (DJohn37050)
Message-ID: <20010621202805.13854.00000447@ng-bk1.aol.com>
References: <3b328729.6223104@news.io.com>
Newsgroups: sci.crypt
Lines: 4

Known plaintext *at the least* allows a dictionary to be built, which is
not a good thing. So it should not be considered harmless with zero
effect. It might have a negligible effect or it might be large, but it is
not zero.
Don Johnson

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 01:53:20 +0000 (UTC)
From: daw@mozart.cs.berkeley.edu (David Wagner)
Message-ID: <9gu8eg$2261$2@agate.berkeley.edu>
References: <20010621202805.13854.00000447@ng-bk1.aol.com>
Newsgroups: sci.crypt
Lines: 16

DJohn37050 wrote:
>Known plaintext *at the least* allows a dictionary to be built, which is
>not a good thing. So it should not be considered harmless with zero
>effect. It might have a negligible effect or it might be large but it is
>not zero.

Yes. Fortunately, we have proofs for our favorite modes of operation
that, so long as you don't try to get too close to the birthday bound,
the effect will be negligible. So this should not be a concern.

And this should not be surprising.
It is very rare in crypto to be able to achieve zero effect; usually we
have to be satisfied with giving the attacker a negligible advantage,
unless we are willing to use one-time pads and so forth. If you're
concerned about negligible advantages, then you shouldn't waste time
thinking about which mode of operation to use: you must not use any
computationally-secure cryptosystems, no matter what mode they're in.

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 08:31:54 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GFBp16.392@bath.ac.uk>
References: <3B32780F.EEA34717@null.net>
Newsgroups: sci.crypt
Lines: 55

Douglas A. Gwyn <DAGwyn@null.net> wrote:
: Tim Tyler wrote:
:> Douglas A. Gwyn <DAGwyn@null.net> wrote:

:> : I think the right way to present it is "Modern cryptosystems are
:> : specifically designed for resistance to known-plaintext attacks,
:> : among other things. Therefore, if you know that a modern
:> : cryptosystem will be used, you needn't be concerned about the
:> : possibility of a known-plaintext attack. [...]
:>
:> Compare with: "Modern windows are designed not to break. Therefore,
:> if you fit your shop with modern windows, you needn't be concerned
:> about the possibility of them breaking."

: Assuming that modern windows were indeed unbreakable.

This is exactly the assumption you are making about modern block
cyphers.

:> Alas, this leaves the shoplifters out of the picture.

: It also leaves purple unicorns out of the picture.

Shoplifters try to break windows. Window manufacturers try to make them
hard to break. Cryptanalysts try to break cyphers. Cryptographers try to
make them hard to break.

I really don't see where the purple unicorns enter into it - both
cryptanalysts and shoplifters *exist*. The idea that someone may be
trying to break your cypher is not necessarily a paranoid fantasy.

You might be able to argue that we see broken windows around, but not
broken cyphers.
I think we *do* see broken cyphers around - but anyway, there is an
important difference to note between the two systems.

When a shoplifter breaks a window, he grabs the loot and scarpers,
leaving broken glass everywhere.

When a cryptanalyst breaks a cypher, he can resist the temptation of
academic glory, and instead offer his services to those who would wish
to eavesdrop on the encrypted traffic. This can be far more lucrative.
Consequently he carefully cleans up after himself, and leaves as little
trace as possible to indicate that the cypher has been broken. Indeed,
great effort is generally taken with any intercepted traffic to ensure
this secret is kept.

If you don't see broken cyphers, don't conclude that cyphers don't get
broken. This is probably what you would observe anyway.
--
__________
 |im |yler  tt@iname.com  Home page: http://alife.co.uk/tim/

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 16:27:36 +0000 (UTC)
From: daw@mozart.cs.berkeley.edu (David Wagner)
Message-ID: <9gqito$96v$5@agate.berkeley.edu>
References: <20010620054004.24435.qmail@nym.alias.net>
Newsgroups: sci.crypt
Lines: 4

I agree. Far more systems have failed from too much complexity than have
failed because there was a practical cryptanalytic known-plaintext attack
on a well-chosen cipher. I'd gladly trade known plaintext for simplicity,
any day.

Subject: Re: Known plaintext considered harmless
Date: Wed, 20 Jun 2001 19:44:33 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF8uu9.CDv@bath.ac.uk>
References: <9gqito$96v$5@agate.berkeley.edu>
Newsgroups: sci.crypt
Lines: 11

David Wagner <daw@mozart.cs.berkeley.edu> wrote:

: I agree. Far more systems have failed from too much
: complexity than have failed because there was a practical
: cryptanalytic known-plaintext attack on a well-chosen cipher.

Indeed. However, cryptanalytic known-plaintext attacks are *not* the
only type of failure that eliminating known plaintext helps defend
against.
--
__________
 |im |yler  tt@iname.com  Home page: http://alife.co.uk/tim/

Subject: Re: Known plaintext considered harmless
Date: Thu, 21 Jun 2001 01:58:09 +0100
From: David Hopwood <david.hopwood@zetnet.co.uk>
Message-ID: <3B3146A1.20BCF560@zetnet.co.uk>
References: <GF8uu9.CDv@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 56

-----BEGIN PGP SIGNED MESSAGE-----

Tim Tyler wrote:
> David Wagner <daw@mozart.cs.berkeley.edu> wrote:
> : I agree. Far more systems have failed from too much
> : complexity than have failed because there was a practical
> : cryptanalytic known-plaintext attack on a well-chosen cipher.
>
> Indeed. However cryptanalytic known-plaintext attacks are *not* the only
> type of failure that eliminating known plaintext helps defend against.

Modern ciphers are almost always used a long way past the unicity
distance. So, if the keyspace is reduced enough (by whatever means) to
be searchable, then the cipher can be broken, period. Reducing the
amount of known plaintext by compression (application-specific or
otherwise) will not change this for practical, real-world protocols.

Your (Tim Tyler's) argument seems to be that there are a few situations
where the message might already be short enough that compression will
reduce the length to below the unicity distance. I think this will
happen in so few cases that it is not worth bothering with as a security
measure. Of course, there are other, non-security-related reasons for
using compression, which should be considered on their merits.

Also, excessive redundancy of formats is undesirable regardless of
whether they are encrypted or not. (Redundancy is not excessive if there
is a good reason for it, such as human readability, or provision for
future expansion.)

Re: the OP's argument - I generally agree that there is a great deal of
FUD and misunderstanding about the "dangers of known plaintext", and
that some protocols are unnecessarily bent out of shape because of that.
The worst cases are protocols that do not encrypt data that needs to be
encrypted, just because someone argued that it could provide known
plaintext.

- --
David Hopwood <david.hopwood@zetnet.co.uk>

Home page & PGP public key: http://www.users.zetnet.co.uk/hopwood/
RSA 2048-bit; fingerprint 71 8E A6 23 0E D3 4C E5 0F 69 8C D4 FA 66 15 01
Nothing in this message is intended to be legally binding. If I revoke a
public key but refuse to specify why, it is because the private key has
been seized under the Regulation of Investigatory Powers Act; see
www.fipr.org/rip

-----BEGIN PGP SIGNATURE-----
Version: 2.6.3i
Charset: noconv

iQEVAwUBOzFCWDkCAxeYt5gVAQFr8wf+McYB8NgCh+m/Y48GGvxMC+mOFPPE6csq
FBVKROSkUPTyF/KLXiuK6OHqyuvIka1rfqedb8JFxOdBDI9T2EwTRZMmRBZA9FSR
UNHceGMlNSq6O/R8bW09QaL0u0UwuKbZln4IkInMTJOd+GMkoCy3gBEpUKQnwkN1
a5qCEkQ2H1bTdIz1gaKvlIz+TOGEYwJDOR9VpxndQaUpGB1257j+rqVxCjx4cvHE
ufGH0Ulfk4m2rvodOQQp5EvOy18hNCaMWfZ2tqxggK8GfUm8wVnLaS+tIyGkr/pb
5G+flHBw1oMiismhe1jAQK6HUMqitq91enpE8GkGgpu1TlVS0HPCUA==
=Uo9f
-----END PGP SIGNATURE-----

Subject: Re: Known plaintext considered harmless
Date: Thu, 21 Jun 2001 09:08:27 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GF9w23.DJu@bath.ac.uk>
References: <3B3146A1.20BCF560@zetnet.co.uk>
Newsgroups: sci.crypt
Lines: 68

David Hopwood <david.hopwood@zetnet.co.uk> wrote:
: Tim Tyler wrote:
:> David Wagner <daw@mozart.cs.berkeley.edu> wrote:

:> : I agree. Far more systems have failed from too much
:> : complexity than have failed because there was a practical
:> : cryptanalytic known-plaintext attack on a well-chosen cipher.
:>
:> Indeed. However cryptanalytic known-plaintext attacks are *not* the only
:> type of failure that eliminating known plaintext helps defend against.

: Modern ciphers are almost always used a long way past the unicity distance.
: So, if the keyspace is reduced enough (by whatever means) to be searchable,
: then the cipher can be broken, period. [...]
It seems you go from "almost always" to complete certainty, without
showing your working.

Also note that the unicity distance for the system can be much larger
when compression has been employed.

: Your (Tim Tyler's) argument seems to be that there are a few situations
: where the message might already be short enough that compression will
: reduce the length to below the unicity distance. I think this will happen
: in so few cases that it is not worth bothering with as a security measure.
: Of course, there are other, non-security-related reasons for using
: compression, which should be considered on their merits. [...]

To summarise - compression has a number of benefits, which include:

* Improved information-theoretic security when message size is
  in the vicinity of the unicity distance;

* Fewer plaintext/cyphertext pairs given to the opponent (this can be
  achieved in other ways as well);

* Plaintext diffusion - this can increase the work factor significantly.
  E.g. say a message always ends "yours, Fred Bloggs."
  Before compression, an attacker could decrypt the last block and
  look for this text. After compression, he has to decrypt the entire
  message, decompress the entire message, and then apply his test.

* Messages are shorter, and can be transmitted faster, allowing less
  time to be spent broadcasting a signal, reducing opportunities for
  tracing.

* Detecting correct decrypts after decompression takes more
  sophisticated algorithms. Where before you could check for ASCII's
  missing bit, now every decrypt has that. Where before you could check
  for randomness, now every decrypt is highly non-random. This means
  more programming work for each message type, more computer work in
  testing correct decrypts, and more plaintext required to perform the
  tests in the first place.

* Rejecting plaintexts is /slightly/ harder work - because of the
  work in performing the additional unkeyed transform involved.
The argument that compression doesn't generally get messages down to the
unicity distance seems to assume that the unicity distance is small.

Recall that compression increases the unicity distance - and can do so
arbitrarily far. "Perfect compression" results in a unicity distance
which is infinite. In such a system, every cyphertext decrypts to a
plausible-looking plaintext. If this happens, brute-force search is not
a practical possibility, regardless of message size.

We may not yet be anywhere near perfect compressors for the domain of
English text, but in other domains, much better can be done - and
compressors are improving all the time.

The unicity distance is determined by the redundancy in the inputs to
the encryptor. After compression, this redundancy may be rather small -
making the unicity distance rather large.
--
__________
 |im |yler  tt@iname.com  Home page: http://alife.co.uk/tim/

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 03:03:08 +0100
From: David Hopwood <david.hopwood@zetnet.co.uk>
Message-ID: <3B32A75C.353CD92A@zetnet.co.uk>
References: <GF9w23.DJu@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 178

-----BEGIN PGP SIGNED MESSAGE-----

Tim Tyler wrote:
> David Hopwood <david.hopwood@zetnet.co.uk> wrote:
> : Tim Tyler wrote:
> :> David Wagner <daw@mozart.cs.berkeley.edu> wrote:
>
> :> : I agree. Far more systems have failed from too much
> :> : complexity than have failed because there was a practical
> :> : cryptanalytic known-plaintext attack on a well-chosen cipher.
> :>
> :> Indeed. However cryptanalytic known-plaintext attacks are *not* the only
> :> type of failure that eliminating known plaintext helps defend against.
>
> : Modern ciphers are almost always used a long way past the unicity distance.
> : So, if the keyspace is reduced enough (by whatever means) to be searchable,
> : then the cipher can be broken, period. [...]
>
> It seems you go from "almost always" to complete certainty, without
> showing your working.
Suppose the keyspace is searchable. If an attacker can break a
cryptosystem in almost all cases where that happens, it's more than
enough from the attacker's point of view. The aim of a cryptosystem is
to prevent attacks from being successful in more than a negligible
fraction of cases. Obviously there's a big difference between
"negligible fraction" and "almost all".

> Also note that the unicity distance for the system can be much larger when
> compression has been employed.

My assumption is that the compressed text still has a significant amount
of redundancy - say, at least 0.1 bits/bit. This estimate is very
generous to your side of the argument; in fact, I've never seen a
compressor do that well on real-world data. Given this assumption, the
unicity distance will be at most 10 * log2(size of remaining keyspace)
bits, and this is an upper bound on the amount of compressed text that
can be sent with information-theoretic security (in fact information
starts to be leaked before that bound).

I don't consider a factor of at most 10 to be "much" larger, and for
general-purpose compressors like DEFLATE, etc., the effect of
compression on unicity distance will be considerably less than that.

I don't care about artificial examples like sending meaningless random
data which is then never used, since no-one would want to do that in
practice.

> : Your (Tim Tyler's) argument seems to be that there are a few situations
> : where the message might already be short enough that compression will
> : reduce the length to below the unicity distance. I think this will happen
> : in so few cases that it is not worth bothering with as a security measure.
> : Of course, there are other, non-security-related reasons for using
> : compression, which should be considered on their merits. [...]
>
> To summarise - compression has a number of benefits, which include:
>
> * Improved information-theoretic security when message size is
>   in the vicinity of the unicity distance;

See above.
> * Fewer plaintext/cyphertext pairs given to the opponent (this can be
>   achieved in other ways as well);

This is not in dispute, but note that modern ciphers can reasonably be
expected to be computationally secure even when huge numbers of
plaintext/ciphertext pairs are available.

> * Plaintext diffusion - this can increase the work factor
>   significantly.

At best it increases the work for a key search proportionally to the
length of the message, but if you want to do this reliably then you
should use an AONT (all-or-nothing transform); compression won't
necessarily suffice, because it is not designed to provide diffusion.

> E.g. say a message always ends "yours, Fred Bloggs."
> Before compression, an attacker could decrypt the last block and
> look for this text. After compression, he has to decrypt the entire
> message, decompress the entire message, and then apply his test.

If "Fred Bloggs" did not occur earlier in the message, then for typical
LZ-based algorithms, no, the attacker would not necessarily have to
decrypt the entire message.

> * Messages are shorter, can be transmitted faster, allowing less time
>   to be spent broadcasting a signal, reducing opportunities for tracing.

They are shorter by a small factor. For most applications, tracing is
not an issue.

> * Detecting correct decrypts after decompression takes more sophisticated
>   algorithms.

Not really. If you want to claim that, you'll have to demonstrate it for
real data and real compression algorithms.

> Where before you could check for ASCII's missing bit, now
> every decrypt has that. Where before you could check for randomness,
> now every decrypt is highly non-random. This means more programming
> work for each message type, more computer work in testing correct
> decrypts, and more plaintext required to perform the tests in the first
> place.
No, more plaintext is probably not required if the beginning of the
plaintext is redundant (since most compression algorithms have the
property that decompressing a prefix of the output will give a prefix of
the input).

> * Rejecting plaintexts is /slightly/ harder work - because of the
>   work in performing the additional unkeyed transform involved.

All of the above points amount to not very much security benefit.

> The argument that compression doesn't generally get messages down to the
> unicity distance seems to assume that the unicity distance is small.

The unicity distance *is* small for short-key ciphers used on typical
data formats - often only a few hundred bits.

> Recall that compression increases the unicity distance - and can do so
> arbitrarily far. "Perfect compression" results in a unicity distance
> which is infinite.

Perfect compression does not and cannot exist in non-trivial cases.

> In such a system, every cyphertext decrypts to a
> plausible-looking plaintext. If this happens, brute force search is
> not a practical possibility, regardless of message size.
>
> We may not yet be anywhere near perfect compressors for the domain of
> English text,

... nor is there any reason to believe that problem is solvable. To even
begin to approach that goal, the compressor would need to include an
English dictionary and grammar model [*] (otherwise the attacker can
detect redundancy by checking spelling and grammar). That would make
such compression impractical for most protocols.

[*] Or, for internationalised applications, a dictionary and grammar
model for every language that might be used.

> but in other domains, much better can be done - and compressors are
> improving all the time.

Meaningful data will have some redundancy that is difficult to remove,
because it is too hard to model accurately. I suspect this problem is
generally as hard for other formats as it is for natural language text.
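[The prefix property referred to above can be checked against a stock
DEFLATE implementation. A minimal sketch using Python's zlib module; the
sample plaintext is invented for illustration:]

```python
import zlib

# A redundant plaintext whose beginning an attacker might hope to test.
plaintext = b"attack at dawn. the quick brown fox jumps over the lazy dog. " * 100
compressed = zlib.compress(plaintext)

# Feed only the first half of the compressed stream to the decompressor.
d = zlib.decompressobj()
recovered = d.decompress(compressed[: len(compressed) // 2])

# DEFLATE decoding is sequential: a prefix of the compressed output
# decodes to a prefix of the original input. So a guess about how the
# message *starts* can be tested without the whole message.
assert recovered and plaintext.startswith(recovered)
```

This supports Hopwood's point: with LZ/DEFLATE-style compression, the
beginning of the plaintext is still testable from the beginning of the
compressed stream.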
> The unicity distance is determined by the redundancy in the inputs to
> the encryptor. After compression, this redundancy may be rather small -
> making the unicity distance rather large.

You keep repeating this, without any evidence that the unicity distance
would ever be large enough to provide a significant security benefit in
practical cases.

- --
David Hopwood <david.hopwood@zetnet.co.uk>

Home page & PGP public key: http://www.users.zetnet.co.uk/hopwood/
RSA 2048-bit; fingerprint 71 8E A6 23 0E D3 4C E5 0F 69 8C D4 FA 66 15 01
Nothing in this message is intended to be legally binding. If I revoke a
public key but refuse to specify why, it is because the private key has
been seized under the Regulation of Investigatory Powers Act; see
www.fipr.org/rip

-----BEGIN PGP SIGNATURE-----
Version: 2.6.3i
Charset: noconv

iQEVAwUBOzKnDDkCAxeYt5gVAQHy2AgAtrMnuysI34FkZ+Y+R56p6QFIAMQ2x9Il
LqWfMGwmsxdVuHAIALjb9NQCCDXKYY6pjoP3Bep5yqPFEI9t/zBmD0oo2i6dnqYJ
AOmzqAI26i2iSaDuFjeU+63On2LYwAB+czKrmJ9HICnNdC2SfyBad3pBQLFvUDhU
U67jvcie4DU8H0qa4/Z5xOGvrs1wxHrixfQCeHxlxGwP91WrT2IbMatUBgE7ZxbR
fMC4725ec5j/R1hrizxJQ5rNi4iPkUdx46qtN6gPzHUx6GXkSZPgb52el9iN7PyF
UEjXpxZ1FABDNOJWSnHyUmxFSrLBcpl9nSnbbp9KNLGSq54w2y1pNw==
=bP22
-----END PGP SIGNATURE-----

Subject: Re: Known plaintext considered harmless
Date: 22 Jun 2001 03:55:47 GMT
From: david_a_scott@emailv.com (SCOTT19U.ZIP_GUY)
Message-ID: <90C7D308BH110W296LC45WIN3030R@207.36.190.226>
References: <3B32A75C.353CD92A@zetnet.co.uk>
Newsgroups: sci.crypt
Lines: 49

david.hopwood@zetnet.co.uk (David Hopwood) wrote in
<3B32A75C.353CD92A@zetnet.co.uk>:

>> Also note that the unicity distance for the system can be much larger
>> when compression has been employed.
>
>My assumption is that the compressed text still has a significant amount
>of redundancy - say, at least 0.1 bits/bit. This estimate is very
>generous to your side of the argument; in fact, I've never seen a
>compressor do that well on real-world data.
>Given this assumption, the
>unicity distance will be at most 10 * log2(size of remaining keyspace)
>bits, and this is an upper bound on the amount of compressed text that
>can be sent with information-theoretic security (in fact information
>starts to be leaked before that bound).
>
>I don't consider a factor of at most 10 to be "much" larger, and for
>general-purpose compressors like DEFLATE, etc., the effect of
>compression on unicity distance will be considerably less than that. I
>don't care about artificial examples like sending meaningless random
>data which is then never used, since no-one would want to do that in
>practice.

Actually, if the unicity distance increased by a factor of 10, then the
amount of plaintext needed goes up much higher, since you really are
talking about the amount of *ciphertext*: the plaintext length would be
that factor of 10 times however much the text expands when you
decompress it.

Secondly, the rest of the stuff is talking about poor compressors that
start slowly. Obviously you've never heard of BWT compressors. I can
give you half the file and you won't know what it means, since the
actual message is diffused through the whole thing during the
compression process.

David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSION"
http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**TO EMAIL ME drop the roman "five" **
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be drugged.
As a famous person once said, "any cryptographic
system is only as strong as its weakest link".

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 09:16:00 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GFBr2o.63B@bath.ac.uk>
References: <3B32A75C.353CD92A@zetnet.co.uk>
Newsgroups: sci.crypt
Lines: 63

David Hopwood <david.hopwood@zetnet.co.uk> wrote:
: Tim Tyler wrote:
:> David Hopwood <david.hopwood@zetnet.co.uk> wrote:
:> : Tim Tyler wrote:
:> :> David Wagner <daw@mozart.cs.berkeley.edu> wrote:

:> : Modern ciphers are almost always used a long way past the unicity
:> : distance. So, if the keyspace is reduced enough (by whatever means)
:> : to be searchable, then the cipher can be broken, period. [...]

[...]

:> Also note that the unicity distance for the system can be much larger
:> when compression has been employed.

: My assumption is that the compressed text still has a significant amount
: of redundancy - say, at least 0.1 bits/bit. This estimate is very
: generous to your side of the argument; in fact, I've never seen a
: compressor do that well on real-world data. Given this assumption, the
: unicity distance will be at most 10 * log2(size of remaining keyspace)
: bits, and this is an upper bound on the amount of compressed text that
: can be sent with information-theoretic security (in fact information
: starts to be leaked before that bound).

: I don't consider a factor of at most 10 to be "much" larger, and for
: general-purpose compressors like DEFLATE, etc., the effect of
: compression on unicity distance will be considerably less than that.

David Scott's point - that the unicity distance relates to the
cyphertext, and that the length of plaintext that can be sent increases
faster than the unicity distance because the cyphertext is subsequently
decompressed, and expands to form the plaintext - is a good one.

You don't think this size is significant? It will be in some
applications. Consider email encryption.
Say you use Rijndael with a 256-bit key. That gives a unicity distance
of some 37 characters with English text.

English text has 84% redundancy. Taking your (admittedly optimistic)
example, say that is reduced to 10%. That gives a unicity distance of
320 characters of *cyphertext*.

Decompress those 320 characters with your (admittedly rather good)
compressor and the message will be quite significant - over 2.6K on
average - if my sums based on your entropy figures are correct.

This is not a fantastically and unrealistically short email message.
Sure, some messages will be longer - but increasing the messages that
could be sent with some information-theoretic security from 37
characters to a Kb or so would be something worthwhile.

In practice, the unicity distance represents the point where there's a
single unique decrypt. Since a human being is likely to be a better
plausible-plaintext detector than any machine, you ideally want to get
things to a position where it is impractical for a human to search
through the decrypts - by making sure that there are thousands - or tens
of thousands - of them. This is likely to involve the use of larger
keys.

[snip detailed objections]
--
__________
 |im |yler  tt@iname.com  Home page: http://alife.co.uk/tim/

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 16:39:35 -0700
From: "Joseph Ashwood" <ashwood@msn.com>
Message-ID: <#YoYOU3#AHA.292@cpmsnbbsa07>
References: <GF9w23.DJu@bath.ac.uk>
Newsgroups: sci.crypt
Lines: 33

"Tim Tyler" <tt@iname.com> wrote in message news:GF9w23.DJu@bath.ac.uk...
> David Hopwood <david.hopwood@zetnet.co.uk> wrote:
> : Modern ciphers are almost always used a long way past the unicity
> : distance. So, if the keyspace is reduced enough (by whatever means)
> : to be searchable, then the cipher can be broken, period. [...]
>
> It seems you go from "almost always" to complete certainty, without
> showing your working.
> Also note that the unicity distance for the
> system can be much larger when compression has been employed.

Well, I will use a measure of common length that may or may not have
much bearing on the real situation. Looking at this newsgroup, I see
messages that are certainly averaging over 100 bytes each for the data
that would be encrypted. For the sake of simplicity let's assume 128
bytes per message; that's 8 blocks of Rijndael, and the unicity distance
for Rijndael encrypting English is? Offhand I don't remember it exactly,
but it is less than 8 blocks. This is based on an extremely small
estimate of our verbosity; this message alone has several times that
amount.

I agree with Hopwood in saying that the likelihood of the compressed
value being below the unicity distance will be extremely rare. You can
work it out for yourself: take what you feel is a rough estimate of the
length of messages, what constitutes a short message, and what
constitutes a long one, determine the unicity distance and see how many
of them will fall below it. I think you'll find that an amazingly small
percentage of them actually will be small enough in most circumstances.

Joe

Subject: Re: Known plaintext considered harmless
Date: Sat, 23 Jun 2001 02:03:22 GMT
From: Tim Tyler <tt@iname.com>
Message-ID: <GFD1pM.BsM@bath.ac.uk>
References: <#YoYOU3#AHA.292@cpmsnbbsa07>
Newsgroups: sci.crypt
Lines: 53

Joseph Ashwood <ashwood@msn.com> wrote:
: "Tim Tyler" <tt@iname.com> wrote in message news:GF9w23.DJu@bath.ac.uk...

:> note that the unicity distance for the system can be much larger when
:> compression has been employed.

: Well I will use a measure of common length that may or may not have much
: bearing on the real situation, looking at this newsgroup I see messages
: that are certainly averaging over 100 bytes each for the data that would
: be encrypted.
: For the sake of simplicity let's assume 128 bytes per message,
: that's 8 blocks of Rijndael, and the unicity distance for Rijndael
: encrypting English is? Offhand I don't remember it exactly, but it is
: less than 8 blocks. This is based on an extremely small estimate of our
: verbosity, this message alone has several times that amount.

: I agree with Hopwood in saying that the likelihood of the compressed
: value being below the unicity distance will be extremely rare, you can
: work it out for yourself, take what you feel is a rough estimate of the
: length of messages, what constitutes a short message, and what
: constitutes a long one, determine the unicity distance and see how many
: of them will fall below it. I think you'll find that an amazingly small
: percentage of them actually will be small enough in most circumstances.

I gave concrete estimates in a recent post in reply to David Hopwood:

Rijndael with a 128-bit key gives a unicity distance of some 37
characters with English text.

English text has approximately 84% redundancy. It was hypothesized by
David Hopwood - for the sake of argument - that compression might be
able to reduce this figure to 10%.

That gives a unicity distance of 320 characters of *cyphertext*.

Decompress those 320 characters with the compressor, and the message
will be quite significant - over 2.6K on average, I believe.

That is not an unrealistically short email message.

Two factors combine to produce this length:

Firstly, compression increases the unicity distance. The better the
compression, the greater the increase - with *no* theoretical upper
limit to what increase is possible.

Secondly, the unicity distance is a measure of cyphertext. The
cyphertext is then decompressed into the plaintext, resulting in another
increase in length.

The combination of these factors allows the unicity distance to
represent surprisingly long plaintext messages.
-- __________ |im |yler tt@iname.com Home page: http://alife.co.uk/tim/

Subject: Re: Known plaintext considered harmless Date: Tue, 26 Jun 2001 21:45:46 -0700 From: Bryan Olson <nospam@nospam.net> Message-ID: <3B3964FA.71FFB460@nospam.net> References: <GFD1pM.BsM@bath.ac.uk> Newsgroups: sci.crypt Lines: 33

Tim Tyler wrote: > I gave concrete estimates in a recent post in reply to David Hopwood: > > Rijndael with a 128-bit key gives a unicity distance of some 37 characters > with English text. > > English text has approximately 84% redundancy. It was hypothesized by > David Hopwood - for the sake of argument - that compression might be able > to reduce this figure to 10%. > > That gives a unicity distance of 320 characters of *cyphertext*. No it doesn't. I think in your previous post you assumed a 256-bit key. > Decompress those 320 characters with the compressor, and the > message will be quite significant - over 2.6K on average, I believe. > > That is not an unrealistically short email message. But note that you still have no realistic case. First, for this to make a difference you have to assume you can break the cipher. Second, I know of no users of complexity-based ciphers who use independent keys for each 2.6K of data. They may use a master key, but of course in that case we have to book redundancy against the master key too. In fact the master key is commonly a public/private key pair, with a unicity distance of zero. --Bryan

Subject: Re: Known plaintext considered harmless Date: Wed, 27 Jun 2001 11:12:55 GMT From: Tim Tyler <tt@iname.com> Message-ID: <GFL5tJ.3t0@bath.ac.uk> References: <3B3964FA.71FFB460@nospam.net> Newsgroups: sci.crypt Lines: 56

Bryan Olson <nospam@nospam.net> wrote: : Tim Tyler wrote: :> I gave concrete estimates in a recent post in reply to David Hopwood: :> :> Rijndael with a 128-bit key gives a unicity distance of some 37 characters :> with English text. :> :> English text has approximately 84% redundancy. 
It was hypothesized by :> David Hopwood - for the sake of argument - that compression might be able :> to reduce this figure to 10%. :> :> That gives a unicity distance of 320 characters of *cyphertext*. : No it doesn't. I think in your previous post you assumed a : 256-bit key. That's correct. Unicity distance = key size / redundancy of input. 256/0.1 = 2560 bits = 320 bytes. I think you refer to my clerical error, above: "Rijndael with a 128-bit key gives a unicity distance of some 37 characters with English text." That should have said "256-bit key". Apologies for any confusion. :> Decompress those 320 characters with the compressor, and the :> message will be quite significant - over 2.6K on average, I believe. :> :> That is not an unrealistically short email message. : But note that you still have no realistic case. First, for this : to make a difference you have to assume you can break the cipher. For the unicity distance to translate into security problems, yes. : Second, I know of no users of complexity-based ciphers who use : independent keys for each 2.6K of data. [...] You don't need to. All you need to know about is a system where variable quantities of information are encrypted. Email encryption between two parties with a shared secret key per day - or per-message - seems quite sufficient to me. : In fact the master key is commonly a public/private : key pair, with a unicity distance of zero. That's true. In such a model, there's no information-theoretic security. The model is still applicable if there's an attack on the symmetric cypher, and not one on the asymmetric one. 
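[Editor's note: the arithmetic in this exchange can be checked directly. A minimal sketch of Shannon's unicity-distance formula U = H(K)/D, assuming 8-bit characters; the function name is the editor's own, not anything from the thread:]

```python
def unicity_distance_chars(key_bits, redundancy, bits_per_char=8.0):
    # Shannon's estimate: U = H(K) / D, where D is the redundancy of
    # the plaintext source, expressed here in bits per character.
    return key_bits / (redundancy * bits_per_char)

# 256-bit key, ordinary English at ~84% redundancy:
english = unicity_distance_chars(256, 0.84)     # ~38 characters
# Same key, input compressed down to ~10% redundancy:
compressed = unicity_distance_chars(256, 0.10)  # 320 characters of cyphertext
print(round(english), compressed)
```

This reproduces both figures in the thread: roughly 37-38 characters for raw English, and exactly 320 bytes of ciphertext once the input's redundancy is assumed down at 10%.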
-- __________ |im |yler tt@iname.com Home page: http://alife.co.uk/tim/

Subject: Re: Known plaintext considered harmless Date: Sat, 23 Jun 2001 05:10:04 -0600 From: jgfunj@vgrknf.arg (wtshaw) Message-ID: <jgfunj-2306010510040001@dial-243-157.itexas.net> References: <#YoYOU3#AHA.292@cpmsnbbsa07> Newsgroups: sci.crypt Lines: 20

In article <#YoYOU3#AHA.292@cpmsnbbsa07>, "Joseph Ashwood" <ashwood@msn.com> wrote: > > I agree with Hopwood in saying that the likelihood of the compressed value > being below the unicity distance will be extremely rare, you can work it out > for yourself, take what you feel is a rough estimate of the length of > messages, what constitutes a short message, and what constitutes a long one, > determine the unicity distance and see how many of them will fall below it. > I think you'll find that an amazingly small percentage of them actually will > be small enough in most circumstances. > Joe The obvious solution is to use a cipher with a native estimated unicity distance that is much longer than any single message or pile of messages. Doing that means that my general reservation about ciphers with low solvable lengths is respectfully answered. Otherwise, we are destined to forever be thrashing about in the toy cipher realm. -- If you cut the grass too short, you never see wildflowers.

Subject: Re: Known plaintext considered harmless Date: Sun, 24 Jun 2001 17:46:56 GMT From: Tim Tyler <tt@iname.com> Message-ID: <GFG428.H6J@bath.ac.uk> References: <jgfunj-2306010510040001@dial-243-157.itexas.net> Newsgroups: sci.crypt Lines: 18

wtshaw <jgfunj@vgrknf.arg> wrote: : The obvious solution is to use a cipher with a native estimated unicity : distance that is much longer than any single message or pile of messages. Doing : that means that my general reservation about ciphers with low solvable : lengths is respectfully answered. Otherwise, we are destined to forever : be thrashing about in the toy cipher realm. 
The unicity distance is a function of the key size and the language input. The only things you can do are increase the key size or decrease the redundancy of the inputs (e.g. via compression). The properties of the cypher itself are not normally considered relevant - unless perhaps it makes exceptionally poor use of its key. -- __________ |im |yler tt@iname.com Home page: http://alife.co.uk/tim/

Subject: Re: Known plaintext considered harmless Date: Sun, 24 Jun 2001 17:24:22 -0600 From: jgfunj@vgrknf.arg (wtshaw) Message-ID: <jgfunj-2406011724230001@dial-245-138.itexas.net> References: <GFG428.H6J@bath.ac.uk> Newsgroups: sci.crypt Lines: 42

In article <GFG428.H6J@bath.ac.uk>, tt@iname.com wrote: > wtshaw <jgfunj@vgrknf.arg> wrote: > > : The obvious solution is to use a cipher with a native estimated unicity > : distance that is much longer than any single message or pile of messages. Doing > : that means that my general reservation about ciphers with low solvable > : lengths is respectfully answered. Otherwise, we are destined to forever > : be thrashing about in the toy cipher realm. > > The unicity distance is a function of the key size and the language input. > > The only things you can do are increase the key size or decrease the > redundancy of the inputs (e.g. via compression). > > The properties of the cypher itself are not normally considered relevant - > unless perhaps it makes exceptionally poor use of its key. > -- It is obvious that you really don't understand and are bound by imaginary chains. You should learn the advantage of N-dimensional logic as pioneered by Einstein and Gamow. Linear and low dimensional systems by default tend to have very fragile and simplistic key structures. While verging on jumping into n-dimensional logic, the so-called crop of modern ciphers are always caught with a bare foot mired in the mud of archaic pathos. 
It is reasonable to expect as much, as unappreciated inspiration is most often seen as contemptible delusion when it eclipses the prevalent prejudice that tremendous cryptological strength is not easily available. Rest uneasy, such strength is revolutionary, and goes well beyond what Shannon envisioned. As for compression, chaining, fragile whole-file structures: these are useful tools but not shortcuts to the promised land of strong crypto. It's nice over here. I do not mean to diminish what you know but ask you to open your eyes lest you continue to ever learn to enjoy stumbling. -- If you cut the grass too short, you never see wildflowers.

Subject: Re: Known plaintext considered harmless Date: Tue, 26 Jun 2001 21:15:38 GMT From: Tim Tyler <tt@iname.com> Message-ID: <GFK322.Hvn@bath.ac.uk> References: <jgfunj-2406011724230001@dial-245-138.itexas.net> Newsgroups: sci.crypt Lines: 25

wtshaw <jgfunj@vgrknf.arg> wrote: : In article <GFG428.H6J@bath.ac.uk>, tt@iname.com wrote: :> wtshaw <jgfunj@vgrknf.arg> wrote: :> : The obvious solution is to use a cipher with a native estimated unicity :> : distance that is much longer than any single message or pile of messages. Doing :> : that means that my general reservation about ciphers with low solvable :> : lengths is respectfully answered. Otherwise, we are destined to forever :> : be thrashing about in the toy cipher realm. :> :> The unicity distance is a function of the key size and the language input. :> :> The only things you can do are increase the key size or decrease the :> redundancy of the inputs (e.g. via compression). :> :> The properties of the cypher itself are not normally considered relevant - :> unless perhaps it makes exceptionally poor use of its key. : It is obvious that you really don't understand and are bound by imaginary : chains. [...] You imply I said something wrong. Explain yourself. 
-- __________ |im |yler tt@iname.com Home page: http://alife.co.uk/tim/

Subject: Re: Known plaintext considered harmless Date: 20 Jun 2001 18:20:05 -0000 From: lcs Mixmaster Remailer <mix@anon.lcs.mit.edu> Message-ID: <20010620182005.18782.qmail@nym.alias.net> References: <20010620054004.24435.qmail@nym.alias.net> Newsgroups: sci.crypt Lines: 52

Douglas A. Gwyn writes: > I think the right way to present it is "Modern cryptosystems are > specifically designed for resistance to known-plaintext attacks, > among other things. Therefore, if you know that a modern > cryptosystem will be used, you needn't be concerned about the > possibility of a known-plaintext attack. An excellent summary. > However, if you can't > be sure what kind of encryption will be applied, then all bets > are off, and it *might* be useful to vary the plaintext somewhat; Here though you start to get ambiguous. How is a crypto-ignorant protocol or data structure designer to know what to do with advice like this? It "might" be useful to vary the plaintext "somewhat"? He has to make concrete decisions. Should he specify that unused fields are zeros, which makes the protocol cleaner and can detect errors and version mismatch, or should he specify that they are filled with random values, to prevent known plaintext? Which is it? He's got to make the decision. Tell us, right now. And if he does make changes like this, how much has he helped himself? Should he go over his design to look for other places where there might be known plaintext? Suppose he notices that he has a bunch of data that is easily guessed. Should he rearrange his data to intersperse this relatively fixed data with more variable data, in the hopes that any single encryption block will have a mix of the two types? Yes or no? This is the kind of ad hoc, seat of the pants, ignorance-driven manipulation which results from ambiguous advice like you have above. It doesn't buy true security. 
> however, there are many other protective measures that would be > more useful (like wrapping the plaintext in a modern encryption > that you have control over, before submitting it to be encrypted > with the unknown system)." Yes, this is much better advice. The point is to use good crypto. That should be the beginning and the end of the advice from the crypto community. It is our take-home message. We don't say, use good crypto, but if you can't, maybe you ought to eliminate obvious cases of known plaintext, but well, don't knock yourself out doing it. That's not professional. Our advice as professional cryptographers MUST BE to use professional-quality crypto. If someone says they can't, we say, then you don't have security. We don't sell them false hopes and weak obfuscation. That is unprofessional. In fact it borders on fraud. As crypto experts, we need to give expert advice. Use good crypto. That's it. Subject: Re: Known plaintext considered harmless Date: Wed, 20 Jun 2001 21:14:50 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3b3111ab.23137752@news.powersurfr.com> References: <20010620182005.18782.qmail@nym.alias.net> Newsgroups: sci.crypt Lines: 13 On 20 Jun 2001 18:20:05 -0000, lcs Mixmaster Remailer <mix@anon.lcs.mit.edu> wrote, in part: >As crypto experts, we need to give expert advice. Use good crypto. >That's it. This is all very sensible, but do we know that even the good crypto really is good? And shouldn't we encourage our clients to develop critical thinking skills of their own, instead of simply uncritically accepting "expert" advice? 
John Savard http://home.ecn.ab.ca/~jsavard/index.html

Subject: Re: Known plaintext considered harmless Date: Wed, 20 Jun 2001 21:52:21 +0000 (UTC) From: daw@mozart.cs.berkeley.edu (David Wagner) Message-ID: <9gr5ul$k2b$3@agate.berkeley.edu> References: <3b3111ab.23137752@news.powersurfr.com> Newsgroups: sci.crypt Lines: 13

John Savard wrote: >This is all very sensible, but do we know that even the good crypto >really is good? We don't. But we look at the most likely failures. Which of the following is more likely?
- There's some flaw, but the RFC is so durn complicated that no one ever notices it in time to prevent deployment of a broken system.
- There is a *practical* known-plaintext attack on AES or 3DES that causes your system to break if you encrypted uncompressed text but that isn't enough to break the system when you use compression.
I know which I'm putting my money on! (and it's not the latter)

Subject: Re: Known plaintext considered harmless Date: Wed, 20 Jun 2001 21:52:44 +0000 (UTC) From: daw@mozart.cs.berkeley.edu (David Wagner) Message-ID: <9gr5vc$k2b$4@agate.berkeley.edu> References: <3b3111ab.23137752@news.powersurfr.com> Newsgroups: sci.crypt Lines: 6

John Savard wrote: >And shouldn't we encourage our clients to develop >critical thinking skills of their own, instead of simply uncritically >accepting "expert" advice? There are better ways to accomplish this than to suggest using ECB mode.

Subject: Re: Known plaintext considered harmless Date: Wed, 20 Jun 2001 22:39:12 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3b3124dc.28051115@news.powersurfr.com> References: <9gr5vc$k2b$4@agate.berkeley.edu> Newsgroups: sci.crypt Lines: 20

On Wed, 20 Jun 2001 21:52:44 +0000 (UTC), daw@mozart.cs.berkeley.edu (David Wagner) wrote, in part: >John Savard wrote: >>And shouldn't we encourage our clients to develop >>critical thinking skills of their own, instead of simply uncritically >>accepting "expert" advice? 
>There are better ways to accomplish this than to suggest using ECB mode. That's certainly a good point. The 'conventional wisdom' is fundamentally sound; I'm not trying to dispute that. The anonymous poster here, though, is advocating it in such a broad, categorical fashion as to invite suspicions of trolling - and apparently there is other reason to suspect some sort of personal issue with Joseph Ashwood. John Savard http://home.ecn.ab.ca/~jsavard/index.html

Subject: Re: Known plaintext considered harmless Date: 20 Jun 2001 20:20:31 -0000 From: lcs Mixmaster Remailer <mix@anon.lcs.mit.edu> Message-ID: <20010620202031.15233.qmail@nym.alias.net> References: <20010620054004.24435.qmail@nym.alias.net> Newsgroups: sci.crypt Lines: 11

David Wagner writes: > I agree. Far more systems have failed from too much > complexity than have failed because there was a practical > cryptanalytic known-plaintext attack on a well-chosen cipher. > I'd gladly trade known plaintext for simplicity, any day. As usual, you are a voice of reason. I hope the well-earned reputation of David Wagner in this forum will carry weight even if the arguments of an anonymous voice do not.

Subject: Re: Known plaintext considered harmless Date: 21 Jun 2001 04:00:05 -0000 From: lcs Mixmaster Remailer <mix@anon.lcs.mit.edu> Message-ID: <20010621040005.6152.qmail@nym.alias.net> References: <20010620054004.24435.qmail@nym.alias.net> Newsgroups: sci.crypt Lines: 13

David Hopwood writes: > Re: the OP's argument - I generally agree that there is a great deal of FUD > and misunderstanding about the "dangers of known plaintext", and that some > protocols are unnecessarily bent out of shape because of that. The worst > cases are protocols that do not encrypt data that needs to be encrypted, > just because someone argued that it could provide known plaintext. Excellent point! 
Wasn't it claimed that the early versions of PGP failed to encrypt the length fields of the private key numeric values because of this reason? They were thought to be too guessable and would provide known plaintext. Apparently the new key formats do have them encrypted, which is surely superior. It's probably simpler as well. Subject: Re: Known plaintext considered harmless Date: Wed, 20 Jun 2001 11:22:42 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3b3082fa.1700603@news.powersurfr.com> References: <20010619052032.804.qmail@nym.alias.net> Newsgroups: sci.crypt Lines: 88 On 19 Jun 2001 05:20:32 -0000, lcs Mixmaster Remailer <mix@anon.lcs.mit.edu> wrote, in part: >Inexperienced users of crypto systems are often concerned about known >plaintext. They have heard that known plaintext can give a cryptanalyst >an entry point into breaking ciphers. Accordingly they fear known >plaintext and try to avoid it. True. >Sometimes they will even complicate their data structures or protocols >in order to avoid known plaintext. They will incorporate compression >for no good reason, or restructure headers, or make sure that fields >set aside for future expansion hold random values rather than zeros. >All this complicates their overall system design and introduces new >possibilities for errors. It certainly is important to remember that program bugs can result in exposing plaintext. >We need to communicate clearly that known plaintext is no longer an issue. >With modern ciphers and a reasonable chaining mode, you don't have >to worry about known plaintext. If your cipher is weak against known >plaintext, you should be using a different cipher. If you communicate _too_ clearly that "known plaintext is no longer an issue", though, people might forget the need for a "reasonable chaining mode". By noting that such a mode is still needed to accompany a "modern cipher", you are admitting that known plaintext _remains_ a consideration. 
Incidentally, these "reasonable chaining modes" include random initialization vectors. >When designing data which will be protected by a cipher, the only >consideration should be the needs to which the data will be put. >Design for clarity and convenience. >NO CONSIDERATION WHATSOEVER should be given to manipulating or >constraining the data structures in the hopes of making the encryption >stronger! >In your overall system, the cipher is there to do the job of protecting >the plaintext. The job of the plaintext is to represent whatever data >is being communicated. Don't be misled into blurring these boundaries. >Modern ciphers are fully capable of providing confidentiality with any >and every plaintext. Let the cipher do its job, don't try to "help" >it in the other parts of your design. Now, this may well be sound advice if one is designing a word processor or a paint program that happens to have an optional password protection mode. However, it is an unjustified assumption that every system which uses encryption falls into that paradigm. Sometimes the encryption _is_ the application. >Let us all agree that it is time to put concerns about known plaintext >behind us. Recommendations to avoid it are obsolete. People who are interacting with cryptography at a user level, and are using, as you point out, "modern ciphers and a reasonable chaining mode", can indeed follow that advice. People trying to determine just what constitutes a "reasonable chaining mode", or who are attempting to design a "modern cipher", however, can't afford to forget its dangers. In addition, it is simply easier to compress plaintext effectively if one has additional information about its expected structure. Also, if one is just using Rijndael or 3DES, one has a well-defined key size of 128 or 112 bits. Should we consider ourselves justified in feeling complete confidence in the security of our encryption, once we've settled on the "industry standard"? 
What about quantum computers, for example? Not all data merely needs to remain confidential for the next week, or the next month. Are you prepared to tell me how powerful, and how cheap, computers will be 100 years from now? Making conventional encryption absurdly strong, though, instead of merely as impressively strong as Rijndael and 3DES, is trivially easy. Thus, another way of putting this argument behind us is simply to do so, whether it is necessary or not. Because it is true that preventing plaintext from leaking, and performing key management properly, are the real problems that deserve time and attention. John Savard http://home.ecn.ab.ca/~jsavard/index.html

Subject: Re: Known plaintext considered harmless Date: 20 Jun 2001 18:11:40 GMT From: mdw@nsict.org (Mark Wooding) Message-ID: <slrn9j1pqs.jot.mdw@daryl.nsict.org> References: <3b3082fa.1700603@news.powersurfr.com> Newsgroups: sci.crypt Lines: 35

John Savard <jsavard@ecn.ab.SBLOK.ca.nowhere> wrote: > If you communicate _too_ clearly that "known plaintext is no longer an > issue", though, people might forget the need for a "reasonable > chaining mode". By noting that such a mode is still needed to > accompany a "modern cipher", you are admitting that known plaintext > _remains_ a consideration. I think that a block cipher alone doesn't actually constitute a 'modern cipher'. I think that name should only be applied to some transformation which can be claimed (under stated assumptions) to be IND-CPA. Hence, a construction based on a block cipher can only be considered a 'modern cipher' if used in an appropriate mode (e.g., CBC, CTR, OCB, whatever). > Incidentally, these "reasonable chaining modes" include random > initialization vectors. Or, in the case of CTR, a counter. This actually makes CTR mode more secure. (You use the current value of the counter as the IV for an encryption, and then increment it by the number of blocks you encrypted. 
Using a random IV adds an extra \mu^2 / 2^L term to the advantage of the adversary.) > However, it is an unjustified assumption that every system which uses > encryption falls into that paradigm. Sometimes the encryption _is_ the > application. Even so. I don't think that (say) a VPN program should try to hide the stereotyping of IP packets it's encrypting. It should strive to be NM-CCA rather than IND-CPA, though -- chosen ciphertext is very easy in such an application. (I changed my VPN program yesterday so that it had a chance of being NM-CCA.) -- [mdw]

Subject: Re: Known plaintext considered harmless Date: 21 Jun 2001 10:51:51 GMT From: mdw@nsict.org (Mark Wooding) Message-ID: <slrn9j3ke7.jot.mdw@daryl.nsict.org> References: <slrn9j1pqs.jot.mdw@daryl.nsict.org> Newsgroups: sci.crypt Lines: 8

Mark Wooding <mdw@nsict.org> wrote: > Using a random IV adds an extra \mu^2 / 2^L term to the advantage of > the adversary.) Sorry, q, not \mu. -- [mdw]

Subject: Re: Known plaintext considered harmless Date: Wed, 20 Jun 2001 17:26:00 +0100 From: newsgroups@furey.freeserve.co.unitedkingdom (Sean Furey) Message-ID: <slrn9j1k6q.a9.newsgroups@cafe.furey.dhs.org> References: <20010619052032.804.qmail@nym.alias.net> Newsgroups: sci.crypt Lines: 18

Hi! >Let us all agree that it is time to put concerns about known plaintext >behind us. Recommendations to avoid it are obsolete. As I see it (from a newbieish point of view) known plaintext is still a problem. Whilst modern encryption algorithms may prevent the key from being obtained with a known plaintext/ciphertext combination, it makes brute forcing the message easier. There have been discussions here recently about how you can tell if the plaintext you have obtained using a particular key is the correct plaintext, with the suggestion being to watch out for patterns. If you know the plaintext, or what format it will take, it is much easier to look out for these patterns. -- Sean Furey, a happy and satisfied Debian user. 
sean@furey.freeserve.co.uk Subject: Re: Known plaintext considered harmless Date: Thu, 21 Jun 2001 21:58:11 +0000 (UTC) From: daw@mozart.cs.berkeley.edu (David Wagner) Message-ID: <9gtqlj$1qmc$2@agate.berkeley.edu> References: <slrn9j1k6q.a9.newsgroups@cafe.furey.dhs.org> Newsgroups: sci.crypt Lines: 9 Sean Furey wrote: >As I see it (from a newbieish point of view) known plaintext is still >a problem. Whilst modern encryption algorithms may prevent the key >from being obtained with a known plaintext/ciphertext combination, it >makes brute forcing the message easier. All good modern algorithms make brute-force keysearch attacks totally infeasible, simply by making the key space large enough to resist such attacks. Subject: Re: Known plaintext considered harmless Date: Thu, 21 Jun 2001 22:51:03 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3b327a2b.2896163@news.io.com> References: <9gtqlj$1qmc$2@agate.berkeley.edu> Newsgroups: sci.crypt Lines: 35 On Thu, 21 Jun 2001 21:58:11 +0000 (UTC), in <9gtqlj$1qmc$2@agate.berkeley.edu>, in sci.crypt daw@mozart.cs.berkeley.edu (David Wagner) wrote: >Sean Furey wrote: >>As I see it (from a newbieish point of view) known plaintext is still >>a problem. Whilst modern encryption algorithms may prevent the key >>from being obtained with a known plaintext/ciphertext combination, it >>makes brute forcing the message easier. > >All good modern algorithms make brute-force keysearch attacks >totally infeasible, simply by making the key space large enough >to resist such attacks. Yes, but the argument goes way beyond brute force. Without "knowing something" about the plaintext, opponents simply have no way to know when they have the right key. It is the knowledge of the plaintext, or of some structure in the plaintext, plus the ciphertext, which allows someone to know when the correct key has been found *by* *any* *approach* *at* *all*. 
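[Editor's note: Ritter's point can be made concrete with a toy XOR cipher (the editor's own illustration, not from the thread). With a key as long as the message, every candidate plaintext is explained by *some* key, so the ciphertext alone identifies nothing; known plaintext is precisely what lets a key guess be confirmed or rejected:]

```python
import itertools

def xor_encrypt(key: bytes, msg: bytes) -> bytes:
    # Toy cipher: XOR the message with the (repeating) key.
    return bytes(m ^ k for m, k in zip(msg, itertools.cycle(key)))

# Ciphertext only, key as long as the message (a one-time pad):
# every plaintext of the right length is reachable under some key,
# so no key can be singled out from the ciphertext alone.
ct = bytes([0x5A, 0x13])
for guess in [b"hi", b"no", b"ok"]:
    key = bytes(c ^ p for c, p in zip(ct, guess))
    assert xor_encrypt(key, guess) == ct    # every guess "decrypts"

# With known plaintext, a key guess is instantly testable:
key, pt = b"K", b"attack at dawn"
ct = xor_encrypt(key, pt)
assert xor_encrypt(b"K", pt) == ct          # correct key confirmed
assert xor_encrypt(b"X", pt) != ct          # wrong key rejected
```

The one-time pad is the extreme case of Ritter's observation; real ciphers with short keys leak through plaintext redundancy instead, which is where the unicity-distance discussion above comes in.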
I think the reason we don't make a bigger deal out of known-plaintext is that, in practice, it is so difficult to prevent. But a stack of ciphers, by itself, does prevent known-plaintext exposure for the individual ciphers. Yes, known-plaintext may exist for the overall stack. However, the overall cipher has about three times the key bits of the individual ciphers, and is a far more complex transformation. The overall cipher is thus hardly the ideal target. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM Subject: Re: Known plaintext considered harmless Date: Fri, 22 Jun 2001 08:20:51 GMT From: Tim Tyler <tt@iname.com> Message-ID: <GFBoIr.2K0@bath.ac.uk> References: <9gtqlj$1qmc$2@agate.berkeley.edu> Newsgroups: sci.crypt Lines: 31 David Wagner <daw@mozart.cs.berkeley.edu> wrote: : Sean Furey wrote: :>As I see it (from a newbieish point of view) known plaintext is still :>a problem. Whilst modern encryption algorithms may prevent the key :>from being obtained with a known plaintext/ciphertext combination, it :>makes brute forcing the message easier. : All good modern algorithms make brute-force keysearch attacks : totally infeasible, simply by making the key space large enough : to resist such attacks. Which would be OK, *if* there were no other ways of getting information about which keys might have been used, besides via an examination of the cyphertext. There *are* other ways. They are /well known/. For example, exploiting a low-entropy key-generator. Such considerations effectively *reverse* the conclusion - from: "Known-plaintext is not a problem because brute-force is impractical" ...to... "Known-plaintext /can/ be a problem, because it makes rejecting keys practical - and this is useful, if the keyspace can be reduced to a searchable region by other means". 
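[Editor's note: Tyler's reversal can be sketched with a hypothetical flawed key generator (the editor's own construction). Brute force over the full keyspace is infeasible, but if a weak generator restricts keys to a searchable set, known plaintext supplies the acceptance test that completes the attack:]

```python
import itertools

def xor_encrypt(key: bytes, msg: bytes) -> bytes:
    # Toy cipher standing in for something modern.
    return bytes(m ^ k for m, k in zip(msg, itertools.cycle(key)))

# Hypothetical flaw: keys derived from a single byte of state, so only
# 256 distinct keys are ever generated -- a searchable keyspace region.
weak_keys = [bytes([seed]) * 16 for seed in range(256)]

secret_key = weak_keys[42]
known_pt = b"Subject: Re: "      # predictable header: known plaintext
ct = xor_encrypt(secret_key, known_pt)

# Known plaintext lets each candidate key be accepted or rejected:
recovered = [k for k in weak_keys if xor_encrypt(k, known_pt) == ct]
assert recovered == [secret_key]
```

Without the known plaintext there is nothing to test each candidate against; with it, the reduced keyspace falls immediately.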
-- __________ |im |yler tt@iname.com Home page: http://alife.co.uk/tim/

Subject: Re: Known plaintext considered harmless Date: Fri, 22 Jun 2001 13:02:35 -0700 From: "Joseph Ashwood" <ashwood@msn.com> Message-ID: <uqme4c1#AHA.261@cpmsnbbsa07> References: <20010619052032.804.qmail@nym.alias.net> Newsgroups: sci.crypt Lines: 158

I know I'm a bit behind on the conversation. We can take this argument (the original one that we can ignore known-plaintext) and show that in some nearly extreme cases ignoring it is a bad thing, it may not compromise the key, but it will at times allow other forms of attack. Let's consider a Source Code Control program, with a client Bob and a server S. They communicate using an encrypted pipe, the pipe is encrypted with RC4 and for the sake of argument they use a perfectly random key. Assume the attacker Eve also has an account on S, but just wants to cause problems for the legitimate Bob. To set up this ploy Eve first connects to the source control and performs the operations, and records the unencrypted results. Eve now knows within very small bounds what will be sent to Bob (quite often she will know in excess of 99% of the information). Eve creates a false stream that contains exactly what she wants it to, perhaps comments that seem to come from Bob that make explicit comments about Alice, since this is source code she puts it in comments in a few choice files that Bob will not be looking at, but are nearby to what he will be working with. Now Eve waits until Bob connects. When Bob connects Eve alters the stream replacing the files on Bob's hard disk with the custom alterations. Bob does his work, and come evening checks in his work (including Eve's masterpiece). From there it's a simple matter for Eve to checkout the most recent code, and inform her manager of Bob's explicit comments about Alice. 
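[Editor's note: the manipulation Eve performs here is the standard malleability of an unauthenticated XOR stream cipher. A minimal sketch, with a hash-based toy keystream standing in for RC4; note Eve needs only the predicted plaintext, never the key:]

```python
import hashlib
import itertools

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream generator standing in for RC4: SHA-256(key || counter).
    out = b""
    for i in itertools.count():
        if len(out) >= n:
            return out[:n]
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()

def stream_xor(key: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same XOR operation.
    return bytes(d ^ s for d, s in zip(data, keystream(key, len(data))))

key = b"perfectly random, never learned by Eve"
predicted = b"// reviewed, looks fine\n"     # plaintext Eve predicted
ct = stream_xor(key, predicted)

# Eve XORs her change directly into the ciphertext in transit:
forged = b"// Alice is incompetent\n"        # same length as predicted
tampered = bytes(c ^ p ^ f for c, p, f in zip(ct, predicted, forged))

assert stream_xor(key, tampered) == forged   # Bob decrypts Eve's text
```

This is why Ashwood's remedy below is a MAC (or randomness the attacker cannot predict), not a stronger cipher: the confidentiality of RC4 is not what failed.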
There will be a small amount of collateral damage to other files (from the 1% of the plaintext that Eve didn't know), but the result is that Bob will certainly be put on at least notice, and there is a distinct chance that he will be terminated and sued for sexual harassment (which it is very likely he will lose; after all he checked in the comments, which means he wrote them). This should be a rare situation, but having dealt with a company where I could do very nearly this exact thing (except the RC4 key was far from random) I am unsure of the magnitude of the exposure in systems. Just a tiny injection of randomness along with compression or a decent MAC would have dealt with the entire problem, instead they relied on (unkeyed) MD5, which offered no protection from this. The kind of attack mounted by Eve here would hopefully be extremely unlikely to be mounted, but could be extremely detrimental if it was. The point is that the consideration is not always whether or not the cipher is weak against X, but whether it can be exploited. [changing who I'm replying too, switching to the sub-thread on stack ciphers] While it is entirely possible to go round and round on the subject of stack ciphers there seems to be only a handful of primary arguments: 1) The likelihood of the composite being weaker than the components individually is extremely low 2) If our goal is proof, there is no proof that a stack of n ciphers is more secure than a single cipher from the stack so the indication would be to use a stack of stacks, repeat The likelihood of the composite being weaker than the components individually is extremely low, and we can use this to address some possibility of attack. That is true, but there are other requirements to make this clear. The first cipher has to be onto from the set of all strings of the length of the output to the set of all strings the length of the input, I am not sure if 1-1 is necessary because induced randomness may in fact be beneficial. 
Additionally the ciphers must be unrelated, so 3DES is basically a
worst-case scenario for this. And the keys must be independent. The
most difficult requirement to meet is that the ciphers be unrelated,
because we are dealing with permutations, and in some fashion all
permutations can be considered related. Generally what is required for
this is that the selected permutations have minimal overlap, or more
to the point, fewer inverse permutations and fewer partial matches.
The goal here is to prove that the statement

  there exist k1, k2, k3 such that for all x: g(f(x,k1),k2) = f(x,k3)

is false (I've omitted the sets). This goal is generally difficult to
prove, but the proof that DES is not a group is certainly among the
biggest boosts to this measure of security.

If our goal is proof, there is no proof that a stack of n ciphers is
more secure than a single cipher from the stack, so the indication
would be to use a stack of stacks, repeated infinitely. It certainly
sounds correct, and may even follow logically. However, I don't think
I've ever met a cryptanalyst who fully believed that proofs of
security were required. What we can say about the stack with regard to
this is that if the ciphers are unrelated and the keys are unrelated,
it is highly likely that a proof of security in one cipher will extend
across the stack; so using, for example, DFC in the middle of a stack
will very likely result in having complete immunity to differential
attacks across the entire stack (see the DFC paper for reference),
even though DFC has an attack against it. So there are proof
advantages to the stack concept, but there still exists no proof of
absolute security.

Now as to whether or not there are any truly secure ciphers, the jury
is still out. I personally believe that there are problems that are
difficult enough to be considered for cryptography; I don't know if we
currently use any of these problems, and I suspect that I will die
long before the answer is known.
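The closure statement above can be checked exhaustively for toy
ciphers. XOR is the degenerate case where the statement is true: for
f(x,k) = x XOR k, double encryption always equals single encryption
under a third key, so stacking this cipher with itself gains nothing.
(The proof that DES is not a group means the analogous collapse does
not happen for DES.) A brute-force check over 4-bit blocks, purely as
an illustration:

```python
BLOCK = 16  # 4-bit blocks, so exhaustive search is instant

def f(x: int, k: int) -> int:
    """The degenerate "cipher": plain XOR with the key."""
    return x ^ k

for k1 in range(BLOCK):
    for k2 in range(BLOCK):
        # Find every k3 with f(f(x,k1),k2) == f(x,k3) for all blocks x.
        matches = [k3 for k3 in range(BLOCK)
                   if all(f(f(x, k1), k2) == f(x, k3)
                          for x in range(BLOCK))]
        # The composite is always a member of the same family,
        # namely XOR with k1 ^ k2 -- the family forms a group.
        assert matches == [k1 ^ k2]
```

For a family where the statement is false, the `matches` list would be
empty for at least some key pairs, which is exactly what one wants
from the ciphers in a stack.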
For the time being I believe that there is some relatively minor
headway remaining against 3DES (I think the limit is about 5 more bits
at the most). Rijndael certainly has some attack against it, but it's
going to be a while before it falls below 100 bits of strength; I am
fairly confident that Rijndael is not a group, so 3AES will be a
reasonable construct, though I haven't yet wrapped my mind around it
enough to determine what kind of strength it will have.

For a stack, I believe there is reason to believe that a stack of
Rijndael, Blowfish, and Serpent will offer most everything, but won't
have much in the way of proofs. I chose these with some degree of
care. The choice of Rijndael is because it is fast and secure, and has
some other very nice properties. Blowfish also has some very nice
properties, and was selected in part because it doesn't have a 128-bit
block like the other two. Serpent was chosen because it will give a
strong anchor; all the attacks against it require such enormous
quantities of everything that it overcame my desire to have 3
different block sizes. I believe that as a triple these three will
show themselves to be quite good; they are likely to be orthogonal to
each other, and the presence of 2 different block sizes will certainly
make reducing this believed orthogonality more difficult. However, I
still can't prove anything about it.
                Joe

"lcs Mixmaster Remailer" <mix@anon.lcs.mit.edu> wrote in message
news:20010619052032.804.qmail@nym.alias.net...
> Inexperienced users of crypto systems are often concerned about
> known plaintext. They have heard that known plaintext can give a
> cryptanalyst an entry point into breaking ciphers. Accordingly they
> fear known plaintext and try to avoid it.
>
> Sometimes they will even complicate their data structures or
> protocols in order to avoid known plaintext.
> They will incorporate compression for no good reason, or restructure
> headers, or make sure that fields set aside for future expansion
> hold random values rather than zeros. All this complicates their
> overall system design and introduces new possibilities for errors.
>
> We need to communicate clearly that known plaintext is no longer an
> issue. With modern ciphers and a reasonable chaining mode, you don't
> have to worry about known plaintext. If your cipher is weak against
> known plaintext, you should be using a different cipher.
>
> When designing data which will be protected by a cipher, the only
> consideration should be the needs to which the data will be put.
> Design for clarity and convenience.
>
> NO CONSIDERATION WHATSOEVER should be given to manipulating or
> constraining the data structures in the hopes of making the
> encryption stronger!
>
> In your overall system, the cipher is there to do the job of
> protecting the plaintext. The job of the plaintext is to represent
> whatever data is being communicated. Don't be misled into blurring
> these boundaries. Modern ciphers are fully capable of providing
> confidentiality with any and every plaintext. Let the cipher do its
> job; don't try to "help" it in the other parts of your design.
>
> Let us all agree that it is time to put concerns about known
> plaintext behind us. Recommendations to avoid it are obsolete.

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 20:35:57 +0000 (UTC)
From: daw@mozart.cs.berkeley.edu (David Wagner)
Message-ID: <9h0a7d$2ucm$1@agate.berkeley.edu>
References: <uqme4c1#AHA.261@cpmsnbbsa07>
Newsgroups: sci.crypt
Lines: 27

Joseph Ashwood wrote:
>We can take this argument (the original one that we can ignore
>known-plaintext) and show that in some nearly extreme cases ignoring it is a
>bad thing, it may not compromise the key, but it will at times allow other
>forms of attack.
>
>Let's consider a Source Code Control program, with a client Bob and a server
>S. They communicate using an encrypted pipe, the pipe is encrypted with RC4
>and for the sake of argument they use a perfectly random key. Assume the
>attacker Eve also has an account on S, but just wants to cause problems for
>the legitimate Bob.

The problem here isn't the presence of known plaintext.  The problem
is the lack of message authentication.  The latter is easy to remedy.
The former isn't.

These types of examples are why I always recommend that anytime you use
encryption, you always use message authentication.  This design principle
is not new, and is fairly well-known by now among cryptographers.
The problem is just that many protocol designers don't seem to know it
as well as they should.
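The "always use message authentication" principle is commonly realized
as encrypt-then-MAC: compute a MAC over the ciphertext under a second,
independent key, and verify it before decrypting anything. A minimal
sketch, assuming a toy SHA-256 counter-mode keystream as a stand-in
for the cipher (the function names `seal`/`open_` are made up for the
example):

```python
import hmac
import hashlib
import os

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher (stand-in for RC4 etc.): SHA-256 counter mode."""
    ks, ctr = b"", 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(d ^ k for d, k in zip(data, ks))

def seal(pt: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    ct = xor_stream(pt, enc_key)
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag  # encrypt-then-MAC: the tag covers the ciphertext

def open_(blob: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    ct, tag = blob[:-32], blob[-32:]
    # Verify before decrypting; reject any tampering outright.
    good = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, good):
        raise ValueError("authentication failed")
    return xor_stream(ct, enc_key)

ek, mk = os.urandom(16), os.urandom(16)
blob = seal(b"checked in by Bob", ek, mk)
assert open_(blob, ek, mk) == b"checked in by Bob"

# Eve's single-bit flip from the earlier attack now fails the MAC check.
flipped = bytes([blob[0] ^ 1]) + blob[1:]
try:
    open_(flipped, ek, mk)
    raise AssertionError("tampering went undetected")
except ValueError:
    pass
```

The encryption and MAC keys must be independent; the unkeyed MD5
mentioned earlier fails precisely because anyone, including Eve, can
recompute an unkeyed digest over a modified message.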

You can find many examples in the literature of where failure to follow
this design principle leads to attacks: e.g., Jueneman, Matyas, and
Meyer's work on the subject, Stubblebine and Gligor's work on Kerberos,
Bellovin's work on IPSEC, Core SDI's work on SSH, Schneier and Mudge's
work on MS-PPTP, Borisov, Goldberg, and my work on 802.11 WEP, and so on.

Subject: Re: Known plaintext considered harmless
Date: Fri, 22 Jun 2001 22:03:12 +0100
From: David Hopwood <david.hopwood@zetnet.co.uk>
Message-ID: <3B33B290.5A75F244@zetnet.co.uk>
References: <uqme4c1#AHA.261@cpmsnbbsa07>
Newsgroups: sci.crypt
Lines: 41

-----BEGIN PGP SIGNED MESSAGE-----

Joseph Ashwood wrote:
> We can take this argument (the original one that we can ignore
> known-plaintext) and show that in some nearly extreme cases ignoring it
> is a bad thing, it may not compromise the key, but it will at times allow
> other forms of attack.
>
> Let's consider a Source Code Control program, with a client Bob and a
> server S. They communicate using an encrypted pipe, the pipe is encrypted
> with RC4 and for the sake of argument they use a perfectly random key.

Without authentication, that's an obviously flawed design. The reason why it
is flawed doesn't have much to do with known plaintext; nor will it be fixed
by any of the practices that the anonymous poster was complaining about, or
by compression.

- --
David Hopwood <david.hopwood@zetnet.co.uk>

RSA 2048-bit; fingerprint 71 8E A6 23 0E D3 4C E5  0F 69 8C D4 FA 66 15 01
Nothing in this message is intended to be legally binding. If I revoke a
public key but refuse to specify why, it is because the private key has been
seized under the Regulation of Investigatory Powers Act; see www.fipr.org/rip

-----BEGIN PGP SIGNATURE-----
Version: 2.6.3i
Charset: noconv

iQEVAwUBOzOybzkCAxeYt5gVAQGv6gf+O1HbgkOkbHMeupgTcDrFYLfxTE1FNXgd
RQzNWljDarhk7jUxx3acgetvxW/Evlt/6QcougB7bLMqqhQi7GJKycmHxS0Mkucq
UsUHol0qtSorNr7jJ82Wo76ecZiSTGdoU6GB7u2hesbBKSQsmYYrx5eFRidrEcyX
T7RG2mhXrLt9nGto/DwzzqQxeXFMCPtsMTJD30P5V9nrAOIgpBWRRihVgjkdPFNj
JOKDnYzaw7i09FosNglkkWVN5IxKDqgKzwxrdwxm9hkHIYc8WtI/nMAjTt53Vl/q
wdNH59PfZ1xabT/TkQrPpEISW14ZaQwVGsNb2voomGJPFniQu/JCjw==
=RkuZ
-----END PGP SIGNATURE-----


Terry Ritter, his current address, and his top page.

Last updated: 2001-07-07