From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt,sci.math
Subject: Re: wanted: pascal source of strong RNG
Date: Fri, 12 Dec 1997 06:37:15 GMT
Lines: 35
Message-ID: <3490db92.9825304@news.io.com>
References: <6667n7$bfl$1@helga.keba.co.at> <348ED5B7.F6FE4A79@stud.uni-muenchen.de> <348ee64e.21992473@nntp.ix.netcom.com> <snrvhww88iu.fsf@sable.ox.ac.uk>

On 11 Dec 1997 10:43:37 +0000, in <snrvhww88iu.fsf@sable.ox.ac.uk> in sci.crypt Paul Leyland <pcl@sable.ox.ac.uk> wrote:

>[...]
>Also for most purposes, running a reasonably strong
>block cipher (such as or IDEA or Blowfish) in counter mode would be
>sufficient.

While I don't want to make too much of this, I would like to repeat my long-standing position that using *any* real cipher or hash in "counter mode" is disturbing and worrisome:

The problem is that a binary counter provides almost *ideal* statistics for use in attacking a cipher or hash: the lsb always changes; by selecting alternate states one can collect ciphertexts in which one bit is in a known state; the next bit up changes at half the rate, etc., etc. While this is of course no problem for the conceptual *ideal* cipher or hash, the use of a binary counter in practice demands more perfection from exactly those bit positions which are least likely to provide it: those near the edge of the computation.

There are alternatives: Use a polynomial counter, a fast preliminary block cipher, or a randomizing mode like CBC. None of these have to be secure (certainly a binary counter is not secure). The main thing is to get about half of the bits changing about half of the time, so The Opponent cannot exploit any practical statistical imperfections in the first few bit positions of a real cipher or hash.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
The Crypto Glossary: http://www.io.com/~ritter/GLOSSARY.HTM
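The statistics described in the post above are easy to see numerically. The C sketch below compares how many bits change per step for a plain binary counter and for a simple polynomial counter in the LFSR sense; the 16-bit Galois tap mask 0xB400 is assumed here as one commonly cited maximal-length polynomial, and any maximal-length polynomial would serve. The binary counter averages about two changed bits per step, concentrated at the low end, while successive LFSR states differ in roughly half of their bits.

#include <stdio.h>

/* Compare per-step bit changes: binary counter vs. polynomial (LFSR)
   counter.  The Galois tap mask 0xB400 (x^16+x^14+x^13+x^11+1) is
   assumed maximal-length; the step statistics, not this particular
   polynomial, are the point. */

static unsigned lfsr_step(unsigned s)          /* 16-bit Galois LFSR step */
{
    unsigned lsb = s & 1u;
    s >>= 1;
    if (lsb) s ^= 0xB400u;
    return s & 0xFFFFu;
}

static int hamming16(unsigned a, unsigned b)   /* bits that differ */
{
    unsigned d = (a ^ b) & 0xFFFFu;
    int n = 0;
    while (d) { n += (int)(d & 1u); d >>= 1; }
    return n;
}

int main(void)
{
    unsigned ctr = 0, lfsr = 1;                /* LFSR state must be nonzero */
    double csum = 0, lsum = 0;
    int i, steps = 10000;

    for (i = 0; i < steps; i++) {
        unsigned c2 = (ctr + 1) & 0xFFFFu;
        unsigned l2 = lfsr_step(lfsr);
        csum += hamming16(ctr, c2);
        lsum += hamming16(lfsr, l2);
        ctr = c2; lfsr = l2;
    }
    printf("average bits changed per step: counter %.2f, lfsr %.2f\n",
           csum / steps, lsum / steps);        /* roughly 2 vs. roughly 8 */
    return 0;
}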
From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Re: ***Early RSA by CESG (1970)***
Date: Fri, 19 Dec 1997 23:22:45 GMT
Lines: 58
Message-ID: <349b0197.14341816@news.io.com>
References: <6781m8$m5h$1@pheidippides.axion.bt.co.uk> <67en9m$9kq$1@nntp.ucs.ubc.ca>

On 19 Dec 1997 21:00:38 GMT, in <67en9m$9kq$1@nntp.ucs.ubc.ca> in sci.crypt unruh@physics.ubc.ca (Bill Unruh) wrote:

>Question about US patent law. I thought in the USA it is the first to
>invent, not the first to apply or the first to publish who gets the
>patent.

Yes.

>And demonstrating that something was invented by someone else
>earlier [...] breaks a patent. Is this true?

Yes.

>(even if kept secret)

Not as I see it.

A patent is a reward for *publishing* an invention. I think it is not uncommon for a patent to be awarded on an invention that someone else was keeping as a trade secret. The trade secret guy is then out of luck, and the patent can be applied against him.

(This last is one of the things that the pending patent legislation was intended to change, but I would expect that to lead to a vast increase in court action and perjury as each company -- and maybe even the military -- tried to show that it really had used each invention in secret and so need not pay royalties.)

Basically, an inventor has the opportunity of availing himself of *either* trade secret *or* patent protection, not both.

As I understand it, an application must be made in a reasonable time (say, a year) unless necessarily delayed. One type of delay might be the need to perform time-consuming experiments, and presumably there is some sort of exception for bad legal advice, and perhaps even the lack of funds or time. In general, though, the applicant must be seen as "moving toward" a patent throughout the period before the application, if the grant is contested.

Someone not "moving toward" a patent presumably has opted for trade secrecy protection (or, in this case, military secrecy), and does not deserve a patent grant. It is difficult to argue that one has benefited society in the sense of the open exposure of new technology when one has kept that technology secret.

>Do these UK
>inventions therefor break the RSA patent?

Not being a lawyer, I nevertheless doubt it. Sorry.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
The Crypto Glossary: http://www.io.com/~ritter/GLOSSARY.HTM
From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Re: ***Early RSA by CESG (1970)***
Date: Mon, 22 Dec 1997 16:50:49 GMT
Lines: 43
Message-ID: <349e99f5.2855883@news.io.com>
References: <6781m8$m5h$1@pheidippides.axion.bt.co.uk> <67en9m$9kq$1@nntp.ucs.ubc.ca> <67ldoh$h47$1@news.ox.ac.uk> <349E5A92.7C01@xtra.co.nz>

On Tue, 23 Dec 1997 00:18:26 +1200, in <349E5A92.7C01@xtra.co.nz> in sci.crypt Peter Smith <lucent@xtra.co.nz> wrote:

>> > And demonstrating that something was invented by someone else
>> >earlier (even if kept secret) breaks a patent. Is this true? Do these UK
>> >inventions therefor break the RSA patent?
>>
>> I don't think that those are true -- and therefore, it doesn't break
>> the RSA patent.
>
>There is a patent principle known as 'prior art' which may vitiate this
>notion.

Alas, "prior art" is a "term of art," which is to say that it has a particular meaning with respect to patent law. This meaning differs from the idea one might get from the ordinary meanings for the words "prior" and "art."

In general, "prior art" in the patent sense refers to *open* or *known* technology. This is mainly prior *publication*, especially patent publication, but really any magazine article in a periodical available in a library. Even formal ink-on-paper "publication" in a company house-organ may *not* count as "prior art" if that was not available to the public in libraries.

Secret development may have produced the same "art," and it may have been "prior" to the patented invention, yet it still may not be "prior art" in the patent sense.

The whole point of the patent system is to get stuff openly *published* and widely known. Secret development does not do this, and it is not counted the equal of development which does. In the normal case an inventor can choose trade secrecy *or* patent protection, not both.

It is not acceptable to keep a development secret for years and then -- when the idea is re-invented -- cry: "Prior art! Prior art!" That is not what "prior art" means.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
The Crypto Glossary: http://www.io.com/~ritter/GLOSSARY.HTM
From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt,talk.politics.crypto
Subject: Re: Q: Us Govt. standard(s) for classified data encryption
Date: Fri, 26 Dec 1997 18:34:23 GMT
Lines: 119
Message-ID: <34a3f86f.5301139@news.io.com>
References: <67hglb$5m7@sjx-ixn6.ix.netcom.com> <67fsrf$qso$1@sparky.wolfe.net> <67hkom$8rr$1@news.ysu.edu> <349D4EC3.2B82@sprynet.com> <wtshaw-2612970002240001@207.101.116.55> <wtshaw-2612971035030001@207.101.116.58>

On Fri, 26 Dec 1997 10:34:48 -0600, in <wtshaw-2612971035030001@207.101.116.58> in sci.crypt wtshaw@itexas.net (W T Shaw) wrote:

>[...]
>Someone wrote and asked how it would be proved that their were no
>backdoors. This would be proving a negative, but its merely a government
>request. It's par for the course for them to ask for the impossible from
>everyone so that they can make exception for anyone they want to win. On
>the other hand, clean engineering of an algorithm itself might make a
>backdoor pretty near impossible to place there.

The issue is not just "clean engineering," but also the *lack* of any hidden structuring, such as one might find in particular tables.

This means that it is not enough to have a specific table which is asserted to have been constructed with no hidden agenda, such as the use of a particular irrational expansion. Many such constructions could have been tried before one with the desired characteristics was found. Consequently, it is necessary -- if there are tables, and one wishes to avoid hidden structure -- to actually allow *random* construction.

>Only complicated,
>verbose, inefficient design would allow for one in what should be a rather
>tight and plain area.

Obviously, any sort of "ad hoc" irregularity would be serious cause for concern. But a cipher *could* be rather clever at disguising the structure necessary to create a backdoor. By hiding it inside the table construction, for example, there is no apparent part of the design to be seen.

>Much todo was made about key analysis, however. The same might be said
>for finding bad keys. A keyspace might have to be relatively small so that
>a complete search for bad keys might be made. This is a non-starter since
>any keyspace of usable size tlo guarantee passable security needs to be
>huge so that it is hard to think about searching it.

A *scalable* cipher design can produce a large or small cipher from the exact same construction rules. Just as we believe that mathematics works the same for both small numbers and large numbers, a backdoor cipher built from fixed construction rules must have the same sort of backdoor, whether built large or small. With a scalable cipher we can construct true small versions which *can* be exhaustively examined. I have very recently reported on sci.crypt results on the measured nonlinearity of 10-bit-wide Mixing constructions; such measurements inherently require a scalable cipher design.

>It is true that
>having a large keyspace might mean that hiding an bad key might make into
>some sort of back door. But, with the same size of selections in such a
>range, the odds of picking one at random should diminish.
>
>I return to memory of my question as to if key generation was considered
>part of the algorithm; government answer was yes. From my perspective,
>the answer is no. It is a wholely different thing. If the algorithm is
>good enough, its security depends solely on the keys, which should be
>variable as to equivalent bit strengths.
Well, certainly, if the cipher is known to *have* any substantial quantity of weak keys, one would have to say that avoiding those keys during key-generation *is* part of the cipher, so in this case "yes" would be appropriate. Ideally, of course, there would *be* no weak keys, and then it would seem that "no" would be more appropriate. But since the keyspace had to have been checked for this, one might say that this is what "yes" was intended to convey.

The idea that we *can* generate efficient ciphers with really huge keyspaces, so that there is absolutely no practical motive for using keys of any particular restricted size, might be something the government has not really understood or taken to heart. It may be easier for the government to believe that restricting key size has beneficial economic consequences for users, in addition to just coincidentally supporting export limitations.

>While were at it, I'd sure like to debunk the idea that lack of randomness
>in output necessarily means that a poor algorithm was used, and
>consequently poor security is a result. This idea adopted by most, but it
>is a gross error to make it into a generalization as it can be
>specifically false. It is based on the idea that you can actually measure
>randomness, which you cannot really do.

I think you have just committed a logical error: You are arguing that apparent randomness does not mean strength, which is true, and thus that the lack of randomness does not mean weakness, which is false and does not follow.

I claim that a cipher should indeed make plaintext (which inherently has structure) into ciphertext with little or no structure. Repeatable non-randomness in ciphertext would seem to be a clear indication of a problem. One test might be to take a counting sequence as plaintext, and then to try and find any sort of relationship -- between bits or bytes or whatever -- in the ciphertext.

>[...]
>Give me any so-called *random* ciphertext, and I can easily run it through
>a step to skew it somehow.

This is an interesting assertion, one perhaps best investigated through actual examples. Personally, I suspect that the most obvious ways of making randomness non-random would also destroy invertibility.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
The Crypto Glossary: http://www.io.com/~ritter/GLOSSARY.HTM
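The counting-sequence test suggested in the post above can be sketched concretely. In the C fragment below, encrypt_block() is only a stand-in toy so the program runs on its own; the real point is the harness, which enciphers successive counter values and checks that about half of the ciphertext bits change between neighboring plaintexts. A repeatable, sizable bias here would be exactly the kind of ciphertext structure being discussed.

#include <stdio.h>
#include <stdint.h>

/* Placeholder "cipher" so the sketch is self-contained: a few 64-bit
   mixing rounds using splitmix64-style constants.  Substitute the
   actual cipher under examination here. */
static void encrypt_block(const uint8_t in[8], uint8_t out[8])
{
    uint64_t v = 0;
    int i, r;
    for (i = 0; i < 8; i++) v |= (uint64_t)in[i] << (8 * i);
    for (r = 0; r < 4; r++) {
        v += 0x9E3779B97F4A7C15ULL;
        v ^= v >> 29;
        v *= 0xBF58476D1CE4E5B9ULL;
        v ^= v >> 32;
    }
    for (i = 0; i < 8; i++) out[i] = (uint8_t)(v >> (8 * i));
}

static int popcount8(uint8_t x) { int n = 0; while (x) { n += x & 1; x >>= 1; } return n; }

int main(void)
{
    uint8_t pt[8], ct[8], prev[8] = {0};
    double sum = 0;
    long i, trials = 100000;
    int k;

    for (i = 0; i < trials; i++) {
        long v = i;                            /* counting-sequence plaintext */
        for (k = 0; k < 8; k++) { pt[k] = (uint8_t)(v & 0xFF); v >>= 8; }
        encrypt_block(pt, ct);
        if (i > 0) {
            int d = 0;
            for (k = 0; k < 8; k++) d += popcount8((uint8_t)(ct[k] ^ prev[k]));
            sum += d;
        }
        for (k = 0; k < 8; k++) prev[k] = ct[k];
    }
    /* For a 64-bit block the ideal is about 32; a large, repeatable bias
       suggests structure leaking through from the low "counting" bits. */
    printf("mean ciphertext bit changes: %.2f (ideal about 32)\n",
           sum / (trials - 1));
    return 0;
}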
From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Re: Q: Us Govt. standard(s) for classified data encryption
Date: Sun, 28 Dec 1997 10:32:47 GMT
Lines: 104
Message-ID: <34a62a93.6255227@news.io.com>
References: <67hglb$5m7@sjx-ixn6.ix.netcom.com> <67fsrf$qso$1@sparky.wolfe.net> <67hkom$8rr$1@news.ysu.edu> <349D4EC3.2B82@sprynet.com> <wtshaw-2612970002240001@207.101.116.55> <wtshaw-2612971035030001@207.101.116.58> <34a3f86f.5301139@news.io.com> <34A59D20.3D752504@ibm.net>

On Sat, 27 Dec 1997 19:28:16 -0500, in <34A59D20.3D752504@ibm.net> in sci.crypt Uri Blumenthal <uri@ibm.net> wrote:

>Terry Ritter wrote:
>> A *scalable* cipher design can produce a large or small cipher from
>> the exact same construction rules. Just as we believe that
>> mathematics works the same for both small numbers and large numbers, a
>> backdoor cipher built from fixed construction rules must have the same
>> sort of backdoor, whether built large or small.
>
>I'm afraid I have to question the feasibility of the idea of a
>scalable cipher design. To illustrate what I mean, please
>consider (a) IDEA approach, and (b) what happens with an
>S-box when you increase its size...

Since you question feasibility, perhaps you will be moved to consider my well-known and extensively-documented technology and designs:

http://www.io.com/~ritter/#MixTech

in particular:

http://www.io.com/~ritter/MIXCORE.HTM

and:

http://www.io.com/~ritter/#VSBCTech

in particular:

http://www.io.com/~ritter/VSBCCORE.HTM

With respect to (a), let me say that I personally have reservations about the strength of IDEA. But this has nothing to do with scaling. The reason IDEA does not scale up is the restrictive form of finite field which is used in the design. I use fields of mod 2 polynomials, and so avoid that type of scaling problem.

With respect to (b), it is trivially true that an S-box requires exponential storage. But since all block ciphers are designed to emulate a huge substitution table, it is quite clear that all real block ciphers have design strategies intended to surmount this issue. My designs generally fix the size of the tables, and then couple the necessary number together with various forms of mixing; this supports a dynamically-variable block size, either as a power-of-2 bytes, or an arbitrary size to the byte. For exhaustive testing, however, it is necessary to reduce the size of the tables so that the full codebook can be traversed while still using multiple tables. See:

http://www.io.com/~ritter/ARTS/MIXNONLI.HTM

Clearly, any real block cipher must be qualitatively different from a substitution table of the same size. However, this is an issue shared by all block ciphers of realistic size, scalable or not.

>Possibly it [scalability] is too tough a nut to crack.

I encourage everyone to consider just how tough this is.

>On the
>bright side - I've yet to see the scalability problem addressed
>satisfactory in any network-related field (:-).

In other fields, scalability is generally related to throughput or bandwidth. You will find this addressed in hardware realizations of my Mixing designs:

http://www.io.com/~ritter/EXTRMSPD.HTM

In Mixing hardware, throughput can be increased arbitrarily by increasing block size (along with transistor count), since the hardware processing rate is constant *per* *block* no matter how large that block may be. These ciphers are based on an arguably perfect mixing of two bytes into two bytes; mixing in FFT-like patterns thus mixes blocks of arbitrary power-of-2 size.
For scalable throughput, each mixer is implemented as separate hardware. (Each factor-of-2 block size increase does require an additional mixing sub-layer, which does increase latency somewhat, but this does not affect throughput. For example, mixing 8 bytes requires 3 mixing sub-layers, while mixing 64 bytes requires 6 mixing sub-layers; this is twice the latency but eight times the throughput.)

In addition, the use of large blocks (e.g., 64 bytes or larger) may support "electronic code book" (ECB) ciphering. This allows each block to be processed separately, on separate hardware, which is another form of throughput scalability.

So, yes, cipher scalability *can* scale throughput. But I see the main scalability advantage being the ability to exhaustively examine complete ciphering transformations (i.e., the complete codebook), which is something simply not possible in normal designs.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
The Crypto Glossary: http://www.io.com/~ritter/GLOSSARY.HTM
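The FFT-like mixing pattern mentioned above can be sketched in a few lines of C. The two-byte mixer below is a trivial invertible stand-in, not the Mixing-cipher mixer; what the sketch shows is only the butterfly indexing, which pairs each byte with a new partner in every sub-layer and so needs log2(n) sub-layers for an n-byte block -- 3 sub-layers for 8 bytes, 6 for 64 bytes, as in the post.

#include <stdio.h>
#include <stdint.h>

/* Placeholder 2-byte mixer: the matrix [[1,1],[1,2]] over Z/256 has
   determinant 1, so the map is invertible.  It is NOT the Mixing-cipher
   mixer; only the FFT-style indexing below is the point. */
static void mix2(uint8_t *a, uint8_t *b)
{
    uint8_t x = *a, y = *b;
    *a = (uint8_t)(x + y);
    *b = (uint8_t)(x + 2 * y);
}

/* One sub-layer per factor of 2 in the block size, butterfly pairing. */
static void mix_block(uint8_t *blk, int n)     /* n must be a power of 2 */
{
    int span, i, j;
    for (span = 1; span < n; span <<= 1)
        for (i = 0; i < n; i += 2 * span)
            for (j = 0; j < span; j++)
                mix2(&blk[i + j], &blk[i + j + span]);
}

int main(void)
{
    uint8_t blk[8] = {1, 0, 0, 0, 0, 0, 0, 0}; /* single active input byte */
    int i;
    mix_block(blk, 8);                         /* 8 bytes -> 3 sub-layers */
    for (i = 0; i < 8; i++) printf("%02x ", blk[i]);
    printf("\n");                              /* the one byte reaches every output position */
    return 0;
}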
From: ritter@io.com (Terry Ritter) Newsgroups: sci.crypt,talk.politics.crypto Subject: Re: Q: Us Govt. standard(s) for classified data encryption Date: Mon, 29 Dec 1997 17:55:39 GMT Lines: 160 Message-ID: <34a7e3e9.2151126@news.io.com> References: <67hglb$5m7@sjx-ixn6.ix.netcom.com> <67fsrf$qso$1@sparky.wolfe.net> <67hkom$8rr$1@news.ysu.edu> <349D4EC3.2B82@sprynet.com> <wtshaw-2612970002240001@207.101.116.55> <wtshaw-2612971035030001@207.101.116.58> <34a3f86f.5301139@news.io.com> <wtshaw-2812970459590001@207.101.116.62> On Sun, 28 Dec 1997 04:59:44 -0600, in <wtshaw-2812970459590001@207.101.116.62> in sci.crypt wtshaw@itexas.net (W T Shaw) wrote: >In article <34a3f86f.5301139@news.io.com>, ritter@io.com (Terry Ritter) wrote: > >> On Fri, 26 Dec 1997 10:34:48 -0600, in >> <wtshaw-2612971035030001@207.101.116.58> in sci.crypt >> wtshaw@itexas.net (W T Shaw) wrote: >> >>[...] >> This means that it is not enough to have a specific table which is >> asserted to have been constructed with no hidden agenda, such as the >> use of a particular irrational expansion. Many such constructions >> could have been tried before one with the desired characteristics was >> found. Consequently, it is necessary -- if there are tables, and one >> wishes to avoid hidden structure -- to actually allow *random* >> construction. > >Bad keys, bad tables, are bad keys. But some cipher designs do claim to have table constructions above reproach *because* they use a particular well-known irrational for initialization. >[...] >What you say seems valid enough, that scaling-up is a good thing is worth >doing, and simplifies testing. Actually, my point is that a truly-scalable cipher allows scaling *down*, which supports exhaustive testing that simply could not be done on a larger cipher. Scalability does far more than just *simplify* testing; it is an enabling technology that supports experimental analysis which is otherwise *impossible*. >Weaknesses might be ingrained in scaling >to certain sizes, sort of nodes dependent on the design if not flexible >enough and/or the algorithm is based on some type of internal factor. If it's not really scalable, it's not scalable. >For instance, if an algorithm uses some sort of pRNG, it has to be tested >at all levels intended as hidden loops in its calculations may create sort >of an interference pattern with the specific parameters of the data >allowed at that level. I've seen that before. Personally, I have been able to accept a less perfectionist approach which I think produces superior table initialization: http://www.io.com/~ritter/KEYSHUF.HTM I note that the output from that RNG is clean enough to expose a bias in the runs-up / runs-down test, as I reported in sci.crypt: http://www.io.com/~ritter/ARTS/RUNSUP.HTM And while it is not possible to test for every potential correlation, it *is* possible to *isolate* the generator so that even a highly correlated sequence would seem to be difficult to attack. Double-shuffling, for example, would appear to be a strong technique for preventing knowledge of the table contents from disclosing the shuffling sequence. Correlations are possible in *any* cipher. Even though it is not possible to test an RNG for every possible correlation, it is *also* not possible to test every correlation (between bits or groups of bits) in a block cipher. But we can hope to *isolate* the RNG sequence from block ciphering operations, while the cipher itself necessarily remains exposed. >[...] 
>I'm certainly open to error, but the point is one of correlation not >proving causation, but it could demonstrate a close relationship if there >was one. On the other hand, a high degree of correlation could be >measured when samples are filtered in the first place for the desired >end. Examples, even crypto examples, are easy to come by. It is certainly possible to filter a mass of equivalent ciphertexts to find a pattern you might want to present. But I expect this to be a random process, requiring exponentially more ciphering for each increase in desired pattern length. This discussion was basically about the suggestion that even a cipher which produces correlations could be OK. I assert again that this is *effectively* always false. If a ciphering "core" (the cipher proper, without massive filtering) does produce repeated correlations, and this causes us to dismiss that cipher design, we will almost never be wrong in so doing. So sayeth I. >> I claim that a cipher should indeed make plaintext (which inherently >> has structure) into ciphertext with little or no structure. >> Repeatable non-randomness in ciphertext would seem to be a clear >> indication of a problem. One test might be to take a counting >> sequence as plaintext, and then to try and find any sort of >> relationship -- between bits or bytes or whatever -- in the >> ciphertext. > >Just what I would want you to do in some design, to look for things that >were not there, and waste lots of time in the process. Wouldn't it be easier, stronger, and faster just to use more keying or more ciphering layers, or to encipher again under a complete other cipher? >> >[...] >> >Give me any so-called *random* ciphertext, and I can easily run it through >> >a step to skew it somehow. >> >> This is an interesting assertion, one perhaps best investigated >> through actual examples. Personally, I suspect that the most obvious >> ways of making randomness non-random would also destroy invertibility. >> >The cost of one process I have described before, phasing, is slight, a >minimal increase in content, or a trade off of some sort, since getting >something for nothing is not always obtainable. There are several obvious >options for phasing protocols. So, basically, we have more assertions but no examples. If your claim is true, you ought to be able, in a paragraph or two, to describe one such process in a way that most of us could understand and agree that it does in fact do what you claim it does. Your failure to explicitly do this when asked does not inspire confidence in your claim. >Having spent some more time today on transmutability of ciphertexts >between various bases, even more possibilities for the sort of mischief I >suggest become apparent. And I would suspect that ciphertexts which are random-like in one base are also random-like in another. Again, the more ciphertexts we have, the more likely it is that we can select one which seemingly has "patterns." But producing ciphertexts for pattern selection has a cost which is exponential with expected pattern length, and the expensive search and selection process buys only "mischief" with no added real strength at all. Why would we do this? >I intend to write a post to illustrate my quick >method for discovering the more cryptopromising of the relationships >between bases. >[...] Good. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ The Crypto Glossary: http://www.io.com/~ritter/GLOSSARY.HTM
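The double-shuffling idea mentioned in the post above can be sketched as two keyed shuffle passes over a byte table. The xorshift-style generator and its seed constant are small stand-ins -- not the large-state generator of KEYSHUF.HTM -- and the shuffle is plain Fisher-Yates; the point is that the finished table is the composition of two permutations, so even complete knowledge of the table contents does not pin down the swap sequence of either pass.

#include <stdio.h>
#include <stdint.h>

static uint32_t state;                         /* would be loaded from the key */

static uint32_t next_rand(void)                /* stand-in keyed RNG (xorshift) */
{
    state ^= state << 13;
    state ^= state >> 17;
    state ^= state << 5;
    return state;
}

static void shuffle(uint8_t t[256])            /* one Fisher-Yates pass         */
{                                              /* (modulo bias ignored here)    */
    int i;
    for (i = 255; i > 0; i--) {
        int j = (int)(next_rand() % (uint32_t)(i + 1));
        uint8_t tmp = t[i]; t[i] = t[j]; t[j] = tmp;
    }
}

int main(void)
{
    uint8_t table[256];
    int i;

    for (i = 0; i < 256; i++) table[i] = (uint8_t)i;
    state = 0x2A6D3EEFu;                       /* arbitrary example seed */
    shuffle(table);                            /* first pass  */
    shuffle(table);                            /* second pass */
    for (i = 0; i < 8; i++) printf("%02x ", table[i]);
    printf("...\n");
    return 0;
}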
From: ritter@io.com (Terry Ritter) Newsgroups: sci.crypt,talk.politics.crypto Subject: Re: Q: Us Govt. standard(s) for classified data encryption Date: Thu, 01 Jan 1998 20:29:38 GMT Lines: 301 Message-ID: <34abfcac.9916179@news.io.com> References: <67hglb$5m7@sjx-ixn6.ix.netcom.com> <67fsrf$qso$1@sparky.wolfe.net> <67hkom$8rr$1@news.ysu.edu> <349D4EC3.2B82@sprynet.com> <wtshaw-2612970002240001@207.101.116.55> <wtshaw-2612971035030001@207.101.116.58> <34a3f86f.5301139@news.io.com> <wtshaw-2812970459590001@207.101.116.62> <34a7e3e9.2151126@news.io.com> <wtshaw-2912971754400001@207.101.116.51> Sorry for the delay; I've been trying to grok the essence . . . . On Mon, 29 Dec 1997 17:54:24 -0600, in <wtshaw-2912971754400001@207.101.116.51> in sci.crypt wtshaw@itexas.net (W T Shaw) wrote: >In article <34a7e3e9.2151126@news.io.com>, ritter@io.com (Terry Ritter) wrote: >> >> But some cipher designs do claim to have table constructions above >> reproach *because* they use a particular well-known irrational for >> initialization. > >If another could be substituted for it, then it is just an optional key >choice. In the OTP marathon, the use of such a sequence as a pseudo came >up. Even though it does not necessarily repeat, it is a known sequence. >Whereever an irrational is used, might also a true random sequence be >used, hense, more key choices? Supposedly, the sequences you mention are >likely to not to be characterized as bad, but who knows. The problem is that the definition of the cipher generally includes table initialization, and using a particular irrational is *the* defined way this is done in some ciphers. Actually, I think this is related to your issue: Even though these sequences might be considered "good" random-like sequences, they obviously differ from each other. With an appropriate cipher design, it might happen that one or another of these sequences would produce an appropriate backdoor. Then it is just necessary to find a weak one, and issue *that* irrational for use. I think we can't trust tables that we cannot initialize at random, even if their structure is claimed to be optimal. >> >[...] >> Scalability does far more than just *simplify* testing; it is an >> enabling technology that supports experimental analysis which is >> otherwise *impossible*. >> >Agreed. Great! >[...] >Again, I would prefer to see that any key generation procedure passed the >test of bad generation for any particular algorithm as in a particular >implementation. I have no idea what this means. How can we "pass" a test of "bad generation"? What would such a test be? >>[...] >Having found one particular pRNG which worked generally, but had some >problems in one implementation, I may be overcautious, but being burnt on >occasion does that sort of thing to you. It is lots simpler to test when >you do not know all the things you might test for but simply select a >minimal few items, relying on chance to stumble on a problem area. Building and testing RNG's is a time-honored sub-area of "computer science" that is known to be non-trivial. People get burnt all the time in this area. I did research and then a Cryptologia article on the various possibilities, and the number of proposed schemes continues to increase. I suppose it helps to have more tests rather than fewer. But it seems to me that isolating the RNG from any exposure greatly reduces any risk that imperfections in the RNG can cause. It seems very important to get an unbiased selection among tables, and a large-state RNG is one way to do it. 
We *can* use other techniques, but then we have to test *them* for essentially the same sort of problems, and that may be *much* harder, because then we have to test the tables and the table distribution directly. (Or just handwave it away, as is normally done.) If we use the RNG technique, we can at least draw on the experience that exists with those designs.

>>[...]
>> It is certainly possible to filter a mass of equivalent ciphertexts to
>> find a pattern you might want to present. But I expect this to be a
>> random process, requiring exponentially more ciphering for each
>> increase in desired pattern length.
>
>Not necessarily so, as certain algorithms rely on picking from a ready
>list of ciphertexts as the last step, just pick amongst the readily
>available strings.

. . . as in the last step of a cylinder cipher. This is the old homophonic trick, right? But in a cylinder cipher the homophones are *related* for a particular ciphering (selecting one also selects the rest), where in most other techniques they are not. I suppose the advantage is that one can work within the same alphabet instead of an expanded alphabet, although the cost is an additional character -- beyond that needed for the plaintext -- which is used to indicate the plaintext row. But how much non-randomness can we expect, and can that amount possibly be useful?

>> This discussion was basically about the suggestion that even a cipher
>> which produces correlations could be OK. I assert again that this is
>> *effectively* always false. If a ciphering "core" (the cipher proper,
>> without massive filtering) does produce repeated correlations, and
>> this causes us to dismiss that cipher design, we will almost never be
>> wrong in so doing. So sayeth I.
>
>If this is a simple result of a simplistic method, I would wholeheartedly
>concur, however, if you are dealing with a convoluted result, consciously
>skewed toward some predetermined non-random appearing distribution, having
>nothing to do with the underlying plaintext, any correlative method is
>screwed. I see this as not very difficult to cause.

OK, so the intent here is to select a pattern, a non-randomness, independent of the ciphertext per se. For strength, we would seem to want to select the pattern at random. So we are now introducing non-randomness at random . . . . The issue is whether this is practical. I'm still hoping to agree.

>>[...]
>> Wouldn't it be easier, stronger, and faster just to use more keying or
>> more ciphering layers, or to encipher again under a complete other
>> cipher?
>>
>The result would be obtained through added levels of key/algorithm,
>chained in a most productive example of good chaining. So, your
>suggestion is really my solution, in a way, but not producting random
>appearing results as a goal, but the contrary.

But, in this case, the ciphertexts do not all just appear, as in a physical cylinder. In the computer we have to create them, before we can select a pattern from among the random-like results. How can this possibly be a reasonable strategy?

>[...]
>For my initial example, I'll use something similiar to that I have
>described before, making a 27 character based cipher look like a 24
>character based one:

I note in passing that we know that any such technique *must* expand the ciphertext somehow.
I assume that, here, the expansion will occur in any case (the expansion necessary in an automatic cylinder cipher to indicate which row holds the plaintext), and that this gives us the *opportunity* to do something else with that expansion, which is interesting. I guess this leads me to question the amount of information introduced by the row-indication character (or IV), since, if that were fully utilized, there would be no freedom to do something else with it. Oh, indeed, that *is* the case. We are foregoing a *random* row selection to achieve our bias. And we had *better* get some good strength from it, because we are giving up the "message key" properties of repeated cylinder ciphering with random row selection. >Initial Algorithm: Trinity 27 >PT: So, basically, we have more assertions but no examples. >Keys from same text, just for demonstration purposes, normally you would >not have them: Sub1: tpeaulfbm n/zgkcyho qridvwjsx > Sub2: iqoudzypr g/antkhvw sebxjfclm > Trans: /zdiayekn qfbstgrvj olucxwmph > >Preformatted text: Soj/basic allyj/we/ have/more /assertio ns/but/no >/examples X/padding I really don't understand the insertion of "/" and "j/" >Groups:(7 blocks of 9 characters each) > >CT: fhvranphm goutwdmyx whhfdiaxp kwchnmiyq jzwitsdi/ ivxvcsrye qeqikafgw > >At this point, it seems that only one group has a "/" in it, but all will >be treated alike, >selectively encrypted with a cylinder. The protocol will use the first >character in each group of ten, two groups of five, to indicate the >original line. Of the possible 26 choices, any with a "/" , "O", or "A" >in them will be rejected, Of course we can go only so far with this. With a 26-char alphabet and only 9 chars per block, we have a good chance of finding choices which do not include 3 particular chars. But even here it seems to me that there is no guarantee. We *might* not have such a row. Then what? >and a choice made of the remaining, with a "Q" >in the strin. If, and this is the case, that if a "/" occurs in the >reference position, which it will not this time, simply double the first >letter in the CT2 group pair. The cylinder is 27 x 9. > >CT2: fjqwi jxsfe gjqwh qffuu wyvzg qphqm kwqtq rcyel jndnp qbiyx iqpbe >qyfbi qjqhm zvwjf > >In the original message, there is no "F", no "G", and no "Q" and aside >from punctuation, a rather typical alphabetic distibution. > >In CT, there is no "B" or "L", but there is a good distribution of letters. > >In CT2, lots of "Q" and lots of characters with a distribution on only one. Well, I would say that any message has "a distribution" of characters. It may be that only one has its normal distribution . . . . >Larger passages where all letters were represented would still show that >the selection process could assure that there could be a reduction in >character(s), That is, a reduction in the size of the ciphertext alphabet, not a reduction in the number of characters in the message. >and/or a selective peaking of the frequency of one or more, >even at the same time. Good, but it's expensive, and I worry about the strength. While we don't have to produce multiple cipherings with a physical cylinder, in the computer, we *do* have to produce those cipherings. So here we are, doing 27 encryptions rather than just one, so that we can search for various characteristics and so bias the ciphertext. This *is* the expense. And of course we assume that The Opponents have the same program we have, and so are aware of the possibility that we might do this. 
And then the ciphertext alphabet tells them that we did. What is strong about this? >[...] >> Again, the more ciphertexts we have, >> the more likely it is that we can select one which seemingly has >> "patterns." > >To build "patterns", like an extra lot of "Q's", you need do it only one >point at a time, while to assimilate patterns, you try to work with as >much as you can. Yes, I see: For fairly limited goals, with small blocks, and the much higher cost of making and ranking the multiple cipherings, bias *can* be introduced. But we are not talking arbitrary patterns here. In a sense, we are talking about re-codings, which have no strength at all. We can *always* change the message coding, whether ciphering or not -- but the other end has to know it. We can make that part of the key, if we have enough codings, but then The Opponents also know of that possibility. >> But producing ciphertexts for pattern selection has a >> cost which is exponential with expected pattern length, and the >> expensive search and selection process buys only "mischief" with no >> added real strength at all. Why would we do this? > >The cost is only where you want it, with the cracker, not the encrypter >where simple selection costs practically nothing. This is the argument >against evolution, trying to make the whole seem impossible, like >shuffling a box of loose clock parts and getting a working one from it, >whereas, some careful stepwise selection in assembly means solving one >obvious little problem at a time. No, I think the cost is still in the enciphering operation. The cost is just *hidden* on a physical cylinder, and so is easily discounted in mental review. In practice, in a computer, there is a very substantial cost involved (e.g., 1/26th the possible ciphering rate) and a very significant worry about the strength of all this. Yes, bias can be achieved. But we can do that by re-coding, which can be a far more general and comprehensive solution, although I would not expect that to add strength. We need it to add strength here. >> >An allusion to my notes can be in the list of probable bases for >compatibility with base 27. It involves solving nth roots for values of >powers of the selected base. Crude logs work OK for this. I just pulled >them off of a scale on the trusty old slide rule, used some basics about >these types of caluculations such as should be learned in JrHi, and >confirmed the values on a calculator. > >An example is (27^4 = 531,441) < (14^5=537,824). So a suitable number of >characters in base 27 will fit closely into a different number for base >14, more since this is a situation of expansion in this conversion. The >trick is to have *some*, but not overrun, head room in the new base since >things are destined not to come out even in most cases. Hopefully, the >disparity is in your favor. It had *better* be in our favor, or arbitrary messages cannot be transformed. This is another form of ciphertext expansion. We must be able to represent at least as much information in the new base as we have in the old one. The closeness of "at least" is the amount of expansion. But I am still searching for a handle on how much security all this will provide in an automatic machine ciphering. How many of these options are there? Does not each give up a signature which is the resulting character set and the amount of expansion? Are we talking known-plaintext attack or not, and if not, why not? 
---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
The Crypto Glossary: http://www.io.com/~ritter/GLOSSARY.HTM
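The base-fit arithmetic in the exchange above is easy to check mechanically. The short C sketch below computes how many base-14 symbols are needed to carry k base-27 symbols and how much headroom remains; for k = 4 it reproduces the 27^4 = 531,441 < 14^5 = 537,824 example, with about 1.2% headroom.

#include <stdio.h>
#include <math.h>

int main(void)
{
    int k;
    for (k = 1; k <= 8; k++) {
        /* smallest m with 14^m > 27^k */
        int m = (int)ceil(k * log(27.0) / log(14.0));
        printf("%d base-27 chars need %d base-14 chars (headroom factor %.4f)\n",
               k, m, pow(14.0, m) / pow(27.0, k));
    }
    return 0;
}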
From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt,talk.politics.crypto
Subject: Re: Q: Us Govt. standard(s) for classified data encryption
Date: Sat, 03 Jan 1998 01:10:14 GMT
Lines: 270
Message-ID: <34ad8fae.3962854@news.io.com>
References: <67hglb$5m7@sjx-ixn6.ix.netcom.com> <67fsrf$qso$1@sparky.wolfe.net> <67hkom$8rr$1@news.ysu.edu> <349D4EC3.2B82@sprynet.com> <wtshaw-2612970002240001@207.101.116.55> <wtshaw-2612971035030001@207.101.116.58> <34a3f86f.5301139@news.io.com> <wtshaw-2812970459590001@207.101.116.62> <34a7e3e9.2151126@news.io.com> <wtshaw-2912971754400001@207.101.116.51> <34abfcac.9916179@news.io.com> <wtshaw-0201980411210001@207.101.116.55>

On Fri, 02 Jan 1998 04:11:05 -0600, in <wtshaw-0201980411210001@207.101.116.55> in sci.crypt wtshaw@itexas.net (W T Shaw) wrote:

>In article <34abfcac.9916179@news.io.com>, ritter@io.com (Terry Ritter) wrote:
>
>> Sorry for the delay; I've been trying to grok the essence . . . .
>>
>> On Mon, 29 Dec 1997 17:54:24 -0600, in
>> <wtshaw-2912971754400001@207.101.116.51> in sci.crypt
>> wtshaw@itexas.net (W T Shaw) wrote:
>[...]  Consider
>if the keys were a bunch of alphabets, they should at least be all
>different. I have seen when conditions produced duplication in two
>situations, where a selected seen in a pRNG would produce a short
>repeating sequence of alphabets, where two selected seeds were the same.

But we can remove this possibility at design time: We use an RNG with a substantial amount of state and an associated theory which eliminates short cycles. We use an algorithm which accepts fixed range random numbers and creates randomized permutations. Fin.

>The first only happens with one design of several, some coincidence in the
>numbers being close to a major variable in the stream to alphabet
>process. Of course, I could always use another generator, but I figured
>it could happen again, so why not put in a filter. It is a rare
>occurance.

This is exactly why one would prefer *not* to put in a filter: For best efficiency, we want computation to have an effect in a large number of cases. Computation which hardly ever has an effect should be avoided, whenever possible.

>>[...]
>I use many different generation techniques, more than I have mentioned.
>Obscure keys taken from a huge keyspace are hard to reproduce, and make
>brute forcing possible only after finding out how the keys were made. It
>is fairly easy to build a myrid of highly specific generators, putting
>different collections of users into radically different areas of keyspace.

RNG design is a tricky field: In my opinion, a good crypto RNG *requires* a theory of cycle length, and this seems unlikely in ad hoc designs.

>[...]
>You can work with larger cylinders, just select for the skew in output you
>desire. Nothing wrong with making normal numbers look like hex, for
>instance.

Yes, we can always re-code the representation. That's what "hex" normally means. But it still represents the same underlying value, so how can this possibly deliver strength?

The example technique represents the maximum reduction for that alphabet size and cylinder size. In general, we would like to guarantee that *some* row will meet our specifications. This cannot happen if the 3 characters we want to avoid are distributed to all 27 rows by the 9 random wheels in the cylinder. So we want the number of chars to be excluded from the alphabet (3), times the length of the cylinder (9), to be *under* the alphabet size (27), which was not quite true in the example.
This means that some combination of random wheels will not allow us to meet our goal. It is also no coincidence that this (3 * 9 = 27) is exactly the amount of "entropy" introduced by the row-select character. For example, we certainly cannot use the same technique to reduce a 27-char alphabet to 16-chars with cylinders longer than 2 wheels and expect the system to work automatically. >[...] >With cylinders, you get into all sorts of possibilities, including >shuffling wheels, etc. The possibilities are many more than I would care >to explore. Yes, but there are always tradeoffs. But even in hardware, many of these possibilities will be much more expensive than others. I claim that anyone can produce a practically unbreakable "cipher," simply by repeatedly enciphering under various different ciphers. The modern practice of crypto is thus not about strength in the abstract, but is instead about the tradeoff between strength and time. Almost always, ciphering is a useless pure overhead, and this is why it must be fast. Rarely, ciphering is critical, and only then must be strong. If one is *really* willing to pay the piper, one can indeed use a keyed sequence of keyed ciphers. Many talk about strength, but few are willing to pay for it. >> [...] >> But, in this case, the ciphertexts do not all just appear, as in a >> physical cylinder. In the computer we have to create them, before we >> can select a pattern from among the random-like results. How can this >> possibly be a reasonable strategy? > >In this case, they did appear on a physical cylinder. The groups were set >up with a program and fairly easily refined through a real cylinder in my >hot little hands. The selection process that I used is fairly simple, and >could be made automatic in both encryption oand decryption. > > Remember, I used a short cylinder, 9x27, one group long. The cylinder >could be more characters, and much longer. Sure, the cylinder could be longer, but then we couldn't guarantee that there would exist any row which would meet our goal of not having three particular characters anywhere in it. >I did this little demo just to >prove a point, but if with a cylinder with many more characters, say 40 >instead of 27, the results would be still be good. Well, if the alphabet is larger, we can have more random wheels and also guarantee that at least one row will meet our goal. With an alphabet size of 40, we can have 13 random wheels and guarantee that there will be at least one row which does not have any of three specified characters. >> [...] >> I guess this leads me to question the amount of information introduced >> by the row-indication character (or IV), since, if that were fully >> utilized, there would be no freedom to do something else with it. Oh, >> indeed, that *is* the case. We are foregoing a *random* row selection >> to achieve our bias. And we had *better* get some good strength from >> it, because we are giving up the "message key" properties of repeated >> cylinder ciphering with random row selection. > >Granted, some loss of key security, but the guide could be in any >position, pulled from a fixed or varied position in the source group, or >even walked around to confuse matters, not too difficult to do any of >this. It could even be substituted, but remember the base algorithm too, >an essential part of the combined key. You would know when you had solved >the cylinder without solving the first set of keys. Well, yeah, but this is *always* the case. 
Any particular cipher *could* be something else, but then it would be different. In particular, if there is a multitude of options, we must assume first that The Opponents know that we have these options, and second that the options must be somehow keyed. If the keying selection then somehow exposes itself (as in the lack of three particular characters), that part of the keying would seem to have been for naught. >I don't hold this to be the best cryptosystem, just a challenging one for >discussion purposes, to illustrate some ideas. Good. >> >> Good, but it's expensive, and I worry about the strength. While we >> don't have to produce multiple cipherings with a physical cylinder, in >> the computer, we *do* have to produce those cipherings. So here we >> are, doing 27 encryptions rather than just one, so that we can search >> for various characteristics and so bias the ciphertext. This *is* the >> expense. > >No, not actually. When others try to implement cylinder ciphers, they try >to draw out long complicated formulas to keep up with the positions of >everything. My cylinder method is to simulate the real world, actually >rotate them suckers. It is not as slow as you might imagine, and cylinder >ciphers get really fast if put into hardware, because all the rotations >can happen at once, a parallel dream where data in and data out is just >about the only limiting factor for speed. > >For present purposes, it is just a matter of reading and comparing indexed >strings, all in the proper positions. Well, you say this, and I have no doubt that your scheme *is* much faster than hand ciphering. But in reality it represents a massive processing overhead which is not really needed for ciphering. So it is no surprise that when you have an application which can actually use that extra ciphering, it may seem to have a reasonable throughput. But compare your actual processing speed to other machine ciphers. And then compare the speed of your bias implementation to the speed of a cylinder implementation which does *not* actually simulate the physical cylinder. Then we get back to (up to) 27 cipherings instead of one, and (up to) 27 string searches instead of none. Note that we *could* avoid repeated ciphering by first doing a base conversion from 27 chars to 24, then ciphering with 24-char wheels. Or we could take the "base 27" result, and re-code that for base 24, and this is probably faster than doing many cipherings and searchings. Also note that if we are willing to expand the ciphertext of a block cipher (as the cylinder with IV expands ciphertext), we can try each possible IV value and get many ciphertexts which mean the same thing. Then we can search and select on those to attain "bias" or "pattern." This is the homophonic construction using conventional block ciphers (although only large-block ciphers are really practical here). >> And of course we assume that The Opponents have the same >> program we have, and so are aware of the possibility that we might do >> this. And then the ciphertext alphabet tells them that we did. What >> is strong about this? > >It is in the keys, of course. You have to reconstruct them to solve the >message, nothing less. That is certainly the goal. But in the case of this example, it seems to me that the keying necessary to support the bias also has announced itself exactly by showing bias in the ciphertext. So this amount of keying is wasted and does not add strength. Waste enough keying, and brute force becomes interesting. 
>How efficient this one design would be is open to >question, but I did not advance it as a cure all, just an object at hand >to illustrate a point. Nothing here is out of the realm of hand >ciphering, so it is fairly simple. And, I chose a dumb short set of keys >for the T27. They could be even be random. > >Having such a short message really makes it out of the question to solve. >It is another question of how much ciphertext is necessary to solve this >chained system. So much depends on the keys that it is hard to say. >Remember, most classic systems are duck soup to handle with less message >than two keys worth. This one is not so simple, but it is not top quality >either. >> >> Yes, I see: For fairly limited goals, with small blocks, and the much >> higher cost of making and ranking the multiple cipherings, bias *can* >> be introduced. > >Then, the major point was made. > >> But we are not talking arbitrary patterns here. In a >> sense, we are talking about re-codings, which have no strength at all. > >We are talking effective chaining, which adds lots of strength. If I understand "chaining" correctly as the repeated ciphering of the expanding message plus a new random IV on each ciphering, I agree that this adds strength. But I also suggest that it is inherently expensive. > >Remember, both compression and expansion can be obtained by choice of >bases. Well, yes, but this is just the *apparent* compression and expansion one always gets from re-coding in a different base or size of alphabet. We can represent the same information with fewer symbols by increasing the length of the string, and vice versa. The essential information is *constant* (or expanding!) in any reasonable measure. So this is a different sort of "compression" than what we normally think. >In the simple example of going from base 27 to base 62, it looks >like you get 20% compression. The practical proof is in the pudding with >any scheme, so I would not like to hang my hat on any figures for any >proposed method until a program actually does the job and demonstrates >practicality. I would much rather program first and talk later. I don't >know of an implimentation that did not have at least some mild surprises >in the coding stage. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ The Crypto Glossary: http://www.io.com/~ritter/GLOSSARY.HTM
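Re-coding between symbol bases, as discussed above, is mechanical and can be sketched directly. The routine below rewrites a string of base-27 digit values as base-62 digits by repeated long division; the output is shorter, but it carries exactly the same value, which is why this is re-coding rather than compression. The nine-symbol message is an arbitrary example, and mapping the digit values onto an actual alphabet (and handling leading zeros) would be separate steps.

#include <stdio.h>

/* Divide the base-`from` digit string d[0..len-1] (most significant
   digit first) by `to`, leaving the quotient in place; return the
   remainder. */
static int divmod_digits(int *d, int len, int from, int to)
{
    int i, rem = 0;
    for (i = 0; i < len; i++) {
        int cur = rem * from + d[i];
        d[i] = cur / to;
        rem  = cur % to;
    }
    return rem;
}

static int all_zero(const int *d, int len)
{
    int i;
    for (i = 0; i < len; i++) if (d[i]) return 0;
    return 1;
}

int main(void)
{
    int msg[9] = {18, 14, 26, 1, 0, 18, 8, 2, 0};   /* nine base-27 symbols */
    int out[32], n = 0, i;

    while (!all_zero(msg, 9))
        out[n++] = divmod_digits(msg, 9, 27, 62);   /* base-62 digits, least significant first */

    printf("nine base-27 symbols re-coded as %d base-62 symbols:", n);
    for (i = n - 1; i >= 0; i--) printf(" %d", out[i]);
    printf("\n");
    return 0;
}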
From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Re: Q: Us Govt. standard(s) for classified data encryption
Date: Tue, 30 Dec 1997 08:38:12 GMT
Lines: 72
Message-ID: <34a8b2e8.16615798@news.io.com>
References: <67hglb$5m7@sjx-ixn6.ix.netcom.com> <67fsrf$qso$1@sparky.wolfe.net> <67hkom$8rr$1@news.ysu.edu> <349D4EC3.2B82@sprynet.com> <wtshaw-2612970002240001@207.101.116.55> <wtshaw-2612971035030001@207.101.116.58> <34a3f86f.5301139@news.io.com> <68a4tk$ap3@dfw-ixnews8.ix.netcom.com>
NNTP-Posting-Host: as4-dialup-30.wc-aus.io.com

On Tue, 30 Dec 1997 06:40:26 GMT, in <68a4tk$ap3@dfw-ixnews8.ix.netcom.com> in sci.crypt Douglas A. Gwyn <gwyn@ix.netcom.com> wrote:

>In article <34a3f86f.5301139@news.io.com>,
> ritter@io.com (Terry Ritter) wrote:
>>This means that it is not enough to have a specific table which is
>>asserted to have been constructed with no hidden agenda, such as the
>>use of a particular irrational expansion. Many such constructions
>>could have been tried before one with the desired characteristics was
>>found. Consequently, it is necessary -- if there are tables, and one
>>wishes to avoid hidden structure -- to actually allow *random*
>>construction.
>
>No, that can lead to disaster. If the DES S-boxes had been
>randomly constructed, the system would have been weaker, as
>seems to finally have been figured out by public researchers.

Fine, but this does not negate my point: DES has been criticized for two decades specifically *because* of the mysterious construction of the tables. Any ciphers which follow in those footsteps can expect similar reactions. Presumably the designers would also have a similar response: "You can trust us, really!"

But we now believe that the DES tables probably were *not* constructed with Linear Cryptanalysis in mind, and that reminds us that even if we *do* trust the designers, this gets us only so far: If the designers do not know of a problem, they can hardly include its solution in their design. Are we to believe that any design group, anywhere, at any time, *can* know every possible problem in advance?

Using random table construction does not mean that we take DES and put in random tables; it means that we should construct ciphering structures with sufficient overkill to handle the variations we can expect in randomly-constructed tables. We can use large tables, we can use a lot of them, and we can change them on every primary keying. Using *any* fixed table, even if it really is *optimal*, seems to be just asking for trouble.

Some ciphers, by using an apparently arbitrary irrational to construct fixed tables, claim to have overcome any suspicion of a backdoor. Yet this has not overcome *my* suspicion, because the designers could have tested multitudes of values until one was found with the appropriate weakness. The weak one might have been first.

>What you really want is a sufficiently powerful *theory* that
>one can confidently design and check his own systems; there
>are signs of such a theory beginning to emerge in the public
>literature, but it's not "turn-key" yet.

As reported on sci.crypt, I have conducted substantial tests of table nonlinearity, and have run avalanche tests on my ciphers for years. A number of other characteristics have been identified in the literature which increase our understanding of table structure and, indeed, block ciphers themselves. So this is moving along. But I would find it *extremely* difficult to accept any claim of a *comprehensive* theory of table weakness.
And without such a theory, and without the broad acceptance of such a theory, we cannot know there is no backdoor, planned or otherwise, in any fixed table.

It is no surprise that we cannot prove the negative of no weakness in a given table. The surprise is that we would put ourselves in that situation in the first place.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
The Crypto Glossary: http://www.io.com/~ritter/GLOSSARY.HTM
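Table nonlinearity, as mentioned in the post above, can be measured by brute force for a byte-wide table. The sketch below correlates the parity of every output mask of the table against every affine input function (a direct Walsh-style computation) and reports the distance to the nearest affine approximation. The identity table used here is only a placeholder and scores 0, the worst possible value; a keyed, randomly shuffled table would score far higher.

#include <stdio.h>
#include <stdlib.h>

static int parity(unsigned v)
{
    int p = 0;
    while (v) { p ^= (int)(v & 1u); v >>= 1; }
    return p;
}

int main(void)
{
    unsigned char S[256];
    int a, b, x, maxW = 0;

    for (x = 0; x < 256; x++) S[x] = (unsigned char)x;  /* placeholder table: identity (linear!) */

    /* For every nonzero output mask b and every input mask a, compute the
       Walsh value W = sum over x of (-1)^(parity(b & S[x]) xor parity(a & x)). */
    for (b = 1; b < 256; b++)
        for (a = 0; a < 256; a++) {
            int W = 0;
            for (x = 0; x < 256; x++)
                W += (parity((unsigned)(b & S[x])) ^ parity((unsigned)(a & x))) ? -1 : 1;
            if (abs(W) > maxW) maxW = abs(W);
        }

    printf("nonlinearity = %d\n", 128 - maxW / 2);      /* 0 for the identity table */
    return 0;
}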
From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Re: Need help evaluating output from a PRNG
Date: Sat, 10 Jan 1998 09:03:56 GMT
Lines: 54
Message-ID: <34b73925.15116828@news.io.com>
References: <34B5B1D0.5F895E07@crl.com> <01bd1d21$d2774500$d25096d0@phma.trellis.net> <696gak$7ks@lotho.delphi.com> <884403471.568408@wagasa.cts.com>

On Sat, 10 Jan 1998 03:37:18 GMT, in <884403471.568408@wagasa.cts.com> in sci.crypt jdcooley@cts.com (J. D. Cooley) wrote:

>[...]
>What I find interesting is that you read in this NG and in the documents that
>explain the different random number generator test programs that just because it
>passes does not mean it is acceptable for cryptographic use.

A cryptographic RNG must have very long cycles, a large internal state, and be somehow isolated so the internal state is hard to develop externally. Traditionally, statistical RNG's have not had a large internal state, although that may be changing.

>Also, if it fails
>one or more tests, that doesn't mean it isn't random! So, why even run the
>tests? It looks like you can't trust them either way!

There can be no test on a sequence which unfailingly decides whether the generator which produced the sequence was random. Since a random generator can produce any possible sequence, any sequence we think looks non-random could have been produced fairly at random.

We do expect to see the statistics of large numbers play themselves out in long random sequences, and this allows us to identify and reject the worst generators. But a generator which always tests OK is not OK either: With a random source, we generally *expect* and *demand* that a statistic exceed the 95% level (often called "failure") 1 time in 20, and that one time may happen on the very first test.

The usual role of statistics is to identify particular systematic events in the context of expected random variations that may conceal such events. This often occurs in a context of difficult and costly experimentation, and there is a premium on results which are so good that they stand above the noise; it may be that not much is lost if a weak positive is ignored.

In contrast, cryptography and randomness generally support vastly larger amounts of testing at low cost, and we seek weak indications. In this context, I find it more useful to conduct many tests and collect many statistic values, then visually and mathematically compare the experimental distribution to the ideal for that statistic. Of course, the mathematical comparison between two distributions *also* has a statistical distribution, and *will* show very unusual values occasionally.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
The Crypto Glossary: http://www.io.com/~ritter/GLOSSARY.HTM
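The testing style described above -- run the same statistic many times and look at its whole distribution rather than a single pass/fail -- can be sketched as below. The generator being sampled is just a stand-in xorshift-style toy (substitute the generator actually under test), and 24.996 is the tabulated upper 5% point of chi-square with 15 degrees of freedom; even an ideal source should land beyond that point in roughly 1 trial in 20.

#include <stdio.h>
#include <stdint.h>

static uint32_t s = 12345u;                    /* stand-in generator state */

static unsigned gen_symbol(void)               /* 4-bit symbols, 16 bins */
{
    s ^= s << 13; s ^= s >> 17; s ^= s << 5;
    return (s >> 8) & 0xFu;
}

int main(void)
{
    int trial, trials = 1000, over = 0;

    for (trial = 0; trial < trials; trial++) {
        long count[16] = {0};
        int i, n = 16000;                      /* expect 1000 per bin */
        double chi2 = 0;

        for (i = 0; i < n; i++) count[gen_symbol()]++;
        for (i = 0; i < 16; i++) {
            double d = count[i] - 1000.0;
            chi2 += d * d / 1000.0;
        }
        if (chi2 > 24.996) over++;             /* 95% point, 15 degrees of freedom */
    }
    printf("exceeded the 95%% point in %d of %d trials (expect about %d)\n",
           over, trials, trials / 20);
    return 0;
}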
Last updated: 1998-01-16