The Hardware Random Number Generator


A Ciphers By Ritter Page


The conversation starts with a particular prescription for a physically-random generator. It then breaks into mostly theory, with a few comments on alternate approaches.


Contents


Subject: hardRandNumbGen Date: Fri, 22 Jan 1999 23:24:52 -1000 From: ".·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·." <fermion@false.net> Message-ID: <36A99564.D20@false.net> Newsgroups: sci.crypt Lines: 66 hardRandNumbGen Those who seek to capture random numbers in their natural habitats often are faced with two difficult paths: to wait quietly for the small whispers of thermal noise or to become immersed in the dazzling thrashes of large signal sources. Both ways may lead to success, or to a failure so desperate that every adversary may be seen as a stalker in the night. The story you are about to read is true: "The Hardware Random Number Generator!" Our story starts in the late 1940's with a Bell Labs researcher in his Ivory Tower setting. Dusting off an old book on physics, a young researcher reads about noise from resistors: "Thermal noise is produced as a result of thermally excited random motion of free electrons in a conducting medium, such as a resistor". This is the old wisdom of Kirchhoff and Kelvin. Randomness may be gathered from an RMS voltage of about sqrt(4kTRB), where B is the bandwidth, R is the resistance, T is temperature, and k is Boltzmann's constant. Faced with adversaries, the researcher knows he must use small resistors, which are more immune to remote interference. But if 300 ohms are used, the voltage will only be two microvolts! An amplifier with a gain of a million will be needed to make the noise usable for his secret cryptographic purposes. Then the amplifier itself will become susceptible to outside influences. Millions of people are depending on his team to find a better source of random numbers, when he has an inspiration. Like a collection of atoms whose motion is so hard to predict, if he can use variable oscillators, or gyrators, as he likes to call them, then their combined signals would be hard to predict. Small variations in conditions would change the "large signal" outputs from his circuits, which he could sample at regular intervals. That was the beginning. Today, my friends, we are ready to receive the benefits of Large Signal Random Sources. No longer will we wait, with a hope and a prayer, that the microvolt sources of randomness will not fall victim to the beamed manipulations of deviant hackers, NO, digital large signals have brought us immunity from such a fate. But it is not just the hacker who would mug our chaotic joy, it is the very regularity of our clock cycles and the very power of our conforming buses which threaten to impart a hideous regularity to our nonces, our IVs, our keys. The heartbeat of a computer is its clock, and a powerful hammerblow it is to any mere analog circuit which would dare to reside on our motherboards. This is why we cannot use sensitive amplifiers to boost the whispers of thermal noise. This is why Large Signal Sources are our refuge, our bounty, our provider of Hardware Random Number Generators. Oscillators, I tell you, OSCILLATORS, they are our main hope, and the pride of modern civilization. I cannot exaggerate too much the importance of avoiding the mistakes of past designers, who, through wishful thinking, risked it all, and lost, to the whims of a tiny hiss. So go now, brash young designers of tomorrow's cryptosystems, go to your keyboards and your mice, and always remember: It is better to have thrashed and lost some quality, than to never have thrashed at all. .·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·. 1999/1/22
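The thermal-noise figures in the post above can be checked against the Johnson-Nyquist formula Vrms = sqrt(4kTRB). A minimal sketch in Python, assuming room temperature and a 1 MHz bandwidth (the post does not state B; that value is assumed here only because it reproduces the "two microvolts" figure):

    import math

    k = 1.380649e-23   # Boltzmann's constant, J/K
    T = 300.0          # assumed room temperature, K
    R = 300.0          # resistance from the post, ohms
    B = 1.0e6          # assumed bandwidth, Hz (not stated in the post)

    v_rms = math.sqrt(4 * k * T * R * B)   # Johnson-Nyquist RMS noise voltage
    gain = 2.0 / v_rms                     # gain needed to reach ~2 V logic levels

    print("V_rms = %.2f microvolts" % (v_rms * 1e6))   # about 2.2 microvolts
    print("required gain = %.1e" % gain)               # roughly a million, as the post says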
Subject: Re: hardRandNumbGen Date: Sun, 24 Jan 1999 05:08:35 GMT From: ritter@io.com (Terry Ritter) Message-ID: <36aaaacf.5602532@news.io.com> References: <36A99564.D20@false.net> Newsgroups: sci.crypt Lines: 53 On Fri, 22 Jan 1999 23:24:52 -1000, in <36A99564.D20@false.net>, in sci.crypt ".·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·." <fermion@false.net> wrote: >[...] >it is the very regularity of our clock cycles and the very >power of our conforming buses which threaten to impart a >hideous regularity to our nonces, our IVs, our keys. The >heartbeat of a computer is its clock, and a powerful hammerblow >it is to any mere analog circuit which would dare to reside >on our motherboards. This is why we cannot use sensitive >amplifiers to boost the whispers of thermal noise. "Cannot" may be a bit of a stretch. Doing low-level analog in a digital environment is tough, there is no doubt about that. Presumably we would employ appropriate techniques to pull it off. Probably this would involve some form of power isolation and filtering, perhaps even shielding. Once the signal is large enough, it can compete with digital on its own terms. We note that the output of a CD player is supposed to be low-noise, yet is produced in a digital environment. And sound-card inputs amplify relatively low analog levels. Certainly, disk-drive magnetic read heads produce very low-level signals, yet are made to work well in a highly-digital environment. >This is why >Large Signal Sources are our refuge, our bounty, our provider >of Hardware Random Number Generators. Oscillators, I tell you, >OSCILLATORS, they are our main hope, and the pride of modern >civilization. Unfortunately "oscillation" inherently seems to imply some amount of saved and time-delayed energy. It is this accumulation of energy that makes it difficult to change the oscillation, and that is normally an advantage. Normally, an oscillator cannot detect quantum or molecular phenomena, and we would not want it to do so. A signal composed of many oscillators, each doing their own thing, is admittedly complex. But complex relationships are not, by themselves, cryptographically secure. We could even think to simulate such a system numerically, in which case the system is clearly no more than yet another pseudorandom state machine waiting to be exposed. And while any such simulation might not be exact, it could be close, and we could continually adjust the simulation to the reality we do see. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: hardRandNumbGen Date: Sun, 24 Jan 1999 03:53:40 -1000 From: ".·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·." <real@complex.net> Message-ID: <36AB25E4.2E7E@complex.net> References: <36aaaacf.5602532@news.io.com> Newsgroups: sci.crypt Lines: 130 Terry Ritter wrote: > > On Fri, 22 Jan 1999 23:24:52 -1000, in <36A99564.D20@false.net>, in > sci.crypt ".·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·." > <fermion@false.net> wrote: > > >[...] > >it is the very regularity of our clock cycles and the very > >power of our conforming buses which threaten to impart a > >hideous regularity to our nonces, our IVs, our keys. The > >heartbeat of a computer is its clock, and a powerful hammerblow > >it is to any mere analog circuit which would dare to reside > >on our motherboards. This is why we cannot use sensitive > >amplifiers to boost the whispers of thermal noise. >"Cannot" may be a bit of a stretch. Yes, my prose was intended to be didactic, to troll for responses. Thank you for your polite and rational response. >Doing low-level analog in a >digital environment is tough, there is no doubt about that. >Presumably we would employ appropriate techniques to pull it off. >Probably this would involve some form of power isolation and >filtering, perhaps even shielding. Once the signal is large enough, >it can compete with digital on its own terms. On a single chip product, like a mainstream microprocessor that is employing appropriate techniques to push the limits of speed, you may find that large signals for a random number generator are preferable to millivolt signals. The substrate junction with p+ and n- wells has a capacitive noise for which it is hard to provide accurate cancellation. >We note that the output of a CD player is supposed to be low-noise, >yet is produced in a digital environment. And sound-card inputs >amplify relatively low analog levels. Certainly, disk-drive magnetic >read heads produce very low-level signals, yet are made to work well >in a highly-digital environment. The inputs to a CD player are large signal, digital inputs from light reflections. The digital codes are reproduced from a recording studio which spent millions to get away from periodic noise. I have not done the following experiment: put a spectrum analyser on the output of a CD player during quiet passages. Look for the noise outside of the human hearing range. I expect that the digital electronics in a CD player produce ordinary levels of periodic noise that we cannot hear. And that includes non-random noise BELOW 40 Hz. CD players are not 400 MHz microprocessors that are going as fast as possible driving motherboard capacitances on 400 pins. They are slow, dedicated chips with few pins being driven, and with small load capacitances. They are self-contained assemblies that are shielded from other components in a home stereo system. The kind of RNG I am interested in is one that is robust. One that is prepared to exist in a hostile electrical environment, not some pampered little dog of a processor. Hard drives are limited to 5 zeros in a row. Consider: Why? >>This is why >>Large Signal Sources are our refuge, our bounty, our provider >>of Hardware Random Number Generators. Oscillators, I tell you, >>OSCILLATORS, they are our main hope, and the pride of modern >>civilization. >Unfortunately "oscillation" inherently seems to imply some amount of >saved and time-delayed energy. It is this accumulation of energy that >makes it difficult to change the oscillation, and that is normally an >advantage. 
Normally, an oscillator cannot detect quantum or molecular >phenomena, and we would not want it to do so. A digital ring oscillator composed of Schmitt Triggers (with hysteresis) can be designed to have a slow rise time of ten microseconds, but they respond to an input transition in 100ps, to use round numbers. To illustrate the powerful effect that these facts have on the recording of thermal noise, I will give the details of its operation. Assume there is a +/- 1 millivolt thermal noise present on the output of an inverter. Assume a Schmitt trigger will switch when its input rises above 1v for a system using 2v power supplies. The oscillator runs at 100 kHz. How long will it take for the input to rise 2mV? That is 2mV divided by 1V/10us or 50ns. So on every cycle of the oscillator there is a 50ns time when uncertainty exists. This is a half percent on each cycle. Since multiple oscillators will be involved, each with a half percent uncertainty, one can see that by using 200 such oscillators XORed together, the output would be quite random. But in practice, 200 oscillators are not needed because there are several sources of uncertainty in a well designed Large Signal Random Number Generator such as the one I designed at a large semiconductor company. It was fabricated and tested by a team of engineers. It worked well. It was evaluated by the CIA and they sent us a report on its characteristics. Have you built any hardware random number generators using large signals? Small signals? Mr. Ritter says, "It is this accumulation of energy that makes it difficult to change the oscillation....". It is easy to change the oscillation period using capacitors that are connected or disconnected from the oscillator by switches that are controlled by signals from the random string. This arrangement amplifies any thermal noise that is captured. To be more detailed, when a 1mV noise does affect a bit value that is shifted into a register, that bit value changes the frequency of oscillation of one or more oscillators. The XOR combines these changes for a while until the combined bitstream is sampled. I contend that this is not just illusory complexity, it is an amplification of a thermal noise into a large product. >A signal composed of many oscillators, each doing their own thing, is >admittedly complex. But complex relationships are not, by themselves, >cryptographically secure. We could even think to simulate such a >system numerically, in which case the system is clearly no more than >yet another pseudorandom state machine waiting to be exposed. And >while any such simulation might not be exact, it could be close, and >we could continually adjust the simulation to the reality we do see. I hope that you would simulate the 200 oscillator example I gave. Yes, you can add the +/- 1mV noise and adjust to make it as accurate as you care to invest time for. I have read your pat statement above several times recently, and I disagree with it. It is possible to design a complex circuit that is poorly done and which therefore would fail tests for randomness. But you should not get hung up on poor designs that fit your expectation. You should open your mind to the possibility that talented design engineers might do a good job using techniques you wish did not exist. You can change your opinion. I will send you a relevant patent number through private email so you can see the drawings. .·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·.
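The XOR-of-many-oscillators argument above can be made concrete with a toy numerical model. This is only a sketch of the structure being described, not the patented circuit: free-running square waves near 100 kHz, a small random timing jitter standing in for the 50 ns uncertainty window, an XOR of all of them, and a slow regular sampling clock. Every parameter is an illustrative round number.

    import random

    def oscillator_bit(t, period, jitter):
        # square wave with a small random perturbation of its phase
        phase = (t + random.gauss(0.0, jitter)) % period
        return 1 if phase < period / 2 else 0

    def sample_xor(t, periods, jitter):
        bit = 0
        for p in periods:
            bit ^= oscillator_bit(t, p, jitter)
        return bit

    random.seed(1)
    n_osc = 200                  # the "200 oscillators" of the example
    periods = [10e-6 * (1 + random.uniform(-0.01, 0.01)) for _ in range(n_osc)]
    jitter = 25e-9               # a fraction of the 50 ns uncertainty window
    sample_interval = 1e-3       # sample the XOR output every millisecond

    bits = [sample_xor(i * sample_interval, periods, jitter) for i in range(1000)]
    print(sum(bits) / float(len(bits)))   # near 0.5, but balance alone says nothing about strength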
Subject: Re: hardRandNumbGen Date: Sun, 24 Jan 1999 17:48:24 GMT From: ritter@io.com (Terry Ritter) Message-ID: <36ab5cde.5814163@news.io.com> References: <36AB25E4.2E7E@complex.net> Newsgroups: sci.crypt Lines: 171 On Sun, 24 Jan 1999 03:53:40 -1000, in <36AB25E4.2E7E@complex.net>, in sci.crypt ".·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·." <real@complex.net> wrote: >Terry Ritter wrote: >[...] >>Unfortunately "oscillation" inherently seems to imply some amount of >>saved and time-delayed energy. It is this accumulation of energy that >>makes it difficult to change the oscillation, and that is normally an >>advantage. Normally, an oscillator cannot detect quantum or molecular >>phenomena, and we would not want it to do so. > >A digital ring oscillator composed of Schmitt Triggers (with >hysteresis) can be designed to have a slow rise time of ten >microseconds, Presumably this means that the effective source resistance is large compared to the input capacitance, and the source of the delay is an R-C ramp to the next stage. >but they respond to an input transition in 100ps, to >use round numbers. To illustrate the powerful effect that these >facts have on the recording of thermal noise, I will give the >details of its operation. Assume there is a +/- 1 millivolt thermal >noise present on the output of an inverter. That means that this "large signal" design is probably sensitive to even tiny power and ground transients. It is going to be very hard to distinguish the effects of "real" thermal noise from transient feedback due to the structure of the circuit. So how can we have confidence in the result? Statistical testing cannot distinguish between "physical" and "pseudo" randomness. >Assume a Schmitt trigger >will switch when its input rises above 1v for a system using 2v >power supplies. The oscillator runs at 100khz. How long will it take >for the input to rise 2mV? That is 2mV divided by 1V/10us or 50ns. >So on every cycle of the oscillator there is a 50ns time when >uncertainty exists. This is a half percent on each cycle. As a rare peak value, presumably. >Since multiple oscillators will be involved, each with a half percent >uncertainty, one can see that by using 200 such oscillators XORed >together, the output would be quite random. What I see is a huge complexity-based increase in signal transitions (a frequency increase) which will be hard to distinguish from heat-based noise. And if we cannot distinguish operation *with* noise from operation *without* noise, we have no way to prove that noise is involved at all. Other than claims and handwaves, of course. >But in practice, 200 >oscillators are not needed because there are several sources of >uncertainty in a well designed Large Signal Random Number Generator >such as the one I designed at a large semiconductor company. It was >fabricated and tested by a team of engineers. It worked well. I have looked at the published results a number of times. They were in fact part of the basis for my investigation of the numerical relationship between repeats in sampling and the overall population. Easy calculations using the published results show that the effective population of values is 1/4 the claimed ideal, which shows that the design was not as good as you thought. >It was >evaluated by the CIA and they sent us a report on its >characteristics. The published (admittedly meager) experimental evidence says otherwise. >Have you built any hardware random number >generators using large signals? 
The claimed basis for your generator is thermal noise, which is NOT large-signal. A large-signal digital system is a PSEUDO-random digital RNG, and can be implemented in software as well as hardware. So, yes, certainly I have implemented and tested many large signal (software) RNG's. Some software computations are hard to reverse. But few if any of the conventional statistical RNG's have stood up to attack. Just giving a hardware design and claiming "nobody can break this" is the sort of thing we see on sci.crypt all the time. The reasoning about this design is contradictory: Supposedly the large signal design is "random" because it senses low-level noise. Yet the circuit is supposedly suitable for a noisy digital chip because it is a "large-signal" design. There is a fundamental problem in making both claims at the same time. >Small signals? Yes. >Mr. Ritter says, "It is this accumulation of energy that makes it >difficult to change the oscillation....". It is easy to change the >oscillation period using capacitors that are connected or >disconnected from the oscillator by switches that are >controlled by signals from the random string. But that approach is digital complexity, and not thermal randomness. It can be simulated in software. It is PSEUDO-random. Maybe it is strong, maybe not, but there is certainly no proof. >This arrangement amplifies any thermal noise that is captured. >To be more detailed, when a 1mV noise does affect a bit value that >is shifted into a register, that bit value changes the frequency >of oscillation of one or more oscillators. The XOR combines these >changes for a while until the combined bitstream is sampled. I >contend that this is not just illusory complexity, it is an >amplification of a thermal noise into a large product. The obvious experiment, then, is to take the device to cryogenic temperatures and see how it performs. If the output still has good statistics, we can suspect that the output does not represent thermal noise at all, but is just a complex digital system. Was such an experiment performed? >>A signal composed of many oscillators, each doing their own thing, is >>admittedly complex. But complex relationships are not, by themselves, >>cryptographically secure. We could even think to simulate such a >>system numerically, in which case the system is clearly no more than >>yet another pseudorandom state machine waiting to be exposed. And >>while any such simulation might not be exact, it could be close, and >>we could continually adjust the simulation to the reality we do see. > >I hope that you would simulate the 200 oscillator example I gave. But your design does not use 200 oscillators, does it? >Yes, >you can add the +/- 1mV noise and adjust to make it as accurate as you >care to invest time for. I have read your pat statement above several >times recently, and I disagree with it. It is possible to design a >complex circuit that is poorly done and which therefore would fail >tests for randomness. Even PSEUDO-random RNG's pass statistical tests. Those tests have nothing to do with cryptographic unpredictability or "strength." Yet strength is what you claim. >But you should not get hung up on poor designs >that fit your expectation. You should open your mind to the >possibility that talented design engineers might do a good job using >techniques you wish did not exist. I have no such wish. >You can change your opinion. 
I think you have missed the distinction between unpredictable randomness for cryptography, and ordinary statistical randomness. >I will send you a relevant patent number through private email so >you can see the drawings. I have seen the technical article. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
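Ritter's remark above about "the numerical relationship between repeats in sampling and the overall population" refers to estimating how many distinct values a generator can really produce from how often sampled values repeat. A sketch of that kind of calculation, using the standard birthday-style estimate N ~ s(s-1)/(2r) rather than any particular published formula; the undersized generator is a made-up example:

    import random

    def estimate_population(samples):
        seen = {}
        repeats = 0
        for v in samples:
            repeats += seen.get(v, 0)          # each earlier copy forms one repeated pair
            seen[v] = seen.get(v, 0) + 1
        s = len(samples)
        return s * (s - 1) / (2.0 * repeats) if repeats else float('inf')

    # a generator that only ever reaches 2**14 values of a nominal 16-bit range
    random.seed(2)
    samples = [random.randrange(2**14) for _ in range(2000)]
    print(estimate_population(samples))        # near 2**14 = 16384, well short of 65536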
Subject: Re: hardRandNumbGen Date: Sun, 24 Jan 1999 18:45:25 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36ab69fd.16112067@nntp.ix.netcom.com> References: <36ab5cde.5814163@news.io.com> Newsgroups: sci.crypt Lines: 17 On Sun, 24 Jan 1999 17:48:24 GMT, ritter@io.com (Terry Ritter) wrote: >Even PSEUDO-random RNG's pass statistical tests. Those tests have >nothing to do with cryptographic unpredictability or "strength." That statement needs to be added to the FAQ on Crypto-Grade Randomness. It says it all. Bob Knauer "It is not the function of our government to keep the citizen from falling into error; it is the function of the citizen to keep the government from falling into error." --Justice Robert H. Jackson
Subject: Re: hardRandNumbGen Date: 25 Jan 99 02:37:29 GMT From: jsavard@ecn.ab.ca () Message-ID: <36abd8e9.0@ecn.ab.ca> References: <36ab69fd.16112067@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 17 R. Knauer (rcktexas@ix.netcom.com) wrote: : On Sun, 24 Jan 1999 17:48:24 GMT, ritter@io.com (Terry Ritter) wrote: : >Even PSEUDO-random RNG's pass statistical tests. Those tests have : >nothing to do with cryptographic unpredictability or "strength." : That statement needs to be added to the FAQ on Crypto-Grade : Randomness. : It says it all. It does indeed, but it will probably have to be expanded and commented upon before it will "say it all" clearly enough so that everyone understands what it means. Many people have heard this, but because they have not understood, they did not believe. John Savard
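The statement being discussed, that even pseudo-random generators pass statistical tests, is easy to demonstrate. A minimal sketch, using Python's built-in Mersenne Twister as a stand-in for any software generator: every byte it will ever produce is fixed by the seed, yet its byte frequencies sail through a simple chi-square test.

    import random

    def chi_square(data, bins=256):
        expected = len(data) / float(bins)
        counts = [0] * bins
        for b in data:
            counts[b] += 1
        return sum((c - expected) ** 2 / expected for c in counts)

    prng = random.Random(1999)                 # completely deterministic given the seed
    data = [prng.getrandbits(8) for _ in range(100000)]
    # For 255 degrees of freedom, values of roughly 200-310 are unremarkable.
    print(chi_square(data))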
Subject: Re: hardRandNumbGen Date: Mon, 25 Jan 1999 11:55:36 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36ac5b28.1691712@nntp.ix.netcom.com> References: <36abd8e9.0@ecn.ab.ca> Newsgroups: sci.crypt Lines: 73 On 25 Jan 99 02:37:29 GMT, jsavard@ecn.ab.ca () wrote: >: >Even PSEUDO-random RNG's pass statistical tests. Those tests have >: >nothing to do with cryptographic unpredictability or "strength." >: That statement needs to be added to the FAQ on Crypto-Grade >: Randomness. >: It says it all. >It does indeed, but it will probably have to be expanded and commented >upon before it will "say it all" clearly enough so that everyone >understands what it means. Many people have heard this, but because they >have not understood, they did not believe. I agree. Here is a post from Patrick Juola that expands on this in a way that can be understood by all. +++++ On 21 Jan 1999 08:23:54 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: You're not seeing the fundamental distinction between "irrationality" and "randomness" in that randomness is a function, not of a number, but of a process. Just for clarification: *Any* number/string can be the result of a uniformly random process. In fact, a uniformly random process will always produce all numbers equiprobably, by construction. Any number can also be produced as the result of a non-random process, although for many numbers this will be a very uninteresting process such as a simple table-lookup and copy. The closest relative for irrationality is not a property such as "non-repeating fraction" (which is a thoroughly bogus definition, by the way), but the method by which you GET a rational number. To wit, a rational number can be generated as the ratio of two integers p and q (q != 0 for the formalists, pthththththth). An irrational number is a number that cannot be so generated. Now, it so happens (lucky us) that any number that can be generated as the ratio of two integers can also be written as a terminating and/or repeating decimal string. This is an independent property, first proved in the year <mumble> by someone no doubt too famous for me to remember offhand. But the fact that you can characterize a number as rational or irrational by inspection is, strictly speaking, a lucky fluke. There's a similar definition for, e.g., transcendentals -- a transcendental number, of course, is a number that cannot be produced as the solution to a polynomial equation with integer coefficients. Transcendentals are a strict subset of irrationals -- sqrt(2), for instance, is irrational but not transcendental. However, there's no way to characterize *by inspection* whether or not a given irrational number is transcendental. I can easily prove a given number is *NOT* transcendental by showing a polynomial to which &c., but I can't go the other way. So the point is that the characterization of both irrationals and transcendentals is a) strictly process-driven, and b) defined in the negative sense -- "no possible way to..." That irrationals can be cleanly defined in typographic properties should *not* lead you to believe that randomness can also be defined in typographic properties or that it can be defined in positive terms. +++++ Bob Knauer "An honest man can feel no pleasure in the exercise of power over his fellow citizens." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Mon, 25 Jan 1999 04:44:51 -1000 From: ".·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·." <real@complex.net> Message-ID: <36AC8363.6D55@complex.net> References: <36abd8e9.0@ecn.ab.ca> Newsgroups: sci.crypt Lines: 240 On Sun, 24 Jan 1999 03:53:40 -1000, in <36AB25E4.2E7E@complex.net>, in sci.crypt <real@complex.net> sinewave wrote: >Terry Ritter wrote: sinewave: >>A digital ring oscillator composed of Schmitt Triggers (with >>hysteresis) can be designed to have a slow rise time of ten >>microseconds, Terry: >Presumably this means that the effective source resistance is large >compared to the input capacitance, and the source of the delay is an >R-C ramp to the next stage. Yes. >>but they respond to an input transition in 100ps, to >>use round numbers. To illustrate the powerful effect that these >>facts have on the recording of thermal noise, I will give the >>details of it operation. Assume there is a +/- 1 millivolt thermal >>noise present on the output of an inverter. >That means that this "large signal" design is probably sensitive to >even tiny power and ground transients. It is going to be very hard to >distinguish the effects of "real" thermal noise from transient >feedback due to the structure of the circuit. So how can we have >confidence in the result? Statistical testing cannot distinguish >between "physical" and "pseudo" randomness. In the real world, it is not always possible to tell. Integrating a random number generator (RNG) on a commodity IC is similar to a manned expedition to MARS: they must take everything with them into that harsh environment that they will need. If the craft is buffeted by periodic winds, they do not have the luxury of calling back to base and saying, "Houston, you told us this was a vacuum, please make it a perfect vacuum, over". The RNG will encounter non-ideal electrical environments. It should have redundant systems which are combined to give the final random number the best shot at being unpredictable, not perfect, but unpredictable. The multiple ring oscillator design described here should be a part of the on-chip subsystem: it is an unpredictable seed generator. One can also add a differential amplifier RNG with power supply noise rejection capabilities, a PRNG, a counter, a hash, and storage for the previous random number to use for initialization and for checking firmware. The RNG described above is a Large Signal Random Number Generator, to be described in more detail, below. >>Assume a Schmitt trigger >>will switch when its input rises above 1v for a system using 2v >>power supplies. The oscillator runs at 100khz. How long will it take >>for the input to rise 2mV? That is 2mV divided by 1V/10us or 50ns. >>So on every cycle of the oscillator there is a 50ns time when >>uncertainty exists. This is a half percent on each cycle. >As a rare peak value, presumably. Yes, I am using round numbers. >>Since multiple oscillators will be involved, each with a half percent >>uncertainty, one can see that by using 200 such oscillators XORed >>together, the output would be quite random. >What I see is a huge complexity-based increase in signal transitions >(a frequency increase) which will be hard to distinguish from >heat-based noise. And if we cannot distinguish operation *with* noise >from operation *without* noise, we have no way to prove that noise is >involved at all. Other than claims and handwaves, of course. I am glad you raised the "handwaves" metaphore, because handwaves are what toss coins. 
A complex person tosses a coin and you might think it is random. The oscillator RNG in this discussion is directly analogous to a coin toss in many ways. If a coin is not rotating (oscillating) it will fall back into the hand in the same position that it started from. It is the rotation that helps the randomness, not only the complexity of the nervous system, the elasticity of the skin, and the trembling of the muscles. The rotation should be fast for best results. A juggler could become skilled at non-random coin tosses for one coin that rotates slowly. But if she tosses three coins with rapid rotation then it is likely that random results will occur. If a periodic wind is present in a coin toss, yes, it will influence the outcome, but the result will often be recognizable as a useful random throw, or a throw that was blown away. The same with this RNG. The major source of randomness of this RNG is the unsynchronized nature of multiple oscillators with randomly changing frequencies. This is a large signal phenomenon, which cannot be accurately described mathematically. Similar to a coin toss, many analog variables are involved. These continuous variations of many influences cause seeming randomness. If you can mathematically describe a human coin toss, then so you can with this RNG. But you cannot, and I cannot. That does not invalidate the usefulness of these seed generators, not in this century. >>But in practice, 200 >>oscillators are not needed because there are several sources of >>uncertainty in a well designed Large Signal Random Number Generator >>such as the one I designed at a large semiconductor company. It was >>fabricated and tested by a team of engineers. It worked well. >I have looked at the published results a number of times. They were >in fact part of the basis for my investigation of the numerical >relationship between repeats in sampling and the overall population. >Easy calculations using the published results show that the effective >population of values is 1/4 the claimed ideal, which shows that the >design was not as good as you thought. Correct, that first version in that report had an XOR gate placed in a bad position, causing twice as many ones as zeros. The CIA alerted us to my mistake with that one gate. When removed, the results are much better. I still regret my mistake in that one gate placement. >>It was >>evaluated by the CIA and they sent us a report on its >>characteristics. >The published (admittedly meager) experimental evidence says >otherwise. Single reports do not tell all of the facts. >>Have you built any hardware random number >>generators using large signals? >The claimed basis for your generator is thermal noise, which is NOT >large-signal. A large-signal digital system is a PSEUDO-random >digital RNG, and can be implemented in software as well as hardware. >So, yes, certainly I have implemented and tested many large signal >(software) RNG's. The earlier description was an illustration for some readers to examine. It was not an exhaustive explanation of the theory behind the design. I have now expanded upon the description, explaining the large signals as being analogous to coin tosses which must rotate due to a complex hand waving motion. The complexity of my circuit design mimics, on a small scale, the complexities of the human hand wave and coin toss. The frequency changes in the design are the analogy of the hand motion. Thermal irregularities and power supply variations also contribute to this hand motion. 
Radioactive decay is also a large signal RNG. It may be considered to be both digital and analog, as this RNG may be. >Some software computations are hard to reverse. But few if any of the >conventional statistical RNG's have stood up to attack. Just giving a >hardware design and claiming "nobody can break this" is the sort of >thing we see on sci.crypt all the time. I do not claim nobody can break this. I am presenting concepts to a wide reading audience. Some of these concepts are less sound than others, so the readers have the opportunity to judge various attempts to produce randomness in a harsh environment. I hope that they will fare better than I did. >The reasoning about this design is contradictory: Supposedly the >large signal design is "random" because it senses low-level noise. >Yet the circuit is supposedly suitable for a noisy digital chip >because it is a "large-signal" design. There is a fundamental problem >in making both claims at the same time. I have addressed this above. A large signal, digital oscillator has small noise on top of that. The randomness is primarily based on the coin toss analogy. The thermal noise calculation first given is a secondary source of randomness. The periodic power supply noise will affect this design more in some ways than it would affect an analog circuit with well designed differential and common mode considerations. But the ways periodic noise affects these circuits do not ruin the unpredictability of the resulting numbers. I leave that discussion for another day. snip... >>Mr. Ritter says, "It is this accumulation of energy that makes it >>difficult to change the oscillation....". It is easy to change the >>oscillation period using capacitors that are connected or >>disconnected from the oscillator by switches that are >>controlled by signals from the random string. >But that approach is digital complexity, and not thermal randomness. >It can be simulated in software. It is PSEUDO-random. Maybe it is >strong, maybe not, but there is certainly no proof. It is analog complexity. I will give no proof today. Give me proof of coin tossing that does not involve complexity or strength. snip.. >The obvious experiment, then, is to take the device to cryogenic >temperatures and see how it performs. If the output still has good >statistics, we can suspect that the output does not represent thermal >noise at all, but is just a complex digital system. Was such an >experiment performed? No. The circuit depends on many complex factors for randomness, as a coin toss does. In some imagined laboratory experiment, it is feasible to control all factors, causing non-random results. In commodity applications, Large Signal Random Number Generators are sometimes superior to small signal based generators and both may appear on a single IC. >>>A signal composed of many oscillators, each doing their own thing, is >>>admittedly complex. But complex relationships are not, by themselves, >>>cryptographically secure. We could even think to simulate such a >>>system numerically, in which case the system is clearly no more than >>>yet another pseudorandom state machine waiting to be exposed. And >>>while any such simulation might not be exact, it could be close, and >>>we could continually adjust the simulation to the reality we do see. >> >>I hope that you would simulate the 200 oscillator example I gave. >But your design does not use 200 oscillators, does it? No, it had 3. The 200 oscillator example is for a simplified explanation of one source of randomness. 
>>Yes, >>you can add the +/- 1mV noise and adjust to make it as accurate as you >>care to invest time for. I have read your pat statement above several >>times recently, and I disagree with it. It is possible to design a >>complex circuit that is poorly done and which therefore would fail >>tests for randomness. >Even PSEUDO-random RNG's pass statistical tests. Those tests have >nothing to do with cryptographic unpredictability or "strength." Yet >strength is what you claim. Yes, it is a strong source, as upcoming product releases are expected to show. Just because old PRNGs pass some tests does not mean that new designs are bad, as you imply. >I think you have missed the distinction between unpredictable >randomness for cryptography, and ordinary statistical randomness. A PRNG may be depended upon to produce the same string under certain easy to arrange conditions. This RNG does the opposite of that. Two sequential random numbers from this circuit would prove that to anyone who tests it, most of the time. Thank you for this polite discussion.
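For readers trying to picture the architecture being argued about, a deliberately crude software caricature follows: a few free-running oscillators whose periods are nudged by bits fed back from the output (the switched-capacitor idea), XORed together and sampled slowly. The class name and every number are invented for illustration, and, being pure software, it is exactly the kind of pseudo-random simulation Ritter describes; it shows only the wiring, not the claimed physics.

    class ToyLargeSignalRNG:
        def __init__(self, periods):
            self.periods = list(periods)       # oscillator half-periods, in ticks
            self.phases = [0] * len(periods)
            self.feedback = 0

        def step(self):
            bit = 0
            for i, p in enumerate(self.periods):
                self.phases[i] = (self.phases[i] + 1) % (2 * p)
                bit ^= 1 if self.phases[i] < p else 0
            return bit

        def sample(self, ticks=1000):
            out = 0
            for _ in range(ticks):
                out = self.step()
            # feedback: the sampled bit "switches a capacitor", changing one period
            self.periods[self.feedback % len(self.periods)] += out
            self.feedback += 1
            return out

    rng = ToyLargeSignalRNG([97, 89, 83])      # three oscillators, as in the design
    print(''.join(str(rng.sample()) for _ in range(64)))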
Subject: Re: hardRandNumbGen Date: Mon, 25 Jan 1999 13:23:11 -0500 From: "Trevor Jackson, III" <fullmoon@aspi.net> Message-ID: <36ACB68D.34FA34A4@aspi.net> References: <36AC8363.6D55@complex.net> Newsgroups: sci.crypt Lines: 290 Two points, in context below... .·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·. wrote: > On Sun, 24 Jan 1999 03:53:40 -1000, in <36AB25E4.2E7E@complex.net>, in > sci.crypt <real@complex.net> sinewave wrote: > > >Terry Ritter wrote: > > sinewave: > >>A digital ring oscillator composed of Schmitt Triggers (with > >>hysteresis) can be designed to have a slow rise time of ten > >>microseconds, > > Terry: > >Presumably this means that the effective source resistance is large > >compared to the input capacitance, and the source of the delay is an > >R-C ramp to the next stage. > > Yes. > > >>but they respond to an input transition in 100ps, to > >>use round numbers. To illustrate the powerful effect that these > >>facts have on the recording of thermal noise, I will give the > >>details of it operation. Assume there is a +/- 1 millivolt thermal > >>noise present on the output of an inverter. > > >That means that this "large signal" design is probably sensitive to > >even tiny power and ground transients. It is going to be very hard to > >distinguish the effects of "real" thermal noise from transient > >feedback due to the structure of the circuit. So how can we have > >confidence in the result? Statistical testing cannot distinguish > >between "physical" and "pseudo" randomness. > > In the real world, it is not always possible to tell. Integrating > a random number generator (RNG) on a commodity IC is similar to > a manned expedition to MARS: they must take everything with them > into that harsh environment that they will need. If the craft is > buffeted by periodic winds, they do not have the luxury of calling > back to base and saying, "Houston, you told us this was a vacuum, > please make it a perfect vacuum, over". The RNG will encounter > non-ideal electrical environments. It should have redundant systems > which are combined to give the final random number the best shot at > being unpredictable, not perfect, but unpredictable. The multiple > ring oscillator design described here should be a part of the > on-chip subsystem: it is an unpredictable seed generator. One can > also add a differential amplifier RNG with power supply noise > rejection capabilities, a PRNG, a counter, a hash, and storage for > the previous random number to use for initialization and for > checking firmware. The RNG described above is a Large Signal > Random Number Generator, to be described in more detail, below. > > >>Assume a Schmitt trigger > >>will switch when its input rises above 1v for a system using 2v > >>power supplies. The oscillator runs at 100khz. How long will it take > >>for the input to rise 2mV? That is 2mV divided by 1V/10us or 50ns. > >>So on every cycle of the oscillator there is a 50ns time when > >>uncertainty exists. This is a half percent on each cycle. > > >As a rare peak value, presumably. > > Yes, I am using round numbers. > > >>Since multiple oscillators will be involved, each with a half percent > >>uncertainty, one can see that by using 200 such oscillators XORed > >>together, the output would be quite random. > > >What I see is a huge complexity-based increase in signal transitions > >(a frequency increase) which will be hard to distinguish from > >heat-based noise. 
And if we cannot distinguish operation *with* noise > >from operation *without* noise, we have no way to prove that noise is > >involved at all. Other than claims and handwaves, of course. > > I am glad you raised the "handwaves" metaphor, because handwaves are > what toss coins. A complex person tosses a coin and you might think it > is random. The oscillator RNG in this discussion is directly analogous > to a coin toss in many ways. If a coin is not rotating (oscillating) > it will fall back into the hand in the same position that it started from. > It is the rotation that helps the randomness, not only the complexity > of the nervous system, the elasticity of the skin, and the trembling of > the muscles. The rotation should be fast for best results. A juggler > could become skilled at non-random coin tosses for one coin that > rotates slowly. But if she tosses three coins with rapid rotation then > it is likely that random results will occur. If a periodic wind is > present in a coin toss, yes, it will influence the outcome, but the > result will often be recognizable as a useful random throw, or a throw > that was blown away. The same with this RNG. Human gestures are not a good foundation for system design. There are large, industrial concerns that rely upon human-gesture-generated unpredictability. Their interest is *not* statistical randomness as we find in simulations, games, and Monte Carlo tests (in spite of the latter name). Their interest is the same as ours: unpredictability. They are called casinos. In spite of the fantastic efforts taken to eliminate predictability in games of chance, human gestures can still dominate the outcomes completely. I'm not referring to shuffling cards systematically, but to rolling a roulette ball against the wheel so precisely that out of 20 tries a human can obtain a predicted outcome (slot 17) 10 times. 50% success. I think that constitutes predictability. The hand-eye coordination involved is of an extreme level, requiring decades of practice to achieve. But it is real. The complexity of controlling a roulette wheel appears to me to be far larger than that of a coin toss. Even a fast one. Without detailed scrutiny of your design I cannot tell whether it is robust. No inspection of the outcome will convince me it is. However, the design philosophy you have expressed leads me to believe there will be weaknesses in the system. > The major source of randomness of this RNG is the unsynchronized > nature of multiple oscillators with randomly changing frequencies. This > is a large signal phenomenon, which cannot be accurately described > mathematically. Similar to a coin toss, many analog variables are > involved. These continuous variations of many influences cause seeming > randomness. If you can mathematically describe a human coin toss, then > so you can with this RNG. But you cannot, and I cannot. That does not > invalidate the usefulness of these seed generators, not in this > century. The phrase "randomly changing oscillators" is key to the paragraph above. I would like to question the use of the term random in the sense of unpredictable. Since the (intermediate) output of the system is driving the changes to the oscillators, there is a full feedback loop present. This kind of system may pass statistical tests for randomness, but it may not be unpredictable. The result may "cause seeming randomness", but this is far from unpredictability. For instance, how much correlation would you expect from a set of such devices initialized identically? 
Even the presumption that the output would pass statistical tests is questionable. One famous gaffe in PRNG design was Knuth's composite generator, which he called super-random. Unfortunately it was a closed loop design. He did not foresee the possibility of cycles so short as to be degenerate. All closed loop designs contain this danger. If the hardware output is driving the hardware configuration, it is avidly searching for a configuration that represents a local minimum in its volatility. Now, given initialization in a configuration near such a local minimum, how much divergence would we find in the output of a set of these devices? > >>But in practice, 200 > >>oscillators are not needed because there are several sources of > >>uncertainty in a well designed Large Signal Random Number Generator > >>such as the one I designed at a large semiconductor company. It was > >>fabricated and tested by a team of engineers. It worked well. > > >I have looked at the published results a number of times. They were > >in fact part of the basis for my investigation of the numerical > >relationship between repeats in sampling and the overall population. > > >Easy calculations using the published results show that the effective > >population of values is 1/4 the claimed ideal, which shows that the > >design was not as good as you thought. > > Correct, that first version in that report had an XOR gate placed in > a bad position, causing twice as many ones as zeros. The CIA alerted > us to my mistake with that one gate. When removed, the results are > much better. I still regret my mistake in that one gate placement. > > >>It was > >>evaluated by the CIA and they sent us a report on its > >>characteristics. > > >The published (admittedly meager) experimental evidence says > >otherwise. > > Single reports do not tell all of the facts. > > >>Have you built any hardware random number > >>generators using large signals? > > >The claimed basis for your generator is thermal noise, which is NOT > >large-signal. A large-signal digital system is a PSEUDO-random > >digital RNG, and can be implemented in software as well as hardware. > >So, yes, certainly I have implemented and tested many large signal > >(software) RNG's. > > The earlier description was an illustration for some readers to examine. > It was not an exhaustive explanation of the theory behind the design. > I have now expanded upon the description, explaining the large > signals as being analogous to coin tosses which must rotate due to > a complex hand waving motion. The complexity of my circuit design > mimics, on a small scale, the complexities of the human hand wave > and coin toss. The frequency changes in the design are the analogy > of the hand motion. Thermal irregularities and power supply variations > also contribute to this hand motion. > > Radioactive decay is also a large signal RNG. It may be considered > to be both digital and analog, as this RNG may be. > > >Some software computations are hard to reverse. But few if any of the > >conventional statistical RNG's have stood up to attack. Just giving a > >hardware design and claiming "nobody can break this" is the sort of > >thing we see on sci.crypt all the time. > > I do not claim nobody can break this. I am presenting concepts to a > wide reading audience. Some of these concepts are less sound than > others, so the readers have the opportunity to judge various attempts > to produce randomness in a harsh environment. I hope that they will > fare better than I did. 
> > >The reasoning about this design is contradictory: Supposedly the > >large signal design is "random" because it senses low-level noise. > >Yet the circuit is supposedly suitable for a noisy digital chip > >because it is a "large-signal" design. There is a fundamental problem > >in making both claims at the same time. > > I have addressed this above. A large signal, digital oscillator has > small noise on top of that. The randomness is primarily based on the > coin toss analogy. The thermal noise calculation first given is a > secondary source of randomness. The periodic power supply noise > will affect this design more in some ways than it would affect an > analog circuit with well designed differential and common mode > considerations. But the ways periodic noise affects these circuits > do not ruin the unpredictability of the resulting numbers. I leave > that discussion for another day. > > snip... > > >>Mr. Ritter says, "It is this accumulation of energy that makes it > >>difficult to change the oscillation....". It is easy to change the > >>oscillation period using capacitors that are connected or > >>disconnected from the oscillator by switches that are > >>controlled by signals from the random string. > > >But that approach is digital complexity, and not thermal randomness. > >It can be simulated in software. It is PSEUDO-random. Maybe it is > >strong, maybe not, but there is certainly no proof. > > It is analog complexity. I will give no proof today. Give me proof > of coin tossing that does not involve complexity or strength. > > snip.. > > >The obvious experiment, then, is to take the device to cryogenic > >temperatures and see how it performs. If the output still has good > >statistics, we can suspect that the output does not represent thermal > >noise at all, but is just a complex digital system. Was such an > >experiment performed? > > No. The circuit depends on many complex factors for randomness, as a > coin toss does. In some imagined laboratory experiment, it is feasible > to control all factors, causing non-random results. In commodity > applications, Large Signal Random Number Generators are sometimes > superior to small signal based generators and both may appear on a > single IC. > > >>>A signal composed of many oscillators, each doing their own thing, is > >>>admittedly complex. But complex relationships are not, by themselves, > >>>cryptographically secure. We could even think to simulate such a > >>>system numerically, in which case the system is clearly no more than > >>>yet another pseudorandom state machine waiting to be exposed. And > >>>while any such simulation might not be exact, it could be close, and > >>>we could continually adjust the simulation to the reality we do see. > >> > >>I hope that you would simulate the 200 oscillator example I gave. > > >But your design does not use 200 oscillators, does it? > > No, it had 3. The 200 oscillator example is for a simplified explanation > of one source of randomness. > > >>Yes, > >>you can add the +/- 1mV noise and adjust to make it as accurate as you > >>care to invest time for. I have read your pat statement above several > >>times recently, and I disagree with it. It is possible to design a > >>complex circuit that is poorly done and which therefore would fail > >>tests for randomness. > > >Even PSEUDO-random RNG's pass statistical tests. Those tests have > >nothing to do with cryptographic unpredictability or "strength." Yet > >strength is what you claim. 
> > Yes it is a strong source, as upcoming product releases are expected to > show. Just because old PRNGs pass some tests does not mean that new > designs are bad, as you imply. > > >I think you have missed the distinction between unpredictable > >randomness for cryptography, and ordinary statistical randomness. > > A PSRG may be depended upon to produce the same string under certain > easy to arrange conditions. This RNG does the opposite of that. Two > sequential random numbers from this circuit would prove that to > anyone who tests it, most of the time. > > Thank you for this polite discussion.
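Trevor Jackson's warning about closed loops can be illustrated numerically. The sketch below is in the spirit of the Knuth "super-random" anecdote rather than a reproduction of his algorithm: any deterministic generator whose output feeds back into its own configuration is a walk on a finite state space, so it must eventually enter a cycle, and for a random-looking map the cycle is typically only on the order of the square root of the state-space size.

    def tail_and_cycle(step, state):
        seen = {}
        i = 0
        while state not in seen:
            seen[state] = i
            state = step(state)
            i += 1
        return seen[state], i - seen[state]    # (steps before the cycle, cycle length)

    N = 10**6                                  # a million possible states
    step = lambda x: (x * x + 1) % N           # a self-feeding "random-looking" map

    for seed in (1, 42, 123456):
        tail, cycle = tail_and_cycle(step, seed)
        print("seed %d: falls into a cycle of length %d after %d steps" % (seed, cycle, tail))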
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 03:24:11 -1000 From: handWave <real9@complex9.net> Message-ID: <36ADC1FB.4212@complex9.net> References: <36ACB68D.34FA34A4@aspi.net> Newsgroups: sci.crypt Lines: 116 Trevor Jackson, III wrote: handWave wrote: > > I am glad you raised the "handwaves" metaphore, because handwaves are > > what toss coins. A complex person tosses a coin and you might think it > > is random. The oscillator RNG in this discussion is directly analogous > > to a coin toss in many ways. If a coin is not rotating (oscillating) > > it will fall back into the hand in the same position that it stated from. > > It is the rotation that helps the randomness, not only the complexity > > of the nervous system, the elasticity of the skin, and the trembling of > > the muscles. The rotation should be fast for best results. A juggler > > could become skilled at non-random coin tosses for one coin that > > rotates slowly. But if she tosses three coins with rapid rotation than > > it is likely that random results will occur. If a periodic wind is > > present in a coin toss, yes, it will influence the outcome, but the > > result will often be recognizable as a useful random throw, or a throw > > that was blown away. The same with this RNG. > > Human gestures are not a good foundation for system design. There are large, > industrial concerns that rely upon human-gesture-generated unpredictability. > Their interest is *not* statistical randomness as we find in simulations, > games, and Monte Carlo tests (in spite of the latter name). Their interest > is the same as ours: unpredicability. They are called casinos. The product I designed was evaluated for casinos by Bally, a potential customer. > > In spite of the fantastic efforts taken to eliminate predictability in games > of chance human gestures can still dominate the outcomes completely. I'm not > referring to shuffling cards systematically, but to rolling a roulette ball > against the wheel so precisely that out of 20 tries a human can obtain a > predicted outcome (slot 17) 10 times. 50% success. I think that constitutes > predictability. Yes, this is like the skilled juggler I described above. The analogy to a hardRandNumbGen is a skilled hacker who controls the power supply noise, the clock glitches, the radio beams so that the RNG becomes under his control. The chip designer must anticipate such antics, and prepare the module for lunar insertion. > The hand-eye coordination involved is of an extreme level, requiring decades > of practice to achieve. But it is real. The complexity of controlling a > roulette wheel appears to me to be far larger than that of a coin toss. Even > a fast one. I dispute this. A coin has one bit of output, a wheel has many bits in one toss. A wheel is a big target with a smaller bandwidth for RPMs. A coin has a wider bandwidth, perhaps 1hz to 50 hz, a wheel, from .1 hz to .5 hz on the initial spin. A coin may be tossed from a rooftop. Wheels would fracture under such conditions. > > Without detailed scrutiny of your design I cannot tell whether it is robust. I can send you the patent number by private email, upon request posted here in sci.crypt. > No inspection of the outcome will convince me it is. However, the design > philosophy you have expressed leads me to believe there will be weaknesses in > the system. Yes there are weaknesses. A moonshot too has weaknesses, and people do their best to prepare a module for its harsh environment. The payoff is so sweet, though. 
It is better to have thrashed and lost some entropy, than never to have thrashed at all. > > > The major source of randomness of this RNG is the unsynchronized > > nature of multiple oscillators with randomly changing frequencies. This > > is a large signal phenomenon, which cannot be accurately described > > mathematically. Similar to a coin toss, many analog variables are > > involved. These continuous variations of many influences cause seeming > > randomness. If you can mathematically describe a human coin toss, then > > so you can with this RNG. But you cannot, and I cannot. That does not > > invalidate the usefulness of these seed generators, not in this > > century. > > The phrase "randomly changing oscillators" is key to the paragrph above. I > would like to question the use of the term random in the sense of > unpredictable. Since the (intermediate) output of the system is driving the > changes to the oscillators there is a full feedback loop present. This kind > of system may pass statistical tests for randomness, but it may not be > unpredictable. The result may "cause seeming randomness", but this is far > from unpredictability. For instance, how much correlation would you expect > from a set of such devices initialized identically? We ran mathematical auto-correlation tests looking exactly for this, and got good results. This type of multi-oscillator, frequency modulated, unsynchronized circuit is part analog and part digits. It is susceptible to realities as a hand exists in realities during a toss. Many subtle influences come into play, including the capacitance between the moon and the IC. > > Even the presmption that the output would pass statistical tests is > questionable. One famous gafffe in PRNG design was Knuth's composite > generator, which he called superrandom. Unfortunately it was a closed loop > design. It was a computer program. >He did not forsee the possibility of cycles so short as to be > degenerate. All closed loop designs contain this danger. If the hardware > output is driving the hardware configuration, it is avidly searching for a > configuration that represents a local minima in its volatility. > > Now, given initialization in a configuration near such a local minima, how > much divergence would we find in the output of a set of these devices? Good point. This is exactly what we were looking for. The results were excellent. I wave my hands vigorously at this point to emphasize that this type of circuit exists in the real world as we do. It is in the school of hard knocks. It can be defeated. But it has some value as a commodity product in certain well chosen scenarios. handWave
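The auto-correlation test handWave mentions can be sketched in a few lines: map the sampled bits to +1/-1 and check that the normalized correlation between the sequence and a lagged copy of itself stays near zero for every nonzero lag. As the rest of the thread keeps stressing, a flat auto-correlation is a statistical property only; by itself it says nothing about unpredictability.

    import random

    def autocorrelation(bits, lag):
        x = [1 if b else -1 for b in bits]
        n = len(x) - lag
        return sum(x[i] * x[i + lag] for i in range(n)) / float(n)

    random.seed(0)
    bits = [random.getrandbits(1) for _ in range(10000)]   # stand-in for sampled RNG output
    for lag in (1, 2, 5, 10, 100):
        print(lag, round(autocorrelation(bits, lag), 4))   # all close to 0.0 for this sample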
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 13:33:49 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36ADB62D.E681674F@stud.uni-muenchen.de> References: <36ADC1FB.4212@complex9.net> Newsgroups: sci.crypt Lines: 31 handWave wrote: > > > Even the presmption that the output would pass statistical tests is > > questionable. One famous gafffe in PRNG design was Knuth's composite > > generator, which he called superrandom. Unfortunately it was a closed loop > > design. > > It was a computer program. Having previously taken part in discussions in several threads of this group on random number generations, I doubt nevertheless that I have really known an answer to the following question: If I have two sources of randomness, one software and one hardware, both passing all statistical tests I apply equally well, why should I choose one source in preference to the other? And if additionally I don't know which sequence I get is from software and which is from hardware? (Compare the Turing test.) How does the origin of the sequence affect the workload of the analyst, if the software generation process involves so many parameters that for combinatorical reasons he has no chance of directly dealing with them but has to try to look instead for possible regularities/irregularities in the sequence itself and, by assumption, the sequences from the two sources are of equal statistical quality? (Note that the hardware source is (in my humble opinion) unpredictable simply because there are so many participating 'parameters' that the 'summation' (the end product) becomes unpredictable, cf. the casting of a dice.) M. K. Shen
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 12:28:17 -0500 From: "Trevor Jackson, III" <fullmoon@aspi.net> Message-ID: <36ADFB30.BB4B07FD@aspi.net> References: <36ADB62D.E681674F@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 60 Mok-Kong Shen wrote: > handWave wrote: > > > > > > Even the presmption that the output would pass statistical tests is > > > questionable. One famous gafffe in PRNG design was Knuth's composite > > > generator, which he called superrandom. Unfortunately it was a closed loop > > > design. > > > > It was a computer program. > > Having previously taken part in discussions in several threads of > this group on random number generations, I doubt nevertheless that > I have really known an answer to the following question: > > If I have two sources of randomness, one software and one hardware, > both passing all statistical tests I apply equally well, why should > I choose one source in preference to the other? And if additionally > I don't know which sequence I get is from software and which is from > hardware? (Compare the Turing test.) How does the origin of the > sequence affect the workload of the analyst, if the software > generation process involves so many parameters that for combinatorical > reasons he has no chance of directly dealing with them but has > to try to look instead for possible regularities/irregularities in > the sequence itself and, by assumption, the sequences from the > two sources are of equal statistical quality? (Note that the > hardware source is (in my humble opinion) unpredictable simply > because there are so many participating 'parameters' that the > 'summation' (the end product) becomes unpredictable, cf. the casting > of a dice.) The fundamental reason is that, for security purposes, we have to assume that our opponent can do anything we can do. We can re-run the software and obtain the identical output. We cannot re-run the hardware and get the same output. Thus the hardware is superior. The deceptive provision in your question is the fact that the sources are hidden. This amounts to security via obscurity. Obscurity fails catastrophicaly when it is breached. A bad thing because the opponent can steal a copy of the software and get every output we will every get. He cannot steal a copy of the machine and get identical outputs to ours. This line of thought identifies a possible opportunity for Bill Gates; a true marketing genius if there ever was one. Everyone alive in 1980 knew that software was the "plastic" of the decade and that the market for software was going to grow quickly. But no other person alive in 1980 forsaw just how big the market would be for really bad software. Everyone else was concentrating on reasonably good software. This is why Gates is a multi-deca-billionaire. Now, in crypto, you have identified another case in which people cannot tell whether someone is selling Good Stuff or Really Bad Crap. Since it is not reasonable to distinguish the two, we need an organization to produce a tiny amount of Good Stuff and massive quantities of Really Bad Crap, and sell it all as the former. No one could tell the difference, and, in theory, no one would care. Bill, are you listening?
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 19:25:23 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36ae1411.29221428@nntp.ix.netcom.com> References: <36ADFB30.BB4B07FD@aspi.net> Newsgroups: sci.crypt Lines: 45 On Tue, 26 Jan 1999 12:28:17 -0500, "Trevor Jackson, III" <fullmoon@aspi.net> wrote: >This line of thought identifies a possible opportunity for Bill Gates; a true >marketing genius if there ever was one. I guess you consider Attila the Hun to be a military genius too. :-) >But no other person alive in 1980 forsaw just how big the market would be >for really bad software. Hell, the auto industry knew that way before Gates used it in the S/W industry. He just took the same marketing concepts used by Henry Ford and built the same kind of fortune. "You can have any color Model T you want as long as it runs on Windows." >Now, in crypto, you have identified another case in which people cannot tell >whether someone is selling Good Stuff or Really Bad Crap. Since it is not >reasonable to distinguish the two, we need an organization to produce a tiny amount >of Good Stuff and massive quantities of Really Bad Crap, and sell it all as the >former. No one could tell the difference, and, in theory, no one would care. Soon Gates is gonna retire all his programmers at MicroShaft and install a TRNG to produce code. And now that his beta test force is big enough, he can partition the outputs and see what runs experimentally. Depending on which beta test group(s) order the next "revision", he can decide what to put in shrinkwrap. If it gets to the Windows Logo, it is good enough for the consuming public. If they don't like it, let them run UNIX. >Bill, are you listening? HA! Unka Bill is too busy working on his new TRNG. Bob Knauer "An honest man can feel no pleasure in the exercise of power over his fellow citizens." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 15:30:05 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AF22ED.9D0E25B0@stud.uni-muenchen.de> References: <36ADFB30.BB4B07FD@aspi.net> Newsgroups: sci.crypt Lines: 21 Trevor Jackson, III wrote: > > The fundamental reason is that, for security purposes, we have to assume that our > opponent can do anything we can do. We can re-run the software and obtain the > identical output. We cannot re-run the hardware and get the same output. Thus the > hardware is superior. > > The deceptive provision in your question is the fact that the sources are hidden. > This amounts to security via obscurity. Obscurity fails catastrophicaly when it is > breached. A bad thing because the opponent can steal a copy of the software and > get every output we will every get. He cannot steal a copy of the machine and get > identical outputs to ours. If you produce some sequence with a sufficiently good algorithm with a sufficiently long key and later forget that key, even you wouldn't be able to reproduce the sequence. As to stealing I suppose it is irrelevant in the present context. (If you have a one-time pad and that got stolen (copied), then what?) M. K. Shen
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 17:56:39 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36ae019b.24495482@nntp.ix.netcom.com> References: <36ADB62D.E681674F@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 16 On Tue, 26 Jan 1999 13:33:49 +0100, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >If I have two sources of randomness, one software and one hardware, >both passing all statistical tests I apply equally well, why should >I choose one source in preference to the other? Why do you persist in believing that statistical tests have anything to do with randomness in cryptography? Bob Knauer "An honest man can feel no pleasure in the exercise of power over his fellow citizens." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 19:24:18 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AE0852.16B9A95F@stud.uni-muenchen.de> References: <36ae019b.24495482@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 18 R. Knauer wrote: > > On Tue, 26 Jan 1999 13:33:49 +0100, Mok-Kong Shen > <mok-kong.shen@stud.uni-muenchen.de> wrote: > > >If I have two sources of randomness, one software and one hardware, > >both passing all statistical tests I apply equally well, why should > >I choose one source in preference to the other? > > Why do you persist in believing that statistical tests have anything > to do with randomness in cryptography? Tell me what other (better) tools are available for me to make the decision. These are simply easy to obtain, as far as my humble knowledge goes. Please kindly give your recipe to cope with the situation I described. Thanks in advance. M. K. Shen
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 19:33:24 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36ae16a8.29884341@nntp.ix.netcom.com> References: <36AE0852.16B9A95F@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 41 On Tue, 26 Jan 1999 19:24:18 +0100, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >> Why do you persist in believing that statistical tests have anything >> to do with randomness in cryptography? >Tell me what other (better) tools are available for me to make >the decision. If I told you that there are none, would you believe me? >These are simply easy to obtain, as far as my >humble knowledge goes. So is snake oil. >Please kindly give your recipe to cope with >the situation I described. Thanks in advance. Learn what crypto-grade randomness is. The concept is deceptively simple once you understand it. But first you have to give up all other definitions of randomness from other fields like statistics. The key to understanding is that randomness depends on the generation process, not the numbers themselves. The number 000...0 fails all sorts of statistical tests, but can be a random number if it is generated by a TRNG. Until you analyze the method of generation, you cannot know. A TRNG is a physical device that is capable of generating all possible sequences of a given finite length equiprobably. If you understand that, then you will understand crypto-grade randomness - and, as another poster pointed out yesterday, you will also understand cryptography. Bob Knauer "An honest man can feel no pleasure in the exercise of power over his fellow citizens." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 15:38:29 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AF24E5.95D5D7F9@stud.uni-muenchen.de> References: <36ae16a8.29884341@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 17 R. Knauer wrote: > A TRNG is a physical device that is capable of generating all possible > sequences of a given finite length equiprobably. If you understand > that, then you will understand crypto-grade randomness - and, as > another poster pointed out yesterday, you will also understand > cryptography. Excellent! Then tell me HOW to get such a physical device that PROVABLY is capable of generating all possible sequences of a given finite length equiprobably. Secondly, your equiprobability is not at all sufficient. If the said given finite length is 2, is a physical device outputting 0001101100011011..... a TRNG????? M. K. Shen
Subject: Re: hardRandNumbGen Date: 27 Jan 1999 09:51:46 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78n962$lii$1@quine.mathcs.duq.edu> References: <36AF24E5.95D5D7F9@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 40 In article <36AF24E5.95D5D7F9@stud.uni-muenchen.de>, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >R. Knauer wrote: > >> A TRNG is a physical device that is capable of generating all possible >> sequences of a given finite length equiprobably. If you understand >> that, then you will understand crypto-grade randomness - and, as >> another poster pointed out yesterday, you will also understand >> cryptography. > >Excellent! Then tell me HOW to get such a physical device that >PROVABLY is capable of generating all possible sequences of a given >finite length equiprobalbly. You can't. Tell me how you can build a plane that will *provably* fly equally stably in any direction. >Secondly, your equiprobability is not at all sufficient. If >the said given finite length is 2, is a physical divice outputting >0001101100011011..... a TRNG????? You can't tell. You've framed the question such that it's unanswerable due to insufficient information : There is a coffee cup on the southeast corner of my desk. If it is approximately 1/3 full, what is written on the outside of the cup? What kind of mileage does a blue car get? However, the fact that you've asked a dumb question doesn't mean that the concepts aren't useful -- both paint color and mileage are important in describing and evaluating cars. But they're not connected the way you think they are. The fact that you're repeatedly asking the same dumb question does, however, suggest that you're not really interested in the answer. -kitten
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 16:12:44 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AF2CEC.B5684684@stud.uni-muenchen.de> References: <78n962$lii$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 11 Patrick Juola wrote: > > The fact that you're repeatedly asking the same dumb question does, > however, suggest that you're not really interested in the answer. The original purpose is evidently: Since there can't be a good answer, one can't claim hardware sequences are always to be preferred to software sequences. M. K. Shen
Subject: Re: hardRandNumbGen Date: 27 Jan 1999 10:30:16 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78nbe8$llm$1@quine.mathcs.duq.edu> References: <36AF2CEC.B5684684@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 27 In article <36AF2CEC.B5684684@stud.uni-muenchen.de>, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >Patrick Juola wrote: >> > >> The fact that you're repeatedly asking the same dumb question does, >> however, suggest that you're not really interested in the answer. > >The origninal purpose is evidently: Since there can't be an good >answer, one can't claim hardware sequences are always to be preferred >to software sequences. Your claim above is untrue. I can prove that there can't be a good s/w sequence running on a deterministic machine. But I can't do that merely by inspecting any finite sample of outputs -- I have to look at the generators to do it. Of course, any bad PRNG can be implemented either in h/w or s/w, so just because something is in h/w doesn't make it good. More accurately : one can't claim hardware sequences are always to be preferred to software sequences *on the basis of a statistical analysis of a finite set of output sequences.* But this is unsurprising. I can't tell you the gas mileage by looking at the color of the paint, either. -kitten
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 16:48:46 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AF355E.DA6A3DBC@stud.uni-muenchen.de> References: <78nbe8$llm$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 11 Patrick Juola wrote: > > More accurately : one can't claim hardware sequences are always to > be preferred to software sequences *on the basis of a statistical > analysis of a finite set of output sequences.* The issue is: are there other sound scientific bases to claim the said preference? M. K. Shen
Subject: Re: hardRandNumbGen Date: 27 Jan 1999 12:04:02 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78ngu3$lsg$1@quine.mathcs.duq.edu> References: <36AF355E.DA6A3DBC@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 16 In article <36AF355E.DA6A3DBC@stud.uni-muenchen.de>, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >Patrick Juola wrote: >> > >> More accurately : one can't claim hardware sequences are always to >> be preferred to software sequences *on the basis of a statistical >> analysis of a finite set of output sequences.* > >The issue is: Are there other sound scientific basis to claim the >said preference. And the answer is : yes, if your goal is to provide unbounded degrees of security for messages of unbounded length. -kitten
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 18:20:22 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AF4AD6.1EC74EF0@stud.uni-muenchen.de> References: <78ngu3$lsg$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 27 Patrick Juola wrote: > > In article <36AF355E.DA6A3DBC@stud.uni-muenchen.de>, > Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: > >Patrick Juola wrote: > >> > > > >> More accurately : one can't claim hardware sequences are always to > >> be preferred to software sequences *on the basis of a statistical > >> analysis of a finite set of output sequences.* > > > >The issue is: are there other sound scientific bases to claim the > >said preference? > > And the answer is : yes, if your goal is to provide unbounded > degrees of security for messages of unbounded length. I would be happy with a weaker goal, i.e. for messages of a certain finite length. Could you provide the requested sound scientific basis? Note that I am going to use the sequences in practical applications. So any claimed degree of security has to be shown with a practical algorithm. I am also prepared to weaken the goal further to 'bounded degree of security' if you can give a precise definition of 'degree of security' that is satisfactory for the practice. M. K. Shen
Subject: Re: hardRandNumbGen Date: 27 Jan 1999 14:27:50 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78npbm$m42$1@quine.mathcs.duq.edu> References: <36AF4AD6.1EC74EF0@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 67 In article <36AF4AD6.1EC74EF0@stud.uni-muenchen.de>, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >Patrick Juola wrote: >> >> In article <36AF355E.DA6A3DBC@stud.uni-muenchen.de>, >> Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >> >Patrick Juola wrote: >> >> >> > >> >> More accurately : one can't claim hardware sequences are always to >> >> be preferred to software sequences *on the basis of a statistical >> >> analysis of a finite set of output sequences.* >> > >> >The issue is: are there other sound scientific bases to claim the >> >said preference? >> >> And the answer is : yes, if your goal is to provide unbounded >> degrees of security for messages of unbounded length. > >I would be happy with a weaker goal, i.e. for messages of a >certain finite length. Could you provide the requested sound >scientific basis? Sure. If the key is "as random as" the message, then Shannon's proof of secrecy goes through. In particular, if your messages are bounded by a given length N, then if you can get N bits of randomness, from whatever source, hardware or software, then you can achieve perfect secrecy. How you get them is, of course, your problem. The difficulty is in *proving* that a given sequence of N bits contains N bits of randomness (or more formally that a given generator produces exactly random bits). But it's fairly easy to gather *MORE* than N bits -- as much more as you need to feel confident that it is unlikely to be less than N bits of randomness in the resulting sample. Furthermore, I note that "sound scientific basis" doesn't necessarily rely on a formal, mathematical proof. We use the acceleration of gravity g = 9.8 m/sec^2 on the basis of experiment rather than any first-principle analysis. Similar experiments show that, for instance, English text contains just over one bit of randomness per character. If you need a thousand bits of randomness, get a thousand characters of English text from a secure source, distill them appropriately with a trusted hashing function. Better yet, get 1500 characters to allow for sloppy engineering -- you'd never run a wire at its rated wattage, would you? Take the resulting 1000 bit string, XOR it with the plaintext, and voila. A scientifically sound method of securing 1000 bit secrets. >I am also prepared to >weaken the goal further to 'bounded degree of security' if you >can give a precise definition of 'degree of security' that is >satisfactory for the practice. Well, the usual definition is "work factor" -- the ratio of work necessary to read a message without the key vs. with the key. Again, "sound scientific basis" does not necessarily rely on proof; if you are willing to accept (as many scientists do) that RSA is as secure as factoring, then the work factor to crack an RSA code can be made as large as you like by raising the modulus appropriately. If you don't believe that RSA is as secure as factoring.... well, there are other methods out there with various conjectures about their difficulty of solution. If you don't believe *any* conjectures, you're arguably in the same camp as people who don't really believe that the force of gravity is constant just because it's always been constant so far.... -kitten
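[Editorial sketch: the distillation recipe above can be made concrete. The Python below is only an illustration under stated assumptions -- SHAKE-256 stands in for the unspecified "trusted hashing function", and the sample English text is a placeholder for a genuinely secret source; it is not Juola's actual procedure.]

    # Minimal sketch: distill ~1500 characters of English (at roughly 1 bit of
    # entropy per character) into a 1000-bit pad, then XOR it with a plaintext
    # of at most 1000 bits.
    import hashlib

    def distill(text, n_bits):
        """Hash raw English text down to n_bits of key material."""
        return hashlib.shake_256(text.encode("utf-8")).digest(n_bits // 8)

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # Placeholder source text; in practice it must be secret and unpredictable.
    english = ("never the same sentence twice, scribbled in haste by the sender " * 24)[:1500]
    pad = distill(english, 1000)                     # 1000 bits = 125 bytes
    plaintext = b"attack at dawn".ljust(125, b" ")   # any secret of at most 1000 bits
    ciphertext = xor(plaintext, pad)
    assert xor(ciphertext, pad) == plaintext         # XORing the pad again decrypts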
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 23:54:24 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36afa6cd.50106439@nntp.ix.netcom.com> References: <78npbm$m42$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 21 On 27 Jan 1999 14:27:50 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >English text contains just over one bit of randomness per character. >If you need a thousand bits of randomness, get a thousand characters >of English text from a secure source, distill them appropriately with >a trusted hashing function. Better yet, get 1500 characters to allow >for sloppy engineering -- you'd never run a wire at its rated wattage, >would you? Take the resulting 1000 bit string, XOR it with the plaintext, >and voila. A scientifically sound method of securing 1000 bit secrets. You once said that such a system was vulnerable to a Bayesian attack. Have you changed your mind? Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: 28 Jan 1999 11:25:33 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78q31t$ngq$1@quine.mathcs.duq.edu> References: <36afa6cd.50106439@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 25 In article <36afa6cd.50106439@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 27 Jan 1999 14:27:50 -0500, juola@mathcs.duq.edu (Patrick Juola) >wrote: > >>English text contains just over one bit of randomness per character. >>If you need a thousand bits of randomness, get a thousand characters >>of English text from a secure source, distill them appropriately with >>a trusted hashing function. Better yet, get 1500 characters to allow >>for sloppy engineering -- you'd never run a wire at its rated wattage, >>would you? Take the resulting 1000 bit string, XOR it with the plaintext, >>and voila. A scientifically sound method of securing 1000 bit secrets. > >You once said that such a system was vulnerable to a Bayesian attack. >Have you changed your mind? No. The key point here is that the key is as large as -- larger than, in fact -- the plaintext. Such a system *would* be vulnerable if you were using it to secure secrets larger than 1000 bits. But as long as the plaintext is finite *AND BOUNDED*, if you can get key material to exceed that bound, you can get perfect secrecy. But few of us have bounded secrets. -kitten
Subject: Re: hardRandNumbGen Date: Thu, 28 Jan 1999 23:40:37 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36b0f4c7.11366854@nntp.ix.netcom.com> References: <78q31t$ngq$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 22 On 28 Jan 1999 11:25:33 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >as long >as the plaintext is finite *AND BOUNDED*, if you can get key material >to exceed that bound, you can get perfect secrecy. >But few of us have bounded secrets. You are being uncharacteristically obscure. Please elaborate on the concepts of "bounded", "unbounded" and how they apply to a "bounded secret". And just how is a plaintext "bounded", given that it is finite in length? Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: 29 Jan 1999 08:56:25 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78sem9$ori$1@quine.mathcs.duq.edu> References: <36b0f4c7.11366854@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 37 In article <36b0f4c7.11366854@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 28 Jan 1999 11:25:33 -0500, juola@mathcs.duq.edu (Patrick Juola) >wrote: > >>as long >>as the plaintext is finite *AND BOUNDED*, if you can get key material >>to exceed that bound, you can get perfect secrecy. > >>But few of us have bounded secrets. > >You are being uncharacteristically obscure. > >Please elaborate on the concepts of "bounded", "unbounded" and how >they apply to a "bounded secret". And just how is a plaintext >"bounded", given that it is finite in length? The idea behind a bounded plaintext is fairly simple. Just say to yourself that you will never, EVER, in your entire life, encrypt a document larger than X with a single key. Splitting a big document in two in order to make it smaller doesn't count, as you need two different keys in that case. X is, then, "the bound." And it's a measure of how much work you need to generate the key for each and every message you send -- so make it low. The difference between bounded and finite is simple -- with finite plaintexts, I know that my articles will eventually end, but I don't know when beforehand. With a bounded plaintext, I set myself a rule beforehand that I won't go over 30 lines, or 300, or 3 million, whatever *and stick to that rule.* Can you promise yourself that you'll never want to Email yourself a copy of Microsoft Word? -kitten
Subject: Re: hardRandNumbGen Date: Thu, 28 Jan 1999 16:10:36 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36B07DEC.7D4DE9EE@stud.uni-muenchen.de> References: <78npbm$m42$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 59 Patrick Juola wrote: > Sure. If the key is "as random as" the message, then Shannon's > proof of secrecy goes through. In particular, if your messages > are bounded by a given length N, then if you can get N bits of > randomness, from whatever source, hardware or software, then > you can achieve perfect secrecy. > > How you get them is, of course, your problem. The difficulty is > in *proving* that a given sequence of N bits contains N bits > of randomness (or more formally that a given generator produces > exactly random bits). But it's fairly easy to gather *MORE* than > N bits -- as much more as you need to feel confident that it is unlikely > to be less than N bits of randomness in the resulting sample. > > Furthermore, I note that "sound scientific basis" doesn't necessarily > rely on a formal, mathematical proof. We use the acceleration of > gravity g = 9.8 m/sec^2 on the basis of experiment rather than any > first-principle analysis. Similar experiments show that, for instance, > English text contains just over one bit of randomness per character. > If you need a thousand bits of randomness, get a thousand characters > of English text from a secure source, distill them appropriately with > a trusted hashing function. Better yet, get 1500 characters to allow > for sloppy engineering -- you'd never run a wire at its rated wattage, > would you? Take the resulting 1000 bit string, XOR it with the plaintext, > and voila. A scientifically sound method of securing 1000 bit secrets. Thank you for the very lucid description of a sound standpoint in practice ('applied' cryptography as against 'theoretical' cryptography). We must be realistic, since theoretical stuff may not be realizable in the real world and since 'absolute' security is never required (does it matter if a cipher is cracked after 100 years?). Incidentally, in another thread I also suggested distilling bit sequences out of natural language texts as raw materials. > > Well, the usual definition is "work factor" -- the ratio of work > necessary to read a message without the key vs. with the key. > Again, "sound scientific basis" does not necessarily rely on proof; > if you are willing to accept (as many scientists do) that RSA is > as secure as factoring, then the work factor to crack an RSA code > can be made as large as you like by raising the modulus appropriately. > If you don't believe that RSA is as secure as factoring.... well, there > are other methods out there with various conjectures about their > difficulty of solution. If you don't believe *any* conjectures, you're > arguably in the same camp as people who don't really believe that > the force of gravity is constant just because it's always been constant > so far.... I appreciate your opinions and in particular agree with you that formal proofs are not always needed but can be substituted with 'practical' yet scientifically sound procedures. One should never be pedantic, but one certainly should not, on the other hand, be a 'believer' in 'religious assertions' (totally unfounded big words of someone). M. K. Shen
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 16:17:41 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af3bd0.22717926@nntp.ix.netcom.com> References: <78nbe8$llm$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 17 On 27 Jan 1999 10:30:16 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >But this is unsurprising. I can't tell you the gas mileage by looking >at the color of the paint, either. There were certain colors that were used exclusively on the Volkswagen Beetle. That would have given you a strong enough clue to infer the gas mileage, assuming standard operating conditions. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 15:43:43 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af340c.20729677@nntp.ix.netcom.com> References: <36AF2CEC.B5684684@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 19 On Wed, 27 Jan 1999 16:12:44 +0100, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >> The fact that you're repeatedly asking the same dumb question does, >> however, suggest that you're not really interested in the answer. >The origninal purpose is evidently: Since there can't be an good >answer, one can't claim hardware sequences are always to be preferred >to software sequences. See! What did I tell you. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 15:40:53 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af31b5.20130325@nntp.ix.netcom.com> References: <78n962$lii$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 36 On 27 Jan 1999 09:51:46 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >In article <36AF24E5.95D5D7F9@stud.uni-muenchen.de>, >Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >>Secondly, your equiprobability is not at all sufficient. If >>the said given finite length is 2, is a physical divice outputting >>0001101100011011..... a TRNG????? >The fact that you're repeatedly asking the same dumb question does, >however, suggest that you're not really interested in the answer. The answer that the poster wants to hear is: Because TRNGs are not Perfect, PRNGs are just as good. What he fails to appreciate is that there is a fundamental difference between a TRNG and a PRNG. That is because he fails to realize that a crypto-grade random number is characterized by the generation process, not the number itself. IOW, according to the poster, regardless of whether a number is generated by a TRNG or a PRNG, if it passes some statistical tests (that only work on infinite numbers), then it makes no difference what the method of generation is. Maybe there needs to be a law that a student must take cryptography before statistics. :-) Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 16:52:30 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AF363E.45670214@stud.uni-muenchen.de> References: <36af31b5.20130325@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 12 R. Knauer wrote: > > What he fails to appreciate is that there is a fundamental difference > between a TRNG and a PRNG. That is because he fails to realize that a > crypto-grade random number is characterized by the generation process, > not the number itself. Where is the proof of 'if the generation process is hardware then it is crypto-grade, otherwise it is not'?? M. K. Shen
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 16:45:09 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af3f70.23645400@nntp.ix.netcom.com> References: <36AF363E.45670214@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 41 On Wed, 27 Jan 1999 16:52:30 +0100, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >Where is the proof of 'if the generation process is hardware then >it is crypto-grade, otherwise it is not'?? There is no proof of the first part, since PRNGs can be implemented in H/W, like shift register PRNGs. The proof of the second part comes from an analysis of what makes a PRNG behave the way it does. It is based on an algorithm, which means that its output is deterministic, and that means that there could be vulnerability in using it. For example, if it did not output all possible sequences of a given finite length equiprobably but started repeating the sequences, then it fails the definition for a TRNG. If a PRNG only puts out a few sequences most of the time, then it is obviously worthless. That takes care of the equiprobable part of the TRNG specification. Assuming that the outputs of the PRNG are equiprobable, if the PRNG is seeded with a number of length K that is smaller than the output needed to encrypt the message of length N, then it can only generate as many sequences as the seed will allow, which is not the same as all possible sequences. If K<N, then only 2^K possible plaintexts are contained in the ciphertext, instead of 2^N. That makes the cryptanalyst's job a lot easier, especially when the message is longer than the unicity distance. In that case, there is only 1 plaintext that is intelligible. When the cryptanalyst finds it, he knows with certainty that it is the intended message. That is not the case when using a pad that is generated from a TRNG, where the full 2^N outputs are possible. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
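[Editorial sketch: the 2^K argument above is easy to demonstrate. The Python below is an illustration only; the toy generator and the sizes K = 8, N = 64 are arbitrary choices, not anything from the thread.]

    # A deterministic generator seeded with K bits can emit at most 2^K distinct
    # keystreams, however long each keystream is.  Here K = 8 and the message
    # needs a 64-bit keystream, so only 256 of the 2^64 possible pads can occur.
    import random

    K, N = 8, 64
    keystreams = set()
    for seed in range(2 ** K):
        rng = random.Random(seed)          # same seed -> same keystream, always
        keystreams.add(rng.getrandbits(N))

    print(len(keystreams), "distinct keystreams out of", 2 ** N, "possible pads")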
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 18:01:00 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AF464C.AD52C35D@stud.uni-muenchen.de> References: <36af3f70.23645400@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 41 R. Knauer wrote: > > On Wed, 27 Jan 1999 16:52:30 +0100, Mok-Kong Shen > <mok-kong.shen@stud.uni-muenchen.de> wrote: > > >Where is the proof of 'if the generation process is hardware then > >it is crypto-grade, otherwise it is not'?? > > There is no proof of the first part, since PRNGs can be implemented in > H/W, like shift register PRNGs. > > The proof of the second part comes from an analysis of what makes a > PRNG behave the way it does. It is based on an algorithm, which means > that its output is deterministic, and that means that there could be > vulnerability in using it. For example, if it did not output all > possible sequences of a given finite length equiprobably but started > repeating the sequneces, then it fails the definition for a TRNG. > > If a PRNG only puts out a few sequences most of the time, then it is > obviously worthless. That takes care of the equiprobable part of the > TRNG specification. > > Assuming that the outputs of the PRNG are equiprobable, if the PRNG is > seeded with a number of length K that is smaller than the output > needed to encrypt the message of length N, then it can only generate > as many sequences as the seed will allow, which is not the same as all > possible sequences. > > If K<N, then only 2^K possible plaintexts are contained in the > ciphertext, instead of 2^N. That makes the cryptanalyst's job a lot > easier, especially when the message is longer than the unicity > distance. In that case, there is only 1 plaintext that is > intelligible. When the cryptanalyst finds it, he knows with certainty > that it is the intended message. That is not the case when using a pad > that is generated from a TRNG, where the full 2^N outputs possible. Please note I don't claim PRNGs are good. I simply doubt that hardward generators are good because I have no tools to determine that they are good, except by using statistical tools. M. K. Shen
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 18:02:02 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af546f.29020769@nntp.ix.netcom.com> References: <36AF464C.AD52C35D@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 18 On Wed, 27 Jan 1999 18:01:00 +0100, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >Please note I don't claim PRNGs are good. I simply doubt that >hardward generators are good because I have no tools to determine >that they are good, except by using statistical tools. You must be skilled at designing a TRNG. Statistical tools are worthless. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 19:19:57 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AF58CD.C158C845@stud.uni-muenchen.de> References: <36af546f.29020769@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 20 R. Knauer wrote: > > On Wed, 27 Jan 1999 18:01:00 +0100, Mok-Kong Shen > <mok-kong.shen@stud.uni-muenchen.de> wrote: > > >Please note I don't claim PRNGs are good. I simply doubt that > >hardward generators are good because I have no tools to determine > >that they are good, except by using statistical tools. > > You must be skilled at designing a TRNG. Statistical tools are > worthless. How do you measure or test the skill of a person designing a TRNG? Using some tests of the psychologists?? More precisely, how do you show through a test of skill that the resulting TRNG designed by that person has a certain 'crypto-grade', even if you could define that 'crypto-grade' scientifically precisely which I guess you can't. Are you excluding that a skilled person could ever make mistakes?? M. K. Shen
Subject: Re: hardRandNumbGen Date: 27 Jan 1999 14:29:25 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78npel$m4j$1@quine.mathcs.duq.edu> References: <36AF58CD.C158C845@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 21 In article <36AF58CD.C158C845@stud.uni-muenchen.de>, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >R. Knauer wrote: >> >> On Wed, 27 Jan 1999 18:01:00 +0100, Mok-Kong Shen >> <mok-kong.shen@stud.uni-muenchen.de> wrote: >> >> >Please note I don't claim PRNGs are good. I simply doubt that >> >hardward generators are good because I have no tools to determine >> >that they are good, except by using statistical tools. >> >> You must be skilled at designing a TRNG. Statistical tools are >> worthless. > >How do you measure or test the skill of a person designing a TRNG? The same way you test the skill of the architect who designed your building. -kitten
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 23:26:06 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af9b9f.47244373@nntp.ix.netcom.com> References: <36AF58CD.C158C845@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 37 On Wed, 27 Jan 1999 19:19:57 +0100, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >How do you measure or test the skill of a person designing a TRNG? >Using some tests of the psychologists?? Make him pee in a bottle and if he does, fire him. If he throws the bottle at you, hire him. Skilled people fiercely guard their personal rights as humans. Only sheep seek the comfort of conformity - and you do not want some sheep designing a TRNG for you. >More precisely, how do >you show through a test of skill that the resulting TRNG designed >by that person has a certain 'crypto-grade', even if you could define >that 'crypto-grade' scientifically precisely which I guess you can't. >Are you excluding that a skilled person could ever make mistakes?? Get several skilled people to check the design and the actual TRNG. That's why there are standards committees. But if you attempt to rely on statistical tests on the output alone, you are not going to know much of anything - except in the trivial case of a shorted or floating output. But that's it. If you could determine the randomness of a finite sequence algorithmically, you could solve both Godel's incompleteness problem and Turing's halting problem. See Chaitin for the reasons why. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: 27 Jan 1999 12:05:41 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78nh16$lt6$1@quine.mathcs.duq.edu> References: <36AF363E.45670214@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 23 In article <36AF363E.45670214@stud.uni-muenchen.de>, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >R. Knauer wrote: >> > >> What he fails to appreciate is that there is a fundamental difference >> between a TRNG and a PRNG. That is because he fails to realize that a >> crypto-grade random number is characterized by the generation process, >> not the number itself. > >Where is the proof of 'if the generation process is hardware then >it is crypto-grade, otherwise it is not'?? There isn't such a proof. There *is* a proof that if the generation process is purely software running on a deterministic routine, then the randomness of the resulting pad is bounded by the randomness of the starting conditions. And there's a similar proof that bounded randomness cannot provide perfect secrecy to a message of unbounded length. -kitten
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 18:29:34 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AF4CFE.6AF01030@stud.uni-muenchen.de> References: <78nh16$lt6$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 21 Patrick Juola wrote: > > >Where is the proof of 'if the generation process is hardware then > >it is crypto-grade, otherwise it is not'?? > > There isn't such a proof. There *is* a proof that if the generation > process is purely software running on a deterministic routine, then > the randomness of the resulting pad is bounded by the randomness > of the starting conditions. > > And there's a similar proof that bounded randomness cannot provide > perfect secrecy to a message of unbounded length. Entirely true. It is for the same reason that none of the known (implementable) ciphers can offer absolute security. But that certainly doesn't imply that any hardware generator can offer absolute security. One should also note that even 'crypto-grade' is itself a fuzzy term. M. K. Shen
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 18:13:20 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af56f3.29664925@nntp.ix.netcom.com> References: <36AF4CFE.6AF01030@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 25 On Wed, 27 Jan 1999 18:29:34 +0100, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >> And there's a similar proof that bounded randomness cannot provide >> perfect secrecy to a message of unbounded length. >Entirely true. It is for the same reason that none of the known >(implementable) ciphers can offer absolute security. But that >certainly doesn't imply that any hardware generator can offer >absolute security. One should also note that even 'crypto-grade' >is itself a fuzzy term. You STILL haven't got it. The OTP cryptosystem is provably secure. Why are you struggling with that? Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 19:36:17 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AF5CA1.59F0F2F4@stud.uni-muenchen.de> References: <36af56f3.29664925@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 29 R. Knauer wrote: > > > You STILL haven't got it. > > The OTP cryptosystem is provably secure. > > Why are you struggling with that? I never refuse to acknowledge that 'the OTP cryptosystem is provably secure'. Only, as I said repeatedly, that fact is USELESS for the practice, i.e. useless for the user of a crypto application, because he can't get a perfect OTP to give him the 'provable security'. Neither will he, except one who is mentally ill, demand absolute security. But he has a right to demand that the security offered by a system be demonstrated. Simply saying that a hardware generator offers high security is NO convincing argument for him. He has to have some supporting scientific data. Now for a random number sequence I don't know any way to provide such data excepting through statistical tests. And by the discussions till now you also don't know a way. So it is my opinion that using statistical tests at least gives him something concrete for him to form a more or less objective opinion, even though we should tell him that the tests are questionable for this and that reason. This is anyway better than leaving him with nothing to form a (subjective) opinion of the encryption scheme he is using. Have I expressed my arguments clearly? M. K. Shen
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 23:37:37 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36afa0ac.48537553@nntp.ix.netcom.com> References: <36AF5CA1.59F0F2F4@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 57 On Wed, 27 Jan 1999 19:36:17 +0100, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >But he has a right to demand that the security >offered by a system be demonstrated. Fine - let him seek a meaningful demonstration then. >Simply saying that a hardware >generator offers high security No one ever said that. >Now for a random >number sequence I don't know any way to provide such data excepting >through statistical tests. Statistical tests do not prove randomness if they are applied to finite numbers. >And by the discussions till now you >also don't know a way. That's because there is no way, other than to let the TRNG generate an infinitely long number. >So it is my opinion that using statistical >tests at least gives him something concrete for him to form a more >or less objective opinion, even though we should tell him that the >tests are questionable for this and that reason. "Questionable" is a weasel word for meaningless. Statistical tests are useless if applied to finite random numbers generated by a TRNG. >This is anyway >better than leaving him with nothing to form a (subjective) opinion >of the encryption scheme he is using. If a test is worthless (except to diagnose a major malfunction), how is it any good? Actually, it is worse than not using it because it can give a false sense of confidence. >Have I expressed my arguments clearly? Yep, it is clear that you are as confused as ever. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Thu, 28 Jan 1999 16:38:22 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36B0846D.5113C58D@stud.uni-muenchen.de> References: <36afa0ac.48537553@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 40 R. Knauer wrote: > > > Statistical tests are useless if applied to finite random numbers > generated by a TRNG. Are you going to REWRITE the whole scientific theory of probability and statistics????? After having built a hardware generator, do you test it or simply 'believe' that it is o.k.? If you do test, which tests do you perform? Do you ever consider that the output can have bias? If yes, what is a bias and how do you detect the bias and quantify it? What tools do you have for doing all this if you exclude ALL statistical tools??? Since we have argued so much and in that process also touched PRNGs, I suppose the following paragraph taken from a brand new book by O. Goldreich, Modern Cryptography, Probabilistic Proofs and Pseudorandomness, Springer-Verlag, 1999, may interest you (page 77): Thus, pseudorandom generators are efficient (i.e. polynomial-time) deterministic programs which expand short randomly selected seeds into longer pseudorandom bit sequences, where the latter are defined as computationally indistinguishable from truly random sequences by efficient (i.e. polynomial-time) algorithms. It follows that any efficient randomized algorithm maintains its performance when its internal coin tosses are substituted by a sequence generated by a pseudorandom generator. Also interesting is a quotation Goldreich puts at the beginning of chapter 3 of his book: If two objects are indistinguishable, in what sense are they different? Since you so often use the term 'crypto-grade', let me also remind you that there are cryptologically secure PRNGs, the best known one being that of BBS (see any textbook on modern cryptography). M. K. Shen
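[Editorial sketch: for readers unfamiliar with the BBS generator Shen mentions, the Python below sketches Blum-Blum-Shub with small primes chosen only for readability; a real instance would use large secret primes. This is an illustration, not code from the thread.]

    # Blum-Blum-Shub: x_{i+1} = x_i^2 mod n with n = p*q, p and q primes
    # congruent to 3 mod 4; each step outputs the low bit of the state.
    def bbs_bits(p, q, seed, count):
        assert p % 4 == 3 and q % 4 == 3            # Blum primes
        n = p * q
        x = seed % n
        assert x not in (0, 1) and x % p and x % q  # seed must be coprime to n
        for _ in range(count):
            x = (x * x) % n
            yield x & 1

    print(list(bbs_bits(p=10007, q=10039, seed=2357, count=16)))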
Subject: Re: hardRandNumbGen Date: Thu, 28 Jan 1999 23:18:56 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36b0f039.10200557@nntp.ix.netcom.com> References: <36B0846D.5113C58D@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 20 On Thu, 28 Jan 1999 16:38:22 +0100, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >> Statistical tests are useless if applied to finite random numbers >> generated by a TRNG. >Are you going to REWITE the whole scientific theory of probability >and statistics????? I give up. Believe what you want. You are beyond redemption. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 15:30:54 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af2cf5.18914767@nntp.ix.netcom.com> References: <36AF24E5.95D5D7F9@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 56 On Wed, 27 Jan 1999 15:38:29 +0100, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >Excellent! Then tell me HOW to get such a physical device that >PROVABLY is capable of generating all possible sequences of a given >finite length equiprobalbly. By analyzing the design. With correct design one can make a TRNG random to within an arbitrarily high level of precision. If the TRNG is certified to produce crypto-grade random numbers to a level of precision that ensures that it would take an impossible amount of work to make it vulnerable, then that is "perfect" in a practical sense. If the ball bearings in your car's wheels are spherical enough so that they do not tear up the bearing races, that is "perfect" enough in a practical sense. Being obsessed over the fact that there is no such thing as a Perfect TRNG or a Perfect Sphere in the real world is a waste of time at the practical level. There are more important considerations that affect security - like having someone steal your pad. One thing is for sure - just because there is no such thing as a Perfect TRNG is no excuse to fall back on PRNGs for the OTP cryptosystem. >Secondly, your equiprobability is not at all sufficient. I never said it was. I said that the TRNG must be *CAPABLE* of: 1) Outputting all possible sequences of a given finite length; and 2) Outputting each of them equiprobably. >If the said given finite length is 2, is a physical device outputting >0001101100011011..... a TRNG????? Sure, why not - if you group the number above in 2-bit sequences: 00 01 10 11 00 01 10 11 Pad1: 00 Pad2: 01 Pad3:10 .... Pad8: 11 Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
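[Editorial sketch: Knauer's grouping of the repeating pattern into 2-bit pads can be tallied mechanically, which also makes Shen's objection visible -- equal counts are necessary but not sufficient, since this fully deterministic pattern passes the check. Python illustration only; it is not from either poster.]

    # Chop the repeating pattern from the post into 2-bit pads and count how
    # often each of the four possible pads occurs; the counts come out equal
    # even though the stream is completely predictable.
    from collections import Counter

    stream = "0001101100011011" * 100
    pads = [stream[i:i + 2] for i in range(0, len(stream), 2)]
    print(Counter(pads))      # each of '00', '01', '10', '11' appears 200 times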
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 16:59:03 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AF37C7.EC548C1@stud.uni-muenchen.de> References: <36af2cf5.18914767@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 27 R. Knauer wrote: > > By analyzing the design. With correct design one can make a TRNG > random to within an arbitrarily high level of precision. > > If the TRNG is certified to produce crypto-grade random numbers to a > level of precision that ensures that it would take an impossible > amount of work to make it vulnerable, then that is "perfect" in a > practical sense. > > If the ball bearings in your car's wheels are spherical enough so that > they do not tear up the bearing races, that is "perfect" enough in a > practical sense. > > Being obsessed over the fact that there is no such thing as a Perfect > TRNG or a Perfect Sphere in the real world is a waste of time at the > practical level. There are more important considerations that affect > security - like having someone steal your pad. If you make a metal sphere, there is a common definition of precision. What is the 'precision' you are referring to about your TRNG design? You have to define that 'precision' in scientific terms, in particular establish a 'unit' and provide a precise method to measure that 'precision' in that unit. Before that, you have nothing. M. K. Shen
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 16:31:33 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af3c48.22837718@nntp.ix.netcom.com> References: <36AF37C7.EC548C1@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 42 On Wed, 27 Jan 1999 16:59:03 +0100, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >If you make a metal sphere, there is a common definition of precision. >What is the 'precision' you are referring to about your TRNG design? >You have to define that 'precision' in scientific terms, in particular >establish a 'unit' and provide a precise method to measure that >'precision' in that unit. Before that, you have nothing. You do that by analyzing the design, and performing certain tests on the actual equipment to make sure it meets the design specifications. One such test for a radioactive decay TRNG would be to measure the radioactive decay before the interval measuring circuitry. If the measurements yield results that you expect, there is no reason to believe anything is wrong. As far as the digital circuitry is concerned, logic analysis is what you would use. You would put the digital circuits on a logic analyzer and certify that they perform to design specifications using simulated inputs. You would inject noise into the components and see if it had a measurable effect on the output. You are relying on two facts: 1) Quantum mechanical processes result in random events - otherwise physics wouldn't work; 2) Digital circuits are incredibly robust - otherwise computers wouldn't work. You can bet the NSA has a TRNG that is certified to be completely secure in a practical sense, where the work effort to break it would be more than the energy in the Universe. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
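[Editorial sketch: one concrete version of "measure the decay and see if you get what you expect" is to check that the inter-decay intervals look exponential, as they must for a Poisson process. The Python below is only an illustration; the simulated intervals stand in for real counter readings, and this is not a procedure described in the thread.]

    # For a Poisson process the times between decays are exponentially
    # distributed, so their standard deviation should roughly equal their mean.
    import random, statistics

    intervals = [random.expovariate(1000.0) for _ in range(50_000)]   # ~1000 counts/s

    mean = statistics.mean(intervals)
    sdev = statistics.pstdev(intervals)
    print(f"mean {mean:.6f} s, stdev {sdev:.6f} s, ratio {sdev / mean:.3f}")  # ratio near 1.0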
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 17:55:15 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AF44F3.6132574A@stud.uni-muenchen.de> References: <36af3c48.22837718@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 36 R. Knauer wrote: > > On Wed, 27 Jan 1999 16:59:03 +0100, Mok-Kong Shen > <mok-kong.shen@stud.uni-muenchen.de> wrote: > > >If you make a metal sphere, there is a common definition of precision. > >What is the 'precision' you are referring to about your TRNG design? > >You have to define that 'precision' in scientific terms, in particular > >establish a 'unit' and provide a precise method to measure that > >'precision' in that unit. Before that, you have nothing. > > You do that by analyzing the design, and performing certain tests on > the actual equipment to make sure it meets the design specifications. > > One such test for a radioactive decay TRNG would be to measure the > radioactive decay before the interval measuring circuitry. If the > measurements yield the results you expect, there is no reason to > believe anything is wrong. > > As far as the digital circuitry is concerned, logic analysis is what > you would use. You would put the digital circuits on a logic analyzer > and certify that they perform to design specifications using simulated > inputs. You would inject noise into the components and see if it had a > measurable effect on the output. It is important to know what the specifications ARE. Certainly things like the dimensions of the apparatus don't have too much bearing in the present context. Now what are the specifications that ensure a crypto-grade TRNG? These specifications must contain certain numerical values in terms of certain units. Then one can test the actual product to see whether the specifications are fulfilled. As long as one can't define 'crypto-grade' in terms of certain units precisely, there is NO way to write up such specifications as you proposed. M. K. Shen
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 17:39:15 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af4ec9.27574880@nntp.ix.netcom.com> References: <36AF44F3.6132574A@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 24 On Wed, 27 Jan 1999 17:55:15 +0100, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >It is important to know what the specifications ARE. Certainly >things like the dimensions of the apparatus don't have too much >bearing in the present context. Now what are the specifications that >ensures a crypto-grade TRNG? These specifications must contain >certain numerical values in terms of certain units. Then one can >test the actual product to see whether the specifications are >fulfilled. As long as one can't define 'crypto-grade' in terms >of certain units precisely, there is NO way to write up such >specifications as you proposed. Check out the Hotbits radioactive decay TRNG: http://www.fourmilab.ch/hotbits/ Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
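For reference, generators like Hotbits derive bits from the timing of decay events rather than from their count. One common construction, sketched below in Python, compares successive inter-event intervals and emits a bit according to which is longer; for independent, identically distributed intervals the two orderings are equally likely, so the bits come out unbiased (ties are discarded). This is an illustration of the idea only, not the actual Hotbits implementation.

    import random

    def intervals_to_bits(event_times):
        """Compare successive inter-event gaps; emit 1 if the first is longer."""
        gaps = [b - a for a, b in zip(event_times, event_times[1:])]
        bits = []
        for t1, t2 in zip(gaps[0::2], gaps[1::2]):   # each gap used only once
            if t1 != t2:
                bits.append(1 if t1 > t2 else 0)
        return bits

    # Synthetic Poisson-like event times standing in for a real detector:
    times, t = [], 0.0
    for _ in range(20001):
        t += random.expovariate(100.0)
        times.append(t)
    bits = intervals_to_bits(times)
    print("%d bits, fraction of ones = %.3f" % (len(bits), sum(bits) / len(bits)))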
Subject: Re: hardRandNumbGen Date: 27 Jan 1999 13:13:21 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78nl01$m0t$1@quine.mathcs.duq.edu> References: <36AF44F3.6132574A@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 66 In article <36AF44F3.6132574A@stud.uni-muenchen.de>, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >R. Knauer wrote: >> >> On Wed, 27 Jan 1999 16:59:03 +0100, Mok-Kong Shen >> <mok-kong.shen@stud.uni-muenchen.de> wrote: >> >> >If you make a metal sphere, there is a common definition of precision. >> >What is the 'precision' you are referring to about your TRNG design? >> >You have to define that 'precision' in scientific terms, in particular >> >establish a 'unit' and provide a precise method to measure that >> >'precision' in that unit. Before that, you have nothing. >> >> You do that by analyzing the design, and performing certain tests on >> the actual equipment to make sure it meets the design specifications. >> >> One such test for a radioactive decay TRNG would be to measure the >> radioactive decay before the interval measuring circuitry. If the >> measurements yields results that you expect, there is no reason to >> believe anything is wrong. > >It is important to know what the specifications ARE. Certainly >things like the dimensions of the apparatus don't have too much >bearing in the present context. Now what are the specifications that >ensures a crypto-grade TRNG? Well, broadly speaking, you need a source of randomness, which will presumably be specified. As in the specification for ball bearings, your specification will not cover all possible configurations, but only one that you consider acceptable -- for example, you might specify that your bearing be made of steel when brass bearings *might* also be acceptable. So I'll specify that the source of randomness is a particular mass of depleted uranium of such-and-such purity. Why did I specify this? I believe -- and I have lots of evidence from physicists to support such a belief -- that radioactive events in such and such a mass are "truly random." Then, I need to sample those events in a manner that preserves the randomness. Von Neumann's math provides a formal specification of a process to do so -- or I could use a decent hash scheme or any of several alternatives to specify the "formal" properties of such a system. Once I've got the formal requirements, building such a gadget (and certifying its compliance to the specs) is just another engineering problem. Of course, depending on the sampling techniques I specify, I could come up with many different -- and all equally effective -- gadget designs. Or I could make the ball bearings out of aluminum.... Once I've done that, I will presumably need to specify some form of (formalizable) security to make sure that the whole thing is wrapped in a shielded and tamper-evident package. Another formal spec, implemented by an engineer. The resulting number streams can be certified as "random," subject to the same (standard) assumptions that any other engineering project uses. Note : I'm not claiming that this particular gizmo is the *only* way to get crypto-strength random numbers. Perhaps what you want for your random number source is a thorium source, or Brownian motion. But anything you build according to these gizmo specs can be certified. -kitten
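Juola's reference to "Von Neumann's math" is presumably the classic von Neumann unbiasing procedure, which is simple enough to state in full: take the raw bits in non-overlapping pairs, output 0 for the pair 01, output 1 for the pair 10, and discard 00 and 11. If the raw bits are independent with a fixed (even unknown) bias, the surviving bits are unbiased. A minimal Python sketch, with a deliberately biased synthetic source:

    import random

    def von_neumann(bits):
        """Pairwise unbiasing: 01 -> 0, 10 -> 1, 00 and 11 discarded."""
        out = []
        for a, b in zip(bits[0::2], bits[1::2]):
            if a != b:
                out.append(a)
        return out

    raw = [1 if random.random() < 0.8 else 0 for _ in range(100000)]  # biased source
    clean = von_neumann(raw)
    print("kept %d of %d bits, fraction of ones = %.3f"
          % (len(clean), len(raw), sum(clean) / len(clean)))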
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 12:02:57 -0500 From: "Trevor Jackson, III" <fullmoon@aspi.net> Message-ID: <36AF46C1.8FA0A086@aspi.net> References: <36AF37C7.EC548C1@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 33 Mok-Kong Shen wrote: > R. Knauer wrote: > > > > > By analyzing the design. With correct design one can make a TRNG > > random to within an arbitrarily high level of precision. > > > > If the TRNG is certified to produce crypto-grade random numbers to a > > level of precision that ensures that it would take an impossible > > amount of work to make it vulnerable, then that is "perfect" in a > > practical sense. > > > > If the ball bearings in your car's wheels are spherical enough so that > > they do not tear up the bearing races, that is "perfect" enough in a > > practical sense. > > > > Being obsessed over the fact that there is no such thing as a Perfect > > TRNG or a Perfect Sphere in the real world is a waste of time at the > > practical level. There are more important considerations that affect > > security - like having someone steal your pad. > > If you make a metal sphere, there is a common definition of precision. > What is the 'precision' you are referring to about your TRNG design? > You have to define that 'precision' in scientific terms, in particular > establish a 'unit' and provide a precise method to measure that > 'precision' in that unit. Before that, you have nothing. Now this is an issue worthy of intense thought and debate (emphasis on thought please). I believe this breaks into two subtopics, one fundamentally describing the unit of measure and the other describing the measurement methodology.
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 12:31:11 -0600 From: Medical Electronics Lab <rosing@physiology.wisc.edu> Message-ID: <36AF5B6F.6C2B@physiology.wisc.edu> References: <36AF37C7.EC548C1@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 24 Mok-Kong Shen wrote: > > If you make a metal sphere, there is a common definition of precision. > What is the 'precision' you are referring to about your TRNG design? > You have to define that 'precision' in scientific terms, in particular > establish a 'unit' and provide a precise method to measure that > 'precision' in that unit. Before that, you have nothing. Usually it's chi-square and KS-tests, which output a probability parameter (usually called "p"). Precision of more than 3 digits isn't really necessary: either the p value is within a reasonable range (+/- 1.5 sigma from some mean) or it isn't. Testing multiple blocks of data of the same length will give different p's (or you know something's crooked!), but they should all be within the correct range. The "unit" is the expectation value. Precision isn't a good term; what you want is "uniformity". A TRNG should have expectation values for *all* imaginable tests, i.e. it should be uniformly random no matter how you look at it. While not easy to do, it is possible with careful attention to details. Patience, persistence, truth, Dr. mike
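To make the block-testing procedure described above concrete, here is a small Python sketch of the same idea, assuming scipy is available: chi-square each fixed-size block of output against a uniform byte distribution, collect the p-values, and check that they scatter across (0,1) rather than piling up at either end. os.urandom merely stands in for the generator under test.

    import os
    from collections import Counter
    from scipy.stats import chisquare, kstest

    def block_pvalues(data, block_size=4096):
        """Chi-square p-value of the byte frequencies in each block."""
        pvals = []
        for i in range(0, len(data) - block_size + 1, block_size):
            counts = Counter(data[i:i + block_size])
            observed = [counts.get(b, 0) for b in range(256)]
            pvals.append(chisquare(observed).pvalue)
        return pvals

    data = os.urandom(4096 * 64)          # stand-in for the generator under test
    pvals = block_pvalues(data)
    print("p-values span %.3f .. %.3f" % (min(pvals), max(pvals)))
    # The p-values themselves should look uniform if the generator behaves:
    print("KS test of the p-values:", kstest(pvals, "uniform").pvalue)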
Subject: Re: hardRandNumbGen Date: 27 Jan 1999 12:01:50 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78ngpu$lrm$1@quine.mathcs.duq.edu> References: <36af2cf5.18914767@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 43 In article <36af2cf5.18914767@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On Wed, 27 Jan 1999 15:38:29 +0100, Mok-Kong Shen ><mok-kong.shen@stud.uni-muenchen.de> wrote: > >>Secondly, your equiprobability is not at all sufficient. > >I never said it was. I said that the TRNG must be *CAPABLE* of: > >1) Outputting all possible sequences of a given finite length; and > >2) Outputting each of them equiprobably. > >>If the said given finite length is 2, is a physical device outputting >>0001101100011011..... a TRNG????? > >Sure, why not - if you group the number above in 2-bit sequences: > >00 01 10 11 00 01 10 11 > >Pad1: 00 >Pad2: 01 >Pad3: 10 >.... >Pad8: 11 Minor quantification error : Mr. Knauer's criterion should not be interpreted as: There exists a length such that all possible sequences of that length are outputted equiprobably, but as : For all (finite) lengths, all possible sequences of that length are outputted equiprobably. So if that's really the output of your generator, it's *NOT* a TRNG as it's a single 8-bit sequence repeated with probability 1. -kitten
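The quantifier difference is easy to see by experiment. The Python sketch below takes the repeating stream from the example and counts how many distinct non-overlapping blocks of each length ever occur: at length 2 everything looks fine, but at lengths 4 and 8 almost all of the possible blocks are missing, so sequences of those lengths cannot be equiprobable.

    from collections import Counter

    stream = "00011011" * 1000           # the repeating example, 8000 bits

    for k in (2, 4, 8):
        blocks = [stream[i:i + k] for i in range(0, len(stream), k)]
        distinct = len(Counter(blocks))
        print("block length %d: %d of %d possible blocks ever appear"
              % (k, distinct, 2 ** k))
    # Prints 4 of 4, then 2 of 16, then 1 of 256.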
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 18:10:02 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af54d6.29123497@nntp.ix.netcom.com> References: <78ngpu$lrm$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 56 On 27 Jan 1999 12:01:50 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >>>Secondly, your equiprobability is not at all sufficient. >>I never said it was. I said that the TRNG must be *CAPABLE* of: >>1) Outputting all possible sequences of a given finite length; and >>2) Outputting each of them equiprobably. >>>If the said given finite length is 2, is a physical device outputting >>>0001101100011011..... a TRNG????? >>Sure, why not - if you group the number above in 2-bit sequences: >> >>00 01 10 11 00 01 10 11 >> >>Pad1: 00 >>Pad2: 01 >>Pad3:10 >>.... >>Pad8: 11 > > >Minor quantification error : > >Mr. Knauer's criterion should not be interpreted as: > >There exists a length such that all possible sequences of that >length are outputted equiprobably, > >but as : > >For all (finite) lengths, all possible sequences of that length are >outputted equiprobable. > >So if that's really the output of your generator, it's *NOT* a TRNG >as it's a single 8-bit sequence repeated with probability 1. I assumed the poster had generated the 16 bit sequence above using a TRNG, like a fair coin toss. I then assumed that he wanted to use it to make a pad. Then he says something about a length of 2, which I assumed he meant as message lengths - that he planned to send 8 messages of 2 bits in length. Given all those assumptions, I see no problem with using the sequence above as for an OTP - unless I am missing something. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Fri, 29 Jan 1999 05:48:38 GMT From: cr764@torfree.net (Kurt Wismer) Message-ID: <F6B453.D3o.0.queen@torfree.net> References: <36ae16a8.29884341@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 19 R. Knauer (rcktexas@ix.netcom.com) wrote: : Learn what crypto-grade randomness is. The concept is deceptively : simple once you understand it. But first you have to give up all other : definitions of randomness from other fields like statistics. : The key to understanding is that randomness depends on the generation : process, not the numbers themselves. The number 000...0 fails all : sorts of statistical tests, but can be a random number if it is : generated by a TRNG. Until you analyze the method of generation, you : cannot know. this is the definition i've used for years... strangely, nothing i ever learned in statistics ever suggested i was wrong... -- "some speak the sounds but speak in silent voices like radio is silent though it fills the air with noises its transmissions bring submission as ya mold to the unreal mad boy grips the microphone wit' a fistful of steel"
Subject: Re: hardRandNumbGen Date: 29 Jan 1999 09:12:24 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78sfk8$ou7$1@quine.mathcs.duq.edu> References: <F6B453.D3o.0.queen@torfree.net> Newsgroups: sci.crypt Lines: 25 In article <F6B453.D3o.0.queen@torfree.net>, Kurt Wismer <cr764@torfree.net> wrote: >R. Knauer (rcktexas@ix.netcom.com) wrote: >: Learn what crypto-grade randomness is. The concept is deceptively >: simple once you understand it. But first you have to give up all other >: definitions of randomness from other fields like statistics. > >: The key to understanding is that randomness depends on the generation >: process, not the numbers themselves. The number 000...0 fails all >: sorts of statistical tests, but can be a random number if it is >: generated by a TRNG. Until you analyze the method of generation, you >: cannot know. > >this is the definition i've used for years... strangely, nothing i ever >learned in statistics ever suggested i was wrong... Possibly because it isn't. 8-) On the other hand, it also looks suspiciously like the result of an incompetent engineer *trying* to build a RNG. Which is it? Your call. You know the engineers that you hired.... -kitten
Subject: Re: hardRandNumbGen Date: Sat, 30 Jan 1999 17:15:11 GMT From: cr764@torfree.net (Kurt Wismer) Message-ID: <F6DuLB.Lun.0.queen@torfree.net> References: <78sfk8$ou7$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 43 Patrick Juola (juola@mathcs.duq.edu) wrote: : In article <F6B453.D3o.0.queen@torfree.net>, : Kurt Wismer <cr764@torfree.net> wrote: : >R. Knauer (rcktexas@ix.netcom.com) wrote: : >: Learn what crypto-grade randomness is. The concept is deceptively : >: simple once you understand it. But first you have to give up all other : >: definitions of randomness from other fields like statistics. : > : >: The key to understanding is that randomness depends on the generation : >: process, not the numbers themselves. The number 000...0 fails all : >: sorts of statistical tests, but can be a random number if it is : >: generated by a TRNG. Until you analyze the method of generation, you : >: cannot know. : > : >this is the definition i've used for years... strangely, nothing i ever : >learned in statistics ever suggested i was wrong... : Possibly because it isn't. 8-) : On the other hand, it also looks suspiciously like the result of : an incompetent engineer *trying* to build a RNG. : Which is it? Your call. You know the engineers that you hired.... the null output you mean? i can't know simply by looking at the output... statistical tests can be diagnostic and suggest whether bias was unintentionally introduced... if the generator is relatively cheap i might ask the engineers (or maybe a different group of engineers) to build another to see if it behaves similarly... might also want to have group go over the first one with a fine tooth comb to make sure it meets design specifications... it may be that the design is prone to errors in implementation, in which case it should be redesigned (at least if the rng is meant for large scale production)... it might also be a statistical anomaly... i don't see any foolproof method of verifying that the trng is indeed a trng, however... -- "some speak the sounds but speak in silent voices like radio is silent though it fills the air with noises its transmissions bring submission as ya mold to the unreal mad boy grips the microphone wit' a fistful of steel"
Subject: Re: hardRandNumbGen Date: Sun, 31 Jan 1999 00:08:31 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36b39ec6.42742981@nntp.ix.netcom.com> References: <F6DuLB.Lun.0.queen@torfree.net> Newsgroups: sci.crypt Lines: 17 On Sat, 30 Jan 1999 17:15:11 GMT, cr764@torfree.net (Kurt Wismer) wrote: >i don't >see any foolproof method of verifying that the trng is indeed a trng, >however... I believe that a radioactive decay TRNG can be verified to within a known level of precision. Bob Knauer "I place economy among the first and most important virtues and public debt as the greatest dangers to be feared. We must not let our rulers load us with perpetual debt." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: 1 Feb 1999 09:18:14 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <794d36$qh0$1@quine.mathcs.duq.edu> References: <36b39ec6.42742981@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 18 In article <36b39ec6.42742981@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On Sat, 30 Jan 1999 17:15:11 GMT, cr764@torfree.net (Kurt Wismer) >wrote: > >>i don't >>see any foolproof method of verifying that the trng is indeed a trng, >>however... > >I believe that a radioactive decay TRNG can be verified to within a >known level of precision. But this is yet another example of a statistical test. There will be some level of precision greater than which you can't test -- and some possibility that the randomness will result in an apparent aberration. -kitten
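One way to put a number on that limit: a frequency test can only notice a bias of size eps in P(bit = 1) once it has seen enough bits for eps to stand out from the sampling noise, roughly n = z^2 / (4 * eps^2) bits for a separation of z standard errors. The sketch below simply evaluates that normal-approximation estimate for a few bias sizes; the cutoff z = 3 is an arbitrary illustrative choice.

    def bits_needed(eps, z=3.0):
        """Rough sample size for a frequency test to resolve a bias eps."""
        return int(z * z / (4.0 * eps * eps))

    for eps in (1e-2, 1e-3, 1e-4, 1e-6):
        print("bias %g  ->  about %d bits" % (eps, bits_needed(eps)))
    # A one-part-per-million bias needs on the order of 10^12 bits to see,
    # which is the sense in which any finite test has limited precision.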
Subject: Re: hardRandNumbGen Date: Wed, 03 Feb 1999 18:44:56 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36b8955c.18282849@nntp.ix.netcom.com> References: <794d36$qh0$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 44 On 1 Feb 1999 09:18:14 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >>I believe that a radioactive decay TRNG can be verified to within a >>known level of precision. >But this is yet another example of a statistical test. There will be >some level of precision greater than which you can't test -- and >some possibility that the randomness will result in an apparent >aberration. I covered this point in an earlier post. The tests, statistical or otherwise, would be of a diagnostic nature applied to the subsystems of the TRNG. Those diagnostics would be based on the design of the applicable subsystem in terms of known modes of malfunction. Therefore those diagnostic tests are of a determinate nature since their results can be related to a known condition. That cannot be said of statistical tests of the final output sequence, which by definition is completely indeterminate. Of course, something in one of the subsystems of the TRNG must be random, and testing it statistically would not be valid. But the fact that it is random can be inferred from the nature of the underlying physical process, like with radioactive decay. Therefore a complete audit of the TRNG, subsystem by subsystem, can be conducted, resulting in a known level of precision for the final output. Such a TRNG can be certified as provably secure if the level of precision is such that it would take an impossibly large work effort to decrypt OTP ciphers. The problem with statistical tests on the final output is that there is no reliable way to quantify the level of precision for ALL outputs, and there is no reliable way to filter out "bad" outputs since there is no such thing as a "bad" output - except possibly in the case of the diagnostic test for all 1s (open output) or all 0s (shorted output). Bob Knauer "Sometimes it is said that man cannot be trusted with the government of himself. Can he, then, be trusted with the government of others?" --Thomas Jefferson
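The one output-side check allowed for above -- catching an output stuck at all ones (open) or all zeros (shorted) -- is easy to state as code. The sketch below is an illustrative on-line diagnostic of that kind; the run-length cutoff of 64 is an arbitrary choice, not a published standard, and such a check is a supplement to, not a substitute for, the subsystem audit described in the post.

    import random

    def stuck_or_long_run(bits, max_run=64):
        """Flag all-ones, all-zeros, or an implausibly long run."""
        if all(b == 1 for b in bits):
            return "stuck at 1 (open output?)"
        if all(b == 0 for b in bits):
            return "stuck at 0 (shorted output?)"
        run = longest = 1
        for prev, cur in zip(bits, bits[1:]):
            run = run + 1 if cur == prev else 1
            longest = max(longest, run)
        return "suspicious run of %d" % longest if longest > max_run else "ok"

    print(stuck_or_long_run([random.getrandbits(1) for _ in range(10000)]))
    print(stuck_or_long_run([1] * 10000))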
Subject: Re: hardRandNumbGen Date: Wed, 03 Feb 1999 16:08:03 -0500 From: "Trevor Jackson, III" <fullmoon@aspi.net> Message-ID: <36B8BAB2.5D8D2F7E@aspi.net> References: <36b8955c.18282849@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 49 R. Knauer wrote: > On 1 Feb 1999 09:18:14 -0500, juola@mathcs.duq.edu (Patrick Juola) > wrote: > > >>I believe that a radioactive decay TRNG can be verified to within a > >>known level of precision. > > >But this is yet another example of a statistical test. There will be > >some level of precision greater than which you can't test -- and > >some possibility that the randomness will result in an apparent > >aberration. > > I covered this point in an earlier post. The tests, statistical or > otherwise, would be of a diagnostic nature applied to the subsystems > of the TRNG. Those diagnostics would be based on the design of the > applicable subsystem in terms of known modes of malfunction. Therefore > those diagnostic tests are of a determinate nature since their results > can be related to a known condition. So a properly designed RNG is not permitted to fail in an unknown way? Is this design methodology documented anywhere? It sounds like it avoids Murphy the way perpetual motion machines avoid friction. > > > That cannot be said of statistical tests of the final output sequence, > which by definition is completely indeterminate. Of course, something > in one of the subsystems of the TRNG must be random, and testing it > statistically would not be valid. But the fact that it is random can > be inferred from the nature of the underlying physical process, like > with radioactive decay. > > Therefore a complete audit of the TRNG, subsystem by subsystem, can be > conducted, resulting in a known level of precision for the final > output. Such a TRNG can be certified as provably secure if the level > of precision is such that it would take an impossibly large work > effort to decrypt OTP ciphers. > > The problem with statistical tests on the final output is that there > is no reliable way to quantify the level of precision for ALL outputs, > and there is no reliable way to filter out "bad" outputs since there > is no such thing as a "bad" output - except possibly in the case of > the diagnostic test for all 1s (open output) or all 0s (shorted > output). Now I get it. We shouldn't test because there might be a source of entropy present other than that which is designed to be present.
Subject: Re: hardRandNumbGen Date: 27 Jan 1999 08:37:36 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78n4r0$l6l$1@quine.mathcs.duq.edu> References: <36AE0852.16B9A95F@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 55 In article <36AE0852.16B9A95F@stud.uni-muenchen.de>, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >R. Knauer wrote: >> >> On Tue, 26 Jan 1999 13:33:49 +0100, Mok-Kong Shen >> <mok-kong.shen@stud.uni-muenchen.de> wrote: >> >> >If I have two sources of randomness, one software and one hardware, >> >both passing all statistical tests I apply equally well, why should >> >I choose one source in preference to the other? >> >> Why do you persist in believing that statistical tests have anything >> to do with randomness in cryptography? > >Tell me what other (better) tools are available for me to make >the decision. These are simply easy to obtain, as far as my >humble knowledge goes. Please kindly give your recipe to cope with >the situation I described. Thanks in advance. There's an old joke about a man losing a quarter and looking for it on the wrong corner "because the light is better here." Or as another metaphor, if I have two used cars, one a Chevrolet and one a Ford, both with identical option packages, which one's the better buy? Only an idiot buys a used car on the basis of the option package; anyone with any sense will look under the hood, test drive it, and so forth. And if you know nothing at all about engines, if you're sensible you'll get a knowledgeable friend or a professional mechanic to look at it and figure out whether or not it's got some lurking evil hiding in the clutch master cylinder. (No, I'm not bitter about that old Renault. Not at all!) You *can't* evaluate a random number generator based only on the statistical properties of the stream it produces. Yes, we've got lots of useful statistical tools that will tell you if it's producing an obviously biased stream, and if it can't pass them, then it's not very good. *BUT* that's not enough. You are going to have to open the hood and look at the engine and see how it's put together. Lots of things that *look* random -- LFSRs, congruential generators, "chaotic function" generators -- have been cracked. Lots of apparent hardware randomness solutions have subtle, or not so subtle, biases. Lots of people have written lots of buggy code. At this level of evaluation, there are no *tests* per se, although there are lots of informal checks. E.g., if someone uses a PRNG with a 32-bit seed, that's not random enough. If someone uses a pure LFSR, it doesn't matter *how* large the seed is, it's not secure. And so forth. But the idea that if you can't run a formal test or if there isn't a software package to do the work for you, it doesn't matter, is pure and unadulterated bullshit. -kitten
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 14:54:25 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af1f4b.15416708@nntp.ix.netcom.com> References: <78n4r0$l6l$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 42 On 27 Jan 1999 08:37:36 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >You *can't* evaluate a random number generator based only on the >statistical properties of the stream it produces. I fully agree for finite sequences, but as I mention in the thread entitled "Metaphysics of Randomness", statistical tests presumably do apply to infinite sequences. But then, an infinite sequence contains all the details about the generation process, so that makes sense. On an earlier occasion I asked if the Bayesian attack could be used to characterize crypto-grade random numbers. If I gave you a large number of ciphers of substantial length based on some unknown key generation procedure, you tried the Bayesian attack on them and they "leaked" no significant amount of information, then you would probably conclude that the pads were most likely generated by a TRNG, even if I had used some other method like a well-hashed stream cipher. Then I went on to ask if such a "Bayesian Test" for vulnerability could be used to produce confidence limits on the security of subsequent ciphers produced by that encryption method. Assuming that you could use such a Bayesian Test to characterize randomness, how does that differ from statistical tests for randomness? If you were to take the combined keys for all the ciphers used in the Bayesian Test and concatenate them to one very large number for statistical testing, we know that does not tell us if the number is random - otherwise statistical testing could be used to characterize randomness. But why should applying the Bayesian Test to presumably random numbers be any different? (I think I know why - one is a statistical test, the other is a probability survey.) Can you or anyone else please comment on this. Bob Knauer "An honest man can feel no pleasure in the exercise of power over his fellow citizens." --Thomas Jefferson
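For what a "Bayesian test" can look like in its very simplest form, consider just the hypothesis of a biased bit. Starting from a uniform prior on p = P(bit = 1), after seeing k ones in n bits the posterior is Beta(1 + k, 1 + n - k), and the credible interval tightening around 1/2 quantifies how much simple bias the data can still hide. The sketch below (Python with scipy; os.urandom stands in for the key material under examination) illustrates only that one hypothesis and says nothing about subtler structure, which is the point taken up in the replies.

    import os
    from scipy.stats import beta

    raw = os.urandom(125000)                        # about 10^6 bits to examine
    k = sum(bin(byte).count("1") for byte in raw)   # number of one bits
    n = 8 * len(raw)

    posterior = beta(1 + k, 1 + n - k)              # prior Beta(1,1) updated
    lo, hi = posterior.interval(0.95)
    print("posterior mean of p = %.5f" % posterior.mean())
    print("95%% credible interval: (%.5f, %.5f)" % (lo, hi))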
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 16:16:24 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AF2DC8.197D196@stud.uni-muenchen.de> References: <36af1f4b.15416708@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 10 R. Knauer wrote: > > But why should applying the Bayesian Test to presumably random numbers > be any different? (I think I know why - one is a statistical test, the > other is a probability survey.) What is a 'probability survey'? Any literature reference? M. K. Shen
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 15:53:39 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af363c.21289092@nntp.ix.netcom.com> References: <36AF2DC8.197D196@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 18 On Wed, 27 Jan 1999 16:16:24 +0100, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >What is a 'probability survey'? Any literature reference? I used the term to describe the process whereby one surveyed a large number of ciphers using probability techniques such as the Bayesian method. See Patrick's following comments. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: 27 Jan 1999 10:21:56 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78nauk$lks$1@quine.mathcs.duq.edu> References: <36af1f4b.15416708@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 113 In article <36af1f4b.15416708@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 27 Jan 1999 08:37:36 -0500, juola@mathcs.duq.edu (Patrick Juola) >wrote: > >>You *can't* evaluate a random number generator based only on the >>statistical properties of the stream it produces. > >I fully agree for finite sequences, but as I mention in the thread >entitled "Metaphysics of Randomness", statistical tests presumably do >apply to infinite sequences. But then, an infinite sequence contains >all the details about the generation process, so that makes sense. Umm.... this is a very limited use of the word "test." I will, of course, be delighted to apply any test you choose to any sequence you choose, finite or not. But I charge by the hour, and it would, of course, be improper of me to give you any results until after I've examined the entire sequence. Still want me to test that infinite sequence for you? >On an earlier occasion I asked if the Bayesian attack could be used to >characterize crypto-grade random numbers. If I gave you a large number >of ciphers of substantial length based on some unknown key generation >procedure, you tried the Bayesian attack on them and they "leaked" no >significant amount of information, then you would probably conclude >that the pads were most likely generated by a TRNG, even if I had used >some other method like a well-hashed stream cipher. Probably. By the time you get up to the upper reaches of what we can do with PRNGs, the information leak over reasonably-sized samples is pretty infinitesimal. Having only reasonably-sized amounts of funding, computing power, and patience at my disposal, there's a limit to what can be detected -- and if I get something that's indistinguishable from random by this test, then I'm probably willing to pass it as a "good" RNG. On the other hand, there's definitely a possibility of error there, which is why I wouldn't be likely to make any such statement without asking to see how you generated the key sequence(s). An example of something you could "sneak under the radar" would be a very large LFSR. There are well-known tests of linear complexity that give the minimum size of an LFSR to generate a given sequence. A "truly random" sequence will (with probability 1) yield a complexity curve looking something like this :

       _/
      _/
     _/
    _/
   _/
  _/
 /

A LFSR-generated sequence will have a complexity curve something like this :

      ____________________________________
     _/
    _/
   _/
  _/
 /

where it levels off after you hit the generator size. The question, however, is a) did you give me enough data to find where the sequence levels off? b) did I run a test powerful enough and long enough to find that point? The first is a mathematical property of the sequence. The second is a question of my time, expertise, and a bit of luck. The problem is that the Bayesian attack only works if I know -- or can guess -- the type of bias likely to be in your generator. In most cases this isn't a problem; many biases are blatantly obvious or result from common mistakes. The more sophisticated you are at hiding your biases, the less likely I am to guess the sort of bias or to lose patience/funding before I complete the necessary tests.
(Of course, if I have an infinite amount of time/money, I can make an infinite number of guesses and will probably guess right on at least one of them.) On the other hand, if I can get a copy of your code, I can just read the code and determine your biases. But you can't rely on keeping your code secret.... > Then I went on to >ask if such a "Bayesian Test" for vulnerability could be used to >produce confidence limits on the security of subsequent ciphers >produced by that encryption method. In theory, given a large enough number of computers, without question. But we're running into serious funding difficulties here. >But why should applying the Bayesian Test to presumably random numbers >be any different? (I think I know why - one is a statistical test, the >other is a probability survey.) More a question about finite vs. infinite data. Bayes' Theorem lets you refine hypotheses about biases that you've already made. Conventional statistics just let you test for the presence or absence of a visible bias. As any statistician will tell you, you can't prove the absence of an effect by statistical means. You can just prove that it didn't show up in your experiment and therefore was less than the sensitivity of your test. Of course, with infinite data, you can develop "tests," in the loosest possible sense, of infinite sensitivity. But this isn't helpful. -kitten
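The "complexity curve" described here is what is usually called the linear complexity profile, and it can be computed with the Berlekamp-Massey algorithm. The Python sketch below records the linear complexity after each prefix, for a random-looking stream and for the output of a small LFSR whose taps were chosen arbitrarily for illustration; the random profile keeps climbing at roughly half the prefix length, while the LFSR profile flattens out at or below the register size.

    def linear_complexity_profile(s):
        """Berlekamp-Massey over GF(2); returns L after each prefix of s."""
        n = len(s)
        c, b = [0] * (n + 1), [0] * (n + 1)
        c[0] = b[0] = 1
        L, m, profile = 0, -1, []
        for i in range(n):
            d = s[i]
            for j in range(1, L + 1):
                d ^= c[j] & s[i - j]
            if d:
                t = c[:]
                for j in range(n + 1 - (i - m)):
                    c[j + i - m] ^= b[j]
                if 2 * L <= i:
                    L, m, b = i + 1 - L, i, t
            profile.append(L)
        return profile

    import random
    rand_bits = [random.getrandbits(1) for _ in range(64)]

    state = [1] * 16                      # a 16-stage LFSR, arbitrary taps
    lfsr_bits = []
    for _ in range(64):
        lfsr_bits.append(state[-1])
        feedback = state[-1] ^ state[-3] ^ state[-4] ^ state[-6]
        state = [feedback] + state[:-1]

    print("random profile:", linear_complexity_profile(rand_bits)[7::8])
    print("LFSR profile  :", linear_complexity_profile(lfsr_bits)[7::8])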
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 16:12:42 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af34e2.20943054@nntp.ix.netcom.com> References: <78nauk$lks$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 97 On 27 Jan 1999 10:21:56 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >But I charge by the hour, and it would, >of course, be improper of me to give you any results until after I've >examined the entire sequence. >Still want me to test that infinite sequence for you? Sure! As long as I only have to pay you when the test is completed. :-) >Probably. By the time you get up to the upper reaches of what we >can do with PRNGs, the information leak over reasonably-sized >samples is pretty infinitesimal. Is there a way to measure that - to give confidence limits? >Having only reasonably-sized >amounts of funding, computing power, and patience at my disposal, >there's a limit to what can be detected -- and if I get something >that's indistinguishable from random by this test, then I'm probably >willing to pass it as a "good" RNG. I assumed that you had all the existing resources at your disposal that you wanted - like the NSA has.
>a complexity curve looking
>something like this :
>
>       _/
>      _/
>     _/
>    _/
>   _/
>  _/
> /
What is a "complexity curve" and how do you generate one? References please, including web sites if possible. >The problem is that the Bayesian attack only works if I know -- or >can guess -- the type of bias likely to be in your generator. So, the Bayesian attack is not all that powerful against stream ciphers *in general*. You have to provide the first hypothesis to get it started. And if you cannot provide a decent hypothesis, then the Bayesian attack is worthless. >On the other hand, if I can get a copy of your code, I can just read >the code and determine your biases. But you can't rely on keeping >your code secret.... I can rely on keeping it just as secret as I keep my keys. If my code has been compromised, so have my keys. >But we're running into serious funding difficulties here. Not if you are the NSA. There is no such thing as a funding difficulty when you have access to OPM. >More a question about finite vs. infinite data. Bayes' Theorem lets you >refine hypotheses about biases that you've already made. Conventional >statistics just let you test for the presence or absence of a visible >bias. As any statistician will tell you, you can't prove the absence >of an effect by statistical means. You can just prove that it >didn't show up in your experiment and therefore was less than the >sensitivity of your test. Crypto-grade randomness has the negative property that it is non-deterministic. Statistical tests cannot prove the absence of determinism in numbers. That is why they cannot be used to characterize randomness from the numbers themselves. >Of course, with infinite data, you can develop "tests," in the loosest >possible sense, of infinite sensitivity. The sequence 111... with an infinite number of 1s is not a random number. An infinite random number has no bit bias. Finite random numbers can have bit bias. Therefore the finite sequence 111...1 can be a random number. After all, it is one sequence from a TRNG. >But this isn't helpful. It is helpful perhaps when proving theorems. Whether that is of value to the working cryptanalyst is another matter. Maybe when quantum computers come online it will be useful. Bob Knauer "No Freeman shall ever be debarred the use of arms. 
The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 12:13:29 -0500 From: "Trevor Jackson, III" <fullmoon@aspi.net> Message-ID: <36AF4939.9C33D7CD@aspi.net> References: <36af34e2.20943054@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 15 R. Knauer wrote: > On 27 Jan 1999 10:21:56 -0500, juola@mathcs.duq.edu (Patrick Juola) > wrote: > > >On the other hand, if I can get a copy of your code, I can just read > >the code and determine your biases. But you can't rely on keeping > >your code secret.... > > I can rely on keeping it just as secret as I keep my keys. If my code > has been compromised, so have my keys. Hardly. One can change keys arbitrarily. One cannot change code so often (here code == algorithm not .EXE).
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 18:44:52 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af5ae4.30673045@nntp.ix.netcom.com> References: <36AF4939.9C33D7CD@aspi.net> Newsgroups: sci.crypt Lines: 31 On Wed, 27 Jan 1999 12:13:29 -0500, "Trevor Jackson, III" <fullmoon@aspi.net> wrote: >R. Knauer wrote: Could I ask you to keep the header information with my attribution, just like you did for the nested poster below. Thanks. >> On 27 Jan 1999 10:21:56 -0500, juola@mathcs.duq.edu (Patrick Juola) >> wrote: >> >On the other hand, if I can get a copy of your code, I can just read >> >the code and determine your biases. But you can't rely on keeping >> >your code secret.... >> I can rely on keeping it just as secret as I keep my keys. If my code >> has been compromised, so have my keys. >Hardly. One can change keys arbitrarily. Once cannot change code so often >(here code == algorithm not .EXE). How about making your algorithm (code) part of the key? That way you could change algorithms as often as you change keys. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: 27 Jan 1999 14:32:25 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78npk9$m54$1@quine.mathcs.duq.edu> References: <36af5ae4.30673045@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 32 In article <36af5ae4.30673045@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On Wed, 27 Jan 1999 12:13:29 -0500, "Trevor Jackson, III" ><fullmoon@aspi.net> wrote: > >>R. Knauer wrote: > >>> On 27 Jan 1999 10:21:56 -0500, juola@mathcs.duq.edu (Patrick Juola) >>> wrote: > >>> >On the other hand, if I can get a copy of your code, I can just read >>> >the code and determine your biases. But you can't rely on keeping >>> >your code secret.... > >>> I can rely on keeping it just as secret as I keep my keys. If my code >>> has been compromised, so have my keys. > >>Hardly. One can change keys arbitrarily. Once cannot change code so often >>(here code == algorithm not .EXE). > >How about making your algorithm (code) part of the key? That way you >could change algorithms as often as you change keys. The larger the key, the more difficulty you have in changing/exchanging it. If you're going to exchange 10,000 bytes of code, why don't you use the same mechanism to exchange a 10,000 byte RSA key or something like that, which gives you much more bang/buck? -kitten
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 23:46:31 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36afa530.49693825@nntp.ix.netcom.com> References: <78npk9$m54$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 19 On 27 Jan 1999 14:32:25 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >The larger the key, the more difficulty you have in changing/exchanging >it. >If you're going to exchange 10,000 bytes of code, why don't you use >the same mechanism to exchange a 10,000 byte RSA key or something like >that, which gives you much more bang/buck? The question was theoretical. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 15:44:49 -0500 From: "Trevor Jackson, III" <fullmoon@aspi.net> Message-ID: <36AF7AC1.98B15F0E@aspi.net> References: <36af5ae4.30673045@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 33 R. Knauer wrote: > On Wed, 27 Jan 1999 12:13:29 -0500, "Trevor Jackson, III" > <fullmoon@aspi.net> wrote: > > >R. Knauer wrote: > > Could I ask you to keep the header information with my attribution, > just like you did for the nested poster below. Thanks. > > >> On 27 Jan 1999 10:21:56 -0500, juola@mathcs.duq.edu (Patrick Juola) > >> wrote: > > >> >On the other hand, if I can get a copy of your code, I can just read > >> >the code and determine your biases. But you can't rely on keeping > >> >your code secret.... > > >> I can rely on keeping it just as secret as I keep my keys. If my code > >> has been compromised, so have my keys. > > >Hardly. One can change keys arbitrarily. One cannot change code so often > >(here code == algorithm not .EXE). > > How about making your algorithm (code) part of the key? That way you > could change algorithms as often as you change keys. In theory you can do this by encoding the algorithm appropriately. However, you need a clever encoding scheme such that all key values translate to valid algorithms. In addition, you need to show that all such encoded algorithms are "secure".
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 23:50:54 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36afa56b.49752720@nntp.ix.netcom.com> References: <36AF7AC1.98B15F0E@aspi.net> Newsgroups: sci.crypt Lines: 29 On Wed, 27 Jan 1999 15:44:49 -0500, "Trevor Jackson, III" <fullmoon@aspi.net> wrote: >> How about making your algorithm (code) part of the key? That way you >> could change algorithms as often as you change keys. >In theory you can do this by encoding the algorithm appropriately. Any suggestions on how to do that? >However, you >need a clever encoding scheme such that all key values translate to valid >algoirhms. A daunting task. >In addition, you need to show that all such encoded algorithms are >"secure". A daunting task. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: 27 Jan 1999 15:06:39 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78nrkg$m6o$1@quine.mathcs.duq.edu> References: <36af34e2.20943054@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 59 In article <36af34e2.20943054@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >>Having only reasonably-sized >>amounts of funding, computing power, and patience at my disposal, >>there's a limit to what can be detected -- and if I get something >>that's indistinguishable from random by this test, then I'm probably >>willing to pass it as a "good" RNG. > >I assumed that you had all the existing resources at your disposal >that you wanted - like the NSA has. Um... the NSA does *NOT* have infinite resources. Far from it. The GNP of the United States is only measured in the trillions, so any hardware costing more than a quadrillion dollars or so is probably out of reach. That's the problem with infinite. It's *BIG*. Of a scale that dwarfs anything we can possibly imagine. From the standpoint of "perfect secrecy", a block cypher with a 10,000 bit key is exactly as strong as a Captain Midnight secret decoder ring -- i.e., not perfect and hence infinitely less strong. God has no more difficulty reading a message that the NSA would pound against for a billion years than he has reading something rot13ed. For us mortals, the NSA's capacities are huge. It's easy to hide something from the NSA....
>>a complexity curve looking
>>something like this :
>>
>>       _/
>>      _/
>>     _/
>>    _/
>>   _/
>>  _/
>> /
>
>What is a "complexity curve" and how do you generate one?
x axis == length of string in bits, y axis == linear complexity of the prefix of the (infinite) string of such length. I think there's a brief discussion in the RSA Labs Stream Ciphers tech report -- if not, check the various complexity-related citations contained therein. RSA Labs is on the Web somewhere, as is the TR. >>The problem is that the Bayesian attack only works if I know -- or >>can guess -- the type of bias likely to be in your generator. > >So, the Bayesian attack is not all that powerful against stream >ciphers *in general*. If all you know is that someone used "a stream cypher", then, no, it's not useful. But usually you know more than that. Or you guess lucky, which is half the skill in cryptography. -kitten
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 12:10:17 -0500 From: "Trevor Jackson, III" <fullmoon@aspi.net> Message-ID: <36AF4878.7686E20E@aspi.net> References: <78nauk$lks$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 120 Patrick Juola wrote: > In article <36af1f4b.15416708@nntp.ix.netcom.com>, > R. Knauer <rcktexas@ix.netcom.com> wrote: > >On 27 Jan 1999 08:37:36 -0500, juola@mathcs.duq.edu (Patrick Juola) > >wrote: > > > >>You *can't* evaluate a random number generator based only on the > >>statistical properties of the stream it produces. > > > >I fully agree for finite sequences, but as I mention in the thread > >entitled "Metaphysics of Randomness", statistical tests presumably do > >apply to infinite sequences. But then, an infinite sequence contains > >all the details about the generation process, so that makes sense. > > Umm.... this is a very limited use of the word "test." I will, of > course, be delighted to apply any test you choose to any sequence > you choose, finite or note. But I charge by the hour, and it would, > of course, be improper of me to give you any results until after I've > examined the entire sequence. > > Still want me to test that infinite sequence for you? > > >On an earlier occasion I asked if the Bayesian attack could be used to > >characterize crypto-grade random numbers. If I gave you a large number > >of ciphers of substantial length based on some unknown key generation > >procedure, you tried the Bayesian attack on them and they "leaked" no > >significant amount of information, then you would probably conclude > >that the pads were most likely generated by a TRNG, even if I had used > >some other method like a well-hashed stream cipher. > > Probably. By the time you get up to the upper reaches of what we > can do with PRNGs, the information leak over reasonably-sizes > samples is pretty infinitesimal. Having only reasonably-sized > amounts of funding, computing power, and patience at my disposal, > there's a limit to what can be detected -- and if I get something > that's indistinguishable from random by this test, then I'm probably > willing to pass it as a "good" RNG. > > On the other hand, there's definitely a possiblity of error there, > which is why I wouldn't be likely to make any such statement without > asking to see how you generated the key sequence(s). An example > of something you could "sneak under the radar" would be a very large > LFSR. There are well-known tests of linear complexity that give the > minimum size of an LFSR to generate a given sequence. A "truly random" > sequence will (with probability 1) yield a complexity curve looking > something like this : > > _/ > _/ > _/ > _/ > _/ > _/ > / > > A LFSR-generated sequence will have a complexity curve something like > this : > > ____________________________________ > _/ > _/ > _/ > _/ > / > > where it levels off after you hit the generator size. > > The question, however, is > > a) did you give me enough data to find where the sequence levels off? > b) did I run a test powerful enough and long enough to find that point? > > The first is a mathematical property of the sequence. The second is > a question of my time, expertise, and a bit of luck. > > The problem is that the Bayesian attack only works if I know -- or > can guess -- the type of bias likely to be in your generator. In > most cases this isn't a problem; many biases are blatantly obvious > or result from common mistakes. 
The more sophisticated you are at > hiding your biases, the less likely I am to guess the sort of bias > or to lose patience/funding before I complete the necessary tests. > (Of course, if I have an infinite amount of time/money, I can > make an infinite number of guesses and will probably guess right > on at least one of them.) > > On the other hand, if I can get a copy of your code, I can just read > the code and determine your biases. But you can't rely on keeping > your code secret.... > > > Then I went on to > >ask if such a "Bayesian Test" for vulnerability could be used to > >produce confidence limits on the security of subsequent ciphers > >produced by that encryption method. > > In theory, given a large enough number of computers, without > question. But we're running into serious funding difficulties here. > > >But why should applying the Bayesian Test to presumably random numbers > >be any different? (I think I know why - one is a statistical test, the > >other is a probability survey.) > > More a question about finite vs. infinite data. Bayes' Theorem lets you > refine hypotheses about biases that you've already made. Conventional > statistics just let you test for the presence or absence of a visible > bias. As any statistician will tell you, you can't prove the absence > of an effect by statistical means. You can just prove that it > didn't show up in your experiment and therefore was less than the > sensitivity of your test. > > Of course, with infinite data, you can develop "tests," in the loosest > possible sense, of infinite sensitivity. But this isn't helpful. To boil all this verbiage down, analysis of the generation process is more efficient than analysis of the generator output. One reason for the improved efficiency is that understanding the generation process provides hints of the potential defects worth testing. Another reason is that the description of the generation process is logarithmicaly smaller than the description of the generated output.
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 18:28:35 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af575c.29769466@nntp.ix.netcom.com> References: <36AF4878.7686E20E@aspi.net> Newsgroups: sci.crypt Lines: 40 On Wed, 27 Jan 1999 12:10:17 -0500, "Trevor Jackson, III" <fullmoon@aspi.net> wrote: >To boil all this verbiage down, analysis of the generation process is more >efficient than analysis of the generator output. One reason for the >improved efficiency is that understanding the generation process provides >hints of the potential defects worth testing. Another reason is that the >description of the generation process is logarithmicaly smaller than the >description of the generated output. Algorithmic analysis of the generated output itself is completely worthless if that output is finite in length. If the number 111...1 is generated by a TRNG, then it is a perfectly valid crypto-grade random number, just like all the other 2^N-1 numbers of the same length. Since all of these 2^N sequences of length N are random numbers, what analysis of them is going to tell you that they are or are not random? If you apply your analysis to each of the 2^N numbers, then you must get the same result for each one since they are generated by a TRNG, otherwise your analysis is fatally flawed. If your analysis does give the same result for each number, what value is it in determining randomness or non-randomness? If it correctly gives the same result for each number, there is no discrimination being performed. In fact, the analysis needs to say: "I cannot decide if that number is random or not", for each number, or it is wrong. If you could prove algorithmically that a given finite sequence is truly random, then you could also solve both the Godel incompleteness problem and the Turing halting problem - which are related. See Chaitin (op. cit.) for the details why. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 16:00:55 -0500 From: "Trevor Jackson, III" <fullmoon@aspi.net> Message-ID: <36AF7E86.637D0856@aspi.net> References: <36af575c.29769466@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 67 R. Knauer wrote: > On Wed, 27 Jan 1999 12:10:17 -0500, "Trevor Jackson, III" > <fullmoon@aspi.net> wrote: > > >To boil all this verbiage down, analysis of the generation process is more > >efficient than analysis of the generator output. One reason for the > >improved efficiency is that understanding the generation process provides > >hints of the potential defects worth testing. Another reason is that the > >description of the generation process is logarithmically smaller than the > >description of the generated output. > > Algorithmic analysis of the generated output itself is completely > worthless if that output is finite in length. If the number 111...1 is > generated by a TRNG, then it is a perfectly valid crypto-grade random > number, just like all the other 2^N-1 numbers of the same length. False. Analysis of output is insufficient to prove the entropy density is 100%, but it can easily provide useful information. For instance, a page of business text is around 2^16 bits (of data, not information). If we divide up the (?)RNG output into samples of this size we can establish a confidence that the samples are statistically random. If the samples are not statistically random, we can, with statistical confidence, eliminate that RNG as appropriate for key generation. If the samples are statistically random we have not proved that they are appropriate for key generation, but we have shown, to the limit of our testing capability, the absence of predictability. If the limit of our testing capability is equal or greater than the capabilities of our adversaries, then the samples are adequate key material. Note that the last statement does NOT mean that future samples will be adequate. The pattern established by the samples inspected might begin repeating on the very next bit generated. It DOES say that, within the bounds of analytic capability, one can inspect keys for effectiveness. The evaluation of the limits of statistical testing is probably tougher than I can address because I am insufficiently versed in fundamental statistical theory. But it's an interesting operational question... > > > Since all of these 2^N sequences of length N are random numbers, what > analysis of them is going to tell you that they are or are not random? > If you apply your analysis to each of the 2^N numbers, then you must > get the same result for each one since they are generated by a TRNG, > otherwise your analysis is fatally flawed. > > If your analysis does give the same result for each number, what value > is it in determining randomness or non-randomness? If it correctly > gives the same result for each number, there is no discrimination > being performed. In fact, the analysis needs to say: "I cannot decide > if that number is random or not", for each number, or it is wrong. > > If you could prove algorithmically that a given finite sequence is > truly random, then you could also solve both the Godel incompleteness > problem and the Turing halting problem - which are related. > > See Chaitin (op. cit.) for the details why. Prove? No. Express confidence in? Certainly. Arbitrarily high confidence? Yes, given adequate sample size. Higher confidence than is proper for the other components of the security system? Almost always.
Subject: Re: hardRandNumbGen Date: 27 Jan 1999 16:28:30 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78o0dv$mfl$1@quine.mathcs.duq.edu> References: <36AF7E86.637D0856@aspi.net> Newsgroups: sci.crypt Lines: 48 In article <36AF7E86.637D0856@aspi.net>, Trevor Jackson, III <fullmoon@aspi.net> wrote: >R. Knauer wrote: > >> On Wed, 27 Jan 1999 12:10:17 -0500, "Trevor Jackson, III" >> <fullmoon@aspi.net> wrote: >> >> >To boil all this verbiage down, analysis of the generation process is more >> >efficient than analysis of the generator output. One reason for the >> >improved efficiency is that understanding the generation process provides >> >hints of the potential defects worth testing. Another reason is that the >> >description of the generation process is logarithmicaly smaller than the >> >description of the generated output. >> >> Algorithmic analysis of the generated output itself is completely >> worthless if that output is finite in length. If the number 111...1 is >> generated by a TRNG, then it is a perfectly valid crypto-grade random >> number, just like all the other 2^N-1 numbers of the same length. > >False. Analysis of output is insufficient to prove the entropy density is >100%, but it can easily provide useful information. For instance, a page of >business text is around 2^16 bits (of data, not information). If we divide up >the (?)RNG output into samples of this size we can establish a confidence that >the samples are statistically random. If the samples are not statistically >random, we can, with statistical confidence, eliminate that RNG as appropriate >for key generation. If the samples are statistically random we have not proved >that they are appropriate for key generation, but we have shown, to the limit >of our testing capability, the absence of predictability. If the limit of our >testing capability is equal or greater than the capabilities of our >adversaries, then the samples are adequate key material. > >Note that the last statement does NOT mean that future samples may be >adequate. The pattern established by the samples inspected might begin >repeating on the very next bit generated. It DOES say that, withint the bounds >of analytic capability, one can inspect keys for effectiveness. > >The evaluation of the limits of statistical testing is probably tougher than I >can address because I am insufficiently versed in fundamental statistical >theory. But it's an interesting operational question... One important operational point should be made here : the distinction between planned and post-hoc tests. Specifically, if you're going to reject the RNG on the basis of statistical tests (which is reasonable), you *NEED* to define the tests you're going to run and the rejection threshhold before you power up the generator. -kitten
Subject: Re: hardRandNumbGen Date: Thu, 28 Jan 1999 18:51:59 -0500 From: "Trevor Jackson, III" <fullmoon@aspi.net> Message-ID: <36B0F81E.13B14D28@aspi.net> References: <78o0dv$mfl$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 56 Patrick Juola wrote: > In article <36AF7E86.637D0856@aspi.net>, > Trevor Jackson, III <fullmoon@aspi.net> wrote: > >R. Knauer wrote: > > > >> On Wed, 27 Jan 1999 12:10:17 -0500, "Trevor Jackson, III" > >> <fullmoon@aspi.net> wrote: > >> > >> >To boil all this verbiage down, analysis of the generation process is more > >> >efficient than analysis of the generator output. One reason for the > >> >improved efficiency is that understanding the generation process provides > >> >hints of the potential defects worth testing. Another reason is that the > >> >description of the generation process is logarithmicaly smaller than the > >> >description of the generated output. > >> > >> Algorithmic analysis of the generated output itself is completely > >> worthless if that output is finite in length. If the number 111...1 is > >> generated by a TRNG, then it is a perfectly valid crypto-grade random > >> number, just like all the other 2^N-1 numbers of the same length. > > > >False. Analysis of output is insufficient to prove the entropy density is > >100%, but it can easily provide useful information. For instance, a page of > >business text is around 2^16 bits (of data, not information). If we divide up > >the (?)RNG output into samples of this size we can establish a confidence that > >the samples are statistically random. If the samples are not statistically > >random, we can, with statistical confidence, eliminate that RNG as appropriate > >for key generation. If the samples are statistically random we have not proved > >that they are appropriate for key generation, but we have shown, to the limit > >of our testing capability, the absence of predictability. If the limit of our > >testing capability is equal or greater than the capabilities of our > >adversaries, then the samples are adequate key material. > > > >Note that the last statement does NOT mean that future samples may be > >adequate. The pattern established by the samples inspected might begin > >repeating on the very next bit generated. It DOES say that, withint the bounds > >of analytic capability, one can inspect keys for effectiveness. > > > >The evaluation of the limits of statistical testing is probably tougher than I > >can address because I am insufficiently versed in fundamental statistical > >theory. But it's an interesting operational question... > > One important operational point should be made here : the distinction > between planned and post-hoc tests. Specifically, if you're going to > reject the RNG on the basis of statistical tests (which is reasonable), > you *NEED* to define the tests you're going to run and the rejection > threshhold before you power up the generator. Absolutely. Any other arrangement produces a man-in-the-loop situation. The man isn't the problem, but the loop sure is. The output of the generator should not be able to influence or adjust the acceptance/rejection criteria of a production system. Of course, during testing, we'll use the test run outputs to monotonicly tighten the criteria until we're ready to go into production.
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 23:44:42 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36afa3bc.49321650@nntp.ix.netcom.com> References: <36AF7E86.637D0856@aspi.net> Newsgroups: sci.crypt Lines: 38 On Wed, 27 Jan 1999 16:00:55 -0500, "Trevor Jackson, III" <fullmoon@aspi.net> wrote: >> Algorithmic analysis of the generated output itself is completely >> worthless if that output is finite in length. If the number 111...1 is >> generated by a TRNG, then it is a perfectly valid crypto-grade random >> number, just like all the other 2^N-1 numbers of the same length. >False. Analysis of output is insufficient to prove the entropy density is >100%, but it can easily provide useful information. I never said it couldn't. However, analysis of the output of a TRNG is not useful for purposes of crypto. >For instance, a page of >business text is around 2^16 bits (of data, not information). If we divide up >the (?)RNG output into samples of this size we can establish a confidence that >the samples are statistically random. Define "statistically random" in terms of finite length numbers. >If the samples are not statistically >random, we can, with statistical confidence, eliminate that RNG as appropriate >for key generation. Define "not statistically random" in terms of finite length numbers. Remember we are talking about crypto-grade randomness, where it is possible to use a random number as a pad in the OTP cryptosystem, which means it must be proveably secure. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: 28 Jan 1999 11:23:20 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78q2to$ng9$1@quine.mathcs.duq.edu> References: <36afa3bc.49321650@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 62 In article <36afa3bc.49321650@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On Wed, 27 Jan 1999 16:00:55 -0500, "Trevor Jackson, III" ><fullmoon@aspi.net> wrote: >>For instance, a page of >>business text is around 2^16 bits (of data, not information). If we divide up >>the (?)RNG output into samples of this size we can establish a confidence that >>the samples are statistically random. > >Define "statistically random" in terms of finite length numbers. > >>If the samples are not statistically >>random, we can, with statistical confidence, eliminate that RNG as appropriate >>for key generation. > >Define "not statistically random" in terms of finite length numbers. In terms of finite length numbers, a finite number is not statistically random if I can predict it (or something about it). The key word there is "predict." Post hoc analyses don't mean much. A priori analyses, however, are a good way of showing that, in practice, a prediction can be made -- by the simple technique of making and validating a prediction. Example. 1) You hand me a black box 2) I tell you 'if this thing doesn't put out "about the same" number of ones and zeros, it's hosed and you're fired.' 3) I run the black box for 100,000 bits and it gives 95% ones. 4) I kick your sorry unemployed ass out of the building. This is valid. *Technically speaking*, there is approximately one chance in a godzillion that you are not incompetent, but just unlucky. In practical terms, this wouldn't happen. I'm throwing out one working RNG every jillion centuries and firing a lot of charlatans. Statistics are like that -- you can never state something "for sure," but you can usually make statements like "a truly random system would only do that 5% of the time, and so I don't think it's random." The key point above is the ordering of steps 2 and 3. If I reversed them -- running the generator and *THEN* deciding what counted as a test of randomness -- then I'd be dealing with a stacked deck. There's always some test that *can* be run on *any* finite (or even infinite) sequence that will cause it to appear non-random. The trick is whether I can *pre*dict that test before examining the output, or whether I can only hunt for it after the fact. And, of course, anything that I test for *might* be the result of a non-random process that "just happened" to give a good result for the test. I can never prove that a process is "random," but I can give measures (confidence intervals), stating that the amount of a particular sort of non-randomness is likely (with some probability) to be less than some threshhold. But I can't prove perfection -- I can never get the threshholds down to zero, which is why I can't prove "randomness", only the apparent absence of LOTS of non-randomness. -kitten
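The worked example above can be written out directly; the point is that the test and its rejection threshold are fixed before the black box is ever run. The two-sided 5% level and the normal approximation below are assumptions used here for illustration.

    import math

    N     = 100_000               # bits to be drawn, agreed in advance
    Z_CUT = 1.96                  # pre-registered two-sided 5% cutoff

    def decide(ones):
        """Apply only the pre-registered frequency test to the observed count of ones."""
        z = (ones - N / 2) / math.sqrt(N / 4)
        return "reject" if abs(z) > Z_CUT else "no evidence of bias"

    print(decide(95_000))   # 95% ones: z is about 285, so "reject"
    print(decide(50_120))   # near half ones: "no evidence of bias"

Reversing the order -- looking at the output first and then picking a test it happens to fail -- is the stacked deck the post warns about.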
Subject: Re: hardRandNumbGen Date: Thu, 28 Jan 1999 23:37:14 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36b0f08a.10281173@nntp.ix.netcom.com> References: <78q2to$ng9$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 49 On 28 Jan 1999 11:23:20 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >In terms of finite length numbers, a finite number is not statistically >random if I can predict it (or something about it). >The key word there is "predict." Post hoc analyses don't mean much. >A priori analyses, however, are a good way of showing that, in practice, >a prediction can be made -- by the simple technique of making and >validating a prediction. [snip] Your comments are directed at showing reasons for suspecting that a generator is not random. What do you do if your statistical tests indicate that it IS random? You throw out the suspect bad generators and argue that such a practice is safe. But what is your argument for the generators that you do not throw out? I can demonstrate experimentally that a program halts. That's easy. What I cannot demonstrate is that a program does not halt. No matter how long I let it run, there is always the chance that it will halt if I wait just a little longer. If I stop short of that point in time, and conclude that the program does not halt, and yet it would have halted if I had waited just a little longer, then I am in error. The same is true for your statistical tests. If you had used just a little longer number, you might have found out that the generator was bad. By using numbers of finite length, you are merely guessing that they are random when they pass your statistical tests. If you could determine that a finite number is truly random by using algorithmic tests, then you could solve the Godel incompleteness problem, the Turing halting problem and the Chaitin complexity problem algorithmically too. As one poster said earlier, many PRNGs pass statistical tests wonderfully. And cryptanalysts have a real festival when someone uses one of them. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: 29 Jan 1999 08:59:41 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <78sesd$osg$1@quine.mathcs.duq.edu> References: <36b0f08a.10281173@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 32 In article <36b0f08a.10281173@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 28 Jan 1999 11:23:20 -0500, juola@mathcs.duq.edu (Patrick Juola) >wrote: > >>In terms of finite length numbers, a finite number is not statistically >>random if I can predict it (or something about it). > >>The key word there is "predict." Post hoc analyses don't mean much. >>A priori analyses, however, are a good way of showing that, in practice, >>a prediction can be made -- by the simple technique of making and >>validating a prediction. > >[snip] > >Your comments are directed at showing reasons for suspecting that a >generator is not random. What do you do if your statistical tests >indicate that it IS random? > >You throw out the suspect bad generators and argue that such a >practice is safe. But what is your argument for the generators that >you do not throw out? That the amount of the bias *that I measured* is less than some threshhold. If I can live with that threshhold and I believe that no other (untested) source of bias is likely to be present, then the cypher is safe to use. If I don't believe that -- or the threshhold is too high -- then I can't place any reliance on a negative result. -kitten
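One way to read "the amount of the bias that I measured is less than some threshold" quantitatively is as an upper confidence bound on |p - 1/2| computed from the observed output. The 95% level and the normal approximation in this sketch are assumptions for illustration.

    import math

    def bias_upper_bound(ones, n, z=1.96):
        """Observed bias plus a z-sigma allowance: the true bias of an
        independent-bit source is unlikely to exceed this bound."""
        p_hat = ones / n
        stderr = math.sqrt(p_hat * (1 - p_hat) / n)
        return abs(p_hat - 0.5) + z * stderr

    # Ten million bits with 5,001,200 ones: the bound is roughly 0.00043.
    print(bias_upper_bound(5_001_200, 10_000_000))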
Subject: Re: hardRandNumbGen Date: Sat, 30 Jan 1999 03:18:06 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36b279b1.51639443@nntp.ix.netcom.com> References: <78sesd$osg$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 24 On 29 Jan 1999 08:59:41 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >>You throw out the suspect bad generators and argue that such a >>practice is safe. But what is your argument for the generators that >>you do not throw out? >That the amount of the bias *that I measured* is less than some >threshhold. If I can live with that threshhold and I believe that >no other (untested) source of bias is likely to be present, then >the cypher is safe to use. >If I don't believe that -- or the threshhold is too high -- then >I can't place any reliance on a negative result. Then you accept bad generators and reject good ones. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: 1 Feb 1999 08:55:04 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <794bno$qdu$1@quine.mathcs.duq.edu> References: <36b279b1.51639443@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 28 In article <36b279b1.51639443@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 29 Jan 1999 08:59:41 -0500, juola@mathcs.duq.edu (Patrick Juola) >wrote: > >>>You throw out the suspect bad generators and argue that such a >>>practice is safe. But what is your argument for the generators that >>>you do not throw out? > >>That the amount of the bias *that I measured* is less than some >>threshhold. If I can live with that threshhold and I believe that >>no other (untested) source of bias is likely to be present, then >>the cypher is safe to use. > >>If I don't believe that -- or the threshhold is too high -- then >>I can't place any reliance on a negative result. > >Then you accept bad generators and reject good ones. Absolutely. With a particular, known, measured probability. Statisticians call these Type I and Type II errors. The trick is to make the cost of those errors as small as possible while still keeping testing costs under control. -kitten
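For a simple frequency test, the two error rates named above can be computed well enough with the normal approximation. The particular sample size, cutoff, and alternative bias below are assumptions for illustration.

    import math

    def phi(x):
        """Standard normal CDF."""
        return 0.5 * math.erfc(-x / math.sqrt(2))

    def error_rates(n, z_cut, d):
        """alpha: chance of rejecting a perfect generator (Type I error).
           beta:  chance of accepting a generator whose bits are 1 with
                  probability 0.5 + d (Type II error)."""
        alpha = 2 * (1 - phi(z_cut))
        shift = 2 * d * math.sqrt(n)   # mean of the z statistic under the biased alternative
        beta = phi(z_cut - shift) - phi(-z_cut - shift)
        return alpha, beta

    print(error_rates(n=1_000_000, z_cut=1.96, d=0.002))   # about (0.05, 0.02)

Tightening the cutoff lowers alpha but raises beta for a fixed sample; shrinking both requires testing more bits, which is the cost trade-off mentioned above.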
Subject: Re: hardRandNumbGen Date: Wed, 03 Feb 1999 18:28:37 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36b8755f.10093954@nntp.ix.netcom.com> References: <794bno$qdu$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 35 On 1 Feb 1999 08:55:04 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >>Then you accept bad generators and reject good ones. >Absolutely. With a particular, known, measured probability. I am not as concerned about rejecting good TRNGs as accepting bad RNGs, although you aren't gonna do a lot of OTP encryption when your tests keep rejecting good TRNGs all the time. And your tests will eventually reject every TRNG if you test it long enough (I assume, correctly, that you will submit your RNG to tests periodically). The specification for the OTP does not permit accepting bad RNGs with any statistical measure that yields a probability other than zero. >Statisticians call these Type I and Type II errors. The trick is >to make the cost of those errors as small as possible while still >keeping testing costs under control. "Small as possible" is not small enough for the OTP. The probability for accepting a bad RNG must be zero. For example, earlier you criticized rejecting the sequence 000...0 for use as a complete pad on the basis that it would permit the cryptanalyst to rule out one possible decryption, and yet that string would certainly be cause for rejecting all TRNGs that produced it, based on statistical tests. Bob Knauer "Sometimes it is said that man cannot be trusted with the government of himself. Can he, then, be trusted with the government of others?" --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: 3 Feb 1999 14:34:40 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <79a8cg$2up$1@quine.mathcs.duq.edu> References: <36b8755f.10093954@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 59 In article <36b8755f.10093954@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 1 Feb 1999 08:55:04 -0500, juola@mathcs.duq.edu (Patrick Juola) >wrote: > >>>Then you accept bad generators and reject good ones. > >>Absolutely. With a particular, known, measured probability. > >I am not as concerned about rejecting good TRNGs as accepting bad >RNGs, although you aren't gonna do a lot of OTP encryption when your >tests keep rejecting good TRNGs all the time. And your tests will >eventually reject every TRNG you test long enough (I assume correctly >that will sumbit your RNG to tests periodically). Ooooooh, repeated testing! Now that's a different kettle of fish. Actually, I don't necessarily need to reject every TRNG eventually; I "know" that a given TRNG will fail 1/20 of the tests at a 5% level, so if I get an approximate failure rate of 1 in 20, then I keep that one.... and I reject, paradoxically enough, a generator that doesn't fail 5% of the tests on repeated testing. And of course, that itself constitutes a meta-test that can be passed or failed with some probability. But the point is that for any well-defined testing scheme, I can make the probability that a good generator will fail as small as I like, to the point where the expected number of functioning generators that I throw out in my professional lifetime is less than 1 (or less than 1/1000 of a generator, for that matter). That's not a problem. Yes, *eventually* I will throw out a working generator. But I'll run out of solar energy to power the testing equipment at some point, too. >The specification for the OTP does not permit accepting bad RNGs with >any statistical measure that yields a probability other than zero. In that case, you'll never get one. Statistics -- and engineering practice, for that matter -- will never allow you to get a probability all the way down to zero. The best I can do is to get the probability down to darn-small, where I accept one bad generator per million lifetimes. There comes a point where keeping a secret becomes silly; I think the point where you're no longer worried about the NSA and are now considering Buddha qualifies. >For example, earlier you criticized rejecting the sequence 000...0 for >use as a complete pad on the basis that it would permit the >cryptanalyst to rule out one possible decryption, and yet that string >would certainly be cause for rejecting all TRNGs that produced it, >based on statistical tests. Depends on how often it was produced. Give me a million 20 bit sequences and I'll bet I can find one. If I find 500,000, I'll worry. -kitten
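The "meta-test" on repeated testing can itself be made precise: a healthy generator run through m independent tests at the 5% level should fail a number of them close to Binomial(m, 0.05), and a generator that never fails is itself suspicious. The three-standard-deviation band in this sketch is an assumed convention, not anything from the thread.

    import math

    def failure_count_plausible(failures, m, alpha=0.05, sigmas=3):
        """Check the observed number of failed tests against Binomial(m, alpha)."""
        mean = m * alpha
        sd = math.sqrt(m * alpha * (1 - alpha))
        return abs(failures - mean) <= sigmas * sd

    print(failure_count_plausible(48, 1000))   # roughly 5% failures: plausible
    print(failure_count_plausible(0, 1000))    # never fails: suspicious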
Subject: Re: hardRandNumbGen Date: Fri, 05 Feb 1999 12:31:50 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36bae371.2258006@nntp.ix.netcom.com> References: <79a8cg$2up$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 26 On 3 Feb 1999 14:34:40 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >Depends on how often it was produced. Give me a million 20 bit sequences >and I'll bet I can find one. If I find 500,000, I'll worry. Let's expand on this a bit, but this time from the point of view of accepting good RNGs. Imagine that you perform some statistical tests on an RNG and it passes. And you perform some more tests and it passes. Yet the RNG is not crypto-secure. IOW, your tests merely found some statistical properties from which you inferred that the RNG was crypto-grade, like low bias. The number 101010...10 has no bias at all. So here's the question: Just how much testing do you need to do to convince yourself that a given RNG is crypto-grade (either proveably secure or secure in terms of work effort)? What happens if you must test forever? Bob Knauer "Sometimes it is said that man cannot be trusted with the government of himself. Can he, then, be trusted with the government of others?" --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: 4 Feb 1999 14:36:11 -0500 From: hrubin@b.stat.purdue.edu (Herman Rubin) Message-ID: <79csrb$16je@b.stat.purdue.edu> References: <794bno$qdu$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 40 In article <794bno$qdu$1@quine.mathcs.duq.edu>, Patrick Juola <juola@mathcs.duq.edu> wrote: >In article <36b279b1.51639443@nntp.ix.netcom.com>, >R. Knauer <rcktexas@ix.netcom.com> wrote: >>On 29 Jan 1999 08:59:41 -0500, juola@mathcs.duq.edu (Patrick Juola) >>wrote: >>>>You throw out the suspect bad generators and argue that such a >>>>practice is safe. But what is your argument for the generators that >>>>you do not throw out? >>>That the amount of the bias *that I measured* is less than some >>>threshhold. If I can live with that threshhold and I believe that >>>no other (untested) source of bias is likely to be present, then >>>the cypher is safe to use. >>>If I don't believe that -- or the threshhold is too high -- then >>>I can't place any reliance on a negative result. >>Then you accept bad generators and reject good ones. >Absolutely. With a particular, known, measured probability. >Statisticians call these Type I and Type II errors. The trick is >to make the cost of those errors as small as possible while still >keeping testing costs under control. > -kitten If one can assume independence between generated sets, one can make both of these errors quite small, if the point null is not assumed. That is, the user must specify the inaccuracy allowed, which cannot be zero. -- This address is for information only. I do not claim that these views are those of the Statistics Department or of Purdue University. Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399 hrubin@stat.purdue.edu Phone: (765)494-6054 FAX: (765)494-0558
Subject: Re: hardRandNumbGen Date: Thu, 04 Feb 1999 22:34:11 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36ba1b69.44591739@nntp.ix.netcom.com> References: <79csrb$16je@b.stat.purdue.edu> Newsgroups: sci.crypt Lines: 20 On 4 Feb 1999 14:36:11 -0500, hrubin@b.stat.purdue.edu (Herman Rubin) wrote: >If one can assume independence between generated sets, one can make >both of these errors quite small, if the point null is not assumed. >That is, the user must specify the inaccuracy allowed, which cannot >be zero. Please elaborate on how this is done. I am interested in seeing how much testing is involved for a given level of confidence, and how that testing effort increases with decreasing levels of confidence. Bob Knauer "Sometimes it is said that man cannot be trusted with the government of himself. Can he, then, be trusted with the government of others?" --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: 5 Feb 1999 08:50:10 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <79esui$65e$1@quine.mathcs.duq.edu> References: <36ba1b69.44591739@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 17 In article <36ba1b69.44591739@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 4 Feb 1999 14:36:11 -0500, hrubin@b.stat.purdue.edu (Herman Rubin) >wrote: > >>If one can assume independence between generated sets, one can make >>both of these errors quite small, if the point null is not assumed. >>That is, the user must specify the inaccuracy allowed, which cannot >>be zero. > >Please elaborate on how this is done. Well, one starts out by making a questionable assumption, in most cases -- sorry, Dr. Rubin 8-) -- especially when one is testing successive "pads" from a generator. -kitten
Subject: Re: hardRandNumbGen Date: 7 Feb 1999 09:35:58 -0500 From: hrubin@b.stat.purdue.edu (Herman Rubin) Message-ID: <79k8ce$ms8@b.stat.purdue.edu> References: <36ba1b69.44591739@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 43 In article <36ba1b69.44591739@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 4 Feb 1999 14:36:11 -0500, hrubin@b.stat.purdue.edu (Herman Rubin) >wrote: >>If one can assume independence between generated sets, one can make >>both of these errors quite small, if the point null is not assumed. >>That is, the user must specify the inaccuracy allowed, which cannot >>be zero. >Please elaborate on how this is done. >I am interested in seeing how much testing is involved for a given >level of confidence, and how that testing effort increases with >decreasing levels of confidence. The number of items to be tested for a given level of confidence and a given accuracy goes up as the square of the reciprocal of the accuracy wanted. If one had guaranteed independent observations with a probability p of success, and wanted to get assured probabilities of acceptance if p=.5, and also of rejection if |p-.5|>d, the sample size needed would increase as 1/d^2, for d small, which is the situation of interest here. Not much changes qualitatively if one wants to accept for p closer to .5 than some multiple of d, such as d/10, except for a factor. The same results hold if there are a fixed number of tests; there would be a factor depending on how many and which tests are to be used. So a 10 megabyte file could be easily tested in such a way that one could be fairly sure that the deviation from randomness was at most 1%, and have good chance of acceptance if it was .1%. Now if we XOR independent files, say run on different days, the deviations from randomness multiply. So if we XOR 8 files, and accept if 5 pass, we can be quite confident about the results. To do this by testing a single file would require a file of size more than 10^20. -- This address is for information only. I do not claim that these views are those of the Statistics Department or of Purdue University. Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399 hrubin@stat.purdue.edu Phone: (765)494-6054 FAX: (765)494-0558
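The two quantitative claims above can be sketched numerically: the sample needed to resolve a bias of size d grows like 1/d^2, and XORing independent streams multiplies their individual deviations from randomness. The significance level, the power, and the piling-up formula for independent bits are assumptions used here for illustration.

    import math

    def bits_needed(d, z_alpha=1.96, z_beta=1.28):
        """Approximate bits required to detect a bias of d in P(bit = 1)
        at a two-sided 5% level with about 90% power."""
        return (z_alpha + z_beta) ** 2 * 0.25 / d ** 2

    def xor_bias(biases):
        """Bias of the XOR of independent bits, where bias = P(bit = 1) - 1/2."""
        prod = 1.0
        for e in biases:
            prod *= e
        return 2 ** (len(biases) - 1) * prod

    print(round(bits_needed(0.01)))      # ~26,000 bits to see a 1% bias
    print(round(bits_needed(0.001)))     # ~2.6 million bits for a 0.1% bias
    print(xor_bias([0.01] * 8))          # eight 1%-biased files XORed: ~1.3e-14

Detecting a bias as small as the XORed result directly would take a sample growing like 1/d^2, which is the sense in which a single file of that quality is far beyond direct testing.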
Subject: Re: hardRandNumbGen Date: Sun, 07 Feb 1999 16:29:35 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36bdba8f.12639013@nntp.ix.netcom.com> References: <79k8ce$ms8@b.stat.purdue.edu> Newsgroups: sci.crypt Lines: 60 On 7 Feb 1999 09:35:58 -0500, hrubin@b.stat.purdue.edu (Herman Rubin) wrote: >So a 10 megabyte file could be easily tested in such a way >that one could be fairly sure that the deviation from randomness >was at most 1%, and have good chance of acceptance if it was .1%. I am still having a problem relating this "deviation from randomness" you are testing for to the indeterminancy inherent in a TRNG. You are claiming that a property for infinite numbers applies to finite numbers, albeit with less than probability one. The thing that really bothers me is that "good chance" part in your statement above. If your tests are probabilistic with only a "good chance" of being correct, then how can they be relied on? For each test you require a RNG to pass, the builder of the RNG can fake the numbers to pass your tests. Or do you know of a set of tests that can *with absolute certainty* distinguish a faked RNG from a TRNG? Let me expand on that. It all starts when you come to me and tell me you want a TRNG that will pass your tests. I ask to see your tests so I can test my design for myself before turning the system over to you. Also, I want to see if the tests are reasonable, so I do not waste my time playing amateur games with twit tests. But unbeknownst to you I am really a purveyor of Snake Oil Generators (SOGs). I take your tests and program an algorithm to generate numbers that will pass your tests, and put that algorithm in a black box so you cannot see it. Even if I dude the black box up with lots of bells and whistles, I embed the algorithm in the silicon away from your watchful eye. I tell you that the algorithm is used to "diagnose" the SOG so it behaves as certified all the time. [NB: This is very much like the standard proof of Turing's halting problem.] Later you come back and complain that the SOG won't pass additional tests that raise your level of confidence, and I point out that you got everything you wanted and paid for. But, since you are a nice gullible customer, I will enhance your SOG for an additional fee, so it will pass your new improved tests. The same scenario obtains until I finally bilk you out of all your money. You made the most fundamental mistake when you assumed that statistical testing could certify a TRNG to within any arbitrary level of precision. You cannot use deterministic algorithmic tests to certify if numbers are being generated by an indeterministic process. You can only use such tests to certify that the process is not deterministic. Therefore you can not certify that you have a TRNG or a SOG. But that determination is crucial to producing ciphers that are proveably secure. Bob Knauer "To compel a man to furnish contributions of money for the propagation of opinions which he disbelieves is sinful and tyrannical." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: 8 Feb 1999 08:49:11 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <79mq0n$tgu$1@quine.mathcs.duq.edu> References: <36bdba8f.12639013@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 27 In article <36bdba8f.12639013@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 7 Feb 1999 09:35:58 -0500, hrubin@b.stat.purdue.edu (Herman Rubin) >wrote: > >>So a 10 megabyte file could be easily tested in such a way >>that one could be fairly sure that the deviation from randomness >>was at most 1%, and have good chance of acceptance if it was .1%. > >I am still having a problem relating this "deviation from randomness" >you are testing for and the indeterminancy inherent in a TRNG. You are >claiming that a property for infinite numbers applies to finite >numbers, albeit with less than probability one. > >The thing that really bothers me is that "good chance" part in your >statement above. If your tests are probabilistic with only a "good >chance" of being correct, then how can they be relied on? The "good chance" is quantifiable (as is the "fairly sure"). What is your acceptable degree of uncertainty? What is your minimal "good chance." Remember that, according to modern physics, it's only probabilistic with a good chance that water will get hotter instead of colder when placed over a lit gas burner. -kitten
Subject: Re: hardRandNumbGen Date: Tue, 09 Feb 1999 13:16:14 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36c031bb.5684543@nntp.ix.netcom.com> References: <79mq0n$tgu$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 57 On 8 Feb 1999 08:49:11 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >>The thing that really bothers me is that "good chance" part in your >>statement above. If your tests are probabilistic with only a "good >>chance" of being correct, then how can they be relied on? >The "good chance" is quantifiable (as is the "fairly sure"). >What is your acceptable degree of uncertainty? What is your >minimal "good chance." I am not objecting to statistical measure per se. I could not do that and still believe that Quantum Mechanics is correct. My objections center on the fact that one cannot characterize the randomness of a TRNG by measuring the statistical properties of the numbers it generates. If that were the case, how do you explain what happens when a PRNG passes such tests, and yet that PRNG is a poor generator for crypto purposes. The only way to get a "minimal good chance" of characterizing the TRNG itself is to do a complete audit on it. That is what experimental scientists have to do to have a "minimal good chance" of having their experimental results accepted in the scientific community. If as a scientist I proposed that the soundness of my experimental equipment were determined by the output of that equipment, I would be laughed out of science. Yet that is exactly what is being proposed here - that the soundness of a TRNG be determined from its output. >Remember that, according to modern physics, it's only probabilistic >with a good chance that water will get hotter instead of colder >when placed over a lit gas burner. I certainly have no quarrel with statistical mechanics. I once calculated the probability that all the molecules of air in a room would end up in the corner. A bit on the small side, so small that it would never have happened in the age of the Universe. BTW, I can rig that experiment you describe above to make you think the phenomenon is happening. I put water in the pot, create a separation, put in another layer of water and freeze that top layer. Then I put the thermometer at the bottom, and raise the temperature just to the point where the ice is about to fall into the bottom. Then I bring you in to observe how the temperature will decrease when I put the pot on the stove. You will only discover what is actually happening if you do a complete audit of how I prepared the system. The same is true of a TRNG. Bob Knauer "The world is filled with violence. Because criminals carry guns, we decent law-abiding citizens should also have guns. Otherwise they will win and the decent people will loose." --James Earl Jones
Subject: Re: hardRandNumbGen Date: Tue, 09 Feb 1999 11:22:44 -0500 From: "Trevor Jackson, III" <fullmoon@aspi.net> Message-ID: <36C060D4.CCAAE9E1@aspi.net> References: <36c031bb.5684543@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 97 R. Knauer wrote: > On 8 Feb 1999 08:49:11 -0500, juola@mathcs.duq.edu (Patrick Juola) > wrote: > > >>The thing that really bothers me is that "good chance" part in your > >>statement above. If your tests are probabilistic with only a "good > >>chance" of being correct, then how can they be relied on? > > >The "good chance" is quantifiable (as is the "fairly sure"). > >What is your acceptable degree of uncertainty? What is your > >minimal "good chance." > > I am not objecting to statistical measure per se. I could not do that > and still believe that Quantum Mechanics is correct. > > My objections center on the fact that one cannot characterize the > randomness of a TRNG by measuring the statistical properties of the > numbers it generates. If that were the case, how do you explain what > happens when a PRNG passes such tests, and yet that PRNG is a poor > generator for crypto purposes. How is it that we reach the judgement that an RNG is a poor generator? We may know that it is a deterministic algorithm, and thus weak against certain attacks. These attacks can be considered statistical tests that indicate the strength of the RNG in question. This argument is based on the concept that any judgement or decision is going to be based on some evidence gathered by observation and/or experiment. When necessary we can repeat the observation/experiment to classify a phenomenon as subject to the judgement or decision. Consider the case of a Hardware RNG whose design and implementation appear perfect, but where I used too much solder on one connection. I created a tiny capacitor, so the inverter sometimes acts as a buffer, especially after certain patterns of high frequency data. Statistical testing will indicate that the device is imperfect. The results of the test will be descriptive rather than prescriptive, which may make some people uncomfortable, but it will be crucial to the security of anyone who uses it. The device owner's adversary is NOT going to simply say, "Oh, he's using an HRNG, so I'll give up now!". The adversary is going to probe for weaknesses: to TEST the cipher system in the fundamental sense of test, which is to evaluate it. In the case above the adversary is going to find a weakness that leaks information. Thus our provably secure cipher system turns out not to be. R.I.P. Any device owner who fails to test for weaknesses is negligent. The term "statistical tests" may conjure up some ivory tower image of irrelevancy or futility, but it actually indicates a skeptical attitude on the part of a defender of a secret. So the intentions of the tests are appropriate. The testing technique can be identical to the expected attack. Thus if systems like X are weak because they fall to attack Y, we should utilize attack Y as a test for the weakness demonstrated by system X. Thus the idea that "bad" RNGs pass statistical tests is a bit of sleight of hand. Reminds me of looking for the keys under the street lamp because the light is better there. Weak RNGs will not pass tests that evaluate cryptographic weakness. RNGs that pass those tests are not weak! > The only way to get a "minimal good chance" of characterizing the TRNG > itself is to do a complete audit on it. 
That is what experimental > scientists have to do to have a "minimal good chance" of having their > experimental results accepted in the scientific community. > > If as a scientist I proposed that the soundness of my experimental > equipment were determined by the output of that equipment, I would be > laughed out of science. Yet that is exactly what is being proposed > here - that the soundness of a TRNG be determined from its output. > > >Remember that, according to modern physics, it's only probabilistic > >with a good chance that water will get hotter instead of colder > >when placed over a lit gas burner. > > I certainly have no quarrel with statistical mechanics. I once > calculated the probability that all the molecules of air in a room > would end up in the corner. A bit on the small side, so small that it > would never have happened in the age of the Universe. > > BTW, I can rig that experiment you describe above to make you think > the phenomenon is happening. I put water in the pot, create a > separation, put in another layer of water and freeze that top layer. > Then I put the thermometer at the bottom, and raise the temperature > just to the point where the ice is about to fall into the bottom. Then > I bring you in to observe how the temperature will decrease when I put > the pot on the stove. > > You will only discover what is actually happening if you do a complete > audit of how I prepared the system. The same is true of a TRNG. Have you ever audited a system for side effects? Like exhaustive logic tests or provable software design, this kind of analysis is only useful for toy systems.
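A defect of the kind described above, where the output depends on recent history, is exactly what a serial test catches. The sketch below compares the frequency of ones after a 0 with the frequency after a 1; the single-bit context, the 1% level, and the normal approximation are assumptions chosen here for illustration.

    import math

    def serial_dependence_z(bits):
        """z statistic for P(1 | previous bit = 1) differing from P(1 | previous bit = 0).
        Assumes the stream is long enough that both contexts occur."""
        after = {0: [0, 0], 1: [0, 0]}          # after[prev] = [count of 0s, count of 1s]
        for prev, cur in zip(bits, bits[1:]):
            after[prev][cur] += 1
        n0, n1 = sum(after[0]), sum(after[1])
        p0 = after[0][1] / n0                   # P(1 | previous 0)
        p1 = after[1][1] / n1                   # P(1 | previous 1)
        pooled = (after[0][1] + after[1][1]) / (n0 + n1)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n0 + 1 / n1))
        return (p1 - p0) / se

    def history_dependent(bits, z_cut=2.576):
        return abs(serial_dependence_z(bits)) > z_cut

Longer contexts catch subtler defects at the cost of far more data, which is the combinatorial problem raised a little further on.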
Subject: Re: hardRandNumbGen Date: 8 Feb 1999 09:13:28 -0500 From: hrubin@b.stat.purdue.edu (Herman Rubin) Message-ID: <79mre8$mni@b.stat.purdue.edu> References: <36bdba8f.12639013@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 109 In article <36bdba8f.12639013@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 7 Feb 1999 09:35:58 -0500, hrubin@b.stat.purdue.edu (Herman Rubin) >wrote: >>So a 10 megabyte file could be easily tested in such a way >>that one could be fairly sure that the deviation from randomness >>was at most 1%, and have good chance of acceptance if it was .1%. >I am still having a problem relating this "deviation from randomness" >you are testing for and the indeterminancy inherent in a TRNG. You are >claiming that a property for infinite numbers applies to finite >numbers, albeit with less than probability one. There is a language problem, and it does cause confusion. ANY physical device producing bits is a TRNG, in the sense that there is a joint probability distribution of all the bits produced. It does not follow from this that the probability that any k of the bits form any particular k-element sequence is exactly 1/2^k. Your generator, or any other physical generator, produces random bits. The question remains as to whether they have the particular properties needed to use these bits as if they had the ideal properties. It is THIS which is being tested. >The thing that really bothers me is that "good chance" part in your >statement above. If your tests are probabilistic with only a "good >chance" of being correct, then how can they be relied on? One cannot possibly do better. It is not possible to get certainty, and one can only do so much to balance the probabilities of the various kinds of error. This balancing of risks is what theoretical statisticians study, so that one can decide what can be done and how well it can be done. >For each test you require a RNG to pass, the builder of the RNG can >fake the numbers to pass your tests. Or do you know of a set of tests >that can *with absolute certainty* distinguish a faked RNG from a >TRNG? This is harder than one thinks. There are arguments for PRNGs, among them being the fact that transmitting them involves much smaller bandwidth. For cryptography, the criterion is vulnerability of the transmitted message. For simulation, the criterion is the accuracy of the calculated results. In neither of these cases is the source of the "random numbers" important, unless it causes the criterion not to be met. For some simulation problems, quasi random numbers, which fail independence tests very badly, are known to give better results than random numbers. We know (in principle) the probability behavior under the null hypothesis, and usually know it for the alternatives. >Let me expand on that. It all starts when you come to me and tell me >you want a TRNG that will pass your tests. I ask to see your tests so >I can test my design for myself before turning the system over to you. >Also, I want to see if the tests are reasonable, so I do not waste my >time playing amateur games with twit tests. >But unbeknownst to you I am really a purveyor of Snake Oil Generators >(SOGs). I take your tests and program an algorithm to generate numbers >that will pass your tests, and put that algorithm in a black box so >you cannot see it. Even if I dude the black box up with lots of bells >ans whistles, I embed the algorithm in the silicon away from your >watchful eye. I tell you that the algorithm is used to "diagnose" the >SOG so it behaves as certified all the time. 
As I stated before, this is much harder to do than one would think. There is a major effort being made to come up with PRNGs which can meet the "reasonable" tests which have been promulgated. In addition, these PRNGs are being tried in problems for which some answers can be computed; a particular generator was found to fail in a simulation of a physical situation. Simulations in statistics and physics are using terabytes of PRNGs now, and will be using thousands or millions of them in the future. ................. >You made the most fundamental mistake when you assumed that >statistical testing could certify a TRNG to within any arbitrary level >of precision. You cannot use deterministic algoritmic tests to certify >if numbers are being generated by an indeterministic process. You can >only use such tests to certify that the process is not deterministic. >Therefore you can not certify that you have a TRNG or a SOG. But that >determination is crucial to producing ciphers that are proveably >secure. What you say is correct. But the type of TRNG you think you have does not exist, only approximations of it can exist. And it is necessary to test if the approximations are good enough. A statistical test can only be algorithmic. Let us say that my criterion for physical random bits to be good enough is that the probability of a bit being 1 is .5 within an error of 10^{-10}, given all previous bits. Now how can I test this directly? The answer is that it cannot be done; if I wanted to test it for all possibilities of the preceding 100 bits, we already have more than 10^30 cases. It cannot be done, and so we have to work around it. Your physical process is not going to be that constant. This is why it is necessary to make lots of assumptions and test intelligently to hope to achieve this. -- This address is for information only. I do not claim that these views are those of the Statistics Department or of Purdue University. Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399 hrubin@stat.purdue.edu Phone: (765)494-6054 FAX: (765)494-0558
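The impossibility pointed out above is easy to put numbers on: checking the conditional probability of a 1 after every possible k-bit history, to within a tolerance d, needs about 2^k contexts times on the order of 1/d^2 samples each. The constants here are rough assumptions for illustration only.

    def direct_test_cost(k, d):
        """Rough number of output bits needed to test the bias conditioned on
        every possible k-bit history to within d."""
        contexts = 2 ** k
        per_context = 1.0 / d ** 2
        return contexts * per_context

    print(f"{direct_test_cost(100, 1e-10):.1e}")   # about 1.3e50 bits: hopeless
    print(f"{direct_test_cost(10, 1e-3):.1e}")     # about 1.0e9 bits: barely feasible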
Subject: Re: hardRandNumbGen Date: Mon, 08 Feb 1999 15:01:01 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36bef38f.9237422@nntp.ix.netcom.com> References: <79mre8$mni@b.stat.purdue.edu> Newsgroups: sci.crypt Lines: 137 On 8 Feb 1999 09:13:28 -0500, hrubin@b.stat.purdue.edu (Herman Rubin) wrote: >>I am still having a problem relating this "deviation from randomness" >>you are testing for and the indeterminancy inherent in a TRNG. You are >>claiming that a property for infinite numbers applies to finite >>numbers, albeit with less than probability one. >There is a language problem, and it does cause confusion. Indeed! >ANY physical device producing bits is a TRNG, Not necessarily the kind of True Random Number Generator that produces proper sequences for the OTP cryptosystem. >in the sense that there >is a joint probability distribution of all the bits produced. It >does not follow from this that the probability that any k of the >bits form any particular k-element sequence is exactly 1/2^k. A TRNG has a specific definition - it must be capable of generating all possible finite sequences equiprobably. The best example of a TRNG is a fair coin toss. >Your generator, or any other physical generator, produces random >bits. Not necessarily the kind of random bits that a TRNG produces to satisfy the requirement above. >The question remains as to whether they have the particular >properties needed to use these bits as if they had the ideal >properties. It is THIS which is being tested. Here is the crux of the position I am maintaining. I agree that statistical tests can be used for diagnostic purposes to demonstrate that a RNG is not a TRNG to within a certain level of confidence. Even then you can reject a perfectly good TRNG if you do not subject it to a large number of tests. But that is not where the real danger lies - the worst is that you will reject properly working TRNGs. The danger comes about when you attempt to use statistical tests to demonstrate that a RNG is a crypto-grade TRNG. Even if a RNG passes your statistical tests, you still do not know if it is a properly working TRNG or not. Therefore the very thing you are testing the RNG for, namely its suitability for use with the OTP system, is not determinable. You might be able to determine that a RNG is not suitable, but you cannot determine that an RNG is suitable. >One cannot possibly do better. It is not possible to get certainty, >and one can only do so much to balance the probabilities of the >various kinds of error. This balancing of risks is what theoretical >statisticians study, so that one can decide what can be done and how >well it can be done. You cannot determine with *any* level of confidence that a TRNG is working properly just because it passes your tests. You can only determine with a certain level of confidence that it is not working properly because it does not pass your tests. >This is harder than one thinks. I know that. >There are arguments for PRNGs, among >them being the fact that transmitting them involves much smaller band >width. For cryptography, the criterion is vulnerability of the >transmitted message. For simulation, the criterion is the accuracy >of the calculated results. In neither of these case is the source >of the "random numbers" important, unless it causes the criterion >not to be met. I disagree with your statement for purposes of the OTP cryptosystem. The security of the OTP relies on the fact that the pad is constructed from the output of a TRNG. 
If you use a source of "randomness" that does not meet the specification of the TRNG, you could suffer a probabilistic attack. Only if the numbers come from a completely nondeterministic random number generator can there be no "leakage" of information. >>You made the most fundamental mistake when you assumed that >>statistical testing could certify a TRNG to within any arbitrary level >>of precision. You cannot use deterministic algoritmic tests to certify >>if numbers are being generated by an indeterministic process. You can >>only use such tests to certify that the process is not deterministic. >>Therefore you can not certify that you have a TRNG or a SOG. But that >>determination is crucial to producing ciphers that are proveably >>secure. >What you say is correct. Finally! I got someone to agree with me on something! >But the type of TRNG you think you have does >not exist, only approximations of it can exist. And it is necessary >to test if the approximations are good enough. You can never know if they are good enough, only that they are bad enough. I realize that an actual TRNG will not be perfect, but I believe it can be made to specification such that it will not cause any significant amount of information to "leak". That may also be the case for some stream ciphers, such as using the least significant bits of text and hashing them to high entropy density. Even though the latter is not a nondeterministic TRNG, it may not cause enough information to leak for practical purposes. My concern is that statistical tests cannot be used to determine whether ANY method of random number generation is suitable for the OTP system. All they can be used for is to reject those RNGs that do not pass, and then only probabilistically. The only way you can certify a TRNG for use with the OTP system is to do a full audit of the generation method. >Your physical process is not going to be that constant. This is why it >is necessary to make lots of assumptions and test intelligently to hope >to achieve this. I believe it is possible to build a TRNG that can be certified to be properly working based on subsystem diagnostics and the knowledge of the random process that gives it its nondeterminancy - without having to resort to statistical tests on the output sequences. Bob Knauer "The world is filled with violence. Because criminals carry guns, we decent law-abiding citizens should also have guns. Otherwise they will win and the decent people will loose." --James Earl Jones
Subject: Re: hardRandNumbGen Date: 8 Feb 1999 10:42:38 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <79n0le$u0h$1@quine.mathcs.duq.edu> References: <36bef38f.9237422@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 73 In article <36bef38f.9237422@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 8 Feb 1999 09:13:28 -0500, hrubin@b.stat.purdue.edu (Herman Rubin) >wrote: > >>>I am still having a problem relating this "deviation from randomness" >>>you are testing for and the indeterminancy inherent in a TRNG. You are >>>claiming that a property for infinite numbers applies to finite >>>numbers, albeit with less than probability one. > >>There is a language problem, and it does cause confusion. > >Indeed! > >>ANY physical device producing bits is a TRNG, > >Not necessarily the kind of True Random Number Generator that produces >proper sequences for the OTP cryptosystem. > >>in the sense that there >>is a joint probability distribution of all the bits produced. It >>does not follow from this that the probability that any k of the >>bits form any particular k-element sequence is exactly 1/2^k. > >A TRNG has a specific definition - it must be capable of generating >all possible finite sequences equiprobably. The best example of a TRNG >is a fair coin toss. > >>Your generator, or any other physical generator, produces random >>bits. > >Not necessarily the kind of random bits that a TRNG produces to >satisfy the requirement above. > >>The question remains as to whether they have the particular >>properties needed to use these bits as if they had the ideal >>properties. It is THIS which is being tested. > >Here is the crux of the position I am maintaining. I agree that >statistical tests can be used for diagnostic purposes to demonstrate >that a RNG is not a TRNG to within a certain level of confidence. Even >then you can reject a perfectly good TRNG if you do not subject it to >a large number of tests. But that is not where the real danger lies - >the worst is that you will reject properly working TRNGs. > >The danger comes about when you attempt to use statistical tests to >demonstrate that a RNG is a crypto-grade TRNG. Even if a RNG passes >your statistical tests, you still do not know if it is a properly >working TRNG or not. > >Therefore the very thing you are testing the RNG for, namely its >suitability for use with the OTP system, is not determinable. You >might be able to determine that a RNG is not suitable, but you cannot >determine that an RNG is suitable. No. There are two things you need to do to produce a certifiable TRNG. One is to confirm that the device is, in fact, a "random number generator" in the sense that it produces random bits. The main thing to confirm then is that you can get an unbounded number of random (although not necessarily equiprobable) bits out of the system. This requires examination of the generator -- and is probably impossible unless you're willing to make certain assumptions about various physical processes such as radioactive decay or wave height or something. The other is to confirm that the outputs are bias-free -- or more accuratley as bias-free as possible, since there's no way to prove ZERO bias. And this is best done statistically, although if you really trust your engineers you can probably do it by design analysis as well. -kitten
Subject: Re: hardRandNumbGen Date: Tue, 09 Feb 1999 18:34:07 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36c07adb.992577@nntp.ix.netcom.com> References: <79n0le$u0h$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 78 On 8 Feb 1999 10:42:38 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >>Therefore the very thing you are testing the RNG for, namely its >>suitability for use with the OTP system, is not determinable. You >>might be able to determine that a RNG is not suitable, but you cannot >>determine that an RNG is suitable. >No. There are two things you need to do to produce a certifiable >TRNG. I meant "you cannot determine that an RNG is suitable"... using statistical tests on the output. >One is to confirm that the device is, in fact, a "random number generator" >in the sense that it produces random bits. The main thing to confirm >then is that you can get an unbounded number of random (although not >necessarily equiprobable) bits out of the system. I do not know what you mean by "random" in that sentence. I will take it to mean "indeterminant". Which brings up a question I was going to bring up earlier and have been waiting for the right place. We speak of the ills of bit-bias in terms of random number generation, but what if the generator were designed with a deliberate bias? As an analog (and only as an analog) imagine a symmetric polygonal die with one more 1 than 0. That would have a built in bias, yet each outcome of a throw would be indeterminant. So you subject the output of that die to a statistical test for bit-bias and it flunks. Now what? Also, imagine actually using the output for an OTP and your attacker tries to figure out why the bits in the ciphers are biased. Will that do him any good? IOW, does using the pad from a deliberately biased RNG (which is otherwise completely indeterminant) leak any information that is useful for decrypting your ciphers? It would seem that any bias, even bias that is deliberately introduced and accounted for, is going to weaken the random number generation process cryptographically, since in the limit that the bias becomes very large, you have a totally unsecure system. Yet the TRNG is completely indeterminant from one throw of the die to the next. [NB: For those of you who were here a year ago, this very important point was discussed at length - and is the reason we define a TRNG in terms of equiprobable sequences, and not just independent bit generation.] >This requires >examination of the generator -- and is probably impossible unless >you're willing to make certain assumptions about various physical >processes such as radioactive decay or wave height or something. Therefore you must have a known source of randomness to avoid such assumptions. Radioactive decay suffices - unless you are prepared to take on the entire scientific community with a refutation of indeterminancy in Quantum Mechanics, in which case be sure to bring your lunch because you are gonna be at it for a while. >The other is to confirm that the outputs are bias-free -- or more >accuratley as bias-free as possible, since there's no way to prove >ZERO bias. And this is best done statistically, although if you >really trust your engineers you can probably do it by design analysis >as well. 
If you know that your RNG is supposed to be bias-free, then testing it for bias may be necessary but is certainly not sufficient to demonstrate that it is working properly - with the proviso that you know that it is designed to be a TRNG so you can avoid the possibility that you have a PRNG which passes the tests and fools you. Bob Knauer "The world is filled with violence. Because criminals carry guns, we decent law-abiding citizens should also have guns. Otherwise they will win and the decent people will loose." --James Earl Jones
Subject: Re: hardRandNumbGen Date: Wed, 10 Feb 1999 00:05:49 -0500 From: "Trevor Jackson, III" <fullmoon@aspi.net> Message-ID: <36C113AC.E21DD743@aspi.net> References: <36c07adb.992577@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 87 R. Knauer wrote: > On 8 Feb 1999 10:42:38 -0500, juola@mathcs.duq.edu (Patrick Juola) > wrote: > > >>Therefore the very thing you are testing the RNG for, namely its > >>suitability for use with the OTP system, is not determinable. You > >>might be able to determine that a RNG is not suitable, but you cannot > >>determine that an RNG is suitable. > > >No. There are two things you need to do to produce a certifiable > >TRNG. > > I meant "you cannot determine that an RNG is suitable"... using > statistical tests on the output. > > >One is to confirm that the device is, in fact, a "random number generator" > >in the sense that it produces random bits. The main thing to confirm > >then is that you can get an unbounded number of random (although not > >necessarily equiprobable) bits out of the system. > > I do not know what you mean by "random" in that sentence. I will take > it to mean "indeterminant". > > Which brings up a question I was going to bring up earlier and have > been waiting for the right place. We speak of the ills of bit-bias in > terms of random number generation, but what if the generator were > designed with a deliberate bias? As an analog (and only as an analog) > imagine a symmetric polygonal die with one more 1 than 0. That would > have a built in bias, yet each outcome of a throw would be > indeterminant. So you subject the output of that die to a statistical > test for bit-bias and it flunks. Now what? > > Also, imagine actually using the output for an OTP and your attacker > tries to figure out why the bits in the ciphers are biased. Will that > do him any good? IOW, does using the pad from a deliberately biased > RNG (which is otherwise completely indeterminant) leak any information > that is useful for decrypting your ciphers? > > It would seem that any bias, even bias that is deliberately introduced > and accounted for, is going to weaken the random number generation > process cryptographically, since in the limit that the bias becomes > very large, you have a totally unsecure system? Yet the TNG is > completely indeterminant from one throw of the die to the next > > [NB: For those of you who were here a year ago, this very important > point was discussed at length - and is the reason we define a TRNG in > terms of equiprobable sequences, and not just independent bit > generation.] > > >This requires > >examination of the generator -- and is probably impossible unless > >you're willing to make certain assumptions about various physical > >processes such as radioactive decay or wave height or something. > > Therefore you must have a known source of randomness to avoid such > assumptions. Radioactive decay suffices - unless you are prepared to > take on the entire scientific community with a refutation of > indeterminancy in Quantum Mechanics, in which case be sure to bring > your lunch because you are gonna be at it for a while. Any tests you would use to prove QM indeterminate can be used t prove a non QM RNG indeterminate. naturally, these would be statistical tests. > > > >The other is to confirm that the outputs are bias-free -- or more > >accuratley as bias-free as possible, since there's no way to prove > >ZERO bias. 
And this is best done statistically, although if you > >really trust your engineers you can probably do it by design analysis > >as well. > > If you know that your RNG is supposed to be bias-free, then testing it > for bias may be necessary but is certainly not sufficient to > demonstrate that it is working properly - with the proviso that you > know that it is designed to be a TRNG so you can avoid the possibility > that you have a PRNG which passes the tests and fools you. > > Bob Knauer > > "The world is filled with violence. Because criminals carry guns, > we decent law-abiding citizens should also have guns. Otherwise > they will win and the decent people will loose." > --James Earl Jones
Subject: Re: hardRandNumbGen Date: Wed, 10 Feb 1999 15:00:04 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36c19eb3.4946452@nntp.ix.netcom.com> References: <36C113AC.E21DD743@aspi.net> Newsgroups: sci.crypt Lines: 15 On Wed, 10 Feb 1999 00:05:49 -0500, "Trevor Jackson, III" <fullmoon@aspi.net> wrote: >Any tests you would use to prove QM indeterminate can be used to prove a non-QM >RNG indeterminate. Naturally, these would be statistical tests. Apparently you never mastered Quantum Mechanics, or you would not be making such a ludicrous statement as you did above. Bob Knauer "It is not a matter of what is true that counts, but a matter of what is perceived to be true." --Henry Kissinger
Subject: Re: hardRandNumbGen Date: 10 Feb 1999 08:53:19 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <79s30f$9db$1@quine.mathcs.duq.edu> References: <36c07adb.992577@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 64 In article <36c07adb.992577@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 8 Feb 1999 10:42:38 -0500, juola@mathcs.duq.edu (Patrick Juola) >wrote: > >>>Therefore the very thing you are testing the RNG for, namely its >>>suitability for use with the OTP system, is not determinable. You >>>might be able to determine that a RNG is not suitable, but you cannot >>>determine that an RNG is suitable. > >>No. There are two things you need to do to produce a certifiable >>TRNG. > >I meant "you cannot determine that an RNG is suitable"... using >statistical tests on the output. > >>One is to confirm that the device is, in fact, a "random number generator" >>in the sense that it produces random bits. The main thing to confirm >>then is that you can get an unbounded number of random (although not >>necessarily equiprobable) bits out of the system. > >I do not know what you mean by "random" in that sentence. I will take >it to mean "indeterminant". > >Which brings up a question I was going to bring up earlier and have >been waiting for the right place. We speak of the ills of bit-bias in >terms of random number generation, but what if the generator were >designed with a deliberate bias? As an analog (and only as an analog) >imagine a symmetric polygonal die with one more 1 than 0. That would >have a built in bias, yet each outcome of a throw would be >indeterminant. So you subject the output of that die to a statistical >test for bit-bias and it flunks. Now what? Depends. If it's a really good generator with a known bias, there are mathematical techniques that will allow me to strip out the bias and produced an unbiased stream. So if I'm willing to embed the hardware in some other equipment, it may be a good building block. >Also, imagine actually using the output for an OTP and your attacker >tries to figure out why the bits in the ciphers are biased. Will that >do him any good? IOW, does using the pad from a deliberately biased >RNG (which is otherwise completely indeterminant) leak any information >that is useful for decrypting your ciphers? Broadly speaking, yes. First, I can start decrypting at the most probable key and work downwards from there. Second, I can produce a probability (Bayes' theorem again) for every potential key, and from that derive probabilities for various plaintexts. If you send a message telling your broker either to "buy!" or "sell", but one is overwhelmingly more probable than the other, that's a potentially serious crack. >It would seem that any bias, even bias that is deliberately introduced >and accounted for, is going to weaken the random number generation >process cryptographically, since in the limit that the bias becomes >very large, you have a totally unsecure system? Yet the TNG is >completely indeterminant from one throw of the die to the next But you can strip out bias fairly easily -- gather bits in pairs, output a 1 bit if the pair is 01, a zero bit if the pair is 10 and gather a new pair otherwise. Yes, you waste about 75 percent of your generator this way.... but bits are cheap. -kitten
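The pairing rule described above is the classic von Neumann trick. Here is a minimal Python sketch of it, using the 01-gives-1, 10-gives-0 convention of this post (the function name is ours, not from the thread):

def debias(bits):
    # Take bits two at a time: 01 -> 1, 10 -> 0, and discard 00 and 11.
    # For an independent source with fixed bias p, both surviving pairs
    # occur with probability p*(1-p), so the output is unbiased.
    out = []
    for x, y in zip(bits[0::2], bits[1::2]):
        if x != y:
            out.append(y)      # pair 0,1 emits 1; pair 1,0 emits 0
    return out

Either assignment of the two surviving pairs to 0 and 1 works equally well; a later post in the thread uses the opposite convention.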
Subject: Re: hardRandNumbGen Date: Thu, 11 Feb 1999 08:56:20 -0700 From: "Tony T. Warnock" <u091889@cic-mail.lanl.gov> Message-ID: <36C2FDA4.5B0776F7@cic-mail.lanl.gov> References: <79s30f$9db$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 11 Patrick Juola wrote: > But you can strip out bias fairly easily -- gather bits in pairs, output > a 1 bit if the pair is 01, a zero bit if the pair is 10 and gather a > new pair otherwise. Yes, you waste about 75 percent of your generator > this way.... but bits are cheap. Actually, "random" bits are quite expensive to get. "Ignorance can be costly." Tony
Subject: Re: hardRandNumbGen Date: Thu, 11 Feb 1999 18:39:51 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36c31cb8.17085527@nntp.ix.netcom.com> References: <79s30f$9db$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 48 On 10 Feb 1999 08:53:19 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >Depends. If it's a really good generator with a known bias, there >are mathematical techniques that will allow me to strip out the bias >and produce an unbiased stream. This brings up the question of whether anti-skewing changes the equiprobability of the generator. I suspect it does for the following reason. A TRNG must be capable of generating all possible finite sequences equiprobably. If it is biased, then it is not doing that. Anti-skewing procedures do not generate those sequences that are under-represented because of the bias. >>IOW, does using the pad from a deliberately biased >>RNG (which is otherwise completely indeterminant) leak any information >>that is useful for decrypting your ciphers? >Broadly speaking, yes. Just as I suspected - equiprobability is at the heart of proveably secure crypto. Equiprobability implies both independence and equidistribution of all possible finite sequences. Independence is not sufficient and equidistribution is not sufficient - you must have both to satisfy the requirements for proveable security. >But you can strip out bias fairly easily -- gather bits in pairs, output >a 1 bit if the pair is 01, a zero bit if the pair is 10 and gather a >new pair otherwise. Yes, you waste about 75 percent of your generator >this way.... but bits are cheap. See my comments above. I believe that if you apply that, or any other, anti-skewing technique to the output of a TRNG which is suffering from severe bias, then you will no longer have a TRNG in the sense of its fundamental specification - capable of generating all possible finite sequences equiprobably. IOW, I fail to see how any anti-skewing procedure, including the one you gave above, is going to restore the equiprobability of the TRNG. Bob Knauer "It is not a matter of what is true that counts, but a matter of what is perceived to be true." --Henry Kissinger
Subject: Re: hardRandNumbGen Date: 11 Feb 1999 15:42:48 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <79vfc8$d9q$1@quine.mathcs.duq.edu> References: <36c31cb8.17085527@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 36 In article <36c31cb8.17085527@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 10 Feb 1999 08:53:19 -0500, juola@mathcs.duq.edu (Patrick Juola) >wrote: > >>Depends. If it's a really good generator with a known bias, there >>are mathematical techniques that will allow me to strip out the bias >>and produce an unbiased stream. > >This brings up the question of whether anti-skewing changes the >equiprobability of the generator. I suspect it does for the following >reason. > >A TRNG must be capable of generating all possible finite sequences >equiprobably. If it is biased, then it is not doing that. Anti-skewing >procedures do not generate those sequences that are under-represented >because of the bias. Um... This is simply incorrect. That's *exactly* what anti-skewing procedures do. Think of it this way. Assume you have a biased, but independent, bit source that generates 1s with probability p > 0.5. Consider two successive bits, x, y. The probability of getting the sequence 1, 0 is p * (1-p). The probability of getting the sequence 0, 1 is (1-p) * p, which is identical to the above. So if you output a 1 bit when you see the pair 1,0 and a zero bit when you see the pair 0,1 (and nothing otherwise), then you've got a provably unbiased output stream -- the bias has been scrubbed from the input -- by the technique of throwing away everything that *isn't* unbiased, broadly speaking. -kitten
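A quick empirical check of this argument, as a sketch only (the bias value and sample size are arbitrary choices; the printed figures are what one would expect, not measured results from the thread):

import random

def debias(bits):
    # 1,0 -> 1 and 0,1 -> 0 (the convention of this post); discard 00 and 11.
    return [x for x, y in zip(bits[0::2], bits[1::2]) if x != y]

random.seed(1)
p = 0.8                                    # heavily biased but independent source
raw = [1 if random.random() < p else 0 for _ in range(200000)]
out = debias(raw)
print(sum(raw) / len(raw))                 # about 0.80 : biased input
print(sum(out) / len(out))                 # about 0.50 : de-biased output
print(len(out) / len(raw))                 # about p*(1-p) = 0.16 : surviving fraction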
Subject: Re: hardRandNumbGen Date: Sat, 13 Feb 1999 15:31:48 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36c598cd.7094651@nntp.ix.netcom.com> References: <79vfc8$d9q$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 49 On 11 Feb 1999 15:42:48 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >>A TRNG must be capable of generating all possible finite sequences >>equiprobably. If it is biased, then it is not doing that. Anti-skewing >>procedures do not generate those sequences that are under-represented >>because of the bias. >Um... This is simply incorrect. That's *exactly* what anti-skewing >procedures do. I would like to see how that is possible. >Think of it this way. Assume you have a biased, but independent, bit >source that generates 1s with probability p > 0.5. Consider two >successive bits, x, y. >The probability of getting the sequence 1, 0 is p * (1-p). >The probability of getting the sequence 0, 1 is (1-p) * p, which is >identical to the above. >So if you output a 1 bit when you see the pair 1,0 and a zero bit >when you see the pair 0,1 (and nothing otherwise), then you've >got a provably unbiased output stream -- the bias has been scrubbed >from the input -- by the technique of throwing away everything that >*isn't* unbiased, broadly speaking. I believe that algorithm is attributed to Knuth. This all sounds good on the surface but I am not convinced at this point. For example, there are sequences from a TRNG that are heavily biased, like 111...1 and 000...0, yet the scheme above does not seem to be able to produce those sequences. Can that anti-skewing technique above generate all possible sequences including 111...1 and 000...0 with their expected probability based on their length? If so, please show us how. If there is a reference in Li & Vitanyi, please cite it, since I have renewed the book for a couple more weeks for a second reading. If not, perhaps you can provide another reference. I might as well get as much out of my "free" library as I can, since I am paying for it. Bob Knauer "The one thing every man fears is the unknown. When presented with this scenario, individual rights will be willingly relinquished for the guarantee of their well being granted to them by their world government." --Henry Kissinger
Subject: Re: hardRandNumbGen Date: Sat, 13 Feb 1999 09:28:03 -0800 From: "karl malbrain" <karl_m@acm.org> Message-ID: <rDix2.16$Gr2.23@typhoon-la.pbi.net> References: <36c598cd.7094651@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 28 R. Knauer <rcktexas@ix.netcom.com> wrote in message news:36c598cd.7094651@nntp.ix.netcom.com... >On 11 Feb 1999 15:42:48 -0500, juola@mathcs.duq.edu (Patrick Juola) >wrote: >(...) >>Think of it this way. Assume you have a biased, but independent, bit >>source that generates 1s with probability p > 0.5. Consider two >>successive bits, x, y. > >>The probability of getting the sequence 1, 0 is p * (1-p). >>The probability of getting the sequence 0, 1 is (1-p) * p, which is >>identical to the above. > >>So if you output a 1 bit when you see the pair 1,0 and a zero bit >>when you see the pair 0,1 (and nothing otherwise), then you've >>got a provably unbiased output stream -- the bias has been scrubbed >>from the input -- by the technique of throwing away everyting that >>*isn't* unbiased, broadly speaking. > >I believe that algorithm is attributed to Knuth. > What you're missing here is INDUCTION. The pair-wise bias of the generator's bits is being extended by you to define a group-wise bias (take this in the negative). Karl M
Subject: Re: hardRandNumbGen Date: 15 Feb 1999 09:18:14 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <7a9ab6$dio$1@quine.mathcs.duq.edu> References: <36c598cd.7094651@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 73 In article <36c598cd.7094651@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 11 Feb 1999 15:42:48 -0500, juola@mathcs.duq.edu (Patrick Juola) >wrote: > >>>A TRNG must be capable of generating all possible finite sequences >>>equiprobably. If it is biased, then it is not doing that. Anti-skewing >>>procedures do not generate those sequences that are under-represented >>>because of the bias. > >>Um... This is simply incorrect. That's *exactly* what anti-skewing >>procedures do. > >I would like to see how that is possible. > >>Think of it this way. Assume you have a biased, but independent, bit >>source that generates 1s with probability p > 0.5. Consider two >>successive bits, x, y. > >>The probability of getting the sequence 1, 0 is p * (1-p). >>The probability of getting the sequence 0, 1 is (1-p) * p, which is >>identical to the above. > >>So if you output a 1 bit when you see the pair 1,0 and a zero bit >>when you see the pair 0,1 (and nothing otherwise), then you've >>got a provably unbiased output stream -- the bias has been scrubbed >>from the input -- by the technique of throwing away everything that >>*isn't* unbiased, broadly speaking. > >I believe that algorithm is attributed to Knuth. Before that, it was attributed to Von Neumann. It's an old one. >This all sounds good on the surface but I am not convinced at this >point. For example, there are sequences from a TRNG that are heavily >biased, like 111...1 and 000...0, yet the scheme above does not seem >to be able to produce those sequences. Can that anti-skewing technique >above generate all possible sequences including 111...1 and 000...0 >with their expected probability based on their length? If so, please >show us how. Certainly. Remember that the symbol (1) is generated from a subsequence of 10 (or 1110, 0010, etc.; in fact, by any sequence of the form [(00)+(11)]*10). So any sequence of the form ([(00)+(11)]*10)* will "debiasify" into 11111...11... As to proof that the expected probability is based on the length: do it by case analysis and induction. There are four cases for the first pair of bits; as expressed above: >>The probability of getting the sequence 1, 0 is p * (1-p). >>The probability of getting the sequence 0, 1 is (1-p) * p. The probability of getting the sequence 1, 1 is p*p. The probability of getting the sequence 0, 0 is (1-p)*(1-p). Note: only the first two generate output. Assume for induction that all (output) sequences of length K or less are printed equiprobably, with probability depending only on their length. (The base case of length zero sequences is, literally, trivial). Assume for contradiction, wolog, that the sequence X1 is more probable than the sequence X0. This means that it's more likely that a 1 was printed than a 0. But *this*, in turn, means that the sequence [(00)+(11)]*10 was *generated* more probably than the sequence [(00)+(11)]*01. The prefix probability is the same in both cases, so the only possible source of such difference is in the probability of generating 10 vs. 01. But by case analysis, these probabilities are equal, hence the desired contradiction is obtained. -kitten
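Both halves of this argument are easy to check numerically. The sketch below first builds a raw stream of the form ((00|11)*10)^3 and shows that it de-biases to 111, then counts 3-bit output blocks from a strongly biased source to show that they come out close to equidistributed. It is an illustration under the same assumptions as the post (independent bits, fixed bias), not code from the thread.

import random, collections

def debias(bits):
    # 1,0 -> 1 and 0,1 -> 0; discard 00 and 11 (von Neumann / Knuth rule).
    return [x for x, y in zip(bits[0::2], bits[1::2]) if x != y]

# A raw stream made of three groups of the form (00|11)* followed by 10
# de-biases to the "heavily biased looking" output 1 1 1:
raw = [0,0, 1,1, 1,0,   1,0,   1,1, 0,0, 1,0]
print(debias(raw))                          # [1, 1, 1]

# Equidistribution of output blocks from a biased (p = 0.8) source:
random.seed(2)
bits = [1 if random.random() < 0.8 else 0 for _ in range(2000000)]
out = debias(bits)
blocks = collections.Counter(tuple(out[i:i+3]) for i in range(0, len(out) - 2, 3))
total = sum(blocks.values())
for b in sorted(blocks):
    print(b, round(blocks[b] / total, 3))   # all eight blocks come out near 0.125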
Subject: Re: hardRandNumbGen Date: Mon, 15 Feb 1999 12:10:49 -0500 From: "Trevor Jackson, III" <fullmoon@aspi.net> Message-ID: <36C85518.D62EF8@aspi.net> References: <7a9ab6$dio$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 88 Patrick Juola wrote: > In article <36c598cd.7094651@nntp.ix.netcom.com>, > R. Knauer <rcktexas@ix.netcom.com> wrote: > >On 11 Feb 1999 15:42:48 -0500, juola@mathcs.duq.edu (Patrick Juola) > >wrote: > > > >>>A TRNG must be capable of generating all possible finite sequences > >>>equiprobably. If it is biased, then is it not doing that. Anti-skewing > >>>procedures do not generate those sequences that under-representted > >>>because of the bias. > > > >>Um... This is simply incorrect. That's *exactly* what anti-skewing > >>procedures do. > > > >I would like to see how that is possible. > > > >>Think of it this way. Assume you have a biased, but independent, bit > >>source that generates 1s with probability p > 0.5. Consider two > >>successive bits, x, y. > > > >>The probability of getting the sequence 1, 0 is p * (1-p). > >>The probability of getting the sequence 0, 1 is (1-p) * p, which is > >>identical to the above. > > > >>So if you output a 1 bit when you see the pair 1,0 and a zero bit > >>when you see the pair 0,1 (and nothing otherwise), then you've > >>got a provably unbiased output stream -- the bias has been scrubbed > >>from the input -- by the technique of throwing away everyting that > >>*isn't* unbiased, broadly speaking. > > > >I believe that algorithm is attributed to Knuth. > > Before that, it was attributed to Von Neumann. it's an old one. In electronics I believe this is called "edge detection" instead of "level detection". The transform from 01/10 to 0/1 takes in an edge and yeilds a level. Signal processing terminology also deals with these transforms, but their lexicon is irrelevant because we're dealing with independent events as opposed to impulse responses where dependence is presumed. > > > >This all sounds good on the surface but I am not convinced at this > >point. For example, there are sequences from a TRNG that are heavily > >biased, like 111...1 and 000...0, yet the scheme above does not seem > >to be able to produce those sequences. Can that anti-skewing technique > >above generate all possible sequences including 111...1 and 000...0 > >with their expected probability based on their length? Is so, please > >show us how. > > Certainly. Remember that the symbol (1) is generated from a subsequence > of 10 (or 1110, 0010, etc... in fact, by any sequence of the form > [(00)+(11)*10]. So any sequence of the form [(00)+(11)*10]* > will "debiasify" into 11111...11... > > As to proof that the expected probability is based on the length; > do it by case analysis and induction. > > There are four cases for the first four bits; as expressed above : > > >>The probability of getting the sequence 1, 0 is p * (1-p). > >>The probability of getting the sequence 0, 1 is (1-p) * p. > > The probability of getting the sequence 1, 1 is p*p. > The probability of getting the sequence 0, 0 is (1-p)*(1-p). > > Note : only the first two generate output. > > Assume for induction that all (output) sequences of length K or less > are printed uniformly with probability proportional to their length. > (The base case of length zero sequences is, literally, trivial). > Assume for contradiction, wolog, that the sequence X1 is more > probable than the sequence X0. This means that it's more likely > that a 1 was printed than a 0. 
But *this*, in turn, means that > the sequence [(00)+(11)*10] was *generated* more probably than > the sequence [(00)+(11)*01]. The prefix probability is the same > in both cases, so the only possible source of such difference is > in the probability of generating 10 vs. 01. But by case analysis, > these probabilities are equal, hence the desired contradiction is > obtained. > > -kitten
Subject: Re: hardRandNumbGen Date: Sat, 13 Feb 1999 15:50:54 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36c59ebb.8612624@nntp.ix.netcom.com> References: <79vfc8$d9q$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 42 On 11 Feb 1999 15:42:48 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >>A TRNG must be capable of generating all possible finite sequences >>equiprobably. If it is biased, then is it not doing that. Anti-skewing >>procedures do not generate those sequences that under-representted >>because of the bias. >Um... This is simply incorrect. That's *exactly* what anti-skewing >procedures do. > >Think of it this way. Assume you have a biased, but independent, bit >source that generates 1s with probability p > 0.5. Consider two >successive bits, x, y. > >The probability of getting the sequence 1, 0 is p * (1-p). >The probability of getting the sequence 0, 1 is (1-p) * p, which is >identical to the above. > >So if you output a 1 bit when you see the pair 1,0 and a zero bit >when you see the pair 0,1 (and nothing otherwise), then you've >got a provably unbiased output stream -- the bias has been scrubbed >from the input -- by the technique of throwing away everyting that >*isn't* unbiased, broadly speaking. I have a further objection to this method. If this method produced perfectly secure numbers, then it could be applied to the output of any PRNG to produce perfectly secure random numbers. But we know this is impossible in general. I still think that correlation and bias are completely separate concepts. You may be able to anti-skew a sequence, but that won't remove correlation. Bob Knauer "The one thing every man fears is the unknown. When presented with this scenario, individual rights will be willingly relinquished for the guarantee of their well being granted to them by their world government." --Henry Kissinger
Subject: Re: hardRandNumbGen Date: 15 Feb 1999 09:19:27 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <7a9adf$dj9$1@quine.mathcs.duq.edu> References: <36c59ebb.8612624@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 38 In article <36c59ebb.8612624@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 11 Feb 1999 15:42:48 -0500, juola@mathcs.duq.edu (Patrick Juola) >wrote: > >>>A TRNG must be capable of generating all possible finite sequences >>>equiprobably. If it is biased, then is it not doing that. Anti-skewing >>>procedures do not generate those sequences that under-representted >>>because of the bias. > >>Um... This is simply incorrect. That's *exactly* what anti-skewing >>procedures do. >> >>Think of it this way. Assume you have a biased, but independent, bit >>source that generates 1s with probability p > 0.5. Consider two >>successive bits, x, y. >> >>The probability of getting the sequence 1, 0 is p * (1-p). >>The probability of getting the sequence 0, 1 is (1-p) * p, which is >>identical to the above. >> >>So if you output a 1 bit when you see the pair 1,0 and a zero bit >>when you see the pair 0,1 (and nothing otherwise), then you've >>got a provably unbiased output stream -- the bias has been scrubbed >>from the input -- by the technique of throwing away everyting that >>*isn't* unbiased, broadly speaking. > >I have a further objection to this method. > >If this method produced perfectly secure numbers, then it could be >applied to the output of any PRNG to produce perfectly secure random >numbers. But we know this is impossible in general. No. This method produces perfectly unbiased numbers *IF* the underlying bit sequence is independent. PRNGs, not being independent, can't be improved by this. -kitten
Subject: Re: hardRandNumbGen Date: Mon, 15 Feb 1999 15:56:41 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36c84226.10575306@nntp.ix.netcom.com> References: <7a9adf$dj9$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 31 On 15 Feb 1999 09:19:27 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >No. This method produces perfectly unbiased numbers *IF* the >underlying bit sequence is independent. PRNGs, not being >independent, can't be improved by this. Let me attempt to reiterate this and your earlier post. If a RNG produces finite sequences independently, but they are not equidistributed, then the Knuth method of anti-skewing will cause the output to become equidistributed - which then brings the RNG into compliance with the specifications for a TRNG. Is that correct? If so, then this should be part of the design of a TRNG. One advantage is that if the output is shorted or pulled up, there will be no final output since there will be no dibits of the kind 01 or 10. That means that the TRNG will be self-diagnosing in that regard. Bob Knauer "Of all tyrannies, a tyranny exercised for the good of its victims may be the most oppressive. It may be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end, for they do so with the approval of their consciences." --C.S. Lewis
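Both caveats in the last two posts are easy to see concretely with the same pairing rule: a stuck (shorted or pulled-up) source yields no output at all, and a balanced but correlated source is not repaired. A small sketch, illustrative only:

def debias(bits):
    # 1,0 -> 1 and 0,1 -> 0; discard 00 and 11.
    return [x for x, y in zip(bits[0::2], bits[1::2]) if x != y]

# Self-diagnosis: a stuck source produces no 01 or 10 pairs, so no output.
print(debias([0] * 100))        # []
print(debias([1] * 100))        # []

# Independence matters: a strictly alternating source has zero bias, yet
# every pair is 0,1, so the "de-biased" output is a constant stream --
# useless as pad material even though the raw bit frequencies are perfect.
print(debias([0, 1] * 50))      # fifty 0s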
Subject: Re: hardRandNumbGen Date: 15 Feb 1999 13:11:35 -0500 From: juola@mathcs.duq.edu (Patrick Juola) Message-ID: <7a9o0n$e7u$1@quine.mathcs.duq.edu> References: <36c84226.10575306@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 44 In article <36c84226.10575306@nntp.ix.netcom.com>, R. Knauer <rcktexas@ix.netcom.com> wrote: >On 15 Feb 1999 09:19:27 -0500, juola@mathcs.duq.edu (Patrick Juola) >wrote: > >>No. This method produces perfectly unbiased numbers *IF* the >>underlying bit sequence is independent. PRNGs, not being >>independent, can't be improved by this. > >Let me attempt to reiterate this and your earlier post. > >If a RNG produces finite sequences independently, but they are not >equidistributed, then the Knuth method of anti-skewing will cause the >output to become equidistributed - which then brings the RNG into >compliance with the specifications for a TRNG. > >Is that correct? Yes. >If so, then this should be part of the design of a TRNG. Not necessarily. This technique is sufficient but not necessary for removing bias, and there may be other, more appropriate techniques depending on your needs. For example, this technique throws away approximately 1/2 of the generated bits, which means that your random number generator needs to generate bits in excess of twice the needed volume. This could rightly be objected to as inefficient. A further objection is that the number of bits that may need to be generated and thrown away is unbounded, and as such this would be inappropriate to use in a real-time system where response time is required to be faster than a certain threshold. In plain English, I may be unwilling to wait several seconds before my RNG spits out any data. Furthermore, if you are sufficiently confident that your generator is unbiased, such a technique is redundant. So this is an engineering question, and not a formal requirement. -kitten
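The two engineering objections can be put in numbers. For a pairwise de-skewer fed by an independent source of bias p, each raw pair survives with probability 2p(1-p), so the yield is p(1-p) output bits per raw bit -- one bit in four for an already-unbiased source (the "75 percent" wasted mentioned earlier in the thread), and fewer as the bias grows -- and the wait for the next output bit has a geometric tail. A sketch of the arithmetic (not code from the thread):

def yield_per_raw_bit(p):
    # A pair survives with probability 2*p*(1-p) and costs two raw bits.
    return p * (1 - p)

def prob_wait_exceeds(pairs, p):
    # Probability that more than `pairs` consecutive pairs are discarded
    # before the next output bit appears.
    return (1 - 2 * p * (1 - p)) ** pairs

for p in (0.5, 0.7, 0.9):
    print(p, yield_per_raw_bit(p), prob_wait_exceeds(50, p))

# p = 0.5 : yield 0.25 (four raw bits per output bit); a 50-pair drought ~ 9e-16
# p = 0.9 : yield 0.09 (about 11 raw bits per output bit); a 50-pair drought ~ 5e-5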
Subject: Re: hardRandNumbGen Date: Mon, 15 Feb 1999 20:03:47 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36c87a91.25018835@nntp.ix.netcom.com> References: <7a9o0n$e7u$1@quine.mathcs.duq.edu> Newsgroups: sci.crypt Lines: 48 On 15 Feb 1999 13:11:35 -0500, juola@mathcs.duq.edu (Patrick Juola) wrote: >Not necessarily. This technique is sufficient but not necessary >for removing bias, and there may be other, more appropriate >techniques depending on your needs. For example, this technique >throws away approximately 1/2 of the generated bits, which means >that your random number generator needs to generate bits in excess >of twice the needed volume. This could rightly be objected to >as inefficient. >A further objection is that the number of bits that may need to >be generated and thrown away are unbounded, and as such this would >be inappropriate to use in a real-time system where response time >is required to be faster than a certain threshhold. In plain English, >I may be unwilling to wait several seconds before my RNG spits out >any data. >Furthermore, if you are sufficiently confident that your generator >is unbiased, such a technique is redundant. >So this is an engineering question, and not a formal requirement. Until another proveably secure anti-skewing technique can be identified, it has the advantage that it works. And since this discussion is mostly theoretical regarding the proveable security of the OTP system, the fact that such a generator would not be useable for a real time stream cipher is not relevant. Regarding non-TRNG streams (such as text streams), one thing that I forgot to ask when we were discussing decorrelation techniques is whether those schemes proposed, such as CRC hash or the LZ77 compression algorithm you suggested, also remove bias? Do such decorrelation procedures also remove bias, or is it necessary to run an anti-skewing technique on the stream? If so, should it be run before or after the decorrelation procedure? Bob Knauer "Of all tyrannies, a tyranny exercised for the good of its victims may be the most oppressive. It may be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end, for they do so with the approval of their consciences." --C.S. Lewis
Subject: Re: hardRandNumbGen Date: Mon, 08 Feb 1999 19:28:59 GMT From: ritter@io.com (Terry Ritter) Message-ID: <36bf3adb.4721173@news.io.com> References: <36bef38f.9237422@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 82 On Mon, 08 Feb 1999 15:01:01 GMT, in <36bef38f.9237422@nntp.ix.netcom.com>, in sci.crypt rcktexas@ix.netcom.com (R. Knauer) wrote: >[...] >I realize that an actual TRNG will not be perfect, but I believe it >can be made to specification such that it will not cause any >significant amount of information to "leak". I would say that to the extent that a source has some "true" randomness, information leakage has no consequence. If we can make all of our reported values dependent upon enough TRNG values, we can assure (statistically!!) to a high probability that each reported value is independent. >That may also be the case for some stream ciphers, such as using the >least significant bits of text and hashing them to high entropy >density. Even though the latter is not a nondeterministic TRNG, it may >not leak cause enough information to leak for practical purposes. This may be combining two concerns: first to acquire the uniqueness which exists in expression, and second to prevent the identification of the presumably known source text. The second we can achieve by throwing away information, and the first is best achieved by using the larger character representation, and not just the lsb's. >My concern is that statistical tests cannot be used to determine >whether ANY method of random number generation is suitable for the OTP >system. All it can be used for is to reject those RNGs that do not >pass, and then only probabilistically. I think reality is the other way around: I think we can reject bad generators to a very high probability, if we can detect them. Often, statistical tests will set a PASS / FAIL bound on the statistic sequence from a generator and get a statistic which we can interpret as a "pass" or "fail." The probabilistic part of this is that the *ideal* random generator will "fail" about 1 percent of the time (or whatever level we select as failure). In contrast, a "bad" generator (one with a measurable error) will fail very badly virtually all of the time. Of course, a bad generator which we cannot detect appears to be a good generator, and we will reject it 1 percent of the time, just like any other good generator. >The only way you can certify a >TRNG for use with the OTP system is to do a full audit of the >generation method. I agree. >[...] >I believe it is possible to build a TRNG that can be be certified to >be properly working based on subsystem diagnostics and the knowledge >of the random process that gives it its nondeterminancy - without >having to resort to statistical tests on the output sequences. Of course, the way we certify the TRNG is to run *statistical* tests on the randomness source. (There is not much use in trying to test the source on the basis of processed output sequences.) These are true statistical tests, and we only get probable results. But subsequent processing can accumulate randomness and throw away information, so even imperfect generators can be very usable. We seek to certify, to high probability, using statistical tests, that a complex physical process is producing the results we expect. In this way we certify the detection machinery, and then depend upon the assumed characteristics of the physical source as the generator of randomness. 
--- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
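The pass/fail behaviour Ritter describes is easy to reproduce with a toy Monte Carlo run: an ideal source "fails" a 1%-level frequency test about 1% of the time, while even a modestly biased source fails essentially every run. The sketch below is illustrative only; the sample sizes and the 55% bias are arbitrary.

import math, random

def monobit_pass(bits, alpha=0.01):
    n, ones = len(bits), sum(bits)
    z = (ones - n / 2) / (math.sqrt(n) / 2)
    return math.erfc(abs(z) / math.sqrt(2)) >= alpha

def failure_rate(p, trials=1000, n=10000):
    # Fraction of trials in which a source of bias p fails the test.
    random.seed(3)
    fails = 0
    for _ in range(trials):
        sample = [1 if random.random() < p else 0 for _ in range(n)]
        if not monobit_pass(sample):
            fails += 1
    return fails / trials

print(failure_rate(0.50))    # about 0.01 : an ideal source is rejected ~1% of the time
print(failure_rate(0.55))    # about 1.0  : a 55%-ones source fails virtually always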
Subject: Re: hardRandNumbGen Date: Mon, 08 Feb 1999 20:54:11 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36bf4dcb.32337138@nntp.ix.netcom.com> References: <36bf3adb.4721173@news.io.com> Newsgroups: sci.crypt Lines: 24 On Mon, 08 Feb 1999 19:28:59 GMT, ritter@io.com (Terry Ritter) wrote: >This may be combining two concerns: first to acquire the uniqueness >which exists in expression, and second to prevent the identification >of the presumably known source text. The second we can achieve by >throwing away information, and the first is best achieved by using the >larger character representation, and not just the lsb's. I do not understand what you have just said wrt the LSB's. Why would using larger character representations be preferable to using LSBs? >Of course, a bad generator >which we cannot detect appears to be a good generator, and we will >reject it 1 percent of the time, just like any other good generator. That was exactly my point with regard to bad generators. Bob Knauer "The world is filled with violence. Because criminals carry guns, we decent law-abiding citizens should also have guns. Otherwise they will win and the decent people will loose." --James Earl Jones
Subject: Re: hardRandNumbGen Date: Tue, 09 Feb 1999 04:06:24 GMT From: ritter@io.com (Terry Ritter) Message-ID: <36bfb43d.2850225@news.io.com> References: <36bf4dcb.32337138@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 35 On Mon, 08 Feb 1999 20:54:11 GMT, in <36bf4dcb.32337138@nntp.ix.netcom.com>, in sci.crypt rcktexas@ix.netcom.com (R. Knauer) wrote: >On Mon, 08 Feb 1999 19:28:59 GMT, ritter@io.com (Terry Ritter) wrote: > >>This may be combining two concerns: first to acquire the uniqueness >>which exists in expression, and second to prevent the identification >>of the presumably known source text. The second we can achieve by >>throwing away information, and the first is best achieved by using the >>larger character representation, and not just the lsb's. > >I do not understand what you have just said wrt the LSB's. Why would >using larger character representations be preferably to using LSBs? There is simply more information present in all the bits than in just the LSB. This allows us to throw away more in the hashing. By using only LSB's, we throw away the rest of each character. But this is information which is not well mixed. So we may be throwing away bits which are more variable -- and thus more valuable -- than the lsb's. Even when characters have an overall "entropy" of less than a bit, there is no particular reason to expect to find the entropy concentrated in the LSB's. The LSB represents part of the coding; it has nothing to do with the entropy transported by the variation in usage of that coding. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
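Ritter's point about the LSB can be illustrated by looking at the marginal statistics of each bit position in ordinary ASCII text. The sketch below estimates a per-position entropy; it ignores correlation between characters, so it is only an upper bound, but the spread across positions is the point. The sample string and function name are ours.

import math

def bit_position_entropy(text):
    # Per-position binary entropy, in bits, of the 8 bit positions of the
    # ASCII encoding of `text`.  Index 0 is the LSB, index 7 the MSB.
    data = text.encode("ascii", "ignore")
    n = len(data)
    ent = []
    for i in range(8):
        ones = sum((ch >> i) & 1 for ch in data)
        p = ones / n
        h = 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
        ent.append(round(h, 2))
    return ent

sample = "the quick brown fox jumps over the lazy dog " * 40
print(bit_position_entropy(sample))
# The MSB of plain ASCII is always 0 (entropy 0), and the variation is
# spread over several of the lower positions rather than being
# concentrated in the LSB alone.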
Subject: Re: hardRandNumbGen Date: Tue, 09 Feb 1999 13:20:57 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36c03545.6590156@nntp.ix.netcom.com> References: <36bfb43d.2850225@news.io.com> Newsgroups: sci.crypt Lines: 24 On Tue, 09 Feb 1999 04:06:24 GMT, ritter@io.com (Terry Ritter) wrote: >Even when characters have an overall "entropy" of less than a bit, >there is no particular reason to expect to find the entropy >concentrated in the LSB's. The LSB represents part of the coding; it >has nothing to do with the entropy transported by the variation in >usage of that coding. I suppose the problem I am having is that if you include all the bits of the text characters, you will be introducing bias into the stream. If you use only the first 128 ASCII characters, the most significant bit will always be 0, so there will be a bias towards more 0s than 1s. Do you assume that some kind of anti-skewing procedure has been employed prior to hashing the data? Or is anti-skewing even necessary in what you are proposing? Does hashing remove bias? Bob Knauer "The world is filled with violence. Because criminals carry guns, we decent law-abiding citizens should also have guns. Otherwise they will win and the decent people will loose." --James Earl Jones
Subject: Re: hardRandNumbGen Date: Tue, 09 Feb 1999 16:27:44 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36C053F0.38A1949B@stud.uni-muenchen.de> References: <36bef38f.9237422@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 12 R. Knauer wrote: > > A TRNG has a specific definition - it must be capable of generating > all possible finite sequences equiprobably. The best example of a TRNG > is a fair coin toss. I believe lots of people would be very happy if you could tell them how to obtain a fair coin! Isn't it evident now that one can never get an ideal OTP? M. K. Shen
Subject: Re: hardRandNumbGen Date: Tue, 09 Feb 1999 18:41:17 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36c07fdb.2272868@nntp.ix.netcom.com> References: <36C053F0.38A1949B@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 31 On Tue, 09 Feb 1999 16:27:44 +0100, Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >> A TRNG has a specific definition - it must be capable of generating >> all possible finite sequences equiprobably. The best example of a TRNG >> is a fair coin toss. >I believe lots of people would be very happy if you could tell them >how to obtain a fair coin! Isn't evident now that one can never get >an ideal OTP? I meant that coin toss system as an analogy. I do not believe that chaotic classical systems can be proven to be totally random. The reason is that they are computable. Only certain Quantum Mechanical systems are proven to be totally random. Certain Quantum Mechanical processes are uncomputable. For example, the spontaneous emission that occurs in certain kinds of radioactive decay is totally random in time - i.e., the time of any particular decay is uncomputable. If anyone can demonstrate that it is computable, they need to plan on the tux rental for their trip to Stockholm. Bob Knauer "The world is filled with violence. Because criminals carry guns, we decent law-abiding citizens should also have guns. Otherwise they will win and the decent people will loose." --James Earl Jones
Subject: Re: hardRandNumbGen Date: Tue, 09 Feb 1999 12:21:04 -0700 From: "Tony T. Warnock" <u091889@cic-mail.lanl.gov> Message-ID: <36C08AA0.3D4D5994@cic-mail.lanl.gov> References: <36c07fdb.2272868@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 79 R. Knauer wrote: > On Tue, 09 Feb 1999 16:27:44 +0100, Mok-Kong Shen > <mok-kong.shen@stud.uni-muenchen.de> wrote: > > >> A TRNG has a specific definition - it must be capable of generating > >> all possible finite sequences equiprobably. The best example of a TRNG > >> is a fair coin toss. > > >I believe lots of people would be very happy if you could tell them > >how to obtain a fair coin! Isn't it evident now that one can never get > >an ideal OTP? > > I meant that coin toss system as an analogy. > > I do not believe that chaotic classical systems can be proven to be > totally random. The reason is that they are computable. Only certain > Quantum Mechanical systems are proven to be totally random. Certain > Quantum Mechanical processes are uncomputable. > > For example, the spontaneous emission that occurs in certain kinds of > radioactive decay is totally random in time - i.e., the time of any > particular decay is uncomputable. If anyone can demonstrate that it is > computable, they need to plan on the tux rental for their trip to > Stockholm. > > Bob Knauer > > "The world is filled with violence. Because criminals carry guns, > we decent law-abiding citizens should also have guns. Otherwise > they will win and the decent people will loose." > --James Earl Jones Your example of coin tossing is not bad, though. Some simulations (with a two-dimensional 4-sided die; the results should extrapolate to coins) show that if the die is dropped from high enough, and there is enough elasticity in the floor, the probability of falling on any side is within any epsilon of 1/4, regardless of the starting orientation. If you bounce a die hard enough, it acts random. The craps protocol in casinos reflects this. A coin, flipped high and bounced at least once (craps requires twice), ought to do quite well. Of course this assumes a reasonably symmetrical coin. You ought to get 50% eagles and 50% buildings. Von Neumann's bias removal can be used to improve results. Of course this is really slow.
Tony
Subject: Re: hardRandNumbGen Date: Mon, 08 Feb 1999 19:28:21 GMT From: ritter@io.com (Terry Ritter) Message-ID: <36bf3aca.4703677@news.io.com> References: <79mre8$mni@b.stat.purdue.edu> Newsgroups: sci.crypt Lines: 28 On 8 Feb 1999 09:13:28 -0500, in <79mre8$mni@b.stat.purdue.edu>, in sci.crypt hrubin@b.stat.purdue.edu (Herman Rubin) wrote: >[...] >For cryptography, the criterion is vulnerability of the >transmitted message. For simulation, the criterion is the accuracy >of the calculated results. In neither of these case is the source >of the "random numbers" important, unless it causes the criterion >not to be met. I agree with this. In the end we get a sequence. The source of the sequence does not matter to the cipher. But as I see it, "the" important quality for a cryptographic sequence is the independence of each value. Complete independence simply cannot be tested. There are too many ways a sequence can have dependence to test them all statistically. And if we want each value that we report to be even *almost* independent of all other reported values, we are not going to be using any PRNG's. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: hardRandNumbGen Date: Fri, 29 Jan 1999 16:20:20 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36B1D1B4.3AE4762@stud.uni-muenchen.de> References: <36b0f08a.10281173@nntp.ix.netcom.com> Newsgroups: sci.crypt Lines: 32 R. Knauer wrote: > > Your comments are directed at showing reasons for suspecting that a > generator is not random. What do you do if your statistical tests > indicate that it IS random? Since you have been disclaiming the usefulness of statistical tests all along, the formulation of your question surprises me. For it possibly indicates that you are rather unfamiliar with such tests (apologies if I am wrong). With statistical means you can NEVER expect to 'prove' something 'is' or 'is not' (in the absolute sense). Elsewhere Patrick Juola has given a good exposition of the essence of statistics. With my very humble knowledge I certainly can't do better. But perhaps I could attempt to give the following as a supplement to the topic of what a statistical test performs: One has a hypothesis H_0 and wants to investigate it statistically. One takes a sample. Given a significance level alpha, one evaluates the value t of a test function T corresponding to the hypothesis and sees whether t is in the critical domain D of T for the given value alpha. If yes, H_0 is rejected (at the given significance level). Otherwise one can only conclude that the result of the test does not contradict H_0 (but it does NOT prove H_0!) So you see, people doing statistical tests are very prudent and very modest. They never say anything categorically ('for sure', IS or IS NOT) but rather say 'something is fairly unlikely to be true' or 'there is not enough evidence to believe that it is false'. (Compare such categorical statements as 'anything from software IS not crypto-grade' and 'everything from hardware IS crypto-grade'.) M. K. Shen
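Shen's description maps directly onto code. As one concrete instance, here is a sketch of a runs test: H_0 is "the bits are independent and unbiased", T is a normalised count of runs, and the critical domain is the two tails beyond the chosen significance level. The formulas are the standard Wald-Wolfowitz approximations; everything else (names, the 1% default) is an illustrative choice.

import math

def runs_test(bits, alpha=0.01):
    n = len(bits)
    ones = sum(bits)
    if ones == 0 or ones == n:
        return "reject H_0", 0.0            # degenerate, constant sequence
    pi = ones / n
    runs = 1 + sum(bits[i] != bits[i - 1] for i in range(1, n))
    expected = 2 * n * pi * (1 - pi) + 1
    variance = 2 * n * pi * (1 - pi) * (2 * n * pi * (1 - pi) - 1) / (n - 1)
    z = (runs - expected) / math.sqrt(variance)
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return ("reject H_0" if p_value < alpha else "H_0 not contradicted"), p_value

Note the asymmetry Shen stresses: "reject" is a (probabilistic) conclusion, while the other outcome says only that this particular test found nothing.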
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 08:18:01 -1000 From: handWave <real9@complex9.net> Message-ID: <36AE06D9.384@complex9.net> References: <36ADB62D.E681674F@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 58 Mok-Kong Shen wrote: > > handWave wrote: > > > > > > Even the presumption that the output would pass statistical tests is > > > questionable. One famous gaffe in PRNG design was Knuth's composite > > > generator, which he called superrandom. Unfortunately it was a closed loop > > > design. > > > > It was a computer program. > > Having previously taken part in discussions in several threads of > this group on random number generations, I doubt nevertheless that > I have really known an answer to the following question: > > If I have two sources of randomness, one software and one hardware, > both passing all statistical tests I apply equally well, why should > I choose one source in preference to the other? Consider the many situations RNGs are used for. In two dimensions there are four scenarios: 1. You want repeatable outputs, or you want unpredictable outputs. 2. You use the RNG in a vault with trusted guards, or you use it in a college dormitory where you have made some energetic enemies. The preferences become clearer in those situations. >And if additionally > I don't know which sequence I get is from software and which is from > hardware? (Compare the Turing test.) How does the origin of the > sequence affect the workload of the analyst, In that case you may choose either one, with a chance of making a bad choice, depending on the future situational dimension in which it will be used. Random numbers sometimes seem nonrandom, hence lottery players sometimes find a lucky number or see a pattern. >if the software > generation process involves so many parameters that for combinatorical > reasons he has no chance of directly dealing with them but has > to try to look instead for possible regularities/irregularities in > the sequence itself and, by assumption, the sequences from the > two sources are of equal statistical quality? (Note that the > hardware source is (in my humble opinion) unpredictable simply > because there are so many participating 'parameters' that the > 'summation' (the end product) becomes unpredictable, cf. the casting > of a dice.) > > M. K. Shen The scenario in which the RNG will be used makes the choice clearer. Pick a purpose: repeatability for stream ciphers, then use a PRNG. For key generation use a RNG based on hardware. If you are in the college dorm with enemies who may interfere remotely, then a thermal noise generator from an avalanche diode with a high gain amplifier breadboarded using wire-wrapped, unshielded circuits is a bad implementation.
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 18:24:51 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36ADFA63.61CC6CAB@stud.uni-muenchen.de> References: <36AE06D9.384@complex9.net> Newsgroups: sci.crypt Lines: 21 handWave wrote: > > The scenario in which the RNG will be used makes the choice clearer. Pick > a purpose: repeatability for stream ciphers, then use a PRNG. For key > generation use a RNG based on hardware. If you are in the college dorm > with enemies who may interfere remotely, then a thermal noise generator > from an avalanche diode with a high gain amplifier breadboarded using > wire-wrapped, unshielded circuits is a bad implementation. I think the following scenario may be interesting: I am the user of an application with certain security expectations. Someone offers me two sequences generated from two different sources (I am not told which is which). I want to decide which one to use. I have all statistical test tools available. How am I going to make the decision? (This is independent of how I am going to use the sequence, I suppose. I simply want the one with better 'crypto strength', i.e. higher unpredictability. Maybe I'll use the sequence as a kind of one-time pad (never mind the terminology here).) M. K. Shen
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 19:36:01 GMT From: ritter@io.com (Terry Ritter) Message-ID: <36ae1916.10719434@news.io.com> References: <36ADB62D.E681674F@stud.uni-muenchen.de> Newsgroups: sci.crypt Lines: 108 On Tue, 26 Jan 1999 13:33:49 +0100, in <36ADB62D.E681674F@stud.uni-muenchen.de>, in sci.crypt Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> wrote: >[...] >If I have two sources of randomness, one software and one hardware, >both passing all statistical tests I apply equally well, why should >I choose one source in preference to the other? This of course depends upon the desired use; here we assume that the use will be as some sort of cryptographic value. That means the sequence must be "unpredictable." And we assume in cryptography that the design itself is available for analysis, with a large amount of the resulting sequence. Can a software RNG remain "unpredictable" under these conditions? Presumably the answer is "Yes," but we have long experience with the confusion generators in stream cipher cryptography, where many, many, apparently-strong RNG's have been broken. (Note that digital state-machine constructions can be realized in either hardware or software.) In cryptography we can neither measure nor prove strength. We *can* prove or demonstrate weakness, and we call such a proof a "break." But finding a break is often a difficult exercise, so we cannot simply assume that the absence of a break means that none are to be found. So how are we to trust a software state-machine RNG to produce our unpredictable values? One way is to use a well-known cryptographic hash or cipher to protect the original values. Another way is to measure molecular-level events which are "known" to be unpredictable. The problem with molecular-level events is that they are very, very tiny, and that many other signals which we normally ignore are of a similar magnitude or even larger. But if we *could* measure such events, we would have a good theoretical basis for asserting *strength* (in the sense of "unpredictability"), which is otherwise generally unavailable in cryptography. Now, no real device is going to be ideal, and presumably there will always be some possibility of hidden "weakness." So as a very basic first step, we must be very sure that we actually are measuring and reporting molecular-level randomness, rather than some other signal. One way to do this is to arrange to "turn off" the randomness source, and then inspect the output of the machine and see that it is quiet. Then we can know our machine is indeed measuring the events that we have just turned off. Ideally, a randomness machine will allow us to conduct statistical experiments on the randomness source itself (as opposed to a processed "random" output). We have a strong theoretical understanding of molecular-level randomness sources, so we know what the distributions should look like under various conditions. Ideally, the machine will allow us to change the conditions and collect statistics and see how the compare to theory. Presumably, if the machine is working well, there may be some deviation, but will largely produce the theoretical results. If so, we can be fairly sure that we are actually measuring the signal we hope to be measuring. So, ideally, by tests we can confirm that we can detect noise, and that the noise has the structure we expect from theory. 
If so, we have a fairly strong argument that we are measuring "unknowable" randomness, and then only need to process it (typically in a hash, and CRC would be ideal) to obtain unknowable uniformly-distributed values. >And if additionally >I don't know which sequence I get is from software and which is from >hardware? (Compare the Turing test.) How does the origin of the >sequence affect the workload of the analyst, Note that the goal of the analysis is to "break" (be able to predict) the generator. That obviously depends upon the generator. >if the software >generation process involves so many parameters that for combinatorial >reasons he has no chance of directly dealing with them but has >to try to look instead for possible regularities/irregularities in >the sequence itself and, by assumption, the sequences from the >two sources are of equal statistical quality? The way generators are broken is through a detailed analysis of the generation process, which may involve statistical support. But, on their own, statistical tests are simply uninformed about the structure of the generator, and so cannot test correlations to the construction. But it is precisely the construction which we seek to break. >(Note that the >hardware source is (in my humble opinion) unpredictable simply >because there are so many participating 'parameters' that the >'summation' (the end product) becomes unpredictable, cf. the casting >of a die.) That would be the complexity construction, which is closely related to the way we construct conventional ciphers. The problem with this is that most new ciphers turn out to be weak. So how can we trust this same construction when applied to RNG's? --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
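The processing step mentioned above (fold the raw, possibly biased samples through a hash or CRC to get uniformly-distributed output) might look something like this Python sketch; the 32-bit CRC, the 64-sample block size, and the use of OS randomness as a stand-in noise source are assumptions for illustration, not details of any design discussed in the thread:

    # Fold raw noise samples through a CRC to get uniform-looking words.
    import os
    import zlib

    def crc_whiten(raw_samples, block=64):
        """Yield one 32-bit word per `block` raw samples."""
        acc = 0
        buf = bytearray()
        for i, s in enumerate(raw_samples, 1):
            buf.append(s & 0xFF)
            if i % block == 0:
                acc = zlib.crc32(bytes(buf), acc)   # chain the CRC over each block
                yield acc
                buf.clear()

    raw = os.urandom(1024)          # stand-in for a hardware noise source
    words = list(crc_whiten(raw))
    print(len(words), "whitened 32-bit words, first:", hex(words[0]))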
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 16:06:40 +0100 From: Mok-Kong Shen <mok-kong.shen@stud.uni-muenchen.de> Message-ID: <36AF2B80.C844BA93@stud.uni-muenchen.de> References: <36ae1916.10719434@news.io.com> Newsgroups: sci.crypt Lines: 40 Terry Ritter wrote: > > So how are we to trust a software state-machine RNG to produce our > unpredictable values? One way is to use a well-known cryptographic > hash or cipher to protect the original values. Another way is to > measure molecular-level events which are "known" to be unpredictable. However, I like to associate that "known" with some degree of subjectivity (unless that "known" can be shown to be the same as "proved"). So I would say that the decision I referred to is not entirely free from subjectivity. > >And if additionally > >I don't know which sequence I get is from software and which is from > >hardware? (Compare the Turing test.) How does the origin of the > >sequence affect the workload of the analyst, > > Note that the goal of the analysis is to "break" (be able to predict) > the generator. That obviously depends upon the generator. In another follow-up I described more precisely the difficult situation of a user who has to choose between two given sequences without additional information as to the sources. I don't think that a choice based on objective criteria alone is (always) possible. > > The way generators are broken is through a detailed analysis of the > generation process, which may involve statistical support. But, on > their own, statistical tests are simply uninformed about the structure > of the generator, and so cannot test correlations to the construction. > But it is precisely the construction which we seek to break. But the generator may, under some circumstances, not be exactly known, namely if its design (its structure etc.) depends on a number of parameters not known to the analyst. Brute-forcing these may be infeasible if there is a combinatorial explosion. M. K. Shen
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 12:15:34 -0500 From: "Trevor Jackson, III" <fullmoon@aspi.net> Message-ID: <36ADF835.5630B028@aspi.net> References: <36ADC1FB.4212@complex9.net> Newsgroups: sci.crypt Lines: 73 handWave wrote: > Trevor Jackson, III wrote: > > handWave wrote: > > > I am glad you raised the "handwaves" metaphor, because handwaves are > > > what toss coins. A complex person tosses a coin and you might think it > > > is random. The oscillator RNG in this discussion is directly analogous > > > to a coin toss in many ways. If a coin is not rotating (oscillating) > > > it will fall back into the hand in the same position that it started from. > > > It is the rotation that helps the randomness, not only the complexity > > > of the nervous system, the elasticity of the skin, and the trembling of > > > the muscles. The rotation should be fast for best results. A juggler > > > could become skilled at non-random coin tosses for one coin that > > > rotates slowly. But if she tosses three coins with rapid rotation then > > > it is likely that random results will occur. If a periodic wind is > > > present in a coin toss, yes, it will influence the outcome, but the > > > result will often be recognizable as a useful random throw, or a throw > > > that was blown away. The same with this RNG. > > > > Human gestures are not a good foundation for system design. There are large, > > industrial concerns that rely upon human-gesture-generated unpredictability. > > Their interest is *not* statistical randomness as we find in simulations, > > games, and Monte Carlo tests (in spite of the latter name). Their interest > > is the same as ours: unpredictability. They are called casinos. > > The product I designed was evaluated for casinos by Bally, a potential > customer. > > > > > In spite of the fantastic efforts taken to eliminate predictability in games > > of chance, human gestures can still dominate the outcomes completely. I'm not > > referring to shuffling cards systematically, but to rolling a roulette ball > > against the wheel so precisely that out of 20 tries a human can obtain a > > predicted outcome (slot 17) 10 times. 50% success. I think that constitutes > > predictability. > > Yes, this is like the skilled juggler I described above. The analogy to a > hardRandNumbGen is a skilled hacker who controls the power supply noise, > the clock glitches, the radio beams so that the RNG comes under his > control. The chip designer must anticipate such antics, and prepare the > module for lunar insertion. > > > The hand-eye coordination involved is of an extreme level, requiring decades > > of practice to achieve. But it is real. The complexity of controlling a > > roulette wheel appears to me to be far larger than that of a coin toss. Even > > a fast one. > > I dispute this. A coin has one bit of output, a wheel has many bits in > one toss. A wheel is a big target with a smaller bandwidth for RPMs. A > coin has a wider bandwidth, perhaps 1 Hz to 50 Hz, a wheel, from 0.1 Hz to > 0.5 Hz on the initial spin. A coin may be tossed from a rooftop. Wheels > would fracture under such conditions. Then we disagree strongly. My statement was not based on the degrees of freedom of the two systems, nor on unanticipated operations on the hardware such as defenestration. I was referring only to the degree of practice required by a human to reach a particular skill level. For instance, to get a 10% advantage given the standard payoff odds.
I doubt an average person could control a roulette wheel that well after a year of practice. OTOH, I think an average person could reach that level of skill with a silver dollar after a few days or a week. Clearly the coin and the wheel are part of the physical universe and thus subject to the same set of arcane and esoteric influences. But the coin is much easier to control than the wheel. So we have to assume that a coin is not as good a source of entropy because it is too easy for an external influence to inject bias into the results. [ mega snip] As for the rest of your response, I can only harumph and murphle, which are not productive of enlightenment.
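For the bits-per-outcome part of the disagreement, a back-of-envelope calculation helps; the 38-slot double-zero wheel and the 50%-on-slot-17 success rate come from the exchange above, while the assumption that the remaining spins land uniformly on the other 37 slots is purely illustrative:

    # Rough entropy-per-outcome bookkeeping for coin vs. wheel.
    from math import log2

    fair_coin  = 1.0                 # log2(2) bits per toss
    fair_wheel = log2(38)            # ~5.25 bits per spin

    # A thrower who hits slot 17 half the time, with the rest assumed
    # uniform over the other 37 slots, leaves much less entropy per spin:
    p17 = 0.5
    rest = (1 - p17) / 37
    biased_wheel = -(p17 * log2(p17) + 37 * rest * log2(rest))

    print(f"fair coin:    {fair_coin:.2f} bits")
    print(f"fair wheel:   {fair_wheel:.2f} bits")
    print(f"biased wheel: {biased_wheel:.2f} bits")   # about 3.6 bits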
Subject: Re: hardRandNumbGen Date: Mon, 25 Jan 1999 12:29:42 -0600 From: Medical Electronics Lab <rosing@physiology.wisc.edu> Message-ID: <36ACB816.35EF@physiology.wisc.edu> References: <36AC8363.6D55@complex.net> Newsgroups: sci.crypt Lines: 21 .·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·. wrote: [...] > Radioactive decay is also a large signal RNG. It may be considered > to be both digital and analog, as this RNG may be. It depends on how you measure it. Using a smoke detector it's a mighty damn weak signal. It takes about 1e5 amplification to see the radiation noise and that involves lots of shielding from outside sources as well as preventing feedback. It's not likely that anyone would stick a radiation source onto a processor either, so it has to be external. For the truly paranoid this is a good thing, they can look at their hardware and slap 'scopes on just to be sure everything is copacetic. > Thank you for this polite discussion. It has been fun reading, thank you too! Patience, persistence, truth, Dr. mike
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 02:56:36 -1000 From: handWave <real9@complex9.net> Message-ID: <36ADBB84.775@complex9.net> References: <36ACB816.35EF@physiology.wisc.edu> Newsgroups: sci.crypt Lines: 43 Medical Electronics Lab wrote: > > .·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·. wrote: > [...] > > Radioactive decay is also a large signal RNG. It may be considered > > to be both digital and analog, as this RNG may be. > > It depends on how you measure it. Using a smoke detector it's > a mighty damn weak signal. It takes about 1e5 amplification to > see the radiation noise and that involves lots of shielding from > outside sources as well as preventing feedback. > > It's not likely that anyone would stick a radiation source onto > a processor either, so it has to be external. For the truly > paranoid this is a good thing, they can look at their hardware > and slap 'scopes on just to be sure everything is copacetic. Radioactive sources occur in dynamic RAMs, which cause soft errors: LARGE SIGNALS from thorium and uranium traces in common aluminum. Tim May showed this around 1980. The problem is not putting radioactive materials into the chip, the problem is keeping them out. Special processing has reduced this problem in recent decades. If you have a 1979 DRAM lying around, it would be a slow source of random numbers as a Large Signal Random Number Generator. This has been discussed before. Just write all ones to the DRAM, observe the soft errors that cause zeros, and use the address and time of the bit flipped to seed your hash. Or take your smoke detector and place the vitals in contact with the surface of a modern DRAM: Large Random Signals would result in the big memory array. Take two and call me in the morning. ·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·. handWave > > Thank you for this polite discussion. > > It has been fun reading, thank you too! > > Patience, persistence, truth, > Dr. mike You're welcome, God Bless You!
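A software sketch of the procedure described above (write all ones, watch for bits that drop, hash the address and time of each flip); the simulated memory, the injected flips, and the SHA-256 hash are stand-ins, since a real harvest would need raw access to an old, unprotected DRAM:

    # Simulated DRAM soft-error harvest; flips are injected artificially.
    import hashlib, random, time

    def scan_for_flips(memory, expected=0xFF):
        """Return (address, timestamp) pairs for bytes that no longer read all-ones."""
        now = time.time()
        return [(addr, now) for addr, byte in enumerate(memory) if byte != expected]

    mem = bytearray(b"\xff" * 65536)                 # "DRAM" written to all ones
    for _ in range(3):                               # fake soft errors
        mem[random.randrange(len(mem))] &= ~(1 << random.randrange(8)) & 0xFF

    seed = hashlib.sha256()
    for addr, t in scan_for_flips(mem):
        seed.update(addr.to_bytes(4, "big") + repr(t).encode())
    print("seed material:", seed.hexdigest()[:16], "...")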
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 09:33:00 -0000 From: "burt" <Lee.Grime@rdl.co.uk> Message-ID: <36ad8bb3.0@nnrp1.news.uk.psi.net> References: <36AC8363.6D55@complex.net> Newsgroups: sci.crypt Lines: 4 Have you done any frequency-domain analysis on the signals produced? Any periodicity in the frequency domain?
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 02:46:01 -1000 From: handWave <real9@complex9.net> Message-ID: <36ADB909.4F73@complex9.net> References: <36ad8bb3.0@nnrp1.news.uk.psi.net> Newsgroups: sci.crypt Lines: 18 burt wrote: > > Have you done any frequency-domain analysis on the signals produced? Any > periodicity in the frequency domain? Our team tested the hardRandNumbGen using a spectrum analyser during 1984. I do not have any photos of spectra available now, 15 years later, because I resigned from that company many moons ago. We were looking for radiated signals that could be picked up from outside the chip, and we found a strong signal in the 500 kHz range, corresponding to the oscillator used to sample the faster waveforms. There were other, weaker frequencies detected, but I have no interesting facts to tell you about the spectrum. As I recall, the two faster oscillators ran at about 5 MHz to 50 MHz as their frequencies were randomly modulated. handWave
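The same kind of check can be done in software on sampled output, assuming NumPy is available; the 500 kHz tone injected below is only a stand-in for the kind of oscillator leakage described above, and the 10 MHz sample rate and Hann window are arbitrary choices:

    # Look for a strong spectral line in (simulated) generator output.
    import numpy as np

    fs = 10_000_000                               # assumed 10 MHz sample rate
    t = np.arange(65536) / fs
    samples = np.random.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 500_000 * t)

    spectrum = np.abs(np.fft.rfft(samples * np.hanning(t.size)))
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)
    peak = freqs[np.argmax(spectrum[1:]) + 1]     # ignore the DC bin
    print(f"strongest non-DC component near {peak / 1e3:.1f} kHz")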
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 19:35:40 GMT From: ritter@io.com (Terry Ritter) Message-ID: <36ae1907.10704108@news.io.com> References: <36AC8363.6D55@complex.net> Newsgroups: sci.crypt Lines: 283 On Mon, 25 Jan 1999 04:44:51 -1000, in <36AC8363.6D55@complex.net>, in sci.crypt ".·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·." <real@complex.net> wrote: >On Sun, 24 Jan 1999 03:53:40 -1000, in <36AB25E4.2E7E@complex.net>, in >sci.crypt <real@complex.net> sinewave wrote: > >>Terry Ritter wrote: >[...] >>That means that this "large signal" design is probably sensitive to >>even tiny power and ground transients. It is going to be very hard to >>distinguish the effects of "real" thermal noise from transient >>feedback due to the structure of the circuit. So how can we have >>confidence in the result? Statistical testing cannot distinguish >>between "physical" and "pseudo" randomness. > >In the real world, it is not always possible to tell. I have never heard IC designers claim that it is "not possible to tell" whether their design works. In fact, if we can't tell whether a design works or not, we generally consider that a design *failure*: If we can't test it, we can't trust it. >Integrating >a random number generator (RNG) on a commodity IC is similar to >a manned expedition to MARS: they must take everything with them >into that harsh environment that they will need. If the craft is >buffeted by periodic winds, they do not have the luxury of calling >back to base and saying, "Houston, you told us this was a vacuum, >please make it a perfect vacuum, over". The RNG will encounter >non-ideal electrical environments. *Every* electronic circuit has requirements; we specify those requirements and then demand that the larger system achieve them if it expects the circuit to work. Conceivably, a low-noise section might have its own filtered analog power with a single-point ground, and separate digital power. >It should have redundant systems Yet we don't have, say, redundant adders on a processor. Why? One reason is that the result can be even *less* reliable. What we do is *test* the part, and if it doesn't work, we don't ship it. Which means we have to be able to test it. >which are combined to give the final random number the best shot at >being unpredictable, not perfect, but unpredictable. If one is going to claim that a design achieves its goals due to thermal noise, one must at the very least be able to show a performance change between noise and no-noise. Combining the thermal noise detection with RNG feedback complexity generally prevents this, which means we can't test the very thing we claim to do. >[...] >The major source of randomness of this RNG is the unsynchronized >nature of multiple oscillators with randomly changing frequencies. First, it is in the nature of this sort of oscillator to synchronize to tiny transients as may occur from nearby state changes. So just how do we *know* that our oscillators *are* "unsynchronized"? Next, the idea that the oscillators have "randomly changing frequencies" is what we need to prove or demonstrate, and not just assume. Even chaotic oscillators can follow predictable patterns. >This >is a large signal phenomenon, which cannot be accurately described >mathematically. Large signal phenomena are precisely those which are best described mathematically. It is the tiny signals (which must compete with thermal noise and transients from capacitive, inductive, and electromagnetic coupling) which are difficult to model well. 
>Similar to a coin toss, many analog variables are >involved. These continuous variations of many influences cause seeming >randomness. If you can mathematically describe a human coin toss, then >so you can with this RNG. But you cannot, and I cannot. That does not >invalidate the usefulness of these seed generators, not in this >century. So I guess this is not a machine which provably detects random molecular events and conveys them to us in the larger world. It is instead something which looks quite a lot like yet another complex software PRNG. >[...] >>Easy calculations using the published results show that the effective >>population of values is 1/4 the claimed ideal, which shows that the >>design was not as good as you thought. > >Correct, that first version in that report had an XOR gate placed in >a bad position, causing twice as many ones as zeros. The CIA alerted >us to my mistake with that one gate. When removed, the results are >much better. I still regret my mistake in that one gate placement. Thank you for the confirmation. Usually I hear about my mistakes. Apparently I was able to detect a problem in the design which your team and famous consultant did not, but which the CIA apparently also detected. In other words, my analysis was contrary to your beliefs and those of your crypto expert, but was also correct. So why are you not listening to me now? >The earlier description was an illustration for some readers to examine. >It was not an exhaustive explanation of the theory behind the design. >I have now expanded upon the description, explaining the large >signals as being analogous to coin tosses which must rotate due to >a complex hand-waving motion. The complexity of my circuit design >mimics, on a small scale, the complexities of the human hand wave >and coin toss. The frequency changes in the design are the analogy >of the hand motion. Thermal irregularities and power supply variations >also contribute to this hand motion. That is a very unsatisfying explanation for an on-chip device composed of extremely-well-understood physical components and techniques. >[...] >I do not claim nobody can break this. I am presenting concepts to a >wide reading audience. Some of these concepts are less sound than >others, so the readers have the opportunity to judge various attempts >to produce randomness in a harsh environment. I hope that they will >fare better than I did. And that is the same reason I confront those claims. Someday I may have to trust one of those systems. >>The reasoning about this design is contradictory: Supposedly the >>large signal design is "random" because it senses low-level noise. >>Yet the circuit is supposedly suitable for a noisy digital chip >>because it is a "large-signal" design. There is a fundamental problem >>in making both claims at the same time. > >I have addressed this above. A large signal, digital oscillator has >small noise on top of that. And, to the extent that it is sensitive to "small noise," that part of its operation is no longer "large signal." >The randomness is primarily based on the >coin toss analogy. Every binary sequence has 1's and 0's, but that does not make the generator physically random, or even pseudorandom. >The thermal noise calculation first given is a >secondary source of randomness. And the first source would be? >The periodic power supply noise >will affect this design more in some ways than it would affect an >analog circuit with well designed differential and common mode >considerations. I would say so, yes.
>But the ways periodic noise affects these circuits >do not ruin the unpredictability of the resulting numbers. If these stages are switching not based on thermal noise, but instead from nearby power transients from digital switching, they could appear to work, and yet be absolutely correlated to some apparently unrelated but physically-close device. And that could be predictable, if we are willing to invest the effort in finding the relationship. And once we do, of course, we can apply that knowledge to all examples of the device. >I leave >that discussion for another day. Well :) >[...] >>But that approach is digital complexity, and not thermal randomness. >>It can be simulated in software. It is PSEUDO-random. Maybe it is >>strong, maybe not, but there is certainly no proof. > >It is analog complexity. I will give no proof today. Give me proof >of coin tossing that does not involve complexity or strength. This is the same sort of argument we very often hear from crypto newbies. Unfortunately, it is not possible to prove -- or even measure -- cryptographic strength. It *is* possible to prove weakness by finding "breaks," but typically at great cost. That leaves the rather unsatisfying task of arguing strength, which is not working here. >[...] >>The obvious experiment, then, is to take the device to cryogenic >>temperatures and see how it performs. If the output still has good >>statistics, we can suspect that the output does not represent thermal >>noise at all, but is just a complex digital system. Was such an >>experiment performed? > >No. The circuit depends on many complex factors for randomness, as a >coin toss does. In some imagined laboratory experiment, it is feasible >to control all factors, causing non-random results. In commodity >applications, Large Signal Random Number Generators are sometimes >superior to small signal based generators and both may appear on a >single IC. But what would make you think "Large Signal Random Number Generators" are superior? Even the name is a deception which hides the fact that switching depends upon tiny noise-level values. The more you discuss this, the more it sounds like the sort of snake oil we see around here all the time. >[...] >>Even PSEUDO-random RNG's pass statistical tests. Those tests have >>nothing to do with cryptographic unpredictability or "strength." Yet >>strength is what you claim. > >Yes it is a strong source, as upcoming product releases are expected to >show. A press release shows us cryptographic strength? If these circuits cannot demonstrate that they are detecting the thermal randomness they are said to use, there is no set of tests in the world which would be sufficient to show "strength." >Just because old PRNGs pass some tests does not mean that new >designs are bad, as you imply. I imply no such thing. Perhaps you missed the point. >>I think you have missed the distinction between unpredictable >>randomness for cryptography, and ordinary statistical randomness. > >A PSRG may be depended upon to produce the same string under certain >easy to arrange conditions. This RNG does the opposite of that. That is what you need to prove, or to argue in a convincing way. I argue that an RNG as you describe not only has digital state, it also has analog thermal state, direct electrical coupling from other power-using circuits (including each stage), and indirect coupling from nearby circuits. Once we account for all this state and interaction, I expect that we can largely predict the result.
The worst case would be that some stages trigger from some unexpected power transient, or even trigger off each other by unexpected coupling. In this case, prediction becomes much easier. Most dangerously, such a device might well seem "random" to all tests not specifically aware of the unexpected transient or coupling. >Two >sequential random numbers from this circuit would prove that to >anyone who tests it, most of the time. Testing the output cannot be sufficient to test the design. The engineering in circuit design is not about handwaves, it is about the attempt to absolutely understand and control device operation. So if we can't test it, we can't trust it. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
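A quick illustration of how cheaply the XOR-gate bias discussed above (roughly twice as many ones as zeros) shows up in testing; the simulated 2/3 bias and the 100,000-bit sample are hypothetical, and the entropy figure is not an attempt to reproduce the "1/4 the claimed ideal" calculation:

    # A simple bias check that would have caught a 2:1 ones-to-zeros output.
    import math, random

    def biased_bits(n, p_one=2/3):
        # Simulated generator with the reported 2:1 ones-to-zeros bias.
        return [1 if random.random() < p_one else 0 for _ in range(n)]

    bits = biased_bits(100_000)
    p_hat = sum(bits) / len(bits)
    h = -(p_hat * math.log2(p_hat) + (1 - p_hat) * math.log2(1 - p_hat))
    print(f"observed P(1) = {p_hat:.3f}, about {h:.3f} bits of entropy per output bit")
    # With P(1) near 2/3 this lands around 0.918 bits/bit, so an "8-bit"
    # output word actually carries roughly 7.3 bits.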
Subject: Re: hardRandNumbGen Date: Tue, 26 Jan 1999 19:02:55 -0500 From: "Kazak, Boris" <bkazak@erols.com> Message-ID: <36AE57AF.4C5C@erols.com> References: <36ae1907.10704108@news.io.com> Newsgroups: sci.crypt Lines: 37 Terry Ritter wrote: > > > Large signal phenomena are precisely those which are best described > mathematically. It is the tiny signals (which must compete with > thermal noise and transients from capacitive, inductive, and > electromagnetic coupling) which are difficult to model well. > ------------------- Let's be practical... Consider such a simple system:

   HHHHHHHHHHHHHHHH
   HH            H         MMM
   HH            H        MMMMM
   HH OOOOOOOOOO H        MMMMM
   HH OOOOOOOOOOOO H       MMM
   HHHHHHHHHHHHHHHH

where HH is a Housing (just a glass or plastic bottle), OO are Objects (a pseudo-scientific baptism for 100-200 peas or beans), MM is a Microphone. Now if we start rotating the Housing around its horizontal axis, the Objects will produce a loud Random Rattle, and the Microphone will transmit this rattle to the sound card. My questions are: How many Objects are needed and what must be the speed of rotation that will assure the True Randomness? What estimates can be given for Degree of Correlation and for Period of Repetition, depending on the system parameters? The System is not patented, it is hereby placed in the public domain. Respectfully BNK
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 07:27:34 -1000 From: handWave <shaken@stirred.com> Message-ID: <36AF4C86.18CC@stirred.com> References: <36AE57AF.4C5C@erols.com> Newsgroups: sci.crypt Lines: 56 Kazak, Boris wrote: > > Terry Ritter wrote: > > > > > > > Large signal phenomena are precisely those which are best described > > mathematically. It is the tiny signals (which must compete with > > thermal noise and transients from capacitive, inductive, and > > electromagnetic coupling) which are difficult to model well. > > > ------------------- > Let's be practical... > > Consider such a simple system: >

>    HHHHHHHHHHHHHHHH
>    HH            H         MMM
>    HH            H        MMMMM
>    HH OOOOOOOOOO H        MMMMM
>    HH OOOOOOOOOOOO H       MMM
>    HHHHHHHHHHHHHHHH

> where HH is a Housing (just a glass or plastic bottle), OO are Objects > (a pseudo-scientific baptism for 100-200 peas or beans), MM is a > Microphone. > Now if we start rotating the Housing around its horizontal axis, > the Objects will produce a loud Random Rattle, and the Microphone will > transmit this rattle to the sound card. My questions are: > How many Objects are needed One. >and what must be the speed of > rotation that will assure the True Randomness? If the bottle is smooth inside, rotating is not as good as shaking it. Shaking by hand is like a coin toss: it is the result of a complex process that has never been repeated exactly during human history. > What estimates can be given for Degree of Correlation and > for Period of Repetition, depending on the system parameters? Your sound card and software will need to comprehend the natural resonant frequencies of the bottle and filter out that repeating soundwave. A threshold for the clicks of a fall should be set to reject small resonant sounds. Multiple bounces after a fall may have similarities to previous bounces, so that should be rejected. > > The System is not patented, it is hereby placed in the public > domain. > > Respectfully BNK
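A rough software sketch of the filtering suggested above (suppress the bottle's resonance, threshold for clicks, ignore bounce clusters, then keep only timing bits), assuming NumPy; the sample rate, threshold, dead time, and the synthetic recording are all illustrative guesses rather than measured values:

    # Extract click-timing bits from a (simulated) rattle recording.
    import numpy as np

    def clicks_to_bits(audio, rate=44100, threshold=0.3, dead_time=0.05):
        hp = np.diff(audio)                       # crude high-pass: removes slow resonance
        idx = np.flatnonzero(np.abs(hp) > threshold)
        picked, last = [], -10 * rate             # index of previous accepted click
        for i in idx:
            if i - last > dead_time * rate:       # skip bounces of the same fall
                picked.append(int(i))
                last = i
        gaps = np.diff(picked)
        return [int(g) & 1 for g in gaps]         # LSB of each inter-click gap

    rng = np.random.default_rng(1)
    audio = 0.01 * rng.standard_normal(44100 * 5)          # 5 s of fake noise floor
    for pos in rng.choice(44100 * 5 - 10, size=20, replace=False):
        audio[pos:pos + 3] += 1.0                           # sharp fake clicks
    print(clicks_to_bits(audio))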
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 16:47:58 GMT From: rcktexas@ix.netcom.com (R. Knauer) Message-ID: <36af42cb.24504355@nntp.ix.netcom.com> References: <36AF4C86.18CC@stirred.com> Newsgroups: sci.crypt Lines: 20 On Wed, 27 Jan 1999 07:27:34 -1000, handWave <shaken@stirred.com> wrote: >Your sound card and software will need to comprehend the natural resonant >frequencies of the bottle and filter out that repeating soundwave. A >threshold for the clicks of a fall should be set to reject small resonant >sounds. Multiple bounces after a fall may have similarities to previous >bounces, so that should be rejected. I suspect that all classical processes, even chaotic ones, suffer from some kind of flaw as you describe. That's why I would only use quantum processes for a TRNG. Bob Knauer "No Freeman shall ever be debarred the use of arms. The strongest reason for the people to retain the right to keep and bear arms is, as a last resort, to protect themselves against tyranny in government." --Thomas Jefferson
Subject: Re: hardRandNumbGen Date: Wed, 27 Jan 1999 02:35:56 -1000 From: handWave <shaken@stirred.com> Message-ID: <36AF082C.3AA6@stirred.com> References: <36ae1907.10704108@news.io.com> Newsgroups: sci.crypt Lines: 39 Terry Ritter wrote: > > On Mon, 25 Jan 1999 04:44:51 -1000, in <36AC8363.6D55@complex.net>, in > sci.crypt ".·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·." > <real@complex.net> wrote: > > >On Sun, 24 Jan 1999 03:53:40 -1000, in <36AB25E4.2E7E@complex.net>, in > >sci.crypt <real@complex.net> sinewave wrote: > > > >>Terry Ritter wrote: > >>Easy calculations using the published results show that the effective > >>population of values is 1/4 the claimed ideal, which shows that the > >>design was not as good as you thought. > > > >Correct, that first version in that report had an XOR gate placed in > >a bad position, causing twice as many ones as zeros. The CIA alerted > >us to my mistake with that one gate. When removed, the results are > >much better. I still regret my mistake in that one gate placement. > > Thank you for the confirmation. Usually I hear about my mistakes. > > Apparently I was able to detect a problem in the design which your > team and famous consultant did not, but which the CIA apparently also > detected. > > In other words, my analysis was contrary to your beliefs and those of > your crypto expert, but was also correct. > > So why are you not listening to me now? I am listening. My silence for 24 hours does not mean I am not listening. And the consultant we used did not work on the RNG; he worked on the hash function only. Your messages are beginning to repeat themselves, so I am not responding to these repeated comments. Thank you for a rational discussion of this subject. If you post more on this subject, I will be reading, but not always responding. .·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·..·´¯`·.handWave
Subject: Re: hardRandNumbGen Date: Sun, 31 Jan 1999 15:23:49 GMT From: bo.doemstedt@mbox200.swipnet.se (Bo Dömstedt) Message-ID: <36b47461.856793353@nntpserver.swip.net> References: <36AC8363.6D55@complex.net> Newsgroups: sci.crypt Lines: 15 "someone" wrote: >Integrating a random number generator (RNG) on a commodity >IC is similar to a manned expedition to MARS: they must take >everything with them into that harsh environment that they will need. If the >craft is buffeted by periodic winds, they do not have the luxury of calling >back to base and saying... ...etc... Dear "someone", I would strongly recommend that you go to Mars at earliest possible time. As for random number generation, we sell them for $170... http://www.protego.se/sg100_en.htm Bo Dömstedt Protego Information AB
Subject: Re: hardRandNumbGen Date: Sun, 31 Jan 1999 12:04:45 -0500 From: "Kazak, Boris" <bkazak@erols.com> Message-ID: <36B48D2D.6B25@erols.com> References: <36b47461.856793353@nntpserver.swip.net> Newsgroups: sci.crypt Lines: 24 Bo Dömstedt wrote: > > "someone" wrote: > >Integrating a random number generator (RNG) on a commodity > >IC is similar to a manned expedition to MARS: they must take > >everything with them into that harsh environment that they will need. If the > >craft is buffeted by periodic winds, they do not have the luxury of calling > >back to base and saying... > ...etc... > Dear "someone", I would strongly recommend that > you go to Mars at earliest possible time. As for random number > generation, we sell them for $170... > http://www.protego.se/sg100_en.htm > > Bo Dömstedt > Protego Information AB -------------------------- Gnaediger Herr Bo Doemstedt! In all the hype that is there on the WWW page you mentioned, there is not a single word about the underlying physical phenomenon which produces "high level signal random output". Without understanding this, I will be reluctant myself, and will strongly discourage everybody from trusting your (T or P)RNG. Respectfully BNK
Subject: Re: hardRandNumbGen Date: Mon, 01 Feb 1999 17:31:14 GMT From: bo.doemstedt@mbox200.swipnet.se (Bo Dömstedt) Message-ID: <36b5e46a.951026902@nntpserver.swip.net> References: <36B48D2D.6B25@erols.com> Newsgroups: sci.crypt Lines: 34 "Kazak, Boris" <bkazak@erols.com> wrote: > Gnaediger Herr Bo Doemstedt! > In all the hype that is there on the WWW page you mentioned, >there is not a single word about the underlying physical phenomenon >which produces "high level signal random output". > Without understanding this, I will be reluctant myself, and will >strongly discourage everybody from trusting your (T or P)RNG. > Respectfully BNK Please excuse us if our server pages are not perfect. We use a noisy diode as our noise source. It has been extensively checked, and works well in practice. handWave <shaken@stirred3.com> wrote: >And if I buy one, it will be on its own in my environment. You will not >be there to protect it from my attacks. Mars-like temperatures, >carbon dioxide at 1/30 atmospheric pressure, >found on your warehouse shelf. It will be subjected to radio beacons at >close range. The correct operation of the SG100 device is continuously checked by a software driver. These attacks will, if successful, result in an error-message. The SG100 works well even if subject to strong RF-fields (30 V/m). >Will you be there to help it maintain its insanity, or will you be warm >and safe in Houston? You will be sipping your coffee while your lonely >RNG will be under hostile attack by Martians. Houston is on the other side of the Globe! Bo Dömstedt Protego Information AB Malmo, Sweden http://www.protego.se/sg100_en.htm
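The "continuously checked by a software driver" idea is generic enough to sketch; the block test below is just the classic continuous-output style check (no two consecutive blocks identical, no constant block) with made-up names, and is not a description of Protego's actual driver:

    # Generic continuous health check wrapped around a block-reading source.
    import os

    class SourceFailure(Exception):
        pass

    def checked_blocks(read_block, block_size=16):
        """Yield blocks from the source, failing loudly on obvious faults."""
        last = None
        while True:
            block = read_block(block_size)
            if block == last:
                raise SourceFailure("repeated block: source may be stuck")
            if len(set(block)) == 1:
                raise SourceFailure("constant block: source may be dead")
            last = block
            yield block

    # Stand-in for reading the hardware device:
    gen = checked_blocks(lambda n: os.urandom(n))
    print("first healthy block:", next(gen).hex())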
Subject: Re: hardRandNumbGen Date: Sun, 31 Jan 1999 07:53:56 -1000 From: handWave <shaken@stirred3.com> Message-ID: <36B498B4.6460@stirred3.com> References: <36b47461.856793353@nntpserver.swip.net> Newsgroups: sci.crypt Lines: 30 Bo Dömstedt wrote: > > "someone" wrote: > >Integrating a random number generator (RNG) on a commodity > >IC is similar to a manned expedition to MARS: they must take > >everything with them into that harsh environment that they will need. If the > >craft is buffeted by periodic winds, they do not have the luxury of calling > >back to base and saying... > ...etc... > Dear "someone", I would strongly recommend that > you go to Mars at earliest possible time. As for random number > generation, we sell them for $170... > http://www.protego.se/sg100_en.htm > > Bo Dömstedt > Protego Information AB And if I buy one, it will be on its own in my environment. You will not be there to protect it from my attacks. Mars-like temperatures, carbon dioxide at 1/30 atmospheric pressure, radiation of types not found on your warehouse shelf. It will be subjected to radio beacons at close range. If I become desperate, the RNG will be subjected to physical tampering and disassembly. Will you be there to help it maintain its insanity, or will you be warm and safe in Houston? You will be sipping your coffee while your lonely RNG will be under hostile attack by Martians. The randomness will be compromised, and you will not hear its screams as it goes from random to silent to Martian Language. Then will you be so quick to scoff at analogies? No, you will erect a cenotaph which says: We Loved RNG As If It Were Our Own, Now It Serves Another Master, Another Purpose.

Terry Ritter, his current address, and his top page.

Last updated: 1999-02-21