- Amplitude
- The signal level, or height.
- Augmented Doubles
- A term used in
population
estimation. If we take enough random
samples from a population, eventually we will find some value which
occurs again or repeats; this is a "double." As we continue to
draw values, eventually we will get a triple, which is more than
a simple double. Augmentation converts triples and higher-order
repetitions into the number of doubles which would have the same
probability. Then we can use the augmented-doubles value to estimate
the population with better accuracy than we might otherwise have. See
the article.
- Autocorrelation
- The similarity or
correlation between a sequence of
data and itself, at different offsets. Clearly, at zero offset,
any data sequence is maximally correlated to itself. But if one
version is rotated or given a circular shift of n positions, there
may not be much correlation at all, especially in noise signals.
The autocorrelation result is typically a correlation computed for
every possible sequence offset, with the data considered repetitive
or circular. Noise data are not repetitive, but the characteristics
of the generation tend to be constant and we can treat the result
as a circular array without much offense to the underlying
requirements.
- Autocorrelation Graph
- In the
autocorrelation
graph, the horizontal axis represents the
512 possible unique offsets between two copies of the same sequence
of 1024 values.
(Note that a circular shift or "rotation" of 1 place in one copy
produces the same comparison as a reverse 1 place rotation in the
other copy.)
The leftmost pictel column represents zero offset, which holds the
expected "spike" from the maximum possible correlation.
The vertical axis is bipolar, with zero plotted in a light gray
horizontal line. The "spike" at zero is scaled to 4096, which is
about 32x the plottable range of -128..127. The spike is thus well
"off scale" (plotted as 127 instead of 4096), and we see only the
center 1/32 of the scaled graph. Normally this will include the
structure we might want to see.
The autocorrelation values are computed here using
FFT techniques. The resulting FFT blocks
contain 1024 complex values in which each orthogonal or imaginary
component is zero, and each real component is repeated twice in
the array. The lowest 512 real results are averaged over each
FFT block in the sample, which reduces the effect of random
variations so we can see consistent patterns.
All of these
.WAV recordings display an autocorrelation
"ringing" which is clear through at least 8 elements, and presumably
continues. I have variously attributed this result to an unknown
digital filter in the sampling hardware or the apparent correlation
inherent in the uneven distribution. Since lower values are most
probable, we can expect that "clumps" of lower values may occur
together, thus providing an apparent correlation between values.
We can investigate the alternative of an apparently flat
distribution by taking values from a uniform random number
generator, and then performing the same computation and plotting
process. In this case we get a very peculiar "noise" distribution,
and the autocorrelation spike is present only at offset zero.
This is consistent with a correct computation and plot.
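As a rough illustration of the FFT-based computation just described, here is a minimal Python/numpy sketch of my own (not the program used for these graphs): each 1024-sample block is transformed, the power spectrum is inverse-transformed to give the circular autocorrelation at every offset, and the lowest 512 lags are averaged over all blocks.

```python
# Minimal sketch of circular autocorrelation via FFT, averaged over blocks.
# This is an illustration only, not the author's actual code.
import numpy as np

BLOCK = 1024

def averaged_autocorrelation(samples):
    """Average the lowest 512 circular-autocorrelation lags over all full blocks."""
    nblocks = len(samples) // BLOCK
    acc = np.zeros(BLOCK // 2)
    for b in range(nblocks):
        x = np.asarray(samples[b * BLOCK:(b + 1) * BLOCK], dtype=np.float64)
        spectrum = np.fft.fft(x)
        # Inverse FFT of the power spectrum is the circular autocorrelation;
        # for real data it is real and symmetric, so only 512 lags are unique.
        r = np.fft.ifft(spectrum * np.conj(spectrum)).real
        acc += r[:BLOCK // 2]
    return acc / nblocks

# With uniform random values the spike appears only at offset zero.
rng = np.random.default_rng(1)
print(averaged_autocorrelation(rng.uniform(-1.0, 1.0, 64 * BLOCK))[:4])
```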
- Average Deviation
- The absolute value of the difference from the
mean for each data value, summed, then
divided by the number of values. A somewhat more robust estimator
of the so-called "second moment" than the
standard deviation.
- Correlation
- Generally speaking, a similarity between data; the extent
to which data are related.
Typically, some sort of dependence relationship (not necessarily
linear) between two data sequences, or mutual dependence on some
other sequence. Also see the discussion under
white noise.
- dB
- decibel.
- Decibel
- Ten times the base-10 logarithm of the ratio of two
power values. Denoted by dB.
- Entropy
- The Shannon computation of the same name.
The probability of finding a particular symbol, times the
base-2 logarithm of that probability, summed over all symbols, and
negated.
A measure of coding efficiency, in bits (binary digits) of
information per bit of data.
A distinctly different concept than the entropy of physics, where
it represents disorder.
In noise characterization we count every occurrence of all 65,536
possible bipolar 16-bit data values, and get the sample probability
for each value out of the total number of data values.
The entropy computation measures coding efficiency, and not
unknowable randomness as often claimed.
For example, if we measure the entropy of a counting sequence,
we will get a high value.
But any counting sequence is controlled by a relatively tiny
amount of arbitrary internal state.
And a counting sequence is completely predictable given the
design of the counter and any one result.
Clearly, entropy does not reflect the essential weakness of a
cryptographically poor generator.
In noise work, we expect noise sample values to occur in a
normal distribution.
That distribution amounts to a form of coding, thus limiting the
amount of information per data bit.
That sets an expected entropy value for good noise sequences,
quite independent of other issues.
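A minimal Python sketch of that computation might look like the following; it is my own illustration (assuming numpy), reporting entropy in bits per 16-bit sample, so dividing by 16 gives bits of information per data bit.

```python
# Shannon entropy over the 65,536 possible 16-bit sample values (illustration only).
import numpy as np

def shannon_entropy_bits(samples):
    """Entropy in bits per 16-bit sample, from observed value frequencies."""
    values = np.asarray(samples, dtype=np.int64) + 32768    # map -32768..32767 to 0..65535
    counts = np.bincount(values, minlength=65536)
    p = counts[counts > 0] / counts.sum()                   # sample probability of each value seen
    return float(-(p * np.log2(p)).sum())

# A simple counting sequence scores the full 16 bits, yet is completely predictable.
print(shannon_entropy_bits(np.arange(-32768, 32768)))       # 16.0
```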
- Expected Maxima
- The number of
maxima expected from
white-noise theory. If we assume an upper frequency limit of
20 kHz, we expect about 15,492 maxima per second.
Ideally, we do expect that any sampling system will have a strong
anti-alias filter to limit signal frequencies to below the Nyquist
limit. For audio, we might well expect this to occur between the
20 kHz we use as the usual audio upper limit, and the
22.05 kHz Nyquist limit implied by the 44.1 ks/sec
CD-quality sampling rate. Unfortunately, it appears that the sound
card used here does not have an anti-aliasing filter, or at least not
one with a strong cut-off. And while it would be easy enough to get
a much better sound card, that would not be representative of the
equipment available on most computers.
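For what it is worth, the 15,492 figure is consistent with Rice's classic result for ideal low-pass Gaussian noise, in which the expected rate of maxima is the cutoff frequency times the square root of 3/5; identifying the quoted figure with that formula is my own assumption. A small check, plus a counter for the Maxima statistic defined below:

```python
# Compare the quoted 15,492/sec figure with fc * sqrt(3/5) (assumed Rice result),
# and count local maxima in sampled data. Illustration only.
import numpy as np

fc = 20_000.0                       # assumed upper frequency limit, Hz
print(fc * np.sqrt(3.0 / 5.0))      # ~15491.9 expected maxima per second

def count_maxima(x):
    """Count samples that exceed both neighbors (the Maxima statistic)."""
    x = np.asarray(x, dtype=np.float64)
    return int(np.sum((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])))
```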
- Filter
- One interesting opportunity in these noise experiments was
the possibility that some small "data fix-up" process could improve
the resulting statistics. Various approaches were implemented,
including:
- raw: the unmodified data from the
.WAV file
- diff from mean: the
mean is subtracted from each data value.
Intended to force the subsequent mean to zero, thus eliminating
simple positive or negative bias.
- diff from prev: the previous data value is subtracted
from the current value. This gives the difference between samples,
which reduces several forms of bias. But we can also see this as a
sort of digital filter which necessarily affects frequency response.
- statistical RNG: a random value is acquired from a
source which is known to be
white and flat. We see, however, that
this is very different from the usual noise source.
In the end, it appears that the differential mode (diff from prev)
can provide a substantial improvement for many lower-quality sources.
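The fix-up modes listed above amount to very small transformations of the sample sequence; a hypothetical Python version might look like this (the names and details are illustrative only, not the author's code):

```python
# Illustrative "data fix-up" options: raw, diff from mean, diff from prev.
import numpy as np

def fix_up(samples, mode="raw"):
    x = np.asarray(samples, dtype=np.float64)
    if mode == "raw":
        return x                      # unmodified .WAV data
    if mode == "diff from mean":
        return x - x.mean()           # force the subsequent mean to zero
    if mode == "diff from prev":
        return np.diff(x)             # difference between successive samples
    raise ValueError("unknown mode: " + mode)
```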
- FFT
- Fast Fourier Transform. A numerically advantageous way of
computing a discrete Fourier Transform.
Basically a way of transforming information from a sequence of
waveform amplitude values as sampled periodically through time,
into the equivalent information in amplitude values at discrete
frequencies.
The FFT performs this transformation in time proportional to
n log n, where the block size n is typically a power of 2.
The basic FFT transformation is from complex-value (real plus
orthogonal or imaginary) elements to complex frequency values.
Various "tricks" can be used to decrease computation when only
real-value data are used, or when the number of values is not
a power of 2. Here we use 1024 values, each with an imaginary
component of zero, and so ignore the various FFT "tricks."
One deceptive aspect of the FFT is that it produces results
only at particular, essentially arbitrary, discrete frequencies.
The result values are not, as one might expect, the simple
accumulation of all energy near each particular frequency.
Instead, the result values are how the original waveform could be
reproduced using energy only at our selected frequencies.
Significant signals between our selected frequencies can produce
somewhat ambiguous results.
A restrictive requirement of the FFT is that the data be
repetitive or circular with a period of exactly the FFT block length.
This is often violated in practice, and various "windowing
functions" are introduced to minimize resulting errors. The issue
is much less important in noise work, and windowing functions
have little effect on noise results.
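Because the data here are real-valued (imaginary components zero), the transform has a conjugate symmetry that the graph entries below rely on; this short numpy sketch of my own demonstrates it for a 1024-value block.

```python
# Conjugate symmetry of the FFT of real-valued data (illustration only).
import numpy as np

x = np.random.default_rng(2).normal(size=1024)          # real block; imaginary parts zero
X = np.fft.fft(x)                                       # 1024 complex frequency values
print(np.allclose(X[1:512], np.conj(X[513:][::-1])))    # True: upper half mirrors lower half
print(abs(X[0].imag) < 1e-9, abs(X[512].imag) < 1e-9)   # DC and Nyquist bins are purely real
```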
- FFT Graph
- The
FFT
graph displays the amplitude of
data at various discrete frequencies. In this case we use
.WAV files which record a 16-bit bipolar
noise amplitude at the CD rate of 44,100 samples (or data
elements) per second.
Our FFT has a fixed block size of 1024 complex values, which
each have a zero orthogonal or "imaginary" component. That block
transforms into 511 complex values repeated twice, plus zero and
the highest frequency each once. We convert each complex value to
real magnitude and use 512 results.
We are measuring random-like data which will vary widely from
one FFT block to another. To gain a systematic view of the results,
we take the average magnitude for any particular frequency over all
FFT blocks included in our sample size. Increasing the sample size
thus tends to decrease the variation seen in the result (see
standard error).
The leftmost column in the display (zero) corresponds to zero
"frequency," which is the DC bias. Each successive column to the
right represents a frequency increase of
44,100 / 1024 = 43.066 Hz. This makes column 511
correspond to 22,007 Hz.
Although frequency is traditionally plotted on a logarithmic
scale, the frequency increments in FFT results are linear. If we
plotted FFT results on a log scale, we might want to interpolate
the few low-frequency values to fill the left-side pixels, and
discard many of the high-frequency values which would occur in
the same right-side pixels. That would be both deceptive and
wasteful; accordingly, frequency is plotted linearly.
Frequency is calibrated with a series of vertical lines in a
1, 2, 5 pattern, in which red represents the start of a new decade
(1) and green represents the next two steps (2 and 5). The three
vertical red lines on the FFT graph represent 100 Hz at the left,
then 1 kHz and 10 kHz to the right. The rightmost green line
represents 20 kHz.
Amplitude is calibrated with a series of horizontal lines, also
in a 1, 2, 5 pattern. The display is normalized to the average of
the 443 magnitude averages between 1 kHz and
20 kHz, where we draw a light gray horizontal line.
Two green lines surround this, representing +/- 0.5 dB
change; then two red lines represent +/- 1.0 dB and
another green pair represent +/- 2.0 dB.
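The plotted magnitudes could be produced along these lines (a minimal sketch of my own, not the author's program), with column k corresponding to k × 44,100 / 1024 Hz:

```python
# Block-averaged FFT magnitudes and the bin-to-frequency mapping (illustration only).
import numpy as np

RATE, BLOCK = 44_100, 1024

def averaged_magnitudes(samples):
    nblocks = len(samples) // BLOCK
    blocks = np.reshape(samples[:nblocks * BLOCK], (nblocks, BLOCK)).astype(np.float64)
    mags = np.abs(np.fft.fft(blocks, axis=1))[:, :BLOCK // 2]   # keep the 512 unique bins
    return mags.mean(axis=0)                                    # average over all blocks

bin_hz = RATE / BLOCK             # 43.066 Hz per column
print(bin_hz, 511 * bin_hz)       # ~43.07 Hz, ~22007 Hz for the rightmost column
```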
- Frequency
- In general, the number of repetitions or cycles per
second.
Specifically, the number of repetitions of a sine wave signal per
second: A signal of a single frequency is a sine wave of that
frequency.
Any deviation from the sine waveform can be seen as components of
other frequencies, as described by an
FFT.
Now measured in Hertz (Hz); previously called cycles-per-second
(cps).
- Highest Repetition
- In these noise measurements, the most times that any one
16-bit bipolar data value occurs.
- Kurtosis
- An expression of the so-called "fourth moment." A tendency
for a distribution to form a sharp narrow peak in the center
(or, when negative, a broad flat plateau). Also see
standard deviation and
skew.
- Maxima
- The number of local maxima. The count of data values where
both the adjacent values (predecessor and successor) are lower.
- Maximum Value
- The largest bipolar 16-bit integer value in the data.
Typically positive. Ideally, the maximum value would be close to
-- but not quite reach -- the largest possible positive recorded
value of +32,767.
- Mean
- The average; between the extremes.
The sum of all values in the data, divided by the number of values.
This is the sample mean, a value we can actually measure and
compute, which approximates the
population
mean which we generally cannot measure. But we do expect the
population mean to be within "a few multiples" of the
standard error
of the computed sample mean value.
The mean is a characterization of the distribution's "central
value." It is most useful for those distributions which rise
to a maximum in the center, as do most noise results. Also see
average deviation and
standard deviation.
Since we have bipolar noise values, our mean is typically near
zero.
- Minima
- The number of local minima. The count of data values where
both the adjacent values (predecessor and successor) are higher.
- Minimum Value
- The smallest bipolar 16-bit integer value in the data.
Typically negative. Ideally, the minimum value would be close to
-- but not quite reach -- the largest possible negative recorded
value of -32,768.
- Noise Graph
- Each graph contains exactly 512 horizontal "picture elements"
(pictels) or columns (0..511), and 256 vertical pictels or rows
(0..255). On some graphs, we plot bipolar horizontal data
on the range (-256..-1, 0..255). Similarly, we may plot bipolar
vertical data on (-128..-1, 0..127). In most bipolar cases we plot
a light gray line across the graph at zero.
There is also a single-pictel-width white border around the
graph, making the overall display (with border) 514 x 258 pixels.
This is the size of the display image.
Each graph has a dark-gray grid positioned every 8 pictels,
starting at zero zero (the leftmost pictel column and the bottom
pictel row).
By using the grid, and a graphics program which can expand the
display, it is relatively easy to read the exact coordinates of
any point plotted on the graph.
Also see:
FFT graph,
autocorrelation graph, and
normal graph.
- Normal Distribution
- The usual "bell shaped" distribution which may or may not be
due to Carl Friedrich Gauss (1777-1855). Called "normal" because
it is similar to many real-world distributions. Note that
real-world distributions can be similar to normal, and
still differ from it in serious systematic ways. Also see the
normal computation page.
"The" normal distribution is in fact a family of distributions,
as parameterized by
mean and
standard deviation values. By computing
the sample mean and standard deviation, we can reduce the whole
family into a single curve. A value from any normal-like
distribution can be "normalized" by subtracting the mean and
dividing by the standard deviation; the result can be used to
look up probabilities in standard normal tables. All of which
of course assumes that the underlying distribution is in fact
normal, which may or may not be the case.
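As a small worked example of that normalization (my own, using the standard normal CDF available in the Python math library rather than a printed table):

```python
# Normalize a value and look up the standard normal probability (illustration only).
import math

def z_score(value, mean, sd):
    return (value - mean) / sd

def standard_normal_cdf(z):
    """Probability that a standard normal variate falls below z."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# One standard deviation above the mean: about 84% of values fall below it.
print(standard_normal_cdf(z_score(1.0, 0.0, 1.0)))   # ~0.8413
```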
- Normal Graph
- The normal
graph displays the "sample distribution"
or number of occurrences of each 16-bit data value over the entire
sample. Ideally, these counts will occur in a manner similar to
what we find in the "normal" statistical distribution.
The horizontal axis is bipolar and calibrated in
standard deviations,
which here range from -4 to +4. The
mean (sd = 0) is given
a light gray vertical line, while sd values 1..3 and -1..-3 are
plotted as green vertical lines. The ideal
normal distribution
(for the computed mean and sd) is plotted in red.
Since the horizontal axis covers 8 standard deviations in 512
pictels, each standard deviation covers exactly 64 pictels. Each
pictel-width is thus 1/64 sd; each pictel generally
covers multiple
data sample value counts. Although both the mean and sd are quite
unlikely to be integer values, their basis is still the bipolar
integer recorded data values. We can thus define the real-value
range of a pictel as being between part of one value-count and
part of another, the enclosed sum being what we plot.
The pictel centered on the mean value (that is,
+/- sd/128) is
scaled to vertical height of 199.45, which is about 500 times the
ideal normal curve at mean.
- Peak
- The maximum value. Here we specifically compare to
RMS, and so average the magnitudes of the
maximum and
minimum
values to get the effective peak.
- Pink Noise
- A random-like signal in which -- ideally -- power is
proportional to the inverse of
frequency, or 1/f. At twice the
frequency, we would expect half the power, which is a 3
dB decrease. This is a frequency-response
slope of -3 dB / octave, or
-10 dB / decade.
As opposed to
white noise, which has the same level
at all frequencies, pink noise has more low-frequency or "red"
components, and so is called "pink."
A common single-stage R-C low-pass filter has half the output
voltage at twice the frequency. But this is actually one-quarter
the power and a -6 dB / octave slope, which might be
termed more "red" than "pink."
For ideal pink noise, the desired voltage ratio per octave is
one over the square root of two, or about 0.707.
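A quick check of those slopes in decibel terms (my own arithmetic, for illustration):

```python
# Octave slopes expressed in dB (illustration only).
import math

print(20 * math.log10(1 / math.sqrt(2)))   # ~ -3.01 dB/octave: ideal pink noise (half power)
print(20 * math.log10(0.5))                # ~ -6.02 dB/octave: simple R-C low-pass (half voltage)
```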
- Population
- The total number of unique values. When some values are more
probable than others, the effective population approaches the
number of the more probable values.
The unknown universe of values which we can only estimate by
sampling.
- Range
- The number of possible values, from the
minimum to the
maximum.
- RMS
- The root-mean-square. The square root of the average (or
mean) of the
square of each data value. Because energy varies as the square
of amplitude, RMS tracks the amount of energy in a complex signal.
In noise data, the RMS value should be approximately the same as the
standard deviation.
- Skew
- An expression of the so-called "third moment." A tendency
for a distribution to lean to the right (or left, when negative),
showing a fast dip from the central mean but having an extended
tail. Also see
standard deviation and
kurtosis.
- Standard Deviation
- The square of the difference from the
mean for each data value, summed,
divided by one less than the number of values, then
square-rooted. An expression of the so-called "second moment,"
which describes the "dispersion" or variability around the mean.
With a
normal distribution
as we expect from most noise sources, about 68% of our data
values should be within +/- 1 standard deviation about
the mean.
The square of the standard deviation is the
variance. In noise data, the standard
deviation should be approximately the same as the
RMS.
- Standard Error
- The standard error of the mean. The
standard deviation
divided by the square root of the number of data values.
The extent to which we expect the sample
mean
to differ (+/-) from the
population mean.
The more data we have, the smaller this range becomes; but to get
10x the precision, we need 100x as much data.
- Unique Values
- In these noise tests, the number of unique 16-bit data values
found in the sample. This is not the number of data elements, but
rather the number of different values in the sample. At most we
could have 65536 different 16-bit values, but if that happened, we
would be justifiably concerned that we did not capture peak values
outside the recording range.
- Variance
- The square of the difference from the
mean for each data value, summed and
divided by one less than the number of values. An expression of
the so-called "second moment" which describes the variability
around the mean. The square root of the variance is the
standard deviation.
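The moment statistics defined in the entries above (mean, average deviation, variance, standard deviation, RMS, skew, kurtosis, and standard error) can be computed along these lines. This is a minimal sketch of my own; the skew and kurtosis normalizations shown are common choices rather than anything stated in this glossary.

```python
# Basic noise statistics for a sequence of samples (illustration only).
import numpy as np

def noise_statistics(samples):
    x = np.asarray(samples, dtype=np.float64)
    n = len(x)
    mean = x.mean()
    dev = x - mean
    avg_dev = np.abs(dev).mean()               # average deviation
    variance = (dev ** 2).sum() / (n - 1)      # n-1 divisor, as defined above
    sd = np.sqrt(variance)                     # standard deviation
    rms = np.sqrt((x ** 2).mean())             # root-mean-square
    skew = ((dev / sd) ** 3).mean()            # third moment (one common normalization)
    kurtosis = ((dev / sd) ** 4).mean() - 3.0  # fourth moment, in "excess" form (assumed)
    std_err = sd / np.sqrt(n)                  # standard error of the mean
    return dict(mean=mean, avg_dev=avg_dev, variance=variance, sd=sd,
                rms=rms, skew=skew, kurtosis=kurtosis, std_err=std_err)

print(noise_statistics(np.random.default_rng(3).normal(0.0, 1000.0, 100_000)))
```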
- .WAV File
- A type of file used for sound storage. The structures in .WAV
files can represent a wide range of data and sampling rates, but
only simple "canonic" files are used here. In these, the data
are 16-bit bipolar integer samples (or data elements) starting at
byte 44 in the file. These data represent monaural noise amplitude
values sampled 44,100 times per second.
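For the simple layout described here, the samples can be pulled out directly; the sketch below (mine, not part of the measurement software) simply skips the 44-byte header rather than parsing the chunk structure, which is safe only for exactly this canonic layout.

```python
# Read 16-bit little-endian mono samples from a canonic .WAV file (illustration only).
import numpy as np

def read_canonic_wav(path):
    with open(path, "rb") as f:
        f.seek(44)                                    # skip the 44-byte canonic header
        return np.frombuffer(f.read(), dtype="<i2")   # bipolar 16-bit samples, 44,100/s
```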
- White Noise
- A random-like signal with a flat
frequency spectrum.
Doubling the bandwidth doubles the noise power and increases
RMS noise voltage by the square root of two.
As opposed to
pink noise, in which the frequency
spectrum drops off with frequency. White noise is analogous to
white light, which contains every possible color.
White noise is normally described as a relative power density in
volts squared per hertz of bandwidth.
White noise power varies directly with bandwidth, so white noise
would have twice as much power in the next higher octave as in the
current one. The introduction of a white noise audio signal can
destroy high-frequency loudspeakers.
The definition of white noise as a random signal having a flat
frequency spectrum is very common. However,
frequency is defined in terms of a
continuous sine wave, and that sort of
correlation is something we do not
expect in noise.
Instead, we expect noise to be the result of multitudes of
independent and unpredictable quantum pulses or actions.
From simple random variation we do expect that any particular sample
set or random sequence might seem to have some correlation.
But we do not expect random effects to be repeatable or to have
a clear and complex structure.
So if we have structured results, or get similar results again,
we are looking at true correlations.
And if correlated frequency energy exists, individual noise
samples cannot be considered completely independent.