The generic code of the program looks like this:
    DO
        calculate dpv                         ' calculate delta pv
        If direction = "up" Then
            pv = pv + dpv
            If pv > 3 Then direction = "down"
        Else
            pv = pv - dpv
            If pv < -3 Then direction = "up"
        End If
    LOOP

The two ON OFF values are 3 and -3.
[Figure: the pv value plotted step by step, oscillating between the two ON OFF values +3 and -3 (dashed lines). The bottom line of the figure contains the delta pv values: 3 3 6 5 2 8 3 6 3 8 7 1 6 3.]
The first reason is that the process used to calculate the delta pv value is itself more or less random. The value is initially the number of counts needed to detect a change of the timer, which ticks every 16 ms. This count is roughly between 750 and 850, can be much lower depending on system load, and also depends on the PC used. If you take that number mod 10 (plus 1) you get a value between 1 and 10, which is completely unpredictable. This whole process resembles the "Schrödinger's Cat" paradox, which is in essence a demonstration of the half-life of a radioactive element.
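As an illustration, here is a minimal sketch of such a delta pv calculation, assuming VBA and its Timer function (whose resolution on Windows is roughly 16 ms); the function name CalcDpv is hypothetical, not necessarily the program's actual routine:

    ' Count how often the loop runs before the timer value changes,
    ' then map that count to a value between 1 and 10.
    Function CalcDpv() As Integer
        Dim t0 As Single
        Dim counts As Long
        t0 = Timer
        Do While Timer = t0          ' spin until the ~16 ms timer tick
            counts = counts + 1
        Loop
        CalcDpv = (counts Mod 10) + 1
    End Function

The count depends on the CPU speed and on the system load at that exact moment, which is why the result is unpredictable.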
The second reason is that the outcome of the ON OFF process is also unpredictable. The reason is what is called the deadband. In the above example the deadband is 9 (4+4+1) and the maximum delta pv value is 8. The closer the maximum delta pv is to the deadband, the more the process value can change, increasing the unpredictability.
The following figure demonstrates 3 possible scenarios with a total deadband of 9 and a maximum delta pv of 4. Only half of the results are shown.
[Figure: three possible scenarios of the pv value with a total deadband of 9 and a maximum delta pv of 4. The 3 scenarios are identified with the letters X, Y and Z.]
In short, what the tests show is that an ON OFF value of 2 gives the best results, meaning that the final value of each experiment can be a 0 or a 1 with the same probability.
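To make this concrete, here is a minimal simulation sketch, assuming VBA; the uniform dpv between 1 and 10, the random experiment length, and the rule that the resulting bit is 1 when pv ends up positive are illustrative assumptions, not necessarily the program's actual method:

    ' One experiment: run the ON OFF loop a random number of steps
    ' and return the sign of pv as the resulting bit.
    Function OneExperiment(onoff As Integer) As Integer
        Dim pv As Integer, dpv As Integer
        Dim steps As Long, i As Long
        Dim direction As String
        pv = 0
        direction = "up"
        steps = 50 + Int(Rnd * 50)   ' random experiment length
        For i = 1 To steps
            dpv = Int(Rnd * 10) + 1
            If direction = "up" Then
                pv = pv + dpv
                If pv > onoff Then direction = "down"
            Else
                pv = pv - dpv
                If pv < -onoff Then direction = "up"
            End If
        Next i
        If pv > 0 Then OneExperiment = 1 Else OneExperiment = 0
    End Function

Calling Randomize once and then OneExperiment(2) many times should give roughly as many 0's as 1's.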
As already mentioned before, the only thing you can show is that a sequence of bits is not random. Unfortunately the document does not demonstrate that point and IMO lacks a certain rigour. The document contains 15 tests. With each test there is an example to demonstrate that the test is implemented correctly. That is okay. What it lacks is a demonstration of what each test's specific function is. In fact, IMO, each test serves as a filter to capture non random bit patterns. That means for example that with test 5 you need a specific non random test pattern which fails at least at test 5, implying that the pattern is non random. For test 6 you need a different non random pattern which fails specifically at test 6, and so on.
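As an illustration of this filter idea, here is a minimal sketch, assuming VBA, of a non random pattern that passes the frequency test but should be caught by the runs test (paragraph 2.3 of the document): the alternating string 0101... contains exactly as many zeros as ones, yet it has n runs where a random string is expected to have about n/2:

    ' Build the alternating pattern 1010... and count its runs.
    Sub FilterExample()
        Dim s As String, i As Long, runs As Long, n As Long
        n = 1000
        For i = 1 To n
            s = s & CStr(i Mod 2)
        Next i
        runs = 1
        For i = 2 To n
            If Mid(s, i, 1) <> Mid(s, i - 1, 1) Then runs = runs + 1
        Next i
        Debug.Print "observed runs:"; runs; "expected about:"; n / 2
    End Sub

The frequency test sees a perfect 50/50 balance and passes, while the runs test flags the excess of runs; in that sense each test filters out its own class of non random patterns.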
In paragraph 2.14.8 the expansion of "e" was also tested. At page 2-37 we read:
In paragraphs 2.7.8 and 2.9.8 the test string consists of the "2^20 bits produced by the G-SHA-1 generator". Here too the conclusion is that the sequence was random. Again, IMO, the same objection holds: the methodology used is wrong. Physical systems should be used.
In test 31 the expansion of e is tested and in test 32 the expansion of pi. Test 31 fails at test 3, and test 32 fails at test 2.10 and test 2.14.
At page 1.1 of the NIST document we read:
At page 1.2 of the NIST document we read:
There are 5 Benchmarks performed in sheet "Benchmark", identified as Test 0, Test -1, Test -2, Test -3 and Test -4.
The results of the 3 Benchmarks Test 0, Test -1 and Test -2, using 100 strings for each (50 for Test 0), show no significant differences. That means that the tests are not accurate enough to distinguish between a sequence of numbers generated by a physical process and a sequence generated by a pseudo random number generator.
The accuracy of the RND function is 2^24 distinct numbers. The purpose of Test -3 and Test -4 is to see what happens when you decrease this number. The accuracy of Test -3 is 2^14 and of Test -4 is 2^12.
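A minimal sketch of such a reduced accuracy generator, assuming VBA and its built-in RND function; the name RndReduced is hypothetical:

    ' Rnd has about 2^24 distinct values; rounding it down to a grid
    ' of 2^bits values simulates a generator with reduced accuracy.
    Function RndReduced(bits As Integer) As Single
        RndReduced = Int(Rnd * 2 ^ bits) / 2 ^ bits
    End Function

RndReduced(14) corresponds to Test -3 and RndReduced(12) to Test -4.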
The results of Benchmark tests -3 and -4 are very important.
Back to the start page: Index of this Document
Description and Operation: sheet "NIST"
The purpose of sheet "NIST" is to test the performance of the random number generator using the guide lines described in the "Special Publication 800-22 Revision 1a" isued by the National Instutute of Standards. See Documentation
Sheet "NIST" consists of 7 targets: Nist, Norm Dist, Comp Error, Gamma, Inc Gamma, Rank and Support.
Each time when the "Nist" target is selected all the 15 tests are performed on a certain test string of zero's and one's
Description and Operation: sheet "Benchmark"
The purpose of sheet "Benchmark" is:
In order to execute the program select the target "Benchmark". The program
One cycle of test 0 takes approximately 42 minutes.
The Seed of Test -2 (RND function) is the test counter.
For each test there are 4 lines with results.
Each time the program is executed the previous results are not cleared (initialized).
Important: The program stops when any value is modified.
For the latest results see here: Benchmark
The first row shows the decimal number. The word End is used to terminate the string.
The second row contains the binary number.
In order to execute the program select the target "Convert"
The e value comes from:
http://members.home.nl/evwinsen/wiskunde/epagina.htm
The pi value comes from: http://nl.wikipedia.org/wiki/Pi_(wiskunde)
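The conversion from the decimal number in the first row to the binary number in the second row can be done by repeatedly doubling the decimal fraction: the carry out of the leftmost digit is the next binary digit. A minimal sketch, assuming VBA; the function name DecimalFractionToBits is hypothetical (not the program's actual routine) and the fractional digits are assumed to be stored in a string like "7182818284...":

    Function DecimalFractionToBits(digits As String, nBits As Long) As String
        Dim d() As Integer, i As Long, k As Long, carry As Integer
        ReDim d(1 To Len(digits))
        For i = 1 To Len(digits)
            d(i) = CInt(Mid(digits, i, 1))
        Next i
        For k = 1 To nBits
            carry = 0
            For i = Len(digits) To 1 Step -1   ' double the decimal fraction
                d(i) = d(i) * 2 + carry
                carry = d(i) \ 10
                d(i) = d(i) Mod 10
            Next i
            ' the carry out of the leftmost digit is the next bit
            DecimalFractionToBits = DecimalFractionToBits & CStr(carry)
        Next k
    End Function

For example, DecimalFractionToBits("5", 3) returns "100", because 0.5 decimal is 0.1 binary.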
In order to execute the program select the target "2.5"
See chapter 3.5 for details. IMO chapter 3.5 does not give the statistics for a 32 by 32 matrix. Instead it gives the details for a 3 by 3 matrix.
In the program the statistics are calculated for a 3 by 3, 4 by 4 and 5 by 5 matrix.
There are in total 2^9 3 by 3 matrices. For each of those matrices the Rank value is calculated. The same is done for the 2^16 matrices of 4 by 4 and the 2^25 matrices of 5 by 5.
The results show that in the case of a 3 by 3 matrix the probability of a full rank matrix (i.e. with rank 3) is 0.1953. In the case of a 4 by 4 this is 0.1166 and in the case of a 5 by 5 this is 0.0667. Chapter 3.5 shows a different result.
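For reference, chapter 3.5 computes the rank over GF(2), i.e. with binary arithmetic. A minimal sketch, assuming VBA, with the matrix packed into a Long where bit r*size+c holds element (r,c); the function name RankGF2 is hypothetical:

    ' Gaussian elimination over GF(2): row operations are XOR.
    Function RankGF2(m As Long, size As Integer) As Integer
        Dim rows() As Long
        Dim r As Integer, c As Integer, p As Integer, rank As Integer
        Dim tmp As Long
        ReDim rows(0 To size - 1)
        For r = 0 To size - 1                  ' unpack the rows
            rows(r) = (m \ (2 ^ (r * size))) Mod (2 ^ size)
        Next r
        rank = 0
        For c = 0 To size - 1                  ' eliminate column by column
            p = -1
            For r = rank To size - 1
                If (rows(r) \ (2 ^ c)) Mod 2 = 1 Then
                    p = r
                    Exit For
                End If
            Next r
            If p >= 0 Then
                tmp = rows(rank): rows(rank) = rows(p): rows(p) = tmp
                For r = 0 To size - 1
                    If r <> rank Then
                        If (rows(r) \ (2 ^ c)) Mod 2 = 1 Then rows(r) = rows(r) Xor rows(rank)
                    End If
                Next r
                rank = rank + 1
            End If
        Next c
        RankGF2 = rank
    End Function

Enumerating all 2^9 values of m with RankGF2(m, 3) gives 168 full rank matrices out of 512, i.e. (1 - 1/2)(1 - 1/4)(1 - 1/8) = 21/64 ≈ 0.3281; if the program computes the rank over the real numbers instead, that could explain why its fractions differ from chapter 3.5.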
In order to execute the program select the target "2.10"
See chapter 3.10 for details.
The basic string length is 13 bits. There are in total 2^13 of those bit strings. For each of those bit strings the Li value is calculated.
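The Li value of a bit string is its linear complexity: the length of the shortest linear feedback shift register that generates the string. Chapter 3.10 computes it with the Berlekamp-Massey algorithm; here is a minimal sketch of that algorithm, assuming VBA with the bits stored in a 0-based Integer array of 0's and 1's:

    Function LinearComplexity(bits() As Integer, n As Long) As Long
        Dim c() As Integer, b() As Integer, t() As Integer
        Dim L As Long, m As Long, i As Long, j As Long, d As Integer
        ReDim c(0 To n): ReDim b(0 To n): ReDim t(0 To n)
        c(0) = 1: b(0) = 1
        L = 0: m = -1
        For i = 0 To n - 1
            d = bits(i)                      ' discrepancy
            For j = 1 To L
                d = d Xor (c(j) * bits(i - j))
            Next j
            If d = 1 Then
                t = c                        ' save c before updating
                For j = 0 To n - (i - m)
                    c(j + (i - m)) = c(j + (i - m)) Xor b(j)
                Next j
                If L <= i \ 2 Then
                    L = i + 1 - L
                    m = i
                    b = t
                End If
            End If
        Next i
        LinearComplexity = L
    End Function

For the strings of this test n is 13; for example the string 0000000000001 has Li = 13, while 1111111111111 has Li = 1.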
Nist report 800-22 Evaluation
The question can be raised how important the Nist document is.
To be more specific, IMO the 15 tests in the document cannot be used to decide if the ON OFF process is either random or non random. IMO if one or more tests fail, then that is more an indication that the tests are "wrong" than that the process is a non random process.
As such I do not understand why certain details (the measurement process) of the two trapped Yb qubits are considered not important (detection error < 3% - see page 1023).
In the paragraphs 2.5.8, 2.8.8, 2.10.8, 2.11.8, 2.13.8 and 2.15.8 the test string consists of "the first 1,000,000 binary digits in the expansion of e". The conclusion of those 6 tests is that the sequence was random. IMO that conclusion is not appropriate and conceptually wrong. More importantly, the expansion of e should not be used to validate or calibrate the 15 tests.
Unfortunately the document does not support this methodology.
Why the words "Ironically" and "appear"?
Nist report 800-22 Evaluation - part 2
Reading and studying the document, one sees that it tries to cover two areas:
What the document should do is discuss only the second type of generators and try to clarify what to do in order to classify those as non random. That means specifying a certain number of tests such that if one fails, the sequence is non random.
Benchmark Evaluation
For the latest results see here: Benchmark
Each of these Benchmarks produces multiple strings of 0's and 1's with a fixed length of 2500 bits.
The idea behind the Benchmark is to mathematically unravel this difference by investigating the strings using the 15 tests explained in the Nist document. Tests 3 and 3.1 are special tests developed by the author.
The biggest differences are in Test 2.6 and Test 3.
The details of the test results are here: Test 2.6
The purpose of Test 3 is to count the runs of consecutive 0's and 1's and compare them with the expected counts.
The details of the test results are here: Test 3
The test results show that if you decrease the accuracy, then the longer runs are missing.
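A minimal sketch of such a run count, assuming VBA with the bits stored in a string of "0" and "1" characters; a random string of n bits contains about n/2 runs, of which a fraction 2^-k has length exactly k, so a run of length k is expected about n / 2^(k+1) times:

    ' Count the runs of each length and print them next to the
    ' expected count for a random string of the same length.
    Sub CountRuns(s As String)
        Dim n As Long, i As Long, runLen As Long, k As Long
        Dim counts(1 To 32) As Long
        n = Len(s)
        runLen = 1
        For i = 2 To n
            If Mid(s, i, 1) = Mid(s, i - 1, 1) Then
                runLen = runLen + 1
            Else
                If runLen <= 32 Then counts(runLen) = counts(runLen) + 1
                runLen = 1
            End If
        Next i
        If runLen <= 32 Then counts(runLen) = counts(runLen) + 1
        For k = 1 To 32
            If counts(k) > 0 Then Debug.Print k, counts(k), n / 2 ^ (k + 1)
        Next k
    End Sub

With 2500 bits a run of length 10 is expected about 2500/2048 ≈ 1.2 times, so the absence of such long runs becomes visible when the generator's accuracy is reduced, which matches the observation above.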
Documentation
Feedback
No
Created: 24 April 2010
Updated: 15 May 2010
Back to Nature comment page: Nature Articles index