# Conformance Tests
I'm using the Rust implementation as the source of truth. I'm most familiar with Rust's testing tools, so that's where it was easiest to write unit tests, measure performance, and do all the validation work. These conformance tests check that every other language implementation behaves exactly the same as the Rust version I put all that energy into validating for correctness.
## Running the tests
| Language | Directory | Command |
|---|---|---|
| TypeScript | `runners/typescript` | `npm run test` |
## Generating test data
To generate the test data, run:
```sh
cd generator
cargo run
```
This produces `.scenario.json` files in `genfiles/`. Each file contains a sequence of random numbers and the expected outputs from the Rust implementation.
The idea is that we replay the same sequence of random numbers against each implementation. All of the language libraries allow you to inject the source of randomness, so the tests just inject a mock random generator that reads from the recorded sequence. In theory, every implementation makes exactly the same calls to random in the same order, and they all produce identical output.
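A minimal sketch of that mock generator, in TypeScript. The commented-out usage at the bottom assumes a hypothetical `createLseq` factory with a `random` option; that name is illustrative, not the actual API of any runner.

```typescript
// Returns a deterministic "random" function that replays a recorded
// sequence of values instead of generating fresh ones.
function recordedRandom(sequence: number[]): () => number {
  let i = 0;
  return () => {
    if (i >= sequence.length) {
      // Exhausting the recording means this implementation called random
      // more times than the Rust generator did: itself a conformance failure.
      throw new Error(`recorded sequence exhausted after ${i} values`);
    }
    return sequence[i++];
  };
}

// Hypothetical usage: inject the replayed sequence into the library under test.
// const lseq = createLseq({ random: recordedRandom(scenario.randomValues) });
```

Because every implementation consumes the same values in the same order, any divergence in output points to a real behavioral difference rather than luck of the draw.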
## Scenario format
You can exercise the entire public API by working with a sorted list of sort keys, where a sort key is just an LSEQ identifier. Each scenario has an `initialState` array of sort keys in sorted order, followed by a list of operations. Each operation picks an array index and whether to insert before or after it, which determines the two boundary keys to pass to the `alloc` function from the paper. You call `alloc`, insert the result into the list, and repeat.
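Here's a hedged sketch of what a replay loop might look like in TypeScript. Only `initialState` and the insert-before/after-an-index semantics come from this README; the other field names (`randomValues`, `operations`, `index`, `side`, `expected`) and the `alloc` signature are assumptions for illustration.

```typescript
type SortKey = string; // an LSEQ identifier; the encoding is implementation-specific

interface Operation {
  index: number;            // array index the operation targets
  side: "before" | "after"; // insert before or after that index
  expected: SortKey;        // the key the Rust implementation produced
}

interface Scenario {
  randomValues: number[]; // recorded RNG outputs to replay
  initialState: SortKey[]; // sort keys in sorted order
  operations: Operation[];
}

// Replay: derive the two boundary keys from the index, call alloc,
// insert the result, and compare against the recorded expectation.
function replay(
  scenario: Scenario,
  alloc: (left: SortKey | null, right: SortKey | null) => SortKey,
): void {
  const keys = [...scenario.initialState];
  for (const op of scenario.operations) {
    // Position where the new key will land in the array.
    const pos = op.side === "before" ? op.index : op.index + 1;
    const left = keys[pos - 1] ?? null; // lower boundary, null at the start
    const right = keys[pos] ?? null;    // upper boundary, null at the end
    const key = alloc(left, right);
    if (key !== op.expected) {
      throw new Error(`mismatch: got ${key}, expected ${op.expected}`);
    }
    keys.splice(pos, 0, key); // insert and move on to the next operation
  }
}
```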
This setup covers everything worth testing, even if it's a bit indirect for the very simplest cases. Nothing stops you from also writing plain unit tests in each language for the simple stuff.