* Pass a ref to `Vec<Shares>` instead of recreating and moving the object
through several functions.
* Return `slen`/`data_len`, since we'll be using it anyway in `recover_secrets`.
It's possible that two different points have the same data.
To give a concrete example, consider the secret polynomial `x^2 + x + s`, where
`s` is the secret byte. Plugging in 214 and 215 (the two non-identity elements
of the cyclic subgroup of order 3) for `x` will give the same result, `1 + s`,
since both of those elements satisfy `x^2 + x = 1`.
More broadly, for any polynomial `b*x^(t-1) + b*x^(t-2) + ... + b*x + s`, where
`t` is the order of at least one subgroup of GF(256)'s multiplicative group,
all non-identity elements of any subgroup of order `t`, when chosen for `x`,
will produce the same result, `b + s`, because the powers
`x, x^2, ..., x^(t-1)` of such an element sum to 1.
There are certainly other types of polynomials that have "share collisions."
This type was just easy to find because it exploits the nature of finite fields.
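To see the collision concretely, here is a minimal, self-contained sketch. The
field representation (reduction by `0x11D`, i.e. `x^8 + x^4 + x^3 + x^2 + 1`)
and the helper `gf256_mul` are assumptions for illustration, not the library's
actual API:

```rust
// GF(256) multiplication by the Russian-peasant method, reducing by
// x^8 + x^4 + x^3 + x^2 + 1 (0x11D). This representation is an assumption
// for illustration, not necessarily the library's.
fn gf256_mul(mut a: u8, mut b: u8) -> u8 {
    let mut product = 0u8;
    for _ in 0..8 {
        if b & 1 == 1 {
            product ^= a;
        }
        let carry = a & 0x80 != 0;
        a <<= 1;
        if carry {
            a ^= 0x1d; // "subtract" (XOR) the reducing polynomial
        }
        b >>= 1;
    }
    product
}

fn main() {
    let s = 0x42u8; // an arbitrary secret byte
    // f(x) = x^2 + x + s; addition in GF(256) is XOR.
    let f = |x: u8| gf256_mul(x, x) ^ x ^ s;
    // 214 and 215 both satisfy x^2 + x = 1, so their shares collide:
    assert_eq!(f(214), f(215));
    assert_eq!(f(214), 1 ^ s);
    println!("f(214) == f(215) == {}", f(214));
}
```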
RustySecrets makes minimal use of the rand library. It only initializes the
`ChaChaRng` with a seed and `OsRng` in the standard way, and then calls their
`fill_bytes` methods, which are provided by the same trait and whose signature
has not changed. I have confirmed by looking at the code changes that there
have been no changes to the relevant interfaces this library uses.
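For reference, the usage pattern in question looks roughly like this. It is a
sketch against the pre-0.5 rand API, which is an assumption about the version
in play:

```rust
extern crate rand;

use rand::chacha::ChaChaRng;
use rand::os::OsRng;
use rand::{Rng, SeedableRng};

fn main() {
    let mut buf = [0u8; 16];

    // A deterministic stream, seeded explicitly...
    let mut seeded = ChaChaRng::from_seed(&[0x2a, 0x2b, 0x2c, 0x2d]);
    seeded.fill_bytes(&mut buf);

    // ...and the OS entropy source, initialized the standard way.
    // Both calls go through the same trait method, Rng::fill_bytes.
    let mut system = OsRng::new().expect("could not open the system RNG");
    system.fill_bytes(&mut buf);
}
```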
Horner's method is an algorithm for evaluating polynomials. It transforms the
monomial form into a computationally efficient form that needs only one
multiplication and one addition per coefficient. It is pretty easy to
understand:
https://en.wikipedia.org/wiki/Horner%27s_method#Description_of_the_algorithm
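As a concrete sketch, Horner's rule over GF(256) looks like this. The
`gf256_mul` helper (with the `0x11D` reduction) and the lowest-degree-first
coefficient order are assumptions carried over for illustration:

```rust
// GF(256) multiplication as in the earlier sketch (0x11D reduction assumed).
fn gf256_mul(mut a: u8, mut b: u8) -> u8 {
    let mut product = 0u8;
    for _ in 0..8 {
        if b & 1 == 1 {
            product ^= a;
        }
        let carry = a & 0x80 != 0;
        a <<= 1;
        if carry {
            a ^= 0x1d;
        }
        b >>= 1;
    }
    product
}

/// Horner's rule: (((a_k * x + a_{k-1}) * x + ...) * x + a_0.
/// `coeffs` is lowest-degree first, so coeffs[0] is the secret byte.
/// One multiplication and one addition (XOR) per coefficient, instead of
/// recomputing every power of x as naive monomial evaluation would.
fn horner(coeffs: &[u8], x: u8) -> u8 {
    coeffs.iter().rev().fold(0u8, |acc, &c| gf256_mul(acc, x) ^ c)
}

fn main() {
    let coeffs = [0x42, 0x07, 0x35]; // s + 7x + 53x^2, arbitrary
    let x = 9u8;
    // Cross-check against naive evaluation.
    let naive = coeffs[0]
        ^ gf256_mul(coeffs[1], x)
        ^ gf256_mul(coeffs[2], gf256_mul(x, x));
    assert_eq!(horner(&coeffs, x), naive);
    println!("f({}) = {}", x, horner(&coeffs, x));
}
```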
This implementation has resulted in a noticeable speedup in secret share
generation, as the RustySecrets benchmarks show, especially when evaluating
larger polynomials:
Before:
test sss::generate_1kb_10_25 ... bench: 3,104,391 ns/iter (+/- 113,824)
test sss::generate_1kb_3_5 ... bench: 951,807 ns/iter (+/- 41,067)
After:
test sss::generate_1kb_10_25 ... bench: 2,071,655 ns/iter (+/- 46,445)
test sss::generate_1kb_3_5 ... bench: 869,875 ns/iter (+/- 40,246)
Implements barycentric Lagrange interpolation. Uses algorithm (3.1) from the
paper "Polynomial Interpolation: Lagrange vs Newton" by Wilhelm Werner to find
the barycentric weights, and then evaluates at `Gf256::zero()` using the second
or "true" form of the barycentric interpolation formula.
I also earlier implemented a variant of this algorithm, Algorithm 2 from "A new
efficient algorithm for polynomial interpolation," which uses fewer total
operations than Werner's version. However, because it uses many more
multiplications or divisions (depending on how you choose to write it), it runs
slower: in the Gf256 module, subtraction and addition cost the same, while
multiplication, and especially division, are considerably more expensive.
The new algorithm takes n^2 / 2 divisions and n^2 subtractions to calculate the
barycentric weights, and another n divisions, n multiplications, and 2n
additions to evaluate the polynomial*. The old algorithm runs in n^2 - n
divisions, n^2 multiplications, and n^2 subtractions. Without knowing the exact
running time of each of these operations we can't say for sure, but a good
guess would be that the new algorithm trends toward about 1/3 of the old
running time as n -> infinity. It's also easy to see theoretically that for
small n the original Lagrange algorithm is faster. This is backed up by
benchmarks, which showed that the new algorithm is faster for n >= 5. That is
more or less what we should expect given the operation counts in n of the two
algorithms.
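For reference, the operation counts above written out as code; per-operation
costs are deliberately left out, since we don't know them exactly:

```rust
// Operation counts from the analysis above, as functions of n.
// Tuples are (divisions, multiplications, additions + subtractions).
fn new_algorithm_ops(n: u64) -> (u64, u64, u64) {
    (n * n / 2 + n, n, n * n + 2 * n)
}

fn old_algorithm_ops(n: u64) -> (u64, u64, u64) {
    (n * n - n, n * n, n * n)
}

fn main() {
    for &n in [3u64, 5, 10, 25].iter() {
        println!(
            "n = {:>2}: new {:?} vs old {:?}",
            n,
            new_algorithm_ops(n),
            old_algorithm_ops(n)
        );
    }
}
```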
To ensure we always run the faster algorithm, I've kept both versions and only
use the new one when 5 or more points are given.
Previously, the tests in the lagrange module were allowed to pass points with
x = 0 to the interpolation algorithms. Genuine shares will never be evaluated
at x = 0, since then they would just be the secret itself, so:
1. Nodes in tests now start at x = 1, just as `scheme::secret_share` deals them out.
2. I have added assert statements (sketched after this list) to reinforce this
fact and guard against division-by-zero panics.
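A minimal sketch of the guard; the function and message are illustrative, not
the actual code:

```rust
/// Sketch of the guard: interpolation must never see a point at x = 0,
/// both because such a "share" would be the secret itself and because
/// barycentric evaluation at zero divides by each x_i.
fn assert_nonzero_xs(points: &[(u8, u8)]) {
    for &(x, _) in points {
        assert!(x != 0, "point with x = 0 would cause a division by zero");
    }
}

fn main() {
    assert_nonzero_xs(&[(1, 0xab), (2, 0xcd)]); // passes
    // assert_nonzero_xs(&[(0, 0xef)]);         // would panic
}
```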
This meant getting rid of the `evaluate_at_works` test, but
`interpolate_evaluate_at_0_eq_evaluate_at` provides a similar test.
Further work will include the use of barycentric weights in the `interpolate`
function.
A couple more interesting things to note about barycentric weights:
* Barycentric weights can be partially computed when fewer than `threshold`
shares are present. When additional shares come in, the computation can resume
with no penalty to the total runtime (see the sketch below).
* They can be determined completely independently of the y values of our points
and of the x value we want to evaluate at. We only need to know the x values of
our interpolation points.
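To make the resumability concrete, here is a sketch of extending the weights
one node at a time, over f64 for brevity (the library would do the same with
Gf256 operations):

```rust
/// Extend barycentric weights with one more node. Each existing weight
/// picks up a factor 1/(x_i - x_new); the new node's weight is
/// 1/prod_j (x_new - x_j). Note that only x values are involved.
fn add_node(xs: &mut Vec<f64>, weights: &mut Vec<f64>, x_new: f64) {
    let mut w_new = 1.0;
    for (xi, wi) in xs.iter().zip(weights.iter_mut()) {
        *wi /= *xi - x_new;
        w_new /= x_new - *xi;
    }
    xs.push(x_new);
    weights.push(w_new);
}

fn main() {
    let (mut xs, mut ws) = (Vec::new(), Vec::new());
    // Shares can arrive one at a time; computation resumes where it left off.
    for &x in [1.0, 2.0, 3.0].iter() {
        add_node(&mut xs, &mut ws, x);
    }
    println!("{:?}", ws); // [0.5, -1.0, 0.5]
}
```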
Fixes #43.
Fixes a logic error: the threshold should determine the number of coefficients
in the secret polynomial, but as written the code was equivalent to the
threshold always being 2.
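For illustration only, here is the shape of the bug and the fix, with
hypothetical names (this is not the actual diff):

```rust
extern crate rand;

use rand::Rng;

// Hypothetical illustration of the bug class (not the actual diff).
// A threshold of k needs k coefficients (a degree k - 1 polynomial),
// so that any k shares, and no fewer, determine the secret.
fn secret_coefficients<R: Rng>(rng: &mut R, threshold: usize, secret: u8) -> Vec<u8> {
    let mut coeffs = vec![secret];
    // A buggy version might loop `for _ in 1..2` here, always producing
    // a degree-1 polynomial regardless of the requested threshold.
    for _ in 1..threshold {
        coeffs.push(rng.gen());
    }
    coeffs
}

fn main() {
    let mut rng = rand::thread_rng();
    assert_eq!(secret_coefficients(&mut rng, 3, 0x42).len(), 3);
}
```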