=item * I<p-1> has a large prime factor I<q>
=item * I<p+1> has a large prime factor I<r>
=back
Using a strong prime in cryptography guards against easy factoring with
algorithms like Pollard's I<p-1>. Rivest and Silverman (1999) present a case
that using strong primes is unnecessary, and most modern cryptographic systems
agree. First, the smoothness of I<p-1> does not affect more modern factoring
methods such as ECM. Second, modern general-purpose factoring methods like
GNFS are far faster than either of those methods, making the point moot.
Third, due to key size growth and advances in factoring and attacks, for
practical purposes using large random primes offers security equivalent to
strong primes.
Similar to L</random_nbit_prime>, the result will be a BigInt if the
number of bits is greater than the native bit size. For better performance
with large bit sizes, install L<Math::Prime::Util::GMP>.
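A short usage sketch for C<random_strong_prime> (the bit size here is an
arbitrary example; generation is much faster with
L<Math::Prime::Util::GMP> installed):

```perl
use Math::Prime::Util qw(random_strong_prime is_prime);

# 128 bits exceeds the native integer size, so $p is a Math::BigInt.
my $p = random_strong_prime(128);
die "not prime" unless is_prime($p);
print "$p\n";
```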
=head2 random_proven_prime
my $bigprime = random_proven_prime(512);
Constructs an n-bit random proven prime. Internally this may use
L</is_provable_prime>(L</random_nbit_prime>) or
L</random_maurer_prime> depending on the platform and bit size.
=head2 random_proven_prime_with_cert
my($n, $cert) = random_proven_prime_with_cert(512);
Similar to L</random_proven_prime>, but returns a two-element array containing
the n-bit provable prime along with a primality certificate. The certificate
is the same as produced by L</random_maurer_prime_with_cert> or
L</is_provable_prime_with_cert>, and can be parsed by L</verify_prime> or
any other software that understands MPU primality certificates.
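For example, the certificate can be handed straight to C<verify_prime>
(a small bit size is used here to keep the example fast):

```perl
use Math::Prime::Util qw(random_proven_prime_with_cert verify_prime);

my ($n, $cert) = random_proven_prime_with_cert(64);
die "certificate did not verify" unless verify_prime($cert);
print "$n is provably prime\n";
```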
=head2 random_maurer_prime
my $bigprime = random_maurer_prime(512);
Construct an n-bit provable prime, using the FastPrime algorithm of
Ueli Maurer (1995). This is the same algorithm used by L<Crypt::Primes>.
Similar to L</random_nbit_prime>, the result will be a BigInt if the
number of bits is greater than the native bit size. For better performance
with large bit sizes, install L<Math::Prime::Util::GMP>.
The differences between this function and that in L<Crypt::Primes> are
described in the L</"SEE ALSO"> section.
Internally this additionally runs the BPSW probable prime test on every
partial result, and constructs a primality certificate for the final
result, which is verified. These provide additional checks that the resulting
value has been properly constructed.
An alternative to this function is to run L</is_provable_prime> on the
result of L</random_nbit_prime>, which will provide more diversity and
will be faster up to 512 or so bits. Maurer's method should be much
faster for large bit sizes (larger than 2048). If you don't need absolutely
proven results, then using L followed by additional
tests (L and/or L)
should be much faster.
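The first alternative mentioned above can be sketched as follows (64 bits
is used so the proof is immediate; larger sizes work the same way):

```perl
use Math::Prime::Util qw(random_nbit_prime is_provable_prime);

my $p = random_nbit_prime(64);
# is_provable_prime returns 2 for a proven prime, 1 for a probable
# prime, and 0 for a composite.
die "could not prove primality" unless is_provable_prime($p) == 2;
print "$p\n";
```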
=head2 random_maurer_prime_with_cert
my($n, $cert) = random_maurer_prime_with_cert(512);
As with L</random_maurer_prime>, but returns a two-element array containing
the n-bit provable prime along with a primality certificate. The certificate
is the same as produced by L</random_proven_prime_with_cert> or
L</is_provable_prime_with_cert>, and can be parsed by L</verify_prime> or
any other software that understands MPU primality certificates.
The proof construction consists of a single chain of C<n-1> type proofs.
=head1 UTILITY FUNCTIONS
=head2 prime_precalc
prime_precalc( 1_000_000_000 );
Let the module prepare for fast operation up to a specific number. It is not
necessary to call this, but it gives you more control over when memory is
allocated and gives faster results for multiple calls in some cases. In the
current implementation this will calculate a sieve for all numbers up to the
specified number.
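For example, sieving once up front makes subsequent calls in that range
cheap:

```perl
use Math::Prime::Util qw(prime_precalc prime_count prime_memfree);

prime_precalc(10_000_000);           # build the sieve once
my $pi = prime_count(10_000_000);    # answered from the cached sieve
print "$pi\n";                       # 664579
prime_memfree();                     # release the sieve when finished
```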
=head2 prime_memfree
prime_memfree;
Frees any extra memory the module may have allocated. As with
C<prime_precalc>, it is not necessary to call this, but if you're done
making calls, or want things cleaned up, you can use this. The object method
might be a better choice for complicated uses.
=head2 Math::Prime::Util::MemFree->new
my $mf = Math::Prime::Util::MemFree->new;
# perform operations. When $mf goes out of scope, memory will be recovered.
This is a more robust way of making sure any cached memory is freed, as it
will be handled by the last C<MemFree> object leaving scope. This means that
even if your routines were inside an eval that died, things will still get
cleaned up.
If you call another function that uses a MemFree object, the cache will stay
in place because you still have an object.
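A sketch of the scoped form (the C<MemFree> class is loaded along with the
main module):

```perl
use Math::Prime::Util qw(prime_precalc prime_count);

{
  my $mf = Math::Prime::Util::MemFree->new;
  prime_precalc(1_000_000);
  print prime_count(1_000_000), "\n";   # 78498
}  # $mf leaves scope here: cached memory is freed, even if the
   # enclosed code had died inside an eval
```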
=head2 prime_get_config
my $cached_up_to = prime_get_config->{'precalc_to'};
Returns a reference to a hash of the current settings. The hash is copy of
the configuration, so changing it has no effect. The settings include:
precalc_to primes up to this number are calculated
maxbits the maximum number of bits for native operations
xs 0 or 1, indicating the XS code is available
gmp 0 or 1, indicating GMP code is available
maxparam the largest value for most functions, without bigint
maxdigits the max digits in a number, without bigint
maxprime the largest representable prime, without bigint
maxprimeidx the index of maxprime, without bigint
assume_rh whether to assume the Riemann hypothesis (default 0)
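For example, to inspect a few of these settings:

```perl
use Math::Prime::Util qw(prime_get_config);

my $config = prime_get_config;
printf "XS in use:   %s\n", $config->{xs}  ? "yes" : "no";
printf "GMP in use:  %s\n", $config->{gmp} ? "yes" : "no";
printf "native bits: %d\n", $config->{maxbits};
```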
=head2 prime_set_config
prime_set_config( assume_rh => 1 );
Allows setting of some parameters. Currently the only parameters are:
xs Allows turning off the XS code, forcing the Pure Perl
code to be used. Set to 0 to disable XS, set to 1 to
re-enable. You probably will never want to do this.
gmp Allows turning off the use of L<Math::Prime::Util::GMP>,
which means using Pure Perl code for big numbers. Set
to 0 to disable GMP, set to 1 to re-enable.
You probably will never want to do this.
assume_rh Allows functions to assume the Riemann hypothesis is
true if set to 1. This defaults to 0. Currently this
setting only impacts prime count lower and upper
bounds, but could later be applied to other areas such
as primality testing. A later version may also have a
way to indicate whether no RH, RH, GRH, or ERH is to
be assumed.
irand Takes a code ref to an irand function returning a
uniform number between 0 and 2**32-1. This will be
used for all random number generation in the module.
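A sketch of supplying an C<irand> coderef. This is an illustration only:
Perl's built-in C<rand()> is B<not> cryptographically secure, and newer
module versions manage their own CSPRNG and may reject this setting, hence
the eval guard:

```perl
use Math::Prime::Util qw(prime_set_config random_nbit_prime);

# Wrap a uniform generator of values in [0, 2**32-1].  A real
# application would wrap a CSPRNG here instead of rand().
eval { prime_set_config( irand => sub { int(rand(4294967296)) } ) };

print random_nbit_prime(48), "\n";
```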
=head1 FACTORING FUNCTIONS
=head2 factor
my @factors = factor(3_369_738_766_071_892_021);
# returns (204518747,16476429743)
Produces the prime factors of a positive number input, in numerical order.
The product of the returned factors will be equal to the input. C<factor(1)>
will return an empty list, and C<factor(0)> will return C<0>. This matches Pari.
In scalar context, returns Ω(n), the total number of prime factors
(L<OEIS A001222|http://oeis.org/A001222>).
This corresponds to Pari's C<bigomega(n)> function and Mathematica's
C<PrimeOmega[n]> function.
This is the same result that we would get if we evaluated the resulting
array in scalar context.
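For example:

```perl
use Math::Prime::Util qw(factor);

my @factors = factor(600);   # 600 = 2^3 * 3 * 5^2
print "@factors\n";          # 2 2 2 3 5 5

my $omega = factor(600);     # scalar context: Omega(600)
print "$omega\n";            # 6
```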
The current algorithm for non-bigints is a sequence of small trial division,
a few rounds of Pollard's Rho, SQUFOF, Pollard's p-1, Hart's OLF, a long
run of Pollard's Rho, and finally trial division if anything survives. This
process is repeated for each non-prime factor. In practice, it is very rare
to require more than the first Rho + SQUFOF to find a factor, and I have not
seen anything go to the last step.
Factoring bigints works with pure Perl, and can be very handy on 32-bit
machines for numbers just over the 32-bit limit, but it can be B<extremely>
slow for "hard" numbers. Installing the L<Math::Prime::Util::GMP> module will
speed up bigint factoring a B<lot>, and all future effort on large number
factoring will be in that module. If you do not have that module for some
reason, use the GMP or Pari version of bigint if possible
(e.g. C<< use bigint try => 'GMP,Pari' >>).