% $Id: usersch3.tex,v 1.93.2.9 2005/09/15 14:58:20 bill Exp $
% Copyright (c) 2000 The PARI Group
%
% This file is part of the PARI/GP documentation
%
% Permission is granted to copy, distribute and/or modify this document
% under the terms of the GNU General Public License
\chapter{Functions and Operations Available in PARI and GP}
\label{se:functions}
The functions and operators available in PARI and in the GP/PARI calculator
are numerous and ever-expanding. Here is a description of the ones available
in version \vers. It should be noted that many of these functions accept
quite different types as arguments, while others are more restricted. The
list of acceptable types will be given for each function or class of
functions.
Except when stated otherwise, it is understood that a function or operation
which should make natural sense is legal. In this chapter, we will describe
the functions according to a rough classification. The general entry looks
something like:
\key{foo}$(x,\{\fl=0\})$: short description.
\syn{foo}{x,\fl}.
\noindent
This means that the GP function \kbd{foo} has one mandatory argument $x$, and
an optional one, $\fl$, whose default value is 0 (the $\{\}$ should never be
typed, it is just a convenient notation we will use throughout to denote
optional arguments). That is, you can type \kbd{foo(x,2)}, or \kbd{foo(x)},
which is then understood to mean \kbd{foo(x,0)}. Likewise, a comma or closing
parenthesis where an optional argument should have been signals to GP that it
should use the default. Thus, the syntax \kbd{foo(x,)} is also accepted as a
synonym for our last expression. When a function has more than one optional
argument, the argument list is filled with user-supplied values, in order;
when none are left, the defaults are used instead. Thus, assuming that
\kbd{foo}'s prototype had been
$$\hbox{%
\key{foo}$(\{x=1\},\{y=2\},\{z=3\})$,%
}$$
typing in \kbd{foo(6,4)} would give
you \kbd{foo(6,4,3)}. In the rare case when you want to set some far away
flag, and leave the defaults in between as they stand, you can use the
``empty arg'' trick alluded to above: \kbd{foo(6,,1)} would yield
\kbd{foo(6,2,1)}. By the way, \kbd{foo()} by itself yields
\kbd{foo(1,2,3)} as was to be expected. In this rather special case of a
function having no mandatory argument, you can even omit the $()$: a
standalone \kbd{foo} would be enough (though we don't really recommend it for
your scripts, for the sake of clarity). In defining GP syntax, we strove
to put optional arguments at the end of the argument list (of course, since
they would not make sense otherwise), and in order of decreasing usefulness
so that, most of the time, you will be able to ignore them.
\misctitle{Binary Flags}.\sidx{binary flag} For some of these optional
flags, we adopted the customary binary notation as a compact way to
represent many toggles with just one number. Letting $(p_0,\dots,p_n)$ be a
list of switches (i.e.~of properties which can be assumed to take either
the value $0$ or~$1$), the number $2^3 + 2^5=40$ means that $p_3$ and $p_5$
have been set (that is, set to $1$), and none of the others were (that is,
they were set to 0). This will usually be announced as ``The binary digits
of $\fl$ mean 1: $p_0$, 2: $p_1$, 4: $p_2$'', and so on, using the
available consecutive powers of~$2$.
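For instance, a flag value of $40 = 2^3+2^5$ sets the toggles $p_3$ and
$p_5$ and no others; its binary digits can be checked under GP with the
\kbd{binary} function described later in this chapter:
\bprog
? binary(40)
%1 = [1, 0, 1, 0, 0, 0]
@eprog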
\misctitle{Pointers}.\sidx{pointer} If a parameter in the function
prototype is prefixed with a \& sign, as in
\key{foo}$(x,\&e)$
\noindent it means that, besides the normal return value, the variable named
$e$ may be set as a side effect. When passing the argument, the \& sign has
to be typed in explicitly. As of version \vers, this \tet{pointer} argument
is optional for all documented functions, hence the \& will always appear
between brackets as in \kbd{issquare}$(x,\{\&e\})$.
\misctitle{About library programming}. To finish with our generic
simple-minded example, the \var{library} function \kbd{foo}, as defined
above, is seen to have two mandatory arguments, $x$ and \fl (no PARI
mathematical function has been implemented so as to accept a variable
number of arguments). When not mentioned otherwise, the result and
arguments of a function are assumed implicitly to be of type \kbd{GEN}.
Most other functions return an object of type \kbd{long} integer in C (see
Chapter~4). The variable or parameter names \var{prec} and \fl\ always
denote \kbd{long} integers.
The \tet{entree} type is used by the library to implement iterators (loops,
sums, integrals, etc.) when a formal variable has to successively assume a
number of values in a given set. When programming with the library, it is
easier and much more efficient to code loops and the like directly. Hence
this type is not documented, although it does appear in a few library
function prototypes below. See \secref{se:sums} for more details.
\section{Standard monadic or dyadic operators}
\subseckbd{+$/$-}: The expressions \kbd{+}$x$ and \kbd{-}$x$ refer
to monadic operators (the first does nothing, the second negates $x$).
\syn{gneg}{x} for \kbd{-}$x$.
\subseckbd{+}, \kbd{-}: The expression $x$ \kbd{+} $y$ is the \idx{sum} and
$x$ \kbd{-} $y$ is the \idx{difference} of $x$ and $y$. Among the prominent
impossibilities are addition/subtraction between a scalar type and a vector
or a matrix, between vector/matrices of incompatible sizes and between an
integermod and a real number.
\syn{gadd}{x,y} $x$ \kbd{+} $y$, $\teb{gsub}(x,y)$ for $x$ \kbd{-} $y$.
\subseckbd{*}: The expression $x$ \kbd{*} $y$ is the \idx{product} of $x$
and $y$. Among the prominent impossibilities are multiplication between
vector/matrices of incompatible sizes, between an integermod and a real
number. Note that because of vector and matrix operations, \kbd{*} is not
necessarily commutative. Note also that since multiplication between two
column or two row vectors is not allowed, to obtain the \idx{scalar product}
of two vectors of the same length, you must multiply a line vector by a
column vector, if necessary by transposing one of the vectors (using
the operator \kbd{\til} or the function \kbd{mattranspose}, see
\secref{se:linear_algebra}).
If $x$ and $y$ are binary quadratic forms, compose them. See also
\kbd{qfbnucomp} and \kbd{qfbnupow}.
\syn{gmul}{x,y} for $x$ \kbd{*} $y$. Also available is
$\teb{gsqr}(x)$ for $x$ \kbd{*} $x$ (faster of course!).
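For instance, the scalar product of two row vectors of the same length is
obtained by transposing one of them:
\bprog
? [1,2,3] * [4,5,6]~
%1 = 32
@eprog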
\subseckbd{/}: The expression $x$ \kbd{/} $y$ is the \idx{quotient} of $x$
and $y$. In addition to the impossibilities for multiplication, note that if
the divisor is a matrix, it must be an invertible square matrix, and in that
case the result is $x*y^{-1}$. Furthermore note that the result is as exact
as possible: in particular, division of two integers always gives a rational
number (which may be an integer if the quotient is exact) and \var{not} the
Euclidean quotient (see $x$ \kbd{\bs} $y$ for that), and similarly the
quotient of two polynomials is a rational function in general. To obtain the
approximate real value of the quotient of two integers, add \kbd{0.} to the
result; to obtain the approximate $p$-adic value of the quotient of two
integers, add \kbd{O(p\pow k)} to the result; finally, to obtain the
\idx{Taylor series} expansion of the quotient of two polynomials, add
\kbd{O(X\pow k)} to the result or use the \kbd{taylor} function
(see \secref{se:taylor}). \label{se:gdiv}
\syn{gdiv}{x,y} for $x$ \kbd{/} $y$.
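A short illustrative GP session (division is as exact as possible, and adding
an \kbd{O} term yields a series expansion):
\bprog
? 7 / 2
%1 = 7/2
? (x^2 - 1) / (x - 1)
%2 = x + 1
? 1 / (1 - x) + O(x^4)
%3 = 1 + x + x^2 + x^3 + O(x^4)
@eprog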
\subseckbd{\bs}: The expression $x$ \kbd{\bs} $y$ is the
% keep "Euclidean" and "quotient" on same line for gphelp
\idx{Euclidean quotient} of $x$ and $y$. The types must be either both
integer or both polynomials. The result is the Euclidean quotient. In the
case of integer division, the quotient is such that the corresponding
remainder is non-negative.
\syn{gdivent}{x,y} for $x$ \kbd{\bs} $y$.
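For example, since the remainder is chosen non-negative:
\bprog
? 7 \ 2
%1 = 3
? -7 \ 2
%2 = -4
@eprog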
\subseckbd{\bs/}: The expression $x$ \b{/} $y$ is the Euclidean
quotient of $x$ and $y$. The types must be either both integer or both
polynomials. The result is the rounded Euclidean quotient. In the case of
integer division, the quotient is such that the corresponding remainder is
smallest in absolute value and in case of a tie the quotient closest to
$+\infty$ is chosen.
\syn{gdivround}{x,y} for $x$ \b{/} $y$.
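For example, $7/2 = 3.5$ is a tie, so the quotient closest to $+\infty$ is
chosen:
\bprog
? 7 \/ 2
%1 = 4
? -7 \/ 2
%2 = -3
@eprog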
\subseckbd{\%}: The expression $x$ \kbd{\%} $y$ is the
% keep "Euclidean" and "remainder" on same line for gphelp
\idx{Euclidean remainder} of $x$ and $y$. The modulus $y$ must be of type
integer or polynomial. The result is the remainder, always non-negative in
the case of integers. Allowed dividend types are scalar exact types when
the modulus is an integer, and polynomials, polmods and rational functions
when the modulus is a polynomial.
\syn{gmod}{x,y} for $x$ \kbd{\%} $y$.
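For instance (the integer remainder is always non-negative):
\bprog
? -7 % 2
%1 = 1
? (x^3 + x + 1) % (x^2 + 1)
%2 = 1
@eprog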
\subsecidx{divrem}$(x,y)$: creates a column vector with two components,
the first being the Euclidean quotient, the second the Euclidean remainder,
of the division of $x$ by $y$. This avoids the need to do two divisions if
one needs both the quotient and the remainder. The arguments must be both
integers or both polynomials; in the case of integers, the remainder is
non-negative.
\syn{gdiventres}{x,y}.
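For example, consistently with \kbd{\bs} and \kbd{\%} above:
\bprog
? divrem(-7, 2)
%1 = [-4, 1]~
@eprog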
\subseckbd{\pow}: The expression $x\hbox{\kbd{\pow}}n$ is \idx{powering}.
If the exponent is an integer, then exact operations are performed using
binary (left-shift) powering techniques. In particular, in this case $x$
cannot be a vector or matrix unless it is a square matrix (and moreover
invertible if the exponent is negative). If $x$ is a $p$-adic number, its
precision will increase if $v_p(n) > 0$. PARI is able to rewrite the
multiplication $x * x$ of two {\it identical} objects as $x^2$, or
$\kbd{sqr}(x)$ (here, identical means the operands are two different labels
referencing the same chunk of memory; no equality test is performed). This
is no longer true when more than two arguments are involved.
If the exponent is not of type integer, this is treated as a transcendental
function (see \secref{se:trans}), and in particular has the effect of
componentwise powering on vector or matrices.
As an exception, if the exponent is a rational number $p/q$ and $x$ an
integer modulo a prime, return a solution $y$ of $y^q=x^p$ if it
exists. Currently, $q$ must not have large prime factors.
Beware that
\bprog
? Mod(7,19)^(1/2)
%1 = Mod(11, 19) /* is any square root */
? sqrt(Mod(7,19))
%2 = Mod(8, 19) /* is the smallest square root */
? Mod(7,19)^(3/5)
%3 = Mod(1, 19)
? %3^(5/3)
%4 = Mod(1, 19) /* Mod(7,19) is just another cube root */
@eprog\noindent
\syn{gpow}{x,n,\var{prec}} for $x\hbox{\kbd{\pow}}n$.
\subsecidx{shift}$(x,n)$ or $x$ \kbd{<<} $n$ (= $x$ \kbd{>>} $(-n)$): shifts
$x$ componentwise left by $n$ bits if $n\ge0$ and right by $|n|$ bits if
$n<0$. A left shift by $n$ corresponds to multiplication by $2^n$. A right
shift of an integer $x$ by $|n|$ corresponds to a Euclidean division of
$x$ by $2^{|n|}$ with a
remainder of the same sign as $x$, hence is not the same (in general) as
$x$ \kbd{\bs} $2^{|n|}$.
\syn{gshift}{x,n} where $n$ is a \kbd{long}.
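The difference with Euclidean division shows up on negative integers, where
the shift remainder keeps the sign of $x$:
\bprog
? shift(-5, -1)
%1 = -2
? -5 \ 2
%2 = -3
@eprog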
\subsecidx{shiftmul}$(x,n)$: multiplies $x$ by $2^n$. The difference with
\kbd{shift} is that when $n<0$, ordinary division takes place, hence for
example if $x$ is an integer the result may be a fraction, while for
\kbd{shift} Euclidean division takes place when $n<0$ hence if $x$ is an
integer the result is still an integer.
\syn{gmul2n}{x,n} where $n$ is a \kbd{long}.
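For instance:
\bprog
? shiftmul(5, -1)
%1 = 5/2
? shift(5, -1)
%2 = 2
@eprog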
\subsec{Comparison and boolean operators}.\sidx{boolean operators}
The six standard \idx{comparison operators} \kbd{<=}, \kbd{<}, \kbd{>=},
\kbd{>}, \kbd{==}, \kbd{!=} are available in GP, and in library mode under
the names \teb{gle}, \teb{glt}, \teb{gge}, \teb{ggt}, \teb{geq}, \teb{gne}
respectively. The library syntax is $\var{co}(x,y)$, where \var{co} is the
comparison operator. The result is 1 (as a \kbd{GEN}) if the comparison is
true, 0 (as a \kbd{GEN}) if it is false.
The standard boolean functions \kbd{||} (\idx{inclusive or}), \kbd{\&\&}
(\idx{and})\sidx{or} and \kbd{!} (\idx{not}) are also available, and the
library syntax is $\teb{gor}(x,y)$, $\teb{gand}(x,y)$ and $\teb{gnot}(x)$
respectively.
In library mode, it is in fact usually preferable to use the two basic
functions which are $\teb{gcmp}(x,y)$ which gives the sign (1, 0, or -1) of
$x-y$, where $x$ and $y$ must be in $\R$, and $\teb{gegal}(x,y)$ which
can be applied to any two PARI objects $x$ and $y$ and gives 1 (i.e.~true) if
they are equal (but not necessarily identical), 0 (i.e.~false) otherwise.
Particular cases of \teb{gegal} which should be used are $\teb{gcmp0}(x)$
($x==0$ ?), $\teb{gcmp1}(x)$ ($x==1$ ?), and
$\teb{gcmp_1}(x)$ ($x==-1$ ?).
Note that $\teb{gcmp0}(x)$ tests whether $x$ is equal to zero, even if $x$ is
not an exact object. To test whether $x$ is an exact object which is equal to
zero, one must use $\teb{isexactzero}$.
Also note that the \kbd{gcmp} and \kbd{gegal} functions return a C-integer,
and \var{not} a \kbd{GEN} like \kbd{gle} etc.
\smallskip
GP accepts the following synonyms for some of the above functions: since we
thought it might easily lead to confusion, we don't use the customary C
operators for bitwise \kbd{and} or bitwise \kbd{or} (use \tet{bitand} or
\tet{bitor}), hence \kbd{|} and \kbd{\&} are accepted as\sidx{bitwise
and}\sidx{bitwise or} synonyms of \kbd{||} and \kbd{\&\&} respectively.
Also, \kbd{<>} is accepted as a synonym for \kbd{!=}. On the other hand,
\kbd{=} is definitely \var{not} a synonym for \kbd{==} since it is the
assignment statement.
\subsecidx{lex}$(x,y)$: gives the result of a lexicographic comparison
between $x$ and $y$. This is to be interpreted in quite a wide sense. For
example, the vector $[1,3]$ will be considered smaller than the longer
vector $[1,3,-1]$ (but of course larger than $[1,2,5]$),
i.e.~\kbd{lex([1,3], [1,3,-1])} will return $-1$.
\syn{lexcmp}{x,y}.
\subsecidx{sign}$(x)$: \idx{sign} ($0$, $1$ or $-1$) of $x$, which must be of
type integer, real or fraction.
\syn{gsigne}{x}. The result is a \kbd{long}.
\subsecidx{max}$(x,y)$ and \teb{min}$(x,y)$: creates the
maximum and minimum of $x$ and $y$ when they can be compared.
\syn{gmax}{x,y} and $\teb{gmin}(x,y)$.
\subsecidx{vecmax}$(x)$: if $x$ is a vector or a matrix, returns the maximum
of the elements of $x$, otherwise returns a copy of $x$. Returns $-\infty$
in the form of $-(2^{31}-1)$ (or $-(2^{63}-1)$ for 64-bit machines) if $x$ is
empty.
\syn{vecmax}{x}.
\subsecidx{vecmin}$(x)$: if $x$ is a vector or a matrix, returns the minimum
of the elements of $x$, otherwise returns a copy of $x$. Returns $+\infty$
in the form of $2^{31}-1$ (or $2^{63}-1$ for 64-bit machines) if $x$ is empty.
\syn{vecmin}{x}.
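For example:
\bprog
? vecmax([1, 5, 3])
%1 = 5
? vecmin([1, 5, 3])
%2 = 1
@eprog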
\section{Conversions and similar elementary functions or commands}
\label{se:conversion}
\noindent
Many of the conversion functions are rounding or truncating operations. In
this case, if the argument is a rational function, the result is the
Euclidean quotient of the numerator by the denominator, and if the argument
is a vector or a matrix, the operation is done componentwise. This will not
be restated for every function.
\subsecidx{List}$({x=[\,]})$: transforms a (row or column) vector $x$
into a list. The only other way to create a \typ{LIST} is to use the
function \kbd{listcreate}.
This is useless in library mode.
\subsecidx{Mat}$({x=[\,]})$: transforms the object $x$ into a matrix.
If $x$ is not a vector or a matrix, this creates a $1\times 1$ matrix.
If $x$ is a row (resp. column) vector, this creates a 1-row (resp.
1-column) matrix. If $x$ is already a matrix, a copy of $x$ is created.
This function can be useful in connection with the function \kbd{concat}
(see there).
\syn{gtomat}{x}.
\subsecidx{Mod}$(x,y,\{\fl=0\})$:\label{se:Mod} creates the PARI object
$(x \mod y)$, i.e.~an integermod or a polmod. $y$ must be an integer or a
polynomial. If $y$ is an integer, $x$ must be an integer, a rational
number, or a $p$-adic number compatible with the modulus $y$. If $y$ is a
polynomial, $x$ must be a scalar (which is not a polmod), a polynomial, a
rational function, or a power series.
This function is not the same as $x$ \kbd{\%} $y$, the result of which is an
integer or a polynomial.
If $\fl$ is equal to $1$, the modulus of the created result is put on the
heap and not on the stack, and hence becomes a permanent copy which cannot be
erased later by garbage collecting (see \secref{se:garbage}). Functions
will operate faster on such objects and memory consumption will be lower.
On the other hand, care should be taken to avoid creating too many such
objects.
Under GP, the same effect can be obtained by assigning the object to a GP
variable (the value of which is a permanent object for the duration of the
relevant library function call, and is treated as such). This value is
subject to garbage collection, since it will be deleted when the value
changes. This is preferable and the above flag is only retained for
compatibility reasons (it can still be useful in library mode).
\syn{Mod0}{x,y,\fl}. Also available are
$\bullet$ for $\fl=1$: $\teb{gmodulo}(x,y)$.
$\bullet$ for $\fl=0$: $\teb{gmodulcp}(x,y)$.
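A few illustrative examples, contrasting \kbd{Mod} with the operator
\kbd{\%} and showing that a rational argument is reduced modulo $y$:
\bprog
? Mod(12, 7)
%1 = Mod(5, 7)
? 12 % 7
%2 = 5
? Mod(1/2, 5)
%3 = Mod(3, 5)
@eprog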
\subsecidx{Pol}$(x,\{v=x\})$: transforms the object $x$ into a polynomial with
main variable $v$. If $x$ is a scalar, this gives a constant polynomial. If
$x$ is a power series, the effect is identical to \kbd{truncate} (see there),
i.e.~it chops off the $O(X^k)$. If $x$ is a vector, this function creates
the polynomial whose coefficients are given in $x$, with $x[1]$ being the
leading coefficient (which can be zero).
Warning: this is \var{not} a substitution function. It is intended to be
quick and dirty. So if you try \kbd{Pol(a,y)} on the polynomial \kbd{a = x+y},
you will get \kbd{y+y}, which is not a valid PARI object.
\syn{gtopoly}{x,v}, where $v$ is a variable number.
\subsecidx{Polrev}$(x,\{v=x\})$: transforms the object $x$ into a polynomial
with main variable $v$. If $x$ is a scalar, this gives a constant polynomial.
If $x$ is a power series, the effect is identical to \kbd{truncate} (see
there), i.e.~it chops off the $O(X^k)$. If $x$ is a vector, this function
creates the polynomial whose coefficients are given in $x$, with $x[1]$ being
the constant term. Note that this is the reverse of \kbd{Pol} if $x$ is a
vector, otherwise it is identical to \kbd{Pol}.
\syn{gtopolyrev}{x,v}, where $v$ is a variable number.
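The two coefficient orders can be contrasted as follows:
\bprog
? Pol([1, 2, 3])
%1 = x^2 + 2*x + 3
? Polrev([1, 2, 3])
%2 = 3*x^2 + 2*x + 1
@eprog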
\subsecidx{Qfb}$(a,b,c,\{D=0.\})$: creates the binary quadratic form
$ax^2+bxy+cy^2$. If $b^2-4ac>0$, initialize \idx{Shanks}' distance
function to $D$.
\syn{Qfb0}{a,b,c,D,\var{prec}}. Also available are
$\teb{qfi}(a,b,c)$ (when $b^2-4ac<0$), and
$\teb{qfr}(a,b,c,d)$ (when $b^2-4ac>0$).\sidx{binary quadratic form}
\subsecidx{Ser}$(x,\{v=x\})$: transforms the object $x$ into a power series
with main variable $v$ ($x$ by default). If $x$ is a scalar, this gives a
constant power series with precision given by the default \kbd{serieslength}
(corresponding to the C global variable \kbd{precdl}). If $x$ is a
polynomial, the precision is the greatest of \kbd{precdl} and the degree of
the polynomial. If $x$ is a vector, the precision is similarly given, and the
coefficients of the vector are understood to be the coefficients of the power
series starting from the constant term (i.e.~the reverse of the function
\kbd{Pol}).
The warning given for \kbd{Pol} applies here: this is not a substitution
function.
\syn{gtoser}{x,v}, where $v$ is a variable number (i.e.~a C integer).
\subsecidx{Set}$(\{x=[\,]\})$: converts $x$ into a set, i.e.~into a row vector
with strictly increasing entries. $x$ can be of any type, but is most useful
when $x$ is already a vector. The components of $x$ are put in canonical form
(type \typ{STR}) so as to be easily sorted. To recover an ordinary \kbd{GEN}
from such an element, you can apply \tet{eval} to it.
\syn{gtoset}{x}.
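For example (note that in this version the components are converted to
strings):
\bprog
? Set([3, 1, 2, 1])
%1 = ["1", "2", "3"]
@eprog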
\subsecidx{Str}$(\{x=\hbox{\kbd{""}}\},\{\fl=0\})$: converts $x$ into a
character string (type \typ{STR}, the empty string if $x$ is omitted). To
recover an ordinary \kbd{GEN} from a string, apply \kbd{eval} to it. The
arguments of \kbd{Str} are evaluated in string context (see
\secref{se:strings}). If \fl\ is set, treat $x$ as a filename and perform
\idx{environment expansion} on the string. This feature can be used to read
\idx{environment variable} values.
\bprog
? i = 1; Str("x" i)
%1 = "x1"
? eval(%)
%2 = x1
? Str("$HOME", 1)
%3 = "/home/pari"
@eprog
\syn{strtoGENstr}{x,\fl}. This function is mostly useless in library mode. Use
the pair \tet{strtoGEN}/\tet{GENtostr} to convert between \kbd{char*} and
\kbd{GEN}.
\subsecidx{Vec}$({x=[\,]})$: transforms the object $x$ into a row vector. The
vector will be with one component only, except when $x$ is a vector/matrix or
a quadratic form (in which case the resulting vector is simply the initial
object considered as a row vector), but more importantly when $x$ is a
polynomial or a power series. In the case of a polynomial, the coefficients
of the vector start with the leading coefficient of the polynomial, while
for power series only the significant coefficients are taken into account,
but this time by increasing order of degree.
\syn{gtovec}{x}.
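For a polynomial, the leading coefficient comes first:
\bprog
? Vec(x^3 + 2*x + 1)
%1 = [1, 0, 2, 1]
@eprog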
\subsecidx{binary}$(x)$: outputs the vector of the binary digits of $|x|$.
Here $x$ can be an integer, a real number (in which case the result has two
components, one for the integer part, one for the fractional part) or a
vector/matrix.
\syn{binaire}{x}.
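For example, since $13 = 1101_2$:
\bprog
? binary(13)
%1 = [1, 1, 0, 1]
@eprog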
\subsecidx{bitand}$(x,y)$: bitwise \tet{and}\sidx{bitwise and} of two
integers $x$ and $y$, that is the integer
$$\sum (x_i~\kbd{and}~y_i) 2^i$$
Negative numbers behave as if modulo a huge power of $2$.
\syn{gbitand}{x,y}.
\subsecidx{bitneg}$(x,\{n=-1\})$: \idx{bitwise negation} of an integer $x$,
truncated to $n$ bits, that is the integer
$$\sum_{i=0}^n \kbd{not}(x_i) 2^i$$
The special case $n=-1$ means no truncation: an infinite sequence of
leading $1$s is then represented as a negative number.
Negative numbers behave as if modulo a huge power of $2$.
\syn{gbitneg}{x}.
\subsecidx{bitnegimply}$(x,y)$: bitwise negated imply of two integers $x$
and $y$ (or \kbd{not} $(x \Rightarrow y)$), that is the integer
$$\sum (x_i~\kbd{and not}(y_i)) 2^i$$
Negative numbers behave as if modulo a huge power of $2$.
\syn{gbitnegimply}{x,y}.
\subsecidx{bitor}$(x,y)$: bitwise (inclusive) \tet{or}\sidx{bitwise
inclusive or} of two integers $x$ and $y$, that is the integer
$$\sum (x_i~\kbd{or}~y_i) 2^i$$
Negative numbers behave as if modulo a huge power of $2$.
\syn{gbitor}{x,y}.
\subsecidx{bittest}$(x,n)$: outputs the $n^{\text{th}}$ bit of $|x|$ starting
from the right (i.e.~the coefficient of $2^n$ in the binary expansion of $x$).
The result is 0 or 1. To extract several bits at once as a vector, pass a
vector for $n$.
\syn{bittest}{x,n}, where $n$ and the result are \kbd{long}s.
\subsecidx{bitxor}$(x,y)$: bitwise (exclusive) \tet{or}\sidx{bitwise
exclusive or} of two integers $x$ and $y$, that is the integer
$$\sum (x_i~\kbd{xor}~y_i) 2^i$$
Negative numbers behave as if modulo a huge power of $2$.
\syn{gbitxor}{x,y}.
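The bitwise operators can be illustrated on small integers ($6 = 110_2$,
$3 = 011_2$); a negative first argument behaves like an infinite string of
leading $1$s:
\bprog
? bitand(6, 3)
%1 = 2
? bitor(6, 3)
%2 = 7
? bitxor(6, 3)
%3 = 5
? bitand(-1, 5)
%4 = 5
@eprog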
\subsecidx{ceil}$(x)$: ceiling of $x$. When $x$ is in $\R$,
the result is the smallest integer greater than or equal to $x$. Applied to a
rational function, $\kbd{ceil}(x)$ returns the Euclidean quotient of the
numerator by the denominator.
\syn{gceil}{x}.
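For example:
\bprog
? ceil(5/2)
%1 = 3
? ceil(-5/2)
%2 = -2
@eprog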
\subsecidx{centerlift}$(x,\{v\})$: lifts an element $x=a \bmod n$ of $\Z/n\Z$
to $a$ in $\Z$, and similarly lifts a polmod to a polynomial. This is the
same as \kbd{lift} except that in the particular case of elements of
$\Z/n\Z$, the lift $y$ is chosen so that $-n/2 < y \le n/2$.

\subsecidx{round}$(x,\{\&e\})$: if $x$ is in $\R$, rounds $x$ to the nearest
integer and sets $e$ to the number of error bits, that is the binary exponent
of the difference between the original and the rounded value. If the exponent
of $x$ is too large compared to its precision (i.e.~$e>0$), the result is
undefined and an error occurs if $e$ was not given.
\misctitle{Important remark:} note that, contrary to the other truncation
functions, this function operates on every coefficient at every level of a
PARI object. For example
$$\text{truncate}\left(\dfrac{2.4*X^2-1.7}{X}\right)=2.4*X,$$ whereas
$$\text{round}\left(\dfrac{2.4*X^2-1.7}{X}\right)=\dfrac{2*X^2-2}{X}.$$
An important use of \kbd{round} is to get exact results after a long
approximate computation, when theory tells you that the coefficients
must be integers.
\syn{grndtoi}{x,\&e}, where $e$ is a \kbd{long} integer. Also available is
$\teb{ground}(x)$.
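The difference between \kbd{round} and \kbd{truncate} on real arguments:
\bprog
? round(2.7)
%1 = 3
? truncate(2.7)
%2 = 2
? round(-2.7)
%3 = -3
? truncate(-2.7)
%4 = -2
@eprog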
\subsecidx{simplify}$(x)$: this function tries to simplify the object $x$ as
much as it can. The simplifications do not concern rational functions (which
PARI automatically tries to simplify), but type changes. Specifically, a
complex or quadratic number whose imaginary part is exactly equal to 0
(i.e.~not a real zero) is converted to its real part, and a polynomial of
degree zero is converted to its constant term. For all types, this of course
occurs recursively. This function is useful in any case, but in particular
before the use of arithmetic functions which expect integer arguments, and
not for example a complex number of 0 imaginary part and integer real part
(which is however printed as an integer).
\syn{simplify}{x}.
\subsecidx{sizebyte}$(x)$: outputs the total number of bytes occupied by the
tree representing the PARI object $x$.
\syn{taille2}{x} which returns a \kbd{long}. The
function \teb{taille} returns the number of \var{words} instead.
\subsecidx{sizedigit}$(x)$: outputs a quick bound for the number of decimal
digits of (the components of) $x$, off by at most $1$. If you want the
exact value, you can use \kbd{length(Str(x))}, which is much slower.
\syn{sizedigit}{x} which returns a \kbd{long}.
\subsecidx{truncate}$(x,\{\&e\})$: truncates $x$ and sets $e$ to the number of
error bits. When $x$ is in $\R$, this means that the part after the decimal
point is chopped away, and $e$ is the binary exponent of the difference between
the original and the truncated value (the ``fractional part''). If the
exponent of $x$ is too large compared to its precision (i.e.~$e>0$), the
result is undefined and an error occurs if $e$ was not given. The function
applies componentwise on rational functions and vector / matrices; $e$ is
then the maximal number of error bits.
Note a very special use of \kbd{truncate}: when applied to a power series, it
transforms it into a polynomial or a rational function with denominator
a power of $X$, by chopping away the $O(X^k)$. Similarly, when applied to
a $p$-adic number, it transforms it into an integer or a rational number
by chopping away the $O(p^k)$.
\syn{gcvtoi}{x,\&e}, where $e$ is a \kbd{long} integer. Also available is
\teb{gtrunc}$(x)$.
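For example, chopping the \kbd{O} term of a power series and of a $p$-adic
number:
\bprog
? truncate(1 + x + O(x^2))
%1 = x + 1
? truncate(7 + O(2^5))
%2 = 7
@eprog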
\subsecidx{valuation}$(x,p)$:\label{se:valuation} computes the highest
exponent of $p$ dividing $x$. If $p$ is of type integer, $x$ must be an
integer, an integermod whose modulus is divisible by $p$, a fraction, a
$q$-adic number with $q=p$, or a polynomial or power series in which case the
valuation is the minimum of the valuation of the coefficients.
If $p$ is of type polynomial, $x$ must be of type polynomial or rational
function, and also a power series if $x$ is a monomial. Finally, the
valuation of a vector, complex or quadratic number is the minimum of the
component valuations.
If $x=0$, the result is \kbd{VERYBIGINT} ($2^{31}-1$ for 32-bit machines or
$2^{63}-1$ for 64-bit machines) if $x$ is an exact object. If $x$ is a
$p$-adic number or a power series, the result is the exponent of the zero.
Any other type combination gives an error.
\syn{ggval}{x,p}, and the result is a \kbd{long}.
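For instance, since $48 = 2^4\cdot 3$:
\bprog
? valuation(48, 2)
%1 = 4
? valuation(x^3 + 2*x^2, x)
%2 = 2
@eprog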
\subsecidx{variable}$(x)$: gives the main variable of the object $x$, and
$p$ if $x$ is a $p$-adic number. Gives an error if $x$ has no variable
associated to it. Note that this function is useful only in GP, since in
library mode the function \kbd{gvar} is more appropriate.
\syn{gpolvar}{x}. However, in library mode, this function should not be used.
Instead, test whether $x$ is a $p$-adic (type \typ{PADIC}), in which case $p$
is in $x[2]$, or call the function $\key{gvar}(x)$ which returns the variable
\var{number} of $x$ if it exists, \kbd{BIGINT} otherwise.
\section{Transcendental functions}\label{se:trans}
As a general rule, which of course in some cases may have exceptions,
transcendental functions operate in the following way:
$\bullet$ If the argument is either an integer, a real, a rational, a complex
or a quadratic number, it is, if necessary, first converted to a real (or
complex) number using the current \idx{precision} held in the default
\kbd{realprecision}. Note that only exact arguments are converted, while
inexact arguments such as reals are not.
Under GP this is transparent to the user, but when programming in library
mode, care must be taken to supply a meaningful parameter \var{prec} as the
last argument of the function if the first argument is an exact object.
This parameter is ignored if the argument is inexact.
Note that in library mode the precision argument \var{prec} is a word
count including codewords, i.e.~represents the length in words of a real
number, while under GP the precision (which is changed by the metacommand
\b{p} or using \kbd{default(realprecision,...)}) is the number of significant
decimal digits.
Note that some accuracies attainable on 32-bit machines cannot be attained
on 64-bit machines for parity reasons. For example the default GP accuracy
is 28 decimal digits on 32-bit machines, corresponding to \var{prec} having
the value 5, but this cannot be attained on 64-bit machines.\smallskip
After possible conversion, the function is computed. Note that even if the
argument is real, the result may be complex (e.g.~$\text{acos}(2.0)$ or
$\text{acosh}(0.0)$). Note also that the principal branch is always chosen.
$\bullet$ If the argument is an integermod or a $p$-adic, at present only a
few functions like \kbd{sqrt} (square root), \kbd{sqr} (square), \kbd{log},
\kbd{exp}, powering, \kbd{teichmuller} (Teichm\"uller character) and
\kbd{agm} (arithmetic-geometric mean) are implemented.
Note that in the case of a $2$-adic number, $\kbd{sqr}(x)$ may not be
identical to $x*x$: for example if $x = 1+O(2^5)$ and $y = 1+O(2^5)$ then
$x*y = 1+O(2^5)$ while $\kbd{sqr}(x) = 1+O(2^6)$. Here, $x * x$ yields the
same result as $\kbd{sqr}(x)$ since the two operands are known to be {\it
identical}. The same statement holds true for $p$-adics raised to the power
$n$, where $v_p(n) > 0$.
\misctitle{Remark:} note that if we wanted to be strictly consistent with
the PARI philosophy, we should have $x*y = (4 \mod 8)$ and $\kbd{sqr}(x) =
(4 \mod 32)$ when both $x$ and $y$ are congruent to $2$ modulo $4$.
However, since integermod is an exact object, PARI assumes that the modulus
must not change, and the result is hence $(0\, \mod\, 4)$ in both cases. On
the other hand, $p$-adics are not exact objects, hence are treated
differently.
$\bullet$ If the argument is a polynomial, power series or rational function,
it is, if necessary, first converted to a power series using the current
precision held in the variable \tet{precdl}. Under GP this again is
transparent to the user. When programming in library mode, however, the
global variable \kbd{precdl} must be set before calling the function if the
argument has an exact type (i.e.~not a power series). Here \kbd{precdl} is
not an argument of the function, but a global variable.
Then the Taylor series expansion of the function around $X=0$ (where $X$ is
the main variable) is computed to a number of terms depending on the number
of terms of the argument and the function being computed.
$\bullet$ If the argument is a vector or a matrix, the result is the
componentwise evaluation of the function. In particular, transcendental
functions on square matrices, which are not implemented in the present
version \vers\ (see Appendix~B however), will have a slightly different name
if they are implemented some day.
\subseckbd{\pow}: If $y$ is not of type integer, \kbd{x\pow y} has the same
effect as \kbd{exp(y*ln(x))}. It can be applied to $p$-adic numbers as
well as to the more usual types.\sidx{powering}
\syn{gpow}{x,y,\var{prec}}.
\subsecidx{Euler}: Euler's constant $0.57721\cdots$. Note that \kbd{Euler}
is one of the few special reserved names which cannot be used for variables
(the others are \kbd{I} and \kbd{Pi}, as well as all function names).
\label{se:euler}
\syn{mpeuler}{\var{prec}} where $\var{prec}$ \var{must} be given. Note that
this creates $\gamma$ on the PARI stack, but a copy is also created on the
heap for quicker computations next time the function is called.
\subsecidx{I}: the complex number $\sqrt{-1}$.
The library syntax is the global variable \kbd{gi} (of type \kbd{GEN}).
\subsecidx{Pi}: the constant $\pi$ ($3.14159\cdots$).\label{se:pi}
\syn{mppi}{\var{prec}} where $\var{prec}$ \var{must} be given. Note that this
creates $\pi$ on the PARI stack, but a copy is also created on the heap for
quicker computations next time the function is called.
\subsecidx{abs}$(x)$: absolute value of $x$ (modulus if $x$ is complex).
Power series and rational functions are not allowed. Contrary to most
transcendental functions, an exact argument is \var{not} converted to a real
number before applying \kbd{abs} and an exact result is returned if possible.
\bprog
? abs(-1)
%1 = 1
? abs(3/7 + 4/7*I)
%2 = 5/7
? abs(1 + I)
%3 = 1.414213562373095048801688724
@eprog
\noindent If $x$ is a polynomial, returns $-x$ if the leading coefficient is
real and negative, else returns $x$. For a power series, the constant
coefficient is considered instead.
\syn{gabs}{x,\var{prec}}.
\subsecidx{acos}$(x)$: principal branch of $\text{cos}^{-1}(x)$,
i.e.~such that $\text{Re(acos}(x))\in [0,\pi]$. If
$x\in \R$ and $|x|>1$, then $\text{acos}(x)$ is complex.
\syn{gacos}{x,\var{prec}}.
\subsecidx{acosh}$(x)$: principal branch of $\text{cosh}^{-1}(x)$,
i.e.~such that $\text{Im(acosh}(x))\in [0,\pi]$. If
$x\in \R$ and $x<1$, then $\text{acosh}(x)$ is complex.
\syn{gach}{x,\var{prec}}.
\subsecidx{agm}$(x,y)$: arithmetic-geometric mean of $x$ and $y$. In the
case of complex or negative numbers, the principal square root is always
chosen. $p$-adic or power series arguments are also allowed. Note that
a $p$-adic agm exists only if $x/y$ is congruent to 1 modulo $p$ (modulo
16 for $p=2$). $x$ and $y$ cannot both be vectors or matrices.
\syn{agm}{x,y,\var{prec}}.
\subsecidx{arg}$(x)$: argument of the complex number $x$, such that
$-\pi<\text{arg}(x)\le\pi$.
\syn{garg}{x,\var{prec}}.
\subsecidx{asin}$(x)$: principal branch of $\text{sin}^{-1}(x)$, i.e.~such
that $\text{Re(asin}(x))\in [-\pi/2,\pi/2]$. If $x\in \R$ and $|x|>1$ then
$\text{asin}(x)$ is complex.
\syn{gasin}{x,\var{prec}}.
\subsecidx{asinh}$(x)$: principal branch of $\text{sinh}^{-1}(x)$, i.e.~such
that $\text{Im(asinh}(x))\in [-\pi/2,\pi/2]$.
\syn{gash}{x,\var{prec}}.
\subsecidx{atan}$(x)$: principal branch of $\text{tan}^{-1}(x)$, i.e.~such
that $\text{Re(atan}(x))\in{} ]-\pi/2,\pi/2[$.
\syn{gatan}{x,\var{prec}}.
\subsecidx{atanh}$(x)$: principal branch of $\text{tanh}^{-1}(x)$, i.e.~such
that $\text{Im(atanh}(x))\in{} ]-\pi/2,\pi/2]$. If $x\in \R$ and $|x|>1$ then
$\text{atanh}(x)$ is complex.
\syn{gath}{x,\var{prec}}.
\subsecidx{bernfrac}$(x)$: Bernoulli number\sidx{Bernoulli numbers} $B_x$,
where $B_0=1$, $B_1=-1/2$, $B_2=1/6$,\dots, expressed as a rational number.
The argument $x$ should be of type integer.
\syn{bernfrac}{x}.
\subsecidx{bernreal}$(x)$: Bernoulli number\sidx{Bernoulli numbers}
$B_x$, as \kbd{bernfrac}, but $B_x$ is returned as a real number
(with the current precision).
\syn{bernreal}{x,\var{prec}}.
\subsecidx{bernvec}$(x)$: creates a vector containing, as rational numbers,
the \idx{Bernoulli numbers} $B_0$, $B_2$,\dots, $B_{2x}$. These Bernoulli
numbers can then be used as follows. Assume that this vector has been put
into a variable, say \kbd{bernint}. Then you can define under GP:
\bprog
bern(x) =
{
  if (x == 1, return(-1/2));
  if (x < 0 || x % 2, return(0));
  bernint[x/2 + 1]
}
@eprog
\noindent and then \kbd{bern(k)} gives the Bernoulli number of index $k$ as a
rational number, exactly as \kbd{bernreal(k)} gives it as a real number. If
you need only a few values, calling \kbd{bernfrac(k)} each time will be much
more efficient than computing the huge vector above.
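For instance:
\bprog
? bernvec(3)
%1 = [1, 1/6, -1/30, 1/42]
? bernfrac(4)
%2 = -1/30
@eprog
\noindent Here \kbd{bernvec(3)} returns $[B_0, B_2, B_4, B_6]$.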
\syn{bernvec}{x}.
\subsecidx{besseljh}$(n,x)$: $J$-Bessel function of half integral index.
More precisely, $\kbd{besseljh}(n,x)$ computes $J_{n+1/2}(x)$ where $n$
must be of type integer, and $x$ is any element of $\C$. In the
present version \vers, this function is not very accurate when $x$ is
small.
\syn{jbesselh}{n,x,\var{prec}}.
\subsecidx{besselk}$(\var{nu},x,\{\fl=0\})$: $K$-Bessel function of index
\var{nu} (which can be complex) and argument $x$. Only real and positive
arguments
$x$ are allowed in the present version \vers. If $\fl$ is equal to 1,
uses another implementation of this function which is often faster.
\syn{kbessel}{\var{nu},x,\var{prec}} and
$\teb{kbessel2}(\var{nu},x,\var{prec})$ respectively.
\subsecidx{cos}$(x)$: cosine of $x$.
\syn{gcos}{x,\var{prec}}.
\subsecidx{cosh}$(x)$: hyperbolic cosine of $x$.
\syn{gch}{x,\var{prec}}.
\subsecidx{cotan}$(x)$: cotangent of $x$.
\syn{gcotan}{x,\var{prec}}.
\subsecidx{dilog}$(x)$: principal branch of the dilogarithm of $x$,
i.e.~analytic continuation of the power series
$\text{Li}_2(x)=\sum_{n\ge1}x^n/n^2$.
\syn{dilog}{x,\var{prec}}.
\subsecidx{eint1}$(x,\{n\})$: exponential integral
$\int_x^\infty \dfrac{e^{-t}}{t}\,dt$ ($x\in\R$).
If $n$ is present, outputs the $n$-dimensional vector
$[\kbd{eint1}(x),\dots,\kbd{eint1}(nx)]$ ($x \geq 0$). This is faster than
repeatedly calling \kbd{eint1($i$ * x)}.
\syn{veceint1}{x,n,\var{prec}}. Also available is
$\teb{eint1}(x,\var{prec})$.
\subsecidx{erfc}$(x)$: complementary error function
$(2/\sqrt\pi)\int_x^\infty e^{-t^2}\,dt$.
\syn{erfc}{x,\var{prec}}.
\subsecidx{eta}$(x,\{\fl=0\})$: \idx{Dedekind}'s $\eta$ function, without the
$q^{1/24}$. This means the following: if $x$ is a complex number with positive
imaginary part, the result is $\prod_{n=1}^\infty(1-q^n)$, where
$q=e^{2i\pi x}$. If $x$ is a power series (or can be converted to a power
series) with positive valuation, the result is $\prod_{n=1}^\infty(1-x^n)$.
If $\fl=1$ and $x$ can be converted to a complex number (i.e.~is not a power
series), computes the true $\eta$ function, including the leading $q^{1/24}$.
\syn{eta}{x,\var{prec}}.
\subsecidx{exp}$(x)$: exponential of $x$.
$p$-adic arguments with positive valuation are accepted.
\syn{gexp}{x,\var{prec}}.
\subsecidx{gammah}$(x)$: gamma function evaluated at the argument
$x+1/2$. When $x$ is an integer, this is much faster than using
$\kbd{gamma}(x+1/2)$.
\syn{ggamd}{x,\var{prec}}.
\subsecidx{gamma}$(x)$: gamma function of $x$. In the present version
\vers\ the $p$-adic gamma function is not implemented.
\syn{ggamma}{x,\var{prec}}.
\subsecidx{hyperu}$(a,b,x)$: $U$-confluent hypergeometric function with
parameters $a$ and $b$. The parameters $a$ and $b$ can be complex but
the present implementation requires $x$ to be positive.
\syn{hyperu}{a,b,x,\var{prec}}.
\subsecidx{incgam}$(s,x,\{y\})$: incomplete gamma function.
$x$ must be positive and $s$ real. The result returned is $\int_x^\infty
e^{-t}t^{s-1}\,dt$. When $y$ is given, assume (of course without checking!)
that $y=\Gamma(s)$. For small $x$, this will tremendously speed up the
computation.
\syn{incgam}{s,x,\var{prec}} and $\teb{incgam4}(s,x,y,\var{prec})$,
respectively. There exist also the functions \teb{incgam1} and
\teb{incgam2} which are used for internal purposes.
\subsecidx{incgamc}$(s,x)$: complementary incomplete gamma function.
The arguments $s$ and $x$ must be positive. The result returned is
$\int_0^x e^{-t}t^{s-1}\,dt$, when $x$ is not too large.
\syn{incgam3}{s,x,\var{prec}}.
\subsecidx{log}$(x,\{\fl=0\})$: principal branch of the natural logarithm of
$x$, i.e.~such that $\text{Im(ln}(x))\in{} ]-\pi,\pi]$. The result is complex
(with imaginary part equal to $\pi$) if $x\in \R$ and $x<0$.
$p$-adic arguments are also accepted for $x$, with the convention that
$\ln(p)=0$. Hence in particular $\exp(\ln(x))/x$ will not in general be
equal to 1 but to a $(p-1)$-th root of unity (or $\pm1$ if $p=2$)
times a power of $p$.
If $\fl$ is equal to 1 and $x$ is real, an agm formula suggested by Mestre
is used; otherwise this is identical to \kbd{log}.
\syn{glog}{x,\var{prec}} or $\teb{glogagm}(x,\var{prec})$.
\subsecidx{lngamma}$(x)$: principal branch of the logarithm of the gamma
function of $x$. Can have much larger arguments than \kbd{gamma} itself.
In the present version \vers, the $p$-adic \kbd{lngamma} function is not
implemented.
\syn{glngamma}{x,\var{prec}}.
\subsecidx{polylog}$(m,x,\{\fl=0\})$: one of the different polylogarithms,
depending on \fl:
If $\fl=0$ or is omitted: $m^\text{th}$ polylogarithm of $x$, i.e.~analytic
continuation of the power series $\text{Li}_m(x)=\sum_{n\ge1}x^n/n^m$. The
program uses the power series when $|x|^2\le1/2$, and the power series
expansion in $\log(x)$ otherwise. It is valid in a large domain (at least
$|x|<230$), but should not be used too far away from the unit circle since it
is then better to use the functional equation linking the value at $x$ to the
value at $1/x$, which takes a trivial form for the variant below. Power
series, polynomial, rational and vector/matrix arguments are allowed.
For the variants to follow we need a notation: let $\Re_m$
denote $\Re$ or $\Im$ depending on whether $m$ is odd or even.
If $\fl=1$: modified $m^\text{th}$ polylogarithm of $x$, called
$\tilde D_m(x)$ in Zagier, defined for $|x|\le1$ by
$$\Re_m\left(\sum_{k=0}^{m-1} \dfrac{(-\log|x|)^k}{k!}\text{Li}_{m-k}(x)
+\dfrac{(-\log|x|)^{m-1}}{m!}\log|1-x|\right).$$
If $\fl=2$: modified $m^\text{th}$ polylogarithm of $x$,
called $D_m(x)$ in Zagier, defined for $|x|\le1$ by
$$\Re_m\left(\sum_{k=0}^{m-1}\dfrac{(-\log|x|)^k}{k!}\text{Li}_{m-k}(x)
-\dfrac{1}{2}\dfrac{(-\log|x|)^m}{m!}\right).$$
If $\fl=3$: another modified $m^\text{th}$
polylogarithm of $x$, called $P_m(x)$ in Zagier, defined for $|x|\le1$ by
$$\Re_m\left(\sum_{k=0}^{m-1}\dfrac{2^kB_k}{k!}(\log|x|)^k\text{Li}_{m-k}(x)
-\dfrac{2^{m-1}B_m}{m!}(\log|x|)^m\right).$$
These three functions satisfy the functional equation
$f_m(1/x)=(-1)^{m-1}f_m(x)$.
\syn{polylog0}{m,x,\fl,\var{prec}}.
\subsecidx{psi}$(x)$: the $\psi$-function of $x$, i.e.~the
logarithmic derivative $\Gamma'(x)/\Gamma(x)$.
\syn{gpsi}{x,\var{prec}}.
\subsecidx{sin}$(x)$: sine of $x$.
\syn{gsin}{x,\var{prec}}.
\subsecidx{sinh}$(x)$: hyperbolic sine of $x$.
\syn{gsh}{x,\var{prec}}.
\subsecidx{sqr}$(x)$: square of $x$. This operation is not completely
straightforward, i.e.~identical to $x * x$, since it can usually be
computed more efficiently (roughly one-half of the elementary
multiplications can be saved). Also, squaring a $2$-adic number increases
its precision. For example,
\bprog
? (1 + O(2^4))^2
%1 = 1 + O(2^5)
? (1 + O(2^4)) * (1 + O(2^4))
%2 = 1 + O(2^4)
@eprog\noindent
Note that this function is also called whenever one multiplies two objects
which are known to be {\it identical}, e.g.~they are the value of the same
variable, or we are computing a power.
\bprog
? x = (1 + O(2^4)); x * x
%3 = 1 + O(2^5)
? (1 + O(2^4))^4
%4 = 1 + O(2^6)
@eprog
\noindent(note the difference between \kbd{\%2} and \kbd{\%3} above).
\syn{gsqr}{x}.
\subsecidx{sqrt}$(x)$: principal branch of the square root of $x$,
i.e.~such that $\text{Arg}(\text{sqrt}(x))\in{} ]-\pi/2, \pi/2]$, or in other
words such that $\Re(\text{sqrt}(x))>0$ or $\Re(\text{sqrt}(x))=0$ and
$\Im(\text{sqrt}(x))\ge 0$. If $x\in \R$ and $x<0$, then the result is
complex with positive imaginary part.
Integermod a prime and $p$-adics are allowed as arguments. In that case,
the square root (if it exists) which is returned is the one whose
first $p$-adic digit (or its unique $p$-adic digit in the case of
integermods) is in the interval $[0,p/2]$. When the argument is an
integermod a non-prime (or a non-prime-adic), the result is undefined.
\syn{gsqrt}{x,\var{prec}}.
\subsecidx{sqrtn}$(x,n,\{\&z\})$: principal branch of the $n$th root of $x$,
i.e.~such that $\text{Arg}(\text{sqrtn}(x,n))\in{} ]-\pi/n, \pi/n]$.
Integermod a prime and $p$-adics are allowed as arguments.
If $z$ is present, it is set to a suitable root of unity allowing one to
recover all the other roots. If this is not possible, $z$ is set to zero.
The following script computes all roots in all possible cases:
\bprog
sqrtnall(x,n)=
{
  local(V, r, z, r2);
  r = sqrtn(x,n, &z);
  if (!z, error("Impossible case in sqrtn"));
  if (type(x) == "t_INTMOD" || type(x) == "t_PADIC",
    r2 = r*z; n = 1;
    while (r2 != r, r2 *= z; n++));
  V = vector(n); V[1] = r;
  for(i=2, n, V[i] = V[i-1]*z);
  V
}
addhelp(sqrtnall,"sqrtnall(x,n):compute the vector of nth-roots of x");
@eprog\noindent
\syn{gsqrtn}{x,n,\&z,\var{prec}}.
\subsecidx{tan}$(x)$: tangent of $x$.
\syn{gtan}{x,\var{prec}}.
\subsecidx{tanh}$(x)$: hyperbolic tangent of $x$.
\syn{gth}{x,\var{prec}}.
\subsecidx{teichmuller}$(x)$: Teichm\"uller character of the $p$-adic number
$x$.
\syn{teich}{x}.
\subsecidx{theta}$(q,z)$: Jacobi sine theta-function.
\syn{theta}{q,z,\var{prec}}.
\subsecidx{thetanullk}$(q,k)$: $k$-th derivative at $z=0$ of
$\kbd{theta}(q,z)$.
\syn{thetanullk}{q,k,\var{prec}}, where $k$ is a \kbd{long}.
\subsecidx{weber}$(x,\{\fl=0\})$: one of Weber's three $f$ functions.
If $\fl=0$, returns
$$f(x)=\exp(-i\pi/24)\cdot\eta((x+1)/2)\,/\,\eta(x) \quad\hbox{such that}\quad
j=(f^{24}-16)^3/f^{24}\,,$$
where $j$ is the elliptic $j$-invariant (see the function \kbd{ellj}).
If $\fl=1$, returns
$$f_1(x)=\eta(x/2)\,/\,\eta(x)\quad\hbox{such that}\quad
j=(f_1^{24}+16)^3/f_1^{24}\,.$$
Finally, if $\fl=2$, returns
$$f_2(x)=\sqrt{2}\eta(2x)\,/\,\eta(x)\quad\hbox{such that}\quad
j=(f_2^{24}+16)^3/f_2^{24}.$$
Note the identities $f^8=f_1^8+f_2^8$ and $ff_1f_2=\sqrt2$.
\syn{weber0}{x,\fl,\var{prec}}, or
$\teb{wf}(x,\var{prec})$, $\teb{wf1}(x,\var{prec})$ or
$\teb{wf2}(x,\var{prec})$.
\subsecidx{zeta}$(s)$: Riemann's zeta function\sidx{Riemann zeta-function}
$\zeta(s)=\sum_{n\ge1}n^{-s}$, computed using the \idx{Euler-Maclaurin}
summation formula, except when $s$ is of type integer, in which case it
is computed using Bernoulli numbers\sidx{Bernoulli numbers} for
$s\le0$ or $s>0$ and even, and using modular forms for $s>0$ and odd.
\syn{gzeta}{s,\var{prec}}.
\section{Arithmetic functions}\label{se:arithmetic}
These functions are by definition functions whose natural domain of
definition is either $\Z$ (or $\Z_{>0}$), or sometimes polynomials
over a base ring. Functions which concern polynomials exclusively will be
explained in the next section. The way these functions are used is
completely different from transcendental functions: in general only the types
integer and polynomial are accepted as arguments. If a vector or matrix type
is given, the function will be applied on each coefficient independently.
In the present version \vers, all arithmetic functions in the narrow sense
of the word~--- Euler's totient\sidx{Euler totient function} function, the
\idx{Moebius} function, the sums over divisors or powers of divisors
etc.--- call, after trial division by small primes, the same versatile
factoring machinery described under \kbd{factorint}. It includes
\idx{Shanks SQUFOF}, \idx{Pollard Rho}, \idx{ECM} and \idx{MPQS} stages, and
has an early exit option for the functions \teb{moebius} and (the integer
function underlying) \teb{issquarefree}. Note that it relies on a (fairly
strong) probabilistic primality test: numbers found to be strong
pseudo-primes after 10 successful trials of the \idx{Rabin-Miller} test are
declared primes.
\bigskip
\subsecidx{addprimes}$(\{x=[\,]\})$: adds the primes contained in the vector
$x$ (or the single integer $x$) to the table computed upon GP initialization
(by \kbd{pari\_init} in library mode), and returns a row vector whose first
entries contain all primes added by the user and whose last entries have been
filled up with 1's. In total the returned row vector has 100 components.
Whenever \kbd{factor} or \kbd{smallfact} is subsequently called, first the
primes in the table computed by \kbd{pari\_init} will be checked, and then
the additional primes in this table. If $x$ is empty or omitted, just returns
the current list of extra primes.
The entries in $x$ are not checked for primality. They need only be positive
integers not divisible by any of the pre-computed primes. It is in fact a nice
trick to add composite numbers, for example numbers which the function
$\kbd{factor}(x,0)$ was not able to factor. In case the message ``impossible
inverse modulo $\langle$\var{some integermod}$\rangle$'' shows up afterwards,
you have just stumbled over a non-trivial factor. Note that the arithmetic
functions in the narrow sense, like \teb{eulerphi}, do \var{not} use this
extra table.
The present PARI version \vers\ allows up to 100 user-specified
primes to be appended to the table. This limit may be changed
by altering \kbd{NUMPRTBELT} in file \kbd{init.c}. To remove primes from the
list use \kbd{removeprimes}.
\syn{addprimes}{x}.
\subsecidx{bestappr}$(x,k)$: if $x\in\R$, finds the best rational
approximation to $x$ with denominator at most equal to $k$ using continued
fractions.
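For instance, using the well-known convergents of $\pi$:
\bprog
? bestappr(Pi, 100)
%1 = 22/7
? bestappr(Pi, 1000)
%2 = 355/113
@eprog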
\syn{bestappr}{x,k}.
\subsecidx{bezout}$(x,y)$: finds $u$ and $v$ minimal in a
natural sense such that $x*u+y*v=\text{gcd}(x,y)$. The arguments
must be both integers or both polynomials, and the result is a
row vector with three components $u$, $v$, and $\text{gcd}(x,y)$.
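For instance (here the minimal solution is $u=-2$, $v=1$):
\bprog
? bezout(6, 15)
%1 = [-2, 1, 3]
@eprog
\noindent Indeed, $6\cdot(-2) + 15\cdot 1 = 3 = \text{gcd}(6,15)$.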
\syn{vecbezout}{x,y} to get the vector, or $\teb{gbezout}(x,y, \&u, \&v)$
which gives as result the address of the created gcd, and puts
the addresses of the corresponding created objects into $u$ and $v$.
\subsecidx{bezoutres}$(x,y)$: as \kbd{bezout}, with the resultant of $x$ and
$y$ replacing the gcd.
\syn{vecbezoutres}{x,y} to get the vector, or $\teb{subresext}(x,y, \&u,
\&v)$ which gives as result the address of the created gcd, and puts the
addresses of the corresponding created objects into $u$ and $v$.
\subsecidx{bigomega}$(x)$: number of prime divisors of $|x|$ counted with
multiplicity. $x$ must be an integer.
\syn{bigomega}{x}, the result is a \kbd{long}.
\subsecidx{binomial}$(x,y)$: \idx{binomial coefficient} $\binom x y$.
Here $y$ must be an integer, but $x$ can be any PARI object.
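For instance:
\bprog
? binomial(10, 5)
%1 = 252
? binomial(x, 2)
%2 = 1/2*x^2 - 1/2*x
@eprog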
\syn{binome}{x,y}, where $y$ must be a \kbd{long}.
\subsecidx{chinese}$(x,y)$: if $x$ and $y$ are both integermods or both
polmods, creates (with the same type) a $z$ in the same residue class
as $x$ and in the same residue class as $y$, if it is possible.
This function also allows vector and matrix arguments, in which case the
operation is recursively applied to each component of the vector or matrix.
For polynomial arguments, it is applied to each coefficient. Finally
$\kbd{chinese}(x,x) = x$ regardless of the type of $x$; this allows vector
arguments to contain other data, so long as they are identical in both
vectors.
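For instance:
\bprog
? chinese(Mod(1,4), Mod(2,3))
%1 = Mod(5, 12)
@eprog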
\syn{chinois}{x,y}.
\subsecidx{content}$(x)$: computes the gcd of all the coefficients of $x$,
when this gcd makes sense. If $x$ is a scalar, this simply returns $x$. If $x$
is a polynomial (and by extension a power series), it gives the usual content
of $x$. If $x$ is a rational function, it gives the ratio of the contents of
the numerator and the denominator. Finally, if $x$ is a vector or a matrix,
it gives the gcd of all the entries.
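For instance:
\bprog
? content(6*x + 9)
%1 = 3
? content(x/2 + 1/3)
%2 = 1/6
@eprog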
\syn{content}{x}.
\subsecidx{contfrac}$(x,\{b\},\{lmax\})$: creates the row vector whose
components are the partial quotients of the \idx{continued fraction}
expansion of $x$, the number of partial quotients being limited to $lmax$.
If $x$ is a real number, the expansion stops at the last significant partial
quotient if $lmax$ is omitted. $x$ can also be a rational function or a power
series.
If a vector $b$ is supplied, the numerators will be equal to the coefficients
of $b$. The length of the result is then equal to the length of $b$, unless a
partial remainder equal to zero is encountered, in which case the expansion
stops. In the case of real numbers, the stopping criterion is thus
different from the one mentioned above since, if $b$ is too long, some partial
quotients may not be significant.
If $b$ is an integer, the command is understood as \kbd{contfrac($x,lmax$)}.
\syn{contfrac0}{x,b,lmax}. Also available are
$\teb{gboundcf}(x,lmax)$, $\teb{gcf}(x)$, or $\teb{gcf2}(b,x)$, where $lmax$
is a C integer.
\subsecidx{contfracpnqn}$(x)$: when $x$ is a vector or a one-row matrix, $x$
is considered as the list of partial quotients $[a_0,a_1,\dots,a_n]$ of a
rational number, and the result is the 2 by 2 matrix
$[p_n,p_{n-1};q_n,q_{n-1}]$ in the standard notation of continued fractions,
so $p_n/q_n=a_0+1/(a_1+\dots+1/a_n)$. If $x$ is a matrix with two rows
$[b_0,b_1,\dots,b_n]$ and $[a_0,a_1,\dots,a_n]$, this is then considered as a
generalized continued fraction and we have similarly
$p_n/q_n=(1/b_0)(a_0+b_1/(a_1+\dots+b_n/a_n))$. Note that in this case one
usually has $b_0=1$.
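For instance, since $22/7 = 3 + 1/7$ has partial quotients $[3,7]$:
\bprog
? contfracpnqn([3, 7])
%1 =
[22 3]

[7 1]
@eprog
\noindent so that $p_1/q_1 = 22/7$ and $p_0/q_0 = 3$.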
\syn{pnqn}{x}.
\subsecidx{core}$(n,\{\fl=0\})$: if $n$ is a non-zero integer written as
$n=df^2$ with $d$ squarefree, returns $d$. If $\fl$ is non-zero,
returns the two-element row vector $[d,f]$.
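For instance, $72 = 2\cdot 6^2$:
\bprog
? core(72)
%1 = 2
? core(72, 1)
%2 = [2, 6]
@eprog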
\syn{core0}{n,\fl}.
Also available are
$\teb{core}(n)$ (= \teb{core}$(n,0)$) and
$\teb{core2}(n)$ (= \teb{core}$(n,1)$).
\subsecidx{coredisc}$(n,\{\fl\})$: if $n$ is a non-zero integer written as
$n=df^2$ with $d$ fundamental discriminant (including 1), returns $d$. If
$\fl$ is non-zero, returns the two-element row vector $[d,f]$. Note that if
$n$ is not congruent to 0 or 1 modulo 4, $f$ will be a half-integer and not
an integer.
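For instance, $3 = 12\cdot(1/2)^2$, where $12$ is the discriminant of
$\Q(\sqrt{3})$:
\bprog
? coredisc(3, 1)
%1 = [12, 1/2]
@eprog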
\syn{coredisc0}{n,\fl}.
Also available are
$\teb{coredisc}(n)$ (= \teb{coredisc}$(n,0)$) and
$\teb{coredisc2}(n)$ (= \teb{coredisc}$(n,1)$).
\subsecidx{dirdiv}$(x,y)$: $x$ and $y$ being vectors of perhaps different
lengths but with $y[1]\neq 0$ considered as \idx{Dirichlet series}, computes
the quotient of $x$ by $y$, again as a vector.
\syn{dirdiv}{x,y}.
\subsecidx{direuler}$(p=a,b,\var{expr},\{c\})$: computes the
\idx{Dirichlet series} to $b$ terms of the \idx{Euler product} of
expression \var{expr} as $p$ ranges through the primes from $a$ to $b$.
\var{expr} must be a polynomial or rational function in another variable
than $p$ (say $X$) and $\var{expr}(X)$ is understood as the Dirichlet
series (or more precisely the local factor) $\var{expr}(p^{-s})$. If $c$ is
present, output only the first $c$ coefficients in the series.
\synt{direuler}{entree *ep, GEN a, GEN b, char *expr}.
\subsecidx{dirmul}$(x,y)$: $x$ and $y$ being vectors of perhaps different
lengths considered as \idx{Dirichlet series}, computes the product of
$x$ by $y$, again as a vector.
\syn{dirmul}{x,y}.
\subsecidx{divisors}$(x)$: creates a row vector whose components are the
positive divisors of the integer $x$ in increasing order. The factorization
of $x$ (as output by \tet{factor}) can be used instead.
\syn{divisors}{x}.
\subsecidx{eulerphi}$(x)$: Euler's $\phi$
(totient)\sidx{Euler totient function} function of $|x|$, in other words
$|(\Z/x\Z)^*|$. $x$ must be of type integer.
\syn{phi}{x}.
\subsecidx{factor}$(x,\{\var{lim}=-1\})$: general factorization function.
If $x$ is of type integer, rational, polynomial or rational function, the
result is a two-column matrix, the first column being the irreducibles
dividing $x$ (prime numbers or polynomials), and the second the exponents.
If $x$ is a vector or a matrix, the factoring is done componentwise (hence
the result is a vector or matrix of two-column matrices). By definition,
$0$ is factored as $0^1$.
If $x$ is of type integer or rational, an argument \var{lim} can be
added, meaning that we look only for factors up to \var{lim}, or to
\kbd{primelimit}, whichever is lowest (except when $\var{lim}=0$ where the
effect is identical to setting $\var{lim}=\kbd{primelimit}$). Hence in this
case, the remaining part is not necessarily prime. See \teb{factorint} for
more information about the algorithms used.
The polynomials or rational functions to be factored must have scalar
coefficients. In particular PARI does \var{not} know how to factor
multivariate polynomials.
Note that PARI tries to guess in a sensible way over which ring you want
to factor. Note also that factorization of polynomials is done up to
multiplication by a constant. In particular, the factors of rational
polynomials will have integer coefficients, and the content of a polynomial
or rational function is discarded and not included in the factorization. If
you need it, you can always ask for the content explicitly:
\bprog
? factor(t^2 + 5/2*t + 1)
%1 =
[2*t + 1 1]
[t + 2 1]
? content(t^2 + 5/2*t + 1)
%2 = 1/2
@eprog
\noindent See also \teb{factornf}.
\syn{factor0}{x,\var{lim}}, where \var{lim} is a C integer.
Also available are
$\teb{factor}(x)$ (= $\teb{factor0}(x,-1)$),
$\teb{smallfact}(x)$ (= $\teb{factor0}(x,0)$).
\subsecidx{factorback}$(f,\{nf\})$: $f$ being any factorization, gives back
the factored object. If a second argument $\var{nf}$ is supplied, $f$ is
assumed to be a prime ideal factorization in the number field $\var{nf}$.
The resulting ideal is given in HNF\sidx{Hermite normal form} form.
\syn{factorback}{f,\var{nf}}, where an omitted
$\var{nf}$ is entered as \kbd{NULL}.
\subsecidx{factorcantor}$(x,p)$: factors the polynomial $x$ modulo the
prime $p$, using distinct degree plus
\idx{Cantor-Zassenhaus}\sidx{Zassenhaus}. The coefficients of $x$ must be
operation-compatible with $\Z/p\Z$. The result is a two-column matrix, the
first column being the irreducible polynomials dividing $x$, and the second
the exponents. If you want only the \var{degrees} of the irreducible
polynomials (for example for computing an $L$-function), use
$\kbd{factormod}(x,p,1)$. Note that the \kbd{factormod} algorithm is
usually faster than \kbd{factorcantor}.
\syn{factcantor}{x,p}.
\subsecidx{factorff}$(x,p,a)$: factors the polynomial $x$ in the field
$\F_q$ defined by the irreducible polynomial $a$ over $\F_p$. The
coefficients of $x$ must be operation-compatible with $\Z/p\Z$. The result
is a two-column matrix, the first column being the irreducible polynomials
dividing $x$, and the second the exponents. It is recommended to use for
the variable of $a$ (which will be used as variable of a polmod) a name
distinct from the other variables used, so that a \kbd{lift()} of the
result will be legible. If all the coefficients of $x$ are in $\F_p$, a much faster algorithm is applied, using the computation of isomorphisms between finite fields.
\syn{factmod9}{x,p,a}.
\subsecidx{factorial}$(x)$ or $x!$: factorial of $x$. The expression $x!$
gives a result which is an integer, while $\kbd{factorial}(x)$ gives a real
number.
\syn{mpfact}{x} for $x!$ and
$\teb{mpfactr}(x,\var{prec})$ for $\kbd{factorial}(x)$. $x$ must be a \kbd{long}
integer and not a PARI integer.
\subsecidx{factorint}$(n,\{\fl=0\})$: factors the integer n using a
combination of the \idx{Shanks SQUFOF} and \idx{Pollard Rho} method (with
modifications due to Brent), \idx{Lenstra}'s \idx{ECM} (with modifications by
Montgomery), and \idx{MPQS} (the latter adapted from the \idx{LiDIA} code
with the kind permission of the LiDIA maintainers), as well as a search for
pure powers with exponents $\le 10$. The output is a two-column matrix as for
\kbd{factor}.
This gives direct access to the integer factoring engine called by most
arithmetical functions. \fl\ is optional; its binary digits mean 1: avoid
MPQS, 2: skip first stage ECM (we may still fall back to it later), 4: avoid
Rho and SQUFOF, 8: don't run final ECM (as a result, a huge composite may be
declared to be prime). Note that a (strong) probabilistic primality test is
used; thus composites might (very rarely) not be detected.
The machinery underlying this function is still in a somewhat experimental
state, but should be much faster on average than pure ECM as used by all
PARI versions up to 2.0.8, at the expense of heavier memory use. You are
invited to play with the flag settings and watch the internals at work by
using GP's \tet{debuglevel} default parameter (level 3 shows just the
outline, 4 turns on time keeping, 5 and above show an increasing amount
of internal details). If you see anything funny happening, please let
us know.
\syn{factorint}{n,\fl}.
\subsecidx{factormod}$(x,p,\{\fl=0\})$: factors the polynomial $x$ modulo
the prime integer $p$, using \idx{Berlekamp}. The coefficients of $x$ must be
operation-compatible with $\Z/p\Z$. The result is a two-column matrix, the
first column being the irreducible polynomials dividing $x$, and the second
the exponents. If $\fl$ is non-zero, outputs only the \var{degrees} of the
irreducible polynomials (for example, for computing an $L$-function). A
different algorithm for computing the mod $p$ factorization is
\kbd{factorcantor} which is sometimes faster.
\syn{factormod}{x,p,\fl}. Also available are
$\teb{factmod}(x,p)$ (which is equivalent to $\teb{factormod}(x,p,0)$) and
$\teb{simplefactmod}(x,p)$ (= $\teb{factormod}(x,p,1)$).
\subsecidx{fibonacci}$(x)$: $x^{\text{th}}$ Fibonacci number.
\syn{fibo}{x}. $x$ must be a \kbd{long}.
\subsecidx{ffinit}$(p,n,\{v=x\})$: computes a monic polynomial of degree
$n$ which is irreducible over $\F_p$. For instance if
\kbd{P = ffinit(3,2,y)}, you can represent elements in $\F_{3^2}$ as polmods
modulo \kbd{P}.
\syn{ffinit}{p,n,v}, where $v$ is a variable number.
\subsecidx{gcd}$(x,y,\{\fl=0\})$: creates the greatest common divisor of $x$
and $y$. $x$ and $y$ can be of quite general types, for instance both
rational numbers. Vector/matrix types are also accepted, in which case
the GCD is taken recursively on each component. Note that for these
types, \kbd{gcd} is not commutative.
If $\fl=0$, use \idx{Euclid}'s algorithm.
If $\fl=1$, use the modular gcd algorithm ($x$ and $y$ have to be
polynomials, with integer coefficients).
If $\fl=2$, use the \idx{subresultant algorithm}.
\syn{gcd0}{x,y,\fl}. Also available are
$\teb{ggcd}(x,y)$, $\teb{modulargcd}(x,y)$, and $\teb{srgcd}(x,y)$
corresponding to $\fl=0$, $1$ and $2$ respectively.
\subsecidx{hilbert}$(x,y,\{p\})$: \idx{Hilbert symbol} of $x$ and $y$ modulo
$p$. If $x$ and $y$ are of type integer or fraction, an explicit third
parameter $p$ must be supplied, $p=0$ meaning the place at infinity.
Otherwise, $p$ need not be given, and $x$ and $y$ can be of compatible types
integer, fraction, real, integermod a prime (result is undefined if the
modulus is not prime), or $p$-adic.
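For instance:
\bprog
? hilbert(-1, -1, 0)  \\ the place at infinity
%1 = -1
? hilbert(2, 7, 7)
%2 = 1
@eprog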
\syn{hil}{x,y,p}.
\subsecidx{isfundamental}$(x)$: true (1) if $x$ is equal to 1 or to the
discriminant of a quadratic field, false (0) otherwise.
\syn{gisfundamental}{x}, but the
simpler function $\teb{isfundamental}(x)$ which returns a \kbd{long}
should be used if $x$ is known to be of type integer.
\subsecidx{isprime}$(x,\{\fl=0\})$: if $\fl=0$ (default), true (1) if $x$ is a strong pseudo-prime
for 10 randomly chosen bases, false (0) otherwise.
If $\fl=1$, use the Pocklington-Lehmer ``P-1'' test: true (1) if $x$ is
prime, false (0) otherwise.
If $\fl=2$, use the Pocklington-Lehmer ``P-1'' test and output a primality
certificate as follows: return 0 if $x$ is composite, 1 if $x$ is a
small prime (currently strictly less than $341\,550\,071\,728\,321$), and
a matrix if $x$ is a large prime. The matrix has three columns: the
first contains the prime factors $p$, the second the corresponding
elements $a_p$ as in Proposition~8.3.1 in GTM~138, and the third the
output of $\kbd{isprime}(p,2)$.
In the last two cases, the algorithm fails if one of the (strong
pseudo-)prime factors is not actually prime, but this should be
exceedingly rare.
\syn{gisprime}{x,\fl}, but the simpler function $\teb{isprime}(x)$
which returns a \kbd{long} should be used if $x$ is known to be of
type integer. Also available is $\teb{plisprime}(N,\fl)$,
corresponding to $\teb{gisprime}(x,\fl+1)$ if $x$ is known to be of
type integer.
\subsecidx{ispseudoprime}$(x)$: true (1) if $x$ is a strong
pseudo-prime for a randomly chosen base, false (0) otherwise.
\syn{gispsp}{x}, but the
simpler function $\teb{ispsp}(x)$ which returns a \kbd{long}
should be used if $x$ is known to be of type integer.
\subsecidx{issquare}$(x,\{\&n\})$: true (1) if $x$ is square, false (0) if
not. $x$ can be of any type. If $n$ is given and an exact square root had to
be computed in the checking process, puts that square root in $n$. This is in
particular the case when $x$ is an integer or a polynomial. This is \var{not}
the case for intmods (use quadratic reciprocity) or series (only check the
leading coefficient).
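For example:
\bprog
? issquare(144, &n)
%1 = 1
? n
%2 = 12
? issquare(145)
%3 = 0
@eprog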
\syn{gcarrecomplet}{x,\&n}. Also available is $\teb{gcarreparfait}(x)$.
\subsecidx{issquarefree}$(x)$: true (1) if $x$ is squarefree, false (0) if not.
Here $x$ can be an integer or a polynomial.
\syn{gissquarefree}{x}, but the simpler function $\teb{issquarefree}(x)$
which returns a \kbd{long} should be used if $x$ is known to be of type
integer. This \teb{issquarefree} is just the square of the
\idx{Moebius} function, and is computed as a multiplicative
arithmetic function much like the latter.
\subsecidx{kronecker}$(x,y)$:
Kronecker\sidx{Kronecker symbol}\sidx{Legendre symbol}
(i.e.~generalized Legendre) symbol $\left(\dfrac{x}{y}\right)$. $x$ and $y$
must be of type integer.
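For example, the non-zero squares modulo 5 are 1 and 4:
\bprog
? kronecker(4, 5)
%1 = 1
? kronecker(3, 5)
%2 = -1
@eprog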
\syn{kronecker}{x,y}, the result ($0$ or $\pm 1$) is a \kbd{long}.
\subsecidx{lcm}$(x,y)$: least common multiple of $x$ and $y$, i.e.~such
that $\text{lcm}(x,y)*\text{gcd}(x,y)=\text{abs}(x*y)$.
\syn{glcm}{x,y}.
\subsecidx{moebius}$(x)$: \idx{Moebius} $\mu$-function of $|x|$. $x$ must
be of type integer.
\syn{mu}{x}, the result ($0$ or $\pm 1$) is a \kbd{long}.
\subsecidx{nextprime}$(x)$: finds the smallest prime greater than or
equal to $x$. $x$ can be of any real type. Note that if $x$ is a prime,
this function returns $x$ and not the smallest prime strictly larger than $x$.
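For example:
\bprog
? nextprime(100)
%1 = 101
? nextprime(101)
%2 = 101
@eprog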
\syn{nextprime}{x}.
\subsecidx{numdiv}$(x)$: number of divisors of $|x|$. $x$ must be of type
integer, and the result is a \kbd{long}.
\syn{numbdiv}{x}.
\subsecidx{omega}$(x)$: number of distinct prime divisors of $|x|$. $x$
must be of type integer.
\syn{omega}{x}, the result is a \kbd{long}.
\subsecidx{precprime}$(x)$: finds the largest prime less than or equal to
$x$. $x$ can be of any real type. Returns 0 if $x\le1$.
Note that if $x$ is a prime, this function returns $x$ and not the largest
prime strictly smaller than $x$.
\syn{precprime}{x}.
\subsecidx{prime}$(x)$: the $x^{\text{th}}$ prime number, which must be among
the precalculated primes.
\syn{prime}{x}. $x$ must be a \kbd{long}.
\subsecidx{primes}$(x)$: creates a row vector whose components
are the first $x$ prime numbers, which must be among the precalculated primes.
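For example:
\bprog
? primes(5)
%1 = [2, 3, 5, 7, 11]
? prime(100)
%2 = 541
@eprog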
\syn{primes}{x}. $x$ must be a \kbd{long}.
\subsecidx{qfbclassno}$(x,\{\fl=0\})$: class number of the quadratic field
of discriminant $x$. In the present version \vers, a simple algorithm is used
for $x>0$, so $x$ should not be too large (say $x<10^7$) for the time to be
reasonable. On the other hand, for $x<0$ one can reasonably compute
classno($x$) for $|x|<10^{25}$, since the method used is \idx{Shanks}' method
which is in $O(|x|^{1/4})$. For larger values of $|x|$, see
\kbd{quadclassunit}.
If $\fl=1$, compute the class number using \idx{Euler product}s and the
functional equation. However, it is in $O(|x|^{1/2})$.
\misctitle{Important warning.} For $D<0$, this function often gives
incorrect results when the class group is non-cyclic, because the authors
were too lazy to implement \idx{Shanks}' method completely. It is therefore
strongly recommended to use either the version with $\fl=1$, the function
$\kbd{qfbhclassno}(-x)$ if $x$ is known to be a fundamental discriminant, or
the function \kbd{quadclassunit}.
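For example, the class group of $\Q(\sqrt{-23})$ is cyclic of order 3 (so the
warning above does not apply), and $\Q(\sqrt{5})$ has class number 1:
\bprog
? qfbclassno(-23)
%1 = 3
? qfbclassno(5)
%2 = 1
@eprog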
\syn{qfbclassno0}{x,\fl}. Also available are
$\teb{classno}(x)$ (= $\teb{qfbclassno}(x)$),
$\teb{classno2}(x)$ (= $\teb{qfbclassno}(x,1)$), and finally
there exists the function $\teb{hclassno}(x)$ which computes the class
number of an imaginary quadratic field by counting reduced forms, an $O(|x|)$
algorithm. See also \kbd{qfbhclassno}.
\subsecidx{qfbcompraw}$(x,y)$ \idx{composition} of the binary quadratic forms
$x$ and $y$, without \idx{reduction} of the result. This is useful e.g.~to
compute a generating element of an ideal.
\syn{compraw}{x,y}.
\subsecidx{qfbhclassno}$(x)$: \idx{Hurwitz class number} of $x$, where $x$ is
non-negative and congruent to 0 or 3 modulo 4. See also \kbd{qfbclassno}.
\syn{hclassno}{x}.
\subsecidx{qfbnucomp}$(x,y,l)$: \idx{composition} of the primitive positive
definite binary quadratic forms $x$ and $y$ using the NUCOMP and NUDUPL
algorithms of \idx{Shanks} (\`a la Atkin). $l$ is any positive constant,
but for optimal speed, one should take $l=|D|^{1/4}$, where $D$ is the common
discriminant of $x$ and $y$. When $x$ and $y$ do not have the same
discriminant, the result is undefined.
\syn{nucomp}{x,y,l}. The auxiliary function
$\teb{nudupl}(x,l)$ should be used instead for speed when $x=y$.
\subsecidx{qfbnupow}$(x,n)$: $n$-th power of the primitive positive definite
binary quadratic form $x$ using the NUCOMP and NUDUPL algorithms (see
\kbd{qfbnucomp}).
\syn{nupow}{x,n}.
\subsecidx{qfbpowraw}$(x,n)$: $n$-th power of the binary quadratic form
$x$, computed without doing any \idx{reduction} (i.e.~using \kbd{qfbcompraw}).
Here $n$ must be non-negative and $n<2^{31}$.
\syn{powraw}{x,n} where $n$ must be a \kbd{long}
integer.
\subsecidx{qfbprimeform}$(x,p)$: prime binary quadratic form of discriminant
$x$ whose first coefficient is the prime number $p$. By abuse of notation,
$p = 1$ is a valid special case which returns the unit form. Returns an
error if $x$ is not a quadratic residue mod $p$. In the case where $x>0$,
the ``distance'' component of the form is set equal to zero according to
the current precision.
\syn{primeform}{x,p,\var{prec}}, where the third variable $\var{prec}$ is a
\kbd{long}, but is only taken into account when $x>0$.
\subsecidx{qfbred}$(x,\{\fl=0\},\{D\},\{\var{isqrtD}\},\{\var{sqrtD}\})$:
reduces the binary quadratic form $x$ (updating Shanks's distance function
if $x$ is indefinite). The binary digits of $\fl$ are toggles meaning
\quad 1: perform a single \idx{reduction} step
\quad 2: don't update \idx{Shanks}'s distance
$D$, \var{isqrtD}, \var{sqrtD}, if present, supply the values of the
discriminant, $\lfloor \sqrt{D}\rfloor$, and $\sqrt{D}$ respectively
(no checking is done of these facts). If $D<0$ these values are useless,
and all references to Shanks's distance are irrelevant.
\syn{qfbred0}{x,\fl,D,\var{isqrtD},\var{sqrtD}}. Use \kbd{NULL}
to omit any of $D$, \var{isqrtD}, \var{sqrtD}.
\noindent Also available are
$\teb{redimag}(x)$ (= $\teb{qfbred}(x)$ where $x$ is definite),
\noindent and for indefinite forms:
$\teb{redreal}(x)$ (= $\teb{qfbred}(x)$),
$\teb{rhoreal}(x)$ (= $\teb{qfbred}(x,1)$),
$\teb{redrealnod}(x,\var{isqrtD})$ (= $\teb{qfbred}(x,2,,\var{isqrtD})$),
$\teb{rhorealnod}(x,\var{isqrtD})$ (= $\teb{qfbred}(x,3,,\var{isqrtD})$).
\subsecidx{quadclassunit}$(D,\{\fl=0\},\{\var{tech}=[]\})$:
\idx{Buchmann-McCurley}'s sub-exponential algorithm for computing the class
group of a quadratic field of discriminant $D$. If $D$ is not fundamental,
the function may or may not be defined, but usually is, and often gives the
right answer (a warning is issued). The more general function \tet{bnrinit}
should be used to compute the class group of an order.
This function should be used instead of \kbd{qfbclassno} or \kbd{quadregula}
when $D<-10^{25}$, $D>10^{10}$, or when the \var{structure} is wanted.
If $\fl$ is non-zero \var{and} $D>0$, computes the narrow class group and
regulator, instead of the ordinary (or wide) ones. In the current version
\vers, this doesn't work at all~: use the general function \tet{bnfnarrow}.
Optional parameter \var{tech} is a row vector of the form
$[c_1,c_2]$, where $c_1$ and $c_2$ are positive real numbers which
control the execution time and the stack size. To get maximum speed,
set $c_2=c_1$. To get a rigorous result (under \idx{GRH}) you must take
$c_2=6$. Reasonable values for $c_1$ are between $0.1$ and $2$.
The result of this function is a vector $v$ with 4 components if $D<0$, and
$5$ otherwise. They correspond respectively to
$\bullet$ $v[1]$~: the class number
$\bullet$ $v[2]$~: a vector giving the structure of the class group as a
product of cyclic groups;
$\bullet$ $v[3]$~: a vector giving generators of those cyclic groups (as
binary quadratic forms).
$\bullet$ $v[4]$~: (omitted if $D < 0$) the regulator, computed to an
accuracy which is the maximum of an internal accuracy determined by the
program and the current default (note that once the regulator is known to a
small accuracy it is trivial to compute it to very high accuracy, see the
tutorial).
$\bullet$ $v[5]$~: a measure of the correctness of the result. If it is
close to 1, the result is correct (under \idx{GRH}). If it is close to a
larger integer, this shows that the class number is off by a factor equal
to this integer, and you must start again with a larger value for $c_1$ or
a different random seed. In this case, a warning message is printed.
\syn{quadclassunit0}{D,\fl,tech}. Also available are
$\teb{buchimag}(D,c_1,c_2)$ and $\teb{buchreal}(D,\fl,c_1,c_2)$.
\subsecidx{quaddisc}$(x)$: discriminant of the quadratic field
$\Q(\sqrt{x})$, where $x\in\Q$.
\syn{quaddisc}{x}.
\subsecidx{quadhilbert}$(D,\{\fl=0\})$: relative equation defining the
\idx{Hilbert class field} of the quadratic field of discriminant $D$.
If $\fl$ is non-zero
and $D<0$, outputs $[\var{form},\var{root}(\var{form})]$ (to be used for
constructing subfields). If $\fl$ is non-zero and $D>0$, try hard to
get the best modulus.
Uses complex multiplication in the imaginary case and \idx{Stark units}
in the real case.
\syn{quadhilbert}{D,\fl,\var{prec}}.
\subsecidx{quadgen}$(x)$: creates the quadratic number\sidx{omega}
$\omega=(a+\sqrt{x})/2$ where $a=0$ if $x\equiv0\mod4$,
$a=1$ if $x\equiv1\mod4$, so that $(1,\omega)$ is an integral basis for
the quadratic order of discriminant $x$. $x$ must be an integer congruent to
0 or 1 modulo 4.
\syn{quadgen}{x}.
\subsecidx{quadpoly}$(D,\{v=x\})$: creates the ``canonical'' quadratic
polynomial (in the variable $v$) corresponding to the discriminant $D$,
i.e.~the minimal polynomial of $\kbd{quadgen}(D)$. $D$ must be an integer
congruent to 0 or 1 modulo 4.
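For example:
\bprog
? quadpoly(5)
%1 = x^2 - x - 1
? quadpoly(8)
%2 = x^2 - 2
@eprog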
\syn{quadpoly0}{x,v}.
\subsecidx{quadray}$(D,f,\{\fl=0\})$: relative equation for the ray class
field of conductor $f$ for the quadratic field of discriminant $D$ (which
can also be a \kbd{bnf}), using analytic methods.
For $D<0$, uses the $\sigma$ function. $\fl$ has the following meaning: if
it's an odd integer, outputs instead the vector $[\var{ideal},
\var{corresponding root}]$. It can also be a two-component vector
$[\lambda,\fl]$, where \fl\ is as above and $\lambda$ is the technical
element of \kbd{bnf} necessary for Schertz's method. In that case, returns
0 if $\lambda$ is not suitable.
For $D>0$, uses Stark's conjecture. If $\fl$ is non-zero, try hard to
get the best modulus. The function may fail with the following message
\bprog
"Cannot find a suitable modulus in FindModulus"
@eprog
See \tet{bnrstark} for more details about the real case.
\syn{quadray}{D,f,\fl}.
\subsecidx{quadregulator}$(x)$: regulator of the quadratic field of
positive discriminant $x$. Returns an error if $x$ is not a discriminant
(fundamental or not) or if $x$ is a square. See also \kbd{quadclassunit} if
$x$ is large.
\syn{regula}{x,\var{prec}}.
\subsecidx{quadunit}$(x)$: fundamental unit\sidx{fundamental units} of the
real quadratic field $\Q(\sqrt x)$ where $x$ is the positive discriminant
of the field. If $x$ is not a fundamental discriminant, this probably gives
the fundamental unit of the corresponding order. $x$ must be of type
integer, and the result is a quadratic number.
\syn{fundunit}{x}.
\subsecidx{removeprimes}$(\{x=[\,]\})$: removes the primes listed in $x$ from
the prime number table. In particular \kbd{removeprimes(addprimes)} empties
the extra prime table. $x$ can also be a single integer. List the current
extra primes if $x$ is omitted.
\syn{removeprimes}{x}.
\subsecidx{sigma}$(x,\{k=1\})$: sum of the $k^{\text{th}}$ powers of the
positive divisors of $|x|$. $x$ must be of type integer.
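For example, the divisors of 6 are 1, 2, 3 and 6:
\bprog
? sigma(6)
%1 = 12
? sigma(6, 2)
%2 = 50
@eprog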
\syn{sumdiv}{x} (= $\teb{sigma}(x)$) or $\teb{gsumdivk}(x,k)$ (=
$\teb{sigma}(x,k)$), where $k$ is a C long integer.
\subsecidx{sqrtint}$(x)$: integer square root of $x$, which must be of PARI
type integer. The result is non-negative and rounded towards zero. A
negative $x$ is allowed, and the result in that case is \kbd{I*sqrtint(-x)}.
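For example:
\bprog
? sqrtint(10)
%1 = 3
? sqrtint(10^10)
%2 = 100000
@eprog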
\syn{racine}{x}.
\subsecidx{znlog}$(x,g)$: $g$ must be a primitive root mod a prime $p$, and
the result is the discrete log of $x$ in the multiplicative group
$(\Z/p\Z)^*$. This function uses a simple-minded baby-step/giant-step
approach and requires $O(\sqrt{p})$ storage, hence it cannot be used for
$p$ greater than about $10^{13}$.
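For example, 2 is a primitive root modulo 5 and $2^3\equiv3\mod5$:
\bprog
? znlog(Mod(3, 5), Mod(2, 5))
%1 = 3
@eprog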
\syn{znlog}{x,g}.
\subsecidx{znorder}$(x)$: $x$ must be an integer mod $n$, and the result is the
order of $x$ in the multiplicative group $(\Z/n\Z)^*$. Returns an error if $x$
is not invertible.
\syn{order}{x}.
\subsecidx{znprimroot}$(x)$: returns a primitive root of $x$, where $x$
is a prime power.
\syn{gener}{x}.
\subsecidx{znstar}$(n)$: gives the structure of the multiplicative group
$(\Z/n\Z)^*$ as a 3-component row vector $v$, where $v[1]=\phi(n)$ is the
order of that group, $v[2]$ is a $k$-component row-vector $d$ of integers
$d[i]$ such that $d[i]>1$ and $d[i]\mid d[i-1]$ for $i \ge 2$ and
$(\Z/n\Z)^* \simeq \prod_{i=1}^k(\Z/d[i]\Z)$, and $v[3]$ is a $k$-component row
vector giving generators of the image of the cyclic groups $\Z/d[i]\Z$.
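For example, $(\Z/12\Z)^*$ is isomorphic to $\Z/2\Z\times\Z/2\Z$:
\bprog
? v = znstar(12); [v[1], v[2]]
%1 = [4, [2, 2]]
@eprog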
\syn{znstar}{n}.
\section{Functions related to elliptic curves}
We have implemented a number of functions which are useful for number
theorists working on elliptic curves. We always use \idx{Tate}'s notations.
The functions assume that the curve is given by a general Weierstrass
model\sidx{Weierstrass equation}
$$
y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6,
$$
where a priori the $a_i$ can be of any scalar type. This curve can be
considered as a five-component vector \kbd{E=[a1,a2,a3,a4,a6]}. Points on
\kbd{E} are represented as two-component vectors \kbd{[x,y]}, except for the
point at infinity, i.e.~the identity element of the group law, represented by
the one-component vector \kbd{[0]}.
It is useful to have at one's disposal more information. This is given by
the function \tet{ellinit} (see there), which usually gives a 19-component
vector (which we will call a long vector in this section). If a specific flag
is added, a vector with only 13 components will be output (which we will call
a medium vector). A medium vector just gives the first 13 components of the
long vector corresponding to the same curve, but is of course faster to
compute. The following \idx{member functions} are available to deal with the
output of \kbd{ellinit}:
\settabs\+xxxxxxxxxxxxxxxxxx&: &\cr
\+ \kbd{a1}--\kbd{a6}, \kbd{b2}--\kbd{b8}, \kbd{c4}--\kbd{c6} &: &
coefficients of the elliptic curve.\cr
\+ \tet{area} &: & volume of the complex lattice defining $E$.\cr
\+ \tet{disc} &: & discriminant of the curve.\cr
\+ \tet{j} &: & $j$-invariant of the curve.\cr
\+ \tet{omega}&: & $[\omega_1,\omega_2]$, periods forming a basis of
the complex lattice defining $E$ ($\omega_1$ is the\cr
\+ & & real period, and $\omega_2/\omega_1$ belongs to
Poincar\'e's half-plane).\cr
\+ \tet{eta} &: & quasi-periods $[\eta_1, \eta_2]$, such that
$\eta_1\omega_2-\eta_2\omega_1=i\pi$.\cr
\+ \tet{roots}&: & roots of the associated Weierstrass equation.\cr
\+ \tet{tate} &: & $[u^2,u,v]$ in the notation of Tate.\cr
\+ \tet{w} &: & Mestre's $w$ (this is technical).\cr
Their use is best described by an example: assume that $E$ was output by
\kbd{ellinit}, then typing \kbd{$E$.disc} will retrieve the curve's
discriminant. The member functions \kbd{area}, \kbd{eta} and \kbd{omega} are
only available for curves over $\Q$. Conversely, \kbd{tate} and \kbd{w} are
only available for curves defined over $\Q_p$.\smallskip
Some functions, in particular those relative to height computations (see
\kbd{ellheight}) require also that the curve be in minimal Weierstrass
form. This is achieved by the function \kbd{ellglobalred}.
All functions related to elliptic curves share the prefix \kbd{ell}, and the
precise curve we are interested in is always the first argument, in either
one of the three formats discussed above, unless otherwise specified. For
instance, in functions which do not use the extra information given by long
vectors, the curve can be given either as a five-component vector, or by one
of the longer vectors computed by \kbd{ellinit}.
\subsecidx{elladd}$(E,z1,z2)$: sum of the points $z1$ and $z2$ on the
elliptic curve corresponding to the vector $E$.
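For example, on the curve $y^2 = x^3 - x$, any two of the three points of
order 2 add up to the third:
\bprog
? E = ellinit([0,0,0,-1,0]);
? elladd(E, [0,0], [1,0])
%1 = [-1, 0]
@eprog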
\syn{addell}{E,z1,z2}.
\subsecidx{ellak}$(E,n)$: computes the coefficient $a_n$ of the
$L$-function of the elliptic curve $E$, i.e.~in principle coefficients of a
newform of weight 2, assuming the \idx{Taniyama-Weil conjecture} (which is now
known to hold in full generality thanks to the work of \idx{Breuil},
\idx{Conrad}, \idx{Diamond}, \idx{Taylor} and \idx{Wiles}). $E$ must be a
medium or long vector of the type given by \kbd{ellinit}. For this function
to work for every $n$ and not just those prime to the conductor, $E$ must
be a minimal Weierstrass equation. If this is not the case, use the
function \kbd{ellglobalred} first before using \kbd{ellak}.
\syn{akell}{E,n}.
\subsecidx{ellan}$(E,n)$: computes the vector of the first $n$ $a_k$
corresponding to the elliptic curve $E$. All comments in \kbd{ellak}
description remain valid.
\syn{anell}{E,n}, where $n$ is a C integer.
\subsecidx{ellap}$(E,p,\{\fl=0\})$: computes the $a_p$ corresponding to the
elliptic curve $E$ and the prime number $p$. These are defined by the
equation $\#E(\F_p) = p+1 - a_p$, where $\#E(\F_p)$ stands for the number
of points of the curve $E$ over the finite field $\F_p$. When $\fl$ is $0$,
this uses the baby-step giant-step method and a trick due to Mestre. This
runs in time $O(p^{1/4})$ and requires $O(p^{1/4})$ storage, hence becomes
unreasonable when $p$ has about 30 digits.
If $\fl$ is $1$, computes the $a_p$ as a sum of Legendre symbols. This is
slower than the previous method as soon as $p$ is greater than 100, say.
No checking is done that $p$ is indeed prime. $E$ must be a medium or long
vector of the type given by \kbd{ellinit}, defined over $\Q$, $\F_p$ or
$\Q_p$. $E$ must be given by a Weierstrass equation minimal at $p$.
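For example, the curve $y^2 = x^3 - x$ has complex multiplication by $\Z[i]$,
so that $a_p = 0$ whenever $p\equiv3\mod4$:
\bprog
? E = ellinit([0,0,0,-1,0]);
? ellap(E, 5)
%1 = -2
? ellap(E, 7)
%2 = 0
@eprog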
\syn{ellap0}{E,p,\fl}. Also available are $\teb{apell}(E,p)$, corresponding
to $\fl=0$, and $\teb{apell2}(E,p)$ ($\fl=1$).
\subsecidx{ellbil}$(E,z1,z2)$: if $z1$ and $z2$ are points on the elliptic
curve $E$, this function computes the value of the canonical bilinear form on
$z1$, $z2$:
$$
\kbd{ellheight}(E,z1\kbd{+}z2) - \kbd{ellheight}(E,z1) - \kbd{ellheight}(E,z2)
$$
where \kbd{+} denotes of course addition on $E$. In addition, $z1$ or $z2$
(but not both) can be vectors or matrices. Note that this is equal to twice
the canonical pairing under some normalizations. $E$ is assumed to be
integral, given by a minimal model.
\syn{bilhell}{E,z1,z2,\var{prec}}.
\subsecidx{ellchangecurve}$(E,v)$: changes the data for the elliptic curve $E$
by changing the coordinates using the vector \kbd{v=[u,r,s,t]}, i.e.~if $x'$
and $y'$ are the new coordinates, then $x=u^2x'+r$, $y=u^3y'+su^2x'+t$.
The vector $E$ must be a medium or long vector of the type given by
\kbd{ellinit}.
\syn{coordch}{E,v}.
\subsecidx{ellchangepoint}$(x,v)$: changes the coordinates of the point or
vector of points $x$ using the vector \kbd{v=[u,r,s,t]}, i.e.~if $x'$ and
$y'$ are the new coordinates, then $x=u^2x'+r$, $y=u^3y'+su^2x'+t$ (see also
\kbd{ellchangecurve}).
\syn{pointch}{x,v}.
\subsecidx{elleisnum}$(E,k,\{\fl=0\})$: $E$ being an elliptic curve as
output by \kbd{ellinit} (or, alternatively, given by a 2-component vector
$[\omega_1,\omega_2]$), and $k$ being an even positive integer, computes
the numerical value of the Eisenstein series of weight $k$ at $E$. When
\fl\ is non-zero and $k=4$ or 6, returns $g_2$ or $g_3$ with the correct
normalization.
\syn{elleisnum}{E,k,\fl}.
\subsecidx{elleta}$(om)$: returns the two-component row vector
$[\eta_1,\eta_2]$ of quasi-periods associated to $\kbd{om} = [\omega_1,
\omega_2]$.
\syn{elleta}{om, \var{prec}}.
\subsecidx{ellglobalred}$(E)$: calculates the arithmetic conductor, the global
minimal model of $E$ and the global \idx{Tamagawa number} $c$. Here $E$ is an
elliptic curve given by a medium or long vector of the type given by
\kbd{ellinit}, {\it and is supposed to have all its coefficients $a_i$ in}
$\Q$. The result is a 3-component vector $[N,v,c]$. $N$ is the arithmetic
conductor of the curve, $v$ is itself a vector $[u,r,s,t]$ with rational
components. It gives a coordinate change for $E$ over $\Q$ such that the
resulting model has integral coefficients, is everywhere minimal, $a_1$ is 0
or 1, $a_2$ is 0, 1 or $-1$ and $a_3$ is 0 or 1. Such a model is unique, and
the vector $v$ is unique if we specify that $u$ is positive. To get the new
model, simply type \kbd{ellchangecurve(E,v)}. Finally $c$ is the product of
the local Tamagawa numbers $c_p$, a quantity which enters in the
\idx{Birch and Swinnerton-Dyer conjecture}.
\syn{globalreduction}{E}.
\subsecidx{ellheight}$(E,z,\{\fl=0\})$: global \idx{N\'eron-Tate height} of
the point $z$ on the elliptic curve $E$. The vector $E$ must be a long vector
of the type given by \kbd{ellinit} (with its flag equal to 0). If $\fl=0$,
this computation is done using sigma and theta-functions and a trick due to J.
Silverman. If $\fl=1$, use Tate's $4^n$ algorithm, which is much slower.
$E$ is assumed to be integral, given by a minimal model.
\syn{ellheight0}{E,z,\fl,\var{prec}}. The Archimedean
contribution alone is given by the library function
$\teb{hell}(E,z,\var{prec})$.
Also available are $\teb{ghell}(E,z,\var{prec})$ ($\fl=0$) and
$\teb{ghell2}(E,z,\var{prec})$ ($\fl=1$).
\subsecidx{ellheightmatrix}$(E,x)$: $x$ being a vector of points, this
function outputs the Gram matrix of $x$ with respect to the N\'eron-Tate
height, in other words, the $(i,j)$ component of the matrix is equal to
\kbd{ellbil($E$,x[$i$],x[$j$])}. The rank of this matrix, at least in some
approximate sense, gives the rank of the set of points, and if $x$ is a
basis of the \idx{Mordell-Weil group} of $E$, its determinant is equal to
the regulator of $E$. Note that this matrix should be divided by 2 to be in
accordance with certain normalizations. $E$ is assumed to be integral,
given by a minimal model.
\syn{mathell}{E,x,\var{prec}}.
\subsecidx{ellinit}$(E,\{\fl=0\})$: computes some fixed data concerning the
elliptic curve given by the five-component vector $E$, which will be
essential for most further computations on the curve. The result is a
19-component vector $E$ (called a long vector in this section), shortened
to 13 components (medium vector) if $\fl=1$. Both contain the
following information in the first 13 components:
%
$$ a_1,a_2,a_3,a_4,a_6,b_2,b_4,b_6,b_8,c_4,c_6,\Delta,j.$$
%
In particular, the discriminant is $E[12]$ (or \kbd{$E$.disc}), and the
$j$-invariant is $E[13]$ (or \kbd{$E$.j}).
The other six components are only present if $\fl$ is $0$ (or omitted!).
Their content depends on whether the curve is defined over $\R$ or not:
\smallskip
$\bullet$ When $E$ is defined over $\R$, $E[14]$ (\kbd{$E$.roots}) is a
vector whose three components contain the roots of the associated Weierstrass
equation. If the roots are all real, then they are ordered by decreasing
value. If only one is real, it is the first component of $E[14]$.
$E[15]$ (\kbd{$E$.omega[1]}) is the real period of $E$ (integral of
$dx/(2y+a_1x+a_3)$ over the connected component of the identity element of
the real points of the curve), and $E[16]$ (\kbd{$E$.omega[2]}) is a complex
period. In other words, $\omega_1=E[15]$ and $\omega_2=E[16]$ form a basis of
the complex lattice defining $E$ (\kbd{$E$.omega}), with
$\tau=\dfrac{\omega_2}{\omega_1}$ having positive imaginary part.
$E[17]$ and $E[18]$ are the corresponding values $\eta_1$ and $\eta_2$ such
that $\eta_1\omega_2-\eta_2\omega_1=i\pi$, and both can be retrieved by
typing \kbd{$E$.eta} (as a row vector whose components are the $\eta_i$).
Finally, $E[19]$ (\kbd{$E$.area}) is the volume of the complex lattice defining
$E$.\smallskip
$\bullet$ When $E$ is defined over $\Q_p$, the $p$-adic valuation of $j$
must be negative. Then $E[14]$ (\kbd{$E$.roots}) is the vector with a single
component equal to the $p$-adic root of the associated Weierstrass equation
corresponding to $-1$ under the Tate parametrization.
$E[15]$ is equal to the square of the $u$-value, in the notation of Tate.
$E[16]$ is the $u$-value itself, if it belongs to $\Q_p$, otherwise zero.
$E[17]$ is the value of Tate's $q$ for the curve $E$.
\kbd{$E$.tate} will yield the three-component vector $[u^2,u,q]$.
$E[18]$ (\kbd{$E$.w}) is the value of Mestre's $w$ (this is technical), and
$E[19]$ is arbitrarily set equal to zero.
\smallskip
For all other base fields or rings, the last six components are arbitrarily
set equal to zero (see also the description of member functions related to
elliptic curves at the beginning of this section).
\syn{ellinit0}{E,\fl,\var{prec}}. Also available are
$\teb{initell}(E,\var{prec})$ ($\fl=0$) and
$\teb{smallinitell}(E,\var{prec})$ ($\fl=1$).
\subsecidx{ellisoncurve}$(E,z)$: gives 1 (i.e.~true) if the point $z$ is on
the elliptic curve $E$, 0 otherwise. If $E$ or $z$ have imprecise coefficients,
an attempt is made to take this into account, i.e.~an imprecise equality is
checked, not a precise one.
\syn{oncurve}{E,z}, and the result is a \kbd{long}.
\subsecidx{ellj}$(x)$: elliptic $j$-invariant. $x$ must be a complex number
with positive imaginary part, or convertible into a power series or a
$p$-adic number with positive valuation.
\syn{jell}{x,\var{prec}}.
\subsecidx{elllocalred}$(E,p)$: calculates the \idx{Kodaira} type of the
local fiber of the elliptic curve $E$ at the prime $p$.
$E$ must be given by a medium or
long vector of the type given by \kbd{ellinit}, and is assumed to have all
its coefficients $a_i$ in $\Z$. The result is a 4-component vector
$[f,kod,v,c]$. Here $f$ is the exponent of $p$ in the arithmetic conductor of
$E$, and $kod$ is the Kodaira type which is coded as follows:
1 means good reduction (type I$_0$), 2, 3 and 4 mean types II, III and IV
respectively, $4+\nu$ with $\nu>0$ means type I$_\nu$;
finally the opposite values $-1$, $-2$, etc.~refer to the starred types
I$_0^*$, II$^*$, etc. The third component $v$ is itself a vector $[u,r,s,t]$
giving the coordinate changes done during the local reduction. Normally, this
has no use if $u$ is 1, that is, if the given equation was already minimal.
Finally, the last component $c$ is the local \idx{Tamagawa number} $c_p$.
\syn{localreduction}{E,p}.
\subsecidx{elllseries}$(E,s,\{A=1\})$: $E$ being a medium or long vector
given by \kbd{ellinit}, this computes the value of the L-series of $E$ at
$s$. It is assumed that $E$ is a minimal model over $\Z$ and that the curve
is a modular elliptic curve. The optional parameter $A$ is a cutoff point for
the integral, which must be chosen close to 1 for best speed. The result
must be independent of $A$, so this allows some internal checking of the
function.
Note that if the conductor of the curve is large, say greater than $10^{12}$,
this function will take an unreasonable amount of time since it uses an
$O(N^{1/2})$ algorithm.
\syn{lseriesell}{E,s,A,\var{prec}} where $\var{prec}$ is a \kbd{long} and an
omitted $A$ is coded as \kbd{NULL}.
\subsecidx{ellorder}$(E,z)$: gives the order of the point $z$ on the elliptic
curve $E$ if it is a torsion point, zero otherwise. In the present version
\vers, this is implemented only for elliptic curves defined over $\Q$.
\syn{orderell}{E,z}.
\subsecidx{ellordinate}$(E,x)$: gives a 0, 1 or 2-component vector containing
the $y$-coordinates of the points of the curve $E$ having $x$ as
$x$-coordinate.
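For example, with $E$ the curve $y^2 = x^3 - x$:
\bprog
? E = ellinit([0,0,0,-1,0]);
? ellordinate(E, 0)
%1 = [0]
? ellordinate(E, 2)
%2 = []
@eprog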
\syn{ordell}{E,x}.
\subsecidx{ellpointtoz}$(E,z)$: if $E$ is an elliptic curve with coefficients
in $\R$, this computes a complex number $t$ (modulo the lattice defining
$E$) corresponding to the point $z$, i.e.~such that, in the standard
Weierstrass model, $\wp(t)=z[1],\wp'(t)=z[2]$. In other words, this is the
inverse function of \kbd{ellztopoint}.
If $E$ has coefficients in $\Q_p$, then either Tate's $u$ is in $\Q_p$, in
which case the output is a $p$-adic number $t$ corresponding to the point $z$
under the Tate parametrization, or only its square is, in which case the
output is $t+1/t$. $E$ must be a long vector output by \kbd{ellinit}.
\syn{zell}{E,z,\var{prec}}.
\subsecidx{ellpow}$(E,z,n)$: computes $n$ times the point $z$ for the
group law on the elliptic curve $E$. Here, $n$ can be in $\Z$, or $n$
can be a complex quadratic integer if the curve $E$ has complex multiplication
by $n$ (if not, an error message is issued).
\syn{powell}{E,z,n}.
\subsecidx{ellrootno}$(E,\{p=1\})$: $E$ being a medium or long vector given
by \kbd{ellinit}, this computes the local (if $p\neq 1$) or global (if $p=1$)
root number of the L-series of the elliptic curve $E$. Note that the global
root number is the sign of the functional equation and conjecturally is the
parity of the rank of the \idx{Mordell-Weil group}.
The equation for $E$ must have
coefficients in $\Q$ but need \var{not} be minimal.
\syn{ellrootno}{E,p} and the result (equal to $\pm1$) is a \kbd{long}.
\subsecidx{ellsigma}$(E,z,\{\fl=0\})$: value of the Weierstrass $\sigma$
function of the lattice associated to $E$ as given by \kbd{ellinit}
(alternatively, $E$ can be given as a lattice $[\omega_1,\omega_2]$).
If $\fl=1$, computes an (arbitrary) determination of $\log(\sigma(z))$.
If $\fl=2,3$, same using the product expansion instead of theta series.
\syn{ellsigma}{E,z,\fl}.
\subsecidx{ellsub}$(E,z1,z2)$: difference of the points $z1$ and $z2$ on the
elliptic curve corresponding to the vector $E$.
\syn{subell}{E,z1,z2}.
\subsecidx{elltaniyama}$(E)$: computes the modular parametrization of the
elliptic curve $E$, where $E$ is given in the (long or medium) format output
by \kbd{ellinit}, in the form of a two-component vector $[u,v]$ of power
series, given to the current default series precision. This vector is
characterized by the following two properties. First the point $(x,y)=(u,v)$
satisfies the equation of the elliptic curve. Second, the differential
$du/(2v+a_1u+a_3)$ is equal to $f(z)dz$, a differential form on
$H/\Gamma_0(N)$ where $N$ is the conductor of the curve. The variable used in
the power series for $u$ and $v$ is $x$, which is implicitly understood to be
equal to $\exp(2i\pi z)$. It is assumed that the curve is a \var{strong}
\idx{Weil curve}, and the Manin constant is equal to 1. The equation of
the curve $E$ must be minimal (use \kbd{ellglobalred} to get a minimal
equation).
\syn{taniyama}{E}, and the precision of the result is determined by the
global variable \kbd{precdl}.
\subsecidx{elltors}$(E,\{\fl=0\})$: if $E$ is an elliptic curve {\it defined
over $\Q$}, outputs the torsion subgroup of $E$ as a 3-component vector
\kbd{[t,v1,v2]}, where \kbd{t} is the order of the torsion group, \kbd{v1}
gives the structure of the torsion group as a product of cyclic groups
(sorted by decreasing order), and \kbd{v2} gives generators for these cyclic
groups. $E$ must be a long vector as output by \kbd{ellinit}.
\bprog
? E = ellinit([0,0,0,-1,0]);
? elltors(E)
%1 = [4, [2, 2], [[0, 0], [1, 0]]]
@eprog
Here, the torsion subgroup is isomorphic to $\Z/2\Z \times \Z/2\Z$, with
generators $[0,0]$ and $[1,0]$.
If $\fl = 0$, use Doud's algorithm~: bound torsion by computing $\#E(\F_p)$
for small primes of good reduction, then look for torsion points using
Weierstrass parametrization (and Mazur's classification).
If $\fl = 1$, use Lutz--Nagell (\var{much} slower), $E$ is allowed to be a
medium vector.
\syn{elltors0}{E,\fl}.
\subsecidx{ellwp}$(E,\{z=x\},\{\fl=0\})$:
Computes the value at $z$ of the Weierstrass $\wp$ function attached to the
elliptic curve $E$ as given by \kbd{ellinit} (alternatively, $E$ can be
given as a lattice $[\omega_1,\omega_2]$).
If $z$ is omitted or is a simple variable, computes the \var{power series}
expansion in $z$ (starting $z^{-2}+O(z^2)$). The number of terms to an
\var{even} power in the expansion is the default serieslength in GP, and the
second argument (C long integer) in library mode.
Optional \fl\ is (for now) only taken into account when $z$ is numeric, and
means 0: compute only $\wp(z)$, 1: compute $[\wp(z),\wp'(z)]$.
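For instance (an illustrative session; the numerical outputs, which are
floating point approximations, are omitted):
\bprog
? E = ellinit([0,0,0,1,0]);  \\ the curve y^2 = x^3 + x
? ellwp(E)            \\ power series in x, starting x^-2 + O(x^2)
? ellwp(E, 1 + I)     \\ the value wp(1+i)
? ellwp(E, 1 + I, 1)  \\ the pair [wp(1+i), wp'(1+i)]
@eprog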
\syn{ellwp0}{E,z,\fl,\var{prec},\var{precdl}}. Also available is
\teb{weipell}$(E,\var{precdl})$ for the power series (in
$x=\kbd{polx[0]}$).
\subsecidx{ellzeta}$(E,z)$: value of the Weierstrass $\zeta$ function of the
lattice associated to $E$ as given by \kbd{ellinit} (alternatively, $E$ can
be given as a lattice $[\omega_1,\omega_2]$).
\syn{ellzeta}{E,z}.
\subsecidx{ellztopoint}$(E,z)$: $E$ being a long vector, computes the
coordinates $[x,y]$ on the curve $E$ corresponding to the complex number $z$.
Hence this is the inverse function of \kbd{ellpointtoz}. In other words, if
the curve is put in Weierstrass form, $[x,y]$ represents the
\idx{Weierstrass $\wp$-function} and its derivative.
If $z$ is in the lattice defining $E$ over
$\C$, the result is the point at infinity $[0]$.
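As a consistency check, \kbd{ellztopoint} inverts \kbd{ellpointtoz}
(an illustrative session, outputs omitted; the recovered coordinates are
floating point approximations):
\bprog
? E = ellinit([0,-1,1,0,0]);
? z = ellpointtoz(E, [0,0]);
? ellztopoint(E, z)  \\ recovers [0, 0] up to rounding errors
@eprog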
\syn{pointell}{E,z,\var{prec}}.
\section{Functions related to general number fields}
In this section can be found functions which are used almost exclusively for
working in general number fields. Other less specific functions can be found
in the next section on polynomials. Functions related to quadratic number
fields can be found in the section \secref{se:arithmetic} (Arithmetic
functions).
\noindent We shall use the following conventions:
$\bullet$ $\tev{nf}$ denotes a number field, i.e.~a 9-component vector
in the format output by \tet{nfinit}. This contains the basic arithmetic data
associated to the number field: signature, maximal order, discriminant, etc.
$\bullet$ $\tev{bnf}$ denotes a big number field, i.e.~a 10-component
vector in the format output by \tet{bnfinit}. This contains $\var{nf}$ and
the deeper invariants of the field: units, class groups, as well as a lot of
technical data necessary for some complex functions like \kbd{bnfisprincipal}.
$\bullet$ $\tev{bnr}$ denotes a big ``ray number field'', i.e.~some data
structure output by \kbd{bnrinit}, even more complicated than $\var{bnf}$,
corresponding to the ray class group structure of the field, for some
modulus.
$\bullet$ $\tev{rnf}$ denotes a relative number field (see below).
\smallskip
$\bullet$ $\tev{ideal}$ can mean any of the following:
\quad -- a $\Z$-basis, in \idx{Hermite normal form}
(HNF) or not. In this case $x$ is a square matrix.
\quad -- an \tev{idele}, i.e.~a 2-component vector, the first being an
ideal given as a $\Z$--basis, the second being a $r_1+r_2$-component row
vector giving the complex logarithmic Archimedean information.
\quad -- a $\Z_K$-generating system for an ideal.
\quad -- a \var{column} vector $x$ expressing an element of the number field
on the integral basis, in which case the ideal is treated as being the
principal idele (or ideal) generated by $x$.
\quad -- a prime ideal, i.e.~a 5-component vector in the format output by
\kbd{idealprimedec}.
\quad -- a polmod $x$, i.e.~an algebraic integer, in which case the ideal
is treated as being the principal idele generated by $x$.
\quad -- an integer or a rational number, also treated as a principal idele.
$\bullet$ a \var{character} on the Abelian group
$\bigoplus (\Z/N_i\Z) g_i$
is given by a row vector $\chi = [a_1,\ldots,a_n]$ such that
$\chi(\prod g_i^{n_i}) = \exp(2i\pi\sum a_i n_i / N_i)$.
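For example, on the group $(\Z/4\Z)g_1 \oplus (\Z/2\Z)g_2$, the character
$\chi = [1,1]$ satisfies $\chi(g_1g_2) = \exp(2i\pi(1/4 + 1/2)) = -i$.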
\misctitle{Warnings:}
1) An element in $\var{nf}$ can be expressed either as a polmod or as a
vector of components on the integral basis \kbd{\var{nf}.zk}. It is absolutely
essential that all such vectors be \var{column} vectors.
2) When giving an ideal by a $\Z_K$ generating system to a function expecting
an ideal, it must be ensured that the function understands that it is a
$\Z_K$-generating system and not a $\Z$-generating system. When the number of
generators is strictly less than the degree of the field, there is no
ambiguity and the program assumes that one is giving a $\Z_K$-generating set.
When the number of generators is greater than or equal to the degree of the
field, however, the program assumes on the contrary that you are giving a
$\Z$-generating set. If this is not the case, you \var{must} absolutely
change it into a $\Z$-generating set, the simplest manner being to use
\kbd{idealhnf(\var{nf},$x$)}.
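For instance, in $\Q(i)$ (an illustrative session, output omitted;
\kbd{idealhnf} also accepts two generators directly):
\bprog
? nf = nfinit(x^2 + 1);   \\ Q(i)
? idealhnf(nf, 2, 1 + x)  \\ HNF of the ideal (2, 1+i) = (1+i)
@eprog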
Concerning relative extensions, some additional definitions are necessary.
$\bullet$ A \var{relative matrix} will be a matrix whose entries are
elements of a (given) number field $\var{nf}$, always expressed as column
vectors on the integral basis \kbd{\var{nf}.zk}. Hence it is a matrix of
vectors.
$\bullet$ An \tev{ideal list} will be a row vector of (fractional)
ideals of the number field $\var{nf}$.
$\bullet$ A \tev{pseudo-matrix} will be a pair $(A,I)$ where $A$ is a
relative matrix and $I$ an ideal list whose length is the same as the number
of columns of $A$. This pair will be represented by a 2-component row vector.
$\bullet$ The \tev{module} generated by a pseudo-matrix $(A,I)$ is
the sum $\sum_j{\Bbb a}_jA_j$ where the ${\Bbb a}_j$ are the ideals of $I$
and $A_j$ is the $j$-th column of $A$.
$\bullet$ A pseudo-matrix $(A,I)$ is a \tev{pseudo-basis} of the module
it generates if $A$ is a square matrix with non-zero determinant and all the
ideals of $I$ are non-zero. We say that it is in Hermite Normal
Form\sidx{Hermite normal form} (HNF) if it is upper triangular and all the
elements of the diagonal are equal to 1.
$\bullet$ The \var{determinant} of a pseudo-basis $(A,I)$ is the ideal
equal to the product of the determinant of $A$ by all the ideals of $I$. The
determinant of a pseudo-matrix is the determinant of any pseudo-basis of the
module it generates.
Finally, when defining a relative extension, the base field should be
defined by a variable having a lower priority (i.e.~a higher number)
than the variable defining the extension. For example, under GP you can
use the variable name $y$ (or $t$) to define the base field, and the
variable name $x$ to define the relative extension.
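For instance (a minimal sketch):
\bprog
? nf  = nfinit(y^2 - 5);       \\ base field defined in the variable y
? rnf = rnfinit(nf, x^2 - y);  \\ relative extension defined in x
@eprog
Reversing the roles of $x$ and $y$ here would be incorrect, since $x$ has
higher priority than $y$.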
Now a last set of definitions concerning the way big ray number fields
(or \var{bnr}) are input, using class field theory.
These are defined by a triple
$a1$, $a2$, $a3$, where the defining set $[a1,a2,a3]$ can have any of the
following forms: $[\var{bnr}]$, $[\var{bnr},\var{subgroup}]$,
$[\var{bnf},\var{module}]$, $[\var{bnf},\var{module},\var{subgroup}]$, where:
$\bullet$ $\var{bnf}$ is as output by \kbd{bnfclassunit} or \kbd{bnfinit},
where units are mandatory unless the ideal is trivial; \var{bnr} by
\kbd{bnrclass} (with $\fl>0$) or \kbd{bnrinit}. This is the ground field.
$\bullet$ \var{module} is either an ideal in any form (see above) or a
two-component row vector containing an ideal and an $r_1$-component row
vector of flags indicating which real Archimedean embeddings to take in the
module.
$\bullet$ \var{subgroup} is the HNF matrix of a subgroup of the ray class group
of the ground field for the modulus \var{module}. This is input as a square
matrix expressing generators of a subgroup of the ray class group
\kbd{\var{bnr}.clgp} on the given generators.
The corresponding \var{bnr} is then the subfield of the ray class field of the
ground field for the given modulus, associated to the given subgroup.
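For instance, with a real quadratic ground field (an illustrative sketch,
outputs omitted):
\bprog
? bnf = bnfinit(x^2 - 5);
? bnr = bnrinit(bnf, 7);            \\ modulus (7), no Archimedean part
? bnr2 = bnrinit(bnf, [7, [1,1]]);  \\ same finite part, both real places
@eprog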
All the functions which are specific to relative extensions, number fields,
big number fields, big number rays, share the prefix \kbd{rnf}, \kbd{nf},
\kbd{bnf}, \kbd{bnr} respectively. They are meant to take as first argument a
number field of that precise type, respectively output by \kbd{rnfinit},
\kbd{nfinit}, \kbd{bnfinit}, and \kbd{bnrinit}.
However, and even though it may not be specified in the descriptions of the
functions below, it is permissible, if the function expects a $\var{nf}$, to
use a $\var{bnf}$ instead (which contains much more information). The program
will make the effort of converting to what it needs. On the other hand, if
the program requires a big number field, the program will \var{not} launch
\kbd{bnfinit} for you, which can be a costly operation. Instead, it will give
you a specific error message.
The data types corresponding to the structures described above are rather
complicated. Thus, as we already have seen it with elliptic curves, GP
provides you with some ``member functions'' to retrieve the data you need
from these structures (once they have been initialized of course). The
relevant types of number fields are indicated between parentheses:
\smallskip
\sidx{member functions}
\settabs\+xxxxxxx&(\var{bnr},x&\var{bnf},x&nf\hskip2pt&)x&: &\cr
\+\tet{bnf} &(\var{bnr},& \var{bnf}&&)&: & big number field.\cr
\+\tet{clgp} &(\var{bnr},& \var{bnf}&&)&: & classgroup. This one admits the
following three subclasses:\cr
\+ \quad \tet{cyc} &&&&&: & \quad cyclic decomposition
(SNF)\sidx{Smith normal form}.\cr
\+ \quad \kbd{gen}\sidx{gen (member function)} &&&&&: &
\quad generators.\cr
\+ \quad \tet{no} &&&&&: & \quad number of elements.\cr
\+\tet{diff} &(\var{bnr},& \var{bnf},& \var{nf}&)&: & the different ideal.\cr
\+\tet{codiff}&(\var{bnr},& \var{bnf},& \var{nf}&)&: & the codifferent
(inverse of the different in the ideal group).\cr
\+\tet{disc} &(\var{bnr},& \var{bnf},& \var{nf}&)&: & discriminant.\cr
\+\tet{fu} &(\var{bnr},& \var{bnf},& \var{nf}&)&: &
\idx{fundamental units}.\cr
\+\tet{futu} &(\var{bnr},& \var{bnf}&&)&: & $[u,w]$, $u$ is a vector of
fundamental units, $w$ generates the torsion.\cr
\+\tet{nf} &(\var{bnr},& \var{bnf},& \var{nf}&)&: & number field.\cr
\+\tet{reg} &(\var{bnr},& \var{bnf},&&)&: & regulator.\cr
\+\tet{roots}&(\var{bnr},& \var{bnf},& \var{nf}&)&: & roots of the
polynomial generating the field.\cr
\+\tet{sign} &(\var{bnr},& \var{bnf},& \var{nf}&)&: & $[r_1,r_2]$ the
signature of the field. This means that the field has $r_1$ real \cr
\+ &&&&&& embeddings, $2r_2$ complex ones.\cr
\+\tet{t2} &(\var{bnr},& \var{bnf},& \var{nf}&)&: & the T2 matrix (see
\kbd{nfinit}).\cr
\+\tet{tu} &(\var{bnr},& \var{bnf},&&)&: & a generator for the torsion
units.\cr
\+\tet{tufu} &(\var{bnr},& \var{bnf},&&)&: & as \kbd{futu}, but outputs
$[w,u]$.\cr
\+\tet{zk} &(\var{bnr},& \var{bnf},& \var{nf}&)&: & integral basis, i.e.~a
$\Z$-basis of the maximal order.\cr
\+\tet{zkst} &(\var{bnr}& & &)&: & structure of $(\Z_K/m)^*$ (can be
extracted also from an \var{idealstar}).\cr
For instance, assume that $\var{bnf} = \kbd{bnfinit}(\var{pol})$, for some
polynomial. Then \kbd{\var{bnf}.clgp} retrieves the class group, and
\kbd{\var{bnf}.clgp.no} the class number. If we had set $\var{bnf} =
\kbd{nfinit}(\var{pol})$, both would have output an error message. All these
functions are completely recursive, thus for instance
\kbd{\var{bnr}.bnf.nf.zk} will yield the maximal order of \var{bnr} (which
you could get directly with a simple \kbd{\var{bnr}.zk} of course).
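For instance (the class group of $\Q(\sqrt{-23})$ is cyclic of order 3):
\bprog
? bnf = bnfinit(x^2 + 23);
? bnf.clgp.no
%2 = 3
? bnf.cyc
%3 = [3]
@eprog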
\medskip
The following functions, starting with \kbd{buch} in library mode, and with
\kbd{bnf} under GP, are implementations of the sub-exponential algorithms for
finding class and unit groups under \idx{GRH}, due to Hafner-McCurley,
\idx{Buchmann} and Cohen-Diaz-Olivier.
The general call to the functions concerning class groups of general number
fields (i.e.~excluding \kbd{quadclassunit}) involves a polynomial $P$ and a
technical vector
$$\var{tech} = [c,c2,\var{nrel},\var{borne},\var{nrpid},\var{minsfb}],$$
where the parameters are to be understood as follows:
$P$ is the defining polynomial for the number field, which must be in
$\Z[X]$, irreducible and, preferably, monic. In fact, if you supply a
non-monic polynomial at this point, GP will issue a warning, then
\var{transform your polynomial} so that it becomes monic. Instead of the
normal result, say \kbd{res}, you then get a vector \kbd{[res,Mod(a,Q)]},
where \kbd{Mod(a,Q)=Mod(X,P)} gives the change of variables.
The numbers $c$ and $c2$ are positive real numbers which control the
execution time and the stack size. To get maximum speed, set $c2=c$. To get a
rigorous result (under \idx{GRH}) you must take $c2=12$ (or $c2=6$ in the
quadratic case, but then you should use the much faster function
\kbd{quadclassunit}). Reasonable values for $c$ are between $0.1$ and
$2$. (The defaults are $c=c2=0.3$).
$\var{nrel}$ is the number of initial extra relations requested in
computing the
relation matrix. Reasonable values are between 5 and 20. (The default is 5).
$\var{borne}$ is a multiplicative coefficient of the Minkowski bound which
controls
the search for small norm relations. If this parameter is set equal to 0, the
program does not search for small norm relations. Otherwise reasonable values
are between $0.5$ and $2.0$. (The default is $1.0$).
$\var{nrpid}$ is the maximal number of small norm relations associated to each
ideal in the factor base. Irrelevant when $\var{borne}=0$. Otherwise,
reasonable values are between 4 and 20. (The default is 4).
$\var{minsfb}$ is the minimal number of elements in the ``sub-factorbase''.
If the
program does not seem to succeed in finding a full rank matrix (which you can
see in GP by typing \kbd{\bs g 2}), increase this number. Reasonable values
are between 2 and 5. (The default is 3).
\misctitle{Remarks.}
Apart from the polynomial $P$, you don't need to supply any of the technical
parameters (under the library you still need to send at least an empty
vector, \kbd{cgetg(1,t\_VEC)}). However, should you choose to set some of
them, they \var{must} be given in the requested order. For example, if you
want to specify a given value of $\var{nrel}$, you must give some values as
well for $c$ and $c2$, and provide a vector $[c,c2,\var{nrel}]$.
Note also that you can use an $\var{nf}$ instead of $P$, which avoids
recomputing the integral basis and analogous quantities.
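For instance, to set $\var{nrel} = 10$ one must also supply values for $c$
and $c2$ (here the defaults):
\bprog
? bnf = bnfinit(x^3 - x - 1, 0, [0.3, 0.3, 10]);
@eprog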
\smallskip
\subsecidx{bnfcertify}$(\var{bnf})$: $\var{bnf}$ being a big number field
as output by \kbd{bnfinit} or \kbd{bnfclassunit}, checks whether the result
is correct, i.e.~whether it is possible to remove the assumption of the
Generalized Riemann Hypothesis\sidx{GRH}. If it is correct, the answer is 1.
If not, the program may output some error message, but more probably will loop
indefinitely. In \var{no} case can the program give a wrong answer
(barring bugs of course): if the program answers 1, the answer is certified.
\syn{certifybuchall}{\var{bnf}}, and the result is a C long.
\subsecidx{bnfclassunit}$(P,\{\fl=0\},\{\var{tech}=[\,]\})$: \idx{Buchmann}'s
sub-exponential algorithm for computing the class group, the regulator and a
system of \idx{fundamental units} of the general algebraic number field $K$
defined by the irreducible polynomial $P$ with integer coefficients.
The result of this function is a vector $v$ with 10 components (it is
\var{not} a $\var{bnf}$, you need \kbd{bnfinit} for that), which for ease of
presentation is in fact output as a one column matrix. First we describe the
default behaviour ($\fl=0$):
$v[1]$ is equal to the polynomial $P$. Note that for optimum performance,
$P$ should have gone through \kbd{polred} or $\kbd{nfinit}(x,2)$.
$v[2]$ is the 2-component vector $[r1,r2]$, where $r1$ and $r2$ are as usual
the number of real and half the number of complex embeddings of the number
field $K$.
$v[3]$ is the 2-component vector containing the field discriminant and the
index.
$v[4]$ is an integral basis in Hermite normal form.
$v[5]$ (\kbd{$v$.clgp}) is a 3-component vector containing the class number
(\kbd{$v$.clgp.no}), the structure of the class group as a product of cyclic
groups of order $n_i$ (\kbd{$v$.clgp.cyc}), and the corresponding generators
of the class group of respective orders $n_i$ (\kbd{$v$.clgp.gen}).
$v[6]$ (\kbd{$v$.reg}) is the regulator computed to an accuracy which is the
maximum of an internally determined accuracy and of the default.
$v[7]$ is a measure of the correctness of the result. If it is close to 1,
the results are correct (under \idx{GRH}). If it is close to a larger integer,
this shows that the product of the class number by the regulator is off by a
factor equal to this integer, and you must start again with a larger value
for $c$ or a different random seed, i.e.~use the function \kbd{setrand}.
(Since the computation involves a random process, starting again with exactly
the same parameters may give the correct result.) In this case a warning
message is printed.
$v[8]$ (\kbd{$v$.tu}) a vector with 2 components, the first being the number
$w$ of roots of unity in $K$ and the second a primitive $w$-th root of unity
expressed as a polynomial.
$v[9]$ (\kbd{$v$.fu}) is a system of fundamental units also expressed as
polynomials.
$v[10]$ gives a measure of the correctness of the computations of the
fundamental units (not of the regulator), expressed as a number of bits. If
this number is greater than $20$, say, everything is OK. If $v[10]\le0$,
then we have lost all accuracy in computing the units (usually an error
message will be printed and the units not given). In the intermediate cases,
one must proceed with caution (for example by increasing the current
precision).
If $\fl=1$, and the precision happens to be insufficient for obtaining the
fundamental units exactly, the internal precision is doubled and the
computation redone, until the exact results are obtained. The user should be
warned that this can take a very long time when the coefficients of the
fundamental units on the integral basis are very large, for example in the
case of large real quadratic fields. In that case, there are alternate
methods for representing algebraic numbers which are not implemented in PARI.
If $\fl=2$, the fundamental units and roots of unity are not computed.
Hence the result has only 7 components, the first seven ones.
$\var{tech}$ is a technical vector (empty by default) containing $c$, $c2$,
\var{nrel}, \var{borne}, \var{nrpid}, \var{minsfb}, in this order (see
the beginning of the section or the keyword \kbd{bnf}).
You can supply any number of these {\it provided you give an actual value to
each of them} (the ``empty arg'' trick won't work here). Careful use of these
parameters may speed up your computations considerably.
\syn{bnfclassunit0}{P,\fl,\var{tech},\var{prec}}.
\subsecidx{bnfclgp}$(P,\{\var{tech}=[\,]\})$: as \kbd{bnfclassunit}, but only
outputs $v[5]$, i.e.~the class group.
\syn{bnfclassgrouponly}{P,\var{tech},\var{prec}}, where \var{tech}
is as described under \kbd{bnfclassunit}.
\subsecidx{bnfdecodemodule}$(\var{nf},m)$: if $m$ is a module as output in the
first component of an extension given by \kbd{bnrdisclist}, outputs the
true module.
\syn{decodemodule}{\var{nf},m}.
\subsecidx{bnfinit}$(P,\{\fl=0\},\{\var{tech}=[\,]\})$: essentially identical
to \kbd{bnfclassunit} except that the output contains a lot of technical data,
and should not be printed out explicitly in general. The result of
\kbd{bnfinit} is used in programs such as \kbd{bnfisprincipal},
\kbd{bnfisunit} or \kbd{bnfnarrow}. The result is a 10-component vector
$\var{bnf}$.
\noindent $\bullet$ The first 6 and last 2 components are technical and in
principle are not used by the casual user. However, for the sake of
completeness, their description is as follows. We use the notations explained
in the book by H. Cohen, {\it A Course in Computational Algebraic Number
Theory}, Graduate Texts in Maths \key{138}, Springer-Verlag, 1993, Section
6.5, and subsection 6.5.5 in particular.
$\var{bnf}[1]$ contains the matrix $W$, i.e.~the matrix in Hermite normal
form giving relations for the class group on prime ideal generators
$(\p_i)_{1\le i\le r}$.
$\var{bnf}[2]$ contains the matrix $B$, i.e.~the matrix containing the
expressions of the prime ideal factorbase in terms of the $\p_i$. It is an
$r\times c$ matrix.
$\var{bnf}[3]$ contains the complex logarithmic embeddings of the system of
fundamental units which has been found. It is an $(r_1+r_2)\times(r_1+r_2-1)$
matrix.
$\var{bnf}[4]$ contains the matrix $M''_C$ of Archimedean components of the
relations of the matrix $(W|B)$.
$\var{bnf}[5]$ contains the prime factor base, i.e.~the list of prime
ideals used in finding the relations.
$\var{bnf}[6]$ contains the permutation of the prime factor base which was
necessary to reduce the relation matrix to the form explained in subsection
6.5.5 of GTM~138 (i.e.~with a big $c\times c$ identity matrix on the lower
right). Note that in the above mentioned book, the need to permute the rows
of the relation matrices which occur was not emphasized.
$\var{bnf}[9]$ is a 3-element row vector used in \tet{bnfisprincipal} only
and obtained as follows. Let $D = U W V$ obtained by applying the
\idx{Smith normal form} algorithm to the matrix $W$ (= $\var{bnf}[1]$) and
let $U_r$ be the reduction of $U$ modulo $D$. The first elements of the
factorbase are given (in terms of \kbd{bnf.gen}) by the columns of $U_r$,
with Archimedean component $g_a$; let also $GD_a$ be the Archimedean
components of the generators of the (principal) ideals defined by the
\kbd{bnf.gen[i]\pow bnf.cyc[i]}. Then $\var{bnf}[9]=[U_r, g_a, GD_a]$.
Finally, $\var{bnf}[10]$ is by default unused and set equal to 0. This
field is used to store further information about the field as it becomes
available (which is rarely needed, hence would be too expensive to compute
during the initial \kbd{bnfinit} call). For instance, the generators of the
principal ideals \kbd{bnf.gen[i]\pow bnf.cyc[i]} (during a call to
\tet{bnrisprincipal}), or those corresponding to the relations in $W$ and
$B$ (when the \kbd{bnf} internal precision needs to be increased).
\smallskip
\noindent$\bullet$ The less technical components are as follows:
$\var{bnf}[7]$ or \kbd{\var{bnf}.nf} is equal to the number field data
$\var{nf}$ as would be given by \kbd{nfinit}.
$\var{bnf}[8]$ is a vector containing the last 6 components of
\kbd{bnfclassunit[,1]}, i.e.~the classgroup \kbd{\var{bnf}.clgp}, the
regulator \kbd{\var{bnf}.reg}, the general ``check'' number which should be
close to 1, the number of roots of unity and a generator \kbd{\var{bnf}.tu},
the fundamental units \kbd{\var{bnf}.fu}, and finally the check on their
computation. If the precision becomes insufficient, GP outputs a warning
(\kbd{fundamental units too large, not given}) and does not strive to
compute the units by default ($\fl=0$).
When $\fl=1$, GP insists on finding the fundamental units exactly, the
internal precision being doubled and the computation redone, until the exact
results are obtained. The user should be warned that this can take a very
long time when the coefficients of the fundamental units on the integral
basis are very large.
When $\fl=2$, on the contrary, it is initially agreed that GP
will not compute units.
When $\fl=3$, computes a very small version of \kbd{bnfinit}, a ``small big
number field'' (or \var{sbnf} for short) which contains enough information
to recover the full $\var{bnf}$ vector very rapidly, but which is much
smaller and hence easy to store and print. It is supposed to be used in
conjunction with \kbd{bnfmake}. The output is a 12 component vector $v$, as
follows. Let $\var{bnf}$ be the result of a full \kbd{bnfinit}, complete with
units. Then $v[1]$ is the polynomial $P$, $v[2]$ is the number of real
embeddings $r_1$, $v[3]$ is the field discriminant, $v[4]$ is the integral
basis, $v[5]$ is the list of roots as in the sixth component of \kbd{nfinit},
$v[6]$ is the matrix $MD$ of \kbd{nfinit} giving a $\Z$-basis of the
different, $v[7]$ is the matrix $\kbd{W} = \var{bnf}[1]$, $v[8]$ is the
matrix $\kbd{matalpha}=\var{bnf}[2]$, $v[9]$ is the prime ideal factor base
$\var{bnf}[5]$ coded in a compact way, and ordered according to the
permutation $\var{bnf}[6]$, $v[10]$ is the 2-component vector giving the
number of roots of unity and a generator, expressed on the integral basis,
$v[11]$ is the list of fundamental units, expressed on the integral basis,
$v[12]$ is a vector containing the algebraic numbers alpha corresponding to
the columns of the matrix \kbd{matalpha}, expressed on the integral basis.
Note that all the components are exact (integral or rational), except for
the roots in $v[5]$. In practice, this is the only component which a user
is allowed to modify, by recomputing the roots to a higher accuracy if
desired. Note also that the member functions will \var{not} work on
\var{sbnf}, you have to use \kbd{bnfmake} explicitly first.
\syn{bnfinit0}{P,\fl,\var{tech},\var{prec}}.
\subsecidx{bnfisintnorm}$(\var{bnf},x)$: computes a complete system of
solutions (modulo units of positive norm) of the absolute norm equation
$\text{Norm}(a)=x$,
where $a$ is an integer in $\var{bnf}$. If $\var{bnf}$ has not been certified,
the correctness of the result depends on the validity of \idx{GRH}.
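For instance (an illustrative session, output omitted):
\bprog
? bnf = bnfinit(x^2 + 1);
? bnfisintnorm(bnf, 5)  \\ a^2 + b^2 = 5: solutions 2+i and 2-i, up to units
@eprog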
\syn{bnfisintnorm}{\var{bnf},x}.
\subsecidx{bnfisnorm}$(\var{bnf},x,\{\fl=1\})$: tries to tell whether the
rational number $x$ is the norm of some element $y$ in $\var{bnf}$. Returns a
vector $[a,b]$ where $x = \text{Norm}(a)\cdot b$. Looks for a solution which
is an $S$-unit,
with $S$ a certain set of prime ideals containing (among others) all primes
dividing $x$. If $\var{bnf}$ is known to be \idx{Galois}, set $\fl=0$ (in
this case,
$x$ is a norm iff $b=1$). If $\fl$ is non-zero, the program adds to $S$ the
following prime ideals, depending on the sign of $\fl$: if $\fl>0$, the
ideals of norm less than $\fl$; if $\fl<0$, the ideals dividing $\fl$.
If you are willing to assume \idx{GRH}, the answer is guaranteed
(i.e.~$x$ is a norm iff $b=1$), if $S$ contains all primes less than
$12\log(\var{disc}(\var{Bnf}))^2$,
where $\var{Bnf}$ is the Galois closure of $\var{bnf}$.
\syn{bnfisnorm}{\var{bnf},x,\fl,\var{prec}}, where $\fl$ and
$\var{prec}$ are \kbd{long}s.
\subsecidx{bnfissunit}$(\var{bnf},\var{sfu},x)$: $\var{bnf}$ being output by
\kbd{bnfinit}, \var{sfu} by \kbd{bnfsunit}, gives the column vector of
exponents of $x$ on the fundamental $S$-units and the roots of unity.
If $x$ is not an $S$-unit, outputs an empty vector.
\syn{bnfissunit}{\var{bnf},\var{sfu},x}.
\subsecidx{bnfisprincipal}$(\var{bnf},x,\{\fl=1\})$: $\var{bnf}$ being the
number field data output by \kbd{bnfinit}, and $x$ being either a $\Z$-basis
of an ideal in the number field (not necessarily in HNF) or a prime ideal in
the format output by the function \kbd{idealprimedec}, this function tests
whether the ideal is principal or not. The result is more complete than a
simple true/false answer: it gives a row vector $[v_1,v_2,check]$, where
$v_1$ is the vector of components $c_i$ of the class of the ideal $x$ in the
class group, expressed on the generators $g_i$ given by \kbd{bnfinit}
(specifically \kbd{\var{bnf}.clgp.gen} which is the same as
\kbd{\var{bnf}[8][1][3]}). The $c_i$ are chosen so that $0\le c_i < n_i$,
where $n_i$ is the order of $g_i$.
\syn{idealprimedec}{\var{nf},p}.
\subsecidx{idealprincipal}$(\var{nf},x)$: creates the principal ideal
generated by the algebraic number $x$ (which must be of type integer,
rational or polmod) in the number field $\var{nf}$. The result is a
one-column matrix.
\syn{principalideal}{\var{nf},x}.
\subsecidx{idealred}$(\var{nf},I,\{\var{vdir}=0\})$: \idx{LLL} reduction of
the ideal $I$ in the number field \var{nf}, along the direction \var{vdir}.
If \var{vdir} is present, it must be an $r1+r2$-component vector ($r1$ and
$r2$ number of real and complex places of \var{nf} as usual).
This function finds a ``small'' $a$ in $I$ (it is an LLL pseudo-minimum
along direction \var{vdir}). The result is the \idx{Hermite normal form} of
the LLL-reduced ideal $r I/a$, where $r$ is a rational number such that the
resulting ideal is integral and primitive. This is often, but not always, a
reduced ideal in the sense of \idx{Buchmann}. If $I$ is an idele, the
logarithmic embeddings of $a$ are subtracted to the Archimedean part.
More often than not, a \idx{principal ideal} will yield the identity
matrix. This is a quick and dirty way to check if ideals are principal
without computing a full \kbd{bnf} structure, but it's not a necessary
condition; hence, a non-trivial result doesn't prove the ideal is
non-trivial in the class group.
Note that this is \var{not} the same as the LLL reduction of the lattice
$I$ since ideal operations are involved.
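For instance (an illustrative sketch, output omitted; as explained above,
the identity matrix is expected here, but not guaranteed):
\bprog
? nf = nfinit(x^2 - 2);
? P = idealprimedec(nf, 7)[1];  \\ a prime above 7, principal since h = 1
? idealred(nf, P)
@eprog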
\syn{ideallllred}{\var{nf},x,\var{vdir},\var{prec}}, where an omitted
\var{vdir} is coded as \kbd{NULL}.
\subsecidx{idealstar}$(\var{nf},I,\{\fl=1\})$: \var{nf} being a number
field, and $I$
either an ideal in any form, or a row vector whose first component is an
ideal and whose second component is a row vector of $r_1$ components, each
equal to 0 or 1, outputs
necessary data for computing in the group $(\Z_K/I)^*$.
If $\fl=2$, the result is a 5-component vector $w$. $w[1]$ is the ideal
or module $I$ itself. $w[2]$ is the structure of the group. The other
components are difficult to describe and are used only in conjunction with
the function \kbd{ideallog}.
If $\fl=1$ (the default), same as $\fl=2$, but do not compute explicit generators
for the cyclic components, which saves time.
If $\fl=0$, computes the structure of $(\Z_K/I)^*$ as a 3-component vector
$v$. $v[1]$ is the order, $v[2]$ is the vector of SNF\sidx{Smith normal form}
cyclic components and
$v[3]$ the corresponding generators. When the row vector is explicitly
included, the
non-zero elements of this vector are considered as real embeddings of
\var{nf} in the order given by \kbd{polroots}, i.e.~in \var{nf}[6]
(\kbd{\var{nf}.roots}), and then $I$ is a module with components at infinity.
To solve discrete logarithms (using \kbd{ideallog}), you have to choose
$\fl=2$.
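For instance (an illustrative session, exact output omitted):
\bprog
? nf = nfinit(x^2 + 1);
? g = idealstar(nf, 5, 2);
? g[2]  \\ structure of (Z[i]/5)^*, a group of order 16
@eprog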
\syn{idealstar0}{\var{nf},I,\fl}.
\subsecidx{idealtwoelt}$(\var{nf},x,\{a\})$: computes a two-element
representation of the ideal $x$ in the number field $\var{nf}$, using a
straightforward (exponential time) search. $x$ can be an ideal in any form,
(including perhaps an Archimedean part, which is ignored) and the result is a
row vector $[a,\alpha]$ with two components such that $x=a\Z_K+\alpha\Z_K$
and $a\in\Z$, where $a$ is the one passed as argument if any. If $x$ is given
by at least two generators, $a$ is chosen to be the positive generator of
$x\cap\Z$.
Note that when an explicit $a$ is given, we use an asymptotically faster
method; in practice, however, it is usually slower.
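For instance (an illustrative session, output omitted):
\bprog
? nf = nfinit(x^2 + 1);
? id = idealhnf(nf, 2, 1 + x);  \\ the prime ideal (1 + i)
? idealtwoelt(nf, id)           \\ [2, alpha], with (2, alpha) = (1 + i)
@eprog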
\syn{ideal_two_elt0}{\var{nf},x,a}, where an omitted $a$ is entered as
\kbd{NULL}.
\subsecidx{idealval}$(\var{nf},x,\var{vp})$: gives the valuation of the
ideal $x$ at the prime ideal \var{vp} in the number field $\var{nf}$,
where \var{vp} must be a
5-component vector as given by \kbd{idealprimedec}.
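For instance, since $(2) = (1+i)^2$ in $\Z[i]$:
\bprog
? nf = nfinit(x^2 + 1);
? P = idealprimedec(nf, 2)[1];  \\ the ramified prime above 2
? idealval(nf, 2, P)
%3 = 2
@eprog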
\syn{idealval}{\var{nf},x,\var{vp}}, and the result is a \kbd{long}
integer.
\subsecidx{ideleprincipal}$(\var{nf},x)$: creates the principal idele
generated by the algebraic number $x$ (which must be of type integer,
rational or polmod) in the number field $\var{nf}$. The result is a
two-component vector, the first being a one-column matrix representing the
corresponding principal ideal, and the second being the vector with $r_1+r_2$
components giving the complex logarithmic embedding of $x$.
\syn{principalidele}{\var{nf},x}.
\subsecidx{matalgtobasis}$(\var{nf},x)$: $\var{nf}$ being a number field in
\kbd{nfinit} format, and $x$ a matrix whose coefficients are expressed as
polmods in $\var{nf}$, transforms this matrix into a matrix whose
coefficients are expressed on the integral basis of $\var{nf}$. This is the
same as applying \kbd{nfalgtobasis} to each entry, but it would be dangerous
to use the same name.
\syn{matalgtobasis}{\var{nf},x}.
\subsecidx{matbasistoalg}$(\var{nf},x)$: $\var{nf}$ being a number field in
\kbd{nfinit} format, and $x$ a matrix whose coefficients are expressed as
column vectors on the integral basis of $\var{nf}$, transforms this matrix
into a matrix whose coefficients are algebraic numbers expressed as
polmods. This is the same as applying \kbd{nfbasistoalg} to each entry, but
it would be dangerous to use the same name.
\syn{matbasistoalg}{\var{nf},x}.
\subsecidx{modreverse}$(a)$: $a$ being a polmod $A(X)$ modulo $T(X)$, finds
the ``reverse polmod'' $B(X)$ modulo $Q(X)$, where $Q$ is the minimal
polynomial of $a$, whose degree must be equal to that of $T$, and such that if
$\theta$ is a root of $T$ then $\theta=B(\alpha)$ for a certain root $\alpha$
of $Q$.
This is very useful when one changes the generating element in algebraic
extensions.
\syn{polmodrecip}{x}.
\subsecidx{newtonpoly}$(x,p)$: gives the vector of the slopes of the Newton
polygon of the polynomial $x$ with respect to the prime number $p$. The $n$
components of the vector are in decreasing order, where $n$ is equal to the
degree of $x$. Vertical slopes occur iff the constant coefficient of $x$ is
zero and are denoted by \kbd{VERYBIGINT}, the biggest single precision
integer representable on the machine ($2^{31}-1$ (resp.~$2^{63}-1$) on 32-bit
(resp.~64-bit) machines), see \secref{se:valuation}.
\syn{newtonpoly}{x,p}.
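For instance, take $p=3$ and the polynomial $x^2+3x+27$: the coefficient
valuations are $(3,1,0)$, so the polygon has slopes $2$ and $1$, the
$3$-adic valuations of the two roots:
\bprog
? newtonpoly(x^2 + 3*x + 27, 3)
%1 = [2, 1]
@eprog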
\subsecidx{nfalgtobasis}$(\var{nf},x)$: this is the inverse function of
\kbd{nfbasistoalg}. Given an object $x$ whose entries are expressed as
algebraic numbers in the number field $\var{nf}$, transforms it so that the
entries are expressed as a column vector on the integral basis
\kbd{\var{nf}.zk}.
\syn{algtobasis}{\var{nf},x}.
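For example, in $\Q(i)$, where the integral basis is $[1,i]$:
\bprog
? nf = nfinit(y^2 + 1);
? nfalgtobasis(nf, (1 + y)^2) \\@com since $(1+i)^2 = 2i$
%2 = [0, 2]~
@eprog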
\subsecidx{nfbasis}$(x,\{\fl=0\},\{p\})$: \idx{integral basis} of the number
field defined by the irreducible, preferably monic, polynomial $x$,
using a modified version of the \idx{round 4} algorithm by
default. The binary digits of $\fl$ have the following meaning:
1: assume that no square of a prime greater than the default \kbd{primelimit}
divides the discriminant of $x$, i.e.~that the index of $x$ has only small
prime divisors.
2: use \idx{round 2} algorithm. For small degrees and coefficient size, this is
sometimes a little faster. (This program is the translation into C of a program
written by David \idx{Ford} in Algeb.)
Thus for instance, if $\fl=3$, this uses the round 2 algorithm and outputs
an order which will be maximal at all the small primes.
If $p$ is present, we assume (without checking!) that it is the two-column
matrix of the factorization of the discriminant of the polynomial $x$. Note
that it does \var{not} have to be a complete factorization. This is
especially useful if only a local integral basis for some small set of places
is desired: only factors with exponents greater or equal to 2 will be
considered.
\syn{nfbasis0}{x,\fl,p}. An extended version
is $\teb{nfbasis}(x,\&d,\fl,p)$, where $d$ will receive the discriminant of
the number field (\var{not} of the polynomial $x$), and an omitted $p$ should
be input as \kbd{gzero}. Also available are $\teb{base}(x,\&d)$ ($\fl=0$),
$\teb{base2}(x,\&d)$ ($\fl=2$) and $\teb{factoredbase}(x,p,\&d)$.
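For example, the maximal order of $\Q(\sqrt{5})$ is $\Z[(1+\sqrt{5})/2]$:
\bprog
? nfbasis(x^2 - 5)
%1 = [1, 1/2*x + 1/2]
@eprog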
\subsecidx{nfbasistoalg}$(\var{nf},x)$: this is the inverse function of
\kbd{nfalgtobasis}. Given an object $x$ whose entries are expressed on the
integral basis \kbd{\var{nf}.zk}, transforms it into an object whose entries
are algebraic numbers (i.e.~polmods).
\syn{basistoalg}{\var{nf},x}.
\subsecidx{nfdetint}$(\var{nf},x)$: given a pseudo-matrix $x$, computes a
non-zero ideal contained in (i.e.~multiple of) the determinant of $x$. This
is particularly useful in conjunction with \kbd{nfhnfmod}.
\syn{nfdetint}{\var{nf},x}.
\subsecidx{nfdisc}$(x,\{\fl=0\},\{p\})$: \idx{field discriminant} of the
number field defined by the integral, preferably monic, irreducible
polynomial $x$. $\fl$ and $p$ are exactly as in \kbd{nfbasis}. That is, $p$
provides the matrix of a partial factorization of the discriminant of $x$,
and binary digits of $\fl$ are as follows:
1: assume that no square of a prime greater than \kbd{primelimit}
divides the discriminant.
2: use the round 2 algorithm, instead of the default \idx{round 4}.
This should be
slower except maybe for polynomials of small degree and coefficients.
\syn{nfdiscf0}{x,\fl,p} where, to omit $p$, you should input \kbd{gzero}. You
can also use $\teb{discf}(x)$ ($\fl=0$).
\subsecidx{nfeltdiv}$(\var{nf},x,y)$: given two elements $x$ and $y$ in
\var{nf}, computes their quotient $x/y$ in the number field $\var{nf}$.
\syn{element_div}{\var{nf},x,y}.
\subsecidx{nfeltdiveuc}$(\var{nf},x,y)$: given two elements $x$ and $y$ in
\var{nf}, computes an algebraic integer $q$ in the number field $\var{nf}$
such that the components of $x-qy$ are reasonably small. In fact, this is
functionally identical to \kbd{round(nfeltdiv(\var{nf},x,y))}.
\syn{nfdiveuc}{\var{nf},x,y}.
\subsecidx{nfeltdivmodpr}$(\var{nf},x,y,\var{pr})$: given two elements $x$
and $y$ in \var{nf} and \var{pr} a prime ideal in \kbd{modpr} format (see
\tet{nfmodprinit}), computes their quotient $x / y$ modulo the prime ideal
\var{pr}.
\syn{element_divmodpr}{\var{nf},x,y,\var{pr}}.
\subsecidx{nfeltdivrem}$(\var{nf},x,y)$: given two elements $x$ and $y$ in
\var{nf}, gives a two-element row vector $[q,r]$ such that $x=qy+r$, $q$ is
an algebraic integer in $\var{nf}$, and the components of $r$ are
reasonably small.
\syn{nfdivres}{\var{nf},x,y}.
\subsecidx{nfeltmod}$(\var{nf},x,y)$: given two elements $x$ and $y$ in
\var{nf}, computes an element $r$ of $\var{nf}$ of the form $r=x-qy$, with
$q$ an algebraic integer, and such that $r$ is small. This is functionally
identical to
$$\kbd{x - nfeltmul(\var{nf},round(nfeltdiv(\var{nf},x,y)),y)}.$$
\syn{nfmod}{\var{nf},x,y}.
\subsecidx{nfeltmul}$(\var{nf},x,y)$: given two elements $x$ and $y$ in
\var{nf}, computes their product $x*y$ in the number field $\var{nf}$.
\syn{element_mul}{\var{nf},x,y}.
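For example, in $\Q(i)$, with both elements given as column vectors on the
integral basis $[1,i]$:
\bprog
? nf = nfinit(y^2 + 1);
? nfeltmul(nf, [1, 1]~, [1, -1]~) \\@com $(1+i)(1-i) = 2$
%2 = [2, 0]~
@eprog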
\subsecidx{nfeltmulmodpr}$(\var{nf},x,y,\var{pr})$: given two elements $x$ and
$y$ in \var{nf} and \var{pr} a prime ideal in \kbd{modpr} format (see
\tet{nfmodprinit}), computes their product $x*y$ modulo the prime ideal
\var{pr}.
\syn{element_mulmodpr}{\var{nf},x,y,\var{pr}}.
\subsecidx{nfeltpow}$(\var{nf},x,k)$: given an element $x$ in \var{nf},
and a positive or negative integer $k$, computes $x^k$ in the number field
$\var{nf}$.
\syn{element_pow}{\var{nf},x,k}.
\subsecidx{nfeltpowmodpr}$(\var{nf},x,k,\var{pr})$: given an element $x$ in
\var{nf}, an integer $k$ and a prime ideal \var{pr} in \kbd{modpr} format
(see \tet{nfmodprinit}), computes $x^k$ modulo the prime ideal \var{pr}.
\syn{element_powmodpr}{\var{nf},x,k,\var{pr}}.
\subsecidx{nfeltreduce}$(\var{nf},x,\var{ideal})$: given an ideal in
Hermite normal form and an element $x$ of the number field $\var{nf}$,
finds an element $r$ in $\var{nf}$ such that $x-r$ belongs to the ideal
and $r$ is small.
\syn{element_reduce}{\var{nf},x,\var{ideal}}.
\subsecidx{nfeltreducemodpr}$(\var{nf},x,\var{pr})$: given
an element $x$ of the number field $\var{nf}$ and a prime ideal \var{pr} in
\kbd{modpr} format, computes a canonical representative for the class of $x$
modulo \var{pr}.
\syn{nfreducemodpr2}{\var{nf},x,\var{pr}}.
\subsecidx{nfeltval}$(\var{nf},x,\var{pr})$: given an element $x$ in
\var{nf} and a prime ideal \var{pr} in the format output by
\kbd{idealprimedec}, computes the valuation at \var{pr} of the
element $x$. The same result could be obtained using
\kbd{idealval(\var{nf},x,\var{pr})} (since $x$ would then be converted to a
principal ideal), but it would be less efficient.
\syn{element_val}{\var{nf},x,\var{pr}}, and the result is a \kbd{long}.
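For example, in $\Q(i)$, where $2 = -i(1+i)^2$ ramifies:
\bprog
? nf = nfinit(y^2 + 1);
? pr = idealprimedec(nf, 2)[1];
? nfeltval(nf, 1 + y, pr) \\@com the element $1+i$
%3 = 1
@eprog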
\subsecidx{nffactor}$(\var{nf},x)$: factorization of the univariate
polynomial $x$ over the number field $\var{nf}$ given by \kbd{nfinit}. $x$
has coefficients in $\var{nf}$ (i.e.~either scalar, polmod, polynomial or
column vector). The main variable of $\var{nf}$ must be of \var{lower}
priority than that of $x$ (in other words, the variable number of $\var{nf}$
must be \var{greater} than that of $x$). However if the polynomial defining
the number field occurs explicitly in the coefficients of $x$ (as modulus of
a \typ{POLMOD}), its main variable must be \var{the same} as the main
variable of $x$. For example,
\bprog
? nf = nfinit(y^2 + 1);
? nffactor(nf, x^2 + y); \\@com OK
? nffactor(nf, x^2 + Mod(y, y^2+1)); \\ @com OK
? nffactor(nf, x^2 + Mod(z, z^2+1)); \\ @com WRONG
@eprog
\syn{nffactor}{\var{nf},x}.
\subsecidx{nffactormod}$(\var{nf},x,\var{pr})$: factorization of the
univariate polynomial $x$ modulo the prime ideal \var{pr} in the number
field $\var{nf}$. $x$ can have coefficients in the number field (scalar,
polmod, polynomial, column vector) or modulo the prime ideal (integermod
modulo the rational prime under \var{pr}, polmod or polynomial with
integermod coefficients, column vector of integermod). The prime ideal
\var{pr} \var{must} be in the format output by \kbd{idealprimedec}. The
main variable of $\var{nf}$ must be of lower priority than that of $x$ (in
other words the variable number of $\var{nf}$ must be greater than that of
$x$). However if the coefficients of the number field occur explicitly (as
polmods) as coefficients of $x$, the variable of these polmods \var{must}
be the same as the main variable of $x$ (see \kbd{nffactor}).
\syn{nffactormod}{\var{nf},x,\var{pr}}.
\subsecidx{nfgaloisapply}$(\var{nf},\var{aut},x)$: $\var{nf}$ being a
number field as output by \kbd{nfinit}, and \var{aut} being a \idx{Galois}
automorphism of $\var{nf}$ expressed either as a polynomial or a polmod
(such automorphisms being found using for example one of the variants of
\kbd{nfgaloisconj}), computes the action of the automorphism \var{aut} on
the object $x$ in the number field. $x$ can be an element (scalar, polmod,
polynomial or column vector) of the number field, an ideal (either given by
$\Z_K$-generators or by a $\Z$-basis), a prime ideal (given as a 5-element
row vector) or an idele (given as a 2-element row vector). Because of
possible confusion with elements and ideals, other vector or matrix
arguments are forbidden.
\syn{galoisapply}{\var{nf},\var{aut},x}.
\subsecidx{nfgaloisconj}$(\var{nf},\{\fl=0\},\{d\})$: $\var{nf}$ being a
number field as output by \kbd{nfinit}, computes the conjugates of a root
$r$ of the non-constant polynomial $x=\var{nf}[1]$ expressed as
polynomials in $r$. This can be used even if the number field $\var{nf}$ is
not \idx{Galois} since some conjugates may lie in the field. As a note to
old-timers of PARI, starting with version 2.0.17 this function works much
better than in earlier versions.
$\var{nf}$ can simply be a polynomial if $\fl\neq 1$.
If $\fl=0$ (the default): if $\var{nf}$ is a number field, use a
combination of flags $4$ and $1$, and the result is always complete.
Otherwise, use a combination of flags $4$ and $2$; the result is subject to
the restrictions of $\fl=2$, but a warning is issued when it is not proven
complete.
If $\fl=1$, use \kbd{nfroots} (requires a number field).
If $\fl=2$, use complex approximations to the roots and an integral
\idx{LLL}. The result is not guaranteed to be complete: some
conjugates may be missing (no warning issued), especially so if the
corresponding polynomial has a huge index. In that case, increasing
the default precision may help.
If $\fl=4$, use Allombert's algorithm and permutation testing. If the
field is Galois with ``weakly'' super solvable Galois group, return
the complete list of automorphisms, else only the identity element. If
present, $d$ is assumed to be a multiple of the least common
denominator of the conjugates expressed as polynomials in a root of
\var{pol}.
A group $G$ is ``weakly'' super solvable if it contains a super solvable
normal subgroup $H$ such that $G=H$, or $G/H \simeq A_4$, or $G/H \simeq
S_4$. Abelian and nilpotent groups are ``weakly'' super solvable. In
practice, almost all groups of small order are ``weakly'' super solvable,
the exceptions having order 36 (1 exception), 48 (2), 56 (1), 60 (1),
72 (5), 75 (1), 80 (1), 96 (10) and $\geq 108$.
Hence $\fl = 4$ makes it possible to quickly check whether a polynomial of
degree strictly less than $36$ is Galois or not. This method is much faster
than \kbd{nfroots} and can be applied to polynomials of degree larger than
$50$.
\syn{galoisconj0}{\var{nf},\fl,d,\var{prec}}. Also available are
$\teb{galoisconj}(\var{nf})$ for $\fl=0$,
$\teb{galoisconj2}(\var{nf},n,\var{prec})$ for $\fl=2$ where $n$ is a bound
on the number of conjugates, and $\teb{galoisconj4}(\var{nf},d)$
corresponding to $\fl=4$.
\subsecidx{nfhilbert}$(\var{nf},a,b,\{\var{pr}\})$: if \var{pr} is omitted,
compute the global \idx{Hilbert symbol} $(a,b)$ in $\var{nf}$, that is $1$
if $x^2 - a y^2 - b z^2$ has a non-trivial solution $(x,y,z)$ in $\var{nf}$,
and $-1$ otherwise. Otherwise compute the local symbol modulo the prime ideal
\var{pr} (as output by \kbd{idealprimedec}).
\syn{nfhilbert}{\var{nf},a,b,\var{pr}}, where an omitted \var{pr} is coded
as \kbd{NULL}.
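For example, $\Q(\sqrt{5})$ is totally real, so $x^2 - (-1)y^2 - (-1)z^2 =
x^2+y^2+z^2$ has no non-trivial zero there; the obstruction at the real
places makes the global symbol $(-1,-1)$ equal to $-1$:
\bprog
? nf = nfinit(y^2 - 5);
? nfhilbert(nf, -1, -1)
%2 = -1
@eprog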
\subsecidx{nfhnf}$(\var{nf},x)$: given a pseudo-matrix $(A,I)$, finds a
pseudo-basis in \idx{Hermite normal form} of the module it generates.
\syn{nfhermite}{\var{nf},x}.
\subsecidx{nfhnfmod}$(\var{nf},x,\var{detx})$: given a pseudo-matrix $(A,I)$
and an ideal \var{detx} which is contained in (read integral multiple of) the
determinant of $(A,I)$, finds a pseudo-basis in \idx{Hermite normal form}
of the module generated by $(A,I)$. This avoids coefficient explosion.
\var{detx} can be computed using the function \kbd{nfdetint}.
\syn{nfhermitemod}{\var{nf},x,\var{detx}}.
\subsecidx{nfinit}$(\var{pol},\{\fl=0\})$: \var{pol} being a non-constant,
preferably monic, irreducible polynomial in $\Z[X]$, initializes a
\var{number field} structure (\kbd{nf}) associated to the field $K$ defined
by \var{pol}. As such, it's a technical object passed as the first argument
to most \kbd{nf}\var{xxx} functions, but it contains some information which
may be directly useful. Access to this information via \var{member
functions} is preferred, since the specific data organization described below
may change in the future. Currently, \kbd{nf} is a row vector with 9
components:
$\var{nf}[1]$ contains the polynomial \var{pol} (\kbd{\var{nf}.pol}).
$\var{nf}[2]$ contains $[r1,r2]$ (\kbd{\var{nf}.sign}), the number of real
and complex places of $K$.
$\var{nf}[3]$ contains the discriminant $d(K)$ (\kbd{\var{nf}.disc}) of $K$.
$\var{nf}[4]$ contains the index of $\var{nf}[1]$,
i.e.~$[\Z_K : \Z[\theta]]$, where $\theta$ is any root of $\var{nf}[1]$.
$\var{nf}[5]$ is a vector containing 7 matrices $M$, $MC$, $T2$, $T$,
$MD$, $TI$, $MDI$ useful for certain computations in the number field $K$.
\quad$\bullet$ $M$ is the $(r1+r2)\times n$ matrix whose columns represent
the numerical values of the conjugates of the elements of the integral
basis.
\quad$\bullet$ $MC$ is essentially the conjugate of the transpose of $M$,
except that the last $r2$ columns are also multiplied by 2.
\quad$\bullet$ $T2$ is an $n\times n$ matrix equal to the real part of the
product $MC\cdot M$ (which is a real positive definite symmetric matrix), the
so-called $T_2$-matrix (\kbd{\var{nf}.t2}).
\quad$\bullet$ $T$ is the $n\times n$ matrix whose coefficients are
$\text{Tr}(\omega_i\omega_j)$ where the $\omega_i$ are the elements of the
integral basis. Note that $T=\overline{MC}\cdot M$ and in particular that
$T=T_2$ if the field is totally real (in practice $T_2$ will have real
approximate entries and $T$ will have integer entries). Note also that
$\det(T)$ is equal to the discriminant of the field $K$.
\quad$\bullet$ The columns of $MD$ (\kbd{\var{nf}.diff}) express a $\Z$-basis
of the different of $K$ on the integral basis.
\quad$\bullet$ $TI$ is equal to $d(K)T^{-1}$, which has integral
coefficients. Note that, understood as an ideal, the matrix $T^{-1}$
generates the codifferent ideal.
\quad$\bullet$ Finally, $MDI$ is a two-element representation (for faster
ideal product) of $d(K)$ times the codifferent ideal
(\kbd{\var{nf}.disc$*$\var{nf}.codiff}, which is an integral ideal). $MDI$
is only used in \tet{idealinv}.
$\var{nf}[6]$ is the vector containing the $r1+r2$ roots
(\kbd{\var{nf}.roots}) of $\var{nf}[1]$ corresponding to the $r1+r2$
embeddings of the number field into $\C$ (the first $r1$ components are real,
the next $r2$ have positive imaginary part).
$\var{nf}[7]$ is an integral basis in Hermite normal form for $\Z_K$
(\kbd{\var{nf}.zk}) expressed on the powers of~$\theta$.
$\var{nf}[8]$ is the $n\times n$ integral matrix expressing the power
basis in terms of the integral basis, and finally
$\var{nf}[9]$ is the $n\times n^2$ matrix giving the multiplication table
of the integral basis.
If a non-monic polynomial is input, \kbd{nfinit} will transform it into a
monic one, then reduce it (see $\fl=3$). It is allowed, though not very
useful given the existence of \teb{nfnewprec}, to input a \kbd{nf} or a
\kbd{bnf} instead of a polynomial.
The special input format $[x,B]$ is also accepted where $x$ is a polynomial
as above and $B$ is the integral basis, as computed by \tet{nfbasis}. This can
be useful since \kbd{nfinit} uses the round 4 algorithm by default, which can
be very slow in pathological cases where round 2 (\kbd{nfbasis(x,2)}) would
succeed very quickly.
If $\fl=2$: \var{pol} is changed into another polynomial $P$ defining the same
number field, which is as simple as can easily be found using the
\kbd{polred} algorithm, and all the subsequent computations are done using
this new polynomial. In particular, the first component of the result is the
modified polynomial.
If $\fl=3$, does a \kbd{polred} as in case 2, but outputs
$[\var{nf},\kbd{Mod}(a,P)]$, where $\var{nf}$ is as before and
$\kbd{Mod}(a,P)=\kbd{Mod}(x,\var{pol})$ gives the change of
variables. This is implicit when \var{pol} is not monic: first a linear change
of variables is performed, to get a monic polynomial, then a \kbd{polred}
reduction.
If $\fl=4$, as $2$ but uses a partial \kbd{polred}.
If $\fl=5$, as $3$ using a partial \kbd{polred}.
\syn{nfinit0}{x,\fl,\var{prec}}.
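For example, with $x^2+23$ (whose polynomial discriminant is $-92$, while
the field discriminant is $-23$):
\bprog
? nf = nfinit(x^2 + 23);
? nf.sign
%2 = [0, 1]
? nf.disc
%3 = -23
? nf.zk
%4 = [1, 1/2*x + 1/2]
@eprog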
\subsecidx{nfisideal}$(\var{nf},x)$: returns 1 if $x$ is an ideal in
the number field $\var{nf}$, 0 otherwise.
\syn{isideal}{\var{nf},x}.
\subsecidx{nfisincl}$(x,y)$: tests whether the number field $K$ defined
by the polynomial $x$ is conjugate to a subfield of the field $L$ defined
by $y$ (where $x$ and $y$ must be in $\Q[X]$). If they are not, the output
is the number 0. If they are, the output is a vector of polynomials, each
polynomial $a$ representing an embedding of $K$ into $L$, i.e.~being such
that $y\mid x\circ a$.
If $y$ is a number field (\var{nf}), a much faster algorithm is used
(factoring $x$ over $y$ using \tet{nffactor}). Before version 2.0.14, this
was not guaranteed to return all the embeddings, hence was triggered by a
special flag. This is no longer the case.
\syn{nfisincl}{x,y,\fl}.
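For example, $\Q(\sqrt{2})$ embeds into $\Q(2^{1/4})$ but not into
$\Q(\sqrt{3})$:
\bprog
? nfisincl(x^2 - 2, x^2 - 3)
%1 = 0
? nfisincl(x^2 - 2, x^4 - 2); \\@com two embeddings: $x\mapsto x^2$ and $x\mapsto -x^2$
@eprog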
\subsecidx{nfisisom}$(x,y)$: as \tet{nfisincl}, but tests
for isomorphism. If either $x$ or $y$ is a number field, a much faster
algorithm will be used.
\syn{nfisisom}{x,y,\fl}.
\subsecidx{nfnewprec}$(\var{nf})$: transforms the number field $\var{nf}$
into the corresponding data using current (usually larger) precision. This
function works as expected if $\var{nf}$ is in fact a $\var{bnf}$ (update
$\var{bnf}$ to current precision) but may be quite slow (many generators of
principal ideals have to be computed).
\syn{nfnewprec}{\var{nf},\var{prec}}.
\subsecidx{nfkermodpr}$(\var{nf},a,\var{pr})$: kernel of the matrix $a$ in
$\Z_K/\var{pr}$, where \var{pr} is in \key{modpr} format
(see \kbd{nfmodprinit}).
\syn{nfkermodpr}{\var{nf},a,\var{pr}}.
\subsecidx{nfmodprinit}$(\var{nf},\var{pr})$: transforms the prime ideal
\var{pr} into \tet{modpr} format necessary for all operations modulo
\var{pr} in the number field \var{nf}. Returns a two-component vector
$[P,a]$, where $P$ is the \idx{Hermite normal form} of \var{pr}, and $a$ is
an integral element congruent to $1$ modulo \var{pr}, and congruent to $0$
modulo $p / \var{pr}^e$. Here $p = \Z \cap \var{pr}$ and $e$
is the absolute ramification index.\label{se:nfmodprinit}
\syn{nfmodprinit}{\var{nf},\var{pr}}.
\subsecidx{nfsubfields}$(\var{nf},\{d=0\})$: finds all subfields of degree $d$
of the number field $\var{nf}$ (all subfields if $d$ is null or omitted).
The result is a vector of subfields, each being given by $[g,h]$, where $g$ is an
absolute equation and $h$ expresses one of the roots of $g$ in terms of the
root $x$ of the polynomial defining $\var{nf}$. This is a crude
implementation by M.~Olivier of an algorithm due to J.~Kl\"uners.
\syn{subfields}{\var{nf},d}.
\subsecidx{nfroots}$(\var{nf},x)$: roots of the polynomial $x$ in the number
field $\var{nf}$ given by \kbd{nfinit} without multiplicity. $x$ has
coefficients in the number field (scalar, polmod, polynomial, column
vector). The main variable of $\var{nf}$ must be of lower priority than that
of $x$ (in other words the variable number of $\var{nf}$ must be greater than
that of $x$). However if the coefficients of the number field occur
explicitly (as polmods) as coefficients of $x$, the variable of these
polmods \var{must} be the same as the main variable of $x$ (see
\kbd{nffactor}).
\syn{nfroots}{\var{nf},x}.
\subsecidx{nfrootsof1}$(\var{nf})$: computes the number of roots of unity
$w$ and a primitive $w$-th root of unity (expressed on the integral basis)
belonging to the number field $\var{nf}$. The result is a two-component
vector $[w,z]$ where $z$ is a column vector expressing a primitive $w$-th
root of unity on the integral basis \kbd{\var{nf}.zk}.
\syn{rootsof1}{\var{nf}}.
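For example, $\Q(\zeta_3)$, defined here by $x^2+x+1$, contains the sixth
roots of unity:
\bprog
? nfrootsof1(nfinit(x^2 + x + 1))[1]
%1 = 6
@eprog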
\subsecidx{nfsnf}$(\var{nf},x)$: given a torsion module $x$ as a 3-component
row
vector $[A,I,J]$ where $A$ is a square invertible $n\times n$ matrix, $I$ and
$J$ are two ideal lists, outputs an ideal list $d_1,\dots,d_n$ which is the
\idx{Smith normal form} of $x$. In other words, $x$ is isomorphic to
$\Z_K/d_1\oplus\cdots\oplus\Z_K/d_n$ and $d_i$ divides $d_{i-1}$ for $i\ge2$.
The link between $x$ and $[A,I,J]$ is as follows: if $e_i$ is the canonical
basis of $K^n$, $I=[b_1,\dots,b_n]$ and $J=[a_1,\dots,a_n]$, then $x$ is
isomorphic to
$$ (b_1e_1\oplus\cdots\oplus b_ne_n) / (a_1A_1\oplus\cdots\oplus a_nA_n)
\enspace, $$
where the $A_j$ are the columns of the matrix $A$. Note that every finitely
generated torsion module can be given in this way, and even with $b_i=\Z_K$
for all $i$.
\syn{nfsmith}{\var{nf},x}.
\subsecidx{nfsolvemodpr}$(\var{nf},a,b,\var{pr})$: solution of $a\cdot x = b$
in $\Z_K/\var{pr}$, where $a$ is a matrix and $b$ a column vector, and where
\var{pr} is in \key{modpr} format (see \kbd{nfmodprinit}).
\syn{nfsolvemodpr}{\var{nf},a,b,\var{pr}}.
\subsecidx{polcompositum}$(x,y,\{\fl=0\})$: $x$ and $y$ being polynomials
in $\Z[X]$ in the same variable, outputs a vector giving the list of all
possible composita of the number fields defined by $x$ and $y$, if $x$ and
$y$ are irreducible, or of the corresponding \'etale algebras, if they are
only squarefree. Returns an error if one of the polynomials is not
squarefree. When one of the polynomials is irreducible (say $x$), it is
often \var{much} faster to use \kbd{nffactor(nfinit($x$), $y$)}, then
\tet{rnfequation}.
If $\fl=1$, outputs a vector of 4-component vectors $[z,a,b,k]$, where $z$
ranges through the list of all possible composita as above, and $a$
(resp.~$b$) expresses the root of $x$ (resp.~$y$) as a polmod in a root of
$z$, and $k$ is a small integer such that $a+kb$ is the chosen root of
$z$.
The compositum will quite often be defined by a complicated polynomial,
which it is advisable to reduce before further work. Here is a simple
example involving the field $\Q(\zeta_5, 5^{1/5})$:
\bprog
? z = polcompositum(x^5 - 5, polcyclo(5), 1)[1];
? pol = z[1] \\@com \kbd{pol} defines the compositum
%2 = x^20 + 5*x^19 + 15*x^18 + 35*x^17 + 70*x^16 + 141*x^15 + 260*x^14 \
+ 355*x^13 + 95*x^12 - 1460*x^11 - 3279*x^10 - 3660*x^9 - 2005*x^8 \
+ 705*x^7 + 9210*x^6 + 13506*x^5 + 7145*x^4 - 2740*x^3 + 1040*x^2 \
- 320*x + 256
? a = z[2]; a^5 - 5 \\@com \kbd{a} is a fifth root of $5$
%3 = 0
? z = polredabs(pol, 1); \\@com look for a simpler polynomial
? pol = z[1]
%5 = x^20 + 25*x^10 + 5
? a = subst(a.pol, x, z[2]) \\@com \kbd{a} in the new coordinates
%6 = Mod(-5/22*x^19 + 1/22*x^14 - 123/22*x^9 + 9/11*x^4, x^20 + 25*x^10 + 5)
@eprog
\syn{polcompositum0}{x,y,\fl}.
\subsecidx{polgalois}$(x)$: \idx{Galois} group of the non-constant polynomial
$x\in\Q[X]$. In the present version \vers, $x$ must be irreducible and
the degree of $x$ must be less than or equal to 7. On certain versions for
which the data file of Galois resolvents has been installed (available
in the Unix distribution as a separate package), degrees 8, 9, 10 and 11
are also implemented.
The output is a 3-component vector $[n,s,k]$ with the following meaning: $n$
is the cardinality of the group, $s$ is its signature ($s=1$ if the group is
a subgroup of the alternating group $A_n$, $s=-1$ otherwise), and $k$ is the
number of the group corresponding to a given pair $(n,s)$ ($k=1$ except in 2
cases). Specifically, the groups are coded as follows, using standard
notations (see GTM 138, quoted at the beginning of this section; see also
``The transitive groups of degree up to eleven'', by G.~Butler and J.~McKay
in Communications in Algebra, vol.~11, 1983, pp.~863--911):
\smallskip
In degree 1: $S_1=[1,-1,1]$.
\smallskip
In degree 2: $S_2=[2,-1,1]$.
\smallskip
In degree 3: $A_3=C_3=[3,1,1]$, $S_3=[6,-1,1]$.
\smallskip
In degree 4: $C_4=[4,-1,1]$, $V_4=[4,1,1]$, $D_4=[8,-1,1]$, $A_4=[12,1,1]$,
$S_4=[24,-1,1]$.
\smallskip
In degree 5: $C_5=[5,1,1]$, $D_5=[10,1,1]$, $M_{20}=[20,-1,1]$,
$A_5=[60,1,1]$, $S_5=[120,-1,1]$.
\smallskip
In degree 6: $C_6=[6,-1,1]$, $S_3=[6,-1,2]$, $D_6=[12,-1,1]$, $A_4=[12,1,1]$,
$G_{18}=[18,-1,1]$, $S_4^-=[24,-1,1]$, $A_4\times C_2=[24,-1,2]$,
$S_4^+=[24,1,1]$, $G_{36}^-=[36,-1,1]$, $G_{36}^+=[36,1,1]$,
$S_4\times C_2=[48,-1,1]$, $A_5=PSL_2(5)=[60,1,1]$, $G_{72}=[72,-1,1]$,
$S_5=PGL_2(5)=[120,-1,1]$, $A_6=[360,1,1]$, $S_6=[720,-1,1]$.
\smallskip
In degree 7: $C_7=[7,1,1]$, $D_7=[14,-1,1]$, $M_{21}=[21,1,1]$,
$M_{42}=[42,-1,1]$, $PSL_2(7)=PSL_3(2)=[168,1,1]$, $A_7=[2520,1,1]$,
$S_7=[5040,-1,1]$.
\smallskip
The method used is that of resolvent polynomials and is sensitive to the
current precision. The precision is updated internally but, in very rare
cases, a wrong result may be returned if the initial precision was not
sufficient.
\syn{galois}{x,\var{prec}}.
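For example, $x^4 + 1$ defines $\Q(\zeta_8)$, whose Galois group is the
Klein four group $V_4$ (in the 3-component output convention described
above):
\bprog
? polgalois(x^4 + 1)
%1 = [4, 1, 1]
@eprog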
\subsecidx{polred}$(x,\{\fl=0\},\{p\})$: finds polynomials with reasonably
small coefficients defining subfields of the number field defined by $x$.
One of the polynomials always defines $\Q$ (hence is equal to $x-1$),
and another always defines the same number field as $x$ if $x$ is irreducible.
All $x$ accepted by \tet{nfinit} are also allowed here (e.g. non-monic
polynomials, \kbd{nf}, \kbd{bnf}, \kbd{[x,Z\_K\_basis]}).
The following binary digits of $\fl$ are significant:
1: does a partial reduction only. This means that only a suborder of the
maximal order may be used.
2: gives also elements. The result is a two-column matrix, the first column
giving the elements defining these subfields, the second giving the
corresponding minimal polynomials.
If $p$ is given, it is assumed that it is the two-column matrix of the
factorization of the discriminant of the polynomial $x$.
\syn{polred0}{x,\fl,p,\var{prec}}, where an omitted $p$ is
coded by \kbd{gzero}. Also available are $\teb{polred}(x,\var{prec})$ and
$\teb{factoredpolred}(x,p,\var{prec})$, both corresponding to $\fl=0$.
\subsecidx{polredabs}$(x,\{\fl=0\})$: finds one of the polynomials defining
the same number field as the one defined by $x$, and such that the sum of
the squares of the moduli of the roots (i.e.~the $T_2$-norm) is minimal.
All $x$ accepted by \tet{nfinit} are also allowed here (e.g. non-monic
polynomials, \kbd{nf}, \kbd{bnf}, \kbd{[x,Z\_K\_basis]}).
The binary digits of $\fl$ mean
1: outputs a two-component row vector $[P,a]$, where $P$ is the default
output and $a$ is an element expressed on a root of the polynomial $P$,
whose minimal polynomial is equal to $x$.
4: gives \var{all} polynomials of minimal $T_2$ norm (of the two polynomials
$P(x)$ and $P(-x)$, only one is given).
\syn{polredabs0}{x,\fl,\var{prec}}.
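For example, $x^2 + 2x + 2$ (with roots $-1\pm i$) defines $\Q(i)$:
\bprog
? polredabs(x^2 + 2*x + 2)
%1 = x^2 + 1
@eprog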
\subsecidx{polredord}$(x)$: finds polynomials with reasonably small
coefficients and of the same degree as that of $x$ defining suborders of the
order defined by $x$. One of the polynomials always defines $\Q$ (hence
is equal to $(x-1)^n$, where $n$ is the degree), and another always defines
the same order as $x$ if $x$ is irreducible.
\syn{ordred}{x}.
\subsecidx{poltschirnhaus}$(x)$: applies a random Tschirnhausen
transformation to the polynomial $x$, which is assumed to be non-constant
and separable, so as to obtain a new equation for the \'etale algebra
defined by $x$. This is for instance useful when computing resolvents,
hence is used by the \kbd{polgalois} function.
\syn{tschirnhaus}{x}.
\subsecidx{rnfalgtobasis}$(\var{rnf},x)$: $\var{rnf}$ being a relative number
field extension $L/K$ as output by \kbd{rnfinit} and $x$ being an element of
$L$ expressed as a polynomial or polmod with polmod coefficients, expresses
$x$ on the relative integral basis.
\syn{rnfalgtobasis}{\var{rnf},x}.
\subsecidx{rnfbasis}$(\var{bnf},x)$: given a big number field $\var{bnf}$ as
output by \kbd{bnfinit}, and either a polynomial $x$ with coefficients in
$\var{bnf}$ defining a relative extension $L$ of $\var{bnf}$, or a
pseudo-basis $x$ of such an extension, gives either a true $\var{bnf}$-basis
of $L$ if it exists, or an $n+1$-element generating set of $L$ if not, where
$n$ is the rank of $L$ over $\var{bnf}$.
\syn{rnfbasis}{\var{bnf},x}.
\subsecidx{rnfbasistoalg}$(\var{rnf},x)$: $\var{rnf}$ being a relative number
field extension $L/K$ as output by \kbd{rnfinit} and $x$ being an element of
$L$ expressed on the relative integral basis, computes the representation of
$x$ as a polmod with polmod coefficients.
\syn{rnfbasistoalg}{\var{rnf},x}.
\subsecidx{rnfcharpoly}$(\var{nf},T,a,\{v=x\})$: characteristic polynomial of
$a$ over $\var{nf}$, where $a$ belongs to the algebra defined by $T$ over
$\var{nf}$, i.e.~$\var{nf}[X]/(T)$. Returns a polynomial in variable $v$
($x$ by default).
\syn{rnfcharpoly}{\var{nf},T,a,v}, where $v$ is a variable number.
\subsecidx{rnfconductor}$(\var{bnf},\var{pol})$: $\var{bnf}$ being a big number
field as output by \kbd{bnfinit}, and \var{pol} a relative polynomial defining
an \idx{Abelian extension}, computes the class field theory conductor of this
Abelian extension. The result is a 3-component vector
$[\var{conductor},\var{rayclgp},\var{subgroup}]$, where \var{conductor} is
the conductor of the extension given as a 2-component row vector
$[f_0,f_\infty]$, \var{rayclgp} is the full ray class group corresponding to
the conductor given as a 3-component vector [h,cyc,gen] as usual for a group,
and \var{subgroup} is a matrix in HNF defining the subgroup of the ray class
group on the given generators gen.
\syn{rnfconductor}{\var{rnf},\var{pol},\var{prec}}.
\subsecidx{rnfdedekind}$(\var{nf},\var{pol},\var{pr})$: given a number field
$\var{nf}$ as output by \kbd{nfinit} and a polynomial \var{pol} with
coefficients in $\var{nf}$ defining a relative extension $L$ of $\var{nf}$,
evaluates the relative \idx{Dedekind} criterion over the order defined by a
root of \var{pol} for the prime ideal \var{pr} and outputs a 3-component
vector as the result. The first component is a flag equal to 1 if the
enlarged order could be proven to be \var{pr}-maximal and to 0 otherwise (it
may be maximal in the latter case if \var{pr} is ramified in $L$), the second
component is a pseudo-basis of the enlarged order and the third component is
the valuation at \var{pr} of the order discriminant.
\syn{rnfdedekind}{\var{nf},\var{pol},\var{pr}}.
\subsecidx{rnfdet}$(\var{nf},M)$: given a pseudomatrix $M$ over the maximal
order of $\var{nf}$, computes its pseudodeterminant.
\syn{rnfdet}{\var{nf},M}.
\subsecidx{rnfdisc}$(\var{nf},\var{pol})$: given a number field $\var{nf}$ as
output by \kbd{nfinit} and a polynomial \var{pol} with coefficients in
$\var{nf}$ defining a relative extension $L$ of $\var{nf}$, computes
the relative
discriminant of $L$. This is a two-element row vector $[D,d]$, where $D$ is
the relative ideal discriminant and $d$ is the relative discriminant
considered as an element of $\var{nf}^*/{\var{nf}^*}^2$. The main variable of
$\var{nf}$ \var{must} be of lower priority than that of \var{pol}.
Note: As usual, $\var{nf}$ can be a $\var{bnf}$ as output by \kbd{bnfinit}.
\syn{rnfdiscf}{\var{bnf},\var{pol}}.
\subsecidx{rnfeltabstorel}$(\var{rnf},x)$: $\var{rnf}$ being a relative
number field
extension $L/K$ as output by \kbd{rnfinit} and $x$ being an element of $L$
expressed as a polynomial modulo the absolute equation $\var{rnf}[11][1]$,
computes $x$ as an element of the relative extension $L/K$ as a polmod with
polmod coefficients.
\syn{rnfelementabstorel}{\var{rnf},x}.
\subsecidx{rnfeltdown}$(\var{rnf},x)$: $\var{rnf}$ being a relative number
field extension $L/K$ as output by \kbd{rnfinit} and $x$ being an element of
$L$ expressed as a polynomial or polmod with polmod coefficients, computes
$x$ as an element of $K$ as a polmod, assuming $x$ is in $K$ (otherwise an
error occurs). If $x$ is given in terms of the relative integral basis,
first apply \kbd{rnfbasistoalg}; otherwise PARI will treat it as a
vector.
\syn{rnfelementdown}{\var{rnf},x}.
\subsecidx{rnfeltreltoabs}$(\var{rnf},x)$: $\var{rnf}$ being a relative
number field extension $L/K$ as output by \kbd{rnfinit} and $x$ being an
element of $L$ expressed as a polynomial or polmod with polmod
coefficients, computes $x$ as an element of the absolute extension $L/\Q$ as
a polynomial modulo the absolute equation $\var{rnf}[11][1]$. If $x$ is
given in terms of the relative integral basis, first apply
\kbd{rnfbasistoalg}; otherwise PARI will treat it as a vector.
\syn{rnfelementreltoabs}{\var{rnf},x}.
\subsecidx{rnfeltup}$(\var{rnf},x)$: $\var{rnf}$ being a relative number
field extension $L/K$ as output by \kbd{rnfinit} and $x$ being an element of
$K$ expressed as a polynomial or polmod, computes $x$ as an element of the
absolute extension $L/\Q$ as a polynomial modulo the absolute equation
$\var{rnf}[11][1]$. Note that it is unnecessary to compute $x$ as an
element of the relative extension $L/K$: its expression would be unchanged.
If $x$ is given in terms of the integral basis of $K$, first apply
\kbd{nfbasistoalg}; otherwise PARI will treat it as a
vector.
\syn{rnfelementup}{\var{rnf},x}.
\subsecidx{rnfequation}$(\var{nf},\var{pol},\{\fl=0\})$: given a number field
$\var{nf}$ as output by \kbd{nfinit} (or simply a polynomial) and a
polynomial \var{pol} with coefficients in $\var{nf}$ defining a relative
extension $L$ of $\var{nf}$, computes the absolute equation of $L$ over
$\Q$.
If $\fl$ is non-zero, outputs a 3-component row vector $[z,a,k]$, where
$z$ is the absolute equation of $L$ over $\Q$, as in the default behaviour,
$a$ expresses as an element of $L$ a root $\alpha$ of the polynomial
defining the base field $\var{nf}$, and $k$ is a small integer such that
$\theta = \beta+k\alpha$ where $\theta$ is a root of $z$ and $\beta$ a root
of $\var{pol}$.
The main variable of $\var{nf}$ \var{must} be of lower priority than that
of \var{pol}. Note that for efficiency, this does not check whether the
relative equation is irreducible over $\var{nf}$, but only if it is
squarefree. If it is reducible but squarefree, the result will be the
absolute equation of the \'etale algebra defined by \var{pol}. If \var{pol}
is not squarefree, an error message will be issued.
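For instance, with base field $\Q(\sqrt 2)$ defined by $y^2 - 2$:
\bprog
? rnfequation(y^2 - 2, x^2 - y)
%1 = x^4 - 2
? rnfequation(y^2 - 2, x^2 - y, 1)
@eprog
\noindent The second call in addition expresses $\sqrt 2$ in terms of a
root $\theta$ of $x^4 - 2$ (here $\sqrt 2 = \theta^2$ and $k = 0$).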
\syn{rnfequation0}{\var{nf},\var{pol},\fl}.
\subsecidx{rnfhnfbasis}$(\var{bnf},x)$: given a big number field $\var{bnf}$
as output by \kbd{bnfinit}, and either a polynomial $x$ with coefficients in
$\var{bnf}$ defining a relative extension $L$ of $\var{bnf}$, or a
pseudo-basis $x$ of such an extension, gives a true $\var{bnf}$-basis
of $L$ in upper triangular Hermite normal form if one exists, and
zero otherwise.
\syn{rnfhermitebasis}{\var{nf},x}.
\subsecidx{rnfidealabstorel}$(\var{rnf},x)$: $\var{rnf}$ being a relative
number field extension $L/K$ as output by \kbd{rnfinit} and $x$ being an
ideal of the absolute extension $L/\Q$ given in HNF\sidx{Hermite normal form}
(if it is not, apply \kbd{idealhnf} first), computes the relative pseudomatrix
in HNF giving the ideal $x$ considered as an ideal of the relative extension
$L/K$.
\syn{rnfidealabstorel}{\var{rnf},x}.
\subsecidx{rnfidealdown}$(\var{rnf},x)$: $\var{rnf}$ being a relative number
field extension $L/K$ as output by \kbd{rnfinit} and $x$ being an ideal of
the absolute extension $L/\Q$ given in HNF (if it is not, apply
\kbd{idealhnf} first), gives the ideal of $K$ below $x$, i.e.~the
intersection of $x$ with $K$. Note that, if $x$ is given as a relative ideal
(i.e.~a pseudomatrix in HNF), then it is not necessary to use this function
since the result is simply the first ideal of the ideal list of the
pseudomatrix.
\syn{rnfidealdown}{\var{rnf},x}.
\subsecidx{rnfidealhnf}$(\var{rnf},x)$: $\var{rnf}$ being a relative number
field extension $L/K$ as output by \kbd{rnfinit} and $x$ being a relative
ideal (which can be, as in the absolute case, of many different types,
including of course elements), computes as a 2-component row vector the
relative Hermite normal form of $x$, the first component being the HNF matrix
(with entries on the integral basis), and the second component the ideals.
\syn{rnfidealhermite}{\var{rnf},x}.
\subsecidx{rnfidealmul}$(\var{rnf},x,y)$: $\var{rnf}$ being a relative number
field extension $L/K$ as output by \kbd{rnfinit} and $x$ and $y$ being ideals
of the relative extension $L/K$ given by pseudo-matrices, outputs the ideal
product, again as a relative ideal.
\syn{rnfidealmul}{\var{rnf},x,y}.
\subsecidx{rnfidealnormabs}$(\var{rnf},x)$: $\var{rnf}$ being a relative
number field extension $L/K$ as output by \kbd{rnfinit} and $x$ being a
relative ideal (which can be, as in the absolute case, of many different
types, including of course elements), computes the norm of the ideal $x$
considered as an ideal of the absolute extension $L/\Q$. This is identical to
\kbd{idealnorm(rnfidealnormrel(\var{rnf},x))}, only faster.
\syn{rnfidealnormabs}{\var{rnf},x}.
\subsecidx{rnfidealnormrel}$(\var{rnf},x)$: $\var{rnf}$ being a relative
number field
extension $L/K$ as output by \kbd{rnfinit} and $x$ being a relative ideal
(which can be, as in the absolute case, of many different types, including
of course elements), computes the relative norm of $x$ as an ideal of $K$
in HNF.
\syn{rnfidealnormrel}{\var{rnf},x}.
\subsecidx{rnfidealreltoabs}$(\var{rnf},x)$: $\var{rnf}$ being a relative
number field
extension $L/K$ as output by \kbd{rnfinit} and $x$ being a relative ideal
(which can be, as in the absolute case, of many different types, including
of course elements), computes the HNF matrix of the ideal $x$ considered
as an ideal of the absolute extension $L/\Q$.
\syn{rnfidealreltoabs}{\var{rnf},x}.
\subsecidx{rnfidealtwoelt}$(\var{rnf},x)$: $\var{rnf}$ being a relative
number field
extension $L/K$ as output by \kbd{rnfinit} and $x$ being an ideal of the
relative extension $L/K$ given by a pseudo-matrix, gives a vector of
two generators of $x$ over $\Z_L$ expressed as polmods with polmod
coefficients.
\syn{rnfidealtwoelement}{\var{rnf},x}.
\subsecidx{rnfidealup}$(\var{rnf},x)$: $\var{rnf}$ being a relative number
field
extension $L/K$ as output by \kbd{rnfinit} and $x$ being an ideal of
$K$, gives the ideal $x\Z_L$ as an absolute ideal of $L/\Q$ (the relative
ideal representation is trivial: the matrix is the identity matrix, and
the ideal list starts with $x$, all the other ideals being $\Z_K$).
\syn{rnfidealup}{\var{rnf},x}.
\subsecidx{rnfinit}$(\var{nf},\var{pol})$: $\var{nf}$ being a number field in
\kbd{nfinit}
format considered as base field, and \var{pol} a polynomial defining a relative
extension over $\var{nf}$, this computes all the necessary data to work in the
relative extension. The main variable of \var{pol} must be of higher priority
(i.e.~lower number) than that of $\var{nf}$, and the coefficients of \var{pol}
must be in $\var{nf}$.
The result is an 11-component row vector as follows (most of the components
are technical), the numbering being very close to that of \kbd{nfinit}. In
the following description, we let $K$ be the base field defined by
$\var{nf}$, $m$ the degree of the base field, $n$ the relative degree, $L$
the large field (of relative degree $n$ or absolute degree $nm$), $r_1$ and
$r_2$ the number of real and complex places of $K$.
$\var{rnf}[1]$ contains the relative polynomial \var{pol}.
$\var{rnf}[2]$ is a row vector with $r_1+r_2$ entries, entry $j$ being
a 2-component row vector $[r_{1,j},r_{2,j}]$ where $r_{1,j}$ and $r_{2,j}$
are the number of real and complex places of $L$ above the $j$-th place of
$K$, so that $r_{1,j}=0$ and $r_{2,j}=n$ if $j$ is a complex place, while
$r_{1,j}+2r_{2,j}=n$ if $j$ is a real place.
$\var{rnf}[3]$ is a two-component row vector $[\d(L/K),s]$ where $\d(L/K)$
is the relative ideal discriminant of $L/K$ and $s$ is the discriminant of
$L/K$ viewed as an element of $K^*/(K^*)^2$, in other words it is the output
of \kbd{rnfdisc}.
$\var{rnf}[4]$ is the ideal index $\f$, i.e.~such that
$d(\var{pol})\Z_K=\f^2\d(L/K)$.
$\var{rnf}[5]$ is a vector \var{vm} with 7 entries useful for certain
computations in the relative extension $L/K$. $\var{vm}[1]$ is a vector of
$r_1+r_2$ matrices, the $j$-th matrix being an $(r_{1,j}+r_{2,j})\times n$
matrix $M_j$ representing the numerical values of the conjugates of the
$j$-th embedding of the elements of the integral basis, where $r_{i,j}$ is as
in $\var{rnf}[2]$. $\var{vm}[2]$ is a vector of $r_1+r_2$ matrices, the
$j$-th matrix $MC_j$ being essentially the conjugate of the matrix $M_j$
except that the last $r_{2,j}$ columns are also multiplied by 2.
$\var{vm}[3]$ is a vector of $r_1+r_2$ matrices $T2_j$, where $T2_j$ is
an $n\times n$ matrix equal to the real part of the product $MC_j\cdot M_j$
(which is a real positive definite matrix). $\var{vm}[4]$ is the $n\times n$
matrix $T$ whose entries are the relative traces of $\omega_i\omega_j$
expressed as polmods in $\var{nf}$, where the $\omega_i$ are the elements
of the relative integral basis. Note that the $j$-th embedding of $T$ is
equal to $\overline{MC_j}\cdot M_j$, and in particular will be equal to
$T2_j$ if $r_{2,j}=0$. Note also that the relative ideal discriminant of
$L/K$ is equal to $\det(T)$ times the square of the product of the ideals
in the relative pseudo-basis (in $\var{rnf}[7][2]$). The last 3 entries
$\var{vm}[5]$, $\var{vm}[6]$ and $\var{vm}[7]$ are linked to the different
as in \kbd{nfinit}, but have not yet been implemented.
$\var{rnf}[6]$ is a row vector with $r_1+r_2$ entries, the $j$-th entry
being the
row vector with $r_{1,j}+r_{2,j}$ entries of the roots of the $j$-th embedding
of the relative polynomial \var{pol}.
$\var{rnf}[7]$ is a two-component row vector, where the first component is
the relative integral pseudo-basis expressed as polynomials (in the variable
of \var{pol}) with polmod coefficients in $\var{nf}$, and the second
component is the ideal list of the pseudo-basis in HNF.
$\var{rnf}[8]$ is the inverse matrix of the integral basis matrix, with
coefficients polmods in $\var{nf}$.
$\var{rnf}[9]$ may be the multiplication table of the integral basis, but
is not implemented at present.
$\var{rnf}[10]$ is $\var{nf}$.
$\var{rnf}[11]$ is a vector \var{vabs} with 5 entries describing the
\var{absolute} extension $L/\Q$. $\var{vabs}[1]$ is an absolute equation.
$\var{vabs}[2]$ expresses the generator $\alpha$ of the number field
$\var{nf}$ as a polynomial modulo the absolute equation $\var{vabs}[1]$.
$\var{vabs}[3]$ is a small integer $k$ such that, if $\beta$ is an abstract
root of \var{pol} and $\alpha$ the generator of $\var{nf}$, the root of
$\var{vabs}[1]$ will be $\beta + k \alpha$. Note that one must
be very careful if $k\neq0$ when dealing simultaneously with absolute and
relative quantities, since the generator chosen for the absolute extension
is not the same as for the relative one. If this happens, one can of course
go on working, but we strongly advise changing the relative polynomial so
that its root becomes $\beta + k \alpha$. Typically, the GP instruction would
be
\kbd{pol = subst(pol, x, x - k*Mod(y,\var{nf}.pol))}
Finally, $\var{vabs}[4]$ is the absolute integral basis of $L$ expressed in HNF
(hence as would be output by \kbd{nfinit(vabs[1])}), and $\var{vabs}[5]$ the
inverse matrix of the integral basis, allowing one to go from the polmod to
the integral basis representation.
\syn{rnfinitalg}{\var{nf},\var{pol},\var{prec}}.
\subsecidx{rnfisfree}$(\var{bnf},x)$: given a big number field $\var{bnf}$ as
output by \kbd{bnfinit}, and either a polynomial $x$ with coefficients in
$\var{bnf}$ defining a relative extension $L$ of $\var{bnf}$, or a
pseudo-basis $x$ of such an extension, returns true (1) if $L/\var{bnf}$ is
free, false (0) if not.
\syn{rnfisfree}{\var{bnf},x}, and the result is a \kbd{long}.
\subsecidx{rnfisnorm}$(\var{bnf},\var{ext},\var{el},\{\fl=1\})$: similar to
\kbd{bnfisnorm} but in the relative case. This tries to decide whether the
element \var{el} in \var{bnf} is the norm of some $y$ in \var{ext}.
$\var{bnf}$ is as output by \kbd{bnfinit}.
$\var{ext}$ is a relative extension which has to be a row vector whose
components are:
$\var{ext}[1]$: a relative equation of the number field \var{ext} over
\var{bnf}. As usual, the priority of the variable of the polynomial
defining the ground field \var{bnf} (say $y$) must be lower than the
main variable of $\var{ext}[1]$, say $x$.
$\var{ext}[2]$: the generator $y$ of the base field as a polynomial in $x$ (as
given by \kbd{rnfequation} with $\fl = 1$).
$\var{ext}[3]$: the \kbd{bnfinit} of the absolute extension $\var{ext}/\Q$.
This returns a vector $[a,b]$, where $\var{el}=\var{Norm}(a)*b$. It looks for a
solution which is an $S$-integer, with $S$ a list of places (of \var{bnf})
containing the ramified primes, the generators of the class group of
\var{ext}, as well as those primes dividing \var{el}. If $\var{ext}/\var{bnf}$
is known to be \idx{Galois}, set $\fl=0$ (here \var{el} is a norm iff $b=1$).
If $\fl$ is non-zero, add to $S$ all the places above the primes which
divide $\fl$ if $\fl<0$, or which are less than $\fl$ if $\fl>0$. The answer
is guaranteed
(i.e.~\var{el} is a norm iff $b=1$) under \idx{GRH}, if $S$ contains all
primes less than $12\log^2\left|\text{disc}(\var{Ext})\right|$, where
\var{Ext} is the normal closure of $\var{ext} / \var{bnf}$. Example:
\bprog
bnf = bnfinit(y^3 + y^2 - 2*y - 1);
p = x^2 + Mod(y^2 + 2*y + 1, bnf.pol);
rnf = rnfequation(bnf,p,1);
ext = [p, rnf[2], bnfinit(rnf[1])];
rnfisnorm(bnf,ext,17, 1)
@eprog
\noindent checks whether $17$ is a norm in the Galois extension $\Q(\beta) /
\Q(\alpha)$, where $\alpha^3 + \alpha^2 - 2\alpha - 1 = 0$ and $\beta^2 +
\alpha^2 + 2\alpha + 1 = 0$ (it is).
\syn{rnfisnorm}{\var{bnf},ext,x,\fl,\var{prec}}.
\subsecidx{rnfkummer}$(\var{bnr},\var{subgroup},\{deg=0\})$: \var{bnr}
being as output by \kbd{bnrinit}, finds a relative equation for the
class field corresponding to the module in \var{bnr} and the given
congruence subgroup. If \var{deg} is positive, outputs the list of all
relative equations of degree \var{deg} contained in the ray class field
defined by \var{bnr}.
(THIS PROGRAM IS STILL IN DEVELOPMENT STAGE)
\syn{rnfkummer}{\var{bnr},\var{subgroup},\var{deg},\var{prec}},
where \var{deg} is a \kbd{long}.
\subsecidx{rnflllgram}$(\var{nf},\var{pol},\var{order})$: given a polynomial
\var{pol} with coefficients in \var{nf} and an order \var{order} as output
by \kbd{rnfpseudobasis} or similar, gives $[[\var{neworder}],U]$, where
\var{neworder} is a reduced order and $U$ is the unimodular transformation
matrix.
\syn{rnflllgram}{\var{nf},\var{pol},\var{order},\var{prec}}.
\subsecidx{rnfnormgroup}$(\var{bnr},\var{pol})$: \var{bnr} being a big ray
class field as output by \kbd{bnrinit} and \var{pol} a relative polynomial
defining an \idx{Abelian extension}, computes the norm group (alias Artin
or Takagi group) corresponding to the Abelian extension of
$\var{bnf}=\var{bnr}[1]$ defined by \var{pol}, where the module
corresponding to \var{bnr} is assumed to be a multiple of the conductor
(i.e.~\var{pol} defines a subextension of \var{bnr}). The result is the HNF
defining the norm group on the given generators
of $\var{bnr}[5][3]$. Note that neither the fact that \var{pol} defines an
Abelian extension nor the fact that the module is a multiple of the conductor
is checked. The result is undefined if the assumption is not correct.
\syn{rnfnormgroup}{\var{bnr},\var{pol}}.
\subsecidx{rnfpolred}$(\var{nf},\var{pol})$: relative version of \kbd{polred}.
Given a monic polynomial \var{pol} with coefficients in $\var{nf}$, finds a
list of relative polynomials defining some subfields, hopefully simpler,
the original field being among them. In the present version \vers, this is
slower
than \kbd{rnfpolredabs}.
\syn{rnfpolred}{\var{nf},\var{pol},\var{prec}}.
\subsecidx{rnfpolredabs}$(\var{nf},\var{pol},\{\fl=0\})$: relative version of
\kbd{polredabs}. Given a monic polynomial \var{pol} with coefficients in
$\var{nf}$, finds a simpler relative polynomial defining the same field. If
$\fl=1$, returns $[P,a]$, where $P$ is the default output and $a$ is an
element, expressed in terms of a root of $P$, whose characteristic
polynomial is \var{pol}. If $\fl=2$, returns an absolute polynomial (same as
{\tt rnfequation(\var{nf},rnfpolredabs(\var{nf},\var{pol}))} but faster).
\misctitle{Remark.} In the present implementation, this is both faster and
much more efficient than \kbd{rnfpolred}, the difference being more
dramatic than in the absolute case. This is because the implementation of
\kbd{rnfpolred} is based on (a partial implementation of) an incomplete
reduction theory of lattices over number fields (i.e.~the function
\kbd{rnflllgram}) which deserves to be improved.
\syn{rnfpolredabs}{\var{nf},\var{pol},\fl,\var{prec}}.
\subsecidx{rnfpseudobasis}$(\var{nf},\var{pol})$: given a number field
$\var{nf}$ as output by \kbd{nfinit} and a polynomial \var{pol} with
coefficients in $\var{nf}$ defining a relative extension $L$ of $\var{nf}$,
computes a pseudo-basis $(A,I)$ and the relative discriminant of $L$.
This is output as
a four-element row vector $[A,I,D,d]$, where $D$ is the relative ideal
discriminant and $d$ is the relative discriminant considered as an element of
$\var{nf}^*/{\var{nf}^*}^2$.
Note: As usual, $\var{nf}$ can be a $\var{bnf}$ as output by \kbd{bnfinit}.
\syn{rnfpseudobasis}{\var{nf},\var{pol}}.
\subsecidx{rnfsteinitz}$(\var{nf},x)$: given a number field $\var{nf}$ as
output by \kbd{nfinit} and either a polynomial $x$ with coefficients in
$\var{nf}$ defining a relative extension $L$ of $\var{nf}$, or a pseudo-basis
$x$ of such an extension as output for example by \kbd{rnfpseudobasis},
computes another pseudo-basis $(A,I)$ (not in HNF in general) such that all
the ideals of $I$ except perhaps the last one are equal to the ring of
integers of $\var{nf}$, and outputs the four-component row vector $[A,I,D,d]$
as in \kbd{rnfpseudobasis}. The name of this function comes from the fact
that the ideal class of the last ideal of $I$ (which is well defined) is
called the \tev{Steinitz class} of the module $\Z_L$.
Note: $\var{nf}$ can be a $\var{bnf}$ as output by \kbd{bnfinit}.
\syn{rnfsteinitz}{\var{nf},x}.
\subsecidx{subgrouplist}$(\var{bnr},\{\var{bound}\},\{\fl=0\})$:
\var{bnr} being as output by \kbd{bnrinit} or a list of cyclic components
of a finite Abelian group $G$, outputs the list of subgroups of $G$
(of index bounded by \var{bound}, if not omitted). Subgroups are given
as HNF\sidx{Hermite normal form} left divisors of the
SNF\sidx{Smith normal form} matrix corresponding to $G$. If $\fl=0$
(default) and \var{bnr} is as output by
\kbd{bnrinit}, gives only the subgroups whose modulus is the conductor.
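For instance, the subgroups of index at most 4 of $\Z/6\Z\times\Z/2\Z$,
given here by its list of cyclic components, are obtained by:
\bprog
subgrouplist([6,2], 4)
@eprog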
\syn{subgrouplist0}{\var{bnr},\var{bound},\fl,\var{prec}}, where
\var{bound}, $\fl$ and $\var{prec}$ are long integers.
\subsecidx{zetak}$(\var{znf},x,\{\fl=0\})$: \var{znf} being a number
field initialized by \kbd{zetakinit} (\var{not} by \kbd{nfinit}),
computes the value of the \idx{Dedekind} zeta function of the number
field at the complex number $x$. If $\fl=1$, computes the Dedekind $\Lambda$
function instead (i.e.~the product of the
Dedekind zeta function by its gamma and exponential factors).
The accuracy of the result depends in an essential way on the accuracy of
the \kbd{zetakinit} computation and on the current accuracy, but even so the
result may be off by up to 5 or 10 decimal digits.
\syn{glambdak}{\var{znf},x,\var{prec}} or
$\teb{gzetak}(\var{znf},x,\var{prec})$.
\subsecidx{zetakinit}$(x)$: computes the initialization data
concerning the number field defined by the polynomial $x$ needed
to compute the \idx{Dedekind} zeta and lambda functions (respectively
$\kbd{zetak}(x)$ and $\kbd{zetak}(x,1)$). This function calls in particular
the \kbd{bnfinit} program. The result is a 9-component vector $v$ whose
components are very technical and cannot really be used except
through the \kbd{zetak} function. The only component of independent
interest is $v[1][4]$, which is the result of the
\kbd{bnfinit} call.
This function is very inefficient and should be rewritten. It needs to
compute millions of coefficients of the corresponding Dirichlet series when
the precision is large. Unless the discriminant is small it will not be able
to handle more than 9 digits of relative precision
(e.g.~\kbd{zetakinit(x\pow 8 - 2)} needs 440MB of memory at default
precision).
\syn{initzeta}{x}.
\section{Polynomials and power series}
We group here all functions which are specific to polynomials or power
series. Many other functions which can be applied on these objects are
described in the other sections. Also, some of the functions described here
can be applied to other types.
\subsecidx{O}$(a$\kbd{\pow}$b)$: $p$-adic (if $a$ is an integer greater than
or equal to 2) or power series zero (in all other cases), with precision given
by $b$.
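For instance:
\bprog
? O(x^4)
%1 = O(x^4)
? 1/(1 - x) + O(x^4)
%2 = 1 + x + x^2 + x^3 + O(x^4)
? O(2^3)
%3 = O(2^3)
@eprog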
\syn{ggrandocp}{a,b}, where $b$ is a \kbd{long}.
\subsecidx{deriv}$(x,\{v\})$: derivative of $x$ with respect to the main
variable if $v$ is omitted, and with respect to $v$ otherwise. $x$ can be any
type except polmod. The derivative of a scalar type is zero, and the
derivative of a vector or matrix is done componentwise. One can use $x'$ as a
shortcut if the derivative is with respect to the main variable of $x$.
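For instance:
\bprog
? deriv(x^2*y + y^3, y)
%1 = x^2 + 3*y^2
? (x^3 + 2*x)'
%2 = 3*x^2 + 2
@eprog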
\syn{deriv}{x,v}, where $v$ is a \kbd{long}, and an omitted $v$ is coded as
$-1$.
\subsecidx{eval}$(x)$: replaces in $x$ the formal variables by the values that
have been assigned to them after the creation of $x$. This is mainly useful
in GP, and not in library mode. Do not confuse this with substitution (see
\kbd{subst}). Applying this function to a character string yields the
output from the corresponding GP command, as if directly input from the
keyboard (see \secref{se:strings}).\label{se:eval}
\syn{geval}{x}. The more basic functions $\teb{poleval}(q,x)$,
$\teb{qfeval}(q,x)$, and $\teb{hqfeval}(q,x)$ evaluate $q$ at $x$, where $q$
is respectively assumed to be a polynomial, a quadratic form (a symmetric
matrix), or an Hermitian form (an Hermitian complex matrix).
\subsecidx{factorpadic}$(\var{pol},p,r,\{\fl=0\})$: $p$-adic factorization
of the polynomial \var{pol} to precision $r$, the result being a
two-column matrix as in \kbd{factor}. The factors are normalized so that
their leading coefficient is a power of $p$. $r$ must be strictly larger than
the $p$-adic valuation of the discriminant of \var{pol} for the result to
make any sense. The method used is a modified version of the \idx{round 4}
algorithm of \idx{Zassenhaus}.
If $\fl=1$, use an algorithm due to \idx{Buchmann} and \idx{Lenstra}, which is
usually less efficient.
\syn{factorpadic4}{\var{pol},p,r}, where $r$ is a \kbd{long} integer.
\subsecidx{intformal}$(x,\{v\})$: \idx{formal integration} of $x$ with
respect to the main variable if $v$ is omitted, with respect to the variable
$v$ otherwise. Since PARI does not know about ``abstract'' logarithms (they
are immediately evaluated, if only to a power series), logarithmic terms in
the result will yield an error. $x$ can be of any type. When $x$ is a
rational function, it is assumed that the base ring is an integral domain of
characteristic zero.
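For instance:
\bprog
? intformal(x^2 + y*x)
%1 = 1/3*x^3 + 1/2*y*x^2
@eprog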
\syn{integ}{x,v}, where $v$ is a \kbd{long} and an omitted $v$ is coded
as $-1$.
\subsecidx{padicappr}$(\var{pol},a)$: vector of $p$-adic roots of the
polynomial
\var{pol} congruent to the $p$-adic number $a$ modulo $p$ (or modulo 4 if $p=2$),
and with the same $p$-adic precision as $a$. The number $a$ can be an
ordinary $p$-adic number (type \typ{PADIC}, i.e.~an element of $\Q_p$) or
can be an element of a finite extension of $\Q_p$, in which case it is of
type \typ{POLMOD}, where at least one of the coefficients of the polmod is a
$p$-adic number. In this case, the result is the vector of roots belonging to
the same extension of $\Q_p$ as $a$.
\syn{apprgen9}{\var{pol},a}, but if $a$ is known to be simply a $p$-adic number
(type \typ{PADIC}), the syntax $\teb{apprgen}(\var{pol},a)$ can be used.
\subsecidx{polcoeff}$(x,s,\{v\})$: coefficient of degree $s$ of the
polynomial $x$, with respect to the main variable if $v$ is omitted, with
respect to $v$ otherwise.
\syn{polcoeff0}{x,s,v}, where $v$ is a \kbd{long} and an omitted $v$ is coded
as $-1$. Also available is \teb{truecoeff}$(x,v)$.
\subsecidx{poldegree}$(x,\{v\})$: degree of the polynomial $x$ in the main
variable if $v$ is omitted, in the variable $v$ otherwise. This is to be
understood as follows. When $x$ is a polynomial or a rational function, it
gives the degree of $x$, the degree of $0$ being $-1$ by convention. When $x$
is a non-zero scalar, it gives 0, and when $x$ is a zero scalar, it gives
$-1$. An error is returned otherwise.
\syn{poldegree}{x,v}, where $v$ and the result are \kbd{long}s (and an
omitted $v$ is coded as $-1$). Also available is \teb{degree}$(x)$, which is
equivalent to \kbd{poldegree($x$,-1)}.
\subsecidx{polcyclo}$(n,\{v=x\})$: $n$-th cyclotomic polynomial, in variable
$v$ ($x$ by default). The integer $n$ must be positive.
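For instance:
\bprog
? polcyclo(4)
%1 = x^2 + 1
? polcyclo(12, y)
%2 = y^4 - y^2 + 1
@eprog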
\syn{cyclo}{n,v}, where $n$ and $v$ are \kbd{long}
integers ($v$ is a variable number, usually obtained through \kbd{varn}).
\subsecidx{poldisc}$(\var{pol},\{v\})$: discriminant of the polynomial
\var{pol} in the main variable if $v$ is omitted, in $v$ otherwise. The
algorithm used is the \idx{subresultant algorithm}.
\syn{poldisc0}{x,v}. Also available is \teb{discsr}$(x)$, equivalent
to \kbd{poldisc0(x,-1)}.
\subsecidx{poldiscreduced}$(f)$: reduced discriminant vector of the
(integral, monic) polynomial $f$. This is the vector of elementary divisors
of $\Z[\alpha]/f'(\alpha)\Z[\alpha]$, where $\alpha$ is a root of the
polynomial $f$. The components of the result are all positive, and their
product is equal to the absolute value of the discriminant of~$f$.
\syn{reduceddiscsmith}{x}.
\subsecidx{polhensellift}$(x, y, p, e)$: given a vector $y$ of
polynomials that are pairwise relatively prime modulo the prime $p$,
and whose product is congruent to $x$ modulo $p$, lift the elements of
$y$ to polynomials whose product is congruent to $x$ modulo $p^e$.
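For instance, lifting the factorization $x^2+1 \equiv (x+2)(x+3) \pmod 5$
to a factorization modulo $5^2$:
\bprog
? polhensellift(x^2 + 1, [x + 2, x + 3], 5, 2)
%1 = [x + 7, x + 18]
@eprog
\noindent Indeed $(x+7)(x+18) = x^2 + 25x + 126 \equiv x^2 + 1 \pmod{25}$.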
\syn{polhensellift}{x,y,p,e} where $e$ must be a \kbd{long}.
\subsecidx{polinterpolate}$(xa,\{ya\},\{v=x\},\{\&e\})$: given the data vectors
$xa$ and $ya$ of the same length $n$ ($xa$ containing the $x$-coordinates,
and $ya$ the corresponding $y$-coordinates), this function finds the
\idx{interpolating polynomial} passing through these points and evaluates it
at~$v$. If $ya$ is omitted, return the polynomial interpolating the
$(i,xa[i])$. If present, $e$ will contain an error estimate on the returned
value.
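For instance, interpolating the squares $0$, $1$, $4$ at $0$, $1$, $2$:
\bprog
? polinterpolate([0,1,2], [0,1,4])
%1 = x^2
@eprog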
\syn{polint}{xa,ya,v,\&e}, where $e$ will contain an error estimate on the
returned value.
\subsecidx{polisirreducible}$(\var{pol})$: \var{pol} being a polynomial
(univariate in the present version \vers), returns 1 if \var{pol} is
non-constant and irreducible, 0 otherwise. Irreducibility is checked over
the smallest base field over which \var{pol} seems to be defined.
\syn{gisirreducible}{\var{pol}}.
\subsecidx{pollead}$(x,\{v\})$: leading coefficient of the polynomial or
power series $x$. This is computed with respect to the main variable of $x$
if $v$ is omitted, with respect to the variable $v$ otherwise.
\syn{pollead}{x,v}, where $v$ is a \kbd{long} and an omitted $v$ is coded as
$-1$. Also available is \teb{leadingcoeff}$(x)$.
\subsecidx{pollegendre}$(n,\{v=x\})$: creates the $n^{\text{th}}$
\idx{Legendre polynomial}, in variable $v$.
\syn{legendre}{n}, where $n$ is a \kbd{long}.
\subsecidx{polrecip}$(\var{pol})$: reciprocal polynomial of \var{pol},
i.e.~the coefficients are in reverse order. \var{pol} must be a polynomial.
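For instance:
\bprog
? polrecip(x^3 + 2*x + 3)
%1 = 3*x^3 + 2*x^2 + 1
@eprog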
\syn{polrecip}{x}.
\subsecidx{polresultant}$(x,y,\{v\},\{\fl=0\})$: resultant of the two
polynomials $x$ and $y$ with exact entries, with respect to the main
variables of $x$ and $y$ if $v$ is omitted, with respect to the variable $v$
otherwise. The algorithm used is the \idx{subresultant algorithm} by default.
If $\fl=1$, uses the determinant of Sylvester's matrix instead (here $x$ and
$y$ may have non-exact coefficients).
If $\fl=2$, uses Ducos's modified subresultant algorithm. It should be much
faster than the default if the coefficient ring is complicated
(e.g.~multivariate polynomials or huge coefficients), and slightly slower
otherwise.
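For instance, since $x^2+1$ takes the value $3$ at both roots $\pm\sqrt2$
of $x^2-2$:
\bprog
? polresultant(x^2 + 1, x^2 - 2)
%1 = 9
@eprog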
\syn{polresultant0}{x,y,v,\fl}, where $v$ is a \kbd{long} and an omitted $v$
is coded as $-1$. Also available are $\teb{subres}(x,y)$ ($\fl=0$) and
$\teb{resultant2}(x,y)$ ($\fl=1$).
\subsecidx{polroots}$(\var{pol},\{\fl=0\})$: complex roots of the polynomial
\var{pol}, given as a column vector where each root is repeated according to
its multiplicity. The precision is given as for transcendental functions: under
GP it is kept in the variable \kbd{realprecision} and is transparent to the
user, but it must be explicitly given as a second argument in library mode.
The algorithm used is a modification of A.~\idx{Sch\"onhage}'s remarkable
root-finding algorithm, due to and implemented by X.~Gourdon. Barring bugs,
it is guaranteed to converge and to give the roots to the required accuracy.
If $\fl=1$, use a variant of the Newton-Raphson method, which is \var{not}
guaranteed to converge, but is rather fast. If you get the messages ``too
many iterations in roots'' or ``INTERNAL ERROR: incorrect result in roots'',
use the default function (i.e.~no flag or $\fl=0$). This used to be the
default root-finding function in PARI until version 1.39.06.
\syn{roots}{\var{pol},\var{prec}} or $\teb{rootsold}(\var{pol},\var{prec})$.
\subsecidx{polrootsmod}$(\var{pol},p,\{\fl=0\})$: row vector of roots modulo
$p$ of the polynomial \var{pol}. The particular non-prime value $p=4$ is
accepted, mainly for $2$-adic computations. Multiple roots are \var{not}
repeated.
If $p<100$, you may try setting $\fl=1$, which uses a naive search. In this
case, multiple roots \var{are} repeated with their order of multiplicity.
\syn{rootmod}{\var{pol},p} ($\fl=0$) or
$\teb{rootmod2}(\var{pol},p)$ ($\fl=1$).
\subsecidx{polrootspadic}$(\var{pol},p,r)$: row vector of $p$-adic roots of the
polynomial \var{pol} with $p$-adic precision equal to $r$. Multiple roots are
\var{not} repeated. $p$ is assumed to be a prime.
\syn{rootpadic}{\var{pol},p,r}, where $r$ is a \kbd{long}.
\subsecidx{polsturm}$(\var{pol},\{a\},\{b\})$: number of real roots of the real
polynomial \var{pol} in the interval $]a,b]$, using Sturm's algorithm. $a$
(resp.~$b$) is taken to be $-\infty$ (resp.~$+\infty$) if omitted.
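For instance, $x^3 - 3x + 1$ has three real roots (approximately $-1.88$,
$0.35$ and $1.53$), two of which lie in $]0,2]$:
\bprog
? polsturm(x^3 - 3*x + 1)
%1 = 3
? polsturm(x^3 - 3*x + 1, 0, 2)
%2 = 2
@eprog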
\syn{sturmpart}{\var{pol},a,b}. Use \kbd{NULL} to omit an argument.
\teb{sturm}\kbd{(\var{pol})} is equivalent to
\key{sturmpart}\kbd{(\var{pol},NULL,NULL)}. The result is a \kbd{long}.
\subsecidx{polsubcyclo}$(n,d,\{v=x\})$: gives a polynomial (in variable
$v$) defining the sub-Abelian extension of degree $d$ of the cyclotomic
field $\Q(\zeta_n)$, where $d\mid \phi(n)$. $(\Z/n\Z)^*$ has to be cyclic
(i.e.~$n=2$, $4$, $p^k$ or $2p^k$ for an odd prime $p$). The function
\tet{galoissubcyclo} covers the general case.
\syn{subcyclo}{n,d,v}, where $v$ is a variable number.
\subsecidx{polsylvestermatrix}$(x,y)$: forms the Sylvester matrix
corresponding to the two polynomials $x$ and $y$, where the coefficients of
the polynomials are put in the columns of the matrix (which is the natural
direction for solving equations afterwards). The use of this matrix can be
essential when dealing with polynomials with inexact entries, since
polynomial Euclidean division doesn't make much sense in this case.
\syn{sylvestermatrix}{x,y}.
\subsecidx{polsym}$(x,n)$: creates the vector of the \idx{symmetric powers}
of the roots of the polynomial $x$ up to power $n$, using Newton's
formula.
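For example, the roots of $x^2 - 3x + 2$ are $1$ and $2$, and the successive
power sums $1^k + 2^k$ for $k = 0,\dots,3$ are
\bprog
? polsym(x^2 - 3*x + 2, 3)
%1 = [2, 3, 5, 9]~
@eprog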
\syn{polsym}{x}.
\subsecidx{poltchebi}$(n,\{v=x\})$: creates the $n^{\text{th}}$
\idx{Chebyshev} polynomial, in variable $v$.
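For example:
\bprog
? poltchebi(3)
%1 = 4*x^3 - 3*x
@eprog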
\syn{tchebi}{n,v}, where $n$ and $v$ are \kbd{long}
integers ($v$ is a variable number).
\subsecidx{polzagier}$(n,m)$: creates Zagier's polynomial $P_{n,m}$ used in
the functions \kbd{sumalt} and \kbd{sumpos} (with $\fl=1$). The exact
definition can be found in a forthcoming paper. One must have $m\le n$.
\syn{polzagreel}{n,m,\var{prec}} if the result is only wanted as a polynomial
with real coefficients to the precision $\var{prec}$, or $\teb{polzag}(n,m)$
if the result is wanted exactly, where $n$ and $m$ are \kbd{long}s.
\subsecidx{serconvol}$(x,y)$: convolution (or \idx{Hadamard product}) of the
two power series $x$ and $y$; in other words if $x=\sum a_k*X^k$ and $y=\sum
b_k*X^k$ then $\kbd{serconvol}(x,y)=\sum a_k*b_k*X^k$.
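For example:
\bprog
? serconvol(1 + 2*x + 3*x^2 + O(x^3), 1 + 10*x + 100*x^2 + O(x^3))
%1 = 1 + 20*x + 300*x^2 + O(x^3)
@eprog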
\syn{convol}{x,y}.
\subsecidx{serlaplace}$(x)$: $x$ must be a power series with only
non-negative exponents. If $x=\sum (a_k/k!)*X^k$ then the result is $\sum
a_k*X^k$.
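For example:
\bprog
? serlaplace(1 + x + x^2/2 + x^3/6 + O(x^4))
%1 = 1 + x + x^2 + x^3 + O(x^4)
@eprog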
\syn{laplace}{x}.
\subsecidx{serreverse}$(x)$: reverse power series (i.e.~$x^{-1}$, not $1/x$)
of $x$. $x$ must be a power series whose valuation is exactly equal to one.
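For example, the reverse of $x + x^2 + O(x^3)$ is $x - x^2 + O(x^3)$, since
substituting one into the other gives back $x + O(x^3)$:
\bprog
? serreverse(x + x^2 + O(x^3))
%1 = x - x^2 + O(x^3)
@eprog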
\syn{recip}{x}.
\subsecidx{subst}$(x,y,z)$:
replace the simple variable $y$ by the argument $z$ in the ``polynomial''
expression $x$. Every type is allowed for $x$, but if it is not a genuine
polynomial (or power series, or rational function), the substitution will be
done as if the scalar components were polynomials of degree one. In
particular, beware that:
\bprog
? subst(1, x, [1,2; 3,4])
%1 =
[1 0]
[0 1]
? subst(1, x, Mat([0,1]))
*** forbidden substitution by a non square matrix
@eprog
If $x$ is a power series, $z$ must be either a polynomial, a power series, or
a rational function. $y$ must be a simple variable name.
\syn{gsubst}{x,v,z}, where $v$ is the number of
the variable $y$.
\subsecidx{taylor}$(x,y)$: Taylor expansion around $0$ of $x$ with respect
to\label{se:taylor}
the simple variable $y$. $x$ can be of any reasonable type, for example a
rational function. The number of terms of the expansion is transparent to the
user under GP, but must be given as a second argument in library mode.
\syn{tayl}{x,y,n}, where the \kbd{long} integer $n$ is the desired number of
terms in the expansion.
\subsecidx{thue}$(\var{tnf},a,\{\var{sol}\})$: solves the equation
$P(x,y)=a$ in integers $x$ and $y$, where \var{tnf} was created with
$\kbd{thueinit}(P)$. \var{sol}, if present, contains the solutions of
$\text{Norm}(x)=a$ modulo units of positive norm in the number field
defined by $P$ (as computed by \kbd{bnfisintnorm}). If \var{tnf} was
computed without assuming \idx{GRH} ($\fl=1$ in \kbd{thueinit}), the
result is unconditional. For instance, here's how to solve the Thue
equation $x^{13} - 5y^{13} = - 4$:
\bprog
? tnf = thueinit(x^13 - 5);
? thue(tnf, -4)
%1 = [[1, 1]]
@eprog
Hence, assuming GRH, the only solution is $x = 1$, $y = 1$.
\syn{thue}{\var{tnf},a,\var{sol}}, where an omitted \var{sol} is coded
as \kbd{NULL}.
\subsecidx{thueinit}$(P,\{\fl=0\})$: initializes the \var{tnf}
corresponding to $P$. It is meant to be used in conjunction with \tet{thue}
to solve Thue equations $P(x,y) = a$, where $a$ is an integer. If $\fl$ is
non-zero, certify the result unconditionally. Otherwise, assume \idx{GRH},
which is of course much faster.
\syn{thueinit}{P,\fl,\var{prec}}.
\section{Vectors, matrices, linear algebra and sets}
\label{se:linear_algebra}
Note that most linear algebra functions operating on subspaces defined by
generating sets (such as \tet{mathnf}, \tet{qflll}, etc.) take matrices as
arguments. As usual, the generating vectors are taken to be the
\var{columns} of the given matrix.
\subsecidx{algdep}$(x,k,\{\fl=0\})$:\sidx{algebraic dependence} $x$ being
real, complex, or $p$-adic, finds a polynomial of degree at most $k$ with
integer coefficients having $x$ as approximate root. Note that the polynomial
which is obtained is not necessarily the ``correct'' one (it's not even
guaranteed to be irreducible!). One can check the closeness either by a
polynomial evaluation or substitution, or by computing the roots of the
polynomial given by algdep.
If $x$ is $p$-adic, $\fl$ is meaningless and the algorithm LLL-reduces the
``dual lattice'' corresponding to the powers of $x$.
Otherwise, if $\fl$ is zero, the algorithm used is a variant of the \idx{LLL}
algorithm due to Hastad, Lagarias and Schnorr (STACS 1986). If the precision
is too low, the routine may enter an infinite loop.
If $\fl$ is non-zero, use a standard LLL. $\fl$ then indicates a precision,
which should be between $0.5$ and $1.0$ times the number of decimal digits
to which $x$ was computed.
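For example, $\sqrt{2}+\sqrt{3}$ is a root of $x^4 - 10x^2 + 1$:
\bprog
? algdep(sqrt(2) + sqrt(3), 4)
%1 = x^4 - 10*x^2 + 1
@eprog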
\syn{algdep0}{x,k,\fl,\var{prec}}, where $k$ and $\fl$ are \kbd{long}s.
Also available is $\teb{algdep}(x,k,\var{prec})$ ($\fl=0$).
\subsecidx{charpoly}$(A,\{v=x\},\{\fl=0\})$: \idx{characteristic polynomial}
of $A$ with respect to the variable $v$, i.e.~determinant of $v*I-A$ if $A$
is a square matrix, determinant of the map ``multiplication by $A$'' if $A$
is a scalar, in particular a polmod (e.g.~\kbd{charpoly(I,x)=x\pow2+1}).
Note that in the latter case, the \idx{minimal polynomial} can be obtained
as
\bprog
minpoly(A)=
{
local(y);
y = charpoly(A);
y / gcd(y,y')
}
@eprog
\noindent The value of $\fl$ is only significant for matrices.
If $\fl=0$, the method used is essentially the same as for computing the
adjoint matrix, i.e.~computing the traces of the powers of $A$.
If $\fl=1$, uses Lagrange interpolation which is almost always slower.
If $\fl=2$, uses the Hessenberg form. This is faster than the default when
the coefficients are intmods modulo a prime or real numbers, but is usually
slower in other base rings.
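For example:
\bprog
? charpoly([1,2; 3,4])
%1 = x^2 - 5*x - 2
@eprog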
\syn{charpoly0}{A,v,\fl}, where $v$ is the variable number. Also available
are the functions $\teb{caract}(A,v)$ ($\fl=1$), $\teb{carhess}(A,v)$
($\fl=2$), and $\teb{caradj}(A,v,\var{pt})$ where, in this last case,
\var{pt} is a \kbd{GEN*} which, if not equal to \kbd{NULL}, will receive
the address of the adjoint matrix of $A$ (see \kbd{matadjoint}), so both
can be obtained at once.
\subsecidx{concat}$(x,\{y\})$: concatenation of $x$ and $y$. If $x$ or $y$ is
not a vector or matrix, it is considered as a one-dimensional vector. All
types are allowed for $x$ and $y$, but the sizes must be compatible. Note
that matrices are concatenated horizontally, i.e.~the number of rows stays
the same. Using transpositions, it is easy to concatenate them vertically.
To concatenate vectors sideways (i.e.~to obtain a two-row or two-column
matrix), first transform the vector into a one-row or one-column matrix using
the function \tet{Mat}. Concatenating a row vector to a matrix having the
same number of columns will add the row to the matrix (top row if the vector
is $x$, i.e.~comes first, and bottom row otherwise).
The empty matrix \kbd{[;]} is considered to have a number of rows compatible
with any operation, in particular concatenation. (Note that this is
definitely \var{not} the case for empty vectors \kbd{[~]} or \kbd{[~]\til}.)
If $y$ is omitted, $x$ has to be a row vector or a list, in which case its
elements are concatenated, from left to right, using the above rules.
\bprog
? concat([1,2], [3,4])
%1 = [1, 2, 3, 4]
? a = [[1,2]~, [3,4]~]; concat(a)
%2 = [1, 2, 3, 4]~
? a[1] = Mat(a[1]); concat(a)
%3 =
[1 3]
[2 4]
? concat([1,2; 3,4], [5,6]~)
%4 =
[1 2 5]
[3 4 6]
? concat([%, [7,8]~, [1,2,3,4]])
%5 =
[1 2 5 7]
[3 4 6 8]
[1 2 3 4]
@eprog
\syn{concat}{x,y}.
\subsecidx{lindep}$(x,\{\fl=0\})$:\sidx{linear dependence} $x$ being a
vector with real or complex coefficients, finds a small integral linear
combination among these coefficients.
If $\fl=0$, uses a variant of the \idx{LLL} algorithm due to Hastad, Lagarias
and Schnorr (STACS 1986).
If $\fl>0$, uses the LLL algorithm. $\fl$ is a parameter which should be
between one half the number of decimal digits of precision and that number
(see \kbd{algdep}).
If $\fl<0$, returns as soon as one relation has been found.
\syn{lindep0}{x,\fl,\var{prec}}. Also available is
$\teb{lindep}(x,\var{prec})$ ($\fl=0$).
\subsecidx{listcreate}$(n)$: creates an empty list of maximal length $n$.
This function is useless in library mode.
\subsecidx{listinsert}$(\var{list},x,n)$: inserts the object $x$ at
position $n$ in \var{list} (which must be of type \typ{LIST}). All the
remaining elements of \var{list} (from position $n+1$ onwards) are shifted
to the right. This and \kbd{listput} are the only commands which enable
you to increase a list's effective length (as long as it remains under
the maximal length specified at the time of the \kbd{listcreate}).
This function is useless in library mode.
\subsecidx{listkill}$(\var{list})$: kill \var{list}. This deletes all
elements from \var{list} and sets its effective length to $0$. The maximal
length is not affected.
This function is useless in library mode.
\subsecidx{listput}$(\var{list},x,\{n\})$: sets the $n$-th element of the list
\var{list} (which must be of type \typ{LIST}) equal to $x$. If $n$ is omitted,
or greater than the list's current effective length, just appends $x$. This and
\kbd{listinsert} are the only commands which enable you to increase a list's
effective length (as long as it remains under the maximal length specified at
the time of the \kbd{listcreate}).
If you want to put an element into an occupied cell, i.e.~if you don't want to
change the effective length, you can consider the list as a vector and use
the usual \kbd{list[n] = x} construct.
This function is useless in library mode.
\subsecidx{listsort}$(\var{list},\{\fl=0\})$: sorts \var{list} (which must
be of type \typ{LIST}) in place. If $\fl$ is non-zero, suppresses all repeated
coefficients. This is much faster than the \kbd{vecsort} command since no
copy has to be made.
This function is useless in library mode.
\subsecidx{matadjoint}$(x)$: \idx{adjoint matrix} of $x$, i.e.~the matrix $y$
of cofactors of $x$, satisfying $x*y=\det(x)*\text{Id}$. $x$ must be a
(not necessarily invertible) square matrix.
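For example, the adjoint of $x = \kbd{[1,2; 3,4]}$, of determinant $-2$, is
\bprog
? matadjoint([1,2; 3,4])
%1 =
[4 -2]
[-3 1]
@eprog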
\syn{adj}{x}.
\subsecidx{matcompanion}$(x)$: the left companion matrix to the polynomial $x$.
\syn{assmat}{x}.
\subsecidx{matdet}$(x,\{\fl=0\})$: determinant of $x$. $x$ must be a
square matrix.
If $\fl=0$, uses Gauss-Bareiss.
If $\fl=1$, uses classical Gaussian elimination, which is better when the
entries of the matrix are reals or integers for example, but usually much
worse for more complicated entries like multivariate polynomials.
\syn{det}{x} ($\fl=0$) and $\teb{det2}(x)$
($\fl=1$).
\subsecidx{matdetint}$(x)$: $x$ being an $m\times n$ matrix with integer
coefficients, this function computes a multiple of the determinant of the
lattice generated by the columns of $x$ if it is of rank $m$, and returns
zero otherwise. This function can be useful in conjunction with the function
\kbd{mathnfmod} which needs to know such a multiple. Other ways to obtain
this determinant (assuming the rank is maximal) are
\kbd{matdet(qflll(x,4)[2]$*$x)} or more simply \kbd{matdet(mathnf(x))}.
Experiment to see which is faster for your applications.
\syn{detint}{x}.
\subsecidx{matdiagonal}$(x)$: $x$ being a vector, creates the diagonal matrix
whose diagonal entries are those of $x$.
\syn{diagonal}{x}.
\subsecidx{mateigen}$(x)$: gives the eigenvectors of $x$ as columns of a
matrix.
\syn{eigen}{x}.
\subsecidx{mathess}$(x)$: Hessenberg form of the square matrix $x$.
\syn{hess}{x}.
\subsecidx{mathilbert}$(x)$: $x$ being a \kbd{long}, creates the \idx{Hilbert
matrix} of order $x$, i.e.~the matrix whose coefficient ($i$,$j$) is $1/
(i+j-1)$.
\syn{mathilbert}{x}.
\subsecidx{mathnf}$(x,\{\fl=0\})$: if $x$ is a (not necessarily square)
matrix of maximal rank, finds the \var{upper triangular}
\idx{Hermite normal form} of $x$. If the rank of $x$ is equal to its number
of rows, the result is a square matrix. In general, the columns of the
result form a basis of the lattice spanned by the columns of $x$.
If $\fl=0$, uses the naive algorithm. If the $\Z$-module generated by the
columns is a lattice, it is recommended to use
\kbd{mathnfmod(x, matdetint(x))} instead (much faster).
If $\fl=1$, uses Batut's algorithm. Outputs a two-component row vector
$[H,U]$, where $H$ is the \var{upper triangular} Hermite normal form
of $x$ (i.e.~the default result) and $U$ is the unimodular transformation
matrix such that $xU=[0|H]$. If the rank of $x$ is equal to its number of
rows, $H$ is a square matrix. In general, the columns of $H$ form a basis
of the lattice spanned by the columns of $x$.
If $\fl=2$, uses Havas's algorithm. Outputs $[H,U,P]$, such that
$H$ and $U$ are as before and $P$ is a permutation of the rows such that $P$
applied to $xU$ gives $H$. This does not work very well in the present version
\vers.
If $\fl=3$, uses Batut's algorithm, and outputs $[H,U,P]$ as in the previous
case.
If $\fl=4$, as in case 1 above, but uses \idx{LLL} reduction along the way.
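For example:
\bprog
? mathnf([1,2; 3,4])
%1 =
[2 1]
[0 1]
@eprog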
\syn{mathnf0}{x,\fl}. Also available are $\teb{hnf}(x)$ ($\fl=0$) and
$\teb{hnfall}(x)$ ($\fl=1$). To reduce \var{huge} (say $400 \times 400$ and
more) relation matrices (sparse with small entries), you can use the pair
\kbd{hnfspec} / \kbd{hnfadd}. Since this is rather technical and the
calling interface may change, they are not documented yet. Look at the code
in \kbd{basemath/alglin1.c}.
\subsecidx{mathnfmod}$(x,d)$: if $x$ is a (not necessarily square) matrix of
maximal rank with integer entries, and $d$ is a multiple of the (non-zero)
determinant of the lattice spanned by the columns of $x$, finds the
\var{upper triangular} \idx{Hermite normal form} of $x$.
If the rank of $x$ is equal to its number of rows, the result is a square
matrix. In general, the columns of the result form a basis of the lattice
spanned by the columns of $x$. This is much faster than \kbd{mathnf} when $d$
is known.
\syn{hnfmod}{x,d}.
\subsecidx{mathnfmodid}$(x,d)$: outputs the (upper triangular)
\idx{Hermite normal form} of $x$ concatenated with $d$ times
the identity matrix.
\syn{hnfmodid}{x,d}.
\subsecidx{matid}$(n)$: creates the $n\times n$ identity matrix.
\syn{idmat}{n} where $n$ is a \kbd{long}.
Related functions are $\teb{gscalmat}(x,n)$, which creates $x$ times the
identity matrix ($x$ being a \kbd{GEN} and $n$ a \kbd{long}), and
$\teb{gscalsmat}(x,n)$ which is the same when $x$ is a \kbd{long}.
\subsecidx{matimage}$(x,\{\fl=0\})$: gives a basis for the image of the
matrix $x$ as columns of a matrix. A priori the matrix can have entries of
any type. If $\fl=0$, use standard Gauss pivot. If $\fl=1$, use
\kbd{matsupplement}.
\syn{matimage0}{x,\fl}. Also available is $\teb{image}(x)$ ($\fl=0$).
\subsecidx{matimagecompl}$(x)$: gives the vector of the column indices which
are not extracted by the function \kbd{matimage}. Hence the number of
components of \kbd{matimagecompl(x)} plus the number of columns of
\kbd{matimage(x)} is equal to the number of columns of the matrix $x$.
\syn{imagecompl}{x}.
\subsecidx{matindexrank}$(x)$: $x$ being a matrix of rank $r$, gives two
vectors $y$ and $z$ of length $r$ giving a list of rows and columns
respectively (starting from 1) such that the extracted matrix obtained from
these two vectors using $\tet{vecextract}(x,y,z)$ is invertible.
\syn{indexrank}{x}.
\subsecidx{matintersect}$(x,y)$: $x$ and $y$ being two matrices with the same
number of rows each of whose columns are independent, finds a basis of the
$\Q$-vector space equal to the intersection of the spaces spanned by the
columns of $x$ and $y$ respectively. See also the function
\tet{idealintersect}, which does the same for free $\Z$-modules.
\syn{intersect}{x,y}.
\subsecidx{matinverseimage}$(x,y)$: gives a column vector belonging to the
inverse image of the column vector $y$ by the matrix $x$ if one exists, the
empty vector otherwise. To get the complete inverse image, it suffices to add
to the result any element of the kernel of $x$ obtained for example by
\kbd{matker}.
\syn{inverseimage}{x,y}.
\subsecidx{matisdiagonal}$(x)$: returns true (1) if $x$ is a diagonal matrix,
false (0) if not.
\syn{isdiagonal}{x}, and this returns a \kbd{long}
integer.
\subsecidx{matker}$(x,\{\fl=0\})$: gives a basis for the kernel of the
matrix $x$ as columns of a matrix. A priori the matrix can have entries of
any type.
If $x$ is known to have integral entries, set $\fl=1$.
\noindent Note: the library function $\tet{ker_mod_p}(x, p)$, where $x$ has
integer entries and $p$ is prime, is equivalent to, but many orders of
magnitude faster than, \kbd{matker(x*Mod(1,p))}, and needs much less stack
space. To use it under GP, type \kbd{install(ker\_mod\_p, GG)} first.
\syn{matker0}{x,\fl}. Also available are $\teb{ker}(x)$ ($\fl=0$),
$\teb{keri}(x)$ ($\fl=1$) and $\kbd{ker\_mod\_p}(x,p)$.
\subsecidx{matkerint}$(x,\{\fl=0\})$: gives an \idx{LLL}-reduced $\Z$-basis
for the lattice equal to the kernel of the matrix $x$, given as the columns
of a matrix. $x$ must have integer entries (rational entries are not
permitted).
If $\fl=0$, uses a modified integer LLL algorithm.
If $\fl=1$, uses $\kbd{matrixqz}(x,-2)$. If LLL reduction of the final result
is not desired, you can save time using \kbd{matrixqz(matker(x),-2)} instead.
If $\fl=2$, uses another modified LLL. In the present version \vers, only
independent rows are allowed in this case.
\syn{matkerint0}{x,\fl}. Also available is
$\teb{kerint}(x)$ ($\fl=0$).
\subsecidx{matmuldiagonal}$(x,d)$: product of the matrix $x$ by the diagonal
matrix whose diagonal entries are those of the vector $d$. Equivalent to,
but much faster than $x*\kbd{matdiagonal}(d)$.
\syn{matmuldiagonal}{x,d}.
\subsecidx{matmultodiagonal}$(x,y)$: product of the matrices $x$ and $y$
knowing that the result is a diagonal matrix. Much faster than $x*y$ in
that case.
\syn{matmultodiagonal}{x,y}.
\subsecidx{matpascal}$(x,\{q\})$: creates as a matrix the lower triangular
\idx{Pascal triangle} of order $x+1$ (i.e.~with binomial coefficients
up to $x$). If $q$ is given, compute the $q$-Pascal triangle (i.e.~using
$q$-binomial coefficients).
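For example:
\bprog
? matpascal(3)
%1 =
[1 0 0 0]
[1 1 0 0]
[1 2 1 0]
[1 3 3 1]
@eprog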
\syn{matqpascal}{x,q}, where $x$ is a \kbd{long} and $q=\kbd{NULL}$ is used
to omit $q$. Also available is $\teb{matpascal}(x)$.
\subsecidx{matrank}$(x)$: rank of the matrix $x$.
\syn{rank}{x}, and the result is a \kbd{long}.
\subsecidx{matrix}$(m,n,\{X\},\{Y\},\{\var{expr}=0\})$: creation of the
$m\times n$ matrix whose coefficients are given by the expression
\var{expr}. There are two formal parameters in \var{expr}, the first one
($X$) corresponding to the rows, the second ($Y$) to the columns, and $X$
goes from 1 to $m$, $Y$ goes from 1 to $n$. If one of the last 3 parameters
is omitted, fill the matrix with zeroes.
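For example:
\bprog
? matrix(2,3, X,Y, X+Y)
%1 =
[2 3 4]
[3 4 5]
@eprog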
\synt{matrice}{GEN nlig,GEN ncol,entree *e1,entree *e2,char *expr}.
\subsecidx{matrixqz}$(x,p)$: $x$ being an $m\times n$ matrix with $m\ge n$
with rational or integer entries, this function has varying behaviour
depending on the sign of $p$:
If $p\geq 0$, $x$ is assumed to be of maximal rank. This function returns a
matrix having only integral entries, having the same image as $x$, such that
the GCD of all its $n\times n$ subdeterminants is equal to 1 when $p$ is
equal to 0, or not divisible by $p$ otherwise. Here $p$ must be a prime
number (when it is non-zero). However, if the function is used when $p$ has
no small prime factors, it will either work or give the message ``impossible
inverse modulo'' and a non-trivial divisor of $p$.
If $p=-1$, this function returns a matrix whose columns form a basis of the
lattice equal to $\Z^n$ intersected with the lattice generated by the
columns of $x$.
If $p=-2$, returns a matrix whose columns form a basis of the lattice equal
to $\Z^n$ intersected with the $\Q$-vector space generated by the
columns of $x$.
\syn{matrixqz0}{x,p}.
\subsecidx{matsize}$(x)$: $x$ being a vector or matrix, returns a row vector
with two components, the first being the number of rows (1 for a row vector),
the second the number of columns (1 for a column vector).
\syn{matsize}{x}.
\subsecidx{matsnf}$(X,\{\fl=0\})$: if $X$ is a (singular or non-singular)
square matrix outputs the vector of elementary divisors of $X$ (i.e.~the
diagonal of the \idx{Smith normal form} of $X$).
The binary digits of \fl\ mean:
1 (complete output): if set, outputs $[U,V,D]$, where $U$ and $V$ are two
unimodular matrices such that $UXV$ is the diagonal matrix $D$. Otherwise
output only the diagonal of $D$.
2 (generic input): if set, allows polynomial entries. Otherwise, assume
that $X$ has integer coefficients.
4 (cleanup): if set, cleans up the output. This means that elementary
divisors equal to $1$ will be deleted, i.e.~outputs a shortened vector $D'$
instead of $D$. If complete output was required, returns $[U',V',D']$ so
that $U'XV' = D'$ holds. If this flag is set, $X$ is allowed to be of the
form $D$ or $[U,V,D]$ as would normally be output with the cleanup flag
unset.
\syn{matsnf0}{X,\fl}. Also available is $\teb{smith}(X)$ ($\fl=0$).
\subsecidx{matsolve}$(x,y)$: $x$ being an invertible matrix and $y$ a column
vector, finds the solution $u$ of $x*u=y$, using Gaussian elimination. This
has the same effect as, but is a bit faster than, $x^{-1}*y$.
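For example, solving $x + y = 3$, $x - y = 1$:
\bprog
? matsolve([1,1; 1,-1], [3,1]~)
%1 = [2, 1]~
@eprog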
\syn{gauss}{x,y}.
\subsecidx{matsolvemod}$(m,d,y,\{\fl=0\})$: $m$ being any integral matrix,
$d$ a vector of positive integer moduli, and $y$ an integral
column vector, gives a small integer solution to the system of congruences
$\sum_i m_{i,j}x_j\equiv y_i\pmod{d_i}$ if one exists, otherwise returns
zero. Shorthand notation: $y$ (resp.~$d$) can be given as a single integer,
in which case all the $y_i$ (resp.~$d_i$) above are taken to be equal to $y$
(resp.~$d$).
If $\fl=1$, all solutions are returned in the form of a two-component row
vector $[x,u]$, where $x$ is a small integer solution to the system of
congruences and $u$ is a matrix whose columns give a basis of the homogeneous
system (so that all solutions can be obtained by adding $x$ to any linear
combination of columns of $u$). If no solution exists, returns zero.
\syn{matsolvemod0}{m,d,y,\fl}. Also available
are $\teb{gaussmodulo}(m,d,y)$ ($\fl=0$)
and $\teb{gaussmodulo2}(m,d,y)$ ($\fl=1$).
\subsecidx{matsupplement}$(x)$: assuming that the columns of the matrix $x$
are linearly independent (if they are not, an error message is issued), finds
a square invertible matrix whose first columns are the columns of $x$,
i.e.~supplement the columns of $x$ to a basis of the whole space.
\syn{suppl}{x}.
\subsecidx{mattranspose}$(x)$ or $x\til$: transpose of $x$.
This has an effect only on vectors and matrices.
\syn{gtrans}{x}.
\subsecidx{qfgaussred}$(q)$: \idx{decomposition into squares} of the
quadratic form represented by the symmetric matrix $q$. The result is a
matrix whose diagonal entries are the coefficients of the squares, and the
non-diagonal entries represent the bilinear forms. More precisely, if
$(a_{ij})$ denotes the output, one has
$$ q(x) = \sum_i a_{ii} (x_i + \sum_{j>i} a_{ij} x_j)^2. $$
\syn{sqred}{x}.
\subsecidx{qfjacobi}$(x)$: $x$ being a real symmetric matrix, this gives a
vector having two components: the first one is the vector of eigenvalues of
$x$, the second is the corresponding orthogonal matrix of eigenvectors of
$x$. The method used is Jacobi's method for symmetric matrices.
\syn{jacobi}{x}.
\subsecidx{qflll}$(x,\{\fl=0\})$: \idx{LLL} algorithm applied to the
\var{columns} of the (not necessarily square) matrix $x$. The columns of $x$
must however be linearly independent, unless specified otherwise below. The
result is a transformation matrix $T$ such that $x\cdot T$ is an LLL-reduced
basis of the lattice generated by the column vectors of $x$.
If $\fl=0$ (default), the computations are done with real numbers (i.e.~not
with rational numbers) hence are fast but as presently programmed (version
\vers) are numerically unstable.
If $\fl=1$, it is assumed that the corresponding Gram matrix is integral.
The computation is done entirely with integers and the algorithm is both
accurate and quite fast. In this case, $x$ need not be of maximal rank, but
if it is not, $T$ will not be square.
If $\fl=2$, similar to case 1, except $x$ should be an integer matrix whose
columns are linearly independent. The lattice generated by the columns of
$x$ is first partially reduced before applying the LLL algorithm. [A basis
is said to be \var{partially reduced} if $|v_i \pm v_j| \geq |v_i|$ for any
two distinct basis vectors $v_i, \, v_j$.]
This can be significantly faster than $\fl=1$ when one row is huge compared
to the other rows.
If $\fl=3$, all computations are done in rational numbers. This does not
incur numerical instability, but is extremely slow. This function is
essentially superseded by case 1, so will soon disappear.
If $\fl=4$, $x$ is assumed to have integral entries, but need not be of
maximal rank. The result is a two-component vector of matrices~: the
columns of the first matrix represent a basis of the integer kernel of $x$
(not necessarily LLL-reduced) and the second matrix is the transformation
matrix $T$ such that $x\cdot T$ is an LLL-reduced $\Z$-basis of the image
of the matrix $x$.
If $\fl=5$, same as case $4$, but $x$ may have polynomial coefficients.
If $\fl=7$, uses an older version of case $0$ above.
If $\fl=8$, same as case $0$, where $x$ may have polynomial coefficients.
If $\fl=9$, variation on case $1$, using content.
\syn{qflll0}{x,\fl,\var{prec}}. Also available are
$\teb{lll}(x,\var{prec})$ ($\fl=0$), $\teb{lllint}(x)$ ($\fl=1$), and
$\teb{lllkerim}(x)$ ($\fl=4$).
\subsecidx{qflllgram}$(x,\{\fl=0\})$: same as \kbd{qflll} except that the
matrix $x$ which must now be a square symmetric real matrix is the Gram
matrix of the lattice vectors, and not the coordinates of the vectors
themselves. The result is again the transformation matrix $T$ which gives (as
columns) the coefficients with respect to the initial basis vectors. The
flags have more or less the same meaning, but some are missing. In brief:
$\fl=0$: numerically unstable in the present version \vers.
$\fl=1$: $x$ has integer entries, the computations are all done in integers.
$\fl=4$: $x$ has integer entries, gives the kernel and reduced image.
$\fl=5$: same as $4$ for generic $x$.
$\fl=7$: an older version of case $0$.
\syn{qflllgram0}{x,\fl,\var{prec}}. Also available are
$\teb{lllgram}(x,\var{prec})$ ($\fl=0$), $\teb{lllgramint}(x)$ ($\fl=1$), and
$\teb{lllgramkerim}(x)$ ($\fl=4$).
\subsecidx{qfminim}$(x,b,m,\{\fl=0\})$: $x$ being a square and symmetric
matrix representing a positive definite quadratic form, this function
deals with the minimal vectors of $x$, depending on $\fl$.
If $\fl=0$ (default), seeks vectors of square norm less than or equal to $b$
(for the norm defined by $x$), and at most $2m$ of these vectors. The result
is a three-component vector, the first component being the number of vectors,
the second being the maximum norm found, and the last vector is a matrix
whose columns are the vectors found, only one being given for each
pair $\pm v$ (at most $m$ such pairs).
If $\fl=1$, ignores $m$ and returns the first vector whose norm is less than
$b$.
In both these cases, $x$ {\it is assumed to have integral entries}, and the
function searches for the minimal non-zero vectors whenever $b=0$.
If $\fl=2$, $x$ can have non-integral real entries, but $b=0$ is now
meaningless (uses Fincke-Pohst algorithm).
\syn{qfminim0}{x,b,m,\fl,\var{prec}}, also available are \funs{minim}{x,b,m}
($\fl=0$), \funs{minim2}{x,b,m} ($\fl=1$), and finally
\funs{fincke_pohst}{x,b,m,\var{prec}} ($\fl=2$).
\subsecidx{qfperfection}$(x)$: $x$ being a square and symmetric matrix with
integer entries representing a positive definite quadratic form, outputs the
perfection rank of the form. That is, gives the rank of the family of the $s$
symmetric matrices $v_iv_i^t$, where $s$ is half the number of minimal
vectors and the $v_i$ ($1\le i\le s$) are the minimal vectors.
As a side note to old-timers, this used to fail bluntly when $x$ had more
than $5000$ minimal vectors. Beware that the computations can now be very
lengthy when $x$ has many minimal vectors.
\syn{perf}{x}.
\subsecidx{qfsign}$(x)$: signature of the quadratic form represented by the
symmetric matrix $x$. The result is a two-component vector.
\syn{signat}{x}.
\subsecidx{setintersect}$(x,y)$: intersection of the two sets $x$ and $y$.
\syn{setintersect}{x,y}.
\subsecidx{setisset}$(x)$: returns true (1) if $x$ is a set, false (0) if
not. In PARI, a set is simply a row vector whose entries are strictly
increasing. To convert any vector (and other objects) into a set, use the
function \kbd{Set}.
\syn{setisset}{x}, and this returns a \kbd{long}.
\subsecidx{setminus}$(x,y)$: difference of the two sets $x$ and $y$,
i.e.~set of elements of $x$ which do not belong to $y$.
\syn{setminus}{x,y}.
\subsecidx{setsearch}$(x,y,\{\fl=0\})$: searches if $y$ belongs to the set
$x$. If it does and $\fl$ is zero or omitted, returns the index $j$ such that
$x[j]=y$, otherwise returns 0. If $\fl$ is non-zero returns the index $j$
where $y$ should be inserted, and $0$ if it already belongs to $x$ (this is
meant to be used in conjunction with \kbd{listinsert}).
This function works also if $x$ is a \var{sorted} list (see \kbd{listsort}).
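For example, on a sorted vector of integers:
\bprog
? setsearch([1,3,5], 3)
%1 = 2
? setsearch([1,3,5], 4)
%2 = 0
? setsearch([1,3,5], 4, 1)
%3 = 3
@eprog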
\syn{setsearch}{x,y,\fl} which returns a \kbd{long}
integer.
\subsecidx{setunion}$(x,y)$: union of the two sets $x$ and $y$.
\syn{setunion}{x,y}.
\subsecidx{trace}$(x)$: this applies to quite general $x$. If $x$ is not a
matrix, it is equal to the sum of $x$ and its conjugate, except for polmods
where it is the trace as an algebraic number.
For $x$ a square matrix, it is the ordinary trace. If $x$ is a
non-square matrix (but not a vector), an error occurs.
\syn{gtrace}{x}.
\subsecidx{vecextract}$(x,y,\{z\})$: extraction of components of the
vector or matrix $x$ according to $y$. In case $x$ is a matrix, its
components are as usual the \var{columns} of $x$. The parameter $y$ is a
component specifier, which is either an integer, a string describing a
range, or a vector.
If $y$ is an integer, it is considered as a mask: the binary bits of $y$ are
read from right to left, but correspond to taking the components from left to
right. For example, if $y=13=(1101)_2$ then the components 1,3 and 4 are
extracted.
If $y$ is a vector, which must have integer entries, these entries correspond
to the component numbers to be extracted, in the order specified.
If $y$ is a string, it can be
$\bullet$ a single (non-zero) index giving a component number (a negative
index means we start counting from the end).
$\bullet$ a range of the form \kbd{"$a$..$b$"}, where $a$ and $b$ are
indexes as above. Any of $a$ and $b$ can be omitted; in this case, we take
as default values $a = 1$ and $b = -1$, i.e.~the first and last components
respectively. We then extract all components in the interval $[a,b]$, in
reverse order if $b < a$.
In addition, if the first character in the string is \kbd{\pow}, the
complement of the given set of indices is taken.
If $z$ is not omitted, $x$ must be a matrix. $y$ is then the \var{line}
specifier, and $z$ the \var{column} specifier, where the component specifier
is as explained above.
\bprog
? v = [a, b, c, d, e];
? vecextract(v, 5) \\@com mask
%1 = [a, c]
? vecextract(v, [4, 2, 1]) \\@com component list
%2 = [d, b, a]
? vecextract(v, "2..4") \\@com interval
%3 = [b, c, d]
? vecextract(v, "-1..-3") \\@com interval + reverse order
%4 = [e, d, c]
? vecextract([1,2,3], "^2") \\@com complement
%5 = [1, 3]
? vecextract(matid(3), "2..", "..")
%6 =
[0 1 0]
[0 0 1]
@eprog
\syn{extract}{x,y} or $\teb{matextract}(x,y,z)$.
\subsecidx{vecsort}$(x,\{k\},\{\fl=0\})$: sorts the vector $x$ in ascending
order, using the heapsort method. $x$ must be a vector, and its components
integers, reals, or fractions.
If $k$ is present and is an integer, sorts according to the value of the
$k$-th subcomponents of the components of~$x$. $k$ can also be a vector,
in which case the
sorting is done lexicographically according to the components listed in the
vector $k$. For example, if $k=[2,1,3]$, sorting will be done with respect
to the second component, and when these are equal, with respect to the
first, and when these are equal, with respect to the third.
\noindent The binary digits of \fl\ mean:
$\bullet$ 1: indirect sorting of the vector $x$, i.e.~if $x$ is an
$n$-component vector, returns a permutation of $[1,2,\dots,n]$ which
applied to the components of $x$ sorts $x$ in increasing order.
For example, \kbd{vecextract(x, vecsort(x,,1))} is equivalent to
\kbd{vecsort(x)}.
$\bullet$ 2: sorts $x$ by ascending lexicographic order (as per the
\kbd{lex} comparison function).
$\bullet$ 4: use decreasing instead of ascending order.
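For instance (an illustrative session; the results follow from the
definitions above):
\bprog
? v = [[2,7], [1,9], [2,3]];
? vecsort(v, 2)          \\@com sort by second subcomponent
%1 = [[2, 3], [2, 7], [1, 9]]
? vecsort(v, [1,2])      \\@com lexicographic: first, then second
%2 = [[1, 9], [2, 3], [2, 7]]
? vecsort([3,1,2],, 1)   \\@com indirect sort: a permutation
%3 = [2, 3, 1]
? vecsort([3,1,2],, 4)   \\@com decreasing order
%4 = [3, 2, 1]
@eprog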
\syn{vecsort0}{x,k,flag}. To omit $k$, use \kbd{NULL} instead. You can also
use the simpler functions
$\teb{sort}(x)$ (= $\kbd{vecsort0}(x,\text{NULL},0)$).
$\teb{indexsort}(x)$ (= $\kbd{vecsort0}(x,\text{NULL},1)$).
$\teb{lexsort}(x)$ (= $\kbd{vecsort0}(x,\text{NULL},2)$).
Also available are \teb{sindexsort} and \teb{sindexlexsort} which return a
vector of C-long integers (private type \typ{VECSMALL}) $v$, where
$v[1]\dots v[n]$ contain the indices. Note that the resulting $v$ is
\var{not} a generic PARI object, but is in general easier to use in C
programs!
\subsecidx{vector}$(n,\{X\},\{\var{expr}=0\})$: creates a row vector (type
\typ{VEC}) with $n$ components, obtained by evaluating the expression
\var{expr} at the integer points $X = 1, 2, \dots, n$. If one of the
last two arguments is omitted, the vector is filled with zeroes.
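For example:
\bprog
? vector(5, X, X^2)
%1 = [1, 4, 9, 16, 25]
? vector(3)
%2 = [0, 0, 0]
@eprog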
\synt{vecteur}{GEN nmax, entree *ep, char *expr}.
\subsecidx{vectorv}$(n,X,\var{expr})$: as \teb{vector}, but returns a
column vector (type \typ{COL}).
\synt{vvecteur}{GEN nmax, entree *ep, char *expr}.
\section{Sums, products, integrals and similar functions}
\label{se:sums}
Although the GP calculator is programmable, it is useful to have a number
of preprogrammed loops, including sums, products, and a certain number of
recursions. In addition, a number of functions from numerical analysis,
such as numerical integration and summation of series, are described here.
One of the parameters in these loops must be the control variable, hence a
simple variable name. The last parameter can be any legal PARI expression,
including of course expressions using loops. Since it is much easier to
program directly the loops in library mode, these functions are mainly
useful for GP programming. The use of these functions in library mode is a
little tricky and its explanation will be mostly omitted, although the
reader can try and figure it out by himself by checking the example given
for the \tet{sum} function. In this section we only give the library
syntax, with no semantic explanation.
The letter $X$ will always denote any simple variable name, and represents
the formal parameter used in the function.
\misctitle{(numerical) integration}:\sidx{numerical integration} A number
of Romberg-like integration methods are implemented (see \kbd{intnum} as
opposed to \kbd{intformal} which we already described). The user should not
require too much accuracy: 18 or 28 decimal digits is OK, but not much more.
In addition, analytical cleanup of the integral must have been done: there
must be no singularities in the interval or at the boundaries. In practice
this can be accomplished with a simple change of variable. Furthermore, for
improper integrals, where one or both of the limits of integration are plus
or minus infinity, the function must decrease sufficiently rapidly at
infinity. This can often be accomplished through integration by parts.
Finally, the function to be integrated should not be very small
(compared to the current precision) on the entire interval. This can
of course be accomplished by just multiplying by an appropriate
constant.
Note that \idx{infinity} can be represented with essentially no loss of
accuracy by 1e4000. However beware of real underflow when dealing with
rapidly decreasing functions. For example, if one wants to compute
$\int_0^\infty e^{-x^2}\,dx$ to 28 decimal digits, then one should set
infinity equal to 10 for example, and certainly not to 1e4000.
The integrand may have values belonging to a vector space over the real
numbers; in particular, it can be complex-valued or vector-valued.
See also the discrete summation methods below (sharing the prefix \kbd{sum}).
\subsecidx{intnum}$(X=a,b,\var{expr},\{\fl=0\})$: numerical integration of
\var{expr} (smooth in $]a,b[$), with respect to $X$.
Set $\fl=0$ (or omit it altogether) when $a$ and $b$ are not too large, the
function is smooth, and can be evaluated exactly everywhere on the interval
$[a,b]$.
If $\fl=1$, uses a general driver routine for doing numerical integration,
making no particular assumption (slow).
$\fl=2$ is tailored for use when $a$ or $b$ is infinite. One
\var{must} have $ab>0$, and in fact if for example $b=+\infty$, then it is
preferable to have $a$ as large as possible, at least $a\ge1$.
If $\fl=3$, the function is allowed to be undefined (but continuous) at $a$
or $b$, for example the function $\sin(x)/x$ at $x=0$.
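The following calls illustrate the various flags (expected values are
indicated in comments rather than as actual session output):
\bprog
? intnum(x = 0, Pi, sin(x))           \\@com expect about 2
? intnum(x = 1, 1e4000, 1/x^2, 2)     \\@com improper integral, expect about 1
? intnum(x = 0, 1, sin(x)/x, 3)       \\@com integrand undefined at $x=0$
@eprog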
\synt{intnum0}{entree$\,$*e,GEN a,GEN b,char$\,$*expr,long \fl,long prec}.
\subsecidx{prod}$(X=a,b,\var{expr},\{x=1\})$: product of expression \var{expr},
initialized at $x$, the formal parameter $X$ going from $a$ to $b$. As for
\kbd{sum}, the main purpose of the initialization parameter $x$ is to force
the type of the operations being performed. For example if it is set equal to
the integer 1, operations will start being done exactly. If it is set equal
to the real $1.$, they will be done using real numbers having the default
precision. If it is set equal to the power series $1+O(X^k)$ for a certain
$k$, they will be done using power series of precision at most $k$. These
are the three most common initializations.
\noindent As an extreme example, compare
\bprog
? prod(i=1, 100, 1 - X^i); \\@com this has degree $5050$ !!
time = 3,335 ms.
? prod(i=1, 100, 1 - X^i, 1 + O(X^101))
time = 43 ms.
%2 = 1 - X - X^2 + X^5 + X^7 - X^12 - X^15 + X^22 + X^26 - X^35 - X^40 + \
X^51 + X^57 - X^70 - X^77 + X^92 + X^100 + O(X^101)
@eprog
\synt{produit}{entree *ep, GEN a, GEN b, char *expr, GEN x}.
\subsecidx{prodeuler}$(X=a,b,\var{expr})$: product of expression \var{expr},
initialized at 1. (i.e.~to a \var{real} number equal to 1 to the current
\kbd{realprecision}), the formal parameter $X$ ranging over the prime numbers
between $a$ and $b$.\sidx{Euler product}
\synt{prodeuler}{entree *ep, GEN a, GEN b, char *expr, long prec}.
\subsecidx{prodinf}$(X=a,\var{expr},\{\fl=0\})$: \idx{infinite product} of
expression \var{expr}, the formal parameter $X$ starting at $a$. The evaluation
stops when the relative error of the expression minus 1 is less than the
default precision. The expressions must always evaluate to an element of
$\C$.
If $\fl=1$, do the product of the ($1+\var{expr}$) instead.
\synt{prodinf}{entree *ep, GEN a, char *expr, long prec} ($\fl=0$), or
\teb{prodinf1} with the same arguments ($\fl=1$).
\subsecidx{solve}$(X=a,b,\var{expr})$: find a real root of expression
\var{expr} between $a$ and $b$, under the condition
$\var{expr}(X=a) * \var{expr}(X=b) \le 0$.
This routine uses Brent's method and can fail miserably if \var{expr} is
not defined on the whole of $[a,b]$ (try \kbd{solve(x=1, 2, tan(x))}).
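For instance, $\sin$ has a simple zero in $[2,4]$ and
$\sin(2)\sin(4) < 0$, so at the default precision one should obtain:
\bprog
? solve(x = 2, 4, sin(x))
%1 = 3.141592653589793238462643383
@eprog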
\synt{zbrent}{entree *ep, GEN a, GEN b, char *expr, long prec}.
\subsecidx{sum}$(X=a,b,\var{expr},\{x=0\})$: sum of expression \var{expr},
initialized at $x$, the formal parameter going from $a$ to $b$. As for
\kbd{prod}, the initialization parameter $x$ may be given to force the type
of the operations being performed.
\noindent As an extreme example, compare
\bprog
? sum(i=1, 5000, 1/i); \\@com rational number: denominator has $2166$ digits.
time = 1,241 ms.
? sum(i=1, 5000, 1/i, 0.)
time = 158 ms.
%2 = 9.094508852984436967261245533
@eprog
\synt{somme}{entree *ep, GEN a, GEN b, char *expr, GEN x}. This is to be
used as follows: \kbd{ep} represents the dummy variable used in the
expression \kbd{expr}
\bprog
/* compute a^2 + @dots + b^2; illustrative wrapper around somme() */
GEN
sum_of_squares(GEN a, GEN b)
{
  /* define the dummy variable "i" */
  entree *ep = is_entry("i");
  /* sum for a <= i <= b */
  return somme(ep, a, b, "i^2", gzero);
}
@eprog
\subsecidx{sumalt}$(X=a,\var{expr},\{\fl=0\})$: numerical summation of the
series \var{expr}, which should be an \idx{alternating series}, the formal
variable $X$ starting at $a$.
If $\fl=0$, use an algorithm of F.~Villegas as modified by D.~Zagier. This
is much better than \idx{Euler}-Van Wijngaarden's method which was used
formerly.
Beware that the stopping criterion is that the term gets small enough, hence
terms which are equal to 0 will create problems and should be removed.
If $\fl=1$, use a variant with slightly different polynomials. Sometimes
faster.
Divergent alternating series can sometimes be summed by this method, as well
as series which are not exactly alternating (see for example
\secref{se:user_defined}).
\misctitle{Important hint:} a significant speed gain can be obtained by
writing the $(-1)^X$ which may occur in the expression as
\kbd{(1. - X\%2*2)}.
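For instance, the alternating harmonic series may be summed in either of
the following ways (both should agree with $-\log 2$; no output shown):
\bprog
? sumalt(n = 1, (-1)^n/n)          \\@com expect -log(2)
? sumalt(n = 1, (1. - n%2*2)/n)    \\@com same series, usually faster
@eprog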
\synt{sumalt}{entree *ep, GEN a, char *expr, long \fl, long prec}.
\subsecidx{sumdiv}$(n,X,\var{expr})$: sum of expression \var{expr} over
the positive divisors of $n$.
Arithmetic functions like \teb{sigma} use the multiplicativity of the
underlying expression to speed up the computation. In the present version
\vers, there is no way to indicate that \var{expr} is multiplicative in
$n$, hence specialized functions should be preferred whenever possible.
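For example:
\bprog
? sumdiv(12, d, d)      \\@com same as sigma(12)
%1 = 28
? sumdiv(12, d, 1)      \\@com same as numdiv(12)
%2 = 6
@eprog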
\synt{divsum}{entree *ep, GEN num, char *expr}.
\subsecidx{suminf}$(X=a,\var{expr})$: \idx{infinite sum} of expression
\var{expr}, the formal parameter $X$ starting at $a$. The evaluation stops
when the relative error of the expression is less than the default precision.
The expressions must always evaluate to a complex number.
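For example, one expects at the default precision (28 decimal digits):
\bprog
? suminf(n = 0, 1./n!)
%1 = 2.718281828459045235360287471
@eprog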
\synt{suminf}{entree *ep, GEN a, char *expr, long prec}.
\subsecidx{sumpos}$(X=a,\var{expr},\{\fl=0\})$: numerical summation of the
series \var{expr}, which must be a series of terms having the same sign,
the formal
variable $X$ starting at $a$. The algorithm used is Van Wijngaarden's trick
for converting such a series into an alternating one, and is quite slow.
Beware that the stopping criterion is that the term gets small enough, hence
terms which are equal to 0 will create problems and should be removed.
If $\fl=1$, use slightly different polynomials. Sometimes faster.
\synt{sumpos}{entree *ep, GEN a, char *expr, long \fl, long prec}.
\section{Plotting functions}
Although plotting is not even a side purpose of PARI, a number of plotting
functions are provided. Moreover, a lot of people felt like suggesting
ideas or submitting huge patches for this section of the code. Among these,
special thanks go to Klaus-Peter Nischke who suggested the recursive plotting
and the forking/resizing stuff under X11, and Ilya Zakharevich who
undertook a complete rewrite of the graphic code, so that most of it is now
platform-independent and should be relatively easy to port or expand.
These graphic functions are either
$\bullet$ high-level plotting functions (all the functions starting with
\kbd{ploth}) in which the user has little to do but explain what type of plot
he wants, and whose syntax is similar to the one used in the preceding
section (with somewhat more complicated flags).
$\bullet$ low-level plotting functions, where every drawing primitive (point,
line, box, etc.) must be specified by the user. These low-level functions
(called \var{rectplot} functions, sharing the prefix \kbd{plot}) work as
follows. You have at your disposal 16 virtual windows which are filled
independently, and can then be physically ORed on a single window at
user-defined positions. These windows are numbered from 0 to 15, and must be
initialized before being used by the function \kbd{plotinit}, which specifies
the height and width of the virtual window (called a \var{rectwindow} in the
sequel). At all times, a virtual cursor (initialized at $[0,0]$) is
associated to the window, and its current value can be obtained using the
function \kbd{plotcursor}.
A number of primitive graphic objects (called \var{rect} objects) can then
be drawn in these windows, using a default color associated to that window
(which can be changed under X11, using the \kbd{plotcolor} function, black
otherwise) and only the part of the object which is inside the window will be
drawn, with the exception of polygons and strings which are drawn entirely
(but the virtual cursor can move outside of the window). The ones sharing the
prefix \kbd{plotr} draw relatively to the current position of the virtual
cursor, the others use absolute coordinates. Those having the prefix
\kbd{plotrecth} put in the rectwindow a large batch of rect objects
corresponding to the output of the related \kbd{ploth} function.
Finally, the actual physical drawing is done using the function
\kbd{plotdraw}. Note that the windows are preserved so that further drawings
using the same windows at different positions or different windows can be
done without extra work. If you want to erase a window (and free the
corresponding memory), use the function \kbd{plotkill}. It is not possible to
partially erase a window. Erase it completely, initialize it again and then
fill it with the graphic objects that you want to keep.
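As a sketch, a minimal low-level session could thus look as follows (the
coordinates are arbitrary, and a plotting device must of course be
available):
\bprog
? plotinit(0, 400, 400);    \\@com rectwindow 0, 400 by 400
? plotmove(0, 50, 50);      \\@com move the virtual cursor
? plotbox(0, 350, 350);     \\@com box with opposite corner (350,350)
? plotdraw([0, 10, 10]);    \\@com draw window 0 at position (10,10)
? plotkill(0)               \\@com free the window
@eprog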
In addition to initializing the window, you may want to have a scaled
window to avoid unnecessary conversions. For this, use the function
\kbd{plotscale} below. As long as this function is not called, the scaling is
simply the number of pixels, the origin being at the upper left and the
$y$-coordinates going downwards.
Note that in the present version \vers\ all these plotting functions
(both low and high level) have been written for the X11-window system
(hence also for GUIs based on X11 such as OpenWindows and Motif) only,
though very little code remains which is actually platform-dependent.
Suntools/SunView, Macintosh, and Atari/Gem ports were provided for
previous versions. These \var{may} be adapted in future releases.
Under X11/Suntools, the physical window (opened by \kbd{plotdraw} or any
of the \kbd{ploth*} functions) is completely separated from GP (technically,
a \kbd{fork} is done, and the non-graphical memory is immediately freed in
the child process), which means you can go on working in the current GP
session, without having to kill the window first. Under X11, this window can
be closed, enlarged or reduced using the standard window manager functions.
No zooming procedure is implemented though (yet).
$\bullet$ Finally, note that in the same way that \kbd{printtex} allows you
to have a \TeX\ output corresponding to printed results, the functions
starting with \kbd{ps} allow you to have \tet{PostScript} output of the
plots. This will not be absolutely identical with the screen output, but will
be sufficiently close. Note that you can use PostScript output even if you do
not have the plotting routines enabled. The PostScript output is written in a
file whose name is derived from the \tet{psfile} default (\kbd{./pari.ps} if
you did not tamper with it). Each time a new PostScript output is asked for,
it is appended to that file. Hence if you do not want unnecessary drawings
from preceding sessions to appear, you must first remove this file or
change the value of \kbd{psfile}. On the other hand, in
this manner as many plots as desired can be kept in a single file. \smallskip
{\it None of the graphic functions are available within the PARI library;
you must be under GP to use them}. The reason is that you really should
not use PARI for heavy-duty graphical work: there are much better
specialized alternatives around. This whole set of routines was only meant as a
convenient, but simple-minded, visual aid. If you really insist on using
these in your program (we warned you), the source (\kbd{plot*.c}) should be
readable enough for you to achieve something.
\subsecidx{plot}$(X=a,b,\var{expr},\{\var{Ymin}\},\{\var{Ymax}\})$: crude
(ASCII) plot of the function represented by expression \var{expr} from
$a$ to $b$, with \var{Y} ranging from \var{Ymin} to \var{Ymax}. If
\var{Ymin} (resp. \var{Ymax}) is not given, the minima (resp. the
maxima) of the computed values of the expression is used instead.
\subsecidx{plotbox}$(w,x2,y2)$: let $(x1,y1)$ be the current position of the
virtual cursor. Draw in the rectwindow $w$ the outline of the rectangle which
is such that the points $(x1,y1)$ and $(x2,y2)$ are opposite corners. Only
the part of the rectangle which is in $w$ is drawn. The virtual cursor does
\var{not} move.
\subsecidx{plotclip}$(w)$: `clips' the content of rectwindow $w$,
i.e.~removes all parts of the drawing that would not be visible on the
screen. Together with \tet{plotcopy} this function enables you to draw on
a scratchpad before committing the part you're interested in to the final
picture.
\subsecidx{plotcolor}$(w,c)$: set default color to $c$ in rectwindow $w$.
In the present version \vers, this is only implemented for the X11 window
system, and you only have the following palette to choose from:
1=black, 2=blue, 3=sienna, 4=red, 5=cornsilk, 6=grey, 7=gainsborough.
Note that it should be fairly easy for you to hardwire some more colors by
tweaking the files \kbd{rect.h} and \kbd{plotX.c}. User-defined
colormaps would be nice, and \var{may} be available in future versions.
\subsecidx{plotcopy}$(w1,w2,dx,dy)$: copy the contents of rectwindow
$w1$ to rectwindow $w2$, with offset $(dx,dy)$.
\subsecidx{plotcursor}$(w)$: give as a 2-component vector the current
(scaled) position of the virtual cursor corresponding to the rectwindow $w$.
\subsecidx{plotdraw}$(list)$: physically draw the rectwindows given in $list$
which must be a vector whose number of components is divisible by 3. If
$list=[w1,x1,y1,w2,x2,y2,\dots]$, the windows $w1$, $w2$, etc.~are
physically placed with their upper left corner at physical position
$(x1,y1)$, $(x2,y2)$,\dots\ respectively, and are then drawn together.
Overlapping regions will thus be drawn twice, and the windows are considered
transparent. Then display the whole drawing in a special window on your
screen.
\subsecidx{plotfile}$(s)$: set the output file for plotting output. Special
filename \kbd{-} redirects to the same place as PARI output.
\subsecidx{ploth}$(X=a,b,\var{expr},\{\fl=0\},\{n=0\})$: high precision
plot of the function $y=f(x)$ represented by the expression \var{expr}, $x$
going from $a$ to $b$. This opens a specific window (which is killed
whenever you click on it), and returns a four-component vector giving the
coordinates of the bounding box in the form
$[\var{xmin},\var{xmax},\var{ymin},\var{ymax}]$.
\misctitle{Important note}: Since this may involve a lot of function calls,
it is advised to keep the current precision to a minimum (e.g.~9) before
calling this function.
$n$ specifies the number of reference points on the graph (0 means use the
hardwired default values, that is: 1000 for a general plot, 1500 for a
parametric plot, and 15 for a recursive plot).
If no $\fl$ is given, \var{expr} is either a scalar expression $f(X)$, in which
case the plane curve $y=f(X)$ will be drawn, or a vector
$[f_1(X),\dots,f_k(X)]$, and then all the curves $y=f_i(X)$ will be drawn in
the same window.
\noindent The binary digits of $\fl$ mean:
$\bullet$ 1: \tev{parametric plot}. Here \var{expr} must be a vector with
an even number of components. Successive pairs are then understood as the
parametric coordinates of a plane curve. Each of these are then drawn.
For instance:
\kbd{ploth(X=0,2*Pi,[sin(X),cos(X)],1)} will draw a circle.
\kbd{ploth(X=0,2*Pi,[sin(X),cos(X)])} will draw two entwined sinusoidal
curves.
\kbd{ploth(X=0,2*Pi,[X,X,sin(X),cos(X)],1)} will draw a circle and the line
$y=x$.
$\bullet$ 2: \tev{recursive plot}. If this flag is set, only \var{one}
curve can be drawn at a time, i.e.~\var{expr} must be either a two-component
vector (for a single parametric curve, and the parametric flag \var{has} to
be set), or a scalar function. The idea is to choose pairs of successive
reference points, and if their middle point is not too far away from the
segment joining them, draw this as a local approximation to the curve.
Otherwise, add the middle point to the reference points. This is very fast,
and usually more precise than an ordinary plot. Compare the results of
$$\kbd{ploth(X=-1,1,sin(1/X),2)}\quad
\text{and}\quad\kbd{ploth(X=-1,1,sin(1/X))}$$
for instance. But beware that if you are extremely unlucky, or choose too few
reference points, you may draw some nice polygon bearing little resemblance
to the original curve. For instance you should \var{never} plot recursively
an odd function in a symmetric interval around 0. Try
\bprog
ploth(x = -20, 20, sin(x), 2)
@eprog
\noindent to see why. Hence, it's usually a good idea to try and plot the same
curve with slightly different parameters.
The other values toggle various display options:
$\bullet$ 4: do not rescale plot according to the computed extrema. This is
meant to be used when graphing multiple functions on a rectwindow (as a
\tet{plotrecth} call), in conjunction with \tet{plotscale}.
$\bullet$ 8: do not print the $x$-axis.
$\bullet$ 16: do not print the $y$-axis.
$\bullet$ 32: do not print frame.
$\bullet$ 64: only plot reference points, do not join them.
$\bullet$ 256: use splines to interpolate the points.
$\bullet$ 512: plot no $x$-ticks.
$\bullet$ 1024: plot no $y$-ticks.
$\bullet$ 2048: plot all ticks with the same length.
\subsecidx{plothraw}$(\var{listx},\var{listy},\{\fl=0\})$: given
\var{listx} and \var{listy} two vectors of equal length, plots (in high
precision) the points whose $(x,y)$-coordinates are given in \var{listx}
and \var{listy}. Automatic positioning and scaling is done, but with the
same scaling factor on $x$ and $y$. If $\fl$ is 1, join points, other non-0
flags toggle display options and should be combinations of bits $2^k$, $k
\geq 3$ as in \kbd{ploth}.
\subsecidx{plothsizes}$()$: return data corresponding to the output window
in the form of a 6-component vector: window width and height, sizes for ticks
in horizontal and vertical directions (this is intended for the \kbd{gnuplot}
interface and is currently not significant), width and height of characters.
\subsecidx{plotinit}$(w,x,y)$: initialize the rectwindow $w$ to width $x$ and
height $y$, and position the virtual cursor at $(0,0)$. This destroys any rect
objects you may have already drawn in $w$.
The plotting device imposes an upper bound for $x$ and $y$, for instance the
number of pixels for screen output. These bounds are available through the
\tet{plothsizes} function. The following sequence initializes in a portable
way (i.e.~independent of the output device) a window of maximal size,
accessed through coordinates in the $[0,1000] \times [0,1000]$ range:
\bprog
s = plothsizes();
plotinit(0, s[1]-1, s[2]-1);
plotscale(0, 0,1000, 0,1000);
@eprog
\subsecidx{plotkill}$(w)$: erase rectwindow $w$ and free the corresponding
memory. Note that if you want to use the rectwindow $w$ again, you have to
use \kbd{plotinit} first to specify the new size. So it's better in this
case to use \kbd{plotinit} directly, as this throws away any previous work
in the given rectwindow.
\subsecidx{plotlines}$(w,X,Y,\{\fl=0\})$: draw on the rectwindow $w$
the polygon such that the $(x,y)$-coordinates of the vertices are in the
vectors of equal length $X$ and $Y$. For simplicity, the whole
polygon is drawn, not only the part of the polygon which is inside the
rectwindow. If $\fl$ is non-zero, close the polygon. In any case, the
virtual cursor does not move.
$X$ and $Y$ are allowed to be scalars (in this case, both have to be).
In that case, a single segment will be drawn, between the current position
of the virtual cursor and the point $(X,Y)$, and only the part thereof
which actually lies within the boundary of $w$. Then \var{move} the virtual cursor
to $(X,Y)$, even if it is outside the window. If you want to draw a
line from $(x1,y1)$ to $(x2,y2)$ where $(x1,y1)$ is not necessarily the
position of the virtual cursor, use \kbd{plotmove(w,x1,y1)} before using this
function.
\subsecidx{plotlinetype}$(w,\var{type})$: change the type of lines
subsequently plotted in rectwindow $w$. \var{type} $-2$ corresponds to
frames, $-1$ to axes, larger values may correspond to something else. $w =
-1$ changes high-level plotting. This is only taken into account by the
\kbd{gnuplot} interface.
\subsecidx{plotmove}$(w,x,y)$: move the virtual cursor of the rectwindow $w$
to position $(x,y)$.
\subsecidx{plotpoints}$(w,X,Y)$: draw on the rectwindow $w$ the
points whose $(x,y)$-coordinates are in the vectors of equal length $X$ and
$Y$ and which are inside $w$. The virtual cursor does \var{not} move. This
is basically the same function as \kbd{plothraw}, but either with no scaling
factor or with a scale chosen using the function \kbd{plotscale}.
As was the case with the \kbd{plotlines} function, $X$ and $Y$ are allowed
to be (simultaneously) scalar. In this case, draw the single point $(X,Y)$
on the rectwindow $w$ (if it is actually inside $w$), and in any case
\var{move} the virtual cursor to position $(X,Y)$.
\subsecidx{plotpointsize}$(w,size)$: changes the ``size'' of following
points in rectwindow $w$. If $w = -1$, change it in all rectwindows.
This only works in the \kbd{gnuplot} interface.
\subsecidx{plotpointtype}$(w,\var{type})$: change the type of
points subsequently plotted in rectwindow $w$. $\var{type} = -1$
corresponds to a dot, larger values may correspond to something else. $w = -1$
changes high-level plotting. This is only taken into account by the
\kbd{gnuplot} interface.
\subsecidx{plotrbox}$(w,dx,dy)$: draw in the rectwindow $w$ the outline of
the rectangle which is such that the points $(x1,y1)$ and $(x1+dx,y1+dy)$ are
opposite corners, where $(x1,y1)$ is the current position of the cursor.
Only the part of the rectangle which is in $w$ is drawn. The virtual cursor
does \var{not} move.
\subsecidx{plotrecth}$(w,X=a,b,\var{expr},\{\fl=0\},\{n=0\})$: writes to
rectwindow $w$ the curve output of \kbd{ploth}$(w,X=a,b,\var{expr},\fl,n)$.
\subsecidx{plotrecthraw}$(w,\var{data},\{\fl=0\})$: plot graph(s) for
\var{data} in rectwindow $w$. $\fl$ has the same significance here as in
\kbd{ploth}, though recursive plot is no more significant.
\var{data} is a vector of vectors, each corresponding to a list of
coordinates. If parametric plot is set, there must be an even number of
vectors, each successive pair corresponding to a curve. Otherwise, the
first one contains the $x$-coordinates, and the other ones contain the
$y$-coordinates of the curves to plot.
\subsecidx{plotrline}$(w,dx,dy)$: draw in the rectwindow $w$ the part of the
segment $(x1,y1)-(x1+dx,y1+dy)$ which is inside $w$, where $(x1,y1)$ is the
current position of the virtual cursor, and move the virtual cursor to
$(x1+dx,y1+dy)$ (even if it is outside the window).
\subsecidx{plotrmove}$(w,dx,dy)$: move the virtual cursor of the rectwindow
$w$ to position $(x1+dx,y1+dy)$, where $(x1,y1)$ is the initial position of
the cursor (i.e.~to position $(dx,dy)$ relative to the initial cursor).
\subsecidx{plotrpoint}$(w,dx,dy)$: draw the point $(x1+dx,y1+dy)$ on the
rectwindow $w$ (if it is inside $w$), where $(x1,y1)$ is the current position
of the cursor, and in any case move the virtual cursor to position
$(x1+dx,y1+dy)$.
\subsecidx{plotscale}$(w,x1,x2,y1,y2)$: scale the local coordinates of the
rectwindow $w$ so that $x$ goes from $x1$ to $x2$ and $y$ goes from $y1$ to
$y2$ ($x2<x1$ and $y2<y1$ being allowed).
\subsubsecidx{for}$(X=a,b,\var{seq})$: evaluates \var{seq}, where the
formal variable $X$ goes from $a$ to $b$. Nothing is done if $a>b$.
$a$ and $b$ must be in $\R$.
\subsubsecidx{fordiv}$(n,X,\var{seq})$: the formal variable $X$ ranging
through the positive divisors of $n$, the sequence \var{seq} is evaluated.
$n$ must be of type integer.
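For example, the divisors of 6 are run through in increasing order:
\bprog
? fordiv(6, d, print(d))
1
2
3
6
@eprog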
\subsubsecidx{forprime}$(X=a,b,\var{seq})$: the formal variable $X$
ranging over the prime numbers between $a$ to $b$ (including $a$ and $b$
if they are prime), the \var{seq} is evaluated. More precisely, the value
of $X$ is incremented to the smallest prime strictly larger than $X$ at the
end of each iteration. Nothing is done if $a>b$. Note that $a$ and $b$ must
be in $\R$.
\bprog
? { forprime(p = 2, 12,
      print(p);
      if (p == 3, p = 6);
    )
  }
2
3
7
11
@eprog
\subsubsecidx{forstep}$(X=a,b,s,\var{seq})$: the formal variable $X$
going from $a$ to $b$, in increments of $s$, the \var{seq} is evaluated.
Nothing is done if $s>0$ and $a>b$ or if $s<0$ and $a<b$.
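For example:
\bprog
? forstep(x = 1, 10, 3, print(x))
1
4
7
10
@eprog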
If you simply want to restore a variable to its ``undefined'' value
(monomial of degree one), use the \idx{quote} operator: \kbd{a = 'a}.
Predefined symbols (\kbd{x} and GP function names) cannot be killed.
\subsubsecidx{print}$(\{\var{str}\}*)$: outputs its (string) arguments in raw
format, ending with a newline.
\subsubsecidx{print1}$(\{\var{str}\}*)$: outputs its (string) arguments in raw
format, without ending with a newline (note that you can still embed newlines
within your strings, using the \b{n} notation~!).
\subsubsecidx{printp}$(\{\var{str}\}*)$: outputs its (string) arguments in
prettyprint (beautified) format, ending with a newline.
\subsubsecidx{printp1}$(\{\var{str}\}*)$: outputs its (string) arguments in
prettyprint (beautified) format, without ending with a newline.
\subsubsecidx{printtex}$(\{\var{str}\}*)$: outputs its (string) arguments in
\TeX\ format. This output can then be used in a \TeX\ manuscript.
The printing is done on the standard output. If you want to print it to a
file you should use \kbd{writetex} (see there).
Another possibility is to enable the \tet{log} default
(see~\secref{se:defaults}).
You could for instance do:\sidx{logfile}
%
\bprog
default(logfile, "new.tex");
default(log, 1);
printtex(result);
@eprog
\noindent
(You can use the automatic string expansion/concatenation process to have
dynamic file names if you wish).
\subsubsecidx{quit}$()$: exits GP.\label{se:quit}
\subsubsecidx{read}$(\{\var{str}\})$: reads in the file whose name results
from the expansion of the string \var{str}. If \var{str} is omitted,
re-reads the last file that was fed into GP. The return value is the result of
the last expression evaluated.\label{se:read}
\subsubsecidx{reorder}$(\{x=[\,]\})$: $x$ must be a vector. If $x$ is the
empty vector, this gives the vector whose components are the existing
variables in increasing order (i.e.~in decreasing importance). Killed
variables (see \kbd{kill}) will be shown as \kbd{0}. If $x$ is
non-empty, it must be a permutation of variable names, and this permutation
gives a new order of importance of the variables, {\it for output only}. For
example, if the existing order is \kbd{[x,y,z]}, then after
\kbd{reorder([z,x])} the order of importance of the variables, with respect
to output, will be \kbd{[z,y,x]}. The internal representation is unaffected.
\label{se:reorder}
\subsubsecidx{setrand}$(n)$: reseeds the random number generator to the value
$n$. The initial seed is $n=1$.
\syn{setrand}{n}, where $n$ is a \kbd{long}. Returns $n$.
\subsubsecidxunix{system}$(\var{str})$: \var{str} is a string representing
a system command. This command is executed, its output written to the
standard output (this won't get into your logfile), and control returns
to the PARI system. This simply calls the C \kbd{system} command.
\subsubsecidx{trap}$(\{e\}, \{\var{rec}\}, \{\var{seq}\})$: tries to
execute \var{seq}, trapping error $e$, that is effectively preventing it
from aborting computations in the usual way; the recovery sequence
\var{rec} is executed if the error occurs and the evaluation of \var{rec}
becomes the result of the command. If $e$ is omitted, all exceptions are
trapped. Note in particular that hitting \kbd{\pow C} (Control-C) raises an
exception.
\bprog
? \\@com trap division by 0
? inv(x) = trap (gdiver2, INFINITY, 1/x)
? inv(2)
%1 = 1/2
? inv(0)
%2 = INFINITY
@eprog
If \var{seq} is omitted, defines \var{rec} as a default action when
encountering exception $e$. The error message is printed, as well as the
result of the evaluation of \var{rec}, and control is given back to the
GP prompt. In particular, the current computation is then lost.
The following error handler prints the list of all user variables, then
stores in a file their name and their values:
\bprog
? { trap( ,
print(reorder);
write("crash", reorder);
write("crash", eval(reorder))) }
@eprog
If no recovery code is given (\var{rec} is omitted) a so-called
{\it\idx{break loop}} will be started. During a break loop, all commands are
read and evaluated as during the main GP loop (except that no history of
results is kept).
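For instance, assuming division by 0 is not otherwise trapped, the following
starts a break loop, since both $e$ and \var{rec} are omitted:
\bprog
? trap(, , 1/0)
  \\@com the error message is printed, then commands are read as usual
@eprog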
To get out of the break loop, you can use \tet{next}, \tet{break} or
\tet{return}; reading in a file by \b{r} will also terminate the loop once
the file has been read (\kbd{read} will remain in the break loop). If the
error is not fatal (\kbd{\pow C} is the only non-fatal error), \kbd{next}
will continue the computation as if nothing had happened (except of course,
you may have changed GP state during the break loop); otherwise control
will come back to the GP prompt. After a user interrupt (\kbd{\pow C}),
entering an empty input line (i.e.~hitting the return key) has the same
effect as \kbd{next}.
Break loops are useful as a debugging tool to inspect the values of GP
variables to understand why a problem occurred, or to change GP behaviour
(increase debugging level, start storing results in a logfile, modify
parameters\dots) in the middle of a long computation (hit \kbd{\pow C}, type
in your modifications, then type \kbd{next}).
If \var{rec} is the empty string \kbd{""} the last default handler is popped
out, and replaced by the previous one for that error.
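A sketch of installing, then removing, such a default handler:
\bprog
? trap(gdiver2, print("oops"))  \\@com install a handler for division by 0
? trap(gdiver2, "")             \\@com pop it: the previous behaviour is back
@eprog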
\misctitle{Note:} The interface is currently not adequate for trapping
individual exceptions. In the current version \vers, the following keywords
are recognized, but the name list will be expanded and changed in the
future (all library mode errors can be trapped: it's a matter of defining
the keywords to GP, and there are currently far too many useless ones):
\kbd{accurer}: accuracy problem

\kbd{archer}: not available on this architecture or operating system

\kbd{errpile}: the PARI stack overflows

\kbd{gdiver2}: division by 0

\kbd{typeer}: wrong type
\subsubsecidx{type}$(x,\{t\})$: this is useful only under GP. If $t$ is
not present, returns the internal type number of the PARI object $x$.
Otherwise, makes a copy of $x$ and sets its type equal to type $t$, which
can be either a number or, preferably since internal codes may eventually
change, a symbolic name such as \typ{FRACN} (you can skip the \typ{}
part here, so that \kbd{FRACN} by itself would also be all right). Check out
existing type names with the metacommand \b{t}.\label{se:gptype}
GP won't let you create meaningless objects in this way where the internal
structure doesn't match the type. This function can be useful to create
reducible rationals (type \typ{FRACN}) or rational functions (type
\typ{RFRACN}). In fact it's the only way to do so in GP. In this case, the
created object, as well as the objects created from it, will not be reduced
automatically, making some operations a bit faster.
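A short sketch (the variable name is ours):
\bprog
? x = type(1/2, "FRACN");  \\@com same value, but flagged as non-reduced
? x + x                    \\@com the sum is not reduced automatically
@eprog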
There is no equivalent library syntax, since the internal functions \kbd{typ}
and \kbd{settyp} are available. Note that \kbd{settyp} does {\it not}
create a copy of \kbd{x}, contrary to most PARI functions. It also doesn't
check for consistency. \kbd{settyp} just changes the type in place and
returns nothing. \kbd{typ} returns a C long integer. Note also the different
spellings of the internal functions (\kbd{set})\kbd{typ} and of the GP
function \kbd{type}, which is due to the fact that \kbd{type} is a reserved
identifier for some C compilers.
\subsubsecidx{whatnow}$(\var{key})$: if keyword \var{key} is the name
of a function that was present in GP version 1.39.15 or lower, outputs
the new function name and syntax, if it changed at all ($387$ out of $560$
did).\label{se:whatnow}
\subsubsecidx{write}$(\var{filename},\{\var{str}*\})$: writes (appends)
to the file named \var{filename} the remaining arguments, and appends a
newline (same output as \kbd{print}).\label{se:write}
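For instance (the file name is of course arbitrary):
\bprog
? write("results.txt", "n = ", 7)  \\@com appends the line "n = 7" to the file
@eprog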
\subsubsecidx{write1}$(\var{filename},\{\var{str}*\})$: writes (appends) to
the file named \var{filename} the remaining arguments without a trailing
newline (same output as \kbd{print1}).
\subsubsecidx{writetex}$(\var{filename},\{\var{str}*\})$: as \kbd{write},
in \TeX\ format.\label{se:writetex}
\vfill\eject