NEWS
dpareto() did not handle the case x == 0 correctly.
The package now depends on R >= 3.3.0 since it uses chkDots() in a few methods that do not use the content of their '...' argument.
ogive() lost its argument '...' as it was unused anyway.
severity.portfolio() calls unroll() directly instead of relying on the default method to be identical to unroll().
Deleted an unwanted debugging message ("local") printed by CTE() at every execution.
predict.cm() and summary.cm() now treat the '...' argument as advertised in the help file.
Fixed bad examples in a few probability law help files that returned unintended results such as Inf or NaN.
C-level function log1pexp(...) is now used in a few places in lieu of log1p(exp(...)).
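The motivation for the change can be sketched in base R: log1p(exp(x)) overflows for large x, while the piecewise form below stays finite. This is an illustration of the idea only, not the package's C code; the branch cutoffs follow the usual recommendations for this function.

```r
## Numerically stable computation of log(1 + exp(x)).
## Sketch of the idea behind the C-level log1pexp().
log1pexp <- function(x)
  ifelse(x <= 18, log1p(exp(x)),          # direct formula is fine here
         ifelse(x <= 33.3, x + exp(-x),   # exp(x) would dominate
                x))                       # exp(-x) underflows to 0

log1pexp(800)   # 800, whereas log1p(exp(800)) overflows to Inf
```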
Names of the internal utility macros defined in dpq.h changed from "R_<...>" to "ACT_<...>" to make it clearer that they are defined by the package (although they were copied from R sources).
In the computation of the CTE in the Normal Power case, numerical integration has been replaced by the explicit formula given in Castañer, A.; Claramunt, M.M.; Mármol, M. (2013). Tail value at risk. An analysis with the Normal-Power approximation. In Statistical and Soft Computing Approaches in Insurance Problems. Nova Science Publishers. ISBN 9781626185067.
Results of 'cm' for hierarchical models would get incorrectly sorted when there were 10 nodes or more at a given level. Thanks to Dylan Wienke dwienke2@gmail.com for the catch.
Functions 'head' and 'tail' explicitly imported from package utils in NAMESPACE as per a new requirement of R 3.3.x.
Memory allocation problem at the C level in hierarc(). Thanks to Prof. Ripley for identification of the problem and help solving it.
Abusive use of abs() at the C level in a few places.
panjer() result was wrong for the "logarithmic" type of frequency distribution. Thanks to mmclaramunt@ub.edu for the catch.
Fixed a deprecated use of real().
Complete rewrite of coverage(); the function it creates no longer relies on ifelse() and, consequently, is much faster. The rewrite was motivated by a change in the way [dp]gamma() handle their arguments in R 2.15.1.
summary.ogive() no longer relies on length 'n' to be in the environment of a function created by approxfun(). Fix required by R >= 2.16.0.
The function resulting from elev() for individual data is now faster for a large number of limits. (Thanks to Frank Zhan FrankZhan@donegalgroup.com for the catch and report.)
Resolved symbol clash at C level tickled by package GeneralizedHyperbolic on Solaris.
Wrong result given by levinvGauss() because the upper tail of the normal distribution was used in the calculation instead of the lower tail. Thanks to Dan Murphy chiefmurphy@gmail.com for the heads up.
discretize() would return wrong results when argument step was omitted in favor of by and the discretization method "unbiased" was used. (Thanks to Marie-Pier Côté mariepier.cote.11@ulaval.ca for the catch.)
CITATION file updated.
summary.cm() could skip records in the output, thinking they were duplicates.
New argument convolve in aggregateDist() to convolve the distribution obtained with the recursive method a number of times with itself. This is used for large portfolios where the expected number of claims is so large that the recursions cannot start. Dividing the frequency parameter by 2^n and convolving n times can solve the problem.
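The trick works because a compound Poisson(lambda) sum has the same distribution as the n-fold convolution of compound Poisson(lambda/2^n) sums. A base-R sketch with a minimal Panjer recursion (an illustration only, not the package's C implementation):

```r
## Minimal Panjer recursion for Poisson frequency (sketch only).
## fx is the severity pmf on 0:(length(fx) - 1); returns pmf on 0:m.
panjer_pois <- function(lambda, fx, m) {
  fs <- numeric(m + 1)
  fs[1] <- exp(-lambda * (1 - fx[1]))
  for (s in 1:m) {
    k <- 1:min(s, length(fx) - 1)
    fs[s + 1] <- (lambda / s) * sum(k * fx[k + 1] * fs[s - k + 1])
  }
  fs
}

lambda <- 4
fx <- c(0, 0.5, 0.3, 0.2)   # arbitrary severity pmf on 0:3
m  <- 40

direct <- panjer_pois(lambda, fx, m)
half   <- panjer_pois(lambda / 2, fx, m)
## one self-convolution after halving lambda recovers the same pmf
once   <- convolve(half, rev(half), type = "open")[1:(m + 1)]
all.equal(direct, once)     # TRUE
```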
New method of diff() for "aggregateDist" objects to return the probability mass function at the knots of the aggregate distribution. Valid (and defined) for the "recursive", "exact" and "simulation" methods only.
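The underlying idea can be sketched with a plain step cdf in base R (an illustration only; the package's method works on the function object returned by aggregateDist()): for a step cdf with knots x, the probability masses are the jumps of the cdf.

```r
## Recover a pmf from a step cdf: the masses are the cdf jumps.
x  <- 0:4
Fs <- stepfun(x, c(0, 0.1, 0.3, 0.6, 0.8, 1))  # right-continuous cdf
pmf <- diff(c(0, Fs(x)))
pmf        # jumps: 0.1 0.2 0.3 0.2 0.2
sum(pmf)   # 1
```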
Since the terminology Tail Value-at-Risk is often used instead of Conditional Tail Expectation, TVaR() is now an alias for CTE().
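Both names refer to the same quantity: the expected loss given that the loss exceeds the VaR at level p. A Monte Carlo sketch in base R (the lognormal distribution and the 95% level are arbitrary choices for illustration, not taken from the package):

```r
## TVaR/CTE at level p = E[X | X > VaR_p(X)], estimated by simulation.
set.seed(42)
x <- rlnorm(1e5)             # standard lognormal losses
p <- 0.95
var95 <- quantile(x, p)      # empirical VaR at level p
cte95 <- mean(x[x > var95])  # average of the losses beyond the VaR
c(VaR = unname(var95), CTE = cte95)
```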
Quantiles (and thus VaRs and CTEs) for "aggregateDist" objects were off by one knot of the distribution.
cm() returned the internal classification codes instead of the original ones for hierarchical models. (Thanks to Zachary Martin for the heads up.)
Functions m<foo>() and lev<foo>() now return Inf instead of NaN for infinite moments. (Thanks to David Humke for the idea.)
Non-ASCII characters in one R source file prevented compilation of the package in a C locale (at least on OS X).
For probability laws that have a strictly positive mode or a mode at zero depending on the value of one or more shape parameters, d<foo>(0, ...) did not correctly handle the case exactly at the boundary condition. (Thanks to Stephen L bulls22eye@gmail.com for the catch.)
levinvpareto() works for order > shape and defaults to order = 1, like all other lev<foo>() functions.
Functions d<foo>() handle the case x == 0 correctly.
Functions q<foo>() return NaN instead of an error when argument p is outside [0, 1] (as in R).
Functions r<foo>() for three-parameter distributions (e.g. Burr) no longer wrongly display the "NaNs produced" warning message.
The warning message "NaNs produced" was not (and could not be) translated.
Function levinvpareto() computes limited moments for order > shape using numerical integration.
Improved support for regression credibility models. There is now an option to make the computations with the intercept at the barycenter of time. This ensures that the credibility-adjusted regression line (or plane, or ...) lies between the individual and collective ones. In addition, contracts without data are now supported as in other credibility models.
Argument right for grouped.data() to allow intervals closed on the right (default) or on the left.
Method of quantile() for grouped data objects to compute the inverse of the ogive.
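Since the ogive interpolates the empirical cdf linearly between group boundaries, its inverse is also piecewise linear and both can be built with approxfun(). A base-R sketch with made-up group boundaries and frequencies (not the package's code):

```r
## Ogive of grouped data and its inverse (quantile function).
cj <- c(0, 25, 50, 100, 150)      # group boundaries (hypothetical)
nj <- c(30, 31, 57, 42)           # frequency in each group
Fj <- c(0, cumsum(nj) / sum(nj))  # empirical cdf at the boundaries

Fn <- approxfun(cj, Fj, yleft = 0, yright = 1)  # the ogive
Qn <- approxfun(Fj, cj)           # its inverse: swap x and y
Qn(0.5)                           # smoothed empirical median
```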
cm() no longer returns the values of the unbiased estimators when method = "iterative".
Specification of regression models in cm() has changed: one should now provide the regression model as a formula and the regressors in a separate matrix or data frame.
Due to the above change, predict.cm() now expects newdata to be a data frame, as for stats:::predict.lm().
Function bstraub() is no longer exported. Users are expected to use cm() as interface instead.
Functions r<foo>() are now more consistent in warning when NAs (specifically NaNs) are generated (as per the change in R 2.7.0).
frequency.portfolio() was wrongly counting NAs.
Domain of pdfs returned by aggregateDist() now restricted to [0, 1].
Quantiles are now computed correctly (and more efficiently) in 0 and 1 by quantile.aggregateDist().
coverage() no longer requires a cdf when it is not needed, namely when there is no deductible and no limit.
plot() method for function objects returned by ruin().
Calculation of the Bühlmann-Gisler and Ohlsson estimators was incorrect for hierarchical models with more than one level.
Better display of first column for grouped data objects.
Miscellaneous corrections to the vignettes.
Accented letters in comments removed to avoid compilation problems under MacOS X on CRAN (see thread starting at https://stat.ethz.ch/pipermail/r-devel/2008-February/048391.html).
New simulation vignette on usage of function simul(). Most of the material was previously in the credibility vignette.
Examples of ruin() and adjCoef() added to the risk demo.
Following some negative comments on a function name VG had been using for years, function simpf() is renamed to simul() and the class of the output from "simpf" to "portfolio".
The components of the list returned by severity.portfolio() are renamed from "first" and "last" to "main" and "split", respectively.
levinvgauss() returned wrong results.
Restructuring of the weights matrix in simpf() could fail with an incorrect number of columns.
Fixed index entry of the credibility theory vignette.
adjCoef() would only accept as argument h a function named h.
ruin() built an incorrect probability vector and intensity matrix for mixtures of Erlangs.
CTE.aggregateDist() sometimes gave values smaller than the VaR for the recursive and simulation methods.
Maintenance and new features release.
Functions mgf<foo>() to compute the moment (or cumulant, if log = TRUE) generating function of the following distributions: chi-square, exponential, gamma, inverse Gaussian (from package SuppDists), inverse gamma, normal, uniform and phase-type (see below).
Functions m<foo>() to compute the raw moments of all the probability distributions supported in the package and the following of base R: chi-square, exponential, gamma, inverse Gaussian (from package SuppDists), inverse gamma, normal, uniform.
Functions <d,p,mgf,m,r>phtype() to compute the probability density function, cumulative distribution function, moment generating function, raw moments of, and to generate variates from, phase-type distributions.
Function VaR() with a method for objects of class "aggregateDist" to compute the Value at Risk of a distribution.
Function CTE() with a method for objects of class "aggregateDist" to compute the Conditional Tail Expectation of a distribution.
Function adjCoef() to compute the adjustment coefficient in ruin theory. If proportional or excess-of-loss reinsurance is included in the model, adjCoef() returns a function to compute the adjustment coefficient for given limits. A plot method is also included.
Function ruin() returns a function to compute the infinite time probability of ruin for given initial surpluses in the Cramér-Lundberg and Sparre Andersen models. Most calculations are done using the cdf of phase-type distributions as per Asmussen and Rolski (1991).
Calculations of the aggregate claim distribution using the recursive method much faster now that recursions are done in C.
Modular rewrite of cm(): the function now calls internal functions to carry out the calculations for each supported credibility model. This is more efficient.
Basic support for the regression model of Hachemeister in function cm().
For the hierarchical credibility model: support for the variance components estimators of Bühlmann and Gisler (2005) and Ohlsson (2005). Support remains for iterative pseudo-estimators.
Calculations of iterative pseudo-estimators in hierarchical credibility are much faster now that they are done in C.
Four new vignettes: introduction to the package and presentation of the features in loss distributions, risk theory and credibility theory.
Portfolio simulation material of the credibility demo moved to demo simulation.
Argument approx.lin of quantile.aggregateDist() renamed smooth.
Function aggregateDist() gains a maxit argument for the maximum number of recursions when using Panjer's algorithm. This is to avoid infinite recursion when the cumulative distribution function does not converge to 1.
Function cm() gains a maxit argument for the maximum number of iterations in pseudo-estimator calculations.
Methods of aggregate(), frequency(), severity() and weights() for objects of class "simpf" gain two new arguments:
classification: when TRUE, the columns giving the classification structure of the portfolio are excluded from the result. This eases calculation of loss ratios (aggregate claim amounts divided by the weights);
prefix: specifies a prefix to use in column names, with sensible defaults to avoid name clashes for data and weight columns.
The way weights had to be specified for the "chi-square" method of mde() to give expected results was very unintuitive. The fix has no effect when using the default weights.
The empirical step function returned by the "recursive" and "convolution" methods of aggregateDist() now correctly returns 1 when evaluated past its largest knot.
Direct usage of bstraub() is now deprecated in favor of cm(). The function will remain in the package since it is used internally by cm(), but it will not be exported in future releases of the package. The current format of the results is also deprecated.
The user interface of coverage() has changed. Instead of taking as argument the name of a probability law (say foo) and requiring that functions dfoo() and pfoo() exist, coverage() now requires a function name or function object to compute the cdf of the unmodified random variable and a function name or function object to compute the pdf. If both functions are provided, coverage() returns a function to compute the pdf of the modified random variable; if only the cdf is provided, coverage() returns the cdf of the modified random variable. Hence, argument cdf is no longer a boolean. The new interface is more in line with other functions of the package.
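The kind of object coverage() builds from a cdf can be sketched in base R for the per-loss variable with an ordinary deductible d and policy limit u; modified_cdf below is a hypothetical helper for illustration, not the package's implementation.

```r
## cdf of min(max(X - d, 0), u - d): a probability mass at 0 (losses
## below the deductible) and a mass at u - d (losses censored at the
## limit), with the original cdf shifted in between.
modified_cdf <- function(cdf, d, u)
  function(y) ifelse(y < 0, 0,
              ifelse(y >= u - d, 1, cdf(y + d)))

Fmod <- modified_cdf(function(x) pgamma(x, 2, 1), d = 1, u = 10)
Fmod(0)   # pgamma(1, 2, 1): probability the payment is zero
```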
Methods of summary() and print.summary() for objects of class "cm" were not declared in the NAMESPACE file.
Various fixes to the demo files.
Major official update. This version is not backward compatible with the 0.1-x series. Features of the package can be split into the following categories: loss distributions modeling, risk theory, credibility theory.
Functions <d,p,q,r>foo() to compute the density function, cumulative distribution function, quantile function of, and to generate variates from, all probability distributions of Appendix A of Klugman et al. (2004), Loss Models, Second Edition (except the inverse Gaussian and log-t) not already in R. Namely, this adds the following distributions (the root is what follows the d, p, q or r in function names):
DISTRIBUTION NAME           ROOT
Burr                        burr
Generalized beta            genbeta
Generalized Pareto          genpareto
Inverse Burr                invburr
Inverse exponential         invexp
Inverse gamma               invgamma
Inverse Pareto              invpareto
Inverse paralogistic        invparalogis
Inverse transformed gamma   invtrgamma
Inverse Weibull             invweibull
Loggamma                    loggamma
Loglogistic                 llogis
Paralogistic                paralogis
Pareto                      pareto
Single parameter Pareto     pareto1
Transformed beta            trbeta
Transformed gamma           trgamma
All functions are coded in C for efficiency purposes and should behave exactly like the functions in base R. For all distributions that have a scale parameter, the corresponding functions have rate = 1 and scale = 1/rate arguments.
Functions <m,lev>foo() to compute the kth raw (non-central) moment and kth limited moment for all the probability distributions mentioned above, plus the following ones of base R: beta, exponential, gamma, lognormal and Weibull.
Facilities to store and manipulate grouped data (stored in an interval-frequency fashion). Function grouped.data() creates a grouped data object similar to a data frame. Methods of "[", "[<-", mean() and hist() created for objects of class "grouped.data".
Function ogive() — with appropriate methods of knots(), plot(), print() and summary() — to compute the ogive of grouped data. Usage is in every respect similar to stats:::ecdf().
Function elev()
to compute the empirical limited
expected value of a sample of individual or grouped data.
Function emm() to compute the kth empirical raw (non-central) moment of a sample of individual or grouped data.
Function mde() to compute minimum distance estimators from a sample of individual or grouped data using one of three distance measures: Cramér-von Mises (CvM), chi-square, layer average severity (LAS). Usage is similar to fitdistr() of package MASS.
Function coverage() to obtain the pdf or cdf of the payment per payment or payment per loss random variable under any combination of the following coverage modifications: ordinary or franchise deductible, policy limit, coinsurance, inflation. The result is a function that can be used in fitting models to data subject to such coverage modifications.
Individual dental claims data set dental and grouped dental claims data set gdental of Klugman et al. (2004), Loss Models, Second Edition.
Function aggregateDist() returns a function to compute the cumulative distribution function of the total amount of claims random variable for an insurance portfolio using any of the following five methods:
exact calculation by convolutions (using function convolve() of package stats);
recursive calculation using Panjer's algorithm;
normal approximation;
normal power approximation;
simulation.
The modular conception of aggregateDist() allows for easy inclusion of additional methods. There are special methods of print(), summary(), quantile() and mean() for objects of class "aggregateDist". The objects otherwise inherit from classes "ecdf" (for methods 1, 2 and 3) and "function".
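The convolution method can be sketched in base R (an illustration only, not the package's implementation): truncate the frequency distribution, build the n-fold convolutions of the severity pmf with convolve(), and accumulate them weighted by the frequency probabilities.

```r
## Aggregate pmf as sum over n of Pr[N = n] * (n-fold convolution
## of the severity pmf).  Arbitrary Poisson(2) frequency and a
## severity pmf on 0:2, chosen for illustration.
fx <- c(0, 0.6, 0.4)            # severity pmf on 0:2
pn <- dpois(0:10, 2)            # frequency pmf, truncated at 10
m  <- 25                        # evaluate the aggregate pmf on 0:m

fs  <- numeric(m + 1)
fs[1] <- pn[1]                  # n = 0 term: all mass at 0
fxn <- c(1, numeric(m))         # 0-fold convolution = point mass at 0
for (n in 1:10) {
  fxn <- convolve(fxn, rev(fx), type = "open")[1:(m + 1)]
  fs  <- fs + pn[n + 1] * fxn
}
sum(fs)   # ~ ppois(10, 2), i.e. nearly 1
```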
See also the DEPRECATED, DEFUNCT OR NO BACKWARD COMPATIBILITY section below.
Function discretize() to discretize a continuous distribution using any of the following four methods:
upper discretization, where the discretized cdf is always above the true cdf;
lower discretization, where the discretized cdf is always under the true cdf;
rounding, where the true cdf passes through the midpoints of the intervals of the discretized cdf;
first moment matching of the discretized and true distributions.
Usage is similar to curve() of package graphics. Again, the modular conception allows for easy inclusion of additional discretization methods.
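The upper, lower and rounding methods can be sketched in base R for a gamma(2, 1) severity on [0, 10] with step 0.5 (an illustration only; with the package one would instead call, e.g., discretize(pgamma(x, 2, 1), from = 0, to = 10, step = 0.5, method = "rounding")).

```r
## Three discretization schemes for a continuous cdf Fx on a grid.
h  <- 0.5
x  <- seq(0, 10, by = h)
Fx <- function(q) pgamma(q, 2, 1)

f.up  <- Fx(x + h) - Fx(x)          # "upper": step cdf lies above Fx
f.low <- c(Fx(0), diff(Fx(x)))      # "lower": step cdf lies below Fx
f.mid <- diff(c(0, Fx(x + h/2)))    # "rounding": Fx crosses midpoints
```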
Function simpf() can now simulate data for hierarchical portfolios of any number of levels. Model specification changed completely; see the DEPRECATED, DEFUNCT OR NO BACKWARD COMPATIBILITY section below. The function is also significantly (~10x) faster than the previous version.
Generic function severity() defined mostly to provide a method for objects of class "simpf"; see below.
Methods of aggregate(), frequency(), severity() and weights() to extract information from objects of class "simpf":
aggregate() returns the matrix of aggregate claim amounts per node;
frequency() returns the matrix of the number of claims per node;
severity() returns the matrix of individual claim amounts per node;
weights() returns the matrix of weights corresponding to the data.
Summaries can be done in various ways; see ?simpf.summaries.
Function cm() (for credibility model) to compute structure parameters estimators for hierarchical credibility models, including the Bühlmann and Bühlmann-Straub models. Usage is similar to lm() of package stats in that the hierarchical structure is specified by means of a formula object and data is extracted from a matrix or data frame. There are special methods of print() and summary() for objects of class "cm". Credibility premiums are computed using a method of predict(); see below. For simple Bühlmann and Bühlmann-Straub models, bstraub() remains simpler to use and faster.
Function bstraub() now returns an object of class "bstraub" for which there exist print and summary methods. The function no longer computes the credibility premiums; see the DEPRECATED, DEFUNCT OR NO BACKWARD COMPATIBILITY section below.
Methods of predict() for objects of class "cm" and "bstraub" created to actually compute the credibility premiums of credibility models. Function predict.cm() can return the premiums for specific levels of a hierarchical portfolio only.
Function unroll() to unlist a list with a "dim" attribute of length 0, 1 or 2 (that is, a vector or matrix of vectors) according to a specific dimension. Currently identical to severity.default() for lack of a better usage of the default method of severity().
Three new demos corresponding to the three main fields of actuarial science covered by the package.
French translations of the error and warning messages.
The package now has a name space.
Function panjer(), although still present in the package, should no longer be used directly. Recursive calculation of the aggregate claim amount should be done with aggregateDist(). Further, the function is not backward compatible: model specification has changed, discretization of the claim amount distribution should now be done with discretize(), and the function now returns a function to compute the cdf instead of a simple vector of probabilities.
Model specification for simpf() changed completely and is not backward compatible with previous versions of the package. The new scheme allows for much more general models.
Function rearrangepf() is defunct and has been replaced by methods of aggregate(), frequency() and severity().
Function bstraub() no longer computes the credibility premiums. One should now instead use predict() for this.
The data set hachemeister is no longer a list but rather a matrix with a state specification.
Fixed the dependency on R >= 2.1.0 since the package uses function isTRUE().
First public release.
Fixed an important bug in bstraub(): when calculating the range of the weights matrix, NAs were not excluded.
Miscellaneous documentation corrections.
Initial release.
Contains functions bstraub(), simpf(), rearrangepf() and panjer(), and the dataset hachemeister.