Commit 0d42a0b6 authored by Tom Reynkens

General documentation updates

parent 39f69b63
Type: Package
Package: smurf
Title: Sparse Multi-Type Regularized Feature Modeling
-Version: 0.3.0.9028
+Version: 0.3.0.9029
Date: 2018-09-24
Authors@R: c(
person("Tom", "Reynkens", email = "tomreynkens@hotmail.com", role = c("aut", "cre"),
......
@@ -18,7 +18,7 @@
#' @param tau Parameter for backtracking the step size. A numeric strictly between 0 and 1, default is 0.5.
#' @param reest A logical indicating if the obtained (reduced) model is re-estimated using \code{\link[speedglm]{speedglm}} or \code{\link[stats]{glm}}. Default is \code{TRUE}.
#' @param lambda.vector Values of lambda to consider when selecting the optimal value of lambda. A vector of strictly positive numerics (which is preferably a decreasing sequence as we make use of warm starts) or \code{NULL} (default).
-#' When \code{NULL}, it is set to an exponentially decreasing sequence between \code{lambda.max} and \code{lambda.min}.
+#' When \code{NULL}, it is set to an exponentially decreasing sequence of length \code{lambda.length} between \code{lambda.max} and \code{lambda.min}.
#' @param lambda.min Minimum value of lambda to consider when selecting the optimal value of lambda. A strictly positive numeric or \code{NULL} (default).
#' When \code{NULL}, it is equal to \code{0.0001} times \code{lambda.max}. This argument is ignored when \code{lambda.vector} is not \code{NULL}.
#' @param lambda.max Maximum value of lambda to consider when selecting the optimal value of lambda. A strictly positive numeric larger than \code{lambda.min} or \code{NULL} (default).
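The default lambda grid described above (an exponentially decreasing sequence between `lambda.max` and `lambda.min`, i.e. equally spaced on the log scale) can be sketched as follows. This is an illustrative Python sketch, not the package's R code; `lambda_grid` is a hypothetical helper name.

```python
import math

def lambda_grid(lambda_max, lambda_min, length):
    # Exponentially decreasing sequence from lambda_max down to lambda_min:
    # equally spaced on the log scale (assumes length >= 2).
    # A decreasing order is preferable because of the warm starts.
    step = (math.log(lambda_min) - math.log(lambda_max)) / (length - 1)
    return [math.exp(math.log(lambda_max) + i * step) for i in range(length)]

# e.g. roughly 1, 0.1, 0.01, 0.001, 0.0001 (up to floating point)
grid = lambda_grid(lambda_max=1.0, lambda_min=0.0001, length=5)
```

Note that the default `lambda.min` of `0.0001 * lambda.max` spans four orders of magnitude on this log scale.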
@@ -29,7 +29,7 @@
#' @param k Number of folds when selecting lambda using cross-validation. A strictly positive integer, default is 5 (i.e. five-fold cross-validation). This number cannot be larger than the number of observations. Note that cross-validation with one fold (\code{k=1}) is the same as in-sample selection of \code{lambda}.
#' @param oos.prop Proportion of the data that is used as the validation sample when selecting \code{lambda} out-of-sample. A numeric strictly between 0 and 1, default is 0.2.
#' This argument is ignored when \code{validation.index} is not \code{NULL}.
-#' @param validation.index Vector containing the row indices of the data matrix that are used as the validation sample.
+#' @param validation.index Vector containing the row indices of the data matrix corresponding to the observations that are used as the validation sample.
#' This argument is only used when \code{lambda} is selected out-of-sample. Default is \code{NULL} meaning that randomly 100*\code{oos.prop}\% of the data are used as validation sample.
#' @param ncores Number of cores used when performing cross-validation. A strictly positive integer or \code{NULL} (default).
#' When \code{NULL}, \code{max(nc-1,1)} cores are used where \code{nc} is the number of cores as determined by \code{\link{detectCores}}.
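The interplay of `validation.index` and `oos.prop` documented above can be sketched as follows. This is a hedged Python illustration of the documented behaviour, not the package's R implementation; `split_validation` is a hypothetical helper name.

```python
import random

def split_validation(n_obs, oos_prop=0.2, validation_index=None, seed=None):
    # If explicit row indices are supplied, they take precedence and
    # oos_prop is ignored (as documented for validation.index).
    if validation_index is not None:
        return sorted(validation_index)
    # Otherwise draw a random 100*oos_prop% of the rows as the
    # validation sample.
    rng = random.Random(seed)
    n_val = round(n_obs * oos_prop)
    return sorted(rng.sample(range(n_obs), n_val))
```

With the default `oos_prop=0.2`, 20% of the observations end up in the validation sample.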
......
@@ -20,7 +20,7 @@
#'
#' @details See \code{\link[stats]{glm.summaries}} for an overview of the different types of residuals.
#'
-#' @seealso \code{\link{residuals_reest}}, \code{\link[stats]{glm.summaries}}, \code{\link{glmsmurf-class}}
+#' @seealso \code{\link{residuals_reest}}, \code{\link{residuals}}, \code{\link[stats]{glm.summaries}}, \code{\link{glmsmurf-class}}
#'
#' @examples \dontrun{
#'
......
@@ -43,6 +43,8 @@
#'
#' @importFrom RColorBrewer brewer.pal
#'
+#' @importFrom speedglm speedglm.wfit
+#'
#' @importFrom stats as.formula
#' @importFrom stats contrasts
#' @importFrom stats coef
@@ -63,8 +65,6 @@
#' @importFrom stats sd
#' @importFrom stats terms
#' @importFrom stats weighted.mean
-#'
-#' @importFrom speedglm speedglm.wfit
#############################
# Make C code recognisable
......
@@ -27,8 +27,6 @@ print.glmsmurf <- function(x, ...) {
#' @seealso \code{\link[stats]{summary.glm}}, \code{\link{glmsmurf}}, \code{\link{glmsmurf-class}}
#'
#' @method summary glmsmurf
-#'
-#' @examples ## See example(glmsmurf) for examples
summary.glmsmurf <- function(object, digits = 3L, ...) {
# Handle reest
......
@@ -14,6 +14,7 @@
\subsection{Changes in documentation:}{
\itemize{
\item \code{glmsmurf}: Add note that selected value of lambda for out-of-sample selection and cross-validation is not (always) deterministic.
+\item General documentation updates.
}
}
......
@@ -51,7 +51,7 @@ formu <- rentm ~ p(area, pen = "gflasso") +
p(kitchen, pen = "lasso")
-# Fit a GLM model with a multi-type Lasso penalty.
+# Fit a multi-type regularized GLM using the SMuRF algorithm.
# We use standardization adaptive penalty weights based on an initial GLM fit.
# The value for lambda is selected using cross-validation
# (with the deviance as loss measure and the one standard error rule), see example(plot_lambda)
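The "one standard error rule" mentioned in the comment above can be sketched as follows: among all values of lambda whose cross-validation error is within one standard error of the minimum, pick the largest (i.e. the sparsest model). This is a generic Python illustration of the rule, not the package's R code; `one_se_rule` is a hypothetical helper name.

```python
def one_se_rule(lambdas, cv_errors, cv_se):
    # lambdas is assumed sorted in decreasing order, with cv_errors and
    # cv_se the matching cross-validation errors and standard errors.
    i_min = min(range(len(cv_errors)), key=cv_errors.__getitem__)
    threshold = cv_errors[i_min] + cv_se[i_min]
    # The first (largest) lambda within one SE of the minimum wins.
    for i, lam in enumerate(lambdas):
        if cv_errors[i] <= threshold:
            return lam
    return lambdas[i_min]
```

Compared to simply minimizing the cross-validation error, this rule trades a small increase in estimated error for a more heavily regularized, sparser model.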
......
@@ -52,7 +52,7 @@ formu <- rentm ~ p(area, pen = "gflasso") +
p(tiles, pen = "lasso") + p(bathextra, pen = "lasso") +
p(kitchen, pen = "lasso")
-# Fit a GLM model with a multi-type Lasso penalty and select the optimal value of lambda
+# Fit a multi-type regularized GLM using the SMuRF algorithm and select the optimal value of lambda
# using cross-validation (with the deviance as loss measure and the one standard error rule).
# We use standardization adaptive penalty weights based on an initial GLM fit.
# The number of values of lambda to consider in cross-validation is
......
@@ -54,7 +54,7 @@ formu <- rentm ~ p(area, pen = "gflasso") +
p(kitchen, pen = "lasso")
-# Fit a GLM model with a multi-type Lasso penalty.
+# Fit a multi-type regularized GLM using the SMuRF algorithm.
# We use standardization adaptive penalty weights based on an initial GLM fit.
munich.fit <- glmsmurf(formula = formu, family = gaussian(), data = rent,
pen.weights = "glm.stand", lambda = 0.1)
......
@@ -176,7 +176,7 @@ formu <- rentm ~ p(area, pen = "gflasso") +
p(kitchen, pen = "lasso")
-# Fit a GLM model with a multi-type Lasso penalty.
+# Fit a multi-type regularized GLM using the SMuRF algorithm.
# We use standardization adaptive penalty weights based on an initial GLM fit.
# The value for lambda is selected using cross-validation
# (with the deviance as loss measure and the one standard error rule), see example(plot_lambda)
......
@@ -22,7 +22,7 @@ glmsmurf.control(epsilon = 1e-08, maxiter = 10000, step = NULL,
\item{reest}{A logical indicating if the obtained (reduced) model is re-estimated using \code{\link[speedglm]{speedglm}} or \code{\link[stats]{glm}}. Default is \code{TRUE}.}
\item{lambda.vector}{Values of lambda to consider when selecting the optimal value of lambda. A vector of strictly positive numerics (which is preferably a decreasing sequence as we make use of warm starts) or \code{NULL} (default).
-When \code{NULL}, it is set to an exponentially decreasing sequence between \code{lambda.max} and \code{lambda.min}.}
+When \code{NULL}, it is set to an exponentially decreasing sequence of length \code{lambda.length} between \code{lambda.max} and \code{lambda.min}.}
\item{lambda.min}{Minimum value of lambda to consider when selecting the optimal value of lambda. A strictly positive numeric or \code{NULL} (default).
When \code{NULL}, it is equal to \code{0.0001} times \code{lambda.max}. This argument is ignored when \code{lambda.vector} is not \code{NULL}.}
@@ -40,7 +40,7 @@ This argument is only used if \code{reest} is \code{TRUE}.}
\item{oos.prop}{Proportion of the data that is used as the validation sample when selecting \code{lambda} out-of-sample. A numeric strictly between 0 and 1, default is 0.2.
This argument is ignored when \code{validation.index} is not \code{NULL}.}
-\item{validation.index}{Vector containing the row indices of the data matrix that are used as the validation sample.
+\item{validation.index}{Vector containing the row indices of the data matrix corresponding to the observations that are used as the validation sample.
This argument is only used when \code{lambda} is selected out-of-sample. Default is \code{NULL} meaning that randomly 100*\code{oos.prop}\% of the data are used as validation sample.}
\item{ncores}{Number of cores used when performing cross-validation. A strictly positive integer or \code{NULL} (default).
......
@@ -113,7 +113,7 @@ formu <- rentm ~ p(area, pen = "gflasso") +
p(kitchen, pen = "lasso")
-# Fit a GLM model with a multi-type Lasso penalty.
+# Fit a multi-type regularized GLM using the SMuRF algorithm.
# We use standardization adaptive penalty weights based on an initial GLM fit.
munich.fit <- glmsmurf(formula = formu, family = gaussian(), data = rent,
pen.weights = "glm.stand", lambda = 0.1)
......
@@ -93,7 +93,7 @@ formu <- rentm ~ p(area, pen = "gflasso") +
p(tiles, pen = "lasso") + p(bathextra, pen = "lasso") +
p(kitchen, pen = "lasso")
-# Fit a GLM model with a multi-type Lasso penalty and select the optimal value of lambda
+# Fit a multi-type regularized GLM using the SMuRF algorithm and select the optimal value of lambda
# using cross-validation (with the deviance as loss measure and the one standard error rule).
# We use standardization adaptive penalty weights based on an initial GLM fit.
# The number of values of lambda to consider in cross-validation is
......
@@ -44,5 +44,5 @@ residuals_reest(munich.fit, type = "deviance")
}
\seealso{
-\code{\link{residuals_reest}}, \code{\link[stats]{glm.summaries}}, \code{\link{glmsmurf-class}}
+\code{\link{residuals_reest}}, \code{\link{residuals}}, \code{\link[stats]{glm.summaries}}, \code{\link{glmsmurf-class}}
}
@@ -16,9 +16,6 @@
\description{
Function to print a summary of a \code{glmsmurf}-object.
}
-\examples{
-## See example(glmsmurf) for examples
-}
\seealso{
\code{\link[stats]{summary.glm}}, \code{\link{glmsmurf}}, \code{\link{glmsmurf-class}}
}
@@ -40,7 +40,7 @@ formu <- rentm ~ p(area, pen = "gflasso", refcat = 3) +
p(kitchen, pen = "lasso")
-# Fit multi-type penalized GLM model.
+# Fit a multi-type regularized GLM using the SMuRF algorithm.
# We use adaptive standardization penalty weights based on a GLM fit.
# The value for lambda is selected using cross-validation
# (with MSE as measure), see example(plot_lambda)
......
@@ -236,7 +236,7 @@ formu <- rentm ~ p(area, pen = "gflasso") +
p(kitchen, pen = "lasso")
```
-Next, we fit a multi-type regularized GLM model, where we use standardization adaptive penalty weights based on an initial GLM fit.
+Next, we fit a multi-type regularized GLM, where we use standardization adaptive penalty weights based on an initial GLM fit.
We predetermined the value for lambda using cross-validation (with the deviance as loss measure and the one standard error rule), see [Selection of lambda](#selection-of-lambda).
```{r, warning = FALSE}
munich.fit <- glmsmurf(formula = formu, family = gaussian(), data = rent,
......