\section{L\+A\+RS Class Reference}
\label{classmlpack_1_1regression_1_1LARS}\index{L\+A\+RS@{L\+A\+RS}}


An implementation of \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS}, a stage-\/wise homotopy-\/based algorithm for l1-\/regularized linear regression (L\+A\+S\+SO) and l1+l2 regularized linear regression (Elastic Net).  


\subsection*{Public Member Functions}
\begin{DoxyCompactItemize}
\item 
\textbf{ L\+A\+RS} (const bool use\+Cholesky=false, const double lambda1=0.\+0, const double lambda2=0.\+0, const double tolerance=1e-\/16)
\begin{DoxyCompactList}\small\item\em Set the parameters to \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS}. \end{DoxyCompactList}\item 
\textbf{ L\+A\+RS} (const bool use\+Cholesky, const arma\+::mat \&gram\+Matrix, const double lambda1=0.\+0, const double lambda2=0.\+0, const double tolerance=1e-\/16)
\begin{DoxyCompactList}\small\item\em Set the parameters to \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS}, and pass in a precalculated Gram matrix. \end{DoxyCompactList}\item 
\textbf{ L\+A\+RS} (const arma\+::mat \&data, const arma\+::rowvec \&responses, const bool transpose\+Data=true, const bool use\+Cholesky=false, const double lambda1=0.\+0, const double lambda2=0.\+0, const double tolerance=1e-\/16)
\begin{DoxyCompactList}\small\item\em Set the parameters to \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} and run training. \end{DoxyCompactList}\item 
\textbf{ L\+A\+RS} (const arma\+::mat \&data, const arma\+::rowvec \&responses, const bool transpose\+Data, const bool use\+Cholesky, const arma\+::mat \&gram\+Matrix, const double lambda1=0.\+0, const double lambda2=0.\+0, const double tolerance=1e-\/16)
\begin{DoxyCompactList}\small\item\em Set the parameters to \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS}, pass in a precalculated Gram matrix, and run training. \end{DoxyCompactList}\item 
\textbf{ L\+A\+RS} (const \textbf{ L\+A\+RS} \&other)
\begin{DoxyCompactList}\small\item\em Construct the \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object by copying the given \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object. \end{DoxyCompactList}\item 
\textbf{ L\+A\+RS} (\textbf{ L\+A\+RS} \&\&other)
\begin{DoxyCompactList}\small\item\em Construct the \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object by taking ownership of the given \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object. \end{DoxyCompactList}\item 
const std\+::vector$<$ size\+\_\+t $>$ \& \textbf{ Active\+Set} () const
\begin{DoxyCompactList}\small\item\em Access the set of active dimensions. \end{DoxyCompactList}\item 
const arma\+::vec \& \textbf{ Beta} () const
\begin{DoxyCompactList}\small\item\em Access the solution coefficients. \end{DoxyCompactList}\item 
const std\+::vector$<$ arma\+::vec $>$ \& \textbf{ Beta\+Path} () const
\begin{DoxyCompactList}\small\item\em Access the set of coefficients after each iteration; the solution is the last element. \end{DoxyCompactList}\item 
double \textbf{ Compute\+Error} (const arma\+::mat \&matX, const arma\+::rowvec \&y, const bool row\+Major=false)
\begin{DoxyCompactList}\small\item\em Compute cost error of the given data matrix using the currently-\/trained \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} model. \end{DoxyCompactList}\item 
const std\+::vector$<$ double $>$ \& \textbf{ Lambda\+Path} () const
\begin{DoxyCompactList}\small\item\em Access the set of values for lambda1 after each iteration; the solution is the last element. \end{DoxyCompactList}\item 
const arma\+::mat \& \textbf{ Mat\+Utri\+Chol\+Factor} () const
\begin{DoxyCompactList}\small\item\em Access the upper triangular Cholesky factor. \end{DoxyCompactList}\item 
\textbf{ L\+A\+RS} \& \textbf{ operator=} (const \textbf{ L\+A\+RS} \&other)
\begin{DoxyCompactList}\small\item\em Copy the given \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object. \end{DoxyCompactList}\item 
\textbf{ L\+A\+RS} \& \textbf{ operator=} (\textbf{ L\+A\+RS} \&\&other)
\begin{DoxyCompactList}\small\item\em Take ownership of the given \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object. \end{DoxyCompactList}\item 
void \textbf{ Predict} (const arma\+::mat \&points, arma\+::rowvec \&predictions, const bool row\+Major=false) const
\begin{DoxyCompactList}\small\item\em Predict y\+\_\+i for each data point in the given data matrix using the currently-\/trained \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} model. \end{DoxyCompactList}\item 
{\footnotesize template$<$typename Archive $>$ }\\void \textbf{ serialize} (Archive \&ar, const unsigned int)
\begin{DoxyCompactList}\small\item\em Serialize the \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} model. \end{DoxyCompactList}\item 
double \textbf{ Train} (const arma\+::mat \&data, const arma\+::rowvec \&responses, arma\+::vec \&beta, const bool transpose\+Data=true)
\begin{DoxyCompactList}\small\item\em Run \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS}. \end{DoxyCompactList}\item 
double \textbf{ Train} (const arma\+::mat \&data, const arma\+::rowvec \&responses, const bool transpose\+Data=true)
\begin{DoxyCompactList}\small\item\em Run \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS}. \end{DoxyCompactList}\end{DoxyCompactItemize}


\subsection{Detailed Description}
An implementation of \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS}, a stage-\/wise homotopy-\/based algorithm for l1-\/regularized linear regression (L\+A\+S\+SO) and l1+l2 regularized linear regression (Elastic Net). 

Let $ X $ be a matrix where each row is a point and each column is a dimension and let $ y $ be a vector of responses.

The Elastic Net problem is to solve

\[ \min_{\beta} 0.5 || X \beta - y ||_2^2 + \lambda_1 || \beta ||_1 + 0.5 \lambda_2 || \beta ||_2^2 \]

where $ \beta $ is the vector of regression coefficients.

If $ \lambda_1 > 0 $ and $ \lambda_2 = 0 $, the problem is the L\+A\+S\+SO. If $ \lambda_1 > 0 $ and $ \lambda_2 > 0 $, the problem is the elastic net. If $ \lambda_1 = 0 $ and $ \lambda_2 > 0 $, the problem is ridge regression. If $ \lambda_1 = 0 $ and $ \lambda_2 = 0 $, the problem is unregularized linear regression.

Note\+: for efficiency reasons, this algorithm is not recommended when $ \lambda_1 = 0 $.

For more details, see the following papers\+:


\begin{DoxyCode}
@article\{efron2004least,
  title=\{Least angle regression\},
  author=\{Efron, B. and Hastie, T. and Johnstone, I. and Tibshirani, R.\},
  journal=\{The Annals of statistics\},
  volume=\{32\},
  number=\{2\},
  pages=\{407--499\},
  year=\{2004\},
  publisher=\{Institute of Mathematical Statistics\}
\}
\end{DoxyCode}



\begin{DoxyCode}
@article\{zou2005regularization,
  title=\{Regularization and variable selection via the elastic net\},
  author=\{Zou, H. and Hastie, T.\},
  journal=\{Journal of the Royal Statistical Society Series B\},
  volume=\{67\},
  number=\{2\},
  pages=\{301--320\},
  year=\{2005\},
  publisher=\{Royal Statistical Society\}
\}
\end{DoxyCode}
 

Definition at line 89 of file lars.\+hpp.



\subsection{Constructor \& Destructor Documentation}
\mbox{\label{classmlpack_1_1regression_1_1LARS_a286c84aa2fd218969a77cc297985c482}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!L\+A\+RS@{L\+A\+RS}}
\index{L\+A\+RS@{L\+A\+RS}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{L\+A\+R\+S()\hspace{0.1cm}{\footnotesize\ttfamily [1/6]}}
{\footnotesize\ttfamily \textbf{ L\+A\+RS} (\begin{DoxyParamCaption}\item[{const bool}]{use\+Cholesky = {\ttfamily false},  }\item[{const double}]{lambda1 = {\ttfamily 0.0},  }\item[{const double}]{lambda2 = {\ttfamily 0.0},  }\item[{const double}]{tolerance = {\ttfamily 1e-\/16} }\end{DoxyParamCaption})}



Set the parameters to \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS}. 

Both lambda1 and lambda2 default to 0.


\begin{DoxyParams}{Parameters}
{\em use\+Cholesky} & Whether or not to use Cholesky decomposition when solving linear system (as opposed to using the full Gram matrix). \\
\hline
{\em lambda1} & Regularization parameter for l1-\/norm penalty. \\
\hline
{\em lambda2} & Regularization parameter for l2-\/norm penalty. \\
\hline
{\em tolerance} & Run until the maximum correlation of elements in $ X^T y $ is less than this. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1regression_1_1LARS_a7d239923bcafc8aae2fe6920a5816936}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!L\+A\+RS@{L\+A\+RS}}
\index{L\+A\+RS@{L\+A\+RS}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{L\+A\+R\+S()\hspace{0.1cm}{\footnotesize\ttfamily [2/6]}}
{\footnotesize\ttfamily \textbf{ L\+A\+RS} (\begin{DoxyParamCaption}\item[{const bool}]{use\+Cholesky,  }\item[{const arma\+::mat \&}]{gram\+Matrix,  }\item[{const double}]{lambda1 = {\ttfamily 0.0},  }\item[{const double}]{lambda2 = {\ttfamily 0.0},  }\item[{const double}]{tolerance = {\ttfamily 1e-\/16} }\end{DoxyParamCaption})}



Set the parameters to \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS}, and pass in a precalculated Gram matrix. 

Both lambda1 and lambda2 default to 0.


\begin{DoxyParams}{Parameters}
{\em use\+Cholesky} & Whether or not to use Cholesky decomposition when solving linear system (as opposed to using the full Gram matrix). \\
\hline
{\em gram\+Matrix} & Gram matrix. \\
\hline
{\em lambda1} & Regularization parameter for l1-\/norm penalty. \\
\hline
{\em lambda2} & Regularization parameter for l2-\/norm penalty. \\
\hline
{\em tolerance} & Run until the maximum correlation of elements in $ X^T y $ is less than this. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1regression_1_1LARS_a70e31765ec8b1f41c244bc096f662e95}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!L\+A\+RS@{L\+A\+RS}}
\index{L\+A\+RS@{L\+A\+RS}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{L\+A\+R\+S()\hspace{0.1cm}{\footnotesize\ttfamily [3/6]}}
{\footnotesize\ttfamily \textbf{ L\+A\+RS} (\begin{DoxyParamCaption}\item[{const arma\+::mat \&}]{data,  }\item[{const arma\+::rowvec \&}]{responses,  }\item[{const bool}]{transpose\+Data = {\ttfamily true},  }\item[{const bool}]{use\+Cholesky = {\ttfamily false},  }\item[{const double}]{lambda1 = {\ttfamily 0.0},  }\item[{const double}]{lambda2 = {\ttfamily 0.0},  }\item[{const double}]{tolerance = {\ttfamily 1e-\/16} }\end{DoxyParamCaption})}



Set the parameters to \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} and run training. 

Both lambda1 and lambda2 are set by default to 0.


\begin{DoxyParams}{Parameters}
{\em data} & Input data. \\
\hline
{\em responses} & A vector of targets. \\
\hline
{\em transpose\+Data} & Should be true if the input data is column-\/major and false otherwise. \\
\hline
{\em use\+Cholesky} & Whether or not to use Cholesky decomposition when solving linear system (as opposed to using the full Gram matrix). \\
\hline
{\em lambda1} & Regularization parameter for l1-\/norm penalty. \\
\hline
{\em lambda2} & Regularization parameter for l2-\/norm penalty. \\
\hline
{\em tolerance} & Run until the maximum correlation of elements in $ X^T y $ is less than this. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1regression_1_1LARS_a063aa7d400bd5974d940d19299536b39}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!L\+A\+RS@{L\+A\+RS}}
\index{L\+A\+RS@{L\+A\+RS}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{L\+A\+R\+S()\hspace{0.1cm}{\footnotesize\ttfamily [4/6]}}
{\footnotesize\ttfamily \textbf{ L\+A\+RS} (\begin{DoxyParamCaption}\item[{const arma\+::mat \&}]{data,  }\item[{const arma\+::rowvec \&}]{responses,  }\item[{const bool}]{transpose\+Data,  }\item[{const bool}]{use\+Cholesky,  }\item[{const arma\+::mat \&}]{gram\+Matrix,  }\item[{const double}]{lambda1 = {\ttfamily 0.0},  }\item[{const double}]{lambda2 = {\ttfamily 0.0},  }\item[{const double}]{tolerance = {\ttfamily 1e-\/16} }\end{DoxyParamCaption})}



Set the parameters to \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS}, pass in a precalculated Gram matrix, and run training. 

Both lambda1 and lambda2 are set by default to 0.


\begin{DoxyParams}{Parameters}
{\em data} & Input data. \\
\hline
{\em responses} & A vector of targets. \\
\hline
{\em transpose\+Data} & Should be true if the input data is column-\/major and false otherwise. \\
\hline
{\em use\+Cholesky} & Whether or not to use Cholesky decomposition when solving linear system (as opposed to using the full Gram matrix). \\
\hline
{\em gram\+Matrix} & Gram matrix. \\
\hline
{\em lambda1} & Regularization parameter for l1-\/norm penalty. \\
\hline
{\em lambda2} & Regularization parameter for l2-\/norm penalty. \\
\hline
{\em tolerance} & Run until the maximum correlation of elements in $ X^T y $ is less than this. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1regression_1_1LARS_a0cc9d048eafa5549851dfbad14e2034a}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!L\+A\+RS@{L\+A\+RS}}
\index{L\+A\+RS@{L\+A\+RS}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{L\+A\+R\+S()\hspace{0.1cm}{\footnotesize\ttfamily [5/6]}}
{\footnotesize\ttfamily \textbf{ L\+A\+RS} (\begin{DoxyParamCaption}\item[{const \textbf{ L\+A\+RS} \&}]{other }\end{DoxyParamCaption})}



Construct the \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object by copying the given \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object. 


\begin{DoxyParams}{Parameters}
{\em other} & \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object to copy. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1regression_1_1LARS_a7aa7ae2d7c44de23b571d63999228a80}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!L\+A\+RS@{L\+A\+RS}}
\index{L\+A\+RS@{L\+A\+RS}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{L\+A\+R\+S()\hspace{0.1cm}{\footnotesize\ttfamily [6/6]}}
{\footnotesize\ttfamily \textbf{ L\+A\+RS} (\begin{DoxyParamCaption}\item[{\textbf{ L\+A\+RS} \&\&}]{other }\end{DoxyParamCaption})}



Construct the \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object by taking ownership of the given \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object. 


\begin{DoxyParams}{Parameters}
{\em other} & \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object to take ownership of. \\
\hline
\end{DoxyParams}


\subsection{Member Function Documentation}
\mbox{\label{classmlpack_1_1regression_1_1LARS_a1775b52aa7ab1b47251df16bfa84969a}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!Active\+Set@{Active\+Set}}
\index{Active\+Set@{Active\+Set}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{Active\+Set()}
{\footnotesize\ttfamily const std\+::vector$<$size\+\_\+t$>$\& Active\+Set (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption}) const\hspace{0.3cm}{\ttfamily [inline]}}



Access the set of active dimensions. 



Definition at line 253 of file lars.\+hpp.

\mbox{\label{classmlpack_1_1regression_1_1LARS_a853eded6fe21f0293d997b3bc7905857}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!Beta@{Beta}}
\index{Beta@{Beta}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{Beta()}
{\footnotesize\ttfamily const arma\+::vec\& Beta (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption}) const\hspace{0.3cm}{\ttfamily [inline]}}



Access the solution coefficients. 



Definition at line 260 of file lars.\+hpp.

\mbox{\label{classmlpack_1_1regression_1_1LARS_ac597124267a151fc199119835fd4a520}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!Beta\+Path@{Beta\+Path}}
\index{Beta\+Path@{Beta\+Path}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{Beta\+Path()}
{\footnotesize\ttfamily const std\+::vector$<$arma\+::vec$>$\& Beta\+Path (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption}) const\hspace{0.3cm}{\ttfamily [inline]}}



Access the set of coefficients after each iteration; the solution is the last element. 



Definition at line 257 of file lars.\+hpp.

\mbox{\label{classmlpack_1_1regression_1_1LARS_a007f162efb479883303b4d0d2bacdcb4}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!Compute\+Error@{Compute\+Error}}
\index{Compute\+Error@{Compute\+Error}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{Compute\+Error()}
{\footnotesize\ttfamily double Compute\+Error (\begin{DoxyParamCaption}\item[{const arma\+::mat \&}]{matX,  }\item[{const arma\+::rowvec \&}]{y,  }\item[{const bool}]{row\+Major = {\ttfamily false} }\end{DoxyParamCaption})}



Compute cost error of the given data matrix using the currently-\/trained \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} model. 

Only $ \| y - \beta X \|_2 $ is used to calculate the cost error.


\begin{DoxyParams}{Parameters}
{\em matX} & Column-\/major input data (or row-\/major input data if row\+Major = true). \\
\hline
{\em y} & A vector of targets (responses). \\
\hline
{\em row\+Major} & Should be true if the data points matrix is row-\/major and false otherwise. \\
\hline
\end{DoxyParams}
\begin{DoxyReturn}{Returns}
The minimum cost error. 
\end{DoxyReturn}


Referenced by L\+A\+R\+S\+::\+Mat\+Utri\+Chol\+Factor().

\mbox{\label{classmlpack_1_1regression_1_1LARS_a1d857e3dd6d8bde441e0df7ce4d8fb62}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!Lambda\+Path@{Lambda\+Path}}
\index{Lambda\+Path@{Lambda\+Path}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{Lambda\+Path()}
{\footnotesize\ttfamily const std\+::vector$<$double$>$\& Lambda\+Path (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption}) const\hspace{0.3cm}{\ttfamily [inline]}}



Access the set of values for lambda1 after each iteration; the solution is the last element. 



Definition at line 264 of file lars.\+hpp.

\mbox{\label{classmlpack_1_1regression_1_1LARS_aa1cedc65d70665851c31a900685dc061}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!Mat\+Utri\+Chol\+Factor@{Mat\+Utri\+Chol\+Factor}}
\index{Mat\+Utri\+Chol\+Factor@{Mat\+Utri\+Chol\+Factor}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{Mat\+Utri\+Chol\+Factor()}
{\footnotesize\ttfamily const arma\+::mat\& Mat\+Utri\+Chol\+Factor (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption}) const\hspace{0.3cm}{\ttfamily [inline]}}



Access the upper triangular Cholesky factor. 



Definition at line 267 of file lars.\+hpp.



References L\+A\+R\+S\+::\+Compute\+Error(), and L\+A\+R\+S\+::serialize().

\mbox{\label{classmlpack_1_1regression_1_1LARS_ac19238b303d9bcb7b8adf9fde6bd2b80}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!operator=@{operator=}}
\index{operator=@{operator=}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{operator=()\hspace{0.1cm}{\footnotesize\ttfamily [1/2]}}
{\footnotesize\ttfamily \textbf{ L\+A\+RS}\& operator= (\begin{DoxyParamCaption}\item[{const \textbf{ L\+A\+RS} \&}]{other }\end{DoxyParamCaption})}



Copy the given \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object. 


\begin{DoxyParams}{Parameters}
{\em other} & \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object to copy. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1regression_1_1LARS_a24105afcc8594b088d865c2a7171e860}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!operator=@{operator=}}
\index{operator=@{operator=}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{operator=()\hspace{0.1cm}{\footnotesize\ttfamily [2/2]}}
{\footnotesize\ttfamily \textbf{ L\+A\+RS}\& operator= (\begin{DoxyParamCaption}\item[{\textbf{ L\+A\+RS} \&\&}]{other }\end{DoxyParamCaption})}



Take ownership of the given \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object. 


\begin{DoxyParams}{Parameters}
{\em other} & \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} object to take ownership of. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1regression_1_1LARS_af2f52669db2906eea3d0a0cb28d9c62a}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!Predict@{Predict}}
\index{Predict@{Predict}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{Predict()}
{\footnotesize\ttfamily void Predict (\begin{DoxyParamCaption}\item[{const arma\+::mat \&}]{points,  }\item[{arma\+::rowvec \&}]{predictions,  }\item[{const bool}]{row\+Major = {\ttfamily false} }\end{DoxyParamCaption}) const}



Predict y\+\_\+i for each data point in the given data matrix using the currently-\/trained \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} model. 


\begin{DoxyParams}{Parameters}
{\em points} & The data points to regress on. \\
\hline
{\em predictions} & The vector in which the calculated values of y will be stored on completion. \\
\hline
{\em row\+Major} & Should be true if the data points matrix is row-\/major and false otherwise. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1regression_1_1LARS_af0dd9205158ccf7bcfcd8ff81f79c927}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!serialize@{serialize}}
\index{serialize@{serialize}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{serialize()}
{\footnotesize\ttfamily void serialize (\begin{DoxyParamCaption}\item[{Archive \&}]{ar,  }\item[{const unsigned int}]{ }\end{DoxyParamCaption})}



Serialize the \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} model. 



Referenced by L\+A\+R\+S\+::\+Mat\+Utri\+Chol\+Factor().

\mbox{\label{classmlpack_1_1regression_1_1LARS_a6d3b55be8a7673b24b91100d71f88b83}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!Train@{Train}}
\index{Train@{Train}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{Train()\hspace{0.1cm}{\footnotesize\ttfamily [1/2]}}
{\footnotesize\ttfamily double Train (\begin{DoxyParamCaption}\item[{const arma\+::mat \&}]{data,  }\item[{const arma\+::rowvec \&}]{responses,  }\item[{arma\+::vec \&}]{beta,  }\item[{const bool}]{transpose\+Data = {\ttfamily true} }\end{DoxyParamCaption})}



Run \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS}. 

The input matrix (like all mlpack matrices) should be column-\/major -- each column is an observation and each row is a dimension. However, because \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} is more efficient on a row-\/major matrix, this method will (internally) transpose the matrix. If this transposition is not necessary (i.\+e., you want to pass in a row-\/major matrix), pass \textquotesingle{}false\textquotesingle{} for the transpose\+Data parameter.


\begin{DoxyParams}{Parameters}
{\em data} & Column-\/major input data (or row-\/major input data if transpose\+Data = false). \\
\hline
{\em responses} & A vector of targets. \\
\hline
{\em beta} & Vector to store the solution (the coefficients) in. \\
\hline
{\em transpose\+Data} & Set to false if the data is row-\/major. \\
\hline
\end{DoxyParams}
\begin{DoxyReturn}{Returns}
The minimum cost error ($ \| y - \beta X \|_2 $ is used to calculate the error). 
\end{DoxyReturn}
\mbox{\label{classmlpack_1_1regression_1_1LARS_a124534427646a88375731cb50e0adacc}} 
\index{mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}!Train@{Train}}
\index{Train@{Train}!mlpack\+::regression\+::\+L\+A\+RS@{mlpack\+::regression\+::\+L\+A\+RS}}
\subsubsection{Train()\hspace{0.1cm}{\footnotesize\ttfamily [2/2]}}
{\footnotesize\ttfamily double Train (\begin{DoxyParamCaption}\item[{const arma\+::mat \&}]{data,  }\item[{const arma\+::rowvec \&}]{responses,  }\item[{const bool}]{transpose\+Data = {\ttfamily true} }\end{DoxyParamCaption})}



Run \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS}. 

The input matrix (like all mlpack matrices) should be column-\/major -- each column is an observation and each row is a dimension. However, because \doxyref{L\+A\+RS}{p.}{classmlpack_1_1regression_1_1LARS} is more efficient on a row-\/major matrix, this method will (internally) transpose the matrix. If this transposition is not necessary (i.\+e., you want to pass in a row-\/major matrix), pass \textquotesingle{}false\textquotesingle{} for the transpose\+Data parameter.


\begin{DoxyParams}{Parameters}
{\em data} & Input data. \\
\hline
{\em responses} & A vector of targets. \\
\hline
{\em transpose\+Data} & Should be true if the input data is column-\/major and false otherwise. \\
\hline
\end{DoxyParams}
\begin{DoxyReturn}{Returns}
The minimum cost error ($ \| y - \beta X \|_2 $ is used to calculate the error). 
\end{DoxyReturn}


The documentation for this class was generated from the following file\+:\begin{DoxyCompactItemize}
\item 
/var/www/mlpack.\+ratml.\+org/mlpack.\+org/\+\_\+src/mlpack-\/3.\+3.\+2/src/mlpack/methods/lars/\textbf{ lars.\+hpp}\end{DoxyCompactItemize}
