\section{B\+R\+NN$<$ Output\+Layer\+Type, Merge\+Layer\+Type, Merge\+Output\+Type, Initialization\+Rule\+Type, Custom\+Layers $>$ Class Template Reference}
\label{classmlpack_1_1ann_1_1BRNN}\index{B\+R\+N\+N$<$ Output\+Layer\+Type, Merge\+Layer\+Type, Merge\+Output\+Type, Initialization\+Rule\+Type, Custom\+Layers $>$@{B\+R\+N\+N$<$ Output\+Layer\+Type, Merge\+Layer\+Type, Merge\+Output\+Type, Initialization\+Rule\+Type, Custom\+Layers $>$}}


Implementation of a standard bidirectional recurrent neural network container.  


\subsection*{Public Types}
\begin{DoxyCompactItemize}
\item 
using \textbf{ Network\+Type} = \textbf{ B\+R\+NN}$<$ Output\+Layer\+Type, Merge\+Layer\+Type, Merge\+Output\+Type, Initialization\+Rule\+Type, Custom\+Layers... $>$
\begin{DoxyCompactList}\small\item\em Convenience typedef for the internal model construction. \end{DoxyCompactList}\end{DoxyCompactItemize}
\subsection*{Public Member Functions}
\begin{DoxyCompactItemize}
\item 
\textbf{ B\+R\+NN} (const size\+\_\+t rho, const bool single=false, Output\+Layer\+Type output\+Layer=Output\+Layer\+Type(), Merge\+Layer\+Type $\ast$merge\+Layer=new Merge\+Layer\+Type(), Merge\+Output\+Type $\ast$merge\+Output=new Merge\+Output\+Type(), Initialization\+Rule\+Type initialize\+Rule=Initialization\+Rule\+Type())
\begin{DoxyCompactList}\small\item\em Create the \doxyref{B\+R\+NN}{p.}{classmlpack_1_1ann_1_1BRNN} object. \end{DoxyCompactList}\item 
\textbf{ $\sim$\+B\+R\+NN} ()
\item 
{\footnotesize template$<$class Layer\+Type , class... Args$>$ }\\void \textbf{ Add} (Args... args)
\item 
void \textbf{ Add} (\textbf{ Layer\+Types}$<$ Custom\+Layers... $>$ layer)
\item 
double \textbf{ Evaluate} (const arma\+::mat \&parameters, const size\+\_\+t begin, const size\+\_\+t batch\+Size, const bool deterministic)
\begin{DoxyCompactList}\small\item\em Evaluate the bidirectional recurrent neural network with the given parameters. \end{DoxyCompactList}\item 
double \textbf{ Evaluate} (const arma\+::mat \&parameters, const size\+\_\+t begin, const size\+\_\+t batch\+Size)
\begin{DoxyCompactList}\small\item\em Evaluate the bidirectional recurrent neural network with the given parameters. \end{DoxyCompactList}\item 
{\footnotesize template$<$typename Grad\+Type $>$ }\\double \textbf{ Evaluate\+With\+Gradient} (const arma\+::mat \&parameters, const size\+\_\+t begin, Grad\+Type \&gradient, const size\+\_\+t batch\+Size)
\begin{DoxyCompactList}\small\item\em Evaluate the bidirectional recurrent neural network with the given parameters and compute the gradient. \end{DoxyCompactList}\item 
void \textbf{ Gradient} (const arma\+::mat \&parameters, const size\+\_\+t begin, arma\+::mat \&gradient, const size\+\_\+t batch\+Size)
\begin{DoxyCompactList}\small\item\em Evaluate the gradient of the bidirectional recurrent neural network with the given parameters, and with respect to only one point in the dataset. \end{DoxyCompactList}\item 
size\+\_\+t \textbf{ Num\+Functions} () const
\begin{DoxyCompactList}\small\item\em Return the number of separable functions (the number of predictor points). \end{DoxyCompactList}\item 
const arma\+::mat \& \textbf{ Parameters} () const
\begin{DoxyCompactList}\small\item\em Return the initial point for the optimization. \end{DoxyCompactList}\item 
arma\+::mat \& \textbf{ Parameters} ()
\begin{DoxyCompactList}\small\item\em Modify the initial point for the optimization. \end{DoxyCompactList}\item 
void \textbf{ Predict} (arma\+::cube predictors, arma\+::cube \&results, const size\+\_\+t batch\+Size=256)
\begin{DoxyCompactList}\small\item\em Predict the responses to a given set of predictors. \end{DoxyCompactList}\item 
const arma\+::cube \& \textbf{ Predictors} () const
\begin{DoxyCompactList}\small\item\em Get the matrix of data points (predictors). \end{DoxyCompactList}\item 
arma\+::cube \& \textbf{ Predictors} ()
\begin{DoxyCompactList}\small\item\em Modify the matrix of data points (predictors). \end{DoxyCompactList}\item 
void \textbf{ Reset} ()
\begin{DoxyCompactList}\small\item\em Reset the state of the network. \end{DoxyCompactList}\item 
void \textbf{ Reset\+Parameters} ()
\begin{DoxyCompactList}\small\item\em Reset the module information (weights/parameters). \end{DoxyCompactList}\item 
const arma\+::cube \& \textbf{ Responses} () const
\begin{DoxyCompactList}\small\item\em Get the matrix of responses to the input data points. \end{DoxyCompactList}\item 
arma\+::cube \& \textbf{ Responses} ()
\begin{DoxyCompactList}\small\item\em Modify the matrix of responses to the input data points. \end{DoxyCompactList}\item 
const size\+\_\+t \& \textbf{ Rho} () const
\begin{DoxyCompactList}\small\item\em Return the maximum length of backpropagation through time. \end{DoxyCompactList}\item 
size\+\_\+t \& \textbf{ Rho} ()
\begin{DoxyCompactList}\small\item\em Modify the maximum length of backpropagation through time. \end{DoxyCompactList}\item 
{\footnotesize template$<$typename Archive $>$ }\\void \textbf{ serialize} (Archive \&ar, const unsigned int)
\begin{DoxyCompactList}\small\item\em Serialize the model. \end{DoxyCompactList}\item 
void \textbf{ Shuffle} ()
\begin{DoxyCompactList}\small\item\em Shuffle the order of function visitation. \end{DoxyCompactList}\item 
{\footnotesize template$<$typename Optimizer\+Type $>$ }\\double \textbf{ Train} (arma\+::cube predictors, arma\+::cube responses, Optimizer\+Type \&optimizer)
\begin{DoxyCompactList}\small\item\em Train the bidirectional recurrent neural network on the given input data using the given optimizer. \end{DoxyCompactList}\item 
{\footnotesize template$<$typename Optimizer\+Type  = ens\+::\+Standard\+S\+GD$>$ }\\double \textbf{ Train} (arma\+::cube predictors, arma\+::cube responses)
\begin{DoxyCompactList}\small\item\em Train the bidirectional recurrent neural network on the given input data. \end{DoxyCompactList}\item 
{\footnotesize template$<$typename Optimizer\+Type $>$ }\\std\+::enable\+\_\+if$<$ Has\+Max\+Iterations$<$ Optimizer\+Type, size\+\_\+t \&(Optimizer\+Type\+::$\ast$)()$>$\+::value, void $>$\+::type \textbf{ Warn\+Message\+Max\+Iterations} (Optimizer\+Type \&optimizer, size\+\_\+t samples) const
\begin{DoxyCompactList}\small\item\em Check whether the optimizer has a Max\+Iterations() parameter; if it does, check whether its value is less than the number of data points in the dataset. \end{DoxyCompactList}\item 
{\footnotesize template$<$typename Optimizer\+Type $>$ }\\std\+::enable\+\_\+if$<$ !Has\+Max\+Iterations$<$ Optimizer\+Type, size\+\_\+t \&(Optimizer\+Type\+::$\ast$)()$>$\+::value, void $>$\+::type \textbf{ Warn\+Message\+Max\+Iterations} (Optimizer\+Type \&optimizer, size\+\_\+t samples) const
\begin{DoxyCompactList}\small\item\em Check whether the optimizer has a Max\+Iterations() parameter; if it does not, simply return from the function. \end{DoxyCompactList}\end{DoxyCompactItemize}


\subsection{Detailed Description}
\subsubsection*{template$<$typename Output\+Layer\+Type = Negative\+Log\+Likelihood$<$$>$, typename Merge\+Layer\+Type = Concat$<$$>$, typename Merge\+Output\+Type = Log\+Soft\+Max$<$$>$, typename Initialization\+Rule\+Type = Random\+Initialization, typename... Custom\+Layers$>$\newline
class mlpack\+::ann\+::\+B\+R\+N\+N$<$ Output\+Layer\+Type, Merge\+Layer\+Type, Merge\+Output\+Type, Initialization\+Rule\+Type, Custom\+Layers $>$}

Implementation of a standard bidirectional recurrent neural network container. 


\begin{DoxyTemplParams}{Template Parameters}
{\em Output\+Layer\+Type} & The output layer type used to evaluate the network. \\
\hline
{\em Merge\+Layer\+Type} & The layer type used to merge the outputs of the forward and backward networks. \\
\hline
{\em Merge\+Output\+Type} & The output layer type applied to the merged output. \\
\hline
{\em Initialization\+Rule\+Type} & Rule used to initialize the weight matrix. \\
\hline
\end{DoxyTemplParams}


Definition at line 48 of file brnn.\+hpp.



\subsection{Member Typedef Documentation}
\mbox{\label{classmlpack_1_1ann_1_1BRNN_af977343dd09381ce7c70aa160dfbdd3d}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Network\+Type@{Network\+Type}}
\index{Network\+Type@{Network\+Type}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Network\+Type}
{\footnotesize\ttfamily using \textbf{ Network\+Type} =  \textbf{ B\+R\+NN}$<$Output\+Layer\+Type, Merge\+Layer\+Type, Merge\+Output\+Type, Initialization\+Rule\+Type, Custom\+Layers...$>$}



Convenience typedef for the internal model construction. 



Definition at line 56 of file brnn.\+hpp.



\subsection{Constructor \& Destructor Documentation}
\mbox{\label{classmlpack_1_1ann_1_1BRNN_ae88bb4928222f3ccbbaab4eaa7ab80fc}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!B\+R\+NN@{B\+R\+NN}}
\index{B\+R\+NN@{B\+R\+NN}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{B\+R\+N\+N()}
{\footnotesize\ttfamily \textbf{ B\+R\+NN} (\begin{DoxyParamCaption}\item[{const size\+\_\+t}]{rho,  }\item[{const bool}]{single = {\ttfamily false},  }\item[{Output\+Layer\+Type}]{output\+Layer = {\ttfamily OutputLayerType()},  }\item[{Merge\+Layer\+Type $\ast$}]{merge\+Layer = {\ttfamily new~MergeLayerType()},  }\item[{Merge\+Output\+Type $\ast$}]{merge\+Output = {\ttfamily new~MergeOutputType()},  }\item[{Initialization\+Rule\+Type}]{initialize\+Rule = {\ttfamily InitializationRuleType()} }\end{DoxyParamCaption})}



Create the \doxyref{B\+R\+NN}{p.}{classmlpack_1_1ann_1_1BRNN} object. 

Optionally, specify which initialization rule and performance function should be used.

If you want to pass in a parameter and discard the original parameter object, be sure to use std\+::move to avoid an unnecessary copy.


\begin{DoxyParams}{Parameters}
{\em rho} & Maximum number of steps to backpropagate through time (B\+P\+TT). \\
\hline
{\em single} & Predict only the last element of the input sequence. \\
\hline
{\em output\+Layer} & Output layer used to evaluate the network. \\
\hline
{\em merge\+Layer} & Merge layer used to combine the outputs of the forward and backward networks. \\
\hline
{\em merge\+Output} & Merge output layer to be used. \\
\hline
{\em initialize\+Rule} & Optional instantiated Initialization\+Rule object for initializing the network parameter. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1ann_1_1BRNN_acc79b090f76f6307689b493158afcc3b}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!````~B\+R\+NN@{$\sim$\+B\+R\+NN}}
\index{````~B\+R\+NN@{$\sim$\+B\+R\+NN}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{$\sim$\+B\+R\+N\+N()}
{\footnotesize\ttfamily $\sim$\textbf{ B\+R\+NN} (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption})}



\subsection{Member Function Documentation}
\mbox{\label{classmlpack_1_1ann_1_1BRNN_a8b5234495846c00f6b2c8296ca6bc718}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Add@{Add}}
\index{Add@{Add}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Add()\hspace{0.1cm}{\footnotesize\ttfamily [1/2]}}
{\footnotesize\ttfamily void \textbf{ Add} (\begin{DoxyParamCaption}\item[{Args...}]{args }\end{DoxyParamCaption})}

\mbox{\label{classmlpack_1_1ann_1_1BRNN_a503a807740e6c729be9efc89520db728}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Add@{Add}}
\index{Add@{Add}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Add()\hspace{0.1cm}{\footnotesize\ttfamily [2/2]}}
{\footnotesize\ttfamily void \textbf{ Add} (\begin{DoxyParamCaption}\item[{\textbf{ Layer\+Types}$<$ Custom\+Layers... $>$}]{layer }\end{DoxyParamCaption})}

\mbox{\label{classmlpack_1_1ann_1_1BRNN_a3e02a8743fd14b2a902a2e090da2df47}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Evaluate@{Evaluate}}
\index{Evaluate@{Evaluate}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Evaluate()\hspace{0.1cm}{\footnotesize\ttfamily [1/2]}}
{\footnotesize\ttfamily double Evaluate (\begin{DoxyParamCaption}\item[{const arma\+::mat \&}]{parameters,  }\item[{const size\+\_\+t}]{begin,  }\item[{const size\+\_\+t}]{batch\+Size,  }\item[{const bool}]{deterministic }\end{DoxyParamCaption})}



Evaluate the bidirectional recurrent neural network with the given parameters. 

This function is usually called by the optimizer to train the model.


\begin{DoxyParams}{Parameters}
{\em parameters} & Matrix model parameters. \\
\hline
{\em begin} & Index of the starting point to use for objective function evaluation. \\
\hline
{\em batch\+Size} & Number of points to be passed at a time to use for objective function evaluation. \\
\hline
{\em deterministic} & Whether to run the model in testing (deterministic) mode rather than training mode. Note that some layers act differently in training and testing modes. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1ann_1_1BRNN_a8a04cfd951b52327d7f2e148c68f365d}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Evaluate@{Evaluate}}
\index{Evaluate@{Evaluate}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Evaluate()\hspace{0.1cm}{\footnotesize\ttfamily [2/2]}}
{\footnotesize\ttfamily double Evaluate (\begin{DoxyParamCaption}\item[{const arma\+::mat \&}]{parameters,  }\item[{const size\+\_\+t}]{begin,  }\item[{const size\+\_\+t}]{batch\+Size }\end{DoxyParamCaption})}



Evaluate the bidirectional recurrent neural network with the given parameters. 

This function is usually called by the optimizer to train the model. This just calls the other overload of \doxyref{Evaluate()}{p.}{classmlpack_1_1ann_1_1BRNN_a3e02a8743fd14b2a902a2e090da2df47} with deterministic = true.


\begin{DoxyParams}{Parameters}
{\em parameters} & Matrix model parameters. \\
\hline
{\em begin} & Index of the starting point to use for objective function evaluation. \\
\hline
{\em batch\+Size} & Number of points to be passed at a time to use for objective function evaluation. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1ann_1_1BRNN_a3e01e9e3fe4f5bd8cfc78521567a0f5a}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Evaluate\+With\+Gradient@{Evaluate\+With\+Gradient}}
\index{Evaluate\+With\+Gradient@{Evaluate\+With\+Gradient}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Evaluate\+With\+Gradient()}
{\footnotesize\ttfamily double Evaluate\+With\+Gradient (\begin{DoxyParamCaption}\item[{const arma\+::mat \&}]{parameters,  }\item[{const size\+\_\+t}]{begin,  }\item[{Grad\+Type \&}]{gradient,  }\item[{const size\+\_\+t}]{batch\+Size }\end{DoxyParamCaption})}



Evaluate the bidirectional recurrent neural network with the given parameters, and compute the gradient of the objective with respect to them. 

This function is usually called by the optimizer to train the model; it computes the objective and its gradient in a single pass.


\begin{DoxyParams}{Parameters}
{\em parameters} & Matrix model parameters. \\
\hline
{\em begin} & Index of the starting point to use for objective function evaluation. \\
\hline
{\em gradient} & Matrix to output gradient into. \\
\hline
{\em batch\+Size} & Number of points to be passed at a time to use for objective function evaluation. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1ann_1_1BRNN_aca73798d93d56b280185c01502d8bd13}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Gradient@{Gradient}}
\index{Gradient@{Gradient}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Gradient()}
{\footnotesize\ttfamily void Gradient (\begin{DoxyParamCaption}\item[{const arma\+::mat \&}]{parameters,  }\item[{const size\+\_\+t}]{begin,  }\item[{arma\+::mat \&}]{gradient,  }\item[{const size\+\_\+t}]{batch\+Size }\end{DoxyParamCaption})}



Evaluate the gradient of the bidirectional recurrent neural network with the given parameters, and with respect to only one point in the dataset. 

This is useful for optimizers such as S\+GD, which require a separable objective function.


\begin{DoxyParams}{Parameters}
{\em parameters} & Matrix of the model parameters to be optimized. \\
\hline
{\em begin} & Index of the starting point to use for objective function gradient evaluation. \\
\hline
{\em gradient} & Matrix to output gradient into. \\
\hline
{\em batch\+Size} & Number of points to be processed as a batch for objective function gradient evaluation. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1ann_1_1BRNN_a1fa76af34a6e3ea927b307f0c318ee4b}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Num\+Functions@{Num\+Functions}}
\index{Num\+Functions@{Num\+Functions}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Num\+Functions()}
{\footnotesize\ttfamily size\+\_\+t Num\+Functions (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption}) const\hspace{0.3cm}{\ttfamily [inline]}}



Return the number of separable functions (the number of predictor points). 



Definition at line 283 of file brnn.\+hpp.

\mbox{\label{classmlpack_1_1ann_1_1BRNN_aa68d74dc1e86e4352e00a3cab83a0e4a}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Parameters@{Parameters}}
\index{Parameters@{Parameters}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Parameters()\hspace{0.1cm}{\footnotesize\ttfamily [1/2]}}
{\footnotesize\ttfamily const arma\+::mat\& Parameters (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption}) const\hspace{0.3cm}{\ttfamily [inline]}}



Return the initial point for the optimization. 



Definition at line 286 of file brnn.\+hpp.

\mbox{\label{classmlpack_1_1ann_1_1BRNN_a043f0ccd62e6711a18e0d81047be9a0a}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Parameters@{Parameters}}
\index{Parameters@{Parameters}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Parameters()\hspace{0.1cm}{\footnotesize\ttfamily [2/2]}}
{\footnotesize\ttfamily arma\+::mat\& Parameters (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption})\hspace{0.3cm}{\ttfamily [inline]}}



Modify the initial point for the optimization. 



Definition at line 288 of file brnn.\+hpp.

\mbox{\label{classmlpack_1_1ann_1_1BRNN_a01373d6adb1a306eb2b093c26d5d9031}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Predict@{Predict}}
\index{Predict@{Predict}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Predict()}
{\footnotesize\ttfamily void Predict (\begin{DoxyParamCaption}\item[{arma\+::cube}]{predictors,  }\item[{arma\+::cube \&}]{results,  }\item[{const size\+\_\+t}]{batch\+Size = {\ttfamily 256} }\end{DoxyParamCaption})}



Predict the responses to a given set of predictors. 

The responses will reflect the output of the given output layer.

If you want to pass in a parameter and discard the original parameter object, be sure to use std\+::move to avoid an unnecessary copy.

The format of the data should be as follows\+:
\begin{DoxyItemize}
\item each slice should correspond to a time step
\item each column should correspond to a data point
\item each row should correspond to a dimension
\end{DoxyItemize}
So, e.\+g., predictors(i, j, k) is the i\textquotesingle{}th dimension of the j\textquotesingle{}th data point at time slice k. The responses will be in the same format.


\begin{DoxyParams}{Parameters}
{\em predictors} & Input predictors. \\
\hline
{\em results} & Cube to store the predicted responses in. \\
\hline
{\em batch\+Size} & Number of points to predict at once. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1ann_1_1BRNN_ae1efcab525131a0bc040aff1a8bb8e2d}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Predictors@{Predictors}}
\index{Predictors@{Predictors}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Predictors()\hspace{0.1cm}{\footnotesize\ttfamily [1/2]}}
{\footnotesize\ttfamily const arma\+::cube\& Predictors (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption}) const\hspace{0.3cm}{\ttfamily [inline]}}



Get the matrix of data points (predictors). 



Definition at line 301 of file brnn.\+hpp.

\mbox{\label{classmlpack_1_1ann_1_1BRNN_a22c21d155cb5637e7d4e821564b27072}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Predictors@{Predictors}}
\index{Predictors@{Predictors}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Predictors()\hspace{0.1cm}{\footnotesize\ttfamily [2/2]}}
{\footnotesize\ttfamily arma\+::cube\& Predictors (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption})\hspace{0.3cm}{\ttfamily [inline]}}



Modify the matrix of data points (predictors). 



Definition at line 303 of file brnn.\+hpp.



References B\+R\+N\+N$<$ Output\+Layer\+Type, Merge\+Layer\+Type, Merge\+Output\+Type, Initialization\+Rule\+Type, Custom\+Layers $>$\+::\+Reset(), B\+R\+N\+N$<$ Output\+Layer\+Type, Merge\+Layer\+Type, Merge\+Output\+Type, Initialization\+Rule\+Type, Custom\+Layers $>$\+::\+Reset\+Parameters(), and B\+R\+N\+N$<$ Output\+Layer\+Type, Merge\+Layer\+Type, Merge\+Output\+Type, Initialization\+Rule\+Type, Custom\+Layers $>$\+::serialize().

\mbox{\label{classmlpack_1_1ann_1_1BRNN_a372de693ad40b3f42839c8ec6ac845f4}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Reset@{Reset}}
\index{Reset@{Reset}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Reset()}
{\footnotesize\ttfamily void Reset (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption})}



Reset the state of the network. 

This ensures that all internally-\/held gradients are set to 0, all memory cells are reset, and the parameters matrix is the right size. 

Referenced by B\+R\+N\+N$<$ Output\+Layer\+Type, Merge\+Layer\+Type, Merge\+Output\+Type, Initialization\+Rule\+Type, Custom\+Layers $>$\+::\+Predictors().

\mbox{\label{classmlpack_1_1ann_1_1BRNN_a7178038c3cb8d247eadb94cd2058c432}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Reset\+Parameters@{Reset\+Parameters}}
\index{Reset\+Parameters@{Reset\+Parameters}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Reset\+Parameters()}
{\footnotesize\ttfamily void Reset\+Parameters (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption})}



Reset the module information (weights/parameters). 



Referenced by B\+R\+N\+N$<$ Output\+Layer\+Type, Merge\+Layer\+Type, Merge\+Output\+Type, Initialization\+Rule\+Type, Custom\+Layers $>$\+::\+Predictors().

\mbox{\label{classmlpack_1_1ann_1_1BRNN_ac109136a291d66f916e382c08a04a8a6}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Responses@{Responses}}
\index{Responses@{Responses}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Responses()\hspace{0.1cm}{\footnotesize\ttfamily [1/2]}}
{\footnotesize\ttfamily const arma\+::cube\& Responses (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption}) const\hspace{0.3cm}{\ttfamily [inline]}}



Get the matrix of responses to the input data points. 



Definition at line 296 of file brnn.\+hpp.

\mbox{\label{classmlpack_1_1ann_1_1BRNN_a1b3ccc9c0245210305d8dd2df8693f3c}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Responses@{Responses}}
\index{Responses@{Responses}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Responses()\hspace{0.1cm}{\footnotesize\ttfamily [2/2]}}
{\footnotesize\ttfamily arma\+::cube\& Responses (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption})\hspace{0.3cm}{\ttfamily [inline]}}



Modify the matrix of responses to the input data points. 



Definition at line 298 of file brnn.\+hpp.

\mbox{\label{classmlpack_1_1ann_1_1BRNN_a8cc9a89f8ec7de47890a86665a393450}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Rho@{Rho}}
\index{Rho@{Rho}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Rho()\hspace{0.1cm}{\footnotesize\ttfamily [1/2]}}
{\footnotesize\ttfamily const size\+\_\+t\& Rho (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption}) const\hspace{0.3cm}{\ttfamily [inline]}}



Return the maximum length of backpropagation through time. 



Definition at line 291 of file brnn.\+hpp.

\mbox{\label{classmlpack_1_1ann_1_1BRNN_aeb617af2894a3e4bbabcd7ebc30a35af}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Rho@{Rho}}
\index{Rho@{Rho}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Rho()\hspace{0.1cm}{\footnotesize\ttfamily [2/2]}}
{\footnotesize\ttfamily size\+\_\+t\& Rho (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption})\hspace{0.3cm}{\ttfamily [inline]}}



Modify the maximum length of backpropagation through time. 



Definition at line 293 of file brnn.\+hpp.

\mbox{\label{classmlpack_1_1ann_1_1BRNN_af0dd9205158ccf7bcfcd8ff81f79c927}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!serialize@{serialize}}
\index{serialize@{serialize}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{serialize()}
{\footnotesize\ttfamily void serialize (\begin{DoxyParamCaption}\item[{Archive \&}]{ar,  }\item[{const unsigned}]{int }\end{DoxyParamCaption})}



Serialize the model. 



Referenced by B\+R\+N\+N$<$ Output\+Layer\+Type, Merge\+Layer\+Type, Merge\+Output\+Type, Initialization\+Rule\+Type, Custom\+Layers $>$\+::\+Predictors().

\mbox{\label{classmlpack_1_1ann_1_1BRNN_a2697cc8b37d7bca7c055228382a9b208}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Shuffle@{Shuffle}}
\index{Shuffle@{Shuffle}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Shuffle()}
{\footnotesize\ttfamily void Shuffle (\begin{DoxyParamCaption}{ }\end{DoxyParamCaption})}



Shuffle the order of function visitation. 

This may be called by the optimizer. \mbox{\label{classmlpack_1_1ann_1_1BRNN_a443a41ed01d03b2962d175cfab448c2c}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Train@{Train}}
\index{Train@{Train}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Train()\hspace{0.1cm}{\footnotesize\ttfamily [1/2]}}
{\footnotesize\ttfamily double Train (\begin{DoxyParamCaption}\item[{arma\+::cube}]{predictors,  }\item[{arma\+::cube}]{responses,  }\item[{Optimizer\+Type \&}]{optimizer }\end{DoxyParamCaption})}



Train the bidirectional recurrent neural network on the given input data using the given optimizer. 

This will use the existing model parameters as a starting point for the optimization. If this is not what you want, then you should access the parameters vector directly with \doxyref{Parameters()}{p.}{classmlpack_1_1ann_1_1BRNN_a043f0ccd62e6711a18e0d81047be9a0a} and modify it as desired.

If you want to pass in a parameter and discard the original parameter object, be sure to use std\+::move to avoid an unnecessary copy.

The format of the data should be as follows\+:
\begin{DoxyItemize}
\item each slice should correspond to a time step
\item each column should correspond to a data point
\item each row should correspond to a dimension
\end{DoxyItemize}
So, e.\+g., predictors(i, j, k) is the i\textquotesingle{}th dimension of the j\textquotesingle{}th data point at time slice k.


\begin{DoxyTemplParams}{Template Parameters}
{\em Optimizer\+Type} & Type of optimizer to use to train the model. \\
\hline
\end{DoxyTemplParams}

\begin{DoxyParams}{Parameters}
{\em predictors} & Input training variables. \\
\hline
{\em responses} & Target outputs corresponding to the input training variables. \\
\hline
{\em optimizer} & Instantiated optimizer used to train the model. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1ann_1_1BRNN_a86403caae7a2b23c5c7f151bfa7d9c3e}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Train@{Train}}
\index{Train@{Train}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Train()\hspace{0.1cm}{\footnotesize\ttfamily [2/2]}}
{\footnotesize\ttfamily double Train (\begin{DoxyParamCaption}\item[{arma\+::cube}]{predictors,  }\item[{arma\+::cube}]{responses }\end{DoxyParamCaption})}



Train the bidirectional recurrent neural network on the given input data. 

By default, the S\+GD optimization algorithm is used, but others can be specified (such as ens\+::\+R\+M\+Sprop).

This will use the existing model parameters as a starting point for the optimization. If this is not what you want, then you should access the parameters vector directly with \doxyref{Parameters()}{p.}{classmlpack_1_1ann_1_1BRNN_a043f0ccd62e6711a18e0d81047be9a0a} and modify it as desired.

If you want to pass in a parameter and discard the original parameter object, be sure to use std\+::move to avoid an unnecessary copy.

The format of the data should be as follows\+:
\begin{DoxyItemize}
\item each slice should correspond to a time step
\item each column should correspond to a data point
\item each row should correspond to a dimension
\end{DoxyItemize}
So, e.\+g., predictors(i, j, k) is the i\textquotesingle{}th dimension of the j\textquotesingle{}th data point at time slice k.


\begin{DoxyTemplParams}{Template Parameters}
{\em Optimizer\+Type} & Type of optimizer to use to train the model. \\
\hline
\end{DoxyTemplParams}

\begin{DoxyParams}{Parameters}
{\em predictors} & Input training variables. \\
\hline
{\em responses} & Target outputs corresponding to the input training variables. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1ann_1_1BRNN_a5a15e2d0145f9a32c5b439d9d1949908}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Warn\+Message\+Max\+Iterations@{Warn\+Message\+Max\+Iterations}}
\index{Warn\+Message\+Max\+Iterations@{Warn\+Message\+Max\+Iterations}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Warn\+Message\+Max\+Iterations()\hspace{0.1cm}{\footnotesize\ttfamily [1/2]}}
{\footnotesize\ttfamily std\+::enable\+\_\+if$<$ Has\+Max\+Iterations$<$Optimizer\+Type, size\+\_\+t\&(Optimizer\+Type\+::$\ast$)()$>$\+::value, void$>$\+::type Warn\+Message\+Max\+Iterations (\begin{DoxyParamCaption}\item[{Optimizer\+Type \&}]{optimizer,  }\item[{size\+\_\+t}]{samples }\end{DoxyParamCaption}) const}



Check whether the optimizer has a Max\+Iterations() parameter; if it does, check whether its value is less than the number of data points in the dataset. 


\begin{DoxyTemplParams}{Template Parameters}
{\em Optimizer\+Type} & Type of optimizer to use to train the model. \\
\hline
\end{DoxyTemplParams}

\begin{DoxyParams}{Parameters}
{\em optimizer} & Optimizer used in the training process. \\
\hline
{\em samples} & Number of datapoints in the dataset. \\
\hline
\end{DoxyParams}
\mbox{\label{classmlpack_1_1ann_1_1BRNN_a02d59875a99c37ae210353881619d1af}} 
\index{mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}!Warn\+Message\+Max\+Iterations@{Warn\+Message\+Max\+Iterations}}
\index{Warn\+Message\+Max\+Iterations@{Warn\+Message\+Max\+Iterations}!mlpack\+::ann\+::\+B\+R\+NN@{mlpack\+::ann\+::\+B\+R\+NN}}
\subsubsection{Warn\+Message\+Max\+Iterations()\hspace{0.1cm}{\footnotesize\ttfamily [2/2]}}
{\footnotesize\ttfamily std\+::enable\+\_\+if$<$ !Has\+Max\+Iterations$<$Optimizer\+Type, size\+\_\+t\&(Optimizer\+Type\+::$\ast$)()$>$\+::value, void$>$\+::type Warn\+Message\+Max\+Iterations (\begin{DoxyParamCaption}\item[{Optimizer\+Type \&}]{optimizer,  }\item[{size\+\_\+t}]{samples }\end{DoxyParamCaption}) const}



Check whether the optimizer has a Max\+Iterations() parameter; if it does not, simply return from the function. 


\begin{DoxyTemplParams}{Template Parameters}
{\em Optimizer\+Type} & Type of optimizer to use to train the model. \\
\hline
\end{DoxyTemplParams}

\begin{DoxyParams}{Parameters}
{\em optimizer} & Optimizer used in the training process. \\
\hline
{\em samples} & Number of datapoints in the dataset. \\
\hline
\end{DoxyParams}


The documentation for this class was generated from the following file\+:\begin{DoxyCompactItemize}
\item 
/var/www/mlpack.\+ratml.\+org/mlpack.\+org/\+\_\+src/mlpack-\/3.\+3.\+2/src/mlpack/methods/ann/\textbf{ brnn.\+hpp}\end{DoxyCompactItemize}
