GatedRecurrentLayer

GatedRecurrentLayer[n]

represents a trainable recurrent layer that takes a sequence of vectors and produces a sequence of vectors each of size n.

GatedRecurrentLayer[n,opts]

includes options for initial weights and other parameters.

Details and Options

  • GatedRecurrentLayer[n] represents a net that takes an input matrix representing a sequence of vectors and outputs a sequence of the same length.
  • Each element of the input sequence is a vector of size k, and each element of the output sequence is a vector of size n.
  • The size k of the input vectors is usually inferred automatically within a NetGraph, NetChain, etc.
  • The input and output ports of the net represented by GatedRecurrentLayer[n] are:
  • "Input"a sequence of vectors of size k
    "Output"a sequence of vectors of size n
  • Given an input sequence {x_1, x_2, …, x_T}, a GatedRecurrentLayer outputs a sequence of states {s_1, s_2, …, s_T} using the following recurrence relation:
  • input gate    i_t = LogisticSigmoid[W_ix . x_t + W_is . s_(t-1) + b_i]
    reset gate    r_t = LogisticSigmoid[W_rx . x_t + W_rs . s_(t-1) + b_r]
    memory gate    m_t = Tanh[W_mx . x_t + r_t * (W_ms . s_(t-1)) + b_m]
    state    s_t = (1 - i_t) * m_t + i_t * s_(t-1)
  • The above definition of GatedRecurrentLayer is based on the variant described in Chung et al., "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling", 2014. (A numerical check of this recurrence appears in the sketch following this list.)
  • GatedRecurrentLayer[n] also has a single state port, "State", which is a vector of size n.
  • Within a NetGraph, a connection of the form src->NetPort[layer,"State"] can be used to provide the initial value of the state for a GatedRecurrentLayer, corresponding to s0 in the recurrence relation. The default initial value is a zero vector.
  • Within a NetGraph, a connection of the form NetPort[layer,"State"]->dst can be used to obtain the final value of the state for a GatedRecurrentLayer, corresponding to sT in the recurrence relation.
  • NetStateObject can be used to create a net that remembers the value of the state of GatedRecurrentLayer and updates it each time the net is applied to input.
  • An initialized GatedRecurrentLayer[…] that operates on vectors of size k contains the following trainable arrays:
  • "InputGateInputWeights"Wixmatrix of size n×k
    "InputGateStateWeights"Wismatrix of size n×n
    "InputGateBiases"bivector of size n
    "ResetGateInputWeights"Wrxmatrix of size n×k
    "ResetGateStateWeights"Wrsmatrix of size n×n
    "ResetGateBiases"brvector of size n
    "MemoryGateInputWeights"Wmxmatrix of size n×k
    "MemoryGateStateWeights"Wmsmatrix of size n×n
    "MemoryGateBiases"bmvector of size n
  • In GatedRecurrentLayer[n,opts], initial values can be given to the trainable arrays using a rule of the form "array"->value; NetExtract can be used to read them back (see the sketch following this list).
  • The following training parameters can be included:
  • "Dropout" Nonedropout regularization, in which units are probabilistically set to zero
    LearningRateMultipliersAutomaticlearning rate multipliers for the trainable arrays
  • Specifying "Dropout"->None disables dropout during training.
  • Specifying "Dropout"->p applies an automatically chosen dropout method with dropout probability p.
  • Specifying "Dropout"->{"method1"->p1,"method2"->p2,} can be used to combine specific methods of dropout with the corresponding dropout probabilities. Possible methods include:
  • "VariationalWeights"dropout applied to the recurrent connections between weight matrices (default)
    "VariationalInput"dropout applied to the gate contributions from the input, using the same pattern of units at each sequence step
    "VariationalState"dropout applied to the gate contributions from the previous state, using the same pattern of units at each sequence step
    "StateUpdate"dropout applied to the state update vector prior to it being added to the previous state, using a different pattern of units at each sequence step
  • The dropout methods "VariationalInput" and "VariationalState" are based on the Gal et al. 2016 method, while "StateUpdate" is based on the Semeniuta et al. 2016 method and "VariationalWeights" is based on the Merity et al. 2017 method.
  • GatedRecurrentLayer[n,"Input"->shape] allows the shape of the input to be specified. Possible forms for shape are:
  • NetEncoder[…]    encoder producing a sequence of vectors
    {len,k}    sequence of len length-k vectors
    {len,Automatic}    sequence of len vectors whose length is inferred
    {"Varying",k}    varying number of vectors, each of length k
    {"Varying",Automatic}    varying number of vectors, each of inferred length
  • When given a NumericArray as input, the output will be a NumericArray.
  • Options[GatedRecurrentLayer] gives the list of default options to construct the layer. Options[GatedRecurrentLayer[…]] gives the list of default options to evaluate the layer on some data.
  • Information[GatedRecurrentLayer[…]] gives a report about the layer.
  • Information[GatedRecurrentLayer[…],prop] gives the value of the property prop of GatedRecurrentLayer[…]. Possible properties are the same as for NetGraph.
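
The recurrence relation and the trainable arrays listed above can be checked directly. A minimal sketch, assuming input vectors of size 2 and arbitrary input values (the weights come from random initialization):

  gru = NetInitialize[GatedRecurrentLayer[3, "Input" -> {"Varying", 2}]];
  {wix, wis, bi, wrx, wrs, br, wmx, wms, bm} = Normal@NetExtract[gru, #] & /@
    {"InputGateInputWeights", "InputGateStateWeights", "InputGateBiases",
     "ResetGateInputWeights", "ResetGateStateWeights", "ResetGateBiases",
     "MemoryGateInputWeights", "MemoryGateStateWeights", "MemoryGateBiases"};
  step[s_, x_] := With[
    {i = LogisticSigmoid[wix.x + wis.s + bi],
     r = LogisticSigmoid[wrx.x + wrs.s + br]},
    With[{m = Tanh[wmx.x + r*(wms.s) + bm]}, (1 - i)*m + i*s]];
  input = {{0.1, 0.2}, {0.3, 0.4}, {0.5, 0.6}};
  manual = Rest@FoldList[step, {0., 0., 0.}, input];  (* s_0 is a zero vector *)
  Max@Abs[manual - Normal@gru[input]]  (* should be close to zero *)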
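
Options and properties can be inspected as described above; a brief sketch:

  Options[GatedRecurrentLayer]
  Information[NetInitialize[GatedRecurrentLayer[3, "Input" -> {"Varying", 2}]]]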

Examples


Basic Examples  (2)

Create a GatedRecurrentLayer that produces a sequence of length-3 vectors:
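
A minimal sketch:

  GatedRecurrentLayer[3]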

Create a randomly initialized GatedRecurrentLayer that takes a sequence of length-2 vectors and produces a sequence of length-3 vectors:

Apply the layer to an input sequence:
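
A sketch of these two steps, assuming input vectors of size 2 (the input values are arbitrary):

  gru = NetInitialize[GatedRecurrentLayer[3, "Input" -> {"Varying", 2}]];
  gru[{{0.1, 0.2}, {0.3, 0.4}, {0.5, 0.6}}]  (* a sequence of three length-3 vectors *)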

Scope  (5)

Arguments  (1)

Create a GatedRecurrentLayer that produces a sequence of length-3 vectors:
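
As in the basic example above, a minimal sketch:

  GatedRecurrentLayer[3]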

Ports  (4)

Create a randomly initialized GatedRecurrentLayer that takes a string and produces a sequence of length-2 vectors:

Apply the layer to an input string:

Thread the layer over a batch of inputs:
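
A sketch of these steps, assuming a character-level NetEncoder over the illustrative alphabet {"a", "b", "c"}:

  gru = NetInitialize[GatedRecurrentLayer[2,
     "Input" -> NetEncoder[{"Characters", {"a", "b", "c"}, "UnitVector"}]]];
  gru["abca"]         (* one length-2 vector per character *)
  gru[{"ab", "cab"}]  (* threads over a batch of strings *)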

Create a randomly initialized net that takes a sequence of length-2 vectors and produces a single length-3 vector:

Apply the layer to an input:

Thread the layer across a batch of inputs:
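
A sketch using SequenceLastLayer to keep only the last element of the output sequence (sizes and values are illustrative):

  net = NetInitialize[NetChain[{GatedRecurrentLayer[3], SequenceLastLayer[]},
     "Input" -> {"Varying", 2}]];
  net[{{0.1, 0.2}, {0.3, 0.4}}]                  (* a single length-3 vector *)
  net[{{{0.1, 0.2}}, {{0.3, 0.4}, {0.5, 0.6}}}]  (* a batch of two sequences *)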

Create a NetGraph that allows the initial state of a GatedRecurrentLayer to be set:
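
A sketch of such a graph; the graph-level port name "InitialState" is an illustrative choice:

  net = NetInitialize[NetGraph[{GatedRecurrentLayer[3]},
     {NetPort["InitialState"] -> NetPort[1, "State"]}, "Input" -> {"Varying", 2}]];
  net[<|"Input" -> {{0.1, 0.2}, {0.3, 0.4}}, "InitialState" -> {1., -1., 0.5}|>]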

Create a NetGraph that allows the final state of a GatedRecurrentLayer to be obtained:

The final state is the last element of the output sequence:
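
A sketch of such a graph together with the check; the output port name "State" is an illustrative choice:

  net = NetInitialize[NetGraph[{GatedRecurrentLayer[3]},
     {NetPort[1, "State"] -> NetPort["State"]}, "Input" -> {"Varying", 2}]];
  out = net[{{0.1, 0.2}, {0.3, 0.4}}];
  out["State"] == Last[out["Output"]]  (* expected: True *)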

Options  (2)

"Dropout"  (2)

Create a GatedRecurrentLayer with the dropout method specified:
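
For instance, a sketch selecting a single dropout method:

  GatedRecurrentLayer[3, "Dropout" -> {"VariationalInput" -> 0.5}]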

Create a randomly initialized GatedRecurrentLayer with specified dropout probability:

Evaluate the layer on a sequence of vectors:

Dropout has no effect during evaluation:

Use NetEvaluationMode to force the training behavior of dropout:

Multiple evaluations on the same input can give different results:
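
A sketch of these steps (sizes, probability and input values are illustrative):

  gru = NetInitialize[GatedRecurrentLayer[3, "Dropout" -> 0.5, "Input" -> {"Varying", 2}]];
  in = {{0.1, 0.2}, {0.3, 0.4}};
  gru[in]                                (* dropout has no effect here *)
  gru[in] == gru[in]                     (* True *)
  gru[in, NetEvaluationMode -> "Train"]  (* dropout active *)
  gru[in, NetEvaluationMode -> "Train"] ==
    gru[in, NetEvaluationMode -> "Train"]  (* typically False *)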

Applications  (2)

Create training data consisting of strings that describe two-digit additions and the corresponding numeric result:

Create a network using stacked GatedRecurrentLayer layers that reads the input string and predicts the numeric result:

Train the network:

Apply the trained network to a list of inputs:
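
One possible realization of this example; the encoder alphabet, layer sizes and training settings are illustrative choices, not necessarily the original ones:

  data = Flatten@Table[
     ToString[i] <> "+" <> ToString[j] -> N[i + j], {i, 10, 99}, {j, 10, 99}];
  net = NetChain[
    {GatedRecurrentLayer[32], GatedRecurrentLayer[32], SequenceLastLayer[], LinearLayer[]},
    "Input" -> NetEncoder[{"Characters", Characters["0123456789+"], "UnitVector"}],
    "Output" -> "Scalar"];
  trained = NetTrain[net, data, MaxTrainingRounds -> 20];
  trained[{"25+33", "90+17"}]  (* values near 58 and 107 *)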

Create training data of strings containing x's and y's, each labeled with Less, Greater or Equal according to whether the string contains fewer, more or equally many x's than y's. The training data consists of all possible such strings up to length 8:

Create a network containing a GatedRecurrentLayer to read an input string and predict one of Less, Greater or Equal:

Train the network:

Apply the trained network to a list of inputs:

Measure the accuracy on the entire training set:
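
A sketch of one way to realize this; the layer sizes are illustrative, and symbols are used directly as class labels:

  strings = Flatten@Table[StringJoin /@ Tuples[{"x", "y"}, n], {n, 8}];
  compare[s_] := Which[
    StringCount[s, "x"] < StringCount[s, "y"], Less,
    StringCount[s, "x"] > StringCount[s, "y"], Greater,
    True, Equal];
  data = # -> compare[#] & /@ strings;
  net = NetChain[
    {GatedRecurrentLayer[16], SequenceLastLayer[], LinearLayer[3], SoftmaxLayer[]},
    "Input" -> NetEncoder[{"Characters", {"x", "y"}, "UnitVector"}],
    "Output" -> NetDecoder[{"Class", {Less, Greater, Equal}}]];
  trained = NetTrain[net, data];
  trained[{"xxyx", "xyxy", "yyxyy"}]  (* expected: {Greater, Equal, Less} *)
  NetMeasurements[trained, data, "Accuracy"]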

Properties & Relations  (1)

NetStateObject can be used to create a net that remembers the state of GatedRecurrentLayer:

Each evaluation modifies the state stored inside the NetStateObject:
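
A minimal sketch (sizes and values are illustrative):

  gru = NetInitialize[GatedRecurrentLayer[3, "Input" -> {"Varying", 2}]];
  sgru = NetStateObject[gru];
  sgru[{{0.1, 0.2}, {0.3, 0.4}}]
  sgru[{{0.1, 0.2}, {0.3, 0.4}}]  (* differs: the stored state was updated *)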

Cite this as:

Text

Wolfram Research (2017), GatedRecurrentLayer, Wolfram Language function, https://reference.wolfram.com/language/ref/GatedRecurrentLayer.html.

CMS

Wolfram Language. 2017. "GatedRecurrentLayer." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/ref/GatedRecurrentLayer.html.

APA

Wolfram Language. (2017). GatedRecurrentLayer. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/GatedRecurrentLayer.html

BibTeX

@misc{reference.wolfram_2023_gatedrecurrentlayer, author="Wolfram Research", title="{GatedRecurrentLayer}", year="2017", howpublished="\url{https://reference.wolfram.com/language/ref/GatedRecurrentLayer.html}", note="[Accessed: 20-April-2024]"}

BibLaTeX

@online{reference.wolfram_2023_gatedrecurrentlayer, organization={Wolfram Research}, title={GatedRecurrentLayer}, year={2017}, url={https://reference.wolfram.com/language/ref/GatedRecurrentLayer.html}, note={[Accessed: 20-April-2024]}}