torchbox.nn package

Submodules

torchbox.nn.activations module

torchbox.nn.activations.crelu(x)

Computes Concatenated ReLU.

Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation. Note that as a result this non-linearity doubles the depth of the activations. Source: Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units. W. Shang, et al.

Parameters

x (list or tensor) – inputs

Returns

outputs

Return type

tensor
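
A minimal plain-PyTorch sketch of the definition above (a reference only; the concatenation axis used by torchbox is an assumption here):

    import torch

    def crelu_ref(x, dim=-1):
        # concatenate the positive part and the negative part;
        # this doubles the size of the activation along `dim`
        return torch.cat((torch.relu(x), torch.relu(-x)), dim=dim)

    x = torch.tensor([-1.0, 0.0, 2.0])
    y = crelu_ref(x)  # tensor([0., 0., 2., 1., 0., 0.])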

torchbox.nn.activations.elu(x)

Computes exponential linear element-wise.

\[y = \begin{cases} x, & x \ge 0 \\ e^x - 1, & x < 0 \end{cases} \]

See Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)

Parameters

x (list or tensor) – inputs

Returns

outputs

Return type

tensor
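
A minimal plain-PyTorch sketch of the ELU formula above (a reference implementation, not necessarily how torchbox computes it):

    import torch

    def elu_ref(x):
        # x for x >= 0, e^x - 1 for x < 0
        return torch.where(x >= 0, x, torch.exp(x) - 1)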

torchbox.nn.activations.leaky_relu(x, alpha=0.2)

Computes the Leaky ReLU activation function.

\[y = \begin{cases} x, & x \ge 0 \\ \alpha x, & x < 0 \end{cases} \]

See Rectifier Nonlinearities Improve Neural Network Acoustic Models

Parameters
  • x (list or tensor) – inputs

  • alpha (float) – \(\alpha\)

Returns

outputs

Return type

tensor
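
A minimal plain-PyTorch sketch of the Leaky ReLU formula above (reference only):

    import torch

    def leaky_relu_ref(x, alpha=0.2):
        # x for x >= 0, alpha * x for x < 0
        return torch.where(x >= 0, x, alpha * x)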

torchbox.nn.activations.linear(x)

linear activation

\[y = x \]
Parameters

x (list or tensor) – inputs

Returns

outputs

Return type

tensor

torchbox.nn.activations.relu(x)

Computes the rectified linear activation.

\[{\rm max}(x, 0) \]
Parameters

x (list or tensor) – inputs

Returns

outputs

Return type

tensor

torchbox.nn.activations.relu6(x)

Computes Rectified Linear 6

\[{\rm min}({\rm max}(x, 0), 6) \]

See Convolutional Deep Belief Networks on CIFAR-10. A. Krizhevsky

Parameters

x (list or tensor) – inputs

Returns

outputs

Return type

tensor
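
The formula above is simply a ReLU clipped at 6; a plain-PyTorch sketch (reference only):

    import torch

    def relu6_ref(x):
        # min(max(x, 0), 6)
        return torch.clamp(x, min=0.0, max=6.0)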

torchbox.nn.activations.selu(x)

Computes the scaled exponential linear activation.

\[y = \lambda \begin{cases} x, & x \ge 0 \\ \alpha (e^x - 1), & x < 0 \end{cases} \]

where \(\alpha = 1.6732632423543772848170429916717\) and \(\lambda = 1.0507009873554804934193349852946\). See Self-Normalizing Neural Networks

Parameters

x (list or tensor) – inputs

Returns

outputs

Return type

tensor
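
A minimal plain-PyTorch sketch of the SELU formula above with the stated constants (reference only):

    import torch

    ALPHA = 1.6732632423543772848170429916717
    LAMBDA = 1.0507009873554804934193349852946

    def selu_ref(x):
        # lambda * x for x >= 0, lambda * alpha * (e^x - 1) for x < 0
        return LAMBDA * torch.where(x >= 0, x, ALPHA * (torch.exp(x) - 1))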

torchbox.nn.activations.sigmoid(x)

sigmoid function

\[y = \frac{e^x}{e^x + 1} \]
Parameters

x (list or tensor) – inputs

Returns

outputs

Return type

tensor
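
A plain-PyTorch sketch of the formula above (reference only; the \(e^x / (e^x + 1)\) form is algebraically identical to \(1 / (1 + e^{-x})\), though the explicit exponential can overflow for large positive x):

    import torch

    def sigmoid_ref(x):
        # e^x / (e^x + 1)
        return torch.exp(x) / (torch.exp(x) + 1)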

torchbox.nn.activations.softplus(x)

softplus function

\[{\rm log}(e^x + 1) \]
Parameters

x (list or tensor) – inputs

Returns

outputs

Return type

tensor
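
A plain-PyTorch sketch of the softplus formula above (reference only; a numerically robust version would use a log1p-style formulation):

    import torch

    def softplus_ref(x):
        # log(e^x + 1)
        return torch.log(torch.exp(x) + 1)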

torchbox.nn.activations.softsign(x)

softsign function

\[\frac{x} {({\rm abs}(x) + 1)} \]
Parameters

x (list or tensor) – inputs

Returns

outputs

Return type

tensor
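
A plain-PyTorch sketch of the softsign formula above (reference only):

    import torch

    def softsign_ref(x):
        # x / (|x| + 1)
        return x / (torch.abs(x) + 1)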

torchbox.nn.activations.swish(x, beta=1.0)

Swish function

\[y = x\cdot {\rm sigmoid}(\beta x) = {e^{(\beta x)} \over {e^{(\beta x)} + 1}} \cdot x \]

See “Searching for Activation Functions” (Ramachandran et al. 2017)

Parameters
  • x (list or tensor) – inputs

  • beta (float) – \(\beta\)

Returns

outputs

Return type

tensor
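
A minimal plain-PyTorch sketch of the Swish formula above (reference only; for \(\beta = 1\) it reduces to the SiLU):

    import torch

    def swish_ref(x, beta=1.0):
        # x * sigmoid(beta * x)
        return x * torch.sigmoid(beta * x)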

torchbox.nn.activations.tanh(x)

tanh function

\[y = {\rm tanh}(x) = {{e^{2x} - 1} \over {e^{2x} + 1}}. \]
Parameters

x (list or tensor) – inputs

Returns

outputs

Return type

tensor
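
A plain-PyTorch sketch of the tanh formula above (reference only; it matches torch.tanh(x) up to floating-point error, but the explicit exponential form can overflow for large x):

    import torch

    def tanh_ref(x):
        # (e^{2x} - 1) / (e^{2x} + 1)
        e2x = torch.exp(2 * x)
        return (e2x - 1) / (e2x + 1)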

Module contents