pyml.nn package

Submodules

pyml.nn.activations module

pyml.nn.activations.crelu(x)[source]

Computes Concatenated ReLU.

Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation. Note that as a result this non-linearity doubles the depth of the activations. Source: Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units. W. Shang, et al.

Arguments:
x {list or array} – inputs
Returns:
array – outputs
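
A minimal NumPy sketch of this behaviour (illustrative only, not the pyml implementation; the concatenation axis is an assumption):

import numpy as np

def crelu_sketch(x, axis=-1):
    # Concatenate ReLU(x) with ReLU(-x), doubling the depth of the activations.
    x = np.asarray(x, dtype=float)
    return np.concatenate([np.maximum(x, 0), np.maximum(-x, 0)], axis=axis)
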
pyml.nn.activations.elu(x)[source]

Computes the exponential linear unit element-wise: exp(x) - 1 if x < 0, x otherwise.

\[y = \begin{cases} x, & x \ge 0 \\ e^{x} - 1, & x < 0 \end{cases}.\]

See Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)

Arguments:
x {list or array} – inputs
Returns:
array – outputs
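
A minimal NumPy sketch of this formula (illustrative only, not the pyml implementation):

import numpy as np

def elu_sketch(x):
    # exp(x) - 1 for negative inputs, identity otherwise (np.expm1 computes exp(x) - 1).
    x = np.asarray(x, dtype=float)
    return np.where(x < 0, np.expm1(x), x)
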
pyml.nn.activations.leaky_relu(x, alpha=0.2)[source]

Computes the Leaky ReLU activation function.

\(y = \begin{cases} x, & x \ge 0 \\ \alpha x, & x < 0 \end{cases}\)

See Rectifier Nonlinearities Improve Neural Network Acoustic Models

Arguments:
x {list or array} – inputs
Returns:
array – outputs
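
A minimal NumPy sketch of this formula (illustrative only, not the pyml implementation):

import numpy as np

def leaky_relu_sketch(x, alpha=0.2):
    # Identity for x >= 0, alpha * x for x < 0.
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, x, alpha * x)
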
pyml.nn.activations.linear(x)[source]

Computes the linear (identity) activation.

\(y = x\)

Arguments:
x {list or array} – inputs
Returns:
array – outputs
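
A one-line NumPy sketch of the identity activation (illustrative only, not the pyml implementation):

import numpy as np

def linear_sketch(x):
    # y = x: the input is returned unchanged, as an array.
    return np.asarray(x, dtype=float)
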
pyml.nn.activations.relu(x)[source]

Computes rectified linear: max(x, 0).

\({\rm max}(x, 0)\)

Arguments:
x {list or array} – inputs
Returns:
array – outputs
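
A minimal NumPy sketch of this formula (illustrative only, not the pyml implementation):

import numpy as np

def relu_sketch(x):
    # Element-wise max(x, 0).
    return np.maximum(np.asarray(x, dtype=float), 0)
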
pyml.nn.activations.relu6(x)[source]

Computes Rectified Linear 6: min(max(x, 0), 6).

\({\rm min}({\rm max}(x, 0), 6)\)

See Convolutional Deep Belief Networks on CIFAR-10. A. Krizhevsky

Arguments:
x {list or array} – inputs
Returns:
array – outputs
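
A minimal NumPy sketch of this formula (illustrative only, not the pyml implementation):

import numpy as np

def relu6_sketch(x):
    # Element-wise min(max(x, 0), 6).
    return np.minimum(np.maximum(np.asarray(x, dtype=float), 0), 6)
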
pyml.nn.activations.selu(x)[source]

Computes scaled exponential linear: scale * alpha * (exp(x) - 1) if x < 0, scale * x otherwise.

\[y = \lambda \begin{cases} x, & x \ge 0 \\ \alpha (e^{x} - 1), & x < 0 \end{cases}\]

where \(\alpha = 1.6732632423543772848170429916717\) and \(\lambda = 1.0507009873554804934193349852946\).

See Self-Normalizing Neural Networks

Arguments:
x {list or array} – inputs
Returns:
array – outputs
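
A minimal NumPy sketch of this formula, using the constants quoted above (illustrative only, not the pyml implementation):

import numpy as np

def selu_sketch(x):
    alpha = 1.6732632423543772848170429916717
    scale = 1.0507009873554804934193349852946
    x = np.asarray(x, dtype=float)
    # scale * x for x >= 0, scale * alpha * (exp(x) - 1) for x < 0.
    return scale * np.where(x >= 0, x, alpha * np.expm1(x))
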
pyml.nn.activations.sigmoid(x)[source]

Computes the sigmoid function.

\[y = \frac{e^x}{e^x + 1}\]
Arguments:
x {list or array} – inputs
Returns:
array – outputs
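
A minimal NumPy sketch of this formula (illustrative only, not the pyml implementation); e^x / (e^x + 1) is written in the equivalent form 1 / (1 + e^(-x)):

import numpy as np

def sigmoid_sketch(x):
    x = np.asarray(x, dtype=float)
    return 1.0 / (1.0 + np.exp(-x))
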
pyml.nn.activations.softplus(x)[source]

Computes softplus: log(exp(x) + 1).

\({\rm log}(e^x + 1)\)

Arguments:
x {list or array} – inputs
Returns:
array – outputs
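
A minimal NumPy sketch of this formula (illustrative only, not the pyml implementation); np.logaddexp(0, x) evaluates log(exp(0) + exp(x)) = log(1 + exp(x)) without overflowing for large x:

import numpy as np

def softplus_sketch(x):
    # Numerically stable log(exp(x) + 1).
    return np.logaddexp(0.0, np.asarray(x, dtype=float))
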
pyml.nn.activations.softsign(x)[source]

Computes softsign: x / (abs(x) + 1).

\(\frac{x} {({\rm abs}(x) + 1)}\)

Arguments:
x {list or array} – inputs
Returns:
array – outputs
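
A minimal NumPy sketch of this formula (illustrative only, not the pyml implementation):

import numpy as np

def softsign_sketch(x):
    # Element-wise x / (|x| + 1).
    x = np.asarray(x, dtype=float)
    return x / (np.abs(x) + 1.0)
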
pyml.nn.activations.swish(x, beta=1.0)[source]

Computes the Swish activation function: x * sigmoid(beta*x).

\(y = x\cdot {\rm sigmoid}(\beta x) = {e^{(\beta x)} \over {e^{(\beta x)} + 1}} \cdot x\)

See “Searching for Activation Functions” (Ramachandran et al. 2017)

Arguments:
x {list or array} – inputs
Returns:
array – outputs
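
A minimal NumPy sketch of this formula (illustrative only, not the pyml implementation); x * sigmoid(beta * x) is written in the equivalent form x / (1 + e^(-beta * x)):

import numpy as np

def swish_sketch(x, beta=1.0):
    x = np.asarray(x, dtype=float)
    return x / (1.0 + np.exp(-beta * x))
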
pyml.nn.activations.tanh(x)[source]

Computes tanh of x element-wise.

Specifically

\[y = {\rm tanh}(x) = {{e^{2x} - 1} \over {e^{2x} + 1}}.\]
Arguments:
x {list or array} – inputs
Returns:
array – outputs
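
A minimal NumPy sketch (illustrative only, not the pyml implementation); np.tanh evaluates the same quantity as (e^(2x) - 1) / (e^(2x) + 1):

import numpy as np

def tanh_sketch(x):
    # Element-wise hyperbolic tangent.
    return np.tanh(np.asarray(x, dtype=float))
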

Module contents