torchbox.module.layers package

Submodules

torchbox.module.layers.balanceconv2d module

class torchbox.module.layers.balanceconv2d.BalaConv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=False, padding_mode='zeros')

Bases: torch.nn.modules.conv._ConvNd

Applies a 2D Balanced convolution over an input signal composed of several input planes.

In the simplest case, the output value of the layer with input size \((N, C_{\text{in}}, H, W)\) and output \((N, C_{\text{out}}, H_{\text{out}}, W_{\text{out}})\) can be precisely described as:

(1)\[{\bm Z}_{n_o, c_i, h_o, w_o} = \sum_{h=0}^{H_k-1}\sum_{w=0}^{W_k-1} \left[{\bm I}_{n_o, c_i, h_o + h - 1, w_o + w - 1} + {\bm K}_{c_o, h, w} - {\bm I}_{n_o, c_i, h_o + h - 1, w_o + w - 1} \cdot {\bm K}_{c_o, h, w}\right]. \]

where \(N\) is the batch size, \(C\) denotes the number of channels, \(H\) is the height of the input planes in pixels, and \(W\) is the width in pixels. Each summand \({\bm I} + {\bm K} - {\bm I} \cdot {\bm K}\) can be read as the algebraic (probabilistic) sum of the input and kernel values, replacing the product \({\bm I} \cdot {\bm K}\) accumulated by an ordinary cross-correlation.

  • stride controls the stride for the cross-correlation, a single number or a tuple.

  • padding controls the amount of implicit zero-paddings on both sides for padding number of points for each dimension.

  • dilation controls the spacing between the kernel points; this is also known as the à trous algorithm.

  • groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

    • At groups=1, all inputs are convolved to all outputs.

    • At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.

    • At groups= in_channels, each input channel is convolved with its own set of filters, of size: \(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\).

The parameters kernel_size, stride, padding, dilation can either be:

  • a single int – in which case the same value is used for the height and width dimension

  • a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension

Note

Depending on the size of your kernel, several of the last columns of the input might be lost, because it is a valid cross-correlation, not a full cross-correlation. It is up to the user to add proper padding.

Note

When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known in the literature as a depthwise convolution.

In other words, for an input of size \((N, C_{in}, H_{in}, W_{in})\), a depthwise convolution with a depthwise multiplier K, can be constructed by arguments \((in\_channels=C_{in}, out\_channels=C_{in} \times K, ..., groups=C_{in})\).
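For instance, a minimal sketch of a depthwise balanced convolution with depthwise multiplier \(K = 2\), assuming the constructor mirrors the torch.nn.Conv2d-style signature shown above:

>>> from torchbox.module.layers.balanceconv2d import BalaConv2d
>>> # depthwise: groups == in_channels, out_channels == 2 * in_channels
>>> m = BalaConv2d(16, 32, kernel_size=3, groups=16)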

Parameters
  • in_channels (int) – Number of channels in the input image

  • out_channels (int) – Number of channels produced by the convolution

  • kernel_size (int or tuple) – Size of the convolving kernel

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1

  • padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0

  • padding_mode (string, optional) – padding mode. Default: 'zeros'

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1

  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: False

Shape:
  • Input: \((N, C_{in}, H_{in}, W_{in})\)

  • Output: \((N, C_{out}, H_{out}, W_{out})\) where

    \[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor \]
    \[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor \]
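For example, an input of size \((20, 16, 50, 100)\) with kernel_size=(3, 5), stride=(2, 1), padding=(4, 2) and dilation=(3, 1) (the third configuration in the Examples below) gives \(H_{out} = \left\lfloor\frac{50 + 8 - 3 \times 2 - 1}{2} + 1\right\rfloor = 26\) and \(W_{out} = \left\lfloor\frac{100 + 4 - 1 \times 4 - 1}{1} + 1\right\rfloor = 100\), i.e. an output of shape \((20, 33, 26, 100)\).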
weight

the learnable weights of the module of shape \((\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}},\) \(\text{kernel\_size[0]}, \text{kernel\_size[1]})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{C_\text{in} * \prod_{i=0}^{1}\text{kernel\_size}[i]}\)

Type

Tensor

bias

the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{C_\text{in} * \prod_{i=0}^{1}\text{kernel\_size}[i]}\)

Type

Tensor

Examples:

>>> import torch
>>> from torchbox.module.layers.balanceconv2d import BalaConv2d
>>> # With square kernels and equal stride
>>> m = BalaConv2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = BalaConv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = BalaConv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input)
forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchbox.module.layers.cnnsize module

torchbox.module.layers.cnnsize.ConvSize1d(CLi, Co, K, S, P, D=1, groups=1)

Compute shape after 1D-Convolution

\[\begin{array}{l} L_{o} &= \left\lfloor\frac{L_{i} + 2 \times P_l - D_l \times (K_l - 1) - 1}{S_l} + 1\right\rfloor \\ \end{array} \]
Parameters
  • CLi (tuple or list) – input data shape (C, L)

  • Co (int) – number of output channels.

  • K (tuple) – kernel size

  • S (tuple) – stride size

  • P (tuple) – padding size

  • D (tuple, optional) – dilation size (the default is 1)

  • groups (int, optional) – number of groups (the default is 1)

Returns

shape after 1D-Convolution

Return type

tuple

Raises

ValueError – dilation should be greater than zero.
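For example, with \(L_i = 100\), \(K = 3\), \(S = 2\), \(P = 1\) and \(D = 1\), \(L_o = \left\lfloor\frac{100 + 2 - 2 - 1}{2} + 1\right\rfloor = 50\).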

torchbox.module.layers.cnnsize.ConvSize2d(CHWi, Co, K, S, P, D=(1, 1), groups=1)

Compute shape after 2D-Convolution

(2)\[\begin{array}{l} H_{o} &= \left\lfloor\frac{H_{i} + 2 \times P_h - D_h \times (K_h - 1) - 1}{S_h} + 1\right\rfloor \\ W_{o} &= \left\lfloor\frac{W_{i} + 2 \times P_w - D_w \times (K_w - 1) - 1}{S_w} + 1\right\rfloor \end{array} \]
Parameters
  • CHWi (tuple or list) – input data shape (C, H, W)

  • Co (int) – number of output channels.

  • K (tuple) – kernel size

  • S (tuple) – stride size

  • P (tuple) – padding size

  • D (tuple, optional) – dilation size (the default is (1, 1))

  • groups (int, optional) – number of groups (the default is 1)

Returns

shape after 2D-Convolution

Return type

tuple

Raises

ValueError – dilation should be greater than zero.

torchbox.module.layers.cnnsize.ConvTransposeSize1d(CLi, Co, K, S, P, D=1, OP=0, groups=1)

Compute shape after Transpose Convolution

(3)\[\begin{array}{l} L_{o} &= (L_{i} - 1) \times S_l - 2 \times P_l + D_l \times (K_l - 1) + OP_l + 1 \\ \end{array} \]
Parameters
  • CLi (tuple or list) – input data shape (C, L)

  • Co (int) – number of output channels.

  • K (tuple) – kernel size

  • S (tuple) – stride size

  • P (tuple) – padding size

  • D (tuple, optional) – dilation size (the default is 1)

  • OP (tuple, optional) – output padding size (the default is 0)

  • groups (int, optional) – number of groups (the default is 1)

Returns

shape after 1D-Transpose Convolution

Return type

tuple

Raises

ValueError – output padding must be smaller than either stride or dilation

torchbox.module.layers.cnnsize.ConvTransposeSize2d(CHWi, Co, K, S, P, D=(1, 1), OP=(0, 0), groups=1)

Compute shape after Transpose Convolution

(4)\[\begin{array}{l} H_{o} &= (H_{i} - 1) \times S_h - 2 \times P_h + D_h \times (K_h - 1) + OP_h + 1 \\ W_{o} &= (W_{i} - 1) \times S_w - 2 \times P_w + D_w \times (K_w - 1) + OP_w + 1 \end{array} \]
Parameters
  • CHWi (tuple or list) – input data shape (C, H, W)

  • Co (int) – number of output channels.

  • K (tuple) – kernel size

  • S (tuple) – stride size

  • P (tuple) – padding size

  • D (tuple, optional) – dilation size (the default is (1, 1))

  • OP (tuple, optional) – output padding size (the default is (0, 0))

  • groups (int, optional) – number of groups (the default is 1)

Returns

shape after 2D-Transpose Convolution

Return type

tuple

Raises

ValueError – output padding must be smaller than either stride or dilation
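The transpose formula undoes the convolution formula: feeding the output of equation (2) through equation (4) recovers the original size when the parameters match. A minimal round-trip sketch in plain Python (the helper names below are illustrative, not part of the library):

import math

def conv_out(n, k, s, p, d=1):
    # one spatial dimension of equation (2)
    return math.floor((n + 2 * p - d * (k - 1) - 1) / s + 1)

def convt_out(n, k, s, p, d=1, op=0):
    # one spatial dimension of equation (4)
    return (n - 1) * s - 2 * p + d * (k - 1) + op + 1

Ho = conv_out(32, k=4, s=2, p=1)    # 16
Hr = convt_out(Ho, k=4, s=2, p=1)   # 32, the original size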

torchbox.module.layers.cnnsize.PoolSize1d(CLi, K, S, P, D=1)
torchbox.module.layers.cnnsize.PoolSize2d(CHWi, K, S, P, D=(1, 1))
torchbox.module.layers.cnnsize.UnPoolSize1d(CLi, K, S, P, D=1)
torchbox.module.layers.cnnsize.UnPoolSize2d(CHWi, K, S, P, D=(1, 1))
torchbox.module.layers.cnnsize.conv_size(in_size, kernel_size, stride=1, padding=0, dilation=1)

Computes the output shape of a convolution.

(5)\[\begin{array}{l} H_{o} &= \left\lfloor\frac{H_{i} + 2 \times P_h - D_h \times (K_h - 1) - 1}{S_h} + 1\right\rfloor \\ W_{o} &= \left\lfloor\frac{W_{i} + 2 \times P_w - D_w \times (K_w - 1) - 1}{S_w} + 1\right\rfloor \\ B_{o} &= \left\lfloor\frac{B_{i} + 2 \times P_b - D_b \times (K_b - 1) - 1}{S_b} + 1\right\rfloor \\ \cdots \end{array} \]
Parameters
  • in_size (list or tuple) – the size of input (without batch and channel)

  • kernel_size (int, list or tuple) – the window size of convolution

  • stride (int, list or tuple, optional) – the stride of convolution, by default 1

  • padding (int, str, list or tuple, optional) – the padding size of convolution, 'valid', 'same', by default 0

  • dilation (int, list or tuple, optional) – the spacing between kernel elements, by default 1
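A per-dimension sketch of the computation that equation (5) describes. The handling of the 'valid' and 'same' strings is an assumption (zero padding, and size-preserving padding at stride 1, respectively), not a statement about the library's internals:

def conv_size_sketch(in_size, kernel_size, stride=1, padding=0, dilation=1):
    # broadcast scalar arguments across every spatial dimension
    n = len(in_size)
    expand = lambda v: list(v) if isinstance(v, (list, tuple)) else [v] * n
    K, S, D = expand(kernel_size), expand(stride), expand(dilation)
    if padding == 'valid':
        P = [0] * n                                   # assumed: 'valid' means no padding
    elif padding == 'same':
        P = [d * (k - 1) // 2 for k, d in zip(K, D)]  # assumed: size-preserving at stride 1
    else:
        P = expand(padding)
    return tuple((i + 2 * p - d * (k - 1) - 1) // s + 1
                 for i, k, s, p, d in zip(in_size, K, S, P, D))

print(conv_size_sketch((50, 100), (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1)))
# prints (26, 100)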

torchbox.module.layers.complex_layers module

class torchbox.module.layers.complex_layers.ComplexBatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

Bases: torchbox.module.layers.complex_layers._ComplexBatchNorm

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexBatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

Bases: torchbox.module.layers.complex_layers._ComplexBatchNorm

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexConv1(axis, in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(Xr, Xi)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexConv1d(in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexConv2(in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(Xr, Xi)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexConv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexDropout(p=0.5, inplace=False)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexDropout2d(p=0.5, inplace=False)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexLeakyReLU(negative_slope=(0.01, 0.01), inplace=False)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexLinear(in_features, out_features, bias=True, cdim=-1)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexMaxPool1(axis, kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)

Bases: torch.nn.modules.module.Module

forward(Xr, Xi)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexMaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexMaxPool2(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)

Bases: torch.nn.modules.module.Module

forward(Xr, Xi)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexMaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexReLU(inplace=False)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexSequential(*args: torch.nn.modules.module.Module)
class torchbox.module.layers.complex_layers.ComplexSequential(arg: collections.OrderedDict[str, torch.nn.modules.module.Module])

Bases: torch.nn.modules.container.Sequential

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexSoftShrink(alpha=0.5, cdim=None, inplace=False)

Bases: torch.nn.modules.module.Module

forward(input, alpha=None)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.ComplexUpsample(size=None, scale_factor=None, mode='nearest', align_corners=None)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.NaiveComplexBatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, cdim=-1)

Bases: torch.nn.modules.module.Module

Naive approach to complex batch norm: performs batch norm independently on the real and imaginary parts.

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
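A minimal sketch of the naive scheme described above, assuming a complex-valued input tensor; each part gets its own standard nn.BatchNorm1d. This illustrates the idea only, not the module's exact implementation (which also accepts a cdim argument):

import torch
import torch.nn as nn

class NaiveComplexBN1dSketch(nn.Module):
    """Batch-normalize real and imaginary parts independently (illustrative only)."""
    def __init__(self, num_features):
        super().__init__()
        self.bn_r = nn.BatchNorm1d(num_features)
        self.bn_i = nn.BatchNorm1d(num_features)

    def forward(self, x):
        # x: complex-valued tensor of shape (N, C) or (N, C, L)
        return torch.complex(self.bn_r(x.real), self.bn_i(x.imag))

x = torch.randn(8, 4, dtype=torch.cfloat)
y = NaiveComplexBN1dSketch(4)(x)   # complex output, same shape as input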

class torchbox.module.layers.complex_layers.NaiveComplexBatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

Bases: torch.nn.modules.module.Module

Naive approach to complex batch norm: performs batch norm independently on the real and imaginary parts.

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.complex_layers.SoftShrink(alpha=0.5, inplace=False)

Bases: torch.nn.modules.module.Module

forward(input, alpha=None)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchbox.module.layers.consistency_layers module

class torchbox.module.layers.consistency_layers.DataConsistency2d(ftaxis=(-2, -1), mixrate=1.0, isfft=True)

Bases: torch.nn.modules.module.Module

forward(x, y, mask)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchbox.module.layers.conv_lstms module

class torchbox.module.layers.conv_lstms.ConvLSTM(rank, in_channels, out_channels, kernel_size, stride=1, padding='same', dilation=1, groups=1, bias=True, padding_mode='zeros', activation='Tanh()', rnn_activation='Hardsigmoid()', dropp=None, rnn_dropp=None, bidirectional=False, batch_first=False, device=None, dtype=None)

Bases: torch.nn.modules.module.Module

Class for the ConvLSTM layer

Convolutional LSTM.

input shape: (B, T, C, L) or (B, T, C, H, W) or (B, T, C, H, W, K)

Parameters
  • rank (int) – 1 for 1D convolution, 2 for 2D convolution, 3 for 3D convolution

  • in_channels (int, list or tuple) – the number of input channels of each cell

  • out_channels (int, list or tuple) – the number of output channels of each cell

  • kernel_size (int, list or tuple) – the window size of convolution of each cell

  • stride (int, list or tuple, optional) – the stride of convolution of each cell, by default 1

  • padding (int, str, list or tuple, optional) – the padding size of convolution of each cell, 'valid', 'same', by default 'same'

  • dilation (int, list or tuple, optional) – the spacing between kernel elements of each cell, by default 1

  • groups (int, list or tuple, optional) – the number of blocked connections from input channels to output channels of each cell, by default 1

  • bias (bool, list or tuple, optional) – If True, adds a learnable bias to the output of convolution of each cell, by default True

  • padding_mode (str, list or tuple, optional) – ‘zeros’, ‘reflect’, ‘replicate’ or ‘circular’, by default ‘zeros’

  • activation (str or None, optional) – activation of input convolution layers, 'Tanh()' (default), 'Sigmoid', …

  • rnn_activation (str or None, optional) – activation of RNN convolution layers, 'Hardsigmoid()' (default), 'Sigmoid', …

  • dropp (float or None, optional) – dropout rate of the input convolution layers, by default None

  • rnn_dropp (float or None, optional) – dropout rate of the RNN layers, by default None

  • bidirectional (bool, optional) – True for bidirectional convolutional LSTM, by default False

  • batch_first (bool, optional) – True for (B, T, ...), by default False

  • device (str or None, optional) – device for computation, by default None

  • dtype (str or None, optional) – data type, by default None

Returns

  • xs (Tensor) – output sequence

  • states (tuple of list) – (hidden states, cell states) of each cell

Examples

Stack two 2D convolutional LSTM cells, once with ConvLSTM and once with ConvLSTMCell.

import torch as th
import torchbox as tb

tb.setseed(seed=2023, target='torch')

T, B, C, H, W = 10, 6, 2, 18, 18
x = th.randn(T, B, C, H, W)

# ===way1
tb.setseed(seed=2023, target='torch')
lstm = tb.ConvLSTM(rank=2, in_channels=[C, 4], out_channels=[4, 4], kernel_size=[3, 3], stride=[1, 1], padding=['same', 'same'])
print(lstm.cells[0].in_convc.weight.sum(), lstm.cells[0].rnn_convc.weight.sum())
print(lstm.cells[1].in_convc.weight.sum(), lstm.cells[1].rnn_convc.weight.sum())

# ===way2
tb.setseed(seed=2023, target='torch')
cell1 = tb.ConvLSTMCell(rank=2, in_channels=C, out_channels=4, kernel_size=3, stride=1, padding='same')
cell2 = tb.ConvLSTMCell(rank=2, in_channels=4, out_channels=4, kernel_size=3, stride=1, padding='same')

print(cell1.in_convc.weight.sum(), cell1.rnn_convc.weight.sum())
print(cell2.in_convc.weight.sum(), cell2.rnn_convc.weight.sum())

# ===way1
y, (h, c) = lstm(x, None)
h = th.stack(h, dim=0)
c = th.stack(c, dim=0)

print(y.shape, y.sum(), h.shape, h.sum(), c.shape, c.sum())

# ===way2
h1, c1 = None, None
h2, c2 = None, None
y = []
for t in range(x.shape[0]):
    h1, c1 = cell1(x[t, ...], (h1, c1))
    h2, c2 = cell2(h1, (h2, c2))
    y.append(h2)
y = th.stack(y, dim=0)
h = th.stack((h1, h2), dim=0)
c = th.stack((c1, c2), dim=0)
print(y.shape, y.sum(), h.shape, h.sum(), c.shape, c.sum())


# output
tensor(-1.4177, grad_fn=<SumBackward0>) tensor(0.9743, grad_fn=<SumBackward0>)
tensor(0.1532, grad_fn=<SumBackward0>) tensor(-0.1598, grad_fn=<SumBackward0>)
tensor(-1.4177, grad_fn=<SumBackward0>) tensor(0.9743, grad_fn=<SumBackward0>)
tensor(0.1532, grad_fn=<SumBackward0>) tensor(-0.1598, grad_fn=<SumBackward0>)
torch.Size([10, 6, 4, 18, 18]) tensor(-2144.8628, grad_fn=<SumBackward0>) torch.Size([2, 6, 4, 18, 18]) tensor(-398.1468, grad_fn=<SumBackward0>) torch.Size([2, 6, 4, 18, 18]) tensor(-783.8212, grad_fn=<SumBackward0>)
torch.Size([10, 6, 4, 18, 18]) tensor(-2144.8628, grad_fn=<SumBackward0>) torch.Size([2, 6, 4, 18, 18]) tensor(-398.1468, grad_fn=<SumBackward0>) torch.Size([2, 6, 4, 18, 18]) tensor(-783.8212, grad_fn=<SumBackward0>)
forward(x, states=None)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_hc_shape(xshape)
class torchbox.module.layers.conv_lstms.ConvLSTMCell(rank, in_channels, out_channels, kernel_size, stride=1, padding='same', dilation=1, groups=1, bias=True, padding_mode='zeros', activation='Tanh()', rnn_activation='Hardsigmoid()', dropp=None, rnn_dropp=None, device=None, dtype=None)

Bases: torch.nn.modules.module.Module

Cell class for the ConvLSTM layer

Convolutional LSTM Cell.

Parameters
  • rank (int) – 1 for 1D convolution, 2 for 2D convolution, 3 for 3D convolution

  • in_channels (int) – the number of input channels

  • out_channels (int) – the number of output channels

  • kernel_size (int, list or tuple) – the window size of convolution

  • stride (int, optional) – the stride of convolution, by default 1

  • padding (int, str, optional) – 'valid', 'same', by default 'same'

  • dilation (int, optional) – Spacing between kernel elements. Default: 1

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1

  • bias (bool, optional) – If True, adds a learnable bias to the output, by default True

  • padding_mode (str, optional) – ‘zeros’, ‘reflect’, ‘replicate’ or ‘circular’. Default: ‘zeros’

  • activation (str or None, optional) – activation of input convolution layer, 'Tanh()' (default), 'Sigmoid', …

  • rnn_activation (str or None, optional) – activation of RNN convolution layer, 'Hardsigmoid()' (default), 'Sigmoid', …

  • dropp (float or None, optional) – dropout rate of the input convolution layer, by default None

  • rnn_dropp (float or None, optional) – dropout rate of the RNN layer, by default None

  • device (str or None, optional) – device for computation, by default None

  • dtype (str or None, optional) – data type, by default None

Returns

  • h (Tensor) – hidden states

  • c (Tensor) – cell states

forward(x, states)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_hc_shape(xshape)

torchbox.module.layers.convolution module

class torchbox.module.layers.convolution.Conv1(axis, in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(X)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.convolution.Conv2(in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(X)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.convolution.FFTConv1(nh, h=None, axis=0, nfft=None, shape='same', train=True)

Bases: torch.nn.modules.module.Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.convolution.MaxPool1(axis, kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)

Bases: torch.nn.modules.module.Module

forward(X)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.convolution.MaxPool2(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)

Bases: torch.nn.modules.module.Module

forward(X)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchbox.module.layers.edge module

class torchbox.module.layers.edge.EdgeDetector

Bases: torch.nn.modules.module.Module

forward(image)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.edge.EdgeFeatureExtractor(Ci)

Bases: torch.nn.modules.module.Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchbox.module.layers.fft_layers module

class torchbox.module.layers.fft_layers.FFTLayer1d(nfft=None)

Bases: torch.nn.modules.module.Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchbox.module.layers.flow_layers module

class torchbox.module.layers.flow_layers.ActNorm(inchannels, logdet=True)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

initialize(input)
reverse(output)
class torchbox.module.layers.flow_layers.AffineCoupling(inchannels, filter_size=512, affine=True)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

reverse(output)
class torchbox.module.layers.flow_layers.Flow(inchannels, affine=True, convlu=True)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

reverse(output)
class torchbox.module.layers.flow_layers.FlowBlock(inchannels, nflow, split=True, affine=True, convlu=True)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

reverse(output, eps=None, reconstruct=False)
class torchbox.module.layers.flow_layers.Glow(inchannels, nflow, nblock, affine=True, convlu=True)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

reverse(z_list, reconstruct=False)
class torchbox.module.layers.flow_layers.InvConv2d(inchannels)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

reverse(output)
class torchbox.module.layers.flow_layers.InvConv2dLU(inchannels)

Bases: torch.nn.modules.module.Module

calc_weight()
forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

reverse(output)
class torchbox.module.layers.flow_layers.ZeroConv2d(inchannels, out_channel, padding=1)

Bases: torch.nn.modules.module.Module

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchbox.module.layers.flow_layers.gaussian_log_p(x, mean, log_sd)
torchbox.module.layers.flow_layers.gaussian_sample(eps, mean, log_sd)
torchbox.module.layers.flow_layers.logabs(x)

torchbox.module.layers.gaborconv2d module

class torchbox.module.layers.gaborconv2d.GaborConv2d(channel_in, channel_out, kernel_size, stride=1, padding=0)

Bases: torch.nn.modules.module.Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchbox.module.layers.gaborconv2d.gabor_fn(kernel_size, channel_in, channel_out, sigma, theta, Lambda, psi, gamma)

torchbox.module.layers.phase_convolution module

class torchbox.module.layers.phase_convolution.ComplexPhaseConv1d(in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.phase_convolution.ComplexPhaseConv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.phase_convolution.ComplexPhaseConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=None, dilation=1, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.phase_convolution.ComplexPhaseConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=None, dilation=1, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.phase_convolution.PhaseConv1d(in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.phase_convolution.PhaseConv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.phase_convolution.PhaseConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=None, dilation=1, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.phase_convolution.PhaseConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=None, dilation=1, padding_mode='zeros')

Bases: torch.nn.modules.module.Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchbox.module.layers.pool module

class torchbox.module.layers.pool.MeanSquarePool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)

Bases: torch.nn.modules.module.Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.layers.pool.PnormPool2d(kernel_size, p=2, stride=None, padding=0, ceil_mode=False, count_include_pad=True)

Bases: torch.nn.modules.module.Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Module contents