torchlib.module.dsp package¶
Submodules¶
torchlib.module.dsp.convolution module¶
- class torchlib.module.dsp.convolution.Conv1(axis, in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')¶
Bases: torch.nn.modules.module.Module
- forward(X)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
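A minimal usage sketch, not part of the original documentation: it assumes Conv1 applies a standard 1-D convolution along the dimension given by axis and that the input uses the usual (batch, channels, length) layout; the constructor arguments follow the signature above.

import torch as th
from torchlib.module.dsp.convolution import Conv1

X = th.randn(8, 4, 128)  # assumed (batch, channels, length) layout
conv = Conv1(axis=2, in_channels=4, out_channels=16, kernel_size=3, padding=1)
Y = conv(X)  # call the module instance rather than forward(), so registered hooks run
print(Y.shape)  # expected (8, 16, 128) if padding=1 preserves the length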
- class torchlib.module.dsp.convolution.Conv2(in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')¶
Bases: torch.nn.modules.module.Module
- forward(X)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
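A minimal usage sketch, assuming Conv2 behaves like a standard 2-D convolution over (batch, channels, height, width) inputs; the shapes here are illustrative assumptions.

import torch as th
from torchlib.module.dsp.convolution import Conv2

X = th.randn(8, 3, 64, 64)  # assumed (batch, channels, height, width) layout
conv = Conv2(in_channels=3, out_channels=16, kernel_size=3, padding=1)
Y = conv(X)  # call the module instance so registered hooks run
print(Y.shape)  # expected (8, 16, 64, 64)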
- class torchlib.module.dsp.convolution.FFTConv1(nh, h=None, axis=0, nfft=None, shape='same', train=True)¶
Bases: torch.nn.modules.module.Module
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
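FFTConv1 takes a filter h of length nh (optionally trainable) and convolves the input in the frequency domain. As a rough illustration of the underlying technique only, the following plain-PyTorch sketch (the helper fft_conv1d_same is hypothetical, not part of torchlib) computes a linear convolution via the FFT and crops the result to 'same' length:

import torch as th

def fft_conv1d_same(x, h, nfft=None):
    # Linear convolution via the FFT, cropped to the same length as x.
    n, nh = x.shape[-1], h.shape[-1]
    nfft = nfft or n + nh - 1           # FFT size large enough to avoid circular wrap-around
    X = th.fft.rfft(x, nfft)
    H = th.fft.rfft(h, nfft)
    y = th.fft.irfft(X * H, nfft)       # full linear convolution, length n + nh - 1
    start = (nh - 1) // 2               # centre crop for 'same' output
    return y[..., start:start + n]

x = th.randn(4, 256)   # batch of 4 signals
h = th.randn(15)       # filter of length nh = 15
y = fft_conv1d_same(x, h)
print(y.shape)         # (4, 256)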
- class torchlib.module.dsp.convolution.MaxPool1(axis, kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)¶
Bases: torch.nn.modules.module.Module
- forward(X)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
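A minimal usage sketch, assuming MaxPool1 applies 1-D max pooling along the dimension given by axis; the shapes and axis value are assumptions, not taken from the original docs.

import torch as th
from torchlib.module.dsp.convolution import MaxPool1

X = th.randn(8, 4, 128)  # assumed (batch, channels, length) layout
pool = MaxPool1(axis=2, kernel_size=2, stride=2)
Y = pool(X)
print(Y.shape)  # expected (8, 4, 64)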
- class torchlib.module.dsp.convolution.MaxPool2(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)¶
Bases: torch.nn.modules.module.Module
- forward(X)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
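A minimal usage sketch, assuming MaxPool2 applies 2-D max pooling over (batch, channels, height, width) inputs; the shapes are illustrative assumptions.

import torch as th
from torchlib.module.dsp.convolution import MaxPool2

X = th.randn(8, 4, 64, 64)  # assumed (batch, channels, height, width) layout
pool = MaxPool2(kernel_size=2, stride=2)
Y = pool(X)
print(Y.shape)  # expected (8, 4, 32, 32)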
torchlib.module.dsp.interpolation module¶
- class torchlib.module.dsp.interpolation.Interp1(*args, **kwargs)¶
Bases: torch.autograd.function.Function
- static backward(ctx, grad_out)¶
Defines a formula for differentiating the operation with backward mode automatic differentiation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor, or is a Tensor not requiring grad, you can just pass None as the gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad, a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs its gradient computed w.r.t. the output.
- forward(x, y, xnew, out=None)¶
Linear 1-D interpolation on the GPU for PyTorch. This function returns interpolated values of a set of 1-D functions at the desired query points xnew. It works similarly to MATLAB™ or SciPy functions in linear interpolation mode, except that it parallelises over any number of interpolation problems. The code runs on the GPU if all the tensors provided are on a CUDA device.
- Parameters
x ((N,) or (D, N) PyTorch Tensor) – A 1-D or 2-D tensor of real values.
y ((N,) or (D, N) PyTorch Tensor) – A 1-D or 2-D tensor of real values. The length of y along its last dimension must be the same as that of x.
xnew ((P,) or (D, P) PyTorch Tensor) – A 1-D or 2-D tensor of real values. xnew can only be 1-D if both x and y are 1-D. Otherwise, its length along the first dimension must be the same as that of whichever of x and y is 2-D.
out (PyTorch Tensor, same shape as xnew) – Tensor for the output. If None, allocated automatically.
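A minimal usage sketch, assuming the customary torch.autograd.Function entry point Interp1.apply; the shapes follow the parameter description above.

import math
import torch as th
from torchlib.module.dsp.interpolation import Interp1

x = th.linspace(0, 1, 50)         # (N,) sample locations
y = th.sin(2 * math.pi * x)       # (N,) sample values
xnew = th.rand(200)               # (P,) query points
ynew = Interp1.apply(x, y, xnew)  # linearly interpolated values at the query points
print(ynew.shape)                 # torch.Size([200])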
torchlib.module.dsp.polynomialfit module¶
- class torchlib.module.dsp.polynomialfit.PolyFit(w=None, deg=1, trainable=True)¶
Bases: torch.nn.modules.module.Module
Polynomial fitting
We fit the data using a polynomial function of the form
\[y(x, \mathbf{w}) = w_0 + w_1 x + w_2 x^2 + \cdots + w_M x^M = \sum_{j=0}^{M} w_j x^j\]
- Parameters
Examples
import torch as th
from torchlib.module.dsp.polynomialfit import PolyFit

th.manual_seed(2020)
Ns, k, b = 100, 1.2, 3.0
x = th.linspace(0, 1, Ns)
t = x * k + b + th.randn(Ns)
deg = (0, 1)
polyfit = PolyFit(deg=deg)
lossfunc = th.nn.MSELoss('mean')
optimizer = th.optim.Adam(filter(lambda p: p.requires_grad, polyfit.parameters()), lr=1e-1)
for n in range(100):
    y = polyfit(x)
    loss = lossfunc(y, t)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print("---Loss %.4f, %.4f, %.4f" % (loss.item(), polyfit.w[0], polyfit.w[1]))

# output
---Loss 16.7143, -0.2315, -0.1427
---Loss 15.5265, -0.1316, -0.0429
---Loss 14.3867, -0.0319, 0.0568
---Loss 13.2957, 0.0675, 0.1561
---Loss 12.2543, 0.1664, 0.2551
...
---Loss 0.9669, 2.4470, 1.9995
---Loss 0.9664, 2.4515, 1.9967
---Loss 0.9659, 2.4560, 1.9938
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.