torchbox.module.loss package

Submodules

torchbox.module.loss.contrast module

class torchbox.module.loss.contrast.ContrastLoss(mode='way1', cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

Contrast

way1 is defined as follows, see [1]:

\[C = \frac{\sqrt{{\rm E}\left[\left(|I|^2 - {\rm E}(|I|^2)\right)^2\right]}}{{\rm E}(|I|^2)} \]

way2 is defined as follows, see [2]:

\[C = \frac{{\rm E}(|I|^2)}{\left({\rm E}(|I|)\right)^2} \]

[1] Efficient Nonparametric ISAR Autofocus Algorithm Based on Contrast Maximization and Newton
[2] Section 13.4.1 in Ian G. Cumming's SAR book
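
The two definitions reduce to a few tensor operations. A minimal sketch of both formulas on a plain real tensor (illustration only; reduction across samples omitted, not the module itself):

import torch as th

th.manual_seed(2020)
I = th.randn(3, 4)
power = I.abs()**2
c_way1 = th.sqrt(((power - power.mean())**2).mean()) / power.mean()  # way1
c_way2 = power.mean() / (I.abs().mean())**2                          # way2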

Parameters
  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing contrast. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • mode (str, optional) – 'way1' or 'way2'

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

C – The contrast value of the input.

Return type

scalar or tensor

Examples

import torch as th
from torchbox import ContrastLoss

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

# real
C1 = ContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction=None)(X)
C2 = ContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction='sum')(X)
C3 = ContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction='mean')(X)
print(C1, C2, C3)

# complex in real format
C1 = ContrastLoss(mode='way1', cdim=1, dim=(-2, -1), reduction=None)(X)
C2 = ContrastLoss(mode='way1', cdim=1, dim=(-2, -1), reduction='sum')(X)
C3 = ContrastLoss(mode='way1', cdim=1, dim=(-2, -1), reduction='mean')(X)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
C1 = ContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction=None)(X)
C2 = ContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction='sum')(X)
C3 = ContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction='mean')(X)
print(C1, C2, C3)

# output
tensor([[1.2612, 1.1085],
        [1.5992, 1.2124],
        [0.8201, 0.9887],
        [1.4376, 1.0091],
        [1.1397, 1.1860]]) tensor(11.7626) tensor(1.1763)
tensor([0.6321, 1.1808, 0.5884, 1.1346, 0.6038]) tensor(4.1396) tensor(0.8279)
tensor([0.6321, 1.1808, 0.5884, 1.1346, 0.6038]) tensor(4.1396) tensor(0.8279)
forward(X)

forward process

Parameters

X (Tensor) – The input for computing contrast.

class torchbox.module.loss.contrast.NegativeContrastLoss(mode='way1', cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

Negative Contrast Loss

way1 is defined as follows, see [1]:

\[C = -\frac{\sqrt{{\rm E}\left[\left(|I|^2 - {\rm E}(|I|^2)\right)^2\right]}}{{\rm E}(|I|^2)} \]

way2 is defined as follows, see [2]:

\[C = -\frac{{\rm E}(|I|^2)}{\left({\rm E}(|I|)\right)^2} \]

[1] Efficient Nonparametric ISAR Autofocus Algorithm Based on Contrast Maximization and Newton
[2] Section 13.4.1 in Ian G. Cumming's SAR book

Parameters
  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing contrast. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • mode (str, optional) – 'way1' or 'way2'

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

C – The contrast value of the input.

Return type

scalar or tensor

Examples

import torch as th
from torchbox import NegativeContrastLoss

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

# real
C1 = NegativeContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction=None)(X)
C2 = NegativeContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction='sum')(X)
C3 = NegativeContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction='mean')(X)
print(C1, C2, C3)

# complex in real format
C1 = NegativeContrastLoss(mode='way1', cdim=1, dim=(-2, -1), reduction=None)(X)
C2 = NegativeContrastLoss(mode='way1', cdim=1, dim=(-2, -1), reduction='sum')(X)
C3 = NegativeContrastLoss(mode='way1', cdim=1, dim=(-2, -1), reduction='mean')(X)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
C1 = NegativeContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction=None)(X)
C2 = NegativeContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction='sum')(X)
C3 = NegativeContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction='mean')(X)
print(C1, C2, C3)

# output
tensor([[-1.2612, -1.1085],
        [-1.5992, -1.2124],
        [-0.8201, -0.9887],
        [-1.4376, -1.0091],
        [-1.1397, -1.1860]]) tensor(-11.7626) tensor(-1.1763)
tensor([-0.6321, -1.1808, -0.5884, -1.1346, -0.6038]) tensor(-4.1396) tensor(-0.8279)
tensor([-0.6321, -1.1808, -0.5884, -1.1346, -0.6038]) tensor(-4.1396) tensor(-0.8279)
forward(X)

forward process

Parameters

X (Tensor) – The input for computing contrast.

class torchbox.module.loss.contrast.ReciprocalContrastLoss(mode='way1', cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

way1 is defined as follows (the reciprocal of the way1 contrast), see [1]:

\[C = \frac{{\rm E}(|I|^2)}{\sqrt{{\rm E}\left[\left(|I|^2 - {\rm E}(|I|^2)\right)^2\right]}} \]

way2 is defined as follows (the reciprocal of the way2 contrast), see [2]:

\[C = \frac{\left({\rm E}(|I|)\right)^2}{{\rm E}(|I|^2)} \]

[1] Efficient Nonparametric ISAR Autofocus Algorithm Based on Contrast Maximization and Newton
[2] Section 13.4.1 in Ian G. Cumming's SAR book

Parameters
  • mode (str, optional) – 'way1' or 'way2'

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing contrast. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

C – The contrast value of the input.

Return type

scalar or tensor

Examples

import torch as th
from torchbox import ReciprocalContrastLoss

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

# real
C1 = ReciprocalContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction=None)(X)
C2 = ReciprocalContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction='sum')(X)
C3 = ReciprocalContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction='mean')(X)
print(C1, C2, C3)

# complex in real format
C1 = ReciprocalContrastLoss(mode='way1', cdim=1, dim=(-2, -1), reduction=None)(X)
C2 = ReciprocalContrastLoss(mode='way1', cdim=1, dim=(-2, -1), reduction='sum')(X)
C3 = ReciprocalContrastLoss(mode='way1', cdim=1, dim=(-2, -1), reduction='mean')(X)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
C1 = ReciprocalContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction=None)(X)
C2 = ReciprocalContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction='sum')(X)
C3 = ReciprocalContrastLoss(mode='way1', cdim=None, dim=(-2, -1), reduction='mean')(X)
print(C1, C2, C3)

# output
tensor([[0.7929, 0.9021],
        [0.6253, 0.8248],
        [1.2193, 1.0114],
        [0.6956, 0.9909],
        [0.8774, 0.8432]]) tensor(8.7830) tensor(0.8783)
tensor([1.5821, 0.8469, 1.6997, 0.8813, 1.6563]) tensor(6.6663) tensor(1.3333)
tensor([1.5821, 0.8469, 1.6997, 0.8813, 1.6563]) tensor(6.6663) tensor(1.3333)
forward(X)

forward process

Parameters

X (Tensor) – The input for computing contrast.

torchbox.module.loss.correlation module

class torchbox.module.loss.correlation.CosSimLoss(mode='abs', cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

compute the cosine similarity loss of the inputs

If the amplitude of the correlation is used as the loss:

\[{\mathcal L} = 1 - \left|\frac{\langle{\bf p}, {\bf g}\rangle}{\|{\bf p}\|_2\|{\bf g}\|_2}\right| \]

If the angle of the correlation is used as the loss:

\[{\mathcal L} = \left|\angle \frac{\langle{\bf p}, {\bf g}\rangle}{\|{\bf p}\|_2\|{\bf g}\|_2}\right| \]
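
A minimal sketch of the two modes on complex vectors, assuming the inner product conjugates the second argument (illustration only, not the module itself):

import torch as th

p = th.randn(6, dtype=th.cfloat)
g = th.randn(6, dtype=th.cfloat)
s = th.sum(p * g.conj()) / (th.linalg.norm(p) * th.linalg.norm(g))
loss_abs = 1 - s.abs()        # mode='abs' / 'amplitude'
loss_angle = s.angle().abs()  # mode='angle' / 'phase'
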
Parameters
  • mode (str) – only works when P and G are complex-valued, in real format or complex format. 'abs' or 'amplitude' returns the amplitude of the similarity, 'angle' or 'phase' returns the phase of the similarity.

  • cdim (int or None) – If P and G are complex-valued, cdim is ignored. If P and G are real-valued and cdim is an integer, P and G will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P and G will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing correlation. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Examples

import torch as th
from torchbox import CosSimLoss

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
S1 = CosSimLoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
S2 = CosSimLoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
S3 = CosSimLoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)

# complex in real format
S1 = CosSimLoss(cdim=1, dim=(-2, -1), reduction=None)(P, G)
S2 = CosSimLoss(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
S3 = CosSimLoss(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
S1 = CosSimLoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
S2 = CosSimLoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
S3 = CosSimLoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)

# output
tensor([[0.4791, 0.0849],
        [0.0334, 0.4855],
        [0.0136, 0.2280],
        [0.4951, 0.2166],
        [0.4484, 0.4221]]) tensor(2.9068) tensor(0.2907)
tensor([[0.2926],
        [0.2912],
        [0.1505],
        [0.3993],
        [0.3350]]) tensor([1.4685]) tensor([0.2937])
tensor([0.2926, 0.2912, 0.1505, 0.3993, 0.3350]) tensor(1.4685) tensor(0.2937)
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

Returns

S – The correlation of the inputs.

Return type

Tensor

class torchbox.module.loss.correlation.EigVecCorLoss(npcs=4, mode=None, cdim=None, fdim=-2, sdim=-1, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

compute the eigenvector correlation of the inputs

Parameters
  • mode (str) – only works when P and G are complex-valued, in real format or complex format. 'abs' or 'amplitude' returns the amplitude of the similarity, 'angle' or 'phase' returns the phase of the similarity.

  • cdim (int or None) – If P and G are complex-valued, cdim is ignored. If P and G are real-valued and cdim is an integer, P and G will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P and G will be treated as real-valued.

  • fdim (int, optional) – the dimension index of features, by default -2

  • sdim (int, optional) – the dimension index of samples, by default -1

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Examples

import torch as th
from torchbox import EigVecCorLoss

mode = 'abs'
th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
S1 = EigVecCorLoss(npcs=4, mode=mode, cdim=None, fdim=(-2, -1), sdim=0, reduction=None)(P, G)
S2 = EigVecCorLoss(npcs=4, mode=mode, cdim=None, fdim=(-2, -1), sdim=0, reduction='sum')(P, G)
S3 = EigVecCorLoss(npcs=4, mode=mode, cdim=None, fdim=(-2, -1), sdim=0, reduction='mean')(P, G)
print(S1, S2, S3)

# complex in real format
S1 = EigVecCorLoss(npcs=4, mode=mode, cdim=1, fdim=(-2, -1), sdim=0, reduction=None)(P, G)
S2 = EigVecCorLoss(npcs=4, mode=mode, cdim=1, fdim=(-2, -1), sdim=0, reduction='sum')(P, G)
S3 = EigVecCorLoss(npcs=4, mode=mode, cdim=1, fdim=(-2, -1), sdim=0, reduction='mean')(P, G)
print(S1, S2, S3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
S1 = EigVecCorLoss(npcs=4, mode=mode, cdim=None, fdim=(-2, -1), sdim=0, reduction=None)(P, G)
S2 = EigVecCorLoss(npcs=4, mode=mode, cdim=None, fdim=(-2, -1), sdim=0, reduction='sum')(P, G)
S3 = EigVecCorLoss(npcs=4, mode=mode, cdim=None, fdim=(-2, -1), sdim=0, reduction='mean')(P, G)
print(S1, S2, S3)
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

class torchbox.module.loss.correlation.PeaCorLoss(mode='abs', cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

compute the pearson correlation loss of the inputs

If the amplitude of the Pearson correlation is used as the loss:

\[{\mathcal L} = 1 - \left|\frac{\langle{\bf p}, {\bf g}\rangle}{\|{\bf p}\|_2\|{\bf g}\|_2}\right| \]

If the angle of the Pearson correlation is used as the loss:

\[{\mathcal L} = \left|\angle \frac{\langle{\bf p}, {\bf g}\rangle}{\|{\bf p}\|_2\|{\bf g}\|_2}\right| \]

where \({\bf p}\) and \({\bf g}\) are the centered (mean-removed) versions of the inputs.
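
In other words, the Pearson variant is the cosine similarity of the centered inputs. A minimal sketch, with the same conventions as for CosSimLoss above:

import torch as th

p = th.randn(6, dtype=th.cfloat)
g = th.randn(6, dtype=th.cfloat)
p, g = p - p.mean(), g - g.mean()  # center first, then correlate
s = th.sum(p * g.conj()) / (th.linalg.norm(p) * th.linalg.norm(g))
loss = 1 - s.abs()                 # mode='abs'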

Parameters
  • mode (str) – only works when P and G are complex-valued, in real format or complex format. 'abs' or 'amplitude' returns the amplitude of the similarity, 'angle' or 'phase' returns the phase of the similarity.

  • cdim (int or None) – If P and G are complex-valued, cdim is ignored. If P and G are real-valued and cdim is an integer, P and G will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P and G will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing correlation. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Examples

import torch as th
from torchbox import PeaCorLoss

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
S1 = PeaCorLoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
S2 = PeaCorLoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
S3 = PeaCorLoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)

# complex in real format
S1 = PeaCorLoss(cdim=1, dim=(-2, -1), reduction=None)(P, G)
S2 = PeaCorLoss(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
S3 = PeaCorLoss(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
S1 = PeaCorLoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
S2 = PeaCorLoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
S3 = PeaCorLoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)

# output
tensor([[0.6010, 0.0260],
        [0.0293, 0.4981],
        [0.0063, 0.2284],
        [0.3203, 0.2851],
        [0.3757, 0.3936]]) tensor(2.7639) tensor(0.2764)
tensor([[0.3723],
        [0.2992],
        [0.1267],
        [0.3020],
        [0.2910]]) tensor([1.3911]) tensor([0.2782])
tensor([0.3723, 0.2992, 0.1267, 0.3020, 0.2910]) tensor(1.3911) tensor(0.2782)
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

torchbox.module.loss.entropy module

class torchbox.module.loss.entropy.EntropyLoss(mode='shannon', cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

compute the entropy of the inputs

\[{\rm S} = -\sum_{n=1}^N p_n{\rm log}_2 p_n \]

where \(N\) is the number of pixels and \(p_n=\frac{|X_n|^2}{\sum_{n=1}^N|X_n|^2}\).
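
A minimal sketch of the formula on a complex tensor (reduction omitted; the 'natural' mode presumably uses the natural logarithm instead of base 2):

import torch as th

th.manual_seed(2020)
X = th.randn(3, 4, dtype=th.cfloat)
p = X.abs()**2
p = p / p.sum()
S = -(p * th.log2(p)).sum()  # mode='shannon'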

Parameters
  • X (Tensor) – The complex or real input; for complex inputs, both complex and real representations are supported.

  • mode (str, optional) – The entropy mode: 'shannon' or 'natural' (the default is 'shannon')

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing entropy. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

S – The entropy of the inputs.

Return type

Tensor

Examples

import torch as th
from torchbox import EntropyLoss

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

# real
S1 = EntropyLoss(mode='shannon', cdim=None, dim=(-2, -1), reduction=None)(X)
S2 = EntropyLoss(mode='shannon', cdim=None, dim=(-2, -1), reduction='sum')(X)
S3 = EntropyLoss(mode='shannon', cdim=None, dim=(-2, -1), reduction='mean')(X)
print(S1, S2, S3)

# complex in real format
S1 = EntropyLoss(mode='shannon', cdim=1, dim=(-2, -1), reduction=None)(X)
S2 = EntropyLoss(mode='shannon', cdim=1, dim=(-2, -1), reduction='sum')(X)
S3 = EntropyLoss(mode='shannon', cdim=1, dim=(-2, -1), reduction='mean')(X)
print(S1, S2, S3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
S1 = EntropyLoss(mode='shannon', cdim=None, dim=(-2, -1), reduction=None)(X)
S2 = EntropyLoss(mode='shannon', cdim=None, dim=(-2, -1), reduction='sum')(X)
S3 = EntropyLoss(mode='shannon', cdim=None, dim=(-2, -1), reduction='mean')(X)
print(S1, S2, S3)

# output
tensor([[2.5482, 2.7150],
        [2.0556, 2.6142],
        [2.9837, 2.9511],
        [2.4296, 2.7979],
        [2.7287, 2.5560]]) tensor(26.3800) tensor(2.6380)
tensor([3.2738, 2.5613, 3.2911, 2.7989, 3.2789]) tensor(15.2040) tensor(3.0408)
tensor([3.2738, 2.5613, 3.2911, 2.7989, 3.2789]) tensor(15.2040) tensor(3.0408)
forward(X)

forward process

Parameters

X (Tensor) – the input of entropy

torchbox.module.loss.error module

class torchbox.module.loss.error.MAELoss(cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

computes the mean absolute error

Both complex and real representations are supported.

\[{\rm MAE}({\bf P, G}) = \frac{1}{N}\|{\bf P} - {\bf G}\|_1 = \frac{1}{N}\sum_{i=1}^N |p_i - g_i| \]
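
For complex inputs, \(|p_i - g_i|\) is the complex modulus, so the whole computation is a couple of lines of torch (a sketch, not the module itself):

import torch as th

P = th.randn(3, 4, dtype=th.cfloat)
G = th.randn(3, 4, dtype=th.cfloat)
mae = (P - G).abs().mean()  # |p_i - g_i| is the complex modulus
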
Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

  • cdim (int or None) – If P is complex-valued, cdim is ignored. If P is real-valued and cdim is an integer, P will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing error. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

mean absolute error

Return type

scalar or array

Examples

import torch as th
from torchbox import MAELoss

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = MAELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = MAELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = MAELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = MAELoss(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = MAELoss(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = MAELoss(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = MAELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = MAELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = MAELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# ---output
[[1.06029116 1.19884877]
[0.90117091 1.13552361]
[1.23422083 0.75743914]
[1.16127965 1.42169262]
[1.25090731 1.29134222]] 11.41271620974502 1.141271620974502
[1.71298566 1.50327364 1.53328572 2.11430946 2.01435599] 8.878210471231741 1.7756420942463482
[1.71298566 1.50327364 1.53328572 2.11430946 2.01435599] 8.878210471231741 1.7756420942463482
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

class torchbox.module.loss.error.MSELoss(cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

computes the mean square error

Both complex and real representations are supported.

\[{\rm MSE}({\bf P, G}) = \frac{1}{N}\|{\bf P} - {\bf G}\|_2^2 = \frac{1}{N}\sum_{i=1}^N(|p_i - g_i|)^2 \]
Parameters
  • cdim (int or None) – If P is complex-valued, cdim is ignored. If P is real-valued and cdim is an integer, P will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing error. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

mean square error

Return type

scalar or array

Examples

import torch as th
from torchbox import MSELoss

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = MSELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = MSELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = MSELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = MSELoss(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = MSELoss(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = MSELoss(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = MSELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = MSELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = MSELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# ---output
[[1.57602573 2.32844311]
[1.07232374 2.36118382]
[2.1841515  0.79002805]
[2.43036295 3.18413899]
[2.31107373 2.73990485]] 20.977636476183186 2.0977636476183186
[3.90446884 3.43350757 2.97417955 5.61450194 5.05097858] 20.977636476183186 4.195527295236637
[3.90446884 3.43350757 2.97417955 5.61450194 5.05097858] 20.977636476183186 4.195527295236637
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

class torchbox.module.loss.error.NMAELoss(mode='Gabssum', cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

computes the normalized mean absolute error

Both complex and real representations are supported.

Parameters
  • mode (str) – mode of normalization:

    • 'Gabssum' (default) - normalize the error with the amplitude summation of G

    • 'Gpowsum' - normalize the error with the power summation of G

    • 'Gabsmax' - normalize the error with the maximum amplitude of G

    • 'Gpowmax' - normalize the error with the maximum power of G

    • 'GpeakV' - normalize the error with the square of the peak value (V) of G

    • 'Gfnorm' - normalize the error with the Frobenius norm of G

    • 'Gpnorm' - normalize the error with the p-norm of G

    • 'fnorm' - normalize P and G with the Frobenius norm, respectively

    • 'pnormV' - normalize P and G with the p-norm, respectively, where V is a float or integer number

    • 'zscore' - normalize P and G with the zscore method

    • 'std' - normalize P and G with the standard deviation

  • cdim (int or None) – If G is complex-valued, cdim is ignored. If G is real-valued and cdim is an integer, G will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), G will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing error. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

normalized mean absolute error

Return type

scalar or array

Examples

import torch as th
from torchbox import NMAELoss

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = NMAELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NMAELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NMAELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = NMAELoss(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = NMAELoss(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = NMAELoss(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = NMAELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NMAELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NMAELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

class torchbox.module.loss.error.NMSELoss(mode='Gpowsum', cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

computes the normalized mean square error

Both complex and real representations are supported.

Parameters
  • mode (str) – mode of normalization (a minimal sketch of the default case precedes the Examples below):

    • 'Gpowsum' (default) - normalize the error with the power summation of G

    • 'Gabssum' - normalize the error with the amplitude summation of G

    • 'Gpowmax' - normalize the error with the maximum power of G

    • 'Gabsmax' - normalize the error with the maximum amplitude of G

    • 'GpeakV' - normalize the error with the square of the peak value (V) of G

    • 'Gfnorm' - normalize the error with the Frobenius norm of G

    • 'Gpnorm' - normalize the error with the p-norm of G

    • 'fnorm' - normalize P and G with the Frobenius norm, respectively

    • 'pnormV' - normalize P and G with the p-norm, respectively, where V is a float or integer number

    • 'zscore' - normalize P and G with the zscore method

    • 'std' - normalize P and G with the standard deviation

  • cdim (int or None) – If G is complex-valued, cdim is ignored. If G is real-valued and cdim is an integer, G will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), G will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing error. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

normalized mean square error

Return type

scalar or array
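
One plausible reading of the default 'Gpowsum' mode, sketched on complex tensors (the squared error normalized by the power summation of G; the library's exact normalization may differ in detail):

import torch as th

P = th.randn(3, 4, dtype=th.cfloat)
G = th.randn(3, 4, dtype=th.cfloat)
nmse = ((P - G).abs()**2).sum() / (G.abs()**2).sum()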

Examples

import torch as th
from torchbox import NMSELoss

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = NMSELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NMSELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NMSELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = NMSELoss(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = NMSELoss(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = NMSELoss(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = NMSELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NMSELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NMSELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

class torchbox.module.loss.error.NSAELoss(mode='Gabssum', cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

computes the normalized sum absolute error

Both complex and real representations are supported.

Parameters
  • mode (str) – mode of normalization:

    • 'Gabssum' (default) - normalize the error with the amplitude summation of G

    • 'Gpowsum' - normalize the error with the power summation of G

    • 'Gabsmax' - normalize the error with the maximum amplitude of G

    • 'Gpowmax' - normalize the error with the maximum power of G

    • 'GpeakV' - normalize the error with the square of the peak value (V) of G

    • 'Gfnorm' - normalize the error with the Frobenius norm of G

    • 'Gpnorm' - normalize the error with the p-norm of G

    • 'fnorm' - normalize P and G with the Frobenius norm, respectively

    • 'pnormV' - normalize P and G with the p-norm, respectively, where V is a float or integer number

    • 'zscore' - normalize P and G with the zscore method

    • 'std' - normalize P and G with the standard deviation

  • cdim (int or None) – If G is complex-valued, cdim is ignored. If G is real-valued and cdim is an integer, G will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), G will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing error. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

normalized sum absolute error

Return type

scalar or array

Examples

import torch as th
from torchbox import NSAELoss

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = NSAELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NSAELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NSAELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = NSAELoss(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = NSAELoss(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = NSAELoss(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = NSAELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NSAELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NSAELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

class torchbox.module.loss.error.NSSELoss(mode='Gpowsum', cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

computes the normalized sum square error

Both complex and real representations are supported.

Parameters
  • mode (str) – mode of normalization:

    • 'Gpowsum' (default) - normalize the error with the power summation of G

    • 'Gabssum' - normalize the error with the amplitude summation of G

    • 'Gpowmax' - normalize the error with the maximum power of G

    • 'Gabsmax' - normalize the error with the maximum amplitude of G

    • 'GpeakV' - normalize the error with the square of the peak value (V) of G

    • 'Gfnorm' - normalize the error with the Frobenius norm of G

    • 'Gpnorm' - normalize the error with the p-norm of G

    • 'fnorm' - normalize P and G with the Frobenius norm, respectively

    • 'pnormV' - normalize P and G with the p-norm, respectively, where V is a float or integer number

    • 'zscore' - normalize P and G with the zscore method

    • 'std' - normalize P and G with the standard deviation

  • cdim (int or None) – If G is complex-valued, cdim is ignored. If G is real-valued and cdim is an integer, G will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), G will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing error. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

normalized sum square error

Return type

scalar or array

Examples

import torch as th
from torchbox import NSSELoss

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = NSSELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NSSELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NSSELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = NSSELoss(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = NSSELoss(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = NSSELoss(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = NSSELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NSSELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NSSELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

class torchbox.module.loss.error.SAELoss(cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

computes the sum absolute error

Both complex and real representations are supported.

\[{\rm SAE}({\bf P, G}) = \|{\bf P} - {\bf G}\|_1 = \sum_{i=1}^N |p_i - g_i| \]
Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

  • cdim (int or None) – If P is complex-valued, cdim is ignored. If P is real-valued and cdim is an integer, P will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing error. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

sum absolute error

Return type

scalar or array

Examples

import torch as th
from torchbox import SAELoss

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = SAELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = SAELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = SAELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = SAELoss(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = SAELoss(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = SAELoss(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = SAELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = SAELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = SAELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# ---output
[[12.72349388 14.3861852 ]
[10.81405096 13.62628335]
[14.81065     9.08926963]
[13.93535577 17.0603114 ]
[15.0108877  15.49610662]] 136.95259451694022 13.695259451694023
[20.55582795 18.03928365 18.39942858 25.37171356 24.17227192] 106.53852565478087 21.307705130956172
[20.55582795 18.03928365 18.39942858 25.37171356 24.17227192] 106.5385256547809 21.30770513095618
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

class torchbox.module.loss.error.SSELoss(cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

computes the sum square error

Both complex and real representations are supported.

\[{\rm SSE}({\bf P, G}) = \|{\bf P} - {\bf G}\|_2^2 = \sum_{i=1}^N(|p_i - g_i|)^2 \]
Parameters
  • cdim (int or None) – If P is complex-valued, cdim is ignored. If P is real-valued and cdim is an integer, P will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing error. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

sum square error

Return type

scalar or array

Examples

import torch as th
from torchbox import SSELoss

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = SSELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = SSELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = SSELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = SSELoss(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = SSELoss(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = SSELoss(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = SSELoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = SSELoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = SSELoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# ---output
[[18.91230872 27.94131733]
[12.86788492 28.33420589]
[26.209818    9.48033663]
[29.16435541 38.20966786]
[27.73288477 32.87885818]] 251.73163771419823 25.173163771419823
[46.85362605 41.20209081 35.69015462 67.37402327 60.61174295] 251.73163771419823 50.346327542839646
[46.85362605 41.20209081 35.69015462 67.37402327 60.61174295] 251.73163771419823 50.346327542839646
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

torchbox.module.loss.fourier module

class torchbox.module.loss.fourier.FourierAmplitudeLoss(err='th.nn.MSELoss()', cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None)

Bases: torch.nn.modules.module.Module

Fourier Domain Amplitude Loss

compute amplitude loss in fourier domain.
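
The idea in a minimal sketch, with th.nn.MSELoss as the error function (illustration only, not the module itself):

import torch as th

P = th.randn(4, 4, dtype=th.cfloat)
G = th.randn(4, 4, dtype=th.cfloat)
loss = th.nn.MSELoss()(th.fft.fft2(P).abs(), th.fft.fft2(G).abs())

FourierPhaseLoss below is the analogous construction with .angle() in place of .abs().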

Parameters
  • err (str, object, optional) – string type will be converted to function by eval(), such as 'th.nn.MSELoss()' (default), 'tb.SSELoss(cdim=None, dim=(-2, -1), reduction=None)', 'tb.CosSimLoss(cdim=None, dim=(-2, -1), reduction=None)', …

  • cdim (int, optional) – If data is complex-valued but represented as real tensors, you should specify the dimension. Otherwise, set it to None; the default is None. For example, \({\bm X}_c\in {\mathbb C}^{N\times C\times H\times W}\) is represented as a real-valued tensor \({\bm X}_r\in {\mathbb R}^{N\times C\times H\times W\times 2}\), then cdim equals -1 or 4.

  • ftdim (tuple, None, optional) – the dimensions for Fourier transformation. by default (-2, -1).

  • iftdim (tuple, None, optional) – the dimension for inverse Fourier transformation, by default None.

  • ftn (int, None, optional) – the number of points for Fourier transformation, by default None

  • iftn (int, None, optional) – the number of points for inverse Fourier transformation, by default None

  • ftnorm (str, None, optional) –

    the normalization method for Fourier transformation, by default None

    • ”forward” - normalize by 1/n

    • ”backward” - no normalization

    • ”ortho” - normalize by 1/sqrt(n) (making the FFT orthonormal)

  • iftnorm (str, None, optional) –

    the normalization method for inverse Fourier transformation, by default None

    • ”forward” - no normalization

    • ”backward” - normalize by 1/n

    • ”ortho” - normalize by 1/sqrt(n) (making the IFFT orthonormal)

Please see also th.fft.fft() and th.fft.ifft().

Examples

Compute the loss of data in real and complex representations, respectively.

import torch as th
import torchbox as tb
from torchbox import FourierAmplitudeLoss

th.manual_seed(2020)
xr = th.randn(10, 2, 4, 4) * 100
yr = th.randn(10, 2, 4, 4) * 100
xc = xr[:, [0], ...] + 1j * xr[:, [1], ...]
yc = yr[:, [0], ...] + 1j * yr[:, [1], ...]

errr = "tb.SSELoss(cdim=1, dim=(-2, -1), reduction='mean')"
err = "tb.SSELoss(cdim=None, dim=(-2, -1), reduction='mean')"
# err = 'th.nn.MSELoss()'

flossr = FourierAmplitudeLoss(err=err, cdim=1, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None)
flossc = FourierAmplitudeLoss(err=err, cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None)
print(flossr(xr, yr))
print(flossc(xc, yc))

flossr = FourierAmplitudeLoss(err=err, cdim=1, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm='forward', iftnorm=None)
flossc = FourierAmplitudeLoss(err=err, cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm='forward', iftnorm=None)
print(flossr(xr, yr))
print(flossc(xc, yc))

# ---output
tensor(456761.5625)
tensor(456761.5625)
tensor(28547.5977)
tensor(28547.5977)
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

class torchbox.module.loss.fourier.FourierLoss(err='th.nn.MSELoss()', cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None)

Bases: torch.nn.modules.module.Module

Fourier Domain Loss

Compute loss in Fourier domain. Given input \({\bm P}\), target \(\bm G\),

\[L = \varepsilon({\mathcal F}({\bm P}), {\mathcal F}({\bm G})) \]

where \({\bm P}\) and \({\bm G}\) can be real-valued or complex-valued data, and \(\varepsilon(\cdot)\) is an error function, such as the mean square error, the absolute error, …
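
A minimal sketch of this construction with th.nn.MSELoss as \(\varepsilon\) (the complex spectra are viewed as real pairs so that a real-valued error function applies; illustration only):

import torch as th

P = th.randn(4, 4, dtype=th.cfloat)
G = th.randn(4, 4, dtype=th.cfloat)
FP, FG = th.fft.fft2(P), th.fft.fft2(G)
loss = th.nn.MSELoss()(th.view_as_real(FP), th.view_as_real(FG))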

Parameters
  • err (str, object, optional) – string type will be converted to function by eval(), such as 'th.nn.MSELoss()' (default), 'tb.SSELoss(cdim=None, dim=(-2, -1), reduction=None)', 'tb.CosSimLoss(cdim=None, dim=(-2, -1), reduction=None)', …

  • cdim (int, optional) – If data is complex-valued but represented as real tensors, you should specify the dimension. Otherwise, set it to None; the default is None. For example, \({\bm X}_c\in {\mathbb C}^{N\times C\times H\times W}\) is represented as a real-valued tensor \({\bm X}_r\in {\mathbb R}^{N\times C\times H\times W\times 2}\), then cdim equals -1 or 4.

  • ftdim (tuple, None, optional) – the dimensions for Fourier transformation. by default (-2, -1).

  • iftdim (tuple, None, optional) – the dimension for inverse Fourier transformation, by default None.

  • ftn (int, None, optional) – the number of points for Fourier transformation, by default None

  • iftn (int, None, optional) – the number of points for inverse Fourier transformation, by default None

  • ftnorm (str, None, optional) –

    the normalization method for Fourier transformation, by default None

    • ”forward” - normalize by 1/n

    • ”backward” - no normalization

    • ”ortho” - normalize by 1/sqrt(n) (making the FFT orthonormal)

  • iftnorm (str, None, optional) –

    the normalization method for inverse Fourier transformation, by default None

    • ”forward” - no normalization

    • ”backward” - normalize by 1/n

    • ”ortho” - normalize by 1/sqrt(n) (making the IFFT orthonormal)

Please see also th.fft.fft() and th.fft.ifft().

Examples

Compute the loss of data in real and complex representations, respectively.

import torch as th
import torchbox as tb
from torchbox import FourierLoss

th.manual_seed(2020)
xr = th.randn(10, 2, 4, 4) * 100
yr = th.randn(10, 2, 4, 4) * 100
xc = xr[:, [0], ...] + 1j * xr[:, [1], ...]
yc = yr[:, [0], ...] + 1j * yr[:, [1], ...]

errr = "tb.SSELoss(cdim=1, dim=(-2, -1), reduction='mean')"
err = "tb.SSELoss(cdim=None, dim=(-2, -1), reduction='mean')"
# err = 'th.nn.MSELoss()'

flossr = FourierLoss(err=errr, cdim=1, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None)
flossc = FourierLoss(err=err, cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None)
print(flossr(xr, yr))
print(flossc(xc, yc))

flossr = FourierLoss(err=errr, cdim=1, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm='forward', iftnorm=None)
flossc = FourierLoss(err=err, cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm='forward', iftnorm=None)
print(flossr(xr, yr))
print(flossc(xc, yc))

# ---output
tensor(2325792.)
tensor(2325792.)
tensor(145362.)
tensor(145362.)
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

class torchbox.module.loss.fourier.FourierNormLoss(reduction='mean', p=1.5)

Bases: torch.nn.modules.module.Module

\[C = \frac{{\rm E}(|I|^2)}{\left({\rm E}(|I|)\right)^2} \]

see Fast Fourier domain optimization using hybrid

forward(X, w=None)

forward process

Parameters
  • X (Tensor) – the input, after FFT in the azimuth dimension

  • w (Tensor, optional) – weight

Returns

loss

Return type

float

class torchbox.module.loss.fourier.FourierPhaseLoss(err='th.nn.MSELoss()', cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None)

Bases: torch.nn.modules.module.Module

Fourier Domain Phase Loss

compute phase loss in fourier domain.

Parameters
  • err (str, object, optional) – string type will be converted to function by eval(), such as 'th.nn.MSELoss()' (default), 'tb.SSELoss(cdim=None, dim=(-2, -1), reduction=None)', 'tb.CosSimLoss(cdim=None, dim=(-2, -1), reduction=None)', …

  • cdim (int, optional) – If data is complex-valued but represented as real tensors, you should specify the dimension. Otherwise, set it to None; the default is None. For example, \({\bm X}_c\in {\mathbb C}^{N\times C\times H\times W}\) is represented as a real-valued tensor \({\bm X}_r\in {\mathbb R}^{N\times C\times H\times W\times 2}\), then cdim equals -1 or 4.

  • ftdim (tuple, None, optional) – the dimensions for Fourier transformation. by default (-2, -1).

  • iftdim (tuple, None, optional) – the dimension for inverse Fourier transformation, by default None.

  • ftn (int, None, optional) – the number of points for Fourier transformation, by default None

  • iftn (int, None, optional) – the number of points for inverse Fourier transformation, by default None

  • ftnorm (str, None, optional) –

    the normalization method for Fourier transformation, by default None

    • ”forward” - normalize by 1/n

    • ”backward” - no normalization

    • ”ortho” - normalize by 1/sqrt(n) (making the FFT orthonormal)

  • iftnorm (str, None, optional) –

    the normalization method for inverse Fourier transformation, by default None

    • ”forward” - no normalization

    • ”backward” - normalize by 1/n

    • ”ortho” - normalize by 1/sqrt(n) (making the IFFT orthonormal)

Please see also th.fft.fft() and th.fft.ifft().

Examples

Compute the loss of data in real and complex representations, respectively.

import torch as th
import torchbox as tb
from torchbox import FourierPhaseLoss

th.manual_seed(2020)
xr = th.randn(10, 2, 4, 4) * 100
yr = th.randn(10, 2, 4, 4) * 100
xc = xr[:, [0], ...] + 1j * xr[:, [1], ...]
yc = yr[:, [0], ...] + 1j * yr[:, [1], ...]

errr = "tb.SSELoss(cdim=1, dim=(-2, -1), reduction='mean')"
err = "tb.SSELoss(cdim=None, dim=(-2, -1), reduction='mean')"
# err = 'th.nn.MSELoss()'

flossr = FourierPhaseLoss(err=err, cdim=1, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None)
flossc = FourierPhaseLoss(err=err, cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None)
print(flossr(xr, yr))
print(flossc(xc, yc))

flossr = FourierPhaseLoss(err=err, cdim=1, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm='forward', iftnorm=None)
flossc = FourierPhaseLoss(err=err, cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm='forward', iftnorm=None)
print(flossr(xr, yr))
print(flossc(xc, yc))

# ---output
tensor(106.8749)
tensor(106.8749)
tensor(106.8749)
tensor(106.8749)
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

torchbox.module.loss.norm module

class torchbox.module.loss.norm.FnormLoss(cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

F-norm Loss

Both complex and real representations are supported.

\[{\rm norm}({\bf P}) = \|{\bf P}\|_2 = \left(\sum_{x_i\in {\bf P}}|x_i|^2\right)^{\frac{1}{2}} \]

where the modulus \(|x_i|\) is the complex magnitude when \({\bf P}\) is complex-valued.

Parameters
  • cdim (int or None) – If P is complex-valued, cdim is ignored. If P is real-valued and cdim is an integer, P will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing norm. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

the inputs’s f-norm.

Return type

tensor

Examples

import torch as th
from torchbox import FnormLoss

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)
print('---norm')

# real
F1 = FnormLoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
F2 = FnormLoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
F3 = FnormLoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(F1, F2, F3)

# complex in real format
F1 = FnormLoss(cdim=1, dim=(-2, -1), reduction=None)(P, G)
F2 = FnormLoss(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
F3 = FnormLoss(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(F1, F2, F3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
F1 = FnormLoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
F2 = FnormLoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
F3 = FnormLoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(F1, F2, F3)

---norm
tensor([[3.0401, 4.9766],
        [4.8830, 3.1261],
        [6.3124, 4.1407],
        [5.9283, 4.5896],
        [3.4909, 6.7252]]) tensor(47.2130) tensor(4.7213)
tensor([5.8317, 5.7980, 7.5493, 7.4973, 7.5772]) tensor(34.2535) tensor(6.8507)
tensor([5.8317, 5.7980, 7.5493, 7.4973, 7.5772]) tensor(34.2535) tensor(6.8507)
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

class torchbox.module.loss.norm.PnormLoss(p=2, cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

obtain the p-norm of a tensor

Both complex and real representations are supported.

\[{\rm pnorm}({\bf P}) = \|{\bf P}\|_p = \left(\sum_{x_i\in {\bf P}}|x_i|^p\right)^{\frac{1}{p}} \]

where the modulus \(|x_i|\) is the complex magnitude when \({\bf P}\) is complex-valued.

Parameters
  • p (int) – Specifies the power. The default is 2.

  • cdim (int or None) – If P is complex-valued, cdim is ignored. If P is real-valued and cdim is an integer, P will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing norm. The default is None, which means all.

  • keepdim (bool) – keep dimensions? (including the complex dim; default is False)

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

the inputs’s p-norm.

Return type

tensor

Examples

import torch as th
from torchbox import PnormLoss

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)
print('---norm')

# real
F1 = PnormLoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
F2 = PnormLoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
F3 = PnormLoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(F1, F2, F3)

# complex in real format
F1 = PnormLoss(cdim=1, dim=(-2, -1), reduction=None)(P, G)
F2 = PnormLoss(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
F3 = PnormLoss(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(F1, F2, F3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
F1 = PnormLoss(cdim=None, dim=(-2, -1), reduction=None)(P, G)
F2 = PnormLoss(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
F3 = PnormLoss(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(F1, F2, F3)

---norm
tensor([[3.0401, 4.9766],
        [4.8830, 3.1261],
        [6.3124, 4.1407],
        [5.9283, 4.5896],
        [3.4909, 6.7252]]) tensor(47.2130) tensor(4.7213)
tensor([5.8317, 5.7980, 7.5493, 7.4973, 7.5772]) tensor(34.2535) tensor(6.8507)
tensor([5.8317, 5.7980, 7.5493, 7.4973, 7.5772]) tensor(34.2535) tensor(6.8507)
forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

torchbox.module.loss.perceptual module

class torchbox.module.loss.perceptual.RandomProjectionLoss(mode='real', baseloss='MSE', channels=[3, 32], kernel_sizes=[(3, 3)], activations=['ReLU'], reduction='mean')

Bases: torch.nn.modules.module.Module

RandomProjection loss
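
No further docstring is given; judging from the constructor arguments (channels, kernel_sizes, activations), the idea appears to be to pass both inputs through a fixed, randomly initialized convolutional projection and compare the resulting features with the base loss. A hedged sketch of that idea, not torchbox's implementation:

import torch as th

# fixed random projection: one conv layer with ReLU, weights never trained
proj = th.nn.Sequential(th.nn.Conv2d(3, 32, 3, padding=1), th.nn.ReLU())
for q in proj.parameters():
    q.requires_grad_(False)
P, G = th.randn(2, 3, 8, 8), th.randn(2, 3, 8, 8)
loss = th.nn.MSELoss()(proj(P), proj(G))  # base loss on projected features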

forward(P, G)

forward process

Parameters
  • P (Tensor) – predicted/estimated/reconstructed

  • G (Tensor) – ground-truth/target

weight_init()

torchbox.module.loss.retrieval module

class torchbox.module.loss.retrieval.DiceLoss(size_average=True, reduce=True)

Bases: torch.nn.modules.module.Module

soft_dice_coeff(P, G)
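
The class carries no docstring; a hedged sketch of the usual soft Dice coefficient that the method name suggests (binary case, with a small epsilon for stability):

import torch as th

P = th.rand(10)                      # predicted probabilities
G = th.randint(0, 2, (10,)).float()  # binary ground truth
inter = (P * G).sum()
dice = 2 * inter / (P.sum() + G.sum() + 1e-8)
loss = 1 - dice
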
class torchbox.module.loss.retrieval.F1Loss(size_average=True, reduce=True)

Bases: torch.nn.modules.module.Module

F1 distance Loss

\[{\mathcal L}_{F_{\beta}} = 1 - \frac{(1+\beta^2) \cdot P \cdot R}{\beta^2 \cdot P + R} \]

where,

\[{\rm PPV} = {P} = \frac{\rm TP}{{\rm TP} + {\rm FP}} \]
\[{\rm TPR} = {R} = \frac{\rm TP}{{\rm TP} + {\rm FN}} \]
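
A minimal soft-F1 sketch of these formulas for \(\beta = 1\) (epsilons added for numerical stability; illustration only):

import torch as th

P = th.rand(10)                      # predicted probabilities
G = th.randint(0, 2, (10,)).float()  # binary ground truth
TP = (P * G).sum()
prec = TP / (P.sum() + 1e-8)         # precision (PPV)
rec = TP / (G.sum() + 1e-8)          # recall (TPR)
loss = 1 - 2 * prec * rec / (prec + rec + 1e-8)  # beta = 1
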
forward(P, G)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.loss.retrieval.IridescentLoss(size_average=True, reduce=True)

Bases: torch.nn.modules.module.Module

Iridescent Distance Loss

\[d_{J}({\mathbb A}, {\mathbb B})=1-J({\mathbb A}, {\mathbb B})=\frac{|{\mathbb A} \cup {\mathbb B}|-|{\mathbb A} \cap {\mathbb B}|}{|{\mathbb A} \cup {\mathbb B}|} \]
forward(P, G)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.loss.retrieval.JaccardLoss(size_average=True, reduce=True)

Bases: torch.nn.modules.module.Module

Jaccard distance

\[d_{J}({\mathbb A}, {\mathbb B})=1-J({\mathbb A}, {\mathbb B})=\frac{|{\mathbb A} \cup {\mathbb B}|-|{\mathbb A} \cap {\mathbb B}|}{|{\mathbb A} \cup {\mathbb B}|} \]
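
A minimal soft-Jaccard sketch of this distance, with intersection and union relaxed to products and sums (illustration only):

import torch as th

P = th.rand(10)                      # predicted probabilities
G = th.randint(0, 2, (10,)).float()  # binary ground truth
inter = (P * G).sum()
union = P.sum() + G.sum() - inter
loss = 1 - inter / (union + 1e-8)
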
forward(P, G)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchbox.module.loss.sparse_metric module

class torchbox.module.loss.sparse_metric.FourierLogSparseLoss(lambd=1.0, cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

Parameters
  • X (array) – the input

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing norm. The default is None, which means all.

  • lambd (float) – weight, default is 1.

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

loss

Return type

scalar or array

forward(X)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchbox.module.loss.sparse_metric.LogSparseLoss(lambd=1.0, cdim=None, dim=None, keepdim=False, reduction='mean')

Bases: torch.nn.modules.module.Module

Log sparse loss

Parameters
  • X (array) – the input

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.

  • dim (int or None) – The dimension axis for computing norm. The default is None, which means all.

  • lambd (float) – weight, default is 1.

  • reduction (str or None, optional) – The operation mode of reduction, None, 'mean' or 'sum' (the default is 'mean')

Returns

loss

Return type

scalar or array
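
The docstring does not spell out the penalty; a common log-sparsity form consistent with the lambd weight is sketched below (the exact expression and logarithm base used by torchbox are assumptions):

import torch as th

X = th.randn(3, 4, dtype=th.cfloat)
lambd = 1.0
loss = lambd * th.log(1 + X.abs()).sum()  # assumed form: lambd * sum(log(1 + |x|))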

forward(X)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchbox.module.loss.variation module

class torchbox.module.loss.variation.TotalVariation(reduction='mean', axis=0)

Bases: torch.nn.modules.module.Module

Total Variation
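
A minimal anisotropic total-variation sketch for a 2-D tensor (sum of absolute differences along each axis; the reduction and axis arguments of the module are omitted):

import torch as th

X = th.randn(8, 8)
tv = (X[1:, :] - X[:-1, :]).abs().sum() + (X[:, 1:] - X[:, :-1]).abs().sum()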

forward(X)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Module contents