torchlib.module.evaluation package

Submodules

torchlib.module.evaluation.contrast module

class torchlib.module.evaluation.contrast.Contrast(cdim=None, dim=None, mode='way1', reduction='mean')

Bases: torch.nn.modules.module.Module

way1 is defined as follows, see [1]:

\[C = \frac{\sqrt{{\rm E}\left[\left(|I|^2 - {\rm E}(|I|^2)\right)^2\right]}}{{\rm E}(|I|^2)} \]

way2 is defined as follows, see [2]:

\[C = \frac{{\rm E}(|I|^2)}{\left({\rm E}(|I|)\right)^2} \]

[1] "Efficient Nonparametric ISAR Autofocus Algorithm Based on Contrast Maximization and Newton's Method".

[2] Section 13.4.1 in Ian G. Cumming's SAR book ("Digital Processing of Synthetic Aperture Radar Data").
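
As a quick sanity check, both definitions can be evaluated directly with torch. A minimal sketch for a real-valued tensor, reducing over all elements (not part of the original docs):

import torch as th

th.manual_seed(2020)
I = th.randn(3, 4)
I2 = I.abs()**2                                           # |I|^2
way1 = th.sqrt(((I2 - I2.mean())**2).mean()) / I2.mean()  # std of |I|^2 over its mean
way2 = I2.mean() / (I.abs().mean())**2                    # E(|I|^2) / (E(|I|))^2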

Parameters
  • X (torch tensor) – The image tensor.

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (tuple, None, optional) – The dimension axis (cdim is not included) for computing contrast. The default is None, which means all.

  • mode (str, optional) – 'way1' or 'way2'

  • reduction (str or None, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

C – The contrast value of input.

Return type

scalar or tensor

Examples

import torch as th
from torchlib.module.evaluation.contrast import Contrast

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

# real
C1 = Contrast(cdim=None, dim=(-2, -1), mode='way1', reduction=None)(X)
C2 = Contrast(cdim=None, dim=(-2, -1), mode='way1', reduction='sum')(X)
C3 = Contrast(cdim=None, dim=(-2, -1), mode='way1', reduction='mean')(X)
print(C1, C2, C3)

# complex in real format
C1 = Contrast(cdim=1, dim=(-2, -1), mode='way1', reduction=None)(X)
C2 = Contrast(cdim=1, dim=(-2, -1), mode='way1', reduction='sum')(X)
C3 = Contrast(cdim=1, dim=(-2, -1), mode='way1', reduction='mean')(X)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
C1 = Contrast(cdim=None, dim=(-2, -1), mode='way1', reduction=None)(X)
C2 = Contrast(cdim=None, dim=(-2, -1), mode='way1', reduction='sum')(X)
C3 = Contrast(cdim=None, dim=(-2, -1), mode='way1', reduction='mean')(X)
print(C1, C2, C3)

# output
tensor([[1.2612, 1.1085],
        [1.5992, 1.2124],
        [0.8201, 0.9887],
        [1.4376, 1.0091],
        [1.1397, 1.1860]]) tensor(11.7626) tensor(1.1763)
tensor([0.6321, 1.1808, 0.5884, 1.1346, 0.6038]) tensor(4.1396) tensor(0.8279)
tensor([0.6321, 1.1808, 0.5884, 1.1346, 0.6038]) tensor(4.1396) tensor(0.8279)
forward(X)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

torchlib.module.evaluation.entropy module

class torchlib.module.evaluation.entropy.Entropy(cdim=None, dim=None, mode='shannon', reduction='mean')

Bases: torch.nn.modules.module.Module

computes the entropy of the inputs

\[{\rm S} = -\sum_{n=1}^{N} p_n{\rm log}_2 p_n \]

where \(N\) is the number of pixels and \(p_n=\frac{|X_n|^2}{\sum_{n=1}^{N}|X_n|^2}\).
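
For reference, the Shannon form can be evaluated directly from this definition. A minimal sketch for a real-valued input, reducing over all elements (not part of the original docs):

import torch as th

th.manual_seed(2020)
X = th.randn(3, 4)
p = X.abs()**2
p = p / p.sum()                # p_n, the normalized power of each pixel
S = -(p * th.log2(p)).sum()    # Shannon entropy by definition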

Parameters
  • X (tensor) – The complex or real inputs; for complex inputs, both complex and real representations are supported.

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (tuple, None, optional) – The dimension axis (cdim is not included) for computing entropy. The default is None, which means all.

  • mode (str, optional) – The entropy mode: 'shannon' or 'natural' (the default is ‘shannon’)

  • reduction (str or None, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

S – The entropy of the inputs.

Return type

tensor

Examples

import torch as th
from torchlib.module.evaluation.entropy import Entropy

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

# real
S1 = Entropy(cdim=None, dim=(-2, -1), mode='shannon', reduction=None)(X)
S2 = Entropy(cdim=None, dim=(-2, -1), mode='shannon', reduction='sum')(X)
S3 = Entropy(cdim=None, dim=(-2, -1), mode='shannon', reduction='mean')(X)
print(S1, S2, S3)

# complex in real format
S1 = Entropy(cdim=1, dim=(-2, -1), mode='shannon', reduction=None)(X)
S2 = Entropy(cdim=1, dim=(-2, -1), mode='shannon', reduction='sum')(X)
S3 = Entropy(cdim=1, dim=(-2, -1), mode='shannon', reduction='mean')(X)
print(S1, S2, S3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
S1 = Entropy(cdim=None, dim=(-2, -1), mode='shannon', reduction=None)(X)
S2 = Entropy(cdim=None, dim=(-2, -1), mode='shannon', reduction='sum')(X)
S3 = Entropy(cdim=None, dim=(-2, -1), mode='shannon', reduction='mean')(X)
print(S1, S2, S3)

# output
tensor([[2.5482, 2.7150],
        [2.0556, 2.6142],
        [2.9837, 2.9511],
        [2.4296, 2.7979],
        [2.7287, 2.5560]]) tensor(26.3800) tensor(2.6380)
tensor([3.2738, 2.5613, 3.2911, 2.7989, 3.2789]) tensor(15.2040) tensor(3.0408)
tensor([3.2738, 2.5613, 3.2911, 2.7989, 3.2789]) tensor(15.2040) tensor(3.0408)
forward(X)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

torchlib.module.evaluation.error module

class torchlib.module.evaluation.error.MAE(cdim=None, dim=None, norm=False, reduction='mean')

Bases: torch.nn.modules.module.Module

computes the mean absolute error

Both complex and real representations are supported.

\[{\rm MAE}({\bf X}, {\bf Y}) = \frac{1}{N}\|{\bf X} - {\bf Y}\|_1 = \frac{1}{N}\sum_{i=1}^N |x_i - y_i| \]
Parameters
  • X (tensor) – original

  • Y (tensor) – reconstructed

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (int or None) – The dimension axis (cdim is not included) for computing norm. The default is None, which means all.

  • norm (bool) – If True, normalize with the f-norm of X and Y. (default is False)

  • reduction (str, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

mean absolute error

Return type

scalar or tensor

Examples

import torch as th
from torchlib.module.evaluation.error import MAE

norm = False
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
Y = th.randn(5, 2, 3, 4)

# real
C1 = MAE(cdim=None, dim=(-2, -1), norm=norm, reduction=None)(X, Y)
C2 = MAE(cdim=None, dim=(-2, -1), norm=norm, reduction='sum')(X, Y)
C3 = MAE(cdim=None, dim=(-2, -1), norm=norm, reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in real format
C1 = MAE(cdim=1, dim=(-2, -1), norm=norm, reduction=None)(X, Y)
C2 = MAE(cdim=1, dim=(-2, -1), norm=norm, reduction='sum')(X, Y)
C3 = MAE(cdim=1, dim=(-2, -1), norm=norm, reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
Y = Y[:, 0, ...] + 1j * Y[:, 1, ...]
C1 = MAE(cdim=None, dim=(-2, -1), norm=norm, reduction=None)(X, Y)
C2 = MAE(cdim=None, dim=(-2, -1), norm=norm, reduction='sum')(X, Y)
C3 = MAE(cdim=None, dim=(-2, -1), norm=norm, reduction='mean')(X, Y)
print(C1, C2, C3)

# ---output
[[1.06029116 1.19884877]
[0.90117091 1.13552361]
[1.23422083 0.75743914]
[1.16127965 1.42169262]
[1.25090731 1.29134222]] 11.41271620974502 1.141271620974502
[1.71298566 1.50327364 1.53328572 2.11430946 2.01435599] 8.878210471231741 1.7756420942463482
[1.71298566 1.50327364 1.53328572 2.11430946 2.01435599] 8.878210471231741 1.7756420942463482
forward(P, G)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchlib.module.evaluation.error.MSE(cdim=None, dim=None, norm=False, reduction='mean')

Bases: torch.nn.modules.module.Module

computes the mean square error

Both complex and real representations are supported.

\[{\rm MSE}({\bf X}, {\bf Y}) = \frac{1}{N}\|{\bf X} - {\bf Y}\|_2^2 = \frac{1}{N}\sum_{i=1}^N |x_i - y_i|^2 \]
Parameters
  • X (tensor) – reconstructed

  • Y (tensor) – target

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (int or None) – The dimension axis (cdim is not included) for computing norm. The default is None, which means all.

  • norm (bool) – If True, normalize with the f-norm of X and Y. (default is False)

  • reduction (str, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

mean square error

Return type

scalar or tensor

Examples

import torch as th
from torchlib.module.evaluation.error import MSE

norm = False
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
Y = th.randn(5, 2, 3, 4)

# real
C1 = MSE(cdim=None, dim=(-2, -1), norm=norm, reduction=None)(X, Y)
C2 = MSE(cdim=None, dim=(-2, -1), norm=norm, reduction='sum')(X, Y)
C3 = MSE(cdim=None, dim=(-2, -1), norm=norm, reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in real format
C1 = MSE(cdim=1, dim=(-2, -1), norm=norm, reduction=None)(X, Y)
C2 = MSE(cdim=1, dim=(-2, -1), norm=norm, reduction='sum')(X, Y)
C3 = MSE(cdim=1, dim=(-2, -1), norm=norm, reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
Y = Y[:, 0, ...] + 1j * Y[:, 1, ...]
C1 = MSE(cdim=None, dim=(-2, -1), norm=norm, reduction=None)(X, Y)
C2 = MSE(cdim=None, dim=(-2, -1), norm=norm, reduction='sum')(X, Y)
C3 = MSE(cdim=None, dim=(-2, -1), norm=norm, reduction='mean')(X, Y)
print(C1, C2, C3)

# ---output
[[1.57602573 2.32844311]
[1.07232374 2.36118382]
[2.1841515  0.79002805]
[2.43036295 3.18413899]
[2.31107373 2.73990485]] 20.977636476183186 2.0977636476183186
[3.90446884 3.43350757 2.97417955 5.61450194 5.05097858] 20.977636476183186 4.195527295236637
[3.90446884 3.43350757 2.97417955 5.61450194 5.05097858] 20.977636476183186 4.195527295236637
forward(P, G)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchlib.module.evaluation.error.SAE(cdim=None, dim=None, norm=False, reduction='mean')

Bases: torch.nn.modules.module.Module

computes the sum absolute error

Both complex and real representations are supported.

\[{\rm SAE}({\bf X}, {\bf Y}) = \|{\bf X} - {\bf Y}\|_1 = \sum_{i=1}^N |x_i - y_i| \]
Parameters
  • X (tensor) – original

  • Y (tensor) – reconstructed

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (int or None) – The dimension axis (cdim is not included) for computing norm. The default is None, which means all.

  • norm (bool) – If True, normalize with the f-norm of X and Y. (default is False)

  • reduction (str, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

sum absolute error

Return type

scalar or tensor

Examples

import torch as th
from torchlib.module.evaluation.error import SAE

norm = False
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
Y = th.randn(5, 2, 3, 4)

# real
C1 = SAE(cdim=None, dim=(-2, -1), norm=norm, reduction=None)(X, Y)
C2 = SAE(cdim=None, dim=(-2, -1), norm=norm, reduction='sum')(X, Y)
C3 = SAE(cdim=None, dim=(-2, -1), norm=norm, reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in real format
C1 = SAE(cdim=1, dim=(-2, -1), norm=norm, reduction=None)(X, Y)
C2 = SAE(cdim=1, dim=(-2, -1), norm=norm, reduction='sum')(X, Y)
C3 = SAE(cdim=1, dim=(-2, -1), norm=norm, reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
Y = Y[:, 0, ...] + 1j * Y[:, 1, ...]
C1 = SAE(cdim=None, dim=(-2, -1), norm=norm, reduction=None)(X, Y)
C2 = SAE(cdim=None, dim=(-2, -1), norm=norm, reduction='sum')(X, Y)
C3 = SAE(cdim=None, dim=(-2, -1), norm=norm, reduction='mean')(X, Y)
print(C1, C2, C3)

# ---output
[[12.72349388 14.3861852 ]
[10.81405096 13.62628335]
[14.81065     9.08926963]
[13.93535577 17.0603114 ]
[15.0108877  15.49610662]] 136.95259451694022 13.695259451694023
[20.55582795 18.03928365 18.39942858 25.37171356 24.17227192] 106.53852565478087 21.307705130956172
[20.55582795 18.03928365 18.39942858 25.37171356 24.17227192] 106.5385256547809 21.30770513095618
forward(P, G)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchlib.module.evaluation.error.SSE(cdim=None, dim=None, norm=False, reduction='mean')

Bases: torch.nn.modules.module.Module

computes the sum square error

Both complex and real representations are supported.

\[{\rm SSE}({\bf X}, {\bf Y}) = \|{\bf X} - {\bf Y}\|_2^2 = \sum_{i=1}^N |x_i - y_i|^2 \]
Parameters
  • X (tensor) – reconstructed

  • Y (tensor) – target

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (int or None) – The dimension axis (cdim is not included) for computing norm. The default is None, which means all.

  • norm (bool) – If True, normalize with the f-norm of X and Y. (default is False)

  • reduction (str, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

sum square error

Return type

scalar or tensor

Examples

import torch as th
from torchlib.module.evaluation.error import SSE

norm = False
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
Y = th.randn(5, 2, 3, 4)

# real
C1 = SSE(cdim=None, dim=(-2, -1), norm=norm, reduction=None)(X, Y)
C2 = SSE(cdim=None, dim=(-2, -1), norm=norm, reduction='sum')(X, Y)
C3 = SSE(cdim=None, dim=(-2, -1), norm=norm, reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in real format
C1 = SSE(cdim=1, dim=(-2, -1), norm=norm, reduction=None)(X, Y)
C2 = SSE(cdim=1, dim=(-2, -1), norm=norm, reduction='sum')(X, Y)
C3 = SSE(cdim=1, dim=(-2, -1), norm=norm, reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
Y = Y[:, 0, ...] + 1j * Y[:, 1, ...]
C1 = SSE(cdim=None, dim=(-2, -1), norm=norm, reduction=None)(X, Y)
C2 = SSE(cdim=None, dim=(-2, -1), norm=norm, reduction='sum')(X, Y)
C3 = SSE(cdim=None, dim=(-2, -1), norm=norm, reduction='mean')(X, Y)
print(C1, C2, C3)

# ---output
[[18.91230872 27.94131733]
[12.86788492 28.33420589]
[26.209818    9.48033663]
[29.16435541 38.20966786]
[27.73288477 32.87885818]] 251.73163771419823 25.173163771419823
[46.85362605 41.20209081 35.69015462 67.37402327 60.61174295] 251.73163771419823 50.346327542839646
[46.85362605 41.20209081 35.69015462 67.37402327 60.61174295] 251.73163771419823 50.346327542839646
forward(P, G)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

torchlib.module.evaluation.norm module

class torchlib.module.evaluation.norm.Fnorm(cdim=None, dim=None, reduction='mean')

Bases: torch.nn.modules.module.Module

obtain the f-norm of a tensor

Both complex and real representations are supported.

\[{\rm norm}({\bf X}) = \|{\bf X}\|_2 = \left(\sum_{x_i\in {\bf X}}|x_i|^2\right)^{\frac{1}{2}} \]

where, for complex-valued \(x_i\), \(|x_i| = \sqrt{u^2 + v^2}\) with \(u\) and \(v\) the real and imaginary parts of \(x_i\), respectively.

Parameters
  • X (tensor) – input

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (int or None) – The dimension axis (cdim is not included) for computing norm. The default is None, which means all.

  • reduction (str or None, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

the input's f-norm.

Return type

tensor

Examples

import torch as th
from torchlib.module.evaluation.norm import Fnorm

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
print('---norm')

# real
F1 = Fnorm(cdim=None, dim=(-2, -1), reduction=None)(X)
F2 = Fnorm(cdim=None, dim=(-2, -1), reduction='sum')(X)
F3 = Fnorm(cdim=None, dim=(-2, -1), reduction='mean')(X)
print(F1, F2, F3)

# complex in real format
F1 = Fnorm(cdim=1, dim=(-2, -1), reduction=None)(X)
F2 = Fnorm(cdim=1, dim=(-2, -1), reduction='sum')(X)
F3 = Fnorm(cdim=1, dim=(-2, -1), reduction='mean')(X)
print(F1, F2, F3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
F1 = Fnorm(cdim=None, dim=(-2, -1), reduction=None)(X)
F2 = Fnorm(cdim=None, dim=(-2, -1), reduction='sum')(X)
F3 = Fnorm(cdim=None, dim=(-2, -1), reduction='mean')(X)
print(F1, F2, F3)

---norm
tensor([[2.8719, 2.8263],
        [3.1785, 3.4701],
        [4.6697, 3.2955],
        [3.0992, 2.6447],
        [3.5341, 3.5779]]) tensor(33.1679) tensor(3.3168)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
forward(X)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchlib.module.evaluation.norm.Pnorm(cdim=None, dim=None, p=2, reduction='mean')

Bases: torch.nn.modules.module.Module

obtain the p-norm of a tensor

Both complex and real representations are supported.

\[{\rm pnorm}({\bf X}) = \|{\bf X}\|_p = \left(\sum_{x_i\in {\bf X}}|x_i|^p\right)^{\frac{1}{p}} \]

where, for complex-valued \(x_i\), \(|x_i| = \sqrt{u^2 + v^2}\) with \(u\) and \(v\) the real and imaginary parts of \(x_i\), respectively.

Parameters
  • X (tensor) – input

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (int or None) – The dimension axis (cdim is not included) for computing norm. The default is None, which means all.

  • p (int) – Specifies the power. The default is 2.

  • reduction (str or None, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

the input's p-norm.

Return type

tensor

Examples

import torch as th
from torchlib.module.evaluation.norm import Pnorm

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
print('---pnorm')

# real
F1 = Pnorm(cdim=None, dim=(-2, -1), reduction=None)(X)
F2 = Pnorm(cdim=None, dim=(-2, -1), reduction='sum')(X)
F3 = Pnorm(cdim=None, dim=(-2, -1), reduction='mean')(X)
print(F1, F2, F3)

# complex in real format
F1 = Pnorm(cdim=1, dim=(-2, -1), reduction=None)(X)
F2 = Pnorm(cdim=1, dim=(-2, -1), reduction='sum')(X)
F3 = Pnorm(cdim=1, dim=(-2, -1), reduction='mean')(X)
print(F1, F2, F3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
F1 = Pnorm(cdim=None, dim=(-2, -1), reduction=None)(X)
F2 = Pnorm(cdim=None, dim=(-2, -1), reduction='sum')(X)
F3 = Pnorm(cdim=None, dim=(-2, -1), reduction='mean')(X)
print(F1, F2, F3)

---pnorm
tensor([[2.8719, 2.8263],
        [3.1785, 3.4701],
        [4.6697, 3.2955],
        [3.0992, 2.6447],
        [3.5341, 3.5779]]) tensor(33.1679) tensor(3.3168)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
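
All of the calls above use the default p=2, which is why the outputs match Fnorm's. Other orders are selected with p; a minimal sketch (output not shown):

import torch as th
from torchlib.module.evaluation.norm import Pnorm

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
P1 = Pnorm(cdim=1, dim=(-2, -1), p=1, reduction=None)(X)   # 1-norm, complex in real format
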
forward(X)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

torchlib.module.evaluation.retrieval module

class torchlib.module.evaluation.retrieval.Dice(size_average=True, reduce=True)

Bases: torch.nn.modules.module.Module

soft_dice_coeff(P, G)
training: bool
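
No example accompanies this class; a minimal usage sketch calling the documented soft_dice_coeff (the random binary masks P and G are illustrative assumptions):

import torch as th
from torchlib.module.evaluation.retrieval import Dice

P = (th.rand(4, 1, 8, 8) > 0.5).float()   # hypothetical predicted mask
G = (th.rand(4, 1, 8, 8) > 0.5).float()   # hypothetical ground-truth mask
d = Dice().soft_dice_coeff(P, G)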
class torchlib.module.evaluation.retrieval.F1(size_average=True, reduce=True)

Bases: torch.nn.modules.module.Module

F1 distance

\[F_{\beta} = 1 - \frac{(1+\beta^2) P R}{\beta^2 P + R} \]

where,

\[{\rm PPV} = P = \frac{\rm TP}{{\rm TP} + {\rm FP}} \]

\[{\rm TPR} = R = \frac{\rm TP}{{\rm TP} + {\rm FN}} \]
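
A minimal usage sketch (assuming P is a predicted mask and G the ground truth; the random binary masks are illustrative only):

import torch as th
from torchlib.module.evaluation.retrieval import F1

P = (th.rand(4, 1, 8, 8) > 0.5).float()
G = (th.rand(4, 1, 8, 8) > 0.5).float()
loss = F1()(P, G)   # 1 - F-score, per the definition above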
forward(P, G)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchlib.module.evaluation.retrieval.Iridescent(size_average=True, reduce=True)

Bases: torch.nn.modules.module.Module

Iridescent Distance

\[d_{J}({\mathbb A}, {\mathbb B})=1-J({\mathbb A}, {\mathbb B})=\frac{|{\mathbb A} \cup {\mathbb B}|-|{\mathbb A} \cap {\mathbb B}|}{|{\mathbb A} \cup {\mathbb B}|} \]
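
A minimal usage sketch (inputs assumed to be binary masks of the same shape, treated as sets of active pixels):

import torch as th
from torchlib.module.evaluation.retrieval import Iridescent

P = (th.rand(4, 1, 8, 8) > 0.5).float()
G = (th.rand(4, 1, 8, 8) > 0.5).float()
d = Iridescent()(P, G)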
forward(P, G)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchlib.module.evaluation.retrieval.Jaccard(size_average=True, reduce=True)

Bases: torch.nn.modules.module.Module

Jaccard distance

\[d_{J}({\mathbb A}, {\mathbb B})=1-J({\mathbb A}, {\mathbb B})=\frac{|{\mathbb A} \cup {\mathbb B}|-|{\mathbb A} \cap {\mathbb B}|}{|{\mathbb A} \cup {\mathbb B}|} \]
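
A minimal usage sketch (the masks play the roles of the sets \({\mathbb A}\) and \({\mathbb B}\) above; random values are illustrative only):

import torch as th
from torchlib.module.evaluation.retrieval import Jaccard

P = (th.rand(4, 1, 8, 8) > 0.5).float()
G = (th.rand(4, 1, 8, 8) > 0.5).float()
d = Jaccard()(P, G)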
forward(P, G)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

torchlib.module.evaluation.ssims module

class torchlib.module.evaluation.ssims.MSSSIM(data_range=255, size_average=True, win_size=11, win_sigma=1.5, channel=3, spatial_dims=2, weights=None, K=(0.01, 0.03))

Bases: torch.nn.modules.module.Module
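
No example accompanies this class; a minimal usage sketch under the default data_range=255 and channel=3 (shapes and values are illustrative assumptions, with images large enough for the multi-scale pyramid):

import torch as th
from torchlib.module.evaluation.ssims import MSSSIM

X = th.rand(2, 3, 256, 256) * 255
Y = (X + th.randn_like(X) * 5).clamp(0, 255)   # noisy copy of X
score = MSSSIM(data_range=255, channel=3)(X, Y)  # scalar when size_average=True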

forward(X, Y)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchlib.module.evaluation.ssims.SSIM(data_range=255, size_average=True, win_size=11, win_sigma=1.5, channel=3, spatial_dims=2, K=(0.01, 0.03), nonnegative_ssim=False)

Bases: torch.nn.modules.module.Module
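
Analogously to MSSSIM above, a minimal single-scale usage sketch (shapes and values are illustrative assumptions):

import torch as th
from torchlib.module.evaluation.ssims import SSIM

X = th.rand(2, 3, 64, 64) * 255
Y = (X + th.randn_like(X) * 5).clamp(0, 255)
score = SSIM(data_range=255, channel=3)(X, Y)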

forward(X, Y)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

torchlib.module.evaluation.variation module

class torchlib.module.evaluation.variation.TotalVariation(axis=0, reduction='mean')

Bases: torch.nn.modules.module.Module

Total Variation

Reference: https://www.wikiwand.com/en/Total_variation_denoising

diff_i = torch.sum(torch.abs(y_hat[:, :, :, 1:] - y_hat[:, :, :, :-1]))
diff_j = torch.sum(torch.abs(y_hat[:, :, 1:, :] - y_hat[:, :, :-1, :]))
tv_loss = TV_WEIGHT * (diff_i + diff_j)
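
A minimal usage sketch of the module itself (assuming an image batch of shape (N, C, H, W); the axis semantics follow the signature above):

import torch as th
from torchlib.module.evaluation.variation import TotalVariation

X = th.rand(4, 3, 32, 32)
tv = TotalVariation(axis=0, reduction='mean')
v = tv(X)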

forward(X)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

Module contents