torchlib.module.loss package¶
Submodules¶
torchlib.module.loss.contrast module¶
- class torchlib.module.loss.contrast.ContrastLoss(cdim=None, dim=None, mode='way1', reduction='mean')¶
Bases:
torch.nn.modules.module.Module
Contrast
way1 is defined as follows, see [1]:
\[C = \frac{\sqrt{{\rm E}\left[\left(|I|^2 - {\rm E}(|I|^2)\right)^2\right]}}{{\rm E}(|I|^2)} \]
way2 is defined as follows, see [2]:
\[C = \frac{{\rm E}(|I|^2)}{\left({\rm E}(|I|)\right)^2} \]
[1] Efficient Nonparametric ISAR Autofocus Algorithm Based on Contrast Maximization and Newton
[2] Section 13.4.1 in “Ian G. Cumming’s SAR book”
- Parameters
X (torch tensor) – The image tensor.
cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.
dim (tuple, None, optional) – The dimension axis (cdim is not included) for computing contrast. The default is None, which means all.
mode (str, optional) – 'way1' or 'way2'.
reduction (str, optional) – The operation in the batch dimension, None, 'mean' or 'sum' (the default is 'mean').
- Returns
C – The contrast value of input.
- Return type
scalar or tensor
Examples
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

# real
C1 = ContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction=None)(X)
C2 = ContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction='sum')(X)
C3 = ContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction='mean')(X)
print(C1, C2, C3)

# complex in real format
C1 = ContrastLoss(cdim=1, dim=(-2, -1), mode='way1', reduction=None)(X)
C2 = ContrastLoss(cdim=1, dim=(-2, -1), mode='way1', reduction='sum')(X)
C3 = ContrastLoss(cdim=1, dim=(-2, -1), mode='way1', reduction='mean')(X)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
C1 = ContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction=None)(X)
C2 = ContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction='sum')(X)
C3 = ContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction='mean')(X)
print(C1, C2, C3)

# output
tensor([[1.2612, 1.1085],
        [1.5992, 1.2124],
        [0.8201, 0.9887],
        [1.4376, 1.0091],
        [1.1397, 1.1860]]) tensor(11.7626) tensor(1.1763)
tensor([0.6321, 1.1808, 0.5884, 1.1346, 0.6038]) tensor(4.1396) tensor(0.8279)
tensor([0.6321, 1.1808, 0.5884, 1.1346, 0.6038]) tensor(4.1396) tensor(0.8279)
- forward(X)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchlib.module.loss.contrast.NegativeContrastLoss(cdim=None, dim=None, mode='way1', reduction='mean')¶
Bases:
torch.nn.modules.module.Module
Negative Contrast Loss
way1 is defined as follows, see [1]:
\[C = -\frac{\sqrt{{\rm E}\left[\left(|I|^2 - {\rm E}(|I|^2)\right)^2\right]}}{{\rm E}(|I|^2)} \]
way2 is defined as follows, see [2]:
\[C = -\frac{{\rm E}(|I|^2)}{\left({\rm E}(|I|)\right)^2} \]
[1] Efficient Nonparametric ISAR Autofocus Algorithm Based on Contrast Maximization and Newton
[2] Section 13.4.1 in “Ian G. Cumming’s SAR book”
- Parameters
X (torch tensor) – The image tensor.
cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.
dim (tuple, None, optional) – The dimension axis (cdim is not included) for computing contrast. The default is None, which means all.
mode (str, optional) – 'way1' or 'way2'.
reduction (str, optional) – The operation in the batch dimension, None, 'mean' or 'sum' (the default is 'mean').
- Returns
C – The contrast value of input.
- Return type
scalar or tensor
Examples
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

# real
C1 = NegativeContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction=None)(X)
C2 = NegativeContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction='sum')(X)
C3 = NegativeContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction='mean')(X)
print(C1, C2, C3)

# complex in real format
C1 = NegativeContrastLoss(cdim=1, dim=(-2, -1), mode='way1', reduction=None)(X)
C2 = NegativeContrastLoss(cdim=1, dim=(-2, -1), mode='way1', reduction='sum')(X)
C3 = NegativeContrastLoss(cdim=1, dim=(-2, -1), mode='way1', reduction='mean')(X)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
C1 = NegativeContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction=None)(X)
C2 = NegativeContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction='sum')(X)
C3 = NegativeContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction='mean')(X)
print(C1, C2, C3)

# output
tensor([[-1.2612, -1.1085],
        [-1.5992, -1.2124],
        [-0.8201, -0.9887],
        [-1.4376, -1.0091],
        [-1.1397, -1.1860]]) tensor(-11.7626) tensor(-1.1763)
tensor([-0.6321, -1.1808, -0.5884, -1.1346, -0.6038]) tensor(-4.1396) tensor(-0.8279)
tensor([-0.6321, -1.1808, -0.5884, -1.1346, -0.6038]) tensor(-4.1396) tensor(-0.8279)
- forward(X)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchlib.module.loss.contrast.ReciprocalContrastLoss(cdim=None, dim=None, mode='way1', reduction='mean')¶
Bases:
torch.nn.modules.module.Module
Reciprocal Contrast Loss
way1 is defined as follows, for contrast, see [1]:
\[C = \frac{{\rm E}(|I|^2)}{\sqrt{{\rm E}\left[\left(|I|^2 - {\rm E}(|I|^2)\right)^2\right]}} \]
way2 is defined as follows, for contrast, see [2]:
\[C = \frac{\left({\rm E}(|I|)\right)^2}{{\rm E}(|I|^2)} \]
[1] Efficient Nonparametric ISAR Autofocus Algorithm Based on Contrast Maximization and Newton
[2] Section 13.4.1 in “Ian G. Cumming’s SAR book”
- Parameters
X (torch tensor) – The image tensor.
cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.
dim (tuple, None, optional) – The dimension axis (cdim is not included) for computing contrast. The default is None, which means all.
mode (str, optional) – 'way1' or 'way2'.
reduction (str, optional) – The operation in the batch dimension, None, 'mean' or 'sum' (the default is 'mean').
- Returns
C – The contrast value of input.
- Return type
scalar or tensor
Examples
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

# real
C1 = ReciprocalContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction=None)(X)
C2 = ReciprocalContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction='sum')(X)
C3 = ReciprocalContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction='mean')(X)
print(C1, C2, C3)

# complex in real format
C1 = ReciprocalContrastLoss(cdim=1, dim=(-2, -1), mode='way1', reduction=None)(X)
C2 = ReciprocalContrastLoss(cdim=1, dim=(-2, -1), mode='way1', reduction='sum')(X)
C3 = ReciprocalContrastLoss(cdim=1, dim=(-2, -1), mode='way1', reduction='mean')(X)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
C1 = ReciprocalContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction=None)(X)
C2 = ReciprocalContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction='sum')(X)
C3 = ReciprocalContrastLoss(cdim=None, dim=(-2, -1), mode='way1', reduction='mean')(X)
print(C1, C2, C3)

# output
tensor([[0.7929, 0.9021],
        [0.6253, 0.8248],
        [1.2193, 1.0114],
        [0.6956, 0.9909],
        [0.8774, 0.8432]]) tensor(8.7830) tensor(0.8783)
tensor([1.5821, 0.8469, 1.6997, 0.8813, 1.6563]) tensor(6.6663) tensor(1.3333)
tensor([1.5821, 0.8469, 1.6997, 0.8813, 1.6563]) tensor(6.6663) tensor(1.3333)
- forward(X)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
torchlib.module.loss.entropy module¶
- class torchlib.module.loss.entropy.EntropyLoss(cdim=None, dim=None, mode='shannon', reduction='mean')¶
Bases:
torch.nn.modules.module.Module
computes the entropy of the inputs
\[{\rm S} = -\sum_{n=1}^{N} p_n {\rm log}_2 p_n \]
where \(N\) is the number of pixels and \(p_n = \frac{|X_n|^2}{\sum_{n=1}^{N}|X_n|^2}\).
- Parameters
X (tensor) – The complex or real inputs; for complex inputs, both complex and real representations are supported.
cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.
dim (tuple, None, optional) – The dimension axis (cdim is not included) for computing entropy. The default is None, which means all.
mode (str, optional) – The entropy mode: 'shannon' or 'natural' (the default is 'shannon').
reduction (str, optional) – The operation in the batch dimension, None, 'mean' or 'sum' (the default is 'mean').
- Returns
S – The entropy of the inputs.
- Return type
tensor
Examples
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

# real
S1 = EntropyLoss(cdim=None, dim=(-2, -1), mode='shannon', reduction=None)(X)
S2 = EntropyLoss(cdim=None, dim=(-2, -1), mode='shannon', reduction='sum')(X)
S3 = EntropyLoss(cdim=None, dim=(-2, -1), mode='shannon', reduction='mean')(X)
print(S1, S2, S3)

# complex in real format
S1 = EntropyLoss(cdim=1, dim=(-2, -1), mode='shannon', reduction=None)(X)
S2 = EntropyLoss(cdim=1, dim=(-2, -1), mode='shannon', reduction='sum')(X)
S3 = EntropyLoss(cdim=1, dim=(-2, -1), mode='shannon', reduction='mean')(X)
print(S1, S2, S3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
S1 = EntropyLoss(cdim=None, dim=(-2, -1), mode='shannon', reduction=None)(X)
S2 = EntropyLoss(cdim=None, dim=(-2, -1), mode='shannon', reduction='sum')(X)
S3 = EntropyLoss(cdim=None, dim=(-2, -1), mode='shannon', reduction='mean')(X)
print(S1, S2, S3)

# output
tensor([[2.5482, 2.7150],
        [2.0556, 2.6142],
        [2.9837, 2.9511],
        [2.4296, 2.7979],
        [2.7287, 2.5560]]) tensor(26.3800) tensor(2.6380)
tensor([3.2738, 2.5613, 3.2911, 2.7989, 3.2789]) tensor(15.2040) tensor(3.0408)
tensor([3.2738, 2.5613, 3.2911, 2.7989, 3.2789]) tensor(15.2040) tensor(3.0408)
- forward(X)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
torchlib.module.loss.error module¶
- class torchlib.module.loss.error.MAELoss(cdim=None, dim=None, reduction='mean')¶
Bases:
torch.nn.modules.module.Module
computes the mean absolute error
Both complex and real representations are supported.
\[{\rm MAE}({\bf X}, {\bf Y}) = \frac{1}{N}\|{\bf X} - {\bf Y}\|_1 = \frac{1}{N}\sum_{i=1}^N |x_i - y_i| \]
- Parameters
X (tensor) – original
Y (tensor) – reconstructed
cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.
dim (tuple, None, optional) – The dimension axis (cdim is not included) for computing the norm. The default is None, which means all.
reduction (str, optional) – The operation in the batch dimension, None, 'mean' or 'sum' (the default is 'mean').
- Returns
mean absolute error
- Return type
scalar or tensor
Examples
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
Y = th.randn(5, 2, 3, 4)

# real
C1 = MAELoss(cdim=None, dim=(-2, -1), reduction=None)(X, Y)
C2 = MAELoss(cdim=None, dim=(-2, -1), reduction='sum')(X, Y)
C3 = MAELoss(cdim=None, dim=(-2, -1), reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in real format
C1 = MAELoss(cdim=1, dim=(-2, -1), reduction=None)(X, Y)
C2 = MAELoss(cdim=1, dim=(-2, -1), reduction='sum')(X, Y)
C3 = MAELoss(cdim=1, dim=(-2, -1), reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
Y = Y[:, 0, ...] + 1j * Y[:, 1, ...]
C1 = MAELoss(cdim=None, dim=(-2, -1), reduction=None)(X, Y)
C2 = MAELoss(cdim=None, dim=(-2, -1), reduction='sum')(X, Y)
C3 = MAELoss(cdim=None, dim=(-2, -1), reduction='mean')(X, Y)
print(C1, C2, C3)

# ---output
[[1.06029116 1.19884877]
 [0.90117091 1.13552361]
 [1.23422083 0.75743914]
 [1.16127965 1.42169262]
 [1.25090731 1.29134222]] 11.41271620974502 1.141271620974502
[1.71298566 1.50327364 1.53328572 2.11430946 2.01435599] 8.878210471231741 1.7756420942463482
[1.71298566 1.50327364 1.53328572 2.11430946 2.01435599] 8.878210471231741 1.7756420942463482
- forward(P, G)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchlib.module.loss.error.MSELoss(cdim=None, dim=None, reduction='mean')¶
Bases:
torch.nn.modules.module.Module
computes the mean square error
Both complex and real representations are supported.
\[{\rm MSE}({\bf X}, {\bf Y}) = \frac{1}{N}\|{\bf X} - {\bf Y}\|_2^2 = \frac{1}{N}\sum_{i=1}^N |x_i - y_i|^2 \]
- Parameters
X (tensor) – reconstructed
Y (tensor) – target
cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.
dim (tuple, None, optional) – The dimension axis (cdim is not included) for computing the norm. The default is None, which means all.
reduction (str, optional) – The operation in the batch dimension, None, 'mean' or 'sum' (the default is 'mean').
- Returns
mean square error
- Return type
scalar or tensor
Examples
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
Y = th.randn(5, 2, 3, 4)

# real
C1 = MSELoss(cdim=None, dim=(-2, -1), reduction=None)(X, Y)
C2 = MSELoss(cdim=None, dim=(-2, -1), reduction='sum')(X, Y)
C3 = MSELoss(cdim=None, dim=(-2, -1), reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in real format
C1 = MSELoss(cdim=1, dim=(-2, -1), reduction=None)(X, Y)
C2 = MSELoss(cdim=1, dim=(-2, -1), reduction='sum')(X, Y)
C3 = MSELoss(cdim=1, dim=(-2, -1), reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
Y = Y[:, 0, ...] + 1j * Y[:, 1, ...]
C1 = MSELoss(cdim=None, dim=(-2, -1), reduction=None)(X, Y)
C2 = MSELoss(cdim=None, dim=(-2, -1), reduction='sum')(X, Y)
C3 = MSELoss(cdim=None, dim=(-2, -1), reduction='mean')(X, Y)
print(C1, C2, C3)

# ---output
[[1.57602573 2.32844311]
 [1.07232374 2.36118382]
 [2.1841515  0.79002805]
 [2.43036295 3.18413899]
 [2.31107373 2.73990485]] 20.977636476183186 2.0977636476183186
[3.90446884 3.43350757 2.97417955 5.61450194 5.05097858] 20.977636476183186 4.195527295236637
[3.90446884 3.43350757 2.97417955 5.61450194 5.05097858] 20.977636476183186 4.195527295236637
- forward(P, G)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchlib.module.loss.error.SAELoss(cdim=None, dim=None, reduction='mean')¶
Bases:
torch.nn.modules.module.Module
computes the sum absolute error
Both complex and real representations are supported.
\[{\rm SAE}({\bf X}, {\bf Y}) = \|{\bf X} - {\bf Y}\|_1 = \sum_{i=1}^N |x_i - y_i| \]
- Parameters
X (tensor) – original
Y (tensor) – reconstructed
cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.
dim (tuple, None, optional) – The dimension axis (cdim is not included) for computing the norm. The default is None, which means all.
reduction (str, optional) – The operation in the batch dimension, None, 'mean' or 'sum' (the default is 'mean').
- Returns
sum absolute error
- Return type
scalar or tensor
Examples
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
Y = th.randn(5, 2, 3, 4)

# real
C1 = SAELoss(cdim=None, dim=(-2, -1), reduction=None)(X, Y)
C2 = SAELoss(cdim=None, dim=(-2, -1), reduction='sum')(X, Y)
C3 = SAELoss(cdim=None, dim=(-2, -1), reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in real format
C1 = SAELoss(cdim=1, dim=(-2, -1), reduction=None)(X, Y)
C2 = SAELoss(cdim=1, dim=(-2, -1), reduction='sum')(X, Y)
C3 = SAELoss(cdim=1, dim=(-2, -1), reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
Y = Y[:, 0, ...] + 1j * Y[:, 1, ...]
C1 = SAELoss(cdim=None, dim=(-2, -1), reduction=None)(X, Y)
C2 = SAELoss(cdim=None, dim=(-2, -1), reduction='sum')(X, Y)
C3 = SAELoss(cdim=None, dim=(-2, -1), reduction='mean')(X, Y)
print(C1, C2, C3)

# ---output
[[12.72349388 14.3861852 ]
 [10.81405096 13.62628335]
 [14.81065     9.08926963]
 [13.93535577 17.0603114 ]
 [15.0108877  15.49610662]] 136.95259451694022 13.695259451694023
[20.55582795 18.03928365 18.39942858 25.37171356 24.17227192] 106.53852565478087 21.307705130956172
[20.55582795 18.03928365 18.39942858 25.37171356 24.17227192] 106.5385256547809 21.30770513095618
- forward(P, G)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchlib.module.loss.error.SSELoss(cdim=None, dim=None, reduction='mean')¶
Bases:
torch.nn.modules.module.Module
computes the sum square error
Both complex and real representations are supported.
\[{\rm SSE}({\bf X}, {\bf Y}) = \|{\bf X} - {\bf Y}\|_2^2 = \sum_{i=1}^N |x_i - y_i|^2 \]
- Parameters
X (tensor) – reconstructed
Y (tensor) – target
cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.
dim (tuple, None, optional) – The dimension axis (cdim is not included) for computing the norm. The default is None, which means all.
reduction (str, optional) – The operation in the batch dimension, None, 'mean' or 'sum' (the default is 'mean').
- Returns
sum square error
- Return type
scalar or tensor
Examples
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
Y = th.randn(5, 2, 3, 4)

# real
C1 = SSELoss(cdim=None, dim=(-2, -1), reduction=None)(X, Y)
C2 = SSELoss(cdim=None, dim=(-2, -1), reduction='sum')(X, Y)
C3 = SSELoss(cdim=None, dim=(-2, -1), reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in real format
C1 = SSELoss(cdim=1, dim=(-2, -1), reduction=None)(X, Y)
C2 = SSELoss(cdim=1, dim=(-2, -1), reduction='sum')(X, Y)
C3 = SSELoss(cdim=1, dim=(-2, -1), reduction='mean')(X, Y)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
Y = Y[:, 0, ...] + 1j * Y[:, 1, ...]
C1 = SSELoss(cdim=None, dim=(-2, -1), reduction=None)(X, Y)
C2 = SSELoss(cdim=None, dim=(-2, -1), reduction='sum')(X, Y)
C3 = SSELoss(cdim=None, dim=(-2, -1), reduction='mean')(X, Y)
print(C1, C2, C3)

# ---output
[[18.91230872 27.94131733]
 [12.86788492 28.33420589]
 [26.209818    9.48033663]
 [29.16435541 38.20966786]
 [27.73288477 32.87885818]] 251.73163771419823 25.173163771419823
[46.85362605 41.20209081 35.69015462 67.37402327 60.61174295] 251.73163771419823 50.346327542839646
[46.85362605 41.20209081 35.69015462 67.37402327 60.61174295] 251.73163771419823 50.346327542839646
- forward(P, G)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
torchlib.module.loss.fourier module¶
- class torchlib.module.loss.fourier.FourierAmplitudeLoss(cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None, err='mse', reduction='mean')¶
Bases:
torch.nn.modules.module.Module
Fourier Domain Amplitude Loss
computes the amplitude loss in the Fourier domain.
- Parameters
cdim (int, optional) – If the data is complex-valued but represented as real tensors, you should specify the complex dimension; otherwise, set it to None (the default). For example, if \({\bm X}_c\in {\mathbb C}^{N\times C\times H\times W}\) is represented as a real-valued tensor \({\bm X}_r\in {\mathbb R}^{N\times C\times H\times W\times 2}\), then cdim equals -1 (or equivalently 4).
ftdim (tuple, None, optional) – the dimensions for Fourier transformation, by default (-2, -1).
iftdim (tuple, None, optional) – the dimensions for inverse Fourier transformation, by default None.
ftn (int, None, optional) – the number of points for Fourier transformation, by default None.
iftn (int, None, optional) – the number of points for inverse Fourier transformation, by default None.
ftnorm (str, None, optional) – the normalization method for Fourier transformation, by default None. "forward": normalize by 1/n; "backward": no normalization; "ortho": normalize by 1/sqrt(n) (making the FFT orthonormal).
iftnorm (str, None, optional) – the normalization method for inverse Fourier transformation, by default None. "forward": no normalization; "backward": normalize by 1/n; "ortho": normalize by 1/sqrt(n) (making the IFFT orthonormal).
err (str or loss function, optional) – 'MSE', 'MAE' or a torch loss function, by default 'mse'.
reduction (str, optional) – reduction behavior, 'sum' or 'mean', by default 'mean'.
For details, see th.fft.fft() and th.fft.ifft().
.:Examples
Compute loss of data in real and complex representation, respectively.
th.manual_seed(2020)
xr = th.randn(10, 2, 4, 4) * 10000
yr = th.randn(10, 2, 4, 4) * 10000
xc = xr[:, [0], ...] + 1j * xr[:, [1], ...]
yc = yr[:, [0], ...] + 1j * yr[:, [1], ...]

flossr = FourierAmplitudeLoss(cdim=1, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None, err='mse', reduction='mean')
flossc = FourierAmplitudeLoss(cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None, err='mse', reduction='mean')
print(flossr(xr, yr))
print(flossc(xc, yc))

flossr = FourierAmplitudeLoss(cdim=1, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm='forward', iftnorm=None, err='mse', reduction='mean')
flossc = FourierAmplitudeLoss(cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm='forward', iftnorm=None, err='mse', reduction='mean')
print(flossr(xr, yr))
print(flossc(xc, yc))

# ---output
tensor(2.8548e+08)
tensor(2.8548e+08)
tensor(17842250.)
tensor(17842250.)
- forward(P, G)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchlib.module.loss.fourier.FourierLoss(cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None, err='mse', reduction='mean')¶
Bases:
torch.nn.modules.module.Module
Fourier Domain Loss
Compute the loss in the Fourier domain. Given input \({\bm P}\) and target \({\bm G}\),
\[L = g({\mathcal F}({\bm P}), {\mathcal F}({\bm G})) \]
where \({\bm P}\) and \({\bm G}\) can be real-valued or complex-valued data, and \(g(\cdot)\) is a distance function, such as the mean square error or mean absolute error.
- Parameters
cdim (int, optional) – If the data is complex-valued but represented as real tensors, you should specify the complex dimension; otherwise, set it to None (the default). For example, if \({\bm X}_c\in {\mathbb C}^{N\times C\times H\times W}\) is represented as a real-valued tensor \({\bm X}_r\in {\mathbb R}^{N\times C\times H\times W\times 2}\), then cdim equals -1 (or equivalently 4).
ftdim (tuple, None, optional) – the dimensions for Fourier transformation, by default (-2, -1).
iftdim (tuple, None, optional) – the dimensions for inverse Fourier transformation, by default None.
ftn (int, None, optional) – the number of points for Fourier transformation, by default None.
iftn (int, None, optional) – the number of points for inverse Fourier transformation, by default None.
ftnorm (str, None, optional) – the normalization method for Fourier transformation, by default None. "forward": normalize by 1/n; "backward": no normalization; "ortho": normalize by 1/sqrt(n) (making the FFT orthonormal).
iftnorm (str, None, optional) – the normalization method for inverse Fourier transformation, by default None. "forward": no normalization; "backward": normalize by 1/n; "ortho": normalize by 1/sqrt(n) (making the IFFT orthonormal).
err (str or loss function, optional) – 'MSE', 'MAE' or a torch loss function, by default 'mse'.
reduction (str, optional) – reduction behavior, 'sum' or 'mean', by default 'mean'.
For details, see th.fft.fft() and th.fft.ifft().
.:Examples
Compute loss of data in real and complex representation, respectively.
th.manual_seed(2020)
xr = th.randn(10, 2, 4, 4) * 10000
yr = th.randn(10, 2, 4, 4) * 10000
xc = xr[:, [0], ...] + 1j * xr[:, [1], ...]
yc = yr[:, [0], ...] + 1j * yr[:, [1], ...]

flossr = FourierLoss(cdim=1, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None, err='mse', reduction='mean')
flossc = FourierLoss(cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None, err='mse', reduction='mean')
print(flossr(xr, yr))
print(flossc(xc, yc))

flossr = FourierLoss(cdim=1, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm='forward', iftnorm=None, err='mse', reduction='mean')
flossc = FourierLoss(cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm='forward', iftnorm=None, err='mse', reduction='mean')
print(flossr(xr, yr))
print(flossc(xc, yc))

# ---output
tensor(7.2681e+08)
tensor(7.2681e+08)
tensor(45425624.)
tensor(45425624.)
- forward(P, G)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchlib.module.loss.fourier.FourierNormLoss(reduction='mean', p=1.5)¶
Bases:
torch.nn.modules.module.Module
\[C = \frac{{\rm E}(|I|^2)}{\left({\rm E}(|I|)\right)^2} \]
see Fast Fourier domain optimization using hybrid
- forward(X, w=None)¶
- Parameters
X (Tensor) – input after the FFT in azimuth
w (Tensor, optional) – weight
- Returns
loss
- Return type
tensor
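Examples
A minimal usage sketch (not from the library's documentation): the input layout, a complex spectrum after an azimuth FFT, is an assumption.
import torch as th
from torchlib.module.loss.fourier import FourierNormLoss

th.manual_seed(2020)
# hypothetical complex data after an FFT along the azimuth (row) dimension
X = th.fft.fft(th.randn(4, 8, 8, dtype=th.complex64), dim=-2)
floss = FourierNormLoss(reduction='mean', p=1.5)
print(floss(X))  # the weight w defaults to None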
- class torchlib.module.loss.fourier.FourierPhaseLoss(cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None, err='mse', reduction='mean')¶
Bases:
torch.nn.modules.module.Module
Fourier Domain Phase Loss
computes the phase loss in the Fourier domain.
- Parameters
cdim (int, optional) – If the data is complex-valued but represented as real tensors, you should specify the complex dimension; otherwise, set it to None (the default). For example, if \({\bm X}_c\in {\mathbb C}^{N\times C\times H\times W}\) is represented as a real-valued tensor \({\bm X}_r\in {\mathbb R}^{N\times C\times H\times W\times 2}\), then cdim equals -1 (or equivalently 4).
ftdim (tuple, None, optional) – the dimensions for Fourier transformation, by default (-2, -1).
iftdim (tuple, None, optional) – the dimensions for inverse Fourier transformation, by default None.
ftn (int, None, optional) – the number of points for Fourier transformation, by default None.
iftn (int, None, optional) – the number of points for inverse Fourier transformation, by default None.
ftnorm (str, None, optional) – the normalization method for Fourier transformation, by default None. "forward": normalize by 1/n; "backward": no normalization; "ortho": normalize by 1/sqrt(n) (making the FFT orthonormal).
iftnorm (str, None, optional) – the normalization method for inverse Fourier transformation, by default None. "forward": no normalization; "backward": normalize by 1/n; "ortho": normalize by 1/sqrt(n) (making the IFFT orthonormal).
err (str or loss function, optional) – 'MSE', 'MAE' or a torch loss function, by default 'mse'.
reduction (str, optional) – reduction behavior, 'sum' or 'mean', by default 'mean'.
For details, see th.fft.fft() and th.fft.ifft().
.:Examples
Compute loss of data in real and complex representation, respectively.
th.manual_seed(2020)
xr = th.randn(10, 2, 4, 4) * 10000
yr = th.randn(10, 2, 4, 4) * 10000
xc = xr[:, [0], ...] + 1j * xr[:, [1], ...]
yc = yr[:, [0], ...] + 1j * yr[:, [1], ...]

flossr = FourierPhaseLoss(cdim=1, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None, err='mse', reduction='mean')
flossc = FourierPhaseLoss(cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm=None, iftnorm=None, err='mse', reduction='mean')
print(flossr(xr, yr))
print(flossc(xc, yc))

flossr = FourierPhaseLoss(cdim=1, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm='forward', iftnorm=None, err='mse', reduction='mean')
flossc = FourierPhaseLoss(cdim=None, ftdim=(-2, -1), iftdim=None, ftn=None, iftn=None, ftnorm='forward', iftnorm=None, err='mse', reduction='mean')
print(flossr(xr, yr))
print(flossc(xc, yc))

# ---output
tensor(6.6797)
tensor(6.6797)
tensor(6.6797)
tensor(6.6797)
- forward(P, G)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
torchlib.module.loss.norm module¶
- class torchlib.module.loss.norm.FnormLoss(cdim=None, dim=None, reduction='mean')¶
Bases:
torch.nn.modules.module.Module
F-norm Loss
Both complex and real representations are supported.
\[{\rm norm}({\bf X}) = \|{\bf X}\|_2 = \left(\sum_{x_i\in {\bf X}}|x_i|^2\right)^{\frac{1}{2}} \]
- Parameters
X (tensor) – input
cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.
dim (int or None) – The dimension axis (cdim is not included) for computing the norm. The default is None, which means all.
reduction (str, None or optional) – The operation in the batch dimension, None, 'mean' or 'sum' (the default is 'mean').
- Returns
the input's F-norm.
- Return type
tensor
Examples
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
Y = th.randn(5, 2, 3, 4)
print('---norm')

# real
F1 = FnormLoss(cdim=None, dim=(-2, -1), reduction=None)(X, Y)
F2 = FnormLoss(cdim=None, dim=(-2, -1), reduction='sum')(X, Y)
F3 = FnormLoss(cdim=None, dim=(-2, -1), reduction='mean')(X, Y)
print(F1, F2, F3)

# complex in real format
F1 = FnormLoss(cdim=1, dim=(-2, -1), reduction=None)(X, Y)
F2 = FnormLoss(cdim=1, dim=(-2, -1), reduction='sum')(X, Y)
F3 = FnormLoss(cdim=1, dim=(-2, -1), reduction='mean')(X, Y)
print(F1, F2, F3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
F1 = FnormLoss(cdim=None, dim=(-2, -1), reduction=None)(X, Y)
F2 = FnormLoss(cdim=None, dim=(-2, -1), reduction='sum')(X, Y)
F3 = FnormLoss(cdim=None, dim=(-2, -1), reduction='mean')(X, Y)
print(F1, F2, F3)

# output
---norm
tensor([[2.8719, 2.8263],
        [3.1785, 3.4701],
        [4.6697, 3.2955],
        [3.0992, 2.6447],
        [3.5341, 3.5779]]) tensor(33.1679) tensor(3.3168)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
- forward(X, Y)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchlib.module.loss.norm.PnormLoss(cdim=None, dim=None, p=2, reduction='mean')¶
Bases:
torch.nn.modules.module.Module
computes the p-norm of a tensor
Both complex and real representations are supported.
\[{\rm pnorm}({\bf X}) = \|{\bf X}\|_p = \left(\sum_{x_i\in {\bf X}}|x_i|^p\right)^{\frac{1}{p}} \]
- Parameters
X (tensor) – input
cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.
dim (int or None) – The dimension axis (cdim is not included) for computing the norm. The default is None, which means all.
p (int) – Specifies the power. The default is 2.
reduction (str, None or optional) – The operation in the batch dimension, None, 'mean' or 'sum' (the default is 'mean').
- Returns
the input's p-norm.
- Return type
tensor
Examples
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
Y = th.randn(5, 2, 3, 4)
print('---norm')

# real
F1 = PnormLoss(cdim=None, dim=(-2, -1), reduction=None)(X, Y)
F2 = PnormLoss(cdim=None, dim=(-2, -1), reduction='sum')(X, Y)
F3 = PnormLoss(cdim=None, dim=(-2, -1), reduction='mean')(X, Y)
print(F1, F2, F3)

# complex in real format
F1 = PnormLoss(cdim=1, dim=(-2, -1), reduction=None)(X, Y)
F2 = PnormLoss(cdim=1, dim=(-2, -1), reduction='sum')(X, Y)
F3 = PnormLoss(cdim=1, dim=(-2, -1), reduction='mean')(X, Y)
print(F1, F2, F3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
F1 = PnormLoss(cdim=None, dim=(-2, -1), reduction=None)(X, Y)
F2 = PnormLoss(cdim=None, dim=(-2, -1), reduction='sum')(X, Y)
F3 = PnormLoss(cdim=None, dim=(-2, -1), reduction='mean')(X, Y)
print(F1, F2, F3)

# output
---norm
tensor([[2.8719, 2.8263],
        [3.1785, 3.4701],
        [4.6697, 3.2955],
        [3.0992, 2.6447],
        [3.5341, 3.5779]]) tensor(33.1679) tensor(3.3168)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
- forward(X, Y)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
torchlib.module.loss.perceptual module¶
- class torchlib.module.loss.perceptual.RandomProjectionLoss(mode='real', baseloss='MSE', channels=[3, 32], kernel_sizes=[(3, 3)], activations=['ReLU'], reduction='mean')¶
Bases:
torch.nn.modules.module.Module
RandomProjection loss
- forward(P, G)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- weight_init()¶
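Examples
A minimal usage sketch (not from the library's documentation): the 3-channel input shape is an assumption suggested by the default channels=[3, 32].
import torch as th
from torchlib.module.loss.perceptual import RandomProjectionLoss

th.manual_seed(2020)
P = th.randn(4, 3, 32, 32)  # predicted images (3 channels assumed)
G = th.randn(4, 3, 32, 32)  # target images
rploss = RandomProjectionLoss(mode='real', baseloss='MSE', channels=[3, 32], kernel_sizes=[(3, 3)], activations=['ReLU'], reduction='mean')
print(rploss(P, G))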
torchlib.module.loss.retrieval module¶
- class torchlib.module.loss.retrieval.DiceLoss(size_average=True, reduce=True)¶
Bases:
torch.nn.modules.module.Module
- soft_dice_coeff(P, G)¶
- class torchlib.module.loss.retrieval.F1Loss(size_average=True, reduce=True)¶
Bases:
torch.nn.modules.module.Module
F1 distance Loss
\[F_{\beta} = 1 - \frac{(1+\beta^2) \cdot P \cdot R}{\beta^2 \cdot P + R} \]
where,
\[{\rm PPV} = P = \frac{\rm TP}{{\rm TP} + {\rm FP}} \]
\[{\rm TPR} = R = \frac{\rm TP}{{\rm TP} + {\rm FN}} \]
- forward(P, G)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchlib.module.loss.retrieval.IridescentLoss(size_average=True, reduce=True)¶
Bases:
torch.nn.modules.module.Module
Iridescent Distance Loss
\[d_{J}({\mathbb A}, {\mathbb B}) = 1 - J({\mathbb A}, {\mathbb B}) = \frac{|{\mathbb A} \cup {\mathbb B}| - |{\mathbb A} \cap {\mathbb B}|}{|{\mathbb A} \cup {\mathbb B}|} \]
- forward(P, G)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchlib.module.loss.retrieval.JaccardLoss(size_average=True, reduce=True)¶
Bases:
torch.nn.modules.module.Module
Jaccard distance
\[d_{J}({\mathbb A}, {\mathbb B}) = 1 - J({\mathbb A}, {\mathbb B}) = \frac{|{\mathbb A} \cup {\mathbb B}| - |{\mathbb A} \cap {\mathbb B}|}{|{\mathbb A} \cup {\mathbb B}|} \]
- forward(P, G)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
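Examples
A minimal usage sketch for the retrieval losses (not from the library's documentation): the binary-mask shapes are assumptions, and for DiceLoss only the documented soft_dice_coeff(P, G) is called.
import torch as th
from torchlib.module.loss.retrieval import DiceLoss, F1Loss, IridescentLoss, JaccardLoss

th.manual_seed(2020)
P = th.rand(4, 1, 16, 16)                  # predicted probabilities in [0, 1]
G = (th.rand(4, 1, 16, 16) > 0.5).float()  # binary ground-truth masks
print(DiceLoss(size_average=True, reduce=True).soft_dice_coeff(P, G))
for Loss in (F1Loss, IridescentLoss, JaccardLoss):
    print(Loss.__name__, Loss(size_average=True, reduce=True)(P, G))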
torchlib.module.loss.segmentation module¶
torchlib.module.loss.semantic module¶
- class torchlib.module.loss.semantic.EdgeAwareLoss¶
Bases:
torch.nn.modules.module.Module
- backward(retain_variables=True)¶
- evaluate(actual, desire)¶
- fit(actual, desire)¶
- forward(actual, desire)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- meta_optimize(lossD, length)¶
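Examples
A minimal usage sketch (not from the library's documentation): the single-channel image shapes passed to forward(actual, desire) are assumptions.
import torch as th
from torchlib.module.loss.semantic import EdgeAwareLoss

th.manual_seed(2020)
actual = th.rand(2, 1, 32, 32)  # predicted map (shape assumed)
desire = th.rand(2, 1, 32, 32)  # desired map
print(EdgeAwareLoss()(actual, desire))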
- class torchlib.module.loss.semantic.EdgeLoss(window='normsobel', Ci=1, dtype=torch.float32)¶
Bases:
torch.nn.modules.module.Module
Semantic Edge Loss
- forward(X, Y)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
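Examples
A minimal usage sketch (not from the library's documentation): Ci=1 suggests single-channel inputs, so the shapes below are assumptions.
import torch as th
from torchlib.module.loss.semantic import EdgeLoss

th.manual_seed(2020)
X = th.rand(2, 1, 32, 32)  # predicted image
Y = th.rand(2, 1, 32, 32)  # target image
eloss = EdgeLoss(window='normsobel', Ci=1, dtype=th.float32)
print(eloss(X, Y))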
torchlib.module.loss.sparse_metric module¶
- class torchlib.module.loss.sparse_metric.FourierLogSparseLoss(p=1, axis=(-2, -1), caxis=None, reduction='mean')¶
Bases:
torch.nn.modules.module.Module
- forward(X)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
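Examples
A minimal usage sketch (not from the library's documentation): following the signature, caxis=1 treats axis 1 of a real tensor as the complex axis.
import torch as th
from torchlib.module.loss.sparse_metric import FourierLogSparseLoss

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)  # complex data in real format
loss = FourierLogSparseLoss(p=1, axis=(-2, -1), caxis=1, reduction='mean')
print(loss(X))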
- class torchlib.module.loss.sparse_metric.LogSparseLoss(p=1.0, axis=None, caxis=None, reduction='mean')¶
Bases:
torch.nn.modules.module.Module
Log sparse loss
- forward(X)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
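Examples
A minimal usage sketch (not from the library's documentation), using the documented defaults.
import torch as th
from torchlib.module.loss.sparse_metric import LogSparseLoss

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
# real-valued interpretation (caxis=None), sparsity over all axes (axis=None)
loss = LogSparseLoss(p=1.0, axis=None, caxis=None, reduction='mean')
print(loss(X))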
torchlib.module.loss.variation module¶
- class torchlib.module.loss.variation.TotalVariation(reduction='mean', axis=0)¶
Bases:
torch.nn.modules.module.Module
Total Variation
- forward(X)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
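Examples
A minimal usage sketch (not from the library's documentation): the batch-of-images shape is an assumption.
import torch as th
from torchlib.module.loss.variation import TotalVariation

th.manual_seed(2020)
X = th.rand(4, 1, 16, 16)  # batch of single-channel images (shape assumed)
tv = TotalVariation(reduction='mean', axis=0)
print(tv(X))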