torchbox.module.evaluation package
Submodules
torchbox.module.evaluation.channel module
- class torchbox.module.evaluation.channel.ChnlCapCor(EsN0=30, rank=4, way='inv', cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
computes the capacity and correlation metrics of a channel
see "MIMO-OFDM Wireless Communications with MATLAB" (Yong Soo Cho, Jaekwon Kim, Won Young Yang, et al.)
- Parameters
EsN0 (float) – the ratio of symbol energy to noise power spectral density, \(E_s/N_0({\rm dB}) = E_b/N_0 + 10{\rm log}_{10}K\), \(E_s/N_0({\rm dB})=10{\rm log}_{10}(T_{\rm symbol}/T_{\rm sample}) + {\rm SNR}({\rm dB})\); the default is 30 (a worked conversion is sketched after this parameter list)
way (str) – computation mode: 'det', 'hadineq' (Hadamard inequality) or 'inv' (the default)
cdim (int or None) – If H is complex-valued, cdim is ignored. If H is real-valued and cdim is an integer, then H will be treated as complex-valued; in this case, cdim specifies the complex axis.
dim (int or None) – The dimension indexes of (subcarrier, BS antenna, UE antenna). The default is (-3, -2, -1).
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
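Below is a worked sketch of the \(E_s/N_0\) conversion stated for the EsN0 parameter; the symbol/sample durations and SNR here are illustrative values, not torchbox defaults.

import math

# Es/N0(dB) = 10*log10(Tsymbol/Tsample) + SNR(dB), per the parameter above
Tsymbol, Tsample = 4e-6, 1e-6   # hypothetical symbol and sample durations
SNRdB = 24.0
EsN0dB = 10 * math.log10(Tsymbol / Tsample) + SNRdB
print(EsN0dB)                   # ~30.0 dB, matching the default EsN0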
Examples
Here are demo codes.
import torch as th
import torchbox as tb

th.manual_seed(2020)
Nt, Nsc, Nbs, Nms = 10, 360, 64, 4
# generates the ground-truth
Hg = th.randn(Nt, 2, Nsc, Nbs, Nms)
# noised version as the predicted
Hp = tb.awgns(Hg, snrv=10, cdim=1, dim=(-3, -2, -1))

# complex in real format
metric = tb.ChnlCapCor(rank=4, cdim=1, dim=(-3, -2, -1), reduction='mean')
metric.updategt(Hg)
print(metric.forward(Hp))

# complex in complex format
Hg = Hg[:, 0, ...] + 1j * Hg[:, 1, ...]
Hp = Hp[:, 0, ...] + 1j * Hp[:, 1, ...]
metric = tb.ChnlCapCor(rank=4, cdim=None, dim=(-3, -2, -1), reduction='mean')
metric.updategt(Hg)
print(metric.forward(Hp))
print(metric.forward(Hg))

# complex in complex format, without reduction
metric = tb.ChnlCapCor(30, rank=4, cdim=None, dim=(-3, -2, -1), reduction=None)
metric.updategt(Hg)
capv, corv = metric.forward(Hp)
print(capv.shape, corv.shape)

# ---output
(tensor(21.0226), tensor(0.8575))
(tensor(21.0226), tensor(0.8575))
(tensor(21.5848), tensor(1.))
torch.Size([10]) torch.Size([10, 4])
- forward(Hp)
forward process
- Parameters
Hp (Tensor) – the predicted/estimated channel.
- Returns
capv (scalar or Tensor) – The capacity of the channel.
corv (scalar or Tensor) – The correlation of the channel.
- updategt(Hg)
update the ground-truth
- Parameters
Hg (Tensor) – the ground-truth channel
torchbox.module.evaluation.contrast module
- class torchbox.module.evaluation.contrast.Contrast(mode='way1', cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
way1 is defined as follows, see [1]:

\[C = \frac{\sqrt{{\rm E}\left(|I|^2 - {\rm E}(|I|^2)\right)^2}}{{\rm E}(|I|^2)} \]

way2 is defined as follows, see [2]:

\[C = \frac{{\rm E}(|I|^2)}{\left({\rm E}(|I|)\right)^2} \]

[1] "Efficient Nonparametric ISAR Autofocus Algorithm Based on Contrast Maximization and Newton's Method"
[2] Section 13.4.1 in "Ian G. Cumming's SAR book"
- Parameters
mode (str, optional) – 'way1' or 'way2' (the default is 'way1')
cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, then X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.
dim (int or None) – The dimension axis for computing contrast. The default is None, which means all.
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
- Returns
C – The contrast value of input.
- Return type
scalar or tensor
Examples
import torch as th
from torchbox import Contrast

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

# real
C1 = Contrast(mode='way1', cdim=None, dim=(-2, -1), reduction=None)(X)
C2 = Contrast(mode='way1', cdim=None, dim=(-2, -1), reduction='sum')(X)
C3 = Contrast(mode='way1', cdim=None, dim=(-2, -1), reduction='mean')(X)
print(C1, C2, C3)

# complex in real format
C1 = Contrast(mode='way1', cdim=1, dim=(-2, -1), reduction=None)(X)
C2 = Contrast(mode='way1', cdim=1, dim=(-2, -1), reduction='sum')(X)
C3 = Contrast(mode='way1', cdim=1, dim=(-2, -1), reduction='mean')(X)
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
C1 = Contrast(mode='way1', cdim=None, dim=(-2, -1), reduction=None)(X)
C2 = Contrast(mode='way1', cdim=None, dim=(-2, -1), reduction='sum')(X)
C3 = Contrast(mode='way1', cdim=None, dim=(-2, -1), reduction='mean')(X)
print(C1, C2, C3)

# output
tensor([[1.2612, 1.1085],
        [1.5992, 1.2124],
        [0.8201, 0.9887],
        [1.4376, 1.0091],
        [1.1397, 1.1860]]) tensor(11.7626) tensor(1.1763)
tensor([0.6321, 1.1808, 0.5884, 1.1346, 0.6038]) tensor(4.1396) tensor(0.8279)
tensor([0.6321, 1.1808, 0.5884, 1.1346, 0.6038]) tensor(4.1396) tensor(0.8279)
- forward(X)
forward process
- Parameters
X (Tensor) – The input for computing contrast.
torchbox.module.evaluation.correlation module
- class torchbox.module.evaluation.correlation.CosSim(mode='abs', cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
compute the cosine similarity of the inputs
If the amplitude of the correlation is utilized:

\[{\mathcal L} = \left|\frac{<{\bf p}, {\bf g}>}{\|{\bf p}\|_2\|{\bf g}\|_2}\right| \]

If the angle of the correlation is utilized:

\[{\mathcal L} = \angle \frac{<{\bf p}, {\bf g}>}{\|{\bf p}\|_2\|{\bf g}\|_2} \]

- Parameters
mode (str) – only works when P and G are complex-valued, in real or complex format. 'abs' or 'amplitude' returns the amplitude of the similarity; 'angle' or 'phase' returns the phase of the similarity.
cdim (int or None) – If P and G are complex-valued, cdim is ignored. If P and G are real-valued and cdim is an integer, then P and G will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P and G will be treated as real-valued.
dim (int or None) – The dimension axis for computing correlation. The default is None, which means all.
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
- Returns
S (Tensor) – The correlation of the inputs.
See also: cossim(), peacor(), eigveccor(), PeaCor, EigVecCor, CosSimLoss, EigVecCorLoss.
Examples
import torch as th
from torchbox import CosSim

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
S1 = CosSim(cdim=None, dim=(-2, -1), reduction=None)(P, G)
S2 = CosSim(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
S3 = CosSim(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)

# complex in real format
S1 = CosSim(cdim=1, dim=(-2, -1), reduction=None)(P, G)
S2 = CosSim(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
S3 = CosSim(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
S1 = CosSim(cdim=None, dim=(-2, -1), reduction=None)(P, G)
S2 = CosSim(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
S3 = CosSim(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)

# output
tensor([[0.4791, 0.0849],
        [0.0334, 0.4855],
        [0.0136, 0.2280],
        [0.4951, 0.2166],
        [0.4484, 0.4221]]) tensor(2.9068) tensor(0.2907)
tensor([[0.2926],
        [0.2912],
        [0.1505],
        [0.3993],
        [0.3350]]) tensor([1.4685]) tensor([0.2937])
tensor([0.2926, 0.2912, 0.1505, 0.3993, 0.3350]) tensor(1.4685) tensor(0.2937)
- forward(P, G)
forward process
- Parameters
P (Tensor) – predicted/estimated/reconstructed
G (Tensor) – ground-truth/target
- Returns
S – The correlation of the inputs.
- Return type
Tensor
- class torchbox.module.evaluation.correlation.EigVecCor(npcs=4, mode=None, cdim=None, fdim=-2, sdim=-1, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
computes the eigenvector correlation of the inputs, i.e., the cosine similarity of the selected eigenvectors (principal components)
- Parameters
mode (str) – only works when P and G are complex-valued, in real or complex format. 'abs' or 'amplitude' returns the amplitude of the similarity; 'angle' or 'phase' returns the phase of the similarity.
cdim (int or None) – If P and G are complex-valued, cdim is ignored. If P and G are real-valued and cdim is an integer, then P and G will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P and G will be treated as real-valued.
fdim (int, optional) – the dimension index of features, by default -2
sdim (int, optional) – the dimension index of samples, by default -1
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
- Returns
S (Tensor) – The eigenvector correlation of the inputs.
See also: cossim(), peacor(), eigveccor(), PeaCor, CosSim, CosSimLoss, EigVecCorLoss.
Examples
import torch as th
from torchbox import EigVecCor

mode = 'abs'
th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
S1 = EigVecCor(npcs=4, mode=mode, cdim=None, sdim=0, fdim=(-2, -1), reduction=None)(P, G)
S2 = EigVecCor(npcs=4, mode=mode, cdim=None, sdim=0, fdim=(-2, -1), reduction='sum')(P, G)
S3 = EigVecCor(npcs=4, mode=mode, cdim=None, sdim=0, fdim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)

# complex in real format
S1 = EigVecCor(npcs=4, mode=mode, cdim=1, sdim=0, fdim=(-2, -1), reduction=None)(P, G)
S2 = EigVecCor(npcs=4, mode=mode, cdim=1, sdim=0, fdim=(-2, -1), reduction='sum')(P, G)
S3 = EigVecCor(npcs=4, mode=mode, cdim=1, sdim=0, fdim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
S1 = EigVecCor(npcs=4, mode=mode, cdim=None, sdim=0, fdim=(-2, -1), reduction=None)(P, G)
S2 = EigVecCor(npcs=4, mode=mode, cdim=None, sdim=0, fdim=(-2, -1), reduction='sum')(P, G)
S3 = EigVecCor(npcs=4, mode=mode, cdim=None, sdim=0, fdim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)
- forward(P, G)
forward process
- Parameters
P (Tensor) – predicted/estimated/reconstructed
G (Tensor) – ground-truth/target
- class torchbox.module.evaluation.correlation.PeaCor(mode='abs', cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
compute the Pearson correlation loss of the inputs

If the amplitude of the Pearson correlation is utilized as the loss:

\[{\mathcal L} = 1 - \left|\frac{<{\bf p}, {\bf g}>}{\|{\bf p}\|_2\|{\bf g}\|_2}\right| \]

If the angle of the Pearson correlation is utilized as the loss:

\[{\mathcal L} = \left|\angle \frac{<{\bf p}, {\bf g}>}{\|{\bf p}\|_2\|{\bf g}\|_2}\right| \]

where \(\bf p\) and \(\bf g\) are the centered versions (mean removed) of the inputs.

- Parameters
mode (str) – only works when P and G are complex-valued, in real or complex format. 'abs' or 'amplitude' returns the amplitude of similarity; 'angle' or 'phase' returns the phase of similarity.
cdim (int or None) – If P and G are complex-valued, cdim is ignored. If P and G are real-valued and cdim is an integer, then P and G will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P and G will be treated as real-valued.
dim (int or None) – The dimension axis for computing correlation. The default is None, which means all.
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
- Returns
S (Tensor) – The correlation of the inputs.
See also: cossim(), peacor(), eigveccor(), CosSim, EigVecCor, CosSimLoss, EigVecCorLoss.
Examples
import torch as th
from torchbox import PeaCor

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
S1 = PeaCor(cdim=None, dim=(-2, -1), reduction=None)(P, G)
S2 = PeaCor(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
S3 = PeaCor(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)

# complex in real format
S1 = PeaCor(cdim=1, dim=(-2, -1), reduction=None)(P, G)
S2 = PeaCor(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
S3 = PeaCor(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
S1 = PeaCor(cdim=None, dim=(-2, -1), reduction=None)(P, G)
S2 = PeaCor(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
S3 = PeaCor(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(S1, S2, S3)

# output
tensor([[0.6010, 0.0260],
        [0.0293, 0.4981],
        [0.0063, 0.2284],
        [0.3203, 0.2851],
        [0.3757, 0.3936]]) tensor(2.7639) tensor(0.2764)
tensor([[0.3723],
        [0.2992],
        [0.1267],
        [0.3020],
        [0.2910]]) tensor([1.3911]) tensor([0.2782])
tensor([0.3723, 0.2992, 0.1267, 0.3020, 0.2910]) tensor(1.3911) tensor(0.2782)
- forward(P, G)
forward process
- Parameters
P (Tensor) – predicted/estimated/reconstructed
G (Tensor) – ground-truth/target
torchbox.module.evaluation.entropy module
- class torchbox.module.evaluation.entropy.Entropy(mode='shannon', cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
compute the entropy of the inputs
\[{\rm S} = -\sum_{n=1}^N p_n {\rm log}_2 p_n \]

where \(N\) is the number of pixels and \(p_n=\frac{|X_n|^2}{\sum_{n=1}^N|X_n|^2}\).

- Parameters
mode (str, optional) – The entropy mode: 'shannon' or 'natural' (the default is 'shannon')
cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, then X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.
dim (int or None) – The dimension axis for computing entropy. The default is None, which means all.
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
- Returns
S – The entropy of the inputs.
- Return type
Tensor
Examples
import torch as th
from torchbox import Entropy

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

# real
S1 = Entropy(mode='shannon', cdim=None, dim=(-2, -1), reduction=None)(X)
S2 = Entropy(mode='shannon', cdim=None, dim=(-2, -1), reduction='sum')(X)
S3 = Entropy(mode='shannon', cdim=None, dim=(-2, -1), reduction='mean')(X)
print(S1, S2, S3)

# complex in real format
S1 = Entropy(mode='shannon', cdim=1, dim=(-2, -1), reduction=None)(X)
S2 = Entropy(mode='shannon', cdim=1, dim=(-2, -1), reduction='sum')(X)
S3 = Entropy(mode='shannon', cdim=1, dim=(-2, -1), reduction='mean')(X)
print(S1, S2, S3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
S1 = Entropy(mode='shannon', cdim=None, dim=(-2, -1), reduction=None)(X)
S2 = Entropy(mode='shannon', cdim=None, dim=(-2, -1), reduction='sum')(X)
S3 = Entropy(mode='shannon', cdim=None, dim=(-2, -1), reduction='mean')(X)
print(S1, S2, S3)

# output
tensor([[2.5482, 2.7150],
        [2.0556, 2.6142],
        [2.9837, 2.9511],
        [2.4296, 2.7979],
        [2.7287, 2.5560]]) tensor(26.3800) tensor(2.6380)
tensor([3.2738, 2.5613, 3.2911, 2.7989, 3.2789]) tensor(15.2040) tensor(3.0408)
tensor([3.2738, 2.5613, 3.2911, 2.7989, 3.2789]) tensor(15.2040) tensor(3.0408)
- forward(X)
forward process
- Parameters
X (Tensor) – The input for computing entropy.
torchbox.module.evaluation.error module
- class torchbox.module.evaluation.error.MAE(cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
computes the mean absolute error
Both complex and real representations are supported.

\[{\rm MAE}({\bf P}, {\bf G}) = \frac{1}{N}\|{\bf P} - {\bf G}\|_1 = \frac{1}{N}\sum_{i=1}^N |p_i - g_i| \]

- Parameters
cdim (int or None) – If P is complex-valued, cdim is ignored. If P is real-valued and cdim is an integer, then P will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P will be treated as real-valued.
dim (int or None) – The dimension axis for computing error. The default is None, which means all.
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
- Returns
mean absolute error
- Return type
scalar or array
Examples
import torch as th
from torchbox import MAE

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = MAE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = MAE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = MAE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = MAE(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = MAE(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = MAE(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = MAE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = MAE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = MAE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# ---output
[[1.06029116 1.19884877]
 [0.90117091 1.13552361]
 [1.23422083 0.75743914]
 [1.16127965 1.42169262]
 [1.25090731 1.29134222]] 11.41271620974502 1.141271620974502
[1.71298566 1.50327364 1.53328572 2.11430946 2.01435599] 8.878210471231741 1.7756420942463482
[1.71298566 1.50327364 1.53328572 2.11430946 2.01435599] 8.878210471231741 1.7756420942463482
- forward(P, G)
forward process
- Parameters
P (Tensor) – predicted/estimated/reconstructed
G (Tensor) – ground-truth/target
- class torchbox.module.evaluation.error.MSE(cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
computes the mean square error
Both complex and real representations are supported.

\[{\rm MSE}({\bf P}, {\bf G}) = \frac{1}{N}\|{\bf P} - {\bf G}\|_2^2 = \frac{1}{N}\sum_{i=1}^N |p_i - g_i|^2 \]

- Parameters
cdim (int or None) – If P is complex-valued, cdim is ignored. If P is real-valued and cdim is an integer, then P will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P will be treated as real-valued.
dim (int or None) – The dimension axis for computing error. The default is None, which means all.
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
- Returns
mean square error
- Return type
scalar or array
Examples
import torch as th
from torchbox import MSE

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = MSE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = MSE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = MSE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = MSE(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = MSE(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = MSE(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = MSE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = MSE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = MSE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# ---output
[[1.57602573 2.32844311]
 [1.07232374 2.36118382]
 [2.1841515  0.79002805]
 [2.43036295 3.18413899]
 [2.31107373 2.73990485]] 20.977636476183186 2.0977636476183186
[3.90446884 3.43350757 2.97417955 5.61450194 5.05097858] 20.977636476183186 4.195527295236637
[3.90446884 3.43350757 2.97417955 5.61450194 5.05097858] 20.977636476183186 4.195527295236637
- forward(P, G)
forward process
- Parameters
P (Tensor) – predicted/estimated/reconstructed
G (Tensor) – ground-truth/target
- class torchbox.module.evaluation.error.NMAE(cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
computes the normalized mean absolute error
Both complex and real representations are supported.

\[{\rm NMAE}({\bf P}, {\bf G}) = \frac{\frac{1}{N}\|{\bf P} - {\bf G}\|_1}{\|{\bf G}\|_1} = \frac{\frac{1}{N}\sum_{i=1}^N |p_i - g_i|}{\sum_{i=1}^N |g_i|} \]

- Parameters
cdim (int or None) – If P is complex-valued, cdim is ignored. If P is real-valued and cdim is an integer, then P will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P will be treated as real-valued.
dim (int or None) – The dimension axis for computing error. The default is None, which means all.
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
- Returns
normalized mean absolute error
- Return type
scalar or array
Examples
import torch as th
from torchbox import NMAE

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = NMAE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NMAE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NMAE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = NMAE(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = NMAE(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = NMAE(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = NMAE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NMAE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NMAE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)
- forward(P, G)
forward process
- Parameters
P (Tensor) – predicted/estimated/reconstructed
G (Tensor) – ground-truth/target
- class torchbox.module.evaluation.error.NMSE(cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
computes the normalized mean square error
Both complex and real representations are supported.

\[{\rm NMSE}({\bf P}, {\bf G}) = \frac{\frac{1}{N}\|{\bf P} - {\bf G}\|_2^2}{\|{\bf G}\|_2^2} = \frac{\frac{1}{N}\sum_{i=1}^N |p_i - g_i|^2}{\sum_{i=1}^N |g_i|^2} \]

- Parameters
cdim (int or None) – If P is complex-valued, cdim is ignored. If P is real-valued and cdim is an integer, then P will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P will be treated as real-valued.
dim (int or None) – The dimension axis for computing error. The default is None, which means all.
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
)
- Returns
normalized mean square error
- Return type
scalar or array
Examples
import torch as th
from torchbox import NMSE

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = NMSE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NMSE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NMSE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = NMSE(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = NMSE(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = NMSE(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = NMSE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NMSE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NMSE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)
- forward(P, G)
forward process
- Parameters
P (Tensor) – predicted/estimated/reconstructed
G (Tensor) – ground-truth/target
- class torchbox.module.evaluation.error.NSAE(cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
computes the normalized sum absolute error
Both complex and real representations are supported.

\[{\rm NSAE}({\bf P}, {\bf G}) = \frac{\|{\bf P} - {\bf G}\|_1}{\|{\bf G}\|_1} = \frac{\sum_{i=1}^N |p_i - g_i|}{\sum_{i=1}^N |g_i|} \]

- Parameters
cdim (int or None) – If P is complex-valued, cdim is ignored. If P is real-valued and cdim is an integer, then P will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P will be treated as real-valued.
dim (int or None) – The dimension axis for computing error. The default is None, which means all.
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
)
- Returns
normalized sum absolute error
- Return type
scalar or array
Examples
import torch as th
from torchbox import NSAE

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = NSAE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NSAE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NSAE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = NSAE(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = NSAE(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = NSAE(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = NSAE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NSAE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NSAE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)
- forward(P, G)
forward process
- Parameters
P (Tensor) – predicted/estimated/reconstructed
G (Tensor) – ground-truth/target
- class torchbox.module.evaluation.error.NSSE(cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
computes the normalized sum square error
Both complex and real representations are supported.

\[{\rm NSSE}({\bf P}, {\bf G}) = \frac{\|{\bf P} - {\bf G}\|_2^2}{\|{\bf G}\|_2^2} = \frac{\sum_{i=1}^N |p_i - g_i|^2}{\sum_{i=1}^N |g_i|^2} \]

- Parameters
cdim (int or None) – If P is complex-valued, cdim is ignored. If P is real-valued and cdim is an integer, then P will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P will be treated as real-valued.
dim (int or None) – The dimension axis for computing error. The default is None, which means all.
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
)
- Returns
normalized sum square error
- Return type
scalar or array
Examples
import torch as th
from torchbox import NSSE

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = NSSE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NSSE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NSSE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = NSSE(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = NSSE(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = NSSE(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = NSSE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = NSSE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = NSSE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)
- forward(P, G)
forward process
- Parameters
P (Tensor) – predicted/estimated/reconstructed
G (Tensor) – ground-truth/target
- class torchbox.module.evaluation.error.SAE(cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
computes the sum absolute error
Both complex and real representations are supported.

\[{\rm SAE}({\bf P}, {\bf G}) = \|{\bf P} - {\bf G}\|_1 = \sum_{i=1}^N |p_i - g_i| \]

- Parameters
cdim (int or None) – If P is complex-valued, cdim is ignored. If P is real-valued and cdim is an integer, then P will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P will be treated as real-valued.
dim (int or None) – The dimension axis for computing error. The default is None, which means all.
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
)
- Returns
sum absolute error
- Return type
scalar or array
Examples
import torch as th
from torchbox import SAE

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = SAE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = SAE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = SAE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = SAE(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = SAE(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = SAE(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = SAE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = SAE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = SAE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# ---output
[[12.72349388 14.3861852 ]
 [10.81405096 13.62628335]
 [14.81065     9.08926963]
 [13.93535577 17.0603114 ]
 [15.0108877  15.49610662]] 136.95259451694022 13.695259451694023
[20.55582795 18.03928365 18.39942858 25.37171356 24.17227192] 106.53852565478087 21.307705130956172
[20.55582795 18.03928365 18.39942858 25.37171356 24.17227192] 106.5385256547809 21.30770513095618
- forward(P, G)
forward process
- Parameters
P (Tensor) – predicted/estimated/reconstructed
G (Tensor) – ground-truth/target
- class torchbox.module.evaluation.error.SSE(cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
computes the sum square error
Both complex and real representations are supported.

\[{\rm SSE}({\bf P}, {\bf G}) = \|{\bf P} - {\bf G}\|_2^2 = \sum_{i=1}^N |p_i - g_i|^2 \]

- Parameters
cdim (int or None) – If P is complex-valued, cdim is ignored. If P is real-valued and cdim is an integer, then P will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), P will be treated as real-valued.
dim (int or None) – The dimension axis for computing error. The default is None, which means all.
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
)
- Returns
sum square error
- Return type
scalar or array
Examples
import torch as th
from torchbox import SSE

th.manual_seed(2020)
P = th.randn(5, 2, 3, 4)
G = th.randn(5, 2, 3, 4)

# real
C1 = SSE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = SSE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = SSE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in real format
C1 = SSE(cdim=1, dim=(-2, -1), reduction=None)(P, G)
C2 = SSE(cdim=1, dim=(-2, -1), reduction='sum')(P, G)
C3 = SSE(cdim=1, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# complex in complex format
P = P[:, 0, ...] + 1j * P[:, 1, ...]
G = G[:, 0, ...] + 1j * G[:, 1, ...]
C1 = SSE(cdim=None, dim=(-2, -1), reduction=None)(P, G)
C2 = SSE(cdim=None, dim=(-2, -1), reduction='sum')(P, G)
C3 = SSE(cdim=None, dim=(-2, -1), reduction='mean')(P, G)
print(C1, C2, C3)

# ---output
[[18.91230872 27.94131733]
 [12.86788492 28.33420589]
 [26.209818    9.48033663]
 [29.16435541 38.20966786]
 [27.73288477 32.87885818]] 251.73163771419823 25.173163771419823
[46.85362605 41.20209081 35.69015462 67.37402327 60.61174295] 251.73163771419823 50.346327542839646
[46.85362605 41.20209081 35.69015462 67.37402327 60.61174295] 251.73163771419823 50.346327542839646
- forward(P, G)
forward process
- Parameters
P (Tensor) – predicted/estimated/reconstructed
G (Tensor) – ground-truth/target
torchbox.module.evaluation.norm module
- class torchbox.module.evaluation.norm.Fnorm(cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
obtains the F-norm (Frobenius norm) of a tensor
Both complex and real representations are supported.

\[{\rm fnorm}({\bf X}) = \|{\bf X}\|_F = \left(\sum_{x_i\in {\bf X}}|x_i|^2\right)^{\frac{1}{2}} \]

- Parameters
cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, then X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.
dim (int or None) – The dimension axis for computing norm. The default is None, which means all.
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
- Returns
the F-norm of the input.
- Return type
tensor
Examples
import torch as th
from torchbox import Fnorm

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
print('---norm')

# real
F1 = Fnorm(cdim=None, dim=(-2, -1), reduction=None)(X)
F2 = Fnorm(cdim=None, dim=(-2, -1), reduction='sum')(X)
F3 = Fnorm(cdim=None, dim=(-2, -1), reduction='mean')(X)
print(F1, F2, F3)

# complex in real format
F1 = Fnorm(cdim=1, dim=(-2, -1), reduction=None)(X)
F2 = Fnorm(cdim=1, dim=(-2, -1), reduction='sum')(X)
F3 = Fnorm(cdim=1, dim=(-2, -1), reduction='mean')(X)
print(F1, F2, F3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
F1 = Fnorm(cdim=None, dim=(-2, -1), reduction=None)(X)
F2 = Fnorm(cdim=None, dim=(-2, -1), reduction='sum')(X)
F3 = Fnorm(cdim=None, dim=(-2, -1), reduction='mean')(X)
print(F1, F2, F3)

# ---output
---norm
tensor([[2.8719, 2.8263],
        [3.1785, 3.4701],
        [4.6697, 3.2955],
        [3.0992, 2.6447],
        [3.5341, 3.5779]]) tensor(33.1679) tensor(3.3168)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
- forward(X)
forward process
- Parameters
X (Tensor) – The input for computing the norm.
- class torchbox.module.evaluation.norm.Pnorm(p=2, cdim=None, dim=None, keepdim=False, reduction='mean')
Bases:
torch.nn.modules.module.Module
obtains the p-norm of a tensor
Both complex and real representations are supported.

\[{\rm pnorm}({\bf X}) = \|{\bf X}\|_p = \left(\sum_{x_i\in {\bf X}}|x_i|^p\right)^{\frac{1}{p}} \]

- Parameters
p (int) – Specifies the power. The default is 2.
cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, then X will be treated as complex-valued; in this case, cdim specifies the complex axis. Otherwise (None), X will be treated as real-valued.
dim (int or None) – The dimension axis for computing norm. The default is None, which means all.
keepdim (bool) – keep dimensions? (including the complex dim; the default is False)
reduction (str or None, optional) – The reduction mode: None, 'mean' or 'sum' (the default is 'mean')
- Returns
the p-norm of the input.
- Return type
tensor
Examples
import torch as th
from torchbox import Pnorm

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
print('---pnorm')

# real
F1 = Pnorm(cdim=None, dim=(-2, -1), reduction=None)(X)
F2 = Pnorm(cdim=None, dim=(-2, -1), reduction='sum')(X)
F3 = Pnorm(cdim=None, dim=(-2, -1), reduction='mean')(X)
print(F1, F2, F3)

# complex in real format
F1 = Pnorm(cdim=1, dim=(-2, -1), reduction=None)(X)
F2 = Pnorm(cdim=1, dim=(-2, -1), reduction='sum')(X)
F3 = Pnorm(cdim=1, dim=(-2, -1), reduction='mean')(X)
print(F1, F2, F3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
F1 = Pnorm(cdim=None, dim=(-2, -1), reduction=None)(X)
F2 = Pnorm(cdim=None, dim=(-2, -1), reduction='sum')(X)
F3 = Pnorm(cdim=None, dim=(-2, -1), reduction='mean')(X)
print(F1, F2, F3)

# ---output
---pnorm
tensor([[2.8719, 2.8263],
        [3.1785, 3.4701],
        [4.6697, 3.2955],
        [3.0992, 2.6447],
        [3.5341, 3.5779]]) tensor(33.1679) tensor(3.3168)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
- forward(X)
forward process
- Parameters
X (Tensor) – The input for computing the norm.
torchbox.module.evaluation.retrieval module
- class torchbox.module.evaluation.retrieval.Dice(size_average=True, reduce=True)
Bases:
torch.nn.modules.module.Module
- soft_dice_coeff(P, G)
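The Dice class is otherwise undocumented here. As a rough illustration of what a soft Dice coefficient computes, here is a generic sketch; the smoothing constant and the squared-sum denominator are common choices, not necessarily torchbox's exact implementation:

import torch as th

def soft_dice_coeff(P, G, smooth=1.0):
    # soft intersection and (squared) soft union; `smooth` avoids division by zero
    inter = (P * G).sum()
    union = (P * P).sum() + (G * G).sum()
    return (2 * inter + smooth) / (union + smooth)

P = th.rand(1, 1, 8, 8)                   # predicted probabilities
G = (th.rand(1, 1, 8, 8) > 0.5).float()   # binary ground truth
print(soft_dice_coeff(P, G))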
- class torchbox.module.evaluation.retrieval.F1(size_average=True, reduce=True)
Bases:
torch.nn.modules.module.Module
F1 distance
\[F_{\beta} = 1 - \frac{(1+\beta^2) P R}{\beta^2 P + R} \]

where

\[{\rm PPV} = P = \frac{\rm TP}{{\rm TP} + {\rm FP}}, \qquad {\rm TPR} = R = \frac{\rm TP}{{\rm TP} + {\rm FN}} \]

- forward(P, G)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
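As a quick numeric check of the \(F_{\beta}\) distance above (with \(\beta = 1\) and hypothetical counts; plain Python, independent of the module API):

TP, FP, FN = 8, 2, 4
beta = 1.0
P = TP / (TP + FP)   # precision (PPV) = 0.8
R = TP / (TP + FN)   # recall (TPR) ~ 0.6667
d = 1 - (1 + beta**2) * P * R / (beta**2 * P + R)
print(d)             # ~ 0.2727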
- class torchbox.module.evaluation.retrieval.Iridescent(size_average=True, reduce=True)
Bases:
torch.nn.modules.module.Module
Iridescent Distance
\[d_{J}({\mathbb A}, {\mathbb B}) = 1 - J({\mathbb A}, {\mathbb B}) = \frac{|{\mathbb A} \cup {\mathbb B}| - |{\mathbb A} \cap {\mathbb B}|}{|{\mathbb A} \cup {\mathbb B}|} \]

- forward(P, G)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchbox.module.evaluation.retrieval.Jaccard(size_average=True, reduce=True)
Bases:
torch.nn.modules.module.Module
Jaccard distance
\[d_{J}({\mathbb A}, {\mathbb B}) = 1 - J({\mathbb A}, {\mathbb B}) = \frac{|{\mathbb A} \cup {\mathbb B}| - |{\mathbb A} \cap {\mathbb B}|}{|{\mathbb A} \cup {\mathbb B}|} \]

- forward(P, G)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
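Following the set formula above, a plain sketch of the Jaccard distance on two binary masks (the torchbox module may operate on soft/real-valued predictions instead):

import torch as th

A = th.tensor([1, 1, 0, 1, 0], dtype=th.bool)
B = th.tensor([1, 0, 0, 1, 1], dtype=th.bool)
inter = (A & B).sum().item()    # |A ∩ B| = 2
union = (A | B).sum().item()    # |A ∪ B| = 4
print((union - inter) / union)  # Jaccard distance = 0.5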
torchbox.module.evaluation.ssims module
- class torchbox.module.evaluation.ssims.MSSSIM(data_range=255, size_average=True, win_size=11, win_sigma=1.5, channel=3, spatial_dims=2, weights=None, K=(0.01, 0.03))
Bases:
torch.nn.modules.module.Module
- forward(X, Y)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
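MSSSIM carries no docstring here; the following usage sketch is inferred only from the constructor signature and forward(X, Y) above. The import path, the (N, C, H, W) input layout, and the value range are assumptions.

import torch as th
from torchbox import MSSSIM  # assumed package-level re-export

# two batches of 3-channel images in [0, 255] (layout N, C, H, W assumed)
X = th.rand(4, 3, 256, 256) * 255
Y = th.rand(4, 3, 256, 256) * 255

metric = MSSSIM(data_range=255, size_average=True, channel=3, spatial_dims=2)
print(metric(X, Y))  # presumably a scalar when size_average=True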
- class torchbox.module.evaluation.ssims.SSIM(data_range=255, size_average=True, win_size=11, win_sigma=1.5, channel=3, spatial_dims=2, K=(0.01, 0.03), nonnegative_ssim=False)
Bases:
torch.nn.modules.module.Module
- forward(X, Y)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
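Likewise for SSIM, a minimal usage sketch under the same assumptions (import path and input layout):

import torch as th
from torchbox import SSIM  # assumed package-level re-export

X = th.rand(4, 3, 256, 256) * 255      # reference images
Y = X + 5 * th.randn(4, 3, 256, 256)   # noisy versions
metric = SSIM(data_range=255, size_average=True, channel=3, nonnegative_ssim=True)
print(metric(X, Y))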
torchbox.module.evaluation.variation module
- class torchbox.module.evaluation.variation.TotalVariation(axis=0, reduction='mean')
Bases:
torch.nn.modules.module.Module
Total Variation
# https://www.wikiwand.com/en/Total_variation_denoising
diff_i = torch.sum(torch.abs(y_hat[:, :, :, 1:] - y_hat[:, :, :, :-1]))
diff_j = torch.sum(torch.abs(y_hat[:, :, 1:, :] - y_hat[:, :, :-1, :]))
tv_loss = TV_WEIGHT * (diff_i + diff_j)
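A minimal usage sketch based only on the constructor signature and forward(X) above; the 4-D image layout and the meaning of the axis argument are assumptions.

import torch as th
from torchbox import TotalVariation  # assumed package-level re-export

X = th.rand(4, 3, 32, 32)  # batch of images (N, C, H, W layout assumed)
tv = TotalVariation(axis=0, reduction='mean')
print(tv(X))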
- forward(X)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.