torchlib.evaluation package

Submodules

torchlib.evaluation.classification module

torchlib.evaluation.classification.accuracy(X, Y, TH=None)

Compute accuracy

Parameters
  • X (tensor) – Predicted one-hot matrix, \(\{0, 1\}\)

  • Y (tensor) – Referenced one-hot matrix, \(\{0, 1\}\)

  • TH (float, optional) – threshold: X > TH -> 1, X <= TH -> 0
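
Examples

A minimal usage sketch; the tensors are illustrative, assuming accuracy is imported from the module path above:

import torch as th
from torchlib.evaluation.classification import accuracy

X = th.tensor([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])  # predicted one-hot matrix
Y = th.tensor([[0., 1., 0.], [0., 1., 0.], [0., 0., 1.]])  # referenced one-hot matrix
print(accuracy(X, Y))  # two of the three samples match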

torchlib.evaluation.contrast module

torchlib.evaluation.contrast.contrast(X, cdim=None, dim=None, mode='way1', reduction='mean')

Compute the contrast of a complex image

'way1' is defined as follows, see [1]:

\[C = \frac{\sqrt{{\rm E}\left[\left(|I|^2 - {\rm E}(|I|^2)\right)^2\right]}}{{\rm E}(|I|^2)} \]

'way2' is defined as follows, see [2]:

\[C = \frac{{\rm E}(|I|^2)}{\left({\rm E}(|I|)\right)^2} \]

[1] Efficient Nonparametric ISAR Autofocus Algorithm Based on Contrast Maximization and Newton's Method.

[2] Section 13.4.1 in Ian G. Cumming's SAR book.

Parameters
  • X (torch tensor) – The image array.

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (tuple, None, optional) – The dimension axis (cdim is not included) for computing contrast. The default is None, which means all.

  • mode (str, optional) – 'way1' or 'way2' (the default is 'way1')

  • reduction (str, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

C – The contrast value of input.

Return type

scalar or tensor

Examples

import torch as th
from torchlib.evaluation.contrast import contrast

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

# real
C1 = contrast(X, cdim=None, dim=(-2, -1), mode='way1', reduction=None)
C2 = contrast(X, cdim=None, dim=(-2, -1), mode='way1', reduction='sum')
C3 = contrast(X, cdim=None, dim=(-2, -1), mode='way1', reduction='mean')
print(C1, C2, C3)

# complex in real format
C1 = contrast(X, cdim=1, dim=(-2, -1), mode='way1', reduction=None)
C2 = contrast(X, cdim=1, dim=(-2, -1), mode='way1', reduction='sum')
C3 = contrast(X, cdim=1, dim=(-2, -1), mode='way1', reduction='mean')
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
C1 = contrast(X, cdim=None, dim=(-2, -1), mode='way1', reduction=None)
C2 = contrast(X, cdim=None, dim=(-2, -1), mode='way1', reduction='sum')
C3 = contrast(X, cdim=None, dim=(-2, -1), mode='way1', reduction='mean')
print(C1, C2, C3)

# output
tensor([[1.2612, 1.1085],
        [1.5992, 1.2124],
        [0.8201, 0.9887],
        [1.4376, 1.0091],
        [1.1397, 1.1860]]) tensor(11.7626) tensor(1.1763)
tensor([0.6321, 1.1808, 0.5884, 1.1346, 0.6038]) tensor(4.1396) tensor(0.8279)
tensor([0.6321, 1.1808, 0.5884, 1.1346, 0.6038]) tensor(4.1396) tensor(0.8279)

torchlib.evaluation.entropy module

torchlib.evaluation.entropy.entropy(X, cdim=None, dim=None, mode='shannon', reduction='mean')

Compute the entropy of the inputs

\[{\rm S} = -\sum_{n=1}^N p_n{\rm log}_2 p_n \]

where \(N\) is the number of pixels and \(p_n=\frac{|X_n|^2}{\sum_{n=1}^N|X_n|^2}\).

Parameters
  • X (tensor) – The complex or real inputs; for complex inputs, both complex and real representations are supported.

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (tuple, None, optional) – The dimension axis (cdim is not included) for computing entropy. The default is None, which means all.

  • mode (str, optional) – The entropy mode: 'shannon' (base-2 logarithm) or 'natural' (natural logarithm); the default is 'shannon'

  • reduction (str, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

S – The entropy of the inputs.

Return type

scalar or tensor

Examples

import torch as th
from torchlib.evaluation.entropy import entropy

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

# real
S1 = entropy(X, cdim=None, dim=(-2, -1), mode='shannon', reduction=None)
S2 = entropy(X, cdim=None, dim=(-2, -1), mode='shannon', reduction='sum')
S3 = entropy(X, cdim=None, dim=(-2, -1), mode='shannon', reduction='mean')
print(S1, S2, S3)

# complex in real format
S1 = entropy(X, cdim=1, dim=(-2, -1), mode='shannon', reduction=None)
S2 = entropy(X, cdim=1, dim=(-2, -1), mode='shannon', reduction='sum')
S3 = entropy(X, cdim=1, dim=(-2, -1), mode='shannon', reduction='mean')
print(S1, S2, S3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
S1 = entropy(X, cdim=None, dim=(-2, -1), mode='shannon', reduction=None)
S2 = entropy(X, cdim=None, dim=(-2, -1), mode='shannon', reduction='sum')
S3 = entropy(X, cdim=None, dim=(-2, -1), mode='shannon', reduction='mean')
print(S1, S2, S3)

# output
tensor([[2.5482, 2.7150],
        [2.0556, 2.6142],
        [2.9837, 2.9511],
        [2.4296, 2.7979],
        [2.7287, 2.5560]]) tensor(26.3800) tensor(2.6380)
tensor([3.2738, 2.5613, 3.2911, 2.7989, 3.2789]) tensor(15.2040) tensor(3.0408)
tensor([3.2738, 2.5613, 3.2911, 2.7989, 3.2789]) tensor(15.2040) tensor(3.0408)

torchlib.evaluation.error module

torchlib.evaluation.error.mae(X, Y, cdim=None, dim=None, norm=False, reduction='mean')

Compute the mean absolute error

Both complex and real representations are supported.

\[{\rm MAE}({\bf X}, {\bf Y}) = \frac{1}{N}\|{\bf X} - {\bf Y}\|_1 = \frac{1}{N}\sum_{i=1}^N |x_i - y_i| \]
Parameters
  • X (array) – original

  • Y (array) – reconstructed

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (int, tuple or None) – The dimension axis (cdim is not included) for computing the error. The default is None, which means all.

  • norm (bool) – If True, normalize with the f-norm of X and Y. (default is False)

  • reduction (str, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

mean absolute error

Return type

scalar or array

Examples

import torch as th
from torchlib.evaluation.error import mae

norm = False
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
Y = th.randn(5, 2, 3, 4)

# real
C1 = mae(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction=None)
C2 = mae(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='sum')
C3 = mae(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='mean')
print(C1, C2, C3)

# complex in real format
C1 = mae(X, Y, cdim=1, dim=(-2, -1), norm=norm, reduction=None)
C2 = mae(X, Y, cdim=1, dim=(-2, -1), norm=norm, reduction='sum')
C3 = mae(X, Y, cdim=1, dim=(-2, -1), norm=norm, reduction='mean')
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
Y = Y[:, 0, ...] + 1j * Y[:, 1, ...]
C1 = mae(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction=None)
C2 = mae(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='sum')
C3 = mae(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='mean')
print(C1, C2, C3)

# ---output
[[1.06029116 1.19884877]
[0.90117091 1.13552361]
[1.23422083 0.75743914]
[1.16127965 1.42169262]
[1.25090731 1.29134222]] 11.41271620974502 1.141271620974502
[1.71298566 1.50327364 1.53328572 2.11430946 2.01435599] 8.878210471231741 1.7756420942463482
[1.71298566 1.50327364 1.53328572 2.11430946 2.01435599] 8.878210471231741 1.7756420942463482
torchlib.evaluation.error.mse(X, Y, cdim=None, dim=None, norm=False, reduction='mean')

Compute the mean square error

Both complex and real representations are supported.

\[{\rm MSE}({\bf X}, {\bf Y}) = \frac{1}{N}\|{\bf X} - {\bf Y}\|_2^2 = \frac{1}{N}\sum_{i=1}^N |x_i - y_i|^2 \]
Parameters
  • X (array) – reconstructed

  • Y (array) – target

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (int, tuple or None) – The dimension axis (cdim is not included) for computing the error. The default is None, which means all.

  • norm (bool) – If True, normalize with the f-norm of X and Y. (default is False)

  • reduction (str, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

mean square error

Return type

scalar or array

Examples

import torch as th
from torchlib.evaluation.error import mse

norm = False
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
Y = th.randn(5, 2, 3, 4)

# real
C1 = mse(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction=None)
C2 = mse(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='sum')
C3 = mse(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='mean')
print(C1, C2, C3)

# complex in real format
C1 = mse(X, Y, cdim=1, dim=(-2, -1), norm=norm, reduction=None)
C2 = mse(X, Y, cdim=1, dim=(-2, -1), norm=norm, reduction='sum')
C3 = mse(X, Y, cdim=1, dim=(-2, -1), norm=norm, reduction='mean')
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
Y = Y[:, 0, ...] + 1j * Y[:, 1, ...]
C1 = mse(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction=None)
C2 = mse(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='sum')
C3 = mse(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='mean')
print(C1, C2, C3)

# ---output
[[1.57602573 2.32844311]
[1.07232374 2.36118382]
[2.1841515  0.79002805]
[2.43036295 3.18413899]
[2.31107373 2.73990485]] 20.977636476183186 2.0977636476183186
[3.90446884 3.43350757 2.97417955 5.61450194 5.05097858] 20.977636476183186 4.195527295236637
[3.90446884 3.43350757 2.97417955 5.61450194 5.05097858] 20.977636476183186 4.195527295236637
torchlib.evaluation.error.sae(X, Y, cdim=None, dim=None, norm=False, reduction='mean')

Compute the sum absolute error

Both complex and real representations are supported.

\[{\rm SAE}({\bf X}, {\bf Y}) = \|{\bf X} - {\bf Y}\|_1 = \sum_{i=1}^N |x_i - y_i| \]
Parameters
  • X (array) – original

  • Y (array) – reconstructed

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (int, tuple or None) – The dimension axis (cdim is not included) for computing the error. The default is None, which means all.

  • norm (bool) – If True, normalize with the f-norm of X and Y. (default is False)

  • reduction (str, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

sum absolute error

Return type

scalar or array

Examples

import torch as th
from torchlib.evaluation.error import sae

norm = False
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
Y = th.randn(5, 2, 3, 4)

# real
C1 = sae(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction=None)
C2 = sae(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='sum')
C3 = sae(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='mean')
print(C1, C2, C3)

# complex in real format
C1 = sae(X, Y, cdim=1, dim=(-2, -1), norm=norm, reduction=None)
C2 = sae(X, Y, cdim=1, dim=(-2, -1), norm=norm, reduction='sum')
C3 = sae(X, Y, cdim=1, dim=(-2, -1), norm=norm, reduction='mean')
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
Y = Y[:, 0, ...] + 1j * Y[:, 1, ...]
C1 = sae(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction=None)
C2 = sae(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='sum')
C3 = sae(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='mean')
print(C1, C2, C3)

# ---output
[[12.72349388 14.3861852 ]
[10.81405096 13.62628335]
[14.81065     9.08926963]
[13.93535577 17.0603114 ]
[15.0108877  15.49610662]] 136.95259451694022 13.695259451694023
[20.55582795 18.03928365 18.39942858 25.37171356 24.17227192] 106.53852565478087 21.307705130956172
[20.55582795 18.03928365 18.39942858 25.37171356 24.17227192] 106.5385256547809 21.30770513095618

torchlib.evaluation.error.sse(X, Y, cdim=None, dim=None, norm=False, reduction='mean')

Compute the sum square error

Both complex and real representations are supported.

\[{\rm SSE}({\bf X}, {\bf Y}) = \|{\bf X} - {\bf Y}\|_2^2 = \sum_{i=1}^N |x_i - y_i|^2 \]
Parameters
  • X (array) – reconstructed

  • Y (array) – target

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (int, tuple or None) – The dimension axis (cdim is not included) for computing the error. The default is None, which means all.

  • norm (bool) – If True, normalize with the f-norm of X and Y. (default is False)

  • reduction (str, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

sum square error

Return type

scalar or array

Examples

import torch as th
from torchlib.evaluation.error import sse

norm = False
th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)
Y = th.randn(5, 2, 3, 4)

# real
C1 = sse(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction=None)
C2 = sse(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='sum')
C3 = sse(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='mean')
print(C1, C2, C3)

# complex in real format
C1 = sse(X, Y, cdim=1, dim=(-2, -1), norm=norm, reduction=None)
C2 = sse(X, Y, cdim=1, dim=(-2, -1), norm=norm, reduction='sum')
C3 = sse(X, Y, cdim=1, dim=(-2, -1), norm=norm, reduction='mean')
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
Y = Y[:, 0, ...] + 1j * Y[:, 1, ...]
C1 = sse(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction=None)
C2 = sse(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='sum')
C3 = sse(X, Y, cdim=None, dim=(-2, -1), norm=norm, reduction='mean')
print(C1, C2, C3)

# ---output
[[18.91230872 27.94131733]
[12.86788492 28.33420589]
[26.209818    9.48033663]
[29.16435541 38.20966786]
[27.73288477 32.87885818]] 251.73163771419823 25.173163771419823
[46.85362605 41.20209081 35.69015462 67.37402327 60.61174295] 251.73163771419823 50.346327542839646
[46.85362605 41.20209081 35.69015462 67.37402327 60.61174295] 251.73163771419823 50.346327542839646

torchlib.evaluation.norm module

torchlib.evaluation.norm.fnorm(X, cdim=None, dim=None, reduction='mean')

Obtain the F-norm of a tensor

Both complex and real representations are supported.

\[{\rm fnorm}({\bf X}) = \|{\bf X}\|_2 = \left(\sum_{x_i\in {\bf X}}|x_i|^2\right)^{\frac{1}{2}} \]

where \(|x_i| = \sqrt{u^2 + v^2}\), and \(u, v\) are the real and imaginary parts of \(x_i\) for complex-valued inputs.

Parameters
  • X (tensor) – input

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (int, tuple or None) – The dimension axis (cdim is not included) for computing the norm. The default is None, which means all.

  • reduction (str or None, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

the input's F-norm.

Return type

tensor

Examples

import torch as th
from torchlib.evaluation.norm import fnorm

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

print('---norm')
# real
C1 = fnorm(X, cdim=None, dim=(-2, -1), reduction=None)
C2 = fnorm(X, cdim=None, dim=(-2, -1), reduction='sum')
C3 = fnorm(X, cdim=None, dim=(-2, -1), reduction='mean')
print(C1, C2, C3)

# complex in real format
C1 = fnorm(X, cdim=1, dim=(-2, -1), reduction=None)
C2 = fnorm(X, cdim=1, dim=(-2, -1), reduction='sum')
C3 = fnorm(X, cdim=1, dim=(-2, -1), reduction='mean')
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
C1 = fnorm(X, cdim=None, dim=(-2, -1), reduction=None)
C2 = fnorm(X, cdim=None, dim=(-2, -1), reduction='sum')
C3 = fnorm(X, cdim=None, dim=(-2, -1), reduction='mean')
print(C1, C2, C3)

# ---output
---norm
tensor([[2.8719, 2.8263],
        [3.1785, 3.4701],
        [4.6697, 3.2955],
        [3.0992, 2.6447],
        [3.5341, 3.5779]]) tensor(33.1679) tensor(3.3168)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
torchlib.evaluation.norm.pnorm(X, cdim=None, dim=None, p=2, reduction='mean')

Obtain the p-norm of a tensor

Both complex and real representations are supported.

\[{\rm pnorm}({\bf X}) = \|{\bf X}\|_p = \left(\sum_{x_i\in {\bf X}}|x_i|^p\right)^{\frac{1}{p}} \]

where \(|x_i| = \sqrt{u^2 + v^2}\), and \(u, v\) are the real and imaginary parts of \(x_i\) for complex-valued inputs.

Parameters
  • X (tensor) – input

  • cdim (int or None) – If X is complex-valued, cdim is ignored. If X is real-valued and cdim is an integer, X will be treated as complex-valued and cdim specifies the complex axis; otherwise (None), X will be treated as real-valued

  • dim (int, tuple or None) – The dimension axis (cdim is not included) for computing the norm. The default is None, which means all.

  • p (int) – Specifies the power. The default is 2.

  • reduction (str, optional) – The operation in batch dim, None, 'mean' or 'sum' (the default is 'mean')

Returns

the input's p-norm.

Return type

tensor

Examples

import torch as th
from torchlib.evaluation.norm import pnorm

th.manual_seed(2020)
X = th.randn(5, 2, 3, 4)

print('---pnorm')
# real
C1 = pnorm(X, cdim=None, dim=(-2, -1), p=2, reduction=None)
C2 = pnorm(X, cdim=None, dim=(-2, -1), p=2, reduction='sum')
C3 = pnorm(X, cdim=None, dim=(-2, -1), p=2, reduction='mean')
print(C1, C2, C3)

# complex in real format
C1 = pnorm(X, cdim=1, dim=(-2, -1), p=2, reduction=None)
C2 = pnorm(X, cdim=1, dim=(-2, -1), p=2, reduction='sum')
C3 = pnorm(X, cdim=1, dim=(-2, -1), p=2, reduction='mean')
print(C1, C2, C3)

# complex in complex format
X = X[:, 0, ...] + 1j * X[:, 1, ...]
C1 = pnorm(X, cdim=None, dim=(-2, -1), p=2, reduction=None)
C2 = pnorm(X, cdim=None, dim=(-2, -1), p=2, reduction='sum')
C3 = pnorm(X, cdim=None, dim=(-2, -1), p=2, reduction='mean')
print(C1, C2, C3)

# ---output
---pnorm
tensor([[2.8719, 2.8263],
        [3.1785, 3.4701],
        [4.6697, 3.2955],
        [3.0992, 2.6447],
        [3.5341, 3.5779]]) tensor(33.1679) tensor(3.3168)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)
tensor([4.0294, 4.7058, 5.7154, 4.0743, 5.0290]) tensor(23.5539) tensor(4.7108)

torchlib.evaluation.retrieval module

torchlib.evaluation.retrieval.false_alarm_rate(X, Y, TH=None)

Compute the false alarm rate (False Discovery Rate)

\[{\rm FDR} = \frac{\rm FP}{{\rm TP} + {\rm FP}} = 1 - P \tag{1}\]
Parameters
  • X (tensor) – retrieval results, retrieved -> 1, not retrieved -> 0

  • Y (tensor) – referenced, positive -> 1, negative -> 0

  • TH (float) – X > TH -> 1, X <= TH -> 0

Returns

FDR – False Discovery Rate

Return type

float

torchlib.evaluation.retrieval.false_negative(X, Y)

Find false negative elements

false_negative(X, Y) returns elements that are positive classes in Y but retrieved as negative in X.

Parameters
  • X (tensor) – retrieval results, retrieved -> 1, not retrieved -> 0

  • Y (tensor) – referenced, positive -> 1, negative -> 0

Returns

FN – a torch tensor of the same type as X and Y. In FN, false negative elements are ones, while others are zeros.

Return type

tensor

torchlib.evaluation.retrieval.false_positive(X, Y)

Find false positive elements

false_positive(X, Y) returns elements that are negative classes in Y but retrieved as positive in X.

Parameters
  • X (tensor) – retrieval results, retrieved -> 1, not retrieved -> 0

  • Y (tensor) – referenced, positive -> 1, negative -> 0

Returns

FP – a torch tensor of the same type as X and Y. In FP, false positive elements are ones, while others are zeros.

Return type

tensor

torchlib.evaluation.retrieval.fmeasure(X, Y, TH=None, beta=1.0)

Compute F-measure

\[F_{\beta} = \frac{(1+\beta^2)PR}{\beta^2 P + R} \tag{2}\]
Parameters
  • X (tensor) – retrieval results, retrieved -> 1, not retrieved -> 0

  • Y (tensor) – referenced, positive -> 1, negative -> 0

  • TH (float) – X > TH -> 1, X <= TH -> 0

  • beta (float) – the weight \(\beta\) of recall relative to precision (the default is 1.0)

Returns

F – F-measure

Return type

float

torchlib.evaluation.retrieval.miss_alarm_rate(X, Y, TH=None)

Compute the miss alarm rate (False Negative Rate)

\[{\rm FNR} = \frac{\rm FN}{{\rm FN} + {\rm TP}} = 1 - R \tag{3}\]
Parameters
  • X (tensor) – retrieval results, retrieved -> 1, not retrieved -> 0

  • Y (tensor) – referenced, positive -> 1, negative -> 0

  • TH (float) – X > TH -> 1, X <= TH -> 0

Returns

FNR – False Negative Rate

Return type

float

torchlib.evaluation.retrieval.precision(X, Y, TH=None)

Compute precision

\[{\rm PPV} = {P} = \frac{\rm TP}{{\rm TP} + {\rm FP}} \tag{4}\]
Parameters
  • X (tensor) – retrieval results, retrieved -> 1, not retrieved -> 0

  • Y (tensor) – referenced, positive -> 1, negative -> 0

  • TH (float) – X > TH -> 1, X <= TH -> 0

Returns

P – precision

Return type

float

torchlib.evaluation.retrieval.recall(X, Y, TH=None)

Compute recall (sensitivity)

\[{\rm TPR} = {R} = \frac{\rm TP}{{\rm TP} + {\rm FN}} \tag{5}\]
Parameters
  • X (tensor) – retrieval results, retrieved -> 1, not retrieved -> 0

  • Y (tensor) – referenced, positive -> 1, negative -> 0

  • TH (float) – X > TH -> 1, X <= TH -> 0

Returns

R – recall

Return type

float
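
Examples

A minimal sketch combining precision, recall and fmeasure (all documented in this module); the score and label tensors are illustrative:

import torch as th
from torchlib.evaluation.retrieval import precision, recall, fmeasure

Y = th.tensor([1., 1., 0., 0., 1.])       # references: positive -> 1, negative -> 0
X = th.tensor([0.9, 0.3, 0.8, 0.1, 0.7])  # retrieval scores, thresholded by TH
P = precision(X, Y, TH=0.5)               # TP=2, FP=1 -> 2/3
R = recall(X, Y, TH=0.5)                  # TP=2, FN=1 -> 2/3
F = fmeasure(X, Y, TH=0.5, beta=1.0)      # harmonic mean of P and R when beta=1
print(P, R, F)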

torchlib.evaluation.retrieval.selectivity(X, Y, TH=None)

Compute selectivity (specificity)

\[{\rm TNR} = {S} = \frac{\rm TN}{{\rm TN} + {\rm FP}} \tag{6}\]
Parameters
  • X (tensor) – retrieval results, retrieved -> 1, not retrieved -> 0

  • Y (tensor) – referenced, positive -> 1, negative -> 0

  • TH (float) – X > TH -> 1, X <= TH -> 0

Returns

S – selectivity

Return type

float

torchlib.evaluation.retrieval.sensitivity(X, Y, TH=None)

Compute sensitivity (recall)

\[{\rm TPR} = {R} = \frac{\rm TP}{{\rm TP} + {\rm FN}} \tag{7}\]
Parameters
  • X (tensor) – retrieval results, retrieved -> 1, not retrieved -> 0

  • Y (tensor) – referenced, positive -> 1, negative -> 0

  • TH (float) – X > TH -> 1, X <= TH -> 0

Returns

R – recall

Return type

float

torchlib.evaluation.retrieval.true_negative(X, Y)

Find true negative elements

true_negative(X, Y) returns elements that are negative classes in Y and retrieved as negative in X.

Parameters
  • X (tensor) – retrieval results, retrieved -> 1, not retrieved -> 0

  • Y (tensor) – referenced, positive -> 1, negative -> 0

Returns

TN – a torch tensor of the same type as X and Y. In TN, true negative elements are ones, while others are zeros.

Return type

tensor

torchlib.evaluation.retrieval.true_positive(X, Y)

Find true positive elements

true_positive(X, Y) returns those elements that are positive classes in Y and retrieved as positive in X.

Parameters
  • X (tensor) – retrieval results, retrieved -> 1, not retrieved -> 0

  • Y (tensor) – referenced, positive -> 1, negative -> 0

Returns

TP – a torch tensor of the same type as X and Y. In TP, true positive elements are ones, while others are zeros.

Return type

tensor
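
Examples

A small sketch of the four element-finding helpers on binary tensors; the inputs are illustrative, assuming imports from this module:

import torch as th
from torchlib.evaluation.retrieval import (true_positive, false_positive,
                                           true_negative, false_negative)

X = th.tensor([1, 0, 1, 0, 1])  # retrieval results
Y = th.tensor([1, 1, 0, 0, 1])  # references
print(true_positive(X, Y))   # ones where retrieved and positive
print(false_positive(X, Y))  # ones where retrieved but negative
print(true_negative(X, Y))   # ones where not retrieved and negative
print(false_negative(X, Y))  # ones where not retrieved but positive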

torchlib.evaluation.similarity module

torchlib.evaluation.similarity.dice_coeff(X, Y, TH=0.5)

Dice coefficient

\[s = \frac{2|Y \cap X|}{|X|+|Y|} \]
Parameters
  • X (tensor) – retrieval results, retrieved -> 1, not retrieved -> 0

  • Y (tensor) – referenced, positive -> 1, negative -> 0

  • TH (float) – X > TH -> 1, X <= TH -> 0

Returns

DC – the Dice coefficient.

Return type

float

torchlib.evaluation.similarity.jaccard_index(X, Y, TH=None)

Jaccard similarity coefficient

\[\mathrm{J}(\mathrm{A}, \mathrm{B})=\frac{|A \cap B|}{|A \cup B|} \]
Parameters
  • X (tensor) – retrieval results, retrieved -> 1, not retrieved -> 0

  • Y (tensor) – referenced, positive -> 1, negative -> 0

  • TH (float) – X > TH -> 1, X <= TH -> 0

Returns

JS – the Jaccard similarity coefficient.

Return type

float
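
Examples

A minimal sketch for both similarity measures on binary masks; the values are illustrative, assuming imports from this module:

import torch as th
from torchlib.evaluation.similarity import dice_coeff, jaccard_index

X = th.tensor([1., 1., 0., 0.])  # retrieval results
Y = th.tensor([1., 0., 1., 0.])  # references
print(dice_coeff(X, Y, TH=0.5))     # 2*1 / (2 + 2) = 0.5
print(jaccard_index(X, Y, TH=0.5))  # 1 / 3, since the intersection is 1 and the union is 3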

torchlib.evaluation.ssims module

torchlib.evaluation.ssims.gaussian_filter(input, win)

Blur input with a 1-D kernel

Parameters
  • input (torch.Tensor) – a batch of tensors to be blurred

  • win (torch.Tensor) – 1-D Gauss kernel

Returns

blurred tensors

Return type

torch.Tensor

torchlib.evaluation.ssims.msssim(X, Y, data_range=255, size_average=True, win_size=11, win_sigma=1.5, win=None, weights=None, K=(0.01, 0.03))

Interface of MS-SSIM

Parameters
  • X (torch.Tensor) – a batch of images, (N,C,[T,]H,W)

  • Y (torch.Tensor) – a batch of images, (N,C,[T,]H,W)

  • data_range (float or int, optional) – value range of input images (usually 1.0 or 255)

  • size_average (bool, optional) – if size_average=True, the SSIM values of all images will be averaged to a scalar

  • win_size (int, optional) – the size of the Gauss kernel

  • win_sigma (float, optional) – sigma of the normal distribution

  • win (torch.Tensor, optional) – 1-D Gauss kernel; if None, a new kernel will be created according to win_size and win_sigma

  • weights (list, optional) – weights for different levels

  • K (list or tuple, optional) – scalar constants (K1, K2); try a larger K2 (e.g. 0.4) if you get negative or NaN results

Returns

msssim results

Return type

torch.Tensor

torchlib.evaluation.ssims.ssim(X, Y, data_range=255, size_average=True, win_size=11, win_sigma=1.5, win=None, K=(0.01, 0.03), nonnegative_ssim=False)

Interface of SSIM

Parameters
  • X (torch.Tensor) – a batch of images, (N,C,H,W)

  • Y (torch.Tensor) – a batch of images, (N,C,H,W)

  • data_range (float or int, optional) – value range of input images (usually 1.0 or 255)

  • size_average (bool, optional) – if size_average=True, the SSIM values of all images will be averaged to a scalar

  • win_size (int, optional) – the size of the Gauss kernel

  • win_sigma (float, optional) – sigma of the normal distribution

  • win (torch.Tensor, optional) – 1-D Gauss kernel; if None, a new kernel will be created according to win_size and win_sigma

  • K (list or tuple, optional) – scalar constants (K1, K2); try a larger K2 (e.g. 0.4) if you get negative or NaN results

  • nonnegative_ssim (bool, optional) – force the SSIM response to be nonnegative with ReLU

Returns

ssim results

Return type

torch.Tensor
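
Examples

A minimal usage sketch for ssim and msssim; the images are random and purely illustrative, and msssim is assumed to need spatial sizes large enough for its multi-scale downsampling (256 x 256 is comfortably large for the default win_size=11):

import torch as th
from torchlib.evaluation.ssims import ssim, msssim

X = th.rand(4, 3, 256, 256)                      # a batch of images, (N, C, H, W), in [0, 1]
Y = (X + 0.1 * th.randn_like(X)).clamp(0., 1.)   # a noisy copy
print(ssim(X, Y, data_range=1.0))                # scalar, since size_average=True
print(msssim(X, Y, data_range=1.0))              # multi-scale variant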

Module contents