torchlib.spl package

Submodules

torchlib.spl.spfunction module

class torchlib.spl.spfunction.Binary

Bases: object

Binary function

The binary SPL function can be expressed as

(1)\[f(\bm{v}, \lambda) = -\lambda\|{\bm v}\|_1 = -\lambda\sum_{n=1}^N v_n \]

The optimal solution is

(2)\[v_{n}^* = \left\{\begin{array}{ll}{1,} & {l_{n}<\lambda} \\ {0,} & {l_{n} \geq \lambda}\end{array}\right. \]
eval(v, lmbd)

Evaluate the SP function.

The binary SPL function can be expressed as

(3)\[f(\bm{v}, \lambda) = -\lambda\|{\bm v}\|_1 = -\lambda\sum_{n=1}^N v_n \]
Parameters
  • v (tensor) – The easiness degrees (sample weights) of the N samples (an \(N×1\) tensor)

  • lmbd (float) – balance factor
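
A minimal sketch of Eqs. (1)–(3) in plain PyTorch is shown below. The helper names (binary_spl_value, binary_spl_solve) and the example losses are made up for illustration; they are not part of torchlib.

    import torch

    # Hypothetical helpers (not torchlib API) illustrating Eqs. (1)-(3).
    def binary_spl_value(v, lmbd):
        # f(v, lambda) = -lambda * ||v||_1
        return -lmbd * torch.sum(torch.abs(v))

    def binary_spl_solve(loss, lmbd):
        # Closed-form minimizer: v_n* = 1 if l_n < lambda, else 0.
        return (loss < lmbd).float()

    loss = torch.tensor([0.2, 0.8, 0.4, 1.5])   # per-sample losses l_n
    v_star = binary_spl_solve(loss, lmbd=0.5)   # tensor([1., 0., 1., 0.])
    print(binary_spl_value(v_star, lmbd=0.5))   # tensor(-1.)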

class torchlib.spl.spfunction.Linear

Bases: object

Linear function

The Linear SPL function can be expressed as

(4)\[f(\bm{v}, \lambda)=\lambda\left(\frac{1}{2}\|\bm{v}\|_{2}^{2}-\sum_{n=1}^{N} v_{n}\right) \]

The optimal solution is

(5)\[v_{n}^* = {\rm max}\{1-l_n/\lambda, 0\} \]
eval(v, lmbd)

Evaluate the SP function.

The Linear SPL function can be expressed as

(6)\[f(\bm{v}, \lambda)=\lambda\left(\frac{1}{2}\|\bm{v}\|_{2}^{2}-\sum_{n=1}^{N} v_{n}\right) \]
Parameters
  • v (tensor) – The easiness degrees (sample weights) of the N samples (an \(N×1\) tensor)

  • lmbd (float) – balance factor
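
Analogously, a sketch of Eqs. (4)–(6); again the helper names are hypothetical, not torchlib API.

    import torch

    # Hypothetical helpers illustrating the linear SPL scheme.
    def linear_spl_value(v, lmbd):
        # f(v, lambda) = lambda * (0.5 * ||v||_2^2 - sum(v))
        return lmbd * (0.5 * torch.sum(v ** 2) - torch.sum(v))

    def linear_spl_solve(loss, lmbd):
        # Closed-form minimizer: v_n* = max{1 - l_n / lambda, 0} (soft weights).
        return torch.clamp(1.0 - loss / lmbd, min=0.0)

    loss = torch.tensor([0.1, 0.4, 0.9])
    print(linear_spl_solve(loss, lmbd=0.5))     # tensor([0.8000, 0.2000, 0.0000])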

class torchlib.spl.spfunction.Logarithmic

Bases: object

Logarithmic function

The Logarithmic SPL function can be expressed as

(7)\[f(\bm{v}, \lambda) = \sum_{n=1}^{N}\left(\zeta v_{n}-\frac{\zeta^{v_{n}}}{\log \zeta}\right) \]

where \(\zeta=1-\lambda\) and \(0<\lambda<1\).

The optimal solution is

(8)\[v_{n}^{*}=\left\{\begin{array}{ll}{0,} & {l_{n} \geq \lambda} \\ {\log \left(l_{n}+\zeta\right) / \log \zeta,} & {l_{n}<\lambda}\end{array}\right. \]
eval(v, lmbd)

Evaluate the SP function.

The Logarithmic SPL function can be expressed as

(9)\[f(\bm{v}, \lambda) = \sum_{n=1}^{N}\left(\zeta v_{n}-\frac{\zeta^{v_{n}}}{\log \zeta}\right) \]

where \(\zeta=1-\lambda\) and \(0<\lambda<1\).

Parameters
  • v (tensor) – The easiness degrees (sample weights) of the N samples (an \(N×1\) tensor)

  • lmbd (float) – balance factor
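
A sketch of Eqs. (7)–(9) under the stated constraint \(0<\lambda<1\) (so \(\zeta=1-\lambda\) and \(\log\zeta<0\)); the helper names are hypothetical.

    import math
    import torch

    # Hypothetical helpers illustrating the logarithmic SPL scheme.
    def log_spl_value(v, lmbd):
        zeta = 1.0 - lmbd
        return torch.sum(zeta * v - zeta ** v / math.log(zeta))

    def log_spl_solve(loss, lmbd):
        # v_n* = log(l_n + zeta) / log(zeta) if l_n < lambda, else 0.
        zeta = 1.0 - lmbd
        v = torch.log(loss + zeta) / math.log(zeta)
        return torch.where(loss < lmbd, v, torch.zeros_like(loss))

    loss = torch.tensor([0.05, 0.2, 0.7])
    print(log_spl_solve(loss, lmbd=0.4))        # weights in [0, 1]; the hard sample gets 0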

class torchlib.spl.spfunction.Mixture

Bases: object

Mixture function

The Mixture SPL function can be expressed as

(10)\[f\left(\bm{v}, \lambda \right)=-\zeta \sum_{n=1}^{N} \log \left(v_{n}+\zeta / \lambda \right) \]

where \(\zeta = \frac{1}{k^{\prime} - k} = \frac{\lambda^{\prime}\lambda}{\lambda-\lambda^{\prime}}\) (with \(k=1/\lambda\) and \(k^{\prime}=1/\lambda^{\prime}\)).

The optimal solution is

(11)\[v_{n}^{*}=\left\{\begin{array}{ll}{1,} & {l_{n} \leq \lambda^{\prime}} \\ {0,} & {l_{n} \geq \lambda} \\ {\zeta / l_{n}-\zeta / \lambda,} & {\text { otherwise }}\end{array}\right. \]
eval(v, lmbd1, lmbd2)

Evaluate the SP function.

The Mixture SPL function can be expressed as

(12)\[f\left(\bm{v}, \lambda \right)=-\zeta \sum_{n=1}^{N} \log \left(v_{n}+\zeta / \lambda \right) \]

where \(\zeta = \frac{1}{k^{\prime} - k} = \frac{\lambda^{\prime}\lambda}{\lambda-\lambda^{\prime}}\) (with \(k=1/\lambda\) and \(k^{\prime}=1/\lambda^{\prime}\)).

Parameters
  • v (tensor) – The easiness degrees (sample weights) of the N samples (an \(N×1\) tensor)

  • lmbd1 (float) – balance factor

  • lmbd2 (float) – balance factor
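
Assuming lmbd1 plays the role of \(\lambda\) and lmbd2 the role of \(\lambda^{\prime}\) (with \(\lambda>\lambda^{\prime}>0\)), Eq. (11) could be evaluated as sketched below; the helper name and this parameter mapping are assumptions, not documented torchlib behaviour.

    import torch

    # Hypothetical helper illustrating the mixture SPL scheme, Eqs. (10)-(11).
    def mixture_spl_solve(loss, lmbd1, lmbd2):
        zeta = lmbd1 * lmbd2 / (lmbd1 - lmbd2)
        v = zeta / loss - zeta / lmbd1                              # intermediate samples
        v = torch.where(loss <= lmbd2, torch.ones_like(loss), v)   # easy: weight 1
        v = torch.where(loss >= lmbd1, torch.zeros_like(loss), v)  # hard: weight 0
        return v

    loss = torch.tensor([0.1, 0.5, 1.2])
    print(mixture_spl_solve(loss, lmbd1=1.0, lmbd2=0.3))   # tensor([1.0000, 0.4286, 0.0000])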

torchlib.spl.voptimizer module

class torchlib.spl.voptimizer.Binary(rankr=0.6, maxrankr=1, mu=1.003)

Bases: object

Binary function

The binary SPL function can be expressed as

(13)\[f(\bm{v}, \lambda) = -\lambda\|{\bm v}\|_1 = -\lambda\sum_{n=1}^N v_n \]

The optimal solution is

(14)\[v_{n}^* = \left\{\begin{array}{ll}{1,} & {l_{n}<\lambda} \\ {0,} & {l_{n} \geq \lambda}\end{array}\right. \]
step(loss)

Perform one step of optimization.

The optimal solution is

(15)\[v_{n}^* = \left\{\begin{array}{ll}{1,} & {l_{n}<\lambda} \\ {0,} & {l_{n} \geq \lambda}\end{array}\right. \]
Parameters

loss (tensor) – The loss values of the N samples (an \(N×1\) tensor)

update_rankr()

Update the rank ratio.

\[r \leftarrow {\rm min}\{\mu r, r_{max}\} \]
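
A hypothetical end-to-end usage sketch of this optimizer follows. It assumes that step(loss) returns the weight vector \(v^*\) of Eq. (14), that those weights are used to reweight the per-sample losses, and that update_rankr() is called once per epoch; loader stands for any iterable of (x, y) batches. These assumptions are not confirmed by the signatures above, so check them against the implementation.

    import torch
    import torch.nn as nn
    from torchlib.spl.voptimizer import Binary

    model = nn.Linear(10, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = nn.MSELoss(reduction='none')        # keep per-sample losses
    vopt = Binary(rankr=0.6, maxrankr=1, mu=1.003)

    for epoch in range(10):
        for x, y in loader:                         # assumed (x, y) data loader
            loss = criterion(model(x), y).view(-1, 1)   # N x 1 loss tensor
            v = vopt.step(loss.detach())            # assumed: returns weights v*
            weighted = (v * loss).mean()            # train on the "easy" samples
            optimizer.zero_grad()
            weighted.backward()
            optimizer.step()
        vopt.update_rankr()                         # gradually admit harder samples
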
class torchlib.spl.voptimizer.Linear(rankr=0.6, maxrankr=1, mu=1.003)

Bases: object

Linear function

The Linear SPL function can be expressed as

(16)\[f(\bm{v}, \lambda)=\lambda\left(\frac{1}{2}\|\bm{v}\|_{2}^{2}-\sum_{n=1}^{N} v_{n}\right) \]

The optimal solution is

(17)\[v_{n}^* = {\rm max}\{1-l_n/\lambda, 0\} \]
step(loss)

Perform one step of optimization.

The optimal solution is

(18)\[v_{n}^* = {\rm max}\{1-l_n/\lambda, 0\} \]
Parameters

loss (tensor) – The loss values of the N samples (an \(N×1\) tensor)

update_rankr()

Update the rank ratio.

\[r \leftarrow {\rm min}\{\mu r, r_{max}\} \]
class torchlib.spl.voptimizer.Logarithmic(rankr=0.6, maxrankr=1, mu=1.003)

Bases: object

Logarithmic function

The Logarithmic SPL function can be expressed as

(19)\[f(\bm{v}, \lambda) = \sum_{n=1}^{N}\left(\zeta v_{n}-\frac{\zeta^{v_{n}}}{\log \zeta}\right) \]

where \(\zeta=1-\lambda\) and \(0<\lambda<1\).

The optimal solution is

(20)\[v_{n}^{*}=\left\{\begin{array}{ll}{0,} & {l_{n} \geq \lambda} \\ {\log \left(l_{n}+\zeta\right) / \log \zeta,} & {l_{n}<\lambda}\end{array}\right. \]
step(loss)

Perform one step of optimization.

The optimal solution is

(21)\[v_{n}^{*}=\left\{\begin{array}{ll}{0,} & {l_{n} \geq \lambda} \\ {\log \left(l_{n}+\zeta\right) / \log \zeta,} & {l_{n}<\lambda}\end{array}\right. \]
Parameters

loss (tensor) – The loss values of the N samples (an \(N×1\) tensor)

update_rankr()

Update the rank ratio.

\[r \leftarrow {\rm min}\{\mu r, r_{max}\} \]
class torchlib.spl.voptimizer.Mixture(rankr=0.6, maxrankr=1, mu=1.003)

Bases: object

Mixture function

The Mixture SPL function can be expressed as

(22)\[f\left(\bm{v}, \lambda \right)=-\zeta \sum_{n=1}^{N} \log \left(v_{n}+\zeta / \lambda \right) \]

where \(\zeta = \frac{1}{k^{\prime} - k} = \frac{\lambda^{\prime}\lambda}{\lambda-\lambda^{\prime}}\) (with \(k=1/\lambda\) and \(k^{\prime}=1/\lambda^{\prime}\)).

The optimal solution is

(23)\[v_{n}^{*}=\left\{\begin{array}{ll}{1,} & {l_{n} \leq \lambda^{\prime}} \\ {0,} & {l_{n} \geq \lambda} \\ {\zeta / l_{n}-\zeta / \lambda,} & {\text { otherwise }}\end{array}\right. \]
step(loss)

Perform one step of optimization.

The optimal solution is

(24)\[v_{n}^{*}=\left\{\begin{array}{ll}{1,} & {l_{n} \leq \lambda^{\prime}} \\ {0,} & {l_{n} \geq \lambda} \\ {\zeta / l_{n}-\zeta / \lambda,} & {\text { otherwise }}\end{array}\right. \]
Parameters

loss (tensor) – The loss values of the N samples (an \(N×1\) tensor)

update_rankr()

Update the rank ratio.

\[r \leftarrow {\rm min}\{\mu r, r_{max}\} \]
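
The rank-ratio schedule that all four optimizers share can be pictured as follows. Only the update rule \(r \leftarrow {\rm min}\{\mu r, r_{max}\}\) comes from the documentation above; taking \(\lambda\) as the r-quantile of the current losses is an assumption made purely for illustration.

    import torch

    rankr, maxrankr, mu = 0.6, 1.0, 1.003

    def select_lambda(loss, rankr):
        # Assumed rule: pick lambda so that roughly a rankr fraction of
        # samples satisfies l_n < lambda.
        return torch.quantile(loss, rankr)

    def update_rankr(rankr):
        # Documented rule: r = min(mu * r, r_max), i.e. admit more samples
        # as training proceeds.
        return min(mu * rankr, maxrankr)

    loss = torch.rand(100)
    lmbd = select_lambda(loss, rankr)
    v = (loss < lmbd).float()        # binary scheme, Eq. (14)
    rankr = update_rankr(rankr)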

Module contents