dubfi.inversion.inversion_lognormal¶
Inversion for log-normal a priori probability distribution.
Added in version 0.1.3.
Classes¶
InvertorOptimizerLogNormal | Numerical solver for flux inversion problem.
Functions¶
lognormal2normal | Given mu and B of a log-normal distribution, find Gaussian approximation at its maximum.
normal2lognormal | Compute inverse of lognormal2normal().
lognormal2meanvar | Given mu and B of a log-normal distribution, compute mean and variance (for Gaussian approximation).
meanvar2lognormal | Given mean and variance of a random vector, construct parameters mu and B of log-normal distribution.
Module Contents¶
- class dubfi.inversion.inversion_lognormal.InvertorOptimizerLogNormal(y, prior, h, b, r, norm_prefactor=0.0, min_s=1e-12)¶
Bases: dubfi.inversion.inversion.InvertorOptimizer
Numerical solver for flux inversion problem.
Solve the inversion problem by numerically minimizing the cost function, assuming that the uncertainties may depend on the optimization parameters. In this case, the control parameter ls is the logarithm of the scaling factors.
Solve the inverse problem using the full ensemble-estimated model uncertainty.
- Parameters:
y (AbstractVector) – vector of observations (usually with the model far-field or background subtracted)
prior (np.ndarray) – a priori estimate for scaling factors
h (ParametrizedVector) – model equivalents depending on scaling factors
b (np.ndarray) – error covariance matrix of prior scaling factors
r (ParametrizedOperator) – error covariance matrix of model-observation comparison, depends on scaling factors
norm_prefactor (float, default=0.0) – prefactor of normalization term in cost function.
min_s (float, default=1e-12) – minimum allowed scaling factor
- property b: numpy.ndarray¶
Prior error covariance matrix of scaling factors.
- Return type:
numpy.ndarray
- property prior: numpy.ndarray¶
Prior vector of scaling factors.
- Return type:
numpy.ndarray
- _sanitize_s(s)¶
Make sure that all elements of s are positive; s is modified in-place.
- Parameters:
s (numpy.ndarray)
- Return type:
None
- _check_before_solving()¶
Assert that prior and initial scaling factors are positive.
- Return type:
None
- cost_prior(s)¶
Prior contribution to cost function.
\[J_\text{prior}(s) = \tfrac{1}{2} \sum_{lk} [\log(s_l)-\mu_l] (\tilde{B}^{-1})_{lk} [\log(s_k) - \mu_k] + \sum_k [\log(s_k) - \mu_k]\]
This defines a log-normal probability distribution \(P(s) \propto e^{-J_\text{prior}(s)}\) such that the element-wise logarithm of the scaling factors, \(z_k\equiv\log(s_k)\), has a probability density \(z \sim \mathcal{N}(\mu, \tilde{B})\).
The parameters \(\mu\) and \(\tilde{B}\) are defined such that the a priori log-normal probability distribution \(P(s) \propto e^{-J_\text{prior}(s)}\) has the given most likely scaling factors (\(s_0\)) and the local Gaussian approximation (\(B\)):
\[ \begin{align}\begin{aligned}(s_0)_i &= e^{\mu_i+\tfrac{1}{2}\tilde{B}_{ii}},\\B_{ij} &= e^{\mu_i + \mu_j + \tfrac{1}{2}(\tilde{B}_{ii}+\tilde{B}_{jj})} \left(e^{\tilde{B}_{ij}} - 1\right)\end{aligned}\end{align} \]
Note
This is the first function called in cost(). It ensures that all entries of the vector \(s\) are positive.
- Parameters:
s (numpy.ndarray)
- Return type:
numpy.float64
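As a concrete illustration of this prior term, here is a minimal standalone numpy sketch of \(J_\text{prior}(s)\); the helper name and the values of mu, b_tilde and s are made up for the example and are not part of dubfi.

```python
import numpy as np

def cost_prior_sketch(s, mu, b_tilde):
    """Hypothetical standalone version of the prior cost term J_prior(s)."""
    z = np.log(s) - mu                    # log(s_k) - mu_k
    b_inv = np.linalg.inv(b_tilde)        # (B~)^{-1}
    return 0.5 * z @ b_inv @ z + z.sum()  # quadratic term plus sum_k [log(s_k) - mu_k]

# illustrative values
mu = np.array([0.0, 0.1])
b_tilde = np.array([[0.04, 0.01],
                    [0.01, 0.09]])
s = np.array([1.2, 0.8])
print(cost_prior_sketch(s, mu, b_tilde))
```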
- cost_prior_grad(s)¶
Prior contribution to gradient of cost function.
\[\partial_k J_\text{prior}(s) = \sum_l [\log(s_l) - \mu_l] (\tilde{B}^{-1})_{lk} s_k^{-1} + s_k^{-1}\]
Note
This is the first function called in cost_grad(). It ensures that all entries of the vector \(s\) are positive.
- Parameters:
s (numpy.ndarray)
- Return type:
numpy.ndarray
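The gradient formula can be checked against a central finite-difference approximation of the cost expression above; the sketch below is self-contained numpy with illustrative values, not dubfi code.

```python
import numpy as np

def cost_prior_sketch(s, mu, b_inv):
    z = np.log(s) - mu
    return 0.5 * z @ b_inv @ z + z.sum()

def cost_prior_grad_sketch(s, mu, b_inv):
    # d_k J_prior = sum_l [log(s_l) - mu_l] (B~^-1)_{lk} / s_k + 1 / s_k
    z = np.log(s) - mu
    return (z @ b_inv) / s + 1.0 / s

mu = np.array([0.0, 0.1])
b_inv = np.linalg.inv(np.array([[0.04, 0.01],
                                [0.01, 0.09]]))
s = np.array([1.2, 0.8])

# central finite differences of the cost should reproduce the analytic gradient
eps = 1e-6
fd = np.array([(cost_prior_sketch(s + eps * np.eye(2)[k], mu, b_inv)
                - cost_prior_sketch(s - eps * np.eye(2)[k], mu, b_inv)) / (2 * eps)
               for k in range(2)])
print(np.allclose(fd, cost_prior_grad_sketch(s, mu, b_inv)))  # expected: True
```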
- cost_prior_hess(s)¶
Prior contribution to Hessian of cost function.
\[\partial_k \partial_l J_\text{prior}(s) = \frac{(\tilde{B}^{-1})_{lk}}{s_l s_k} - \delta_{lk} \left( 1 + \sum_m [\log(s_m) - \mu_m] (\tilde{B}^{-1})_{ml} \right) s_l^{-2}\]
Note
This is the first function called in cost_hess(). It ensures that all entries of the vector \(s\) are positive.
- Parameters:
s (numpy.ndarray)
- Return type:
numpy.ndarray
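Likewise, a self-contained numpy sketch of the Hessian formula, verified against finite differences of the analytic gradient; names and values are illustrative only.

```python
import numpy as np

def cost_prior_grad_sketch(s, mu, b_inv):
    z = np.log(s) - mu
    return (z @ b_inv) / s + 1.0 / s

def cost_prior_hess_sketch(s, mu, b_inv):
    # d_k d_l J_prior = (B~^-1)_{lk} / (s_l s_k)
    #                   - delta_{lk} (1 + sum_m [log(s_m) - mu_m] (B~^-1)_{ml}) / s_l^2
    z = np.log(s) - mu
    hess = b_inv / np.outer(s, s)
    hess -= np.diag((1.0 + z @ b_inv) / s**2)
    return hess

mu = np.array([0.0, 0.1])
b_inv = np.linalg.inv(np.array([[0.04, 0.01],
                                [0.01, 0.09]]))
s = np.array([1.2, 0.8])

# central finite differences of the gradient should reproduce the analytic Hessian
eps = 1e-6
fd = np.array([(cost_prior_grad_sketch(s + eps * np.eye(2)[l], mu, b_inv)
                - cost_prior_grad_sketch(s - eps * np.eye(2)[l], mu, b_inv)) / (2 * eps)
               for l in range(2)])
print(np.allclose(fd, cost_prior_hess_sketch(s, mu, b_inv)))  # expected: True
```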
- dubfi.inversion.inversion_lognormal.lognormal2normal(mu, b)¶
Given mu and B of a log-normal distribution, find Gaussian approximation at its maximum.
Consider a random vector \(x\) and denote by \(z\) its element-wise logarithm, \(z_i=\log(x_i)\). Assume that \(z\sim\mathcal{N}(\mu, B)\). Then \(P(x) \propto \exp\left(-\sum_k z_k - \tfrac{1}{2} [z-\mu]^\top B^{-1} [z-\mu] \right)\). Approximate \(P(x)\) locally at its maximum:
\[ \begin{align}\begin{aligned}x^\text{max}_i &= e^{\mu_i - \sum_j B_{ij}},\\(B_\text{Gauss}^{-1})_{ij} &= -\partial_i \partial_j \log(P(x))|_{x^\text{max}} = \frac{(B^{-1})_{ij}}{x^\text{max}_i x^\text{max}_j}\end{aligned}\end{align} \]
Return \(x^\text{max}\) and \(B_\text{Gauss}^{-1}\).
Note
This only makes sense if \(B\) is positive definite and real-symmetric.
- Parameters:
mu (numpy.ndarray)
b (numpy.ndarray)
- Return type:
tuple[numpy.ndarray, numpy.ndarray]
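The two relations can be reproduced directly in numpy. The sketch below uses made-up mu and B and checks that the finite-difference gradient of \(-\log P\) vanishes at \(x^\text{max}\); the variable names are illustrative and do not mirror dubfi's implementation.

```python
import numpy as np

def neg_log_p(x, mu, b_inv):
    # -log P(x) up to an additive constant, with z_i = log(x_i)
    z = np.log(x)
    return z.sum() + 0.5 * (z - mu) @ b_inv @ (z - mu)

mu = np.array([0.0, 0.2])
b = np.array([[0.05, 0.01],
              [0.01, 0.08]])
b_inv = np.linalg.inv(b)

# mode of the log-normal distribution and local Gaussian curvature at the mode
x_max = np.exp(mu - b.sum(axis=1))            # x^max_i = exp(mu_i - sum_j B_ij)
b_gauss_inv = b_inv / np.outer(x_max, x_max)  # (B_Gauss^-1)_ij = (B^-1)_ij / (x^max_i x^max_j)
print(x_max, b_gauss_inv)

# the finite-difference gradient of -log P should vanish at x_max
eps = 1e-7
grad = np.array([(neg_log_p(x_max + eps * np.eye(2)[i], mu, b_inv)
                  - neg_log_p(x_max - eps * np.eye(2)[i], mu, b_inv)) / (2 * eps)
                 for i in range(2)])
print(np.allclose(grad, 0.0, atol=1e-5))  # expected: True
```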
- dubfi.inversion.inversion_lognormal.normal2lognormal(xmax, b_gauss)¶
Compute inverse of lognormal2normal().
Given a local Gaussian approximation of a log-normal distribution at its maximum, return parameters mu and B of the log-normal distribution.
Note
All entries of xmax must be positive. b_gauss must be a positive definite, real-symmetric matrix.
- Parameters:
xmax (numpy.ndarray)
b_gauss (numpy.ndarray)
- Return type:
tuple[numpy.ndarray, numpy.ndarray]
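Assuming the two functions are exact inverses of each other as documented, a round trip should recover the original parameters. A usage sketch with made-up inputs:

```python
import numpy as np
from dubfi.inversion.inversion_lognormal import lognormal2normal, normal2lognormal

mu = np.array([0.0, 0.2])
b = np.array([[0.05, 0.01],
              [0.01, 0.08]])

# log-normal parameters -> Gaussian approximation at the maximum -> back
xmax, b_gauss = lognormal2normal(mu, b)
mu_back, b_back = normal2lognormal(xmax, b_gauss)

print(np.allclose(mu_back, mu), np.allclose(b_back, b))  # expected: True True
```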
- dubfi.inversion.inversion_lognormal.lognormal2meanvar(mu, b)¶
Given mu and B of a log-normal distribution, compute mean and variance (for Gaussian approximation).
Consider a random vector \(x\) and denote by \(z\) its element-wise logarithm, \(z_i=\log(x_i)\). Assume that \(z\sim\mathcal{N}(\mu, B)\). Then \(P(x) \propto \exp\left(-\sum_k z_k - \tfrac{1}{2} [z-\mu]^\top B^{-1} [z-\mu] \right)\) and \(\langle e^{t^\top z} \rangle = e^{t^\top\mu + \tfrac{1}{2} t^\top B t}\). By choosing \(t=e_i\) or \(t=e_i+e_j\), one can show that the mean and variance of \(x\) are:
\[ \begin{align}\begin{aligned}\langle x_i \rangle &= e^{\mu_i+\tfrac{1}{2}B_{ii}},\\\langle\langle x_i x_j \rangle\rangle &= e^{\mu_i + \mu_j + \tfrac{1}{2}(B_{ii}+B_{jj})} \left(e^{B_{ij}} - 1\right)\end{aligned}\end{align} \]
This function returns the mean and variance of \(x\).
Note
This only makes sense if \(B\) is positive definite and real-symmetric.
- Parameters:
mu (numpy.ndarray)
b (numpy.ndarray)
- Return type:
tuple[numpy.ndarray, numpy.ndarray]
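These moment formulas can be verified with a quick Monte-Carlo experiment in plain numpy; the values below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.2])
b = np.array([[0.05, 0.01],
              [0.01, 0.08]])

# analytic mean and covariance of x = exp(z) with z ~ N(mu, B)
mean = np.exp(mu + 0.5 * np.diag(b))
var = np.exp(mu[:, None] + mu[None, :]
             + 0.5 * (np.diag(b)[:, None] + np.diag(b)[None, :])) * (np.exp(b) - 1.0)

# Monte-Carlo estimate from samples of z
x = np.exp(rng.multivariate_normal(mu, b, size=1_000_000))
print(np.allclose(mean, x.mean(axis=0), rtol=1e-2))          # expected: True
print(np.allclose(var, np.cov(x, rowvar=False), rtol=5e-2))  # expected: True
```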
- dubfi.inversion.inversion_lognormal.meanvar2lognormal(mean, var)¶
Given mean and variance of a random vector, construct parameters mu and B of log-normal distribution.
This is the inverse of lognormal2meanvar().
Note
All entries of mean must be positive. var must be a positive definite, real-symmetric matrix.
- Parameters:
mean (numpy.ndarray)
var (numpy.ndarray)
- Return type:
tuple[numpy.ndarray, numpy.ndarray]
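Inverting the two moment relations documented for lognormal2meanvar() gives the standard log-normal moment-matching formulas. The sketch below is a plain-numpy illustration of that inversion and of the round trip; it is not necessarily identical to dubfi's implementation.

```python
import numpy as np

def meanvar2lognormal_sketch(mean, var):
    """Moment matching: invert the mean/variance relations of lognormal2meanvar."""
    b = np.log1p(var / np.outer(mean, mean))  # B_ij = log(1 + var_ij / (mean_i mean_j))
    mu = np.log(mean) - 0.5 * np.diag(b)      # mu_i = log(mean_i) - B_ii / 2
    return mu, b

# round trip against the forward relations documented for lognormal2meanvar
mu = np.array([0.0, 0.2])
b = np.array([[0.05, 0.01],
              [0.01, 0.08]])
mean = np.exp(mu + 0.5 * np.diag(b))
var = np.exp(mu[:, None] + mu[None, :]
             + 0.5 * (np.diag(b)[:, None] + np.diag(b)[None, :])) * (np.exp(b) - 1.0)

mu_back, b_back = meanvar2lognormal_sketch(mean, var)
print(np.allclose(mu_back, mu), np.allclose(b_back, b))  # expected: True True
```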