dubfi.inversion.inversion_lognormal
===================================

.. py:module:: dubfi.inversion.inversion_lognormal

.. autoapi-nested-parse::

   Inversion for a log-normal a priori probability distribution.

   .. codeauthor:: Valentin Bruch, DWD

   .. versionadded:: 0.1.3


Classes
-------

.. autoapisummary::

   dubfi.inversion.inversion_lognormal.InvertorOptimizerLogNormal


Functions
---------

.. autoapisummary::

   dubfi.inversion.inversion_lognormal.lognormal2normal
   dubfi.inversion.inversion_lognormal.normal2lognormal
   dubfi.inversion.inversion_lognormal.lognormal2meanvar
   dubfi.inversion.inversion_lognormal.meanvar2lognormal


Module Contents
---------------

.. py:class:: InvertorOptimizerLogNormal(y, prior, h, b, r, norm_prefactor=0.0, min_s=1e-12)

   Bases: :py:obj:`dubfi.inversion.inversion.InvertorOptimizer`

   Numerical solver for the flux inversion problem.

   Solve the inversion problem by numerically minimizing the cost function,
   assuming that the uncertainties may depend on the optimization parameters.
   In this case, the control parameter ``ls`` is the logarithm of the scaling
   factors.

   Solve the inverse problem using the full ensemble-estimated model
   uncertainty.

   :param y: vector of observations (usually minus model far-field or background)
   :type y: AbstractVector
   :param prior: a priori estimate of the scaling factors
   :type prior: np.ndarray
   :param h: model equivalents depending on the scaling factors
   :type h: ParametrizedVector
   :param b: error covariance matrix of the prior scaling factors
   :type b: np.ndarray
   :param r: error covariance matrix of the model-observation comparison; depends on the scaling factors
   :type r: ParametrizedOperator
   :param norm_prefactor: prefactor of the normalization term in the cost function
   :type norm_prefactor: float, default=0.0
   :param min_s: minimum allowed scaling factor
   :type min_s: float, default=1e-12

   .. py:property:: b
      :type: numpy.ndarray

      Prior error covariance matrix of the scaling factors.

   .. py:property:: prior
      :type: numpy.ndarray

      Prior vector of scaling factors.
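   The log-normal prior cost and its gradient documented in the methods below
   can be cross-checked numerically. The following is a minimal standalone
   NumPy sketch of the formulas, not the dubfi API; the helper names are
   hypothetical and ``b_tilde_inv`` plays the role of :math:`\tilde{B}^{-1}`.

   ```python
   import numpy as np

   def cost_prior(s, mu, b_tilde_inv):
       # J(s) = 1/2 (log s - mu)^T Btilde^{-1} (log s - mu) + sum(log s - mu)
       d = np.log(s) - mu
       return 0.5 * d @ b_tilde_inv @ d + d.sum()

   def cost_prior_grad(s, mu, b_tilde_inv):
       # dJ/ds_k = ([Btilde^{-1} (log s - mu)]_k + 1) / s_k
       d = np.log(s) - mu
       return (b_tilde_inv @ d + 1.0) / s

   # Finite-difference check of the gradient at a random positive point
   rng = np.random.default_rng(0)
   n = 3
   mu = rng.normal(size=n)
   a = rng.normal(size=(n, n))
   b_tilde_inv = a @ a.T + n * np.eye(n)   # symmetric positive definite
   s = np.exp(rng.normal(size=n))          # strictly positive scaling factors
   eps = 1e-6
   fd = np.array([(cost_prior(s + eps * e, mu, b_tilde_inv)
                   - cost_prior(s - eps * e, mu, b_tilde_inv)) / (2 * eps)
                  for e in np.eye(n)])
   assert np.allclose(fd, cost_prior_grad(s, mu, b_tilde_inv), rtol=1e-4, atol=1e-6)
   ```

   The same finite-difference pattern extends to checking the Hessian formula
   documented below against the gradient.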
   .. py:method:: _sanitize_s(s)

      Make sure all elements of ``s`` are positive; modify ``s`` in place.

   .. py:method:: _check_before_solving()

      Assert that the prior and initial scaling factors are positive.

   .. py:method:: cost_prior(s)

      Prior contribution to the cost function.

      .. math::

         J_\text{prior}(s) = \tfrac{1}{2} \sum_{lk} [\log(s_l)-\mu_l] (\tilde{B}^{-1})_{lk} [\log(s_k) - \mu_k] + \sum_k [\log(s_k) - \mu_k]

      This defines a log-normal probability distribution
      :math:`P(s) \propto e^{-J_\text{prior}(s)}` such that the element-wise
      logarithm of the scaling factors, :math:`z_k \equiv \log(s_k)`, has the
      probability density :math:`z \sim \mathcal{N}(\mu, \tilde{B})`.
      The parameters :math:`\mu` and :math:`\tilde{B}` are defined such that
      the a priori log-normal probability distribution
      :math:`P(s) \propto e^{-J_\text{prior}(s)}` has the given most likely
      scaling factors (:math:`s_0`) and the local Gaussian approximation
      (:math:`B`):

      .. math::

         (s_0)_i &= e^{\mu_i + \tfrac{1}{2}\tilde{B}_{ii}}, \\
         B_{ij} &= e^{\mu_i + \mu_j + \tfrac{1}{2}(\tilde{B}_{ii}+\tilde{B}_{jj})} \left(e^{\tilde{B}_{ij}} - 1\right)

      .. note::

         This is the first function called in
         :meth:`~dubfi.inversion.inversion.InvertorOptimizer.cost`.
         It ensures that all entries of the vector :math:`s` are positive.

      .. seealso::

         :func:`lognormal2normal`

   .. py:method:: cost_prior_grad(s)

      Prior contribution to the gradient of the cost function.

      .. math::

         \partial_k J_\text{prior}(s) = \sum_l [\log(s_l) - \mu_l] (\tilde{B}^{-1})_{lk} s_k^{-1} + s_k^{-1}

      .. note::

         This is the first function called in
         :meth:`~dubfi.inversion.inversion.InvertorOptimizer.cost_grad`.
         It ensures that all entries of the vector :math:`s` are positive.

   .. py:method:: cost_prior_hess(s)

      Prior contribution to the Hessian of the cost function.

      .. math::

         \partial_k \partial_l J_\text{prior}(s) = \frac{(\tilde{B}^{-1})_{lk}}{s_l s_k} - \delta_{lk} \left( 1 + \sum_m [\log(s_m) - \mu_m] (\tilde{B}^{-1})_{ml} \right) s_l^{-2}
      .. note::

         This is the first function called in
         :meth:`~dubfi.inversion.inversion.InvertorOptimizer.cost_hess`.
         It ensures that all entries of the vector :math:`s` are positive.

.. py:function:: lognormal2normal(mu, b)

   Given ``mu`` and ``B`` of a log-normal distribution, find the Gaussian
   approximation at its maximum.

   Consider a random vector :math:`x` and denote by :math:`z` its
   element-wise logarithm, :math:`z_i = \log(x_i)`. Assume that
   :math:`z \sim \mathcal{N}(\mu, B)`. Then
   :math:`P(x) \propto \exp\left(-\sum_k z_k - \tfrac{1}{2} [z-\mu]^\top B^{-1} [z-\mu] \right)`.
   Approximate :math:`P(x)` locally at its maximum:

   .. math::

      x^\text{max}_i &= e^{\mu_i - \sum_j B_{ij}}, \\
      (B_\text{Gauss}^{-1})_{ij} &= -\partial_i \partial_j \log(P(x))|_{x^\text{max}} = \frac{(B^{-1})_{ij}}{x^\text{max}_i x^\text{max}_j}

   Return :math:`x^\text{max}` and :math:`B_\text{Gauss}^{-1}`.

   .. note::

      This only makes sense if :math:`B` is positive definite and
      real-symmetric.

.. py:function:: normal2lognormal(xmax, b_gauss)

   Compute the inverse of :func:`lognormal2normal`.

   Given a local Gaussian approximation of a log-normal distribution at its
   maximum, return the parameters ``mu`` and ``B`` of the log-normal
   distribution.

   .. note::

      All entries of ``xmax`` must be positive. ``b_gauss`` must be a
      positive definite, real-symmetric matrix.

.. py:function:: lognormal2meanvar(mu, b)

   Given ``mu`` and ``B`` of a log-normal distribution, compute the mean and
   variance (for a Gaussian approximation).

   Consider a random vector :math:`x` and denote by :math:`z` its
   element-wise logarithm, :math:`z_i = \log(x_i)`. Assume that
   :math:`z \sim \mathcal{N}(\mu, B)`. Then
   :math:`P(x) \propto \exp\left(-\sum_k z_k - \tfrac{1}{2} [z-\mu]^\top B^{-1} [z-\mu] \right)`
   and :math:`\langle e^{t^\top z} \rangle = e^{t^\top\mu + \tfrac{1}{2} t^\top B t}`.
   By choosing :math:`t = e_i` or :math:`t = e_i + e_j`, one can show that
   the mean and variance of :math:`x` are:
   .. math::

      \langle x_i \rangle &= e^{\mu_i + \tfrac{1}{2} B_{ii}}, \\
      \langle\langle x_i x_j \rangle\rangle &= e^{\mu_i + \mu_j + \tfrac{1}{2}(B_{ii}+B_{jj})} \left(e^{B_{ij}} - 1\right)

   This function returns the mean and variance of :math:`x`.

   .. note::

      This only makes sense if :math:`B` is positive definite and
      real-symmetric.

.. py:function:: meanvar2lognormal(mean, var)

   Given the mean and variance of a random vector, construct the parameters
   ``mu`` and ``B`` of the log-normal distribution.

   This is the inverse of :func:`lognormal2meanvar`.

   .. note::

      All entries of ``mean`` must be positive. ``var`` must be a positive
      definite, real-symmetric matrix.
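The four conversion functions above are fully determined by the formulas in
their docstrings. The following self-contained NumPy sketch is a hypothetical
reimplementation for illustration (not the dubfi API; return conventions are
assumed from the docstrings, in particular that the curvature at the maximum
is :math:`(B^{-1})_{ij} / (x^\text{max}_i x^\text{max}_j)`) and verifies that
each pair of conversions is mutually inverse.

```python
import numpy as np

def lognormal2normal(mu, b):
    # Maximum of the log-normal density and inverse covariance of the
    # local Gaussian approximation (curvature of -log P at the maximum).
    x_max = np.exp(mu - b.sum(axis=1))
    b_gauss_inv = np.linalg.inv(b) / np.outer(x_max, x_max)
    return x_max, b_gauss_inv

def normal2lognormal(x_max, b_gauss):
    # Inverse mapping: b_gauss = D B D with D = diag(x_max).
    b = b_gauss / np.outer(x_max, x_max)
    mu = np.log(x_max) + b.sum(axis=1)
    return mu, b

def lognormal2meanvar(mu, b):
    # Mean and covariance of x where log(x) ~ N(mu, B).
    mean = np.exp(mu + 0.5 * np.diag(b))
    var = np.outer(mean, mean) * (np.exp(b) - 1.0)
    return mean, var

def meanvar2lognormal(mean, var):
    # Inverse mapping: B_ij = log(1 + var_ij / (mean_i mean_j)).
    b = np.log1p(var / np.outer(mean, mean))
    mu = np.log(mean) - 0.5 * np.diag(b)
    return mu, b

# Round-trip checks with a positive definite, real-symmetric B
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4))
b = 0.05 * (a @ a.T) + 0.1 * np.eye(4)
mu = rng.normal(size=4)

x_max, b_gauss_inv = lognormal2normal(mu, b)
mu1, b1 = normal2lognormal(x_max, np.linalg.inv(b_gauss_inv))
assert np.allclose(mu1, mu) and np.allclose(b1, b)

mean, var = lognormal2meanvar(mu, b)
mu2, b2 = meanvar2lognormal(mean, var)
assert np.allclose(mu2, mu) and np.allclose(b2, b)
```

Note that for small :math:`B` the two Gaussian approximations agree to first
order, since :math:`e^{B_{ij}} - 1 \approx B_{ij}` makes the covariance
approach :math:`\langle x_i\rangle \langle x_j\rangle B_{ij}`.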