etomo.reconstructors.base#

Heavily inspired by pysap-mri multichannel reconstructor.

class ReconstructorBase(data_op, linear_op, regularizer_op, gradient_formulation, grad_class, init_gradient_op=True, verbose=0, **extra_grad_args)[source]#

Bases: object

Base class from which all reconstructors derive.

Notes

For the Analysis case, finds the solution x of:

    (1/2) * ||R x - y||_2^2 + mu * H(W x)

For the Synthesis case, finds the solution alpha of:

    (1/2) * ||R W^T alpha - y||_2^2 + mu * H(alpha)
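As a sketch of what these two cost functions mean, they can be evaluated with plain NumPy arrays standing in for the operators. Everything here is illustrative: small random matrices replace the real Radon operator R and sparsifying operator W, and H is taken to be the l1-norm (a common but not mandated choice).

```python
import numpy as np

# Hypothetical stand-ins for the real operators: plain matrices.
rng = np.random.default_rng(0)
R = rng.standard_normal((6, 4))            # data operator R
W = rng.standard_normal((4, 4))            # linear sparsifying operator W
x = rng.standard_normal(4)                 # image
y = R @ x + 0.01 * rng.standard_normal(6)  # acquired data y
mu = 0.1

# Analysis cost: (1/2) * ||R x - y||_2^2 + mu * H(W x), with H = l1-norm
analysis_cost = 0.5 * np.sum((R @ x - y) ** 2) + mu * np.sum(np.abs(W @ x))

# Synthesis cost: (1/2) * ||R W^T alpha - y||_2^2 + mu * H(alpha),
# where alpha are the coefficients in the sparse domain
alpha = W @ x
synthesis_cost = 0.5 * np.sum((R @ W.T @ alpha - y) ** 2) + mu * np.sum(np.abs(alpha))
```

The analysis form penalizes the transform of the image, while the synthesis form optimizes directly over the sparse coefficients alpha and maps them back through W^T.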

Parameters
  • data_op (instance of Radon2D or Radon3D) – Defines the Radon data operator R.

  • linear_op (instance of a LinearBase subclass) – Defines the linear sparsifying operator W. It must operate on x and provide two functions, op(x) and adj_op(coeff), which implement the direct and adjoint operators.

  • regularizer_op (operator, optional, default None) – Defines the regularization operator for the regularization function H. If None, the regularization is the identity and the optimization reduces to plain gradient descent.

  • gradient_formulation (str, either 'analysis' or 'synthesis', default 'synthesis') – Defines the formulation of the image model, which in turn defines the gradient.

  • grad_class (Gradient class from operators.gradient.) – Points to the gradient class based on the gradient_formulation.

  • init_gradient_op (bool, default True) – Whether the gradient operator is initialized immediately. If set to False, the user must call initialize_gradient_op at the appropriate time before reconstruction.

  • verbose (int, optional default 0) –

    Verbosity levels

    1 => Print basic debug information
    5 => Print all initialization information
    20 => Calculate cost at the end of each iteration
    30 => Print the debug information of operators, if defined by the class

    NOTE: High verbosity levels (>20) are computationally intensive.

  • extra_grad_args (extra keyword arguments for gradient initialization) – Holds the initialization parameters for the gradient obtained from grad_class. Please refer to operators.gradient.base for reference. In the case of the synthesis formulation, linear_op is also passed as an extra argument.
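The gradients that the two formulations produce can be sketched with matrix stand-ins (the real gradient classes live in operators.gradient; the matrices and names below are illustrative only). For the analysis data term f(x) = (1/2)||R x - y||^2 the gradient is R^T (R x - y), which a finite-difference check confirms; the synthesis gradient follows the same pattern with R replaced by R W^T.

```python
import numpy as np

# Hypothetical matrix stand-ins for the operators R and W.
rng = np.random.default_rng(1)
R = rng.standard_normal((5, 3))
W = rng.standard_normal((3, 3))
y = rng.standard_normal(5)
x = rng.standard_normal(3)

# Analysis: f(x) = (1/2)||R x - y||^2  =>  grad f(x) = R^T (R x - y)
grad_analysis = R.T @ (R @ x - y)

# Synthesis: g(alpha) = (1/2)||R W^T alpha - y||^2
#            =>  grad g(alpha) = W R^T (R W^T alpha - y)

# Central finite-difference check of the analytic analysis gradient
eps = 1e-6
fd = np.array([
    (0.5 * np.sum((R @ (x + eps * e) - y) ** 2)
     - 0.5 * np.sum((R @ (x - eps * e) - y) ** 2)) / (2 * eps)
    for e in np.eye(3)
])
```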

initialize_gradient_op(**extra_args)[source]#

Initialize the gradient operator and the cost operator.

Parameters

extra_args – kwargs for GradAnalysis or GradSynthesis

Return type

None

reconstruct(data, optimization_alg='pogm', x_init=None, num_iterations=100, cost_op_kwargs=None, **kwargs)[source]#

Run the chosen optimization algorithm to reconstruct the image from the acquired data.

Parameters
  • data (np.ndarray) – The acquired values in the data domain. This is y in the equations above.

  • optimization_alg (str (optional, default 'pogm')) – Type of optimization algorithm to use, ‘pogm’ | ‘fista’ | ‘condatvu’

  • x_init (np.ndarray, optional, default None) – Initial guess for the reconstructed image. If None, the initialization is zero.

  • num_iterations (int, optional, default 100) – Number of iterations of the algorithm.

  • cost_op_kwargs (dict, optional, default None) – Specifies extra keyword arguments for the cost operator. Please refer to modopt.opt.cost.costObj for details.

  • kwargs (extra keyword arguments for the ModOpt algorithm) – Please refer to the corresponding ModOpt algorithm class for details: https://github.com/CEA-COSMIC/ModOpt/blob/master/modopt/opt/algorithms.py
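To illustrate the simplest path through reconstruct, here is a minimal self-contained sketch of what happens when regularizer_op is None and the problem reduces to gradient descent on (1/2)||R x - y||^2 (see the constructor notes). A hypothetical well-conditioned matrix stands in for the Radon operator; the real class uses ModOpt's optimizers ('pogm', 'fista', 'condatvu') rather than this hand-rolled loop.

```python
import numpy as np

# Hypothetical matrix stand-in for the Radon operator R.
rng = np.random.default_rng(2)
R = rng.standard_normal((20, 4))         # tall => well-conditioned
x_true = rng.standard_normal(4)
y = R @ x_true                           # noiseless acquired data y

x = np.zeros(4)                          # x_init=None -> zero initialization
step = 1.0 / np.linalg.norm(R, 2) ** 2   # 1 / Lipschitz constant of the gradient
for _ in range(300):                     # a few hundred iterations for a tight fit
    x = x - step * (R.T @ (R @ x - y))   # gradient of (1/2)||R x - y||^2

error = np.linalg.norm(x - x_true)
```

With a fixed step of one over the Lipschitz constant of the gradient, the iterates converge to the least-squares solution, which here equals x_true because the data is noiseless and R has full column rank.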