etomo.optimizers.utils.cost#

Cost functions for the optimization algorithms.

class DualGapCost(linear_op, initial_cost=1000000.0, tolerance=0.0001, cost_interval=None, test_range=4, verbose=False, plot_output=None)[source]#

Bases: modopt.opt.cost.costObj

Define the dual-gap cost function.

_calc_cost(x_new, y_new, *args, **kwargs)[source]#

Return the dual-gap cost.

Parameters

x_new (np.ndarray) – new primal solution.

y_new (np.ndarray) – new dual solution.

Returns

norm – the dual-gap cost.

Return type

float

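A minimal NumPy-only sketch of what a dual-gap style cost can look like: the distance between the new primal solution and the adjoint of the linear operator applied to the new dual solution. The `IdentityLinearOp` class and the exact formula are illustrative assumptions, not the actual `DualGapCost` implementation, which inherits its machinery from `modopt.opt.cost.costObj`.

```python
import numpy as np

class IdentityLinearOp:
    """Hypothetical stand-in for a linear operator (e.g. a wavelet
    transform) exposing op/adj_op, mirroring the ModOpt convention."""

    def op(self, x):
        return x

    def adj_op(self, y):
        return y

def dual_gap_cost(linear_op, x_new, y_new):
    """Sketch of a dual-gap cost: distance between the primal solution
    and the adjoint transform of the dual solution (assumed formula)."""
    return float(np.linalg.norm(x_new - linear_op.adj_op(y_new)))

x = np.ones(4)   # new primal solution
y = np.zeros(4)  # new dual solution
print(dual_gap_cost(IdentityLinearOp(), x, y))  # -> 2.0
```

When the dual-gap cost falls below `tolerance` for `test_range` consecutive checks, the optimizer is considered converged.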
class GenericCost(gradient_op, prox_op, linear_op, initial_cost=1000000.0, tolerance=0.0001, cost_interval=None, test_range=4, optimizer_type='forward_backward', verbose=False, plot_output=None)[source]#

Bases: modopt.opt.cost.costObj

Define the generic cost function, combining the cost function of the gradient operator with the cost function of the proximity operator.

_calc_cost(x_new, *args, **kwargs)[source]#

Return the cost.

Parameters

x_new (np.ndarray) – intermediate solution in the optimization problem.

Returns

cost – the value of the cost function defined by the gradient and proximity operators (gradient cost + prox_op cost).

Return type

float

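A minimal sketch of how such a generic cost can be assembled: the total cost is the sum of the gradient operator's cost (data fidelity) and the proximity operator's cost (regularization). The `QuadraticGrad` and `L1Prox` classes are hypothetical illustrations; the assumption is only that both operators expose a `cost(x)` method, as in the ModOpt operator interface.

```python
import numpy as np

class QuadraticGrad:
    """Hypothetical gradient operator whose cost is a quadratic
    data-fidelity term, 0.5 * ||x||^2."""

    def cost(self, x):
        return 0.5 * float(np.sum(x ** 2))

class L1Prox:
    """Hypothetical proximity operator whose cost is an l1 sparsity
    penalty, ||x||_1."""

    def cost(self, x):
        return float(np.sum(np.abs(x)))

def generic_cost(gradient_op, prox_op, x_new):
    """Sketch of the generic cost: gradient cost plus proximity cost."""
    return gradient_op.cost(x_new) + prox_op.cost(x_new)

x = np.array([1.0, -2.0])
print(generic_cost(QuadraticGrad(), L1Prox(), x))  # 2.5 + 3.0 -> 5.5
```

For an optimizer such as forward-backward splitting, this summed value is what is tracked across iterations to test convergence against `tolerance`.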