inquanto.minimizers¶
This module provides access to minimizers compatible with variational experiments.
- class MinimizerBasinHopping(scipy_method=OptimizationMethod.L_BFGS_B_smooth, scipy_minimiser_opts=None, disp=False, callback=None, **basinhopping_kwargs)¶
Bases: MinimizerScipy
A simple wrapper around the basin-hopping minimizer as implemented in SciPy.
The details for the algorithm can be found in the details section of the SciPy docs at https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.basinhopping.html
Inherits from the standard SciPy minimizer, since its arguments can be passed directly to the basin-hopping method.
- Parameters:
scipy_method (OptimizationMethod | str, default: OptimizationMethod.L_BFGS_B_smooth) – The method to use. Popular methods to choose from include "CG", "BFGS", "SLSQP", and "COBYLA". For the L-BFGS-B method, the OptimizationMethod enum is used to conveniently specify the optimization method along with its associated default parameters.
scipy_minimiser_opts (Optional[dict[str, Any]], default: None) – Arguments that fit in the options= component of scipy.optimize.minimize. These override any default settings provided by the method argument.
disp (bool, default: False) – If True, prints minimization history.
callback (Optional[Callable[..., Any]], default: None) – Custom callback function.
**basinhopping_kwargs (Any) – Keyword arguments passed directly to scipy.optimize.basinhopping. Important ones to set are:
niter (int): niter + 1 runs of the local minimizer.
T (float): Temperature for the Metropolis–Hastings accept/reject criterion for higher-energy steps. Should be comparable to the gaps between local minima of the target function.
- property basinhopping_kwargs: dict[str, Any]¶
Recover the basinhopping keyword arguments passed to the minimizer.
- generate_report()¶
Generates a report containing a summary of the minimization.
- property method: str¶
Get the method being used by the optimizer as a string.
- Returns:
The name of the minimization algorithm used by the minimizer.
- minimize(function, initial, gradient=None)¶
Minimize the function provided.
The minimization starts at the parameters provided in the initial argument, and the gradient (if provided) is used to aid the minimization and is evaluated by calling the gradient argument.
- Parameters:
function (Callable[[ndarray | list[float]], float]) – Objective function to minimize.
initial (ndarray | list[float]) – Initial parameters to optimize.
gradient (Optional[Callable[[ndarray | list[float]], ndarray | list[float]]], default: None) – Gradient function to assist minimization.
- Returns:
tuple[float, ndarray] – The value of the function at the minimum and the location of the minimum.
- Raises:
ValueError – If the optimization process fails.
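As a sketch of the wrapped routine (SciPy's own basinhopping, not the InQuanto wrapper itself), the forwarded keyword arguments niter, T, and the local minimizer method behave as follows; the objective and step size here are illustrative choices:

```python
from scipy.optimize import basinhopping

# Tilted double well: local minimum near x = +1, global minimum near x = -1.
def f(x):
    return (x[0] ** 2 - 1.0) ** 2 + 0.2 * x[0]

# Each hop perturbs the current minimum and re-runs a local L-BFGS-B search;
# T is the Metropolis acceptance temperature for uphill hops.
result = basinhopping(
    f,
    x0=[1.5],                                  # start in the wrong well
    niter=50,                                  # niter + 1 local minimizations
    T=0.5,
    stepsize=1.5,                              # hops large enough to cross the barrier
    minimizer_kwargs={"method": "L-BFGS-B"},
    seed=7,
)
```

A plain local minimizer started at x0 = 1.5 would stall in the x = +1 well; the hops let the search escape to the global basin near x = -1.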
- class MinimizerRotosolve(max_iterations=20, tolerance=MINIMIZER_CONVERGENCE_TOLERANCE, disp=False, order_independence=True)¶
Bases: GeneralMinimizer
The Rotosolve minimizer, introduced in Quantum 5, 391 (2021).
This learns the minimum of an estimator with a sinusoidal energy landscape.
- Parameters:
max_iterations (int, default: 20) – Maximum number of iterations allowed before the minimization is terminated.
tolerance (float, default: MINIMIZER_CONVERGENCE_TOLERANCE) – Tolerance for convergence.
disp (bool, default: False) – If True, print information to the screen throughout minimization.
order_independence (bool, default: True) – If False, the minimizer depends on the order of parameters. If True, the minimizer operates independently of parameter order.
- generate_report()¶
Generates a summary of the minimization.
- minimize(function, initial)¶
Minimize the function provided.
Minimization starts at the parameters provided by the initial argument.
- Parameters:
function (Callable[[ndarray | list[float]], float]) – Objective function to minimize.
initial (ndarray | list[float]) – Initial parameters to optimize.
- Returns:
tuple[float, ndarray] – A tuple containing the final value and parameters obtained by the minimization.
- Raises:
RuntimeError – If the optimizer does not converge within the maximum number of iterations.
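The closed-form update behind Rotosolve can be sketched in plain Python (an illustration of the method from the cited paper, not InQuanto's implementation): for a landscape f(θ) = a + b·sin(θ + φ), three evaluations determine the minimizing angle exactly.

```python
import math

def rotosolve_step(f, theta):
    """Return the angle minimizing f, assuming f(t) = a + b*sin(t + phi)."""
    m0 = f(theta)                    # value at the current angle
    mp = f(theta + math.pi / 2)      # the two quarter-period shifts fix b and phi
    mm = f(theta - math.pi / 2)
    return theta - math.pi / 2 - math.atan2(2.0 * m0 - mp - mm, mp - mm)

theta_min = rotosolve_step(math.sin, 0.0)   # exact minimum of sin at -pi/2
```

In a multi-parameter setting the same update is applied to one parameter at a time, sweeping over all parameters each iteration.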
- class MinimizerSGD(learning_rate=0.01, decay_rate=0.05, max_iterations=100, disp=False, callback=None)¶
Bases: GeneralMinimizer
Uses the gradient and geometry of the objective function to accelerate minimization.
Introduced in Quantum 4, 269 (2020).
- Parameters:
learning_rate (float, default: 0.01) – Step size in the direction of descent.
decay_rate (float, default: 0.05) – User-defined decay rate.
max_iterations (int, default: 100) – Maximum number of iterations allowed before the variational loop is terminated.
disp (bool, default: False) – If True, displays minimization history.
callback (Optional[Callable[[ndarray[tuple[Any, ...], dtype[float64]]], None]], default: None) – Custom callback for the minimizer.
- generate_report()¶
Generates a report summarizing the minimization.
Includes the final value, the final parameters and the number of iterations performed.
- minimize(function, initial, gradient=None)¶
Minimize the objective function, starting at the initial parameters provided by the user.
- Parameters:
function (Callable[[ndarray | list[float]], float]) – Objective function to minimize.
initial (ndarray | list[float]) – Initial parameters to optimize.
gradient (Optional[Callable[[ndarray | list[float]], ndarray | list[float]]], default: None) – Gradient function to assist minimization. It must be provided; the keyword argument is optional only to satisfy the interface.
- Returns:
tuple[float,ndarray|list[float]] – A tuple containing the final value and parameters obtained by the minimizer.
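A minimal gradient-descent loop with a decaying learning rate illustrates the role of the learning_rate and decay_rate parameters (a sketch only; the exact update rule and the geometric acceleration of the InQuanto class are not reproduced here):

```python
def sgd_minimize(function, initial, gradient,
                 learning_rate=0.01, decay_rate=0.05, max_iterations=100):
    """Plain gradient descent with a decaying step size (illustrative only)."""
    params = list(initial)
    for k in range(max_iterations):
        step = learning_rate / (1.0 + decay_rate * k)   # decayed learning rate
        grad = gradient(params)
        params = [p - step * g for p, g in zip(params, grad)]
    return function(params), params
```

Larger decay_rate values shrink the step size faster, trading convergence speed for stability late in the run.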
- class MinimizerSPSA(max_iterations=20, tolerance=MINIMIZER_CONVERGENCE_TOLERANCE, disp=False)¶
Bases: GeneralMinimizer
The Simultaneous Perturbation Stochastic Approximation (SPSA) minimizer.
Implementation details are based on https://www.jhuapl.edu/spsa/PDF-SPSA/Spall_Implementation_of_the_Simultaneous.PDF
- Parameters:
max_iterations (int, default: 20) – Maximum number of iterations allowed before the minimization is terminated.
disp (bool, default: False) – If True, print information to the screen throughout minimization.
tolerance (float, default: MINIMIZER_CONVERGENCE_TOLERANCE) – Tolerance for convergence.
- generate_report()¶
Generate a report summarizing the minimization.
Includes the final value and the final parameters.
- minimize(function, initial, alpha=0.602, gamma=0.101, a=0.5, c=0.2, stability_constant=None, perturbation_samples=1, perturbation_samples_init=None, gradient_smoothing=False)¶
Minimize the function provided.
Minimization starts at the parameters provided by the initial argument.
- Parameters:
function (Callable[[ndarray | list[float]], float]) – Objective function to minimize.
initial (ndarray | list[float]) – Initial parameters to minimize.
alpha (float, default: 0.602) – The exponent of the learning-rate power series.
gamma (float, default: 0.101) – The exponent of the perturbation power series.
a (float, default: 0.5) – The numerator of the initial learning-rate magnitude.
c (float, default: 0.2) – The initial perturbation magnitude.
stability_constant (Optional[float], default: None) – The denominator of the initial learning-rate magnitude.
perturbation_samples (int, default: 1) – The number of perturbation samples used for gradient approximation.
perturbation_samples_init (Optional[int], default: None) – The number of perturbation samples used for the initial gradient approximation. If None, this will be the same as perturbation_samples.
gradient_smoothing (bool, default: False) – If True, the gradient approximation is based on previous approximations.
- Returns:
tuple[float,ndarray|list[float]] – The final value and parameters obtained by the minimization.
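A bare-bones SPSA loop following the gain sequences from the linked Spall reference shows how alpha, gamma, a, c, and stability_constant fit together (an illustration, not InQuanto's implementation; the stability-constant default of 10% of the iteration count is an assumption taken from Spall's guidelines):

```python
import random

def spsa_minimize(function, initial, max_iterations=200,
                  alpha=0.602, gamma=0.101, a=0.5, c=0.2,
                  stability_constant=None):
    """Minimal single-sample SPSA loop (illustrative sketch)."""
    rng = random.Random(0)
    big_a = stability_constant if stability_constant is not None else 0.1 * max_iterations
    x = list(initial)
    for k in range(max_iterations):
        ak = a / (k + 1 + big_a) ** alpha      # learning-rate power series
        ck = c / (k + 1) ** gamma              # perturbation power series
        delta = [rng.choice((-1.0, 1.0)) for _ in x]   # Rademacher perturbation
        fp = function([xi + ck * di for xi, di in zip(x, delta)])
        fm = function([xi - ck * di for xi, di in zip(x, delta)])
        # All components of the gradient are estimated from just two evaluations.
        ghat = [(fp - fm) / (2.0 * ck * di) for di in delta]
        x = [xi - ak * gi for xi, gi in zip(x, ghat)]
    return function(x), x
```

Because each step needs only two function evaluations regardless of dimension, SPSA is well suited to noisy, expensive objectives.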
- class MinimizerScipy(method=OptimizationMethod.L_BFGS_B_smooth, options=None, disp=False, callback=None)¶
Bases: GeneralMinimizer
A simple wrapper for SciPy minimization routines.
More minimizer details can be found in the SciPy documentation.
- Parameters:
method (OptimizationMethod | str, default: OptimizationMethod.L_BFGS_B_smooth) – The method to use. Popular methods to choose from include "CG", "BFGS", "SLSQP", and "COBYLA". For the L-BFGS-B method, the OptimizationMethod enum is used to conveniently specify the optimization method along with its associated default parameters.
options (Optional[dict[str, Any]], default: None) – Options used for calibration which are passed through to the SciPy minimization. These override any default settings provided by the method argument.
disp (bool, default: False) – If True, prints minimization history.
callback (Optional[Callable[..., Any]], default: None) – Custom callback function.
- generate_report()¶
Generates a report containing a summary of the minimization.
- property method: str¶
Get the method being used by the optimizer as a string.
- Returns:
The name of the minimization algorithm used by the minimizer.
- minimize(function, initial, gradient=None)¶
Minimize the function provided.
The minimization starts at the parameters provided in the initial argument, and the gradient (if provided) is used to aid the minimization and is evaluated by calling the gradient argument.
- Parameters:
function (Callable[[ndarray | list[float]], float]) – Objective function to minimize.
initial (ndarray | list[float]) – Initial parameters to optimize.
gradient (Optional[Callable[[ndarray | list[float]], ndarray | list[float]]], default: None) – Gradient function to assist minimization.
- Returns:
tuple[float, ndarray] – The value of the function at the minimum and the location of the minimum.
- Raises:
ValueError – If the optimization process fails.
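The underlying SciPy call can be sketched directly (this is plain scipy.optimize.minimize, not the InQuanto wrapper): a method name plus an options dict, where the options shown are the L_BFGS_B_smooth defaults listed under OptimizationMethod below.

```python
import numpy as np
from scipy.optimize import minimize

# Quadratic objective with known minimum at (1, -2).
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

# Method string plus an options dict; options override the method defaults.
res = minimize(f, x0=np.zeros(2), method="L-BFGS-B",
               options={"ftol": 2.2e-09, "eps": 1e-08})
```

res.fun and res.x correspond to the (value, parameters) tuple returned by minimize() above.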
- class NaiveEulerIntegrator(time_eval, disp=False, callback=None, linear_solver=GeneralIntegrator.linear_solver_scipy_linalg)¶
Bases: GeneralIntegrator
A simple Euler integrator to solve time-evolution problems.
- Parameters:
time_eval (ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]) – A monotonically increasing or decreasing sequence of time points at which derivatives are evaluated.
disp (bool, default: False) – If True, print information to the screen throughout the integration.
callback (Optional[Callable[[ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]], float, ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]], Any]], default: None) – An optional function \(f(p, t, x)\), where \(p\) are the parameters of the differential equation, \(t\) is the time at which the derivatives are evaluated, and \(x\) are the derivatives.
linear_solver (Optional[Callable[[ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]], ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]], ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]]], default: GeneralIntegrator.linear_solver_scipy_linalg) – An optional solver for the derivative at time \(t\).
- static linear_solver_scipy_linalg(a, b)¶
A wrapper for the scipy.linalg.solve() method.
Solves the linear equation a @ x == b for the unknown x, where a is a square matrix. More information can be found in the SciPy documentation.
- static linear_solver_scipy_pinvh(a, b)¶
Linear equation solver using scipy.linalg.pinvh().
Solves the linear equation a @ x == b for the unknown x using the (Moore–Penrose) pseudo-inverse of the Hermitian matrix a. More information on scipy.linalg.pinvh() can be found in the SciPy documentation.
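The two solver backends can be compared directly on a small Hermitian system (plain SciPy calls, shown here for illustration):

```python
import numpy as np
from scipy.linalg import solve, pinvh

a = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # square, Hermitian (real symmetric)
b = np.array([3.0, 5.0])

x_direct = solve(a, b)              # direct solve of a @ x == b
x_pinv = pinvh(a) @ b               # pseudo-inverse route for Hermitian a
```

For well-conditioned a the two agree; the pinvh route remains usable when a is singular or near-singular, where solve() would fail.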
- solve(linear_problem, initial, *args, **kwargs)¶
Solve the differential equation.
- Parameters:
linear_problem (Callable[[ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]], float], tuple[ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]], ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]]]) – A function \(f(p, t) \mapsto A, b\) which takes the parameters and time and returns the linear problem (matrix \(A(t)\), vector \(b(t)\) of \(A*x=b\)) at a time \(t\).
initial (ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]) – Initial parameters.
args (Any)
kwargs (Any)
- Returns:
ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]] – Array containing the solution of the differential equation for each time in time_eval, with the initial value in the first row.
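The interface above can be sketched as a forward-Euler loop in plain Python (illustrative only, not InQuanto's implementation): at each time step the derivative is obtained by solving A(t) v = b(t) with the supplied linear solver.

```python
def naive_euler(linear_problem, initial, time_eval, linear_solver):
    """Forward Euler over time_eval; returns one row per time point."""
    xs = [list(initial)]
    for t0, t1 in zip(time_eval, time_eval[1:]):
        a, b = linear_problem(xs[-1], t0)
        v = linear_solver(a, b)                  # derivative dx/dt at t0
        xs.append([x + (t1 - t0) * vi for x, vi in zip(xs[-1], v)])
    return xs                                    # initial value in the first row

# dx/dt = -x posed as a trivial 1x1 linear problem: A = [[1]], b = [-x]
problem = lambda p, t: ([[1.0]], [-p[0]])
solver_1d = lambda a, b: [b[0] / a[0][0]]
ts = [i / 100.0 for i in range(101)]
traj = naive_euler(problem, [1.0], ts, solver_1d)
```

With 100 steps on [0, 1] the Euler estimate of x(1) lands close to the exact value e^(-1), with the O(dt) error expected of the method.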
- class OptimizationMethod(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)¶
Bases: Enum
Enumeration of optimization methods with associated parameters.
Each enum member represents an optimization method along with a dictionary of parameters used in the optimization process. These settings are recommended based on empirical testing for optimal performance.
- L_BFGS_B_smooth¶
method: "L-BFGS-B".
Applicability: This method is more suitable for smoother and more fine-grained optimization.
ftol: Tolerance for termination. The iteration stops when \(\frac{f^k - f^{k+1}}{\max\left(|f^k|, |f^{k+1}|, 1\right)} \leq \text{ftol}\).
eps: Absolute step size used for numerical approximation of the Jacobian via forward differences.
- L_BFGS_B_coarse¶
method: "L-BFGS-B".
Applicability: This method is generally more suitable for coarser and more rapid optimization and may be preferable for optimizing noisy objective functions.
ftol: Tolerance for termination. The iteration stops when \(\frac{f^k - f^{k+1}}{\max\left(|f^k|, |f^{k+1}|, 1\right)} \leq \text{ftol}\).
eps: Absolute step size used for numerical approximation of the Jacobian via forward differences.
- L_BFGS_B_coarse = ('L-BFGS-B', {'eps': 0.1, 'ftol': 0.0001})¶
- L_BFGS_B_smooth = ('L-BFGS-B', {'eps': 1e-08, 'ftol': 2.2e-09})¶
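Each member's value is a (method, options) pair that can be forwarded to SciPy directly; for example, the L_BFGS_B_coarse values listed above give a deliberately coarse search (plain SciPy shown for illustration):

```python
from scipy.optimize import minimize

# Method name and options dict taken from the L_BFGS_B_coarse member above.
method, options = "L-BFGS-B", {"eps": 0.1, "ftol": 0.0001}

# The large eps makes the finite-difference gradient coarse, which can help
# on noisy objectives at the cost of precision in the located minimum.
res = minimize(lambda x: (x[0] - 3.0) ** 2, x0=[0.0],
               method=method, options=options)
```

Swapping in the L_BFGS_B_smooth values (eps=1e-08, ftol=2.2e-09) recovers a fine-grained search on the same objective.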
- class ScipyIVPIntegrator(time_eval, disp=False, callback=None, linear_solver=GeneralIntegrator.linear_solver_scipy_linalg)¶
Bases: GeneralIntegrator
A simple wrapper for SciPy solve_ivp() for linear problems.
More details about solve_ivp() can be found in the SciPy documentation.
- Parameters:
time_eval (ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]) – A monotonically increasing or decreasing sequence of time points at which derivatives are evaluated.
disp (bool, default: False) – If True, print information to the screen throughout the integration.
callback (Optional[Callable[[ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]], float, ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]], Any]], default: None) – An optional function \(f(p, t, x)\), where \(p\) are the parameters of the differential equation, \(t\) is the time at which the derivatives are evaluated, and \(x\) are the derivatives.
linear_solver (Callable[[ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]], ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]], ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]], default: GeneralIntegrator.linear_solver_scipy_linalg) – An optional solver for the derivative at time \(t\).
- static linear_solver_scipy_linalg(a, b)¶
A wrapper for the
scipy.linalg.solve()method.Solves the linear equation
a @ x == bfor the unknownxfor the square a matrix. More information can be found in the SciPy documentation.
- static linear_solver_scipy_pinvh(a, b)¶
Linear equation solver using
scipy.linalg.pinvh().Solves the linear equation
a @ x == bfor the unknownxusing the (Moore-Penrose) pseudo-inverse of a Hermitian matrix,a. More information onscipy.linalg.pinvh()can be found in the SciPy documentation.
- solve(linear_problem, initial, *args, **kwargs)¶
Solve the differential equation.
- Parameters:
linear_problem (Callable[[ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]], float], tuple[ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]], ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]]]) – A function \(f(p, t) \mapsto A, b\) which takes the parameters and time and returns the linear problem (matrix \(A(t)\), vector \(b(t)\) of \(A*x=b\)) at a time \(t\).
initial (ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]) – Initial parameters.
args (Any)
kwargs (Any)
- Returns:
ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]] – Array containing the solution of the differential equation for each time in time_eval, with the initial value in the first row.
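The wrapped routine can be called directly (plain scipy.integrate.solve_ivp, shown for illustration) on the simplest linear problem, dx/dt = -x, whose exact solution is x(t) = e^(-t):

```python
import numpy as np
from scipy.integrate import solve_ivp

# t_eval plays the same role as time_eval above: the reported time points.
t_eval = np.linspace(0.0, 1.0, 11)
sol = solve_ivp(lambda t, x: -x, t_span=(0.0, 1.0), y0=[1.0], t_eval=t_eval)
```

sol.y has one column per time point; sol.y[:, 0] is the initial value, mirroring the first-row convention of the Returns description above.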
- class ScipyODEIntegrator(time_eval, disp=False, callback=None, linear_solver=GeneralIntegrator.linear_solver_scipy_linalg)¶
Bases: GeneralIntegrator
A simple wrapper for SciPy odeint() for linear problems.
More details about odeint() can be found in the SciPy documentation.
- Parameters:
time_eval (ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]) – A monotonically increasing or decreasing sequence of time points at which derivatives are evaluated.
disp (bool, default: False) – If True, print information to the screen throughout the integration.
callback (Optional[Callable[[ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]], float, ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]], Any]], default: None) – An optional function \(f(p, t, x)\), where \(p\) are the parameters of the differential equation, \(t\) is the time at which the derivatives are evaluated, and \(x\) are the derivatives.
linear_solver (Callable[[ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]], ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]], ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]], default: GeneralIntegrator.linear_solver_scipy_linalg) – An optional solver for the derivative at time \(t\).
- static linear_solver_scipy_linalg(a, b)¶
A wrapper for the
scipy.linalg.solve()method.Solves the linear equation
a @ x == bfor the unknownxfor the square a matrix. More information can be found in the SciPy documentation.
- static linear_solver_scipy_pinvh(a, b)¶
Linear equation solver using
scipy.linalg.pinvh().Solves the linear equation
a @ x == bfor the unknownxusing the (Moore-Penrose) pseudo-inverse of a Hermitian matrix,a. More information onscipy.linalg.pinvh()can be found in the SciPy documentation.
- solve(linear_problem, initial, *args, **kwargs)¶
Solve the differential equation.
- Parameters:
linear_problem (Callable[[ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]], float], tuple[ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]], ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]]]) – A function \(f(p, t) \mapsto A, b\) which takes the parameters and time and returns the linear problem (matrix \(A(t)\), vector \(b(t)\) of \(A*x=b\)) at a time \(t\).
initial (ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]]) – Initial parameters.
args (Any)
kwargs (Any)
- Returns:
ndarray[tuple[Any, ...], dtype[TypeVar(_ScalarT, bound=generic)]] – Array containing the solution to the problem for each time in time_eval, with the initial value in the first row.
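The wrapped routine can likewise be called directly (plain scipy.integrate.odeint, shown for illustration) on dx/dt = -x; note that odeint expects the derivative function with the argument order f(y, t), the reverse of solve_ivp:

```python
import numpy as np
from scipy.integrate import odeint

# One output row per time point; the first row is the initial value.
t_eval = np.linspace(0.0, 1.0, 11)
ys = odeint(lambda y, t: -y, [1.0], t_eval)
```

The returned array has shape (len(t_eval), len(y0)), matching the first-row convention described in the Returns section above.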