Classical Minimizers

In the near term, many quantum computational chemistry algorithms use a combination of classical and quantum computational resources. Typically, hybrid quantum-classical approaches to finding Hamiltonian eigenvalues and eigenstates involve minimizing a cost function, which is often the energy. The cost function variables are updated on a classical device, and the value of the cost function at each iteration is evaluated on a quantum device. To facilitate algorithms of this type, a variety of minimization methods are available in the InQuanto package. Each minimizer provides a minimize() method, which can be called by the user or by algorithm objects as part of the workflow. In this section, we implement a bespoke state-vector VQE routine with gradients to showcase a few of the available minimizers.
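As a schematic illustration of this loop, the sketch below uses a SciPy optimizer as the classical half and a simple mocked energy function in place of the quantum evaluation. The cosine model is purely illustrative and does not use the InQuanto API:

```python
import numpy as np
from scipy.optimize import minimize

# Mocked "quantum" evaluation of the cost function: in a real hybrid workflow,
# this call would execute a parameterized circuit on a quantum device.
def mocked_energy(parameters):
    return float(np.sum(1.0 - np.cos(parameters - 0.3)))

# Classical half of the loop: a gradient-free optimizer proposes new parameters
# and queries the (mocked) quantum evaluation at each iteration.
result = minimize(mocked_energy, x0=np.zeros(2), method="Nelder-Mead")
print(result.fun)  # close to 0, the minimum at parameters = [0.3, 0.3]
```

InQuanto's minimizers, algorithms, and protocols package this same pattern behind the minimize() method.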

MinimizerScipy

The simplest minimizer option available in InQuanto is the MinimizerScipy class, which wraps the SciPy suite of minimizers into an InQuanto object. Below, we implement and optimize a VQE objective function using the conjugate gradient (CG) method to demonstrate the functionality of the minimizer classes, and the additional control that familiarity with these objects can provide. First, we load a Hamiltonian from the express module:

from inquanto.express import load_h5
from inquanto.spaces import FermionSpace
from inquanto.mappings import QubitMappingJordanWigner

h2_sto3g = load_h5("h2_sto3g.h5")
hamiltonian = h2_sto3g["hamiltonian_operator"]
space = FermionSpace(4)
state = space.generate_occupation_state_from_list([1, 1, 0, 0])
qubit_hamiltonian = QubitMappingJordanWigner().operator_map(hamiltonian)

We now choose an ansatz (UCCSD), and prepare functions to compute the energy and energy gradient, which will be the VQE objective and VQE gradient functions respectively:

from inquanto.ansatzes import FermionSpaceAnsatzUCCSD
from inquanto.computables import ExpectationValue, ExpectationValueDerivative
from inquanto.core import dict_to_vector
from inquanto.protocols import SparseStatevectorProtocol
from pytket.extensions.qiskit import AerStateBackend
import numpy as np

ansatz = FermionSpaceAnsatzUCCSD(space, state)
parameters = ansatz.state_symbols.construct_zeros()
sv_protocol = SparseStatevectorProtocol(AerStateBackend())

ev = ExpectationValue(kernel=qubit_hamiltonian, state=ansatz)
evg = ExpectationValueDerivative(ansatz, qubit_hamiltonian, ansatz.free_symbols_ordered())

def vqe_objective(variables):
    parameters = ansatz.state_symbols.construct_from_array(variables)
    sv_evaluator = sv_protocol.get_evaluator(parameters)
    return ev.evaluate(sv_evaluator).real

def vqe_gradient(variables):
    parameters = ansatz.state_symbols.construct_from_array(variables)
    sv_evaluator = sv_protocol.get_evaluator(parameters)
    gradient_dict = evg.evaluate(sv_evaluator)
    return dict_to_vector(ansatz.free_symbols_ordered(), gradient_dict)

print("VQE energy with parameters at [0, 0, 0]:", vqe_objective(np.zeros(3)))
print("Gradients of parameters at [0, 0, 0]:", vqe_gradient(np.zeros(3)))
VQE energy with parameters at [0, 0, 0]: -1.117505884204331
Gradients of parameters at [0, 0, 0]: [0.359 0.    0.   ]

Note that the minimizers work with NumPy arrays. The parameter objects compatible with the computables can be constructed from such arrays with the construct_from_array() method, as shown above.
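The conversion in the other direction, from a symbol-keyed dictionary (as returned by a derivative computable) to the ordered NumPy vector a minimizer consumes, follows a simple pattern. The sketch below mimics it with plain strings standing in for ansatz symbols; dict_to_ordered_vector is a hypothetical helper shown only to illustrate what dict_to_vector does:

```python
import numpy as np

# Hypothetical stand-ins: a fixed symbol ordering and a gradient dictionary
# keyed by symbol, as a derivative computable might return.
ordered_symbols = ["theta0", "theta1", "theta2"]
gradient_dict = {"theta1": 0.0, "theta0": 0.359, "theta2": 0.0}

def dict_to_ordered_vector(symbols, values):
    # Build the NumPy vector the minimizer consumes, respecting symbol order.
    return np.array([values[s] for s in symbols])

print(dict_to_ordered_vector(ordered_symbols, gradient_dict))  # [0.359 0.    0.   ]
```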

We now initialize and execute the conjugate gradient minimizer. The underlying SciPy object can be configured by passing an options dict to the MinimizerScipy constructor, according to the solver-specific guidance in the SciPy user manual. With this, one may define, for example, a maximum number of iterations. With the minimize() method, we can omit the gradient argument, in which case the gradient is computed numerically, or pass the gradient function defined above:

from inquanto.minimizers import MinimizerScipy

minimizer = MinimizerScipy("CG", disp=True)
minimizer.minimize(function=vqe_objective, initial=np.zeros(3))
Optimization terminated successfully.
         Current function value: -1.136847
         Iterations: 2
         Function evaluations: 20
         Gradient evaluations: 5
(-1.136846575472053, array([-0.107,  0.   ,  0.   ]))
minimizer = MinimizerScipy("CG", disp=True)
fmin, loc = minimizer.minimize(function=vqe_objective, initial=np.zeros(3), gradient=vqe_gradient)
print("Objective function minimum is {}, located at {}".format(fmin, loc))
Optimization terminated successfully.
         Current function value: -1.136847
         Iterations: 2
         Function evaluations: 5
         Gradient evaluations: 5
Objective function minimum is -1.1368465754720536, located at [-0.107  0.     0.   ]

As one might expect, we observe that the optimizer converges to the same value in both cases, but requires fewer evaluations of the objective function when gradient information is provided.
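The effect of supplying a gradient can be reproduced with SciPy directly. The quadratic below is a toy stand-in for the VQE objective and its gradient, not part of InQuanto:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):  # toy objective standing in for vqe_objective
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def grad(x):  # its analytic gradient, standing in for vqe_gradient
    return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])

res_fd = minimize(f, np.zeros(2), method="CG")            # gradient via finite differences
res_an = minimize(f, np.zeros(2), method="CG", jac=grad)  # analytic gradient supplied

# Both runs reach the same minimum, but the analytic-gradient run needs far
# fewer objective evaluations, since none are spent on finite differencing.
print(res_fd.nfev, res_an.nfev)
```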

MinimizerRotosolve

The Rotosolve minimizer [54] is a gradient-free optimizer designed for the minimization of VQE-like objective functions, and is available in the MinimizerRotosolve class. With this minimizer, one may set a maximum number of iterations and a convergence threshold on initialization.

A short example using Rotosolve with an algorithm object is shown below.

from inquanto.algorithms import AlgorithmVQE
from inquanto.minimizers import MinimizerRotosolve

minimizer = MinimizerRotosolve(max_iterations=10, tolerance=1e-6, disp=True)

vqe = AlgorithmVQE(
    objective_expression=ev,
    minimizer=minimizer,
    initial_parameters=ansatz.state_symbols.construct_zeros()
)
vqe.build(protocol_objective=SparseStatevectorProtocol(AerStateBackend()))
vqe.run()
print("VQE Energy:", vqe.generate_report()["final_value"])
# TIMER BLOCK-0 BEGINS AT 2024-11-20 16:02:28.231623

ROTOSOLVER – A gradient-free optimizer for parametric circuits

Iteration 1
fun = -1.1368465754720538 	 variance = 0.011499023526666223 	 p-norm = 0.10723350002059162 

Iteration 2
fun = -1.1368465754720538 	 variance = 4.930380657631324e-32 	 p-norm = 0.10723350002059184 

Optimizer Converged
nit = 2 	 nfun = 19
final fun = -1.1368465754720538 	 final variance = 4.930380657631324e-32 	 final p-norm = 0.10723350002059184

# TIMER BLOCK-0 ENDS - DURATION (s):  0.4914076 [0:00:00.491408]
VQE Energy: -1.1368465754720538
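Rotosolve exploits the fact that, for typical parameterized circuits, the energy is sinusoidal in each parameter, so each coordinate can be minimized in closed form from three evaluations. The following minimal sketch applies one such sweep to a separable sinusoidal toy cost; it is illustrative only, not the InQuanto implementation:

```python
import numpy as np

# Toy cost of the form sum_i a_i + b_i * cos(theta_i - c_i); the coefficients
# are illustrative, not taken from the VQE example above.
def cost(theta):
    return -1.0 + 0.3 * np.cos(theta[0] - 0.5) + 0.1 * np.cos(theta[1] + 1.2)

def rotosolve_step(cost, theta):
    theta = theta.copy()
    for i in range(len(theta)):
        phi = theta[i]
        f0 = cost(theta)                 # evaluation at the current angle
        theta[i] = phi + np.pi / 2
        fp = cost(theta)                 # evaluation shifted by +pi/2
        theta[i] = phi - np.pi / 2
        fm = cost(theta)                 # evaluation shifted by -pi/2
        # Closed-form minimizer of a + b*cos(theta_i - c) along coordinate i
        theta[i] = phi - np.pi / 2 - np.arctan2(2.0 * f0 - fp - fm, fp - fm)
    return theta

theta = rotosolve_step(cost, np.zeros(2))
print(cost(theta))  # reaches the separable minimum -1.4 after one sweep
```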

MinimizerSGD

The Stochastic Gradient Descent (SGD) approach to function optimization is available in the MinimizerSGD class. This minimizer takes bespoke input arguments for the learning_rate and decay_rate parameters, as defined in [55].

A short example using MinimizerSGD is shown below.

from inquanto.minimizers import MinimizerSGD

minimizer = MinimizerSGD(learning_rate=0.25, decay_rate=0.5, max_iterations=10, disp=True)

vqe = AlgorithmVQE(
    objective_expression=ev,
    minimizer=minimizer,
    initial_parameters=ansatz.state_symbols.construct_zeros(),
    gradient_expression=evg
)
vqe.build(
   protocol_objective=SparseStatevectorProtocol(AerStateBackend()),
   protocol_gradient=SparseStatevectorProtocol(AerStateBackend()),
)
vqe.run()
print("VQE Energy:", vqe.generate_report()["final_value"])
# TIMER BLOCK-1 BEGINS AT 2024-11-20 16:02:28.730738

Optimizer Stochastic Gradient Descent

Iteration 0

                fun = -1.117505884204331 	
                p-norm = 0.0 	
                g-norm = 0.35933735912603115 	
                
Iteration 1

                fun = -1.1321535146012698 	
                p-norm = 0.054487281372526786 	
                g-norm = 0.1777836552387544 	
                
Iteration 2

                fun = -1.135723964837147 	
                p-norm = 0.08144509079704805 	
                g-norm = 0.08704389270781243 	
                
Iteration 3

                fun = -1.136578976183475 	
                p-norm = 0.09464378821405434 	
                g-norm = 0.04250854257884057 	
                
Iteration 4

                fun = -1.1367828405791476 	
                p-norm = 0.10108947180749601 	
                g-norm = 0.02074667912183717 	
                
Iteration 5

                fun = -1.136831398578587 	
                p-norm = 0.10423534605114956 	
                g-norm = 0.010124128478005728 	
                
Iteration 6

                fun = -1.1368429616408455 	
                p-norm = 0.10577049463234463 	
                g-norm = 0.004940280683157272 	
                
Iteration 7

                fun = -1.1368457149778968 	
                p-norm = 0.10651960255782486 	
                g-norm = 0.00241069356793977 	
                
Iteration 8

                fun = -1.1368463705791971 	
                p-norm = 0.10688514244785674 	
                g-norm = 0.0011763364084198813 	
                
Iteration 9

                fun = -1.1368465266849062 	
                p-norm = 0.10706351347231746 	
                g-norm = 0.0005740118592311994 	
                
Optimizer Converged
nit = 9 	 nfun = 9 	 njac = 9

            final fun = -1.1368465266849062 	
            final p-norm = 0.10715055242023305 	
            final g-norm = 0.0005740118592311994 	
            
# TIMER BLOCK-1 ENDS - DURATION (s):  0.7768204 [0:00:00.776820]
VQE Energy: -1.1368465266849062
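The underlying update rule can be sketched in a few lines of NumPy. The decay schedule below, lr / (1 + decay_rate * k), is one common convention assumed purely for illustration (it need not match MinimizerSGD's exact schedule), and an exact gradient of a toy quadratic stands in for a stochastic estimate:

```python
import numpy as np

def grad(x):
    # Gradient of the toy quadratic |x - m|^2 with minimizer m = [1.0, -0.5]
    return 2.0 * (x - np.array([1.0, -0.5]))

x = np.zeros(2)
learning_rate, decay_rate = 0.25, 0.5
for k in range(10):
    lr = learning_rate / (1.0 + decay_rate * k)  # assumed decay schedule
    x = x - lr * grad(x)                         # gradient-descent update

print(x)  # approaches the minimizer [1.0, -0.5]
```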

MinimizerSPSA

Simultaneous Perturbation Stochastic Approximation (SPSA) [56] is available in the MinimizerSPSA class. This minimizer is especially efficient for high-dimensional problems: SPSA approximates the gradient using only two objective function measurements per iteration, regardless of the number of parameters, and is robust to noise in those measurements.

A short example using MinimizerSPSA is shown below.

from inquanto.minimizers import MinimizerSPSA

minimizer = MinimizerSPSA(max_iterations=5, disp=True)

vqe = AlgorithmVQE(
    objective_expression=ev,
    minimizer=minimizer,
    initial_parameters=ansatz.state_symbols.construct_zeros()
)
vqe.build(protocol_objective=SparseStatevectorProtocol(AerStateBackend()))
vqe.run()

print("VQE Energy:", vqe.generate_report()["final_value"])
# TIMER BLOCK-2 BEGINS AT 2024-11-20 16:02:29.514599
Starting SPSA minimization.
Result at iteration 0: [0.    0.    0.079].
Result at iteration 1: [0.    0.    0.079].
Result at iteration 2: [0.    0.    0.079].
Result at iteration 3: [-0.119 -0.119 -0.04 ].
Result at iteration 4: [-0.116 -0.119 -0.037].
Finishing SPSA minimization.
# TIMER BLOCK-2 ENDS - DURATION (s):  0.6721244 [0:00:00.672124]
VQE Energy: -1.1225968383218086
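The two-measurement gradient estimate at the heart of SPSA can be sketched as follows. The toy objective and the gain sequences are illustrative choices (the exponents 0.602 and 0.101 are values commonly recommended in the SPSA literature), not MinimizerSPSA's internals:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy objective standing in for a (possibly noisy) VQE energy
    return float(np.sum((x - 0.3) ** 2))

x = np.zeros(3)
for k in range(100):
    a_k = 0.2 / (k + 1) ** 0.602   # decaying step size
    c_k = 0.1 / (k + 1) ** 0.101   # decaying perturbation size
    # Perturb ALL parameters simultaneously along a random +/-1 direction,
    # so the gradient estimate costs two evaluations in any dimension.
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    g_hat = (f(x + c_k * delta) - f(x - c_k * delta)) / (2.0 * c_k) * (1.0 / delta)
    x = x - a_k * g_hat

print(x)  # approaches the minimizer at 0.3 in each coordinate
```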