Solving real-world problems on a quantum computer: academic vs industry-relevant approaches

In quantum computing, we are currently experiencing a shift from an academic path to solving problems on quantum computers towards a more industry-focused approach. This means we are no longer searching for problems which fit currently available quantum computers, but rather analyzing industry-relevant problems and building quantum computers accordingly. In many of the real-world problems we encounter in science, engineering, and business, we need to find, among all the possible choices we have, the best one according to some metric.
For example:
- How can we schedule a set of tasks, some of which can be executed concurrently, and some of which must be executed in a specific order, so that the total time needed to complete all of them is minimized?
- How can we build a portfolio of assets which maximizes our expected returns at a given level of exposure to risks?
- What’s the maximal fraction of a volume of a given shape which can be filled with a given set of solid objects?
Notice that for each of the problems above there are constraints that solutions need to satisfy, and that each of them (implicitly) provides a so-called “cost function”, which tells us, for any candidate solution, how good it is. These kinds of problems, where one wants to maximize/minimize a cost function while satisfying constraints, are called optimization problems.
Although the study of optimization problems with classical computers is a mature field, many problems of practical interest fall into the class of NP-hard problems; this roughly means that as their “size” increases they quickly become practically impossible to solve exactly, even with supercomputers. By exploiting phenomena which have no classical counterpart, quantum computers could overcome these barriers and solve some optimization problems of currently intractable size.
In order to solve a constrained binary optimization problem with a quantum computer, one needs to transform the cost function of the problem, together with its constraints, into a Hamiltonian operator acting on the quantum computer’s qubits. This procedure is in general not straightforward: one is not free to include arbitrary interaction terms between qubits in the Hamiltonian, since for the foreseeable future quantum computers are expected to have only limited connectivity. Moreover, qubits are typically coupled only via two-body interactions: what if the cost function involves products of three (or more) binary variables?
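To make this mapping concrete, here is a minimal sketch in Python (the cost function and all its coefficients are invented for illustration): a cost function over binary variables becomes a diagonal Hamiltonian whose ground state encodes the optimal assignment.

```python
import numpy as np
from itertools import product

# Invented cost function over three binary variables:
# C(x) = x0*x1 + 2*x1*x2 - x0
def cost(x):
    x0, x1, x2 = x
    return x0 * x1 + 2 * x1 * x2 - x0

# Diagonal Hamiltonian encoding C: each computational-basis state |x>
# is an eigenstate with energy C(x), so the ground state of H encodes
# the optimal assignment of the binary variables.
n = 3
energies = np.array([cost(bits) for bits in product((0, 1), repeat=n)])
H = np.diag(energies)

best = min(product((0, 1), repeat=n), key=cost)
print("optimal assignment:", best, "with cost", cost(best))
```

A diagonal Hamiltonian like this is of course trivial to minimize classically by enumeration; the sketch only illustrates what “encoding a cost function into a Hamiltonian” means, and real instances are far too large for such a table to fit in memory.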
Hard constraints pose a challenge too, as they cannot in general be implemented directly on a typical quantum computer. A common way of dealing with them is to “soften” them, i.e., to transform them into a term in the cost function which heavily penalizes constraint-violating solutions. This has the downside that one must introduce into the Hamiltonian some couplings whose associated energies are much larger than those stemming from the interactions in the original problem, which causes further engineering challenges.
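Continuing the toy example above, here is a sketch of the softening step: a hard constraint (invented for this sketch, x0 + x1 = 1) becomes a quadratic penalty term, and the penalty weight P, an assumption of this sketch, must be chosen large enough to dominate the original couplings.

```python
from itertools import product

# Same invented cost function as before.
def cost(x):
    x0, x1, x2 = x
    return x0 * x1 + 2 * x1 * x2 - x0

# Hard constraint (invented for this sketch): x0 + x1 == 1.
# Softened into a quadratic penalty which vanishes exactly on
# constraint-satisfying assignments and is positive otherwise.
P = 10.0  # penalty weight; must dominate the original couplings

def penalized_cost(x):
    x0, x1, _ = x
    return cost(x) + P * (x0 + x1 - 1) ** 2

best = min(product((0, 1), repeat=3), key=penalized_cost)
print("best assignment under the softened constraint:", best)
```

Note how P is an order of magnitude larger than the couplings of the original problem: this separation of energy scales is precisely what causes the engineering challenges mentioned above.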
A state-of-the-art method to embed constrained binary optimization problems on a quantum computer is to first transform them into QUBO (Quadratic Unconstrained Binary Optimization) problems. This entails softening the constraints, decomposing many-body interactions into two-body ones (an operation which itself introduces additional constraints to enforce the logical consistency of the procedure), and finally minor-embedding the result into a graph consistent with the target quantum computer’s connectivity.
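The decomposition of many-body interactions can be illustrated with a standard quadratization trick (often attributed to Rosenberg): a cubic term x0·x1·x2 is replaced by y·x2, where an auxiliary binary variable y is forced to equal x0·x1 by an extra penalty, which is exactly the kind of additional consistency constraint mentioned above. A toy check, with an invented penalty weight:

```python
from itertools import product

# Rosenberg-style quadratization: force an auxiliary binary variable y
# to equal x0*x1. The penalty below is zero iff y == x0*x1, positive otherwise.
def consistency_penalty(x0, x1, y):
    return x0 * x1 - 2 * (x0 + x1) * y + 3 * y

P = 5.0  # invented penalty weight; must exceed the couplings it competes with

# The cubic term x0*x1*x2 is replaced by the purely quadratic y*x2.
def quadratic_surrogate(x0, x1, x2, y):
    return y * x2 + P * consistency_penalty(x0, x1, y)

# Sanity check: minimizing over y reproduces the cubic term for every input.
for x0, x1, x2 in product((0, 1), repeat=3):
    assert min(quadratic_surrogate(x0, x1, x2, y) for y in (0, 1)) == x0 * x1 * x2
print("cubic term reproduced by a quadratic function plus one auxiliary variable")
```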
However, converting a problem to the QUBO form is, in general, not a necessary step. At ParityQC we realized that higher-order interactions lie at the core of many real-world problems, and that at least some kinds of constraints are overwhelmingly common; we therefore designed an architecture which can attack what we call HCBO (Higher-order Constrained Binary Optimization) problems directly. In the Parity Architecture, many-body interactions are taken into account natively, as each physical qubit does not correspond to a variable in the original problem but to an interaction between variables. Commonly found kinds of constraints, such as those which set the product of an arbitrary number of binary variables equal to 0 or 1, or those which force the sum of a set of binary variables to equal a constant, can be enforced in simple ways. We think our approach, with scalability and real-world use cases held in the highest regard since its inception, has an edge over current state-of-the-art alternatives.
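To give a flavour of the parity idea, here is a deliberately simplified sketch (it only illustrates the underlying concept, not the full compilation procedure of the Parity Architecture): for a small all-to-all spin problem, each parity qubit carries the value of a product s_i·s_j, and the product of parity qubits around any closed cycle must equal +1, since s_i² = 1.

```python
from itertools import combinations, product

# Schematic parity mapping for a 4-spin all-to-all problem: one parity
# qubit per pairwise interaction, holding the value s_i * s_j.
n = 4
pairs = list(combinations(range(n), 2))

def parity_config(spins):
    """Map a spin configuration to the values of its parity qubits."""
    return {(i, j): spins[i] * spins[j] for i, j in pairs}

# Cycle constraint on spins 0, 1, 2: the product of the three parity
# qubits around the closed cycle equals +1, because each s_i appears
# twice and s_i**2 == 1. Analogous constraints hold for every cycle.
for spins in product((-1, 1), repeat=n):
    p = parity_config(spins)
    assert p[(0, 1)] * p[(1, 2)] * p[(0, 2)] == 1
print("all spin configurations satisfy the cycle constraint")
```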
Once a suitable Hamiltonian has been constructed, its ground state can be found with different methods, depending on the properties of the quantum computer being used. Quantum annealers, for instance, start by implementing the interactions of a Hamiltonian whose ground state is known, and prepare the initial state to be the one which minimizes this initial Hamiltonian. They then transform the initial Hamiltonian into the one which encodes the original problem: if this transformation is performed slowly enough and in a controlled way, the adiabatic theorem guarantees that the final state will be the ground state of the target Hamiltonian. All that is left then is to measure the final state and read off a solution to the original problem!
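A rough numerical caricature of this process (real annealers are analog devices and are not operated this way; the 2-qubit problem and all numbers below are made up) can be written in a few lines: sweep from a transverse-field Hamiltonian, whose ground state is the uniform superposition, to a diagonal problem Hamiltonian, and check that the final state concentrates on the optimal bitstring.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
id2 = np.eye(2, dtype=complex)

# Initial Hamiltonian -sum_i sigma_x(i): its ground state is the
# uniform superposition over all computational-basis states.
H_init = -(np.kron(sx, id2) + np.kron(id2, sx))

# Invented diagonal problem Hamiltonian; the optimal bitstring is |01>.
H_problem = np.diag([1.0, -1.0, 2.0, 0.5]).astype(complex)

# Start in the ground state of H_init.
psi = np.full(4, 0.5, dtype=complex)

# Sweep the interpolation parameter s from 0 to 1; each small step is
# exact evolution under the instantaneous (piecewise-constant) Hamiltonian.
steps, dt = 5000, 0.01
for k in range(steps):
    s = (k + 1) / steps
    H = (1 - s) * H_init + s * H_problem
    eigvals, eigvecs = np.linalg.eigh(H)
    psi = eigvecs @ (np.exp(-1j * eigvals * dt) * (eigvecs.conj().T @ psi))

probs = np.abs(psi) ** 2
print("probability of measuring the optimal bitstring |01>:", probs[1])
```

With a sweep this slow relative to the spectral gap, the final probability of the optimal bitstring comes out close to 1; speeding up the sweep degrades it, which is the adiabatic theorem at work.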
Gate-based quantum computers use procedures different from those implemented on an annealer, but these can be employed towards the same goal. We think each approach has its own particular advantages and disadvantages, but both are viable ways to solve hard optimization problems!
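As one representative gate-based approach, here is a single-layer QAOA-style sketch in plain linear algebra, reusing the same toy cost as above (the angle ranges and grid resolution are arbitrary choices of this sketch): a cost-dependent phase layer, a transverse-field mixer layer, and a classical grid search over the two angles. One layer typically only biases the measurement distribution towards low-cost bitstrings rather than solving the problem outright.

```python
import numpy as np

costs = np.array([1.0, -1.0, 2.0, 0.5])  # invented diagonal cost; optimum is |01>
sx = np.array([[0, 1], [1, 0]], dtype=complex)
id2 = np.eye(2, dtype=complex)

# Mixer Hamiltonian B = sum_i sigma_x(i), diagonalized once up front.
B = np.kron(sx, id2) + np.kron(id2, sx)
eigvals_B, eigvecs_B = np.linalg.eigh(B)

plus = np.full(4, 0.5, dtype=complex)  # |+>|+> initial state

def qaoa_state(gamma, beta):
    psi = np.exp(-1j * gamma * costs) * plus  # phase layer from the cost
    # Mixer layer exp(-i * beta * B), applied via the eigendecomposition.
    return eigvecs_B @ (np.exp(-1j * beta * eigvals_B) * (eigvecs_B.conj().T @ psi))

def expected_cost(gamma, beta):
    psi = qaoa_state(gamma, beta)
    return float(np.real(np.vdot(psi, costs * psi)))

# Classical outer loop: here a simple grid search over the two angles.
angles = np.linspace(0, np.pi, 40)
gamma, beta = min(((g, b) for g in angles for b in angles),
                  key=lambda gb: expected_cost(*gb))

psi = qaoa_state(gamma, beta)
print("probability of the optimal bitstring |01>:", abs(psi[1]) ** 2)
```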
–
Alessandro Angioi – Quantum Software Engineer at ParityQC