ORIGINAL RESEARCH article

Front. Phys., 27 September 2024

Sec. Quantum Engineering and Technology

Volume 12 - 2024 | https://doi.org/10.3389/fphy.2024.1443977

Improving the convergence of an iterative algorithm for solving arbitrary linear equation systems using classical or quantum binary optimization

  • 1. Centro Brasileiro de Pesquisas Físicas, Rio de Janeiro, RJ, Brazil

  • 2. Petróleo Brasileiro S.A., Centro de Pesquisas Leopoldo Miguez de Mello, Rio de Janeiro, Brazil

Abstract

Recent advancements in quantum computing and quantum-inspired algorithms have sparked renewed interest in binary optimization. These hardware and software innovations promise to revolutionize solution times for complex problems. In this work, we propose a novel method for solving linear systems. Our approach leverages binary optimization, making it particularly well-suited for problems with large condition numbers. We transform the linear system into a binary optimization problem, drawing inspiration from the geometry of the original problem and resembling the conjugate gradient method. This approach employs conjugate directions that significantly accelerate the algorithm’s convergence rate. Furthermore, we demonstrate that by leveraging partial knowledge of the problem’s intrinsic geometry, we can decompose the original problem into smaller, independent sub-problems. These sub-problems can be efficiently tackled using either quantum or classical solvers. Although determining the problem’s geometry introduces some additional computational cost, this investment is outweighed by the substantial performance gains compared to existing methods.

1 Introduction

Quadratic unconstrained binary optimization (QUBO) problems [1] are equivalent formulations of specific types of combinatorial optimization problems, in which one (or a few) particular configuration is sought within a huge but finite space of possible configurations. This configuration maximizes the gain (or minimizes the cost) of a real function defined on the total space of possible configurations. In QUBO problems, each configuration is represented by a binary N-dimensional vector q, and the function to be optimized is constructed using a symmetric matrix Q. For each possible configuration, we have

E(q) = qᵀ Q q.     (1)

The sought optimal solution q* satisfies E(q) − E(q*) ≤ ε, where ε is a sufficiently small positive number. It is often easier to build a system configured near the optimal solution than one configured exactly at the optimal solution.
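To make the definition concrete, a QUBO instance can be minimized by exhaustive search when the configuration space is small. The following sketch (with an illustrative matrix of our own choosing, not taken from the paper) enumerates all 2³ configurations:

```python
import itertools

import numpy as np

# Illustrative symmetric QUBO matrix (not from the paper).
Q = np.array([[-3.0,  2.0,  0.0],
              [ 2.0, -1.0,  1.0],
              [ 0.0,  1.0, -2.0]])

def qubo_energy(Q, q):
    """E(q) = q^T Q q for a binary configuration q."""
    return q @ Q @ q

# Exhaustive search over all 2^3 binary configurations.
best_q, best_E = None, np.inf
for bits in itertools.product([0, 1], repeat=Q.shape[0]):
    q = np.array(bits, dtype=float)
    E = qubo_energy(Q, q)
    if E < best_E:
        best_q, best_E = q, E
```

For this matrix, the minimizer is q = (1, 0, 1) with E = −5; real applications replace the exhaustive loop with an annealing-based solver.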

The QUBO problem is NP-hard and is equivalent to finding the ground state of a general Ising model with arbitrary values and numbers of interactions, commonly used in condensed matter physics [2, 3]. The ground state of the related quantum Hamiltonian encodes the optimal configuration and can be obtained from a general initial Hamiltonian using a quantum evolution protocol. This is the essence of quantum computation by quantum annealing [4], where the optimal solution is encoded in a physical Ising quantum ground state. Hybrid quantum–classical methods, digital-analog algorithms, and classical computing inspired by quantum computation are promising Ising solvers (see [5]).

Essential classes of problems, not necessarily combinatorial, can be handled using QUBO solvers. For example, the problem of solving systems of linear equations has been previously studied in the context of quantum annealing in [6–9]. The complexity and usefulness of the approach were discussed in [10, 11]. From those works, we can say that quantum annealing is promising for solving linear equations even for ill-conditioned systems and when the number of rows far exceeds the number of columns.

In another context, QUBO formulation protocols were recently developed to train machine learning models with the promising expectation that quantum annealing could solve this type of hard problem more efficiently [12]. Machine learning algorithms and specific quantum-inspired formulations of these strategies in the quantum circuit approach have grown substantially in recent years; see, for example, [13–18] and references therein. At the core of the machine learning approach, linear algebra is a fundamental tool used in these formulations. Therefore, the study of QUBO formulations of linear problems and their enhancement can be of interest for applying quantum annealing in machine learning approaches. Another recent example is the study of simplified binary models of inverse problems where the QUBO matrix represents a quadratic approximation of the forward non-linear problem (see [19]). It is interesting to note that in classical inverse problems, solving linear equation systems is an essential step in the whole process.

In this work, we propose a new method to enhance the convergence rate of an iterative algorithm used to solve a system of equations with an arbitrary condition number. At each stage, the algorithm maps the linear problem to a QUBO problem and finds appropriate configurations using a QUBO solver, either classical or quantum. In previous implementations, the feasibility of the method was linked to the specific binary approximation used. Generally, as the condition number increases, more bits are required, which increases the dimension of the QUBO problem. Our contribution shows that total or partial knowledge of the intrinsic geometry of the problem helps reformulate the QUBO problem, stabilizing the convergence to the solution and, therefore, improving the performance of the algorithm. In the case of full knowledge of the geometry, we show that the associated QUBO problem is trivial. If the geometry is only partially known, we show that the resulting QUBO problems are small and, in principle, solvable with a low binary approximation.

The remainder of this paper is organized as follows: Section 2 briefly describes how to convert the problem of solving a system of linear equations into a QUBO problem. The conventional algorithm for this problem is presented and illustrated with examples. Subsequently, we analyze the geometrical structure of the linear problem and its relation with the function E [20]; from it, a new set of QUBO configurations is proposed, taking into account the intrinsic geometry in a new lattice configuration. In Section 4.2, we implement these ideas in a new algorithm using a different orthogonality notion (which we call A-orthogonality) related to the well-known conjugate gradient method. Using the matrix A, we find a new set of vectors that characterize the geometry of the problem. We compare the new algorithm with the previous version revised in Section 2. Section 4.3 uses the tools of the previous section to construct a different set of vectors grouped into subsets that are mutually A-orthogonal. This construction allows the decomposition of the original QUBO problem into independent QUBO sub-problems of smaller dimensions. Each sub-problem can be addressed using quantum or classical QUBO solvers, allowing arbitrary linear equation systems to be resolved. In Section 5, we present the final considerations.

2 System of linear equations

2.1 Writing a system of equations as a QUBO problem

Solving a system of N linear equations in N variables is identical to finding an N-dimensional vector x that satisfies

A x = b,

where A is the N × N matrix constructed with the coefficients of the linear equations and b is the vector formed with the inhomogeneous coefficients. If the determinant det A ≠ 0, then there exists one unique vector x = A⁻¹ b that solves the linear system. We can transform the linear problem of N real variables into a binary optimization problem using a binary R-approximation of the components of one vector x:

Define the vector v = (2^−1, 2^−2, …, 2^−R). The relation between x and the binary numbers q_k is

x_i = x0_i + L (v · q_i − c),     (2)

where L is the length of the edge of the N-cube and q_i is the R-vector (q_{(i−1)R+1}, …, q_{iR}) of bits assigned to the ith component. Utilizing Equation 2 and recognizing the summation involving the 2^−r terms, we can express

x = x0 + L Σ_{i=1}^{N} (v · q_i − c) e_i,     (3)

where c = (1 − 2^−R)/2. With this notation, each binary vector q of length NR defines a unique vector x. These choices ensure that the initial guess x0 remains at the center of the N-cube.

To construct the QUBO problem associated with solving the linear system, we provide a concrete example with N = 2; the generalization to arbitrary N is straightforward. Let A be the 2 × 2 matrix and b the 2-vector given in Equation 4.

The solution of the system minimizes the function

E(x) = ‖A x − b‖²,     (5)

with x given by Equations 2, 3. We choose R = 3, an initial guess x0, and an edge length L. The binary vector q then has six components. Figure 1A depicts the vectors to be analyzed. To construct the QUBO problem, we substitute Equation 2 into Equation 3 and utilize the corresponding result in Equation 5. It is not difficult to observe that the function E is redefined in the binary space of the 64 q’s, and therefore, we can construct a new matrix Â and an N-vector b̃ satisfying

E(q) = ‖Â q − b̃‖²,     (6)

where Â = L (A ⊗ v), with ⊗ denoting the matrix Kronecker product, and b̃ = b − A(x0 − L c 1).

FIGURE 1

In our particular case, we obtain the corresponding 2 × 6 matrix Â and 2-vector b̃.

To construct the QUBO matrix used in Equation 1, we expand ‖Â q − b̃‖². Neglecting the constant positive term b̃ᵀ b̃, we obtain the symmetric QUBO matrix

Q = Âᵀ Â − 2 Diag(Âᵀ b̃),

where Diag(·) converts an NR-vector into a diagonal matrix (the linear terms can be moved to the diagonal because q_i² = q_i for binary variables). For our specific case, we obtain the corresponding 6 × 6 symmetric matrix.
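A sketch of this construction in Python follows; the function names and the centering convention are our assumptions, not the authors’ code:

```python
import numpy as np

def build_qubo(A, b, x0, L, R):
    """Map one iteration of the linear system Ax = b to a QUBO matrix.

    Encoding (reconstructed convention): x_i = x0_i + L * (sum_r 2^-r q_{i,r} - c),
    with c = (1 - 2^-R) / 2 so that x0 sits at the center of the N-cube.
    """
    N = A.shape[0]
    v = L * 2.0 ** -np.arange(1, R + 1)            # bit weights L/2, L/4, ..., L/2^R
    c = 0.5 * (1.0 - 2.0 ** -R)                    # centering offset
    A_hat = np.kron(A, v[None, :])                 # N x (N*R) matrix, L * (A ⊗ v)
    r0 = b - A @ (x0 - L * c * np.ones(N))         # residual at the all-zeros corner
    # Linear terms move to the diagonal because q_i^2 = q_i for binary variables.
    Q = A_hat.T @ A_hat - 2.0 * np.diag(A_hat.T @ r0)
    return Q, A_hat, r0

def decode(q, x0, L, R):
    """Map a binary vector q of length N*R back to a real candidate solution x."""
    N = len(x0)
    frac = (q.reshape(N, R) * 2.0 ** -np.arange(1, R + 1)).sum(axis=1)
    return x0 + L * (frac - 0.5 * (1.0 - 2.0 ** -R))
```

Minimizing qᵀQq over binary q is then equivalent, up to the constant ‖b̃‖², to minimizing ‖A · decode(q) − b‖² over the lattice points.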

The binary vector q* minimizes the function E(q). In Figure 1A, the orange triangle represents the configuration that minimizes the function in Equation 6. Note that in this case, the QUBO solution is not the closest point to the exact solution of the problem (the red square). However, for the procedure to work, it is only necessary that the orange configuration lie within the same quadrant as the exact solution.

Once the vector q* is found using a QUBO solver, we repeat the process to find a better solution (closer to the exact solution x = A⁻¹ b). This involves redefining x0 and choosing a new L, smaller than the previous one, such that the new N-cube contains a solution closer to the exact one.

For our concrete example with N = 2 and R = 3, verifying all 64 configurations and determining the best solution are easy tasks. However, when N is large, this procedure becomes intractable because the space of configurations is too large. A search algorithm different from the brute-force approach is then necessary. There are several possibilities, such as simulated annealing algorithms [20], metaheuristic algorithms [21], special-purpose quantum hardware such as quantum annealing machines [9, 22], and classical Ising machines [5]. Hybrid procedures using quantum and classical computation are also possible [23].

Other algorithms to tackle QUBO problems are mentioned in the review [1]. Once a QUBO solver is chosen, we can use the iterative process to find the solution of the linear equation system. We implement this procedure in Algorithm 1, as shown in Figure 2.
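A compact sketch of the full iterative loop follows (our reconstruction of the scheme, with exhaustive enumeration standing in for the QUBO solver; all parameter defaults are illustrative):

```python
import itertools

import numpy as np

def solve_linear_qubo(A, b, x0, L=4.0, R=3, iters=30, shrink=0.7):
    """Iteratively solve Ax = b: at each step, pick the best lattice point inside
    the current N-cube (brute force here; a QUBO solver in practice), re-center
    on it, and shrink the cube. Parameter defaults are illustrative."""
    N = A.shape[0]
    x = np.asarray(x0, dtype=float)
    weights = 2.0 ** -np.arange(1, R + 1)
    c = 0.5 * (1.0 - 2.0 ** -R)                   # keeps x at the center of the cube
    for _ in range(iters):
        best, best_val = x, np.sum((A @ x - b) ** 2)
        for bits in itertools.product([0, 1], repeat=N * R):
            q = np.array(bits, dtype=float).reshape(N, R)
            cand = x + L * (q @ weights - c)
            val = np.sum((A @ cand - b) ** 2)
            if val < best_val:
                best, best_val = cand, val
        x = best                                   # re-center on the best configuration
        L *= shrink                                # shrink the search cube
    return x
```

The brute-force inner loop costs 2^(NR) evaluations per iteration, which is exactly the part that a classical or quantum QUBO solver replaces for large N.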

FIGURE 2

3 Methods

After developing the appropriate mathematical tools, we implemented three methods in Python to solve the associated QUBO problem.

  • Exhaustive search (for small problems): when the number of variables (denoted by RN) is less than 20, we directly evaluate all possible QUBO configurations and select the one that yields the optimal solution. This approach is guaranteed to find the best solution but becomes computationally expensive for larger problems.

  • D-Wave Qbsolv (deprecated): for larger problems, we employed the open-source Qbsolv software provided by D-Wave Systems (now deprecated). This Python library implements Tabu search combined with simulated annealing, which we integrated directly into our code.

  • Fujitsu Digital Annealer: we additionally utilized the Fujitsu Digital Annealer system. We accessed the Fujitsu system through an application programming interface (API) using Python’s requests package. This allows our code to seamlessly interact with the Fujitsu system and submit QUBO problems for optimization.

The coefficients of the linear systems that we studied were randomly generated. After transforming these coefficients into a QUBO format, we converted them into JavaScript Object Notation (JSON) for efficient data exchange. The resulting JSON data were then sent to the Fujitsu system for optimization. Inquiries regarding the implementation details or the code itself can be directed to the authors.

4 Results

Section 4.1 presents the performance of the algorithm in Figure 2 applied to problems with a small condition number. The algorithm works well in this case, but if we increase the condition number, convergence is only obtained by increasing the factor R associated with the numerical binary approximation of the problem. Large condition numbers require a larger R, and Algorithm 1 is no longer efficient.

In Section 4.2, the previous issue is addressed by determining the geometry of the hypersurfaces on which E is constant. We reformulate the QUBO problem considering this geometry and show that solving the resulting problem is trivial, even when using R = 1. A large linear system with a large condition number is solved, demonstrating the power of the method.

In Section 4.3, it is shown that partial knowledge of the geometry simplifies the QUBO approach. In particular, it is demonstrated that for a large problem whose condition number makes the algorithm in Figure 2 fail, it is possible to decompose the original problem into many independent QUBO sub-problems, each with a condition number amenable to the algorithm in Figure 2. Such a decomposition is obtained with only partial knowledge of the geometry.

4.1 Convergence of the conventional algorithm

The performance of Algorithm 1 strongly depends on the type of matrix A used in the problem, particularly on its condition number κ. The example described in Equation 4 has a small condition number, and modest values of the parameters R and L are sufficient. As κ increases, the optimal QUBO configurations deviate further from the exact solution of the problem, and it is possible that in the next iteration, the exact solution may fall outside the N-cube, breaking convergence. This issue can be resolved by decreasing the parameter L, at the cost of increasing the number of iterations needed to reach convergence.

Another option is to increase the factor R of the algorithm, which increases the number of QUBO configurations. This, in turn, helps the optimal QUBO solution stay closer to the exact solution of the problem. However, increasing R also enlarges the dimension of the QUBO problem to NR, thereby escalating the difficulty of the QUBO approach, at least in principle. In Figures 1B, C, we illustrate these issues for a simpler case.

In Figures 1D, E, we solve three different systems of linear equations with different condition numbers. The vector b associated with the problem was generated using random numbers between −200 and 200, and the matrix A was generated by applying random unitary transformations to appropriate diagonal matrices. In this study, we compare the open-source heuristic algorithm Qbsolv in a classical simulation (which uses Tabu search and classical simulated annealing) and the Fujitsu system, a classical QUBO solver inspired by the quantum annealing approach. We observe that the Fujitsu system finds an adequate configuration in each iteration, reaching convergence when the process ends. Qbsolv reaches convergence only for the smaller systems and suitable choices of R and L, showing that for larger problems, it is advantageous to use the Fujitsu system.

Figure 1F shows that for larger systems, the method still works very well only with the Fujitsu system. However, when the condition number grows, the correspondence between the optimal QUBO configurations that minimize Equation 6 and the configuration closest to the solution of Ax = b is lost. We can choose a larger R, as shown in Figure 1C, but for larger matrices with large condition numbers, this procedure is not efficient.

The Fujitsu digital annealer enhances the well-known simulated annealing algorithm with other physics-inspired strategies that resemble quantum annealing procedures (see [24]). In our case, involving large matrices, small binary approximations, and small condition numbers, the Fujitsu system seems to be very efficient at solving these types of problems. Large QUBO problems can be solved using the Fujitsu system, which includes integration with the Azure system’s blob storage to load even larger problems. However, even with an efficient QUBO solver like the Fujitsu system, in cases of large matrices with appreciable condition numbers and small binary approximations, Algorithm 1 is not adequate for solving a linear equation system with a unique solution. For matrices with larger condition numbers, finding a correspondence between the QUBO configurations that minimize Equation 6 and configurations sufficiently close to the exact solution depends on the initial guess x0. This property resembles the gradient descent algorithm used in minimization problems, where the convergence rate can heavily depend on the initial guess. This drawback is addressed in descent methods by considering the geometry of the problem and reformulating it into the more powerful conjugate gradient method. Next, we demonstrate that the geometry associated with the system of linear equations can improve convergence and break down a sizable original system with an arbitrary condition number into smaller ones with lower condition numbers that can be solved separately using Algorithm 1.

4.2 Rhombus geometry applied to the problem

4.2.1 Geometry of the problem

The entire discrete set of possible configurations defines the QUBO problem. Generally, there is little structure in this set. However, since the problem is written in the language of vector spaces, there is a robust mathematical structure that we can use to improve the performance of existing algorithms. It is not difficult to observe that the subsets of ℝ^N on which E (given by Equation 5 with A invertible) is constant correspond to ellipsoidal hypersurfaces of dimension N − 1. For N = 2, see Figure 3A.

FIGURE 3

All the ellipses in Figure 3A are concentric and similar. Therefore, we can take a unique representative. Each ellipse contains a family of parallelograms with different sizes but congruent angles; see Figure 3B. In Figure 1A, the problem is formulated using a square lattice geometry. However, nothing prevents us from using other geometries, especially those better suited to the problem. We can choose a lattice with the parallelogram geometry. In particular, we choose the parallelogram with equal-length sides (rhombus). Figure 3C illustrates how possible configurations are chosen using the rhombus geometry.

The choice of this geometry brings advantages in the final algorithm efficiency since we need only a few iterations with the rhombus geometry to obtain convergence to the solution. Given an initial guess x0, such a point defines a rhombus. If the solution is also inside the same rhombus, then we can guarantee that all subsequent steps will also be inside the same rhombus as x0 (see proof in Supplementary Appendix S1). This property improves convergence and will be referred to here as rhombus convergence.

We emphasize that the square geometry used in previous works coincides with the matrix-inversion geometry only when the matrix A is diagonal. For non-diagonal matrices in the square geometry, the closest point (in the conventional distance) to the exact solution is not necessarily the point with the smallest value of E among the finite set of QUBO vectors. In other words, the exact solution could lie outside the region containing the QUBO configurations, breaking convergence. We can avoid this lack of convergence by reducing the parameter L or increasing the number R in the algorithm, but at the cost of increasing the number of iterations.

4.2.2 A-orthogonality

The ellipsoid form in the matrix-inversion problem is given by the symmetric matrix M = AᵀA; this becomes clear when we define the new function Ẽ(x) = xᵀ M x, which defines the same set of similar ellipsoids but centered at the zero vector. In particular, the matrix M introduces a different notion of orthogonality, referred to in the review [25] as A-orthogonality (or conjugacy). Two vectors u and w in ℝ^N are A-orthogonal if they satisfy

uᵀ M w = uᵀ AᵀA w = 0.     (7)

Given the canonical vectors e_i of ℝ^N, with the ith coordinate equal to 1 and all others equal to 0, we can construct from them A-orthogonal vectors v_i associated with each e_i using a generalized Gram–Schmidt A-orthogonalization. The method selects the first vector as v_1 = e_1. The vector v_k is constructed as

v_k = e_k + Σ_{j=1}^{k−1} c_{kj} v_j.     (8)

The coefficients in Equation 8 are determined using the A-orthogonality property v_jᵀ M v_k = 0 for j < k. Explicitly,

c_{kj} = − (v_jᵀ M e_k) / (v_jᵀ M v_j).

This procedure is implemented in Algorithm 2, as shown in Figure 4. The calculated vectors, unitary in the standard scalar product but not orthogonal in it, define the rhombus geometry previously described. In Supplementary Appendix S2, we improve the algorithm described above.
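A minimal sketch of this generalized Gram–Schmidt procedure (the function name is ours; normalization uses the standard Euclidean norm, as in the text):

```python
import numpy as np

def conjugate_directions(A):
    """Build unit vectors v_1, ..., v_N with v_i^T (A^T A) v_j = 0 for i != j,
    starting from the canonical basis (generalized Gram-Schmidt sketch)."""
    N = A.shape[0]
    M = A.T @ A                                    # symmetric positive definite for invertible A
    V = []
    for k in range(N):
        v = np.eye(N)[k].copy()                    # start from the canonical vector e_k
        for w in V:
            v = v - (v @ M @ w) / (w @ M @ w) * w  # remove the M-component along w
        V.append(v / np.linalg.norm(v))            # normalize in the standard norm
    return np.array(V)                             # rows are the conjugate directions
```

The resulting rows have unit length but are not orthogonal in the usual sense; they are A-orthogonal, which is what makes the rhombus lattice useful.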

FIGURE 4

4.2.3 Modified search region

Considering the intrinsic rhombus geometry, the iterative algorithm converges exponentially fast with respect to the number of iterations, making it sufficient to use R = 1. The QUBO configurations in Equation 3 around a certain guess x0 can be rewritten as

x = x0 + L Σ_{i=1}^{N} (q_i − 1/2) e_i,

where {e_i} is the canonical basis. Therefore, we modified Algorithm 1 by changing e_i → v_i, where the v_i are the A-orthogonal vectors constructed above. The QUBO configurations are the vertices of an N-rhombus and are associated with all the possible binary vectors q. We can substitute these modifications into the function E and calculate the new QUBO matrix and vector. Considering the vectors v_i as the ith rows of a matrix V (so that x = x0 + L Vᵀ(q − ½ 1)), it is not difficult to see that the quadratic part of the new QUBO is proportional to

V M Vᵀ,   M = AᵀA,     (9)

and the linear part is obtained from the residual b − A x0.

From Equation 7 and the A-orthogonality of the vectors v_i (the rows of the matrix V), it becomes evident that the QUBO matrix constructed from Equation 9 is always diagonal. The QUBO solution is then trivial: no heuristic algorithms or quantum computers are necessary, since each bit is fixed independently by the sign of its diagonal entry. The modified iterative process is presented in Algorithm 3, shown in Figure 5.
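The diagonality claim can be checked directly. The sketch below (helper names and the R = 1 encoding convention are our assumptions) builds the QUBO in the conjugate basis and solves it bit by bit:

```python
import numpy as np

def conjugate_rows(A):
    """Rows v_i with v_i^T (A^T A) v_j = 0 for i != j (generalized Gram-Schmidt)."""
    N = A.shape[0]
    M = A.T @ A
    V = []
    for k in range(N):
        v = np.eye(N)[k].copy()
        for w in V:
            v = v - (v @ M @ w) / (w @ M @ w) * w
        V.append(v / np.linalg.norm(v))
    return np.array(V)

def qubo_in_conjugate_basis(A, b, V, x0, L):
    """One R = 1 step with lattice x = x0 + L * V^T (q - 1/2): returns the QUBO
    matrix, which is diagonal up to rounding when the rows of V are A-orthogonal."""
    W = L * (A @ V.T)                       # images of the scaled search directions
    r0 = b - A @ x0 + 0.5 * W.sum(axis=1)   # residual at the q = 0 corner
    Q = W.T @ W - 2.0 * np.diag(W.T @ r0)   # W^T W is diagonal by conjugacy
    return Q, W, r0

def trivial_qubo_solution(Q):
    """For a diagonal QUBO, each bit minimizes its own term: q_i = 1 iff Q_ii < 0."""
    return (np.diag(Q) < 0).astype(float)
```

Reading the optimal bits off the diagonal signs is what makes this step free of any heuristic or quantum solver.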

FIGURE 5

4.2.4 Implementation of the algorithm

Algorithm 3 works whenever the rhombus that contains the QUBO configurations also includes the exact solution x = A⁻¹ b. This is guaranteed when L is sufficiently large (in particular when L ≥ L_c, where L_c is the “critical” parameter value needed to obtain convergence). In Figure 6A, we show the algorithm’s performance for a particular large dense matrix. The initial guess is the N-dimensional zero vector. Note the dependence on the parameter value L: below the critical value L_c, there is no convergence. To compare with the original algorithm, we study a case corresponding to a QUBO problem with 1500 variables. Figures 6B, C show the comparison of the two approaches. The original Algorithm 1 in Figure 6C exhibits poorer efficiency than the modified Algorithm 3 shown in Figure 6B.

FIGURE 6

In the last section of this work, we show that even partial knowledge of the conjugate vectors simplifies the original QUBO problem considerably.

4.3 Solving large systems of equations using binary optimization

4.3.1 Decomposing QUBO matrices in smaller sub-problems

In the previous section, we showed that knowledge of the conjugate vectors that generate the rhombus geometry simplifies the QUBO resolution and improves the convergence rate to the exact solution. However, calculating these vectors with Algorithm 2 requires a number of steps that grows rapidly with N. A faster algorithm would be desirable.

Another interesting possibility is to use the notion of A-orthogonality to construct a different set of vectors grouped into subsets in such a way that vectors in different subsets are A-orthogonal. In this last section, we show that such a construction decomposes the original QUBO matrix into a block-diagonal form, so that we can use a modified version of Algorithm 3. We can tackle each block independently with some QUBO solver, and after joining the independent results, we obtain the total solution. There are B_N possible decompositions, where B_N is the number of possible partitions of a set with N elements (the Bell numbers).

Techniques for decomposing into sub-problems are standard in the search process of some QUBO solvers. One notable example is the QUBO solver Qbsolv, a heuristic hybrid algorithm that decomposes the original problem into many QUBO sub-problems that can be approached using classical Ising or quantum QUBO solvers. The solution of each sub-problem is projected back into the full space to infer better initial guesses for the classical heuristic algorithm (Tabu search); see [26] for details. Our algorithm instead decomposes the original QUBO problem associated with Ax = b into many truly independent QUBO sub-problems. We obtain the optimal solution directly from the particular sub-solutions of each QUBO sub-problem.

To see how the decomposition method works, we use the generalized Gram–Schmidt orthogonalization only between different groups of vectors. We choose positive numbers N_1, …, N_p satisfying N_1 + ⋯ + N_p = N. First, call

v_i = e_i,   i = 1, …, N_1.

For the other vectors, we use

v_i = e_i + Σ_{j=1}^{N_1} c_{ij} v_j,   i = N_1 + 1, …, N.

We also require that the first group of vectors be A-orthogonal to the second group of vectors; specifically, this applies for k = 1, …, N_1:

v_kᵀ M v_i = 0,   M = AᵀA.

This last condition determines all the coefficients c_{ij} for each i by solving a linear system of dimension N_1. For fixed i and defining c_i = (c_{i1}, …, c_{iN_1})ᵀ, the linear system to solve is

M_1 c_i = −m_i,     (10)

where M_1 is the corresponding sub-matrix of M consisting of its first N_1 × N_1 block and m_i represents the first N_1 coefficients of the ith column of M. With the coefficients c_i, we can calculate v_i and normalize it. Grouping all these vectors as the rows of the matrix V_1, it is possible to verify that

V_1 M V_1ᵀ = diag(M_1, S_1),

where S_1 is an (N − N_1) × (N − N_1) matrix. We can put S_1 into a two-block diagonal form using the same process, where one block has dimension N_2 × N_2 and the second block has dimension (N − N_1 − N_2) × (N − N_1 − N_2). To determine the new set of coefficients, we use the analog of Equation 10, in which M_1 is replaced by the first N_2 × N_2 diagonal block of S_1 and m_i represents the first N_2 coefficients of the ith column of S_1. Repeating the previous procedure, we obtain a new matrix V_2 with the analogous block-diagonalizing property. Repeating the same process another p − 2 times and defining V as the composition of all the partial transformations, we obtain

V M Vᵀ = diag(M^(1), …, M^(p)).

We use the notation M^(k) to reinforce that this is an N_k × N_k matrix. We implemented this procedure in Algorithm 4, as shown in Figure 7.

FIGURE 7

To effectively decompose a large matrix A with an arbitrary condition number into sub-problems A_k, each tractable with Algorithm 1, we need to choose adequate submatrices A_k of A with small condition numbers. This is always possible for large matrices using the following procedure. To construct A_1, first test all the small submatrices of A and choose the one with the minimal condition number. Next, test all the remaining indices to construct a matrix enlarged by one index from the previous matrix, and choose the one with the minimal condition number; repeat this procedure until reaching the desired dimension N_1 to obtain A_1. Then, apply the modified orthogonalization procedure explained above to obtain a new matrix with dimensions (N − N_1) × (N − N_1), and using this matrix, construct A_2 in the same way. Repeat the procedure until reaching A_p. Evidently, the indices of the submatrices are not ordered, but the generalization is straightforward. Each matrix A_k is associated with a set of indices I_k, where I_j ∩ I_k = ∅ for j ≠ k, and in Equation 10, the corresponding index substitution is made. It is not difficult to show that there exists a permutation of the matrix indices such that the row–column permutation puts the transformed matrix into an explicit block-diagonal form. This remark is important because we need to manipulate each block independently, as shown in the next section.
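The greedy selection can be sketched as follows; starting from the best 2 × 2 submatrix and growing one index at a time is our reading of the elided starting size, and the function name is hypothetical:

```python
import numpy as np

def greedy_low_cond_indices(A, target_size):
    """Greedily grow an index set whose principal submatrix of A has a small
    condition number (heuristic sketch of the selection rule in the text)."""
    N = A.shape[0]
    # Seed with the pair of indices giving the best-conditioned 2 x 2 submatrix.
    best_pair, best_cond = None, np.inf
    for i in range(N):
        for j in range(i + 1, N):
            c = np.linalg.cond(A[np.ix_([i, j], [i, j])])
            if c < best_cond:
                best_pair, best_cond = [i, j], c
    chosen = list(best_pair)
    # Extend one index at a time, always minimizing the condition number.
    while len(chosen) < target_size:
        remaining = [k for k in range(N) if k not in chosen]
        k_best = min(remaining,
                     key=lambda k: np.linalg.cond(A[np.ix_(chosen + [k], chosen + [k])]))
        chosen.append(k_best)
    return sorted(chosen)
```

Running this once per block and removing the chosen indices each time yields the disjoint index sets used in the decomposition.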

4.3.2 Implementation of the algorithm

Suppose that the matrix M is transformed into block-diagonal form, with each block having a small condition number, as explained above. The procedure for decomposing and solving a QUBO problem is shown in Algorithm 5 (Figure 8). For each matrix in each iterative step, the Fujitsu QUBO solver system is used. Figures 6D, E illustrate the resolution of two large matrices using block decomposition into ten sub-problems each. The condition numbers of both matrices are large. Note that our method, unlike the original Algorithm 1, works for arbitrary matrices and is not restricted to matrices with small condition numbers (the two matrices were generated by choosing random integers in a bounded symmetric interval).

FIGURE 8

5 Discussion

It has recently been conjectured that the use of quantum technologies would improve the learning process in machine learning models. In the standard quantum circuit paradigm, many proposals and generalizations exist, promising better performance with the advent of quantum computers. Formulating machine learning tasks as QUBO problems is another possible strategy that can be improved with the development of quantum annealing hardware. In such cases, the approach of addressing linear algebra problems through QUBO problems is of general interest because linear algebra is one of the natural languages in which machine learning is written. In this work, we proposed a new method to solve a system of linear equations using binary optimizers. Our approach guarantees that the optimal configuration is the closest to the exact solution. Additionally, we demonstrated that partial knowledge of the problem’s geometry allows decomposition into a series of independent sub-problems that can be solved using conventional QUBO solvers. The solution to each sub-problem is then aggregated, enabling rapid determination of an optimal solution. We showed that the original QUBO formulation is efficient only when the condition number of the associated matrix A is small (with A a square matrix). Our procedure is applicable in principle to matrices with arbitrary condition numbers, provided the error associated with the multiplication operations is controlled. Therefore, our method is not restricted to matrices with condition numbers close to 1.

However, identifying the vectors that determine the sub-problem decomposition incurs computational costs that influence the overall performance of the algorithm. Nevertheless, two factors could lead to significant improvements: better methods for identifying the vectors associated with the geometry and faster QUBO solvers. In our study, when using a QUBO solver such as the Fujitsu digital annealer, we focus on finding elite QUBO solutions. This is because when the condition number is small, we are guaranteed that the associated configuration is very close to the solution of the problem. Finding elite solutions to QUBO problems is very costly for large problems due to their NP-hardness. However, the only criterion for obtaining convergence in Algorithm 1 is to find a configuration in the same quadrant that contains the solution to the linear system of equations. The number of configurations in each quadrant (there are 2^N quadrants) is 2^{N(R−1)}. For large values of N and R, this results in a large number of configurations. Therefore, focusing on developing new methods to find configurations in the same quadrant as the solution would be an interesting strategy to overcome the NP-hardness of finding the best QUBO solution. In any case, quantum computing or quantum-inspired classical computation could be fundamental tools for developing better approaches that can be integrated with the procedures presented here. We intend to explore these interesting questions in subsequent studies, and we hope that the methods presented in this study can contribute to the discovery of better and more efficient procedures for solving extensive linear systems of equations.

Statements

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

EC: conceptualization, data curation, formal analysis, investigation, methodology, software, validation, visualization, writing–original draft, and writing–review and editing. EM: conceptualization, investigation, methodology, project administration, resources, software, validation, visualization, and writing–review and editing. RS: methodology, resources, supervision, validation, and writing–review and editing. AS: conceptualization, formal analysis, investigation, methodology, resources, software, supervision, validation, visualization, and writing–review and editing. IO: conceptualization, funding acquisition, methodology, project administration, resources, supervision, visualization, and writing–review and editing.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This work was supported by the Brazilian National Institute of Science and Technology for Quantum Information (INCT-IQ) (Grant No. 465 469/2 014-0), the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES)—Finance Code 001, Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), and PETROBRAS: Projects 2017/00 486-1, 2018/00 233-9, and 2019/00 062-2. AMS acknowledges support from FAPERJ (Grant No. 203.166/2 017). ISO acknowledges FAPERJ (Grant No. 202.518/2 019).

Conflict of interest

Author EM was employed by Petróleo Brasileiro S.A.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fphy.2024.1443977/full#supplementary-material

References

1. Kochenberger G, Hao JK, Glover F, Lewis M, Lü Z, Wang H, et al. The unconstrained binary quadratic programming problem: a survey. J Comb Optim (2014) 28:58–81. doi:10.1007/s10878-014-9734-0

2. Barahona F. On the computational complexity of Ising spin glass models. J Phys A: Math Gen (1982) 15:3241–53. doi:10.1088/0305-4470/15/10/028

3. Lucas A. Ising formulations of many NP problems. Front Phys (2014) 2:1–15. doi:10.3389/fphy.2014.00005

4. Kadowaki T, Nishimori H. Quantum annealing in the transverse Ising model. Phys Rev E (1998) 58:5355–63. doi:10.1103/PhysRevE.58.5355

5. Mohseni N, McMahon PL, Byrnes T. Ising machines as hardware solvers of combinatorial optimization problems. Nat Rev Phys (2022) 4:363–79. doi:10.1038/s42254-022-00440-8

6. O'Malley D, Vesselinov VV. ToQ.jl: a high-level programming language for D-Wave machines based on Julia. In: IEEE Conference on High Performance Extreme Computing. Waltham, MA, United States (2016). p. 1–7. doi:10.1109/HPEC.2016.7761616

7. Pollachini GG, Salazar JPLC, Góes CBD, Maciel TO, Duzzioni EI. Hybrid classical-quantum approach to solve the heat equation using quantum annealers. Phys Rev A (2021) 104:032426. doi:10.1103/PhysRevA.104.032426

8. Rogers ML, Singleton RL Jr. Floating-point calculations on a quantum annealer: division and matrix inversion. Front Phys (2020) 8:265. doi:10.3389/fphy.2020.00265

9. Souza AM, Martins EO, Roditi I, Sarthour RS, Oliveira IS. An application of quantum annealing computing to seismic inversion. Front Phys (2021) 9:748285. doi:10.3389/fphy.2021.748285

10. Borle A, Lomonaco SJ. Analyzing the quantum annealing approach for solving linear least squares problems. In: WALCOM: Algorithms and Computation. Springer (2019). p. 289–301. doi:10.1007/978-3-030-10564-8_23

11. Borle A, Lomonaco SJ. How viable is quantum annealing for solving linear algebra problems? arXiv:2206.10576 (2022). doi:10.48550/arXiv.2206.10576

12. Date P, Arthur D, Pusey-Nazzaro L. QUBO formulations for training machine learning models. Sci Rep (2021) 11:10029. doi:10.1038/s41598-021-89461-4

13. Gong C, Zhou N-R, Xia S, Huang S. Quantum particle swarm optimization algorithm based on diversity migration strategy. Future Gener Comput Syst (2024) 157:445–58. doi:10.1016/j.future.2024.04.008

14. Gong L-H, Ding W, Li Z, Wang Y-Z, Zhou N-R. Quantum k-nearest neighbor classification algorithm via a divide-and-conquer strategy. Adv Quantum Technol (2024) 7:2300221. doi:10.1002/qute.202300221

15. Gong L-H, Pei J-J, Zhang T-F, Zhou N-R. Quantum convolutional neural network based on variational quantum circuits. Opt Commun (2024) 550:129993. doi:10.1016/j.optcom.2023.129993

16. Huang S-Y, An W-J, Zhang D-S, Zhou N-R. Image classification and adversarial robustness analysis based on hybrid quantum–classical convolutional neural network. Opt Commun (2023) 533:129287. doi:10.1016/j.optcom.2023.129287

17. Wu C, Huang F, Dai J, Zhou N-R. Quantum SUSAN edge detection based on double chains quantum genetic algorithm. Physica A (2022) 605:128017. doi:10.1016/j.physa.2022.128017

18. Zhou N-R, Zhang T-F, Xie X-W, Wu J-Y. Hybrid quantum–classical generative adversarial networks for image generation via learning discrete distribution. Signal Process Image Commun (2023) 110:116891. doi:10.1016/j.image.2022.116891

19. Greer S, O'Malley D. Early steps toward practical subsurface computations with quantum computing. Front Comput Sci (2023) 5:1235784. doi:10.3389/fcomp.2023.1235784

20. Alkhamis TM, Hasan M, Ahmed MA. Simulated annealing for the unconstrained binary quadratic pseudo-Boolean function. Eur J Oper Res (1998) 108:641–52. doi:10.1016/S0377-2217(97)00130-6

21. Dunning I, Gupta S, Silberholz J. What works best when? A systematic evaluation of heuristics for Max-Cut and QUBO. INFORMS J Comput (2018) 30:608–24. doi:10.1287/ijoc.2017.0798

22. Hauke P, Katzgraber HG, Lechner W, Nishimori H, Oliver WD. Perspectives of quantum annealing: methods and implementations. Rep Prog Phys (2020) 83:054401. doi:10.1088/1361-6633/ab85b8

23. Booth M, Berwald J, Chukwu U, Dridi R, Le D, Wainger M, et al. QCI Qbsolv delivers strong classical performance for quantum-ready formulation (2020). doi:10.48550/arXiv.2005.11294

24. Aramon M, Rosenberg G, Valiante E, Miyazawa T, Tamura H, Katzgraber HG. Physics-inspired optimization for quadratic unconstrained problems using a digital annealer. Front Phys (2019) 7:48. doi:10.3389/fphy.2019.00048

25. Shewchuk JR. An introduction to the conjugate gradient method without the agonizing pain. Pittsburgh, PA, USA: Carnegie Mellon University, Department of Computer Science (1994).

26. Booth M, Reinhardt SP, Roy A. Partitioning optimization problems for hybrid classical/quantum execution. Burnaby, BC, Canada: D-Wave The Quantum Computing Company (2017).

27. Rump SM. Inversion of extremely ill-conditioned matrices in floating-point. Jpn J Ind Appl Math (2009) 26:249–77. doi:10.1007/BF03186534

Keywords

linear algebra algorithms, quadratic unconstrained binary optimization formulation, digital annealing, conjugate geometry approach, convergence analysis

Citation

Castro ER, Martins EO, Sarthour RS, Souza AM and Oliveira IS (2024) Improving the convergence of an iterative algorithm for solving arbitrary linear equation systems using classical or quantum binary optimization. Front. Phys. 12:1443977. doi: 10.3389/fphy.2024.1443977

Received

04 June 2024

Accepted

26 August 2024

Published

27 September 2024

Volume

12 - 2024

Edited by

Nanrun Zhou, Shanghai University of Engineering Sciences, China

Reviewed by

Lihua Gong, Shanghai University of Engineering Sciences, China

Mengmeng Wang, Qingdao University of Technology, China

Zhao Dou, Beijing University of Posts and Telecommunications, China

*Correspondence: Erick R. Castro,
