# Linear systems

This page is about solutions to linear systems $Ax=b$. We write $d$ for the dimension of $x$.

## Applications

Least squares regression. PageRank.

## Complexity

The complexity of solving $Ax=b$ is that of matrix multiplication. The naive approach, say Gauss-Jordan elimination, yields $O(d^3)$. But optimized Coppersmith-Winograd-like algorithms yield $O(d^{2.373})$.
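For concreteness, here is a minimal sketch of the cubic-cost direct approach (Gaussian elimination with partial pivoting plus back substitution, a close cousin of Gauss-Jordan); the two nested loops sweeping a $d \times d$ matrix account for the $O(d^3)$ cost. The function name is my own.

```python
import numpy as np

def gauss_solve(A, b):
    """Naive Gaussian elimination with partial pivoting: O(d^3) flops."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    d = len(b)
    # Forward elimination: O(d) pivots, each touching an O(d x d) block.
    for k in range(d):
        p = k + np.argmax(np.abs(A[k:, k]))   # partial pivoting for stability
        A[[k, p]] = A[[p, k]]
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, d):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution: O(d^2).
    x = np.zeros(d)
    for i in range(d - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```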

A lot of the work is about parallelizing matrix multiplication on hardware such as GPUs.

## Quantum algorithm for sparse linear systems

HHL09 (Harrow, Hassidim, Lloyd) propose a quantum algorithm to compute the norm of the solution $x$ to $Ax=b$. The complexity is $O(k^2 \log d)$, where $d$ is the dimension of $x$ and $k$ is the condition number of $A$ ($= \|A\| \cdot \|A^{-1}\|$ for operator norms). This is an exponential quantum speedup over the best known classical algorithm (in $O(kd)$).
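As a small sanity check of the definition (of the condition number, not of the quantum algorithm itself), $k$ can be computed directly for the spectral norm; the $2 \times 2$ matrix below is a made-up example:

```python
import numpy as np

# Hypothetical symmetric example; its eigenvalues are 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# k = ||A|| * ||A^{-1}|| for the operator (spectral) norm.
k = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

# For symmetric positive definite A this is lambda_max / lambda_min = 3/1.
```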

## Multiplierless algorithm for Ax=b

ThaoRzepka19 (though most of it seems to be in an unpublished paper) propose a multiplierless iterative algorithm to solve $Ax=b$. The basic idea is to decompose the system into the equations $a_i x = b_i$, where $a_i$ is the $i$-th row of $A$. We then iterate over values of $i$, each time adjusting only the coordinate $x_i$ so that the new $x$ satisfies $a_i x = b_i$. It turns out that if $A$ is symmetric positive definite, this algorithm converges. They show this by factorizing $A = MM^T$ and observing that, in terms of $M^T x$, the dynamics are those of projections onto convex sets (POCS). By relaxing the computation of $x_i$ at each step to within some power of $1/2$, they obtain a sequence of multiplierless approximate projections that nevertheless converges. Further acceleration can be achieved by choosing the order in which to update the coordinates in a "weak greedy" manner.
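The exact (non-relaxed) coordinate update is Gauss-Seidel: enforce equation $i$ by moving $x_i$ alone. Below is a minimal sketch, with a crude rounding of the step to a signed power of 2 (i.e. a bit shift) standing in for the paper's relaxation; that rounding rule is my guess, not the authors' exact scheme.

```python
import numpy as np

def coordinate_iteration(A, b, n_sweeps=200, multiplierless=False):
    """Cyclically enforce a_i x = b_i by adjusting only x_i.
    Converges when A is symmetric positive definite (Gauss-Seidel)."""
    d = len(b)
    x = np.zeros(d)
    for _ in range(n_sweeps):
        for i in range(d):
            step = (b[i] - A[i] @ x) / A[i, i]  # exact correction to x_i
            if multiplierless and step != 0.0:
                # Round the step down to a signed power of 2 (a bit shift);
                # a guess at the "power of 1/2" relaxation, effectively
                # underrelaxation with a factor in [1/2, 1].
                step = np.sign(step) * 2.0 ** np.floor(np.log2(abs(step)))
            x[i] += step
    return x
```

Note that this sketch still uses floating-point multiplications to form $a_i x$; a truly multiplierless implementation would avoid those products too, which this sketch does not attempt.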

It's not clear how critical this will be: so far, hardware has been heavily optimized for floating-point multiplication. But there is a lot of research on specialized hardware. A big question mark is the parallelizability of the algorithm.