Linear systems

This page is about solutions to linear systems <math>Ax=b</math>. We call <math>d</math> the dimension of <math>x</math>.

== Applications ==

Least squares regression. PageRank.
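
Least squares itself reduces to a linear system: minimizing <math>\|Ax - b\|^2</math> yields the normal equations <math>A^T A x = A^T b</math>, a <math>d \times d</math> system in the parameters. A minimal sketch, assuming NumPy (the data and names are purely illustrative):

<syntaxhighlight lang="python">
import numpy as np

# Illustrative data: 100 noisy observations of a 3-parameter linear model.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))      # design matrix
x_true = np.array([2.0, -1.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(100)

# Least squares regression = solving the d x d normal equations.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)  # close to x_true
</syntaxhighlight>

PageRank is similar: the stationarity condition <math>x = \alpha P^T x + \tfrac{1-\alpha}{d} \mathbf{1}</math> (with transition matrix <math>P</math> and damping factor <math>\alpha</math>) is the linear system <math>(I - \alpha P^T) x = \tfrac{1-\alpha}{d} \mathbf{1}</math>.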

== Complexity ==

The complexity of solving <math>Ax=b</math> is that of matrix multiplication. The naive approach, say Gauss-Jordan elimination, runs in <math>O(d^3)</math>, while the optimized Coppersmith-Winograd-like (CW-like) algorithms bring this down to <math>O(d^{2.373})</math>.
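
For concreteness, here is a minimal sketch of the cubic approach: Gaussian elimination with partial pivoting plus back substitution (a close cousin of Gauss-Jordan). The function name is illustrative and NumPy is assumed; production code would instead call <code>np.linalg.solve</code>, which dispatches to LAPACK.

<syntaxhighlight lang="python">
import numpy as np

def gauss_solve(A, b):
    """Solve Ax=b by Gaussian elimination. The loop over d pivots, each
    updating up to a d x d submatrix, is what makes this O(d^3)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    d = len(b)
    for k in range(d):                       # forward elimination
        p = k + np.argmax(np.abs(A[k:, k]))  # partial pivoting
        A[[k, p]] = A[[p, k]]
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, d):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(d)
    for i in reversed(range(d)):             # back substitution
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
print(gauss_solve(A, np.array([9.0, 8.0])))  # [2. 3.]
</syntaxhighlight>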

A lot of the work is about parallelizing matrix multiplication using hardware such as GPUs.

== Quantum algorithm for sparse linear systems ==

[https://arxiv.org/pdf/0811.3171.pdf HHL][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=Quantum+algorithm+for+linear+systems+of+equations&btnG= 09] propose a quantum algorithm to compute the norm of the solution <math>x</math> to <math>Ax=b</math>. The complexity is <math>O(k^2 \log d)</math>, where <math>d</math> is the dimension of <math>x</math> and <math>k</math> is the <em>condition number</em> (<math>\|A\| \cdot \|A^{-1}\|</math> for operator norms). This is an exponential quantum speedup over the best known classical algorithm (which runs in <math>O(kd)</math>).
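
To make the quantities in this bound concrete, here is a small sketch (assuming NumPy; the matrix is an arbitrary example) of the condition number in the spectral operator norm, which equals the ratio of the extreme singular values:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))  # arbitrary example matrix

# k = ||A|| * ||A^{-1}|| in the spectral norm
#   = largest singular value / smallest singular value.
s = np.linalg.svd(A, compute_uv=False)
k = s[0] / s[-1]
print(k, np.linalg.cond(A, 2))  # the two values agree
</syntaxhighlight>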

== Multiplierless algorithm for Ax=b ==

[https://arxiv.org/pdf/1911.12945.pdf ThaoRzepka][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=Pseudo-inversion+of+time+encoding+of+bandlimited+signals&btnG= 19] (though most of the details seem to be in an unpublished paper) propose a multiplierless iterative algorithm to solve <math>Ax=b</math>. The basic idea is to decompose the system into its individual equations <math>a_i x = b_i</math>. We then iterate over the indices <math>i</math>, adjusting only <math>x_i</math> so that the new value of <math>x</math> satisfies <math>a_i x = b_i</math>. It turns out that if <math>A</math> is symmetric positive definite, then this algorithm converges. They show this by factorizing <math>A = MM^T</math> and showing that, in terms of <math>M^T x</math>, the dynamics are those of projections onto convex sets (POCS). By relaxing the computation of <math>x_i</math> at each step up to some power of 1/2 (so that updates require bit shifts rather than full multiplications), they obtain a sequence of multiplierless approximate projections that nevertheless converges. Further acceleration can be achieved by choosing the order in which to update the <math>i</math>-th coordinates in a "weak greedy" manner.
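
Setting aside the multiplierless relaxation (which is the paper's actual contribution), the basic iteration is a Gauss-Seidel-style coordinate update. A minimal sketch, assuming NumPy and a symmetric positive definite <math>A</math> (all names illustrative; the exact division here is precisely what the paper replaces with approximate, shift-only projections):

<syntaxhighlight lang="python">
import numpy as np

def coordinate_solve(A, b, sweeps=200):
    """Cyclically enforce equation i exactly by adjusting only x_i:
    after each update, a_i . x = b_i holds. For symmetric positive
    definite A this iteration converges."""
    d = len(b)
    x = np.zeros(d)
    for _ in range(sweeps):
        for i in range(d):           # cyclic order; a "weak greedy"
            r = b[i] - A[i] @ x      # order would instead pick an i
            x[i] += r / A[i, i]      # with a large residual r
    return x

# Illustrative use on a random symmetric positive definite system.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)        # SPD by construction
b = rng.standard_normal(20)
print(np.max(np.abs(A @ coordinate_solve(A, b) - b)))  # ~0
</syntaxhighlight>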

It's not clear how critical this will be. So far, hardware has been heavily optimized for floating-point multiplications, but there is a lot of research on specialized hardware. A big question mark is the parallelizability of the algorithm.
