<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://robustlybeneficial.org/wiki/index.php?action=history&amp;feed=atom&amp;title=Linear_systems</id>
	<title>Linear systems - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://robustlybeneficial.org/wiki/index.php?action=history&amp;feed=atom&amp;title=Linear_systems"/>
	<link rel="alternate" type="text/html" href="https://robustlybeneficial.org/wiki/index.php?title=Linear_systems&amp;action=history"/>
	<updated>2026-04-28T13:31:01Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.34.0</generator>
	<entry>
		<id>https://robustlybeneficial.org/wiki/index.php?title=Linear_systems&amp;diff=72&amp;oldid=prev</id>
		<title>Lê Nguyên Hoang at 12:48, 22 January 2020</title>
		<link rel="alternate" type="text/html" href="https://robustlybeneficial.org/wiki/index.php?title=Linear_systems&amp;diff=72&amp;oldid=prev"/>
		<updated>2020-01-22T12:48:32Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 12:48, 22 January 2020&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot; &gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;This page is about solutions to linear systems Ax=b. We call d the dimension of x.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;This page is about solutions to linear systems &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;Ax=b&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;/math&amp;gt;&lt;/ins&gt;. We call d the dimension of &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;x&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;/math&amp;gt;&lt;/ins&gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Applications ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Applications ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l7&quot; &gt;Line 7:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 7:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Complexity ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Complexity ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The complexity of solving Ax=b is that of matrix multiplication. The naive approach, say Gauss-Jordan, yields O(d&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;sup&amp;gt;&lt;/del&gt;3&amp;lt;/&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;sup&lt;/del&gt;&amp;gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;)&lt;/del&gt;. But using the Optimized CW-like algorithms yields O(d&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;sup&amp;gt;&lt;/del&gt;2.373&amp;lt;/&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;sup&lt;/del&gt;&amp;gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;)&lt;/del&gt;.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The complexity of solving Ax=b is that of matrix multiplication. The naive approach, say Gauss-Jordan, yields &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;O(d&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;^&lt;/ins&gt;3&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;)&lt;/ins&gt;&amp;lt;/&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;math&lt;/ins&gt;&amp;gt;. 
But using the Optimized CW-like algorithms yields &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;O(d&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;^{&lt;/ins&gt;2.373&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;})&lt;/ins&gt;&amp;lt;/&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;math&lt;/ins&gt;&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A lot of the work is about the parallelization of matrix multiplication using hardware like GPU.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A lot of the work is about the parallelization of matrix multiplication using hardware like GPU.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l13&quot; &gt;Line 13:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 13:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Quantum algorithm for sparse linear systems ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Quantum algorithm for sparse linear systems ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[https://arxiv.org/pdf/0811.3171.pdf HHL][https://scholar.google.ch/scholar?hl=en&amp;amp;as_sdt=0%2C5&amp;amp;q=Quantum+algorithm+for+linear+systems+of+equations&amp;amp;btnG= 09] propose quantum algorithms to compute the norm of the solution x to Ax=b. The complexity is O(k&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;sup&amp;gt;&lt;/del&gt;2&amp;lt;/&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;sup&lt;/del&gt;&amp;gt; &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;log d)&lt;/del&gt;, where d is the dimension of x and k is the &amp;lt;em&amp;gt;condition number&amp;lt;/em&amp;gt; (= ||A|| ||A&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;sup&amp;gt;&lt;/del&gt;-1&amp;lt;/&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;sup&lt;/del&gt;&amp;gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;|| &lt;/del&gt;for operator norms). 
This is an exponential quantum speedup with respect to best known classical algorithm (in O(kd)).&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[https://arxiv.org/pdf/0811.3171.pdf HHL][https://scholar.google.ch/scholar?hl=en&amp;amp;as_sdt=0%2C5&amp;amp;q=Quantum+algorithm+for+linear+systems+of+equations&amp;amp;btnG= 09] propose quantum algorithms to compute the norm of the solution &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;x&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;/math&amp;gt; &lt;/ins&gt;to &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;Ax=b&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;/math&amp;gt;&lt;/ins&gt;. The complexity is &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;O(k&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;^&lt;/ins&gt;2 &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;\log d)&lt;/ins&gt;&amp;lt;/&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;math&lt;/ins&gt;&amp;gt;, where &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;d&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;/math&amp;gt; &lt;/ins&gt;is the dimension of &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;x&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;/math&amp;gt; &lt;/ins&gt;and &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;k&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;/math&amp;gt; &lt;/ins&gt;is the &amp;lt;em&amp;gt;condition number&amp;lt;/em&amp;gt; (&lt;ins 
class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;= ||A|| &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;\cdot &lt;/ins&gt;||A&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;^{&lt;/ins&gt;-1&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;}||&lt;/ins&gt;&amp;lt;/&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;math&lt;/ins&gt;&amp;gt; for operator norms). This is an exponential quantum speedup with respect to best known classical algorithm (in &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;O(kd)&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;/math&amp;gt;&lt;/ins&gt;).&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Multiplierless algorithm for Ax=b ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Multiplierless algorithm for Ax=b ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[https://arxiv.org/pdf/1911.12945.pdf ThaoRzepka][https://scholar.google.ch/scholar?hl=en&amp;amp;as_sdt=0%2C5&amp;amp;q=Pseudo-inversion+of+time+encoding+of+bandlimited+signals&amp;amp;btnG= 19] (actually most of it seems to be in an unpublished paper) propose a multiplierless iteration algorithm to solve Ax=b. The basic idea is to decompose it into equations &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;a&amp;lt;sub&amp;gt;i&lt;/del&gt;&amp;lt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;/sub&lt;/del&gt;&amp;gt;x=&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;b&amp;lt;sub&amp;gt;i&lt;/del&gt;&amp;lt;/&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;sub&lt;/del&gt;&amp;gt;. Then we iterate for values of i, and adjust only x_i to obtain a new value of x for which &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;a&lt;/del&gt;&amp;lt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;sub&amp;gt;i&amp;lt;/sub&lt;/del&gt;&amp;gt;x=&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;b&amp;lt;sub&amp;gt;i&lt;/del&gt;&amp;lt;/&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;sub&lt;/del&gt;&amp;gt;. It turns out that if A is symmetric positive definite, then this algorithm converges. 
They show this by factorizing A = MM&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;sup&amp;gt;&lt;/del&gt;T&amp;lt;/&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;sup&lt;/del&gt;&amp;gt; and to show that, in terms of &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;M&lt;/del&gt;&amp;lt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;sup&lt;/del&gt;&amp;gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;T&lt;/del&gt;&amp;lt;/&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;sup&lt;/del&gt;&amp;gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;x&lt;/del&gt;, the dynamic is that of projections onto convex sets (POCS). By relaxing the computation of x_i at each step up to some power of 1/2, they propose a sequence of multiplierless approximate projections that nevertheless converges. Further acceleration can be achieved by choosing the order in which to update i-th coordinates in a &amp;quot;weak greedy&amp;quot; manner.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[https://arxiv.org/pdf/1911.12945.pdf ThaoRzepka][https://scholar.google.ch/scholar?hl=en&amp;amp;as_sdt=0%2C5&amp;amp;q=Pseudo-inversion+of+time+encoding+of+bandlimited+signals&amp;amp;btnG= 19] (actually most of it seems to be in an unpublished paper) propose a multiplierless iteration algorithm to solve &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;Ax=b&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;/math&amp;gt;&lt;/ins&gt;. 
The basic idea is to decompose it into equations &amp;lt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;math&lt;/ins&gt;&amp;gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;a_i &lt;/ins&gt;x=&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;b_i&lt;/ins&gt;&amp;lt;/&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;math&lt;/ins&gt;&amp;gt;. Then we iterate for values of &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;i&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;/math&amp;gt;&lt;/ins&gt;, and adjust only &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;x_i&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;/math&amp;gt; &lt;/ins&gt;to obtain a new value of &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;x&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;/math&amp;gt; &lt;/ins&gt;for which &amp;lt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;math&lt;/ins&gt;&amp;gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;a_i &lt;/ins&gt;x = &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;b_i&lt;/ins&gt;&amp;lt;/&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;math&lt;/ins&gt;&amp;gt;. It turns out that if &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;A&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;/math&amp;gt; &lt;/ins&gt;is symmetric positive definite, then this algorithm converges. 
They show this by factorizing &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;A = MM&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;^&lt;/ins&gt;T&amp;lt;/&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;math&lt;/ins&gt;&amp;gt; and to show that, in terms of &amp;lt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;math&lt;/ins&gt;&amp;gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;M^Tx&lt;/ins&gt;&amp;lt;/&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;math&lt;/ins&gt;&amp;gt;, the dynamic is that of projections onto convex sets (POCS). By relaxing the computation of &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math&amp;gt;&lt;/ins&gt;x_i&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;/math&amp;gt; &lt;/ins&gt;at each step up to some power of 1/2, they propose a sequence of multiplierless approximate projections that nevertheless converges. Further acceleration can be achieved by choosing the order in which to update i-th coordinates in a &amp;quot;weak greedy&amp;quot; manner.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;It's not clear how critical this will be. It turns out that, so far, hardware has been so optimized for float multiplications. But there is a lot of research on specialized hardware. A big question mark is the parallelizability of the algorithm.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;It's not clear how critical this will be. It turns out that, so far, hardware has been so optimized for float multiplications. But there is a lot of research on specialized hardware. A big question mark is the parallelizability of the algorithm.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Lê Nguyên Hoang</name></author>
		
	</entry>
	<entry>
		<id>https://robustlybeneficial.org/wiki/index.php?title=Linear_systems&amp;diff=14&amp;oldid=prev</id>
		<title>Lê Nguyên Hoang: Created page with &quot;This page is about solutions to linear systems Ax=b. We call d the dimension of x.  == Applications ==  Least square regression. PageRank.  == Complexity ==  The complexity of...&quot;</title>
		<link rel="alternate" type="text/html" href="https://robustlybeneficial.org/wiki/index.php?title=Linear_systems&amp;diff=14&amp;oldid=prev"/>
		<updated>2020-01-20T21:27:31Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;This page is about solutions to linear systems Ax=b. We call d the dimension of x.  == Applications ==  Least square regression. PageRank.  == Complexity ==  The complexity of...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;This page is about solutions to linear systems Ax=b. We call d the dimension of x.&lt;br /&gt;
&lt;br /&gt;
== Applications ==&lt;br /&gt;
&lt;br /&gt;
Least squares regression. PageRank.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
The complexity of solving Ax=b is essentially that of matrix multiplication. The naive approach, e.g. Gauss-Jordan elimination, runs in O(d&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;). Optimized Coppersmith-Winograd-like algorithms bring this down to O(d&amp;lt;sup&amp;gt;2.373&amp;lt;/sup&amp;gt;).&lt;br /&gt;
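The O(d^3) cost of the naive approach can be read off directly from the three nested loops of a Gauss-Jordan sweep. A minimal sketch, for illustration only (the function name and the restriction to dense lists of floats are assumptions, not from the sources above):

```python
def solve_gauss_jordan(A, b):
    """Solve Ax = b by Gauss-Jordan elimination with partial pivoting."""
    d = len(A)
    # Augmented matrix [A | b] so row operations update b at the same time.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(d):
        # Partial pivoting: pick the row with the largest entry in this column.
        pivot = max(range(col, d), key=lambda r: abs(M[r][col]))
        if M[pivot][col] == 0.0:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Normalize the pivot row.
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Eliminate this column from every other row: O(d) rows of O(d)
        # entries, repeated for d columns, hence O(d^3) overall.
        for r in range(d):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [M[i][d] for i in range(d)]
```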
&lt;br /&gt;
Much of the practical work is about parallelizing matrix multiplication on hardware such as GPUs.&lt;br /&gt;
&lt;br /&gt;
== Quantum algorithm for sparse linear systems ==&lt;br /&gt;
&lt;br /&gt;
[https://arxiv.org/pdf/0811.3171.pdf HHL][https://scholar.google.ch/scholar?hl=en&amp;amp;as_sdt=0%2C5&amp;amp;q=Quantum+algorithm+for+linear+systems+of+equations&amp;amp;btnG= 09] propose a quantum algorithm to compute the norm of the solution x to Ax=b. The complexity is O(k&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; log d), where d is the dimension of x and k is the &amp;lt;em&amp;gt;condition number&amp;lt;/em&amp;gt; (= ||A|| ||A&amp;lt;sup&amp;gt;-1&amp;lt;/sup&amp;gt;||, in operator norm). This is an exponential quantum speedup over the best known classical algorithms (which run in O(kd)).&lt;br /&gt;
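The condition number k that governs the HHL runtime is easy to compute in small cases: for a symmetric positive definite A, the operator-2-norm condition number ||A|| ||A^-1|| reduces to lambda_max / lambda_min. A hedged sketch for the 2x2 case only (the closed-form eigenvalues make it self-contained; the function name is hypothetical):

```python
import math

def cond_2x2_spd(a, b, d):
    """Condition number lambda_max / lambda_min of the SPD matrix [[a, b], [b, d]]."""
    tr = a + d            # trace = sum of eigenvalues
    det = a * d - b * b   # determinant = product of eigenvalues
    disc = math.sqrt(tr * tr - 4.0 * det)
    lam_max = (tr + disc) / 2.0
    lam_min = (tr - disc) / 2.0
    return lam_max / lam_min
```

For larger matrices one would estimate the extreme eigenvalues iteratively rather than in closed form.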
&lt;br /&gt;
== Multiplierless algorithm for Ax=b ==&lt;br /&gt;
&lt;br /&gt;
[https://arxiv.org/pdf/1911.12945.pdf ThaoRzepka][https://scholar.google.ch/scholar?hl=en&amp;amp;as_sdt=0%2C5&amp;amp;q=Pseudo-inversion+of+time+encoding+of+bandlimited+signals&amp;amp;btnG= 19] (most of the details seem to be in an unpublished paper) propose a multiplierless iterative algorithm to solve Ax=b. The basic idea is to decompose the system into equations a&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;x=b&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;. We then iterate over values of i, adjusting only x&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt; to obtain a new value of x for which a&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;x=b&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt; holds. It turns out that if A is symmetric positive definite, then this algorithm converges. They show this by factorizing A = MM&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt; and showing that, in terms of M&amp;lt;sup&amp;gt;T&amp;lt;/sup&amp;gt;x, the dynamic is that of projections onto convex sets (POCS). By relaxing the computation of x&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt; at each step, up to some power of 1/2, they obtain a sequence of multiplierless approximate projections that nevertheless converges. Further acceleration can be achieved by choosing the order in which coordinates are updated in a &amp;quot;weak greedy&amp;quot; manner.&lt;br /&gt;
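In its exact (non-multiplierless) form, the coordinate iteration described above — satisfy equation i by adjusting only x_i — is a Gauss-Seidel-style sweep, which also converges for symmetric positive definite A. A minimal sketch of that exact variant, for intuition only; it does not implement the paper's power-of-1/2 relaxation or weak-greedy ordering, and the fixed sweep count is an assumption:

```python
def coordinate_solve(A, b, sweeps=100):
    """Cycle over coordinates, fixing equation i by adjusting x_i only."""
    d = len(b)
    x = [0.0] * d
    for _ in range(sweeps):
        for i in range(d):
            # Contribution of the other coordinates to equation i, then
            # set x_i so that a_i . x = b_i holds exactly.
            s = sum(A[i][j] * x[j] for j in range(d) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

The multiplierless version replaces this exact division-based update by shift-and-add approximations, which is where the relaxation analysis is needed.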
&lt;br /&gt;
It is not clear how important this will be in practice: hardware has so far been heavily optimized for floating-point multiplication, but there is a lot of research on specialized hardware. A big open question is the parallelizability of the algorithm.&lt;/div&gt;</summary>
		<author><name>Lê Nguyên Hoang</name></author>
		
	</entry>
</feed>