The commonly used vector norms and matrix norms include

\begin{align} \|x\|_p =& \left(\sum_i|x_i|^p \right)^{1/p}\\ \|x\|_\infty =& \max_i |x_i| \\ \|A\|_F = & \left(\sum_{i,j}|a_{ij}|^2 \right)^{1/2}\\ \|A\|_1 = & \max_j \sum_i |a_{ij}| \\ \|A\|_\infty = & \max_i \sum_j |a_{ij}| \\ \|A\|_2 = & \sqrt{\lambda_{\max} (A^TA)} \\ \|A\|_p = & \max_{x\neq0}\frac{\|Ax\|_p}{\|x\|_p} \end{align}
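The definitions above can be checked numerically. The following sketch (assuming NumPy) evaluates each norm from its definition and can be compared against `np.linalg.norm`:

```python
import numpy as np

x = np.array([3.0, -4.0, 12.0])
A = np.array([[1.0, -2.0],
              [3.0, 4.0]])

# Vector norms from the definitions
p2 = (np.abs(x) ** 2).sum() ** 0.5              # ||x||_2
pinf = np.abs(x).max()                          # ||x||_inf

# Matrix norms from the definitions
fro = np.sqrt((np.abs(A) ** 2).sum())           # Frobenius norm
one_norm = np.abs(A).sum(axis=0).max()          # max absolute column sum
inf_norm = np.abs(A).sum(axis=1).max()          # max absolute row sum
two_norm = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())  # sqrt of lambda_max(A^T A)
```

Each quantity agrees with the corresponding `np.linalg.norm(..., ord=...)` call, since `ord=1`, `ord=2`, and `ord=np.inf` give exactly the induced matrix norms above.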

For error analysis, we need the following two inequalities

\begin{align} \|Ax\|_p \le& \|A\|_p \|x\|_p \\ \|AB\|_p \le& \|A\|_p \|B\|_p \end{align}
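As a quick sanity check, these two inequalities can be verified on random matrices (a sketch assuming NumPy; the small tolerance guards against floating-point rounding):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

# Check ||Ax||_p <= ||A||_p ||x||_p and ||AB||_p <= ||A||_p ||B||_p
# for the induced norms p = 1, 2, inf.
checks = []
for p in (1, 2, np.inf):
    checks.append(np.linalg.norm(A @ x, p)
                  <= np.linalg.norm(A, p) * np.linalg.norm(x, p) + 1e-12)
    checks.append(np.linalg.norm(A @ B, p)
                  <= np.linalg.norm(A, p) * np.linalg.norm(B, p) + 1e-12)
```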

Let us first look at a simple case where the matrix $$A$$ is known without error, i.e.,

\begin{align} A(x+\delta_x) = b+\delta_b, \end{align}

where $$\delta_x$$ and $$\delta_b$$ denote the errors in $$x$$ and $$b$$, respectively.

Since $$\|b\| = \|Ax\| \le \|A\|\|x\|$$, the original equation $$Ax=b$$ gives rise to

\begin{align} \frac{1}{\|x\|}\le \frac{\|A\|}{\|b\|}. \end{align}

Subtracting $$Ax=b$$ from the perturbed equation gives $$A\delta_x = \delta_b$$, so $$\|\delta_x\| \le \|A^{-1}\|\|\delta_b\|$$. Combining the two inequalities, we get the following error propagation

\begin{align} \frac{\|\delta_x\|}{\|x\|} \le \kappa_p(A) \frac{\|\delta_b\|}{\|b\|}, \end{align}

where $$\kappa_p(A) = \|A\|_p\|A^{-1}\|_p$$ is the condition number of $$A$$. It is convenient to use $$p=2$$, for which \begin{align} \kappa_2(A) = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}, \end{align} where $$\sigma_{\max}(A)$$ and $$\sigma_{\min}(A)$$ are the largest and smallest singular values of $$A$$.
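This bound is easy to probe numerically. The sketch below (assuming NumPy) builds a linear system, perturbs only $$b$$, and checks that the relative error in $$x$$ stays within $$\kappa_2(A)$$ times the relative error in $$b$$; a small slack factor absorbs the rounding error of the solver itself:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
x = rng.standard_normal(5)
b = A @ x

# Perturb the right-hand side only; A is assumed exact here.
delta_b = 1e-8 * rng.standard_normal(5)
x_pert = np.linalg.solve(A, b + delta_b)
delta_x = x_pert - x

# kappa_2(A) = sigma_max / sigma_min
s = np.linalg.svd(A, compute_uv=False)
kappa2 = s[0] / s[-1]

rel_x = np.linalg.norm(delta_x) / np.linalg.norm(x)
rel_b = np.linalg.norm(delta_b) / np.linalg.norm(b)
```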

Now we consider the general case where $$A$$ itself can also carry error,

\begin{align} (A+\delta_A) (x+\delta_x) = b+\delta_b. \end{align}

If $$\|A^{-1}\|\|\delta_A\|<1$$, then

\begin{align} \|\left(I+A^{-1}\delta_A \right)^{-1} \| \le \frac{1}{1-\|A^{-1}\|\|\delta_A\|}, \end{align}

and we get the following error propagation

\begin{align} \frac{\|\delta_x\|}{\|x\|} \le \frac{\kappa(A)}{1-\kappa(A)\frac{\|\delta_A\|}{\|A\|}}\left( \frac{\|\delta_A\|}{\|A\|}+ \frac{\|\delta_b\|}{\|b\|}\right). \end{align}
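The general bound can be checked the same way. The sketch below (assuming NumPy) perturbs both $$A$$ and $$b$$ by small random amounts, keeping $$\kappa(A)\,\|\delta_A\|/\|A\|$$ well below one so the denominator stays positive:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)
b = A @ x

# Small perturbations of both A and b.
delta_A = 1e-6 * rng.standard_normal((n, n))
delta_b = 1e-6 * rng.standard_normal(n)
x_pert = np.linalg.solve(A + delta_A, b + delta_b)
delta_x = x_pert - x

kappa = np.linalg.cond(A, 2)
rA = np.linalg.norm(delta_A, 2) / np.linalg.norm(A, 2)
rb = np.linalg.norm(delta_b) / np.linalg.norm(b)

# Right-hand side of the error-propagation bound (requires kappa * rA < 1).
bound = kappa / (1 - kappa * rA) * (rA + rb)
rel_x = np.linalg.norm(delta_x) / np.linalg.norm(x)
```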