diff --git a/src/cgls.jl b/src/cgls.jl
index b0c84c9a2..78b1632e6 100644
--- a/src/cgls.jl
+++ b/src/cgls.jl
@@ -42,7 +42,7 @@ Solve the regularized linear least-squares problem

     minimize ‖b - Ax‖₂² + λ‖x‖₂²

-of size n × m using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization
+of size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization
 parameter. This method is equivalent to applying CG to the normal equations

     (AᴴA + λI) x = Aᴴb
@@ -58,12 +58,12 @@ and `false` otherwise.

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m.

 #### Output arguments

-* `x`: a dense vector of length m;
+* `x`: a dense vector of length n;
 * `stats`: statistics collected on the run in a [`SimpleStats`](@ref) structure.

 #### References
diff --git a/src/cgne.jl b/src/cgne.jl
index ca7a95565..f1e61481d 100644
--- a/src/cgne.jl
+++ b/src/cgne.jl
@@ -42,7 +42,7 @@ Solve the consistent linear system

     Ax + √λs = b

-of size n × m using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization
+of size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization
 parameter. This method is equivalent to applying CG to the normal equations
 of the second kind
@@ -67,12 +67,12 @@ and `false` otherwise.

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m.

 #### Output arguments

-* `x`: a dense vector of length m;
+* `x`: a dense vector of length n;
 * `stats`: statistics collected on the run in a [`SimpleStats`](@ref) structure.
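With the corrected convention above, `A` is m × n, `b` has length m, and the solution `x` has length n. A minimal dense sketch of the normal equations (AᴴA + λI) x = Aᴴb that CGLS solves implicitly — not the Krylov.jl API, just the underlying algebra:

```julia
using LinearAlgebra, Random

Random.seed!(0)

m, n = 5, 3          # A is m × n, so b lives in ℝᵐ and x in ℝⁿ
A = randn(m, n)
b = randn(m)
λ = 0.1

# Regularized least-squares solution via a dense normal-equations solve.
x = (A' * A + λ * I) \ (A' * b)

length(x) == n                              # x has length n, as documented
# First-order optimality of min ‖b - Ax‖₂² + λ‖x‖₂²: Aᴴ(b - Ax) = λx.
norm(A' * (b - A * x) - λ * x) ≤ 1e-10
```

Iterative methods such as CGLS reach the same `x` without ever forming AᴴA.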
 #### References
diff --git a/src/craig.jl b/src/craig.jl
index 02ae8f8c6..756d311a4 100644
--- a/src/craig.jl
+++ b/src/craig.jl
@@ -47,7 +47,7 @@ Find the least-norm solution of the consistent linear system

     Ax + λ²y = b

-of size n × m using the Golub-Kahan implementation of Craig's method, where λ ≥ 0 is a
+of size m × n using the Golub-Kahan implementation of Craig's method, where λ ≥ 0 is a
 regularization parameter.
 This method is equivalent to CGNE but is more stable.
@@ -91,13 +91,13 @@ and `false` otherwise.

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m.

 #### Output arguments

-* `x`: a dense vector of length m;
-* `y`: a dense vector of length n;
+* `x`: a dense vector of length n;
+* `y`: a dense vector of length m;
 * `stats`: statistics collected on the run in a [`SimpleStats`](@ref) structure.

 #### References
diff --git a/src/craigmr.jl b/src/craigmr.jl
index 57a499350..fc0a38e89 100644
--- a/src/craigmr.jl
+++ b/src/craigmr.jl
@@ -40,7 +40,7 @@ Solve the consistent linear system

     Ax + λ²y = b

-of size n × m using the CRAIGMR method, where λ ≥ 0 is a regularization parameter.
+of size m × n using the CRAIGMR method, where λ ≥ 0 is a regularization parameter.
 This method is equivalent to applying the Conjugate Residuals method
 to the normal equations of the second kind
@@ -87,13 +87,13 @@ and `false` otherwise.

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m.

 #### Output arguments

-* `x`: a dense vector of length m;
-* `y`: a dense vector of length n;
+* `x`: a dense vector of length n;
+* `y`: a dense vector of length m;
 * `stats`: statistics collected on the run in a [`SimpleStats`](@ref) structure.
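The corrected output dimensions for CRAIG/CRAIGMR (`x` of length n, `y` of length m) can be checked against the normal equations of the second kind that both methods are built on: for a consistent underdetermined system, solve AAᴴy = b and recover x = Aᴴy. A dense sketch with λ = 0, assuming full row rank:

```julia
using LinearAlgebra, Random

Random.seed!(0)

m, n = 3, 5          # m < n: an underdetermined but consistent system
A = randn(m, n)
b = randn(m)

y = (A * A') \ b     # y has length m, matching the corrected docstring
x = A' * y           # x has length n: the minimum-norm solution of Ax = b

norm(A * x - b) ≤ 1e-10          # x solves the system
norm(x - pinv(A) * b) ≤ 1e-10    # and coincides with the pseudoinverse solution
```

CRAIG and CRAIGMR obtain the same `x` iteratively without forming AAᴴ, which is why they are the stable choices in practice.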
 #### References
diff --git a/src/crls.jl b/src/crls.jl
index c57e6a503..bbfd116cb 100644
--- a/src/crls.jl
+++ b/src/crls.jl
@@ -34,7 +34,7 @@ Solve the linear least-squares problem

     minimize ‖b - Ax‖₂² + λ‖x‖₂²

-of size n × m using the Conjugate Residuals (CR) method.
+of size m × n using the Conjugate Residuals (CR) method.
 This method is equivalent to applying MINRES to the normal equations

     (AᴴA + λI) x = Aᴴb.
@@ -50,12 +50,12 @@ and `false` otherwise.

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m.

 #### Output arguments

-* `x`: a dense vector of length m;
+* `x`: a dense vector of length n;
 * `stats`: statistics collected on the run in a [`SimpleStats`](@ref) structure.

 #### Reference
diff --git a/src/crmr.jl b/src/crmr.jl
index b624b8a53..b7e236950 100644
--- a/src/crmr.jl
+++ b/src/crmr.jl
@@ -40,7 +40,7 @@ Solve the consistent linear system

     Ax + √λs = b

-of size n × m using the Conjugate Residual (CR) method, where λ ≥ 0 is a regularization
+of size m × n using the Conjugate Residual (CR) method, where λ ≥ 0 is a regularization
 parameter. This method is equivalent to applying CR to the normal equations
 of the second kind
@@ -65,12 +65,12 @@ and `false` otherwise.

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m.

 #### Output arguments

-* `x`: a dense vector of length m;
+* `x`: a dense vector of length n;
 * `stats`: statistics collected on the run in a [`SimpleStats`](@ref) structure.

 #### References
diff --git a/src/gpmr.jl b/src/gpmr.jl
index a2990ab2a..139643c85 100644
--- a/src/gpmr.jl
+++ b/src/gpmr.jl
@@ -22,10 +22,11 @@ export gpmr, gpmr!

 `T` is an `AbstractFloat` such as `Float32`, `Float64` or `BigFloat`.
 `FC` is `T` or `Complex{T}`.

+Given matrices `A` of dimension m × n and `B` of dimension n × m,
 GPMR solves the unsymmetric partitioned linear system

-    [ λI  A ] [ x ] = [ b ]
-    [ B  μI ] [ y ]   [ c ],
+    [ λIₘ  A  ] [ x ] = [ b ]
+    [ B   μIₙ ] [ y ]   [ c ],

 of size (n+m) × (n+m) where λ and μ are real or complex numbers.
 `A` can have any shape and `B` has the shape of `Aᴴ`.
@@ -69,15 +70,15 @@ and `false` otherwise.

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `B`: a linear operator that models a matrix of dimension m × n;
-* `b`: a vector of length n;
-* `c`: a vector of length m.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `B`: a linear operator that models a matrix of dimension n × m;
+* `b`: a vector of length m;
+* `c`: a vector of length n.

 #### Output arguments

-* `x`: a dense vector of length n;
-* `y`: a dense vector of length m;
+* `x`: a dense vector of length m;
+* `y`: a dense vector of length n;
 * `stats`: statistics collected on the run in a [`SimpleStats`](@ref) structure.

 #### Reference
diff --git a/src/krylov_processes.jl b/src/krylov_processes.jl
index c434a51b3..deaf6abba 100644
--- a/src/krylov_processes.jl
+++ b/src/krylov_processes.jl
@@ -224,14 +224,14 @@ end

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n;
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m;
 * `k`: the number of iterations of the Golub-Kahan process.

 #### Output arguments

-* `V`: a dense m × (k+1) matrix;
-* `U`: a dense n × (k+1) matrix;
+* `V`: a dense n × (k+1) matrix;
+* `U`: a dense m × (k+1) matrix;
 * `L`: a sparse (k+1) × (k+1) lower bidiagonal matrix.
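The corrected Golub-Kahan dimensions above (`U` of size m × (k+1), `V` of size n × (k+1) for `A` of size m × n) can be illustrated with a bare-bones version of the recurrence. This is a hedged sketch without reorthogonalization, not the Krylov.jl implementation:

```julia
using LinearAlgebra, Random

Random.seed!(0)

# k steps of the Golub-Kahan bidiagonalization of A started from b:
# β₁u₁ = b, α₁v₁ = Aᴴu₁, then
# βⱼ₊₁uⱼ₊₁ = Avⱼ - αⱼuⱼ and αⱼ₊₁vⱼ₊₁ = Aᴴuⱼ₊₁ - βⱼ₊₁vⱼ.
function golub_kahan_sketch(A, b, k)
    m, n = size(A)
    U = zeros(m, k + 1)
    V = zeros(n, k + 1)
    β = norm(b);           U[:, 1] = b / β
    v = A' * U[:, 1]
    α = norm(v);           V[:, 1] = v / α
    for j in 1:k
        u = A * V[:, j] - α * U[:, j]
        β = norm(u);       U[:, j + 1] = u / β
        v = A' * U[:, j + 1] - β * V[:, j]
        α = norm(v);       V[:, j + 1] = v / α
    end
    return V, U
end

m, n, k = 7, 5, 3
A = randn(m, n)
b = randn(m)
V, U = golub_kahan_sketch(A, b, k)

size(V) == (n, k + 1) && size(U) == (m, k + 1)   # the corrected shapes
norm(U' * U - I) ≤ 1e-8 && norm(V' * V - I) ≤ 1e-8  # orthonormal columns
```

In exact arithmetic the columns of `U` and `V` stay orthonormal; for a few steps in `Float64` the loss of orthogonality is negligible, as the final check confirms.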
 #### Reference
@@ -297,16 +297,16 @@ end

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n;
-* `c`: a vector of length m;
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m;
+* `c`: a vector of length n;
 * `k`: the number of iterations of the Saunders-Simon-Yip process.

 #### Output arguments

-* `V`: a dense n × (k+1) matrix;
+* `V`: a dense m × (k+1) matrix;
 * `T`: a sparse (k+1) × k tridiagonal matrix;
-* `U`: a dense m × (k+1) matrix;
+* `U`: a dense n × (k+1) matrix;
 * `Tᴴ`: a sparse (k+1) × k tridiagonal matrix.

 #### Reference
@@ -387,17 +387,17 @@ end

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `B`: a linear operator that models a matrix of dimension m × n;
-* `b`: a vector of length n;
-* `c`: a vector of length m;
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `B`: a linear operator that models a matrix of dimension n × m;
+* `b`: a vector of length m;
+* `c`: a vector of length n;
 * `k`: the number of iterations of the Montoison-Orban process.

 #### Output arguments

-* `V`: a dense n × (k+1) matrix;
+* `V`: a dense m × (k+1) matrix;
 * `H`: a sparse (k+1) × k upper Hessenberg matrix;
-* `U`: a dense m × (k+1) matrix;
+* `U`: a dense n × (k+1) matrix;
 * `F`: a sparse (k+1) × k upper Hessenberg matrix.

 #### Reference
diff --git a/src/lnlq.jl b/src/lnlq.jl
index 8a5742b82..0611712e2 100644
--- a/src/lnlq.jl
+++ b/src/lnlq.jl
@@ -38,7 +38,7 @@ Find the least-norm solution of the consistent linear system

     Ax + λ²y = b

-of size n × m using the LNLQ method, where λ ≥ 0 is a regularization parameter.
+of size m × n using the LNLQ method, where λ ≥ 0 is a regularization parameter.
 For a system in the form Ax = b, LNLQ method is equivalent to applying
 SYMMLQ to AAᴴy = b and recovering x = Aᴴy but is more stable.
@@ -84,13 +84,13 @@ and `false` otherwise.
 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m.

 #### Output arguments

-* `x`: a dense vector of length m;
-* `y`: a dense vector of length n;
+* `x`: a dense vector of length n;
+* `y`: a dense vector of length m;
 * `stats`: statistics collected on the run in a [`LNLQStats`](@ref) structure.

 #### Reference
diff --git a/src/lslq.jl b/src/lslq.jl
index a3aa6994e..1caebdd48 100644
--- a/src/lslq.jl
+++ b/src/lslq.jl
@@ -38,7 +38,7 @@ Solve the regularized linear least-squares problem

     minimize ‖b - Ax‖₂² + λ²‖x‖₂²

-of size n × m using the LSLQ method, where λ ≥ 0 is a regularization parameter.
+of size m × n using the LSLQ method, where λ ≥ 0 is a regularization parameter.
 LSLQ is formally equivalent to applying SYMMLQ to the normal equations

     (AᴴA + λ²I) x = Aᴴb
@@ -83,8 +83,8 @@ In this case, `N` can still be specified and indicates the weighted norm in whic

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m.

 #### Keyword arguments
@@ -105,7 +105,7 @@ In this case, `N` can still be specified and indicates the weighted norm in whic

 #### Output arguments

-* `x`: a dense vector of length m;
+* `x`: a dense vector of length n;
 * `stats`: statistics collected on the run in a [`LSLQStats`](@ref) structure.
 * `stats.err_lbnds` is a vector of lower bounds on the LQ error---the vector is empty if `window` is set to zero
diff --git a/src/lsmr.jl b/src/lsmr.jl
index 930443808..39bbf3367 100644
--- a/src/lsmr.jl
+++ b/src/lsmr.jl
@@ -43,7 +43,7 @@ Solve the regularized linear least-squares problem

     minimize ‖b - Ax‖₂² + λ²‖x‖₂²

-of size n × m using the LSMR method, where λ ≥ 0 is a regularization parameter.
+of size m × n using the LSMR method, where λ ≥ 0 is a regularization parameter.
 LSMR is formally equivalent to applying MINRES to the normal equations

     (AᴴA + λ²I) x = Aᴴb
@@ -90,12 +90,12 @@ and `false` otherwise.

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m.

 #### Output arguments

-* `x`: a dense vector of length m;
+* `x`: a dense vector of length n;
 * `stats`: statistics collected on the run in a [`LsmrStats`](@ref) structure.

 #### Reference
diff --git a/src/lsqr.jl b/src/lsqr.jl
index 80ca8003c..7dad61896 100644
--- a/src/lsqr.jl
+++ b/src/lsqr.jl
@@ -42,7 +42,7 @@ Solve the regularized linear least-squares problem

     minimize ‖b - Ax‖₂² + λ²‖x‖₂²

-of size n × m using the LSQR method, where λ ≥ 0 is a regularization parameter.
+of size m × n using the LSQR method, where λ ≥ 0 is a regularization parameter.
 LSQR is formally equivalent to applying CG to the normal equations

     (AᴴA + λ²I) x = Aᴴb
@@ -85,12 +85,12 @@ and `false` otherwise.

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m.

 #### Output arguments

-* `x`: a dense vector of length m;
+* `x`: a dense vector of length n;
 * `stats`: statistics collected on the run in a [`SimpleStats`](@ref) structure.

 #### Reference
diff --git a/src/tricg.jl b/src/tricg.jl
index 6788a0026..578c7d07e 100644
--- a/src/tricg.jl
+++ b/src/tricg.jl
@@ -22,7 +22,7 @@ export tricg, tricg!

 `T` is an `AbstractFloat` such as `Float32`, `Float64` or `BigFloat`.
 `FC` is `T` or `Complex{T}`.

-TriCG solves the symmetric linear system
+Given a matrix `A` of dimension m × n, TriCG solves the symmetric linear system

     [ τE  A ] [ x ] = [ b ]
     [ Aᴴ νF ] [ y ]   [ c ],
@@ -64,14 +64,14 @@ and `false` otherwise.
 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n;
-* `c`: a vector of length m.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m;
+* `c`: a vector of length n.

 #### Output arguments

-* `x`: a dense vector of length n;
-* `y`: a dense vector of length m;
+* `x`: a dense vector of length m;
+* `y`: a dense vector of length n;
 * `stats`: statistics collected on the run in a [`SimpleStats`](@ref) structure.

 #### Reference
diff --git a/src/trilqr.jl b/src/trilqr.jl
index bb279e947..ab231e42f 100644
--- a/src/trilqr.jl
+++ b/src/trilqr.jl
@@ -26,8 +26,8 @@ Combine USYMLQ and USYMQR to solve adjoint systems.

     [0  A] [y] = [b]
     [Aᴴ 0] [x]   [c]

-USYMLQ is used for solving primal system `Ax = b` of size n.
-USYMQR is used for solving dual system `Aᴴy = c` of size m.
+USYMLQ is used for solving primal system `Ax = b` of size m × n.
+USYMQR is used for solving dual system `Aᴴy = c` of size n × m.

 An option gives the possibility of transferring from the USYMLQ point to the USYMCG point, when it exists.
 The transfer is based on the residual norm.
@@ -43,14 +43,14 @@ and `false` otherwise.

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n;
-* `c`: a vector of length m.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m;
+* `c`: a vector of length n.

 #### Output arguments

-* `x`: a dense vector of length m;
-* `y`: a dense vector of length n;
+* `x`: a dense vector of length n;
+* `y`: a dense vector of length m;
 * `stats`: statistics collected on the run in a [`AdjointStats`](@ref) structure.

 #### Reference
diff --git a/src/trimr.jl b/src/trimr.jl
index 90ee54387..82e22b6cf 100644
--- a/src/trimr.jl
+++ b/src/trimr.jl
@@ -22,7 +22,7 @@ export trimr, trimr!

 `T` is an `AbstractFloat` such as `Float32`, `Float64` or `BigFloat`.
 `FC` is `T` or `Complex{T}`.
-TriMR solves the symmetric linear system
+Given a matrix `A` of dimension m × n, TriMR solves the symmetric linear system

     [ τE  A ] [ x ] = [ b ]
     [ Aᴴ νF ] [ y ]   [ c ],
@@ -64,14 +64,14 @@ and `false` otherwise.

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n;
-* `c`: a vector of length m.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m;
+* `c`: a vector of length n.

 #### Output arguments

-* `x`: a dense vector of length n;
-* `y`: a dense vector of length m;
+* `x`: a dense vector of length m;
+* `y`: a dense vector of length n;
 * `stats`: statistics collected on the run in a [`SimpleStats`](@ref) structure.

 #### Reference
diff --git a/src/usymlq.jl b/src/usymlq.jl
index e89a400c2..357498973 100644
--- a/src/usymlq.jl
+++ b/src/usymlq.jl
@@ -28,7 +28,7 @@ export usymlq, usymlq!

 `T` is an `AbstractFloat` such as `Float32`, `Float64` or `BigFloat`.
 `FC` is `T` or `Complex{T}`.

-Solve the linear system Ax = b of size n × m using the USYMLQ method.
+Solve the linear system Ax = b of size m × n using the USYMLQ method.

 USYMLQ is based on the orthogonal tridiagonalization process and requires two initial nonzero vectors `b` and `c`.
 The vector `c` is only used to initialize the process and a default value can be `b` or `Aᴴb` depending on the shape of `A`.
@@ -52,13 +52,13 @@ and `false` otherwise.

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n;
-* `c`: a vector of length m.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m;
+* `c`: a vector of length n.

 #### Output arguments

-* `x`: a dense vector of length m;
+* `x`: a dense vector of length n;
 * `stats`: statistics collected on the run in a [`SimpleStats`](@ref) structure.
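The USYMLQ docstring above notes that the auxiliary vector `c` can default to `b` or `Aᴴb` depending on the shape of `A`. With the corrected convention (`b` of length m, `c` of length n for `A` of size m × n), a hypothetical helper illustrating why both defaults have the right length:

```julia
using LinearAlgebra, Random

Random.seed!(0)

# Hypothetical helper (not part of Krylov.jl): pick a shape-compatible
# default for c. If A is square, b already has length n; otherwise Aᴴb does.
default_c(A, b) = size(A, 1) == size(A, 2) ? b : A' * b

A1 = randn(4, 4); b1 = randn(4)   # square: m = n = 4
A2 = randn(4, 6); b2 = randn(4)   # rectangular: m = 4, n = 6

length(default_c(A1, b1)) == 4    # c = b has length n = m
length(default_c(A2, b2)) == 6    # c = Aᴴb has length n
```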
 #### References
diff --git a/src/usymqr.jl b/src/usymqr.jl
index 6dfef55fb..7705d0d7f 100644
--- a/src/usymqr.jl
+++ b/src/usymqr.jl
@@ -28,7 +28,7 @@ export usymqr, usymqr!

 `T` is an `AbstractFloat` such as `Float32`, `Float64` or `BigFloat`.
 `FC` is `T` or `Complex{T}`.

-Solve the linear system Ax = b of size n × m using USYMQR.
+Solve the linear system Ax = b of size m × n using USYMQR.

 USYMQR is based on the orthogonal tridiagonalization process and requires two initial nonzero vectors `b` and `c`.
 The vector `c` is only used to initialize the process and a default value can be `b` or `Aᴴb` depending on the shape of `A`.
@@ -49,13 +49,13 @@ and `false` otherwise.

 #### Input arguments

-* `A`: a linear operator that models a matrix of dimension n × m;
-* `b`: a vector of length n;
-* `c`: a vector of length m.
+* `A`: a linear operator that models a matrix of dimension m × n;
+* `b`: a vector of length m;
+* `c`: a vector of length n.

 #### Output arguments

-* `x`: a dense vector of length m;
+* `x`: a dense vector of length n;
 * `stats`: statistics collected on the run in a [`SimpleStats`](@ref) structure.

 #### References
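As a closing sanity check for the TriCG/TriMR sections earlier in this patch: with `A` of size m × n, `b` of length m and `c` of length n, the partitioned matrix [τE A; Aᴴ νF] is square of order m + n, `x` has length m and `y` has length n. A dense sketch with E = Iₘ, F = Iₙ, τ = 1 and ν = -1 (a symmetric quasi-definite choice, so the system is nonsingular):

```julia
using LinearAlgebra, Random

Random.seed!(0)

m, n = 4, 3
A = randn(m, n)
b = randn(m)
c = randn(n)
τ, ν = 1.0, -1.0

# The (m+n) × (m+n) symmetric partitioned system solved by TriCG/TriMR.
K = [τ * Matrix(I, m, m) A; A' ν * Matrix(I, n, n)]
z = K \ [b; c]
x, y = z[1:m], z[m+1:end]   # x has length m, y has length n

issymmetric(K)                        # the block structure is symmetric
norm(τ * x + A * y - b) ≤ 1e-10       # first block row (E = I)
norm(A' * x + ν * y - c) ≤ 1e-10      # second block row (F = I)
```

TriCG and TriMR exploit this structure iteratively instead of assembling `K`, which is the whole point when `A` is only available as an operator.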