

Unless you have very large sparse matrices (e.g. > 1000 x 1000 with most matrix elements 0), there is only one recommendable method: Penrose's pseudo-inversion. It works for arbitrary m x n matrices and in all cases yields a meaningful result, which for invertible matrices reduces to the usual matrix inverse. The method is a simple derivative of the deep and insightful singular value decomposition (SVD), and Mathematica implements it so that it also works for complex-valued matrices.

In the many cases in which I made comparisons, pseudo-inversion was always much faster and much more accurate than LU decomposition or Cholesky decomposition. With the classical methods (such as the one mentioned in the first answer) you always have a problem getting accurate results for matrices with nearly linearly dependent rows or columns.

Most books on linear algebra don't mention the method; you have to study the SVD if you want to know the state of the art in inverting matrices. Matrix inversion is a good example of the conservatism in science.
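As a quick illustration of these claims (sketched here in Python with NumPy, which exposes the same Moore-Penrose pseudo-inverse that Mathematica provides as `PseudoInverse`): for an invertible matrix the pseudo-inverse agrees with the ordinary inverse, and for a rectangular matrix, where `inv` is not even defined, it still returns a meaningful result.

```python
import numpy as np

# For an invertible matrix, the pseudo-inverse reduces to the
# usual matrix inverse.
B = np.array([[2.0, 1.0],
              [1.0, 3.0]])
assert np.allclose(np.linalg.pinv(B), np.linalg.inv(B))

# For a rectangular 3 x 2 matrix, inv() does not apply, but the
# pseudo-inverse still yields a meaningful 2 x 3 result that
# satisfies the Penrose condition C @ C_pinv @ C == C.
C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
C_pinv = np.linalg.pinv(C)
print(np.allclose(C @ C_pinv @ C, C))  # True
```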

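The connection to the SVD can be made concrete: factor the matrix, invert only the singular values above a tolerance, and multiply the factors back in reverse order. Discarding the negligible singular values is exactly what keeps the result accurate for matrices with nearly linearly dependent rows or columns. A minimal sketch (the function name and tolerance value are my own, chosen for illustration):

```python
import numpy as np

def pseudo_inverse(A, rtol=1e-12):
    """Pseudo-inverse via the SVD: A = U diag(s) Vt, so
    pinv(A) = V diag(1/s) Ut, with tiny singular values zeroed
    instead of inverted."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    cutoff = rtol * s.max()
    s_inv = np.zeros_like(s)
    keep = s > cutoff
    s_inv[keep] = 1.0 / s[keep]
    # diag(s_inv) @ U.T, written as a row-wise scaling.
    return Vt.T @ (s_inv[:, None] * U.T)

# Nearly linearly dependent rows: row 2 is almost twice row 1.
# Classical elimination amplifies rounding error here; the SVD
# route simply drops the near-zero singular value.
A = np.array([[1.0, 2.0],
              [2.0, 4.0 + 1e-14]])
X = pseudo_inverse(A)
print(np.allclose(A @ X @ A, A))  # True
```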