EQSLV

EQSLV, Lab, TOLER, MULT, --, KeepFile
Specifies the type of equation solver.

Compatible Products: DesSpc | Pro | Premium | Enterprise | Ent PP | Ent Solver | –

Lab

Equation solver type:

SPARSE —

Sparse direct equation solver. Applicable to real-valued or complex-valued symmetric and unsymmetric matrices. Available only for STATIC, HARMIC (full method only), TRANS (full method only), SUBSTR, and PSD spectrum analysis types [ANTYPE]. Can be used for linear and nonlinear analyses, and is especially effective for nonlinear analyses, where indefinite matrices are frequently encountered. Well suited for contact analysis, where contact status alters the mesh topology. Other typical well-suited applications are: (a) models consisting of shell/beam or shell/beam and solid elements, and (b) models with a multi-branch structure, such as an automobile exhaust or a turbine fan. This solver is an alternative to the iterative solvers because it combines both speed and robustness. It generally requires considerably more memory (~10x) than the PCG solver to obtain optimal performance (running fully in-core). When memory is limited, the solver works partly in-core and partly out-of-core, which can noticeably slow performance. See the BCSOPTION command for more details on the various modes of operation for this solver.

This solver can be run in shared memory parallel or distributed memory parallel (Distributed ANSYS) mode. When used in Distributed ANSYS, this solver preserves all of the merits of the classic (shared memory) sparse solver. The total memory usage (summed across all processes) is usually higher than that of the shared memory sparse solver. System configuration also affects the performance of the distributed memory parallel solver. If enough physical memory is available, running this solver in the in-core memory mode achieves optimal performance. The ideal configuration for the out-of-core memory mode is one processor per machine on multiple machines (a cluster), spreading the I/O across the hard drives of each machine, assuming that a high-speed network such as InfiniBand is used to efficiently support all communication across the machines.

This solver supports use of the GPU accelerator capability.
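For example, an illustrative input fragment selecting this solver for one of the supported analysis types (the sequence is a sketch, not taken from the original reference):

```
/SOLU               ! Enter the solution processor
ANTYPE,STATIC       ! Static analysis, one of the types supported by SPARSE
EQSLV,SPARSE        ! Select the sparse direct equation solver
```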

JCG —

Jacobi Conjugate Gradient iterative equation solver. Available only for STATIC, HARMIC (full method only), and TRANS (full method only) analysis types [ANTYPE]. Can be used for structural, thermal, and multiphysics applications. Applicable for symmetric, unsymmetric, complex, definite, and indefinite matrices. Recommended for 3-D harmonic analyses in structural and multiphysics applications. Efficient for heat transfer, electromagnetics, piezoelectrics, and acoustic field problems.

This solver can be run in shared memory parallel or distributed memory parallel (Distributed ANSYS) mode. When used in Distributed ANSYS, in addition to the limitations listed above, this solver only runs in a distributed parallel fashion for STATIC and TRANS (full method) analyses in which the stiffness is symmetric and only when not using the fast thermal option (THOPT). Otherwise, this solver runs in shared memory parallel mode inside Distributed ANSYS.

This solver supports use of the GPU accelerator capability. When using the GPU accelerator capability, in addition to the limitations listed above, this solver is available only for STATIC and TRANS (full method) analyses where the stiffness is symmetric and does not support the fast thermal option (THOPT).

ICCG —

Incomplete Cholesky Conjugate Gradient iterative equation solver. Available for STATIC, HARMIC (full method only), and TRANS (full method only) analysis types [ANTYPE]. Can be used for structural, thermal, and multiphysics applications, and for symmetric, unsymmetric, complex, definite, and indefinite matrices. The ICCG solver requires more memory than the JCG solver, but is more robust than the JCG solver for ill-conditioned matrices.

This solver can only be run in shared memory parallel mode. This is also true when the solver is used inside Distributed ANSYS.

This solver does not support use of the GPU accelerator capability.

QMR —

Quasi-Minimal Residual iterative equation solver. Available for the HARMIC (full method only) analysis type [ANTYPE]. Can be used for symmetric, complex, definite, and indefinite matrices. The QMR solver is more stable than the ICCG solver.

This solver can only be run in shared memory parallel mode. This is also true when the solver is used inside Distributed ANSYS.

This solver does not support use of the GPU accelerator capability.

PCG —

Preconditioned Conjugate Gradient iterative equation solver (licensed from Computational Applications and Systems Integration, Inc.). Requires less disk file space than SPARSE and is faster for large models. Useful for plates, shells, 3-D models, large 2-D models, and other problems having symmetric, sparse, definite or indefinite matrices for nonlinear analysis. The PCG solver can also be used for single-field thermal analyses involving unsymmetric matrices. Requires twice as much memory as JCG. Available only for analysis types [ANTYPE] STATIC, TRANS (full method only), or MODAL (with PCG Lanczos option only). Also available for the use pass of substructure analyses (MATRIX50). The PCG solver can robustly solve equations with constraint equations (CE, CEINTF, CPINTF, and CERIG). With this solver, you can use the MSAVE command to obtain a considerable memory savings.

The PCG solver can handle ill-conditioned problems by using a higher level of difficulty (see PCGOPT). Ill-conditioning arises from elements with high aspect ratios, contact, and plasticity.

This solver can be run in shared memory parallel or distributed memory parallel (Distributed ANSYS) mode. When used in Distributed ANSYS, this solver preserves all of the merits of the classic or shared memory PCG solver. The total sum of memory (summed for all processes) is about 30% more than the shared memory PCG solver.

This solver supports use of the GPU accelerator capability.
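As an illustrative sketch combining the options mentioned above (the tolerance shown is the documented default, stated explicitly for clarity):

```
/SOLU
ANTYPE,STATIC       ! PCG supports static and full transient analyses
EQSLV,PCG,1.0E-8    ! PCG solver; 1.0E-8 is the documented default tolerance
MSAVE,ON            ! Optional memory-saving feature available with PCG
```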

TOLER

Iterative solver tolerance value. Used only with the Jacobi Conjugate Gradient, Incomplete Cholesky Conjugate Gradient, Preconditioned Conjugate Gradient, and Quasi-Minimal Residual equation solvers. For the PCG solver, the default is 1.0E-8. When using the PCG Lanczos mode extraction method, the default solver tolerance value is 1.0E-4. For the JCG and ICCG solvers with symmetric matrices, the default is 1.0E-8. For the JCG and ICCG solvers with unsymmetric matrices, and for the QMR solver, the default is 1.0E-6. Iterations continue until the SRSS norm of the residual is less than TOLER times the norm of the applied load vector. For the PCG solver in the linear static analysis case, three error norms are used. If one of the error norms is smaller than TOLER, and the SRSS norm of the residual is smaller than 1.0E-2, convergence is assumed to have been reached. See Iterative Solver in the Mechanical APDL Theory Reference for details.


Note:  When used with the Preconditioned Conjugate Gradient equation solver, TOLER can be modified between load steps (this is typically useful for nonlinear analysis).


If a Lev_Diff value of 5 is specified on the PCGOPT command (either program- or user-specified), TOLER has no effect on the accuracy of the obtained solution from the PCG solver; a direct solver is used when Lev_Diff = 5.
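Per the note above, the PCG tolerance can be relaxed early in a nonlinear solution and tightened later. One illustrative way to do this between load steps is sketched below; the specific tolerance values and load-step layout are assumptions for illustration, not recommendations:

```
EQSLV,PCG,1.0E-6    ! Looser tolerance for the first load step
SOLVE
TIME,2
EQSLV,PCG,1.0E-8    ! Tighten the tolerance for a subsequent load step
SOLVE
```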

MULT

Multiplier (defaults to 2.5 for nonlinear analyses; 1.0 for linear analyses) used to control the maximum number of iterations performed during convergence calculations. Used only with the Preconditioned Conjugate Gradient equation solver (PCG). The maximum number of iterations is equal to the multiplier (MULT) times the number of degrees of freedom (DOF). If MULT is input as a negative value, then the maximum number of iterations is equal to abs(MULT). Iterations continue until either the maximum number of iterations or solution convergence has been reached. In general, the default value for MULT is adequate for reaching convergence. However, for ill-conditioned matrices (that is, models containing elements with high aspect ratios or material type discontinuities) the multiplier may be used to increase the maximum number of iterations used to achieve convergence. The recommended range for the multiplier is 1.0 ≤ MULT ≤ 3.0. Normally, a value greater than 3.0 adds no further benefit toward convergence and merely increases the time requirements. If the solution does not converge with 1.0 ≤ MULT ≤ 3.0, or in fewer than 10,000 iterations, then convergence is highly unlikely and further examination of the model is recommended. Rather than increasing the default value of MULT, consider increasing the level of difficulty (Lev_Diff) on the PCGOPT command.
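For example (the values are illustrative), the iteration limit can be raised either as a multiplier of the DOF count or set as an absolute cap via a negative MULT:

```
EQSLV,PCG,,3.0      ! Allow up to 3.0 x DOF iterations for an ill-conditioned model
EQSLV,PCG,,-10000   ! Alternatively, cap the iteration count at abs(-10000) = 10,000
```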

--

Unused field.

KeepFile

Determines whether files from a SPARSE solver run should be deleted or retained. Applies only to Lab = SPARSE for static and full transient analyses.

DELE —

Deletes all files from the SPARSE solver run, including the factorized file, .DSPsymb, upon FINISH or /EXIT (default).

KEEP —

Retains all necessary files from the SPARSE solver run, including the .DSPsymb file, in the working directory.
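For example, because KeepFile follows the unused field in the command syntax, retaining the sparse solver files looks like this (illustrative):

```
EQSLV,SPARSE,,,,KEEP   ! Retain all sparse solver files (including .DSPsymb) after FINISH
```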

Command Default

The sparse direct solver is the default solver for all analyses, with the exception of modal/buckling analyses.

For modal/buckling analyses, there is no default solver. You must specify a mode-extraction or buckling method with the MODOPT or BUCOPT command, and the specified method automatically chooses the required internal equation solver (for example, MODOPT,LANPCG automatically uses EQSLV,PCG internally, and BUCOPT,LANB automatically uses EQSLV,SPARSE internally).
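As an illustrative sketch of the behavior described above (the mode counts are arbitrary example values):

```
MODOPT,LANPCG,10    ! PCG Lanczos extraction; internally selects the PCG equation solver
BUCOPT,LANB,4       ! Block Lanczos buckling; internally selects the sparse direct solver
```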

Notes

The selection of a solver can affect the speed and accuracy of a solution. For a more detailed discussion of the merits of each solver, see Solution in the Basic Analysis Guide.

You can specify the solver type only in the first load step. You can, however, modify the solver tolerance in subsequent load steps for the iterative solvers.

This command is also valid in PREP7.

Distributed ANSYS Restriction: All equation solvers are supported in Distributed ANSYS. However, the SPARSE and PCG solvers are the only distributed solvers that always run a fully distributed solution. The JCG solver runs in a fully distributed mode in some cases; in other cases, it does not. The ICCG and QMR solvers are not distributed solvers; therefore, you will not see the full performance improvements with these solvers that you would with a fully distributed solution.

Menu Paths

Main Menu>Preprocessor>Loads>Analysis Type>Analysis Options
Main Menu>Preprocessor>Loads>Analysis Type>Sol'n Controls>Sol'n Options
Main Menu>Solution>Analysis Type>Analysis Options
Main Menu>Solution>Analysis Type>Sol'n Controls>Sol'n Options

Release 18.2 - © ANSYS, Inc. All rights reserved.