Ansys

Software name: 
Ansys
Policy 

Ansys is proprietary software, and each user is required to use a valid license. A license can be obtained directly from Ansys Inc. or from the user's respective research institution. License credentials are submitted on a per-job basis.

General 

Ansys offers a comprehensive software suite that spans the entire range of physics, providing access to virtually any field of engineering simulation that a design process requires.

Description 

The Ansys software suite provides several different engineering simulation solution sets for:

  • Structural Mechanics
  • Multiphysics
  • Fluid Dynamics
  • Explicit Dynamics
  • Electromagnetics

More information can be found on the Ansys website. Product documentation can be accessed via the Ansys Customer Portal for those who have a license.

Availability 

Ansys modules are available on Abisko. The pre-compiled binaries that were installed with the software suite support both serial and parallel job execution, limited only by the license being used.

Usage at HPC2N 

To see what Ansys modules are available, enter:

module avail ansys
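
The exact module names and versions vary over time, but on Abisko the output would typically include product modules such as the ones used in the examples below:

ansys/cfx
ansys/fluent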

To use a specific Ansys product, add the module to your environment in your job submission file:

module add ansys/cfx

Loading the module will load the relevant environment variables and paths to use the product.
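
If you want to check what a module provides before using it in a job, you can inspect it and verify that the solver binary ends up on your PATH (a quick interactive check, using the ansys/cfx module as an example):

module show ansys/cfx   # list the environment variables and paths the module sets
module add ansys/cfx
which cfx5solve         # should now point to the CFX solver inside the Ansys installation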

Note that you must provide proper license connection information in your job submission file in order for your job to run. See below for some examples.

CFX Usage

CFX jobs can be:

  • serial (single task)
  • local parallel (several tasks on a single node)
  • distributed parallel (several tasks across multiple nodes)

Serial Example

When you have your CFX definition file ready, you need to create a Job Submission File. In this example we ask SLURM to allocate resources for just one task, and then launch the task via srun:

#!/bin/bash
#SBATCH -n 1   #ask SLURM to allocate resources for just one task, since it's a serial run
#SBATCH --time hh:mm:ss #specify a time limit for the job.
#SBATCH -J <jobName> 
#SBATCH -A <account>

module add ansys/cfx    #setup our CFX environment

currDir=`pwd`
defFile=$currDir/<CFX_defFile>

#specify the Ansys license server. default port is 1055
export ANSYSLMD_LICENSE_FILE=<port>@<licenseServer> 

#specify the Ansys licensing interconnect server. default port is 2325
export ANSYSLI_SERVERS=<port>@<interconnectLicenseServer>

time srun cfx5solve -def $defFile

Note that if cfx5solve can't connect to the license server and check out a valid license for the requested number of tasks (one process in this case), the job won't run.

Also, the 'time' command isn't necessary, but it can be useful to see the job's total running time when done.

Next, in the directory where you have your submit file and all your needed job files, such as the definition file, submit the job:

sbatch <jobSubmissionFile.sh>

SLURM will then queue and run the job when resources become available. When the job is complete, you can view your slurm-<jobID>.out file to see the job's output.
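
While the job is waiting or running you can check its status, and once it is finished you can read the output file. For example (using the same <jobID> placeholder as above):

squeue -u $USER              # list your pending and running jobs
scontrol show job <jobID>    # show details for a specific job
less slurm-<jobID>.out       # view the job's output once it has finished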

Local Parallel Example (single node)

When you want to run multiple parallel tasks on one node (utilizing one core per task), you should create a local parallel job. In the following example, we want to run 48 parallel tasks (convenient for Abisko, since one node has 48 cores):

#!/bin/bash
#SBATCH -N 1          #our job's tasks will run on 1, and only 1 node.
#SBATCH -n 48
#SBATCH --time hh:mm:ss    #time limit for our job
#SBATCH --exclusive   #no other jobs allowed on the node while this job is running
#SBATCH -J <jobName>  #arbitrary job name to help the user identify the job
#SBATCH -A <account>  #which account to associate the job with

module add ansys/cfx    #setup our CFX environment

currDir=`pwd`
defFile=$currDir/<CFX_defFile>

#here we call an external script which will create the node list for CFX.
#This script was made available when we loaded the ansys/cfx module.
NODES=`gen_cfx_nodelist.pl`

#specify the Ansys license server. default port is 1055 
export ANSYSLMD_LICENSE_FILE=<port>@<licenseServer>

#specify the Ansys licensing interconnect server. default port is 2325
export ANSYSLI_SERVERS=<port>@<interconnectLicenseServer>

time cfx5solve -parallel -def $defFile -start-method "Platform MPI Local Parallel" -par-dist $NODES

Note that we didn't use srun to call cfx5solve; this is because cfx5solve spawns its own processes when running in parallel. We simply tell SLURM which resources to allocate via the #SBATCH directives, and then let cfx5solve start its own processes.

Also note that if we request 48 tasks or fewer, such as '-n 32', without explicitly setting '-N 1', SLURM may still distribute those tasks across more than one node. In that case, either set '-N 1' explicitly (see the snippet below) or use the 'distributed parallel' start method instead.
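
For example, to make sure a 32-task CFX job stays on a single Abisko node, request the node explicitly (only the relevant directives are shown):

#SBATCH -N 1    # all tasks on one node
#SBATCH -n 32   # 32 tasks in total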

Distributed Parallel Example (multiple nodes), Abisko

To run more than 48 tasks we need to use multiple nodes on Abisko. Ansys refers to a multi-node job as a 'distributed parallel job'. In the following example, we ask SLURM to reserve resources for 384 tasks (8 nodes * 48 cores). Note that we explicitly specify we want 8 nodes, as well as exclusive access to those nodes while the job is running:

#!/bin/bash
#SBATCH -n 384
#SBATCH -N 8
#SBATCH --exclusive 
#SBATCH --time hh:mm:ss
#SBATCH -J <distJobName>
#SBATCH -A <account>

module add ansys/cfx

currDir=`pwd`
defFile=$currDir/<CFX_defFile>

NODES=`gen_cfx_nodelist.pl`

export CFX5RSH=slurmrsh
export CFX_SOLVE_DISABLE_REMOTE_CHECKS=1

#specify the Ansys license server. default port is 1055 
export ANSYSLMD_LICENSE_FILE=<port>@<licenseServer> 

#specify the Ansys licensing interconnect server. default port is 2325 
export ANSYSLI_SERVERS=<port>@<interconnectLicenseServer>

time cfx5solve -parallel -def $defFile -start-method "Platform MPI Distributed Parallel" -par-dist $NODES

Note that the external script gen_cfx_nodelist.pl constructs a tasks-per-node list for us. We then pass this list to CFX so that it starts the proper number of tasks on each node, according to the allocation SLURM has reserved for the job.
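
The exact output of gen_cfx_nodelist.pl is site-specific, but the -par-dist argument is typically a comma-separated hostname*tasks list. A rough bash sketch of the same idea is shown below; it assumes an even distribution of tasks over the allocated nodes, and the helper script provided by the module should normally be used instead:

TASKS_PER_NODE=$(( SLURM_NTASKS / SLURM_NNODES ))
NODES=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | awk -v t="$TASKS_PER_NODE" '{printf "%s%s*%s", sep, $1, t; sep=","}')
echo "$NODES"    # e.g. node1*48,node2*48,...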

TIP for CFX parallel job efficiency:

For a CFX parallel job to run at peak efficiency, Ansys generally recommends one processing core for every 250,000 cells. For example, if you have a definition file with 7,000,000 cells, then you should run a parallel job with 28 tasks. Your mileage may vary, so it's always good to experiment.
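
If you want a quick estimate, the rule of thumb translates into simple shell arithmetic (the cell count below is just the example figure above):

CELLS=7000000
echo $(( CELLS / 250000 ))   # -> 28 tasks (round up if it does not divide evenly)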

Fluent Usage

Fluent jobs can be:

  • serial (single task)
  • parallel (several tasks across one or more nodes)

Journal file

You should first prepare your journal file. Here is a simple example:

; Fluent Example Input File
;----------------
; Read case file
/file/read-case LIRJ.cas
;----------------
; Run the solver for 3000 iterations
/solve/iterate 3000
;----------------
; Save the data file and exit
/file/write-data LIRJ.dat 
/exit yes

A relevant section on journal files can be found in the CFD Online FAQ for Fluent.

Serial Job

An example submission file for a serial job could be as follows:

#!/bin/bash
#SBATCH -n 1          # only allocate 1 task 
#SBATCH -t 08:00:00   # upper limit of 8 hours to complete the job
#SBATCH -A <accountName>     # your project name - contact Ops if unsure what this is
#SBATCH -J fluent1 # sensible name for the job

module add ansys/fluent

export FLUENT_GUI=off

export ANSYSLI_SERVERS=<port>@<interconnectLicenseServer>
export ANSYSLMD_LICENSE_FILE=<port>@<licenseServer>

time fluent 2ddp -g -i <journalFile> > fluent1.out 2> fluent1.err

The example above will run one task, with standard output going to the fluent1.out file and error output going to the fluent1.err file.
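
Since the -g option runs Fluent without a GUI, a convenient way to follow the progress of a running job is to watch the output file it writes to (file name taken from the example above):

tail -f fluent1.out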

Parallel Job

To run several tasks in parallel on one or more nodes, the submission file could be as follows (this example is for Abisko; to run on another cluster, adjust the number of nodes (-N) you request, keeping in mind how many cores there are per node):

#!/bin/bash
#SBATCH -N 2       # allocate 2 nodes for the job
#SBATCH -n 96      # 96 tasks total
#SBATCH --exclusive  # no other jobs on the nodes while job is running
#SBATCH -t 04:00:00  # upper time limit of 4 hours for the job
#SBATCH -A <accountName>  # the account this job will be submitted under
#SBATCH -J fluentP1 # sensible name for the job

module add ansys/fluent

export FLUENT_GUI=off
export ANSYSLI_SERVERS=<port>@<interconnectLicenseServer>
export ANSYSLMD_LICENSE_FILE=<port>@<licenseServer>

# Determine the total number of tasks: use SLURM_NPROCS if it is set,
# otherwise derive it from SLURM_TASKS_PER_NODE (e.g. "48(x2)" -> 96).
if [ -z "$SLURM_NPROCS" ]; then
    N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
else
    N=$SLURM_NPROCS
fi

echo -e "N: $N\n";

# run fluent in batch on the allocated node(s)
time fluent 2ddp -g -slurm -t$N -mpi=pcmpi -pib -i <journalFile> > fluentP1.out 2> fluentP1.err

Updated: 2017-11-24, 17:02