PETSc Installation
PETSc (Portable, Extensible Toolkit for Scientific Computing) is a library of scalable linear and nonlinear equation solvers and ODE solvers, developed and maintained by the Mathematics and Computer Science Division at Argonne National Laboratory. The official PETSc webpage provides download and installation instructions, as well as extensive documentation. Most of the codes I work with use the TS module of PETSc for advanced high-order time-integration schemes, and it is useful to briefly go through the documentation for this module to understand and use these schemes (a quick way to try them out is sketched after the links below):
User Manual (PDF) (see section on TS: Scalable ODE and DAE Solvers)
TS Examples
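Once PETSc is installed (see below), a quick way to try these schemes is to build and run one of the TS tutorials from the source tree. A minimal sketch, assuming PETSC_DIR and PETSC_ARCH are set and the tutorials are under src/ts/tutorials (their location in recent PETSc versions; older versions use src/ts/examples/tutorials):
cd $PETSC_DIR/src/ts/tutorials
make ex1
./ex1 -ts_type rk -ts_monitor
The -ts_type option selects the time-integration scheme at runtime, and -ts_monitor prints the time-step history.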
To download the official or development releases of PETSc, go here.
Complete installation instructions, including all options, are available here. This page contains brief instructions that I find useful for quickly downloading and installing PETSc for use with the codes I work with.
Download
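The release branch can be cloned with git, for example (the download page linked above also provides tarballs, as well as the main branch for development versions):
git clone -b release https://gitlab.com/petsc/petsc.git petsc
cd petsc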
Installation
The following steps should install PETSc:
Set environment variables PETSC_DIR and PETSC_ARCH (these can also be set in .bashrc)
export PETSC_DIR=/path/to/petsc/ (e.g., PETSC_DIR=/home/ghosh/petsc)
export PETSC_ARCH=(name of build) (e.g., arch-debug or arch-opt; it can be any name)
Move to the PETSc directory (cd $PETSC_DIR)
Configure with options for debug (slow) or optimized (fast)
Example: If GNU compilers are available but PETSc needs to download and compile MPICH,
./configure --with-cc=gcc --with-fc=gfortran --download-mpich=1 --with-shared-libraries --with-debugging=1 (debug)
./configure --with-cc=gcc --with-fc=gfortran --download-mpich=1 --with-shared-libraries --with-debugging=0 (optimized)
Example: If MPICH is already installed and PETSc should use it,
./configure --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx --with-shared-libraries --with-debugging=1 (debug)
./configure --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx --with-shared-libraries --with-debugging=0 (optimized)
Note: Use --with-batch for configuring on machines that use a job scheduler.
Note: PETSc can download and compile additional packages if --download-packagename=yes is added to the configure command. Alternatively, existing installations of those packages can be used by specifying --with-packagename-include=/path/to/package/include and --with-packagename-lib=/path/to/package/lib, or --with-packagename-dir=/path/to/package/ (run ./configure --help to see the full list of these options and the packages PETSc can download). This page has examples of installing PETSc while downloading or using existing installations of the HDF5, NetCDF, and METIS packages. A sketch with HDF5 is shown below.
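For example, taking HDF5 as the package (the path below is a placeholder), either of the following can be appended to the configure commands above:
--download-hdf5=yes (PETSc downloads and compiles HDF5)
--with-hdf5-dir=/path/to/hdf5 (use an existing HDF5 installation)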
Compilation
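Once configure finishes, it prints the exact commands to build and test the library; they typically look like the following:
make PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH all
make PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH check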
Some machine-specific configure commands
LLNL LC Dane (With Hypre)
./configure --with-batch --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx COPTFLAGS="-O2 -std=c99" FOPTFLAGS="-O2" CXXOPTFLAGS="-O2" --with-shared-libraries --with-debugging=0 --download-make --download-hypre --with-cxx-dialect=C++11
LLNL LC Dane (With ParMETIS, Hypre and SuperLU)
./configure --with-batch --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx COPTFLAGS="-O2 -std=c99" FOPTFLAGS="-O2" CXXOPTFLAGS="-O2" --with-shared-libraries --with-debugging=0 --download-make --download-cmake --download-hypre --download-superlu --download-superlu_dist --download-parmetis --download-metis --with-cxx-dialect=C++11
LLNL LC Matrix (With CUDA and Hypre)
module load gcc/12.1.1-magic mvapich2/2.3.7 cuda/12.2.2 cmake/3.30.5 boost/1.80.0
./configure --with-make-np=36 --with-mpiexec="srun -G4 --gpu-bind=none" --with-batch --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 COPTFLAGS="-g -O3" FOPTFLAGS="-g -O3" CXXOPTFLAGS="-g -O2" CUDAFLAGS="-g -O3" --with-cuda=1 --with-cuda-arch=90 --with-debugging=0 --download-hypre
LLNL LC Tuolumne (With HIP)
module load cmake/3.24.2 rocm/6.4.0 rocmcc/6.4.0-cce-19.0.0d-magic
export MPI_DIR=/usr/tce/packages/cray-mpich/cray-mpich-9.0.1-rocmcc-6.4.1-magic
./configure --with-make-np=32 --with-batch --with-mpi-dir=$MPI_DIR --with-hip-dir=$ROCM_PATH COPTFLAGS="-g -O3" FOPTFLAGS="-g -O3" CXXOPTFLAGS="-g -O2" HIPOPTFLAGS="-g -O3" --with-cuda=0 --with-hip=1 --with-clanguage=c --with-debugging=0 --download-kokkos --download-kokkos-kernels --with-fortran-bindings=0
Note: The following environment variable should be available after loading the modules.
ROCM_PATH=/opt/rocm-6.4.0
NERSC Perlmutter (With CUDA and Hypre)
module load gpu craype craype-x86-milan craype-accel-nvidia80 cudatoolkit cmake/3.30.2
export CRAY_ACCEL_TARGET=nvidia80
export CXXFLAGS="-march=znver3"
export CFLAGS="-march=znver3"
export CC=cc
export CXX=CC
export FC=ftn
export CUDACXX=$(which nvcc)
export CUDAHOSTCXX=CC
./configure --with-make-np=8 --with-mpiexec="srun -G4 --gpu-bind=none" --with-batch=0 --with-cc=cc --with-cxx=CC --with-fc=ftn COPTFLAGS="-g -O3" FOPTFLAGS="-g -O3" CXXOPTFLAGS="-g -O2" CUDAFLAGS="-g -O3" --with-cuda=1 --with-cuda-arch=80 --with-debugging=0 --download-hypre