Fortran Library

Caution

It is slightly more difficult to compile and link programs against the Fortran library directly, due to the amount of pre-processing involved and the number of required headers and environment variables. As such, it is recommended that users interface with this library via the much easier to use Python module pyCTQW.MPI.

However, if you have experience using PETSc and SLEPc with Fortran, and wish to proceed, a makefile template is provided that can be modified to link your program against libctqwMPI.

Compiling libctqwMPI

In addition to an MPI implementation (e.g. MPICH or Open MPI), the Fortran library libctqwMPI depends on PETSc and SLEPc.

Once these dependencies are installed, simply open a terminal in the root directory of pyCTQW-X.Y and run

$ make fortran [options]

where available options include

shared_lib : 0 (default), 1
    Whether to build libctqwMPI as a shared library (shared_lib=1, producing libctqwMPI.so) or a static library (shared_lib=0, producing libctqwMPI.a).

    If built as a shared library, compiled programs will be smaller, but libctqwMPI.so will need to be added to a directory searched by ld (either by setting the environment variable LD_LIBRARY_PATH or by placing libctqwMPI.so in /usr/local/lib, etc.).
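For example, to build libctqwMPI as a shared library, you would run

$ make fortran shared_lib=1

from the pyCTQW-X.Y root directory.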

The Fortran library (libctqwMPI.so or libctqwMPI.a) can be found in the pyCTQW-X.Y/lib directory, with the required module files found in the pyCTQW-X.Y/include directory.


Using libctqwMPI

To call functions and subroutines from libctqwMPI, your program should have the following structure:

program main
    ! interfaces for libctqwMPI
    use ctqwMPI

! PETSc headers required by the preprocessor.
! Note the lack of indentation.
#include <finclude/petsc.h>

    PetscErrorCode :: ierr
    PetscMPIInt    :: rank
    !--------------------
    ! your variable
    ! declarations
    ! -------------------

    ! initialize SLEPc and PETSc
    call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
    call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)

    !--------------------
    ! your program code
    ! -------------------

    ! finalise PETSc
    call PetscFinalize(ierr)

end program main

CTQW subroutines can now be called directly; for example,

call qw_cheby(psi0,psi,t,H,0.*PETSc_i,Emax,rank,n)
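The call above assumes the relevant variables have already been created. As a rough illustration only (the kinds shown below are assumptions inferred from the call, not the definitive libctqwMPI interface, which is provided by the ctqwMPI module), the corresponding declarations might look like:

! a sketch of the declarations behind the call above;
! the exact kinds are set by the ctqwMPI module interfaces,
! so treat these as assumptions rather than the definitive API
Vec            :: psi0, psi   ! initial and propagated statespace
Mat            :: H           ! Hamiltonian of the walk
PetscReal      :: t           ! propagation time (assumed kind)
PetscScalar    :: Emax        ! upper spectral bound passed alongside 0.*PETSc_i (assumed kind)
PetscMPIInt    :: rank        ! MPI rank, set in the template above
PetscInt       :: n           ! statespace dimension (assumed)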


Command Line Options

As with the Python scripts, PETSc allows command line arguments to be easily added to your Fortran programs, through the use of subroutines such as the following:

call PetscOptionsGetInt(PETSC_NULL_CHARACTER,"-n",n,flag,ierr)
CHKERRQ(ierr)
if (flag .eqv. .false.) n = 100

In this case, the command line option -n is added, and its value is stored in the variable n (of type PetscInt), with flag (a PetscBool) indicating whether the option was supplied. If the option is not provided, the program assigns n a default value of 100.
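Other option types follow the same pattern; for instance, assuming your PETSc version provides PetscOptionsGetReal with the same calling sequence as PetscOptionsGetInt above, a real-valued option -t (stored in a PetscReal variable t, here with a default of 5.0) could be handled as:

call PetscOptionsGetReal(PETSC_NULL_CHARACTER,"-t",t,flag,ierr)
CHKERRQ(ierr)
if (flag .eqv. .false.) t = 5.0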

Most PETSc and SLEPc subroutines also accept command line options which modify their settings; for instance, when using the SLEPc EPS eigensolver, the EPS type can be changed at runtime through the following option:

$ mpirun -np 2 <program> -eps_type='lapack'

See also

For more details, please refer to the PETSc and SLEPc documentation.


Code Profiling

PETSc also allows for easy code profiling using PetscLogStage variables; however, unlike the built-in log stages found in pyCTQW.MPI, these must be created manually in Fortran programs.

For example, to enable profiling of the Chebyshev method used for CTQW propagation, we would declare a log stage variable stage,

PetscLogStage  :: stage
PetscLogDouble :: tc0, tc1
PetscErrorCode :: ierr

and then wrap the Chebyshev subroutine like so:

! measure the initial wall time
call PetscTime(tc0,ierr)

! register the log stage, named 'Chebyshev'
call PetscLogStageRegister('Chebyshev',stage,ierr)
call PetscLogStagePush(stage,ierr)

! CTQW propagation
call qw_cheby(psi0,psi,t,H,0.*PETSc_i,Emax,rank,n)

! view the final statespace
call VecView(psi,PETSC_VIEWER_STDOUT_WORLD,ierr)

! wait here until all MPI nodes arrive
call PetscBarrier(psi,ierr)
! end the log stage
call PetscLogStagePop(ierr)

! measure the end wall time
call PetscTime(tc1,ierr)
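The tc0 and tc1 values are not used by the log stage itself; if you would also like to report the measured wall time directly, a minimal sketch (assuming rank is declared and set as in the program template above) is:

! report the elapsed wall time on the root node only
if (rank == 0) then
    print *, 'Chebyshev propagation wall time (seconds):', tc1 - tc0
endif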

Then, when calling the program with the -log_summary command line option,

$ mpirun -np 2 <program> -log_summary

the produced code profile will include a summary of performance within the ‘Chebyshev’ log stage.

See also

For more details, please refer to the PETSc and SLEPc documentation.


Linking libctqwMPI

To compile your program and link it against libctqwMPI, you will need to include the ctqw_common file (present in the root directory of the pyCTQW-X.Y folder) in your makefile; this sets up the required library and include variables.

For example, consider the makefile below (used to compile exampleMPI.F90):

Makefile


#!/usr/bin/make -f

# If they are not automatically found, make sure
# your PETSc and SLEPc environment variables are
# correctly set by uncommenting the lines below.
#PETSC_DIR = 
#PETSC_ARCH =
#SLEPC_DIR = 

CTQW_DIR = ../..
include $(CTQW_DIR)/ctqw_common

program = exampleMPI

all: $(program)

$(program): $(program).o
	$(FLINKER) $(program).o -o $(program) $(CTQW_LIBS)

$(program).o: $(program).F90
	$(FLINKER) $(program).F90 $(CTQW_INCLUDE_DIRS) -c -o $(program).o 

clean::
	rm -f *.o *.mod *.s

.DEFAULT_GOAL := all
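With this makefile placed alongside exampleMPI.F90 (and CTQW_DIR adjusted to point to your pyCTQW-X.Y directory), the example is then compiled by simply running

$ make

in the same directory.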

A brief summary of the variables:

CTQW_DIR
    The path to the pyCTQW-X.Y directory (used to correctly set up CTQW_LIBS and CTQW_INCLUDE_DIRS).

CTQW_LIBS
    The libraries required to link against libctqwMPI; these include libctqwMPI.so (by default, compiled to CTQW_DIR/lib) as well as the PETSc and SLEPc libraries. These are required when linking your object files.

CTQW_INCLUDE_DIRS
    The location of the required header/module files; by default, these are compiled to CTQW_DIR/include. Note that this also contains the required PETSc/SLEPc include directories. These are required when compiling your code to an object file.


Running your program

To run your program, simply run the following in a terminal:

$ mpirun -np X <program> [options]

where X is the number of MPI nodes you would like to run it on.

Important

If you compiled your program using the shared library libctqwMPI.so, make sure that the shared library is available at runtime, either by:

  • setting the environment variable LD_LIBRARY_PATH to include the directory containing libctqwMPI.so,
  • or by placing libctqwMPI.so in a directory used by ld (e.g. /usr/local/lib etc).
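For example, if pyCTQW-X.Y is located in your home directory (adjust the path to match your own installation), the first approach could be:

$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/pyCTQW-X.Y/lib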

Acknowledgements

The graph isomorphism subroutine graphiscert() uses the external subroutine d_refsor(), a highly optimised Fortran sorting implementation written by Michel Olagnon and part of the ORDERPACK 2.0 suite of ranking and sorting algorithms for Fortran 90.