Explicit procedure:

  1. You need to be in VirtualBox or an equivalent environment with an X server (so the gnuplot window in step 2 can display)

  2. ssh -X username@aciss (the -X flag is important so that the gnuplot window is launched on your client)

  3. qsub -I (to reserve a node for this so you're not running on the head node)
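
    If your queue requires it, you can also request a specific core count when reserving the node. A minimal sketch, assuming a standard TORQUE/PBS resource string (the exact form depends on how ACISS is configured):

      qsub -I -l nodes=1:ppn=12    # interactive session: 1 node, 12 cores (illustrative resource request)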

  4. In the ACISS shell, execute these commands:

    1. mkdir mpitest
    2. cd mpitest
    3. cp ~dkmatter/sciprog/craig/* . (copies all the relevant files to the mpitest directory)
    4. module load mpi-tor/openmpi-1.7_gcc-4.8
    5. module list (check that modules got loaded)
    6. make check


    If the make check command generated output, then all is well.

  5. Now execute the following command; in this case we are asking for 2 processors and 1000 iterations of the code fit_data:

    time mpirun -np 2 fit_data 1000


    The time command will print the wall-clock time at the very end. (There will be some variation depending on which node you're assigned by qsub.)
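
    As a rough illustration, the end of the output has the standard time summary; the numbers below are placeholders, not real measurements:

      real    0m42.317s
      user    1m23.845s
      sys     0m0.412s

    The "real" line is the wall-clock time to compare across runs.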

  6. The output files that are generated are all named:

    fort.xx, where xx runs over the ranks (one file per processor requested), starting at 21. Therefore for -np 2 you would get fort.21 and fort.22 as the only output files.

    Only the highest-numbered fort.xx file matters; its last line contains the best solution (lowest chi^2).

    cat fort.22 will list the contents of the relevant file (in this case for two processors)

    The chi square value is in the third column of the output.
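
    If you'd rather not eyeball the file, a small shell sketch (assuming chi^2 really is in column 3 and the columns are whitespace-separated) finds the best line:

      tail -n 1 fort.22                       # last line of the highest-numbered file: its best solution
      sort -g -k3,3 fort.2[1-9] | head -n 1   # lowest chi^2 across all fort.2x output files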
    Run the Fortran program called rplot to enter the parameters from your minimum solution (source code: read.f90)

    Open gnuplot by typing gnuplot

      within gnuplot, enter:

      set yrange [6000:10000]

      plot "fort.20" , "fort.35"


    Now you have produced a plot with the lowest-chi^2 solution shown on it.
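
    Equivalently, the same plot can be made non-interactively from the shell; a sketch using gnuplot's standard -e and -persist options:

      gnuplot -persist -e 'set yrange [6000:10000]; plot "fort.20", "fort.35"'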

    Now experiment with changing either the number of processors (values of 2, 4, 8, and 12 are allowed) or the number of iterations per processor; see the sketch after the list below for one way to automate this.

    Keep track of
    1. The chi^2 value per run
    2. The wall clock time per mpirun command configuration
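
    A minimal sketch for automating the sweep (the processor counts are the allowed values above; 1000 iterations is just the example from step 5):

      for np in 2 4 8 12; do
          echo "== np=$np, 1000 iterations =="
          time mpirun -np $np fit_data 1000
      done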


    Craig will then explain how mpirun works.