
I am adapting a Fortran MPI program from sequential to parallel writing for certain types of files. It uses netCDF 4.3.3.1 / HDF5 1.8.9 (parallel). I use Intel compiler version 14.0.3.174.

When all reads/writes are done it is time to close the files. At this point, the simulation does not continue any more: all the calls are waiting. When I check the call stack on each processor, I can see that the master (root) processor's stack is different from the rest.

MPI master processor call stack:

__sched_yield,                                 FP=7ffc6aa978b0
opal_progress,                                 FP=7ffc6aa978d0
ompi_request_default_wait_all,                 FP=7ffc6aa97940
ompi_coll_tuned_sendrecv_actual,               FP=7ffc6aa979e0
ompi_coll_tuned_barrier_intra_recursivedoubling, FP=7ffc6aa97a40
PMPI_Barrier,                                  FP=7ffc6aa97a60
H5AC_rsp__dist_md_write__flush,                FP=7ffc6aa97af0
H5AC_flush,                                    FP=7ffc6aa97b20
H5F_flush,                                     FP=7ffc6aa97b50
H5F_flush_mounts,                              FP=7ffc6aa97b80
H5Fflush,                                      FP=7ffc6aa97ba0
NC4_close,                                     FP=7ffc6aa97be0
nc_close,                                      FP=7ffc6aa97c00
restclo,                                       FP=7ffc6aa98660
driver,                                        FP=7ffc6aaa5ef0
main,                                          FP=7ffc6aaa5f90
__libc_start_main,                             FP=7ffc6aaa6050
_start,

Remaining processors' call stack:

__sched_yield,                                 FP=7fffe330cdd0
opal_progress,                                 FP=7fffe330cdf0
ompi_request_default_wait,                     FP=7fffe330ce50
ompi_coll_tuned_bcast_intra_generic,           FP=7fffe330cf30
ompi_coll_tuned_bcast_intra_binomial,          FP=7fffe330cf90
ompi_coll_tuned_bcast_intra_dec_fixed,         FP=7fffe330cfb0
mca_coll_sync_bcast,                           FP=7fffe330cff0
PMPI_Bcast,                                    FP=7fffe330d030
mca_io_romio_dist_MPI_File_set_size,           FP=7fffe330d080
PMPI_File_set_size,                            FP=7fffe330d0a0
H5FD_mpio_truncate,                            FP=7fffe330d0c0
H5FD_truncate,                                 FP=7fffe330d0f0
H5F_dest,                                      FP=7fffe330d110
H5F_try_close,                                 FP=7fffe330d340
H5F_close,                                     FP=7fffe330d360
H5I_dec_ref,                                   FP=7fffe330d370
H5I_dec_app_ref,                               FP=7fffe330d380
H5Fclose,                                      FP=7fffe330d3a0
NC4_close,                                     FP=7fffe330d3e0
nc_close,                                      FP=7fffe330d400
RESTCOM`restclo,                               FP=7fffe330de60
driver,                                        FP=7fffe331b6f0
main,                                          FP=7fffe331b7f0
__libc_start_main,                             FP=7fffe331b8b0
_start, 

I do realize that one call stack contains a broadcast and the other a barrier, which might cause a deadlock. Yet I do not see how to continue from here. If an MPI call were not done properly (e.g. only called on one processor), I would expect an error message rather than this behaviour.

Update: the source code is around 100k lines.

The files are opened this way:

cmode = ior(NF90_NOCLOBBER,NF90_NETCDF4)
cmode = ior(cmode, NF90_MPIIO)
CALL ipslnc( NF90_CREATE(fname,cmode=cmode,ncid=ncfid, comm=MPI_COMM, info=MPI_INFO))

And closed as:

iret = NF90_CLOSE(ncfid)
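
For reference, a stripped-down sketch of the same create/write/close pattern as a stand-alone program looks roughly like this (file, dimension and variable names are hypothetical; MPI_COMM_WORLD and MPI_INFO_NULL stand in for the program's own communicator and info object):

program mini_par_nc
  use mpi
  use netcdf
  implicit none

  integer :: ierr, rank, cmode, ncfid, dimid, varid, iret

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  ! Create the file for parallel access with the same flags as above
  cmode = ior(NF90_NOCLOBBER, NF90_NETCDF4)
  cmode = ior(cmode, NF90_MPIIO)
  iret = NF90_CREATE('mini_test.nc', cmode=cmode, ncid=ncfid, &
                     comm=MPI_COMM_WORLD, info=MPI_INFO_NULL)
  if (iret /= NF90_NOERR) print *, trim(NF90_STRERROR(iret))

  ! Define-mode calls are collective: every rank must make the same
  ! calls with identical arguments
  iret = NF90_DEF_DIM(ncfid, 'x', 4, dimid)
  iret = NF90_DEF_VAR(ncfid, 'v', NF90_INT, (/ dimid /), varid)
  iret = NF90_PUT_ATT(ncfid, NF90_GLOBAL, 'title', 'minimal test')
  iret = NF90_ENDDEF(ncfid)

  ! Each of the first four ranks writes one element (independent access)
  if (rank < 4) then
    iret = NF90_PUT_VAR(ncfid, varid, (/ rank /), &
                        start=(/ rank+1 /), count=(/ 1 /))
  end if

  ! Closing is also collective; this is where the full program hangs
  iret = NF90_CLOSE(ncfid)
  call MPI_Finalize(ierr)
end program mini_par_nc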
We will have to see the code. – Vladimir F
If the code is so long, prepare a minimal reproducible example. Does this happen for some simple code which just writes the file and closes? – Vladimir F
I am on this. But this is one of the big issues as well. – rgrun

1 Answer


It turns out that when writing an attribute with NF90_PUT_ATT, the root processor passed a different value than the other processors. Once that was fixed, the program runs as expected.
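
For anyone who hits the same hang: in parallel netCDF-4/HDF5 the define-mode calls (NF90_DEF_DIM, NF90_DEF_VAR, NF90_PUT_ATT, ...) are collective and must be made with identical arguments on every rank; otherwise the per-rank metadata diverges and the collective flush at NF90_CLOSE no longer matches up. A minimal sketch of one way to make the value consistent (the attribute name and the variable holding its value are hypothetical, not taken from the program above) is to broadcast it from the root rank before writing it:

integer :: nsteps, ierr, iret

! nsteps was previously computed only on the root rank; broadcast it so
! every rank passes the same value to the collective NF90_PUT_ATT call
call MPI_Bcast(nsteps, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
iret = NF90_PUT_ATT(ncfid, NF90_GLOBAL, 'nsteps', nsteps)

With the value consistent across ranks, every processor executes the same sequence of collective operations and NF90_CLOSE completes normally.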