espressomd-users

MPI and CUDA


From: Martin Kaiser
Subject: MPI and CUDA
Date: Tue, 22 Sep 2020 15:31:33 +0200

Hello everybody, 

I have a technical question about using Open MPI and CUDA at the same time.
If I start my GPU-accelerated ESPResSo script under MPI with a standard command like this:

mpirun -n 4 espresso script.py

then 4 instances of the same job are started on my GPU, of which only one is actually doing any work on the GPU. If I monitor the usage with "nvidia-smi", I get something like this:

GPU   GI   CI        PID   Type   Process name                  GPU Memory
 1   N/A  N/A     26365      C   /usr/bin/python3                  207MiB 
 1   N/A  N/A     26366      C   /usr/bin/python3                  129MiB 
 1   N/A  N/A     26367      C   /usr/bin/python3                  129MiB 
 1   N/A  N/A     26368      C   /usr/bin/python3                  129MiB

Additionally, if I kill this job, not all of the instances on the GPU are terminated, which means the memory on the card is not freed.
Is there something I am doing wrong in how I compile or call ESPResSo? Or is it that the MPI implementation is not "CUDA-aware" and is simply starting copies of the same job on the GPU?
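
For reference, here is a minimal sketch of the kind of per-rank check I have in mind (a throwaway script, call it check_gpu_per_rank.py; it assumes mpi4py is installed next to espressomd and that system.cuda_init_handle is the right handle for querying the GPU from Python, so please correct me if that is not the intended API):

# check_gpu_per_rank.py -- print which CUDA device each MPI rank sees
from mpi4py import MPI
import espressomd

rank = MPI.COMM_WORLD.Get_rank()

# every rank constructs the System, so every rank may open its own CUDA context
system = espressomd.System(box_l=[10.0, 10.0, 10.0])

# list the CUDA devices visible to this rank and the one currently selected
print(f"rank {rank}: devices = {system.cuda_init_handle.list_devices()}, "
      f"selected = {system.cuda_init_handle.device}")

launched the same way as above, e.g. mpirun -n 4 espresso check_gpu_per_rank.py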

Thanks for the help,
Martin
