espressomd-users

Re: [ESPResSo-users] electrical field+ lattice boltzmann


From: Axel Arnold
Subject: Re: [ESPResSo-users] electrical field+ lattice boltzmann
Date: Sat, 27 Dec 2014 22:09:04 +0100


On 25.12.2014, at 11:58, roya moghaddasi <address@hidden> wrote:

> Hi Espresso users,

> I have a system which includes LB. I want to apply an electric field to my particles; should I also apply it to my LB nodes, i.e. should I add [ext_force fx fy fz] when I define lbfluid?

To my knowledge, only charged species move along an electric field. Which net charge do you think your fluid molecules carry?
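For illustration, a minimal sketch in the Tcl interface of applying such a field only to the particles: a constant field Ex acting on a charge q is just a constant force q*Ex, which can be set per particle through the ext_force particle property. The field strength and direction below are made-up assumptions, and the loop assumes contiguous particle ids.

    # apply a constant electric field Ex along x to all charged particles;
    # the lbfluid gets no ext_force, since the fluid carries no net charge
    set Ex 1.0                                 ;# assumed field strength
    for {set i 0} {$i < [setmd n_part]} {incr i} {
        set q [part $i print q]                ;# charge of particle i
        part $i ext_force [expr $q * $Ex] 0.0 0.0
    }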

> Another question: how can I run my script with LB in parallel on the GPU? Should I change the following lines? What should I write in the terminal to run it in parallel on the GPU? And what happens to other parts of my code, such as electrostatics: do they run in parallel as well?

What do you mean by "parallel on GPU"? Even the smallest available CUDA devices have at least 32 compute cores, which we need to feed, so the GPU LB always runs "parallel on the GPU".

Now, if you mean running in parallel across several GPUs, that isn't possible, as ESPResSo doesn't have multi-GPU support yet.
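For reference, the single-GPU LB is simply selected by the gpu keyword of lbfluid in the Tcl interface; the parameter values below are placeholders, not recommendations.

    # choose the GPU implementation of the LB fluid (placeholder parameters)
    lbfluid gpu agrid 1.0 dens 1.0 visc 1.0 tau 0.01 friction 1.0
    # the CPU implementation would be chosen with "lbfluid cpu ..." instead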

However, you can still use MPI to run the CPU part in parallel. In this case ESPResSo assumes that the GPU is attached to MPI rank 0, and all non-GPU algorithms used during propagation parallelize over the available MPI cores. To run the CPU parts in parallel, you first need to compile with an MPI compiler. Usually this is mpicxx, but that depends strongly on your system, in particular if you are running on bigger clusters or supercomputers. After that you typically run "mpiexec -n <cores> Espresso <script>" to parallelize over <cores> cores, but again, the details depend on your MPI implementation or queueing system.
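As a rough sketch of such a build-and-run sequence, assuming an autotools build and an Open MPI style wrapper compiler (the exact names of the compiler wrapper, launcher and script are system-dependent assumptions):

    ./configure CXX=mpicxx                 # point configure at the MPI compiler wrapper
    make
    mpiexec -n 4 Espresso my_script.tcl    # run the script on 4 MPI cores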

During configure, ESPResSo checks for some common names of the MPI compiler, so if configure reports at the end that MPI is disabled, you need to figure out how to compile against MPI on your specific system. On standard Linux systems the MPI implementation is usually Open MPI, but it may simply not be installed. On larger clusters, the documentation pages usually describe how to compile against MPI; often you need to load certain modules or similar. Here we can't help you; that is up to your system's administrators.
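On a cluster that uses environment modules, that typically looks something like the following; the module name is a made-up example, so check your site's documentation:

    module avail                    # list the available compiler/MPI modules
    module load openmpi             # load an MPI environment (example name)
    ./configure CXX=mpicxx          # re-run configure so MPI is detected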

Best,
Axel 

------------------------------------------------
JP Dr. Axel Arnold
ICP, Universität Stuttgart
Allmandring 3
70569 Stuttgart, Germany
Email: address@hidden
Phone: +49 711 685 67609

