espressomd-users

Re: [ESPResSo-users] speedy question


From: Axel Arnold
Subject: Re: [ESPResSo-users] speedy question
Date: Fri, 24 Apr 2015 18:05:39 +0200

Hi,

If you simulate a single polymer, the inhomogeneous particle distribution is only one problem. Espresso also tries to make the Verlet cells as small as possible. That is good in melts, where the computation time scales like cutoff^3, but for a single polymer, bigger cells are probably better. You can play with setmd max_num_cells to limit the number of cells and thereby increase the cell size.
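
For illustration, capping the cell count from a Tcl script could look like this (the value 512 is just an example; the optimum depends on your box size, cutoff, and particle number):

    # Limit the number of linked cells; fewer cells means larger cells,
    # so a sparse system does not end up with mostly empty cells.
    setmd max_num_cells 512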

Having said that, I can only support what Peter said: simulating a 4000-bead chain will not produce anything statistically significant. You need some clever MC to generate configurations; then you can use MD to study e.g. short-time dynamics, but not more.

In this case, you will almost certainly be better off running many single-core instances. But even then, you should tune max_num_cells.

Axel

Dr. Axel Arnold
Martha-Schmidtmann-Str. 7
70374 Stuttgart, Germany
Phone: +49 173 870 6659
On 24 Apr 2015, at 16:07, Peter Košovan <address@hidden> wrote:

Hi Dušan,

What cellsystem do you use? What is the range and type of interactions? How many integration steps did your several nanoseconds of all-atom simulation correspond to?

Note that if you run a single polymer chain (I suppose implicit solvent with Langevin dynamics), then the matter is distributed very unevenly in your box. So Espresso cannot fully benefit from its default domain decomposition cellsystem, which divides the box evenly among all your CPUs. Some of your CPUs work hard, while others are bored. If your box is much bigger than the chain and the chain is very long, maybe just a single CPU works hard. It might be a good idea to try the nsquare cellsystem.
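
A minimal sketch of the switch in Tcl (the rest of the setup stays as it is):

    # Replace the default domain decomposition by the N-squared scheme,
    # which spreads particles evenly over the CPUs regardless of position.
    cellsystem nsquare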

In contrast, in an atomistic simulation there is a rather homogeneous distribution of matter and the cellsystem gives each CPU its fair share of the load.

Greetings,

peter

Btw: simulating a 4000-bead polymer with Langevin dynamics is hopeless. The dynamics of such a long chain is far too slow to obtain meaningful statistics in a reasonable time (within a human lifetime). The characteristic time scale increases roughly as N^{5/3}. If you need such long chains, you should use MC.
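
To put a rough number on that scaling (my own back-of-the-envelope estimate, taking the N^{5/3} law above at face value): going from a 100-bead to a 4000-bead chain stretches the characteristic time by

    \frac{\tau_{4000}}{\tau_{100}} \approx \left(\frac{4000}{100}\right)^{5/3} = 40^{5/3} \approx 470

so you would need several hundred times more integration steps just to relax the longer chain once.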

2015-04-24 15:47 GMT+02:00 Dudo <address@hidden>:
Hi,

I'd like to ask a question concerning computational speed.

I'm running a simulation of a 4000-bead chain, and the performance seems a bit slow to me, since I'm able to calculate only around 7 million steps a day.

The simulations run in parallel on 12 cores, which I found to be optimal.

Is this speed adequate? Some time ago I ran atomistic simulations of ~7,000 atoms and obtained several nanoseconds a day.

Is there a way to compile Espresso in single precision, or are there other tricks like that?

Kind regards,
Dudo

--
____________________
Ing. Dusan Racko, PhD
https://www.researchgate.net/profile/Dusan_Racko
Polymer Institute of the Slovak Academy of Sciences
Dubravska cesta 3
845 41 Bratislava, Slovak Republic
tel: +421 2 3229 4321



--
Dr. Peter Košovan

Department of Physical and Macromolecular Chemistry
Faculty of Science, Charles University in Prague, Czech Republic


www.natur.cuni.cz/chemistry/fyzchem/
Tel. +420221951290
Fax +420224919752


If this e-mail is part of a business negotiation, the Faculty of Science of Charles University in Prague:
a) reserves the right to terminate the negotiation at any time, even without giving a reason,
b) stipulates that any contract must be in written form,
c) excludes acceptance of an offer with an addendum or deviation,
d) stipulates that a contract is concluded only once explicit agreement has been reached on all of its terms.
