Hi,
On Fri, May 10, 2019 at 09:05:12PM +0800, address@hidden wrote:
> I think DipolarDirectSumCpu is a long-range interaction, and under MPI the box is divided into local boxes, so that each node only sees the particles in its own local box plus one layer of ghost cells. Because of that, DipolarDirectSumCpu cannot work correctly. Is that why I could not activate the magnetostatics method DipolarDirectSumCpu when running under MPI? If so, how can I calculate the dipolar interaction with open boundaries?
The dipolar direct sum on the CPU is not parallelized; you can only use it on a single core.
For the small systems this method is suited for, the rest of the simulation typically runs fine on a single CPU core as well.
There is also a GPU implementation (DipolarDirectSumGpu), which becomes worthwhile for systems with more than roughly 2000 magnetic particles.
For large systems (more than ~10000 dipoles), you can use the P2NFFT method:
https://www.sciencedirect.com/science/article/pii/S0021999119301020
It scales as N log N in the number of dipoles (as opposed to N^2 for direct summation). The supplementary information of that paper contains usage examples for ESPResSo.
Hope that helps!
Regards, Rudolf