
Re: [ESPResSo-users] GPU Lattice Boltzmann


From: Georg Rempfer
Subject: Re: [ESPResSo-users] GPU Lattice Boltzmann
Date: Wed, 15 Apr 2015 11:32:39 +0200

Hello Markus,

yes, you can use any GeForce model. Most of the GPUs we use ourselves are actually the cheaper gaming variants. The GPU LB was designed specifically with single-precision GPUs in mind and performs well with these cards. We don't consider the lack of ECC memory to be a problem; in fact, we often switch it off on cards that have that capability for another speed gain. As far as I know, the two-chip layout of the Titan Z is not visible to the user. We also use a number of Titan Black and Titan Z GPUs with the LB, and they act just like any other GeForce card. The only difference seems to be that they are slightly slower (~10%) than the ordinary GeForce 680 models, though this might be due more to energy-saving considerations on NVIDIA's part.

We currently have only very limited support for multiple GPUs (as in several physical cards). The MMM1D electrostatics algorithm can use several cards on the same motherboard; no other method (including LB) supports that so far.
I am not sure whether we currently support running the GPU LB on one card and the GPU P3M on another. Florian Weik can answer that question.

Are you planning to buy GPUs specifically for GPU LB simulations with Espresso? If not, you can just try running Espresso on whatever hardware is available to you. If the CUDA toolkit is set up properly, there is a good chance it "just works".

Greetings,
Georg

On Wed, Apr 15, 2015 at 10:21 AM, Wink, Markus <address@hidden> wrote:

Hello Georg, hello David,

thanks a lot for your answers. Unfortunately, I am still in the dark about the hardware requirements for the LBM on the GPU.

I have found the diploma thesis of Dominic Röhm. He performed his studies on an NVIDIA Tesla C2050; its successor is the K40, I think. I was wondering whether one could also use a GeForce Titan Z instead (or any other GeForce model). I am aware that the GeForce cards do not have ECC memory; would you see that as a problem? Would you recommend FP64, or is FP32 sufficient for LB simulations?

Has anyone tried using a GeForce model to simulate an LBM fluid? Furthermore, I found out that the Titan Z has two chips. According to the mailing list archive (http://lists.nongnu.org/archive/html/espressomd-users/2013-08/msg00004.html), it is not possible to use more than one GPU for the simulation (I guess the GeForce Titan Z acts as two GPUs, am I right?). Is this still the case, or is it outdated in the current master?

Does anybody have any recommendations or comments on the questions above?

Greetings

Markus

From: espressomd-users-bounces+markus.wink=address@hidden [mailto:espressomd-users-bounces+markus.wink=address@hidden] On Behalf Of Georg Rempfer
Sent: Friday, 10 April 2015 21:25
To: address@hidden
Subject: Re: [ESPResSo-users] GPU Lattice Boltzmann

 

Hey,

hardware-wise you need an NVIDIA graphics card with compute capability >= 2.0. You would have a hard time finding a card with only compute capability 1.1 these days, so that is not a serious constraint. The more memory, the better, of course. As for software, you need the CUDA toolkit and matching drivers installed on the machine. You can get both from the NVIDIA website (https://developer.nvidia.com/cuda-downloads) or from your distribution's repositories.
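For reference, here is a minimal sketch (assuming the CUDA toolkit is installed; compile with something like `nvcc check.cu -o check`) that lists each card's compute capability, so you can verify the >= 2.0 requirement yourself. Note that a dual-GPU card such as the Titan Z should show up as two separate devices here:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        // Typically means no CUDA-capable device or a driver/toolkit mismatch.
        std::printf("CUDA error: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // ESPResSo's GPU LB requires compute capability >= 2.0.
        std::printf("Device %d: %s, compute capability %d.%d%s\n",
                    i, prop.name, prop.major, prop.minor,
                    prop.major >= 2 ? " (ok)" : " (too old)");
    }
    return 0;
}
```

(This needs an actual GPU and driver to run, so treat it as a diagnostic sketch rather than something you can test on a GPU-less machine.)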

 

Greetings,

Georg

On Fri, Apr 10, 2015 at 1:44 PM, David Schwörer <address@hidden> wrote:

Hi,

A list of all CUDA capable GPUs is here:
https://developer.nvidia.com/cuda-gpus

You can find the instructions for getting started with CUDA (what is needed
for ESPResSo on GPUs) in NVIDIA's documentation:
http://docs.nvidia.com/cuda/index.html#getting-started-guides

Sincerely
David


On 04/10/2015 08:27 AM, Wink, Markus wrote:
> Hello everybody,
>
> so far I have been using the Lattice-Boltzmann method on the CPU. I am not familiar with GPUs, but I wanted to try out the GPU implementation. What are the minimum requirements for using it? Can someone please tell me what GPU is needed as a minimum? What libraries does the implementation rely on?
>
> Sorry for the unqualified question, but I am a total novice in the field of GPUs.
>
> Greetings
>
> Markus Wink

 


