espressomd-devel

Re: [ESPResSo-devel] LB fluid with variable viscosity


From: Ivan Cimrak
Subject: Re: [ESPResSo-devel] LB fluid with variable viscosity
Date: Mon, 3 Dec 2018 10:05:01 +0100

Hello Michael,


I am continuing with the implementation of variable viscosity for lbfluid, for 
now only for the CPU LB code. The GPU version will come later.

I added the necessary fields in LB_FluidNode. For now, I added only 
var_visc_gamma_shear. (You can check it at 
github/icimrak/espresso/diff_viscosity.) This field will not be updated every 
time step, since the blood cells do not move much and their boundaries do not 
cross lattice points very often. I suppose that reflagging the LB nodes once 
per 100-500 time steps would be sufficient. I also added code where the modes 
are recomputed in lbfields, and in several other places.

I am now implementing the reflagging of lbfluid by setting the correct 
lbfields[index].var_visc_gamma_shear value. To this end, I need to perform on 
each MPI rank:
1. a procedure determining the intersections of the blood cell boundaries with 
the LB lattice grid lines parallel to the x-axis (horizontal lines),
2. a run over these lines that, according to the intersections, flags/reflags 
the individual LB lattice grid points and sets the correct 
var_visc_gamma_shear value (depending on whether the grid node is inside or 
outside a blood cell).

Assume I can determine the intersections of the blood cell boundaries and the 
horizontal LB grid lines (by an algorithm based on a loop over the particles). 
I need to send this data from each MPI slave to the MPI master, but each slave 
can have a variable number of intersections. How can I implement this? Can I 
use the mpi_gather_stats/mpi_gather_stats_slave functions? How do I handle the 
variable size of the data that needs to be gathered?



Thank you,
Ivan
 




> On 27 Nov 2018, at 15:05, Michael Kuron <address@hidden> wrote:
> 
> Hi Ivan,
> 
>> But according to this value I will be able to assign the proper shear,
>> bulk, ghost-even and ghost-odd relaxation times. So only one real number
>> needs to be stored for each lbnode.
> 
> If you update this field less often than once per timestep, it might
> make sense to store all relaxation times instead of the single number
> as that will save you some computation time (including divisions, which
> are rather expensive on GPUs).
> 
>> But I am not sure how to implement such a field. Could you please
>> point me to some similar variable in ESPResSo, so I can implement
>> it in a similar manner?
> 
> I guess it would be similar to the LB force field (node_f in
> lbgpu_cuda.cu or lbfields[index].force_density in lb.cpp).
> 
>> Or is there a guide how to implement a global field so that
>> parallelization will be taken care of?
> 
> You don't need to worry about parallelization. You can calculate this
> field locally on each MPI rank because it only depends on particles
> that are on that MPI rank. Furthermore, on the GPU you don't need to do
> anything because the particles from all MPI ranks are present on the
> GPU.
> 
> Michael
> 



