
[Getfem-users] parallelized Dot product of sparse vectors


From: Tarek Elsayed
Subject: [Getfem-users] parallelized Dot product of sparse vectors
Date: Tue, 25 Oct 2011 19:28:02 +0200

Hi all,
I noticed that when computing the dot product of two sparse vectors inside a multi-threaded for loop, the multi-threading efficiency (measured as the ratio of CPU time to wall time) is very poor.
I tried implementing the dot product myself in a multi-threaded way, as shown below, but there was still no improvement.
Any suggestions?

Code:

#include <omp.h>
#include <gmm/gmm.h>

float Dot(const Svec& v1, const Svec& v2)   // Svec is the GMM sparse vector
{
    typename gmm::linalg_traits<Svec>::const_iterator its = gmm::vect_const_begin(v1);
    typename gmm::linalg_traits<Svec>::const_iterator ite = gmm::vect_const_end(v1);

    float result = 0.0f;
    omp_set_num_threads(8);
    #pragma omp parallel for default(shared)
    for (typename gmm::linalg_traits<Svec>::const_iterator it = its; it != ite; ++it)
    {
        int i = it.index();                      // index of the current nonzero of v1
        #pragma omp atomic                       // every thread updates the same accumulator
        result += real( conj(v1[i]) * v2[i] );
    }
    return result;
}
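A possible variant (just a sketch, not code from GMM or from the post above; the Svec typedef, the name DotReduction, and the use of std::complex<float> are my assumptions) replaces the per-iteration #pragma omp atomic, which serializes every update of result, with an OpenMP reduction over an ordinary integer loop. The nonzero indices of v1 are gathered into a plain array first so the parallel loop has the canonical integer form.

#include <omp.h>
#include <vector>
#include <complex>
#include <gmm/gmm.h>

typedef gmm::rsvector<std::complex<float> > Svec;   // assumed sparse vector type

float DotReduction(const Svec& v1, const Svec& v2)   // hypothetical variant, not from the post
{
    // Gather the nonzero indices of v1 once, serially.
    std::vector<unsigned> idx;
    for (gmm::linalg_traits<Svec>::const_iterator it = gmm::vect_const_begin(v1);
         it != gmm::vect_const_end(v1); ++it)
        idx.push_back(it.index());

    float result = 0.0f;
    // reduction(+:result) gives each thread its own partial sum, combined
    // once at the end, so no atomic is needed inside the loop body.
    #pragma omp parallel for reduction(+:result)
    for (long k = 0; k < (long)idx.size(); ++k) {
        unsigned i = idx[k];
        result += std::real(std::conj(v1[i]) * v2[i]);
    }
    return result;
}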

