
Re: [Gomp-discuss] parallell vs. distributed


From: Biagio Lucini
Subject: Re: [Gomp-discuss] parallell vs. distributed
Date: Tue, 4 Feb 2003 16:10:24 +0000 (GMT)

On Tue, 4 Feb 2003, Lars Segerlund wrote:

>   Right now we might not have too much use of the 'distributed section' 
> attribute, but it would make it easy for us to do future enhancements, 
> or code as subroutines for profiling or diagnostics.
> 
>   In this context the parallel sections and distributed sections would 
> work from each end, as a distributed computation is best done on a 
> larger data set, while parallel ( thread ) optimisations would in 
> general work best with a bottom-up approach.
> 
>   It's a bit of future enhancement/safety that I thought would be nice :-) .
> 
>   I think most people didn't like the idea, so I gave it up. I still 
> think it might be nice if one wanted to run with MPI or PVM support, 
> as a lot of existing clusters could use this feature.
> 

I think we should make a distinction here between what is nice and what is
feasible. In principle I agree with Lars, but let's face reality: the most
cost-effective machines are Beowulf clusters, i.e. mixed-scheme machines:
several boards, each with four or, more often, two processors sharing a
bank of memory, connected through more or less fast interconnects. This is
probably the system we should target: people buying expensive
multi-processor computers can also afford, and probably will buy (and
use), a specific compiler for them. People building clusters from
off-the-shelf components are more likely to use gcc and its extensions.
Those people would benefit the most from an integrated OpenMP-MPI
solution.
However, we should probably build up the project step by step. My humble
suggestion is that we start with pure OpenMP, but keep open the
possibility of MPI (or PVM, though I have almost no idea how the latter
works). One possible issue is that MPI is not directive-based:
parallelisation there is always explicit, and the compiler can't refuse
it. How do we deal with that? Probably the answer is to base distributed
processing on OpenMP directives as well. Which takes us back to the
starting point: first of all, we should understand how to make OpenMP
work.

Biagio






