
[Gomp-discuss] parallell vs. distributed


From: Lars Segerlund
Subject: [Gomp-discuss] parallell vs. distributed
Date: Tue, 04 Feb 2003 16:19:58 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i586; en-US; rv:1.2.1) Gecko/20021226 Debian/1.2.1-9



Scott Robert Ladd wrote:
Lars Segerlund wrote:

...[snip]...

 What about the distinction of 'parallel' and 'distributed' regions,
does everyone think this is a good idea?


I'm not certain what you mean here. When I think of "distributed", I think
of an application like address@hidden, where work is shared across a loose
network of heterogeneous machines. The OpenMP model is very much based on
"threads" (however a given system defines those) running in a shared-memory,
multiple-processor environment.

..Scott


The reason I want to make a distinction is that, for example, OdinMP has an implementation which can run on top of MPI (hence distributed), and thus does not share memory with the other 'threads'.

The classic approach has been to generate subroutines for the parallel parts, which copy in and out all needed parameters (plus some synchronization and so on, but you get the general idea). This overhead is circumvented by, for example, Intel's compiler assuming shared memory and generating only parallel sections (of course with a set of private variables for each thread), so the shared memory is really shared.
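To make that concrete, here is a minimal hand-written sketch of the outlining transformation in C. The names (region_args, region_fn, run_parallel) are made up for illustration and are not the actual runtime interface of GCC, OdinMP, or Intel's compiler; the "threads" are even run serially here, since the point is only the explicit copy-in of parameters into a generated subroutine versus letting threads share memory directly.

#include <stdio.h>

/* Everything the region needs is packed up and copied in explicitly;
   on a distributed (MPI-style) runtime this copy is unavoidable,
   while a shared-memory compiler can simply pass pointers. */
struct region_args {
    double       *a;
    const double *b;
    double        c;
    int           n;
};

/* The compiler-generated subroutine for the parallel region.
   Each thread keeps its own private loop index; the arrays are shared. */
static void region_fn(void *data, int thread_id, int num_threads)
{
    struct region_args *p = data;
    for (int i = thread_id; i < p->n; i += num_threads)   /* private i */
        p->a[i] = p->b[i] * p->c;
}

/* Stand-in for the runtime entry point: runs the "threads" serially. */
static void run_parallel(void (*fn)(void *, int, int), void *data, int nthr)
{
    for (int t = 0; t < nthr; t++)
        fn(data, t, nthr);
}

int main(void)
{
    double a[8], b[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    struct region_args args = { a, b, 2.0, 8 };
    run_parallel(region_fn, &args, 4);
    printf("a[7] = %g\n", a[7]);   /* prints 16 */
    return 0;
}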

Right now we might not have much use for the 'distributed section' attribute, but it would make it easy for us to add future enhancements, or to generate the code as subroutines for profiling or diagnostics.

In this context the parallel sections and distributed sections would work from opposite ends: a distributed computation is best done on a larger data set (top down), while parallel (thread) optimizations would in general work best with a bottom-up approach.
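As an illustration of that top-down/bottom-up split, here is a small C sketch under assumed names: node_id/num_nodes stand in for an MPI rank and communicator size, and the inner thread split is serialized rather than using a real threading runtime.

#include <stdio.h>
#include <stddef.h>

static void scale_rows(double *m, size_t rows, size_t cols, double c,
                       int node_id, int num_nodes, int num_threads)
{
    /* Distributed level (top down): each node owns a contiguous block
       of rows of the large data set. */
    size_t per_node = (rows + num_nodes - 1) / num_nodes;
    size_t lo = (size_t)node_id * per_node;
    size_t hi = lo + per_node > rows ? rows : lo + per_node;

    /* Thread level (bottom up): within the node's block, rows would be
       split across threads (serialized here for simplicity). */
    for (int t = 0; t < num_threads; t++)
        for (size_t r = lo + t; r < hi; r += (size_t)num_threads)
            for (size_t j = 0; j < cols; j++)
                m[r * cols + j] *= c;
}

int main(void)
{
    double m[4 * 3] = {1,1,1, 2,2,2, 3,3,3, 4,4,4};
    /* Pretend to be node 1 of 2, with 2 threads: scales rows 2..3 only. */
    scale_rows(m, 4, 3, 10.0, 1, 2, 2);
    printf("m[6] = %g, m[0] = %g\n", m[6], m[0]);   /* prints 30 and 1 */
    return 0;
}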

 It's a bit of future enhancement/safety that I thought would be nice :-).

I think most people didn't like the idea, so I gave it up. I still think it might be nice if one wanted to run with MPI or PVM support, as a lot of existing clusters could use this feature.

 / regards, Lars Segerlund.





