
Re: octave-mpi questions


From: J.D. Cole
Subject: Re: octave-mpi questions
Date: Sat, 22 Feb 2003 12:13:31 -0800
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.1) Gecko/20020827

John W. Eaton wrote:
On 21-Feb-2003, Martin Siegert <address@hidden> wrote:

| My impression is that MPICH2 is really beta (e.g., the F90 interface does
| not exist yet).

We only need a C interface, so I don't think this matters much.  Other
than missing interfaces, are there serious problems with MPICH2?

| The only MPI-1 compliant way of calling MPI_Init is
| MPI_Init(&argc,&argv);
| The question now is: is there a way to provide octave's command line
| arguments to the mpi_init module?

Sure, they are already available in the argv variable that is defined
when Octave starts.  But this is just a copy in an octave_value
object, so you would need to convert it back to an array of C
character strings.
  
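Not part of the original message, but a minimal sketch of the conversion described above: packaging a copy of the command-line words into the argc/argv form that MPI-1's MPI_Init expects. The std::vector input is only a stand-in for whatever form the copy of Octave's argv actually takes.

  #include <cstring>
  #include <string>
  #include <vector>

  #include <mpi.h>

  // Build an argc/argv pair from a copy of the command-line words and
  // hand it to MPI_Init.  The duplicated strings are kept alive in a
  // static vector because the MPI implementation may retain pointers.
  static void
  init_mpi_from_strings (const std::vector<std::string>& words)
  {
    static std::vector<char *> vec;

    for (size_t i = 0; i < words.size (); i++)
      vec.push_back (strdup (words[i].c_str ()));

    vec.push_back (0);   // argv is conventionally NULL-terminated

    int argc = static_cast<int> (words.size ());
    char **argv = &vec[0];

    MPI_Init (&argc, &argv);   // the only MPI-1 compliant form
  }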
IMHO it might be a mistake to require Octave to rely on the MPI-2 standard until support for it is more widespread.  Taking that into consideration, it may be valuable to implement support for dynamically linked startup files: kind of an octaverc directory, which is executed prior to any Octave startup.  (It would be equally appropriate to provide an octavecr directory which allows shutdown routines, such as MPI_Finalize, to be called.)  This is not only useful for the MPI_Init function, but may also be valuable to a user who wishes to use the VSIPL standard libraries, or other libraries which require "close to main" initialization calls.  In addition to allowing Octave to "start up" external libraries, these files could also install appropriate variables, etc.  (It would also be worth considering specifying such files in .octaverc, perhaps a better solution which would cause less clutter as far as the distribution is concerned.)
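To illustrate the shutdown half of that idea (this is only a sketch of mine, not an existing Octave facility), a dynamically linked startup module could register a cleanup routine so that MPI_Finalize runs when the process exits:

  #include <cstdlib>
  #include <mpi.h>

  // Candidate body for an "octavecr"-style shutdown hook: finalize
  // MPI only if it was actually initialized.
  static void
  mpi_cleanup (void)
  {
    int initialized = 0;
    MPI_Initialized (&initialized);
    if (initialized)
      MPI_Finalize ();
  }

  // A startup module could arrange for the cleanup to run at exit.
  static void
  register_mpi_cleanup (void)
  {
    atexit (mpi_cleanup);
  }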

JWE mentioned this concept here:
http://www.octave.org/octave-lists/archive/octave-maintainers.2003/msg00074.html
in the context of using feval ().  The caveat to this approach is that these initialization calls must ignore command-line arguments not intended for that library's startup, AND the user must beware of arguments which may be relevant to multiple libraries, Octave included.
In the thread starting here:

  http://www.octave.org/octave-lists/archive/help-octave.2001/msg00138.html

a claim was made that the argv/argc passed to MPI_Init must be
pointers to the actual variables passed to main, and that MPI_Init
needs to see them before they are processed by the program that calls
MPI_Init.  The MPI standard may claim that those are requirements, but
so far, no one has been able to explain *why* the standard includes
these requirements.  Also, there was a claim that bad things happen if
the function that calls MPI_Init returns, and I'm not sure whether
that is another limitation of the standard or was simply due to a
buggy implementation.
For starters, the standard suggests that the argv/argc dependency stems from implementation-dependent needs. Specifically, Section 7.5 "Startup" of the MPI-1.1 document states that MPI does not specify "how an MPI program is started or launched from the command line". I think this is due to the manner in which MPI processes may be implemented, in addition to operating-system interfacing. As stated in previous threads, the MPI-2 standard does not require argv/argc to be passed to MPI_Init, specifically because of the problem that troubles us here: libraries may wish to use MPI functionality. (See the MPI-2.0 document, Section 4.2 "Passing NULL to MPI_Init".)
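For comparison, here is roughly what the MPI-2 style initialization looks like, with no command-line arguments handed to MPI at all (again just a sketch, not code from octave-mpi):

  #include <cstddef>
  #include <mpi.h>

  // Under MPI-2, both arguments to MPI_Init may be NULL, so a library
  // can initialize MPI without ever seeing main's argc/argv.
  static void
  init_mpi_no_args (void)
  {
    int initialized = 0;
    MPI_Initialized (&initialized);   // avoid initializing twice
    if (! initialized)
      MPI_Init (NULL, NULL);
  }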

So, can anyone tell me *why* these things should cause problems?  It
seems to me that we should be able to define  MPI_Init in Octave that
does something like

  mpi_init (argv)

where argv is a cell array of strings that are packaged as appropriate
and passed on to the C MPI_Init.  Then users could pass whatever
arguments they like to the MPI system and we wouldn't have to modify
Octave's main program or argument processing at all (except perhaps we
would need a way to start Octave and have it run a function and then
continue processing, so the client processes would know to go into
listener mode, waiting for commands from the server process).
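A rough sketch of what that client-side "listener mode" could look like (my own illustration; evaluate_command is a hypothetical hook into the interpreter, standing in for something like feval): non-root ranks block waiting for command strings broadcast from the server rank.

  #include <string>
  #include <vector>

  #include <mpi.h>

  // Hypothetical hook that would hand a command string to Octave's
  // evaluator; here it is only a placeholder.
  extern void evaluate_command (const std::string& cmd);

  // Client ranks loop here, waiting for commands from rank 0.
  static void
  listener_loop (void)
  {
    for (;;)
      {
        int len = 0;
        MPI_Bcast (&len, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if (len <= 0)   // server broadcasts 0 to shut the clients down
          break;

        std::vector<char> buf (len);
        MPI_Bcast (&buf[0], len, MPI_CHAR, 0, MPI_COMM_WORLD);

        evaluate_command (std::string (buf.begin (), buf.end ()));
      }
  }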
  
I refer back to the octaverc/octavecr directory idea, above.
It would also be nice if Octave's mpi_init could start the client
processes, so you wouldn't have to use mpirun (or similar) to do that
when initially starting Octave.
  
    This may be a problem.  The MPI-1.1 standard specifically states that the process model is static; the MPI-2 standard, however, provides new capability to dynamically create processes (see the MPI-2.0 document, Section 5 "Process Creation and Management").  The new capability only gets us halfway there, though.  While some MPI implementations, such as LAM-MPI, allow applications to be started without mpirun or mpiexec, there is still no standardized way to start an MPI application (Section 4.1 "Standardized Process Startup").  I tend to think that anyone using MPI in the first place could write a script such as "mpioctave" to handle the command-line startup.  For dynamic creation of processes (see the sketch below), the user is still going to have to describe their distributed environment before new processes can be spawned.
    A second, less desirable, solution could give this kind of functionality; however, it involves creating separate (unix) processes to do the MPI communication, and would undoubtedly slow down the evaluation process.
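For reference, a sketch of the MPI-2 dynamic process creation mentioned above (the command name "octave" is purely illustrative, and a real setup would also need the environment description just mentioned):

  #include <mpi.h>

  // Spawn additional worker processes at run time via the MPI-2
  // dynamic process management interface.
  static MPI_Comm
  spawn_workers (int nworkers)
  {
    MPI_Comm children;

    MPI_Comm_spawn (const_cast<char *> ("octave"), MPI_ARGV_NULL,
                    nworkers, MPI_INFO_NULL, 0, MPI_COMM_SELF,
                    &children, MPI_ERRCODES_IGNORE);

    return children;
  }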

There are obvious reasons why we may want to choose MPI-2 over MPI-1.1; however, it is my opinion that, even with the dynamic startup capabilities, we don't need to go there yet.  I am also of the opinion that many users running Octave on parallel architectures may not yet have the choice to use the newer standard.  (Or it may come at quite a steep price.)

-JD
