Cooperation with distributed job processing systems
From: SF Markus Elfring
Subject: Cooperation with distributed job processing systems
Date: Sat, 03 Jan 2015 23:30:31 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.3.0
Hello,
The tool "make" supports the parallel execution
of build steps, to some degree, through the command
line option "-j".
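As a small illustration (a hypothetical Makefile, written here with printf because recipe lines must start with a tab), "-j" lets make run independent targets concurrently:

```shell
# Write a minimal Makefile with three independent targets "a", "b", "c".
printf 'all: a b c\na b c:\n\ttouch $@\n' > Makefile
# With -j3, make is allowed to run the three recipes concurrently.
make -j3 all
ls a b c
```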
It can happen that the computing resources of a
specific software build environment are too limited
for a demanding data processing application, so the
computation takes longer than it would on alternative
systems.
There are programming interfaces available that
support submitting jobs to bigger and more powerful
computer systems.
Examples of corresponding tools:
1. Object request broker (from CORBA)
http://www.omg.org/spec/
2. Berkeley Open Infrastructure for Network Computing
http://boinc.berkeley.edu/
3. HTCondor™
http://research.cs.wisc.edu/htcondor/
4. Beowulf cluster
http://www.beowulf.org/
5. High Performance ParalleX
http://stellar-group.org/libraries/hpx/
6. Globus® Toolkit
http://toolkit.globus.org/toolkit/
I imagine that the "jobserver" implementation could
be extended for convenient reuse of such APIs.
http://make.mad-scientist.net/papers/jobserver-implementation/
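For readers unfamiliar with the jobserver: the paper above describes a pipe preloaded with one token per allowed job, where each sub-make reads a byte to acquire a job slot and writes it back when the job finishes. A minimal sketch of that idea (not GNU make's actual code, just a simulation of the token protocol):

```python
import os

def make_jobserver(slots):
    # Create the jobserver pipe and preload one token per parallel slot,
    # as the jobserver protocol does for "make -jN".
    r, w = os.pipe()
    os.write(w, b"+" * slots)
    return r, w

def acquire(r):
    # Acquire a job slot; blocks when all tokens are in use.
    return os.read(r, 1)

def release(w, token):
    # Return the token so another job may start.
    os.write(w, token)

r, w = make_jobserver(2)
t1 = acquire(r)
t2 = acquire(r)
# A third acquire would block here until a token is released.
release(w, t1)
t3 = acquire(r)
```

An extension for distributed execution would presumably hand such tokens (or whole jobs) to a remote submission API instead of a local child process.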
What are the chances that extensions will be considered
for submitting some jobs directly to such interfaces
for distributed execution?
Regards,
Markus