
Re: [Bug-apl] segfault when using 'CORE_COUNT_WANTED' configure flag


From: Dr. Jürgen Sauermann
Subject: Re: [Bug-apl] segfault when using 'CORE_COUNT_WANTED' configure flag
Date: Wed, 16 Oct 2019 18:18:02 +0200
User-agent: Mozilla/5.0 (X11; Linux i686; rv:60.0) Gecko/20100101 Thunderbird/60.6.1

Hi Rowan,

a stack trace for the segfault would be good (command: gdb apl, then 'run', and finally 'bt' after the segfault).
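
For reference, such a session looks roughly like this (the SIGSEGV line is what
gdb prints when the crash is reproduced; everything else is machine-specific):

```
$ gdb apl
(gdb) run
...                    # reproduce the crash here
Program received signal SIGSEGV, Segmentation fault.
(gdb) bt
```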

No idea what AST is.
You could try TAB expansion to see the available options in various situations, and try e.g.

]help ⌹

to get help for APL primitives. Currently system functions and variables are not in ]help,
but I suppose extending the file src/Help.def could easily add them.


Compiling APL is IMHO the wrong path: too many problems, too little gain.

Best Regards,
Jürgen Sauermann


On 10/16/19 5:01 PM, Rowan Cannaday wrote:
Thank you for the explanation Jürgen.

That makes intuitive sense. A shared-memory, single-threaded service is a reasonable abstraction.

Another approach is to compile a subset of APL to an intermediate representation.

Is there a way to export the AST?
In addition, is there an in-REPL method of viewing help and/or arguments for system variables & functions?

By the way, a minor regression: segfaulting, but only after exiting.
```
      )OFF
====================================================
SEGMENTATION FAULT
thread: 0x7f8747766700
thread_cSegmentation fault
```

Thanks again,
- Rowan

On Wed, Oct 16, 2019 at 12:06 PM Dr. Jürgen Sauermann <mail@jürgen-sauermann.de> wrote:
Hi Blake,

it is sort of working, but I could well use some help in troubleshooting
the remaining problems. I can help fix them, but finding their root cause
(and making them reproducible) is a different story.

My current interpretation of various benchmarks that Elias Mårtenson and
I did some years ago is that the bandwidth of the memory interface
between the CPUs (or cores) and the memory is the limiting factor: no
matter how efficient the APL interpreter is, this bottleneck will dictate the
speedup that can be achieved.
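
As a back-of-the-envelope illustration (the numbers here are invented for the
example): if a single core can stream, say, 10 GB/s from memory while the shared
memory interface saturates at 40 GB/s, then a memory-bound operation such as a
vector sum can never get a speedup beyond 40/10 = 4, no matter how many cores
work on it.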

As an example, from 1985 to 1990, four students and I built the
hardware of a parallel APL machine with 32 CPUs and measured a speedup
of close to 32 for sufficiently large vectors.

In contrast, if I remember correctly, Elias achieved a speedup of 12 with
80 CPUs using the parallel feature of GNU APL. The only difference that
I can see between our 1990 machine (called Datis-P-256 because the architecture
could be scaled up to 256 processors) and today's multi-core boxes is the
memory architecture:

Datis-P had one separate memory for each CPU, while current multicore
boxes share their memory module(s) among different cores. That simply
boils down to the fact that the memory bandwidth of Datis-P scaled with the
number of processors, while the memory bandwidth of a typical multi-core box
does not scale with the number of cores. As long as this is the case, parallel
APL remains severely limited in terms of the speedup that can be achieved.
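
The effect is easy to demonstrate outside of APL. The following minimal C++
sketch (illustrative only, not part of GNU APL) sums a vector far larger than
any cache with 1, 2, 4, ... threads; on a typical shared-memory box the timings
stop improving well before the core count is reached, because the threads
saturate the memory bus rather than the ALUs:

```
// bandwidth_demo.cc -- illustrative only, not part of GNU APL.
// Build:  g++ -O2 -pthread bandwidth_demo.cc -o bandwidth_demo
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main()
{
    const std::size_t N = std::size_t(1) << 27;   // 128M doubles = 1 GiB
    std::vector<double> data(N, 1.0);

    unsigned max_threads = std::thread::hardware_concurrency();
    if (max_threads == 0)   max_threads = 4;      // fallback if unknown

    for (unsigned nthreads = 1; nthreads <= max_threads; nthreads *= 2)
    {
        std::vector<double> partial(nthreads, 0.0);
        const auto start = std::chrono::steady_clock::now();

        // every thread sums one contiguous chunk of the vector
        std::vector<std::thread> pool;
        const std::size_t chunk = N / nthreads;
        for (unsigned t = 0; t < nthreads; ++t)
            pool.emplace_back([&data, &partial, chunk, t]
                {
                    const double * p = data.data() + t * chunk;
                    partial[t] = std::accumulate(p, p + chunk, 0.0);
                });
        for (auto & th : pool)   th.join();

        const double sum = std::accumulate(partial.begin(), partial.end(), 0.0);
        const double ms  = std::chrono::duration<double, std::milli>(
                               std::chrono::steady_clock::now() - start).count();
        std::printf("%2u thread(s): sum = %.0f, %8.1f ms\n", nthreads, sum, ms);
    }
    return 0;
}
```

If the run time levels off while threads are still being added, the memory
interface (and not the interpreter) is the bottleneck.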

Best Regards,
Jürgen Sauermann



On 10/16/19 12:58 PM, Blake McBride wrote:
Greetings,

I think getting the parallel processing working is important. It may be that for various reasons the speedup in general cases is minimal and not worth the effort. However, I'd imagine that there are particular use cases utilizing large arrays where the speedup would be substantial. That is where those types of enhancements would make APL a real benefit.

Thanks.

Blake


On Wed, Oct 16, 2019 at 5:27 AM Dr. Jürgen Sauermann <mail@jürgen-sauermann.de> wrote:
Hi Rowan,

fixed in SVN 1191.
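
To pick up the fix in an existing SVN working copy, something along these lines
should do; the exact steps depend on how your tree was configured:

```
$ svn update          # should bring the working copy to r1191 or later
$ make
$ sudo make install
```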

You should not be too enthusiastic, though, because the speed-ups that
can be achieved are somewhat disappointing. Because of that, I haven't
put too much effort into fixing faults (sometimes apl hangs on a
semaphore when parallel execution is enabled).

Best Regards,
Jürgen Sauermann


On 10/16/19 5:15 AM, Rowan Cannaday wrote:
Hello,

intrigued by the ability to parallelize APL, I thought I'd try to test it:

Below is the output of `apl --cfg`, followed by a line of '=' signs, followed by the output of `apl -q`:


configurable options:
---------------------
    ASSERT_LEVEL_WANTED=2
    SECURITY_LEVEL_WANTED=0 (default)
    APSERVER_PATH=/tmp/GNU-APL/APserver (default)
    APSERVER_PORT=16366 (default)
    APSERVER_TRANSPORT=0 (default)
    CORE_COUNT_WANTED=2
    DYNAMIC_LOG_WANTED=yes
    MAX_RANK_WANTED=8 (default)
    RATIONAL_NUMBERS_WANTED=yes
    SHORT_VALUE_LENGTH_WANTED=12, therefore:
        sizeof(Value)       : 456 bytes
        sizeof(Cell)        :  24 bytes
        sizeof(Value header): 168 bytes

    VALUE_CHECK_WANTED=yes
    VALUE_HISTORY_WANTED=yes
    VF_TRACING_WANTED=no (default)
    VISIBLE_MARKERS_WANTED=yes

how ./configure was (probably) called:
--------------------------------------
    ./configure  'CORE_COUNT_WANTED=2' 'DEVELOP_WANTED=yes' 'VALUE_HISTORY_WANTED=yes' 'VISIBLE_MARKERS_WANTED=yes' '--enable-maintainer-mode'

BUILDTAG:
---------
    Project:        GNU APL
    Version / SVN:  1.8 / 1190M
    Build Date:     2019-10-16 02:45:24 UTC
    Build OS:       Linux 5.2.0-3-amd64 x86_64
    config.status:  'CORE_COUNT_WANTED=2' 'DEVELOP_WANTED=yes' 'VALUE_HISTORY_WANTED=yes' 'VISIBLE_MARKERS_WANTED=yes' '--enable-maintainer-mode'
    Archive SVN:    1161

================================================================================

$ apl -q


====================================================
SEGMENTATION FAULT
thread: 0x7f6078042e00
thread_contexts_count: 2
busy_worker_count:     0
active_core_count:     1
thread # 0:               0 RUN  job:   0 no-name
thread #-1:               0 RUN  job:   0 no-name


----------------------------------------
-- Stack trace at main.cc:88
----------------------------------------
0x7F6078FD1BBB __libc_start_main
0x5631406C386D  main
0x5631406CAD8D   init_apl(int, char const**)
0x5631407E881B    Parallel::init(bool)
0x563140832E2D     Thread_context::init_parallel(CoreCount, bool)
0x7F60794E5B18      sem_init
0x7F60794E8510
0x5631406CA95A
========================================
====================================================





