
Re: Why is Elisp slow?


From: 조성빈
Subject: Re: Why is Elisp slow?
Date: Sat, 4 May 2019 22:38:25 +0900

On May 4, 2019, at 10:27 PM, Ergus <spacibba@aol.com> wrote:

> On Sat, May 04, 2019 at 12:18:29AM +0200, Óscar Fuentes wrote:
>> Ergus <spacibba@aol.com> writes:
>> 
>>>> More importantly, the libJIT build failed to show any significant
>>>> speed-up wrt byte code, so it sounds like maybe the whole idea was
>>>> either wrong or its design couldn't possibly provide any gains.  Or
>>>> maybe we just measured the speed-up in wrong scenarios.
>>>> 
>>> This is not surprising. My work is 80% performance measurement and
>>> benchmarking, and the real improvement from JIT compilation (in my
>>> experience), and especially with libJIT, is not as good as many people
>>> expect in most common scenarios. That's because the generated
>>> code is usually very generic (so it does not take advantage of
>>> architecture-specific features), and strategies like vectorization and
>>> branch prediction are very difficult to hint (most of the time
>>> impossible). So the only real difference from a bytecode interpreter is
>>> the bytecode-parsing part, and not much more.
>> 
>> In Emacs' case, I'm pretty sure whatever advantages come from good
>> architecture-specific code, accurate branch prediction, etc. are below
>> the noise level. As you pointed out below, Elisp is a dynamic language,
>> and for turning this
>> 
>> (let ((acc 0))
>>   (dotimes (i 10)
>>     (setq acc (+ acc (foo i)))))
>> 
>> into this
>> 
>> int acc = 0;
>> for (int i = 0; i < 10; ++i) {
>>   acc += foo(i);
>> }
>> 
>> you need either sophisticated analysis (which, in practice, only works
>> for the "easy" cases) or to annotate the code with type declarations
>> (and enforce them).
>> 
>> Because otherwise handling variables as containers for generic values is
>> incompatible with "C-like" performance.
>> 
>> And an Elisp -> C translator does not magically solve this.
>> 
> That's true. The code quality will not be very good (as happens with
> Android's Java native compilers and with Cython), but some low-level
> optimizations can still be applied. That depends on the optimizations
> implemented and on the information provided to the compilers (both of
> them).
> 
> A Lisp-to-C compiler, for example, can significantly reduce function-call
> and callback overheads, because thanks to the Lisp syntax it is
> very easy to apply inlining optimizations, which in C represent a VERY
> important improvement.

I’m pretty sure that is hard to do, because Lisp is a dynamic language,
and it is hard to be sure that the function I (the compiler) want to inline is
really the function that the code writer wants to call.
The transpiled code will essentially be C code that contains a small embedded
Elisp interpreter, which imposes the same overhead as running Elisp code
through Emacs's own Elisp engine.
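
To make that concrete, here is a minimal, self-contained sketch of the kind of
C a naive Elisp-to-C translator would have to emit for the loop above. The
names (lobj, generic_add, lookup_function, foo_impl) are invented for
illustration and are not real Emacs internals; the point is only that every
value stays boxed and every call still goes through a run-time lookup and type
checks:

#include <stdio.h>
#include <string.h>

/* Hypothetical boxed Lisp value: every object carries a type tag.  */
typedef enum { TAG_INT, TAG_OTHER } tag_t;
typedef struct { tag_t tag; long i; } lobj;

static lobj make_fixnum (long n) { lobj v = { TAG_INT, n }; return v; }

/* Generic `+': must check the tags of both arguments on every call.  */
static lobj generic_add (lobj a, lobj b)
{
  if (a.tag != TAG_INT || b.tag != TAG_INT)
    { /* the real thing would signal wrong-type-argument here */ }
  return make_fixnum (a.i + b.i);
}

/* Stand-ins for `foo' and for the symbol table that resolves it.  */
static lobj foo_impl (lobj n) { return make_fixnum (2 * n.i); }
typedef lobj (*lfun) (lobj);
static lfun lookup_function (const char *name)
{
  return strcmp (name, "foo") == 0 ? foo_impl : NULL;
}

int main (void)
{
  /* (let ((acc 0)) (dotimes (i 10) (setq acc (+ acc (foo i)))))  */
  lobj acc = make_fixnum (0);
  for (long i = 0; i < 10; i++)
    {
      lfun foo = lookup_function ("foo");              /* dispatch per call */
      acc = generic_add (acc, foo (make_fixnum (i)));
    }
  printf ("%ld\n", acc.i);                             /* prints 90 */
  return 0;
}

In real transpiled code, lookup_function would be the interpreter's own symbol
lookup and foo would live behind it, invisible to the C compiler, so none of
the boxing, dispatch or type checking could be inlined or folded away. That is
the overhead that stays.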

> In your example, foo, setq and the + operator are
> functions called at run time, which, interpreted, means going to the symbol
> hash table, finding the pointer to the function, interpreting the inputs and
> executing... compiling that... just think how much it can change.

Transpiled C code will have the same overhead.

> 
> With a simple optimization like having the hardcoded 10, the compiler
> will know the number of iterations to execute, and the C compiler applies
> many good optimizations in those cases.
> 
> So it won't be the same as your second code, but performance could be
> of the same order in many cases.
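
For comparison, this is the kind of hand-written C (not transpiler output) on
which a C compiler can actually exploit the literal 10 -- but only because the
callee is visible and the values are plain ints, which is exactly the
information a straightforward Elisp-to-C translator does not have:

/* Hypothetical hand-written equivalent of the quoted loop.  */
static inline int foo (int i) { return 2 * i; }

int sum10 (void)
{
  int acc = 0;
  for (int i = 0; i < 10; ++i)   /* constant trip count: unrollable */
    acc += foo (i);              /* visible callee: inlinable       */
  return acc;                    /* gcc -O2 typically folds this to `return 90;' */
}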



