From: Matheus K. Ferst
Subject: Re: [PATCH v3 25/30] target/ppc: Move ADDI, ADDIS to decodetree, implement PADDI
Date: Fri, 30 Apr 2021 15:02:13 -0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.7.1
On 30/04/2021 11:31, Richard Henderson wrote:
> On 4/30/21 7:05 AM, Matheus K. Ferst wrote:
>> +ADDI            000001 10 0--.-- .................. \
>>                  001110 ..... ..... ................ @PLS_D
>
> I'm not sure about this.  It's a bit surprising to find ADDI here, and
> the comment that explains why is likely to be ignored after the big
> copyright header.
>
> You could move the comment closer, and replicate, e.g.
>
>   ADDI .... \
>        .... @PLS_D   # PADDI
If we keep this naming, IMHO moving the comment closer looks better.
>> I'd prefer to keep a trans_PADDI like
>>
>>   static bool trans_PADDI(DisasContext *ctx, arg_PLS_D *a)
>>   {
>>       if (!resolve_PLS_D(ctx, a)) {
>>           return false;
>>       }
>>       return trans_ADDI(ctx, a);
>>   }
>
> But in this case ADDI probably doesn't use PLS_D.  You could use
>
>   static bool trans_PADDI(DisasContext *ctx, arg_PLS_D *a)
>   {
>       arg_D d;
>       if (!resolve_PLS_D(ctx, &d, a)) {
>           return false;
>       }
>       return trans_ADDI(ctx, &d);
>   }
>
> making sure to use int64_t in the offset for arg_D.
We'd keep trans_ADDI with the same signature to avoid creating an arg_D on the stack. Patch 4 added type specification; maybe we can define an arg_D within arg_PLS_D? I'll play a bit to see if it works.
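To picture what embedding an arg_D inside arg_PLS_D could buy, here is a rough sketch. All struct layouts, field names (si1, r), and the immediate-combining logic are invented for illustration; the real decodetree-generated types in QEMU differ, and trans_ADDI is only a stub here:

```c
/* Hypothetical sketch, not the actual QEMU definitions: if the
 * generated arg_PLS_D embedded an arg_D, resolve_PLS_D could combine
 * the prefix and suffix immediates in place, and trans_ADDI could be
 * reused by passing &a->d, with no separate arg_D on the stack. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    int rt, ra;
    int64_t si;          /* int64_t: the prefixed form carries a 34-bit offset */
} arg_D;

typedef struct {
    arg_D d;             /* embedded arg_D, shared with trans_ADDI */
    int64_t si1;         /* high immediate bits from the prefix (assumed field) */
    int r;               /* PC-relative flag (assumed field) */
} arg_PLS_D;

/* Stand-in for the real trans_ADDI: just records the resolved immediate. */
static int64_t last_si;
static bool trans_ADDI(void *ctx, arg_D *a)
{
    last_si = a->si;
    return true;
}

/* Fold the prefix immediate into the embedded arg_D in place. */
static bool resolve_PLS_D(void *ctx, arg_PLS_D *a)
{
    if (a->r) {
        return false;    /* PC-relative form not handled in this sketch */
    }
    a->d.si = (a->si1 << 16) | (a->d.si & 0xffff);
    return true;
}

static bool trans_PADDI(void *ctx, arg_PLS_D *a)
{
    if (!resolve_PLS_D(ctx, a)) {
        return false;
    }
    return trans_ADDI(ctx, &a->d);   /* no extra arg_D copy on the stack */
}
```

Since &a->d points into storage the caller already owns, nothing new escapes from trans_PADDI's frame, which is what would make the tail call to trans_ADDI possible again.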
>> It's the middle way between v2 and v3. trans_ADDI code is reused,
>> it'll probably be optimized as a tail call, and resolve_PLS_D is not
>> called when it's not needed.
>
> My version won't tail-call, because of the escaping local storage, but
> I don't see how you can avoid it.
>
> r~
I haven't been able to test it properly yet, but at least on godbolt it seems that the compiler prefers to inline over tail call, so maybe that's not a problem.
Thanks,
Matheus K. Ferst
Instituto de Pesquisas ELDORADO <http://www.eldorado.org.br/>
Junior Software Analyst
Aviso Legal - Disclaimer <https://www.eldorado.org.br/disclaimer.html>