Re: [avr-chat] Missed Optimisation ?


From: Michael Hennebry
Subject: Re: [avr-chat] Missed Optimisation ?
Date: Thu, 3 Mar 2011 10:40:41 -0600 (CST)
User-agent: Alpine 1.00 (DEB 882 2007-12-20)

On Thu, 3 Mar 2011, Alex Eremeenkov wrote:

03.03.2011 1:45, Michael Hennebry writes:
On Wed, 2 Mar 2011, Alex Eremeenkov wrote:

02.03.2011 19:20, Michael Hennebry writes:
On Tue, 1 Mar 2011, Graham Davies wrote:

Michael Hennebry wrote:

On further examination, I did find a "volatile uint32_t result;".
In context, I would guess that it was a complete
statement in the same file as the ISR.
Note the absence of attributes.
How could result not be in internal SRAM?

You may know that 'result' is going to be in internal SRAM, but you know things that the compiler doesn't. The compiler just puts the variable in memory.

If the compiler doesn't know it, how would I?
The compiler knows the toolchain better than I do.


You *must* know, because you are the developer.
It is solely the developer's responsibility to know what the dedicated memory regions mean. The compiler doesn't know, and shouldn't know, what the linker's output placement means in the real world.

Horse hockey.
The compiler is required to know because it is
required to emit the correct kinds of instructions.
volatile uint32_t result;
will go in the same memory space as
uint32_t consequence;
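
(A minimal sketch of the point, assuming avr-gcc and a classic AVR target;
the function and the second variable are illustrative, not from the original
code.  Both objects land in .bss, which the default linker script maps into
internal SRAM; volatile only forbids the compiler from caching or eliding
accesses, it does not change which address space is used:)

    #include <stdint.h>

    volatile uint32_t result;       /* shared with an ISR; every access is emitted */
    uint32_t consequence;           /* ordinary object in the same data space      */

    uint32_t read_both(void)
    {
        return result + consequence;   /* both read with plain data-space loads */
    }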

So you are sure that read/write access to internal SRAM, access to external memory (via the CPU's memory interface), and access to cached memory use different instructions?

Actually, I'm fairly sure that they are the same.
Someone correct me if I'm wrong.
The reason I need to exclude external memory is that it is
external and therefore beyond the compiler's knowledge.
It could be a dual-port memory connected to another processor.
It could even be an FPGA.
Absent a contrary promise from the user,
a volatile variable in external memory
has to be treated as a generic volatile.
So far as I am aware, avr-gcc has no mechanism to provide such a promise.
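
(To make "generic volatile" concrete: a hypothetical status register behind
the external memory bus, say in a dual-port RAM or an FPGA.  The name and the
address are made up for illustration.  Because the compiler cannot know what
sits behind the address, it must perform every access exactly as written:)

    #include <stdint.h>

    /* Hypothetical device register mapped into the AVR data space via the
       XMEM interface; the address is illustrative. */
    #define XDEV_STATUS (*(volatile uint8_t *)0x8000)

    void wait_until_ready(void)
    {
        while ((XDEV_STATUS & 0x01) == 0)
            ;                           /* re-read the register on every pass */
    }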

And the instructions will differ according to the memory type?
Okay.
Then explain to me, please, how I can compile a program once with the same compiler (producing a single object file) and link it twice: the first time to run from SRAM, the second time to run from external memory. The compiler output is the same, and the program works correctly in both variants, each time from its own location. Or do you disagree here?
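
(For concreteness, the build being described might look roughly like this,
using the usual avr-libc recipe for relocating .data/.bss into external RAM;
the MCU and the start address are illustrative:)

    avr-gcc -mmcu=atmega128 -c app.c -o app.o           # compile once
    avr-gcc -mmcu=atmega128 app.o -o app_sram.elf       # .data/.bss in internal SRAM (default)
    avr-gcc -mmcu=atmega128 -Wl,-Tdata=0x801100 app.o -o app_xram.elf
                                                        # same object file, data placed in external RAM

(The external memory interface itself still has to be enabled early in the
startup code, but that does not change the compiler's output for the
application object file.)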

I think we are close.
It seems to me that the sticking points are:
Can the compiler know that result is in internal SRAM?

It doesn't know. That is purely the linker's concern.

If it doesn't know, it can't emit correct code.
The code actually emitted, generally regarded as correct,
required just that knowledge.


Why does it need such knowledge?
The compiler knows that the load instruction is an 'lds', for example. It needs nothing more than that. It is the responsibility of the CPU and of the link process to decide where the data at a given address actually lives (SRAM, external RAM, cache, some other memory interface) and how it is accessed.

If you disagree here again,
then explain to me how the instruction below determines which actual memory segment the address is mapped to during execution, if we have a CPU with a memory-mapping configuration (any modern ARM CPU, for example):

lds     r10, 0xFF00025A

Unless it's for an xmega, that is not an AVR instruction.
The address is too big.
Do xmegas have three-word instructions?

An AVR's LDS instruction refers to either IO space,
internal SRAM or external memory.
It does not refer to either flash or EEPROM.
If the corresponding C refers to a named variable,
it does not refer to IO space.
If the processor cannot use external memory,
that leaves internal SRAM.
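
(A sketch of why the distinction matters, assuming avr-libc; the names are
illustrative.  A named variable such as result lives in the data space and is
reached with plain lds/sts, whereas flash and EEPROM need explicit access
routines:)

    #include <stdint.h>
    #include <avr/pgmspace.h>
    #include <avr/eeprom.h>

    volatile uint32_t result;                        /* data space: plain lds/sts */
    const uint8_t table[4] PROGMEM = {1, 2, 3, 4};   /* flash: read via lpm       */
    uint8_t ee_cal EEMEM;                            /* EEPROM: eeprom_* routines */

    uint8_t read_examples(void)
    {
        uint8_t a = (uint8_t)result;                 /* ordinary data-space load  */
        uint8_t b = pgm_read_byte(&table[0]);        /* explicit flash access     */
        uint8_t c = eeprom_read_byte(&ee_cal);       /* explicit EEPROM access    */
        return (uint8_t)(a + b + c);
    }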

What other possibilities did you have in mind?

--
Michael   address@hidden
"Pessimist: The glass is half empty.
Optimist:   The glass is half full.
Engineer:   The glass is twice as big as it needs to be."


