From: Peter Maydell
Subject: Re: [Qemu-devel] [RFC 1/5] memory: Define API for MemoryRegionOps to take attrs and return status
Date: Fri, 27 Mar 2015 12:10:31 +0000

On 27 March 2015 at 12:02, Edgar E. Iglesias <address@hidden> wrote:
> On Fri, Mar 27, 2015 at 10:58:01AM +0000, Peter Maydell wrote:
>> So I was looking at how this would actually get plumbed through
>> the memory subsystem code, and there are some awkwardnesses
>> with this simple enum approach. In particular, functions like
>> address_space_rw want to combine the error returns from
>> several io_mem_read/write calls into a single response to
>> return to the caller. With an enum we'd need some pretty
>> ugly code to prioritise particular failure types, or to
>> do something arbitrary like "return first failure code".
>> Alternatively we could:
>> (a) make MemTxResult a uint32_t, where all-bits zero indicates
>> "OK" and any bit set indicates some kind of error, eg
>> bit 0 set for "device returned an error", and bit 1 for
>> "decode error", and higher bits available for other kinds
>> of extra info about errors in future. Then address_space_rw
>> just ORs together all the bits in all the return codes it
>> receives.
>> (b) give up and say "just use a bool"

> Is this related to masters relying on the memory framework's magic
> handling of unaligned accesses?

Well, that, and masters that just want to say "write
this entire buffer", or otherwise make accesses larger
than the destination's access size.

> I guess that masters that really care about accurate error
> handling would need to issue transactions without relying on
> the intermediate "magic" that splits unaligned accesses...

Yes, I think this is probably true. (I suspect we don't
actually care at that level of detail.)

> Anyway, I think your option a sounds the most flexible...

Yes, it's the best thing I can think of currently.
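
For concreteness, here's a minimal C sketch of what (a) might
look like; the constant names and the stub dispatch function are
illustrative placeholders, not a settled API:

#include <stdint.h>
#include <stddef.h>

/* Option (a) sketch: MemTxResult as an OR-able error bitmask.
 * All names below are illustrative, not final. */
typedef uint32_t MemTxResult;

#define MEMTX_OK            0U         /* all bits clear: success */
#define MEMTX_DEVICE_ERROR  (1U << 0)  /* device returned an error */
#define MEMTX_DECODE_ERROR  (1U << 1)  /* nothing mapped at address */
/* higher bits free for future kinds of error info */

/* Stand-in for io_mem_write(); the real thing would dispatch to
 * the destination MemoryRegion's ops. */
static MemTxResult io_mem_write_stub(uint64_t addr, uint8_t val)
{
    (void)addr;
    (void)val;
    return MEMTX_OK;
}

/* address_space_rw-style combining: each sub-access may fail
 * independently; the caller sees the OR of every result, so any
 * error bit from any part of the transaction survives. */
static MemTxResult write_buffer(uint64_t addr, const uint8_t *buf,
                                size_t len)
{
    MemTxResult result = MEMTX_OK;

    while (len > 0) {
        result |= io_mem_write_stub(addr, *buf);
        addr++;
        buf++;
        len--;
    }
    return result; /* MEMTX_OK only if every sub-access succeeded */
}

The nice property is that combining results is just a bitwise OR,
so address_space_rw never has to decide which failure "wins".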

-- PMM


