
Re: [Qemu-devel] [PATCH v3 4/5] xilinx_devcfg: Zynq devcfg device model


From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH v3 4/5] xilinx_devcfg: Zynq devcfg device model
Date: Wed, 29 May 2013 16:03:09 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130514 Thunderbird/17.0.6

On 29/05/2013 15:54, Peter Crosthwaite wrote:
> Hi Paolo,
> 
> On Wed, May 29, 2013 at 6:51 PM, Paolo Bonzini <address@hidden> wrote:
>> On 24/05/2013 07:49, address@hidden wrote:
>>> +static const MemoryRegionOps devcfg_reg_ops = {
>>> +    .read = register_read_memory_le,
>>> +    .write = register_write_memory_le,
>>> +    .endianness = DEVICE_LITTLE_ENDIAN,
>>> +    .valid = {
>>> +        .min_access_size = 4,
>>> +        .max_access_size = 4,
>>
>> What happens if you have registers of mixed size within the same "bank"?
>>
> 
> You probably should change these access restrictions per register
> accordingly.

Can you add a generic .valid.accepts callback for the register API that
will check against the size of the register at that offset?

> The degenerate case is allowing greater-than-register size or
> misaligned accesses, which is dependent on the behaviour of the memory
> API. I can do some research, but do you know if the memory API
> supports misaligned transactions that span a memory region boundary?

I don't really care about that for now.

But the way to do that would be to add a .impl.accepts method (right now
there's only .valid.accepts).  Then you would point the register API's
implementation to .impl.accepts, and use

    .impl.accepts = register_accepts,
    .valid = {
        .min_access_size = 1,
        .max_access_size = 4,
    }

The memory API will do a RMW operation if needed.  Won't make much sense
with write-1-clear bits and the like, but for simple registers it would
work.

Paolo

> If so then there are no concerns, as the memory API will handle it for
> us. If not, then I don't see it as an issue, as it's very rare (at least
> in my experience) for registers to support such strange misaligned or
> multi-register accesses. That, or we can look into the memory API for
> answers.
> 
>>> +    }
>>> +};
>>> +
>>> +static void xilinx_devcfg_realize(DeviceState *dev, Error **errp)
>>> +{
>>> +    XilinxDevcfg *s = XILINX_DEVCFG(dev);
>>> +    const char *prefix = object_get_canonical_path(OBJECT(dev));
>>> +    int i;
>>> +
>>> +    for (i = 0; i < R_MAX; ++i) {
>>> +        RegisterInfo *r = &s->regs_info[i];
>>> +
>>> +        *r = (RegisterInfo) {
>>> +            .data = &s->regs[i],
>>> +            .data_size = sizeof(uint32_t),
>>> +            .access = &xilinx_devcfg_regs_info[i],
>>> +            .debug = XILINX_DEVCFG_ERR_DEBUG,
>>> +            .prefix = prefix,
>>> +            .opaque = s,
>>> +        };
>>> +        memory_region_init_io(&r->mem, &devcfg_reg_ops, r, "devcfg-regs", 4);
>>
>> Could you add a register_init function that does register_reset +
>> memory_region_init_io?
>>
> 
> I wanted to factor this loop out into a helper as a next step, so I
> think it's already on my todo list, but I'm confused as to why reset
> and memory_region_init would be bundled together - aren't they normally
> in Device::reset and Device::realize (or Object::init) respectively?
> 
> Regards,
> Peter
> 
>>> +        memory_region_add_subregion(&s->iomem, i * 4, &r->mem);
>>
>> Paolo
>>
