From: Michael S. Tsirkin
Subject: [Qemu-devel] Re: [PATCH 1/8] pci: pci_default_cap_write_config ignores wmask
Date: Tue, 16 Nov 2010 18:12:44 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, Nov 12, 2010 at 08:49:20AM -0700, Alex Williamson wrote:
> > > > > @@ -1207,18 +1208,14 @@ void pci_default_cap_write_config(PCIDevice *pci_dev,
> > > > >  
> > > > >  void pci_default_write_config(PCIDevice *d, uint32_t addr, uint32_t val, int l)
> > > > >  {
> > > > > -    int i, was_irq_disabled = pci_irq_disabled(d);
> > > > > -    uint32_t config_size = pci_config_size(d);
> > > > > +    int was_irq_disabled = pci_irq_disabled(d);
> > > > >  
> > > > >      if (pci_access_cap_config(d, addr, l)) {
> > > > >          d->cap.config_write(d, addr, val, l);
> > > > >          return;
> > > > >      }
> > > > >  
> > > > 
> > > > I would like to also examine the need for _cap_
> > > > functions. Why can assigned devices just do
> > > > 
> > > >         pci_default_write_config 
> > > >         if (range_overlap(...msi)) {
> > > >         }
> > > >         if (range_overlap(...msix)) {
> > > >         }
> > > > and then we could remove all the _cap_ extensions
> > > > altogether?
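
[In C, the suggested pattern would look roughly like the sketch below. ranges_overlap() and DO_UPCAST() are existing qemu helpers; the AssignedDevice fields (msi_cap_start, msi_cap_size, ...) and the handle_msi_write()/handle_msix_write() helpers are hypothetical stand-ins for whatever the assigned device actually tracks:]

    static void assigned_dev_write_config(PCIDevice *d, uint32_t addr,
                                          uint32_t val, int len)
    {
        AssignedDevice *adev = DO_UPCAST(AssignedDevice, dev, d);

        /* let the pci core apply wmask and update d->config first */
        pci_default_write_config(d, addr, val, len);

        /* then react only to the capabilities we emulate */
        if (ranges_overlap(addr, len, adev->msi_cap_start, adev->msi_cap_size)) {
            handle_msi_write(adev, addr, val, len);   /* hypothetical */
        }
        if (ranges_overlap(addr, len, adev->msix_cap_start, adev->msix_cap_size)) {
            handle_msix_write(adev, addr, val, len);  /* hypothetical */
        }
    }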
> > > 
> > > I think that somewhere we need to track which capabilities are at
> > > which offsets.  Config space isn't a performance path, but that looks
> > > horribly inefficient and gets worse with more capabilities.
> > 
> > Looks like premature optimization to me.  I guess when we get more than
> > say 8 capabilities to support, I'll start to worry.
> > Even then, such optimizations are better kept internal to the pci core.
> 
> It's not just an optimization.  As noted in another reply, we should be
> using it to make sure we don't have collisions, and it simply makes the
> callback code much cleaner to be able to do a switch statement instead
> of a pile of 'if (ranges_overlap)', IMO.

Two if statements are not a pile :)
I think in the end we will have a general pci handler dealing with all
capabilities, and device assignment would *maybe* deal with msi and msix.


> > > Why don't we define capability id 0xff as normal config space (first 64
> > > bytes), then add the capability id to read/write_config (this is what
> > > vfio does)?  Then a driver can split capability handling off from its
> > > main config functions if it wants.
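
[Roughly what that interface might look like, as a sketch only: the cap_id parameter on the callback is hypothetical (it is not the current qemu write_config signature), with 0xff standing in for "standard config space"; PCI_CAP_ID_MSI/PCI_CAP_ID_MSIX are the usual pci_regs.h constants:]

    static void mydev_write_config(PCIDevice *d, uint8_t cap_id,
                                   uint32_t addr, uint32_t val, int len)
    {
        /* apply wmask and update d->config regardless of where we landed */
        pci_default_write_config(d, addr, val, len);

        switch (cap_id) {
        case 0xff:                    /* standard header, first 64 bytes */
            break;
        case PCI_CAP_ID_MSI:
            /* react to MSI enable/mask/address changes */
            break;
        case PCI_CAP_ID_MSIX:
            /* react to MSI-X enable/function-mask changes */
            break;
        default:
            /* capabilities this device does not handle specially */
            break;
        }
    }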
> > 
> > My feeling is we need higher-level APIs than 'capability write'.
> > Otherwise we get PCI config handling all over the place.
> > E.g. a callback when msi is enabled/disabled would make sense,
> > so that the pci core can keep track of the current state and only notify
> > the device when there is something to do.
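
[For instance, a sketch of such a hook: pci_get_word() and the PCI_MSI_FLAGS* constants exist today, while the msi_cap offset, the msi_enabled state and the msi_enable_notify callback are hypothetical fields the core would keep. The core decodes the enable bit itself and calls out only on a transition:]

    static void pci_update_msi_enable(PCIDevice *d)
    {
        uint16_t flags = pci_get_word(d->config + d->msi_cap + PCI_MSI_FLAGS);
        bool enabled = flags & PCI_MSI_FLAGS_ENABLE;

        if (enabled != d->msi_enabled) {          /* hypothetical state */
            d->msi_enabled = enabled;
            if (d->msi_enable_notify) {           /* hypothetical hook */
                d->msi_enable_notify(d, enabled);
            }
        }
    }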
> 
> I agree, but it's difficult to provide the flexibility to meet all the
> needs.  Device assignment might want to be called for more or fewer bit
> flips than an emulated device; PM is probably a good example of this.
> We could actually change state on a PMCSR write, but I'm not sure what
> an emulated device would do.  Does that mean we add a callback
> specifically for that, or do we provide some generic interface through
> which drivers can register which bits they want to know about changing?

I'm arguing for the PM callback.  This way pci config decoding stays
local to pci.c, and everyone else uses high-level APIs.
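
[The PM case would look much the same; a sketch, with pm_cap, pm_state and pm_change as hypothetical fields and PCI_PM_CTRL/PCI_PM_CTRL_STATE_MASK from pci_regs.h:]

    static void pci_update_pm_state(PCIDevice *d)
    {
        uint16_t pmcsr = pci_get_word(d->config + d->pm_cap + PCI_PM_CTRL);
        uint8_t state = pmcsr & PCI_PM_CTRL_STATE_MASK;   /* D0..D3hot */

        if (state != d->pm_state) {               /* hypothetical state */
            d->pm_state = state;
            if (d->pm_change) {                   /* hypothetical hook */
                d->pm_change(d, state);
            }
        }
    }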

> > >  Anyway, I think such an improvement
> > > could be added incrementally later.  Thanks,
> > > 
> > > Alex
> > 
> > Sure.
> > 
> > > > > -    for (i = 0; i < l && addr + i < config_size; val >>= 8, ++i) {
> > > > > -        uint8_t wmask = d->wmask[addr + i];
> > > > > -        d->config[addr + i] = (d->config[addr + i] & ~wmask) | (val & wmask);
> > > > > -    }
> > > > > +    pci_write_config(d, addr, val, l);
> > > > >  
> > > > >  #ifdef CONFIG_KVM_DEVICE_ASSIGNMENT
> > > > >      if (kvm_enabled() && kvm_irqchip_in_kernel() &&
> > > 
> > > 
> 
> 


