
[Qemu-devel] Re: IRQ handling


From: Paul Brook
Subject: [Qemu-devel] Re: IRQ handling
Date: Mon, 9 Apr 2007 01:41:05 +0100
User-agent: KMail/1.9.5

[replying to a couple of different mails]

> What do you need to route an IRQ ?
> -> A peripheral destination

Agreed.

> What we got now ?
> -> a callback with 3 parameters: an opaque, a PIN (the n_IRQ) and a
> state

We have this in some places. Other places only have some parts.

> Is more needed to have a generic routing approach ?
> -> no. This can work to route any signal

Agreed.

> Can we do with less ?
> -> no. You need the 3 parameters.

Agreed.


In summary, the IRQ source (ie. device raising the IRQ) needs to keep track of
4 values:
1) Callback
2) Opaque callback argument
3) PIN number
4) IRQ state.

In most cases we get (4) for free because it's a direct function of device 
state, so I'll ignore it for now.

I believe (1) and (2) are inherently linked, and it makes no sense to 
set/change them individually.

In some cases the existing code has multiple instances of (3) sharing a single (2). This
is, in general, incorrect as a device may have outputs connected to different 
interrupt controllers. I think there are examples of this on the ARM boards.

Thus an IRQ source can treat (1), (2) and (3) as a single block of 
information, with no loss of flexibility.
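
Concretely, that single block of information could look roughly like the sketch
below. This is only illustrative and in the spirit of my patch; the actual code
keeps the structure opaque behind a qemu_irq typedef and may differ in detail.

#include <stdlib.h>

typedef void (*qemu_irq_handler)(void *opaque, int n, int level);

/* The source holds only an opaque handle; the three values travel together. */
struct IRQState {
    qemu_irq_handler handler;   /* (1) callback                  */
    void *opaque;               /* (2) opaque callback argument  */
    int n;                      /* (3) pin number on the sink    */
};
typedef struct IRQState *qemu_irq;

/* Raising or lowering the line is a single call from the source's side. */
void qemu_set_irq(qemu_irq irq, int level)
{
    irq->handler(irq->opaque, irq->n, level);
}

/* The sink (interrupt controller or CPU) creates its input pins when it is
 * instantiated and hands the resulting handles out to the sources.
 * (calloc stands in for QEMU's own allocator in this sketch.) */
qemu_irq *qemu_allocate_irqs(qemu_irq_handler handler, void *opaque, int n)
{
    struct IRQState *p = calloc(n, sizeof(*p));
    qemu_irq *s = calloc(n, sizeof(*s));
    int i;

    for (i = 0; i < n; i++) {
        p[i].handler = handler;
        p[i].opaque = opaque;
        p[i].n = i;
        s[i] = &p[i];
    }
    return s;
}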

> The problem is also: what does this patch adds, but complexity 

I believe my patch concentrates the (necessary) complexity in a single place. 
For the record, the net effect of my patch was to remove approximately 32 
lines of code (71 files changed, 594 insertions(+), 626 deletions(-)).

> and arbitrary limitations ?

You have stated several times that my patch adds arbitrary limitations.
I reject this assertion absolutely.

There are no limits on the number of IRQs, or the topology of entities 
(devices, interrupt handlers, and CPUs) that can be supported. 

Hotplugging is not a problem, and neither are systems with thousands of IRQs. I 
have local patches for an arm-based core with several hundred IRQ lines. 
The convention is that IRQ objects are created at the same time as the rest of 
the device state for the IRQ sink (ie. interrupt controller or CPU).

In practice this means a particular device needs to know how many IRQ inputs 
it has when it is instantiated. I believe this is entirely reasonable. Last 
time I checked it wasn't feasible to dynamically solder new pins onto an IC 
while it was running [*1]. Note that this is the number of IRQs *per instance*
of a device. It's entirely possible to have different instances of the "same" 
device with arbitrarily different numbers of IRQs, and an arbitrary number of 
devices/IRQs in a system. [*2]

If you want to do anything other than simple 1-1 connections (eg. shared IRQ 
lines) you can create a fake device to perform the appropriate multiplexing. 
This is what the PCI code does. It creates a "PCI bus" interrupt controller 
that maps individual device IRQ pins onto the host interface IRQs.
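
For example, a shared-line multiplexer in that style could be a small fake
device along the following lines. The names are illustrative only and are not
the actual PCI code; it builds on the types from the earlier sketch.

/* Wired-OR: several device pins share one line into the real controller. */
typedef struct {
    qemu_irq out;      /* line into the real interrupt controller */
    int nin;           /* number of device pins sharing it        */
    int *level;        /* last level seen on each input pin       */
    int asserted;      /* how many inputs are currently high      */
} SharedIRQ;

static void shared_irq_handler(void *opaque, int n, int level)
{
    SharedIRQ *s = opaque;

    if (s->level[n] == level)
        return;
    s->level[n] = level;
    s->asserted += level ? 1 : -1;
    /* The output follows the logical OR of the inputs. */
    qemu_set_irq(s->out, s->asserted != 0);
}

static qemu_irq *shared_irq_init(qemu_irq out, int nin)
{
    SharedIRQ *s = calloc(1, sizeof(*s));

    s->out = out;
    s->nin = nin;
    s->level = calloc(nin, sizeof(int));
    /* Hand back nin fake input pins for the devices to drive. */
    return qemu_allocate_irqs(shared_irq_handler, s, nin);
}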

> What can be done (but once again, it changes nothing, just hide the
> "complexity"), is to replace the  { callback, n_IRQ } in devices
> structure by a IRQState structure and have the inline functions.

This is what I did, except I chose to make it an opaque structure, to prevent 
devices from meddling with it directly. I'd be amazed if inlining qemu_set_irq 
made any measurable difference to execution speed.

You seem to be saying that making this change has no benefit. I disagree quite 
strongly. 

Having each device keep track of 3 values (callback, opaque and nIRQ; see 
earlier) is a real mess, as evidenced by the fact that devices don't do this 
consistently, the PCI code has grown its own slightly different mechanism 
for signalling IRQs, and the ARM boards had their own partially generic 
implementation. Adding simple and consistent infrastructure for signalling 
interrupts is IMHO a worthwhile change in its own right.
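
From the device's side, the result is that signalling an interrupt becomes one
call on one handle. A made-up device, purely for illustration:

#include <stdint.h>

/* Hypothetical device state, not from the actual tree. */
typedef struct {
    qemu_irq irq;       /* handle obtained from the board/bus code */
    uint32_t status;
    uint32_t mask;
} SomeDevice;

/* Value (4) above is a direct function of device state, so the device
 * just recomputes it and drives the line. */
static void some_device_update_irq(SomeDevice *s)
{
    qemu_set_irq(s->irq, (s->status & s->mask) != 0);
}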

>..
> To achieve this, you have to have a structure:
> struct PINState {
>     qemu_pin_handler handler;
>     void *opaque;
>     int n;
>     int level;
> };

Yes, and the existing code can be extended to implement this without wasting 
any of the current changes.
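
For instance, the opaque structure could simply grow a level field along the
lines of the PINState above. A sketch only, assuming the IRQState from the
earlier sketch; whether to filter repeated levels is a policy choice, not
something the existing code does.

struct IRQState {
    qemu_irq_handler handler;
    void *opaque;
    int n;
    int level;          /* new: last level driven on this pin */
};

void qemu_set_irq(qemu_irq irq, int level)
{
    if (irq->level == level)
        return;         /* suppress redundant edges (sketch-only policy) */
    irq->level = level;
    irq->handler(irq->opaque, irq->n, level);
}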

What you're talking about (and in later emails with tristate pins) is a 
generic mechanism for emulating single-bit buses. I don't claim that my 
implementation can do this as-is.

I have implemented sufficient infrastructure for a single-master single-slave 
bus, the most common example of which is an IRQ line. I believe it also covers 
a usefully large subset of GPIO pin uses.

I say that my changes are a necessary first step in implementing a fully 
generic single-bit bus framework. My implementation adds infrastructure and 
abstraction for the "master" device (IRQ source), while leaving the "slave" 
(IRQ sink) device code largely unchanged.

Paul

[*1] I guess you could theoretically do this with a self-modifying FPGA SoC. 
It's not impossible to model, just a bit hairy. Effectively an extreme case of 
hotplugging.

[*2] Technically you're limited by available memory on the host. However 
struct IRQState is very small, so you have other much larger problems before 
you even come close to that limit.



