
From: Benjamin Herrenschmidt
Subject: Re: [Qemu-ppc] [Qemu-devel] [PATCH 41/77] ppc/pnv: Add LPC controller and hook it up with a UART and RTC
Date: Wed, 02 Dec 2015 16:29:07 +1100

On Wed, 2015-12-02 at 13:24 +1100, Alexey Kardashevskiy wrote:
> > But on the whole I agree with you, since the LPC is part of the P8
> > chip, I think it makes sense to include it even with -nodefaults.
> 
> POWER8 chips all have 8 threads per core, but we do not always assume -smp 
> ...,threads=8, so how are the LPC or PHB different? 

First, pseries is paravirtualized, so it's a completely different can of
worms. For powernv, we *should* represent all 8 threads; we just can't
yet due to TCG limitations.
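[For illustration only, not part of the original mail: once TCG can run all
8 SMT threads, a powernv invocation with the full per-core topology might
look like the sketch below. The machine and CPU names follow current QEMU
conventions and are assumptions here, since at the time of this thread the
powernv machine was still under review.]

```shell
# Hypothetical invocation: model a single P8 chip with one core of 8 threads.
# '-M powernv' and 'threads=8' are assumptions based on later QEMU releases;
# at the time of this discussion TCG could not yet run 8 threads per core.
qemu-system-ppc64 -M powernv -cpu POWER8 \
    -smp 8,cores=1,threads=8 \
    -nodefaults -nographic -serial mon:stdio
```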

> PHB is more interesting - how is the user supposed to add more?

That's an open question. Since we model a real P8 chip, we can only
model the PHBs as they exist on it, which is up to 3 per chip at
very specific XSCOM addresses. We could try to model some non-existing
P8 chip with more, but bad things will happen when the firmware tries
to assign interrupt numbers, for example.

We simulate a machine that has been primed by HostBoot before OPAL
starts. So we rely on what the device-tree tells us about which PHBs
were enabled, but apart from that, we have to stick to those limitations.

> And there will always be the default one, 
> whose properties are set in a separate way (via -global, not -device). I 
> sometimes found it really annoying to debug the existing pseries, which 
> always adds a default PHB (I know, this was to make libvirt happy, but that 
> is not the case here).
> 
> Out of curiosity - if we have 2 chips, will the system work if the second 
> chip does not get any LPC or PHB attached?

This is something I need to look into. There's a lot of work needed to
properly model "chips" that I haven't done yet, but what is there is
already sufficient for a lot of use cases.

Cheers,
Ben.
