From: Luc Michel
Subject: [Qemu-devel] [PATCH v2 15/15] arm/xlnx-zynqmp: put APUs and RPUs in separate GDB groups
Date: Wed, 17 Oct 2018 19:02:27 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.0


On 10/5/18 8:49 PM, Eduardo Habkost wrote:
> On Fri, Oct 05, 2018 at 03:50:01PM +0200, Philippe Mathieu-Daudé wrote:
>> On 04/10/2018 23:53, Eduardo Habkost wrote:
>>> On Thu, Oct 04, 2018 at 09:01:09PM +0100, Peter Maydell wrote:
>>>> On 4 October 2018 at 20:52, Eduardo Habkost <address@hidden> wrote:
>>>>> Changing the object hierarchy based on GDB groups doesn't seem
>>>>> right, but I don't think it would be a big deal if we have the
>>>>> board code explicitly telling the GDB code how to group the CPUs.
>>>>>
>>>>> If you really want to do it implicitly, would it work if you
>>>>> simply group the CPUs based on object_get_canonical_path()?
>>>>>
>>>>> If a more explicit GDB grouping API is acceptable, what about
>>>>> just adding a INTERFACE_GDB_GROUP interface name to (existing)
>>>>> container objects that we expect to become GDB groups?
>>>>>
>>>>> I'm not sure which way is better. I'm a bit worried that making
>>>>> things too implicit could easily break (e.g. if somebody changes
>>>>> the CPU QOM hierarchy in the future for unrelated reasons).
>>>>
>>>> I don't want things implicit. I just don't want the explicitness
>>>> to be "this is all about GDB", because it isn't. I want us
>>>> to explicitly say "these 4 CPUs are in one cluster" (or
>>>> whatever term we use), because that affects more than merely GDB.
>>>
>>> We already have a way to say "these 4 CPUs are in one cluster",
>>> don't we?  That's the QOM hierarchy.
>>>
>>> My question is if "the CPUs are in one cluster" should implicitly
>>> mean "the CPUs are in one GDB group".
>>>
>>
>> What about having the container implement INTERFACE_CPU_CLUSTER?
>>
>> Or even cleaner, add a TYPE_CPU_CLUSTER which is just a container for
>> TYPE_CPU[*]?
> 
> Sounds good to me.  An interface sounds more flexible, to avoid
> clashing with existing type hierarchies for
> nodes/sockets/cores/etc.
But we would still need a container sub-class specialized for that
purpose, right? Or are we going to have the generic container class
implement this not-so-generic interface?
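
To make this concrete, the sub-class option could look something like
the following (rough, untested sketch; the "cluster-id" property is
just a placeholder for whatever identifier the GDB stub would end up
keying its groups on):

/* Sketch only: a bare container device grouping a set of CPUs. */
#include "qemu/osdep.h"
#include "qemu/module.h"
#include "hw/qdev-core.h"
#include "hw/qdev-properties.h"

#define TYPE_CPU_CLUSTER "cpu-cluster"
#define CPU_CLUSTER(obj) \
    OBJECT_CHECK(CPUClusterState, (obj), TYPE_CPU_CLUSTER)

typedef struct CPUClusterState {
    /*< private >*/
    DeviceState parent_obj;

    /*< public >*/
    uint32_t cluster_id;    /* placeholder: stable index for the group */
} CPUClusterState;

static Property cpu_cluster_properties[] = {
    DEFINE_PROP_UINT32("cluster-id", CPUClusterState, cluster_id, 0),
    DEFINE_PROP_END_OF_LIST()
};

static void cpu_cluster_class_init(ObjectClass *klass, void *data)
{
    DeviceClass *dc = DEVICE_CLASS(klass);

    dc->props = cpu_cluster_properties;
}

static const TypeInfo cpu_cluster_type_info = {
    .name = TYPE_CPU_CLUSTER,
    .parent = TYPE_DEVICE,
    .instance_size = sizeof(CPUClusterState),
    .class_init = cpu_cluster_class_init,
};

static void cpu_cluster_register_types(void)
{
    type_register_static(&cpu_cluster_type_info);
}

type_init(cpu_cluster_register_types)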

> 
> But first of all, I think we need a good definition of what
> exactly is a cluster, and what is the purpose of this
> abstraction.
I think it has implications that go way beyond this patch set.
Here we want to put the APUs (cortex-a53) and the RPUs (cortex-r5) in
different groups mainly because they have different architectures (I
think the address space is more or less the same for all the CPUs in
this SoC).

The current configuration is wrong since the A53 and the R5 probably
don't have the same features, hence for the same piece of guest code
the translations can differ (e.g. one could have VFPv4 and the other
not). So the translation cache should not be shared between them.
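
One way to keep the translations apart (purely illustrative sketch,
not existing QEMU code: struct tb_key and tb_key_equal() are made-up
names) would be to fold a per-cluster index into whatever key the
translated-block cache is looked up with, so a block translated for
the A53s can never be returned to an R5 even when pc and flags match:

/* Illustration only: these names do not exist in the TCG code. */
#include <stdint.h>
#include <stdbool.h>

/* Simplified stand-in for the fields identifying a translated block. */
struct tb_key {
    uint64_t pc;          /* guest PC the block starts at              */
    uint32_t flags;       /* CPU state flags affecting the translation */
    uint32_t cluster_id;  /* new: which CPU cluster requested it       */
};

/* Two lookups only hit the same cached translation if *all* fields
 * match, including the cluster, so A53 and R5 translations stay
 * separate even for the same guest code at the same address. */
static inline bool tb_key_equal(const struct tb_key *a,
                                const struct tb_key *b)
{
    return a->pc == b->pc &&
           a->flags == b->flags &&
           a->cluster_id == b->cluster_id;
}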

We could imagine modelling more complex heterogeneous architectures.
One that comes to mind is a many-core chip from Kalray, which is
organised in 16 clusters of 16 cores each. Within a cluster the cores
are SMP, accessing the same SRAM, but inter-cluster communication is
done through an explicit NoC, using DMAs.

In that case, a "cluster" QEMU abstraction would make sense since
cores in different clusters must not share the same address space,
nor the same translation cache.

Regarding GDB, two CPUs should be put in different groups if:
  - their architectures are different,
  - or the extra XML descriptions we send to GDB for those CPUs are
    different (extra registers).
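
In code, that check could look roughly like this (sketch only, based
on the existing CPUClass hooks; any dynamically generated XML would
also have to be compared):

#include "qemu/osdep.h"
#include "qom/cpu.h"

/* Would it be safe to expose these two CPUs in the same GDB group? */
static bool gdb_same_group(CPUState *a, CPUState *b)
{
    CPUClass *ca = CPU_GET_CLASS(a);
    CPUClass *cb = CPU_GET_CLASS(b);
    gchar *arch_a = ca->gdb_arch_name ? ca->gdb_arch_name(a) : NULL;
    gchar *arch_b = cb->gdb_arch_name ? cb->gdb_arch_name(b) : NULL;
    bool same;

    same = g_strcmp0(arch_a, arch_b) == 0 &&
           g_strcmp0(ca->gdb_core_xml_file, cb->gdb_core_xml_file) == 0;

    g_free(arch_a);
    g_free(arch_b);
    return same;
}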

So I think we can introduce this new "cpu cluster" abstraction, as it
makes sense for the kind of systems we (could) want to model in QEMU.
For now it will only be used by the GDB stub, but it definitely has
deeper implications.
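
On the board side, creating the two zynqmp clusters could then look
roughly like this (sketch against hw/arm/xlnx-zynqmp.c; the
apu_cluster/rpu_cluster fields are hypothetical additions to
XlnxZynqMPState, "cluster-id" is the placeholder property from the
sketch above, and error handling is elided):

/* Assumes the usual includes of xlnx-zynqmp.c plus the cluster header. */
static void xlnx_zynqmp_create_clusters(XlnxZynqMPState *s)
{
    /* One cluster for the Cortex-A53 APUs... */
    object_initialize(&s->apu_cluster, sizeof(s->apu_cluster),
                      TYPE_CPU_CLUSTER);
    object_property_add_child(OBJECT(s), "apu-cluster",
                              OBJECT(&s->apu_cluster), &error_abort);
    qdev_prop_set_uint32(DEVICE(&s->apu_cluster), "cluster-id", 0);

    /* ...and another one for the Cortex-R5 RPUs. */
    object_initialize(&s->rpu_cluster, sizeof(s->rpu_cluster),
                      TYPE_CPU_CLUSTER);
    object_property_add_child(OBJECT(s), "rpu-cluster",
                              OBJECT(&s->rpu_cluster), &error_abort);
    qdev_prop_set_uint32(DEVICE(&s->rpu_cluster), "cluster-id", 1);

    /* The A53s would then be instantiated as QOM children of the APU
     * cluster and the R5s as children of the RPU cluster, so the GDB
     * stub (or anything else) can recover the grouping by walking the
     * composition tree. */
}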

> 
> If we end up with a new abstraction that is only going to be used by
> GDB code and nothing else, I don't see the point of pretending it
> is not a GDB-specific abstraction.
> 




