Re: [Qemu-ppc] [PATCH] spapr/pci: populate PCI DT in reverse order


From: Thomas Huth
Subject: Re: [Qemu-ppc] [PATCH] spapr/pci: populate PCI DT in reverse order
Date: Tue, 1 Dec 2015 22:48:38 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0

On 30/11/15 11:45, Greg Kurz wrote:
> Since commit 1d2d974244c6 "spapr_pci: enumerate and add PCI device tree", QEMU
> populates the PCI device tree in the opposite order compared to SLOF.
> 
> Before 1d2d974244c6:
> 
> Populating /address@hidden
>                      00 0000 (D) : 1af4 1000    virtio [ net ]
>                      00 0800 (D) : 1af4 1001    virtio [ block ]
>                      00 1000 (D) : 1af4 1009    virtio [ network ]
> Populating /address@hidden/address@hidden
> 
> 
> 7e5294b8 :  /address@hidden
> 7e52b998 :  |-- address@hidden
> 7e52c0c8 :  |-- address@hidden
> 7e52c7e8 :  +-- address@hidden ok
> 
> Since 1d2d974244c6:
> 
> Populating /address@hidden
>                      00 1000 (D) : 1af4 1009    virtio [ network ]
> Populating /address@hidden/address@hidden
>                      00 0800 (D) : 1af4 1001    virtio [ block ]
>                      00 0000 (D) : 1af4 1000    virtio [ net ]
> 
> 
> 7e5e8118 :  /address@hidden
> 7e5ea6a0 :  |-- address@hidden
> 7e5eadb8 :  |-- address@hidden
> 7e5eb4d8 :  +-- address@hidden ok
> 
> This behaviour change is not actually a bug, since no assumptions should
> be made about DT ordering. But it has no real justification either: it is
> simply a consequence of the way fdt_add_subnode() inserts new elements at
> the front of the FDT rather than appending them to the tail.
> 
> This patch reverts to the historical SLOF ordering by walking PCI devices in
> reverse order.

I've applied your patch here locally, and indeed, the device tree looks
nicer to me, too, when the nodes are listed in ascending order.

Tested-by: Thomas Huth <address@hidden>



