
From: Scott Wood
Subject: Re: [Qemu-ppc] [PATCH v3 6/6] PPC 85xx: Add qemu-ppce500 machine
Date: Thu, 20 Feb 2014 09:45:15 -0600

On Thu, 2014-02-20 at 13:34 +0100, Alexander Graf wrote:
> On 19.02.2014, at 01:03, Scott Wood <address@hidden> wrote:
> 
> > On Tue, 2014-02-11 at 01:10 +0100, Alexander Graf wrote:
> >> +  puts("\n");
> >> +
> >> +  /* Start MMIO and PIO range maps above RAM */
> >> +  map_addr = CONFIG_MAX_MEM_MAPPED;
> > 
> > It'd be better to hardcode virtual addresses for this (as other boards
> > do), and limit the size you map to the smaller of the hardcoded size or
> > the device tree size.
> 
> I don't understand this comment. CONFIG_MAX_MEM_MAPPED is basically the
> first address available to I/O maps, so with this it is effectively
> hardcoded, and I/O always gets mapped to the same physical address
> regardless of how much memory is passed in.

I mean an explicit address in the board config file, rather than hiding
it here.  It helps to have the full address map in one place.  Consider
what would happen if some other part of the code tried the same trick
with CONFIG_MAX_MEM_MAPPED. :-)
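
As a rough sketch of the suggestion above (every name and value below is
hypothetical, not taken from the patch or from U-Boot), the board config
header could pin the window explicitly, and the board code would then map
the smaller of that window and the size described by the device tree:

    #include <stdint.h>

    /* Hypothetical board-config values, e.g. in the board's config header. */
    #define QEMU_E500_MAP_BASE  0xe0000000ULL  /* fixed virtual base for MMIO/PIO maps */
    #define QEMU_E500_MAP_SIZE  0x10000000ULL  /* fixed size of the mapped window */

    /* Map no more than the hardcoded window, even if the device tree
     * describes a larger region. */
    static uint64_t clamped_map_size(uint64_t dt_region_size)
    {
            return dt_region_size < QEMU_E500_MAP_SIZE ?
                    dt_region_size : QEMU_E500_MAP_SIZE;
    }

With the base fixed in a single header, the full address map stays visible
in one place, which is the point being made here.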
 
> >> +  mas0 = MAS0_TLBSEL(1) | MAS0_ESEL(0);
> >> +  mas1 = MAS1_VALID | MAS1_TID(0) | MAS1_TS | MAS1_TSIZE(BOOKE_PAGESZ_1M);
> >> +  mas2 = FSL_BOOKE_MAS2(fdt_virt_tlb, 0);
> >> +  mas3 = FSL_BOOKE_MAS3(fdt_phys_tlb, 0, MAS3_SW|MAS3_SR);
> >> +  mas7 = FSL_BOOKE_MAS7(fdt_phys_tlb);
> > 
> > What if the fdt straddles a 1M boundary?
> 
> Then we fix the hypervisor ;). Even the 1MB is only an approximation:
> we don't know the size of the fdt, but I think we can expect the
> hypervisor to align it to 1MB. The masks here are really just there to
> be nice to a hypervisor that is broken (or knows exactly what it's doing).

What's special about 1 MiB?  If you want to rely on DTC_PAD_MASK not
changing to simplify this code, since it's QEMU-specific, fine -- but I
wouldn't consider it "broken" for an arbitrary hypervisor to do
differently.
 
-Scott