From: Alexey Korolev
Subject: Re: [Qemu-devel] [Seabios] [PATCH 0/6] 64bit PCI BARs allocations (take 2)
Date: Mon, 5 Mar 2012 19:03:08 +1300
User-agent: Mozilla/5.0 (X11; Linux i686; rv:10.0.2) Gecko/20120216 Thunderbird/10.0.2

> On Thu, Mar 01, 2012 at 06:50:43PM +1300, Alexey Korolev wrote:
>> Hi,
>>
>> This patch series enables 64bit BAR support in seabios. 
>> It takes a somewhat different approach to resource accounting. We did
>> this because we wanted to:
>> a) Provide 64bit support for PCI BARs and for bridges with a 64bit
>> memory window.
>> b) Allow migration to the 64bit range if we do not fit into the 32bit
>> range.
>> c) Keep the implementation simple.
> Hrmm.  By my count, this would be the third "rewrite" of the PCI bar
> initialization in the last 14 months.
Correct, it could be called a "rewrite".
> [...]
>> The patch series includes 6 patches.
>> In the 1st patch we introduce new structures. 
> Patch 1 does not look like it will compile independently.  There is no
> point in breaking up patches if each part doesn't compile.
Sorry about this. It was difficult to avoid, as the changes are
extensive and I wanted to explain how they work.
Probably the best approach would be to split this into two series.
>> In the 2nd patch we introduce support functions for basic hlist
>> operations, plus modify service functions to support 64bit address
>> ranges.
>>      Note: I've seen similar hlist operations in the post memory
>>         manager and stack location code; it makes sense to move
>>         them to a header file.
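
To illustrate what I mean by basic hlist operations - a minimal sketch
with Linux-style pprev back-pointers, using my own naming rather than
the actual patch code:

 struct hlist_node {
     struct hlist_node *next, **pprev;
 };

 struct hlist_head {
     struct hlist_node *first;
 };

 /* Link node n in at the head of list h. */
 static inline void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
 {
     n->next = h->first;
     n->pprev = &h->first;
     if (h->first)
         h->first->pprev = &n->next;
     h->first = n;
 }

 /* Unlink node n; the pprev back-pointer makes this O(1) without
  * needing to know which list head n belongs to. */
 static inline void hlist_del(struct hlist_node *n)
 {
     *n->pprev = n->next;
     if (n->next)
         n->next->pprev = n->pprev;
 }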
>>
>> In the 3rd patch a new function is added to fill pci_region structures
>> with entries and to discover the topology.
>>
>> In the 4th patch we define the address range of each pci_region
>> structure, migrate entries to the 64bit address range if necessary,
>> and program PCI BAR addresses and bridge regions.
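
The migration step in the 4th patch can be pictured like this (a rough,
self-contained sketch with made-up names, not the patch code itself):

 typedef unsigned long long u64;     /* as in seabios types.h */

 struct entry {
     u64 size;
     int is64;                       /* BAR/window is 64bit capable */
     struct entry *next;
 };

 struct region {
     struct entry *list;             /* entries assigned to this window */
     u64 sum;                        /* total size of assigned entries */
 };

 /* If the 32bit window cannot hold everything, move 64bit-capable
  * entries above 4GB until the remainder fits. */
 static void migrate_entries_to_64bit(struct region *r32, struct region *r64,
                                      u64 limit32)
 {
     struct entry **pprev = &r32->list;
     while (*pprev && r32->sum > limit32) {
         struct entry *e = *pprev;
         if (e->is64) {
             *pprev = e->next;       /* unlink from the 32bit region */
             e->next = r64->list;    /* push onto the 64bit region */
             r64->list = e;
             r32->sum -= e->size;
             r64->sum += e->size;
         } else {
             pprev = &e->next;
         }
     }
 }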
>>
>> In the 6th patch we remove the old code.
> Given the churn in this area, I don't want to commit patches that do
> wholesale code replacement.  I'd prefer to see each patch
> independently add some functionality and perform its related cleanup.
I understand your point and will rework.
Would it be reasonable if I sent one patch series for the redesign of the
existing implementation and another one for 64bit support?
> Also, since Gerd has some patches pending in this area, we should
> figure out which direction makes sense.  Can you explain how this
> 64bit support is different from the support proposed by Gerd?

Ah, it's a difficult thing; I don't want to criticise. You'll hate me :).

I think Gerd's implementation is about preserving the existing approach and
adding 64bit BAR support through incremental changes. That is reasonable,
but it makes the code bulky, and adding extra types
(PCI_REGION_TYPE_PREFMEM64) is a bit misleading.

1. Gerd's implementation adds extra region types:

 static const char *region_type_name[] = {
     [ PCI_REGION_TYPE_IO ]        = "io",
     [ PCI_REGION_TYPE_MEM ]       = "mem",
     [ PCI_REGION_TYPE_PREFMEM ]   = "prefmem",
     [ PCI_REGION_TYPE_PREFMEM64 ] = "prefmem64",
 };

Note: if we are going to support 64bit non-prefetchable PCI BARs on the
root bus, we have to add yet another type, PCI_REGION_TYPE_MEM64. In
reality there are just 3 types (PCI_REGION_TYPE_IO, PCI_REGION_TYPE_MEM,
PCI_REGION_TYPE_PREFMEM), so adding new types makes the code bulkier and
less straightforward.

In my implementation there is no need to create extra region types; we
still use the standard types.
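
To illustrate, a rough sketch of the idea (the field names here are my
own, not necessarily the patch's):

 struct pci_device;                  /* from seabios pci.h */
 typedef unsigned long long u64;     /* from seabios types.h */

 struct pci_region_entry {
     struct pci_device *dev;
     int bar;                /* BAR index, or -1 for a bridge window */
     u64 size;               /* each entry carries its own 64bit size */
     int is64;               /* set when the BAR/window is 64bit capable */
     int type;               /* PCI_REGION_TYPE_IO/MEM/PREFMEM only */
     struct pci_region_entry *next;
 };

A 64bit prefetchable BAR is then just a PCI_REGION_TYPE_PREFMEM entry
with is64 set, so no fourth region type is needed.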

2. If we are going to use the existing approach and want to support 64bit
bridges and BARs larger than 4GB, we need to extend the arrays and rewrite
all the bus sizing functions:

     struct {
         /* pci region stats */
         u32 count[32 - PCI_MEM_INDEX_SHIFT];
-        u32 sum, max;
+        u64 sum, max;
        /* secondary bus region sizes */
-        u32 size;
+        u64 size;
         /* pci region assignments */
-        u32 bases[32 - PCI_MEM_INDEX_SHIFT];
-        u32 base;
+        u64 bases[32 - PCI_MEM_INDEX_SHIFT];
+        u64 base;
     } r[PCI_REGION_TYPE_COUNT];

In other words, instead of 32 in the "bases" and "count" array bounds it
must be 40 (or 39). Plus we have to rewrite the index-to-size,
size-to-index, size-round-up and similar functions to make them 64bit
capable.
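
For example, those helpers look roughly like this (a simplified sketch
with assumed names; PCI_MEM_INDEX_SHIFT = 12 matches the byte counts in
point 3 below):

 typedef unsigned int u32;           /* as in seabios types.h */
 typedef unsigned long long u64;

 #define PCI_MEM_INDEX_SHIFT 12

 /* 32bit version: a u32 result cannot represent sizes of 4GB or more,
  * i.e. anything with index + 12 >= 32. */
 static inline u32 pci_index_to_size(int index)
 {
     return 0x1u << (index + PCI_MEM_INDEX_SHIFT);
 }

 /* 64bit capable version: this helper, its inverse, the round-up
  * helper and all their callers must switch to u64. */
 static inline u64 pci_index_to_size64(int index)
 {
     return 1ULL << (index + PCI_MEM_INDEX_SHIFT);
 }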

My implementation does this without adding much code; we just declare the
size of each entry.

3. About RAM use (it may not be important, but let's count):
   5 region types * (40 - PCI_MEM_INDEX_SHIFT) * (sizeof(u64) + sizeof(u32))
   = 5 * 28 * 12 = 1680 bytes for each bus.
   If we don't bother with PCI_REGION_TYPE_MEM64 it will be 1344 bytes for
   each bus. I didn't count other members, so the real size will be even
   larger. Plus we allocate 20 bytes * PCI_NUM_REGIONS = 140 bytes for each
   PCI device.

   In my approach we use just 16 bytes per region, so 48 bytes per bus, and
   48 bytes for each entry.

In conclusion:

Gerd's approach:
 src/pci.h     |   14 ++-
 src/pciinit.c |  272 +++++++++++++++++++++++++++++++++++++++++----------------
 2 files changed, 206 insertions(+), 82 deletions(-)

 + Smaller diff against the existing code
 - Bulkier code
 - Doesn't support PCI BARs and bridges larger than 4GB.
 - Doesn't support 64bit non-prefetchable BARs on the root bus.


My approach:
 src/pci.h     |    6 -
 src/pciinit.c |  509 ++++++++++++++++++++++++++++++++-------------------------
 2 files changed, 284 insertions(+), 231 deletions(-)

 - Larger diff against the existing code
 + Less bulky code
 + Supports PCI BARs and bridges larger than 4GB.
 + Supports 64bit non-prefetchable BARs on the root bus.

I would certainly welcome Gerd's opinions on the two approaches as well.