From: Alexey Korolev
Subject: Re: [Qemu-devel] [PATCH 3/4] Switch from array based resource allocation to list
Date: Tue, 3 Apr 2012 18:39:22 +1200
User-agent: Mozilla/5.0 (X11; Linux i686; rv:10.0.2) Gecko/20120216 Thunderbird/10.0.2

Hi Kevin,

Thank you for the patches!
I've created a diff of the final version of your changes over mine, to make
it clear what has changed.

Rather than including the complete diff, I've kept just the relevant parts
and added comments.

--- a/src/pciinit.c
+++ b/src/pciinit.c
@@ -12,8 +12,9 @@
........
@@ -34,25 +35,44 @@
 struct pci_region_entry {
     struct pci_device *dev;
     int bar;
-    u32 base;
     u32 size;
-    int is64bit;
+    int is64;
     enum pci_region_type type;
-    struct pci_bus *this_bus;
+    struct pci_bus *child_bus;

Naming the structure members was one of the difficult things when I was
writing this code. The name child_bus might be a bit confusing: people may
think it describes a child bus in the bus topology, when in fact this member
describes the bus that this pci_region_entry represents.
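
To make that concrete, here is the structure as I read it from the diff
fragments (the next and parent_bus members appear in later hunks; the
comments are mine, not from the patch):

struct pci_region_entry {
    struct pci_device *dev;
    int bar;
    u32 size;
    int is64;
    enum pci_region_type type;
    /* For a bridge entry: the bus whose window this entry represents.
     * Not a "child" pointer in the topology sense. */
    struct pci_bus *child_bus;
    struct pci_bus *parent_bus;     /* the bus this entry is allocated on */
    struct pci_region_entry *next;  /* linkage for the sorted list */
};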

.......

+
+static int pci_size_to_index(u32 size, enum pci_region_type type)
+{
+    int index = __fls(size);
+    int shift = (type == PCI_REGION_TYPE_IO) ?
+        PCI_IO_INDEX_SHIFT : PCI_MEM_INDEX_SHIFT;
+
+    if (index < shift)
+        index = shift;
+    index -= shift;
+    return index;
+}
+
+static u32 pci_index_to_size(int index, enum pci_region_type type)
+{
+    int shift = (type == PCI_REGION_TYPE_IO) ?
+        PCI_IO_INDEX_SHIFT : PCI_MEM_INDEX_SHIFT;
+
+    return 0x1 << (index + shift);
+}

The only purpose of these functions is to enforce the minimum size of a PCI
BAR, and each is used only once. What if we added the size adjustment to
pci_region_create_entry, or just created a function like
pci_adjust_size(u32 size, enum pci_region_type type, int bridge)?
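
For illustration, a rough sketch of what I have in mind; the bridge
granularities (4K for I/O windows, 1M for memory windows) are the usual
PCI-to-PCI bridge alignment rules, and nothing here is committed code:

static u32 pci_adjust_size(u32 size, enum pci_region_type type, int bridge)
{
    int shift = (type == PCI_REGION_TYPE_IO) ?
        PCI_IO_INDEX_SHIFT : PCI_MEM_INDEX_SHIFT;
    u32 min = 0x1 << shift;
    if (bridge)
        /* Bridge windows are coarser than plain BARs. */
        min = (type == PCI_REGION_TYPE_IO) ? 0x1000 : 0x100000;
    /* Round size up to a multiple of min (min is a power of two). */
    return (size + min - 1) & ~(min - 1);
}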
 
.........

-    list_add_head(&bus->r[type].list, entry);
     entry->parent_bus = bus;
-
+    // Insert into list in sorted order.
+    struct pci_region_entry **pprev;
+    for (pprev = &bus->r[type].list; *pprev; pprev = &(*pprev)->next) {
+        struct pci_region_entry *pos = *pprev;
+        if (pos->size < size)
+            break;
+    }
+    entry->next = *pprev;
+    *pprev = entry;
+
..........
 static void pci_bios_map_devices(struct pci_bus *busses)
 {
-    struct pci_region_entry *entry, *next;
-
+    // Map regions on each device.
     int bus;
     for (bus = 0; bus<=MaxPCIBus; bus++) {
         int type;
         for (type = 0; type < PCI_REGION_TYPE_COUNT; type++) {
-            u32 size;
-            for (size = busses[bus].r[type].max; size > 0; size >>= 1) {
-                    list_foreach_entry_safe(busses[bus].r[type].list,
-                                              next, entry) {
-                        if (size == entry->size) {
-                            entry->base = busses[bus].r[type].base;
-                            busses[bus].r[type].base += size;
-                            pci_region_map_one_entry(entry);
-                            list_del(entry);
-                            free(entry);
-                    }
-                }
+            struct pci_region_entry *entry = busses[bus].r[type].list;
+            while (entry) {
+                pci_region_map_one_entry(entry);
+                struct pci_region_entry *next = entry->next;
+                free(entry);
+                entry = next;
             }
         }
     }
Right: instead of sorting entries by size in pci_bios_map_devices, the
entries are sorted when they are created. This makes the implementation
simpler.
Note: in the case of 64-bit BARs we have to migrate entries between regions,
so sorting on creation alone won't be enough; we would also have to re-sort
when entries are migrated. That adds some complexity, whereas with the old
implementation the complexity would stay the same. I expect it shouldn't be
that complicated, so either approach is fine with me.
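
To show the extra step, a small sketch of re-inserting a migrated entry in
sorted order, factored out of the insertion loop in the hunk above (the
helper name is mine; the migration pass that would call it is not written
yet):

static void pci_region_insert_sorted(struct pci_region_entry **list,
                                     struct pci_region_entry *entry)
{
    /* Same descending-size walk as in the creation path. */
    struct pci_region_entry **pprev;
    for (pprev = list; *pprev; pprev = &(*pprev)->next) {
        if ((*pprev)->size < entry->size)
            break;
    }
    entry->next = *pprev;
    *pprev = entry;
}

A 64-bit migration pass would unlink each qualifying entry from the 32-bit
list and re-insert it into the 64-bit list with this helper, keeping both
lists sorted.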




