From: Dave Hansen
Subject: Re: [Qemu-devel] [PATCH kernel v5 5/5] virtio-balloon: tell host vm's unused page info
Date: Wed, 30 Nov 2016 11:15:23 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

On 11/30/2016 12:43 AM, Liang Li wrote:
> +static void send_unused_pages_info(struct virtio_balloon *vb,
> +                             unsigned long req_id)
> +{
> +     struct scatterlist sg_in;
> +     unsigned long pos = 0;
> +     struct virtqueue *vq = vb->req_vq;
> +     struct virtio_balloon_resp_hdr *hdr = vb->resp_hdr;
> +     int ret, order;
> +
> +     mutex_lock(&vb->balloon_lock);
> +
> +     for (order = MAX_ORDER - 1; order >= 0; order--) {

I scratched my head for a bit on this one.  Why are you walking over
orders, *then* zones?  I *think* you're doing it because you can
efficiently fill the bitmaps at a given order for all zones, then move
to a new bitmap.  But it would be good to document this.

> +             pos = 0;
> +             ret = get_unused_pages(vb->resp_data,
> +                      vb->resp_buf_size / sizeof(unsigned long),
> +                      order, &pos);

FWIW, get_unused_pages() is a pretty bad name.  "get" usually implies
bumping reference counts or consuming something.  You're just
"recording" or "marking" them.

> +             if (ret == -ENOSPC) {
> +                     void *new_resp_data;
> +
> +                     new_resp_data = kmalloc(2 * vb->resp_buf_size,
> +                                             GFP_KERNEL);
> +                     if (new_resp_data) {
> +                             kfree(vb->resp_data);
> +                             vb->resp_data = new_resp_data;
> +                             vb->resp_buf_size *= 2;

What happens to the data in ->resp_data at this point?  Doesn't this
just throw it away?

...
> +struct page_info_item {
> +     __le64 start_pfn : 52; /* start pfn for the bitmap */
> +     __le64 page_shift : 6; /* page shift width, in bytes */
> +     __le64 bmap_len : 6;  /* bitmap length, in bytes */
> +};

Is 'bmap_len' too short?  A 64-byte buffer is a bit tiny.  Right?

> +static int  mark_unused_pages(struct zone *zone,
> +             unsigned long *unused_pages, unsigned long size,
> +             int order, unsigned long *pos)
> +{
> +     unsigned long pfn, flags;
> +     unsigned int t;
> +     struct list_head *curr;
> +     struct page_info_item *info;
> +
> +     if (zone_is_empty(zone))
> +             return 0;
> +
> +     spin_lock_irqsave(&zone->lock, flags);
> +
> +     if (*pos + zone->free_area[order].nr_free > size)
> +             return -ENOSPC;

Urg, so this won't partially fill?  So, what's the nr_free pages limit
where we no longer fit in the kmalloc()'d buffer and this simply won't
work at all?

> +     for (t = 0; t < MIGRATE_TYPES; t++) {
> +             list_for_each(curr, &zone->free_area[order].free_list[t]) {
> +                     pfn = page_to_pfn(list_entry(curr, struct page, lru));
> +                     info = (struct page_info_item *)(unused_pages + *pos);
> +                     info->start_pfn = pfn;
> +                     info->page_shift = order + PAGE_SHIFT;
> +                     *pos += 1;
> +             }
> +     }

Do we need to fill in ->bmap_len here?
