
Re: [PATCH v1 10/17] virtio-mem: Paravirtualized memory hot(un)plug


From: David Hildenbrand
Subject: Re: [PATCH v1 10/17] virtio-mem: Paravirtualized memory hot(un)plug
Date: Fri, 15 May 2020 18:48:55 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.7.0

On 15.05.20 17:37, Dr. David Alan Gilbert wrote:
> I'm not sure if it's possible to split this up; it's a bit big.

Functionality-wise, it's the bare minimum. I could split out the handling of
all 4 request types, but they are only ~150-200 LOC; not sure that is
really worth the trouble. Open for suggestions.

> It could also do with a pile of trace_ entries to figure out what it's
> doing.

Good idea, will add that with a patch on top.
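
Roughly what I have in mind (event names/arguments are just a sketch, to be
refined in that patch): a couple of entries in hw/virtio/trace-events, e.g.

  virtio_mem_plug_request(uint64_t addr, uint16_t nb_blocks) "addr=0x%" PRIx64 " nb_blocks=%" PRIu16
  virtio_mem_unplug_request(uint64_t addr, uint16_t nb_blocks) "addr=0x%" PRIx64 " nb_blocks=%" PRIu16
  virtio_mem_unplug_all_request(void) ""
  virtio_mem_state_request(uint64_t addr, uint16_t nb_blocks) "addr=0x%" PRIx64 " nb_blocks=%" PRIu16

plus the matching trace_virtio_mem_*_request() calls (via "trace.h") at the
start of the request handlers.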

[...]

>> +static int virtio_mem_set_block_state(VirtIOMEM *vmem, uint64_t start_gpa,
>> +                                      uint64_t size, bool plug)
>> +{
>> +    const uint64_t offset = start_gpa - vmem->addr;
>> +    int ret;
>> +
>> +    if (!plug) {
>> +        if (virtio_mem_discard_inhibited()) {
>> +            return -EBUSY;
>> +        }
>> +        /* Note: Discarding should never fail at this point. */
>> +        ret = ram_block_discard_range(vmem->memdev->mr.ram_block, offset, size);
>> +        if (ret) {
> 
> error_report ?


error_report("Unexpected error discarding RAM: %s",
             strerror(-ret));
it is.
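
So the unplug path would end up looking something like this (sketch):

  ret = ram_block_discard_range(vmem->memdev->mr.ram_block, offset, size);
  if (ret) {
      /* Note: Discarding should never fail at this point. */
      error_report("Unexpected error discarding RAM: %s",
                   strerror(-ret));
      return -EBUSY;
  }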

[...]

>> +    ret = ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb));
>> +    if (ret) {
>> +        /* Note: Discarding should never fail at this point. */
> 
> error_report?

ditto

> 
>> +        return -EBUSY;
>> +    }
>> +    bitmap_clear(vmem->bitmap, 0, vmem->bitmap_size);
>> +    vmem->size = 0;
>> +
>> +    virtio_mem_resize_usable_region(vmem, vmem->requested_size, true);
>> +    return 0;
>> +}

[...]

>> +static void virtio_mem_handle_request(VirtIODevice *vdev, VirtQueue *vq)
>> +{
>> +    const int len = sizeof(struct virtio_mem_req);
>> +    VirtIOMEM *vmem = VIRTIO_MEM(vdev);
>> +    VirtQueueElement *elem;
>> +    struct virtio_mem_req req;
>> +    uint64_t type;
>> +
>> +    while (true) {
>> +        elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
>> +        if (!elem) {
>> +            return;
>> +        }
>> +
>> +        if (iov_to_buf(elem->out_sg, elem->out_num, 0, &req, len) < len) {
>> +            virtio_mem_bad_request(vmem, "invalid request size");
> 
> Print the size.

Makes sense. I'll probably get rid of virtio_mem_bad_request() and just
call virtio_error() directly with additional parameters.
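
E.g. (just a sketch, error messages to be fine-tuned):

  if (iov_to_buf(elem->out_sg, elem->out_num, 0, &req, len) < len) {
      virtio_error(vdev, "virtio-mem: invalid request size: %zu",
                   iov_size(elem->out_sg, elem->out_num));
      g_free(elem);
      return;
  }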

> 
>> +            g_free(elem);
>> +            return;
>> +        }
>> +
>> +        if (iov_size(elem->in_sg, elem->in_num) <
>> +            sizeof(struct virtio_mem_resp)) {
>> +            virtio_mem_bad_request(vmem, "not enough space for response");
>> +            g_free(elem);
>> +            return;
>> +        }
>> +
>> +        type = le16_to_cpu(req.type);
>> +        switch (type) {
>> +        case VIRTIO_MEM_REQ_PLUG:
>> +            virtio_mem_plug_request(vmem, elem, &req);
>> +            break;
>> +        case VIRTIO_MEM_REQ_UNPLUG:
>> +            virtio_mem_unplug_request(vmem, elem, &req);
>> +            break;
>> +        case VIRTIO_MEM_REQ_UNPLUG_ALL:
>> +            virtio_mem_unplug_all_request(vmem, elem);
>> +            break;
>> +        case VIRTIO_MEM_REQ_STATE:
>> +            virtio_mem_state_request(vmem, elem, &req);
>> +            break;
>> +        default:
>> +            virtio_mem_bad_request(vmem, "unknown request type");
> 
> Could include the type.

Yes, will do!
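
E.g. for the default case (sketch):

  default:
      virtio_error(vdev, "virtio-mem: unknown request type: %" PRIu64, type);
      ...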

[...]

>> +
>> +static int virtio_mem_pre_save(void *opaque)
>> +{
>> +    VirtIOMEM *vmem = VIRTIO_MEM(opaque);
>> +
>> +    vmem->migration_addr = vmem->addr;
>> +    vmem->migration_block_size = vmem->block_size;
> 
> You might look at VMSTATE_WITH_TMP could avoid you having the dummy
> fields.

Thanks, will have a look.
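
If I understand VMSTATE_WITH_TMP correctly, it would boil down to something
like this instead of the migration_* dummy fields (untested sketch, tmp
struct/field names made up):

  /* Temporary state only used during migration; "parent" is filled in by
   * the migration core.
   */
  typedef struct VirtIOMEMMigTmp {
      VirtIOMEM *parent;
      uint64_t addr;
      uint64_t block_size;
  } VirtIOMEMMigTmp;

  static int virtio_mem_mig_tmp_pre_save(void *opaque)
  {
      VirtIOMEMMigTmp *tmp = opaque;

      /* Snapshot the properties we want to verify on the destination. */
      tmp->addr = tmp->parent->addr;
      tmp->block_size = tmp->parent->block_size;
      return 0;
  }

  static int virtio_mem_mig_tmp_post_load(void *opaque, int version_id)
  {
      VirtIOMEMMigTmp *tmp = opaque;

      /* Fail migration if source and destination disagree on the layout. */
      if (tmp->addr != tmp->parent->addr ||
          tmp->block_size != tmp->parent->block_size) {
          return -EINVAL;
      }
      return 0;
  }

  static const VMStateDescription vmstate_virtio_mem_device_tmp = {
      .name = "virtio-mem-device/tmp",
      .pre_save = virtio_mem_mig_tmp_pre_save,
      .post_load = virtio_mem_mig_tmp_post_load,
      .fields = (VMStateField[]) {
          VMSTATE_UINT64(addr, VirtIOMEMMigTmp),
          VMSTATE_UINT64(block_size, VirtIOMEMMigTmp),
          VMSTATE_END_OF_LIST()
      },
  };

  /* ... and in the device vmsd, instead of the dummy fields: */
  VMSTATE_WITH_TMP(VirtIOMEM, VirtIOMEMMigTmp, vmstate_virtio_mem_device_tmp),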

-- 
Thanks,

David / dhildenb



