qemu-devel

Re: [PATCH v2 1/3] accel: introduce accelerator blocker API


From: Emanuele Giuseppe Esposito
Subject: Re: [PATCH v2 1/3] accel: introduce accelerator blocker API
Date: Fri, 11 Nov 2022 15:52:40 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.2.0


On 11/11/2022 at 11:48, Paolo Bonzini wrote:
> On 11/10/22 17:48, Emanuele Giuseppe Esposito wrote:
>> +/*
>> + * QEMU accel blocker class
> 
> "Lock to inhibit accelerator ioctls"
> 
>> + *
>> + * Copyright (c) 2014 Red Hat Inc.
> 
> 2022, you can also add an Author line.
> 
>> +static int accel_in_ioctls(void)
> 
> Return bool (and return early if ret becomes true).
> 
>> +void accel_ioctl_inhibit_begin(void)
>> +{
>> +    CPUState *cpu;
>> +
>> +    /*
>> +     * Inhibiting is allowed only while holding the BQL, so we can
>> +     * easily identify when an inhibitor wants to issue an ioctl.
>> +     */
>> +    g_assert(qemu_mutex_iothread_locked());
>> +
>> +    /* Block further invocations of the ioctls outside the BQL.  */
>> +    CPU_FOREACH(cpu) {
>> +        qemu_lockcnt_lock(&cpu->in_ioctl_lock);
>> +    }
>> +    qemu_lockcnt_lock(&accel_in_ioctl_lock);
>> +
>> +    /* Keep waiting while there are running ioctls */
>> +    while (accel_in_ioctls()) {
>> +        /* Reset event to FREE. */
>> +        qemu_event_reset(&accel_in_ioctl_event);
>> +
>> +        if (accel_in_ioctls()) {
>> +
>> +            CPU_FOREACH(cpu) {
>> +                /* exit the ioctl */
>> +                qemu_cpu_kick(cpu);
> 
> Only kick if the lockcnt count is > 0? (this is not racy; if it is == 0,
> it cannot ever become > 0 again while the lock is taken)

Better:

static bool accel_has_to_wait(void)
{
    CPUState *cpu;
    bool needs_to_wait = false;

    CPU_FOREACH(cpu) {
        if (qemu_lockcnt_count(&cpu->in_ioctl_lock)) {
            qemu_cpu_kick(cpu);
            needs_to_wait = true;
        }
    }

    return needs_to_wait || qemu_lockcnt_count(&accel_in_ioctl_lock);
}

And then the loop becomes:

while (true) {
    qemu_event_reset(&accel_in_ioctl_event);

    if (accel_has_to_wait()) {
        qemu_event_wait(&accel_in_ioctl_event);
    } else {
        /* No ioctl is running */
        return;
    }
}

> 
>> diff --git a/include/sysemu/accel-blocker.h b/include/sysemu/accel-blocker.h
>> new file mode 100644
>> index 0000000000..135ebea566
>> --- /dev/null
>> +++ b/include/sysemu/accel-blocker.h
>> @@ -0,0 +1,45 @@
>> +/*
>> + * Accelerator blocking API, to prevent new ioctls from starting and to
>> + * wait for the running ones to finish.
>> + * This mechanism differs from pause/resume_all_vcpus() in that it does
>> + * not release the BQL.
>> + *
>> + *  Copyright (c) 2014 Red Hat Inc.
> 
> 2022, you can also add an Author line here too.
> 
>> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
>> + * See the COPYING file in the top-level directory.
>> + */
>> +#ifndef ACCEL_BLOCKER_H
>> +#define ACCEL_BLOCKER_H
>> +
>> +#include "qemu/osdep.h"
>> +#include "qemu/accel.h"
> 
> qemu/accel.h not needed?
> 
>> +#include "sysemu/cpus.h"
>> +
>> +extern void accel_blocker_init(void);
>> +
>> +/*
>> + * accel_set_in_ioctl/accel_cpu_set_in_ioctl:
>> + * Mark that an ioctl is about to run or has just finished.
>> + * If @in_ioctl is true, mark that it is beginning; otherwise, mark that
>> + * it is ending.
>> + *
>> + * These functions will block after accel_ioctl_inhibit_begin() is
>> + * called, preventing new ioctls from running. They will continue only
>> + * after accel_ioctl_inhibit_end().
>> + */
>> +extern void accel_set_in_ioctl(bool in_ioctl);
>> +extern void accel_cpu_set_in_ioctl(CPUState *cpu, bool in_ioctl);
> 
> Why not just
> 
> extern void accel_ioctl_begin(void);
> extern void accel_ioctl_end(void);
> extern void accel_cpu_ioctl_begin(CPUState *cpu);
> extern void accel_cpu_ioctl_end(CPUState *cpu);
> 
> ?

Ok, makes sense.

Thank you,
Emanuele

> 
> Otherwise it's very nice.
> 
> Paolo
> 



