From: Paolo Bonzini
Subject: Re: [Qemu-devel] [FreeBSD] Host build i386 failing to build aarch64 targets
Date: Mon, 21 Mar 2016 16:56:41 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0


On 21/03/2016 16:36, Alex Bennée wrote:
> >  341     /* The icount_warp_timer is rescheduled soon after vm_clock_warp_start
> >  342      * changes from -1 to another value, so the race here is okay.
> >  343      */
> >  344     if (atomic_read(&vm_clock_warp_start) == -1) {
> >  345         return;
> >  346     }
> >  347
> Odd, the comments say that vm_clock_warp_start is protected by the
> seqlock, and in fact every other access to it is a plain access.

Yes, the comment says why this is safe.

The change from -1 to positive is here:

        if (vm_clock_warp_start == -1 || vm_clock_warp_start > clock) {
            vm_clock_warp_start = clock;
        }
        seqlock_write_unlock(&timers_state.vm_clock_seqlock);
        timer_mod_anticipate(icount_warp_timer, clock + deadline);
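
For context, the store above sits inside the seqlock write-side critical
section; roughly like this (only the seqlock_write_lock call is added here
for illustration, presumably matching the unlock in the snippet above):

        /* Sketch of the write side in qemu_start_warp_timer(): the
         * timer is only rescheduled after the seqlock is dropped, so a
         * rescheduled icount_warp_rt() no longer sees -1.
         */
        seqlock_write_lock(&timers_state.vm_clock_seqlock);
        if (vm_clock_warp_start == -1 || vm_clock_warp_start > clock) {
            vm_clock_warp_start = clock;
        }
        seqlock_write_unlock(&timers_state.vm_clock_seqlock);
        timer_mod_anticipate(icount_warp_timer, clock + deadline);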

If we get a race, it must look like this:

        icount_warp_rt           qemu_start_warp_timer
        --------------           ---------------------
        read -1
                                 write to vm_clock_warp_start
                                 unlock
                                 timer_mod_anticipate (*)

As soon as you reach (*), the timer is rescheduled and its callback will
read a value other than -1.

> It seems to me the code should probably just be:
> 
>     seqlock_write_lock(&timers_state.vm_clock_seqlock);
>     if (vm_clock_warp_start != -1 && runstate_is_running()) {
>       .. do stuff ..
>     }
>     vm_clock_warp_start = -1;
>     seqlock_write_unlock(&timers_state.vm_clock_seqlock);
> 
>     if (we_did_stuff && qemu_clock_expired(QEMU_CLOCK_VIRTUAL)) {
>         qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
>     }

Yes, you can make it like that, or even better, wrap the read in a
seqlock_read_begin/seqlock_read_retry loop.  The condition will often be
false, and it's pointless to lock/unlock the mutex just for that.
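
For illustration, a minimal sketch of such a read loop, using
seqlock_read_begin/seqlock_read_retry from include/qemu/seqlock.h (the
helper name here is made up):

        /* Lockless check of vm_clock_warp_start: retry the read until
         * it did not overlap with a writer.
         */
        static bool icount_warp_start_valid(void)
        {
            int64_t warp_start;
            unsigned start;

            do {
                start = seqlock_read_begin(&timers_state.vm_clock_seqlock);
                warp_start = vm_clock_warp_start;
            } while (seqlock_read_retry(&timers_state.vm_clock_seqlock, start));

            return warp_start != -1;
        }

icount_warp_rt() could then return early when this is false, without
touching the mutex at all.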

Paolo


