
Re: [Qemu-block] [Qemu-devel] [PATCH for 2.10 00/17] Block layer thread safety, part 1


From: no-reply
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH for 2.10 00/17] Block layer thread safety, part 1
Date: Thu, 20 Apr 2017 05:42:42 -0700 (PDT)

Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Message-id: address@hidden
Subject: [Qemu-devel] [PATCH for 2.10 00/17] Block layer thread safety, part 1

=== TEST SCRIPT BEGIN ===
#!/bin/bash

BASE=base
n=1
total=$(git log --oneline $BASE.. | wc -l)
failed=0

# Useful git options
git config --local diff.renamelimit 0
git config --local diff.renames true

commits="$(git log --format=%H --reverse $BASE..)"
for c in $commits; do
    echo "Checking PATCH $n/$total: $(git log -n 1 --format=%s $c)..."
    if ! git show $c --format=email | ./scripts/checkpatch.pl --mailback -; then
        failed=1
        echo
    fi
    n=$((n+1))
done

exit $failed
=== TEST SCRIPT END ===
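The script above runs checkpatch.pl once per commit, latches any failure in a flag, and propagates the overall result through its exit status so CI can fail the series as a whole. A minimal standalone sketch of that aggregation pattern (the `item`/`bad` names here are placeholders, not taken from the series):

```shell
# Latch-and-report pattern: run a check per item, record any failure,
# report the aggregate (the real script then does `exit $failed`).
failed=0
for item in ok ok bad; do
    if [ "$item" = "bad" ]; then   # stand-in for a checkpatch failure
        failed=1
    fi
done
echo "failed=$failed"
```

Because `failed` only ever moves from 0 to 1, one bad patch is enough to fail the whole run, while every patch still gets checked and reported.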

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag]         patchew/address@hidden -> patchew/address@hidden
Switched to a new branch 'test'
c716264 block: make accounting thread-safe
7d0704f block: protect modification of dirty bitmaps with a mutex
2259950 block: introduce dirty_bitmap_mutex
0532009 block: optimize access to reqs_lock
862fd98 coroutine-lock: introduce qemu_co_mutex_lock_unlock
5d9d699 block: protect tracked_requests and flush_queue with reqs_lock
a65d4b0 block: access write_gen with atomics
01ac75b block: use Stat64 for wr_highest_offset
628d1ba util: add stats64 module
ea6bb59 throttle-groups: protect throttled requests with a CoMutex
895a11d throttle-groups: do not use qemu_co_enter_next
42ce4b3 block: access io_plugged with atomic ops
bc2deb1 block: access wakeup with atomic ops
18d2d96 block: access serialising_in_flight with atomic ops
a11151d block: access io_limits_disabled with atomic ops
c4d7323 block: access quiesce_counter with atomic ops
9c192bd block: access copy_on_read with atomic ops

=== OUTPUT BEGIN ===
Checking PATCH 1/17: block: access copy_on_read with atomic ops...
Checking PATCH 2/17: block: access quiesce_counter with atomic ops...
Checking PATCH 3/17: block: access io_limits_disabled with atomic ops...
Checking PATCH 4/17: block: access serialising_in_flight with atomic ops...
Checking PATCH 5/17: block: access wakeup with atomic ops...
Checking PATCH 6/17: block: access io_plugged with atomic ops...
Checking PATCH 7/17: throttle-groups: do not use qemu_co_enter_next...
Checking PATCH 8/17: throttle-groups: protect throttled requests with a CoMutex...
Checking PATCH 9/17: util: add stats64 module...
WARNING: architecture specific defines should be avoided
#35: FILE: include/qemu/stats64.h:18:
+#if __SIZEOF_LONG__ < 8

ERROR: memory barrier without comment
#343: FILE: util/stats64.c:100:
+        smp_wmb();

ERROR: memory barrier without comment
#372: FILE: util/stats64.c:129:
+        smp_wmb();

total: 2 errors, 1 warnings, 350 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 10/17: block: use Stat64 for wr_highest_offset...
Checking PATCH 11/17: block: access write_gen with atomics...
Checking PATCH 12/17: block: protect tracked_requests and flush_queue with reqs_lock...
Checking PATCH 13/17: coroutine-lock: introduce qemu_co_mutex_lock_unlock...
Checking PATCH 14/17: block: optimize access to reqs_lock...
Checking PATCH 15/17: block: introduce dirty_bitmap_mutex...
Checking PATCH 16/17: block: protect modification of dirty bitmaps with a mutex...
Checking PATCH 17/17: block: make accounting thread-safe...
=== OUTPUT END ===

Test command exited with code: 1
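The two "memory barrier without comment" errors above flag bare smp_wmb() calls: checkpatch requires every barrier to carry a comment saying what it orders and which barrier it pairs with. A minimal sketch of the expected pattern, using C11 fences in place of QEMU's smp_wmb()/smp_rmb() macros (the variable names and values are illustrative, not taken from the stats64 patch):

```c
#include <assert.h>
#include <stdatomic.h>

static _Atomic int data;
static _Atomic int ready;

static void writer(void)
{
    atomic_store_explicit(&data, 42, memory_order_relaxed);
    /* Write data before ready; pairs with the read barrier in reader().
     * This comment is what checkpatch wants next to each barrier. */
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&ready, 1, memory_order_relaxed);
}

static int reader(void)
{
    if (atomic_load_explicit(&ready, memory_order_relaxed)) {
        /* Read ready before data; pairs with the write barrier in writer(). */
        atomic_thread_fence(memory_order_acquire);
        return atomic_load_explicit(&data, memory_order_relaxed);
    }
    return -1; /* flag not yet visible */
}
```

Adding such a comment directly above each smp_wmb() in util/stats64.c would silence both errors; the `__SIZEOF_LONG__` warning, by contrast, is a deliberate 32-bit fallback path and is the kind of false positive the report says to raise with the maintainer.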


---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to address@hidden
