Re: [Qemu-devel] [PATCH v4 00/10] aio_context_acquire/release pushdown, part 1

From: no-reply
Subject: Re: [Qemu-devel] [PATCH v4 00/10] aio_context_acquire/release pushdown, part 1
Date: Thu, 12 Jan 2017 09:30:45 -0800 (PST)
Hi,
Your series seems to have some coding style problems. See output below for
more information:
Message-id: address@hidden
Subject: [Qemu-devel] [PATCH v4 00/10] aio_context_acquire/release pushdown, part 1
Type: series
=== TEST SCRIPT BEGIN ===
#!/bin/bash
BASE=base
n=1
total=$(git log --oneline $BASE.. | wc -l)
failed=0
# Useful git options
git config --local diff.renamelimit 0
git config --local diff.renames True
commits="$(git log --format=%H --reverse $BASE..)"
for c in $commits; do
echo "Checking PATCH $n/$total: $(git log -n 1 --format=%s $c)..."
if ! git show $c --format=email | ./scripts/checkpatch.pl --mailback -; then
failed=1
echo
fi
n=$((n+1))
done
exit $failed
=== TEST SCRIPT END ===
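The loop above runs checkpatch.pl once per commit and aggregates results: a failing patch sets `failed=1` but the loop keeps going, so every patch is reported and a single failure fails the whole series. A minimal, self-contained sketch of that aggregation pattern (stub pass/fail results stand in for checkpatch here; this is an illustration, not part of the original script):

```shell
# Each "check" may flip failed to 1, but the loop never stops early,
# so all results are reported before the final exit status is decided.
failed=0
for result in pass fail pass; do
    if [ "$result" = fail ]; then
        failed=1
    fi
done
echo "failed=$failed"
```

Run against the stub results, this prints `failed=1` even though the last check passed, mirroring how the series as a whole fails when any one patch fails.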
Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
* [new tag] patchew/address@hidden -> patchew/address@hidden
Switched to a new branch 'test'
c3b6ff3 async: optimize aio_bh_poll
10daea4 aio: document locking
c87dd80 aio-win32: remove walking_handlers, protecting AioHandler list with list_lock
ebb7fc8 aio-posix: remove walking_handlers, protecting AioHandler list with list_lock
a57c5b9 aio: tweak walking in dispatch phase
ec80912 aio-posix: split aio_dispatch_handlers out of aio_dispatch
3477fee qemu-thread: optimize QemuLockCnt with futexes on Linux
5e9110c aio: make ctx->list_lock a QemuLockCnt, subsuming ctx->walking_bh
8babf64 qemu-thread: introduce QemuLockCnt
26a92ef aio: rename bh_lock to list_lock
=== OUTPUT BEGIN ===
Checking PATCH 1/10: aio: rename bh_lock to list_lock...
Checking PATCH 2/10: qemu-thread: introduce QemuLockCnt...
Checking PATCH 3/10: aio: make ctx->list_lock a QemuLockCnt, subsuming ctx->walking_bh...
Checking PATCH 4/10: qemu-thread: optimize QemuLockCnt with futexes on Linux...
ERROR: code indent should never use tabs
#108: FILE: util/lockcnt.c:24:
+#define QEMU_LOCKCNT_STATE_FREE 0^I/* free, uncontended */$
ERROR: code indent should never use tabs
#109: FILE: util/lockcnt.c:25:
+#define QEMU_LOCKCNT_STATE_LOCKED 1^I/* locked, uncontended */$
WARNING: line over 80 characters
#165: FILE: util/lockcnt.c:81:
+ int new = expected - QEMU_LOCKCNT_STATE_LOCKED + QEMU_LOCKCNT_STATE_WAITING;
WARNING: line over 80 characters
#203: FILE: util/lockcnt.c:119:
+ val = atomic_cmpxchg(&lockcnt->count, val, val + QEMU_LOCKCNT_COUNT_STEP);
WARNING: line over 80 characters
#209: FILE: util/lockcnt.c:125:
+ if (qemu_lockcnt_cmpxchg_or_wait(lockcnt, &val, QEMU_LOCKCNT_COUNT_STEP,
WARNING: line over 80 characters
#245: FILE: util/lockcnt.c:161:
+ val = atomic_cmpxchg(&lockcnt->count, val, val - QEMU_LOCKCNT_COUNT_STEP);
WARNING: line over 80 characters
#253: FILE: util/lockcnt.c:169:
+ if (qemu_lockcnt_cmpxchg_or_wait(lockcnt, &val, locked_state, &waited)) {
WARNING: line over 80 characters
#258: FILE: util/lockcnt.c:174:
+ /* At this point we do not know if there are more waiters. Assume
WARNING: line over 80 characters
#294: FILE: util/lockcnt.c:210:
+ if (qemu_lockcnt_cmpxchg_or_wait(lockcnt, &val, locked_state, &waited)) {
total: 2 errors, 7 warnings, 447 lines checked
Your patch has style problems, please review. If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
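The two ERRORs above come from hard tabs inside the flagged `#define` lines in util/lockcnt.c (checkpatch renders the tab as `^I`). One generic, hedged way to spot-fix such lines, not necessarily what the patch author did, is POSIX `expand(1)`, which rewrites tabs as runs of spaces:

```shell
# Feed a line containing an embedded tab (like the flagged defines)
# through expand; the output contains only spaces, which is what the
# "code indent should never use tabs" check requires.
fixed=$(printf '#define QEMU_LOCKCNT_STATE_FREE 0\t/* free, uncontended */' | expand -t 8)
echo "$fixed"
```

The same line printed without `expand` would still contain the tab and trip checkpatch again.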
Checking PATCH 5/10: aio-posix: split aio_dispatch_handlers out of aio_dispatch...
Checking PATCH 6/10: aio: tweak walking in dispatch phase...
Checking PATCH 7/10: aio-posix: remove walking_handlers, protecting AioHandler list with list_lock...
Checking PATCH 8/10: aio-win32: remove walking_handlers, protecting AioHandler list with list_lock...
Checking PATCH 9/10: aio: document locking...
Checking PATCH 10/10: async: optimize aio_bh_poll...
=== OUTPUT END ===
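All seven WARNINGs in the output are instances of the 80-column rule. That single rule is easy to reproduce locally with awk before resending a series (a sketch only; the real checkpatch.pl applies many more heuristics, and the file name below is made up):

```shell
# Create a file whose one line is 101 characters (100 spaces plus 'x'),
# then flag every line longer than 80 characters, checkpatch-style.
printf '%101s\n' x > longline.c
awk 'length > 80 { print FILENAME ": line " FNR " over 80 characters" }' longline.c
```

This prints `longline.c: line 1 over 80 characters`; a clean file produces no output.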
Test command exited with code: 1
---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to address@hidden