From: Peter Maydell
Subject: [Qemu-commits] [qemu/qemu] ddd633: minikconf: explicitly set encoding to UTF-8
Date: Fri, 26 Jun 2020 09:00:28 -0700

  Branch: refs/heads/master
  Home:   https://github.com/qemu/qemu
  Commit: ddd633e525fec68437d04b074130aedc9d461331
      https://github.com/qemu/qemu/commit/ddd633e525fec68437d04b074130aedc9d461331
  Author: Stefan Hajnoczi <stefanha@redhat.com>
  Date:   2020-06-23 (Tue, 23 Jun 2020)

  Changed paths:
    M scripts/minikconf.py

  Log Message:
  -----------
  minikconf: explicitly set encoding to UTF-8

QEMU currently only has ASCII Kconfig files, but Linux's Kconfig files use
UTF-8. Explicitly specify the encoding and open the files in text mode.

It's unclear whether QEMU will ever need Unicode in its Kconfig files, but
if we start using the help text it will become an issue sooner or later.
Make this change now for consistency with Linux Kconfig.

Reported-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200521153616.307100-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>


  Commit: 58ebc2c31337734a8a79b0566b31b19040deb2ea
      https://github.com/qemu/qemu/commit/58ebc2c31337734a8a79b0566b31b19040deb2ea
  Author: Daniele Buono <dbuono@linux.vnet.ibm.com>
  Date:   2020-06-23 (Tue, 23 Jun 2020)

  Changed paths:
    M include/qemu/coroutine_int.h
    M util/coroutine-ucontext.c

  Log Message:
  -----------
  coroutine: support SafeStack in ucontext backend

LLVM's SafeStack instrumentation does not yet support programs that make
use of the APIs in ucontext.h. With the current implementation of
coroutine-ucontext, the resulting binary is incorrect: different
coroutines share the same unsafe stack, producing undefined behavior at
runtime.

This fix allocates an additional unsafe stack area for each coroutine and
sets the new unsafe stack pointer before calling swapcontext() in
qemu_coroutine_new. This is the only place where the pointer needs to be
manually updated, since sigsetjmp/siglongjmp are already instrumented by
LLVM to properly support SafeStack. The additional stack is then freed in
qemu_coroutine_delete.

Signed-off-by: Daniele Buono <dbuono@linux.vnet.ibm.com>
Message-id: 20200529205122.714-2-dbuono@linux.vnet.ibm.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
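
A minimal sketch of the approach (field and function names here are
assumptions, not the driver's real identifiers; compiler-rt does expose
the unsafe-stack pointer as the thread-local __safestack_unsafe_stack_ptr
when building with -fsanitize=safe-stack):

  #include <stddef.h>
  #include <ucontext.h>

  #ifdef CONFIG_SAFESTACK
  /* Thread-local unsafe-stack pointer provided by LLVM's compiler-rt. */
  extern __thread void *__safestack_unsafe_stack_ptr;
  #endif

  typedef struct {
      ucontext_t uc;
      void *stack;
      size_t stack_size;
  #ifdef CONFIG_SAFESTACK
      void *unsafe_stack;        /* one extra unsafe stack per coroutine */
      size_t unsafe_stack_size;
  #endif
  } CoroutineSketch;

  /* Called once per coroutine, in place of a bare swapcontext(). */
  static void coroutine_first_switch(CoroutineSketch *co, ucontext_t *caller,
                                     void (*entry)(void))
  {
      getcontext(&co->uc);
      co->uc.uc_stack.ss_sp = co->stack;
      co->uc.uc_stack.ss_size = co->stack_size;
      co->uc.uc_link = caller;
      makecontext(&co->uc, entry, 0);

  #ifdef CONFIG_SAFESTACK
      /*
       * Point the unsafe-stack pointer at the top of this coroutine's own
       * area before switching; stacks grow downwards.  sigsetjmp() and
       * siglongjmp() are already instrumented by LLVM, so this is the
       * only manual update needed.
       */
      __safestack_unsafe_stack_ptr =
          (char *)co->unsafe_stack + co->unsafe_stack_size;
  #endif
      swapcontext(caller, &co->uc);
  }

The matching free of unsafe_stack then belongs in qemu_coroutine_delete,
as the commit message says.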


  Commit: ff76097ad8f7fdc9d1d707bed85c146fdbb5a16d
      https://github.com/qemu/qemu/commit/ff76097ad8f7fdc9d1d707bed85c146fdbb5a16d
  Author: Daniele Buono <dbuono@linux.vnet.ibm.com>
  Date:   2020-06-23 (Tue, 23 Jun 2020)

  Changed paths:
    M util/coroutine-sigaltstack.c

  Log Message:
  -----------
  coroutine: add check for SafeStack in sigaltstack

The current implementation of LLVM's SafeStack is not compatible with
code that uses an alternate stack created with sigaltstack().
Since coroutine-sigaltstack relies on sigaltstack(), it is not
compatible with SafeStack. The resulting binary is incorrect, with
different coroutines sharing the same unsafe stack and producing
undefined behavior at runtime.

In the future LLVM may provide a SafeStack implementation compatible with
sigaltstack(). In the meantime, if SafeStack is desired, the coroutine
implementation from coroutine-ucontext should be used.
As a safety check, add a guard to coroutine-sigaltstack that raises a
preprocessor #error if SafeStack is enabled while coroutine-sigaltstack
is the selected coroutine implementation.

Signed-off-by: Daniele Buono <dbuono@linux.vnet.ibm.com>
Message-id: 20200529205122.714-3-dbuono@linux.vnet.ibm.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
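
The guard itself is tiny; a sketch, assuming CONFIG_SAFESTACK is the
configure-time define and with illustrative wording for the message:

  /* util/coroutine-sigaltstack.c, near the top of the file (sketch) */
  #ifdef CONFIG_SAFESTACK
  #error "SafeStack is not compatible with code run in alternate signal stacks"
  #endif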


  Commit: 1e4f6065da160977f212270893372436b8f13336
      https://github.com/qemu/qemu/commit/1e4f6065da160977f212270893372436b8f13336
  Author: Daniele Buono <dbuono@linux.vnet.ibm.com>
  Date:   2020-06-23 (Tue, 23 Jun 2020)

  Changed paths:
    M configure

  Log Message:
  -----------
  configure: add flags to support SafeStack

This patch adds a flag to enable/disable the SafeStack instrumentation
provided by LLVM.

When enabling, make sure that the compiler supports the flag and that the
proper coroutine implementation (coroutine-ucontext) is in use.
When disabling, explicitly turn the option off in case it was enabled by
default.

While SafeStack is supported only on Linux, NetBSD, FreeBSD and macOS,
we do not check for the OS, since LLVM already does.

Signed-off-by: Daniele Buono <dbuono@linux.vnet.ibm.com>
Message-id: 20200529205122.714-4-dbuono@linux.vnet.ibm.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>


  Commit: d6d1a65ccab670788b6c30d918ac7bd636513f8e
      https://github.com/qemu/qemu/commit/d6d1a65ccab670788b6c30d918ac7bd636513f8e
  Author: Daniele Buono <dbuono@linux.vnet.ibm.com>
  Date:   2020-06-23 (Tue, 23 Jun 2020)

  Changed paths:
    M tests/check-block.sh

  Log Message:
  -----------
  check-block: enable iotests with SafeStack

SafeStack is a stack-protection technique implemented in LLVM. It is
enabled with a -fsanitize flag.
iotests are currently disabled when any -fsanitize option is used,
because such options tend to produce additional warnings and false
positives.

While common -fsanitize options are used to verify the code and are not
enabled in production builds, SafeStack's main use is in production
environments, to protect against stack smashing.

Since SafeStack does not print any warnings or false positives, enable
iotests when SafeStack is the only -fsanitize option in use.
Such a build is likely to be a production binary, and we want to make
sure it works correctly.

Signed-off-by: Daniele Buono <dbuono@linux.vnet.ibm.com>
Message-id: 20200529205122.714-5-dbuono@linux.vnet.ibm.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>


  Commit: 2446e0e2e9c9aaa5f8e8c7ef9a41fe8516054831
      https://github.com/qemu/qemu/commit/2446e0e2e9c9aaa5f8e8c7ef9a41fe8516054831
  Author: Stefan Hajnoczi <stefanha@redhat.com>
  Date:   2020-06-23 (Tue, 23 Jun 2020)

  Changed paths:
    M block/nvme.c

  Log Message:
  -----------
  block/nvme: poll queues without q->lock

A lot of CPU time is spent simply locking/unlocking q->lock during
polling. Check for completion outside the lock to make q->lock disappear
from the profile.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20200617132201.1832152-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
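
A self-contained sketch of the idea (all names are stand-ins, not the
driver's real ones): read the head CQE's phase bit without the lock, and
take q->lock only when a completion is actually pending. Per the NVMe
spec, the controller toggles the phase bit on each pass through the
queue, so a fresh entry carries the phase the host currently expects.

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdint.h>

  typedef struct {
      pthread_mutex_t lock;
      const volatile uint16_t *head_cqe_status; /* status of CQE at cq.head */
      unsigned cq_phase;                        /* phase bit we expect next */
  } QueueSketch;

  static void process_completions_locked(QueueSketch *q) { /* ... */ }

  static bool poll_queue(QueueSketch *q)
  {
      /*
       * Reading the phase bit without q->lock is safe here because
       * completion processing runs only in the event loop thread; a
       * stale value merely means we try again on the next poll.
       */
      if ((*q->head_cqe_status & 0x1) != q->cq_phase) {
          return false;           /* nothing new: q->lock never touched */
      }

      pthread_mutex_lock(&q->lock);
      process_completions_locked(q);
      pthread_mutex_unlock(&q->lock);
      return true;
  }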


  Commit: d38253cf8b44e3b94a5b327d014ab035ae1126ed
      https://github.com/qemu/qemu/commit/d38253cf8b44e3b94a5b327d014ab035ae1126ed
  Author: Stefan Hajnoczi <stefanha@redhat.com>
  Date:   2020-06-23 (Tue, 23 Jun 2020)

  Changed paths:
    M block/nvme.c

  Log Message:
  -----------
  block/nvme: drop tautologous assertion

nvme_process_completion() explicitly checks cid so the assertion that
follows is always true:

  if (cid == 0 || cid > NVME_QUEUE_SIZE) {
      ...
      continue;
  }
  assert(cid <= NVME_QUEUE_SIZE);

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200617132201.1832152-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>


  Commit: 04b3fb39c815e6de67c5003e610d1cdecc911980
      https://github.com/qemu/qemu/commit/04b3fb39c815e6de67c5003e610d1cdecc911980
  Author: Stefan Hajnoczi <stefanha@redhat.com>
  Date:   2020-06-23 (Tue, 23 Jun 2020)

  Changed paths:
    M block/nvme.c

  Log Message:
  -----------
  block/nvme: don't access CQE after moving cq.head

Do not access a CQE after incrementing q->cq.head and releasing q->lock.
It is unlikely that this causes problems in practice but it's a latent
bug.

The reason why it should be safe at the moment is that completion
processing is not re-entrant and the CQ doorbell isn't written until the
end of nvme_process_completion().

Make this change now because QEMU expects completion processing to be
re-entrant and later patches will do that.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200617132201.1832152-4-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
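
The safe ordering is easiest to see in a sketch (types and names are
illustrative): copy the fields the driver needs while cq.head still
points at the entry, then advance head; nothing afterwards may touch the
old slot.

  #include <stdint.h>

  enum { CQ_SIZE = 128 };       /* illustrative size */

  typedef struct { uint16_t cid; uint16_t status; } Cqe;

  typedef struct {
      Cqe entries[CQ_SIZE];
      unsigned head;
  } CompletionQueue;

  static Cqe cq_pop(CompletionQueue *cq)
  {
      /* Copy first: once head moves (and any lock is dropped), a
       * re-entrant completion pass may reuse this slot. */
      Cqe copy = cq->entries[cq->head];

      cq->head = (cq->head + 1) % CQ_SIZE;
      return copy;              /* callers use the copy, never the slot */
  }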


  Commit: 1086e95da1705087db542276dcbb8ba4d55cb97f
      https://github.com/qemu/qemu/commit/1086e95da1705087db542276dcbb8ba4d55cb97f
  Author: Stefan Hajnoczi <stefanha@redhat.com>
  Date:   2020-06-23 (Tue, 23 Jun 2020)

  Changed paths:
    M block/nvme.c

  Log Message:
  -----------
  block/nvme: switch to a NVMeRequest freelist

There are three issues with the current NVMeRequest->busy field:
1. The busy field is accidentally accessed outside q->lock when request
   submission fails.
2. Waiters on free_req_queue are not woken when a request is returned
   early due to submission failure.
3. Finding a free request involves scanning all requests. This makes
   request submission O(n^2).

Switch to an O(1) freelist that is always accessed under the lock.

Also differentiate between NVME_QUEUE_SIZE, the actual SQ/CQ size, and
NVME_NUM_REQS, the number of usable requests. This makes the code
simpler than using NVME_QUEUE_SIZE everywhere and having to keep in mind
that one slot is reserved.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20200617132201.1832152-5-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
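
A minimal sketch of such a freelist (all names are assumptions): free
requests are chained through an intrusive pointer, so both allocation and
release are O(1) pointer swaps done under q->lock.

  #include <pthread.h>
  #include <stddef.h>

  enum {
      NVME_QUEUE_SIZE = 128,                  /* actual SQ/CQ size */
      NVME_NUM_REQS   = NVME_QUEUE_SIZE - 1,  /* one slot stays reserved */
  };

  typedef struct Req {
      struct Req *free_req_next; /* freelist link, valid only while free */
      /* ...per-request state... */
  } Req;

  typedef struct {
      pthread_mutex_t lock;
      Req reqs[NVME_NUM_REQS];
      Req *free_req_head;        /* protected by lock */
  } QueuePair;

  static Req *req_get(QueuePair *q)           /* O(1) */
  {
      pthread_mutex_lock(&q->lock);
      Req *r = q->free_req_head;
      if (r) {
          q->free_req_head = r->free_req_next;
      }
      pthread_mutex_unlock(&q->lock);
      return r;  /* NULL: all NVME_NUM_REQS requests are in flight */
  }

  static void req_put(QueuePair *q, Req *r)   /* O(1) */
  {
      pthread_mutex_lock(&q->lock);
      r->free_req_next = q->free_req_head;
      q->free_req_head = r;
      pthread_mutex_unlock(&q->lock);
      /* The real driver also wakes free_req_queue waiters here,
       * addressing issue 2 above. */
  }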


  Commit: a5db74f324ee55badfa61b03922ec24439bb94a6
      https://github.com/qemu/qemu/commit/a5db74f324ee55badfa61b03922ec24439bb94a6
  Author: Stefan Hajnoczi <stefanha@redhat.com>
  Date:   2020-06-23 (Tue, 23 Jun 2020)

  Changed paths:
    M block/nvme.c

  Log Message:
  -----------
  block/nvme: clarify that free_req_queue is protected by q->lock

Existing users access free_req_queue under q->lock. Document this.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200617132201.1832152-6-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
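
The change amounts to a locking annotation on the field, along the lines
of this sketch (struct name is illustrative; QemuMutex and CoQueue are
QEMU's types):

  typedef struct {
      QemuMutex lock;         /* guards the fields below */
      CoQueue free_req_queue; /* waiters for a free request;
                               * protected by lock */
      /* ... */
  } NVMeQueuePairSketch;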


  Commit: b75fd5f55467307a6e367bc349a8ea6ce30d8a1c
      https://github.com/qemu/qemu/commit/b75fd5f55467307a6e367bc349a8ea6ce30d8a1c
  Author: Stefan Hajnoczi <stefanha@redhat.com>
  Date:   2020-06-23 (Tue, 23 Jun 2020)

  Changed paths:
    M block/nvme.c

  Log Message:
  -----------
  block/nvme: keep BDRVNVMeState pointer in NVMeQueuePair

Passing around both BDRVNVMeState and NVMeQueuePair is unwieldy. Reduce
the number of function arguments by keeping the BDRVNVMeState pointer in
NVMeQueuePair. This will come in handy when a BH is introduced in a
later patch and only one argument can be passed to it.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200617132201.1832152-7-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
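
The shape of the change, sketched (the field name s is an assumption):
with the back-pointer in place, a one-argument callback such as a BH can
recover the drive state from the queue pair alone.

  typedef struct BDRVNVMeState BDRVNVMeState;  /* drive-wide state */

  typedef struct {
      BDRVNVMeState *s;    /* back-pointer kept in the queue pair */
      /* ...queues, lock, requests... */
  } NVMeQueuePairSketch;

  /* A BH callback only receives one opaque pointer; the back-pointer
   * makes that enough. */
  static void completion_bh(void *opaque)
  {
      NVMeQueuePairSketch *q = opaque;
      BDRVNVMeState *s = q->s;
      (void)s;             /* ...use both q and s here... */
  }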


  Commit: 7838c67f22a81fcf669785cd6c0876438422071a
      https://github.com/qemu/qemu/commit/7838c67f22a81fcf669785cd6c0876438422071a
  Author: Stefan Hajnoczi <stefanha@redhat.com>
  Date:   2020-06-23 (Tue, 23 Jun 2020)

  Changed paths:
    M block/nvme.c
    M block/trace-events

  Log Message:
  -----------
  block/nvme: support nested aio_poll()

QEMU block drivers are supposed to support aio_poll() from I/O
completion callback functions. This means completion processing must be
re-entrant.

The standard approach is to schedule a BH during completion processing
and cancel it at the end of processing. If aio_poll() is invoked by a
callback function then the BH will run. The BH continues the suspended
completion processing.

All of this means that request A's cb() can synchronously wait for
request B to complete. Previously the nvme block driver would hang
because it didn't process completions from nested aio_poll().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20200617132201.1832152-8-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
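
A self-contained model of the schedule/cancel pattern (the real code uses
a QEMUBH via qemu_bh_schedule()/qemu_bh_cancel(); this sketch models the
BH as a pending flag that a nested event loop would run):

  #include <stdbool.h>

  typedef struct {
      bool bh_pending;   /* stands in for a scheduled QEMUBH */
      int pending_cqes;  /* completions not yet processed */
  } QueueModel;

  static void process_completions(QueueModel *q);

  /* What a nested aio_poll() does with our BH: run it if scheduled. */
  static void nested_aio_poll(QueueModel *q)
  {
      if (q->bh_pending) {
          q->bh_pending = false;
          process_completions(q);  /* continue the suspended pass */
      }
  }

  /* A completion callback may legitimately wait for another request. */
  static void request_cb(QueueModel *q)
  {
      nested_aio_poll(q);
  }

  static void process_completions(QueueModel *q)
  {
      q->bh_pending = true;        /* schedule the BH up front */
      while (q->pending_cqes > 0) {
          q->pending_cqes--;
          request_cb(q);           /* may re-enter through the BH */
      }
      q->bh_pending = false;       /* cancel the BH: nothing left to do */
  }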


  Commit: 87fb952da83b223c82048a29aaf03680af1ea92f
      https://github.com/qemu/qemu/commit/87fb952da83b223c82048a29aaf03680af1ea92f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   2020-06-26 (Fri, 26 Jun 2020)

  Changed paths:
    M block/nvme.c
    M block/trace-events
    M configure
    M include/qemu/coroutine_int.h
    M scripts/minikconf.py
    M tests/check-block.sh
    M util/coroutine-sigaltstack.c
    M util/coroutine-ucontext.c

  Log Message:
  -----------
  Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into staging

Pull request

# gpg: Signature made Wed 24 Jun 2020 11:01:57 BST
# gpg:                using RSA key 8695A8BFD3F97CDAAC35775A9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>" [full]
# gpg:                 aka "Stefan Hajnoczi <stefanha@gmail.com>" [full]
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35  775A 9CA4 ABB3 81AB 73C8

* remotes/stefanha/tags/block-pull-request:
  block/nvme: support nested aio_poll()
  block/nvme: keep BDRVNVMeState pointer in NVMeQueuePair
  block/nvme: clarify that free_req_queue is protected by q->lock
  block/nvme: switch to a NVMeRequest freelist
  block/nvme: don't access CQE after moving cq.head
  block/nvme: drop tautologous assertion
  block/nvme: poll queues without q->lock
  check-block: enable iotests with SafeStack
  configure: add flags to support SafeStack
  coroutine: add check for SafeStack in sigaltstack
  coroutine: support SafeStack in ucontext backend
  minikconf: explicitly set encoding to UTF-8

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>


Compare: https://github.com/qemu/qemu/compare/10f7ffabf9c5...87fb952da83b


