> On Apr 23, 2020, at 9:54 AM, 罗勇刚(Yonggang Luo) <address@hidden> wrote:
>
> Is multi-process supported on Windows?
> I found that it uses mmap and unix sockets for inter-process communication, which may not be supported on Windows.
Hi Yonggang,
We have only tested this on Linux so far. Are you using QEMU with Windows?
Yes, I am using QEMU with Windows.
> And also, can the python script be replaced by C implementation?
The functionality in the python script would eventually move to libvirt. The python
script is a temporary measure.
Does that mean that without libvirt, QEMU cannot be invoked directly?
Thank you very much!
—
Jag
>
> On Thu, Apr 23, 2020 at 12:38 PM <address@hidden> wrote:
> From: Elena Ufimtseva <address@hidden>
>
> Signed-off-by: Elena Ufimtseva <address@hidden>
> Signed-off-by: Jagannathan Raman <address@hidden>
> Signed-off-by: John G Johnson <address@hidden>
> ---
> MAINTAINERS | 2 +
> docs/multi-process.rst | 85 +++++++++++++++++++++++++
> scripts/mpqemu-launcher-perf-mode.py | 92 ++++++++++++++++++++++++++++
> scripts/mpqemu-launcher.py | 53 ++++++++++++++++
> 4 files changed, 232 insertions(+)
> create mode 100644 docs/multi-process.rst
> create mode 100755 scripts/mpqemu-launcher-perf-mode.py
> create mode 100755 scripts/mpqemu-launcher.py
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index ed48615e15..8ff3bfae6a 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -2880,6 +2880,8 @@ F: remote/iohub.c
> F: remote/remote-opts.h
> F: remote/remote-opts.c
> F: docs/devel/multi-process.rst
> +F: scripts/mpqemu-launcher.py
> +F: scripts/mpqemu-launcher-perf-mode.py
>
> Build and test automation
> -------------------------
> diff --git a/docs/multi-process.rst b/docs/multi-process.rst
> new file mode 100644
> index 0000000000..8387d6c691
> --- /dev/null
> +++ b/docs/multi-process.rst
> @@ -0,0 +1,85 @@
> +Multi-process QEMU
> +==================
> +
> +This document describes how to configure and use multi-process QEMU.
> +For the design document, refer to docs/devel/multi-process.rst.
> +
> +1) Configuration
> +----------------
> +
> +To enable support for multi-process QEMU, add --enable-mpqemu
> +to the list of options passed to the "configure" script.
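> +
> +For example, a typical build might look like the following (the target
> +list shown here is only illustrative):
> +
> + ./configure --enable-mpqemu --target-list=x86_64-softmmu
> + make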
> +
> +
> +2) Usage
> +--------
> +
> +Multi-process QEMU requires an orchestrator to launch it. A light-weight
> +Python-based orchestrator for mpqemu is provided in
> +scripts/mpqemu-launcher.py; it launches QEMU in multi-process mode.
> +
> +scripts/mpqemu-launcher-perf-mode.py launches QEMU in "perf" mode. In this
> +mode, a single QEMU process connects to multiple remote devices, each
> +emulated in a separate process.
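> +
> +For example, either script can be run directly to bring up QEMU in
> +multi-process mode (note that the paths to the QEMU binary, the remote
> +process binary and the disk images are hardcoded in the scripts and may
> +need to be edited for the local setup):
> +
> + python3 scripts/mpqemu-launcher.py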
> +
> +As of now, only the emulation of the lsi53c895a device in a separate
> +process is supported.
> +
> +The following describes the command-lines used to launch mpqemu.
> +
> +* Orchestrator:
> +
> + - The Orchestrator creates a unix socketpair.
> +
> + - It launches the remote process and passes the file descriptor of one
> + of the sockets to it on the command-line.
> +
> + - It then launches QEMU and specifies the other socket as an option
> + to the Proxy device object.
> +
> +* Remote Process:
> +
> + - The first command-line argument to the remote process is the file
> + descriptor of one of the sockets created by the Orchestrator.
> +
> + - The remaining options are no different from how one launches QEMU with
> + devices. The only other requirement is that each PCI device must be
> + given a unique ID. This is needed to pair the remote device with its
> + Proxy object.
> +
> + - An example command-line for the remote process is as follows (the
> + first argument, 4 here, is the file descriptor of the socket passed in
> + by the Orchestrator):
> +
> + /usr/bin/qemu-scsi-dev 4 \
> + -device lsi53c895a,id=lsi0 \
> + -drive id=drive_image2,file=/build/ol7-nvme-test-1.qcow2 \
> + -device scsi-hd,id=drive2,drive=drive_image2,bus=lsi0.0,scsi-id=0
> +
> +* QEMU:
> +
> + - Since parts of RAM are shared between QEMU and the remote process, a
> + memory-backend-memfd object is required to facilitate this, as follows:
> +
> + -object memory-backend-memfd,id=mem,size=2G
> +
> + - A "pci-proxy-dev" device is created for each of the PCI devices emulated
> + in the remote process. A "socket" sub-option specifies the other end of
> + unix channel created by orchestrator. The "id" sub-option must be specified
> + and should be the same as the "id" specified for the remote PCI device
> +
> + - An example command-line for QEMU is as follows:
> +
> + -device pci-proxy-dev,id=lsi0,socket=3
> +
> +* Monitor / QMP:
> +
> + - The remote process supports the QEMU monitor. It can be enabled using
> + the "-monitor" or "-qmp" command-line options.
> +
> + - As an example, one can enable the monitor by adding the following
> + to the command-line of the remote process:
> +
> + -monitor unix:/home/qmp-sock,server,nowait
> +
> + - The user can then connect to the monitor using the qmp script or using
> + "socat", as outlined below:
> +
> + socat /home/qmp-sock stdio
> diff --git a/scripts/mpqemu-launcher-perf-mode.py b/scripts/mpqemu-launcher-perf-mode.py
> new file mode 100755
> index 0000000000..2733424c76
> --- /dev/null
> +++ b/scripts/mpqemu-launcher-perf-mode.py
> @@ -0,0 +1,92 @@
> +#!/usr/bin/env python3
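> +# Launch QEMU in multi-process "perf" mode: one QEMU process connects to
> +# three remote device processes, each emulating an lsi53c895a controller
> +# over its own socket pair. The QEMU, remote-process and disk-image paths
> +# below are examples and may need to be adjusted for the local build.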
> +
> +import socket
> +import os
> +import subprocess
> +import time
> +
> +PROC_QEMU='/usr/bin/qemu-system-x86_64'
> +
> +PROC_REMOTE='/usr/bin/qemu-scsi-dev'
> +
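> +# Create one unix socketpair per remote device process. The file descriptor
> +# of the "remote" end is passed to the remote process on its command line,
> +# and the "proxy" end is given to the matching pci-proxy-dev device in QEMU.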
> +proxy_1, remote_1 = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
> +proxy_2, remote_2 = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
> +proxy_3, remote_3 = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
> +
> +remote_cmd_1 = [ PROC_REMOTE, \
> + str(remote_1.fileno()), \
> + '-device', 'lsi53c895a,id=lsi1', \
> + '-drive', 'id=drive_image1,' \
> + 'file=/build/ol7-nvme-test-1.qcow2', \
> + '-device', 'scsi-hd,id=drive1,drive=drive_image1,' \
> + 'bus=lsi1.0,scsi-id=0', \
> + ]
> +
> +remote_cmd_2 = [ PROC_REMOTE, \
> + str(remote_2.fileno()), \
> + '-device', 'lsi53c895a,id=lsi2', \
> + '-drive', 'id=drive_image2,' \
> + 'file=/build/ol7-nvme-test-2.qcow2', \
> + '-device', 'scsi-hd,id=drive2,drive=drive_image2,' \
> + 'bus=lsi2.0,scsi-id=0' \
> + ]
> +
> +remote_cmd_3 = [ PROC_REMOTE, \
> + str(remote_3.fileno()), \
> + '-device', 'lsi53c895a,id=lsi3', \
> + '-drive', 'id=drive_image3,' \
> + 'file=/build/ol7-nvme-test-3.qcow2', \
> + '-device', 'scsi-hd,id=drive3,drive=drive_image3,' \
> + 'bus=lsi3.0,scsi-id=0' \
> + ]
> +
> +proxy_cmd = [ PROC_QEMU, \
> + '-name', 'OL7.4', \
> + '-machine', 'q35,accel=kvm', \
> + '-smp', 'sockets=1,cores=1,threads=1', \
> + '-m', '2048', \
> + '-object', 'memory-backend-memfd,id=sysmem-file,size=2G', \
> + '-numa', 'node,memdev=sysmem-file', \
> + '-device', 'virtio-scsi-pci,id=virtio_scsi_pci0', \
> + '-drive', 'id=drive_image1,if=none,format=qcow2,' \
> + 'file=/home/ol7-hdd-1.qcow2', \
> + '-device', 'scsi-hd,id=image1,drive=drive_image1,' \
> + 'bus=virtio_scsi_pci0.0', \
> + '-boot', 'd', \
> + '-vnc', ':0', \
> + '-device', 'pci-proxy-dev,id=lsi1,' \
> + 'socket='+str(proxy_1.fileno()), \
> + '-device', 'pci-proxy-dev,id=lsi2,' \
> + 'socket='+str(proxy_2.fileno()), \
> + '-device', 'pci-proxy-dev,id=lsi3,' \
> + 'socket='+str(proxy_3.fileno()) \
> + ]
> +
> +
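> +# Fork helper children to launch the three remote device processes.
> +# pass_fds keeps the socket file descriptor open across exec so that the
> +# remote process can use the fd number given on its command line.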
> +pid = os.fork()
> +if pid == 0:
> +    # In remote_1
> +    print('Launching Remote process 1')
> +    process = subprocess.Popen(remote_cmd_1, pass_fds=[remote_1.fileno()])
> +    os._exit(0)
> +
> +
> +pid = os.fork()
> +if pid == 0:
> +    # In remote_2
> +    print('Launching Remote process 2')
> +    process = subprocess.Popen(remote_cmd_2, pass_fds=[remote_2.fileno()])
> +    os._exit(0)
> +
> +
> +pid = os.fork()
> +if pid == 0:
> +    # In remote_3
> +    print('Launching Remote process 3')
> +    process = subprocess.Popen(remote_cmd_3, pass_fds=[remote_3.fileno()])
> +    os._exit(0)
> +
> +
> +print('Launching Proxy process')
> +process = subprocess.Popen(proxy_cmd, pass_fds=[proxy_1.fileno(), \
> + proxy_2.fileno(), proxy_3.fileno()])
> diff --git a/scripts/mpqemu-launcher.py b/scripts/mpqemu-launcher.py
> new file mode 100755
> index 0000000000..81e370663e
> --- /dev/null
> +++ b/scripts/mpqemu-launcher.py
> @@ -0,0 +1,53 @@
> +#!/usr/bin/env python3
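> +# Launch QEMU in multi-process mode: a single remote process emulates two
> +# lsi53c895a controllers, and QEMU connects to it over one socket pair via
> +# two pci-proxy-dev devices. The QEMU, remote-process and disk-image paths
> +# below are examples and may need to be adjusted for the local build.
> +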
> +import socket
> +import os
> +import subprocess
> +import time
> +
> +PROC_QEMU='/usr/bin/qemu-system-x86_64'
> +
> +PROC_REMOTE='/usr/bin/qemu-scsi-dev'
> +
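> +# A single unix socketpair is shared by both remote PCI devices: the
> +# "remote" end goes to the remote process and the "proxy" end is used by
> +# both pci-proxy-dev devices in QEMU.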
> +proxy, remote = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
> +
> +remote_cmd = [ PROC_REMOTE, \
> + str(remote.fileno()), \
> + '-device', 'lsi53c895a,id=lsi1', \
> + '-drive', 'id=drive_image1,file=/build/ol7-nvme-test-1.qcow2', \
> + '-device', 'scsi-hd,id=drive1,drive=drive_image1,bus=lsi1.0,' \
> + 'scsi-id=0', \
> + '-device', 'lsi53c895a,id=lsi2', \
> + '-drive', 'id=drive_image2,file=/build/ol7-nvme-test-2.qcow2', \
> + '-device', 'scsi-hd,id=drive2,drive=drive_image2,bus=lsi2.0,' \
> + 'scsi-id=0' \
> + ]
> +
> +proxy_cmd = [ PROC_QEMU, \
> + '-name', 'OL7.4', \
> + '-machine', 'q35,accel=kvm', \
> + '-smp', 'sockets=1,cores=1,threads=1', \
> + '-m', '2048', \
> + '-object', 'memory-backend-memfd,id=sysmem-file,size=2G', \
> + '-numa', 'node,memdev=sysmem-file', \
> + '-device', 'virtio-scsi-pci,id=virtio_scsi_pci0', \
> + '-drive', 'id=drive_image1,if=none,format=qcow2,' \
> + 'file=/home/ol7-hdd-1.qcow2', \
> + '-device', 'scsi-hd,id=image1,drive=drive_image1,' \
> + 'bus=virtio_scsi_pci0.0', \
> + '-boot', 'd', \
> + '-vnc', ':0', \
> + '-device', 'pci-proxy-dev,id=lsi1,socket='+str(proxy.fileno()), \
> + '-device', 'pci-proxy-dev,id=lsi2,socket='+str(proxy.fileno()) \
> + ]
> +
> +
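> +# Fork: the parent launches QEMU with the proxy end of the socket pair,
> +# while the child launches the remote device process with the other end.
> +# pass_fds keeps the corresponding fd open across exec.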
> +pid = os.fork()
> +
> +if pid:
> +    # In Proxy
> +    print('Launching QEMU with Proxy object')
> +    process = subprocess.Popen(proxy_cmd, pass_fds=[proxy.fileno()])
> +else:
> +    # In remote
> +    print('Launching Remote process')
> +    process = subprocess.Popen(remote_cmd, pass_fds=[remote.fileno()])
> --
> 2.25.GIT
>
>
>
>
> --
> Yours sincerely,
> Yonggang Luo