Re: [RFC][PATCH v2 0/3] IVSHMEM version 2 device for QEMU


From: Jan Kiszka
Subject: Re: [RFC][PATCH v2 0/3] IVSHMEM version 2 device for QEMU
Date: Wed, 29 Apr 2020 13:50:12 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.7.0

On 29.04.20 06:15, Liang Yan wrote:
Hi, All,

I ran a test with these patches; everything looked fine.

Test environment:
Host: openSUSE Tumbleweed + latest upstream QEMU + these three patches
Guest: openSUSE Tumbleweed root fs + custom kernel (5.5) + the related
uio-ivshmem driver + ivshmem-console/ivshmem-block tools


1. lspci output

00:04.0 Unassigned class [ff80]: Siemens AG Device 4106 (prog-if 02)
        Subsystem: Red Hat, Inc. Device 1100
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Region 0: Memory at fea56000 (32-bit, non-prefetchable) [size=4K]
        Region 1: Memory at fea57000 (32-bit, non-prefetchable) [size=4K]
        Region 2: Memory at fd800000 (64-bit, prefetchable) [size=1M]
        Capabilities: [4c] Vendor Specific Information: Len=18 <?>
        Capabilities: [40] MSI-X: Enable+ Count=2 Masked-
                Vector table: BAR=1 offset=00000000
                PBA: BAR=1 offset=00000800
        Kernel driver in use: virtio-ivshmem
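
As I read the spec from patch 2, Region 0 is the register BAR, Region 1
carries the MSI-X table/PBA (visible in the capability above), and Region 2
is the shared-memory window; the uio_ivshmem messages in the steps below
place state_table and rw_section inside Region 2, as expected. This can be
cross-checked from sysfs (path assumes the 00:04.0 device above):

    # line 3 of 'resource' is BAR2: start address, end address, flags
    awk 'NR==3' /sys/bus/pci/devices/0000:00:04.0/resource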


2. virtio-ivshmem-console test
2.1 ivshmem2-server (host)

airey:~/ivshmem/qemu/:[0]# ./ivshmem2-server -F -l 64K -n 2 -V 3 -P 0x8003
*** Example code, do not use in production ***
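
For anyone reproducing this, my reading of the server flags
(contrib/ivshmem2-server/main.c is authoritative; the meanings of -n and
-V in particular are my assumption):

    # -F         stay in the foreground
    # -l 64K     size of the shared memory region to create
    # -n 2       presumably the number of peers
    # -V 3       presumably the number of MSI-X vectors per peer
    # -P 0x8003  protocol type: 0x8000 + virtio device ID
    #            (3 = console here, 2 = block in test 3 below)
    ./ivshmem2-server -F -l 64K -n 2 -V 3 -P 0x8003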

2.2 guest VM backend (test-01)
localhost:~ # echo "110a 4106 1af4 1100 ffc003 ffffff" >
/sys/bus/pci/drivers/uio_ivshmem/new_id
[  185.831277] uio_ivshmem 0000:00:04.0: state_table at
0x00000000fd800000, size 0x0000000000001000
[  185.835129] uio_ivshmem 0000:00:04.0: rw_section at
0x00000000fd801000, size 0x0000000000007000
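
The new_id string uses the standard PCI sysfs format; the class field is
what selects the protocol, since (as I understand the spec in patch 2) the
ivshmem v2 class code encodes the protocol type in its low bits:

    # new_id format: vendor device subvendor subdevice class class_mask
    #   110a 4106      Siemens AG vendor and device ID (cf. lspci above)
    #   1af4 1100      Red Hat subsystem vendor/device ID
    #   ffc003 ffffff  exact 24-bit class-code match; the low bits carry
    #                  the protocol type (the virtio console one here)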

localhost:~ # virtio/virtio-ivshmem-console /dev/uio0
Waiting for peer to be ready...

2.3 guest VM frontend (test-02)
Boot (or reboot) the frontend after the backend is ready.

2.4 The backend now shows the serial output of the frontend

localhost:~/virtio # ./virtio-ivshmem-console /dev/uio0
Waiting for peer to be ready...
Starting virtio device
device_status: 0x0
device_status: 0x1
device_status: 0x3
device_features_sel: 1
device_features_sel: 0
driver_features_sel: 1
driver_features[1]: 0x13
driver_features_sel: 0
driver_features[0]: 0x1
device_status: 0xb
queue_sel: 0
queue size: 8
queue driver vector: 1
queue desc: 0x200
queue driver: 0x280
queue device: 0x2c0
queue enable: 1
queue_sel: 1
queue size: 8
queue driver vector: 2
queue desc: 0x400
queue driver: 0x480
queue device: 0x4c0
queue enable: 1
device_status: 0xf

Welcome to openSUSE Tumbleweed 20200326 - Kernel 5.5.0-rc5-1-default+ (hvc0).

enp0s3:


localhost login:
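
For reference, the device_status values in the trace above are the
standard virtio 1.x status bits, so this is the normal driver bring-up
sequence:

    # 0x01 ACKNOWLEDGE, 0x02 DRIVER, 0x08 FEATURES_OK, 0x04 DRIVER_OK
    # 0x1 -> 0x3 -> 0xb (features accepted) -> 0xf (driver ready)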

2.5 Closing the backend makes the frontend show
localhost:~ # [  185.685041] virtio-ivshmem 0000:00:04.0: backend failed!

3. virtio-ivshmem-block test

3.1 ivshmem2-server (host)
airey:~/ivshmem/qemu/:[0]# ./ivshmem2-server -F -l 1M -n 2 -V 2 -P 0x8002
*** Example code, do not use in production ***

3.2 guest VM backend (test-01)

localhost:~ # echo "110a 4106 1af4 1100 ffc002 ffffff" >
/sys/bus/pci/drivers/uio_ivshmem/new_id
[   77.701462] uio_ivshmem 0000:00:04.0: state_table at
0x00000000fd800000, size 0x0000000000001000
[   77.705231] uio_ivshmem 0000:00:04.0: rw_section at
0x00000000fd801000, size 0x00000000000ff000

localhost:~ # virtio/virtio-ivshmem-block /dev/uio0 /root/disk.img
Waiting for peer to be ready...

3.3 guest VM frontend (test-02)
Boot (or reboot) the frontend after the backend is ready.

3.4 guest VM backend (test-01)
localhost:~ # virtio/virtio-ivshmem-block /dev/uio0 /root/disk.img
Waiting for peer to be ready...
Starting virtio device
device_status: 0x0
device_status: 0x1
device_status: 0x3
device_features_sel: 1
device_features_sel: 0
driver_features_sel: 1
driver_features[1]: 0x13
driver_features_sel: 0
driver_features[0]: 0x206
device_status: 0xb
queue_sel: 0
queue size: 8
queue driver vector: 1
queue desc: 0x200
queue driver: 0x280
queue device: 0x2c0
queue enable: 1
device_status: 0xf
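
Compared with the console run, the feature words decode as one would
expect for a block device (my reading of the virtio feature bit
definitions; worth double-checking against the spec):

    # driver_features[0] = 0x206 -> bits 1, 2, 9:
    #   VIRTIO_BLK_F_SIZE_MAX | VIRTIO_BLK_F_SEG_MAX | VIRTIO_BLK_F_FLUSH
    # driver_features[1] = 0x13 -> transport bits of the 64-bit feature
    #   word, including bit 32 (VIRTIO_F_VERSION_1)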

3.5 On the guest VM frontend (test-02), a new disk is attached:

fdisk -l /dev/vdb

Disk /dev/vdb: 192 KiB, 196608 bytes, 384 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

3.6 Closing the backend makes the frontend show
localhost:~ # [ 1312.284301] virtio-ivshmem 0000:00:04.0: backend failed!



Tested-by: Liang Yan <address@hidden>


Thanks for testing this! I'll look into your patch findings.

Jan

On 1/7/20 9:36 AM, Jan Kiszka wrote:
Overdue update of the ivshmem 2.0 device model as presented at [1].

Changes in v2:
  - changed PCI device ID to Siemens-granted one,
    adjusted PCI device revision to 0
  - removed unused feature register from device
  - addressed feedback on specification document
  - rebased over master

This version is now fully in sync with the implementation for Jailhouse
that is currently under review [2][3]; the UIO and virtio-ivshmem drivers
are shared between the two. Jailhouse will very likely pick up this
revision of the device in order to move forward with stress-testing it.

More details on the usage with QEMU are in the original cover letter,
reproduced below (with adjustments for the new device ID):

If you want to play with this, the basic setup of the shared memory
device is described in patches 1 and 3. The UIO driver and the
virtio-ivshmem prototype can be found at

     http://git.kiszka.org/?p=linux.git;a=shortlog;h=refs/heads/queues/ivshmem2

Accessing the device via UIO is trivial enough. If you want to use it
for virtio, the following is needed on the virtio console backend side,
in addition to the description in patch 3:

     modprobe uio_ivshmem
     echo "110a 4106 1af4 1100 ffc003 ffffff" > /sys/bus/pci/drivers/uio_ivshmem/new_id
     linux/tools/virtio/virtio-ivshmem-console /dev/uio0

And for virtio block:

     echo "110a 4106 1af4 1100 ffc002 ffffff" > 
/sys/bus/pci/drivers/uio_ivshmem/new_id
     linux/tools/virtio/virtio-ivshmem-console /dev/uio0 /path/to/disk.img

After that, you can start the QEMU frontend instance with the
virtio-ivshmem driver installed, which can then use the new /dev/hvc* or
/dev/vda* devices as usual.
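
On the "trivial enough" UIO access: the generic UIO sysfs attributes are
a quick way to check which regions uio_ivshmem exposes before writing any
code (a sketch; the number and naming of the maps depend on the driver):

     ls /sys/class/uio/uio0/maps/            # one mapY directory per region
     cat /sys/class/uio/uio0/maps/map0/addr  # physical address of map0
     cat /sys/class/uio/uio0/maps/map0/size  # its size; an mmap of /dev/uio0
                                             # selects map Y via offset = Y * page size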

Any feedback welcome!

Jan

PS: Let me know if I missed someone potentially interested in this topic
on CC - or if you would like to be dropped from the list.

[1] https://kvmforum2019.sched.com/event/TmxI
[2] https://groups.google.com/forum/#!topic/jailhouse-dev/ffnCcRh8LOs
[3] https://groups.google.com/forum/#!topic/jailhouse-dev/HX-0AGF1cjg

Jan Kiszka (3):
   hw/misc: Add implementation of ivshmem revision 2 device
   docs/specs: Add specification of ivshmem device revision 2
   contrib: Add server for ivshmem revision 2

  Makefile                                  |    3 +
  Makefile.objs                             |    1 +
  configure                                 |    1 +
  contrib/ivshmem2-server/Makefile.objs     |    1 +
  contrib/ivshmem2-server/ivshmem2-server.c |  462 ++++++++++++
  contrib/ivshmem2-server/ivshmem2-server.h |  158 +++++
  contrib/ivshmem2-server/main.c            |  313 +++++++++
  docs/specs/ivshmem-2-device-spec.md       |  376 ++++++++++
  hw/misc/Makefile.objs                     |    2 +-
  hw/misc/ivshmem2.c                        | 1085 +++++++++++++++++++++++++++++
  include/hw/misc/ivshmem2.h                |   48 ++
  include/hw/pci/pci_ids.h                  |    2 +
  12 files changed, 2451 insertions(+), 1 deletion(-)
  create mode 100644 contrib/ivshmem2-server/Makefile.objs
  create mode 100644 contrib/ivshmem2-server/ivshmem2-server.c
  create mode 100644 contrib/ivshmem2-server/ivshmem2-server.h
  create mode 100644 contrib/ivshmem2-server/main.c
  create mode 100644 docs/specs/ivshmem-2-device-spec.md
  create mode 100644 hw/misc/ivshmem2.c
  create mode 100644 include/hw/misc/ivshmem2.h


--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux


