qemu-devel

Re: [PATCH v1 0/9] hw/mos6522: VIA timer emulation fixes and improvements


From: Mark Cave-Ayland
Subject: Re: [PATCH v1 0/9] hw/mos6522: VIA timer emulation fixes and improvements
Date: Thu, 18 Nov 2021 11:13:01 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.14.0

On 17/11/2021 03:03, Finn Thain wrote:

On Fri, 24 Sep 2021, I wrote:

This is a patch series for QEMU that I started last year. The aim was to
try to get a monotonic clocksource for Linux/m68k guests. That hasn't
been achieved yet (for q800 machines). I'm submitting the patch series
because,

  - it improves 6522 emulation fidelity, albeit at a slight cost in speed, and


I did some more benchmarking to examine the performance implications.

I measured a performance improvement with this patch series. For a
Linux/m68k guest, the execution time for a gettimeofday syscall dropped
from 9 us to 5 us. (This is a fairly common syscall.)

The host CPU time consumed by qemu in starting the guest kernel and
executing a benchmark involving 20 million gettimeofday calls was as
follows.

qemu-system-m68k mainline (42f6c9179b):
     real     198 s
     sys      123 s
     user     73 s
     sys/user 1.68

qemu-system-m68k patched (0a0bca4711):
     real     112 s
     sys      63 s
     user     47 s
     sys/user 1.34

As with any microbenchmark, this workload is not a real-world one. For
comparison, here are measurements of the time to fully start up and
immediately shut down Debian Linux/m68k SID (systemd):

qemu-system-m68k mainline (42f6c9179b)
     real     31.5 s
     sys      1.59 s
     user     27.4 s
     sys/user 0.06

qemu-system-m68k patched (0a0bca4711)
     real     31.2 s
     sys      1.17 s
     user     27.6 s
     sys/user 0.04

The decrease in sys/user ratio reflects the small cost that has to be paid
for the improvement in 6522 emulation fidelity and timer accuracy. But
note that in both benchmarks wallclock execution time dropped, meaning
that the system is faster overall.

The gettimeofday testing revealed that the Linux kernel does not properly
protect userland from pathological hardware timers, and the gettimeofday
result was seen to jump backwards (that was unexpected, though Mark did
predict it).

This backwards jump was often observed in the mainline build during the
gettimeofday benchmark and is a result of bugs in mos6522.c. The patched
build has not exhibited this problem so far.

The two benefits described here are offered in addition to all of the
other benefits described in the patches themselves. Please consider
merging this series.

Hi Finn,

I've not forgotten about this series - we're now in 6.2 freeze, but it's on my TODO list to revisit this in the next development cycle, along with the ESP stress-ng changes which I've also been looking at. As mentioned in my previous reviews, the patch will need some further analysis: in particular, the logic in mos6522_read() that can generate a spurious interrupt on a register read needs to be removed, and testing is required to ensure that these changes don't affect the CUDA clock warping which allows OS X to boot under qemu-system-ppc.

I'm confident that qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) is monotonic, since if it were not then there would be huge numbers of complaints from QEMU users. It appears that Linux can potentially alter the ticks in mac_read_clk() at https://github.com/torvalds/linux/blob/master/arch/m68k/mac/via.c#L624 which suggests the issue is related to timer wraparound. I'd like to confirm exactly which part of your series fixes the specific problem of the clock jumping backwards before merging these changes.

I realized that I could easily cross-compile a 5.14 kernel to test this theory with the test root image and .config you supplied at https://gitlab.com/qemu-project/qemu/-/issues/611 using the QEMU docker-m68k-cross image to avoid having to build a complete toolchain by hand. The kernel builds successfully using this method, but during boot it hangs sending the first SCSI CDB to the ESP device, failing to send the last byte using PDMA.

Are there known issues cross-compiling an m68k kernel on an x86 host? Or are your current kernels being built from a separate branch outside of mainline Linux git?


ATB,

Mark.


