From: josh
Subject: Re: [Qemu-devel] "Using Python to investigate EFI and ACPI"
Date: Thu, 3 Sep 2015 09:41:21 -0700
User-agent: Mutt/1.5.20 (2009-06-14)

On Thu, Sep 03, 2015 at 05:53:45PM +0200, Laszlo Ersek wrote:
> On 09/03/15 16:50, Josh Triplett wrote:
> > On Thu, Sep 03, 2015 at 11:16:40AM +0200, Laszlo Ersek wrote:
> >> Then this payload is passed to the guest firmware (SeaBIOS or OVMF) over
> >> "fw_cfg" (which is a simple protocol, comprising, at this point, one
> >> selector and one data register, which are IO ports or MMIO locations --
> >> see "docs/specs/fw_cfg.txt" in QEMU and
> >> "Documentation/devicetree/bindings/arm/fw-cfg.txt" in the kernel).
> > 
> > Interesting; I hadn't seen that protocol before.
> > 
> > Do you virtualize those I/O ports by CPU, to make them thread-safe, or
> > does the last address written to 0x510 get saved system-wide, making it
> > unsafe for concurrent access?
> 
> I think fw_cfg is not meant to be accessed by several CPUs concurrently.
> The protocol is stateful (selected key, offset within blob associated
> with selected key, etc), and "accessing CPU" is not part of that state.

Not that hard to fix; just keep all the state in the accessing CPU
rather than the system.  Current processors do that for the PCI I/O port
pair, to avoid race conditions.  You could easily do that for the fw_cfg
I/O ports.  As a bonus, you then wouldn't need to take any kind of lock
around accesses to that state, because the CPU owns that state.

(That's the easy fix; the harder fix would be creating a new race-free
MMIO protocol and mapping all of the data structures into memory
directly, which would provide a performance benefit as well.  I'd love
to see a general version of such a protocol for a more efficient virtio
filesystem, though in the simpler case of fw_cfg you can just map all of
the structures into memory.)
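
Concretely, the traditional fw_cfg sequence being discussed looks
roughly like this from the guest side (a sketch only: the port numbers
are the x86 defaults from docs/specs/fw_cfg.txt, and outw()/inb() stand
in for whatever port I/O helpers your environment provides, not a real
API):

    FW_CFG_PORT_SEL  = 0x510   # 16-bit selector register
    FW_CFG_PORT_DATA = 0x511   # 8-bit data register, read byte by byte

    FW_CFG_SIGNATURE = 0x0000  # well-known key; its blob is b"QEMU"

    def fw_cfg_read(key, length, outw, inb):
        # Selecting a key resets the read offset to zero; each read of
        # the data port then returns the next byte of that item.  This
        # (key, offset) pair is exactly the global state that makes
        # concurrent access from several CPUs unsafe today.
        outw(FW_CFG_PORT_SEL, key)
        return bytes(inb(FW_CFG_PORT_DATA) for _ in range(length))

    # e.g. fw_cfg_read(FW_CFG_SIGNATURE, 4, outw, inb) == b"QEMU"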

> >> With this background, you can probably see where I'm going with this. It
> >> is not really easy to *test* the AML methods that QEMU generates
> >> (piecing them together, with helper functions, from AML primitives),
> >> without dedicated guest kernel drivers. I think the only method that I
> >> know of is the following:
> >>
> >> - in the Linux guest, dump the ACPI tables with acpidump, or save them
> >> from sysfs directly (/sys/firmware/acpi/tables)
> >> - pass the DSDT and the SSDTs (and any data tables referenced by them?)
> >> to AcpiExec from the ACPICA suite
> >> - work with AcpiExec
> >>
> >> But, for simple testing, can we maybe run your tool within the guest,
> >> before the runtime OS boots?
> > 
> > Yes, absolutely.  We have a batch-mode testing mechanism based on a
> > config file; you'd probably want to make use of that.  With some
> > extensions, it could dump results either to an emulated serial port or
> > some other interface that you can read from outside qemu.  We also need
> > to work on making the results more machine-parseable for automation.
> 
> While I certainly don't discount automation, my primary use case is
> interactive development / testing. :) (Although, I can see myself
> canning some commands in a script or config file, and invoking *that*
> interactively. Which was your point, probably.)

Interactive development and testing is even easier; you can do that
today.  And yes, if you find yourself doing the same thing repeatedly,
you should put it in a module you can run.  Turning the results into an
automated regression test would be even better. :)
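
(Tangentially, the dump-and-AcpiExec route you list above is easy to
script from the Linux guest; a rough sketch, assuming root and the
standard sysfs layout, that collects the DSDT and SSDTs for acpiexec:)

    # Sketch: copy the DSDT and SSDTs out of a running Linux guest so
    # they can be handed to acpiexec on the host.  Needs root; the
    # output file names are arbitrary.
    import os, shutil

    SRC = "/sys/firmware/acpi/tables"
    DST = "acpi-tables"

    os.makedirs(DST, exist_ok=True)
    for name in os.listdir(SRC):
        path = os.path.join(SRC, name)
        if os.path.isfile(path) and name.startswith(("DSDT", "SSDT")):
            shutil.copy(path, os.path.join(DST, name + ".dat"))

    # then, on the host:  acpiexec acpi-tables/*.dat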

> >> Thus it would be awesome if we had some AcpiExec-like functionality
> >> early on in the guest (for example in the form of a UEFI Shell
> >> application, or as a python tool that runs within the edk2 Python port,
> >> or even in grub).
> >>
> >> For example, assume your runtime guest OS is Windows (with its picky,
> >> closed-source ACPI interpreter); you make a change in QEMU's ACPI
> >> generator, rebuild QEMU, reboot the guest, drop to the UEFI shell to
> >> verify the change "by eye", exit the shell, and *then* continue booting
> >> Windows. (Which will hopefully not BSOD at that point, after the
> >> verification with BITS / AcpiExec etc.)
> >>
> >> So, I have three questions:
> >>
> >> (1) What is the relationship between the ACPI facility of BITS, and ACPICA?
> > 
> > BITS links in ACPICA and uses it to evaluate ACPI.  We pull in ACPICA as
> > a git submodule and build it as part of BITS.  acpi.evaluate uses the
> > ACPICA interpreter.
> 
> Awesome! :)
> 
> Another question: when you execute an AML method that does, say, IO port
> access, does the AML interpreter of ACPICA actually *perform* that IO
> port access? Because, the one that is embedded in Linux obviously does,
> and the one that is embedded in the userspace ACPICA command line
> utility "acpiexec" obviously doesn't.

You need to pass unsafe_io=True to evaluate, in which case it'll do I/O.
Otherwise, it'll ignore I/O.  (On our TODO list: ignoring but logging
I/O so we can look at the I/O accesses as part of the test.)
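
Concretely, usage looks something like this (the method path is only an
example; whether it exists depends on your DSDT):

    import acpi

    # Default: any I/O the method performs is ignored.
    acpi.evaluate("\\_SB.PCI0._STA")

    # Opt in to real port/memory access when you actually want the
    # method to poke the (virtual or physical) hardware behind it.
    acpi.evaluate("\\_SB.PCI0._STA", unsafe_io=True)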

Actually, that reminds me: we should probably fix AcpiOsWriteMemory to
do the same thing.

> I assume (and very much hope) that the IO port access *is* performed
> from BITS, simply because you developed it for physical machines, and it
> wouldn't make much sense to avoid actual hardware access that was
> implemented by the BIOS vendor for that platform.

We want to default to not performing those accesses, but we definitely
have the option to do so if you know you *want* to trigger real I/O.

> If that is the case, then this tool could become the killer ACPI tester
> for QEMU developers -- the hardware accesses in the AML methods
> generated by QEMU would actually poke QEMU devices! (Unlike the
> userspace "acpiexec" utility.) It would completely detach Linux guest
> driver development from host side / firmware development. \o/

That's exactly why we went with a pre-OS environment rather than an OS;
you don't want to undermine the OS, and you don't want your tests
affected by whatever the OS has done.

> >> (2) Is there a bit more comprehensive documentation about the ACPI
> >> module of BITS? AcpiExec and the ACPICA Debugger have quite spoiled me
> >> with their incredible documentation (documents/acpica-reference.pdf). It
> >> would be great if BITS' ACPI module had a list of commands, to see what
> >> is there to play with.
> > 
> > We don't come close to the level of documentation for ACPICA, but we do
> > have pydoc documentation for the modules in BITS, including acpi.  You
> > can run help("acpi") within BITS, or read acpi.py.  We've tried to make
> > sure all of the methods considered "API" have docstrings.
> 
> I'll have to digest this some, and play with it.

Please feel free to ask if you have any questions.  Which reminds me, I
still need to get a BITS mailing list set up.

> >> ... I apologize if tools / documentation already exist for this kind of
> >> development work; everyone please educate me then. I hope my questions
> >> make at least some sense; I realize this email isn't well organized.
> > 
> > Makes perfect sense, and thanks for your mail!  I love the idea of using
> > BITS to test qemu's own ACPI.
> 
> Thank you very much! :)
> 
> (I must say, I found the LWN article at just the right time. I intend
> to start implementing a VMGenID device for QEMU, and it's all
> ACPI-based. Here's our design for that:
> <http://thread.gmane.org/gmane.comp.emulators.qemu/357940>. I've
> already been dreading the need for a Linux guest driver, in order to
> white-box test the device & the ACPI stuff from the guest side. :))

Interesting!  Yeah, BITS should make testing that trivial.  You can read
out the identifier, snapshot and resume, and read it out again.
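
Something like this, very roughly (the device path, the ADDR package
layout and the read_phys() helper are all assumptions on my part, going
off the VM Generation ID spec and your design thread, not anything that
exists today):

    import acpi

    def read_vmgenid(read_phys):
        # Assumption: the device exposes an ADDR method returning a
        # two-element package with the low and high 32 bits of the
        # physical address of the 16-byte identifier.  "\\_SB.VGEN" is
        # a placeholder for wherever QEMU's AML puts the device.
        lo, hi = acpi.evaluate("\\_SB.VGEN.ADDR", unsafe_io=True)
        return read_phys((hi << 32) | lo, 16)

    # before = read_vmgenid(read_phys)
    # ... snapshot and resume the VM ...
    # after = read_vmgenid(read_phys)
    # assert after != before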

One request there: please make that device optional in qemu, because
some users of qemu and snapshots specifically *won't* want the OS to
know that anything has happened.

- Josh Triplett


