Re: [Qemu-devel] [PATCH] qga: implement guest-file-ioctl


From: Ladi Prosek
Subject: Re: [Qemu-devel] [PATCH] qga: implement guest-file-ioctl
Date: Mon, 6 Feb 2017 16:50:43 +0100

On Mon, Feb 6, 2017 at 4:37 PM, Denis V. Lunev <address@hidden> wrote:
> On 02/01/2017 04:41 PM, Ladi Prosek wrote:
>> On Wed, Feb 1, 2017 at 12:03 PM, Daniel P. Berrange <address@hidden> wrote:
>>> On Wed, Feb 01, 2017 at 11:50:43AM +0100, Ladi Prosek wrote:
>>>> On Wed, Feb 1, 2017 at 11:20 AM, Daniel P. Berrange <address@hidden> wrote:
>>>>> On Wed, Feb 01, 2017 at 11:06:46AM +0100, Ladi Prosek wrote:
>>>>>> Analogous to guest-file-read and guest-file-write, this commit adds
>>>>>> support for issuing IOCTLs to files in the guest. With the goal of
>>>>>> abstracting away the differences between POSIX ioctl() and Win32
>>>>>> DeviceIoControl() to provide one unified API, the schema distinguishes
>>>>>> between input and output buffer sizes (as required by Win32) and
>>>>>> allows the caller to supply either a 'buffer', a pointer to which is
>>>>>> passed to the POSIX ioctl(), or an integer argument, which is passed
>>>>>> to ioctl() directly.
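For context, guest agent commands travel as JSON over the agent's
virtio-serial channel, so a guest-file-ioctl call could look roughly like
the sketch below. The argument names are purely illustrative (the final
schema is not quoted in this thread), and the handle would come from a
prior guest-file-open:

    {"execute": "guest-file-ioctl",
     "arguments": {"handle": 1000, "request": 2237440,
                   "buf-in-b64": "", "buf-out-size": 8192}}
    <- {"return": {"buf-b64": "<8192 bytes, base64-encoded>"}}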
>>>>> What is the intended usage scenario for this?
>>>> My specific case is extracting a piece of data from Windows guests.
>>>> Guest driver exposes a file object with a well-known IOCTL code to
>>>> return a data structure from the kernel.
>>> Please provide more information about what you're trying to do.
>>>
>>> If we can understand the full details, it might suggest a different
>>> approach than exposing a generic ioctl passthrough facility.
>> The end goal is to be able to create a Windows crash dump file from a
>> running (or crashed, but running is more interesting because Windows
>> can't do that by itself) Windows VM. To do that without resorting to
>> hacks, the host application driving this needs to get the crash dump
>> header, which Windows provides via its KeInitializeCrashDumpHeader
>> kernel API.
>>
>> I believe that the most natural way to do this is to have
>> 1) a driver installed in the guest providing a way to call
>> KeInitializeCrashDumpHeader from user space
>> 2) an agent in the guest, running in user space, calling the driver
>> and passing the result back to the host
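A minimal sketch of the driver side of 1), assuming a hypothetical device
object and a hypothetical IOCTL code (neither appears in this thread),
might look like this:

    /* Hypothetical METHOD_BUFFERED IOCTL returning the crash dump header. */
    #include <ntddk.h>

    #define IOCTL_GET_CRASH_DUMP_HEADER \
        CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_READ_ACCESS)

    NTSTATUS DispatchDeviceControl(PDEVICE_OBJECT DeviceObject, PIRP Irp)
    {
        PIO_STACK_LOCATION stack = IoGetCurrentIrpStackLocation(Irp);
        NTSTATUS status = STATUS_INVALID_DEVICE_REQUEST;
        ULONG written = 0;

        UNREFERENCED_PARAMETER(DeviceObject);

        if (stack->Parameters.DeviceIoControl.IoControlCode ==
            IOCTL_GET_CRASH_DUMP_HEADER) {
            /* METHOD_BUFFERED: the output buffer is the system buffer. */
            status = KeInitializeCrashDumpHeader(
                1 /* DUMP_TYPE_FULL */,
                0 /* Flags */,
                Irp->AssociatedIrp.SystemBuffer,
                stack->Parameters.DeviceIoControl.OutputBufferLength,
                &written /* receives the header size */);
        }

        Irp->IoStatus.Status = status;
        Irp->IoStatus.Information = NT_SUCCESS(status) ? written : 0;
        IoCompleteRequest(Irp, IO_NO_INCREMENT);
        return status;
    }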
>>
>> Now 2) may as well be an existing agent, such as the QEMU guest agent,
>> and that's why I am here :)
>>
>> KeInitializeCrashDumpHeader returns an opaque byte array, which happens
>> to be 8192 bytes at the moment. My first choice for the kernel-user
>> interface in the guest is an IOCTL, because what I'm trying to get
>> across is a block, a "datagram", not a stream, and it gives me the
>> option of easily adding more functionality later by adding more IOCTL
>> codes, with the file object still representing "the driver".
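On the guest user-space side, essentially what the agent would do on the
caller's behalf, this boils down to a single DeviceIoControl call. The
device path and IOCTL code below are the same hypothetical ones as in the
driver sketch above:

    /* Hypothetical user-mode caller fetching the crash dump header. */
    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    #define IOCTL_GET_CRASH_DUMP_HEADER \
        CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_READ_ACCESS)

    int main(void)
    {
        BYTE header[8192];   /* current size of the opaque header */
        DWORD returned = 0;
        HANDLE h = CreateFileA("\\\\.\\CrashDumpHelper", GENERIC_READ, 0, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
            return 1;
        }
        if (!DeviceIoControl(h, IOCTL_GET_CRASH_DUMP_HEADER,
                             NULL, 0,                 /* no input buffer */
                             header, sizeof(header),  /* output buffer */
                             &returned, NULL)) {
            fprintf(stderr, "DeviceIoControl failed: %lu\n", GetLastError());
            CloseHandle(h);
            return 1;
        }
        printf("got %lu header bytes\n", returned);
        CloseHandle(h);
        return 0;
    }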
>>
>> I could use regular file I/O as well. I would either have to devise a
>> protocol for talking to the driver (a way of delimiting messages,
>> re-syncing the channel, etc.), or make a slight semantic shift: instead
>> of the file object representing the driver, it would represent this one
>> particular function of the driver. Opening the file and reading from it
>> until EOF would then yield the crash dump header.
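With existing guest agent commands, that read-until-EOF exchange, driven
from the host, would look roughly like this (the device path is
hypothetical; guest-file-read returns base64 data plus an eof flag):

    {"execute": "guest-file-open",
     "arguments": {"path": "\\\\.\\CrashDumpHeader", "mode": "r"}}
    <- {"return": 1000}

    {"execute": "guest-file-read", "arguments": {"handle": 1000, "count": 4096}}
    <- {"return": {"count": 4096, "buf-b64": "...", "eof": false}}
    (repeat until "eof" is true)

    {"execute": "guest-file-close", "arguments": {"handle": 1000}}
    <- {"return": {}}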
> I think this is not as good as it could be for the overall design of the
> feature.
> The problem here is that userspace starts far too late and is not
> accessible when the guest BSODs, which is exactly when we need a dump
> for analysis.
>
> Maybe it is worth pushing this header to QEMU at boot time through a virtio bus?

Yes, definitely an option. I believe that the ability to create a dump
of a live system, i.e. without crashing it, is what adds the most value
here. And that would most likely happen on an up-and-running guest, so
the difference between being able to do it after a kernel driver loads
and after a user-space service starts is not that significant.

Still, the sooner the better, for sure. I think I've seen virtio-pstore
suggested as a possible channel for pushing the header.

Thanks!

> Den


