
From: Peng Tao
Subject: Re: [Qemu-devel] [PATCH] migration: add capability to bypass the shared memory
Date: Mon, 2 Jul 2018 21:52:08 +0800

On Mon, Jul 2, 2018 at 9:10 PM, Stefan Hajnoczi <address@hidden> wrote:
> On Sat, Mar 31, 2018 at 04:45:00PM +0800, Lai Jiangshan wrote:
>> a) feature: qemu-local-migration, qemu-live-update
>> Set the mem-path on a tmpfs file and set share=on for it when
>> starting the vm. Example:
>> -object \
>> memory-backend-file,id=mem,size=128M,mem-path=/dev/shm/memory,share=on \
>> -numa node,nodeid=0,cpus=0-7,memdev=mem
>>
>> when you want to migrate the vm locally (after fixing a security bug
>> in the qemu binary, or for some other reason), you can start a new
>> qemu with the same command line plus -incoming, then migrate the
>> vm from the old qemu to the new qemu with the migration capability
>> 'bypass-shared-memory' set. The migration migrates the device state
>> *ONLY*; the memory stays the original memory backed by the tmpfs file.
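
To make that concrete, a minimal sketch of the two sides (the socket
path and the HMP usage are illustrative; only the capability name comes
from this patch):

    # start the new qemu with the same command line as the old one,
    # plus -incoming
    qemu-system-x86_64 ... \
        -object memory-backend-file,id=mem,size=128M,mem-path=/dev/shm/memory,share=on \
        -numa node,nodeid=0,cpus=0-7,memdev=mem \
        -incoming unix:/tmp/local-mig.sock

    # in the old qemu's HMP monitor: enable the capability and migrate
    (qemu) migrate_set_capability bypass-shared-memory on
    (qemu) migrate unix:/tmp/local-mig.sock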
>
> Marcelo, Andrea, Paolo: There was a more complex local migration
> approach in 2013 with fd passing and vmsplice.  They specifically
> avoided the approach proposed in this patch, but I don't remember why.
>
> The closest to an explanation I've found is this message from Marcelo:
>
>   Another possibility is to use memory that is not anonymous for guest
>   RAM, such as hugetlbfs or tmpfs.
>
>   IIRC ksm and thp have limitations wrt tmpfs.
>
> https://www.spinics.net/lists/linux-mm/msg67437.html
>
> Have the limitations been solved since then?
>
>> c) feature: vm-template, vm-fast-live-clone
>> the template vm is started as in a), and paused when the guest
>> reaches the template point (for example: the guest app is ready);
>> then the template vm's state is saved. (the qemu process of the
>> template can be killed now, because we only need the memory and the
>> device state files (in tmpfs).)
>>
>> Then we can launch one or multiple VMs based on the template vm
>> state. The new VMs are started without "share=on", so they all share
>> the initial memory from the memory file and save a lot of memory.
>> All the new VMs start from the template point, so the guest app can
>> get to work quickly.
>>
>> A new VM booted from a template vm can't become a template again;
>> if you need this unusual chained-template feature, you can write
>> a cloneable-tmpfs kernel module for it.
>>
>> The libvirt toolkit can't manage vm-templates currently; in
>> hyperhq/runv we use a qemu wrapper script to do it. I hope someone
>> adds a "libvirt-managed template" feature to libvirt.
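
For reference, the templating flow described above might look roughly
like this (the state-file path and the exec: transport are illustrative,
not taken from the patch or from runv's wrapper):

    # pause the template VM at the template point and save only the
    # device state to tmpfs
    (qemu) stop
    (qemu) migrate_set_capability bypass-shared-memory on
    (qemu) migrate "exec:cat > /dev/shm/state"

    # the template qemu can be killed now; /dev/shm/memory and
    # /dev/shm/state are all that is needed

    # launch each clone without share=on (a private, copy-on-write
    # mapping of the template memory) and restore the device state
    qemu-system-x86_64 ... \
        -object memory-backend-file,id=mem,size=128M,mem-path=/dev/shm/memory,share=off \
        -numa node,nodeid=0,cpus=0-7,memdev=mem \
        -incoming "exec:cat /dev/shm/state"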
>
> This feature has been discussed multiple times in the past, and the
> reason it's not in libvirt yet is probably that no one has wanted it
> badly enough to solve the security issues.
>
> RAM and disk contain secrets like address-space layout randomization,
> random number generator state, cryptographic keys, etc.  Both the kernel
> and userspace handle secrets, making it hard to isolate all secrets and
> wipe them when cloning.
>
Hi Stefan,

> Risks:
> 1. If one cloned VM is exploited then all other VMs are more likely to
>    be exploitable (e.g. kernel address space layout randomization).
w.r.t. KASLR, any memory deduplication technology would expose it. I
remember there are CVEs (e.g., CVE-2015-2877) specific to this kind of
attack against KSM, and the stated position was: "Basically if you
care about this attack vector, disable deduplication.
Share-until-written approaches for memory conservation among mutually
untrusting tenants are inherently detectable for information
disclosure, and can be classified as potentially misunderstood
behaviors rather than vulnerabilities." [1]
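
(For reference, that host-side switch is a one-liner; writing 2 also
un-merges pages that are already shared:)

    # stop ksmd and un-merge all currently shared pages on the host
    echo 2 > /sys/kernel/mm/ksm/run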

I think the same applies to VM templating as well. Actually, VM
templating is more useful than KSM in this regard, since we can create
a template per trusted tenant, whereas with KSM all VMs on a host are
treated equally.

[1] https://access.redhat.com/security/cve/cve-2015-2877

> 2. If you give VMs cloned from the same template to untrusted users,
>    they may be able to determine the secrets of other users' VMs.
In Kata and runv, VM templating is used carefully so that we do not
use or save any secret keys before creating the template VM. IOW, the
feature is not meant to be used blindly to create template VMs at
arbitrary stages.

>
> How are you wiping secrets and re-randomizing cloned VMs?
I think we can write some host-generated random seeds to the guest's
urandom device when cloning VMs from the same template, before handing
them to users. Is that enough, or do you think there is more to do for
re-randomizing?
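
Something like the following could do that per clone, assuming the
guest runs qemu-guest-agent (the socket path and the 64-byte seed size
are made up for illustration):

    # push 64 bytes of host entropy into one clone's /dev/urandom
    # through its qemu-guest-agent socket
    QGA=/var/run/qga-clone1.sock
    SEED=$(head -c 64 /dev/urandom | base64 -w0)

    HANDLE=$(echo '{"execute":"guest-file-open","arguments":{"path":"/dev/urandom","mode":"w"}}' \
        | socat - UNIX-CONNECT:$QGA | jq .return)

    echo "{\"execute\":\"guest-file-write\",\"arguments\":{\"handle\":$HANDLE,\"buf-b64\":\"$SEED\"}}" \
        | socat - UNIX-CONNECT:$QGA

    echo "{\"execute\":\"guest-file-close\",\"arguments\":{\"handle\":$HANDLE}}" \
        | socat - UNIX-CONNECT:$QGA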

>  Security is a
> major factor for using Kata, so it's important not to leak secrets
> between cloned VMs.
>
Yes, indeed! It is all about trade-offs, with VM templating as with
KSM. If we wanted security above everything else, we would just
disable all sharing; but there is actually no ceiling (think about
physical isolation!), so it comes down to trade-offs. With Kata, VM
templating and KSM give users options to achieve better performance
and a lower memory footprint with little sacrifice, while the security
advantage of running VM-based containers is still there.

Cheers,
Tao


