From: Yonit Halperin
Subject: Re: [Qemu-devel] [Spice-devel] viewing continuous guest virtual memory as continuous in qemu
Date: Mon, 03 Oct 2011 10:17:59 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.9) Gecko/20100921 Fedora/3.1.4-1.fc14 Thunderbird/3.1.4

On 10/02/2011 03:24 PM, Alon Levy wrote:
Hi,

  I'm trying to achieve the $subject. Some background: currently spice relies 
on a preallocated PCI BAR for both surfaces and for the VGA framebuffer + 
commands. I have been trying to get rid of the surfaces BAR. To do that I 
allocate memory in the guest and then translate it for spice-server 
consumption using cpu_physical_memory_map.
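
  A minimal sketch of that translation step, assuming a contiguous 
guest-physical range; map_surface is a hypothetical helper, while 
cpu_physical_memory_map/unmap are QEMU's real API (shown with the current 
hwaddr-based signatures; the 2011 tree spelled the type target_phys_addr_t):

#include "exec/hwaddr.h"       /* QEMU-internal header: hwaddr */
#include "exec/cpu-common.h"   /* cpu_physical_memory_map/unmap */

/* Hypothetical helper: translate a guest-physical surface range to a host
 * pointer. cpu_physical_memory_map() only yields a direct pointer when the
 * whole range lives in one contiguous host region; otherwise it comes back
 * with a shorter length, which is exactly the limitation discussed below. */
static void *map_surface(hwaddr gpa, hwaddr size)
{
    hwaddr plen = size;
    void *hva = cpu_physical_memory_map(gpa, &plen, false /* is_write */);

    if (!hva || plen < size) {
        /* Scattered or cross-region: no single host mapping exists. */
        if (hva) {
            cpu_physical_memory_unmap(hva, plen, false, 0);
        }
        return NULL;
    }
    return hva;
}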

  AFAIU this works only when the guest allocates a contiguous range of physical 
pages. That is a heavy requirement to place on the guest, and I'd like to drop 
it. Instead, I would like the guest to use a regular allocator, producing, for 
instance, two consecutive pages in virtual memory that are scattered in 
physical memory. Those two guest physical page addresses (gp1 and gp2) 
correspond to two host virtual memory addresses (hv1, hv2). I would now like 
to provide spice-server with a single virtual address p that maps to those two 
pages in sequence. I don't want to manage my own scatter-gather list; I would 
like this mapping done once, so I can use an existing library that requires a 
single pointer (for instance pixman or libGL) to do the rendering.

  Is there any way to achieve that without host kernel support, i.e. in user 
space in qemu? Or with an existing host kernel device?
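
  For reference, one purely user-space approach fits this constraint when 
guest RAM is file-backed (e.g. qemu started with -mem-path, so each guest 
page corresponds to a known offset in the backing file): reserve a window of 
address space, then overlay each scattered page into it with MAP_FIXED. A 
minimal sketch, assuming the backing fd and the two page offsets are already 
known (stitch_pages and its parameters are hypothetical):

#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

static void *stitch_pages(int ram_fd, off_t off1, off_t off2)
{
    long pg = sysconf(_SC_PAGESIZE);

    /* Reserve a two-page window of host virtual address space. */
    void *win = mmap(NULL, 2 * pg, PROT_NONE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (win == MAP_FAILED) {
        return NULL;
    }

    /* Overlay the two scattered guest pages at consecutive addresses. */
    if (mmap(win, pg, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, ram_fd, off1) == MAP_FAILED ||
        mmap((char *)win + pg, pg, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, ram_fd, off2) == MAP_FAILED) {
        munmap(win, 2 * pg);
        return NULL;
    }
    /* win now views off1 followed by off2 as one contiguous buffer. */
    return win;
}

Writes through the stitched window and through the original guest mapping hit 
the same backing pages, so no copying is needed; the cost is one extra VMA per 
stitched page. (Linux's remap_file_pages() offered a similar rearrangement 
within a single VMA, but it has since been deprecated.)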

  I'd appreciate any help,

Alon

Hi,
won't there be overhead when rendering to a non-contiguous surface? Will it be worthwhile compared to not creating the surface at all?

BTW, we should test whether the split into vram (surfaces) and devram (commands and others) is more efficient than having one region. Even if it is, we can remove the split and instead give surfaces higher allocation priority on a part of the PCI BAR. In any case, by default we can try allocating surfaces in guest RAM, and fall back to the PCI BAR if that fails; a sketch of that fallback order follows.
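
A tiny sketch of the fallback order just described, with both allocators as 
hypothetical stand-ins for whatever the driver actually provides:

#include <stddef.h>

void *guest_ram_alloc(size_t size);  /* hypothetical: allocate in guest RAM */
void *pci_bar_alloc(size_t size);    /* hypothetical: allocate from the BAR */

/* Prefer guest RAM; fall back to the PCI BAR only when that fails. */
static void *alloc_surface(size_t size)
{
    void *p = guest_ram_alloc(size);
    return p ? p : pci_bar_alloc(size);
}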

Cheers,
Yonit.



