
Re: [Qemu-devel] The reason behind block linking constraint?


From: Max Filippov
Subject: Re: [Qemu-devel] The reason behind block linking constraint?
Date: Mon, 26 Sep 2011 15:41:00 +0400

>  Sorry, I have to be sure whether what you're talking about is guest or host.
> Let me try.
>
>> Well, my explanation sucks. Let's put it another way, more precisely:
>> - you have two pieces of code in different pages, and one of them jumps to the
>> other;
>
>  guest code in different guest pages.

Right.

>> - and you have two TBs, tb1 for the first piece and tb2 for the second;
>
>  tb1 and tb2 are in the code cache (host binary).

Right.

>> - and you link them and there's a direct jump from tb1 to tb2;
>> - now you change the mapping of the code page that contains the second piece of
>> code;
>
>  change the mapping of the guest page which contains the second piece of
> the guest binary. Mapping the guest page to what? Host virtual address?

Mapping of guest physical memory to guest virtual memory, i.e. a change in
the guest TLB. If we're talking about an i386 guest, that's a change in the
page table + a TLB flush, either for the changed page or for the whole TLB.
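
To make that concrete, here is a toy model of such a software-managed TLB
(my own sketch with made-up names like toy_flush_page -- not actual QEMU
source): rewriting the page-table entry only matters once the stale cached
translation is dropped, either for that single page or by flushing everything.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_BITS   12
#define PAGE_MASK   (~((1u << PAGE_BITS) - 1u))
#define TLB_ENTRIES 64   /* toy size, indexed by the low bits of the VPN */

typedef struct {
    uint32_t vaddr;   /* page-aligned guest virtual address  */
    uint32_t paddr;   /* page-aligned guest physical address */
    int      valid;
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

static unsigned tlb_index(uint32_t vaddr)
{
    return (vaddr >> PAGE_BITS) & (TLB_ENTRIES - 1);
}

/* Drop the cached translation for one page (cf. tlb_flush_page). */
static void toy_flush_page(uint32_t vaddr)
{
    tlb_entry_t *e = &tlb[tlb_index(vaddr)];
    if (e->valid && e->vaddr == (vaddr & PAGE_MASK))
        e->valid = 0;
}

/* Drop every cached translation (cf. tlb_flush, e.g. on a CR3 reload). */
static void toy_flush_all(void)
{
    memset(tlb, 0, sizeof(tlb));
}

int main(void)
{
    /* Guest maps virtual page 0x1000 to physical page 0x7000. */
    tlb[tlb_index(0x1000)] = (tlb_entry_t){ 0x1000, 0x7000, 1 };

    /* Guest rewrites the page-table entry for 0x1000; the stale
     * translation has to go before that page is touched again. */
    toy_flush_page(0x1000);
    printf("valid after flush: %d\n", tlb[tlb_index(0x1000)].valid);

    toy_flush_all();
    return 0;
}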

>> - after that there's other code (or no code at all) at the place where the
>> second piece of code used to be;
>> - but the jump to tb2 still remains in tb1.
>
>  there's other code (or no code at all) in the guest page which
> used to contain the second piece of the guest binary.

At the virtual addresses of that guest page, right.

>  So if we execute tb2, it might make wrong memory accesses through
> the mapping of the guest page. Am I right?

If we execute tb2, that's not what the guest would expect us to do, at least.

>> >   I assume that "all TBs in that page will be gone" means QEMU will
>> > invalidate those TBs.
>>
>> No, it won't. I should have said "all code in that page will be gone", sorry for
>> the confusion.
>
>  O.K., here a TB is in the code cache, the page is a guest page, and the code
> is guest binary. So the second piece of the guest binary in that guest page will
> be gone, but the TBs related to the guest page still remain in the code cache.
> No invalidation here.

Right.

>> > If not, I think tb_find_fast will return tb2, which should not be executed.
>>
>> It won't either. tb_find_fast searches for the TB this way:
>>
>>   tb = env->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
>>
>> but 'page mapping change' implies a TLB flush, at least for that page.
>> Both tlb_flush and tlb_flush_page will clear env->tb_jmp_cache, and
>> tb_find_fast will have to call tb_find_slow.
>
>  Yes, I see QEMU uses memset to clear env->tb_jmp_cache while doing
> tlb_flush.
>
>> Exactly. The exception will be raised inside the guest and the guest will 
>> execute its page fault handler or whatever.
>
>  Thanks, Max. Although I still don't totally understand how softmmu
> is done in QEMU, the whole picture is much clearer to me now. And
> about the page-boundary restraint on (direct) block linking,
>
>  if ((pc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) ||
>      (pc & TARGET_PAGE_MASK) == ((s->pc - 1) & TARGET_PAGE_MASK))  {
>
> I guess this is because it'd be too complicated to track all the links
> that jump to this (guest) page. A guest page might contain hundreds of TBs.
> If the guest page is gone, then it's not an easy thing to do the unlinking.
> Does this make sense?

I'm not familiar with the motivation for the current implementation.
I guess that tracking otherwise linkable cross-page jumps just isn't
worth it, because such jumps are rare.
I don't have any numbers, but I think that QEMU profiling could be
used to get them.
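
Going back to the tb_find_fast part quoted above, the interplay between the
jump cache and the flush can be sketched like this (again a toy model with
made-up names such as toy_tb_find_fast -- not the real QEMU source): the
flush wipes the cache, the fast lookup misses, and the slow path has to
re-resolve the guest pc through the new mapping instead of blindly reusing
the old TB.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define JMP_CACHE_BITS 10
#define JMP_CACHE_SIZE (1u << JMP_CACHE_BITS)

/* Stand-in for a translated block: just remember which guest pc it covers. */
typedef struct TB {
    uint32_t pc;
} TB;

/* Direct-mapped cache from guest pc to TB, like env->tb_jmp_cache. */
static TB *jmp_cache[JMP_CACHE_SIZE];

static unsigned jmp_cache_hash(uint32_t pc)
{
    return (pc >> 2) & (JMP_CACHE_SIZE - 1);
}

/* Slow path: real QEMU looks the pc up in the TB hash table (translating a
 * new block if needed); here we just fabricate one and refill the cache. */
static TB *toy_tb_find_slow(uint32_t pc)
{
    static TB storage[16];
    static int next;
    TB *tb = &storage[next++ % 16];
    tb->pc = pc;
    jmp_cache[jmp_cache_hash(pc)] = tb;
    return tb;
}

/* Fast path first, the same pattern as the line quoted above. */
static TB *toy_tb_find_fast(uint32_t pc)
{
    TB *tb = jmp_cache[jmp_cache_hash(pc)];
    if (tb && tb->pc == pc)
        return tb;
    return toy_tb_find_slow(pc);
}

/* What the flush does to the jump cache: wipe it, forcing the slow path. */
static void toy_tlb_flush(void)
{
    memset(jmp_cache, 0, sizeof(jmp_cache));
}

int main(void)
{
    TB *tb2 = toy_tb_find_fast(0x2000);               /* slow path, then cached */
    printf("cache hit:    %d\n", tb2 == toy_tb_find_fast(0x2000));

    toy_tlb_flush();                                   /* mapping changed */
    printf("still cached: %d\n", jmp_cache[jmp_cache_hash(0x2000)] != NULL);
    /* The next lookup misses and goes through the slow path again, so the
     * guest pc is re-resolved through the new mapping. */
    return 0;
}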

-- 
Thanks.
-- Max


