From: Heng Yin
Subject: [Qemu-devel] On-demand taint tracking
Date: Fri, 23 Feb 2007 18:02:46 -0500
User-agent: Thunderbird 1.5.0.9 (X11/20070104)

Hi Qemu developers,

I have implemented a whole-system taint tracking system on Qemu, but the performance overhead is significant. Now I want to optimize it by performing on-demand taint tracking. The idea is that Qemu runs in virtualization mode most of the time (running with kqemu) and switches to emulation mode to propagate taint information when necessary. When no taint information has been propagated for a while, I put Qemu back into virtualization mode. Before doing so, I disable the tainted pages by clearing their PG_PRESENT flags, so that as soon as kqemu accesses one of these pages, the page fault handler is called and Qemu gets control again.
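
To decide when to switch back, I keep a simple counter in the taint propagation path. The sketch below shows roughly what I mean (taint_propagate_hook, IDLE_THRESHOLD and the counter are just placeholder names for my own bookkeeping, not Qemu code):

/* Placeholder sketch: count how long taint propagation has been idle,
   and switch to virtualization mode once it passes a threshold. */
#define IDLE_THRESHOLD 10000   /* placeholder value */

static int taint_idle_count;

void taint_propagate_hook(int propagated_taint)
{
  if (propagated_taint) {
    taint_idle_count = 0;
  } else if (++taint_idle_count > IDLE_THRESHOLD) {
    taint_idle_count = 0;
    switch_e2v();   /* disable tainted pages, enter virtualization mode */
  }
}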

I have written something for this, but it does not work: the guest OS crashes immediately when I put Qemu into virtualization mode, and kqemu does not raise any page fault before the guest crashes.

I have listed part of my code below. Can anyone give me a hint as to what I am doing wrong here?

/* This function disables all the tainted pages and then puts Qemu
   into virtualization mode. */
int switch_e2v(void)
{
  int i;
  uint32_t pte;

  // walk all physical page frames and disable the tainted ones
  for (i = 0; i < ram_size / 4096; i++) {
    page_table_t *page = tc_page_table[i];
    if (!page || !page->pte_addr) continue;

    // this page is tainted: fetch its PTE and clear the
    // PG_PRESENT flag
    pte = ldl_phys(page->pte_addr);
    pte &= ~PG_PRESENT_MASK;

    // set the avail bits to all 1s, so that this page can be told
    // apart from pages that are genuinely not present
    pte |= 0xe00;
    stl_phys_notdirty(page->pte_addr, pte);
  }
  emulation_mode = 0; // indicate we are entering virtualization mode
  return 0;
}
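
For reference, the reverse direction I have in mind looks roughly like this (only a sketch: switch_v2e and the way the fault handler locates the PTE of the faulting address are placeholders):

/* Sketch: when kqemu faults on a page whose PTE carries the 0xe00
   marker in the avail bits, make the page present again and switch
   back to emulation mode (assuming the guest otherwise leaves the
   avail bits at zero). */
int switch_v2e(target_phys_addr_t pte_addr)
{
  uint32_t pte = ldl_phys(pte_addr);

  // not one of the taint-disabled pages: let the normal fault path run
  if ((pte & (0xe00 | PG_PRESENT_MASK)) != 0xe00)
    return 0;

  // drop the marker and restore PG_PRESENT
  pte &= ~0xe00;
  pte |= PG_PRESENT_MASK;
  stl_phys_notdirty(pte_addr, pte);

  emulation_mode = 1;   // propagate taint in emulation mode again
  return 1;
}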

Any comments are highly appreciated!
Thanks a lot,
Heng



