Re: [Qemu-devel] Network Performance between Win Host and Linux
From: Leonardo E. Reiter
Subject: Re: [Qemu-devel] Network Performance between Win Host and Linux
Date: Tue, 11 Apr 2006 16:40:53 -0400
User-agent: Mozilla Thunderbird 1.0.7 (X11/20051013)
Hi Ken,
I'm attaching a pretty old patch I made (from the 0.7.1 days) that did
a quick and dirty merge of the select calls. It's not clean and it will
need adapting to 0.8.0... but I figure you could draw some quick hints
from it on how to merge the two. Basically it fills the select bitmaps
while it walks through the fd's the first time, then calls select
instead of poll. It also has slirp fill in its own bits (fd's) before
calling select. So everything is condensed into one select call.
Do what you want with the code - like I said, it's messy and old. But
maybe you can at least use it to quickly test your hypothesis. I'd be
interested in learning about any benchmarks you come up with if you
merge the select+poll. Also, it may not be valid at all on Windows
hosts since there is a question about select() being interrupted
properly on those hosts - it should work on Linux/BSD.
Regards,
Leo Reiter
P.S. this patch should be applied with -p1, not -p0 like my newer
patches are applied. Sorry for that - like I said, it's quite old.
Kenneth Duda wrote:
Paul, thanks for the note.
In my case, the guest CPU is idle. The host CPU utilization is only 5
or 10 percent when running "find / -print > /dev/null" on the guest.
So I don't think guest interrupt latency is the issue for me in this
case.
My first guess is that qemu is asleep when the NFS response arrives on
the slirp socket, and stays asleep for several milliseconds before
deciding to check if anything has shown up via slirp. The problem is
that vl.c's main_loop_wait() has separate calls to select() for slirp
versus non-slirp fd's. I think this is the problem because strace
reveals qemu blocking for several milliseconds at a time in select(),
waking up with a SIGALRM, and then polling slirp and finding stuff to
do there. These select calls don't appear hard to integrate, and the
author seems to feel this would be a good idea anyway; from vl.c:
#if defined(CONFIG_SLIRP)
/* XXX: merge with the previous select() */
if (slirp_inited) {
I will take a swing at this first. Please let me know if there's
anything I should be aware of.
Thanks,
-Ken
--
Leonardo E. Reiter
Vice President of Product Development, CTO
Win4Lin, Inc.
Virtual Computing from Desktop to Data Center
Main: +1 512 339 7979
Fax: +1 512 532 6501
http://www.win4lin.com
--- qemu/vl.c 2005-05-11 17:10:02.000000000 -0400
+++ qemu-select/vl.c 2005-05-11 17:13:24.000000000 -0400
@@ -2598,51 +2598,85 @@
void main_loop_wait(int timeout)
{
#ifndef _WIN32
- struct pollfd ufds[MAX_IO_HANDLERS + 1], *pf;
IOHandlerRecord *ioh, *ioh_next;
uint8_t buf[4096];
int n, max_size;
#endif
int ret;
+#if defined(CONFIG_SLIRP) || !defined(_WIN32)
+ fd_set rfds, wfds, xfds;
+ int nfds;
+ struct timeval tv;
+#endif
+#if defined(CONFIG_SLIRP)
+ int slirp_nfds;
+#endif
#ifdef _WIN32
if (timeout > 0)
Sleep(timeout);
+
+#if defined(CONFIG_SLIRP)
+ /* XXX: merge with poll() */
+ if (slirp_inited) {
+
+ nfds = -1;
+ FD_ZERO(&rfds);
+ FD_ZERO(&wfds);
+ FD_ZERO(&xfds);
+ slirp_select_fill(&nfds, &rfds, &wfds, &xfds);
+ tv.tv_sec = 0;
+ tv.tv_usec = 0;
+ ret = select(nfds + 1, &rfds, &wfds, &xfds, &tv);
+ if (ret >= 0) {
+ slirp_select_poll(&rfds, &wfds, &xfds);
+ }
+ }
+#endif
#else
/* poll any events */
/* XXX: separate device handlers from system ones */
- pf = ufds;
+ FD_ZERO(&rfds);
+ FD_ZERO(&wfds);
+ FD_ZERO(&xfds);
+ nfds = -1;
for(ioh = first_io_handler; ioh != NULL; ioh = ioh->next) {
if (!ioh->fd_can_read) {
+ FD_SET(ioh->fd, &rfds);
max_size = 0;
- pf->fd = ioh->fd;
- pf->events = POLLIN;
- ioh->ufd = pf;
- pf++;
+ if (ioh->fd > nfds)
+ nfds = ioh->fd;
} else {
max_size = ioh->fd_can_read(ioh->opaque);
if (max_size > 0) {
if (max_size > sizeof(buf))
max_size = sizeof(buf);
- pf->fd = ioh->fd;
- pf->events = POLLIN;
- ioh->ufd = pf;
- pf++;
- } else {
- ioh->ufd = NULL;
+ FD_SET(ioh->fd, &rfds);
+ if (ioh->fd > nfds)
+ nfds = ioh->fd;
}
}
ioh->max_size = max_size;
}
+
+#if defined(CONFIG_SLIRP)
+ if (slirp_inited) {
+ slirp_nfds = -1;
+ slirp_select_fill(&slirp_nfds, &rfds, &wfds, &xfds);
+ if (slirp_nfds > nfds)
+ nfds = slirp_nfds;
+ }
+#endif /* CONFIG_SLIRP */
+
+ tv.tv_sec = 0;
+ tv.tv_usec = timeout * 1000;
+ ret = select(nfds + 1, &rfds, &wfds, &xfds, &tv);
- ret = poll(ufds, pf - ufds, timeout);
if (ret > 0) {
/* XXX: better handling of removal */
for(ioh = first_io_handler; ioh != NULL; ioh = ioh_next) {
ioh_next = ioh->next;
- pf = ioh->ufd;
- if (pf) {
- if (pf->revents & POLLIN) {
+ if (FD_ISSET(ioh->fd, &rfds)) {
if (ioh->max_size == 0) {
/* just a read event */
ioh->fd_read(ioh->opaque, NULL, 0);
@@ -2654,31 +2688,16 @@
ioh->fd_read(ioh->opaque, NULL, -errno);
}
}
- }
- }
+ }
}
- }
-#endif /* !defined(_WIN32) */
-#if defined(CONFIG_SLIRP)
- /* XXX: merge with poll() */
- if (slirp_inited) {
- fd_set rfds, wfds, xfds;
- int nfds;
- struct timeval tv;
- nfds = -1;
- FD_ZERO(&rfds);
- FD_ZERO(&wfds);
- FD_ZERO(&xfds);
- slirp_select_fill(&nfds, &rfds, &wfds, &xfds);
- tv.tv_sec = 0;
- tv.tv_usec = 0;
- ret = select(nfds + 1, &rfds, &wfds, &xfds, &tv);
- if (ret >= 0) {
+#if defined(CONFIG_SLIRP)
+ if (slirp_inited)
slirp_select_poll(&rfds, &wfds, &xfds);
- }
}
-#endif
+#endif /* defined(CONFIG_SLIRP) */
+
+#endif /* !defined(_WIN32) */
if (vm_running) {
qemu_run_timers(&active_timers[QEMU_TIMER_VIRTUAL],