qemu-discuss
From: Mike Lovell
Subject: Re: [Qemu-discuss] Running several ARM VMs with a multicast-based VLAN results in extremely slow forwarded connections
Date: Wed, 03 Oct 2012 10:19:22 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20120827 Thunderbird/15.0

On 10/03/2012 12:24 AM, Alex Rønne Petersen wrote:
Hi folks,

I'm using a script like this:

#!/usr/bin/env bash
qemu-system-arm \
     -machine versatilepb \
     -kernel vmlinuz-3.2.0-3-versatile \
     -hda hda$1.img \
     -initrd initrd.img-3.2.0-3-versatile \
     -append "root=/dev/sda1" \
     -m 640 \
     -k da \
     -localtime \
     -net nic,vlan=0,macaddr=52:54:00:12:34:$((55 + $1)) \
     -net user,vlan=0 \
     -net socket,mcast=230.0.0.42:6045 \
     -redir tcp:$((6049 + $1))::23 \
     -vnc :$((0 + $1))

(The first argument is simply a VM identifier.)

Given this script, I start up 6 VMs. They're able to see each other
and seem to communicate just fine and very responsively.

However, if I try to SSH into any of the machines (note the -redir
argument which sets up the forwarding), the connection is *extremely*
slow. It can literally take 40+ seconds for a command to go through.
This happens regardless of whether I SSH from outside or inside the
host machine. Now, if I remove the multicast line entirely (thus of
course having no connectivity between VMs), SSH connections into each
VM are perfectly responsive. VNC connections to the QEMU instances are
responsive regardless of network settings, FWIW.

The host system is Ubuntu 12.04 (x86_64) running QEMU 1.0.50.

Does anyone know what's going awry here?

Thanks in advance,
Alex


I'm pretty sure the problem is the combination of the user and mcast socket networks on multiple VMs. The mcast socket connects all of the VMs together, but it also connects all of the user network backends together, as if they were all plugged into an old Ethernet hub where packets from one port are flooded to every other. Each user network backend answers at 10.0.2.2 with the same MAC address, and that address is the default gateway for the VMs, so you have several user networks competing for the same address and getting confused.
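
Schematically, the mcast socket wires things up something like this (each VM brings its own user backend, and every one of them claims the same gateway address):

    VM 1 -- user backend (10.0.2.2) --+
    VM 2 -- user backend (10.0.2.2) --+--- mcast "hub" 230.0.0.42:6045
     ...                              |
    VM 6 -- user backend (10.0.2.2) --+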

There are two easy ways to change this. The first is to put a '-net user' option on the command line of only one QEMU process. That creates a single user network, and the socket network connects all of the VMs to it. I haven't tried this myself, so I'm not entirely sure the user network will do DHCP for multiple VMs, but you can give it a try.
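
As an untested sketch of that first option (keeping the rest of your arguments), the script could look something like this; the DHCP behavior and the port forwards are exactly the parts I haven't verified:

#!/usr/bin/env bash
# $1 is the VM identifier, as in the original script.
net="-net nic,vlan=0,macaddr=52:54:00:12:34:$((55 + $1)) -net socket,mcast=230.0.0.42:6045,vlan=0"
if [ "$1" -eq 0 ]; then
    # only VM 0 supplies the shared user backend and the forwarded port;
    # forwarding to the other guests would need their DHCP-assigned
    # addresses spelled out, e.g. -redir tcp:6050:10.0.2.16:23
    net="$net -net user,vlan=0 -redir tcp:6049::23"
fi
qemu-system-arm \
     -machine versatilepb \
     -kernel vmlinuz-3.2.0-3-versatile \
     -hda hda$1.img \
     -initrd initrd.img-3.2.0-3-versatile \
     -append "root=/dev/sda1" \
     -m 640 \
     -k da \
     -localtime \
     $net \
     -vnc :$1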

The second option is to give each guest two NICs: one on the user network and one on the mcast socket network. Something like '-net nic,vlan=0,macaddr=52:54:00:12:34:$((55 + $1)) -net user,vlan=0 -net nic,vlan=1,macaddr=52:54:00:12:35:$((55 + $1)) -net socket,mcast=230.0.0.42:6045,vlan=1' would have that effect. Each guest then sees two network interfaces: eth0 for the default route to the outside world, and eth1 for talking between the VMs (which you'll need to configure inside the guest).
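
Put back into the original script, that second option would look roughly like this (untested sketch; the -net options are exactly the ones above):

#!/usr/bin/env bash
# $1 is the VM identifier, as in the original script.
qemu-system-arm \
     -machine versatilepb \
     -kernel vmlinuz-3.2.0-3-versatile \
     -hda hda$1.img \
     -initrd initrd.img-3.2.0-3-versatile \
     -append "root=/dev/sda1" \
     -m 640 \
     -k da \
     -localtime \
     -net nic,vlan=0,macaddr=52:54:00:12:34:$((55 + $1)) \
     -net user,vlan=0 \
     -net nic,vlan=1,macaddr=52:54:00:12:35:$((55 + $1)) \
     -net socket,mcast=230.0.0.42:6045,vlan=1 \
     -redir tcp:$((6049 + $1))::23 \
     -vnc :$1

Inside each guest, eth1 then needs a static address on a private range of your choosing; on a Debian guest that could be something like this in /etc/network/interfaces (the 192.168.42.x addresses are just an illustration):

auto eth1
iface eth1 inet static
    address 192.168.42.11    # e.g. 10 + the VM id, unique per guest
    netmask 255.255.255.0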

Hope that helps.

mike


