
[Qemu-discuss] use veth device with qemu


From: Corin Langosch
Subject: [Qemu-discuss] use veth device with qemu
Date: Sat, 26 Sep 2015 10:44:32 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.2.0

Hi guys,

I'd like to run each qemu in its own netns, while still giving it full/transparent network access.
I can connect qemu to the network by creating another bridge inside the guest netns with the veth
peer and the qemu tap device, like this:

ip link add qemu1-h type veth peer name qemu1-g
brctl addif br0 qemu1-h
ip netns add qemu1
ip link set qemu1-g netns qemu1
ip netns exec qemu1 brctl addbr br0
ip netns exec qemu1 brctl addif br0 qemu1-g
ip netns exec qemu1 ip tuntap add tap0 mode tap
ip netns exec qemu1 brctl addif br0 tap0
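
(The new links also need bringing up before any traffic flows, something like:)

ip link set qemu1-h up
ip netns exec qemu1 ip link set br0 up
ip netns exec qemu1 ip link set qemu1-g up
ip netns exec qemu1 ip link set tap0 up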

ip netns exec qemu1 /opt/qemu/current/bin/qemu-system-x86_64 -enable-kvm -m 1024 \
    -netdev tap,id=netdev1,ifname=tap0,script=,downscript= \
    -device virtio-net-pci,id=nic1,addr=0x0a,mac=02:d6:c0:2c:ab:a1,netdev=netdev1

It works, but is there an easier (and probably more performant) solution? One that avoids creating
another bridge in each qemu netns and instead uses the veth peer with qemu directly?
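
For example, I've been wondering whether a macvtap device stacked directly on the veth peer could
replace the inner bridge + tap. A rough, untested sketch (the macvtap0 name, passthru mode and the
fd-passing details are just guesses on my part):

# untested: macvtap in passthru mode directly on the veth peer inside the netns
ip netns exec qemu1 ip link add link qemu1-g name macvtap0 type macvtap mode passthru
ip netns exec qemu1 ip link set macvtap0 address 02:d6:c0:2c:ab:a1 up

# hand the macvtap char device to qemu as an fd (N = ifindex of macvtap0;
# /dev/tapN may need creating by hand if udev doesn't pick it up)
tapidx=$(ip netns exec qemu1 cat /sys/class/net/macvtap0/ifindex)
ip netns exec qemu1 /opt/qemu/current/bin/qemu-system-x86_64 -enable-kvm -m 1024 \
    -netdev tap,id=netdev1,fd=3 \
    -device virtio-net-pci,id=nic1,addr=0x0a,mac=02:d6:c0:2c:ab:a1,netdev=netdev1 \
    3<>"/dev/tap$tapidx"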

Background information: I'm running many qemu guests, each with its own tap device on the host for
networking. For firewalling of the guests I use iptables on the host with connection tracking
enabled (I cannot do the firewalling inside the guests). However, a single (very busy) guest can
overflow the conntrack table on the host. As this table is shared among all guests (and the host),
this can render the whole host and all guests unreachable, because the host starts dropping
packets/connections. I hope (does anybody know?) that conntrack uses separate data structures for
each netns, so that putting each guest in its own netns would keep it from overflowing conntrack
for the host and the other guests.
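
If anyone wants to double-check this, comparing the table from inside and outside a netns should
show whether the entries really are kept separately (assuming the conntrack tool is installed):

conntrack -L                                             # entries tracked in the host netns
ip netns exec qemu1 conntrack -L                         # entries tracked in the guest netns
cat /proc/sys/net/netfilter/nf_conntrack_count
ip netns exec qemu1 cat /proc/sys/net/netfilter/nf_conntrack_count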

Other suggestions for this problem are welcome.

Cheers
Corin


