
Re: [libvirt-users] Networking with qemu/kvm+libvirt



On 02/08/2016 04:20 PM, Andre Goree wrote:
On 01/11/2016 3:05 pm, Laine Stump wrote:
On 01/11/2016 02:25 PM, Andre Goree wrote:

I have some questions regarding the way that networking is handled via qemu/kvm+libvirt -- my apologies in advance if this is not the proper mailing list for such a question.


I am trying to determine how exactly I can manipulate traffic from a _guest's_ NIC using iptables on the _host_.

It depends on which type of networking you are using.

1) If your guest is using a macvtap device to connect to the outside,
then iptables processing isn't done on the traffic. I saw something
a while back about getting that limitation removed from macvtap in
the kernel, but I don't remember the current status.

2) If your guest is using a standard tap device that is attached to an
Open vSwitch bridge, then iptables processing isn't done - OVS has
its own packet-filtering mechanism (that's as much as I know about
it). Note that OpenStack's networking uses OVS, but sets up a separate
Linux host bridge for each guest interface and puts it in between
the guest's tap device and the OVS bridge, at least partly so that
iptables filtering can be done on the guest traffic.

3) If your guest is using a standard tap device that is attached to a
Linux host bridge, then all the traffic to/from the guest will be
processed by iptables and ebtables on the host. libvirt has a
subsystem that can help you create filtering rules that will be
applied to the guest interfaces *on the host*:


  https://libvirt.org/formatnwfilter.html
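
For example, attaching libvirt's predefined "clean-traffic" filter to a
guest interface looks like this (a sketch; the bridge name br0 is
illustrative):

  <interface type='bridge'>
    <source bridge='br0'/>
    <filterref filter='clean-traffic'/>
  </interface>

Custom filters can be defined from the same XML format with "virsh
nwfilter-define".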

On the host, there is a bridged virtual NIC that corresponds to the guest's NIC. That interface has no IP configured on it on the host; however, within the VM itself the IP is configured and everything works as expected.

During my testing, I've seemingly determined that traffic from the VM does NOT traverse iptables on the host, yet I _can_ in fact see the traffic via tcpdump on the host. This seems odd to me, unless the traffic is forwarded inside the kernel at a layer that never actually reaches iptables. I've gone as far as trying to log, via iptables, any and all traffic traversing the guest's interface on the host, but to no avail (iptables does not see any traffic from the guest's NIC on the host).
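
(One likely explanation for the tcpdump-but-no-iptables behavior: on a
Linux host bridge, forwarded frames are only handed to iptables when
bridge-netfilter is enabled. A quick check, assuming a kernel recent
enough (>= 3.18) that this lives in the br_netfilter module:)

  # is bridged traffic being passed to iptables at all?
  sysctl net.bridge.bridge-nf-call-iptables
  # if the sysctl doesn't exist, load the module first
  modprobe br_netfilter
  # enable iptables processing of bridged frames
  sysctl -w net.bridge.bridge-nf-call-iptables=1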

Is this the way it's supposed to work? And if so, is there any way I can do IP/port redirection silently on the _host_?

libvirt's "default" network does that for traffic outbound from the
guest. For traffic inbound to a guest connected to libvirt's default
network (or any other Linux host bridge), you can add a DNAT rule.
Here is an example:

http://wiki.libvirt.org/page/Networking#Forwarding_Incoming_Connections
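
A sketch of the kind of rule that page describes, assuming the guest is
at 192.168.122.10 on the default network (addresses and ports here are
illustrative):

  # forward connections arriving on host port 8080 to the guest's port 80
  iptables -t nat -A PREROUTING -p tcp --dport 8080 \
      -j DNAT --to-destination 192.168.122.10:80
  # allow the forwarded traffic through the FORWARD chain
  iptables -I FORWARD -p tcp -d 192.168.122.10 --dport 80 -j ACCEPT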

You may also find this article useful:

   https://libvirt.org/firewall.html

Thank you again for pointing me in the right direction; I definitely have a much better overall understanding of how libvirt networking works, along with the amazingly awesome tool nwfilter. However, I'm still having some trouble. I've been poring through the documentation and can't seem to figure out how exactly I'd create a rule to forward packets destined for one IP to another, different IP.

nwfilter doesn't support that.


My confusion stems from trying to determine how exactly to produce a forwarding rule given the available "actions" and "directions" within nwfilter rules.

The available "actions" are "drop," "reject," "accept," "return," and "continue." With my knowledge of iptables in mind, I would also expect some sort of 'target' that I could point nwfilter at to do DNAT/SNAT, etc. From what I gather from the docs, nwfilter is only capable of fairly simple filtering (e.g., allowing or disallowing specific types of traffic), versus routing, which is what I'd like to do.
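
(For illustration, an nwfilter rule built from those actions looks like
the sketch below - the filter name and port are hypothetical. Note that
none of the actions rewrites addresses the way a DNAT/SNAT target
would:)

  <filter name='drop-telnet' chain='ipv4'>
    <rule action='drop' direction='out' priority='500'>
      <tcp dstportstart='23'/>
    </rule>
  </filter>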

Routing is not NAT, and NAT is not routing. DNAT and SNAT rules change the IP addresses in a packet, but they do not themselves make any routing decisions; the IP routing engine determines the best "next hop" for a packet based on its (possibly rewritten) destination address.
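
To make that concrete (addresses illustrative): a DNAT rule only
rewrites the destination address; where the rewritten packet goes next
is decided by the routing table:

  # rewrite the destination; no forwarding decision is made here
  iptables -t nat -A PREROUTING -d 203.0.113.5 -p tcp --dport 80 \
      -j DNAT --to-destination 192.168.122.10:80
  # the routing engine then picks the next hop for the new destination
  ip route get 192.168.122.10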

However, via OpenStack, using a magic IP to configure new instances (they're given an IP that allows them to connect to a metadata server sitting on the network) does indeed work, so I know it's just something simple I'm either missing or not understanding.

The purpose of all this is to kinda mimic OpenStack's magic IP setup without using OpenStack itself, as I have my own qemu/kvm+libvirt platform that precludes me from using a different one.

I'm unfamiliar with OpenStack's "magic IP", so I'm not sure exactly what to recommend.
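
(For reference, OpenStack's "magic IP" is the link-local metadata
address 169.254.169.254. A minimal sketch of mimicking it with DNAT on
the host, assuming the guests sit behind libvirt's default network on
virbr0 and a metadata service listens on the host at 192.168.122.1:8775;
the bridge name, address, and port are all assumptions:)

  # redirect guest requests for the metadata IP to a service on the host
  iptables -t nat -A PREROUTING -i virbr0 -d 169.254.169.254 \
      -p tcp --dport 80 -j DNAT --to-destination 192.168.122.1:8775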

(also, I'm not sure how your mail client managed to get the "justsendmailnothingelse" email added to a Cc, but that address is not intended for me to receive mail; it is used only to *send* mail (and nothing else :-), that's why the From: is set to my canonical address)

