I’m running Cisco Modeling Labs (CML), which uses Ubuntu and QEMU under the hood. CML lets you place external connectors in a lab that bridge virtual lab devices to physical ones. These external connectors are Linux bridges running on the CML controller hypervisor.

In my case the controller is itself virtualized under ESXi and is connected to a distributed vSwitch, which is in turn connected to a physical switch and the rest of the lab.

Now, out of the box the connectors work fine: you connect a virtual node to one and it shows up on your physical network. Unfortunately, you can’t run a trunk between a physical and a virtual lab node, because the path between them is itself a trunk, so the infrastructure’s own tagging gets in the way of the lab’s tags.

After some research, I came across Q-in-Q, a.k.a. dot1q tunneling, a.k.a. 802.1ad, a.k.a. nested VLANs. The physical infrastructure (also Cisco) supports Q-in-Q: you enter switchport mode dot1q-tunnel in interface config mode, then assign the port to an outer VLAN with switchport access vlan <vlan-number>. All traffic entering that port, tagged or not, gets the outer VLAN tag pushed onto it, and any tag already present becomes the inner tag. The frame is switched according to the outer tag. Once the frame reaches the other side of the tunnel, the outer tag is stripped and the destination device sees only the inner VLAN tag.
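For concreteness, the physical-side configuration looks something like this (the interface names and outer VLAN 100 are just placeholders for my setup):

    ! Port facing the physical lab device: everything entering it gets outer VLAN 100
    interface GigabitEthernet1/0/10
     switchport access vlan 100
     switchport mode dot1q-tunnel
    !
    ! Uplink toward the rest of the path: only needs to trunk the outer VLAN
    interface GigabitEthernet1/0/24
     switchport mode trunk
     switchport trunk allowed vlan 100
    !
    ! Often recommended globally for Q-in-Q so native-VLAN frames keep their tag
    vlan dot1q tag native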

While this is easy enough to set up in IOS, I’m having no luck doing the same with a Linux bridge. Linux bridges do support 802.1ad, which you can specify when creating the bridge. The big issue is that the bridge port the virtual node connects to is created on the fly when you attach the node to the external connector. I can’t configure anything on the port itself ahead of time; it has to inherit its settings from the bridge, and I’m not sure how to make that happen. Further, the bridge port that faces the rest of the network needs to behave like a trunk so that the outer-VLAN-tagged traffic can pass to the vSwitch, and I’m not sure how to do that either.
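For reference, creating a VLAN-aware 802.1ad bridge by hand with iproute2 looks roughly like this (the bridge and interface names are placeholders, and whether CML will create or tolerate a bridge built this way is exactly the open question):

    # Create a VLAN-aware bridge that pushes/matches tags using the 802.1ad
    # ethertype (0x88a8) rather than the usual 802.1Q 0x8100
    ip link add name qinq0 type bridge vlan_filtering 1 vlan_protocol 802.1ad
    ip link set dev qinq0 up

    # Enslave the uplink that faces the distributed vSwitch
    ip link set dev ens224 master qinq0
    ip link set dev ens224 up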

There is a vlan_default_pvid option when creating a bridge, but that doesn’t seem to do what I want. The desired behavior is as follows:

  1. A frame is generated by the virtual lab node, with or without a tag, and enters the ephemeral bridge port created by CML.
  2. Regardless of the presence or absence of a VLAN tag, the bridge pushes an outer tag onto the frame, possibly resulting in two VLAN tags.
  3. The frame is switched across the vSwitch and the physical infrastructure, both of which are completely agnostic to the inner VLAN, if present.
  4. At the last switch before the physical lab node, the outer tag is stripped on egress, leaving only the inner tag for the lab node to see.
  5. The lab node processes the frame as though it were directly connected to the virtual node.

For traffic in the reverse direction, the process is similar: double-tagged traffic traverses the link between the distributed vSwitch and the Linux bridge, the bridge strips the outer tag, and the frame is forwarded to the virtual lab node carrying only the inner tag.
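In bridge VLAN table terms, my best guess is that the end state would need to look something like the following, with the outer VLAN carried tagged on the uplink and set as PVID/untagged on the ephemeral node-facing port (ens224 and vethXYZ are placeholder names, and the per-port commands are exactly the part I don’t know how to apply automatically when CML creates the port):

    # Uplink toward the vSwitch: carry outer VLAN 100 tagged, like a trunk port
    bridge vlan add dev ens224 vid 100
    bridge vlan del dev ens224 vid 1

    # Ephemeral port CML creates for the virtual node: push the outer tag on
    # ingress (pvid) and pop it on egress (untagged), like an access port
    bridge vlan add dev vethXYZ vid 100 pvid untagged
    bridge vlan del dev vethXYZ vid 1

    # Verify the per-port VLAN table
    bridge vlan show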