Configuring network for OpenVZ container
To live-migrate a container that is accessible via the network, it should be possible to reach the container at the same IP address regardless of which node it runs on, source or destination. The easiest way to achieve this is to attach the container and both nodes to the same network segment via bridging.
To do so, first attach the hosts' main links (eth0 below) to a bridge, then attach the container to this bridge on the source node.
The following is just an example of how to do it on Fedora 20; you can skip this part and set things up your own way.
- Disable NetworkManager and enable legacy networking
# chkconfig NetworkManager off
# chkconfig network on
# service NetworkManager stop
# service network start
- Configure the bridge and attach the eth0 device to it.
/etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
DELAY=0
Add the following line to /etc/sysconfig/network-scripts/ifcfg-eth0:
BRIDGE="br0"
Then do
# service network restart
After this eth0 should be in the bridge (check with brctl show br0) and br0 should have the IP address (check with ip a l).
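For reference, a minimal sketch of what the host's ifcfg-eth0 might look like after the change (the IP configuration now lives on br0; keep any HWADDR/UUID lines your distribution already put there):

DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
BRIDGE="br0"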
The next step is to attach the container to the bridge. To do so, a veth device is used.
- Assign a veth device to the container
# vzctl set $id --netif_add eth0,$ct_mac,$host_veth_name,$host_mac,br0
Any of $ct_mac, $host_mac and $host_veth_name can be empty; in that case vzctl will generate the value itself. Please note that if $host_mac is numerically lower than the MAC of your eth0 link, you will see a short networking lag after starting the CT, because the bridge adopts the lowest MAC among its ports and br0's MAC address will change.
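As an illustration only (the container ID and MAC addresses below are made up), the call could look like this; the host-side MAC is deliberately chosen to be higher than a typical eth0 MAC to avoid the br0 MAC change mentioned above, and --save keeps the interface in the CT config:

# vzctl set 101 --netif_add eth0,00:18:51:00:01:01,veth101.0,FE:18:51:00:01:02,br0 --save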
- Make sure you have an eth0 config file inside the CT at /etc/sysconfig/network-scripts/ifcfg-eth0.
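A minimal sketch of that file, assuming the container simply takes a DHCP lease from the bridged segment (use IPADDR/NETMASK/GATEWAY instead for a static address):

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp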