Description of problem:
When 'vagrant up --no-provision' is run to set up the environment for ceph containers, configuration fails with the error below.

Bringing machine 'osd2' up with 'libvirt' provider...
==> mon0: Box 'centos/atomic-host' could not be found. Attempting to find and install...
    mon0: Box Provider: libvirt
    mon0: Box Version: >= 0
==> mon0: Loading metadata for box 'centos/atomic-host'
    mon0: URL: https://atlas.hashicorp.com/centos/atomic-host
==> mon0: Adding box 'centos/atomic-host' (v7.20160730) for provider: libvirt
    mon0: Downloading: https://atlas.hashicorp.com/centos/boxes/atomic-host/versions/7.20160730/providers/libvirt.box
==> mon0: Successfully added box 'centos/atomic-host' (v7.20160730) for 'libvirt'!
==> osd2: Box 'centos/atomic-host' could not be found. Attempting to find and install...
    osd2: Box Provider: libvirt
    osd2: Box Version: >= 0
==> mon0: Uploading base box image as volume into libvirt storage...
Progress: 59%
==> osd2: Loading metadata for box 'centos/atomic-host'
Progress: 59%
    osd2: URL: https://atlas.hashicorp.com/centos/atomic-host
==> mon0: Creating image (snapshot of base box volume).
==> mon0: Creating domain with the following settings...
==> mon0: -- Name: ceph-ansible_mon0
==> mon0: -- Domain type: kvm
==> mon0: -- Cpus: 1
==> mon0: -- Memory: 1024M
==> mon0: -- Management MAC:
==> mon0: -- Loader:
==> mon0: -- Base box: centos/atomic-host
==> mon0: -- Storage pool: default
==> mon0: -- Image: /var/lib/libvirt/images/ceph-ansible_mon0.img (11G)
==> mon0: -- Volume Cache: default
==> mon0: -- Kernel:
==> mon0: -- Initrd:
==> mon0: -- Graphics Type: vnc
==> mon0: -- Graphics Port: 5900
==> mon0: -- Graphics IP: 127.0.0.1
==> mon0: -- Graphics Password: Not defined
==> mon0: -- Video Type: cirrus
==> mon0: -- Video VRAM: 9216
==> mon0: -- Keymap: en-us
==> mon0: -- TPM Path:
==> mon0: -- INPUT: type=mouse, bus=ps2
==> mon0: -- Command line :
==> osd2: Adding box 'centos/atomic-host' (v7.20160730) for provider: libvirt
==> osd0: Box 'centos/atomic-host' could not be found. Attempting to find and install...
    osd0: Box Provider: libvirt
==> mon0: Creating shared folders metadata...
    osd0: Box Version: >= 0
==> osd0: Loading metadata for box 'centos/atomic-host'
    osd0: URL: https://atlas.hashicorp.com/centos/atomic-host
==> osd0: Adding box 'centos/atomic-host' (v7.20160730) for provider: libvirt
==> mon0: Starting domain.
==> osd2: Creating image (snapshot of base box volume).
==> osd1: Box 'centos/atomic-host' could not be found. Attempting to find and install...
    osd1: Box Provider: libvirt
    osd1: Box Version: >= 0
==> osd1: Loading metadata for box 'centos/atomic-host'
    osd1: URL: https://atlas.hashicorp.com/centos/atomic-host
==> osd2: Creating domain with the following settings...
==> osd2: -- Name: ceph-ansible_osd2
==> osd2: -- Domain type: kvm
==> osd2: -- Cpus: 1
==> osd2: -- Memory: 1024M
==> osd2: -- Management MAC:
==> osd2: -- Loader:
==> osd2: -- Base box: centos/atomic-host
==> osd2: -- Storage pool: default
==> osd2: -- Image: /var/lib/libvirt/images/ceph-ansible_osd2.img (11G)
==> osd2: -- Volume Cache: default
==> osd2: -- Kernel:
==> osd2: -- Initrd:
==> osd2: -- Graphics Type: vnc
==> osd2: -- Graphics Port: 5900
==> mon0: Waiting for domain to get an IP address...
==> osd2: -- Graphics IP: 127.0.0.1
==> osd0: Creating image (snapshot of base box volume).
==> osd2: -- Graphics Password: Not defined
==> osd1: Adding box 'centos/atomic-host' (v7.20160730) for provider: libvirt
==> osd2: -- Video Type: cirrus
==> osd2: -- Video VRAM: 9216
==> osd0: Creating domain with the following settings...
==> osd0: -- Name: ceph-ansible_osd0
==> osd2: -- Keymap: en-us
==> osd2: -- TPM Path:
==> osd0: -- Domain type: kvm
==> osd0: -- Cpus: 1
==> osd2: -- Disks: vdb(qcow2,11G), vdc(qcow2,11G)
==> osd2: -- Disk(vdb): /var/lib/libvirt/images/disk-2-0.disk
==> osd0: -- Memory: 1024M
==> osd2: -- Disk(vdc): /var/lib/libvirt/images/disk-2-1.disk
==> osd0: -- Management MAC:
==> osd0: -- Loader:
==> osd2: -- INPUT: type=mouse, bus=ps2
==> osd0: -- Base box: centos/atomic-host
==> osd2: -- Command line :
==> osd0: -- Storage pool: default
==> osd0: -- Image: /var/lib/libvirt/images/ceph-ansible_osd0.img (11G)
==> osd0: -- Volume Cache: default
==> osd0: -- Kernel:
==> osd0: -- Initrd:
==> osd0: -- Graphics Type: vnc
==> osd0: -- Graphics Port: 5900
==> osd0: -- Graphics IP: 127.0.0.1
==> osd0: -- Graphics Password: Not defined
==> osd0: -- Video Type: cirrus
==> osd0: -- Video VRAM: 9216
==> osd0: -- Keymap: en-us
==> osd0: -- TPM Path:
==> osd0: -- Disks: vdb(qcow2,11G), vdc(qcow2,11G)
==> osd0: -- Disk(vdb): /var/lib/libvirt/images/disk-0-0.disk
==> osd0: -- Disk(vdc): /var/lib/libvirt/images/disk-0-1.disk
==> osd0: -- INPUT: type=mouse, bus=ps2
==> osd0: -- Command line :
==> osd2: Creating shared folders metadata...
==> osd1: Creating image (snapshot of base box volume).
==> osd2: Starting domain.
==> osd0: Creating shared folders metadata...
==> osd0: Starting domain.
==> osd1: Creating domain with the following settings...
==> osd1: -- Name: ceph-ansible_osd1
==> osd1: -- Domain type: kvm
==> osd1: -- Cpus: 1
==> osd1: -- Memory: 1024M
==> osd1: -- Management MAC:
==> osd1: -- Loader:
==> osd0: Waiting for domain to get an IP address...
==> osd1: -- Base box: centos/atomic-host
==> osd1: -- Storage pool: default
==> osd1: -- Image: /var/lib/libvirt/images/ceph-ansible_osd1.img (11G)
==> osd1: -- Volume Cache: default
==> osd1: -- Kernel:
==> osd1: -- Initrd:
==> osd2: Waiting for domain to get an IP address...
==> osd1: -- Graphics Type: vnc
==> osd1: -- Graphics Port: 5900
==> osd1: -- Graphics IP: 127.0.0.1
==> osd1: -- Graphics Password: Not defined
==> osd1: -- Video Type: cirrus
==> osd1: -- Video VRAM: 9216
==> osd1: -- Keymap: en-us
==> osd1: -- TPM Path:
==> osd1: -- Disks: vdb(qcow2,11G), vdc(qcow2,11G)
==> osd1: -- Disk(vdb): /var/lib/libvirt/images/disk-1-0.disk
==> osd1: -- Disk(vdc): /var/lib/libvirt/images/disk-1-1.disk
==> osd1: -- INPUT: type=mouse, bus=ps2
==> osd1: -- Command line :
==> osd1: Creating shared folders metadata...
==> osd1: Starting domain.
==> osd1: Waiting for domain to get an IP address...
==> mon0: Waiting for SSH to become available...
==> mon0: Setting hostname...
==> mon0: Configuring and enabling network interfaces...
==> mon0: Machine not provisioned because `--no-provision` is specified.
==> osd0: Waiting for SSH to become available...
==> osd2: Waiting for SSH to become available...
==> osd2: Setting hostname...
==> osd2: Configuring and enabling network interfaces...
==> osd1: Waiting for SSH to become available...
==> osd1: Setting hostname...
==> osd1: Configuring and enabling network interfaces...
==> osd2: An error occurred.
The error will be shown after all tasks complete.
==> osd1: An error occurred. The error will be shown after all tasks complete.
==> osd0: Setting hostname...
==> osd0: Configuring and enabling network interfaces...
==> osd0: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.

An error occurred while executing the action on the 'osd0' machine. Please handle this error then try again:

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

# Down the interface before munging the config file. This might
# fail if the interface is not actually set up yet so ignore
# errors.
/sbin/ifdown 'eth1' || true
# Move new config into place
mv '/tmp/vagrant-network-entry-eth1-1474885650-0' '/etc/sysconfig/network-scripts/ifcfg-eth1'
# Bring the interface up
ARPCHECK=no /sbin/ifup 'eth1'
# Down the interface before munging the config file. This might
# fail if the interface is not actually set up yet so ignore
# errors.
/sbin/ifdown 'eth2' || true
# Move new config into place
mv '/tmp/vagrant-network-entry-eth2-1474885651-1' '/etc/sysconfig/network-scripts/ifcfg-eth2'
# Bring the interface up
ARPCHECK=no /sbin/ifup 'eth2'

Stdout from the command:
ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Device eth2 has different MAC address than expected, ignoring.

Stderr from the command:
usage: ifdown <configuration>
usage: ifdown <configuration>

An error occurred while executing the action on the 'osd1' machine. Please handle this error then try again:

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

# Down the interface before munging the config file. This might
# fail if the interface is not actually set up yet so ignore
# errors.
/sbin/ifdown 'eth1' || true
# Move new config into place
mv '/tmp/vagrant-network-entry-eth1-1474885648-0' '/etc/sysconfig/network-scripts/ifcfg-eth1'
# Bring the interface up
ARPCHECK=no /sbin/ifup 'eth1'
# Down the interface before munging the config file. This might
# fail if the interface is not actually set up yet so ignore
# errors.
/sbin/ifdown 'eth2' || true
# Move new config into place
mv '/tmp/vagrant-network-entry-eth2-1474885648-1' '/etc/sysconfig/network-scripts/ifcfg-eth2'
# Bring the interface up
ARPCHECK=no /sbin/ifup 'eth2'

Stdout from the command:
ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Device eth2 has different MAC address than expected, ignoring.

Stderr from the command:
usage: ifdown <configuration>
usage: ifdown <configuration>

An error occurred while executing the action on the 'osd2' machine. Please handle this error then try again:

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

# Down the interface before munging the config file. This might
# fail if the interface is not actually set up yet so ignore
# errors.
/sbin/ifdown 'eth1' || true
# Move new config into place
mv '/tmp/vagrant-network-entry-eth1-1474885647-0' '/etc/sysconfig/network-scripts/ifcfg-eth1'
# Bring the interface up
ARPCHECK=no /sbin/ifup 'eth1'
# Down the interface before munging the config file. This might
# fail if the interface is not actually set up yet so ignore
# errors.
/sbin/ifdown 'eth2' || true
# Move new config into place
mv '/tmp/vagrant-network-entry-eth2-1474885647-1' '/etc/sysconfig/network-scripts/ifcfg-eth2'
# Bring the interface up
ARPCHECK=no /sbin/ifup 'eth2'

Stdout from the command:
ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Device eth2 has different MAC address than expected, ignoring.

Stderr from the command:
usage: ifdown <configuration>
usage: ifdown <configuration>

Version-Release number of selected component (if applicable):
# vagrant --version
Vagrant 1.8.5

Current hash of ceph-ansible - 614ee4937d7445569b93ec3644c1fcd4d7cbc8ba

# vagrant status
Current machine states:

mon0                      running (libvirt)
osd0                      running (libvirt)
osd1                      running (libvirt)
osd2                      running (libvirt)

This environment represents multiple VMs. The VMs are all listed above with their current state. For more information about a specific VM, run `vagrant status NAME`.

How reproducible:
Always

Steps to Reproduce:
1. Configure ceph containers using vagrant by following the guide - https://docs.google.com/document/d/1Ef5a_-Yjozy5Ue3C0M7mMQNn6zWZe0-514bhxKwFHI8/edit?ts=576a3d95#

Actual results:

Expected results:

Additional info:
Please attach your vagrant_variables.yml
Created attachment 1204848 [details] vagrant_variable file
When using libvirt, you need to set "eth" to "eth1", not enp0s8. This isn't entirely clear from the comment, but the "libvirt" part trumps the "ubuntu or centos" part.
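For reference, the relevant setting in vagrant_variables.yml would look something like the sketch below (an excerpt only; all other keys elided, and the surrounding file layout may differ in your checkout):

```yaml
# vagrant_variables.yml (excerpt)
# With the libvirt provider, guest NICs are named ethN inside the VM,
# so use eth1 here rather than the enp0s8 name suggested for VirtualBox.
eth: 'eth1'
```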
I had tried setting 'eth' to 'eth1' earlier today and I still saw the same issue.
Can you look up the name of your interface? Just SSH into the VM with 'vagrant ssh osd0' and get the name of the interface.
[root@rhs-gp-srv5 ceph-ansible]# vagrant ssh osd0
[vagrant@ceph-osd0 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:bf:9f:e9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.44/24 brd 192.168.121.255 scope global dynamic eth0
       valid_lft 2718sec preferred_lft 2718sec
    inet6 fe80::5054:ff:febf:9fe9/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:99:9d:1e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.100/24 brd 192.168.0.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe99:9d1e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:73:94:7a brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.5/24 brd 192.168.0.255 scope global dynamic eth2
       valid_lft 3461sec preferred_lft 3461sec
    inet6 fe80::5054:ff:fe73:947a/64 scope link
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:e7:f4:c2:66 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
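If it helps anyone triaging similar reports, the interface names and MACs can be pulled out of `ip a` output like the above with a quick parse. This is plain Python, nothing Vagrant-specific; the helper name `interfaces` is just for illustration:

```python
import re

# Trimmed sample of the `ip a` output posted above.
IP_A_OUTPUT = """\
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:bf:9f:e9 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:99:9d:1e brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:73:94:7a brd ff:ff:ff:ff:ff:ff
"""

def interfaces(ip_a: str) -> dict:
    """Map interface name -> MAC address from `ip a` output."""
    result = {}
    current = None
    for line in ip_a.splitlines():
        header = re.match(r'\d+: (\S+?):', line)
        if header:
            current = header.group(1)  # e.g. "eth0"
        mac = re.search(r'link/ether ([0-9a-f:]{17})', line)
        if mac and current:
            result[current] = mac.group(1)
    return result

print(interfaces(IP_A_OUTPUT))
# {'eth0': '52:54:00:bf:9f:e9', 'eth1': '52:54:00:99:9d:1e', 'eth2': '52:54:00:73:94:7a'}
```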
Sorry, but I cannot reproduce your issue with Vagrant 1.8.5 and the VirtualBox provider.
Yes, you're right; however, I don't have any setup to test this. Could it be an issue with the libvirt provider?
OK, I just logged in and figured out that we have encountered this issue before. Ivan reported the problem to the libvirt-provider team here: https://github.com/vagrant-libvirt/vagrant-libvirt/issues/645 The issue is still present though...
The best thing we can do here is to check out the libvirt-provider repo at a revision before the commit that introduced this bug. We need to state that in the doc as well.
Note that hitting this error does not have any adverse effect on the Vagrant instances or on any of the playbook runs. All it does is cause eth2 on the OSDs to receive DHCP addresses instead of the statically assigned ones from vagrant-libvirt. But we hadn't been using the OSD eth2 interfaces anyway, so it doesn't really impact anything. It's a more or less benign error for now, but we should try to fix it, or at least work around it.
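For anyone puzzled by the "different MAC address than expected" message: ifup-eth refuses to configure a device whose live MAC does not match the HWADDR recorded in its ifcfg file, which is why eth2 falls back to DHCP here. The sketch below illustrates that comparison in plain Python; the `mac_matches` helper and the `52:54:00:aa:bb:cc` HWADDR value are made up for illustration and this is not the actual initscripts code:

```python
import re

def mac_matches(ifcfg_text: str, actual_mac: str) -> bool:
    """Return True if the HWADDR in an ifcfg file matches the device's
    live MAC, or if no HWADDR is recorded at all. Mirrors, in spirit,
    the guard that ifup-eth applies before configuring a device."""
    m = re.search(r'^HWADDR=["\']?([0-9A-Fa-f:]+)', ifcfg_text, re.MULTILINE)
    if m is None:
        return True  # nothing recorded, nothing to mismatch
    return m.group(1).lower() == actual_mac.lower()

# The situation in this report: vagrant-libvirt wrote one MAC into
# ifcfg-eth2, but libvirt attached the NIC with a different one
# (52:54:00:73:94:7a is the eth2 MAC from the `ip a` output above).
ifcfg = 'DEVICE=eth2\nHWADDR=52:54:00:aa:bb:cc\nBOOTPROTO=none\n'
print(mac_matches(ifcfg, '52:54:00:73:94:7a'))  # prints False
```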
Per Ivan's comment #15, this is not a blocker to the container image of 2.0 going out.
I've opened a ceph-ansible PR https://github.com/ceph/ceph-ansible/pull/1015 to fix this issue by working around the vagrant-libvirt bug so that you should no longer see this error.
Thanks Ivan, FYI this is not part of v1.0.8
We can close this as we are not testing on Vagrant for 2.3.