Description of problem:

We have one nova instance with the same cinder volume attached twice:

[root@mi-host2012 ~(openstack_admin)]# nova show 2242e958-c73b-4eb7-b0a0-1f4f3540d243
+--------------------------------------+--------------------------------------------------------------------------------------------------+
| Property | Value |
+--------------------------------------+--------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | mi-host2013 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | mi-host2013 |
| OS-EXT-SRV-ATTR:instance_name | instance-00000695 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2015-12-16T09:18:29.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2015-12-16T09:17:54Z |
| flavor | m1.large (4) |
| hostId | e1afc6dfe39d4c1308bee127659a580d433e7058f5c12cafa1d0f13e |
| id | 2242e958-c73b-4eb7-b0a0-1f4f3540d243 |
| image | MYD_WIN_02_IMG_THRMYD (5f00b165-3326-4e8b-bc89-a7f202128ed6) |
| key_name | MYD_LIX_WEB_KEY01 |
| metadata | {} |
| name | MYD_WIN_THRMYD_01 |
| os-extended-volumes:volumes_attached | [{"id": "17bb4a19-87dd-49d4-acbd-776464763d7a"}, {"id": "17bb4a19-87dd-49d4-acbd-776464763d7a"}] |
| progress | 0 |
| security_groups | default |
| status | ACTIVE |
| tenant_id | f27c7b5793f342bcaf12c96d14635361 |
| updated | 2015-12-17T00:24:10Z |
| user_id | da0b7620e5c3440a95c6e01fc01f2bdc |
| vpdc network | 192.168.0.120, 2.239.208.91 |
+--------------------------------------+--------------------------------------------------------------------------------------------------+

Additionally, our customer reports that this instance (Windows Server 2008 R2) has two cinder volumes attached: one is 17bb4a19-87dd-49d4-acbd-776464763d7a (shown above), and the second is the volume with id 351f275b-1784-4212-a378-9e1e9a4e49c8, which cinder reports as available:

[root@mi-host2012 ~(openstack_admin)]# cinder show 351f275b-1784-4212-a378-9e1e9a4e49c8
+---------------------------------------+--------------------------------------------------------+
| Property | Value |
+---------------------------------------+--------------------------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2015-11-19T23:21:09.000000 |
| display_description | |
| display_name | MYD_DSK_01 |
| encrypted | False |
| id | 351f275b-1784-4212-a378-9e1e9a4e49c8 |
| metadata | {u'readonly': u'False'} |
| os-vol-host-attr:host | ha-controller@cinder-volumes-1-v7000-ber#NFS-V7000-BER |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | f27c7b5793f342bcaf12c96d14635361 |
| os-volume-replication:driver_data | None |
| os-volume-replication:extended_status | None |
| size | 300 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| volume_type | cinder-volumes-1-v7000-ber |
+---------------------------------------+--------------------------------------------------------+

On the hypervisor I can see that this "available" cinder volume is in fact attached to the instance (note file=/var/lib/nova/mnt/9d54ef64dcb7aec083c6bf037c88ed76/volume-351f275b-1784-4212-a378-9e1e9a4e49c8,if=none,id=drive-virtio-disk1):

[root@mi-host2013 ~]# ps -ef | grep instance-00000695
qemu 20378 1 99 01:07 ? 09:27:25 /usr/libexec/qemu-kvm -name instance-00000695 -S -machine pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu SandyBridge,+erms,+smep,+fsgsbase,+pdpe1gb,+rdrand,+f16c,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 8192 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 2242e958-c73b-4eb7-b0a0-1f4f3540d243 -smbios type=1,manufacturer=Red Hat,product=OpenStack Compute,version=2014.2.3-9.el7ost,serial=9c451138-6738-4ee9-9834-4c60e11cc966,uuid=2242e958-c73b-4eb7-b0a0-1f4f3540d243 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000695.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/instances/2242e958-c73b-4eb7-b0a0-1f4f3540d243/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/nova/mnt/9d54ef64dcb7aec083c6bf037c88ed76/volume-351f275b-1784-4212-a378-9e1e9a4e49c8,if=none,id=drive-virtio-disk1,format=raw,serial=351f275b-1784-4212-a378-9e1e9a4e49c8,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=40,id=hostnet0,vhost=on,vhostfd=41 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:02:0b:ce,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/2242e958-c73b-4eb7-b0a0-1f4f3540d243/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 0.0.0.0:11 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -incoming fd:25 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
root 34775 31589 0 09:30 pts/20 00:00:00 grep --color=auto instance-00000695

Version-Release number of selected component (if applicable):
openstack-nova-common-2014.2.3-9.el7ost.noarch
openstack-nova-compute-2014.2.3-9.el7ost.noarch
python-nova-2014.2.3-9.el7ost.noarch
python-novaclient-2.20.0-1.el7ost.noarch
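For reference, the inconsistency can be cross-checked from the API side by comparing each entry in the server's os-extended-volumes:volumes_attached list against cinder's view of the volume. The script below is only a minimal sketch, assuming admin credentials are exported in the usual OS_* environment variables and the Juno-era python-novaclient/python-cinderclient listed above; the helper name check_instance_volumes is illustrative and not part of any shipped tooling.

#!/usr/bin/env python
# Sketch: flag (a) the same volume listed more than once on a nova instance
# and (b) volumes that nova/qemu treat as attached while cinder reports them
# as something other than "in-use" (e.g. "available").
# Assumes OS_* credentials in the environment (illustrative, not verified
# against any specific deployment).
import os
from collections import Counter

from novaclient import client as nova_client
from cinderclient import client as cinder_client

CREDS = (os.environ['OS_USERNAME'], os.environ['OS_PASSWORD'],
         os.environ['OS_TENANT_NAME'], os.environ['OS_AUTH_URL'])


def check_instance_volumes(instance_id, extra_volume_ids=()):
    nova = nova_client.Client('2', *CREDS)
    cinder = cinder_client.Client('2', *CREDS)

    server = nova.servers.get(instance_id)
    attached = [v['id'] for v in
                getattr(server, 'os-extended-volumes:volumes_attached', [])]

    # Case (a): duplicate attachment records in nova's view of the server.
    for vol_id, count in Counter(attached).items():
        if count > 1:
            print('DUPLICATE: volume %s listed %d times on instance %s'
                  % (vol_id, count, instance_id))

    # Case (b): volumes attached per nova (or seen on the hypervisor and
    # passed in via extra_volume_ids) whose cinder state disagrees.
    for vol_id in set(attached) | set(extra_volume_ids):
        vol = cinder.volumes.get(vol_id)
        if vol.status != 'in-use' or not vol.attachments:
            print('MISMATCH: volume %s has cinder status %r, attachments %r'
                  % (vol_id, vol.status, vol.attachments))


if __name__ == '__main__':
    # IDs taken from the outputs above.
    check_instance_volumes('2242e958-c73b-4eb7-b0a0-1f4f3540d243',
                           extra_volume_ids=['351f275b-1784-4212-a378-9e1e9a4e49c8'])

Run against the instance above, case (a) would report 17bb4a19-87dd-49d4-acbd-776464763d7a attached twice, and case (b) would report 351f275b-1784-4212-a378-9e1e9a4e49c8 as attached on the hypervisor but "available" with no attachments in cinder.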
The fix is included in openstack-nova-common-2015.1.3-7.el7ost.noarch.

Automation passed:
https://rhos-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/RHOS/view/RHOS7/job/rhos-jenkins-rhos-7.0-puddle-rhel-7.2-3networkers-packstack-neutron-ml2-vxlan-rabbitmq-tempest-git-all/34/
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0507.html