Description of problem:
When one of the host UUIDs is zero, the UI fails to remove the brick and does not update the bricks from there onward.

Version-Release number of selected component (if applicable):
ovirt-engine-3.6.0-0.0.master.20150429182702.git939c2e8.el6.noarch

How reproducible:
Always

Steps to Reproduce:
1. Have a three-node cluster.
2. Separate out data and management traffic.
3. Reboot one of the nodes.
4. Remove the rebooted node and add it again.

Actual results:
The UI does not show the brick on the node which was rebooted.

Expected results:
The UI should show all the bricks which were part of the volume.

Additional info:
From the engine log:

2015-05-11 12:20:07,979 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-78) [] Could not add brick '10.70.33.221:/b3' to volume 'df20011c-590d-4c4f-aabd-eae095e6fd6d' - server uuid '00000000-0000-0000-0000-000000000000' not found in cluster '42dae32e-9b3e-4344-be80-8c4b8139a0f0'
2015-05-11 12:20:07,991 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-78) [] FINISH, GlusterVolumesListVDSCommand, return: {df20011c-590d-4c4f-aabd-eae095e6fd6d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@6115bd8d}, log id: 64c5594f
2015-05-11 12:20:08,001 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler_Worker-78) [] Error while updating volume 'vol1': null
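The "Error while updating volume 'vol1': null" entry looks like a NullPointerException raised while the sync job processes the brick whose server UUID is all zeros, which then stops the update of the remaining bricks. Below is a minimal sketch of the kind of guard that would keep the sync going; the class and method names (BrickSyncSketch, resolveServer, syncBricks) are hypothetical and are not the actual oVirt engine API:

import java.util.List;
import java.util.UUID;

public class BrickSyncSketch {
    // Gluster reports this host UUID when it cannot resolve a brick's peer.
    private static final UUID NIL_UUID = new UUID(0L, 0L);

    // Hypothetical server lookup; returns null when the UUID is unknown to the cluster.
    static String resolveServer(UUID serverUuid) {
        return NIL_UUID.equals(serverUuid) ? null : "server-" + serverUuid;
    }

    // Skip bricks whose peer UUID is all zeros instead of failing the whole volume update.
    static void syncBricks(List<UUID> brickServerUuids) {
        for (UUID uuid : brickServerUuids) {
            String server = resolveServer(uuid);
            if (server == null) {
                System.out.println("WARN: skipping brick with unresolved server uuid " + uuid);
                continue; // keep updating the remaining bricks
            }
            System.out.println("updating brick on " + server);
        }
    }

    public static void main(String[] args) {
        syncBricks(List.of(
                UUID.fromString("a7ae920d-e8a7-406f-b885-c32dea12656c"),
                NIL_UUID,
                UUID.fromString("c0eec9b7-9cb8-4bec-9944-b3a286ff6a4f")));
    }
}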
gluster vol info --xml output:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volInfo>
    <volumes>
      <volume>
        <name>vol1</name>
        <id>df20011c-590d-4c4f-aabd-eae095e6fd6d</id>
        <status>1</status>
        <statusStr>Started</statusStr>
        <brickCount>3</brickCount>
        <distCount>1</distCount>
        <stripeCount>1</stripeCount>
        <replicaCount>1</replicaCount>
        <disperseCount>0</disperseCount>
        <redundancyCount>0</redundancyCount>
        <type>0</type>
        <typeStr>Distribute</typeStr>
        <transport>0</transport>
        <xlators/>
        <bricks>
          <brick uuid="a7ae920d-e8a7-406f-b885-c32dea12656c">rhs-client24.lab.eng.blr.redhat.com:/b2<name>rhs-client24.lab.eng.blr.redhat.com:/b2</name><hostUuid>a7ae920d-e8a7-406f-b885-c32dea12656c</hostUuid></brick>
          <brick uuid="00000000-0000-0000-0000-000000000000">10.70.33.221:/b3<name>10.70.33.221:/b3</name><hostUuid>00000000-0000-0000-0000-000000000000</hostUuid></brick>
          <brick uuid="c0eec9b7-9cb8-4bec-9944-b3a286ff6a4f">10.70.37.186:/b1<name>10.70.37.186:/b1</name><hostUuid>c0eec9b7-9cb8-4bec-9944-b3a286ff6a4f</hostUuid></brick>
        </bricks>
        <optCount>3</optCount>
        <options>
          <option>
            <name>nfs.disable</name>
            <value>off</value>
          </option>
          <option>
            <name>user.cifs</name>
            <value>enable</value>
          </option>
          <option>
            <name>auth.allow</name>
            <value>*</value>
          </option>
        </options>
      </volume>
      <count>1</count>
    </volumes>
  </volInfo>
</cliOutput>
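To cross-check the CLI output independently of the engine, the snippet below parses the saved --xml output and flags any brick whose <hostUuid> is the nil UUID. This is only an illustration; the file name vol-info.xml is a placeholder for wherever the output above is saved:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class FindNilHostUuid {
    private static final String NIL_UUID = "00000000-0000-0000-0000-000000000000";

    public static void main(String[] args) throws Exception {
        // Parse the saved output of `gluster volume info --xml`.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("vol-info.xml"));
        NodeList bricks = doc.getElementsByTagName("brick");
        for (int i = 0; i < bricks.getLength(); i++) {
            Element brick = (Element) bricks.item(i);
            String name = brick.getElementsByTagName("name").item(0).getTextContent();
            String hostUuid = brick.getElementsByTagName("hostUuid").item(0).getTextContent();
            if (NIL_UUID.equals(hostUuid)) {
                System.out.println("Brick with unresolved peer UUID: " + name);
            }
        }
    }
}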
Similar bug 1064777
*** Bug 1064777 has been marked as a duplicate of this bug. ***
Will verify the bug once https://bugzilla.redhat.com/show_bug.cgi?id=1193999 is fixed.
Please open a bug in case you see that this is not working.