Bug 1220263 - [New] - UI fails to remove the brick when one of the host UUIDs is zero, and does not check further to update any new bricks.
Summary: [New] - UI fails to remove the brick when one of the host UUIDs is zero, and does not check further to update any new bricks.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Frontend.WebAdmin
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ovirt-3.6.0-rc
Target Release: 3.6.0
Assignee: Sahina Bose
QA Contact: RamaKasturi
URL:
Whiteboard:
Duplicates: 1064777
Depends On:
Blocks: rhsc_qe_tracker_everglades 1224276
 
Reported: 2015-05-11 06:49 UTC by RamaKasturi
Modified: 2016-03-03 16:55 UTC
CC List: 9 users

Fixed In Version: ovirt-3.6.0-alpha1.2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1224276
Environment:
Last Closed: 2016-03-03 16:55:39 UTC
oVirt Team: Gluster
Embargoed:
ylavi: ovirt-3.6.0?
ylavi: planning_ack+
rule-engine: devel_ack+
ylavi: testing_ack?


Attachments


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 40799 0 master MERGED engine: Fixed issue with null brick during sync Never
oVirt gerrit 40850 0 ovirt-engine-3.5-gluster MERGED engine: Fixed issue with null brick during sync Never

Description RamaKasturi 2015-05-11 06:49:32 UTC
Description of problem:
When one of the host UUIDs is zero (all zeros), the UI fails to remove the brick and does not update the remaining bricks from that point onwards.

Version-Release number of selected component (if applicable):
ovirt-engine-3.6.0-0.0.master.20150429182702.git939c2e8.el6.noarch

How reproducible:
Always

Steps to Reproduce:
1. Have a three node cluster
2. Separate out data and mgmt traffic.
3. Now reboot one of the node.
4. Remove the node which is rebooted and add it again

Actual results:
The UI does not show the brick on the node which was rebooted, and the remaining bricks are not updated.

Expected results:
The UI should show all the bricks which are part of the volume.

Additional info:

Comment 1 RamaKasturi 2015-05-11 06:50:40 UTC
From Engine log:

2015-05-11 12:20:07,979 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-78) [] Could not add brick '10.70.33.221:/b3' to volume 'df20011c-590d-4c4f-aabd-eae095e6fd6d' - server uuid '00000000-0000-0000-0000-000000000000' not found in cluster '42dae32e-9b3e-4344-be80-8c4b8139a0f0'
2015-05-11 12:20:07,991 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-78) [] FINISH, GlusterVolumesListVDSCommand, return: {df20011c-590d-4c4f-aabd-eae095e6fd6d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@6115bd8d}, log id: 64c5594f
2015-05-11 12:20:08,001 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler_Worker-78) [] Error while updating volume 'vol1': null
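
The linked gerrit patches ("engine: Fixed issue with null brick during sync", 40799 on master and 40850 on ovirt-engine-3.5-gluster) point at the volume-sync path, where a brick with an unresolvable server UUID ends up as a null entity and the NPE above aborts the whole update. The following is only an illustrative sketch of that kind of guard; the class and method names (VolumeSyncSketch, FetchedBrick, filterSyncableBricks) are assumptions for illustration and are not the actual GlusterSyncJob code:

// Illustrative sketch only -- not the actual oVirt engine code.
// Shows how a sync loop can skip bricks whose host UUID is the all-zero
// placeholder instead of letting a null entity abort the entire update.
import java.util.ArrayList;
import java.util.List;

public class VolumeSyncSketch {

    // gluster reports this value when it cannot map a brick to a peer UUID
    private static final String ZERO_UUID = "00000000-0000-0000-0000-000000000000";

    /** Hypothetical brick record parsed from 'gluster volume info --xml'. */
    record FetchedBrick(String name, String hostUuid) {}

    /**
     * Returns the bricks that can safely be synced; bricks with an
     * unresolvable (all-zero) host UUID are skipped with a warning
     * rather than producing a null entry that later throws an NPE.
     */
    static List<FetchedBrick> filterSyncableBricks(List<FetchedBrick> fetched) {
        List<FetchedBrick> syncable = new ArrayList<>();
        for (FetchedBrick brick : fetched) {
            if (brick.hostUuid() == null || ZERO_UUID.equals(brick.hostUuid())) {
                System.err.println("WARN: skipping brick '" + brick.name()
                        + "' - server uuid not resolvable, will retry on next sync");
                continue; // do not abort; keep processing the remaining bricks
            }
            syncable.add(brick);
        }
        return syncable;
    }

    public static void main(String[] args) {
        // Brick data taken from the 'gluster vol info --xml' output in comment 2.
        List<FetchedBrick> fetched = List.of(
                new FetchedBrick("rhs-client24.lab.eng.blr.redhat.com:/b2",
                        "a7ae920d-e8a7-406f-b885-c32dea12656c"),
                new FetchedBrick("10.70.33.221:/b3", ZERO_UUID),
                new FetchedBrick("10.70.37.186:/b1",
                        "c0eec9b7-9cb8-4bec-9944-b3a286ff6a4f"));
        // Only the two resolvable bricks are returned; the sync continues.
        filterSyncableBricks(fetched).forEach(b -> System.out.println(b.name()));
    }
}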

Comment 2 RamaKasturi 2015-05-11 06:52:01 UTC
gluster vol info --xml output :

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volInfo>
    <volumes>
      <volume>
        <name>vol1</name>
        <id>df20011c-590d-4c4f-aabd-eae095e6fd6d</id>
        <status>1</status>
        <statusStr>Started</statusStr>
        <brickCount>3</brickCount>
        <distCount>1</distCount>
        <stripeCount>1</stripeCount>
        <replicaCount>1</replicaCount>
        <disperseCount>0</disperseCount>
        <redundancyCount>0</redundancyCount>
        <type>0</type>
        <typeStr>Distribute</typeStr>
        <transport>0</transport>
        <xlators/>
        <bricks>
          <brick uuid="a7ae920d-e8a7-406f-b885-c32dea12656c">rhs-client24.lab.eng.blr.redhat.com:/b2<name>rhs-client24.lab.eng.blr.redhat.com:/b2</name><hostUuid>a7ae920d-e8a7-406f-b885-c32dea12656c</hostUuid></brick>
          <brick uuid="00000000-0000-0000-0000-000000000000">10.70.33.221:/b3<name>10.70.33.221:/b3</name><hostUuid>00000000-0000-0000-0000-000000000000</hostUuid></brick>
          <brick uuid="c0eec9b7-9cb8-4bec-9944-b3a286ff6a4f">10.70.37.186:/b1<name>10.70.37.186:/b1</name><hostUuid>c0eec9b7-9cb8-4bec-9944-b3a286ff6a4f</hostUuid></brick>
        </bricks>
        <optCount>3</optCount>
        <options>
          <option>
            <name>nfs.disable</name>
            <value>off</value>
          </option>
          <option>
            <name>user.cifs</name>
            <value>enable</value>
          </option>
          <option>
            <name>auth.allow</name>
            <value>*</value>
          </option>
        </options>
      </volume>
      <count>1</count>
    </volumes>
  </volInfo>
</cliOutput>
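
For illustration only, a small sketch of detecting the all-zero hostUuid while parsing the <bricks> section of the XML above; this is not vdsm or engine code, and the file name volinfo.xml and class BrickUuidCheck are assumptions:

// Illustrative sketch: parse 'gluster volume info --xml' output and flag
// bricks whose hostUuid is the all-zero placeholder.
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.File;

public class BrickUuidCheck {
    private static final String ZERO_UUID = "00000000-0000-0000-0000-000000000000";

    public static void main(String[] args) throws Exception {
        // volinfo.xml is assumed to contain the output shown in comment 2
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("volinfo.xml"));
        NodeList bricks = doc.getElementsByTagName("brick");
        for (int i = 0; i < bricks.getLength(); i++) {
            Element brick = (Element) bricks.item(i);
            String name = brick.getElementsByTagName("name").item(0).getTextContent();
            String hostUuid = brick.getElementsByTagName("hostUuid").item(0).getTextContent();
            if (ZERO_UUID.equals(hostUuid)) {
                // This is the brick the engine could not map to a cluster host.
                System.out.println("Unresolvable host UUID for brick: " + name);
            }
        }
    }
}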

Comment 3 Kanagaraj 2015-05-12 10:12:35 UTC
Similar bug 1064777

Comment 4 Sahina Bose 2015-05-12 10:27:38 UTC
*** Bug 1064777 has been marked as a duplicate of this bug. ***

Comment 5 RamaKasturi 2016-02-16 09:40:57 UTC
Will verify the bug once https://bugzilla.redhat.com/show_bug.cgi?id=1193999 is fixed.

Comment 6 RamaKasturi 2016-03-03 16:55:39 UTC
Please open a bug in case you see that this is not working.

