Bug 1636427
| Summary: | Order of brick placement on the host is not preserved as defined in the playbook | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | SATHEESARAN <sasundar> |
| Component: | rhhi | Assignee: | Sahina Bose <sabose> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | SATHEESARAN <sasundar> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | rhhiv-1.5 | CC: | rhs-bugs, sabose, sankarshan, sasundar, surs |
| Target Milestone: | --- | | |
| Target Release: | RHHI-V 1.5 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | gluster-ansible-roles-1.0.3 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1636425 | Environment: | |
| Last Closed: | 2019-05-20 04:54:53 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1636425 | | |
| Bug Blocks: | 1520836 | | |
Description

SATHEESARAN 2018-10-05 11:10:06 UTC
sas, with my setup it works as expected:

```
Volume Name: data
Type: Replicate
Volume ID: fd48f9e2-617c-463e-b68e-298c2704da18
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.43.26:/gluster_bricks/data/data
Brick2: 10.70.43.169:/gluster_bricks/data/data
Brick3: 10.70.43.104:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
```

I would like to try this on your setup, to see if it has anything to do with FQDNs. It shouldn't, but we can try anyway. This is my conf file:

```yaml
hc-nodes:
  hosts:
    # Host1 - Provide the FQDN/IP of backend network
    10.70.43.169:
      # Set up GlusterFS hyperconverged interface
      gluster_features_hci_cluster: "{{ groups['hc-nodes'] }}"
      gluster_features_hci_volumes:
        - { volname: 'engine', brick: '/gluster_bricks/engine/engine', arbiter: 1 }
        - { volname: 'data', brick: '/gluster_bricks/data/data', arbiter: 1 }
        - { volname: 'vmstore', brick: '/gluster_bricks/vmstore/vmstore', arbiter: 1 }
    # Host2
    10.70.43.26:
      gluster_features_hci_cluster: "{{ groups['hc-nodes'] }}"
      gluster_features_hci_volumes:
        - { volname: 'engine', brick: '/gluster_bricks/engine/engine', arbiter: 1 }
        - { volname: 'data', brick: '/gluster_bricks/data/data', arbiter: 1 }
        - { volname: 'vmstore', brick: '/gluster_bricks/vmstore/vmstore', arbiter: 1 }
    # Host3
    10.70.43.104:
      # Set up GlusterFS hyperconverged interface
      gluster_features_hci_cluster: "{{ groups['hc-nodes'] }}"
      gluster_features_hci_volumes:
        - { volname: 'engine', brick: '/gluster_bricks/engine/engine', arbiter: 1 }
        - { volname: 'data', brick: '/gluster_bricks/data/data', arbiter: 1 }
        - { volname: 'vmstore', brick: '/gluster_bricks/vmstore/vmstore', arbiter: 1 }
```

And my CLI:

```
ansible-playbook -i gluster_inventory.yml hc_deployment.yml --tags hcivolcreate
```

sas, more update on this: This is due to a bug in Ansible.
https://github.com/ansible/ansible/issues/34861

The culprit is this line:

```yaml
gluster_features_hci_cluster: "{{ groups['hc-nodes'] }}"
```

`groups` here does not keep the ordering right; it sorts the hosts in descending order. The workaround, until Ansible fixes the issue, is to define the cluster as an explicit list and use that variable instead.

Once you test this out, can we close this bug and open another to include this change in the reference playbooks we provide? This is more of a playbook and inventory file bug than a bug in the role.

For example:

```yaml
hc-nodes:
  vars:
    cluster_nodes:
      - host1
      - host2
      - host3
  hosts:
    # Host3
    10.70.43.191:
      # Set up GlusterFS hyperconverged interface
      gluster_features_hci_cluster: "{{ cluster_nodes }}"
      gluster_features_hci_volumes:
        - { volname: 'engine', brick: '/gluster_bricks/engine/engine', arbiter: 1 }
        - { volname: 'data', brick: '/gluster_bricks/data/data', arbiter: 1 }
        - { volname: 'vmstore', brick: '/gluster_bricks/vmstore/vmstore', arbiter: 1 }
    # Host1 - Provide the FQDN/IP of backend network
    10.70.43.169:
      # Set up GlusterFS hyperconverged interface
      gluster_features_hci_cluster: "{{ cluster_nodes }}"
      gluster_features_hci_volumes:
        - { volname: 'engine', brick: '/gluster_bricks/engine/engine', arbiter: 1 }
        - { volname: 'data', brick: '/gluster_bricks/data/data', arbiter: 1 }
        - { volname: 'vmstore', brick: '/gluster_bricks/vmstore/vmstore', arbiter: 1 }
    # Host2
    10.70.43.26:
      gluster_features_hci_cluster: "{{ cluster_nodes }}"
      gluster_features_hci_volumes:
        - { volname: 'engine', brick: '/gluster_bricks/engine/engine', arbiter: 1 }
        - { volname: 'data', brick: '/gluster_bricks/data/data', arbiter: 1 }
        - { volname: 'vmstore', brick: '/gluster_bricks/vmstore/vmstore', arbiter: 1 }
```

@sas this enhancement to the playbook[1] by Gobinda should fix this ordering issue and improve the readability of the playbook as well.
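To see the ordering Ansible actually resolves for the group before deploying, the list can be printed with a debug play. A minimal sketch, assuming the same `hc-nodes` inventory group; the play itself is illustrative and not part of the shipped playbooks:

```yaml
# Illustrative check only: print the member order that templating will
# substitute for groups['hc-nodes'] before running the deployment.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Show resolved member order of the hc-nodes group
      debug:
        var: groups['hc-nodes']
```

If the printed order differs from the order in which the hosts appear in the inventory file, brick placement will differ accordingly.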
[1] https://github.com/gluster/gluster-ansible/commit/77dd66e

(In reply to Sachidananda Urs from comment #3)

Thanks for the information, that helps.

The patch is already merged upstream; moving the bug to MODIFIED.

The dependent gluster-ansible bug is already ON_QA; moving this bug to that state as well.

Tested with ovirt-ansible-roles-1.0.3. The volumes are created on the hosts as per the playbook definitions.
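After deployment, brick placement order can be compared against the playbook definition using the same `gluster volume info` output shown in comment #1. A hedged sketch of such a verification play; the target host and volume names follow the examples above and are illustrative:

```yaml
# Illustrative verification play: list the Brick lines for each volume
# so their order can be compared with the inventory definition.
- hosts: 10.70.43.169
  gather_facts: false
  tasks:
    - name: Collect brick ordering for each HCI volume
      command: gluster volume info {{ item }}
      register: volinfo
      changed_when: false
      loop:
        - engine
        - data
        - vmstore

    - name: Show Brick lines in placement order
      debug:
        msg: "{{ item.stdout_lines | select('match', '^Brick[0-9]') | list }}"
      loop: "{{ volinfo.results }}"
```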