Created attachment 1539123 [details]
import fail

Description of problem:
When storage nodes do not have at least one LVM volume, the import flow fails. I created a 6-node setup without any LVM partitions; it also has no Gluster volumes. When I tried to import the cluster, it kept waiting for the gluster-integration sync to happen and timed out after some time.

Error:
Failure in Job 21518183-cef3-4bb0-a8c7-39505a7d9948 Flow tendrl.flows.ImportCluster with error:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/tendrl/commons/jobs/__init__.py", line 240, in process_job
    the_flow.run()
  File "/usr/lib/python2.7/site-packages/tendrl/commons/flows/import_cluster/__init__.py", line 131, in run
    exc_traceback)
FlowExecutionFailedError: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.7/site-packages/tendrl/commons/flows/import_cluster/__init__.py", line 98, in run\n super(ImportCluster, self).run()\n', ' File "/usr/lib/python2.7/site-packages/tendrl/commons/flows/__init__.py", line 186, in run\n (atom_fqn, self._defs[\'help\'])\n', 'AtomExecutionFailedError: Atom Execution failed. Error: Error executing atom: tendrl.objects.Cluster.atoms.ImportCluster on flow: Import existing Gluster Cluster\n']

Version-Release number of selected component (if applicable):
tendrl-gluster-integration-1.6.3-14.el7rhgs.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create Gluster cluster nodes without any LVM partition on the disks.
2. Import the detected cluster using WA.
3. Wait for some time; the import flow status is shown as failed.

Actual results:
WA is unable to import a detected cluster when any one (or all) of the nodes does not have at least one LVM volume on any disk.

Expected results:
WA should be able to import the cluster.

Additional info:
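The failure mode above suggests the sync/import logic assumes every node exposes at least one LVM device. A minimal sketch of the kind of defensive pre-check that avoids this assumption, walking `lsblk -J`-style JSON device trees; the function name and sample data are hypothetical illustrations, not the actual tendrl code:

```python
import json

def has_lvm_device(lsblk_json):
    """Return True if any block device (or child) is an LVM logical volume.

    Walks the device tree produced by `lsblk -J -o NAME,TYPE`; a node
    with no LVM at all should simply yield False, not hang or raise.
    """
    def walk(devices):
        for dev in devices:
            if dev.get("type") == "lvm":
                return True
            if walk(dev.get("children", [])):
                return True
        return False

    return walk(json.loads(lsblk_json).get("blockdevices", []))

# Hypothetical sample: a node with a plain partition and no LVM.
no_lvm = ('{"blockdevices": [{"name": "vda", "type": "disk",'
          ' "children": [{"name": "vda1", "type": "part"}]}]}')
# Hypothetical sample: a node with one LVM logical volume.
with_lvm = ('{"blockdevices": [{"name": "vda", "type": "disk",'
            ' "children": [{"name": "rhgs-lv", "type": "lvm"}]}]}')

print(has_lvm_device(no_lvm))    # → False
print(has_lvm_device(with_lvm))  # → True
```

With a check like this, the import flow could treat "no LVM anywhere" as a valid configuration rather than waiting indefinitely for LVM data that will never arrive.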
From the QE perspective, this is the same case as BZ 1676897, which has already been acked and recognized as a regression. Dev insisted on creating a new BZ to track work on this special case.
Tested and verified both by the automated test suite (on the same cluster and configuration where the bug was discovered) and manually on a freshly installed cluster. The cluster was properly imported into RHGS WA.

Version-Release number of selected component:

Web Administration Server:
Red Hat Enterprise Linux Server release 7.6 (Maipo)
collectd-5.7.2-3.1.el7rhgs.x86_64
collectd-ping-5.7.2-3.1.el7rhgs.x86_64
etcd-3.2.7-1.el7.x86_64
libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
python-etcd-0.4.5-2.el7rhgs.noarch
rubygem-etcd-0.3.0-2.el7rhgs.noarch
tendrl-ansible-1.6.3-11.el7rhgs.noarch
tendrl-api-1.6.3-13.el7rhgs.noarch
tendrl-api-httpd-1.6.3-13.el7rhgs.noarch
tendrl-commons-1.6.3-17.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-21.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-3.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-21.el7rhgs.noarch
tendrl-node-agent-1.6.3-18.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-3.el7rhgs.noarch
tendrl-ui-1.6.3-15.el7rhgs.noarch

Gluster Storage Server:
Red Hat Enterprise Linux Server release 7.6 (Maipo)
Red Hat Gluster Storage Server 3.4
collectd-5.7.2-3.1.el7rhgs.x86_64
collectd-ping-5.7.2-3.1.el7rhgs.x86_64
glusterfs-3.12.2-45.el7rhgs.x86_64
glusterfs-api-3.12.2-45.el7rhgs.x86_64
glusterfs-cli-3.12.2-45.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-45.el7rhgs.x86_64
glusterfs-events-3.12.2-45.el7rhgs.x86_64
glusterfs-fuse-3.12.2-45.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-45.el7rhgs.x86_64
glusterfs-libs-3.12.2-45.el7rhgs.x86_64
glusterfs-rdma-3.12.2-45.el7rhgs.x86_64
glusterfs-server-3.12.2-45.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.4.x86_64
python2-gluster-3.12.2-45.el7rhgs.x86_64
python-etcd-0.4.5-2.el7rhgs.noarch
tendrl-collectd-selinux-1.5.4-3.el7rhgs.noarch
tendrl-commons-1.6.3-17.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-15.el7rhgs.noarch
tendrl-node-agent-1.6.3-18.el7rhgs.noarch
tendrl-selinux-1.5.4-3.el7rhgs.noarch
vdsm-gluster-4.19.43-2.3.el7rhgs.noarch

>> VERIFIED
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0660