Bug 1398752 - Host initialization fails if an ISO is mounted on the RHSC node
Summary: Host initialization fails if an ISO is mounted on the RHSC node
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat
Component: agent
Version: 2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Shubhendu Tripathi
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-25 19:08 UTC by Vimal Kumar
Modified: 2020-04-15 14:54 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-23 03:56:26 UTC
Target Upstream Version:


Attachments

Description Vimal Kumar 2016-11-25 19:08:20 UTC
a) Description of problem:

After installing the RHSC, the Ceph hosts appear on the console. The host signature is accepted, and each host shows up in the `Hosts` tab with the status `Initializing`.

If an ISO image is kept mounted on the host, host initialization fails. It succeeds if no ISOs are mounted.

Case 1: `df` output and skyring.log with ISOs mounted:

[root@ ~]# df
Filesystem      1K-blocks    Used  Available Use% Mounted on
/dev/md127       26215932 2192016   24023916   9% /
devtmpfs        131959176       0  131959176   0% /dev
tmpfs           131969540      12  131969528   1% /dev/shm
tmpfs           131969540   17492  131952048   1% /run
tmpfs           131969540       0  131969540   0% /sys/fs/cgroup
/dev/loop0         122500  122500          0 100% /mnt/repo_consola
/dev/loop1         152680  152680          0 100% /mnt/repo_ceph
/dev/sdo1          505580  128288     377292  26% /boot
/dev/md126       88253336 3981640   84271696   5% /home
/dev/loop2        3947824 3947824          0 100% /mnt/rhel7

~~~
2016-10-06T12:16:10+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, 'hostname', 'test.ping'), kwargs={}
2016-10-06T12:16:10+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': True}
2016-10-06T12:16:11+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, ['hostname'], 'grains.item', ['machine_id']), kwargs={'expr_form': 'list'}
2016-10-06T12:16:11+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': {'machine_id': '57c9034836034a8489de8974099b211e'}}
2016-10-06T12:16:11+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, ['hostname'], ['grains.item', 'network.subnets'], [['ipv4', 'ipv6'], []]), kwargs={'expr_form': 'list'}
2016-10-06T12:16:11+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': {'grains.item': {'ipv4': ['1.1.1.6', '127.0.0.1', '20.48.0.26', '20.48.22.16'], 'ipv6': ['::1', 'fe80::e20e:daff:fe00:893f', 'fe80::e20e:daff:fe00:8940']}, 'network.subnets': ['20.48.0.0/23', '1.1.1.0/24', '20.48.22.0/23']}}
2016-10-06T12:16:11+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, 'hostname', 'test.ping'), kwargs={}
2016-10-06T12:16:11+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': True}
2016-10-06T12:16:11+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, ['hostname'], 'cmd.run', ["lsblk --all --bytes --noheadings --output='NAME,KNAME,FSTYPE,MOUNTPOINT,UUID,PARTUUID,MODEL,SIZE,TYPE,PKNAME,VENDOR' --path --raw"]), kwargs={'expr_form': 'list'}
2016-10-06T12:16:11+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': '/dev/sda /dev/sda     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sda1 /dev/sda1 xfs /var/lib/ceph/osd/ceph-0 cb8e4660-5efd-4aac-8241-f94ff485b521 00dbb2ff-db1e-42bf-a6d8-678cade6012f  4000785964544 part /dev/sda \n/dev/sdb /dev/sdb     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdb1 /dev/sdb1 xfs /var/lib/ceph/osd/ceph-3 2d18e0b4-b03f-4349-981b-97baa916cab4 0005c1ea-aeb6-4a42-97c8-063ac04377e3  4000785964544 part /dev/sdb \n/dev/sdc /dev/sdc     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdc1 /dev/sdc1 xfs /var/lib/ceph/osd/ceph-6 2fa0326e-4fbd-44e3-b479-72b2ca287955 3070b6ac-6922-4e73-9bce-88ef129f15b8  4000785964544 part /dev/sdc \n/dev/sdd /dev/sdd     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdd1 /dev/sdd1 xfs /var/lib/ceph/osd/ceph-11 4265c5e3-a079-450a-812a-746895aeae89 6fd8ccff-4008-4ae5-8ea3-e547d133268e  4000785964544 part /dev/sdd \n/dev/sde /dev/sde     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sde1 /dev/sde1 xfs /var/lib/ceph/osd/ceph-13 d3470ec4-0f1b-46fd-ad00-3f650e1d0f61 0ff07e64-2faf-4c29-9a1b-bbcde3278e4c  4000785964544 part /dev/sde \n/dev/sdf /dev/sdf     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdf1 /dev/sdf1 xfs /var/lib/ceph/osd/ceph-16 92625bb7-e493-49a0-8b19-55ecda54c6a4 1af40acf-78f5-4a48-8ecc-66e871068d5a  4000785964544 part /dev/sdf \n/dev/sdg /dev/sdg     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdg1 /dev/sdg1 xfs /var/lib/ceph/osd/ceph-18 3ad5cc49-abfe-405d-9624-9ae941b8fea3 26926f0f-7420-490a-abc5-0163fe6c3736  4000785964544 part /dev/sdg \n/dev/sdh /dev/sdh     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdh1 /dev/sdh1 xfs /var/lib/ceph/osd/ceph-22 0e9cb7e4-f335-4c95-9808-efd71cd7cc14 
69400b4d-17a6-480a-bb27-8d7e7f294ce1  4000785964544 part /dev/sdh \n/dev/sdi /dev/sdi     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdi1 /dev/sdi1 xfs /var/lib/ceph/osd/ceph-24 b3acbc97-f484-4375-a781-71451ca159c1 a1ccc8cd-753d-4d36-8b7a-4f7910cb9e48  4000785964544 part /dev/sdi \n/dev/sdj /dev/sdj     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdj1 /dev/sdj1 xfs /var/lib/ceph/osd/ceph-28 a05da2f2-4ca0-4418-a102-8d50649e637f cc0ec7f1-ad9d-433b-af36-3120111eaee0  4000785964544 part /dev/sdj \n/dev/sdk /dev/sdk     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdk1 /dev/sdk1 xfs /var/lib/ceph/osd/ceph-31 14b18c8f-fc33-436b-a2c0-eb3d74a19378 ce0ba8e2-7cec-48fd-ba8b-567350cee7b3  4000785964544 part /dev/sdk \n/dev/sdl /dev/sdl     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdl1 /dev/sdl1 xfs /var/lib/ceph/osd/ceph-34 ce76caf0-b4b3-48ac-a538-620fbca551ed 74264594-d34c-4852-85ac-b12c8ab1aa80  4000785964544 part /dev/sdl \n/dev/sdm /dev/sdm     MZ6ER400HAGL/003 400088457216 disk  SAMSUNG\\x20\n/dev/sdm1 /dev/sdm1    1af815c6-2989-4352-a379-effd794a5008  51380224000 part /dev/sdm \n/dev/sdm2 /dev/sdm2    8bd17f51-1f6e-4624-9779-d93b5601b097  51380224000 part /dev/sdm \n/dev/sdm3 /dev/sdm3    153c59de-527b-426a-b668-7dabd6557ff1  51380224000 part /dev/sdm \n/dev/sdm4 /dev/sdm4    d8ed29d7-1c22-4d58-87b0-d4249ae7e14e  51380224000 part /dev/sdm \n/dev/sdm5 /dev/sdm5    0d4f8a47-0a65-4ae2-a64f-3632b14da4ac  51380224000 part /dev/sdm \n/dev/sdm6 /dev/sdm6    592bded0-359a-49da-9b28-c3c31bf3416c  51380224000 part /dev/sdm \n/dev/sdn /dev/sdn     MZ6ER400HAGL/003 400088457216 disk  SAMSUNG\\x20\n/dev/sdn1 /dev/sdn1    9d919b16-642f-4902-939b-7a3fc9d92e66  51380224000 part /dev/sdn \n/dev/sdn2 /dev/sdn2    952dc407-091f-4444-94d1-e7cd140be8d8  51380224000 part /dev/sdn \n/dev/sdn3 /dev/sdn3    d9f683dd-57df-4d93-9b37-8769a4301537  51380224000 part /dev/sdn \n/dev/sdn4 
/dev/sdn4    d4c9b416-d3ad-4645-8a53-de96f394119e  51380224000 part /dev/sdn \n/dev/sdn5 /dev/sdn5    4290714e-b66f-4f82-94df-65a7cb4b2c7f  51380224000 part /dev/sdn \n/dev/sdn6 /dev/sdn6    bc9c50dc-92c5-4cc4-9ceb-da1929263d44  51380224000 part /dev/sdn \n/dev/sdo /dev/sdo     INTEL\\x20SSDSC2BB12 120034123776 disk  ATA\\x20\\x20\\x20\\x20\\x20\n/dev/sdo1 /dev/sdo1 xfs /boot b20a049e-4e4a-4e5b-8799-c5c061d592d2   524288000 part /dev/sdo \n/dev/sdo2 /dev/sdo2 linux_raid_member  6d301bd3-cf8a-4085-5fb2-e47ab2fa017d   26875002880 part /dev/sdo \n/dev/md127 /dev/md127 xfs / 74277e50-ff62-4858-a81c-0a9cb3fd088b   26858225664 raid1 /dev/sdo2 \n/dev/sdo3 /dev/sdo3 LVM2_member  Mye7fn-b8EY-fdlr-GJ8y-fXM7-ZHRH-Yov8D3   2149580800 part /dev/sdo \n/dev/mapper/rhel-swap /dev/dm-0 swap [SWAP] b9d7eb46-da00-42e6-947d-20548461233f   4294967296 lvm /dev/sdo3 \n/dev/sdo4 /dev/sdo4      1024 part /dev/sdo \n/dev/sdo5 /dev/sdo5 linux_raid_member  b8adafe0-771f-5753-a65a-b60b59f2b14f   90482671616 part /dev/sdo \n/dev/md126 /dev/md126 xfs /home 0389eab2-e9ac-43b9-88ff-7add22f76ea6   90415562752 raid1 /dev/sdo5 \n/dev/sdp /dev/sdp     INTEL\\x20SSDSC2BB12 120034123776 disk  ATA\\x20\\x20\\x20\\x20\\x20\n/dev/sdp1 /dev/sdp1 linux_raid_member  6d301bd3-cf8a-4085-5fb2-e47ab2fa017d   26875002880 part /dev/sdp \n/dev/md127 /dev/md127 xfs / 74277e50-ff62-4858-a81c-0a9cb3fd088b   26858225664 raid1 /dev/sdp1 \n/dev/sdp2 /dev/sdp2 LVM2_member  uzmmWC-Hm35-x8D9-eDM4-K4zn-KhK6-kbh4yS   2149580800 part /dev/sdp \n/dev/mapper/rhel-swap /dev/dm-0 swap [SWAP] b9d7eb46-da00-42e6-947d-20548461233f   4294967296 lvm /dev/sdp2 \n/dev/sdp3 /dev/sdp3 linux_raid_member  b8adafe0-771f-5753-a65a-b60b59f2b14f   90483720192 part /dev/sdp \n/dev/md126 /dev/md126 xfs /home 0389eab2-e9ac-43b9-88ff-7add22f76ea6   90415562752 raid1 /dev/sdp3 \n/dev/loop0 /dev/loop0 iso9660 /mnt/repo_consola 2016-08-18-15-03-10-00   125440000 loop  \n/dev/loop1 /dev/loop1 iso9660 /mnt/repo_ceph 2016-08-18-13-51-59-00   156344320 loop 
 \n/dev/loop2 /dev/loop2 iso9660 /mnt/rhel7 2015-10-30-11-11-49-00   4043309056 loop'}
2016-10-06T12:16:11.859+02:00 ERROR    salt_node_manager.go:110 GetStorageNodeInstance] admin:a40a4ccb-fef6-4509-920f-355d52b5b389-Error getting disk details for node: hostname. error: GetNodeDisk(): exception happened in python side
2016-10-06T12:16:11.859+02:00 CRITICAL util.go:458 initializeStorageNode] admin:a40a4ccb-fef6-4509-920f-355d52b5b389-Error getting the details for node: hostname
~~~
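The `lsblk --raw` invocation above hints at a plausible failure mode: in `--raw` mode, columns are separated by a single space and empty fields are still emitted (embedded spaces in values are escaped as `\x20`), so a parser that splits on arbitrary whitespace collapses the empty columns and loses field alignment for rows such as the `iso9660` loop devices. The following is a minimal, hypothetical sketch of position-preserving parsing; it is not the actual agent/GetNodeDisk code:

```python
# Hypothetical sketch of parsing `lsblk --raw` output (not the agent's code).
# Column list matches the --output argument used in the log above.
COLS = ['NAME', 'KNAME', 'FSTYPE', 'MOUNTPOINT', 'UUID', 'PARTUUID',
        'MODEL', 'SIZE', 'TYPE', 'PKNAME', 'VENDOR']

def parse_lsblk_raw(output):
    """Parse `lsblk --raw` lines into per-device dicts.

    In --raw mode columns are separated by exactly one space and empty
    fields are emitted as empty strings, so we must split on ' ' rather
    than the default whitespace-collapsing str.split(), or rows with
    empty columns (e.g. loop devices) fall out of alignment.
    """
    devices = []
    for line in output.splitlines():
        if not line:
            continue
        fields = line.split(' ')  # keep empty fields
        if len(fields) != len(COLS):
            raise ValueError('unexpected field count in line: %r' % line)
        devices.append(dict(zip(COLS, fields)))
    return devices
```

With this splitting, a mounted-ISO row parses cleanly (`FSTYPE='iso9660'`, `TYPE='loop'`, `SIZE='125440000'`), whereas a naive `line.split()` on the same row yields only 7 tokens instead of 11.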

Case 2: `df` output and skyring.log with no ISOs mounted:

[root@ ~]# df
Filesystem      1K-blocks    Used  Available Use% Mounted on
/dev/md127       26215932 2185676   24030256   9% /
devtmpfs        131959176       0  131959176   0% /dev
tmpfs           131969540      12  131969528   1% /dev/shm
tmpfs           131969540   17492  131952048   1% /run
tmpfs           131969540       0  131969540   0% /sys/fs/cgroup
/dev/sdo1          505580  128288     377292  26% /boot
/dev/md126       88253336 3981640   84271696   5% /home
/dev/sdd1      3905109820 1450780 3903659040   1% /var/lib/ceph/osd/ceph-11
/dev/sdh1      3905109820 1365972 3903743848   1% /var/lib/ceph/osd/ceph-22
/dev/sdl1      3905109820 1201524 3903908296   1% /var/lib/ceph/osd/ceph-34
/dev/sdg1      3905109820 1162088 3903947732   1% /var/lib/ceph/osd/ceph-18
/dev/sde1      3905109820 1437496 3903672324   1% /var/lib/ceph/osd/ceph-13
/dev/sdj1      3905109820 1847432 3903262388   1% /var/lib/ceph/osd/ceph-28
/dev/sdb1      3905109820 1330052 3903779768   1% /var/lib/ceph/osd/ceph-3
/dev/sdi1      3905109820 1256072 3903853748   1% /var/lib/ceph/osd/ceph-24
/dev/sdc1      3905109820 1338472 3903771348   1% /var/lib/ceph/osd/ceph-6
/dev/sdf1      3905109820 1285416 3903824404   1% /var/lib/ceph/osd/ceph-16
/dev/sda1      3905109820 1386656 3903723164   1% /var/lib/ceph/osd/ceph-0
/dev/sdk1      3905109820 1579424 3903530396   1% /var/lib/ceph/osd/ceph-31
tmpfs            26393908       0   26393908   0% /run/user/0

~~~
2016-10-06T12:16:57+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, 'hostname', 'test.ping'), kwargs={}
2016-10-06T12:16:57+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': True}
2016-10-06T12:16:57+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, ['hostname'], 'grains.item', ['machine_id']), kwargs={'expr_form': 'list'}
2016-10-06T12:16:57+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': {'machine_id': '57c9034836034a8489de8974099b211e'}}
2016-10-06T12:16:57+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, ['hostname'], ['grains.item', 'network.subnets'], [['ipv4', 'ipv6'], []]), kwargs={'expr_form': 'list'}
2016-10-06T12:16:57+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': {'grains.item': {'ipv4': ['1.1.1.6', '127.0.0.1', '20.48.0.26', '20.48.22.16'], 'ipv6': ['::1', 'fe80::e20e:daff:fe00:893f', 'fe80::e20e:daff:fe00:8940']}, 'network.subnets': ['20.48.0.0/23', '1.1.1.0/24', '20.48.22.0/23']}}
2016-10-06T12:16:57+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, 'hostname', 'test.ping'), kwargs={}
2016-10-06T12:16:57+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': True}
2016-10-06T12:16:57+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, ['hostname'], 'cmd.run', ["lsblk --all --bytes --noheadings --output='NAME,KNAME,FSTYPE,MOUNTPOINT,UUID,PARTUUID,MODEL,SIZE,TYPE,PKNAME,VENDOR' --path --raw"]), kwargs={'expr_form': 'list'}
2016-10-06T12:16:57+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': '/dev/sda /dev/sda     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sda1 /dev/sda1 xfs /var/lib/ceph/osd/ceph-0 cb8e4660-5efd-4aac-8241-f94ff485b521 00dbb2ff-db1e-42bf-a6d8-678cade6012f  4000785964544 part /dev/sda \n/dev/sdb /dev/sdb     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdb1 /dev/sdb1 xfs /var/lib/ceph/osd/ceph-3 2d18e0b4-b03f-4349-981b-97baa916cab4 0005c1ea-aeb6-4a42-97c8-063ac04377e3  4000785964544 part /dev/sdb \n/dev/sdc /dev/sdc     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdc1 /dev/sdc1 xfs /var/lib/ceph/osd/ceph-6 2fa0326e-4fbd-44e3-b479-72b2ca287955 3070b6ac-6922-4e73-9bce-88ef129f15b8  4000785964544 part /dev/sdc \n/dev/sdd /dev/sdd     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdd1 /dev/sdd1 xfs /var/lib/ceph/osd/ceph-11 4265c5e3-a079-450a-812a-746895aeae89 6fd8ccff-4008-4ae5-8ea3-e547d133268e  4000785964544 part /dev/sdd \n/dev/sde /dev/sde     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sde1 /dev/sde1 xfs /var/lib/ceph/osd/ceph-13 d3470ec4-0f1b-46fd-ad00-3f650e1d0f61 0ff07e64-2faf-4c29-9a1b-bbcde3278e4c  4000785964544 part /dev/sde \n/dev/sdf /dev/sdf     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdf1 /dev/sdf1 xfs /var/lib/ceph/osd/ceph-16 92625bb7-e493-49a0-8b19-55ecda54c6a4 1af40acf-78f5-4a48-8ecc-66e871068d5a  4000785964544 part /dev/sdf \n/dev/sdg /dev/sdg     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdg1 /dev/sdg1 xfs /var/lib/ceph/osd/ceph-18 3ad5cc49-abfe-405d-9624-9ae941b8fea3 26926f0f-7420-490a-abc5-0163fe6c3736  4000785964544 part /dev/sdg \n/dev/sdh /dev/sdh     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdh1 /dev/sdh1 xfs /var/lib/ceph/osd/ceph-22 0e9cb7e4-f335-4c95-9808-efd71cd7cc14 
69400b4d-17a6-480a-bb27-8d7e7f294ce1  4000785964544 part /dev/sdh \n/dev/sdi /dev/sdi     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdi1 /dev/sdi1 xfs /var/lib/ceph/osd/ceph-24 b3acbc97-f484-4375-a781-71451ca159c1 a1ccc8cd-753d-4d36-8b7a-4f7910cb9e48  4000785964544 part /dev/sdi \n/dev/sdj /dev/sdj     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdj1 /dev/sdj1 xfs /var/lib/ceph/osd/ceph-28 a05da2f2-4ca0-4418-a102-8d50649e637f cc0ec7f1-ad9d-433b-af36-3120111eaee0  4000785964544 part /dev/sdj \n/dev/sdk /dev/sdk     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdk1 /dev/sdk1 xfs /var/lib/ceph/osd/ceph-31 14b18c8f-fc33-436b-a2c0-eb3d74a19378 ce0ba8e2-7cec-48fd-ba8b-567350cee7b3  4000785964544 part /dev/sdk \n/dev/sdl /dev/sdl     ST4000NM0023\\x20\\x20\\x20\\x20 4000787030016 disk  SEAGATE\\x20\n/dev/sdl1 /dev/sdl1 xfs /var/lib/ceph/osd/ceph-34 ce76caf0-b4b3-48ac-a538-620fbca551ed 74264594-d34c-4852-85ac-b12c8ab1aa80  4000785964544 part /dev/sdl \n/dev/sdm /dev/sdm     MZ6ER400HAGL/003 400088457216 disk  SAMSUNG\\x20\n/dev/sdm1 /dev/sdm1    1af815c6-2989-4352-a379-effd794a5008  51380224000 part /dev/sdm \n/dev/sdm2 /dev/sdm2    8bd17f51-1f6e-4624-9779-d93b5601b097  51380224000 part /dev/sdm \n/dev/sdm3 /dev/sdm3    153c59de-527b-426a-b668-7dabd6557ff1  51380224000 part /dev/sdm \n/dev/sdm4 /dev/sdm4    d8ed29d7-1c22-4d58-87b0-d4249ae7e14e  51380224000 part /dev/sdm \n/dev/sdm5 /dev/sdm5    0d4f8a47-0a65-4ae2-a64f-3632b14da4ac  51380224000 part /dev/sdm \n/dev/sdm6 /dev/sdm6    592bded0-359a-49da-9b28-c3c31bf3416c  51380224000 part /dev/sdm \n/dev/sdn /dev/sdn     MZ6ER400HAGL/003 400088457216 disk  SAMSUNG\\x20\n/dev/sdn1 /dev/sdn1    9d919b16-642f-4902-939b-7a3fc9d92e66  51380224000 part /dev/sdn \n/dev/sdn2 /dev/sdn2    952dc407-091f-4444-94d1-e7cd140be8d8  51380224000 part /dev/sdn \n/dev/sdn3 /dev/sdn3    d9f683dd-57df-4d93-9b37-8769a4301537  51380224000 part /dev/sdn \n/dev/sdn4 
/dev/sdn4    d4c9b416-d3ad-4645-8a53-de96f394119e  51380224000 part /dev/sdn \n/dev/sdn5 /dev/sdn5    4290714e-b66f-4f82-94df-65a7cb4b2c7f  51380224000 part /dev/sdn \n/dev/sdn6 /dev/sdn6    bc9c50dc-92c5-4cc4-9ceb-da1929263d44  51380224000 part /dev/sdn \n/dev/sdo /dev/sdo     INTEL\\x20SSDSC2BB12 120034123776 disk  ATA\\x20\\x20\\x20\\x20\\x20\n/dev/sdo1 /dev/sdo1 xfs /boot b20a049e-4e4a-4e5b-8799-c5c061d592d2   524288000 part /dev/sdo \n/dev/sdo2 /dev/sdo2 linux_raid_member  6d301bd3-cf8a-4085-5fb2-e47ab2fa017d   26875002880 part /dev/sdo \n/dev/md127 /dev/md127 xfs / 74277e50-ff62-4858-a81c-0a9cb3fd088b   26858225664 raid1 /dev/sdo2 \n/dev/sdo3 /dev/sdo3 LVM2_member  Mye7fn-b8EY-fdlr-GJ8y-fXM7-ZHRH-Yov8D3   2149580800 part /dev/sdo \n/dev/mapper/rhel-swap /dev/dm-0 swap [SWAP] b9d7eb46-da00-42e6-947d-20548461233f   4294967296 lvm /dev/sdo3 \n/dev/sdo4 /dev/sdo4      1024 part /dev/sdo \n/dev/sdo5 /dev/sdo5 linux_raid_member  b8adafe0-771f-5753-a65a-b60b59f2b14f   90482671616 part /dev/sdo \n/dev/md126 /dev/md126 xfs /home 0389eab2-e9ac-43b9-88ff-7add22f76ea6   90415562752 raid1 /dev/sdo5 \n/dev/sdp /dev/sdp     INTEL\\x20SSDSC2BB12 120034123776 disk  ATA\\x20\\x20\\x20\\x20\\x20\n/dev/sdp1 /dev/sdp1 linux_raid_member  6d301bd3-cf8a-4085-5fb2-e47ab2fa017d   26875002880 part /dev/sdp \n/dev/md127 /dev/md127 xfs / 74277e50-ff62-4858-a81c-0a9cb3fd088b   26858225664 raid1 /dev/sdp1 \n/dev/sdp2 /dev/sdp2 LVM2_member  uzmmWC-Hm35-x8D9-eDM4-K4zn-KhK6-kbh4yS   2149580800 part /dev/sdp \n/dev/mapper/rhel-swap /dev/dm-0 swap [SWAP] b9d7eb46-da00-42e6-947d-20548461233f   4294967296 lvm /dev/sdp2 \n/dev/sdp3 /dev/sdp3 linux_raid_member  b8adafe0-771f-5753-a65a-b60b59f2b14f   90483720192 part /dev/sdp \n/dev/md126 /dev/md126 xfs /home 0389eab2-e9ac-43b9-88ff-7add22f76ea6   90415562752 raid1 /dev/sdp3 \n/dev/loop0 /dev/loop0       loop  \n/dev/loop1 /dev/loop1       loop  \n/dev/loop2 /dev/loop2       loop'}
2016-10-06T12:16:57+0000 ERROR    saltwrapper.py:232 saltwrapper.GetNodeDisk] admin:521b21ea-e6b7-490d-a03f-ded1d56607a0-Skipping the disk: /dev/loop2 as size field is blank.
2016-10-06T12:16:57+0000 ERROR    saltwrapper.py:232 saltwrapper.GetNodeDisk] admin:521b21ea-e6b7-490d-a03f-ded1d56607a0-Skipping the disk: /dev/loop1 as size field is blank.
2016-10-06T12:16:57+0000 ERROR    saltwrapper.py:232 saltwrapper.GetNodeDisk] admin:521b21ea-e6b7-490d-a03f-ded1d56607a0-Skipping the disk: /dev/loop0 as size field is blank.
2016-10-06T12:16:58+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, ['hostname'], 'cmd.run', ['lscpu']), kwargs={'expr_form': 'list'}
2016-10-06T12:16:58+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': 'Architecture:          x86_64\nCPU op-mode(s):        32-bit, 64-bit\nByte Order:            Little Endian\nCPU(s):                40\nOn-line CPU(s) list:   0-39\nThread(s) per core:    2\nCore(s) per socket:    10\nSocket(s):             2\nNUMA node(s):          2\nVendor ID:             GenuineIntel\nCPU family:            6\nModel:                 62\nModel name:            Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz\nStepping:              4\nCPU MHz:               2492.445\nBogoMIPS:              4405.99\nVirtualization:        VT-x\nL1d cache:             32K\nL1i cache:             32K\nL2 cache:              256K\nL3 cache:              25600K\nNUMA node0 CPU(s):     0-9,20-29\nNUMA node1 CPU(s):     10-19,30-39'}
2016-10-06T12:16:58+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, ['hostname'], 'grains.item', ['osfullname']), kwargs={'expr_form': 'list'}
2016-10-06T12:16:58+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': {'osfullname': 'Red Hat Enterprise Linux Server'}}
2016-10-06T12:16:58+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, ['hostname'], 'grains.item', ['osrelease']), kwargs={'expr_form': 'list'}
2016-10-06T12:16:58+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': {'osrelease': '7.2'}}
2016-10-06T12:16:58+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, ['hostname'], 'cmd.run', ['uname --all']), kwargs={'expr_form': 'list'}
2016-10-06T12:16:58+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': 'Linux hostname 3.10.0-327.28.3.el7.x86_64 #1 SMP Fri Aug 12 13:21:05 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux'}
2016-10-06T12:16:58+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, ['hostname'], 'cmd.run', ['getenforce']), kwargs={'expr_form': 'list'}
2016-10-06T12:16:58+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': 'Enforcing'}
2016-10-06T12:16:58+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24e6c50>, ['hostname'], 'cmd.run', ['cat /proc/meminfo']), kwargs={'expr_form': 'list'}
2016-10-06T12:16:58+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': 'MemTotal:       263939080 kB\nMemFree:        256536328 kB\nMemAvailable:   252715072 kB\nBuffers:           12504 kB\nCached:          2690216 kB\nSwapCached:            0 kB\nActive:          2997268 kB\nInactive:        2355756 kB\nActive(anon):    2650936 kB\nInactive(anon):    16916 kB\nActive(file):     346332 kB\nInactive(file):  2338840 kB\nUnevictable:           0 kB\nMlocked:               0 kB\nSwapTotal:       4194300 kB\nSwapFree:        4194300 kB\nDirty:            241892 kB\nWriteback:             0 kB\nAnonPages:       2649272 kB\nMapped:           118836 kB\nShmem:             17508 kB\nSlab:             322332 kB\nSReclaimable:     158024 kB\nSUnreclaim:       164308 kB\nKernelStack:       69584 kB\nPageTables:        24428 kB\nNFS_Unstable:          0 kB\nBounce:                0 kB\nWritebackTmp:          0 kB\nCommitLimit:    136163840 kB\nCommitted_AS:   14802540 kB\nVmallocTotal:   34359738367 kB\nVmallocUsed:      708532 kB\nVmallocChunk:   34224707584 kB\nHardwareCorrupted:     0 kB\nAnonHugePages:    712704 kB\nHugePages_Total:       0\nHugePages_Free:        0\nHugePages_Rsvd:        0\nHugePages_Surp:        0\nHugepagesize:       2048 kB\nDirectMap4k:      171584 kB\nDirectMap2M:     3987456 kB\nDirectMap1G:    266338304 kB'}
2016-10-06T12:16:58+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24c20d0>, ('hostname',), 'state.sls', ['collectd.df,collectd.memory,collectd.cpu,collectd.disk,collectd.network,collectd.swap,collectd.dbpush']), kwargs={'kwarg': {'pillar': {'collectd': {'thresholds': {'df': {'WarningMax': '80', 'FailureMax': '90'}, 'memory': {'WarningMax': '80', 'FailureMax': '90'}, 'cpu': {'WarningMax': '80', 'FailureMax': '90'}, 'swap': {'WarningMax': '50', 'FailureMax': '70'}}, 'master_name': 'jumpbox.iaas.lab'}}}, 'expr_form': 'list'}


2016-10-06T12:17:00+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': {'file_|-/etc/collectd.d/dbpush.conf_|-/etc/collectd.d/dbpush.conf_|-managed': {'comment': 'File /etc/collectd.d/dbpush.conf is in the correct state', 'name': '/etc/collectd.d/dbpush.conf', 'start_time': '12:44:36.565262', 'result': True, 'duration': 8.839, '__run_num__': 4, 'changes': {}}, 'file_|-/etc/collectd.d/cpu.conf_|-/etc/collectd.d/cpu.conf_|-managed': {'comment': 'File /etc/collectd.d/cpu.conf is in the correct state', 'name': '/etc/collectd.d/cpu.conf', 'start_time': '12:44:36.587513', 'result': True, 'duration': 8.0, '__run_num__': 7, 'changes': {}}, 'file_|-/etc/collectd.d_|-/etc/collectd.d_|-directory': {'comment': 'Directory /etc/collectd.d is in the correct state', 'name': '/etc/collectd.d', 'start_time': '12:44:36.545403', 'result': True, 'duration': 0.827, '__run_num__': 1, 'changes': {}}, 'file_|-/etc/collectd.d/disk.conf_|-/etc/collectd.d/disk.conf_|-managed': {'comment': 'File /etc/collectd.d/disk.conf is in the correct state', 'name': '/etc/collectd.d/disk.conf', 'start_time': '12:44:36.574199', 'result': True, 'duration': 5.821, '__run_num__': 5, 'changes': {}}, 'service_|-collectd-service_|-collectd_|-running': {'comment': 'Service collectd is already enabled, and is in the desired state', 'name': 'collectd', 'start_time': '12:44:36.626084', 'result': True, 'duration': 346.771, '__run_num__': 10, 'changes': {}}, 'file_|-/etc/collectd.d/swap.conf_|-/etc/collectd.d/swap.conf_|-managed': {'comment': 'File /etc/collectd.d/swap.conf is in the correct state', 'name': '/etc/collectd.d/swap.conf', 'start_time': '12:44:36.557074', 'result': True, 'duration': 8.089, '__run_num__': 3, 'changes': {}}, 'file_|-/etc/collectd.conf_|-/etc/collectd.conf_|-managed': {'comment': 'File /etc/collectd.conf is in the correct state', 'name': '/etc/collectd.conf', 'start_time': '12:44:36.595602', 'result': True, 'duration': 22.702, '__run_num__': 8, 'changes': {}}, 
'pkg_|-collectd_|-collectd_|-installed': {'comment': 'Package collectd is already installed.', 'name': 'collectd', 'start_time': '12:44:35.993273', 'result': True, 'duration': 549.966, '__run_num__': 0, 'changes': {}}, 'file_|-/etc/collectd.d/df.conf_|-/etc/collectd.d/df.conf_|-managed': {'comment': 'File /etc/collectd.d/df.conf is in the correct state', 'name': '/etc/collectd.d/df.conf', 'start_time': '12:44:36.546318', 'result': True, 'duration': 10.647, '__run_num__': 2, 'changes': {}}, 'file_|-/etc/collectd.d/network.conf_|-/etc/collectd.d/network.conf_|-managed': {'comment': 'File /etc/collectd.d/network.conf is in the correct state', 'name': '/etc/collectd.d/network.conf', 'start_time': '12:44:36.618397', 'result': True, 'duration': 6.901, '__run_num__': 9, 'changes': {}}, 'file_|-/etc/collectd.d/memory.conf_|-/etc/collectd.d/memory.conf_|-managed': {'comment': 'File /etc/collectd.d/memory.conf is in the correct state', 'name': '/etc/collectd.d/memory.conf', 'start_time': '12:44:36.580113', 'result': True, 'duration': 7.309, '__run_num__': 6, 'changes': {}}}}
2016-10-06T12:17:00+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24c20d0>, 'hostname', 'saltutil.sync_all'), kwargs={}
2016-10-06T12:17:01+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': {'outputters': [], 'grains': [], 'beacons': [], 'utils': [], 'returners': [], 'modules': [], 'renderers': [], 'states': []}}
2016-10-06T12:17:01+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24c20d0>, ['hostname'], 'service.restart', ['skynetd']), kwargs={'expr_form': 'list'}
2016-10-06T12:17:01+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': True}
2016-10-06T12:17:01+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24c20d0>, ['hostname'], 'cmd.run', ['dbus-send --system --type=method_call --dest=org.storaged.Storaged /org/storaged/Storaged/Manager org.storaged.Storaged.Manager.EnableModules boolean:true']), kwargs={'expr_form': 'list'}
2016-10-06T12:17:01+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': ''}
2016-10-06T12:17:01+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24c20d0>, 'hostname', 'service.enable', ['collectd']), kwargs={}
2016-10-06T12:17:02+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': True}
2016-10-06T12:17:02+0000 INFO saltwrapper.py:30 saltwrapper.wrapper] args=(<salt.client.LocalClient object at 0x24c20d0>, 'hostname', 'service.start', ['collectd']), kwargs={}
2016-10-06T12:17:02+0000 INFO saltwrapper.py:32 saltwrapper.wrapper] rv={'hostname': True}
~~~

b) Version-Release number of selected component (if applicable):

Red Hat Storage Console 2.0

c) How reproducible:

Always

d) Steps to Reproduce:

1. Mount an ISO image on the storage host (it appears as a /dev/loopN device).
2. Accept the host on the console so that initialization starts.
3. Initialization fails while the ISO remains mounted.

e) Actual results:

Host initialization fails.

f) Expected results:

The host initialization should not fail.

Comment 2 Shubhendu Tripathi 2016-12-12 03:54:49 UTC
I believe the same issue was reported by Riyas, and the documentation team is working on it.
I will include you in the mail thread for more clarity.

