Bug 1297019 - Smart State Analysis Fails for RHEL7.1 VM on RHEV/NFS with CloudForms 4
Status: CLOSED ERRATA
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: SmartState Analysis
Version: 5.5.0
Hardware: x86_64 Linux
Priority: high  Severity: high
Target Milestone: GA
Target Release: 5.6.0
Assigned To: Mo Morsi
QA Contact: Satyajit Bulage
Whiteboard: rhev
Keywords: ZStream
Depends On:
Blocks: 1303100
Reported: 2016-01-08 13:27 EST by james labocki
Modified: 2016-07-27 02:19 EDT (History)
CC List: 8 users

See Also:
Fixed In Version: 5.6.0.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1303100
Environment:
Last Closed: 2016-06-29 11:26:29 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1348 normal SHIPPED_LIVE CFME 5.6.0 bug fixes and enhancement update 2016-06-29 14:50:04 EDT

Description james labocki 2016-01-08 13:27:23 EST
Smart State Analysis fails with the following error:

[----] E, [2016-01-08T13:15:31.067564 #3488:faf988] ERROR -- : Q-task_id([bb5e2978-b633-11e5-ba08-001a4a579b01]) MIQ(VmScan#process_abort) job aborting, Unable to mount filesystem.  Reason:[undefined method `each_slice' for nil:NilClass]
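For context, the `Reason:[undefined method 'each_slice' for nil:NilClass]` text is a plain Ruby NoMethodError: a collection the scanner expected turned out to be nil. A minimal reproduction outside CloudForms (illustrative only; `stripes` here is a stand-in for the parsed LVM metadata):

```ruby
# Minimal reproduction of the failure mode: iterating a nil collection
# with each_slice, as the volume manager does when LVM metadata parsing
# yields no stripes array for a segment.
stripes = nil  # stand-in for the missing metadata
begin
  stripes.each_slice(2) { |pv, offset| }
rescue NoMethodError => e
  error = e
end
puts error.message  # "undefined method `each_slice' for nil:NilClass" on Ruby 2.2
```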


RHEV and CloudForms versions:

Red Hat Enterprise Virtualization Manager Version: 3.5.4.2-1.3.el6ev
CloudForms Version 5.5.0.13.20151201120956_653c0d4

Here is a `df` of the VM I am running smart state against:

[root@localhost ~]# df -T
Filesystem            Type     1K-blocks    Used Available Use% Mounted on
/dev/mapper/rhel-root xfs       18307072 1923272  16383800  11% /
devtmpfs              devtmpfs   1930172       0   1930172   0% /dev
tmpfs                 tmpfs      1940336       0   1940336   0% /dev/shm
tmpfs                 tmpfs      1940336   41100   1899236   3% /run
tmpfs                 tmpfs      1940336       0   1940336   0% /sys/fs/cgroup
/dev/vda1             xfs         508588  168392    340196  34% /boot
tmpfs                 tmpfs       388068       0    388068   0% /run/user/0
Comment 2 Dave Johnson 2016-01-08 14:35:24 EST
James reached out to QE, and we ran a quick test to try to reproduce the issue, but failed to do so. Our environment is as follows...

RHEVM: 3.5.5-0.1.el6ev (with NFS attached data domain)
CFME 5.5.0.13.20151201120956_653c0d4 
Same type of VM, RHEL 7.1 with XFS as well...

[root@localhost ~]# df -T
Filesystem            Type     1K-blocks   Used Available Use% Mounted on
/dev/mapper/rhel-root xfs        4192256 983348   3208908  24% /
devtmpfs              devtmpfs    243700      0    243700   0% /dev
tmpfs                 tmpfs       250060      0    250060   0% /dev/shm
tmpfs                 tmpfs       250060   4400    245660   2% /run
tmpfs                 tmpfs       250060      0    250060   0% /sys/fs/cgroup
/dev/vda1             xfs         508588  98084    410504  20% /boot


I reached out to James as well for connection details for further investigation. In the meantime, we are trying to find a RHEV 3.5.4 environment to test with.
Comment 4 Dave Johnson 2016-01-11 17:30:08 EST
Yes, it is something specific to the OSE VMs; the rest of them scan fine. I did spin up a second CFME, which helped, and reproduced it. I thought it was related to either multiple disks or multiple volume groups, but when I tried to use resources containing them in our QE lab, it failed to reproduce, so it seems to be something more specific.

Rich, here is that stack trace; hopefully it will tell us the specific issue we are hitting.


[----] D, [2016-01-11T08:57:28.147972 #60853:cdb98c] DEBUG -- : MiqVolumeManager: No LVM metadata detected on PV: /mnt/e9cada85-edbc-4c60-9473-89628eaf38d9/10.19.1.2:_nfsshare/6c67bc75-6584-47b9-bb20-0fa1b7e736ab/images/55bc3aa6-d2cb-402c-b06f-35228a91d11c/b0fdd4aa-f09a-4ba2-bfc5-c1c059f846bb, partition: 1
[----] I, [2016-01-11T08:57:28.149893 #60853:cdb98c]  INFO -- : MiqVolumeManager: LVM2 metadata detected on PV: /mnt/e9cada85-edbc-4c60-9473-89628eaf38d9/10.19.1.2:_nfsshare/6c67bc75-6584-47b9-bb20-0fa1b7e736ab/images/55bc3aa6-d2cb-402c-b06f-35228a91d11c/b0fdd4aa-f09a-4ba2-bfc5-c1c059f846bb, partition: 2
[----] D, [2016-01-11T08:57:28.149960 #60853:cdb98c] DEBUG -- : MiqVolumeManager.parseLvmMetadata: parsing LVM2 metadata
[----] E, [2016-01-11T08:57:28.152530 #60853:cdb98c] ERROR -- : Unable to mount filesystem.  Reason:[undefined method `each_slice' for nil:NilClass] for VM [/rhev/data-center/00000002-0002-0002-0002-000000000249/mastersd/master/vms/e9cada85-edbc-4c60-9473-89628eaf38d9/e9cada85-edbc-4c60-9473-89628eaf38d9.ovf]
[----] E, [2016-01-11T08:57:28.152708 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/VolumeManager/MiqLvm.rb:236:in `getSegObj'
[----] E, [2016-01-11T08:57:28.152760 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/VolumeManager/MiqLvm.rb:229:in `block in getLvObj'
[----] E, [2016-01-11T08:57:28.152800 #60853:cdb98c] ERROR -- : MIQExtract.new /opt/rh/cfme-gemset/gems/activesupport-4.2.5/lib/active_support/core_ext/range/each.rb:7:in `each'
[----] E, [2016-01-11T08:57:28.152859 #60853:cdb98c] ERROR -- : MIQExtract.new /opt/rh/cfme-gemset/gems/activesupport-4.2.5/lib/active_support/core_ext/range/each.rb:7:in `each_with_time_with_zone'
[----] E, [2016-01-11T08:57:28.152902 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/VolumeManager/MiqLvm.rb:229:in `getLvObj'
[----] E, [2016-01-11T08:57:28.152939 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/VolumeManager/MiqLvm.rb:205:in `block in getVgObj'
[----] E, [2016-01-11T08:57:28.153000 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/VolumeManager/MiqLvm.rb:205:in `each'
[----] E, [2016-01-11T08:57:28.153063 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/VolumeManager/MiqLvm.rb:205:in `getVgObj'
[----] E, [2016-01-11T08:57:28.153105 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/VolumeManager/MiqLvm.rb:184:in `parse'
[----] E, [2016-01-11T08:57:28.153142 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/VolumeManager/MiqVolumeManager.rb:130:in `block (2 levels) in parseLvmMetadata'
[----] E, [2016-01-11T08:57:28.153178 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/VolumeManager/MiqVolumeManager.rb:126:in `each'
[----] E, [2016-01-11T08:57:28.153240 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/VolumeManager/MiqVolumeManager.rb:126:in `block in parseLvmMetadata'
[----] E, [2016-01-11T08:57:28.153299 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/VolumeManager/MiqVolumeManager.rb:123:in `each_value'
[----] E, [2016-01-11T08:57:28.153341 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/VolumeManager/MiqVolumeManager.rb:123:in `parseLvmMetadata'
[----] E, [2016-01-11T08:57:28.153388 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/VolumeManager/MiqVolumeManager.rb:83:in `initialize'
[----] E, [2016-01-11T08:57:28.153455 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/MiqVm/MiqVm.rb:167:in `new'
[----] E, [2016-01-11T08:57:28.153495 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/MiqVm/MiqVm.rb:167:in `volumeManager'
[----] E, [2016-01-11T08:57:28.153532 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/MiqVm/MiqVm.rb:161:in `rootTrees'
[----] E, [2016-01-11T08:57:28.153567 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/metadata/MIQExtract/MIQExtract.rb:58:in `initialize'
[----] E, [2016-01-11T08:57:28.153609 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/mixins/scanning_mixin.rb:237:in `new'
[----] E, [2016-01-11T08:57:28.153648 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/mixins/scanning_mixin.rb:237:in `scan_via_miq_vm'
[----] E, [2016-01-11T08:57:28.153685 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/manageiq/providers/redhat/infra_manager/vm_or_template_shared/scanning.rb:17:in `perform_metadata_scan'
[----] E, [2016-01-11T08:57:28.153721 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/miq_server/server_smart_proxy.rb:123:in `scan_metadata'
[----] E, [2016-01-11T08:57:28.153756 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/miq_queue.rb:345:in `block in deliver'
[----] E, [2016-01-11T08:57:28.153792 #60853:cdb98c] ERROR -- : MIQExtract.new /opt/rh/rh-ruby22/root/usr/share/ruby/timeout.rb:89:in `block in timeout'
[----] E, [2016-01-11T08:57:28.153841 #60853:cdb98c] ERROR -- : MIQExtract.new /opt/rh/rh-ruby22/root/usr/share/ruby/timeout.rb:34:in `block in catch'
[----] E, [2016-01-11T08:57:28.153890 #60853:cdb98c] ERROR -- : MIQExtract.new /opt/rh/rh-ruby22/root/usr/share/ruby/timeout.rb:34:in `catch'
[----] E, [2016-01-11T08:57:28.153931 #60853:cdb98c] ERROR -- : MIQExtract.new /opt/rh/rh-ruby22/root/usr/share/ruby/timeout.rb:34:in `catch'
[----] E, [2016-01-11T08:57:28.153968 #60853:cdb98c] ERROR -- : MIQExtract.new /opt/rh/rh-ruby22/root/usr/share/ruby/timeout.rb:104:in `timeout'
[----] E, [2016-01-11T08:57:28.154005 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/miq_queue.rb:341:in `deliver'
[----] E, [2016-01-11T08:57:28.154041 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:106:in `deliver_queue_message'
[----] E, [2016-01-11T08:57:28.154077 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:134:in `deliver_message'
[----] E, [2016-01-11T08:57:28.154112 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:151:in `block in do_work'
[----] E, [2016-01-11T08:57:28.154148 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:145:in `loop'
[----] E, [2016-01-11T08:57:28.154184 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:145:in `do_work'
[----] E, [2016-01-11T08:57:28.154220 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/miq_worker/runner.rb:334:in `block in do_work_loop'
[----] E, [2016-01-11T08:57:28.154255 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/miq_worker/runner.rb:331:in `loop'
[----] E, [2016-01-11T08:57:28.154290 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/miq_worker/runner.rb:331:in `do_work_loop'
[----] E, [2016-01-11T08:57:28.154326 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/miq_worker/runner.rb:153:in `run'
[----] E, [2016-01-11T08:57:28.154409 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/miq_worker/runner.rb:128:in `start'
[----] E, [2016-01-11T08:57:28.154479 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/app/models/miq_worker/runner.rb:29:in `start_worker'
[----] E, [2016-01-11T08:57:28.154547 #60853:cdb98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/lib/workers/bin/worker.rb:2:in `<top (required)>'
Comment 7 Mo Morsi 2016-01-25 16:05:22 EST
Poking around with the VM in question yields a volume group segment with no stripes:

segment1 {
start_extent = 0
extent_count = 3068

type = "thin-pool"
metadata = "docker-pool_tmeta"
pool = "docker-pool_tdata"
transaction_id = 0
chunk_size = 1024
discards = "passdown"
}

This is part of the docker-vg volume group (not sure if that matters). Adding a check that skips processing of the stripes when they are not defined (PR #6332) yields another error.

[----] E, [2016-01-25T15:54:54.162253 #54538:10cd98c] ERROR -- : XFS::Inode: Bad Magic # inode 2048
[----] E, [2016-01-25T15:54:54.162409 #54538:10cd98c] ERROR -- : Unable to mount filesystem.  Reason:[XFS::Inode: Invalid Magic Number for inode 2048] for VM [/rhev/data-center/00000002-0002-0002-0002-000000000249/mastersd/master/vms/e9cada85-edbc-4c60-9473-89628eaf38d9/e9cada85-edbc-4c60-9473-89628eaf38d9.ovf]
[----] E, [2016-01-25T15:54:54.162542 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/fs/xfs/inode.rb:209:in `valid_inode?'
[----] E, [2016-01-25T15:54:54.162623 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/fs/xfs/inode.rb:235:in `initialize'
[----] E, [2016-01-25T15:54:54.162662 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/fs/xfs/superblock.rb:465:in `new'
[----] E, [2016-01-25T15:54:54.162698 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/fs/xfs/superblock.rb:465:in `get_inode'
[----] E, [2016-01-25T15:54:54.162734 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/fs/xfs/directory.rb:26:in `initialize'
[----] E, [2016-01-25T15:54:54.162770 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/fs/MiqFS/modules/XFS.rb:49:in `new'
[----] E, [2016-01-25T15:54:54.162815 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/fs/MiqFS/modules/XFS.rb:49:in `fs_init'
[----] E, [2016-01-25T15:54:54.162855 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/fs/MiqFS/MiqFS.rb:41:in `initialize'
[----] E, [2016-01-25T15:54:54.162900 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/fs/MiqFS/MiqFS.rb:27:in `new'
[----] E, [2016-01-25T15:54:54.162941 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/fs/MiqFS/MiqFS.rb:27:in `getFS'
[----] E, [2016-01-25T15:54:54.162977 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/fs/MiqMountManager.rb:13:in `block in mountVolumes'
[----] E, [2016-01-25T15:54:54.163013 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/fs/MiqMountManager.rb:11:in `each'
[----] E, [2016-01-25T15:54:54.163049 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/fs/MiqMountManager.rb:11:in `mountVolumes'
[----] E, [2016-01-25T15:54:54.163085 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/MiqVm/MiqVm.rb:161:in `rootTrees'
[----] E, [2016-01-25T15:54:54.163121 #54538:10cd98c] ERROR -- : MIQExtract.new /var/www/miq/vmdb/gems/pending/metadata/MIQExtract/MIQExtract.rb:58:in `initialize'


Not sure what the status of this one is: related, resolved, etc.
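The guard described above can be sketched roughly as follows (a hypothetical helper and metadata keys for illustration; the actual fix is in MiqLvm.rb via PR #6332). A "thin-pool" segment like the one shown carries no stripes array, so iterating it blindly raises the NoMethodError seen earlier:

```ruby
# Hypothetical sketch of a nil-stripes guard, modeled on the failure in
# MiqLvm.rb#getSegObj. Thin/thin-pool segments parsed from LVM2 text
# metadata define no "stripes" key, so guard before iterating.
def seg_stripes(seg_metadata)
  stripes = seg_metadata['stripes']
  return [] if stripes.nil? # thin/thin-pool segments define no stripes
  # stripes come in flat pv-name/start-extent pairs, e.g. ["pv0", 8]
  stripes.each_slice(2).map { |pv, start| { pv_name: pv, start_extent: start.to_i } }
end

thin_pool = { 'type' => 'thin-pool' }                        # no stripes key
striped   = { 'type' => 'striped', 'stripes' => ['pv0', 8] }

seg_stripes(thin_pool)  # => []
seg_stripes(striped)    # => [{ pv_name: "pv0", start_extent: 8 }]
```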
Comment 9 Mo Morsi 2016-01-26 11:19:30 EST
Yes, it occurs on the logical volume "docker-pool_tdata", which is part of the docker-vg volume group. Note this is the volume marked as "pool" in the segment that was previously failing (see above).

[----] I, [2016-01-26T11:07:43.184236 #34187:723990]  INFO -- : MIQ(FsProbe-getFsMod) FS probe detected [XFS] for [docker-pool_tdata] [partition: 0]
[----] E, [2016-01-26T11:07:43.185323 #34187:723990] ERROR -- : XFS::Inode: Bad Magic # inode 2048

---

docker-pool_tdata {
id = "4Qmel3-l2Be-8tuD-pXQr-b8Rp-My1E-x4Yq5U"
status = ["READ", "WRITE"]
flags = []
creation_host = "localhost.localdomain.localdomain"
creation_time = 1452217142
segment_count = 1 

segment1 {
start_extent = 0 
extent_count = 3068

type = "striped"
stripe_count = 1 

stripes = [ 
"pv0", 8
]
}
}


I added some output printing the magic number causing the error. It seems @in['magic'] is being detected as 0 for the root inode.
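For reference, the failing check amounts to comparing the on-disk inode magic against the XFS inode magic constant 0x494E ("IN"); reading the wrong offset (here, inside the thin-pool data volume) yields 0. A trivial sketch (`valid_inode_magic?` is a hypothetical stand-in for the check in inode.rb#valid_inode?):

```ruby
# Illustrative sketch of the XFS inode magic check that fails above.
# On-disk XFS inodes begin with the 16-bit magic 0x494E ("IN").
XFS_DINODE_MAGIC = 0x494E

def valid_inode_magic?(magic)
  magic == XFS_DINODE_MAGIC
end

valid_inode_magic?(0x494E)  # => true
valid_inode_magic?(0)       # => false, matching @in['magic'] == 0 seen here
```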
Comment 11 Rich Oliveri 2016-02-15 11:26:30 EST
https://github.com/ManageIQ/manageiq/pull/6332

Skips 'thin' and 'thin-pool' logical volumes and volumes associated with them.
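That skipping behavior can be sketched as follows (hypothetical data layout and helper name, mirroring the docker-vg metadata shown in the comments above; the actual change is in the linked PR):

```ruby
# Hypothetical sketch: filter out logical volumes whose segments are
# "thin" or "thin-pool", plus the metadata/data volumes they reference.
SKIP_TYPES = %w[thin thin-pool].freeze

def lvs_to_scan(lvs)
  # Collect the names of volumes referenced by thin-pool segments
  # (e.g. docker-pool_tmeta and docker-pool_tdata).
  pool_refs = lvs.flat_map do |lv|
    lv[:segments].select { |s| SKIP_TYPES.include?(s[:type]) }
                 .flat_map { |s| [s[:metadata], s[:pool]].compact }
  end
  lvs.reject do |lv|
    lv[:segments].any? { |s| SKIP_TYPES.include?(s[:type]) } ||
      pool_refs.include?(lv[:name])
  end
end

lvs = [
  { name: 'root',              segments: [{ type: 'striped' }] },
  { name: 'docker-pool',       segments: [{ type: 'thin-pool',
                                            metadata: 'docker-pool_tmeta',
                                            pool: 'docker-pool_tdata' }] },
  { name: 'docker-pool_tdata', segments: [{ type: 'striped' }] },
]
lvs_to_scan(lvs).map { |lv| lv[:name] }  # => ["root"]
```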
Comment 12 Satyajit Bulage 2016-05-24 11:12:08 EDT
While verifying this bug, I am facing the issue "Unable to mount filesystem. Reason:[No such file or directory @ rb_file_s_stat - /dev/c8b03cf2-95b3-44f9-bdcf-d3e8619b2ba2/e40b0dc9-c019-484b-a8bb-23298f98e106 No such file or directory @ rb_file_s_stat - /dev/c8b03cf2-95b3-44f9-bdcf-d3e8619b2ba2/226b7106-6642-49f2-9f63-a9b6e7454851]".
This issue is related to SmartState Analysis on the RHEVM provider, and a bug has been logged (BZ#1338815).
Comment 13 Satyajit Bulage 2016-05-30 03:20:39 EDT
SmartState Analysis works on RHEL 7.1, but users/groups are not getting populated on the summary page.
Comment 14 Satyajit Bulage 2016-06-03 09:14:14 EDT
SmartState Analysis on RHEL 7.1 works fine and populates all information on the summary page.


Verified Version:- 5.6.0.9-rc2.20160531154046_b4e2f6d
Comment 16 errata-xmlrpc 2016-06-29 11:26:29 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1348
