Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1659774

Summary: VDSM error: no medium found after upgrade to Ovirt 4.2.7.5-1
Product: [oVirt] vdsm
Reporter: Alexander Lindqvist <alexander>
Component: General
Assignee: Kaustav Majumder <kmajumde>
Status: CLOSED NOTABUG
QA Contact: Avihai <aefrat>
Severity: high
Docs Contact:
Priority: high
Version: 4.20.31
CC: bugs, frolland, godas, kirill.prokin, kmajumde, lsvaty, marcvanwageningen, mkalinin, sabose
Target Milestone: ovirt-4.4.0
Keywords: Reopened
Target Release: ---
Flags: rule-engine: ovirt-4.3+
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1671772 (view as bug list)
Environment:
Last Closed: 2020-03-02 15:59:35 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1671772

Description Alexander Lindqvist 2018-12-16 10:39:27 UTC
Description of problem:
After upgrading the cluster to oVirt 4.2.7.5 and CentOS 7.6, an error is logged in Events.

VDSM xx.host.local command GetStorageDeviceListVDS failed: Internal JSON-RPC error: {'reason': "'gluster_vg_sda-/dev/sdd: open failed: No medium found'"}

Version-Release number of selected component (if applicable):
vdsm-4.20.43-1

Steps to Reproduce:
1. Upgraded the self-hosted engine from oVirt 4.2.7.4 to 4.2.7.5 and then upgraded it to CentOS 7.6.
2. Upgraded the hosts to CentOS 7.6.
After the upgrade everything seems OK, but an error is logged in Events for all three hosts every other hour.

Actual results:
gluster_vg_sda-/dev/sdd: open failed: No medium found

Expected results:
No error logged in Events.

Additional info:
/dev/sdd seems to be the internal SD memory card reader on the HP ProLiant DL385 Gen10 motherboard and is probably now detected by CentOS 7.6. How do I get rid of this error?

dmesg
[    2.902888] smartpqi 0000:43:00.0: added 6:0:-:- 51402ec0102627e0 Enclosure         HPE      Smart Adapter    AIO-
[    2.905137] scsi 6:0:1:0: Enclosure         HPE      Smart Adapter    1.60 PQ: 0 ANSI: 5
[    2.924902] ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.925336] ata6.00: ATA-10: VR000150GWEPP, 4IDMHPG0, max UDMA/100
[    2.925340] ata6.00: 293046768 sectors, multi 1: LBA48 NCQ (depth 31/32), AA
[    2.927951] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.928419] ata5.00: 293046768 sectors, multi 1: LBA48 NCQ (depth 31/32), AA
[    2.930881] smartpqi 0000:43:00.0: added 6:1:0:0 4000000000000000 Direct-Access     HPE      LOGICAL VOLUME   SSDSmartPathCap- En- RAID-ADG    
[    2.930961] scsi 6:1:0:0: Direct-Access     HPE      LOGICAL VOLUME   1.60 PQ: 0 ANSI: 5
[    2.946827] smartpqi 0000:43:00.0: added 6:2:0:0 0000000000000000 RAID              HPE      P408i-a SR Gen10 
[    2.964261] sd 6:1:0:0: [sda] 35162624432 512-byte logical blocks: (18.0 TB/16.3 TiB)
[    2.964332] sd 4:0:0:0: [sdb] 293046768 512-byte logical blocks: (150 GB/139 GiB)
[    2.964336] sd 4:0:0:0: [sdb] 4096-byte physical blocks
[    2.964410] sd 4:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[    2.964429] sd 6:1:0:0: [sda] Mode Sense: 73 00 00 08
[    2.964432] sd 4:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[    2.966314]  sdb: sdb1 sdb2 sdb3 sdb4
[    2.966637] sd 4:0:0:0: [sdb] Attached SCSI disk
[    2.967858] sd 5:0:0:0: [sdc] 293046768 512-byte logical blocks: (150 GB/139 GiB)
[    2.967953] sd 5:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[    2.969824]  sdc: sdc1 sdc2 sdc3 sdc4
[    3.024263] md/raid1:md127: active with 2 out of 2 mirrors
[    3.024284] md127: detected capacity change from 0 to 4293918720
[    3.024880] md/raid1:md125: active with 2 out of 2 mirrors
[    3.025270] md125: detected capacity change from 0 to 144318660608
[    3.026403] md/raid1:md126: active with 2 out of 2 mirrors
[    3.027086] md126: detected capacity change from 0 to 1072693248
[    3.133839] SGI XFS with ACLs, security attributes, no debug enabled
[    3.137435] XFS (md125): Mounting V5 Filesystem
[    3.176081] XFS (md125): Ending clean mount
[    3.518154] systemd-journald[266]: Received SIGTERM from PID 1 (systemd).
[    3.582985] type=1404 audit(1544905764.026:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295
[    3.617626] SELinux: 2048 avtab hash slots, 112173 rules.
[    3.654921] SELinux: 2048 avtab hash slots, 112173 rules.
[    3.681985] SELinux:  8 users, 14 roles, 5040 types, 316 bools, 1 sens, 1024 cats
[    3.681989] SELinux:  129 classes, 112173 rules
[    3.685758] SELinux:  Class bpf not defined in policy.
[    3.685760] SELinux: the above unknown classes and permissions will be allowed
[    3.685763] SELinux:  Completing initialization.
[    3.685764] SELinux:  Setting up existing superblocks.
[    3.696829] type=1403 audit(1544905764.140:3): policy loaded auid=4294967295 ses=4294967295
[    3.702485] systemd[1]: Successfully loaded SELinux policy in 118.909ms.
[    3.737670] ip_tables: (C) 2000-2006 Netfilter Core Team
[    3.738068] systemd[1]: Inserted module 'ip_tables'
[    3.762248] systemd[1]: Relabelled /dev, /run and /sys/fs/cgroup in 22.375ms.
[    3.875595] scsi 7:0:0:0: Direct-Access     Generic- SD/MMC CRW       1.00 PQ: 0 ANSI: 6
[    3.878409] sd 7:0:0:0: [sdd] Attached SCSI removable disk

Comment 1 Sahina Bose 2019-01-03 05:03:58 UTC
Kaustav, this looks like an error in the blivet libs? Can you take a look?

Comment 2 Kaustav Majumder 2019-01-03 06:49:03 UTC
In the vdsm code, blivet filters out these device types while scanning storage devices:
   device.type in ['cdrom', 'lvmvg', 'lvmthinpool', 'lvmlv', 'lvmthinlv']

Should we add one for SD cards as well?

Comment 3 Sahina Bose 2019-01-07 05:15:38 UTC
(In reply to Kaustav Majumder from comment #2)
> In the vdsm code, blivet filters out these device types while scanning
> storage devices:
>    device.type in ['cdrom', 'lvmvg', 'lvmthinpool', 'lvmlv', 'lvmthinlv']
> 
> Should we add one for SD cards as well?

Yes, looks like it.
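
A minimal sketch of how such a filter could be extended follows; the helper name and the removable/size check are illustrative assumptions, not the actual vdsm or blivet code:

# Illustrative sketch only, not the actual vdsm code: extend the device-type
# filter from comment 2 so that removable card readers with no medium (such
# as the empty SD/MMC reader exposed as /dev/sdd) are skipped as well.
# Attribute names vary between blivet versions, so they are read defensively.

SKIPPED_TYPES = frozenset(
    ['cdrom', 'lvmvg', 'lvmthinpool', 'lvmlv', 'lvmthinlv'])


def _should_skip_device(device):
    # Device types vdsm already ignores when building the storage device list.
    if device.type in SKIPPED_TYPES:
        return True
    # Also skip removable devices that currently report no size, which is
    # what an empty card reader with no medium looks like.
    size = getattr(device, 'currentSize', None)
    if getattr(device, 'removable', False) and not size:
        return True
    return False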

Comment 4 Sandro Bonazzola 2019-01-28 09:41:35 UTC
This bug has not been marked as blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 5 Gobinda Das 2019-02-27 11:01:25 UTC
oVirt 4.3.1 has already been released, so moving to ovirt-4.3.2.

Comment 6 marcvw 2019-05-22 07:52:26 UTC
This message also appears on a Dell PowerEdge R410 system with an iDRAC (Integrated Dell Remote Access Controller 6 - Enterprise, firmware 2.92), BIOS revision 1.14:

May 22 08:18:07 myserver vdsm[16099]: ERROR Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 193, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 82, in storageDevicesList
    return self._gluster.storageDevicesList()
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 90, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 516, in storageDevicesList
    status = self.svdsmProxy.glusterStorageDevicesList()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterStorageDevicesList
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
KeyError: 'data01-/dev/sdc: open failed: No medium found'




[root@myserver log]# grep -i sdc /var/log/dmesg
[    3.869577] sd 5:0:0:0: [sdc] Attached SCSI removable disk

[root@myserver log]# grep -i sdc /var/log/dmesg.old
[    3.813400] sd 5:0:0:0: [sdc] Attached SCSI removable disk

[root@myserver log]# grep -i sdc /var/log/messages-20190512
May  8 09:42:38 myserver multipathd: sdc: remove path (uevent)
May  8 09:43:15 myserver kernel: sd 10:0:0:1: [sdc] Attached SCSI removable disk
May  8 09:43:16 myserver multipathd: sdc: add path (uevent)
May  8 09:43:16 myserver multipathd: sdc: failed to get path uid
May  8 09:43:20 myserver multipathd: sdc: remove path (uevent)
May  8 09:43:22 myserver kernel: sd 11:0:0:0: [sdc] Attached SCSI removable disk
May  8 09:43:22 myserver multipathd: sdc: add path (uevent)
May  8 09:43:22 myserver multipathd: sdc: failed to get path uid
May 10 19:59:20 myserver lvm[4601]: /dev/sdc: open failed: No medium found
May 10 20:10:56 myserver kernel: sd 5:0:0:0: [sdc] Attached SCSI removable disk
May 10 20:11:01 myserver multipathd: sdc: add path (uevent)
May 10 20:11:01 myserver multipathd: sdc: spurious uevent, path already in pathvec
May 10 20:11:01 myserver multipathd: sdc: failed to get path uid
May 10 20:11:02 myserver lvm: /dev/sdc: open failed: No medium found
May 10 20:11:51 myserver lvm: /dev/sdc: open failed: No medium found
May 10 20:11:51 myserver lvm: /dev/sdc: open failed: No medium found
May 10 20:11:57 myserver lvm[7304]: /dev/sdc: open failed: No medium found
May 10 20:12:30 myserver lvm: /dev/sdc: open failed: No medium found

[root@myserver log]# lshw -class disk
...
  *-disk:0
       description: SCSI Disk
       product: LCDRIVE
       vendor: iDRAC
       physical id: 0
       bus info: scsi@5:0.0.0
       logical name: /dev/sdc
       version: 0323
       capabilities: removable
       configuration: logicalsectorsize=512 sectorsize=512
     *-medium
          physical id: 0
          logical name: /dev/sdc

Comment 8 Gobinda Das 2019-11-21 10:14:20 UTC
This is not reproducible in the QE and Dev environments, so I am closing this. Please reopen if the issue is found again.

Comment 9 Alexander Lindqvist 2019-11-21 12:14:41 UTC
This is still an issue on all our HP ProLiant DL385 Gen10 servers, now running oVirt 4.2.8 and vdsm-4.20.46-1.el7.x86_64.

Can you please filter out the SD card reader so we don't see these errors?
It is a bug, as it wasn't present in oVirt 4.2.7.4, and it also affects more servers than ours.

Comment 10 Kaustav Majumder 2019-11-22 05:18:27 UTC
(In reply to Alexander Lindqvist from comment #9)
> This is still an issue on all our HP ProLiant DL385 Gen10 servers, now
> running oVirt 4.2.8 and vdsm-4.20.46-1.el7.x86_64.
> 
> Can you please filter out the SD card reader so we don't see these errors?
> It is a bug, as it wasn't present in oVirt 4.2.7.4, and it also affects
> more servers than ours.

Can you run this on one of your servers? It lists the devices present on the host, so we can debug the issue further.

cat list_blivet_devices.py

import blivet
blivetEnv = blivet.Blivet()
blivetEnv.reset()
print (blivetEnv.devices)
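
For reference, a slightly extended variant of the script above (an illustrative assumption, not part of the original comment) also prints each device's name, type and removable flag, i.e. the fields the filter discussed in comments 2 and 3 would key on:

# extended variant of list_blivet_devices.py (illustrative only)
import blivet

blivetEnv = blivet.Blivet()
blivetEnv.reset()  # scan the host and populate the device tree

for dev in blivetEnv.devices:
    # getattr guards against attributes missing in older blivet releases
    print("%s type=%s removable=%s" % (
        dev.name, dev.type, getattr(dev, 'removable', 'n/a')))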

Comment 11 Marina Kalinin 2019-11-26 20:14:18 UTC
Would this bug be fixed by BZ#1670722 by chance?

Comment 12 Alexander Lindqvist 2019-11-27 08:27:55 UTC
# python list_blivet_devices.py

** (process:12273): WARNING **: 09:23:19.108: failed to load module nvdimm: libbd_nvdimm.so.2: cannot open shared object file: No such file or directory
Traceback (most recent call last):
  File "list_blivet_devices.py", line 3, in <module>
    blivetEnv.reset()
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 511, in reset
    self.devicetree.populate(cleanupOnly=cleanupOnly)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2256, in populate
    self._populate()
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2323, in _populate
    self.addUdevDevice(dev)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1293, in addUdevDevice
    self.handleUdevDeviceFormat(info, device)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2009, in handleUdevDeviceFormat
    self.handleUdevLVMPVFormat(info, device)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1651, in handleUdevLVMPVFormat
    self.handleVgLvs(vg_device)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1588, in handleVgLvs
    addLV(lv)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1526, in addLV
    addRequiredLV(pool_device_name, "failed to look up thin pool")
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1432, in addRequiredLV
    addLV(lv_info[name])
KeyError: 'gluster_vg_sda-/dev/sdd: open failed: No medium found'

Comment 13 Gobinda Das 2020-03-02 15:59:35 UTC
Closing this as it's very old (4.2). Please check with the latest version and reopen if the issue still persists.