Bug 1671772 - [downstream clone] VDSM error: no medium found after upgrade to Ovirt 4.2.7.5-1
Summary: [downstream clone] VDSM error: no medium found after upgrade to Ovirt 4.2.7.5-1
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 4.2.7
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ovirt-4.4.0
Target Release: ---
Assignee: Kaustav Majumder
QA Contact: Lukas Svaty
URL:
Whiteboard:
Depends On: 1659774
Blocks:
 
Reported: 2019-02-01 15:38 UTC by Marina Kalinin
Modified: 2022-03-13 17:07 UTC (History)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1659774
Environment:
Last Closed: 2019-11-21 10:13:32 UTC
oVirt Team: Gluster
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker RHV-45250 (last updated 2022-03-13 17:07:06 UTC)

Description Marina Kalinin 2019-02-01 15:38:58 UTC
+++ This bug was initially created as a clone of Bug #1659774 +++

Description of problem:
After upgrading the cluster to oVirt 4.2.7.5 and CentOS 7.6, the following error is logged in Events:

VDSM xx.host.local command GetStorageDeviceListVDS failed: Internal JSON-RPC error: {'reason': "'gluster_vg_sda-/dev/sdd: open failed: No medium found'"}
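
For context, "No medium found" is not a VDSM string: it is the kernel's ENOMEDIUM error text, which LVM/blivet hit when the storage scan tries to open the empty card-reader device. A minimal sketch that reproduces the message (the device path /dev/sdd is taken from this report, and the snippet needs root to open the device node):

    # Minimal sketch: opening a removable device with no medium inserted fails
    # with ENOMEDIUM, whose strerror text is exactly "No medium found".
    import errno
    import os

    def probe(path="/dev/sdd"):  # device path assumed from this report
        try:
            fd = os.open(path, os.O_RDONLY)
            os.close(fd)
            print("%s opened fine, a medium is present" % path)
        except OSError as e:
            if e.errno == errno.ENOMEDIUM:
                # The same string that LVM/blivet report and VDSM relays in Events.
                print("%s: open failed: %s" % (path, os.strerror(e.errno)))
            else:
                raise

    if __name__ == "__main__":
        probe()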

Version-Release number of selected component (if applicable):
vdsm-4.20.43-1

Steps to Reproduce:
1. Upgraded the self-hosted engine from oVirt 4.2.7.4 to 4.2.7.5 and then upgraded it to CentOS 7.6.
2. Upgraded the hosts to CentOS 7.6.
After the upgrade everything seems OK, but the error above is logged in Events for all three hosts every other hour.

Actual results:
gluster_vg_sda-/dev/sdd: open failed: No medium found

Expected results:
No error logged in Events.

Additional info:
/dev/sdd seems to be the internal SD memory card reader on the HP ProLiant DL385 Gen10 motherboard and is probably now detected on CentOS 7.6. How do I get rid of this error?
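
To confirm this, a rough sysfs check (the device name sdd is assumed from the dmesg output below) tells whether the device is a removable reader that currently holds no medium, which is exactly the kind of device the scan trips over:

    # Rough sketch: report whether a block device is a removable reader with
    # no medium inserted (sysfs size 0), e.g. the empty SD slot seen as sdd.
    def is_empty_removable(name="sdd"):  # device name assumed from dmesg below
        base = "/sys/block/%s" % name
        with open(base + "/removable") as f:
            removable = f.read().strip() == "1"
        with open(base + "/size") as f:
            sectors = int(f.read().strip())
        return removable and sectors == 0

    if __name__ == "__main__":
        print("sdd is an empty removable device: %s" % is_empty_removable("sdd"))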

dmesg
[    2.902888] smartpqi 0000:43:00.0: added 6:0:-:- 51402ec0102627e0 Enclosure         HPE      Smart Adapter    AIO-
[    2.905137] scsi 6:0:1:0: Enclosure         HPE      Smart Adapter    1.60 PQ: 0 ANSI: 5
[    2.924902] ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.925336] ata6.00: ATA-10: VR000150GWEPP, 4IDMHPG0, max UDMA/100
[    2.925340] ata6.00: 293046768 sectors, multi 1: LBA48 NCQ (depth 31/32), AA
[    2.927951] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.928419] ata5.00: 293046768 sectors, multi 1: LBA48 NCQ (depth 31/32), AA
[    2.930881] smartpqi 0000:43:00.0: added 6:1:0:0 4000000000000000 Direct-Access     HPE      LOGICAL VOLUME   SSDSmartPathCap- En- RAID-ADG    
[    2.930961] scsi 6:1:0:0: Direct-Access     HPE      LOGICAL VOLUME   1.60 PQ: 0 ANSI: 5
[    2.946827] smartpqi 0000:43:00.0: added 6:2:0:0 0000000000000000 RAID              HPE      P408i-a SR Gen10 
[    2.964261] sd 6:1:0:0: [sda] 35162624432 512-byte logical blocks: (18.0 TB/16.3 TiB)
[    2.964332] sd 4:0:0:0: [sdb] 293046768 512-byte logical blocks: (150 GB/139 GiB)
[    2.964336] sd 4:0:0:0: [sdb] 4096-byte physical blocks
[    2.964410] sd 4:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[    2.964429] sd 6:1:0:0: [sda] Mode Sense: 73 00 00 08
[    2.964432] sd 4:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[    2.966314]  sdb: sdb1 sdb2 sdb3 sdb4
[    2.966637] sd 4:0:0:0: [sdb] Attached SCSI disk
[    2.967858] sd 5:0:0:0: [sdc] 293046768 512-byte logical blocks: (150 GB/139 GiB)
[    2.967953] sd 5:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[    2.969824]  sdc: sdc1 sdc2 sdc3 sdc4
[    3.024263] md/raid1:md127: active with 2 out of 2 mirrors
[    3.024284] md127: detected capacity change from 0 to 4293918720
[    3.024880] md/raid1:md125: active with 2 out of 2 mirrors
[    3.025270] md125: detected capacity change from 0 to 144318660608
[    3.026403] md/raid1:md126: active with 2 out of 2 mirrors
[    3.027086] md126: detected capacity change from 0 to 1072693248
[    3.133839] SGI XFS with ACLs, security attributes, no debug enabled
[    3.137435] XFS (md125): Mounting V5 Filesystem
[    3.176081] XFS (md125): Ending clean mount
[    3.518154] systemd-journald[266]: Received SIGTERM from PID 1 (systemd).
[    3.582985] type=1404 audit(1544905764.026:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295
[    3.617626] SELinux: 2048 avtab hash slots, 112173 rules.
[    3.654921] SELinux: 2048 avtab hash slots, 112173 rules.
[    3.681985] SELinux:  8 users, 14 roles, 5040 types, 316 bools, 1 sens, 1024 cats
[    3.681989] SELinux:  129 classes, 112173 rules
[    3.685758] SELinux:  Class bpf not defined in policy.
[    3.685760] SELinux: the above unknown classes and permissions will be allowed
[    3.685763] SELinux:  Completing initialization.
[    3.685764] SELinux:  Setting up existing superblocks.
[    3.696829] type=1403 audit(1544905764.140:3): policy loaded auid=4294967295 ses=4294967295
[    3.702485] systemd[1]: Successfully loaded SELinux policy in 118.909ms.
[    3.737670] ip_tables: (C) 2000-2006 Netfilter Core Team
[    3.738068] systemd[1]: Inserted module 'ip_tables'
[    3.762248] systemd[1]: Relabelled /dev, /run and /sys/fs/cgroup in 22.375ms.
[    3.875595] scsi 7:0:0:0: Direct-Access     Generic- SD/MMC CRW       1.00 PQ: 0 ANSI: 6
[    3.878409] sd 7:0:0:0: [sdd] Attached SCSI removable disk

--- Additional comment from Sahina Bose on 2019-01-03 00:03:58 EST ---

Kaustav, this looks like an error with the blivet libs. Can you take a look?

--- Additional comment from Kaustav Majumder on 2019-01-03 01:49:03 EST ---

In the vdsm code, blivet filters out these device types while scanning storage devices:
   device.type in ['cdrom', 'lvmvg', 'lvmthinpool', 'lvmlv', 'lvmthinlv']

Should we add one for the SD card as well?
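
A minimal sketch of what such a filter could look like (not the actual vdsm patch; the helper name is illustrative, and the removable/size attributes are assumed to be available on blivet disk devices):

    # Illustrative only: extend the existing type-based skip list so that empty
    # removable media (e.g. an SD card reader with no card) are also ignored.
    SKIPPED_TYPES = ('cdrom', 'lvmvg', 'lvmthinpool', 'lvmlv', 'lvmthinlv')

    def _should_skip(device):
        if device.type in SKIPPED_TYPES:
            return True
        # Assumption: blivet disk devices expose `removable` and a current
        # size; an empty card reader shows up as removable with size 0.
        if getattr(device, 'removable', False) and not getattr(device, 'size', 0):
            return True
        return False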

--- Additional comment from Sahina Bose on 2019-01-07 00:15:38 EST ---

(In reply to Kaustav Majumder from comment #2)
> In the vdsm code, blivet filters out these device types while scanning storage devices:
>    device.type in ['cdrom', 'lvmvg', 'lvmthinpool', 'lvmlv', 'lvmthinlv']
> 
> Should we add one for the SD card as well?

Yes, looks like it.

--- Additional comment from Sandro Bonazzola on 2019-01-28 04:41:35 EST ---

This bug has not been marked as a blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 9 Daniel Gur 2019-08-28 13:14:50 UTC
sync2jira

Comment 10 Daniel Gur 2019-08-28 13:19:53 UTC
sync2jira

Comment 11 Marina Kalinin 2019-09-30 21:07:59 UTC
Hi Kaustav,
Is this BZ somehow related to BZ#1670722?

They both seem to report the same "no medium found" error. Could fixing the other bug also fix this one? The other one is ON_QA, so maybe you can explain better what happened in this bug?

Comment 12 Marina Kalinin 2019-10-01 18:42:15 UTC
I also do not think this bug should be high severity. It is about a 'no medium found' message logged when the host has an empty SD card slot. Not a show stopper, just confusing.

Comment 13 Gobinda Das 2019-11-21 10:13:32 UTC
This is not reproducible in the QE and Dev environments, so I am closing this bug. Please reopen if the issue is found again.

