Bug 1358748 - lshw does not report devices which have pv's created on them (on the entire device)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lshw
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Petr Oros
QA Contact: Mike Gahagan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-07-21 12:08 UTC by nikhil kshirsagar
Modified: 2019-11-14 08:46 UTC
CC List: 2 users

Fixed In Version: lshw-B.02.17-12.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-04 03:47:08 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2016:2326 0 normal SHIPPED_LIVE lshw bug fix update 2016-11-03 13:43:50 UTC

Description nikhil kshirsagar 2016-07-21 12:08:37 UTC
Description of problem:
When a whole disk is used as a PV (no partition table), the disk is not listed in the lshw output.



Version-Release number of selected component (if applicable):
lshw-B.02.17-5.el7.x86_64

How reproducible:
Create a PV directly on /dev/sdb (or any whole disk); lshw will no longer list that disk.

Steps to Reproduce:
see below in additional info

Actual results:
Disks are missing from the lshw output even though they exist. The missing disks have PVs created on the entire device (no partitions).

Expected results:
lshw should report disks that carry whole-device PVs.

Additional info:

[root@dhcp6-217 src]# fdisk -l

Disk /dev/vda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000df364

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     1026047      512000   83  Linux
/dev/vda2         1026048    20971519     9972736   8e  Linux LVM

Disk /dev/sda: 3221 MB, 3221225472 bytes, 6291456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/rhel_dhcp6--217-root: 9093 MB, 9093251072 bytes, 17760256 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/rhel_dhcp6--217-swap: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdb: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@dhcp6-217 src]# pvs
  PV         VG             Fmt  Attr PSize PFree 
  /dev/sda                  lvm2 ---  3.00g  3.00g
  /dev/sdb                  lvm2 ---  2.00g  2.00g
  /dev/vda2  rhel_dhcp6-217 lvm2 a--  9.51g 40.00m
[root@dhcp6-217 src]# lshw -class disk
  *-cdrom                 
       description: SCSI CD-ROM
       physical id: 0.0.0
       bus info: scsi@1:0.0.0
       logical name: /dev/cdrom
       logical name: /dev/sr0
       capabilities: audio
       configuration: status=nodisc
[root@dhcp6-217 src]# 


----------------------------------------------------------------------------------------------


<nkshirsa> I looked at the lshw code to understand why PVs are not being reported. They do seem to be picked up in the scan_scsi and scan_device methods (I can see the device on which I created the PV), but by the time it gets to print (which is a complex recursive call, and I can't manage to print the node through gdb) that device is no longer there to be printed
<nkshirsa> I think the issue is in the printing stage..
<poros> really? where?
<nkshirsa> while building the computer node, I can see it finding the disks and even marking them as LVM disks..
<nkshirsa> but when print is called from the main function in lshw.cc, it can't find those entries inside the recursive structures while printing.. so it never prints them..
<nkshirsa> I ran it through gdb and I can see scan_partitions and scan_lvm being called and correctly populated for the PVs
<nkshirsa> but once it prints, it goes into several levels of recursion and I'm having trouble printing the node, config and resources vectors through gdb, so I lose track of where I am in the tree.. but I am sure it's not finding those disk objects and therefore not printing them
<nkshirsa> anyway, I'm done for today; please let me know if you figure out why the PVs are not being printed. I am filing the BZ now and will put all the investigation so far in it; it will get assigned to you, I guess.
<poros> ok

Comment 4 Petr Oros 2016-07-27 10:37:47 UTC
After deeper inspection I found the reason why lshw drops the device from the list.

[root@localhost]# lshw -c disk
  *-virtio2
       description: Virtual I/O device
       physical id: 0
       bus info: virtio@2
       logical name: /dev/vdb
       size: 20GiB (21GB)
       configuration: driver=virtio_blk logicalsectorsize=512 sectorsize=512
[root@localhost]# 
[root@localhost]# pvcreate /dev/vdb
  Physical volume "/dev/vdb" successfully created
[root@localhost]# lshw -c disk
[root@localhost]#

but:

[root@localhost]# lshw -c volume
  *-virtio2
       description: Virtual I/O device
       physical id: 0
       bus info: virtio@2
       logical name: /dev/vdb
       serial: 9Q74c8-Kc2D-0EHu-gBPe-Gowc-wytU-cIiczq
       size: 20GiB
       capacity: 20GiB
       capabilities: lvm2
       configuration: driver=virtio_blk logicalsectorsize=512 sectorsize=512

This is caused by these lines in partitions.cc, in the scan_partitions function:
  if(!medium->isCapable("partitioned"))
  {
    if(scan_volume(*medium, s))	// whole disk volume?
      medium->setClass(hw::volume);
  }

That means that when a volume spans the whole disk, lshw classifies the device as a volume rather than as a disk.
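For illustration only, here is a small standalone C++ model (not lshw source) of the class-based filtering behaviour described above: once scan_partitions switches a whole-disk PV's node class from disk to volume, a listing filtered on the disk class no longer matches it, while a listing filtered on the volume class does. The Node struct, the NodeClass enum, and print_by_class are invented for this sketch and do not exist in lshw.

// Standalone illustration, not lshw code: models how class-based filtering
// hides a whole-disk PV once its node class is switched from disk to volume.
#include <iostream>
#include <string>
#include <vector>

enum class NodeClass { disk, volume };

struct Node {
    std::string logical_name;
    NodeClass node_class;
};

// Print only the nodes whose class matches the requested one,
// mirroring what `lshw -class disk` / `lshw -class volume` do.
static void print_by_class(const std::vector<Node>& nodes, NodeClass wanted) {
    for (const auto& n : nodes)
        if (n.node_class == wanted)
            std::cout << "  " << n.logical_name << '\n';
}

int main() {
    std::vector<Node> nodes = {
        {"/dev/vda", NodeClass::disk},
        // /dev/vdb carries a PV on the whole device, so scan_partitions
        // reclassified its node as a volume (see the excerpt above).
        {"/dev/vdb", NodeClass::volume},
    };

    std::cout << "class disk:\n";
    print_by_class(nodes, NodeClass::disk);   // /dev/vdb is missing here
    std::cout << "class volume:\n";
    print_by_class(nodes, NodeClass::volume); // ...and shows up here instead
    return 0;
}

Per the verification in comment 9, the fixed package keeps such a device listed as a disk and shows the PV as a medium under the disk entry, rather than reclassifying the whole node.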

Comment 9 Mike Gahagan 2016-08-19 19:50:36 UTC
Confirmed: an empty PV encompassing an entire disk (in this case a USB key) now shows up as a disk, not as a volume; the PV itself shows as a medium under the disk entry.

Comment 11 errata-xmlrpc 2016-11-04 03:47:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-2326.html

