Bug 995712 - list-filesystems command fails if there are no block devices
Summary: list-filesystems command fails if there are no block devices
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 995711
Blocks:
 
Reported: 2013-08-10 10:45 UTC by Richard W.M. Jones
Modified: 2014-06-18 02:00 UTC
CC List: 6 users

Fixed In Version: libguestfs-1.22.6-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of: 995711
Environment:
Last Closed: 2014-06-13 11:48:45 UTC
Target Upstream Version:
Embargoed:



Description Richard W.M. Jones 2013-08-10 10:45:31 UTC
+++ This bug was initially created as a clone of Bug #995711 +++

Description of problem:

In libguestfs >= 1.20 when using the libvirt backend, it is
possible to have no block devices.  However the list-filesystems
command / guestfs_list_filesystems API gives an error when
this happens.  It actually returns an error without setting
an error string, which is itself another bug.

Version-Release number of selected component (if applicable):

libguestfs 1.23.14 with libvirt backend, Fedora 19
libguestfs-1.23.14-1.fc20.x86_64
libvirt-1.0.5.2-1.fc19.x86_64

How reproducible:

100%

Steps to Reproduce:
1. guestfish run : list-filesystems
2. echo $?
3.
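Step 2 relies on the shell's `$?` special parameter; a quick standalone illustration of how it reports the previous command's exit status (no libguestfs needed):

```shell
# $? expands to the exit status of the most recently executed command.
true
echo "after true: $?"    # prints 0: success
false
echo "after false: $?"   # prints 1: failure
```

In the reproduction above, `echo $?` printing 1 means the guestfish invocation as a whole failed.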

Actual results:

Exit code is 1 indicating an error, although none is printed.

Expected results:

It should return an empty list, no error.
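For comparison, conventional Unix listing tools treat an empty result as success, not failure; a standalone sketch of the contract list-filesystems should follow, using `ls` on a freshly created empty directory as a stand-in (no libguestfs needed):

```shell
# An empty result set should mean: no output, exit status 0.
dir=$(mktemp -d)
ls "$dir"                 # prints nothing: the directory is empty
echo "exit status: $?"    # prints "exit status: 0": empty is not an error
rmdir "$dir"
```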

Additional info:

If you use guestfish -vx then you see the output below.
Note the line which says "list_filesystems = NULL (error)".

libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
libguestfs: [02803ms] appliance is up
libguestfs: trace: launch = 0
libguestfs: trace: list_filesystems
libguestfs: trace: list_devices
guestfsd: main_loop: new request, len 0x28
guestfsd: main_loop: proc 7 (list_devices) took 0.00 seconds
libguestfs: trace: list_devices = []
libguestfs: trace: list_partitions
guestfsd: main_loop: new request, len 0x28
guestfsd: main_loop: proc 8 (list_partitions) took 0.00 seconds
libguestfs: trace: list_partitions = []
libguestfs: trace: list_md_devices
guestfsd: main_loop: new request, len 0x28
guestfsd: main_loop: proc 300 (list_md_devices) took 0.00 seconds
libguestfs: trace: list_md_devices = []
libguestfs: trace: feature_available "lvm2"
guestfsd: main_loop: new request, len 0x34
guestfsd: main_loop: proc 398 (feature_available) took 0.00 seconds
libguestfs: trace: feature_available = 1
libguestfs: trace: lvs
guestfsd: main_loop: new request, len 0x28
lvm lvs -o vg_name,lv_name --noheadings --separator /
  WARNING: Failed to connect to lvmetad: No such file or directory. Falling back to internal scanning.
  No volume groups found
guestfsd: main_loop: proc 11 (lvs) took 0.01 seconds
libguestfs: trace: lvs = []
libguestfs: trace: feature_available "ldm"
guestfsd: main_loop: new request, len 0x34
guestfsd: main_loop: proc 398 (feature_available) took 0.00 seconds
libguestfs: trace: feature_available = 1
libguestfs: trace: list_ldm_volumes
guestfsd: main_loop: new request, len 0x28
guestfsd: main_loop: proc 380 (list_ldm_volumes) took 0.00 seconds
libguestfs: trace: list_ldm_volumes = []
libguestfs: trace: list_ldm_partitions
guestfsd: main_loop: new request, len 0x28
guestfsd: main_loop: proc 381 (list_ldm_partitions) took 0.00 seconds
libguestfs: trace: list_ldm_partitions = []
libguestfs: trace: list_filesystems = NULL (error)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x7f29dcddd190 (state 2)
libguestfs: trace: internal_autosync
guestfsd: main_loop: new request, len 0x28
libguestfs: trace: internal_autosync = 0
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsNpMXiF

Comment 1 Richard W.M. Jones 2013-08-10 10:45:55 UTC
Also happens with libguestfs 1.22.5 in RHEL 7.

Comment 3 bfan 2013-08-12 06:10:02 UTC
Can reproduce with libguestfs-1.22.5-3.el7.x86_64

[root@dhcp-9-42 images]# guestfish run : list-filesystems
[root@dhcp-9-42 images]# echo $?
1

Comment 7 bfan 2013-12-03 08:36:40 UTC
Verified with libguestfs-1.22.6-16.el7.x86_64,

# echo $LIBGUESTFS_BACKEND
libvirt
# guestfish run : list-filesystems
# echo $?
0

Returns an empty list and no error.

Comment 8 Ludek Smid 2014-06-13 11:48:45 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

If you have further questions about this request, contact your manager or support representative.

