Bug 995711 - list-filesystems command fails if there are no block devices
Status: CLOSED UPSTREAM
Product: Virtualization Tools
Classification: Community
Component: libguestfs
Assigned To: Richard W.M. Jones
Depends On:
Blocks: 995712
Reported: 2013-08-10 06:42 EDT by Richard W.M. Jones
Modified: 2013-08-15 16:51 EDT

Doc Type: Bug Fix
Last Closed: 2013-08-15 16:51:22 EDT
Type: Bug


Description Richard W.M. Jones 2013-08-10 06:42:55 EDT
Description of problem:

In libguestfs >= 1.20, when using the libvirt backend, it is
possible to have no block devices at all.  However, the
list-filesystems command / guestfs_list_filesystems API gives
an error in this case.  Worse, it returns the error indication
without setting an error string, which is itself a second bug.
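To make the two failure modes concrete, here is an illustrative sketch (not libguestfs source): in the C API, guestfs_list_filesystems returns NULL on error and the caller then reads the error string with guestfs_last_error.  Below, None stands in for NULL, and the hypothetical FakeHandle class reproduces the buggy behaviour described above, i.e. an error return with no error string set:

```python
class FakeHandle:
    """Hypothetical stand-in for a guestfs handle with no block devices."""

    def __init__(self):
        self._last_error = None  # the bug: never set before the error return

    def list_filesystems(self):
        # Buggy behaviour: signal an error (None) even though an empty
        # guest is a perfectly valid state.
        return None

    def last_error(self):
        return self._last_error


def describe_result(g):
    fses = g.list_filesystems()
    if fses is None:
        # Error path: an error string should always be available here.
        return "error: " + (g.last_error() or "<no error string set>")
    if not fses:
        return "no filesystems found"
    return ", ".join(sorted(fses))
```

A caller following the documented convention thus hits the unset-error-string case, which is exactly what guestfish prints nothing for.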

Version-Release number of selected component (if applicable):

libguestfs 1.23.14 with libvirt backend, Fedora 19
libguestfs-1.23.14-1.fc20.x86_64
libvirt-1.0.5.2-1.fc19.x86_64

How reproducible:

100%

Steps to Reproduce:
1. guestfish run : list-filesystems
2. echo $?

Actual results:

The exit code is 1, indicating an error, although no error message is printed.

Expected results:

It should return an empty list, no error.

Additional info:

If you use guestfish -vx then you see the output below.
Note the line which says "list_filesystems = NULL (error)".

libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
libguestfs: [02803ms] appliance is up
libguestfs: trace: launch = 0
libguestfs: trace: list_filesystems
libguestfs: trace: list_devices
guestfsd: main_loop: new request, len 0x28
guestfsd: main_loop: proc 7 (list_devices) took 0.00 seconds
libguestfs: trace: list_devices = []
libguestfs: trace: list_partitions
guestfsd: main_loop: new request, len 0x28
guestfsd: main_loop: proc 8 (list_partitions) took 0.00 seconds
libguestfs: trace: list_partitions = []
libguestfs: trace: list_md_devices
guestfsd: main_loop: new request, len 0x28
guestfsd: main_loop: proc 300 (list_md_devices) took 0.00 seconds
libguestfs: trace: list_md_devices = []
libguestfs: trace: feature_available "lvm2"
guestfsd: main_loop: new request, len 0x34
guestfsd: main_loop: proc 398 (feature_available) took 0.00 seconds
libguestfs: trace: feature_available = 1
libguestfs: trace: lvs
guestfsd: main_loop: new request, len 0x28
lvm lvs -o vg_name,lv_name --noheadings --separator /
  WARNING: Failed to connect to lvmetad: No such file or directory. Falling back to internal scanning.
  No volume groups found
guestfsd: main_loop: proc 11 (lvs) took 0.01 seconds
libguestfs: trace: lvs = []
libguestfs: trace: feature_available "ldm"
guestfsd: main_loop: new request, len 0x34
guestfsd: main_loop: proc 398 (feature_available) took 0.00 seconds
libguestfs: trace: feature_available = 1
libguestfs: trace: list_ldm_volumes
guestfsd: main_loop: new request, len 0x28
guestfsd: main_loop: proc 380 (list_ldm_volumes) took 0.00 seconds
libguestfs: trace: list_ldm_volumes = []
libguestfs: trace: list_ldm_partitions
guestfsd: main_loop: new request, len 0x28
guestfsd: main_loop: proc 381 (list_ldm_partitions) took 0.00 seconds
libguestfs: trace: list_ldm_partitions = []
libguestfs: trace: list_filesystems = NULL (error)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x7f29dcddd190 (state 2)
libguestfs: trace: internal_autosync
guestfsd: main_loop: new request, len 0x28
libguestfs: trace: internal_autosync = 0
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsNpMXiF
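The trace above shows list_filesystems collecting candidate devices from several sources (list_devices, list_partitions, list_md_devices, lvs, list_ldm_volumes, list_ldm_partitions), all of which return empty lists, and then failing anyway.  A minimal sketch of the expected behaviour — aggregate the sources and treat "no devices" as a valid empty result — using a hypothetical probe_filesystem helper (illustrative only, not the actual libguestfs code):

```python
def probe_filesystem(dev):
    # Hypothetical helper: the real code asks the appliance for the
    # filesystem type on each device; "unknown" is just a placeholder.
    return "unknown"


def list_filesystems(sources):
    """sources: callables returning device-name lists, mirroring
    list_devices, list_partitions, list_md_devices, lvs,
    list_ldm_volumes and list_ldm_partitions in the trace above."""
    devices = []
    for source in sources:
        devices.extend(source())
    # No devices is a valid state: return an empty mapping rather than
    # signalling an error.
    return {dev: probe_filesystem(dev) for dev in devices}
```

With every source empty, as in the trace, this returns an empty mapping and the caller sees success, which is the expected result stated above.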
