Description of problem:
ssm list reports an error when a VG is exported.

Version-Release number of selected component (if applicable):
system-storage-manager-0.4-5.el7.noarch

How reproducible:

Steps to Reproduce:
1. Export a volume group.

   # vgexport ssmvg
     Volume group "ssmvg" successfully exported

2. Run the ssm list command; the error below is seen.

   # ssm list
   -------------------------------------------------------------
   Device         Free       Used       Total  Pool   Mount point
   -------------------------------------------------------------
   /dev/sda   10.00 GB    0.00 KB    10.00 GB  ssmvg
   /dev/vda                          20.00 GB         PARTITIONED
   /dev/vda1                        200.00 MB         /boot
   /dev/vda2                       1000.00 MB         SWAP
   /dev/vda3  20.00 MB   18.80 GB    18.83 GB  myvg
   -------------------------------------------------------------
   --------------------------------------------------
   Pool   Type  Devices      Free      Used     Total
   --------------------------------------------------
   myvg   lvm         1  20.00 MB  18.80 GB  18.82 GB
   ssmvg  lvm         1  10.00 GB   0.00 KB  10.00 GB
   --------------------------------------------------
   myvg|19718144.00|1|0|linear|rootvol||-wi-ao----
     Volume group ssmvg is exported
   SSM Error (2012): ERROR running command: "lvm lvs --separator | --noheadings --nosuffix --units k -o vg_name,lv_size,stripes,stripesize,segtype,lv_name,origin,lv_attr"

3. The error is not seen when the export flag on the VG is removed.

Actual results:
ssm list shows the output with the above error.

Expected results:
ssm list should list the exported VG without any errors.

Additional info:
This occurs because ssm expects a zero return code from lvs. If the system contains an exported VG, lvs returns 5 even when the run is otherwise successful. ssm has the ability to ignore failures and continue, but it appears to be an all-or-nothing setting.

I spoke with the LVM developers, who stated that lvs will return non-zero under the following conditions:
* inconsistent volume group
* problems with the system ID
* clustered VG
* exported VG

Perhaps it would be nice to add a can_fail=True option to _parse_data() so that it can be passed on to the misc.run() call, allowing it to be targeted specifically at lvs, although I'm not sure how important it is to the rest of the code path that lvs return successfully.
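The can_fail pass-through suggested above could look roughly like the following. This is only a sketch: run_cmd is a hypothetical stand-in for ssm's misc.run(), not ssm's actual code, and the exact signature of the real function may differ.

```python
import subprocess

def run_cmd(cmd, can_fail=False):
    """Hypothetical stand-in for ssm's misc.run(): run a command and
    raise on a non-zero exit unless can_fail=True is passed through."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0 and not can_fail:
        raise RuntimeError(
            'ERROR running command: "{}" (exit {})'.format(
                " ".join(cmd), proc.returncode))
    # With can_fail=True the caller gets the return code and can decide
    # for itself whether exit 5 from lvs is fatal.
    return proc.returncode, proc.stdout

# Simulate an lvs-style run that prints data but exits 5
# (as happens when an exported VG is present):
rc, out = run_cmd(["sh", "-c", "echo data; exit 5"], can_fail=True)
```

A _parse_data(can_fail=True) caller would then keep parsing out even though rc is non-zero, instead of aborting the whole listing.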
Or perhaps the real answer here is to address why `lvs` returns a non-zero value for conditions like "there is a clustered VG on the system"; that should be informational, allowing a zero return.
Opened a BZ with the LVM team: https://bugzilla.redhat.com/show_bug.cgi?id=1321617. Perhaps we can close this out as a duplicate after review from lczerner.
This lvs behavior is weird. I'll try to get some more information from the LVM team, or try to convince them to change it (which I do not believe will happen). Thanks for the report and the suggestion on how to fix it! -Lukas
It looks like, based on the responses in https://bugzilla.redhat.com/show_bug.cgi?id=1321617, there may be a couple of options:

1. Make this bug depend on the "per-object error code" patchset from Petr R (see https://bugzilla.redhat.com/show_bug.cgi?id=1321617#c3) and modify ssm to use the per-object error codes. It is unclear whether this is suitable for RHEL 7.4.

2. Find some way to filter or ignore specific errors that may occur when asking for the full list of LVs (implied when 'lvs' is given with no specific LV name).

At the least, we may want to explain what an error with this command means, i.e. an error occurred listing one or more of the LVs on the system, so the following list may be incomplete.
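Option 2 could be sketched roughly as below. This is illustrative only: list_lvs is a hypothetical helper, and the BENIGN_PATTERNS list is an assumption, not an official or exhaustive catalogue of LVM's informational messages (only the "Volume group ... is exported" line is taken from the output in this report).

```python
import re
import subprocess

# Assumed-benign lvm stderr lines for a full listing; the exported-VG
# message appears in this bug's output, the rest is illustrative.
BENIGN_PATTERNS = [
    re.compile(r"Volume group .* is exported"),
]

def list_lvs(cmd):
    """Run an lvs-style listing; if it exits non-zero, downgrade
    known-benign stderr lines to a warning and keep the output,
    but still raise on anything unexplained."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        unexplained = [
            line for line in proc.stderr.splitlines()
            if line.strip()
            and not any(p.search(line) for p in BENIGN_PATTERNS)
        ]
        if unexplained:
            raise RuntimeError("lvs failed: " + "; ".join(unexplained))
        print("Warning: an error occurred listing one or more LVs; "
              "the following list may be incomplete.")
    return proc.stdout

# Simulated lvs run: data on stdout, the exported-VG notice on
# stderr, and exit code 5, mimicking the behavior in this report.
fake_lvs = ["sh", "-c",
            'echo "myvg|19718144.00|1|0|linear|rootvol||-wi-ao----"; '
            'echo "  Volume group ssmvg is exported" >&2; exit 5']
output = list_lvs(fake_lvs)
```

This keeps the listing usable while still surfacing genuinely unexpected lvs failures.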
Removing from the 7.4 Filesystems and storage RPL. Doesn't mean the work can't be done, just not tracking it at the RPL level.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:3277