Bug 1309639 - [RFE] reporting needs to improve handling of displayed values, including failures
Status: NEW
Product: Fedora
Classification: Fedora
Component: lvm2
Version: rawhide
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Peter Rajnoha
QA Contact: Fedora Extras Quality Assurance
Keywords: FutureFeature
Depends On:
Blocks:
 
Reported: 2016-02-18 06:12 EST by Zdenek Kabelac
Modified: 2016-02-18 06:30 EST (History)
CC List: 11 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Zdenek Kabelac 2016-02-18 06:12:22 EST
Description of problem:

Currently, vgs/pvs/lvs report e.g. binary values in a very limited way (and with some bugs).

With --binary, a value can be in one of the states 0/1/-1.

However, this mostly applies only to metadata values. If the user tries to use such values for live dm data, a number of important distinctions are lost.

So IMHO we actually have more states:

Undefined (the field is not defined for this object, e.g. SnapInvalid for a linear LV)
- should be blanked
- numerical value??

Unknown (the value could not be obtained, e.g. a live failed thin-pool or one out of data space)
- "unknown"
- -1

Error/Fail (reading the value failed, e.g. a memory allocation failure)
- "error" (or "fail")
- ???

Yes/True/False/No


Proper reporting is needed here so the user can easily recognize which
fields hold a valid value and which of them are simply unusable.

This is even more important once we start to support aggregated reporting,
e.g. for the dbus engine: we can't let DBus stop working when a single LV
starts causing status-reading problems.

It would also be quite a bad plan to let dbus parse error messages to figure out on its own which fields are actually broken.




Version-Release number of selected component (if applicable):
lvm2 2.02.142

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
Comment 1 Peter Rajnoha 2016-02-18 06:27:12 EST
This is about reporting failures that occur during reporting itself (it's somewhat related to our plans to report errors/warnings during processing and route those through the reporting engine too).

It's true that we can check the reporting command's exit code when it finishes, but that only returns the overall code for the whole command run, so we'd have to parse stderr to find out exactly which part of the reporting failed (e.g. a failed lv_info for certain LV segments and similar).

This is not just about binary fields; it's about making a reporting failure visible for each field we report. So maybe a global value (above all the report types!) representing FAILURE could help here: the reporting engine itself could then report "FAIL" in place of any field whose *_disp function failed, so we wouldn't need to change all the existing *_disp functions to handle this.

Currently, all the *_disp functions that the reporting code calls for each field only return a binary success/failure code. We could enhance this to return more error codes which the reporting code could recognize, reporting a special value like "FAILED" (or more variants of it, if needed) for the failed field.

So whatever we do here to make failed states easier to recognize, it must be done globally; it's not bound to a certain field type like the "binary" one! For example, memory allocation can fail when we dup a string field, among various other sources of failure...
