Bug 654691
Summary: | LVM commands are inconsistent - sometimes they return an error when the device is already in the requested state and sometimes they fail | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 7 | Reporter: | Eduardo Warszawski <ewarszaw> |
Component: | lvm2 | Assignee: | Alasdair Kergon <agk> |
lvm2 sub component: | Command-line tools | QA Contact: | cluster-qe <cluster-qe> |
Status: | CLOSED WONTFIX | Docs Contact: | |
Severity: | low | ||
Priority: | low | CC: | agk, dwysocha, heinzm, iheim, jbrassow, joe.thornber, msnitzer, prajnoha, prockai, thornber, zkabelac |
Version: | 7.0 | Keywords: | FutureFeature, Reopened, Triaged |
Target Milestone: | pre-dev-freeze | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | Enhancement | |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2016-01-18 23:34:17 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 656240, 756082 |
Description
Eduardo Warszawski
2010-11-18 15:47:45 UTC
There was a long discussion, never satisfactorily resolved: does a successful exit status indicate "the action you requested succeeded" or "the device is now in the state you requested"? Originally I wrote the tools with the latter in mind, but I was later persuaded to change to the former. Maybe one day we'll make it configurable...

(In reply to comment #2)
> There was a long discussion, never satisfactorily resolved: does successful
> exit status indicate "the action you requested succeeded" or "the device is now
> in the state you requested"?
>
> Originally I wrote the tools with the latter in mind, but I was later persuaded
> to change to the former. Maybe one day we'll make it configurable...

Fine, but the current behaviour is inconsistent. For example, removing a non-existent tag does not return an error, and activating an already active LV does not return an error. If you want to return an error in such cases (which I agree is the right behaviour), then the error must be a distinct return code: it needs to be handled by a machine, so text output is not a good option. So I'm reopening and renaming the bug.

Other than handling specific cases that cause specific problems, I'm not convinced it's worth the effort and the knock-on problems a full audit would involve. Maybe this will just happen by default when the tools are converted to use the library (which *will* necessarily have to be stricter).

(In short, I am still not clear in my own mind what 'correct' behaviour always is: there are plenty of difficult cases to decide.)

(In reply to comment #5)
> (In short, I am still not clear in my own mind what 'correct' behaviour always
> is: there are plenty of difficult cases to decide.)

I'm not sure there is a "correct" behaviour. However, I do think LVM should be consistent. Personally, I prefer not having this fail, as it complicates things when we try to run multiple actions in a single command (e.g. deltag and addtag together).
In any event, the current behaviour is not only inconsistent, but there is also no way to differentiate between the different states using the return code.

Since RHEL 6.1 External Beta has begun, and this bug remains unresolved, it has been rejected as it is not proposed as an exception or blocker. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, in the next release of Red Hat Enterprise Linux.

Another example: a VG has lvol1 and lvol2. lvol1 has tag tag1; lvol2 is untagged. You run:

  lvchange --deltag tag1 VG

What's the right return code? Success, because tag1 was present on lvol1 and got removed? Failure, because lvol2 doesn't possess tag1, but you asked to remove the tag from every LV in the VG and one of the LVs doesn't have it?

What about:

  lvchange -ay VG/lvol1 VG/lvol2

where lvol1 is already active but lvol2 is not? Success, because everything is active afterwards? Failure, because lvol1 was already active so could not be activated? Is lvchange -ay VG the same? Is vgchange -ay VG different, because as a vg* command the 'unit' under consideration is the VG rather than the LV, and the VG was not fully active before but is afterwards - so that's always success?

I don't see any reasonable way to resolve the incompatible requirements from different users apart from making the return code configurable, so the user decides whether they want an error or not in these cases. What we'd have to do is create the configuration option, attempt to define the behaviour unambiguously along the lines I suggested (including the types of cases in this comment), then gradually migrate the tools across to work the new way.

Potentially it could be done by introducing a new error code

  #define EALREADY_IN_STATE 2

instead of ECMD_FAILED (5) (renumbering the existing 2 -> 3 and 3 -> 4, as they are more serious and the highest error code wins when combining errors) and treating the error the same as ECMD_PROCESSED (1) if the config setting tells us to.
(That would be an internal mapping; externally we shouldn't change existing codes, so it might appear as code 4 externally.)

Still no clarity on the best way to proceed - deferring.

This will go on forever - reopen when it is a problem or when the issue is taken up. Re-closing, as was done after comment 2 in 2010.