Bug 1639360
| Summary: | Separate lvm activation from other lvm commands | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Nir Soffer <nsoffer> |
| Component: | vdsm | Assignee: | Vojtech Juranek <vjuranek> |
| Status: | CLOSED ERRATA | QA Contact: | Ilan Zuckerman <izuckerm> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.3.0 | CC: | achareka, aefrat, bubrown, bugs, dfediuck, gveitmic, lsurette, michal.skrivanek, mkalinin, mwest, pelauter, rdlugyhe, srevivo, tnisan, vjuranek, ycui |
| Target Milestone: | ovirt-4.4.0 | Flags: | lsvaty: testing_plan_complete- |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | Previously, mixing the Logical Volume Manager (LVM) activation and deactivation commands with other commands caused possible undefined LVM behavior and warnings in the logs. The current release fixes this issue: it runs the LVM activation and deactivation commands separately from other commands, resulting in well-defined LVM behavior and clear errors in case of failure. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-08-04 13:26:06 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1411103 | | |
| Bug Blocks: | | | |
This bug has not been marked as a blocker for oVirt 4.3.0. Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

https://gerrit.ovirt.org/c/101378/ does not fix the underlying issue, but suppresses the warnings in the vdsm log. The warnings are still visible when the DEBUG log level is enabled.

I would raise the priority of this. We are using lvm incorrectly and we must fix our ways.

Checking which patches can be backported:
- https://gerrit.ovirt.org/95161 - easy to backport
- https://gerrit.ovirt.org/104882 - big change, not suitable for backport
- https://gerrit.ovirt.org/104883 - easy change, but needs to be reimplemented without depending on https://gerrit.ovirt.org/104882

If we have proof that mixing activation and tag changes can cause corruption, we can fix this in 4.3.

We don't have any data supporting the claim that this is related to data corruption. Therefore I don't see any reason to backport the fix.

Tal, this should be a 4.4 bug, and it should move to ON_QA.

To verify this BZ, I used the storage ART automation test suite, specifically TestCase10443. Here is what it does:
1. Create a disk with the wipe-after-delete attribute
2. Create a VM and install an OS on it
3. Attach the disk to the VM => the disk is created with 'wipe_after_delete' = True
4. Create a file from the guest
5. Remove the disk => the disk data should be deleted/wiped

Actual result: Success. As suggested by Vojtek, I grepped for "Combining activation change with other commands is not advised" in the appropriate vdsm log (with the log level turned to DEBUG just in case). This string was not found in the log (a sketch of this check appears after these comments).

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (RHV RHEL Host (ovirt-host) 4.4), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3246
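A minimal sketch of the log check used in the verification above. The helper name and default log path are assumptions for illustration; they are not part of the test suite or of vdsm.

```python
# Hypothetical helper mirroring the manual verification step above:
# scan the vdsm log for the LVM warning that should no longer appear.
WARNING = "Combining activation change with other commands is not advised"


def vdsm_log_is_clean(path="/var/log/vdsm/vdsm.log"):  # assumed default log path
    with open(path) as log:
        return not any(WARNING in line for line in log)


if __name__ == "__main__":
    print("clean" if vdsm_log_is_clean() else "warning found")
```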
Description of problem:

Vdsm is mixing lvm activation with adding and removing tags in the same lvm command. This shows a warning with recent LVM versions.

Here is an example command we use:

/usr/sbin/lvm lvchange --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/3600a098038304437415d4b6a59676c56'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --autobackup n -a y --deltag IU_4181c731-8b1a-4929-9c30-f3bfa011c129 --addtag IU__remove_me_ZERO_4181c731-8b1a-4929-9c30-f3bfa011c129 c80a5bd0-809e-4c4e-97a8-111611105b3e/885511bd-7164-458f-a1ac-1f8ad4355972

And we get this warning:

WARNING: Combining activation change with other commands is not advised.

This command did:
- activate the lv
- delete tag IU_4181c731-8b1a-4929-9c30-f3bfa011c129
- add tag IU__remove_me_ZERO_4181c731-8b1a-4929-9c30-f3bfa011c129

According to David Teigland's response on linux-lvm, mixing activation with other commands can lead to undefined behavior:
https://www.redhat.com/archives/linux-lvm/2018-October/msg00017.html

Version-Release number of selected component (if applicable):
LVM since Feb 2017.

We should split the internal APIs for activating and deactivating logical volumes and editing logical volume tags, and issue separate commands (a sketch of this split follows below).
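A minimal sketch of that split, assuming hypothetical helper names; vdsm's actual internal API and command runner differ, and the --config devices filter from the command above is omitted for brevity.

```python
import subprocess

LVM = "/usr/sbin/lvm"


def _run_lvm(args):
    # Hypothetical wrapper; vdsm uses its own command runner and --config options.
    subprocess.run([LVM] + args, check=True)


def activate_lv(vg_name, lv_name):
    # Activation is issued on its own, never combined with tag changes.
    _run_lvm(["lvchange", "--autobackup", "n", "-a", "y", f"{vg_name}/{lv_name}"])


def deactivate_lv(vg_name, lv_name):
    # Deactivation is likewise a single-purpose command.
    _run_lvm(["lvchange", "--autobackup", "n", "-a", "n", f"{vg_name}/{lv_name}"])


def change_lv_tags(vg_name, lv_name, del_tags=(), add_tags=()):
    # Tag edits go in a separate lvchange call.
    args = ["lvchange", "--autobackup", "n"]
    for tag in del_tags:
        args.extend(("--deltag", tag))
    for tag in add_tags:
        args.extend(("--addtag", tag))
    args.append(f"{vg_name}/{lv_name}")
    _run_lvm(args)
```

With a split like this, the single lvchange shown in the description becomes two calls: one tag change for the --deltag/--addtag pair and one separate activation for -a y.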