Bug 1356087

Summary: [RFE] Mechanism to prevent VGs/LVs or file-system from getting erased
Product: Red Hat Enterprise Linux 8
Reporter: cshao <cshao>
Component: anaconda
Assignee: Anaconda Maintenance Team <anaconda-maint-list>
Status: CLOSED WONTFIX
QA Contact: Release Test Team <release-test-team-automation>
Severity: high
Priority: medium
Version: 8.1
CC: agk, anaconda-maint-list, aoconnor, bugs, fdeutsch, huzhao, jkonecny, leiwang, sbueno, vtrefny, weiwang, yaniwang, ycui, yturgema
Target Milestone: alpha
Keywords: FutureFeature
Target Release: 8.1
Hardware: Unspecified
OS: Unspecified
Last Closed: 2020-11-01 03:03:02 UTC
Type: Bug
Bug Blocks: 1281845, 1362310    
Attachments: all log info

Description cshao 2016-07-13 11:39:16 UTC
Description of problem:
LVs should be protected when reinstalling RHVH.

Version-Release number of selected component (if applicable):
redhat-virtualization-host-4.0-20160708.0 
imgbased-0.7.2-0.1.el7ev.noarch
redhat-release-virtualization-host-4.0-0.13.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Install RHVH manually
2. Create one VG for the RHVH installation (vg1).
3. Create a separate VG (vg2) with an LV for the local storage domain (see the command sketch after these steps).
4. Add RHVH to Engine
5. Create local storage domain on vg2.
6. Ensure setup is working
7. Reinstall RHVH
   - Boot from ISO.
   - Remove the VG1 used for RHVH installation.
   - Create a new VG for RHVH installation.
   - Keep the local storage domain VG2.
8. Log in to RHVH and run # findmnt.
9. Check the output.
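
A rough sketch of the storage layout from steps 2-3 (the device path /dev/sdb, the LV name lv_local, the size, and the mount point are examples only; vg1 itself is created by the RHVH installer):

# vgcreate vg2 /dev/sdb                  # separate PV, kept out of the RHVH installation
# lvcreate -n lv_local -L 40G vg2        # LV that will back the local storage domain
# mkfs.xfs /dev/vg2/lv_local
# mount /dev/vg2/lv_local /a             # example mount point for the local storage domain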

Actual results:
findmnt shows no entry for vg2, so the LV was not protected during the reinstall.


Expected results:
LVs should be protected when reinstalling RHVH.

Additional info:
Before reinstall:
# lvs
  LV     VG              Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home   rhel_dhcp-8-139 Vwi-aotz-- 122.08g pool00        0.05                                   
  pool00 rhel_dhcp-8-139 twi-aotz-- 222.08g               0.94   0.86                            
  root   rhel_dhcp-8-139 Vwi-aotz-- 100.00g pool00        2.02                                   
  a      vg1             Vwi-aotz--  41.66g pool00        0.06                                   
  pool00 vg1             twi-aotz--  41.66g               0.06   0.47                            
[root@dhcp-8-139 a]# vgs
  VG              #PV #LV #SN Attr   VSize   VFree 
  rhel_dhcp-8-139   1   3   0 wz--n- 238.08g 15.78g
  vg1               1   2   0 wz--n-  50.00g  8.25g
[root@dhcp-8-139 a]# pwd
/a
[root@dhcp-8-139 a]# ll
total 0
drwxr-xr-x. 5 vdsm kvm 45 Jul 13 18:49 b77bccf3-9465-4672-af10-991a829efc84
-rwxr-xr-x. 1 vdsm kvm  0 Jul 13 18:49 __DIRECT_IO_TEST__



After reinstall:
# lvs
  LV     VG              Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home   rhel_dhcp-8-139 -wi-ao---- 138.00g                                                      
  root   rhel_dhcp-8-139 -wi-ao---- 100.08g                                                      
  a      vg1             Vwi-a-tz--  41.66g pool00        0.06                                   
  pool00 vg1             twi-aotz--  41.66g               0.06   0.47                            
[root@dhcp-8-139 ~]# vgs
  VG              #PV #LV #SN Attr   VSize   VFree
  rhel_dhcp-8-139   1   2   0 wz--n- 238.08g    0 
  vg1               1   2   0 wz--n-  50.00g 8.25g
# cd /a
-bash: cd: /a: No such file or directory

Comment 1 cshao 2016-07-13 11:46:23 UTC
Created attachment 1179223 [details]
all log info

Comment 2 Fabian Deutsch 2016-07-18 16:35:00 UTC
Anaconda should provide a generic mechanism, e.g. through LVM tags or file-based flags, to prevent the erasure of certain VGs, LVs, or file systems.

Use case: an LV with crucial data exists; if the file /.anaconda-protected exists inside that filesystem, then anaconda raises an error when the user asks to erase the filesystem.
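
For illustration only (this is a sketch of the proposal above, not an implemented feature; the LV name lv_local is hypothetical), the file-based flag could be set like this:

# mount /dev/vg2/lv_local /mnt
# touch /mnt/.anaconda-protected         # flag file at the root of the filesystem to be protected
# umount /mnt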

Comment 4 Ying Cui 2016-08-08 06:47:17 UTC
I would like to escalate this bug because of a RHEV customer ticket in bug 1281845. While this bug is open, we cannot verify bug 1281845.

Comment 5 Samantha N. Bueno 2017-05-26 16:04:19 UTC
Hi Ying, sorry about this; we did not get to this feature in 7.4, but we will evaluate it once more during 7.5 planning.

Comment 6 Samantha N. Bueno 2017-10-18 09:52:31 UTC
Deferring this once more, to 7.6. We had to be very selective about bugs for 7.5, and I'm sorry to say that this was not one we could consider. We'll evaluate it in the next planning season.

Comment 7 Sandro Bonazzola 2018-11-28 11:11:46 UTC
This missed 7.2.z -> 7.6; can we have a target milestone for this? 7.6.z? 7.7?

Comment 8 Jiri Konecny 2018-11-29 09:22:52 UTC
Anaconda does not do z-streams, but we will look at this during 7.7 planning.

Comment 9 Vojtech Trefny 2018-12-05 12:49:41 UTC
So you basically need some mechanism to mark an LV to prevent users from accidentally removing it during installation? Who (and when) will mark the LVs as protected?

Using LVM tags should be relatively easy to implement in Blivet (the storage library used by Anaconda). We currently don't support LVM tags, but simply reading them and marking a device as protected if it carries an "anaconda-protected" tag (or something similar) shouldn't be hard.
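
As an illustration of the read-only check described here (a sketch only, not the Blivet implementation; the tag name follows the proposal above), an installer-side pass that lists the LVs it would have to treat as protected could look like:

# lvs --noheadings --separator ';' -o vg_name,lv_name,lv_tags | sed 's/^ *//' | \
    awk -F';' '$3 ~ /anaconda-protected/ { print "would protect: " $1 "/" $2 }'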

Comment 10 Fabian Deutsch 2018-12-05 13:21:39 UTC
Yuval, can you take a look?

Comment 11 Sandro Bonazzola 2018-12-10 15:48:54 UTC
I think it's totally fine using "anaconda-protected" to preserve LVs during reinstallation.
We'll update the RHV documentation to tell users to tag the LVs they want to protect in this way.
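
For reference, tagging an LV and checking the tag would look roughly like this (lv_local is a hypothetical LV name; the tag only takes effect once anaconda/Blivet actually honors it):

# lvchange --addtag anaconda-protected vg2/lv_local
# lvs -o lv_name,vg_name,lv_tags vg2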

Comment 12 Sandro Bonazzola 2019-06-14 07:18:53 UTC
Missed 7.7, retrying with 8.1 for RHV 4.4

Comment 13 Sandro Bonazzola 2020-03-18 11:24:57 UTC
Not tracking this for RHV 4.4 anymore.

Comment 16 RHEL Program Management 2020-11-01 03:03:02 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.