Bug 1450667 - ReaR recovery fails when the OS contains a Thin Pool/Volume
Summary: ReaR recovery fails when the OS contains a Thin Pool/Volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: rear
Version: 7.3
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Assignee: Pavel Cahyna
QA Contact: David Jež
URL:
Whiteboard:
Duplicates: 1500632 1672218
Depends On:
Blocks:
 
Reported: 2017-05-14 13:40 UTC by Jesús Serrano Sánchez-Toscano
Modified: 2021-09-09 12:18 UTC
CC List: 11 users

Fixed In Version: rear-2.4-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 11:43:19 UTC
Target Upstream Version:
Embargoed:


Attachments
TEST3-ERROR1_rear-fastvm-r7-3-60.log (11.35 KB, text/plain)
2017-05-14 13:40 UTC, Jesús Serrano Sánchez-Toscano
TEST3-ERROR2_rear-fastvm-r7-3-60.log (15.56 KB, text/plain)
2017-05-14 13:43 UTC, Jesús Serrano Sánchez-Toscano


Links
GitHub rear/rear issue 1380 (closed): ReaR recovery fails when the OS contains a Thin Pool/Volume (last updated 2021-02-04 05:43:51 UTC)
Red Hat Knowledge Base Solution 3031921 (last updated 2018-05-22 10:32:30 UTC)
Red Hat Product Errata RHBA-2018:3293 (last updated 2018-10-30 11:44:02 UTC)

Description Jesús Serrano Sánchez-Toscano 2017-05-14 13:40:22 UTC
Created attachment 1278654 [details]
TEST3-ERROR1_rear-fastvm-r7-3-60.log

Description of problem:
When using ReaR, the recovery of a system that has a VG containing a Thin Pool/Volume fails, and the system is no longer recoverable. This has a high impact because ReaR cannot restore the system to an operational state. The failure is independent of which VG contains the Thin Pool; it does not matter whether the Thin Pool lives in the root VG or in a separate VG.


Version-Release number of selected component (if applicable):
rear-1.17.2-7.el7_3.x86_64


How reproducible:
Always, when there is a Thin Pool/Volume on the system.


Steps to Reproduce:
1. Create a Thin Pool/Volume on the system.

      # lvcreate -L 100M -T r7vg/mythinpool
      # lvcreate -V 1G -T r7vg/mythinpool -n thinvolume
      # lvs
          LV         VG   Attr       LSize   Pool       Origin Data%  Meta%  Move Log Cpy%Sync Convert
          mythinpool r7vg twi-aotz-- 100.00m                   0.00   0.98                            
          root_lv    r7vg -wi-ao----   4.88g                                                          
          swap_lv    r7vg -wi-ao---- 256.00m                                                          
          thinvolume r7vg Vwi-a-tz--   1.00g mythinpool        0.00                                   

    Note: Optionally, a file system can be created and mounted on top of the thin volume (thinvolume)
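
    For instance (illustrative commands, not part of the original reproducer; the mount point is arbitrary), such a file system could be created and mounted like this:

      # mkfs.xfs /dev/r7vg/thinvolume
      # mkdir -p /mnt/thintest
      # mount /dev/r7vg/thinvolume /mnt/thintest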

2. Create a ReaR backup ISO (rear mkbackup). An ISO file for testing purposes can be downloaded from:

      http://file.brq.redhat.com/jserrano/01840125/TEST3_rear-fastvm-r7-3-60.iso

3. Boot up from the ReaR ISO created and try to recover (rear recover). Note: When testing the recovery, do it on another VM, because the upcoming failures will prevent the system from booting afterwards.
4. The first attempt fails because restoring LVM metadata that includes Thin Pools requires the --force option, which has to be added manually:

      2017-05-14 11:31:09 Restoring LVM VG r7vg
      +++ Print 'Restoring LVM VG r7vg'
      +++ test 1
      +++ echo -e 'Restoring LVM VG r7vg'
      +++ '[' -e /dev/r7vg ']'
      +++ lvm vgcfgrestore -f /var/lib/rear/layout/lvm/r7vg.cfg r7vg
        WARNING: Failed to connect to lvmetad. Falling back to device scanning.
   >>>> Consider using option --force to restore Volume Group r7vg with thin volumes.
        Restore failed.
      2017-05-14 11:31:09 An error occurred during layout recreation.
      2017-05-14 11:31:31 User selected: 6) Abort Relax-and-Recover
      2017-05-14 11:31:31 Error detected during restore.
      2017-05-14 11:31:31 Restoring backup of /var/lib/rear/layout/disklayout.conf
      2017-05-14 11:31:31 ERROR: There was an error restoring the system layout. See /var/log/rear/rear-fastvm-r7-3-60.log for details.

   Log file for this error is attached to this bugzilla for investigation:

      TEST3-ERROR1_rear-fastvm-r7-3-60.log

5. Edit the recovery script manually and add "--force" to the corresponding vgcfgrestore command for the VG that includes the Thin Pool. After the change, the command will look like the following:

      lvm vgcfgrestore --force -f "/var/lib/rear/layout/lvm/r7vg.cfg" r7vg >&2

6. Retry the recovery; it fails again because, after the LVM metadata is successfully restored, the binary /usr/sbin/thin_check is not found on the ISO, and thus the activation of the Thin Pool/Volume fails:

      2017-05-14 11:33:21 Restoring LVM VG r7vg
      +++ Print 'Restoring LVM VG r7vg'
      +++ test 1
      +++ echo -e 'Restoring LVM VG r7vg'
      +++ '[' -e /dev/r7vg ']'
      +++ lvm vgcfgrestore --force -f /var/lib/rear/layout/lvm/r7vg.cfg r7vg
        WARNING: Failed to connect to lvmetad. Falling back to device scanning.
        WARNING: Forced restore of Volume Group r7vg with thin volumes.
        Restored volume group r7vg
      +++ lvm vgchange --available y r7vg
        WARNING: Failed to connect to lvmetad. Falling back to device scanning.
   >>>> /usr/sbin/thin_check: execvp failed: No such file or directory
        Check of pool r7vg/mythinpool failed (status:2). Manual repair required!
   >>>> /usr/sbin/thin_check: execvp failed: No such file or directory
        2 logical volume(s) in volume group "r7vg" now active
      2017-05-14 11:33:21 An error occurred during layout recreation.
      2017-05-14 11:33:26 User selected: 6) Abort Relax-and-Recover
      2017-05-14 11:33:26 Error detected during restore.
      2017-05-14 11:33:26 Restoring backup of /var/lib/rear/layout/disklayout.conf
      2017-05-14 11:33:26 ERROR: There was an error restoring the system layout. See /var/log/rear/rear-fastvm-r7-3-60.log for details.

   Log file for this error is attached to this bugzilla for investigation:

      TEST3-ERROR2_rear-fastvm-r7-3-60.log

   Note: The warnings about failing to connect to lvmetad are not relevant because, even when using use_lvmetad = 0, the recovery process fails with the same error (missing binary).


Actual results:
The recovery process fails, and there is no way to recover the system when a VG includes a Thin Pool/Volume.


Expected results:
The LVM metadata is restored successfully and all LVs in the system are activated, even when a VG contains a Thin Pool/Volume.


Additional info:
No workaround is known.

Comment 2 Jesús Serrano Sánchez-Toscano 2017-05-14 13:43:56 UTC
Created attachment 1278655 [details]
TEST3-ERROR2_rear-fastvm-r7-3-60.log

Log of the failed ReaR recovery process due to the missing binary /usr/sbin/thin_check

Comment 3 Jesús Serrano Sánchez-Toscano 2017-05-14 13:44:34 UTC
Comment on attachment 1278654 [details]
TEST3-ERROR1_rear-fastvm-r7-3-60.log

Log of the failed ReaR recovery due to the missing --force option in the vgcfgrestore command

Comment 4 Ondrej Faměra 2017-05-29 12:44:21 UTC
Additional information from our customer, who attempted to fix the problem by providing the missing binaries and symlinks to the system.
I have verified that the recovery process then finishes successfully.

## Changes done in generated ReaR ISO:

Binaries copied from source machine:
- /usr/sbin/pdata_tools
- /lib64/libaio.so.1.0.1
- /lib64/libstdc++.so.6.0.19

Symlinks created in the running ReaR ISO image:
/usr/sbin/thin_check -> pdata_tools
/usr/sbin/thin_delta -> pdata_tools
/usr/sbin/thin_dump -> pdata_tools
/usr/sbin/thin_ls -> pdata_tools
/usr/sbin/thin_metadata_size -> pdata_tools
/usr/sbin/thin_repair -> pdata_tools
/usr/sbin/thin_restore -> pdata_tools
/usr/sbin/thin_rmap -> pdata_tools
/usr/sbin/thin_trim -> pdata_tools
/lib64/libaio.so.1 -> /lib64/libaio.so.1.0.1
/lib64/libstdc++.so.6 -> libstdc++.so.6.0.19

Added '--force' to the 'vgcfgrestore' command in the diskrestore.sh file.
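
For reference, a minimal shell sketch of the manual workaround above, to be run inside the booted rescue environment after the binaries listed above have been copied from the source machine (an illustration of the customer's fix, not part of ReaR):

# Re-create the thin-tools symlinks pointing at pdata_tools, plus the two
# library symlinks, exactly as listed above.
for tool in thin_check thin_delta thin_dump thin_ls thin_metadata_size \
            thin_repair thin_restore thin_rmap thin_trim; do
    ln -sf pdata_tools "/usr/sbin/$tool"
done
ln -sf /lib64/libaio.so.1.0.1 /lib64/libaio.so.1
ln -sf libstdc++.so.6.0.19 /lib64/libstdc++.so.6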

Comment 7 Jesús Serrano Sánchez-Toscano 2017-08-21 12:03:46 UTC
I have set up another reproducer with a newer version of ReaR. Although the results are different this time (I got a system that was able to boot), the LVM Thin Pool/Volumes were not restored at all.

Details of a new reproducer in my lab using the latest version of ReaR:

   Version installed: rear-2.00-2.el7.x86_64
   Hypervisor: ofamera-devel.usersys.redhat.com
   Original machine: fvm-rhel-7-3-34  <-- Executed 'rear mkbackup'
   Recovery machine: fvm-rhel-7-3-38  <-- Executed 'rear recover'
   NFS backup server: fvm-rhel-7-3-44
   User: root
   Pass: testtest


*******************
*** BACKUP TEST ***
*******************

RESULT: Backup taken successfully. ISO + backup.tar.gz were sent to the NFS server

-> Original system (fvm-rhel-7-3-34):

   [root@fvm-rhel-7-3-34 ~]# rear -d -v mkbackup
   Relax-and-Recover 2.00 / Git
   Using log file: /var/log/rear/rear-fvm-rhel-7-3-34.log
   Using backup archive 'backup.tar.gz'
   Creating disk layout
   Creating root filesystem layout
   Copying logfile /var/log/rear/rear-fvm-rhel-7-3-34.log into initramfs as '/tmp/rear-fvm-rhel-7-3-34-partial-2017-08-21T10:14:10+0200.log'
   Copying files and directories
   Copying binaries and libraries
   Copying kernel modules
   Creating initramfs
   Making ISO image
   Wrote ISO image: /var/lib/rear/output/rear-fvm-rhel-7-3-34.iso (134M)
   Copying resulting files to nfs location
   Saving /var/log/rear/rear-fvm-rhel-7-3-34.log as rear-fvm-rhel-7-3-34.log to nfs location
   Creating tar archive '/tmp/rear.WaTt8XedE2CpdJ3/outputfs/fvm-rhel-7-3-34/backup.tar.gz'
   Archived 850 MiB [avg 3349 KiB/sec] OK
   Archived 850 MiB in 261 seconds [avg 3336 KiB/sec]
   You should also rm -Rf /tmp/rear.WaTt8XedE2CpdJ3
   [root@fvm-rhel-7-3-34 ~]# lvs
     LV          VG      Attr       LSize   Pool        Origin Data%  Meta%  Move Log Cpy%Sync Convert
     root_lv     r7vg    -wi-ao----   4.88g                                                           
     swap_lv     r7vg    -wi-ao---- 256.00m                                                           
     lv_thin     vg_thin Vwi-a-tz--   1.00g lv_thinpool        0.00                                   
     lv_thinpool vg_thin twi-aotz--  92.00m                    0.00   0.98                            
   
-> NFS backup server (fvm-rhel-7-3-44):   

   [root@fvm-rhel-7-3-44 ~]# ls -l /media/backups/fvm-rhel-7-3-34/
   total 1010332
   -rw-------. 1 nfsnobody nfsnobody   2252128 Aug 21 10:20 backup.log
   -rw-------. 1 nfsnobody nfsnobody 891757428 Aug 21 10:20 backup.tar.gz
   -rw-------. 1 nfsnobody nfsnobody       202 Aug 21 10:16 README
   -rw-------. 1 nfsnobody nfsnobody 140369920 Aug 21 10:15 rear-fvm-rhel-7-3-34.iso
   -rw-------. 1 nfsnobody nfsnobody    183744 Aug 21 10:16 rear-fvm-rhel-7-3-34.log
   -rw-------. 1 nfsnobody nfsnobody         0 Aug 21 10:20 selinux.autorelabel
   -rw-------. 1 nfsnobody nfsnobody       273 Aug 21 10:16 VERSION



*********************
*** RECOVERY TEST ***
*********************

RESULT: It recovered the system (it was able to boot afterwards), but only the OS itself (root VG/LVs), not the LVM Thin Pool/Volumes on the second disk

-> Original system (fvm-rhel-7-3-34):

   [jserrano@ofamera-devel ~]$ fast-vm ssh 34
   [inf] checking the 192.168.33.34 for active SSH connection (ctrl+c to interrupt)
   [inf] 
   SSH ready
   Warning: Permanently added '192.168.33.34' (ECDSA) to the list of known hosts.
   
   System is booting up. See pam_nologin(8)
   Last login: Mon Aug 21 12:35:16 2017 from gateway
   
   [root@fvm-rhel-7-3-34 ~]# lvs
     LV          VG      Attr       LSize   Pool        Origin Data%  Meta%  Move Log Cpy%Sync Convert
     root_lv     r7vg    -wi-ao----   4.88g                                                           
     swap_lv     r7vg    -wi-ao---- 256.00m                                                           
     lv_thin     vg_thin Vwi-a-tz--   1.00g lv_thinpool        0.00                                   
     lv_thinpool vg_thin twi-aotz--  92.00m                    0.00   0.98                            
   [root@fvm-rhel-7-3-34 ~]# 
   [root@fvm-rhel-7-3-34 ~]# lsblk
   NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
   sda                             8:0    0     6G  0 disk 
   ├─sda1                          8:1    0   500M  0 part /boot
   └─sda2                          8:2    0   5.5G  0 part 
     ├─r7vg-root_lv              253:0    0   4.9G  0 lvm  /
     └─r7vg-swap_lv              253:1    0   256M  0 lvm  [SWAP]
   sdb                             8:16   0 102.4M  0 disk 
   ├─vg_thin-lv_thinpool_tmeta   253:2    0     4M  0 lvm  
   │ └─vg_thin-lv_thinpool-tpool 253:4    0    92M  0 lvm  
   │   ├─vg_thin-lv_thinpool     253:5    0    92M  0 lvm  
   │   └─vg_thin-lv_thin         253:6    0     1G  0 lvm  
   └─vg_thin-lv_thinpool_tdata   253:3    0    92M  0 lvm  
     └─vg_thin-lv_thinpool-tpool 253:4    0    92M  0 lvm  
       ├─vg_thin-lv_thinpool     253:5    0    92M  0 lvm  
       └─vg_thin-lv_thin         253:6    0     1G  0 lvm  
   sr0                            11:0    1  1024M  0 rom  


-> Recovery machine (fvm-rhel-7-3-38) -after 'rear recover'-:

   [jserrano@ofamera-devel ~]$ fast-vm ssh 38
   [inf] checking the 192.168.33.38 for active SSH connection (ctrl+c to interrupt)
   [inf] 
   SSH ready
   Warning: Permanently added '192.168.33.38' (ECDSA) to the list of known hosts.
   Last login: Mon Aug 21 12:26:46 2017
   
   [root@fvm-rhel-7-3-34 ~]# lvs
     LV      VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
     root_lv r7vg -wi-ao----   4.88g                                                    
     swap_lv r7vg -wi-ao---- 256.00m                                                    
   
   [root@fvm-rhel-7-3-34 ~]# lsblk
   NAME             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
   sda                8:0    0     6G  0 disk 
   ├─sda1             8:1    0   500M  0 part /boot
   └─sda2             8:2    0   5.5G  0 part 
     ├─r7vg-root_lv 253:0    0   4.9G  0 lvm  /
     └─r7vg-swap_lv 253:1    0   256M  0 lvm  [SWAP]
   sdb                8:16   0 102.4M  0 disk 
   

Please let me know if I have missed anything in my test.

Comment 9 Renaud Métrich 2017-10-11 08:55:35 UTC
*** Bug 1500632 has been marked as a duplicate of this bug. ***

Comment 10 Renaud Métrich 2017-10-11 09:00:21 UTC
When the LVM Thin Pool is part of the rootvg, no recovery can succeed at all.

Tested with rear-2.00-2.el7.x86_64.

Initial error:

lvm vgcfgrestore -f /var/lib/rear/layout/lvm/rhel.cfg vgroot
Consider using option --force to restore Volume Group rhel with thin volumes.
Restore failed.

Trying to use the "--force" option by modifying /var/lib/rear/layout/diskrestore.sh doesn't help either.
It then fails when checking the Thin volume, because /usr/sbin/thin_check is not part of the ReaR image by default.

Finally, after adding /usr/sbin/thin_check to the ReaR image (using the REQUIRED_PROGS variable), it still fails when trying to make the VG available:

lvm vgchange --available y rhel
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  Monitoring rhel/pool00 failed.
  device-mapper: reload ioctl on  (252:5) failed: No data available
  2 logical volume(s) in volume group "rhel" now active

I then stopped the investigation at this step.

Steps to Reproduce:

1. Install a VM, selecting "LVM Thin Provisioning" instead of "LVM" for root LV
2. Create ReaR rescue image
3. Try restoring the disk layout

Comment 11 Jesús Serrano Sánchez-Toscano 2017-10-20 12:03:50 UTC
@Renaud Métrich:

Please note that the recovery also fails when the LVM Thin Pool is *not* part of the rootvg. Refer to the "RECOVERY TEST" section of my latest test in https://bugzilla.redhat.com/show_bug.cgi?id=1450667#c7


   [root@fvm-rhel-7-3-34 ~]# lsblk
   NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
   sda                             8:0    0     6G  0 disk 
   ├─sda1                          8:1    0   500M  0 part /boot
   └─sda2                          8:2    0   5.5G  0 part 
     ├─r7vg-root_lv              253:0    0   4.9G  0 lvm  /
     └─r7vg-swap_lv              253:1    0   256M  0 lvm  [SWAP]
   sdb                             8:16   0 102.4M  0 disk 
   ├─vg_thin-lv_thinpool_tmeta   253:2    0     4M  0 lvm  
   │ └─vg_thin-lv_thinpool-tpool 253:4    0    92M  0 lvm  
   │   ├─vg_thin-lv_thinpool     253:5    0    92M  0 lvm  
   │   └─vg_thin-lv_thin         253:6    0     1G  0 lvm  
   └─vg_thin-lv_thinpool_tdata   253:3    0    92M  0 lvm  
     └─vg_thin-lv_thinpool-tpool 253:4    0    92M  0 lvm  
       ├─vg_thin-lv_thinpool     253:5    0    92M  0 lvm  
       └─vg_thin-lv_thin         253:6    0     1G  0 lvm  
   sr0                            11:0    1  1024M  0 rom

Comment 12 Gratien D'haese 2017-10-31 15:21:52 UTC
The best approach would be to write a prep script that detects that LVM thin provisioning is in use and copies the required binaries to the rescue image; see the sketch below. It is important that the required binaries are in the rescue image to start with.
Once that is done, we can identify whether more steps are required.
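
A minimal sketch of such a prep script, assuming ReaR's REQUIRED_PROGS and LIBS arrays (as used in comment 14 below) and lvm's lv_layout report field to detect thin provisioning; this is an illustration, not the actual ReaR code:

# If any LV uses thin provisioning, pull the thin tools and the lvm2
# helper libraries into the rescue image.
if lvm lvs --noheadings -o lv_layout 2>/dev/null | grep -qw thin ; then
    REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" thin_check thin_dump thin_restore thin_repair )
    LIBS=( "${LIBS[@]}" /usr/lib64/*lvm2* )
fi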

Comment 14 Renaud Métrich 2017-11-02 12:33:55 UTC
The following needs to be added to /etc/rear/local.conf:

REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" thin_dump thin_restore thin_check thin_repair )
LIBS=( "${LIBS[@]}" /usr/lib64/*lvm2* )


Also, "vgcfgrestore" has to be replaced by "vgcfgrestore --force" in /usr/share/rear/layout/prepare/GNU/Linux/110_include_lvm_code.sh:

lvm vgcfgrestore --force -f "$VAR_DIR/layout/lvm/${vgrp#/dev/}.cfg" ${vgrp#/dev/} >&2

But this is not sufficient; the issue now is with device-mapper.
Starting dmeventd in debug mode (/usr/sbin/dmeventd -f -l -ddd), we can see the following when trying to activate the VG:

VG activation:

# lvm vgchange --available y rhel
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  device-mapper: reload ioctl on  (252:5) failed: No data available
  2 logical volume(s) in volume group "rhel" now active

dmeventd trace:

[ 0:10] b08b5700:lvm       Locking memory
[ 0:10] b08b5700:lvm         lvm plugin initilized.
[ 0:10] b08b5700:dm        dmeventd/thin_command not found in config: defaulting to lvm lvextend --use-policies
[ 0:10] b08b5700:thin      Monitoring thin pool rhel-pool00-tpool.
[ 0:10] b08b5700:dm          dm waitevent  LVM-hrMgDvLvLxUqxfsCrVZUGQKRKRrKy2tgNJoCljwt0WOPHbEjh06B1FQyxaafHrMu-tpool [ opencount flush ]   [16384] (*1)
[ 0:20] b08b5700:dm      device-mapper: waitevent ioctl on  LVM-hrMgDvLvLxUqxfsCrVZUGQKRKRrKy2tgNJoCljwt0WOPHbEjh06B1FQyxaafHrMu-tpool failed: Interrupted system call
[ 0:20] b08b5700:dm          dm status  LVM-hrMgDvLvLxUqxfsCrVZUGQKRKRrKy2tgNJoCljwt0WOPHbEjh06B1FQyxaafHrMu-tpool [ opencount noflush ]   [16384] (*1)
...

Comment 15 Zdenek Kabelac 2018-05-15 09:15:56 UTC
A few comments from the lvm2 side:

There is nothing to back up on a thin pool.

There is no way to restore a thin pool.

A thin pool consists of a set of data chunks (in the _tdata LV) and its mapping (in the _tmeta LV).

Trying to back these up on a running, live thin pool makes no sense at all.

You can only back up individual active thin LVs.

----

There is no support on the lvm2 side for 'relocation' of a thin pool to another machine, and that is actually what would be needed for this task.

At the moment, the easiest way to copy a thin pool to another machine is to keep the thin pool inactive, activate the individual _tdata and _tmeta LVs (this is only possible on git HEAD of lvm2), and copy those volumes.

You would then have to extract the thin-pool lvm2 metadata to restore all the settings for the thin pool and thin LVs in a different VG.

---

Please do NOT use 'vgcfgrestore --force' in ANY automated tool.
Whenever the --force option is used, it CAN and MAY destroy data (and several kittens may die as well...).

The --force option is there for those who know EXACTLY what they are doing and can accept the risk of losing data.

---

For live, online thin-pool migration/copying, we would need to implement support for 'remote replication' using a tool like 'thin_delta'.

Comment 16 Renaud Métrich 2018-05-15 09:44:18 UTC
Thanks for the insights.

In a nutshell, we must then implement as shown below:

1. During backup

- Collect the usual PV, VG and LV data (size, etc.)
- [NEW] Add "thin" attributes for LVs and Pools as well ("lvmvol" lines), e.g.

  Current layout:

  lvmvol /dev/rhel pool00 3068 25133056 
  lvmvol /dev/rhel root 2556 20938752 
  lvmvol /dev/rhel swap 512 4194304 

  Missing knowledge:

  - "pool00" is a thin pool
  - "root" is hosted on "pool00"
  - "swap" is hosted on "pool00"

  --> To be done in /usr/share/rear/layout/save/GNU/Linux/220_lvm_layout.sh

2. During restore

- [NEW] Check whether vgcfgrestore can be used or not (depending on thin pool existence? or just let it fail)
- [NEW] If there is a thin pool, do not use vgcfgrestore but "legacy" tools (vgcreate / lvcreate)

  --> To be done in /usr/share/rear/layout/prepare/GNU/Linux/110_include_lvm_code.sh
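
As a rough illustration of point 1, the missing thin-pool knowledge can be queried from lvm itself; a hedged sketch using lvm2's standard report fields (not the actual 220_lvm_layout.sh code):

# For every LV, report its layout (linear, thin, pool, ...) and, for thin
# volumes, the pool hosting it; this is exactly the knowledge listed as
# missing above.
lvm lvs --noheadings --separator ':' -o vg_name,lv_name,lv_size,lv_layout,pool_lv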

Comment 17 Renaud Métrich 2018-05-16 10:49:31 UTC
GitHub Pull Request: https://github.com/rear/rear/pull/1806

The proposed code does the following:

1. During backup

- Collect additional LV properties

    origin: originating LV (for cache and snapshots)
    lv_layout: type of LV (mirror, raid, thin, etc)
    pool_lv: thin pool hosting LV
    chunk_size: size of the chunk, for various volumes
    stripes: number of stripes, for Mirror volumes and Stripes volumes
    stripe_size: size of a stripe, for Raid volumes

- Skip caches and snapshots

2. During restore

- If in Migration mode (e.g. different disks but the same size), go through the vgcreate/lvcreate code (Legacy Method), printing warnings because the initial layout may not be preserved (we do not save all of the attributes needed to re-create the LVM volumes)

- Otherwise, try "vgcfgrestore"

- If it fails

  - Try "vgcfgrestore --force"
  - If it fails, use vgcreate/lvcreate (Legacy Method)
  - Otherwise, remove the Thin pools (which are broken due to the --force flag)
  - Create the Thin pools using the Legacy Method (but do not re-create the other LVs, which have been successfully restored)
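
To make the restore flow above concrete, here is a hedged bash sketch of the decision logic only (this is not the code from PR 1806; the Migration-mode branch is omitted, and the VG name and the two *_legacy helpers are placeholders):

# Placeholder helpers standing in for ReaR's vgcreate/lvcreate code path.
create_lvm_legacy()        { echo "re-create VG $1 and all of its LVs with vgcreate/lvcreate" ; }
create_thin_pools_legacy() { echo "re-create only the thin pools/volumes of VG $1 with lvcreate -T" ; }

VAR_DIR=/var/lib/rear   # ReaR's variable directory
vg=rhel                 # example VG name

if lvm vgcfgrestore -f "$VAR_DIR/layout/lvm/$vg.cfg" "$vg" >&2 ; then
    : # plain vgcfgrestore succeeded, nothing more to do
elif lvm vgcfgrestore --force -f "$VAR_DIR/layout/lvm/$vg.cfg" "$vg" >&2 ; then
    # The forced restore brings the VG back but leaves the thin pools broken,
    # so they get removed and re-created with the Legacy Method.
    create_thin_pools_legacy "$vg"
else
    # Even the forced restore failed: fall back entirely to vgcreate/lvcreate.
    create_lvm_legacy "$vg"
fi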

Comment 18 Renaud Métrich 2018-05-28 09:08:03 UTC
https://github.com/rear/rear/pull/1806 merged into rear:master (ReaR 2.4???).

@pavel, we should definitely rebase to 2.4 for RHEL7.6 if we can.

Comment 19 Pavel Cahyna 2018-06-27 16:23:39 UTC
merged upstream in 2.4/b8630a6417255393524e8df4c20f3ba24f00b85d

Comment 24 errata-xmlrpc 2018-10-30 11:43:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3293

Comment 25 Pavel Cahyna 2019-08-05 10:32:23 UTC
*** Bug 1672218 has been marked as a duplicate of this bug. ***

