Bug 1504044 - unable to recover from degraded raid1 with thin pool
Summary: unable to recover from degraded raid1 with thin pool
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: rc
Target Release: 7.8
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1834485
 
Reported: 2017-10-19 11:39 UTC by Leo Bergolth
Modified: 2021-09-03 12:49 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1834485
Environment:
Last Closed: 2020-05-11 19:39:20 UTC
Target Upstream Version:
Embargoed:


Attachments
vgchange -ay -vvv output (17.87 KB, text/plain)
2017-10-19 11:42 UTC, Leo Bergolth
Patch to avoid partial on thin/cache/... SubLVs in case they are raid != raid0|raid0_meta (1.86 KB, application/mbox)
2018-07-25 14:18 UTC, Heinz Mauelshagen

Description Leo Bergolth 2017-10-19 11:39:58 UTC
Description of problem:
I have a thin pool mirrored on two PVs using --type raid1.

If one drive fails, LVM considers the thin pool a partial (not merely degraded) LV, so removing the missing PV would also remove the thin pool! (see below)

Is there a way to work around this problem?


Version-Release number of selected component (if applicable):
lvm2-2.02.171-8.el7.x86_64


Steps to Reproduce:
-------------------- 8< --------------------
# pvcreate /dev/vdb 
  Physical volume "/dev/vdb" successfully created.
# pvcreate /dev/vdc 
  Physical volume "/dev/vdc" successfully created.
# vgcreate vg_test /dev/vdb /dev/vdc
  Volume group "vg_test" successfully created
# lvcreate --type raid1 -m 1 -n thinmeta -L100m vg_test /dev/vdb /dev/vdc
  Logical volume "thinmeta" created.
# lvcreate --type raid1 -m 1 -n Thin -L2g vg_test /dev/vdb /dev/vdc 
  Logical volume "Thin" created.
# lvconvert -y --type thin-pool --poolmetadata vg_test/thinmeta vg_test/Thin
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting logical volume vg_test/Thin and vg_test/thinmeta to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted vg_test/Thin_tdata to thin pool.

# vgchange -a n vg_test
  0 logical volume(s) in volume group "vg_test" now active

### add global_filter = [ "r|^/dev/vdc$|" ] to lvm.conf
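### (for reference, global_filter belongs in the devices { } section of /etc/lvm/lvm.conf:
###     devices {
###         global_filter = [ "r|^/dev/vdc$|" ]
###     }
### )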
# systemctl restart lvm2-lvmetad.service
# pvscan --cache

# pvs
  WARNING: Device for PV na0HS1-ZcQt-bnt0-tFfB-Zl20-irIg-FffGxH not found or rejected by a filter.
  PV         VG      Fmt  Attr PSize   PFree 
  /dev/vda2  vg_sys  lvm2 a--  <12.00g <2.00g
  /dev/vdb   vg_test lvm2 a--  <12.00g  9.79g
  [unknown]  vg_test lvm2 a-m  <12.00g  9.89g

# vgchange -a y vg_test
  WARNING: Device for PV na0HS1-ZcQt-bnt0-tFfB-Zl20-irIg-FffGxH not found or rejected by a filter.
  Refusing activation of partial LV vg_test/Thin.  Use '--activationmode partial' to override.
  0 logical volume(s) in volume group "vg_test" now active

# vgchange -a y --activationmode=partial vg_test
  PARTIAL MODE. Incomplete logical volumes will be processed.
  WARNING: Device for PV na0HS1-ZcQt-bnt0-tFfB-Zl20-irIg-FffGxH not found or rejected by a filter.
  1 logical volume(s) in volume group "vg_test" now active

### removing the missing PV would also remove my thin pool!
# vgreduce --removemissing  vg_test
  WARNING: Device for PV na0HS1-ZcQt-bnt0-tFfB-Zl20-irIg-FffGxH not found or rejected by a filter.
  WARNING: Partial LV Thin needs to be repaired or removed. 
  WARNING: Partial LV Thin_tmeta needs to be repaired or removed. 
  WARNING: Partial LV Thin_tdata needs to be repaired or removed. 
  WARNING: Partial LV Thin_tmeta_rimage_1 needs to be repaired or removed. 
  WARNING: Partial LV Thin_tmeta_rmeta_1 needs to be repaired or removed. 
  WARNING: Partial LV Thin_tdata_rimage_1 needs to be repaired or removed. 
  WARNING: Partial LV Thin_tdata_rmeta_1 needs to be repaired or removed. 
  There are still partial LVs in VG vg_test.
  To remove them unconditionally use: vgreduce --removemissing --force.
  WARNING: Proceeding to remove empty missing PVs.
-------------------- 8< --------------------

Comment 2 Leo Bergolth 2017-10-19 11:42:08 UTC
Created attachment 1340705 [details]
vgchange -ay -vvv output

Comment 3 Leo Bergolth 2017-10-19 11:43:56 UTC
# lvs -a -o+devices vg_test                            
  WARNING: Device for PV na0HS1-ZcQt-bnt0-tFfB-Zl20-irIg-FffGxH not found or rejected by a filter.
  LV                    VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                      
  Thin                  vg_test twi-a-tzp-   2.00g             0.00   0.05                             Thin_tdata(0)                                
  [Thin_tdata]          vg_test rwi-aor-p-   2.00g                                    100.00           Thin_tdata_rimage_0(0),Thin_tdata_rimage_1(0)
  [Thin_tdata_rimage_0] vg_test iwi-aor---   2.00g                                                     /dev/vdb(27)                                 
  [Thin_tdata_rimage_1] vg_test Iwi-aor-p-   2.00g                                                     [unknown](27)                                
  [Thin_tdata_rmeta_0]  vg_test ewi-aor---   4.00m                                                     /dev/vdb(26)                                 
  [Thin_tdata_rmeta_1]  vg_test ewi-aor-p-   4.00m                                                     [unknown](26)                                
  [Thin_tmeta]          vg_test ewi-aor-p- 100.00m                                    100.00           Thin_tmeta_rimage_0(0),Thin_tmeta_rimage_1(0)
  [Thin_tmeta_rimage_0] vg_test iwi-aor--- 100.00m                                                     /dev/vdb(1)                                  
  [Thin_tmeta_rimage_1] vg_test Iwi-aor-p- 100.00m                                                     [unknown](1)                                 
  [Thin_tmeta_rmeta_0]  vg_test ewi-aor---   4.00m                                                     /dev/vdb(0)                                  
  [Thin_tmeta_rmeta_1]  vg_test ewi-aor-p-   4.00m                                                     [unknown](0)                                 
  [lvol0_pmspare]       vg_test ewi------- 100.00m                                                     /dev/vdb(539)

Comment 4 Jonathan Earl Brassow 2017-10-19 22:12:46 UTC
Yeah, strange it doesn't respond to 'degraded' mode activation attempt...

[root@bp-02 ~]# vgchange -ay vg
  Couldn't find device with uuid wo4dme-NF0T-810j-4Ixi-8W95-uZSW-mcJQig.
  Refusing activation of partial LV vg/tpool.  Use '--activationmode partial' to override.
  0 logical volume(s) in volume group "vg" now active
[root@bp-02 ~]# vgchange -ay --activationmode degraded vg
  Couldn't find device with uuid wo4dme-NF0T-810j-4Ixi-8W95-uZSW-mcJQig.
  Refusing activation of partial LV vg/tpool.  Use '--activationmode partial' to override.
  0 logical volume(s) in volume group "vg" now active
[root@bp-02 ~]# vgchange -ay --activationmode partial vg
  PARTIAL MODE. Incomplete logical volumes will be processed.
  Couldn't find device with uuid wo4dme-NF0T-810j-4Ixi-8W95-uZSW-mcJQig.
  1 logical volume(s) in volume group "vg" now active


You can use 'lvconvert --repair' in order to fix the various raid devices (you will need a replacement disk though).  Here's an example of what I did:
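(Note: the 'devices' command in the session below is presumably a local shell alias, roughly equivalent to 'lvs -a -o lv_name,lv_attr,copy_percent,devices', judging by the column headers.)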

[root@bp-02 ~]# devices vg
  Couldn't find device with uuid wo4dme-NF0T-810j-4Ixi-8W95-uZSW-mcJQig.
  LV                     Attr       Cpy%Sync Devices
  [lvol0_pmspare]        ewi-------          /dev/sdb1(128127)
  tpool                  twi-a-tzp-          tpool_tdata(0)
  [tpool_tdata]          rwi-aor-p- 100.00   tpool_tdata_rimage_0(0),tpool_tdata_rimage_1(0)
  [tpool_tdata_rimage_0] iwi-aor---          /dev/sdb1(127)
  [tpool_tdata_rimage_1] iwi-aor-p-          unknown device(127)
  [tpool_tdata_rmeta_0]  ewi-aor---          /dev/sdb1(126)
  [tpool_tdata_rmeta_1]  ewi-aor-p-          unknown device(126)
  [tpool_tmeta]          ewi-aor-p- 100.00   tpool_tmeta_rimage_0(0),tpool_tmeta_rimage_1(0)
  [tpool_tmeta_rimage_0] iwi-aor---          /dev/sdb1(1)
  [tpool_tmeta_rimage_1] iwi-aor-p-          unknown device(1)
  [tpool_tmeta_rmeta_0]  ewi-aor---          /dev/sdb1(0)
  [tpool_tmeta_rmeta_1]  ewi-aor-p-          unknown device(0)
[root@bp-02 ~]# lvconvert --repair vg/tpool_tdata
  Couldn't find device with uuid wo4dme-NF0T-810j-4Ixi-8W95-uZSW-mcJQig.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
  Faulty devices in vg/tpool_tdata successfully replaced.
[root@bp-02 ~]# lvconvert --repair vg/tpool_tmeta
  Couldn't find device with uuid wo4dme-NF0T-810j-4Ixi-8W95-uZSW-mcJQig.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
  Faulty devices in vg/tpool_tmeta successfully replaced.
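
(After the replacement, the new images resync in the background; 'lvs -a -o+devices' should show Cpy%Sync climbing toward 100.00 for tpool_tdata and tpool_tmeta while the rebuild runs.)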

Comment 5 Heinz Mauelshagen 2017-10-20 11:28:19 UTC
(In reply to Jonathan Earl Brassow from comment #4)
> Yeah, strange it doesn't respond to 'degraded' mode activation attempt...

The activation code does not consult cmd->degraded_activation in this context; it only checks cmd->partial_activation when the LV is partial. In the latter case, if partial activation mode was not requested, it bails out with that message. Because the top-level thin pool is not itself a raid LV, a missing leg in its raid SubLVs marks the whole pool partial rather than degraded.

The code has to be changed to first distinguish between raid and other LV types and then handle partial/degraded activation accordingly.

Comment 6 Leo Bergolth 2017-10-20 13:59:11 UTC
Thanks!
Unfortunately, on my box with real-life data there is a plain (non-thin) raid1 LV (used for booting) in addition to the thin pool. For this LV, a previous recovery attempt with vgreduce had already replaced the missing PV with the error target.

Now lvconvert --repair refuses to repair because of "components with error targets that must be removed first".

I managed to reproduce this situation on my test box:

-------------------- 8< --------------------
# lvcreate --type raid1 -m 1 -n boot -L100m vg_test /dev/vdb /dev/vdc
  Logical volume "boot" created.
# lvcreate --type raid1 -m 1 -n thinmeta -L100m vg_test /dev/vdb /dev/vdc
  Logical volume "thinmeta" created.
# lvcreate --type raid1 -m 1 -n Thin -L2g vg_test /dev/vdb /dev/vdc
  Logical volume "Thin" created.
# lvconvert -y --type thin-pool --poolmetadata vg_test/thinmeta vg_test/Thin
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting logical volume vg_test/Thin and vg_test/thinmeta to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted vg_test/Thin_tdata to thin pool.
# lvcreate -n thintest -V 100m -T vg_test/Thin
  Using default stripesize 64.00 KiB.
  Logical volume "thintest" created.

# vgchange -a n vg_test
  0 logical volume(s) in volume group "vg_test" now active

### filter out vdc by adding global_filter = [ "r|^/dev/vdc$|" ] to lvm.conf
# lvs -a -o+devices vg_test
  Couldn't find device with uuid 9cZ4Wu-wDdj-jiVb-oejC-TQn3-03ql-LIeh1F.
  LV                    VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                      
  Thin                  vg_test twi---tzp-   2.00g                                                     Thin_tdata(0)                                
  [Thin_tdata]          vg_test rwi---r-p-   2.00g                                                     Thin_tdata_rimage_0(0),Thin_tdata_rimage_1(0)
  [Thin_tdata_rimage_0] vg_test Iwi---r---   2.00g                                                     /dev/vdb(53)                                 
  [Thin_tdata_rimage_1] vg_test Iwi---r-p-   2.00g                                                     [unknown](53)                                
  [Thin_tdata_rmeta_0]  vg_test ewi---r---   4.00m                                                     /dev/vdb(52)                                 
  [Thin_tdata_rmeta_1]  vg_test ewi---r-p-   4.00m                                                     [unknown](52)                                
  [Thin_tmeta]          vg_test ewi---r-p- 100.00m                                                     Thin_tmeta_rimage_0(0),Thin_tmeta_rimage_1(0)
  [Thin_tmeta_rimage_0] vg_test Iwi---r--- 100.00m                                                     /dev/vdb(27)                                 
  [Thin_tmeta_rimage_1] vg_test Iwi---r-p- 100.00m                                                     [unknown](27)                                
  [Thin_tmeta_rmeta_0]  vg_test ewi---r---   4.00m                                                     /dev/vdb(26)                                 
  [Thin_tmeta_rmeta_1]  vg_test ewi---r-p-   4.00m                                                     [unknown](26)                                
  boot                  vg_test rwi---r-p- 100.00m                                                     boot_rimage_0(0),boot_rimage_1(0)            
  [boot_rimage_0]       vg_test Iwi---r--- 100.00m                                                     /dev/vdb(1)                                  
  [boot_rimage_1]       vg_test Iwi---r-p- 100.00m                                                     [unknown](1)                                 
  [boot_rmeta_0]        vg_test ewi---r---   4.00m                                                     /dev/vdb(0)                                  
  [boot_rmeta_1]        vg_test ewi---r-p-   4.00m                                                     [unknown](0)                                 
  [lvol0_pmspare]       vg_test ewi------- 100.00m                                                     /dev/vdb(565)                                
  thintest              vg_test Vwi---tzp- 100.00m Thin                                                                                             

# vgchange -a y vg_test --activationmode partial
  PARTIAL MODE. Incomplete logical volumes will be processed.
  Couldn't find device with uuid 9cZ4Wu-wDdj-jiVb-oejC-TQn3-03ql-LIeh1F.
  2 logical volume(s) in volume group "vg_test" now active

### open /dev/vg_test/thintest in order to simulate that it is "in use"
# sleep 9999999999 </dev/vg_test/thintest &
[1] 12034

### now try to remove the missing PV
# vgreduce --removemissing vg_test --force
  Couldn't find device with uuid 9cZ4Wu-wDdj-jiVb-oejC-TQn3-03ql-LIeh1F.
  WARNING: Removing partial LV vg_test/Thin.
  Logical volume vg_test/thintest in use.

### vgreduce ran partially:
### it replaced the missing PV with the error target in the "boot" LV
### but failed when processing Thin
# lvs -a -o+devices vg_test
  Couldn't find device with uuid 9cZ4Wu-wDdj-jiVb-oejC-TQn3-03ql-LIeh1F.
  Internal error: WARNING: Segment type linear found does not match expected type error for vg_test/boot_rmeta_1.
  LV                    VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                      
  Thin                  vg_test twi-aotzp-   2.00g             0.00   0.05                             Thin_tdata(0)                                
  [Thin_tdata]          vg_test rwi-aor-p-   2.00g                                    100.00           Thin_tdata_rimage_0(0),Thin_tdata_rimage_1(0)
  [Thin_tdata_rimage_0] vg_test iwi-aor---   2.00g                                                     /dev/vdb(53)                                 
  [Thin_tdata_rimage_1] vg_test Iwi-aor-p-   2.00g                                                     [unknown](53)                                
  [Thin_tdata_rmeta_0]  vg_test ewi-aor---   4.00m                                                     /dev/vdb(52)                                 
  [Thin_tdata_rmeta_1]  vg_test ewi-aor-p-   4.00m                                                     [unknown](52)                                
  [Thin_tmeta]          vg_test ewi-aor-p- 100.00m                                    100.00           Thin_tmeta_rimage_0(0),Thin_tmeta_rimage_1(0)
  [Thin_tmeta_rimage_0] vg_test iwi-aor--- 100.00m                                                     /dev/vdb(27)                                 
  [Thin_tmeta_rimage_1] vg_test Iwi-aor-p- 100.00m                                                     [unknown](27)                                
  [Thin_tmeta_rmeta_0]  vg_test ewi-aor---   4.00m                                                     /dev/vdb(26)                                 
  [Thin_tmeta_rmeta_1]  vg_test ewi-aor-p-   4.00m                                                     [unknown](26)                                
  boot                  vg_test rwi---r--- 100.00m                                                     boot_rimage_0(0),boot_rimage_1(0)            
  [boot_rimage_0]       vg_test Iwi-a-r-r- 100.00m                                                     /dev/vdb(1)                                  
  [boot_rimage_1]       vg_test vwi---r--- 100.00m                                                                                                  
  [boot_rmeta_0]        vg_test ewi-a-r-r-   4.00m                                                     /dev/vdb(0)                                  
  [boot_rmeta_1]        vg_test ewi-XXr-r-   4.00m                                                                                                  
  [lvol0_pmspare]       vg_test ewi------- 100.00m                                                     /dev/vdb(565)                                
  thintest              vg_test Vwi-aotzp- 100.00m Thin        0.00                                                                                 

# vgextend vg_test /dev/vdd
  Couldn't find device with uuid 9cZ4Wu-wDdj-jiVb-oejC-TQn3-03ql-LIeh1F.
  Volume group "vg_test" successfully extended

# lvconvert --repair /dev/vg_test/boot /dev/vdd 
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  Couldn't find device with uuid 9cZ4Wu-wDdj-jiVb-oejC-TQn3-03ql-LIeh1F.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
  vg_test/boot has components with error targets that must be removed first: vg_test/boot_rimage_1.
  Try removing the PV list and rerun the command.
  Failed to remove the specified images from vg_test/boot.
  Failed to replace faulty devices in vg_test/boot.
-------------------- 8< --------------------

Do you have a hint on how to solve this?

Comment 7 Roman Bednář 2018-04-13 08:58:48 UTC
lvm2-2.02.177-4.el7.x86_64

I assume there are actually two things to fix: one is LV type detection and assigning the correct activation mode (partial/degraded) to the LV, and the second is enabling degraded activation mode for thin pool LVs backed by raid volumes.

I tried setting degraded activation mode manually for all LVs before converting to a thin pool. After filtering out one device/leg and attempting reactivation in degraded mode, the non-thin device gets activated while the thin pool requires partial mode. On the other hand, that might actually be correct, since the pool's segment type is not raid but thin-pool.

Please specify what is going to be fixed, as it's not very clear now.

Adding Cond. NAK flag for 7.6 until it's clear what's supposed to be fixed and tested.


[root@virt-381 ~]# lvchange -an --activationmode degraded vg_test/boot
[root@virt-381 ~]# lvchange -an --activationmode degraded vg_test/Thin
[root@virt-381 ~]# lvchange -an --activationmode degraded vg_test/thinmeta
[root@virt-381 ~]# lvs -a -o +devices
  LV                  VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                  
  root                rhel_virt-381 -wi-ao----  <6.20g                                                     /dev/vda2(205)                           
  swap                rhel_virt-381 -wi-ao---- 820.00m                                                     /dev/vda2(0)                             
  Thin                vg_test       rwi---r---   2.00g                                                     Thin_rimage_0(0),Thin_rimage_1(0)        
  [Thin_rimage_0]     vg_test       Iwi---r---   2.00g                                                     /dev/sda(53)                             
  [Thin_rimage_1]     vg_test       Iwi---r---   2.00g                                                     /dev/sdb(53)                             
  [Thin_rmeta_0]      vg_test       ewi---r---   4.00m                                                     /dev/sda(52)                             
  [Thin_rmeta_1]      vg_test       ewi---r---   4.00m                                                     /dev/sdb(52)                             
  boot                vg_test       rwi---r--- 100.00m                                                     boot_rimage_0(0),boot_rimage_1(0)        
  [boot_rimage_0]     vg_test       Iwi---r--- 100.00m                                                     /dev/sda(1)                              
  [boot_rimage_1]     vg_test       Iwi---r--- 100.00m                                                     /dev/sdb(27)                             
  [boot_rmeta_0]      vg_test       ewi---r---   4.00m                                                     /dev/sda(0)                              
  [boot_rmeta_1]      vg_test       ewi---r---   4.00m                                                     /dev/sdb(26)                             
  thinmeta            vg_test       rwi---r--- 100.00m                                                     thinmeta_rimage_0(0),thinmeta_rimage_1(0)
  [thinmeta_rimage_0] vg_test       Iwi---r--- 100.00m                                                     /dev/sda(27)                             
  [thinmeta_rimage_1] vg_test       Iwi---r--- 100.00m                                                     /dev/sdb(1)                              
  [thinmeta_rmeta_0]  vg_test       ewi---r---   4.00m                                                     /dev/sda(26)                             
  [thinmeta_rmeta_1]  vg_test       ewi---r---   4.00m                                                     /dev/sdb(0) 
                                                  

[root@virt-381 ~]# lvconvert -y --type thin-pool --poolmetadata vg_test/thinmeta vg_test/Thin
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting vg_test/Thin and vg_test/thinmeta to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted vg_test/Thin and vg_test/thinmeta to thin pool.

[root@virt-381 ~]# vgchange -a n vg_test
  0 logical volume(s) in volume group "vg_test" now active

### Set filter to: filter = [ "r|^/dev/sdb$|" ]


[root@virt-381 ~]# lvs -a -o lv_name,lv_attr,devices
  WARNING: Device for PV vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY not found or rejected by a filter.
  LV                    Attr       Devices                                      
  root                  -wi-ao---- /dev/vda2(205)                               
  swap                  -wi-ao---- /dev/vda2(0)                                 
  Thin                  twi---tzp- Thin_tdata(0)                                
  [Thin_tdata]          rwi---r-p- Thin_tdata_rimage_0(0),Thin_tdata_rimage_1(0)
  [Thin_tdata_rimage_0] Iwi---r--- /dev/sda(53)                                 
  [Thin_tdata_rimage_1] Iwi---r-p- [unknown](53)                                
  [Thin_tdata_rmeta_0]  ewi---r--- /dev/sda(52)                                 
  [Thin_tdata_rmeta_1]  ewi---r-p- [unknown](52)                                
  [Thin_tmeta]          ewi---r-p- Thin_tmeta_rimage_0(0),Thin_tmeta_rimage_1(0)
  [Thin_tmeta_rimage_0] Iwi---r--- /dev/sda(27)                                 
  [Thin_tmeta_rimage_1] Iwi---r-p- [unknown](27)                                
  [Thin_tmeta_rmeta_0]  ewi---r--- /dev/sda(26)                                 
  [Thin_tmeta_rmeta_1]  ewi---r-p- [unknown](26)                                
  boot                  rwi---r-p- boot_rimage_0(0),boot_rimage_1(0)            
  [boot_rimage_0]       Iwi---r--- /dev/sda(1)                                  
  [boot_rimage_1]       Iwi---r-p- [unknown](1)                                 
  [boot_rmeta_0]        ewi---r--- /dev/sda(0)                                  
  [boot_rmeta_1]        ewi---r-p- [unknown](0)                                 
  [lvol0_pmspare]       ewi------- /dev/sda(565)     
  
 ### degraded activation mode fails for thinpool                    
[root@virt-381 ~]# vgchange -a y vg_test --activationmode degraded
  WARNING: Device for PV vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY not found or rejected by a filter.
  Refusing activation of partial LV vg_test/Thin.  Use '--activationmode partial' to override.
  1 logical volume(s) in volume group "vg_test" now active
  
[root@virt-381 ~]# lvs -a -o lv_name,lv_attr,devices
  WARNING: Device for PV vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY not found or rejected by a filter.
  LV                    Attr       Devices                                      
  root                  -wi-ao---- /dev/vda2(205)                               
  swap                  -wi-ao---- /dev/vda2(0)                                 
  Thin                  twi---tzp- Thin_tdata(0)                                
  [Thin_tdata]          rwi---r-p- Thin_tdata_rimage_0(0),Thin_tdata_rimage_1(0)
  [Thin_tdata_rimage_0] Iwi---r--- /dev/sda(53)                                 
  [Thin_tdata_rimage_1] Iwi---r-p- [unknown](53)                                
  [Thin_tdata_rmeta_0]  ewi---r--- /dev/sda(52)                                 
  [Thin_tdata_rmeta_1]  ewi---r-p- [unknown](52)                                
  [Thin_tmeta]          ewi---r-p- Thin_tmeta_rimage_0(0),Thin_tmeta_rimage_1(0)
  [Thin_tmeta_rimage_0] Iwi---r--- /dev/sda(27)                                 
  [Thin_tmeta_rimage_1] Iwi---r-p- [unknown](27)                                
  [Thin_tmeta_rmeta_0]  ewi---r--- /dev/sda(26)                                 
  [Thin_tmeta_rmeta_1]  ewi---r-p- [unknown](26)                                
  boot                  rwi-a-r-p- boot_rimage_0(0),boot_rimage_1(0)            
  [boot_rimage_0]       iwi-aor--- /dev/sda(1)                                  
  [boot_rimage_1]       Iwi-aor-p- [unknown](1)                                 
  [boot_rmeta_0]        ewi-aor--- /dev/sda(0)                                  
  [boot_rmeta_1]        ewi-aor-p- [unknown](0)                                 
  [lvol0_pmspare]       ewi------- /dev/sda(565)              
 
 
### partial mode works for thinpool                   
[root@virt-381 ~]# vgchange -a y vg_test --activationmode partial
  PARTIAL MODE. Incomplete logical volumes will be processed.
  WARNING: Device for PV vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY not found or rejected by a filter.
  2 logical volume(s) in volume group "vg_test" now active
  
[root@virt-381 ~]# lvs -a -o lv_name,lv_attr,devices
  WARNING: Device for PV vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY not found or rejected by a filter.
  LV                    Attr       Devices                                      
  root                  -wi-ao---- /dev/vda2(205)                               
  swap                  -wi-ao---- /dev/vda2(0)                                 
  Thin                  twi-a-tzp- Thin_tdata(0)                                
  [Thin_tdata]          rwi-aor-p- Thin_tdata_rimage_0(0),Thin_tdata_rimage_1(0)
  [Thin_tdata_rimage_0] iwi-aor--- /dev/sda(53)                                 
  [Thin_tdata_rimage_1] Iwi-aor-p- [unknown](53)                                
  [Thin_tdata_rmeta_0]  ewi-aor--- /dev/sda(52)                                 
  [Thin_tdata_rmeta_1]  ewi-aor-p- [unknown](52)                                
  [Thin_tmeta]          ewi-aor-p- Thin_tmeta_rimage_0(0),Thin_tmeta_rimage_1(0)
  [Thin_tmeta_rimage_0] iwi-aor--- /dev/sda(27)                                 
  [Thin_tmeta_rimage_1] Iwi-aor-p- [unknown](27)                                
  [Thin_tmeta_rmeta_0]  ewi-aor--- /dev/sda(26)                                 
  [Thin_tmeta_rmeta_1]  ewi-aor-p- [unknown](26)                                
  boot                  rwi-a-r-p- boot_rimage_0(0),boot_rimage_1(0)            
  [boot_rimage_0]       iwi-aor--- /dev/sda(1)                                  
  [boot_rimage_1]       Iwi-aor-p- [unknown](1)                                 
  [boot_rmeta_0]        ewi-aor--- /dev/sda(0)                                  
  [boot_rmeta_1]        ewi-aor-p- [unknown](0)                                 
  [lvol0_pmspare]       ewi------- /dev/sda(565) 




 
> Do you have a hint how to solve this?

### The repair attempt on the non-thin LV worked for me when I skipped the vgreduce --removemissing part.

[root@virt-381 ~]# vgextend vg_test /dev/sdc
  WARNING: Device for PV vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY not found or rejected by a filter.
  WARNING: Device for PV vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY not found or rejected by a filter.
  Physical volume "/dev/sdc" successfully created.
  WARNING: Device for PV vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY not found or rejected by a filter.
  Volume group "vg_test" successfully extended
  
  
[root@virt-381 ~]# lvconvert --repair /dev/vg_test/boot /dev/sdc
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  Couldn't find device with uuid vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
  Faulty devices in vg_test/boot successfully replaced.



### However, trying to repair the thin pool directly does not work (unsupported? see note below)
 
 [root@virt-381 ~]# lvchange -an vg_test/Thin
  WARNING: Not using lvmetad because a repair command was run.
  Couldn't find device with uuid vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY.

[root@virt-381 ~]# lvconvert --repair /dev/vg_test/Thin /dev/sdc
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  Couldn't find device with uuid vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY.
  WARNING: LV vg_test/Thin_meta0 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
  WARNING: New metadata LV vg_test/Thin_tmeta might use different PVs.  Move it with pvmove if required.
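### (Note: 'lvconvert --repair' on the pool LV itself performs thin metadata repair,
### hence the Thin_meta0 backup noted above, rather than replacing failed RAID images,
### which is why the tdata/tmeta raid legs below still reference the missing PV.)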

[root@virt-381 ~]# lvs -a -o lv_name,lv_attr,devices
  WARNING: Not using lvmetad because a repair command was run.
  Couldn't find device with uuid vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY.
  LV                    Attr       Devices                                      
  root                  -wi-ao---- /dev/vda2(205)                               
  swap                  -wi-ao---- /dev/vda2(0)                                 
  Thin                  twi---tzp- Thin_tdata(0)                                
  Thin_meta0            rwi---r-p- Thin_meta0_rimage_0(0),Thin_meta0_rimage_1(0)
  [Thin_meta0_rimage_0] Iwi---r--- /dev/sda(27)                                 
  [Thin_meta0_rimage_1] Iwi---r-p- [unknown](27)                                
  [Thin_meta0_rmeta_0]  ewi---r--- /dev/sda(26)                                 
  [Thin_meta0_rmeta_1]  ewi---r-p- [unknown](26)                                
  [Thin_tdata]          rwi---r-p- Thin_tdata_rimage_0(0),Thin_tdata_rimage_1(0)
  [Thin_tdata_rimage_0] Iwi---r--- /dev/sda(53)                                 
  [Thin_tdata_rimage_1] Iwi---r-p- [unknown](53)                                
  [Thin_tdata_rmeta_0]  ewi---r--- /dev/sda(52)                                 
  [Thin_tdata_rmeta_1]  ewi---r-p- [unknown](52)                                
  [Thin_tmeta]          ewi------- /dev/sda(565)                                
  boot                  rwi-a-r--- boot_rimage_0(0),boot_rimage_1(0)            
  [boot_rimage_0]       iwi-aor--- /dev/sda(1)                                  
  [boot_rimage_1]       iwi-aor--- /dev/sdc(1)                                  
  [boot_rmeta_0]        ewi-aor--- /dev/sda(0)                                  
  [boot_rmeta_1]        ewi-aor--- /dev/sdc(0)                                  
  [lvol1_pmspare]       ewi------- /dev/sdc(26)                  
  
  
  
### Repairing tmeta and tdata volumes separately as Jon suggested in Comment 4 should do the trick:
  
  [root@virt-381 ~]# lvconvert --repair vg_test/Thin_tdata /dev/sdc
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  Couldn't find device with uuid vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
  Faulty devices in vg_test/Thin_tdata successfully replaced.
                               
[root@virt-381 ~]# lvconvert --repair vg_test/Thin_tmeta /dev/sdc
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  Couldn't find device with uuid vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
  Faulty devices in vg_test/Thin_tmeta successfully replaced.

[root@virt-381 ~]# lvs -a -o lv_name,lv_attr,devices
  WARNING: Not using lvmetad because a repair command was run.
  Couldn't find device with uuid vKVD8X-RpR3-yWS5-tZBE-QCNW-grax-cmoglY.
  LV                    Attr       Devices                                      
  root                  -wi-ao---- /dev/vda2(205)                               
  swap                  -wi-ao---- /dev/vda2(0)                                 
  Thin                  twi-a-tz-- Thin_tdata(0)                                
  [Thin_tdata]          rwi-aor--- Thin_tdata_rimage_0(0),Thin_tdata_rimage_1(0)
  [Thin_tdata_rimage_0] iwi-aor--- /dev/sda(53)                                 
  [Thin_tdata_rimage_1] iwi-aor--- /dev/sdc(1)                                  
  [Thin_tdata_rmeta_0]  ewi-aor--- /dev/sda(52)                                 
  [Thin_tdata_rmeta_1]  ewi-aor--- /dev/sdc(0)                                  
  [Thin_tmeta]          ewi-aor--- Thin_tmeta_rimage_0(0),Thin_tmeta_rimage_1(0)
  [Thin_tmeta_rimage_0] iwi-aor--- /dev/sda(27)                                 
  [Thin_tmeta_rimage_1] iwi-aor--- /dev/sdc(514)                                
  [Thin_tmeta_rmeta_0]  ewi-aor--- /dev/sda(26)                                 
  [Thin_tmeta_rmeta_1]  ewi-aor--- /dev/sdc(513)                                
  boot                  rwi-a-r--- boot_rimage_0(0),boot_rimage_1(0)            
  [boot_rimage_0]       iwi-aor--- /dev/sda(1)                                  
  [boot_rimage_1]       iwi-aor--- /dev/sdc(540)                                
  [boot_rmeta_0]        ewi-aor--- /dev/sda(0)                                  
  [boot_rmeta_1]        ewi-aor--- /dev/sdc(539)                                
  [lvol0_pmspare]       ewi------- /dev/sda(565)

Comment 8 Heinz Mauelshagen 2018-07-25 14:18:34 UTC
Created attachment 1470524 [details]
Patch to avoid partial on thin/cache/... SubLVs in case they are raid != raid0|raid0_meta

The patch prevents "vgreduce --force --removemissing VG" from removing thin/cache/... SubLVs when they are backed by redundant raid: such SubLVs are treated as degraded rather than partial, so the forced removal no longer deletes them.

Comment 9 Corey Marthaler 2020-05-11 19:39:20 UTC
Closing this bug WONTFIX in favor of RHEL 8 bug 1834485. It's too late in the RHEL 7 life cycle to fix this.

