Bug 1274762

Summary: "Internal error: pool_is_active called with non-pool LV" and confusing error message when running --repair on cpool.
Product: Red Hat Enterprise Linux 7 Reporter: Marian Csontos <mcsontos>
Component: lvm2 Assignee: Zdenek Kabelac <zkabelac>
lvm2 sub component: Cache Logical Volumes QA Contact: cluster-qe <cluster-qe>
Status: CLOSED ERRATA Docs Contact:
Severity: low    
Priority: unspecified CC: agk, cmarthal, heinzm, jbrassow, msnitzer, prajnoha, zkabelac
Version: 7.2   
Target Milestone: rc   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: lvm2-2.02.152-1.el7 Doc Type: Bug Fix
Doc Text:
lvm2 now properly reports an error about the unsupported --repair operation for a cache pool.
Story Points: ---
Clone Of: Environment:
Last Closed: 2016-11-04 04:11:56 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1295577, 1313485    

Description Marian Csontos 2015-10-23 13:49:16 UTC
Description of problem:

An internal error and a confusing error message appear when running --repair on cmeta.

I have a cache pool used by a cache LV.
Running `lvconvert --repair` on the cpool's cmeta fails with a message that refers to a thin pool:

      Repair of thin metadata volume of thin pool vg/cpool_ck69 failed (status:1). Manual repair required!
      Device vg-cpool_ck69_cmeta (253:2) is used by another device.
      Cannot deactivate thin pool metadata volume cpool_ck69_cmeta.

An internal error is also reported:

  Internal error: pool_is_active called with non-pool LV cpool_ck69.
syscall 'open' failed: Device or resource busy
Note: you cannot run this tool with these options on live metadata.

Version-Release number of selected component (if applicable):
lvm2-2.02.130-5.el7.x86_64

Steps to Reproduce:
1. create a cpool with cache volume
2. run:

    lvconvert --repair vg/cpool_cmeta
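
A minimal reproducer for step 1 might look like the following sketch; the PV, sizes, and LV names (/dev/sdb1, cpool, lv1) are illustrative and not taken from the report:

```shell
# Sketch of a cache-pool setup (device and LV names are assumptions):
pvcreate /dev/sdb1
vgcreate vg /dev/sdb1
lvcreate --type cache-pool -L 8m -n cpool vg           # cache pool; hidden cdata/cmeta sub-LVs are created
lvcreate -L 32m -n lv1 vg                              # origin LV to be cached
lvconvert --type cache --cachepool vg/cpool vg/lv1     # attach the cache pool to lv1

# Step 2 then triggers the confusing error on the hidden cmeta sub-LV:
lvconvert --repair vg/cpool_cmeta
```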

Actual results:

      Repair of thin metadata volume of thin pool vg/cpool_ck69 failed (status:1). Manual repair required!
      Device vg-cpool_ck69_cmeta (253:2) is used by another device.
      Cannot deactivate thin pool metadata volume cpool_ck69_cmeta.

Expected results:
Accurate error messages that refer to the cache pool, not a thin pool.

Additional info:

# lvs -a vg

    Using logical volume(s) on command line.
  LV                 VG   #Seg Attr       LSize  Maj Min KMaj KMin Pool         Origin      Data%  Meta%  Move Cpy%Sync
  [cpool_ck69]       vg      1 Cwi---C---  1.00m  -1  -1   -1   -1                          0.00   0.97        100.00  
  [cpool_ck69_cdata] vg      1 Cwi-ao----  1.00m  -1  -1  253    1                                                     
  [cpool_ck69_cmeta] vg      1 ewi-ao----  4.03m  -1  -1  253    2                                                     
  lv1                vg      1 Cwi-a-C--- 32.00k  -1  -1  253    4 [cpool_ck69] [lv1_corig] 0.00   0.97        100.00  
  [lv1_corig]        vg      1 owi-aoC--- 32.00k  -1  -1  253    3                                                     
  [lvol0_pmspare]    vg      1 ewi-a-----  4.03m  -1  -1  253    5                                                     
# lvconvert --yes --repair vg/cpool_ck69
  Internal error: pool_is_active called with non-pool LV cpool_ck69.
syscall 'open' failed: Device or resource busy
Note: you cannot run this tool with these options on live metadata.
  Repair of thin metadata volume of thin pool vg/cpool_ck69 failed (status:1). Manual repair required!
  Device vg-cpool_ck69_cmeta (253:2) is used by another device.
  Cannot deactivate thin pool metadata volume cpool_ck69_cmeta.

Comment 1 Marian Csontos 2015-10-23 13:53:42 UTC
Sorry, it's cpool.

Comment 3 Zdenek Kabelac 2015-11-19 13:59:07 UTC
For now, cache pool repair is unsupported.

Providing a better error message:

https://www.redhat.com/archives/lvm-devel/2015-November/msg00137.html

Comment 7 Mike McCune 2016-03-28 23:48:44 UTC
This bug was accidentally moved from POST to MODIFIED by an error in automation; please contact mmccune with any questions.

Comment 9 Corey Marthaler 2016-08-03 19:38:41 UTC
Fix verified in the latest RPMs.


BEFORE: lvm2-2.02.130-5.el7_2.5

[root@host-006 ~]# lvconvert --repair cache_sanity/pool_cmeta
  Can't repair LV "pool_cmeta" of segtype linear.

[root@host-006 ~]# lvconvert --repair cache_sanity/pool
  Internal error: pool_is_active called with non-pool LV pool.
syscall 'open' failed: Device or resource busy
Note: you cannot run this tool with these options on live metadata.
  Repair of thin metadata volume of thin pool cache_sanity/pool failed (status:1). Manual repair required!
  Device cache_sanity-pool_cmeta (253:4) is used by another device.
  Cannot deactivate thin pool metadata volume pool_cmeta.




AFTER: lvm2-2.02.161-3.el7.x86_64

[root@host-078 ~]#  lvconvert --repair cache_sanity/pool_cmeta
  Operation not permitted on striped or linear LV cache_sanity/pool_cmeta.
  Operations permitted on a striped or linear LV are:
  --merge
  --type snapshot
  --type thin
  --type cache
  --type thin-pool
  --type cache-pool
  --type mirror
  --type raid*

[root@host-078 ~]#  lvconvert --repair cache_sanity/pool
  Operation not permitted on cache pool LV cache_sanity/pool.
  Operations permitted on a cache pool LV are:
  --splitcache    (operates on cache LV)
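
The --splitcache path named in the message above could be sketched as follows; the VG and LV names (vg, cpool, lv1) are illustrative assumptions, not from this report:

```shell
# Sketch of the supported operation on a cache pool (names are assumptions).
# --splitcache runs on the cache LV: it flushes dirty blocks and detaches
# the cache pool, leaving lv1 uncached and cpool as an unused cache pool.
lvconvert --splitcache vg/lv1

# The cache pool can later be re-attached to the origin LV if desired:
lvconvert --type cache --cachepool vg/cpool vg/lv1
```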

Comment 11 errata-xmlrpc 2016-11-04 04:11:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html