Bug 1274762 - "Internal error: pool_is_active called with non-pool LV" and confusing error message when running --repair on cpool.
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: ---
Assigned To: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
Blocks: 1313485 1295577
Reported: 2015-10-23 09:49 EDT by Marian Csontos
Modified: 2016-11-04 00:11 EDT (History)
CC: 7 users

Fixed In Version: lvm2-2.02.152-1.el7
Doc Type: Bug Fix
Doc Text:
lvm2 now properly reports an error for the unsupported --repair operation on a cache pool.
Last Closed: 2016-11-04 00:11:56 EDT
Type: Bug


Attachments: None
Description Marian Csontos 2015-10-23 09:49:16 EDT
Description of problem:

An internal error and a confusing error message occur when running --repair on cmeta.

I have a cache pool used by a cache LV.
Running `lvconvert --repair` on the cpool's cmeta fails with messages that wrongly refer to a thin pool:

      Repair of thin metadata volume of thin pool vg/cpool_ck69 failed (status:1). Manual repair required!
      Device vg-cpool_ck69_cmeta (253:2) is used by another device.
      Cannot deactivate thin pool metadata volume cpool_ck69_cmeta.

An internal error is also reported:

  Internal error: pool_is_active called with non-pool LV cpool_ck69.
syscall 'open' failed: Device or resource busy
Note: you cannot run this tool with these options on live metadata.

Version-Release number of selected component (if applicable):
lvm2-2.02.130-5.el7.x86_64

Steps to Reproduce:
1. Create a cache pool with a cached volume.
2. Run:

    lvconvert --repair vg/cpool_cmeta
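The steps above can be sketched as a shell session. This is a hedged sketch only: the device path `/dev/sdb` and the LV sizes are illustrative assumptions not taken from the report, and the commands require root and a spare block device, so they cannot be run outside a test machine.

```shell
# Illustrative reproducer; /dev/sdb and sizes are assumptions.
vgcreate vg /dev/sdb                                # VG with some free space
lvcreate --type cache-pool -L 8M -n cpool_ck69 vg   # create the cache pool
lvcreate -L 32M -n lv1 vg                           # origin LV to be cached
lvconvert --type cache --cachepool vg/cpool_ck69 vg/lv1  # attach pool; lv1 becomes a cache LV
lvconvert --repair vg/cpool_ck69_cmeta              # triggers the confusing errors
```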

Actual results:

      Repair of thin metadata volume of thin pool vg/cpool_ck69 failed (status:1). Manual repair required!
      Device vg-cpool_ck69_cmeta (253:2) is used by another device.
      Cannot deactivate thin pool metadata volume cpool_ck69_cmeta.

Expected results:
Better error messages.

Additional info:

# lvs -a vg

    Using logical volume(s) on command line.
  LV                 VG   #Seg Attr       LSize  Maj Min KMaj KMin Pool         Origin      Data%  Meta%  Move Cpy%Sync
  [cpool_ck69]       vg      1 Cwi---C---  1.00m  -1  -1   -1   -1                          0.00   0.97        100.00  
  [cpool_ck69_cdata] vg      1 Cwi-ao----  1.00m  -1  -1  253    1                                                     
  [cpool_ck69_cmeta] vg      1 ewi-ao----  4.03m  -1  -1  253    2                                                     
  lv1                vg      1 Cwi-a-C--- 32.00k  -1  -1  253    4 [cpool_ck69] [lv1_corig] 0.00   0.97        100.00  
  [lv1_corig]        vg      1 owi-aoC--- 32.00k  -1  -1  253    3                                                     
  [lvol0_pmspare]    vg      1 ewi-a-----  4.03m  -1  -1  253    5                                                     
# lvconvert --yes --repair vg/cpool_ck69
  Internal error: pool_is_active called with non-pool LV cpool_ck69.
syscall 'open' failed: Device or resource busy
Note: you cannot run this tool with these options on live metadata.
  Repair of thin metadata volume of thin pool vg/cpool_ck69 failed (status:1). Manual repair required!
  Device vg-cpool_ck69_cmeta (253:2) is used by another device.
  Cannot deactivate thin pool metadata volume cpool_ck69_cmeta.
Comment 1 Marian Csontos 2015-10-23 09:53:42 EDT
Sorry, it's cpool.
Comment 3 Zdenek Kabelac 2015-11-19 08:59:07 EST
For now, cache pool repair is unsupported.

Providing a better error message:

https://www.redhat.com/archives/lvm-devel/2015-November/msg00137.html
Comment 7 Mike McCune 2016-03-28 19:48:44 EDT
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune@redhat.com with any questions.
Comment 9 Corey Marthaler 2016-08-03 15:38:41 EDT
Fix verified in the latest rpms.


BEFORE: lvm2-2.02.130-5.el7_2.5

[root@host-006 ~]# lvconvert --repair cache_sanity/pool_cmeta
  Can't repair LV "pool_cmeta" of segtype linear.

[root@host-006 ~]# lvconvert --repair cache_sanity/pool
  Internal error: pool_is_active called with non-pool LV pool.
syscall 'open' failed: Device or resource busy
Note: you cannot run this tool with these options on live metadata.
  Repair of thin metadata volume of thin pool cache_sanity/pool failed (status:1). Manual repair required!
  Device cache_sanity-pool_cmeta (253:4) is used by another device.
  Cannot deactivate thin pool metadata volume pool_cmeta.




AFTER: lvm2-2.02.161-3.el7.x86_64

[root@host-078 ~]#  lvconvert --repair cache_sanity/pool_cmeta
  Operation not permitted on striped or linear LV cache_sanity/pool_cmeta.
  Operations permitted on a striped or linear LV are:
  --merge
  --type snapshot
  --type thin
  --type cache
  --type thin-pool
  --type cache-pool
  --type mirror
  --type raid*

[root@host-078 ~]#  lvconvert --repair cache_sanity/pool
  Operation not permitted on cache pool LV cache_sanity/pool.
  Operations permitted on a cache pool LV are:
  --splitcache    (operates on cache LV)
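As the new message indicates, the only operation the fixed lvm2 permits on a cache pool LV is --splitcache, which operates on the cache LV. A hedged sketch of that workflow follows; the cached LV name `lv1` is an assumption (the report only names the pool), and the commands need root on a machine with the VG present.

```shell
# Hedged sketch; cache_sanity/lv1 is a hypothetical cache LV name.
lvconvert --splitcache cache_sanity/lv1   # detach: lv1 reverts to a plain LV,
                                          # the cache pool 'pool' is kept intact
# Once detached, the pool's metadata can be inspected manually, e.g. with
# cache_check from device-mapper-persistent-data after making it accessible
# (for instance by swapping it out with 'lvconvert --swapmetadata').
```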
Comment 11 errata-xmlrpc 2016-11-04 00:11:56 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html
