Bug 1380532 - certain repairs of cache RAID volumes fail with "Cannot convert internal LV"
Summary: certain repairs of cache RAID volumes fail with "Cannot convert internal LV"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
Docs Contact: Milan Navratil
URL:
Whiteboard:
Depends On:
Blocks: 1383925
 
Reported: 2016-09-29 21:57 UTC by Corey Marthaler
Modified: 2021-09-03 12:53 UTC
CC List: 13 users

Fixed In Version: lvm2-2.02.169-1.el7
Doc Type: Bug Fix
Doc Text:
"lvconvert --repair" now works properly on cache logical volumes Due to a regression in the lvm2-2.02.166-1.el package, released in Red Hat Enterprise Linux 7.3, the "lvconvert --repair" command could not be run properly on cache logical volumes. As a consequence, the `Cannot convert internal LV` error occurred. The underlying source code has been modified to fix this bug, and "lvconvert --repair" now works as expected.
Clone Of:
: 1383925 (view as bug list)
Environment:
Last Closed: 2017-08-01 21:47:18 UTC
Target Upstream Version:
Embargoed:


Attachments
test result (32.68 KB, text/plain), attached 2017-04-18 12:02 UTC by Roman Bednář


Links
Red Hat Product Errata RHBA-2017:2222 (SHIPPED_LIVE): lvm2 bug fix and enhancement update, last updated 2017-08-01 18:42:41 UTC

Description Corey Marthaler 2016-09-29 21:57:43 UTC
Description of problem:
This may be expected now, or fallout from bug 1353759?

# Same failing cmd attempted manually

[root@host-123 ~]# lvconvert --repair black_bird/synced_primary_raid6_4legs_1_corig
  WARNING: Device for PV eX9fOz-z6UL-sJXu-ErHI-i7nS-HaPV-LkhCgr not found or rejected by a filter.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rimage_0 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rmeta_0 while checking used and assumed devices.
  Cannot convert internal LV black_bird/synced_primary_raid6_4legs_1_corig.


================================================================================
Iteration 0.2 started at Thu Sep 29 14:50:37 CDT 2016
================================================================================
Scenario kill_primary_synced_raid6_4legs: Kill primary leg of synced 4 leg raid6 volume(s)
********* RAID hash info for this scenario *********
* names:              synced_primary_raid6_4legs_1
* sync:               1
* type:               raid6
* -m |-i value:       4
* leg devices:        /dev/sde1 /dev/sda1 /dev/sdf1 /dev/sdd1 /dev/sdg1 /dev/sdb1
* spanned legs:       0
* manual repair:      0
* failpv(s):          /dev/sde1
* failnode(s):        host-123
* lvmetad:            1
* cache stack:        1
* raid fault policy:  allocate
******************************************************

Creating raids(s) on host-123...
host-123: lvcreate  --type raid6 -i 4 -n synced_primary_raid6_4legs_1 -L 500M black_bird /dev/sde1:0-2400 /dev/sda1:0-2400 /dev/sdf1:0-2400 /dev/sdd1:0-2400 /dev/sdg1:0-2400 /dev/sdb1:0-2400

Current mirror/raid device structure(s):
  LV                                      Attr       LSize   Cpy%Sync Devices
   synced_primary_raid6_4legs_1            rwi-aor--- 512.00m 18.75    synced_primary_raid6_4legs_1_rimage_0(0),synced_primary_raid6_4legs_1_rimage_1(0),synced_primary_raid6_4legs_1_rimage_2(0),synced_primary_raid6_4legs_1_rimage_3(0),synced_primary_raid6_4legs_1_rimage_4(0),synced_primary_raid6_4legs_1_rimage_5(0)
   [synced_primary_raid6_4legs_1_rimage_0] Iwi-aor--- 128.00m          /dev/sde1(1)
   [synced_primary_raid6_4legs_1_rimage_1] Iwi-aor--- 128.00m          /dev/sda1(1)
   [synced_primary_raid6_4legs_1_rimage_2] Iwi-aor--- 128.00m          /dev/sdf1(1)
   [synced_primary_raid6_4legs_1_rimage_3] Iwi-aor--- 128.00m          /dev/sdd1(1)
   [synced_primary_raid6_4legs_1_rimage_4] Iwi-aor--- 128.00m          /dev/sdg1(1)
   [synced_primary_raid6_4legs_1_rimage_5] Iwi-aor--- 128.00m          /dev/sdb1(1)
   [synced_primary_raid6_4legs_1_rmeta_0]  ewi-aor---   4.00m          /dev/sde1(0)
   [synced_primary_raid6_4legs_1_rmeta_1]  ewi-aor---   4.00m          /dev/sda1(0)
   [synced_primary_raid6_4legs_1_rmeta_2]  ewi-aor---   4.00m          /dev/sdf1(0)
   [synced_primary_raid6_4legs_1_rmeta_3]  ewi-aor---   4.00m          /dev/sdd1(0)
   [synced_primary_raid6_4legs_1_rmeta_4]  ewi-aor---   4.00m          /dev/sdg1(0)
   [synced_primary_raid6_4legs_1_rmeta_5]  ewi-aor---   4.00m          /dev/sdb1(0)


Waiting until all mirror|raid volumes become fully synced...
   1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec

********* CACHE info for this scenario *********
* Killing the raid cache ORIGIN (slow) device
* cache pool device:   /dev/sda1
* cachemode:              writethrough
****************************************************

Convert mirror/raid volume(s) to Cache volume(s) on host-123...

Creating CACHE DATA and META (fast) devices (which will not be failed), then converting to a cache pool and creating a cache volume
lvcreate -n cache_synced_primary_raid6_4legs_1 -L 500M black_bird /dev/sda1
WARNING: xfs signature detected on /dev/black_bird/cache_synced_primary_raid6_4legs_1 at offset 0. Wipe it? [y/n]: [n]
  Aborted wiping of xfs.
  1 existing signature left on the device.
lvcreate -n meta -L 12M black_bird /dev/sda1

Create cache pool volume by combining the cache data and cache metadata (fast) volumes
lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata black_bird/meta black_bird/cache_synced_primary_raid6_4legs_1
  WARNING: Converting logical volume black_bird/cache_synced_primary_raid6_4legs_1 and black_bird/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachepool black_bird/cache_synced_primary_raid6_4legs_1 black_bird/synced_primary_raid6_4legs_1

Creating xfs on top of mirror(s) on host-123...
Mounting mirrored xfs filesystems on host-123...

Current mirror/raid device structure(s):
  LV                                            Attr       LSize   Cpy%Sync Devices
   [cache_synced_primary_raid6_4legs_1]          Cwi---C--- 500.00m 0.00     cache_synced_primary_raid6_4legs_1_cdata(0)
   [cache_synced_primary_raid6_4legs_1_cdata]    Cwi-ao---- 500.00m          /dev/sda1(33)
   [cache_synced_primary_raid6_4legs_1_cmeta]    ewi-ao----  12.00m          /dev/sda1(158)
   [lvol0_pmspare]                               ewi-------  12.00m          /dev/sde1(33)
   synced_primary_raid6_4legs_1                  Cwi-aoC--- 512.00m 0.00     synced_primary_raid6_4legs_1_corig(0)
   [synced_primary_raid6_4legs_1_corig]          rwi-aoC--- 512.00m 100.00   synced_primary_raid6_4legs_1_corig_rimage_0(0),synced_primary_raid6_4legs_1_corig_rimage_1(0),synced_primary_raid6_4legs_1_corig_rimage_2(0),synced_primary_raid6_4legs_1_corig_rimage_3(0),synced_primary_raid6_4legs_1_corig_rimage_4(0),synced_primary_raid6_4legs_1_corig_rimage_5(0)
   [synced_primary_raid6_4legs_1_corig_rimage_0] iwi-aor--- 128.00m          /dev/sde1(1)
   [synced_primary_raid6_4legs_1_corig_rimage_1] iwi-aor--- 128.00m          /dev/sda1(1)
   [synced_primary_raid6_4legs_1_corig_rimage_2] iwi-aor--- 128.00m          /dev/sdf1(1)
   [synced_primary_raid6_4legs_1_corig_rimage_3] iwi-aor--- 128.00m          /dev/sdd1(1)
   [synced_primary_raid6_4legs_1_corig_rimage_4] iwi-aor--- 128.00m          /dev/sdg1(1)
   [synced_primary_raid6_4legs_1_corig_rimage_5] iwi-aor--- 128.00m          /dev/sdb1(1)
   [synced_primary_raid6_4legs_1_corig_rmeta_0]  ewi-aor---   4.00m          /dev/sde1(0)
   [synced_primary_raid6_4legs_1_corig_rmeta_1]  ewi-aor---   4.00m          /dev/sda1(0)
   [synced_primary_raid6_4legs_1_corig_rmeta_2]  ewi-aor---   4.00m          /dev/sdf1(0)
   [synced_primary_raid6_4legs_1_corig_rmeta_3]  ewi-aor---   4.00m          /dev/sdd1(0)
   [synced_primary_raid6_4legs_1_corig_rmeta_4]  ewi-aor---   4.00m          /dev/sdg1(0)
   [synced_primary_raid6_4legs_1_corig_rmeta_5]  ewi-aor---   4.00m          /dev/sdb1(0)

PV=/dev/sde1
        lvol0_pmspare: 1.0
        synced_primary_raid6_4legs_1_corig_rimage_0: 1.0
        synced_primary_raid6_4legs_1_corig_rmeta_0: 1.0

Writing verification files (checkit) to mirror(s) on...
        ---- host-123 ----

<start name="host-123_synced_primary_raid6_4legs_1"  pid="7258" time="Thu Sep 29 14:51:18 2016" type="cmd" />
Sleeping 15 seconds to get some outstanding I/O locks before the failure

Verifying files (checkit) on mirror(s) on...
        ---- host-123 ----



Disabling device sde on host-123
rescan device...
  /dev/sde1: read failed after 0 of 512 at 21467824128: Input/output error
  /dev/sde1: read failed after 0 of 512 at 21467938816: Input/output error
  /dev/sde1: read failed after 0 of 512 at 0: Input/output error
  /dev/sde1: read failed after 0 of 512 at 4096: Input/output error
  /dev/sde1: read failed after 0 of 2048 at 0: Input/output error


Getting recovery check start time from /var/log/messages: Sep 29 14:51
Attempting I/O to cause mirror down conversion(s) on host-123
dd if=/dev/zero of=/mnt/synced_primary_raid6_4legs_1/ddfile count=10 bs=4M
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.173477 s, 242 MB/s

Verifying current sanity of lvm after the failure

Current mirror/raid device structure(s):
  WARNING: Device for PV eX9fOz-z6UL-sJXu-ErHI-i7nS-HaPV-LkhCgr not found or rejected by a filter.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rimage_0 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rmeta_0 while checking used and assumed devices.
  LV                                            Attr       LSize   Cpy%Sync Devices
   [cache_synced_primary_raid6_4legs_1]          Cwi---C--- 500.00m 0.00     cache_synced_primary_raid6_4legs_1_cdata(0)
   [cache_synced_primary_raid6_4legs_1_cdata]    Cwi-ao---- 500.00m          /dev/sda1(33)
   [cache_synced_primary_raid6_4legs_1_cmeta]    ewi-ao----  12.00m          /dev/sda1(158)
   [lvol0_pmspare]                               ewi-----p-  12.00m          [unknown](33)
   synced_primary_raid6_4legs_1                  Cwi-aoC-p- 512.00m 0.00     synced_primary_raid6_4legs_1_corig(0)
   [synced_primary_raid6_4legs_1_corig]          rwi-aoC-p- 512.00m 100.00   synced_primary_raid6_4legs_1_corig_rimage_0(0),synced_primary_raid6_4legs_1_corig_rimage_1(0),synced_primary_raid6_4legs_1_corig_rimage_2(0),synced_primary_raid6_4legs_1_corig_rimage_3(0),synced_primary_raid6_4legs_1_corig_rimage_4(0),synced_primary_raid6_4legs_1_corig_rimage_5(0)
   [synced_primary_raid6_4legs_1_corig_rimage_0] iwi-aor-p- 128.00m          [unknown](1)
   [synced_primary_raid6_4legs_1_corig_rimage_1] iwi-aor--- 128.00m          /dev/sda1(1)
   [synced_primary_raid6_4legs_1_corig_rimage_2] iwi-aor--- 128.00m          /dev/sdf1(1)
   [synced_primary_raid6_4legs_1_corig_rimage_3] iwi-aor--- 128.00m          /dev/sdd1(1)
   [synced_primary_raid6_4legs_1_corig_rimage_4] iwi-aor--- 128.00m          /dev/sdg1(1)
   [synced_primary_raid6_4legs_1_corig_rimage_5] iwi-aor--- 128.00m          /dev/sdb1(1)
   [synced_primary_raid6_4legs_1_corig_rmeta_0]  ewi-aor-p-   4.00m          [unknown](0)
   [synced_primary_raid6_4legs_1_corig_rmeta_1]  ewi-aor---   4.00m          /dev/sda1(0)
   [synced_primary_raid6_4legs_1_corig_rmeta_2]  ewi-aor---   4.00m          /dev/sdf1(0)
   [synced_primary_raid6_4legs_1_corig_rmeta_3]  ewi-aor---   4.00m          /dev/sdd1(0)
   [synced_primary_raid6_4legs_1_corig_rmeta_4]  ewi-aor---   4.00m          /dev/sdg1(0)
   [synced_primary_raid6_4legs_1_corig_rmeta_5]  ewi-aor---   4.00m          /dev/sdb1(0)

Verifying FAILED device /dev/sde1 is *NOT* in the volume(s)
  WARNING: Device for PV eX9fOz-z6UL-sJXu-ErHI-i7nS-HaPV-LkhCgr not found or rejected by a filter.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rimage_0 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rmeta_0 while checking used and assumed devices.
Verifying IMAGE device /dev/sda1 *IS* in the volume(s)
  WARNING: Device for PV eX9fOz-z6UL-sJXu-ErHI-i7nS-HaPV-LkhCgr not found or rejected by a filter.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rimage_0 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rmeta_0 while checking used and assumed devices.
Verifying IMAGE device /dev/sdf1 *IS* in the volume(s)
  WARNING: Device for PV eX9fOz-z6UL-sJXu-ErHI-i7nS-HaPV-LkhCgr not found or rejected by a filter.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rimage_0 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rmeta_0 while checking used and assumed devices.
Verifying IMAGE device /dev/sdd1 *IS* in the volume(s)
  WARNING: Device for PV eX9fOz-z6UL-sJXu-ErHI-i7nS-HaPV-LkhCgr not found or rejected by a filter.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rimage_0 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rmeta_0 while checking used and assumed devices.
Verifying IMAGE device /dev/sdg1 *IS* in the volume(s)
  WARNING: Device for PV eX9fOz-z6UL-sJXu-ErHI-i7nS-HaPV-LkhCgr not found or rejected by a filter.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rimage_0 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rmeta_0 while checking used and assumed devices.
Verifying IMAGE device /dev/sdb1 *IS* in the volume(s)
  WARNING: Device for PV eX9fOz-z6UL-sJXu-ErHI-i7nS-HaPV-LkhCgr not found or rejected by a filter.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rimage_0 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rmeta_0 while checking used and assumed devices.
Verify the rimage/rmeta dm devices remain after the failures
Checking EXISTENCE and STATE of lvol0_pmspare on: 
        Skipping check for lvol0_pmspare until BUG 1030121 (lvol0_pmspare) is sorted out
Checking EXISTENCE and STATE of synced_primary_raid6_4legs_1_corig_rimage_0 on: host-123 

(ALLOCATE POLICY) there should not be an 'unknown' device associated with synced_primary_raid6_4legs_1_corig_rimage_0 on host-123



Sep 29 14:51:39 host-123 qarshd[18024]: Running cmdline: echo offline > /sys/block/sde/device/state
Sep 29 14:51:39 host-123 systemd: Started qarsh Per-Connection Server (10.15.80.224:50172).
Sep 29 14:51:39 host-123 systemd: Starting qarsh Per-Connection Server (10.15.80.224:50172)...
Sep 29 14:51:39 host-123 qarshd[18028]: Talking to peer ::ffff:10.15.80.224:50172 (IPv6)
Sep 29 14:51:39 host-123 qarshd[18028]: Running cmdline: pvscan --cache /dev/sde1
Sep 29 14:51:39 host-123 kernel: sd 4:0:0:1: rejecting I/O to offline device
Sep 29 14:51:39 host-123 kernel: sd 4:0:0:1: rejecting I/O to offline device
Sep 29 14:51:39 host-123 kernel: sd 4:0:0:1: rejecting I/O to offline device
Sep 29 14:51:39 host-123 kernel: sd 4:0:0:1: rejecting I/O to offline device
Sep 29 14:51:39 host-123 kernel: sd 4:0:0:1: rejecting I/O to offline device
Sep 29 14:51:43 host-123 kernel: sd 4:0:0:1: rejecting I/O to offline device
Sep 29 14:51:43 host-123 kernel: md: super_written gets error=-5, uptodate=0
Sep 29 14:51:43 host-123 kernel: md/raid:mdX: Disk failure on dm-3, disabling device.#012md/raid:mdX: Operation continuing on 5 devices.
Sep 29 14:51:43 host-123 lvm[16364]: Device #0 of raid6_zr array, black_bird-synced_primary_raid6_4legs_1_corig, has failed.
Sep 29 14:51:43 host-123 lvm[16364]: WARNING: Device for PV eX9fOz-z6UL-sJXu-ErHI-i7nS-HaPV-LkhCgr not found or rejected by a filter.
Sep 29 14:51:43 host-123 lvm[16364]: WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rimage_0 while checking used and assumed devices.
Sep 29 14:51:43 host-123 lvm[16364]: WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rmeta_0 while checking used and assumed devices.
Sep 29 14:51:43 host-123 lvm[16364]: WARNING: Device for PV eX9fOz-z6UL-sJXu-ErHI-i7nS-HaPV-LkhCgr already missing, skipping.
Sep 29 14:51:43 host-123 lvm[16364]: WARNING: Device for PV eX9fOz-z6UL-sJXu-ErHI-i7nS-HaPV-LkhCgr not found or rejected by a filter.
Sep 29 14:51:43 host-123 lvm[16364]: WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rimage_0 while checking used and assumed devices.
Sep 29 14:51:43 host-123 lvm[16364]: WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid6_4legs_1_corig_rmeta_0 while checking used and assumed devices.
Sep 29 14:51:43 host-123 lvm[16364]: Cannot convert internal LV black_bird/synced_primary_raid6_4legs_1_corig.
Sep 29 14:51:43 host-123 lvm[16364]: Failed to process event for black_bird-synced_primary_raid6_4legs_1_corig.


Version-Release number of selected component (if applicable):
3.10.0-510.el7.x86_64

lvm2-2.02.166-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
lvm2-libs-2.02.166-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
lvm2-cluster-2.02.166-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-libs-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-event-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-event-libs-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016
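
For readers who want to reproduce this outside the test harness, a condensed sketch distilled from the scenario log above; the VG name, LV names, device paths, and sizes mirror the log but are only illustrative:

# Build a 6-device RAID6 LV and let it sync (the harness waits for 100% Cpy%Sync).
lvcreate --type raid6 -i 4 -n synced_primary_raid6_4legs_1 -L 500M black_bird \
    /dev/sde1 /dev/sda1 /dev/sdf1 /dev/sdd1 /dev/sdg1 /dev/sdb1
# Create cache data and metadata LVs on a device that will not be failed,
# combine them into a cache pool, and attach the pool to the RAID LV.
lvcreate -n cache_synced_primary_raid6_4legs_1 -L 500M black_bird /dev/sda1
lvcreate -n meta -L 12M black_bird /dev/sda1
lvconvert --yes --type cache-pool --cachemode writethrough \
    --poolmetadata black_bird/meta black_bird/cache_synced_primary_raid6_4legs_1
lvconvert --yes --type cache \
    --cachepool black_bird/cache_synced_primary_raid6_4legs_1 \
    black_bird/synced_primary_raid6_4legs_1
# Put a filesystem on it and mount it so there is I/O to drive the failure.
mkfs.xfs /dev/black_bird/synced_primary_raid6_4legs_1
mkdir -p /mnt/synced_primary_raid6_4legs_1
mount /dev/black_bird/synced_primary_raid6_4legs_1 /mnt/synced_primary_raid6_4legs_1
# Fail the primary leg (sde backs rimage_0/rmeta_0) and write through the cache.
echo offline > /sys/block/sde/device/state
dd if=/dev/zero of=/mnt/synced_primary_raid6_4legs_1/ddfile count=10 bs=4M
# With raid_fault_policy = "allocate", dmeventd's automatic repair then fails
# on lvm2-2.02.166-1.el7, as does the same command run by hand:
lvconvert --repair black_bird/synced_primary_raid6_4legs_1_corig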

Comment 1 Corey Marthaler 2016-09-29 22:00:54 UTC
This appears to only affect cache RAID repair when the fault policy is "allocate"; with "warn", a manual repair attempt appears to succeed.
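
For context, the policy being varied here lives in the activation section of lvm.conf; a minimal sketch of the two settings, shown only to make the warn/allocate distinction concrete:

# /etc/lvm/lvm.conf
activation {
    # "warn"     - dmeventd only logs the device failure; any repair is left
    #              to a manual "lvconvert --repair".
    # "allocate" - dmeventd tries to replace the failed leg automatically,
    #              which is the path that hits this bug.
    raid_fault_policy = "allocate"
}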

Comment 3 Heinz Mauelshagen 2016-10-04 15:21:40 UTC
Reproduced with raid_fault_policy="allocate".

"lvconvert --repair black_bird/synced_primary_raid6_4legs_1_corig" fails here with "Cannot convert internal LV ...".

Is this a regression at all?

Comment 4 Corey Marthaler 2016-10-04 16:24:45 UTC
1. Verified that a manual repair is never actually attempted in the passing 'warn' cases, due to the restriction that cache and pool RAIDs need to be inactive, so disregard comment #1.


2. Verified that this same test case passed in 7.2.z as well as in an earlier RHEL 7.3 build (passing log below; a short verification sketch follows it):

3.10.0-510.el7.x86_64

lvm2-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
lvm2-libs-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
lvm2-cluster-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-libs-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-event-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-event-libs-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016



Oct  4 11:17:01 host-116 lvm[3815]: Device #0 of raid1 array, black_bird-synced_primary_raid1_2legs_1_cdata, has failed.
Oct  4 11:17:01 host-116 lvm[3815]: WARNING: Device for PV dlewYC-gbDe-njA2-GeRl-jnev-zJ0I-p92C5V not found or rejected by a filter.
Oct  4 11:17:01 host-116 lvm[3815]: WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid1_2legs_1_cdata_rimage_0 while checking used and assumed devices.
Oct  4 11:17:01 host-116 lvm[3815]: WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid1_2legs_1_cdata_rmeta_0 while checking used and assumed devices.
Oct  4 11:17:01 host-116 lvm[3815]: WARNING: Device for PV dlewYC-gbDe-njA2-GeRl-jnev-zJ0I-p92C5V already missing, skipping.
Oct  4 11:17:01 host-116 lvm[3815]: WARNING: Device for PV dlewYC-gbDe-njA2-GeRl-jnev-zJ0I-p92C5V not found or rejected by a filter.
Oct  4 11:17:01 host-116 lvm[3815]: WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid1_2legs_1_cdata_rimage_0 while checking used and assumed devices.
Oct  4 11:17:01 host-116 lvm[3815]: WARNING: Couldn't find all devices for LV black_bird/synced_primary_raid1_2legs_1_cdata_rmeta_0 while checking used and assumed devices.
Oct  4 11:17:02 host-116 kernel: device-mapper: raid: Device 0 specified for rebuild; clearing superblock
Oct  4 11:17:02 host-116 kernel: md/raid1:mdX: active with 1 out of 2 mirrors
Oct  4 11:17:02 host-116 kernel: created bitmap (1 pages) for device mdX
Oct  4 11:17:03 host-116 kernel: mdX: bitmap initialized from disk: read 1 pages, set 0 of 1000 bits
Oct  4 11:17:03 host-116 kernel: md: recovery of RAID array mdX
Oct  4 11:17:03 host-116 kernel: md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Oct  4 11:17:03 host-116 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Oct  4 11:17:03 host-116 kernel: md: using 128k window, over a total of 512000k.
Oct  4 11:17:03 host-116 multipathd: dm-2: remove map (uevent)
Oct  4 11:17:03 host-116 multipathd: dm-2: devmap not registered, can't remove
Oct  4 11:17:03 host-116 multipathd: dm-3: remove map (uevent)
Oct  4 11:17:03 host-116 multipathd: dm-3: devmap not registered, can't remove
Oct  4 11:17:03 host-116 multipathd: dm-13: remove map (uevent)
Oct  4 11:17:03 host-116 multipathd: dm-13: devmap not registered, can't remove
Oct  4 11:17:03 host-116 systemd: Started qarsh Per-Connection Server (10.15.80.224:51342).
Oct  4 11:17:03 host-116 systemd: Starting qarsh Per-Connection Server (10.15.80.224:51342)...
Oct  4 11:17:03 host-116 multipathd: dm-12: remove map (uevent)
Oct  4 11:17:03 host-116 multipathd: dm-12: devmap not registered, can't remove
Oct  4 11:17:03 host-116 multipathd: dm-2: remove map (uevent)
Oct  4 11:17:03 host-116 multipathd: dm-3: remove map (uevent)
Oct  4 11:17:03 host-116 qarshd[6698]: Talking to peer ::ffff:10.15.80.224:51342 (IPv6)
Oct  4 11:17:03 host-116 multipathd: dm-12: remove map (uevent)
Oct  4 11:17:03 host-116 multipathd: dm-13: remove map (uevent)
Oct  4 11:17:03 host-116 qarshd[6698]: Running cmdline: pvs -a
Oct  4 11:17:03 host-116 kernel: md/raid1:mdX: active with 1 out of 2 mirrors
Oct  4 11:17:03 host-116 kernel: created bitmap (1 pages) for device mdX
Oct  4 11:17:04 host-116 kernel: md: mdX: recovery interrupted.
Oct  4 11:17:04 host-116 kernel: mdX: bitmap initialized from disk: read 1 pages, set 0 of 1000 bits
Oct  4 11:17:04 host-116 kernel: md: recovery of RAID array mdX
Oct  4 11:17:04 host-116 kernel: md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Oct  4 11:17:04 host-116 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Oct  4 11:17:04 host-116 kernel: md: using 128k window, over a total of 512000k.
Oct  4 11:17:04 host-116 kernel: md: resuming recovery of mdX from checkpoint.
Oct  4 11:17:04 host-116 lvm[3815]: Faulty devices in black_bird/synced_primary_raid1_2legs_1_cdata successfully replaced.
Oct  4 11:17:04 host-116 systemd: Started qarsh Per-Connection Server (10.15.80.224:51344).
Oct  4 11:17:04 host-116 systemd: Starting qarsh Per-Connection Server (10.15.80.224:51344)...
Oct  4 11:17:04 host-116 qarshd[6751]: Talking to peer ::ffff:10.15.80.224:51344 (IPv6)
Oct  4 11:17:04 host-116 qarshd[6751]: Running cmdline: tail -n1 /var/log/messages | cut -c 1-12
Oct  4 11:17:04 host-116 systemd: Started qarsh Per-Connection Server (10.15.80.224:51346).
Oct  4 11:17:04 host-116 systemd: Starting qarsh Per-Connection Server (10.15.80.224:51346)...
Oct  4 11:17:04 host-116 qarshd[6757]: Talking to peer ::ffff:10.15.80.224:51346 (IPv6)
Oct  4 11:17:05 host-116 qarshd[6757]: Running cmdline: dd if=/dev/zero of=/mnt/synced_primary_raid1_2legs_1/ddfile count=10 bs=4M
Oct  4 11:17:05 host-116 systemd: Started qarsh Per-Connection Server (10.15.80.224:51348).
Oct  4 11:17:05 host-116 systemd: Starting qarsh Per-Connection Server (10.15.80.224:51348)...
Oct  4 11:17:05 host-116 qarshd[6761]: Talking to peer ::ffff:10.15.80.224:51348 (IPv6)
Oct  4 11:17:05 host-116 qarshd[6761]: Running cmdline: sync
Oct  4 11:17:08 host-116 kernel: md: mdX: recovery done.
Oct  4 11:17:08 host-116 lvm[3815]: Device #0 of raid1 array, black_bird-synced_primary_raid1_2legs_1_cdata, has failed.
Oct  4 11:17:08 host-116 lvm[3815]: WARNING: Device for PV dlewYC-gbDe-njA2-GeRl-jnev-zJ0I-p92C5V not found or rejected by a filter.
Oct  4 11:17:08 host-116 lvm[3815]: WARNING: Device for PV dlewYC-gbDe-njA2-GeRl-jnev-zJ0I-p92C5V not found or rejected by a filter.
Oct  4 11:17:08 host-116 lvm[3815]: Faulty devices in black_bird/synced_primary_raid1_2legs_1_cdata successfully replaced.
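
A quick way to confirm that an allocate-policy repair actually replaced the failed leg, as in the passing log above, is to check that no sub-LV still maps to "[unknown]"; a small sketch, assuming the VG from this bug:

# List every (sub-)LV with its backing devices; after a successful repair no
# rimage/rmeta entry should still reference [unknown], and the dmeventd log
# should contain "Faulty devices in <vg>/<lv> successfully replaced."
lvs -a -o lv_name,lv_attr,copy_percent,devices black_bird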

Comment 6 Jonathan Earl Brassow 2016-10-04 17:16:18 UTC
There is a workaround for this issue (I'll elaborate on that in the next comment). It should not cause hangs or corruption, only inconvenience. As such, I am pushing for a 7.4 fix with 7.3.z inclusion.

Comment 7 Heinz Mauelshagen 2016-10-05 16:03:36 UTC
A one-liner fix to lvconvert.c, allowing rebuild of the corig internal LV, is in testing...

Comment 8 Heinz Mauelshagen 2016-10-10 15:33:00 UTC
Fix pushed upstream:

Allow repair on cache origin RAID LVs, and
"lvconvert --replace/--mirrors/--type {raid*|mirror|striped|linear}" as well.

Allow the same lvconvert actions on any cache pool and metadata RAID SubLVs.
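
A sketch of what the fix permits, using the names from this bug. Whether the top-level cached LV or its _corig sub-LV is the right target can depend on the lvm2 version; the repair in this bug was invoked on the _corig, so that is what is shown, and the PV name is only illustrative.

# With the fix, dmeventd's allocate-policy handler (and the same command run
# manually) can repair the cached RAID origin again:
lvconvert --repair black_bird/synced_primary_raid6_4legs_1_corig
# Explicitly replacing a specific failed leg is also allowed:
lvconvert --replace /dev/sde1 black_bird/synced_primary_raid6_4legs_1_corig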

Comment 18 Roman Bednář 2017-04-18 12:02:47 UTC
Created attachment 1272293 [details]
test result

Thank you for the explanation; I was not aware of this behaviour.

Marking verified.

3.10.0-640.el7.x86_64

lvm2-2.02.169-3.el7    BUILT: Wed Mar 29 16:17:46 CEST 2017
lvm2-libs-2.02.169-3.el7    BUILT: Wed Mar 29 16:17:46 CEST 2017
lvm2-cluster-2.02.169-3.el7    BUILT: Wed Mar 29 16:17:46 CEST 2017
device-mapper-1.02.138-3.el7    BUILT: Wed Mar 29 16:17:46 CEST 2017
device-mapper-libs-1.02.138-3.el7    BUILT: Wed Mar 29 16:17:46 CEST 2017
device-mapper-event-1.02.138-3.el7    BUILT: Wed Mar 29 16:17:46 CEST 2017
device-mapper-event-libs-1.02.138-3.el7    BUILT: Wed Mar 29 16:17:46 CEST 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 17:15:46 CEST 2017
cmirror-2.02.169-3.el7    BUILT: Wed Mar 29 16:17:46 CEST 2017

Comment 19 errata-xmlrpc 2017-08-01 21:47:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2222

