Bug 1404384 - "Expected raid segment" warning when executing lvs after raid conversion of _tdata
Summary: "Expected raid segment" warning when executing lvs after raid conversion of _...
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.9
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On: 1347048
Blocks:
 
Reported: 2016-12-13 17:17 UTC by Corey Marthaler
Modified: 2017-12-06 12:33 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1347048
Environment:
Last Closed: 2017-12-06 12:33:49 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Comment 1 Corey Marthaler 2016-12-13 17:20:49 UTC
2.6.32-676.el6.x86_64

lvm2-2.02.143-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
lvm2-libs-2.02.143-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
lvm2-cluster-2.02.143-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
udev-147-2.73.el6_8.2    BUILT: Tue Aug 30 08:17:19 CDT 2016
device-mapper-1.02.117-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
device-mapper-libs-1.02.117-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
device-mapper-event-1.02.117-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
device-mapper-event-libs-1.02.117-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
device-mapper-persistent-data-0.6.2-0.1.rc7.el6    BUILT: Tue Mar 22 08:58:09 CDT 2016
cmirror-2.02.143-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016



SCENARIO - [swap_inactive_thin_pool_meta_device_using_lvconvert]
Swap _tmeta devices with newly created volumes while pool is inactive multiple times
Making pool volume
Converting *Raid* volumes to thin pool and thin pool metadata devices
lvcreate  --type raid1  -m 1  --zero n -L 4M -n meta snapper_thinp
  WARNING: Logical volume snapper_thinp/meta not zeroed.
lvcreate  --type raid1  -m 1  --zero n -L 1G -n POOL snapper_thinp
  WARNING: Logical volume snapper_thinp/POOL not zeroed.
Waiting until all mirror|raid volumes become fully syncd...
   1/2 mirror(s) are fully synced: ( 34.46% 100.00% )
   2/2 mirror(s) are fully synced: ( 100.00% 100.00% )
lvconvert --thinpool snapper_thinp/POOL --poolmetadata meta --yes --zero n
  WARNING: Converting logical volume snapper_thinp/POOL and snapper_thinp/meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)

Sanity checking pool device (POOL) metadata
thin_check /dev/mapper/snapper_thinp-meta_swap
examining superblock
examining devices tree
examining mapping tree
checking space map counts


Making origin volume
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n origin
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n other1
  WARNING: Sum of all thin volume sizes (2.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  -V 1G -T snapper_thinp/POOL -n other2
  WARNING: Sum of all thin volume sizes (3.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  -V 1G -T snapper_thinp/POOL -n other3
  WARNING: Sum of all thin volume sizes (4.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  -V 1G -T snapper_thinp/POOL -n other4
  WARNING: Sum of all thin volume sizes (5.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n other5
  WARNING: Sum of all thin volume sizes (6.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
Making snapshot of origin volume
lvcreate  -k n -s /dev/snapper_thinp/origin -n snap


*** Swap corrupt pool metadata iteration 1 ***
Current tmeta device: POOL_tmeta_rimage_0
Corrupting pool meta device (/dev/mapper/snapper_thinp-POOL_tmeta)
dd if=/dev/urandom of=/dev/mapper/snapper_thinp-POOL_tmeta count=512 seek=4096 bs=1
512+0 records in
512+0 records out
512 bytes (512 B) copied, 0.00279946 s, 183 kB/s
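
A quick read-back of the same region confirms the corruption took hold (illustrative only, not part of the captured test run; it reads the 512 bytes at the same offset the dd above wrote):
dd if=/dev/mapper/snapper_thinp-POOL_tmeta bs=1 skip=4096 count=512 2>/dev/null | hexdump -C | head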

Sanity checking pool device (POOL) metadata
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (1.00 GiB)!
thin_check /dev/mapper/snapper_thinp-meta_swap
examining superblock
examining devices tree
examining mapping tree
checking space map counts
bad checksum in space map bitmap
meta data appears corrupt
  Check of pool snapper_thinp/POOL failed (status:1). Manual repair required!
couldn't reactivate all volumes associated with pool device

Swap in new _tmeta device using lvconvert --repair
lvconvert --yes --repair snapper_thinp/POOL /dev/sdc1
  WARNING: recovery of pools without pool metadata spare LV is not automated.
  WARNING: If everything works, remove "snapper_thinp/POOL_meta0".
  WARNING: Use pvmove command to move "snapper_thinp/POOL_tmeta" on the best fitting PV.

New swapped tmeta device: /dev/sda1
Sanity checking pool device (POOL) metadata
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (1.00 GiB)!
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (1.00 GiB)!
thin_check /dev/mapper/snapper_thinp-meta_swap
examining superblock
examining devices tree
examining mapping tree
checking space map counts

Convert the now repaired meta device back to a redundant raid volume
lvconvert --type raid1  -m 1 snapper_thinp/POOL_tmeta

Removing snap volume snapper_thinp/POOL_meta0
lvremove -f /dev/snapper_thinp/POOL_meta0


Removing snap volume snapper_thinp/snap
lvremove -f /dev/snapper_thinp/snap
Although the snap removal passed, errors were found in its output
  Internal error: Writing metadata in critical section.
  Logical volume "snap" successfully removed
  Releasing activation in critical section.
  libdevmapper exiting with 4 device(s) still suspended.



[root@host-077 ~]# lvs -a -o +devices
  Expected raid segment type but got linear instead
  Expected raid segment type but got linear instead
  Expected raid segment type but got linear instead
  Expected raid segment type but got linear instead
  Expected raid segment type but got linear instead
  LV                    VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                      
  POOL                  snapper_thinp twi-aot---   1.00g             0.04   1.66                             POOL_tdata(0)                                
  [POOL_tdata]          snapper_thinp rwi-aor---   1.00g                                    100.00           POOL_tdata_rimage_0(0),POOL_tdata_rimage_1(0)
  [POOL_tdata_rimage_0] snapper_thinp iwi-aor---   1.00g                                                     /dev/sda1(3)                                 
  [POOL_tdata_rimage_1] snapper_thinp iwi-aor---   1.00g                                                     /dev/sdb1(3)                                 
  [POOL_tdata_rmeta_0]  snapper_thinp ewi-aor---   4.00m                                                     /dev/sda1(2)                                 
  [POOL_tdata_rmeta_1]  snapper_thinp ewi-aor---   4.00m                                                     /dev/sdb1(2)                                 
  [POOL_tmeta]          snapper_thinp ewi-aor-r-   4.00m                                    0.00             POOL_tmeta_rimage_0(0),POOL_tmeta_rimage_1(0)
  [POOL_tmeta_rimage_0] snapper_thinp iwi-sor-r-   4.00m                                                     /dev/sda1(259)                               
  [POOL_tmeta_rimage_1] snapper_thinp iwi-sor-r-   4.00m                                                     /dev/sdb1(260)                               
  [POOL_tmeta_rmeta_0]  snapper_thinp ewi-sor-r-   4.00m                                                     /dev/sda1(260)                               
  [POOL_tmeta_rmeta_1]  snapper_thinp ewi-sor-r-   4.00m                                                     /dev/sdb1(259)                               
  [lvol0_pmspare]       snapper_thinp ewi-------   4.00m                                                     /dev/sda1(261)                               
  origin                snapper_thinp Vwi-a-t---   1.00g POOL        0.01                                                                                 
  other1                snapper_thinp Vwi-a-t---   1.00g POOL        0.01                                                                                 
  other2                snapper_thinp Vwi-a-t---   1.00g POOL        0.01                                                                                 
  other3                snapper_thinp Vwi-a-t---   1.00g POOL        0.01                                                                                 
  other4                snapper_thinp Vwi-a-t---   1.00g POOL        0.01                                                                                 
  other5                snapper_thinp Vwi-a-t---   1.00g POOL        0.01
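
To narrow down which sub-LVs trigger the "Expected raid segment type" warnings, a segment-level listing and a metadata dump show what segment type is actually recorded on disk (illustrative commands, not part of the original run; the backup file path is just an example):
lvs -a --segments -o lv_name,segtype,devices snapper_thinp
vgcfgbackup -f /tmp/snapper_thinp.vg snapper_thinp
grep -n 'type = ' /tmp/snapper_thinp.vg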

Comment 5 Jan Kurik 2017-12-06 12:33:49 UTC
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.

The official life cycle policy can be reviewed here:

http://redhat.com/rhel/lifecycle

This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:

https://access.redhat.com/

