Bug 1347048 - "Expected raid segment" warning when executing lvs after raid conversion of _tdata
Summary: "Expected raid segment" warning when executing lvs after raid conversion of _...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1404384 1469559
 
Reported: 2016-06-15 22:00 UTC by Corey Marthaler
Modified: 2021-09-03 12:36 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1404384 (view as bug list)
Environment:
Last Closed: 2017-07-31 16:13:44 UTC
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2016-06-15 22:00:30 UTC
Description of problem:
This was seen while verifying bug 1296312 (unable to convert thin meta|data volume residing on a shared VG).

[root@mckinley-02 ~]# lvs -a -o +devices
  Expected raid segment type but got linear instead
  LV                    VG               Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                             
  [lvmlock]             global           -wi-ao---- 256.00m                                                     /dev/mapper/mpathc1(0)                                              
  POOL                  snapper_thinp    twi-aot---   2.00g             0.21   0.16                             POOL_tdata(0)                                                       
  [POOL_tdata]          snapper_thinp    rwi-aor-r-   2.00g                                    0.00             POOL_tdata_rimage_0(0),POOL_tdata_rimage_1(0),POOL_tdata_rimage_2(0)
  [POOL_tdata_rimage_0] snapper_thinp    Iwi---r---   2.00g                                                     /dev/mapper/mpathe1(65)                                             
  [POOL_tdata_rimage_1] snapper_thinp    Iwi---r---   2.00g                                                     /dev/mapper/mpathb1(1)                                              
  [POOL_tdata_rimage_2] snapper_thinp    Iwi---r---   2.00g                                                     /dev/mapper/mpatha1(1)                                              
  [POOL_tdata_rmeta_0]  snapper_thinp    ewi---r---   4.00m                                                     /dev/mapper/mpathe1(577)                                            
  [POOL_tdata_rmeta_1]  snapper_thinp    ewi---r---   4.00m                                                     /dev/mapper/mpathb1(0)                                              
  [POOL_tdata_rmeta_2]  snapper_thinp    ewi---r---   4.00m                                                     /dev/mapper/mpatha1(0)                                              
  [POOL_tmeta]          snapper_thinp    ewi-ao----  44.00m                                                     /dev/mapper/mpathf1(0)                                              
  [lvmlock]             snapper_thinp    -wi-ao---- 256.00m                                                     /dev/mapper/mpathe1(0)                                              
  [lvol0_pmspare]       snapper_thinp    ewi-------   4.00m                                                     /dev/mapper/mpathe1(64)                                             
  meta_resize           snapper_thinp    Vwi-a-t---   1.00g POOL origin 0.37                                                                                                        
  origin                snapper_thinp    Vwi-a-t---   1.00g POOL        0.37                                                                                                        
  other1                snapper_thinp    Vwi-a-t---   1.00g POOL        0.01                                                                                                        
  other2                snapper_thinp    Vwi---t---   1.00g POOL                                                                                                                    
  other3                snapper_thinp    Vwi---t---   1.00g POOL                                                                                                                    
  other4                snapper_thinp    Vwi---t---   1.00g POOL                                                                                                                    
  other5                snapper_thinp    Vwi---t---   1.00g POOL                                                                                                                    


Version-Release number of selected component (if applicable):
3.10.0-418.el7.x86_64
lvm2-2.02.156-1.el7    BUILT: Mon Jun 13 03:05:51 CDT 2016
lvm2-libs-2.02.156-1.el7    BUILT: Mon Jun 13 03:05:51 CDT 2016
lvm2-cluster-2.02.156-1.el7    BUILT: Mon Jun 13 03:05:51 CDT 2016
device-mapper-1.02.126-1.el7    BUILT: Mon Jun 13 03:05:51 CDT 2016
device-mapper-libs-1.02.126-1.el7    BUILT: Mon Jun 13 03:05:51 CDT 2016
device-mapper-event-1.02.126-1.el7    BUILT: Mon Jun 13 03:05:51 CDT 2016
device-mapper-event-libs-1.02.126-1.el7    BUILT: Mon Jun 13 03:05:51 CDT 2016
device-mapper-persistent-data-0.6.2-0.1.rc8.el7    BUILT: Wed May  4 02:56:34 CDT 2016
cmirror-2.02.156-1.el7    BUILT: Mon Jun 13 03:05:51 CDT 2016
sanlock-3.3.0-1.el7    BUILT: Wed Feb 24 09:52:30 CST 2016
sanlock-lib-3.3.0-1.el7    BUILT: Wed Feb 24 09:52:30 CST 2016
lvm2-lockd-2.02.156-1.el7    BUILT: Mon Jun 13 03:05:51 CDT 2016

Comment 1 Corey Marthaler 2016-06-15 22:04:15 UTC
lvs -vvvv -a -o +devices
[...]

#toollib.c:2399       Processing LV POOL_tdata in VG snapper_thinp.
#activate/dev_manager.c:755         Getting device info for snapper_thinp-POOL_tdata [LVM-TpDpFypFuiE3DCbNewkBDYa1KlemgJNRKPlP3UKKAlibxk5WYwlsqRKoZLWH9mCR-tdata]
#ioctl/libdm-iface.c:1838         dm status  LVM-TpDpFypFuiE3DCbNewkBDYa1KlemgJNRKPlP3UKKAlibxk5WYwlsqRKoZLWH9mCR-tdata [ opencount noflush ]   [16384] (*1)
#libdm-common.c:1191         snapper_thinp-POOL_tdata (253:22): read ahead is 0
#activate/dev_manager.c:755         Getting device info for snapper_thinp-POOL_tdata [LVM-TpDpFypFuiE3DCbNewkBDYa1KlemgJNRKPlP3UKKAlibxk5WYwlsqRKoZLWH9mCR-tdata]
#ioctl/libdm-iface.c:1838         dm info  LVM-TpDpFypFuiE3DCbNewkBDYa1KlemgJNRKPlP3UKKAlibxk5WYwlsqRKoZLWH9mCR-tdata [ noopencount flush ]   [16384] (*1)
#activate/activate.c:1485       snapper_thinp/POOL_tdata is active locally
#activate/dev_manager.c:755         Getting device info for snapper_thinp-POOL_tdata [LVM-TpDpFypFuiE3DCbNewkBDYa1KlemgJNRKPlP3UKKAlibxk5WYwlsqRKoZLWH9mCR-tdata]
#ioctl/libdm-iface.c:1838         dm info  LVM-TpDpFypFuiE3DCbNewkBDYa1KlemgJNRKPlP3UKKAlibxk5WYwlsqRKoZLWH9mCR-tdata [ noopencount flush ]   [16384] (*1)
#activate/activate.c:960         Checking raid device health for LV snapper_thinp/POOL_tdata.
#ioctl/libdm-iface.c:1838         dm status  LVM-TpDpFypFuiE3DCbNewkBDYa1KlemgJNRKPlP3UKKAlibxk5WYwlsqRKoZLWH9mCR-tdata [ noopencount noflush ]   [16384] (*1)
#activate/dev_manager.c:1318   Expected raid segment type but got linear instead
#activate/activate.c:969         <backtrace>
#metadata/lv.c:1087         <backtrace>
#activate/dev_manager.c:755         Getting device info for snapper_thinp-POOL_tdata [LVM-TpDpFypFuiE3DCbNewkBDYa1KlemgJNRKPlP3UKKAlibxk5WYwlsqRKoZLWH9mCR-tdata]
#ioctl/libdm-iface.c:1838         dm info  LVM-TpDpFypFuiE3DCbNewkBDYa1KlemgJNRKPlP3UKKAlibxk5WYwlsqRKoZLWH9mCR-tdata [ noopencount flush ]   [16384] (*1)
#activate/activate.c:930         Checking mirror percent for LV snapper_thinp/POOL_tdata.
#activate/dev_manager.c:1281         Getting device raid1 status percentage for snapper_thinp-POOL_tdata
#ioctl/libdm-iface.c:1838         dm status  LVM-TpDpFypFuiE3DCbNewkBDYa1KlemgJNRKPlP3UKKAlibxk5WYwlsqRKoZLWH9mCR-tdata [ noopencount noflush ]   [16384] (*1)
#activate/dev_manager.c:1060         LV percent: 100.00

Comment 2 Corey Marthaler 2016-06-15 22:17:35 UTC
The fix for this is to reactivate the pool device. Doing so puts the dm devices in the proper state.

[root@mckinley-02 ~]# dmsetup ls
snapper_thinp-origin    (253:25)
snapper_thinp-POOL      (253:24)
global-lvmlock  (253:19)
snapper_thinp-meta_resize       (253:27)
snapper_thinp-POOL-tpool        (253:23)
snapper_thinp-POOL_tdata        (253:22)
snapper_thinp-POOL_tmeta        (253:21)
snapper_thinp-other1    (253:26)
snapper_thinp-lvmlock   (253:20)

[root@mckinley-02 ~]# lvchange -an snapper_thinp
[root@mckinley-02 ~]# lvchange -ay snapper_thinp

[root@mckinley-02 ~]# dmsetup status | grep POOL_tdata
snapper_thinp-POOL_tdata_rimage_1: 0 4194304 linear 
snapper_thinp-POOL_tdata_rimage_0: 0 4194304 linear 
snapper_thinp-POOL_tdata_rmeta_2: 0 8192 linear 
snapper_thinp-POOL_tdata_rmeta_1: 0 8192 linear 
snapper_thinp-POOL_tdata_rmeta_0: 0 8192 linear 
snapper_thinp-POOL_tdata: 0 4194304 raid raid1 3 AAA 4194304/4194304 idle 0
snapper_thinp-POOL_tdata_rimage_2: 0 4194304 linear

Comment 3 David Teigland 2016-06-15 22:21:30 UTC
This problem only seems to occur if there is an active thin LV using the pool when the pool's _tdata LV is converted to raid1 with lvconvert.
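
A minimal sketch of that triggering sequence, assuming a VG "vg" with a thin pool "pool" and a thin LV "thin1" (names are illustrative, not taken from this report):

# lvchange -ay vg/thin1                      (thin LV is active and using the pool)
# lvconvert --type raid1 -m1 vg/pool_tdata   (convert the pool data LV to raid1)
# lvs -a vg                                  (now warns: "Expected raid segment type but got linear instead")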

Comment 5 David Teigland 2016-06-16 18:13:14 UTC
This is a general problem with thin pools, unrelated to lvmlockd.

1. create a thin pool (ee/pool)
2. create a thin LV (ee/thin1)

3. convert the pool data LV to raid1
   (lvconvert --type raid1 -m1 ee/pool_tdata)

4. lvs -a ee prints:
  Expected raid segment type but got linear instead

5. dmsetup shows linear devs

ee-thin1        (253:9)
ee-pool-tpool   (253:7)
ee-pool_tdata   (253:6)
ee-pool_tmeta   (253:3)

6. reactivate the pool

# lvchange -an ee/pool
# lvchange -ay ee/pool

7. dmsetup shows raid devs, and no more message from lvs -a

ee-thin1        (253:9)
ee-pool-tpool   (253:7)
ee-pool_tdata   (253:6)
ee-pool_tdata_rimage_1  (253:12)
ee-pool_tmeta   (253:3)
ee-pool_tdata_rimage_0  (253:10)
ee-pool_tdata_rmeta_1   (253:11)
ee-pool_tdata_rmeta_0   (253:8)
ee-pool (253:13)

Comment 6 Corey Marthaler 2016-06-27 21:05:46 UTC
This issue can lead to a deadlock if creation of additional thin volumes is attempted.
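
For orientation, the transcript below reduces to roughly this sequence (commands taken from the log; the deadlock is the hung lvcreate at the end):

# lvconvert --type raid10 -i 2 -m 1 snapper_thinp/POOL_tmeta   (repaired _tmeta converted back to raid, with no reactivation afterwards)
# lvs -a -o +devices                                           (warns: "Expected raid segment type but got linear instead")
# lvremove -f /dev/snapper_thinp/snap                          (logs "Internal error: Writing metadata in critical section")
# lvcreate -s snapper_thinp/origin                             (blocks; the kernel reports the lvcreate task hung in md_super_wait)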

host-075: pvcreate /dev/sdd1 /dev/sda1 /dev/sdh1 /dev/sdf1 /dev/sdc1 /dev/sde1 /dev/sdg1
host-075: vgcreate  snapper_thinp /dev/sdd1 /dev/sda1 /dev/sdh1 /dev/sdf1 /dev/sdc1 /dev/sde1 /dev/sdg1

============================================================
Iteration 1 of 3 started at Mon Jun 27 12:07:28 CDT 2016
============================================================
SCENARIO - [swap_inactive_thin_pool_meta_device_using_lvconvert]
Swap _tmeta devices with newly created volumes while pool is inactive multiple times
Making pool volume
Converting *Raid* volumes to thin pool and thin pool metadata devices
lvcreate  --type raid10 -i 2 -m 1 --profile thin-performance --zero y -L 4M -n meta snapper_thinp
lvcreate  --type raid10 -i 2 -m 1 --profile thin-performance --zero y -L 1G -n POOL snapper_thinp
Waiting until all mirror|raid volumes become fully syncd...
   2/2 mirror(s) are fully synced: ( 100.00% 100.00% )
Sleeping 15 sec
Sleeping 15 sec
lvconvert --zero y --thinpool snapper_thinp/POOL --poolmetadata meta --yes
  WARNING: Converting logical volume snapper_thinp/POOL and snapper_thinp/meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)

Sanity checking pool device (POOL) metadata
examining superblock
examining devices tree
examining mapping tree
checking space map counts


Making origin volume
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n origin
lvcreate  -V 1G -T snapper_thinp/POOL -n other1
  WARNING: Sum of all thin volume sizes (2.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n other2
  WARNING: Sum of all thin volume sizes (3.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n other3
  WARNING: Sum of all thin volume sizes (4.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  -V 1G -T snapper_thinp/POOL -n other4
  WARNING: Sum of all thin volume sizes (5.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  -V 1G -T snapper_thinp/POOL -n other5
  WARNING: Sum of all thin volume sizes (6.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
Making snapshot of origin volume
lvcreate  -k n -s /dev/snapper_thinp/origin -n snap


*** Swap pool metadata iteration 1 ***
Current tmeta device: POOL_tmeta_rimage_0

vgchange -an snapper_thinp

Swap in new _tmeta device using lvconvert --repair
lvconvert --yes --repair snapper_thinp/POOL /dev/sdc1
  WARNING: recovery of pools without pool metadata spare LV is not automated.
  WARNING: If everything works, remove "snapper_thinp/POOL_meta0".
  WARNING: Use pvmove command to move "snapper_thinp/POOL_tmeta" on the best fitting PV.

vgchange -ay snapper_thinp

New swapped tmeta device: /dev/sdd1
Sanity checking pool device (POOL) metadata
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (1.00 GiB)!
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (1.00 GiB)!

Convert the now repaired meta device back to a redundant raid volume
lvconvert --type raid10 -i 2 -m 1 snapper_thinp/POOL_tmeta


### A reactivation here would avoid this problem.

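Presumably the reactivation meant here is the same workaround shown in comments 2 and 5, e.g.:

  lvchange -an snapper_thinp/POOL
  lvchange -ay snapper_thinp/POOL
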

Removing snap volume snapper_thinp/POOL_meta0
lvremove -f /dev/snapper_thinp/POOL_meta0

[root@host-075 ~]# lvs -a -o +devices
 Expected raid segment type but got linear instead
 LV                    Attr       LSize   Pool Origin Data% Meta% Cpy%Sync Devices
 POOL                  twi-aotz--   1.00g             0.00  0.54           POOL_tdata(0)
 [POOL_tdata]          rwi-aor---   1.00g                         100.00   POOL_tdata_rimage_0(0),POOL_tdata_rimage_1(0),POOL_tdata_rimage_2(0),POOL_tdata_rimage_3(0)
 [POOL_tdata_rimage_0] iwi-aor--- 512.00m                                  /dev/sdd1(3)
 [POOL_tdata_rimage_1] iwi-aor--- 512.00m                                  /dev/sda1(3)
 [POOL_tdata_rimage_2] iwi-aor--- 512.00m                                  /dev/sdh1(3)
 [POOL_tdata_rimage_3] iwi-aor--- 512.00m                                  /dev/sdf1(3)
 [POOL_tdata_rmeta_0]  ewi-aor---   4.00m                                  /dev/sdd1(2)
 [POOL_tdata_rmeta_1]  ewi-aor---   4.00m                                  /dev/sda1(2)
 [POOL_tdata_rmeta_2]  ewi-aor---   4.00m                                  /dev/sdh1(2)
 [POOL_tdata_rmeta_3]  ewi-aor---   4.00m                                  /dev/sdf1(2)
 [POOL_tmeta]          ewi-aor-r-   8.00m                         0.00     POOL_tmeta_rimage_0(0),POOL_tmeta_rimage_1(0)                                              
 [POOL_tmeta_rimage_0] Iwi---r---   8.00m                                  /dev/sdd1(131)
 [POOL_tmeta_rimage_1] Iwi---r---   8.00m                                  /dev/sda1(132)
 [POOL_tmeta_rmeta_0]  ewi---r---   4.00m                                  /dev/sdd1(133)
 [POOL_tmeta_rmeta_1]  ewi---r---   4.00m                                  /dev/sda1(131)
 [lvol0_pmspare]       ewi-------   8.00m                                  /dev/sdd1(134)
 origin                Vwi-a-tz--   1.00g POOL        0.00
 other1                Vwi-a-tz--   1.00g POOL        0.00
 other2                Vwi-a-tz--   1.00g POOL        0.00
 other3                Vwi-a-tz--   1.00g POOL        0.00
 other4                Vwi-a-tz--   1.00g POOL        0.00
 other5                Vwi-a-tz--   1.00g POOL        0.00
 snap                  Vwi-a-tz--   1.00g POOL origin 0.00


[root@host-075 ~]# lvremove -f -vvvv /dev/snapper_thinp/snap
[...]
#activate/dev_manager.c:755         Getting device info for snapper_thinp-POOL-tpool [LVM-QRQ5EY1joVgtvt8vo0LtmBC0omRhqgckm7sGqRpc1EfG3CKJcYVIIk9eb8zsKSrM-tpool]
#ioctl/libdm-iface.c:1838         dm info  LVM-QRQ5EY1joVgtvt8vo0LtmBC0omRhqgckm7sGqRpc1EfG3CKJcYVIIk9eb8zsKSrM-tpool [ noopencount flush ]   [16384] (*1)
#mm/memlock.c:562         Unlock: Memlock counters: locked:1 critical:1 daemon:0 suspended:4
#metadata/pv_manip.c:420         /dev/sdd1 0:      0      2: NULL(0:0)
#metadata/pv_manip.c:420         /dev/sdd1 1:      2      1: POOL_tdata_rmeta_0(0:0)
#metadata/pv_manip.c:420         /dev/sdd1 2:      3    128: POOL_tdata_rimage_0(0:0)
#metadata/pv_manip.c:420         /dev/sdd1 3:    131      2: POOL_tmeta_rimage_0(0:0)
#metadata/pv_manip.c:420         /dev/sdd1 4:    133      1: POOL_tmeta_rmeta_0(0:0)
#metadata/pv_manip.c:420         /dev/sdd1 5:    134      2: lvol0_pmspare(0:0)
#metadata/pv_manip.c:420         /dev/sdd1 6:    136   6262: NULL(0:0)
#metadata/pv_manip.c:420         /dev/sda1 0:      0      2: NULL(0:0)
#metadata/pv_manip.c:420         /dev/sda1 1:      2      1: POOL_tdata_rmeta_1(0:0)
#metadata/pv_manip.c:420         /dev/sda1 2:      3    128: POOL_tdata_rimage_1(0:0)
#metadata/pv_manip.c:420         /dev/sda1 3:    131      1: POOL_tmeta_rmeta_1(0:0)
#metadata/pv_manip.c:420         /dev/sda1 4:    132      2: POOL_tmeta_rimage_1(0:0)
#metadata/pv_manip.c:420         /dev/sda1 5:    134   6264: NULL(0:0)
#metadata/pv_manip.c:420         /dev/sdh1 0:      0      2: NULL(0:0)
#metadata/pv_manip.c:420         /dev/sdh1 1:      2      1: POOL_tdata_rmeta_2(0:0)
#metadata/pv_manip.c:420         /dev/sdh1 2:      3    128: POOL_tdata_rimage_2(0:0)
#metadata/pv_manip.c:420         /dev/sdh1 3:    131   6267: NULL(0:0)
#metadata/pv_manip.c:420         /dev/sdf1 0:      0      2: NULL(0:0)
#metadata/pv_manip.c:420         /dev/sdf1 1:      2      1: POOL_tdata_rmeta_3(0:0)
#metadata/pv_manip.c:420         /dev/sdf1 2:      3    128: POOL_tdata_rimage_3(0:0)
#metadata/pv_manip.c:420         /dev/sdf1 3:    131   6267: NULL(0:0)
#metadata/pv_manip.c:420         /dev/sdc1 0:      0   6398: NULL(0:0)
#metadata/pv_manip.c:420         /dev/sde1 0:      0   6398: NULL(0:0)
#metadata/pv_manip.c:420         /dev/sdg1 0:      0   6398: NULL(0:0)
#metadata/metadata.c:3471   Internal error: Writing metadata in critical section.
#mm/memlock.c:562         Unlock: Memlock counters: locked:1 critical:1 daemon:0 suspended:4
#format_text/format-text.c:665         Writing snapper_thinp metadata to /dev/sdd1 at 402432 len 9314
#format_text/format-text.c:665         Writing snapper_thinp metadata to /dev/sda1 at 402432 len 9314
#format_text/format-text.c:665         Writing snapper_thinp metadata to /dev/sdh1 at 402432 len 9314
[...]


[root@host-075 ~]# lvcreate -vvvv -s snapper_thinp/origin
[...]
#activate/dev_manager.c:1752         Getting device info for snapper_thinp-POOL_tdata_rmeta_3-cow [LVM-QRQ5EY1joVgtvt8vo0LtmBC0omRhqgck86ZproKBS53JmaKb9aiuT0EI5MuqLfAm-cow]
#ioctl/libdm-iface.c:1838         dm info  LVM-QRQ5EY1joVgtvt8vo0LtmBC0omRhqgck86ZproKBS53JmaKb9aiuT0EI5MuqLfAm-cow [ opencount flush ]   [16384] (*1)
#libdm-deptree.c:572         Matched uuid LVM-QRQ5EY1joVgtvt8vo0LtmBC0omRhqgckpKK7L0GNa2HqVXgYkt2Vfh4Z6b62pcGs in deptree.
#libdm-deptree.c:572         Matched uuid LVM-QRQ5EY1joVgtvt8vo0LtmBC0omRhqgckpKK7L0GNa2HqVXgYkt2Vfh4Z6b62pcGs in deptree.
#activate/dev_manager.c:2672         Checking kernel supports thin segment type for snapper_thinp/origin
#libdm-deptree.c:572         Matched uuid LVM-QRQ5EY1joVgtvt8vo0LtmBC0omRhqgckm7sGqRpc1EfG3CKJcYVIIk9eb8zsKSrM-tpool in deptree.
#metadata/metadata.c:2619         Calculated readahead of LV origin is 8192
#libdm-deptree.c:2697     Loading snapper_thinp-origin table (253:14)
#libdm-deptree.c:2641         Adding target to (253:14): 0 2097152 thin 253:12 1
#ioctl/libdm-iface.c:1838         dm table   (253:14) [ opencount flush ]   [16384] (*1)
#libdm-deptree.c:2732     Suppressed snapper_thinp-origin (253:14) identical table reload.
#libdm-deptree.c:1349     Resuming snapper_thinp-POOL_tmeta (253:2)
#libdm-common.c:2346         Udev cookie 0xd4d8750 (semid 2719746) created
#libdm-common.c:2366         Udev cookie 0xd4d8750 (semid 2719746) incremented to 1
#libdm-common.c:2238         Udev cookie 0xd4d8750 (semid 2719746) incremented to 2
#libdm-common.c:2488         Udev cookie 0xd4d8750 (semid 2719746) assigned to RESUME task(5) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK         (0x2e)
#ioctl/libdm-iface.c:1838         dm resume   (253:2) [ noopencount flush ]   [16384] (*1)


Jun 27 12:15:16 host-075 kernel: INFO: task lvcreate:9117 blocked for more than 120 seconds.
Jun 27 12:15:16 host-075 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 27 12:15:16 host-075 kernel: lvcreate        D ffff88003b227858     0  9117   2554 0x00000080
Jun 27 12:15:16 host-075 kernel: ffff88003ceefab0 0000000000000082 ffff880022f70ed0 ffff88003ceeffd8
Jun 27 12:15:16 host-075 kernel: ffff88003ceeffd8 ffff88003ceeffd8 ffff880022f70ed0 ffff88003b227ae8
Jun 27 12:15:16 host-075 kernel: ffff88003b227858 ffffea0000eb8840 ffff88003b227870 ffff88003b227858
Jun 27 12:15:16 host-075 kernel: Call Trace:
Jun 27 12:15:16 host-075 kernel: [<ffffffff8167f039>] schedule+0x29/0x70
Jun 27 12:15:16 host-075 kernel: [<ffffffff814f6dc5>] md_super_wait.part.46+0x65/0xb0
Jun 27 12:15:16 host-075 kernel: [<ffffffff810aee20>] ? wake_up_atomic_t+0x30/0x30
Jun 27 12:15:16 host-075 kernel: [<ffffffff814fab88>] md_super_wait+0x18/0x20
Jun 27 12:15:16 host-075 kernel: [<ffffffff81500223>] write_page+0x273/0x380
Jun 27 12:15:16 host-075 kernel: [<ffffffff81130fc0>] ? dyntick_save_progress_counter+0x30/0x30
Jun 27 12:15:16 host-075 kernel: [<ffffffffa03bd06e>] ? super_sync+0x1fe/0x230 [dm_raid]
Jun 27 12:15:16 host-075 kernel: [<ffffffff814ffeab>] bitmap_update_sb+0x11b/0x120
Jun 27 12:15:16 host-075 kernel: [<ffffffff814f8e78>] md_update_sb+0x238/0x670
Jun 27 12:15:16 host-075 kernel: [<ffffffff81086f66>] ? put_online_cpus+0x56/0x80
Jun 27 12:15:16 host-075 kernel: [<ffffffff8113258f>] ? synchronize_sched_expedited+0x16f/0x1d0
Jun 27 12:15:16 host-075 kernel: [<ffffffffa03bc6f6>] rs_update_sbs+0x36/0x50 [dm_raid]
Jun 27 12:15:16 host-075 kernel: [<ffffffffa03bcbb8>] raid_preresume+0x1c8/0x400 [dm_raid]
Jun 27 12:15:16 host-075 kernel: [<ffffffffa0007d14>] dm_table_resume_targets+0x54/0xe0 [dm_mod]
Jun 27 12:15:16 host-075 kernel: [<ffffffffa0005121>] dm_resume+0xc1/0x100 [dm_mod]
Jun 27 12:15:16 host-075 kernel: [<ffffffffa000a54b>] dev_suspend+0x12b/0x250 [dm_mod]
Jun 27 12:15:16 host-075 kernel: [<ffffffffa000a420>] ? table_load+0x390/0x390 [dm_mod]
Jun 27 12:15:16 host-075 kernel: [<ffffffffa000ae65>] ctl_ioctl+0x255/0x500 [dm_mod]
Jun 27 12:15:16 host-075 kernel: [<ffffffffa013a882>] ? xfs_file_buffered_aio_write+0x232/0x260 [xfs]
Jun 27 12:15:16 host-075 kernel: [<ffffffffa000b123>] dm_ctl_ioctl+0x13/0x20 [dm_mod]
Jun 27 12:15:16 host-075 kernel: [<ffffffff81207b35>] do_vfs_ioctl+0x2e5/0x4c0
Jun 27 12:15:16 host-075 kernel: [<ffffffff812a3fce>] ? file_has_perm+0xae/0xc0
Jun 27 12:15:16 host-075 kernel: [<ffffffff811f6981>] ? __sb_end_write+0x31/0x60
Jun 27 12:15:16 host-075 kernel: [<ffffffff81207db1>] SyS_ioctl+0xa1/0xc0
Jun 27 12:15:16 host-075 kernel: [<ffffffff81689f49>] system_call_fastpath+0x16/0x1b

Comment 8 David Teigland 2017-07-31 16:13:44 UTC
(In reply to David Teigland from comment #5)
> This is a general problem with thin pools, unrelated to lvmlockd.
> 
> 1. create a thin pool (ee/pool)
> 2. create a thin LV (ee/thin1)
> 
> 3. convert the pool data LV to raid1
>    (lvconvert --type raid1 -m1 ee/pool_tdata)
> 
> 4. lvs -a ee prints:
>   Expected raid segment type but got linear instead
> 
> 5. dmsetup shows linear devs
> 
> ee-thin1        (253:9)
> ee-pool-tpool   (253:7)
> ee-pool_tdata   (253:6)
> ee-pool_tmeta   (253:3)

Retried this with current code, and the problem is gone.

