Bug 1442137 - RAID RESHAPE: deadlock when attempting striped raid image addition on single core
Keywords:
Status: CLOSED DUPLICATE of bug 1443999
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Duplicates: 1439934
Depends On: 1443999
Blocks:
 
Reported: 2017-04-13 15:55 UTC by Corey Marthaler
Modified: 2021-09-03 12:37 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-19 18:05:43 UTC
Target Upstream Version:
Embargoed:


Attachments
verbose lvconvert attempt (358.87 KB, text/plain)
2017-04-13 22:10 UTC, Corey Marthaler

Description Corey Marthaler 2017-04-13 15:55:01 UTC
Description of problem:
[root@host-073 ~]# lvcreate -L 100M --type raid5 -i 2 VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB(26 extents).
  Logical volume "lvol0" created.
[root@host-073 ~]# lvs -a -o +devices
  LV               VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                              
  lvol0            VG            rwi-a-r--- 104.00m                                    100.00           lvol0_rimage_0(0),lvol0_rimage_1(0),lvol0_rimage_2(0)
  [lvol0_rimage_0] VG            iwi-aor---  52.00m                                                     /dev/sda1(1)                                         
  [lvol0_rimage_1] VG            iwi-aor---  52.00m                                                     /dev/sdb1(1)                                         
  [lvol0_rimage_2] VG            iwi-aor---  52.00m                                                     /dev/sdc1(1)                                         
  [lvol0_rmeta_0]  VG            ewi-aor---   4.00m                                                     /dev/sda1(0)                                         
  [lvol0_rmeta_1]  VG            ewi-aor---   4.00m                                                     /dev/sdb1(0)                                         
  [lvol0_rmeta_2]  VG            ewi-aor---   4.00m                                                     /dev/sdc1(0)                                         
  root             rhel_host-073 -wi-ao----  <6.20g                                                     /dev/vda2(205)                                       
  swap             rhel_host-073 -wi-ao---- 820.00m                                                     /dev/vda2(0)                                         
[root@host-073 ~]# lvconvert --yes --stripes 4 VG/lvol0
  Using default stripesize 64.00 KiB.
  WARNING: Adding stripes to active logical volume VG/lvol0 will grow it from 26 to 52 extents!
  Run "lvresize -l26 VG/lvol0" to shrink it or use the additional capacity.

[DEADLOCK]
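
The blocked tasks show up in the hung-task warnings below; they can also be dumped on demand (a generic technique, not part of the original report; requires sysrq to be enabled):

  # dump stack traces of all uninterruptible (D-state) tasks to the kernel log
  echo w > /proc/sysrq-trigger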


Apr 13 10:32:56 host-073 kernel: device-mapper: raid: Superblocks created for new raid set
Apr 13 10:32:56 host-073 kernel: md/raid:mdX: not clean -- starting background reconstruction
Apr 13 10:32:56 host-073 kernel: md/raid:mdX: device dm-3 operational as raid disk 0
Apr 13 10:32:56 host-073 kernel: md/raid:mdX: device dm-5 operational as raid disk 1
Apr 13 10:32:56 host-073 kernel: md/raid:mdX: device dm-7 operational as raid disk 2
Apr 13 10:32:56 host-073 kernel: md/raid:mdX: raid level 5 active with 3 out of 3 devices, algorithm 2
Apr 13 10:32:56 host-073 kernel: mdX: bitmap file is out of date, doing full recovery
Apr 13 10:32:56 host-073 kernel: md: resync of RAID array mdX
Apr 13 10:32:56 host-073 lvm[12658]: Monitoring RAID device VG-lvol0 for events.
Apr 13 10:32:57 host-073 kernel: md: mdX: resync done.
Apr 13 10:32:57 host-073 lvm[12658]: raid5_ls array, VG-lvol0, is now in-sync.
Apr 13 10:33:20 host-073 multipathd: dm-9: remove map (uevent)
Apr 13 10:33:20 host-073 multipathd: dm-9: devmap not registered, can't remove
Apr 13 10:33:20 host-073 multipathd: dm-10: remove map (uevent)
Apr 13 10:33:20 host-073 multipathd: dm-10: devmap not registered, can't remove
Apr 13 10:33:20 host-073 multipathd: dm-9: remove map (uevent)
Apr 13 10:33:20 host-073 multipathd: dm-10: remove map (uevent)
Apr 13 10:33:20 host-073 kernel: md/raid:mdX: device dm-3 operational as raid disk 0
Apr 13 10:33:20 host-073 kernel: md/raid:mdX: device dm-5 operational as raid disk 1
Apr 13 10:33:20 host-073 kernel: md/raid:mdX: device dm-7 operational as raid disk 2
Apr 13 10:33:20 host-073 kernel: md/raid:mdX: raid level 5 active with 3 out of 3 devices, algorithm 2
Apr 13 10:33:20 host-073 dmeventd[12658]: No longer monitoring RAID device VG-lvol0 for events.
Apr 13 10:33:20 host-073 kernel: dm-8: detected capacity change from 218103808 to 109051904
Apr 13 10:33:20 host-073 kernel: md: reshape of RAID array mdX
Apr 13 10:33:21 host-073 lvm[12658]: Monitoring RAID device VG-lvol0 for events.
Apr 13 10:33:22 host-073 kernel: md/raid:mdX: device dm-3 operational as raid disk 0
Apr 13 10:33:22 host-073 kernel: md/raid:mdX: device dm-5 operational as raid disk 1
Apr 13 10:33:22 host-073 kernel: md/raid:mdX: device dm-7 operational as raid disk 2
Apr 13 10:33:22 host-073 kernel: md/raid:mdX: device dm-10 operational as raid disk 3
Apr 13 10:33:22 host-073 kernel: md/raid:mdX: device dm-12 operational as raid disk 4
Apr 13 10:33:22 host-073 kernel: md/raid:mdX: raid level 5 active with 5 out of 5 devices, algorithm 2
Apr 13 10:33:22 host-073 dmeventd[12658]: No longer monitoring RAID device VG-lvol0 for events.
Apr 13 10:34:21 host-073 systemd-udevd: worker [15958] /devices/virtual/block/dm-8 is taking a long time
Apr 13 10:36:21 host-073 systemd-udevd: worker [15958] /devices/virtual/block/dm-8 timeout; kill it
Apr 13 10:36:21 host-073 systemd-udevd: seq 140722 '/devices/virtual/block/dm-8' killed
Apr 13 10:37:17 host-073 kernel: INFO: task lvconvert:15874 blocked for more than 120 seconds.
Apr 13 10:37:17 host-073 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 13 10:37:17 host-073 kernel: lvconvert       D ffff88003aba9f60     0 15874   2454 0x00000080
Apr 13 10:37:17 host-073 kernel: ffff880022aaba80 0000000000000086 ffff880022aabfd8 ffff880022aabfd8
Apr 13 10:37:17 host-073 kernel: ffff880022aabfd8 0000000000016cc0 ffff880035d46dd0 ffff880026267f10
Apr 13 10:37:17 host-073 kernel: 7fffffffffffffff ffff880026267f08 ffff88003aba9f60 0000000000000001
Apr 13 10:37:17 host-073 kernel: Call Trace:
Apr 13 10:37:17 host-073 kernel: [<ffffffff844979d9>] schedule+0x29/0x70
Apr 13 10:37:17 host-073 kernel: [<ffffffff84495649>] schedule_timeout+0x239/0x2c0
Apr 13 10:37:17 host-073 kernel: [<ffffffff83ebef85>] ? check_preempt_curr+0x85/0xa0
Apr 13 10:37:17 host-073 kernel: [<ffffffff83ebefb9>] ? ttwu_do_wakeup+0x19/0xd0
Apr 13 10:37:17 host-073 kernel: [<ffffffff84497d8d>] wait_for_completion+0xfd/0x140
Apr 13 10:37:17 host-073 kernel: [<ffffffff83ec2280>] ? wake_up_state+0x20/0x20
Apr 13 10:37:17 host-073 kernel: [<ffffffff83eaec2a>] kthread_stop+0x4a/0xe0
Apr 13 10:37:17 host-073 kernel: [<ffffffff842ff56b>] md_unregister_thread+0x4b/0x80
Apr 13 10:37:17 host-073 kernel: [<ffffffff84306459>] md_reap_sync_thread+0x19/0x150
Apr 13 10:37:17 host-073 kernel: [<ffffffff8430688b>] __md_stop_writes+0x3b/0xb0
Apr 13 10:37:17 host-073 kernel: [<ffffffff84306921>] md_stop_writes+0x21/0x30
Apr 13 10:37:17 host-073 kernel: [<ffffffffc05d2976>] raid_presuspend+0x16/0x20 [dm_raid]
Apr 13 10:37:17 host-073 kernel: [<ffffffffc0202a4a>] dm_table_presuspend_targets+0x4a/0x60 [dm_mod]
Apr 13 10:37:17 host-073 kernel: [<ffffffffc01fd848>] __dm_suspend+0xd8/0x210 [dm_mod]
Apr 13 10:37:17 host-073 kernel: [<ffffffffc01ffea0>] dm_suspend+0xc0/0xd0 [dm_mod]
Apr 13 10:37:17 host-073 kernel: [<ffffffffc0205414>] dev_suspend+0x194/0x250 [dm_mod]
Apr 13 10:37:17 host-073 kernel: [<ffffffffc0205280>] ? table_load+0x390/0x390 [dm_mod]
Apr 13 10:37:17 host-073 kernel: [<ffffffffc0205c45>] ctl_ioctl+0x1e5/0x500 [dm_mod]
Apr 13 10:37:17 host-073 kernel: [<ffffffffc0205f73>] dm_ctl_ioctl+0x13/0x20 [dm_mod]
Apr 13 10:37:17 host-073 kernel: [<ffffffff8401264d>] do_vfs_ioctl+0x33d/0x540
Apr 13 10:37:17 host-073 kernel: [<ffffffff840b072f>] ? file_has_perm+0x9f/0xb0
Apr 13 10:37:17 host-073 kernel: [<ffffffff840128f1>] SyS_ioctl+0xa1/0xc0
Apr 13 10:37:17 host-073 kernel: [<ffffffff844a3489>] system_call_fastpath+0x16/0x1b
Apr 13 10:37:17 host-073 kernel: INFO: task systemd-udevd:15958 blocked for more than 120 seconds.
Apr 13 10:37:17 host-073 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 13 10:37:17 host-073 kernel: systemd-udevd   D ffff88003b375e20     0 15958    485 0x00000084
Apr 13 10:37:17 host-073 kernel: ffff88000b0e36e0 0000000000000082 ffff88000b0e3fd8 ffff88000b0e3fd8
Apr 13 10:37:17 host-073 kernel: ffff88000b0e3fd8 0000000000016cc0 ffff88003dae0000 ffff880023eb3c08
Apr 13 10:37:17 host-073 kernel: ffff880023eb3c00 ffff88000b0e3750 ffff880023eb3ea0 ffff880023eb3dc8
Apr 13 10:37:17 host-073 kernel: Call Trace:
Apr 13 10:37:17 host-073 kernel: [<ffffffff844979d9>] schedule+0x29/0x70
Apr 13 10:37:17 host-073 kernel: [<ffffffffc05bce12>] raid5_get_active_stripe+0x4d2/0x710 [raid456]
Apr 13 10:37:17 host-073 kernel: [<ffffffff83eafa00>] ? wake_up_atomic_t+0x30/0x30
Apr 13 10:37:17 host-073 kernel: [<ffffffffc05bd205>] raid5_make_request+0x1b5/0xcd0 [raid456]
Apr 13 10:37:17 host-073 kernel: [<ffffffff83eafa00>] ? wake_up_atomic_t+0x30/0x30
Apr 13 10:37:17 host-073 kernel: [<ffffffffc05d254a>] raid_map+0x2a/0x40 [dm_raid]
Apr 13 10:37:17 host-073 kernel: [<ffffffffc01fe0a0>] __map_bio+0x90/0x190 [dm_mod]
Apr 13 10:37:17 host-073 kernel: [<ffffffffc01fc680>] ? queue_io+0x80/0x80 [dm_mod]
Apr 13 10:37:17 host-073 kernel: [<ffffffffc01fe39f>] __clone_and_map_data_bio+0x16f/0x280 [dm_mod]
Apr 13 10:37:17 host-073 kernel: [<ffffffffc01fe781>] __split_and_process_bio+0x2d1/0x520 [dm_mod]
Apr 13 10:37:17 host-073 kernel: [<ffffffff840e0000>] ? sha256_transform+0x15d0/0x1c40
Apr 13 10:37:17 host-073 kernel: [<ffffffffc01fecdc>] dm_make_request+0x11c/0x190 [dm_mod]
Apr 13 10:37:17 host-073 kernel: [<ffffffff840f0d16>] generic_make_request+0x106/0x1e0
Apr 13 10:37:17 host-073 kernel: [<ffffffff840f0e60>] submit_bio+0x70/0x150
Apr 13 10:37:17 host-073 kernel: [<ffffffff83f8f29e>] ? lru_cache_add+0xe/0x10
Apr 13 10:37:17 host-073 kernel: [<ffffffff84040954>] mpage_readpages+0x124/0x160
Apr 13 10:37:17 host-073 kernel: [<ffffffff84039c30>] ? I_BDEV+0x10/0x10
Apr 13 10:37:17 host-073 kernel: [<ffffffff84039c30>] ? I_BDEV+0x10/0x10
Apr 13 10:37:17 host-073 kernel: [<ffffffff8403a60d>] blkdev_readpages+0x1d/0x20
Apr 13 10:37:17 host-073 kernel: [<ffffffff83f8d15c>] __do_page_cache_readahead+0x1cc/0x250
Apr 13 10:37:17 host-073 kernel: [<ffffffff83f8d6d9>] force_page_cache_readahead+0x99/0xe0
Apr 13 10:37:17 host-073 kernel: [<ffffffff83f8d7b7>] page_cache_sync_readahead+0x97/0xb0
Apr 13 10:37:17 host-073 kernel: [<ffffffff83f8160b>] generic_file_aio_read+0x29b/0x790
Apr 13 10:37:17 host-073 kernel: [<ffffffff8403aa4c>] blkdev_aio_read+0x4c/0x70
Apr 13 10:37:17 host-073 kernel: [<ffffffff83ffd97d>] do_sync_read+0x8d/0xd0
Apr 13 10:37:17 host-073 kernel: [<ffffffff83ffe36c>] vfs_read+0x9c/0x170
Apr 13 10:37:17 host-073 kernel: [<ffffffff83fff22f>] SyS_read+0x7f/0xe0
Apr 13 10:37:17 host-073 kernel: [<ffffffff844a3489>] system_call_fastpath+0x16/0x1b
Apr 13 10:37:17 host-073 kernel: INFO: task mdX_reshape:15959 blocked for more than 120 seconds.
Apr 13 10:37:17 host-073 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 13 10:37:17 host-073 kernel: mdX_reshape     D ffff880035d46dd0     0 15959      2 0x00000080
Apr 13 10:37:17 host-073 kernel: ffff880026267aa0 0000000000000046 ffff880026267fd8 ffff880026267fd8
Apr 13 10:37:17 host-073 kernel: ffff880026267fd8 0000000000016cc0 ffff88003da8bec0 ffff880023eb3c0c
Apr 13 10:37:17 host-073 kernel: ffff880023eb3c00 ffff880026267b10 ffff880023eb3ea0 ffff880023eb3dd8
Apr 13 10:37:17 host-073 kernel: Call Trace:
Apr 13 10:37:17 host-073 kernel: [<ffffffff844979d9>] schedule+0x29/0x70
Apr 13 10:37:17 host-073 kernel: [<ffffffffc05bce12>] raid5_get_active_stripe+0x4d2/0x710 [raid456]
Apr 13 10:37:17 host-073 kernel: [<ffffffff83eafa00>] ? wake_up_atomic_t+0x30/0x30
Apr 13 10:37:17 host-073 kernel: [<ffffffffc05c1460>] reshape_request+0x4f0/0x910 [raid456]
Apr 13 10:37:17 host-073 kernel: [<ffffffffc05c1aff>] raid5_sync_request+0x27f/0x400 [raid456]
Apr 13 10:37:17 host-073 kernel: [<ffffffff84303174>] md_do_sync+0x9c4/0x1070
Apr 13 10:37:17 host-073 kernel: [<ffffffff83e60ebe>] ? kvm_clock_read+0x1e/0x20
Apr 13 10:37:17 host-073 kernel: [<ffffffff842ff4d5>] md_thread+0x155/0x1a0
Apr 13 10:37:17 host-073 kernel: [<ffffffff842ff380>] ? find_pers+0x80/0x80
Apr 13 10:37:17 host-073 kernel: [<ffffffff83eae9bf>] kthread+0xcf/0xe0
Apr 13 10:37:17 host-073 kernel: [<ffffffff83eae8f0>] ? insert_kthread_work+0x40/0x40
Apr 13 10:37:17 host-073 kernel: [<ffffffff844a33d8>] ret_from_fork+0x58/0x90
Apr 13 10:37:17 host-073 kernel: [<ffffffff83eae8f0>] ? insert_kthread_work+0x40/0x40
Apr 13 10:39:17 host-073 kernel: INFO: task lvconvert:15874 blocked for more than 120 seconds.
Apr 13 10:39:17 host-073 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 13 10:39:17 host-073 kernel: lvconvert       D ffff88003aba9f60     0 15874   2454 0x00000080
Apr 13 10:39:17 host-073 kernel: ffff880022aaba80 0000000000000086 ffff880022aabfd8 ffff880022aabfd8
Apr 13 10:39:17 host-073 kernel: ffff880022aabfd8 0000000000016cc0 ffff880035d46dd0 ffff880026267f10
Apr 13 10:39:17 host-073 kernel: 7fffffffffffffff ffff880026267f08 ffff88003aba9f60 0000000000000001
Apr 13 10:39:17 host-073 kernel: Call Trace:
Apr 13 10:39:17 host-073 kernel: [<ffffffff844979d9>] schedule+0x29/0x70
Apr 13 10:39:17 host-073 kernel: [<ffffffff84495649>] schedule_timeout+0x239/0x2c0
Apr 13 10:39:17 host-073 kernel: [<ffffffff83ebef85>] ? check_preempt_curr+0x85/0xa0
Apr 13 10:39:17 host-073 kernel: [<ffffffff83ebefb9>] ? ttwu_do_wakeup+0x19/0xd0
Apr 13 10:39:17 host-073 kernel: [<ffffffff84497d8d>] wait_for_completion+0xfd/0x140
Apr 13 10:39:17 host-073 kernel: [<ffffffff83ec2280>] ? wake_up_state+0x20/0x20
Apr 13 10:39:17 host-073 kernel: [<ffffffff83eaec2a>] kthread_stop+0x4a/0xe0
Apr 13 10:39:17 host-073 kernel: [<ffffffff842ff56b>] md_unregister_thread+0x4b/0x80
Apr 13 10:39:17 host-073 kernel: [<ffffffff84306459>] md_reap_sync_thread+0x19/0x150
Apr 13 10:39:17 host-073 kernel: [<ffffffff8430688b>] __md_stop_writes+0x3b/0xb0
Apr 13 10:39:17 host-073 kernel: [<ffffffff84306921>] md_stop_writes+0x21/0x30
Apr 13 10:39:17 host-073 kernel: [<ffffffffc05d2976>] raid_presuspend+0x16/0x20 [dm_raid]
Apr 13 10:39:17 host-073 kernel: [<ffffffffc0202a4a>] dm_table_presuspend_targets+0x4a/0x60 [dm_mod]
Apr 13 10:39:17 host-073 kernel: [<ffffffffc01fd848>] __dm_suspend+0xd8/0x210 [dm_mod]
Apr 13 10:39:17 host-073 kernel: [<ffffffffc01ffea0>] dm_suspend+0xc0/0xd0 [dm_mod]
Apr 13 10:39:17 host-073 kernel: [<ffffffffc0205414>] dev_suspend+0x194/0x250 [dm_mod]
Apr 13 10:39:17 host-073 kernel: [<ffffffffc0205280>] ? table_load+0x390/0x390 [dm_mod]
Apr 13 10:39:17 host-073 kernel: [<ffffffffc0205c45>] ctl_ioctl+0x1e5/0x500 [dm_mod]
Apr 13 10:39:17 host-073 kernel: [<ffffffffc0205f73>] dm_ctl_ioctl+0x13/0x20 [dm_mod]
Apr 13 10:39:17 host-073 kernel: [<ffffffff8401264d>] do_vfs_ioctl+0x33d/0x540
Apr 13 10:39:17 host-073 kernel: [<ffffffff840b072f>] ? file_has_perm+0x9f/0xb0
Apr 13 10:39:17 host-073 kernel: [<ffffffff840128f1>] SyS_ioctl+0xa1/0xc0
Apr 13 10:39:17 host-073 kernel: [<ffffffff844a3489>] system_call_fastpath+0x16/0x1b


Version-Release number of selected component (if applicable):
3.10.0-635.el7.x86_64

lvm2-2.02.169-3.el7    BUILT: Wed Mar 29 09:17:46 CDT 2017
lvm2-libs-2.02.169-3.el7    BUILT: Wed Mar 29 09:17:46 CDT 2017
lvm2-cluster-2.02.169-3.el7    BUILT: Wed Mar 29 09:17:46 CDT 2017
device-mapper-1.02.138-3.el7    BUILT: Wed Mar 29 09:17:46 CDT 2017
device-mapper-libs-1.02.138-3.el7    BUILT: Wed Mar 29 09:17:46 CDT 2017
device-mapper-event-1.02.138-3.el7    BUILT: Wed Mar 29 09:17:46 CDT 2017
device-mapper-event-libs-1.02.138-3.el7    BUILT: Wed Mar 29 09:17:46 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017


How reproducible:
Every time
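
A minimal reproducer, condensed from the transcript above (the VG and LV names are taken from the example; any VG with at least five PVs on a single-core machine should do):

  # distilled from the steps in the description; "VG" and "lvol0" as above
  lvcreate -L 100M --type raid5 -i 2 VG    # 3-image raid5: 2 data stripes + parity
  lvs -a -o +devices VG                    # wait until Cpy%Sync reaches 100.00
  lvconvert --yes --stripes 4 VG/lvol0     # reshape to 5 images; deadlocks here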

Comment 2 Corey Marthaler 2017-04-13 15:57:40 UTC
Actually, it appears that raid6 does work; only raid4 and raid5 are affected.

[root@host-121 ~]# lvcreate -L 100M --type raid6 -i 3 VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB(27 extents).
  Logical volume "lvol0" created.
[root@host-121 ~]# lvconvert --yes --stripes 4 VG/lvol0
  Using default stripesize 64.00 KiB.
  WARNING: Adding stripes to active logical volume VG/lvol0 will grow it from 27 to 36 extents!
  Run "lvresize -l27 VG/lvol0" to shrink it or use the additional capacity.
  Logical volume VG/lvol0 successfully converted.
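
The same stripe addition can be attempted per level to see which ones hang (a sketch, not part of the original report; the VG name, LV name, and stripe counts are assumptions; raid4/raid5 need at least 2 stripes, raid6 at least 3):

  # hypothetical per-level check; an affected level blocks at the lvconvert step
  # NB: a deadlocked lvconvert sits in D state and cannot be killed or timed out
  for spec in raid4:2 raid5:2 raid6:3; do
      type=${spec%%:*}; stripes=${spec##*:}
      lvcreate -y -L 100M --type "$type" -i "$stripes" -n reshape_test VG
      lvconvert --yes --stripes $((stripes + 1)) VG/reshape_test
      lvremove -f VG/reshape_test
  done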

Comment 3 Corey Marthaler 2017-04-13 22:10:45 UTC
Created attachment 1271573 [details]
verbose lvconvert attempt

Comment 4 Alasdair Kergon 2017-04-20 14:00:53 UTC
Using bug 1443999 for the kernel patches, and keeping the current bug for any userspace changes that are needed.

Comment 5 Corey Marthaler 2017-04-21 14:07:05 UTC
Sounds like this bug is pretty well understood by now, but FWIW, I'm now seeing this more frequently with raid6 reshapes.

* from type:    raid6_ra_6
* to type:      raid6_ls_6
lvconvert --yes -R 16384.00k  --type raid6_ls_6 centipede2/takeover


* from type:    raid6_zr
* to type:      raid6_n_6
lvconvert --yes   --type raid6_n_6 centipede2/takeover
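
A full sequence for the second transition might look like this (a sketch; the "centipede2" VG and "takeover" LV names come from the commands above, while the lvcreate step and size are assumptions):

  # assumed setup for the raid6_zr -> raid6_n_6 layout change shown above
  lvcreate -y -L 100M --type raid6_zr -i 3 -n takeover centipede2
  # wait for the initial resync to finish, then request the new layout:
  lvconvert --yes --type raid6_n_6 centipede2/takeover    # hangs as described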



3.10.0-651.el7.x86_64

lvm2-2.02.170-2.el7    BUILT: Thu Apr 13 14:37:43 CDT 2017
lvm2-libs-2.02.170-2.el7    BUILT: Thu Apr 13 14:37:43 CDT 2017
lvm2-cluster-2.02.170-2.el7    BUILT: Thu Apr 13 14:37:43 CDT 2017
device-mapper-1.02.139-2.el7    BUILT: Thu Apr 13 14:37:43 CDT 2017
device-mapper-libs-1.02.139-2.el7    BUILT: Thu Apr 13 14:37:43 CDT 2017
device-mapper-event-1.02.139-2.el7    BUILT: Thu Apr 13 14:37:43 CDT 2017
device-mapper-event-libs-1.02.139-2.el7    BUILT: Thu Apr 13 14:37:43 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017

Comment 6 Jonathan Earl Brassow 2017-04-24 14:35:07 UTC
*** Bug 1439934 has been marked as a duplicate of this bug. ***

Comment 11 Jonathan Earl Brassow 2017-06-19 18:05:43 UTC

*** This bug has been marked as a duplicate of bug 1443999 ***

