Bug 1818912
| Summary: | mdadm reshape from RAID5 to RAID6 hangs [fedora] | | |
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | Hubert Kario <hkario> |
| Component: | mdadm | Assignee: | Jes Sorensen <jes.sorensen> |
| Status: | CLOSED EOL | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 32 | CC: | agk, dledford, jes.sorensen, ncroxon, xni |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Cloned To: | 1818914 (view as bug list) |
| Last Closed: | 2021-05-25 17:09:14 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1818914, 1818931 | | |
Same behaviour on Fedora 31:
kernel-5.5.10-200.fc31.x86_64
mdadm-4.1-4.fc31.x86_64
```
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 loop3[4] loop2[3] loop1[1] loop0[0]
      2093056 blocks super 1.2 level 6, 512k chunk, algorithm 18 [4/3] [UUU_]
      [>....................]  reshape =  0.0% (1/1046528) finish=0.0min speed=174421K/sec

unused devices: <none>
```
And on Fedora 32:
kernel-5.6.0-0.rc7.git0.2.fc32.x86_64
mdadm-4.1-4.fc32.x86_64
```
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 loop3[4] loop2[3] loop1[1] loop0[0]
      2093056 blocks super 1.2 level 6, 512k chunk, algorithm 18 [4/3] [UUU_]
      [>....................]  reshape =  0.0% (1/1046528) finish=0.0min speed=174421K/sec

unused devices: <none>
```
https://marc.info/?l=linux-raid&m=159195299630680&w=2

Verified the above patch fixes the hang and allows the grow to proceed.

My mistake, it does not fix it. On Fedora 32, running the posted reproducer script, I see no progress. There are two kernel threads, md0_raid6 and md0_reshape:
```
# ps aux | grep md0
root      1265  0.0  0.0      0     0 ?    S    10:43   0:00 [md0_raid6]
root      1271  0.0  0.0      0     0 ?    S    10:43   0:00 [md0_reshape]
root      1274  0.0  0.0   5668  4804 ?    SLs  10:43   0:00 /usr/sbin/mdadm --grow --continue /dev/md0
root      1312  0.0  0.0 216084   644 pts/0 S+  10:59   0:00 grep --color=auto md0
```
What could be minutes or hours later, there is still no progress:
```
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 loop3[4] loop2[3] loop1[1] loop0[0]
      2093056 blocks super 1.2 level 6, 512k chunk, algorithm 18 [4/3] [UUU_]
      [>....................]  reshape =  1.5% (16384/1046528) finish=1091.8min speed=15K/sec

unused devices: <none>
```
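As an aside, stalls like this are easier to spot programmatically than by eyeballing repeated `cat /proc/mdstat` runs. A minimal sketch that pulls the percentage and speed out of a reshape line; the sample text is the output above, and the field layout is assumed to match the stock `/proc/mdstat` format:

```shell
# Extract reshape progress and speed from a captured mdstat reshape line.
# The regexes assume the usual "reshape = N.N% (...) ... speed=NK/sec" layout.
line='      [>....................]  reshape =  1.5% (16384/1046528) finish=1091.8min speed=15K/sec'

pct=$(printf '%s\n' "$line" | sed -n 's/.*reshape = *\([0-9.]*\)%.*/\1/p')
speed=$(printf '%s\n' "$line" | sed -n 's/.*speed=\([0-9]*\)K\/sec.*/\1/p')

echo "progress=${pct}% speed=${speed}K/sec"
```

Polling this in a loop and alerting when `pct` stops changing would catch the hang without watching the terminal.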
Jes, if you want to review... I'll post upstream.
```
diff --git a/Grow.c b/Grow.c
index 6b8321c..66bd8c0 100644
--- a/Grow.c
+++ b/Grow.c
@@ -931,10 +931,13 @@ int start_reshape(struct mdinfo *sra, int already_running,
 	err = err ?: sysfs_set_num(sra, NULL, "sync_max", sync_max_to_set);
 	if (!already_running && err == 0) {
 		int cnt = 5;
+		int err2;
 		do {
 			err = sysfs_set_str(sra, NULL, "sync_action",
 					    "reshape");
-			if (err)
+			err2 = sysfs_set_str(sra, NULL, "sync_max",
+					     "max");
+			if (err || err2)
 				sleep(1);
 		} while (err && errno == EBUSY && cnt-- > 0);
 	}
```
(In reply to Nigel Croxon from comment #6)

> -			if (err)
> +			err2 = sysfs_set_str(sra, NULL, "sync_max",
> +					     "max");
> +			if (err || err2)
> 				sleep(1);
> 		} while (err && errno == EBUSY && cnt-- > 0);
> 	}

Shouldn't `err2` be added to the `while (...)` condition as well?

Yes, thanks Hubert.
```
[root@mdbox mdadm]# git diff Grow.c
diff --git a/Grow.c b/Grow.c
index 57db7d4..8e4391f 100644
--- a/Grow.c
+++ b/Grow.c
@@ -931,12 +931,15 @@ int start_reshape(struct mdinfo *sra, int already_running,
 	err = err ?: sysfs_set_num(sra, NULL, "sync_max", sync_max_to_set);
 	if (!already_running && err == 0) {
 		int cnt = 5;
+		int err2;
 		do {
 			err = sysfs_set_str(sra, NULL, "sync_action",
 					    "reshape");
-			if (err)
+			err2 = sysfs_set_str(sra, NULL, "sync_max",
+					     "max");
+			if (err || err2)
 				sleep(1);
-		} while (err && errno == EBUSY && cnt-- > 0);
+		} while (err && err2 && errno == EBUSY && cnt-- > 0);
 	}
 	return err;
 }
```
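For reference, what the patch automates can also be done by hand through sysfs: restart the reshape via `sync_action` and lift the `sync_max` cap. A minimal sketch; the retry-on-EBUSY loop from `Grow.c` is omitted, and `MD_DIR` points at a scratch directory here so the snippet is safe to run — on a live system it would be `/sys/block/md0/md`:

```shell
# Manually kick a stalled reshape via the md sysfs interface.
# MD_DIR is a scratch directory for illustration; substitute
# /sys/block/md0/md (as root) to act on the real array.
MD_DIR=$(mktemp -d)

kick_reshape() {
    dir=$1
    # Mirrors the intent of the patched start_reshape(): start the
    # reshape, then raise sync_max so the kernel thread is not capped.
    echo reshape > "$dir/sync_action"
    echo max > "$dir/sync_max"
}

kick_reshape "$MD_DIR"
cat "$MD_DIR/sync_action" "$MD_DIR/sync_max"
```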
Hello Jes,

Do you have a status on the upstream patch that was submitted?

Thanks for your time,
-Nigel

This message is a reminder that Fedora 32 is nearing its end of life. Fedora will stop maintaining and issuing updates for Fedora 32 on 2021-05-25. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '32'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora 32 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.

Fedora 32 changed to end-of-life (EOL) status on 2021-05-25. Fedora 32 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug.

Thank you for reporting this bug and we are sorry it could not be fixed.
Description of problem:
Reshaping a 3-disk RAID5 to a 4-disk RAID6 hangs, and restoring from the critical section is impossible.

Version-Release number of selected component (if applicable):
mdadm-4.1-1.fc30.x86_64
kernel-5.5.10-100.fc30.x86_64

How reproducible:
always

Steps to Reproduce:
```
truncate -s 1G disk1
truncate -s 1G disk2
truncate -s 1G disk3
truncate -s 1G disk4
DEVS=($(losetup --find --show disk1))
DEVS+=($(losetup --find --show disk2))
DEVS+=($(losetup --find --show disk3))
ADD=$(losetup --find --show disk4)
mdadm --create /dev/md0 --level=5 --raid-devices=3 "${DEVS[@]}"
mdadm --wait /dev/md0
mdadm /dev/md0 --add "$ADD"
mdadm --grow /dev/md0 --level=6 --raid-devices=4 --backup-file=mdadm.backup
```

Actual results:
The array hangs at the very beginning of the migration:
```
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 loop3[4] loop2[3] loop1[1] loop0[0]
      2093056 blocks super 1.2 level 6, 512k chunk, algorithm 18 [4/3] [UUU_]
      [>....................]  reshape =  0.0% (1/1046528) finish=2.0min speed=8176K/sec

unused devices: <none>
```

Expected results:
A RAID6 array with the previously existing data.

Additional info:
Stopping and reassembling the array does not recover it:
```
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 "${DEVS[@]}" $ADD --backup-file=mdadm.backup
mdadm: Failed to restore critical section for reshape, sorry.
```
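As a sanity check on the numbers above: a 3-disk RAID5 and a 4-disk RAID6 both hold two data disks' worth of capacity, so the array size should not change across the reshape. A quick back-of-the-envelope check, assuming the ~1046528 KiB of usable space per 1 GiB loop device implied by the mdstat output (the remainder goes to the 1.2 superblock and data offset):

```shell
# Expected array size = data disks x per-device usable space (KiB).
per_device_kib=1046528        # usable KiB per 1 GiB member, from mdstat
raid5_data_disks=$((3 - 1))   # 3 devices, 1 parity
raid6_data_disks=$((4 - 2))   # 4 devices, 2 parity

echo "RAID5: $((raid5_data_disks * per_device_kib)) blocks"
echo "RAID6: $((raid6_data_disks * per_device_kib)) blocks"
```

Both come out to the 2093056 blocks reported in `/proc/mdstat`, which is why the reported size is identical before and during the level migration.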