| Field | Value |
|---|---|
| Summary: | raid background reconstruction after every full bootup |
| Product: | Fedora |
| Component: | dracut |
| Version: | 16 |
| Hardware: | x86_64 |
| OS: | Linux |
| Status: | CLOSED ERRATA |
| Severity: | high |
| Priority: | unspecified |
| Reporter: | Damjan <damjanster> |
| Assignee: | dracut-maint |
| QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| CC: | agk, damjanster, dledford, dracut-maint, harald, joel.granados, johannbg, jonathan, kay, lpoetter, mbroz, metherid, mschmidt, notting, plautrba, twoerner |
| Doc Type: | Bug Fix |
| Last Closed: | 2012-12-20 15:42:04 UTC |
| Attachments: | full dmesg output (attachment 519198) |
This is most likely a systemd issue. Initscripts used to make sure on reboot that we unmounted everything, mounted root read-only, then ran mdadm --wait-clean --scan, then rebooted. My guess is that systemd is not performing this vital series of steps. These steps are responsible for the RAID array being marked clean; without them you will have a rebuild on every reboot.

Is it possible to manually add that command to an existing/new systemd script? It also happens that I cannot shut down/reboot the machine, as it stalls on "unmount". Now that I'm using hibernate to "stop" the PC, I've found that after waking up from hibernate a new gnome-shell starts for the same session. These processes keep piling up and I have to kill them manually to release the CPU and memory resources.

(In reply to comment #1)
> This is most likely a systemd issue. Initscripts used to make sure on reboot
> that we umounted everything, mounted root read only, then ran mdadm
> --wait-clean --scan, then rebooted.

Is this specific to RAID with external metadata? Looks related to bug 713224.

Yes, it is specific to external metadata. The --wait-clean --scan option to mdadm tells it to wait for all existing external imsm arrays to be marked clean by the (should still be running) mdmon processes. Once it reads from the on-disk superblocks that all imsm arrays are clean, it exits and the reboot process can complete.

This package has changed ownership in the Fedora Package Database. Reassigning to the new owner of this component.

Has this issue been resolved yet? I keep updating packages, but the rebuild problem remains.

I have had further problems with this issue, so I switched to a different distro altogether. The one I use now has also adopted systemd and GNOME 3.2. The one thing they have done differently is the persistent use of dmraid for "fake-RAID" controllers. I now believe there is an issue in the newer mdraid implementation that Fedora uses.
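The shutdown sequence described above can be sketched as a short shell script. This is an illustrative sketch, not the actual initscripts or dracut code; the DRY_RUN switch and the run() helper are assumptions added so the sequence can be read and exercised without touching any disks.

```shell
#!/bin/sh
# Sketch of the pre-reboot sequence described above (hypothetical, not the
# real initscripts code). With DRY_RUN=1 (the default here) each command is
# only printed, never executed.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run umount -a                    # unmount everything
run mount -o remount,ro /        # remount root read-only
run mdadm --wait-clean --scan    # wait until mdmon marks the arrays clean
run reboot -f                    # only now is it safe to reboot
```

The step that matters for external-metadata (imsm) arrays is `mdadm --wait-clean --scan`: it blocks until the mdmon processes have recorded a clean state in the on-disk superblocks, which is what prevents the resync on the next boot.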
Is there a way to get the old dmraid back to Fedora 16?

(In reply to comment #7)
> Is there a way to get the old dmraid back to Fedora 16?

Add "rd.md.imsm=0 rd.md.ddf=0" to the kernel command line.

(In reply to comment #8)
> (In reply to comment #7)
> > Is there a way to get the old dmraid back to Fedora 16?
>
> add "rd.md.imsm=0 rd.md.ddf=0" to the kernel command line

"rd.md.imsm=0 rd.md.ddf=0 rd.dm=1"

Any workarounds for this? Comment 9 did not work for me.

People might be interested in this thread: http://lists.freedesktop.org/archives/systemd-devel/2011-November/003734.html

Will add "mdadm --wait-clean --scan" to the dracut shutdown.

dracut has had "mdadm -v --stop --scan" since dracut-011. Is that not enough? Do I have to run "mdadm --wait-clean --scan" first?

dracut-018-60.git20120927.fc16 has been submitted as an update for Fedora 16.
https://admin.fedoraproject.org/updates/dracut-018-60.git20120927.fc16

dracut-018-105.git20120927.fc17 has been submitted as an update for Fedora 17.
https://admin.fedoraproject.org/updates/dracut-018-105.git20120927.fc17

Package dracut-018-105.git20120927.fc17:
* should fix your issue,
* was pushed to the Fedora 17 testing repository,
* should be available at your local mirror within two days.

Update it with:
# su -c 'yum update --enablerepo=updates-testing dracut-018-105.git20120927.fc17'
as soon as you are able to.

Please go to the following URL:
https://admin.fedoraproject.org/updates/FEDORA-2012-14953/dracut-018-105.git20120927.fc17
then log in and leave karma (feedback).

dracut-018-60.git20120927.fc16 has been pushed to the Fedora 16 stable repository. If problems still persist, please make note of it in this bug report.
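To apply the "rd.md.imsm=0 rd.md.ddf=0 rd.dm=1" workaround from the thread above persistently rather than typing it at the boot prompt, one option (assuming a GRUB 2 setup such as Fedora 16's; the "rhgb quiet" options shown are placeholders for whatever you already have) is to append the flags in /etc/default/grub:

```shell
# /etc/default/grub (sketch) -- "rhgb quiet" stands in for your existing
# options; only the rd.* flags are the workaround from this bug.
GRUB_CMDLINE_LINUX="rhgb quiet rd.md.imsm=0 rd.md.ddf=0 rd.dm=1"
```

Then regenerate the config with `grub2-mkconfig -o /boot/grub2/grub.cfg` and reboot. The rd.md.imsm=0 and rd.md.ddf=0 flags stop dracut's mdraid handling from claiming the imsm/ddf containers, and rd.dm=1 lets dmraid assemble them instead.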
Created attachment 519198 [details]
full dmesg output

Description of problem:
I use onboard RAID 1 (Intel ICH9R), which used to work fine on previous versions of Fedora. Now on every bootup the RAID array gets auto-resynced. This is very annoying at best, and I believe it will shorten my drives' life. No hardware error is reported, and the background reconstruction finishes fine. The only way to avoid it is to put the workstation in sleep mode, avoiding a reboot.

Version-Release number of selected component (if applicable):
mdadm-3.2.2-6.fc15.x86_64
dracut-009-12.fc15.noarch

How reproducible:
Every time the system boots F15.

Steps to Reproduce:
1. Shut down.
2. Power up and select F15, or just reboot.

Actual results:
Resync of the RAID 1 array.

Expected results:
Array stays clean; no rebuild required.

Additional info (from dmesg):
[    3.462629] dracut: Autoassembling MD Raid
[    3.470681] md: md127 stopped.
[    3.472500] md: bind<sda>
[    3.472584] md: bind<sdb>
[    3.473141] dracut: mdadm: Container /dev/md127 has been assembled with 2 drives
[    3.477324] md: md126 stopped.
[    3.477512] md: bind<sdb>
[    3.477634] md: bind<sda>
[    3.479392] md: raid1 personality registered for level 1
[    3.479521] bio: create slab <bio-1> at 1
[    3.481195] md/raid1:md126: not clean -- starting background reconstruction
[    3.481198] md/raid1:md126: active with 2 out of 2 mirrors
[    3.481217] md126: detected capacity change from 0 to 1000202043392
[    3.483908] md126: p1 p2 p3