Bug 618062
Summary: | [LTC 6.0 FEAT] 201252: parted support (GPT, etc.) for 4K-byte-sector data disk | |
---|---|---|---
Product: | Red Hat Enterprise Linux 6 | Reporter: | Ryan Lerch <rlerch>
Component: | doc-Storage_Admin_Guide | Assignee: | Jacquelynn East <jeast>
Status: | CLOSED NOTABUG | QA Contact: | ecs-bugs
Severity: | high | Docs Contact: |
Priority: | high | |
Version: | 6.1 | CC: | borgan, bugproxy, charles_rose, coughlan, cward, ddomingo, ddumas, ejratl, esandeen, hdegoede, jeast, jfeeney, jjarvis, jmoyer, kzak, martinez, matt_domsch, meyering, mgahagan, msnitzer, peterm, qcai, rlandman, rlerch, rwheeler, snagar, wwlinuxengineering, yanwang
Target Milestone: | rc | Keywords: | Documentation, FutureFeature
Target Release: | 6.1 | |
Hardware: | All | |
OS: | All | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Enhancement
Doc Text: | | Story Points: | ---
Clone Of: | 614404 | Environment: |
Last Closed: | 2010-12-10 15:26:07 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 487227, 539553, 614404 | |
Bug Blocks: | 356741, 463632, 519834, 523313, 534151, 554529, 554559, 576381, 654869, 696265 | |
Description
Ryan Lerch
2010-07-25 22:54:21 UTC
Thanks for the tip :) hdegoede, see above comments please :) Thanks

---

I have tested with at least three different 4KB-logical-sector devices. As far as I know, there is no problem with upstream parted vs. >512B-sector devices. If anyone has evidence to the contrary, please describe it here.

---

(In reply to comment #8)
> I have tested with at least three different 4KB-logical sector devices.
> As far as I know, there is no problem with upstream parted vs. >512B-sector
> devices. If anyone has evidence to the contrary, please describe it here.

That is great, but can you help backport the required fixes to resolve bug #614404?

---

(In reply to comment #10)
> That is great, but can you help backport the required fixes to resolve bug#
> 614404 ?

Hi Mike,

In that bug, the comment "hanging in fsync()" does not sound like something that could indicate a bug in parted. Rather, it sounds like a bug in a driver, hardware, or the kernel. Has anyone else been able to reproduce such a failure?

---

(In reply to comment #10)
> That is great, but can you help backport the required fixes to resolve bug#
> 614404 ?

To answer your question: sure. If there's a parted bug behind bug #614404, I will help fix it. IMHO, this should be "needinfo: reproducer required", and if no reproducer is forthcoming in, say, a month or two, just close the bug.

Nothing I can see above suggests a bug in parted. As another example, the I/O errors mentioned in the description above are unlikely to be due to bugs in parted.

---

(In reply to comment #12)
> To answer your question: sure.
> If there's a parted bug behind bug #614404, I will help fix it.
> IMHO, this should be "needinfo: reproducer required", and if none
> is forthcoming in say a month or two, just close the bug.
>
> Nothing I can see above suggests a bug in parted.

Fine, but I'm missing why we're laboring over that BZ here.

---

(In reply to comment #13)
> Fine, but I'm missing why we're laboring over that BZ here.

Hi Mike,

This bug is a clone of bug 614404, and comments from 614404 have been the basis of all comments in this BZ. As far as I'm concerned, both this bug and 614404 should be closed soon (as "unreproducible") if we see no additional data. I've marked my calendar to close both on Dec 7th if nothing comes up in the meantime.

---

Since there is no evidence of a bug in parted, and hence no reason to deprecate it in the RHEL6 storage guide, I'm closing this. If anyone provides details, you're welcome to reopen.
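For readers following the 4K-sector discussion: the usual practical concern when partitioning 4K-logical or 512e drives is that partition start offsets stay aligned to the physical sector size. The sketch below is illustrative only (the sector values are hypothetical examples, not taken from this bug); it shows the basic alignment arithmetic, which parted's default 1 MiB alignment satisfies on both 512-byte and 4096-byte physical-sector drives.

```python
def is_aligned(start_sector: int, logical_size: int, physical_size: int) -> bool:
    """Return True if the partition's byte offset (start sector times the
    logical sector size) is a multiple of the physical sector size."""
    return (start_sector * logical_size) % physical_size == 0

# 1 MiB alignment (sector 2048 on a 512e disk) is a multiple of 4096,
# so it is aligned on 4K-physical-sector drives.
print(is_aligned(2048, 512, 4096))  # True

# The legacy DOS convention of starting at sector 63 is not 4K-aligned.
print(is_aligned(63, 512, 4096))    # False
```

On a running system, the kernel reports these sizes under `/sys/block/<disk>/queue/logical_block_size` and `physical_block_size`, which is one way to pick the right values for a given drive.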