Bug 1635060
| Summary: | kernel BUG at mm/page_alloc.c:2019! | | |
| --- | --- | --- | --- |
| Product: | Fedora | Reporter: | Ken Booth <redhat> |
| Component: | kernel | Assignee: | Kernel Maintainer List <kernel-maint> |
| Status: | CLOSED WORKSFORME | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 27 | CC: | airlied, bskeggs, chrism, didierg-divers, dimhen, doug.hs, edgar.hoch, ewk, extras-qa, fedora, fedora, gabrielbiga, gabriele.svelto, hdegoede, hongjiu.lu, ichavero, itamar, jarodwilson, jglisse, john.j5live, jonathan, josef, kernel-maint, labbott, linville, mchehab, mischmitz, mjg59, redhat, redhat, samuel-rhbugs, steved, thomas.tomdan |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1598989 | Environment: | |
| Last Closed: | 2018-10-22 15:11:18 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

Ken Booth 2018-10-01 23:00:48 UTC
Laura Abbott (comment #1):

I can't understand why this report was duped. Was this tested on the 4.18 series?

Ken Booth (comment #2):

(In reply to Laura Abbott from comment #1)
> I can't understand why this report was duped, was this tested on the 4.18 series?

I was told that there should be a separate bug for each version of RHEL; does this not apply to Fedora releases? (My bug was for F27, not F28.) Meanwhile, I tested with the new kernel version and cannot replicate the issue any more. (I also tested with kernel patches, but booted from the old kernel and cannot reproduce in that environment either, so I am not sure how conclusive the test is.)

Laura Abbott (comment #3):

It's not necessary to dupe on Fedora, since we keep similar kernels between versions. If this bug isn't reproduced, I think it's okay to close for now.
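For context, the message in the summary ("kernel BUG at mm/page_alloc.c:2019!") is what the kernel prints when a BUG_ON()-style assertion fails, here inside the page allocator. The sketch below is illustrative only: it shows the general BUG_ON() mechanism, and the sanity check in it is hypothetical, not the actual code at mm/page_alloc.c:2019 in the affected 4.x kernels.

```c
/*
 * Illustrative sketch of the BUG_ON() mechanism (not actual kernel
 * source).  When the asserted condition is true, the kernel prints
 * "kernel BUG at <file>:<line>!", dumps registers and a backtrace,
 * and kills the offending task.
 */
#include <linux/bug.h>
#include <linux/mm.h>

/* Hypothetical page-allocator sanity check, for illustration only. */
static void example_page_sanity_check(struct page *page)
{
	/* A page being handed back to the allocator should have no
	 * remaining users; if it does, the refcount was corrupted
	 * somewhere, and BUG_ON() halts the task rather than let the
	 * allocator hand out a still-referenced page. */
	BUG_ON(page_count(page) != 0);
}
```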