Bug 222875
Summary: Kernel panic - not syncing: SM: Record message above and reboot.

| Field | Value | Field | Value |
|---|---|---|---|
| Product: | [Retired] Red Hat Cluster Suite | Reporter: | Tomasz Jaszowski <tjaszowski> |
| Component: | cman | Assignee: | Christine Caulfield <ccaulfie> |
| Status: | CLOSED CANTFIX | QA Contact: | Cluster QE <mspqa-list> |
| Severity: | high | Priority: | medium |
| Version: | 4 | CC: | cluster-maint, teigland |
| Hardware: | i686 | OS: | Linux |
| Doc Type: | Bug Fix | Last Closed: | 2008-01-08 14:04:29 UTC |
Description

Tomasz Jaszowski, 2007-01-16 17:58:39 UTC

Created attachment 145711 [details]: kernel panic on iLO
It seems to be requesting node id -1 from cman, which is invalid. Dave: did cman give SM this node number?

Hi, any ideas how to avoid this kernel panic? (We would like to put this system into production, so an answer to this bug is becoming critical...) Thanks.

Could you describe exactly what you did to get this? And does it happen every time you do that?

It happened during configuration of the GFS partitions. We created them, added them to fstab, and mounted, unmounted, and rebooted a few times... and after one of those reboots we saw that message on one of the nodes. Unfortunately I can't provide exact steps to reproduce it. We didn't try to reproduce it, and it happened only once.

(In reply to comment #5)
> It happened during configuration of the GFS partitions. We created them, added
> them to fstab, and mounted, unmounted, and rebooted a few times... and after one
> of those reboots we saw that message on one of the nodes. Unfortunately I can't
> provide exact steps to reproduce it.
>
> We didn't try to reproduce it, and it happened only once.

Nothing more to add.

Hi, any ideas?

Not without any more information, no. Sorry.

As I'm not able to provide more detailed information, setting as CANTFIX. Thanks for the help.
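As an aside on the assignee's note about node id -1: cman node ids are small positive integers, so -1 reaching the service manager (SM) points at a bogus lookup. Below is a minimal sketch of how one might inspect cman's membership view on a RHEL 4 node, assuming the standard cman tooling of that era; the /proc/cluster paths are an assumption from memory, not something taken from this report:

```sh
# Ask cman for its view of the cluster; valid node ids are positive,
# so an id of -1 handed to SM indicates a bad lookup somewhere.
cman_tool status          # cluster name, quorum state, local node id
cman_tool nodes           # node id / name table as cman sees it

# Assumption: on RHEL 4 the kernel-side cman also exposed its state
# under /proc/cluster; exact paths may differ by release.
cat /proc/cluster/nodes
cat /proc/cluster/services   # SM service groups and their members
```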
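For readers unfamiliar with the setup cycle described in comment #5, here is a minimal sketch of a typical GFS (GFS1) configuration on RHEL 4; the device, cluster, and filesystem names are hypothetical placeholders, not taken from this report:

```sh
# Create a GFS filesystem with DLM locking; "mycluster" and "gfs01"
# are hypothetical names, and -j 2 allocates one journal per node.
gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 2 /dev/vg0/gfs01

# Mount it and add it to fstab, then repeat mount/umount/reboot
# cycles as the reporter describes.
mount -t gfs /dev/vg0/gfs01 /mnt/gfs01
echo "/dev/vg0/gfs01 /mnt/gfs01 gfs defaults 0 0" >> /etc/fstab
umount /mnt/gfs01
```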