Bug 197373
Summary: | clvmd seg faults at start up | |
---|---|---|---
Product: | [Retired] Red Hat Cluster Suite | Reporter: | Corey Marthaler <cmarthal> |
Component: | lvm2-cluster | Assignee: | Christine Caulfield <ccaulfie> |
Status: | CLOSED WORKSFORME | QA Contact: | Cluster QE <mspqa-list> |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 4 | CC: | agk, dwysocha, mbroz |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | All | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2006-11-01 23:20:48 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | |
Description
Corey Marthaler
2006-06-30 17:29:03 UTC
You should have guessed that filing a bug entitled "...for no apparent reason" would be returned for more information. Really, now. It looks like a (possibly DLM) kernel oops, so more kernel traceback, please.

```
[root@taft-01 ~]# clvmd
Segmentation fault (core dumped)

Jul  5 08:35:58 taft-01 kernel: clvmd[4397] general protection rip:34be804ffc rsp:7fbfffed90 error:0

[root@taft-01 ~]# strace clvmd
execve("/usr/sbin/clvmd", ["clvmd"], [/* 21 vars */]) = 0
uname({sys="Linux", node="taft-01", ...}) = 0
brk(0) = 0x561000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2a95556000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=110734, ...}) = 0
mmap(NULL, 110734, PROT_READ, MAP_PRIVATE, 3, 0) = 0x2a95557000
close(3) = 0
open("/lib64/tls/libpthread.so.0", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0 V0\2774"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=106203, ...}) = 0
mmap(0x34bf300000, 1131384, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x34bf300000
mprotect(0x34bf30f000, 1069944, PROT_NONE) = 0
mmap(0x34bf40f000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xf000) = 0x34bf40f000
mmap(0x34bf411000, 13176, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x34bf411000
close(3) = 0
open("/lib64/libdevmapper-event.so.1.02", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`!\220\277"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0555, st_size=27352, ...}) = 0
mmap(0x34bf900000, 1071376, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x34bf900000
mprotect(0x34bf906000, 1046800, PROT_NONE) = 0
mmap(0x34bfa05000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5000) = 0x34bfa05000
close(3) = 0
open("/lib64/libdevmapper.so.1.02", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p5\20\277"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0555, st_size=68680, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2a95573000
mmap(0x34bf100000, 1112432, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x34bf100000
mprotect(0x34bf10e000, 1055088, PROT_NONE) = 0
mmap(0x34bf20e000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xe000) = 0x34bf20e000
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV (core dumped) +++
```

Hit this again today on taft-01:

```
Jul 12 05:16:01 taft-01 kernel: clvmd[4361]: segfault at 0000000000000008 rip 00000034be80bb98 rsp 0000007fbffff360 error 4
```

Does x86_64 log SEGVs in the kernel log, or is that a kernel oops? If it's a kernel oops, is there more traceback? If it's just a userland SEGV, can you get a gdb traceback? According to the message, it dumped core.

ISTR that agk said (some time ago now) that he had fixed some things in LVM that might be causing odd SEGVs. Are these still happening?

So, are these still happening with the new LVM code?

Have not seen this bug in almost 4 months, closing. Will reopen if seen again.
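As an aside on reading these kernel log lines: the `error 4` in the second crash is the x86 page-fault error code, whose low bits indicate present/not-present (bit 0), write/read (bit 1), and user/kernel mode (bit 2). A minimal sketch of the decoding (the function name is illustrative, not from any tool in this report):

```python
# Decode the x86 page-fault error code printed in kernel segfault
# messages such as "segfault at 0000000000000008 ... error 4".
# Bit 0 = page present (protection fault), bit 1 = write access,
# bit 2 = fault occurred in user mode.

def decode_segfault_error(code: int) -> str:
    cause = ("protection fault on a present page" if code & 1
             else "access to a non-present page")
    access = "write" if code & 2 else "read"
    mode = "user mode" if code & 4 else "kernel mode"
    return f"{access} in {mode}: {cause}"

# The clvmd crash logged "error 4" at address 0x8: a user-mode read
# of an unmapped address, i.e. a NULL-pointer dereference plus a
# small struct-member offset.
print(decode_segfault_error(4))
# → read in user mode: access to a non-present page
```

This is consistent with the comments above: it is an ordinary userland SEGV (not a kernel oops), so a gdb backtrace from the dumped core is the useful next artifact.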