Bug 845715
| Summary: | glusterd crashes on volume set | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Pranith Kumar K <pkarampu> |
| Component: | glusterd | Assignee: | Amar Tumballi <amarts> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | urgent | | |
| Version: | mainline | CC: | amarts, gluster-bugs, jdarcy, vraman |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.4.0 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-07-24 17:32:11 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 895528 | | |
http://review.gluster.com/3779 fixes the issue and is now merged to master.
Description of problem: `gluster volume set` crashes glusterd. Here is the backtrace:

```
#0  0x00007f8b03bca8ef in xlator_mem_acct_init (xl=0x1e51390, num_types=99) at xlator.c:466
466             if (!xl->ctx->mem_acct_enable)
Missing separate debuginfos, use: debuginfo-install glibc-2.15-37.fc17.x86_64 keyutils-libs-1.5.5-2.fc17.x86_64 krb5-libs-1.10-5.fc17.x86_64 libcom_err-1.42-4.fc17.x86_64 libgcc-4.7.0-5.fc17.x86_64 libselinux-2.1.10-3.fc17.x86_64 libxml2-2.7.8-7.fc17.x86_64 openssl-1.0.0j-1.fc17.x86_64 zlib-1.2.5-6.fc17.x86_64
(gdb) p xl->name
$1 = 0x1e51340 "r2-posix"
(gdb) p xl->ctx
$2 = (glusterfs_ctx_t *) 0x0
(gdb) bt
#0  0x00007f8b03bca8ef in xlator_mem_acct_init (xl=0x1e51390, num_types=99) at xlator.c:466
#1  0x00007f8afefc7a82 in mem_acct_init (this=0x1e51390) at posix.c:3918
#2  0x00007f8b03c18fb2 in xlator_validate_rec (xlator=0x1e51390, op_errstr=0x7fff9bbafb68) at options.c:904
#3  0x00007f8b03c18ebb in xlator_validate_rec (xlator=0x1e4b7a0, op_errstr=0x7fff9bbafb68) at options.c:886
#4  0x00007f8b03c18ebb in xlator_validate_rec (xlator=0x1e4c140, op_errstr=0x7fff9bbafb68) at options.c:886
#5  0x00007f8b03c18ebb in xlator_validate_rec (xlator=0x1e4cae0, op_errstr=0x7fff9bbafb68) at options.c:886
#6  0x00007f8b03c18ebb in xlator_validate_rec (xlator=0x1e4d840, op_errstr=0x7fff9bbafb68) at options.c:886
#7  0x00007f8b03c18ebb in xlator_validate_rec (xlator=0x1e4e1e0, op_errstr=0x7fff9bbafb68) at options.c:886
#8  0x00007f8b03c18ebb in xlator_validate_rec (xlator=0x1e4eb80, op_errstr=0x7fff9bbafb68) at options.c:886
#9  0x00007f8b03c18ebb in xlator_validate_rec (xlator=0x1e4f520, op_errstr=0x7fff9bbafb68) at options.c:886
#10 0x00007f8b03c190cf in graph_reconf_validateopt (graph=0x7fff9bbaf068, op_errstr=0x7fff9bbafb68) at options.c:933
#11 0x00007f8b009ea39b in validate_brickopts (volinfo=0x1c873f0, brickinfo_path=0x1c926f0 "/gfs/r2_0",
```

I did a volume set for client-log-level with the patch ed4b76ba9c545f577287c0e70ae3cc853a0d5f3f applied, and it crashed.
Reverting the patch did not produce any crash, so it is most likely the cause of the regression.

How reproducible: always