Bug 1282802 - rebalance/disrep_tier crash
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: tier
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Assigned To: Bug Updates Notification Mailing List
: ZStream
Depends On:
Reported: 2015-11-17 08:12 EST by RajeshReddy
Modified: 2016-09-17 11:35 EDT
4 users

Doc Type: Bug Fix
Last Closed: 2015-11-25 02:15:58 EST
Type: Bug

Attachments: None
Description RajeshReddy 2015-11-17 08:12:52 EST
Description of problem:
rebalance/disrep_tier crash

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Created distributed replica volume and attached 4 hot tier bricks
2. Mounted it on the client using FUSE; while doing file operations I observed a crash (I do not remember the exact steps)
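For reference, the setup in the steps above would have looked roughly like the following. This is a sketch only, using the 3.7-series CLI: the host names (server1/server2) and the mount point are placeholders, not the reporter's exact commands; the brick paths are taken from the volume info below.

```shell
# Sketch only: server1/server2 and /mnt/disrep_tier are placeholders.
# Create a 2x2 distributed-replicate volume (becomes the cold tier)
gluster volume create disrep_tier replica 2 \
    server1:/rhs/brick7/disrep_teri server2:/rhs/brick7/disrep_teri \
    server1:/rhs/brick6/disrep_teri server2:/rhs/brick6/disrep_teri
gluster volume start disrep_tier

# Attach 4 hot-tier bricks as a 2x2 distributed-replicate hot tier
# (3.7-series syntax; later releases use "gluster volume tier ... attach")
gluster volume attach-tier disrep_tier replica 2 \
    server1:/rhs/brick5/tier server2:/rhs/brick5/tier \
    server1:/rhs/brick6/tier server2:/rhs/brick6/tier

# FUSE-mount on the client, then run file operations against the mount
mount -t glusterfs server1:/disrep_tier /mnt/disrep_tier
```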

Actual results:
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
Reading symbols from /usr/sbin/glusterfsd...Reading symbols from /usr/lib/debug/usr/sbin/glusterfsd.debug...done.

warning: core file may not match specified executable file.
[New LWP 19153]
[New LWP 8929]
[New LWP 8911]
[New LWP 8905]
[New LWP 8909]
[New LWP 8924]
[New LWP 8906]
[New LWP 8907]
[New LWP 8926]
[New LWP 8923]
[New LWP 8934]
[New LWP 8928]
[New LWP 8932]
[New LWP 8908]
[New LWP 8941]
[New LWP 8936]
[New LWP 8937]
[New LWP 8939]
[New LWP 8922]
[New LWP 8930]
[New LWP 8931]
[New LWP 8927]
[New LWP 8935]
[New LWP 8940]
[New LWP 8938]
[New LWP 19154]
[New LWP 8933]
[New LWP 8925]
[New LWP 8904]

warning: .dynamic section for "/usr/lib64/glusterfs/3.7.5/rpc-transport/socket.so" is not at the expected address (wrong library or version mismatch?)

warning: .dynamic section for "/usr/lib64/glusterfs/3.7.5/xlator/cluster/distribute.so" is not at the expected address (wrong library or version mismatch?)

warning: .dynamic section for "/usr/lib64/glusterfs/3.7.5/xlator/cluster/tier.so" is not at the expected address (wrong library or version mismatch?)

warning: .dynamic section for "/usr/lib64/libgfdb.so.0" is not at the expected address (wrong library or version mismatch?)
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/sbin/glusterfs -s localhost --volfile-id rebalance/disrep_tier --xlator-op'.
Program terminated with signal 11, Segmentation fault.
#0  __strlen_sse2_pminub () at ../sysdeps/x86_64/multiarch/strlen-sse2-pminub.S:38
38		movdqu	(%rdi), %xmm1
(gdb) bt
#0  __strlen_sse2_pminub () at ../sysdeps/x86_64/multiarch/strlen-sse2-pminub.S:38
#1  0x00007f265ae59481 in gf_sql_delete_unwind (sql_conn=0x7f2600a369b8, gfdb_db_record=<optimized out>)
    at gfdb_sqlite3_helper.c:1019
#2  0x00007f2667d28e2e in _IO_new_file_fopen (fp=0x7f26009a7250, fp@entry=0x7f260f7fde00, 
    filename=filename@entry=0x7f26540225d0 "/var/run/gluster/disrep_tier-tier-dht/demotequeryfile-disrep_tier-tier-dht", mode=<optimized out>, mode@entry=0x7f264401e720 " \270", is32not64=is32not64@entry=1524977057) at fileops.c:359
#3  0x00007f2667d1d4b4 in __fopen_internal (
    filename=0x7f26540225d0 "/var/run/gluster/disrep_tier-tier-dht/demotequeryfile-disrep_tier-tier-dht", 
    mode=0x7f264401e720 " \270", is32=1524977057) at iofopen.c:90
#4  0x00007f265b484c65 in cluster_markerxtime_cbk (frame=0x7f260f7fde90, cookie=<optimized out>, 
    this=0x7f264401e720, op_ret=0, op_errno=10715024, dict=0x0, xdata=0x56499930)
    at ../../../../xlators/lib/src/libxlator.c:226
#5  0x00000000000e6303 in ?? ()
#6  0x0000000056499930 in ?? ()
#7  0x00000000000e6303 in ?? ()
#8  0x00007f260f7fde00 in ?? ()
#9  0x00007f2600000000 in ?? ()
#10 0x00007f260f7fde90 in ?? ()
#11 0xa9eca3b7c8046d00 in ?? ()
#12 0x00007f26000008c0 in ?? ()
#13 0x00007f264b7fdc80 in ?? ()
#14 0x00007f260f7fde90 in ?? ()
#15 0x0000000000000000 in ?? ()

Expected results:

Additional info:
[root@rhs-client18 core]# gluster vol info disrep_tier
Volume Name: disrep_tier
Type: Tier
Volume ID: ea4bd2c2-efd3-4d25-bbc1-8f6d9c75dafc
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick5/tier
Brick2: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick5/tier
Brick3: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick6/tier
Brick4: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick6/tier
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick7/disrep_teri
Brick6: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick7/disrep_teri
Brick7: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick6/disrep_teri
Brick8: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick6/disrep_teri
Options Reconfigured:
cluster.tier-demote-frequency: 600
performance.readdir-ahead: on
features.ctr-enabled: on
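For reference, the non-default options listed under "Options Reconfigured" above would have been applied with `gluster volume set`; a sketch, not the reporter's exact commands:

```shell
# Sketch: how the reconfigured options above are set (values from the vol info)
gluster volume set disrep_tier cluster.tier-demote-frequency 600
gluster volume set disrep_tier performance.readdir-ahead on
gluster volume set disrep_tier features.ctr-enabled on
```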
Comment 2 RajeshReddy 2015-11-17 08:16:59 EST
sosreport and core are available @/home/repo/sosreports/bug.1282802 on rhsqe-repo.lab.eng.blr.redhat.com
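Given the ".dynamic section ... version mismatch" warnings in the backtrace above, the core should be opened against the exact glusterfs build that produced it, with matching debuginfo installed. A sketch, assuming RHEL-style tooling; the core path is a placeholder:

```shell
# Install debuginfo matching the running build; a mismatched build is what
# produces the "wrong library or version mismatch" warnings seen above.
debuginfo-install -y glusterfs-server-3.7.5-6
gdb -batch -ex 'bt full' /usr/sbin/glusterfsd /path/to/core
```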
Comment 3 RajeshReddy 2015-11-23 05:17:59 EST
Earlier I was running glusterfs-server-3.7.5-5; to upgrade to glusterfs-server-3.7.5-6, I uninstalled the old version and installed glusterfs-server-3.7.5-6.
Comment 4 RajeshReddy 2015-11-24 06:28:47 EST
I uninstalled the old version without disturbing the existing volumes and installed the new version, glusterfs-server-3.7.5-6. After re-installation I was able to see all the previous volumes, and while doing I/O on those volumes I observed the crash.
Comment 5 Rejy M Cyriac 2015-11-25 02:15:58 EST
Upgrades between development builds are not valid, and so bugs raised against such environments are not acceptable. Hence this bug is being closed as NOTABUG.

If the issue is reproducible on a system with a clean install of the latest development build, or on a system upgraded from the last released version to the latest development build, this bug may be re-opened.
