Description of problem:
=======================
Snapd crashed during creation and deletion of a snapshot.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-server-3.7.5-18

How reproducible:

Steps to Reproduce:
===================
1. Create a 16x2 volume, attach a 6x2 hot tier, and enable USS.
2. Mount the volume on a client using FUSE.
3. While I/O is going on, create a snapshot, and then after a couple of hours delete the snapshot.

Actual results:

Expected results:

Additional info:
================
[root@dhcp35-51 core]# gluster vol info replica_tier

Volume Name: replica_tier
Type: Tier
Volume ID: e68ac8d7-ab73-4077-82ca-88f328277b09
Status: Started
Number of Bricks: 44
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 6 x 2 = 12
Brick1: 10.70.36.42:/rhs/brick3/distrep_hot2
Brick2: 10.70.36.43:/rhs/brick3/distrep_hot2
Brick3: 10.70.35.122:/rhs/brick3/distrep_hot2
Brick4: 10.70.35.138:/rhs/brick3/distrep_hot2
Brick5: 10.70.35.51:/rhs/brick3/distrep_hot2
Brick6: 10.70.35.35:/rhs/brick3/distrep_hot2
Brick7: 10.70.35.132:/rhs/brick3/distrep_hot2
Brick8: 10.70.35.98:/rhs/brick3/distrep_hot2
Brick9: 10.70.35.77:/rhs/brick3/distrep_hot2
Brick10: 10.70.35.191:/rhs/brick3/distrep_hot2
Brick11: 10.70.35.202:/rhs/brick3/distrep_hot2
Brick12: 10.70.35.49:/rhs/brick3/distrep_hot2
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 16 x 2 = 32
Brick13: dhcp35-153.lab.eng.blr.redhat.com:/rhs/brick6/distrep
Brick14: 10.70.35.38:/rhs/brick6/distrep
Brick15: 10.70.35.196:/rhs/brick6/distrep
Brick16: 10.70.36.41:/rhs/brick6/distrep
Brick17: 10.70.35.49:/rhs/brick6/distrep
Brick18: 10.70.35.202:/rhs/brick6/distrep
Brick19: 10.70.35.191:/rhs/brick6/distrep
Brick20: 10.70.35.77:/rhs/brick6/distrep
Brick21: 10.70.35.98:/rhs/brick6/distrep
Brick22: 10.70.35.132:/rhs/brick6/distrep
Brick23: 10.70.35.35:/rhs/brick6/distrep
Brick24: 10.70.35.51:/rhs/brick6/distrep
Brick25: 10.70.35.138:/rhs/brick6/distrep
Brick26: 10.70.35.122:/rhs/brick6/distrep
Brick27: 10.70.36.43:/rhs/brick6/distrep
Brick28: 10.70.36.42:/rhs/brick6/distrep
Brick29: dhcp35-153.lab.eng.blr.redhat.com:/rhs/brick5/distrep
Brick30: 10.70.35.38:/rhs/brick5/distrep
Brick31: 10.70.35.196:/rhs/brick5/distrep
Brick32: 10.70.36.41:/rhs/brick5/distrep
Brick33: 10.70.35.49:/rhs/brick5/distrep
Brick34: 10.70.35.202:/rhs/brick5/distrep
Brick35: 10.70.35.191:/rhs/brick5/distrep
Brick36: 10.70.35.77:/rhs/brick5/distrep
Brick37: 10.70.35.98:/rhs/brick5/distrep
Brick38: 10.70.35.132:/rhs/brick5/distrep
Brick39: 10.70.35.35:/rhs/brick5/distrep
Brick40: 10.70.35.51:/rhs/brick5/distrep
Brick41: 10.70.35.138:/rhs/brick5/distrep
Brick42: 10.70.35.122:/rhs/brick5/distrep
Brick43: 10.70.36.43:/rhs/brick5/distrep
Brick44: 10.70.36.42:/rhs/brick5/distrep
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
nfs.outstanding-rpc-limit: 16
features.ctr-enabled: on
cluster.tier-mode: cache
features.uss: on
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on
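The reproduction steps can be sketched with the standard gluster CLI as below. This is a minimal sketch, not the exact commands used on this setup: the hostnames (`server1`..`serverN`), brick paths, snapshot name `snap1`, and mount point are placeholders, and the brick lists are abbreviated rather than spelling out all 32 cold and 12 hot bricks.

```shell
# Create and start the 16x2 (distributed-replicate) cold volume.
# ... stands for the remaining bricks (32 total for 16x2).
gluster volume create replica_tier replica 2 \
    server1:/rhs/brick6/distrep server2:/rhs/brick6/distrep ...
gluster volume start replica_tier

# Attach a 6x2 hot tier (12 bricks total).
gluster volume attach-tier replica_tier replica 2 \
    server1:/rhs/brick3/distrep_hot2 server2:/rhs/brick3/distrep_hot2 ...

# Enable USS so snapshots are visible via the .snaps directory.
gluster volume set replica_tier features.uss enable

# Mount on a client over FUSE and start I/O there.
mount -t glusterfs server1:/replica_tier /mnt/replica_tier

# While I/O is running, create a snapshot; delete it a couple of hours later.
gluster snapshot create snap1 replica_tier
gluster snapshot delete snap1
```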
The core and sosreport are available at /home/repo/sosreports/bug.1304972 on rhsqe-repo.lab.eng.blr.redhat.com.
Observed one more crash on node 10.70.36.43; its core file is also available at /home/repo/sosreports/bug.1304972 on rhsqe-repo.lab.eng.blr.redhat.com.
Bala, if you have a tiering setup like the one mentioned above, can you please check whether creation and deletion of snapshots show any issues on the latest releases? I don't want you to build the setup just to reproduce this bug; if you can cover the scenario as part of your existing tiering setup, that's good. Otherwise, let me know.
Based on comment 8, closing the bug.