Bug 1395699
| Summary: | getting Input/output error on doing deletes simultaneously from two clients | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
| Component: | disperse | Assignee: | Ashish Pandey <aspandey> |
| Status: | CLOSED DUPLICATE | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.2 | CC: | nchilaka, pkarampu, rhs-bugs, storage-qa-internal |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-11-17 08:47:53 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Nag Pavan Chilakam
2016-11-16 13:21:31 UTC
Note that one brick was down:

```
[root@dhcp35-37 ~]# gluster v status erasure
Status of volume: erasure
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.37:/rhs/brick1/erasure       49152     0          Y       31636
Brick 10.70.35.116:/rhs/brick1/erasure      49152     0          Y       32577
Brick 10.70.35.239:/rhs/brick1/erasure      49152     0          Y       13279
Brick 10.70.35.135:/rhs/brick1/erasure      N/A       N/A        N       N/A
Brick 10.70.35.8:/rhs/brick1/erasure        49152     0          Y       29016
Brick 10.70.35.196:/rhs/brick1/erasure      49152     0          Y       30329
Brick 10.70.35.37:/rhs/brick2/erasure       49153     0          Y       26096
Brick 10.70.35.116:/rhs/brick2/erasure      49153     0          Y       32596
Brick 10.70.35.239:/rhs/brick2/erasure      49153     0          Y       13298
Brick 10.70.35.135:/rhs/brick2/erasure      49153     0          Y       16724
Brick 10.70.35.8:/rhs/brick2/erasure        49153     0          Y       29024
Brick 10.70.35.196:/rhs/brick2/erasure      49153     0          Y       30321
Snapshot Daemon on localhost                49154     0          Y       26281
Self-heal Daemon on localhost               N/A       N/A        Y       12130
Quota Daemon on localhost                   N/A       N/A        Y       31664
Snapshot Daemon on 10.70.35.196             49154     0          Y       30336
Self-heal Daemon on 10.70.35.196            N/A       N/A        Y       2749
Quota Daemon on 10.70.35.196                N/A       N/A        Y       22663
Snapshot Daemon on 10.70.35.135             49154     0          Y       16837
Self-heal Daemon on 10.70.35.135            N/A       N/A        Y       1328
Quota Daemon on 10.70.35.135                N/A       N/A        Y       21241
Snapshot Daemon on 10.70.35.116             49154     0          Y       32710
Self-heal Daemon on 10.70.35.116            N/A       N/A        Y       17057
Quota Daemon on 10.70.35.116                N/A       N/A        Y       4667
Snapshot Daemon on 10.70.35.8               49154     0          Y       29030
Self-heal Daemon on 10.70.35.8              N/A       N/A        Y       1217
Quota Daemon on 10.70.35.8                  N/A       N/A        Y       21068
Snapshot Daemon on 10.70.35.239             49154     0          Y       13426
Self-heal Daemon on 10.70.35.239            N/A       N/A        Y       30091
Quota Daemon on 10.70.35.239                N/A       N/A        Y       17733

Task Status of Volume erasure
------------------------------------------------------------------------------
There are no active volume tasks

[root@dhcp35-37 ~]# gluster v info erasure

Volume Name: erasure
Type: Distributed-Disperse
Volume ID: 95cd2d01-3452-46c3-9edc-946470738052
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.35.37:/rhs/brick1/erasure
Brick2: 10.70.35.116:/rhs/brick1/erasure
Brick3: 10.70.35.239:/rhs/brick1/erasure
Brick4: 10.70.35.135:/rhs/brick1/erasure
Brick5: 10.70.35.8:/rhs/brick1/erasure
Brick6: 10.70.35.196:/rhs/brick1/erasure
Brick7: 10.70.35.37:/rhs/brick2/erasure
Brick8: 10.70.35.116:/rhs/brick2/erasure
Brick9: 10.70.35.239:/rhs/brick2/erasure
Brick10: 10.70.35.135:/rhs/brick2/erasure
Brick11: 10.70.35.8:/rhs/brick2/erasure
Brick12: 10.70.35.196:/rhs/brick2/erasure
Options Reconfigured:
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 300
disperse.shd-max-threads: 4
features.uss: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
[root@dhcp35-37 ~]#
```

Nag, this is exactly the same bug that has already been raised by Ambarish: https://bugzilla.redhat.com/show_bug.cgi?id=1395161. I would like to mark this as a duplicate and close it. Your thoughts?

Although it is a duplicate, if you can collect a sosreport, that will be helpful.

*** This bug has been marked as a duplicate of bug 1395161 ***
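For reference, below is a minimal shell sketch of the scenario in the summary: simultaneous deletes from two clients against the disperse volume while one brick is down. The client hostnames, mount points, directory names, and log path are illustrative assumptions, not details taken from this report.

```sh
# Hypothetical reproduction sketch (mount points and paths are assumptions).

# On each of two clients, FUSE-mount the volume, e.g.:
#   client1: mount -t glusterfs 10.70.35.37:/erasure /mnt/erasure
#   client2: mount -t glusterfs 10.70.35.116:/erasure /mnt/erasure

# From one client, populate a directory tree to delete:
mkdir -p /mnt/erasure/testdir
for i in $(seq 1 1000); do
    dd if=/dev/zero of=/mnt/erasure/testdir/file.$i bs=1M count=1 2>/dev/null
done

# Bring one brick offline (e.g. kill its brick process on the brick node),
# then start deletes of the same tree from both clients at roughly the same time:
#   client1: rm -rf /mnt/erasure/testdir
#   client2: rm -rf /mnt/erasure/testdir

# Watch for "Input/output error" in the rm output and in the client mount log,
# e.g. /var/log/glusterfs/mnt-erasure.log
```

If reproducing, `gluster volume heal erasure info` on a server node shows pending heals, and running `sosreport` on the affected nodes (as requested above) gathers the gluster logs for triage.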