Bug 1229267
| Summary: | Snapshots failing on tiered volumes (with EC) | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
| Component: | tier | Assignee: | hari gowtham <hgowtham> |
| Status: | CLOSED WONTFIX | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.1 | CC: | annair, rgowdapp, rhs-bugs, rkavunga, sankarshan, sasundar |
| Target Milestone: | --- | Keywords: | Triaged, ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | tier-interops | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1218589 | Environment: | |
| Last Closed: | 2018-11-08 18:43:09 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1218589 | | |
| Bug Blocks: | | | |
Description
Nag Pavan Chilakam
2015-06-08 10:44:18 UTC
I tried to reproduce this issue with the latest master code. I am able to create snapshots during ongoing I/O on the mount.
My test scenario:
volume >>
Volume Name: patchy
Type: Tier
Volume ID: 358a6e6a-c0a2-4e3d-8260-2b83ac28c4b5
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.43.110:/d/backends/3/patchy_snap_mnt
Brick2: 10.70.43.100:/d/backends/3/patchy_snap_mnt
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (3 + 1) = 4
Brick3: 10.70.43.100:/d/backends/1/patchy_snap_mnt
Brick4: 10.70.43.110:/d/backends/1/patchy_snap_mnt
Brick5: 10.70.43.100:/d/backends/2/patchy_snap_mnt
Brick6: 10.70.43.110:/d/backends/2/patchy_snap_mnt
Options Reconfigured:
cluster.tier-promote-frequency: 10
cluster.tier-demote-frequency: 10
cluster.write-freq-threshold: 0
cluster.read-freq-threshold: 0
performance.io-cache: off
performance.quick-read: off
features.ctr-enabled: on
performance.readdir-ahead: on
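
For anyone recreating this layout, a minimal sketch of the CLI steps follows. The brick paths and option values are taken from the volume info above; the attach-tier syntax is the glusterfs-3.7 / rhgs-3.1-era form and may differ on other releases, and "force" is only assumed to be needed because the test bricks live on the root filesystem.

# cold tier: 1 x (3 + 1) disperse volume
gluster volume create patchy disperse 4 redundancy 1 \
    10.70.43.100:/d/backends/1/patchy_snap_mnt 10.70.43.110:/d/backends/1/patchy_snap_mnt \
    10.70.43.100:/d/backends/2/patchy_snap_mnt 10.70.43.110:/d/backends/2/patchy_snap_mnt force
gluster volume start patchy

# hot tier: plain distribute over two bricks
gluster volume attach-tier patchy \
    10.70.43.110:/d/backends/3/patchy_snap_mnt 10.70.43.100:/d/backends/3/patchy_snap_mnt

# apply the reconfigured options listed above
gluster volume set patchy cluster.tier-promote-frequency 10
gluster volume set patchy cluster.tier-demote-frequency 10
gluster volume set patchy cluster.write-freq-threshold 0
gluster volume set patchy cluster.read-freq-threshold 0
gluster volume set patchy performance.io-cache off
gluster volume set patchy performance.quick-read off
gluster volume set patchy features.ctr-enabled on
gluster volume set patchy performance.readdir-ahead on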
on mount point>>
tar -xvf /root/linux-4.1.2.tar.xz
on server (during I/O)>>
for i in {1..100} ; do gluster snapshot create snap$i patchy no-timestamp;done;
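
To confirm the run and clean up afterwards, something along these lines should work (a hedged sketch using the standard snapshot CLI; the snapshot names follow the loop above):

gluster snapshot list patchy            # should list snap1 .. snap100
gluster snapshot delete volume patchy   # optional cleanup of all snapshots of the volume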
Repeated the same test on an NFS mount as well; both the I/O and the snapshot creation were successful.

(In reply to Mohammed Rafi KC from comment #3)

Rafi,
Moving the bug to ON_QA would be valid only if there was an issue that was fixed with a patch, and the patch was available in a particular build (as mentioned in FIXED-IN-VERSION).
If this issue is not reproducible, this bug should be closed as CLOSED - WORKSFORME.
If there really was an issue and it was fixed, then provide the patch URL, and once the patch is available in a build, update FIXED-IN-VERSION and move this bug to ON_QA.
I am moving this bug to ASSIGNED, as there were no new builds available.

Removing the FailedQA tag as this case did not really fail.

This seems more like a glusterd/rpc issue than a tiering one. Perhaps we should change the component to rpc/glusterd?
<snip>
[2015-05-05 15:03:21.723499] E [glusterd-utils.c:409:glusterd_submit_reply] 0-: Reply submission failed
[2015-05-05 15:07:13.759341] W [glusterd-mgmt.c:190:gd_mgmt_v3_brick_op_fn] 0-management: snapshot brickop failed
[2015-05-05 15:07:13.759356] E [glusterd-mgmt.c:943:glusterd_mgmt_v3_brick_op] 0-management: Brick ops failed for operation Snapshot on local node
[2015-05-05 15:07:13.759362] E [glusterd-mgmt.c:2028:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Brick Ops Failed
[2015-05-05 15:08:19.961699] I [socket.c:3432:socket_submit_reply] 0-socket.management: not connected (priv->connected = -1)
[2015-05-05 15:08:19.961717] E [rpcsvc.c:1299:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0x1, Program: GlusterD svc cli, ProgVers: 2, Proc: 39) to rpc-transport (socket.management)
[2015-05-05 15:08:19.961727] E [glusterd-utils.c:409:glusterd_submit_reply] 0-: Reply submission failed
</snip>

As tier is not being actively developed, I am closing this bug. Feel free to reopen it if necessary.

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.