Bug 1333226 - Detach tier failed (fuse mount)
Summary: Detach tier failed (fuse mount)
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.9
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Dan Lambright
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-05 02:02 UTC by Paul Cuzner
Modified: 2017-03-08 11:03 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-08 11:03:38 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
rebalance logs from each node in the trusted pool (5.30 MB, application/x-bzip)
2016-05-05 02:02 UTC, Paul Cuzner

Description Paul Cuzner 2016-05-05 02:02:45 UTC
Created attachment 1154070 [details]
rebalance logs from each node in the trusted pool

Description of problem:
After running tiering with sharding (512 MB shards), I attempted to detach the tier to return the volume to its original state. The detach failed. Logs from the nodes have been attached.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a sharded volume and attach a tier
2. Run a workload that forces promotion of shards into the hot tier
3. Attempt to detach the tier (see the example commands after this list)
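For context, the sequence was roughly the following. This is a sketch only; the volume name, brick paths, and replica counts are illustrative and not taken from the reported cluster.

    # illustrative volume and brick names
    gluster volume create tank replica 3 server{1..3}:/bricks/cold/brick
    gluster volume set tank features.shard on
    gluster volume set tank features.shard-block-size 512MB
    gluster volume start tank

    # attach a hot tier (replica 2 across two hypothetical SSD bricks)
    gluster volume tier tank attach replica 2 server{1..2}:/bricks/hot/brick

    # ... run the workload so shards are promoted to the hot tier ...

    # normal detach sequence
    gluster volume tier tank detach start
    gluster volume tier tank detach status
    gluster volume tier tank detach commit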

Actual results:
The detach process failed; I ended up forcing the detach to complete (see the command sketch below).
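For reference, the forced detach mentioned above corresponds to the force subcommand of the 3.7 tier CLI, shown here with the same illustrative volume name as in the steps (the older detach-tier form of the command should behave the same way on 3.7):

    gluster volume tier tank detach force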

Expected results:
detach should work for a tier/sharded volume

Additional info:
This test was performed on a RHEV/RHGS cluster using the latest downstream build 3.7.9.

Comment 1 Kaushal 2017-03-08 11:03:38 UTC
This bug is getting closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

