Bug 1369697 - rm -rf * failed with error "Directory not empty" while rebalance+lookup are in-progress
Summary: rm -rf * failed with error "Directory not empty" while rebalance+lookup are in-progress
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: distribute
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: high
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact: Prasad Desala
URL:
Whiteboard: dht-rm-rf
Depends On:
Blocks:
 
Reported: 2016-08-24 07:38 UTC by Prasad Desala
Modified: 2018-04-16 18:16 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-16 18:16:14 UTC
Target Upstream Version:



Description Prasad Desala 2016-08-24 07:38:51 UTC
Description of problem:
=======================
With an infinite loop of lookups running from the mount point, multiple bricks were added to a distributed-replicate volume and rebalance was started with the force option.
While rebalance and the lookups were in progress, files and directories were removed from the mount point using "rm -rf *".
Removal failed for a few files and directories with the error "Directory not empty".
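For context, the errno behind this message can be demonstrated outside Gluster: rmdir(2) returns ENOTEMPTY when a new entry appears after rm's readdir/unlink pass but before its final rmdir (in Gluster's case, plausibly an entry recreated on the new bricks by rebalance). A minimal, non-Gluster sketch of that race:

```shell
# Illustration only: simulate the race between rm -rf's unlink pass
# and a concurrent writer (standing in for rebalance).
d=$(mktemp -d)
touch "$d/seen"

rm "$d/seen"            # rm -rf's unlink pass removes what it saw
touch "$d/unseen"       # a racing create lands after that pass

# The final rmdir now fails with "Directory not empty" (ENOTEMPTY).
if rmdir "$d" 2>/dev/null; then
    echo "rmdir succeeded"
else
    echo "rmdir failed: Directory not empty"
fi
rm -rf "$d"             # clean up the temp directory
```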

Version-Release number of selected component (if applicable):
=============================================================
3.7.9-10.el7rhgs.x86_64

How reproducible:
=================
always

Steps to Reproduce:
===================
1. Create and mount a distributed-replicate volume.
2. Create files and directories on it; in this case, a Linux kernel tarball was untarred.
3. From the mount point, keep sending continuous lookups.
4. Add new bricks to the volume.
5. Start rebalance with the force option.
6. While rebalance and lookups are in progress, start deleting data from the mount point using "rm -rf *".
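The steps above can be sketched as a script. The volume name, hostnames, brick paths, mount point, and tarball name are assumptions for illustration; it requires a live Gluster cluster, so it exits quietly where gluster is absent.

```shell
#!/bin/bash
# Reproduction sketch under assumed names (distrep, server1/server2,
# /bricks/*, /mnt/distrep). Needs a running Gluster cluster.
command -v gluster >/dev/null 2>&1 || { echo "gluster not installed; skipping"; exit 0; }

VOL=distrep
MNT=/mnt/$VOL

# 1-2. Create, start, and mount a distributed-replicate volume,
#      then populate it with a kernel source tree.
gluster volume create $VOL replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server1:/bricks/b2 server2:/bricks/b2
gluster volume start $VOL
mkdir -p $MNT && mount -t glusterfs server1:/$VOL $MNT
tar -xf linux.tar.xz -C $MNT

# 3. Continuous lookups from the mount point, in the background.
while true; do ls -lR $MNT >/dev/null 2>&1; done &
LOOKUP_PID=$!

# 4-5. Add new bricks and start rebalance with the force option.
gluster volume add-brick $VOL server1:/bricks/b3 server2:/bricks/b3
gluster volume rebalance $VOL start force

# 6. While rebalance and lookups run, delete everything.
cd $MNT && rm -rf *

kill $LOOKUP_PID
```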

Actual results:
===============
File and directory deletion fails with the error "Directory not empty".

Expected results:
================
rm -rf should not fail with this error when all sub-volumes are up.

