Description of problem:
1) create volume
2) create brick
3) start volume
4) stop volume
5) remove brick

Check node:/brick_dir/:

[root@filer01 ~]# ls -la /export/hateya2/
total 4
drwxr-xr-x   3 vdsm kvm    23 Sep 10 02:42 .
drwxr-xr-x. 11 root root 4096 Sep 11 02:47 ..
drw-------   8 root root   60 Sep 10 02:40 .glusterfs

After remove-brick, the brick's backend export directory is left behind on the node, with the '.glusterfs' directory still inside it.
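For context, a minimal reproduction sketch from the gluster CLI (the original report went through the engine UI; the volume name 'vol1' and the second brick path are placeholders, and depending on the glusterfs version remove-brick may prompt for confirmation or require a start/commit or force flow):

# Illustrative only: placeholder volume name and brick paths.
mkdir -p /export/hateya2 /export/hateya3                    # create brick directories on filer01
gluster volume create vol1 filer01:/export/hateya2 filer01:/export/hateya3
gluster volume start vol1                                   # brick start creates /export/hateya2/.glusterfs
gluster volume stop vol1
gluster volume remove-brick vol1 filer01:/export/hateya2    # may ask for confirmation, version-dependent
ls -la /export/hateya2/                                     # .glusterfs is still present on the node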
Created attachment 611695: engine and gluster logs.
This is glusterfs behavior, hence changing the product to RHS. AFAIK, this is intended behavior in glusterfs.
As of today, yes, this is known behavior, and not removing the brick directory from the backend is intentional. One has to run 'rm -rf /brick-dir' on the server machine to do a proper cleanup.
Proposing this for the known issues section. We think having remove-brick clean up the volume's export directory itself could cause problems (e.g., unintended data loss)...
We plan to support this in a future release with a 'force' key... for now, we recommend covering it in the known-issues section of the docs.
Marked for known issues (and the workaround exists).
----------
Cause: volume start creates a '.glusterfs' directory inside the backend export directory. remove-brick does not touch the backend export; it only changes the volume configuration to remove the brick.
Consequence: stale data is left in the backend export directory.
Workaround (if any): run 'rm -rf /export-dir' on the server node to clean up.
------------
Please re-open if this is not sufficient.
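Applied to the directory from the original report, the cleanup would look roughly like the following (run on the brick's server node; 'rm -rf' deletes everything under the path, so verify it before running):

# On filer01, the node that held the brick from the report.
rm -rf /export/hateya2                  # removes the stale export directory, including .glusterfs
# Alternatively, if only the leftover gluster metadata should go and the directory itself is kept:
rm -rf /export/hateya2/.glusterfs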