Description of problem:
-----------------------
If we want to create or delete multiple volumes back to back, we currently need to put a sleep of 2 seconds between the operations to avoid issues with stale brick processes. We need to fix this so that the 2-second sleep is no longer required; in my view, requiring the delay is not a feasible approach for OCS. Also, on another note, I am not sure whether heketi/OCS provisioning takes care of inducing the 2-second sleep.

Version-Release number of selected component (if applicable):
=============================================================
3.12.2-21

Expected behavior:
==================
Enhance Gluster to accept parallel requests for volume create/delete without needing induced delays (internally, Gluster can queue them if need be). Refer to the admin docs to check whether we are asking customers to induce a delay, e.g.:
https://doc-stage.usersys.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html-single/administration_guide/ (section 23.3.2, "Enabling Management Encryption")
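To make the workaround concrete, here is a minimal sketch of the kind of provisioning loop that currently needs the delay. The volume names, the brick path, and the single-brick layout are hypothetical and used only for illustration; they are not taken from this report:

    import subprocess
    import time

    # Hypothetical volume names and brick path; not from this bug report.
    VOLUMES = ["vol1", "vol2", "vol3"]
    BRICK_TEMPLATE = "server1:/bricks/brick1/{vol}"

    def gluster(*args):
        """Invoke the gluster CLI and raise if the command fails."""
        subprocess.run(["gluster", *args], check=True)

    for vol in VOLUMES:
        gluster("volume", "create", vol, BRICK_TEMPLATE.format(vol=vol), "force")
        gluster("volume", "start", vol)
        # The workaround this bug asks to remove: wait 2 seconds between
        # back-to-back volume operations to avoid stale brick processes.
        time.sleep(2)

If glusterd queued concurrent create/delete requests internally, as requested above, the time.sleep(2) line could simply be dropped.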
"if we have multiple volumes which we want to create/delete or if we want to create multiple vols, back to back, we need to put a sleep of 2 seconds, so as to avoid stale brick processes related issues." - What's the justification on the stale brick process here? What am I missing? Mind to point to a BZ?
Also, why has the Internal Whiteboard been marked as 3.4.1? Are you proposing this as a blocker for 3.4.1? Again, what's the justification?
> Also, on another note, I am not sure whether heketi/OCS provisioning takes care of inducing the 2-second sleep. No, to the best of my knowledge, Heketi does not enforce a delay or sleep of 2 seconds between volume deletes. This is also the first I've heard of this recommendation. I couldn't find it in the docs you linked to, and I'm not sure whether you were saying it should be added to the docs or that it is already noted there. If the latter, an excerpt from the docs would be useful.
Needinfo should remain on the reporter, as I haven't seen a response to my questions yet :-)
Nag - I have an open question on this bug at comment 2. Please answer it; otherwise this bug will be closed with a resolution of insufficient data.