Bug 1479664
| Summary: | create update delete 100 volumes, at last there are 4 volumes left | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | lei.zhou <zhou.lei56> |
| Component: | glusterd | Assignee: | bugs <bugs> |
| Status: | CLOSED NOTABUG | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.10 | CC: | amukherj, bugs, zhou.lei56 |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-08-16 08:01:22 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description
lei.zhou
2017-08-09 06:08:17 UTC
please attach glusterd and cmd_history log files from all the nodes.

Created attachment 1311627 [details]
nodes glusterd log and cmd history

Created attachment 1311937 [details]
Sorry, supplementing the nodes' cmd_history.log.
This is not a bug. The volumes that are still visible through gluster volume list were never deleted, because the delete operation on them failed. For example, from 136.136.136.146:

    [2017-08-09 02:53:52.693510] : volume stop 05fccafb-810b-4dab-8f24-7149617afa9f : FAILED : Another transaction is in progress for 05fccafb-810b-4dab-8f24-7149617afa9f. Please try again after sometime.
    [2017-08-09 02:53:52.722636] : volume delete 05fccafb-810b-4dab-8f24-7149617afa9f : FAILED : Another transaction is in progress for 05fccafb-810b-4dab-8f24-7149617afa9f. Please try again after sometime.

This happens when two nodes try to initiate a transaction on the same volume at the same time: one of them succeeds and the other does not. From the other node's cmd_history.log I can see that a volume status command was triggered at the same moment, which made glusterd on that node take the lock on volume 05fccafb-810b-4dab-8f24-7149617afa9f, and that is why the volume stop and volume delete failed.

I use gstatus to report the GlusterFS server status every 30 seconds. Is that the reason? Can GlusterFS handle operations on more than one volume simultaneously? If there are a lot of volumes to create, update, and delete, could the parallel processing performance of GlusterFS be improved?
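One way to cope with this behaviour on the client side is to retry the stop/delete whenever glusterd answers "Another transaction is in progress", since the failure is transient per-volume lock contention rather than a permanent error. The shell sketch below is only an illustration of that idea, not a fix shipped with GlusterFS and not part of this bug's resolution; the volume name, retry count, and 5-second back-off are arbitrary assumptions.

```bash
#!/bin/bash
# Minimal sketch (illustrative only): retry a gluster CLI command while
# glusterd reports "Another transaction is in progress", i.e. the per-volume
# lock is briefly held by a concurrent operation such as a periodic
# 'gluster volume status' issued by gstatus.
# VOLUME, MAX_RETRIES and the 5-second back-off are assumptions, not values
# taken from this bug report.

VOLUME="vol01"
MAX_RETRIES=5

retry_gluster() {
    local attempt out
    for (( attempt = 1; attempt <= MAX_RETRIES; attempt++ )); do
        # --mode=script suppresses the interactive y/n confirmation prompts
        if out=$(gluster --mode=script "$@" 2>&1); then
            echo "$out"
            return 0
        fi
        if echo "$out" | grep -q "Another transaction is in progress"; then
            echo "attempt $attempt hit lock contention, retrying in 5s..." >&2
            sleep 5
        else
            echo "$out" >&2   # a different error: do not retry
            return 1
        fi
    done
    echo "gave up after $MAX_RETRIES attempts: gluster $*" >&2
    return 1
}

# Stop first, then delete; each step retries on transient lock contention.
retry_gluster volume stop "$VOLUME" && retry_gluster volume delete "$VOLUME"
```

Reducing the overlap in the first place also helps: spacing the gstatus polling away from the bulk create/delete window, or driving all volume management commands from a single node, lowers the chance of two peers competing for the same per-volume lock, which is exactly the contention described above.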