| Summary: | SMB [md-cache]: running dd from a CIFS mount to create a large file while doing multiple graph switches causes I/O errors and fills up the logs | | |
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | surabhi <sbhaloth> |
| Component: | md-cache | Assignee: | Poornima G <pgurusid> |
| Status: | CLOSED WORKSFORME | QA Contact: | Vivek Das <vdas> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.2 | CC: | amukherj, pgurusid, rcyriac, rgowdapp, rhinduja, rhs-bugs, rjoseph, sbhaloth, vdas |
| Target Milestone: | --- | | |
| Target Release: | RHGS 3.2.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-12-20 07:47:39 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
surabhi
2016-10-20 11:16:44 UTC
I see that the log messages show that there was a fini; was there a disconnect? Could you try other graph switches, like quick-read/add-brick, etc.? Is this seen when md-cache is disabled as well?

Tried the test with graph switches by turning readdir-ahead on and off and quick-read on and off; did not observe the issue. Will test once again with write-behind and md-cache off and update the bug.

Keeping the needinfo on Surabhi, as we still need the results when md-cache is turned off along with write-behind.

Vivek, could you please test and update the results as per comment #3?

Summary of tests
------------------
1. With md-cache disabled:
   - Command used: dd
   - ll /mnt/cifs (in a loop), where /mnt/cifs is the mount point
   - write-behind on & off (in a loop)
   - Create a deep directory tree using a python script (an illustrative stand-in is sketched after this list)
   - ll /mnt/cifs (in a loop), where /mnt/cifs is the mount point
   - write-behind on & off (in a loop)
2. With md-cache enabled:
   - Command used: dd
   - ll /mnt/cifs (in a loop), where /mnt/cifs is the mount point
   - write-behind on & off (in a loop)
   - Create a deep directory tree using a python script
   - ll /mnt/cifs (in a loop), where /mnt/cifs is the mount point
   - write-behind on & off (in a loop)
3. Distaf test cases that do a graph switch ON & OFF while the dd command is running; ran this in a loop (a minimal sketch of such a loop follows this list).
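Neither the Distaf case nor the exact manual loop is attached to this BZ, so the snippet below is only a minimal sketch of the kind of option-toggle loop described above. The volume name `testvol` is an assumption; the mount point `/mnt/cifs` and the `performance.write-behind` option toggle are taken from the test summary.

```python
#!/usr/bin/env python
# Illustrative sketch only: toggles performance.write-behind (a graph switch)
# while a large dd write runs on the CIFS mount, listing the mount in between.
# The volume name "testvol" is an assumption, not taken from this BZ.
import subprocess
import time

VOLUME = "testvol"      # assumed volume name
MOUNT = "/mnt/cifs"     # CIFS mount point from the test summary

# Start a large sequential write in the background (roughly the dd step above).
dd = subprocess.Popen(
    ["dd", "if=/dev/zero", "of=%s/largefile" % MOUNT, "bs=1M", "count=10240"])

toggle = "off"
while dd.poll() is None:  # keep switching until dd finishes
    subprocess.call(
        ["gluster", "volume", "set", VOLUME, "performance.write-behind", toggle])
    toggle = "on" if toggle == "off" else "off"
    subprocess.call(["ls", "-l", MOUNT])   # the "ll" step from the summary
    time.sleep(5)                          # arbitrary gap between switches

print("dd exited with rc=%d" % dd.returncode)
```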
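The deep-directory python script referenced in the lists is likewise not attached to this BZ; the following is only an illustrative stand-in that builds a deeply nested directory chain under the mount, with the depth chosen arbitrarily.

```python
#!/usr/bin/env python
# Illustrative stand-in for the "create deep directory" python script mentioned
# above; the real script is not attached to this BZ. Depth is arbitrary.
import os

MOUNT = "/mnt/cifs"   # CIFS mount point from the test summary
DEPTH = 50            # assumed nesting depth

path = MOUNT
for level in range(DEPTH):
    path = os.path.join(path, "dir%03d" % level)

# makedirs creates the whole nested chain in one call.
os.makedirs(path)
print("created: " + path)
```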
While trying all the above test scenarios, the original I/O error was NOT seen. However, in the same scenario the dd command was hanging, for which I have reported another bug, #1396449. I suspect the hang may be masking the I/O error, so we can wait for the fix to #1396449 and then retry the above steps to reproduce the I/O error.
Is it possible to attach a statedump of the smbd process when I/O is hung?

As per the triaging, we all agree that this BZ has to be fixed in rhgs-3.2.0. Providing qa_ack.

Versions:
--------
glusterfs-3.8.4-9.el7rhgs.x86_64
samba-client-4.4.6-3.el7rhgs.x86_64
1. With md-cache enabled:
   - Command used: dd
   - ll /mnt/cifs (in a loop), where /mnt/cifs is the mount point
   - write-behind on & off (in a loop)
   - Command used: dd
   - ll /mnt/cifs (in a loop), where /mnt/cifs is the mount point
   - readdir-ahead on & off (in a loop)
   - Create a deep directory tree using a python script
   - ll /mnt/cifs (in a loop), where /mnt/cifs is the mount point
   - write-behind on & off (in a loop)
2. Distaf test cases that do a graph switch ON & OFF while the dd command is running; ran this in a loop.
No input/output errors or floods of warning messages were encountered.