Bug 880165

Summary: [RHEV-RHS] "fs_mark" test fails on fuse mount
Product: Red Hat Gluster Storage
Reporter: spandura
Component: glusterfs
Assignee: vsomyaju
Status: CLOSED DUPLICATE
QA Contact: spandura
Severity: unspecified
Docs Contact:
Priority: medium
Version: 2.0
CC: grajaiya, nsathyan, rhs-bugs, sdharane, shaines, vbellur
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-12-11 01:44:47 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Description spandura 2012-11-26 07:07:46 EST
Description of problem:
Executing "fs_mark" on a FUSE mount fails.

fs_mark execution output for the failed case:
#  /opt/qa/tools/new_fs_mark/fs_mark-3.3/fs_mark  -d  .  -D  4  -t  4  -S  4 
#       Version 3.3, 4 thread(s) starting at Mon Nov 26 17:09:32 2012
#       Sync method: SYNC POST REVERSE: Issue sync() and then reopen and fsync() each file in reverse order after main write loop.
#       Directories:  Time based hash between directories across 4 subdirectories with 180 seconds per subdirectory.
#       File names: 40 bytes long, (16 initial bytes of time stamp with 24 random bytes at end of name)
#       Files info: size 51200 bytes, written with an IO size of 16384 bytes per write
#       App overhead is time in microseconds spent in the test not doing file writing related system calls.

FSUse%        Count         Size    Files/sec     App Overhead
fscanf read too few entries from thread log file: fs_log.txt.18982

[11/26/12 - 17:16:06 root@rhs-gp-srv15 run14666]# cat fs_log.txt.18982
1000 9.7 74842 1633 19916 2803509 28 98 1144 10205 82747 3155761 3118 43 80 230 2230 4620 6298
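The "fscanf read too few entries" error is reported when the fs_mark parent process reads back a per-thread log line containing fewer fields than its fscanf format string expects. A minimal sketch of that failure mode follows; EXPECTED_FIELDS is a hypothetical value for illustration only and is not taken from the fs_mark source:

```python
# Sketch of a log-line parse that reports "too few entries".
# EXPECTED_FIELDS is hypothetical; the real count comes from
# fs_mark's fscanf format string.
EXPECTED_FIELDS = 20

# The per-thread log line captured in fs_log.txt.18982 above.
log_line = ("1000 9.7 74842 1633 19916 2803509 28 98 1144 10205 "
            "82747 3155761 3118 43 80 230 2230 4620 6298")

# Count the whitespace-separated entries actually present.
fields = log_line.split()

if len(fields) < EXPECTED_FIELDS:
    print("fscanf read too few entries from thread log file")
```

The captured line has 19 fields; if the format string expects more, the parse aborts and the summary row (FSUse%, Count, Size, Files/sec, App Overhead) is never printed.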

Version-Release number of selected component (if applicable):
[11/26/12 - 12:37:43 root@rhs-gp-srv15 ~]# glusterfs --version
glusterfs 3.3.0rhsvirt1 built on Nov  7 2012 10:11:13

[11/26/12 - 12:37:47 root@rhs-gp-srv15 ~]# rpm -qa | grep gluster

How reproducible:

Intermittent; the test fails more often than it passes.

Steps to Reproduce:
1. Create a replicate volume (1x2) with 2 servers and 1 brick on each server. This volume serves as storage for the VMs.

2. Set the volume option "group" to "virt".

3. Set "storage.owner-uid" and "storage.owner-gid" to 36.

4. Start the volume.

5. Create a host from RHEV-M.

6. Create a storage domain from RHEV-M for the volume created above.

7. On the host's mount point for the volume, run the "fs_mark" sanity test: "for i in `seq 1 6` ; do /opt/qa/tools/new_fs_mark/fs_mark-3.3/fs_mark -d . -D 4 -t 4 -S $i ; done"
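Steps 1 through 4 can be sketched with the Gluster CLI as below. This is a sketch, not a verbatim capture from this setup; the volume and brick names are taken from the "Additional info" section, and the commands would be run on one of the storage servers:

```shell
# 1. Create a 1x2 replicate volume with one brick per server.
gluster volume create replicate replica 2 \
    rhs-client1:/disk1 rhs-client16:/disk1

# 2. Apply the "virt" option group (tunes the volume for VM image hosting).
gluster volume set replicate group virt

# 3. Set brick ownership to uid/gid 36 (the vdsm:kvm user on RHEV hosts).
gluster volume set replicate storage.owner-uid 36
gluster volume set replicate storage.owner-gid 36

# 4. Start the volume.
gluster volume start replicate
```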

Actual results:
Changing to the specified mountpoint
executing fs_mark
start: 12:15:27

real    0m21.477s
user    0m0.131s
sys     0m1.543s

real    2m16.434s
user    0m0.138s
sys     0m2.157s

real    0m28.405s
user    0m0.121s
sys     0m1.963s

real    0m56.814s
user    0m0.122s
sys     0m1.921s
fs_mark failed
Total 0 tests were successful
Switching over to the previous working directory
Removing /rhev/data-center/mnt/rhs-client1.lab.eng.blr.redhat.com:_replicate//run31318/
rmdir: failed to remove `/rhev/data-center/mnt/rhs-client1.lab.eng.blr.redhat.com:_replicate//run31318/': Directory not empty
rmdir failed:Directory not empty

Expected results:
fs_mark should pass

Additional info:

Volume Name: replicate
Type: Replicate
Volume ID: d93217ad-aa06-49df-80bf-b0539e5eba72
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: rhs-client1:/disk1
Brick2: rhs-client16:/disk1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
cluster.eager-lock: enable
storage.linux-aio: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off

Note: The same test passes on a regular (non-Gluster) file system.
Comment 3 Amar Tumballi 2012-11-26 09:20:33 EST
Marking this as 'medium' since, ideally, we should not recommend using the image-hosting volume as general-purpose storage.
Comment 4 Vijay Bellur 2012-12-11 01:44:47 EST

*** This bug has been marked as a duplicate of bug 856467 ***