Bug 880165 - [RHEV-RHS] "fs_mark" test fails on fuse mount
Status: CLOSED DUPLICATE of bug 856467
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: vsomyaju
QA Contact: spandura
Depends On:
Blocks:
 
Reported: 2012-11-26 07:07 EST by spandura
Modified: 2015-03-04 19:06 EST
CC List: 6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-12-11 01:44:47 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description spandura 2012-11-26 07:07:46 EST
Description of problem:
=========================
Executing the "fs_mark" test on a FUSE mount fails.

fs_mark execution output for the failed case:
=============================================
#  /opt/qa/tools/new_fs_mark/fs_mark-3.3/fs_mark  -d  .  -D  4  -t  4  -S  4 
#       Version 3.3, 4 thread(s) starting at Mon Nov 26 17:09:32 2012
#       Sync method: SYNC POST REVERSE: Issue sync() and then reopen and fsync() each file in reverse order after main write loop.
#       Directories:  Time based hash between directories across 4 subdirectories with 180 seconds per subdirectory.
#       File names: 40 bytes long, (16 initial bytes of time stamp with 24 random bytes at end of name)
#       Files info: size 51200 bytes, written with an IO size of 16384 bytes per write
#       App overhead is time in microseconds spent in the test not doing file writing related system calls.

FSUse%        Count         Size    Files/sec     App Overhead
fscanf read too few entries from thread log file: fs_log.txt.18982

[11/26/12 - 17:16:06 root@rhs-gp-srv15 run14666]# cat fs_log.txt.18982
1000 9.7 74842 1633 19916 2803509 28 98 1144 10205 82747 3155761 3118 43 80 230 2230 4620 6298

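For reference, one way to count how many whitespace-separated values the stray thread log actually contains (a debugging aid added here, not part of the original run; the error above indicates fscanf matched fewer entries than fs_mark's log parser expects):

# awk '{ print NF }' fs_log.txt.18982   # prints the field count of each line in the thread log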

Version-Release number of selected component (if applicable):
=============================================================
[11/26/12 - 12:37:43 root@rhs-gp-srv15 ~]# glusterfs --version
glusterfs 3.3.0rhsvirt1 built on Nov  7 2012 10:11:13

[11/26/12 - 12:37:47 root@rhs-gp-srv15 ~]# rpm -qa | grep gluster
glusterfs-fuse-3.3.0rhsvirt1-8.el6.x86_64
glusterfs-3.3.0rhsvirt1-8.el6.x86_64


How reproducible:
=================
Intermittent. The test fails more often than it passes.

Steps to Reproduce:
==================
1. Create a replicate volume (1x2) with 2 servers and 1 brick on each server. This is the storage for the VMs. (A command-line sketch of steps 1-4 follows this list.)

2. Set the volume option "group" to "virt".

3. Set storage.owner-uid and storage.owner-gid to 36.

4. Start the volume.

5. Create a host from RHEVM.

6. Create a storage domain from RHEVM for the volume created above.

7. On the host, from the mount point of the volume, run the "fs_mark" sanity test: "for i in `seq 1 6` ; do /opt/qa/tools/new_fs_mark/fs_mark-3.3/fs_mark  -d  .  -D  4  -t  4  -S $i ; done"

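A minimal command-line sketch of steps 1-4, assuming the volume name and brick paths shown in the "Additional info" section below (hostnames and paths are taken from this setup and are illustrative only):

# gluster volume create replicate replica 2 rhs-client1:/disk1 rhs-client16:/disk1   # step 1: 1x2 replicate volume
# gluster volume set replicate group virt                                            # step 2: apply the "virt" option group
# gluster volume set replicate storage.owner-uid 36                                  # step 3: ownership for RHEV (uid/gid 36)
# gluster volume set replicate storage.owner-gid 36
# gluster volume start replicate                                                     # step 4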
  
Actual results:
==============
Changing to the specified mountpoint
/rhev/data-center/mnt/rhs-client1.lab.eng.blr.redhat.com:_replicate/run31318
executing fs_mark
start: 12:15:27

real    0m21.477s
user    0m0.131s
sys     0m1.543s

real    2m16.434s
user    0m0.138s
sys     0m2.157s

real    0m28.405s
user    0m0.121s
sys     0m1.963s

real    0m56.814s
user    0m0.122s
sys     0m1.921s
end:12:19:30
fs_mark failed
0
Total 0 tests were successful
Switching over to the previous working directory
Removing /rhev/data-center/mnt/rhs-client1.lab.eng.blr.redhat.com:_replicate//run31318/
rmdir: failed to remove `/rhev/data-center/mnt/rhs-client1.lab.eng.blr.redhat.com:_replicate//run31318/': Directory not empty
rmdir failed:Directory not empty

Expected results:
====================
fs_mark should pass

Additional info:
===============

Volume Name: replicate
Type: Replicate
Volume ID: d93217ad-aa06-49df-80bf-b0539e5eba72
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhs-client1:/disk1
Brick2: rhs-client16:/disk1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
cluster.eager-lock: enable
storage.linux-aio: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off

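To see exactly which options the "virt" group from step 2 applies on this build, the group definition can be inspected on a server node (assuming the standard /var/lib/glusterd/groups/ layout; stated here as an assumption, not taken from the original report):

# cat /var/lib/glusterd/groups/virt   # lists the volume options applied by "gluster volume set <vol> group virt"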
Note: The same test passes on a regular file system.
Comment 3 Amar Tumballi 2012-11-26 09:20:33 EST
Marking this as 'medium', as ideally we should not be recommending the image-hosting volume as general-purpose storage.
Comment 4 Vijay Bellur 2012-12-11 01:44:47 EST

*** This bug has been marked as a duplicate of bug 856467 ***
