Bug 880165 - [RHEV-RHS] "fs_mark" test fails on fuse mount
Summary: [RHEV-RHS] "fs_mark" test fails on fuse mount
Keywords:
Status: CLOSED DUPLICATE of bug 856467
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: vsomyaju
QA Contact: spandura
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-11-26 12:07 UTC by spandura
Modified: 2015-03-05 00:06 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-12-11 06:44:47 UTC
Embargoed:



Description spandura 2012-11-26 12:07:46 UTC
Description of problem:
=========================
Executing "fs_mark" on a fuse mount fails.

fs_mark execution output for the failed case:
=============================================
#  /opt/qa/tools/new_fs_mark/fs_mark-3.3/fs_mark  -d  .  -D  4  -t  4  -S  4 
#       Version 3.3, 4 thread(s) starting at Mon Nov 26 17:09:32 2012
#       Sync method: SYNC POST REVERSE: Issue sync() and then reopen and fsync() each file in reverse order after main write loop.
#       Directories:  Time based hash between directories across 4 subdirectories with 180 seconds per subdirectory.
#       File names: 40 bytes long, (16 initial bytes of time stamp with 24 random bytes at end of name)
#       Files info: size 51200 bytes, written with an IO size of 16384 bytes per write
#       App overhead is time in microseconds spent in the test not doing file writing related system calls.

FSUse%        Count         Size    Files/sec     App Overhead
fscanf read too few entries from thread log file: fs_log.txt.18982

[11/26/12 - 17:16:06 root@rhs-gp-srv15 run14666]# cat fs_log.txt.18982
1000 9.7 74842 1633 19916 2803509 28 98 1144 10205 82747 3155761 3118 43 80 230 2230 4620 6298
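
The "fscanf read too few entries" message means the parent fs_mark process parsed fewer fields from the per-thread log file than it expected. As a quick diagnostic (a sketch only, assuming awk is available on the host; the file name is the one from the failure above), the number of whitespace-separated fields in that log can be counted with:

# awk '{ print NF }' fs_log.txt.18982
19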


Version-Release number of selected component (if applicable):
=============================================================
[11/26/12 - 12:37:43 root@rhs-gp-srv15 ~]# glusterfs --version
glusterfs 3.3.0rhsvirt1 built on Nov  7 2012 10:11:13

[11/26/12 - 12:37:47 root@rhs-gp-srv15 ~]# rpm -qa | grep gluster
glusterfs-fuse-3.3.0rhsvirt1-8.el6.x86_64
glusterfs-3.3.0rhsvirt1-8.el6.x86_64


How reproducible:
=================
Intermittent.

The test fails more often than it passes.

Steps to Reproduce:
==================
1. Create a replicate volume (1x2) with 2 servers and 1 brick on each server. This volume serves as the storage for the VMs. (A command sketch for steps 1-4 follows this list.)

2. Set the volume option "group" to "virt".

3. Set storage.owner-uid and storage.owner-gid to 36.

4. Start the volume.

5. Create a host from RHEV-M.

6. Create a storage domain from RHEV-M for the volume created above.

7. On the host's mount point for the volume, run the "fs_mark" sanity test: "for i in `seq 1 6` ; do /opt/qa/tools/new_fs_mark/fs_mark-3.3/fs_mark  -d  .  -D  4  -t  4  -S $i ; done"
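
For reference, steps 1 through 4 correspond roughly to the following gluster CLI commands (a sketch only, not a verbatim transcript from this setup; the volume name and brick paths are taken from the "Additional info" section below):

# gluster volume create replicate replica 2 rhs-client1:/disk1 rhs-client16:/disk1
# gluster volume set replicate group virt
# gluster volume set replicate storage.owner-uid 36
# gluster volume set replicate storage.owner-gid 36
# gluster volume start replicate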

  
Actual results:
==============
Changing to the specified mountpoint
/rhev/data-center/mnt/rhs-client1.lab.eng.blr.redhat.com:_replicate/run31318
executing fs_mark
start: 12:15:27

real    0m21.477s
user    0m0.131s
sys     0m1.543s

real    2m16.434s
user    0m0.138s
sys     0m2.157s

real    0m28.405s
user    0m0.121s
sys     0m1.963s

real    0m56.814s
user    0m0.122s
sys     0m1.921s
end:12:19:30
fs_mark failed
0
Total 0 tests were successful
Switching over to the previous working directory
Removing /rhev/data-center/mnt/rhs-client1.lab.eng.blr.redhat.com:_replicate//run31318/
rmdir: failed to remove `/rhev/data-center/mnt/rhs-client1.lab.eng.blr.redhat.com:_replicate//run31318/': Directory not empty
rmdir failed:Directory not empty

Expected results:
====================
fs_mark should pass

Additional info:
===============

Volume Name: replicate
Type: Replicate
Volume ID: d93217ad-aa06-49df-80bf-b0539e5eba72
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhs-client1:/disk1
Brick2: rhs-client16:/disk1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
cluster.eager-lock: enable
storage.linux-aio: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off

Note: The same test passes on a regular (non-GlusterFS) file system.
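
The local-file-system comparison can be reproduced by pointing the same loop at a directory on a regular file system, along these lines (a sketch; /tmp/fsmark-baseline is a hypothetical path, not taken from the original run):

# mkdir -p /tmp/fsmark-baseline && cd /tmp/fsmark-baseline
# for i in `seq 1 6` ; do /opt/qa/tools/new_fs_mark/fs_mark-3.3/fs_mark -d . -D 4 -t 4 -S $i ; done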

Comment 3 Amar Tumballi 2012-11-26 14:20:33 UTC
Marking this as 'medium', since ideally we should not recommend using the image-hosting volume as general-purpose storage.

Comment 4 Vijay Bellur 2012-12-11 06:44:47 UTC

*** This bug has been marked as a duplicate of bug 856467 ***

