Bug 999373

Summary: [RHS-RHOS] "ls" takes >5mins to list glance images on gluster mount while instance snapshot is in progress.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Gowrishankar Rajaiyan <grajaiya>
Component: glusterfs
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED NOTABUG
QA Contact: shilpa <smanjara>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 2.1
CC: grajaiya, rhs-bugs, tkatarki, vbellur
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment: virt rhos glance rhs integration
Last Closed: 2014-03-27 09:53:32 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
Description Flags
rhs-bricks-glance1.1530.dump.1377072403 none

Description Gowrishankar Rajaiyan 2013-08-21 08:49:20 UTC
Description of problem: After launching an instance, I take a snapshot of it. An OpenStack snapshot uploads the nova image to glance (which is a gluster mount) so that further instances can be launched from the snapshot. But while the upload is in progress, an "ls" of /var/lib/glance/images (a gluster mount) takes a very long time (real: 5m10.043s). strace reveals that most of the syscall time (about 50%) is spent in lgetxattr. The impact is such that any further operation on glance takes a long time.
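The ~50% lgetxattr figure comes from the `strace -c` summary table. As a small illustration (the helper and its sample input are hypothetical, with the sample copied from this report's output format), the dominant syscall can be pulled out of such a summary like this:

```shell
# Hypothetical helper: find the top syscall by time share in an
# `strace -c` summary. Sample data mimics the report's output.
summary='% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 50.50    0.010512         478        22        11 lgetxattr
 21.10    0.004393         116        38           mmap'
# Skip the header and dashed separator, print the first data row.
echo "$summary" | awk 'NR>2 && $1+0 > 0 { print $1"% "$NF; exit }'
# → 50.50% lgetxattr
```

Note that `strace -c` totals only syscall time (0.020815s here), so the 5m10s of wall time is spent waiting inside those calls, not executing them.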


Version-Release number of selected component (if applicable):
glusterfs-3.4.0.20rhs-2.el6.x86_64

How reproducible: Always


Steps to Reproduce:
1. Setup RHS-RHOS[1]
2. Ensure that glance and cinder are integrated using RHS.
3. Launch an instance from ISO[2].
4. Create snapshot using dashboard.
5. While the instance's task status is "Image Uploading", try listing the images in glance.

Actual results: Listing takes a very long time (real 5m10.043s).

Expected results: Listing should complete in a reasonable time.



Steps to Reproduce in detail:
[1]
* Install openstack-utils openstack-cinder openstack-glance glusterfs-fuse glusterfs
* openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
* openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
* openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
* Create /etc/cinder/shares.conf
* mount -t glusterfs 10.70.43.44:/glance-vol /var/lib/glance/images
* packstack --allinone
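The contents of /etc/cinder/shares.conf are not shown in this report. Going by the GlusterFS Cinder driver's one-share-per-line format and the volume visible in the df output below, it plausibly contained something like (this is an assumption, not a capture from the system):

```
10.70.43.44:/cinder-vol
```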

[2]
* nova volume-create --display-name RHS2.1-Node1 20
* nova boot --flavor 2 --image 5818c4e8-d0fb-4b32-830e-7154eacee839 --key-name root --block-device-mapping vda=0f9cd58f-8c1a-4883-9233-5e15411b394b::0 RHS2.1-Node1




Additional info:
I confirmed that there is no memory or CPU load on the OpenStack machine or on the RHS nodes.

Openstack:
[root@rhs-hpc-srv1 ~(keystone_admin)]# mpstat 
Linux 2.6.32-358.114.1.openstack.el6.x86_64 (rhs-hpc-srv1.lab.eng.blr.redhat.com) 	08/21/2013 	_x86_64_	(32 CPU)

01:41:55 AM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
01:41:55 AM  all    1.35    0.00    0.37    0.01    0.00    0.01    0.00    0.01   98.26

[root@rhs-hpc-srv1 ~(keystone_admin)]# free -g
             total       used       free     shared    buffers     cached
Mem:            47         18         28          0          0         11
-/+ buffers/cache:          6         40
Swap:           29          0         29
[root@rhs-hpc-srv1 ~(keystone_admin)]#


One of the RHS node:
[root@localhost ~]# free -g
             total       used       free     shared    buffers     cached
Mem:             7          2          5          0          0          1
-/+ buffers/cache:          0          7
Swap:            7          0          7

[root@localhost ~]# mpstat 
Linux 2.6.32-358.14.1.el6.x86_64 (localhost.localdomain) 	08/21/2013 	_x86_64_	(4 CPU)

01:52:25 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
01:52:25 PM  all    0.14    0.00    0.08    0.02    0.00    0.00    0.05    0.00   99.70
[root@localhost ~]# 



[root@rhs-hpc-srv1 ~]# time strace -c ls -l /var/lib/glance/images/
total 26753122
-rw-r-----. 1 glance glance   699592704 Aug 20 17:19 1245b941-79c7-4e1c-84e1-4e2727c0b530
-rw-r-----. 1 glance glance  3589636096 Aug 21  2013 2f9486ba-4f2e-4aa7-b23d-229a0999161c
-rw-r-----. 1 glance glance   235536384 Aug 20 17:22 30feb3a1-f714-41c7-86be-d962a5fb216b
-rw-r-----. 1 glance glance   237371392 Aug 20 17:21 569e41dd-d823-4162-bb20-036fb24557e0
-rw-r-----. 1 glance glance  1826027520 Aug 20 17:14 5818c4e8-d0fb-4b32-830e-7154eacee839
-rw-r-----. 1 glance glance  1682935808 Aug 21  2013 6c2da0ab-24ef-4da6-bb70-4113f59e847e
-rw-r-----. 1 glance glance  1682059264 Aug 21  2013 6f713cee-8e3c-4057-bd37-98ccee4c3e0b
-rw-r-----. 1 glance glance   876109824 Aug 21  2013 abb48a02-4141-4d3b-a323-04829df59b35
-rw-r-----. 1 glance glance 16337731584 Aug 20 17:33 bc1349be-e4b8-479e-be02-ed3710e73844
-rw-r-----. 1 glance glance   228196352 Aug 20 17:22 effbf554-37c1-4e7f-96c6-62671929e0b7
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 50.50    0.010512         478        22        11 lgetxattr
 21.10    0.004393         116        38           mmap
 14.41    0.003000         136        22           lstat
 11.46    0.002385          75        32        10 open
  2.46    0.000512         102         5         5 connect
  0.06    0.000013           1        11           write
  0.00    0.000000           0        18           read
  0.00    0.000000           0        29           close
  0.00    0.000000           0         9           stat
  0.00    0.000000           0        22           fstat
  0.00    0.000000           0         1           lseek
  0.00    0.000000           0        18           mprotect
  0.00    0.000000           0         8           munmap
  0.00    0.000000           0         3           brk
  0.00    0.000000           0         2           rt_sigaction
  0.00    0.000000           0         1           rt_sigprocmask
  0.00    0.000000           0         2           ioctl
  0.00    0.000000           0         1         1 access
  0.00    0.000000           0         5           socket
  0.00    0.000000           0         1           execve
  0.00    0.000000           0         2           fcntl
  0.00    0.000000           0         6           getdents
  0.00    0.000000           0         1           getrlimit
  0.00    0.000000           0         1           statfs
  0.00    0.000000           0         1           arch_prctl
  0.00    0.000000           0         4         1 futex
  0.00    0.000000           0         1           set_tid_address
  0.00    0.000000           0         1           set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00    0.020815                   267        28 total

real	5m10.043s
user	0m0.007s
sys	0m0.042s
[root@rhs-hpc-srv1 ~]# 


Even a "df" takes 13 seconds:
[root@rhs-hpc-srv1 ~(keystone_admin)]# time strace -c df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sdb1            1149638772   1538576 1089701940   1% /
tmpfs                 24697036         0  24697036   0% /dev/shm
/dev/sdf1             30774208    215992  28994980   1% /boot
/dev/sdc1            1149638772   7903324 1083337192   1% /var
/srv/loopback-device/device1
                        982936     34088    896420   4% /srv/node/device1
10.70.43.44:cinder-vol
                     314382336  29532416 284849920  10% /var/lib/cinder/volumes/1d12e17a168a458a2db39ca37ee302fd
10.70.43.44:/glance-vol
                     314382336  29532416 284849920  10% /var/lib/glance/images
10.70.43.44:cinder-vol
                     314382336  29532416 284849920  10% /var/lib/nova/mnt/1d12e17a168a458a2db39ca37ee302fd
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00    0.004355         363        12           statfs
  0.00    0.000000           0         5           read
  0.00    0.000000           0        13           write
  0.00    0.000000           0        11         5 open
  0.00    0.000000           0         8           close
  0.00    0.000000           0         7           fstat
  0.00    0.000000           0        12           mmap
  0.00    0.000000           0         3           mprotect
  0.00    0.000000           0         4           munmap
  0.00    0.000000           0         3           brk
  0.00    0.000000           0         1         1 access
  0.00    0.000000           0         1           execve
  0.00    0.000000           0         1           arch_prctl
------ ----------- ----------- --------- --------- ----------------
100.00    0.004355                    81         6 total

real	0m13.494s
user	0m0.002s
sys	0m0.021s
[root@rhs-hpc-srv1 ~(keystone_admin)]# 


[root@localhost ~]# gluster vol info glance-vol
 
Volume Name: glance-vol
Type: Distributed-Replicate
Volume ID: c703f1a8-0bc0-4eb4-b620-61c851093866
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.43.66:/rhs/bricks/glance1
Brick2: 10.70.43.67:/rhs/bricks/glance1
Brick3: 10.70.43.68:/rhs/bricks/glance1
Brick4: 10.70.43.69:/rhs/bricks/glance1
Brick5: 10.70.43.70:/rhs/bricks/glance1
Brick6: 10.70.43.61:/rhs/bricks/glance1
Brick7: 10.70.43.56:/rhs/bricks/glance1
Brick8: 10.70.43.57:/rhs/bricks/glance1
Brick9: 10.70.43.53:/rhs/bricks/glance1
Brick10: 10.70.43.47:/rhs/bricks/glance1
Brick11: 10.70.43.44:/rhs/bricks/glance1
Brick12: 10.70.43.42:/rhs/bricks/glance1
Options Reconfigured:
storage.owner-gid: 161
storage.owner-uid: 161
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
[root@localhost ~]# 
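Note the reconfigured options above: performance.stat-prefetch is off (as is typical for virt workloads on RHS), so the client cannot serve `ls -l` metadata from cache and every stat/lgetxattr has to travel to the bricks. As a diagnostic experiment only (caching may mask, not fix, the underlying slowness), one could toggle it; the command is printed rather than executed here, since it needs a live cluster:

```shell
# Hypothetical experiment; volume name taken from this report.
cmd="gluster volume set glance-vol performance.stat-prefetch on"
echo "$cmd"   # printed, not run: requires the gluster CLI on an RHS node
```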


[root@localhost ~]# gluster vol status glance-vol
Status of volume: glance-vol
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.43.66:/rhs/bricks/glance1			49153	Y	1533
Brick 10.70.43.67:/rhs/bricks/glance1			49153	Y	1533
Brick 10.70.43.68:/rhs/bricks/glance1			49153	Y	1544
Brick 10.70.43.69:/rhs/bricks/glance1			49153	Y	1536
Brick 10.70.43.70:/rhs/bricks/glance1			49153	Y	1538
Brick 10.70.43.61:/rhs/bricks/glance1			49153	Y	1558
Brick 10.70.43.56:/rhs/bricks/glance1			49153	Y	1532
Brick 10.70.43.57:/rhs/bricks/glance1			49153	Y	1573
Brick 10.70.43.53:/rhs/bricks/glance1			49153	Y	1532
Brick 10.70.43.47:/rhs/bricks/glance1			49153	Y	1539
Brick 10.70.43.44:/rhs/bricks/glance1			49153	Y	1530
Brick 10.70.43.42:/rhs/bricks/glance1			49153	Y	18923
NFS Server on localhost					2049	Y	22219
Self-heal Daemon on localhost				N/A	Y	22223
NFS Server on 10.70.43.67				2049	Y	22204
Self-heal Daemon on 10.70.43.67				N/A	Y	22211
NFS Server on 10.70.43.56				2049	Y	22172
Self-heal Daemon on 10.70.43.56				N/A	Y	22179
NFS Server on 10.70.43.61				2049	Y	22173
Self-heal Daemon on 10.70.43.61				N/A	Y	22180
NFS Server on 10.70.43.66				2049	Y	22277
Self-heal Daemon on 10.70.43.66				N/A	Y	22284
NFS Server on 10.70.43.47				2049	Y	22044
Self-heal Daemon on 10.70.43.47				N/A	Y	22051
NFS Server on 10.70.43.70				2049	Y	1440
Self-heal Daemon on 10.70.43.70				N/A	Y	1447
NFS Server on 10.70.43.72				2049	Y	22323
Self-heal Daemon on 10.70.43.72				N/A	Y	22329
NFS Server on 10.70.43.69				2049	Y	22212
Self-heal Daemon on 10.70.43.69				N/A	Y	22219
NFS Server on 10.70.43.42				2049	Y	7758
Self-heal Daemon on 10.70.43.42				N/A	Y	7762
NFS Server on 10.70.43.57				2049	Y	22063
Self-heal Daemon on 10.70.43.57				N/A	Y	22070
NFS Server on 10.70.43.68				2049	Y	22241
Self-heal Daemon on 10.70.43.68				N/A	Y	22248
NFS Server on 10.70.43.53				2049	Y	22060
Self-heal Daemon on 10.70.43.53				N/A	Y	22067
 
There are no active volume tasks
[root@localhost ~]# 


[root@localhost ~]# xfs_info /dev/mapper/bricks
meta-data=/dev/mapper/bricks     isize=512    agcount=64, agsize=204800 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=13105664, imaxpct=15
         =                       sunit=64     swidth=640 blks
naming   =version 2              bsize=8192   ascii-ci=0
log      =internal               bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]#

Comment 1 Gowrishankar Rajaiyan 2013-08-21 08:52:05 UTC
Created attachment 788760 [details]
rhs-bricks-glance1.1530.dump.1377072403

attaching statedump.

Comment 3 Gowrishankar Rajaiyan 2013-08-21 09:08:41 UTC
It gets a bit better after I deleted the instance and the snapshot that was being created, but listing is still slow:

[root@rhs-hpc-srv1 ~]# time strace -c ls -l /var/lib/glance/images/
total 26875338
-rw-r-----. 1 glance glance   699592704 Aug 20 17:19 1245b941-79c7-4e1c-84e1-4e2727c0b530
-rw-r-----. 1 glance glance  3589636096 Aug 21  2013 2f9486ba-4f2e-4aa7-b23d-229a0999161c
-rw-r-----. 1 glance glance   235536384 Aug 20 17:22 30feb3a1-f714-41c7-86be-d962a5fb216b
-rw-r-----. 1 glance glance   237371392 Aug 20 17:21 569e41dd-d823-4162-bb20-036fb24557e0
-rw-r-----. 1 glance glance  1826027520 Aug 20 17:14 5818c4e8-d0fb-4b32-830e-7154eacee839
-rw-r-----. 1 glance glance  1682935808 Aug 21  2013 6c2da0ab-24ef-4da6-bb70-4113f59e847e
-rw-r-----. 1 glance glance  1682059264 Aug 21  2013 6f713cee-8e3c-4057-bd37-98ccee4c3e0b
-rw-r-----. 1 glance glance  1001259008 Aug 21  2013 abb48a02-4141-4d3b-a323-04829df59b35
-rw-r-----. 1 glance glance 16337731584 Aug 20 17:33 bc1349be-e4b8-479e-be02-ed3710e73844
-rw-r-----. 1 glance glance   228196352 Aug 20 17:22 effbf554-37c1-4e7f-96c6-62671929e0b7
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 33.63    0.009749         443        22        11 lgetxattr
 30.08    0.008720         229        38           mmap
 15.40    0.004464         893         5           socket
 10.35    0.002999         136        22           lstat
  6.28    0.001819         165        11           write
  3.75    0.001086          34        32        10 open
  0.48    0.000140         140         1           statfs
  0.03    0.000009           0        29           close
  0.00    0.000000           0        18           read
  0.00    0.000000           0         9           stat
  0.00    0.000000           0        22           fstat
  0.00    0.000000           0         1           lseek
  0.00    0.000000           0        18           mprotect
  0.00    0.000000           0         8           munmap
  0.00    0.000000           0         3           brk
  0.00    0.000000           0         2           rt_sigaction
  0.00    0.000000           0         1           rt_sigprocmask
  0.00    0.000000           0         2           ioctl
  0.00    0.000000           0         1         1 access
  0.00    0.000000           0         5         5 connect
  0.00    0.000000           0         1           execve
  0.00    0.000000           0         2           fcntl
  0.00    0.000000           0         6           getdents
  0.00    0.000000           0         1           getrlimit
  0.00    0.000000           0         1           arch_prctl
  0.00    0.000000           0         4         1 futex
  0.00    0.000000           0         1           set_tid_address
  0.00    0.000000           0         1           set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00    0.028986                   267        28 total

real	4m40.799s
user	0m0.003s
sys	0m0.060s
[root@rhs-hpc-srv1 ~]#

Comment 4 Gowrishankar Rajaiyan 2013-08-21 09:09:57 UTC
restarting openstack-glance-api helps:

[root@rhs-hpc-srv1 ~]# time strace -c ls -l /var/lib/glance/images/
total 26880782
-rw-r-----. 1 glance glance   699592704 Aug 20 17:19 1245b941-79c7-4e1c-84e1-4e2727c0b530
-rw-r-----. 1 glance glance  3589636096 Aug 21  2013 2f9486ba-4f2e-4aa7-b23d-229a0999161c
-rw-r-----. 1 glance glance   235536384 Aug 20 17:22 30feb3a1-f714-41c7-86be-d962a5fb216b
-rw-r-----. 1 glance glance   237371392 Aug 20 17:21 569e41dd-d823-4162-bb20-036fb24557e0
-rw-r-----. 1 glance glance  1826027520 Aug 20 17:14 5818c4e8-d0fb-4b32-830e-7154eacee839
-rw-r-----. 1 glance glance  1682935808 Aug 21  2013 6c2da0ab-24ef-4da6-bb70-4113f59e847e
-rw-r-----. 1 glance glance  1682059264 Aug 21  2013 6f713cee-8e3c-4057-bd37-98ccee4c3e0b
-rw-r-----. 1 glance glance  1006833664 Aug 21  2013 abb48a02-4141-4d3b-a323-04829df59b35
-rw-r-----. 1 glance glance 16337731584 Aug 20 17:33 bc1349be-e4b8-479e-be02-ed3710e73844
-rw-r-----. 1 glance glance   228196352 Aug 20 17:22 effbf554-37c1-4e7f-96c6-62671929e0b7
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 40.90    0.004999         833         6           getdents
 32.72    0.003999         800         5           socket
  8.18    0.001000          45        22           lstat
  8.18    0.001000          45        22        11 lgetxattr
  5.96    0.000728         146         5         5 connect
  3.94    0.000481          15        32        10 open
  0.07    0.000009           0        38           mmap
  0.06    0.000007           4         2           fcntl
  0.00    0.000000           0        18           read
  0.00    0.000000           0        11           write
  0.00    0.000000           0        29           close
  0.00    0.000000           0         9           stat
  0.00    0.000000           0        22           fstat
  0.00    0.000000           0         1           lseek
  0.00    0.000000           0        18           mprotect
  0.00    0.000000           0         8           munmap
  0.00    0.000000           0         3           brk
  0.00    0.000000           0         2           rt_sigaction
  0.00    0.000000           0         1           rt_sigprocmask
  0.00    0.000000           0         2           ioctl
  0.00    0.000000           0         1         1 access
  0.00    0.000000           0         1           execve
  0.00    0.000000           0         1           getrlimit
  0.00    0.000000           0         1           statfs
  0.00    0.000000           0         1           arch_prctl
  0.00    0.000000           0         4         1 futex
  0.00    0.000000           0         1           set_tid_address
  0.00    0.000000           0         1           set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00    0.012223                   267        28 total

real	0m0.096s
user	0m0.004s
sys	0m0.024s
[root@rhs-hpc-srv1 ~]#

Comment 5 Gowrishankar Rajaiyan 2013-08-21 09:11:50 UTC
==> /var/log/glusterfs/var-lib-glance-images.log <==
[2013-08-20 21:23:42.406475] W [client-rpc-fops.c:984:client3_3_fsync_cbk] 0-glance-vol-client-11: remote operation failed: Structure needs cleaning
[2013-08-20 21:23:42.406531] W [afr-transaction.c:1497:afr_changelog_fsync_cbk] 0-glance-vol-replicate-5: fsync(cfaf9576-9fa5-48e4-991a-39a3088f9590) failed on subvolume glance-vol-client-11. Transaction was WRITE
[2013-08-20 21:24:29.555457] W [client-rpc-fops.c:984:client3_3_fsync_cbk] 0-glance-vol-client-11: remote operation failed: Structure needs cleaning
[2013-08-20 21:24:29.555500] W [afr-transaction.c:1497:afr_changelog_fsync_cbk] 0-glance-vol-replicate-5: fsync(cfaf9576-9fa5-48e4-991a-39a3088f9590) failed on subvolume glance-vol-client-11. Transaction was WRITE
[2013-08-20 21:24:52.317066] W [client-rpc-fops.c:984:client3_3_fsync_cbk] 0-glance-vol-client-11: remote operation failed: Structure needs cleaning
[2013-08-20 21:24:52.317128] W [afr-transaction.c:1497:afr_changelog_fsync_cbk] 0-glance-vol-replicate-5: fsync(cfaf9576-9fa5-48e4-991a-39a3088f9590) failed on subvolume glance-vol-client-11. Transaction was WRITE
[2013-08-20 21:24:52.361048] W [client-rpc-fops.c:984:client3_3_fsync_cbk] 0-glance-vol-client-11: remote operation failed: Structure needs cleaning
[2013-08-20 21:24:52.361102] W [afr-transaction.c:1497:afr_changelog_fsync_cbk] 0-glance-vol-replicate-5: fsync(cfaf9576-9fa5-48e4-991a-39a3088f9590) failed on subvolume glance-vol-client-11. Transaction was WRITE
[2013-08-20 21:40:08.669613] W [client-rpc-fops.c:984:client3_3_fsync_cbk] 0-glance-vol-client-11: remote operation failed: Structure needs cleaning
[2013-08-20 21:40:08.669660] W [afr-transaction.c:1497:afr_changelog_fsync_cbk] 0-glance-vol-replicate-5: fsync(cfaf9576-9fa5-48e4-991a-39a3088f9590) failed on subvolume glance-vol-client-11. Transaction was WRITE
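"Structure needs cleaning" is the kernel's EUCLEAN errno string and usually indicates XFS metadata corruption on the brick backing glance-vol-client-11. A dry-run consistency check of the brick filesystem (device and mount point taken from the xfs_info/brick output in this report; the command is printed rather than executed, since it must run on the affected RHS node with the brick unmounted) would look like:

```shell
# Dry-run (-n) XFS check; an actual repair would drop the -n.
check="xfs_repair -n /dev/mapper/bricks"
echo "umount /rhs/bricks/glance1 && $check"   # printed only, not run
```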

Comment 6 Gowrishankar Rajaiyan 2013-08-21 09:41:54 UTC
(In reply to Gowrishankar Rajaiyan from comment #0)
...
> 
> How reproducible: Always
> 
...

Couldn't reproduce it the 3rd time.

Comment 7 shilpa 2014-03-27 07:41:56 UTC
Re-tested on RHOS4.0 with glusterfs-3.4.0.59rhs-1.el6_4.x86_64

While the instance is being snapshotted, the time taken to list /mnt/gluster/glance/images is 0m0.051s:

# time strace -c ls -l /mnt/gluster/glance/images/
total 1249024
-rw-r-----. 1 glance glance 251985920 Feb 18 12:52 036b032a-88c2-4f8b-a9c1-8cf6ef032841
-rwxrwxrwx. 1 glance glance 251985920 Mar 13 14:40 4c320f2a-e369-43a8-91a7-031109e99057
-rw-r-----. 1 glance glance 251985920 Mar 27 12:04 4e7d2806-e2f0-429c-bcab-d289cdacf391
-rw-r-----. 1 glance glance  19070976 Mar 10 15:06 58ae3788-60c7-45e7-b333-64bd0f622b70
-rw-r-----. 1 glance glance 251985920 Mar 25 13:08 a1c71334-152f-4311-ab89-f59754c4f449
-rw-r-----. 1 glance glance 251985920 Mar 25 13:07 dcde5200-cd98-4ffa-a363-65dcb36018d7
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 60.20    0.001546         110        14         7 lgetxattr
 39.41    0.001012          72        14           lstat
  0.39    0.000010           0        38           mmap
  0.00    0.000000           0        18           read
  0.00    0.000000           0         7           write
  0.00    0.000000           0        32        10 open
  0.00    0.000000           0        29           close
  0.00    0.000000           0         5           stat
  0.00    0.000000           0        22           fstat
  0.00    0.000000           0         1           lseek
  0.00    0.000000           0        18           mprotect
  0.00    0.000000           0         8           munmap
  0.00    0.000000           0         3           brk
  0.00    0.000000           0         2           rt_sigaction
  0.00    0.000000           0         1           rt_sigprocmask
  0.00    0.000000           0         2           ioctl
  0.00    0.000000           0         1         1 access
  0.00    0.000000           0         5           socket
  0.00    0.000000           0         5         5 connect
  0.00    0.000000           0         1           execve
  0.00    0.000000           0         2           fcntl
  0.00    0.000000           0         6           getdents
  0.00    0.000000           0         1           getrlimit
  0.00    0.000000           0         1           statfs
  0.00    0.000000           0         1           arch_prctl
  0.00    0.000000           0         4         1 futex
  0.00    0.000000           0         1           set_tid_address
  0.00    0.000000           0         1           set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00    0.002568                   243        24 total

real	0m0.051s
user	0m0.002s
sys	0m0.010s

Comment 8 shilpa 2014-03-27 09:53:32 UTC
Since the bug can no longer be reproduced, closing this bug as NOTABUG.