Bug 1149765 - dm-thin: performance degradation seen with RHS over dm-thin not seen in dm-thin standalone
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1208974
 
Reported: 2014-10-06 15:03 UTC by Manoj Pillai
Modified: 2015-04-15 08:45 UTC
12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1208974 (view as bug list)
Environment:
Last Closed: 2015-04-15 08:45:06 UTC
Embargoed:


Attachments
iostat for local dm-thin (185.51 KB, text/plain)
2014-10-06 15:30 UTC, Manoj Pillai
iostat for RHS+dm-thin (331.17 KB, text/plain)
2014-10-06 15:31 UTC, Manoj Pillai

Description Manoj Pillai 2014-10-06 15:03:22 UTC
Description of problem:
Iozone tests with RHS over thin-p are showing poorer performance compared to RHS over traditional LV. The simplest test, "iozone -i 0 -i 1" (a write test followed by a read test), reports lower read throughput when RHS is using thin-p underneath than when RHS runs over a traditional LV.

The same test, when run on a system without RHS in the picture (i.e. iozone on an XFS filesystem over thin-p), does not show degradation compared to an XFS filesystem over traditional LV.

Version-Release number of selected component (if applicable):
kernel-2.6.32-431.29.2.el6.x86_64
lvm2-2.02.100-8.el6.x86_64
glusterfs-3.6.0.28-1.el6rhs.x86_64

How reproducible:
Consistently

Steps to Reproduce:
1.
create an RHS volume with a single brick (single server). The brick is an XFS filesystem created over a thin LV (in a thin pool with chunk size 256K). For comparison, we also run with the brick created over traditional LV.
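A minimal sketch of this brick setup, with hypothetical device, VG, and volume names (the report does not record the exact commands):

```shell
# Hypothetical names throughout; sizes are illustrative.
pvcreate /dev/sdb
vgcreate vg_brick /dev/sdb

# Thin pool with a 256K chunk size, then a thin LV carved from it:
lvcreate -L 500G -c 256K -T vg_brick/brickpool
lvcreate -V 256G -T vg_brick/brickpool -n brick1

# XFS brick on the thin LV, used for a single-brick volume:
mkfs.xfs /dev/vg_brick/brick1
mkdir -p /rhs/brick1
mount /dev/vg_brick/brick1 /rhs/brick1
gluster volume create testvol server1:/rhs/brick1
gluster volume start testvol

# Comparison case: a traditional (thick) LV of the same size instead:
# lvcreate -L 256G -n brick1 vg_brick
```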

2.
clients mount the RHS volume. iozone is run in distributed mode over 4 clients as follows:

iozone -+m ${BIN_PATH}/distiozone.conf -+h [host_ip] -i 0 -i 1 -w -+n -c -C -e -s ${FILE_SZ_GB}g -r ${REC_SZ_KB}k -t $IOZONE_NTHRDS 

FILE_SZ_GB=20
REC_SZ_KB=1024
IOZONE_NTHRDS=8

This test creates 8 files in the RHS volume, with each thread writing to its file in the write phase (concurrently), and reading from its file in the read phase.

Total server memory is 64g, which is much less than the amount of data written (160g).
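The `-+m` config file for distributed iozone lists one line per iozone process: client hostname, working directory on the mounted volume, and path to the iozone binary. A hypothetical 4-client layout giving the 8 threads used here:

```shell
# Hypothetical hosts/paths: 2 entries per client x 4 clients = 8 processes,
# matching IOZONE_NTHRDS=8.
cat > distiozone.conf <<'EOF'
client1 /mnt/rhsvol /usr/bin/iozone
client1 /mnt/rhsvol /usr/bin/iozone
client2 /mnt/rhsvol /usr/bin/iozone
client2 /mnt/rhsvol /usr/bin/iozone
client3 /mnt/rhsvol /usr/bin/iozone
client3 /mnt/rhsvol /usr/bin/iozone
client4 /mnt/rhsvol /usr/bin/iozone
client4 /mnt/rhsvol /usr/bin/iozone
EOF
wc -l distiozone.conf   # 8 entries
```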


Actual results:
With RHS over traditional LV:
        Children see throughput for  8 initial writers  =  915075.96 KB/sec
        Children see throughput for  8 readers          =  934001.85 KB/sec

With RHS over thin LV:
        Children see throughput for  8 initial writers  = 1123032.67 KB/sec
        Children see throughput for  8 readers          =  539766.88 KB/sec

Expected results:
No drop in throughput for readers for RHS over thin LV, compared to RHS over traditional LV.

Additional info:
For the thin-p case, the test was repeated with /sys/module/dm_bufio/parameters/max_age_seconds set to 3600, which is greater than the duration of the test. iostat was used to verify that there were no reads on the thin pool metadata device. However, there was no improvement in the read throughput. This rules out metadata read overhead in the thin-p case as the cause of the degradation.
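The metadata-read check can be scripted along these lines (device name is hypothetical; `_tmeta` follows lvm's thin-pool device naming):

```shell
# Keep dm-bufio buffers cached longer than the test runs (1 hour), so
# thin-pool metadata stays in memory and is not re-read from disk:
echo 3600 > /sys/module/dm_bufio/parameters/max_age_seconds

# Watch the pool's metadata device (hypothetical name) for reads during
# the run; the r/s column should stay at 0:
iostat -N -tkdx 2 | grep --line-buffered -E 'Device|brickpool_tmeta'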

More info:
If a similar test is done on the RHS server system (iozone running directly on the server system -- no RHS volume and clients) the results are quite different.

command in this case:
iozone -i 0 -i 1 -w -+n -c -C -e -s ${FILE_SZ_GB}g -r ${REC_SZ_KB}k -t $IOZONE_NTHRDS -F ${file_set}

The number of iozone threads and the file size written by each thread are kept the same, i.e. 8 threads and 20g per thread.

XFS filesystem over traditional LV:
        Children see throughput for  8 initial writers  =  837670.20 KB/sec
        Children see throughput for  8 readers          =  938276.05 KB/sec

XFS filesystem over thin LV:
        Children see throughput for  8 initial writers  = 1466771.38 KB/sec
        Children see throughput for  8 readers          =  989255.89 KB/sec

In this case, no degradation is seen in read throughput.

Comment 2 Manoj Pillai 2014-10-06 15:30:41 UTC
Created attachment 944301 [details]
iostat for local dm-thin

Comment 3 Manoj Pillai 2014-10-06 15:31:24 UTC
Created attachment 944302 [details]
iostat for RHS+dm-thin

Comment 4 Manoj Pillai 2014-10-06 15:41:41 UTC
iostat output for these cases is interesting.

For the local dm-thin case (no RHS), iostat reports large avgqu-sz on the thin LV, usually 5000+ in the write phase. See https://bugzilla.redhat.com/attachment.cgi?id=944301

Note that this is output of "iostat -N -tkdx 2".

For the RHS case, avgqu-sz is usually quite low, ~400 in a 2-sec interval followed by ~5 in the next 2 sec interval. See
https://bugzilla.redhat.com/attachment.cgi?id=944302

I'm wondering if the bio_sort patch that Mike put in to reduce fragmentation in the presence of multi-threaded allocations is being rendered ineffective by the small queue in the RHS case.

I also noticed that vm parameters dirty_ratio and dirty_background_ratio for RHS installs are different from the RHEL defaults.

BRCK_VM_DIRTY_R=5 # vm.dirty_ratio; default: rhs 5, rhel 20
BRCK_VM_DIRTY_BGRNDR=2 # vm.dirty_background_ratio; default: rhs 2, rhel 10 

These runs are with the RHS values. I have some data with the RHEL default values as well.
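For reference, the two tunings compared in these runs can be switched at runtime with `sysctl` (root required):

```shell
# Current values:
sysctl -n vm.dirty_ratio vm.dirty_background_ratio

# RHS install values used in these runs:
sysctl -w vm.dirty_ratio=5 vm.dirty_background_ratio=2

# RHEL defaults, for the comparison runs:
sysctl -w vm.dirty_ratio=20 vm.dirty_background_ratio=10
```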

Comment 5 Mike Snitzer 2014-10-06 18:56:10 UTC
This would seem to be a gluster issue to me.  dm-thinp could possibly be better about its ondisk layout for multiple threads _but_ it is clearly doing quite well when iozone is executed locally on the server.

The larger avgqu-sz is definitely going to help the bio-sort patch I did.  If gluster is throttling the IO it'll hurt us.

Joe has a patch that iterates on the multi-block allocation (blocks_per_allocation) RFC patch earlier in the year.  It is a more "supported" variant that will likely help us regardless of avgqu-sz... see:

https://github.com/jthornber/linux-2.6/commit/94820b27088c69e14cc2de9a4d7ae1533d55897b

(that commit needs a fix that I cannot find at the moment).

Anyway, point is:
1) we'll want gluster to allow for larger avgqu-sz
   - seems like architectural issue that needs deeper analysis on gluster's side
2) we have Joe's over-provisioning patch that may save the day regardless

Comment 6 Mike Snitzer 2014-10-06 19:06:38 UTC
(In reply to Mike Snitzer from comment #5)
> Joe has a patch that iterates on the multi-block allocation
> (blocks_per_allocation) RFC patch earlier in the year.  It is a more
> "supported" variant that will likely help us regardless of avgqu-sz... see:
> 
> https://github.com/jthornber/linux-2.6/commit/
> 94820b27088c69e14cc2de9a4d7ae1533d55897b
> 
> (that commit needs a fix that I cannot find at the moment).

Here is the fix:
https://github.com/jthornber/linux-2.6/commit/ea590d9691f0c7070332605ac43172e9a5c820e2

Comment 7 Manoj Pillai 2014-10-06 19:55:06 UTC
(In reply to Mike Snitzer from comment #5)
I'll add some more results, esp. with the RHEL6 defaults for vm.dirty* parameters. We can then decide whether this should be all RHS, or some analysis is needed on the dm-thin side as well.

But on the blocks_per_allocation patch. I think I remember the problem we had with the early version: it helped with fragmentation for the initial provisioning, but it did not help with fragmentation in the "breaking of sharing" case. So once you created a snapshot on the LV, and then concurrent writers triggered new allocations, those new allocations would be fragmented. Not sure if the newer version handles that case better. Will have to take a closer look.

Comment 8 Manoj Pillai 2014-10-24 12:01:01 UTC
Summary:
Setting the vm parameters dirty_ratio and dirty_background_ratio to RHEL defaults is showing a big improvement in performance with thin-p. The hypothesis in https://bugzilla.redhat.com/show_bug.cgi?id=1149765#c4 seems to be correct: the performance degradation we are seeing with RHS on thin-p seems to be primarily because bio sorting, as a way of reducing fragmentation, is not working with the VM values that RHS uses.

Results with a new kernel that has a number of recent enhancements. See https://bugzilla.redhat.com/show_bug.cgi?id=1142773.

uname -r: 2.6.32-505.el6.bz1145230_v6.x86_64
glusterfs-3.6.0.28-1.el6rhs.x86_64

Other settings:
* adaptive read-ahead turned off in RAID controller
* read-ahead on LV set to 4M
* brick configuration follows steps in https://bugzilla.redhat.com/show_bug.cgi?id=1100514#c11 to fix alignment issue.
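The 4M read-ahead setting above can be applied with `blockdev`, which takes a count of 512-byte sectors (LV path is hypothetical):

```shell
# 4 MiB expressed in 512-byte sectors:
RA_SECTORS=$((4 * 1024 * 1024 / 512))
echo "$RA_SECTORS"

# Requires root and the device to exist:
# blockdev --setra "$RA_SECTORS" /dev/vg_brick/brick1
# blockdev --getra /dev/vg_brick/brick1   # verify
```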

Distributed iozone with 4 clients, 2 RHS servers, distribute volume, each brick is 256g, thin pool is 512g. RAID-6 storage stripe_unit=128K, 10 data disks. 16 iozone threads, each writing 20g.

Results with VM parameters at values set by RHS
[vm.dirty_ratio=5, vm.dirty_background_ratio=2]

Traditional LV:
        Children see throughput for 16 initial writers  = 1841893.27 KB/sec
        Children see throughput for 16 readers          = 1623057.94 KB/sec

Thin LV 256K chunk size:
        Children see throughput for 16 initial writers  = 2276318.85 KB/sec
        Children see throughput for 16 readers          =  865812.68 KB/sec

Thin LV 1280K chunk size:
        Children see throughput for 16 initial writers  = 2274958.77 KB/sec
        Children see throughput for 16 readers          =  891584.84 KB/sec

Results with RHEL defaults for VM parameters
[vm.dirty_ratio=20, vm.dirty_background_ratio=10]

Thin LV 256K chunk size:
        Children see throughput for 16 initial writers  = 1808394.41 KB/sec
        Children see throughput for 16 readers          = 1462757.52 KB/sec

Thin LV 1280K chunk size:
        Children see throughput for 16 initial writers  = 1770513.55 KB/sec
        Children see throughput for 16 readers          = 1443105.26 KB/sec

Results with RHEL defaults for VM parameters + server RPC throttling turned off
[vm.dirty_ratio=20, vm.dirty_background_ratio=10]
[gluster volume set $rhs_volume server.outstanding-rpc-limit 0]

Traditional LV:
        Children see throughput for 16 initial writers  = 1735551.26 KB/sec
        Children see throughput for 16 readers          = 1613528.16 KB/sec

Thin LV 256K chunk size:
        Children see throughput for 16 initial writers  = 1754890.68 KB/sec
        Children see throughput for 16 readers          = 1465261.98 KB/sec

Thin LV 1280K chunk size:
        Children see throughput for 16 initial writers  = 1747974.87 KB/sec
        Children see throughput for 16 readers          = 1462885.48 KB/sec

The read-throughput drop with RHS on thin-p is about 10% with the RHEL VM parameters.

There is a drop in write throughput with thin-p with the RHEL default VM parameter values (compared to thin-p with the RHS default VM parameter values). Not sure why. We have seen this before with kernels that did not have the dm-thin I/O throttling. See: https://bugzilla.redhat.com/show_bug.cgi?id=1142773#c32.

Comment 10 Mike Snitzer 2014-10-24 14:12:52 UTC
It is increasingly looking like the combination of revised (more aggressive) VM tuning and the changes from bug#1142773 and bug#1145230 has resolved the reported performance discrepancy.

This bug should probably transition to a RHS-only bug at this point (to get the default VM tuning, etc changed).

Comment 11 Manoj Pillai 2014-10-27 17:06:25 UTC
Ben, is it feasible for RHS to use RHEL defaults for dirty_ratio and dirty_background_ratio (20 and 10, resp.)? See comment 8 for the performance impact with thin-p. I'm not familiar with why the current values were chosen, but I see them coming from /etc/sysctl.d/vdsm.conf:
#Set dirty page parameters
vm.dirty_ratio = 5
vm.dirty_background_ratio = 2
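A possible way to override the vdsm.conf values without editing that file is a later-sorted drop-in (file name hypothetical; /etc/sysctl.d files are applied in lexical order, so a 99- prefix wins):

```shell
# Hedged sketch: restore RHEL defaults via a drop-in that sorts after
# vdsm.conf, then apply it immediately.
cat > /etc/sysctl.d/99-dirty-rhel-defaults.conf <<'EOF'
vm.dirty_ratio = 20
vm.dirty_background_ratio = 10
EOF
sysctl -p /etc/sysctl.d/99-dirty-rhel-defaults.conf
```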

Comment 12 Ben England 2014-11-05 13:09:39 UTC
It will increase stalls while dirty pages are being written out.  Remember 20% of 128 GB (typical Dell server box now) is 24 GB of dirty pages, that can take minutes to flush.  So if that's the only hope for good write performance, I'm not crazy about it.  Let's test it.

Comment 13 Mike Snitzer 2014-12-01 19:06:04 UTC
Manoj and/or Ben England,

Where do we stand on this report?  Should Ben Turner weigh in on the performance of latest gluster (6.6.z w/ thinp performance improvements) with thin vs thick LV?

Comment 14 Manoj Pillai 2014-12-01 19:34:41 UTC
I'll look at some vm parameter values/tuning that might be acceptable (in light of comment 12). Moving this to an RHS bz while that is in progress and in the hope that dm-thinp changes will not be needed.

Comment 16 Manoj Pillai 2015-01-12 14:27:19 UTC
I wanted to look at this problem in the following cases:

* Disk bandwidth is much lower compared to network bandwidth: in
this scenario we want the dirty_ratio and dirty_background_ratio
parameters to have small values to avoid problems from trying to
flush out large amounts of dirty data to a block device with
limited bandwidth.  However, even with the smaller values for the
VM params, because network bandwidth exceeds disk bandwidth,
there should be enough queue build-up under heavy writes for
bio-sort to work well in preventing dm-thinp fragmentation.

* Disk bandwidth is comparable to or greater than network
bandwidth: in this scenario, it is probably safer to bump up the
values of the VM params, in an attempt to improve dm-thinp
performance as in comment 8. The question is how effective are
higher VM param values in preventing dm-thinp fragmentation.

To experiment more easily, I removed the standard RAID-6
configuration and used a JBOD config at the RAID controller
level. So there are 12 disks available. For the low disk
bandwidth case, I used 4 disks aggregated into an LVM2 striped LV
for storage. For the higher disk bandwidth case, I used (a) 8-way
striped LV (b) 12-way striped LV.

For dm-thinp, I generally used a block size that varies with the
striping factor. For 4-way striped LV, default is a 256K stripe
unit size, and a dm-thinp block size of 1 MB. For 8-way, 128K
stripe unit size and dm-thinp block size of 1MB. For 12-way, 128K
stripe unit size and dm-thinp block size of 1.5MB.
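The block-size choices above follow a simple rule: dm-thinp block size = one full stripe = stripe unit size x number of stripes:

```shell
# dm-thinp block size = stripe_unit_KB * stripe_count (one full stripe):
echo "$((256 * 4))K"     # 4-way,  256K unit  -> 1024K (1 MB)
echo "$((128 * 8))K"     # 8-way,  128K unit  -> 1024K (1 MB)
echo "$((128 * 12))K"    # 12-way, 128K unit  -> 1536K (1.5 MB)
```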

These tests are with 2.6.32-504.3.2.el6.x86_64, which includes
the latest dm-thinp improvements. RHS is
glusterfs*-3.6.0.28-1.el6rhs.x86_64.

The main observation was that as the available disk bandwidth
increases, thin LV read performance suffers in this test compared
to traditional LV, even when the VM params are raised from the
RHS defaults to the RHEL defaults. So, in general, you cannot tune 
your way to better performance. Results to follow.

Comment 17 Manoj Pillai 2015-01-12 14:48:14 UTC
Followup to comment 16

The results below are from a distributed iozone run with 2 clients, writing to a fuse mounted gluster volume, on a single server, single brick. read-ahead on brick block device set to 4096K.

When the brick is an LV striped over 4 disks:

Traditional LV, vm params=20,10 (dirty_ratio=20, dirty_background_ratio=10):
        Children see throughput for  8 initial writers  =  571127.94 KB/sec
        Children see throughput for  8 readers          =  434926.04 KB/sec

Thin LV, 1M chunk size, vm params=20,10:
        Children see throughput for  8 initial writers  =  534594.28 KB/sec
        Children see throughput for  8 readers          =  414526.24 KB/sec

Traditional LV, vm params=5,2:
        Children see throughput for  8 initial writers  =  585716.81 KB/sec
        Children see throughput for  8 readers          =  436091.39 KB/sec

Thin LV, 1M chunk size, vmparams=5,2:
        Children see throughput for  8 initial writers  =  610048.60 KB/sec
        Children see throughput for  8 readers          =  431375.24 KB/sec

So, thin LV is holding its own pretty well in this case, irrespective of the VM param values.

When the brick is an LV striped over 12 disks:

Traditional LV, vm params=20,10:
        Children see throughput for  8 initial writers  = 1034308.47 KB/sec
        Children see throughput for  8 readers          =  922192.18 KB/sec

Thin LV, 1536K chunk size, vm params=20,10:
        Children see throughput for  8 initial writers  = 1108934.64 KB/sec
        Children see throughput for  8 readers          =  454208.50 KB/sec

Thin LV, 256K chunk size, vm params=20,10:
        Children see throughput for  8 initial writers  = 1086965.48 KB/sec
        Children see throughput for  8 readers          =  678781.98 KB/sec

[I've been seeing the fixed chunk size (not adapted to LV striping parameters) of 256K give better performance on the read test in many cases. Not sure why that is.]

So, in this case, even at the higher VM param values, there is a huge gap between thin and traditional LV.

When the VM param values are lowered to the current RHS defaults, things get worse with thin-p:

Traditional LV, vm params=5,2:
        Children see throughput for  8 initial writers  = 1108637.48 KB/sec
        Children see throughput for  8 readers          =  923576.42 KB/sec

Thin LV, 1536K chunk size, vm params=5,2:
        Children see throughput for  8 initial writers  = 1110083.64 KB/sec
        Children see throughput for  8 readers          =  429852.82 KB/sec

Thin LV, 256K chunk size, vm params=5,2:
        Children see throughput for  8 initial writers  = 1108597.75 KB/sec
        Children see throughput for  8 readers          =  423696.85 KB/sec

Comment 18 Manoj Pillai 2015-01-12 16:03:14 UTC
(In reply to Manoj Pillai from comment #17)

Same test. This time with gluster brick on LV striped across 8 disks:

Traditional LV:
vm params =20,10:
        Children see throughput for  8 initial writers  =  887876.23 KB/sec
        Children see throughput for  8 readers          =  725519.41 KB/sec

vm params=5,2:
        Children see throughput for  8 initial writers  =  916983.81 KB/sec
        Children see throughput for  8 readers          =  726719.46 KB/sec

Thin LV, 1024K chunk size:
vm params=20,10:
        Children see throughput for  8 initial writers  =  931390.66 KB/sec
        Children see throughput for  8 readers          =  469504.76 KB/sec

vmparams=5,2:
        Children see throughput for  8 initial writers  = 1106232.27 KB/sec
        Children see throughput for  8 readers          =  379325.10 KB/sec

Thin LV, 256K chunk size:
vm params=20,10:
        Children see throughput for  8 initial writers  =  945816.94 KB/sec
        Children see throughput for  8 readers          =  681052.80 KB/sec

vm params=5,2:
        Children see throughput for  8 initial writers  = 1105959.50 KB/sec
        Children see throughput for  8 readers          =  399721.76 KB/sec

Note: for thin LV, 1024K chunk size is derived as 128K chunk size * 8. mkfs.xfs is passed su=128K,sw=8. Similarly, for traditional LV, mkfs.xfs is passed su=128K,sw=8.
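The 8-way geometry in this comment could be set up along these lines (hypothetical names; thin chunk size matched to one full stripe, su x sw):

```shell
# 8-way stripe: su=128K, sw=8, so one full stripe is 128K * 8 = 1024K.
# Thin pool chunk size matched to the full stripe:
lvcreate -L 500G -c 1024K -T vg_brick/pool8
lvcreate -V 256G -T vg_brick/pool8 -n brick8
mkfs.xfs -d su=128k,sw=8 /dev/vg_brick/brick8

# Traditional LV comparison: same striping and XFS geometry:
# lvcreate -L 256G -i 8 -I 128K -n brick8_thick vg_brick
# mkfs.xfs -d su=128k,sw=8 /dev/vg_brick/brick8_thick
```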

Comment 19 Mike Snitzer 2015-01-12 16:48:50 UTC
What if the same tests are executed locally?  Is there a concurrency problem with gluster's network transport?

Comment 20 Manoj Pillai 2015-01-12 20:36:13 UTC
So gluster over thin LV is showing degradation in read performance compared to traditional LV, because of fragmentation. This kind of degradation is not seen when iozone is run locally. Can we see this with NFS? Here are the results for the same test, but this time with kernel NFS (v4). Exported FS is XFS, created over LV striped over 8 disks. As before, there are 2 clients mounting the NFS exported FS. Distributed iozone is running on the clients, total 8 threads.
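The kernel NFS variant of the test could look roughly like this (paths and export options are hypothetical, not recorded in the bug):

```shell
# Server: export the XFS filesystem on the 8-way striped LV.
mkdir -p /export/brick
mount /dev/vg_brick/brick8 /export/brick
echo '/export/brick *(rw,no_root_squash)' >> /etc/exports
exportfs -ra

# Each of the 2 clients mounts it over NFSv4:
mount -t nfs4 server1:/export/brick /mnt/nfs
```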

Traditional LV, vm params=20,10:
        Children see throughput for  8 initial writers  =  608313.24 KB/sec
        Children see throughput for  8 readers          =  645032.21 KB/sec

Thin LV, 1024K chunk size, vm params=20,10:
        Children see throughput for  8 initial writers  =  670037.65 KB/sec
        Children see throughput for  8 readers          =  522059.73 KB/sec

Thin LV, 256K chunk size, vm params=20,10:
        Children see throughput for  8 initial writers  =  606660.95 KB/sec
        Children see throughput for  8 readers          =  507595.54 KB/sec

So, that's about 20% degradation with thin LV with VM parameters at the RHEL defaults. Degradation is much worse if you tune down the VM dirty parameters (read performance with traditional LV is not affected by the VM dirty parameters).

Comment 21 Manoj Pillai 2015-01-12 20:48:06 UTC
IMO, these results are showing the need for dm-thinp to have some fragmentation prevention mechanism in addition to bio-sort, particularly to handle the networked storage case where queues may not build up like in the local access case.

XFS seems to put in a lot of effort to prevent fragmentation (e.g. speculative preallocation: http://oss.sgi.com/archives/xfs/2014-04/msg00083.html). Maybe dm-thinp needs something along similar lines in order to be able to close the gap with traditional LV on large file sequential writes that result in thinp allocation.

Comment 22 Mike Snitzer 2015-01-12 21:07:23 UTC
(In reply to Manoj Pillai from comment #20)
> So gluster over thin LV is showing degradation in read performance compared
> to traditional LV, because of fragmentation. This kind of degradation is not
> seen when iozone is run locally.

My point is that there is clearly suboptimal write order occurring with the network storage (be it NFS or gluster).  Understanding _why_ that is occurring is the first step to seeing how we might address anything (be it in thinp or elsewhere).

(In reply to Manoj Pillai from comment #21)
> IMO, these results are showing the need for dm-thinp to have some
> fragmentation prevention mechanism in addition to bio-sort, particularly to
> handle the networked storage case where queues may not build up like in the
> local access case.

The bio-sort patch may be helping a lot for local runs.  It would be interesting to have the load generator throttle IOs to some established in-flight IO upper bound (have to believe fio can do this) and see if local performance also drops.

The changes we did recently to recover write performance (delay submission of all bios for a given thinp block by locking them in the same bio prison) served to optimize raid storage (by having a full stripe's worth of IO hit the raid controller's writeback cache at roughly the same time).  BUT it didn't take steps to reorder the way IOs to a given block are mapped on disk.

Will need to discuss with Joe whether there are steps that can be taken to optimize the ondisk mapping of data blocks at the time the bios are released from the bio prison.  This is just one related idea that builds on what we have.  It may be a bad idea or challenging to implement for some reason I'm missing at the moment.  We'll see.

Comment 23 Joe Thornber 2015-01-13 07:42:31 UTC
Sounds like the over provisioning patch is the next thing to try.

Comment 24 Manoj Pillai 2015-01-13 13:42:48 UTC
(In reply to Manoj Pillai from comment #20)
> Here are the
> results for the same test, but this time with kernel NFS (v4). Exported FS
> is XFS, created over LV striped over 8 disks. As before, there are 2 clients
> mounting the NFS exported FS. Distributed iozone is running on the clients,
> total 8 threads.
> 
> Traditional LV, vm params=20,10:
>         Children see throughput for  8 initial writers  =  608313.24 KB/sec
>         Children see throughput for  8 readers          =  645032.21 KB/sec
> 
> Thin LV, 1024K chunk size, vm params=20,10:
>         Children see throughput for  8 initial writers  =  670037.65 KB/sec
>         Children see throughput for  8 readers          =  522059.73 KB/sec
> 
> Thin LV, 256K chunk size, vm params=20,10:
>         Children see throughput for  8 initial writers  =  606660.95 KB/sec
>         Children see throughput for  8 readers          =  507595.54 KB/sec

Thin LV, 64M chunk size, vm params=20,10:
        Children see throughput for  8 initial writers  =  642213.90 KB/sec
        Children see throughput for  8 readers          =  627726.65 KB/sec

(In reply to Joe Thornber from comment #23)
> Sounds like the over provisioning patch is the next thing to try.

That should do it for these tests. See result above with larger dm-thinp block size of 64M. Will have to see though if we can get over provisioning to work well in the presence of snapshots.

Comment 25 Manoj Pillai 2015-04-04 18:01:26 UTC
The RHGS specific tuning changes discussed here i.e. setting vm.dirty_ratio and vm.dirty_background_ratio to RHEL defaults, have been made in RHGS 3.0.4. Once that is verified, I'll close this bug.

The dm-thinp changes can be tracked in bz 1208974.

Comment 27 Manoj Pillai 2015-04-14 11:21:59 UTC
(In reply to Manoj Pillai from comment #25)
> The RHGS specific tuning changes discussed here i.e. setting vm.dirty_ratio
> and vm.dirty_background_ratio to RHEL defaults, have been made in RHGS
> 3.0.4. Once that is verified, I'll close this bug.
> 

I verified that the vm_dirty parameters are getting set to the RHEL defaults in RHGS 3.0.4. Also, the admin guide discusses tuning of these parameters: https://documentation-devel.engineering.redhat.com/site/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Memory.html#Virtual_Memory_Parameters

I'm going to close this bz.

Comment 28 Vivek Agarwal 2015-04-15 08:45:06 UTC
Closing based on comment 27 by Manoj.

