
Bug 1021955

Summary: Copy/Sync percentage drops when maxrecoveryrate is changed on a RAID LV
Product: Red Hat Enterprise Linux 6
Component: lvm2
Version: 6.5
Status: CLOSED WORKSFORME
Severity: low
Priority: low
Reporter: Nenad Peric <nperic>
Assignee: Jonathan Earl Brassow <jbrassow>
QA Contact: Cluster QE <mspqa-list>
CC: agk, dwysocha, heinzm, jbrassow, msnitzer, nperic, prajnoha, prockai, thornber, zkabelac
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-04-09 10:53:29 UTC
Bug Blocks: 1075263

Attachments:
Test I'm using to verify reported percentages do not drop. (flags: none)

Description Nenad Peric 2013-10-22 11:54:47 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:

95% of the time 

Steps to Reproduce:

Create several RAID LVs with a maxrecoveryrate of, say, 512.
Wait a bit while the sync gets under way.
Change the recovery rate of some of the LVs; the sync percentage will often drop below its value from before the change.
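
A minimal shell sketch of these steps (this assumes a VG named "raid" with space for five 1 GiB RAID1 LVs; the names, sizes, and sleep interval are illustrative, not the reporter's exact commands):

# Create several RAID1 LVs, each with recovery throttled to 512:
for i in 0 1 2 3 4; do
    lvcreate --type raid1 -m 1 -L 1G -n lvol$i --maxrecoveryrate 512 raid
done

sleep 10                          # let the initial sync get under way
lvs -o name,copy_percent raid     # note the Cpy%Sync values

lvchange raid/lvol1 --maxrecoveryrate 2048
lvs -o name,copy_percent raid     # Cpy%Sync for lvol1 may now be lower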


Actual results:
[root@virt-008 ~]# lvchange raid/lvol3 --maxrecoveryrate 2048
  Logical volume "lvol3" changed.

[root@virt-008 ~]# lvs
  LV      VG         Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lvol0   raid       rwi-a-r---   1.00g                                 8.59        
  lvol1   raid       rwi-a-r---   1.00g                                 8.59        
  lvol2   raid       rwi-a-r---   1.00g                                 8.59        
  lvol3   raid       rwi-a-r---   1.00g                                 8.98        
  lvol4   raid       rwi-a-r---   1.00g                                 8.59        

[root@virt-008 ~]# lvchange raid/lvol1 --maxrecoveryrate 2048
  Logical volume "lvol1" changed.
[root@virt-008 ~]# lvs
  LV      VG         Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lvol0   raid       rwi-a-r---   1.00g                                 9.38        
  lvol1   raid       rwi-a-r---   1.00g                                 6.25        
  lvol2   raid       rwi-a-r---   1.00g                                 9.38        
  lvol3   raid       rwi-a-r---   1.00g                                10.94        
  lvol4   raid       rwi-a-r---   1.00g                                 8.98        

The sync percentage for lvol1 dropped from 8.59 to 6.25.
When the percentage is still low (up to about 5), it drops all the way to 0.


Expected results:

That the percentage stays the same (or grows) after the lvchange, since the recovery rate setting should only speed up or slow down the sync, not reverse it (if I understood it correctly).


Tested with:

lvm2-2.02.100-6.el6.x86_64

This behavior was first seen in verification of Bug 969171 (https://bugzilla.redhat.com/show_bug.cgi?id=969171#c9)

Comment 2 RHEL Program Management 2013-10-27 08:05:32 UTC
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.

Comment 4 Jonathan Earl Brassow 2014-04-01 17:52:01 UTC
Does the recovery rate have to be changed in a particular direction?  My script is currently increasing the rate, but never decreasing.

I'll attach the script I'm using.

Comment 5 Jonathan Earl Brassow 2014-04-01 17:53:58 UTC
Created attachment 881466 [details]
Test I'm using to verify reported percentages do not drop.

My test currently only increases recovery rate.  It also assumes a VG name of 'vg'.
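
The attachment itself is not reproduced here, but a rough sketch of such a check might look like the following (assumptions: a VG named "vg", one-second polling, a 60-sample run; the loop fails if any LV's reported Cpy%Sync drops between samples):

#!/bin/bash
# Poll Cpy%Sync for every LV in VG "vg"; fail if a value ever drops.
declare -A last
for sample in $(seq 1 60); do
    while read -r lv pct; do
        pct=${pct%%.*}                     # integer part of e.g. "8.59"
        [ -z "$pct" ] && continue          # skip LVs with no sync percentage
        if [ -n "${last[$lv]}" ] && (( 10#$pct < 10#${last[$lv]} )); then
            echo "FAIL: $lv dropped from ${last[$lv]} to $pct"
            exit 1
        fi
        last[$lv]=$pct
    done < <(lvs --noheadings -o name,copy_percent vg)
    sleep 1
done
echo "PASS: no percentage drops observed"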

Comment 6 Jonathan Earl Brassow 2014-04-02 04:52:17 UTC
(In reply to Jonathan Earl Brassow from comment #5)
> Created attachment 881466 [details]
> Test I'm using to verify reported percentages do not drop.
> 
> My test currently only increases recovery rate.  It also assumes a VG name
> of 'vg'.

If you want this test to work well, you will probably have to replace

rate=$((${a[1]} + 100))

with

rate=$((10#${a[1]} + 100))

Otherwise, percent values with a leading zero are parsed as octal, so numbers like "08" and "09" make the arithmetic fail.
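
To illustrate the pitfall with a hypothetical value:

pct="08"                     # a zero-padded integer, as the script may extract from lvs
# rate=$(( pct + 100 ))      # fails: the leading zero selects octal, and 8 is not an octal digit
rate=$(( 10#$pct + 100 ))    # the 10# prefix forces base-10 parsing
echo "$rate"                 # prints 108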

Comment 7 Nenad Peric 2014-04-03 13:29:42 UTC
I was not increasing or decreasing.
I was actually sending the same number to the change command, as stated in the opening comment. 
However, I'll test it again with the newest build, both increasing and decreasing the rate, and see what happens.

Comment 8 Jonathan Earl Brassow 2014-04-08 20:59:39 UTC
I've tested again, creating with a rate of 512 and changing to a rate of 2048, as in the opening comment. I still cannot reproduce.

Comment 9 Nenad Peric 2014-04-09 08:51:27 UTC
Ok, I will re-test this today with the newest LVM build and post back the results. Maybe it is no longer happening.

Comment 10 Nenad Peric 2014-04-09 09:51:10 UTC
I cannot seem to reproduce this anymore.

Whatever has changed in the meantime, the percentage now stays the same or grows faster, as would be expected when the recovery rate is increased.

I suppose this bug can now be closed, and if I run into it again in the future, I will open a new one (or re-open this one).