Bug 1532697 - Default VDO logical size is smaller than "usable" size -> under-provisioning
Summary: Default VDO logical size is smaller than "usable" size -> under-provisioning
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: vdo
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: sclafani
QA Contact: Jakub Krysl
URL:
Whiteboard:
Depends On:
Blocks: 1645690
 
Reported: 2018-01-09 15:49 UTC by Jakub Krysl
Modified: 2019-08-06 13:08 UTC
CC List: 8 users

Fixed In Version: 6.1.2.22
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1645690
Environment:
Last Closed: 2019-08-06 13:08:04 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System: Red Hat Product Errata    ID: RHBA-2019:2233    Private: 0
Priority: None    Status: None    Summary: None    Last Updated: 2019-08-06 13:08:14 UTC

Description Jakub Krysl 2018-01-09 15:49:21 UTC
Description of problem:
Description of problem:
When a VDO volume is created with default settings, its logical size is calculated so as not to over-provision (the result of BZ 1519330). But now the default logical size is actually smaller than the "usable" size, reducing the physical space the user can access. The default logical size should exactly cover the usable physical size, with neither under- nor over-provisioning.

# vdo create --name vdo --device /dev/mapper/test-small
Creating VDO vdo
Starting VDO vdo
Starting compression on VDO vdo
VDO instance 15 volume is ready at /dev/mapper/vdo
# vdostats
Device               1K-blocks      Used Available Use% Space saving%
/dev/mapper/vdo        5242880   3150444   2092436  60%          100%
# vdo status | grep logical
        logical blocks: 522989
        logical blocks used: 0
# dd if=/dev/urandom of=/dev/mapper/vdo bs=4k count=522989 status=progress
2094321664 bytes (2.1 GB) copied, 11.000174 s, 190 MB/s
522989+0 records in
522989+0 records out
2142162944 bytes (2.1 GB) copied, 19.9836 s, 107 MB/s
# vdostats
Device               1K-blocks      Used Available Use% Space saving%
/dev/mapper/vdo        5242880   5242400       480  99%            0%
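
For reference, the 480K gap follows from simple arithmetic on the figures above (shell arithmetic; all numbers are taken from the vdostats and vdo status output):

### default logical size in K: 522989 blocks * 4K
# echo $(( 522989 * 4 ))
2091956
### shortfall vs. the 2092436K initially available
# echo $(( 2092436 - 2091956 ))
480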

Increasing the logical size by slightly less than 480K (some space must remain for "maintenance VDO data", see BZ 1528270, comment #2) produces no I/O errors, successfully writes data to the VDO volume (provided BZ 1532682 is avoided), and reduces the "available" counter:
# vdo create --name vdo --device /dev/mapper/test-small --vdoLogicalSize 2091960K
Creating VDO vdo
Starting VDO vdo
Starting compression on VDO vdo
VDO instance 16 volume is ready at /dev/mapper/vdo
# vdo status | grep logical
        logical blocks: 522990
        logical blocks used: 0
# dd if=/dev/urandom of=/dev/mapper/vdo bs=4k count=522990 status=progress
2073825280 bytes (2.1 GB) copied, 11.000090 s, 189 MB/s
522990+0 records in
522990+0 records out
2142167040 bytes (2.1 GB) copied, 36.0527 s, 59.4 MB/s
# vdostats
Device               1K-blocks      Used Available Use% Space saving%
/dev/mapper/vdo        5242880   5242404       476  99%            0%

Version-Release number of selected component (if applicable):
vdo-6.1.0.106

How reproducible:
100%

Steps to Reproduce:
1. Create a VDO volume with the default logical size
2. Fill it completely with data
3. Check the "available" 1K-blocks count in vdostats (a scripted version of these steps is sketched below)
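
A minimal reproduction sketch, assuming a backing device at /dev/mapper/test-small as in the transcript above; the device path will differ on other systems, and extracting the block count with awk is my own shorthand, not part of the original report:

# vdo create --name vdo --device /dev/mapper/test-small
# BLOCKS=$(vdo status --name vdo | awk '/logical blocks:/ {print $3; exit}')
# dd if=/dev/urandom of=/dev/mapper/vdo bs=4k count=$BLOCKS
# vdostats /dev/mapper/vdo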

Actual results:
"available 1-K blocks" > 0

Expected results:
"available 1-K blocks" = 0 and no I/O errors when VDO is filled

Additional info:

Comment 2 Sweet Tea Dorminy 2018-01-09 15:57:43 UTC
The 'usable space' when VDO is created is actually slightly greater than the usable space at later times, because VDO only allocates its block map as it receives writes to never-before-used logical addresses. If you fill the logical space, I believe the calculation is such that you'll have between 0 and 5 usable blocks left over.

Comment 3 Jakub Krysl 2018-01-10 12:55:11 UTC
In this case the logical space is completely filled (writing exactly as much data as there are logical blocks); if I try to write more, I get "out of space" from dd. So at this point I would expect 0 to 5 usable blocks left, but I am seeing 480 1K blocks left, which is much more.

I am even able to increase the logical size to the point where I get 0 "available" 1-K blocks and still no I/O errors:
# vdo create --name vdo --device /dev/mapper/test-small
Creating VDO vdo
Starting VDO vdo
Starting compression on VDO vdo
VDO instance 1 volume is ready at /dev/mapper/vdo
# vdo status | grep ogical
    Logical size: 2091956K
    Logical threads: 1
        logical blocks: 522989
        logical blocks used: 0
### new logical size: 2091956K + 480K = 2092436K
# vdo growLogical --name vdo --vdoLogicalSize 2092436K
# vdostats
Device               1K-blocks      Used Available Use% Space saving%
/dev/mapper/vdo        5242880   3150444   2092436  60%          100%
# dd if=/dev/urandom of=/dev/mapper/vdo bs=4K count=523109 status=progress
2055581696 bytes (2.1 GB) copied, 11.000075 s, 187 MB/s
523109+0 records in
523109+0 records out
2142654464 bytes (2.1 GB) copied, 30.0255 s, 71.4 MB/s
# vdo status | grep ogical
    Logical size: 2092436K
    Logical threads: 1
        logical blocks: 523109
        logical blocks used: 523109
# vdostats
Device               1K-blocks      Used Available Use% Space saving%
/dev/mapper/vdo        5242880   5242880         0 100%            0%

So this means these 480 1K blocks are usable by the user but not accessible by default, because the default VDO logical size is smaller than the "usable" physical space.

Comment 4 Sweet Tea Dorminy 2018-01-11 21:10:55 UTC
I was actually incorrect.

Each of our block map blocks stores 812 logical addresses' mappings. We only deal in whole block map blocks. So if we added 812 more logical addresses, we'd need (up to 5 more block map blocks) + (812 more physical blocks), which we don't have. Thus the number of logical addresses picked.
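
To put a number on that (simple shell arithmetic; the 812-mappings-per-block figure comes from the comment above, and rounding up to whole blocks is my own illustration):

### block map leaf blocks needed to map the 522989 default logical addresses
# echo $(( (522989 + 811) / 812 ))
645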

Comment 5 Sweet Tea Dorminy 2018-01-11 21:12:41 UTC
"We only deal in whole block map blocks."... (I think, for the purposes of picking a default logical size.)

Comment 6 sclafani 2018-01-11 23:29:11 UTC
I believe the problem isn't leaf map page granularity, or at least, not that simply. We can specify an exact number of logical blocks, even if it's not a multiple of 812 (522989 isn't a multiple of 812). The issue is that the calculation is a recurrence, and one that might oscillate. We chose to do the simplest first-order approximation of reserving enough logical blocks to address every possible data block. That reserves too many, since the reserved blocks themselves will never need to be mapped. We thought at the time the difference between the simplest approximation and a more complicated one wasn't significant enough (as above, fewer than one block per 10000) to justify the complexity.
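
For illustration only, a toy model of that recurrence (hypothetical, not VDO's actual formula): suppose mapping L logical blocks costs ceil(L/812) block map blocks out of a pool of P data blocks. The exact logical size is then a fixed point of L = P - ceil(L/812), while the first-order approximation charges the map for all P blocks up front and so reserves slightly too much:

# P=523109   ### hypothetical pool of data blocks
### first-order approximation: map space for every possible data block
# L=$(( P - (P + 811) / 812 )); echo $L
522464
### iterating the exact recurrence converges one block higher
# for i in 1 2 3; do L=$(( P - (L + 811) / 812 )); done; echo $L
522465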

Comment 9 sclafani 2018-11-02 21:34:45 UTC
The problem wasn't the approximation, but that for BZ 1519330 we used a function that measures the amount of space needed in memory, which includes two 4K blocks for each of 60 tree roots that are not allocated from the pool of data blocks. 2 * 60 * 4K = 480K.
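
In block terms (simple arithmetic): 480K is 120 4K blocks, which matches the difference between the default logical size and the size the volume was grown to in comment 3 (523109 - 522989 = 120):

# echo $(( 2 * 60 * 4 ))K
480K
# echo $(( 523109 - 522989 ))
120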

Comment 11 Jakub Krysl 2019-04-29 16:07:59 UTC
I used a 6G LV as the backing device to test this:

# vdo create --name vdo --device /dev/mapper/vg-lv
Creating VDO vdo
Starting VDO vdo
Starting compression on VDO vdo
VDO instance 7 volume is ready at /dev/mapper/vdo

# dd if=/dev/urandom of=/dev/mapper/vdo bs=4K count=523108 status=progress
2130739200 bytes (2.1 GB) copied, 35.001183 s, 60.9 MB/s
523108+0 records in
523108+0 records out
2142650368 bytes (2.1 GB) copied, 69.3447 s, 30.9 MB/s
# vdostats
Device               1K-blocks      Used Available Use% Space saving%
/dev/mapper/vdo        6291456   6291452         4  99%            0%



It seems there is still one 4K block left. With the new VDO version:
# vdo growLogical --name vdo --vdoLogicalSize 2092432K
vdo: ERROR - Can't grow a VDO volume by less than 4096 bytes

# vdo growLogical --name vdo --vdoLogicalSize 2092436K

# dd if=/dev/urandom of=/dev/mapper/vdo bs=4K count=523109 status=progress
2103742464 bytes (2.1 GB) copied, 35.001183 s, 60.1 MB/s
523109+0 records in
523109+0 records out
2142654464 bytes (2.1 GB) copied, 70.7104 s, 30.3 MB/s

# vdostats
Device               1K-blocks      Used Available Use% Space saving%
/dev/mapper/vdo        6291456   6291456         0 100%            0%


So I am still able to grow the VDO volume and write random data to it without any errors. The leftover is much smaller now, just one block. Is it possible to change the default logical size to include this block too? Also, what is the reason behind this one? According to #c9 exactly 480K was left out for a good reason, so where did this last block come from?
Thanks

Comment 12 sclafani 2019-04-29 19:46:42 UTC
The 480K was a bug that got fixed, which is what I tried to say in #c9. The explanation of why it can't be exact is in https://bugzilla.redhat.com/show_bug.cgi?id=1645690#c3

Comment 13 Jakub Krysl 2019-04-30 07:40:29 UTC
(In reply to sclafani from comment #12)
> The 480K was a bug that got fixed, which is what I tried to say in #c9. The
> explanation of why it can't be exact is in
> https://bugzilla.redhat.com/show_bug.cgi?id=1645690#c3

Thanks, setting to verified.

Comment 15 errata-xmlrpc 2019-08-06 13:08:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2233

