Bug 1921844 - RBD/NFS attached encrypted volume disk size not the same as other backend encrypted disk size
Keywords:
Status: NEW
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 16.1 (Train)
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Cinder Bugs List
QA Contact: Evelina Shames
Docs Contact: Andy Stillman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-01-28 17:34 UTC by bkopilov
Modified: 2023-07-30 06:53 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OSP-710 0 None None None 2022-01-14 14:56:36 UTC

Description bkopilov 2021-01-28 17:34:54 UTC
Description of problem:
It looks like when an encrypted volume and a clear volume are attached to an instance, both initially report the same disk size in lsblk.
After extending the in-use volumes, however, the encrypted volume's size no longer matches the clear one's (lsblk shows it smaller by about 2 MB).
[Titan92 setup with a fix for the permission issue]
Why?

# Created two empty volumes: one clear, size = 1G, and one encrypted with LUKS, size = 1G.
Created an instance and attached both volumes to it.
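Cinder sizes are in GiB, so a 1G volume should appear as 1073741824 bytes in the `lsblk -b` output below (quick arithmetic check):

```shell
# 1 GiB expressed in bytes, as lsblk -b reports it
echo $((1 * 1024 * 1024 * 1024))   # 1073741824
```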

From lsblk -lb

$ lsblk -lb
NAME  MAJ:MIN RM       SIZE RO TYPE MOUNTPOINT
vda   253:0    0 2147483648  0 disk
vda1  253:1    0 2138029568  0 part /
vda15 253:15   0    8388608  0 part
vdb   253:16   0 1073741824  0 disk
vdc   253:32   0 1073741824  0 disk



### Both disks report the same size (1073741824 bytes).
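A quick way to compare the two sizes from the captured lsblk output (a sketch; the values are pasted from the listing above):

```shell
# Captured `lsblk -lb` lines for the two data disks (from above)
lsblk_out='vdb   253:16   0 1073741824  0 disk
vdc   253:32   0 1073741824  0 disk'

# awk pulls the SIZE column (4th field) for each device
vdb=$(echo "$lsblk_out" | awk '$1 == "vdb" {print $4}')
vdc=$(echo "$lsblk_out" | awk '$1 == "vdc" {print $4}')

if [ "$vdb" -eq "$vdc" ]; then
    echo "same size"
else
    echo "differ by $((vdb - vdc)) bytes"
fi
```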


# cinder extend command for both volumes:

cinder --os-volume-api-version 3.59  extend 537ae37f-ec46-4259-8215-048606c7f964 2
cinder --os-volume-api-version 3.59  extend 83c20fa9-244a-4875-9f28-b5e323d559c9 2


(overcloud) [stack@undercloud-0 tempest]$ cinder list
+--------------------------------------+--------+--------------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name               | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------------------+------+-------------+----------+--------------------------------------+
| 537ae37f-ec46-4259-8215-048606c7f964 | in-use | unencrypted volume | 2    | tripleo     | false    | 73a4f6c2-d64e-496a-864b-a75293136092 |
| 83c20fa9-244a-4875-9f28-b5e323d559c9 | in-use | encrypted volume   | 2    | LUKS        | false    | 73a4f6c2-d64e-496a-864b-a75293136092 |
+--------------------------------------+--------+--------------------+------+-------------+----------+--------------------------------------+


# checking lsblk size on instance:
$ lsblk -nlb
vda   253:0    0 2147483648  0 disk
vda1  253:1    0 2138029568  0 part /
vda15 253:15   0    8388608  0 part
vdb   253:16   0 2147483648  0 disk
vdc   253:32   0 2145415168  0 disk 
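The post-extend delta between the two disks can be computed from the lsblk output above; it is roughly 2 MiB, which is consistent with LUKS metadata overhead (attributing the whole delta to the LUKS header is an assumption, as the exact on-disk layout was not inspected here):

```shell
# Sizes reported by lsblk -nlb after the extend (bytes)
unencrypted=2147483648   # vdb, clear volume
encrypted=2145415168     # vdc, LUKS volume

# Space consumed by the encryption metadata
echo "$((unencrypted - encrypted)) bytes"   # 2068480 bytes (~1.97 MiB)
```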



When tested on a NetApp backend,
the attached encrypted volume size is not the same as the clear volume's.
RBD behaviour differs from the other backends and will probably cause issues with migration, copying, and more.

