
Bug 846564

Summary: failed to create volumes with virsh vol-create-as command on 6.1 rhel
Product: Red Hat Enterprise Linux 6
Reporter: Otis Henry <othenry>
Component: parted
Assignee: Brian Lane <bcl>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Release Test Team <release-test-team-automation>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: 6.1
CC: acathrow, cwei, dallan, dyuan, jdenemar, jwu, mjenner, mzhan, shyu, zpeng
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-06-11 22:59:41 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Attachments:
SOS Report #1 (flags: none)
SOS Report #2 (flags: none)

Description Otis Henry 2012-08-08 07:00:14 UTC
Description of problem:
When creating volumes with the "virsh vol-create-as" command on RHEL 6.1, the system reported an error creating the volume. The first volume was created without an error, but every subsequent creation failed. The created volumes also did not show up in "virsh vol-list"; only after opening Virtual Machine Manager and manually refreshing the volume list did all created volumes appear.


Version-Release number of selected component (if applicable):


How reproducible:
always

Steps to Reproduce:
1. Install RHEL 6.1 and boot from local storage.
2. Enable multipath after the installation is finished.
3. Add a 50-100 GB SAN LUN.
4. Use the parted command to create a GPT disk label on the disk.
5. Create a storage pool configuration file.
6. Use the "virsh pool-define" command with the pool configuration file to attach the device.
7. Start the storage pool using the "pool-start" and "pool-autostart" commands.
8. Verify the storage pool configuration using "pool-info".
9. Create a 10G volume using the "virsh vol-create-as Storage vol1 10G" command.
10. Create another 10G volume using the same command.
11. A "Failed to create vol" error appears.
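Step 5's pool configuration file is not attached to this bug; a minimal disk-type pool definition matching the device used later in the comments (pool name and device path assumed from the logs, not taken from an actual attachment) might look like:

```xml
<!-- Hypothetical pool definition; name and device path are assumptions
     based on the virsh output quoted in the comments below. -->
<pool type='disk'>
  <name>SAN_LUN_500G</name>
  <source>
    <device path='/dev/mapper/SAN_LUN_500G'/>
    <format type='gpt'/>
  </source>
  <target>
    <path>/dev/mapper</path>
  </target>
</pool>
```

It would then be loaded and started per steps 6-7, e.g. "virsh pool-define pool.xml", "virsh pool-start SAN_LUN_500G", and "virsh pool-autostart SAN_LUN_500G".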

  
Actual results:
A "Failed to create vol" error appears.

Expected results:
Volume creation succeeds; instead, device-mapper from RHEL 6.2 had to be installed as a workaround.

Additional info:

Comment 2 yuping zhang 2012-08-08 09:14:32 UTC
I think this is a libvirt bug. Please retry with RHEL 6.3; I think it should be okay there.

Comment 3 Otis Henry 2012-08-14 00:14:48 UTC
Created attachment 604112 [details]
SOS Report #1

SOS Report #1. Due to multiple inconsistent behaviours, multiple SOS files have been attached.

Comment 4 Otis Henry 2012-08-14 00:17:34 UTC
YuPing,
We performed the requested test on RHEL 6.3 and ran into the same issue. I've attached two SOS reports, as the behaviour detailed below was inconsistent.


>>snip!!_BEGIN
<internal log location removed> ...There are two. That's because the behavior is not consistent. 

This is the first sosreport. 
[root@feni-ls13-rhel63ga-san mapper]# virsh vol-list SAN_LUN_500G
Name                 Path                                    
-----------------------------------------
SAN_LUN_500Gp1       /dev/mapper/SAN_LUN_500Gp1              
SAN_LUN_500Gp10      /dev/mapper/SAN_LUN_500Gp10             
SAN_LUN_500Gp11      /dev/mapper/SAN_LUN_500Gp11             
SAN_LUN_500Gp2       /dev/mapper/SAN_LUN_500Gp2              
SAN_LUN_500Gp3       /dev/mapper/SAN_LUN_500Gp3              
SAN_LUN_500Gp4       /dev/mapper/SAN_LUN_500Gp4              
SAN_LUN_500Gp5       /dev/mapper/SAN_LUN_500Gp5              
SAN_LUN_500Gp6       /dev/mapper/SAN_LUN_500Gp6              
SAN_LUN_500Gp7       /dev/mapper/SAN_LUN_500Gp7              
SAN_LUN_500Gp8       /dev/mapper/SAN_LUN_500Gp8              
SAN_LUN_500Gp9       /dev/mapper/SAN_LUN_500Gp9              

[root@feni-ls13-rhel63ga-san mapper]# 
[root@feni-ls13-rhel63ga-san mapper]# 
[root@feni-ls13-rhel63ga-san mapper]# 
[root@feni-ls13-rhel63ga-san mapper]# virsh vol-create-as SAN_LUN_500G vol12 10G
error: Failed to create vol vol12
error: internal error Child process (/sbin/parted /dev/mapper/SAN_LUN_500G mkpart --script primary 150323872768B 161061291007B) status unexpected: exit status 1

I tried to create a new volume manually using mkpart, but got an error too.
[root@feni-ls13-rhel63ga-san mapper]# /sbin/parted /dev/mapper/SAN_LUN_500G mkpart --script primary 161061291007B 171798709246B
Error: You requested a partition from 161GB to 172GB.
The closest location we can manage is 161GB to 172GB.

However, it seems the volume was created.

[root@feni-ls13-rhel63ga-san mapper]# ls -latr
total 0
crw-rw----  1 root root 10, 58 Aug 13 12:12 control
lrwxrwxrwx  1 root root      7 Aug 13 12:12 Boot_LUN -> ../dm-0
lrwxrwxrwx  1 root root      7 Aug 13 12:12 Boot_LUNp3 -> ../dm-3
lrwxrwxrwx  1 root root      7 Aug 13 12:12 SAN_LUN_500G -> ../dm-7
lrwxrwxrwx  1 root root      7 Aug 13 12:12 mpathb -> ../dm-4
lrwxrwxrwx  1 root root      7 Aug 13 12:12 mpathc -> ../dm-5
lrwxrwxrwx  1 root root      7 Aug 13 12:12 mpathd -> ../dm-6
lrwxrwxrwx  1 root root      7 Aug 13 12:12 Boot_LUNp2 -> ../dm-2
lrwxrwxrwx  1 root root      7 Aug 13 12:12 Boot_LUNp1 -> ../dm-1
lrwxrwxrwx  1 root root      7 Aug 13 12:12 SAN_LUN_500Gp1 -> ../dm-8
lrwxrwxrwx  1 root root      8 Aug 13 12:12 SAN_LUN_500Gp9 -> ../dm-16
lrwxrwxrwx  1 root root      7 Aug 13 14:36 SAN_LUN_500Gp2 -> ../dm-9
lrwxrwxrwx  1 root root      8 Aug 13 14:36 SAN_LUN_500Gp3 -> ../dm-10
lrwxrwxrwx  1 root root      8 Aug 13 14:36 SAN_LUN_500Gp4 -> ../dm-11
lrwxrwxrwx  1 root root      8 Aug 13 14:36 SAN_LUN_500Gp5 -> ../dm-12
lrwxrwxrwx  1 root root      8 Aug 13 14:36 SAN_LUN_500Gp6 -> ../dm-13
lrwxrwxrwx  1 root root      8 Aug 13 14:36 SAN_LUN_500Gp7 -> ../dm-14
lrwxrwxrwx  1 root root      8 Aug 13 14:36 SAN_LUN_500Gp8 -> ../dm-15
lrwxrwxrwx  1 root root      8 Aug 13 14:36 SAN_LUN_500Gp10 -> ../dm-17
lrwxrwxrwx  1 root root      8 Aug 13 14:36 SAN_LUN_500Gp11 -> ../dm-18
drwxr-xr-x 17 root root   4580 Aug 13 14:36 ..
lrwxrwxrwx  1 root root      8 Aug 13 14:36 SAN_LUN_500Gp12 -> ../dm-19

Then I tried to create another volume. I got the same error this time, but the volume was not created. This is the second SOS report.
[root@feni-ls13-rhel63ga-san mapper]# virsh vol-create-as SAN_LUN_500G vol13 20G
error: Failed to create vol vol13
error: internal error Child process (/sbin/parted /dev/mapper/SAN_LUN_500G mkpart --script primary 161061291008B 182536127487B) status unexpected: exit status 1

[root@feni-ls13-rhel63ga-san mapper]# 
[root@feni-ls13-rhel63ga-san mapper]# ls -latr
total 0
crw-rw----  1 root root 10, 58 Aug 13 15:37 control
lrwxrwxrwx  1 root root      7 Aug 13 15:37 Boot_LUN -> ../dm-0
lrwxrwxrwx  1 root root      7 Aug 13 15:37 Boot_LUNp3 -> ../dm-3
lrwxrwxrwx  1 root root      7 Aug 13 15:37 mpathc -> ../dm-4
lrwxrwxrwx  1 root root      7 Aug 13 15:37 SAN_LUN_500G -> ../dm-6
lrwxrwxrwx  1 root root      7 Aug 13 15:37 mpathb -> ../dm-5
lrwxrwxrwx  1 root root      7 Aug 13 15:37 mpathd -> ../dm-7
lrwxrwxrwx  1 root root      8 Aug 13 15:37 SAN_LUN_500Gp9 -> ../dm-16
lrwxrwxrwx  1 root root      7 Aug 13 15:37 SAN_LUN_500Gp1 -> ../dm-8
lrwxrwxrwx  1 root root      7 Aug 13 15:37 Boot_LUNp2 -> ../dm-2
lrwxrwxrwx  1 root root      7 Aug 13 15:37 Boot_LUNp1 -> ../dm-1
lrwxrwxrwx  1 root root      7 Aug 13 16:24 SAN_LUN_500Gp2 -> ../dm-9
lrwxrwxrwx  1 root root      8 Aug 13 16:24 SAN_LUN_500Gp3 -> ../dm-10
lrwxrwxrwx  1 root root      8 Aug 13 16:24 SAN_LUN_500Gp4 -> ../dm-11
lrwxrwxrwx  1 root root      8 Aug 13 16:24 SAN_LUN_500Gp5 -> ../dm-12
lrwxrwxrwx  1 root root      8 Aug 13 16:24 SAN_LUN_500Gp6 -> ../dm-13
lrwxrwxrwx  1 root root      8 Aug 13 16:24 SAN_LUN_500Gp7 -> ../dm-14
lrwxrwxrwx  1 root root      8 Aug 13 16:24 SAN_LUN_500Gp8 -> ../dm-15
lrwxrwxrwx  1 root root      8 Aug 13 16:24 SAN_LUN_500Gp10 -> ../dm-17
lrwxrwxrwx  1 root root      8 Aug 13 16:24 SAN_LUN_500Gp11 -> ../dm-18
drwxr-xr-x 17 root root   4580 Aug 13 16:24 ..
lrwxrwxrwx  1 root root      8 Aug 13 16:24 SAN_LUN_500Gp12 -> ../dm-19
drwxr-xr-x  2 root root    460 Aug 13 16:24 .
[root@feni-ls13-rhel63ga-san mapper]# virsh vol-list SAN_LUN_500G
Name                 Path                                    
-----------------------------------------
SAN_LUN_500Gp1       /dev/mapper/SAN_LUN_500Gp1              
SAN_LUN_500Gp10      /dev/mapper/SAN_LUN_500Gp10             
SAN_LUN_500Gp11      /dev/mapper/SAN_LUN_500Gp11             
SAN_LUN_500Gp12      /dev/mapper/SAN_LUN_500Gp12             
SAN_LUN_500Gp2       /dev/mapper/SAN_LUN_500Gp2              
SAN_LUN_500Gp3       /dev/mapper/SAN_LUN_500Gp3              
SAN_LUN_500Gp4       /dev/mapper/SAN_LUN_500Gp4              
SAN_LUN_500Gp5       /dev/mapper/SAN_LUN_500Gp5              
SAN_LUN_500Gp6       /dev/mapper/SAN_LUN_500Gp6              
SAN_LUN_500Gp7       /dev/mapper/SAN_LUN_500Gp7              
SAN_LUN_500Gp8       /dev/mapper/SAN_LUN_500Gp8              
SAN_LUN_500Gp9       /dev/mapper/SAN_LUN_500Gp9              

[root@feni-ls13-rhel63ga-san mapper]# /sbin/parted /dev/mapper/SAN_LUN_500G mkpart --script primary 161061291008B 182536127487B
Error: You requested a partition from 161GB to 183GB.
The closest location we can manage is 161GB to 161GB.

This is a 500G LUN; there is plenty of space.
<<snip!!_END
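As an aside, the byte offsets in the failing vol12 command decompose exactly as the first usable GPT data sector (LBA 34 on a 512-byte-sector disk) plus fourteen back-to-back 10 GiB extents, which leaves the requested partition start unaligned to a 1 MiB boundary. Whether that misalignment is what makes parted exit with status 1 is an assumption, not something the logs confirm; the arithmetic itself can be checked with shell math:

```shell
# Decompose the offsets from the failing mkpart command above.
# Assumptions: 512-byte sectors, data starting at the first usable GPT
# sector (LBA 34), and 10 GiB extents packed back-to-back.
sector=512
gib=$(( 1024 * 1024 * 1024 ))
first_usable=$(( 34 * sector ))              # 17408 bytes
start=$(( first_usable + 14 * 10 * gib ))    # fourteen 10 GiB extents in
end=$(( start + 10 * gib - 1 ))
echo "$start $end"                   # 150323872768 161061291007 -> matches the error
echo $(( start % (1024 * 1024) ))    # 17408 -> the start is not 1 MiB aligned
```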

Comment 5 Otis Henry 2012-08-14 00:18:15 UTC
Created attachment 604113 [details]
SOS Report #2

SOSReport #2

Comment 11 Jiri Denemark 2014-03-28 15:14:55 UTC
Unless the above usage of parted is wrong, this looks like an issue in parted to me.

Comment 12 Brian Lane 2014-03-28 17:39:54 UTC
Please retest with the latest build, parted-2.1-21; there have been a number of improvements in device-mapper handling since 6.1.

Comment 13 Brian Lane 2014-05-02 00:31:36 UTC
Also, please attach the output from 'parted -s /dev/mapper/SAN_LUN_500G  u b p'