Bug 1100286 - Root partition does not get resized to the available space
Summary: Root partition does not get resized to the available space
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: guest-images
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Joey Boggs
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 977028
Blocks:
 
Reported: 2014-05-22 12:40 UTC by Chris Pelland
Modified: 2016-11-16 16:14 UTC
CC List: 15 users

Fixed In Version: rhel-guest-image-6.5-20140613.0.el6_5
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-06-23 07:37:16 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:0782 0 normal SHIPPED_LIVE rhel-guest-image bug fix update 2014-06-23 11:36:33 UTC

Description Chris Pelland 2014-05-22 12:40:44 UTC
This bug has been copied from bug #977028 and has been proposed for backporting to the 6.5 z-stream (EUS).

Comment 5 yuliu 2014-05-23 08:30:25 UTC
 Jaroslav Henner 2014-05-23 03:57:20 EDT

I have VERIFIED this by uploading rhel-guest-image-6.5-20140522.0.x86_64.qcow2
to OpenStack Grizzly and booting VMs with two flavors, one with an 80G disk and the other with a 20G disk:

`--> ssh cloud-user.3.2 
[cloud-user@bar ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        79G  877M   74G   2% /
tmpfs           3,9G     0  3,9G   0% /dev/shm

[cloud-user@bar ~]$  sudo fdisk -l /dev/vda

Disk /dev/vda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00036382

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           1       10443    83882373+  83  Linux


`--> ssh cloud-user.3.8
[cloud-user@foo ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        20G  866M   18G   5% /
tmpfs           939M     0  939M   0% /dev/shm

 sudo fdisk -l /dev/vda

Disk /dev/vda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00036382

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           1        2610    20963801   83  Linux
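As a quick cross-check of the 80G flavor above (pure arithmetic, nothing image-specific): fdisk reports the partition as 83882373 one-KiB blocks on an 85899345920-byte disk, which is exactly 80 GiB, so the partition fills essentially the whole disk; df showing 79G/74G is just filesystem overhead.

```shell
# fdisk numbers from the 80G flavor: 83882373 KiB-blocks, 85899345920-byte disk.
gib=$(( 1024 * 1024 * 1024 ))
part_gib=$(( 83882373 * 1024 / gib ))
disk_gib=$(( 85899345920 / gib ))
echo "partition ~${part_gib}GiB of ${disk_gib}GiB disk"
```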

Comment 6 Jaroslav Henner 2014-05-24 14:42:24 UTC
Chris, why is this a clone of BZ#977028? What should we do with 977028? Close it as a DUP?

Comment 7 Lei Wang 2014-05-26 07:17:36 UTC
(In reply to Jaroslav Henner from comment #6)
> Chris, why is this a clone of the BZ#977028? What should we do with the
> 977028? Close it as a DUP?

Hi Jaroslav, this is a 6.5.z clone of the original bug 977028 and tracks the fix for RHEL 6.5 update 5. The original bug will be used to track the 6.6.0 fix, which follows the RHEL 6.6 schedule. Hope this helps address your question.

Comment 8 yuliu 2014-05-27 01:32:55 UTC
(In reply to yuliu from comment #5)

>  Jaroslav Henner 2014-05-23 03:57:20 EDT
> 
> I have VERIFIED this by uploading
> rhel-guest-image-6.5-20140522.0.x86_64.qcow2
> to Openstack Grizzly and booting VM with different flavors, one of 80G,
> other with 20G disk:
> 
> `--> ssh cloud-user.3.2 
> [cloud-user@bar ~]$ df -h
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vda1        79G  877M   74G   2% /
> tmpfs           3,9G     0  3,9G   0% /dev/shm
> 
> [cloud-user@bar ~]$  sudo fdisk -l /dev/vda
> 
> Disk /dev/vda: 85.9 GB, 85899345920 bytes
> 255 heads, 63 sectors/track, 10443 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00036382
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/vda1   *           1       10443    83882373+  83  Linux
> 
> 
> `--> ssh cloud-user.3.8
> [cloud-user@foo ~]$ df -h
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vda1        20G  866M   18G   5% /
> tmpfs           939M     0  939M   0% /dev/shm
> 
>  sudo fdisk -l /dev/vda
> 
> Disk /dev/vda: 21.5 GB, 21474836480 bytes
> 255 heads, 63 sectors/track, 2610 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00036382
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/vda1   *           1        2610    20963801   83  Linux
Above is the result from the rhos-qe team.
The virt-qe test result follows:

#qemu-img resize rhel-guest-image-6.5-20140523.0.x86_64.qcow2 +10G
Image resized.
# qemu-img info rhel-guest-image-6.5-20140523.0.x86_64.qcow2 
image: rhel-guest-image-6.5-20140523.0.x86_64.qcow2
file format: qcow2
virtual size: 26G (27917287424 bytes)
disk size: 303M
cluster_size: 65536
Format specific information:
    compat: 0.10
#boot the image, and login.
# fdisk -l

Disk /dev/sda: 27.4 GB, 27419869184 bytes
255 heads, 63 sectors/track, 3333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003a2a4

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        3333    26771298+  83  Linux

# resize2fs /dev/sda1

# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        26G  868M   24G   4% /
tmpfs           246M     0  246M   0% /dev/shm

Result: the root partition got resized to the available space.
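For the record, the qemu-img numbers above are internally consistent: the base image's root filesystem shows as 16G, qemu-img resize added +10G, and qemu-img info then reported a virtual size of 26G (27917287424 bytes). A quick shell check:

```shell
# 16G base image grown by +10G should be 26 GiB = 27917287424 bytes,
# matching the "virtual size" line from qemu-img info above.
expected_bytes=$(( (16 + 10) * 1024 * 1024 * 1024 ))
echo "$expected_bytes"
```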

Comment 9 Jaroslav Henner 2014-05-28 09:57:25 UTC
(In reply to yuliu from comment #8)
> (In reply to yuliu from comment #5)
> [rhos-qe and virt-qe test output quoted from comment #8 snipped]
> # resize2fs /dev/sda1

This is interesting. IIRC the FS should get resized by cloud-init. Do you think the command above actually resized the FS, or was it just a NOOP?


Comment 10 yuliu 2014-05-28 10:28:17 UTC
(In reply to Jaroslav Henner from comment #9)
> (In reply to yuliu from comment #8)
> > [test output quoted from comment #8 snipped]
> 
> This is interesting. The IIRC the FS should get resized by cloud-init. Do yo
> think the command above actually did resize the FS, or did it just do NOOP?
> 
Yes, cloud-init should do it, but these steps were run without RHOS, so cloud-init was not actually running.
When you resize the image and boot it, checking with fdisk -l shows that /dev/sda1 has already been resized, but checking with df shows that the filesystem has not. After executing "resize2fs /dev/sda1", df shows the new size as well.
See the details below; I executed these steps in order so you can follow them more intuitively.

# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        26G  866M   24G   4% /
tmpfs           246M     0  246M   0% /dev/shm
# fdisk -l

Disk /dev/sda: 38.2 GB, 38157287424 bytes
255 heads, 63 sectors/track, 4639 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003a2a4

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        4639    37261743+  83  Linux

# resize2fs /dev/sda1 
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/sda1 is mounted on /; on-line resizing required
old desc_blocks = 2, new_desc_blocks = 3
Performing an on-line resize of /dev/sda1 to 9315435 (4k) blocks.
The filesystem on /dev/sda1 is now 9315435 blocks long.

# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        35G  870M   33G   3% /
tmpfs           246M     0  246M   0% /dev/shm

# fdisk -l

Disk /dev/sda: 38.2 GB, 38157287424 bytes
255 heads, 63 sectors/track, 4639 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003a2a4

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        4639    37261743+  83  Linux
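The situation being demonstrated here (partition already grown, filesystem lagging behind) can be sketched as a shell check, using the numbers from the output above:

```shell
# Partition size from fdisk (one-KiB blocks) vs. the filesystem size df
# showed before resize2fs (26G). If the partition is larger, resize2fs
# is needed to grow the filesystem into the extra space.
part_kib=37261743
fs_kib=$(( 26 * 1024 * 1024 ))
if [ "$part_kib" -gt "$fs_kib" ]; then
    echo "filesystem smaller than partition: run resize2fs"
fi
```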




Comment 11 Jaroslav Henner 2014-05-28 15:16:00 UTC
(In reply to yuliu from comment #10)
> (In reply to Jaroslav Henner from comment #9)
> > [nested test output snipped]
> > 
> > This is interesting. The IIRC the FS should get resized by cloud-init. Do yo
> > think the command above actually did resize the FS, or did it just do NOOP?
> > 
> Yes, the cloud-init should do it. But these steps are run without RHOS, so
> the cloud-init is actually not running.

That doesn't make sense to me. Cloud-init doesn't depend on being in RHOS.


Comment 12 Jaroslav Henner 2014-05-28 15:44:49 UTC
(In reply to Jaroslav Henner from comment #11)
> > > This is interesting. The IIRC the FS should get resized by cloud-init. Do yo
> > > think the command above actually did resize the FS, or did it just do NOOP?
> > > 
> > Yes, the cloud-init should do it. But these steps are run without RHOS, so
> > the cloud-init is actually not running.
> 
> That doesn't make sense to me. Cloud-init doesn't depend on being in RHOS.
> 

I have just checked it with the 7.0 image:

qemu-kvm -m 512 -hda rhel-guest-image-7.0-20140506.1.qcow -nographic

there were outputs like:
cloud-init[700]: 2014-05-28 11:28:04,729 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [87/120s]: request error [[Errno 101] Network is unreachable]

which means cloud-init ran. Maybe cloud-init didn't do the FS resizing; I have to check what should actually do that.

I have also checked that the image has the cloud-init resizefs module enabled. I didn't see an explicit resize_rootfs: true statement in /etc/cloud/cloud.cfg, but I think cloud-init should still do the resizing.
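One way to look for the setting Jaroslav describes (a sketch; the path assumes a standard cloud-init install, and the fallback message reflects cloud-init's documented default of resize_rootfs: true):

```shell
# Look for an explicit resize_rootfs/resizefs setting in the main config;
# no match usually means the default (enabled) applies.
grep -E 'resize_rootfs|resizefs' /etc/cloud/cloud.cfg 2>/dev/null \
    || echo "no explicit setting found; cloud-init defaults to resize_rootfs: true"
```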

Comment 13 yuliu 2014-05-29 02:46:05 UTC
(In reply to Jaroslav Henner from comment #12)
> (In reply to Jaroslav Henner from comment #11)
> > > > [earlier exchange snipped]
> 
> I have just checked it with the 7.0 image:
> 
> qemu-kvm -m 512 -hda rhel-guest-image-7.0-20140506.1.qcow -nographic
> 
> there were outputs like:
> cloud-init[700]: 2014-05-28 11:28:04,729 - url_helper.py[WARNING]: Calling
> 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [87/120s]:
> request error [[Errno 101] Network is unreachable]
> 
> which means the cloud-init ran. Maybe the cloud-init didn't do the FS
> resizing. I have to check who should actually do that.
> 
> I have also checked, the image has cloud-init module resizefs enabled. I
> didn't see expicit statement resize_rootfs: true in the
> /etc/cloud/cloud.cfg, but I think the cloud-init just should do the resizing.

Sorry Jaroslav, my apologies. You just reminded me that we usually remove cloud-init when we run the image without OpenStack, because it spends several minutes trying to connect before failing, which would take too long in our automation.
I have now tested rhel-guest-image-6.5-20140523.0.x86_64.qcow2 with cloud-init in place, and yes, cloud-init did resize it.
Apologies again.

Comment 14 yuliu 2014-06-16 02:49:03 UTC
Version:
rhel-guest-image-6.5-20140613.0.x86_64.qcow2

Steps:
1. before resize:
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        16G  868M   15G   6% /
tmpfs           246M     0  246M   0% /dev/shm

2. After resize:
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        26G  866M   24G   4% /
tmpfs           246M     0  246M   0% /dev/shm

Result: the root partition got resized to the available space.
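The before/after sizes are consistent with the +10G resize used in the earlier comments (trivial arithmetic, shown for completeness):

```shell
# df went from 16G to 26G after the disk was grown by 10G.
echo "grew by $(( 26 - 16 ))G"
```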

Comment 16 errata-xmlrpc 2014-06-23 07:37:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0782.html

