Bug 1462589 - [RFE] Support discard=unmap
Status: CLOSED NOTABUG
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 9.0 (Mitaka)
Severity: medium
Assigned To: Eoghan Glynn
QA Contact: Joe H. Rahme
Reported: 2017-06-18 21:54 EDT by Shinobu KINJO
Modified: 2017-08-30 13:53 EDT
Last Closed: 2017-07-02 20:35:25 EDT
Type: Bug

Description Shinobu KINJO 2017-06-18 21:54:38 EDT
Description of RFE:
-------------------
We recently investigated with a customer how to make use of thin provisioning, a feature of the physical storage backend, after executing and completing nova volume-update.

First, we tried the following options in nova.conf and cinder.conf respectively:

 #1. nova.conf: hw_disk_discard
 #2. cinder.conf: report_discard_support
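
As a sketch, the two options looked like this in the respective config files. The section name `our_backend` in cinder.conf is a hypothetical placeholder for whatever backend section the deployment actually uses:

```ini
# nova.conf (compute node) -- controls discard for host-local (ephemeral) disks
[libvirt]
hw_disk_discard = unmap

# cinder.conf -- per-backend flag advertising that volumes support discard;
# [our_backend] is a placeholder for the real backend section name
[our_backend]
report_discard_support = true
```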

Neither attempt #1 nor #2 succeeded; that is, thin provisioning was still not reflected after volume-update.

Digging further, we concluded that discard='unmap' needs to be added to the <driver> line in the domain XML to free up unused space, like:

// ****** /etc/libvirt/qemu/instance-00000001.xml
  <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
// ******

To enable the above automatically after launching an instance, we modified the following lines:

// ******
--- /usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/volume.py.orig 2017-05-16 17:17:27.148903218 +0900
+++ /usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/volume.py 2017-05-16 17:35:00.892001149 +0900
@@ -122,6 +122,8 @@
         'serial': conf.serial,
         'type': connection_info['driver_volume_type']
     })
+    conf.driver_discard = 'unmap'
+    LOG.warning("### conf.driver_discard=%s" % conf.driver_discard)

     return conf
// ******

We did this because we expected Nova to check the connection_info returned by Cinder and then add discard='unmap' to the XML file.

We also expected that checking the value of report_discard_support in cinder.conf would be sufficient for deciding whether to set discard='unmap' in the XML file.
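
A minimal sketch of the logic we expected on the Nova side. This is illustrative only: the function name is hypothetical, and the dict shape mimics the connection_info that Cinder returns, where (to our understanding) discard support is advertised under the 'data' key:

```python
# Illustrative sketch: derive the libvirt driver_discard value from the
# connection_info dict that Cinder hands back for an attached volume.
def driver_discard_from_connection_info(connection_info):
    # Cinder is expected to advertise discard support via
    # connection_info['data']['discard'] when report_discard_support is set.
    if connection_info.get('data', {}).get('discard', False):
        return 'unmap'  # rendered as discard='unmap' in the <driver> element
    return None         # attribute omitted from the domain XML

# A volume whose backend reported discard support
info = {'driver_volume_type': 'iscsi', 'data': {'discard': True}}
print(driver_discard_from_connection_info(info))  # → unmap

# A volume whose backend did not
info = {'driver_volume_type': 'iscsi', 'data': {}}
print(driver_discard_from_connection_info(info))  # → None
```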

After those modifications, we were finally able to make thin provisioning work after volume-update.

We would like to confirm:

 #1 Whether those modifications are reasonable, logically sound, and secure enough.
 #2 Whether there is another solution.

Our ultimate goal is to make use of thin provisioning in an OpenStack environment (specifically with libvirt) so that we use storage space more effectively, so if there is another solution, that would be fine.

Regards,
Comment 1 Daniel Berrange 2017-06-19 04:45:40 EDT
(In reply to Shinobu KINJO from comment #0)
> Description of RFE:
> -------------------
> Recently we investigated with the customer how to utilize THIN PROVISIONING
> which is one of the features that physical storage has after executing and
> completing nova volume-update.
> 
> First off we tried to use the following options in nova.conf and cinder.conf
> respectively.
> 
>  #1. nova.conf: hw_disk_discard

This controls the discard setting for host-local storage (i.e. ephemeral disk)

>  #2. cinder.conf: report_discard_support

This controls the discard setting for cinder volumes attached to guests

> Attempts both #1 and #2 were not succeeded; Meaning that THIN PROVISIONING
> was not reflected after volume-update.

#1 doesn't affect volumes; #2 will affect volumes, *if* the particular Cinder backend supports discard.

> Digging it into further more, we thought that <discard=unmap> needs to be
> added in driver line in xml to free up unused space like:
> 
> // ****** /etc/libvirt/qemu/instance-00000001.xml
>   <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
> // ******
> 
> To enable above after launching instance automatically, we modified the
> following lines.
> 
> // ******
> --- /usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/volume.py.orig
> 2017-05-16 17:17:27.148903218 +0900
> +++ /usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/volume.py
> 2017-05-16 17:35:00.892001149 +0900
> @@ -122,6 +122,8 @@
>       'serial': conf.serial,
>       'type': connection_info['driver_volume_type']
>     })
> + conf.driver_discard = 'unmap'
> + LOG.warning("### conf.driver_discard=%s" % conf.driver_discard)

This is not required; the setting in #2 is the supported way to enable discard.

> We would like to make sure:
> 
>  #1 If those modifications are reasonably and logically fine and secure
> enough or not.

No, these changes are not required. It is necessary to determine why the Cinder discard setting was not functional, as that is the supported way to enable discard.
Comment 2 Shinobu KINJO 2017-06-19 06:28:59 EDT
(In reply to Daniel Berrange from comment #1)
> (In reply to Shinobu KINJO from comment #0)
> > Description of RFE:
> > -------------------
> > Recently we investigated with the customer how to utilize THIN PROVISIONING
> > which is one of the features that physical storage has after executing and
> > completing nova volume-update.
> > 
> > First off we tried to use the following options in nova.conf and cinder.conf
> > respectively.
> > 
> >  #1. nova.conf: hw_disk_discard
> 
> This controls the discard setting for host-local storage (ie emphemeral disk)
> 
> >  #2. cinder.conf: report_discard_support
> 
> This controls the discard setting for cinder volumes attached to guests
> 
> > Attempts both #1 and #2 were not succeeded; Meaning that THIN PROVISIONING
> > was not reflected after volume-update.
> 
> #1 doesn't affects volumes, #2 will affect volumes, *if* the particular
> Cinder backend supports discard.

So if we use storage certified by us as the backend, setting report_discard_support to true should enable us to do what we want.

Is that what you intended?
Comment 3 Daniel Berrange 2017-06-21 04:04:09 EDT
(In reply to Shinobu KINJO from comment #2)
> (In reply to Daniel Berrange from comment #1)
> > (In reply to Shinobu KINJO from comment #0)
> > >  #2. cinder.conf: report_discard_support
> > 
> > This controls the discard setting for cinder volumes attached to guests
> > 
> > > Attempts both #1 and #2 were not succeeded; Meaning that THIN PROVISIONING
> > > was not reflected after volume-update.
> > 
> > #1 doesn't affects volumes, #2 will affect volumes, *if* the particular
> > Cinder backend supports discard.
> 
> So if we used certified storage by us as backend, setting
> <report_discard_support> to <true> should enable us to do what we want to do.

Yes, *if* the cinder backend supports discard, then setting report_discard_support should trigger Nova to enable discard. NB, I don't know which particular cinder backends support discard and which don't though.
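
To check the outcome of the supported path end to end, one can dump the running domain XML and look for the discard attribute. The instance name below is hypothetical; the grep at the end is run against a sample line from such a dump so the snippet works standalone:

```shell
# On the compute node, when discard is enabled the volume's <driver> element
# should carry discard='unmap'. With a hypothetical instance name:
#   virsh dumpxml instance-00000001 | grep -o "discard='unmap'"
# Standalone demonstration against a sample line from such a dump:
line="<driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>"
echo "$line" | grep -o "discard='unmap'"  # prints: discard='unmap'
```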
Comment 4 Shinobu KINJO 2017-06-21 06:00:58 EDT
(In reply to Daniel Berrange from comment #3)
> (In reply to Shinobu KINJO from comment #2)
> > (In reply to Daniel Berrange from comment #1)
> > > (In reply to Shinobu KINJO from comment #0)
> > > >  #2. cinder.conf: report_discard_support
> > > 
> > > This controls the discard setting for cinder volumes attached to guests
> > > 
> > > > Attempts both #1 and #2 were not succeeded; Meaning that THIN PROVISIONING
> > > > was not reflected after volume-update.
> > > 
> > > #1 doesn't affects volumes, #2 will affect volumes, *if* the particular
> > > Cinder backend supports discard.
> > 
> > So if we used certified storage by us as backend, setting
> > <report_discard_support> to <true> should enable us to do what we want to do.
> 
> Yes, *if* the cinder backend supports discard, then setting
> report_discard_support should trigger Nova to enable discard. NB, I don't
> know which particular cinder backends support discard and which don't though.

Is that up to the third-party driver (i.e. the storage vendor)?
Comment 5 awaugama 2017-08-30 13:53:56 EDT
WONTFIX/NOTABUG, therefore QE won't automate.
