Bug 1462589 - [RFE] Support discard=unmap
Summary: [RFE] Support discard=unmap
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 9.0 (Mitaka)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Eoghan Glynn
QA Contact: Joe H. Rahme
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-06-19 01:54 UTC by Shinobu KINJO
Modified: 2020-07-16 09:50 UTC
CC: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-07-03 00:35:25 UTC
Target Upstream Version:
Embargoed:



Description Shinobu KINJO 2017-06-19 01:54:38 UTC
Description of RFE:
-------------------
Together with a customer, we recently investigated how to make use of THIN PROVISIONING, a feature offered by the physical storage backend, after running nova volume-update to completion.

First, we tried the following options in nova.conf and cinder.conf respectively (see the example snippets after the list):

 #1. nova.conf: hw_disk_discard
 #2. cinder.conf: report_discard_support
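
For reference, the settings looked roughly like this; "my_backend" is a hypothetical placeholder for the actual backend section name (a sketch, not our exact files):

// ****** nova.conf (compute node)
[libvirt]
# "unmap" passes guest TRIM/UNMAP requests through for host-local
# (ephemeral) disks
hw_disk_discard = unmap
// ******

// ****** cinder.conf
[my_backend]
# advertise discard support for volumes served by this backend
report_discard_support = true
// ******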

Neither attempt #1 nor #2 succeeded; that is, THIN PROVISIONING was not reflected after volume-update.

Digging further, we concluded that discard='unmap' needs to be added to the <driver> line of the domain XML to free up unused space (discard='unmap' makes QEMU pass the guest's TRIM/UNMAP requests through to the underlying storage), like:

// ****** /etc/libvirt/qemu/instance-00000001.xml
  <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
// ******

To enable the above automatically after launching an instance, we modified the following lines:

// ******
--- /usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/volume.py.orig 2017-05-16 17:17:27.148903218 +0900
+++ /usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/volume.py 2017-05-16 17:35:00.892001149 +0900
@@ -122,6 +122,8 @@
             'serial': conf.serial,
             'type': connection_info['driver_volume_type']
         })
+        conf.driver_discard = 'unmap'
+        LOG.warning("### conf.driver_discard=%s" % conf.driver_discard)
 
         return conf
// ******

We did that because we expected Nova to check the connection_info returned by Cinder and then add discard='unmap' to the XML file itself.
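
As far as we can tell, the supported path in Nova already works this way, keyed off a 'discard' flag in the connection_info that Cinder returns. A minimal sketch of that logic (a paraphrase, not verbatim Nova source; the helper name is ours):

// ******
def apply_discard(conf, connection_info):
    # When the backend advertises discard support, Cinder sets
    # connection_info['data']['discard'] = True, and Nova renders it as
    # <driver ... discard='unmap'/> in the domain XML.
    if connection_info['data'].get('discard', False) is True:
        conf.driver_discard = 'unmap'
// ******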

We also expected that checking the value of report_discard_support in cinder.conf would be sufficient for deciding whether to set discard='unmap' in the XML file.

In any case, after those modifications we were finally able to make THIN PROVISIONING work after volume-update.

We would like to confirm:

 #1 Whether those modifications are reasonable, logically sound, and secure enough.
 #2 Whether there is another solution.

Since our ultimate goal is to make use of THIN PROVISIONING in an OpenStack environment, specifically with libvirt, so that we can save storage space more effectively, any other solution that achieves this would also be fine.

Regards,

Comment 1 Daniel Berrangé 2017-06-19 08:45:40 UTC
(In reply to Shinobu KINJO from comment #0)
> Description of RFE:
> -------------------
> Together with a customer, we recently investigated how to make use of THIN
> PROVISIONING, a feature offered by the physical storage backend, after
> running nova volume-update to completion.
> 
> First, we tried the following options in nova.conf and cinder.conf
> respectively:
> 
>  #1. nova.conf: hw_disk_discard

This controls the discard setting for host-local storage (i.e. ephemeral disks).

>  #2. cinder.conf: report_discard_support

This controls the discard setting for cinder volumes attached to guests

> Neither attempt #1 nor #2 succeeded; that is, THIN PROVISIONING was not
> reflected after volume-update.

#1 doesn't affect volumes; #2 will affect volumes, *if* the particular Cinder backend supports discard.
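
To illustrate the mechanism, the connection info returned for a volume on a discard-capable backend carries a discard flag, roughly like this (all values hypothetical):

// ******
connection_info = {
    'driver_volume_type': 'iscsi',   # backend-specific
    'serial': 'volume-uuid-here',    # placeholder
    'data': {
        'device_path': '/dev/sdb',   # example device
        'discard': True,             # only present when the backend supports it
    },
}
// ******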

> Digging further, we concluded that discard='unmap' needs to be added to the
> <driver> line of the domain XML to free up unused space, like:
> 
> // ****** /etc/libvirt/qemu/instance-00000001.xml
>   <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
> // ******
> 
> To enable the above automatically after launching an instance, we modified
> the following lines:
> 
> // ******
> --- /usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/volume.py.orig 2017-05-16 17:17:27.148903218 +0900
> +++ /usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/volume.py 2017-05-16 17:35:00.892001149 +0900
> @@ -122,6 +122,8 @@
>              'serial': conf.serial,
>              'type': connection_info['driver_volume_type']
>          })
> +        conf.driver_discard = 'unmap'
> +        LOG.warning("### conf.driver_discard=%s" % conf.driver_discard)

This is not required - setting #2 is the supported way to enable discard.

> We would like to confirm:
> 
>  #1 Whether those modifications are reasonable, logically sound, and secure
> enough.

No, these changes are not required. It is necessary to determine why the Cinder discard setting was not functional, as that is the supported way to enable discard.

Comment 2 Shinobu KINJO 2017-06-19 10:28:59 UTC
(In reply to Daniel Berrange from comment #1)
> (In reply to Shinobu KINJO from comment #0)
> > Description of RFE:
> > -------------------
> > Together with a customer, we recently investigated how to make use of THIN
> > PROVISIONING, a feature offered by the physical storage backend, after
> > running nova volume-update to completion.
> > 
> > First, we tried the following options in nova.conf and cinder.conf
> > respectively:
> > 
> >  #1. nova.conf: hw_disk_discard
> 
> This controls the discard setting for host-local storage (i.e. ephemeral disks).
> 
> >  #2. cinder.conf: report_discard_support
> 
> This controls the discard setting for cinder volumes attached to guests
> 
> > Neither attempt #1 nor #2 succeeded; that is, THIN PROVISIONING was not
> > reflected after volume-update.
> 
> #1 doesn't affect volumes; #2 will affect volumes, *if* the particular
> Cinder backend supports discard.

So if we used storage certified by us as the backend, setting report_discard_support to true should enable us to do what we want.

Is that what you intended?

Comment 3 Daniel Berrangé 2017-06-21 08:04:09 UTC
(In reply to Shinobu KINJO from comment #2)
> (In reply to Daniel Berrange from comment #1)
> > (In reply to Shinobu KINJO from comment #0)
> > >  #2. cinder.conf: report_discard_support
> > 
> > This controls the discard setting for cinder volumes attached to guests
> > 
> > > Neither attempt #1 nor #2 succeeded; that is, THIN PROVISIONING was not
> > > reflected after volume-update.
> > 
> > #1 doesn't affect volumes; #2 will affect volumes, *if* the particular
> > Cinder backend supports discard.
> 
> So if we used storage certified by us as the backend, setting
> report_discard_support to true should enable us to do what we want.

Yes, *if* the Cinder backend supports discard, then setting report_discard_support should trigger Nova to enable discard. NB, I don't know which particular Cinder backends support discard and which don't, though.
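
For what it's worth, that per-backend decision is made on the Cinder side. A much-simplified sketch of the idea (not verbatim Cinder source; the function and the config lookup shown are illustrative assumptions):

// ******
def build_connection_info(driver_conn_info, backend_config):
    # If the operator enabled report_discard_support for this backend,
    # advertise discard in the connection info handed to Nova.
    if backend_config.get('report_discard_support'):
        driver_conn_info['data']['discard'] = True
    return driver_conn_info
// ******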

Comment 4 Shinobu KINJO 2017-06-21 10:00:58 UTC
(In reply to Daniel Berrange from comment #3)
> (In reply to Shinobu KINJO from comment #2)
> > (In reply to Daniel Berrange from comment #1)
> > > (In reply to Shinobu KINJO from comment #0)
> > > >  #2. cinder.conf: report_discard_support
> > > 
> > > This controls the discard setting for cinder volumes attached to guests
> > > 
> > > > Neither attempt #1 nor #2 succeeded; that is, THIN PROVISIONING was
> > > > not reflected after volume-update.
> > > 
> > > #1 doesn't affect volumes; #2 will affect volumes, *if* the particular
> > > Cinder backend supports discard.
> > 
> > So if we used storage certified by us as the backend, setting
> > report_discard_support to true should enable us to do what we want.
> 
> Yes, *if* the Cinder backend supports discard, then setting
> report_discard_support should trigger Nova to enable discard. NB, I don't
> know which particular Cinder backends support discard and which don't, though.

Is it up to the third-party driver (i.e. the storage vendor)?

Comment 5 awaugama 2017-08-30 17:53:56 UTC
WONTFIX/NOTABUG, therefore QE won't automate.

