Back to bug 1041737

Who When What Removed Added
RHOS Integration 2013-12-12 19:06:36 UTC Keywords FutureFeature
Whiteboard upstream_milestone_icehouse-2 upstream_status_needs-code-review
Red Hat Bugzilla 2013-12-12 19:06:36 UTC Doc Type Bug Fix Enhancement
John Skeoch 2014-01-13 01:13:01 UTC CC hateya
Ayal Baron 2014-01-14 10:17:24 UTC CC abaron
RHOS Integration 2014-01-23 05:10:51 UTC Target Release 5.0 ---
Target Milestone Upstream M2 Upstream M3
Whiteboard upstream_milestone_icehouse-2 upstream_status_needs-code-review upstream_milestone_icehouse-3 upstream_status_needs-code-review
RHEL Program Management 2014-01-23 05:21:55 UTC Target Release --- 5.0
Sean Cohen 2014-01-26 19:33:40 UTC Keywords Triaged
CC eglynn, fpercoco, pmyers, scohen
Component RFEs openstack-glance
QA Contact dron
RHOS Integration 2014-02-06 05:14:05 UTC Target Release 5.0 ---
Whiteboard upstream_milestone_icehouse-3 upstream_status_needs-code-review upstream_milestone_icehouse-3 upstream_status_implemented
RHEL Program Management 2014-02-06 05:32:30 UTC Target Release --- 5.0
RHOS Integration 2014-02-20 00:09:03 UTC Target Release 5.0 ---
Whiteboard upstream_milestone_icehouse-3 upstream_status_implemented upstream_milestone_icehouse-3 upstream_status_implemented upstream_definition_approved
RHEL Program Management 2014-02-20 00:18:39 UTC Target Release --- 5.0
RHOS Integration 2014-02-20 16:37:54 UTC Target Release 5.0 ---
RHEL Program Management 2014-02-20 17:29:53 UTC Target Release --- 5.0
RHOS Integration 2014-02-20 18:22:02 UTC Status NEW POST
Target Release 5.0 ---
RHEL Program Management 2014-02-20 19:34:04 UTC Target Release --- 5.0
John Skeoch 2014-03-17 02:03:13 UTC CC abaron iheim
Sean Cohen 2014-03-20 15:51:55 UTC Assignee rhos-maint sgotliv
Sergey Gotliv 2014-03-20 16:30:14 UTC Assignee sgotliv fpercoco
Itamar Heim 2014-03-25 10:35:17 UTC CC iheim
Summer Long 2014-03-28 10:06:31 UTC Blocks 1081957
Summer Long 2014-03-28 10:07:12 UTC CC slong
Dafna Ron 2014-03-31 16:54:02 UTC Flags needinfo?(scohen)
Sean Cohen 2014-04-10 09:55:55 UTC Priority unspecified medium
Flags needinfo?(scohen) needinfo+
Dafna Ron 2014-04-10 10:01:51 UTC CC gfidente
Flags needinfo+ needinfo?(scohen) needinfo?(gfidente)
Sean Cohen 2014-04-10 10:52:22 UTC Flags needinfo?(scohen) needinfo?(gfidente)
Tzach Shefi 2014-05-13 14:44:57 UTC CC tshefi
Pratik Pravin Bandarkar 2014-05-19 13:01:38 UTC CC pbandark
Sadique Puthen 2014-05-19 13:58:26 UTC CC sputhenp
Flavio Percoco 2014-05-28 09:00:16 UTC Status POST MODIFIED
Fixed In Version openstack-glance-2014.1-2.el7ost
Sergey Gotliv 2014-05-31 20:56:55 UTC Status MODIFIED ON_QA
nlevinki 2014-06-01 07:48:33 UTC Status ON_QA VERIFIED
CC nlevinki
Flavio Percoco 2014-06-21 03:04:03 UTC Doc Text Currently, if you want to configure multiple NFS servers as a back end using the filesystem store, you cannot mount all disks to a single directory: the filesystem store allows the administrator to configure only a single directory, via the filesystem_store_datadir parameter in glance-api.conf.

It is possible to use mhddfs (a FUSE plug-in: https://romanrm.net/mhddfs), which mounts multiple NFS servers to a single directory, but it does not store data evenly across all of the disks. Another major drawback is that when one of the disks fails, it is very hard to know how many images, and which ones, are stored on that disk, because the Glance registry stores only the location specified in the filesystem_store_datadir parameter.

This enhancement fixes the above issues by adding multi-filesystem support to the current filesystem store.
Andrew Dahms 2014-06-25 06:21:06 UTC CC adahms
Doc Text
Previously, it was not possible to mount all disks to a single directory when configuring multiple NFS servers as a backend using the filesystem store. This was due to the filesystem store only allowing administrators to configure a single directory using the filesystem_store_datadir parameter in the glance-api.conf file.

While it is possible to use mhddfs (a FUSE plug-in: https://romanrm.net/mhddfs), which mounts multiple NFS servers to a single directory, mhddfs does not store data evenly across all of the disks. Another major drawback is that when one of the disks fails, it is very difficult to know how many images, and which ones, are stored on that disk, because the Glance registry stores only the location specified in the filesystem_store_datadir parameter.

This enhancement fixes the above issues by adding multi-filesystem support to the current filesystem store.
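A sketch of what the enhancement looks like in glance-api.conf, assuming the NFS exports are already mounted at the illustrative paths below (the filesystem_store_datadirs option and the optional path:priority syntax come from the upstream Icehouse change; the exact paths and priority values here are hypothetical):

```ini
[DEFAULT]
# Use the filesystem store back end.
default_store = file

# The old single-directory option is replaced ...
# filesystem_store_datadir = /var/lib/glance/images/

# ... by one filesystem_store_datadirs line per mounted NFS export.
# An optional ":<priority>" suffix ranks the directories; directories
# with a higher priority are preferred while they have free space, and
# a directory without a suffix defaults to priority 0.
filesystem_store_datadirs = /mnt/nfs1/glance/images/:200
filesystem_store_datadirs = /mnt/nfs2/glance/images/:100
filesystem_store_datadirs = /mnt/nfs3/glance/images/
```

Because each image location now records the specific directory the image was written to, a failed NFS server maps directly to the images stored under its mount point, which addresses the mhddfs drawback described above.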
errata-xmlrpc 2014-07-07 12:04:53 UTC Status VERIFIED RELEASE_PENDING
errata-xmlrpc 2014-07-08 15:31:48 UTC Status RELEASE_PENDING CLOSED
Resolution --- ERRATA
Last Closed 2014-07-08 11:31:48 UTC
Perry Myers 2016-04-26 15:51:43 UTC CC pmyers