Bug 1387672 - postgres upgrade fails to run if not using a separate mount point/disk for the db
Summary: postgres upgrade fails to run if not using a separate mount point/disk for the db
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Appliance
Version: 5.7.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: GA
Target Release: cfme-future
Assignee: Nick Carboni
QA Contact: luke couzens
URL:
Whiteboard: black:upgrade:migration
Duplicates: 1411320 (view as bug list)
Depends On:
Blocks:
 
Reported: 2016-10-21 14:02 UTC by luke couzens
Modified: 2017-06-09 14:02 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-02 12:30:50 UTC
Category: ---
Cloudforms Team: ---
Target Upstream Version:
Embargoed:




Links
Red Hat Bugzilla 1431811 (NEW): CloudForms database not properly setup (last updated 2022-02-12 08:28:25 UTC)

Internal Links: 1431811

Description luke couzens 2016-10-21 14:02:29 UTC
Description of problem: postgres upgrade fails to run if not using a separate mount point/disk for the db


Version-Release number of selected component (if applicable): 5.7.0.6


How reproducible: 100%


Steps to Reproduce (see the shell sketch after this list):
1. Configure a 5.6 appliance without a separate disk for the db
2. Add repos for the latest 5.7
3. Run yum update
4. Log out of ssh and back in
5. Run vmdb
6. Run rake db:migrate
7. Run rake evm:automate:reset
8. Run /usr/bin/miq_postgres_upgrade.sh
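
A consolidated shell sketch of the steps above (assumes the 5.7 repos are already configured on the appliance; exact repo setup is environment-specific):

  yum -y update                      # pull in the 5.7 packages
  # log out of ssh and back in so the updated environment is loaded, then:
  vmdb                               # cd to the appliance application directory
  rake db:migrate                    # migrate the application schema
  rake evm:automate:reset            # reset the automate model
  /usr/bin/miq_postgres_upgrade.sh   # the upgrade script this bug is about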

Actual results: the upgrade script fails to run and does not upgrade postgres


Expected results: postgres upgraded to 9.5


Additional info: Currently the script will only run the upgrade process if the db is mounted on a separate disk. This makes me wonder whether customers will hit this issue if they have not configured a separate disk.

Comment 2 Nick Carboni 2016-10-21 14:08:52 UTC
I'm not sure how common it is for customers to run an internal database on the same disk as the main filesystem.

Is this something we should be accounting for?

Also, the reason we check for the mount point is that this script also handles moving the mount point from the data directory to the pgsql directory (one level up) so that we can run the upgrade on the same filesystem and take advantage of pg_upgrade's ability to upgrade the data using hard links instead of copying.

If we think it's safe to do the upgrade even if the disk isn't mounted where we expect it to be, I should be able to make the mountpoint change logic conditional on the current check for the mountpoint location.

Thoughts?
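
A rough sketch of the mount-point move and hard-link upgrade described above (all paths, device names, and version directories are assumptions, not taken from miq_postgres_upgrade.sh):

  # Remount the DB volume one level above the data directory so the old and
  # new data directories share a filesystem; pg_upgrade can then hard-link
  # the data files instead of copying them.
  umount /path/to/pgsql/data                  # old mount point: the data dir itself
  mount /dev/vg_data/lv_pg /path/to/pgsql     # new mount point: one level up
  # (the new data dir must already be initialized with the 9.5 initdb)
  pg_upgrade --link \
    --old-datadir=/path/to/pgsql/data \
    --new-datadir=/path/to/pgsql/data-9.5 \
    --old-bindir=/usr/pgsql-9.4/bin \
    --new-bindir=/usr/pgsql-9.5/bin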

Comment 4 Nick Carboni 2016-11-02 12:30:50 UTC
Closing as WONTFIX.

Feedback from John Hardy about this situation:
> We should not support this type of configuration and urge the customer to manually rectify the configuration prior to going to CF 4.2

Comment 5 Nick Carboni 2017-01-13 20:05:32 UTC
*** Bug 1411320 has been marked as a duplicate of this bug. ***

Comment 9 Steven Mercurio 2017-06-09 02:50:07 UTC
There was NO mention that this was an issue when 4.1 was installed.  At the very LEAST I need to know how to move over to another physical disk.

Comment 10 Steven Mercurio 2017-06-09 03:39:38 UTC
Also I should mention this requirement is UTTERLY POINTLESS, as the "second disk" is most of the time on the same set of physical platters (aka VMware datastore, SAN LUN, iSCSI LUN, etc.) anyway. Those "separate disk" days are LONG GONE.

AND

if the storage is laid out right this is a moot point, as the PV is LOGICAL and coming from a SAN that is spread out over MANY spindles, and having just one logical disk tied to the VM makes management (ESPECIALLY with THIN PROVISIONING) FAR easier. Having a VM with just one disk is something I **RECOMMEND**, handling the IO issues on the *BACK END* with the storage array, which can be CHANGED/EXPANDED ON THE FLY as required.

This makes backups/replication with de-duplication to a DR site/etc. FAR easier.

NetApp and other SANs are just FAR better at this and my suggestion to clients is:

DON'T complicate the OS - just put everything on ONE disk and monitor/handle the IO issues on the NETAPP/SAN side where you handle backups, DR, thin provisioning, etc.

The choice I made for 4.1 to keep everything on one disk was not only COMMON but BEST PRACTICE given my SAN and VM storage layout.

BTW: DITTO for SATELLITE6 - all 3 DBs, ONE VM disk. It's just backed by SSD-backed/Tier-1 SAN storage.

I've been doing this since 1997 and started doing SAN de-dup/replication to a DR site with 3par in ~2009.  1 VM = 1 virtual Disk.  K.I.S.S.



This needs to be re-opened and re-visited!

Comment 11 Steven Mercurio 2017-06-09 03:46:24 UTC
(In reply to Steven Mercurio from comment #9)
> There was NO mention that this was an issue when 4.1 was installed.  At the
> very LEAST I need to know how to move over to another physical disk.

Figured out that I need a second "vg_data" VG for an "lv_pg" LV, so I just created a /dev/vda3 to use for now as a PV to put the "vg_data" VG on. Then I'll move the LV back over to the VG-CFME VG after this 4.1 --> 4.5 upgrade mess is done and wipe /dev/vda3.

I should NOT have to do this.
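
A hedged sketch of that workaround (the /dev/vda3 device and the vg_data/lv_pg names come from the comment; the filesystem type and remaining steps are assumptions):

  pvcreate /dev/vda3                      # new partition as an LVM physical volume
  vgcreate vg_data /dev/vda3              # VG name the upgrade script expects
  lvcreate -l 100%FREE -n lv_pg vg_data   # LV for the postgres data
  mkfs.xfs /dev/vg_data/lv_pg
  # stop postgres, copy the data directory to the new LV, mount it where the
  # script expects, run the upgrade, then move the LV back and reclaim /dev/vda3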

Comment 12 Steven Mercurio 2017-06-09 09:27:18 UTC
I looked at the script and it will be quite easy to change to work with one drive. Where can I send the new script once complete for inclusion, so that the user has the choice to use a second disk or to run the alternative script with just one disk?

Comment 13 Nick Carboni 2017-06-09 14:02:35 UTC
Pull Requests are welcome at https://github.com/ManageIQ/manageiq-appliance/blob/master/LINK/usr/bin/miq_postgres_upgrade.sh

Some background on this:

The main reason this script needed to be created was because up until this point the mount point for the data directory created by our appliance console was directly on the PG data directory rather than above it.

PostgreSQL major version upgrades often require a *second* data directory for the new version's data. So, in order for this script to preserve environments which do have the database on a separate filesystem, and to take advantage of the hard link option in pg_upgrade, we needed to move the mount point before upgrading, which is a more complex task than a simple pg_upgrade.

The choice was made not to alter this script to account for this case because the separate LV and mount point location was the majority of the reason the script needed to be created in the first place.

Now, given all that, I do think it would be beneficial to write up a separate script to handle a basic pg_upgrade with a data directory that is not on a separate filesystem, which would share some of the code from the existing script. Unfortunately I don't have the cycles to take this on right now, but if you would like to contribute such a refactoring, I would be happy to review it.
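
For illustration, a minimal sketch of what such a single-filesystem upgrade might look like (a sketch under assumed paths and service names, not the appliance's actual layout):

  systemctl stop postgresql               # service name is an assumption
  su - postgres -c 'pg_upgrade --link \
    --old-datadir=/var/lib/pgsql/9.4/data \
    --new-datadir=/var/lib/pgsql/9.5/data \
    --old-bindir=/usr/pgsql-9.4/bin \
    --new-bindir=/usr/pgsql-9.5/bin'
  # no mount-point move is required because both data directories already
  # live on the same filesystem, so --link can hard-link the data files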

