Description of problem:
The postgres upgrade fails to run if a separate mount point/disk is not used for the database.

Version-Release number of selected component (if applicable):
5.7.0.6

How reproducible:
100%

Steps to Reproduce:
1. Configure a 5.6 appliance without a separate disk for the database
2. Add repos for the latest 5.7
3. Run yum update
4. Log out of and back in to SSH
5. Run vmdb
6. Run rake db:migrate
7. Run rake evm:automate:reset
8. Run /usr/bin/miq_postgres_upgrade.sh

Actual results:
The upgrade script fails to run and does not upgrade postgres.

Expected results:
Postgres is upgraded to 9.5.

Additional info:
Currently the script will only run the upgrade process if the database is mounted on a separate disk. This makes me wonder whether customers may come across this issue if they have not configured one.
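For reference, the failure comes down to a mount-point guard: the script only proceeds when the data directory sits on its own filesystem. A minimal sketch of that kind of check follows; the `DATA_DIR` path and the exact detection logic are assumptions for illustration, not the script's actual code.

```shell
#!/bin/bash
# Hypothetical sketch of the guard that makes the upgrade script bail out.
# DATA_DIR is an assumed path; the appliance's real data directory differs.
DATA_DIR=${DATA_DIR:-/var/lib/pgsql/data}

# A directory is a mount point when its device ID differs from its parent's.
is_mounted() {
  local dir=$1
  [ "$(stat -c %d "$dir" 2>/dev/null)" != "$(stat -c %d "$dir/.." 2>/dev/null)" ]
}

if is_mounted "$DATA_DIR"; then
  echo "dedicated filesystem found: safe to relocate the mount and upgrade"
else
  echo "data directory shares the root filesystem: refusing to upgrade"
fi
```

With a single-disk appliance the check lands in the second branch, which matches the reported behavior.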
I'm not sure how common it is for customers to run an internal database on the same disk as the main filesystem. Is this something we should be accounting for?

Also, the reason we check for the mount point is that this script also handles moving the mount point from the data directory to the pgsql directory (one level up) so that we can run the upgrade on the same filesystem and take advantage of pg_upgrade's ability to upgrade the data using hard links instead of copying. If we think it's safe to do the upgrade even when the disk isn't mounted where we expect it to be, I should be able to make the mount-point change logic conditional on the current check for the mount-point location. Thoughts?
Closing as WONTFIX. Feedback from John Hardy about this situation: > We should not support this type of configuration and urge the customer to manually rectify the configuration prior to going to CF 4.2
*** Bug 1411320 has been marked as a duplicate of this bug. ***
There was NO mention that this was an issue when 4.1 was installed. At the very LEAST I need to know how to move over to another physical disk.
Also I should mention this requirement is UTTERLY POINTLESS, as the "second disk" is most of the time on the same set of physical platters (aka VMware datastore, SAN LUN, iSCSI LUN, etc.) anyway. Those "separate disk" days are LONG GONE. AND if the storage is laid out right this is a moot point, as the PV is LOGICAL and coming from a SAN that is spread out over MANY spindles, and having just one logical disk tied to the VM makes management (ESPECIALLY with THIN PROVISIONING) FAR easier.

Having a VM with just one disk is something I **RECOMMEND**, handling the IO issues on the *BACK END* with the storage array, which can be CHANGED/EXPANDED ON THE FLY as required. This makes backups/replication with de-duplication to a DR site/etc. FAR easier. NetApp and other SANs are just FAR better at this, and my suggestion to clients is: DON'T complicate the OS. Just put everything on ONE disk and monitor/handle the IO issues on the NETAPP/SAN side, where you handle backups, DR, thin provisioning, etc.

The choice I made for 4.1 to keep everything on one disk was not only COMMON but BEST PRACTICE given my SAN and VM storage layout. BTW: DITTO for SATELLITE6 -- all 3 DBs, ONE VM disk. It's just backed by SSD-backed/Tier 1 SAN storage. I've been doing this since 1997 and started doing SAN de-dup/replication to a DR site with 3par in ~2009. 1 VM = 1 virtual disk. K.I.S.S.

This needs to be re-opened and re-visited!
(In reply to Steven Mercurio from comment #9)
> There was NO mention that this was an issue when 4.1 was installed. At the
> very LEAST I need to know how to move over to another physical disk.

Figured out that I need a second "vg_data" VG for an "lv_pg" LV, so for now I just created a /dev/vda3 to use as a PV to put the "vg_data" VG on. Then I'll move the LV back over to the VG-CFME VG after this 4.1-->4.5 upgrade mess is done and wipe /dev/vda3. I should NOT have to do this.
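For anyone following the same workaround, the LVM steps described above would look roughly like this. It is shown as a dry run that only echoes the commands; the filesystem type and mount point are assumptions, not details from the comment.

```shell
#!/bin/bash
# Dry-run sketch of the workaround above: a new PV on /dev/vda3 backing a
# "vg_data" VG that holds an "lv_pg" LV. The run() wrapper echoes each
# command instead of executing it; change it to run "$@" to apply for real.
run() { echo "+ $*"; }

run pvcreate /dev/vda3                          # new physical volume
run vgcreate vg_data /dev/vda3                  # VG the script expects
run lvcreate -n lv_pg -l 100%FREE vg_data       # LV for the database
run mkfs.xfs /dev/vg_data/lv_pg                 # filesystem type assumed
run mount /dev/vg_data/lv_pg /var/lib/pgsql     # mount point assumed
```

After the 4.1 to 4.5 upgrade completes, the LV can be migrated back (e.g. with pvmove) and /dev/vda3 retired, as described in the comment.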
I looked at the script and it will be quite easy to change it to work with one drive. Where can I send the new script once it is complete for inclusion, so that users have the choice between keeping a second disk (the current script) or running an alternative script that uses just one disk?
Pull Requests are welcome at https://github.com/ManageIQ/manageiq-appliance/blob/master/LINK/usr/bin/miq_postgres_upgrade.sh

Some background on this: the main reason this script needed to be created was that, up until this point, the mount point for the data directory created by our appliance console was directly on the PG data directory rather than above it. PostgreSQL major version upgrades often require a *second* data directory for the new version's data. So, in order for this script to preserve environments which do have the database on a separate filesystem, and to take advantage of the hard link option in pg_upgrade, we needed to move the mount point before upgrading, which is a more complex task than a simple pg_upgrade. The choice was made not to alter this script to account for this issue because this issue (the separate LV and mount point location) was the majority of the reason the script needed to be created in the first place.

Given all that, I do think it would be beneficial to write up a separate script to handle a basic pg_upgrade with a data directory that is not on a separate filesystem, sharing some of the code from the existing script. Unfortunately I don't have the cycles to take this on right now, but if you would like to contribute such a refactoring, I would be happy to review it.
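As a starting point for such a contribution: on a single filesystem, the core of the upgrade is pg_upgrade's --link mode, which hard-links data files instead of copying them, so no second filesystem (and almost no extra space) is needed. The sketch below is a dry run; the paths, version numbers, and service name are assumptions, not the appliance's actual layout.

```shell
#!/bin/bash
# Hypothetical single-filesystem upgrade sketch. The real
# miq_postgres_upgrade.sh does more (ownership, config, error handling).
# run() echoes each command instead of executing it; change it to "$@"
# to apply for real.
run() { echo "+ $*"; }

OLD_DATA=/var/lib/pgsql/9.4/data    # assumed old data directory
NEW_DATA=/var/lib/pgsql/9.5/data    # assumed new data directory
OLD_BIN=/usr/pgsql-9.4/bin          # assumed old binaries
NEW_BIN=/usr/pgsql-9.5/bin          # assumed new binaries

run systemctl stop postgresql       # stop the old cluster first

# Initialize the new cluster on the SAME filesystem as the old one, then
# let pg_upgrade hard-link the data files. --link requires both data
# directories to live on one filesystem, which is exactly the single-disk
# case this bug is about.
run sudo -u postgres "$NEW_BIN/initdb" -D "$NEW_DATA"
run sudo -u postgres "$NEW_BIN/pg_upgrade" \
    --old-datadir "$OLD_DATA" --new-datadir "$NEW_DATA" \
    --old-bindir "$OLD_BIN" --new-bindir "$NEW_BIN" \
    --link

run systemctl start postgresql
```

Because --link shares files between the old and new data directories, the old cluster must not be started again after the upgrade succeeds.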