Bug 1412626 - [scale] High Database Load after updating to oVirt 4.0.4 (select * from getdisksvmguid)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Database.Core
Version: 4.0.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.1.1
Target Release: 4.1.1.3
Assignee: Liron Aravot
QA Contact: eberman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-12 13:35 UTC by christian.grundmann
Modified: 2017-04-21 09:30 UTC
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-04-21 09:30:39 UTC
oVirt Team: Storage
rule-engine: ovirt-4.1+
rule-engine: exception+


Attachments
select * from pg_stat_activity (32.62 KB, text/plain)
2017-01-12 13:35 UTC, christian.grundmann
no flags Details
create view sql (3.76 KB, text/plain)
2017-01-29 14:59 UTC, Liron Aravot
no flags Details
alldisks_post_view (20.87 KB, text/plain)
2017-01-30 10:49 UTC, christian.grundmann
no flags Details
alldisks_pre_view (16.56 KB, text/plain)
2017-01-30 10:49 UTC, christian.grundmann
no flags Details
getdisksvmguid_post_view (399 bytes, text/plain)
2017-01-30 10:50 UTC, christian.grundmann
no flags Details
getdisksvmguid_pre_view (392 bytes, text/plain)
2017-01-30 10:50 UTC, christian.grundmann
no flags Details
Zip with Postgres 9.6 Reports (6.46 KB, application/zip)
2017-01-30 16:16 UTC, christian.grundmann
no flags Details
Zip with Reports after Analyze (22.08 KB, application/zip)
2017-02-10 08:38 UTC, christian.grundmann
no flags Details


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 72260 0 master MERGED db: memory_and_disk_images_storage_domain_view - EXISTS instead of JOIN 2017-02-16 08:19:13 UTC
oVirt gerrit 72406 0 ovirt-engine-4.1 MERGED db: memory_and_disk_images_storage_domain_view - EXISTS instead of JOIN 2017-02-17 10:09:51 UTC
oVirt gerrit 72412 0 ovirt-engine-4.0 MERGED db: memory_and_disk_images_storage_domain_view - EXISTS instead of JOIN 2017-02-19 14:02:24 UTC

Description christian.grundmann 2017-01-12 13:35:57 UTC
Created attachment 1239931 [details]
select * from pg_stat_activity

Maybe side effect of https://bugzilla.redhat.com/show_bug.cgi?id=1302752

Description of problem:
After upgrading to 4.0.4 (or higher)
my setup becomes unusable:
100% CPU, no CPU wait, lots of running queries

Setup:
8 Nodes 
Around 100 VMs running all the time
100-200 VMs dynamically created and destroyed from a template using Vagrant
Around 2 disks per VM
7 Storage Domains around 1TB each

The performance problem hits as soon as I start creating VMs with Vagrant


         backend_start         |          query_start          |         state_change          | waiting |        state        |                                          query
-------------------------------+-------------------------------+-------------------------------+---------+---------------------+------------------------------------------------------------------------------------------
2017-01-11 15:52:41.612942+01 | 2017-01-11 16:14:45.676881+01 | 2017-01-11 16:14:45.676882+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 15:52:35.526771+01 | 2017-01-11 16:14:45.750546+01 | 2017-01-11 16:14:45.750547+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 14:48:41.133303+01 | 2017-01-11 16:14:42.89794+01  | 2017-01-11 16:14:42.897991+01 | f       | idle                | SELECT 1
2017-01-11 14:48:43.504048+01 | 2017-01-11 16:14:46.794742+01 | 2017-01-11 16:14:46.794813+01 | f       | idle                | SELECT option_value FROM vdc_options WHERE option_name = 'DisconnectDwh'
2017-01-11 14:48:43.531955+01 | 2017-01-11 16:14:34.541273+01 | 2017-01-11 16:14:34.543513+01 | f       | idle                | COMMIT
2017-01-11 14:48:43.564148+01 | 2017-01-11 16:14:34.543635+01 | 2017-01-11 16:14:34.544145+01 | f       | idle                | COMMIT
2017-01-11 14:48:43.569029+01 | 2017-01-11 16:00:01.86664+01  | 2017-01-11 16:00:01.866711+01 | f       | idle in transaction | SELECT 'continueAgg', '1'                                                               +
                               |                               |                               |         |                     | FROM history_configuration                                                              +
                               |                               |                               |         |                     | WHERE var_name = 'lastHourAggr'                                                         +
                               |                               |                               |         |                     | AND var_datetime < '2017-01-11 15:00:00.000000+0100'                                    +
                               |                               |                               |         |                     |
2017-01-11 14:48:43.572644+01 | 2017-01-11 14:48:43.57571+01  | 2017-01-11 14:48:43.575736+01 | f       | idle                | SET extra_float_digits = 3
2017-01-11 14:48:43.577039+01 | 2017-01-11 14:48:43.580066+01 | 2017-01-11 14:48:43.58009+01  | f       | idle                | SET extra_float_digits = 3
2017-01-11 14:48:54.308078+01 | 2017-01-11 16:14:46.931422+01 | 2017-01-11 16:14:46.931423+01 | f       | active              | select * from  getsnapshotbyleafguid($1)
2017-01-11 14:48:54.465485+01 | 2017-01-11 16:14:21.113926+01 | 2017-01-11 16:14:21.113959+01 | f       | idle                | COMMIT
2017-01-11 15:52:41.606561+01 | 2017-01-11 16:14:45.839754+01 | 2017-01-11 16:14:45.839755+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 14:48:56.477555+01 | 2017-01-11 16:14:45.276255+01 | 2017-01-11 16:14:45.277038+01 | f       | idle                | select * from  getvdsbyvdsid($1, $2, $3)
2017-01-11 15:52:41.736304+01 | 2017-01-11 16:14:44.48134+01  | 2017-01-11 16:14:44.48134+01  | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 14:48:56.489949+01 | 2017-01-11 16:14:46.40924+01  | 2017-01-11 16:14:46.409241+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 15:52:41.618773+01 | 2017-01-11 16:14:45.732394+01 | 2017-01-11 16:14:45.732394+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 14:48:56.497824+01 | 2017-01-11 16:14:46.827751+01 | 2017-01-11 16:14:46.827752+01 | f       | active              | select * from  getsnapshotbyleafguid($1)
2017-01-11 14:48:56.497732+01 | 2017-01-11 16:09:04.207597+01 | 2017-01-11 16:09:04.342567+01 | f       | idle                | select * from  getvdsbyvdsid($1, $2, $3)
2017-01-11 14:48:58.785162+01 | 2017-01-11 16:14:46.093658+01 | 2017-01-11 16:14:46.093659+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 15:52:41.620421+01 | 2017-01-11 16:14:46.224543+01 | 2017-01-11 16:14:46.224543+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 15:52:41.620478+01 | 2017-01-11 16:14:46.009864+01 | 2017-01-11 16:14:46.009865+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 15:52:41.647839+01 | 2017-01-11 16:14:46.834005+01 | 2017-01-11 16:14:46.834005+01 | f       | active              | select * from  getsnapshotbyleafguid($1)
2017-01-11 14:48:58.929402+01 | 2017-01-11 16:14:44.908748+01 | 2017-01-11 16:14:44.908749+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 15:52:41.756257+01 | 2017-01-11 16:14:46.193542+01 | 2017-01-11 16:14:46.193542+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 15:52:41.766689+01 | 2017-01-11 16:14:46.453393+01 | 2017-01-11 16:14:46.453394+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 15:52:41.848642+01 | 2017-01-11 16:14:29.04013+01  | 2017-01-11 16:14:29.080273+01 | f       | idle                | select * from  getvdsbyvdsid($1, $2, $3)
2017-01-11 16:03:06.731047+01 | 2017-01-11 16:13:43.332298+01 | 2017-01-11 16:13:43.333075+01 | f       | idle                | select * from  getallfromcluster($1, $2)
2017-01-11 16:03:18.282962+01 | 2017-01-11 16:14:44.56195+01  | 2017-01-11 16:14:44.56195+01  | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:03:18.305949+01 | 2017-01-11 16:14:46.483223+01 | 2017-01-11 16:14:46.483223+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:03:18.905939+01 | 2017-01-11 16:14:45.090399+01 | 2017-01-11 16:14:45.0904+01   | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:03:25.100944+01 | 2017-01-11 16:14:46.887946+01 | 2017-01-11 16:14:46.887947+01 | f       | active              | select * from  getsnapshotbyleafguid($1)
2017-01-11 16:03:25.118964+01 | 2017-01-11 16:14:45.866665+01 | 2017-01-11 16:14:45.866666+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:04:00.255077+01 | 2017-01-11 16:13:51.184443+01 | 2017-01-11 16:13:51.184499+01 | f       | idle                | select * from  getqosbyqosid($1)
2017-01-11 16:04:12.591564+01 | 2017-01-11 16:14:45.849935+01 | 2017-01-11 16:14:45.849935+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:04:13.778038+01 | 2017-01-11 16:14:46.138704+01 | 2017-01-11 16:14:46.138705+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:04:14.734107+01 | 2017-01-11 16:12:28.219276+01 | 2017-01-11 16:12:28.219372+01 | f       | idle                | select * from  getqosbyqosid($1)
2017-01-11 16:04:15.098427+01 | 2017-01-11 16:14:45.049351+01 | 2017-01-11 16:14:45.049352+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:04:15.549806+01 | 2017-01-11 16:14:45.942699+01 | 2017-01-11 16:14:45.942699+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:04:35.623004+01 | 2017-01-11 16:14:45.277261+01 | 2017-01-11 16:14:45.278013+01 | f       | idle                | select * from  getvdsbyvdsid($1, $2, $3)
2017-01-11 16:05:17.307564+01 | 2017-01-11 16:14:41.567496+01 | 2017-01-11 16:14:41.568274+01 | f       | idle                | select * from  getvdsbyvdsid($1, $2, $3)
2017-01-11 16:05:11.805966+01 | 2017-01-11 16:14:46.851024+01 | 2017-01-11 16:14:46.851024+01 | f       | active              | select * from  getsnapshotbyleafguid($1)
2017-01-11 16:05:17.430004+01 | 2017-01-11 16:10:23.506252+01 | 2017-01-11 16:10:23.582274+01 | f       | idle                | select * from  getstorage_domains_by_storagepoolid($1, $2, $3)
2017-01-11 16:05:25.482896+01 | 2017-01-11 16:14:45.0316+01   | 2017-01-11 16:14:45.0316+01   | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:05:25.547646+01 | 2017-01-11 16:11:10.067981+01 | 2017-01-11 16:11:10.068043+01 | f       | idle                | select * from  getqosbyqosid($1)
2017-01-11 16:05:34.070317+01 | 2017-01-11 16:14:46.293573+01 | 2017-01-11 16:14:46.293573+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:05:35.479037+01 | 2017-01-11 16:14:45.699444+01 | 2017-01-11 16:14:45.699445+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:05:35.500181+01 | 2017-01-11 16:14:46.221274+01 | 2017-01-11 16:14:46.221274+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:05:48.391458+01 | 2017-01-11 16:14:46.443046+01 | 2017-01-11 16:14:46.443047+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:06:40.116008+01 | 2017-01-11 16:14:45.181865+01 | 2017-01-11 16:14:45.181866+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:06:23.595959+01 | 2017-01-11 16:14:46.126082+01 | 2017-01-11 16:14:46.126083+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:06:24.582543+01 | 2017-01-11 16:14:46.074258+01 | 2017-01-11 16:14:46.074258+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:06:24.660881+01 | 2017-01-11 16:13:39.930702+01 | 2017-01-11 16:13:39.95559+01  | f       | idle                | select * from  getvdsbyvdsid($1, $2, $3)
2017-01-11 16:06:24.690863+01 | 2017-01-11 16:07:28.763627+01 | 2017-01-11 16:07:28.763684+01 | f       | idle                | select * from  getqosbyqosid($1)
2017-01-11 16:06:26.244997+01 | 2017-01-11 16:14:45.760047+01 | 2017-01-11 16:14:45.760048+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:06:26.359194+01 | 2017-01-11 16:14:46.90043+01  | 2017-01-11 16:14:46.929003+01 | f       | idle                | select * from  getvdsbyvdsid($1, $2, $3)
2017-01-11 16:06:26.377649+01 | 2017-01-11 16:14:45.035936+01 | 2017-01-11 16:14:45.035937+01 | f       | active              | select * from  getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:06:40.128282+01 | 2017-01-11 16:12:43.764245+01 | 2017-01-11 16:12:43.764293+01 | f       | idle                | select * from  getqosbyqosid($1)
2017-01-11 16:06:40.150762+01 | 2017-01-11 16:10:54.629416+01 | 2017-01-11 16:10:54.629496+01 | f       | idle                | select * from  getstoragedomainidsbystoragepoolidandstatus($1, $2)
2017-01-11 16:14:46.934168+01 | 2017-01-11 16:14:46.964807+01 | 2017-01-11 16:14:46.964809+01 | f       | active              | select backend_start,query_start,state_change,waiting,state,query from pg_stat_activity;
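As a side note (not part of the original report), a quick way to see which statements dominate such a pg_stat_activity dump is a grouping query along these lines:

```sql
-- Sketch: count backends per statement and state. On PostgreSQL 9.2 the
-- activity view exposes "query", "state" and the boolean "waiting"
-- column used above (replaced by wait_event in 9.6).
SELECT query, state, count(*) AS backends
FROM pg_stat_activity
GROUP BY query, state
ORDER BY backends DESC;
```

In the dump above this would put `select * from getdisksvmguid($1, $2, $3, $4)` at the top by a wide margin.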

top - 16:13:43 up  1:36,  3 users,  load average: 41.62, 37.53, 21.37
Tasks: 286 total,  40 running, 246 sleeping,   0 stopped,   0 zombie
%Cpu(s): 99.3 us,  0.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 16432648 total,  3626184 free,  6746368 used,  6060096 buff/cache
KiB Swap:  5242876 total,  5242876 free,        0 used.  8104244 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
17328 postgres  20   0 3485020  66760  20980 R  21.5  0.4   1:37.27 postgres: engine engine 127.0.0.1(35212) SELECT
8269 postgres  20   0 3484796  68200  22780 R  21.2  0.4   2:15.36 postgres: engine engine 127.0.0.1(47116) SELECT
9016 postgres  20   0 3520356 279756 199452 R  21.2  1.7   7:24.29 postgres: engine engine 127.0.0.1(42992) SELECT
16878 postgres  20   0 3498100  66160  18468 R  21.2  0.4   2:00.82 postgres: engine engine 127.0.0.1(34238) SELECT
16751 postgres  20   0 3486388 215784 169404 R  20.9  1.3   1:56.38 postgres: engine engine 127.0.0.1(34008) SELECT
17868 postgres  20   0 3487860 215472 167796 R  20.9  1.3   1:07.40 postgres: engine engine 127.0.0.1(36312) SELECT
8272 postgres  20   0 3490392  76912  25288 R  20.5  0.5   2:30.15 postgres: engine engine 127.0.0.1(47124) SELECT
8274 postgres  20   0 3495800  83144  26100 R  20.5  0.5   2:56.66 postgres: engine engine 127.0.0.1(47130) SELECT
9015 postgres  20   0 3523344 283388 198908 R  20.5  1.7   7:19.91 postgres: engine engine 127.0.0.1(42990) SELECT
16879 postgres  20   0 3488296  72180  23744 R  20.5  0.4   1:30.01 postgres: engine engine 127.0.0.1(34242) SELECT
17241 postgres  20   0 3486540 215716 168024 R  20.5  1.3   1:47.58 postgres: engine engine 127.0.0.1(35018) SELECT
17242 postgres  20   0 3495864  69172  20988 R  20.5  0.4   1:54.09 postgres: engine engine 127.0.0.1(35022) SELECT
17668 postgres  20   0 3488576  54484  15080 R  20.5  0.3   1:28.91 postgres: engine engine 127.0.0.1(35896) SELECT
8266 postgres  20   0 3490688 222344 170852 R  20.2  1.4   2:58.95 postgres: engine engine 127.0.0.1(47112) SELECT
8268 postgres  20   0 3503420 241888 177500 R  20.2  1.5   3:10.34 postgres: engine engine 127.0.0.1(47117) SELECT
8275 postgres  20   0 3510316 253340 181688 R  20.2  1.5   4:12.02 postgres: engine engine 127.0.0.1(47132) SELECT
9014 postgres  20   0 3523872 284636 199424 R  20.2  1.7   7:51.82 postgres: engine engine 127.0.0.1(42988) SELECT
9027 postgres  20   0 3514872 265384 189656 R  20.2  1.6   5:21.63 postgres: engine engine 127.0.0.1(43012) SELECT
17546 postgres  20   0 3475628  55248  19108 R  20.2  0.3   1:33.40 postgres: engine engine 127.0.0.1(35668) SELECT
17669 postgres  20   0 3483284  66920  22488 R  20.2  0.4   1:28.01 postgres: engine engine 127.0.0.1(35898) SELECT
17670 postgres  20   0 3504988  78300  22032 R  20.2  0.5   1:18.96 postgres: engine engine 127.0.0.1(35900) SELECT
17865 postgres  20   0 3485084  66688  21316 R  20.2  0.4   1:14.00 postgres: engine engine 127.0.0.1(36306) SELECT
7869 postgres  20   0 3492780 224272 171620 R  19.9  1.4   2:57.03 postgres: engine engine 127.0.0.1(46542) SELECT




Version-Release number of selected component (if applicable):
No problem in 4.0.3.
Problem with 4.0.4 and higher (tried 4.0.4 and 4.0.6).

How reproducible:


Steps to Reproduce:
1. Create VMs from a template using Vagrant (around 20-30)
2. Engine hits 100% CPU, long-running queries
3. Need to downgrade to 4.0.3

Actual results:
Engine becomes unusable

Expected results:
Performance like in 4.0.3 or better 

Additional info:
see https://www.mail-archive.com/users@ovirt.org/msg38387.html

Comment 1 Tal Nisan 2017-01-12 14:25:47 UTC
Allon, can you please have a look?

Comment 2 Liron Aravot 2017-01-19 13:14:42 UTC
Hi Christian,
can you please specify which version of Postgres you are using?
Additionally, did you try to reproduce the issue on more than one setup?

Comment 3 Liron Aravot 2017-01-19 13:18:19 UTC
Additionally, I'd appreciate if you could run EXPLAIN ANALYZE for that query on your setup.

Comment 4 christian.grundmann 2017-01-19 13:50:43 UTC
Hi,
I did not install Postgres by hand, so I think it was installed by the engine setup?

rpm -qa |grep postgres
postgresql-libs-9.2.18-1.el7.x86_64
postgresql-9.2.18-1.el7.x86_64
postgresql-jdbc-9.2.1002-5.el7.noarch
postgresql-server-9.2.18-1.el7.x86_64

I can't tell which parameters the original query used; I used a template guid, 't', NULL, 'f' like in the other bug:

engine=> EXPLAIN ANALYZE select * from  getdisksvmguid('3987798e-8d6e-4da2-b66d-81cd08d5dc41', 't', NULL, 'f');
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Function Scan on getdisksvmguid  (cost=0.25..10.25 rows=1000 width=3107) (actual time=46.452..46.453 rows=1 loops=1)
 Total runtime: 46.515 ms
(2 rows)

but that's on 4.0.3 (where the problem doesn't happen), as I can't keep my engine on 4.0.6

I also only have that one system, no others to try on

Comment 5 Liron Aravot 2017-01-29 14:57:53 UTC
Thanks Christian,
I'd appreciate it if you could create a db dump of your setup and attach it, or alternatively:
1. Create a db dump of your setup
2. Create a new db and restore the data from the dump to it
3. Run the following and save the output:
a. explain (ANALYZE true, COSTS true)  select * from all_disks_for_vms;
b. explain (ANALYZE true, COSTS true) select * from  getdisksvmguid('3987798e-8d6e-4da2-b66d-81cd08d5dc41', 't', NULL, 'f');

4. Create the VIEW as defined in 4.0.6  (see the attached sql file).
5. Run the queries in step 3 again and save the output.

Additionally, I'd appreciate if you could answer the following:
1. Did you try multiple times? Did you manage to reproduce always or only in some of the times?
2. Do you have the engine log from the last time you tried that on 4.0.6? I'd appreciate if you could attach it as well.

Comment 6 Liron Aravot 2017-01-29 14:59:32 UTC
Created attachment 1245581 [details]
create view sql

Comment 7 Liron Aravot 2017-01-29 15:01:08 UTC
Clarification - run all the tests on the cloned database of course.

Comment 8 christian.grundmann 2017-01-30 10:49:23 UTC
Created attachment 1245833 [details]
alldisks_post_view

Comment 9 christian.grundmann 2017-01-30 10:49:57 UTC
Created attachment 1245834 [details]
alldisks_pre_view

Comment 10 christian.grundmann 2017-01-30 10:50:21 UTC
Created attachment 1245835 [details]
getdisksvmguid_post_view

Comment 11 christian.grundmann 2017-01-30 10:50:47 UTC
Created attachment 1245836 [details]
getdisksvmguid_pre_view

Comment 12 christian.grundmann 2017-01-30 10:52:21 UTC
Attached the pre_view query outputs (from the original dump) and the post_view query outputs (after creating the views)

Comment 13 christian.grundmann 2017-01-30 10:54:51 UTC
1. Did you try multiple times? Did you manage to reproduce always or only in some of the times?
I am creating a lot of VMs during the day, so the problem hit directly after the update; I did the downgrade without experimenting a lot.

2. Do you have the engine log from the last time you tried that on 4.0.6? I'd appreciate if you could attach it as well.
No, I did not save the logs.

Comment 14 Liron Aravot 2017-01-30 15:51:12 UTC
Thanks Christian,

The cause of the problem is that after the changes made in BZ 1302752, the Postgres optimizer chooses to perform a join and then reduce the rows instead of performing a seq scan; when the row count in those tables is very big, that results in worse performance, as can be seen in [1].
Generally speaking, the Postgres optimizer should ideally have generated the same plan for both cases (EXISTS/LEFT JOIN), which means that in both cases it should have performed a seq scan; it may be that because of the complexity of the query this doesn't happen.

Any chance you can install the latest stable release of Postgres for your OS on a different machine, restore your database dump there, and perform the test again?

Thanks,
Liron


[1] - 
-------------------------------------------------------------------------------
old view (using EXISTS):
  ->  Seq Scan on snapshots  (cost=0.00..6.27 rows=1 width=0) (actual time=0.012..0.012 rows=0 loops=292)
        Filter: ((public.images.image_group_id = memory_metadata_disk_id) OR (public.images.image_group_id = memory_dump_disk_id))
        Rows Removed by Filter: 151

..
 Total runtime: 10.278 ms
-------------------------------------------------------------------------------


Updated view (using JOINs) - we can see 44092 rows removed:
-------------------------------------------------------------------------------
  ->  Nested Loop Left Join  (cost=0.00..788.16 rows=1 width=119) (actual time=0.125..5.806 rows=292 loops=1)
        Join Filter: ((public.images.image_group_id = memory_snap.memory_metadata_disk_id) OR (public.images.image_group_id = memory_snap.memory_dump_disk_id))
        Rows Removed by Join Filter: 44092

..
 Total runtime: 433.765 ms
-------------------------------------------------------------------------------
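To make the difference between the two plans concrete, here is a simplified sketch of the two query shapes (illustrative table and column names taken from the plan excerpts above, not the actual oVirt view definition):

```sql
-- Old shape: a correlated EXISTS; the planner can stop scanning
-- snapshots at the first row matching either disk id.
SELECT i.image_group_id
FROM images i
WHERE EXISTS (
    SELECT 1
    FROM snapshots s
    WHERE i.image_group_id = s.memory_metadata_disk_id
       OR i.image_group_id = s.memory_dump_disk_id
);

-- Updated shape: a LEFT JOIN on the same OR-ed condition; every
-- candidate (image, snapshot) pair is produced and then filtered,
-- which is where "Rows Removed by Join Filter: 44092" comes from.
SELECT i.image_group_id
FROM images i
LEFT JOIN snapshots s
    ON i.image_group_id = s.memory_metadata_disk_id
    OR i.image_group_id = s.memory_dump_disk_id;
```

The merged gerrit patches listed above ("EXISTS instead of JOIN") restore the first shape.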

Comment 15 christian.grundmann 2017-01-30 16:16:27 UTC
Created attachment 1245920 [details]
Zip with Postgres 9.6 Reports

Comment 16 christian.grundmann 2017-01-30 16:19:03 UTC
In production I have postgresql 9.2.18.

I reran the tests with postgresql 9.6.1 (attached zip),
but it's only a small improvement: 345.151 ms vs. 10.700 ms in oVirt 4.0.3

Comment 17 Liron Aravot 2017-01-30 16:41:56 UTC
Thanks Christian,
I've also tried to simulate the case in my local env (with 2000 disks/2000 snapshots) using Postgres 9.5; there the optimizer did optimize the query in both cases to use the seq scan instead of performing a join. The generated query plan is therefore related to your data, and we need to let the Postgres developers check whether they have an issue here - if you could open a Postgres bug on that it would be great, as they may need more data from you.

To not depend on the Postgres implementation for that case, I'll change the query to avoid the join, after testing that it works as expected on a high-scale system and doesn't cause a performance degradation.

Thanks for your input,
Liron

Comment 18 christian.grundmann 2017-01-31 07:41:57 UTC
(In reply to Liron Aravot from comment #17)
> Thanks Chrisitan,
> I've also tried to simulate the case in my local env as well (with 2000
> disks/2000 snapshots) using postgres 9.5, the optimizer did optimize the
> query for both to use the seq scan instead of performing a join - therefore
> the generated query plan is related to your data (and we need to let the
> postgres developers to check if they have an issue here, if you could open a
> postgres bug on that it'll be great as they may need more data from you).
> 
> To not depend on the postgres implementation for that case, I'll change the
> query to avoid the join after testing that it works in high scale system as
> expected and not causing to a performance degradation.
> 
> Thanks for your input,
> Liron

Hi,
I don't think I understand the problem well enough to write the Postgres bug report.
It would be great if you could do it. If more input from me is necessary I can provide it.

Thx Christian

Comment 19 Liron Aravot 2017-02-09 18:45:21 UTC
Hi Christian, the optimizer's wrong planning is probably caused by the fact that your db statistics aren't up to date.

In order to confirm that, can you please perform the following on the cloned db you created and we tested on before:

1. Create the VIEW as defined in 4.0.6 (see the attached sql file).
2. Attach the output of: select * from pg_stat_all_tables;
3. Run VACUUM ANALYZE;
4. Run
   a. explain (ANALYZE true, COSTS true) select * from all_disks_for_vms;
   b. explain (ANALYZE true, COSTS true) select * from getdisksvmguid('3987798e-8d6e-4da2-b66d-81cd08d5dc41', 't', NULL, 'f');

and attach the output?

thanks.
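As a side note, the pg_stat_all_tables output requested above can be narrowed down to the columns that matter for stale planner statistics (a generic sketch, not part of the original comment):

```sql
-- If last_analyze/last_autoanalyze are NULL or old for the tables in
-- the slow plan (e.g. images, snapshots), the planner is estimating
-- row counts from stale statistics, and VACUUM ANALYZE should help.
SELECT relname, n_live_tup, last_analyze, last_autoanalyze
FROM pg_stat_all_tables
WHERE relname IN ('images', 'snapshots');
```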

Comment 20 christian.grundmann 2017-02-10 08:38:13 UTC
Created attachment 1248994 [details]
Zip with Reports after Analyze

Comment 21 eberman 2017-04-04 11:31:23 UTC
(In reply to christian.grundmann from comment #20)
> Created attachment 1248994 [details]
> Zip with Reports after Analyze

Hi Christian, could you update us on whether the issue was solved?

Comment 22 christian.grundmann 2017-04-04 11:40:31 UTC
explain (ANALYZE true, COSTS true)  select * from all_disks_for_vms;
 Total runtime: 43.922 ms
explain (ANALYZE true, COSTS true) select * from getdisksvmguid('3987798e-8d6e-4da2-b66d-81cd08d5dc41', 't', NULL, 'f');
 Total runtime: 40.331 ms

Times are from production, not the clone DB

works for me on oVirt 4.1.1.6-1.el7.centos

Comment 23 eberman 2017-04-04 12:29:55 UTC
(In reply to christian.grundmann from comment #22)
> explain (ANALYZE true, COSTS true)  select * from all_disks_for_vms;
>  Total runtime: 43.922 ms
> explain (ANALYZE true, COSTS true) select * from
> getdisksvmguid('3987798e-8d6e-4da2-b66d-81cd08d5dc41', 't', NULL, 'f');
>  Total runtime: 40.331 ms
> 
> Times are from Production not the Clone DB
> 
> works for me on oVirt 4.1.1.6-1.el7.centos

Moving to verified,
thank you!

