Bug 1567592 - Dead row count surges quickly to above 1 million - risk of too much bloat
Summary: Dead row count surges quickly to above 1 million - risk of too much bloat
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: ovirt-engine-dwh
Classification: oVirt
Component: Database
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.2.7
Target Release: ---
Assignee: Shirly Radco
QA Contact: Daniel Gur
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-15 07:34 UTC by Roy Golan
Modified: 2018-09-05 09:39 UTC (History)
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-02 07:21:58 UTC
oVirt Team: Metrics
rule-engine: ovirt-4.2+


Attachments
Query to check how far we are from tx wraparound (756 bytes, text/plain)
2018-04-15 07:37 UTC, Roy Golan

Description Roy Golan 2018-04-15 07:34:37 UTC
Description of problem:
The number of dead rows for at least 4 of the busiest tables surges above 1 million fairly quickly, even after performing a full vacuum. It also stays there, and autovacuum can't bring it down efficiently.
After reaching, say, ~1.3 million it fluctuates around that level every day but never goes back down to the thousands.

The risks are: 1. transaction ID (XID) wraparound, 2. bloat, 3. degradation in query speed (because some queries will scan the dead rows as well).
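As a quick way to gauge risk 1, the database-wide XID age can be compared against the autovacuum_freeze_max_age limit. A minimal sketch (this query is an illustration, not part of this report; it assumes PostgreSQL 9.5+ and access via psql):

```sql
-- Hedged sketch: how close each database is to triggering an
-- anti-wraparound autovacuum (default limit is 200 million XIDs).
SELECT datname,
       age(datfrozenxid) AS xid_age,
       current_setting('autovacuum_freeze_max_age')::bigint
         - age(datfrozenxid) AS tx_before_wraparound_vacuum
FROM pg_database
ORDER BY age(datfrozenxid) DESC;
```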

What we don't have yet is the behaviour on 4.1, but I do see a surge in the dead record count in the engine db as well, though a less dramatic one.

An excerpt of dead record counts from a busy running system (not a huge one, though):

select relname,n_dead_tup from pg_stat_user_tables order by n_dead_tup desc limit 20
              relname              | n_dead_tup
-----------------------------------+------------
 vm_disk_samples_history           |    1567593
 vm_interface_samples_history      |    1306910
 vm_samples_history                |    1270649
 vm_disks_usage_samples_history    |    1270649


And it gets there within a day or two of a full vacuum.

Version-Release number of selected component (if applicable):
4.2

How reproducible:
Looks consistent on this system.

Steps to Reproduce:
1. Have a setup with DWH co-hosted with engine db (default)
2. Populate the system with ~20 hosts / several hundred VMs and disks
3. Run this query every hour to monitor the growth


  select relname,n_dead_tup from pg_stat_user_tables order by n_dead_tup desc limit 20

We still don't know how severe this is; we need to monitor closely and see how fast we are getting to transaction ID wraparound. I'm attaching a query that can give this sort of output:


              table               | xid_age  | mxid_age | tx_before_wraparound_vacuum |    size    |        last_autovacuum        
----------------------------------+----------+----------+-----------------------------+------------+-------------------------------
 vm_interface_daily_history       | 34289153 |      391 |                   165710847 | 168 MB     | 
 vm_disk_hourly_history           | 34289145 |      391 |                   165710855 | 451 MB     | 2018-04-15 09:08:26.253828+03
 cluster_configuration            | 34289144 |      391 |                   165710856 | 112 kB     | 


We can see here that the daily history table didn't even get an autovacuum.
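The attached query itself is not inlined in this report. A sketch that produces the same columns might look like the following (this is a hedged reconstruction, not attachment 1422111 itself; note that in the output above xid_age + tx_before_wraparound_vacuum = 200,000,000, i.e. the autovacuum_freeze_max_age default, which is what the sketch assumes):

```sql
-- Hedged reconstruction of the attached wraparound query (assumes
-- PostgreSQL 9.5+ for mxid_age()); toast tables have no row in
-- pg_stat_user_tables, hence the LEFT JOIN.
SELECT c.oid::regclass                      AS "table",
       age(c.relfrozenxid)                  AS xid_age,
       mxid_age(c.relminmxid)               AS mxid_age,
       current_setting('autovacuum_freeze_max_age')::bigint
         - age(c.relfrozenxid)              AS tx_before_wraparound_vacuum,
       pg_size_pretty(pg_table_size(c.oid)) AS size,
       s.last_autovacuum
FROM pg_class c
LEFT JOIN pg_stat_user_tables s ON s.relid = c.oid
WHERE c.relkind IN ('r', 't')
ORDER BY age(c.relfrozenxid) DESC
LIMIT 20;
```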

Comment 1 Roy Golan 2018-04-15 07:37:43 UTC
Created attachment 1422111 [details]
Query to check how far we are from tx wraparound

Comment 2 Shirly Radco 2018-04-15 08:45:45 UTC
Mordechai, please run the query on the scale QE env and attach the results here.

Comment 3 Shirly Radco 2018-04-17 12:31:17 UTC
Please check this in a 4.1 environment.

Comment 4 mlehrer 2018-04-17 16:24:53 UTC
(In reply to Shirly Radco from comment #2)
> Mordechai, Please run the query on the scale qe env and attach here the
> results.

Can DWH be separated from the engine postgres, and a vacuum full be run first for this?

It would mean:

Steps:
0. Run VACUUM FULL on the db
1. Have a setup with DWH separated from the engine db
2. 400 hosts / ~4k VMs and disks
3. Run this query every hour to monitor growth after the full vacuum was done:
4.  select relname,n_dead_tup from pg_stat_user_tables order by n_dead_tup desc limit 20

Is this acceptable on 4.2, with a separate dwh db?
Please confirm.

Comment 5 Shirly Radco 2018-04-23 07:11:58 UTC
(In reply to mlehrer from comment #4)
> (In reply to Shirly Radco from comment #2)
> > Mordechai, Please run the query on the scale qe env and attach here the
> > results.
> 
> Can DWH be separated from engine postgres, and run a vacuum full first for
> this?
> 
> It would mean:
> 
> Steps:
> 0. Run Vaccum Full on db
> 1. Have a setup with DWH separated from engine db
> 2. 400 hosts / several 4k VMs and disks
> 3. Run this query every hour to monitor the growth after full vaccum was
> done.
> 4.  select relname,n_dead_tup from pg_stat_user_tables order by n_dead_tup
> desc limit 20
> 
> Is this acceptable on 4.2, and separate dwh db?
> Please confirm


Yes. We can do a full vacuum and follow up after it, but let's get the stats before we run it as well.
A separate dwh db is ok.

From QE, I would also want to test on a 4.1 installation, to see whether this is a regression.

Comment 6 mlehrer 2018-04-30 12:04:32 UTC
(In reply to Shirly Radco from comment #5)
> (In reply to mlehrer from comment #4)
> > [...]
> 
> 
> Yes. We can do a full vacuum and follow up after it but lets get the stats
> before we run it as well.
> Separate dwh db is ok.
> 
> From QE I would also want to test on a 4.1 installation to see if this is a
> regression.

Updating the BZ:

#4.1 
We agreed offline that this will be handled outside scale.

#4.2
Provided sradco 3 samples over 3 hours; rgolan requested that the sampling be done over several days, not hours. We will take 1 sample a day over the next 2 days for the 4.2 values here [1]

[1] https://docs.google.com/spreadsheets/d/1dhgMz7dgqRWTXRXvOQ7nzkjXWzH399fgKlMAYvr0RKk/edit?usp=sharing

Comment 7 Lucie Leistnerova 2018-05-03 09:28:57 UTC
I don't see any dead records in the dwh db sample tables on an engine with 2 hosts, 30 VMs, dwh co-hosted. Tested on ovirt-engine-4.1.11.2-0.1.el7.noarch over a 2-hour interval, with no change after vacuum full.


            relname              | n_dead_tup
----------------------------------+------------
 schema_version                   |         43
 enum_translator                  |         42
 host_daily_history               |         40
 statistics_vms_users_usage_daily |         39
 storage_domain_daily_history     |         34
 host_interface_daily_history     |         30
 history_configuration            |         20
 vm_interface_hourly_history      |          0
 host_samples_history             |          0
 vm_disk_samples_history          |          0
 vm_daily_history                 |          0
 storage_domain_samples_history   |          0
 vm_disks_usage_daily_history     |          0
 users_details_history            |          0
 calendar                         |          0
 vm_interface_samples_history     |          0
 host_interface_hourly_history    |          0
 vm_hourly_history                |          0
 host_configuration               |          0
 vm_disk_daily_history            |          0

Comment 8 Shirly Radco 2018-05-03 11:33:50 UTC
(In reply to Lucie Leistnerova from comment #7)
> I don't see any dead records in dwh db on sample tables on engine with 2
> hosts, 30 VMs, dwh co-hosted. Tested in ovirt-engine-4.1.11.2-0.1.el7.noarch
> over 2 hours interval, without any change after vacuum full.
> 
> [...]

Thank you.
Do we have a bigger 4.1 setup to test this on? Also, the test after the vacuum should be spread over a few days, not run hourly.

Comment 9 Lucie Leistnerova 2018-05-07 13:38:34 UTC
No, we don't have bigger environment. If you need it, please ask the scale team.

It also seems alright over a daily period (I did not run vacuum in the meantime). The numbers changed a little, or are even better after the weekend.

Comment 10 Shirly Radco 2018-08-30 08:53:41 UTC
Can we close this? I don't see how we can test whether this is a regression from 4.1.

Comment 11 Roy Golan 2018-09-02 07:21:58 UTC
I have been following a production system for a long time (the one I originally opened this bug about) to try to track the impact, and I see none at the moment.

This is an excerpt from a bloat report I produce daily:

              table               | xid_age  | mxid_age | tx_before_wraparound_vacuum |    size    |        last_autovacuum       
----------------------------------+----------+----------+-----------------------------+------------+-------------------------------
 pg_toast.pg_toast_84482          | 78645515 |       13 |                   121354485 | 56 kB      |
 cluster_configuration            | 78645515 |       13 |                   121354485 | 80 kB      |
 pg_toast.pg_toast_84655          | 78645515 |       13 |                   121354485 | 8192 bytes |
 pg_toast.pg_toast_84782          | 78645515 |       13 |                   121354485 | 8192 bytes |
 pg_toast.pg_toast_84380          | 78645515 |       13 |                   121354485 | 8192 bytes |
 statistics_vms_users_usage_daily | 78645515 |       13 |                   121354485 | 112 MB     | 2018-08-28 03:01:27.689565+03
 vm_interface_daily_history       | 78645515 |       13 |                   121354485 | 196 MB     | 2018-08-25 03:02:00.25732+03
 pg_toast.pg_toast_84321          | 78645515 |       13 |                   121354485 | 8192 bytes |
 host_configuration               | 78645515 |       13 |                   121354485 | 3400 kB    |


The 'tx_before_wraparound_vacuum' values are still high (xid_age + tx_before_wraparound_vacuum = 200,000,000, the autovacuum_freeze_max_age default), so there is no risk of running into a wraparound soon.
Currently there is no evidence of a problem, hence I'm closing it.

