Description of problem:

Processing a virt-who report causes one specific RHSM request type to be blocked for some time. Since these requests are fired frequently, they can occupy the whole passenger queue, and passenger starts to return 503. Once the virt-who report processing is completed, the RHSM requests are unblocked. Regardless, the 503 errors shouldn't happen in the meantime.

Version-Release number of selected component (if applicable):
Sat 6.3.1

How reproducible:
100% on customer data; a generic reproducer should not be hard to develop

Steps to Reproduce (generic reproducer):
1. Have a few thousand systems registered, with the default certCheckInterval = 240 in rhsm.conf (the lower the value, the better for the reproducer)
2. Send a virt-who report with a mapping of several hundred systems
3. During processing of the report, check the WebUI status or the httpd error logs

A particular reproducer that does not need a single registered host, mimicked by specific curl requests:

A) to mimic the RHSM cert check requests: in fact, just one particular GET request is essential / sufficient:

curl -s -u admin:changeme -X GET https://$(hostname -f)/rhsm/consumers/${uuid}/certificates/serials

(set uuid to various host UUIDs / candlepin consumer IDs, and run these requests concurrently several times)

B) to mimic the virt-who report: have virt-who-report.json with HV<->VM mappings, and run:

time curl -s -u admin:changeme -X POST -H "Content-Type: application/json" -d @virt-who-report.json 'https://your.satellite/rhsm/hypervisors?owner=Owner&env=Library'

Actual results:
3. shows 503 errors in the WebUI, and /var/log/httpd/error_log contains "Request queue is full. Returning an error" messages.

Expected results:
3. WebUI accessible, no such errors in the httpd logs.

Additional info:

Technical explanation of what goes wrong (to some extent):
- virt-who report processing requires updating the katello_subscription_facets postgres table in some lengthy transaction (*)
- so Katello::Api::Rhsm::CandlepinProxiesController#serials requests are stuck on the step @host.subscription_facet.save! for tens(!) of seconds, until the virt-who report is finished
- these requests come from the RHSM cert check queries, i.e. the /rhsm/consumers/${uuid}/certificates/serials URI
- these requests accumulate over those few tens of seconds, and under higher load this can fill the whole passenger request queue
- that consequently triggers the 503 errors

A particular reproducer on customer data will be provided in the next comment.
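For step A, a minimal sketch of how the serials requests can be fired concurrently (uuids.txt with one consumer UUID per line, the admin:changeme credentials and the concurrency of 20 are assumptions; adjust to your environment):

# keep up to 20 serials requests in flight; print the status code per UUID
xargs -P 20 -I{} curl -s -o /dev/null -w "%{http_code} {}\n" \
  -u admin:changeme -X GET \
  "https://$(hostname -f)/rhsm/consumers/{}/certificates/serials" \
  < uuids.txt

While step B is being processed, the printed status codes should degrade from 200 to 503 once the passenger queue fills up.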
Quite relevant to this: virt-who is blocking other updates of the katello_subscription_facets table, quite *probably* due to requests like:

UPDATE "katello_subscription_facets" SET "hypervisor_host_id" = 16770 WHERE "katello_subscription_facets"."host_id" IN (4971, 12626, 12753, 12885, 13087, 13389, 11572, 13709, 13723, 13753, 13820, 13901, 14036, 14069, 14308, 14367, 14412, 14480, 14470, 14603, 4619, 5170, 5171, 5669, 15206, 15373, 6939, 15627, 8388, 9488, 10062, 11527, 12855, 13143, 13722, 14286, 14570, 3954, 4799, 6262, 7327, 15269, 15304, 6844, 10474, 9899, 10124, 5166, 4538, 11842, 12950, 13379, 13491, 8539, 4167, 10957, 11016, 9314)

(I *think* so since we noticed some postgres deadlocks with these updates when the customer ran virt-who concurrently many times for the same hypervisor, due to another bug)
(In reply to Pavel Moravec from comment #3)
> Quite relevant to this: virt-who is blocking other updates of the
> katello_subscription_facets table, quite *probably* due to requests like:
> [...]

This seems to be irrelevant. Since enabling "log_min_duration_statement = 100" in postgres, the only long requests are from processing the RHSM requests, like:

production.log:

2018-06-06 08:05:37 b322c26d [app] [I] Started POST "/rhsm/hypervisors?owner=CHANGED-fin&env=Library" for 127.0.0.1 at 2018-06-06 08:05:37 +0200
..
2018-06-06 08:05:39 b322c26d [sql] [D] (0.2ms) BEGIN
2018-06-06 08:05:39 b322c26d [sql] [D] SQL (0.9ms) UPDATE "foreman_tasks_tasks" SET "state" = $1 WHERE "foreman_tasks_tasks"."id" = $2 [["state", "planned"], ["id", "efb763e4-8612-415b-87f9-4d50ad8cae10"]]
2018-06-06 08:05:39 b322c26d [sql] [D] (1.1ms) COMMIT
2018-06-06 08:06:41 b322c26d [sql] [D] ForemanTasks::Task::DynflowTask Load (0.6ms) SELECT "foreman_tasks_tasks".* FROM "foreman_tasks_tasks" WHERE "foreman_tasks_tasks"."type" IN ('ForemanTasks::Task::DynflowTask') AND "foreman_tasks_tasks"."external_id" = $1 ORDER BY "foreman_tasks_tasks"."id" ASC LIMIT 1 [["external_id", "51267868-feb3-4672-aae1-dbaef1e660b7"]]
..
2018-06-06 08:06:41 b322c26d [app] [I] Completed 200 OK in 63441ms (Views: 0.4ms | ActiveRecord: 78.4ms)

candlepin.log:

2018-06-06 08:05:38,098 [thread=http-bio-8443-exec-20] [req=5ab57392-c356-4145-adda-050cac33c62e, org=, csid=b322c26d] INFO org.candlepin.common.filter.LoggingFilter - Request: verb=POST, uri=/candlepin/hypervisors?owner=CHANGED-fin&env=
..
2018-06-06 08:05:38,876 [thread=http-bio-8443-exec-20] [req=5ab57392-c356-4145-adda-050cac33c62e, org=CHANGED-fin, csid=b322c26d] INFO org.candlepin.resource.HypervisorResource - Summary of hypervisor checkin by principal "{"type":"trusteduser","name":"foreman_admin"}": Created: 0, Updated: 0, Unchanged:24, Failed: 0
..
2018-06-06 08:05:38,985 [thread=http-bio-8443-exec-20] [req=5ab57392-c356-4145-adda-050cac33c62e, org=CHANGED-fin, csid=b322c26d] INFO org.candlepin.common.filter.LoggingFilter - Response: status=200, content-type="application/json", time=888

postgres log, mainly only:

2018-06-06 08:06:40 CEST LOG: duration: 51201.266 ms execute a4: UPDATE "katello_subscription_facets" SET "last_checkin" = $1 WHERE "katello_subscription_facets"."id" = $2
2018-06-06 08:06:40 CEST DETAIL: parameters: $1 = '2018-06-06 06:05:49.687821', $2 = '10071'

These all come from the RHSM requests, and the RHSM requests kept accumulating between 08:05:39 and 08:06:41, while the blocking request processing the virt-who report didn't log anything in the meantime (?).

How can the virt-who report processing block the RHSM requests, then? Some ActiveRecord trick?
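For the record, a sketch of how the "log_min_duration_statement = 100" setting above can be enabled (the /var/lib/pgsql/data path is the RHEL 7 default and is an assumption; adjust if your data directory differs; the parameter only needs a config reload, not a restart):

sed -i 's/^#\?log_min_duration_statement.*/log_min_duration_statement = 100/' /var/lib/pgsql/data/postgresql.conf
systemctl reload postgresql

Every statement taking longer than 100 ms then lands in the postgres log together with its duration, which is how the 51-second UPDATE above was caught.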
(In reply to Pavel Moravec from comment #5)
> How can the virt-who report processing block the RHSM requests, then? Some
> ActiveRecord trick?

By the underlying foreman task that fires the Actions::ForemanVirtWhoConfigure::Config::Report step, which executes a huge transaction:

2018-06-06 08:05:40 [sql] [D] (0.1ms) BEGIN
..
2018-06-06 08:05:40 [sql] [D] Katello::Host::SubscriptionFacet Load (1.0ms) SELECT "katello_subscription_facets".* FROM "katello_subscription_facets" WHERE "katello_subscription_facets"."uuid" = $1 LIMIT 1 [["uuid", "8b499512-1aa2-4f10-b0cc-93ea45704b24"]]
2018-06-06 08:05:40 [sql] [D] Katello::Host::SubscriptionFacet Load (0.4ms) SELECT "katello_subscription_facets".* FROM "katello_subscription_facets" WHERE "katello_subscription_facets"."host_id" = $1 LIMIT 1 [["host_id", 17268]]
..
2018-06-06 08:05:42 [sql] [D] (0.6ms) SELECT "hosts"."id" FROM "hosts" INNER JOIN "katello_subscription_facets" ON "katello_subscription_facets"."host_id" = "hosts"."id" WHERE "hosts"."type" IN ('Host::Managed') AND 1=0
..
2018-06-06 08:05:42 [sql] [D] SQL (0.3ms) UPDATE "katello_subscription_facets" SET "hypervisor_host_id" = 17268 WHERE 1=0
..
2018-06-06 08:05:42 [sql] [D] SQL (0.5ms) UPDATE "katello_subscription_facets" SET "last_checkin" = $1 WHERE "katello_subscription_facets"."id" = $2 [["last_checkin", "2018-06-06 06:05:38.000000"], ["id", 10070]]
..
..
..
..
2018-06-06 08:06:40 [sql] [D] SQL (0.2ms) UPDATE "katello_subscription_facets" SET "last_checkin" = $1 WHERE "katello_subscription_facets"."id" = $2 [["last_checkin", "2018-06-06 06:05:38.000000"], ["id", 10091]]
2018-06-06 08:06:40 [sql] [D] (1.3ms) COMMIT

So the minute-long transaction, which also updates the katello_subscription_facets table, blocks the RHSM requests processing.
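While the report is being processed, the minute-long transaction can also be spotted directly in postgres; a sketch (the "foreman" database name is the Satellite default and an assumption here; adjust if needed):

sudo -u postgres psql -d foreman -c "
SELECT pid, now() - xact_start AS xact_age, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_age DESC
LIMIT 5;"

The dynflow executor's connection should show up at the top, with a transaction age matching the BEGIN .. COMMIT window above.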
(In reply to Pavel Moravec from comment #6)
> By the underlying foreman task that fires the
> Actions::ForemanVirtWhoConfigure::Config::Report step, which executes a
> huge transaction: [...]

A correction here: the problematic step is HypervisorsUpdate in its finalize phase, in particular:

https://github.com/Katello/katello/blob/master/app/lib/actions/katello/host/hypervisors_update.rb#L15-L23

That method executes one single postgres transaction to update the VM<->HV mapping, which also updates the katello_subscription_facets table. And that blocks the RHSM requests.

Is the transaction really required to be atomic? Can't we split it into per-hypervisor transactions, for example?
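The blocking itself can be observed live as well; a sketch of a lock-wait query (again assuming the default "foreman" database; the blocked side should be a serials request's UPDATE on katello_subscription_facets, the blocking side the finalize transaction):

sudo -u postgres psql -d foreman -c "
SELECT blocked_act.pid AS blocked_pid, blocked_act.query AS blocked_query,
       blocking_act.pid AS blocking_pid, blocking_act.query AS blocking_query
FROM pg_locks blocked
JOIN pg_stat_activity blocked_act ON blocked_act.pid = blocked.pid
JOIN pg_locks blocking
  ON blocking.locktype = blocked.locktype
 AND blocking.database IS NOT DISTINCT FROM blocked.database
 AND blocking.relation IS NOT DISTINCT FROM blocked.relation
 AND blocking.transactionid IS NOT DISTINCT FROM blocked.transactionid
 AND blocking.pid <> blocked.pid
JOIN pg_stat_activity blocking_act ON blocking_act.pid = blocking.pid
WHERE NOT blocked.granted AND blocking.granted;"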
Moving this to the general subscriptions component, as the problematic update described in comments 6 and 7 is in Katello, not Candlepin.
@pmoravec,
From talking to Justin, I understood that https://your.satellite/rhsm/hypervisors?owner=XXX is an old and deprecated way of registering a host. Newer versions of virt-who use the https://your.satellite/rhsm/hypervisors/OWNER endpoint, which is much faster.

Can you confirm that you are using the latest virt-who?

If the problem still exists, I would really like to see the corresponding logs, both foreman's and candlepin's. It would also be very nice to see the SQL log for the foreman part.
(In reply to Shimon Shtein from comment #10)
> Can you confirm that you are using the latest virt-who?
> [...]

Nice finding! In my reproducer, I just mimicked the customer's requests; I will check their virt-who version.

I can confirm that the /rhsm/hypervisors/OWNER URI does not cause the performance problems, as it is much faster. Anyway, I also see really many customers using the old URI - I will write a KCS article saying that virt-who should be updated.

(does somebody know what version of virt-who changed the URI?)
(In reply to Pavel Moravec from comment #11)
> I can confirm that the /rhsm/hypervisors/OWNER URI does not cause the
> performance problems, as it is much faster.
> [...]

@pmoravec
I'll try to give you more information about the issue:

* virt-who is running on the Satellite server and on one Capsule server
* the virt-who version is virt-who-0.21.7-1.el7_5.noarch on both
* in both cases we face a high load on the Satellite server during the jobs
(In reply to Giorgio Valsecchi from comment #12)
> * the virt-who version is virt-who-0.21.7-1.el7_5.noarch on both
> [...]

Per my understanding, that is the newest virt-who version, which should contact the /rhsm/hypervisors/OWNER URI/API, and not the /rhsm/hypervisors?owner=XXX one.

If you run:

zgrep hypervisors /var/log/httpd/foreman-ssl_access_ssl.log*

do you see:

(the bad one)
1.2.3.4 - - [06/Jun/2018:14:00:13 +0200] "POST /rhsm/hypervisors?owner=OWNER&env=Library HTTP/1.1" 200 4 "-" "-"

or:

(the good one)
192.168.111.1 - - [05/Jun/2018:10:51:13 +0200] "POST /rhsm/hypervisors/oracle?reporter_id=provisioning.sysmgmt.lan-1ce55230d33b436bb58031c84ebcc153&cloaked=False&env=Library HTTP/1.1" 200 452 "-" "RHSM/1.0 (cmd=virt-who)"

?

(anyway, I would recommend raising needinfo to sshtein or jsherril with your answer - I don't know virt-who much, I "just" spotted the blocking API calls and reported the bug)
(In reply to Pavel Moravec from comment #13)
> If you run:
>
> zgrep hypervisors /var/log/httpd/foreman-ssl_access_ssl.log*
>
> do you see:
>
> (the bad one)
> 1.2.3.4 - - [06/Jun/2018:14:00:13 +0200] "POST /rhsm/hypervisors?owner=OWNER&env=Library HTTP/1.1" 200 4 "-" "-"
> [...]

* another virt-who instance, running on another server, was issuing this kind of API request:

1.2.3.4 - - [06/Jun/2018:14:00:13 +0200] "POST /rhsm/hypervisors?owner=OWNER&env=Library HTTP/1.1" 200 4 "-" "-"

* this virt-who instance has been stopped, and these API requests have stopped as well
* new tests for the high-load problem on the Satellite server have to be planned
(In reply to Giorgio Valsecchi from comment #14)
> * this virt-who instance has been stopped, and these API requests have
> stopped as well
> [...]

@pmoravec
The high-load problem during the virt-who job runs is still present after stopping the wrong API requests.
This problem is also reproducible when using the proper API, again by doing two activities concurrently:

- concurrently running RHSM serials queries like:

curl -s -u admin:changeme -X GET https://$(hostname -f)/rhsm/consumers/${UUID}/certificates/serials

(all of these are normally processed within 100ms)

- sending:

curl -s -u admin:changeme -X POST -H "Content-Type: text/plain" -d @virt-who-report.txt 'https://satellite.example.com/rhsm/hypervisors/myOwner?reporter_id=satellite.example.com-7ada5fb8eb4945008f116ca17edaba5c&cloaked=False&env=Library'

Then wait several minutes until the Actions::Katello::Host::Hypervisors task is in its finalize steps. Then the RHSM serials requests stop being processed, the passenger queue grows, etc.

A particular reproducer will follow in the next private comment.
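Putting the two activities together, a sketch of a single driver script (uuids.txt, the report file and the credentials are assumptions; the POST URL is the one from above, and uuids.txt must be long enough to keep the serials load running through the finalize phase):

#!/bin/bash
# background load: serials queries, up to 10 in flight, printing status codes
xargs -P 10 -I{} curl -s -o /dev/null -w "%{http_code}\n" -u admin:changeme \
  "https://$(hostname -f)/rhsm/consumers/{}/certificates/serials" < uuids.txt &

# foreground: the virt-who report via the proper per-owner endpoint
time curl -s -u admin:changeme -X POST -H "Content-Type: text/plain" \
  -d @virt-who-report.txt \
  'https://satellite.example.com/rhsm/hypervisors/myOwner?reporter_id=satellite.example.com-7ada5fb8eb4945008f116ca17edaba5c&cloaked=False&env=Library'
wait

Once the Actions::Katello::Host::Hypervisors task reaches its finalize step, the background status codes should stall and eventually flip to 503.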
And here is the problem - basically still the same - seen in /var/log/foreman/production.log:

2018-06-15 09:52:30 [sql] [D] (0.1ms) BEGIN
2018-06-15 09:52:30 [foreman-tasks/dynflow] [D] Step 834c40b4-f1bf-41a7-86ec-36d5bf4cfe2d: 6 pending >> running in phase Finalize Actions::Katello::Host::HypervisorsUpdate
..
2018-06-15 10:27:53 [foreman-tasks/dynflow] [D] Step 834c40b4-f1bf-41a7-86ec-36d5bf4cfe2d: 6 running >> success in phase Finalize Actions::Katello::Host::HypervisorsUpdate
2018-06-15 10:27:53 [sql] [D] (1.2ms) COMMIT

(Almost) the whole meantime, no /candlepin/consumers/UUID/certificates/serials request was raised to candlepin.
Created redmine issue http://projects.theforeman.org/issues/23995 from this bug
Ran the virt-who import and noticed that it is making a lot of relatively slow queries:

select cp_consumer.id from cp_consumer
inner join cp_consumer_facts on cp_consumer.id = cp_consumer_facts.cp_consumer_id
where cp_consumer_facts.mapkey = 'virt.uuid'
and lower(cp_consumer_facts.element) in ('?', '?')
and cp_consumer.owner_id = '?'
order by cp_consumer.updated desc

These were landing in the 200-300ms range. I added an index:

echo "CREATE INDEX lower_case_test ON cp_consumer_facts ((lower(element)));" | sudo -u postgres psql -d candlepin

This dropped the above queries down into the 0.300ms range, which sped the virt-who import up from 30-45 minutes down to 6 minutes. This helps overall performance, but we were still getting 503 errors during the 6-minute window of the virt-who import.

I'd recommend creating this index in the short term to assist, while we continue to investigate why the Satellite is unable to keep up with the load while a virt-who import is running.
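To confirm the new index is actually picked up, the query plan can be checked with placeholder values substituted for the bind parameters (the UUID and owner id below are dummies):

sudo -u postgres psql -d candlepin -c "
EXPLAIN
SELECT cp_consumer.id FROM cp_consumer
INNER JOIN cp_consumer_facts ON cp_consumer.id = cp_consumer_facts.cp_consumer_id
WHERE cp_consumer_facts.mapkey = 'virt.uuid'
  AND lower(cp_consumer_facts.element) IN ('00000000-0000-0000-0000-000000000000')
  AND cp_consumer.owner_id = 'dummy-owner-id'
ORDER BY cp_consumer.updated DESC;"

The plan should now show an index scan using lower_case_test on cp_consumer_facts instead of a sequential scan.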
Upstream bug assigned to sshtein
@mmccune Thanks for narrowing down the problematic request! It looks like the "get attributes" request that is performed for each hypervisor. In my PR, I am changing the action's behavior to query all hypervisors together. Now I am sure that my upstream fix, in addition to your index, will speed things up even more.
Filed the candlepin side of this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1600201
As the number of affected customers grows, what are the plans for providing a resolution (as adding the index isn't enough every time)? (I understand the problem spans multiple components and is nontrivial.)
Moving this bug to POST for triage into Satellite 6 since the upstream issue https://projects.theforeman.org/issues/23995 has been resolved.
Verified!

@ Satellite 6.4 snap 25

Steps:
--------
1. Have 5000 systems registered, with the default certCheckInterval = 240 in rhsm.conf
2. Send curl requests in a loop to request the serial certs:

# vi serialreq.sh
'''
#!/bin/bash
while read -r uuid; do
  curl -s -u admin:changeme -X GET https://$(hostname -f)/rhsm/consumers/${uuid}/certificates/serials
done < filewithuuidslist
'''

3. Send a virt-who report with a mapping of 1000 systems
4. Observe the logs and the WebUI

Observations:
-------------
1. The serial cert requests were processed successfully, without any errors or glitches in the production log.
2. Per passenger-status --show=requests, the passenger queue doesn't accumulate.
3. The httpd logs don't show the "Request queue is full" error.
4. The WebUI is very much accessible and can be navigated throughout.

Adding the recording video below.
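To watch the queue during step 3, a small monitoring sketch (run as root; the 5-second interval is arbitrary):

watch -n 5 'passenger-status 2>/dev/null | grep -i "requests in queue"'

With the fix in place, the "Requests in queue" counters should stay near 0 for the whole duration of the virt-who report processing.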
The recording of the verification is here: https://drive.google.com/open?id=1GADD752wVqSZk-qjXApw6JW4jeWdPFWk

The file is more than 20 MB, hence it is not attached here.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:2927