Bug 1647510 - Swift increased the number of workers from OSP13
| Field | Value |
|---|---|
| Product | Red Hat OpenStack |
| Component | openstack-tripleo-heat-templates |
| Version | 14.0 (Rocky) |
| Target Release | 14.0 (Rocky) |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | high |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | scale_lab |
| Keywords | Triaged, ZStream |
| Reporter | Joe Talerico <jtaleric> |
| Assignee | Christian Schwede (cschwede) <cschwede> |
| QA Contact | Mike Abrams <mabrams> |
| CC | ccopello, cschwede, jtaleric, mburns, oblaut, scohen, smalleni, tvignaud |
| Type | Bug |
| Fixed In Version | openstack-tripleo-heat-templates-9.2.1-0.20190119154863.el7ost |
| Last Closed | 2019-03-18 13:03:13 UTC |
| Bug Blocks | 1635664 |
Description

Joe Talerico 2018-11-07 16:08:38 UTC
Comment 3
Christian Schwede (cschwede)

Joe, can you please check the memory consumption again? I see quite different numbers on my undercloud. Is there another graph with real used memory? RSS and virtual memory might be misleading here because of the high amount of shared memory.

I had a look with docker stats, and the memory consumption looks pretty okay to me:

```
[stack@undercloud ~]$ docker stats --format="{{.Name}} {{.MemUsage}}" --no-stream | grep swift
swift_proxy              132.8 MiB / 11.73 GiB
swift_container_server   106.6 MiB / 11.73 GiB
swift_object_updater     26.48 MiB / 11.73 GiB
swift_account_server     97.3 MiB / 11.73 GiB
swift_rsync              188 KiB / 11.73 GiB
swift_object_expirer     28.38 MiB / 11.73 GiB
swift_account_reaper     24.94 MiB / 11.73 GiB
swift_object_server      110.3 MiB / 11.73 GiB
swift_container_updater  25.39 MiB / 11.73 GiB
```

Comment
Christian Schwede (cschwede)

For reference, adding my benchmark results posted in https://review.openstack.org/#/c/618105/ here.

I ran some simple benchmarks; the default auto setting on this machine with 24 cores results in 12 workers:

```
===================================== 12 WORKERS =====================================
swift-bench 2018-11-16 11:50:01,931 INFO 10000 PUTS **FINAL** [0 failures], 65.9/s
swift-bench 2018-11-16 11:50:45,672 INFO 10000 GETS **FINAL** [0 failures], 228.7/s
===================================== 10 WORKERS =====================================
swift-bench 2018-11-16 11:54:58,142 INFO 10000 PUTS **FINAL** [0 failures], 62.0/s
swift-bench 2018-11-16 11:55:42,562 INFO 10000 GETS **FINAL** [0 failures], 225.2/s
===================================== 8 WORKERS =====================================
swift-bench 2018-11-16 11:59:54,088 INFO 10000 PUTS **FINAL** [0 failures], 63.9/s
swift-bench 2018-11-16 12:00:38,464 INFO 10000 GETS **FINAL** [0 failures], 225.4/s
===================================== 6 WORKERS =====================================
swift-bench 2018-11-16 12:04:50,969 INFO 10000 PUTS **FINAL** [0 failures], 62.4/s
swift-bench 2018-11-16 12:05:34,058 INFO 10000 GETS **FINAL** [0 failures], 232.2/s
===================================== 4 WORKERS =====================================
swift-bench 2018-11-16 12:09:59,399 INFO 10000 PUTS **FINAL** [0 failures], 57.6/s
swift-bench 2018-11-16 12:10:51,284 INFO 10000 GETS **FINAL** [0 failures], 192.8/s
===================================== 2 WORKERS =====================================
swift-bench 2018-11-16 12:15:26,775 INFO 10000 PUTS **FINAL** [0 failures], 56.3/s
swift-bench 2018-11-16 12:16:16,462 INFO 10000 GETS **FINAL** [0 failures], 201.3/s
===================================== 1 WORKERS =====================================
swift-bench 2018-11-16 12:21:13,469 INFO 10000 PUTS **FINAL** [0 failures], 50.3/s
swift-bench 2018-11-16 12:22:34,887 INFO 10000 GETS **FINAL** [0 failures], 122.8/s
```

Concurrency was 20, object size 10 KiB, 10,000 PUTs/GETs, no Keystone, direct access without haproxy.

Swift doesn't benefit much from additional workers here, because there is just a single disk on the undercloud, and that disk is the limiting factor. Performance with a single worker should still be fine for the use cases on the undercloud, but I think the sweet spot is actually 2 workers: the performance difference between 2 and more workers is quite small, while the difference in memory used by Swift is significant:

```
Workers  Memory (MiB)
---------------------
     12        841.88
     10        806.25
      8        702.30
      6        595.66
      4        484.33
      2        368.60
      1        301.92
```
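For anyone reproducing these numbers, a minimal swift-bench configuration matching the stated parameters might look like the sketch below. Only the concurrency, object size, and request counts come from the comment above; the auth endpoint, user, and key are placeholders, assuming a TempAuth-style v1.0 account with the proxy addressed directly (no haproxy in between).

```ini
# swift-bench.conf -- minimal sketch matching the benchmark parameters above:
# concurrency 20, 10 KiB objects, 10000 PUTs/GETs, no Keystone.
# auth/user/key are placeholders for a TempAuth-style v1.0 account;
# pointing auth straight at the proxy port bypasses haproxy.
[bench]
auth = http://127.0.0.1:8080/auth/v1.0
user = test:tester
key = testing
auth_version = 1.0
concurrency = 20
object_size = 10240
num_objects = 10000
num_gets = 10000
delete = yes
```

Run with `swift-bench swift-bench.conf`; swift-bench emits the `PUTS **FINAL**` / `GETS **FINAL**` summary lines shown in the log above.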
Comment
Joe Talerico (jtaleric)

(In reply to Christian Schwede (cschwede) from comment #3)
> Joe, can you please check the memory consumption again? I see quite
> different numbers on my undercloud. Is there another graph with real used
> memory? RSS and virtual might be misleading here, because of the high
> amount of shared memory.
>
> I had a look with docker stats, and the memory consumption looks pretty
> okay to me:

Christian - what puddle? I have the latest puddle, and I am still seeing the same issue here (process/thread count):
http://norton.perf.lab.eng.rdu.redhat.com:3000/dashboard/snapshot/TolTSzYc32HorlULsSHme4qWQrsRQluP

We are still seeing large RSS consumption (memory usage):
http://norton.perf.lab.eng.rdu.redhat.com:3000/dashboard/snapshot/IMBl9vWwWG3pnjamMhC6t1UKfbJ8hLZq

*** Bug 1651680 has been marked as a duplicate of this bug. ***

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0446

Clearing needinfo on closed BZ.
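The fix shipped through the errata above lands in openstack-tripleo-heat-templates. For operators who would rather pin the Swift worker counts explicitly than rely on the auto default, an environment file along the following lines is a hedged sketch, not the literal change from the errata: SwiftWorkers is the existing proxy-side parameter in tripleo-heat-templates, while the three storage-side parameter names are assumptions that may differ by release.

```yaml
# swift-workers.yaml -- hedged sketch for pinning Swift worker counts
# via tripleo-heat-templates; not the literal change from the errata.
parameter_defaults:
  SwiftWorkers: 2            # proxy-server workers (default: auto)
  SwiftAccountWorkers: 2     # assumption: account-server workers
  SwiftContainerWorkers: 2   # assumption: container-server workers
  SwiftObjectWorkers: 2      # assumption: object-server workers
```

On a containerized Rocky undercloud, a file like this can be pulled in through the custom_env_files option in undercloud.conf; two workers per service matches the sweet spot measured in the benchmarks above.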