Bug 1318974
Summary: Creating pods on OSE with awsElasticBlockStore only assigns devices /dev/xvdb - /dev/xvdp to openshift node

Product: OpenShift Container Platform
Component: Storage
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Version: 3.2.0
Type: Bug
Hardware: Unspecified
OS: Unspecified
Reporter: Jianwei Hou <jhou>
Assignee: Jan Safranek <jsafrane>
QA Contact: Jianwei Hou <jhou>
CC: agoldste, aos-bugs, ekuric, eparis, jeder, jhou, jsafrane, pep, swagiaal
Keywords: NeedsTestCase
Doc Type: Bug Fix
Clone Of: 1315995
Bug Depends On: 1315995
Bug Blocks: 1267746
Last Closed: 2016-05-19 20:12:39 UTC
Description (Jianwei Hou, 2016-03-18 10:00:34 UTC)
https://bugzilla.redhat.com/show_bug.cgi?id=1315995 has more info; check the ongoing debugging status there.

Restoring needinfo flag, sorry.

Status update:
- In Origin 3.2 there is now a hard limit of 39 devices.
- I have a PR pending upstream to remove this limit and rely only on the scheduler to place at most 39 (configurable) volumes on a node: https://github.com/kubernetes/kubernetes/pull/23254

Are we waiting on 23254, or can this bug go to ON_QA?

OK, let's move it to MODIFIED state for 3.2: there is a hard limit of 39 volumes in kubelet, which is much better than what we have today. Jeremy & Elvir, if you want this configurable, please file a new bug for 3.3 or later.

Verified on:
openshift v3.2.0.43
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5

There is no limit for volumes on AWS now. I have created 55 PVs, PVCs and pods, and all the pods are running.

Created attachment 1155962 [details]
test log
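For context on why the original behavior was so limiting: the device range in the summary, /dev/xvdb through /dev/xvdp, covers only 15 names, so a node could never attach more than 15 EBS volumes regardless of any scheduler limit. A minimal sketch (a hypothetical naming helper, not the actual cloud-provider code) shows the arithmetic, and how a two-letter suffix scheme would grow the namespace past the 39-volume cap discussed above:

```python
import string

# Single-letter suffixes b..p, as in the bug summary: only 15 device names.
single_letter = ["/dev/xvd" + c for c in string.ascii_lowercase[1:16]]

# A hypothetical two-letter scheme (/dev/xvdba../dev/xvdbz and so on)
# adds 26 names per prefix letter, so the namespace easily exceeds
# the 39-volume limit that the fix enforces in the scheduler instead.
two_letter = ["/dev/xvdb" + c for c in string.ascii_lowercase]

print(len(single_letter))                    # 15
print(len(single_letter) + len(two_letter))  # 41
```

The point of the upstream change is that the bottleneck becomes a scheduling policy (how many volumes per node) rather than an artifact of how device names are generated.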
Correction: the test in comment 9 was performed on a cluster with 2 nodes. I disabled one node so that all pods were scheduled to the other node; that node can run at most 39 pods with EBS volumes. When the 40th pod was created, it got the error 'Could not attach EBS Disk "aws://us-east-1d/vol-719f5ed4": Too many EBS volumes attached to node ip-172-18-12-231.ec2.internal'. So this bug is fixed now.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2016:1094
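The 40th-pod rejection seen in the correction above can be modeled with a toy per-node check (a simplified illustration, not the real scheduler predicate; the volume IDs and helper name are made up):

```python
# Simplified model of the per-node EBS attachment cap described above:
# with 39 volumes already attached, the 40th distinct volume is refused.
MAX_EBS_VOLUMES = 39  # default cap; the upstream scheduler allows overriding it

def can_attach(attached_volumes, new_volume):
    """Return True if the node can accept this EBS volume.

    A volume already attached to the node does not count again.
    """
    if new_volume in attached_volumes:
        return True
    return len(attached_volumes) < MAX_EBS_VOLUMES

node = {f"vol-{i:04d}" for i in range(39)}   # node already at the limit
print(can_attach(node, "vol-719f5ed4"))      # False: "Too many EBS volumes attached"
print(can_attach(set(), "vol-719f5ed4"))     # True: empty node accepts the volume
```

This mirrors the verified behavior: 39 pods with EBS volumes run on one node, and the 40th attach attempt fails with the "Too many EBS volumes attached" error.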