Bug 1510878 - EBS is created in the wrong Zone while using the aws-ebs provisioner to create volumes dynamically in the Cluster.
Keywords:
Status: CLOSED DUPLICATE of bug 1491399
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.6.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.8.0
Assignee: Scott Dodson
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-08 11:34 UTC by Marcos Entenza
Modified: 2017-11-08 17:57 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-08 17:57:15 UTC
Target Upstream Version:
Embargoed:


Attachments

Description Marcos Entenza 2017-11-08 11:34:10 UTC
Description of problem:

Version-Release number of selected component (if applicable): v3.6.173.0.49

How reproducible:

Steps to Reproduce:
1. Create an OCP Cluster in an AWS Region, in one particular Zone
2. Create another OCP Cluster in the same AWS Region, in a different Zone
3. Configure both Clusters to use the aws-ebs provisioner to create volumes dynamically (a StorageClass sketch follows below).
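
For illustration, a minimal sketch of the kind of StorageClass used in step 3; the name and parameters here are assumptions, not taken from the affected clusters:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  # zone: us-east-1a   # optional; pins dynamically provisioned volumes to one zone

A PVC that references this class then triggers dynamic provisioning of an EBS volume.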

Actual results:
Volumes are initially created in the correct Zone, but then all volumes start being created in the same Zone, so volumes for the Cluster in Zone 'a' are created in Zone 'b' and can't be attached to its instances.


Expected results:
Each volume must be created in its corresponding Zone, according to the Cluster's configuration in /etc/origin/cloudprovider/aws.conf
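
For reference, a minimal sketch of the zone setting in /etc/origin/cloudprovider/aws.conf, assuming the usual [Global] INI layout of the Kubernetes AWS cloud provider config; the zone value is a placeholder:

[Global]
Zone = us-east-1a

Each cluster's aws.conf carries its own Zone, so volumes provisioned for that cluster should land in that Zone.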

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:

Comment 1 Marcos Entenza 2017-11-08 14:12:05 UTC
Just to add more info for point 3: it is not necessary to configure both clusters with the aws-ebs provisioner; it also fails if only one of them is configured.

Comment 2 Jan Safranek 2017-11-08 14:46:10 UTC
AWS instances in a single OpenShift cluster should be tagged with the "kubernetes.io/cluster/<cluster-id>" tag, where <cluster-id> is unique to that particular cluster. Then you can have multiple clusters in one AWS account.

The instances should already be tagged by the Ansible installer.
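
For illustration, tagging could be done from the AWS CLI roughly as follows; the instance ID and <cluster-id> are placeholders, and the owned/shared value convention is an assumption here:

aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/<cluster-id>,Value=owned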

Comment 3 Marcos Entenza 2017-11-08 15:00:41 UTC
Jan, I don't think I'm completely understanding you on this. The Ansible installer, as far as I understand, doesn't take care of the AWS instances and doesn't add any tags to them. It is responsible for adding the required info under /etc/origin/cloudprovider/aws.conf and for pointing to that file in the master-config.yaml and node-config.yaml files.
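
For context, the wiring in master-config.yaml looks roughly like this (following the OCP AWS configuration docs; node-config.yaml carries the equivalent kubeletArguments):

kubernetesMasterConfig:
  apiServerArguments:
    cloud-provider:
      - "aws"
    cloud-config:
      - "/etc/origin/cloudprovider/aws.conf"
  controllerArguments:
    cloud-provider:
      - "aws"
    cloud-config:
      - "/etc/origin/cloudprovider/aws.conf"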

Could you please point me to the code where that tag should be created?

Comment 4 Scott Dodson 2017-11-08 17:57:15 UTC
Right, currently the installer doesn't provision AWS instances or manage tags on the AWS instances used by the "BYO" playbooks, which expect pre-provisioned hosts. In 3.7, however, we've added a check to ensure that the admin has set a desired tag in the Ansible variables whenever AWS cloud provider credentials are configured. This does NOT actually set tags, but it does force the admin to acknowledge that a tag must be set, and it references the relevant documentation describing how to do that.
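
For reference, a hedged sketch of the inventory variables involved (variable names as commonly used with openshift-ansible; the exact variable the 3.7 check enforces should be confirmed against the referenced documentation):

[OSEv3:vars]
openshift_cloudprovider_kind=aws
openshift_cloudprovider_aws_access_key=<access-key>
openshift_cloudprovider_aws_secret_key=<secret-key>
# unique cluster identifier, matching the kubernetes.io/cluster/<cluster-id> tag on the instances
openshift_clusterid=<cluster-id>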

As we add AWS provisioning in future releases, we will ensure that tags are set properly on those instances.

*** This bug has been marked as a duplicate of bug 1491399 ***

