Bug 1510878

Summary: EBS volumes are created in the wrong Zone when using the aws-ebs provisioner to create volumes dynamically in the cluster.
Product: OpenShift Container Platform
Reporter: Marcos Entenza <mak>
Component: Installer
Assignee: Scott Dodson <sdodson>
Status: CLOSED DUPLICATE
QA Contact: Johnny Liu <jialiu>
Severity: medium
Priority: unspecified
Version: 3.6.1
CC: aos-bugs, aos-storage-staff, jokerman, jsafrane, mmccomas
Target Milestone: ---
Target Release: 3.8.0
Hardware: x86_64
OS: Linux
Last Closed: 2017-11-08 17:57:15 UTC
Type: Bug

Description Marcos Entenza 2017-11-08 11:34:10 UTC
Description of problem:

Version-Release number of selected component (if applicable): v3.6.173.0.49

How reproducible:

Steps to Reproduce:
1. Create an OCP cluster in an AWS Region, in one particular Availability Zone
2. Create another OCP cluster in the same Region but in a different Zone
3. Configure both clusters to use the aws-ebs provisioner to create volumes dynamically (see the example StorageClass sketch below)
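
For reference, dynamic provisioning as described above typically goes through a StorageClass backed by the aws-ebs provisioner. This is an illustrative sketch only; the class name and parameters are assumptions, not taken from the affected clusters:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: gp2                        # hypothetical name, not from the reporter's cluster
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2                        # EBS volume type; gp2 assumed for illustration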

Actual results:
Volumes are initially created in the correct Zone, but then all volumes end up being created in the same Zone, so volumes for the cluster in Zone 'a' are created in Zone 'b' and cannot be attached to that cluster's instances.


Expected results:
Each volume must be created in its cluster's corresponding Zone, according to that cluster's configuration in /etc/origin/cloudprovider/aws.conf (see the illustrative sketch below).
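
For illustration, the cloud-provider config referenced above is an INI-style file. A minimal sketch, assuming a cluster whose nodes run in eu-west-1a (the Zone value is an assumption, not taken from the reporter's environment):

    [Global]
    Zone = eu-west-1a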

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:

Comment 1 Marcos Entenza 2017-11-08 14:12:05 UTC
Just to add more info on point 3: it is not necessary to configure both clusters with the aws-ebs provisioner, it also fails if we configure only one of them.

Comment 2 Jan Safranek 2017-11-08 14:46:10 UTC
AWS instances in a single OpenShift cluster should be tagged with a "kubernetes.io/cluster/<cluster-id>" tag, where <cluster-id> is unique to that particular cluster. That way you can have multiple clusters in one AWS project.

The instances should already be tagged by the Ansible installer.
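
For illustration only (the instance ID and cluster ID below are made up), such a tag can be applied with the AWS CLI:

    # Tag the cluster's instances so the AWS cloud provider can tell the two clusters apart
    aws ec2 create-tags \
        --resources i-0abc123def456789a \
        --tags Key=kubernetes.io/cluster/mycluster-a,Value=owned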

Comment 3 Marcos Entenza 2017-11-08 15:00:41 UTC
Jan, I don't think I'm completely understanding you on this. As far as I understand, the Ansible installer doesn't take care of the AWS instances and doesn't add any tags to them. It is responsible for adding the required info under /etc/origin/cloudprovider/aws.conf, and for pointing to that file in the master-config.yaml and node-config.yaml files (see the sketch below).
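
For context, a rough sketch of what that wiring looks like in master-config.yaml, assuming the standard OCP 3.x cloud-provider setup (the values shown are illustrative, not taken from the affected cluster):

    kubernetesMasterConfig:
      apiServerArguments:
        cloud-provider:
          - "aws"
        cloud-config:
          - "/etc/origin/cloudprovider/aws.conf"
      controllerArguments:
        cloud-provider:
          - "aws"
        cloud-config:
          - "/etc/origin/cloudprovider/aws.conf"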

Could you please point me to the code where that tag should be created?

Comment 4 Scott Dodson 2017-11-08 17:57:15 UTC
Right, currently the installer doesn't provision AWS instances or manage tags on the AWS instances used by the "BYO" playbooks, which expect pre-provisioned hosts. In 3.7, however, we've added a check to ensure that the admin has set the desired tag in the Ansible variables whenever AWS cloud provider credentials are configured. This does NOT actually set tags; however, it does force the admin to acknowledge that a tag must be set, and it references the relevant documentation that describes how to do that.

As we add AWS provisioning in future releases, we will ensure that tags are set properly on those instances.
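
For illustration, the cluster identifier used for that tag is typically supplied in the Ansible inventory; a minimal sketch, assuming the openshift_clusterid variable (the variable name and value here are assumptions, check the openshift-ansible documentation for your release):

    [OSEv3:vars]
    # Cluster identifier used for the kubernetes.io/cluster/<cluster-id> tag (assumed example)
    openshift_clusterid=mycluster-a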

*** This bug has been marked as a duplicate of bug 1491399 ***