Bug 1650653 - Issues while trying to deploy a MongoDB 3.6 ReplicaSet across multiple clusters using the scl image
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Software Collections
Classification: Red Hat
Component: rh-mongodb36-container
Version: rh-mongodb36
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.4
Assignee: Patrik Novotný
QA Contact: Lukáš Zachar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-11-16 18:16 UTC by Mario Vázquez
Modified: 2020-05-05 07:25 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-05 07:25:47 UTC
Target Upstream Version:



Description Mario Vázquez 2018-11-16 18:16:48 UTC
Description of problem:
The replica set cannot be configured using OpenShift routes.

Version-Release number of selected component (if applicable):
MongoDB 3.6 SCL Docker Image

How reproducible:


Steps to Reproduce:
Deploy a MongoDB replica set across different OpenShift clusters:

- A single MongoDB pod per cluster
- MongoDB connections have to be done using SSL
- The MongoDB replica set has to use OpenShift secure passthrough routes
- Self-signed certs (with valid hostnames defined in SAN)

Two OpenShift routes pointing to our mongodb services located in different clusters:

- mongo.apps.cluster1.test.com
- mongo.apps.cluster2.test.com
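
The routes would be created along these lines (a sketch only; the service name and namespace are assumed from the SANs below):

# On cluster1 (service "mongodb" in namespace "mongofed" assumed):
oc create route passthrough mongo --service=mongodb --port=27017 \
    --hostname=mongo.apps.cluster1.test.com

# Same on cluster2, with its own hostname:
oc create route passthrough mongo --service=mongodb --port=27017 \
    --hostname=mongo.apps.cluster2.test.com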

Each pod has the following SAN in its mongo certificate:

cluster1 pod: localhost,localhost.localdomain,127.0.0.1,mongo.apps.cluster1.test.com,mongodb,mongodb.mongofed,mongodb.mongofed.svc.cluster.local
cluster2 pod: localhost,localhost.localdomain,127.0.0.1,mongo.apps.cluster2.test.com,mongodb,mongodb.mongofed,mongodb.mongofed.svc.cluster.local
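
Certificates like these can be generated roughly as follows (a sketch, assuming OpenSSL 1.1.1+ for -addext; file names are hypothetical). Note that mongod's --sslPEMKeyFile expects the certificate and private key concatenated into a single file:

# cluster1 pod certificate; repeat with the cluster2 SAN list for the other pod:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout mongodb-key.pem -out mongodb-cert.pem \
    -subj "/CN=mongo.apps.cluster1.test.com" \
    -addext "subjectAltName=DNS:localhost,DNS:localhost.localdomain,IP:127.0.0.1,DNS:mongo.apps.cluster1.test.com,DNS:mongodb,DNS:mongodb.mongofed,DNS:mongodb.mongofed.svc.cluster.local"

# Concatenate the certificate and key into the PEM file passed to mongod:
cat mongodb-cert.pem mongodb-key.pem > mongodb.pem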

The pods run mongo with the following parameters:

mongod -f /etc/mongod.conf --keyFile /var/lib/mongodb/keyfile --sslMode requireSSL --sslPEMKeyFile /opt/app-root/src/mongodb-ssl/mongodb.pem --sslAllowConnectionsWithoutCertificates --sslCAFile /opt/app-root/src/mongodb-ssl/ca.pem --replSet rs0

The mongod.conf looks as follows:
systemLog:
  verbosity: 3
net:
  port: 27017
  bindIp: "0.0.0.0"
storage:
  dbPath: /var/lib/mongodb/data
replication:
  oplogSizeMB: 64
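
For reference, the SSL-related command-line flags above map to the following mongod.conf options in MongoDB 3.6 (a sketch only; the reporter kept them on the command line):

net:
  port: 27017
  bindIp: "0.0.0.0"
  ssl:
    mode: requireSSL
    PEMKeyFile: /opt/app-root/src/mongodb-ssl/mongodb.pem
    CAFile: /opt/app-root/src/mongodb-ssl/ca.pem
    allowConnectionsWithoutCertificates: true
security:
  keyFile: /var/lib/mongodb/keyfile
replication:
  replSetName: rs0
  oplogSizeMB: 64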


Actual results:

When I try to configure the replica set, I can initialize the primary node (cluster1 in this case), but when trying to add the secondary node (cluster2 in this case) I get this error:

Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: mongo.apps.cluster1.test.com:443; the following nodes did not respond affirmatively: mongo.apps.cluster2.test.com:443 failed with SSLHandshakeFailed
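
The failing step corresponds to something like the following in the mongo shell on the cluster1 pod (assumed, since the replica set is configured manually; hostnames as per the routes above):

// Initialize the replica set with the primary reachable via its route:
rs.initiate({_id: "rs0", members: [{_id: 0, host: "mongo.apps.cluster1.test.com:443"}]})
// Adding the second member is what triggers the quorum check error above:
rs.add("mongo.apps.cluster2.test.com:443")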

If I use the same certificate for each pod, without the route in the SAN, and don't pass the CA to the mongod process, then when trying to add nodes to the replica set I get this error:

Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: mongo.apps.cluster1.test.com:443; the following nodes did not respond affirmatively: mongo.apps.cluster2.test.com:443 failed with ProtocolError


Expected results:

The replica set is configured.

Additional info:

From my laptop, I can connect to each of the mongo pods using the route:

mongo --ssl --sslAllowInvalidCertificates mongo.apps.cluster1.test.com:443

From cluster1 pod, I can connect to cluster2 pod using the route and vice-versa.
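
Presumably the same invocation, pointed at the remote route, e.g. from the cluster1 pod:

mongo --ssl --sslAllowInvalidCertificates mongo.apps.cluster2.test.com:443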

Using the mongodb image from Docker Hub (v4.1), this use case works.

Comment 2 Marek Skalický 2018-12-10 15:56:56 UTC
(In reply to Mario Vázquez from comment #0)
> How reproducible:
> The pods run mongo with the following parameters:
> 
> mongod -f /etc/mongod.conf --keyFile /var/lib/mongodb/keyfile --sslMode
> requireSSL --sslPEMKeyFile /opt/app-root/src/mongodb-ssl/mongodb.pem
> --sslAllowConnectionsWithoutCertificates --sslCAFile
> /opt/app-root/src/mongodb-ssl/ca.pem --replSet rs0
> 
> The mongod.conf looks as follows:
> systemLog:
>   verbosity: 3
> net:
>   port: 27017
>   bindIp: "0.0.0.0"
> storage:
>   dbPath: /var/lib/mongodb/data
> replication:
>   oplogSizeMB: 64
> 

Am I right that only the default config file was used, and that the custom container CMD mentioned above was then run?

Also, I guess the replica set was set up manually. Could you please provide more information about that?


> Additional info:
> 
> From my laptop, I can connect to each of the mongo pods using the route:
> 
> mongo --ssl --sslAllowInvalidCertificates mongo.apps.cluster1.test.com:443
> 
> From cluster1 pod, I can connect to cluster2 pod using the route and
> vice-versa.
> 

Please try to use x.509 also for inter-cluster member authentication [1].


[1] https://docs.mongodb.com/master/tutorial/configure-x509-member-authentication/
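
For reference, x.509 membership authentication per [1] would look roughly like this in mongod.conf (a sketch; the paths are assumptions, and the member certificate has its own requirements described in the tutorial):

security:
  clusterAuthMode: x509
net:
  ssl:
    mode: requireSSL
    PEMKeyFile: /opt/app-root/src/mongodb-ssl/mongodb.pem
    CAFile: /opt/app-root/src/mongodb-ssl/ca.pem
    clusterFile: /opt/app-root/src/mongodb-ssl/member.pem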



> Using mongodb image from dockerhub (v4.1) this use case works.
>

So the only thing that differed was the container image? All options, the config file, and the process of setting up the replica set were the same?

Comment 5 Patrik Novotný 2020-05-05 07:25:47 UTC
Since the requested information wasn't provided, I'm closing this issue. Feel free to reopen if this is still relevant.

