Description of problem:
Replicaset cannot be configured using OpenShift routes.

Version-Release number of selected component (if applicable):
MongoDB 3.6 SCL Docker Image

How reproducible:

Steps to Reproduce:
Deploy a mongodb replicaset across different OpenShift clusters:
- A single mongodb pod per cluster
- MongoDB connections have to be done using SSL
- MongoDB replicaset members have to use OpenShift secure passthrough routes
- Self-signed certs (with valid hostnames defined in the SAN)

Two OpenShift routes point to our mongodb services located in the different clusters:
- mongo.apps.cluster1.test.com
- mongo.apps.cluster2.test.com

Each pod has the following SAN in its mongo certificate:

cluster1 pod: localhost,localhost.localdomain,127.0.0.1,mongo.apps.cluster1.test.com,mongodb,mongodb.mongofed,mongodb.mongofed.svc.cluster.local

cluster2 pod: localhost,localhost.localdomain,127.0.0.1,mongo.apps.cluster2.test.com,mongodb,mongodb.mongofed,mongodb.mongofed.svc.cluster.local

The pods run mongod with the following parameters:

mongod -f /etc/mongod.conf --keyFile /var/lib/mongodb/keyfile --sslMode requireSSL --sslPEMKeyFile /opt/app-root/src/mongodb-ssl/mongodb.pem --sslAllowConnectionsWithoutCertificates --sslCAFile /opt/app-root/src/mongodb-ssl/ca.pem --replSet rs0

The mongod.conf looks as follows:

systemLog:
  verbosity: 3
net:
  port: 27017
  bindIp: "0.0.0.0"
storage:
  dbPath: /var/lib/mongodb/data
replication:
  oplogSizeMB: 64

Actual results:
When I try to configure the replicaset (see the sketch after this comment), I can initialize the primary node (cluster1 in this case), but when trying to add the secondary node (cluster2 in this case) I get this error:

Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: mongo.apps.cluster1.test.com:443; the following nodes did not respond affirmatively: mongo.apps.cluster2.test.com:443 failed with SSLHandshakeFailed

If I use the same certificate for each pod without the route in the SAN, and don't pass the CA to the mongod process, adding nodes to the replicaset fails with this error instead:

Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: mongo.apps.cluster1.test.com:443; the following nodes did not respond affirmatively: mongo.apps.cluster2.test.com:443 failed with ProtocolError

Expected results:
Replicaset is configured.

Additional info:

From my laptop, I can connect to each of the mongo pods using the route:

mongo --ssl --sslAllowInvalidCertificates mongo.apps.cluster1.test.com:443

From the cluster1 pod, I can connect to the cluster2 pod using the route, and vice versa.

Using the mongodb image from dockerhub (v4.1), this use case works.
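[Editorial note] For reference, this is roughly the replica set configuration sequence being described, run from the mongo shell against the routes above. It is only a sketch: the exact commands were not included in the report, and the member document below is an assumption based on the route hostnames.

// Connect to the cluster1 pod over its passthrough route, e.g.:
//   mongo --ssl --sslAllowInvalidCertificates mongo.apps.cluster1.test.com:443

// Initialize the replica set using the route hostname as the member address,
// so the other member and external clients reach it through the OpenShift router.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo.apps.cluster1.test.com:443" }
  ]
})

// Add the cluster2 pod via its route; this is the step that fails with the
// quorum check / SSLHandshakeFailed error quoted above.
rs.add("mongo.apps.cluster2.test.com:443")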
(In reply to Mario Vázquez from comment #0)
> How reproducible:
> The pods run mongod with the following parameters:
>
> mongod -f /etc/mongod.conf --keyFile /var/lib/mongodb/keyfile --sslMode
> requireSSL --sslPEMKeyFile /opt/app-root/src/mongodb-ssl/mongodb.pem
> --sslAllowConnectionsWithoutCertificates --sslCAFile
> /opt/app-root/src/mongodb-ssl/ca.pem --replSet rs0
>
> The mongod.conf looks as follows:
> systemLog:
>   verbosity: 3
> net:
>   port: 27017
>   bindIp: "0.0.0.0"
> storage:
>   dbPath: /var/lib/mongodb/data
> replication:
>   oplogSizeMB: 64

Am I right that only the default config file was used, and then the custom container CMD mentioned above was run? I also assume the replica set was set up manually. Could you please provide more information about that?

> Additional info:
>
> From my laptop, I can connect to each of the mongo pods using the route:
>
> mongo --ssl --sslAllowInvalidCertificates mongo.apps.cluster1.test.com:443
>
> From the cluster1 pod, I can connect to the cluster2 pod using the route,
> and vice versa.

Please also try to use x.509 for inter-cluster member authentication [1]; a rough sketch of what that could look like is included after this comment.

[1] https://docs.mongodb.com/master/tutorial/configure-x509-member-authentication/

> Using the mongodb image from dockerhub (v4.1), this use case works.

So the only thing that differed was the container image? Were all options, the config file, and the process of setting up the replica set the same?
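[Editorial note] To make the x.509 suggestion above concrete, here is a hedged sketch of how the mongod command line from comment #0 could be adapted for x.509 membership authentication. The member certificate path (member.pem) is an assumption, not something from the report; it would need to be issued by the same CA and carry a subject the other members accept.

# Same command as in comment #0, but with x.509 member authentication enabled.
# --clusterAuthMode x509 replaces the keyfile for internal authentication, so
# --keyFile is dropped; /opt/app-root/src/mongodb-ssl/member.pem is a
# hypothetical member certificate signed by the CA given in --sslCAFile.
mongod -f /etc/mongod.conf \
  --sslMode requireSSL \
  --sslPEMKeyFile /opt/app-root/src/mongodb-ssl/mongodb.pem \
  --sslCAFile /opt/app-root/src/mongodb-ssl/ca.pem \
  --sslClusterFile /opt/app-root/src/mongodb-ssl/member.pem \
  --clusterAuthMode x509 \
  --sslAllowConnectionsWithoutCertificates \
  --replSet rs0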
Since the requested information wasn't provided, I'm closing this issue. Feel free to re-open if it is still relevant.