Description of problem:
I followed this tutorial: https://access.redhat.com/documentation/en/red-hat-xpaas/0/single/red-hat-xpaas-sso-image/#tutorials, but the SSO container does not reach the ready state due to an error at JBoss start-up, when trying to rename /opt/eap/standalone/configuration/standalone_xml_history/current. I expect the container to start correctly and the image not to contain a "current" directory.

Version-Release number of selected component (if applicable):
redhat-sso-7/sso70-openshift image, version 1.3-21

How reproducible:
Always

Steps to Reproduce:
1.
2.
3.

Actual results:
03:34:40,664 ERROR [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0055: Caught exception during boot: java.lang.IllegalStateException: WFLYCTL0056: Could not rename /opt/eap/standalone/configuration/standalone_xml_history/current to /opt/eap/standalone/configuration/standalone_xml_history/20161202-033440664
	at org.jboss.as.controller.persistence.ConfigurationFile.createHistoryDirectory(ConfigurationFile.java:638)
	at org.jboss.as.controller.persistence.ConfigurationFile.successfulBoot(ConfigurationFile.java:470)
	at org.jboss.as.controller.persistence.BackupXmlConfigurationPersister.successfulBoot(BackupXmlConfigurationPersister.java:94)
	at org.jboss.as.controller.AbstractControllerService.finishBoot(AbstractControllerService.java:449)
	at org.jboss.as.server.ServerService.boot(ServerService.java:368)
	at org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:299)
	at java.lang.Thread.run(Thread.java:745)

Expected results:
No error.

Additional info:
It looks like the same issue as described in the following WildFly issue. [1] It seems from that issue that this may be fixed in WildFly Core 2.2, but the SSO image appears to be using WildFly Core 2.1.8.

[1] https://issues.jboss.org/browse/WFCORE-1501
I'm also seeing this.

Here are my steps: https://gist.github.com/jpkrohling/5c9d4bb72895ba1b4e929a70ff56f533

And here are the logs: https://paste.fedoraproject.org/paste/ipw4QYmQEieERV4livVNUw
This is usually related to Docker issues with layered filesystems. What is the environment?
I tested it on Fedora 26, with docker-ce (Docker, Inc).
FYI, I had this problem with RH-SSO on an OCP 3.6 cluster running in AWS (AWS Reference Architecture). The error occurred when using in-container storage. If I added a PV (gp2) and mounted it at /opt/eap/standalone/configuration/standalone_xml_history/, the rollout of the SSO container succeeded. It's not a fix, but it's a workaround.
@Ben is this environment something I can have access to? If so please email me directly.
Encountered this error when running OCP 3.6 on Azure. Below is the workaround used.

# Create a persistent volume claim in the deployment config using the command below
$ oc volume dc/sso --add --claim-size 512M --mount-path /opt/eap/standalone/configuration/standalone_xml_history --name standalone-xml-history

As the deployment config is updated, it should automatically trigger a new deployment.
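For reference, a sketch of the DeploymentConfig fragment that the `oc volume --add` workaround above should produce. This is illustrative only: the container name and the generated PVC name (`oc volume --add` auto-generates the claim name unless `--claim-name` is passed) are assumptions, not values from this environment.

```yaml
# Hypothetical fragment of dc/sso after the workaround is applied.
spec:
  template:
    spec:
      containers:
        - name: sso   # assumed container name
          volumeMounts:
            - name: standalone-xml-history
              mountPath: /opt/eap/standalone/configuration/standalone_xml_history
      volumes:
        - name: standalone-xml-history
          persistentVolumeClaim:
            claimName: pvc-example   # auto-generated name; example only
```

Mounting an emptyDir at the same path would also avoid the rename failure, but the history would not survive pod restarts.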
The CLOUD-2195 issue (https://issues.jboss.org/browse/CLOUD-2195) was addressed in the RH-SSO 7.2 for OpenShift image, starting with the "redhat-sso-7/sso72-openshift:1.0-5" image tag. The RH-SSO 7.0 for OpenShift and RH-SSO 7.1 for OpenShift images are deprecated and will no longer receive updates; the RH-SSO 7.2 for OpenShift image should be used instead. Refer to the "Deprecated image streams and application templates" section of the RH-SSO for OpenShift image documentation -- https://access.redhat.com/documentation/en-us/red_hat_single_sign-on/7.2/html-single/red_hat_single_sign-on_for_openshift/#deprecated_image_streams_and_application_templates_for_rh_sso_for_openshift -- for details.