In very rare cases, a Cinder volume can be detached from a running pod. The pod is not informed, and the application running there suddenly cannot read/write any data from/to its volume(s). This can corrupt the filesystem on the volume as well as the application data stored there (e.g. database files)! See https://github.com/kubernetes/kubernetes/issues/19602 for technical details. It probably affects only Cinder; AWS is safe, and I have not been able to reproduce it on GCE so far.

Version-Release number of selected component (if applicable):
origin 3.1.0

How reproducible:
Rarely (< 10% even in perfect conditions for the reproducer, much less in any real deployment).

Steps to Reproduce:
See the github issue.
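For reference, a minimal sketch of the kind of setup affected: a pod that mounts a pre-existing Cinder volume directly. The pod name, image, and volume ID are placeholders for illustration only and are not taken from the reproducer in the linked issue.

    # Hypothetical pod spec illustrating a Cinder-backed volume mount.
    # The volumeID below is a placeholder, not the one from the reproducer.
    apiVersion: v1
    kind: Pod
    metadata:
      name: cinder-example
    spec:
      containers:
      - name: writer
        image: busybox
        # Continuously writes to the Cinder-backed mount; if the volume is
        # detached underneath the pod, these writes can fail or corrupt data.
        command: ["sh", "-c", "while true; do date >> /data/log; sleep 1; done"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        cinder:
          volumeID: "<cinder-volume-uuid>"
          fsType: ext4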
Fix merged upstream. Awaiting rebase into Origin. https://github.com/kubernetes/kubernetes/pull/19707
In case there is no rebase, I filed an Origin PR: https://github.com/openshift/origin/pull/7108
Origin PR merged
Verified on openshift v1.1.2-260-gf556adc; this bug is not reproducible.