Bug 1303728

Summary: VM crash while writing to the same RBD disk from different VMs
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Tanay Ganguly <tganguly>
Component: RBD
Assignee: Jason Dillaman <jdillama>
Status: CLOSED CURRENTRELEASE
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: high
Priority: unspecified
Version: 1.3.2
CC: ceph-eng-bugs, flucifre, jdillama, kdreyer, kurs
Target Milestone: rc
Target Release: 1.3.4
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix
Last Closed: 2017-06-28 15:46:34 UTC
Type: Bug
Bug Depends On: 1223652

Description Tanay Ganguly 2016-02-01 19:09:46 UTC
Description of problem:
While writing to the same RBD image from two VMs at the same time, one of the VMs crashes.

Version-Release number of selected component (if applicable):
rpm -qa | grep ceph
ceph-common-0.94.5-4.el7cp.x86_64
ceph-osd-0.94.5-4.el7cp.x86_64
ceph-0.94.5-4.el7cp.x86_64
ceph-radosgw-0.94.5-4.el7cp.x86_64
ceph-debuginfo-0.94.5-4.el7cp.x86_64
ceph-selinux-0.94.5-4.el7cp.x86_64

SELinux is in enforcing mode.

How reproducible:


Steps to Reproduce:
1. Create an RBD image with --feature 13, create a snapshot, and clone it.
2. Attach the same clone to 2 different VMs.
3. Write to the RBD clone from both VMs at the same time
   (dd was used for the writes).
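The steps above can be sketched with the rbd CLI. This is a reproduction sketch, not taken from the attached logs: the pool, image, and snapshot names, the image size, and the guest device path are placeholders, and the exact flag spelling (--image-features vs. the --feature 13 mentioned above) may differ between releases. Feature bitmask 13 corresponds to layering + exclusive-lock + object-map.

```shell
# Create a format-2 image with feature bitmask 13
# (layering + exclusive-lock + object-map), then snapshot,
# protect, and clone it. Names and size are placeholders.
rbd create rbd/parent --size 10240 --image-format 2 --image-features 13
rbd snap create rbd/parent@snap1
rbd snap protect rbd/parent@snap1
rbd clone rbd/parent@snap1 rbd/child

# Attach rbd/child as a disk to two different VMs, then run a
# write from inside BOTH guests at the same time, e.g.:
dd if=/dev/zero of=/dev/vdb bs=1M count=1024 oflag=direct
```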

Actual results:
One of the VMs crashes.

Expected results:
The exclusive lock should handle this use case; there should not be a crash.

Additional info:
Logs attached.

Comment 2 Ken Dreyer (Red Hat) 2016-02-03 17:19:08 UTC
I discussed this with Jason and Josh, and the outcome is that we want to land a fix in 0.94.7 upstream and give it plenty of testing before pulling it downstream.

This is not technically a valid use case of RBD exclusive locking, and customers should never have two VMs using the same RBD image at the same time. If customers had such a setup, this would certainly cause other issues. Jason and Josh also confirmed that there's no way to hit this with a single client, nor during a VM live migration between hypervisors.

Since this is a lower priority bug, re-targeting to RHCS 1.3.3.

Comment 4 Jason Dillaman 2017-06-26 17:02:58 UTC
This was resolved in RHCS 2.x