Bug 1530784 - Stale bucket index entries are left over after object deletions
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: z1
Target Release: 3.0
Assignee: Matt Benjamin (redhat)
QA Contact: Vidushi Mishra
URL:
Whiteboard:
Depends On:
Blocks: 1473188 1500904
 
Reported: 2018-01-03 19:03 UTC by Ken Dreyer (Red Hat)
Modified: 2021-03-11 16:49 UTC (History)
15 users

Fixed In Version: RHEL: ceph-12.2.1-41.el7cp Ubuntu: ceph_12.2.1-43redhat1xenial
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1500904
Environment:
Last Closed: 2018-03-08 15:52:40 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 20380 0 None None None 2018-01-03 19:03:20 UTC
Ceph Project Bug Tracker 20895 0 None None None 2018-01-03 19:03:20 UTC
Ceph Project Bug Tracker 22555 0 None None None 2018-01-03 19:03:20 UTC
Red Hat Product Errata RHBA-2018:0474 0 normal SHIPPED_LIVE Red Hat Ceph Storage 3.0 bug fix update 2018-03-08 20:51:53 UTC

Description Ken Dreyer (Red Hat) 2018-01-03 19:03:21 UTC
+++ This bug was initially created as a clone of Bug #1500904 +++

Description of problem:

Objects are deleted, but the bucket index still lists them as present. This issue was thought to have been resolved in BZ 1464099: https://bugzilla.redhat.com/show_bug.cgi?id=1464099
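
One way to look for stale index entries is `radosgw-admin bucket check`, which compares the bucket index against the objects actually present (and, with `--fix`, repairs it). A hedged sketch — the bucket name `test` is a placeholder, and the real command needs a running cluster, so this guards on the tool being available:

```shell
#!/bin/bash
# Hypothetical bucket name; substitute the affected bucket.
bucket=test

if command -v radosgw-admin >/dev/null 2>&1; then
    # Compare the bucket index with the objects actually present;
    # add --fix to remove stale entries (run on an RGW admin node).
    radosgw-admin bucket check --bucket="$bucket" --check-objects
    checked=yes
else
    echo "radosgw-admin not found; run this on an RGW admin node"
    checked=no
fi
```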


Version-Release number of selected component (if applicable):


How reproducible:

See http://tracker.ceph.com/issues/20380

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:


--- Additional comment from Eric Ivancich on 2017-10-23 11:44:10 MDT ---

Was able to reproduce the error after multiple attempts (the condition appears in roughly 1 out of 20 runs) by running the following scripts (modified versions of those supplied by the customer) on a cluster with three RADOS gateways. Currently combing through the OSD logs for clues to the underlying problem.

==== delete_create_script.sh

#!/bin/sh

trap 'kill $(jobs -p)' SIGINT SIGTERM # EXIT

swift -A http://localhost:8000/auth -U test:tester -K testing post test

dir=$(dirname $0)

${dir}/delete_create_object.sh one &
${dir}/delete_create_object.sh two &
${dir}/delete_create_object.sh three &
${dir}/delete_create_object.sh four &
${dir}/delete_create_object.sh five &
${dir}/delete_create_object.sh six &
${dir}/delete_create_object.sh seven &
${dir}/delete_create_object.sh eight &
${dir}/delete_create_object.sh nine &
${dir}/delete_create_object.sh ten &

wait

echo Done all

==== delete_create_object.sh

#!/bin/bash
# bash (not plain sh) is required for $RANDOM below

port_lo=8000
port_hi=8002

trap 'kill $(jobs -p)' SIGINT SIGTERM # EXIT

i=1
objects=100
prefix=$1

list_out=$1.list
echo $(date) > $list_out

while [ $i -lt $objects ]
do
    object=$prefix.$(date +%Y-%m-%d:%H:%M:%S).$i
    touch $object

    port1=$(( RANDOM % (port_hi - port_lo + 1 ) + port_lo ))
    port2=$(( RANDOM % (port_hi - port_lo + 1 ) + port_lo ))
    port3=$(( RANDOM % (port_hi - port_lo + 1 ) + port_lo ))

    swift -A http://localhost:${port1}/auth -U test:tester -K testing upload test $object >/dev/null
    swift -A http://localhost:${port2}/auth -U test:tester -K testing delete test $object >/dev/null &
    swift -A http://localhost:${port3}/auth -U test:tester -K testing list test >>$list_out

    i=$(( i + 1 ))
    rm -f $object &
done

wait

echo Done $1

====
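
The key mechanism in the reproducer above is that the upload, delete, and list each go to a randomly chosen gateway, so the operations race across RGW instances. A minimal sketch of that endpoint-selection idiom, using the same port range (8000-8002) as the scripts:

```shell
#!/bin/bash
# Pick a random RGW endpoint in [port_lo, port_hi], as the reproducer does,
# so consecutive requests can land on different gateways.
port_lo=8000
port_hi=8002

pick_port() {
    echo $(( RANDOM % (port_hi - port_lo + 1) + port_lo ))
}

port=$(pick_port)
echo "next request goes to http://localhost:${port}/auth"
```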

--- Additional comment from Vikhyat Umrao on 2017-11-02 09:28:33 MDT ---

upstream jewel backport: https://github.com/ceph/ceph/pull/16856

--- Additional comment from Vikhyat Umrao on 2017-11-02 09:33:49 MDT ---

git tag --contains ff67388e24c93ca16553839c16f51030fa322917
v10.2.10
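
The check above uses `git tag --contains` to confirm that the backport commit is included in a tagged release. A self-contained sketch against a throwaway repository (the real check runs in a ceph.git clone; the commit message here is illustrative):

```shell
#!/bin/bash
set -e
# Build a throwaway repo with one commit and one tag to demonstrate
# how `git tag --contains <sha>` lists the releases containing a fix.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "fix: stale bucket index entries"
fix_sha=$(git rev-parse HEAD)
git tag v10.2.10
tags=$(git tag --contains "$fix_sha")
echo "$tags"
```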

Comment 10 errata-xmlrpc 2018-03-08 15:52:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0474

