
Bug 1585307

Summary: [RFE] RGW: Relaxed region constraint enforcement
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vikhyat Umrao <vumrao>
Component: RGW
Assignee: Matt Benjamin (redhat) <mbenjamin>
Status: CLOSED ERRATA
QA Contact: Tejas <tchandra>
Severity: medium
Docs Contact: John Brier <jbrier>
Priority: medium
Version: 2.5
CC: cbodley, ceph-eng-bugs, ceph-qe-bugs, hnallurv, jbrier, kbader, kdreyer, mbenjamin, owasserm, sweil, tserlin, vumrao
Target Milestone: rc
Keywords: FutureFeature
Target Release: 3.1
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: RHEL: ceph-12.2.5-35.el7cp Ubuntu: ceph_12.2.5-20redhat1xenial
Doc Type: Enhancement
Doc Text:
.Relaxed region constraint enforcement
In {product} 3.x, using `s3cmd` with the `--region` option and a zonegroup that does not exist generates an `InvalidLocationConstraint` error. This did not occur in Ceph 2.x because it did not strictly check the region. With this update, Ceph 3.1 adds a new `rgw_relaxed_region_enforcement` Boolean option that enables relaxed behavior (non-enforcement of the region constraint), backward compatible with Ceph 2.x. The option defaults to false.
Story Points: ---
Clone Of:
Clones: 1591314, 1744766 (view as bug list)
Environment:
Last Closed: 2018-09-26 18:21:59 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1584264, 1744766    

Description Vikhyat Umrao 2018-06-01 19:32:07 UTC
Description of problem:
[support] RHCS 3 (Luminous) with s3cmd: regions getting InvalidLocationConstraint

Version-Release number of selected component (if applicable):
RHCS 3 - 12.2.4-10.el7cp
Luminous 

How reproducible:
Always, when the zonegroup does not exist. The same commands worked fine in the Jewel release (RHCS 2.y).

# s3cmd mb s3://test2 --region "whatever"
ERROR: S3 error: 400 (InvalidLocationConstraint): The specified location-constraint is not valid

# s3cmd mb s3://test2 --region "RegionOne"
ERROR: S3 error: 400 (InvalidLocationConstraint): The specified location-
constraint is not valid

# s3cmd mb s3://test2 --region ""
Bucket 's3://test2/' created


The reason it worked in Jewel is that Jewel did not strictly check this option; Luminous added a check condition:



In RHCS 3.x(Luminous)
==========================

void RGWCreateBucket::execute()
{
  RGWAccessControlPolicy old_policy(s->cct);
  buffer::list aclbl;
  buffer::list corsbl;
  bool existed;
  string bucket_name;
  rgw_make_bucket_entry_name(s->bucket_tenant, s->bucket_name, bucket_name);
  rgw_raw_obj obj(store->get_zone_params().domain_root, bucket_name);
  obj_version objv, *pobjv = NULL;

  op_ret = get_params();
  if (op_ret < 0)
    return;

  if (!location_constraint.empty() &&
      !store->has_zonegroup_api(location_constraint)) {
      ldout(s->cct, 0) << "location constraint (" << location_constraint << ")"
                       << " can't be found." << dendl;
      op_ret = -ERR_INVALID_LOCATION_CONSTRAINT;
      s->err.message = "The specified location-constraint is not valid";
      return;
  }

  if (!store->get_zonegroup().is_master_zonegroup() && !location_constraint.empty() &&
      store->get_zonegroup().api_name != location_constraint) {
    ldout(s->cct, 0) << "location constraint (" << location_constraint << ")"
                     << " doesn't match zonegroup" << " (" << store->get_zonegroup().api_name << ")"
                     << dendl;
    op_ret = -ERR_INVALID_LOCATION_CONSTRAINT;
    s->err.message = "The specified location-constraint is not valid";
    return;
  }


In RHCS 2.x(Jewel)
====================

void RGWCreateBucket::execute()
{
  RGWAccessControlPolicy old_policy(s->cct);
  buffer::list aclbl;
  buffer::list corsbl;
  bool existed;
  string bucket_name;
  rgw_make_bucket_entry_name(s->bucket_tenant, s->bucket_name, bucket_name);
  rgw_obj obj(store->get_zone_params().domain_root, bucket_name);
  obj_version objv, *pobjv = NULL;

  op_ret = get_params();
  if (op_ret < 0)
    return;

  if (!store->get_zonegroup().is_master_zonegroup() &&
      store->get_zonegroup().api_name != location_constraint) {
    ldout(s->cct, 0) << "location constraint (" << location_constraint << ") doesn't match zonegroup" << " (" << store->get_zonegroup().api_name << ")" << dendl;
    op_ret = -EINVAL;
    return;
  }


If you check RHCS 3.x, there is an extra condition that checks location_constraint (region/zonegroup):

if (!location_constraint.empty() &&  !store->has_zonegroup_api(location_constraint))

If you check RHCS 2.x, there was no strict checking of the location constraint (region/zonegroup), hence it works there.

This code came from this commit:
====================================

$ git blame src/rgw/rgw_op.cc
$ git show 25e4d1e4542

commit 25e4d1e454219cd31e2c1359f20175eff20b71f4
Author: Jiaying Ren <jiaying.ren>
Date:   Thu May 25 20:17:46 2017 +0800

    rgw/multisite: check location constraint existness
    
    to match the behavior of AWS S3
    
    Signed-off-by: Jiaying Ren <jiaying.ren>

diff --git a/src/rgw/rgw_op.cc b/src/rgw/rgw_op.cc
index 3a2fbdfee3..5b98e50fdb 100644
--- a/src/rgw/rgw_op.cc
+++ b/src/rgw/rgw_op.cc
@@ -2394,6 +2394,15 @@ void RGWCreateBucket::execute()
   if (op_ret < 0)
     return;
 
+  if (!location_constraint.empty() &&
+      !store->has_zonegroup_api(location_constraint)) {
+      ldout(s->cct, 0) << "location constraint (" << location_constraint << ")"
+                       << " can't be found." << dendl;
+      op_ret = -ERR_INVALID_LOCATION_CONSTRAINT;
+      s->err.message = "The specified location-constraint is not valid";
+      return;
+  }
+
   if (!store->get_zonegroup().is_master_zonegroup() &&
       store->get_zonegroup().api_name != location_constraint) {
     ldout(s->cct, 0) << "location constraint (" << location_constraint << ")"
diff --git a/src/rgw/rgw_rados.h b/src/rgw/rgw_rados.h
index 24bb8bb3cb..4e81cbe4fc 100644
--- a/src/rgw/rgw_rados.h
+++ b/src/rgw/rgw_rados.h
@@ -2465,6 +2465,16 @@ public:
   const string& get_current_period_id() {
     return current_period.get_id();
   }
+
+  bool has_zonegroup_api(const std::string& api) const {
+    if (!current_period.get_id().empty()) {
+      const auto& zonegroups_by_api = current_period.get_map().zonegroups_by_api;
+      if (zonegroups_by_api.find(api) != zonegroups_by_api.end())
+        return true;
+    }
+    return false;
+  }
+
   // pulls missing periods for period_history
   std::unique_ptr<RGWPeriodPuller> period_puller;
   // maintains a connected history of periods


From my RHCS 3.x test environment:
========================================

# radosgw-admin zonegroup list
{
    "default_info": "7c03f4c2-14c7-4923-9fbf-3fc5dddbf43a",
    "zonegroups": [
        "us-local-2",
        "us",
        "us-local-1",
        "default"
    ]
}


$ s3cmd -c .s3cfg.quicklab.pnq2.rgw1 mb s3://bucket1 --region us     
Bucket 's3://bucket1/' created

2018-05-28 16:38:47.240080 7fcf066b7700  1 ====== starting new request req=0x7fcf066b1190 =====
2018-05-28 16:38:47.312457 7fcf066b7700  1 ====== req done req=0x7fcf066b1190 op status=0 http_status=200 ======
2018-05-28 16:38:47.312514 7fcf066b7700  1 civetweb: 0x5623de019000: 10.3.117.9 - - [28/May/2018:16:38:47 -0400] "PUT /bucket1/ HTTP/1.1" 1 0 - -

$ s3cmd -c .s3cfg.quicklab.pnq2.rgw1 mb s3://bucket2 --region us-local-1
Bucket 's3://bucket2/' created

2018-05-28 16:39:08.635810 7fcf1f6e9700  1 ====== starting new request req=0x7fcf1f6e3190 =====
2018-05-28 16:39:08.694686 7fcf1f6e9700  1 ====== req done req=0x7fcf1f6e3190 op status=0 http_status=200 ======
2018-05-28 16:39:08.694757 7fcf1f6e9700  1 civetweb: 0x5623ddf17000: 10.3.117.9 - - [28/May/2018:16:39:08 -0400] "PUT /bucket2/ HTTP/1.1" 1 0 - -

$ s3cmd -c .s3cfg.quicklab.pnq2.rgw1 mb s3://bucket3 --region test     
ERROR: S3 error: 400 (InvalidLocationConstraint): The specified location-constraint is not valid



2018-05-28 16:39:22.763234 7fcf28efc700  1 ====== starting new request req=0x7fcf28ef6190 =====
2018-05-28 16:39:22.765512 7fcf28efc700  0 location constraint (test) can't be found. 

^^ see this.


2018-05-28 16:39:22.765542 7fcf25ef6700  1 ====== starting new request req=0x7fcf25ef0190 =====
2018-05-28 16:39:22.765607 7fcf28efc700  1 ====== req done req=0x7fcf28ef6190 op status=-2208 http_status=400 ======
2018-05-28 16:39:22.765643 7fcf28efc700  1 civetweb: 0x5623ddeb4000: 10.3.117.9 - - [28/May/2018:16:39:22 -0400] "PUT /bucket3/ HTTP/1.1" 1 0 - -

So in RHCS 3 it only works for valid region (zonegroup) names.
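Per the Doc Text above, RHCS 3.1 adds the `rgw_relaxed_region_enforcement` option to restore the 2.x behavior. A minimal ceph.conf sketch, assuming a typical RGW client section name (the section name here is illustrative; adjust it to your deployment):

```
[client.rgw.rgw1]
# Restore RHCS 2.x behavior: do not reject unknown location constraints.
# Defaults to false (strict checking) in RHCS 3.1.
rgw_relaxed_region_enforcement = true
```

With the option left at its default of false, the strict check shown above remains in effect.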

Comment 31 Vikhyat Umrao 2018-08-20 18:55:06 UTC
*** Bug 1591314 has been marked as a duplicate of this bug. ***

Comment 34 errata-xmlrpc 2018-09-26 18:21:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2819

Comment 35 Red Hat Bugzilla 2023-09-15 00:09:48 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days.