Bug 2273608 - [IBM nvmeof:1.1.0-1] NVMeOF deployment fails with ImportError for monitor_pb2_grpc
Summary: [IBM nvmeof:1.1.0-1] NVMeOF deployment fails with ImportError for monitor_pb2_grpc
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: NVMeOF
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 7.1
Assignee: Aviv Caro
QA Contact: Manohar Murthy
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2267614 2298578 2298579
 
Reported: 2024-04-05 10:08 UTC by Rahul Lepakshi
Modified: 2024-11-16 04:25 UTC
CC: 9 users

Fixed In Version: ceph-nvmeof-container-1.2.0-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-06-13 14:31:15 UTC
Embargoed:


Attachments
Logs (49.96 KB, text/plain), 2024-04-05 10:08 UTC, Rahul Lepakshi


Links
Red Hat Issue Tracker RHCEPH-8760, Last Updated: 2024-04-05 10:25:28 UTC
Red Hat Product Errata RHSA-2024:3925, Last Updated: 2024-06-13 14:31:20 UTC

Description Rahul Lepakshi 2024-04-05 10:08:30 UTC
Created attachment 2025350 [details]
Logs

Description of problem:

Issue: Unable to deploy the nvmeof service downstream with the builds listed below.
Error:

 Traceback (most recent call last):
   File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
     return _run_code(code, main_globals, None,
   File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
     exec(code, run_globals)
   File "/remote-source/ceph-nvmeof/app/control/__main__.py", line 12, in <module>
     from .server import GatewayServer
   File "/remote-source/ceph-nvmeof/app/control/server.py", line 27, in <module>
     from .proto import monitor_pb2_grpc
 ImportError: cannot import name 'monitor_pb2_grpc' from 'control.proto' (/remote-source/ceph-nvmeof/app/control/proto/__init__.py)
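
For context: monitor_pb2_grpc is a gRPC stub module generated from a .proto definition at build time, so this ImportError indicates the generated module was missing from (or never produced in) the control.proto package inside the image. Below is a minimal sketch of how such a stub is generated with grpcio-tools; the monitor.proto location and output paths are assumptions inferred from the traceback, not the project's actual build recipe:

   # Sketch only: regenerate the stubs the gateway imports (paths assumed).
   # Produces control/proto/monitor_pb2.py and control/proto/monitor_pb2_grpc.py.
   python3 -m grpc_tools.protoc \
       -I control/proto \
       --python_out=control/proto \
       --grpc_python_out=control/proto \
       control/proto/monitor.proto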

Attaching the full logs.

Build details
# ceph version
ceph version 18.2.1-119.el9cp (e7ae67cbcbfafacd65330907f545d7e5c9e300e1) reef (stable)

cp.stg.icr.io/cp/ibm-ceph/ceph-7-rhel9@sha256:2abe71f8e02f3e7d3b6255be621cccfc1c4b129263450b32ef74ebfd9cdaaa01

# podman inspect cp.stg.icr.io/cp/ibm-ceph/ceph-7-rhel9:7-32
[
     {
          "Id": "6189a016cf0b52ee913333180f0595f1e20382300abc4458175d00e37da23bd2",
          "Digest": "sha256:2abe71f8e02f3e7d3b6255be621cccfc1c4b129263450b32ef74ebfd9cdaaa01",
          "RepoTags": [
               "cp.stg.icr.io/cp/ibm-ceph/ceph-7-rhel9:7-32"
          ],
          "RepoDigests": [
               "cp.stg.icr.io/cp/ibm-ceph/ceph-7-rhel9@sha256:2abe71f8e02f3e7d3b6255be621cccfc1c4b129263450b32ef74ebfd9cdaaa01",
               "cp.stg.icr.io/cp/ibm-ceph/ceph-7-rhel9@sha256:cb41ebf5f15718eed80eab62e7b7882b4c351c9362e7e15de11eaf7c551750c9"
          ]

nvmeof : cp.stg.icr.io/cp/ibm-ceph/nvmeof-rhel9:1.1.0-1



Version-Release number of selected component (if applicable):
nvmeof-rhel9:1.1.0-1

How reproducible:
Always

Steps to Reproduce:
1. Deploy the nvmeof service with the IBM build https://public.dhe.ibm.com/ibmdl/export/pub/storage/ceph/testing/IBM-CEPH-7.1-202404040557.ci (a deployment sketch follows the image list below).

  cp.stg.icr.io/cp/ibm-ceph/nvmeof-rhel9:1.1.0-1
  cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.1.0-1

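For reference, a minimal cephadm deployment sketch; the pool name nvmeof_pool and the placement host are assumptions, not values taken from this run:

   # Sketch only: point cephadm at the image under test, then deploy the gateway.
   ceph config set mgr mgr/cephadm/container_image_nvmeof \
       cp.stg.icr.io/cp/ibm-ceph/nvmeof-rhel9:1.1.0-1
   ceph orch apply nvmeof nvmeof_pool --placement="<gateway-host>"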

Actual results: nvmeof service deployment failure 


Expected results: Successful nvmeof deployment 


Additional info:

Comment 2 roys 2024-04-07 07:54:13 UTC
@rlepaksh

Hi, I fixed the issue in the downstream code.

Once a new build is available, you can retest.

Thanks.

Comment 3 Rahul Lepakshi 2024-04-08 06:00:00 UTC
Thanks Roy. 

@avivcaro, can we have a downstream build ASAP? This is a blocker and we cannot proceed further; a build with this fix will unblock our testing.

Comment 5 tserlin 2024-04-09 15:02:19 UTC
(In reply to Aviv Caro from comment #4)
> Fixed by https://gitlab.cee.redhat.com/ceph/ceph-nvmeof/-/commit/2563c3d65190ef080af2a4aa73d8564d937d19b5

We'll need a downstream ceph-nvmeof-container-1.2.0. Justin, should I go ahead and build it, unless you were already in the process of doing so?

Thanks,

Thomas

Comment 9 harika chebrolu 2024-04-24 06:21:52 UTC
It's working fine with the latest build:

nvmeof_image=registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof:1.2.4-1 
nvmeof_cli_image=registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof-cli:1.2.4-1 

Log: 
http://magna002.ceph.redhat.com/cephci-jenkins/test-runs/openstack/RH/7.1/rhel-9/Regression/18.2.1-150/nvmeotcp/103/tier-2_nvmeof_functional_Regression/
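
As an additional sanity check, the failing import can be exercised directly in the fixed image. This is a sketch only; the PYTHONPATH value is assumed from the traceback in the description:

   # Sketch only: confirm the generated stub is importable in the new image.
   podman run --rm --entrypoint python3 \
       -e PYTHONPATH=/remote-source/ceph-nvmeof/app \
       registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof:1.2.4-1 \
       -c "from control.proto import monitor_pb2_grpc; print('import OK')"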

Comment 11 errata-xmlrpc 2024-06-13 14:31:15 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925

Comment 12 Red Hat Bugzilla 2024-11-16 04:25:36 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

