Bug 1074659 - targetd throws an exception if configured block doesn't exist
Summary: targetd throws an exception if configured block doesn't exist
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: targetd
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Tony Asleson
QA Contact: Martin Hoyer
URL:
Whiteboard:
Depends On: 1162381
Blocks: 1385242
 
Reported: 2014-03-10 19:06 UTC by Andy Grover
Modified: 2021-09-06 12:33 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-01 20:43:39 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
Red Hat Product Errata RHBA-2017:1982 (normal, SHIPPED_LIVE): targetd bug fix and enhancement update. Last updated: 2017-08-01 18:31:11 UTC

Description Andy Grover 2014-03-10 19:06:31 UTC
targetd opens the configured VGs on startup just to make sure they are accessible, but if one of them does not exist, the result is not a clean exit but an uncaught exception.
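
A quick way to confirm up front whether the configured volume group actually exists (a sketch assuming the default pool_name of vg-targetd; substitute whatever pool_name is set to in /etc/target/targetd.yaml):

# grep pool_name /etc/target/targetd.yaml
# vgs vg-targetd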

Comment 2 Bruno Goncalves 2014-11-10 12:58:20 UTC
Reproducible using targetd-0.7.1-1.el7.noarch


# cat /etc/target/targetd.yaml 
# See http://www.yaml.org/spec/1.2/spec.html for more on YAML.

# No default password, please pick a good one.

password: 1Password!

# defaults below; uncomment and edit
#pool_name: vg-targetd
#user: admin
#ssl: false
#target_name: iqn.2003-01.org.example.mach1:1234


# systemctl restart targetd
#

# systemctl status targetd
targetd.service - targetd storage array API daemon
   Loaded: loaded (/usr/lib/systemd/system/targetd.service; disabled)
   Active: failed (Result: exit-code) since Mon 2014-11-10 07:54:57 EST; 2min 43s ago
  Process: 18778 ExecStart=/usr/bin/targetd (code=exited, status=1/FAILURE)
 Main PID: 18778 (code=exited, status=1/FAILURE)

Nov 10 07:54:52 intel-chiefriver-01.lab.eng.rdu.redhat.com systemd[1]: Starting targetd storage array API daemon...
Nov 10 07:54:52 intel-chiefriver-01.lab.eng.rdu.redhat.com systemd[1]: Started targetd storage array API daemon.
Nov 10 07:54:52 intel-chiefriver-01.lab.eng.rdu.redhat.com [18778]: detected unhandled Python exception in '/usr/bin/targetd'
Nov 10 07:54:57 intel-chiefriver-01.lab.eng.rdu.redhat.com [18778]: communication with ABRT daemon failed: timed out
Nov 10 07:54:57 intel-chiefriver-01.lab.eng.rdu.redhat.com targetd[18778]: Traceback (most recent call last):
Nov 10 07:54:57 intel-chiefriver-01.lab.eng.rdu.redhat.com targetd[18778]: File "/usr/bin/targetd", line 24, in <module>
Nov 10 07:54:57 intel-chiefriver-01.lab.eng.rdu.redhat.com targetd[18778]: sys.exit(main())
Nov 10 07:54:57 intel-chiefriver-01.lab.eng.rdu.redhat.com targetd[18778]: File "/usr/lib/python2.7/site-packages/targetd/main.py", line 215, in main
Nov 10 07:54:57 intel-chiefriver-01.lab.eng.rdu.redhat.com targetd[18778]: update_mapping()
Nov 10 07:54:57 intel-chiefriver-01.lab.eng.rdu.redhat.com targetd[18778]: File "/usr/lib/python2.7/site-packages/targetd/main.py", line 195, in update_mapping
Nov 10 07:54:57 intel-chiefriver-01.lab.eng.rdu.redhat.com targetd[18778]: mapping.update(block.initialize(config))
Nov 10 07:54:57 intel-chiefriver-01.lab.eng.rdu.redhat.com targetd[18778]: File "/usr/lib/python2.7/site-packages/targetd/block.py", line 87, in initialize
Nov 10 07:54:57 intel-chiefriver-01.lab.eng.rdu.redhat.com targetd[18778]: test_vg = lvm.vgOpen(get_vg_lv(pool)[0])
Nov 10 07:54:57 intel-chiefriver-01.lab.eng.rdu.redhat.com targetd[18778]: lvm.LibLVMError: (-1, 'Volume group "vg-targetd" not found')
Nov 10 07:54:57 intel-chiefriver-01.lab.eng.rdu.redhat.com systemd[1]: targetd.service: main process exited, code=exited, status=1/FAILURE
Nov 10 07:54:57 intel-chiefriver-01.lab.eng.rdu.redhat.com systemd[1]: Unit targetd.service entered failed state.

Comment 3 Andy Grover 2014-11-11 00:53:41 UTC
Fixed in 0.7.2

Comment 4 Tom Coughlan 2014-12-23 20:39:27 UTC
moving this to 7.2

Comment 7 Mark Thacker 2016-11-30 20:49:47 UTC
Seems like a bug, but setting a pm_ack in any case.

Comment 10 mdidomenico 2017-03-04 12:59:53 UTC
Is there a patch or workaround available for this?  I'm using RHEL 7.3 and hit this bug.

Comment 11 Tony Asleson 2017-03-06 14:29:36 UTC
(In reply to mdidomenico from comment #10)
> Is there a patch or workaround available for this?  I'm using RHEL 7.3 and
> hit this bug.

The workaround is to ensure that the lvm VG or lvm LV thinpool exists and is specified correctly in the yaml configuration file.
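
For example, a minimal sketch of that workaround on a box with a spare disk (the device path and size below are placeholders, not taken from this report):

# vgcreate vg-targetd /dev/sdb
# lvcreate -L 10G -T vg-targetd/pool    # only if pool_name points at a thin pool
# systemctl restart targetd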

Comment 12 mdidomenico 2017-03-06 15:52:06 UTC
I'm not sure I understand the documentation or the purpose of the "pool_name" option then.  On a freshly installed system, there is no vg-targetd VG created, and it doesn't appear that you can "unset" pool_name in the yaml config file.

As a workaround I set the pool_name to my OS volume group, but that seems a little icky.  I don't intend to use that volume group for any iSCSI targets, and I don't plan to use a pool_name for any targets (I have local files to share out as LUNs).

So what's the purpose of the "pool_name" variable?  Is the real fix in 7.4 going to just soft-fail if the default coded "pool_name" doesn't exist?

Comment 13 Tony Asleson 2017-03-06 16:12:59 UTC
The pool_name is a variable that specifies the lvm VG that will be used to allocate LVs from when using the targetd service to serve up ISCSI targets.  Without this the service really cannot do anything.

From what you have disclosed in your use case, I'm not seeing any benefit for you to run the service, but perhaps I'm missing something?  If this is the case I would suggest removing the targetd package until you have a need to serve up ISCSI targets using LVM & LIO.
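
To illustrate, a minimal sketch of a matching /etc/target/targetd.yaml, with pool_name pointing at a VG that actually exists on the box (the names below are only examples, reusing the defaults from comment 2):

# cat /etc/target/targetd.yaml
password: 1Password!
# pool_name must name an existing VG that targetd can allocate LVs from
pool_name: vg-targetd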

Comment 14 mdidomenico 2017-03-06 18:35:29 UTC
(In reply to Tony Asleson from comment #13)
> From what you have disclosed in your use case, I'm not seeing any benefit
> for you to run the service, but perhaps I'm missing something?  If this is
> the case I would suggest removing the targetd package until you have a need
> to serve up ISCSI targets using LVM & LIO.

The use case is where all of the target luns I wish to share out are file i/o based and not part of a volume group or logical volume.

Comment 15 Tony Asleson 2017-03-06 23:17:43 UTC
(In reply to mdidomenico from comment #14)
> The use case is where all of the target luns I wish to share out are file
> i/o based and not part of a volume group or logical volume.

targetd adds no value in this use case right now.  In the future it's possible it may.

Comment 16 mdidomenico 2017-03-06 23:53:35 UTC
(In reply to Tony Asleson from comment #15)
> (In reply to mdidomenico from comment #14)
> > The use case is where all of the target luns I wish to share out are file
> > i/o based and not part of a volume group or logical volume.
> 
> targetd adds no value in this use case right now.  In the future it's
> possible it may.

Huh?  i come from the rhel6 world, so i might be a little hazy on how rhel7 things work, but my understanding is that targetcli and targetd are linked, one controls the other.  looking at the targetcli man page under the file io heading

-- Fileio also supports using an existing file, or creating a new file. New files are sparsely allocated by default. --

Which seems to indicate it does exactly what i want.  share a pre-created image file through iscsi as a lun.

the old tgtd did this without an issue using a file backing store.  are you saying targetd doesn't have this capability even though the man pages seem to indicate it does?

i'm not near a test environment at the moment, otherwise i'd test it myself.

Comment 17 Tony Asleson 2017-03-07 02:10:51 UTC
(In reply to mdidomenico from comment #16)
> (In reply to Tony Asleson from comment #15)
> > (In reply to mdidomenico from comment #14)
> > > The use case is where all of the target luns I wish to share out are file
> > > i/o based and not part of a volume group or logical volume.
> > 
> > targetd adds no value in this use case right now.  In the future it's
> > possible it may.
> 
> Huh?  i come from the rhel6 world, so i might be a little hazy on how rhel7
> things work, but my understanding is that targetcli and targetd are linked,
> one controls the other.  looking at the targetcli man page under the file io
> heading
> 
> -- Fileio also supports using an existing file, or creating a new file. New
> files are sparsely allocated by default. --
> 
> Which seems to indicate it does exactly what i want.  share a pre-created
> image file through iscsi as a lun.
> 
> the old tgtd did this without an issue using a file backing store.  are you
> saying targetd doesn't have this capability even though the man pages seem
> to indicate it does?
> 
> i'm not near a test environment at the moment, otherwise i'd test it myself.

So sorry for all the confusion, let's try to clear it up.  Targetd is _not_ required and _not_ needed for targetcli/LIO or any of what you're trying to achieve.  Targetd is not linked to or controlled by targetcli.  Yes, targetcli is exactly what you should be using.

Targetd is a remote storage JSON API which turns a box into a storage appliance which only has a subset of all the features of LIO for block and also has API for NFS exports and using btrfs.
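
For the fileio use case from comment 16, a rough targetcli sketch (the backstore name, file path, size, and initiator IQN are placeholders; the target IQN reuses the example from the default config; see targetcli(8) for the authoritative syntax):

# targetcli /backstores/fileio create name=disk01 file_or_dev=/var/lib/iscsi/disk01.img size=10G
# targetcli /iscsi create iqn.2003-01.org.example.mach1:1234
# targetcli /iscsi/iqn.2003-01.org.example.mach1:1234/tpg1/luns create /backstores/fileio/disk01
# targetcli /iscsi/iqn.2003-01.org.example.mach1:1234/tpg1/acls create iqn.1994-05.com.redhat:client1
# targetcli saveconfig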

Comment 18 mdidomenico 2017-03-07 03:10:02 UTC
(In reply to Tony Asleson from comment #17)
> 
> So sorry for all the confusion, let's try to clear it up.  Targetd is _not_
> required and _not_ needed for targetcli/LIO or any of what you're trying to
> achieve.  Targetd is not linked to or controlled by targetcli.  Yes,
> targetcli is exactly what you should be using.
> 
> Targetd is a remote storage JSON API which turns a box into a storage
> appliance which only has a subset of all the features of LIO for block and
> also has API for NFS exports and using btrfs.

Oh! Okay.  Thank you for clearing that up.  Everything I've come across as we transition from rhel6 to rhel7 reads a little differently than that.  Or at least my interpretation of it did.

Thanks again, I'll uninstall targetd and see if I can get my network situated like I need.

Comment 19 Andy Grover 2017-03-07 03:20:56 UTC
mdidomenico, sounds like you're on the right track. Just wanted to add -- there's no daemon that runs but you will likely want to enable target.service, which restores LIO settings (and then exits) on boot.
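
In practice that would be something like:

# systemctl enable target.service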

Comment 20 Martin Hoyer 2017-04-28 12:23:46 UTC
Tested with targetd-0.8.5-1.el7, works well.
No regression found.

Comment 21 errata-xmlrpc 2017-08-01 20:43:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1982

