Bug 985012 - Prevent Error 500 due to IndividualClusterController.model being None
Summary: Prevent Error 500 due to IndividualClusterController.model being None
Keywords:
Status: CLOSED DUPLICATE of bug 878149
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: luci
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jan Pokorný [poki]
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-07-16 14:47 UTC by Jan Pokorný [poki]
Modified: 2013-08-01 08:09 UTC
CC: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-08-01 08:09:09 UTC
Target Upstream Version:
Embargoed:



Description Jan Pokorný [poki] 2013-07-16 14:47:26 UTC
While testing luci against python-weberror in FIPS mode [bug 746118],
I discovered that URLs like these:

  https://${HOST}:8084/cluster/${CLUSTERNAME}/failovers
  https://${HOST}:8084/cluster/${CLUSTERNAME}/fences
  https://${HOST}:8084/cluster/${CLUSTERNAME}/resources
  https://${HOST}:8084/cluster/${CLUSTERNAME}/services

lead to Error 500: "We're sorry but we weren't able to process this
request" (the error path involved the Beaker middleware, and the
FIPS-related bug caused a plain "Internal Server Error" to be emitted).


The tracebacks follow this pattern:

File '/usr/lib/python2.6/site-packages/pylons/wsgiapp.py',
line 125 in __call__
  response = self.dispatch(controller, environ, start_response)
File '/usr/lib/python2.6/site-packages/pylons/wsgiapp.py',
line 324 in dispatch
  return controller(environ, start_response)
File '/usr/lib64/python2.6/site-packages/luci/controllers/root.py',
line 53 in __call__
  return BaseController.__call__(self, environ, start_response)
File '/usr/lib64/python2.6/site-packages/luci/lib/base.py',
line 30 in __call__
  return TGController.__call__(self, environ, start_response)
File '/usr/lib/python2.6/site-packages/pylons/controllers/core.py',
line 221 in __call__
  response = self._dispatch_call()
File '/usr/lib/python2.6/site-packages/pylons/controllers/core.py',
line 172 in _dispatch_call
  response = self._inspect_call(func)
File '/usr/lib/python2.6/site-packages/pylons/controllers/core.py',
line 107 in _inspect_call
  result = self._perform_call(func, args)
File '/usr/lib/python2.6/site-packages/tg/controllers.py',
line 857 in _perform_call
  self, controller, params, remainder=remainder)
File '/usr/lib/python2.6/site-packages/tg/controllers.py',
line 172 in _perform_call
  output = controller(*remainder, **dict(params))

-- ONE OF --

# failovers
File '/usr/lib64/python2.6/site-packages/luci/controllers/cluster.py',
line 816 in failovers
  if not self.model.getFailoverDomainByName(failovername):
AttributeError: 'NoneType' object has no attribute 'getFailoverDomainByName'

# fences
File '/usr/lib64/python2.6/site-packages/luci/controllers/cluster.py',
line 936 in fences
  if not self.model.getFenceDeviceByName(fencename):
AttributeError: 'NoneType' object has no attribute 'getFenceDeviceByName'

# resources
File '/usr/lib64/python2.6/site-packages/luci/controllers/cluster.py',
line 516 in resources
  self.model.getResourceByName(resourcename)
AttributeError: 'NoneType' object has no attribute 'getResourceByName'

# services
File '/usr/lib64/python2.6/site-packages/luci/controllers/cluster.py',
line 644 in services
  if not self.model.getService(servicename):
AttributeError: 'NoneType' object has no attribute 'getService'

-- === ---


This condition should be caught earlier in the processing, and an
appropriate error message should be emitted.
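
All four tracebacks fail for the same reason: self.model is None when the
handler dereferences it. Below is a minimal sketch of the kind of early
guard suggested above; it is not the actual luci code, and the class,
the method name mirroring the traceback, and the error text are
illustrative assumptions only:

```python
class ClusterController(object):
    """Illustrative sketch (not luci's real controller) of checking
    self.model up front, so that a missing cluster model yields one
    descriptive error instead of an AttributeError deep in each handler."""

    def __init__(self, model=None):
        # model may legitimately be None, e.g. when no cluster node
        # is reachable and there is no authoritative data
        self.model = model

    def _require_model(self):
        # Central guard shared by all handlers that need the model
        if self.model is None:
            raise RuntimeError(
                "cluster model unavailable (no authoritative data); "
                "cannot serve this page")

    def fences(self, fencename):
        self._require_model()  # fails cleanly instead of AttributeError
        return self.model.getFenceDeviceByName(fencename)
```

With the guard in place, the error-page machinery can render one clear
message for all four affected pages instead of four different
"'NoneType' object has no attribute ..." tracebacks.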

Sidenote: what are the states that lead to the model attribute being None?
I reproduced the issue with a node-less cluster; unfortunately, I don't
recall whether this is an artificial degenerate case or something one
can set up deliberately.

Comment 2 Jan Pokorný [poki] 2013-07-16 20:25:22 UTC
re [comment 0]:

> Sidenote: what are the states that lead to the model attribute being None?
> I reproduced the issue with a node-less cluster; unfortunately, I don't
> recall whether this is an artificial degenerate case or something one
> can set up deliberately.

The situation was
- add/create cluster with 1+ node(s)
- stop ricci on all of these
  - this should cause no node to be listed in the nodes overview
- try to display cluster's fences, etc. as per URLs above

Apparently, luci should be robust enough to deal with such a situation.
Perhaps the whole scenario of luci's DB vs. reality, with no
authoritative data available, should be reconsidered to better fit
expectations.

Comment 4 Radek Steiger 2013-08-01 08:09:09 UTC

*** This bug has been marked as a duplicate of bug 878149 ***

