Bug 740600 - Can't register xen system to an organization due to lack of entitlement
Summary: Can't register xen system to an organization due to lack of entitlement
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Spacewalk
Classification: Community
Component: Server
Version: 1.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Jan Pazdziora
QA Contact: Red Hat Satellite QA List
URL:
Whiteboard:
Depends On:
Blocks: space16
 
Reported: 2011-09-22 15:53 UTC by Christopher May-Townsend
Modified: 2011-12-22 16:50 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-12-22 16:50:34 UTC
Embargoed:


Attachments
output for log files (22.12 KB, text/plain), 2011-09-22 15:53 UTC, Christopher May-Townsend
Latest error log output (12.80 KB, text/plain), 2011-12-09 14:12 UTC, Christopher May-Townsend

Description Christopher May-Townsend 2011-09-22 15:53:26 UTC
Created attachment 524442 [details]
output for log files

Description of problem:

Using the 1.5 client, I try to register the system with Spacewalk using a basic activation key (only a name and the monitoring entitlement selected) and I get an internal server error on the client. The server generates the errors shown in the attachment. The second organization unit had 10000 entitlements set for each field.
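
For reference, the registration I am attempting is just the standard client registration with an activation key, roughly equivalent to the sketch below (the server URL and key name are placeholders, not the real values from this report):

# Minimal sketch of the failing registration step, assuming the stock
# rhnreg_ks client tool is installed. The server URL and activation key
# below are placeholders, not the actual values from this bug.
import subprocess

rc = subprocess.call([
    "rhnreg_ks",
    "--serverUrl=https://spacewalk.example.com/XMLRPC",
    "--activationkey=2-examplekey",  # key created in the second org
    "--force",
])
if rc != 0:
    print "registration failed with exit code %d" % rc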

How reproducible:

Every time, on any Xen VM I tried. I installed a fresh one (all VMs I use are clones of each other) and the same problem happened there. Running a CentOS 5.7 domU with Xen 3.1 and a CentOS 5.7 VM.


Steps to Reproduce:
1. Create a new org
2. Attach a VM to the org
3. The registration fails
  
Actual results:

Internal server error on the client, and numerous errors on the Spacewalk server.

Expected results:

The system registers as normal.

Additional info:

A CentOS 6 QEMU VM attaches perfectly fine. I'm running the Spacewalk server with PostgreSQL and will gladly run any tests anyone asks for. Also, I can register the client to the default org without problems and then migrate the box over to the second org and use it as normal; only the direct registration fails. I will soon test with a 3rd org just in case.

Comment 1 Christopher May-Townsend 2011-09-22 15:59:02 UTC
Just created a 3rd repo with entitlements and it connected fine, so something is wrong with the 2nd one. I named the second one tOm (we're theOthermedia), so maybe it's something to do with that?

There is still a bug here, but it could be something strange.

Comment 2 Christopher May-Townsend 2011-09-22 16:08:47 UTC
Sorry, I created a 3rd organization unit, not a repo.

Comment 3 Christopher May-Townsend 2011-09-22 16:43:07 UTC
Update: between creating the 3rd unit and setting it up for general use I have somehow triggered the bug again. It was working, and now, without touching the entitlements (everything is set to 10k), it no longer works.

Comment 4 Christopher May-Townsend 2011-09-23 08:50:49 UTC
Sorry for the spam. I had systems registering happily on the 2nd org before I left work last night; the only thing I was doing was migrating systems between the 1st and 3rd orgs. As of this morning it's no longer working.

If anyone thinks this bug is serious or has not been seen before, I would welcome step-by-step instructions to see what breaks. For now I'm going to do things very slowly and test after each step to see if I can narrow it down. It only appears to affect Xen VMs, and I'm convinced it's something to do with virtual machine entitlements.

Comment 5 Christopher May-Townsend 2011-09-26 15:38:45 UTC
Final comment on this unless I figure out a workaround. Basically, the problem only seems to occur when the parent Xen host is in the same org and has a base entitlement enabled.

I would be perfectly happy having the Xen image as a standalone server, but I am struggling to find a way to do this without having to migrate boxes across (which obviously loses all their groups, channel info, etc.).

Comment 6 Jan Pazdziora 2011-12-09 13:30:25 UTC
We'd need to figure out which entitlement it's failing on.

Could you please patch your driver_postgresql.py with

--- /usr/lib/python2.4/site-packages/spacewalk/server/rhnSQL/driver_postgresql.py.orig  2011-12-08 13:00:10.000000000 -0500
+++ /usr/lib/python2.4/site-packages/spacewalk/server/rhnSQL/driver_postgresql.py       2011-12-09 08:27:01.000000000 -0500
@@ -82,7 +82,15 @@
         query = "SELECT %s(%s)" % (self.name, positional_args)
 
         log_debug(2, query, args)
-        ret = self.cursor.execute(query, args)
+        try:
+            ret = self.cursor.execute(query, args)
+        except psycopg2.Error, e:
+            error_code = 99999
+            m = re.match('ERROR: +-([0-9]+)', e.pgerror)
+            if m:
+                error_code = int(m.group(1))
+            raise sql_base.SQLError(error_code, e.pgerror, e)
+  
         if self.ret_type == None:
             return ret
         else:

and reproduce the error? The log file should contain a slightly different traceback.
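
For reference, the intent of the except block is only to pull the numeric error code out of the PostgreSQL error text, so the new traceback shows which database-side check is failing. A standalone sketch of that mapping, with a made-up pgerror string, would be:

# Standalone illustration of the error-code mapping the patch adds.
# The pgerror text below is made up for demonstration; real messages
# come from the stored procedures, which raise application error codes.
import re

pgerror = "ERROR:  -20220 insufficient entitlements"
error_code = 99999
m = re.match('ERROR: +-([0-9]+)', pgerror)
if m:
    error_code = int(m.group(1))
print error_code  # 20220 in this made-up case; the real value tells us which check failed

If the regex does not match, the code stays at the 99999 fallback, which would itself tell us the error text has a different shape.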

Comment 7 Christopher May-Townsend 2011-12-09 14:12:29 UTC
Created attachment 544555 [details]
Latest error log output

Output after a failed attach of a Xen box, with the posted patch applied.

Comment 8 Christopher May-Townsend 2011-12-09 14:13:00 UTC
Oh and cheers for looking into this :)

Comment 9 Jan Pazdziora 2011-12-16 12:55:49 UTC
The rvii issue from the last traceback is fixed in Spacewalk nightly, with the change

https://fedorahosted.org/spacewalk/changeset/36208823bf8ecda8f4bc1376be4e8e69f4e0f2a6

If you apply this change, does it make things work, or do you get yet another traceback?

Comment 10 Christopher May-Townsend 2011-12-19 13:14:00 UTC
Just made the change, and it certainly appears to work! I'm having issues deleting the original system from Spacewalk, but with the Xen master node having entitlements, I was able to attach a Xen instance on that box to Spacewalk without any issues.

I'd say the problem is fixed! Thank you very much, Jan, for looking into it for me.

Cheers,
Chris

Comment 11 Milan Zázrivec 2011-12-22 16:50:34 UTC
Spacewalk 1.6 has been released.

