Created attachment 510564 [details]
rhevm template build fail 01

Description of problem:

recreate:
1. setup rhevm config on conductor server
2. verify rhevm setup using deltacloud api
3. verify aeolus-configure creates a rhevm provider
4. in webui, create rhevm provider account; test credentials using the deltacloud api rhevm driver
5. use aeolus-image to build a template
6. The template starts to build but fails with "Unable to read package metadata. This may be due to a missing repodata directory. Please ensure the tree has been correctly generated."

template xml tried:

[root@hp-ml370g5-01 ~]# cat template.xml
<?xml version="1.0"?>
<template>
  <name>template01</name>
  <description>template01</description>
  <os>
    <name>Fedora</name>
    <arch>x86_64</arch>
    <version>14</version>
    <install type="url">
      <url>http://download.fedoraproject.org/pub/fedora/linux/releases/14/Fedora/x86_64/os/</url>
    </install>
  </os>
  <repositories>
    <repository name="custom">
      <url>http://repos.fedorapeople.org/repos/aeolus/demo/webapp/</url>
      <signed>false</signed>
    </repository>
  </repositories>
</template>

[root@hp-ml370g5-01 ~]# cat templateInternal.xml
<?xml version="1.0"?>
<template>
  <name>templateInternal01</name>
  <description>templateInsternal01</description>
  <os>
    <name>Fedora</name>
    <arch>x86_64</arch>
    <version>14</version>
    <install type="url">
      <url>http://download.devel.redhat.com/released/F-14/GOLD/Fedora/x86_64/os/</url>
    </install>
  </os>
  <repositories>
    <repository name="custom">
      <url>http://repos.fedorapeople.org/repos/aeolus/demo/webapp/</url>
      <signed>false</signed>
    </repository>
  </repositories>
</template>

[root@hp-ml370g5-01 ~]# aeolus-image build --target rhev-m --template /root/templateInternal.xml
Output:
Target Image: 3bdef686-5807-4494-a878-2d07baa6a6f0
Image: 1e9b4cb3-b21c-4f6a-af08-0d7b706c9ec0
Build: 524d24fc-2335-4fac-ab59-0c7bd634c930
Status: BUILDING
Percent Complete: 0

image factory log: /var/log/imagefactory.log
2011-06-29 22:23:02,388 DEBUG oz.Guest.FedoraGuest pid(27333) Message: Mounting ISO
2011-06-29 22:23:02,432 DEBUG oz.Guest.FedoraGuest pid(27333) Message: Checking if there is enough space on the filesystem
2011-06-29 22:23:02,446 DEBUG oz.Guest.FedoraGuest pid(27333) Message: Extracting ISO contents
2011-06-29 22:23:12,028 DEBUG oz.Guest.FedoraGuest pid(27333) Message: Putting the kickstart in place
2011-06-29 22:23:12,154 DEBUG oz.Guest.FedoraGuest pid(27333) Message: Modifying the boot options
2011-06-29 22:23:12,155 DEBUG oz.Guest.FedoraGuest pid(27333) Message: Generating new ISO
2011-06-29 22:23:15,822 INFO oz.Guest.FedoraGuest pid(27333) Message: Cleaning up old ISO data
2011-06-29 22:23:15,941 DEBUG imagefactory.BuildJob.BuildAdaptor pid(27333) Message: Raising event with agent handler (<ImageFactoryAgent(Thread-1, initial)>), changed percent complete from 0 to 10
2011-06-29 22:23:15,942 INFO oz.Guest.FedoraGuest pid(27333) Message: Generating 10GB diskimage for templateInternal01
2011-06-29 22:23:15,943 DEBUG imagefactory.builders.BaseBuilder.FedoraBuilder pid(27333) Message: Doing base install via Oz
2011-06-29 22:23:15,943 INFO oz.Guest.FedoraGuest pid(27333) Message: Running install for templateInternal01
2011-06-29 22:23:15,943 INFO oz.Guest.FedoraGuest pid(27333) Message: Generate XML for guest templateInternal01 with bootdev cdrom
2011-06-29 22:23:15,945 DEBUG oz.Guest.FedoraGuest pid(27333) Message: Generated XML:
<?xml version="1.0"?>
<domain type="kvm">
  <name>templateInternal01</name>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <uuid>7e78c7fa-ef69-4122-8eeb-eec768b62c2b</uuid>
  <clock offset="utc"/>
  <vcpu>1</vcpu>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <os>
    <type>hvm</type>
    <boot dev="cdrom"/>
  </os>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>destroy</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <console device="pty"/>
    <graphics port="-1" type="vnc"/>
    <interface type="bridge">
      <source bridge="virbr0"/>
      <mac address="52:54:00:31:25:d8"/>
      <model type="virtio"/>
    </interface>
    <input bus="ps2" type="mouse"/>
    <console type="pty">
      <target port="0"/>
    </console>
    <disk device="disk" type="file">
      <target dev="vda" bus="virtio"/>
      <source file="/var/tmp/base-image-3bdef686-5807-4494-a878-2d07baa6a6f0.dsk"/>
    </disk>
    <disk type="file" device="cdrom">
      <source file="/var/tmp/templateInternal01-url-oz.iso"/>
      <target dev="hdc"/>
    </disk>
  </devices>
</domain>
2011-06-29 22:23:17,179 DEBUG oz.Guest.FedoraGuest pid(27333) Message: Waiting for templateInternal01 to finish installing, 3600/3600
2011-06-29 22:23:27,221 DEBUG oz.Guest.FedoraGuest pid(27333) Message: Waiting for templateInternal01 to finish installing, 3590/3600
2011-06-29 22:23:37,260 DEBUG oz.Guest.FedoraGuest pid(27333) Message: Waiting for templateInternal01 to finish installing, 3580/3600
2011-06-29 22:23:47,316 DEBUG oz.Guest.FedoraGuest pid(27333) Message: Waiting for templateInternal01 to finish installing, 3570/36

In previous attempts I was able to build a template for RHEV-M, but I am not able to now. Is the repodata recreated on the aeolus server itself, or are we using the repodata from the Fedora repo directly? I tried two different repos to test this theory; neither worked. I also tried removing the additional repository listed at
http://aeolusproject.org/page/Launching_a_Deployment_with_CLI_Tools#Create_a_deployable.xml_file_and_host_it

See screenshots for details.
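One quick way to split the two possibilities (just a sketch, not part of the original report: it simply appends repodata/repomd.xml to the two <install> URLs above) is to confirm that both trees serve metadata from the factory host. Anaconda inside the Oz guest also needs outbound HTTP through libvirt's NAT network, so a clean result here would point at guest-side networking rather than the repos themselves:

[root@hp-ml370g5-01 ~]# curl -sIL http://download.fedoraproject.org/pub/fedora/linux/releases/14/Fedora/x86_64/os/repodata/repomd.xml | grep HTTP
[root@hp-ml370g5-01 ~]# curl -sIL http://download.devel.redhat.com/released/F-14/GOLD/Fedora/x86_64/os/repodata/repomd.xml | grep HTTP

A final "HTTP/1.1 200 OK" from each (the mirror URL may redirect first) suggests the repodata is intact and reachable from the host.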
aeolus-configure output:

[root@hp-ml370g5-01 export]# aeolus-configure
notice: /Stage[main]/Apache/Exec[permit-http-networking]/returns: executed successfully
notice: /File[/var/lib/aeolus-conductor]/ensure: created
notice: /Stage[main]/Aeolus::Conductor/Selinux::Mode[permissive]/Exec[set_selinux_permissive]/returns: executed successfully
notice: /Stage[main]/Aeolus::Conductor/Service[condor]/ensure: ensure changed 'stopped' to 'running'
notice: /File[/etc/rhevm.json]/content: content changed '{md5}98e07263d603123a6655e3ec7abb9e76' to '{md5}7adf7ac7403c7c19d60d07585f8295fd'
notice: /Stage[main]/Aeolus::Image-factory/Service[qpidd]/ensure: ensure changed 'stopped' to 'running'
notice: /File[/etc/init.d/deltacloud-rhevm]/ensure: defined content as '{md5}cccbcf97dcbebe48ccb7978f0c802622'
notice: /Stage[main]/Aeolus::Conductor/Postgres::User[aeolus]/Exec[create_aeolus_postgres_user]/returns: executed successfully
notice: /Stage[main]/Aeolus::Rhevm/Aeolus::Deltacloud[rhevm]/Service[deltacloud-rhevm]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Conductor/Rails::Create::Db[create_aeolus_database]/Exec[create_rails_database]/returns: (in /usr/share/aeolus-conductor)
notice: /Stage[main]/Aeolus::Conductor/Rails::Create::Db[create_aeolus_database]/Exec[create_rails_database]/returns: executed successfully
notice: /Stage[main]/Aeolus::Conductor/Service[aeolus-connector]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Apache/Service[httpd]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Conductor/Service[solr]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Conductor/Rails::Migrate::Db[migrate_aeolus_database]/Exec[migrate_rails_database]/returns: executed successfully
notice: /Stage[main]/Aeolus::Conductor/Rails::Seed::Db[seed_aeolus_database]/Exec[seed_rails_database]/returns: (in /usr/share/aeolus-conductor)
notice: /Stage[main]/Aeolus::Conductor/Rails::Seed::Db[seed_aeolus_database]/Exec[seed_rails_database]/returns: executed successfully
notice: /File[/var/lib/aeolus-conductor/production.seed]/ensure: created
notice: /Stage[main]/Aeolus::Image-factory/Service[imagefactory]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Site_admin[admin]/Exec[create_site_admin_user]/returns: (in /usr/share/aeolus-conductor)
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Site_admin[admin]/Exec[create_site_admin_user]/returns: User admin registered
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Site_admin[admin]/Exec[create_site_admin_user]/returns: executed successfully
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Site_admin[admin]/Exec[grant_site_admin_privs]/returns: (in /usr/share/aeolus-conductor)
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Site_admin[admin]/Exec[grant_site_admin_privs]/returns: Granting administrator privileges for admin...
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Site_admin[admin]/Exec[grant_site_admin_privs]/returns: executed successfully
notice: /File[/etc/init.d/deltacloud-ec2-us-west-1]/ensure: defined content as '{md5}d52f8ab18e5fec3d847c2ec754409857'
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Provider[ec2-us-west-1]/Aeolus::Deltacloud[ec2-us-west-1]/Service[deltacloud-ec2-us-west-1]/ensure: ensure changed 'stopped' to 'running'
notice: /File[/etc/init.d/deltacloud-ec2-us-east-1]/ensure: defined content as '{md5}d8e1ef85277e52a647815e3177766704'
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Provider[ec2-us-east-1]/Aeolus::Deltacloud[ec2-us-east-1]/Service[deltacloud-ec2-us-east-1]/ensure: ensure changed 'stopped' to 'running'
notice: /File[/etc/init.d/deltacloud-mock]/ensure: defined content as '{md5}91f7a7b75548184be3bc143f11152ad2'
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Provider[mock]/Aeolus::Deltacloud[mock]/Service[deltacloud-mock]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Conductor/Service[conductor-delayed_job]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Conductor/Exec[build_solr_index]/returns: (in /usr/share/aeolus-conductor)
notice: /Stage[main]/Aeolus::Conductor/Exec[build_solr_index]/returns: executed successfully
notice: /Stage[main]/Aeolus::Conductor/Service[aeolus-conductor]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Rhevm/Aeolus::Conductor::Hwp[rhevm-hwp]/Web_request[hwp-rhevm-hwp]/post: post changed '' to 'https://localhost/conductor/hardware_profiles'
notice: /Stage[main]/Aeolus::Rhevm/Aeolus::Conductor::Provider[rhevm]/Web_request[provider-rhevm]/post: post changed '' to 'https://localhost/conductor/providers'
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Provider[ec2-us-east-1]/Aeolus::Conductor::Provider[ec2-us-east-1]/Web_request[provider-ec2-us-east-1]/post: post changed '' to 'https://localhost/conductor/providers'
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Provider[ec2-us-west-1]/Aeolus::Conductor::Provider[ec2-us-west-1]/Web_request[provider-ec2-us-west-1]/post: post changed '' to 'https://localhost/conductor/providers'
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Provider[mock]/Aeolus::Conductor::Provider[mock]/Web_request[provider-mock]/post: post changed '' to 'https://localhost/conductor/providers'
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Conductor::Hwp[hwp1]/Web_request[hwp-hwp1]/post: post changed '' to 'https://localhost/conductor/hardware_profiles'
notice: /Stage[main]/Aeolus::Conductor/Service[conductor-dbomatic]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Iwhd/Service[mongod]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Iwhd/Service[iwhd]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Create_bucket[aeolus]/Exec[create-bucket-aeolus]/returns:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Create_bucket[aeolus]/Exec[create-bucket-aeolus]/returns:                                  Dload  Upload   Total   Spent    Left  Speed
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Create_bucket[aeolus]/Exec[create-bucket-aeolus]/returns:   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
notice: /Stage[main]/Aeolus::Conductor::Seed_data/Aeolus::Create_bucket[aeolus]/Exec[create-bucket-aeolus]/returns: executed successfully
notice: Finished catalog run in 102.57 seconds
[root@hp-ml370g5-01 ~]#
ruby /root/checkServices.rb
Checking aeolus-conductor ... Success: (pid 27539) is running...
Checking aeolus-connector ... Success: image_factory_connector (pid 27074) is running...
Checking condor ... Success: condor_master (pid 26920) is running...
Checking conductor-dbomatic ... Success: dbomatic (pid 27704) is running...
Checking conductor-delayed_job ... Success: delayed_job (pid 27457) is running...
Checking conductor-warehouse_sync ... /root/checkServices.rb:31: command not found: /etc/init.d/conductor-warehouse_sync status
FAILURE:
Checking deltacloud-ec2-us-east-1 ... Success: deltacloudd (pid 27401) is running...
Checking deltacloud-ec2-us-west-1 ... Success: deltacloudd (pid 27379) is running...
Checking deltacloud-mock ... Success: deltacloudd (pid 27421) is running...
Checking httpd ... Success: httpd (pid 27107) is running...
Checking imagefactory ... Success: imagefactory (pid 27333) is running...
Checking iwhd ... Success: iwhd (pid 27772) is running...
Checking libvirtd ... Success: libvirtd (pid 3800) is running...
Checking mongod ... Success: mongod (pid 27740) is running...
Checking ntpd ... Success: ntpd (pid 25300) is running...
Checking postgresql ... Success: postmaster (pid 4133) is running...
Checking qpidd ... Success: qpidd (pid 26952) is running...
Checking production solr ... Success:
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java      27134 root   64u  IPv6 232666      0t0  TCP *:8983 (LISTEN)
Checking connector ... Success:
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
image_fac 27074 root   12u  IPv4 232598      0t0  TCP localhost:cfinger (LISTEN)
Checking condor_q ... Success:
-- Submitter: hp-ml370g5-01.rhts.eng.bos.redhat.com : <10.16.64.103:55055> : hp-ml370g5-01.rhts.eng.bos.redhat.com
 ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD
0 jobs; 0 idle, 0 running, 0 held
Checking condor_status ... Success:
[root@hp-ml370g5-01 ~]# cat /etc/aeolus-configure/nodes/default_configure
#Default setup configuration.
#Set everything up on this box.
#You can override the default behavior
#by creating <fqdn>_configure with the
#class membership and parameters as
#desire and it will take precedence over this.
#NOTE: Although this suggests the components
#can be deployed on individual boxes. This will likely
#become a common practice, but be advised that currently
#apart from https on the web server for conductor, you should
#consider intermachine communications insecure. Securing
#intermachine service calls is on the roadmap.
---
parameters:
  enable_https: true
  enable_security: false
  rhevm_nfs_server: hp-ml370g5-01.rhts.eng.bos.redhat.com
  rhevm_nfs_export: /exports/iwhd
  rhevm_nfs_mount_point: /mnt/iwhd-nfs
  rhevm_deltacloud_port: 3005
  rhevm_deltacloud_username: <snip>
  rhevm_deltacloud_password: <snip>
  rhevm_deltacloud_powershell_url: https://10.16.120.32:8548/rhevm-api-powershell
classes:
  - aeolus::conductor
  - aeolus::image-factory
  - aeolus::iwhd
  - aeolus::conductor::seed_data
  - aeolus::rhevm
[root@hp-ml370g5-01 ~]#
[root@hp-ml370g5-01 ~]# ps -ef | grep deltacloud
root     27379     1  0 21:28 ?        00:00:18 /usr/bin/ruby /usr/bin/deltacloudd -i ec2 -e production -p 3004 --provider us-west-1
root     27401     1  0 21:28 ?        00:00:18 /usr/bin/ruby /usr/bin/deltacloudd -i ec2 -e production -p 3003 --provider us-east-1
root     27421     1  0 21:28 ?        00:00:18 /usr/bin/ruby /usr/bin/deltacloudd -i mock -e production -p 3002
root     27988     1  0 21:45 pts/0    00:00:14 /usr/bin/ruby /usr/bin/deltacloudd -i rhevm -e production -r 0.0.0.0 -p 3005 --provider https://10.16.120.32:8548/rhevm-api-powershell
root     28873 23291  0 22:39 pts/0    00:00:00 grep deltacloud
[root@hp-ml370g5-01 ~]#

[root@hp-ml370g5-01 ~]# echo $API_PROVIDER
https://10.16.120.32:8548/rhevm-api-powershell
[root@hp-ml370g5-01 ~]#
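For step 2 of the recreate (verifying the RHEV-M setup through the Deltacloud API), a hedged example of the kind of check involved, run against the deltacloud-rhevm instance on port 3005 shown above; the credentials are the snipped rhevm_deltacloud_username/rhevm_deltacloud_password values and are left as placeholders here:

[root@hp-ml370g5-01 ~]# curl -s -u '<rhevm_deltacloud_username>:<rhevm_deltacloud_password>' -H 'Accept: application/xml' http://localhost:3005/api
[root@hp-ml370g5-01 ~]# curl -s -u '<rhevm_deltacloud_username>:<rhevm_deltacloud_password>' -H 'Accept: application/xml' http://localhost:3005/api/realms

An <api> document reporting the rhevm driver from the first call, and a realm listing from the second, suggest the driver can authenticate against the rhevm-api-powershell endpoint.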
[root@hp-ml370g5-01 ~]# rpm -qa | grep aeolus
aeolus-conductor-0.3.0-0.el6.20110628135944git2a88782.noarch
rubygem-aeolus-cli-0.0.1-1.el6.20110628165632git0dfe3ff.noarch
aeolus-all-0.3.0-0.el6.20110628135944git2a88782.noarch
aeolus-conductor-daemons-0.3.0-0.el6.20110628135944git2a88782.noarch
aeolus-conductor-doc-0.3.0-0.el6.20110628135944git2a88782.noarch
aeolus-configure-2.0.1-0.el6.20110628141215gitb8aaf85.noarch

[root@hp-ml370g5-01 ~]# rpm -qa | grep iwhd
iwhd-0.96.1.9e86-1.el6.x86_64

[root@hp-ml370g5-01 ~]# rpm -qa | grep image
rubygem-image_factory_console-0.4.0-1.el6.20110628135944git2a88782.noarch
rubygem-image_factory_connector-0.0.3-1.el6.20110628135944git2a88782.noarch
imagefactory-0.2.2-1.el6.noarch
genisoimage-1.1.9-11.el6.x86_64
[root@hp-ml370g5-01 ~]#
*** Bug 717580 has been marked as a duplicate of this bug. ***
Setting NEEDINFO due to the priority of this feature.
This is almost certainly a misconfiguration on the host, meaning that the guest can't reach out to anything external. Can you run this script (as root):

http://people.redhat.com/clalance/collect_libvirt_info.sh

and attach the results to this BZ?

Chris Lalancette
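A quick manual version of the same check (just a sketch; it assumes the stock libvirt "default" NAT network on virbr0) is to confirm that IP forwarding is on and that libvirt's NAT and forwarding rules for 192.168.122.0/24 are actually loaded:

# cat /proc/sys/net/ipv4/ip_forward
# iptables -t nat -L POSTROUTING -n | grep 192.168.122
# iptables -L FORWARD -n | grep 192.168.122

If the two iptables greps come back empty, the guest has no path out to the install tree, which would produce exactly the Anaconda repodata error above.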
[root@hp-ml370g5-01 collect]# cat collect.txt
# virsh list --all
 Id Name                 State
----------------------------------

# virsh net-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes

# virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>273f0694-8c5f-461d-85c7-ab1a13e9401e</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0' />
  <mac address='52:54:00:82:A5:ED'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254' />
    </dhcp>
  </ip>
</network>

# cat /proc/sys/net/ipv4/ip_forward
1
# cat /proc/sys/net/bridge/bridge-nf-call-arptables
1
# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1

# iptables -t filter -L -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

# iptables -t nat -L -v
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

# iptables -t mangle -L -v
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.52540082a5ed       yes             virbr0-nic

# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.122.0   *               255.255.255.0   U     0      0        0 virbr0
10.16.64.0      *               255.255.248.0   U     0      0        0 eth0
link-local      *               255.255.0.0     U     1002   0        0 eth0
default         10.16.71.254    0.0.0.0         UG    0      0        0 eth0

# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:15:60:A3:E3:B0
          inet addr:10.16.64.103  Bcast:10.16.71.255  Mask:255.255.248.0
          inet6 addr: fec0::f101:215:60ff:fea3:e3b0/64 Scope:Site
          inet6 addr: fec0:0:a10:4000:215:60ff:fea3:e3b0/64 Scope:Site
          inet6 addr: fe80::215:60ff:fea3:e3b0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1731528 errors:0 dropped:0 overruns:0 frame:0
          TX packets:268590 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:625733068 (596.7 MiB)  TX bytes:39208078 (37.3 MiB)
          Interrupt:16 Memory:f8000000-f8012800

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:748487 errors:0 dropped:0 overruns:0 frame:0
          TX packets:748487 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:245941200 (234.5 MiB)  TX bytes:245941200 (234.5 MiB)

virbr0    Link encap:Ethernet  HWaddr 52:54:00:82:A5:ED
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:110 errors:0 dropped:0 overruns:0 frame:0
          TX packets:52 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8039 (7.8 KiB)  TX bytes:5758 (5.6 KiB)

# chkconfig --list | grep -i network
network         0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@hp-ml370g5-01 collect]#
Yikes, iptables is off. Retrying.
It was indeed my iptables; my evil twin must have turned it off, I guess. :(
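For the record, one way to put the rules back once iptables is running again (a sketch assuming the stock "default" network; libvirt only inserts its NAT/forward rules when the network starts, so the network needs a bounce after iptables is restarted):

# service iptables start
# virsh net-destroy default
# virsh net-start default
# iptables -t nat -L POSTROUTING -n | grep 192.168.122

Restarting libvirtd should have the same effect of re-adding the rules for any autostarted networks.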
Removing from tracker.
Closing permanently.