Created attachment 1540935 [details]
screenshot

Description of problem:
The RDP Address for a Windows VM shows the internal IP of the node the VM is running on, and the downloaded .rdp file also contains the internal IP. The service expose and the RDP functionality itself work fine when connecting to the correct IP/hostname manually.

Version-Release number of selected component (if applicable):
kubevirt-web-ui:v1.4.0

How reproducible:

Steps to Reproduce:
1. On a Windows VM, go to the Desktop Viewer tab under Console
2. Get the .rdp file from "Launch Remote Desktop"

Actual results:
Shows the internal IP instead of the public IP/hostname of the node the VM is on

Expected results:
Should show an IP/hostname reachable by the user

Additional info:
Forgot to mention: the VM had only a pod network IP.
Can you please attach the YAML of the service associated with the VM?
I just removed that cluster, but the expose was done using the virtctl command shown in the UI for RDP. Also, the service didn't have any reference to this internal IP; it had Endpoints: 10.130.0.27:3389. Will share it from a new cluster tomorrow.
OK, thank you, I would like to see how it compares. Moving the needinfo back to mark that I'm waiting for it.
On a new cluster built today, I still see the same issue:
```
[cloud-user@cnv-executor-vatsal-master1 ~]$ oc get svc -o yaml win12-local-rdp
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-03-06T08:09:12Z
  name: win12-local-rdp
  namespace: default
  resourceVersion: "249068"
  selfLink: /api/v1/namespaces/default/services/win12-local-rdp
  uid: 2089754e-3fe7-11e9-b5a2-fa163efefb49
spec:
  clusterIP: 172.30.103.169
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30413
    port: 31313
    protocol: TCP
    targetPort: 3389
  selector:
    vm.cnv.io/name: win12-local
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
```
RDP info shown in the UI:
```
RDP Address: 172.16.0.25
RDP Port: 30413
```
```
[cloud-user@cnv-executor-vatsal-master1 ~]$ oc get vmi -o yaml
apiVersion: v1
items:
- apiVersion: kubevirt.io/v1alpha3
  kind: VirtualMachineInstance
  metadata:
    creationTimestamp: 2019-03-06T08:08:05Z
    finalizers:
    - foregroundDeleteVirtualMachine
    generateName: win12-local
    generation: 1
    labels:
      kubevirt.io/nodeName: cnv-executor-vatsal-node2.example.com
      vm.cnv.io/name: win12-local
    name: win12-local
    namespace: default
    ownerReferences:
    - apiVersion: kubevirt.io/v1alpha3
      blockOwnerDeletion: true
      controller: true
      kind: VirtualMachine
      name: win12-local
      uid: 8dbf948d-3fe6-11e9-b5a2-fa163efefb49
    resourceVersion: "248915"
    selfLink: /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/win12-local
    uid: f8a77386-3fe6-11e9-b5a2-fa163efefb49
  spec:
    domain:
      clock:
        timer:
          hpet:
            present: false
          hyperv:
            present: true
          pit:
            present: true
            tickPolicy: delay
          rtc:
            present: true
            tickPolicy: catchup
        utc: {}
      cpu:
        cores: 1
        sockets: 1
        threads: 1
      devices:
        disks:
        - bootOrder: 1
          disk:
            bus: sata
          name: rootdisk
        interfaces:
        - bridge: {}
          model: e1000e
          name: nic0
      features:
        acpi:
          enabled: true
        apic:
          enabled: true
        hyperv:
          relaxed:
            enabled: true
          spinlocks:
            enabled: true
            spinlocks: 8191
          vapic:
            enabled: true
      firmware:
        uuid: 9260c5d4-00e4-53c6-83e0-dc2757d4e709
      machine:
        type: q35
      resources:
        requests:
          memory: 4G
    networks:
    - name: nic0
      pod: {}
    terminationGracePeriodSeconds: 0
    volumes:
    - dataVolume:
        name: rootdisk-win12-local
      name: rootdisk
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: null
      status: "True"
      type: LiveMigratable
    - lastProbeTime: null
      lastTransitionTime: 2019-03-06T08:08:28Z
      status: "True"
      type: Ready
    interfaces:
    - ipAddress: 10.129.0.25
      mac: 0a:58:0a:81:00:19
      name: nic0
    migrationMethod: BlockMigration
    nodeName: cnv-executor-vatsal-node2.example.com
    phase: Running
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```
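To make the observed behaviour concrete, here is a sketch (not the actual kubevirt-web-ui code; names and the address list are hypothetical) of how picking the node's InternalIP from `status.addresses` produces the 172.16.0.25 shown above:

```python
# Sketch only -- not the kubevirt-web-ui implementation. Illustrates how
# selecting the node's InternalIP yields the address seen in the UI.
node_addresses = [
    {"type": "InternalIP", "address": "172.16.0.25"},
    {"type": "Hostname", "address": "cnv-executor-vatsal-node2.example.com"},
]

def current_rdp_address(addresses):
    """Hypothetical helper mirroring the observed behaviour:
    return the first InternalIP of the node the VM runs on."""
    for addr in addresses:
        if addr["type"] == "InternalIP":
            return addr["address"]
    return None

# Combined with the service's nodePort (30413), this matches the UI output.
print(current_rdp_address(node_addresses))  # 172.16.0.25
```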
We are showing the IP of the node that the VM is running on, which is what we have always done. @fabian, do you think it would be better to show the service's clusterIP or the node IP?
Although we are able to connect to the shown IP (which is the node's internal IP, not a floating IP or hostname), we don't expect users to be on the cluster network to RDP to the VM, right? Even if it's just the pod network being used.
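One possible direction (a hypothetical sketch, not an agreed fix) would be to prefer a node address reachable from outside the cluster and fall back to the internal IP only as a last resort:

```python
# Hypothetical fix sketch: prefer node addresses reachable from outside
# the cluster when building the RDP endpoint.
ADDRESS_PREFERENCE = ["ExternalIP", "Hostname", "InternalIP"]

def pick_rdp_address(addresses):
    by_type = {a["type"]: a["address"] for a in addresses}
    for kind in ADDRESS_PREFERENCE:
        if kind in by_type:
            return by_type[kind]
    return None

# On a cluster where only an InternalIP exists (like this one),
# behaviour would be unchanged; with an ExternalIP it would win.
print(pick_rdp_address([{"type": "InternalIP", "address": "172.16.0.25"}]))
```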
Redirecting to Dan.
I'm not sure at all that we want to support RDP over the pod network. Such support would require us to expose a public IP (or possibly only a public IP:port) per running VM. I know that we do have such plans for VNC, though. Either way, this is not a bug but a large missing feature. I hope Franck can help decide when and whether we want to tackle it.
We have filed https://jira.coreos.com/browse/KNIP-358 ("Provide RDP access to users external to the cluster") to track this RFE. I believe we should close this one.