Description of problem:
ISO domain located on the engine cannot be attached to the datacenter because it cannot be mounted on the host.

Version-Release number of selected component (if applicable):
ovirt-engine-3.5.0-0.0.master.20140612090854.el6.noarch

How reproducible:

Steps to Reproduce:
1. say yes to the engine-local NFS ISO domain during oVirt installation
2. add a host and a master domain (3.4 compatible)
3. attempt to attach the ISO domain to the DC

Actual results:
Failed to attach Storage Domain ISO_DOMAIN to Data Center 34. (User: admin)

Expected results:

Additional info:
/etc/exports is set like this in 3.5:
<path> <engine hostname>(rw)

in 3.4 it's set like this:
<path> 0.0.0.0/0.0.0.0(rw)

Rewriting the exports file the way it used to be set in 3.4 and restarting the NFS service fixes the problem.
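The reporter's workaround can be sketched as shell. The path is the oVirt default; the line is staged into a temp file rather than written to /etc/exports, so the commands are safe to review first. Note that the 3.4-style ACL exports to everyone, which is the security trade-off raised in the reply.

```shell
# Sketch of the reporter's workaround: restore the 3.4-style open ACL.
# Staged into a temp file (not /etc/exports) for safe review.
# NOTE: 0.0.0.0/0.0.0.0(rw) exports to every client -- a security trade-off.
staged=$(mktemp)
printf '%s\n' '/var/lib/exports/iso 0.0.0.0/0.0.0.0(rw)' > "$staged"
cat "$staged"
# To apply for real, as root on the engine:
#   cp "$staged" /etc/exports
#   service nfs restart    # el6; on systemd hosts: systemctl restart nfs-server
```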
(In reply to Petr Beňas from comment #0)
> Description of problem:
> ISO domain located on engine cannot be attached to datacenter because it
> cannot be mounted on host.

This is _by_default_ and by design. When choosing to configure an ISO domain during setup, the user is asked for the ACL to use for this export, and the default is indeed to export to the engine host only.

> Additional info:
> /etc/exports is set like this in 3.5:
> <path> <engine hostname>(rw)
>
> in 3.4 it's set like this:
> <path> 0.0.0.0/0.0.0.0(rw)

Which is a security risk and thus was changed.

> Rewriting the exports file the way it used to be set at 3.4 and restarting
> NFS service fixes the problem.

Closing for now. Please reopen if needed. Thanks!
I encountered issues with this too using a hosted-engine deployment with 3.5. In my case my engine was logging:

rpc.mountd[26600]: refused mount request from 172.16.7.37 for /var/lib/exports/iso (/var/lib/exports/iso): unmatched host

To work around the issue, I had to modify my /etc/exports.d/ovirt-engine-iso-domain.exports on the engine from this:

exporting enceladus-f20.doubledog.org:/var/lib/exports/iso

... to this:

exporting 172.16.7.37:/var/lib/exports/iso
exporting enceladus-f20.doubledog.org:/var/lib/exports/iso

At this point I don't know why the the IP address has listed also. I don't have any issues resolving the FQDN on the engine (or the host). Furthermore I could successfully do as root prior to modifying the exports:

mkdir /tmp/m
mount enceladus-f20.doubledog.org:/var/lib/exports/iso /tmp/m

So while I understand the security risk, I do believe there is a bug here still.
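The workaround from this comment, sketched as shell. The IP and FQDN are the examples from this thread; the entries are staged into a temp file, with the real target being /etc/exports.d/ovirt-engine-iso-domain.exports on the engine.

```shell
# Sketch of the workaround: export the ISO path to both the host's IP
# and the engine's FQDN. Staged into a temp file for safe review;
# real target: /etc/exports.d/ovirt-engine-iso-domain.exports
iso_path=/var/lib/exports/iso
host_ip=172.16.7.37                         # example from this thread
engine_fqdn=enceladus-f20.doubledog.org     # example from this thread
staged=$(mktemp)
{
  printf '%s %s(rw)\n' "$iso_path" "$host_ip"
  printf '%s %s(rw)\n' "$iso_path" "$engine_fqdn"
} > "$staged"
cat "$staged"
# Apply as root on the engine, then re-export with:  exportfs -ra
```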
(In reply to John Florian from comment #3)
> To work around the issue, I had to modify my
> /etc/exports.d/ovirt-engine-iso-domain.exports on the engine from this:
>
> exporting enceladus-f20.doubledog.org:/var/lib/exports/iso
>
> ... to this:
>
> exporting 172.16.7.37:/var/lib/exports/iso
> exporting enceladus-f20.doubledog.org:/var/lib/exports/iso
>
> At this point I don't know why the the IP address has listed also. I don't
> have any issues resolving the FQDN on the engine (or the host).

The content written there is composed by concatenating two inputs from you while running engine-setup - the ISO path and the ACL. The ISO path defaults to /var/lib/exports/iso, and the ACL defaults to FQDN(rw), where FQDN is the name you entered when asked about the FQDN of the machine. You can check the setup logs (/var/log/ovirt-engine/setup/*) to see what you input. If you see changes in the file since then, they were most likely done by you manually.

> Furthermore I could successfully do as root prior to modifying the exports:
>
> mkdir /tmp/m
> mount enceladus-f20.doubledog.org:/var/lib/exports/iso /tmp/m
>
> So while I understand the security risk, I do believe there is a bug here
> still.

If you refer to the issue above (name vs. IP address), please attach the relevant setup logs. If you refer to the general issue, explain how you expect engine-setup to behave, in a way that will be:
1. Not less secure than now
2. More comfortable for users

Thanks!
Created attachment 980493 [details] engine-setup log
(In reply to Yedidyah Bar David from comment #4)
> The content written there is composed by concatenating two inputs from you
> while running engine-setup - the ISO path and the ACL.

Err, sorry, I see I didn't review what I wrote closely enough:
s/why the the IP address has listed also/why the the IP address has TO BE listed also/

To clarify: I know for certain that I took the defaults and used the FQDN, and that I later had to manually enter the IP address variant to make the domain attachable. Prior to making the edit, I was able to locally mount that export to a /tmp/m directory -- I just was unable to attach it without the IP address variant.

> If you refer to the issue above (name vs ip address), please attach relevant
> setup logs. If you refer to the general issue, explain how you expect
> engine-setup to behave, in a way that will be:
> 1. Not less secure than now
> 2. More comfortable for users

I understand why it should NOT be exported as "<path> 0.0.0.0/0.0.0.0(rw)". What I don't understand is why "<path> FQDN(rw)" cannot be attached, yet "<path> IP_OF_FQDN(rw)" can. I have attached my engine-setup log in case that helps.
(In reply to John Florian from comment #6)
> To clarify:
> I know for certain that I took the defaults and used the FQDN and that I
> later had to manually enter the IP address variant to make the domain
> attachable. Prior to making the edit, I was able to locally mount that
> export to a /tmp/m directory -- I just was unable to attach it without the
> IP address variant.

OK, now at least the question is clear :-)

Where did you manage to mount it? On the engine machine? To attach a domain (including the ISO domain), one of the hosts (IIRC the SPM) has to mount it, not the engine. Perhaps the name was not resolvable on the host?
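The check suggested above can be sketched as shell: since the host (the SPM), not the engine, performs the mount during attach, resolution and mounting should be tested from the host. The hostname and path are the examples from this thread; the mount command is built as a string so the block itself performs no network access.

```shell
# Diagnostic sketch: run the commented commands on the HOST, because the
# SPM host is what mounts the ISO domain when you attach it.
engine_fqdn=enceladus-f20.doubledog.org     # example FQDN from this thread
mount_cmd="mount $engine_fqdn:/var/lib/exports/iso /tmp/m"
echo "$mount_cmd"
# On the host, as root:
#   getent hosts "$engine_fqdn"   # does the engine FQDN resolve there?
#   showmount -e "$engine_fqdn"   # what does the engine export, and to whom?
#   mkdir -p /tmp/m && $mount_cmd
```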
I mounted it on the engine "enceladus-f20" (172.16.7.148). I don't know what SPM means. The IP that I added was for that of enceladus-f20 and yes, it does resolve fine everywhere. The only other system I have related to oVirt is "oberon" (172.16.7.37) and it cannot mount this export, nor should it be able to according to the ACL.

When I was trying to attach the ISO domain, I wasn't really sure who was trying to mount what, but when I saw the engine log:

rpc.mountd[26600]: refused mount request from 172.16.7.37 for /var/lib/exports/iso (/var/lib/exports/iso): unmatched host

... it became quite obvious what was being attempted and what needed to be done.

I believe the real rub here, though I've not verified this, is that the Attach ISO Domain feature gets the IP address of the engine and tries to mount using that, whereas if it had simply tried mounting with the FQDN of the engine instead, it would have worked. In other words, there's a bit of disagreement over which is best:
1. engine-setup prompts for and defaults to the FQDN of the engine
2. Attach ISO Domain ignores the engine FQDN and uses the IP of the engine instead

If both used the IP or both used the FQDN, I *think* the process would be smoother.
Having played with this more, I now clearly see my confusion. Taking the defaults through engine-setup provides one ACL entry, for the FQDN of the engine. That one is most definitely needed, but you must also add another for either the FQDN or the IP address of the host to be able to attach the ISO_DOMAIN. This entry is *not* provided by the setup scripts. The process was probably very smooth when the export ACL was wide open (i.e., 0.0.0.0/0.0.0.0(rw)), but now there's a bump in the road for oVirt/RHEV newbies due to the better-secured ACL.

If engine-setup were to create the export like:

/var/lib/exports/iso HOST_FQDN(rw)
/var/lib/exports/iso ENGINE_FQDN(rw)

... instead of what it does now (using ovirt-engine-setup-base-3.5.0.1-1.fc20.noarch):

/var/lib/exports/iso ENGINE_FQDN(rw)

... things would be smooth for the noob as well as secure.
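The proposal above can be sketched as a small shell loop: emit one exports(5) line per client that must mount the ISO domain (engine plus each host), instead of the single engine-only line. The names below are placeholders, not real machines from this deployment.

```shell
# Sketch of the proposed exports content: one (rw) entry per client.
# host.example.com / engine.example.com are illustrative placeholders.
iso_path=/var/lib/exports/iso
out=''
for client in host.example.com engine.example.com; do
  out="$out$iso_path $client(rw)
"
done
printf '%s' "$out"
```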
Thanks for the clarification!

Not sure we can do much more. In effect, we already ask exactly that - to input the list of clients that need access. Not sure we can change the default to anything more useful.

Perhaps we should not accept any default, but instead suggest the current default (plus some other options?) as mere text and force the user to input something (thus needing to think and provide a meaningful answer, plus having to read enough to know how to fix/amend it later).

A co-worker (Alon Bar-Lev) suggested restoring the default to 0.0.0.0(rw) but inverting the default for "Configure an NFS share on this server to be used as an ISO Domain?" from Yes to No.
(In reply to Yedidyah Bar David from comment #10)
> Not sure we can do much more. In effect, we already ask exactly that - to
> input the list of clients that need access. Not sure we can change the
> default to anything more useful.

I didn't realize it was asking for a *list*.

> Perhaps we should not accept any default but instead suggest the current
> default (plus some other options?) as mere text and force the user to input
> something (thus needing to think and provide a meaningful answer, plus
> having to read enough to know how to fix/amend later).

This is a very astute observation. The scripts may make it too easy, and so many of the "up and running" sources of documentation (which I did find helpful) generally suggest hitting Enter most of the way through and simply taking the defaults. Maybe this particular prompt wasn't clear enough that it accepts a list. If the engine-setup script is aware of the host's name, I think it would be great to either default to having both entries or at least suggest both. I am too new to oVirt to know if there's ever a case where you wouldn't want both ACL entries.

> A co-worker (Alon Bar-Lev) suggested to restore the default to be
> 0.0.0.0(rw) but inverse the default for "Configure an NFS share on this
> server to be used as an ISO Domain?" from Yes to No.

Interesting thought, but that seems counter-productive - though again, that may be just my newness here. I'm generally of the opinion that the easier this all is, the more it will get used, and that will just lead to more of the polish and refinement that always happens with great FOSS projects.

A bit off-topic, but another area I got burned on was the two distinct stages where 1) there's an intermission for the user to install the engine's OS and 2) there's an intermission for the user to run engine-setup. These two milestones look so similar that in my first journey through the process I didn't realize they were distinct steps, partly because I didn't understand the most general idea of the agenda of the hosted-engine setup process. Whether this is because it would benefit from some agenda proclamations, or those are there but it's too easy to not read everything that's there (yet not so easy that you can completely ignore it), I don't know. I can tell you that I *did* have the benefit of using an already-set-up oVirt environment at the office and the coworker who did all of that work, with whom I could consult. Granted, that setup doesn't use the hosted-engine feature, but I feel I still had an advantage compared to most. I must admit it's amazing how much nasty config work you guys made go away; there's a *lot* going on behind the scenes, I know.
(In reply to John Florian from comment #11)
> I didn't realize it was asking for a *list*.

Sorry for that. The text was about an ACL - an access control list.

> This is a very astute observation. The scripts may make it too easy and so
> many of the "up and running" sources of documentation (which I did find
> helpful) generally suggest to hit Enter most of the way through and simply
> take the defaults.

Now pushed http://gerrit.ovirt.org/37062 , taking the bug and moving to 3.6. You are welcome to comment...

> Maybe this particular prompt wasn't clear enough that it accepts a list. If
> the engine-setup script is aware of the host's name, I think it would be
> great to either default to having both entries or at least suggesting both.
> I am too new to oVirt to know if there's ever a case where you wouldn't want
> both ACL entries.

Currently, when you run engine-setup inside the engine VM, nothing in that VM (including engine-setup) knows that it's going to be used for hosted-engine, the host name, or anything related. It's just a normal setup. Only later, after you reply '1' (engine is set up), does the hosted-engine deploy script connect to the engine and configure a bit of stuff in it for hosted-engine.

> A bit off-topic, but another area I got burned on was the two distinct
> stages where 1) there's a intermission for the user to install the engine's
> OS and 2) there's an intermission for the user to run engine-setup. These
> two milestones look so similar that in my first journey through the process
> I didn't realize they were distinct steps, partly because I didn't
> understand the most general idea of the agenda of the hosted engine set up
> process. Whether this is because it would benefit from some agenda
> proclamations or those are there but again it's too easy to not read
> everything that's there but not so easy that you can completely ignore it, I
> don't know.

Bugs/comments/patches are always welcome!

> I can tell you that I *did* have the benefit of using an already setup oVirt
> environment at the office and the coworker who did all of that work with
> whom I could consult. Granted that setup doesn't use the hosted engine
> feature, but I feel I still had an advantage compared to most. I must admit
> it's amazing how much nasty config work you guys did make go away; there's a
> *lot* going on behind the scenes, I know.

Thanks :-)
tl;dr: http://gerrit.ovirt.org/37062 makes setup require some input from the user instead of providing a default.
(In reply to Yedidyah Bar David from comment #12)
> Sorry for that. The text was about an ACL - an access control list.

FACE PALM. Have you ever seen an acronym so much that you stop thinking about what it actually means? And to think I get annoyed when people refer to their PIN numbers. :-)

> Now pushed http://gerrit.ovirt.org/37062 , taking the bug and moving to 3.6.
> You are welcome to comment...

Okay, I will. That looks excellent! Should it also have any mention of "At a minimum, you likely want to grant access to your engine *and* any other oVirt hosts that are to attach the ISO domain."??? I realize there's a UX threshold here where, if it gets too verbose, folks won't read it, and if it's too terse, folks won't understand. I think the link to the FAQ is perfect in that sense.

> Bugs/comments/patches are always welcome!

Oh, I'm getting there. ;-) I just need to get my old virt-manager setup migrated onto oVirt so I can get back to doing normal stuff. I keep restarting my oVirt deployment. I managed to really hose things up trying to set up bonding after the hosted engine was deployed. No luck when I have only one host and the engine is using the interface that needs bonding. Oh, and there's my bizarre hardware issue where an Intel NIC's MAC gets permanently stuck on the Gigabyte MOBO's NIC and screws up F20's biosdevname feature. A hardware reset won't undo it; only a power cycle. Eeks!
(In reply to John Florian from comment #14)
> FACE PALM. You ever seen an acronym so much you stop thinking about what it
> actually means? And to think I get annoyed when people refer to their PIN
> numbers. :-)

:-)

> Okay, I will. That looks excellent! Should it also have any mention of "At
> a minimum, you likely want to grant access to your engine *and* any other
> oVirt hosts that are to attach the ISO domain."???

IIRC the engine does not need direct access, unless it's an allinone setup. You can also register/login and then comment directly in gerrit.

> I realize there's a UX threshold here where, if it gets too verbose, folks
> won't read it and if it's too terse folks won't understand. I think the
> link to the FAQ is perfect in that sense.

Indeed. I wrote there something longer, then trimmed it down before pushing.

> I just need to get my old virt-manager setup migrated onto oVirt so I can
> get back to doing normal stuff. I keep restarting my oVirt deployment. I
> managed to really hose things up trying to setup bonding after the hosted
> engine was deployed. No luck when I have only one host and the engine is
> using the interface that needs bonding.

Note that for testing you can also use nested KVM. Works for me quite well.

> Oh and there's my bizarre hardware issue where an Intel NIC's MAC gets
> permanently stuck on a the Gigabyte MOBO's NIC and screws up F20's
> biosdevname feature. Hardware reset won't undo it; only a power cycle. Eeks!

Are you sure that's a hardware issue and not a firmware/driver/software bug?
Automated message: can you please update doctext or set it as not required?
OK, in 3.6.x engine-setup explicitly asks you for the "acl" for the local ISO domain.
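For illustration, a hypothetical answer to that ACL prompt, built as a plain string purely to show the shape (exports(5)-style client specs, one per machine that needs to mount the ISO domain); the actual prompt text and accepted syntax in 3.6.x may differ, and the names are placeholders.

```shell
# Hypothetical ACL answer shape: engine plus each host that will attach
# the ISO domain. engine.example.com / host1.example.com are placeholders.
acl='engine.example.com(rw) host1.example.com(rw)'
echo "$acl"
```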
oVirt 3.6.0 has been released on November 4th, 2015 and should fix this issue. If problems still persist, please open a new BZ and reference this one.