Description
-----------

libguestfs currently does not detect OSTree QCOW2 disk images.

Version
-------

$ rpm -q libguestfs
libguestfs-1.27.12-2.fc21.x86_64

How reproducible: Consistently.

Steps to Reproduce
------------------

$ wget -c \
  http://rpm-ostree.cloud.fedoraproject.org/project-atomic/images/f20/qemu/20140414.1.qcow2.xz
$ xz -d 20140414.1.qcow2.xz
$ guestfish --ro -i -a /home/kashyapc/20140414.1.qcow2

Actual results
--------------

libguestfs cannot detect the OSTree image:

$ guestfish --ro -i -a /home/kashyapc/20140414.1.qcow2
guestfish: no operating system was found on this disk

If using guestfish '-i' option, remove this option and instead
use the commands 'run' followed by 'list-filesystems'.
You can then mount filesystems you want by hand using the
'mount' or 'mount-ro' command.

If using guestmount '-i', remove this option and choose the
filesystem(s) you want to see by manually adding '-m' option(s).
Use 'virt-filesystems' to see what filesystems are available.

If using other virt tools, this disk image won't work with these
tools.  Use the guestfish equivalent commands
(see the virt tool manual page).

Expected results
----------------

libguestfs should detect OSTree images.

Additional info
---------------

Colin Walters suggested a workaround on IRC, using 'guestmount':

$ guestmount -a disk.qcow2 -m /dev/sda3:/ -m /dev/sda1:/boot
You can use guestfish as well, either using the same -m options as
Colin suggested, or using list-filesystems + mount:

$ guestfish --ro -a 20140414.1.qcow2

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

><fs> run
><fs> list-filesystems
/dev/sda1: ext4
/dev/sda2: swap
/dev/sda3: xfs
><fs> mount /dev/sda3 /
><fs> mount /dev/sda1 /boot
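The same manual mount can be scripted with the libguestfs Python
bindings.  A minimal sketch, assuming the layout shown above
(/dev/sda3 as root, /dev/sda1 as /boot) -- check list_filesystems()
first for your image:

#!/usr/bin/env python
# Sketch: mount an OSTree disk image by hand with the libguestfs
# Python bindings, bypassing the '-i' inspection that fails here.
import guestfs

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts("20140414.1.qcow2", readonly=1)
g.launch()

print(g.list_filesystems())     # e.g. {'/dev/sda1': 'ext4', ...}

g.mount_ro("/dev/sda3", "/")
g.mount_ro("/dev/sda1", "/boot")

# The OSTree deployment roots live under /ostree/deploy.
print(g.ls("/ostree/deploy"))

g.shutdown()
g.close()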
Proposing as an F21 Alpha blocker as it's preventing the generation of
the docker host cloud image.

http://koji.fedoraproject.org/koji/taskinfo?taskID=7437237 is an
example build where it fails due to libguestfs failing to detect the OS.
<imcleod> dgilmore: Have we had a successful atomic build?
<imcleod> dgilmore: My guess is that this is down to us depending on libguestfs detection to correctly identify the disk layout and OS type.  That detection code likely does not yet know about the native-install Atomic stuff.
<dgilmore> hey imcleod
<dgilmore> http://koji.fedoraproject.org/koji/taskinfo?taskID=7436675
<dgilmore> imcleod: not had a successful build
<dgilmore> this is the first that didn't puke installing
<dgilmore> imcleod: no problems on the dogs. been there myself
<dgilmore> imcleod: we do have to use autopartitioning otherwise anaconda pukes
<imcleod> Hrm.  So that specific task failed with 1000 seconds of no disk activity.
<imcleod> And there is a screenshot which does seem to show Anaconda in distress.
<dgilmore> let me make sure that was the right task
<dgilmore> http://koji.fedoraproject.org/koji/taskinfo?taskID=7437237
<dgilmore> imcleod: was the wrong task sorry
<dgilmore> Exception encountered in _build_image_from_template thread
<dgilmore> Unable to find an OS on disk image (/var/tmp/koji/tasks/7237/7437237/output_image/5f38c6a6-b61b-4fb5-b968-05fce415233b.body)
<dgilmore> Traceback (most recent call last):
<dgilmore>   File "/usr/lib/python2.7/site-packages/imgfac/Builder.py", line 132, in _build_image_from_template
<dgilmore>     self.os_plugin.create_base_image(self, template, parameters)
<dgilmore>   File "/usr/lib/python2.7/site-packages/imagefactory_plugins/TinMan/TinMan.py", line 344, in create_base_image
<dgilmore>     gfs = launch_inspect_and_mount(self.image, readonly=True)
<dgilmore>   File "/usr/lib/python2.7/site-packages/imgfac/FactoryUtils.py", line 25, in launch_inspect_and_mount
<dgilmore>     return inspect_and_mount(g, diskfile=diskfile)
<dgilmore>   File "/usr/lib/python2.7/site-packages/imgfac/FactoryUtils.py", line 32, in inspect_and_mount
<dgilmore>     raise Exception("Unable to find an OS on disk image (%s)" % (diskfile))
<dgilmore> Exception: Unable to find an OS on disk image (/var/tmp/koji/tasks/7237/7437237/output_image/5f38c6a6-b61b-4fb5-b968-05fce415233b.body)
<dgilmore> ABORT called in TinMan plugin
<imcleod> Roger.  OK.  So that's almost certainly the issue.  The absolute fastest way to fix this is to add a parameter to the koji call into factory to disable the generation of ICICLE.
<imcleod> You will lose the RPM list for now, but the image will be created.
<dgilmore> okay
<imcleod> The proper fix, to my mind, is to teach libguestfs what an Atomic image looks like so that the layout and OS version detection works again.
<dgilmore> not sure how it detects the os but afaik the os-release file etc should all be in place
<imcleod> I will create one locally and poke at it a bit to see how we might do that, then mail Rich Jones some details.
<imcleod> Yeah.  Let me see if I can find a browsable source for the detection code.
<dgilmore> i've just pinged rwmjones about it in #fedora-devel
<dgilmore> https://bugzilla.redhat.com/show_bug.cgi?id=1102241
* dneary (~dneary@Maemo/community/docmaster/dneary) has joined #fedora-cloud
<imcleod> So, actually, now that I think about it a bit more, even if we had the OS mounted up correctly in libguestfs, it's likely to break our assumptions about how to generate an RPM list.
<imcleod> Because, really, an Atomic image can have multiple RPM lists, right?
<imcleod> Even if it is initially installed with just one bootable Atomic image.
<dgilmore> imcleod: and proposed the bug as an alpha blocker
<dgilmore> i believe not
<dgilmore> it will only ever have the one for the active tree
<dgilmore> you can still run a rpm -qa
<walters> dgilmore, the os-release file exists...but in /ostree/deploy/fedora-atomic/deploy/<checksum>/etc/os-release
<walters> not in /etc/os-release
<walters> dgilmore, right...initially the image only has one tree, but after you do an upgrade, you will have two
<dgilmore> walters: but rpm only sees one right?
<walters> if we can just skip ICICLE for now that seems sane to me
<walters> dgilmore, right - rpm-ostree knows how to do a diff though
<dgilmore> walters: I really do not want to do that as it's an all or nothing operation
<dgilmore> meaning all cloud images would have it turned off
<walters> oh, i see
<imcleod> walters: My concern is that there's not really going to be a sane way to mount up the filesystem in libguestfs in a way that mimics what a user would see in a booted ostree.
<dgilmore> i can see doing it as a short term thing to get Alpha done, but it must be fixed for beta
<walters> imcleod, i have several libguestfs-based tools that effectively know about ostree
<walters> but yeah, there's no shortcut to the tools gaining understanding
<imcleod> walters: OK.  cool.  What we end up doing is essentially a guestfs.sh("rpm -qa")
<walters> note with this of course the rpm manifest is fixed in the tree - anaconda isn't doing any actual processing
<walters> given an ostree hash you can always look in the origin tree to find the package set too
<dgilmore> walters: sure, but to koji it's just an anaconda install.  it is clueless about it being a traditional install or an ostree install
<walters> right
<dgilmore> we need to be able to get the rpm list of the guest
<imcleod> dgilmore: Do you want the one line koji patch to disable ICICLE for now?  This is going to be a big enough fix (I think) that I'd prefer to get it off the Alpha blocker list as it is a build tooling issue, not a problem with the actual package content.
<dgilmore> imcleod: sure we can try.  it's far from ideal
<imcleod> dgilmore: Agreed.
<dgilmore> hopefully we can wake up to rwmjones having magically fixed libguestfs
<imcleod> dgilmore: But I'd argue at this stage it is more important to get the images out than to have the precise manifest available.
<walters> dgilmore, note the kickstart does this as well
<walters> it's in the logs
<dgilmore> walters: right, koji is supposed to actually check the rpms are in the tag you build against
<dgilmore> and fail the build if they are not
<walters> ok
<dgilmore> we kinda work around it right now as there are a few issues preventing us from doing real builds
<dgilmore> we do scratch builds with slightly less checks
<dgilmore> so it's a band-aid at best
<imcleod> walters: Pulling the Anaconda manifest from a known location on the installed filesystem might well be a reasonable alternative here.
* danielbruno has quit (Ping timeout: 250 seconds)
<walters> imcleod, not sure what you mean by anaconda manifest
<walters> does it put the rpm -qa in the logs somewhere?
<dgilmore> walters: i suspect running rpm -qa in %post and putting it somewhere to fetch
<walters> rpm -qa is already in the %post
<dgilmore> but really it should be something in anaconda itself if we want to rely on it
<imcleod> walters: Basically what you said.  The original Oz ICICLE generation is based on the assumption that we can boot the resulting guest, ssh into it and do an rpm -qa.
<imcleod> walters: For things like docker, where the resulting image may not boot in the traditional sense, we added the ability to run the rpm command via libguestfs' "sh" capability.
<walters> eww, ssh at build time is a really bad idea
<walters> that would force ssh keygen at a time when you might not have entropy
<imcleod> walters: Not build time.
<walters> and bake keys into the image, and...
<imcleod> walters: We boot after the build, using a throwaway copy of the image.
<imcleod> walters: I call it "safe ICICLE".
<imcleod> walters: Yes.  baked-in SSH details are bad, unquestionably.
<walters> ok, copy seems fine
<walters> so the code in question here is oz/oz/RedHat.py:do_icicle() right?
<imcleod> walters: Yes, but for the absolute "offline" ICICLE bits, not quite.
<imcleod> walters: That involves, I'm sad to say, a bit of a hack inside of Factory at the moment.
<imcleod> walters: But let me ask this.  Is there a way to mount up an Atomic image, using purely libguestfs calls, such that a guestfs shell command of "rpm -qa" will give the list of installed packages?
<imcleod> walters: From what I recall of the implementation details, I'm not sure.  IIRC you do some interesting boot-time things to put the runtime tree in place.
<imcleod> walters: Apologies.  The last deep dive I did was the preso you did just before devconf.
<walters> imcleod, yes, but you'd have to chroot first
<imcleod> walters: Yeah.  OK.  So that's going to require at least a bit of Oz tweaking and/or Factory tweaking.
<walters> the other alternative is to use the host rpm
<imcleod> walters: I really feel like that is a step backwards...
<walters> e.g. mount it via guestmount, then rpm --dbpath=/path/to/mount/ostree/deploy/fedora-atomic/blah/usr/share/rpm -qa
<imcleod> walters: Yeah.  That feels like a move back to the somewhat fragile approach that we are trying to get away from.  I appreciate that right now, it's broken.....
<dgilmore> that would involve writing code to detect the filesystems and that it's an ostree image
<dgilmore> which takes us back to just fixing libguestfs
<walters> either way, you need code to detect filesystems
<imcleod> walters: Yes.  I suspect Rich will want that regardless.
<walters> yeah there's actually a step before this
<walters> so libguestfs has this magical "-i" parameter
<dgilmore> walters: right, but the generic reusable tool seems the best place to do so
<walters> which mounts the disks, and looks for /etc/fstab, and mounts storage
<walters> obviously with ostree there's also not an /etc/fstab - it's also in the deployment root
* sgordon (~NaN@redhat/sgordon) has joined #fedora-cloud
<imcleod> walters: Right.  It's an interesting question as to what exactly you are "mounting" when you are dealing with ostree/atomic.
<walters> which version of which OS
<walters> you can easily with ostree have a single physical storage root that contains say two fedora trees, and two rhel trees
<walters> their /etc/fstabs might be the same...or they might be different
<walters> that's the general case, and i guess to really support it libguestfs might have to parse the boot order and pick the default or something
<imcleod> walters: Understood.  It's like a qcow2 image with multiple versions in it.  Which set of RPMs do we want?  Obviously in the Anaconda/koji context we want the set that were installed initially.
<walters> right
<walters> so...
<dgilmore> imcleod: well, at the point we get the icicle it's a raw image
<dgilmore> we convert it to qcow2 and xz compress it in koji
<walters> i guess let's copy/paste this to the bug, see if the libguestfs people jump at it, and if not, consider having someone (well, it'd probably be me) dive into it, or look at other workarounds?
<imcleod> dgilmore: Yeah.  Understood.  Again, my inclination is, absent a quick brilliant guestfs fix, that we work around in koji for now and commit to it being a Beta blocker.
<imcleod> dgilmore: It's a build tooling issue, not something that indicates instability in the delivered bits, yeah?
<imcleod> dgilmore: That's my rationalization anyway.
<dgilmore> i've proposed it as an Alpha blocker right now
<dgilmore> but we can look at it with QA
<imcleod> OK.
<dgilmore> i asked adamw to join us
<dgilmore> see what he says with his QA hat on
<walters> ok if i copy/paste the logs to the bug?
* adamw (~adamw@redhat/adamw) has joined #fedora-cloud
<imcleod> adamw: Nice hat.
<adamw> mmm?
<dgilmore> walters: sure
<imcleod> adamw: (09:20:58 PM) dgilmore: see what he says with his QA hat on
<dgilmore> adamw: gday
<adamw> aha
<adamw> i say dgilmore is right
<imcleod> adamw: I take it back.  I don't like your hat.  Try a different one....
<dgilmore> adamw: so we are thinking as a workaround that we turn off the code to get the icicle
<dgilmore> longer term libguestfs needs to understand ostree trees
<imcleod> and Oz may need to understand them as well
<dgilmore> the icicle gives us the rpms in the image
Is the latest image in: http://rpm-ostree.cloud.fedoraproject.org/project-atomic/images/f20/qemu/ ?
We've gotten factory building F21 atomic candidates as of last night.
You can find one of them here:

https://kojipkgs.fedoraproject.org//work/tasks/7718/7437718/fedora-cloud-atomic-20140821-21_Alpha_TC1.x86_64.qcow2

And some build details here:

http://koji.fedoraproject.org/koji/taskinfo?taskID=7437718
I normally wouldn't consider this a blocker, but given that fixing it
is necessary to fix the tooling issues, which in turn is necessary to
build the docker cloud image, I'm +1 for making it an Alpha blocker.

I'm still not sure which criterion this would be against, other than
"RCs are necessary for the Go/No-Go meeting to decide the fate of
Alpha".  Might be nice to have another mechanism for tooling issues in
the future, though...
I'm not too happy about a package as complex as libguestfs being on
the "critical path" [in the common sense, not in the literal Fedora
CRITPATH sense] of a Fedora release.  Is there no workaround for this
which avoids it being a blocker?

I have taken a look at the image which Ian McLeod kindly supplied
above and have a few questions and observations:

(1) It would really help if ostree was self-identifying (and this
applies to any OS - I'm looking at you, FreeBSD).  i.e. if there
existed a file like '/ostree/release' that we could rely on being
there in any ostree distro.

(2) Is ostree a distro?  Or is the distro Fedora, and ostree a
variation like a spin?  Do we envisage a Debian ostree existing now
or in the future?

(3) Is there an fstab or equivalent?  Basically, how can we know the
relationship between the separate filesystems and how they are
mounted at boot?  Libguestfs has a fairly flexible system that even
copes with Windows drive letters, but it does need something in the
filesystem to tell it this relationship.  Hard-coding is possible but
would not handle future changes.

(4) If there are multiple OS/versions in an ostree, how would the
filesystem look different from the single distro in the example
qcow2 file?

(5) Is extlinux the only option for booting ostree, or is grub2
possible as well (now or in the future)?

---

Tip for those unfamiliar with libguestfs:

$ guestfish --ro -a fedora-cloud-atomic-20140821-21_Alpha_TC1.x86_64.qcow2 -m /dev/fedora/root -m /dev/sda1:/boot

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

><fs> mountpoints
/dev/fedora/root: /
/dev/sda1: /boot
Discussed in multiple Blocker Review meetings.  Rejected as a blocker
because this doesn't directly violate any specific criterion, and a
workaround is already in place as a default for generating the images.
(In reply to Richard W.M. Jones from comment #7)
> (1) It would really help if ostree was self-identifying (and this
> applies to any OS - I'm looking at you, FreeBSD).  i.e. if there
> existed a file like '/ostree/release' that we could rely on being
> there in any ostree distro.

You'll find /ostree only exists if ostree does, I'd say.

> (2) Is ostree a distro?

Most definitely not.

> Or is the distro Fedora, and ostree a
> variation like a spin?

More like ostree is an alternative delivery vehicle for a spin.

> Do we envisage a Debian ostree existing now
> or in the future?

It actually exists now, just not in a public form.

> (3) Is there an fstab or equivalent?

/etc/fstab exists in the *deployment root*.  What OSTree itself does
is use the bootloader configuration to find the installed system.
That's how the atomic upgrades work.

If libguestfs was willing to link to libostree, you could also create
a new OstreeSysroot and use it to find the default deployment.  Inside
there, you'd find /etc/fstab which you could use to mount the other
partitions.

> (4) If there are multiple OS/versions in an ostree, how would the
> filesystem look different from the single distro in the example
> qcow2 file?

Multiple roots show up in /ostree/deploy, which are hardlinked to
/ostree/repo.

> (5) Is extlinux the only option for booting ostree, or is grub2
> possible as well (now or in the future)?

It's possible in the future.
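For reference, a minimal sketch of the libostree route described
above, via the OSTree GObject-introspection bindings from Python.  It
assumes the image's physical root is already mounted (e.g. with
guestmount) at /mnt/guestmount, which is just an example path:

#!/usr/bin/env python
# Sketch: use libostree to enumerate deployment roots, instead of
# hardcoding the /ostree/deploy layout.  Assumes the disk's physical
# root is already mounted (read-only is fine) at /mnt/guestmount.
import gi
gi.require_version('OSTree', '1.0')
from gi.repository import Gio, OSTree

sysroot = OSTree.Sysroot.new(Gio.File.new_for_path("/mnt/guestmount"))
sysroot.load(None)

for deployment in sysroot.get_deployments():
    # e.g. "ostree/deploy/fedora-atomic/deploy/<checksum>.0"
    path = sysroot.get_deployment_directory(deployment).get_path()
    print(deployment.get_osname(), deployment.get_csum(), path)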
(In reply to Colin Walters from comment #9)
> > (2) Is ostree a distro?
>
> Most definitely not.
>
> > Or is the distro Fedora, and ostree a
> > variation like a spin?
>
> More like ostree is an alternative delivery vehicle for a spin.

Hm, although it should be considered as "own distro" anyway, shouldn't
it?  I mean, on the root there's no /usr nor /etc, but just the
/ostree as "actual meat".

> > (4) If there are multiple OS/versions in an ostree, how would the
> > filesystem look different from the single distro in the example
> > qcow2 file?
>
> Multiple roots show up in /ostree/deploy, which are hardlinked to
> /ostree/repo.

In the example image, there is:

/ostree/deploy/project-atomic-controller/deploy/6b6b1362241f1c658b54797b51c2215e32b0978f2201eeb7cd2068276adb9015.0

a) can there be other deploys? (say, /ostree/deploy/other-controller/...)
b) is the trailing .0 denoting some versioning of the UUID it refers to?
c) how should the distro in this deploy be handled?  In the example
   image, it seems like a Fedora, although modified
   (/etc/fedora-release shows "Generic"), with some patched RPM
   (/var/lib/rpm -> /usr/share/rpm).  So like a normal Fedora, or some
   spin (like Fedora/OSTree), or something else?
To be clear, the rpmdb is in the tree; we basically just need to mount
the disk.  Something like:

# guestmount --ro -a atomic-disk.qcow2 -m /dev/atomicos/root:/ -m /dev/sda1:/boot /mnt/guestmount/

Then rpm -qa:

# rpm --dbpath=/mnt/guestmount/ostree/deploy/osname/deploy/c53693547429fe6d4a14d0abd8ebb23920cacec11433749821db47204dbefac9.0/usr/share/rpm -qa
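A minimal sketch automating that workaround from Python: copy the
relocated rpmdb out of the image with libguestfs, then query it with
the host rpm as above.  The glob pattern for the deployment directory
is an assumption based on the layout discussed in this bug:

import subprocess
import tempfile

import guestfs

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts("atomic-disk.qcow2", readonly=1)
g.launch()
g.mount_ro("/dev/atomicos/root", "/")

# Find the (single, on a freshly built image) deployment root.
deploys = g.glob_expand("/ostree/deploy/*/deploy/*.0")
assert len(deploys) == 1, deploys

# Copy the rpmdb directory out of the image, then query it with
# the host's rpm binary.
tmpdir = tempfile.mkdtemp()
g.copy_out(deploys[0].rstrip("/") + "/usr/share/rpm", tmpdir)
g.shutdown()

print(subprocess.check_output(
    ["rpm", "--dbpath=%s/rpm" % tmpdir, "-qa"]).decode())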
(In reply to Pino Toscano from comment #10)
> Hm, although it should be considered as "own distro" anyway, shouldn't
> it?

I don't know what the definition of "distro" means to you.  To me it
refers most strongly to the concept of a singular software repository
for online updates.  In that definition, ostree is just a tool to
compose on the server side what you could compose on the client.

> I mean, on the root there's no /usr nor /etc, but just the /ostree as
> "actual meat".

Inside the deployment directory there is both /usr and /etc.

> a) can there be other deploys? (say, /ostree/deploy/other-controller/...)

Yes on live systems - that's the whole way the atomic upgrades work.

> b) is the trailing .0 denoting some versioning of the UUID it refers to?

Basically yes.

> c) how should the distro in this deploy be handled?

I suspect you just want to discover the default deployment root, and
then pretend that's the physical root.

> In the example image, it seems like a Fedora, although modified
> (/etc/fedora-release shows "Generic"), with some patched RPM
> (/var/lib/rpm -> /usr/share/rpm).  So like a normal Fedora, or some
> spin (like Fedora/OSTree), or something else?

Mostly like a normal Fedora inside the deployment root, yes.
(In reply to Colin Walters from comment #12)
> I don't know what the definition of "distro" means to you.  To me it
> refers most strongly to the concept of a singular software repository
> for online updates.  In that definition, ostree is just a tool to
> compose on the server side what you could compose on the client.

For libguestfs it has a specific technical meaning to do with how we
classify guests into a 3-level hierarchy:

  type             windows, linux, ...
  distro           fedora, debian, ...
  product-variant  Server, Desktop, ...

http://libguestfs.org/guestfs.3.html#guestfs_inspect_get_distro

Obviously not designed with OSTree in mind!  But I think you've
answered the question anyhow.  The distro is simply "fedora" or
"debian" etc.

The interesting question from my point of view is whether every
deployable root maps to a libguestfs inspection root returned by
guestfs_inspect_os().
(In reply to Richard W.M. Jones from comment #13)
> Obviously not designed with OSTree in mind!  But I think you've
> answered the question anyhow.  The distro is simply "fedora" or
> "debian" etc.
>
> The interesting question from my point of view is whether every
> deployable root maps to a libguestfs inspection root returned by
> guestfs_inspect_os().

Yes, this is the key question I have as a consumer of the output as
well.

More specifically, what should we expect the output of
guestfs_inspect_get_mountpoints() to look like for a deployable root?
As I understand it, the block device will be shared between multiple
roots, and we'll need to be passed a directory underneath that device
that is the location of the "real" root filesystem.  So whereas we
used to get something like this:

  "/", "/dev/sda2"

We'll now get something like:

  "/", "/dev/sda2:/ostree/deploy/osname/deploy/c53693547429fe6d4a14d0abd8ebb23920cacec11433749821db47204dbefac9.0/"

Rich or Pino, can you comment?
(In reply to Ian McLeod from comment #15)
> More specifically, what should we expect the output of
> guestfs_inspect_get_mountpoints() to look like for a deployable root?
> [...]
> We'll now get something like:
>
>   "/", "/dev/sda2:/ostree/deploy/osname/deploy/c53693547429fe6d4a14d0abd8ebb23920cacec11433749821db47204dbefac9.0/"

It could be something like that.

What would be a lot simpler to implement would be if we ignored all of
this and simply returned it as a single OS image.  Do you care that
there are potentially multiple OSes in an OStree disk image?

I'm still a bit hazy as to what you want to use this for exactly.
Maybe we can have a chat about this tomorrow afternoon.
(In reply to Richard W.M. Jones from comment #16)
> What would be a lot simpler to implement would be if we ignored all of
> this and simply returned it as a single OS image.  Do you care that
> there are potentially multiple OSes in an OStree disk image?

It'd *almost* be enough for most uses to just find the first
deployment root and chroot to it.  Except that /var lives outside the
root.  This would mean libguestfs scripts wouldn't be able to
read/write /var as the host would see it.  To replicate this, you'd
have to bind mount, like OSTree does:

https://git.gnome.org/browse/ostree/tree/src/switchroot/ostree-prepare-root.c#n186

> I'm still a bit hazy as to what you want to use this for exactly.
> Maybe we can have a chat about this tomorrow afternoon.

I think the goal here is just to extract the RPM database.  However I
would definitely find it useful if libguestfs knew more about ostree
myself.
For reference, this is the consuming code in imagefactory: https://github.com/redhat-imaging/imagefactory/blob/master/imgfac/FactoryUtils.py#L27
Back in the day, Matt Booth extended libguestfs to add support for
btrfs, where the concerns are similar to this case.  The main thing is
that "filesystem" != "device", which is what we had assumed up till
that point.  When inspecting a btrfs-based guest, you will get output
like this:

><fs> inspect-os
btrfsvol:/dev/sda2/root
><fs> inspect-get-mountpoints "btrfsvol:/dev/sda2/root"
/boot: /dev/sda1
/: btrfsvol:/dev/sda2/root
/home: btrfsvol:/dev/sda2/home
><fs> mount "btrfsvol:/dev/sda2/root" /
><fs> ll /
total 4
drwxr-xr-x  1 root root   32 Sep 23 08:59 .
drwxr-xr-x 19 root root 4096 Sep 23 13:54 ..
drwxr-xr-x  1 root root   74 Sep 23 08:59 bin
drwxr-xr-x  1 root root    0 Sep 23 08:59 boot
drwxr-xr-x  1 root root  134 Sep 23 08:59 etc
drwxr-xr-x  1 root root   10 Sep 23 08:59 usr
drwxr-xr-x  1 root root   12 Sep 23 08:59 var

Note that strings like "btrfsvol:..." are internal implementation
details.  You shouldn't try to parse them.  The aim here was that code
such as that given in comment 18 continues to work.

My proposal therefore is that we extend this further for OSTree, so it
would return other opaque strings that match subdirectories containing
the root, /var and anything else that has to be mounted.  Also we
should only return the "current" deployment (for now - we can return
more in future though).
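To illustrate why opaque mountable strings keep consumers working: a
minimal sketch of the usual inspection flow in the libguestfs Python
bindings (essentially what the imagefactory code linked above does).
The disk filename is hypothetical; the strings returned by inspection
are handed straight back to mount_ro() without parsing:

import guestfs

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts("disk.qcow2", readonly=1)
g.launch()

roots = g.inspect_os()          # e.g. ["btrfsvol:/dev/sda2/root"]
for root in roots:
    mps = g.inspect_get_mountpoints(root)
    # Mount shortest paths first so "/" comes before "/boot" etc.
    for mp in sorted(mps, key=len):
        # mps[mp] is an opaque mountable string -- never parse it,
        # just pass it back to mount_ro().
        g.mount_ro(mps[mp], mp)
    print(root, g.inspect_get_distro(root))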
(In reply to Colin Walters from comment #12)
> > I mean, on the root there's no /usr nor /etc, but just the /ostree as
> > "actual meat".
>
> Inside the deployment directory there is both /usr and /etc.

What I meant is that I have to identify the root of the atomic host
somehow.

> > b) is the trailing .0 denoting some versioning of the UUID it refers to?
>
> Basically yes.

Is it something stable that I could rely on?

> > c) how should the distro in this deploy be handled?
>
> I suspect you just want to discover the default deployment root, and
> then pretend that's the physical root.

That would consider only one deployment root and possibly ignore the
host, which are things I'd rather not do.

> > In the example image, it seems like a Fedora, although modified
> > (/etc/fedora-release shows "Generic"), with some patched RPM
> > (/var/lib/rpm -> /usr/share/rpm).  So like a normal Fedora, or some
> > spin (like Fedora/OSTree), or something else?
>
> Mostly like a normal Fedora inside the deployment root, yes.

Are these changes something kind of "definitive" (for example the name
change in /etc/fedora-release), or are they in-progress changes?

(In reply to Ian McLeod from comment #15)
> More specifically, what should we expect the output of
> guestfs_inspect_get_mountpoints() to look like for a deployable root?
> As I understand it, the block device will be shared between multiple
> roots, and we'll need to be passed a directory underneath that device
> that is the location of the "real" root filesystem.  So whereas we
> used to get something like this:
>
>   "/", "/dev/sda2"
>
> We'll now get something like:
>
>   "/", "/dev/sda2:/ostree/deploy/osname/deploy/c53693547429fe6d4a14d0abd8ebb23920cacec11433749821db47204dbefac9.0/"

One possibility could be to return multiple roots, although there are
pros and cons to it (for example, our tools which mount based on
inspection handle images with a single root only).  Also most of the
3rd party tools using guestfs I've seen tend to accept one root only.

Regarding the identifiers, I'm thinking about something like:

  "/", "/dev/sda1"
  "/", "ostree:/dev/sda2:osname:c53693547429fe6d4a14d0abd8ebb23920cacec11433749821db47204dbefac9:0"

(The last :0 part depends on whether that ID as suffix is something
reliable or not - see my question above in this comment.)

(In reply to Colin Walters from comment #17)
> It'd *almost* be enough for most uses to just find the first
> deployment root and chroot to it.  Except that /var lives outside the
> root.  This would mean libguestfs scripts wouldn't be able to
> read/write /var as the host would see it.

I'd prefer to not lose information if possible.
(In reply to Pino Toscano from comment #20)
> What I meant is that I have to identify the root of the atomic host
> somehow.

Theoretically, there can be multiple in an existing system.  For the
goal of this bug, just traverse the hierarchy to find:

  /ostree/deploy/$osname/deploy/$checksum.$serial/

where there is only one value of $osname, $checksum and $serial in
initially created disk images.

Longer term, libguestfs could look at using libostree, which has an
API for this.

> Is it something stable that I could rely on?

I wouldn't hardcode .0, but there will be at most one deployment root.

> That would consider only one deployment root and possibly ignore the
> host, which are things I'd rather not do.

Host?  Well, in an Atomic image, there is by default no "host", if you
mean "something bootable in the physical /".

> Are these changes something kind of "definitive" (for example the name
> change in /etc/fedora-release), or are they in-progress changes?

Well, for Fedora.next there are already changes here, but I don't see
why libguestfs would care about the contents of that file other than
to display it.  Or does it take version-specific action?
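A minimal sketch of that traversal with the libguestfs Python
bindings, under the assumptions above (exactly one deployment on a
freshly built image; the /dev/atomicos/root device name is taken from
the guestmount example earlier in this bug; the serial is not
hardcoded):

import guestfs

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts("atomic-disk.qcow2", readonly=1)
g.launch()
g.mount_ro("/dev/atomicos/root", "/")

# /ostree/deploy/$osname/deploy/$checksum.$serial/ -- the glob also
# matches the *.origin files that sit next to each deployment
# directory, so keep directories only.
deploys = [d for d in g.glob_expand("/ostree/deploy/*/deploy/*")
           if g.is_dir(d)]

for d in deploys:
    print(d)
    # /etc lives inside the deployment root, not at the physical root.
    print(g.cat(d.rstrip("/") + "/etc/os-release"))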
(In reply to Colin Walters from comment #21)
> Theoretically, there can be multiple in an existing system.  For the
> goal of this bug, just traverse the hierarchy to find:
>
>   /ostree/deploy/$osname/deploy/$checksum.$serial/
>
> where there is only one value of $osname, $checksum and $serial in
> initially created disk images.

The problem here is finding a suitable representation for any of the
deploys in the atomic image, and a way to properly handle mountpoints
and any other data in it.  Once that is done, collecting one deploy or
all of them is the same thing.  (And actually, traversing and
collecting all the deploys is simpler than having to look for a single
one.)

> I wouldn't hardcode .0, but there will be at most one deployment root.

I wasn't referring to the 0 as a fixed value, but whether I can rely
on there always being a .N (numeric) suffix with the serial of that
deploy.

> Host?  Well, in an Atomic image, there is by default no "host", if you
> mean "something bootable in the physical /".

Whatever is outside the roots of the deploys.

> Well, for Fedora.next there are already changes here, but I don't see
> why libguestfs would care about the contents of that file other than
> to display it.  Or does it take version-specific action?

We use such files as an aid in identifying the distribution.  OSTree
or not, if there are changes in those files we need to adjust
libguestfs for them.
We added a fairly ugly workaround to Factory to address this for the time being. In a substantially cleaner (and non-python) form, it might be useful inside of the guestfs detection code. I'm sharing it here for reference: https://github.com/redhat-imaging/imagefactory/commit/a60e110b619da8b722444418073f861bd76a7ad1
dropping stale blocker metadata (should've been cleared when it was rejected.)
Wanted to bump this back up to see if it's still possible we can work on adding ostree-enabled detection. Thoughts?
Any way to prioritize this?  It would be extremely helpful to me as
well.  Thanks.

(PS: next, it would be great if virt-builder provided an atomic image.)
(In reply to Ian McLeod from comment #25)
> Wanted to bump this back up to see if it's still possible we can work
> on adding ostree-enabled detection.
>
> Thoughts?

Yes: the matter is not easy and requires some architectural changes,
which is why it hasn't been done so far.  I will get back to it next
week, rejuggling the old prototypes I did a few months ago and sending
a summary to our mailing list.

(In reply to Federico Simoncelli from comment #26)
> (PS: next, it would be great if virt-builder provided an atomic image.)

Ideally it should be a matter for the distributions to provide all the
images they want, instead of making us provide them.  Regarding Fedora,
this was asked months ago, with basically no activity on the Fedora
rel-eng side so far:

https://fedorahosted.org/rel-eng/ticket/5805
*** Bug 1212678 has been marked as a duplicate of this bug. ***
I'm offering a cool t-shirt or some nice beverage as a bounty if this
is in Fedora by mid-September :)  I am happy to negotiate the bounty.
Cheers!
I opened up a RHEL-AH image (link in one of the previous private
comments).  Of course there is no /etc/passwd or /etc/shadow at the
physical root, as expected, so virt-customize doesn't know what to do
with it.

If you want to modify the password yourself, you'll need to find the
correct files, which you can do like this:

$ guestfish -a rhel-atomic-cloud-7.2-10.x86_64.qcow2 --ro

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

><fs> run
><fs> list-filesystems
/dev/sda1: xfs
/dev/atomicos/root: xfs
><fs> mount /dev/atomicos/root /
><fs> mount /dev/sda1 /boot
><fs> find / | grep '/etc/passwd$'
ostree/deploy/rhel-atomic-host/deploy/ec85fba1bf789268d5fe954aac09e6bd58f718e47a2fcb18bf25073b396e695d.0/etc/passwd

(and similarly for /etc/shadow).  Then you would have to update that
file yourself, using a guestfish command such as:

><fs> vi /ostree/deploy/[.....]/etc/passwd
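The same lookup can be scripted; a minimal sketch with the libguestfs
Python bindings that globs the deployment layout instead of running a
full find (the image filename is the one above):

import guestfs

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts("rhel-atomic-cloud-7.2-10.x86_64.qcow2", readonly=1)
g.launch()
g.mount_ro("/dev/atomicos/root", "/")

# /etc lives inside the deployment root, not at the physical root.
for f in g.glob_expand("/ostree/deploy/*/deploy/*/etc/passwd"):
    print(f)
    print(g.cat(f))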
FWIW this is a "would be nice" from my perspective, but not critical
path.  For Atomic Host virt image builds I think we just disable the
"gather the rpmdb" phase, which is fine because for Atomic Host the
rpmdb is assembled in the ostree commit and should be immutable for VM
builds at present.
I've actually been working on this, see: https://rwmj.wordpress.com/2015/12/06/inspection-now-with-added-prolog/
Hey Rich,

Is there any update on this feature or maybe an upstream issue we can
follow?
There's no update, still waiting for someone to write the patches. I'll update this bug if there is any news.
We've decided not to do this. For reasons see: https://issues.redhat.com/browse/RHEL-80287?focusedId=26685130&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-26685130