Description of problem:
A completely full btrfs filesystem can get stuck in a state where rm and echo "" > /some/file no longer work; both fail with "No space left on device". ("Full" in this case means the "btrfs filesystem show --all-devices" output lists a full device in the devid line, for example: "devid 1 size 111.27GB used 111.27GB path /dev/dm-0".)

The only way to recover is to add another storage medium as an additional device, then delete files, then remove the temporarily added device again. This is tedious and not something a normal user can do, and it can get even more complicated for fully encrypted (LVM) filesystems.

Version-Release number of selected component (if applicable):
3.11.1-200.fc19.x86_64

How reproducible:
My disk has run completely full 3 times, and the third time this bug affected me. The previous two times may have been on older kernel versions.

Steps to Reproduce:
1. Fill up the disk completely.
2. Try to rm a file. (-> "No space left on device")

Actual results:
rm doesn't work; I get "No space left on device". echo "" > /some/file, as suggested in the btrfs FAQ, doesn't work either.

Expected results:
rm works and I can easily regain free disk space.

Additional info:
bash-4.2$ btrfs --version
Btrfs v0.20-rc1
bash-4.2$

No RAID or anything special was used; the btrfs partition is regular except for having two subvolumes (/ and /home) and being inside an encrypted LVM container.
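For reference, the workaround described above can be sketched as the following shell sequence. This is only an illustration of the reported recovery path, not a recommendation; the device path /dev/sdb1 and mount point /mnt are hypothetical placeholders, so do not run these commands verbatim on a real system.

```shell
# Sketch of the reported workaround for a completely full btrfs filesystem.
# /dev/sdb1 and /mnt are hypothetical placeholders.

# 1. Temporarily add a spare device to the full filesystem to free up
#    allocatable space.
btrfs device add /dev/sdb1 /mnt

# 2. With space available again, deleting files works normally.
rm /mnt/path/to/large/file

# 3. Optionally rebalance so allocations migrate back to the original device
#    before removing the spare one.
btrfs balance start /mnt

# 4. Remove the temporarily added device again.
btrfs device delete /dev/sdb1 /mnt
```

As the report notes, this requires a spare storage medium and root access, which is exactly why it is not a workable recovery path for a normal user, especially on an encrypted LVM setup.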
Sorry, I should have said *There appears to be no other way to recover than... Adding a device was what people on #fedora and #btrfs suggested to resolve the situation; there are probably other ways to resolve it as well.
What does btrfs fi df /mnt/point say?
Now (with the disk no longer being full, on the machine where this bug was reproduced):

bash-4.2$ btrfs fi df /
Data: total=109.27GB, used=89.56GB
System: total=4.00MB, used=24.00KB
Metadata: total=2.00GB, used=1.15GB
bash-4.2$

At the time of this bug, with the disk full, "df / -h" still indicated 10GB free (as did GNOME Nautilus), while "btrfs fi show --all-devices" showed in the devid line for my single SSD: size 111.27GB used 111.27GB. (It still does now, so I'm apparently close to filling up again.)
It's more the metadata I'm concerned with. The next time you reproduce this, please capture the btrfs fi df output so I can see what is going on.
*********** MASS BUG UPDATE ************** We apologize for the inconvenience. There is a large number of bugs to go through, and several of them have gone stale. Due to this, we are doing a mass bug update across all of the Fedora 19 kernel bugs. Fedora 19 has now been rebased to 3.12.6-200.fc19. Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel. If you have moved on to Fedora 20 and are still experiencing this issue, please change the version to Fedora 20. If you experience different issues, please open a new bug report for those.
*********** MASS BUG UPDATE ************** This bug has been in a needinfo state for more than 1 month and is being closed with insufficient data due to inactivity. If this is still an issue with Fedora 19, please feel free to reopen the bug and provide the additional information requested.