Red Hat Bugzilla – Bug 77905
Aggressive palette pre-allocation at 8-bit color depth breaks WindowMaker session
Last modified: 2007-04-18 12:48:27 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 Galeon/1.2.6 (X11; Linux i686; U;) Gecko/20020830
Description of problem:
When using a standalone WindowMaker session (i.e. no GNOME or KDE desktop) at
8-bit color depth, a full palette is preallocated, leaving no colors for
WindowMaker to allocate. The resulting desktop is unusable. This occurs when
logging in at runlevel 5 from a gdm prompt, but also from runlevel 3 when
running 'xinit' and then typing 'wmaker' at the terminal prompt (so the fault
isn't xsri or gdm or...). No apps in .Xdefaults, and nothing other than wmaker
in .Xclients*. Everything is fine at >8-bit color depth, of course. This was
not a problem in Red Hat 7.x with XFree86 4.2.
Version-Release number of selected component (if applicable):
XFree86-4.2.0-72, stock RH 8.0
Steps to Reproduce:
1. Set up an X session with 8-bit color depth
2. Log in from gdm (runlevel 5) or a text console (runlevel 3)
3. If from a text console, run 'xinit' and then 'wmaker' for a barebones session
Actual Results: WindowMaker has no colors to allocate from, and the desktop is
unusable.
Expected Results: A clean palette should have been available for WindowMaker to
allocate its default ditherable palette of 64 colors. GNOME works as expected.
A GNOME palette should not be preallocated when neither desktop environment is
in use!
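The failure mode can be sketched with a toy simulation (illustrative only, not real Xlib; the 4x4x4 RGB cube standing in for WindowMaker's 64-color ditherable palette is an assumption for concreteness):

```python
# Toy model of an 8-bit PseudoColor colormap (illustrative; not real Xlib).
class Colormap:
    def __init__(self, size=256):
        self.size = size
        self.cells = {}          # pixel index -> (r, g, b)

    def alloc_color(self, rgb):
        """Allocate one shared read-only cell; return its pixel, or None if full."""
        for pixel, existing in self.cells.items():
            if existing == rgb:  # exact matches are shared, as with XAllocColor
                return pixel
        if len(self.cells) >= self.size:
            return None          # colormap exhausted
        pixel = len(self.cells)
        self.cells[pixel] = rgb
        return pixel

def alloc_cube(cmap, levels=4):
    """Try to allocate a levels^3 RGB cube (64 colors for levels=4)."""
    step = 255 // (levels - 1)
    wanted = [(r * step, g * step, b * step)
              for r in range(levels) for g in range(levels) for b in range(levels)]
    got = [cmap.alloc_color(c) for c in wanted]
    return sum(p is not None for p in got), len(wanted)

# Clean server: the 64-color cube fits easily in 256 cells.
clean = Colormap()
print(alloc_cube(clean))   # (64, 64)

# Server that preallocated a full 256-entry palette first:
full = Colormap()
for i in range(256):
    full.alloc_color((i, i, 0))   # 256 distinct colors fill every cell
print(alloc_cube(full))    # (4, 64): only the four cube colors that happen
                           # to match preallocated cells can be shared
```

With every cell taken, a later client can only reuse cells that happen to match exactly; everything else fails, which is what a depth-8 WindowMaker session runs into here.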
The RENDER extension consumes a fair portion of the colormap. This is
a known issue, but unlikely to be fixed in any 4.2.x release.
You can use Xnest at depth 8 inside a depth 24 server if 8bit-only apps
need to be used, so this is not considered a major problem. XFree86 4.3.0
will have a new feature to configure the colormap allocation.
Time permitting, I might have a look and see how much effort would be
required to backport. No promises though.
I've discussed this with various X people, and we all seem to agree that
this is just not worth fixing in 4.2.x. Any sane application should run fine
at all color depths, and any sane modern desktop runs in truecolor. There
are very few reasons to use 16-bit depth or 8-bit pseudocolor on modern
systems. The only cases we can think of in which one really must use
anything other than 24-bit color depth are:
1) Ancient video hardware with an inadequate amount of video memory: wanting
to run at a higher resolution, and being forced into a lower color depth so
that the screen fits in video memory.
The number of people who encounter #1 is extremely small, and there are
easy enough workarounds. One can downgrade the X server to the one found in
a prior version of Red Hat Linux, upgrade to the CVS version of XFree86, or
wait until 4.3.0 is released and upgrade then. Or one can break down and buy
a new video card that doesn't have memory limitations; new video hardware
with 64MB of RAM and even 3D capabilities can be had for a song nowadays.
For situations like schools or LTSP installations with tons of machines,
where purchasing even $20 video cards is out of the question due to the
volume of machines, the recommendation is to downgrade the X server to
4.1.0, or upgrade to 4.3.0 or CVS X. An unofficial solution, but it would
work nonetheless.
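The memory constraint in #1 is simple arithmetic: the framebuffer needs roughly width x height x bytes-per-pixel. A back-of-the-envelope sketch (ignoring scanline padding and memory reserved for acceleration, which real servers do account for):

```python
# Which color depths fit a given amount of video memory at a given resolution?
# Rough estimate: framebuffer bytes = width * height * (bits-per-pixel / 8).
def fits(width, height, bpp, vram_bytes):
    return width * height * (bpp // 8) <= vram_bytes

ONE_MB = 1024 * 1024
for bpp in (8, 16, 24):
    print(bpp, fits(1024, 768, bpp, ONE_MB))
# 8  -> True   (768 KB framebuffer)
# 16 -> False  (1.5 MB)
# 24 -> False  (2.25 MB)
```

So a 1MB card running 1024x768 is forced down to 8-bit depth; dropping the resolution instead is the other way out.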
2) Broken-by-design applications that are hard-coded to a specific depth
and will not run at any other depth.
Such depth-specific applications include old Motif apps, other ancient
apps, apps that are not open source or for which the source no longer
exists or is unavailable, and perhaps some games (QuakeWorld). For games
like QuakeWorld (16-bit only), the source code exists and could be changed
if someone wanted to, or a second server could be run at 16-bit depth, etc.
Ditto for other apps locked into 16-bit depth. For apps locked into 8-bit
depth, users can use the 8+24-bit overlay support on their video card to
get both 24-bit and 8-bit simultaneously. That might not work for all apps,
though, so running the Xnest server inside X is another alternative. If one
does not have video hardware that does 8-bit overlay, one can perhaps run
Xnest at 8-bit depth inside a server running at 24-bit depth (untested).
Xnest does not support RENDER, so it may not have this problem. Another
option for these users is to install the 4.1.0 or XFree86 CVS RPMs from
Rawhide, as indicated for #1 above.
Another option, which is less appealing, would be to disable the RENDER
extension at build time and rebuild the RPM packages. Most people probably
wouldn't want to attempt this, but if someone is indeed interested, I can
probably whip up some packages that do so. Keep in mind, though, that many
applications now expect RENDER to be available and are likely not to work,
so this option might not be viable either.
If this problem were really that widespread, someone would likely have
reported it to XFree86.org during the 8-10 month development period of
XFree86 4.2.0.
Anyway, I realize that for some people this might be a bit inconvenient.
However, workarounds are available for the time being, and the problem will
be solved in a much better way in XFree86 4.3.0. It doesn't make much sense
to spend time solving a problem for which workarounds exist and which is
basically already solved upstream. Hopefully everyone having these types of
problems will find one of these workarounds useful for the time being.