Red Hat Bugzilla – Bug 86311
ncurses incorrectly calculates string lengths for multibyte (UTF-8) strings
Last modified: 2015-01-07 19:04:28 EST
Description of problem:
ncurses functions malfunction in UTF-8 locales; e.g. the mvaddnstr() macro
interprets its length argument as a byte count, not a character count. This
breaks some third-party applications, e.g. the 'tin' (http://www.tin.org)
newsreader.
Version-Release number of selected component (if applicable):
Tried with both ncurses-5.2-28 (Red Hat Linux 8.0) and ncurses-5.3-4 (Red Hat
Linux 8.1 beta 3).
A sample program is below; compile with gcc -o tst tst.c -lncurses :

#include <curses.h>
#include <locale.h>

int main(int argc, char *argv[])
{
    setlocale(LC_ALL, "");
    initscr();
    if (argc > 1)
        mvaddnstr(15, 15, argv[1], 10);
    refresh();
    getch();
    endwin();
    return 0;
}
Try running it with a 1st argument containing only ASCII characters, e.g.
> ./tst ThisIsALongString
It outputs the string 'ThisIsALon', just as expected. Then try passing it
any string containing multibyte characters and see it truncated to fewer
than 10 characters (e.g. to 5 characters for Cyrillic strings, since each
Cyrillic character occupies two bytes in UTF-8).
ncurses does what it is designed to do. If you want UTF-8 capability, use the
"--enable-widec" configure option. Try reading the installation instructions.
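For reference, a widec build from source looks roughly like this (version and install prefix are illustrative; see the INSTALL file in the ncurses tarball for the authoritative steps):

```shell
# Sketch of a wide-character ncurses build; produces libncursesw
# rather than libncurses.
tar xzf ncurses-5.3.tar.gz && cd ncurses-5.3
./configure --enable-widec
make && make install
```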
I know; that's just what I did here locally. However, the prebuilt packages
distributed by Red Hat exhibit this misbehavior.
Future releases for UTF-8 environments will be linked against ncursesw. As the
current ncurses package already provides this, this is a bug in packages that
link against ncurses (they should be linking against ncursesw if they're
working in UTF-8), so I'm closing this.
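For what it's worth, the fix described above amounts to rebuilding the sample program against the wide library. A sketch, assuming ncursesw headers are on the default include path (on some systems they live under ncursesw/curses.h instead):

```c
/* Same test program, built against the wide-character library:
 *   gcc -o tst-w tst-w.c -lncursesw
 * With ncursesw, setlocale() must be called so the library picks up the
 * UTF-8 locale; the n argument of mvaddnstr() then limits characters,
 * not bytes. */
#include <curses.h>
#include <locale.h>

int main(int argc, char *argv[])
{
    setlocale(LC_ALL, "");      /* essential for multibyte handling */
    initscr();
    if (argc > 1)
        mvaddnstr(15, 15, argv[1], 10);
    refresh();
    getch();
    endwin();
    return 0;
}
```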