Description of problem:
ncurses functions malfunction in UTF-8 locales; e.g. the mvaddnstr() macro interprets its count argument as a number of bytes, not characters. This breaks some third-party applications, e.g. the 'tin' (http://www.tin.org) newsreader.

Version-Release number of selected component (if applicable):
Tried with both ncurses-5.2-28 (Red Hat Linux 8.0) and ncurses-5.3-4 (Red Hat Linux 8.1 beta 3).

How reproducible:
The sample program is below; compile with: gcc -o tst tst.c -lncurses

  #include <ncurses.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(int argc, char *argv[])
  {
      initscr();
      mvaddnstr(15, 15, argv[1], 10);
      refresh();
      sleep(5);
      endwin();
      return 0;
  }

First run it with a 1st argument containing only ASCII characters, e.g.

  > ./tst ThisIsALongString

It outputs the string 'ThisIsALon', just as expected. Then pass it any string containing multibyte characters and see it truncated to fewer than 10 characters (e.g. to 5 characters for Cyrillic strings, with LANG=ru_RU.UTF-8).
ncurses does what it is designed to do. If you want UTF-8 capability, use the "--enable-widec" configure option. Try reading the installation instructions.
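For illustration, here is the same test rewritten against the wide-character API (a sketch assuming a "--enable-widec" build, linked with -lncursesw; the mbstowcs() conversion and the mvaddnwstr() call are the standard wide-character route, not code taken from the original report):

  /* Sketch: wide-character variant of the reproducer.
   * Compile with: gcc -o tstw tstw.c -lncursesw
   */
  #define _XOPEN_SOURCE_EXTENDED
  #include <locale.h>
  #include <stdlib.h>
  #include <wchar.h>
  #include <unistd.h>
  #include <ncurses.h>

  int main(int argc, char *argv[])
  {
      wchar_t wbuf[256];

      if (argc < 2)
          return 1;

      setlocale(LC_ALL, "");                  /* honour the UTF-8 locale */
      if (mbstowcs(wbuf, argv[1], 255) == (size_t)-1)
          return 1;                           /* invalid multibyte sequence */
      wbuf[255] = L'\0';

      initscr();
      mvaddnwstr(15, 15, wbuf, 10);           /* 10 characters, not 10 bytes */
      refresh();
      sleep(5);
      endwin();
      return 0;
  }

With the argument converted to wide characters first, the count passed to mvaddnwstr() refers to characters, so Cyrillic input is no longer truncated at the byte level.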
I know, that's just what I did here locally. However, the prebuilt packages distributed by Red Hat exhibit this misbehavior.
Future releases for UTF-8 environments will be linked against ncursesw. Since the current ncurses package already provides ncursesw, this is a bug in packages that link against ncurses (they should be linking against ncursesw if they work in UTF-8), so I'm closing this.
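For anyone auditing a prebuilt package, one way to check which curses library a binary actually pulls in, and the changed link line for the reproducer (the path and load address below are illustrative, not captured from a real system; the program must also call setlocale() for the wide library to help):

  $ ldd /usr/bin/tin | grep curses
          libncurses.so.5 => /lib/libncurses.so.5 (0x4002f000)

  $ gcc -o tst tst.c -lncursesw    # link against the wide-character library instead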