I have 2 machines that are having the same problem
(2CPU/PIII with wolverine and 1CPU/PII with fisher)
There is one very large table in a mysql db I am using.
When I try to load that table, it refuses to add any more records (380K records, 4GB [4,294,939,020 bytes]).
There are a number of related bugs filed against tcsh and db2, but those all hit the old 2GB limit.
I have tried this from both the bash and tcsh and had similar results.
I am using ext2, with the default 4096 block size, on 9GB and 18GB drives
Can you try the mysql rpms (3.23.36-1) from http://people.redhat.com/teg/mysql?
Have you had the time to do this?
Also, how do you try to load it?
I upgraded mysql, mysql-devel, and mysql-server, and then I tried to reload additional data to that tablespace and
I got the same error.
I attempted to load data using:
"mysql> load data local infile 'dna-table.10' replace into table dna"
and I get "ERROR 2013: Lost connection to MySQL server during query" as soon as I get to the first new (non-replace) record.
If I do a full new load
"mysql> load data local infile 'dna-table.11' into table dna"
I get an immediate server disconnect:
ERROR 2006: MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 6
Current database: ensembl
ERROR 2013: Lost connection to MySQL server during query
Sorry for the delay in the reply.
I tried a client upgrade as well. As expected, that made no difference.
I've confirmed that MySQL has problems with files bigger than 4GB... it does
identify large file support during configure, but it doesn't seem to use it.
Tables need to be configured in a specific way... take a look at
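If it helps while that reference is tracked down, here is a sketch of the usual MyISAM-side workaround, using the `dna` table from the commands above (the AVG_ROW_LENGTH value is an assumption and should be replaced with the real average row size): MyISAM's default 4-byte data pointer caps a table's data file at 4GB, and declaring a larger expected table size makes mysqld allocate wider pointers.

```sql
-- Check the current cap on this table (see the Max_data_length column):
SHOW TABLE STATUS LIKE 'dna';

-- Declare the expected table size so MyISAM uses larger row pointers.
-- AVG_ROW_LENGTH=500 is a placeholder guess, not a measured value.
ALTER TABLE dna MAX_ROWS=1000000000 AVG_ROW_LENGTH=500;
```

Note that the ALTER rebuilds the table, which can take a while on a 4GB data file, and this only addresses the MyISAM pointer limit, not any filesystem limit.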