Red Hat Bugzilla – Bug 1404888
Accumulo fails to create write-ahead logs
Last modified: 2016-12-16 19:40:42 EST
Created attachment 1231930
Description of problem:
When running Accumulo on top of a real HDFS volume (hdfs:// instead of file://), Accumulo fails to create write-ahead logs (WALs) and hangs, becoming unresponsive.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Start Zookeeper
2. Start Hadoop (HDFS: datanode and namenode)
3. Configure Accumulo to use HDFS (see the example configuration after these steps)
4. Initialize Accumulo
5. Start Accumulo
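For reference, a minimal accumulo-site.xml fragment for step 3 might look like the one below (using instance.volumes, available in 1.6+); the namenode host, port, and path are placeholders rather than values taken from this report:

  <property>
    <name>instance.volumes</name>
    <value>hdfs://namenode.example.com:8020/accumulo</value>
  </property>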
Actual results:
Tserver logs show that Accumulo is stuck in an infinite loop: it repeatedly tries to create a WAL file in HDFS and fails with an error each time. Looking in HDFS, one can see many zero-length WAL files left behind by the retries.
Expected results:
Tserver should create a WAL successfully and continue normally.
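To see the leftover zero-length WAL files mentioned under the actual results, a recursive HDFS listing such as the following can be used; the /accumulo/wal path is an assumption and depends on where the instance volume points:

  $ hdfs dfs -ls -R /accumulo/wal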
The workaround is to disable write-ahead logs by setting table.walog.enabled to false in /etc/accumulo/accumulo-site.xml before starting Accumulo. Since this property is overridden for the accumulo.root and accumulo.metadata tables, one must change this property manually in ZooKeeper for those tables, which is a somewhat advanced operation.
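As a sketch of that workaround, assuming the Accumulo shell is usable, the per-table overrides (which are stored in ZooKeeper) could be changed like this; if the shell cannot connect because the tservers are hung, the corresponding nodes have to be edited directly in ZooKeeper instead:

  $ accumulo shell -u root
  root@instance> config -s table.walog.enabled=false
  root@instance> config -t accumulo.root -s table.walog.enabled=false
  root@instance> config -t accumulo.metadata -s table.walog.enabled=false
  root@instance> exit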
Created attachment 1231931
Tested 1.6.6 with Hadoop 2.4.1 using just tarballs on a local setup, and WALs work fine.
(In reply to Mike Miller from comment #2)
> Tested 1.6.6 with Hadoop 2.4.1 using just tarballs on a local setup, and
> WALs work fine.
Yeah, I was also able to confirm this is not an upstream bug. I strongly suspect it's a classpath issue. We're probably missing an essential jar in Hadoop's classpath.
Ugh, the problem was simple. The default walog size is about 1G, which is too big for the small disks we were testing with. Setting tserver.walog.max.size to 100M fixed the problem. Allocating larger disks would also solve it.
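For completeness, the setting that fixed it can go in /etc/accumulo/accumulo-site.xml; this is just an illustration of the property named above, not a snippet copied from the affected system:

  <property>
    <name>tserver.walog.max.size</name>
    <value>100M</value>
  </property>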
Reporting the infinite retry loop upstream. Nothing we can do about it here other than to recommend larger disks or lowering the walog max size.