KahaDB log files not clearing ActiveMQ
After we upgraded to ActiveMQ 5.6.0 and changed a number of configuration options listed in the vertical scaling post, things were running brilliantly (see the post on testing the new configuration as well).
However, after a couple of weeks, disk space on one of the brokers continued to grow. Looking at the data/kahadb directory, we saw that the log files all the way back to log 1 still existed. This sounded like a frequent problem with ActiveMQ where it doesn't clear its log files after use (users seem to report this issue every other release). Only the broker that received the duplex network connection was suffering; the broker that established that network connection was fine. It looked like a problem with consumed messages not being acknowledged.
We turned on logging as detailed at http://activemq.apache.org/why-do-kahadb-log-files-remain-after-cleanup.html:
log4j.appender.kahadb=org.apache.log4j.RollingFileAppender
log4j.appender.kahadb.file=${activemq.base}/data/kahadb.log
log4j.appender.kahadb.maxFileSize=1024KB
log4j.appender.kahadb.maxBackupIndex=5
log4j.appender.kahadb.append=true
log4j.appender.kahadb.layout=org.apache.log4j.PatternLayout
log4j.appender.kahadb.layout.ConversionPattern=%d [%-15.15t] %-5p %-30.30c{1} - %m%n
log4j.logger.org.apache.activemq.store.kahadb.MessageDatabase=TRACE, kahadb

We then used jconsole to invoke the reload log4j method on the broker MBean so that the new logging configuration took effect.
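If you'd rather script that reload than click through jconsole, a few lines of JMX client code can invoke the same operation. This is only a sketch: the JMX URL, the broker object name (shown here in the pre-5.8 naming scheme with BrokerName=localhost) and the exact operation name (reloadLog4jProperties on the 5.x brokers we've seen) depend on your broker version and configuration.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReloadLog4j {
    public static void main(String[] args) throws Exception {
        // Default ActiveMQ JMX connector URL; adjust host/port for your broker.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Broker MBean under the pre-5.8 naming scheme; BrokerName may differ.
            ObjectName broker = new ObjectName(
                    "org.apache.activemq:BrokerName=localhost,Type=Broker");
            // Same operation jconsole exposes on the broker MBean.
            mbs.invoke(broker, "reloadLog4jProperties", null, null);
        }
    }
}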
The trace logging showed that the broker didn't find any log files to be cleared - the first attempt to find a free log produced no candidates and the cleanup failed. This didn't really add much information except that there was a fundamental problem. We asked a question on the ActiveMQ forums, which was as useless as our previous questions. This left us with two obvious options:
1) restart the troublesome broker
2) clear any needed messages from the broker, shut it down, clear off the KahaDB files and start fresh.
We started with option 1, but when the broker took a long time to load the many, many GBs of old logs, we grew concerned about message replay from the unacknowledged consumption. So we shut it down (we had already saved any pending messages), cleared off the KahaDB files and started the broker again.
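For the "save any pending messages" step, something along the lines of the following plain JMS consumer can drain a queue before the store is wiped. It's only a sketch: the broker URL, queue name and the "save" action (here just printing the message ID) are placeholders for whatever your setup actually needs.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DrainQueue {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL and queue name - substitute your own.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer =
                    session.createConsumer(session.createQueue("PENDING.QUEUE"));
            Message message;
            // receive() with a timeout returns null once the queue stops yielding messages.
            while ((message = consumer.receive(2000)) != null) {
                // Persist the message somewhere durable here; printing the ID is a stand-in.
                System.out.println("Saved " + message.getJMSMessageID());
                // Acknowledge only after the message has been safely stored.
                message.acknowledge();
            }
        } finally {
            connection.close();
        }
    }
}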
After several hours, the troubled broker looked healthy again and was clearing off early log files. Issue closed for now!