Log file removal
The fourth component of the infrastructure, log file removal, concerns the ongoing disk consumption of the database log files. Depending on the rate at which the application writes to the databases and the amount of available disk space, log files may accumulate quickly enough to become a disk space problem. For this reason, you will periodically want to remove log files in order to conserve disk space. This procedure is distinct from database and log file archival for catastrophic recovery: you cannot remove the current log files simply because you have created a database snapshot or copied log files to archival media.
Log files may be removed at any time, as long as:
- the log file is not involved in an active transaction.
- a checkpoint has been written subsequent to the log file’s creation.
- the log file is not the only log file in the environment.
Additionally, when Replication Manager is running, a log file can be removed only if it is older than the most out-of-date active site in the replication group.
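
Of these conditions, the checkpoint requirement is the one applications most often have to satisfy explicitly. The following minimal sketch, which assumes an already-opened environment handle named dbenv and uses an illustrative function name, writes a checkpoint so that log files created before it become candidates for removal:

```c
#include <db.h>

/*
 * Sketch (illustrative name): write a checkpoint so that log files
 * created before this point become candidates for removal.
 * Assumes "dbenv" is an already-opened DB_ENV handle.
 */
int
checkpoint_for_log_removal(DB_ENV *dbenv)
{
	int ret;

	/* Passing zero for the kbyte and min arguments writes a
	 * checkpoint regardless of how much log data has accumulated
	 * or how much time has passed since the last checkpoint. */
	if ((ret = dbenv->txn_checkpoint(dbenv, 0, 0, 0)) != 0)
		dbenv->err(dbenv, ret, "DB_ENV->txn_checkpoint");

	return (ret);
}
```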
If you are preparing for catastrophic failure, you will want to copy the log files to archival media before you remove them as described in Database and log file archival.
If you are not preparing for catastrophic failure, any one of the following methods can be used to remove log files:
- Run the standalone db_archive utility with the -d option, to remove any log files that are no longer needed at the time the command is executed.
- Call the DB_ENV->log_archive() method from the application, with the DB_ARCH_REMOVE flag, to remove any log files that are no longer needed at the time the call is made (see the sketch after this list).
- Call the DB_ENV->log_set_config() method from the application, with the DB_LOG_AUTO_REMOVE flag, to remove any log files that are no longer needed on an ongoing basis. With this configuration, Berkeley DB will automatically remove log files, and the application will not have an opportunity to copy the log files to backup media.
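
The sketch below, again assuming an already-opened DB_ENV handle named dbenv and an illustrative function name, shows the two programmatic options from the list above: a one-shot DB_ARCH_REMOVE call and enabling ongoing automatic removal with DB_LOG_AUTO_REMOVE.

```c
#include <db.h>

/*
 * Sketch (illustrative name): remove log files that are no longer
 * needed, and optionally turn on automatic removal from now on.
 * Assumes "dbenv" is an already-opened DB_ENV handle.
 */
int
remove_unneeded_logs(DB_ENV *dbenv)
{
	int ret;

	/* One-shot removal: delete any log files that are no longer
	 * needed at the time of this call.  With DB_ARCH_REMOVE no
	 * file list is returned, so the list argument is NULL. */
	if ((ret = dbenv->log_archive(dbenv, NULL, DB_ARCH_REMOVE)) != 0) {
		dbenv->err(dbenv, ret, "DB_ENV->log_archive: DB_ARCH_REMOVE");
		return (ret);
	}

	/* Ongoing removal: have Berkeley DB delete unneeded log files
	 * automatically.  Files removed this way can never be copied
	 * to backup media first. */
	if ((ret = dbenv->log_set_config(dbenv, DB_LOG_AUTO_REMOVE, 1)) != 0)
		dbenv->err(dbenv, ret, "DB_ENV->log_set_config: DB_LOG_AUTO_REMOVE");

	return (ret);
}
```

The command-line equivalent of the one-shot call is db_archive -d, with -h naming the environment home directory.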