
Log Miner can help when questions such as these come up: What was changed? Who changed it? And in what order?

When unauthorized people change data, they may assume the change leaves no trace if it isn't visible at the application level. In fact, every change is logged; it just takes time and trouble to find that information.

The tool most often used is the PL/SQL package DBMS_LOGMNR, but a GUI called Log Miner Viewer has also been added to Oracle Enterprise Manager (OEM). The Oracle Database Utilities Guide has quite a few examples of how to use this utility, in both its browser-based and PL/SQL versions. We will concentrate on when and how to find the data to restore.

You should already have a good understanding of the database structures that include the undo and redo logs: undo is generated when an end user starts changing data, and redo is generated as those changes are made (a commit forces the redo to be written out). Each is written to its own set of files. While undo and redo are both online (available while the database is open), archived redo is offline and written to disk.

Archived redo logs are no longer needed for the transactions inside the database because those transactions have been committed and written to disk. Archive logs are still important, however, for restoring previously committed transactions in a recovery situation. Taking an archive log offline allows backup procedures (RMAN, third-party backup software, or OS utilities) to manipulate the files at the operating system level.

Recovery is a database process that will:

  • Roll forward changes from the redo logs, and then roll back any statements an end user reversed with the ROLLBACK command.
  • Roll back any uncommitted changes found in the UNDO segments.

There are specific Oracle processes involved: LGWR writes the redo to the online logs, and an archiver process (ARC) then copies those logs to the archive destination. The only way to ensure every transaction in a database has been logged for recovery purposes is to operate in ARCHIVELOG mode. There are special situations that call for running in NOARCHIVELOG mode, where it is assumed that any transactions lost between backups can be re-created. Archived redo logs can be used to restore transactions that occurred between regular backups. From the last exercise, you also have a good understanding of the read consistency provided by undo segments, which also contribute to redo entries.

The DBMS_LOGMNR package is used to find data in both the undo and redo database structures. It is also useful for analyzing patterns over time for specific tuning needs, schema changes, and forecasting the time for hardware upgrades. With the DBMS_LOGMNR package, you can extract data that populates the V$LOGMNR_CONTENTS view with the actual transactions that have been executed. These entries contain both the REDO and UNDO statements.
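As a quick sketch of the package interface, a minimal mining session looks something like the following. The archived log path and table name here are hypothetical placeholders; in practice you would add the logs covering the time range you want to examine.

SYS@NEWDB> BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => '/u01/arch/1_25_1234567.arc',  -- hypothetical archived log
    OPTIONS     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
SYS@NEWDB> SELECT USERNAME, SQL_REDO, SQL_UNDO
FROM V$LOGMNR_CONTENTS
WHERE SEG_NAME = 'EMPLOYEES';  -- hypothetical table name
SYS@NEWDB> EXECUTE DBMS_LOGMNR.END_LOGMNR

The DICT_FROM_ONLINE_CATALOG option only works when mining on the source database while it is open; the dictionary alternatives are discussed next.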

You can run Log Miner on the original database that created the log entries or on almost any other Oracle database of the same or a later version that is running the same character set, database block size, and operating system. This is why it is critical that you protect the online redo, undo, and archive logs: they can be mined for information. Most often a DBA will use a different database to do the mining so that it doesn't consume additional resources in a production database. If you use a database other than the one where the original transactions were created, you will have to build the Log Miner data dictionary (online, offline, or as a standalone flat file). The dictionary translates the internal object identifiers and types to table and column names that can be queried, and those object IDs vary between databases, making the dictionary a requirement.
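As one example, extracting the dictionary to a standalone flat file is a short call to DBMS_LOGMNR_D.BUILD while the source database is open. This is a sketch with a hypothetical file name and directory; in older releases, the directory must also be listed in the UTL_FILE_DIR initialization parameter.

SYS@NEWDB> BEGIN
  DBMS_LOGMNR_D.BUILD(
    DICTIONARY_FILENAME => 'dictionary.ora',         -- hypothetical file name
    DICTIONARY_LOCATION => '/u01/app/oracle/logmnr', -- hypothetical directory
    OPTIONS             => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
END;
/

The flat file can then be copied to the mining database along with the logs to be analyzed.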

The Log Miner example task requires several preparatory steps, with some additional discussion along the way covering archiving, supplemental logging, and Flashback technologies, so you won't get to an actual Log Miner example for quite a few pages. Since Log Miner has extensive documentation detailing all of the steps for various scenarios, only a lesser-known method of using it is included here.

Turn on ARCHIVELOG mode

Before we delve into the mining exercise, we will cover more about SCNs as they relate to checkpoints and log switches while turning on archiving for the database. Transactions in a database produce redo entries in the redo log buffer (in memory), and that buffer is continually written to the online redo logs. The write occurs on several triggering events in the redo stream: a commit, every 3 seconds, or when the redo log buffer accumulates a certain amount of data (one-third full). How often these events fire depends on the type and frequency of transactions.

A checkpoint synchronizes modified data blocks in the buffer cache with the actual data files, keeping the data consistent. In the case of a database crash, it identifies the point up to which all outstanding data (transactions) has been written to disk. A checkpoint isn't synchronized with the SCN of a transaction commit, and it does not behave like a log switch.

The files you will need as you work through this exercise are included in the code as follows:

sys_archive.sql
sysarchive.lst


Open up the file sysarchive.lst. One of the most important views in the database is V$DATABASE (anything labeled V$ is a dynamic performance view).

SYS@NEWDB> SELECT LOG_MODE, NAME, CURRENT_SCN, ARCHIVE_CHANGE#, OPEN_MODE
FROM V$DATABASE;

Find the section for the statement from v$log_history farther down in sysarchive.lst. What are all these entries if we aren't in archivelog mode? They are the log switches between the online redo logs, which are overwritten once that section of the redo log is no longer needed by a transaction to maintain consistency. This is where a checkpoint comes into play: it ensures that data is written to disk and is independent of the ARC log-switch process.
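You can reproduce those entries in your own session with a query along these lines, using the standard V$LOG_HISTORY columns:

SYS@NEWDB> SELECT SEQUENCE#, FIRST_CHANGE#, FIRST_TIME
FROM V$LOG_HISTORY
ORDER BY SEQUENCE#;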

Once we switch to archivelog mode, the online redo will still be overwritten, but the ARC process will first write a copy of each log to an archive destination. Below you will see that each archived log covers a range of database SCNs, containing the changes from its first SCN up to the first SCN of the next log.

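A query against V$ARCHIVED_LOG along the following lines shows that range for each archived log; FIRST_CHANGE# and NEXT_CHANGE# are the boundaries just described:

SYS@NEWDB> SELECT NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#
FROM V$ARCHIVED_LOG
ORDER BY SEQUENCE#;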

Now we try to correlate archive_change# and checkpoint_change#. Also notice that the checkpoint_change# for each data file stays consistent during normal database operations. For the single data file created, the relevant values can be pulled with a query along the lines sketched below:

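This is a reconstruction, not necessarily the exact command from the exercise; it joins the standard V$DATABASE and V$DATAFILE columns:

SYS@NEWDB> SELECT D.ARCHIVE_CHANGE#, F.NAME, F.CHECKPOINT_CHANGE#
FROM V$DATABASE D, V$DATAFILE F;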

At this point, we have started the database in mount mode (the controlfile needs to be accessed, but the database is not opened for full use), turned on the archiving process, and verified that archiving has started and also verified the location of the archived logs. Making a log switch from one online redo to another doesn’t sync the checkpoint_change# with what the controlfile has (controlfile_change# is what is also called a thread checkpoint).
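For reference, the sequence of steps just summarized usually looks like the following sketch (archive destinations and verification output vary by environment):

SYS@NEWDB> SHUTDOWN IMMEDIATE
SYS@NEWDB> STARTUP MOUNT
SYS@NEWDB> ALTER DATABASE ARCHIVELOG;
SYS@NEWDB> ALTER DATABASE OPEN;
SYS@NEWDB> ARCHIVE LOG LIST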

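To see this for yourself, force a log switch and then compare the values; V$DATABASE exposes both numbers:

SYS@NEWDB> ALTER SYSTEM SWITCH LOGFILE;
SYS@NEWDB> SELECT CHECKPOINT_CHANGE#, CONTROLFILE_CHANGE# FROM V$DATABASE;
SYS@NEWDB> SELECT CHECKPOINT_CHANGE# FROM V$DATAFILE;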

Only when we do a manual checkpoint (instead of a database-activated checkpoint) do those numbers coincide. They can be verified with the dynamic view v$datafile as shown below:

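A minimal sketch of that manual checkpoint, assuming SYSDBA privileges:

SYS@NEWDB> ALTER SYSTEM CHECKPOINT;
SYS@NEWDB> SELECT NAME, CHECKPOINT_CHANGE# FROM V$DATAFILE;
SYS@NEWDB> SELECT CHECKPOINT_CHANGE#, CONTROLFILE_CHANGE# FROM V$DATABASE;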

Additional information for troubleshooting archiving issues comes from another dynamic view, V$INSTANCE, as shown below.

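A minimal query, using the standard ARCHIVER and LOG_SWITCH_WAIT columns of V$INSTANCE:

SYS@NEWDB> SELECT INSTANCE_NAME, ARCHIVER, LOG_SWITCH_WAIT FROM V$INSTANCE;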

The archiver column can also indicate when the ARC process has failed to switch logs, with an automatic retry in another five minutes. The log_switch_wait column indicates the wait event the log-switching process is waiting on: ARCHIVE LOG, CLEAR LOG, or CHECKPOINT.

All of the activity associated with log switches and checkpoints influences database performance. We will now continue with the remaining setup steps required to complete all of the tasks.
