Oracle’s RDBMS SQL Command Dump Block


Do not do this in a production database.

Before continuing with this article, you should read Oracle Database Concepts 11g Release 2 (11.2), the book every DBA should start with.

Our examination of data blocks starts in Section 12-6 of the Concepts Manual.

Data block format: “Every Oracle data block has a format or internal structure that enables the database to track the data and free space in the block. This format is similar whether the data block contains table, index, or table cluster data.” A block is the smallest unit of logical storage that the Relational Database Management System (RDBMS) can manipulate. Block size is determined by the database parameter DB_BLOCK_SIZE. The logical storage units of data blocks, extents, segments, and tablespaces (from smallest to largest) map to the data files, which are stored in operating system blocks.
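You can check the block size of your own database with a query such as the following (a sketch; 8192 bytes is a common default, but your database may differ, and querying v$parameter requires the appropriate SELECT privilege):

```sql
-- In SQL*Plus, the quickest check:
SHOW PARAMETER db_block_size

-- Or query the parameter view directly:
SELECT name, value
  FROM v$parameter
 WHERE name = 'db_block_size';
```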

An undo block stores the undo for a transaction, which is effectively the information needed to reverse the original SQL transaction statement. This undo provides read consistency for all read-only queries until that transaction is committed or rolled back.

Read consistency within a changed block (transaction) is maintained for any of the following commands: insert, update, delete, merge, select for update, or lock table. These changes are tracked until either commit or rollback is issued for that particular transaction. This consistency keeps the data view the same for every user, whether they are just running queries or changing other data.

Each transaction is identified by a System Change Number (SCN), which represents a point in time; transaction flags record the state of the transaction. Until a change is committed, the only end user who can see the changed data is the one making the change, no matter the application used.

The SCN advances as a sequential counter for every change to the database, identifying a certain point in time. The SCN tracks more than just single transactions by end users. These transactions will be in Data Definition Language (DDL) or Data Manipulation Language (DML). DDL statements are associated with creating objects (create table), also called metadata. DML statements are the other commands mentioned earlier (insert, update, delete, among others) that manipulate the data in some way. The RDBMS also advances the SCN when another person logs in, reconnects, or alters their session, and when Oracle background processes (which constantly check the state of activity inside the database) run.
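You can watch the SCN advance yourself. The following sketch assumes you have SELECT privilege on v$database (the v$database query works in 10g and later); running either statement twice a few seconds apart will show a larger value the second time:

```sql
-- Current SCN as recorded in the controlfile view
SELECT current_scn FROM v$database;

-- The same value via the supplied package
SELECT dbms_flashback.get_system_change_number FROM dual;
```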

It is undo that gives everyone a point-in-time consistent view of the data, which is called Read Consistency. There are controls created from business rules within the application called triggers and integrity constraints that validate the data entered by the user. Database locks control access to data during changes for exclusive access by the end user changing it.

During a delete or update statement:

  • The data block is read, loading it into a memory structure called a buffer cache
  • The redo log buffer will contain the corresponding delete or update statement
  • An entry in the undo segment header block is created for this transaction
  • It also copies the delete or update row into an undo block
  • For a delete, the row is removed from the data block and that block is marked as dirty
  • Locks keep exclusive use of that block until a commit or rollback occurs
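The steps above can be observed from the session making the change. This is a sketch only: the tracks table and its columns are hypothetical stand-ins for the race car track sample data used later, and v$transaction requires the appropriate privileges:

```sql
-- Start a transaction but do not commit:
UPDATE tracks SET track_name = 'Daytona' WHERE track_id = 1;

-- While the transaction is open, its undo segment entry is
-- visible in v$transaction (one row per active transaction):
SELECT xidusn, xidslot, xidsqn, status, used_ublk
  FROM v$transaction;
```

The USED_UBLK column shows how many undo blocks the transaction has consumed so far.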

Dirty is an internal designation where the block is identified as having changed data that has not been written to disk. The RDBMS needs to track this information for transactional integrity and consistency. The underlying dynamic performance view v$bh indicates when a particular block is dirty, as seen by the following query:

SYS@ORCL11>select file#, block# from v$bh where dirty='Y';

When a transaction is committed by the end user:

  • The transaction SCN is updated in the data block and the undo segment header marks that statement as committed in the header section of the undo block.
  • The logwriter process (LGWR) will flush the log buffer to the appropriate online redo log file.
  • SCN is changed on the data block if it is still in the buffer cache (fast commit).
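Continuing the earlier sketch, you can confirm from the same session that the commit clears the transaction entry (assuming no other transactions are active in your test instance):

```sql
COMMIT;

-- The active-transaction entry is gone after the commit:
SELECT COUNT(*) FROM v$transaction;
```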

Delayed block cleanout occurs when some of the changed blocks have not yet had their SCN updated to indicate that the commit occurred. This can cause problems for a transaction that is updating large numbers of rows if a rollback needs to occur: the transaction holds its exclusive lock until the rollback is finished, causing end users to wait.

The delayed block cleanout process does occasionally cause problems that would require opening an Oracle Support Request. Delayed block cleanout was implemented to save time by reducing the number of disk reads to update the SCN until the RDBMS needs to access data from that same block again. If the changed block has already been written to the physical disk and the Oracle background process encounters this same block (for any other query, DML, or DDL), it will also record the committed change at the same time. It does this by checking the transaction entry by SCN in the undo header, which indicates the changes that have been committed. That transaction entry is located in the transaction table, which keeps track of all active transactions for that undo segment.

Each transaction is uniquely identified by the assignment of a transaction ID (XID), which is found in the v$transaction view. This XID is written in the undo header block along with the Undo Byte Address (UBA), whose data file and data block numbers appear in the v$transaction view as the UBAFIL and UBABLK columns, respectively.
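While a transaction is open in some session, you can map its XID to the undo file and block holding its undo records. A sketch, assuming SELECT privilege on v$transaction:

```sql
-- One row per active transaction; UBAFIL/UBABLK locate the
-- current undo block, UBASQN is the undo sequence number
SELECT xid, ubafil, ubablk, ubasqn, used_ublk
  FROM v$transaction;
```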

Please take the time to go through the following demonstration; it will solidify the complex concepts in this article.

Demonstration of data travel path

Dumping a block is one of the methods to show how data is stored. It will show the actual contents of the block, whether it is a Table or Index Block, and an actual address that includes the data file number and block number. Remember from the concepts manual that several blocks together make up an extent, and extents then make up segments. A single segment maps to a particular table or index. Different extents can be stored in different physical locations in different data files, but all within the same logical tablespace.
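The dump itself is produced with an ALTER SYSTEM command. The file and block numbers below are placeholders; you would substitute the ones belonging to your own segment:

```sql
-- Dump a single block: data file 4, block 135
ALTER SYSTEM DUMP DATAFILE 4 BLOCK 135;

-- Dump a contiguous range of blocks from the same data file
ALTER SYSTEM DUMP DATAFILE 4 BLOCK MIN 135 BLOCK MAX 142;
```

The output goes to a trace file, whose location is covered later in this article.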

The data in the test case comes from creating a small table (segment) with minimum data in a tablespace with a single data file created just for this demonstration. Automatic Segment Space Management (ASSM) is the default in 11g. If you create a tablespace in 11g with none of the optional storage parameters, the RDBMS by default creates an ASSM segment with locally managed autoallocated extents.

It is possible to define the size of the extents at tablespace creation time, depending on the type of data to be stored. If all of the data is uniform and you need to maintain strict control over the amount of space used, then uniform extents are desirable.

Allowing the RDBMS to autoallocate extents is typical in situations where the data is not the same size for each extent, reducing the amount of time spent in allocating and maintaining space for database segments. Discussing the details, options, and differences for all of the ways to manage segment space in Oracle Database 11g is beyond the scope of this article.
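The two extent-allocation styles look like this at tablespace creation time. This is a sketch only: the tablespace names, data file paths, and sizes are hypothetical and should be adjusted to your environment:

```sql
-- Locally managed tablespace with autoallocated extents
-- (the 11g default when no storage options are given):
CREATE TABLESPACE demo_auto
  DATAFILE '/u01/oradata/NEWDB/demo_auto01.dbf' SIZE 10M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;

-- Locally managed tablespace with uniform 1 MB extents:
CREATE TABLESPACE demo_uniform
  DATAFILE '/u01/oradata/NEWDB/demo_uniform01.dbf' SIZE 10M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  SEGMENT SPACE MANAGEMENT AUTO;
```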

For this example, we will be using race car track information as the sample data. For this demonstration, you will create a specific user with the minimum amount of privileges needed to complete this exercise; SQL is provided for that step in the script.
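A minimal-privilege user for an exercise like this might be created as follows. This is a hypothetical sketch only; the authoritative statements are in the dumpblock_sys.sql script that accompanies the article, and the username, password, and tablespace here are assumptions:

```sql
-- Hypothetical minimal user; the article's script is authoritative
CREATE USER ttracker IDENTIFIED BY ttracker
  DEFAULT TABLESPACE users
  QUOTA UNLIMITED ON users;

GRANT CREATE SESSION, CREATE TABLE TO ttracker;
```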

There are several key files in the zipped code for this article that you will need for this exercise, listed as follows:

  • dumpblock_sys.sql
  • dumpblock_ttracker.sql
  • dumpblocksys.lst
  • dumpblockttracker.lst
  • NEWDB_ora_8582_SYSDUMP1.rtf
  • NEWDB_ora_8582_SYSDUMP1.txt
  • NEWDB_ora_8621_SYSDUMP2.rtf
  • NEWDB_ora_8621_SYSDUMP2.txt
  • NEWDB_ora_8628_SYSDUMP3.rtf
  • NEWDB_ora_8628_SYSDUMP3.txt
  • NEWDB_ora_8635_SYSDUMP4.rtf
  • NEWDB_ora_8635_SYSDUMP4.txt

You will also need access to a conversion calculator to translate hexadecimal values to decimal numbers (use hexadecimal input and decimal output), as well as a reference for looking up the hexadecimal (Hex) equivalents of characters.

Location of trace files

The dump block statement will create a trace file in the user dump (udump) directory on any version prior to 11gR1, which can be viewed with a text editor. In 11gR1 and above, you will find it under the diag directory. This example will demonstrate how to use the adrci command-line utility to view trace files. First we set the home path where the utility will find the files, then list them with the most recent first; in this case, it is the NEWDB_ora_9980.trc file.
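An adrci session for those two steps might look like the following sketch. The homepath below assumes a database named NEWDB; substitute the path shown by adrci's own `show homes` command for your instance:

```shell
$ adrci
adrci> set homepath diag/rdbms/newdb/NEWDB
adrci> show tracefile -rt
```

The -rt flag lists trace files in reverse timestamp order, so the most recently written file appears first.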

Now that you know the location for the trace files, how do you determine which trace file was produced? The naming convention for trace files includes the actual process number associated with that session. Use the following command to produce trace files with a specific name, making it easier to identify a separate task:
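The command in question sets a session-level trace file identifier; the string you supply is appended to the trace file name, which is how the SYSDUMP1 through SYSDUMP4 suffixes in the files listed earlier were produced:

```sql
-- Tag all trace files from this session with a recognizable suffix
ALTER SESSION SET tracefile_identifier = 'SYSDUMP1';

-- A subsequent dump block command then writes to a file named
-- like NEWDB_ora_<pid>_SYSDUMP1.trc
```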


