
Before you even think about backups, you need to understand the recovery model your database runs under, because it determines what SQL Server can recover after a failure. In this article, we will learn about SQL Server recovery models, backup, and restore.

SQL Server recovery models

A recovery model determines how data can be recovered in the event of a server failure: it defines how much information SQL Server writes to the transaction log file for recovery purposes.

SQL Server has three database recovery models:

  • Simple recovery model
  • Full recovery model
  • Bulk-logged recovery model

Simple recovery model

This model is typically used for small databases and scenarios where data changes are infrequent. It can only restore the database to the point at which the last backup was created, which means that all changes made after that backup are lost and must be recreated manually. The major benefit of this model is that the log file takes up only a small amount of storage space. Whether and when to use it depends on the business scenario.
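
One practical consequence worth knowing: under the simple recovery model, transaction log backups are not allowed at all, which is why point-in-time recovery is impossible. As a quick sketch (the Sales database and path are hypothetical), attempting a log backup fails with error 4208:

1> ALTER DATABASE Sales SET RECOVERY SIMPLE
2> GO
1> BACKUP LOG Sales
2> TO DISK = '/var/opt/mssql/data/Sales-log.bak'
3> GO

SQL Server rejects the BACKUP LOG statement and suggests taking a database backup or changing the recovery model instead.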

Full recovery model

This model is recommended when recovery from damaged storage is the highest priority and data loss must be minimal. SQL Server uses copies of the database and log files to restore the database. The database engine logs all changes to the database, including bulk operations and most DDL statements.

If the transaction log file is not damaged, SQL Server can recover all data except any transactions that were in process at the time of failure (that is, not yet committed to the database file). Because every transaction is logged, you get the option of point-in-time recovery, which is a really cool feature.

The major limitation of this model is the large size of the log files, which can lead to performance and storage issues. Use it only in scenarios where every insert matters and loss of data is not an option.
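
One thing to watch: under the full recovery model, the transaction log is truncated only when you back it up; full database backups alone will not stop it from growing. The usual remedy is a scheduled log backup. A minimal sketch (database name and path are illustrative):

1> BACKUP LOG Sales
2> TO DISK = '/var/opt/mssql/data/Sales-log.bak'
3> GO

Run on a schedule, this keeps the log chain intact for point-in-time recovery while letting SQL Server reuse log space.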

Bulk-logged recovery model

This model sits somewhere between simple and full. It uses database and log backups to recreate the database. Compared to the full recovery model, it uses less log space for CREATE INDEX and bulk load operations such as SELECT INTO. For example, SELECT INTO can load a table with 1,000,000 records in a single statement; the log records only the occurrence of such operations, not the details. This approach uses less storage space than the full recovery model.

The bulk-logged recovery model is a good fit for databases that are used for ETL processes and data migrations.
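
A common pattern is to switch a database to bulk-logged only for the duration of a big load and then switch back to full, taking a log backup on each side to protect the log chain. A sketch, with illustrative database and table names:

1> BACKUP LOG Sales TO DISK = '/var/opt/mssql/data/Sales-log-before.bak'
2> GO
1> ALTER DATABASE Sales SET RECOVERY BULK_LOGGED
2> GO
1> SELECT * INTO dbo.OrdersStaging FROM dbo.Orders
2> GO
1> ALTER DATABASE Sales SET RECOVERY FULL
2> GO
1> BACKUP LOG Sales TO DISK = '/var/opt/mssql/data/Sales-log-after.bak'
2> GO

Keep in mind that a log backup containing minimally logged operations cannot be used for point-in-time recovery with STOPAT; you can only restore it in full.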

SQL Server has a system database called model. This database is the template for every new database you create: if you use the CREATE DATABASE statement without any additional parameters, SQL Server simply copies the model database with all its properties and metadata, including the default recovery model, which is full. The conclusion is that each new database will be in full recovery model unless you change it, which you can do during or after creation.
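
If you only need to check a single database, DATABASEPROPERTYEX is quicker than querying sys.databases. On a default installation, the following should return FULL for model:

1> SELECT DATABASEPROPERTYEX('model', 'Recovery') AS recovery_model
2> GO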

Here is a SQL statement to check the recovery models of all the databases on your SQL Server on Linux instance:

1> SELECT name, recovery_model, recovery_model_desc
2> FROM sys.databases
3> GO
name                     recovery_model recovery_model_desc
------------------------ -------------- -------------------
master                   3              SIMPLE
tempdb                   3              SIMPLE
model                    1              FULL
msdb                     3              SIMPLE
AdventureWorks           3              SIMPLE
WideWorldImporters       3              SIMPLE

(6 rows affected)

The following DDL statement will change the recovery model for the model database from full to simple:

1> USE master
2> ALTER DATABASE model
3> SET RECOVERY SIMPLE
4> GO

If you now execute the SELECT statement again to check recovery models, you will notice that model now has different properties.

Backup and restore

Now it’s time for SQL coding and implementing backup/restore operations in our own environments.

1. First, let’s create a full database backup of our University database:

1> BACKUP DATABASE University
2> TO DISK = '/var/opt/mssql/data/University.bak'
3> GO
Processed 376 pages for database 'University', file 'University' on file 1.
Processed 7 pages for database 'University', file 'University_log' on file 1.
BACKUP DATABASE successfully processed 383 pages in 0.562 seconds (5.324 MB/sec)
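
Before relying on that file, it is good practice to confirm that the backup is readable. RESTORE VERIFYONLY checks the backup media without actually restoring anything; if the file is intact, SQL Server reports that the backup set is valid:

1> RESTORE VERIFYONLY
2> FROM DISK = '/var/opt/mssql/data/University.bak'
3> GO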

2. Now let’s check the contents of the Students table:

1> USE University
2> GO
Changed database context to 'University'
1> SELECT LastName, FirstName
2> FROM Students
3> GO
LastName        FirstName
--------------- ----------
Azemovic        Imran
Avdic           Selver
Azemovic        Sara
Doe             John

(4 rows affected)

3. As you can see, there are four records. Let’s now simulate a large import from the AdventureWorks database’s Person.Person table. We will filter the PhoneNumber data so that it fits into our 13-character nvarchar column. But first we will drop the unique index UQ_user_name, so that the repeated user.name value does not violate it and we can quickly import a large amount of data.

1> DROP INDEX UQ_user_name
2> ON dbo.Students
3> GO
1> INSERT INTO Students (LastName, FirstName, Email, Phone, UserName)
2> SELECT T1.LastName, T1.FirstName, T2.PhoneNumber, NULL, 'user.name'
3> FROM AdventureWorks.Person.Person AS T1
4> INNER JOIN AdventureWorks.Person.PersonPhone AS T2
5> ON T1.BusinessEntityID = T2.BusinessEntityID
6> WHERE LEN(T2.PhoneNumber) < 13
7> AND LEN(T1.LastName) < 15 AND LEN(T1.FirstName) < 10
8> GO
(10661 rows affected)

4. Let’s check the new row count:

1> SELECT COUNT(*) FROM Students
2> GO
-----------
10665

(1 rows affected)

Note: As you can see, the table now has 10,665 rows (10,661 + 4). But don’t forget that we created a full database backup before the import procedure.

5. Now, we will create a differential backup of the University database:

1> BACKUP DATABASE University
2> TO DISK = '/var/opt/mssql/data/University-diff.bak'
3> WITH DIFFERENTIAL
4> GO
Processed 216 pages for database 'University', file 'University' on file 1.
Processed 3 pages for database 'University', file 'University_log' on file 1.
BACKUP DATABASE WITH DIFFERENTIAL successfully processed 219 pages in 0.365 seconds (4.676 MB/sec).
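
If you ever forget what a .bak file contains, RESTORE HEADERONLY lists every backup set inside it, including its type (full, differential, or log). The FILE = n numbers used during a restore refer to these backup sets:

1> RESTORE HEADERONLY
2> FROM DISK = '/var/opt/mssql/data/University-diff.bak'
3> GO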

6. If you want to see the state of the .bak files on disk, exit to bash and list them with ls -lh /var/opt/mssql/data/U*.bak. However, first enter superuser mode with sudo su; this is necessary because a regular user does not have access to the data folder.

7. Now let’s test a transaction log backup of the University database’s log file. However, first you will need to make some changes inside the Students table:

1> UPDATE Students
2> SET Phone = 'N/A'
3> WHERE Phone IS NULL
4> GO
1> BACKUP LOG University
2> TO DISK = '/var/opt/mssql/data/University-log.bak'
3> GO
Processed 501 pages for database 'University', file 'University_log' on file 1.
BACKUP LOG successfully processed 501 pages in 0.620 seconds (6.313 MB/sec)
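
This log backup is what makes point-in-time recovery possible. As a preview of the restore syntax used in the next steps, here is a sketch (the timestamp is illustrative): restore the full backup WITH NORECOVERY, then apply the log backup and stop at the desired moment:

1> RESTORE DATABASE University
2> FROM DISK = '/var/opt/mssql/data/University.bak'
3> WITH NORECOVERY, REPLACE
4> RESTORE LOG University
5> FROM DISK = '/var/opt/mssql/data/University-log.bak'
6> WITH STOPAT = '2017-08-15 10:30:00', RECOVERY
7> GO

This works here because it is the first log backup after the full backup, so it covers everything between the two.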

Note: The next steps test the restore options for the full and differential backup procedures.

8. First, restore the full database backup of the University database. Remember that the Students table had four records before the first backup, and that it currently has 10,665 (as we checked in step 4):

1> ALTER DATABASE University
2> SET SINGLE_USER WITH ROLLBACK IMMEDIATE
3> RESTORE DATABASE University
4> FROM DISK = '/var/opt/mssql/data/University.bak'
5> WITH REPLACE
6> ALTER DATABASE University SET MULTI_USER
7> GO
Nonqualified transactions are being rolled back. Estimated rollback
completion: 0%.
Nonqualified transactions are being rolled back. Estimated rollback
completion: 100%.
Processed 376 pages for database 'University', file 'University' on
file 1.
Processed 7 pages for database 'University', file 'University_log'
on file 1.
RESTORE DATABASE successfully processed 383 pages in 0.520 seconds
(5.754 MB/sec).

Note: Before the restore procedure, the database is switched to single-user mode. This way we close all connections that could interfere with the restore procedure. In the last step, we switch the database back to multi-user mode.

9. Let’s check the number of rows again. You will see the database is restored to its initial state, before the import of more than 10,000 records from the AdventureWorks database:

1> SELECT COUNT (*) FROM Students
2> GO
-------
4
(1 rows affected)

10. Now it’s time to restore the content of the differential backup and return the University database to its state after the import procedure:

1> USE master
2> ALTER DATABASE University
3> SET SINGLE_USER WITH ROLLBACK IMMEDIATE
4> RESTORE DATABASE University
5> FROM DISK = N'/var/opt/mssql/data/University.bak'
6> WITH FILE = 1, NORECOVERY, NOUNLOAD, REPLACE, STATS = 5
7> RESTORE DATABASE University
8> FROM DISK = N'/var/opt/mssql/data/University-diff.bak'
9> WITH FILE = 1, NOUNLOAD, STATS = 5
10> ALTER DATABASE University SET MULTI_USER
11> GO
Processed 376 pages for database 'University', file 'University' on
file 1.
Processed 7 pages for database 'University', file 'University_log'
on file 1.
RESTORE DATABASE successfully processed 383 pages in 0.529 seconds
(5.656 MB/sec).
Processed 216 pages for database 'University', file 'University' on
file 1.
Processed 3 pages for database 'University', file 'University_log'
on file 1.
RESTORE DATABASE successfully processed 219 pages in 0.309 seconds
(5.524 MB/sec).
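
The same pattern extends to the transaction log backup from step 7: restore the full and differential backups WITH NORECOVERY, then apply the log backup last, WITH RECOVERY (add STOPAT if you want to stop at a specific time). A sketch, omitting the single-user/multi-user switching shown above:

1> RESTORE DATABASE University
2> FROM DISK = N'/var/opt/mssql/data/University.bak'
3> WITH NORECOVERY, REPLACE
4> RESTORE DATABASE University
5> FROM DISK = N'/var/opt/mssql/data/University-diff.bak'
6> WITH NORECOVERY
7> RESTORE LOG University
8> FROM DISK = N'/var/opt/mssql/data/University-log.bak'
9> WITH RECOVERY
10> GO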

Next, we’ll look at a really cool feature of SQL Server: backup compression. A backup can be a very large file, and if a company creates backups on a daily basis, you can do the math on the amount of storage required.

Disk space is cheap today, but it is not free. As a database administrator on SQL Server on Linux, you should consider every possible option to optimize and save money. Backup compression is exactly that kind of feature: it compresses the backup as it is written, much like running the file through ZIP or RAR afterwards, so you save time, space, and money.

Let’s consider a full database backup of the University database. The uncompressed file is about 3 MB; after we create a new one with compression, the size should be reduced. The compression ratio depends mostly on the data types inside the database. Compression is not a magic wand, but it can save space.

The following SQL command will create a full database backup of the University database and compress it:

1> BACKUP DATABASE University
2> TO DISK = '/var/opt/mssql/data/University-compress.bak'
3> WITH NOFORMAT, INIT, SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10
4> GO
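
If you do not want to remember WITH COMPRESSION every time, you can change the instance-wide default so that all backups are compressed unless stated otherwise. A sketch using sp_configure:

1> EXEC sp_configure 'backup compression default', 1
2> GO
1> RECONFIGURE
2> GO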

Now exit to bash, enter superuser mode, and run the following ls command to compare the sizes of the backup files:

tumbleweed:/home/dba # ls -lh /var/opt/mssql/data/U*.bak

[Screenshot: ls output listing the University .bak files]

As you can see, the compressed backup is about 676 KB, around five times smaller than the uncompressed one. That is a huge space saving without any additional tools.
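
You can also compare backup sizes without leaving SQL. The msdb.dbo.backupset history table records both the native and the compressed size of every backup:

1> SELECT TOP (5) database_name, backup_size, compressed_backup_size
2> FROM msdb.dbo.backupset
3> ORDER BY backup_finish_date DESC
4> GO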

We have learned about SQL Server recovery models and how to efficiently back up and restore our databases. SQL Server on Linux also offers a security feature for backups; you can learn more about it, transaction logs, and the elements of a backup strategy in the book SQL Server on Linux.

Also, check out:

SQL Server on Linux
