Best Practices for Microsoft SQL Server 2008 R2 Administration


Microsoft SQL Server 2008 R2 Administration Cookbook


Readers will benefit from referring to the previous article, Managing the Core Database Engine, since the following recipes build on it.

Implementing Utility & Non-utility collection sets

The Utility information data collection set is installed and automatically started on each instance of SQL Server 2008 R2 when you complete the Utility Control Point (UCP) setup, as we saw in the previous article. The data is stored in the UMDW database, which is created during UCP creation. The SQL Server Utility collection set is supported side by side with non-SQL Server Utility collection sets, such as those managed by the Data Collector. In this recipe, we will go through the implementation tasks to set up UCP data collection sets for the utility and non-utility categories.

SQL Server 2008 R2 introduces the Utility Control Point (UCP) with a set of pre-defined utility collection sets that are managed by the UMDW. Similarly, SQL Server 2008 manages data collection to monitor the CPU, disk, and memory resources of an instance using a Data Collector that is managed by the Management Data Warehouse (MDW). For this recipe, it is necessary to introduce the MDW feature, which serves as a non-utility collection set. The Management Data Warehouse is a relational database that holds all of the retained collected data. This database can be on the same system as the data collector, or it can be on another computer. An MDW collection set runs in one of the following collection and upload modes:

  • Non-cached mode: Data collection and upload are on the same schedule. The packages start, collect, and upload data at their configured frequency, and run until they are finished. After the packages finish, they are unloaded from memory.
  • Cached mode: Data collection and upload are on different schedules. The packages collect and cache data until they receive a signal to exit from a loop control-flow task. This ensures that the data flow can be executed repeatedly, which enables continuous data collection.
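As a hedged illustration of switching between these modes (the collection set name below is the pre-defined "Server Activity" system set; adjust it to your configuration), the Data Collector stored procedures in msdb can change a collection set's mode:

```
-- Switch a collection set to cached mode.
-- @collection_mode: 0 = cached, 1 = non-cached.
USE msdb;
GO
EXEC dbo.sp_syscollector_update_collection_set
    @name = N'Server Activity',
    @collection_mode = 0;
GO
```

Cached mode is typically used for high-frequency collection items, since data is uploaded on a separate, less frequent schedule.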

Getting ready

The new feature of SQL Server 2008 R2—Utility Control Point (UCP)—allows DBAs to set up and collect the utility collection sets. Once the instances are enrolled, the default capacity policies of utilization across the instances or applications are set. It is essential to check that you are using a SQL Server 2008 R2 instance to register the UCP to design the multi-server management feature.

How to do it…

Using SQL Server Management Studio, these are the steps to implement the utility and non-utility data collection sets:

  1. To implement the utility data collection sets, connect to the Utility Explorer where the UCP is registered.
  2. Right-click on Managed Instances and choose Enroll instance (refer to the next screenshot).
  3. Specify the instance name of SQL Server to enroll.
  4. Specify the service account to run the utility collection set.

    To specify the account used to collect data, you can choose the SQL Server Agent service account; however, as a security precaution, it is recommended to use a new or existing domain user account with only the required privileges.

  5. Review prerequisite validation results and selections.
  6. Enroll the instance.

  7. After completing the Enroll Instance wizard, click on the Managed Instances node in the Utility Explorer navigation pane.
  8. On the right-hand side of the Utility Explorer content pane, the enrolled SQL Server instances are displayed.
  9. Next, to implement the non-utility collection sets, from the SSMS tool, use the Configure Management Data Warehouse wizard to configure storage for collected data.
  10. Create the management data warehouse. You can install the management data warehouse on the same instance of SQL Server that runs the data collector for the utility collection set.
  11. Select the configuration task to install the predefined System Data collection sets.
  12. Configure the MDW storage by selecting the SQL Server instance to host and collect the non-utility collection sets.
  13. Map logins to management data warehouse roles.
  14. Once you have completed the MDW wizard, the data collection information for utility and non-utility collection sets are displayed under the Management folder, as shown in the next screenshot:

    (Screenshot: utility and non-utility collection sets under the Management folder)

  15. Before we proceed to enable the data collection, it is essential to restart and upload the non-utility collection sets to the Data Collection.
  16. To upload and pass a validation of non-utility collection sets, execute the following TSQL from Query Editor:

    EXEC msdb.dbo.sp_syscollector_set_warehouse_database_name NULL;

    EXEC msdb.dbo.sp_syscollector_set_warehouse_instance_name NULL;

  17. Under the Management folder, right-click on Data Collection and choose Enable the data collector from SSMS, which is shown in the following screenshot:

    (Screenshot: enabling the data collector from the Data Collection node in SSMS)

  18. Once we have completed the MDW wizard, the data collection information will be stored in the data warehouse databases.

To ensure that both the utility and non-utility collection sets exist, review the Data Collection option in SSMS, as shown in the preceding screenshot. This completes the successful implementation of utility and non-utility collection sets on the same instance.
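As a quick verification from Query Editor (a sketch; the set of rows returned depends on your configuration), the configured collection sets and their state can be listed from the Data Collector catalog views in msdb:

```
-- List all Data Collector collection sets and whether each is running.
USE msdb;
GO
SELECT name, collection_mode, is_running
FROM dbo.syscollector_collection_sets;
GO
```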

How it works…

The utility data collection sets are installed and automatically started on each instance of SQL Server 2008 R2 when they are configured using Utility Control Point. The UMDW database is created on the instance where UCP is configured and the following collection set and items are stored:

  • Utility Information—DAC Information
  • Utility Information—SMO Information
  • Utility Information—Utility Allocated CPU Info
  • Utility Information—Utility CPU-Memory Related Info
  • Utility Information—Utility Database Files Info
  • Utility Information—Utility Performance Counters Items
  • Utility Information—Utility Performance Counters Items1
  • Utility Information—Utility Volumes Information

The non-utility data collection sets are installed when the MDW wizard is completed, but they are not started until they are enabled. The required schemas and their objects for the pre-defined system collection sets are created when MDW is configured. The various UCP and MDW jobs are created under the SQL Server Agent | Jobs folder, as follows:

  • collection_set_1_noncached_collect_and_upload
  • collection_set_2_collection
  • collection_set_2_upload
  • collection_set_3_collection
  • collection_set_3_upload
  • collection_set_4_noncached_collect_and_upload
  • mdw_purge_data_[MDW]
  • sysutility_get_cache_tables_data_into_aggregate_tables_daily
  • sysutility_get_views_data_into_cache_tables
  • sysutility_mi_collect_performance
  • sysutility_get_cache_tables_data_into_aggregate_tables_hourly
  • syspolicy_purge_history
  • sysutility_mi_collect_and_upload
  • mdw_purge_data_[sysutility_mdw]

The core schema is prefixed by ‘core’, which describes the tables, stored procedures, and views that are used to manage and identify the collected data. These objects are locked and can only be modified by the owner of the MDW database.

The parallel management of SQL Server collection sets (utility and non-utility) requires preparation on the instance where the UCP information is stored; the best practice is to customize the data collection frequency to avoid any overlap with the MDW data collection schedule. The data collection store contains server activity for all the instances that are configured to manage, and it hosts operating system metrics such as percent CPU, memory usage, disk I/O usage, and network usage, along with SQL Server waits and SQL Server activity.
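As an example of that frequency customization (hedged: the item name below is one of the pre-defined Server Activity items, and the 1200-second interval is illustrative), the Data Collector procedures let you change a collection item's interval so it does not coincide with the MDW upload schedule:

```
-- Change a collection item's frequency to every 20 minutes (1200 seconds)
-- to avoid overlapping the MDW upload schedule.
USE msdb;
GO
EXEC dbo.sp_syscollector_update_collection_item
    @name = N'Server Activity - DMV Snapshots',
    @frequency = 1200;
GO
```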

Designing and refreshing a Scalable Shared database

The Scalable Shared Database (SSD) feature in SQL Server 2008 R2 allows DBAs to scale out a read-only database (a reporting database), which is a copy of a production database built exclusively for reporting purposes. The SSD feature has been part of SQL Server since SQL Server 2005 Enterprise Edition, was enhanced in SQL Server 2008, and is supported in the Enterprise and Datacenter editions only. To host this reporting database, the disk volumes must be dedicated and read-only, and the scalable shared database feature permits a smooth update process from the production database to the reporting database.

The internals behind building or refreshing a reporting database are known as the build phase or the refresh phase, depending on whether a new reporting database is being built or a stale one is being refreshed. The life cycle of a scalable shared database begins with building a reporting database on a set of reporting volumes; that reporting data eventually becomes too outdated to be useful, which means the stale database requires a data refresh as part of each update cycle. Refreshing a stale reporting database involves either updating its data or building a completely new, fresh version of the database. This recipe will cover how to design and refresh a reporting database that is intended for use as a scalable shared database.

Getting ready

Keeping the reporting database refreshed is a prerequisite as part of each update cycle. The key aspect of having an updated reporting database can be achieved by using the data-copy method, which requires the following:

  • Create or copy a database by designing an SSIS package that uses the Execute SQL Task or the Transfer Database task.
  • From SSMS, use SQL Server Import and Export wizard to copy required objects for the reporting purpose.
  • Restore a backup of the production database into the reporting volume, which will involve a full database backup file to be used.
  • The essential components, such as SAN storage hardware, the processing environment, and the data access environment, must be in place. The reporting database must have the same layout as the production database, so use the same drive letter for the reporting volume and the same directory path for the database files.

Additionally, verify that the reporting servers and the associated reporting database are running on identical platforms.
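For the restore-based data-copy method, a minimal sketch looks like the following (hedged: the backup path, logical file names, and the E: reporting volume are assumptions for illustration; the target path must match the production layout):

```
-- Restore a full backup of the production database onto the reporting volume,
-- keeping the same directory path as the production layout.
RESTORE DATABASE AdventureWorks2008R2
FROM DISK = N'\\BackupServer\Backups\AdventureWorks2008R2.bak'
WITH
    MOVE N'AdventureWorks2008R2_Data'
        TO N'E:\SQLData\AdventureWorks2008R2_Data.mdf',
    MOVE N'AdventureWorks2008R2_Log'
        TO N'E:\SQLData\AdventureWorks2008R2_Log.ldf',
    REPLACE;
GO
```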

How to do it…

To design and refresh a reporting database, you will need to complete the following steps on the production SQL Server instance:

  1. Unmask the Logical Unit Number (LUN) on the disks where the Production database is stored. (Refer to the hardware vendor’s manual).
  2. Mount each reporting volume and mark it as read-write.
  3. Obtain the disk volume information. Log on remotely to the server, open a command prompt window, start the DiskPart utility, and list the volumes at its prompt:

    DiskPart
    DISKPART> list volume

  4. Use the DiskPart utility to mount the volumes, then on that command prompt window run the following:

    DISKPART

  5. The DiskPart utility will open a prompt for you to enter the following commands:

    DISKPART> select volume=<drive-number>

    DISKPART> assign letter=<drive-letter>

    DISKPART> attributes volume clear readonly

    DISKPART> exit

    The <drive-number> is the volume number assigned by the Windows operating system.
    The <drive-letter> is the letter assigned to the reporting volume.

  6. To ensure that data files are accessible and disks are correctly mounted, list the contents of the directory using the following command from the command prompt:

    DIR <drive-letter>:\<database directory>

  7. As we are refreshing an existing reporting database, attach the database to the server instance using SSMS, and then, in Query Editor, run the following T-SQL statements to make it updatable:

    ALTER DATABASE AdventureWorks2008R2 SET READ_WRITE
    GO
    ALTER DATABASE AdventureWorks2008R2
    SET RECOVERY FULL, PAGE_VERIFY CHECKSUM;
    GO

  8. Detach the database from that server instance using the sp_detach_db stored procedure from Query Editor.
  9. Now, we have to mark each reporting volume as read-only and dismount from the server. Go to the command prompt window and enter the following commands:

    DiskPart
    DISKPART> select volume=<drive-number>
    DISKPART> attributes volume set readonly
    DISKPART> remove
    DISKPART> exit

  10. To ensure that the reporting volume is read-only, you should attempt to create a file on the volume. This attempt must return an error.
  11. Next, go to the command prompt window and enter the following commands:

    DiskPart
    DISKPART> select volume=<drive-number>
    DISKPART> assign letter = <drive letter>
    DISKPART> exit

    The <drive-letter> is the letter assigned to the reporting volume.

  12. Attach the database to one or more server instances on each of the reporting servers using the sp_attach_db stored procedure or the SSMS tool.
  13. Now, the reporting database is made available as a scalable shared database to process the queries from the application.
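On each reporting server, the attach step can be sketched as follows (hedged: the database name and file paths are illustrative; CREATE DATABASE ... FOR ATTACH is the documented alternative to the deprecated sp_attach_db, and because the volume is read-only, the database comes online as read-only):

```
-- Attach the reporting database from the read-only reporting volume.
CREATE DATABASE AdventureWorks2008R2
ON (FILENAME = N'E:\SQLData\AdventureWorks2008R2_Data.mdf'),
   (FILENAME = N'E:\SQLData\AdventureWorks2008R2_Log.ldf')
FOR ATTACH;
GO
```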

How it works…

Using the available hardware vendor-specific servers and disk volumes, the scalable shared database feature allows the application to scale out a read-only database built exclusively for reporting purposes.

The ‘build’ phase is the process of mounting the reporting volume on the production server and building the reporting database. After the reporting database is built on the volume, using the defined data-copy methods, the data is updated. Once it is completed, the process of setting each reporting volume to read-only and dismount begins.

The ‘attach’ phase is the process of making the reporting database available as a scalable shared database. After the reporting database is built on a set of reporting volumes, the volumes are marked as read-only and mounted across multiple reporting servers. The individual reporting server service instance will use the reporting database that is attached.

There’s more…

The Scalable Shared Database feature’s best practice recommendation:

  • In terms of hardware, there is no limit on the number of server instances per database; however, for a shared database configuration, host a maximum of eight servers per database.
  • The SQL Server instance collation and sort order must be identical across all the instances.
  • If the relational or reporting database is spread across the shared servers, ensure that you test and deploy either a synchronized update or a rolling update of the scalable shared database.
  • Also, scaling out this solution is possible in SQL Server 2008 Analysis Services with the Read-Only Database capability.
