
In this two-part article by Chris Webb, we will cover query performance tuning, including how to design aggregations and partitions and how to write efficient MDX. The first part will cover performance-specific design features, along with the concepts of partitions and aggregations.

One of the main reasons for building Analysis Services cubes as part of a BI solution is that you should get better query performance than if you were querying your relational database directly. While it’s certainly the case that Analysis Services is very fast, it would be naive to think that all of our queries, however complex, will return in seconds without any tuning being necessary. This article will describe the steps you’ll need to go through in order to ensure your cube is as responsive as possible.

How Analysis Services processes queries

Before we start to discuss how to improve query performance, we need to understand what happens inside Analysis Services when a query is run. The two major parts of the Analysis Services engine are:

  • The Formula Engine processes MDX queries, works out what data is needed to answer them, requests that data from the Storage Engine, and then performs all calculations needed for the query.
  • The Storage Engine handles all reading and writing of data; it fetches the data that the Formula Engine requests when a query is run and aggregates it to the required granularity.

When you run an MDX query, then, that query goes first to the Formula Engine where it is parsed; the Formula Engine then requests all of the raw data needed to answer the query from the Storage Engine, performs any calculations on that data that are necessary, and then returns the results to the user in a cellset. There are numerous opportunities for performance tuning at all stages of this process, as we’ll see.

Performance tuning methodology

When doing performance tuning there are certain steps you should follow to allow you to measure the effect of any changes you make to your cube, its calculations or the query you’re running:

  • Wherever possible, test your queries in an environment that is identical to your production environment. Otherwise, ensure that the size of the cube and the server hardware you’re running on are at least comparable, and that you’re running the same build of Analysis Services.
  • Make sure that no-one else has access to the server you’re running your tests on. You won’t get reliable results if someone else starts running queries at the same time as you.
  • Make sure that the queries you’re testing with are equivalent to the ones that your users want to have tuned. As we’ll see, you can use Profiler to capture the exact queries your users are running against the cube.
  • Whenever you test a query, run it twice: first on a cold cache, and then on a warm cache. Make sure you keep a note of the time each query takes to run and what you changed on the cube or in the query for that run.

Clearing the cache is a very important step—queries that run for a long time on a cold cache may be instant on a warm cache. When you run a query against Analysis Services, some or all of the results of that query (and possibly other data in the cube, not required for the query) will be held in cache so that the next time a query is run that requests the same data it can be answered from cache much more quickly. To clear the cache of an Analysis Services database, you need to execute a ClearCache XMLA command. To do this in SQL Management Studio, open up a new XMLA query window and enter the following:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ClearCache>
    <Object>
      <DatabaseID>Adventure Works DW 2008</DatabaseID>
    </Object>
  </ClearCache>
</Batch>

Remember that the ID of a database may not be the same as its name—you can check this by right-clicking on a database in the SQL Management Studio Object Explorer and selecting Properties. Alternatives to this method also exist: the MDX Studio tool allows you to clear the cache with a menu option, and the Analysis Services Stored Procedure Project (http://tinyurl.com/asstoredproc) contains code that allows you to clear the Analysis Services cache and the Windows File System cache directly from MDX. Clearing the Windows File System cache is interesting because it allows you to compare the performance of the cube on a warm and cold file system cache as well as a warm and cold Analysis Services cache: when the Analysis Services cache is cold or can’t be used for some reason, a warm file system cache can still have a positive impact on query performance.
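
If you have the Analysis Services Stored Procedure Project assembly registered on your server, the cache-clearing code it contains can be called from an MDX query window. The following is only a sketch: the procedure names are assumptions and may differ between versions of the project, so check the project’s documentation for the exact calls it exposes.

-- run each call separately; procedure names are assumptions, check the ASSP documentation for your version
Call ASSP.ClearCache()
Call ASSP.ClearFileSystemCache()

The first call clears the Analysis Services cache and the second the Windows File System cache, as described above.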

After the cache has been cleared, before Analysis Services can answer a query it needs to recreate the calculated members, named sets and other objects defined in a cube’s MDX script. If you have any reasonably complex named set expressions that need to be evaluated, you’ll see some activity in Profiler relating to these sets being built and it’s important to be able to distinguish between this and activity that’s related to the queries you’re actually running. All MDX Script related activity occurs between the Execute MDX Script Begin and Execute MDX Script End events; these are fired after the Query Begin event but before the Query Cube Begin event for the query run after the cache has been cleared. When looking at a Profiler trace you should either ignore everything between the Execute MDX Script Begin and End events or run a query that returns no data at all to trigger the evaluation of the MDX Script, for example:

SELECT {} ON 0
FROM [Adventure Works]

Designing for performance

Many of the recommendations for designing cubes we’ve given so far in this article have been given on the basis that they will improve query performance, and in fact the performance of a query is intimately linked to the design of the cube it’s running against. For example, dimension design, especially optimizing attribute relationships, can have a significant effect on the performance of all queries—at least as much as any of the optimizations described in this article. As a result, we recommend that if you’ve got a poorly performing query the first thing you should do is review the design of your cube to see if there is anything you could do differently. There may well be some kind of trade-off needed between usability, manageability, time-to-develop, overall “elegance” of the design and query performance, but since query performance is usually the most important consideration for your users, it will take precedence. To put it bluntly, if the queries your users want to run don’t run fast, your users will not want to use the cube at all!

Performance-specific design features

Once you’re sure that your cube design is as good as you can make it, it’s time to look at two features of Analysis Services that are transparent to the end user but have an important impact on performance and scalability: measure group partitioning and aggregations. Both of these features relate to the Storage Engine and allow it to answer requests for data from the Formula Engine more efficiently.

Partitions

A partition is a data structure that holds some or all of the data held in a measure group. When you create a measure group, by default that measure group contains a single partition that contains all of the data. Enterprise Edition of Analysis Services allows you to divide a measure group into multiple partitions; Standard Edition is limited to one partition per measure group, and the ability to partition is one of the main reasons why you would want to use Enterprise Edition over Standard Edition.

Why partition?

Partitioning brings two important benefits: better manageability and better performance. Partitions within the same measure group can have different storage modes and different aggregation designs, although in practice they usually don’t differ in these respects; more importantly they can be processed independently, so for example when new data is loaded into a fact table, you can process only the partitions that should contain the new data. Similarly, if you need to remove old or incorrect data from your cube, you can delete or reprocess a partition without affecting the rest of the measure group.

Partitioning can also improve both processing performance and query performance significantly. Analysis Services can process multiple partitions in parallel and this can lead to much more efficient use of CPU and memory resources on your server while processing is taking place. Analysis Services can also fetch and aggregate data from multiple partitions in parallel when a query is run, and again this can lead to more efficient use of CPU and memory and result in faster query performance. Lastly, Analysis Services will only scan the partitions that contain data necessary for a query, and since this reduces the overall amount of IO needed it can also make queries faster.

Building partitions

You can view, create and delete partitions on the Partitions tab of the Cube Editor in BIDS. When you run the New Partition Wizard or edit the Source property of an existing partition, you’ll see you have two options for controlling what data is used in the partition:

  • Table Binding means that the partition contains all of the data in a table or view in your relational data source, or a named query defined in your DSV. You can choose the table you wish to bind to on the Specify Source Information step of the New Partition Wizard, or in the Partition Source dialog if you choose Table Binding from the Binding Type drop-down box.
  • Query Binding allows you to specify an SQL SELECT statement to filter the rows you want from a table; BIDS will automatically generate part of the SELECT statement for you, and all you’ll need to do is supply the WHERE clause. If you’re using the New Partition Wizard, this is the option that will be chosen if you check the Specify a query to restrict rows checkbox on the second step of the wizard; in the Partition Source dialog you can choose this option from the Binding Type drop-down box. An example of what the finished query for a partition might look like is shown after this list.
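
For illustration, here is roughly what the complete query for a query-bound partition holding a single year of data could look like. This is only a sketch against the Adventure Works DW 2008 Internet Sales fact table, and it assumes that table’s usual YYYYMMDD-format surrogate date keys; the table name and key range are examples, not a prescription.

SELECT *
FROM dbo.FactInternetSales
-- filter to the 2003 calendar year; each partition gets its own non-overlapping range
WHERE OrderDateKey >= 20030101
AND OrderDateKey <= 20031231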

It might seem like query binding is the easiest way to filter your data, and while it’s the most widely-used approach it does have one serious shortcoming: since it involves hard-coding an SQL SELECT statement into the definition of the partition, changes to your fact table such as the deletion or renaming of a column can mean the SELECT statement errors when it is run if that column is referenced in it. This in turn will cause partition processing to fail. If you have a lot of partitions in your measure group—and it’s not unusual to have over one hundred partitions on a large cube—altering the query used for each one is somewhat time-consuming. Instead, table-binding each partition to a view in your relational database will make this kind of maintenance much easier, although you do of course now need to generate one view for each partition. Alternatively, if you’re building query-bound partitions from a single view on top of your fact table (which means you have complete control over the columns the view exposes), you could use a query like SELECT * FROM <view name> in each partition’s definition.

It’s very important that you check the queries you’re using to filter your fact table for each partition. If the same fact table row appears in more than one partition, or if fact table rows don’t appear in any partition, this will result in your cube displaying incorrect measure values.

On the Processing and Storage Locations step of the wizard you have the chance to create the partition on a remote server instance, functionality that is called Remote Partitions. This is one way of scaling out Analysis Services: you can have a cube and measure group on one server but store some of the partitions for the measure group on a different server, something like a linked measure group but at a lower level.

This can be useful for improving processing performance in situations when you have a very small time window available for processing but in general we recommend that you do not use remote partitions. They have an adverse effect on query performance and they make management of the cube (especially backup) very difficult.

Also on the same step you have the chance to store the partition at a location other than the default of the Analysis Services data directory. Spreading your partitions over more than one volume may make it easier to improve the IO performance of your solution, although again it can complicate database backup and restore.

After assigning an aggregation design to the partition (we’ll talk about aggregations in detail next), the last important property to set on a partition is Slice. The Slice property takes the form of an MDX member, set or tuple (MDX expressions returning members, sets or tuples are not allowed, however) and indicates what data is present in a partition. While you don’t have to set it, we strongly recommend that you do so, even for MOLAP partitions, for the following reasons:

  • While Analysis Services does automatically detect what data is present in a partition during processing, it doesn’t always work as well as you’d expect and can result in unwanted partition scanning taking place at query time in a number of scenarios. The following blog entry on the SQLCat team site explains why in detail: http://tinyurl.com/partitionslicing
  • It acts as a useful safety mechanism to ensure that you only load the data you’re expecting into a partition. If, while processing, Analysis Services finds that data is being loaded into the partition that conflicts with what’s specified in the Slice property, then processing will fail.

More detail on how to set the Slice property can be found in Mosha Pasumansky’s blog entry on the subject here: http://tinyurl.com/moshapartition
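
For example, for a partition that contains only data for the 2003 calendar year in the Adventure Works cube, the Slice property could be set to a single member reference like the one below; the exact member will of course depend on your own Date dimension.

[Date].[Calendar Year].&[2003]

A partition sliced by more than one attribute would use a tuple instead, with one member from each of the relevant attribute hierarchies.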

Planning a partitioning strategy

We now know why we should be partitioning our measure groups and what to do to create a partition; the next question is: how should we split the data in our partitions? We need to find some kind of happy medium between the manageability and performance aspects of partitioning—we need to split our data so that we do as little processing as possible, but also so that as few partitions are scanned as possible by our users’ queries. Luckily, if we partition by our Time dimension we can usually meet both needs very well: it’s usually the case that when new data arrives in a fact table it’s for a single day, week or month, and it’s also the case that the most popular way of slicing a query is by a time period.

Therefore, it’s almost always the case that when measure groups are partitioned they are partitioned by time. It’s also worth considering, though, if it’s a good idea to partition by time and another dimension: for example, in an international company you might have a Geography dimension and a Country attribute, and users may always be slicing their queries by Country too—in which case it might make sense to partition by Country.

Measure groups that contain measures with the Distinct Count aggregation type require their own specific partitioning strategy. While you should still partition by time, you should also partition by non-overlapping ranges of values within the column you’re doing the distinct count on. A lot more detail on this is available in the following white paper: http://tinyurl.com/distinctcountoptimize
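
As a sketch of what this means in practice, imagine partitioning the Adventure Works DW 2008 Internet Sales fact table for a distinct count measure on CustomerKey: each partition’s source query would filter on both a time period and a non-overlapping range of CustomerKey values. The ranges below are purely illustrative.

SELECT *
FROM dbo.FactInternetSales
-- one month of data...
WHERE OrderDateKey >= 20030101 AND OrderDateKey <= 20030131
-- ...and a non-overlapping range of values in the distinct count column
AND CustomerKey >= 11000 AND CustomerKey < 16000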

It’s worth looking at the distribution of data over partitions for dimensions we’re not explicitly slicing by, as there is often a dependency between data in these dimensions and the Time dimension: for example, a given Product may only have been sold in certain Years or in certain Countries. You can see the distribution of member DataIDs (the internal key values that Analysis Services creates for all members on a hierarchy) for a partition by querying the Discover_Partition_Dimension_Stat DMV, for example:

SELECT *
FROM SystemRestrictSchema($system.Discover_Partition_Dimension_Stat
,DATABASE_NAME = 'Adventure Works DW 2008'
,CUBE_NAME = 'Adventure Works'
,MEASURE_GROUP_NAME = 'Internet Sales'
,PARTITION_NAME = 'Internet_Sales_2003')

The following screenshot shows what the results of this query look like:

The Analysis Services Stored Procedure Project (a free, community-developed set of sample Analysis Services stored procedures) also includes a useful stored procedure that shows the same data along with any partition overlaps: http://tinyurl.com/partitionhealth. The following blog entry describes how you can take this data and visualise it in a Reporting Services report: http://tinyurl.com/viewpartitionslice
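
A related DMV, Discover_Partition_Stat, can be queried in the same way using the same four restrictions; it shows the size of each aggregation in a partition, which is also useful later on when you want to check what the aggregation design wizards have actually built. For example:

SELECT *
FROM SystemRestrictSchema($system.Discover_Partition_Stat
,DATABASE_NAME = 'Adventure Works DW 2008'
,CUBE_NAME = 'Adventure Works'
,MEASURE_GROUP_NAME = 'Internet Sales'
,PARTITION_NAME = 'Internet_Sales_2003')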

We also need to consider what size our partitions should be. In general, between 5 and 20 million rows per partition, or up to around 3GB, is a good size. If you have a measure group with a single partition of fewer than 5 million rows then don’t worry: it will perform very well and it’s not worth dividing it into smaller partitions; it’s equally possible to get good performance with partitions of 50-60 million rows. It’s also best to avoid having too many partitions—if you have more than a few hundred it may make SQL Management Studio and BIDS slow to respond, and it may be worth creating fewer, larger partitions, assuming these partitions stay within the size limits for a single partition we’ve just given.

Automatically generating large numbers of partitions
When creating a measure group for the first time, it’s likely you’ll already have a large amount of data and may need to create a correspondingly large number of partitions for it. Clearly the last thing you’ll want to do is create tens or hundreds of partitions manually and it’s worth knowing some tricks to create these partitions automatically. One method involves taking a single partition, scripting it out to XMLA and then pasting and manipulating this in Excel, as detailed here: http://tinyurl.com/generatepartitions. The Analysis Services Stored Procedure Project also contains a set of functions for creating partitions automatically based on MDX set expressions: http://tinyurl.com/autopartition.

Unexpected partition scans

Even when you have configured your partitions properly it’s sometimes the case that Analysis Services will scan partitions that you don’t expect it to be scanning for a particular query. If you see this happening, the first thing to determine is whether these extra scans are making a significant contribution to your query times. If they aren’t, then it’s probably not worth worrying about; if they are, there are some things you can try to stop it happening.

The extra scans could be the result of a number of factors, including:

  • The way you have written MDX for queries or calculations. In most cases it will be very difficult to rewrite the MDX to stop the scans, but the following blog entry describes how it is possible in one scenario: http://tinyurl.com/moshapart
  • The LastNonEmpty measure aggregation type may result in multiple partition scans. If you can restructure your cube so you can use the LastChild aggregation type, Analysis Services will only scan the last partition containing data for the current time period.
  • In some cases, even when you’ve set the Slice property, Analysis Services has trouble working out which partitions should be scanned for a query. Changing the attributes mentioned in the Slice property may help, but not always. The section on Related Attributes and Almost Related Attributes in the following blog entry discusses this in more detail: http://tinyurl.com/mdxpartitions
  • Analysis Services may also decide to retrieve more data than is needed for a query to make answering future queries more efficient. This behavior is called prefetching and can be turned off by setting the following connection string properties: Disable Prefetch Facts=True; Cache Ratio=1
    More information on this can be found in the section on Prefetching and Request Ordering in the white paper Identifying and Resolving MDX Query Performance Bottlenecks, available from http://tinyurl.com/mdxbottlenecks
    Note that setting these connection string properties can have other, negative effects on query performance.

You can set connection string properties in SQL Management Studio when you open a new MDX Query window. Just click the Options button on the Connect to Analysis Services dialog, then go to the Additional Connection Parameters tab. Note that in the RTM version of SQL Management Studio there is a problem with this functionality, so that when you set a connection string property it will continue to be set for all connections, even though the textbox on the Additional Connection Parameters tab is blank, until SQL Management Studio is closed down or until you set the same property differently.
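
For example, to test the effect of turning prefetching off for a single session, you could enter the following (exactly as given in the bullet above) into the Additional Connection Parameters tab before connecting:

Disable Prefetch Facts=True;Cache Ratio=1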

Aggregations

An aggregation is simply a pre-summarised data set, similar to the result of an SQL SELECT statement with a GROUP BY clause, that Analysis Services can use when answering queries. The advantage of having aggregations built in your cube is that it reduces the amount of aggregation that the Analysis Services Storage Engine has to do at query time, and building the right aggregations is one of the most important things you can do to improve query performance. Aggregation design is an ongoing process that should start once your cube and dimension designs have stabilised and which will continue throughout the lifetime of the cube as its structure and the queries you run against it change; in this section we’ll talk about the steps you should go through to create an optimal aggregation design.

Creating an initial aggregation design

The first stage in creating an aggregation design should be to create a core set of aggregations that will be generally useful for most queries run against your cube. This should take place towards the end of the development cycle when you’re sure that your cube and dimension designs are unlikely to change much, because any changes are likely to invalidate your aggregations and mean this step will have to be repeated. It can’t be stressed enough that good dimension design is the key to getting the most out of aggregations: removing unnecessary attributes, setting AttributeHierarchyEnabled to False where possible, building optimal attribute relationships and building user hierarchies will all make the aggregation design process faster, easier and more effective. You should also take care to update the EstimatedRows property of each measure group and partition, and the EstimatedCount property of each attribute, before you start, as these values are used by the aggregation design process. BIDS Helper adds a new button to the toolbar in the Partitions tab of the Cube Editor to update all of these count properties with one click.

To build this initial set of aggregations we’ll be running the Aggregation Design Wizard, which can be run by clicking the Design Aggregations button on the toolbar of the Aggregations tab of the Cube Editor. This wizard will analyse the structure of your cube and dimensions, look at various property values you’ve set, and try to come up with a set of aggregations that it thinks should be useful. The one key piece of information it doesn’t have at this point is what queries you’re running against the cube, so some of the aggregations it designs may not prove to be useful in the long-run, but running the wizard is extremely useful for creating a first draft of your aggregation designs.

You can only design aggregations for one measure group at a time; if you have more than one partition in the measure group you’ve selected then the first step of the wizard asks you to choose which partitions you want to design aggregations for. An aggregation design can be associated with many partitions in a measure group, and a partition can be associated with just one aggregation design or none at all. We recommend that, in most cases, you have just one aggregation design for each measure group for the sake of simplicity. However if processing time is limited and you need to reduce the overall time spent building aggregations, or if query patterns are different for different partitions within the same measure group, then it may make sense to apply different aggregation designs to different partitions.

The next step of the wizard asks you to review the AggregationUsage property of all the attributes on all of the cube dimensions in your cube; this property can also be set on the Cube Structure tab of the Cube Editor.

The following figure shows the Aggregation Design Wizard:


The AggregationUsage property controls how dimension attributes are treated in the aggregation design process. The property can be set to the following values:

  • Full: This means the attribute, or an attribute at a lower granularity directly related to it by an attribute relationship, will be included in every single aggregation the wizard builds. We recommend that you use this value sparingly, for at most one or two attributes in your cube, because it can significantly reduce the number of aggregations that get built. You should set it for attributes that will almost always get used in queries. For example, if the vast majority of your queries are at the Month granularity it makes sense that all of your aggregations include the Month attribute from your Time dimension.
  • None: This means the attribute will not be included in any aggregation that the wizard designs. Don’t be afraid of using this value for any attributes that you don’t think will be used often in your queries; it can be a useful way of ensuring that the attributes that are used often get good aggregation coverage. Note that attributes with AttributeHierarchyEnabled set to False will have no aggregations designed for them anyway.
  • Unrestricted: This means that the attribute may be included in the aggregations designed, depending on whether the algorithm used by the wizard considers it to be useful or not.
  • Default: The default option applies a complex set of rules, which are:
    • The granularity attribute (usually the key attribute, unless you specified otherwise in the dimension usage tab) is treated as Unrestricted.
    • All attributes on dimensions involved in many-to-many relationships, unmaterialised referenced relationships, and data mining dimensions are treated as None. Aggregations may still be built at the root granularity, that is, the intersection of every All Member on every attribute.
    • All attributes that are used as levels in natural user hierarchies are treated as Unrestricted.
    • Attributes with IsAggregatable set to False are treated as Full.
    • All other attributes are treated as None.

The next step in the wizard asks you to verify the values of the EstimatedRows and EstimatedCount properties we’ve already talked about, and gives the option of setting a similar property that shows the estimated number of members from an attribute that appear in any one partition. This can be an important property to set: if you are partitioning by month, although you may have 36 members on your Month attribute, a partition will only contain data for one of them.

On the Set Aggregation Options step you finally reach the point where some aggregations can be built. Here you can apply one last set of restrictions on the set of aggregations that will be built, choosing one of the following options:

  • Estimated Storage Reaches, which means you build aggregations to fill a given amount of disk space.
  • Performance Gain Reaches, the most useful option. It does not mean that all queries will run n% faster; nor does it mean that a query that hits an aggregation directly will run n% faster. Think of it like this: if the wizard built all the aggregations it thought were useful to build (note: this is not the same thing as all of the possible aggregations that could be built on the cube) then, in general, performance would be better. Some queries would not benefit from aggregations, some would be slightly faster, and some would be a lot faster; and some aggregations would be more often used than others. So if you set this property to 100% the wizard would build all the aggregations that it could, and you’d get 100% of the performance gain possible from building aggregations. Setting this property to 30%, the default and recommended value, will build the aggregations that give you 30% of this possible performance gain—not 30% of the possible aggregations, usually a much smaller number. As you can see from the screenshot below, the graph drawn on this step plots the size of the aggregations built versus overall performance gain, and the shape of the curve shows that a few, smaller aggregations usually provide the majority of the performance gain.
  • I Click Stop, which means carry on building aggregations until you click the Stop button. Designing aggregations can take a very long time, especially on more complex cubes, because there may literally be millions or billions of possible aggregations that could be built. In fact, it’s not unheard of for the aggregation design wizard to run for several days before it’s stopped!
  • Do Not Design Aggregations allows you to skip designing aggregations.


The approach we suggest taking here is to first select I Click Stop and then click the Start button. On some measure groups this will complete very quickly, with only a few small aggregations built. If that’s the case click Next; otherwise, if it’s taking too long or too many aggregations are being built, click Stop and then Reset, and then select Performance Gain Reaches and enter 30% and Start again. This should result in a reasonable selection of aggregations being built; in general around 50-100 aggregations is the maximum number you should be building for a measure group, and if 30% leaves you short of this try increasing the number by 10% until you feel comfortable with what you get.

On the final step of the wizard, enter a name for your aggregation design and save it. It’s a good idea to give the aggregation design a name including the name of the measure group to make it easier to find if you ever need to script it to XMLA.

It’s quite common that Analysis Services cube developers stop thinking about aggregation design at this point. This is a serious mistake: just because you have run the Aggregation Design Wizard does not mean you have built all the aggregations you need, or indeed any useful ones at all! Doing Usage-Based Optimisation and/or building aggregations manually is absolutely essential.

Usage-based optimization

We now have some aggregations designed, but the chances are that despite our best efforts many of them will not prove to be useful. To a certain extent we might be able to pick out these aggregations by browsing through them; really, though, we need to know what queries our users are going to run before we can build aggregations to make them run faster. This is where usage-based optimisation comes in: it allows us to log the requests for data that Analysis Services makes when a query is run and then feed this information into the aggregation design process.

To be able to do usage-based optimization, you must first set up Analysis Services to log these requests for data. This involves specifying a connection string to a relational database in the server properties of your Analysis Services instance and allowing Analysis Services to create a log table in that database. The white paper Configuring the Analysis Services Query Log contains more details on how to do this (it’s written for Analysis Services 2005 but is still relevant for Analysis Services 2008), and can be downloaded from http://tinyurl.com/ssasquerylog.

The query log is a misleading name because, as you’ll see if you look inside it, it doesn’t actually contain the text of MDX queries run against the cube. When a user runs an MDX query, Analysis Services decomposes it into a set of requests for data at a particular granularity and it’s these requests that are logged; we’ll look at how to interpret this information in the next section. A single query can result in no requests for data, or it can result in as many as hundreds or thousands of requests, especially if it returns a lot of data and a lot of MDX calculations are involved. When setting up the log you also have to decide how many of these requests Analysis Services actually logs, using the QueryLogSampling property: by default only one request in every ten is recorded, because logging every single request can produce a very large amount of data very quickly, but on the other hand if you sample too sparsely you may end up not seeing certain important long-running requests. We recommend that you start by logging every request (a sampling value of 1) but that you monitor the size of the log closely and sample less often if you find that the number of requests logged is too high.
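
To give an idea of what is involved, the relevant server properties all live under the Log\QueryLog group in the Analysis Services server properties. The listing below is a sketch: the connection string is a placeholder, the table name shown is the default, and the sampling value reflects the recommendation above to start by logging everything.

Log\QueryLog\QueryLogConnectionString = <connection string for the relational database that will hold the log>
Log\QueryLog\CreateQueryLogTable = true
Log\QueryLog\QueryLogTableName = OlapQueryLog
Log\QueryLog\QueryLogSampling = 1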

Once the log has been set up, let your users start querying the cube. Explain to them what you’re doing and that some queries may not perform well at this stage. Given access to a new cube it will take them a little while to understand what data is present and what data they’re interested in; if they’re new to Analysis Services it’s also likely they’ll need some time to get used to whatever client tool they’re using. Therefore you’ll need to have logging enabled for at least a month or two before you can be sure that your query log contains enough useful information. Remember that if you change the structure of the cube while you’re logging then the existing contents of the log will no longer be usable.

Last of all, you’ll need to run the Usage-Based Optimisation Wizard to build new aggregations using this information. The Usage-Based Optimisation Wizard is very similar to the Design Aggregations Wizard, with the added option to filter the information in the query log by date, user and query frequency before it’s used to build aggregations. It’s a good idea to do this filtering carefully: you should probably exclude any queries you’ve run yourself, for example, since they’re unlikely to be representative of what the users are doing, and make sure that the most important users’ queries are over-represented.


Once you’ve done this you’ll have a chance to review what data is actually going to be used before you actually build the aggregations.


On the last step of the wizard you have the choice of either creating a new aggregation design or merging the aggregations that have just been created with an existing aggregation design. We recommend the latter: what you’ve just done is optimize queries that ran slowly on an existing aggregation design, and if you abandon the aggregations you’ve already got then it’s possible that queries which previously had been quick would be slow afterwards.

This exercise should be repeated at regular intervals throughout the cube’s lifetime to ensure that you build any new aggregations that become necessary as the queries your users run change. Query logging can, however, have an impact on query performance so it’s not a good idea to leave logging running all the time.

Processing aggregations
When you’ve created or edited the aggregations on one or more partitions, you don’t need to do a full process on the partitions. All you need to do is to deploy your changes and then run a ProcessIndex, which is usually fairly quick, and once you’ve done that queries will be able to use the new aggregations. When you run a ProcessIndex Analysis Services does not need to run any SQL queries against the relational data source if you’re using MOLAP storage.
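
As an example, the XMLA for running a ProcessIndexes on a single partition looks something like the following sketch. The object IDs here are illustrative and need to be replaced with the IDs from your own database, which you can get by scripting the partition out in SQL Management Studio.

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <!-- illustrative IDs; script your own partition to get the real ones -->
      <DatabaseID>Adventure Works DW 2008</DatabaseID>
      <CubeID>Adventure Works</CubeID>
      <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
      <PartitionID>Internet_Sales_2003</PartitionID>
    </Object>
    <Type>ProcessIndexes</Type>
  </Process>
</Batch>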

Monitoring partition and aggregation usage

Having created and configured your partitions and aggregations, you’ll naturally want to be sure that when you run a query Analysis Services is using them as you expect. You can do this very easily by running a trace with SQL Server Profiler or by using MDX Studio (a free MDX Editor that can be downloaded from http://tinyurl.com/mdxstudio).

To use Profiler, start it and then connect to your Analysis Services instance to create a new trace. On the Trace Properties dialog choose the Blank template and go to the Events Selection tab and check the following:

  • Progress Reports\Progress Report Begin
  • Progress Reports\Progress Report End
  • Queries Events\Query Begin
  • Queries Events\Query End
  • Query Processing\Execute MDX Script Begin
  • Query Processing\Execute MDX Script End
  • Query Processing\Query Cube Begin
  • Query Processing\Query Cube End
  • Query Processing\Get Data From Aggregation
  • Query Processing\Query Subcube Verbose

Then clear the cache and click Run to start the trace.

Once you’ve done this you can either open up your Analysis Services client tool or you can start running MDX queries in SQL Management Studio. When you do this you’ll notice that Profiler starts to show information about what Analysis Services is doing internally to answer these queries. The following screenshot shows what you might typically see:

Interpreting the results of a Profiler trace is a complex task and well outside the scope of this article, but it’s very easy to pick out some useful information relating to aggregation and partition usage. Put simply:

  • The Query Subcube Verbose events represent individual requests for data from the Formula Engine to the Storage Engine, which can be answered either from cache, an aggregation or base-level partition data. Each of these requests is at a single granularity, meaning that all of the data in the request comes from a single distinct combination of attributes; we refer to these granularities as “subcubes”. The TextData column for this event shows the granularity of data that is being requested in human-readable form; the Query Subcube event will display exactly the same data but in the less friendly format that the Usage-Based Optimisation Query Log uses.
  • Pairs of Progress Report Begin and Progress Report End events show that data is being read from disk, either from an aggregation or a partition. The TextData column gives more information, including the name of the object being read; however, if you have more than one object (for example an aggregation) with the same name, you need to look at the contents of the ObjectPath column to see what object exactly is being queried.
  • The Get Data From Aggregation event is fired when data is read from an aggregation, in addition to any Progress Report events.
  • The Duration column shows how long each of these operations takes in milliseconds.

At this point in the cube optimisation process you should be seeing in Profiler that when your users run queries they hit as few partitions as possible and hit aggregations as often as possible. If you regularly see slow queries that scan all the partitions in your cube or which do not use any aggregations at all, you should consider going back to the beginning of the process and rethinking your partitioning strategy and rerunning the aggregation design wizards. In a production system many queries will be answered from cache and therefore be very quick, but you should always try to optimise for the worst-case scenario of a query running on a cold cache.

Building aggregations manually

However good the aggregation designs produced by the wizards are, it’s very likely that at some point you’ll have to design aggregations manually for particular queries. Even after running the Usage Based Optimisation Wizard you may find that it still does not build some potentially useful aggregations: the algorithm the wizards use is very complex and something of a black box, so for whatever reason (perhaps because it thinks it would be too large) it may decide not to build an aggregation that, when built manually, turns out to have a significant positive impact on the performance of a particular query.

Before we can build aggregations manually we need to work out which aggregations we need to build. To do this, we once again need to use Profiler and look at either the Query Subcube or the Query Subcube Verbose events. These events, remember, display the same thing in two different formats – requests for data made to the Analysis Services storage engine during query processing – and the contents of the Duration column in Profiler will show how long in milliseconds each of these requests took. A good rule of thumb is that any Query Subcube event that takes longer than half a second (500 ms) would benefit from having an aggregation built for it; you can expect that a Query Subcube event that requests data at the same granularity as an aggregation will execute almost instantaneously.

The following screenshot shows an example of a trace on an MDX query that takes 700ms:

The single Query Subcube Verbose event is highlighted, and we can see that the duration of this event is the same as that of the query itself, so if we want to improve the performance of the query we need to build an aggregation for this particular request. Also, in the lower half of the screen we can see the contents of the TextData column displayed. This shows a list of all the dimensions and attributes from which data is being requested—the granularity of the request—and the simple rule to follow here is that whenever you see anything other than a zero by an attribute we know that the granularity of the request includes this attribute. We need to make a note of all of the attributes which have anything other than a zero next to them and then build an aggregation using them; in this case it’s just the Product Category attribute of the Product dimension.

The white paper Identifying and Resolving MDX Query Performance Bottlenecks (again, written for Analysis Services 2005 but still relevant for Analysis Services 2008), available from http://tinyurl.com/mdxbottlenecks, includes more detailed information on how to interpret the information given by the Query Subcube Verbose event.

So now that we know what aggregation we need to build, we need to go ahead and build it. We have a choice of tools to do this: we can either use the functionality built into BIDS, or we can use some of the excellent functionality that BIDS Helper provides. In BIDS, to view and edit aggregations, you need to go to the Aggregations tab in the cube editor. On the Standard View you only see a list of partitions and which aggregation designs they have associated with them; if you switch to the Advanced View by pressing the appropriate button on the toolbar, you can view the aggregations in each aggregation design for each measure group. If you right-click in the area where the aggregations are displayed you can also create a new aggregation and once you’ve done that you can specify the granularity of the aggregation by checking and unchecking the attributes on each dimension. For our particular query we only need to check the box next to the Product Categories attribute, as follows:

The small tick at the top of the list of dimensions in the Status row shows that this aggregation has passed the built-in validation rules that BIDS applies to make sure this is a useful aggregation. If you see an amber warning triangle here, hover over it with your mouse and in the tooltip you’ll see a list of reasons why the aggregation has failed its status check.

If we then deploy the change and run a ProcessIndex, we can rerun our original query and watch it use the new aggregation, running much faster as a result.

The problem with the native BIDS aggregation design functionality is that it becomes difficult to use when you have complex aggregations to build and edit. The functionality present in BIDS Helper, while it looks less polished, is far more useable and offers many benefits over the BIDS native functionality, for example:

  • The BIDS Helper Aggregation Design interface displays the aggregation granularity in the same way (ie using 1s and 0s, as seen in the screenshot below) as the Query Subcube event in Profiler does, making it easier to cross reference between the two.
  • It also shows attribute relationships when it displays the attributes on each dimension when you’re designing an aggregation, as seen on the righthand side in the screenshot that follows. This is essential to being able to build optimal aggregations.
  • It also shows whether an aggregation is rigid or flexible.
  • It has functionality to remove duplicate aggregations and ones containing redundant attributes (see below), and search for similar aggregations.
  • It allows you to create new aggregations based on the information stored in the Query Log.
  • It also allows you to delete unused aggregations based on information from a Profiler trace.
  • Finally, it has some very comprehensive functionality to allow you to test the performance of the aggregations you build (see http://tinyurl.com/testaggs).

Unsurprisingly, if you need to do any serious work designing aggregations manually we recommend using BIDS Helper over the built-in functionality.

Common aggregation design issues

Several features of your cube design must be borne in mind when designing aggregations, because they can influence how Analysis Services storage engine queries are made and therefore which aggregations will be used. These include:

  • There’s no point building aggregations above the granularity you are slicing your partitions by. Aggregations are built on a per-partition basis, so for example if you’re partitioning by month there’s no value in building an aggregation at the Year granularity, since no aggregation can contain more than one month’s worth of data. It won’t hurt if you do it; it just means that an aggregation at the Month granularity would be the same size as one at Year but useful to more queries. It follows from this that it might be possible to over-partition data and reduce the effectiveness of aggregations, but we have anecdotal evidence from people who have built very large cubes that this is not an issue.
  • For queries involving a dimension with a many-to-many relationship to a measure group, aggregations must not be built using any attributes from the many-to-many dimension, but instead must be built at the granularity of the attributes with a regular relationship to the intermediate measure group. For example, when a query is run using the Sales Reason dimension in Adventure Works, Analysis Services first works out which Sales Orders relate to each Sales Reason, and then queries the main measure group for these Sales Orders. Therefore, only aggregations at the Sales Order granularity on the main measure group can be used. As a result, in most cases it’s not worth building aggregations for queries on many-to-many dimensions, since the granularity of these queries is often close to that of the original fact table.
  • Queries involving measures which have semi-additive aggregation types are always resolved at the granularity attribute of the time dimension, so you need to include that attribute in all aggregations.
  • Queries involving measures with measure expressions require aggregations at the common granularity of the two measure groups involved.
  • You should not build aggregations including a parent/child attribute; instead you should use the key attribute in aggregations.
  • No aggregation should include an attribute which has AttributeHierarchyEnabled set to False.
  • No aggregation should include an attribute that is below the granularity attribute of the dimension for the measure group.
  • Any attributes which have a default member that is anything other than the All Member, or which have IsAggregatable set to False, should also be included in all aggregations.
  • Aggregations and indexes are not built for partitions with fewer than 4096 rows. This threshold is set by the IndexBuildThreshold property in msmdsrv.ini; you can change it but it’s not a good idea to do so.
  • Aggregations should not include redundant attributes, that is to say attributes from the same ‘chain’ of attribute relationships. For example if you had a chain of attribute relationships going from month to quarter to year, you should not build an aggregation including month and quarter—it should just include month. This will increase the chance that the aggregation can be used by more queries, as well as reducing the size of the aggregation.

Summary

In this part of the article we covered performance-specific design features such as partitions and aggregations. In the next part, we will cover MDX calculation performance and caching.
