
We have a very large dimension in our SSAS database. During the incremental run we use ProcessAdd to process the dimension. This dimension's processing takes 95% of the total cube processing time.

The dimension is based on a single table. The named query for the dimension in the DSV is:

    SELECT ABC, XYZ, DEF, PQR, PLADKey,
           LEFT(ABC, 3) AS DNL1,
           LEFT(ABC, 7) AS DNL2,
           LEFT(ABC, 9) AS DNL3
    FROM   dbo.PLAD AS ad

The table has more than 33,000,000 rows and grows daily. Is it possible that ProcessAdd is slow because of the high row count? Does it automatically pick up only the new rows, or do we have to specify filter criteria to identify them (for example, adding a WHERE condition that selects only rows whose key is greater than the last processed key value)?
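For illustration, this is the kind of filtered query we have in mind. It is only a sketch: it assumes PLADKey is an ever-increasing key, and @LastKey is a placeholder for the highest key value already processed into the dimension (we would look that up in our ETL metadata).

    SELECT ABC, XYZ, DEF, PQR, PLADKey,
           LEFT(ABC, 3) AS DNL1,
           LEFT(ABC, 7) AS DNL2,
           LEFT(ABC, 9) AS DNL3
    FROM   dbo.PLAD AS ad
    WHERE  PLADKey > @LastKey   -- only rows added since the last run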

We are using AMO to generate the XMLA script for processing. If we need to add a filter, how do we do that in AMO?

We are working on SQL Server 2008 R2.

Any suggestions that could improve the performance for this dimension processing will be helpful.


1 Answer


If I understand your current state correctly, you ran a ProcessAdd on that dimension but didn't customize the query to read only the new rows? First, it is important to run ProcessAdd only on dimensions that are insert-only (no updates or deletes) in your ETL. If that's your case, then I blogged about ProcessAdd here. See the "ProcessAdd Dimension 2008.xmla" example; it shows how to provide a SQL query that returns only the new rows.
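Since you are scripting this through AMO, the same idea can be expressed with an out-of-line QueryBinding passed to the Process call. This is a sketch, not a tested implementation: it assumes the Process(ProcessType, Binding) overload that AMO exposes for out-of-line bindings, and the database, dimension, and data source IDs ("MyDatabase", "PLAD Dimension", "MyDataSource") plus the lastKey lookup are placeholders you would replace with your own.

    using Microsoft.AnalysisServices;  // AMO

    var server = new Server();
    server.Connect("Data Source=localhost");

    Database db   = server.Databases.FindByName("MyDatabase");      // placeholder name
    Dimension dim = db.Dimensions.FindByName("PLAD Dimension");     // placeholder name

    // Highest key already in the dimension -- retrieve from your ETL metadata.
    long lastKey = GetLastProcessedKey();  // hypothetical helper

    // Out-of-line binding: a query that returns only the new rows.
    string sql = @"SELECT ABC, XYZ, DEF, PQR, PLADKey,
                          LEFT(ABC, 3) AS DNL1,
                          LEFT(ABC, 7) AS DNL2,
                          LEFT(ABC, 9) AS DNL3
                   FROM   dbo.PLAD
                   WHERE  PLADKey > " + lastKey;

    QueryBinding binding = new QueryBinding("MyDataSource", sql);   // placeholder data source ID

    // ProcessAdd with the override query; only the returned rows are added.
    dim.Process(ProcessType.ProcessAdd, binding);

    server.Disconnect();

Since you generate XMLA rather than execute directly, you can set server.CaptureXml = true before the Process call and read the captured script back from server.CaptureLog instead of having AMO execute it immediately.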