
I have a SQL Server database where we have created some views based on dim and fact tables. I need to build an SSAS tabular model based on my tables and views. However, one of the views takes 1.5 hours to run as a SQL query in SSMS. I need to use this same view to build my SSAS tabular model, but 1.5 hours is not acceptable. The view is made up of more than 10 table joins and a lot of WHERE conditions.

1) Can I bring all the tables used in this view into my SSAS tabular model? I am not sure how to join them all and apply the WHERE clauses inside SSAS to build something similar to my view. Is that possible? If yes, how?

or

2) If I build the SSAS model from that view once, and then want to incrementally load the data daily, what is the best way to do that?

You can use the dim and fact tables as the base; you link them via their keys in the table view of the SSAS project in VS. I would take a subset of the data and build a proof of concept in Power BI, which is SSAS Tabular, but for power users. For the long-running query, I would also suggest looking at your indexes and the query execution plan to see where the bottleneck is. – Jon
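
As a starting point for the index and execution-plan investigation the comment suggests, a minimal T-SQL sketch could look like the following. The view name dbo.vSlowFactView is a placeholder for your actual view, and the missing-index DMVs only report suggestions the optimizer has recorded since the last restart:

    -- Run the slow view once with statistics on; I/O and CPU figures
    -- appear in the Messages tab in SSMS.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT COUNT(*) FROM dbo.vSlowFactView;   -- placeholder view name

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;

    -- Indexes the optimizer would have liked to use for recent queries.
    SELECT TOP (20)
           d.statement          AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.avg_user_impact
    FROM   sys.dm_db_missing_index_details      AS d
    JOIN   sys.dm_db_missing_index_groups       AS g ON g.index_handle = d.index_handle
    JOIN   sys.dm_db_missing_index_group_stats  AS s ON s.group_handle = g.index_group_handle
    ORDER BY s.avg_user_impact DESC;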

1 Answer


The best option is to set up a proper ETL process. That is:

  1. Extract the tables from your source SQL database into a new SQL database that you control.
  2. Transform the data into a star schema.
  3. Load the data from the star schema into SSAS.

On SQL Server, the most common approach is to use SSIS packages for data extraction, movement, and orchestration, and SQL Server Agent jobs for scheduling.
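
To illustrate step 2, a hedged T-SQL sketch of the transform step that SSIS (or an Agent job) would run nightly is shown below. The idea is to pay the cost of the view's joins and WHERE logic once, persisting the result as a physical fact table that the tabular model reads instead of the slow view. All schema, table, and column names here are placeholders, not your actual objects:

    -- Hypothetical nightly transform: materialize the expensive joins once
    -- so SSAS reads a simple, indexed fact table rather than the 1.5-hour view.
    TRUNCATE TABLE dw.FactSales;

    INSERT INTO dw.FactSales (DateKey, CustomerKey, ProductKey, SalesAmount, Quantity)
    SELECT d.DateKey,
           c.CustomerKey,
           p.ProductKey,
           s.SalesAmount,
           s.Quantity
    FROM   stg.Sales      AS s
    JOIN   dw.DimDate     AS d ON d.[Date]      = s.OrderDate
    JOIN   dw.DimCustomer AS c ON c.CustomerId  = s.CustomerId
    JOIN   dw.DimProduct  AS p ON p.ProductId   = s.ProductId
    WHERE  s.IsCancelled = 0;   -- the view's WHERE logic moves into the ETL step

A clustered columnstore index on the persisted fact table usually keeps both the nightly load and the SSAS processing fast, but treat that as a suggestion to test, not a given.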

To answer your questions:

  1. Yes, it is certainly possible to bring all of the tables from your source system directly into your tabular model, but please don't do this! You will only create problems for yourself later on when writing DAX calculations. More information here.

  2. Incrementally loading data is something you decide per table imported into your tabular model. Again, this is much easier with a proper star schema, as you would typically run full processing on all of your dimension tables and incremental processing only on the largest fact tables. A sketch of what that can look like follows below.
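
For example, if the large fact table is partitioned by month in the tabular model, each partition's source query is just a date-bounded SELECT against the star schema, and the nightly job only reprocesses the current partition while the dimensions get a full process. A hedged sketch of such a partition query, with placeholder table and column names and an assumed integer DateKey of the form YYYYMMDD:

    -- Hypothetical source query for the "current month" partition of FactSales.
    -- Older partitions use the same shape of query with their own fixed date
    -- ranges and are processed once; only this partition is refreshed nightly.
    SELECT DateKey, CustomerKey, ProductKey, SalesAmount, Quantity
    FROM   dw.FactSales
    WHERE  DateKey >= 20240601
      AND  DateKey <  20240701;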