Objective
We're building an Enterprise DWH for business self-service: tens of TBs of data and roughly 30 business users. The flow looks like: Sources -> ETL -> DWH -> Power BI -> User.
Transaction-grain facts may contain billions of rows, plus non-additive measures and KPIs. Because of that, external in-memory cubes (a Tabular model) or Power BI import mode aren't an option for us. At the same time, we have very strict performance requirements: PBI visualizations shouldn't take more than 15 seconds to render.
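To make the non-additivity point concrete, here's a minimal sketch (hypothetical numbers, not our real data) of why a ratio KPI such as margin % can't simply be averaged up from pre-aggregated rows; it has to be recomputed from its additive components at the target grain:

```python
# Two hypothetical transaction rows with a non-additive KPI (margin %).
rows = [
    {"revenue": 100.0, "cost": 90.0},   # row-level margin: 10%
    {"revenue": 900.0, "cost": 450.0},  # row-level margin: 50%
]

# Wrong: averaging the per-row ratios.
avg_of_margins = sum((r["revenue"] - r["cost"]) / r["revenue"] for r in rows) / len(rows)

# Right: recompute the ratio from summed (additive) components.
true_margin = (
    sum(r["revenue"] for r in rows) - sum(r["cost"] for r in rows)
) / sum(r["revenue"] for r in rows)

print(avg_of_margins)  # 0.30
print(true_margin)     # 0.46
```

This is the reason the aggregated derivatives need to carry the additive building blocks (or pre-compute the KPI at exactly the grain users query), rather than store the ratio itself.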
For the sake of performance and usability, we ended up with the PBI team defining materialized views that build multiple (not too many at this point) aggregated derivatives of each transactional fact table, at the DWH layer. Each derivative is just a more aggregated fact table plus pre-calculated/aggregated KPIs.
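As a rough illustration of what one such derivative looks like, here is a pandas sketch (column names and data are assumptions for illustration, not our real schema): the transaction-grain fact is rolled up to a daily grain, keeping only additive measures so the non-additive KPI can be recomputed correctly at the new grain:

```python
import pandas as pd

# Hypothetical transaction-grain fact table.
fact = pd.DataFrame({
    "order_date": pd.to_datetime(["2024-01-01", "2024-01-01", "2024-01-02"]),
    "product_id": [1, 2, 1],
    "revenue": [100.0, 200.0, 50.0],
    "cost": [60.0, 150.0, 20.0],
})

# Derivative aggregate: daily grain, summing only the additive components.
daily = fact.groupby("order_date", as_index=False)[["revenue", "cost"]].sum()

# Pre-calculated KPI, recomputed from additive parts at the daily grain.
daily["margin_pct"] = (daily["revenue"] - daily["cost"]) / daily["revenue"]
```

In the warehouse itself this would be a materialized view doing the equivalent GROUP BY, refreshed by ETL; the key design choice is the same either way: aggregate the additive measures, then derive the KPIs.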
The issue
Partly because governance hasn't been implemented yet, and maybe because of the sheer number of tables and KPIs, business users find the transaction-grain star schema too complex (and sometimes slow), and tend to use only the derivative aggregated facts for data exploration. My feeling is that the transaction grain will end up being used only by the Power BI team, and I can't say how many derivatives we'll have per transactional fact table in the future (it depends; maybe 5 to 10).
Question
Is the approach we're taking a standard (best-practice) one? Should we encourage our business users to use the transactional facts? Or is creating ~5 derivative aggregations and putting the burden on the Power BI team a good approach?
P.S.
How common is the 15-second requirement for PBI reports? Meaning: when the user selects any slicer value, the report should refresh in under 15 seconds. Isn't that threshold too strict?