
I have a fairly long report of about 20 pages, mostly charts (about 40), all using subsamples of the same dataset. This "master" report is run about 200 times by passing a parameter that takes 200 different values.

I was wondering whether there is a best practice for such case in terms of number of rdl files and datasets. Here are the options:

  1. 1 RDL file & 1 shared dataset
  2. Multiple RDL files & 1 shared dataset: the multiple RDL files are included in a master RDL file as subreports (see the sketch below the list)
  3. 1 RDL file & multiple embedded datasets: each chart has its own dataset
  4. Multiple RDL files & multiple embedded datasets: the multiple RDL files are included in a master RDL file as subreports, and each chart has its own dataset
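
To illustrate options 2 and 4, the subreport wiring in the master RDL would look roughly like this (a minimal sketch; "ChartPage1" and "FilterValue" are placeholder names, not from my actual report):

```xml
<!-- Sketch: one subreport reference inside the master RDL.
     "ChartPage1" and "FilterValue" are placeholders. -->
<Subreport Name="ChartPage1">
  <ReportName>ChartPage1</ReportName>
  <Parameters>
    <Parameter Name="FilterValue">
      <!-- forward the master report's parameter to the subreport -->
      <Value>=Parameters!FilterValue.Value</Value>
    </Parameter>
  </Parameters>
</Subreport>
```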

I see an advantage in option 1: the shared dataset can be cached on the server, making report generation much faster after the first iteration. But I'm open to other approaches that might have other advantages.
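
For reference, the shared dataset in option 1 would be a parameterized .rsd along these lines (a minimal sketch; the data source, table, and field names are placeholders):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Sketch of a parameterized shared dataset (.rsd); all names are placeholders. -->
<SharedDataSet xmlns="http://schemas.microsoft.com/sqlserver/reporting/2010/01/shareddatasetdefinition">
  <DataSet Name="MasterData">
    <Query>
      <DataSourceReference>MainDataSource</DataSourceReference>
      <DataSetParameters>
        <DataSetParameter Name="@FilterValue">
          <ReadOnly>false</ReadOnly>
          <Nullable>false</Nullable>
        </DataSetParameter>
      </DataSetParameters>
      <CommandText>
        SELECT CategoryKey, MeasureValue
        FROM dbo.FactTable
        WHERE FilterColumn = @FilterValue
      </CommandText>
    </Query>
    <Fields>
      <Field Name="CategoryKey">
        <DataField>CategoryKey</DataField>
      </Field>
      <Field Name="MeasureValue">
        <DataField>MeasureValue</DataField>
      </Field>
    </Fields>
  </DataSet>
</SharedDataSet>
```

One caveat with the caching idea: as far as I know, SSRS caches a separate copy of the dataset for each distinct combination of parameter values, so each of the 200 values would only be fast from its second run onwards.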

I don't know much about execution plans and speeds and whatnot, but I can tell you from working in SSRS that having multiple datasets is a real hassle if something changes. It's far easier to keep everything in one dataset. As for one .rdl versus subreports, it depends on the report. I've had reports that required four different datasets in six different tablixes, had to fit a certain layout, and each tablix needed to print multiple pages' worth of data while maintaining that layout. One .rdl is easier, but not always practical. – Steve-o169

1 Answer


I think there is no single right answer to this. It all depends on several factors, for example:

  1. The amount of data per dataset
  2. What you are trying to achieve with the reports
  3. How often changes happen
  4. Whether the data can be cached

The caching problem is easy to solve, I think. If you don't need your data refreshed every hour or minute, you can cache it on a daily basis.
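
If you schedule a cache refresh through the report server, the daily schedule is just a ScheduleDefinition, roughly like this (a sketch; the start time is a placeholder):

```xml
<!-- Sketch of a daily ScheduleDefinition, as used for a cache refresh plan
     via the SSRS web service. The start time is a placeholder. -->
<ScheduleDefinition>
  <StartDateTime>2015-01-01T02:00:00</StartDateTime>
  <DailyRecurrence>
    <DaysInterval>1</DaysInterval>
  </DailyRecurrence>
</ScheduleDefinition>
```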

I always like one dataset, because when I make changes I only have to make them once, in one place. On the other hand, I have had datasets with millions of rows. For the sake of performance I had to split such huge datasets into smaller pieces with subreports (etc.). An improvement in performance, but a pain in the *** when you have to change something.
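
Splitting looked roughly like this: each subreport got its own embedded dataset that pulls only the slice it needs (a sketch; the data source, table, and filter names are placeholders):

```xml
<!-- Sketch of an embedded dataset inside one subreport's RDL,
     pulling only the slice that subreport needs. All names are placeholders. -->
<DataSets>
  <DataSet Name="ChartSlice">
    <Query>
      <DataSourceName>MainDataSource</DataSourceName>
      <QueryParameters>
        <QueryParameter Name="@FilterValue">
          <Value>=Parameters!FilterValue.Value</Value>
        </QueryParameter>
      </QueryParameters>
      <CommandText>
        SELECT CategoryKey, MeasureValue
        FROM dbo.FactTable
        WHERE FilterColumn = @FilterValue
          AND ChartGroup = 'GroupA'
      </CommandText>
    </Query>
    <Fields>
      <Field Name="CategoryKey">
        <DataField>CategoryKey</DataField>
      </Field>
      <Field Name="MeasureValue">
        <DataField>MeasureValue</DataField>
      </Field>
    </Fields>
  </DataSet>
</DataSets>
```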

So I think your situation will tell you which of the four options to pick.