
Not sure if this is possible, but I really hope so. I have a large table with 400+ columns. Each column is a check point that can have one of three values (1 = done, 0.5 = in progress, 0 = not started). Each column belongs to a group whose values I need to sum to see the progress of that group. The column name is therefore followed by an underscore and then the group ID (for example, columnA_432).

I can of course manually pick all the columns and sum them in DAX. But I was hoping I could use the group ID to pick the columns for my measure, and also count how many columns there are (that count is used to calculate the completion percentage). Especially because at times a new column will be added, and then I would need to start all over again.
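For reference, the manual approach described above would look something like the sketch below. The table and column names are hypothetical, and every column of the group has to be listed by hand, which is what breaks whenever a new column is added:

```dax
-- Hypothetical manual measure: hand-listing the columns of group 432.
-- The divisor is (number of columns) * (number of rows), since each
-- check point contributes at most 1 per row.
Group 432 Progress % =
DIVIDE (
    SUM ( 'Data'[columnA_432] ) + SUM ( 'Data'[columnB_432] ),
    2 * COUNTROWS ( 'Data' )
)
```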

Example of the table structure (very much shortened):


1 Answer


Normalise your data in Power Query, by unpivoting. Something like:

let
    // Reference to your source table (replace with your own query name)
    Source = #"Source Data",
    // Turn every column except "id" into Attribute/Value row pairs
    #"Unpivoted Other Columns" = Table.UnpivotOtherColumns(Source, {"id"}, "Attribute", "Value"),
    // Split e.g. "columnA_432" into "Check Point" = "columnA" and "Group" = "432"
    #"Split Column by Delimiter" = Table.SplitColumn(#"Unpivoted Other Columns", "Attribute", Splitter.SplitTextByDelimiter("_", QuoteStyle.Csv), {"Check Point", "Group"})
in
    #"Split Column by Delimiter"

Now your Groups and Check Points are in your model as dimensions of the Value fact, so you can easily group / filter by these dimensions.
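With that shape, the completion measure no longer needs to name individual columns. A minimal sketch, assuming the unpivoted query is loaded as a table called Progress (the table and measure names here are illustrative, not from the question):

```dax
-- Average of the 0 / 0.5 / 1 values currently in filter context;
-- DIVIDE returns BLANK instead of erroring when no rows are in scope
Completion % =
DIVIDE ( SUM ( Progress[Value] ), COUNTROWS ( Progress ) )
```

Dropping the Group column onto a visual or slicer then gives per-group completion automatically, and any check-point columns added later flow through the unpivot without touching the measure.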

See a worked example PBIX file here: https://pwrbi.com/so_60371001/