0 votes

I have the following measure which calculates a node size for a graph chart table:

MEASURE tMeasures[m_ONA_nodeSize_flex] =
    IF (
        ISBLANK ( [m_ONA_rankFilter_nodes] ),
        BLANK (),
        VAR empCnt =
            ROW (
                "data",
                CALCULATE (
                    SUMX ( DISTINCT ( tONAAppend[id] ), 1 ),
                    FILTER (
                        ALL ( tONAAppend ),
                        NOT ( ISBLANK ( [m_ONA_rankFilter_nodes] ) )
                    )
                )
            )
        RETURN
            IF (
                empCnt > 25, ROUND ( 1500 / empCnt, 0 ),
                60
            )
    )

[m_ONA_rankFilter_nodes] is used to filter those nodes that exist in edges in the same table. The edges themselves are filtered by multiple conditions through the measure m_ONA_edgeValue_afterRank, which returns BLANK() if a row doesn't match the filters and the edge value if it does.
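For context, a purely hypothetical sketch of a measure with that contract; the real [m_ONA_edgeValue_afterRank] is not shown in the question, and the column tONAAppend[edge_value] and the filter condition below are assumptions used only for illustration:

MEASURE tMeasures[m_ONA_edgeValue_afterRank_sketch] =
    -- Hypothetical sketch: return the edge value when the current row passes
    -- the (unspecified) filter conditions, otherwise BLANK().
    VAR edgeValue = SELECTEDVALUE ( tONAAppend[edge_value] )   -- assumed column name
    VAR passesFilters = TRUE ()                                -- placeholder for the real conditions
    RETURN
        IF ( passesFilters, edgeValue, BLANK () )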

The query that includes the measures [m_ONA_rankFilter_nodes] and [m_ONA_edgeValue_afterRank], but not tMeasures[m_ONA_nodeSize_flex], runs relatively fast:

EVALUATE
  TOPN(
    501,
    SUMMARIZECOLUMNS(
      'tONAAppend'[tableName],
      'tONAAppend'[name],
      'tONAAppend'[nameFrom],
      'tONAAppend'[nameTo],
      'tONAAppend'[id],
      'tONAAppend'[idFrom],
      'tONAAppend'[idTo],
      'tONAAppend'[photoSrc],
      __DS0FilterTable,
      __DS0FilterTable2,
      __DS0FilterTable3,
      __DS0FilterTable4,
      "m_ONA_edgeValue_afterRank", 'tMeasures'[m_ONA_edgeValue_afterRank],
      "m_ONA_rankFilter_nodes", 'tMeasures'[m_ONA_rankFilter_nodes]
    ),
    'tONAAppend'[tableName],
    1,
    'tONAAppend'[name],
    1,
    'tONAAppend'[nameFrom],
    1,
    'tONAAppend'[nameTo],
    1,
    'tONAAppend'[id],
    1,
    'tONAAppend'[idFrom],
    1,
    'tONAAppend'[idTo],
    1,
    'tONAAppend'[photoSrc],
    1
  )

Image 1

However, when I replace 'tMeasures'[m_ONA_rankFilter_nodes] with 'tMeasures'[m_ONA_nodeSize_flex], the query becomes dramatically slower:

EVALUATE
  TOPN(
    501,
    SUMMARIZECOLUMNS(
      'tONAAppend'[tableName],
      'tONAAppend'[name],
      'tONAAppend'[nameFrom],
      'tONAAppend'[nameTo],
      'tONAAppend'[id],
      'tONAAppend'[idFrom],
      'tONAAppend'[idTo],
      'tONAAppend'[photoSrc],
      __DS0FilterTable,
      __DS0FilterTable2,
      __DS0FilterTable3,
      __DS0FilterTable4,
      "m_ONA_edgeValue_afterRank", 'tMeasures'[m_ONA_edgeValue_afterRank],
      "m_ONA_nodeSize_flex", 'tMeasures'[m_ONA_nodeSize_flex]
    ),
    'tONAAppend'[tableName],
    1,
    'tONAAppend'[name],
    1,
    'tONAAppend'[nameFrom],
    1,
    'tONAAppend'[nameTo],
    1,
    'tONAAppend'[id],
    1,
    'tONAAppend'[idFrom],
    1,
    'tONAAppend'[idTo],
    1,
    'tONAAppend'[photoSrc],
    1
  )

Image 2

As I understand it, the problem is in how the DAX engine works: it calculates the value of this measure for each row. I think the fastest approach would be to calculate the value once, send it to the storage engine, and then populate all rows with it.

How can I optimize this and force DAX to work more efficiently?

"I think the fastest algo could be calculate once, then send to storage, and then populate all rows with it." - This is one of the main differences between measures and columns. If you want it to be calculated once, then don't make it a measure. - Andrey Nikolov
@AndreyNikolov, thank you for your answer. However, I am aware of the difference. The problem is that I have to use measures, as the number of edges shown and their values are changed dynamically by the dashboard user. Unfortunately, it's not a solution. - Alex

1 Answer

0 votes

I found good articles related to this topic and implemented the variable-based approach described there:

https://www.sqlbi.com/articles/optimizing-conditions-involving-blank-values-in-dax/
https://www.sqlbi.com/articles/understanding-eager-vs-strict-evaluation-in-dax/

It worked as expected: all the measures placed in variables were materialized by the Storage Engine, and performance improved dramatically.
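For reference, a minimal sketch of how the measure could be restructured following that variable-based approach; the exact final measure is not shown here, so treat this as an assumption about its shape rather than the code actually used:

MEASURE tMeasures[m_ONA_nodeSize_flex] =
    -- Evaluate the sub-measure once into a variable, so the outer condition
    -- tests the variable instead of re-evaluating the measure (see the SQLBI
    -- articles above on blank handling and eager vs. strict evaluation).
    VAR rankFilter = [m_ONA_rankFilter_nodes]
    -- CALCULATE already returns a scalar here, so no ROW ( ) wrapper is needed.
    VAR empCnt =
        CALCULATE (
            SUMX ( DISTINCT ( tONAAppend[id] ), 1 ),
            FILTER ( ALL ( tONAAppend ), NOT ( ISBLANK ( [m_ONA_rankFilter_nodes] ) ) )
        )
    RETURN
        IF (
            NOT ( ISBLANK ( rankFilter ) ),
            IF ( empCnt > 25, ROUND ( 1500 / empCnt, 0 ), 60 )
        )

The key change is that [m_ONA_rankFilter_nodes] is read from the variable in the outer condition, which follows the blank-handling pattern from the first article.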