We have just released a new update of BI Pixie (now available on Azure Marketplace and Microsoft AppSource) to help you analyze how design choices can impact user engagement and the effectiveness of your reports.

The motivation behind the new feature is rooted in the law of diminishing returns: at what point does increasing the use of a specific design element stop returning the desired user adoption and engagement in your reports?
Let’s use the number of visuals in a report as the design element and measure the number of interactions per session over time. It makes sense that reports with only a few visuals will see fewer interactions than reports with more visuals. But at what point does adding more visuals start to have a negative impact? Wouldn’t it be amazing if you could measure this across your entire portfolio of Power BI reports? Wouldn’t it be insightful to select different design elements – such as the number of bookmarks, slicers, buttons, or table visuals – put them on the X-axis of a scatter chart, and see how they affect engagement, user adoption, or satisfaction on the Y-axis?
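The diminishing-returns idea can be sketched in a few lines of Python. This is an illustration only, not BI Pixie code, and the per-report numbers below are invented for the example:

```python
# Illustrative sketch of finding the point of diminishing returns for a
# design element. The data is invented; in practice you would use the
# per-report metrics collected by your telemetry.
reports = [
    # (number of visuals, average interactions per session)
    (3, 4.1), (5, 6.8), (8, 9.5), (12, 11.2),
    (16, 11.8), (20, 10.1), (28, 7.4), (35, 5.2),
]

# Engagement rises with visual count up to a point, then declines.
best_visuals, best_engagement = max(reports, key=lambda r: r[1])
print(f"Engagement peaks at {best_visuals} visuals "
      f"({best_engagement} interactions/session)")

# Reports past the peak with below-peak engagement are first
# candidates for simplification.
candidates = [v for v, e in reports
              if v > best_visuals and e < best_engagement]
print("Visual counts worth reviewing:", candidates)
```

Plotting the same pairs on a scatter chart makes the inflection point visible at a glance, which is exactly what the new pages in BI Pixie let you do across your whole portfolio.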

This idea inspired the new feature in BI Pixie. Here are a few examples of questions you can now answer:
- Do we get higher engagement in complex reports compared to simpler ones? Was it worth investing so much time in building so many visuals in these reports?
- Do we deliver effective, modern experiences in our dashboards, or do we push our end-users to just use the export to Excel feature?
- Do bookmarks and slicers help our end-users discover new pages?
- Is there excessive use of controls that could explain why our end-users spend so little time in our reports?
In the BI Pixie app, you can now find two new pages to help you answer such questions. On the Design Impact page, you can analyze the effectiveness of your report design at the report level using three new scores that assess report complexity, usability, and passivity. With these scores and their underlying measures, you can identify ineffective design patterns in your reports.

On the left side of the page, you can find a slicer to select the engagement metric to analyze as the Y-axis of the two scatter charts. Using the scatter charts, you can look for scenarios where excessive use of design elements affects user engagement. For example, you can find reports with an extreme complexity score and low engagement as the first candidates for design improvement.
In the first scatter chart, you can select one of the three scores as the X-axis: Complexity score, Usability score, or Passivity score.
In the second scatter chart, you can select one of the report elements as the X-axis: # of Visuals, Visuals per page, # of Bookmarks, # of Slicers, and more.
By selecting a single row that represents a report in the top table, or a single bubble in one of the scatter charts, you can drill through to the Design Impact: Page-Level page, where you can analyze the effectiveness of your design at the page level.

What are the three new scores?
Complexity score
An estimate of the report’s complexity at the data visualization level as detected during the instrumentation by BI Pixie. A higher score means that the report may have an excessive number of visuals or objects that are expensive to maintain.
Usability score
An estimate of the report’s usability, based on controls such as bookmarks, slicers, tooltips, and drill-throughs detected during the instrumentation by BI Pixie. A higher score suggests a more user-friendly report, but extreme scores may indicate excessive use of such controls, which can have the reverse effect.
Passivity score
An estimate of the dominance of Table and Matrix visuals, which lead to passive consumption and a focus on Power BI as an export tool. A higher score means that the report encourages passivity and is more likely to serve as a data export tool rather than a modern, interactive dashboard.
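To make the idea behind these scores concrete, here is a hypothetical sketch of a passivity-style metric. BI Pixie’s actual scoring formulas are not published in this post; the simple proxy below, the share of Table and Matrix visuals among all visuals, is our own assumption for illustration:

```python
# Hypothetical illustration of a passivity-style score -- NOT the
# actual BI Pixie formula. As a simple proxy, we take the share of
# Table/Matrix visuals among all visuals on a report.
def passivity_proxy(table_visuals: int, total_visuals: int) -> float:
    """Return a 0-100 score; higher means a more table-dominated design."""
    if total_visuals == 0:
        return 0.0
    return round(100 * table_visuals / total_visuals, 1)

print(passivity_proxy(9, 12))   # table-heavy report -> high score
print(passivity_proxy(2, 20))   # chart-heavy report -> low score
```

A real score would likely blend several of the underlying measures (columns in table visuals, visuals per page, and so on), but the intuition is the same: the higher the dominance of tabular visuals, the more export-like the report.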
The following table summarizes the underlying metrics.
| Metric | Description |
|---|---|
| # of Visuals | The number of visuals that were found during the last BI Pixie instrumentation. This metric is aggregated at the page level. You can find this metric as a measure named “Visuals”. |
| Visuals per page | The average number of visuals per page (best used at the report or project/workspace level). You can find this metric as a measure named “Visuals per page”. |
| # of Bookmarks | The number of bookmarks that were found during the last BI Pixie instrumentation at the report level. You can find this metric as a measure named “Bookmarks”. |
| # of Slicers | The number of slicers that were found during the last BI Pixie instrumentation at the page level. You can find this metric as a measure named “Slicers”. |
| # of Table visuals | The number of Table and Matrix visuals that were found during the last BI Pixie instrumentation at the page level. You can find this metric as a measure named “Table visuals”. |
| # of Columns in table visuals | The number of columns used in Table and Matrix visuals as found during the last BI Pixie instrumentation. This metric is aggregated at the page level. You can find this metric as a measure named “Columns in table visuals”. |
| # of objects | The number of buttons, images, and shapes that were found during the last BI Pixie instrumentation. This metric is aggregated at the page level. You can find this metric as a measure named “Non-Analytic objects”. |
| # of Tooltip pages | The number of Tooltip pages that were found during the last BI Pixie instrumentation. You can find this metric as a measure named “Tooltip pages”. |
| # of Drillthru pages | The number of Drillthrough pages that were found during the last BI Pixie instrumentation. You can find this metric as a measure named “Drillthru pages”. |
Learn more about BI Pixie in our user guide.
