BI, ANALYTICS, AND REPORTING

Building Data Models That Work. Insights from Decades in Finance and Technology.

BY PROFESSIONAL ADVANTAGE - 5 MIN READ

Back when we started in finance and technology, over three decades ago, reporting was mostly manual. We relied on CSV or text-file exports from mainframe systems, converted into spreadsheet tools such as Excel and Lotus 1-2-3. PowerPoint decks and hard-copy printouts were the backbone of reporting.

Fast-forward to today, and we have powerful low-code/no-code business intelligence tools like Power BI.

But no matter how advanced analytics tools become, the reporting output is only as reliable as the quality of the underlying data and the users’ understanding of the data definitions.

The Importance of a Solid Data Model

Over the years, we’ve built models for global organisations and small businesses across a wide range of industries, business systems, and requirements. We’ve seen what works, and why data modelling is the foundation of reliable business intelligence. We’ve also seen data models fail: blurred and overlapping data tables, overcomplexity, and short-term design choices that build in rigidity and leave the model unable to adapt to a changing business environment.

In this blog, we share some of the lessons we’ve learned along the way:

1. Start with the Business Workflow

Before diving into the data, understand the business process. What decisions are being made? What questions need answering? Which data points are pivotal to those decisions? A good data model mirrors how the business operates, not just how data is stored.

If your model doesn’t align with the workflow (how customers place orders, how discounts are applied, line discount vs. order discount), your reports won’t have the flexibility to answer the functional questions management will ask; for example, sales totals that can’t easily be drilled down or regrouped from the perspective required.
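As a minimal DAX sketch of the discount distinction, suppose line-level rows sit in a Sales table and order-level discounts sit on an OrderHeader table; all table and column names here are hypothetical, not from any specific system:

    -- Hypothetical schema: Sales holds one row per order line with a
    -- per-line discount; OrderHeader holds one row per order with a
    -- header-level discount.
    Gross Sales =
        SUMX ( Sales, Sales[Quantity] * Sales[UnitPrice] )

    Net Line Sales =
        SUMX ( Sales, Sales[Quantity] * Sales[UnitPrice] - Sales[LineDiscount] )

    -- Order-level discounts apply once per order, so they are subtracted
    -- at the header grain, not per line.
    Net Sales =
        [Net Line Sales] - SUM ( OrderHeader[OrderDiscount] )

If the model collapses both discounts into a single column, that distinction is lost, and management can no longer ask which kind of discounting is eroding margin.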

2. Facts First, Dimensions Second

Every model starts by identifying the fact table, which holds the lowest level of transactional data. Whether it’s sales line items, timesheet line-item entries, or general ledger transactions, this is your foundation.

Then come the dimension tables: the categories that determine the “who, what, when, and where” of the fact transaction line. These provide context and allow users to slice and dice the data meaningfully. 

Think about the business questions, e.g. How many units of product X did we sell? How much did customer Y buy? How many hours were spent on a particular project? Product, customer, and project are the “dimension” categories managers need applied to the facts to make useful resource-allocation decisions. Then think about the different perspectives from which your users will want to slice and filter the data: by product, project, customer, manufacturer, distribution channel, region, date, or delivery type.
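A minimal sketch of what this looks like in DAX, assuming a Sales fact table related to Product, Customer, and Date dimension tables (all names illustrative):

    -- One measure defined on the fact table answers many questions,
    -- because each dimension supplies its own filter context.
    Total Units =
        SUM ( Sales[Quantity] )

    Total Revenue =
        SUMX ( Sales, Sales[Quantity] * Sales[UnitPrice] )

Filter Total Units by Product[ProductName] and you have “how many of product X did we sell?”; filter Total Revenue by Customer[CustomerName] and you have “how much did customer Y buy?”, with no new code either way.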

3. Avoid the “Quick and Dirty” Trap

It’s tempting to build reports from exported CSVs of predetermined report views. It’s fast, familiar, and often perceived as “good enough.” But it comes with risks: 

  • The parameters built into the view or export may exclude records that fall outside them (e.g. if a product was not sold during the period covered by the CSV file, it may be omitted from the report entirely, as if it doesn’t exist). 
  • Unreconcilable reports, where BI figures do not match direct exports from the source system. This mismatch in context erodes trust in the BI solution. 
  • Lack of detailed attributes (e.g. the product sold is named but the supplier or manufacturer is not). 

These shortcuts can lead to misleading insights and erode confidence in your reporting. 

A report built on incomplete data can be more damaging than no report at all.
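The first pitfall above is also where the dimension tables earn their keep. With a full Product dimension loaded from the source system, a small DAX adjustment (illustrative names again) keeps unsold products visible:

    -- Adding zero turns BLANK into 0, so products with no sales rows
    -- still appear in the visual instead of being silently dropped.
    Units Sold (Show All) =
        SUM ( Sales[Quantity] ) + 0

No measure can rescue a CSV export, though: if the unsold product never appears in the file, there is simply no row to report on.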

4. Connect to Source Systems Whenever Possible

The best models pull directly from source systems. This ensures: 

  • Complete and up-to-date data. 
  • Access to full attributes and unique keys. 
  • Better control over relationships. 

It also allows for more advanced features like auto-refresh, incremental updates, and integration with other tools. 

Direct connections mean more reliable data and more confidence in the reports.

5. Normalise and Optimise

A clean model performs better and is easier to maintain. That means: 

  • Removing redundant columns. 
  • Using a single date table, with virtual relationships leveraged where required (a sketch follows this list). 
  • Avoiding unnecessary joins and duplicates. 
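A hedged sketch of both ideas, assuming a Budget table that has no physical relationship to the model’s single date table (table and column names hypothetical):

    -- One shared date table for the whole model.
    Date =
        CALENDAR ( DATE ( 2020, 1, 1 ), DATE ( 2030, 12, 31 ) )

    -- A virtual relationship: TREATAS pushes the Date table's current
    -- filter onto Budget[BudgetDate] without a physical join.
    Budget Amount =
        CALCULATE (
            SUM ( Budget[Amount] ),
            TREATAS ( VALUES ( 'Date'[Date] ), Budget[BudgetDate] )
        )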

Power BI’s ability to auto-detect relationships and optimise performance is powerful, but only if the model is well-structured. 

A lean model is a fast model, and a fast model gets used.

6. Let the Model Drive the Report

Too often, we focus on visuals first. But the real value comes from the model. A strong model enables: 

  • Auto-generated reports. 
  • AI-driven insights. 
  • Reusable datasets across teams. 

Spend time getting the model right. The visuals will follow. 

Build the model once and power multiple reports.
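Reuse is the payoff. One base measure, like the Total Revenue sketched earlier, can power a family of report-ready variants; the time-intelligence functions below are standard DAX, assuming the single date table from point 5 is marked as the model’s date table:

    -- Variants defined once in the model, reused by every report.
    Revenue YTD =
        TOTALYTD ( [Total Revenue], 'Date'[Date] )

    Revenue LY =
        CALCULATE ( [Total Revenue], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )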

7. Think Long-Term

A good model isn’t just for today’s dashboard. It’s the foundation for scalable analytics. With platforms like Microsoft Fabric, your model becomes part of a unified ecosystem ready for use across multiple teams and AI use cases. 

Final Thoughts 

Data modelling isn’t just a technical task; it’s a strategic one. It’s the difference between reactive reporting and proactive decision-making. After 30+ years in the field, we can confidently say: if you want your BI to work, invest in a good data model. 

 
