Power BI is a powerful tool for turning data into insights, but a solid Power BI model is essential for good-quality reports. A well-designed model leads to accurate insights, faster performance, and confident decision-making. A poorly designed one? It can mislead your team, slow reporting, erode trust in your dashboards, and ultimately lead to poor decisions.
Here are seven common pitfalls we see in Power BI data modelling, along with practical examples and best practices to help you avoid them.
- Building models from CSV exports.
- Missing unique keys.
- Using multiple date tables.
- Overlapping attributes across tables.
- Lack of normalisation.
- Ignoring virtual relationships.
- Keeping unnecessary columns.
Let’s dive deeper into each issue, its impact, and tips on how to overcome it.
#1: Building Models from CSV Exports
At first glance, exporting data to a CSV and building your Power BI model from it may seem like a quick win and a time-saver. However, CSV exports are often filtered or partial snapshots of the source data, so they rarely provide the whole picture.
Filtered data
Suppose someone exports a list of records from your ERP system to a CSV file. Here are some examples of how the data in that export may be filtered:
- Customers who are currently active.
- Aggregation of transactions by month.
- Transactions from the last 30 days.
- Products that are in stock.
- Projects that are marked as “open”.
This means you're not seeing:
- Customers who churned or were deactivated.
- Weekly and daily sales patterns or impact of public holidays or weekends.
- Historical transactions that could show seasonal trends.
- Out-of-stock products or back orders that still impact revenue.
- Completed projects that help measure delivery performance.
Why is this a problem for reporting?
If you build your Power BI model from that filtered CSV, your dashboards will reflect only part of the story. You might conclude:
- “Customer Churn is low” — because you didn’t include inactive customers.
- “Sales are steady” — because you missed last quarter’s dip.
- “Sales dropped this month” — when the calendar month simply had five weekends rather than the typical four. Conversely, if your organisation is busiest on weekends, you may see an unexplained jump in sales.
The business impact of building models from CSV exports
These blind spots can lead to poor decisions, such as underestimating risk or volume, misallocating resources, or missing opportunities to improve customer retention.
Solution: Always connect directly to the source systems whenever possible.
#2: Missing Unique Keys
When building a data model in Power BI, every table should have a unique identifier, such as a CustomerID, ProductID, InvoiceNumber, etc. This identifier distinguishes each record from every other and lets related tables reference it unambiguously. (For example, two customers both named John Smith still have two unique identifiers that distinguish their related transactions in a sales invoice table.)
Why is this a problem for reporting?
Without a unique key, Power BI can’t reliably link tables together. This leads to:
- Broken relationships: Tables don’t connect properly, so your visuals might show blank values or errors.
- Ambiguous joins: Power BI doesn’t know which records to match with, so you get duplicates, inflated totals, or mismatched data.
- Duplicate records: without unique identifiers, the data may contain duplicate records, and there is no easy way to detect them.
The business impact of missing unique keys
Let’s say you’re analysing customer sales performance. If your customer table doesn’t have a unique CustomerID, Power BI might:
- Match the same customer name to multiple sales records (especially if names are spelled slightly differently).
- Aggregate two different customers who happen to share the same name (e.g. Paul Smith) as if they were one person.
- Show inflated revenue because it counted the same transaction twice.
- Miss key customers entirely because it couldn’t find a match.
This can lead to misleading insights, such as thinking a region is outperforming when it’s actually just a data duplication issue.
Solution: Use Unique Identifiers.
Every table should have a column that uniquely identifies each row. Examples include:
- CustomerID for customers.
- ProductID for products.
- OrderID for transactions.
- EmployeeID for staff.
These keys allow Power BI to accurately link, filter, aggregate, and calculate across tables without double-ups or mismatches.
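One quick way to sanity-check a candidate key is a DAX measure that compares the row count with the distinct count of the key column. This is a sketch — the table and column names are illustrative, not from the source:

```dax
-- Returns 0 when CustomerID is a true unique key for Customers;
-- any value above 0 means the table contains duplicate keys.
Duplicate Customer Keys =
COUNTROWS ( Customers ) - DISTINCTCOUNT ( Customers[CustomerID] )
```

Dropping this measure into a card visual gives you an at-a-glance check that refreshes with the data.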
#3: Using Multiple Date Tables
While having more than one date table in your model might seem harmless, maybe one for orders and another for deliveries, this can quickly lead to confusion and complexity.
Why is this a problem for reporting?
- Power BI doesn’t know which date table to use for filtering or time intelligence.
- You duplicate logic (like fiscal calendars or holidays) across tables.
- It’s harder to compare timelines like order date vs. delivery date because they’re disconnected.
The business impact of multiple date tables
For example, suppose you're trying to measure delivery performance. If your delivery and order dates live in separate date tables, calculating lead times or spotting delays becomes more complex, and the extra work needed to bridge the tables consumes more processing resources, slowing your report down. This leads to missed opportunities to improve logistics or customer satisfaction.
Solution: Use one central date table and connect it to multiple fact tables using inactive relationships. Then, DAX functions like USERELATIONSHIP can be used to switch to inactive relationships when needed.
This keeps your model clean and lets you compare timelines accurately, like “orders placed vs. orders delivered” or “campaign start vs. revenue spike”.
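As a sketch (table and column names are illustrative), with a single date table related actively to Sales[OrderDate] and inactively to Sales[DeliveryDate], the two timelines can be compared like this:

```dax
-- Uses the active relationship: Sales[OrderDate] -> 'Date'[Date]
Orders Placed = COUNTROWS ( Sales )

-- Switches to the inactive relationship on the delivery date
Orders Delivered =
CALCULATE (
    COUNTROWS ( Sales ),
    USERELATIONSHIP ( Sales[DeliveryDate], 'Date'[Date] )
)
```

Both measures can then sit side by side on the same date axis, something two disconnected date tables cannot do cleanly.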
#4: Overlapping Attributes Across Tables
When the same attribute, like “Region” or “Department”, appears in multiple tables, it creates confusion and inconsistency.
Why is this a problem for reporting?
- Power BI doesn’t know which table’s “Region” or “Department” to use; Region may appear as a column in both a sales team table and a store table.
- Each table may relate to the fact table differently. For example, one may be a customer table and the other a store table, so the same filter applied through each can produce very different aggregation results.
- Misaligned context: you risk reporting different figures for the same region depending on which table is used, because each has a different relationship to the fact table.
The business impact of overlapping attributes across tables
Imagine your sales report shows Queensland with $1M in revenue, but your customer report shows $800K. Why? Because “Region” was defined differently in each table. This undermines trust in your data and can lead to poor strategic decisions.
Solution: Normalise your model. Create a single dimension table for shared attributes like “Region” and link all relevant tables to it. This ensures consistent filtering and reporting across your dashboards.
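Ideally the shared dimension is built upstream in Power Query or the source system, but as a sketch (table names are illustrative) it can even be derived as a DAX calculated table from the distinct values in both tables:

```dax
-- Calculated table: one row per distinct region across both sources,
-- giving a single "Region" dimension to relate everything to.
Region =
DISTINCT (
    UNION (
        DISTINCT ( SalesTeam[Region] ),
        DISTINCT ( Store[Region] )
    )
)
```

Relating both SalesTeam and Store (and the fact table) to this one table means a single Region slicer filters every visual consistently.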
#5: Lack of Normalisation
When tables contain redundant data, your model becomes bloated and harder to maintain.
Why is this a problem for reporting?
- You repeat the same customer or product info across multiple tables.
- Relationships become tangled.
- Your model size increases, slowing performance.
The business impact of the lack of normalisation in the data model
Let’s say every sales transaction includes full customer details: name, address, contact details, etc. If you have 100,000 transactions, that’s 100,000 copies of the same customer details. This slows down your reports and makes updates a nightmare.
Solution: Split repeated data into dimension tables (e.g., Customers, Products) and link them to your fact tables. This keeps your model lean and makes it easier to update and scale.
#6: Ignoring Virtual Relationships
Sometimes, instead of using DAX to create dynamic relationships, people duplicate tables just to make joins work. This adds unnecessary clutter.
Why is this a problem for reporting?
- You end up with multiple versions of the same table.
- Your model becomes harder to navigate and maintain.
- Performance suffers due to duplications and increased complexity of DAX calculations.
The business impact of ignoring virtual relationships
For example, you want to analyse sales by campaign, but the relationship isn’t direct. Instead of duplicating the campaign table, you can use DAX to activate a virtual relationship. This keeps your model clean and avoids confusion.
Solution: Use DAX functions like TREATAS or USERELATIONSHIP to create relationships on the fly. This allows you to analyse complex scenarios without bloating your model.
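For instance, if Sales carries a campaign code but has no physical relationship to the Campaign table, a measure like the following can apply the campaign filter virtually. This is a sketch — the table and column names are assumptions for illustration:

```dax
-- Pushes the filter context from Campaign[CampaignCode] onto
-- Sales[CampaignCode] without needing a physical relationship.
Campaign Sales =
CALCULATE (
    SUM ( Sales[Amount] ),
    TREATAS ( VALUES ( Campaign[CampaignCode] ), Sales[CampaignCode] )
)
```

Any slicer on the Campaign table now filters this measure as if a relationship existed, with no duplicated tables in the model.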
#7: Keeping Unnecessary Columns
It’s tempting to keep every column “just in case”, but this leads to performance issues and cluttered models.
Why is this a problem for reporting?
- Your model size balloons.
- Reports take longer to load.
- Users get overwhelmed by irrelevant fields.
The business impact of keeping unnecessary columns
If your product table includes 20 columns but you only use 5 in your reports, those extra columns are just slowing things down. Over time, this affects report responsiveness and user experience.
Solution: Remove unused columns, especially:
- Datetime columns where the time component is not required (remove timestamps you don’t need).
- Audit fields like “CreatedBy”, “ModifiedDate”, or “LastModifiedDate”.
- Legacy fields from old systems.
- System reference fields pointing to keys in unused or irrelevant tables.
This keeps your model fast, focused, and user-friendly.
There are various ways to optimise Power BI data models for data accuracy and performance. This list of seven pitfalls covers the most common mistakes we see when helping customers build better Power BI data models.
If you need help reviewing your data models or optimising what you have built, contact us for an initial consultation. One of our Power BI-certified consultants can help guide you on your next step.
Book a 30-minute consultation with a Power BI-certified expert.


