For a long time, the norm in consolidation tools was 12 dimensions. As the tools have evolved and new tools have become established on the market, this is no longer a hard limit, and more dimensions can be used. Recently, I had the pleasure of developing a special-purpose HFM application for a financial institution with 14 custom dimensions, bringing the total number of dimensions to 22. The main driver for the number of dimensions is the granular reporting requirement set by the regulator. You can read the customer story here.
I’m reflecting on lessons learned from that project and from other recent engagements and discussions, to highlight issues that need to be taken into account when designing and working with applications with a high number of dimensions. Some of them might be obvious but are still worth mentioning, and others also apply to applications with fewer dimensions.
The more dimensions, the more support for users
As the number of dimensions grows, you need to define valid intersections carefully. The reason lies in mathematics: each new dimension multiplies the number of possible member combinations by its member count, whereas adding members to an existing dimension grows the combination count far more slowly. If data does not require specification in a dimension, the intersection should be made invalid. This reduces the theoretical maximum amount of data, and in my opinion it also helps the user more than it hurts, as it keeps the data clean.
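To make the multiplicative effect concrete, here is a small illustration. The figures (12 dimensions of 10 members each) are hypothetical, chosen only to show the ratio between adding dimensions and adding members:

```python
# Illustrative only: compare how the theoretical intersection count grows
# when you add dimensions versus when you add members to one dimension.
from math import prod

members_per_dim = [10] * 12          # 12 dimensions, 10 members each
base = prod(members_per_dim)         # 10**12 possible intersections

# Adding two more 10-member dimensions multiplies the space by 100...
with_extra_dims = prod(members_per_dim + [10, 10])

# ...while doubling the members in one existing dimension only doubles it.
with_extra_members = prod([20] + [10] * 11)

print(with_extra_dims // base)       # 100
print(with_extra_members // base)    # 2
```

The same data volume that is trivial in a 12-dimension model can become unmanageable two dimensions later, which is why invalidating unneeded intersections matters so much.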
Often, not all dimensions are required for all data. Require the dimensions only for the data that actually needs them. This way you don’t bother the users with details that are not required, and you keep the data clean and relevant. If the requirements change over time, you can always adjust the setup to require the dimensions.
A high number of dimensions requires automated data integration. The level of detail in the data grows, which makes manual data input painful; it should be an exception only, while the bulk of the data is brought in using integration tools.
Analysing data requires different tools when there are more dimensions. Reports and data forms work at a high level, but for anything more granular you need a database- or spreadsheet-based solution, because a row/column-style grid can only display a limited number of dimensions at once.
Granular but not transactional data
Since we are still talking about a consolidation application, let’s keep in mind that consolidation works best on balances, not transactions. A higher number of dimensions makes data more granular, but you should still avoid consolidating individual transactions because they don’t make sense at consolidated levels.
Transactional data should be converted into balances at some point in the process, as the tools allow. The conversion can take place in data integration or in a staging table outside the consolidation cube. As a last resort, the transactional data could be stored in the consolidation cube, but on members which are excluded from consolidation.
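The conversion step described above can be sketched as a simple aggregation. This is a minimal, hypothetical example of a staging step; the field names and values are invented, and a real implementation would live in the integration tool or staging database:

```python
# A minimal sketch of turning transactional rows into balances before
# loading the consolidation cube. Field names and values are hypothetical.
from collections import defaultdict

transactions = [
    # (entity, account, period, amount)
    ("FIN01", "1100", "2023-01", 250.0),
    ("FIN01", "1100", "2023-01", -50.0),
    ("FIN01", "1100", "2023-02", 100.0),
]

# Sum transactions into one balance per intersection (entity, account, period);
# the cube then receives balances, never individual transactions.
balances = defaultdict(float)
for entity, account, period, amount in transactions:
    balances[(entity, account, period)] += amount

print(balances[("FIN01", "1100", "2023-01")])  # 200.0
```

Whether the aggregation happens in the integration layer or a staging table, the principle is the same: only the summed balance crosses into the cube.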
Only in very few companies does the whole variety of source systems provide high-quality data meeting the granular requirements. This can be dealt with by applying default rules in data integration for the base-level data; this way the effect of defaulting remains minor. If possible, the defaulted data could be marked in some way to ease analysis.
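A defaulting rule of this kind might look like the following sketch. The dimension names, default members, and the marker field are all hypothetical; the point is the pattern of filling gaps and tagging the rows that were defaulted:

```python
# A hedged sketch of defaulting missing dimension values during data
# integration and tagging defaulted rows so they can be analysed later.
# Dimension names and default members are invented for illustration.

DEFAULTS = {"Product": "NoProduct", "Counterparty": "NoCounterparty"}

def apply_defaults(row: dict) -> dict:
    out = dict(row)
    defaulted = []
    for dim, default_member in DEFAULTS.items():
        if not out.get(dim):
            out[dim] = default_member
            defaulted.append(dim)
    # Mark which dimensions were defaulted, to ease later analysis.
    out["DefaultedDims"] = ",".join(defaulted)
    return out

row = apply_defaults({"Account": "4000", "Product": None, "Counterparty": "CP1"})
print(row["Product"])        # NoProduct
print(row["DefaultedDims"])  # Product
```

Keeping the defaulted rows identifiable makes it easy to measure how much of the data actually met the granular requirement, and to clean it up source by source.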
Scripting is a good way to implement calculations and steer consolidation. This is nothing new, but with a higher number of dimensions some aspects are worth noting. The amount of data is likely to grow, so avoid writing scripts that depend on the number of members or the amount of data as much as you can. This should be taken into account in metadata design as well.
Write your scripts as widely as possible: don’t reference dimensions you don’t absolutely need to. This keeps the scripts efficient and “future-friendly” for eventual changes.
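The difference between a narrow and a wide script can be illustrated outside any particular tool. The data model and member names below are invented; the point is that the wide version names no members, so it survives metadata changes unchanged:

```python
# Hypothetical sketch contrasting a script hard-coded to specific members
# with one written "widely". Data model and member names are invented.

data = {
    ("P100", "EUR"): 10.0,
    ("P200", "EUR"): 20.0,
    ("P300", "USD"): 5.0,
}

# Narrow: hard-codes members, silently goes stale when product P400
# is added to the EUR scope.
narrow_total = data[("P100", "EUR")] + data[("P200", "EUR")]

# Wide: no member list; picks up new products without any script change.
wide_total = sum(v for (product, currency), v in data.items() if currency == "EUR")

print(narrow_total, wide_total)  # 30.0 30.0
```

Both give the same answer today, but only the wide version still will after the next metadata update.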
Even though it is possible with most of today’s consolidation tools, adding dimensions should not be the first solution that comes to mind for more analysis capabilities. More dimensions always mean more challenges for users. Nonetheless, dimensions can be added if necessary.
Provided the scripts are written efficiently, a higher number of dimensions or members does not automatically mean longer processing times in calculation or consolidation; it’s the amount of data that does. Therefore, proper scripting and limiting valid intersections are important.