Want to know The Truth About CPM?

11 August 2014

Enhanced Planning Validations: Part 2

Introduction

At the end of July Tyler Feddersen very generously wrote part one of this two part series on data validation in Planning; you are now reading the concluding entry.  I’ll let Tyler do the explaining below but I want to make sure you, Gentle Reader, understand that other than this introduction and a super short concluding paragraph, everything is 100% Tyler’s.  I don’t have many guest bloggers (if you are interested send me a comment via this blog or a LinkedIn message) but I am always beyond happy to use my blog as a platform for sharing information.

Thanks must also go out to Performance Architects – not many companies (in my experience, “not many” should read “just about none”) would allow one of their employees to write for something other than their blog.  PA are to be commended for their spirit of information sharing.  Chuck Persky, whom I know from the long ago days of ISA (and thus my connection to PA), will be presenting at Oracle Open World – if you are going to that event I encourage you to attend his session Oracle Planning and Budgeting Cloud Service: Oracle Hyperion Planning in the Cloud [UGF9091] on Sunday, 28 September.  I’m not presenting on that Sunday (or at all at OOW, for that matter – sanity occasionally comes my way), but I am the guy who solicited the ODTUG EPM user group sessions.  And lest unworthy thoughts enter your mind,  I reached out to Chuck long before this blog post was written – there is no quid pro quo.

With that, enjoy.

Tyler Feddersen, Performance Architects

Let’s take it a step further


If you are reading this and have not read Part 1, I would definitely recommend stopping now and doing so. The rest of this post takes the results of Part 1 a step further to create a pretty good overall validation solution.

The validation process discussed in Part 1 was a simple way to identify errors that would cause data integrity issues. However, it does not prevent them. Prevention is what Part 2 attempts to accomplish.

A Look Ahead


Each individual step in this solution is fairly simple and direct. However, the process that combines them all can get a little confusing. Let’s take a glance at the general process flow that an end user would go through when working with this technique.

  1. The user completes all necessary data entries
  2. Following data entry, the user would change a lock status to “Locked”
     a. Done through a separate entry form
     b. A business rule checks for any errors prior to allowing the lock
     c. Causes a flag to show as “Locked”, which also prevents additional business rule execution
  3. If Process Management is used, the user would approve the entity, which would validate against the data form used in Step #2
     a. If the approvals process finds that the entity has not gone through the initial locking process, an error will be displayed
     b. By successfully approving, the user is no longer the owner of the entity
  4. Following a successful approval, the user is no longer able to perform any data entry, which means that the initial lock in Step #2 cannot be reverted
  5. At this point, the user would need to have the entity sent back through Process Management in order to perform any further data modifications

The end result of the process allows analysts and administrators to switch their focus to business-related issues rather than locking down the system and checking for all of the typical data entry errors. Administrators generally have to pick a universal end date to lock down the system in order to be assured that users are not making additional changes. With the process described above, budgets and forecasts can potentially be analyzed as they are completed, since a user’s ability to make any modifications is removed as soon as everything is approved.

The important part to remember is that this process is done completely with out-of-the-box functionality. If there is anything in these steps that you feel could be simplified… it probably could. But this will at least get us started.

Step 1: Dimensionality


If you recall from Part 1, I created two validation members: AllocValidate and Validate. AllocValidate was created in the dense Accounts dimension while the Validate member was created within a sparse dimension to give us block suppression capabilities. For this portion of the validation process, I created an additional Accounts member, LockFlag. This new member will be used in coordination with the previously created member, AllocValidate, to create a locking “flag” that all business rules can use to decide whether they should continue to process or not.

Additionally, I added a “NO_XXXX” member where needed. The flag only needs to be stored at the Entity level, so each dimension outside of Accounts, Period, Scenario, Version, Year (although I use No Year), and the dimension containing “Validate” will need a “NO_XXXX” member.





Step 2: SmartLists


Building off the SmartList from Part 1, I added a few new entries. These entries will be used to display the status of an entity in the approval process: Locked, Open, and Invalid. Additionally, I created a new SmartList to accompany the “LockFlag” member. This SmartList will be used by users to lock and unlock their entities.



Step 3: Create the Data Form


Create a data form with the following parameters:

Rows: Entity
Columns: AllocValidate and LockFlag
POV: All other members where the flag will be stored (including Validate)

Additional Specifications:
  • Make the column for AllocValidate Read-Only. This column will be modified based on the user selection for LockFlag in coordination with a business rule that will run on save.
  • Add a Validation Rule for the AllocValidate column to show as invalid if the column does not appear as being locked.  



Step 4: Business Rules


First, create a rule to run on save that will check the “LockFlag” selection and update the “AllocValidate” status accordingly. The script should initially check whether an entity has any error messages. If not, it will lock or unlock based on the LockFlag selection. Note that this rule actually runs for all entities each time a save occurs; because the rule runs very quickly, there is little to no potential for conflict between users. If desired, the rule can also be run as a Menu option to lock a single entity at a time. An example script is shown below.

The rule checks if any validation errors already exist. If so, it changes the AllocValidate flag to appear as invalid.


If the rule detects an invalid entry, the rest of the rule is not processed. If there was no error found, the rest of the rule will lock or unlock based on the LockFlag selection.
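Since the original screenshot isn’t reproduced here, here is a minimal sketch of what such an on-save rule might look like. This is an assumption-laden illustration, not the actual script: the SmartList values (1 = Locked, 2 = Open, 3 = Invalid for AllocValidate; 1 = Lock for LockFlag), the POV members, and the NO_Position member are all placeholders to adjust to your own application.

    /* Sketch only – member names, SmartList IDs, and the POV are assumptions */
    FIX ("Budget", "Working", "No Year", "BegBalance", "Validate", "NO_Position",
         @RELATIVE ("Entity", 0))
        "AllocValidate" (
            /* Does any Part 1 error message exist anywhere under this entity? */
            IF (@SUMRANGE ("AllocValidate", @RELATIVE ("Position", 0)) <> #Missing)
                "AllocValidate" = 3;        /* Invalid */
            ELSEIF ("LockFlag" == 1)        /* the user selected Lock */
                "AllocValidate" = 1;        /* Locked  */
            ELSE
                "AllocValidate" = 2;        /* Open    */
            ENDIF
        )
    ENDFIX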

Next, update each business rule to take the “AllocValidate” flag into consideration. If the flag’s current status is “Locked”, then we no longer want to run the business rule if it will change the current data set. An example is shown below, using the @RETURN function as well.
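Again, the screenshot isn’t reproduced here; below is a hedged sketch of the kind of guard that might sit at the top of each data-changing rule. The {rtpEntity} run-time prompt, the flag’s POV members, and the SmartList ID of 1 for “Locked” are assumptions.

    /* Hedged guard sketch – {rtpEntity}, POV members, and the ID 1 are assumptions */
    FIX ("Budget", "Working", {rtpEntity})
        "BegBalance" (
            IF ("AllocValidate"->"No Year"->"Validate"->"NO_Position" == 1)
                @RETURN ("This entity is locked. Unlock it before running this rule.", ERROR);
            ENDIF
        )
    ENDFIX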

Lastly, I’ve updated the rule that was created in Part 1 to include an automated re-assignment of the AllocValidate lock status to Invalid if an error occurs. This prevents an issue where the user locks the entity but then submits invalid data, as data entries cannot be prevented through the use of business rules.  The example below assigns an error to a specific assignment in addition to assigning the “Invalid” flag for the whole entity.
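A hedged sketch of that automatic re-assignment, using the same assumed member names and SmartList IDs as above; the “Alloc_Pct” test and the error message ID of 2 are stand-ins for whatever Part 1 validation actually applies.

    /* Hedged sketch – inside the Part 1 validation logic, after a failed check */
    "AllocValidate" (
        IF ( /* the validation test fails, e.g. allocations do not total 100% */
             "Alloc_Pct" <> 1)
            "AllocValidate" = 2;                     /* error message at this assignment */
            "AllocValidate"->"NO_Position" = 3;      /* entity-level Invalid flag        */
        ENDIF
    )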

Step 5: Process Management


Process Management is a fairly complex subject within Hyperion Planning, so I will not be going into a lot of detail on the specifics of setting everything up as that could be its own blog…or two. The assumption here is that the decision has been made to use Process Management, which allows this validation process to be incorporated into approvals. The important part is to make sure to select the “Do Not Promote” checkbox within the options for the Validation Rule set up in Step 3. This will cause an approval to fail if an error is detected within the data form.


Step 6: Example


If you’re confused at this point, it’s quite understandable. I am as well. Nonetheless, let’s put everything together and actually go through a sample process that a user might go through.

  1. The user has invalid data and is trying to lock department (entity) DP_0000. However, the rule has found a validation error.

  2. The user can navigate to a form with the list of errors affecting all of their entities. The example below was the form that was created in Part 1.

  3. The error is displayed on the form, and the user fixes the error. Note: My error flags were thrown off during the preparation for this demo, and the error doesn’t EXACTLY reflect the real error. We’ll pretend.

  4. After fixing the error, the user navigates back to the locking form and tries to lock again. The lock is successful this time.

  5. The user tries to run a rule to add a new position to the department that has been locked. Since this rule would change the data set that has been validated, an error is returned.

  6. The user goes back into the data and enters a change that will cause an error to occur. The flag is automatically reset to Invalid.

  7. As an example for Process Management, we’ll pretend the user has approved the department successfully. Once that happens, the department can no longer be unlocked unless it is sent back, creating a completely locked down department.


Conclusion


By now, you probably get the gist of what this solution is trying to accomplish. However, it all can get a little complex with how all of these pieces interact with each other. As mentioned earlier, all of these steps are just an example of how the pieces can work together to create a customized flow within any Planning application. They can be refined, simplified, or even expanded as needed. The best part about all of this is that it’s simply a new take on existing functionality, allowing developers and users to remain within the Planning technology. With the Hyperion functionality, we’re always attempting to achieve higher efficiency, and there is nothing more efficient than removing some redundant tasks from the administrator’s plate. Thanks to all for reading and a special thanks to Cameron as well for allowing me to hijack his blog for a few entries. Let me know if you have any questions or comments!

Cameron’s conclusion

So there you have it – a way to prevent planners from inputting incorrect data values.  It’s sort of the budget data quality Holy Grail for Planning administrators and consultants alike.  And oh yeah, business owners too.  This is awesome stuff and again my thanks to Tyler Feddersen for writing this and Performance Architects for allowing Tyler to write outside the walls of his company.  PA understands that sharing information is how we all grow and learn. Thanks for sharing, guys.

Be seeing you.

29 July 2014

Calculation Manager, BSO Planning, and ASO Planning combine for an awesome ASO Essbase procedural calculation hack -- Part 2

Introduction

This is part two of a three part series on Calculation Manager, ASO Planning, its relationship with BSO Planning, and Essbase ASO procedural calculations.  Part one covered all sorts of interesting things you can do with, oddly enough, BSO + Calculation Manager.  There is a reason for introducing that first, and then this post on making ASO procedural calculations fast.  I’d encourage you to stick around for part three which ties them both together in a most unusual way.  With that, let’s get into the details of making ASO procedural calcs fast, fast, fast.

Joe Watkins’ genius

How often do you get called genius?  If you’re anything like yr. obt. svt. it isn’t all that regular of an occurrence unless that term is within the context of, “You have a genius for screwing things up” or “You’re a genius at prevarication” or “I’ve met geniuses and you are no genius”.  

Joe Watkins (with whom I have only ever “met” via email and Network54) is a genius, at least when it comes to ASO Essbase, because the approach Joe came up with is…genius.

Why do I use the word genius?  Simply because:
  1. His approach is 100% undocumented.
  2. It is not an intuitive solution at first glance, but on examination it is not only obvious, it’s !@#$ing awesome.
  3. It solves a problem you could drive an HGV double-articulated lorry through.
  4. It is fast, fast, fast, fast.
  5. It is a total hack.

And oh yes, it is part two of my three part series on Calculation Manager, BSO Planning, ASO Planning, and ASO Essbase procedural calculations.

While this blog post stands on its own for Essbase-only practitioners interested in the technique alone, I will explain why you will at least want to combine it with the CDF information I gave in part one even if the words “Hyperion Planning” never cross your lips.  Hyperion Planning geeks will have to read all three parts to get all of this hack (and yes, I contributed something to this, so it isn’t all a case of stealing from others to get a good solution).

The problem(s) with ASO Essbase procedural calcs

It’s really very simple and very devastating at the same time – ASO procedural calculations do not ignore empty level zero member intersections but instead consider all of them.  Where there’s constituent data, Essbase values the result; where there’s none, Essbase leaves the result blank.  For us BSO geeks, this is 100% not the way BSO Essbase works by default; in BSO, no blocks = no result unless we force Essbase to create the blocks.  If only there were a way to make ASO Essbase behave the same way…

What this ASO Essbase behavior means is that procedural calculations, unless they are very tightly constrained in scope, can be agonizingly slow.  And even that tight targeting of the calculations can be a roadblock – do you always know where in a database a calculation should occur?  Maybe you could write a series of small scope calcs in a reporting application, but that would be very difficult to do if there is an element of budgeting to the application.

And in fact, I’ve understated the problem – even in a pretty small ASO database a procedural calculation can take a very, very, very long time.  Proof?  Read on.

The database

I stumbled (as is my usual wont) into this as part of a presentation.  I was trying to write a currency conversion calc to mimic, sort of, what happens in BSO Planning as part of an ASO Planning Kscope14 presentation I gave with Tim German.  I should also mention that Dan Pressman was a big help in the building of the dimensions.

The dimensions


By ASO standards it isn’t much.  Of course, by the lights of BSO, it’s huge – more on that in a moment.

The logic

Oddly, ASO Planning does not create all of the dimensions required for a multiple currency database. Oracle expects you to create them yourself. No problem, I thought, I’ll simply create dimensions similar to what exists in BSO Planning and go from there.

Dimensions

Here’s how the fx relevant dimensions look when compared to a standard Planning-derived BSO fx plan type:
BSO                                      ASO
HSP_Rates                                Fx Rates
Account (HSP_RateType and children)      Account (Fx Rate Type and children)
Currency                                 Currency
Product (Entity)                         Product (Entity)
N/A                                      Analytic

For you Planning geeks, the Product dimension is the Entity dimension, Currency was automatically created by Planning (although the corresponding fx conversion logic was not), and Analytic and Fx Rates are custom dimensions I created to support ASO fx.
Account
Accounts stored the difference between Average and Ending rates.  This is just like BSO Planning.
FX Rates
I don’t have an HSP_Rates dimension but this is the same thing, mostly.
Currency
This dimension came directly from Planning itself.
Product (Entity)
The concept of tagging the members with UDAs is the same as BSO Planning.

BSO code

Want the code?  Create a multicurrency Planning app and then have Planning generate the calc script.  I’m just showing the screen shot to give you a feel for the logic.

All that the code is doing is:
  1. FIXing on level zero members
  2. Testing for UDAs assigned to the Entity dimension
  3. Multiplying the Local input value by the fx rate and dividing by the USD rate (which is always 1)

As this is BSO, after the fx conversion, an aggregation is needed but not shown.  Of course ASO won’t require that aggregation code as it does that aggregation on the fly.
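The screenshot isn’t reproduced here, so the sketch below shows only the shape of that generated logic; it is not the actual Planning output (which works through HSP_Rates members), and the “GBP” UDA and “Average Rate” member are assumptions.

    /* Shape of the logic only – UDA and rate member names are assumptions */
    FIX (@LEVMBRS ("Product", 0), @LEVMBRS ("Account", 0))
        "USD" (
            IF (@ISUDA ("Product", "GBP"))
                /* Local input times the GBP rate, divided by the USD rate (always 1) */
                "USD" = "Local" * ("GBP"->"Average Rate") / ("USD"->"Average Rate");
            ENDIF
        )
    ENDFIX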

ASO first attempt

I wrote this in Calculation Manager (remember, I was trying to do this for Planning) but the logic is exactly the same in MaxL.
Execute calculation

Note the SourceRegion – in this case it’s all individual members because I was trying to calculate just one rate.  I would at least have to open up the fx member set if I were to calculate more than one.
fxtest.csc
This is a one line currency conversion formula.  
This whole approach is ugly and not easily expandable, but it serves to illustrate an almost literal translation from BSO to ASO logic.
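Since neither screenshot survives here, a hedged approximation of both pieces follows; the application and database names, the POV members, and the rate intersection are all placeholders, and the full execute calculation grammar is in the MaxL reference.

    /* Hedged approximation – app/db, POV members, and file path are assumptions */
    execute calculation on database ASOfx.ASOfx with
        local script_file 'fxtest.csc'
        POV "Crossjoin ({[FY14]}, Crossjoin ({[Budget]}, Crossjoin ({[Working]},
             Crossjoin ({[Jan]}, {[DP_1000]}))))"
        SourceRegion "Crossjoin ({[Local]}, {[Average Rate]})";

The one-line script it calls might look something like this:

    /* fxtest.csc – a one-line conversion; member names are assumptions */
    ([USD]) := ([Local]) * ([GBP], [Average Rate]);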
Success, if you can call it that.  
After entering the rates (sent from a Smart View ad-hoc sheet into the Essbase database as ASO Planning doesn’t push rates the way BSO Planning does), I entered one data value to be converted from Sterling to Dollars as per the above code.  That is one as in a whole number, greater than zero, but less than two.  How long do you think it took to run?  A second?  Half a second?  Something else?

How about 6,000 seconds?  Let me repeat that in words, not numbers: six thousand seconds or sixty hundred seconds or one and two thirds hours.  To convert one data value.  See the problem with ASO procedural calculations?

How long did the same database (just about) with the same amount of data take to run in BSO?  The total elapsed calculation time was 0.025 seconds.  So much for the mighty power of ASO compared to poor old obsolete BSO.

The fix

The key to BSO’s speed is that BSO does not consider all of the possible level zero member intersections; it only considers the sparse member combinations that have data.  In ASO terms, it only calculates based on the non empty member intersections.  There are NONEMPTYTUPLE and NONEMPTYMEMBER commands in MDX but unfortunately they are not part of the execute calculation and execute allocation (the two and only two ways to run ASO procedural calculations) grammar.

NB – Oracle say this is coming to Essbase but when is TBD.  That will be (some day) great for those of us who are on the latest release, not so much for everyone on 11.1.2.3.500 and before.

So how can we get NONEMPTY functionality in ASO if it’s not part of the commands?  Enter, finally, Joe Watkins’ technique again.

The problem with NONEMPTYTUPLE

NONEMPTYTUPLE (I used that instead of NONEMPTYMEMBER) can only be used in an outline member formula.  Member formulas are (quite naturally) in the outline.  The member formula fires at all levels, and in the case of an fx rate calculation, is only actually valid at level zero.

This is a bit of an impasse – we know the problem with procedural calcs is the NONEMPTY issue, fx rate calculations only make sense at level zero, MDX has a keyword to address this, but only in member formulas and member formulas fire at all levels, not just level zero.  What to do?

Back to the Genius of Joe Watkins

Instead of trying to fight Essbase, Joe came up with a really clever way of using existing ASO functionality.  I read about his approach to fx over on Network54 in the beginning of 2013 and, since I was doing BSO Planning (that’s all there was) at the time, filed it away for future reference.  I also thought he was nuts for saying things like, “This is the future of ASO.. BSO will be dead in 5 years.. (my prediction)....”  Now I’m not so sure.

What he did

Joe:
  1. Created a member formula that contained his fx logic
  2. Stuck a NONEMPTYTUPLE keyword at the top of the formula
  3. Ran an ASO procedural allocation (not calculation) that 100% copied his member formula to a stored member thus harnessing ASO’s fast non empty performance but keeping the data calculated only at level zero
  4. Enjoyed success, the good life, and that warm glow inside that only comes from helping others

I may be slightly exaggerating number four, but one through three are the truth.

NB – The example here is an fx calculation, but this approach works for any and all level zero calculations.

Here’s how I did it

Additional member

In the Analytic dimension, I created a calculate-only member called MTD USA.  It contains the member formula to calculate fx conversion.

MTD USA’s member formula

Note the NONEMPTYTUPLE command that makes the member formula only address non empty data.

The CASE statement is an MDX version of the BSO currency conversion calc script.
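The formula screenshot isn’t shown here, so here is a hedged sketch of the shape of an MTD USA-style formula; the UDA names and the rate tuple are assumptions.

    /* Hedged sketch – UDA names and the rate intersection are assumptions */
    NONEMPTYTUPLE ([MTD], [Local])
    CASE
        WHEN IsUda ([Product].CurrentMember, "GBP")
            THEN ([MTD], [Local]) * ([GBP], [Average Rate])
        WHEN IsUda ([Product].CurrentMember, "USD")
            THEN ([MTD], [Local])    /* the USD rate is always 1 */
        ELSE MISSING
    END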

Execute allocation

It’s all pretty simple from here on, thanks to Joe.  All I need to do is kick off an execute allocation in MaxL: set up my POV (think of it as my FIX statement) and identify the source (Local) and target (USD).  By not defining a spread range other than USD, Essbase copies everything from MTD USA in Local to MTD in USD.

Did you see the $5, $6, and $7 in the code?  If it’s MaxL, it can be driven through parameter variables.  Think about how you might use that in Planning given the last post’s review of @CalcMgrExecuteEncryptMaxLFile.
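Since the MaxL screenshot isn’t reproduced here, a hedged reconstruction of the general form follows. The application and database names, the POV, and the exact option list are assumptions; check the MaxL reference for the full execute allocation grammar. The $5, $6, and $7 are MaxL positional parameters, passed in when the script is invoked.

    /* Hedged reconstruction – app/db, members, and option list are assumptions;
       $5, $6, and $7 are MaxL positional parameters */
    execute allocation process on database ASOfx.ASOfx with
        pov "Crossjoin ({[$5]}, Crossjoin ({[$6]}, {[$7]}))"
        amount "([MTD USA], [Local])"   /* source: the dynamic fx member in Local   */
        target "([MTD])"                /* the stored member that receives the copy */
        range "{[USD]}"                 /* a one-member range makes this a copy     */
        spread;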

How fast is it?

On my VM with a limited set of data (I have finally ordered that 1 TB SSD but have yet to install it so I am constrained for space) I observed the following calculation times, in seconds:

Process        BSO        ASO      x Faster
Allocate       106          3            35
Fx             400          1.2         333
Aggregate    1,772          N/A         N/A
Total        2,278          4.2         542

The allocate and the aggregate times are interesting, but the biggest overall difference is in fx – it’s over 300 times as fast as the equivalent BSO outline and data.  Look at that, ASO is faster than BSO, if it only considers non empty data.  

And now, thanks to Joe’s technique, it can.  A hack, most assuredly, but a glorious one.  I have an upcoming ASO budgeting application that I was dreading because I couldn’t figure out how to quickly do its rate calculations (no fx involved).  Now I know how to do it, and quickly.

This technique is, in a word, awesome.  Yeah, I take some stick for overusing that word, but 300 times the speed of BSO is sort of remarkable.  All of us who use ASO procedural calcs owe Joe a huge round of thanks.

So what’s left?

Part three of this series will bring together:
  1. Running MaxL from BSO while in an ASO Planning application with Calculation Manager
  2. The fast ASO procedural calcs you just read about
  3. How to use this in Planning (and even Essbase)

I know it’s all been a bit long but there’s a lot of information to impart and it took me freaking forever to figure out how to tie it all together – there’s no reason why explaining it should be any faster.

:)

Be seeing you.