Want to know The Truth About CPM?

25 August 2014

Calculation Manager, BSO Planning, and ASO Planning combine for an awesome ASO Essbase procedural calculation hack -- Part 3

Introduction

This is the third and final installment of a three-part series on ASO calculations and, specifically, ASO Planning calculations.  Thus far I’ve shown how to use the @CalcMgrExecuteEncryptMaxLFile function via Calculation Manager, which is pretty cool, and then how to make ASO procedural calculations in MaxL fast, fast, fast.  That’s all well and good, but how does that relate to ASO Planning?

I’m awfully glad you asked that, because these two hacks combine in ASO Planning to create ASO Planning procedural calculations that are both unbelievably fast and slick.  Read on, and all will be revealed.

The path not taken

Before I go any further, you are likely thinking, (Are you?  Really?  Really?  If so, you’re just as sad as I.  We both should seek help.) ‘arf a mo’, Cameron, why wouldn’t you use the ASO procedural calculation functionality in Calculation Manager?  Why indeed?

It isn’t as though ASO Calc Mgr procedural calculations aren’t available in ASO Planning 11.1.2.3.500 – they are.

But what is also there is a bug, and I have to say quite a reasonable one.  I like to think of myself as the kind of person that can break anything, if I try long enough.

A short review

The essence of fast procedural calculations in ASO Essbase is (or would be) to use a NONEMPTY modifier in the calc script.  Unfortunately, at this time that is not available, although I understand it is somewhere on the product enhancement list.  What my prior post explained in great detail was the hack Joe Watkins came up with: use the ASO procedural allocation grammar to copy the results of a member formula to a stored member.  That member formula (dynamic, and in the case of currency conversion, only valid at level zero) can use the NONEMPTYTUPLE keyword so that Essbase only considers existing data; the allocation then moves the result to a stored member.

The next few paragraphs are a rip-and-read from that post but it’s short, explains everything, and I am too lazy to paraphrase all of it.

Additional member

In the Analytic dimension of my Planning app, I created a calculate-only member called MTD USA.  It contains the member formula to calculate fx conversion.

MTD USA’s member formula
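In sketch form it looks something like the following.  The rate member, the No Entity intersection, and the IsLeaf test are stand-ins for whatever your outline actually uses; the real formula’s CASE walks each currency and picks the matching rate.

   NONEMPTYTUPLE ([MTD], [Local])
   CASE
      /* Convert only where level zero Local data actually exists */
      WHEN IsLeaf([Entity].CurrentMember) THEN
         ([MTD], [Local]) * ([End of Month Rate], [No Entity])
      ELSE
         MISSING
   END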

Note the NONEMPTYTUPLE command that makes the member formula only address non-empty data.

The CASE statement is an MDX version of the BSO currency conversion calc script.

Execute allocation

It’s all pretty simple from here on, thanks to Joe.  All I need to do is kick off an execute allocation in MaxL, set up my POV (aka my FIX statement), and identify the source (Local) and target (USD).  By not defining a spread range other than USD, Essbase copies everything from MTD USA in Local to MTD in USD.
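In sketch form the statement looks like this – the application, database, and member names are illustrative, and part two spells out the real pov, amount, target, and range combination:

   execute allocation process on database SomeApp.SomeASODb with
      pov       "Crossjoin({[$5]}, Crossjoin({[$6]}, Crossjoin({[$7]}, {[MTD]})))"
      amount    "([MTD USA], [Local])"
      target    ""
      range     "{[USD]}"
      spread;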

Did you see the $5, $6, and $7 in the code?  If it’s MaxL, it can be driven through parameter variables.  

Got it?  MTD member formula with NONEMPTYTUPLE + ASO procedural allocation that allocates 100% of that dynamic formula member to a stored member equals fast, fast, fast.

So what didn’t work?

I know that the Calc Mgr team is quite proactive and I suspect that this bug will be fixed soon, but in the meantime, and because this is a Most Excellent Hack with lots of possibilities outside of Planning, I’ll show how to get round it.

Specifically, what went KABOOM?

Oracle never thought anyone would allocate 100% of a level zero member to another.  And I can hardly blame them for not anticipating it.

Here’s the relevant screenshot in Calc Mgr.  It (again, quite reasonably) assumes that when you allocate a data value, you do it from an upper level member all the way down to the bottom.  And that is the normal way to do an allocation, except the fast ASO procedural calc hack doesn’t do that – it allocates a level zero member to a level zero member.  And that doesn’t work.

How I solved this

I found this defect as I was writing the joint presentation I gave with Tim German for Kscope14, and I wasn’t exactly doing it months before the conference.  I was stuck.

But I remembered seeing the @CalcMgr functions back in Essbase 11.1.2.2.  What if I could write a BSO Calc Mgr rule and drive an ASO procedural calc via MaxL?  

And it turns out that in fact there are a lot of ways to run a MaxL script from BSO:
  • @CalcMgrExecuteEncryptMaxLFile (privateKey, maxlFileName, arguments, asynchronous)
  • @CalcMgrExecuteMaxLEnScript (privateKey, maxlScripts, arguments, asynchronous)
  • @CalcMgrExecuteMaxLFile (user, password, maxlFileName, arguments, asynchronous)
  • @CalcMgrExecuteMaxLFile (user, password, maxlFileName, arguments)
  • @CalcMgrExecuteMaxLScript (user, password, maxlScripts, arguments, asynchronous)
  • @CalcMgrExecuteMaxLScript (user, password, maxlScript, arguments)
  • RUNJAVA

And it gets better

Once I realized this, it hit me that I could likely drive it off of ASO Planning forms and pass the Page, POV, and even the User Variable values on save into a BSO Calc Mgr rule, and from there into a MaxL script that runs the allocation.  OMG, Essbase ASO procedural calc nirvana could ensue.  Or the end of the world.  If side bets on igniting the atmosphere were good enough for Manhattan Project physicists during atom bomb tests, surely giving this a whack seems worthwhile.

The short story is that all of this somewhat amazingly works, and works quite well.  I’ll cover the straightforward setup and application of this and then go into some of the more interesting possibilities.

Doing it @CalcMgrExecuteEncryptMaxLFile style

You will remember from the first post that it is very important, if you only mean to run the ASO procedural calc once, to limit the scope of @CalcMgrExecuteEncryptMaxLFile to one and only one block.  And oh yes, that block must exist for this to work.  Here’s the code:
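In sketch form – member names, paths, and the key are illustrative, the real key pair comes from essmsh -gk, and the arguments show up in the MaxL script as positional variables (the $5, $6, and $7 business from earlier):

   FIX ("No Entity", "No Product", "BegBalance", "Local", "FY15", "Working")
      "MTD"
      (
         @CalcMgrExecuteEncryptMaxLFile(
            "912083,743902",                      /* private key (illustrative) */
            "/epm/automation/asoFxAlloc.mxls",    /* encrypted MaxL script (illustrative path) */
            @LIST("FY15", "Working"),             /* arguments passed on to the MaxL script */
            "false");                             /* false = wait for the script to finish */
      )
   ENDFIX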

See part one for all this approach’s requirements beyond blocks.  You will note that this BSO script does not have any Calc Mgr variables but I could have easily used them.

RUNJAVA, RUNJAVA, RUN RUN RUN

Again, see part one for all of the rules.  Note that the FIX and the existing-block requirements do not apply.  But what I want you to focus on is the {varYear}, {varBRUVAccount}, {varProduct}, and {varPostCode} Calc Mgr variables.
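A sketch of the RUNJAVA flavor follows.  The {varYear}-style tokens are the Calc Mgr variables being substituted at run time; everything else, and in particular the parameter order for com.hyperion.calcmgr.common.cdf.MaxLFunctions, is my assumption here, so check the Calc Mgr CDF documentation for your release before lifting it.

   RUNJAVA com.hyperion.calcmgr.common.cdf.MaxLFunctions
      "true"                                   /* asynchronous flag (assumed position) */
      "/epm/automation/asoFxAlloc.mxl"         /* MaxL script to run (illustrative path) */
      "/epm/automation/logs/asoFxAlloc.log"    /* log file (illustrative path) */
      "{varYear}" "{varBRUVAccount}" "{varProduct}" "{varPostCode}";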

Here are the variables as defined in Calc Mgr.  NB – These are Calc Mgr variables passed from an ASO Planning form into a BSO Calc Mgr rule.  Coolness.  And awesomeness.  And a great hack.

Here’s the (again, BSO) rule associated with the (again, ASO) form in Planning.  Note the Use Members on Form tag:

ASO procedural calc

Here’s the MaxL code containing the ASO allocation script:
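In sketch form it is the review section’s execute allocation wrapped in an ordinary MaxL shell, with the values handed over from the Calc Mgr rule arriving as positional variables (names, paths, and positions are illustrative):

   /* asoFxAlloc.mxl -- illustrative only */
   login $1 identified by $2 on $3;
   set timestamp on;
   spool on to "/epm/automation/logs/asoFxAlloc.log";

   execute allocation process on database SomeApp.SomeASODb with
      pov       "Crossjoin({[$4]}, Crossjoin({[$5]}, Crossjoin({[$6]}, Crossjoin({[$7]}, {[MTD]}))))"
      amount    "([MTD USA], [Local])"
      target    ""
      range     "{[USD]}"
      spread;

   spool off;
   logout;
   exit;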

And the output from that fx conversion.  Note how ASO Planning form values got passed to Calc Mgr variables and then to MaxL to run the ASO procedural calculation (see the pretty pastel highlight colors):

And now the demo

And here’s a very short movie showing it executing.  Please excuse the editing (with clock) at the end as I was trying to spare you all the trauma of me searching for the calculation time in the Essbase application log.  In any case, the time logged to MaxL (0.027 seconds) shows up in the application log as well.

Numbers don’t lie

Finally, you know from part two of this series how fast this can be.  The times you are seeing below are slower than what I demonstrated because they represent full database size (my database is just a fraction of the full dataset because of disk space constraints – trust me, these numbers are real):

Process      BSO      ASO    X faster
Allocate       106      3         35
Fx             400      1.2      333
Aggregate    1,772      N/A      N/A
Total        2,278      4.2      542

Using this technique, the ASO fx is over 300 times as fast as the equivalent BSO outline and data.  A little slice of Essbase performance heaven, isn’t it?

Conclusion, or is it?

A combination of the Calc Mgr CDF that ships with every copy of Planning (and Essbase, for that matter), the tried-and-true passing of POV, page, and now row and column members to Calc Mgr, and a bit of creative ASO Essbase procedural calculation gives the Planning community access to an amazing amount of power and functionality.

Cool, eh?  But this technique can be taken quite a bit further.

Where this starts getting really interesting

The demo you see above is from a Planning application that has BSO and ASO plan types that mirror one another.  As such, the dimensionality in the BSO application mostly matches ASO.  Is this required?  Absolutely not.

In fact, all that I need to run ASO procedural calculations in Planning is a BSO plan type with exactly one block of data (for @CalcMgrExecuteEncryptMaxLFile) or one that is completely empty (for RUNJAVA), and I can then address any ASO Planning plan type, even ones across multiple Planning applications or even servers.  The Calc Mgr functions call MaxL, and MaxL can address any Essbase database to which it is provisioned, whether that be a Planning plan type, an ASO Essbase database, a BSO Essbase database, some combination of the above, etc., etc., etc.

Calc Mgr (or even Planning) isn’t strictly required if you don’t want it – you could use all of this in a pure Essbase database and drive scope with command-line substitution variables, or just hard code it all.  You can go absolutely wild, relatively speaking, with this approach and do just about anything with it.  It is a very powerful technique and one that I hope will be exploited.

I find this all oddly stimulating.  But I’m weird.

Now the real conclusion and a question for you

This is one of my longer posts – almost 30 pages in Word, which equates to approximately 6,500 words in total.  Does it make sense to write multiple-part posts like this, or would the EPM community be better served by me trying to write things like this as a white paper?  Write care of this blog or just email me.
Be seeing you.

22 August 2014

Essbase 11.1.2.3.502 is available for download

It’s out

I must give a hat tip to Steph who commented on my 11.1.2.4 post as I didn’t know about the patch release.

Two things that I found interesting

  1. No stated improvements to Hybrid BSO (although I could swear that Gabby said there were some)
  2. Fragmentation (storage engine not stated, but I am not aware of significant ASO .dat fragmentation) no longer matters
Here's the relevant quote with emphasis added:
Historically, fragmentation has been perceived as degrading performance. However, with advances in hardware, memory, and disk architectures, the correlation between fragmentation and performance is no longer significant. Furthermore, several enhancements have been made to algorithms within Essbase, making the older statistics pertaining to fragmentation less relevant. Oracle recommends the use of the latest efficient storage systems to store Essbase data files, such as Storage Area Network (SAN) or flash.

That’s going to blow up the rule of thumb “defrag for performance”.   <grin>  One thing that the documentation does not note is when this became true.  Presumably it’s in the .502 patch as that’s when this went into the ReadMe but sometimes documentation lags a bit, particularly when it doesn’t address a defect.

Oracle do go on to state that fragmentation is still somewhat important because it increases disk requirements:
The second implication of fragmentation is related to increase in the size of data files. Oracle recommends regular monitoring of the number of blocks and the data file size. If the size of the data files increases even though the number of data blocks remains the same, and available disk space is diminishing, consider taking steps to reduce fragmentation.

So Essbase continues to eat disk when it’s fragmented but that’s only a worry if the database is constrained on space.  Verrrrry interesting.

I’m not sure how one would test this – I suppose a series of benchmarks against a db when it’s 100% defragmented and then when it is nicely fragmented would do it, although, per Oracle’s comment, if the fragmentation statistics are no longer relevant, how will you know?  I look forward to someone other than myself doing the testing.  <even bigger grin>

There’s quite a bit more to the ReadMe so you should Read The Whole Thing (login to Oracle Support required).

Be seeing you.

18 August 2014

Stupid Programming Tricks No. 19 -- TRUNCATEing DATAEXPORT

An interesting statistic

The code I am about to show you, and the solution that I came up with, took me over 15 hours of research, cursing, testing, and finally triumph.  Multiple cups of coffee were also consumed, as were a few cups of proper tea.  Why do you care about what someone (such as yr. obt. svt.) does for free or how he imbibes caffeine?  Perhaps for the amusement factor, as the solution that I came up with consists of eleven lines of code.  Eleven lines of code in 15 hours, for those of you who are mathematically challenged, is 0.73 lines of code per hour, or 5.5 lines of code per calendar day.  Ouch.  But that is the price of having a "good" idea, venturing into technology areas one knows very little about, and not giving up.

I tend to agree (if only to salve my ego) with the first point by David Veksler of the Mises Economic Blog (What, you’re not all Hayek fans?).  I quote:
A programmer spends about 10-20% of his time writing code, and most programmers write about 10-12 lines of code per day that goes into the final product, regardless of their skill level. Good programmers spend much of the other 90% thinking, researching, and experimenting to find the best design.

Does that put me in the category of a good programmer?  It’s hard to say – I do spend an awful lot of time thinking, “What am I trying to do and conceptually, how would I do it?” and then later, much later, I spend even more time trying to figure out how I turn that concept into reality.  So is that being a good programmer or just someone flailing about in areas that he doesn’t know much about?  You decide.

Why oh why oh why am I prattling on about this?

There was a thread the other day over on Network54 that discussed how to use the headers that come from a BSO DATAEXPORT statement as they are, without a doubt, pants.  I suggested that SQL output and subsequent manipulation (I should note that Adam_M went and did it in a text file but no matter, this was the inspiration and based on what he wanted to do, I think SQL might have been a better fit for transforming the data – Read The Whole Thing and make up your own mind) was the answer to Adam_M’s issue, but when I did so I had in the back of my mind the nagging reminder that, “Yeah, he’ll write it to relational, but then he’ll do it two or three or more times as he tweaks the extract and will end up with many more records than he ought.  This is a bummer (and erroneous – I would personally rather have it error out) and cannot be resolved by Essbase.”  And that’s right; Essbase will not truncate the table before it writes.  You have to do this manually and that of course means you have some kind of SQL access which in the real world, particularly in production, you will most likely not have.

The goal aka the functional spec

It would be super nice if BSO’s DATAEXPORT calc script function did a TRUNCATE to clear the table, or at least had the option of doing a TRUNCATE, before writing to a SQL target.

Two paths not taken

Just too much of a hack, even for me

One approach would be to use the @CalcMgrExecuteEncryptMaxLFile function I wrote about (and I still owe part three of that series to you, Gentle Reader): run a MaxL script, in that MaxL script use the shell grammar to run a SQL*Plus (if Oracle) or sqlcmd (if SQL Server) script that does the TRUNCATE, return to the calling MaxL script, end it, and do the export as the next step in the original calc script.  At least I am pretty sure that would work, but it is just too clunky for words.  I would go down this path if I wasn’t allowed to make the modifications I am about to describe, but I would hate myself in the morning.

The way I would like to do it, but cannot

What occurred to me was that a SQL trigger that did a TRUNCATE before the INSERT from DATAEXPORT would do the trick.  Unfortunately, this cannot work because of the way Essbase writes to relational tables.

Would you believe…

That the default for Essbase is to do separate INSERTs for each and every line in the export?  I had heard this from My Man In California (MMIC) aka Glenn Schwartzberg but like oh so many words of wisdom he drops my way (MMIC is a very giving guy and I a forgetful one), I completely forgot.  Why does this inefficient approach to writing to relational matter?

It all goes KABOOM

It matters because the common SQL triggers, BEFORE INSERT (Oracle) and INSTEAD OF INSERT (SQL Server), would fire each and every time Essbase writes to the table.

Here’s what SQL Profiler showed me as the DATAEXPORT code ran:

There are 2,645 records to export out of Sample.Basic and that requires 2,645 INSERT statements.  Eeek.

It might be possible, although inefficient, to use these triggers to do the TRUNCATE and then copy the single record to yet another table, but that is an ugly approach, and one that I am not sure I have thought through completely, as that target would need to be truncated on first write.  So maybe an INSERT trigger is not on.

Another approach

But then I again recalled one of MMIC’s pearls of wisdom (see, Glenn, I really do listen to you) – there is an Essbase.cfg setting that can allow batch inserts if the database supports that functionality.

DATAEXPORTENABLEBATCHINSERT

Off I went to the Tech Ref and did my requisite RTM.  Oh goody, thought I, it’s going to work.  Just read the description:  “When DATAEXPORTENABLEBATCHINSERT is set to TRUE, Essbase determines whether the relational database and the ODBC driver permit batch insert.”  But I should have known that when Essbase giveth with one hand, it taketh away with the other, because the next sentence states, “If they do, Essbase uses the batch-insert method, and, thus, performance is optimized.  Essbase determines the batch size; however, you can control the number of rows (from 2 to 1000) that are inserted at one time by using the DEXPSQLROWSIZE configuration setting.”

Oh, bugger, all this setting does is increase the number of records per INSERT; the INSERT trigger approach still fails.  I know that Sample.Basic has 2,645 rows, and that means three INSERTs at the maximum DEXPSQLROWSIZE of 1,000.  Bugger, again.  What to do?

Is there a way out?  Why yes there is.

INSERT triggers are out, but is there another trappable event that could allow the TRUNCATE to occur?  I looked again at SQL Profiler and saw two interesting facts.

  1. The LoginName column reflects whatever username I defined in the ODBC System DSN I used.
  2. There is a login event, at least as trapped via SQL Profiler.

The simple solution, although figuring out how to do it took, oh, forever, and it isn’t enough

If I could write a trigger that intercepts the act of EssbaseLogin (or whatever SQL username I chose) connecting to SQL Server, I could have the login trigger TRUNCATE the table right then and there.  This frees me from the multiple-INSERT issue quite handily.  How oh how oh how to do it was the question.

Figuring it out

SQL Server has a trigger database operation keyword called FOR LOGON (the Oracle equivalent is AFTER LOGON).  All I needed to do was create that trigger, test for the username, do the TRUNCATE, and, because I am a paranoid nut, log the fact that I did this, when I did it, and that the TRUNCATE succeeded.

A note for all of you SQL n00bs and experts – what you are about to see is the very first SQL trigger yr. obt. svt. ever wrote.  At least that’s the excuse I am sticking to as explanation why it took so long.  SQL n00bs – take heart, if I can do it, so can you; experts – everyone has to start somewhere, so stop the snickering at my plodding progress.

The target database

Here is what QueryTest.SampleBasicExport looks like – a simple table that sticks Product, Market, Measures, and Scenario into the rows and the 12 level zero months of Year as columns.

Approach #1

I stole the genesis for this code from somewhere on the web (I meant to keep the link, but forgot to do so).  Here’s what I ended up with:
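In sketch form (assuming the table sits in the dbo schema of QueryTest) it looks like this:

   ALTER TRIGGER EssbaseDataExport ON ALL SERVER
   FOR LOGON
   AS
   BEGIN
      TRUNCATE TABLE QueryTest.dbo.SampleBasicExport
   END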

Line by line

As most of the readers of this blog are Essbase practitioners first, and SQL geeks second (or third, or fourth, etc.), I will explain each line of the code.

  1. The first line defines the trigger EssbaseDataExport – as I changed this about eleventy billion times the code shows ALTER instead of CREATE – and that this is a server-level trigger.
  2. The second line states that this is a LOGON trigger.
  3. Line three’s AS statement defines the trigger.
  4. The BEGIN statement on line four is the beginning block of the trigger logic.
  5. We finally get to the TRUNCATE logic on line five.
  6. Line six’s END marks the end of the trigger.

All I need to do is execute the code and ta-da, I now have the server-level trigger EssbaseDataExport.  You can see in SQL Server Studio that the trigger is now there.

EssbaseDataExport  in action

Here’s what my calc script looks like:
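In sketch form – the DSN name and password are placeholders, and the FIX is whatever slice you want exported:

   SET DATAEXPORTOPTIONS
      {
      DataExportLevel "LEVEL0";
      DataExportColHeader "Year";
      };
   FIX (@RELATIVE("Year",0), @RELATIVE("Measures",0), @RELATIVE("Product",0),
        @RELATIVE("Market",0), "Actual", "Budget")
      DATAEXPORT "DSN" "SampleBasicExport" "SampleBasicExport" "EssbaseLogin" "password";
   ENDFIX
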
Note that the SQL username EssbaseLogin is used.  At this point this isn’t important but it will be in just a bit.

Let’s do a count of the rows in QueryTest.SampleBasicExport.
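That’s a one-liner (again assuming the dbo schema):

   SELECT COUNT(*) AS RecordCount
   FROM QueryTest.dbo.SampleBasicExport;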

If I run the export, I expect to see 2,645 records.

What’s the count?

When I run it a second time I expect the same number of rows, because a TRUNCATE happens just after logon but before the INSERTs actually occur:

And check the count:

DATAEXPORT ran, the LOGON trigger did the TRUNCATE, and the table still has 2,645 records.  Success boil in bag!!!

Is this really working? If I disable the trigger, there will be no truncation on run and the record count will now be 2,645 + 2,645 or 5,290 records.  I hope.
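Whether you click through SQL Server Studio or do it in T-SQL, disabling (and later re-enabling) a server-level LOGON trigger is a one-liner apiece:

   DISABLE TRIGGER EssbaseDataExport ON ALL SERVER;
   -- and when the test is over:
   ENABLE TRIGGER EssbaseDataExport ON ALL SERVER;
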
SQL Server gives me confirmation that the trigger EssbaseDataExport is disabled.

Run the DATAEXPORT one more time (Do I have to show you yet another MaxL screen shot?  Hopefully not as I am not going to do so.)

What’s the count?  Why yes, it is 5,290 records.  So that’s proof that, with the trigger turned off, nothing does a TRUNCATE before Essbase writes.  Huzzah!

With a quick re-enable of the trigger, what happens when I run the DATAEXPORT calc script?
It runs yet again.
And now I have a row count of…
2,645.  I have now proved that every time there’s a login to SQL Server, I can get a TRUNCATE.  

Approach #2

But there’s a problem with this approach and it’s really quite a big one.  Every connection to SQL Server is going to result in a TRUNCATE.  Don’t believe me?  I will disconnect from SQL Server Studio, and then reconnect.

Here’s my connect to SQL Server dialog box. I click on the Connect button and…

What’s my row count on QueryTest.SampleBasicExport?

Yup, it’s zero.  As Donald Fauntleroy Duck would say, “Aw, Nuts!”  I need to be able to test for the username so that only the username that does that particular write to relational forces the clear.  And I’m going to need to be very careful about what username does that clear – EssbaseLogin could do two different kinds of connects – once to write (the TRUNCATE makes sense) and then again to read (which would be a bummer because the trigger would blow away the table contents).

So what I really need to do is create another username used just for writes, and maybe even a username used just for writing to that particular table.  I don’t want a SQL Load Rule to force that TRUNCATE to occur, because that connection too will fire the LOGON trigger.

Setting up a different DSN

As I am admin on my box and my SQL Server instance, this is easy peasy lemon squeezy.

Create the SQL Server username

As I am a dba on my own server, I will create the username EssbaseDataExport_Writer.

And then give EssbaseDataExport_Writer db_owner access so it can do a TRUNCATE.
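In T-SQL that is roughly the following – the password is a placeholder and db_owner is the broad-brush grant described above:

   CREATE LOGIN EssbaseDataExport_Writer
      WITH PASSWORD = 'UseARealPasswordHere', DEFAULT_DATABASE = QueryTest;

   USE QueryTest;
   CREATE USER EssbaseDataExport_Writer FOR LOGIN EssbaseDataExport_Writer;
   EXEC sp_addrolemember 'db_owner', 'EssbaseDataExport_Writer';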

Creating the system DSN

Create the ODBC system DSN with that username EssbaseDataExport_Writer.

Set the default database to QueryTest:

And confirm that the username can connect:

Modifying the trigger to test for a specific username

What the trigger needs to do is test for the username EssbaseDataExport_Writer and then, and only then, perform the TRUNCATE.  All other usernames will not perform the clear of the SampleBasicExport table.

Happily there is a system function called SYSTEM_USER that returns the connection’s login name.  Stick that SYSTEM_USER into an IF statement, do the TRUNCATE only when it matches, and we should be good.

Here’s the trigger with the IF test for SYSTEM_USER.
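Again in sketch form:

   ALTER TRIGGER EssbaseDataExport ON ALL SERVER
   FOR LOGON
   AS
   BEGIN
      IF SYSTEM_USER = 'EssbaseDataExport_Writer'
      BEGIN
         TRUNCATE TABLE QueryTest.dbo.SampleBasicExport
      END
   END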

Proof of the pudding part 1

If I run the DataExp1 calc script using the non-tested username EssbaseLogin, SQL Server should not perform that TRUNCATE.  If there were already 2,645 records in the table SampleBasicExport, I should now have 5,290.

And so it is:

Proof of the pudding part 2

I created another calc script, this time pointing to the new DSN with the username EssbaseDataExport_Writer.  In my typically unimaginative/lazy way, I named it DataExp2.

What happens now when I run the calc script?

And the row count?

Yup, I am now testing for just the username EssbaseDataExport_Writer!  Whew, what a long journey, but now when Essbase writes to SampleBasicExport using the EssbaseDataExport_Writer DSN/username, it will never double, triple, quadruple, etc. count the data.  

More cool trigger functions

It’s great that the TRUNCATE works, but wouldn’t it also be nice to have a record of each and every time the export fired and that the target table was cleared out before the write?  Why yes it would.  All I need to do is create a table with that kind of information.

Over in QueryTest, I created a table called LoginAudit with three fields:  DateTime, SystemUser, and RowsBeforeInsert.
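Something along these lines – the data types are my guesses:

   USE QueryTest;
   CREATE TABLE dbo.LoginAudit
      (
      [DateTime]        DATETIME2      NOT NULL,
      SystemUser        NVARCHAR(128)  NOT NULL,
      RowsBeforeInsert  INT            NOT NULL
      );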

I then altered the EssbaseDataExport trigger to do an INSERT after the TRUNCATE.  SYSDATETIME() is a date/time stamp, SYSTEM_USER we’re already familiar with, and then I did a subquery to get the row count in SampleBasicExport which, if the TRUNCATE is successful, should always be zero.
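The trigger, now with the audit INSERT tacked on after the TRUNCATE (same sketch caveats as before):

   ALTER TRIGGER EssbaseDataExport ON ALL SERVER
   FOR LOGON
   AS
   BEGIN
      IF SYSTEM_USER = 'EssbaseDataExport_Writer'
      BEGIN
         TRUNCATE TABLE QueryTest.dbo.SampleBasicExport;
         INSERT INTO QueryTest.dbo.LoginAudit ([DateTime], SystemUser, RowsBeforeInsert)
         VALUES (SYSDATETIME(), SYSTEM_USER,
                 (SELECT COUNT(*) FROM QueryTest.dbo.SampleBasicExport));
      END
   END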

After I apply the trigger, I then run the calc script DataExp2 and…

This result is just what I hoped for.  I could stick that INSERT statement before the IF to log every connection (this is actually a pretty common use of logon triggers), I could extend the IF to an ELSE IF and test for other usernames and tables – in short, I could do all of the cool things that SQL allows me to do.

And that is why SQL should be used for ETL work

I have gone pretty far afield, and it took me 19 pages in MS Word to get to the end, but a lot of that was spelling out each and every step in the process for us SQL beginners.

ETL in a Load Rule is, in my opinion, the work of the Devil Hisself because it isn’t transparent, is easy to get wrong, and has functions that cannot be undone, e.g. splitting a column.  Whatever the theological genesis of SQL is, it doesn’t suffer from any of the faults of Essbase Load Rules.  Yes, this is another Cameron rant about the Evils of Load Rules.  But I’m done.  :)

Thanks to the power of SQL, and one very simple trigger, I’ve solved the issue of getting a TRUNCATE against an Essbase DATAEXPORT target table, something that is quite often very difficult to arrange in a production environment.  See, SQL is awesome.

A caution, or at least a series of questions
One point about awesomeness -- I reached out to my buddy Rich Magee, who knows SQL inside and out from a prior professional existence as a DBA, and asked him if there were any downsides to this approach.  Of course there were (life is often a combination of good and bad).  Here are his comments, and they are food for thought:

"My understanding is that Logon triggers are typically used to monitor and control max number of logins, time outs and such. However, I could see no reason to not use them in your circumstance.
 
My questions (dba hat on) to you would be:
  • Why can you not simply schedule the stored proc to run?
  • What if the user logs on and off 5 times in a 5 minute period?
  • Would that not spawn unnecessary/redundant jobs all doing the same thing?
  • Could a risk be filling up the CPU or Disk with jobs/temp files that aren’t needed?"
So definitely some food for thought -- as always, you, Gentle Reader, must decide if what I hand out for free is genius or madness.  I tend to think it is the latter, not the former, but that is for you to decide.
The conclusion to the conclusion

Re the point about being a good (or bad) programmer based on approach  – I cannot say which category I fall into but I do know that I spent a lot of time figuring out what I wanted to do and then a lot more time figuring out how to do it with very little code at the end.  As before, you have to decide what class I (or you) fall into.  At least this is something you won’t have to figure out.

Be seeing you.