Want to know The Truth About CPM?

29 April 2012

More Oracle documentation than you can shake a stick at.


Introduction

I love idiomatic speech – I think it meets some deeply felt need for trivia.  Or maybe miscommunication.  After all, if you don’t know what a given idiom means, well, we have a failure to communicate.  

That isn’t Oracle’s problem

I don’t know if you’ve been paying attention to Oracle’s various outreach programs to us, its partners and customers, but they are trying really hard to give us what we need to do our jobs.  Think about all of the things Support does, think about Oracle coming to KScope for their awesome symposium.  Those acts of communication on Oracle’s part are nice, aren’t they?  And it sure beats being on a prison chain gang (go click on “failure to communicate”).

Where oh where has my documentation gone?

There are five ways to get to Oracle documentation, and four of them are update sites.  I don’t think Oracle would go through this trouble unless they wanted you to RT_M the documentation.  Would they?

The first one’s easy, and is the good old Oracle EPM Documentation Portal Page.  I have the Essbase link right there in my browser bookmarks toolbar so I can jump to it at any time.

Oh, big deal you say, you’ve been using it since the year dot.  Show me something new you say.  (Do you say this?  Please say yes or the next bits don’t make sense.)  New, you want new?  I’ll give (or more accurately, Oracle will give) you something new.

What’s new, pussycat?

How about getting updates to documentation from any one of four social media sites?  That’s right, Oracle doesn’t make you choose.  Take your pick, and get the latest.
Although I’m not a Twitter user, I personally like the way it presents documentation updates the best.  But as I wrote, you pays your nickel and you take your chances.

Reading is Fundamental

Okay, that’s enough obscure idiomatic speech (all linked so you can actually understand, sort of, this seriously great message about Oracle’s EPM documentation team) for one post.  

After all,
R.I.F., courtesy of Oracle.  Enjoy.

22 April 2012

Stupid Programming Tricks #12 -- How to trap for import errors in MaxL

Introduction

This is truly a Stupid Trick because I said something couldn’t be done and in fact it could.

Oh, the shame.  And then the shame of getting it slightly wrong yet again.  A-g-o-n-y.

Happily, this is not the first time in my life I have been wrong and will certainly not be the last, so my ego can take the blow.  What did Albert Einstein say about stupidity?  “Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.”   Al, here I am, proving that really, stupidity is infinite, especially if your name is Cameron.

Sigh.

But the great news is that you, oh Cameron’s Blog For Essbase Hackers readers, actually take (at least some of) what I write under consideration and review, and are kind enough to tell me where I’m wrong.  Yes, Tim Faitsch, I’m looking at your comment in my No Exit (but not in MaxL) post.  

What oh what did I write that was so bad?

Here it is:
Note that MaxL is not going to trap data or dimension load errors.  ‘Twould be nice if it could, but it doesn’t.  So it goes.

Tim quite correctly pointed out that statement simply isn’t true.  And if you (ahem) read the Tech Ref, it’s pretty darn obvious.  What was I thinking?  I think the answer is I wasn’t thinking because here it is in black and white:
Example: Iferror (MaxL Shell)
The following example script tests various errors including MaxL Shell errors, and demonstrates how you can set the exit status variable to a nonzero argument to return an exit status to the MaxL Shell.
###  Begin Script ###
login $1 $2;
echo "Testing syntactic errors...";
spool on to spool.out;
set timestampTypo on;
iferror 'End';
echo "Testing shell escape...";
shell "cat doesnotexist.txt";
iferror 'ShellError';
msh "doesnotexistlerr.mxl";
iferror 'FileDoesNotExistError';
echo "Script completed successfully...";
spool off;
logout;
exit 0;

The trick

Did you catch it?  The shell command does a cat (I believe a *nix catalog type statement) looking for the file “doesnotexist.txt”.  If cat (think type in Windows) cannot find the file there’s an error.  If cat does find the file, there’s no error.  Who cares, right?

But consider how you could apply this to an import statement which, when it hits a bad record or two (or two thousand), writes the errors to an error file.  If you could test for the existence of that error file, you’d know that if one exists, there are data load errors, and if one doesn’t, there are no data load errors.  

So here’s the trick, and it’s a logical switcheroo:  an error from the shell “dir yourerrorfilenamerighthere.err” statement means the import worked with no load errors, because no error file exists (this assumes you clear out all error files at script start), while the absence of an error really means an import error occurred, because the dir command found the error file.  Yes, it’s opposite land, but only in a good way.
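
Distilled down to its essence, the pattern looks something like this (the label names are just placeholders – the full script below shows it in context):
shell "dir yourerrorfilenamerighthere.err" ;
/*    dir errored out = no error file = clean load    */
iferror "NoLoadErrors" ;
/*    dir succeeded = error file exists = bad records    */
goto "HandleLoadErrors" ;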

An example with an error that’s not my own

First off, let’s establish a data source that we know isn’t going to work unless The Beverage Company becomes The Beverage and Snack Company.  What is wrong with this data load file for Good Old Sample.Basic?
 Hmm, potato chips.  So good to eat, so bad for you.  Especially when it’s a Scenario.  So we know that record three will fail.
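
Since that screenshot doesn’t travel well, here’s roughly the shape of the source rows (illustrative only – the valid members are straight out of Sample.Basic, the data values are made up, and record three’s Scenario is... well, delicious):
Actual         100-10    New York    Sales    Jan    678
Budget         100-10    New York    Sales    Jan    640
Potato Chips   100-10    New York    Sales    Jan    9999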

Here’s the code of my script cleverly called LoadDataWithErrorChecking.msh:
/*
Purpose:    Illustrate checking for error in import with good ol' Dir
Written by: Cameron Lackpour
Modified:   Right now
Notes:      
*/

/*    Log in to Essbase    */
login hypadmin password on localhost ;
iferror "BadLogin" ;

/*    Define STDOUT and STDERR outputs    */
spool stdout on to "c:\\Automation\\Sample\\MaxLLog\\mshSample.log" ;
iferror "BadLogFile" ;
spool stderr on to "c:\\Automation\\Sample\\MaxLLog\\mshSample.err" ;
iferror "BadErrorFile" ;

/*    Load data from SQL    */
import database 'Sample'.'Basic' data connect as "hypsql" identified by "hypsql" using server rules_file 'dData'
on error write to "c:\\automation\\sample\\maxllog\\dData.err" ;
iferror "SQLImportError" ;

/*    Test to see if dData.err exists.  If it *doesn't*, MaxL will throw an error.  But we know that's a GOOD thing because the absence of an error file means that there were no data load errors.  If DIR *does* find the file, then there was a data load error, and so we have to treat it like an error.  It's sort of backwards, but it works.  And a big tip of the Essbase Hacker's hat to Tim Faitsch for pointing this out when I (embarrassingly) completely didn't read it in the docs.  Doh!    */
shell "dir dData.err" ;
iferror "ExitWithOutError" ;
/*    If the file was found, in fact there was an error.    */
goto "SQLImportError" ;

/*    Leave the MaxL shell    */
define label "ExitWithoutError" ;
/*    This is where the script exits if there is no error.    */
logout ;
exit 0 ;

/*    Create label for login errors    */
define label "BadLogin" ;
/*    Quit MaxL with a 10 error code    */
exit 10 ;

/*    Create label for log file errors    */
define label "BadLogFile" ;
/*    Quit MaxL with a 20 error code    */
exit 20 ;

/*    Create label for error file errors    */
define label "BadErrorFile" ;
/*    Quit MaxL with a 30 error code    */
exit 30 ;

/*    Create label for SQL import errors    */
define label "SQLImportError" ;
/*    Quit MaxL with a 40 error code    */
exit 40 ;

I am expecting an error level of 40 when I run the above code.  And so it is.


I could have controlling code that tests for the 40 return code and does things – emails, automated Very Light warnings, texts, whatever.
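
Just as a sketch of what that wrapper might look like (assuming essmsh is on the path and that the notification bits are whatever you actually use):
rem    Run the MaxL script and react to its exit code
essmsh LoadDataWithErrorChecking.msh
if %ERRORLEVEL% EQU 40 (
    echo Data load errors found -- go look at dData.err
    rem    email / text / Very Light flare goes here
)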

You will note that this line is just the Windows version of the Tech Ref documented line (I just substituted dir for cat and of course used the right file name):
shell "dir dData.err" ;

But what happens when the script and the error file aren’t in the same directory?  This is actually pretty likely, especially in automated environments where data files, launch directories, and everything else live in separate places.

The documentation lets you down

Well, to be fair, it isn’t wrong, but it sort of leaves out a crucial bit of information.  When you run shell “dir yourerrorfilenamerighthere.err”, there is an assumption on MaxL’s part that the place you launch the MaxL script from and the location of the error file are one and the same.
But when they aren’t in the same directory the dir command can’t find the error file.  No big deal you say (Do you?  Really?  Oh good, it isn’t just craaaaaazy voices in my head.  Or are they?), I’ll simply add in a directory to make everything work, like this:  
shell "dir c:\\\Automation\\\Sample\\\MaxLLog\\\dData.err" ;

Or will it?  Let’s not even change the execution directory and see what happens:


Uh-oh.  We know there’s an error, but MaxL is telling us that there isn’t.

NOTE THE ABOVE ABOUT DOUBLE QUOTES IS NOT TRUE ALL THE TIME.  Thanks to Jason Jones (read the comments) for pointing out that "cat" really equals "type" in Windows.  In fact, when you run shell "type yourerrorfilenamerighthere.err" things WORK, even when the file is in another directory.  What did I write about stupidity?  Double sigh.
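
In other words, something shaped like this works even when you are not sitting in the error file's directory (the path is the one from my spool statements above):
shell "type c:\\Automation\\Sample\\MaxLLog\\dData.err" ;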

So what’s missing or more accurately, what’s not needed?

The crucial bit of information, which is still true at least on Windows 2008 R2, is that YOU DON'T NEED THE DOUBLE QUOTES.  You don't just not need them, you don't want them at all if you're using dir.  Get rid of the double quotes and all is sweetness and light.  When you use dir, that is.  Read below to see how type successfully works with double quotes.  Why?  Well, they're different commands and the reason is buried in how the two tools handle double quotes, I think.  The real reasons behind that different quote handling are buried in the bowels of Windows and beyond our scope.  Suffice it to say, dir pukes on double quotes and type works just fine with them.  Continuing along...

So, if you're going to test with dir, this is all you need:
shell dir c:\\\Automation\\\Sample\\\MaxLLog\\\dData.err ;

By the way, shell dir c:\\Automation\\Sample\\MaxLLog\\dData.err ; is also good.

And does it work?  Oh yes.


And you can run it from anywhere now.  Note that I am now running this from c:\Users\Administrator.


In fact, you can simply delete the double quotes altogether and run it from the source directory if you are inclined to follow the Tech Ref:
shell dir dData.err ;



If I actually had a clue about *nix, I would know that this works

Just like in the docs:

You see?  Lovely double quotes around the shell target.




Doh!

The morals of my story of error

There are five:
  1. Never say never.  Especially when people likely smarter than you read your work (guys like Tim and Jason).  Whoops.
  2. The Tech Ref is correct when it comes to using this technique in the same directory as the launched script, but incorrect when going outside of that directory if you use dir -- it's just fine if you use type.  Btw, the Tech Ref was also right in pointing out how to do this.  Too bad I wasn't smart enough to read it and understand.  What was I thinking?  The world will never know as I certainly don't.  Again and again and again.
  3. Double quotes are sometimes good, sometimes not.  In the case of dir they are truly unnecessary, at least on Windows.
  4. I can get obsessed by this stuff sometimes – be glad that you didn’t lose an afternoon (and then another evening correcting your errors) trying to figure out why the silly shell statement wouldn’t find the error file.
  5. Maybe I should spend more time reading up on that little known OS, *nix.  I think I got the catalog = directory message from my days, gasp, on an Apple IIe. 

One last thing to consider -- it's a little weird that type handles double quotes and dir does not when passing explicitly named directories.  It would be nice if they both worked the same way.

One last, last thing to consider.  It's really gratifying that people read my posts, correct my errors, and actually care enough to get my mistakes through my thick skull so that you, dear reader, get accurate information in spite of my best efforts to confuse all and sundry.  :)  Both Tim and Jason get my thanks.

10 April 2012

Stupid Planning queries #9 - Entity dimension

Introduction

Oh dear, I seem to have dropped off with my Stupid Planning Tricks.  The thing is, I’ve got a gazillion (roughly, there might be some slight exaggeration in that count) of these queries and I simply don’t remember which ones I’ve shared and which ones I haven’t.  I tend to get distracted with other technologies and projects…this is all a way of saying I’m a bit scatter-brained.

Enough apology.  Here’s the code to pull the Entity dimension from your Planning application, in case you haven’t been able to figure it out from the other queries I’ve given away.  

Some interesting columns

Here’s HSP_ENTITY in SQL Server’s table design mode – don’t worry about the brand of SQL – the table is always the same.

ENTITY_ID is simple – it’s the member’s OBJECT_ID value from HSP_OBJECT.  How exactly you could have a NULL OBJECT_ID is beyond my understanding of Planning, as each and every object in Planning has a value, but my SQL brain is very small, so I’m not going to worry about it.  You shouldn’t worry either – Planning sets the value, not you.

DEFAULT_CURRENCY makes sense if you are using Planning’s multicurrency functionality.  If you are working in a single currency application, this field is NULL.  It’s easy (as we will see) to go grab the currency descriptions from HSP_CURRENCY.
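
If you want to eyeball that relationship on its own before the big query below, here’s a quick sketch – a LEFT OUTER JOIN, so single currency applications simply show NULL:
--    Quick look at each Entity's default currency
--    NULL means a single currency application
SELECT
    O.OBJECT_NAME AS 'Entity',
    C.SYMBOL AS 'Currency'
FROM HSP_ENTITY E
INNER JOIN HSP_OBJECT O
    ON E.ENTITY_ID = O.OBJECT_ID
LEFT OUTER JOIN HSP_CURRENCY C
    ON E.DEFAULT_CURRENCY = C.CURRENCY_ID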

USED_IN is kind of an interesting field.  It is described as a bitmasked value, showing which Plan Types the Entity is used in.  A bitmapped value is simply a number built up (in the case of this field) from either a single Plan Type value or several Plan Type values added together.  Huh?  Actually, this is really, really easy.  Think about Classic Planning’s dimension editor for the Entity dimension.  Now think further about the rightmost field.  Do you remember those numbers?  

The above is a full three Plan Type Planning application with Workforce and Capex thrown in for giggles.  Do you see the 1, 2, 4, 8, and 16?  Those are the values that make up the bitmap.  Let’s take a look at what HSP_ENTITY looks like:

There they are.  Wait, what’s that 31 value?  Well, 1+2+4+8+16 = 31.  And yes, that 31 equates to No Entity sitting in all five Plan Types.  If we had one that sat in the Plan Types with values 1 and 8, it would have a bitmap value of 9, as Entity6 does:


Ta da – proof that yr. obdnt. srvnt. can add simple numbers together.
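
By the way, if you ever want to test for a given Plan Type in SQL without doing the arithmetic in your head, SQL Server's bitwise & will do it for you.  A quick sketch (8 is just the bit value I happen to be testing for):
--    Entities whose USED_IN bitmap includes the Plan Type with bit value 8
SELECT
    O.OBJECT_NAME AS 'Entity',
    E.USED_IN AS 'Bitmap'
FROM HSP_ENTITY E
INNER JOIN HSP_OBJECT O
    ON E.ENTITY_ID = O.OBJECT_ID
WHERE (E.USED_IN & 8) = 8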

The thing that’s nice is that you can use USED_IN with HSP_PLAN_TYPE to get the descriptions of the Plan Types:


EMPLOYEE_ID and REQUISITION_NO have the respective descriptions of “Id for an employee” and “Requisition number for a to-be-hired”.  Hmm, those sound as if they are for Workforce Planning.  Know that if you don’t have Workforce, these two fields are NULL.  (Actually, I can’t seem to get these fields to populate at all.  I must be doing something wrong, so figuring this out is on The List of Things to Figure Out and thus a subject for another blog post, someday.)

ENTITY_TYPE is another one of those odd ones.  I can see that it’s there.  I can see there’s a description in the schema:  
0=none (default)
1=employee
2=department-general
3=TBH-input

But I’ve never actually seen the value change to anything other than 0.  Again, it looks like this field is used in Workforce.  I shall have to experiment for a later blog post.

Putting it all together

So let’s write a query that takes into account all of the above.  
--    Purpose:    Illustrate the querying of a custom (Entity) dimension
--    Modified:    7 April 2012, Cameron Lackpour
--    Notes:        Unique to Entity are Currency, Entity Type, and Plan Type
--               
SELECT
    PO.OBJECT_NAME AS 'Parent',
    O.OBJECT_NAME AS 'Child',
    -- Use a SQL Subquery to get aliases.
    -- NB --  The overall SELECT from HSP_MEMBER ensures that members with
    --        and without an alias are selected.
    --        ISNULL puts a zero-length string in place of NULL
    ISNULL((SELECT OA.OBJECT_NAME
        FROM HSP_ALIAS A
        INNER JOIN HSP_OBJECT OA
        ON A.MEMBER_ID = O.OBJECT_ID AND
        OA.OBJECT_ID = A.ALIAS_ID), '') AS 'Alias',
    --    Use a SQL subquery to get currency tags
    ISNULL((SELECT C.SYMBOL
        FROM HSP_CURRENCY C
        INNER JOIN HSP_ENTITY EN
        ON C.CURRENCY_ID = EN.DEFAULT_CURRENCY AND
        O.OBJECT_ID = EN.ENTITY_ID), '') AS 'Currency',
    --    Remember, there can be up to five plan types per Planning
    --    application, so CONSOL_OP1 through CONSOL_OP5 are valid.
    CASE M.CONSOL_OP1
        WHEN 0 THEN '+'
        WHEN 1 THEN '-'
        WHEN 2 THEN '*'
        WHEN 3 THEN '/'
        WHEN 4 THEN '%'
        WHEN 5 THEN '~'
        WHEN 6 THEN '^'
    END AS 'Operator',
    CASE M.DATA_STORAGE
        WHEN 0 THEN 'Store Data'
        WHEN 1 THEN 'Never Share'
        WHEN 2 THEN 'Label Only'
        WHEN 3 THEN 'Shared Member'
        WHEN 4 THEN 'Dynamic Calc and Store'
        WHEN 5 THEN 'Dynamic'
    END AS 'Storage',
    CASE M.TWOPASS_CALC
        WHEN 0 THEN 'No'
        WHEN 1 THEN 'Yes'
    END AS 'Two Pass',
    --    Change the syntax of the CASE statement to support testing for NULL
    --    Dave's Unofficial Guide has DATA_TYPE = 0 as Unspecified, but in
    --    11.1.2 that isn't the case.
    CASE
        WHEN M.DATA_TYPE IS NULL THEN 'Unspecified'
        WHEN M.DATA_TYPE = 1 THEN 'Currency'
        WHEN M.DATA_TYPE = 2 THEN 'Non Currency'
        WHEN M.DATA_TYPE = 3 THEN 'Percentage'
        WHEN M.DATA_TYPE = 4 THEN 'SmartList'
        WHEN M.DATA_TYPE = 5 THEN 'Date'
        WHEN M.DATA_TYPE = 6 THEN 'Text'
    END AS 'Data Type',
    --    As far as I can tell, this isn't used in Planning, but is still
    --    there.  Why?
    CASE
        WHEN M.USED_FOR_CONSOL = 0 THEN 'False'
        WHEN M.USED_FOR_CONSOL = 1 THEN 'True'
        ELSE 'Undefined'
    END AS 'Used for consol',
    ISNULL(F.FORMULA, '') AS 'Formula',
    --    The subquery will puke if it returns more than one
    --    UDA, so do string concatenation using FOR XML PATH
    ISNULL((STUFF((SELECT ',' + U.UDA_VALUE
        FROM HSP_MEMBER_TO_UDA MU
        INNER JOIN HSP_UDA U ON
            MU.UDA_ID = U.UDA_ID AND
            MU.MEMBER_ID = O.OBJECT_ID
             FOR XML PATH ('')),1,1,'')), '') AS 'UDAs',
    ISNULL((STUFF((SELECT '; ' +
        OAN.OBJECT_NAME + ': ' + OAV.OBJECT_NAME   
    FROM HSP_MEMBER_TO_ATTRIBUTE M1
    INNER JOIN HSP_OBJECT OAN
        ON M1.ATTR_ID = OAN.OBJECT_ID AND
        M1.MEMBER_ID    = O.OBJECT_ID
    INNER JOIN HSP_OBJECT OAV
        ON    M1.ATTR_MEM_ID = OAV.OBJECT_ID
    INNER JOIN HSP_MEMBER MB
        ON O.OBJECT_ID =  MB.MEMBER_ID
    FOR XML PATH ('')),1,2,'')), '') AS 'Attribute assignments',
    CASE
        WHEN E.ENTITY_TYPE = 0 THEN 'None'
        WHEN E.ENTITY_TYPE = 1 THEN 'Employee'
        WHEN E.ENTITY_TYPE = 2 THEN 'department-general'
        WHEN E.ENTITY_TYPE = 3 THEN 'TBH-input'
    END AS 'Entity Type',
    E.USED_IN AS 'PT #',
    --    Use a SQL subquery to get Plan Type tags
    ISNULL((SELECT P.TYPE_NAME
        FROM HSP_PLAN_TYPE P
        INNER JOIN HSP_ENTITY EN
        ON P.PLAN_TYPE = EN.USED_IN AND
        O.OBJECT_ID = EN.ENTITY_ID), '') AS 'PT Name'
FROM HSP_MEMBER M
INNER JOIN HSP_OBJECT O
    ON M.MEMBER_ID = O.OBJECT_ID
INNER JOIN HSP_OBJECT PO
    ON O.PARENT_ID = PO.OBJECT_ID
INNER JOIN HSP_ENTITY E
    ON M.MEMBER_ID = E.ENTITY_ID
LEFT OUTER JOIN HSP_MEMBER_FORMULA F ON
    M.MEMBER_ID = F.MEMBER_ID
-- Entity dimension is ALWAYS DIM_ID 33, but its name
-- can vary, so use the ID
WHERE M.DIM_ID = 33

What does this look like?

How about that multiple Plan Type example?

Unsurprisingly, when an Entity is shared across Plan Types and the bitmap doesn’t correspond to a single PLAN_TYPE value, the above code can’t identify the Plan Type description.  I leave it to you, loyal reader, to figure out how to derive the name.  (Hint – it can’t be that hard – use the bitmap, simple addition, and test with CASE.)
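
If you want to check your work, here’s one possible sketch.  I’m cheating a bit and using SQL Server’s bitwise & rather than spelling out every combination with CASE, and I’m assuming (as the join in the big query implies) that HSP_PLAN_TYPE.PLAN_TYPE holds the same 1/2/4/8/16 values that get added together into USED_IN:
--    String together the names of every Plan Type whose bit is set in USED_IN
SELECT
    O.OBJECT_NAME AS 'Entity',
    E.USED_IN AS 'PT #',
    ISNULL(STUFF((SELECT ', ' + P.TYPE_NAME
        FROM HSP_PLAN_TYPE P
        WHERE (E.USED_IN & P.PLAN_TYPE) = P.PLAN_TYPE
        FOR XML PATH ('')), 1, 2, ''), '') AS 'PT Names'
FROM HSP_ENTITY E
INNER JOIN HSP_OBJECT O
    ON E.ENTITY_ID = O.OBJECT_ID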

Almost done

I think I’ve written up almost every dimension there is.  I have new, exciting, Dodeca-based stuff I want to write about, so there will likely be one more post on Planning dimensions and then I may run away from Planning’s tables for a bit.  I hope you’ve enjoyed this (almost complete) series.