Want to know The Truth About CPM?

06 July 2016

Kscope16 in snaps, part 2

Trying to do this in a different fashion, part 2 of I don’t know

This is Monday, 27th June, aka Day 2 of the conference.  The madness was just beginning…

The photographers will snap us, and you'll find that you're in the rotogravure

PBCS session filling up


My wonderful (and woefully deluded, because he presents with me) co-presenter Jason Jones’ view from our PBCS session dais

1st slide

Carnival time with Gary Adashek, Jessica Cordova, and Chris Rothermel

I’ve gone to community night events since Kaleidoscope 2008 and this was by far the best ever right down to the unbelievably expensive but actually quite tasty popcorn.  The scavenger hunt was a brilliant idea (it surely wasn’t mine) that acted as an effective ice breaker.  Many thanks to subject experts Chris Barbieri (financial close), Gary Crisci (business content), Steve Davis (infrastructure), Al Marciante (reporting), and Glenn Schwartzberg (Essbase) who graciously quizzed attendees.  

The prize was this:  

Pretty cool, eh?

The success of the night was of course yours, Gentle Kscope16 Attendee, but the vision and hard operational work was a team effort that wouldn’t have happened without:
  • Jill Colsh from ODTUG’s management company Your Conference Connection (YCC)
  • EPM community volunteers Jennifer Anderson, Janice D'Aloia, Jessica Cordova (shanghaied into this at the last minute), Chris Rothermel, and the EPM community leader Gary Adashek
  • Greg Beaton, Alex Leung, and Valantus Philip (as well as a few others whose names flew by me at 160 kph – sorry, but at least I metricated the speed) from The Goal Getters

Sometimes a group of disparate people come together for a project and it’s magic.  This was one of those times and I was privileged to be on the sidelines cheering our volunteers on.

My time with ODTUG is coming rapidly to a close.  Jennifer, Janice, Jessica, Chris, and Gary are the future.  Mark their names for one day they will be our board of directors.

The crowd, very early, and honestly there were over 100 there

Oh my goodness

No one gets credit (or blame) for this but me.  Perhaps it’s my Easter Bonnet?  Perhaps the >100 community night attendees went to my head?  Perhaps both?  Who can tell.

OMG #2

OMG #3, with Jennifer Anderson, Natalie Delemar, Gabby Rubin, and Richard Philipson


I’m not totally sure what the theme is wrt the lit up headpiece, only that it exists and that’s enough for me.

Not even halfway there

I’ve brought you so far through Monday night.  It got late.  It’ll get later.  My sleep patterns will become more erratic.  Fun, I think.

Be seeing you.

09 September 2012

A simple 23 step guide to Books, Batches, and Bursting in Hyperion Financial Reports

Introduction to the guide

I was asked in this thread over on the Network54 Essbase board to post a guide to Hyperion Financial Reports batch bursting I wrote a while back.  I’m happy to share what I came up with but first a couple of caveats:
  • I have redacted identifying information from the screenshots.  It’s usually pretty obvious where this has happened.  
  • If you have questions about this, go ahead and make comments, but unfortunately I can’t reproduce the environment I did this in because:
    • I wrote this for EPM 11.1.2.0 (so an almost two year old release).  I have no idea if the defects/bugs/weirdness/obvious-stuff-that-I-am-too-dense-to-see have been resolved or not in 11.1.2.1 and 11.1.2.2.  Mercifully, I have not been called upon to do more with batches since writing this document.
    • Some of the issues I encountered were, I think, caused by a less-than-perfect installation.  I could rant on and on about how bad it was but it would bore everyone, even me.  At the time, I asked someone I know in infrastructure what he thought it would take to resolve all of the issues (oh, I had a working issues list).  The reply?  “I wouldn’t touch that with a 40 foot barge pole.  If I were to come in, I would insist on a complete reinstallation on a clean box.”  In other words, nuke and pave.  I mention this not to reflect the frustration we application developers encounter when an install is bad, but to note that some of the general weirdness may be because the software wasn’t “right”.
  • I wrote this in the form of a step-by-step tutorial.  I never did find one on the web – I thought for sure there would be one but my google-fu failed me.  Maybe there’s one now, but I sort of doubt it.
  • Contrary to what I wrote in that thread, doing all of this is not 23 steps, but instead:
    • 24 steps (23 to create + 1 to view) to create that scheduled batch
    • 6 steps to import a bursting file (and yes, I explain what a bursting file is) into Workspace
    • 6 more steps to apply the bursting file in the batch scheduler
    • Although I am somewhat math-challenged, that means this is a 36 step process.  It's almost the 39 Steps.

And with that out of the way, enjoy.    

Background

Briefly, there are six kinds of Financial Reports documents typically encountered in a Planning implementation:
  1. Financial Reports – the base Essbase/Planning report
  2. FR Books – collections of FRs using, where applicable, a common POV to drive all reports within a book
  3. FR Batches – Objects that contain reports and books.  A batch can contain a single report, multiple reports, a single book, multiple books, mixes of reports and books, etc.
  4. FR Scheduled Batches – FR Batches (and the Books and reports within them) run through Workspace’s own, reports-only scheduler.  Scheduled output can be written to Workspace folders, zip files, and emails.
  5. Burst batching – A way to parameterize scheduled batches and overload single dimension selections (only one dimension can be bursted and yes that is a strange word for it) with either manually selected members or members driven through imported burst files.
  6. Bursting files – These are comma-delimited files used to drive burst batching

How to create a Scheduled Batch in Workspace 11.1.2.0

It’s just a simple 23 step procedure to define a scheduled batch.

Creating a book


1)  After creating a FR report, create a FR book by logging into Workspace and clicking on the Explore button.  Then select File->New->Document.
2)  Select “Collect Reports into a book”.
3)  Pick the report you want to incorporate into the book.  Books commonly contain more than one report but that is not a requirement.

4) Move the report(s) over to the right hand list box and when complete, click “Finish”.
5) The data sources in the report(s) will show up in the book.  These will be driven by the book’s POV.  
6) If you wish to get rid of the book’s table of contents, deselect it as shown below.
7) To force the save of the book, close the document; you will be prompted to save the book.
8) Save the book to whatever name and location you desire.
9) This book will be used in the batch.

Creating a batch


10) Go to File->New->Document again, but this time select “Batch Reports for Scheduling”.
11) Batches can contain a single book or report, or multiple books and reports.  To see them in the file selector, use the dropdown at the bottom to toggle between the different kinds of base documents.
12) Move the object you want to put into the batch over to the right.
13) Again, close the document to force a save action.
14) Save the batch.

Scheduling a batch

Creating the scheduled batch

15) Schedule the batch by going to the Navigate->Batch Scheduler menu.
16) In the Batch Scheduler screen, right click and select “New Scheduled Batch”.  This will launch the scheduled batch wizard.
17) Name the batch.  You can make the batch a one-time affair by selecting “Delete Scheduled Batch Entry from Scheduler if Completed Successfully”.  Do not do this as creating a batch is fairly painful, as you may have noticed.  Click on Next to move to the next step.
18) Select the batch you want.  In this example, it’s “Batch for XXXXXXXXXXX”.  After entering the name, click on Next to move to the next step.
19) You will be prompted to log in to both FR and Essbase with an administrator id.  I do not recommend a specific userid like CameronTheConsultant as these ids get terminated.  

Selecting members for data connections

20) Individually select the connections (the All option doesn’t work, see Oracle Support ID 1097787.1) and select (usually) identical cost centers for each data source.
21) There’s a big bug in FR batches – as far as I can observe, when the scheduled batch is edited, the Selected Members appear to be defined (they are shown in the Select Members text box) but in fact are not.  The only way around this bug is to select the individual data connections and click on the Copy Members button to reapply the Cost Center members.

I have also found that clicking on the Preview Bursting List button for a given data source will deselect the selected members for the other data source.  To make sure both data sources are selected, click on the Copy Members button for both data sources and do not use the Preview Bursting List.

Click on Next to move to the next screen.

Defining where the batch output goes

22) Select “Export as PDF” and “Export to an external directory”.  Click on Next to run the batch.
23) A confirmation dialog box will appear.
 

Reviewing the output 

24) The external directory FRExport1, as defined through the FRConfig.cmd utility on the Financial Reports server, corresponds to \\yourservername\f$\FRExport1.  The output structure is as below.



Bursting files in scheduled batches

Bursting files are comma delimited files used to externally drive batch bursting member selections.  Given the bugs in FR batch scheduling, bursting files also provide a way to quickly and consistently select members in the Bursting Options.
For Oracle’s take on the bursting file parameters see:  http://download.oracle.com/docs/cd/E17236_01/epm.1112/fr_webuser/scheduler_wizard_fr.html

Bursting file fields

dimension_dimensionname

In the CCRpt example, the column name is dimension_Cost Center.  Within the field, the member values are the cost center numbers.  Per the documentation, member names must match on case.  Only one dimension per burst batching file can be defined.

subfolder_name

The name of the folders underneath the main one defined in the batch schedule.  The CCRpt example uses <<FinancialReportingObjectName()>>-<<MemberAlias()>> which passes the name of the book and the member alias from the dimension_dimensionname column into the sub folder name.  Other valid tags are <<MemberName()>>, <<BatchPovMember(DataSrcName,DimName)>>, <<BatchPovAlias(DataSrcName, DimName)>>, <<FinancialreportingObjectDescription()>>, and <<Date(“format”)>>.

financial_reporting_object_name

The name of the pdf files.  The same parameters as subfolder_name apply.

group_names

The Shared Services group that runs the batch, by row.  Not used in the CCRpt example.

role_names

The Shared Services role that runs the batch, by row.  Not used in the CCRpt example.

user_names

The Shared Services user name that runs the batch, by row.  Not used in the CCRpt example.

email_list

The SMTP email address that receives the batch pdf output.  Although this is set in the CCRpt example, it does not work because the SMTP configuration was not done during installation.

external_pdf_root

The root of the file output.  This overrides the output from the scheduled batch.
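
To make those fields a bit more concrete, here is a hypothetical two-row sketch of what a bursting file along the lines of the CCRpt example might look like.  The cost center numbers are made up, I have left off the optional security and output override columns, and you should check the Oracle documentation linked above for the exact column headers your release expects.

    dimension_Cost Center,subfolder_name,financial_reporting_object_name
    1000,<<FinancialReportingObjectName()>>-<<MemberAlias()>>,<<MemberAlias()>>
    2000,<<FinancialReportingObjectName()>>-<<MemberAlias()>>,<<MemberAlias()>>

Each row drives one burst:  the member in the dimension column picks the Cost Center, and the tags build the sub folder and pdf names from the book name and the member alias.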

Importing the bursting file to Workspace

For the scheduled batch to read the burst batching file, it must be imported into Workspace.
1) After creating the batch bursting file in Excel and saving the output as a comma delimited file, import it into Workspace by clicking on the Explore button, navigating to the target folder, and then selecting the menu File->Import->File.
 
2) Click on Browse and select the file.
3) After confirming the file in the File textbox, click on the Next button to move to the Advanced screen.
4) Click on the Next button to move to the Permissions screen.
5) Click on Finish to finish the import process.
6) The file will appear in the target folder.
For your amusement, I have stuck a copy of the file here.  It’s in (as noted) comma-delimited format.

Applying the bursting file into the batch scheduler

1) In the batch scheduler, select the data source, then tick the “Run Batch for multiple members in the dimension”.  Then click on the ellipsis button to import the burst file.
2) Select the burst file – this is the file just imported into Workspace.
3) You will see the comma delimited batch bursting file’s full path.  Click on the “Copy Members” button to apply the values in the batch bursting file to the data source.
4) Once the Copy Members button has been clicked on, the members in the burst batch file will be shown in the Select Members text box.  This comma delimited list of members can be modified by clicking on the magnifying glass.
5) Select the other data source and apply the members by clicking on the “Copy Members” button.  It will look like the members are selected – this is not the case – never be afraid to click on that button.
6) Click on the Next button to continue defining the scheduled batch as defined above.

Conclusion

That was easy, wasn’t it?  :)

Okay, it wasn’t really hard at all, and it is pretty cool functionality.  However, it took me FOREVER and a day to figure out how to do this and I had to reach out to multiple people (in fact, two ex-clients from my ex-consulting company – hello, Lisa Abrewczynski and Rholanda Brooks!) to get the answer.  What, you think I figure all this stuff out by myself?  If only.  Oh wait, no one thinks that.  Regardless, it was super nice that they both took time to help me out – I am obliged.

And that’s it!  I hope you enjoyed this super short (hah!  25 pages in Word) guide to books, batches, and bursting.


28 August 2012

Bringing the Cloud down to the Ground and no, the result is not fog, part 2


Getting it onto your local machine

Still with me?  You must be, else you wouldn't be reading this.  I think.  Anyway, you’ve dutifully read part 1 of this two part series and converted an AWS EC2 instance to a VMWare Workstation (in my case, at least) instance.  So now the question is – how oh how oh how do you get a BIG file off of the Cloud?  

This is not a hard step but it takes a long time because of the size of the files.  Note that if you get rid of those media files or if you have a faster connection than my DSL line it isn’t quite so painful.

Compress it to make it fast(er)

Although I suppose you could do this without using the installed 7-Zip compression program, I can’t see why you would.  

Dan did a bunch of experiments with getting the best performance out of 7-Zip, and found that the LZMA2 method with a 24-bit word size, a 256 MB block, and 8 threads was the fastest combination of settings.

I am not going to show the individual steps for doing this – you can just take the defaults on the compression, but it will take longer, be bigger, and be slower on the download, so I suggest trying Dan’s settings.
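
For the command line inclined, here is roughly what Dan’s settings look like as a single 7-Zip command, including the split into DVD-sized volumes I mention below.  The paths are made up, and I am reading his “256 MB block” as the dictionary size – if he meant the solid block size, -ms=256m is the switch instead – so treat this as a sketch, not gospel.

    7z a -t7z -mx=9 -m0=lzma2 -md=256m -mfb=24 -mmt=8 -v4700m E:\Metavero.7z "D:\VMs\Metavero\*"

-mfb is the word size, -mmt the thread count, and -v4700m chops the archive into roughly DVD-sized pieces.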

Downloading the compressed files

I’ve done this four different ways (what, you think I know all of this stuff before I write it down?  If only.  Nope, I have to blunder through the options until I get to the right answer):
  • Transferring the file(s) from the AWS instance to an FTP server and then downloading them (this got me a nastygram from my internet provider because of the size and the download threads, which ate up the box for everyone – whoops, so firmly rejected on my part).
  • Using Terminal Services to transfer the files.  Just follow the three steps below in the TS client; you must set this up before you connect to your instance.  Your local drives will then show up as mapped drives inside the AWS instance.
 
  • Setting up an FTP server on your AWS instance and downloading from there.  Note that you will need to open up the default port of 21 in your AWS Security Group/firewall.
  • Using AWS’ S3 – this is the way I did it.  It’s a little confusing at first, but Cloudberry Explorer makes it dead simple.

Using Simple Storage Service (S3)

Given the other three approaches (two really, as I would avoid the first approach of sending the file – or files if you split them up – to an external FTP server), why use S3?

I used 7-Zip to both compress the VMWare files and to split them up into DVD sized (4.7 gigabyte) files.  I have had issues (Oracle e-delivery is where I’ve experienced this before) with my wonderful (can you tell it annoys me?) DSL connection.  What happens is that the files get downloaded, look like they’re valid, but in fact are corrupt.  

S3 allows me to redownload parts of my VM that fail.  It’s a pain to do that, and slow, and it costs (S3 charges you for downloads – pray that you have a better internet connection than me) but it is better than downloading everything over and over again.  Also, it gives me (and you, too) a chance to learn a new technology.  I should mention that John Booth mentioned S3 as an approach – as always, he has some really great ideas and I am at least smart enough to listen to them.  :)

I am not going to provide a detailed tutorial on S3 but suggest that you read here.  I essentially treat S3 as a super easy to set up FTP Cloud server that does not require me to configure the AWS instance’s IIS settings.  Note that most Cloud-based services such as OpenDrive or Box limit the size of uploaded files.  My provider, OpenDrive, has a file limit of 100 megabytes per file, so with my 4.7 gigabyte files, I really had to come up with another approach.

If you are interested in other tools other than Cloudberry, have a read of this thread.

If you are interested in using Cloudberry, read this very nice tutorial.

NB – You can also use the AWS console’s S3 component to move the data around – that’s what Dan did.  It is as simple as opening up the AWS console on your AWS instance (sort of like a mirror facing a mirror) and then right clicking inside your bucket like the below:

I did the same thing via Cloudberry to a S3 bucket I called VMWareMetavero.

To get it onto my laptop, I installed Cloudberry again and then downloaded it to my external hard drive.  

Alternatively, I could have just gone into S3 via the AWS console and done this:

Cloudberry made it a little less painful so I went that way.
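
If you would rather script the transfer than click through a GUI, Amazon’s own command line tools (or s3cmd) can do the same push and pull as Cloudberry.  A sketch, assuming the aws CLI is installed and configured with your keys on both ends, that the paths are yours, and with the bucket name lower-cased here; repeat per .00n piece, or sync the whole folder in one go.

    aws s3 cp C:\Exports\Metavero.7z.001 s3://vmwaremetavero/
    aws s3 cp s3://vmwaremetavero/Metavero.7z.001 E:\VMs\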

That’s it.  

Unzipping the files (or even combining them)

Once the download to the Ground is complete, install 7-Zip on your laptop if it isn’t there already, and then decompress.  Again, this takes a while.
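
If you like the command line better than the 7-Zip GUI, pointing 7z at the first volume of the split set will stitch the pieces back together and extract in one go.  The paths below are hypothetical, obviously.

    7z x E:\Metavero.7z.001 -oD:\VMs\Metavero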


Avoiding the Blue Screen of Death (BSOD)

At this point you have:
  1. Removed the media files from c:\media unless you really, really, really want them.  You might, but probably not.
  2. Converted the 11.1.2.2 AMI to a VM Workstation VM
  3. Compressed that VM using 7-Zip’s LZMA2 method with a 24-bit word size, a 256 MB block, and 8 threads.
  4. Downloaded that through S3 (or whatever method you prefer but that’s the easiest) to your laptop

So you’re all ready to go, right?  Uh, no, because here’s what happens when you try to fire up that VM in Workstation.  Auuuuuuuuugggggggghhhhhhhh!!!!!!!!!!   It’s the Blue Screen of Death!!!!!!!!!!!

And not just the BSOD, but a BSOD that will immediately reboot Windows so that you have a lovely endless loop of BSODs.  Fun times, fun times.

At least it’s fast – it took me about five tries (so we are talking 30 minutes of reboots) to get that screenshot with Snagit.  It will flash very, very, very quickly on your screen.  Is there a cure?  You betcha.

The cure for the Blues

If you want it all in one succinct (but not terribly well explained or at least I couldn’t follow the directions until I did it three times) thread, read this on the VMware support forum.  I’m going to show it to you step by step and will make it a tiny bit less painful.

Just to be completely up front, I am taking everything I read in that thread and putting pictures to it – the brains behind figuring this out belong solely to ivivanov and leonardw who figured all of this out.

The issue is the RedHat SCSI Bus Driver (really all of the Red Hat services, all of which start with “rhel”) despite the storport.sys message in the BSOD.  Who would believe that an error message is misleading or doesn’t give all of the information you need?  Why I would, and so should you.  The RedHat services are part of the EC2 Amazon offering and simply don’t work (why I know not as I am no hardware expert, but I can certainly attest to their super-duper not working).  It blows up Windows 2008 R2 on VMWare real good.

Richard Philipson tried out part 1 and pointed out (yep, some people actually read this blog, thankfully otherwise this is the most involved echo chamber ever) that the ec2config service is superfluous (and causes a wallpaper error on startup) and that those RedHat services are “a set of drivers to permit access to the Xen virtualized hardware presented by Amazon EC2 to the guest operating system.”  It makes sense that without Xen under the covers, there is no Xen virtualized hardware.

NB – There is a separate intermittent error in VMWare if you have more than two cores to your laptop.  If you are on a machine with more than two cores, you may get a multiple processor error (a different BSOD).  If that is the case, you should set the number of processors to 1 and the number of cores to 2 in VMWare.
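
If you would rather set that once in the VM’s configuration file than fiddle with the VMWorkstation settings dialog, these two lines in the .vmx (edited while the VM is powered off) should do the same thing – a sketch, not gospel:  numvcpus is the total virtual CPU count and cpuid.coresPerSocket packs them into a single socket, i.e. 1 processor with 2 cores.

    numvcpus = "2"
    cpuid.coresPerSocket = "2"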

Step 1 – Getting into Boot Options

Open up your nifty new VM in VMWorkstation and start it.

As it starts up hover your mouse over the VMWorkstation window and press Ctrl+G.  You need the VM to get control of the keyboard/mouse as you are going to be holding down the F8 key.  If your Windows host has control, F8 will toggle a selector bar in the VMWorkstation application and you will not be able to get into Advanced Boot Options.  It’s a total Pain In The You Know What.

No worries if you don’t get it the first time as the Metavero VM will crash very quickly indeed.  :)

A VMWorkstation bar will pop up telling you to install VMWare Tools.  Ignore that for now, but you will need to install it eventually.

Step 2 – Go through the System Recovery Options
First select a keyboard.

Then log in.  The cool thing is this is just like the AMI – username Administrator, password epmtestdrive.

Step 3 – Run a Command Prompt and then Regedit
Select Command Prompt.  

You will then run Regedit from x:\windows\system32.

Here it is:

Click on HKEY_LOCAL_MACHINE and then File->Load Hive.

Navigate to c:\windows\system32\config and select the SYSTEM file and click on Open.

NB – This must be on the C: drive, not the X (that’s the repair drive).  Here’s what x:\windows\system32\config looks like.  Note the two SYSTEM files.  You do NOT want this as it does not contain the services.

What you do want to see is this and it’s only available off of the C drive:

Type in “p2v” into the Key Name field and click OK.

Navigate to HKEY_LOCAL_MACHINE\p2v\ControlSet001\services.

For each of the rhelfltr, rhelnet, rhelscsi, and rhelsvc services, click on the service in question and select the Start parameter.


In the below screenshot I have selected rhelscsi.  Note that there is some discussion on that VMWare thread that only rhelscsi needs to be disabled.  I’ve tried that and sometimes it works and sometimes it doesn’t.  My suggestion is to disable all four.

Right click on Start, select Modify, and change the value from 0 (or whatever) to 4, which disables the service.

From this:
 

To this and click OK:
 

Note the value of 4:

Do it again for rhelfltr, rhelnet, and rhelsvc.  All of these services need to be stopped.

With all four services disabled, select the key again.
 

Then select Unload Hive.
 

Select Yes in the confirmation dialog box.
 

Minimize Regedit, and then select Shut Down.  You will then restart the VM.
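
If you would rather script the whole hive edit than click through Regedit, I believe the same thing can be done from that recovery Command Prompt with reg.exe.  I have not run this end to end myself, so treat it as a sketch; it uses the same hive path and service names as above.

    reg load HKLM\p2v C:\Windows\System32\config\SYSTEM
    reg add HKLM\p2v\ControlSet001\services\rhelscsi /v Start /t REG_DWORD /d 4 /f
    reg add HKLM\p2v\ControlSet001\services\rhelfltr /v Start /t REG_DWORD /d 4 /f
    reg add HKLM\p2v\ControlSet001\services\rhelnet /v Start /t REG_DWORD /d 4 /f
    reg add HKLM\p2v\ControlSet001\services\rhelsvc /v Start /t REG_DWORD /d 4 /f
    reg unload HKLM\p2v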



Start the Metavero (that’s what I named it) VM back up.

Ta da, you are now running (and not BSODing) Windows 2008 R2:

In VMWorkstation, VM->Send Ctrl+Alt+Del to get the login.  

NB – You can also hit Ctrl+Alt+Insert to get the same thing.

And there you are:

And finally (at least on my laptop, there is a fair amount of time before this all boots up):

Don’t forget to activate Windows

You have three days to apply that valid key for Windows 2008 R2 Datacenter.

No, I am not going to give you a valid license key.  But I have given you a lot of ways to get one, all of them legal.

Interestingly, Microsoft Action Pack (to which both Dan and I belong) does not support 2008 R2 Datacenter (bummer for those of us with a MAPS subscription), but a MAPS subscription gets you a free TechNet signup, and through TechNet one can get a valid 2008 R2 Datacenter key.  Whew.  Thanks to Dan for figuring this out as it was not exactly straightforward.  

If you are on the fence between MAPS and TechNet, note that TechNet Professional only costs $349 for the first year but MAPS (you do have to qualify as a partner) gives multiple internal use licenses.  You decide.
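
Once you do have a key in hand, applying and activating it is a two-liner from an elevated command prompt inside the VM – the key below is obviously a placeholder, not a hint.

    slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
    slmgr /ato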

So what do we have?

Well, in the case of both Dan and me, slightly different outcomes.

In Dan’s environment, running on a 24 gigabyte laptop, he has a pretty awesome EPM installation.

In my world, running on an 8 gigabyte laptop, I pretty much have an unusable EPM installation because my host laptop simply doesn’t have enough horsepower.  Although I do have a nice blog post.  :)

Based on our tests, you simply must have a 16 gigabyte laptop to make this work acceptably.

What’s the right choice – the Cloud or the Ground?

As I wrote above, if you don’t have a multiprocessor, 16+ gigabyte laptop, with plenty of disk space, you can pretty much forget this approach.  A valid Windows 2008 R2 Datacenter key would be nice as well.

Assuming that you do have the above, is the Ground worth it?  I think the answer, despite the pain, effort, and time (I’m pretty sure this must be a world record for me for the length of a single blog post) is, “Yes, absolutely!”

You get a professionally installed EPM instance that is right there on your laptop/PC without the AWS charges.  That’s pretty cool.  And you (and Dan and I) got to use, and learn, a whole bunch of tools that are pretty darn useful.  All I can say is that I will be getting a Dell Precision 4600 or 4700 in the near future.  That’s putting money where my mouth is.

I hope you enjoyed the multiple hacks.

And a big thanks to Dan Pressman and Richard Philipson for helping out with this monster of a post.