Want to know The Truth About CPM?

06 September 2017

What are you doing the rest of your life? Or at least what are you doing for the rest of the 20th of September?


Alas, I cannot provide Dusty Springfield for a number of reasons (alas she’s dead, were she alive I’m not sure we’d be BFFs (alas, again, for me this time), and I’ll bet the number of you Gentle Readers who actually like this kind of music rounds down to zero but maybe this exposure will change a few minds), but I can provide an excellent opportunity for Bay Area (NB – One should, apparently, never refer to the area as “Frisco” lest hirsute, enraged, and man-bunned baristas take umbrage.) Oracle EPMers to meet, greet, commiserate, and congratulate one another at the San Francisco Bay Area Oracle EPM Meetup.

Whew, even for me that’s an awful lot of parenthetical references.  Let me boil that down:  There’s a Bay Area Oracle EPM meetup on the 20th of September.  You should be there.  

Back to my self-indulgent/sometimes informative active voice below.

Where, When, What, and Who

Who

Let’s go completely backwards (Why not the right way round?  Heh, why not?) on this one and give you the who first.  The who is, of course, you, Gentle Reader, as ultimately that’s what meetups are all about:  people.  And this ODTUG EPM meetup has that in spades with Marc Seewald of Oracle as well as Western Digital’s Bill Roy, Sree Putreddi, and Mark Govostes.

All of this is being organized by longtime Oracle EPM manager Frank Chow in cooperation with the ODTUG EPM Community.

What

Beyond the normal meetup networking and sharing, there is real business and technical content.  To wit, Marc Seewald will cover something near and dear to every Oracle EPM customer:  Oracle EPM’s roadmap.  Seriously, with all of the noise and rumors swirling about, you owe it to yourself to hear just what Oracle has to say.

Your fellow customers aren’t MIA:  the team from Western Digital will be there to discuss their journey from on-premises to Exalytics and the cloud.  Without exaggeration, that’s got to be a fascinating story both in terms of how it was done and why.

Here’s their agenda:
  • Overview of WDC
  • EPM @ WDC
  • Why did WDC decide to move to EPM Cloud
  • WDC Current Oracle EPM Footprint (On-Prem & Cloud)
  • Roadmap of WDC Oracle EPM Cloud
  • Business / Technical challenges
  • Metadata Management / Integration / Automation
  • Q&A
  • Support model (On-Prem Vs Hybrid)
  • User adoption of EPM Cloud products
  • Open Questions

When

It’s on Wednesday, 20 September, 2017, from 2:00 pm to 6:00 pm.

Where

At Google’s offices, Google Bldg Plymouth 1, 1500 Plymouth Street, Mountain View, CA.

And oh yeah, Why

Why?  Why?  Really?  Why?  Ah, why.

Why is because while Kscope is awesome, like Christmas it comes but once a year; meetups are on offer all year round.  Yes, they aren’t a week of awesomeness, but the same people (well, at least some of them) come to meetups plus others who don’t get the chance and the same subject that is near and dear to all of us – EPM in all its forms – is the subject of discussion.  It’s vital to your job (come to think of it, mine as well) to be au courant on all things EPM and meetups are an excellent way to do that.

Join Frank, Marc, and the guys from Western Digital, won’t you?

Click here to attend.

Be seeing you there.

03 August 2017

Drill to Detail Ep.37 'How Essbase Won the OLAP Wars' With Special Guest Cameron Lackpour

Drill to Detail, just what is it?

It’s Mark Rittman’s take (I am just going to crib directly from the site) on the business and strategy of analytics, big data, and distributed processing in the cloud in the form of podcast interviews with the geeks (I may have added this descriptor) who make it all happen.  I particularly like the way his interviews and interviewees contextualize technologies within markets, companies, and across history.  There’s nothing else like it.

If this sounds overly ambitious, know that the execution exceeds the vision.  Why?  Because Mark interviews the Great and the Good of our (and many other) professional spaces to get their take on technology, products, markets, trends, and futures.  Some of the luminaries most of us should recognize are Stewart Bryson, Dan McClary, Graham Spicer, Robin Moffatt, Vasu Murthy, Chris Webb, Gwen Shapira, Adrian Ward, and Donald Farmer who speak on tools and subjects as diverse as Gartner groupings, Big Data, Kudu, Hadoop, various forms of BI, Cloud (of course), DI, OLAP, and now… Essbase.

Yeah, Essbase.  And yeah, yr. obt. svt.  But don’t let that last bit put you off.

I could write all kinds of (un)funny jokes about how Mark made some sort of mind numbing mistake bringing me on (and would likely get many to agree) and could write all kinds of quite truthful remarks about how many others could have done better.  Instead I’ll just shut up for once and be grateful.  

You can subscribe to the series on iTunes or listen in via your browser:
In any case, I am beyond flattered to have been included in Mark’s program and I hope I do the subject justice.  Listen in, I think you’ll enjoy it.  Listen to the rest of the Drill to Detail series as well – it’s eye opening.

Be seeing you.

P.S.  And yeah, 76 minutes of yr. obt. svt., longer than any of his other podcasts.  I'm either an Essbase bore and he couldn't stand listening to me again to edit it down or what I have to say interested Mark and maybe you too.  You decide.

13 April 2017

Miami, Florida ODTUG BI-EPM Oracle Cloud Stack User Event live blog

Where are you?  Probably not at the ODTUG BI-EPM Cloud Stack User Event.

Yr. Hmbl. & Obt. Svt. writes that because you’d not be reading my blog if you were.  Actually, if you are there and you are reading this blog, this can’t be a very good event.  With luck the number of people that fall into this category rounds down to zero because it actually is a pretty good event.

But I digress.

I’m going to try to live blog (read:  totally suck when I try to live blog) this meetup (Correction – I have been informed that this is a summit.  Fancy.) as well as do as much live video as I can on Periscope.  You can watch that if you are a Periscope user or via my Twitter feed @CameronLackpour.  Your choice, as I aim to please.

To be a little fair to myself (eh, still lazy to the core), my laptop was being used for the presentations and my phone was being used for recording so apparently I’d either need to buy another laptop (nope) or phone (double nope) to be able to blog and present and record all at the same time.  Whew.  Given that the Chancellor of the Exchequer – me – isn’t going to allow that kind of capital expenditure I think I’m going to have to give up the notion of live blogging in future and settle for doing it after the fact.

Here are the Periscope feeds (because again, I’m lazy and I suck and also because I accidentally stopped the broadcast cf. I suck):

I know it’s a lot (thank you, wireless carrier, for your “unlimited” data plan as I’m going to do my best to stretch that definition to the limit) to watch but there’s awfully good content if you ignore my presentation.

Lunchy-lunchy

This has to be some of the best food I’ve ever had at a conference.  It was awfully good.

Here’s the remnants of the food:


I don’t know if I’d suggest that you come to a meetup (sorry, summit) because of the food but it’s a thought.

The speakers

So again, if you want to experience the summit, watch the Periscope sessions.

Here was the lineup – all good, even my presentation:
12:00 PM - 12:45 PM Networking and Lunch
12:45 PM - 12:55 PM Welcome and Introductions
1:00 PM - 2:00 PM Oracle Keynote Speaker: Jacques Vigeant, Senior Director Product Strategy
2:00 PM - 2:40 PM Doug Hahn, Enterprise Technology Officer at Invesco, How Do I Decide What Cloud is Best for Me?
2:40 PM - 2:55 PM Break
3:00 PM - 3:40 PM Phil Bernhardt, Finance Director of Strategic Planning at Scholastic Book Fairs, Scholastic Book Fairs Transitions to Oracle PBCS
3:40 PM - 4:20 PM Cameron Lackpour, Oracle ACE Director, Making the Administrative Transition: On Premises Planning to PBCS
4:20 PM - 4:30 PM Break
4:35 PM - 5:15 PM Lakshmi Balusu, VP of Financial Systems at Perry Ellis, A Journey to HCM Cloud
5:15 PM - 5:25 PM Closing Announcements: ODTUG Volunteer Opportunities and ODTUG Kscope17
5:30 PM - 7:00 PM Poolside Networking Happy Hour

One of the things that was interesting was that the audience at this meetup (sorry again, summit) was a real mix of attendees:
  1. It actually was almost completely customers – a good thing, as otherwise it’s a consultant fest: consultants presenting to other consultants.  Boring.
  2. It actually was a real mix of levels – yes, it is an Executive Summit, but there were directors, real honest-to-goodness executives, and Hyperion admins, so again, a good thing and not the norm for meetups, which trend towards the geek.

The presenters

For those of you unwilling to view the videos, here are some snapshots of the Best and Brightest as they present.

The ever ebullient, enthusiastic, and positive Jessica Cordova kicking it off


Jacques Vigeant giving the keynote (it was really good)

Doug Hahn, an honest-to-goodness executive and a very good speaker

Phil Bernhardt presenting on a truly successful PBCS implementation

Yr. Hmbl., Fthfl., and Obt. Svt. at work and OMG I hate looking at myself

Lakshmi Balusu talking about the Good, Bad, and the Ugly of HCM Cloud

What did you think about the event?

Here’s Ileana Ryan’s take on things:
Click on the snapshot or here:  https://www.pscp.tv/w/1OyKAoZdvQMJb

Thanks to all

None of this would have happened without the aid of:
and of course ODTUG.

Thank you sponsors and thank you attendees for coming together for such a great event.

Be seeing you.

06 April 2017

EPM 12c: Read 'em and weep

As the saying goes, read 'em and weep:

Are you on-premises people still here?  Good.  You have a strong will and constitution.

Initially I (like you, I suspect) was stunned by this.  After promising and promising and promising us Planning functionality equivalent to PBCS (the release I heard was October 2016's) at Kscope after Kscope, Oracle have now for all intents and purposes pulled the plug.  Yes, there will likely be PSUs in future, but I cannot see how those will encompass what Planning in particular (but Essbase Cloud as well) offers in functionality.  In essence, what you have today in on-premises is what you'll get – bar bug fixes and minor enhancements – for now and for the future.  Will this be what the audience at Kscope17's Sunday symposium looks like?

http://theelusivefish.com/wp-content/uploads/2015/08/frankenstein_mob.jpg

Maybe.

Before you lose your minds -- and I am not playing the role of apologist for Oracle -- think very carefully about what you're trying to accomplish with your current install.  As John noted (and I'll add on), you have choices in the immediate future:
  • Make like the mildly upset (ahem) villagers and threaten to burn Redwood City (or Palo Alto or Sunnyvale or wherever Oracle EPM is) to the ground.  This ought to be good visceral fun.  <--John may not have suggested this approach.
  • Dump Oracle like a ton of bricks and switch to some other company. 
  • Move to the cloud whether you want to or not.  Even if you think it's not for you, it is now.  Enjoy.
  • Don't do anything at all for the short to medium term.  Be like the force of inertia.

All of these choices incur cost.  Only you can decide if that cost is worthwhile.

Ordo ab hoc

Regardless of what you do, I urge you to think about what your next steps should be. 

In short:
  • Don't do the "When in danger when in doubt, run in circles, scream and shout." routine.  Fun to watch; painful to be part of.
  • Do think about how you can (if you want it) put pressure on Oracle to change their mind.  Don't be subtle. 
  • Do think about what your alternatives are:  switch vendors, switch to the cloud, or stay put.  (There's always that fourth way of mob violence but I only recommend that as an exercise for the mind no matter how enjoyable it may be.)

Consider something else:  a major upgrade to 12c was going to be (as EPM on-premises major upgrades always were) expensive, painful, full of bugs, and yet a chance to reconsider what your system does and why and how.  Those precepts (and I suspect the pain) hold true no matter what your change in direction may be, only now the direction is something other than on-premises.

On-premises customers would have faced that transition regardless of EPM 12c or no EPM 12c.  Would this conversion have been easier if 12c had been released?  Undoubtedly yes.  But is this sort of break potentially good for you and your company, a push to really think about what you'll be doing two or so years down the road?  Undoubtedly yet painfully yes again, so there's some good mixed in with the pain and the rage.

That it will never come again, Is what makes on-premises EPM so sweet

What do you think you'll do once the shock wears off?  Dance with mad abandon whether that be with joy or with grief?  Head for the nearest bar to see how many boilermakers you can drink (bonus points if you are teetotal)?   Something else?

Comment care of this blog (or do it to me privately because you don't want to be identified).  Oracle read it.  I'm not sure they care about it, but they read it.  

Regardless, I'm going to take every single one of your comments and forward them directly to Matt Bradley, Mike Casey, Al Marciante, Shankar Viswanathan, and Rich Wilkie and anyone else I can think of in the EPM product management space. 

Good luck to all of us!

Be seeing you.

16 February 2017

A lightweight, modular, and even better EPM on-premises backup

Time waits for no one

Time and tide wait for no man

A while ago, like five years ago, I wrote about a lightweight, modular, and better Essbase backup.  Funny, it doesn’t seem all that long ago and yet time passes swiftly.

That code works (and in fact I just put it in at a client) but it only covers the Essbase side of things.  As reluctant (Actually, not that reluctant, as paying bills, funding retirement, and having some money left over for fun things all come mostly from Planning and assessments, so alas, not nearly as much Essbase as I’d like.) as I am to admit this, there is more to on-premises life than Essbase.  How, then, does the rest of the world get backed up?  The answer is LCM.

More than just on-premises

While this post is going to only cover on-premises, there will be a second post on how to do this in PBCS/EPBCS.  Chris Rothermel contributed (again – the guy is generous) to that one (Yes, I have it, yes he gave it to me weeks ago, no I haven’t finished my part of it, yes I suck.) and while it is conceptually the same it has a few twists.  God willing and the Creek don’t rise, I’ll have it out next week.

A note about length

Er ma gawd this post is long – 38 pages in MS Word – and hence you might think that this process belies its title.  The actual code to perform this backup process is 41 lines including blank lines and comments.  Take those out and you’re down to 11 lines of script code.  That’s right:  11.  Is that lightweight enough for you?  I certainly hope so.  The unfortunate post length revolves around the one-time setup.  There are a fair few steps to do that, but never fear, Gentle Reader, I lay it all out step by step as simply as I can.

The concept

As with my post from as near as can be five years ago, the gimmick in this backup process is to:
  1. Limit the number of on disk backups to 7 (one for each day of the week, natch)
  2. Use the number of the day of the week, e.g. 1 = Sunday, 2 = Monday, 3=Tuesday, etc. as that day’s rolling backup target
  3. Use seven scheduled processes, one for each day of the week
  4. Parameterize everything everywhere so it’s one snippet of code for all seven days.
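
As a side note (a hedged sketch, not part of the batch code later in this post, which passes the generation in from the scheduler), the day-number-to-folder mapping above can also be derived at run time.  In shell, `date +%w` prints 0 (Sunday) through 6 (Saturday), so adding one reproduces the 1 = Sunday through 7 = Saturday numbering:

```shell
# Derive today's rolling backup generation (1 = Sunday ... 7 = Saturday).
# date +%w prints 0-6 with Sunday as 0, so adding one matches the scheme above.
gen=$(( $(date +%w) + 1 ))
echo "Today's backup folder: $gen"
```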

The only real differences between this post and the Essbase-only post are that:
  1. The backup can include each and every object for each and every product in your LCM universe.  Want to backup HFM?  FDMEE?  Financial Reports?  The world is your oyster.
  2. The backup definition is defined through Shared Services once and only once.  Modifications to the process require modifying the LCM xmls (export and import).
  3. Restoring objects requires working through the LCM utility or via a manual import to Shared Services.
  4. The backup needs to run on the Shared Services server or wherever you can get the LCM backup utility to run.  I’ve never seen that anywhere other than on the Shared Services server, but I don’t get out much.
  5. BSO Essbase restructuring does not take place; you’ll have to figure out another way to handle this.  It would be easy to modify my old post to solely do restructures or you could simply run them on an ad-hoc basis.  For performance, I suggest that you run it every day, particularly within the context of a Planning application.

That looks like a lot of differences but you’ll see as we dive into the code that it’s really quite similar and in some instances easier and more flexible than an Essbase-only backup.

Let’s get started

Backup locations

One more time through the folder/day of the week relationship.  Yes, Gentle Reader, you’re familiar with the days of the week and the number of each day of the week.  I include this just in case you get a bit wobbly on the relationship of one with the other.
Number    Day of the week
1         Sunday
2         Monday
3         Tuesday
4         Wednesday
5         Thursday
6         Friday
7         Saturday

Easy peasy no big deasy.  

Creating the LCM export

Wait, you say.  Do you?  Hopefully, else you aren’t paying attention.  Wait, why is a manually driven Shared Services export required?  Actually, the export itself is not needed; what is needed are the LCM export and import xml migration definition files.  This is a one-time task.  The layout is documented but I think it’s far easier to do this manually.  Remember, except for export.xml and import.xml the results are disposable.

In Shared Services

For this example I created an export of ASOSamp.Sample, Sample.Basic, and the Vision Planning application I migrated from PBCS to on-premises.  I boldface the latter because some people think it’s a Plain Jane migration to the cloud.  Nope, as ever, Yr. Most Hmbl. & Obt. Svt. has gone against the grain.

Go to Shared Services


Within each technology group select the applications to be backed up.  Remember, more than one product can be selected at a time.  Simply go from one to the other in Shared Services.  When you’ve picked the last application, then and only then, click on Export.

Good ‘ol Sample.Basic

Don’t believe Jason Jones’ comment about TBC going tits up.  It is alive and well.

Note that for this database I am not exporting data.  It is being excluded for demonstration purposes.  Also, if you do keep the Essbase export process it’s redundant.  You’re going to have to think about whether you want all Essbase restores to go through Shared Services or if you prefer to do simple restores of Essbase file objects.  Also, I’d note that the exports in Shared Services are single threaded whereas Essbase’s MaxL command can and should be multithreaded.  Lastly, as previously noted Shared Services exports do not support restructures.  Choose wisely.

ASOSamp

Here’s the ASO version of Sample.Basic.  I am exporting the data in this one to show you how it’s manifested.

Vision

Here’s that PBCS->on-premises Vision application.

Calculation Manager

Planning applications contain deployed Business Rules but you’ll want the rules you can actually edit as well.

Ignore FINPLAN and SampApp1.  My renowned and well-known laziness comes to the fore and I can’t be bothered to retake this shot.  

Run the export

We’re only going to do this once and just to get the export.xml and import.xml settings files.

Working, working, working


All done.

Download to disk.

Or simply go to the Shared Services import_export folder.  Your choice.

The file objects other than export.xml and import.xml are surplus to requirements.  

Migration definitions

Wow, nine pages of instructions to get to the two files needed for this backup.  Ugh.  There’s worse (or better) to come.

Export.xml

From a get-it-out-of-the-system perspective, this is all that’s needed.  Getting it back in will require the almost identical import.xml.

Password

The export process does not include a username/password although the node for them exists.

There is, super unfortunately and super annoyingly, an issue with regards to the password:  said password (and username) gets reset each and every time the process is run.  Thank Peter Nitchke for pointing this out in his post on a different way to back up LCM objects.  Yes, I’m sort of covering blazed trails but surely imitation is the sincerest form of flattery and I do think I have a slightly different take on it.

The first time round

Assuming that you have created a suitably named backup folder for the first day and have copied export.xml to that folder:

NB – I named the target folder (and thus the name of the backup to be restored in Shared Services if required) LCM_FPCMPSSSFR to stand for FP = Financial Planning, CM = Calculation Manager, PS = Product Sales, SS = Shared Services, FR = Financial Reports.  Name this folder however you like; it will need to be modified in the LCMBackup.cmd file.

Running it from the command line

To get the suitably provisioned username/password to work, you’ll need to run the Shared Services command line tool utility.bat just once:

Use Notepad++ to capture the password

Before this runs, make sure you have that c:\automation\LCMBackup\1\LCM_FPCMPSSSFR\export.xml file open in Notepad++.  The editor does not take a lock on the file, so when utility.bat updates export.xml, Notepad++ will offer to reload it.  Select Yes to pick up the updated file.

You’ll be prompted to update the file at the end of the backup process.  Don’t, as that will erase the updated username/password.  Btw, the fact that the tool does this is insanity, but there it is.

After you click “No” (this is the version of the file with the encrypted username/password), save the file to C:\Automation\LCMBackup\Code\Export_with_credentials.xml.

Modify import.xml with the user/password from the export xml.  Copy it to c:\automation\LCMBackup\Code\Import_with_credentials.xml.  

NB – Import.xml won’t be required for interactive import of objects via Shared Services but is required if you are a daring sort of chap and want to run the import from the command line.

The finished product

Here it is in the backup folder with each technology type (Calculation Manager) or database/app (Essbase & Planning) folder all present and correct.

Calculation Manager

Want a rule?  See a rule.  Enjoy.

ASOSamp

Note that data is exported to the root Sample database folder.  This is interesting in light of the fact that the exported data is not in the tablespace folders.

The Tablespace folders do not contain .dat or exported files but instead contain the properties of the tablespace(s).

Sample

Good ol’ Sample.Basic is with us, this time with calc scripts on offer.

Are those really calc scripts?  Yep.

Vision

Are you bored yet with the seeming regularity and comprehensiveness of the export?  Hopefully so, else this process isn’t fit for purpose.

Note that Planning’s data location differs from Essbase’s.  Of course Planning != Essbase so it’s not surprising that data exports are in a different location.

LCMBackup.cmd

This is the whole thing.  Yeah, yeah, no error handling.  So sue me.  This is a blog, not the tool I’m putting in at your site.   Also, this stripped down code sample keeps everything easy to see.

Code so you can copy it

Don’t forget that the code wraps to the page. See the above screenshot for the correct breaks.
@ECHO OFF
REM  Purpose:       Daily LCM processing
REM  Written by:    Cameron Lackpour
REM  Modified:      15 February 2017
REM  Notes:        --    Run the LCM backup for the current day
REM                --    The encrypted username and password must exist in a copy of Export.xml
REM                --    Import.xml is also required to allow an import of the LCM exports
REM                Usage:
REM                    LCMBackup.cmd 1 g:
REM                    Where:    1 = folder generation
REM                    g: = backup drive

REM Make the variables pretty
SET Gen=%1
SET Drive=%2
SET LCMUtility=%Drive%\Oracle\Middleware\user_projects\epmsystem1\bin\utility.bat

REM Clear out the target directory, removing all subfolders
REM Delete all files
REM    /F    Force deleting of read-only files.
REM    /S    Delete specified files from all subdirectories.
REM    /Q    Quiet mode, do not ask if ok to delete on global wildcard
del %Drive%\automation\lcmbackup\%Gen% /F /S /Q

REM Remove all subfolders
REM    /S    Removes all directories and files in the specified directory in addition to the directory itself.  Used to remove a directory tree.
REM    /Q    Quiet mode, do not ask if ok to remove a directory tree with /S
rd /S /Q %Drive%\automation\lcmbackup\%Gen%\LCM_FPCMPSSSFR
   
REM    Recreate the target directory   
md %Drive%\automation\lcmbackup\%Gen%\LCM_FPCMPSSSFR    
   
REM    Copy Export.xml file with credentials (username and password) to daily folder
COPY Export_with_credentials.xml %Drive%\automation\lcmbackup\%Gen%\LCM_FPCMPSSSFR\Export.xml
REM    Copy Import.xml file with credentials
COPY Import_with_credentials.xml %Drive%\automation\lcmbackup\%Gen%\LCM_FPCMPSSSFR\Import.xml

REM  Perform LCM backup
CALL %LCMUtility% %Drive%\automation\lcmbackup\%Gen%\LCM_FPCMPSSSFR\Export.xml >%Drive%\automation\lcmbackup\%Gen%\DailyLCMBackup.log

EXIT

Note the switches for del and rd.  It’s important to get rid of the files in the subfolders first before removing said subfolders.  Ain’t DOS grand?  This is necessary to make sure that deleted objects, e.g. a calc script that was directly removed from Sample.Basic, don’t persist.  I like to remove the folders as well just in case the export.xml files are modified.
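
For anyone running Shared Services on Linux (or who simply thinks better in bash), the same clear-copy-run pattern translates to a short function.  This is a hedged sketch, not a drop-in replacement:  the backup root, the Code folder convention, and the utility.sh name and location are assumptions, so point them at your own install.

```shell
#!/usr/bin/env bash
# Hedged bash equivalent of LCMBackup.cmd -- paths and utility name are assumptions.
# Usage: lcm_backup <generation 1-7> <backup root> <path to LCM utility>
lcm_backup() {
    local gen="$1" base="$2" utility="$3"
    local target="$base/$gen/LCM_FPCMPSSSFR"

    # Clear out and recreate the day's folder so objects deleted from the
    # source (e.g. a removed calc script) do not persist in the backup
    rm -rf "$base/$gen"
    mkdir -p "$target"

    # Overlay the credential-bearing migration definitions (the utility
    # strips the username/password on every run, hence the saved copies)
    cp "$base/Code/Export_with_credentials.xml" "$target/Export.xml"
    cp "$base/Code/Import_with_credentials.xml" "$target/Import.xml"

    # Run the LCM export, capturing its output to a daily log
    "$utility" "$target/Export.xml" > "$base/$gen/DailyLCMBackup.log"
}
```

Same 11-ish lines, same rolling-folder behavior, just with rm/mkdir/cp standing in for del/rd/md/COPY.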

Running LCMBackup.cmd

Again, easy peasy no big deasey.  

The syntax is:  lcmbackup.cmd generation drivename:

The example below is:  lcmbackup.cmd 1 c:

It’s all a bit difficult to follow as there are many objects to get deleted.  Happily there’s a log file that captures everything.

Breaking it down by numbered section

  1. Don’t tell me how, tell me why.  This is what I consider to be the bare minimum header.  Also, for the love of Mike, when you inevitably modify this code, please, please, please put your name alongside the modify date.  I’ve had people ask me, “Did you really do X?”  Maybe, but who can tell.  I like to take the blame when I deserve it but only then.
  2. Set parameters and get rid of all of the files below all of the subfolders.  Note again the high level of comments.  Some people live in “DOS” and so this is all well known.  I know the functions are there but I’ll be damned if I can recall them.
  3. Remove all of the folders.  Re the comments, Ibid.
  4. Remember that nonsense about the passwords?  Here’s where that oh-so-annoying deletion is overlaid.  Stupid but there it is.
  5. Actually run the LCM export utility.  Huzzah, we’re done.

Automation

That’s all well and good, but what about that relationship between days of the week and their ordinal order?  Remember that parameterization that drives the generation number?  This is where it comes into play.

What I’m going to show is Windows, but this could just as easily be done with Linux’s cron.
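
For the cron-minded, the seven scheduled jobs collapse to seven one-line crontab entries.  A sketch, assuming a bash port of the backup script living at /opt/automation/lcmbackup/lcmbackup.sh and a 1:00 am run time; note cron numbers Sunday as 0 in the day-of-week field while the folder generations stay 1 through 7:

```
# m  h  dom mon dow  command (hypothetical script path -- adjust for your install)
0  1  *   *   0    /opt/automation/lcmbackup/lcmbackup.sh 1 /opt
0  1  *   *   1    /opt/automation/lcmbackup/lcmbackup.sh 2 /opt
0  1  *   *   2    /opt/automation/lcmbackup/lcmbackup.sh 3 /opt
0  1  *   *   3    /opt/automation/lcmbackup/lcmbackup.sh 4 /opt
0  1  *   *   4    /opt/automation/lcmbackup/lcmbackup.sh 5 /opt
0  1  *   *   5    /opt/automation/lcmbackup/lcmbackup.sh 6 /opt
0  1  *   *   6    /opt/automation/lcmbackup/lcmbackup.sh 7 /opt
```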

Here’s my Windows 2008 server (soon to be 2012 but only for the next VM install) Task Scheduler.  Huh, I seem to have Flash installed on a server OS.  Gee thanks, Calculation Manager, for making me have one of the most hacked products ever on my server.  Get rid of that abomination Oracle EPM team.  Please.

Seven days for seven backups

Rant aside, to make this a rolling backup, create seven scheduled tasks, one tied to each day of the week, each calling the parameterized backup code, and that magical seven-day rolling backup is good to go.

Sunday step by step

I’ve typed enough.  Let’s let the pictures tell their 1,000 words.
Create a new process
Set it to run every Sunday
Path to LCMBackup.cmd and provide parameters
You’ll be logged off so provide a password
There we go

Sunday, Monday, or always

Let’s not do that manually six more times; instead, export to xml, modify, and import right back in.  Believe you me, it’s less tedious this way.

Export

Export file

Change DaysOfWeek and the appropriate numeric parameter

Monday

All together now
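
The edit itself is tiny.  In a Task Scheduler 2.0 export, the trigger’s DaysOfWeek element and the action’s Arguments are the only two things that change per task.  Here are the Monday task’s relevant fragments (the element names are standard Task Scheduler XML; the start time and path are examples, so substitute your own):

```xml
<!-- Trigger: run every Monday (swap this child element for each of the seven tasks) -->
<CalendarTrigger>
  <StartBoundary>2017-02-20T01:00:00</StartBoundary>
  <ScheduleByWeek>
    <DaysOfWeek>
      <Monday />
    </DaysOfWeek>
    <WeeksInterval>1</WeeksInterval>
  </ScheduleByWeek>
</CalendarTrigger>

<!-- Action: Monday is day 2, so pass generation 2 -->
<Exec>
  <Command>C:\Automation\LCMBackup\Code\LCMBackup.cmd</Command>
  <Arguments>2 c:</Arguments>
</Exec>
```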

Modifying the LCM definition

What if you want to expand the scope of the export?  You could go through the process in Shared Services if you like, or could simply perform a copy and paste.  In this case I’m going to show how to also back up Demo.Basic.

Simply copy the Sample.Basic task node.

Paste and modify to reference Demo.Basic.  Don’t forget to point to the right filePath.

Lather, rinse, repeat
Ta da, seven days of bliss after you’ve imported each xml definition.  This does seem to be the blog post all about XML, but it is such a good idea and oh so useful.

Restore

This backup process is like life insurance:  you hope you’ll never need to use it.  Of course that term life policy has the note of finality where this does not.  No matter, the analogy sort of holds.

Again, we’re at 35 pages and 2,500 words.  Let’s let the pictures tell the story of restoring Saturday’s backup.  What’s that you say?  No one in your firm works on Saturday?  You’re in luck.

I suppose I don’t need to show you how to copy and paste but in the spirit of completeness…

Copy the Saturday backup

Paste within the import_export folder

Yes, you could upload this as well through Shared Services.

There it is.

Select your object(s)

Import

All is restored

R&R

Yep, it’s time to relax.  It’s done.  Follow this lightweight, modular, and better EPM on-premises backup process and never worry about backups ever again.

One other note – your IT department is of course backing up all file objects so when you need to walk things back a month or three it’ll be a moderately simple task to restore that business rule of rare genius.  Right?  Right.

Be seeing you.