21 February 2012

A lightweight, modular, and better Essbase backup

Your nightly backup

Backups come in many flavors:  email, important documents, code, family pictures, SQL databases, and of course Essbase databases.  The nature of the file determines the backup approach.  Some files, like small Word documents (like, uh, this blog post, although this one got way longer than I anticipated), are easy.  Others, like Essbase, maybe not so much.  Of course Oracle has written a guide to backing up Essbase, and I suggest that you read it.

The easy stuff

ASO databases are dead easy to back up:
  1. Stop the application and its single database.
  2. Perform a file system copy of everything in the application directory other than the temp directory.
  3. Lather, rinse, repeat

The not so easy stuff

Backing up BSO the new way

Interestingly (I think this stuff is interesting.  Does anyone else?  Likely not.), with 11.x, Oracle has a completely different way of backing up BSO databases.  It’s three steps, but they are not ones I tend to see very often:
  1. Enable transaction logging.
  2. Use MaxL's alter database archive to file to write a baseline archive out to disk.  This .arc file gets backed up by a file system backup later.  If you are overwriting an existing backup, use alter database dbname force archive to file.  Both steps could be done manually with EAS as well.  (A MaxL sketch of this flow follows the list.)
  3. You will still need to do file system backups of the .otl, .csc, .rul, .rep, .epd, and .sel (What are .sel files?  Ah, saved member selection lists from the classic Excel add-in.) files within a given database.  Also, the application-level appname.app file and the Essbase-level Essbase.sec and Essbase.cfg still need to be backed up.
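
For the curious, here is a minimal MaxL sketch of that flow.  The application (Sample.Basic), the paths, the credentials, and the essbase.cfg line are all illustrative assumptions on my part, and the doubled backslashes follow MaxL's double-quoted string rules; check the Tech Ref for the exact grammar in your release.

/*    Transaction logging is enabled in essbase.cfg, not in MaxL, along these lines:    */
/*    TRANSACTIONLOGLOCATION Sample Basic G:\TransLog NATIVE ENABLE                     */

login admin password on localhost ;

/*    Write the binary .arc baseline to disk    */
alter database Sample.Basic archive to file "G:\\Backups\\Sample_Basic.arc" ;

/*    Or overwrite an existing archive    */
alter database Sample.Basic force archive to file "G:\\Backups\\Sample_Basic.arc" ;

logout ;
exit ;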


There’s quite a bit more in the backup guide about splitting archive files, partitioned databases, and Unicode.  Happy reading.

One thing you need to keep in mind is that these backups with an .arc extension are binary files.  That may or may not cause you concern.  

Restoring BSO the new way

This process requires that you keep transaction logging on at all times.  
  1. Restore the last backup.  This is really only necessary if there's been database corruption.  Otherwise you should be able to roll backwards and forwards through the transactions to get to the point you want.
  2. Replay transactions as I noted above.  How do you know what these transactions are?  EAS or MaxL's query database can tell you (a MaxL sketch follows this list).  Don't forget that you need to have some kind of backup process in place for all of the files that the archive process does not handle:  .otl, .csc, .rul, .rep, .epd, and .sel, as well as appname.app and the Essbase-level Essbase.sec and Essbase.cfg.
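
Again, a minimal MaxL sketch, with Sample.Basic, the paths, and the timestamp as my illustrative assumptions (the Tech Ref documents the exact time format and the sequence-id variant of replay):

login admin password on localhost ;

/*    Roll the database back to the archived baseline    */
alter database Sample.Basic restore from file "G:\\Backups\\Sample_Basic.arc" ;

/*    List the logged transactions available to replay    */
query database Sample.Basic list transactions after '11_20_2011:12:20:00' write to file "G:\\Backups\\Sample_Basic_translist.csv" ;

/*    Replay everything captured after that point in time    */
alter database Sample.Basic replay transactions after '11_20_2011:12:20:00' ;

logout ;
exit ;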

Sort of used, but not all that much

I like the notion that there is essentially BSO journaling, but I have to wonder how many people use it.  MMIC and I bounced this back and forth.  (Would you believe that Essbase consultants just IM this stuff out of intellectual curiosity?  It's true.  We are a sad lot.  Or at least some of us are.)

I did a highly unscientific survey of five consultants.  One said she has seen and used archiving and transaction logging, and two said they have seen transaction logging but no use of archiving.  The other two have never seen hide nor hair of it.

Is this just a case of the news not going out?  Write in care of this blog.

I’m old fashioned

Given the less than universal usage of the archive and transaction logging approach, the fact that Essbase artifacts (those pesky calc scripts, load rules, etc.) aren't handled, the fact that people just like to have text-based backups, and of course the fact that not everyone is on 11, I think there's still room for the "traditional" export-to-text-file backup.

A backup process that has been around since the year dot is to create a rolling seven day on disk backup that:
  1. Clears out the oldest of the seven (it could be more, but seven is usually it) backup folder(s)
  2. Pushes the remaining six backups down the backup stack
  3. Clears out today’s backup folder(s)
  4. Does a MaxL (sometimes Esscmd, but not too much of that is still extant) export of all (sometimes level 0) data to text files in a parallel export
  5. Clears out the Essbase database(s) to defragment them.
  6. Loads the exported data from step #4 back into the database(s)
  7. If not an all level export, runs aggregations in the Essbase database(s)
  8. Copies all of the relevant database Essbase artifacts
  9. Copies and clears out the log files – large log files (think budgeting apps with lots of sends and calcs) can slow Essbase down.  This is true both for application logs and Essbase.log.
  10. Oh yeah, stop Essbase to get a decent copy of Essbase.sec.  This is less important in 11.1.2.x because only filters are stored in the security file if you are using Shared Services, which most do.  But it's still a nice-to-have.


The result of the above is a fully defragmented Essbase database (really important with Planning-derived applications) and a backup that Essbase/Planning administrators can easily get at for quick restores of data and binary files.  You haven't lived till it's budget season, someone goofs, and you find out that IT can restore from the Essbase backup, maybe, in five to seven days.

My secret backup shame

I have written and implemented a few of these and, unsurprisingly (Thankfully?  Maybe surprisingly?  You decide.), my code has gotten better and better as I moved it from client to client, but it has always left me a bit dissatisfied because the code was so clunky.  I can be quite self-critical of my code, but then again sometimes I get to see what some other consultants have put in.

Oh my goodness.  

Even worse is when clients take a convoluted process (really all of the processes I’ve seen, including mine) and extend it.  The need for something simple, lightweight, and easily extendable without horrendous spaghetti code exists and I’m just the geek to do it.

What does that BSO backup need to do?

I’ve listed the specific steps above, but let’s look at these at a slightly higher level.  The backup must:
  1. Handle the seven-day stack and roll off the oldest backup
  2. Back up the Essbase databases and artifacts
  3. Back up and clear the application logs
  4. Back up Essbase.sec and clear out the Essbase.log file.


That’s it.  How hard could it be?

How does that rolling part work?

A rolling seven day backup structure looks something like this:
C:\Automation
C:\Automation\Backup
C:\Automation\Backup\1
C:\Automation\Backup\2
C:\Automation\Backup\3
C:\Automation\Backup\4
C:\Automation\Backup\5
C:\Automation\Backup\6
C:\Automation\Backup\7

Typically, folder 1 is the most current backup and folder 7 is the oldest.  The oldest generation of backup rolls off the stack and 6 becomes 7, 5 becomes 6, 4 becomes 5, etc.  The newest backup will be in folder 1 when it’s complete.  At least that’s the way I’ve implemented this in the past and observed other consulting companies do it.

So what techniques could be used to get rid of folder 7 (the oldest) and move everything down one?
  1. Move the files down one folder, leaving folder 1 empty.  S-L-O-W.  Why oh why did I write that?  In fairness to myself, I was but a young and callow youth.
  2. Delete folder 7, rename folders 1 through 6 to 2 through 7, and then recreate folder 1.  Better, but code intensive.
  3. Completely ignore the concept of rolling anything anywhere and instead just clear out and write the current day’s data to the folder that corresponds to the day, e.g., 1 = Sunday, 2 = Monday, 3 = Tuesday, 4 = Wednesday, 5 = Thursday, 6 = Friday, and 7 = Saturday.  There’s still rolling going on here, but the administrator just has to match the backup generation with the day of the week.  In some respects it’s even easier although a bit tricky when crossing weeks.  Just look at the modify date of the backups if there’s any question as to actual backup date.


Hardcoding is for dopes – I used to be a dope, don't you be one

The application and database backup structure looks like this underneath the generation folder:

C:\Automation\Backup\1
C:\Automation\Backup\1\Demo\Basic
C:\Automation\Backup\1\Sample\Basic
C:\Automation\Backup\1\Sample\Interntl
C:\Automation\Backup\1\Sample\XChgrate
C:\Automation\Backup\1\Essbase

NB – The Essbase folder is for Essbase.log and the backup of Essbase.sec.

Here's a graphical view of it for days 2 and 5, a.k.a. Monday and Thursday:


I will cop to the sad fact that in the past I (sob) hardcoded the delete and rename process.  When new Essbase databases came on line, I had to modify OS and MaxL backup scripts which led, inevitably, to mistakes.  This is the path of bad, convoluted, spaghetti code.  When I have to whiteboard what scripting code does and where, you (oh, you know who you are and your initials aren’t CL) have written garbage.

Again, intelligent parameterization and a rethink of what “rolling” means can result in a simple code set.

Parameterization is key

As I was rethinking the process, i.e., beating myself up and desperately hoping for another way, it occurred to me that I could use parameter variables both in the OS and MaxL scripts to make the code really lightweight.  Think about it – if username, server, application, database, drive, and backup generation could all be parameterized, I could call the same MaxL code base for any database on any server for any generation.  Write once, use many times with no painful code modifications.  Too good to be true?  Read on, MacDuff.

Code overview

This might look a little confusing (why so many .cmd and .msh files?) but my design theory was twofold:
  1. Keep the code simple so it can be turned on or off and expanded or reduced as needed with almost no code.  Check out how I did two different apps and four individual databases.  It’s very lightweight and yet fully functional.
  2. Keep like with like, i.e., a database backup is not an application log clear is not an Essbase security file backup.  You want to combine all of the code?  Be my guest, but I think you’re going down a spaghetti code path.  As this code is 100% free and 100% unwarranted, do as you wish but don’t blame me.

Script name – Description
DailyBackup.cmd – Master calling script.  This is the only place application and database names are hardcoded, and only as parameters.  Backs up the Essbase BSO databases, clears out the Essbase application log(s), copies the Essbase security file, and clears out Essbase.log.
BackupAndRestructure.cmd – Called by DailyBackup.cmd.  Clears out the target database directories (driven by parameter) and calls BackupAndRestructure.msh to do the nightly backup.
BackupAndClearAppLog.cmd – Called by DailyBackup.cmd.  Clears out the target application directory and calls BackupAndClearAppLog.msh to copy the application log file.
BackupEssbaseSecurityAndLog.cmd – Called by DailyBackup.cmd.  Backs up the Essbase.sec backup file and clears out the Essbase.log file.
BackupAndRestructure.msh – Called by BackupAndRestructure.cmd.  Puts the database into read-only mode, exports all data in four parallel streams, copies all of the Essbase artifacts, takes the database out of read-only mode, and restructures the database to remove fragmentation.
BackupAndClearAppLog.msh – Called by BackupAndClearAppLog.cmd.  Copies the Essbase application log.
BackupEssbaseSecurityAndLog.msh – Called by BackupEssbaseSecurityAndLog.cmd.  Copies the Essbase.log file, backs up Essbase.bak, and clears out the Essbase.log file.


DailyBackup.cmd

This is the only script you need modify, and only to add or remove applications and databases.  That’s it.  Pretty simple, eh?
REM    Purpose:    Day n processing
REM    Written by:    Cameron Lackpour
REM    Modified:    14 July 2011, initial write
REM    Notes:        Call the day's processing and pass the following command line
REM            parameters to the various jobs
REM    Usage:    DailyBackup.cmd 1 g:
REM    Where:    1 = folder generation
REM        g: = backup drive

REM Make the variables pretty
SET Gen=%1
SET Drive=%2

REM    Backup Sample Basic, Interntl, and XChgrate
CALL %drive%\automation\BackupAndRestructure Sample Basic %Gen% %Drive%
CALL %drive%\automation\BackupAndRestructure Sample Interntl %Gen% %Drive%
CALL %drive%\automation\BackupAndRestructure Sample XChgrate %Gen% %Drive%

REM    Backup Demo Basic
CALL %drive%\automation\BackupAndRestructure Demo Basic %Gen% %Drive%

REM Clear out the Sample application log
CALL %drive%\automation\BackupAndClearAppLog Sample %Gen% %Drive%

REM Clear out the Demo application log
CALL %drive%\automation\BackupAndClearAppLog Demo %Gen% %Drive%

REM    Back up the Essbase security file and clear Essbase.log
CALL %drive%\automation\BackupEssbaseSecurityAndLog %Gen% %Drive%

REM EXIT

BackupAndRestructure.cmd

You really never need to change this code.  Neat, huh?  It will clear out the target (subfolder of 1 through 7) and kick off the database backup.

You will note that I pass the username, password, and server name as parameters in this code.  Uh-oh, that’ll get you a Fail from your security auditors.  Well, that’s your environment, not mine.  But the fix is simple(ish).  Modify the code (you are all Essbase hackers, right?) to remove the parameters, put the info directly into the script, and encrypt the file.  See my 2009 Kaleidoscope presentation here for all you ever wanted to know about MaxL encryption.  
REM    Purpose:    Full export and restructure of database
REM    Written by:    Cameron Lackpour
REM    Modified:    20 June 2011, initial write
REM            12 July 2011,     Added target backup generation as parameter $7
REM            13 July 2011,     Removed app/db specificity and parameterized everything
REM            save the username, password, and server.
REM    Notes:        Call this from the command line per the following example:
REM            BackupAndRestructure Sample Basic 1 g:
REM    Where:        Sample = application name
REM            Basic = database name
REM            1 = folder generation
REM            g: = backup drive

REM Make the variables pretty
SET App=%1
SET Db=%2
SET Gen=%3
SET Drive=%4

REM    Clear out the target directory
DEL %drive%\automation\databackup\%Gen%\%App%\%Db% /Q /F

REM    Launch the Essbase data backup
%epm_oracle_home%\products\Essbase\EssbaseClient\bin\startmaxl %drive%\automation\BackupAndRestructure.msh admin epmtestdrive metavero %App% %Db% "%drive%" %Gen%

EXIT

BackupAndClearAppLog.cmd

Large application logs slow down Essbase, so clear them out on a daily basis.  Or not, but I suggest that you do.  If you want a better understanding of how ODL logs work, see my tortured reasoning here.
REM    Purpose:    Backup and clear the application log
REM    Written by:    Cameron Lackpour
REM    Modified:    14 July 2011, initial write
REM            25 July 2011, added backup of application log
REM    Notes:        The backup and clearing of the application log isn't done in the
REM            BackupAndRestructure code line as multiple Plan Type Planning apps
REM            would clear out the log file after the first run.
REM    Notes:        Call this from the command line per the following example:
REM            BackupAndClearAppLog Sample 1 g:
REM    Where:        Sample = application name
REM            1 = folder generation
REM            g: = backup drive
REM    Notes:        Use the MaxL script BackupAndClearAppLog.msh to back up and clear
REM            the application log.
REM            This script passes positional parameters to drive values in the MaxL
REM            script.
REM            *    MaxL parameter variables are as follows:
REM            -    $1 = user name
REM            -    $2 = password
REM            -    $3 = server name
REM            -    $4 = application
REM            -    $5 = drive
REM            -    $6 = target generation folder (1 to 7)

REM Make the variables pretty
SET App=%1
SET Gen=%2
SET Drive=%3

REM    Clear out the target directory that contains the appname.log file
DEL %drive%\automation\databackup\%Gen%\%App% /Q /F

REM    Launch the Essbase data backup
%epm_oracle_home%\products\Essbase\EssbaseClient\bin\startmaxl %drive%\automation\BackupAndClearAppLog.msh admin epmtestdrive metavero %App% "%drive%" %Gen%

EXIT

BackupEssbaseSecurityAndLog.cmd

Essbase.log and Essbase.sec are system-level files.  Interestingly (oh, I find this interesting, someone else does too, I hope) you do not need to force a backup of the security file with MaxL’s alter system sync security_backup because, per the Tech Ref:

If sync security_backup is not issued directly as described above, the security backup file is checked/refreshed automatically at the same frequency with which session inactivity is checked globally. The default inactivity check interval is five minutes. To change the interval, use set session_idle_poll, or see the Oracle Essbase Administration Services Online Help.


As you are running the backup at o-dark-thirty, that backup will already have taken place.  I love it when a plan comes together.  

A further somewhat interesting note – when I tested this on an MSAD 11.1.2.0 environment, I found that Essbase would not write out a backup, i.e., Essbase.bak's time/date stamp remained unchanged if there had been no change in security, even though one would sort of think it would.  You can always add the sync command to the MaxL if you want to, but I don't see the point.
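
If you do want to force the synchronization anyway, it is a one-liner in MaxL.  This is a standalone sketch (credentials and server are illustrative), not part of the scripts below:

login admin password on localhost ;

/*    Force an immediate refresh of Essbase.bak from the contents of Essbase.sec    */
alter system sync security_backup ;

logout ;
exit ;
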
REM    Purpose:    Backup the Essbase security file (via its Essbase.bak copy), backup the
REM            Essbase.log file, and clear the Essbase.log file
REM    Written by:    Cameron Lackpour
REM    Modified:    14 July 2011, initial write
REM    Notes:        The backup of Essbase.bak is the current contents of Essbase.sec
REM    Notes:        Call this from the command line per the following example:
REM            BackupEssbaseSecurityandLog 1 g:
REM            Where:    1 = folder generation
REM                g: = backup drive


REM Make the variables pretty
SET Gen=%1
SET Drive=%2

REM    Clear out the target directory that contains Essbase.bak and Essbase.log
DEL %Drive%\automation\databackup\%Gen%\Essbase /Q /F

REM    Launch the Essbase data backup
%epm_oracle_home%\products\Essbase\EssbaseClient\bin\startmaxl %Drive%\automation\BackupEssbaseSecurityandLog.msh admin epmtestdrive metavero "%drive%" %Gen%

EXIT

BackupAndRestructure.msh

Called by BackupAndRestructure.cmd, this is the meat and potatoes of the BSO backup process.

Did you catch the oddity of four backslashes for subdirectories?  Trust me, double quotes, four backslashes, parameter variables, and Windows go together like coffee and donuts.

Did you also catch the shell and XCOPY commands I use to do the copying?  MaxL doesn't support OS file copies, but Windows' command language sure does.  Ah, scripting, I love it so.  I use ExcludeFromXCOPY.txt to exclude the Essbase artifacts I don't want.  Change the exclusions as you wish.

Note that I am using a four file parallel export – again, change to your requirements.

Lastly, note that I am not clearing out the database, loading the data back in, and aggregating (if a level zero export), but am instead using MaxL's restructure command.  Despite that command being single-threaded in comparison to the load processes, my testing has revealed that the performance is roughly similar.  As restructuring is simpler, I stick with that.  As always, you can modify the code if you wish; a sketch of that alternative follows, just before the script itself.
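
For comparison only, and not part of BackupAndRestructure.msh below:  a minimal MaxL sketch of the export/clear/reload alternative, assuming Sample.Basic and an all-data export file already sitting in the database directory (the file name and error log are illustrative).  The calc step applies only if you exported level zero.

login admin password on localhost ;

/*    Remove all data, and with it the fragmentation    */
alter database Sample.Basic reset data ;

/*    Reload the previously exported data    */
import database Sample.Basic data from server data_file "Sample_Basic_1.txt" on error write to "Sample_Basic_load.err" ;

/*    Only needed for a level zero export:  rebuild the upper levels    */
execute calculation default on Sample.Basic ;

logout ;
exit ;
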
/*   
    Purpose:    Full backup and restructure of BSO database
    Written by:    Cameron Lackpour
    Modified:    20 June 2011, initial write
            12 July 2011, Added target backup generation as parameter $7
    Notes:        *    Parameter variables are as follows:
                -    $1 = user name
                -    $2 = password
                -    $3 = server name
                -    $4 = application
                -    $5 = database
                -    $6 = drive
                -    $7 = target generation folder (1 to 7)
                *    This *Essbase* backup does NOT address other backup
                    requirements in a Planning system.
*/

/*    Log in to Essbase    */
login $1 $2 on $3 ;
iferror ErrorHandler ;

/*    Log process to disk    */
spool on to "$6\\\\Automation\\\\DataBackup\\\\$7\\\\$4\\\\$5\\\\$4_$5BackupAndRestructure.log" ;
iferror ErrorHandler ;

/*    Put database into read-only mode    */
alter database $4.$5 begin archive to file "archivelist.txt" ;
iferror ErrorHandler ;

/*    Export ALL levels of data    */
export database $4.$5 all data to server data_file "$6\\\\Automation\\\\DataBackup\\\\$7\\\\$4\\\\$5\\\\$4_$5_1.txt", "$6\\\\Automation\\\\DataBackup\\\\$7\\\\$4\\\\$5\\\\$4_$5_2.txt", "$6\\\\Automation\\\\DataBackup\\\\$7\\\\$4\\\\$5\\\\$4_$5_3.txt", "$6\\\\Automation\\\\DataBackup\\\\$7\\\\$4\\\\$5\\\\$4_$5_4.txt" ;
iferror ErrorHandler ;

/*    Shell out to DOS and copy the Essbase objects   
    XCOPY parameters
    /Y           Suppresses prompting to confirm you want to overwrite an existing destination file.
    /EXCLUDE:file1[+file2][+file3]...
    /F           Displays full source and destination file names while copying.
    /V           Verifies each new file.
    /R           Overwrites read-only files.
    /C           Continues copying even if errors occur.
    /K           Copies attributes. Normal Xcopy will reset read-only attributes.    */
shell XCOPY %drive%\\\\Oracle\\\\Middleware\\\\user_projects\\\\epmsystem1\\\\essbaseserver\\\\essbaseserver1\\\\app\\\\%App%\\\\%db%\\\\*.* %drive%\\\\automation\\\\databackup\\\\%Gen%\\\\%App%\\\\%Db% /Y /F /V /R /C /K /EXCLUDE:%drive%\\\\automation\\\\ExcludeFromXCOPY.txt ;
iferror ErrorHandler ;

/*    Take database out of read-only mode    */
alter database $4.$5 end archive ;
iferror ErrorHandler ;

/*    Restructure (defragment) database    */
alter database $4.$5 force restructure ;
iferror ErrorHandler ;

/*    Define error label    */
define label 'ErrorHandler' ;
/*    Make sure the database is out of archive mode.    */
alter database $4.$5 end archive ;

/*    Exit the script.    */
logout ;
exit ;

ExcludeFromXCOPY.txt

Just so you can see what’s not being copied, here’s the file.
.pag
.ind
.xcp
.zip
.txt
.old

BackupAndClearAppLog.msh

Again, I could have combined this script with the database backup process, but the goal was to make things modular.  Besides, there are many databases per application (or at least there can be in BSO) but only one log file per app.
/*   
    Purpose:    Backup and clear the application log
    Written by:    Cameron Lackpour
    Modified:    14 July 2011, initial write
                25 July 2011, added backup of application log
    Notes:        *    Parameter variables are as follows:
                -    $1 = user name
                -    $2 = password
                -    $3 = server name
                -    $4 = application
                -    $5 = drive
                -    $6 = target generation folder (1 to 7)
*/

/*    Log in to Essbase    */
login $1 $2 on $3 ;
iferror ErrorHandler ;

/*    Log process to disk    */
spool on to "$5\\\\Automation\\\\DataBackup\\\\$6\\\\$4\\\\BackupAndClearAppLog.log" ;
iferror ErrorHandler ;

/*    Shell out to DOS and copy the appname.log file
    XCOPY parameters
    /Y           Suppresses prompting to confirm you want to overwrite an existing destination file.
    /EXCLUDE:file1[+file2][+file3]...
    /F           Displays full source and destination file names while copying.
    /V           Verifies each new file.
    /R           Overwrites read-only files.
    /C           Continues copying even if errors occur.
    /K           Copies attributes. Normal Xcopy will reset read-only attributes.    */
/*    Grab the application log file.  Do not try to validate the file as it is in read-write mode and things are getting written to the file, so drop the /V */
shell XCOPY %drive%\\\\Oracle\\\\Middleware\\\\user_projects\\\\epmsystem1\\\\diagnostics\\\\logs\\\\essbase\\\\essbase_0\\\\app\\\\%App%\\\\%app%.log %drive%\\\\automation\\\\databackup\\\\%Gen%\\\\%App% /Y /F /R /C /K ;
iferror ErrorHandler ;

/*    Clear out the application log file    */
alter application $4 clear logfile ;
iferror ErrorHandler ;

/*    Define error label    */
define label 'ErrorHandler' ;

/*    Exit the script.    */
logout ;
exit ;

BackupEssbaseSecurityAndLog.msh

You saw my note about not really needing to back up Essbase.sec or even to force the synchronization to Essbase.bak.  What this means is that you do not have to stop Essbase to get a good copy of the file.  Ain't life grand?
/*   
    Purpose:    Backup the Essbase.sec (really Essbase.bak) and Essbase.log files
    Written by:    Cameron Lackpour
    Modified:    14 July 2011, initial write
    Notes:        *    Parameter variables are as follows:
                -    $1 = user name
                -    $2 = password
                -    $3 = server name
                -    $4 = drive
                -    $5 = target generation folder (1 to 7)
*/

/*    Log in to Essbase    */
login $1 $2 on $3 ;
iferror ErrorHandler ;

/*    Log process to disk    */
spool on to "$4\\\\Automation\\\\DataBackup\\\\$5\\\\Essbase\\\\BackupEssbaseSecurityandLog.log" ;
iferror ErrorHandler ;

/*    Shell out to DOS and copy the Essbase.log file
    XCOPY parameters
    /Y           Suppresses prompting to confirm you want to overwrite an existing destination file.
    /EXCLUDE:file1[+file2][+file3]...
    /F           Displays full source and destination file names while copying.
    /V           Verifies each new file.
    /R           Overwrites read-only files.
    /C           Continues copying even if errors occur.
    /K           Copies attributes. Normal Xcopy will reset read-only attributes.    */
/*    Grab the Essbase log file.  Do not try to validate the file as it is in read-write mode and things are getting written to the file, so drop the /V */
shell XCOPY %drive%\\\\Oracle\\\\Middleware\\\\user_projects\\\\epmsystem1\\\\diagnostics\\\\logs\\\\essbase\\\\essbase_0\\\\Essbase.log %drive%\\\\automation\\\\databackup\\\\%Gen%\\\\Essbase /Y /F /R /C /K ;
iferror ErrorHandler ;

/*    Shell out to DOS and copy Essbase.bak    */
shell XCOPY %drive%\\\\Oracle\\\\Middleware\\\\user_projects\\\\epmsystem1\\\\EssbaseServer\\\\essbaseserver1\\\\bin\\\\Essbase.bak %drive%\\\\automation\\\\databackup\\\\%Gen%\\\\Essbase /Y /F /R /C /K ;
iferror ErrorHandler ;

/*    Clear out the Essbase log file    */
alter system clear logfile ;

/*    Define error label    */
define label 'ErrorHandler' ;

/*    Exit the script.    */
logout ;
exit ;

Calling the code

Remember what I wrote about the parameterization of code?  

This:  c:\automation\dailybackup.cmd 2 c: launches the main backup, tells the code to write everything to the 2 folder (that would be Monday's, as it is the second day of the week), and defines the backup to occur on the C drive.  I coded this backup to run off the same drive as the base Essbase drive.  When I originally wrote this code, everything ran off the g: drive.  As the drive is a parameter, it simply works when you change the drive value.

Automating from the scheduler

The key to doing the rolling backup without driving yourself bonkers is to use the day number parameter and a scheduler.  I did this on Windows, so I just used the freebie one, Task Scheduler.


I simply created seven jobs, one for each day of the week.  The launch for all of the DailyBackup.cmd scripts is the same except for the number that corresponds to the day.

Solomon Grundy Born On Monday

Here's Monday's (remember, day two of the week) scheduled launch job:
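
If you would rather script the job than click through the Task Scheduler GUI, schtasks can create it.  This is only a sketch:  the task name, run-as account, and 2:00 AM start time are my assumptions, and you would create seven of these, changing the day and the folder number each time.

REM    A sketch only:  create Monday's job (folder generation 2, run from the C: drive)
schtasks /Create /TN "Essbase Daily Backup - Monday" /TR "c:\automation\DailyBackup.cmd 2 c:" /SC WEEKLY /D MON /ST 02:00 /RU SYSTEM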


Again, the key to the lightweight approach is to let the parameters and the days of the week be the rolling part of the backup.  So fast, so easy, so simple – I rarely come up with a semi-clever idea but this one is moderately clever.  I will tell you that it made my week when I figured it out.

74 lines of code (less comments) to back up two applications and four BSO databases

If it was just one BSO database it would be 70 lines of code.  If the requirement was for four applications with three BSO databases (or three Plan Types each) it would be 81.  All of the code changes would be in DailyBackup.cmd to accommodate extra applications and databases.  How easy does it get?

You can download the code and a sample folder hierarchy here.  Enjoy and please, no more garbage backup code.

10 comments:

  1. Please don't tell me you started writing this great article yesterday evening...

    FYI, we use to defrag all our database daily running the maxl command (v11) : alter database "$1"."$2" force restructure;

    We also set up the caches automatically using the following perl script I scripted : http://luvmybase.blogspot.com/2010/04/nice-perl-script-that-i-built-in-order.html#more

  2. Sebastien,

    >>Please don't tell me you started writing this great article yesterday evening...

    ^^^The article took several days and was much longer and more involved than I anticipated. I wish I could write something like that in a day.

    Re the restructure -- in my benchmark tests (which weren't all that sophisticated) it was a dead heat between restructure and the export, clear, and load (and sometimes calc) process. As restructure is so much simpler, I went with that. There may be other Essbase databases that don't show this equivalent performance -- I am kind of curious to see.

    As for your Perl script -- that's pretty interesting -- you are dynamically sizing the caches based on database size. Do you really see a big jump in performance?

    Regards,

    Cameron Lackpour

  3. Hi,

    This a really cool idea. Unfortunately I am having some problems getting this part to work.

    The part of the code:
    export database $4.$5 all data to server data_file "$6\\\\Automation\\\\DataBackup\\\\$7\\\\$4\\\\$5\\\\$4_$5_1.txt", "$6\\\\Automation\\\\DataBackup\\\\$7\\\\$4\\\\$5\\\\$4_$5_2.txt", "$6\\\\Automation\\\\DataBackup\\\\$7\\\\$4\\\\$5\\\\$4_$5_3.txt", "$6\\\\Automation\\\\DataBackup\\\\$7\\\\$4\\\\$5\\\\$4_$5_4.txt" ;
    iferror ErrorHandler ;

    Is creating some issues in that the \ character isn't opening the path correctly.

    Here is the error:
    essmsh error: Unable to open log file c:\\\\Automation\\\\DataBackup\\\\1\\\\SPCLPROJ\\\\OPERBUD\\\\SPCLPROJ_OPERBUDBackupAndRestructure.log

    I tried changing the number of slashes to one, two, and three slashes.
    If I put one slash I end up with c:automation
    With two I get
    c:\\automation

    I'm not really sure what else I need to do to troubleshoot it.

    Any ideas?

    Thanks!

  4. Duncan,

    Are you saying that *only* the export command fails?

    Or that the
    spool on to "$6\\\\Automation\\\\DataBackup\\\\$7\\\\$4\\\\$5\\\\$4_$5BackupAndRestructure.log" ;
    command is failing?

    Are you sure that the spool command is one line?

    The four backslashes (or even two) should be just fine in Windows.

    Ping me via LinkedIn and we can do a G2M session.

    Regards,

    Cameron Lackpour

    P.S. Looking at your error, I'll bet that the "\\\\$4_$5BackupAndRestructure.log" ;" that ends the spool command is on a separate line -- it should all be one continuous code line. Even Notepad can show you if there's a carriage return in that command.

  5. Hi Cameron,

    I just got it working thanks!

    You were right it was on multiple lines. For this part:
    "$6\\Automation\\DataBackup\\$7\\$4\\$5\\$4_$5_1.txt"
    I was thinking that you could split the lines.

    I also changed the backslashes so that there were two. The combination of these things helped and now the export completes all the way.

    P.S. I'm very grateful for the help. It's amazing to me how generous people are to help each other with hyperion on the different forums.

  6. So it's only two years later - some of us are just a little slower than others!

    Firstly, thanks for the post, always useful to see how others do it.

    Secondly - for the planning side - I actually ended up (on top of the essbase backups) using the LCM utility (Oracle\Middleware\user_projects\FOUNDATION1\bin\Utility.bat) to create LCM copies of the apps every night.

    You've got to do some 'cuteness' (the LCM Export xml loses it's username and password once run, so you've got to keep a central version and copy it every time) but you can export directly to a folder and zip it up.

    5 pretty reasonably sized apps, all reports and everything in shared services zips up to about 500mb a night.

    That, in combination with a powershell script to search all the logs for errors and some French philosophy (http://camerons-blog-for-essbase-hackers.blogspot.com.au/2011/06/no-exit-but-not-in-maxl_9756.html) gives us a completely set-and-forget overnight process, with the ability to roll back any or every component within minutes when something fails.

    Anyways - just wanted to pass my thanks!

  7. Hi,

    We perform the weekly offline backup activity every week and we have followed the same procedure as described in the blog.
    Every week it took only 1 hour to back up the artifacts but last weekend the same back up batch has taken 5 hours to finish the backup.
    Can Anyone please help me out with this issue.

    Regards
    Kumar

  8. Kumar,

    Without looking at your system, I can only guess at the causes of poor backup performance.

    Did this only happen once? Every night? One night out of the week or month?

    One thought is that if you are using a SAN, or have a tape backup occurring when you perform your Essbase backup, you could be seeing heavy disk contention.

    And if Essbase is on a VM, there are all sorts of possibilities, none of them pleasant.

    Regards,

    Cameron Lackpour

  9. Hi,
    you can Switch the lines from restructure and then copying the database, so you have a smaller, cleaner backup
