HFM – Removing Years

One of the most important steps during the creation of your application is determining how many years should exist in it.  Specifying too many years can waste space and hurt application performance, while too few can leave you out of room when you hit the upper limit.

HFM Application Profile Editor (earlier release shown, though the same applies to later releases)

When HFM first shipped, you had no means of updating the Start Year or the End Year of your application; therefore, a mistake during the application creation process could lead to major headaches in the future.  As customers’ HFM applications matured and those upper year limits drew close, Hyperion/Oracle rolled out the capability to add more years to an application.  (In later releases, there is a built-in function to do this.)

When it comes to removing years from the application, you can clear data from years; however, there is no way to actually remove the years themselves.  Unfortunately, Oracle has never released guidance or functionality for altering the Start Year and will tell you that it isn’t possible to change it.  I’m here to tell you that you can absolutely change this!  Before we get into how to make those updates, some quick words on why you’d want to do this and some considerations around it.

Reasons to remove years from your application:

  • Database has grown significantly due to continued use of the same application over many years
  • Metadata is cluttered due to the need to ensure historical data remains consistent
  • Prior years are no longer used for reporting and are not necessary
  • Minimize the years displayed in the UI to only valid years
  • Reduce the number of database objects (data tables may exist for each year/period/scenario combination)

Considerations when removing years from your application:

  • Beginning Balances for Balance Sheet accounts – (e.g. if our Start Year changes to 2015, we will still need prior year ending balances)
  • HFM Rules/Calc Manager – (e.g. ensure that rules are not impacted by the Start Year change)

Process to change the Start Year:

  • “Easy Steps”
    • Make a backup.  (LCM / Database Backup / etc.)
    • Create an archive application – Consider creating an archive HFM application for the historical data.  Storing the information in a separate application gives you the ability to still view the data if needed without having to make metadata trade-offs in your day to day/primary application.  Use HFM Application Copy Utility or LCM/Application Snapshot to accomplish this.  (If you already have an archive app, you can export/import data, though Journals may be slightly annoying in this instance)
    • Update Beginning Year variables / values in the application rules / data for the new Start Year
    • Stop the HFM services
  • “Not so Easy Steps”  (explained later in more detail, don’t panic yet)
    • Connect to your HFM database via your database tool of choice  (e.g. SQL Server Management Studio, SQLPlus, etc.)
    • Query the <APP_NAME>_BINARYFILES table by ENTRYIDX for Labels of type SharedAppData
    • Update bytes 74 & 75 of the first row to reflect your new Start Year
  • Start HFM Services
  • Verify application data / functionality
  • “Post Validation Cleanup”
    • Stop HFM Services
    • Remove Year / Period specific database tables (e.g. <APP_NAME>_DCE_<PeriodNum>_<Year> )

HFM Binary Files Table

For each HFM application, you will find a Binary Files table in the format <APP_NAME>_BINARYFILES.  This table is used for multiple purposes, including storing application configuration information, application metadata, HFM rules, member lists, etc.  The LABEL column defines the type of file the data is related to.  The BINARYFILE column contains the raw information from the corresponding file (e.g. Application Profile .PER file, Rules .RUL file, metadata, etc.).  Since this column has a size limit smaller than most of these files, you will find multiple rows for each file type; the ENTRYIDX column puts the rows in the proper (ascending) order.

As the Start and End years are defined in the profile, we only want to view rows containing a LABEL of SharedAppData:

 SELECT ENTRYIDX, BINARYFILE
 FROM <APP_NAME>_BINARYFILES
 WHERE LABEL = 'SharedAppData'
 ORDER BY ENTRYIDX



SharedAppData ENTRYIDX 1 as viewed in a Hex Editor

As mentioned above, the SharedAppData rows correspond to the application profile (.PER) that defines the HFM application.  Analysis of this data indicates that both the Start and End years are stored in the first row of this information.  The Start Year is stored in bytes 74 and 75, while the End Year is stored in bytes 78 and 79.  The data is stored in least-significant-byte-first (little-endian) order: byte 74 holds the low byte of the year and byte 75 the high byte.

In the Shared App Data above, the Start Year is 2006 while the End Year is 2015.

Start Year (Hexadecimal) = D6 07 = 07 D6 (flipped) = 2006 (decimal)
End Year (Hexadecimal) = DF 07 = 07 DF (flipped) = 2015 (decimal)

To change the Start Year, convert the decimal year to hexadecimal and then reverse the byte order.  For instance, if your new Start Year is 1999, you would end up with:  1999 (dec) = 07CF (hex) = CF 07 (byte-swapped)

HINT:  You can use Windows built-in Calculator program in Programmer mode to make these conversions.
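If you’d rather script the conversion, here is a minimal Python sketch (function names are my own) that packs a year into the little-endian byte pair described above and decodes it back:

```python
import struct

def year_to_le_bytes(year):
    # Pack as an unsigned 16-bit little-endian value: low byte first,
    # matching bytes 74/75 of the SharedAppData row.
    return struct.pack('<H', year)

def le_bytes_to_year(raw):
    # Reverse operation: decode the two stored bytes back to a decimal year.
    return struct.unpack('<H', raw)[0]

print(year_to_le_bytes(2006).hex())           # d607 -> stored as D6 07
print(year_to_le_bytes(1999).hex())           # cf07 -> stored as CF 07
print(le_bytes_to_year(bytes([0xDF, 0x07])))  # 2015
```

The same round trip works for the End Year in bytes 78 and 79.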

Update SharedAppData

After you’ve calculated your new Start Year value, create an UPDATE query to alter the stored binary data.

UPDATE query to alter Start Year
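The screenshot’s query isn’t reproduced here, but a rough T-SQL sketch follows.  This assumes BINARYFILE is varbinary(max) (older schemas use the legacy image type, which would need UPDATETEXT instead), and note that .WRITE offsets are 0-based, so bytes 74 and 75 begin at offset 73:

```sql
-- Sketch only: back up the table and verify the column type in your schema first
UPDATE <APP_NAME>_BINARYFILES
SET    BINARYFILE.WRITE(0xCF07, 73, 2)   -- new Start Year 1999 = CF 07 (little-endian)
WHERE  LABEL = 'SharedAppData'
  AND  ENTRYIDX = 1
```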


Verify Results

After you have updated your settings and restarted HFM, you can verify that you now have the appropriate years:

Original Years


Updated Years

NOTE – While the screenshots are from an HFM application, this has been tested across multiple releases, though I won’t guarantee it works with every patch version, etc.  Always test in a non-production environment.

Hyperion Profitability (HPCM) – “Transaction rolled back because transaction was set to RollbackOnly”

While working with a client running HPCM, we were encountering intermittent errors while executing rules:

Problem Overview

“javax.persistence.RollbackException: Transaction rolled back because transaction was set to RollbackOnly”


As the exact same rule could be re-run without any errors, it appeared to be application/environment related.  (also appeared to be database related given the content of the error)

Reviewing the profitability log provides a much clearer view of the issue

NOTE: Log would typically be found in a location similar to:  \Oracle\Middleware\user_projects\domains\EPMSystem\servers\Profitability\logs\hpcm.log


From the log file snippet above, note the highlighted section:

“[SQL Server] Arithmetic overflow error converting expression to data type smallint.”

What is the Problem?

While this does not match the error message we see in the user interface, this is definitely related as:

  1. The error message was logged at the same time as the error in HPCM
  2. The user name logged corresponded to the user receiving the error
  3. SQL Server rolling back a transaction after a failed insert makes a lot of sense.

(very) Loosely speaking, database transactions exist to protect the integrity of a database.  If a program, or user, were to execute a series of statements against a database and one or more fail, what should happen?  Should we leave the database in an inconsistent state or should we put the database back and alert the user?  While application developers could build this logic into their program code, it is a lot more convenient to give the database a series of steps and let it handle that for us!

In this case, the INSERT statement is part of a transaction.  Since the INSERT failed, SQL Server has rolled back the entire transaction and reported that to HPCM.

Why are we encountering this problem?

While that explains what happened, why did it happen?  The error in the log file has four key clues:

  1. We are attempting to add data to a database table  (INSERT INTO)
  2. The table is: HPM_STAT_DETAIL
  3. ARITHMETIC OVERFLOW occurred when trying to store a value in a column
  4. The target column has a Data Type of smallint

In SQL Server, a smallint datatype has a maximum value of 32,767.  Another look at the error message reveals one number, 43,014, which exceeds 32,767.  This value is being stored in a column called JAVA_THREAD.  As JAVA_THREAD stores the process ID, which is semi-randomly generated, the program works as expected whenever the ID is < 32,768.  If the ID is > 32,767, things don’t go as well…
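To see why 43,014 breaks, here’s a small Python illustration (my own, not HPCM code) using a signed 16-bit pack, which has exactly the same range as SQL Server’s smallint:

```python
import struct

SMALLINT_MAX = 32767   # 2**15 - 1, the smallint upper bound

def fits_smallint(value):
    # '<h' is a signed 16-bit integer: -32768..32767, same range as smallint
    try:
        struct.pack('<h', value)
        return True
    except struct.error:
        return False

print(fits_smallint(32767))  # True  - IDs this size insert fine
print(fits_smallint(43014))  # False - arithmetic overflow, transaction rolls back
```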

Reviewing the table structure for this table confirms the suspicion.


How to fix this

The easiest fix for this issue is to change the datatype for this column from smallint to int.  As the largest int value is well over 2 Billion, this issue should not occur again.
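A sketch of that change (verify the exact nullability and any dependent indexes in your schema before running it):

```sql
-- smallint -> int; add NOT NULL explicitly if the column is currently NOT NULL
ALTER TABLE HPM_STAT_DETAIL ALTER COLUMN JAVA_THREAD int
```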

LEGALESE – While I have reviewed this change with Oracle and am very confident this will not cause any issues, proceed at your own risk.   🙂



  • As of 6/26, Oracle has confirmed this as a BUG.  No ETA on an update yet, though.
  • This may be SQL Server specific; I have not evaluated the Oracle schema to confirm the data type used.  [Oracle has no native smallint; the ANSI SMALLINT alias maps to NUMBER(38)]

2004 Pontiac Grand Prix GTP Turbocharger Installation – 401 HP Out Of 231 Cubes

Since you can never have enough horsepower, I worked with my good friend, Kevin Margitta of Cartuning, to replace the Eaton Gen IV M90 Supercharger on my 2004 Grand Prix Comp G with an intercooled turbocharger.  While Kevin had the hardware worked out, I contributed to the project by creating the software necessary to reprogram the engine’s control module.  With the intercooled turbo & software updates in place, this swap easily put down 401 HP to the wheels before I decided I had enough fun at the dyno shop for the day.

A couple of months after the swap was complete, I ran into Don Keefe and asked him if he would like a ride in the car.  While he didn’t seem very interested at first, he became very interested after the boost kicked in!  Shortly thereafter, the following awesome article was published in High Performance Pontiac:


Getting ready to make an 1/8th mile pass


Engine Bay View Post Turbo Install


Turbo Installation Closeup


Supercharger Rotor Block-off Plate


Another view of Turbo / Inlet piping


Air-to-Air Intercooler


Turbonetics Wastegate


Cartuning Turbo Kit Display Model


Stock Appearing Car





Electronics Project Room …..


I’ve been slowly repurposing a spare room in the basement to serve as my electronics project room.  Not quite done, but thought I’d post some photos showing current status.  Just need to get the electrical finalized and will be good to go!



Primary Work Area
Bench #1


Decorative / Functional Gauges
Oscilloscope & Stuff
Hakko 936 Soldering Station & EMP 20 Programmer

Kscope2015 – Smart View

This past year at Kscope15, I presented a session all about the technical side of Smart View.  While I intended to spend equal time between infrastructure and API/programming topics, I ended up focusing a bit more on the API/programming side.  There are so many ways to improve your Smart View documents by understanding some basic Visual Basic for Applications (VBA) code and leveraging the Smart View API, I simply couldn’t resist!

For those more interested in the infrastructure side of Smart View deployments, do not fear!  While the session itself didn’t spend as much time on it, the PowerPoint includes a fair number of slides which provide information on how to automate Smart View deployment, automatically set user defaults, and deploy preconfigured Private and Shared Connections.

The session, and the slide deck below, provide oodles of information on the following topics:

  • Improving Robustness of Smart View Documents
    • Excel Add-In Failure Detection  (e.g. Disabled Add-In / Missing Smart View)
    • Proactive Connection Monitoring
  • Deployment Simplification / Initial Configuration
    • Automated Installation Guidance
    • Automated Default Preferences Push
    • Automated Shared / Private Connection Push
  • Essbase Add-In / Smart View Conversions
  • VBA Important Tips / Tricks
  • Smart View API Important Tips / Tricks

As with all of my presentations, you will find a plethora of working examples such as:

  • Excel Performance Improvements ( Screen Updating / Enable Events / Calculation Mode )
  • Invalid Cell Data Identification ( Catch Non-Numeric data before it wrecks your formulas! )
  • Add-In Presence & Status Detection
  • Broken Link Detection & Correction
  • Planning Cell Note Editor
  • Working with Excel & VBA (Workbooks / Worksheets / Ranges / Events )
  • Working with Smart View API ( Refreshing Data / Creating, Establishing, Disconnecting, Deleting Connections )

Download the presentation here!

5 1/4″ Floppy Fun

Recently I was asked if I could recover data from some old 5 1/4″ floppy disks.  As I kept my old 8086 computer around “just in case”, I was more than happy to oblige!  I took the disks and began my challenge-laden journey to get the data onto a modern system.

Bringing ‘ole faithful back to life

After a quick search of my electronic scrapheap, I located the first IBM-PC compatible computer I ever owned, an Epson 8086!  Like the Millennium Falcon from Star Wars, it may not look like much, but it has it where it counts!  (e.g., it has a working 5 1/4″ disk drive!)  For those who are interested, here are the specs of this late 80’s powerhouse:

'Ole Faithful
My first PC, an Epson 8086, 4.77 MHz w/ 10 MHz “Turbo” mode!
Epson Apex 
CPU    : Intel 8086 Processor, no Math Co-processor
Speed  : 4.77 MHz w/ 10 MHz 'Turbo' Mode
Memory : 640 KB Conventional RAM, no Extended or Expanded RAM
Video  : CGA Graphics - 16 colors in text mode, 4 colors @ 320x200 graphics resolution
HDD    : 20 MB RLL Encoded Hard Drive
Floppy : 360 KB 5 1/4" Drive
Ports  : 9-Pin DIN Keyboard, 25-Pin Serial, 25-Pin Parallel

Like the Millennium Falcon, however, this computer had received a slight modification: a Microsoft Mach 20 add-on board!

Microsoft Mach 20 Board
CPU    : Intel 80286 Processor
Speed  : 8 MHz 'Turbo' ("Equivalent to 12 MHz 80286's of the day")
Memory : 512 KB RAM, expandable to 3.5 MB

The board works by replacing your 8086 processor with a special adapter that connects the Mach 20 board to the computer’s motherboard.  The additional RAM is recognized as Extended (first 384 KB) and Expanded (remainder) by including HIMEM.SYS and EMM386.SYS in the MS-DOS CONFIG.SYS file.

Why am I mentioning this board you ask?  It was the first obstacle between me and my data!

Well I guess the 20, in Mach 20, refers to its lifespan….

I hooked up the keyboard, the monitor, and power cables and hit the Hyperdrive. (Power button)  The lights in my workout room momentarily dimmed and the screen flickered with a BIOS self test, perfect!  The memory test systematically ran through the memory: 64 KB, 128 KB, 256 KB.  Just as I thought things were going well, failure!  The screen froze at 256 KB and then went dead.  I shrugged it off as a fluke and punched the button again.  Once again, the memory test began: 64 KB, 128 KB, 256 KB, 320 KB, 384 KB.  Once again, I was met with a resounding failure as it froze.  I looked over the internals of the computer and at first glance everything appeared to be in order.  As I started feeling the connections, I discovered that the Mach 20 CPU connection cable felt suspect.  On a hunch, I held the cable down and hit the switch again.  64 KB, 128 KB, 256 KB, 320 KB, 384 KB, 448 KB, 512 KB, 576 KB, 640 KB!  Success!  I quickly shut off the machine and placed one of our epoxy coated 5 LB dumbbells on the connection.  With the Mach 20 situation addressed, I was on the fast path to getting that data….. or not …….

Anyone have a 9-PIN DIN Keyboard laying around?  I didn’t think so ……

As I restarted the computer and watched the memory test complete, I was greeted with the next obstacle: BIOS test failures!  The first error was a lovely 301 Keyboard error.  As the picture at the beginning of this post should attest, the keyboard has seen better days and it was very, very unhappy.  As I don’t keep spare DIN keyboards laying around and it was ~ midnight, I didn’t have any options other than to find a way to make it work.  After disassembling the keyboard, I found significant corrosion on the connectors.  I cleaned the corrosion and reassembled what was left of the keyboard.  Fortunately, this was sufficient to allow the BIOS keyboard test to pass!  Later into the process of recovering the data, I discovered that some buttons didn’t work on the keyboard which made typing  difficult, but not impossible.  (HINT: ALT + ASCII code will put a character on the screen)

Floppy Drive Cable, Where Art Thou?

After awakening from a 20 year slumber, my computer was not happy.

After resolving the keyboard failure, I was presented with yet another BIOS test gift, 601 – Diskette Error.  As the whole point of this exercise is to read data from the floppy disk, this error was especially cringe worthy.  After staring at the screen momentarily in a mixture of disbelief and annoyance, I took another look at the cabling and discovered a likely explanation for the problem: there wasn’t a floppy drive cable!  Even though it had been over a decade since I last used this machine, I tried to come up with a reason why the cable would be gone and came up completely blank.  (still don’t know)  I searched through my bin of spare cables; however, this was a cable I didn’t have since these cables are nothing like current drive cables.  Defeated, I retreated to my laptop in search of a cable.  In an amazing stroke of luck, Fry’s Electronics stocks these cables and I picked two up in the AM.  (I have no idea why they stock this cable, but I’m sure glad they had it!)  I connected the cable, fired up the machine, and all BIOS tests passed.  The machine booted up DOS and things were finally falling into place.  I inserted the 5 1/4″ floppy, typed A: to access the drive and ……..

Abort, Retry, Fail?

Anyone who has worked in DOS knows this message and the fear it creates as you realize your data is probably corrupt and gone forever.  While I set early expectations that the 5 1/4″ disk might not be recoverable due to its age, the disk was well stored and I was really hoping to get this data.  I wasn’t ready to let it die on the operating room table! (actually my workout room floor, but who is keeping track…)  I futilely tried to access the drive repeatedly.  Each time I was met with the same cold message.  I looked at the clock and almost called the time of death on my data recovery effort, but I wanted to try one more thing.  Looking very closely at the disk, I noticed that it didn’t appear to seat entirely into the disk drive.  Even though the locking mechanism closed, the disk drive started up, and it sounded like it was working, it didn’t look quite right.  I used my left hand to push the disk farther into the drive and then tried to access the disk again.  Success!  Treating the disk as if it were a self-destructing Mission Impossible message, I quickly copied the entire contents of the floppy disk to the computer’s hard drive.  I basked in the successful data copy for a few moments and then realized that my job wasn’t close to done.

Congratulations, you’ve moved your data from an obsolete 5 1/4″ disk to an obsolete computer, now what?!?!?

While I was able to read the data off the floppy successfully, I still had to find a way to get it to the person needing it.  This antique didn’t have an Ethernet card (or software to run one) so I couldn’t email, FTP, or directly move the data across a network.  The machine didn’t have a 3 1/2″ disk drive, which would have been more useful.  I didn’t even have a modem.  Two options came to mind:

  • Transplant the hard drive into a more modern computer – While at first this may seem like a dead end, this is actually a very viable option.  Even though the hard drive is an older format which a newer computer wouldn’t recognize, I could move the controller card AND the hard disk into a newer machine with ISA slots and it would work!  (I’ve done it once before)  I grabbed the most modern of the old machines I had laying around, an HP Pentium based desktop, ripped it open and ……..    *NO* ISA slots.  Unfortunately, it was just new enough that it had PCI; therefore, I would not be able to move the hard drive and needed a Plan B.
  • Transmit the data over the Serial / Parallel ports to another computer – Thinking back to my old DOS days, I recalled a program built into the later versions of DOS that would allow you to send files between computers.  After combing through the DOS folder, I located the program:  Microsoft Interlink.

My victory was once again short lived as a few realizations set in:

  • Microsoft Interlink needs to run on both the host AND destination computers – As newer computers don’t ship with this program, I would need to get it copied over to the destination machine.  Of course, if I had a way to copy that file, I wouldn’t need the program in the first place………   After a lot of scouring, I found someone who was hosting the necessary files!
  • Microsoft Interlink needs a “LapLink” or Null modem cable – No problem, I know I have a Laplink cable here somewhere………..  I searched through my spare cable bin and my cable was nowhere to be found.  Unfortunately I gave away a large amount of older computer items just weeks before I took on this task and I suspect it was one of the items I gave away.  (After all, who would need an old LapLink cable, right?)  That’s OK, lets make our own….

Microsoft Interlnk, the gift that keeps on giving!

Using EDLIN to update the CONFIG.SYS to load INTERLNK driver

A quick scan of the Microsoft website (http://support.microsoft.com/kb/121246/en-us/), as well as the Interlnk help file, provided all the information necessary to wire up the pins.  In a huge stroke of luck, the prototyping cables I use for my circuit work perfectly fit the serial port pins making connecting the machines quick work.  The only drawback is since the cables are relatively short, it was a bit of a challenge to hook everything up.  With the cables connected to the computers, I turned my attention to configuring the software.

As I alluded to earlier, both machines need to have software running for this to work, the following steps outline what needs to happen:

  • Place INTERLNK.EXE and INTERSVR.EXE on both machines
  • Update the CONFIG.SYS file to load a device driver as follows:
    DEVICE=C:\DOS\INTERLNK.EXE /<lpt|com>:<number>  where:
    • <lpt|com> – specify LPT if connecting through a parallel port, COM if connecting through a serial port
    • <number> – the port number (i.e. if connecting to LPT2 / COM2, enter a 2 here)
  • Update the AUTOEXEC.BAT to call the INTERSVR.EXE program
  • Ensure the cable is connected between the machines
  • Restart the machines
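Put concretely, the relevant lines might look roughly like this (COM1 assumed here; adjust the DOS path to match your install, and note the driver loads on the machine that will see the remote drive while INTERSVR runs on the machine sharing its drives):

```
REM CONFIG.SYS on the client (the machine that will see the remote drive)
DEVICE=C:\DOS\INTERLNK.EXE /COM:1

REM Run on the server (the machine sharing its drives)
C:\DOS\INTERSVR.EXE
```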
Directly Connecting Computers using Prototype Circuit Board jumper cables

If everything is set up correctly, during the boot-up process you will see a message indicating that a link has been established, and a local drive letter will be assigned to the remote system.

Success at last!

After restarting both machines, Microsoft Interlnk loaded and a drive map was established between the two machines.  I copied the files over to the newer machine and then used an IDE-USB adapter to connect that computer’s hard drive to my laptop and send the files off.

Connection Established!

It wasn’t pretty, but in the end everything worked out and I got to justify keeping my old 8086 around!

Since it is up and running….

After I had successfully sent off the files, I went back to the old 8086 to see what was still on the machine.  I found a couple of old, but good, games that I used to play and some old code I wrote to add mouse support to a BASIC program I was writing.  A nice trip down memory lane!

Centurion · 688 Attack Sub · ASM to Activate Mouse Cursor · Gotta Love CGA

Oracle DRM performance problems?

Have you recently upgraded DRM to the platform and begun experiencing performance problems? If so, you’re not alone. Scanning discussion boards and internet search sites leads one to the conclusion that many people have taken a performance hit since upgrading to this version. While in a status meeting at my current client, I heard grumblings of slow performance and volunteered to take a look at the issue. After investigating, I have found one significant issue with a relatively easy fix!

As DRM is a Microsoft .NET based application, it is not truly compiled until runtime. When this “Just In Time” compilation occurs, the Framework looks at certain configuration files (e.g., machine.config, web.config, app.config) used during the compilation process. These files contain settings covering key configuration topics such as authentication method, logging options, “global” variables, database connection strings, and compilation options.
One of the compilation options, debug, has a significant impact on the operation of the compiled application.

While looking at the configuration files for DRM, I quickly spotted a common mistake made by development teams releasing .NET code: debug=”true”. While debug=true is great for development teams, it is not great for production software. While I won’t bore you with all of the details as to why this is, the following MSDN blog post covers a fair amount of the consequences of running an ASP.NET application in debug mode: Pros and Cons of using Debug=True.

To resolve this issue, do a search for any app.config, machine.config, web.config files contained in your DRM product folder.

For instance, you should be able to locate web.config files in the following folders:

  • %EPM System Home%\products\DataRelationshipManagement\client\web-client
  • %EPM System Home%\products\DataRelationshipManagement\client\migration-client

When you open these files, search for the compilation property. If you see debug=”true”, change this to debug=”false”. For the ASP.NET web applications, these changes will take effect immediately since the Framework will notice that the web.config has changed and will restart the web application.
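For reference, the element you’re looking for sits under system.web and looks roughly like this (other attributes vary by release; only the debug attribute matters here):

```xml
<system.web>
  <!-- debug="true" trades runtime performance for debuggability; use "false" in production -->
  <compilation debug="false" />
</system.web>
```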


– Oracle has confirmed this is an issue and will address it.
– We have seen this issue with the following releases: .500 and .304

Kscope13 Recap

As with previous years, this year’s KScope conference did not disappoint!  The location, people, and content were all great.

While the weather in New Orleans was hot, the close proximity to many local attractions was cool and made it easy to explore the town between sessions.  The Sheraton was but steps to local dining, shopping, the historic French Quarter, and the Mississippi.  If you wanted to travel farther out, you simply had to walk out the front door and hop on a trolley car.

The record-breaking droves of people from around the world who participated in the conference were great!  The organizers, venue staff, ODTUG Board Members, presenters, vendors, and attendees were all welcoming, helpful, and focused on making the conference successful.  The quality of the people who attend makes this conference special.

Overall, the content was exceptional.  Attendees had their pick of high quality content in all of the regular areas and even a few new areas such as .NET programming!  Oracle was on-hand to discuss the latest and greatest releases, consultants shared some of their secrets and best practices, vendors demonstrated their products, and employees provided real world case studies.

A few of my personal favorite sessions were:

* – Biased as this was MY session!

Speaking of sessions, I once again fielded a session focusing on API programming in EPM.  Unlike the 2012 session which took a shotgun blast approach to all sorts of EPM APIs, this year’s session shined a laser on the HFM COM API.  The goal of the session was to walk the attendees through an entire application build from scratch.

We covered installing the necessary HFM files to your computer, installing a development environment (Visual Studio), creating your first project, adding the necessary DLL file references, implementing key API calls, and compiling/executing your first program.  To further help the attendees get started, I provided a complete sample project that users could use as a starting point.

In case you missed my session(s) or you want a copy of the presentation deck, you can find them here:

2013 Session : https://www.box.com/s/ttapg8dcwebw7z9p0k0x

2012 Session : https://www.box.com/s/a2ea627ae8c6f38c674c

NOTE:  Links to source code are included in the files at the end of the slide decks.

Hope to see you at Kscope14 in Seattle!


Mardi Gras World
NOLA Trolley Car
What would happen if …..


HFM Subcube Architecture Documents

Recently I saw a colleague asking for some information about the HFM table layout and how they could interface with the data in the system.  While I recalled there were a few documents on the subject, I could not locate them via Google.  While I’ve provided them at the bottom of this blog post, I thought I’d give a 10 second, high level explanation of the primary tables in regards to the actual HFM data.

Generally speaking, when you configured HFM, you provided a database for use by the system.  This database will hold the data for all of the applications created in the environment.  To keep the data separated, each application’s data tables are prefixed with the application name.  (e.g. SampleApp_CSE_11_2011)

Oracle is kind enough to publish a data model breakdown for most of the tables which can be found here:

Interestingly, they do not cover all of the tables as they appear to skip over the data tables.  When it comes to the data, there are three primary table types that hold the financial information: DCE, DCN, and DCT.  The DCE tables store <Entity Currency> and <Parent Currency> values and the related adjustments.  The DCN tables store data for the remaining members of the Value dimension.  The DCT tables store the journal transactions – when journals are posted, the data values flow to the DCE and/or DCN tables.

When viewing your database, you will see a series of tables that follow a relatively obvious naming convention:

Elevator Trivia Note: The fact that the Period and Year are part of the naming convention is a key reason why you specify an ending year (and number of periods) during the app build process!

Looking at the structure of these tables, you’ll see the following fields used consistently:


With the exception of lValue, these are pretty straightforward and act as foreign keys back to the respective tables such as <AppName>_Entity_Item.

lValue – This one is slightly different, as it takes a little work to determine what it holds.  For each currency in your system, you will have the base currency (“Entity Currency”), adjustments, and a total.  This field is a combination of that type AND the currency.  The following formulas will help you translate:

<AppName>_CURRENCIES.ItemID * 3 + 15 = “Currency Total”
<AppName>_CURRENCIES.ItemID * 3 + 16 = “Currency Adj”
<AppName>_CURRENCIES.ItemID * 3 + 17 = “Currency”
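The mapping above can be wrapped in a couple of throwaway Python helpers (the function names are my own) to translate lValue in both directions:

```python
# Offsets taken from the formulas above
KIND = {15: 'Currency Total', 16: 'Currency Adj', 17: 'Currency'}

def encode_lvalue(currency_item_id, kind_offset):
    # kind_offset is 15, 16, or 17 per the table above
    return currency_item_id * 3 + kind_offset

def decode_lvalue(lvalue):
    # Returns (currency ItemID, value-member type)
    item_id, rem = divmod(lvalue - 15, 3)
    return item_id, KIND[rem + 15]

print(decode_lvalue(17))  # (0, 'Currency') for the currency with ItemID 0
```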

In addition to those fields, the tables all contain one or more Timestamp fields.  Minor conversion is needed on a Timestamp field to make it human readable.  For instance, in SQL Server you would return it as:

cast(TimeStamp2 as smalldatetime)

After these columns, each of the three table sets has a slightly different layout; however, one additional pattern you should see is incrementing column pairs that correlate to the number of periods in your application.  One column holds your actual data while the other holds a timestamp or some sort of indicator about the data (i.e. Trans Type).

Armed with this knowledge, one thing you can do is search all of your data tables to find orphaned information.  While Oracle gives you a way to clear it out, it doesn’t give you a good way to see what you are clearing.  With a basic query, we can not only find out if we have orphaned records, but what the data is and when it was created/last used/updated!

The following Microsoft SQL Query will dynamically go through ALL tables that reference lEntity and then determine undefined values.
NOTE: You should theoretically search for Account, ICP, and Custom orphans as well, but this is a start!

-- Get list of tables that refer to the Entity ID and then search them all looking for undefined values
-- NOTE: You would probably do the same for lValue, lAccount, lICP, lCustom1 - lCustom4
-- NOTE: lParent would correspond to the entity table just like lEntity
-- NOTE: lSrc and lDes fields should also be checked against the actual table, i.e. lSrcAccount checked against Accounts

-- Initialize some working memory
DECLARE @tblSearchTable varchar(50)
DECLARE @tblSrcTable varchar(50)
DECLARE @tblSearchField varchar(50)
DECLARE @sql nvarchar(4000)

-- Setup defaults
SET @tblSearchField = 'lEntity'
SET @tblSrcTable = '<APPNAME>_entity_item'

-- Setup cursor over every table containing the search field
DECLARE tmp_cursor CURSOR
FOR SELECT DISTINCT name FROM sys.tables
    WHERE object_id IN (SELECT object_id FROM sys.columns WHERE name = @tblSearchField)
    ORDER BY name ASC

OPEN tmp_cursor

-- Retrieve first table name to process
FETCH NEXT FROM tmp_cursor INTO @tblSearchTable

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Report any rows whose search field is not defined in the source table
    SET @sql = 'select ''' + @tblSearchTable + ''' as SrcTable, * from [' + @tblSearchTable + '] where ' + @tblSearchField + ' not in (select itemid from [' + @tblSrcTable + '])'
    EXEC sp_executesql @sql

    -- Retrieve next table name to process
    FETCH NEXT FROM tmp_cursor INTO @tblSearchTable
END

CLOSE tmp_cursor
DEALLOCATE tmp_cursor


Additional Reading

Hyperion Subcube Architecture Document

HFM Tuning Guide

Kscope13 is approaching!

With Kscope13 around the corner, I thought I’d share last year’s presentations:


For Kscope13, I’m presenting a follow-up to my API session, taking us from completing a .NET desktop app from scratch to consuming web services via a tablet app.  Should be interesting for those looking to expand their product’s functionality via the APIs.

To see a full breakdown of Kscope13, please go here : http://kscope13.com/

See you there!
