Oracle EPM

Technology information pertaining to Oracle’s EPM platform

Hyperion Profitability (HPCM) 11.1.2.4 – “Transaction rolled back because transaction was set to RollbackOnly”

While working with a client running HPCM 11.1.2.4.110, we were encountering intermittent errors while executing rules:

Problem Overview

“javax.persistence.RollbackException: Transaction rolled back because transaction was set to RollbackOnly”

[Screenshot: the error message as displayed in the HPCM user interface]

As the exact same rule could be re-run without any errors, the issue appeared to be application/environment related (and, given the content of the error, likely database related).

Reviewing the Profitability log provides a much clearer view of the issue.

NOTE: Log would typically be found in a location similar to:  \Oracle\Middleware\user_projects\domains\EPMSystem\servers\Profitability\logs\hpcm.log

[Screenshot: hpcm.log snippet with the SQL Server error highlighted]

From the log file snippet above, note the highlighted section:

“[SQL Server] Arithmetic overflow error converting expression to data type smallint.”


What is the Problem?

While this does not match the error message we see in the user interface, this is definitely related as:

  1. The error message was logged at the same time as the error in HPCM
  2. The user name logged corresponded to the user receiving the error
  3. SQL Server rolling back a transaction after a failed insert makes a lot of sense.

(very) Loosely speaking, database transactions exist to protect the integrity of a database.  If a program, or user, were to execute a series of statements against a database and one or more fail, what should happen?  Should we leave the database in an inconsistent state or should we put the database back and alert the user?  While application developers could build this logic into their program code, it is a lot more convenient to give the database a series of steps and let it handle that for us!

In this case, the INSERT statement is part of a transaction.  Since the INSERT failed, SQL Server has rolled back the entire transaction and reported that to HPCM.
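Loosely, the behavior looks like this in plain T-SQL (a minimal sketch using a hypothetical table, not HPCM’s actual code):

CREATE TABLE dbo.SampleLog (Id smallint);

BEGIN TRY
    BEGIN TRANSACTION;
    INSERT INTO dbo.SampleLog (Id) VALUES (1);       -- succeeds
    INSERT INTO dbo.SampleLog (Id) VALUES (43014);   -- arithmetic overflow: exceeds smallint
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;                        -- the first insert is undone as well
    PRINT ERROR_MESSAGE();
END CATCH;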


Why are we encountering this problem?

While that explains what happened, why did it happen?  The error in the log file offers four key clues:

  1. We are attempting to add data to a database table  (INSERT INTO)
  2. The table is: HPM_STAT_DETAIL
  3. ARITHMETIC OVERFLOW occurred when trying to store a value in a column
  4. The target column has a Data Type of smallint

In SQL Server, the smallint datatype has a maximum value of 32,767.  Another look at the error message reveals one numeric value, 43,014, which exceeds that limit.  This value is being stored in a column called JAVA_THREAD.  Since JAVA_THREAD stores the process id, which is semi-randomly generated, the program works as expected whenever the id happens to be less than 32,768.  When the id is greater than 32,767, things don’t go as well…
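You can reproduce the error outside of HPCM with a couple of lines of T-SQL (an illustrative snippet, not anything HPCM itself runs):

DECLARE @JavaThread int;
SET @JavaThread = 43014;                 -- the process id reported in the log
SELECT CAST(@JavaThread AS smallint);    -- Msg 8115: Arithmetic overflow error converting expression to data type smallint.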

Reviewing the table structure for this table confirms the suspicion.
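If you would rather query than click, something along these lines (assuming a SQL Server repository) shows the column definition:

SELECT COLUMN_NAME, DATA_TYPE
  FROM INFORMATION_SCHEMA.COLUMNS
 WHERE TABLE_NAME = 'HPM_STAT_DETAIL'
   AND COLUMN_NAME = 'JAVA_THREAD';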

[Screenshot: HPM_STAT_DETAIL column definitions]


How to fix this

The easiest fix for this issue is to change the data type of the JAVA_THREAD column from smallint to int.  As the largest int value is well over 2 billion, this issue should not occur again.
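A sketch of the change is below; verify the schema, exact table name, and the column’s existing NULL/NOT NULL setting in your environment before running it:

ALTER TABLE HPM_STAT_DETAIL
    ALTER COLUMN JAVA_THREAD int NULL;   -- adjust NULL/NOT NULL to match the existing column definition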

LEGALESE – While I have reviewed this change with Oracle and am very confident this will not cause any issues, proceed at your own risk.   🙂

[Screenshot: updated table definition]

NOTE(s):

  • As of 6/26, Oracle has confirmed this as a BUG.  No ETA on an update yet, though.
  • This may be SQL Server specific; I have not evaluated an Oracle schema to confirm the data type used.  [The closest Oracle equivalent of smallint would be NUMBER(5).]

Kscope2015 – Smart View

This past year at Kscope15, I presented a session all about the technical side of Smart View.  While I intended to split time equally between infrastructure and API/programming topics, I ended up focusing a bit more on the API/programming side.  There are so many ways to improve your Smart View documents by understanding some basic Visual Basic for Applications (VBA) code and leveraging the Smart View API, I simply couldn’t resist!

For those more interested in the infrastructure side of Smart View deployments, do not fear!  While the session itself didn’t spend as much time on it, the PowerPoint deck includes a fair number of slides on how to automate Smart View deployment, automatically set user defaults, and deploy preconfigured Private and Shared Connections.

The session, and the slide deck below, provide oodles of information on the following topics:

  • Improving Robustness of Smart View Documents
    • Excel Add-In Failure Detection  (e.g. Disabled Add-In / Missing Smart View)
    • Proactive Connection Monitoring
  • Deployment Simplification / Initial Configuration
    • Automated Installation Guidance
    • Automated Default Preferences Push
    • Automated Shared / Private Connection Push
  • Essbase Add-In / Smart View Conversions
  • VBA Important Tips / Tricks
  • Smart View API Important Tips / Tricks

As with all of my presentations, you will find a plethora of working examples such as:

  • Excel Performance Improvements ( Screen Updating / Enable Events / Calculation Mode )
  • Invalid Cell Data Identification ( Catch Non-Numeric data before it wrecks your formulas! )
  • Add-In Presence & Status Detection
  • Broken Link Detection & Correction
  • Planning Cell Note Editor
  • Working with Excel & VBA (Workbooks / Worksheets / Ranges / Events )
  • Working with Smart View API ( Refreshing Data / Creating, Establishing, Disconnecting, Deleting Connections )

Download the presentation here!

Oracle DRM 11.1.2.3.xxx performance problems?

Have you recently upgraded DRM to the 11.1.2.3 platform and begun experiencing performance problems? If so, you’re not alone. Scanning discussion boards and internet search sites leads one to the conclusion that many people have taken a performance hit since upgrading to this version. While in a status meeting at my current client, I heard grumblings of slow performance and volunteered to take a look at the issue. After investigating, I have found one significant issue with a relatively easy fix!

As DRM is a Microsoft .NET based application, it is not truly compiled until runtime. When this “Just In Time” compilation occurs, the Framework looks at certain configuration files (e.g., machine.config, web.config, app.config) used during the compilation process. These files contain settings covering key configuration topics such as authentication method, logging options, “global” variables, database connection strings, and compilation options.
One of the compilation options, debug, has a significant impact on the operation of the compiled application.

While looking at the configuration files for DRM, I quickly spotted a common mistake made by development teams releasing .NET code: debug=”true”. While debug=true is great for development teams, it is not great for production software. I won’t bore you with all of the details as to why, but the following MSDN blog post covers a fair amount of the differences between an ASP.NET application running in debug mode and one running in release mode: Pros and Cons of using Debug=True.

To resolve this issue, search for any app.config, machine.config, or web.config files contained in your DRM product folder.

For instance, you should be able to locate web.config files in the following folders:

  • %EPM System Home%\products\DataRelationshipManagement\client\web-client
  • %EPM System Home%\products\DataRelationshipManagement\client\migration-client

When you open these files, search for the compilation property. If you see debug=”true”, change this to debug=”false”. For the ASP.NET web applications, these changes will take effect immediately since the Framework will notice that the web.config has changed and will restart the web application.
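For reference, the corrected compilation element in web.config typically looks like the snippet below; any other attributes already present on the element should be left as they are:

<configuration>
  <system.web>
    <!-- debug="false" enables optimized, release-mode compilation -->
    <compilation debug="false" />
  </system.web>
</configuration>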

[Screenshot: web.config compilation element with debug=”true”]

————————-
Updates:
– Oracle has confirmed this is an issue and will address it
– We have seen this issue with the following 11.1.2.3 releases: .500 and .304

Kscope13 Recap

As with previous years, this year’s KScope conference did not disappoint!  The location, people, and content were all great.

While the weather in New Orleans was hot, the close proximity to many local attractions was cool and made it easy to explore the town between sessions.  The Sheraton was but steps to local dining, shopping, the historic French Quarter, and the Mississippi.  If you wanted to travel farther out, you simply had to walk out the front door and hop on a trolley car.

The record-breaking crowd of people from around the world who participated in the conference was great!  The organizers, venue staff, ODTUG Board Members, presenters, vendors, and attendees were all welcoming, helpful, and focused on making the conference successful.  The quality people that attend this conference make the conference special.

Overall, the content was exceptional.  Attendees had their pick of high quality content in all of the regular areas and even a few new areas such as .NET programming!  Oracle was on-hand to discuss the latest and greatest releases, consultants shared some of their secrets and best practices, vendors demonstrated their products, and employees provided real world case studies.

A few of my personal favorite sessions were:

* – Biased as this was MY session!

Speaking of sessions, I once again fielded a session focusing on API programming in EPM.  Unlike the 2012 session which took a shotgun blast approach to all sorts of EPM APIs, this year’s session shined a laser on the HFM COM API.  The goal of the session was to walk the attendees through an entire application build from scratch.

We covered installing the necessary HFM files to your computer, installing a development environment (Visual Studio), creating your first project, adding the necessary DLL file references, implementing key API calls, and compiling/executing your first program.  To further help the attendees get started, I provided a complete sample project that users could use as a starting point.

In case you missed my session(s) or you want a copy of the presentation deck, you can find them here:

2013 Session : https://www.box.com/s/ttapg8dcwebw7z9p0k0x

2012 Session : https://www.box.com/s/a2ea627ae8c6f38c674c

NOTE:  Links to source code are included in the files at the end of the slide decks.

Hope to see you at Kscope14 in Seattle!

Charles

[Photos: Mardi Gras World, NOLA trolley car, and “What would happen if …..”]

HFM Subcube Architecture Documents (Pre-11.1.2.2)

Recently I saw a colleague asking for some information about the HFM table layout and how they could interface with the data in the system.  While I recalled there were a few documents on the subject, I could not locate them via Google.  While I’ve provided them at the bottom of this blog post, I thought I’d give a 10 second, high level explanation of the primary tables that hold the actual HFM data.  (Pre 11.1.2.2)

Generally speaking, when you configure HFM, you provide a database for use by the system.  This database will hold the data for all of the applications created in the environment.  To keep the data separated, each application’s data tables are prefixed with the application name.  (e.g., SampleApp_CSE_11_2011)

Oracle is kind enough to publish a data model breakdown for most of the tables which can be found here:
http://www.oracle.com/technetwork/middleware/bi-foundation/resource-library-090986.html

Interestingly, they do not cover all of the tables, as they appear to skip over the data tables.  When it comes to the data, there are three primary sets of tables that hold the financial information: the DCE, DCN, and DCT tables.  The DCE tables store <Entity Currency> and <Parent Currency> values and the related adjustments.  The DCN tables store data for the remaining members of the Value dimension.  The DCT tables store the journal transactions – when the journals are posted, the data values flow to the DCE and/or DCN tables.

When viewing your database, you will see a series of tables that follow a relatively obvious naming convention:
<App_Name>_DCE_<PeriodNum>_<Year>

Elevator Trivia Note: The fact that the Period and Year are part of the naming convention is a key reason why you specify an ending year (and number of periods) during the app build process!
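If you want to see these tables for yourself, a quick query along these lines (SQL Server, with a hypothetical application named SampleApp) lists them:

SELECT name
  FROM sys.tables
 WHERE name LIKE 'SampleApp[_]DC[ENT][_]%'   -- matches the _DCE_, _DCN_, and _DCT_ tables
 ORDER BY name;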

Looking at the structure of these tables, you’ll see the following fields used consistently:

lEntity
lParent
lValue
lAccount
lICP
lCustom1
lCustom2
lCustom3
lCustom4

With the exception of lValue, these are pretty straightforward and act as foreign keys back to the respective tables such as <AppName>_Entity_Item.

lValue – This one is slightly different, as it takes a little work to determine what it represents.  For each currency in your system, you will have the base currency (“Entity Currency”), adjustments, and a total.  This field is a combination of that member type AND the currency.  The following formulas will help you translate:

<AppName>_CURRENCIES.ItemID * 3 + 15 = “Currency Total”
<AppName>_CURRENCIES.ItemID * 3 + 16 = “Currency Adj”
<AppName>_CURRENCIES.ItemID * 3 + 17 = “Currency”
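Putting the formulas to work, a sketch like the following (SQL Server, hypothetical application named SampleApp) decodes lValue back into the currency ItemID and value member:

SELECT d.lValue,
       (d.lValue - 15) / 3 AS CurrencyItemID,   -- joins back to SampleApp_CURRENCIES.ItemID
       CASE d.lValue % 3
            WHEN 0 THEN 'Currency Total'        -- ItemID * 3 + 15
            WHEN 1 THEN 'Currency Adj'          -- ItemID * 3 + 16
            WHEN 2 THEN 'Currency'              -- ItemID * 3 + 17
       END AS ValueMember
  FROM SampleApp_DCE_11_2011 AS d;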

In addition to those fields, the tables all contain one or more timestamp fields.  A minor conversion is needed to make them human readable.  For instance, in SQL Server you would return one as:

cast(TimeStamp2 as smalldatetime)

After these columns, each of the three table sets has a slightly different layout; however, one additional pattern you should see is a set of incrementing columns that correlate to the number of periods in your application.  One column will hold your actual data while its companion will hold a timestamp or some sort of indicator about the data (e.g., transaction type).

Armed with this knowledge, one thing you can do is search all of your data tables to find orphaned information.  While Oracle gives you a way to clear it out, it doesn’t give you a good way to see what you are clearing.  With a basic query, we can not only find out if we have orphaned records, but what the data is and when it was created/last used/updated!

The following Microsoft SQL Query will dynamically go through ALL tables that reference lEntity and then determine undefined values.
NOTE: You should theoretically search for Account, ICP, and Custom orphans as well, but this is a start!

_________________________________
-- Get a list of tables that refer to the Entity ID and then search them all looking for undefined values
-- NOTE: You would probably do the same for lValue, lAccount, lICP, lCustom1 - lCustom4
-- NOTE: lParent would correspond to the entity table, like lEntity
-- NOTE: lSrc and lDes should also be checked against the actual table, i.e. lSrcAccount checked against Accounts

-- Initialize some working memory
DECLARE @tblSearchTable varchar(50)
DECLARE @tblSrcTable varchar(50)
DECLARE @tblSearchField varchar(50)
DECLARE @sql nvarchar(4000)

-- Setup defaults
SET @tblSearchField = 'lEntity'
SET @tblSrcTable = '<APPNAME>_entity_item'

-- Setup cursor over every table that contains the search field
DECLARE tmp_cursor CURSOR
FOR SELECT DISTINCT name FROM sys.tables
    WHERE object_id IN (SELECT object_id FROM sys.columns WHERE name = @tblSearchField)
    ORDER BY name ASC

OPEN tmp_cursor

-- Retrieve first table name to process
FETCH NEXT FROM tmp_cursor INTO @tblSearchTable

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Return any rows whose search field value does not exist in the source (item) table
    SET @sql = 'select ''' + @tblSearchTable + ''' as SrcTable, * from [' + @tblSearchTable + '] where ' + @tblSearchField + ' not in (select itemid from [' + @tblSrcTable + '])'
    EXEC sp_executesql @sql

    -- Retrieve next table name to process
    FETCH NEXT FROM tmp_cursor INTO @tblSearchTable
END

CLOSE tmp_cursor
DEALLOCATE tmp_cursor

__________________________________________________

Additional Reading

Hyperion Subcube Architecture Document

HFM Tuning Guide

Enumerating Financial Reports via Workspace database

Disclaimer: This post talks about direct database access.  Do not attempt this unless you are sure you know what you are doing, and I’m not liable!  I’m just passing along information I’ve learned in case it helps you out…
Recently, we decided that we wanted to review all of our reports to determine which ones we could drop and which ones we should keep. Over the past 7 years, things have accumulated and we literally have thousands of reports (gasp!) between all of the backups, work in progress, and flat out duplication. In order to handle the review, we wanted to assign people to review specific sections of the reports. I was asked to provide a nice Excel ‘report’ that would break down the report / folder hierarchy so that we could assign it.
That is when I realized that I really couldn’t locate a tool / report that would generate such a report listing!
In the old 7.2.5 version of Reports, it was relatively easy to get this information as all of the report information (including actual report content) was stored in a single database table.
With System 9 and Workspace, this is no longer true. With Workspace, the files are stored on the server’s file system in a manner consistent with the hierarchy of your reports/folders. Initially I thought I would be able to use this file system to build my report using simple DOS commands (e.g., dir /s > reportlist.txt). Unfortunately, the file names of the reports and folders are not ‘human readable’. (In order to decipher them, you’d have to go into the Workspace database and cross-reference these names against a table, etc.)  Fortunately, however, you can build a report listing via SQL. The script below recursively builds a report with the following output:

(NOTE: Actually the script includes more columns of output; however, for display purposes here, I’m only showing the name and description)



Report Name     Description
AnnualReports NULL
===Actg-YContractual Obligations – Multiple Entities Contractual Obligations
===Annual Validation Summary Annual Validation Summary
===Customers and End User Markets NULL
====== 001 – End Use Markets Book End Use Markets Book
====== 002 – Top Customers Book Top Customers Book
====== 010 – Customers Book Reports – Corp NULL
========= Revenue by Top Customers – All Sectors – Pr Yr Comp Customer Revenues
========= Revenue by Top Customers – All Sectors – Pr Yr Comp -TEST Customer Revenues
====== 020 – Markets Book Reports – Corp NULL
========= Revenue by End User Markets – All Sectors End Use Markets
====== 030 – All Sectors – End Use Markets – Pr Yr Comp NULL
========= End Use Markets – Pr Yr Comp Customer Revenues

 


SQL 2005+ Query
USE BIPLus_prod;
GO

WITH Reports (PARENT_FOLDER_UUID, CONTAINER_UUID, NAME, DESCRIPTION, Level, Sort, CREATION_DATE, LAST_MODIFIED_DATE)
AS
(
    -- Anchor member definition (the starting folder)
    SELECT d.PARENT_FOLDER_UUID, d.CONTAINER_UUID, d.NAME, d.DESCRIPTION, 0 AS Level,
           cast(NAME as nvarchar(1024)) AS Sort, d.CREATION_DATE, d.LAST_MODIFIED_DATE
    FROM V8_CONTAINER AS d
    WHERE d.PARENT_FOLDER_UUID = '0000011b42b29e00-0000-0b4d-c0a8e22a'

    UNION ALL

    -- Recursive member definition (children of the folders already found)
    SELECT d1.PARENT_FOLDER_UUID, d1.CONTAINER_UUID, d1.NAME, d1.DESCRIPTION, Level + 1,
           cast(Sort + '|' + d1.NAME as nvarchar(1024)), d1.CREATION_DATE, d1.LAST_MODIFIED_DATE
    FROM V8_CONTAINER AS d1
    INNER JOIN Reports AS r
        ON d1.PARENT_FOLDER_UUID = r.CONTAINER_UUID
)
-- Statement that executes the CTE
SELECT replicate(' ', Level) + NAME, DESCRIPTION, CREATION_DATE, LAST_MODIFIED_DATE, Level, PARENT_FOLDER_UUID, CONTAINER_UUID
FROM Reports
ORDER BY Sort;
GO


How this works:
The table V8_CONTAINER holds the main part of the report / folder entry data. In this table you will find names, creation dates, modified dates, owner name, UUID, and parent UUID information. In order to build a hierarchy, you will need to pay careful attention to the UUID fields. CONTAINER_UUID is the internal name of the object in Workspace (i.e., the non-user-friendly name), and PARENT_FOLDER_UUID is a reference back to the item’s parent.
With this in mind, if you know the UUID of your starting point, you can simply recurse from there to get a listing of all folders and the reports beneath them. In the script above, I start off looking for the entry that is the starting point of the reports we wanted to audit. For your purposes, you will need to change this UUID to your own starting point.
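If you don’t know the UUID of your starting folder, you can look it up by its display name first (shown here with a hypothetical folder called ‘AnnualReports’):

SELECT CONTAINER_UUID, NAME, PARENT_FOLDER_UUID
  FROM V8_CONTAINER
 WHERE NAME = 'AnnualReports';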

NOTE: If you want to get everything in the table, start from REPORTMART.  REPORTMART is the parent of everything in the table.
I hope my walkthrough wasn’t terribly boring and that it was helpful. I apologize for only providing a script that works in SQL Server; however, that is what we use at our site, and there should be enough explanation here for someone relatively proficient with Oracle to write a similar script if they desire.

Synchronizing FDM (9.3.1) Applications w/ T-SQL

At my current location we have one main and two supporting FDM (9.3.1) applications. The main app is used to load production G/L data, the supporting apps are used to load our Budget and Forecast data, and all are kept separate to ensure that the impact of a mistake in a multi-load template is minimized.
The problem we have with this setup is that we have to do 3x the maintenance work when it comes to users and locations.

While the users and locations are exactly the same in each application, there are some differences:

a.) Data maps in Budget / Forecast are simply * to * since we use multi-load templates that contain the actual HFM account/entity names, whereas the production maps account for each company’s GL.
b.) Categories are different, as we limit each FDM app to only the exact Categories (Scenarios) required.
c.) Periods are different, as we limit each FDM app to only the periods required.

The solution for me was to implement a direct SQL copy routine for the information I need to move. This helps as I can: a.) automate the task, b.) copy only what I want, and c.) perform the task relatively quickly.

While the Workbench offers the ability to import / export components, I ran into trouble with it not correctly importing all of the data, and it also restricted my flexibility. Copying the entire database was not viable either, as there were pieces I did not want to update….
The script below illustrates how to perform a copy via SQL. Please note that if you want to use this (or base a script off of it), there is absolutely no warranty and you use it at your own risk. I’m fairly comfortable that this works fine (as I’ve tested it in our Dev environment), but you should test it on your own as well.

Also note, that this script does not copy ALL tables as I didn’t need them all for my purpose.

The script performs the following:

  • Clears data from the target database: Data Archives, Data Maps, Data, Logs, Import Groups/Items, Validation Groups/Entities, Users / Partition Security, Partition Hierarchies / Links, and Partitions (locations)
  • Copies data from the source database: Import Groups / Items, Validation Groups / Items, Partitions (locations), Partition Hierarchy / Links, and Users / Partition Security
  • Updates the Parent Location setting on all data load locations.  (This is because I want all of my data load locations in the Budget / Forecast apps to use a * to * map.  I added a new location in the Prod database which is defined with the * to * mappings, and when I copy everything over, I want the locations to use this map instead of their production G/L map.)

-=-=-==-=-=-===-=-=-=-=-=-=-=-===-=-=-=-=-=-=-=-=-=-

-- FDM Database 'Auto Copy'
-- Author : Charles Beyer
-- Date : 7/1/2010
-- Description : This script is used to automatically / manually sync up database between a SOURCE FDM application and a 
--                DESTINATION FDM application.
-- NOTES
-- ##TARGETDB## - Replace this with the name of the Database that you want to SYNC 
-- ##SOURCEDB## - Replace this with the name of the Database that is the data source

-- disable referential integrity
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'

--Clear User Security
TRUNCATE TABLE ##TARGETDB##.dbo.tSecUserPartition
TRUNCATE TABLE ##TARGETDB##.dbo.tStructPartitionLinks
delete from ##TARGETDB##.dbo.tStructPartitionHierarchy
TRUNCATE TABLE ##TARGETDB##.dbo.tDataArchive
TRUNCATE TABLE ##TARGETDB##.dbo.tDataCheck

--Attempt to clear out the tDataMapSeg tables
EXEC sp_MSForEachTable '
  DECLARE @TableName VarChar(100)
  Set @TableName = PARSENAME(''?'',1)
  IF  left(@TableName,8) = ''tDataMap''
      TRUNCATE TABLE ##TARGETDB##.? 
'  
GO

--Attempt to clear out the tDataSeg tables
EXEC sp_MSForEachTable '
  DECLARE @TableName VarChar(100)
  Set @TableName = PARSENAME(''?'',1)
  IF  left(@TableName,8) = ''tDataSeg''
      TRUNCATE TABLE ##TARGETDB##.? 
'  
GO

TRUNCATE TABLE ##TARGETDB##.dbo.tLogActivity
TRUNCATE TABLE ##TARGETDB##.dbo.tLogProcess
TRUNCATE TABLE ##TARGETDB##.dbo.tDataArchive
delete from ##TARGETDB##.dbo.tSecUser

delete from ##TARGETDB##.dbo.tPOVPartition

TRUNCATE TABLE ##TARGETDB##.dbo.tBhvValEntItem
delete from ##TARGETDB##.dbo.tBhvValEntGroup

TRUNCATE TABLE ##TARGETDB##.dbo.tBhvImpItemFile
delete from ##TARGETDB##.dbo.tBhvImpGroup

-- RECOPY Data from Prod DB to Budget DB
insert into ##TARGETDB##.dbo.tBhvImpGroup
   select * from ##SOURCEDB##.dbo.tBhvImpGroup

insert into ##TARGETDB##.dbo.tBhvValEntGroup
   select * from ##SOURCEDB##.dbo.tBhvValEntGroup

insert into ##TARGETDB##.dbo.tBhvValEntItem
   select * from ##SOURCEDB##.dbo.tBhvValEntItem

insert into ##TARGETDB##.dbo.tBhvImpItemFile
   select * from ##SOURCEDB##.dbo.tBhvImpItemFile

insert into ##TARGETDB##.dbo.tPOVPartition
   select * from ##SOURCEDB##.dbo.tPOVPartition

insert into ##TARGETDB##.dbo.tStructPartitionHierarchy
   select * from ##SOURCEDB##.dbo.tStructPartitionHierarchy

insert into ##TARGETDB##.dbo.tStructPartitionLinks
   select * from ##SOURCEDB##.dbo.tStructPartitionLinks

insert into ##TARGETDB##.dbo.tSecUser
   select * from ##SOURCEDB##.dbo.tSecUser

insert into ##TARGETDB##.dbo.tSecUserPartition
   select * from ##SOURCEDB##.dbo.tSecUserPartition

--Attempt to import data
SET IDENTITY_INSERT ##TARGETDB##.dbo.tDataMap ON
insert into ##TARGETDB##.dbo.tDataMap (PartitionKey, DimName, SrcKey, SrcDesc, TargKey, WhereClauseType, WhereClauseValue, 
      ChangeSign, Sequence, DataKey, VBScript)
   select PartitionKey, DimName, SrcKey, SrcDesc, TargKey, WhereClauseType, WhereClauseValue, ChangeSign, Sequence, DataKey, 
       VBScript from ##SOURCEDB##.dbo.tDataMap
SET IDENTITY_INSERT ##TARGETDB##.dbo.tDataMap OFF

-- Update parent locations ...
update ##TARGETDB##.dbo.tPOVPartition
  set PartParent = 'BudgetTemplateLoc'
 where 
   PartName <> 'BudgetTemplateLoc' and PartControlsType = 1

-- enable referential integrity again
EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
GO