
Oracle EPM

Technology information pertaining to Oracle’s EPM platform

Performance Testing Oracle EPM Cloud Applications via Smart View / EPM Automate / Fiddler


Overview

Load-based application performance testing is critical to ensuring that your application performs optimally throughout its lifecycle. Having the capability to perform these tests in an automated fashion not only allows an organization to capture initial baseline performance metrics but also to analyze the impact of future application updates.

Traditional testing tools, such as LoadRunner and JMeter, excel at web application testing as they can accurately recreate a user workflow at varying load levels (e.g. simulating multiple concurrent user sessions). Unfortunately, these tools have a few drawbacks which may limit their usefulness in some scenarios: a) products such as LoadRunner are quite costly, b) significant technical skill is required to create scripts, and c) scripts may require frequent updates as the web applications and/or test cases evolve.

To mitigate some of these drawbacks while retaining the benefits of application load testing for Oracle EPM Cloud applications, Oracle has added application load testing functionality to the EPM Automate tool. This functionality allows for the recording, playback, and performance analysis of a Smart View based user session against Oracle EPM Cloud applications. As users can perform just about any activity through Smart View that they would via the web-based interface, this allows for a near real-world performance analysis. While there will be some performance difference between a Smart View sequence and its corresponding web-based sequence, the comparison is close enough to allow for meaningful results. (e.g. business rules triggered via Smart View or the web U/I will not have a performance difference, while page rendering may differ slightly due to the lack of a web browser rendering delay)

To complete a performance testing cycle, there are four key areas of focus:

  • Pre-Requisites – Ensure that your environment is set up to support the performance testing process
  • Test Case Recording – The recording is created by leveraging a utility, such as Fiddler, which is installed on the end user’s machine and is capable of recording internet traffic. While the user is working in Smart View, Fiddler captures the messages sent between Smart View and the Oracle EPM Cloud application.
  • Test Case Replay – After the recording is complete, the EPM Automate utility can be used to read the recording and “replay” the steps performed by the original user against PBCS. The results for each step are saved to a file and can be analyzed to determine pass/fail of each step, time elapsed, etc. The utility also allows you to stress test your application by running the recorded session multiple times concurrently to simulate having multiple users in the system at the same time. Depending on the test case, some tweaking may be required to make this work properly. (e.g. having 20 people run the same calc with the same POV wouldn’t make sense, etc.)
  • Analysis – While EPM Automate is performing the test case replay(s), it captures performance data. This data is stored and needs to be processed for analysis. Excel is typically used for performing the performance analysis.

Pre-Requisites

You should have the following installed on your computer / available to you:

  • Microsoft Excel with the Oracle Smart View add-in
  • Telerik Fiddler
  • Oracle EPM Automate
  • Connection details for your Oracle EPM Cloud environment

In addition to the software requirements, you should also make sure you have the following functional items:

  • Created temporary test users which will be used during the testing process (e.g. if you want to test performance with 20 concurrent users, create 20 temporary user accounts and/or make available 20 existing user accounts)
  • A test script, executable via Smart View, which simulates a “real world” use case. The test case should include all required steps that will be executed from Smart View (e.g. variables to set, forms to open, calcs, etc.)

Test Case Recording

To create the recording, you will need to ensure that you have: Smart View, Telerik Fiddler, the relevant connection details for the Oracle EPM Cloud product, and a script containing the instructions you will follow. Once you have that information, follow the steps below.

  1. Starting Fiddler
    1. Once you have all of your prerequisite items prepared, start Fiddler. Navigate through the welcome screens and any update prompts

    2. After the initial welcome screen / update check, you will end up on the primary application screen which contains multiple windows:
      1. Traffic Window – this window shows a real-time activity sequence of HTTP data flowing from your PC to server end points

      2. Request Data – This tab in the upper right pane displays “Request” information for the selected packet

      3. Response Data – This tab in the lower right pane displays the “Response” information for the selected packet

NOTE – Request Data is information sent from your machine to the web server and Response data is the associated reply from the server.

  2. Filter Fiddler Traffic – As you’ll quickly realize, your machine is constantly sending information back and forth which Fiddler will see. We do not want all of this extra information in our recording. Perform the following steps to eliminate the extraneous information:
    1. Close any unnecessary programs – Close all browser windows, Outlook, Excel, Word, etc. The only programs you’ll want to have open are an instance of Excel for Smart View, a browser window for Oracle EPM, and Fiddler.
    2. You may still see data flowing from Windows services or other background processes. If you still see packets appearing in the traffic window, right click on one of the messages, select the Filter Now submenu, and then pick the appropriate “Hide” option. As you will typically see a pattern around the Host (target server), filtering by Host is usually the best choice.

    3. Continue filtering until your traffic window is no longer filling up with unnecessary packets. Also, confirm that Capture Traffic is enabled so that you are actively monitoring for new packets.
  3. Start Excel, connect to Oracle EPM, and begin executing your script via the Smart View add-in

As you are working through the steps, you should see activity being captured in Fiddler which corresponds to Oracle EPM Cloud traffic.

  4. Export the Fiddler traffic into an HTTP Archive v1.1 file (.HAR)

Notes:

  • Consider creating multiple different recordings to test different use cases. Replaying the exact same script multiple times may not be a good real-world load test. (e.g. Would you really run the exact same business rule / POV concurrently?)
  • If done carefully, you can modify the .HAR file to create unique test cases with minimal effort (e.g. create clones of a test case which use a different “POV”, Run Time Prompt values, business rule, report name, etc.). The HAR entry for a business rule execution contains the rule name, cube, POV, and/or RTP members; if done properly, you can alter these values to produce a new test case.
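
As a rough illustration of the HAR editing described above, the Python sketch below loads a recording (a .HAR file is just JSON) and writes a clone with one business rule name swapped for another. The file names and rule names are placeholders, and the substring match is an assumption; inspect your own recording to find the exact request text to substitute.

import json

# Placeholder names -- adjust to match your own recording.
SOURCE_HAR = "testcase_rule_a.har"
CLONED_HAR = "testcase_rule_b.har"
OLD_RULE = "RuleA"   # assumed rule name found in the recorded request body
NEW_RULE = "RuleB"   # assumed rule name for the cloned test case

with open(SOURCE_HAR, "r", encoding="utf-8") as f:
    har = json.load(f)

# A HAR file stores each captured request under log -> entries.
for entry in har["log"]["entries"]:
    post_data = entry["request"].get("postData")
    if post_data and OLD_RULE in post_data.get("text", ""):
        # The same substitution approach works for POV members or RTP values.
        post_data["text"] = post_data["text"].replace(OLD_RULE, NEW_RULE)

with open(CLONED_HAR, "w", encoding="utf-8") as f:
    json.dump(har, f, indent=2)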

Test Case Replay

Now that you have created a recording file for your test case, you can use EPM Automate to play it back. Perform the following steps to perform the replay testing / performance data capturing:

  1. Create a replay user list file – This file will be used by EPM Automate to determine how many concurrent tests will be executed. This comma-delimited file contains a list of user accounts, passwords, and the name/location of the HAR file that each user will execute. For example, a file listing 10 users who all reference the same HAR file will drive 10 concurrent executions of that test case. (a sketch for generating such a file appears after these steps)

  2. On your testing PC, open a command prompt in Admin Mode

  3. Using EPM Automate, login as a Service Administrator

  4. Using EPM Automate, start the testing session by issuing the replay command.
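
As a minimal sketch, the Python below generates a replay user list in the comma-delimited layout described in step 1 (user account, password, HAR file per line) for 10 test accounts. The account names, password pattern, and file paths are placeholders; confirm the exact file layout against the EPM Automate documentation for the replay command in your release.

# Sketch: build a replay list for 10 concurrent users running the same HAR file.
USER_COUNT = 10
HAR_FILE = r"C:\PerfTest\testcase_rule_a.har"   # placeholder path

with open("replay_users.csv", "w", encoding="utf-8") as f:
    for i in range(1, USER_COUNT + 1):
        f.write("testuser{0}@example.com,Password{0},{1}\n".format(i, HAR_FILE))

# The file is then referenced when starting the test, e.g.:
#   epmautomate login serviceadmin@example.com <password> https://<instance URL>
#   epmautomate replay replay_users.csv
# (see the EPM Automate documentation for the replay command's optional parameters)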

NOTE(s)

  1. Review the data output written to the screen and/or to the specified output file. It will look similar to the description below:

For each step in your test case, for each user, you will see a duration, username, time, HTTP status code, and action details.

Test Result Analysis

While you can manually review the output data shown above, if you want to compare multiple test iterations, you will probably need to pull the data into Excel for further analysis. For instance, you may have test output for 1 / 5 / 10 / 15 / 20 concurrent users so that you can see how well a process will scale as concurrent users are added.

In order to accomplish this comparison, you will have to do a little work in Excel as the data returned from EPM Automate isn’t consistently delimited in the output. (e.g. Business Rule execution will have multiple values in one column that need to be split to find actual Essbase provider execution times, etc.)
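
As one hedged starting point for that clean-up, the Python sketch below splits each output line into the fields described earlier (duration, user, time, HTTP status, action details) and writes a clean CSV for Excel. It assumes fields are separated by runs of whitespace; the splitting rules will almost certainly need adjusting once you inspect your actual EPM Automate output.

import csv
import re

RAW_FILE = "replay_output.txt"        # placeholder: captured EPM Automate output
CSV_FILE = "replay_output_clean.csv"

with open(RAW_FILE, "r", encoding="utf-8") as src, \
        open(CSV_FILE, "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    writer.writerow(["Duration", "User", "Time", "HTTP Status", "Action"])
    for line in src:
        line = line.strip()
        if not line:
            continue
        # Split on whitespace; keep everything after the 4th field together.
        parts = re.split(r"\s+", line, maxsplit=4)
        if len(parts) == 5:
            writer.writerow(parts)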

To fully analyze the information in the output, you will end up with columns covering:

  • User Experience Duration – This refers to timing around the User Interface and doesn’t necessarily correlate to Business Rule / Essbase performance
  • Client Duration – This metric also speaks to U/I client software timing
  • Total Provider Time – This metric is buried in the “Object / Addl” columns in the EPM Automate output. For operations hitting the cube, this number reflects execution time
  • Essbase Duration – While this is reported in EPM Automate output, it didn’t really seem to be useful
  • Smart Push Duration – Time spent for Smart Push operations
  • Business Rule Duration – While it should reflect execution duration of the business rule, it didn’t really seem to have meaningful values
  • Form Validation Duration
  • Total I-O Duration / Total Duration – Columns I add to calculate a grand total for an operation which may have durations for multiple categories. (e.g. Essbase, Provider, Smart Push, etc.)

As I’m typically running tests for different user levels (1 / 10 / 20 / 35), separate worksheets exist which contain the raw data for each of the user groups. A Unique ID is generated so that the same tasks can be matched up across the user tables (e.g. Execute Business Rule “A” timing is pulled for 1/10/20/35 users)

The summary tab uses the Unique IDs to display a distinct list of activities and the respective timings for each test set.

Finally, this data is used to generate presentation graphs:

Resolving Frozen HFM Applications

Overview

While Hyperion Financial Management (HFM) applications do not regularly suffer from technical issues, there are certain situations that can cause your application to become unresponsive. This scenario is usually caused by poorly designed or corrupted HFM member lists or calculation rules which run in a loop (e.g. self-referencing member lists). When this happens, the application becomes frozen and even a restart of the servers/application will fail to solve the issue since the rules are executed again on startup.

One way to resolve this issue is to revert to a backup; however, this could result in the loss of artifacts and/or data. Fortunately, there is a better way to solve this problem that will minimize downtime and prevent the loss of information!

The information provided below will cover key details and complete recovery steps using the HFM COMMA application as an example.

HFM Database Concepts (Rule File Storage)

While the HFM relational database contains a plethora of tables, one table (<appname>_BINARYFILES) contains the key to this solution. HFM Rules, Metadata, Member Lists, and other Application Settings are all stored in this table. (see my other post about altering HFM Starting years via this table for more information!) When you use the HFM user interface to import Rules / Member lists / Metadata to the application, the information is placed in the appropriate instance of the table. (e.g. COMMA_BINARYFILES) The LABEL column indicates what type of information is being stored (AppSettings, CalcRules, MemberListRules, SharedAppData, etc.) while the BINARYFILE column contains the data.

Since the BINARYFILE column can only contain 2,000 bytes of data, you will find multiple rows for a given type. (e.g. rules files will typically have numerous rows since they are usually much larger than 2,000 bytes) When the data needs to be used in the application, it is read in ascending order based on the ENTRYIDX column.


While the BINARYFILE data does not appear to be readable, it is simply the ASCII values stored as a string as opposed to the actual human-readable characters, so it can be decoded back into its original text.
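
If you want to inspect the stored content yourself, the decoding is simple to script. The Python sketch below assumes you have copied a run of numeric ASCII codes out of the BINARYFILE column; the sample codes (which decode to "Sub Calculate") and their space delimiter are illustrative only and will depend on your database tool.

# Sketch: convert a string of numeric ASCII codes back into readable text.
raw_codes = "83 117 98 32 67 97 108 99 117 108 97 116 101"   # illustrative snippet

decoded = "".join(chr(int(code)) for code in raw_codes.split())
print(decoded)   # -> Sub Calculate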

 

Resolving HFM App Hang Up

While the HFM database table ‘history lesson’ was fun, how exactly does this solve the problem? Since the HFM application looks for specific LABEL values for Rules / Member Lists, the easiest way to resolve the issue is to update the LABEL on the relevant rows so that HFM will no longer “see” them as rules / member lists. This effectively gives HFM a clean slate and it will start up as expected. (Before doing anything else with the application, you should re-load a good set of rules / member lists though!)

IMPORTANT – You should not make these changes while the application is running! Be sure to stop the application first!

The following steps should be followed to fully recover the application:

  1. ​Stop HFM Services
  2. Connect to the database and execute a query to change the Label value to a dummy value
    • Member Lists – UPDATE <APPNAME>_BINARYFILES SET LABEL = 'BadMemberListRules' WHERE LABEL = 'MemberListRules'
    • HFM Rules – UPDATE <APPNAME>_BINARYFILES SET LABEL = 'BadCalcRules' WHERE LABEL = 'CalcRules'
  3. Restart HFM Services
  4. Connect to Application
  5. Load new Member List or HFM Rules
    • NOTE – If this is an EPMA application or you are using Calculation Manager, be sure to correct the rule issue and redeploy the application

 

Oracle Inventory User Maintenance Scenarios

Overview

The Oracle Inventory is a series of registry entries and files that keep track of all Oracle products / patches that are installed on a machine. As certain portions of the Oracle Inventory are stored in a user-specific manner, product and patching problems can occur if multiple user accounts are utilized for product installation and/or patching operations. (e.g. User 1 installs Product A, and User 2 attempts to patch Product A)

Common Oracle Inventory / User related scenarios / solutions are:

  1. A user account is used to install software / apply patches and this account is disabled / deleted at a later date – Assuming no additional Oracle software has been installed, this is easy to fix via a combination of file copies / registry updates.
  2. Multiple user accounts have been used to install Oracle software – In this scenario, the inventory needs to be recreated under one user regardless of whether the accounts still exist.

IMPORTANT – It is strongly recommended to install/patch all software with a non-user-specific account where possible. While certain security policies may not allow this, using a global account will ensure you do not have to perform the steps in this document.

Prior install account has been disabled / deleted

In this scenario, an account used to install the Oracle EPM Software has been disabled / deleted and we are unable to patch the environment as the Oracle Inventory for the current user does not have knowledge of the prior installed products. The recommended fix is to reassign the previous user account’s Oracle Inventory to the new user account.

To move the Oracle Inventory, perform the following steps:

  1. Identify a service account and set it up with local administrator access on all Hyperion machines
  2. Copy the .oracle.instances file from the user account which was used previously to the new account. In the example below, the previous user was epm_admin, making the path to the file C:\Users\epm_admin\.oracle.instances. NOTE: This file may not be visible depending on your file/folder view settings.

  3. Modify Windows Registry Hive & Key values – Replace the old install user with the new one
    1. Rename Computer\HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\KEY_EpmSystem_<old user> to Computer\HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\KEY_EpmSystem_<new user>
    2. Update the values for ORACLE_GROUP_NAME & ORACLE_HOME_NAME keys so that they refer to the new user account.

 

  4. Modify the file C:\Program Files\Oracle\Inventory\ContentsXML\inventory.xml

Replace any reference to the old install account with the new account. (a scripted sketch for this substitution appears after these steps)

  5. Confirm that the OPatch software shows the inventory properly by:
    1. Opening a Windows Administrator Command Prompt
    2. Navigating to the Oracle\Middleware\EPMSystem11R1\OPatch folder
    3. Executing the OPatch utility: Opatch lsinventory -oh <Install Drive>:\Oracle\Middleware\EPMSystem11R1 -jdk <Install Drive>:\Oracle\Middleware\jdk160_21
    4. Visually verify that the installed patch list corresponds with what has been installed
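
For the inventory.xml edit in step 4, a simple scripted find-and-replace (with a backup copy) reduces the chance of typos. In this Python sketch, the file path comes from the step above, while the account names are placeholders to swap for your old and new install users.

import shutil

INVENTORY = r"C:\Program Files\Oracle\Inventory\ContentsXML\inventory.xml"
OLD_USER = "epm_admin"      # account originally used to install (per the example above)
NEW_USER = "epm_svc"        # placeholder: account that will own the inventory

# Keep a backup before touching the file.
shutil.copyfile(INVENTORY, INVENTORY + ".bak")

with open(INVENTORY, "r", encoding="utf-8") as f:
    contents = f.read()

with open(INVENTORY, "w", encoding="utf-8") as f:
    f.write(contents.replace(OLD_USER, NEW_USER))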

Oracle products have been installed to multiple accounts

In this scenario, multiple accounts have been used to install Oracle software. Patching attempts may fail or appear to operate successfully, though not all products are truly updated. To resolve this issue, the Oracle Inventory will need to be rebuilt targeting one user account.

NOTE: It is strongly recommended that a generic/service account is used for Oracle installation / patching to prevent these issues.

To rebuild the Oracle Inventory, perform the following steps:

  1. Log onto the server as the account which will contain the Oracle Inventory
  2. Execute the createInventory.bat script located in the <Install Drive>:\Oracle\Middleware\EPMSystem11R1\OPatch folder


  3. From the main menu, click on the Install menu
  4. Scroll through the contents list to confirm that all installed products are reflected
  5. Monitor the installation progress. NOTE: This may run for a while (> 10 minutes)
  6. On the Specify Source Location screen, provide the location of the products.xml file. NOTE: This should be <Install Drive>:\Oracle\Middleware\EPMSystem11R1\common\epmstage\Disk1\stage\products.xml
  7. Specify the Oracle Inventory Home Details. NOTE: By default this is OUIHome / C:\OraHome; use OUIHome1 & C:\OraHome1
  8. Scroll through the provided list, ensure your products are shown, and then click Next
  9. Review the items on the summary screen and click Install
  10. Upon completion of the steps above, re-run the OPatch utility to confirm the installed products now appear

FDMEE Essbase/Planning Script Execution Glitch


When targeting Essbase/Planning applications through FDMEE, a particularly useful feature is the ability to trigger calculation scripts before and after the Load process as well as before and after the Check process. Not only can you execute scripts, but you can control the execution order and pass script parameters. This functionality is quite useful for executing fixed-scope data clear operations before the data load and executing targeted aggregations/calculations after the data load has completed.

Figure 1 – FDMEE Target Application Calculation Script Editor

While this feature works great when the scripts are working, what happens when there is a script failure? If a script executed during the Load process fails, should the FDMEE Load step report a failure? (even if the data loaded?) How about a script failure during the Check step? Would you be surprised to know that, as of 11.1.2.4.210, a script failure is not reported as a step failure?

In the event of a successful load process, even if a script error occurred, all FDMEE “fish” steps will return gold and the Process Monitor report will reflect the same. (e.g. no issues) Your only indication of a failure will be in the job log for the data load process!

If you are currently leveraging this functionality, please be aware of this quirk until this is corrected!

NOTE(s):

  • 11.1.2.4.100 (Patch 20648390) advertises this as being fixed; however, it still presents itself in 11.1.2.4.200+ [See Defect: 20631385]
  • An enhancement request, 25217240, was created for this issue when submitted in 11.1.2.4.200. I do not know if this has been implemented yet; however, no fix is listed in the Defects Fixed Finders Tool through 11.1.2.4.210

Steps to Recreate [target a non-existent script]

#1 – Create an FDMEE Target Planning Application and associate a Calculation script that does not exist in the target application


Figure 2 – Create Calculation Script references for our sample Target Application

#2 – Create your Location / Data Load Maps / Data Load Rule / etc.

#3 – Load a data file through and confirm that the Process Monitor reflects success.


Figure 3 – Perform a data load and confirm that the Process Monitor User Interface shows no errors

#4 – Pull Process Monitor report and confirm no errors reflected.


Figure 4 – Run the Process Monitor Report to verify it also does not reflect any errors

#5 – Review the log file

Since the script doesn’t exist, the FDMEE log will reflect an Essbase error due to the non-existent script.


Figure 5 – Review the Job log to verify an error is reflected in the logs

Version Information:


Figure 6 – Confirm version of FDMEE

Workaround

If your load process is being performed manually, the easiest recommendation is to have users review the log to confirm successful script execution.

If you are performing automated processing where it is not feasible to manually review logs, consider implementing an Event script to scan the log file for script success/failure and using that to trigger a failure in FDMEE / an error log entry / an email notification / etc.
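
As a rough sketch of that approach, the Jython below could sit in an FDMEE event script (for example, after the load step). The log directory, log file naming, and the error text being searched for are all assumptions to adapt to your environment; raising an exception is one way to force the step to fail so the problem becomes visible.

# Sketch of an FDMEE event script (Jython) that scans the job log for a failed
# calculation script. Paths, file naming, and search strings are assumptions.
import os

log_dir = r"E:\FDMEE\outbox\logs"                 # placeholder log location
load_id = str(fdmContext["LOADID"])               # fdmContext is supplied by FDMEE
log_file = os.path.join(log_dir, "TargetApp_" + load_id + ".log")   # assumed name

error_markers = ["Error executing calculation script", "Cannot calculate"]

if os.path.exists(log_file):
    f = open(log_file)
    contents = f.read()
    f.close()
    if any(marker in contents for marker in error_markers):
        fdmAPI.logInfo("Calculation script failure detected in " + log_file)
        raise RuntimeError("Calculation script failure detected; failing the step")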

PBCS Data Loading Glitch


While building out data load automations for PBCS and FCCS recently, I ran into a somewhat annoying issue when loading data to PBCS. Even more surprising is that Oracle claims it works as expected… This post will explain the issue and provide a really simple workaround until it gets properly addressed.

The Problem

Anyone who has worked with FDM Classic, FDMEE, or Data Management for any amount of time has run into at least one failed data load due to invalid dimension member(s) in their exported data. While the data load process should ensure that the target application metadata is current before loading data, this doesn’t always happen. (e.g. last minute source ERP updates that do not get properly communicated to the EPM team)

While a data load failure isn’t optimal, this failure is typically easy enough to identify and fix based on the feedback provided by FDM/FDMEE/Data Management and the target application.

FDMEE / PBCS Merge Load Failure

FDMEE / Planning Merge Load Failure


Data Management / FCCS Merge Load Failure

Unfortunately, when you perform a Replace export to a PBCS target application, under some circumstances you will get a quite unhelpful response:

Data Management / PBCS Replace Load Failure

What Happened?

When performing a Merge export, FDMEE/Data Management simply passes the generated data file to the target application for loading without any pre-processing. When performing a Replace export, a data clear script is dynamically generated based on the members contained in the export data.

The script will clear data for every intersection of Scenario, Year, Period, Entity, and Value dimensions. In the event that one of those members is missing in the target application, the dynamic clear script will contain an invalid member resulting in the error shown above. While the error shown above is completely legitimate, it isn’t meaningful enough to allow you to locate the member easily.
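
Until the error becomes more descriptive, one way to hunt down the offending member is to pull the distinct values for those dimensions out of the export file and compare them against the target application’s outline. The Python sketch below is generic: the file name, delimiter, and column positions are placeholders you would need to match to your own Data Management export format.

import csv

EXPORT_FILE = "export_data.txt"    # placeholder: the exported data file
DELIMITER = "\t"                   # placeholder: adjust to the actual delimiter
POV_COLUMNS = {"Scenario": 0, "Year": 1, "Period": 2, "Entity": 3}   # assumed positions

members = {name: set() for name in POV_COLUMNS}

with open(EXPORT_FILE, newline="", encoding="utf-8") as f:
    for row in csv.reader(f, delimiter=DELIMITER):
        for name, idx in POV_COLUMNS.items():
            if idx < len(row):
                members[name].add(row[idx])

# Review each list for anything that does not exist in the target application.
for name, values in members.items():
    print(name, "->", sorted(values))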

Oddly, if you recreate this scenario when loading to FCCS, you will have a much different result! When performing a replace load to FCCS, invalid members will result in meaningful errors that include the invalid member name(s)! Even more odd, Oracle Support will tell you that both are working as designed. It would seem that PBCS should behave similarly to FCCS to simplify troubleshooting efforts. (Perhaps if enough people raise this as an issue, they will address it?)

Work Around

The easiest way to work around this issue is to perform two data exports to PBCS for Replace operations. The first export should be a Merge load. As the dynamic clear script will not be generated for a Merge, this allows you to receive specific errors in the event that there is a data load failure. After a successful Merge load, perform the Replace load to ensure that all data gets cleared, etc.

Job Output from Merge/Replace Load Automation Process

Oracle Hyperion Planning Application Migration Fun!

EPMA Planning Application w/ Workforce Module First Time Deployment (Post Migration)

If you have one or more Workforce enabled Planning applications and you have multiple EPM environments, you may be in for a surprise the first time you migrate those applications to another one of your environments!

After LCM’ing in your EPMA application definition and performing the initial Application deployment, you may be greeted with the following error:


Since the application works perfectly fine in the source environment *and* Planning supports shared members in the Time Period dimension, you would be correct to suspect ‘shenanigans’ are at play. The trick is that when the Workforce Planning module is first initialized, there cannot be shared members in this dimension! Once the module is initialized, you can add duplicate member instances. Unfortunately, when you copy an application to a new environment and deploy it for the first time, part of this process includes module initialization!

To work around this issue, keep the default Time Period members and remove any duplicates, deploy the application for the first time, use LCM to restore the full Time Period dimension, and then perform another deployment.

Remove shared members. (Use SHIFT+click to highlight multiple)

Click Yes on the confirmation window that appears after clicking on Remove Member.

Return to application library and attempt the deployment again.

Confirm that this Deployment completes without any errors.

Return to Shared Services and reimport the Period dimension for the application via the EPMA portion of the LCM file

After the LCM import completes successfully, return to Application Library. Since we changed the Period dimension back to its original state in EPMA, this will signal that the application is Out of Sync and will need to be deployed again.

Perform another deployment and confirm that the Status of the application no longer shows as Out of Sync.

 

NOTE:  This issue appears to impact Planning through 11.1.2.4.

Oracle Hyperion EPM Environment Branding Made Easy!


For IIS and OHS

Overview

A common pain point when working in multiple EPM environments is ensuring that you are working in the right one.  “Out of the box”, each environment visually looks exactly the same.  As no one wants to be the person that accidentally makes a change in the wrong environment, people have tried all sorts of ways to remedy this issue.

Typically, people physically swap out image files for the individual web applications; however, these solutions are problematic for multiple reasons:

  • they require manual file system changes
  • they require rework, since patching / reinstallation / reconfiguration will wipe out the changes
  • updating requires modification of Java WAR/EAR files, which could lead to unintended issues
  • they do not work properly in all versions of EPM due to content-length limitations
  • they do not scale well if branding multiple products

The solution below resolves all of these issues by intercepting image requests at the webserver level via URL Rewriting. Utilizing URL Rewriting allows us to:

  • use a small number of images, in one central location, for multiple products
  • easily scale and allow for multiple branding options
  • significantly reduce the likelihood of our branding changes being lost due to patching, redeployment, installation, or configuration
  • avoid manual manipulation of EPM application files

The following walk-through will show you how to use URL rewriting to replace the Oracle logo contained in the upper corner of most EPM applications with one shared image.

NOTE: URL Rewriting could also be leveraged for other use cases, such as globally redirecting all EPM users to a maintenance page while allowing admins to access the system via a special URL.

Oracle logo swapping

There are a few images that lend themselves nicely to branding replacement as the images are used globally among all the EPM products. The red Oracle logo (oracleLogo.png) is one such example. The oracleLogo.png file appears in the title bar on many of the pages, such as the initial log on page:

A quick search for this file, on a webserver running most EPM products, reveals how common it is:

As the same file is used in virtually every EPM product, the best way to replace this image is through URL Rewriting. URL Rewriting instructs the web server to replace requests for a given URL with a different URL of our choosing.

For instance, if a user requests http://EPM.COM/EPMA/oracleLogo.png, we can tell the webserver to reroute that request to http://EPM.COM/MyCentralLocation/myCoolerLogo.png. Since this functionality supports regular expressions, we can create one rule to cover requests for almost all of the EPM products. (DRM & FDM need additional rules)
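
Once a rewrite rule is in place (on IIS or OHS), a quick scripted check confirms the redirect is firing without fighting browser caches. This Python sketch uses the requests library; the host name and image names are placeholders for your environment.

import requests

ORIGINAL = "http://epmweb.example.com/workspace/oracleLogo.png"   # placeholder URL
EXPECTED = "oracleLogo_qa.png"                                    # replacement image name

resp = requests.get(ORIGINAL, allow_redirects=False)
print("Status:", resp.status_code)                  # expect 301/302 from the rewrite rule
print("Location:", resp.headers.get("Location"))    # should point at the replacement image

if resp.status_code in (301, 302) and EXPECTED in resp.headers.get("Location", ""):
    print("Rewrite rule is working")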

Implementing URL Rewriting on IIS

To implement URL Rewriting for the oracleLogo.png file, perform the following steps:
[NOTE: The screen shots below depict IIS 7/7.5; however, this process works for all currently supported versions of IIS]

  1. Create a replacement oracleLogo.png file. As the original file has a height of 25 pixels and a width of 119 pixels, it is imperative that your image is the same size. If you attempt to use an image with a different size, it will be scaled to fit and it may not look how you want it to. Sample images are shown below.

    (NOTE: We will use the QA file for the rest of the IIS walkthrough)
  2. Copy your replacement logo to a location accessible by the web server and the end users. (HINT: The IIS WWWROOT folder is typically available to all users and is a good common spot)
  3. Confirm that you can access this file via Web Browser
  4. Confirm that IIS Rewrite is installed on the Web Server.
    (If it is not, follow the steps in Appendix A)
  5. Start Internet Information Services (IIS) Manager
  6. In the connections panel (on the left), expand the Server, Sites, and then Default Web Site.
  7. In the right window, click on Features View and then double click on the URL Rewrite button.
  8. In the Actions panel (on the right), click on Add Rule(s)
  9. Click on Blank rule
  10. Complete the Inbound Rule Screen as follows
    1. Name: oracleLogo Replace
    2. Match URL
      1. Requested URL: Matches Pattern
      2. Using: Regular Expressions
      3. Pattern: (.*)/oracleLogo.png
      4. Ignore Case: [Checked]
    3. Conditions: Skip, no changes required.
    4. Server Variables: Skip, no changes required.
    5. Action
      1. Action Type: Redirect
      2. Redirect URL: http://<Web Server Name Here>/oracleLogo_qa.png
      3. Append query string: Checked
      4. Redirect type: 302 Found

  11. Click Apply
  12. Confirm changes were saved successfully
  13. Test a page
  14. For FDM, add a rule as follows:
    1. Name: FDM Logo
    2. Match URL
      1. Requested URL: Matches Pattern
      2. Using: Regular Expressions
      3. Pattern: (.*)/logo.gif
      4. Ignore Case: [Checked]
    3. Conditions: Skip, no changes required.
    4. Server Variables: Skip, no changes required.
    5. Action
      1. Action Type: Redirect
      2. Redirect URL: http://<Web Server Name Here>/logo_qa.gif
      3. Append query string: Checked
      4. Redirect type: 302 Found

 


 

Implementing URL Rewriting on OHS

URL Rewriting in OHS is relatively simple as the capability is activated out of the box in the version that is installed with EPM products. To redirect oracleLogo.png in OHS, perform the following steps:

  1. Copy your replacement image to the OHS root folder.
    cp /oracleLogo-TRN.png /Oracle/Middleware/user_projects/epmsystem1/httpConfig/ohs/config/OHS/ohs_component/htdocs/
  2. Update the epm_rewrite_rules.conf configuration file with the redirect actions.
    NOTE: For each image file being redirected, create a RedirectMatch entry similar to the following:
    RedirectMatch (.*)/oracleLogo.png$ https://<ServerNameHere>/oracleLogo-TRN.png

  3. Restart OHS
  4. Open a Web Browser (after clearing all caches / temporary files) to confirm the update has taken place

Appendix A – Install URL Rewrite on IIS

If your IIS server does not already have URL Rewrite installed, perform the following steps to acquire / install it from Microsoft.

  • Download URL Rewrite

 

HFM – Removing Years

One of the most important steps during the creation of your application is determining how many years should exist in it.  Specifying too many years could result in wasted space and poor application performance, while too few years could result in running out of room in your application when you hit the upper limit.

HFM Application Profile Editor (Pre-11.1.2.4 shown, though same applies to 11.1.2.4)

When HFM first shipped, you had no means of updating the Start Year or the End Year of your application; therefore, a mistake during the application creation process could lead to major headaches in the future.  As customers’ HFM applications matured and those upper year limits got close, Hyperion/Oracle rolled out the capability to add more years to applications.  (with 11.1.2.4, there is a built-in function to do this)

When it comes to removing years from the application, you can clear data from years; however, there is not a way to actually remove years from the application.  Unfortunately, Oracle has never released guidance or functionality for altering the Start Year and will tell you that it isn’t possible to change it.  I’m here to tell you that you can absolutely change this!  Before we get into how to make those updates, some quick words on why you’d want to do this and some considerations around it.

Reasons to remove years from your application:

  • Database has grown significantly due to continued use of the same application over many years
  • Metadata is cluttered due to the need to ensure historical data remains consistent
  • Prior years are no longer used for reporting  and are not necessary
  • Minimize the years displayed in the U/I to only valid years
  • Reduce the number of database objects (data tables may exist for each year/period/scenario combination)

Considerations when removing years from your application:

  • Beginning Balances for Balance Sheet accounts – (e.g. If our Start Year changes to 2015, we will still need prior year ending balances)
  • HFM Rules/Calc Mgr – (e.g. ensure that rules are not impacted by the beginning year change)

Process to change the Start Year:

  • “Easy Steps”
    • Make a backup.  (LCM / Database Backup / etc.)
    • Create an archive application – Consider creating an archive HFM application for the historical data.  Storing the information in a separate application gives you the ability to still view the data if needed without having to make metadata trade-offs in your day to day/primary application.  Use HFM Application Copy Utility or LCM/Application Snapshot to accomplish this.  (If you already have an archive app, you can export/import data, though Journals may be slightly annoying in this instance)
    • Update Beginning Year variables / values in the application rules / data for the new Start Year
    • Stop the HFM services
  • “Not so Easy Steps”  (explained later in more detail, don’t panic yet)
    • Connect to your HFM database via your database tool of choice  (e.g. SQL Server Management Studio, SQLPlus, etc.)
    • Query the <APP_NAME>_BINARYFILES table by ENTRYIDX for Labels of type SharedAppData
    • Update bytes 74 & 75 of the first row to reflect your new Start Year
  • Start HFM Services
  • Verify application data / functionality
  • “Post Validation Cleanup”
    • Stop HFM Services
    • Remove Year / Period specific database tables (e.g. <APP_NAME>_DCE_<PeriodNum>_<Year> )

HFM Binary Files Table

For each HFM application, you will find a Binary Files table in the format <APP_NAME>_BINARYFILES.  This table is used for multiple purposes, including storing application configuration information, application metadata, HFM rules, memberlists, etc.  The LABEL column defines the type of file the data is related to.  The BINARYFILE column contains the raw information from the corresponding file.  (e.g. Application Profile .PER file, Rules .RUL file, Metadata, etc.)  Since this column has a character limit that is smaller than most of these files, you will find multiple rows for each file type.  The ENTRYIDX column is used to order the rows in the proper order.  (Ascending order)

As the Start and End years are defined in the profile, we only want to view rows containing a LABEL of SharedAppData:

SELECT
 ENTRYIDX, LABEL, BINARYFILE
FROM
 <APP_NAME>_BINARYFILES
WHERE
 LABEL = 'SharedAppData'
ORDER BY
 ENTRYIDX ASC

 

SharedAppData 

SharedAppData ENTRYIDX 1 as viewed in a Hex Editor

As mentioned above, the SharedAppData rows correspond to the application profile (.PER) that defines the HFM application.  Analysis of this data indicates that both the Start and End years are stored in the first row of this information.  The Start Year is stored in bytes 74 and 75 while the End Year is stored in bytes 78 and 79.  The data is stored in least-significant-byte order.  (i.e. byte 74 holds the low-order byte of the year and byte 75 the high-order byte)

In the Shared App Data above, the Start Year is 2006 while the End Year is 2015.

Start Year (Hexadecimal) = D6 07 = 07 D6 (flipped) = 2006 (decimal)
End Year (Hexadecimal) = DF 07 = 07 DF (flipped) = 2015 (decimal)

To change the Start Year, convert the decimal year to hexadecimal and then reverse the byte order.  For instance, if your new start year is 1999, you would end up with:  1999 (dec) = 07CF (hex) = CF 07 (byte-swapped)

HINT:  You can use Windows built-in Calculator program in Programmer mode to make these conversions.
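
If you would rather script the conversion than use Calculator, the Python sketch below reproduces the examples above and prints the two bytes, low-order byte first, exactly as they appear at offsets 74/75 (Start Year) and 78/79 (End Year).

def year_to_bytes(year):
    """Return a year as two hex bytes in least-significant-byte-first order."""
    low, high = year % 256, year // 256
    return "{:02X} {:02X}".format(low, high)

def bytes_to_year(byte_text):
    """Reverse the conversion, e.g. 'D6 07' -> 2006."""
    low, high = [int(b, 16) for b in byte_text.split()]
    return high * 256 + low

print(year_to_bytes(2006))      # D6 07  (Start Year in the example above)
print(year_to_bytes(1999))      # CF 07  (new Start Year example)
print(bytes_to_year("DF 07"))   # 2015   (End Year in the example above)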

Update SharedAppData

After you’ve calculated your new Start Year value, create an UPDATE query to alter the stored binary data.

UPDATE query to alter Start Year

 

Verify Results

After you have updated your settings and restarted HFM, you can verify that you now have the appropriate years

Original Years

 

Updated Years

NOTE – While the screen shots are from an HFM 11.1.2.1 application, this has been tested through 11.1.2.4, though I won’t guarantee this works with every patch version, etc.  Always test in a non-production environment.

Hyperion Profitability (HPCM) 11.1.2.4 – “Transaction rolled back because transaction was set to RollbackOnly”

While working with a client running HPCM 11.1.2.4.110, we were encountering intermittent errors while executing rules:

Problem Overview

“javax.persistence.RollbackException: Transaction rolled back because transaction was set to RollbackOnly”

1_UI_Error

As the exact same rule could be re-run without any errors, it appeared to be application/environment related.  (also appeared to be database related given the content of the error)

Reviewing the profitability log provides a much clearer view of the issue

NOTE: Log would typically be found in a location similar to:  \Oracle\Middleware\user_projects\domains\EPMSystem\servers\Profitability\logs\hpcm.log

3_HPCM_Error_Log

From the log file snippet above, note the highlighted section:

“[SQL Server] Arithmetic overflow error converting expression to data type smallint.”


What is the Problem?

While this does not match the error message we see in the user interface, this is definitely related as:

  1. The error message was logged at the same time as the error in HPCM
  2. The user name logged corresponded to the user receiving the error
  3. SQL Server rolling back a transaction after a failed insert makes a lot of sense.

(very) Loosely speaking, database transactions exist to protect the integrity of a database.  If a program, or user, were to execute a series of statements against a database and one or more fail, what should happen?  Should we leave the database in an inconsistent state or should we put the database back and alert the user?  While application developers could build this logic into their program code, it is a lot more convenient to give the database a series of steps and let it handle that for us!

In this case, the INSERT statement is part of a transaction.  Since the INSERT failed, SQL Server has rolled back the entire transaction and reported that to HPCM.


Why are we encountering this problem?

While that explains what happened, why did it happen?  The error in the log file has four key clues:

  1. We are attempting to add data to a database table  (INSERT INTO)
  2. The table is: HPM_STAT_DETAIL
  3. ARITHMETIC OVERFLOW occurred when trying to store a value in a column
  4. The target column has a Data Type of smallint

In SQL Server, a smallint datatype can have a maximum value of 32,767.  Another look at the error message reveals one numeric value, 43,014, which exceeds 32,767.  This value is being stored in a column called JAVA_THREAD.  As JAVA_THREAD is storing the process id, which is semi-randomly generated, the program works as expected if the number returned is < 32,768.  If the ID is > 32,767, then things don’t go as well…

Reviewing the table structure for this table confirms the suspicion.

2_Database_Column_Definitions


How to fix this

The easiest fix for this issue is to change the datatype for this column from smallint to int.  As the largest int value is well over 2 Billion, this issue should not occur again.

LEGALESE – While I have reviewed this change with Oracle and am very confident this will not cause any issues, proceed at your own risk.   🙂

4_Updated_Table

NOTE(s):

  • As of 6/26, Oracle has confirmed this as a BUG.  No ETA on an update yet, though.
  • This may be SQL Server specific; I have not evaluated the Oracle schema to confirm the data type used.  [The Oracle equivalent would be a small-precision NUMBER column]

Kscope2015 – Smart View

This past year at Kscope15, I presented a session all about the technical side of Smart View.  While I intended to spend equal time between infrastructure and API/programming topics, I ended up focusing a bit more on the API/programming side.  There are so many ways to improve your Smart View documents by understanding some basic Visual Basic for Applications (VBA) code and leveraging the Smart View API, I simply couldn’t resist!

For those more interested in the infrastructure side of Smart View deployments, do not fear!  While the session itself didn’t spend as much time on it, the PowerPoint deck includes a fair number of slides which provide information on how to automate Smart View deployment, automatically set user defaults, and deploy preconfigured Private and Shared Connections.

The session, and the slide deck below, provide oodles of information on the following topics:

  • Improving Robustness of Smart View Documents
    • Excel Add-In Failure Detection  (e.g. Disabled Add-In / Missing Smart View)
    • Proactive Connection Monitoring
  • Deployment Simplification / Initial Configuration
    • Automated Installation Guidance
    • Automated Default Preferences Push
    • Automated Shared / Private Connection Push
  • Essbase Add-In / Smart View Conversions
  • VBA Important Tips / Tricks
  • Smart View API Important Tips / Tricks

As with all of my presentations, you will find a plethora of working examples such as:

  • Excel Performance Improvements ( Screen Updating / Enable Events / Calculation Mode )
  • Invalid Cell Data Identification ( Catch Non-Numeric data before it wrecks your formulas! )
  • Add-In Presence & Status Detection
  • Broken Link Detection & Correction
  • Planning Cell Note Editor
  • Working with Excel & VBA (Workbooks / Worksheets / Ranges / Events )
  • Working with Smart View API ( Refreshing Data / Creating, Establishing, Disconnecting, Deleting Connections )

Download the presentation here!
