EPM Automate – State Management “Public Safety Announcement”

Overview

EPM Automate is an Oracle-provided command line interface for interacting with Oracle Cloud EPM applications. These applications expose REST API endpoints that support a wide variety of activities. (data loading, user management, maintenance, rules execution, etc.) The EPM Automate utility allows users to consume these REST API endpoints without having to write code. This is useful for application administrators and power users who wish to perform certain activities without having to navigate the application's graphical user interface. The utility can also be utilized by automation processes to reduce technical debt/maintenance overhead since it abstracts the API details.

Using EPM Automate is relatively simple: you download and install the utility on a Windows or Linux system and issue commands via the command line. Commands are issued one after another in the sequence necessary to perform a particular task. In the example below, I am connecting to an EPM Cloud application, requesting the value of a substitution variable, and then logging out of the application.
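As a sketch, the same sequence can be driven from Python as a thin wrapper around the CLI. The credentials, URL, and variable name below are placeholders, and the `getsubstvar` argument syntax should be verified against your EPM Automate version:

```python
import shutil
import subprocess

def epm(*args):
    """Compose an EPM Automate call; execute it only if the CLI is installed."""
    cmd = ["epmautomate", *args]
    if shutil.which("epmautomate"):
        subprocess.run(cmd, check=True)
    return " ".join(cmd)

# Placeholder credentials/URL -- substitute your own service details.
steps = [
    epm("login", "admin@example.com", "MyPassword",
        "https://example-fccs.epm.us-phoenix-1.ocs.oraclecloud.com"),
    epm("getsubstvar", "ALL", "name=ServiceInstanceName"),
    epm("logout"),
]
print("\n".join(steps))
```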

After logging into the application, you can run as many individual commands as you would like, and you can leave your 'session' open for some time without having to authenticate again. Even though the underlying API calls are executed separately, and you are not providing credentials for each command issued, the EPM Automate utility is maintaining state behind the scenes.

While this approach is convenient for the user, it is important to understand how it is accomplished, as misuse can result in potentially serious issues! The purpose of this post is to explain how EPM Automate state management behaves and how to avoid problems arising from its design. Additionally, I'll share some thoughts on future updates which could address the concern raised here.

State Management

State Management is accomplished through a text file called .prefs. This file contains all of the relevant information that EPM Automate requires to communicate & authenticate with the REST API. The file is created after you successfully execute a ‘login’ command in EPM Automate.

In the early releases of the utility, Oracle stated that you should not run more than one instance of this utility at the same time. While they didn't elaborate as to why, the issue revolved around the .prefs file. While they have made updates to the utility to help address the issue, there are still some gaps which could lead to undesirable consequences.

The (mostly resolved) Issue

The issue with the .prefs file occurs when more than one instance of EPM Automate is concurrently in use on the same machine. Originally, the .prefs file was placed in a fixed folder, meaning that all instances would be reading/writing the same file, and the last successful login attempt by any user on the machine would overwrite the others! If those sessions were working against the same application, not a big deal; however, if the sessions were interacting with different applications, this would be bad. The best movie analogy is Ghostbusters' "DON'T CROSS THE STREAMS!"

Since the original version of EPM Automate, Oracle has adjusted this logic to allow for concurrent execution by different users. To accomplish this, the .prefs file was moved under a hidden subfolder which reflects the logged-on user's information.

By keying off of the username, this ensures that separate users will not have a collision and can operate safely.

But what happens if the same user wants to be super-efficient and work on two different Cloud EPM applications concurrently? (It will depend on how careful you are!)

Crossing the Streams, for Science!

To demonstrate what could happen, we will open two separate command windows and jump right into issuing EPM Automate commands.

Before issuing EPM Automate commands, I've added a global substitution variable, "ServiceInstanceName", which holds the service instance name in each of two FCCS applications.

After creating the substitution variables, we will connect to one application at a time and query the substitution variable to see what we get:

Next, let’s do the same thing for the second server:

Now, let’s go back to the first window and query the variable again.

As you can see above, running two concurrent processes as the same user can lead to some undesirable results. While my example simply reads a variable, imagine what would happen if you logged into a second application and triggered a wipe/reset of the first? Oops!

Easy Workaround

Fortunately, there is an easy workaround. Before starting your EPM Automate sessions, ensure that you are working from different folders and all will be fine since the .prefs files will be created in different locations.
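Why this works can be sketched in a few lines. The hidden-folder naming below is an illustrative assumption, not the utility's exact layout; the key point is that the .prefs path is derived from the launch directory:

```python
import getpass
from pathlib import Path

def prefs_path(working_dir: str) -> Path:
    """Illustrative only: the .prefs file lands under a hidden per-user
    subfolder relative to the directory EPM Automate is launched from."""
    return Path(working_dir) / f".{getpass.getuser()}" / ".prefs"

# Same user, two sessions launched from different folders: no collision.
session_a = prefs_path("/work/fccs_app1")
session_b = prefs_path("/work/fccs_app2")
print(session_a)
print(session_b)
print(session_a != session_b)
```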

Future Update Thoughts

To fully prevent this issue from impacting interactive users, EPM Automate could append the Process ID (PID) to the folder name which houses the .prefs file. This would ensure that each & every command window has a distinct folder/file.

While this would be a quick/easy fix for interactive users, it would create a potential headache for anyone using EPM Automate in a non-interactive manner where commands are issued in separate sessions. (e.g., AutoSys, Control-M, etc.) To address this, Oracle could add an optional command line parameter ("prefsfile") to allow for explicitly setting the location of the .prefs file.
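A sketch of how these two proposals could combine. All naming here is hypothetical, since this is a suggested enhancement rather than current behavior:

```python
import getpass
import os
from pathlib import Path
from typing import Optional

def prefs_folder(base: str, pid: Optional[int] = None,
                 prefsfile: Optional[str] = None) -> Path:
    """Hypothetical behavior: an explicit 'prefsfile' override wins;
    otherwise the process ID is appended to the hidden per-user folder."""
    if prefsfile:
        return Path(prefsfile)
    pid = pid if pid is not None else os.getpid()
    return Path(base) / f".{getpass.getuser()}_{pid}" / ".prefs"

# Interactive use: each command window (distinct PID) gets its own file.
print(prefs_folder("/work", pid=1001))
print(prefs_folder("/work", pid=1002))
# Scheduler use (AutoSys, Control-M): every step targets one explicit file.
print(prefs_folder("/work", prefsfile="/etc/epm/job42.prefs"))
```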

Retro Computer (Compaq SLT/286) “Steak Knife” Repair

Overview

Recently, I ran across someone looking to sell an “ancient” laptop that had been long forgotten in their closet for ~30 years. The laptop in question is a Compaq SLT/286. As I’m a sucker for very clean old tech, this was right up my alley! Since it was a Compaq machine, I was especially interested as I have a few of their previous ‘portable’ machines as well.

When I inquired about the machine, the owner stated that it turns on perfectly fine; however, they were not able to do much else since it doesn't 'boot up'. They said it was a work machine that was upgraded and promptly left, in its original bag, in their closet, and was only recently discovered.

When I plugged in the machine, it did start up immediately; however, I was presented with a very familiar error message:

Issue

This error message is a clear indicator that the machine had "forgotten" certain configuration settings. In older computers, these Basic Input/Output System (BIOS) settings are stored in Complementary Metal-Oxide-Semiconductor (CMOS) memory. The CMOS requires a small amount of power to retain its contents. The earliest PCs typically had a small battery attached to the motherboard to provide this power. Unfortunately, batteries do not last forever and will fail. In a best-case scenario, the battery simply fails to produce enough power; in worst-case scenarios, the battery loses integrity and leaking electrolyte can damage the motherboard.

Worst Case Example: Compaq Portable II w/ catastrophically failed battery

Fortunately, this Compaq model did not use a battery that could fail in such a manner and the motherboard was clean as a whistle. (see all the disassembly pics at the end to see just how clean this machine is!)

A quick scan of the motherboard also reveals that there isn't a battery per se. For this computer, Compaq chose to use a Dallas Real-Time Clock (RTC) module which stores the relevant information. This device, which is still used to this day in certain applications, performs multiple duties: "nonvolatile" RAM, certain Read-Only Memory (ROM) operations, and real-time clock operations. The one "gotcha" to this device is that the RAM is nonvolatile only because there is a battery hidden inside the module which powers it. Once this internal battery is depleted, it is no longer a nonvolatile RAM device.

The image on the left is the module found on this Compaq’s motherboard. The ‘8849’ is the build date for the module which corresponds to the 49th week of 1988! The image on the right is an X-Ray of this type of module. The power source is the circular item near the middle of the module. (X-Ray image courtesy of https://ardent-tool.com/)

CMOS Repair

In order to get this PC up and running, we will need to address this failed module. There are a few options:

  • Replace the Dallas RTC module with a newer one
  • Carefully modify the existing Dallas module to power it from an external source

As with any repair, there are always trade-offs to consider. In this scenario, I chose to supply external power to the existing module for the following reasons:

  • The existing module is directly soldered to the motherboard. This would require me to desolder the existing module and then solder a new one to the board. While this is not overly difficult, it requires a bit more effort and care than the other option. (If Compaq had placed the module in a socket, I would have opted to replace the module)
  • Many suppliers of compatible Dallas RTC modules are selling “New Old Stock” (NOS) components. Because of this, the batteries may only last for a few years before requiring replacement again.
  • Accessing the module requires a complete disassembly of the unit. (disassembly pictures are in a later section)
    • If I'm going to have to replace a battery again, I strongly prefer to *NOT* have to disassemble the unit
    • NOTE: As you’ll see later, my repair allows for a 60 second battery replacement and requires no tools.
  • This approach is relatively low risk. If I make a mistake accessing the internal battery, I can always resort to replacing the module. If someone else acquires this machine and would rather replace the module, they can still do this.

The following steps were performed to implement the external battery solution:

  1. Carefully remove the top layers of the Dallas module’s epoxy to expose the battery
  2. Disconnect the battery from the module pins
  3. Solder new leads to the power pins
  4. Route wires to a convenient location
  5. Solder coin battery holder to other end of leads, add battery, and verify power

Module Modification

If you've read this far, you are probably wondering: "Why did Charles mention he used a steak knife to effect this repair?" If so, you don't have to look any further! To cut into the module, I chose to use a nice, sharp steak knife! While a Dremel would have been more surgical, it was in the garage and I didn't feel like trekking into the cold to fetch it. As the epoxy/resin material is relatively soft, I had no trouble using a knife to accomplish this task. Once the battery was exposed, a small wire cutter was used to disconnect the existing battery. Next, I prepped some wires, soldered them to the appropriate pinouts, and sealed everything with my glue gun. (Yes, I could definitely have made this cleaner, but I wanted to make sure it was working first…)

Wire Routing & External Battery

I gave myself plenty of extra wire so that I could route it to a spot where it wouldn’t impact the operation of the laptop *and* it could be easily accessible.

(Look how clean this unit is! Amazing)

I then put together the coin battery holder and verified that I was getting proper voltage.

Next, I soldered the coin battery holder to the module wires. (Always use heatshrink!)

Finally, I wrapped the holder in electrical tape and tucked this into the access port on the side of the laptop. This convenient location will allow a battery swap in seconds without the need for any tools. If I add the optional modem, I will need to relocate this; however, it is a perfect location for now.

With the machine physically repaired, the only thing left to do is create the Compaq Diagnostic Disk and reconfigure the device.

Software Reconfiguration

After scouring the internet, I was able to locate the appropriate software and used yet another retro machine to create the necessary diagnostic disk:

I inserted the newly created disk into the Compaq, turned it on, and the configuration utility started. The utility seemingly auto-detected everything.

After saving the configuration, the machine started up w/o any error messages!

It Lives!

After exiting the configuration utility, the computer sprang to life and I was greeted with a DOS prompt. Looking around on this machine, I found some pretty standard late 1980’s business programs. Quite a trip down memory lane.

1-2-3

WordPerfect

PFS Write

Bonus Pics

Below are disassembly and other pictures that didn’t make the cut up above. Take a look at how clean this machine is. Amazing for the age. Also, if you made it this far, there is a hint for my next post. Something else running on that workbench is a bit out of place….. If you recognize it, feel free to point it out and take a guess what we’ll be doing with it!

Compaq Portable Family

The largest machine is only a couple years old. Amazing how much they shrunk down the electronics! Another post is going to dig into that machine and you will be able to see the difference!

Disassembly

 

Natural Language Processing (NLP) Overview Sentiment Analysis Demo – Twitter / Stock

[NOTE – This post covers NLP & Sentiment Analysis at a very high level and is not intended to be an ‘all-inclusive’ deep dive in the subject. There are many nuances and details that will not be discussed in this post. Subsequent posts will dive into additional details]

NLP Overview

While modern computers can easily outperform humans in many areas, natural language processing is one area which still poses significant challenges to even the most advanced computer systems. For instance, while people are able to read Twitter posts about a given company to infer a general sentiment about the company's performance, this type of interpretation is challenging for computers for a multitude of reasons:

  • Lexical Ambiguity – A word may have more than one meaning
    • I went to the bank to get a loan / I went to the bank of the river to go fishing
  • Part of Speech (POS) Ambiguity – The same word can be used as a verb / noun / adjective / etc. in different sentences
    • I received a small loan (noun) from the bank / Could you please loan (verb) me some money
  • Syntactic ambiguity – A sentence / word sequence which could be interpreted differently.
    • The company said on Tuesday it would issue its financial report
      • Is the company issuing the report on Tuesday or did they talk about the report on Tuesday?
  • Anaphora resolution – Confusion about how to resolve references to items in a sentence (e.g. pronoun resolution)
    • Bob asked Tom to buy lunch for himself
      • Does “himself” refer to Bob or Tom?
  • Presupposition – Sentence which can be used to infer prior state information not explicitly disclosed
    • Charles’ Tesla lowers his carbon footprint while driving
      • Implies Charles used to drive internal combustion vehicles

For reasons such as those shown above, NLP can typically be automated quite well at a "shallow" level but typically needs manual assistance for "deep" understanding. More specifically, automation of Part of Speech tagging can be very accurate. (e.g. >90%) Other activities, such as full-sentence semantic analysis, are nearly impossible to achieve accurately in a fully automated fashion.

Sentiment Analysis

One area where NLP can be, and is, successfully leveraged is sentiment analysis. In this specific use case of NLP, text/audio/visual data is fed into an analysis engine to determine whether positive / negative / neutral sentiment exists for a product/service/brand/etc. While companies would historically employ directed surveys to gather this type of information, that approach is relatively manual, occurs infrequently, and incurs expense. As sites like Twitter / Facebook / LinkedIn provide continuous real-time streams of information, applying NLP to these sources can yield relevant sentiment information in a far more useful way.

Figure 1 – Twitter Sample Data (Peloton)

In the screenshot shown above, Twitter posts have been extracted via their API for analysis on the company Peloton. A few relevant comments have been highlighted to indicate posts that might convey positive or negative sentiment. While it is easy for people to differentiate the positive / negative posts, how do we accomplish automated analysis via NLP?

While there are numerous ways to accomplish this task, this post will provide a basic example which leverages the Natural Language Tool Kit (NLTK) and Python to perform the analysis.

The process will consist of the following key steps:

  • Acquire/Create Training Data (Positive / Negative / Neutral)
  • Acquire Analysis Data (from Twitter)
  • Data Cleansing (Training & Analysis)
  • Training Data Classification (via NLTK Naive Bayes Classifier)
  • Classify Analysis (Twitter) data based on training data
  • Generate Output
  • Review Output / Make process adjustments / “Rinse and Repeat” prior steps to refine analysis

Training Data

As previously discussed, it is very difficult for a computer to perform semantic analysis of natural language. One method of assisting the computer in performing this task is to supply training data which provides hints as to what belongs in each of our classification "buckets". With this information, the NLP classifier can make categorization decisions based on probability. In this example, we will leverage NLTK's Naïve Bayes classifier to perform the probabilistic classifications.

As we will primarily be focused on Positive / Negative / Neutral sentiment, we need to provide three different training sets. Samples are shown below.

Figure 2 – Training Data

Figure 3 – Sample Neutral Training Data

NOTES:

  • The accuracy of the analysis is heavily reliant on relevant training data. If the training data is of low quality, the classification process will not be accurate. (e.g. “Garbage In / Garbage Out”)
  • While the examples above are pretty "easy", not all training data is simple to determine, and multiple testing iterations may be required to improve results
  • Training data will most likely vary based on the specific categorization tasks even for the same source data
  • Cleaning of training data is typically required (this is covered somewhat in a following section)

Acquire Analysis Data

This step will be heavily dependent on what you are analyzing and how you will process it. In our example, we are working with data sourced from Twitter (via its API) and we will be loading it into the NLTK framework via Python. As the training data was also based on Twitter posts, we used the same process to acquire the initial data for the training data sets.

Acquiring Twitter feed data is relatively straightforward in Python courtesy of the Tweepy library. You will need to register with Twitter to receive a consumer key and access token. Additionally, you may want to configure other settings, such as how many posts to read per query, maximum number of posts, and how far back to retrieve.

Figure 4 – Twitter API relevant variables

Once you have the correct information, you can query Twitter data with a few calls:

Figure 5 – Import tweepy Library

Figure 6 – Twitter Authentication

Figure 7 – Query Tweets

As NLTK can easily handle Comma Separated Value (CSV) files, the Twitter output is written out to file in this format. While Twitter returns the User ID, as well as other information related to the post, we only write the contents of the post in this example.
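Putting the acquisition and export steps together, a sketch might look like the following. The Tweepy calls are illustrative (`search_tweets` is the Tweepy 4.x method name; older versions used `search`), and the keys are placeholders:

```python
import csv
import io

def tweets_to_csv(texts):
    """Write just the post contents (one column per row) to CSV, as in the example."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for text in texts:
        writer.writerow([text])
    return buf.getvalue()

def fetch_tweets(query, max_posts=100):
    """Illustrative Tweepy usage; requires registered Twitter API credentials.
    The exact search method name depends on your Tweepy version."""
    import tweepy  # imported here so the CSV helper above stays dependency-free
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)
    return [t.text for t in
            tweepy.Cursor(api.search_tweets, q=query, lang="en").items(max_posts)]

print(tweets_to_csv(["$PTON great ride today!", "Selling my bike, disappointed"]))
```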

Figure 8 – Sample Twitter CSV export

Once you have a process for acquiring the training / analysis data, you will need to “clean” the data.

Data Cleansing

As you can imagine, if you start with “garbage” data, your classification program will not be very accurate. (e.g. Garbage In = Garbage Out) To minimize the impact of poor quality information, there are a variety of steps that we can take to clean up the training & analysis data:

  • Country/Language Specific – Remove any data that doesn't conform to the language being analyzed. (fortunately the Twitter API provides a language filter which simplifies this)
  • Remove Non-Words / Special Characters – Whitespace, control characters, URLs, numeric values, and non-alphanumeric characters may all need to be removed from your data. This may vary depending on the exact situation, however. (e.g. hashtags may be useful)
  • Remove "Stop Words" – There are many common words in sentences which will not provide any useful context from an analysis perspective. As the probabilistic categorization routines look at word frequency to help categorize the training / analysis data, it is critical that common words, such as the/he/she/and/it, are removed. The NLTK framework includes functionality for handling common stop words. Sample stop words are shown below for reference:

  • Word Stemming – In order to simplify analysis, related words are "stemmed" so that a common word is used in all instances. For example, buy / buys / buying will all be converted to "buy". Fortunately, the NLTK framework also includes functionality for accomplishing this task.
  • Word Length Trimming – Generally speaking, very small words, 3 characters or less, are not going to have any meaningful weight on a sentence; therefore, they can be eliminated.
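The cleansing steps above can be sketched as a single function. The stop-word list and the one-line "stemmer" here are tiny stand-ins for NLTK's `stopwords` corpus and stemming classes:

```python
import re

# Tiny stand-in for NLTK's stopwords corpus.
STOP_WORDS = {"the", "he", "she", "and", "it", "a", "is", "to", "of", "in"}

def clean(text: str) -> list:
    """Apply the cleansing steps: strip URLs, keep alphabetic tokens only,
    drop stop words and very short tokens, and crudely 'stem' -s/-ing endings."""
    text = re.sub(r"https?://\S+", " ", text.lower())   # remove URLs
    tokens = re.findall(r"[a-z]+", text)                # drop numbers/punctuation
    out = []
    for tok in tokens:
        if tok in STOP_WORDS or len(tok) <= 3:          # stop words + short words
            continue
        tok = re.sub(r"(ing|s)$", "", tok)              # naive stemmer stand-in
        out.append(tok)
    return out

print(clean("The stock is soaring! Buying more $TSLA at https://example.com #bull"))
```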

Once the data has been cleansed, training categorization can occur.

Training Data Categorization

As referenced previously, the sentiment analysis is performed by providing the NLP process training data for each of the desired categories. After stripping away the stop words and other unnecessary information, each training data set should contain word / word groupings which are distinct enough from each other that the categorization algorithm (Naïve Bayes) will be able to best match each data row to one of the desired categories.

In the example provided, the process is essentially implementing a unigram based “bag of words” approach where each training data set is boiled down into an array of key words which represent each category. As there are many details to this process, I will not dive into a deeper explanation in this post; however, the general idea is that we will use the “bag of words” to calculate probabilities to fit the incoming analysis data.

For example:

  • Training Data “Bag of Words”
    • Positive: “buy”, “value”, “high”, “bull”, “awesome”, “undervalued”, “soared”
    • Negative: “sell”, “poor”, “depressed”, “overvalued”, “drop”
    • Neutral: “mediocre”, “hold”, “modest”
  • Twitter Analysis Sample Data (Tesla & Peloton)
    • Tesla : “Tesla stock is soaring after CyberTruck release” = Positive
    • Peloton: “Peloton shares have dropped after commercial backlash” = Negative
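The post relies on NLTK's Naïve Bayes classifier; as a from-scratch sketch of the same probability bookkeeping (unigram bag of words, add-one smoothing, uniform priors), consider:

```python
import math
from collections import Counter

# Training "bags of words" from the example above.
TRAIN = {
    "positive": ["buy", "value", "high", "bull", "awesome", "undervalued", "soared"],
    "negative": ["sell", "poor", "depressed", "overvalued", "drop"],
    "neutral":  ["mediocre", "hold", "modest"],
}

VOCAB = {word for bag in TRAIN.values() for word in bag}

def classify(tokens):
    """Unigram Naive Bayes with add-one smoothing and uniform priors.
    Ties (e.g. no known words) fall back to the first label."""
    scores = {}
    for label, bag in TRAIN.items():
        counts, total = Counter(bag), len(bag)
        scores[label] = sum(
            math.log((counts[t] + 1) / (total + len(VOCAB)))
            for t in tokens if t in VOCAB
        )
    return max(scores, key=scores.get)

print(classify(["tesla", "stock", "soared", "cybertruck"]))   # -> positive
print(classify(["peloton", "shares", "drop", "backlash"]))    # -> negative
```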

The unigram “bag of words” approach described above can run into issues, especially when the same word may be used in different contexts in different training data sets. Consider the following examples:

  • Positive Training – Shares are at their highest point and will go higher
  • Negative/Neutral Training – Shares will not go higher, they are at their highest point

As the unigram (one) word approach looks at words individually, this approach will not be able to accurately differentiate the positive/negative use of “high”. (There are further approaches to refine the process, but we’ll save that for another post!)

IMPORTANT – The output of this process must be reviewed to ensure that the model works as expected. It is highly likely that many iterations of tweaking training data / algorithm tuning parameters will be required to improve the accuracy of your process.

Analysis Data Categorization

Once the training data has been processed and the estimated accuracy is acceptable, you can iterate through the analysis data to create your sentiment analysis. In our example, we step through each Twitter post and attempt to determine if the post was Positive / Negative / Neutral. Depending on the classification of the post, the corresponding counter is incremented. After all posts have been processed, the category with the highest total is our sentiment indicator.
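The tallying logic can be sketched as follows (the toy classifier here stands in for the NLTK-trained one):

```python
from collections import Counter

def overall_sentiment(posts, classify):
    """Tally each post's predicted label; the most common label wins."""
    counts = Counter(classify(post) for post in posts)
    label, _ = counts.most_common(1)[0]
    return label, counts

# Toy classifier standing in for the NLTK-trained one.
def toy(post):
    return "positive" if "soar" in post else "negative"

label, counts = overall_sentiment(["soaring!", "soar high", "dropping fast"], toy)
print(label, dict(counts))
```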

Generate Analysis

Once all of the pieces are in place, you can run your process and review the output to validate your model. To simplify the testing, I created a fictitious phrase [FnuaFlaHorgenn] and posted a handful of tweets referencing it. I then ran the program to verify that it correctly classified the posts.


Figure 9 – Sample Twitter Post

Figure 10 – Twitter API Extract


Figure 11 – Program Output (Bearish)

Conclusion

While NLP is not perfect and has limits, there are a wide variety of use cases where it can be applied with success. The availability of frameworks, such as NLTK, make implementation relatively easy; however, great care must be taken to ensure that you get meaningful results.

Performance Testing Oracle EPM Cloud Applications via Smart View / EPM Automate / Fiddler


Overview

Load based application performance testing is critical for ensuring that your application is performing optimally throughout its lifecycle. Having the capability to perform these tests, in an automated fashion, not only allows an organization to capture initial baseline performance metrics but to analyze the impact of future application updates.

Traditional testing tools, such as LoadRunner and JMeter, excel at web application testing as they can accurately recreate a user workflow at varying load levels. (e.g. simulating multiple user sessions concurrently) Unfortunately, these tools have a few drawbacks which may limit their usefulness in some scenarios: a) Products such as LoadRunner are quite costly, b) Significant technical skill is required to create scripts, and c) Scripts may require frequent updates as the web applications and/or test cases evolve.

To mitigate some of these drawbacks while retaining the benefits of application load testing for Oracle EPM Cloud applications, Oracle has added load testing functionality to the EPM Automate tool. This functionality allows for the recording/playback/performance analysis of a Smart View based user session against Oracle Cloud EPM applications. As users can perform just about any activity through Smart View that they would via the web-based interface, this allows for a near real-world performance analysis. While there will be some performance difference between a Smart View sequence and its corresponding web-based sequence, the comparison is close enough to allow for meaningful results. (e.g. business rules triggered via Smart View or the web U/I will not have a performance difference, while page rendering may differ slightly due to the lack of a web browser rendering delay)

To complete a performance testing cycle, there are four key areas of focus:

  • Pre-Requisites – Ensure that your environment is setup for supporting the performance testing process
  • Test Case Recording – The recording is created by leveraging a utility, such as Fiddler, which is installed on the end user’s machine and is capable of recording internet traffic. While the user is working in Smart View, Fiddler captures messages sent to/from the Oracle EPM Cloud application and Smart View.
  • Test Case Replay – After the recording is complete, the EPM Automate utility can be used to read the recording and "replay" the steps performed by the original user against PBCS. The results for each step are saved to file and can be analyzed to determine pass/fail of each step, time elapsed, etc. The utility also allows for the ability to stress test your application by running the recorded session multiple times concurrently to simulate having multiple users in the system at the same time. Depending on the test case, some tweaking may be required to make this work properly. (e.g. having 20 people run the same calc with the same POV wouldn't make sense, etc.)
  • Analysis – While EPM Automate is performing the test case replay(s), it captures performance data. This data is stored and needs to be processed for analysis. Excel is typically used for performing the performance analysis.

Pre-Requisites

You should have the following installed on your computer / available to you:

  • Microsoft Excel with the Oracle Smart View add-in
  • Telerik Fiddler
  • EPM Automate
  • Connection details for the target Oracle EPM Cloud application

In addition to the software requirements, you should also make sure you have the following functional items:

  • Created temporary test users which will be used during the testing process (e.g. If you want to test performance w/ 20 concurrent users, create 20 temp user accounts and/or make available 20 existing user accounts)
  • A test script which can be executed via Smart View which will simulate a “real world” use case. The test case should include all required steps that will be executed from Smart View (e.g. Variables to set, forms to open, calcs, etc.)

Test Case Recording

To create the recording, you will need to ensure that you have: Smart View, Telerik Fiddler, relevant connection details for the Oracle EPM Cloud product, and a script containing the instructions you will follow. Once you have that information, follow the steps below.

  1. Start Fiddler
    1. Once you have all of your prerequisite items prepared, start Fiddler. Navigate through the welcome screens and any update prompts

    2. After the initial welcome screen / update check, you will end up on the primary application screen which contains multiple windows:
      1. Traffic Window – this window shows a real-time activity sequence of HTTP data flowing from your PC to server end points

      2. Request Data – This tab in the upper right pane displays "Request" information for the selected packet

      3. Response Data – This tab in the lower right pane displays the "Response" information for the selected packet

NOTE – Request Data is information sent from your machine to the web server and Response data is the associated reply from the server.

  2. Filter Fiddler Traffic – As you'll quickly realize, your machine is constantly sending information back/forth which Fiddler will see. We do not want all of this extra information in our recording. Perform the following steps to eliminate redundant information:
    1. Close any unnecessary programs – Close all browser windows, Outlook, Excel, Word, etc., etc., etc. The only programs you’ll want to have open are an instance of Excel for Smart View, a browser window for Oracle EPM, and Fiddler.
    2. You may still see data flowing from Windows services or other background processes. If you still see packets appearing in the traffic window, right click on one of the messages, select the Filter Now submenu, and then pick the appropriate “Hide” option. As you will typically see a pattern around the Host (target server), filtering by Host is usually the best choice.

    3. Continue filtering until your traffic window is no longer filling up with unnecessary packets. Also, ensure that Capture Traffic is enabled so that you are actively monitoring for new packets.
  3. Start Excel, connect to Oracle EPM, and start executing your script via the Smart View add-in

As you are working through the steps, you should see activity being captured in Fiddler which corresponds to Oracle Cloud EPM traffic

  4. Export the Fiddler traffic into an HTTP Archive v1.1 file (.HAR)

Notes:

  • Consider creating multiple different recordings to test different use cases. Replaying the exact same script multiple times may not be a good real-world load test. (e.g. Would you really run the exact same business rule / POV concurrently?)
  • If done carefully, you can modify the .HAR file to create unique test cases with minimal effort. (e.g. create clones of a test case which use a different "POV", Run Time Prompt values, business rule, report name, etc.) The example below shows the information passed for executing a Business Rule. If done properly, you can alter the rule name, the cube, POV, and/or RTP members.
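Since a .HAR file is just JSON (per the HAR 1.1 format), cloning a test case with a different rule name can be scripted. The request payload below is a made-up stand-in for the real EPM request body, purely to illustrate the technique:

```python
import copy
import json

def clone_with_rule(har: dict, old_rule: str, new_rule: str) -> dict:
    """Copy a HAR capture, swapping one business-rule name wherever it
    appears in request bodies. postData structure follows the HAR 1.1 format."""
    clone = copy.deepcopy(har)
    for entry in clone["log"]["entries"]:
        post = entry["request"].get("postData")
        if post and old_rule in post.get("text", ""):
            post["text"] = post["text"].replace(old_rule, new_rule)
    return clone

# Minimal illustrative HAR with a hypothetical rule payload.
har = {"log": {"entries": [
    {"request": {"method": "POST",
                 "postData": {"text": json.dumps({"jobType": "RULES",
                                                  "jobName": "AggAll"})}}}
]}}
clone = clone_with_rule(har, "AggAll", "AggEurope")
print(clone["log"]["entries"][0]["request"]["postData"]["text"])
```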

Test Case Replay

Now that you have created a recording file for your test case, you can use EPM Automate to play it back. Perform the following steps to replay the test and capture performance data:

  1. Create a replay user list file – This file is used by EPM Automate to determine how many concurrent tests will be executed. This comma-delimited file contains a list of user accounts, passwords, and the name/location of the HAR file to execute for each user. The sample file shown below will execute the same test case for 10 concurrent users.
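As a sketch of the format (user names, passwords, and paths below are made up; each row is a user account, its password, and the HAR file that user will run):

```
loadtest_user01,SomePassword1,C:\LoadTest\ExecuteRule.har
loadtest_user02,SomePassword1,C:\LoadTest\ExecuteRule.har
loadtest_user03,SomePassword1,C:\LoadTest\ExecuteRule.har
...rows continue through loadtest_user10...
```

Pointing different rows at different .HAR files is how you mix test cases within a single replay session.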

  2. On your testing PC, open a command prompt in Administrator mode

  3. Using EPM Automate, login as a Service Administrator

  4. Using EPM Automate, start the testing session by issuing the replay command.
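Putting steps 3 and 4 together, the session looks roughly like the following (the URL and file names are placeholders; check the EPM Automate command reference for the exact replay parameters supported by your release, such as the session duration):

```
epmautomate login serviceAdmin MyPassword https://example-epm.oraclecloud.com
epmautomate replay replay_users.csv duration=30
epmautomate logout
```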

Notes:

  1. Review the data output to the screen and/or to a specified output file. It should look similar to:

For each step in your test case, and for each user, you will see a duration, username, time, HTTP status code, and action details.

Test Result Analysis

While you can manually review the output data shown above, if you want to compare multiple test iterations, you will probably need to pull the data into Excel for further analysis. For instance, you may have test output for 1 / 5 / 10 / 15 / 20 concurrent users so that you can see how well a process will scale as concurrent users are added.

To accomplish this comparison, you will have to do a little work in Excel, as the data returned from EPM Automate isn’t consistently delimited in the output. (e.g. Business Rule executions have multiple values in one column that need to be split to find the actual Essbase provider execution times.)
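As a sketch of the kind of clean-up involved (the delimiter and field name here are hypothetical; inspect your own output to find the real pattern), a small VBA helper can split the combined column apart:

```vba
' Hypothetical clean-up helper: split a combined "Object / Addl" value
' into parts and pull out the provider execution time.
' The ";" delimiter and "TotalProviderTime" label are assumptions --
' adjust them to match your actual EPM Automate output.
Sub ExtractProviderTime()
    Dim raw As String, parts() As String, i As Long
    raw = Range("G2").Value              ' combined value from the output
    parts = Split(raw, ";")              ' assumed delimiter
    For i = LBound(parts) To UBound(parts)
        If InStr(parts(i), "TotalProviderTime") > 0 Then
            Range("H2").Value = Val(Split(parts(i), "=")(1))
        End If
    Next i
End Sub
```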

To fully analyze the information in the output, you will end up with columns covering:

  • User Experience Duration – This refers to timing around the User Interface and doesn’t necessarily correlate to Business Rule / Essbase performance
  • Client Duration – This metric also reflects U/I client software timing
  • Total Provider Time – This metric is buried in the “Object / Addl” columns of the EPM Automate output. For operations that hit the cube, this number reflects the actual execution time
  • Essbase Duration – While this is reported in the EPM Automate output, it did not seem to provide useful values
  • Smart Push Duration – Time spent on Smart Push operations
  • Business Rule Duration – While it should reflect the execution duration of the business rule, it did not seem to contain meaningful values
  • Form Validation Duration
  • Total I-O Duration / Total Duration – Columns I add to calculate a grand total for an operation which may have durations in multiple categories. (e.g. Essbase, Provider, Smart Push, etc.)

As I typically run tests at several user levels (1 / 10 / 20 / 35), separate worksheets contain the raw data for each of the user groups. A Unique ID is generated so that the same tasks can be matched up across the user tables. (e.g. Execute Business Rule “A” timing is pulled for 1/10/20/35 users)

The summary tab uses the Unique IDs to display a distinct list of activities and the respective timings for each test set.

Finally, this data is used to generate presentation graphs:

Maximizing Excel / VBA Automation Performance

Overview

When automating Excel operations with VBA, there are many ‘tricks’ which can significantly improve performance. Because automations run non-interactively, features that exist primarily for the benefit of an end user can be temporarily disabled, resulting in performance gains. The following Excel settings fall into this category:

Excel Application Properties

  • Calculation
  • EnableEvents
  • ScreenUpdating
  • EnableAnimations
  • DisplayStatusBar

Calculation

One of the best features of a spreadsheet is its magical ability to instantly update the data contained in worksheet cells. While these real-time updates are useful when users are working interactively in an Excel document, the feature robs performance and isn’t necessary when executing automation processes. By changing the Application.Calculation property, we can toggle the calculation mode between Automatic (xlCalculationAutomatic), Manual (xlCalculationManual), and Semiautomatic (xlCalculationSemiautomatic).

Toggling between Manual and Automatic during your VBA code execution will typically result in significant performance gains.

Figure 1 – Disable Automatic Calculations in Excel

Figure 2 – Enable Automatic Calculations
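Combining the two figures, a typical pattern is to capture the current mode, switch to manual for the duration of the work, and then restore it (a minimal sketch):

```vba
Sub RunWithManualCalculation()
    Dim originalMode As XlCalculation
    originalMode = Application.Calculation        ' remember the current mode
    Application.Calculation = xlCalculationManual ' disable (Figure 1)

    ' ... perform bulk cell updates here ...

    Application.Calculate                         ' one recalculation at the end
    Application.Calculation = originalMode        ' restore (Figure 2)
End Sub
```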

To illustrate the potential performance difference, I created a simple VBA routine to generate a 22×22 grid of numbers. Formulas at the bottom of the sheet sum up the numbers in the grid. The numbers in the grid are updated multiple times based on a user-supplied value. When the grid is generated with automatic calculation enabled versus disabled, there is a significant performance difference. *
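The actual routine is included in the workbook attached at the end of this post; a stripped-down sketch of the idea looks like this:

```vba
' Illustrative version of the grid test: fill a 22x22 grid repeatedly
' and report the elapsed time so runs with different settings can be compared.
Sub TimeGridFill()
    Dim t As Double, pass As Long, r As Long, c As Long
    t = Timer
    For pass = 1 To 10                   ' regenerate the grid repeatedly
        For r = 1 To 22
            For c = 1 To 22
                Cells(r, c).Value = pass * r * c
            Next c
        Next r
    Next pass
    Debug.Print "Elapsed: " & Format(Timer - t, "0.00") & " seconds"
End Sub
```

Run it once with automatic calculation on, then again after switching to manual, and compare the two elapsed times.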

Figure 3 – Calculation Manual vs Automatic

As the size of the grid and the number of required calculations increase, performance will continue to decline. Temporarily switching to manual calculation will significantly boost performance on sheets that make use of formulas. (e.g. just about every spreadsheet on the planet)

*IMPORTANT NOTE – Performance metrics will vary from machine to machine depending on the system’s hardware. As a general rule, these settings will almost always improve performance, and the amount of improvement tends to be inversely proportional to the machine’s speed. (e.g. the slower the machine, the larger the improvement) Even if the improvement isn’t dramatic on your machine, you should still consider implementing these steps if other users will work with the Excel files.

EnableEvents

In Excel, events can trigger actions and/or programming logic. For instance, when you change a value in a cell, a “changed” event fires. Events are one of the ways in which Excel “automagically” reacts when underlying data is altered, such as updating formula-based cells. Disabling events is just as easy as adjusting the calculation mode:

Figure 4 – Disable / Enable Excel Events
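The pattern mirrors the calculation toggle (sketch only):

```vba
Application.EnableEvents = False   ' suppress Worksheet_Change, etc.
' ... bulk updates that would otherwise fire an event per cell ...
Application.EnableEvents = True    ' always restore when finished
```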

Using the same 22×22 grid of numbers from the previous example, testing contrasted the difference between enabling and disabling events. Disabling events resulted in a 50% improvement in processing time on a modern quad-core i7 machine!

ScreenUpdating

The ScreenUpdating setting in Excel is relatively self-explanatory. When it is enabled, the worksheet display area is refreshed whenever cell values are altered. When users are working directly in Excel, this absolutely needs to be enabled; however, it is not needed when executing scripts. Updating this setting is also straightforward:

Figure 5 – Disable / Enable ScreenUpdating
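Again, the toggle itself is a one-liner on each side of the work (sketch):

```vba
Application.ScreenUpdating = False ' stop repainting the worksheet
' ... perform the updates ...
Application.ScreenUpdating = True  ' resume repainting; screen refreshes
```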

Using the same 22×22 grid of numbers from the previous example, testing contrasted the difference between enabling and disabling screen updating. Disabling ScreenUpdating resulted in a modest 10% improvement in processing time on a modern quad-core i7 machine with a high-end graphics card.

EnableAnimations / DisplayStatusBar

The final two items, EnableAnimations and DisplayStatusBar, typically do not result in significant gains; however, they can help if you need to squeeze every possible ounce of performance out of your Excel/VBA.

EnableAnimations is another user-interface-related performance setting. It disables only certain visual elements (e.g. user interface animations for certain actions) as opposed to all screen updates. Typically, this setting will not make a significant impact; however, it is better than nothing in situations where you cannot leverage the ScreenUpdating setting. Like many of the previously discussed settings, it is a simple On/Off (True/False) setting:

Figure 6 – Disable / Enable Application Animations

DisplayStatusBar is yet another user interface setting that may yield minimal gains. Unless your Excel application relies heavily on the status bar, it is not likely to be worth the effort. This is also an On/Off (True/False) setting:

Figure 7 – Disable / Enable Status Bar
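Both of these last two settings follow the same toggle-and-restore shape (sketch):

```vba
Application.EnableAnimations = False  ' skip UI animations
Application.DisplayStatusBar = False  ' hide the status bar
' ... processing ...
Application.DisplayStatusBar = True   ' restore both when done
Application.EnableAnimations = True
```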

 

Additional Tips / Suggestions

When leveraging these settings, keep the following in mind:

  • Cumulative Effects – The performance settings can be combined for cumulative improvements! In the previously discussed grid test, enabling all of the performance settings together took the test from 4 seconds to less than 1 second on a modern quad-core i7 computer!
  • Global Scope – Be cognizant that these settings are global: they persist across workbooks and instances of Excel, and can even persist after you close and restart Excel! Be sure to consider this when designing your solutions.
  • Restoring original settings – While these settings are great for performance, they are not great for end users! If your code fails to restore the original values, end users will be negatively impacted! (End users and IT teams alike may remove you from their Christmas mailing lists!)

As some of the settings, such as the Calculation state, are more than a simple On/Off toggle, it is recommended that you read the current value and store it in a variable so that you can correctly reset the application to its original state.

  • Code Robustness – To ensure that the settings are consistently reset, it is imperative that you follow good coding practice and account for errors/exceptions in your code. If you do not implement proper error handling, an end user will eventually hit an unhandled exception, leaving the code in an incomplete state. (e.g. your VBA code tries to delete a file and encounters an error, such as ‘permission denied’) The ‘On Error’ statement should be utilized to give your code a way to handle errors gracefully.
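Pulling the last two points together, a sketch of a well-behaved routine captures the original values up front, runs with error handling enabled, and restores everything in a single clean-up block:

```vba
Sub OptimizedProcess()
    Dim origCalc As XlCalculation
    Dim origEvents As Boolean, origScreen As Boolean

    ' Capture the user's original settings before changing anything
    origCalc = Application.Calculation
    origEvents = Application.EnableEvents
    origScreen = Application.ScreenUpdating

    On Error GoTo CleanUp
    Application.Calculation = xlCalculationManual
    Application.EnableEvents = False
    Application.ScreenUpdating = False

    ' ... main processing here ...

CleanUp:
    ' Runs on success AND on error, so the settings are always restored
    Application.Calculation = origCalc
    Application.EnableEvents = origEvents
    Application.ScreenUpdating = origScreen
    If Err.Number <> 0 Then MsgBox "Error: " & Err.Description
End Sub
```

Because execution falls through to the CleanUp label on success and jumps to it on error, the end user's environment is left intact either way.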

The attached Excel workbook contains sample code as well as the Sample Grid test which has been referenced in this write-up:

Figure 8 – Excel Performance Demo Application

http://www.charlescbeyer.com/SampleCode/2_PerformanceOptimizations.xlsm