While working with a client running HPCM 220.127.116.11.110, we were encountering intermittent errors while executing rules:
“javax.persistence.RollbackException: Transaction rolled back because transaction was set to RollbackOnly”
Since the exact same rule could be re-run without any errors, the issue appeared to be application/environment related. (It also appeared to be database related, given the content of the error.)
Reviewing the profitability log provides a much clearer view of the issue.
NOTE: Log would typically be found in a location similar to: \Oracle\Middleware\user_projects\domains\EPMSystem\servers\Profitability\logs\hpcm.log
From the log file snippet above, note the highlighted section:
“[SQL Server] Arithmetic overflow error converting expression to data type smallint.”
What is the Problem?
While this does not match the error message we see in the user interface, this is definitely related as:
The error message was logged at the same time as the error in HPCM
The user name logged corresponded to the user receiving the error
SQL Server rolling back a transaction after a failed insert makes a lot of sense.
(very) Loosely speaking, database transactions exist to protect the integrity of a database. If a program, or user, were to execute a series of statements against a database and one or more fail, what should happen? Should we leave the database in an inconsistent state or should we put the database back and alert the user? While application developers could build this logic into their program code, it is a lot more convenient to give the database a series of steps and let it handle that for us!
In this case, the INSERT statement is part of a transaction. Since the INSERT failed, SQL Server has rolled back the entire transaction and reported that to HPCM.
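(Even more) loosely speaking, that all-or-nothing behavior can be sketched in a few lines of Python against an in-memory SQLite database. The table and column names here are illustrative stand-ins, not HPCM's actual schema:

```python
import sqlite3

# In-memory database standing in for the HPCM repository (illustration only)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stat_detail (java_thread INTEGER CHECK (java_thread <= 32767))")

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("INSERT INTO stat_detail VALUES (100)")    # succeeds
        conn.execute("INSERT INTO stat_detail VALUES (43014)")  # violates the CHECK
except sqlite3.IntegrityError:
    pass  # the whole transaction is rolled back, not just the failing INSERT

# Both rows are gone: the database was returned to its prior, consistent state
print(conn.execute("SELECT COUNT(*) FROM stat_detail").fetchone()[0])  # → 0
```

Note that the first, perfectly valid INSERT is undone too; that is exactly the behavior HPCM reported back as a RollbackException.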
Why are we encountering this problem?
While that explains what happened, why did it happen? The error in the log file has four key clues:
We are attempting to add data to a database table (INSERT INTO)
The table is: HPM_STAT_DETAIL
ARITHMETIC OVERFLOW occurred when trying to store a value in a column
The target column has a Data Type of smallint
In SQL Server, a smallint datatype can hold a maximum value of 32,767. Another look at the error message reveals one number, 43,014, which exceeds 32,767. This value is being stored in a column called JAVA_THREAD. As JAVA_THREAD stores the process id, which is semi-randomly generated, the program works as expected whenever the number returned is < 32,768. If the ID is > 32,767, then things don’t go as well…
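The boundary is easy to demonstrate: Python’s struct module can pack a signed 16-bit integer, which has the same storage as a SQL Server smallint, and 43,014 simply does not fit:

```python
import struct

# "h" is a signed 16-bit integer: the same representation as a SQL Server smallint
struct.pack("<h", 32767)  # the largest value that fits

try:
    struct.pack("<h", 43014)  # the JAVA_THREAD value seen in the log
except struct.error as err:
    print("overflow:", err)  # analogous to SQL Server's arithmetic overflow error
```

Any process id of 32,767 or below packs fine, which is why the exact same rule usually re-ran without error.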
Reviewing the table structure for this table confirms the suspicion.
How to fix this
The easiest fix for this issue is to change the datatype for this column from smallint to int. As the largest int value is well over 2 Billion, this issue should not occur again.
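For reference, a minimal sketch of that change on SQL Server would look like the following. (This assumes no constraints or indexes reference the column; verify against your schema first.)

```sql
-- Widen JAVA_THREAD from smallint (max 32,767) to int (max 2,147,483,647)
-- Preserve the column's existing NULL/NOT NULL setting when running this
ALTER TABLE HPM_STAT_DETAIL ALTER COLUMN JAVA_THREAD int;
```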
LEGALESE – While I have reviewed this change with Oracle and am very confident this will not cause any issues, proceed at your own risk. 🙂
As of 6/26, Oracle has confirmed this as a BUG. No ETA on an update yet, though.
This may be SQL Server specific; I have not evaluated the Oracle schema to confirm the data type used. [The closest Oracle equivalent of smallint would be NUMBER(5).]
Since you can never have enough horsepower, I worked with my good friend, Kevin Margitta of Cartuning, to replace the Eaton Gen IV M90 Supercharger on my 2004 Grand Prix Comp G with an intercooled turbocharger. While Kevin had the hardware worked out, I contributed to the project by creating the software necessary to reprogram the engine’s control module. With the intercooled turbo & software updates in place, this swap easily put down 401 HP to the wheels before I decided I had enough fun at the dyno shop for the day.
A couple of months after the swap was complete, I ran into Don Keefe and asked him if he would like a ride in the car. While he didn’t seem very interested at first, he became very interested after the boost kicked in! Shortly thereafter, the following awesome article was published in High Performance Pontiac:
I’ve been slowly repurposing a spare room in the basement to serve as my electronics project room. Not quite done, but thought I’d post some photos showing current status. Just need to get the electrical finalized and will be good to go!
This past year at Kscope15, I presented a session all about the technical side of Smart View. While I intended to split time equally between infrastructure and API/programming topics, I ended up focusing a bit more on the API/programming side. There are so many ways to improve your Smart View documents by understanding some basic Visual Basic for Applications (VBA) code and leveraging the Smart View API, I simply couldn’t resist!
For those more interested in the infrastructure side of Smart View deployments, do not fear! While the session itself didn’t spend as much time on it, the PowerPoint includes a fair number of slides that provide information on how to automate Smart View deployment, automatically set user defaults, and deploy preconfigured Private and Shared Connections.
The sessions, and slide deck below, provide oodles of information on the following topics:
Recently I was asked if I could recover some data from some old 5 1/4″ floppy disks. As I kept my old 8086 computer laying around “just in case”, I was more than happy to oblige! I took the disks and began my challenge-laden journey to get the data into a modern system.
Bringing ‘ole faithful back to life
After a quick search of my electronic scrapheap, I located the first IBM-PC compatible computer I ever owned, an Epson 8086! Like the Millennium Falcon from Star Wars, it may not look like much, but it has it where it counts! (e.g., it has a working 5 1/4″ disk drive!) For those who are interested, here are the specs to this late 80’s powerhouse:
CPU : Intel 8086 Processor, no Math Co-processor
Speed : 4.77 Mhz w/ 10 Mhz 'Turbo' Mode
Memory : 640 KB Conventional RAM, no Extended or Expanded RAM
Video : CGA Graphics - 16 colors in text mode, 4 colors @ 320x200 graphics resolution
HDD : 20 MB RLL Encoded Hard Drive
Floppy : 360 KB 5 1/4" Drive
Ports : 9-Pin DIN Keyboard, 25-Pin Serial, 25-Pin Parallel
Just like the Millennium Falcon, however, this computer had a slight modification: a Microsoft Mach 20 add-on board!
Microsoft Mach 20 Board
CPU : Intel 80286 Processor
Speed : 8 Mhz 'Turbo' ("Equivalent to 12 Mhz 80286's of the day")
Memory : 512 KB RAM, expandable to 3.5 MB
The board works by replacing your 8086 Processor with a special adapter that connects the Mach 20 board to the computer’s motherboard. The additional RAM is recognizable as Extended (first 384 KB) and Expanded (remainder) by including HIMEM.SYS and EMM386.EXE in the MS-DOS config.sys file.
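As a sketch, the CONFIG.SYS entries would look something like this (the paths assume the drivers live in C:\DOS; your layout may differ):

```
REM Expose the Mach 20 RAM to DOS
DEVICE=C:\DOS\HIMEM.SYS
REM The RAM switch provides Expanded Memory (EMS) from the extended pool
DEVICE=C:\DOS\EMM386.EXE RAM
```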
Why am I mentioning this board you ask? It was the first obstacle between me and my data!
Well, I guess the 20 in Mach 20 refers to its lifespan…
I hooked up the keyboard, the monitor, and the power cables and hit the Hyperdrive. (Power button) The lights in my workout room momentarily dimmed and the screen flickered with a BIOS self test, perfect! The memory test systematically ran through the memory: 64 KB, 128 KB, 256 KB. Just as I thought things were going well, failure! The screen froze at 256 KB and then went dead. I shrugged it off as a fluke and punched the button again. Once again, the memory test began: 64 KB, 128 KB, 256 KB, 320 KB, 384 KB. Once again, I was met with a resounding failure as it froze. I looked over the internals of the computer and at first glance everything appeared to be in order. As I started feeling the connections, I discovered that the Mach 20 CPU connection cable felt suspect. On a hunch, I held the cable down and hit the switch again. 64 KB, 128 KB, 256 KB, 320 KB, 384 KB, 448 KB, 512 KB, 576 KB, 640 KB! Success! I quickly shut off the machine and placed one of our epoxy-coated 5 LB dumbbells on the connection. With the Mach 20 situation addressed, I was on the fast path to getting that data… or not…
Anyone have a 9-PIN DIN Keyboard laying around? I didn’t think so ……
As I restarted the computer and watched the memory test complete, I was greeted with the next obstacle: BIOS test failures! The first error was a lovely 301 Keyboard error. As the picture at the beginning of this post should attest, the keyboard has seen better days and it was very, very unhappy. As I don’t keep spare DIN keyboards laying around and it was ~ midnight, I didn’t have any options other than to find a way to make it work. After disassembling the keyboard, I found significant corrosion on the connectors. I cleaned the corrosion and reassembled what was left of the keyboard. Fortunately, this was sufficient to allow the BIOS keyboard test to pass! Later into the process of recovering the data, I discovered that some buttons didn’t work on the keyboard which made typing difficult, but not impossible. (HINT: ALT + ASCII code will put a character on the screen)
Floppy Drive Cable, Where Art Thou?
After resolving the keyboard failure, I was presented with yet another BIOS test gift: 601 – Diskette Error. As the whole point of this exercise was to read data from the floppy disk, this error was especially cringeworthy. After staring at the screen momentarily in a mixture of disbelief and annoyance, I took another look at the cabling and discovered a likely explanation for the problem: there wasn’t a floppy drive cable! Even though it had been over a decade since I last used this machine, I tried to come up with a reason why the cable would be gone and came up completely blank. (still don’t know) I searched through my bin of spare cables; however, this was a cable I didn’t have since these cables are nothing like current drive cables. Defeated, I retreated to my laptop in search of a cable. In an amazing stroke of luck, Fry’s Electronics stocks these cables and I picked two up in the AM. (I have no idea why they stock this cable, but I’m sure glad they had it!) I connected the cable, fired up the machine, and all BIOS tests passed. The machine booted up DOS and things were finally falling into place. I inserted the 5 1/4″ floppy, typed A: to access the drive and ……
Abort, Retry, Fail?
Anyone who has worked in DOS knows this message and the fear it creates as you realize your data is probably corrupt and gone forever. While I had set early expectations that the 5 1/4″ disk might not be recoverable due to its age, the disk was well stored and I was really hoping to get this data. I wasn’t ready to let it die on the operating room table! (actually my workout room floor, but who is keeping track…) I futilely tried to access the drive repeatedly. Each time I was met with the same cold message. I looked at the clock and almost called the time of death on my data recovery effort, but I wanted to try one more thing. Looking very closely at the disk, I noticed that it didn’t appear to seat entirely into the disk drive. Even though the locking mechanism closed, the disk drive started up, and it sounded like it was working, it didn’t look quite right. I used my left hand to push the disk farther into the drive and then tried to access the disk again. Success! Treating the disk as if it were a self-destructing Mission Impossible message, I quickly copied the entire contents of the floppy disk to the computer’s hard drive. I basked in the successful data copy for a few moments and then realized that my job wasn’t close to done.
Congratulations, you’ve moved your data from an obsolete 5 1/4″ disk to an obsolete computer, now what?!?!?
While I was able to read the data off the floppy successfully, I still had to find a way to get it to the person needing it. This antique didn’t have an Ethernet card (or software to run one), so I couldn’t email, FTP, or directly move the data across a network. The machine didn’t have a 3 1/2″ disk drive, which would have been more useful. I didn’t even have a modem. Two options came to mind:
Transplant the hard drive into a more modern computer – While at first this may seem like a dead end, it is actually a very viable option. Even though the hard drive is an older format which a newer computer wouldn’t recognize, I could move the controller card AND the hard disk into a newer machine with ISA slots and it would work! (I’ve done it once before) I grabbed the most modern old machine I had laying around, an HP Pentium-based desktop, ripped it open and …… *NO* ISA slots. Unfortunately, it was just new enough that it only had PCI; therefore, I would not be able to move the hard drive and needed a Plan B.
Transmit the data over the Serial / Parallel ports to another computer – Thinking back to my old DOS days, I recalled a program built into the later versions of DOS that would allow you to send files between computers. After combing through the DOS folder, I located the program: Microsoft Interlink.
My victory was once again short lived as a few realizations set in:
Microsoft Interlink needs to run on both the host AND destination computers – As newer computers don’t ship with this program, I would need to get it copied over to the destination machine. Of course, if I had a way to copy that file, I wouldn’t need the program in the first place… After a lot of scouring, I found someone who was hosting the necessary files!
Microsoft Interlink needs a “LapLink” or Null modem cable – No problem, I know I have a LapLink cable here somewhere… I searched through my spare cable bin and my cable was nowhere to be found. Unfortunately, I gave away a large amount of older computer items just weeks before I took on this task and I suspect it was one of the items I gave away. (After all, who would need an old LapLink cable, right?) That’s OK, let’s make our own…
Microsoft Interlnk, the gift that keeps on giving!
A quick scan of the Microsoft website (http://support.microsoft.com/kb/121246/en-us/), as well as the Interlnk help file, provided all the information necessary to wire up the pins. In a huge stroke of luck, the prototyping cables I use for my circuit work perfectly fit the serial port pins, making connecting the machines quick work. The only drawback is that since the cables are relatively short, it was a bit of a challenge to hook everything up. With the cables connected to the computers, I turned my attention to configuring the software.
As I alluded to earlier, both machines need to have software running for this to work, the following steps outline what needs to happen:
Place INTERLNK.EXE and INTERSVR.EXE on both machines
Update the CONFIG.SYS file to load a device driver as follows:
DEVICE=C:\DOS\INTERLNK.EXE /<lpt|com>:<number> where:
<lpt|com> – specify LPT if connecting through a parallel port, COM if connecting through a serial port.
<number> – the port number (i.e. if connecting to LPT2 / COM2, enter a 2 here)
Update the AUTOEXEC.BAT to call the INTERSVR.EXE program
Ensure the cable is connected between the machines
Restart the machines
If everything is set up right, during the boot-up process you will see a message indicating that a link has been established, and it will assign a local drive letter to the remote system.
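For example, assuming the files live in C:\DOS and a serial cable on COM1, the relevant lines on the two machines would look something like:

```
REM CONFIG.SYS on the client (the machine that sees the remote drive)
DEVICE=C:\DOS\INTERLNK.EXE /COM:1

REM AUTOEXEC.BAT on the server (the old 8086 sharing its drives)
C:\DOS\INTERSVR.EXE
```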
Success at last!
After restarting both machines, Microsoft Interlnk loaded and a drive map was established between the two machines. I copied the files over to the newer machine and then used an IDE-USB adapter to connect that computer’s hard drive to my laptop and send the files off.
It wasn’t pretty, but in the end everything worked out and I got to justify keeping my old 8086 around!
Since it is up and running….
After I had successfully sent off the files, I went back to the old 8086 to see what was still on the machine. I found a couple of old, but good, games that I used to play and some old code I wrote to add mouse support to a BASIC program I was writing. A nice trip down memory lane!
Have you recently upgraded DRM to the 18.104.22.168 platform and begun experiencing performance problems? If so, you’re not alone. Scanning discussion boards and internet search sites leads one to the conclusion that many people have taken a performance hit since upgrading to this version. While in a status meeting at my current client, I heard grumblings of slow performance and volunteered to take a look at the issue. After investigating, I have found one significant issue with a relatively easy fix!
As DRM is a Microsoft .NET based application, it is not truly compiled until runtime. When this “Just In Time” compilation occurs, the Framework looks at certain configuration files (e.g., machine.config, web.config, app.config) used during the compilation process. These files contain settings covering key configuration topics such as authentication method, logging options, “global” variables, database connection strings, and compilation options.
One of the compilation options, debug, has a significant impact on the operation of the compiled application.
While looking at the configuration files for DRM, I quickly spotted a common mistake made by development teams releasing .NET code: debug=”true”. While debug=true is great for development teams, it is not great for production software. While I won’t bore you with all of the details as to why this is, the following MSDN blog post covers a fair amount of the differences between an ASP.NET application running in debug mode and one running in release mode: Pros and Cons of using Debug=True.
To resolve this issue, do a search for any app.config, machine.config, web.config files contained in your DRM product folder.
For instance, you should be able to locate web.config files in the following folders:
%EPM System Home%\products\DataRelationshipManagement\client\web-client
%EPM System Home%\products\DataRelationshipManagement\client\migration-client
When you open these files, search for the compilation property. If you see debug=”true”, change this to debug=”false”. For the ASP.NET web applications, these changes will take effect immediately since the Framework will notice that the web.config has changed and will restart the web application.
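For reference, the setting lives on the compilation element under system.web; after the change, it would look something like this (any other attributes on the element will vary by installation and should be left alone):

```xml
<system.web>
  <!-- debug="false" lets the Framework compile with release-mode optimizations -->
  <compilation debug="false" />
</system.web>
```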
– Oracle has confirmed this is an issue and will address it
– We have seen this issue with the following 22.214.171.124 releases: .500 and .304
As with previous years, this year’s KScope conference did not disappoint! The location, people, and content were all great.
While the weather in New Orleans was hot, the close proximity to many local attractions was cool and made it easy to explore the town between sessions. The Sheraton was but steps to local dining, shopping, the historic French Quarter, and the Mississippi. If you wanted to travel farther out, you simply had to walk out the front door and hop on a trolley car.
The record-breaking number of people from around the world who participated in the conference was great! The organizers, venue staff, ODTUG Board Members, presenters, vendors, and attendees were all welcoming, helpful, and focused on making the conference successful. The quality people that attend this conference make it special.
Overall, the content was exceptional. Attendees had their pick of high quality content in all of the regular areas and even a few new areas such as .NET programming! Oracle was on-hand to discuss the latest and greatest releases, consultants shared some of their secrets and best practices, vendors demonstrated their products, and employees provided real world case studies.
Speaking of sessions, I once again fielded a session focusing on API programming in EPM. Unlike the 2012 session which took a shotgun blast approach to all sorts of EPM APIs, this year’s session shined a laser on the HFM COM API. The goal of the session was to walk the attendees through an entire application build from scratch.
We covered installing the necessary HFM files to your computer, installing a development environment (Visual Studio), creating your first project, adding the necessary DLL file references, implementing key API calls, and compiling/executing your first program. To further help the attendees get started, I provided a complete sample project that users could use as a starting point.
In case you missed my session(s) or you want a copy of the presentation deck, you can find them here:
Recently I saw a colleague asking for some information about the HFM table layout and how they could interface with the data in the system. While I recalled there were a few documents on the subject, I could not locate them via Google. While I’ve provided them at the bottom of this blog post, I thought I’d give a 10 second, high level explanation of the primary tables with regard to the actual HFM data. (Pre 126.96.36.199)
Generally speaking, when you configured HFM, you provided a database for use by the system. This database will hold the data for all of the applications created in the environment. To keep the data separated, each application’s data tables are prefixed with the application name. (e.g. SampleApp_CSE_11_2011)
Interestingly, they do not cover all of the tables, as they appear to skip over the data tables. When it comes to the data, there are three primary table sets that hold the financial information: DCE, DCN, and DCT. The DCE tables store <Entity Currency> and <Parent Currency> values and the related adjustments. The DCN tables store data for the remaining members of the Value dimension. The DCT tables store the journal transactions – when the journals are posted, the data values flow to the DCE and/or DCN tables.
When viewing your database, you will see a series of tables that follow a relatively obvious naming convention: <App_Name>_DCE_<PeriodNum>_<Year>
Elevator Trivia Note: The fact that the Period and Year are part of the naming convention is a key reason why you specify an ending year (and number of periods) during the App build process!
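As a quick illustration of the convention (the helper function here is mine, not Oracle's, and I'm assuming the DCN and DCT tables follow the same pattern):

```python
def hfm_data_table_name(app_name: str, table_type: str, period: int, year: int) -> str:
    """Build an HFM data table name following <App_Name>_<Type>_<PeriodNum>_<Year>.
    table_type is one of 'DCE', 'DCN', or 'DCT'."""
    return f"{app_name}_{table_type}_{period}_{year}"

print(hfm_data_table_name("SampleApp", "DCE", 11, 2011))  # → SampleApp_DCE_11_2011
```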
Looking at the structure of these tables, you’ll see the following fields used consistently:
With the exception of lValue, these are pretty straightforward and act as foreign keys back to the respective tables such as <AppName>_Entity_Item.
lValue – Is slightly different as it takes a little work to determine what this is. For each currency in your system, you will have Base Currency “Entity Currency”, Adjustments, and Total. This field is a combination of that AND the currency. The following formula will help you translate:
In addition to those fields, the tables all contain one or more Timestamp fields. Minor conversion is needed on the Timestamp field to make it human readable. For instance, in SQL Server you would return it as:
cast(TimeStamp - 2 as smalldatetime)
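A sketch of the same conversion in Python, assuming the Timestamp is stored as an OLE Automation date (days, with fractions, since 30 Dec 1899) and that the "- 2" in the T-SQL bridges the gap between that epoch and SQL Server's 1 Jan 1900 epoch:

```python
from datetime import datetime, timedelta

# OLE Automation dates count days from 30 Dec 1899; SQL Server's datetime
# epoch is 1 Jan 1900, two days later -- hence the "- 2" in the T-SQL cast.
OLE_EPOCH = datetime(1899, 12, 30)

def hfm_timestamp_to_datetime(ts: float) -> datetime:
    """Hypothetical helper mirroring cast(TimeStamp - 2 as smalldatetime)."""
    return OLE_EPOCH + timedelta(days=ts)

print(hfm_timestamp_to_datetime(2.0))  # → 1900-01-01 00:00:00
```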
After these columns, each of the 3 table sets has a slightly different layout; however, one additional pattern you should see is that there will be incrementing columns that correlate to the number of periods in your application. One of the columns will hold your actual data while the other will hold a timestamp or some sort of indicator about the data (e.g. Trans Type).
Armed with this knowledge, one thing you can do is search all of your data tables to find orphaned information. While Oracle gives you a way to clear it out, it doesn’t give you a good way to see what you are clearing. With a basic query, we can not only find out if we have orphaned records, but what the data is and when it was created/last used/updated!
The following Microsoft SQL Query will dynamically go through ALL tables that reference lEntity and then determine undefined values. NOTE: You should theoretically search for Account, ICP, and Custom orphans as well, but this is a start!
-- Get list of tables that refer to the Entity ID and then search them all looking for undefined values
-- NOTE: You would probably do the same for lValue, lAccount, lICP, lCustom1 - lCustom4
-- NOTE: lParent would correspond to the entity table like lEntity
-- NOTE: lSrc and lDes should also be checked against the actual table, i.e. lSrcAccount checked against Accounts
-- Initialize some working memory
Declare @tblSearchTable varchar(50)
Declare @tblSrcTable varchar(50)
Declare @tblSearchField varchar(50)
Declare @sql nvarchar(4000)
-- Set up defaults
set @tblSearchField = 'lEntity'
set @tblSrcTable = '<APPNAME>_entity_item'
Declare tmp_cursor CURSOR
For Select distinct name from sys.tables where object_id in (select object_id from sys.columns where name = @tblSearchField) order by name asc
Open tmp_cursor
Fetch Next From tmp_cursor Into @tblSearchTable
While @@FETCH_STATUS = 0
Begin
-- Retrieve the table name to process, then build and run the orphan search
SET @sql = 'select ''' + @tblSearchTable + ''' as SrcTable, * from [' + @tblSearchTable + '] where ' + @tblSearchField + ' not in (select itemid from [' + @tblSrcTable + '])'
EXEC sp_executesql @sql
Fetch Next From tmp_cursor Into @tblSearchTable
End
Close tmp_cursor
Deallocate tmp_cursor
For Kscope13, I’m presenting a follow-up to my API session, which will take us from completing a .NET desktop app from scratch to consuming web services via a tablet app. It should be interesting for those looking to expand their product’s functionality via the APIs.
Disclaimer : This post will talk about direct database access. Do not perform unless you are sure you know what you are doing and I’m not liable! I’m just passing along information I’ve learned in case it helps you out….
Recently, we decided that we wanted to review all of our reports to determine which ones we could drop and which ones we should keep. Over the past 7 years, things have accumulated and we literally have thousands of reports *gasp* between all of the backups, Work In Progress, and flat out duplication. In order to handle the review, we wanted to assign people to review specific sections of the reports. I was asked to provide a nice Excel ‘report’ that would break down the report / folder hierarchy so that we could assign it.
That is when I realized that I really couldn’t locate a tool / report that would generate such a report listing!
In the old 7.2.5 version of Reports, it was relatively easy to get this information as all of the report information (including actual report content) was stored in a single database table.
With System 9 and Workspace, this is no longer true. With Workspace, the files are stored on the server’s file system in a manner consistent with the hierarchy of your reports/folders. Initially I thought I would be able to use this file system to build my report using simple DOS commands. (i.e. dir /s > reportlist.txt) Unfortunately though, the file names of the reports and folders are not ‘human readable’. (In order to decipher them, you’d have to go into the Workspace database and cross-reference these names to a table, etc.) Fortunately, however, you can build a report listing via SQL. The script below recursively builds a report with the following output:
(NOTE: Actually the script includes more columns of output; however, for display purposes here, I’m only showing the name and description)
Report Name Description
===Actg-YContractual Obligations – Multiple Entities Contractual Obligations
===Annual Validation Summary Annual Validation Summary
===Customers and End User Markets NULL
====== 001 – End Use Markets Book End Use Markets Book
====== 002 – Top Customers Book Top Customers Book
====== 010 – Customers Book Reports – Corp NULL
========= Revenue by Top Customers – All Sectors – Pr Yr Comp Customer Revenues
========= Revenue by Top Customers – All Sectors – Pr Yr Comp -TEST Customer Revenues
====== 020 – Markets Book Reports – Corp NULL
========= Revenue by End User Markets – All Sectors End Use Markets
====== 030 – All Sectors – End Use Markets – Pr Yr Comp NULL
========= End Use Markets – Pr Yr Comp Customer Revenues
SQL 2005+ Query
WITH Reports (PARENT_FOLDER_UUID, CONTAINER_UUID, NAME, DESCRIPTION, Level, Sort, CREATION_DATE, LAST_MODIFIED_DATE) AS
(
-- Anchor member definition: the starting folder's immediate children
SELECT d.PARENT_FOLDER_UUID, d.CONTAINER_UUID, d.NAME, d.DESCRIPTION, 0 as Level, cast(d.NAME as nvarchar(1024)) as Sort, d.CREATION_DATE, d.LAST_MODIFIED_DATE
FROM V8_CONTAINER as d
WHERE d.PARENT_FOLDER_UUID = '0000011b42b29e00-0000-0b4d-c0a8e22a'
UNION ALL
-- Recursive member definition: children of anything already in Reports
SELECT c.PARENT_FOLDER_UUID, c.CONTAINER_UUID, c.NAME, c.DESCRIPTION, r.Level + 1, cast(r.Sort + '|' + c.NAME as nvarchar(1024)), c.CREATION_DATE, c.LAST_MODIFIED_DATE
FROM V8_CONTAINER as c inner join Reports as r on c.PARENT_FOLDER_UUID = r.CONTAINER_UUID
)
SELECT replicate('=', (Level + 1) * 3) + NAME as [Report Name], DESCRIPTION, CREATION_DATE, LAST_MODIFIED_DATE
FROM Reports order by Sort
How this works:
The table V8_CONTAINER holds the main part of the report / folder entry data. In this table you will find Names, Creation Dates, Modified Dates, Owner name, UUID, and Parent UUID information. In order to build a hierarchy, you will need to pay careful attention to the UUID fields. CONTAINER_UUID is the internal name of the object in Workspace. (i.e. non-user friendly name) PARENT_FOLDER_UUID is a reference back to the item’s parent.
With this in mind, if you know the UUID of your starting point, you can simply recurse from there to get a listing of all folders and subsequent reports. In the script above I start off looking for an entry that is the starting point of our reports that we wanted to audit. For your purposes you will need to change this to what your starting point is.
NOTE : If you want to get everything in the table, use REPORTMART. REPORTMART is the parent to everything in the table.
I hope my walk-through wasn’t terribly boring and that it was helpful. I apologize for only providing a script that works in SQL Server; however, that is what we use at our site and there should be enough explanation here for someone relatively proficient with Oracle to write a similar script if they desire.