5 1/4″ Floppy Fun

Recently I was asked if I could recover some data from some old 5 1/4″ floppy disks. Since I had kept my old 8086 computer around “just in case”, I was more than happy to oblige!  I took the disks and began my challenge-laden journey to get the data onto a modern system.

Bringing ‘ole faithful back to life

After a quick search of my electronic scrapheap, I located the first IBM PC-compatible computer I ever owned, an Epson 8086!  Like the Millennium Falcon from Star Wars, it may not look like much, but it has it where it counts!  (e.g., it has a working 5 1/4″ disk drive!)  For those who are interested, here are the specs of this late-80s powerhouse:

'Ole Faithful
My first PC, an Epson 8086 4.77 MHz w/ 10 MHz “Turbo” mode!
Epson Apex 
CPU    : Intel 8086 Processor, no Math Co-processor
Speed  : 4.77 MHz w/ 10 MHz 'Turbo' Mode
Memory : 640 KB Conventional RAM, no Extended or Expanded RAM
Video  : CGA Graphics - 16 colors in text mode, 4 colors @ 320x200 graphics resolution
HDD    : 20 MB RLL Encoded Hard Drive
Floppy : 360 KB 5 1/4" Drive
Ports  : 9-Pin DIN Keyboard, 25-Pin Serial, 25-Pin Parallel

Just like the Millennium Falcon, however, I had performed a slight modification to this computer: a Microsoft Mach 20 add-on board!

Microsoft Mach 20 Board
CPU    : Intel 80286 Processor
Speed  : 8 MHz 'Turbo' ("Equivalent to 12 MHz 80286's of the day")
Memory : 512 KB RAM, expandable to 3.5 MB

The board works by replacing your 8086 processor with a special adapter that connects the Mach 20 board to the computer’s motherboard.  The additional RAM is recognizable as Extended (first 384 KB) and Expanded (remainder) by including HIMEM.SYS and EMM386.SYS in the MS-DOS config.sys file.
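For illustration, the CONFIG.SYS entries described above would look something like this (paths and driver file names follow the description here and vary by DOS version, so treat this as a sketch):

```
REM Let DOS see the first 384 KB of Mach 20 RAM as Extended memory
DEVICE=C:\DOS\HIMEM.SYS
REM Expose the remaining Mach 20 RAM as Expanded (EMS) memory
DEVICE=C:\DOS\EMM386.SYS
```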

Why am I mentioning this board you ask?  It was the first obstacle between me and my data!

Well I guess the 20, in Mach 20, refers to its lifespan….

I hooked up the keyboard, the monitor, and power cables and hit the hyperdrive (i.e., the power button).  The lights in my workout room momentarily dimmed and the screen flickered with a BIOS self test, perfect!  The memory test systematically ran through the memory: 64 KB, 128 KB, 256 KB.  Just as I thought things were going well, failure!  The test froze at 256 KB and then the screen went dead.  I shrugged it off as a fluke and punched the button again.  Once again, the memory test began: 64 KB, 128 KB, 256 KB, 320 KB, 384 KB.  Once again, I was met with a resounding failure as it froze again.  I looked over the internals of the computer and at initial glance everything appeared to be in order.  As I started feeling the connections, I discovered that the Mach 20 CPU connection cable felt suspect.  On a hunch, I held the cable down and hit the switch again.  64 KB, 128 KB, 256 KB, 320 KB, 384 KB, 448 KB, 512 KB, 576 KB, 640 KB!  Success!  I quickly shut off the machine and placed one of our epoxy-coated 5 LB dumbbells on the connection.  With the Mach 20 situation addressed, I was on the fast path to getting that data….. or not …….

Anyone have a 9-PIN DIN Keyboard laying around?  I didn’t think so ……

As I restarted the computer and watched the memory test complete, I was greeted with the next obstacle: BIOS test failures!  The first error was a lovely 301 Keyboard Error.  As the picture at the beginning of this post should attest, the keyboard has seen better days and it was very, very unhappy.  As I don’t keep spare DIN keyboards laying around and it was around midnight, I didn’t have any options other than to find a way to make it work.  After disassembling the keyboard, I found significant corrosion on the connectors.  I cleaned the corrosion and reassembled what was left of the keyboard.  Fortunately, this was sufficient to allow the BIOS keyboard test to pass!  Later in the process of recovering the data, I discovered that some buttons on the keyboard didn’t work, which made typing difficult, but not impossible.  (HINT: holding ALT and typing a character’s ASCII code on the numeric keypad will put that character on the screen)

Floppy Drive Cable, Where Art Thou?

After awakening from a 20 year slumber, my computer was not happy.

After resolving the keyboard failure, I was presented with yet another BIOS test gift: 601 – Diskette Error.  As the whole point of this exercise was to read data from the floppy disks, this error was especially cringe-worthy.  After staring at the screen momentarily in a mixture of disbelief and annoyance, I took another look at the cabling and discovered a likely explanation for the problem: there wasn’t a floppy drive cable!  Even though it had been over a decade since I last used this machine, I tried to come up with a reason why the cable would be gone and came up completely blank.  (I still don’t know.)  I searched through my bin of spare cables; however, this was a cable I didn’t have, since these cables are nothing like current drive cables.  Defeated, I retreated to my laptop in search of a cable.  In an amazing stroke of luck, Fry’s Electronics stocks these cables and I picked two up in the morning.  (I have no idea why they stock this cable, but I’m sure glad they had it!)  I connected the cable, fired up the machine, and all BIOS tests passed.  The machine booted up DOS and things were finally falling into place.  I inserted the 5 1/4″ floppy, typed A: to access the drive and ……..

Abort, Retry, Fail?

Anyone who has worked in DOS knows this message and the fear it creates as you realize your data is probably corrupt and gone forever.  While I set early expectations that the 5 1/4″ disks might not be recoverable due to their age, the disks were well stored and I was really hoping to get this data.  I wasn’t ready to let it die on the operating room table! (actually my workout room floor, but who is keeping track…)  I futilely tried to access the drive repeatedly.  Each time I was met with the same cold message.  I looked at the clock and almost called the time of death on my data recovery effort, but I wanted to try one more thing.  Looking very closely at the disk, I noticed that it didn’t appear to seat entirely into the disk drive.  Even though the locking mechanism closed, the disk drive started up, and it sounded like it was working, it didn’t look quite right.  I used my left hand to push the disk farther into the drive and then tried to access the disk again.  Success!  Treating the disk as if it were a self-destructing Mission: Impossible message, I quickly copied the entire contents of the floppy disk to the computer’s hard drive.  I basked in the successful data copy for a few moments and then realized that my job wasn’t close to done.

Congratulations, you’ve moved your data from an obsolete 5 1/4″ disk to an obsolete computer, now what?!?!?

While I was able to read the data off the floppy successfully, I still had to find a way to get it to the person needing the data.  This antique didn’t have an Ethernet card (or software to run one), so I couldn’t email, FTP, or directly move the data across a network.  The machine didn’t have a 3 1/2″ disk drive, which would have been more useful.  I didn’t even have a modem.  Two options came to mind:

  • Transplant the hard drive into a more modern computer – While at first this may seem like a dead end, it is actually a very viable option.  Even though the hard drive is an older format which a newer computer wouldn’t recognize, I could move the controller card AND the hard disk into a newer machine with ISA slots and it would work!  (I’ve done it once before)  I grabbed the most modern of the old machines I had laying around, an HP Pentium-based desktop, ripped it open and ……..    *NO* ISA slots.  Unfortunately, it was just new enough to have PCI; therefore, I would not be able to move the hard drive and needed a Plan B.
  • Transmit the data over the Serial / Parallel ports to another computer – Thinking back to my old DOS days, I recalled a program built into the later versions of DOS that would allow you to send files between computers.  After combing through the DOS folder, I located the program:  Microsoft Interlink.

My victory was once again short lived as a few realizations set in:

  • Microsoft Interlink needs to run on both the host AND destination computers – As newer computers don’t ship with this program, I would need to get it copied over to the destination machine.  Of course, if I had a way to copy that file, I wouldn’t need the program in the first place………   After a lot of scouring, I found someone who was hosting the necessary files!
    http://www.pcxt-micro.com/dos-interlink.html
  • Microsoft Interlink needs a “LapLink” or null modem cable – No problem, I know I have a LapLink cable here somewhere………..  I searched through my spare cable bin and my cable was nowhere to be found.  Unfortunately, I gave away a large amount of older computer items just weeks before I took on this task and I suspect it was one of the items I gave away.  (After all, who would need an old LapLink cable, right?)  That’s OK, let’s make our own….

Microsoft Interlnk, the gift that keeps on giving!

Using EDLIN to update the CONFIG.SYS to load INTERLNK driver

A quick scan of the Microsoft website (http://support.microsoft.com/kb/121246/en-us/), as well as the Interlnk help file, provided all the information necessary to wire up the pins.  In a huge stroke of luck, the prototyping cables I use for my circuit work perfectly fit the serial port pins, making connecting the machines quick work.  The only drawback is that, since the cables are relatively short, it was a bit of a challenge to hook everything up.  With the cables connected to the computers, I turned my attention to configuring the software.

As I alluded to earlier, both machines need to have software running for this to work; the following steps outline what needs to happen:

  • Place INTERLNK.EXE and INTERSVR.EXE on both machines
  • Update the CONFIG.SYS file to load a device driver as follows:
    DEVICE=C:\DOS\INTERLNK.EXE /<lpt|com>:<number>  where:
  • <lpt|com> – specify LPT if connecting through a parallel port, COM if connecting through a serial port.
  • <number> – is the port number (i.e., if connecting to LPT2 / COM2, enter a 2 here)
  • Update the AUTOEXEC.BAT to call the INTERSVR.EXE program
    C:\DOS\INTERSVR.EXE
  • Ensure the cable is connected between the machines
  • Restart the machines
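As a concrete sketch of the steps above (assuming the DOS files live in C:\DOS and a serial connection on COM1; note that in practice the machine sharing its drives runs INTERSVR, while the machine that gains the new drive letter loads the INTERLNK driver):

```
REM CONFIG.SYS on the machine that will see the remote drive
DEVICE=C:\DOS\INTERLNK.EXE /COM:1

REM AUTOEXEC.BAT on the machine sharing its drives (the old 8086)
C:\DOS\INTERSVR.EXE
```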
Directly Connecting Computers using Prototype Circuit Board jumper cables

If everything is set up right, during the boot-up process you will see a message indicating that a link has been established, and it will assign a local drive letter to the remote system.

Success at last!

After restarting both machines, Microsoft Interlnk loaded and a drive map was established between the two machines.  I copied the files over to the newer machine and then used an IDE-to-USB adapter to connect that computer’s hard drive to my laptop and send the files off.

Connection Established!

It wasn’t pretty, but in the end everything worked out and I got to justify keeping my old 8086 around!

Since it is up and running….

After I had successfully sent off the files, I went back to the old 8086 to see what was still on the machine.  I found a couple of old but good games that I used to play, and some old code I wrote to add mouse support to a BASIC program.  A nice trip down memory lane!

Screenshots: Centurion · 688 Attack Sub · ASM to Activate Mouse Cursor · Gotta Love CGA

Oracle DRM 11.1.2.3.xxx performance problems?

Have you recently upgraded DRM to the 11.1.2.3 platform and begun experiencing performance problems? If so, you’re not alone. Scanning discussion boards and internet search sites leads one to the conclusion that many people have taken a performance hit since upgrading to this version. While in a status meeting at my current client, I heard grumblings of slow performance and volunteered to take a look at the issue. After investigating, I have found one significant issue with a relatively easy fix!

As DRM is a Microsoft .NET based application, it is not truly compiled until runtime. When this “Just In Time” compilation occurs, the Framework looks at certain configuration files (e.g., machine.config, web.config, app.config) used during the compilation process. These files contain settings covering key configuration topics such as authentication method, logging options, “global” variables, database connection strings, and compilation options.
One of the compilation options, debug, has a significant impact on the operation of the compiled application.

While looking at the configuration files for DRM, I quickly spotted a common mistake made by development teams releasing .NET code: debug=”true”. While debug=true is great for development teams, it is not great for production software. I won’t bore you with all of the details as to why this is, but the following MSDN blog post covers a fair amount of the differences when an ASP.NET application runs in debug mode: Pros and Cons of using Debug=True.

To resolve this issue, search for any app.config, machine.config, or web.config files contained in your DRM product folder.

For instance, you should be able to locate web.config files in the following folders:

  • %EPM System Home%\products\DataRelationshipManagement\client\web-client
  • %EPM System Home%\products\DataRelationshipManagement\client\migration-client

When you open these files, search for the compilation element. If you see debug=”true”, change it to debug=”false”. For the ASP.NET web applications, these changes will take effect immediately, since the Framework will notice that the web.config has changed and will restart the web application.
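With many config files spread across the product folders, a short script can flag the offenders. This is only a sketch (the function name and the regex approach are mine; a full XML parse would be more robust):

```python
import re
from pathlib import Path

def find_debug_true(root):
    """Return the *.config files under `root` whose <compilation> element
    contains debug="true" (case-insensitive)."""
    hits = []
    for path in sorted(Path(root).rglob("*.config")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        # Look for a compilation element carrying debug="true"
        if re.search(r'<compilation\b[^>]*\bdebug\s*=\s*"true"', text, re.IGNORECASE):
            hits.append(path)
    return hits
```

Point it at the folders under %EPM System Home%\products\DataRelationshipManagement and flip any hits to debug=”false”.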

Debug=True

————————-
Updates:
– Oracle has confirmed this is an issue and will address it
– We have seen this issue with the following 11.1.2.3 releases: .500 and .304

Kscope13 Recap

As with previous years, this year’s KScope conference did not disappoint!  The location, people, and content were all great.

While the weather in New Orleans was hot, the close proximity to many local attractions was cool and made it easy to explore the town between sessions.  The Sheraton was but steps from local dining, shopping, the historic French Quarter, and the Mississippi.  If you wanted to travel farther out, you simply had to walk out the front door and hop on a trolley car.

The record-breaking droves of people from around the world who participated in the conference were great!  The organizers, venue staff, ODTUG Board Members, presenters, vendors, and attendees were all welcoming, helpful, and focused on making the conference successful.  The quality people who attend this conference make it special.

Overall, the content was exceptional.  Attendees had their pick of high quality content in all of the regular areas and even a few new areas such as .NET programming!  Oracle was on-hand to discuss the latest and greatest releases, consultants shared some of their secrets and best practices, vendors demonstrated their products, and employees provided real world case studies.

A few of my personal favorite sessions were:

* – Biased as this was MY session!

Speaking of sessions, I once again fielded a session focusing on API programming in EPM.  Unlike the 2012 session which took a shotgun blast approach to all sorts of EPM APIs, this year’s session shined a laser on the HFM COM API.  The goal of the session was to walk the attendees through an entire application build from scratch.

We covered installing the necessary HFM files to your computer, installing a development environment (Visual Studio), creating your first project, adding the necessary DLL file references, implementing key API calls, and compiling/executing your first program.  To further help the attendees get started, I provided a complete sample project that users could use as a starting point.

In case you missed my session(s) or you want a copy of the presentation deck, you can find them here:

2013 Session : https://www.box.com/s/ttapg8dcwebw7z9p0k0x

2012 Session : https://www.box.com/s/a2ea627ae8c6f38c674c

NOTE:  Links to source code are included in the files at the end of the slide decks.

Hope to see you at Kscope14 in Seattle!

Charles

Mardi Gras World
NOLA Trolley Car
What would happen if …..

HFM Subcube Architecture Documents (Pre-11.1.2.2)

Recently I saw a colleague asking for some information about the HFM table layout and how they could interface with the data in the system.  While I recalled there were a few documents on the subject, I could not locate them via Google.  While I’ve provided them at the bottom of this blog post, I thought I’d give a 10 second, high level explanation of the primary tables in regards to the actual HFM data.  (Pre 11.1.2.2)

Generally speaking, when you configured HFM, you provided a database for use by the system.  This database will hold the data for all of the applications created in the environment.  To keep the data separated, each application’s data tables will be prefixed with the application name.  (e.g., SampleApp_CSE_11_2011)

Oracle is kind enough to publish a data model breakdown for most of the tables which can be found here:
http://www.oracle.com/technetwork/middleware/bi-foundation/resource-library-090986.html

Interestingly, they do not cover all of the tables, as they appear to skip over the data tables.  When it comes to the data, there are three primary sets of tables that hold the financial information: the DCE, DCN, and DCT tables.  The DCE tables store <Entity Currency> and <Parent Currency> values and the related adjustments. The DCN tables store data for the remaining members of the Value dimension. The DCT tables store the journal transactions – when the journals are posted, the data values flow to the DCE and/or DCN tables.

When viewing your database, you will see a series of tables that follow a relatively obvious naming convention:
<App_Name>_DCE_<PeriodNum>_<Year>

Elevator Trivia Note: The fact that the Period and Year are part of the naming convention is a key reason why you specify an ending year (and number of periods) during the App build process!

Looking at the structure of these tables, you’ll see the following fields used consistently:

lEntity
lParent
lValue
lAccount
lICP
lCustom1
lCustom2
lCustom3
lCustom4

With the exception of lValue, these are pretty straightforward and act as foreign keys back to the respective tables such as <AppName>_Entity_Item.

lValue – This one is slightly different, as it takes a little work to determine what it is.  For each currency in your system, you will have Base Currency (“Entity Currency”), Adjustments, and Total.  This field is a combination of that AND the currency.  The following formula will help you translate:

<AppName>_CURRENCIES.ItemID * 3 + 15 = “Currency Total”
<AppName>_CURRENCIES.ItemID * 3 + 16 = “Currency Adj”
<AppName>_CURRENCIES.ItemID * 3 + 17 = “Currency”
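To make that arithmetic concrete, here is a small helper that inverts the mapping above (the function name and return format are mine, not part of the product):

```python
def decode_lvalue(lvalue):
    """Translate an HFM lValue into (currency ItemID, value member type).

    Inverts: ItemID * 3 + 15 -> "Currency Total"
             ItemID * 3 + 16 -> "Currency Adj"
             ItemID * 3 + 17 -> "Currency"
    """
    # Subtract the base offset, then split into the ItemID and the member type
    item_id, offset = divmod(lvalue - 15, 3)
    member_type = {0: "Currency Total", 1: "Currency Adj", 2: "Currency"}[offset]
    return item_id, member_type
```

For example, if USD has ItemID 2, then lValue 21 (2 * 3 + 15) decodes to the “Currency Total” member for USD.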

In addition to those fields, the tables all contain one or more Timestamp fields.  Minor conversion is needed on the Timestamp field to make it human readable.  For instance, in SQL Server you would return it as:

cast(TimeStamp2 as smalldatetime)

After these columns, each of the three table sets has a slightly different layout; however, one additional pattern you should see is that there will be incrementing columns that correlate to the number of periods in your application.  One of the columns will hold your actual data while the other will hold a timestamp or some sort of indicator about the data (i.e., Trans Type).

Armed with this knowledge, one thing you can do is search all of your data tables to find orphaned information.  While Oracle gives you a way to clear it out, it doesn’t give you a good way to see what you are clearing.  With a basic query, we can not only find out if we have orphaned records, but what the data is and when it was created/last used/updated!

The following Microsoft SQL query will dynamically go through ALL tables that reference lEntity and report any undefined values.
NOTE: You should theoretically search for Account, ICP, and Custom orphans as well, but this is a start!

_________________________________
-- Get a list of tables that refer to the Entity ID, then search them all for undefined values
-- NOTE: You would probably do the same for lValue, lAccount, lICP, lCustom1 - lCustom4
-- NOTE: lParent corresponds to the entity table just like lEntity
-- NOTE: lSrc and lDes fields should also be checked against the actual tables, i.e. lSrcAccount checked against Accounts

-- Initialize some working memory
DECLARE @tblSearchTable varchar(50)
DECLARE @tblSrcTable varchar(50)
DECLARE @tblSearchField varchar(50)
DECLARE @sql nvarchar(4000)

-- Set up defaults
SET @tblSearchField = 'lEntity'
SET @tblSrcTable = '<APPNAME>_entity_item'

-- Set up a cursor over every table containing the search field
DECLARE tmp_cursor CURSOR
FOR SELECT DISTINCT name FROM sys.tables
    WHERE object_id IN (SELECT object_id FROM sys.columns WHERE name = @tblSearchField)
    ORDER BY name ASC

OPEN tmp_cursor

-- Retrieve the first table name to process
FETCH NEXT FROM tmp_cursor INTO @tblSearchTable

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Report any rows whose key value is missing from the source table
    SET @sql = 'select ''' + @tblSearchTable + ''' as SrcTable, * from [' + @tblSearchTable + '] where ' + @tblSearchField + ' not in (select itemid from [' + @tblSrcTable + '])'
    EXEC sp_executesql @sql

    -- Retrieve the next table name to process
    FETCH NEXT FROM tmp_cursor INTO @tblSearchTable
END

CLOSE tmp_cursor
DEALLOCATE tmp_cursor
__________________________________________________

Additional Reading

Hyperion Subcube Architecture Document

HFM Tuning Guide

Kscope13 is approaching!

With Kscope13 around the corner, I thought I’d share last year’s presentations:

https://www.box.com/s/a2ea627ae8c6f38c674c

For Kscope13, I’m presenting a follow-up to my API session, which will take us from building a .NET desktop app from scratch to consuming web services via a tablet app.  It should be interesting for those looking to expand their product’s functionality via the APIs.

To see a full breakdown of Kscope13, please go here: http://kscope13.com/

See you there!