Wednesday, September 11, 2013

IBM i: A Competitive Advantage

07/29/2013

 Alison Butterill

Thursday, September 5, 2013

The Green Screen of Death


At an IBM-oriented conference, a group of midrange pros stand chewing the fat between seminars. A senior IT guy at a multinational is bemoaning the demands of his users from both above and below. "They just want a nice, pretty GUI," he says, delivering the latter part in a little girly voice. The group all chuckle at this knowingly.
These are, after all, grown-up business computer people. They laugh because they know the real value of the good old green screen: fast data entry, efficient order processing—all that stuff. Plus, they know that their users' desires are driven by the ephemeral experiences of their personal lives. Lives where cartoons of doggies chasing their tails or smiley dancing bank managers or kiddies playing with balloons glimmer out of EPOS systems, bank ATMs, and numerous other screens everywhere they go.
Our group of veterans finds the idea of adding to this cacophony of animation most amusing. Big Blue never foisted a native GUI onto their big systems. There are many ways to modernize a proper application, and IBM, treating its customers as adults, gives them the choice of how to proceed.
The problem is, though, that many don't proceed at all. And—hands up—I have been a member of a group like this, and I've laughed along at the GUI thing myself. However, it's time to face facts: The number's up for the AS/400's traditional interface. Those familiar green letters on a black background now spell just one word: death.
Culturally speaking, I mean this almost literally. In any modern TV show or film, if a character encounters a legacy-looking computer interface, it's dramatic shorthand for something old, mysterious, inscrutable, and, more often than not, downright dangerous. The makers of sci-fi shows seem particularly keen on this. Lovingly crafted, future-retro scary screens adorn anything remotely alien or dystopian.
The writing was probably on the wall as far back as 1999, when filmmakers the Wachowskis employed a menacing motif of vertical green-on-black Japanese half-width kana computer text in The Matrix. Somewhat ironically, this green rain effect became the RPG programmer's screensaver of choice for many years afterwards.
Given this background, it's fairly surprising that anyone under the age of, say, 35 can be persuaded to operate an old-fashioned-looking application at all. And, if you were an ISV, imagine asking them to go out and actually sell one.
As time goes by, I find more and more anecdotal evidence to back this up. As one example, the other day, my wife and I were waiting for a train in the concourse of our local station. We were near two ATMs for the same bank. The station wasn't particularly busy, so there was no real queue for the machines.
"Watch this," I said, "they always go to the machine on the left." And people kept on doing just that. My wife said, "How do you know?" The answer was simple: The forlorn ATM on the right had an old-school text screen; the one on the left showed a smiling, helpful-looking woman dressed in the livery of the bank. Even older people almost always chose the bright, GUI-fied ATM.
Or take another example. This time, my wife and I were at a department store. The items we'd seen on a previous visit were no longer on display. A helpful assistant said they were out of stock, but she could check whether they had any at the store's other local outlets or whether she could order them. On her EPOS system, she started hopping between a catalogue-type web app (graphical, with photos) and a green screen.
"AS/400," I said to my wife. "It's what he writes about," she explained, apologetically, to the quizzical assistant. The assistant, a mature lady, told us that the younger staff hated the old green screen application. She, on the other hand, still rather liked it, even though she now had to skip from one system to the other. She'd been using it for over 20 years and even showed us how fast she could whizz around it.
"Still," she said, adamantly, "we do need a new system."
I'm not sure what you can do about that. Yes, modernization can be expensive: A decent-sized project can cost around the same amount as the annual wage of one of your IT team. Yes, you need to be careful about the methods you use to make apps fit for the 21st century. But there is a vibrant ecosystem of firms dedicated to the task and a plethora of advice on the subject on this website alone.
That's not to say that apps should be modernised for the sake of it. But a new approach does offer the opportunity to improve and, crucially, future-proof systems. Current skill sets need to be weighed against various approaches. New skills may have to be adopted, and you might even have to involve design-oriented folk.
Of course, in a real-life world of order-fulfilments and supply chain headaches, there will always be other priorities to find. But admit it—it's over for the green screen. To cling to its nostalgic embrace any longer is to consign yourself and your work to history.
There will still be midrange professionals chuckling, patronizingly, over this kind of heresy amongst themselves for some time to come (a few, no doubt, over HD video-conference links on big boardroom touch screens). But one wonders how long it will take them to realize that the joke might, very soon, be on them.

Wednesday, August 21, 2013

IBM Posts Impressive SAP Benchmark with IBM i

IBM i continues to power SAP installations around the world, but admittedly the workhorse operating system on POWER-based servers doesn't scream and shout often enough to keep up the mindshare that the dynamic duo -- IBM i on Power Systems -- deserves. In fact, when IBM i is used to deliver SAP's enterprise goodness, it can handle all the vagaries of SAP with snappy performance . . . and do it at enviable price points.
Here are two relatively new cases in point:

1. New SAP BW Benchmark

SAP just posted a new benchmark for 500 million records running SAP BW Enhanced Mixed Load (BW EML). The OS was IBM i 7.1, using DB2 for i 7.1 with SAP NetWeaver 7.30. And the hardware: an IBM Power 750 Express server with four 4.06 GHz POWER7+ processors (32 cores and 128 threads in total), 32 KB data + 32 KB instruction L1 cache and 256 KB L2 cache per core, 10 MB L3 cache per core, and 512 GB of main memory.
 
With this configuration, IBM achieved 66,900 ad-hoc navigation steps per hour. 
 
This compares quite favorably with -- if it does not outright exceed -- a year-older SAP HANA setup that used two x86-based HP servers, one for the database and one for the application, running SUSE Linux. What's interesting here is how the dual-server setup compares to the single-server Power 750 -- and how the pricing and management efforts might shake out using the two different methods to deliver SAP solutions. (Incidentally, the dual x86 server setup ran against a billion records.)

2. The Cost/Benefit Case for SAP Business All-in-One Deployments on IBM i and Power Systems

Meanwhile, back to the cost and benefits of IBM i with SAP . . . an International Technology Group white paper takes a closer look at the architecture of an IBM i-based system running SAP and the cost of managing a deployment over three years. Basically, no surprise here for IBM i aficionados: While x86-based systems often start out cheaper, over three years the cost of the x86-based system approaches twice that of the IBM i on Power setup.
 
What's crazy to see is that the IBM i-based hardware & maintenance cost is less, the software licensing is less, the software support is less, the facilities cost is less, and of course, the personnel costs are far less -- ITG breaks it all out . . . and then goes quite a bit farther. 
 
If you want to understand the benefits of IBM i on Power -- even if you're not considering SAP -- the ITG white paper isn't a bad place to start. And if you are looking at SAP and IBM i . . . again, put the white paper on your must-read list. 

Sunday, July 21, 2013

Microsoft now has 1 million servers versus 23 AS/400s back in the 1990s

Microsoft now has 1 million servers. That's quite the growth from their supposed 23 AS/400s back in the 1990s. ;)

Tweet by Steve Pitcher, July 2013

Wednesday, July 17, 2013

IBM i Customers Believe IBM i Is 'Future Proof'


Here's a gem: Infor recently completed research into its customer base—70,000 strong—that revealed that its 16,000-plus IBM i-running customers still consider IBM i on Power Systems to be the platform of the future. Of course, Infor's customers typically run one or several of Infor's ERP suites, so these are customers who are deeply committed to serious enterprise-wide solutions.
Twenty-five years after the launch of the AS/400 and an evolution of name changes, Infor reports that IBM i has retained its core values of reliability and low total cost of ownership. Infor's EMEA System i Survey 2013—with results from more than 100 managers and high-level decision-makers—revealed a vote of confidence in IBM i. However, Infor says, the report also flags continuing concerns around skills and cloud.
 
"As the biggest IBM partner with more than 16,000 customers worldwide using the platform, Infor has access to a lot of end user insight,” notes Paul Field, general manager of Infor's System i/IBM i group in EMEA. "It is clear from this research that a lot of businesses are still reliant on IBM i and that the platform is holding its own amongst younger technologies. The reliability, cost-effectiveness and security of the platform combine to make it virtually future-proof at 25 years old. This is in itself quite an achievement but even more impressive is that the platform continues to remain relevant with new updates and investment in capabilities such as mobile or analytics."
 
He adds, "It is clear, however, that some of this investment will need to focus on maintaining a base of skilled staff that will form the basis of this continued ROI in years to come."

Key Findings

  • 71 percent of IBM i platform end-users agreed with the statement, "We believe our System i platform is future-proof." Twenty-two percent were neutral to the statement and only 7 percent disagreed.
     
  • When asked whether their business system is "very reliable," 92.5 percent agreed while the remaining 7.5 percent were "neutral." Side note: None of the respondents running their mission-critical business systems on IBM i disagreed.
     
  • The survey results also support the widely-held belief that IBM i applications can be deployed faster and maintained with fewer staff. When asked whether total cost of ownership of their system compares well with alternatives, nearly two thirds—63 percent—agreed.
     
  • When asked whether users are able to access the data they need to run the business, 71 percent agreed, with the vast majority of the remainder giving a neutral answer. Only 8 percent disagreed -- proving that even a mature business system, kept up to date, will continue to serve business needs very well, Infor says.
     
  • The report shows that the industry is reaching a tipping point on IBM i skills, with 52 percent of respondents saying attracting and retaining critical IT skills is becoming a problem. The role of cloud computing in addressing these issues is also far from clear. When asked whether cloud could help address skills shortages, 53 percent were neutral. Surprisingly, a further 33 percent said cloud could not help.
     
  • Looking ahead, many users indicated that they plan further investment in the platform: 90 percent of respondents consider reporting and analytics, additional functionality, mobile, and 24x7 availability as either a need or a priority for investment.
 
To see additional findings and download the full survey results, check out inforsystemi.com/survey/.
 

Monday, June 24, 2013

IBM AS/400 Turns 25: Will It Last Another 25 Years?


By Sean Michael Kerner
June 24, 2013
Some server operating systems were built to stand the test of time. The IBM AS/400 is one such system.

"Today we call it the IBM i, because the 'I' stands for Integration," Ian Jarman, Business Unit Executive, Power Systems Lab Services & Training at IBM, told ServerWatch. "The integration operating environment runs today on our latest Power systems and also on our latest PureSystems."

The AS/400 (Application System/400) was first introduced by IBM 25 years ago, in June of 1988, and it's a system that is still alive and well today in 2013. Though the core server operating system that constituted the AS/400 is still alive, the name AS/400 as a product brand is not. IBM rebranded what had once been the AS/400 as System i, and then, in 2008, as IBM i.
The original AS/400 operating system was a combination of IBM's System/38 and System/36, which were merged together. Jarman explained that AS/400 was actually the name of the hardware platform, while the operating system was originally known as OS/400.

From its birth 25 years ago, the AS/400 operating system has always been an integrated operating system that includes an IBM DB2 relational database.
In Jarman's view, the biggest change to the platform occurred in 2008, when the IBM i operating system was brought together with IBM's AIX Unix operating system and Linux onto the same Power server systems.

PHP

When the AS/400 was first introduced, the concept of open source software didn't really exist. In 2013, open source is a reality, and IBM i integrates with the open-source PHP language to further extend the platform.
"One of the areas of strongest growth for new applications on the platform is PHP, as people use it to link out to web and mobile applications," Jarman said.

RPG IV

When the AS/400 first debuted, one of the most popular languages for programming on it was RPG. As it turns out, 25 years later, RPG, in its modern RPG IV form, is still alive and well.
Jarman noted that just as the AS/400 has been transformed over the last 25 years, so too has RPG IV.
"RPG is an incredibly efficient transaction processing language," Jarman said.
He added that RPG now works well with other modern languages such as PHP and Java.
"You can put a PHP front-end with an RPG IV backend," Jarman said. "That combination is very popular because people that come from PHP are able to very easily familiarize themselves with the new RPG IV."

AS/400, AIX, Linux

When it comes to the open nature of the Power architecture, the IBM i, AIX and Linux operating systems can all exist on the same server, at the same time.
The multi-OS nature of Power is enabled by way of the PowerVM virtualization technology.
"The vast majority of our IBM i users today are using PowerVM to virtualize their systems," Jarman said. "It's very common for people to run a combination of operating systems because that's the way they can drive the highest efficiency."

The Next 25 Years

While IBM is now celebrating 25 years of the AS/400, it isn't resting on its laurels. There is a planned IBM i 7.2 release set for next year as development and innovation on the platform continue.
One of the areas where Jarman expects IBM i to grow is the PureSystems portfolio. The IBM PureSystems approach is an integrated stack of storage, compute, networking, and applications.
"PureSystems gives us the ability to run IBM i and Windows workloads or Linux on x86 workloads very efficiently together," Jarman said.
From the day that AS/400 debuted 25 years ago to the modern day, Jarman stressed that a key component of the architecture is that it has a technology-independent machine interface.
"Effectively what that does is it protects you from technology change," Jarman said. "It's difficult to predict the future, except to say that in the next 25 years the technology underneath IBM i will fundamentally change."
The promise of the IBM i is that it is able to change as underlying hardware changes. It's a promise that could see the platform survive for the next 25 years.
"Given that we made a big promise of technology independence 25 years ago with the AS/400, and that we delivered on that promise, I'm very confident that people will be running IBM i applications 25 years from now."

Sean Michael Kerner is a senior editor at InternetNews.com, the news service of the IT Business Edge Network, the network for technology professionals. Follow him on Twitter @TechJournalist.

Monday, June 10, 2013

Gartner Publishes 2013 Magic Quadrant for SIEM





Just as surely as spring has established a foothold on Cape Cod, the SIEM Magic Quadrant for 2013 has been published. The news is out, and IBM Security has improved our position as a Leader in the 2013 Magic Quadrant for SIEM (Security Information and Event Management) again, marking the fifth year in a row that IBM Security/Q1 Labs has achieved this leadership position. For the first time, IBM/Q1 Labs is in the top position in the SIEM MQ.
IBM/Q1 Labs also received outstanding scores and improved standings in the 2013 SIEM Critical Capabilities report, which provides numerical ratings of vendors by capability and use case.
Back to bragging: IBM/Q1 Labs is rated #1 (above every other vendor) on “Ability to Execute” (the Y-axis).  This represents overall viability, product/service, customer experience, market responsiveness, product track record, sales execution, operations and marketing execution.
  • IBM/Q1 Labs is rated above major competitors (McAfee/Nitro, Splunk, LogRhythm, and RSA) on both “Ability to Execute” and “Completeness of Vision” (the X-axis).  Completeness of Vision represents product strategy, innovation, market understanding, geographic strategy, and other factors.
  • IBM/Q1 Labs is rated highest in the Critical Capabilities report for essential elements of Security Intelligence with Big Data: Analytics and Behavior profiling
  • IBM/Q1 Labs is the highest rated in the SIEM Use Case, Product Rating, and Overall Use Case categories.

Besides vendor chest-thumping, what does this mean to our customers? Simply this: the creation and development of the IBM Security Systems division concurrent with the acquisition of Q1 Labs ensured:
  • Customer-facing focus
  • Continued and increased investments in Security Intelligence
  • More opportunities to engage with more customers worldwide
  • More 3rd party partnerships to ensure Big Data collection from more and more sources
  • Resources unique to IBM. And face it, no one knows data like IBM.
 

Thursday, May 30, 2013

IBM i for Enterprise Business


Quantifying the Value of Resilience
The IBM i operating environment has a longstanding track record of maintaining extremely high levels of availability, security and disaster recovery that are – by wide margins – greater than any competitive platform. What is the value of these strengths? Few would dispute that disruption of core enterprise systems can affect the bottom line. Many organizations, however, do not factor costs of downtime into their platform selection processes. This may be a serious mistake. Business damage due to planned as well as unplanned outages may vary significantly between platforms.
This report presents two sets of three-year cost comparisons for use of IBM i, Microsoft Windows Server Failover Clusters (WSFC), and Oracle Exadata Database Machine to support core enterprise systems in six companies. Comparisons are presented for companies operating supply chains, and for financial services companies with revenues of between $1 billion and $10 billion.
Results uncovered:
  • Costs of downtime – i.e., business costs due to outages – averaged 90 percent less for use of IBM i than for Windows server clusters, and 71 percent less than for Oracle Exadata. This calculation is for planned outages and unplanned outages of less than three hours' duration.
  • Lower IBM i costs of downtime translated into three-year business savings of $2.8 million to $35.3 million compared to use of clustered Windows servers, and $700,000 to $8.6 million compared to use of Oracle Exadata.
  • Risk exposure to severe unplanned outages of 6 to 24 hours' duration is also significantly lower for use of IBM i. These calculations, which employ a standard probability/impact methodology, indicate that risks of severe business damage for use of IBM i average 93 percent less than for use of clustered Windows servers and 73 percent less than for use of Oracle Exadata. These variances translated into $257,000 to $7.43 million in higher risk exposure for use of clustered Windows servers and $56,000 to $1.69 million for use of Oracle Exadata.
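The probability/impact methodology behind these risk figures can be sketched in a few lines. The numbers below are purely illustrative and are not taken from the ITG study; only the arithmetic mirrors the general method of weighting outage cost by likelihood.

```python
# Sketch of a standard probability/impact calculation:
# annualized risk exposure = outage probability x business impact,
# summed over the comparison period. All figures are hypothetical.

def risk_exposure(annual_outage_probability, cost_per_hour, outage_hours, years=3):
    """Expected business damage over the comparison period."""
    return annual_outage_probability * cost_per_hour * outage_hours * years

# Hypothetical figures for a severe 12-hour unplanned outage at $40,000/hour:
exposure_platform_a = risk_exposure(0.05, 40_000, 12)   # 5% chance per year
exposure_platform_b = risk_exposure(0.20, 40_000, 12)   # 20% chance per year

print(f"Platform A: ${exposure_platform_a:,.0f}")
print(f"Platform B: ${exposure_platform_b:,.0f}")
```

The spread between the two platforms comes entirely from the probability term, which is why architectural reliability differences compound into the large dollar variances the report describes.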
Comparisons are based on use of IBM i 7.1 with IBM PowerHA SystemMirror for i high availability clusters on latest-generation Power Systems; Windows Server 2008 R2, SQL Server 2008 R2 and WSFC on latest-generation Intel E5- and E7-based platforms; and current Oracle Exadata models with Oracle 11g Database including Real Application Clusters (RAC).
Lower costs of downtime and risk exposure for use of IBM i are due to fundamental differences in architecture and technology.

Thursday, April 25, 2013

IBMi25 Not Slowing Down


Number six is "Subsystems - Hostel or hotel?" 
 
Here's my favorite excerpt from the chapter: 
 
IBM i subsystems are the business hotel rooms of the operating system world. Within one image, subsystems isolate database and application workloads, matching resources and priorities to business service goals. They are, very simply, designed for business.
 
More like hostels, x86 operating systems are not designed to isolate workloads; if one process fails, it may affect another. To avoid conflicts, applications and databases are typically run in separate virtual machines or servers—they’re moved to a different hostel.
 
Plus, there's a new video by IBM's chief architect for IBM i, Steve Will: 
 
 
And number seven? "7. Object orientation – May I see your badge, please?"
 
IBM i was designed to be object-based, meaning the operating system will ensure that each of the hundreds of object types entering the system will behave predictably and within the limits of a user's authority. Object-oriented security helps protect IBM i against malware or other malicious attacks and is the foundation for its well-earned reputation as one of the most secure IT systems for business.
 
In any event, there's all sorts of IBM i goodness going on here, and it's worthwhile to check out the celebration, if only to bone up on IBM i awesomeness. As for Facebook likes, IBMi25 is up to nearly 1,100. 

Thursday, February 14, 2013

Database Integration: Get Data from IBM i out and Other DBs In



Database DB2
Written by Marinus Van Sandwyk   
Wednesday, 13 February 2013 00:00

As enterprise applications expand in database requirements and complexity, the need to access multiple databases becomes paramount, requiring multiple applications to co-exist across different platforms.

In POWER environments running legacy IBM i systems, a two-phase approach is required to enable the integration of multiple databases across different platforms.

1. First of all, in order to easily extract data from DB2 for IBM i, you will achieve a lot more by moving to DB2 SQL. You need to convert the database schema from DDS to DB2 SQL (DDL) natively, with minimal impact on the rest of the applications running on the system. A fundamental requirement here is to place the access and extraction processes in the hands of your SQL-literate users, removing dependencies (constraints) and giving users unconstrained access to their business information.
2. Then, using native SQL tools and interfaces on the IBM i, it is possible to access specific files on DB2 and non-DB2 remote databases without the need for additional hardware appliances or proprietary software on the remote database.

Get Data from IBM i Out

The first step is to convert the database schema from DDS to DDL, unlocking a host of additional functionality in the SQE interface. This process of conversion can be achieved by extracting structural metadata directly from the compiled objects. Structural metadata is the information contained within a file that describes the structure of the file, not its contents. It's possible to automatically import the underlying structural metadata of all the files in the original schema into the database without relying on source code. This import function extracts the actual information about the structure of the schema directly from the compiled objects and guarantees that the correct production version of the schema is imported without having to locate the correct source code. Once this process is complete, a full definition of the existing DDS schema and its structure is available within a cloned copy of the database. At this time, the cloned database has no data, just information about its structure.
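The metadata-driven idea above can be sketched generically. The example below uses Python's sqlite3 as a stand-in for DB2 for i (where a tool would read the compiled *FILE object rather than a catalogue pragma), and all table and column names are hypothetical: read a table's structural metadata straight from the live database, not from source code, and regenerate equivalent DDL from it.

```python
import sqlite3

# Minimal sketch: extract structural metadata from the database itself
# and rebuild DDL from it, without ever consulting source code.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE custmast (custno INTEGER, custname TEXT, balance REAL)")

# Pull column metadata directly from the database catalogue, analogous
# to reading the compiled object rather than the DDS source member.
columns = con.execute("PRAGMA table_info(custmast)").fetchall()

# Rebuild DDL from the metadata alone.
col_defs = ", ".join(f"{name} {ctype}" for _, name, ctype, *_ in columns)
ddl = f"CREATE TABLE custmast_new ({col_defs})"
print(ddl)   # CREATE TABLE custmast_new (custno INTEGER, custname TEXT, balance REAL)

con.execute(ddl)  # the cloned schema starts empty: structure only, no data
```

Because the metadata comes from the live object, the regenerated DDL is guaranteed to describe the production version of the schema, which is exactly the guarantee the article attributes to this approach.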

This process can be achieved with or without the need for surrogate logical files. Surrogate logical files mask the change to the underlying database, allowing legacy systems to access the new database files without recompilation. Surrogates are beneficial when new applications are being implemented and legacy applications remain unchanged. Should you, however, aim to leverage the competitive advantage and value of your heritage applications, approximately 80 percent of the lines of code (all lines implementing validations and enforcing data relationships) currently in your legacy applications will eventually end up in the database engine. By eliminating surrogate logical files, you end up with a far more efficient approach to long-term modernization.

Figure 1: Compare the DDS-to-DDL modernization with and without surrogate files.

The next step is to register a new schema using the original schema's structural metadata and to generate the associated DDL statements and database objects from it. This can be done without changing, manipulating, or massaging the original structural data. The aim of this exercise is to build DDL from the cloned DDS structural metadata without changing level IDs in any way, a significant requirement when migrating DDS to DDL in phase 1 because it facilitates an easy, non-disruptive, low-risk process that is entirely transparent to legacy applications.

The new schema is built using the cloned copy of the structural metadata as its source. This newly registered schema will be the schema into which DDL structures are going to be built. The empty schema is based on the original structure, which matches the original DDS schema exactly, except the DDS files are now native DDL files.

To ensure that the rebuild has been successful, "level ID cross-checking" needs to occur. This can be accomplished by pulling the structural metadata from both the original schema and the new schema, as well as the structural metadata in the cloned database. The cross-check ensures that the structural metadata and, specifically, the level IDs are identical. The primary objective of this process is to ensure that the level IDs in the new schema are exactly equal to the level IDs in the old schema. Once complete, it is possible to switch to the new database, which can be done without recompiling application code.
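The cross-check step can be illustrated generically as well. This is a hedged sketch using sqlite3 as a stand-in for DB2 for i, with hypothetical table names: pull the structural metadata of the original and the rebuilt schema and verify the two are identical before switching over, the way the level-ID check verifies that record formats match.

```python
import sqlite3

def schema_signature(con, table):
    """Structural metadata for one table: (name, type, not-null, pk) per column."""
    return [(name, ctype, notnull, pk)
            for _, name, ctype, notnull, _, pk
            in con.execute(f"PRAGMA table_info({table})")]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders_old (ordno INTEGER PRIMARY KEY, qty INTEGER NOT NULL)")
con.execute("CREATE TABLE orders_new (ordno INTEGER PRIMARY KEY, qty INTEGER NOT NULL)")

# The switch to the new schema is only safe if the signatures are equal.
match = schema_signature(con, "orders_old") == schema_signature(con, "orders_new")
print("structures identical:", match)   # structures identical: True
```

Any difference in the signatures would surface here, before application code is pointed at the new schema, which is why the switch can then happen without recompilation.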

Once the cross-check confirms that both schemas are exactly alike, a replication function of your choice (CPYF or custom replication tools) is used to copy the data from the original schema to the new schema. The time taken to transfer the data is purely a function of the volume of data in the schema. Very large schemas can take a few days to replicate, depending on available resources.
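The data-copy step is a straightforward set-based operation. Below is a sketch with sqlite3 standing in for DB2 for i and hypothetical names; on IBM i itself this would be CPYF or an INSERT INTO ... SELECT, as the paragraph above notes.

```python
import sqlite3

# Bulk-copy rows from the original schema's table into the structurally
# identical new one, then verify the row counts agree.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE item_old (itemno INTEGER, descr TEXT)")
con.execute("CREATE TABLE item_new (itemno INTEGER, descr TEXT)")
con.executemany("INSERT INTO item_old VALUES (?, ?)",
                [(1, "widget"), (2, "gadget"), (3, "sprocket")])

# A single set-based copy; the structures have already been cross-checked.
con.execute("INSERT INTO item_new SELECT * FROM item_old")

old_rows = con.execute("SELECT COUNT(*) FROM item_old").fetchone()[0]
new_rows = con.execute("SELECT COUNT(*) FROM item_new").fetchone()[0]
print(old_rows, new_rows)   # 3 3
```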

Once the data has been copied across, the replication function should continue to check that the two schemas are in sync, ensuring that any changes made to the content of the original schema have been replicated immediately to the new schema.

Now you should set up a testing environment, running the original application against the new schema in parallel with the production copy running against the original schema. During this process, there should be no interruptions or changes to the original application.

The database architect can now maintain the new DDL schema and add constraints, keys, triggers, and anything else necessary to enhance the functionality of the new database and its associated dictionary. This will usually be an ongoing exercise, as you gradually start exploiting the incredible power of the SQL interface on DB2.
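Moving validation rules into the database engine, as described above, can be sketched generically. This example uses sqlite3 (standing in for DB2 for i, where the equivalent would be CHECK constraints, referential constraints, and triggers added via ALTER TABLE and CREATE TRIGGER); table and column names are illustrative.

```python
import sqlite3

# Once the schema is DDL-defined, the engine itself can enforce rules
# that previously lived in application code.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE invoice (
        invno  INTEGER PRIMARY KEY,
        amount REAL CHECK (amount >= 0)   -- validation enforced by the engine
    )
""")

con.execute("INSERT INTO invoice VALUES (1, 100.0)")     # passes the check
try:
    con.execute("INSERT INTO invoice VALUES (2, -5.0)")  # violates the check
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print("negative amount rejected:", rejected)   # negative amount rejected: True
```

Every application path, old or new, now hits the same rule, which is the point of pushing validations down into the database.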

Cross-checking capabilities should be introduced in the background to continuously check that the structure of the schema and the structure as defined in the new database remain exactly alike. This helps to identify changes that may have been made to the schema's structure via a "backdoor," which could cause validation errors and data loss problems in the future.

Finally, the stage has been reached where the new schema has been completely tested, all cross-checks and data comparisons are complete, and everything is working to the user's satisfaction. The old DDS schema can now be phased out, and the original unchanged application can be switched over to the new DDL schema. The schema has been converted from DDS to DDL, and it has all been accomplished within a couple of days, with absolutely no application downtime and no risk to business continuity.

And Other DBs In

Once you have your DB2 databases defined using DDL, with long table (file) names and long column (field) names enabled, your users will be presented with a modern-looking database, providing a solid foundation for you to use IBM i and DB2 as your consolidation platform.
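The "modern-looking database" point above is simply that DDL permits long, descriptive table and column names. A sketch, again using sqlite3 purely to show the long-name DDL (on DB2 for i you would typically also attach short system names for legacy programs); all names are hypothetical.

```python
import sqlite3

# Long, self-describing names in place of cryptic 10-character DDS names.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE customer_master (
        customer_number      INTEGER,
        customer_name        TEXT,
        outstanding_balance  REAL
    )
""")
con.execute("INSERT INTO customer_master VALUES (1001, 'Acme Ltd', 250.75)")

row = con.execute(
    "SELECT customer_name, outstanding_balance FROM customer_master"
).fetchone()
print(row)   # ('Acme Ltd', 250.75)
```

For SQL-literate users, queries against names like these are self-documenting, which is much of what makes the consolidated database approachable.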

For far too long, we have been guilty of cowering in the corner, allowing SQL Server and other tools to take over work that IBM i and DB2 could do better. The IBM i platform is significantly better suited to serve as a consolidation platform, processing data from any other platform.

A variety of options exist here, and the most value can be unlocked by leveraging the inherent database processing capabilities of DB2 for i and RPG IV. A recent and powerful development, ROA (Rational Open Access for RPG), allows you to develop a device handler for any non-DB2 database connection, accessing the contents of other non-DB2 databases as if they were tables in RPG IV on IBM i. You can process and consolidate data, and produce reports and output from multiple sources. The potential of this approach is limitless, although it does require specialist programming skills.

Alternative tools, for example DB-GATE from RAZ-LEE, are available that allow access to non-DB2 data sources (SQL Server, Oracle, or any other SQL-compliant data source) from IBM i. This enables access to specific files on DB2 and non-DB2 remote databases through natural interfaces such as interactive STRSQL or directly from standard RPG, COBOL, and C programs.

This approach improves access by eliminating the need for SQLPKG on target DB2s. In addition, support for other databases is available:
  • DB2
  • Oracle
  • Microsoft SQL Server
  • PostgreSQL
  • MySQL
  • SQLite
  • Firebird
  • Excel CSV, TXT
Figure 2: Get IBM i data into other databases.
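The cross-database access idea can be sketched generically with sqlite3's ATTACH as a stand-in for querying a remote data source alongside local tables. DB-GATE and ROA are commercial products; this only mirrors the concept, and every name below is hypothetical.

```python
import os
import sqlite3
import tempfile

# Create a "remote" database in its own file.
other_db = os.path.join(tempfile.mkdtemp(), "remote.db")
remote = sqlite3.connect(other_db)
remote.execute("CREATE TABLE prices (itemno INTEGER, price REAL)")
remote.execute("INSERT INTO prices VALUES (1, 9.5)")
remote.commit()
remote.close()

# From the "local" database, attach the remote one and join across both
# in a single SQL statement, just as the consolidation approach describes.
local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE items (itemno INTEGER, descr TEXT)")
local.execute("INSERT INTO items VALUES (1, 'widget')")
local.execute("ATTACH DATABASE ? AS remote", (other_db,))

row = local.execute("""
    SELECT i.descr, p.price
      FROM items i
      JOIN remote.prices p ON p.itemno = i.itemno
""").fetchone()
print(row)   # ('widget', 9.5)
```

The consuming program sees one SQL interface over both sources, which is the essence of using IBM i as the consolidation point rather than shipping its data elsewhere.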
We should recognize that improved performance and functional integration are possible across disparate systems by leveraging IBM i and DB2 capabilities. The IBM i platform is very well-suited to being the integration platform of choice, and using it that way is preferable to the conventional wisdom of years past: extracting data from DB2 for i into SQL Server or other databases for consolidation and then generating reports and other functions in languages inferior to RPG IV.

Thanks to Carol Hildebrandt

The author would like to thank Carol Hildebrandt for her contributions to this article. Carol has over 20 years of international marketing and sales experience, delivering enterprise-wide growth initiatives for IBM Storage & Technology Group, IBM Software Group, and other leading multinational IT brands. Carol was responsible for the launch of IBM PureSystems into emerging markets. She is the consulting Chief Marketing Officer for TEMBO Application Generation focusing on enterprise modernization on POWER Systems running IBM i, and is a contributing editor. LinkedIn profile: au.linkedin.com/in/carolhildebrandt/.


Marinus Van Sandwyk
About the Author:
Marinus Van Sandwyk has almost 30 years of experience on the IBM i platform, having started his career on System/38 as a programmer in 1983. He is the Founder and CTO of TEMBO Technology Lab, the developers of Adsero Optima™ Enterprise Modernization Suite (www.adsero-optima.com). Prior to founding TEMBO, Marinus was the architect behind the CATSe technology, which was licensed globally by IBM Global Services, to render Rapid Recovery/400 services. This product suite allowed for the partitioning of OS/400 prior to the availability of LPAR, and parts of the technology were sub-licensed by Lakeview Technology to deliver their clustering and cluster enablement solutions in 2000. Marinus is keenly interested in clustering, virtualization, SaaS and application resilience, as an integral part of application modernization. LinkedIn profile: www.linkedin.com/in/mbogo