Sunday, January 18, 2015

Power Systems Inspire New z13 Mainframe

http://www.itjungle.com/tfh/tfh011915-story04.html


Published: January 19, 2015
by Timothy Prickett Morgan

Back in the old days, the mainframe and midrange divisions of IBM rivalled each other almost as much as they took on competition from outside the walls of Big Blue. But since the mid-1990s, when the company first started converging its system lines and made sure they could all run Java and its application server, the different system units of IBM have been collaborating and converging. Now, after selling off its System x division to Lenovo Group last fall, IBM is down to two system divisions within a single IBM Systems group.
The first machine to come out of the new IBM Systems group, which is led by Tom Rosamilia, familiar to the IBM i community as a former general manager in charge of both the Power Systems division and the System z mainframe division, is the System z13 mainframe, which was announced in New York City last Wednesday to much fanfare. The System z13 machine looks to be coming out a little earlier than many had expected, and I think that IBM actually moved the announcement up at some point in recent months. IBM's System z techies were set to divulge all of the feeds and speeds of the new eight-core z13 processor at the heart of the new mainframe at the International Solid-State Circuits Conference, which runs from February 22 through 26. IBM did not provide much in the way of detailed specs for the new z13 chip, but Mike Desens, vice president of System z development in the new IBM Systems group, gave me some insight into the new processor and the systems that wrap around it. As has been the case in the past, the Power and z processors are designed by a single processor team and borrow technologies from each other. This does not, however, mean that IBM is creating a converged processor that can support either the Power or z instruction set. IBM has not done that to date, and to do so would be a Herculean engineering task. It is far easier to have two different chips that share common elements wherever they can.
The new z13 chips are implemented in the 22 nanometer process at the fab in East Fishkill, New York, that is now owned by GlobalFoundries. The System z13 makes use of processors with six, seven, or eight working cores and mixes and matches them to get a varying number of active cores across the product line. There are five different models, which offer scalability from 30 to 141 total cores configurable by end users; the largest machine, the System z13-NE1, actually has 168 physical cores in its refrigerator-sized cabinet. (This 22 nanometer process is the same one used to make the Power8 processor, which comes in one variant with six cores per chip, packaged two chips per socket, and another with a dozen cores on a single die.)
Like other chip makers, IBM uses each manufacturing process shrink to add more transistors, and therefore more features, to the chip. With the z13, IBM had to keep one eye on boosting the single-threaded performance of its core z/OS workloads and the overall scalability of the box to run lots of virtualized workloads, while at the same time goosing the performance of the chip for in-line analytics on transaction processing and making generic Linux workloads run faster. The clock speed that IBM chooses for each System z processor generation is set by the thermal and throughput constraints of the design, and as with the Power chip family, sometimes clock speeds go up and sometimes they come down as IBM pushes performance. With the Power7, IBM was able to double performance while quadrupling the core count from two with the Power6 to eight, even as it cut the clock speed, thanks to a radical redesign of the core. With the z13, IBM is similarly dropping clock speeds and yet boosting single-threaded performance.
To be specific, the z11 chip, which had four cores running at a top speed of 5.2 GHz, was implemented in a 45 nanometer process when it came out in 2010. A single z11 core delivered about 1,200 MIPS of raw computing capacity running at full throttle, as gauged by the mythical measure of mainframe oomph. The z12 chip came out in the summer of 2012, and it had six cores clocking in at 5.5 GHz, with each core delivering about 1,600 MIPS of performance. The z12 chip was etched in a 32 nanometer process, and IBM used the shrink to goose the clock speed by 6 percent and to boost the core count by 50 percent. The z12 chip also had a new out-of-order execution pipeline and much larger on-chip caches to further increase single-threaded performance. The new z13 chip, implemented in a 22 nanometer process, runs at 5 GHz, mainly to cut back on heat, and yet offers about a 10 percent performance bump per core thanks to other tweaks in the core design, including better branch prediction and better pipelining, to name two improvements.
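As a quick sanity check, the generation-over-generation percentages quoted above fall right out of those raw numbers. This is just a back-of-the-envelope sketch; the MIPS-per-core values are the approximations given in the story, not official IBM figures:

```python
# Back-of-the-envelope check of the z11 -> z12 generational gains quoted above.
# MIPS-per-core figures are the story's approximations, not official IBM numbers.
z11 = {"cores": 4, "ghz": 5.2, "mips_per_core": 1200}
z12 = {"cores": 6, "ghz": 5.5, "mips_per_core": 1600}

clock_gain = (z12["ghz"] - z11["ghz"]) / z11["ghz"]       # ~5.8%, the "6 percent"
core_gain = (z12["cores"] - z11["cores"]) / z11["cores"]  # 0.50, the "50 percent"
per_core_gain = z12["mips_per_core"] / z11["mips_per_core"] - 1  # ~33% more MIPS per core

print(f"z11 -> z12: clock +{clock_gain:.1%}, cores +{core_gain:.0%}, "
      f"MIPS per core +{per_core_gain:.0%}")
```

Note that the roughly 33 percent jump in MIPS per core came from more than just the 6 percent clock bump, which is where the out-of-order pipeline and bigger caches earned their keep.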
The z13 chip also has much larger caches, which IBM feels are the best way to secure good performance on a wide variety of workloads that are heavy on I/O and processing. Specifically, each z13 core has 96 KB of L1 instruction cache and 120 KB of L1 data cache. The L2 caches on the most recent generations of mainframe chips are split into data and instruction caches, and in this case have been doubled to 2 MB each. The on-chip L3 cache, which is implemented in embedded DRAM (eDRAM) as on the Power7 and Power8 chips, has been increased by 50 percent to 64 MB, shared across the cores on the die. And the L4 cache that is parked out on the SMP node controller chip in the System z13 has been boosted to 480 MB, a 25 percent increase. The System z13 tops out at 10 TB of main memory, three times that of the predecessor zEnterprise EC12 machine.
All told, says Desens, the changes in the cache hierarchy smooth out the SMP scalability of the system, and a top-end System z13 will have about 40 percent more aggregate MIPS than the largest System zEnterprise EC12 from two and a half years ago. I estimated that zEnterprise EC12 machine at 75,000 MIPS of total capacity, and that puts the new System z13 at around 105,000 MIPS.
To give you a sense of what that might mean in terms of Power8 performance, IBM's own performance documents from the Power4 generation say that to calculate rough equivalent performance on IBM i workloads, you take the MIPS and multiply by seven, which gives you an approximate ranking on the Commercial Performance Workload (CPW) test that IBM uses for OS/400 and IBM i database and transaction processing work. That means a top-end System z13-NE1 model would be rated at about 735,000 CPWs. A 256-core Power 795 using 4 GHz Power7 chips had about 1.6 million CPWs, and a Power E880 with 64 Power8 cores running at 4.35 GHz delivers 755,000 CPWs. Roughly speaking, the Power E880 is delivering 12,000 CPWs per core while the new System z13-NE1 is delivering around 5,200 CPWs per core, at least based on my MIPS estimates and the MIPS-to-CPW ratios. Everything comes down to cases, and the important thing is that both the Power8 and z13 systems offer lots of capacity. (IBM has sophisticated Parallel Sysplex clustering to lash multiple z13 machines into a single compute engine, too, and IBM has not really talked about its DB2 for i Multisystem clustering for about 15 years. But as I have said before, it should.) The other thing to remember is that the performance numbers for the Power 795 and Power E880 have four-way and eight-way SMT turned on, respectively, and this significantly boosts performance on thread-friendly workloads, by as much as 50 percent moving from two to eight virtual threads, according to internal IBM data that I have seen. IBM will very likely increase the SMT virtual threading on future System z processors, and will probably get to eight-way at some point, perhaps with the z14, perhaps with a z13+ if such a thing is ever announced.
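The arithmetic behind those estimates is simple enough to sketch out. To be clear, the 7X multiplier comes from IBM's old Power4-era guidance and the MIPS figures are my own estimates, so treat this as a rough sketch rather than an official comparison:

```python
# Rough MIPS-to-CPW math from the estimates above. The 7X factor is IBM's old
# Power4-era rule of thumb; the MIPS numbers are the author's estimates.
MIPS_TO_CPW = 7

ec12_mips = 75_000                  # estimate for a top-end zEnterprise EC12
z13_mips = ec12_mips * 1.40         # 40 percent more aggregate MIPS = 105,000
z13_cpw = z13_mips * MIPS_TO_CPW    # roughly 735,000 CPW
z13_cpw_per_core = z13_cpw / 141    # 141 user-configurable cores, ~5,200 CPW each

e880_cpw_per_core = 755_000 / 64    # Power E880: ~11,800 CPW per core

print(f"z13: {z13_cpw:,.0f} CPW total, {z13_cpw_per_core:,.0f} per core")
print(f"E880: {e880_cpw_per_core:,.0f} CPW per core")
```

By this yardstick the Power E880 delivers better than twice the per-core throughput of the z13, which is the gap described above, with all the caveats about estimates and SMT settings that go along with it.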
Some z13 workloads are going to run a lot faster than these raw performance estimates imply, and that is because some technologies that have been in the Power chips for years are now making their way into mainframe engines. First, IBM has implemented simultaneous multithreading in the z chips for the first time. SMT is a technique that lets a single physical core present multiple hardware threads, keeping the core's execution pipelines busier by interleaving instructions and data movement from more than one instruction stream. The SMT in the z13 chip is two-way, meaning that it presents two virtual pipelines for the physical pipeline in each core; IBM did two-way SMT in the Power6, four-way SMT in the Power7, and has eight-way SMT in the Power8. As is the case with the Power chips, this SMT is automatically and dynamically configurable based on the workloads. For software that likes threads, these virtual threads can really boost performance. IBM has also added SIMD--that's single instruction, multiple data--vector math units to the z13 chip, also for the first time.
The two-way SMT helps Linux workloads run up to 32 percent faster than on z12 chips, says Desens, and the combination of SMT threading and SIMD units in the z13 can help Java 8 applications get as much as 50 percent more throughput per core. (Those fatter caches and wider pipes into them help a lot here, too.)
The sales pitch for the System z13 machine is interesting in that Big Blue is talking about how a single box can process 2.5 billion transactions per day, and that mobile computing is driving up transaction volumes on an exponential scale. Now that we can look up anything at any time, we do, and this is driving up traffic on back-end databases and transaction processing systems for the companies that are part of our lives such as banks, insurance companies, and such. The ability to use the various kinds of computing to do risk analysis and fraud detection while transactions themselves are being composed and processed is not something that is unique to the System z mainframe. All of the hardware pieces are there to do it on the Power Systems platforms, too. The question is this: Will IBM's marketing point this out, and will it similarly peddle its Power-based systems?
RELATED STORIES

Sunday, January 11, 2015

IBM Reorganizes To Reflect Its New Business Machine

IBM Reorganizes To Reflect Its New Business Machine
Published: January 12, 2015
by Timothy Prickett Morgan
Big Blue did a lot of changing last year, and CEO Ginni Rometty started off this year by making some organizational and personnel changes that reflect the new shape of the company and the opportunities it sees ahead in a global economy that is also undergoing wrenching change. Information technology and the economy have been changing each other for so long that it is hard to say which is cause and which is effect, but what can be said is that IBM has spent more than 10 decades adapting to such changes.
In a memo to IBM employees that was sent out on January 5, Rometty explained the organizational changes as well as the rationale behind them.
"A year ago we laid out our strategy, and said that IBM's investments, acquisitions, divestitures--and our own practices as IBMers--would be reshaped by our strategic imperatives of data, cloud and engagement, underscored by security," Rometty wrote. "The past year has strongly validated our strategy, as clients embrace and invest in these new technologies. Our industry is rapidly re-ordering. And IBM has been moving aggressively--evident in a long list of 'signature moments' through 2014. They included the formation of IBM Watson; the global expansion of SoftLayer's cloud pods; the launch of Power8; the creation of our cloud platform-as-a-service, BlueMix; our $3 billion investment in next-generation semiconductor R&D; the acquisition of Cloudant; the launch of Watson Analytics; our divestitures of x86 servers and semiconductor manufacturing; our enterprise mobility alliance with Apple; our cloud partnerships with SAP and Tencent; and our Big Data partnership with Twitter."
While IBM mashed up Software Group, its very profitable software arm, with Systems and Technology Group, its sometimes profitable and sometimes money-losing hardware unit, back in July 2010 during a massive reorganization that set the stage for Rometty's rise to the top of Big Blue, the company did not organize itself into vertical stacks as it talked about its financials. Software Group and Systems and Technology Group remained distinct businesses, with their own segments and profit and loss statements. With the sale of the System x division to Lenovo Group, which closed in the United States in October and in Europe last week, IBM has seen fit to start talking about its systems business as a whole.
To that end, the new IBM Systems group will include Power Systems and System z mainframe servers, IBM's various tape, disk, and flash storage products as well as operating systems and middleware as a single unit. Tom Rosamilia, who was heading up Systems and Technology Group, will lead the IBM Systems group. The interesting bit to me is that IBM will finally show revenue for systems that actually reflects what it really does derive from its systems business--something it should have done a long time ago. IBM sells integrated stacks of servers, storage, and software to a very large portion of its customers, and if it wanted to show its strength in this regard, it should have perhaps changed the way it reported its revenues many years ago. It will be interesting to see how IBM accounts for the sale of middleware, database, and application software that is not tied to its own systems in some fashion.
As part of the reorganization, IBM is also appointing Steve Mills, the long-time leader of the Software Group and the executive chosen to run the combined software and systems units in recent years, as an executive vice president in charge of software and systems. The title is significant in that Mills is the only executive vice president--all of the other top leaders are senior vice presidents. Even though software assets will be embedded in various IBM groups (including a new IBM Cloud group) and in vertical stacks that also have their own managers (such as the new IBM Healthcare group that was also created), Mills is being tapped to keep track of all of the software assets as a whole and keep everything humming. "We will also look to him to play a leadership role in our most significant technology partnerships and relationships involving clients, countries and our industry," Rometty said.
With Systems and Technology Group gone, IBM Research has to go somewhere, and it will be a free-standing unit led by Arvind Krishna, who is being elevated to senior vice president and director of the R&D arm of Big Blue. He succeeds John Kelly, who now has the position of senior vice president of solutions portfolio and research. Krishna was most recently general manager of development and manufacturing in the Systems and Technology Group; he led the team that created the Power8 processor, ran IBM's database business for many years, and held key technical roles in Software Group and IBM Research.
In his new role spanning research and (I just hate this word) solutions, Kelly is manifesting something that has been happening at IBM for the past two decades: the increasing connection between the research and development that IBM's scientists and researchers perform and the specific business problems the company solves for customers. Indeed, Kelly is the bridge between IBM Research and three new cross-platform business units. They are: IBM Analytics, led by Bob Picciano; IBM Commerce, led by Deepak Advani; and IBM Security, led by Brendan Hannigan. All of these divisions, Rometty explained, will have an analytics component, with software assets as well as professional services tied to them. Each unit, as Rometty put it, "will offer hybrid cloud delivery, and each unit will support open platforms and global ecosystems." I think that means the stacks IBM creates will be available on IBM's own systems as well as those of others and on its SoftLayer cloud and very likely other big clouds.
In some ways, these new stack groups resemble the IBM Watson group that was formed last year and that is being led by Mike Rhodin. IBM is also forming another industry-focused unit called IBM Healthcare, which Rhodin will create using the hardware, software, services, and research available across IBM.
All of the executives of these units as well as Krishna report to Kelly in the new organization chart. Picciano has been general manager of IBM's Lotus groupware business, overall software sales, and development and support for the DB2 database running on Linux, Windows, and Unix platforms, among other jobs. Deepak Advani was CEO at statistical analysis software maker SPSS before IBM acquired it and has most recently been in charge of management tools for systems and cloud. Hannigan was CEO of Q1 Labs, which IBM acquired in October 2011 and which is the basis of its former security division. Rhodin was also a general manager related to software before taking the IBM Watson role, and headed up IBM's Software Solutions Group, which peddled Smarter Commerce, Smarter Cities, business analytics, and social business stacks to customers. Before taking over various IBM system units over the past year, Rosamilia was in charge of IBM's very successful WebSphere middleware business.
The last big change that was announced for 2015 is the creation of the IBM Cloud group, which will be headed up by long-time IBM executive Robert LeBlanc. The SoftLayer cloud, the Cloud Managed Services (CMS) tools derived from SmartCloud and OpenStack, and the BlueMix commercialized version of the open source Cloud Foundry tool will be put into this new group. LeBlanc has been in charge of IBM's Tivoli systems management and WebSphere middleware lines in the past, among many other high-level jobs. It looks like Rosamilia and LeBlanc report directly to Rometty, but that is not clear from her message to employees.
If you are sensing a theme here, aside from change, it is that software has been the route to a senior position at IBM.
"As you can see, IBM software has become an essential element in every part of our company," Rometty said in her note to IBM employees. "Indeed, it is so because our software portfolio is the foundation of the world's core business systems and powers the expanding array of cloud-based solutions we are bringing to clients."
I can't wait to see the new financial presentation reflecting these new groups and units, which IBM will probably use when it reports its fourth quarter 2014 financial results, which should come out by the end of January if history is any guide.

Monday, December 29, 2014

TCO, TCA and Reliability – the 2014 ITG IBM i Studies


November 11, 2014
Two years ago, I wrote a blog about two ITG studies that compared IBM i to our competition in the small and midsized business (SMB) market, and in the Enterprise market. Those studies have recently been refreshed, and I’ve been using charts and data from these new studies as I’ve been traveling talking about the value of IBM i on Power Systems. In today’s blog, I will point out the highlights and give you links so you can get the full studies.
 
The first 2014 study is the ITG study called “IBM i on Power Systems for Midsize Businesses.” The short URL for it is http://bit.ly/IBM_i_ITG2014SMB, and it’s not much of a surprise that it shows, again, that the total cost of ownership for IBM i is significantly less than for the typical x86-based competition. Here’s a key chart:



As with the previous studies, the comparisons are made over a three-year period, where each set of bars represents the entire cost of running a business using only the platforms indicated. You can see that businesses using IBM i and Power Systems cost 49% less to run than businesses on Microsoft Windows Server and x86, and 55% less than on Oracle/Linux on x86. IBM i integration, ease of use, and the powerful DB2 database contribute greatly to the value proposition the platform has had for years.
 
This 2014 study reconfirmed the competitive Total Cost of Acquisition (TCA) we now have with IBM i. While this still surprises many customers, it is valuable information to have when discussing IT investments.


 
Again, IBM i and Power Systems combine to beat the competition on average acquisition costs: by 35% vs. Windows & x86 and by 46% vs. Oracle/Linux & x86. Powerful data to show your business how much IBM i and Power Systems help the bottom line, even when looking only at costs.
 
As in the previous round of studies, in 2014, ITG refreshed its study of the reliability of the platform. The new study is called “IBM i on Power Systems for Enterprise Businesses” because one of the most important aspects of a platform for large clients is “How much money will it cost me when it doesn’t work?” The short URL is http://bit.ly/IBM_i_ITG2014Ent.
 
For this study, ITG looked at large businesses in various industries that run on IBM i, and those in the same industry that run on competitive platforms. I’ve selected one chart that shows the huge difference platform choice makes.
 
Again the study looks at a three-year period, and again IBM i plus Power Systems is a winning combination. The length of the bar indicates how much money the business loses when downtime occurs, whether that is actual revenue lost or potential revenue that cannot be earned because the application is unavailable. With the integrity and reliability of the system, and with the features we’ve added over time to allow more changes to be made in the environment without disruption, the cost of downtime is significantly lower on IBM i.
 
I’d encourage all of you to follow the links and enter the little bit of information our marketing people ask you to provide so that you can see the full reports. Then, the next time you encounter someone who wonders if trusting your business to IBM i and Power Systems is the right business decision, point them to the documents. The numbers in the study, plus your personal experiences with the stability and function of IBM i make a pretty powerful story.

Sunday, December 28, 2014

Debunking the Myth that IBM i Costs More for Midsize Businesses: ITG Looks at the Numbers



Chris Maxcer, December 1, 2014





The International Technology Group (ITG) has been taking a close look at total cost of ownership (TCO) data for years, often packaging up their findings in research reports that include a deep understanding of IBM i on Power Systems. This fall, ITG has released a pair of new reports that are, quite simply, must-reads for any IBM i-loving IT pro.
 
More importantly, the research compares the relative costs of competing systems for businesses of similar sizes and types -- manufacturing, distribution, and retail companies. In "IBM i on Power Systems for Midsized Businesses: Minimizing Costs and Risks for Cloud, Analytics and Mobile Environments," the report compares the IBM i operating system deployed on POWER8-based systems to two alternatives -- Microsoft Windows Server 2012 and SQL Server 2014, and x86 Linux servers with Oracle Database 12c.
 
Not only does IBM i on POWER8 crush the competition in TCO when calculated over three years, IBM i costs for hardware and software licensing fees are significantly lower than for Windows and SQL Server, and lower than for x86 Linux servers with Oracle. The numbers are amazing, but I'm willing to bet that a good many IT pros believe IBM i is more expensive for midsize businesses . . . and that more than a few IBM i-focused pros still believe that, too.
 

How Much Less Expensive Is IBM i on POWER8? 

 
In initial cost of acquisition, an IBM i installation averages 35% less than using Windows and SQL Server . . . and 46% less than using x86 Linux servers with Oracle. When extended out to three years, IBM i 7.2 on Power Systems averages 45% less than Microsoft Windows Server 2012 and SQL Server 2014, and 51% less than x86 Linux servers with Oracle.
 
Wow.
 
You need to read this report and have it handy (download the .pdf) so you can scan it again before key meetings with upper management. Better still, download the short and sweet Executive Brief .pdf -- you never know when it might be important to share this information. 
 

Numbers of Servers

 
In Windows and x86 Linux environments, ITG explains, separate servers are typically deployed to handle database, application, and Web serving, in addition to test and development systems. These extra servers increase licensing and support costs. ITG notes: 
 
"In smaller installations, between three and five physical x86 servers are required for workloads handled by single Power System. In others, between 6 and 11 physical servers are required for workloads handled by pairs of Power Systems duplexed for redundancy."
 
This kind of tightly integrated simplicity is more important today than ever before, ITG says. Why? Mobile, cloud, and analytic services all draw upon core enterprise data. If core systems suffer in quality of service -- speed or availability -- the ripple effects rapidly extend outward through the organization and beyond.
 
In fact, ITG goes into detail on the costs of downtime and risk exposure, detailing how and why they matter in today's enterprise computing environments. As you might guess, the three-year cost of downtime for IBM i-based organizations is significantly less than for the others. And by the same token, security and malware protection are much improved with IBM i over other solutions.
 
All in all, "IBM i on Power Systems for Midsize Businesses: Minimizing Costs and Risks for Cloud, Analytics and Mobile Environments" is a report you need to go download from IBM (it's free) right now. Even if you already know or suspect the basics of what's inside, it's nice to see it delivered by someone who has spent the time measuring all this across 42 midsize companies.
 
 
 


Sunday, June 22, 2014

In the Wheelhouse: Why You Should Invest in In-House IT

Analysis - Commentary
Written by Steve Pitcher   
Monday, 16 June 2014

     
 
Who knows what's best for your company's technology future? It's the people in the trenches who get their fingers dirty crawling under your desks: your faithful IT department. And they need to be freed from stagnating budgets.
 
Why is it so hard to fund the mandatory?
 
A few months ago, I heard from a peer of mine who was tickled pink because she was able to bring on two new full-time IT resources: one is a developer, and the other is a business intelligence reporting guru. Those are two new positions, not replacements. For a small shop like hers, it's an extremely big deal. By small, I mean they have a CIO, four pure tech people, and two peripheral workers who have IT-related duties. That group supports about 600 users. By a big deal, I mean that it took her close to three years to cost-justify the positions and get them approved. Her department simply had too much work to do and couldn't afford not to do it. If you have a cash-flow problem and need your shingles replaced, you'll bring your brother-in-law over for a weekend and give him a case of cold ones. It needs to be done or the house will eventually suffer.
 
When you bring someone on as a full-time IT worker, the expectations are high. After all, selling the need for hiring someone for a brand-new position is usually a daunting task.
 
Why is that?
 
For starters, IT budgets have been stagnating for years. Since the dot-com bust of 2002, IT budgets have been either flat or negative. 2014 was slated to be on the upswing by about three whole points, according to some sources.
 
Think about that.
 
With the advent of mobile devices, always-on and always-available solutions, the rise of the mobile worker (I always picture users in Braveheart face paint at my office door when I hear that one), bring your own device (BYOD), social computing, big data, analytics, and every other high-level topic, buzzword, and technology euphemism you can fit into a digital paragraph, the budgets that many of you CIOs, IT managers, or IT directors have had to work with have been sitting still or decreasing.
 
What do we as IT professionals do? We have been asked to do more with less and turn those corporate problem lemons into solutions lemonade. Some IT departments might even turn water into wine on the odd occasion, but if there's any divine intervention to be had, it's usually not in the budget.
 
The marketing machine selling the idea of moving to the cloud has been on fire for a couple of years now. You see examples of some large companies that have fired most of their IT staff to move to the cloud, turning an expense of human resources into pure tech-service charges. We see successful outsourcing case studies, and we have no doubt about the benefits, but there are still people somewhere, provisioning accounts and creating virtual servers. And no, a monkey can't do it. These are technical people. The resource pool of technical workers hasn't been shrinking, nor has the cost of paying for IT solutions. The workers are shifting more to the cloud, and companies are paying for cloud computing solutions and services there instead of in-house. In other words, it's a cost shift and not usually a cost reduction.
 
A strong but simple difference between an in-house technical services staff member and an outsourced service is that the employee has a vested interest in the success of the company. I'd bet you my last nickel that the average IT employee cares more about their company's survivability and sustainability than a contractor would. That's with all due respect to contractors, and I've known quite a few who would answer the phone on New Year's Eve and bail me out of a mess. Those guys are paid handsomely for that too. But the IT employee needs the company to expand, to sell more widgets, to cut costs, to be more environmentally friendly. There's a pride of association you get from in-house workers that you don't get from outsourced solutions.
 
Yes, IT staff needs to be trained. They need laptops. They need smartphones. They need salaries, health and dental benefits, insurance, and all that extra overhead like every other employee.
 
What they give in return is not the same as the IT employee of yesteryear. No offense to people who cut their teeth thirty-five years ago, but the job description for a modern IT professional has unofficially morphed into an always-available, always-on, mobile, work from the couch, work from the car on the side of the highway, work from the breakfast/lunch/dinner table, work on vacation, call at 3:00 a.m. to unlock an account or manually fix a process gone sour, 24x7x365, sleep-deprived, factory-formulated, high-test geeky workhorse.
 
And when they're not doing that, they're trying to plan for the future. Not their future, but the company's future. The IT people worth their weight in gold are looking to pay for themselves every year by implementing cost-saving measures or more efficient solutions. Those people scrutinize process and procedure, looking for ways to make things run faster, leaner, and most importantly, more effectively. They do this in order to justify every budget dollar they get.
 
What value!
 
But it's no wonder that many IT departments can't get ahead on all the new stuff coming down the pike. Departmental growth has been stunted by flat or shrinking budgets. "Do more with less" works in the short term, in times of uncertainty. In the long term, it's a recipe for disaster. Imagine if you didn't invest in maintaining and upgrading the machinery that builds your widgets. How long would it take to break down? You can only hold a factory together with duct tape for so long before it starts to fall apart.
 
And if IT doesn't keep up with corporate demands because of a lack of resources, then that's when the outsourcing talk starts.
 
Of course, cloud computing has an initial cost. You have to spend money to save money, right? That's funny. It sounds like something an IT department might say from time to time. But alas, "it's not in the budget."
 
Do you see the disconnect?
 
Many in-house IT shops I know spend much of their time bailing out the boat without getting enough resources to patch the holes, let alone outfitting it with sonar, radar, and a new, more efficient propeller. When the water rises high enough inside the boat, the question is raised about buying a different boat: one that doesn't need someone to steer it, or bail it out, or put gas in the engine.
 
The extreme cloud marketing hype is hard to resist. You'd better get on board or else you'll get "left behind." How many times have you heard that mantra repeated over and over? It's using shame to market products and services.
 
Some research firms will tell you that 75% of companies will be "in the cloud" by 2024. What does that mean? That you'll use a private cloud or public cloud, or that you'll allow the use of something like Dropbox or Office 365? The statistic is so questionable that 75% really means nothing. It probably doesn't mean 75% of companies will be outsourcing their IT departments. Even if it did, I'm calling shenanigans based on the ten-year prediction alone. Where's my flying DeLorean and auto-fitting clothes like in Back to the Future II? Science fiction is unrealistic, you say? Hardly. Think of the tech we have now that was inspired by Star Trek.
 
Those influencing the decision to move are usually far removed from technology. They look at cloud replacement strictly as a cost-cutting measure, but with promises of bells and whistles that in-house IT couldn't provide because it was buried under its workload (i.e., understaffed) or budget-deficient (i.e., underfunded).
 
We need to be investing in our IT resources constantly. If you want IT to align with the business and really do something, then you need to fund it properly.
 
When IT gets the resources to do their jobs, then companies will be rewarded with employees who do their jobs better. IT departments can help. We need to unshackle them.
 

Steve Pitcher
About the Author:
Steve Pitcher is the Enterprise Systems Manager for Scotsburn Dairy Group in Nova Scotia, Canada, and has been a specialist in IBM i and IBM Lotus Domino solutions since 2001. Visit Steve's website, follow his Twitter account, or contact him directly at stevepitcher@scotsburn.com.