Tuesday, January 24, 2017

UK.gov still drowning in legacy tech because no one's boarding Blighty's £700m data centre Ark

https://www.theregister.co.uk/2017/01/23/untying_the_government/

Little love for Crown Hosting from Whitehall depts


Analysis Only in IT is “legacy” a pejorative term, where it is used to condemn ageing systems and forgotten workarounds.
In the UK government, as with banks, increasingly difficult-to-maintain mission-critical systems are a huge problem. Not least because of the dwindling number of folk who remember how the damn things work.
One solution to Whitehall’s myriad string and sticky-tape systems was the creation of the Crown Hosting programme in 2015. That was intended to house departments’ legacy systems in facilities run by a joint venture of which the government owned 25 per cent and small data centre biz Ark the remainder.
By “lifting and shifting” all the legacy systems and housing them in one data centre, it was hoped replaceable systems could be identified, contained and run at a much lower cost – with the eventual plan to ditch them entirely.
But since it was announced, that programme has gone eerily quiet.
The Register whipped out the Freedom of Information Act to ask for a list of public sector bodies that have signed up to the arrangement. However, we were told the Cabinet Office could not disclose the customer list for this commercial arrangement because “authorities could be targeted by individuals or groups willing to use malicious or other hostile ways to gain unauthorised access to information (sensitive or otherwise) stored at colocation sites.”
One can’t help but wonder why the UK government bothered to announce the deal at all: it was supposedly meant to house all of government’s data centre estate in Ark's two data centres in Farnborough and Corsham, in a deal worth up to £700m.
But there could be another reason why the Cabinet Office veiled the project under a cloak of invisibility (beyond that being its usual modus operandi). According to numerous sources, uptake has so far been extremely low.
One source revealed that the Department for Work and Pensions had intended to shift 250 of its systems to the data centre, but is now migrating just five. That was part of the department’s mega £340m hosting services refresh to tackle its ageing infrastructure. In this instance, the department had hired hundreds of contractors to help it virtualise the current platforms onto the new kit.

Ignore that burning, everything’s fine

At the end of October, several contractors got in touch with The Register to report there had been hundreds of layoffs and hundreds of millions in overspend at the DWP. But that has been vigorously denied by the department. A spokeswoman said of the eight-year SSBA refresh programme: “It is ahead of schedule and has already delivered three large-scale, secured and resilient platforms.”
Reg readers are welcome to ponder the plausibility of a government project of that size being ahead of schedule. Nevertheless, it’s possible some of the systems have been refreshed.
One source told us: “My guess is that it will be the Customer Information System (CIS) that is moving as IBM were already commissioning a replacement [CIS] at Corsham & Farnborough, currently hosted on ancient Sun E25k frames, which have caused serious outages [of] Critical National Infrastructure due to hardware failures.
“Therefore it would make a lot of sense to put these together in the same data centres. Moving CNI systems to Ark should be a lot more secure."
One contact said part of the problem with Ark for government use was the fact that some of the departments’ legacy kit won’t fit in its racks, while in other cases the hardware is partly or fully owned by a system integrator – making it difficult to shunt their kit somewhere else.
Another said the problem is that the government knows little about its old systems – citing the Home Office’s 1995 Casework Information Database (CID) as an example – systems that have been patched many times in haste and changed only when legislation requires.

Systems from the 1990s

“It uses old versions of everything at every layer of its architecture," we're told.
"There probably isn’t anyone who really knows how it works. Worse, CID isn’t just ‘CID’ – it’s a system that interacts and exchanges data with dozens of other systems, some inside the immigration department, some in the rest of the Home Office and some across government. So when you move ‘CID’ you are moving a living thing.
“Many of these systems don’t even have true disaster recovery – the ideal option would be to move the disaster recovery (DR) to the Crown Hosting site and fail over to it, so avoiding lots of hassle. But in the 1990s and 2000s government tended not to build real DR (in the sense of active / active or even active / near active).”
As a sidenote, CID was to be replaced by an IBM-built Immigration Case Work (ICW) system, commissioned in December 2008 and intended to support applications for visas and immigration. However, the department was forced to write off £347m in 2013. The National Audit Office noted in 2014 that the CID system was plagued by problems such as freezing, a lack of interfaces with other systems, and a lack of controls.
According to one of our sources, the Home Office is still working on its Ark transition, building a couple of environments there for production services, but has yet to move anything. “I’ve done a lot of hosting moves in my time and, unless the folks doing it have also done a lot, they will massively underestimate how hard it is, especially if the hosting and the apps people are separate companies,” a contact told us.
For him, a failure to migrate the legacy kit comes down to "the cost being too big, with a relatively long-term payback, while managing an awful lot of risk, particularly the risk that it just won’t work because you don’t understand how the system works.”
No doubt many poor souls tasked with working out legacy replacements would love nothing more than to pull the plug, throw it in a skip and install something else. But unfortunately when it comes to mission-critical legacy gear, that particular Gordian Knot can’t be cut. ®

Wednesday, January 11, 2017

How Mainframes Prevent Data Breaches


As 2017 begins, let's talk about how using mainframes to process and protect data can keep hackers from having another banner year.

2016 was a strange year marked by everything from election surprises to a seemingly endless spate of celebrity deaths. But when historians look back at this mirum anno—weird year—it may end up being known as the year of the data breach.
Of course, this sort of thing isn’t restricted to 2016, but its impact on the world was hard to ignore. Among government organizations, the IRS and FBI suffered data breaches, and corporate victims included LinkedIn, Target, Verizon and Yahoo. Literally millions of people had their private information exposed to black hats, thieves and other ne’er-do-wells of the digital world. This epidemic of data theft calls upon security experts to get serious about creating new solutions.
You don’t need to hack my computer (in fact, please don’t do that!) to discover I’m going to advocate for one piece of established technology in particular: mainframes. The term “Big Iron” conveys strength and security for a reason. Housing all of one’s data in a single, powerful machine lowers the overall vulnerability of that data. Protecting a single mainframe is much easier than defending data spread out to all corners of a company firewall, and it carries the added advantage of the mainframe’s processing power helping to prevent fraud and other malfeasance.
“But isn’t that putting all your eggs in one basket?” I hear you saying to your computer screen. Perhaps – but this basket is incredibly powerful and easy to secure: it’s more a vault than a basket. And it’s still a better idea than spreading data where it can’t all be strictly monitored in real time, allowing hackers to sneak in through various weak points. Don’t forget that all of those ETL scripts exacerbate the issue by making lots of copies of sensitive data and sending them out to more places – and more opportunities to be hacked.

This is why I strongly recommend keeping your data and analytics together—to reduce the potential breach points. Housing data across multiple systems is a governance nightmare because widespread data gets breached in small clusters all the time, making it hard to track the origin of the hacks.
Blockchain on mainframes
Mainframes also make it easier to implement security measures such as Blockchain to prevent hackers from tampering with the files they access. Created to protect the security of Bitcoin and dark web transactions, Blockchain essentially keeps data from being altered without authorization.
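To make that tamper-evidence idea concrete, here is a minimal Python sketch of the hash-chaining technique that underpins blockchain. It is illustrative only – not IBM's or any vendor's actual implementation – and the record fields and helper names are invented for the example:

import hashlib
import json

def block_hash(record, prev_hash):
    # Hash the record together with the previous block's hash.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64          # arbitrary genesis hash
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain):
    prev = "0" * 64
    for blk in chain:
        if blk["prev_hash"] != prev or block_hash(blk["record"], prev) != blk["hash"]:
            return False                # an altered record breaks every later hash
        prev = blk["hash"]
    return True

chain = build_chain([{"acct": 1, "amount": 100}, {"acct": 2, "amount": 250}])
chain[0]["record"]["amount"] = 999      # an unauthorised edit...
print(verify_chain(chain))              # ...is detected: prints False

Because each block's hash covers the previous block's hash, quietly changing an old record invalidates everything that follows – which is exactly the property that makes the data tamper-evident.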
Today, Blockchain is stepping out of the shadows, having been adopted by IBM and financial institutions such as JPMorgan Chase to ensure the ironclad integrity of their data. Experts predict Blockchain will soon help protect patient information in the healthcare system, and from there perhaps facilitate a period of more secure data.
I say “a period” because no safe is uncrackable, no firewall unbreachable, and no system foolproof. Eventually the bad guys will find a way around every new protection. That’s why security experts all say it’s not about 100 percent safety but about making every system as arduous to crack as you can.
Yes, hackers will try to exploit weaknesses in Blockchain or breach the security of a mighty mainframe, but these systems working in tandem can create enough of a roadblock to discourage their efforts, or slow those efforts long enough for security to kick in. No data is ever completely safe, but any data is a whole lot safer in the care of Big Iron.
This Blog by Bryan Smith is re-posted with the author's permission. It was originally posted here.

The Unknown IBM i – Part 2

“COOL” IBM i Technology We Take For Granted That Saves You $85,000 Per Year
Some of you may recall I posted “The Unknown IBM i” blog on August 15, 2016.
It began:
“Many of you may be old enough to remember the Gong Show and the “Unknown Comic” who wore a paper bag over his head. You could not see his face, so you did not know who he really was. That was part of the gag.
 “I have a similar tale. Too bad the punchline of this real story is so true.”
Well, here is Part 2.
This last week I was talking to several technology writers. They all have strong Microsoft backgrounds. Besides writing about technology, they have all brought technology projects to life.
As I described some of the high-level differences between IBM i and Windows, I was surprised by their amazement that IBM i had an integrated SQL relational database. To them, this concept was stunning.
I went on to explain the IBM i single-level storage and data management architecture.
For example, I explained how IBM i manages objects within the system. Frequently needed objects stay in memory for fast access. Commonly used data and program objects are stored on the outside edge of the physical disk units, where access is faster. Infrequently used objects are stored deeper inside the physical disk drive – closer to the centre – where the disk access arms take longer to reach them, which matters little because they are rarely retrieved.
In this way, the IBM i manages itself for optimal performance. No extra staff is required for load-balancing or system tuning.
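As a rough illustration of that placement policy – a toy Python sketch, not actual IBM i code, with the object names and thresholds invented for the example – the decision the system makes automatically looks something like this:

def place_objects(objects, hot_threshold=1000, warm_threshold=100):
    # 'objects' maps an object name to its observed access count.
    placement = {}
    for name, access_count in objects.items():
        if access_count >= hot_threshold:
            placement[name] = "main memory"                  # frequently needed
        elif access_count >= warm_threshold:
            placement[name] = "disk: outer tracks (fast)"    # commonly used
        else:
            placement[name] = "disk: inner tracks (slower)"  # rarely touched
    return placement

# Example: access counts the system might have observed over some interval.
print(place_objects({"ORDERS": 5400, "CUSTMAST": 320, "ARCHIVE1998": 2}))

On a real IBM i this weighing-up happens continuously and automatically – no administrator has to write or run anything like it.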
Their response?
“COOL!”
I continued. This is more than just “cool” technology. It has REAL economic advantages that too many technical or business people do not recognize.
It saves business money!
How much?
Depending on skill level and location, a systems engineer earns between $65,000 and $130,000 per year – most commonly about $85,000. In most cases, this is an IT staff member an IBM i user does not need.
In other words, most small-to-medium sized IBM i users do not have to pay an extra $85,000 per year to keep their server optimally running.
At that point they offered something I had NOT heard before from folks with a strong Windows background.
“If we had that technology in our past projects, we could have deployed our project faster and with far fewer hurdles than we encountered,” they volunteered.
So, these IBM i features would have made project deployment way easier…not just less expensive.
“Yes!” they said.
They explained they could have benefited from the OS-integrated SQL relational database without setup delays that could range from several weeks to several months.
They would have also benefited from proper SQL setup and system self-management.
So, I asked, you would have been able to bring your technology projects to life faster and with fewer holdups?
“Yes! We sure wish we had something like that.”
Just as these technologists were unaware of…and amazed by…the IBM i capabilities, my sense is that most IBM i users and managers are also unaware of these extraordinary IBM i features.
We need to educate and remind our teams about what makes the IBM i so exceptional.
So, when someone suggests “we need to get off the IBM i and go to Windows because the hardware is so much cheaper”, we can remind them that they could be paying lots more in staffing, delays, and extra support to minimize disruption, prevent viruses and malware, and handle load-balancing and tuning.
Cheaper hardware without the IBM i architecture may cost LOADS MORE – in staffing, delays and ongoing support.