Author Topic: Forget Testing, HealthCare.Gov is a Data Nightmare


Offline Rapunzel

« on: October 26, 2013, 12:39:09 AM »

Forget Testing, HealthCare.Gov is a Data Nightmare
John Casaretto | October 25th, 2013

HealthCare.gov is a technical abomination.  Sure, it may not have been reasonable to expect a project like this to launch without a hitch, but this has gone way beyond a couple of hitches.  Slapping the word ‘glitch’ on the level of incompetence behind this project, wherever the ultimate blame may lie – that’s disingenuous.  As more news continues to emerge, it is obvious that the issues run much deeper than the site not being tested enough.  The site is built with a whopping 500 million lines of code – all hurriedly put together by an army of contracted programmers and, as we’ve established, not tested very well.  Once this code is deemed ‘septic’, it has to be removed, meaning a total teardown.  Numerous technology experts have come forward to state that the whole thing needs to be restarted.
Red flags a’wavin’

Yesterday a lineup of contractors involved in the project, representatives from CGI Federal, Equifax Workforce Solutions, and Optum/QSSI, stepped up before a House Energy and Commerce Committee hearing to explain what happened.  The picture, as you can expect, wasn’t pretty.  As late as September they had reported that the project was on track, smooth sailing, and that there was no reason to suspect any problems.

That all changed when HealthCare.gov was unleashed on the country.  The site, as we now know, crashed immediately, and the rest is now the stuff of cluster*** legend.  The testimony this week featured some animated exchanges and questioning, to say the least.  One question posed was whether anyone had thought to contact the Centers for Medicare and Medicaid Services (CMS) and alert them that perhaps this site wasn’t ready.  After all, they had only run load tests on the site within the last week before launch, tests in which, by the way, the site crashed once the users got into the hundreds.  The Washington Post reports,

  Days before the launch of President Obama’s online health ­insurance marketplace, government officials and contractors tested a key part of the Web site to see whether it could handle tens of thousands of consumers at the same time. It crashed after a simulation in which just a few hundred people tried to log on simultaneously.

And then they went live with that.  In fact, red flags were everywhere.  The response to that question? “We weren’t in a position to alert CMS.”
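The failure mode described above, a system that holds up under a trickle of users and collapses in the hundreds, is exactly what a basic concurrency test is meant to catch before launch. As a rough illustration (not the actual CMS test harness; the capacity number, server model, and all names here are invented), here is a toy load test in Python that simulates simultaneous logins against a server with a hard session limit:

```python
# Toy load test: simulate concurrent logins against a server whose
# capacity is exhausted in the low hundreds, mirroring the reported
# failure mode. All names and numbers here are hypothetical.
from concurrent.futures import ThreadPoolExecutor
import threading

class ToyServer:
    """Accepts logins until a fixed session capacity is reached."""
    def __init__(self, capacity=200):
        self.capacity = capacity
        self.active = 0
        self.lock = threading.Lock()

    def login(self):
        with self.lock:
            if self.active >= self.capacity:
                return "error"      # the 'crash' case under load
            self.active += 1
            return "ok"

def load_test(server, users, workers=50):
    """Fire `users` login attempts from a pool of concurrent workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda _: server.login(), range(users)))

server = ToyServer(capacity=200)
results = load_test(server, 500)    # a few hundred simultaneous users
errors = results.count("error")
print(f"{errors} of 500 logins failed")  # prints "300 of 500 logins failed"
```

Running a test like this even once in the final week, as the testimony indicates happened, makes the outcome visible immediately; the problem was not detection but the decision to launch anyway.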

CMS is also now stating that 3.5 years was not enough time to make HealthCare.gov work.   “A ‘compressed timeframe’ precluded sufficient end-to-end testing of the sign-up system,” says CMS spokeswoman Julie Bataille.

CMS also sidestepped a question as to whether Kathleen Sebelius, the Health and Human Services Secretary, had knowledge of the site’s problems before the launch.  There’s a lot of heat on her here.  Here’s the thing, Julie: that team could have been given months and months of testing and the conclusion would be the same, the site is completely broken.  Then what?  You would have to rebuild it anyway.  By their own assessment CMS appears to be right on this: 3.5 years was not enough, not for a project like this.

Big Data, Big Problems

That’s just the surface has a significant data problem that won’t be fixed simply.  That means no patches or rewrites are likely to fix it.  Reports are emerging that focus on these ‘834’ forms.  These forms are used in the process of direct enrollment, and it appears that they are filled with incorrect information, making purchasing plans for many impossible.  The data that is coming to insurers from is corrupted or sending ‘questionable’ data.

    “The data’s pretty bad,” the executive said. And even if the data was not corrupted, the number of enrollments the insurers are getting from HealthCare.gov is “pretty low.”

The data is pretty bad.  How can that be?  The answer is clear if you consider the role of the site in the first place.  At its core the most fundamental function of the site is to access and manage data from a variety of massive databases, from sources like the IRS, Veterans Administration, insurance carriers, the DHS, and Medicare.  We are talking about a lot of incongruent and varied information that has to be extracted, transformed and analyzed, ideally on the fly.  Does that sound like anything?  To our dedicated readers, that’s a classic Big Data use case.  So where has this gone wrong?
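The extract-and-transform workload described above can be made concrete with a small sketch. This is purely illustrative, assuming nothing about the site's real architecture: every record shape, field name, and sample value below is invented to show why correlating the "same" person across agency databases is the hard part.

```python
# Hypothetical extract/transform sketch: records about one applicant
# arrive from different agencies in different shapes and must be
# normalized into one target schema, then correlated by identifier.
# All schemas and values are invented for illustration.

def from_irs(rec):
    """IRS-style record: single name field, adjusted gross income."""
    return {"ssn": rec["tin"], "name": rec["taxpayer_name"],
            "income": rec["agi"]}

def from_medicare(rec):
    """Medicare-style record: split name fields, no income data."""
    return {"ssn": rec["beneficiary_id"],
            "name": f'{rec["first"]} {rec["last"]}', "income": None}

def merge(records):
    """Correlate normalized records by SSN into one applicant view,
    letting non-empty values fill gaps left by other sources."""
    merged = {}
    for rec in records:
        row = merged.setdefault(rec["ssn"], {"ssn": rec["ssn"]})
        for key, value in rec.items():
            if value is not None:
                row[key] = value
    return merged

irs = from_irs({"tin": "123-45-6789", "taxpayer_name": "Jane Doe",
                "agi": 41000})
med = from_medicare({"beneficiary_id": "123-45-6789",
                     "first": "Jane", "last": "Doe"})
applicants = merge([irs, med])
```

Even this toy version needs one hand-written adapter per source system; multiply that across the IRS, the VA, DHS, Medicare, and every insurance carrier, and the scale of the mapping effort becomes clear.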

Critics will point out that no one knows what they implemented as far as data technology goes, but these issues are self-evident.  It turns out that even if they had the right technology (Big Data analysis, massive concurrency from various sources, at scale, whatever they used), that’s just one piece of a properly functioning solution.

There’s a process behind it.   Creating and imposing the required data structure onto this variety of sizable databases means a tremendous, careful effort should have taken place.   Data from these systems needs to be identified, correlated, and processed to match the target system.  It would be nice if that were automated, but the truth is that many, many data points would have had to be manually evaluated, then categorized.   There are quite likely massive variations from one database to the next, from one instance to the next, even within the same system: differences in syntax, standards, data types, and more; the list has got to be tremendously long.
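A single field shows why this evaluation is manual work. Take a date of birth: each source system may spell the same value differently. The formats below are assumed for illustration, not taken from any real agency schema, but the normalization pattern is the standard one:

```python
# Why field-level mapping is hand work: the "same" value arrives in a
# different format from each system. The source formats listed here
# are hypothetical examples.
from datetime import date, datetime

DOB_FORMATS = ["%m/%d/%Y", "%Y-%m-%d", "%d-%b-%Y"]  # one per source system

def normalize_dob(raw):
    """Coerce a date-of-birth string from any known source format
    into a canonical date; fail loudly on an unmapped format."""
    for fmt in DOB_FORMATS:
        try:
            return datetime.strptime(raw, fmt).date()
        except ValueError:
            continue
    raise ValueError(f"unmapped date format: {raw!r}")

# Three systems, three spellings of the same birthday:
assert normalize_dob("07/04/1976") == date(1976, 7, 4)
assert normalize_dob("1976-07-04") == date(1976, 7, 4)
assert normalize_dob("04-Jul-1976") == date(1976, 7, 4)
```

Someone has to discover and enumerate every variant in `DOB_FORMATS` by inspecting real data, and that is one field out of hundreds, per source, per revision.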

Another challenge: who knows how dated these databases are.  Government systems are famously several revisions behind all the time, always awaiting security validation and certification, or, believe it or not ($634 Billion), stuck behind revisions because of budget.  In light of all those challenges, given what we know about the CGI Federal and CMS operation that was running this thing, and the misinformation purposely put on display so that the public and Congress would believe this site was ready to go, does anybody really think this crew would be able to pull this off?  Or that we’re a handful of patches away from fixing this thing?

Let me paraphrase: bad data mapping leads to exactly this problem, error-filled code and bad end data.  That’s your 834 issue.  Hacks and workarounds can only stack so high.  There is a foundational flaw in the entire thing, and as we can all see, the website was just the beginning.
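Bad 834 output of the kind insurers are describing is normally caught by validating enrollment records before the transaction is generated. As a simplified sketch (the required fields and checks below are invented; a real 834 under the ASC X12 005010X220 standard has far more structure), such a gate might look like:

```python
# Illustrative pre-834 validator: reject enrollment records with
# missing or malformed fields before they reach an insurer. The
# field list and rules are simplified assumptions, not the real spec.
import re

REQUIRED = ("subscriber_id", "first_name", "last_name", "plan_id")

def validate_enrollment(rec):
    """Return a list of problems; an empty list means the record
    is clean enough to emit."""
    problems = [f"missing {field}" for field in REQUIRED
                if not rec.get(field)]
    sid = rec.get("subscriber_id", "")
    if sid and not re.fullmatch(r"\d{9}", sid):
        problems.append("subscriber_id must be 9 digits")
    return problems

good = {"subscriber_id": "123456789", "first_name": "Jane",
        "last_name": "Doe", "plan_id": "QHP-01"}
bad = {"subscriber_id": "12-34", "first_name": "Jane",
       "last_name": "", "plan_id": "QHP-01"}
print(validate_enrollment(good))  # []
print(validate_enrollment(bad))
```

The reports of insurers receiving garbage suggest no gate of this kind was holding, which is a symptom of the upstream mapping problem, not something a patch at the output stage can cure.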

Starting over will be hard

The bottom line is that to reconstruct HealthCare.gov, creating an entirely newly coded site from scratch is just scratching the surface.  The way this data issue looks, the whole thing has to be scrapped and started over again, completely.  CMS said it themselves: 3.5 years was not enough time to build a site like this.  And if they have to start over, a fully functioning, seamless site that incorporates all of the target data is a long way off.

The only other alternative would be to trim some of the scope and function from the site itself, an approach that likewise points toward a complete teardown.  What we’ll probably see, however, is a hobbled operation, with elements of the site repaired in parallel, until ultimately it can all be brought together at some point.
