The Phoenix pay system snowball.

The Phoenix pay system fiasco could get a whole lot worse before it gets better. We are witnessing a massive software project failure unfold in plain sight. Is the growing snowball of compounded failures gaining momentum or coming to a halt?

As the financial and productivity losses accumulate, confidence in the government IT services industry erodes along with them. The public discourse surrounding the controversy has been inadequate, to say the least. We need more voices from experts in the practice to speak up. Too much focus has been aimed at blame rather than analysis of causes and solutions. The government, right or wrong, will seek to appease the loudest voices in this discussion. Let's try to make the message the right one. If not, we risk repeating the same mistakes again and again.

Legacy software systems cannot live forever. As technology platforms age, the talent pool available for system maintenance shrinks. Resource allocation becomes problematic, the odds of an under-skilled workforce increase, and delays creep into project initiatives. Another issue with legacy software relates to third-party vendors. Software systems always depend on third-party technologies, and vendors only support older product versions for a limited period. Once that period expires, support is discontinued, and the host system can no longer get fixes for problems in those unsupported components, whether behavioral, performance-related, or security-related.
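To make that concrete, here is a minimal sketch, with invented dependency names and dates, of the kind of audit a maintenance team might run on an aging system: comparing each third-party component's vendor end-of-support date against the calendar and flagging what can no longer be patched upstream.

```python
# Hypothetical end-of-support audit for a legacy system's dependencies.
# The dependency names, versions, and dates below are invented for illustration.
from datetime import date

# (dependency, version, vendor end-of-support date), illustrative values only
dependencies = [
    ("database-engine", "9.2", date(2015, 6, 30)),
    ("report-toolkit", "4.1", date(2020, 1, 15)),
    ("os-platform", "6.0", date(2017, 11, 1)),
]

def unsupported(deps, today=None):
    """Return the dependencies whose vendor support window has already expired."""
    today = today or date.today()
    return [(name, ver) for name, ver, eol in deps if eol < today]

if __name__ == "__main__":
    for name, ver in unsupported(dependencies):
        print(f"{name} {ver}: vendor support has ended; upstream fixes are no longer available")
```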

I am not familiar with the system preceding the Phoenix pay system, but it had been in place for 40 years, so it was very likely in a legacy state and/or in need of major upgrades. In the wake of all the blowback from the Phoenix failures, many people opine that the existing system should have been left in place. But what upgrades were needed? What's the cost analysis of reverting to the old system? If we are considering the preexisting system as an option for resolving the pay system issues, it's very important to answer those questions.

The sales teams for COTS products like the one used in Phoenix are first-class. These products will save you time and money and accommodate all your business needs. Well, that is what the brochure says. Once you dig further, more details emerge.

Many organizational processes are not preconfigured into COTS systems. These processes typically take life in two ways: in some instances, system administrators input a series of configuration instructions into the system; in other cases, an application developer programs the customization as a system extension. These scenarios usually make up a small percentage of the required functionality, but they carry the highest risk. The system can only perform as well as the quality of the instructions. Garbage in, garbage out, as they say.
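I haven't seen the extension interfaces of the product behind Phoenix, so the following is only a sketch with hypothetical names, but it illustrates the second path: a developer-programmed customization layered on top of the vendor's default pay calculation. A mistake in that thin custom layer flows straight into every cheque.

```python
# Hypothetical sketch of a programmed customization on top of a COTS pay engine.
# This is not the actual Phoenix/PeopleSoft API; names and rules are invented.
from dataclasses import dataclass

@dataclass
class PayInput:
    base_hours: float
    hourly_rate: float
    acting_premium: float = 0.0  # organization-specific allowance, not in the COTS default

def default_gross_pay(inp: PayInput) -> float:
    """What the product ships with: hours times rate."""
    return inp.base_hours * inp.hourly_rate

def custom_gross_pay(inp: PayInput) -> float:
    """An organization-specific extension layered on the default calculation.
    An error here (say, applying the premium to the wrong hours) propagates
    into every pay cheque the engine produces: garbage in, garbage out."""
    return default_gross_pay(inp) + inp.acting_premium * inp.base_hours

print(custom_gross_pay(PayInput(base_hours=75.0, hourly_rate=32.50, acting_premium=1.25)))
```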

These COTS systems don’t work without data. I would imagine that a pay system for all employees in the federal government requires a great deal of it. Migrating data from an existing pay system to a new one is a significant project task in its own right. How many of the problems currently encountered in Phoenix relate to data migration? It’s not zero…
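As an illustration, with field names I've invented, here is what one small slice of that migration work can look like: mapping a legacy record into the new system's schema, normalizing the values, and flagging anything suspect for manual review rather than loading it silently.

```python
# Hypothetical migration step: legacy record -> new schema, with validation.
# All field names and the record itself are invented for illustration.
legacy_record = {"EMP_NO": "012345", "ANNUAL_SAL": "67,890", "DEPT": "CRA"}

def migrate(rec):
    """Map legacy fields to the (hypothetical) new schema, normalizing types."""
    errors = []
    try:
        salary = float(rec["ANNUAL_SAL"].replace(",", ""))
    except (KeyError, ValueError):
        salary, errors = None, errors + ["unparseable salary"]
    new_rec = {
        "employee_id": rec.get("EMP_NO", "").lstrip("0") or None,
        "annual_salary": salary,
        "department_code": rec.get("DEPT"),
    }
    if not new_rec["employee_id"]:
        errors.append("missing employee id")
    return new_rec, errors

migrated, problems = migrate(legacy_record)
print(migrated, problems)  # unresolved problems go to a manual review queue, not a silent load
```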

The issue of user training seemed to consume a few media cycles. Naturally, system users need training on any new product. Not only are the user interfaces different between products; it's also likely that workflows were changed or introduced as part of the new system. There's a ramp-up time associated with user training, especially on a new system, though it does tend to recede as knowledge propagates through the user community.

I’m not sure how concrete the government’s plans are after the release of #budget2018, but they need to tread carefully. Migrating to yet another system offers no guarantee we won’t encounter similar issues. Customization work, data migration tasks, and training will all need repeating.
