
FEATURE 

The importance of technology

Good content is still king, but, writes Paul Lomax, without technology to match, publishers may not reach the audiences they once enjoyed.

By Paul Lomax


As publishers, we compete with nimble start-ups, often filled with technically savvy youth, who don't have a legacy print business, or legacy technology, to worry about. It has never been more important to have a solid technical strategy - one that is fit for purpose, cost-effective, and as future-proof as possible.

Rather than consider what better technology will mean to your business, consider what may happen if you do not keep up with this changing world but your competitors do. The new normal is that if you stand still, you're going backwards in real terms. Being competitive requires investment, it requires innovation, and it requires difficult decisions to be made without a black-and-white return on investment. Let's face it: if a piece of technology improves efficiencies, you can't make a saving by cutting a tenth of a person - but it might just give you an edge.

Build or buy?

Most publishers are not technology businesses. "IT" is something that lives in the basement, not the boardroom. Technology is not considered part of their core business. And yet so often, publishers create their own software, write their own website content management systems, and build their own platforms without necessarily having the expertise, experience or investment to do it properly.

A good technical strategy requires the ability to know when to make something part of your core business and invest (and hire talent) accordingly, and when to leave it to the experts. Of course, that's easier said than done as finding both talent and the right experts is no mean feat.

Ironically, one of the main objections to using popular open source technology, like Drupal, is that it is very difficult to find good Drupal developers. I can assure you, it is a lot harder to find people with experience of working with the software that you built yourself. And I would be willing to bet that the documentation and community support around well-used software like Drupal is a lot better too!

Sunk costs and technical debt

That's not to say we should never develop our own software, rather that it's an endeavour that should not be undertaken lightly - know what you're getting yourself into. Building the first version is easy, but if it's not done right, then future development becomes impossible or so expensive that the cost of changes outweighs any benefits. These problems can build up over time and are known as 'technical debt'. Eventually you may be left with no choice but to write off the investment and start again - something I refer to as 'declaring technical bankruptcy'.

Technical bankruptcy is surprisingly common, either because the systems created are riddled with technical debt, or because they are simply no longer fit for purpose (if they ever were). For example, the BBC recently suspended their Chief Technology Officer (CTO) and announced the cancellation of their in-house developed content management system, which had cost £95 million to date. Whilst I cannot comment on the suspension of their CTO, pulling the funding for the project and proceeding with technology that is now available off the shelf sounds like a smart move, despite the emotional attachments there will undoubtedly be, given all the 'sunk costs'.

All too often, businesses will make decisions based entirely on these sunk costs. But it is not unlike playing poker. You have to know when to let go, even if it means losing a large pot of money already invested. Tell yourself, if it helps, that the bet was the right decision at the time - it might well have been. But fold when the odds are stacked against you, live to fight another day, and place a new bet. Don't pour good money in after bad - but be bold with your next bet.

Grand designs

As with many things in life, the key to success in technology is keeping things simple. Many IT projects fail because they attempt too much - they try to boil the ocean. By the time you've figured out what you're doing, the rest of the world has moved on. There is also the 'iceberg problem' in technology, where something appears simple on the surface. Developers will say it will only take a few weeks, but it quickly spirals into months as more of the iceberg below the water is revealed.

The truth is that when most software developers are writing code, they're doing something for the first time. Otherwise they'd just lift the code from a library. So that's why most projects end up like those in Grand Designs, where everything looks great on paper, until they start digging out the foundations and discover an ancient underground river, burial ground, or other such unforeseen obstruction.

There's an old saying, "The only reliable way to know how long something will take is to do it, measure how long it took, and even then you'll be wrong."

Also similar to Grand Designs is the problem of change - the fact that you don't know what you want until you see it. Kevin McCloud is always quick to spot the clients who will blow their budget because every detail is subject to change along the way. Compromises have to be made, but there is much that cannot be undone.

Embracing the uncertainty principle

Some technology projects are managed using a process known as PRINCE2, which stands for PRojects IN Controlled Environments. The problem with this approach is that we don't operate in a controlled environment. It's useful if you're building a tower block and have complex dependencies - the bricks need to turn up before the roof - but in software development, everything is a lot more fluid. As a result, over the last 20 years, an approach known as Agile has become the de facto standard. There are a few popular variants, the most common being 'Scrum'.

Oddly, I've found that whilst most media companies now use Agile methodologies when building their own websites, they seem to revert to the more rigid 'up-front planning' style of PRINCE2 when it comes to outsourcing projects. This is often because a fixed price is desired, but in 15 years, I've never heard of a fixed-price project finishing on budget. Fixed price is a fallacy. The only way your supplier can offer one is to build so much padding into the price that the work could double and they would still make a profit.

Regardless of whether you're building in-house, outsourcing or working with software vendors, the key is strong management. The role many publishers are now embracing is that of 'product manager'. In short, this is a person who is empowered by the business to make day-to-day decisions, using data where possible. They should help stop 'scope creep', and keep stakeholders (with their HiPPOs - 'highest paid person's opinions') in the loop when needed, and at bay when not.

Without a single empowered product owner, projects will slip by months, one day at a time, as decisions are made by committee rather than instantly on the shop floor as unknowns occur. Every day, they will need to make pragmatic decisions (developers tend not to!), say 'no', or say 'phase 2' a lot. If you want to be on time and on budget, when unknowns crop up, the only thing you can vary is scope.

Another common mistake in IT projects occurs when writing 'requirements'. Instead of trying to hand the solution to your techies ('the system shall…'), define the objectives and problems you are trying to solve. Write 'user stories' in the format: 'As a [user], I want to [do a task] so that I may [achieve a goal]' - for example, 'As a subscriber, I want to update my own delivery address so that I don't have to call customer services.' Sometimes, you may be better off changing your processes rather than the software. Be careful not to ask for things that simply replicate what you have; instead, look for opportunities for change. For that reason, any vendor or software integrator who just asks you what you want will ultimately fail you.

Break IT down

I would strongly advocate what I call a 'component strategy' across the business. This means breaking every project down into its smallest parts, any of which can be built in-house, outsourced, or found off the shelf (as software or in the cloud). First of all, it prevents the 'large IT project' risk. Secondly, as each component goes in, you can learn and adapt. If one part doesn't meet the objectives or is no longer fit for purpose, you don't need to throw the whole thing out. It means your technology can stand on the shoulders of giants.

This way, each component can be evaluated thus: is there anything suitable out there? If not, build it yourself. As soon as something appears that's as good as yours, replace yours with theirs. Note how I said 'as good as yours', not 'better than yours'. Every line of code you delete is a line of code you don't need to maintain. The key to 'is it suitable' is defining what's known as a 'minimum viable product' - the least you need in order to move forward.

If you've identified eight requirements for some software, and everything out there only does seven of them, ask yourself if you can move forward with seven - especially if all seven are things you can't do right now. The missing one rarely makes it worth building the whole thing yourself.
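
To make the component idea a little more concrete, here is a minimal sketch in Python - purely illustrative, with every name hypothetical. The point is that if each component sits behind a small interface of your own, 'replace yours with theirs' becomes a contained change rather than a rewrite, because the rest of your code never knows which implementation it is talking to.

    from abc import ABC, abstractmethod

    class ArticleSearch(ABC):
        """The small interface the rest of the business codes against."""

        @abstractmethod
        def search(self, query: str) -> list[str]:
            """Return the IDs of articles matching the query."""

    class InHouseSearch(ArticleSearch):
        """Version one: a naive component built in-house."""

        def __init__(self, articles: dict[str, str]):
            self.articles = articles  # article ID -> body text

        def search(self, query: str) -> list[str]:
            return [aid for aid, text in self.articles.items()
                    if query.lower() in text.lower()]

    class HostedSearch(ArticleSearch):
        """Later: a thin wrapper around a hypothetical hosted search service."""

        def __init__(self, client):
            self.client = client  # whatever SDK or HTTP client the service provides

        def search(self, query: str) -> list[str]:
            return self.client.query(query)  # hypothetical call to the service

    # Calling code depends only on the interface, so swapping the
    # component does not ripple through the rest of the system.
    def related_articles(search: ArticleSearch, topic: str) -> list[str]:
        return search.search(topic)

None of this is specific to search, of course - the same shape applies to payments, newsletters, analytics or any other component you might one day want to buy rather than build.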

The key, therefore, when evaluating software (including software as a service, ie the cloud), is to consider interoperability - does it play nicely with other software and services? How easy is it to get data in and out? Does it have a 'RESTful JSON-based web service' (most developers' preferred technology-neutral way of making two systems talk to each other)? What systems does it talk to natively, without development? Does it have a good ecosystem? Whenever I'm trying out web-based services, the first thing I do is check the 'integrations' page to see what other apps it works with - it's often a great way to find nifty bits of software that solve problems you didn't even know you had.
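
As a rough illustration of what 'plays nicely' looks like in practice, the sketch below uses Python and the widely used requests library; the URL and field names are hypothetical, not a real service. With a well-behaved RESTful JSON web service, pulling content out should need little more than this:

    import requests

    # Hypothetical endpoint for a CMS that exposes a RESTful JSON web service
    API_URL = "https://cms.example.com/api/articles"

    response = requests.get(
        API_URL,
        params={"section": "features", "limit": 5},  # filter via the query string
        timeout=10,
    )
    response.raise_for_status()  # fail loudly if the service returns an error

    # Assumes the service returns a JSON list of article objects
    for article in response.json():
        print(article["headline"], article["url"])  # hypothetical field names

If getting data in or out of a product takes much more effort than that - proprietary exports, manual spreadsheet dumps, a paid 'integration project' - treat it as a warning sign.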

I would also advise evaluating the 'user interface' of any software very closely. If it's difficult to use, it will not be well used, and that will limit adoption and thus the return on investment. Don't evaluate software purely on a feature list: if it takes 20 clicks to do something simple, the feature is worthless. This usability requirement also applies to software built in-house, something that is often overlooked - there is a reason software houses employ specialist 'user experience' (UX) designers.

Whilst the old maxim, "Nobody ever got fired for choosing Microsoft", may be true, it has never earned anyone a promotion either.

The Cloud

At this point, I must quote our esteemed Chairman, Mr Felix Dennis: "If it flies, floats or fornicates, in the long run, it's cheaper to rent it." I believe the same applies to technology. By renting software - usually as software-as-a-service hosted in 'the cloud' - you can simply switch provider if you make the wrong decision or your business changes, and so avoid decisions driven by sunk costs. This is how start-ups are able to compete and grow so quickly: their up-front investment costs are low, allowing them to invest in talent instead.

Unfortunately, most of us have a legacy problem to deal with. We all have existing software and servers that were purchased or created at great expense. They're now worthless, but we're still paying for them five years later, thanks to 'capex' (capital expenditure). To put that into context, investments you've only just finished paying for this year were defined and signed off before Facebook or Twitter existed.

Capex is a great way of spreading the cost of investment, but it can be a double-edged sword if not treated with respect. It is not free money. It's addictive. You can easily convince yourself that your project is a one-off investment when, in fact, you need this level of investment now and for evermore. As we wean ourselves off capex and big-bang IT projects and move into the cloud, we need to shift to monthly fees and operating expenditure (opex). This is not an easy transition, as there will be an overlap period - you may still be paying off the old platform while paying monthly fees for its replacement.

The business case is the final challenge. The best way to prove business value is to start delivering some business value. Agile also advocates working software as the primary measure of progress, so why not build prototypes, under the radar, and iterate? Test, adapt, learn.

A fantastic example of this is the 'gov.uk' project, originally led by Tom Loosemore. The project was ambitious - to replace all government websites - but it was broken down into phases. They started with key parts of direct.gov, and their first phase was to launch a public 'alpha' website. It would not be perfect, but it secured the funding for a 'beta' site, and finally for full deployment, which has apparently saved £46m this year. The budget for the alpha project was £261k - the exact amount Tom's boss could authorise without requiring a higher sign-off level. The beta project budget was then £1.6m, the next sign-off level up…

Taking such a lean, agile and cloud-based approach leads to real innovation within government IT. It's a strange world we live in when the government is 'doing' technology better than most businesses. What has hopefully become clear is that a good technology strategy has very little to do with technology itself. It's about process, attitude, vision, and change. This is not beyond our reach. In fact, it's just up in the clouds.