JTX IT Consultancy · April 2026 · NHS Programme Assurance

Independent Assurance in NHS Digital Programmes: What It Is and When You Actually Need It

Independent assurance gets called many things in NHS programmes — QA, programme review, gate review, independent technical review. What it means in practice varies enormously. This post is about what genuine independent assurance looks like for clinical system programmes: what it covers, what it produces, and when you should be reaching for it.

Need independent eyes on your NHS programme? Book a 20-minute fit check — no commitment, no pitch deck.

What Independent Assurance Is (and Is Not)

Independent assurance is senior technical and delivery review by a party outside the main delivery team. That distinction matters: outside the main delivery team, not outside the organisation, not from a different floor of the same building, and not from a consultancy that also holds a delivery contract on the same programme.

It is not a compliance audit. It is not a project management review. It is not a vendor health check. Those activities have their place, but they answer different questions and require different expertise. Independent assurance for a clinical system programme focuses on the things that determine whether the system will actually work when it goes live — and whether the clinical and operational risk is genuinely understood by the people accountable for it.

In practice, this means reviewing four specific areas:

  • Integration architecture: Are the interfaces governed, documented, and tested? Is the interface specification current and agreed by both sides?
  • Go-live readiness: Is the cutover plan realistic, specific, and rehearsed — or is it a slide deck?
  • Clinical risk: What happens clinically if a specific interface fails at go-live, and who owns that risk in writing?
  • Delivery honesty: Is the programme's account of its own readiness accurate? Does the evidence support the RAG status?

The output is a written risk assessment: specific findings, clear ownership for each, and an honest account of where the programme actually is — not where it says it is. Crucially, the findings go directly to the executive sponsor. Not filtered through the programme team, not softened through layers of programme governance. Directly to the person accountable for the decision.

When NHS Programmes Typically Need Independent Assurance

Some programmes commission independent assurance as standard practice — particularly where the board has made a prior commitment to independent sign-off as a condition of proceeding to go-live. That is becoming more common after a run of high-profile NHS digital programme failures, and it is a reasonable governance requirement for programmes that carry significant clinical risk.

More often, independent assurance is triggered by a specific situation. The patterns that most consistently indicate a programme needs it:

  • Approaching go-live without confirmed interface readiness. The programme is weeks from cutover and the integration test evidence is incomplete, contested, or based on partial environments. Interface owners are giving verbal assurance but nothing is documented and signed off.
  • The delivery team and the vendor are giving different accounts to the board. This is more common than it should be. The programme team reports amber. The vendor reports green. The board has no way to assess which account is more accurate. An independent review can reconcile the two.
  • Stalled or recovering programmes that need a current-state assessment. A programme that has slipped, restructured, or changed key personnel needs an honest view of where it actually stands before committing to a new go-live date. Internal assessments in this situation are almost always optimistic.
  • Board or executive requests for independent go-live sign-off. An executive sponsor who has read about EPR failures elsewhere, or who has been through a difficult go-live previously, will sometimes make independent assurance a condition of their approval to proceed. This is a legitimate and sensible governance requirement.
  • Post-go-live stabilisation reviews where problems surfaced but ownership is disputed. After a difficult go-live, the conversation quickly becomes about who is responsible for what went wrong. An independent review of what actually happened — and what the programme documentation said before go-live — provides a factual basis for that conversation.

What an Independent Assurance Review Covers

The scope varies depending on the programme and what has prompted the review, but a pre-go-live assurance review for an NHS clinical system programme typically covers the following areas.

  • Integration architecture review. Are the interfaces governed, documented, and tested? Is there a versioned specification per feed, agreed by both the sending and receiving system? Is there evidence of end-to-end testing — not just connectivity testing — with production-equivalent message types?
  • Cutover readiness. Is the cutover plan realistic and specific? Does it include a credible timeline, named resources, explicit go/no-go criteria, and a tested rollback procedure? Has it been rehearsed at anything approaching production scale?
  • Clinical risk assessment. What are the specific clinical failure modes if an interface fails at go-live — and who owns that risk? Are downtime procedures in place and understood by clinical staff? Is the clinical safety case current and does it reflect actual delivery status?
  • Vendor readiness. Is the delivery team's account of vendor status accurate and independently verified? Has each third-party vendor with an interface dependency confirmed readiness in writing, with test evidence — or is readiness based on self-declaration?
  • Staff readiness. Are clinical teams prepared for downtime procedures and workflow change? Has training been completed for the staff who will be on shift during cutover? Are ward leads aware of what to do if specific systems are unavailable?
  • DCB0129 / DCB0160 clinical safety. Is the clinical safety case current? Does the hazard log reflect the programme's actual delivery status, not the status at an earlier point in the build? Has the clinical safety officer reviewed it against the current configuration?
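The "end-to-end testing, not just connectivity testing" distinction above can be sketched in a few lines. This is an illustrative toy, not a real test harness: the HL7 v2 message content, segment checks, and function names are invented for this example, and genuine interface testing runs against the actual integration engine with production-equivalent message volumes.

```python
# Sketch: connectivity testing proves a message arrived; end-to-end
# testing proves the clinically meaningful content survived the journey.
# All message content below is illustrative, not from any real system.

SAMPLE_ADT = "\r".join([
    "MSH|^~\\&|PAS|TRUST|EPR|TRUST|202604010930||ADT^A01|MSG0001|P|2.4",
    "PID|1||1234567890^^^NHS||SMITH^JOHN||19800101|M",
    "PV1|1|I|WARD1^BAY2^BED3",
])

def parse_segments(message: str) -> dict:
    """Split an HL7 v2 message into {segment_id: fields} (first occurrence wins)."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], fields)
    return segments

def connectivity_check(message: str) -> bool:
    """Weak check: did *a* message with a header arrive at all?"""
    return message.startswith("MSH")

def end_to_end_check(message: str) -> list:
    """Stronger check: is the clinical content intact? Returns a list of failures."""
    failures = []
    seg = parse_segments(message)
    if "PID" not in seg or len(seg["PID"]) < 4 or not seg["PID"][3]:
        failures.append("PID: patient identifier missing")
    if "PV1" not in seg or len(seg["PV1"]) < 4 or not seg["PV1"][3]:
        failures.append("PV1: ward/bed location missing")
    msh = seg.get("MSH", [])
    if len(msh) < 10 or not msh[8].startswith("ADT"):
        failures.append("MSH: unexpected message type")
    return failures
```

A feed can pass the connectivity check every time while silently dropping the patient identifier or ward location; only the content-level check catches that, which is why test evidence based solely on message counts or connection logs is not go-live evidence.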

What Makes Assurance Genuinely Independent

The word independent is used loosely in NHS procurement and governance contexts. It is worth being specific about what genuine independence requires in practice, because a review that is nominally independent but structured to avoid uncomfortable conclusions is not assurance — it is cover.

Genuine independence has four components:

  • No financial interest in the outcome. The reviewer has no contract on the programme, no commercial relationship with the vendor, and no financial benefit from the programme continuing on its current trajectory. A consultancy that holds a delivery contract on the programme and is also commissioned to provide assurance is not independent, regardless of how it is described.
  • Senior-led throughout. The person doing the review is the person writing the findings — not a junior analyst reporting to a principal who signs off the report. In complex clinical system programmes, the expertise required to assess an integration architecture or a cutover plan is senior. A review that relies on junior resource to gather evidence and senior resource to present findings is not the same thing.
  • Direct access to programme artefacts. Interface specifications, test evidence, cutover plans, risk logs, vendor correspondence — not presentations, not summary slides, not what the programme team chooses to share. A reviewer who can only see what the programme team shows them cannot give an independent view of where the programme actually is.
  • Findings go directly to the executive sponsor. Not filtered through the programme director. Not softened through governance committees. Directly to the person who is accountable for the go/no-go decision. If the reviewer is not prepared to deliver findings directly to the executive sponsor, the assurance is not independent.

And the hardest requirement: an independent assurance reviewer must be prepared to say — clearly, in writing — that a programme is not ready to go live. That conclusion is unwelcome in almost every case where it is warranted. A reviewer who will not write it is not providing assurance; they are providing validation.

The Difference Between Independent Assurance and Gateway Reviews

NHS Gateway Reviews are formal process checks run by NHSE review teams against a standard framework. They assess programme governance, business case, delivery approach, and organisational readiness at defined stage gates. They are structured, well-established, and have a clear remit within NHS programme governance.

Independent technical assurance goes deeper into the clinical and integration specifics that gateway frameworks do not typically have the domain knowledge to assess. Gateway reviewers assess whether the programme has the right governance structures, the right documentation, the right approvals. Independent technical assurance assesses whether the integration architecture is actually sound, whether the cutover plan will actually work under production conditions, and whether the clinical risk is actually managed — not just documented.

Both have value, and they answer different questions. Gateway asks: is this programme following the right process? Independent technical assurance asks: will this clinical system actually work when it goes live, and what happens to patients if it does not? The two are complementary, not interchangeable.

For programmes carrying significant integration complexity or clinical risk — major EPR deployments, PAS replacements, systems with multiple third-party interface dependencies — the case for independent technical assurance in addition to gateway is straightforward. Gateway cannot substitute for domain-specific review of integration readiness; it was not designed to.

Need independent eyes on your NHS programme?

Book a 20-minute fit check. We will identify your top go-live risks and tell you what is worth fixing before cutover week.

Frequently Asked Questions

Is independent assurance the same as an NHS Gateway Review?

No. NHS Gateway Reviews are formal process-based checks run by NHS England (NHSE) review teams, assessing programme governance, business case, and delivery approach against a standard framework. Independent technical assurance is domain-specific: it requires healthcare integration and clinical system delivery expertise to assess whether the interfaces are actually ready, whether the cutover plan is realistic, and whether the clinical risk has been adequately managed. The two serve different purposes and neither substitutes for the other.

How long does an independent assurance review take?

For a programme approaching go-live, a focused independent assurance review typically takes 2-4 weeks: 1-2 weeks of document review and stakeholder interviews, then 1-2 weeks to produce findings and recommendations. The scope can be adjusted to programme size and urgency. A pre-go-live review commissioned with 4 weeks' notice can still be completed in time to influence the go/no-go decision, which is often exactly when these reviews are commissioned.

Who should commission the review: the trust or the vendor?

Typically the trust or health organisation, because the point of independent assurance is to provide an honest view to the organisation taking on operational and clinical risk, not to the vendor delivering the system. Vendors sometimes commission independent reviews of their own delivery, which is valuable but different: the audience and accountability are different. A review commissioned by the trust, with findings reported directly to the executive sponsor, is more likely to surface uncomfortable truths about delivery readiness.

Related insights

NHS Integration

NHS Integration Delivery: Senior Assurance

Senior-led assurance for NHS integration programmes in the UK — what it covers, what it finds, and what it takes to deliver clinical system integration that holds up at go-live.

Integration Experience

Healthcare Integration Experience: What Survives Reality

Senior delivery lessons from NZ, UK, and APAC healthcare integration programmes — what the theory gets wrong and what actually matters in practice.

Go-Live Readiness

Clinical System Testing & Go-Live Readiness

How JTX approaches clinical system testing, cutover rehearsal, and go-live readiness assurance — from integration test design through to stabilisation support.
