JTX IT Consultancy | April 2026 | Data Migration & Go-Live Readiness

PAS and EMR Migration Checklist: What You Need Before Clinical Day One

Data migration checklists are easy to write from a distance. This one is built from what actually fails in hospital migrations — the data quality issues that surface only in production, the interface edge cases that weren't in scope, and the rollback plan that was never tested.

Need a senior review of your migration readiness? Book a 20-minute fit check — no commitment, no pitch deck.

Pre-Migration: Data Quality and Readiness

Most migration programmes begin with optimism about data quality and end with a clinical data quality review that surfaces problems no one expected. The issues are rarely unique — they are the same issues that surface in almost every hospital migration. The difference between programmes that manage them and programmes that are derailed by them is whether the assessment happened early enough to act on the findings.

Source data assessment

The ETL vendor's data profiling report is not a data quality assessment. It tells you what the data looks like structurally — field completeness, format consistency, referential integrity. It does not tell you whether the data is clinically meaningful, whether the values are accurate, or whether the mapping assumptions the ETL tooling has made are correct.
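The structural checks a profiling report covers can be sketched in a few lines — and the sketch shows exactly where they stop. This is an illustrative example only: the field names, format rules, and records are hypothetical, not from any real PAS.

```python
import re

def profile_records(records, required_fields, format_rules):
    """Report completeness and format consistency per field --
    structural checks only, nothing about clinical meaning."""
    total = len(records)
    report = {}
    for field in required_fields:
        present = [r[field] for r in records if r.get(field) not in (None, "")]
        completeness = len(present) / total if total else 0.0
        pattern = format_rules.get(field)
        # With no format rule, every present value counts as format-valid.
        valid = (sum(1 for v in present if re.fullmatch(pattern, str(v)))
                 if pattern else len(present))
        report[field] = {
            "completeness": round(completeness, 3),
            "format_valid": round(valid / len(present), 3) if present else None,
        }
    return report

records = [
    {"nhi": "ABC1234", "dob": "1980-04-12", "sex": "F"},
    {"nhi": "XYZ9", "dob": "1975-13-40", "sex": "M"},   # structurally invalid
    {"nhi": "DEF5678", "dob": "", "sex": "U"},
]
rules = {"nhi": r"[A-Z]{3}\d{4}", "dob": r"\d{4}-(0\d|1[0-2])-([0-2]\d|3[01])"}
report = profile_records(records, ["nhi", "dob", "sex"], rules)
```

Note what this cannot catch: a perfectly formatted NHI attached to the wrong patient, or a plausible date of birth that is simply incorrect, passes every check here. That gap is what the clinical review exists to close.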

A genuine source data assessment requires clinicians reviewing a representative sample of actual records: admissions, outpatient episodes, order sets, results, referrals. The questions are: does this data tell the clinical story it is supposed to tell? Are the values in these fields what a clinician would expect to see in the new system? If this record were displayed in the target system, would clinical staff be able to use it safely? ETL reports cannot answer these questions. Only clinical review can.

Duplicate patient records

NHI validation (in NZ) or NHS number matching (in the UK) is the mechanism for identifying the same patient across records. In practice, every PAS that has been running for more than a few years has duplicate patient records — the same individual registered under two NHI or NHS numbers, often with partially different demographics, different encounter histories, and records spread across both. Migrating duplicates propagates the problem into the new system and compounds it, because the new system's record-matching logic will treat them as distinct patients.

Before migration begins, the programme needs a specific answer to: how many potential duplicates exist in the source data, what is the resolution process, and what percentage remains unresolved at cutover? There is no universally acceptable threshold — the answer depends on clinical risk, volume, and the type of duplicate (administrative vs clinically consequential). But the question must be answered explicitly, not assumed away.
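To make the "how many potential duplicates" question concrete, a minimal candidate-pair detector might block on date of birth and compare surname similarity. The threshold, fields, and records below are assumptions for the sketch — real record-matching is considerably more sophisticated, and every candidate pair still needs human review before resolution.

```python
from difflib import SequenceMatcher
from itertools import combinations
from collections import defaultdict

def candidate_duplicates(patients, surname_threshold=0.85):
    """Return pairs of patient IDs sharing a DOB with similar surnames.
    Blocking on DOB keeps the pairwise comparison tractable."""
    by_dob = defaultdict(list)
    for p in patients:
        by_dob[p["dob"]].append(p)
    pairs = []
    for group in by_dob.values():
        for a, b in combinations(group, 2):
            score = SequenceMatcher(None, a["surname"].lower(),
                                    b["surname"].lower()).ratio()
            if score >= surname_threshold:
                pairs.append((a["id"], b["id"], round(score, 2)))
    return pairs

patients = [
    {"id": "NHI001", "surname": "Thompson", "dob": "1962-07-01"},
    {"id": "NHI002", "surname": "Thomson",  "dob": "1962-07-01"},  # likely same person
    {"id": "NHI003", "surname": "Ngata",    "dob": "1962-07-01"},
]
dupes = candidate_duplicates(patients)
```

The output is a workload, not an answer: each flagged pair goes to a resolution process, and the percentage still unresolved at cutover is the number the programme must be able to state.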

Open orders and active clinical episodes

Active admissions, open referrals, pending orders, and in-flight results are the most operationally sensitive data in a PAS migration. They cannot simply be left in the old system — clinical operations depend on them being visible in the system that staff are now working in. But migrating them mid-flight introduces risk: orders placed against the old system's catalogue, results returning over feeds that are being reconfigured, and episode structures that may not map cleanly to the new system's data model.

The programme must document, clinically, exactly which active episodes will be carried across, in what state, and who is responsible for reviewing them in the new system before clinical staff rely on them. The decision on which orders go across, which are closed, and which are manually re-entered must be made by clinicians — not by the data migration team based on what is technically possible.

Historical data

Historical data decisions are often driven by technical convenience rather than clinical need. The question is not "how much can we migrate" but "what do clinicians actually need to see in the new system, and what can they access another way?" Every record that is migrated is a record that must be mapped, transformed, validated, and tested. Every record that is not migrated needs an access pathway for the clinicians who will occasionally need to see it.

The transition period — the weeks or months after go-live when some historical data is only in the old system — requires a specific plan: read-only access to the legacy PAS, a defined decommission timeline, and clinical staff who know how to use it. This is not a technical afterthought. It is a clinical safety requirement.

Data mapping sign-off

Technical validation of a data mapping confirms that the ETL can extract a value from field A in the source system and load it into field B in the target system. It does not confirm that field A and field B mean the same thing, that the value in field A is correct, or that the value as it will appear in the target system's clinical UI is what the clinician expects to see.

Clinical validation of key field mappings — diagnoses, allergies, medication histories, order sets, result types — must be done by clinicians reviewing rendered output in the target system, not by reviewing mapping spreadsheets. A clinician looking at a migrated record in the new system and confirming it is correct is a different thing from a data analyst confirming the mapping logic is technically consistent. Both are required. Clinical sign-off on mappings cannot be delegated to the technical team.

Interface Readiness Checklist

Interfaces are the most consistent source of late-breaking go-live risk in PAS migrations. They involve multiple teams, multiple vendors, and multiple systems — each with their own timelines, change freezes, and definitions of "ready." The checklist below reflects what "ready" actually means in practice, not what it means in a RAG status report.

  • Interface inventory complete and current
    Not the inventory from the initial scoping exercise six months ago. A current inventory reviewed within the last four weeks, with each interface confirmed as in-scope, out-of-scope, or deferred — and those decisions recorded against named owners. Interfaces are frequently added, descoped, or deferred during a migration programme without the inventory being updated. By go-live, the inventory no longer reflects reality.
  • All interfaces have versioned message specifications
    Not "interface maps" from the initial design phase that describe intended behaviour. Current specifications that document what the PAS will actually send at go-live: trigger events, message structure, required and optional segments, field usage, Z-segments, encoding requirements, and acknowledgement behaviour. Versioned, dated, and agreed by both the sending and receiving side. Without this, interface testing is testing against an assumption, not a specification.
  • Consuming system sign-off obtained in writing
    "They said they're ready" is not sign-off. Written confirmation from the consuming system's technical lead and, where the interface carries clinical data, a clinical lead — confirming that they have reviewed the message specification, completed integration testing to their satisfaction, and have named contacts available during the cutover window. Email is acceptable. A project manager's recollection of a verbal conversation is not.
  • Interface testing completed with production-representative data
    Testing with seeded test data confirms that the plumbing works. Testing with production-representative data — real message volumes, real patient demographics, real order types and result structures — confirms that the interface works under the conditions it will actually face. The distinction matters most for interfaces with high message volume (ADT, results delivery), complex message structures (pathology, radiology), and downstream systems that have specific parsing behaviour for values they are not expecting.
  • Monitoring and alerting configured and tested before go-live
    Interface monitoring that is configured the week before go-live and has never generated an alert is not monitoring — it is the appearance of monitoring. By go-live, the team should know what a healthy interface looks like in the monitoring tool, what an unhealthy one looks like, and who receives the alert when a threshold is breached. This requires the monitoring to have been running in a pre-production environment long enough for the team to be familiar with it.
  • Rollback plan for each critical interface documented and agreed
    The rollback plan for an interface is not "revert to the old PAS." It is a specific, step-by-step procedure: who initiates the rollback, what actions are taken in what sequence, how long the rollback is expected to take, and what clinical operations must be notified when it happens. This plan must be agreed with the consuming system owner — not written unilaterally by the migration programme and filed somewhere on the SharePoint.
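The message-specification item above can be made concrete with a small sketch: checking which agreed segments are actually present in a raw HL7 v2 message. The required-segment set here is a hypothetical ADT^A01 minimum, not a real trading-partner specification — a versioned spec also pins trigger events, field usage, Z-segments, and acknowledgement behaviour.

```python
# Illustrative minimum segment set for an ADT^A01 (assumed, not a standard spec).
REQUIRED_SEGMENTS = {"MSH", "EVN", "PID", "PV1"}

def check_segments(message: str):
    """Return the required segments missing from a raw HL7 v2 message.
    Segments are carriage-return delimited; the segment ID is the
    first pipe-delimited token."""
    present = {seg.split("|", 1)[0] for seg in message.strip().split("\r") if seg}
    return sorted(REQUIRED_SEGMENTS - present)

msg = (
    "MSH|^~\\&|PAS|HOSP|LAB|HOSP|202604010830||ADT^A01|00001|P|2.4\r"
    "EVN|A01|202604010830\r"
    "PID|1||NHI0001^^^NHI||DOE^JANE||19800412|F\r"
)
missing = check_segments(msg)   # PV1 absent in this example message
```

A check like this catches structural drift early; it says nothing about whether the consuming system parses the fields the way the spec intends, which is what integration testing with production-representative data is for.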

Cutover Rehearsal Requirements

Why one rehearsal is rarely enough

The first rehearsal is not a dress rehearsal. It is a structured exercise designed to surface what you do not know. Its purpose is to find problems — in the data migration process, in the interface sequencing, in the timing assumptions, in the team's understanding of who does what and when. A first rehearsal that completes without finding significant issues is a sign that the rehearsal was not realistic enough, not that the programme is in better shape than average.

Subsequent rehearsals confirm that findings from the first have been resolved, that the revised process works under realistic conditions, and that the team executing the cutover can do so without needing to improvise. The first rehearsal should run at least 12 weeks before go-live. The dress rehearsal should run 6-8 weeks before go-live — close enough to reflect the actual production environment, far enough out to allow findings to be remediated before cutover day.

What the rehearsal must cover

A meaningful rehearsal covers the full cutover procedure, not just the data migration component. This includes: the data extraction and load at production-representative volume, the interface cutover sequence (which interfaces are switched in what order, with what verification steps between), the timing of each stage against the cutover window, the rollback decision point (the specific moment in the process where a rollback is still feasible), and the escalation path — who calls whom if a specific component fails, with named contacts and contact details that are actually current.

A rehearsal that does not include the rollback procedure is incomplete. Go-live day is not the moment to discover that the rollback takes four hours instead of one, or that the person who knows how to execute it is not available during the cutover window.
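The sequencing-with-verification idea above can be sketched as a runbook structure: each cutover step paired with a verification check, halting at the first failure so the rollback decision can be made with a known state. Step names here are hypothetical.

```python
def run_cutover(steps):
    """steps: list of (name, action, verify) tuples of callables.
    Executes in order, stops at the first failed verification, and
    reports exactly how far the cutover got."""
    completed = []
    for name, action, verify in steps:
        action()
        if not verify():
            return {"status": "halted", "failed_at": name, "completed": completed}
        completed.append(name)
    return {"status": "complete", "completed": completed}

log = []
steps = [
    ("freeze ADT outbound", lambda: log.append("freeze"), lambda: True),
    ("final delta load",    lambda: log.append("delta"),  lambda: True),
    ("switch results feed", lambda: log.append("switch"), lambda: False),  # verify fails
]
result = run_cutover(steps)
```

The value of rehearsing against a structure like this is that "how far did we get, and in what state" has a precise answer at every point — which is exactly what the rollback decision point requires.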

Running a meaningful dress rehearsal without disrupting live operations

The practical constraint on cutover rehearsals is that the live environment cannot be used while clinical operations are running. The solution is a production-equivalent non-production environment — built from a recent copy of production data, running at production infrastructure specification, with all interfaces connected to equivalent test endpoints in consuming systems. This is a significant infrastructure investment. Programmes that do not build it tend to rehearse against a UAT environment that is neither data-equivalent nor infrastructure-equivalent, and the rehearsal tells them relatively little about what go-live will actually look like.

The go/no-go framework for rehearsals

Each rehearsal should have pre-defined criteria against which it is assessed. These criteria determine whether the rehearsal outcome supports proceeding or requires another rehearsal cycle. Typical criteria include: data migration completed within the planned time window, all critical interfaces verified as processing correctly by the end of the rehearsal, rollback procedure executed and confirmed to work within the planned rollback window, and no unresolved blocking issues at rehearsal close.

If a rehearsal fails against these criteria, the process is clear: the issues are documented, owners are assigned, a remediation timeline is agreed, and a further rehearsal is scheduled. What must not happen is a rehearsal that surfaces significant problems and is nonetheless recorded as a pass on the grounds that "it was only a rehearsal." The purpose of the rehearsal is to give the programme confidence in the cutover procedure. A failed rehearsal that is called a pass does not give anyone confidence — it simply defers the reckoning to go-live day.
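The mechanics of a pre-defined assessment can be sketched in a few lines. The point of the sketch is structural: pass or fail falls out of thresholds fixed before the rehearsal, not a judgement made afterwards. The criteria names and limits below are illustrative assumptions.

```python
# Hypothetical criteria agreed before the rehearsal (values are illustrative).
CRITERIA = {
    "migration_minutes":   lambda r: r["migration_minutes"] <= 480,
    "interfaces_verified": lambda r: r["interfaces_verified"] == r["interfaces_total"],
    "rollback_minutes":    lambda r: r["rollback_minutes"] <= 60,
    "blocking_issues":     lambda r: r["blocking_issues"] == 0,
}

def assess_rehearsal(result):
    """Evaluate a rehearsal result against the pre-agreed criteria."""
    failures = [name for name, check in CRITERIA.items() if not check(result)]
    return {"outcome": "pass" if not failures else "fail", "failed": failures}

outcome = assess_rehearsal({
    "migration_minutes": 455, "interfaces_verified": 17, "interfaces_total": 18,
    "rollback_minutes": 240, "blocking_issues": 0,
})
```

In this example the migration completed on time, but one interface was unverified and the rollback took four hours against a one-hour window — a fail, however well the load itself went. That is the judgement a "it was only a rehearsal" pass quietly overrides.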

Clinical Safety Controls

DCB0129 / DCB0160 considerations for UK programmes

For programmes in England operating under NHS Digital's clinical safety standards, the clinical safety case must be current at go-live — not the version that was produced during the design phase and not updated since. A clinical safety case that does not reflect the actual system configuration, the interfaces in use, the data migration approach, and the outstanding known issues at go-live is not a current clinical safety case. It is a document that was current at some earlier point in the programme.

The clinical safety case review cycle should be aligned to programme milestones, not to administrative deadlines. At minimum, it should be reviewed and updated after each major rehearsal, following any significant change to the scope or configuration of the system, and formally updated before go-live sign-off. The Clinical Safety Officer sign-off at go-live should reflect a genuine review of the current state — including any outstanding hazards and the mitigations in place — not a procedural confirmation that a document exists.

Clinical Safety Officer involvement and formal sign-off

Clinical Safety Officer involvement that consists of reviewing a document at the end of the programme and signing it is not clinical safety governance — it is the appearance of it. The CSO needs visibility of the programme's risk profile throughout delivery: the data migration findings, the interface issues, the rehearsal outcomes, and the outstanding items at go-live. Go-live sign-off from the CSO should reflect that visibility, and the sign-off document should record what outstanding hazards exist, what mitigations are in place, and what conditions must be met post-go-live to close them.

Downtime procedures

Every ward that depends on the PAS for clinical operations needs a downtime procedure: a documented, paper-based fallback for the period when the system is unavailable. The procedure must be tested before go-live — not reviewed and filed, but actually exercised by clinical staff. The cutover window is a planned downtime. If clinical staff have never used the downtime procedure, the cutover window is the first time they practise it. That is not a safe position.

Downtime procedures should also cover the period immediately after go-live: the first hours when the system is live but the team is still verifying data and interfaces, and when clinical staff may encounter unexpected system behaviour. The downtime procedure provides the fallback if the verification finds something that requires a temporary system suspension.

Post-migration data validation protocol

Data validation after go-live is not optional. The migration has moved data through a complex transformation process under time pressure. Some of what went across will not be correct. The question is whether the programme finds the errors or the clinical staff do.

The post-migration validation protocol should specify: which data domains are validated, by whom, by when, what constitutes an acceptable error rate, and what action is taken if errors are found — including who has the authority to decide whether a finding requires a clinical incident report. The protocol should be agreed before go-live and executed by named individuals, not delegated to the BAU team who were not part of the migration and do not have the context to distinguish an expected data gap from an error that requires escalation.
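The arithmetic behind such a protocol is simple, which is exactly why it should be agreed in advance rather than debated after findings arrive. A sketch, with hypothetical domains and thresholds:

```python
# Illustrative acceptable error rates per domain (assumed, agreed pre-go-live).
# Zero tolerance for safety-critical domains like allergies.
THRESHOLDS = {"allergies": 0.0, "diagnoses": 0.005, "demographics": 0.01}

def validate_domains(findings):
    """findings: {domain: (records_checked, errors_found)}.
    Returns the domains breaching their agreed threshold, with rates."""
    breaches = {}
    for domain, (checked, errors) in findings.items():
        rate = errors / checked if checked else 0.0
        if rate > THRESHOLDS[domain]:
            breaches[domain] = round(rate, 4)
    return breaches

breaches = validate_domains({
    "allergies":    (500, 1),    # zero tolerance: a single error breaches
    "diagnoses":    (1000, 3),
    "demographics": (2000, 12),
})
```

Note that the diagnoses and demographics error rates pass here while a single allergy error does not — the thresholds encode clinical risk, not statistical neatness, and that is a clinical decision to make before go-live.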

The Week After: Hypercare Planning

What hypercare actually means

Hypercare means senior resource — people with direct knowledge of the system configuration, the data migration decisions, the interface specifications, and the programme's known issues — are available and accessible to clinical and operational staff in the days and weeks immediately following go-live. It does not mean a junior analyst on call who knows how to log a ticket.

The distinction matters because the issues that surface in the first week post-go-live are typically not simple. They are edge cases that no test scenario covered, interface behaviours that only appear under specific clinical workflows, and data quality issues where the correct response requires knowledge of what the migration was supposed to do and what it actually did. Resolving these requires people with programme knowledge, not people with helpdesk access.

Minimum hypercare period

For a district hospital PAS replacement, a minimum hypercare period of four weeks is realistic — not as a target to work down from, but as a floor below which the programme should not compress. The first week will surface the highest volume of issues; the second and third weeks surface the edge cases; the fourth week is typically when the issue rate has dropped to a level that BAU support can manage with appropriate documentation and handover.

Programmes that close out hypercare after one week — usually because the go-live appeared to go well and there is pressure to release resource — routinely face a second wave of issues in weeks two and three that are harder to resolve because the people with programme knowledge are no longer available.

Exit criteria for hypercare

The hypercare period ends when specific conditions have been met — not when the programme team has run out of budget or patience. Exit criteria typically include: the issue rate has dropped below a defined threshold and is stable, all critical and high-priority issues are resolved or have a documented remediation path with an owner and a timeline, the post-migration data validation has been completed and findings actioned, and the BAU support team has confirmed they can manage the remaining issue volume with their normal resourcing.

These criteria should be agreed before go-live. A hypercare exit decision made without agreed criteria is a management judgement call made under resource pressure — which is not the same thing as a programme governance decision.
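One of these criteria — issue rate below threshold and stable — can be expressed as a simple test. The threshold and comparison window here are illustrative; the real values would be agreed before go-live alongside the other exit criteria.

```python
def issue_rate_exit_met(weekly_counts, threshold=10):
    """True if the latest week's issue count is under the agreed threshold
    and not higher than the week before (i.e. the rate is stable or falling).
    Threshold of 10/week is an assumed example value."""
    if len(weekly_counts) < 2:
        return False
    return weekly_counts[-1] <= threshold and weekly_counts[-1] <= weekly_counts[-2]

ok = issue_rate_exit_met([84, 37, 14, 8])   # falling and under threshold
```

Codifying even one criterion this way makes the exit decision auditable: the numbers either meet the agreed test or they do not, regardless of the resource pressure at the time.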

Documentation handover

The BAU team that takes over after hypercare must have, in their hands before the project closes: the current system configuration documentation, the interface specifications as-built (not as-designed), the data migration decisions log (what was migrated, what was excluded, and why), the outstanding known issues list with status, the post-migration validation findings, and the escalation contacts for any vendor or third-party systems that remain within the warranty or support period.

Documentation that exists only in the programme team's heads, on personal laptops, or in a project SharePoint that will be archived is not documentation that the BAU team can use. The handover should be a formal process with sign-off from the BAU lead confirming they have received and can access everything they need to support the system going forward.

Need a senior review of your migration readiness?

Book a 20-minute fit check. We will identify your top migration risks and tell you what needs to be resolved before cutover week.

Frequently Asked Questions

How long does data migration testing take?

For a PAS/EMR migration, data migration testing typically takes 3-6 months at typical NZ or UK district hospital scale. This includes: initial data extraction and mapping validation, clinical data quality review cycles (expect multiple rounds), full rehearsal migrations in non-production environments, and a final dress rehearsal with production-representative data. Programmes that compress this into 4-6 weeks tend to surface data quality issues during go-live or in the first weeks of operation — at which point the cost of remediation is significantly higher.

What data should not be migrated?

The data that should not migrate is as important as the data that should. Common candidates for exclusion: administrative records with no clinical relevance that would create noise in the new system, duplicate records that have not been resolved and would propagate the duplicate into the new system, and historical data where the migration would require significant transformation that cannot be validated clinically. The decision on what not to migrate must be made by clinicians and data owners — not defaults from the ETL tooling.

How do you handle active admissions during the cutover window?

This is one of the most operationally complex questions in any PAS migration. The options are: freeze new admissions for a defined period before cutover (operationally disruptive but technically cleanest), carry forward active episodes as records in the new system with a defined data quality review, or run a parallel period where both systems are maintained (expensive and creates its own risk). Most programmes use a combination — a short admission freeze for elective activity combined with a defined protocol for emergency admissions during the cutover window. The approach must be agreed clinically, not decided by the programme team alone.

Related insights

Data Migration

Healthcare Data Migration & Cutover Assurance

How JTX approaches data migration risk, cutover planning, and stabilisation assurance for clinical system go-lives.

Go-Live Readiness

Clinical System Testing & Go-Live Readiness

The testing and readiness assurance approach JTX applies to clinical system programmes before cutover day.

Cutover Planning

Clinical System Cutover Checklist

A practical checklist covering the cutover window itself — interface sequencing, rollback decision points, and what good looks like at each stage.
