Basic Imaging Object Change Management - Detailed Proposal


1. Proposed Workitem:

  • Proposal Editor: Kinson Ho (Agfa)
  • Whitepaper Editors: Kinson Ho (Agfa)
  • Domain: Radiology, Cardiology

Summary

On a regular basis, DICOM objects are copied, distributed to where they can be used (great), and modified in the course of being used (unavoidable). Differing modifications of different copies result in "outdated" objects or conflicting versions of the "same" object. We need a solution for managing/synchronizing these objects. We also need services supporting object Lifecycle Management, such as data retention.

There is interest in adding the necessary mechanisms to DICOM, but a whitepaper is needed first to focus the work.

An IHE Whitepaper would document the use cases, solution requirements, tricky issues, and problems currently being experienced in the field. The whitepaper would guide the DICOM work. When the DICOM work is complete, IHE could consider a profile to combine the whitepaper (Vol 1 material) and DICOM mechanisms (Vol 2 material).

Regional PACS deployments (specifically Canada Infoway, Europe, and the U.S. Military health system) report that these issues are a problem. A variety of other problems not involving regional PACS can also be traced back to the same underlying issue of change & lifecycle management.

Agfa has already developed a proprietary, DICOM-oriented method of solving these issues (which provides proof-of-concept and domain experience). Other vendors have their own respective proprietary solutions. To improve interoperability, IHE should ideally provide a solution that can be broadly implemented.

IHE Radiology members have valuable experience with use cases, an interest in making the contents of imaging objects reliable and correct, and a desire for an open standards-based solution. The solutions will work best when many vendors support them.

2. The Problem

As DICOM is the preferred protocol for distributing and storing medical imaging, vast numbers of DICOM objects are created and distributed every day.

For various reasons, it is common to create and distribute multiple copies of instances:

  • Providing copies to other sites (or departments) caring for the same patient
  • Sending copies for processing (3D, CAD, Clinical Analysis)
  • Local caching of instances to compensate for network performance
  • Mirroring instances on a Fail-over/Backup server
  • Use of multiple "peer" archives
  • Migrating to a new PACS system
  • Modifying the caching of prior Studies based on Image Lifetime Management policies such as those specified by the VA (i.e. bring all priors for a person on-line to primary cache once they are on active duty)

For various reasons, it is also common to modify instances:

  • Correction/update of demographics
  • Splitting/combining studies
  • Updating references to other related instances
  • Taking “bad” images out of circulation
  • Coercing instances to fit into local data models/workflow
  • Permanently deleting old images or entire Studies, as may be required by institutional record retention policies

The combination of needing to distribute copies of instances and needing to modify instances leads to inconsistent copies, which in turn creates the potential for confusion, error, or loss of data.

It would be useful to have reliable, efficient mechanisms to know whether two copies of an instance have diverged, what has changed and if and how to synch them.

3. Key Use Cases

This is just a start, highlighting some major use cases. The work item will involve fleshing out the details of these use cases and identifying other significant use cases.

Central Archive, Local PACS

  • i.e. Infoway model
  • Each site has a local PACS for operational activities
  • A regional Archive takes care of inter-site image exchange and both medium and long term archiving
  • Also, local PACS may serve as a local cache (see next use case)

Local Cache

  • A group of three hospitals each have local PACS.
  • When a patient is transferred to another hospital (e.g. for specialist care), a copy of recent images is transferred to the second hospital.
  • The first hospital identifies a demographic change and updates its Master Copy of the images.
  • Open question: what happens at the second hospital, and how?

Quality Control

  • A study received at a local PACS is sent to the regional Archive
  • Later, a mistake is noticed in the study (e.g. two different acquisitions were incorrectly merged into one study because an incorrect worklist item was selected)
  • A quality control procedure reconciles the study at the local PACS
  • Such reconciliation needs to be propagated to the regional Archive, ideally automatically.

Media Consistency

  • Consider objects coming off of media that have been "unaware" of changes that have happened since the media was burned.
  • Consider also the case of media import where a comparison/correction can be done, but no "negotiation" is possible with the media source.

Derived Objects

  • Image data is processed by a technologist, creating a result file and a result snapshot
    • e.g. an NM gated blood pool study, with output of a screen snapshot showing the ejection fraction and the regions used
  • Result file and snapshot are distributed
  • The original image data and result snapshot are viewed by the MD
  • The MD revises the regions or other processing parameters
  • New result file and snapshot are generated
  • How should this be handled using contributing systems, derivation details and change management?

Data Retention Management

  • Data retention policies trigger the condition that particular studies (e.g. based on age) should be deleted
  • Need to be able to notify all systems that they must delete any data they may have related to these studies
  • If a separate system handles data retention policies, it also requires the same type of transaction to notify one or more 'Image Managers' that a particular study should be deleted.

Fundamental functions include:

  • determine if two copies of an instance have diverged
  • determine what has changed
  • determine whether to synch them
  • determine what changes to make to synch them
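
To make the first two functions concrete, here is a minimal illustrative sketch, assuming Python and the pydicom library; the function names and the choice to ignore private tags are illustrative assumptions, not part of any proposed mechanism.

  # Illustrative sketch only: one naive way to detect that two copies of the
  # "same" SOP Instance have diverged, and which top-level attributes differ.
  # Assumes the pydicom library; nothing here is mandated by DICOM or IHE.
  import hashlib
  import pydicom

  def attribute_fingerprint(ds: pydicom.Dataset) -> str:
      """Hash all non-private data elements (descending into sequences)."""
      digest = hashlib.sha256()
      for elem in ds.iterall():
          if elem.tag.is_private or elem.VR == "SQ":
              continue  # sequence items are themselves visited by iterall()
          digest.update(str(elem.tag).encode())
          digest.update(str(elem.value).encode())
      return digest.hexdigest()

  def diverged(path_a: str, path_b: str) -> bool:
      """True if the two files claim to be the same instance but differ."""
      a, b = pydicom.dcmread(path_a), pydicom.dcmread(path_b)
      if a.SOPInstanceUID != b.SOPInstanceUID:
          raise ValueError("Not copies of the same SOP Instance")
      return attribute_fingerprint(a) != attribute_fingerprint(b)

  def changed_attributes(a: pydicom.Dataset, b: pydicom.Dataset):
      """Tags whose top-level values differ between the two copies."""
      tags = {e.tag for e in a} | {e.tag for e in b}
      return [tag for tag in sorted(tags)
              if str(a[tag].value if tag in a else None)
              != str(b[tag].value if tag in b else None)]

In practice, transferring whole objects just to compare them is exactly what the proposed mechanisms should avoid; something cheaper carried with or about the object (a version attribute, a fingerprint, or an explicit change record) is likely needed, and deciding what that is would be part of the whitepaper.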


Features include:

  1. Manual delete
  2. Scheduled delete (e.g. due to data retention policy)
  3. Undelete?
  4. Supersede/replace
  5. Update Patient Demographics
  6. Update Procedure Information
  7. Merge Patient Records
  8. Link/unlink Patient Records
  9. Update Study Level Attributes
  10. Update Series Level Attributes
  11. Update Image Level Attributes
  12. Study Split (e.g. an incorrect worklist entry is selected for an acquisition, causing the acquisition to be incorrectly merged into an existing study; the incorrect portion of the study needs to be split out)
  13. Study Merge (e.g. after the incorrect portion of the study is split out, it is updated with the correct order; a study may already exist for that order, in which case the split-out objects are merged into the existing study)
  14. Study Fix Up (e.g. an incorrect worklist entry is selected for an acquisition and the correct entry is selected later; this may change the Study Instance UID of the existing objects)
  15. Subscribe/unsubscribe to change notification
  16. Revision Log?
4. Standards and Systems

  • DICOM has prepared a work item for Data Consistency that could provide mechanisms, but is hesitant to approve the work item without a whitepaper to better document the use cases.
  • IHE Mammography Acquisition Workflow (MAWF) defines a way to communicate deletion of objects from Modality to Image Manager by using DICOM Key Object Selection Document.
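
For orientation, the sketch below shows, in heavily simplified form and assuming pydicom as a toolkit, the general shape of the Key Object Selection rejection-note idea: a document title code saying why, plus references to the affected instances. A conformant KOS additionally requires the full SR content tree, the patient/study/series modules, and file meta information, all omitted here; the specific title code mandated by MAWF should be taken from the profile text, and (113001, DCM, "Rejected for Quality Reasons") is shown only as an example.

  # Heavily simplified skeleton of a Key Object Selection "rejection note".
  # Not conformant as-is: the SR content tree and several required modules
  # are omitted. pydicom is an assumed toolkit, not part of the proposal.
  from pydicom.dataset import Dataset
  from pydicom.uid import generate_uid

  def rejection_note_skeleton(study_uid, series_uid, rejected,
                              reason=("113001", "DCM", "Rejected for Quality Reasons")):
      """rejected: list of (ReferencedSOPClassUID, ReferencedSOPInstanceUID) pairs."""
      ds = Dataset()
      ds.SOPClassUID = "1.2.840.10008.5.1.4.1.1.88.59"  # Key Object Selection Document
      ds.SOPInstanceUID = generate_uid()
      ds.Modality = "KO"

      # Document title: why the referenced instances should be taken out of use.
      title = Dataset()
      title.CodeValue, title.CodingSchemeDesignator, title.CodeMeaning = reason
      ds.ConceptNameCodeSequence = [title]

      # References to the rejected instances.
      ref_sops = []
      for sop_class_uid, sop_instance_uid in rejected:
          item = Dataset()
          item.ReferencedSOPClassUID = sop_class_uid
          item.ReferencedSOPInstanceUID = sop_instance_uid
          ref_sops.append(item)
      ref_series = Dataset()
      ref_series.SeriesInstanceUID = series_uid
      ref_series.ReferencedSOPSequence = ref_sops
      ref_study = Dataset()
      ref_study.StudyInstanceUID = study_uid
      ref_study.ReferencedSeriesSequence = [ref_series]
      ds.CurrentRequestedProcedureEvidenceSequence = [ref_study]
      return ds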

5. Technical Approach

The first goal is a whitepaper that maps out use cases and explores relevant issues, such as implementation questions, how the needed profiles might be organized, impact on existing installations, how it would work in a "mixed environment", etc.

The secondary goal is for the whitepaper to propose a potential solution (which may or may not be fully supported by the current standard) for each use case identified.

Existing actors

Existing actors in order of "vested interest" would be:

  • Image Manager
  • Image Archive
  • Document Source
  • Importer
  • Acquisition Modality
  • Image Display
  • Document Consumer

New actors

  • Change Initiator?
  • Change Acceptor?

Existing transactions

<Indicate how existing transactions might be used or might need to be extended.>

New transactions (standards used)

  • Notify Of Change (DICOM)
  • Query For Changes (DICOM)
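
Purely as a strawman, the two transactions can be thought of as push and pull flavors of the same service; the interface below uses invented Python names to illustrate the distinction and implies nothing about the eventual DICOM mechanism.

  # Invented interface names, for illustration only.
  from abc import ABC, abstractmethod
  from datetime import datetime
  from typing import List

  class ChangeManagementService(ABC):
      @abstractmethod
      def notify_of_change(self, study_instance_uid: str, change_summary: dict) -> None:
          """Push model: a Change Initiator tells interested systems what changed."""

      @abstractmethod
      def query_for_changes(self, since: datetime) -> List[dict]:
          """Pull model: a consumer asks what has changed since its last sync point."""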


Impact on existing integration profiles

  • Need to decide whether this would be a separate profile that could be combined with existing profiles, or whether it would be worthwhile to build options for it into relevant existing profiles.

New integration profiles needed

Depending on how the use cases shape up, there might be several profiles to propose in later years, e.g.:

  • Imaging Change Management (Basic change management of imaging objects)
  • Imaging Study Quality Control (based on ICM, adding quality control functions for higher-order operations such as study split, study merge, etc.)

Breakdown of tasks that need to be accomplished

  • Define detailed clinical use cases for change management
  • Define the desired requirements for the change management mechanism
  • Work with the DICOM WG to define any necessary additions in DICOM to support this mechanism
  • Design how each use case can be realized using the defined existing or new transactions

6. Support & Resources

  • Agfa and McKesson have offered to lead development of the profile
  • Merge and several other vendors have offered resources.
  • Agfa has already developed a proprietary, DICOM-oriented method of solving these issues (which provides proof-of-concept and domain experience) but would like to see IHE provide a solution that could be broadly implemented.
  • Canada Health Infoway and different provincial project teams could potentially be recruited.

7. Risks

  • Will very likely depend on new mechanisms defined in DICOM.
    • We need to coordinate with DICOM and make sure this is also a workitem they can work on.
  • "Right-sizing" the scope may be a challenge
    • Avoid Feature Creep
    • Avoid Solving Half the Problem

8. Open Issues

The design for deletion (data retention, quality control) may need to be synchronized with the design already specified in MAWF, to avoid multiple designs with a similar purpose.

Who should be responsible for recording differences between objects? Who is responsible for notifications or polling?

Do we need to differentiate between the handling of changes during the period right after creation, vs after distribution, vs after "stable" archiving?

Do we need to rationalize our solutions with the concepts of data that can be preliminary, final, verified, etc.?

9. Tech Cmte Evaluation

Discussion:

  • Are there other technical approaches/mechanisms?
  • Are web services a better protocol?
  • Can we be rigid about requiring the UID to change if any byte changes? Probably not
    • Would simply shift the nature of the problem/solution to managing/linking the proliferating objects
  • Can think of this as PIR for distributed systems and loosely coupled players
  • Do we need different mechanisms for different architectures?
  • Need to clarify when to use PIR, when to use this profile
  • Do we endorse Study Split instead of PGP?
  • Consider how this works when there are "dumb consumers and changers" and "smart consumers and changers"
  • Would be nice if we could consider DICOM objects an arbitrary byte stream, but unfortunately there are a lot of semantics inside
  • Also need to consider who is the "master" of data to change, e.g. talk to the MPR if you want to update
    • some change triggers come from the master
    • some changes might need to be validated with the master
  • Survey how this is currently being solved today (e.g. global broadcasts, proprietary, etc)
  • The Agfa proposal from IHE Canada (in Google Groups, with a public document) will be linked here.
    • Some concern about duplication of scheduled workflow and PIR capabilities.
  • IHE is not obliged to use the Agfa proposal as it stands
  • Lots of issues when there is not a single source of truth, which is the situation we face.
  • Consider listing the ADT and other sources of truth as systems that need to be "consulted with" perhaps.
  • May need to request change to Master systems
  • Systems receiving notification or information of a change may not have the local authority to make the change
    • do we end up with race conditions or cycles?


Effort Evaluation (as a % of Tech Cmte Bandwidth):

  • 35%
    • (1 day of 4 and extra t-cons, or 1.5 days of 4)
    • has elasticity since it's a white paper not a profile


Responses to Issues:

See italics in Risk and Open Issue sections

Candidate Editors:

Kinson Ho