Imaging Object Change Management - Detailed Proposal
1. Proposed Workitem:
- Proposal Editor: Kevin O'Donnell
- Whitepaper Editors:
- Kinson Ho (Agfa)
- David Heaney (McKesson)
- Domain: Radiology, Cardiology
On a regular basis, DICOM objects are copied, distributed to where they can be used (great), and modified in the course of being used (unavoidable). Differing modifications of different copies result in "outdated" objects or conflicting versions of the "same" object. We need a solution for managing/synchronizing these objects. We also need services supporting object Lifecycle Management, such as data retention.
There is interest in adding the necessary mechanisms to DICOM, but a whitepaper is needed first to focus the work.
An IHE Whitepaper would document the use cases, solution requirements, tricky issues, and problems currently being experienced in the field. The whitepaper would guide the DICOM work. When the DICOM work is complete, IHE could consider a profile to combine the whitepaper (Vol 1 material) and DICOM mechanisms (Vol 2 material).
Regional PACS deployments (specifically Canada Health Infoway, Europe, and the U.S. Military Health System) report these issues are a problem. A variety of other problems not involving regional PACS can also be traced back to the same underlying issue of change & lifecycle management.
Agfa and McKesson have offered to lead development of the profile. Agfa has already developed a proprietary, DICOM-oriented method of solving these issues (which provides proof-of-concept and domain experience) but would like to see IHE provide a solution that could be broadly implemented.
IHE Radiology members have valuable experience with use cases, an interest in making the contents of imaging objects reliable and correct, and a desire for an open standards-based solution. The solutions will work best when many vendors support them.
2. The Problem
DICOM is the preferred protocol for distributing and storing medical imaging, and vast numbers of DICOM objects are created and distributed every day.
For various reasons, it is common to create and distribute multiple copies of instances:
- Providing copies to other sites (or departments) caring for the same patient
- Sending copies for processing (3D, CAD, Clinical Analysis)
- Local caching of instances to compensate for network performance
- Mirroring instances on a Fail-over/Backup server
- Use of multiple "peer" archives
- Migrating to a new PACS system
- Modifying caching of prior Studies based on Image Lifetime Management policies such as those specified by the VA (e.g. bringing all priors for a person on-line to the primary cache once they are on active duty)
For various reasons, it is also common to modify instances:
- Correction/update of demographics
- Splitting/combining studies
- Updating references to other related instances
- Taking “bad” images out of circulation
- Coercing instances to fit into local data models/workflow
- Permanently delete old images or entire Studies as may be required by institutional record retention policies
The combination of needing to distribute copies of instances and needing to modify instances leads to inconsistent copies, which in turn creates the potential for confusion, error, or loss of data.
It would be useful to have reliable, efficient mechanisms to know whether two copies of an instance have diverged, what has changed and if and how to synch them.
3. Key Use Cases
This is just a start to highlight some major use cases. The work item will involve fleshing out the details of these use cases and identifying other significant use cases.
Central Archive, Local PACS
- i.e. Infoway model
- Each site has a local PACS for operational activities
- A regional Archive takes care of inter-site image exchange and both medium and long term archiving
- Also, local PACS may serve as a local cache (see next use case)
- A group of three hospitals each has a local PACS.
- When a patient is transferred to another hospital (e.g. for specialist care), a copy of recent images is transferred to the second hospital.
- The first hospital identifies a demographic change and updates its Master Copy of the images.
- Open question: what happens at the second hospital, and how?
- A study received at a local PACS is sent to the regional Archive
- Later, a mistake is noticed on the study (e.g. two different acquisitions were incorrectly merged into one study because an incorrect worklist item was selected)
- A quality control procedure reconciles the study at the local PACS
- Such reconciliation needs to be propagated to the regional Archive. Ideally this can be done automatically.
- Consider objects coming off of media that have been "unaware" of changes that have happened since the media was burned.
- Consider also the case of media import where a comparison/correction can be done, but no "negotiation" is possible with the media source.
- Image data is processed by tech, creating a result file and result snapshot
- eg, NM gated blood pool study, with output of a screen snapshot showing ejection fraction and regions used
- Result file and snapshot are distributed
- Original image data and result snapshot are viewed by MD
- MD revises the regions or other processing parameters
- New result file and snapshot are generated
- How should this be handled using contributing systems, derivation details and change management?
Data Retention Management
- Data retention policies trigger the condition that particular studies (e.g. based on age) should be deleted
- Need to be able to notify all systems that they must delete any data they may have related to these studies
- If a separate system handles data retention policies, it also requires the same type of transaction to notify one or more 'Image Managers' that a particular study should be deleted.
- It would be useful to be able to sync updates to the original objects over to the Clinical Trial objects if the changes are clinical, but one would not want to carry over, for example, a name change to a married name.
Fundamental functions include:
- determine if two copies of an instance have diverged
- determine what has changed
- determine whether to synch them
- determine what changes to make to synch them
- Manual delete
- Scheduled delete (e.g. due to data retention policy)
- Update Patient Demographics
- Update Procedure Information
- Merge Patient Records
- Link/Unlink Patient Records
- Update Study Level Attributes
- Update Series Level Attributes
- Update Image Level Attributes
- Study Split (e.g. an incorrect worklist entry is selected for an acquisition, causing the current acquisition to be incorrectly merged into an existing study; the incorrect portion of the study needs to be split out)
- Study Merge (e.g. after the incorrect portion of the study is split out, it is updated with the correct order; a study may already exist for the correct order, in which case the split-out objects will be merged into the existing study)
- Study Fix-Up (e.g. an incorrect worklist entry is selected for an acquisition and a correct entry is selected later; this may change the Study Instance UID of the existing objects)
- Subscribe/unsubscribe to change notification
- Revision Log
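To make the first two fundamental functions concrete, here is a minimal sketch in Python of determining what has changed between two copies of an instance. The flat attribute dictionaries are a hypothetical stand-in for real DICOM datasets; a real implementation would compare encoded data elements attribute by attribute.

```python
def diff_instances(old: dict, new: dict) -> dict:
    """Return attribute-level differences between two copies of an instance.

    Each argument is a hypothetical flat attribute dictionary standing in
    for a DICOM dataset. The result maps each changed attribute to its
    (old, new) value pair; None marks an attribute absent on one side.
    An empty result means the copies have not diverged.
    """
    changes = {}
    for key in sorted(set(old) | set(new)):
        if old.get(key) != new.get(key):
            changes[key] = (old.get(key), new.get(key))
    return changes

before = {"PatientName": "DOE^JANE", "StudyDate": "20240102"}
after = {"PatientName": "SMITH^JANE", "StudyDate": "20240102",
         "StudyDescription": "CT CHEST"}

print(diff_instances(before, after))
# {'PatientName': ('DOE^JANE', 'SMITH^JANE'), 'StudyDescription': (None, 'CT CHEST')}
```

An empty diff answers "have they diverged?"; a non-empty diff answers "what has changed?". Whether and how to sync remains a policy decision for the receiving system.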
4. Standards and Systems
- DICOM has prepared a work item for Data Consistency that could provide mechanisms, but is hesitant to approve the work item without a whitepaper to better document the use cases.
5. Technical Approach
The first goal is a whitepaper that maps out use cases and explores relevant issues, such as implementation questions, how the needed profiles might be organized, impact on existing installations, how it would work in a "mixed environment", etc.
The two key technical elements currently under consideration are:
- a DICOM "delta object" that records differences between two objects
- a hash code as a derivable "version ID" for any instance of any object.
The versatility of these two concepts as the basis for a change management system has been demonstrated by tools such as git.
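As an illustration of how these two elements could fit together, the sketch below (in Python, with a hypothetical flat attribute dictionary in place of a real DICOM dataset) derives a hash-based version ID and packages changes as a delta object that a receiver can verify before applying. The names `make_delta` and `apply_delta` are illustrative, not proposed transaction names.

```python
import hashlib
import json

def version_id(instance: dict) -> str:
    """A derivable "version ID": hash of the canonicalized attributes.

    A real implementation would canonicalize the encoded DICOM dataset;
    a sorted JSON dump of a flat dictionary stands in for that here.
    """
    canonical = json.dumps(instance, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def make_delta(old: dict, new: dict) -> dict:
    """A "delta object" recording the differences between two versions."""
    return {
        "BaseVersion": version_id(old),
        "ResultVersion": version_id(new),
        "Changes": {k: new.get(k)
                    for k in set(old) | set(new)
                    if old.get(k) != new.get(k)},
    }

def apply_delta(instance: dict, delta: dict) -> dict:
    """Apply a delta only if it was derived from this exact version."""
    if version_id(instance) != delta["BaseVersion"]:
        raise ValueError("instance has diverged; delta does not apply")
    result = dict(instance)
    for attr, value in delta["Changes"].items():
        if value is None:
            result.pop(attr, None)  # attribute was removed in the new version
        else:
            result[attr] = value
    assert version_id(result) == delta["ResultVersion"]  # verify, as git does
    return result

original = {"PatientName": "DOE^JANE", "StudyDate": "20240102"}
corrected = {"PatientName": "SMITH^JANE", "StudyDate": "20240102"}

delta = make_delta(original, corrected)
assert apply_delta(original, delta) == corrected
```

Because the version ID is derivable from content alone, two systems can detect divergence by exchanging hashes rather than whole objects, and a receiver can refuse a delta whose base version does not match its local copy.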
Existing actors in order of "vested interest" would be:
- Image Manager
- Image Archive
- Document Source
- Acquisition Modality
- Image Display
- Document Consumer
- Change Initiator?
- Change Acceptor?
<Indicate how existing transactions might be used or might need to be extended.>
New transactions (standards used)
- Notify Of Change (DICOM)
- Query For Changes (DICOM)
- Store Change Delta (DICOM)
Impact on existing integration profiles
- Need to decide whether this would be a separate profile that could be combined with existing profiles, or whether it would be worthwhile to build options for it into relevant existing profiles.
New integration profiles needed
Depending on how the use cases shape up, there might be several profiles to propose in later years, e.g.:
- Imaging Change Management (Basic change management of imaging objects)
- Imaging Study Quality Control (based on ICM, add quality control functions for higher order operations such as study split, study merge, etc.)
Breakdown of tasks that need to be accomplished
- Define detailed clinical use cases for change management
- Define desired requirements for the change management mechanism
- Work with the DICOM WG to define any necessary additions in DICOM to support this mechanism
- Design how each use case can be realized using the defined existing or new transactions
High level components of the change management design
Consider these during design in the context of the use cases:
- Change details are persistent
- Provides a natural audit trail for the study
- Change details are captured in an object separate from the original object
- Allows easy propagation of change (change can be propagated without transmitting the original objects)
- Keeps the existing object intact
- Changes are easily archive-able (works well with the Diagnostic Imaging Archive actor)
- Conceptual design consistent with other DICOM SOP Classes such as GSPS, DICOM SR, KOS, Radiation Dose, etc.
- Changes are now queryable using the standard DIMSE C-FIND service
- Change history is embedded
- Change Acceptor can make an independent decision whether or not to accept the change (especially important in case of receiving conflicting changes)
- Only interested parties (e.g. Change Acceptor) need to interpret the object (minimized impact on existing systems while allowing them to take advantage of the features in the future)
- Alleviates the dependency on timing (it is not always possible to identify the order of changes; two changes that affect the same attribute can occur in parallel)
- Easy adoption into XDS-I
- When change details are captured in an object, this can easily be published to XDS-I by creating a manifest of the change object.
- No additional work required to integrate with XDS-I.
- Consider Reconciliation Logic
- when deltas are applied, reconciliation needs to be considered
- when a delta is applied to the same original it was derived from, the result is consistent because the original changer made it so
- when a delta is combined with another delta, someone needs to make sure that all of the cross-references, totalling of component values, etc. are internally consistent/correct in the final object.
- Consider recording "Why" the change was made
- the reason should be recorded at the time of the change, because the changer knows why at the moment they make it
- these details may impact whether a change is accepted/incorporated, and may inform future rollbacks or reconciliations, etc.
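The parallel-change point above can be illustrated with a small, self-contained Python sketch: two deltas (each a hypothetical map of attribute to new value), made independently at different sites from the same original, are combined, and any attribute changed by both sides to different values is surfaced as a conflict for a Change Acceptor to resolve rather than silently ordered by timestamp.

```python
def reconcile(delta_a: dict, delta_b: dict) -> dict:
    """Combine two independently made deltas.

    Each delta is a hypothetical map of attribute -> new value. Attributes
    changed by only one side (or changed identically by both) merge
    cleanly; attributes changed by both sides to different values are
    conflicts that a Change Acceptor must resolve -- timing alone cannot
    order parallel changes.
    """
    merged, conflicts = {}, {}
    for attr in set(delta_a) | set(delta_b):
        if attr in delta_a and attr in delta_b and delta_a[attr] != delta_b[attr]:
            conflicts[attr] = (delta_a[attr], delta_b[attr])
        else:
            merged[attr] = delta_a.get(attr, delta_b.get(attr))
    return {"merged": merged, "conflicts": conflicts}

# Site A fixes the patient name; site B edits the name differently and
# also updates the study description, both working from the same original.
site_a = {"PatientName": "SMITH^JANE"}
site_b = {"PatientName": "DOE^JANE", "StudyDescription": "CT CHEST W/O"}

result = reconcile(site_a, site_b)
assert result["conflicts"] == {"PatientName": ("SMITH^JANE", "DOE^JANE")}
assert result["merged"] == {"StudyDescription": "CT CHEST W/O"}
```

Surfacing conflicts instead of auto-resolving them keeps the independent-decision property above: each Change Acceptor can apply the clean merge immediately and route the conflicting attributes to quality control.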
6. Support & Resources
- Agfa and McKesson have offered to lead development of the profile
- Merge and several other vendors have offered resources.
- Agfa has already developed a proprietary, DICOM-oriented method of solving these issues (which provides proof-of-concept and domain experience) but would like to see IHE provide a solution that could be broadly implemented.
- Canada Health Infoway and different provincial project teams could potentially be recruited.
- Will very likely depend on new mechanisms defined in DICOM.
- We need to coordinate with DICOM and make sure this is also a workitem they can work on.
- "Right-sizing" the scope may be a challenge
- Avoid Feature Creep
- Avoid Solving Half the Problem
8. Open Issues
The design for deletion (data retention, quality control) may need to synchronize with the design already specified in MAWF to avoid multiple designs with similar purpose.
Who should be responsible for recording differences between objects? Who is responsible for notifications or polling?
Do we need to differentiate between the handling of changes during the period right after creation, vs after distribution, vs after "stable" archiving?
Do we need to rationalize our solutions with the concepts of data that can be preliminary, final, verified, etc.?
9. Tech Cmte Evaluation
- Are there other technical approaches/mechanisms?
- Are web services a better protocol?
- Can we be rigid about requiring the UID to change if any byte changes? Probably not
- Would simply shift the nature of the problem/solution to managing/linking the proliferating objects
- Can think of this as PIR for distributed systems and loosely coupled players
- Do we need different mechanisms for different architectures?
- Need to clarify when to use PIR, when to use this profile
- Do we endorse Study Split instead of PGP
- Consider how this works when there are "dumb consumers and changers" and "smart consumers and changers"
- Would be nice if we could consider DICOM objects an arbitrary byte stream, but unfortunately there are a lot of semantics inside
- Also need to consider who is the "master" of data to change, e.g. talk to the MPR if you want to update
- some change triggers come from the master
- some changes might need to be validated with the master
- Survey how this is currently being solved today (e.g. global broadcasts, proprietary, etc)
- The Agfa proposal from IHE Canada is in Google Groups; a link to the public document will be added here.
- Some concern about duplication of scheduled workflow and PIR capabilities.
- IHE is not obliged to use the Agfa proposal as it stands
- Lots of issues when there is not a single source of truth, which is the situation we face.
- Consider listing the ADT and other sources of truth as systems that need to be "consulted with" perhaps.
- May need to request change to Master systems
- Systems receiving notification or information of a change, may not have the local authority to make the change
- do we end up with race conditions or cycles?
Effort Evaluation (as a % of Tech Cmte Bandwidth):
- (1 day of 4 and extra t-cons, or 1.5 days of 4)
- has elasticity since it's a white paper not a profile
Responses to Issues:
- See italics in Risk and Open Issue sections
- David Heaney
- Kinson Ho