Reporting Worklist Prioritization - Proposal
Latest revision as of 10:31, 29 August 2022

1. Proposed Workitem: Reporting Worklist Prioritization

  • Proposal Editor: Antje Schroeder, Kevin O'Donnell, Teri Sippel
  • Editor: TBA
  • Domain: Radiology

Summary

Continuously sorting reading worklist entries into an order that optimizes outcomes requires obtaining a variety of details from multiple systems.

Some relevant information is available today, some is newly becoming available (e.g. prospective findings generated by an AI).

A Reporting Worklist Prioritization specification (Option or Profile) could identify sources and formats for aggregating the relevant information in interoperable ways.

The potential for such improvements has drawn positive attention at recent RSNA demonstrations.

2. The Problem

At any point in time, there are N items on the reading worklist when a new reading task arrives, so the Reading Worklist Manager has to choose to prioritize the new task into one of N+1 possible positions on the list. All priorities are relative.

While the logic (and preferences) for which other tasks a task should come before is internal to the reading worklist provider (and hopefully customizable), all the details for each task that may be relevant to prioritization come from outside the reading worklist provider so getting access to them is an interoperability problem.

An AI Application that detects lung nodules or rules out strokes can't tell the reading worklist provider where to place a new study in the worklist, because the AI Application doesn't know what else is on the worklist. But the AI can contribute positive or negative findings that feed the logic of the worklist provider. Other systems will also need to provide (and update) relevant information, and certain details the worklist provider wants will need standardized encoding (especially if they come from multiple implementations).

Which findings rank higher or lower than, say, a partial stenosis of a coronary artery is something that will be encoded into the central prioritizer, not into each AI Application (granting that the prioritizer might eventually be an AI itself).

3. Key Use Case

Consider sitting down with a few chief radiologists and radiology department administrators and asking: "If you were manually prioritizing these couple dozen entries in a worklist, what does 'optimal ordering' mean to you? What characteristics would drive your relative priority choices? How do you handle tradeoffs between a low-confidence, high-risk finding and a high-confidence, moderate-risk finding?" People representing different practice types should also be consulted, since prioritization of pediatric cases may differ and use different criteria; oncology practices may likewise have their own considerations.

The profile use case should spell out:

  • What pieces of information does the prioritizer need/use to prioritize worklist items
      • priority of the underlying imaging order
      • agreed turnaround times (service level agreements)
      • imaging procedure type
      • patient type/location
      • Reading Physician availability, role, "assignment" at that point in time
          • Note there isn't "one" worklist or priority order; there are/may be worklists for all the working radiologists
      • known/suspected risks to the patient
          • admitting diagnosis, reason for procedure
          • anatomy being imaged
          • preliminary findings from one or more AI algorithms
              • stroke is present, stroke is absent
              • severity of hemorrhage (large/small, located in a more critical area or a less critical area)
              • confidence of finding
          • lab findings, pathology findings
      • TODO: also describe findings in priors, and the situation where the study is a followup
      • when the patient is expected to leave the facility
      • whether associated information (e.g. lab results, CAD results) is available yet, and if not, when
  • What system(s) can provide each piece of information, and how is it encoded, e.g.
      • some from the modality (via PACS) in the image header
      • some from an analysis application, encoded in DICOM SR content
      • some from the Order Placer in the incoming Order message
      • some from a support system in a Procedure Update OMI, ….
  • What codesets are used for certain concepts, e.g.
      • finding codes, anatomy codes, severity codes, etc.
      • Also consider whether collecting data on the actual order of reading, turnaround times, etc. is in scope for the profile
          • Helps make the business case and/or justify this work and/or "troubleshoot" issues with how things are getting read
          • Might need to tag information on "reasons" why things were "slower" - what needs to be "fixed" to make prioritization results "better"
          • Consider "dynamic" aspects of "bumping things up the worklist" if they're not getting done, or the SLA is not being met

The profile should define:

  • What system has the relevant information listed above
  • How that system encodes and conveys that information to the prioritizer
  • Whether to proxy/aggregate the information to make it more digestible for a prioritizer

4. Standards and Systems

Systems:

  • Reporting Worklist Manager (RIS/PACS)
  • PACS - study data source
  • AI Applications and/or AI Manager - presence/absence of various findings
      • Might need adjustments to the MAP Profile to get additional details incorporated in the results
  • EMR - admission and patient record information
  • Staff/Scheduling System - expertise and availability of reading staff

5. Technical Approach

See the Breakdown of Tasks details in the Prioritization Tab of the 2022-23 Evaluation Worksheet (https://docs.google.com/spreadsheets/d/1IoRwFz1xxORCwYIJat-Yn2trfkhiT5OKbU8toEpKb5k/edit?usp=sharing)

  • For reference, see also the Prioritization Tab of the 2021-22 Evaluation Worksheet (https://docs.google.com/spreadsheets/d/18wETLZhLYcXq5pOMfAhH8-Q4M5cKztj6QNUEOAcOnng/edit?usp=sharing)

6. Support & Resources

<List groups that have expressed support for the proposal and resources that would be available to accomplish the tasks listed above.>

<Identify anyone who has indicated an interest in implementing/prototyping the Profile if it is published this cycle.>

Might interact with the AIR Data/Root Results proposal. The Root Results might serve as a useful summary, and if more details are needed, it might warrant extending attributes in the existing MAP.

7. Risks

<List real-world practical or political risks that could impede successfully fielding the profile.>

<Technical risks should be noted above under Uncertainties.>

8. Tech Cmte Evaluation

<The technical committee will use this area to record details of the effort estimation, etc.>

Effort Evaluation (as a % of Tech Cmte Bandwidth):

  • xx% for MUE
  • yy% for MUE + optional

Editor:

TBA

SME/Champion:

TBA <typically with a technical editor, the Subject Matter Expert will bring clinical expertise; in the (unusual) case of a clinical editor, the SME will bring technical expertise>