Reporting Worklist Prioritization - Proposal


1. Proposed Workitem: Reporting Worklist Prioritization

  • Proposal Editor: Antje Schroeder, Kevin O'Donnell, Teri Sippel
  • Editor: TBA
  • Domain: Radiology

Summary

Continuously sorting reading worklist entries into an order that optimizes outcomes requires obtaining a variety of details from multiple systems.

Some relevant information is available today; some is only now becoming available (e.g. prospective findings generated by an AI).

A Reporting Worklist Prioritization specification (Option or Profile) could identify sources and formats for aggregating the relevant information in interoperable ways.

The potential for such improvements has drawn positive attention at recent RSNA demonstrations.

2. The Problem

At any point in time there are N items on the reading worklist when a new reading task arrives, so the Reading Worklist Manager must place the new task into one of N+1 possible positions on the list. All priorities are relative.

The logic (and preferences) for deciding which other tasks a given task should come before is internal to the reading worklist provider (and hopefully customizable). However, all the details about each task that may be relevant to prioritization come from outside the reading worklist provider, so getting access to them is an interoperability problem.

An AI Application that detects lung nodules or rules out strokes can't tell the reading worklist provider where to place a new study in the worklist, because the AI Application doesn't know what else is on the worklist. But the AI can contribute positive or negative findings that feed the logic of the worklist provider. Other systems will also need to provide (and update) relevant information. And certain details that the worklist provider wants will need standardized encoding (especially if they come from multiple implementations).

Which findings are a higher priority than, say, a partial stenosis of a coronary artery, and which are lower, is logic that will be encoded in the central prioritizer, not in each AI Application (granting that the prioritizer might eventually be an AI itself...).
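
To illustrate that division of responsibility, here is a minimal sketch (hypothetical Python, with invented finding codes and weights; nothing here is part of the proposal) of a central prioritizer that keeps its own configurable finding-weight table and re-ranks the whole worklist whenever a contributing system posts new information. The AI only supplies coded findings; where the task lands among the N+1 possible positions depends on everything else on the list, which only the prioritizer can see:

from dataclasses import dataclass, field

# Hypothetical, site-configurable mapping from coded findings to relative weights.
# In practice the codes would come from an agreed codeset, not ad-hoc strings.
FINDING_WEIGHTS = {
    "ICH_PRESENT": 100,       # intracranial hemorrhage detected
    "LVO_SUSPECTED": 90,      # suspected large vessel occlusion
    "LUNG_NODULE": 20,
    "STROKE_ABSENT": -10,     # a negative finding can lower relative priority
}

@dataclass
class ReadingTask:
    accession: str
    order_priority: int                           # e.g. STAT=3, urgent=2, routine=1
    findings: list = field(default_factory=list)  # coded findings posted by AI apps, labs, etc.

    def score(self) -> float:
        # Only comparisons between tasks matter; the absolute value is meaningless.
        return self.order_priority * 10 + sum(FINDING_WEIGHTS.get(c, 0) for c in self.findings)

def insert_task(worklist: list, new_task: ReadingTask) -> list:
    # Placing the new task into one of the N+1 possible positions is just a re-ranking
    # of the whole list; the contributing systems never specify a position.
    worklist.append(new_task)
    worklist.sort(key=ReadingTask.score, reverse=True)
    return worklist

A real Reading Worklist Manager would of course use richer, per-radiologist logic; the point is only that the ranking rules live in one place and are fed by standardized inputs.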

3. Key Use Case

Consider sitting down with a few chief radiologists and radiology department administrators and asking: “If you were manually prioritizing these couple dozen entries in a worklist, what would ‘optimal ordering’ mean to you? What characteristics would drive your relative priority choices? How do you handle tradeoffs between a low-confidence, high-risk finding and a high-confidence, moderate-risk finding?” and so on. People representing different practice types should be consulted, since prioritization of pediatric cases may differ and use different criteria; oncology practices may also have considerations that differ; etc.

The profile use case should spell out (a hypothetical sketch of an aggregated task record follows this list):

  • What pieces of information the prioritizer needs/uses to prioritize worklist items
      ◦ priority of the underlying imaging order
      ◦ agreed turnaround times (service level agreements)
      ◦ imaging procedure type
      ◦ patient type/location
      ◦ Reading Physician availability, role, "assignment" at that point in time
          ▪ Note there isn't "one" worklist or priority order; there are/may be worklists for all the working radiologists
      ◦ known/suspected risks to the patient
      ◦ admitting diagnosis, reason for procedure
      ◦ anatomy being imaged
      ◦ preliminary findings from one or more AI algorithms
          ▪ stroke is present, stroke is absent
          ▪ severity of hemorrhage (large/small, located in a more critical or a less critical area)
          ▪ confidence of finding
      ◦ lab findings, pathology findings
      ◦ when the patient is expected to leave the facility
      ◦ whether associated information (e.g. lab results, CAD results) is available yet, and if not, when
  • What system(s) can provide each piece of information and how it is encoded, e.g.
      ◦ some from the modality (via PACS) in the image header
      ◦ some from an analysis application encoded in DICOM SR content
      ◦ some from the Order Placer in the incoming Order message
      ◦ some from a support system in a Procedure Update OMI, ….
  • What codesets are used for certain concepts, e.g.
      ◦ finding codes, anatomy codes, severity codes, etc.
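
As a sketch of what the aggregated inputs above might look like when handed to the prioritizer (hypothetical Python; field names, types, and groupings are illustrative assumptions, not proposed attributes), with comments noting the likely source of each field:

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class AIFinding:
    # One preliminary finding contributed by an AI algorithm (illustrative shape only)
    code: str                          # coded concept, e.g. stroke present / stroke absent
    severity: Optional[str] = None     # e.g. large/small hemorrhage, more or less critical location
    confidence: Optional[float] = None

@dataclass
class WorklistTaskRecord:
    # Aggregated inputs the prioritizer might draw on for one reading task
    accession: str
    order_priority: str                        # from the Order Placer (incoming order message)
    sla_turnaround_minutes: Optional[int]      # agreed turnaround time (service level agreement)
    procedure_type: str                        # imaging procedure type
    anatomy: str                               # anatomy being imaged (image header via PACS)
    patient_type: str                          # patient type/location (ED, inpatient, outpatient)
    admitting_diagnosis: Optional[str] = None  # from the EMR; reason for procedure
    known_risks: list = field(default_factory=list)   # known/suspected risks to the patient
    ai_findings: list = field(default_factory=list)   # AIFinding items (e.g. from DICOM SR content)
    lab_results_available: bool = False        # whether associated information has arrived yet
    expected_discharge: Optional[datetime] = None      # when the patient is expected to leave
    assigned_reader: Optional[str] = None      # reader availability/role from the staff/scheduling system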

The profile should define:

  • What system has the relevant information listed above
  • How that system encodes and conveys that information to the prioritizer (see the DICOM SR extraction sketch after this list)
  • Whether to proxy/aggregate the information to make it more digestible for a prioritizer
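
For the DICOM SR piece of the encoding question, the fragment below is a minimal sketch, assuming Python with the pydicom library and a much simpler content tree than a real measurement report (e.g. TID 1500), of how an aggregator or prioritizer might pull coded findings out of an SR object. It is illustrative only, not a proposed transaction:

import pydicom

def coded_findings(sr_path):
    # Walk a DICOM SR content tree and yield (concept name, code value, scheme) tuples.
    # Real AI results nest far more deeply; this simplified walk only shows where
    # coded concepts live in the SR content items.
    ds = pydicom.dcmread(sr_path)

    def walk(items):
        for item in items:
            if item.ValueType == "CODE":
                name = item.ConceptNameCodeSequence[0]
                value = item.ConceptCodeSequence[0]
                yield (name.CodeMeaning, value.CodeValue, value.CodingSchemeDesignator)
            # descend into nested content items, if any
            yield from walk(getattr(item, "ContentSequence", []))

    yield from walk(getattr(ds, "ContentSequence", []))

The extracted codes would then be matched against the agreed codesets (finding, anatomy, severity, etc.) before being fed to the prioritizer's logic.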

4. Standards and Systems

Systems:

  • Reporting Worklist Manager (RIS/PACS)
  • PACS - study data source
  • AI Applications and/or AI Manager - presence/absence of various findings
      ◦ Might need adjustments to the MAP Profile to get additional details incorporated in the results
  • EMR - admission and patient record information
  • Staff/Scheduling System - expertise and availability of reading staff

5. Technical Approach

See the Breakdown of Tasks details in the Prioritization Tab of the 2022-23 Evaluation Worksheet.

6. Support & Resources

<List groups that have expressed support for the proposal and resources that would be available to accomplish the tasks listed above.>

<Identify anyone who has indicated an interest in implementing/prototyping the Profile if it is published this cycle.>

Might interact with the AIR Data/Root Results proposal. The Root Results might serve as a useful summary, and if more details are needed, it might warrant extending attributes in the existing MAP Profile.

7. Risks

<List real-world practical or political risks that could impede successfully fielding the profile.>

<Technical risks should be noted above under Uncertainties.>

8. Tech Cmte Evaluation

<The technical committee will use this area to record details of the effort estimation, etc.>

Effort Evaluation (as a % of Tech Cmte Bandwidth):

  • xx% for MUE
  • yy% for MUE + optional

Editor:

TBA

SME/Champion:

TBA <typically with a technical editor, the Subject Matter Expert will bring clinical expertise; in the (unusual) case of a clinical editor, the SME will bring technical expertise>