AI Results - Brief Proposal

1. Proposed Workitem: AI Results (AIR) Profile

  • Proposal Editor: Kevin O'Donnell
  • Editor: Kevin O'Donnell
  • Domain: Radiology

2. The Problem

Applying AI methods (deep learning, etc.) to the analysis of medical images is an area of significant interest and activity. That activity creates several needs:

Interoperable results: The results of such AI analysis need to be presented to radiologists in their reading environment. This depends on interoperability between AI result generation products and radiology reading workstation systems/software.

Study Integrated results: Radiologists interpreting studies likely expect AI-generated results to supplement rather than replace traditional image analysis results, so a given study will be composed of acquired images, AI results, and traditional clinical data.

Convergence of result encoding: Many AI results are results a human could otherwise have produced, those human results may be used as training data for the AI, and AI results may be used by other AIs in an adversarial network. AI and non-AI results therefore need to be handled together. Converged encoding also facilitates data pooling and sharing between sites.

For AI to productively live up to its promise, results and data must be reliably and conveniently assembled and managed.

3. Key Use Case

Goal: AI packages store AI results that are retrieved and presented consistently by a variety of imaging display systems.

The display presents the result in the context of the medical imaging study to which it applies.

To minimize implementation complexity for displays, and to avoid requiring different software for each new AI result, AI results are composed from a reasonable set of primitives (see the sketch after the bullets below).

This fits the IHE model of profiles as tools for convergence.

  • Display products that support those primitives can declare they are “AI-Ready”
  • AI products that output results using those primitives know a variety of displays can present their results
  • Users motivate display vendors to be AI-Ready by requiring conformance
  • Users motivate AI vendors to conform since it simplifies deployment for the users
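
As an illustration of the primitive-based approach, here is a minimal Python sketch; the class and field names are hypothetical, and the actual primitive set would be defined by the profile (most likely in terms of DICOM SR content items and segmentation objects rather than these classes):

  from dataclasses import dataclass
  from typing import List, Tuple, Union

  @dataclass
  class Measurement:                 # e.g. a volume reported by an AI algorithm
      name: str
      value: float
      unit: str

  @dataclass
  class Region:                      # e.g. an outline referencing a source image
      image_uid: str
      contour: List[Tuple[float, float]]

  Primitive = Union[Measurement, Region]

  @dataclass
  class AIResult:
      algorithm: str
      primitives: List[Primitive]

  def render(result: AIResult) -> None:
      # A display that handles the primitive set can present any such result,
      # regardless of which AI algorithm produced it.
      for p in result.primitives:
          if isinstance(p, Measurement):
              print(f"{p.name}: {p.value} {p.unit} (from {result.algorithm})")
          elif isinstance(p, Region):
              print(f"overlay {len(p.contour)}-point contour on image {p.image_uid}")

  render(AIResult("HypotheticalNoduleAI", [
      Measurement("Nodule volume", 412.0, "mm3"),
      Region("1.2.840.99999.1.2.3", [(10.0, 12.5), (11.0, 13.0), (12.0, 12.0)]),
  ]))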

4. Standards and Systems

  • AI algorithms - running in PACS, cloud, processing servers, standalone workstations, etc. (Result Creators)
  • PACS, VNA, etc.
  • Reading workstations, Image Displays
  • Databases, clinical analysis, report creators (import results)

Standards

DICOM SR - TID 1500

DICOM Segmentations

DICOM Sup XXX Simplified SR in JSON for AI

  • David Clunie has a significant draft that will be presented at WG-06 in September. He has also been working on tooling to test/confirm the transcoding logic.
  • This work is what makes SR-encoded results accessible to the AI community, and it bridges into the existing radiology infrastructure
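
To give a flavor of what such a result carries, the following sketch builds a single TID 1500-style numeric measurement content item with pydicom. The codes shown are illustrative only, and this is native DICOM dataset construction, not the unpublished draft JSON encoding, which would express an equivalent structure in JSON:

  from pydicom.dataset import Dataset

  def coded(value, scheme, meaning):
      # Helper to build a single entry of a code sequence
      item = Dataset()
      item.CodeValue = value
      item.CodingSchemeDesignator = scheme
      item.CodeMeaning = meaning
      return item

  # NUM content item expressing "Volume = 412 mm3", as it might appear inside
  # a TID 1500 Measurement Report produced by an AI algorithm
  num = Dataset()
  num.ValueType = "NUM"
  num.ConceptNameCodeSequence = [coded("118565006", "SCT", "Volume")]

  value = Dataset()
  value.NumericValue = "412"
  value.MeasurementUnitsCodeSequence = [coded("mm3", "UCUM", "cubic millimeter")]
  num.MeasuredValueSequence = [value]

  print(num)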

AIM - Annotation and Image Markup

RSNA CDE (Common Data Elements)

  • helps with longitudinal data and automation logic for reporting and decision support
  • http://radelement.org

IHE Evidence Documents Profile

  • Should review to see if there is anything we should borrow/revive

IHE Query for Existing Data for Mobile

  • consider using this to query the EHR for diagnostic reports and observations about the patient
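
QEDm queries are expressed as FHIR searches; a minimal sketch (assuming a FHIR R4 server, with a placeholder base URL, patient ID, and category code) might look like:

  import requests

  base = "https://ehr.example.org/fhir"   # hypothetical QEDm Clinical Data Source
  resp = requests.get(
      f"{base}/DiagnosticReport",
      params={"patient": "12345", "category": "RAD", "_sort": "-date"},
      headers={"Accept": "application/fhir+json"},
  )
  for entry in resp.json().get("entry", []):
      report = entry["resource"]
      print(report.get("issued"), report.get("code", {}).get("text"))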

IHE Results Distribution

  • consider the critical results status flags since AI results may be used to prioritize reading worklists
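
Purely as an illustration (the flag values and worklist fields below are invented, not taken from the Results Distribution profile), a reading worklist might sort flagged studies ahead of routine ones:

  worklist = [
      {"accession": "A1001", "ai_flag": "routine",  "received": "08:10"},
      {"accession": "A1002", "ai_flag": "critical", "received": "08:25"},
      {"accession": "A1003", "ai_flag": "routine",  "received": "07:55"},
  ]
  priority = {"critical": 0, "routine": 1}
  worklist.sort(key=lambda s: (priority[s["ai_flag"]], s["received"]))
  print([s["accession"] for s in worklist])   # flagged study A1002 reads first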

5. Technical Approach

A basic store-and-retrieve Profile that establishes a baseline for encoding, display behaviors/conventions, and data management (where AI results belong, and where you find and use them).

There are many other radiology applications of AI that are not about processing images. This proposal is image-centric.

Existing actors

  • Image Manager/Archive - store AI segmentations and measurements alongside human segmentations and measurements
  • Image Display - present AI results together with the associated images

New actors

  • Result Creators - AI algorithms running in PACS, cloud, processing servers, standalone workstations, etc.
  • Result Consumers - systems that consume AI results rather than display them (databases, clinical analysis, report creators)

Existing transactions

<Indicate how existing transactions might be used or might need to be extended.>

New transactions (standards used)

  • Store AI Result -
  • Query AI Result -
  • Retrieve AI Result -
  • Display AI Result - technically a behavior specification rather than a transaction
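
One way these transactions could be realized, purely as an assumption of this sketch (the proposal does not commit to DICOMweb), is over STOW-RS, QIDO-RS, and WADO-RS; the URLs, UIDs, and file name are placeholders:

  import requests

  pacs = "https://pacs.example.org/dicomweb"   # hypothetical Image Manager/Archive
  study = "1.2.840.99999.1.1"                  # placeholder Study Instance UID

  # Store AI Result: STOW-RS POST of the AI-generated SR object
  boundary = "ai-result-boundary"
  with open("ai_result_sr.dcm", "rb") as f:
      part = f.read()
  body = (f"--{boundary}\r\nContent-Type: application/dicom\r\n\r\n".encode()
          + part + f"\r\n--{boundary}--\r\n".encode())
  requests.post(f"{pacs}/studies/{study}", data=body, headers={
      "Content-Type": f'multipart/related; type="application/dicom"; boundary={boundary}'})

  # Query AI Result: QIDO-RS search for SR instances within the study
  found = requests.get(f"{pacs}/studies/{study}/instances",
                       params={"Modality": "SR"},
                       headers={"Accept": "application/dicom+json"}).json()

  # Retrieve AI Result: WADO-RS retrieve of one instance found above
  series = found[0]["0020000E"]["Value"][0]    # Series Instance UID
  instance = found[0]["00080018"]["Value"][0]  # SOP Instance UID
  requests.get(f"{pacs}/studies/{study}/series/{series}/instances/{instance}",
               headers={"Accept": 'multipart/related; type="application/dicom"'})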

Impact on existing integration profiles

  • If SR-based then most other Radiology profiles apply - XDS-I, XCA-I, IRWF.b, PDI, IDEP, IOCM, WIA, ATNA-Rad, etc.

New integration profiles needed

  • AIR - AI Results

Breakdown of tasks

  • <Discuss/Confirm use case list (currently 5) to decide in/out of scope and details>
  • <Analyze/choose which report guidelines to use>
  • <Resolve open issue on the degree of computer parsability>
  • Draft/Review "transaction" for Display AI Result
      • General baseline behavior; displays can augment it and may pursue the varied preferences of their users
  • <Draft/Review content definition for Oncology Report (profile use of CCDA)>
  • <Draft/Review transaction for Retrieve Report>
      • New work

6. Support & Resources

  • DICOM WG-23 is engaged on the topic of AI Results and has completed work useful to this activity

<Identify anyone who has indicated an interest in implementing/prototyping the Profile if it is published this cycle.>

7. Risks

  • The JSON SR supplement could get hung up on technical hitches.
  • Debates on alternative standards
  • Scope creep - AI is a large space and there is much fun to be had

8. Open Issues

<Point out any key issues or design problems. This will be helpful for estimating the amount of work and demonstrates thought has already gone into the candidate profile.>

9. Tech Cmte Evaluation

<The technical committee will use this area to record details of the effort estimation, etc.>

Effort Evaluation (as a % of Tech Cmte Bandwidth):

  • xx% for ...

Candidate Editor:

Kevin O'Donnell