AI Results - Brief Proposal

From IHE Wiki

Revision as of 13:58, 27 August 2019

1. Proposed Workitem: AI Results (AIR) Profile

  • Proposal Editor: Kevin O'Donnell
  • Editor: TBA
  • Domain: Radiology

2. The Problem

There is significant interest and activity in applying AI methods (deep learning, etc.) to the analysis of medical images.

Interoperable results: The results of such AI analysis need to be presented to radiologists in their reading environment. This depends on interoperability between AI result generation products and radiology reading workstation systems/software.

Study-integrated results: Radiologists interpreting studies likely expect AI-generated results to supplement rather than replace traditional image analysis results, and thus a given study will be composed of acquired images, AI results, and traditional clinical data.

Convergence of result encoding: Many AI results are results a human could otherwise have produced; those human results may be used as training data for the AI, and AI results may be used by other AIs in an adversarial network. AI and non-AI results need to be handled together. It is also desirable to facilitate data pooling and sharing between sites.

For AI to productively live up to its promise, results and data must be reliably and conveniently assembled and managed.

3. Key Use Case

The goal is for AI packages to be able to store AI results that can be retrieved and presented consistently by a variety of imaging display systems.

The display would present the result in the context of the medical imaging study to which it applies.

To minimize the implementation complexity for the displays, and to avoid having to change software for each new AI result, it would make sense to compose AI results from a reasonable set of primitives.

This fits the IHE model of profiles as tools for convergence.

  • Display products that support those primitives can declare they are “AI-Ready”
  • Users can motivate display vendors to be AI-Ready
  • AI products that output their result using those primitives will know that a variety of displays will be able to present their results.
  • Users can motivate AI vendors to conform since that will simplify deployment for the users
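The proposal does not enumerate the "reasonable set of primitives." Purely as an illustrative sketch (the class and field names below are hypothetical, not from the proposal), a small set of generic building blocks — a measurement, a segmentation reference, and a finding that ties them to source images — shows the kind of structure an "AI-Ready" display could support without result-specific software changes:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical result primitives -- the proposal does not enumerate them.
# These illustrate generic building blocks a display could render for any
# new AI result without code changes.

@dataclass
class Measurement:
    """A named numeric value with units, e.g. a nodule volume."""
    name: str     # e.g. "Volume"
    value: float
    units: str    # e.g. "mm3"

@dataclass
class SegmentationRef:
    """Reference to a DICOM Segmentation object delineating a finding."""
    sop_instance_uid: str

@dataclass
class Finding:
    """One finding composed from primitives, tied to the source images."""
    label: str
    referenced_image_uids: List[str]
    measurements: List[Measurement] = field(default_factory=list)
    segmentation: Optional[SegmentationRef] = None

# Example: a display that understands these three primitives can present
# this result in the context of the study it references.
nodule = Finding(
    label="Pulmonary nodule",
    referenced_image_uids=["1.2.840.113619.2.55.1"],
    measurements=[Measurement("Volume", 412.0, "mm3")],
)
```

A real profile would presumably define such primitives in terms of existing DICOM objects (SR measurement groups, Segmentations) rather than a new model; the sketch only conveys the composition idea.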

4. Standards and Systems

  • AI algorithms - running in PACS, cloud, processing servers, standalone workstations, etc. (Result Creators)
  • PACS, VNA, etc.
  • Reading workstations, Image Displays
  • Databases, clinical analysis, report creators (import results)

Standards

DICOM SR - TID 1500

DICOM Segmentations

DICOM Sup XXX Simplified SR in JSON for AI

  • David Clunie has a significant draft that will be presented at WG-06 in September. He has also been working on tooling to test/confirm the transcoding logic.
  • This work is what makes the approach accessible to the AI community and bridges into the existing radiology infrastructure.
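Sup XXX is still a draft and its schema is not given here; purely as a hypothetical illustration of the direction (flattening a TID 1500-style measurement group into JSON that an AI developer can emit without a full SR toolkit), a result fragment might look like:

```python
import json

# Hypothetical sketch only: DICOM Sup XXX (Simplified SR in JSON) is a
# draft, and its actual key names and structure may differ. This shows
# the idea of a flattened TID 1500-style measurement group.
result = {
    "trackingIdentifier": "Nodule 1",
    "finding": {"code": "27925004", "scheme": "SCT", "meaning": "Nodule"},
    "measurements": [
        {"name": "Volume", "value": 412.0, "units": "mm3"}
    ],
    "referencedSegment": {"sopInstanceUID": "1.2.840.113619.2.55.1"},
}

# The transcoding logic mentioned above would map this JSON to and from
# a full DICOM SR (TID 1500) object.
encoded = json.dumps(result)
```

The point of the supplement is exactly this bridge: AI developers work in JSON, while the stored/archived form remains standard SR that existing radiology infrastructure can manage.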

AIM - Annotation and Image Markup

RSNA CDE (Common Data Elements)

  • helps with longitudinal data and automation logic for reporting and decision support
  • http://radelement.org

IHE Evidence Documents Profile

  • Should review to see if there is anything we should borrow/revive

IHE Query for Existing Data for Mobile

  • consider using this to query the EHR for diagnostic reports and observations about the patient

IHE Results Distribution

  • consider the critical results status flags since AI results may be used to prioritize reading worklists


5. Technical Approach

A basic store and retrieve Profile that establishes a baseline for encoding, display behaviors/conventions, and data management (where do AI results belong/where do you find/use them).

There are many other radiology applications of AI that are not about processing images. This proposal is image-centric.


Existing actors

  • Image Manager/Archive - store AI segmentations and measurements alongside human segmentations and measurements
  • Image Display - present AI results together with the associated images

New actors

  • Result Creators - AI algorithms running in PACS, cloud, processing servers, standalone workstations, etc.
  • Result Consumers - systems that consume AI results rather than display them (databases, clinical analysis, report creators)

Existing transactions

<Indicate how existing transactions might be used or might need to be extended.>

New transactions (standards used)

  • Store AI Result -
  • Query AI Result -
  • Retrieve AI Result -
  • Display AI Result - technically a behavior spec, not a transaction
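The proposal does not bind these transactions to a transport. If the results are standard DICOM objects (SR, Segmentation), the existing DICOMweb services are one plausible mapping; classic DIMSE (C-STORE/C-FIND/C-MOVE) would be another. A minimal sketch of the DICOMweb option (the base URL is hypothetical):

```python
# Hypothetical mapping of the new transactions onto standard DICOMweb
# services (STOW-RS, QIDO-RS, WADO-RS). Base URL is illustrative only;
# the profile could equally bind these transactions to DIMSE.
BASE = "https://pacs.example.org/dicomweb"

def store_ai_result_url():
    # Store AI Result -> STOW-RS: POST the SR/SEG instances to the study
    return f"{BASE}/studies"

def query_ai_result_url(study_uid):
    # Query AI Result -> QIDO-RS: find SR instances within a study
    return f"{BASE}/studies/{study_uid}/instances?Modality=SR"

def retrieve_ai_result_url(study_uid, series_uid):
    # Retrieve AI Result -> WADO-RS: fetch the result series
    return f"{BASE}/studies/{study_uid}/series/{series_uid}"

# "Display AI Result" is a behavior requirement on the Image Display,
# not a network transaction, so it has no endpoint here.
```

Reusing DICOMweb would keep the Image Manager/Archive unchanged from the perspective of existing profiles, which is consistent with the "if SR-based, most other Radiology profiles apply" observation below.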

Impact on existing integration profiles

  • If SR-based, then most other Radiology profiles apply - XDS-I, XCA-I, IRWF.b, PDI, IDEP, IOCM, WIA, ATNA-Rad, etc.

New integration profiles needed

  • AIR - AI Results

Breakdown of tasks

<As the basis for the effort estimation, enumerate the tasks to develop the Profile text. E.g.>

  • <Discuss/Confirm use case list (currently 5) to decide in/out of scope and details>
  • <Analyze/choose which report guidelines to use>
  • <Resolve open issue on the degree of computer parsability>
  • <Draft/Review content definition for Oncology Report (profile use of CCDA)>
  • <Draft/Review transaction for Retrieve Report>

6. Support & Resources

  • DICOM WG-23 is engaged on the topic of AI Results and has completed work useful to this activity

<Identify anyone who has indicated an interest in implementing/prototyping the Profile if it is published this cycle.>

7. Risks

  • The JSON SR supplement could get hung up on technical hitches.
  • Debates on alternative standards
  • Scope creep - AI is a large space

8. Open Issues

<Point out any key issues or design problems. This will be helpful for estimating the amount of work and demonstrates thought has already gone into the candidate profile.>

9. Tech Cmte Evaluation

<The technical committee will use this area to record details of the effort estimation, etc.>

Effort Evaluation (as a % of Tech Cmte Bandwidth):

  • 35% for ...

Candidate Editor:

TBA