Rad Tech Minutes 2019-11-11 to 2019-11-14

==Participants==
 
- Andrei
- Chris Lindop
- Salt
- Steve N.
- Kevin
- Jonathan
- Kinson
- Kevin Schap (CAP, IHE PaLM)
- Antje
- Wim
- Elliot Silver
- Sridhar B (Nuance)
- Neil
- Brad
- David Kwan
- Hamid
- Charles Parisot

==TF Maintenance decisions==

Maintenance:
:* CP approved for ballot
:** CP-RAD-258
:** CP-RAD-356
:** CP-RAD-378
:** CP-RAD-379
:** CP-RAD-382
:* CP cancelled
:** CP-RAD-239
:** CP-RAD-259

===Notes on break-up of the RAD Technical Framework===

See https://wiki.ihe.net/index.php/RAD_TF_Maintenance_2019-2020#Proposed_breakdown_-_initial_proposal_from_Lynn

==TI Supplements Final Text / deprecation decisions==

:* DBT Extension
::* Technical Committee consensus has been achieved to promote the DBT Extension to Final Text; see [[DBT Extension Evaluation]]
:* IID
::* It is proposed that the profile not be advanced to Final Text until CPs 349 and 364 against it are finalized.
:* MAWF
:* Steve to draft a Final Text evaluation for [[SWF.b FT Evaluation]]

==Kickoff Evaluations==

===AI Workflow===

** Describe gaps in Use Case coverage
*** Currently reconstituting the Simple Case (scan through result distribution)
*** Thinking through variants on the simple case
*** Some open discussion around where proxying happens, e.g. the Orchestrator proxying for the Model to the local data systems
** Review the "uncertainty points" in the evaluation. Is there a resolution plan for each?
*** Mostly they were copied into Open Issues, and that Open Issue list has been reviewed with resolutions planned.
*** (Didn't actually review the uncertainty point list from the proposal - think we got them all)
** Do the effort points in the evaluation still seem right?
*** Mostly seem right. The UPS-RS issue was scoped for no discussion, but we have spent some time on it.
*** (Learning point: make sure resolutions during evaluation are carefully captured and clearly expressed)
*** We've been holding firm on scope management, which has helped.
** Describe unresolved technical issues/tasks
*** Need to fully put the UPS-RS issue to bed
**** We chose to use existing transactions on an existing standard (one that is not yet widely implemented). The proposal could instead have developed new transactions on a soon-to-exist standard (FHIR Task), or new transactions on a standard we would have to standardize (). The latter options would have refocused this cycle's work onto that transaction work instead of the workflow level we've been looking at.
*** Kevin will help document the UPS-RS equivalents for the fields highlighted by Neil. That information should be covered in the Workitem Concept section, or somewhere more normative if needed. (See the UPS-RS sketch after this list.)
*** Procedure update has some open questions around how AI-driven worklist reprioritization should work: who decides what, what needs to be communicated, and how it is encoded. (See the reprioritization sketch after this list.)
*** A number of sections still need technical content, but it seems like we have an understanding of what goes there.
** Describe potential practical issues
*** Some of today's AI Models are used to being handed PNG files. It will be a step up to do what we are describing.
*** Haven't really described how our framework applies to the current pattern of AI platform products. (May be covered in Actor Descriptions)
*** AI Models being in the cloud vs. locally deployed might involve some practicalities we don't have experience with yet, e.g. security and the need for configuration/proxying
*** Questions were raised about regulatory requirements/monitoring, but we are putting those a bit out of scope for now.
** Review the open issue list. Does it feel complete?
*** Pretty much? Maybe think about questions we want to ask/highlight for the Public Commenter community
** Which open issues feel most risky?
*** Workflow is different from the data-push model
** How is the work fitting in the allocated bandwidth? (Time to spare? Just right? Things were left undone?)
*** Mostly OK.
** How does the scope feel? (Room to expand? Just right? Pretty ambitious?)
*** Keeps nudging to expand; we keep reining it back in...
** If you had to reduce scope, what would you drop?
*** Can't drop anything. If we run out of time, simply don't publish this cycle and request again next cycle.
** Have the promised resources manifested?
*** Mostly - Brad, Neil, Sridhar, Dave Kwan
** What tasks would benefit from additional expertise? (e.g. each actor, user)
*** AI Model developers, especially standalone ones. Users (radiologists) who will use multiple models. Users who will configure the logic on the Requesters (IIT? PACS admins? Integrator? Jon Shoemaker?)
** What vendors are engaged for each actor? Record how many.
*** Requester -
*** Orchestrator - ???
*** AI Model - ???
*** Reporting Worklist - Visage, Nuance, GE, Change
** How many tcons would you like between now and the PC Prep Meeting?
*** 1 mid-December, 1 mid-January
*** (Joint with WG23 in Dec)
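
To make the UPS-RS discussion above concrete, here is a minimal sketch of how an Orchestrator or AI Model might find and claim a workitem using the Worklist Service transactions of DICOM PS3.18. This is an illustration of the general UPS-RS pattern under discussion, not text from the profile; the base URL is hypothetical and error handling is reduced to the essentials.

<pre>
# Minimal sketch: find and claim a UPS-RS workitem (DICOM PS3.18 Worklist Service).
# The endpoint URL is hypothetical.
import uuid
import requests

BASE = "https://worklist.example.org/dicomweb"  # hypothetical UPS-RS base URL

# Search for Workitems: match on Procedure Step State (0074,1000) = SCHEDULED.
resp = requests.get(
    f"{BASE}/workitems",
    params={"00741000": "SCHEDULED"},
    headers={"Accept": "application/dicom+json"},
)
resp.raise_for_status()

for workitem in resp.json():
    uid = workitem["00080018"]["Value"][0]  # SOP Instance UID of the workitem
    # Change Workitem State: claim it by moving it to IN PROGRESS, supplying a
    # Transaction UID (0008,1195) that must accompany later updates.
    txn_uid = f"2.25.{uuid.uuid4().int}"
    claim = {
        "00741000": {"vr": "CS", "Value": ["IN PROGRESS"]},
        "00081195": {"vr": "UI", "Value": [txn_uid]},
    }
    r = requests.put(
        f"{BASE}/workitems/{uid}/state",
        json=claim,
        headers={"Content-Type": "application/dicom+json"},
    )
    if r.ok:
        print(f"Claimed workitem {uid}; transaction UID {txn_uid}")
        break  # another worker may have won the race for earlier items
</pre>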
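And one possible encoding for the reprioritization question: a minimal sketch, assuming the same hypothetical endpoint and a hypothetical workitem UID, in which a Requester raises Scheduled Procedure Step Priority (0074,1200) on a SCHEDULED workitem via the UPS-RS Update Workitem transaction. Who is allowed to do this, and when, remains exactly the open question noted above.

<pre>
# Minimal sketch: reprioritize a SCHEDULED workitem by updating
# Scheduled Procedure Step Priority (0074,1200) via UPS-RS Update Workitem.
import requests

BASE = "https://worklist.example.org/dicomweb"  # hypothetical UPS-RS base URL
WORKITEM_UID = "2.25.123456789"                 # hypothetical workitem UID

update = {"00741200": {"vr": "CS", "Value": ["HIGH"]}}  # HIGH / MEDIUM / LOW

# Update Workitem is a POST of just the changed attributes; once the step is
# IN PROGRESS, a transaction-uid query parameter would also be required.
r = requests.post(
    f"{BASE}/workitems/{WORKITEM_UID}",
    json=update,
    headers={"Content-Type": "application/dicom+json"},
)
r.raise_for_status()
</pre>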

It will be the responsibility of the Profile Editor to lead resolution of these issues before the Public Comment preparation meeting.

===AI Results===

** Describe gaps in Use Case coverage
*** None that I can think of. Narrow scope.
** Review the "uncertainty points" in the evaluation. Is there a resolution plan for each?
*** Yes. Resolved with relevant Concept sections, etc.
** Do the effort points in the evaluation still seem right?
*** Basically yes. Actually used a bit less time this meeting. Probably will use the full slot next meeting.
** Describe unresolved technical issues/tasks
*** None currently visible. Mostly exploring practical product design issues to see if anything needs additional tech.
*** Creating the example encodings for each primitive will be a bit of work. (Consider both inline explanations and actual sample objects in the Implementation Materials folder; see the encoding sketch after this list.)
** Describe potential practical issues
*** Navigation and presentation will be challenging for Displays.
** Review the open issue list. Does it feel complete?
*** Seems complete
** Which open issues feel most risky?
*** Not an open issue, but it will be a step up for the AI Models to implement the specified DICOM SOPs (see the STOW sketch after this list). Might want to encourage platforms/proxies to help them with this. (And make the case that they get to read these objects too, to get ground truth.)
*** Another adoption risk is "conflict" between multiple standards activities (e.g. FHIR, ACR, local groups, etc.)
** How is the work fitting in the allocated bandwidth? (Time to spare? Just right? Things were left undone?)
*** Well. A bit of time to spare this meeting.
** How does the scope feel? (Room to expand? Just right? Pretty ambitious?)
*** Good. Constraining scope creep.
** If you had to reduce scope, what would you drop?
*** STOW AI Sketch, Consumer (not much of a savings)
** Have the promised resources manifested?
*** Julian, Elliot, Dave Kwan, Sridhar, Jonathan, Andrei, Kinson, etc. (CAD, Displays, etc.)
** What tasks would benefit from additional expertise?
*** Radiologists to vet the navigation and display requirements.
** What vendors are engaged for each actor? Record how many.
*** Evidence Creator - Nuance
*** Image Manager - Vital, Visage, Change, GE, Siemens
*** Image Display - Vital, Visage, Change, GE,
*** Imaging Doc Consumer - Nuance,
** How many tcons would you like between now and the PC Prep Meeting?
*** 1 mid-Dec, 1 mid-Jan. Will likely cancel one or both but hold the spots.
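
As a starting point for those sample objects, here is a minimal sketch (using pydicom 2.x) of encoding a single numeric measurement, the simplest kind of result primitive, as a Comprehensive SR content item. The codes, UID, and value are illustrative placeholders; real AI Results objects would follow TID 1500 and carry the full patient/study/series modules, which are omitted here for brevity.

<pre>
# Minimal sketch: one numeric measurement encoded as a Comprehensive SR
# content item with pydicom (2.x). Patient/study attributes omitted.
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import generate_uid, ExplicitVRLittleEndian

COMPREHENSIVE_SR = "1.2.840.10008.5.1.4.1.1.88.33"  # Comprehensive SR SOP Class

def code_item(value, scheme, meaning):
    """Build a single Code Sequence item."""
    item = Dataset()
    item.CodeValue = value
    item.CodingSchemeDesignator = scheme
    item.CodeMeaning = meaning
    return item

# NUM content item: Volume = 12.3 ml (illustrative codes and value)
num = Dataset()
num.RelationshipType = "CONTAINS"
num.ValueType = "NUM"
num.ConceptNameCodeSequence = [code_item("118565006", "SCT", "Volume")]
value = Dataset()
value.NumericValue = "12.3"
value.MeasurementUnitsCodeSequence = [code_item("ml", "UCUM", "milliliter")]
num.MeasuredValueSequence = [value]

# Root CONTAINER of the SR document, holding the measurement
sr = Dataset()
sr.SOPClassUID = COMPREHENSIVE_SR
sr.SOPInstanceUID = generate_uid()
sr.Modality = "SR"
sr.ValueType = "CONTAINER"
sr.ConceptNameCodeSequence = [code_item("126000", "DCM", "Imaging Measurement Report")]
sr.ContinuityOfContent = "SEPARATE"
sr.ContentSequence = [num]

sr.file_meta = FileMetaDataset()
sr.file_meta.MediaStorageSOPClassUID = sr.SOPClassUID
sr.file_meta.MediaStorageSOPInstanceUID = sr.SOPInstanceUID
sr.file_meta.TransferSyntaxUID = ExplicitVRLittleEndian
sr.is_little_endian = True   # pydicom 2.x encoding flags
sr.is_implicit_VR = False
sr.save_as("ai_result_sketch.dcm", write_like_original=False)
</pre>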
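And a minimal sketch of the kind of flow the "STOW AI Sketch" option refersible to: storing a result object over DICOMweb STOW-RS. The archive URL is hypothetical, and the multipart body is built by hand to keep dependencies minimal.

<pre>
# Minimal sketch: store a DICOM object via STOW-RS (DICOMweb Store transaction).
import requests

BASE = "https://archive.example.org/dicomweb"  # hypothetical STOW-RS base URL
BOUNDARY = "dicomweb-boundary"

with open("ai_result_sketch.dcm", "rb") as f:
    instance = f.read()

# STOW-RS takes a multipart/related body with one application/dicom part
# per instance being stored.
body = (
    f"--{BOUNDARY}\r\nContent-Type: application/dicom\r\n\r\n".encode()
    + instance
    + f"\r\n--{BOUNDARY}--".encode()
)
resp = requests.post(
    f"{BASE}/studies",
    data=body,
    headers={
        "Content-Type": f'multipart/related; type="application/dicom"; boundary={BOUNDARY}',
        "Accept": "application/dicom+json",
    },
)
resp.raise_for_status()  # the response payload lists stored/failed instances
</pre>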
