Rad Tech Minutes 2020-11-16-20

Latest revision as of 22:39, 7 December 2020

Monday, November 16, 2020: 8:15 am - 5:00 pm Central Time (CT)

IHE Radiology Technical Committee Roster
08:15 - 08:30: Welcome, Patent Disclosure Announcement, Agenda Review
  • Attendees
    • Tomoyuki Araki, IHE J
    • Brian Bialecki, American College of Radiology (ACR)
    • Wim Corbijn, Philips Healthcare
    • Lynn Felhofer, IHE Radiology Technical Project Manager
    • Brad Genereaux, SIIM
    • Kinson Ho, Change Healthcare
    • Nichole Knox, RSNA
    • David Kwan, Ontario Health
    • Andrei Leontiev, Visage Imaging
    • Chris Lindop, Citius Tech
    • Steven Nichols, GE
    • Kevin O'Donnell, Canon
    • Antje Schroeder, Siemens Healthineers
    • Jonathan Whitby, Canon
    • Harald Zachmann, IBM Watson
S1 08:30 - 11:00: Contrast Administration Management (CAM) (2.5)
  • Goal: Retire some uncertainty points.
S2 11:15 - 13:15: AI Whitepaper (2)
  • Brad will upload a snapshot of document at end of day.
S3 13:45 – 15:45 Maintenance (2)
  • CP Review
    • CP-RAD-453: Clarify Implicit Post-processing Workflow, SWF,SWF.b, Submitted by Steven Nichols; Assigned to Steven Nichols
    • CP-RAD-454: RAD-5 Keys Clarification, SWF,SWF.b, 2020-11-16, Submitted by Lynn Felhofer; Assigned to Lynn Felhofer
    • CP-RAD-455: SWF.b Section Number Fixes, SWF.b, 2020-11-16, Submitted by Lynn; Assigned to Lynn; completed
    • CP-RAD-456: Enterprise Identity Option - Issuer of Other Patient ID, SWF.b, Submitted by Gunter Zeilinger, Assigned to Andrei Leontiev
S4 16:00 - 17:00: Not TC Time. Available for Authoring Groups if needed
  • Time not needed. Session ended at 3:30pm

Tuesday, November 17, 2020: 8:45 am - 5:00 pm CT

  • Attendees
    • Tomoyuki Araki, IHE J
    • Salt, Canon
    • Brian Bialecki, American College of Radiology (ACR)
    • Wim Corbijn, Philips Healthcare
    • Lynn Felhofer, IHE Radiology Technical Project Manager
    • Brad Genereaux, SIIM
    • Kinson Ho, Change Healthcare
    • Nichole Knox, RSNA
    • David Kwan, Ontario Health
    • Andrei Leontiev, Visage Imaging
    • Chris Lindop, Citius Tech
    • Steven Nichols, GE
    • Kevin O'Donnell, Canon
    • Antje Schroeder, Siemens Healthineers
    • Jonathan Whitby, Canon
    • Harald Zachmann, IBM Watson
S1:08:45 - 10:30am Contrast Administration Management (CAM) (4.25)
S1:10:30 - 11:00am Maintenance (2.5)
  • IHE European Connectathon Recap
  • IHE Japan Connectathon Recap
S2:11:15 - 13:15: AI Whitepaper (4)
S3:13:45 - 15:00: Free discussion; content to be decided

Wednesday, November 18, 2020: 8:45 am - 5:00 pm (CT)

  • Attendees
    • Brian Bialecki, American College of Radiology (ACR)
    • Chris Carr, RSNA
    • Wim Corbijn, Philips Healthcare
    • Karl Diedrich, IBM Watson
    • Lynn Felhofer, IHE Radiology Technical Project Manager
    • Brad Genereaux, SIIM
    • Kinson Ho, Change Healthcare
    • Nichole Knox, RSNA
    • David Kwan, Ontario Health
    • Andrei Leontiev, Visage Imaging
    • Chris Lindop, Citius Tech
    • Sujith Nair, American College of Radiology (ACR)
    • Steven Nichols, GE
    • Jouke Numan, GE
    • Kevin O'Donnell, Canon
    • Antje Schroeder, Siemens Healthineers
    • Jonathan Whitby, Canon
    • Harald Zachmann, IBM Watson
S1 09:00 - 11:00: Maintenance (4.5)
  • CP-RAD-423
    • Next steps
      • 1. Understanding that something about the current state is wrong.
      • 2. Fully understanding what the current state is.
      • 3. Deciding what you want the new state to be.
      • 4. Encoding the desired state changes.
      • 5. Communicating the desired state changes.
      • 6. Defining specific behavior when receiving the communicated desired state changes.
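The six steps above describe a generic state-synchronization flow. As a rough illustration only (all names here are invented for this sketch and are not part of CP-RAD-423 or any IHE transaction), the flow can be modeled like this:

```python
from dataclasses import dataclass

@dataclass
class StateChange:
    current: str   # step 2: the sender's view of the current state
    desired: str   # step 3: the state the sender wants instead

def detect_problem(observed: str, expected: str) -> bool:
    # Step 1: notice that something about the current state is wrong.
    return observed != expected

def encode_change(current: str, desired: str) -> StateChange:
    # Step 4: encode the desired state change.
    return StateChange(current=current, desired=desired)

class Receiver:
    def __init__(self, state: str):
        self.state = state

    def apply(self, change: StateChange) -> None:
        # Step 6: defined behavior on receipt - only act if the receiver's
        # state matches the sender's view of "current", so a stale or
        # already-applied change is ignored rather than misapplied.
        if self.state == change.current:
            self.state = change.desired

def communicate(change: StateChange, receiver: Receiver) -> None:
    # Step 5: communicate the desired state change to the receiver.
    receiver.apply(change)

# Usage: the sender sees a study it considers "rejected" that the
# receiver still shows as "active".
receiver = Receiver(state="active")
if detect_problem(observed="active", expected="rejected"):
    communicate(encode_change("active", "rejected"), receiver)
print(receiver.state)  # -> rejected
```

The guard in step 6 is the key design point the breakdown is driving at: the communicated message carries both the assumed current state and the desired state, so the receiver can behave predictably even when its state has diverged.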
  • CP-RAD-454 remains assigned to Lynn; she will create an updated version based on committee input
  • CP-RAD-452 is approved for the next ballot pending updates by Lynn from today’s discussion
S2 11:15 - 13:15: AI Whitepaper (6)
S3 13:45 - 15:45: AI Whitepaper (8)
S4 16:00 - 17:00: Free discussion; content to be decided

Thursday, November 19, 2020: 8:15 am - 5:00 pm CT

Attendees
  • Tomo Araki, Canon
  • Brian Bialecki, American College of Radiology (ACR)
  • Wim Corbijn, Philips
  • Karl Diedrich, IBM Watson
  • Lynn Felhofer, IHE Radiology Technical Project Manager
  • Brad Genereaux, SIIM
  • Kinson Ho, Change Healthcare
  • Nichole Knox, RSNA
  • David Kwan, Ontario Health
  • Andrei Leontiev, Visage Imaging
  • Chris Lindop, Citius Tech
  • Steven Nichols, GE
  • Kevin O'Donnell, Canon
  • Antje Schroeder, Siemens Healthineers
  • Jonathan Whitby, Canon
  • Harald Zachmann, IBM Watson
S1 08:30 - 10:00: Maintenance (6) (includes AI testing discussion @ 09:00)
  • Notes to be added by LF next week (incorporating TS and LF notes)
S2 11:15 - 13:15: AI Whitepaper (10)
S3 13:45 - 15:00: Free time
S3 15:00 - 15:30: Contrast Administration Management (CAM) (5)
15:30 - 16:00: CAM Kickoff Assessment
  • Profile Name: CAM
    • Describe gaps in Use Case coverage
      • Seems complete. Have fine tuned.
    • Review ALL "uncertainty points" in the evaluation. Is there a resolution plan for each?
      • There was 1. Yes, we’ll proceed with DIMSE and reach out to remaining injector vendors to ask about DICOMweb.
    • Do the effort points in the evaluation still seem right?
      • Yes. Scope was pretty well fleshed out. Few if any surprises.
    • Did the Breakdown of Tasks accurately reflect the work? What extra tasks arose?
      • Seemed to match pretty closely.
    • Describe unresolved technical issues/tasks
      • None really. Have some homework that is typical profiling text completion. Tight scoping helps.
    • Describe potential practical issues
      • Avoiding the whole workflow thing keeps it simple. Mismatching IDs etc. Phase 2 may raise those (but we also have experience in how to do it)
      • e.g. using new SUID rather than the one the rest of the study is using. So use MWL.
      • In public comment – highlight doing this and ask about interest
      • “This profile assumes you are doing something to get good data (e.g. MWL)”
    • Review the open issue list. Does it feel complete?
      • Seems reasonable.
    • Which open issues feel most risky; what other risks exist?
      • DICOMweb has the most potential to expand the workload.
      • Note that we have not addressed obtaining patient/procedure metadata (e.g. using MWL). The profile assumes it is handled. Addressing it completely is part (core) of Phase 2.
    • How is the work fitting in the allocated bandwidth? (Time to spare? Just right? Things were left undone?)
      • Just right or time to spare (we used the time to do some of our work from the PC prep meeting)
    • How does the scope feel? (Room to expand? Just right? Pretty ambitious?)
      • The scope is small (which is why it is proceeding quickly and on time) but there are certainly more interesting (and complex) issues lurking past the edges and in the Phase 2 space.
    • If you had to reduce scope, what would you drop?
      • It's already mostly written.
    • Have the promised resources manifested?
      • Mostly. Tomo has been working hard. We have been corralling the injector vendors (see below). And the committee has provided good feedback as always.
    • What tasks would benefit from additional expertise?
      • The National Extension/Open Issue on regional variation will be a focus point for now and PC
    • What vendors are engaged for each actor? Record how many.
      • Infusion Manager – 1.5, working for 4; Image Manager – several, usual suspects; Contrast Info Consumer – RIS? Touch base with the Radiation Reporters, Reporting vendors? IHE-J will explore.
    • Was the profile where it needed to be at the start of the Kickoff meeting (See "Kickoff Meeting" above), if not what was the gap
      • Definitely. We were able to have a meeting ahead of schedule. We had a full draft (with a few missing chunks) to review.
    • Was the profile where it needed to be at the end of the Kickoff meeting, if not what was the gap
      • Definitely. Have already reviewed 50+% of the PC material.
    • How many tcons would you like between now and the PC Prep Meeting?
      • None. Seems on track. Open issues seem manageable.

Friday, November 20, 2020: 8:45 am - 1:15 pm (CT)

Attendees
  • Wim Corbijn, Philips
  • Karl Diedrich, IBM Watson
  • Lynn Felhofer, IHE Radiology Technical Project Manager
  • Brad Genereaux, SIIM
  • Kinson Ho, Change Healthcare
  • Nichole Knox, RSNA
  • David Kwan, Ontario Health
  • Andrei Leontiev, Visage Imaging
  • Chris Lindop, Citius Tech
  • Stephen Meyers
  • Steven Nichols, GE
  • Kevin O'Donnell, Canon
  • Antje Schroeder, Siemens Healthineers
  • Harald Zachmann, IBM Watson
S1 09:00 - 12:00: AI Whitepaper (13)
  • Profile Name: AI Interoperability in Imaging WP
    • Describe gaps in Use Case coverage
      • (It's really broad but..) feels like we have a good handle on the overall scope and the many various activities involved in the AI Ecosystem.
      • (And really, how much more do we want to ADD at this point? It's already pretty hefty)
    • Review ALL "uncertainty points" in the evaluation. Is there a resolution plan for each?
      • Good coverage of the "boxes" of the different activities.
      • Those level 3 headings and common mechanisms will be waypoints on the overall map, but still some inspiration needed to create it.
      • Dataset is really well covered. Going deeper to pick technical standards is out of scope (but some listing of candidates is OK as long as the other work is done)
      • Model lifecycle and Feedback lifecycle well covered.
      • Next steps had 2 uncertainty points, but we agreed on a plan to make next steps a key Open Issue question in PC so we can work on them during resolutions in April
    • Do the effort points in the evaluation still seem right?
      • Relative weightings between topics are good, but there may be a multiplier to apply to all to scale up a bit.
      • Will assess the overall scale later. Might be underestimating the EP a bit because there will be a lot of text to write, and a LOT of pages to review.
      • Todo - do a time/point estimate and keep an eye on page count.
    • Did the Breakdown of Tasks accurately reflect the work? What extra tasks arose?
      • Pretty much. Was a good framework.
    • Describe unresolved technical issues/tasks
      • (There's a lot of writing to do and there will be a lot of review work) Tech looks good.
    • Describe potential practical issues
      • <Improve this question>
    • Review the open issue list. Does it feel complete?
      • What Open Issue list? ;-)
      • The whitepaper is a giant list of open issues.
      • Would be good to have framing questions to spur productive feedback during public comment
    • Which open issues feel most risky; what other risks exist?
      • (Added some risks to the document - is AI too dynamic for this to be productive?)
    • How is the work fitting in the allocated bandwidth? (Time to spare? Just right? Things were left undone?)
      • Pretty well. We allocated a lot, but we did get through the full "lifecycle". Doesn't feel like we skipped important stuff.
      • We added a lot of (good) detail. Does that feel like we're losing control?
      • Started with "four pillars"; now have 7, but it was more of a re-division with a bit of expansion (10-20%), not "doubling". (Wasn't a direction change, but added depth and re-divided)
      • Some de-risking. Removed gaps. Clarified scope. Tightened up concepts.
    • How does the scope feel? (Room to expand? Just right? Pretty ambitious?)
      • Appropriately ambitious. As a Whitepaper, it should be broadly scoped.
    • If you had to reduce scope, what would you drop?
      • If needed, cut depth/specific details across the board.
      • Stick to one soup-to-nuts example rather than multiple patterns
      • Will likely stop the use case diagrams
    • Have the promised resources manifested?
      • YES. The Authoring Group has been meeting regularly. Committee was very engaged.
    • What tasks would benefit from additional expertise?
      • Chris R/RIC input is good. Will lean on them for the Perform Inference section
      • Would like to get a rep from independent AI performers
      • Would be good to get reps from "repository" community - David K, ACR?, Larry Tarbox, AJCC, NACCRR
      • Report Creators are still a perennial gap. Maybe Chris can dragoon someone? They are an important part of incorporating results into the reading WF.
    • What vendors are engaged for each actor? Record how many.
      • (A bit N/A for WP) - we do have representation from a broad cross section of the product community.
    • Was the profile where it needed to be at the start of the Kickoff meeting (See "Kickoff Meeting" above), if not what was the gap
      • Pretty good (thanks to the work of the Authoring Group and the Editor)
      • Can't think of particular gaps.
    • Was the profile where it needed to be at the end of the Kickoff meeting, if not what was the gap
      • Pretty much. Had to hustle through the editing plan for going forward, but no real gaps per se.
    • How many tcons would you like between now and the PC Prep Meeting?
      • AG is meeting "weekly" (with holidays)
      • 1 TC - 2nd week of January - 2 hours
        • Brad will have done a lot of writing, and reviewed with AG in first week of Jan
S2 12:15 - 13:15: Maintenance (7)
  • IOCM Review
  • CPs completed this week and will be balloted in January 2021
    • CP-RAD-433
    • CP-RAD-434
    • CP-RAD-451
    • CP-RAD-452
    • CP-RAD-454
    • CP-RAD-455
  • Will review CP-RAD-356 at next CP call in January 2021
  • Connectathon - Potential AI vendors (spreadsheet: https://docs.google.com/spreadsheets/d/1K9__bo5Nl_nZ6z2kVFjIwv8243vgFjwUn4AGU_gyxyg/edit?usp=sharing)
    • Lynn will ask Teri and Chris L to review and edit based on AI demo participants
    • Spreadsheet can be reviewed and edited by all
  • HTML Publications
    • ITI is publishing in PDF and HTML for one more year. At the end of that time, PDF will go away and all new documents will be published in HTML only.
    • ITI profiles ready to review at https://profiles.ihe.net/