AIRAI Checkpoint Assessment

From IHE Wiki

Kick Off Meeting

  • Describe gaps in Use Case coverage
    • Participants' expectations as to which use cases the profile has to address differed widely.
    • All use cases have been confirmed to be in scope.
    • The original Basic Statistics use case was significantly enlarged in scope and increased in importance. This created additional uncertainty as to where the effort for this profile should be focused.
    • The importance and details of each use case were discussed at length; the relative importance of the use cases, and of the scenarios within each use case, were debated and remain uncertain.
    • The implementation of the profile depends largely on resolving these discussions, as they will affect the data and underlying mechanisms of the profile.
  • Review ALL "uncertainty points" in the evaluation. Is there a resolution plan for each?
    • Uncertainty points were not assigned to the Creation and Consumption use cases, which turned out to be more controversial than expected. The plan is to prepare a more detailed definition of these use cases and hold a first follow-up t-con to discuss and formalize the descriptions.
    • Discussion this week centered on the Assessment Record Method and is not resolved; the uncertainty on this subject has not decreased and has probably even increased. The plan is to hold a second follow-up t-con to compare the prepared detailed record methods against the extended use case definitions. Authors will create shared documents on Google Drive for other participants to contribute to.
    • Undiscussed uncertainties include edge use cases, packaging, and grouping. A plan will be established based on the results of the two t-cons.
  • Do the effort points in the evaluation still seem right?
    • Hard to judge, since the concept has not yet been decided.
  • Did the Breakdown of Tasks accurately reflect the work? What extra tasks arose?
    • The tasks are roughly correct, but we underestimated the uncertainty and the amount of preparation required.
    • In retrospect, a better recognition of uncertainty would have suggested an advance committee t-con to map out the resolution strategy that authors could bring to the kick-off meeting.
  • Describe unresolved technical issues/tasks
    • Not decided yet, see uncertainty section.
  • Describe potential practical issues
    • Not entirely clear, since the concept has not been decided.
    • Issues with large volumes of objects were raised as a potential impact on the practicality of implementation.
    • Issues with complexity of objects
    • Issues with coverage of use cases
  • Review the open issue list. Does it feel complete
    • There is no open issues list yet.
  • Which open issues feel most risky; what other risks exist?
    • The large degree of uncertainty is the biggest risk.
  • How is the work fitting in the allocated bandwidth? (Time to spare? Just right? Things were left undone?)
    • Things were left undone. Have not completed uncertainty resolution, which prevented work on subsequent tasks
  • How does the scope feel? (Room to expand? Just right? Pretty ambitious?)
    • Feeling more ambitious than expected
    • More debates than expected on priority of various parts of the scope
  • If you had to reduce scope, what would you drop?
    • Unless uncertainty resolution is achieved, one strategy is to fall back to a whitepaper to achieve resolution this cycle.
    • If the use cases and mechanisms partition in a well-separated fashion, we may be able to do one use case and mechanism.
  • Have the promised resources manifested?
    • Yes; only really promised opinions.
  • What tasks would benefit from additional expertise?
    • Not at this point
  • What vendors are engaged for each actor? Record how many.
    • Most engagement is indirect, through the IHE-Europe task force AIGI. Siemens Healthineers (depending on the use cases adopted), Merge, VISUS, Visage (depending on the use cases adopted).
  • Was the profile where it needed to be at the start of the Kickoff meeting (See "Kickoff Meeting" above), if not what was the gap
    • Yes, based on minimal guidance on the early steps.
    • One choice for the underlying concepts was well documented; however, alternatives were not available in advance.
    • Thus resolution of uncertainties was limited.
    • In retrospect, large uncertainty point items (>1) should trigger discovery and documentation of main alternatives and methods of comparison/assessment.
  • Was the profile where it needed to be at the end of the Kickoff meeting, if not what was the gap
    • No; major uncertainties are unresolved
  • How many t-cons would you like between now and the PC Prep Meeting?
    • At least three


Public Comment Preparation Meeting

  • Profile Name: AI Result Assessment for Imaging (AIRAI)
  • Did we line-by-line the entire document
    • No; two more two-hour t-cons are needed.
  • How ready is it to go out for PC: Completely, Almost, Soonish, Hmmm
    • Editors will incorporate feedback from face to face
    • Individual member reviews and t-cons for line-by-line review are needed.
  • Which open issues are risky, and why
    • Open Issue 5 may lead to some complex but necessary discussions.
  • Are all open issues phrased to solicit the needed information to close them?
    • Yes (but some may be missing)
  • Which use cases need more input
    • Want general feedback on all use cases from a variety of facilities. Want to explore the variation in patterns they have, hoping they all boil down to the same core.
  • Which issues from the Kickoff Closing Assessment are still unresolved
    • Encoding issues are largely resolved by the use of an assessment object. Large volumes of objects are handled because we are not using individual KOS objects.
    • Complexity issues mostly resolved by keeping the assessment object fairly flat, perhaps with nesting for multiple assessors/assessments.
  • What significant debates in PC-prep were not anticipated in the Kickoff
    • Some debates but we did better than expected!!
  • Review ALL "uncertainty points" in the evaluation. Are all now resolved?
    • Here is the sheet: https://docs.google.com/spreadsheets/d/1IbHYAHk11JfgTp4OFXVStxRLVfH76zhrNwzBG4BRKsc/edit?gid=45209358#gid=45209358
    • Row 6 The edge case uncertainty is still in Open Issues
    • Row 7 Concept sections -- still expecting some PC feedback from sites
    • Row 15 Assessment Record Method - Some residual uncertainty remains. Perhaps: rejection is always explicit; in the clinical use case it is acceptable to leave some elements unreviewed; in the QA use case all elements must be assessed.
    • Row 16 Grouping with IOCM - Yes, there is grouping, but it is still to be determined whether some changes are needed in RAD-66
    • Row 17 CP-RAD-549 will be used to update generic RAD-50 Store Instances transaction for reuse and customization for this profile
    • Row 18 AIR Primitive Handling - mostly addressed with the 'pointer structure' in the assessment object
  • Review ALL "complexity points" in the evaluation. Did each get appropriate text coverage/resolution?
    • 'Yes' except for Row 15 (assessment record method)
  • Review the "effort points" in the evaluation. Still seems right? Need more?
    • Seems mostly accurate but a bit light overall. Still need to review the reuse of RAD-137 and identify query patterns that could meet the needs of various systems, e.g., an Image Display vs. an automated monitoring system.
  • How does the scope feel in terms of being a useful chunk of work? (Needs more? Just right? More than enough?)
    • Well scoped to deliver an initial usable set of functionality, but it is a sizable chunk of work.
  • How is the work fitting in the allocated bandwidth? (Time to spare? Just right? Things were left undone?)
    • Mostly. It is manageable.
  • Did the Breakdown of Tasks accurately reflect the work? What extra tasks arose?
    • Recorded tasks in the estimate were good. Some were underestimated and there were some additional tasks.
  • Looking forward, if you had to reduce scope to hit TI, what would you drop?
    • Reduce assessed IODs to retain AIR SRs, Secondary Capture, Encapsulated PDF, Segmentation(?)
  • Have the promised resources manifested?
    • Didn't recruit resources beyond editors. The planned resources participated; Marc admitted it was more work than anticipated but good to work with Antje.
  • What vendors are engaged (for each actor)
    • Note Creator - Merge, VISUS, Siemens, GE
    • Image Manager - Visage, VISUS, Siemens, Merge
    • Image Display - Visage, VISUS, Siemens, Merge
    • Quality Info Reporter - no vendor yet; (speculate that Bayer may do this; Marc will contact)
  • When will we have sample data/objects
    • Soon! Mohannad is producing them for the Toronto Plugathon.
  • Who should specifically be targeted for Public Comment feedback
    • Our clinical consult group. PACS vendors on the RAD Tech committee.
  • Was the profile where it needed to be at the start of the PC meeting (See "PC Prep Meeting" above), if not what was the gap
    • The document was a bit short of being a solid PC draft at the beginning of the week
    • Collaboration using Google Docs was necessary, but translation to MS Word will be additional work before PC.
  • Was the profile where it needed to be at the end of the PC meeting, if not what was the gap?
    • No; we did not do a line-by-line review during the week.
  • How many tcons would you like between now and PC Publication
    • 3 (cancel one if we can)
  • Do you need any tcons before TI Prep Meeting
    • Yes.
    • Consider a clinical-consult style review call in the middle of the public comment period. Contact someone like Ali to provide input on one or more open issues, then present his comments as a starting point to get input from clinicians. Take notes during the conversation and translate them into actual public comments.
    • Review public comment and make a plan for the F2F week.

Trial Implementation Preparation Closing Assessment

  • Did we line-by-line the entire document
    • Partially. During incorporation of the comments, many sections were reviewed; however, some parts still need a line-by-line review.
  • How ready is it to go out for TI: Completely, Almost, Soonish, Hmmm
    • All public comments have been addressed; however, a few new concept sections (especially to address use cases that are out of scope for the profile) still have to be written.
    • Therefore, at least two or three t-cons are needed after the content has been finalized.
  • How did the work fit in the allocated bandwidth? (Time to spare? Just right? Things were left undone?)
    • For the TI preparation meeting the allocated time was sufficient; however, overall, more time during the brainstorming phase would have been good.
  • Review the evaluation. Which complexity/uncertainty/effort points missed the mark?
  • Or alternatively, estimate how many points you went over and assign the overage effort/complexity/uncertainty to the appropriate points.
  • Are all the open issues closed?
    • No; some more discussions are needed.
  • What significant debates in TI-prep were not anticipated in the Kickoff or PC-Prep
    • A lot of discussion regarding the wording.
    • Dealing with Web Services in IOCM: because the IOCM transaction RAD-66 only covers DIMSE, a lot of discussion surfaced around how we could support the described functionality for web-based transactions.
    • Revisiting the status values for AI findings.
    • Discussions around performance metrics
  • Did the Breakdown of Tasks accurately reflect the work? What extra tasks arose?
  • What residual risks are worth noting
    • Requirement-wise, I think the profile covers everything; some more clarifications of the concept sections are needed.
  • Does it feel we've met all the use cases
    • The ones we had planned for are addressed.
  • Did the promised resources manifest
    • Public comments were a little low. It would especially have been nice to get feedback on wording-related issues during public comment, not just at the meeting.
  • What vendors are engaged (for each actor)
    • Quality Information Reporter: Visus, Siemens Healthineers, Merative
    • Image Manager/Image Archive: Visus, Siemens Healthineers, Merative
    • Image Display: Visus, Siemens Healthineers, Merative
  • Who should specifically be targeted for TI notification (implementors & advocates)
    • Implementers, especially on the side of the Quality Information Reporter
  • When will we have sample data/objects
    • Will be available at TI publication; some minor updates are needed due to the additional status value.
  • Was the profile where it needed to be at the start of the TI meeting, if not what was the gap
    • Yes. 59 of 97 comments were incorporated; 15 were addressed but still needed review; 19 discussion items were open and some TODOs were left.
  • Was the profile where it needed to be at the end of the TI meeting, if not what was the gap
    • Due to some newly identified concept sections, a change in the status values for findings, and a missing line-by-line review, the profile is not yet ready for TI.
  • Do you need any tcons between now and TI Publication
    • At least two, though three would be better.