Critical Finding Follow-up and Communication

From IHE Wiki
Revision as of 10:55, 13 September 2016

1. Proposed Workitem: Non-Critical Actionable Findings Follow-up and Communication

  • Proposal Editor: Tessa Cook
  • Proposal Contributors: Steve Langer, Kevin O'Donnell; Felicha Candelaria-Cook, Michael Kopinsky, Amy Wang, Deb Woodcock (OHSU BMI 516 SU2015 team 3, Harry Solomon, instructor)
  • Profile Editor: Steve Langer/Tessa Cook
  • Profile Contributors: Steve Langer (Mayo), Kevin O'Donnell; Tessa Cook MD PhD (UPenn)
  • Domain: Radiology


When radiologists’ recommendations for follow-up fall through the cracks, the result can be (and has been) patient suffering, death and cost to the healthcare system. For example: pressing medical issues from a motor vehicle accident overshadow suspicious findings until untreatable cancer is diagnosed some years later.

Follow-up recommendations should be captured and reminders triggered at appropriate times to the initiating radiologist and the patient's primary physician until completed or cancelled. A detailed order for follow-up procedures could even be queued up automatically for approval and execution. Existing IHE transactions for ordering and notifications might be adapted.

EMR vendors have tried to build generic follow-up monitoring systems but have not yet released them because of the complexity involved. Academic sites (BWH, Penn) have designed in-house systems, but address only part of the problem and are not easily replicated.

Coordinating follow-up is even more challenging between institutions, making this ripe for an IHE-style standards-based solution.

2. The Problem

Patients often receive recommendations for follow-up evaluation (e.g., imaging, laboratory, pathology or clinical evaluation) as a result of findings on an imaging study. However, without manually searching for the results of follow-up testing or evaluation, radiologists have no way of knowing when or if a patient completes the recommendation, or what the results may be. This puts the patient at risk of being lost to follow-up and returning at a later date after experiencing an adverse event, e.g., diagnosis of an advanced cancer.

At present, there is no effective way for actionable findings to be communicated to the EMR, or for either a RIS or EMR to track whether follow-up is completed within a particular health system, or whether results of an evaluation performed outside the system are submitted.

There are serious potential consequences to missed follow-up:

  • increased personal and monetary cost to the patient experiencing a complication or adverse outcome
  • potential malpractice costs associated with missed follow-up and adverse patient outcomes

Every large hospital/health system has experienced at least one instance of a patient being lost to follow-up despite multiple interactions with the healthcare system during the time the follow-up should have been obtained. There is significant potential for cost savings for multiple stakeholders by addressing this problem.

3. Key Use Case

Clinical Scenario: Patient John Doe has a low-velocity motor vehicle accident. Since he complains of chest pain, Mr. Doe gets a chest X-ray and a chest CT at the nearby community hospital. He is found to have a 7 mm left lung nodule on his chest CT, and the interpreting radiologist recommends a follow-up chest CT in 6 months.


  • The recommendation is communicated to the ER physician and verbally to Mr. Doe, but not documented in his discharge paperwork.
  • Mr. Doe never gets the follow-up chest CT. He sees other doctors in his health system and mentions his accident and imaging but does not recall the nodule or the follow-up recommendation. None of his doctors follow up to get the results of his imaging.
  • Two years later, he begins coughing up blood and has another chest CT, which shows a 2.5 cm left lung nodule, a left-sided pleural effusion and left-sided lymph nodes above his clavicle. He is diagnosed with unresectable lung cancer.


With the proposed profile in place, the scenario could instead unfold as follows:

  • The radiologist’s recommendation within the report is automatically translated by the EMR into a reminder in the patient’s chart.
  • If the patient does not routinely receive care within this health system, the reminder is communicated to the patient’s home EMR. Consideration will have to be given to whether the home EMR is within the same health system as the original treating hospital, within the same HIE, or neither.
  • The follow-up reminder produces gradually escalating alerts (both to the original reporting radiologist and the patient’s home EMR) until the follow-up is completed or the loop is closed by someone on the patient’s care team within the home EMR. The loop should be able to be closed manually (e.g., by the primary care physician who can certify it has been performed already or is not clinically indicated) or automatically when the recommended testing result is available in the EMR.
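The escalating-alert behavior described above can be sketched as a small scheduling function. Everything here (the escalation intervals, the function and variable names) is an illustrative assumption, not part of any existing IHE transaction:

```python
from datetime import date, timedelta

# Illustrative escalation ladder: days after the follow-up due date at
# which successively more urgent alerts fire. These intervals are an
# assumption for illustration; real intervals would be site policy.
ESCALATION_LADDER = [timedelta(days=0), timedelta(days=14), timedelta(days=45)]

def pending_alerts(due: date, today: date, loop_closed: bool) -> list[int]:
    """Return the escalation levels (0 = first reminder) that should
    have fired by `today`. An empty list means no alerts are owed,
    either because the loop was closed or the due date is in the future."""
    if loop_closed:
        return []
    return [level for level, delay in enumerate(ESCALATION_LADDER)
            if today >= due + delay]
```

A notification manager could evaluate this daily for each open reminder and dispatch whichever alert levels have newly fired, stopping as soon as the loop is closed.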

Be clear about critical findings vs incidental findings: the target here is findings with a slower burn (months/years), not findings that affect the patient within minutes or hours.

4. Standards and Systems

Existing systems: RIS/EMR at the treating hospital, EMR within the patient’s typical health system or within the system in which follow-up is expected to be obtained.

Relevant components of standards:

  • Report Mobile Alert (ITI-84) and Query for Alert Status (ITI-85)
  • Would DSUB let the radiologist subscribe to be notified when the relevant follow-up report gets published?

5. Technical Approach

New integration profiles needed

Critical Finding Followup Profile (CFF)

University of Pennsylvania Health System is using structured reporting and an in-house coding system for focal masses and pulmonary nodules.

An IHE profile is necessary to properly address this problem, because the scope is much larger than can realistically be addressed by the UPenn pilot.

Impact on existing integration profiles

If mechanisms from other profiles are used, new text might be added there.

Might add an option to Reporting profiles.

Existing actors

  • Report Creator
  • other actors depending on what mechanisms are borrowed from existing profiles (e.g. XDS Registry)

New actors

  • Notification Manager?

Existing transactions

Existing transactions for subscriptions, notifications, cross-enterprise document sharing and perhaps communicating orders could be used/adapted.

Existing work on coded reports would also be applicable.

New transactions (standards used)

Ideally, the profile would dictate the structure of the reminder for a follow-up recommendation, how that reminder is communicated between systems, how often alerts surrounding this reminder are issued, how the reminder can be dismissed (i.e., when follow-up is considered “complete”), and how results are communicated back to the radiologist who made the original recommendation.
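As a concrete strawman for "the structure of the reminder" mentioned above, the record might carry fields like the following. All field names are assumptions for illustration only; the profile itself would define the real schema and coding:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FollowupReminder:
    """Hypothetical follow-up reminder record (all field names are assumptions)."""
    patient_id: str               # patient identifier in the issuing system
    source_report_id: str         # report containing the recommendation
    recommending_radiologist: str
    recommended_procedure: str    # e.g. a coded procedure for the follow-up CT
    due_iso_date: str             # when the follow-up is due
    closure_reason: Optional[str] = None  # set when the loop is closed

    def close(self, reason: str) -> None:
        """Close the loop manually (e.g. 'not clinically indicated')
        or automatically (e.g. 'result available in EMR')."""
        self.closure_reason = reason

    @property
    def open(self) -> bool:
        return self.closure_reason is None
```

Such a record would need to round-trip between the originating RIS/EMR and the patient's home EMR, which is why the profile must also dictate how it is communicated and dismissed.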

Breakdown of tasks

  • Discuss/confirm the use case list (currently 5) to decide what is in/out of scope and the details
  • Consider three arenas for deployment and decide which are in scope:
      ◦ Patient is in their "home" facility, so order and follow-up are all local
      ◦ Patient is in an affiliated facility (e.g. XDS Affinity Domain), so not technically local, but channels exist
          ▪ Raises questions about ordering privileges
      ◦ Patient is in an unaffiliated facility, so no coordinated IT interface
  • Review current solutions at UPenn, Mayo and perhaps elsewhere
      ◦ UPenn: a system standalone from the EMR that mines reports, internally creates, databases and triggers reminders, and monitors the EMR for closure
          ▪ Timing/business logic is internal/local policy
      ◦ Mike: auto-scrub reports for keywords, which get forwarded to a human who handles the follow-up list and tracking; radiologists can also manually send a case for follow-up tracking
  • Define the component functions:
      ◦ Capture the "trigger" recommendation from the report (either automatic parsing of the report or a manual step by the radiologist)
      ◦ Triggered agent initiates an order and/or a reminder (a nag-a-gram in the problem list for the patient)
          ▪ Needs to know which system to send the order to and what to put in the order
      ◦ Process and send reminders to the radiologist and the referring physician at certain intervals
          ▪ The referring physician will be engaged with the problem list, but the radiologist will need an entirely different mechanism (maybe a reminder queue they see once a day in the reading room?)
          ▪ Might be good for the patient to be reminded too
      ◦ Capture the "trigger" for clearance of the nag-a-gram or completion of the order
          ▪ If going on completion of follow-up, this might use the same mechanism as searching for priors
  • One or two transactions for each component
  • Target transaction map and technology choices at end of kickoff meeting (Nov.)
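The "capture the trigger from the report" step, in its automatic-parsing form, can be sketched as a naive keyword scrub of the kind described in the site review above. The patterns below are illustrative assumptions; a real deployment would prefer structured reporting or coded recommendations over free-text matching:

```python
import re

# Naive keyword scrub: flag report text that appears to contain a
# follow-up recommendation so a human (or a trigger agent) can act on it.
# Patterns are illustrative assumptions, not a validated rule set.
FOLLOWUP_PATTERNS = [
    r"\bfollow[- ]?up\b",
    r"\brecommend(?:ed|s)?\b.*\b(CT|MRI|ultrasound|imaging)\b",
]

def flag_for_followup(report_text: str) -> bool:
    """Return True if the report text matches any follow-up pattern."""
    return any(re.search(p, report_text, re.IGNORECASE)
               for p in FOLLOWUP_PATTERNS)
```

Flagged reports would then feed the manual review queue or the trigger agent; unflagged reports pass through untouched.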

See also Critical Results - Detailed Proposal

6. Support & Resources


  • Support from Dr. Nabile Safdar (Emory)
  • Check with Paul Nagy (JHU), Harry Solomon (OHSU), Peter Kuzmak?, Ewout Kramer? (concrete FHIR use case?), XDS-I People (Epic, Life Image, DICOMGrid, Merge, Cloverleaf, Cerner)
  • Folks from other sites that are in discussions to collaborate with Penn in this effort (TJUH, Emory, Geisinger, Hershey, etc.)
  • Some companies handle the urgent, critical in-house case well. A gap remains for cross-site, longer-term follow-up.
  • JACR Article (Larsen, Kahn, et al 2014) - check RadReports.org for http://www.jacr.org/article/S1546-1440(13)00840-5/pdf
  • There might be national metrics/MACRA programs interested in helping us succeed at this. There is a gap today in knowing the denominator.

7. Risks

  • Patient Identification
    • When doing this within a particular hospital or health system, there may occasionally be issues correctly identifying the patient (i.e., different hospitals may use different MRNs within the same system), but these are fairly easily corrected because the data is often shared.
  • Cross-facility information sharing
    • The biggest technical (and also political) risks boil down to sharing information between facilities, and this profile would require sharing data, correctly and securely, between facilities. While the patient’s medical record belongs to him or her, sharing patient information to enable care delivery elsewhere likely runs against the financial interests of a healthcare facility, so some political resistance may be encountered in the development of this profile. However, facilitating completion of follow-up, even at another facility, is the best thing for the patient and should be the overarching goal.
    • Communication and database management will be important technical details

8. Open Issues


  • Consider potential issues around self-referral, need to have the referring physician properly engaged
  • When does the nag-a-gram start working (e.g., for a 6-month follow-up, does it start at 5 months or at 6 months + 1 day)?
  • How far does it escalate?
  • Ask Radiologists where they consider their responsibility ended; is it the same as the referring?
  • How does all this work in the more distributed arenas, e.g. where does the nag list live and how do you add/remove from it remotely
  • Will orders be "accepted" from "outside"?
  • Do we send the "proposed order" for someone else to place? Do we send the recommendation?
  • Do we have to use a central repository where things get pushed to and participants have to pull from there (to keep IP Sec happy)?
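The open issue above about when the nag-a-gram starts can be made concrete with simple date arithmetic. The `lead_days` policy knob and the naive month addition below are assumptions for illustration; the profile would have to pick (or parameterize) the actual policy:

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Naive month addition (clamps the day to 28 to stay valid)."""
    m = d.month - 1 + months
    return date(d.year + m // 12, m % 12 + 1, min(d.day, 28))

def nag_start(recommended: date, followup_months: int, lead_days: int) -> date:
    """First active day of the reminder. lead_days > 0 starts the nag
    before the due date (e.g. 30 gives roughly the '5 months' behavior
    for a 6-month follow-up); lead_days = -1 starts it the day after
    the due date (the '6 months + 1 day' behavior)."""
    return add_months(recommended, followup_months) - timedelta(days=lead_days)
```

Either policy is a one-parameter choice once the due date is computed; the harder open question is who sets that parameter in the distributed arenas.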

9. Tech Cmte Evaluation


Effort Evaluation (as a % of Tech Cmte Bandwidth):

  • 35% (due to open technology question)
  • Small transaction count, but what tech do we use?

Candidate Editor: Steve Langer/Tessa Cook

Mentor: Kevin?