Clinical Research, Public Health and Quality use of EHR Data


Islands and Bridges

This document is a work in progress.

Introduction

The IHE Patient Care Coordination (PCC) domain was established in July 2005 to address integration issues that cross providers, patient problems, or time. It deals with general clinical care aspects such as document exchange, order processing, and coordination with other specialty domains. PCC also addresses workflows that are common to multiple specialty areas and the integration needs of specialty areas that do not have a separate domain within IHE.

In November 2006, constituents from the clinical research, public health, and quality domains came to the PCC Planning meeting to promote the specific needs of their domains. During this meeting and subsequently, they were asked to produce a white paper to help the PCC domain focus on their issues. What follows is the outcome of that interchange.

The clinical research, public health, and quality domains arrived at IHE focused on our own specific needs, and ended up finding many similarities in our interoperability needs. We present these commonalities in the following chapters in service of the clinical sites on which we depend and to focus the efforts of our implementing vendors.

Scope

In this paper, we are looking for several things:

  • First, for each chapter, we would like to clearly define each domain.
  • Secondly, we will identify stakeholders in each domain.
  • Thirdly, we present a view of each domain framed by a common set of tasks for information exchange. These tasks are the following (not in any particular order):
    • Express the criteria
    • Identify a patient meeting certain criteria (compositional) – patient level
    • Reporting data
    • Data Review/feedback
    • Analysis/Evaluate
    • Mapping (harmonizing semantics and concepts)
    • Validation (data integrity, correctness… QA)
    • Aggregation/reporting (communicate aggregated report – probably different by topic)
    • Communication (of raw data, of analyzed/aggregated data, of feedback/alerts, Sharing)
    • Clarification (data reported – need clarification on meaning of the data – or additional supporting data)
    • Feedback
  • Lastly, we will put forth a proposed plan of work in the form of a high-level road map for each domain, as well as goals we feel suit our common needs.

Common Recommendations

We have identified the following common recommendations for future work in IHE:

EMR Workflow Insertion

Definition: A set of workflow instructions from another system to be inserted into an EMR to inform and instruct the EMR system and/or users.

Sample Use Case Clinical Trials: Physician workflow, described in the research protocol, is inserted into the physician's existing EMR system. The workflow includes the scheduling of clinical trial visits, order sets, and alerts for study coordination, allowing the work to be accomplished in one system.

Sample Use Case Public Health: A pediatrician receives the current immunization record from the immunization registry, which includes past immunizations, immunization alerts, and immunization recommendations.

Sample Use Case Quality: Best practice workflow insertion.
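
To make the concept concrete, the sketch below shows one way a protocol's visit schedule might be expressed as insertable workflow instructions and expanded into EMR appointments. This is a minimal Python illustration; the payload field names (study_id, visits, order_set, alerts) are assumptions for illustration, not part of any IHE profile.

  from datetime import date, timedelta

  protocol_workflow = {
      "study_id": "STUDY-001",  # hypothetical identifier
      "visits": [
          {"name": "Screening", "day_offset": 0,
           "order_set": ["CBC", "ECG"], "alerts": ["Obtain informed consent"]},
          {"name": "Week 4 Follow-up", "day_offset": 28,
           "order_set": ["CBC"], "alerts": ["Dispense study drug"]},
      ],
  }

  def schedule_visits(workflow, enrollment_date):
      """Expand the protocol's visit schedule into concrete EMR appointments."""
      for visit in workflow["visits"]:
          yield {"visit": visit["name"],
                 "date": enrollment_date + timedelta(days=visit["day_offset"]),
                 "orders": visit["order_set"], "alerts": visit["alerts"]}

  for appointment in schedule_visits(protocol_workflow, date(2007, 3, 1)):
      print(appointment)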


Controlled Retrieval of Patient Data

Definition: A set of rules from another system to be inserted into a data source for the purpose of identifying data to push or pull for these secondary use cases. (How does this differ from RFD? QED? XDS? NAV? RID?) These IHE profiles will be used within the process, but do not themselves solve the problem.

Sample Use Case Clinical Trials: Patient population inclusion/exclusion for upcoming clinical trials (i.e., subject recruitment).

Sample Use Case Public Health: Automation of known reportable conditions, as well as emerging threats. Public health officials receive reports of a new set of disease symptoms with unknown cause. They create a rule to be notified immediately of reports with patients presenting with these symptoms to help epidemiologists discover and remedy the cause.

Sample Use Case Quality: Patient inclusion/exclusion for the numerator/denominator in quality measures.
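
As a rough illustration of such rule insertion, the Python sketch below applies an externally supplied selection rule, of the kind described in the public health use case above, inside a data source to decide which records to report. The rule representation and record field names are assumptions, not a proposed standard.

  rule = {
      "description": "Notify on fever plus rash of unknown cause",
      "required_findings": {"fever", "rash"},
      "excluded_diagnoses": {"measles"},  # causes already explained
  }

  patients = [
      {"id": "p1", "findings": {"fever", "rash"}, "diagnoses": set()},
      {"id": "p2", "findings": {"fever"}, "diagnoses": set()},
      {"id": "p3", "findings": {"fever", "rash"}, "diagnoses": {"measles"}},
  ]

  def matches(rule, patient):
      # All required findings present, and no excluding diagnosis recorded.
      return (rule["required_findings"] <= patient["findings"]
              and not (rule["excluded_diagnoses"] & patient["diagnoses"]))

  print([p["id"] for p in patients if matches(rule, p)])  # ['p1']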


Potential Future Recommendations with Further Discussion

Multi-modal

  • images
  • video
  • device data
  • audio
  • text - structured, unstructured

Sample Use Case Clinical Trials: Acquisition of data for clinical research - image data, patient care device (point of care, home devices) data.

Sample Use Case Public Health: Same as for clinical trials, as well as care provided through public institutions.

Sample Use Case Quality: ??


XDS Metadata

Definition:

  • Standardize usage of event code
  • Standardize usage of extensions

Sample Use Case Clinical Trials: ??

Sample Use Case Public Health: The identification of a report as containing a "reportable condition" for automatic retrieval.

Sample Use Case Quality: ??
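
The sketch below illustrates how a standardized eventCodeList entry on an XDS DocumentEntry could support the public health use case: documents tagged with a reportable-condition code can be selected automatically. The metadata is reduced to a Python dictionary; the SNOMED CT code shown (27836007, pertussis) and the query function are illustrative assumptions, not profile text.

  document_entry = {
      "patientId": "pid-123",
      "classCode": "Report",
      "typeCode": "Public Health Case Report",
      "eventCodeList": [
          {"code": "27836007", "codingScheme": "SNOMED CT",
           "displayName": "Pertussis"},
      ],
  }

  def find_reportable(entries, reportable_codes):
      # Select entries whose eventCodeList carries a reportable-condition code.
      for entry in entries:
          if any(ev["code"] in reportable_codes
                 for ev in entry.get("eventCodeList", [])):
              yield entry

  print(len(list(find_reportable([document_entry], {"27836007"}))))  # 1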


Patient Pseudonymization

  • subject ID
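
One common pseudonymization approach is to derive a stable subject ID from the patient identifier with a keyed hash, so the same patient always maps to the same pseudonym while the identifier itself is not exposed. The Python sketch below illustrates this under the assumption that a trusted party holds the key; it is not a specified IHE mechanism.

  import hashlib
  import hmac

  SECRET_KEY = b"held-by-a-trusted-third-party"  # hypothetical key custody

  def pseudonymize(patient_id: str) -> str:
      # Keyed hash: stable per patient, not reversible without the key.
      digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
      return "SUBJ-" + digest.hexdigest()[:12].upper()

  print(pseudonymize("mrn-0042"))  # same input always yields same pseudonym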


Semantic Validation

  • Quality of product
    • consistency
    • validity
    • semantic completeness


Aggregate Rules Definition

  • quality - measure extraction

Clinical Trials

Introduction

The clinical research community, predominantly the biopharmaceutical companies that sponsor clinical trials, has its own international standards development organization, CDISC, whose mission is to develop standards for research, including standards for submitting data to the FDA (SDTM), for performing statistical analysis (ADaM), and for collecting and managing data (ODM). The bulk of the definitions that follow have been copied from the CDISC Glossary. Sections taken from the CDISC Glossary have a title in bold followed by the text in normal typeface.

What is Clinical Research

Clinical research and development: The testing of a drug compound in humans primarily done to determine its safety and pharmacological effectiveness. Clinical development is done in phases, which progress from very tightly controlled dosing of a small number of subjects to less tightly controlled studies involving large numbers of patients. [SQA]

Clinical trial: Any investigation in human subjects intended to discover or verify the clinical, pharmacological and/or other pharmacodynamic effects of one or more investigational medicinal product(s), and/or to identify any adverse reactions to one or more investigational medicinal product(s), and/or to study absorption, distribution, metabolism and excretion of one or more investigational medicinal product(s) with the object of ascertaining its (their) safety and/or efficacy. [Directive 2001/20/EC; Modified from ICH E6 Glossary]

Stakeholders

  • The Public;
  • Biopharmaceutical sponsors;
  • Investigators and other site participants;
  • Subjects;
  • Regulatory agencies.

Tasks for Information Exchange

Express the Criteria

Patients are recruited into a clinical trial based on criteria specified in the protocol. Patients are often recruited by an investigator at a patient care site with which the patient has an existing healthcare relationship. Subject enrollment therefore often entails two distinct processes: site selection and patient recruitment by the site.

Protocol: A document that describes the objective(s), design, methodology, statistical considerations, and organization of a trial. The protocol usually also gives the background and rationale for the trial, but these could be provided in other protocol referenced documents. Throughout the ICH GCP Guideline the term protocol refers to protocol and protocol amendments. NOTE: Present usage can refer to any of three distinct entities: 1) the plan (i.e., content) of a protocol, 2) the protocol document and 3) a series of tests or treatments (as in oncology). [ICH E6 Glossary]

Inclusion criteria: The criteria in a protocol that prospective subjects must meet to be eligible for participation in a study. NOTE: Exclusion and inclusion criteria define the study population.

Exclusion criteria: List of characteristics in a protocol, any one of which may exclude a potential subject from participation in a study.

Screening (of sites): Determining the suitability of an investigative site and personnel to participate in a clinical trial.

Recruitment (investigators): Process used by sponsors to identify, select and arrange for investigators to serve in a clinical study.

Identify a Patient Meeting Criteria

Study population: Defined by protocol inclusion/exclusion criteria.

Recruitment (subjects): Process used by investigators to find and enroll appropriate subjects (those selected on the basis of the protocol’s inclusion and exclusion criteria) into a clinical study.

Recruitment period: Time period during which subjects are or are planned to be enrolled in a clinical trial.

Recruitment target: Number of subjects that must be recruited as candidates for enrollment into a study to meet the requirements of the protocol. In multicenter studies, each investigator has a recruitment target.

Baseline characteristics: Demographic, clinical, and other data collected for each participant at the beginning of the trial before the intervention is administered. NOTE: Randomized, controlled trials aim to compare groups of participants that differ only with respect to the intervention (treatment). Although proper random assignment prevents selection bias, it does not guarantee that the groups are equivalent at baseline. Any differences in baseline characteristics are, however, the result of chance rather than bias. The study groups should be compared at baseline for important demographic and clinical characteristics. Baseline data may be especially valuable when the outcome measure can also be measured at the start of the trial. [CONSORT Statement]

Screening (of subjects): A process of active consideration of potential subjects for enrollment in a trial.

Screen failure: Potential subject who did not meet one or more criteria required for participation in a trial.

Admission criteria: Basis for selecting target population for a clinical trial. Subjects must be screened to ensure that their characteristics match a list of admission criteria and that none of their characteristics match any single one of the exclusion criteria set up for the study.
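
The screening logic implied by these definitions can be stated compactly: a subject is eligible only if every inclusion criterion is met and no exclusion criterion is met; otherwise the subject is a screen failure. The Python sketch below illustrates this with assumed, hypothetical criteria.

  inclusion = {
      "age >= 18": lambda s: s["age"] >= 18,
      "diagnosis of CAD": lambda s: "CAD" in s["diagnoses"],
  }
  exclusion = {
      "pregnancy": lambda s: s.get("pregnant", False),
  }

  def screen(subject):
      # Screen failure if any inclusion criterion fails or any exclusion holds.
      failures = [name for name, met in inclusion.items() if not met(subject)]
      failures += [name for name, bars in exclusion.items() if bars(subject)]
      return ("eligible", []) if not failures else ("screen failure", failures)

  print(screen({"age": 54, "diagnoses": {"CAD"}}))  # ('eligible', [])
  print(screen({"age": 16, "diagnoses": set()}))    # screen failure, 2 reasons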

Enroll: To register or enter into a clinical trial; transitive and intransitive. NOTE: informed consent precedes enrollment, which precedes or is contemporaneous with randomization.

Enrollment:

  1. The act of enrolling one or more subjects.
  2. The class of enrolled subjects in a clinical trial.

Reporting Data

Data selection criteria: The rules by which particular data are selected and/or transferred between the point of care and the patient record; subsequently, from the patient record to the database; and from database to inclusion in sub-population analyses.

Data entry: Human input of data into a structured, computerized format using an interface such as a keyboard, pen-based tablet, or voice recognition. Contrast with data acquisition, electronic data capture.

Case report form (CRF):

  1. A printed, optical, or electronic document designed to record all of the protocol required information to be reported to the sponsor for each trial subject.
  2. A record of clinical study observations and other information that a study protocol designates must be completed for each subject. NOTE: In common usage, CRF can refer to either a CRF page, which denotes a group of one or more data items linked together for collection and display, or a casebook, which includes the entire group of CRF pages on which a set of clinical study observations and other information can be or have been collected, or the information actually collected by completion of such CRF pages for a subject in a clinical study [ICH E6 Glossary]. See also CRF (paper).

Item definition:

  1. In a questionnaire or form to be completed in a clinical trial, the specification of a question and the specification of the format and semantics of the response.
  2. Formal specification of the properties of an item or field of data in an eClinical trial. [2. ODM]

Data Review/Feedback

Query: A request for clarification on a data item collected for a clinical trial; specifically a request from a sponsor or sponsor’s representative to an investigator to resolve an error or inconsistency discovered during data review.

Query management: Ongoing process of data review, discrepancy generation, and resolution of errors and inconsistencies that arise in the entry and transcription of clinical trial data.

Query resolution: The closure of a query usually based on information contained in a data clarification.

Data clarification: Answer supplied by the investigator in response to a query. NOTE: The investigator may supply a new data point value to replace the initial value or a confirmation of the queried data point.

Clinical clarification: A query resolution received from the sponsor staff (medical monitors, Data Safety Monitoring Board <DSMB>, etc.).

Data validation:

  1. Checking data for correctness and/or compliance with applicable standards, rules, and conventions.
  2. Process used to determine if data are inaccurate, incomplete, or unreasonable. The process may include format checks, completeness checks, check key tests, reasonableness checks, and limit checks. [1. FDA 2. ISO]

Discrepancy: The failure of a data point to pass a validation check. NOTE: Discrepancies may be detected by computerized edit checks or observed/identified by the data reviewer as a result of manual data review. See also query.

Edit Check: An auditable process, usually automated, of assessing the content of a data field against its expected logical, format, range, or other properties that is intended to reduce error. NOTE: Time-of-entry edit checks are a type of edit check that is run (executed) at the time data are first captured or transcribed to an electronic device, as entry of each field or group of fields on a form is completed. Back-end edit checks are a type that is run against data that have been entered or captured electronically and received by a centralized data store.
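
The Python sketch below illustrates an automated edit check of the kind defined above: each field is assessed against expected range and format properties, and failures are recorded as discrepancies for query management. The specific checks and field names are illustrative assumptions.

  checks = {
      "systolic_bp": lambda v: 60 <= v <= 260,               # plausible range
      "visit_date": lambda v: len(v) == 8 and v.isdigit(),   # YYYYMMDD format
  }

  def run_edit_checks(record):
      # Return discrepancies: missing fields and fields failing their check.
      discrepancies = []
      for field_name, check in checks.items():
          if field_name not in record:
              discrepancies.append((field_name, "missing"))       # completeness
          elif not check(record[field_name]):
              discrepancies.append((field_name, "failed check"))  # validity
      return discrepancies

  print(run_edit_checks({"systolic_bp": 300, "visit_date": "20070314"}))
  # [('systolic_bp', 'failed check')]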

Analysis/Evaluate

Statistical analysis plan: A document that contains a more technical and detailed elaboration of the principal features of the analysis described in the protocol, and includes detailed procedures for executing the statistical analysis of the primary and secondary variables and other data. [ICH E9]

Mapping (harmonizing semantics and concepts)

Within the clinical research domain, semantics are tightly defined within the protocol. Mapping, semantics, and harmonization become issues when the protocol interacts with the healthcare information systems at the site. BRIDG is a domain model for clinical research that begins the process of harmonization with the HL7 RIM.

Validation (data integrity, correctness… QA)

The relevant definitions for validation are seen in the Data Review/Feedback section above.

Aggregation/Reporting

Considered to be part of the sponsor's Clinical Trial Management System.

Communication

Clinical study (trial) report: A written description of a study of any therapeutic, prophylactic, or diagnostic agent conducted in human subjects, in which the clinical and statistical description, presentations, and analysis are fully integrated into a single report. NOTE: For further information, see the ICH Guideline for Structure and Content of Clinical Study Reports. [ICH E6 Glossary]

Submission model: A set of data standards (including SDTM, ADaM, and define.xml) for representing data that are submitted to regulatory authorities to support product marketing applications. NOTE: CDISC submission data consist of: tabulations that represent the essential data collected about patients; analysis data structured to support analysis and interpretation; and metadata descriptions. (SDTM has been endorsed by FDA.)

Interim clinical trial/study report: A report of intermediate results and their evaluation based on planned analyses performed during the course of a trial. [ICH]

Clarification

The relevant definitions for clarification are seen in the Data Review/Feedback section above.

Feedback

The relevant definitions for feedback are seen in the Data Review/Feedback section above.

Similarities and Differences from Clinical Research perspective

Clinical Research resembles Public Health and Quality in significant and fairly obvious ways. All three domains have similar needs to select patients of interest and to capture data from the patient care domain to support the domain's activities. The differences between clinical research and the other domains are more subtle and easily overlooked. These differences are obscured by the use of the same word to mean different things. The words 'clinical' and 'protocol', for example, have significantly different uses in clinical research and in healthcare.

Clinical research is driven by a protocol, but a research protocol is quite different from a patient care protocol or a care plan. The research protocol includes a trial design which specifies exactly the required visits for a patient's participation in a trial and the exact data to be collected. A healthcare protocol, by contrast, must deal with a great deal more complexity, since the condition of a patient can change across time and the treatment must change accordingly. A research protocol is much more tightly constrained and immutable than any healthcare protocol could be.

Clinical research data (merely called 'clinical data' in the research community) are likewise more tightly defined and constrained than data for use in patient care. Since the goal of a randomized clinical trial is to draw a statistical inference about the treatment under study, variability in the data must be tightly controlled. In many cases, data that are perfectly suitable for making a patient care decision are inadequate for research purposes. In virtually ALL cases, clinical trials will require additional data that are not present in the record. So the data needs of research include some pre-existing data, and some requirements for the creation of new data. Clinical research needs may never be entirely met by extracting data from a patient care database.

Clinical research complies with a set of regulations that are separate and distinct from those in patient care. Regulatory authorities in Europe (EMEA and national authorities) and the US (FDA) impose a regulatory framework for the capture of clinical trial data. In the US, Title 21 of the Code of Federal Regulations (21 CFR) places requirements on the investigator to clearly identify source data unique to clinical research and to establish an auditable chain of custody.

Goals

The clinical research community has four goals for on-going profile development with IHE. Each of these goals entails integration of clinical research tasks with existing patient care functionality. Four IHE goals for the coming year are Content Profiles, Protocol Insertion, Image Acquisition, and Device Data Acquisition. These goals require interaction with IHE Patient Care Coordination, Radiology, and Patient Care Device domains.

Content Profile for use in Retrieve Form for Data Capture (RFD) The CDISC project Clinical Data Acquisition Standards Harmonization (CDASH) defines a set of standard collection instruments that can form the basis for an IHE content profile for use with RFD. Such trial design and data collection instruments can be transported using CDISC's Operational Data Model (ODM) in conjunction with RFD. The proper layering of RFD, CDASH, and ODM needs to be specified in an integration profile.
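
As a rough sketch of the kind of artifact such a profile would layer over RFD, the Python fragment below assembles a minimal ODM-style form definition. Element names follow CDISC ODM conventions (FormDef, ItemGroupDef, ItemDef), but the fragment is abbreviated and not a complete or validated ODM document; the OIDs and names are illustrative assumptions.

  import xml.etree.ElementTree as ET

  odm = ET.Element("ODM")
  study = ET.SubElement(odm, "Study", OID="ST.001")
  mdv = ET.SubElement(study, "MetaDataVersion", OID="MDV.1", Name="Draft 1")
  form = ET.SubElement(mdv, "FormDef", OID="F.VITALS", Name="Vital Signs",
                       Repeating="No")
  ET.SubElement(form, "ItemGroupRef", ItemGroupOID="IG.VS", Mandatory="Yes")
  ig = ET.SubElement(mdv, "ItemGroupDef", OID="IG.VS", Name="Vital Signs",
                     Repeating="No")
  ET.SubElement(ig, "ItemRef", ItemOID="IT.SYSBP", Mandatory="Yes")
  ET.SubElement(mdv, "ItemDef", OID="IT.SYSBP", Name="Systolic BP",
                DataType="integer")

  print(ET.tostring(odm, encoding="unicode"))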

Protocol Insertion A protocol includes a trial design section that can be thought of as workflow instructions for the conduct of the trial. If these instructions could be expressed as rules and inserted into an EHR as executable instructions, yet another piece of the clinical trial work could be integrated with the patient care workflow. The embedded image (also included as a PowerPoint link) shows how protocol insertion might work.

[Figure: Clinical trial case management process diagram, 14 Mar 2007 (also available as a PowerPoint attachment, Media:Media-ProcessDiagramClinicalTrial_CaseMgt_14Mar07_lb.ppt).]
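
The sketch below suggests how trial-design instructions might look once expressed as executable event/condition/action rules inside an EHR, as envisioned above. The rule vocabulary and thresholds are illustrative assumptions, not a proposed profile.

  rules = [
      {"on": "visit_completed",
       "if": lambda ctx: ctx["visit"] == "Screening" and ctx["eligible"],
       "then": "schedule: Randomization Visit"},
      {"on": "lab_result",
       "if": lambda ctx: ctx.get("alt", 0) > 120,  # e.g., ALT above a limit
       "then": "alert: Notify study coordinator"},
  ]

  def fire(event, ctx):
      # Return the actions of every rule whose event and condition match.
      return [r["then"] for r in rules if r["on"] == event and r["if"](ctx)]

  print(fire("visit_completed", {"visit": "Screening", "eligible": True}))
  # ['schedule: Randomization Visit']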

Image Acquisition Research protocols often require access to image data. The goal is to automatically access and retrieve appropriate images from healthcare stores such as PACS servers.

Device Data Acquisition Researchers often require data directly from patient care devices such as heart monitors, glucose monitors, and weight scales. The goal is to access and retrieve appropriate data from devices in various settings, from home health to intensive care.

Public Health

Introduction

What is Public Health

Public Health Functions

  • Prevents epidemics and the spread of disease
  • Protects against environmental hazards
  • Prevents injuries
  • Promotes and encourages healthy behaviors
  • Responds to disasters and assists communities in recovery
  • Assures the quality and accessibility of health services


Essential Public Health Services

  • Monitor health status to identify community health problems
  • Diagnose and investigate health problems and health hazards in the community
  • Inform, educate, and empower people about health issues
  • Mobilize community partnerships to identify and solve health problems
  • Develop policies and plans that support individual and community health efforts
  • Enforce laws and regulations that protect health and ensure safety
  • Link people to needed personal health services and assure the provision of health care when otherwise unavailable
  • Assure a competent public health and personal health care workforce
  • Evaluate effectiveness, accessibility, and quality of personal and population-based health services
  • Research for new insights and innovative solutions to health problems


Stakeholders

  • Healthcare providers (public)
  • Healthcare providers (private)
  • Local Health Department
  • State/Province Health Department
  • National Health Agencies (disease control, health services, education, etc.)
  • Other healthcare providers (e.g. hospitals, labs, RX, Nursing home, home health)
  • Social Services case workers
  • Disease Management Departments
  • Citizens
  • Emergency preparedness
  • Epidemiologists

Tasks for Information Exchange

Express the Criteria

(aka Define the Patient Record Set)

This is the data definition used to determine that a patient is of interest to a particular PH monitoring program. The program has a well-defined population to serve, though the population may be difficult to identify. These criteria are typically communicated through policy and paper-based instructions indicating case selection criteria, requirements for submission times relative to case identification, and details regarding required data elements. Opportunities to establish interoperable electronic communication of such information are of interest to public health.

  • Examples:
    • Newborn Screening – many jurisdictions require screening of newborns for disorders where early intervention may dramatically improve the health status outlook. The selection criteria might be any birth record where blood has been drawn for specified laboratory testing.
    • Newborn Follow-up – a newborn with a positive newborn screening test (lab result) may trigger a follow-up intervention report. The selection criteria might be age<4 weeks and a positive lab result for a specified laboratory test.
    • Immunization – many jurisdictions offer immunization tracking programs to monitor and improve the patient and community resistance to communicable or other disease. The criteria define which records are relevant to a registry. In some jurisdictions, registries are population-based in that they are initialized from electronic birth records. In other jurisdictions, a record becomes relevant when an immunization should be administered according to a medically-accepted immunization schedule. In still others, a record becomes relevant only when the first immunization has actually been administered. While initially registries were largely tracking childhood immunizations only, increasingly they have come to contain life-long records (including in some cases bio-terrorism preparedness).
    • Cancer Registry – many jurisdictions maintain disease registries in order to better monitor, manage, and research disease management practices and community health. The selection criteria may be any record where the patient has a new or historical diagnosis of cancer or a positive laboratory test for cancer.

Identify a Patient Meeting Criteria

(aka Choose the Relevant Patients)

This is the identification of patients that meet the selection criteria identifying patients of interest to a particular PH monitoring program. This process may include review of information from the current visit as well as information from historical visits with the same or other providers. In some cases, patients may be selected based on criteria unrelated to a specific visit, encounter, or act (e.g., age). Such information may be retrieved as part of assessing whether or not the patient meets the selection criteria. To support such historical information retrieval, identity resolution may be required to retrieve information across multiple providers. Patient query is also needed to retrieve supplemental data with constraints based on rules (decision tree and supplemental data retrieval).

How this happens varies greatly within PH. Patient identification comes from referrals that include private and public clinics as well as other agencies. Incoming lists may require human review and further clarification.

  • Examples:
    • Newborn Screening – in a jurisdiction that monitors screening of newborns for birth disorders, all newborn patients that meet the newborn screening selection criteria described above would be identified.
    • Newborn Follow-up – in a jurisdiction that offers intervention procedures for newborns with positive results of newborn screening tests, all patients with such a positive result as described by the selection criteria above would be identified.
    • Immunization – in a jurisdiction that offers immunization tracking, all patients meeting the immunization selection criteria described above would be identified. In population-based registries, for example, simply being born or living within a jurisdiction may be enough to meet the criteria.
    • Cancer Registry - in a jurisdiction that monitors, manages, or studies the cancer incidence and treatment, all patients meeting the cancer registry selection criteria described above would be identified.
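
As a concrete illustration, the Python sketch below applies the newborn follow-up selection criteria described above (age < 4 weeks and a positive result for a specified screening test). The record fields and the test name are illustrative assumptions.

  from datetime import date

  SCREENING_TEST = "PKU"  # hypothetical specified laboratory test

  def needs_followup(patient, today=date(2007, 3, 14)):
      # Criteria from the example above: age < 4 weeks, positive screen.
      age_days = (today - patient["birth_date"]).days
      positive = any(r["test"] == SCREENING_TEST and r["result"] == "positive"
                     for r in patient["lab_results"])
      return age_days < 28 and positive

  newborn = {"birth_date": date(2007, 3, 1),
             "lab_results": [{"test": "PKU", "result": "positive"}]}
  print(needs_followup(newborn))  # True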

Reporting Data

(aka Data Submission)

This is the communication of patient data to the PH monitoring program where the patient has been identified as meeting the selection criteria expressed above. This step may include manual entry of data through a user interface into a public health system, collection of data on paper forms by the PH agency, or the retrieval of information electronically from another system by the PH monitoring program. Information may be part of the current encounter, or it may be historical in nature. While much of this information today is retrieved through manual means, opportunities to collect such information through interoperable electronic exchanges supplemented by data entry options are becoming more prevalent.

  • Examples:
    • Newborn Screening – Where a patient meets the selection criteria for communication of newborn screening information, data collected may include public health laboratory values, confirmation of laboratory diagnosis, ongoing treatment reports, and patient demographics.
    • Newborn Follow-up – Where a patient has been identified with a positive screen and the patient is eligible for newborn follow-up, data collected may include patient demographics, contact information for the patient's primary care provider, laboratory values, and abnormality measures.
    • Immunization – (Event Notification - Update Message/Publish) Most registries collect a core data set defined by CDC, and some collect additional data as well. Data is usually either entered online into a registry application (usually web-based) or collected via electronic file submissions in various proprietary and standard formats.
    • Cancer Registry – Where a patient meets the selection criteria for communication of cancer registry information, data collected may include pathology, radiology, diagnosis, procedures, laboratory measures, medications, and demographics. Additional patient health status data may be collected through data entry of patient survey information.
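
For the immunization example, electronic submissions are commonly carried as HL7 v2 messages. The Python sketch below assembles a heavily abbreviated VXU^V04-style message for illustration only; a real submission must conform to the applicable implementation guide, and the segment contents shown are assumptions.

  segments = [
      "MSH|^~\\&|EHR|CLINIC|REGISTRY|STATE|20070314||VXU^V04|MSG0001|P|2.3.1",
      "PID|1||PAT123^^^CLINIC||DOE^JANE||20061201|F",
      "RXA|0|1|20070314|20070314|20^DTaP^CVX|0.5|mL",  # CVX 20 = DTaP
  ]
  message = "\r".join(segments)       # HL7 v2 segments are separated by <CR>
  print(message.replace("\r", "\n"))  # display one segment per line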

Data Review/Feedback

(aka Data Quality Assurance)

This is the process of reviewing submitted data for compliance, completeness, and accuracy. Compliance means evaluating that expected submissions are received in a timely manner and match patients known to be included in the program registry, or that the submissions are appropriate to the purpose or condition for which the data are being collected (e.g., mandatory reportable disease conditions). A report received for a patient not in the registry would need to be evaluated for inclusion in the program. Completeness means identifying missing or weak content that needs follow-up. Accuracy means correctness of the data, if determinable. This review process may be manual, electronic, or a combination of the two. Where this review is conducted by an external resource, interoperable electronic feedback would be of interest.

  • Examples:
    • Immunization - When data is entered online through a user application, edit checks are usually performed before data is accepted to ensure appropriate content. When data is accepted via electronic transfer similar review is typically done by the software used for processing of incoming records into the registry.

Analysis/Evaluate

(aka Data Analysis and Use)

For most public health functions, the evaluation and analysis is typically conducted by the PH monitoring program. Results of such analysis may be further communicated to other public health authorities, programs, providers or subjects of care in steps described in sections that follow. Third party evaluation or analysis services are uncommon, but such analysis may be conducted in provider evaluation programs.

  • Examples:
    • Immunization - For immunization registries, this usually means practice-based or population-based assessment of the up-to-date status of children and adolescents according to a medically-accepted immunization schedule. For adults, this might mean practice-based or population-based compliance with recommended periodic vaccination recommendations. For bio-terrorism preparedness programs, this might mean assurance that targeted populations are properly vaccinated.

Mapping (harmonizing semantics and concepts)

(aka Consistent Coding and Meaning)

Public health semantics and concepts have significant variation across multiple jurisdictions because local practices differ. In moving toward interoperable systems, it is anticipated that mapping of local concepts and data values to semantically interoperable concepts will be important. Content profiles developed for public health purposes may require significant harmonization efforts to enable interoperable systems.

  • Examples:
    • Immunization - an HL7 implementation guide for registry-to-registry data exchange has existed for a number of years, and it forms the basis of all standards-based interoperability between registries and other systems. Embedded in the guide are coding standards for many important data elements. The HL7 Public Health and Emergency Response (PHER) work group is currently articulating shared concepts for system interoperability.

Validation (data integrity, correctness… QA)

[This seems the same as Data Review/Feedback; the Clinical Trials folks above did not know what to do with it either, and the Quality folks below did not differentiate it clearly. (Noam)]

Aggregation/Reporting

[Seems the same as Analysis/Evaluate above. I don't know how to differentiate them, though perhaps Analysis/Evaluate is the computation of results and Aggregation/Reporting is the formatting and transmission of those results. Also seems the same as Feedback, which could be moved here. (Noam)]

For public health, the aggregation and reporting is typically conducted by the public health monitoring program. Results may be requested by or communicated to other public health authorities, programs, social service programs, legislative processes, providers or subjects of care in steps described in sections that follow. Third party evaluation or analysis is often performed by academic partners and researchers. The resource used for aggregation and reporting may require privacy enhancement.

Communication

(aka Ancillary System Interoperability)

[Clinical Trials seems to interpret this as Formal Reports; Quality considers this the "results of analysis that are communicated" but does not adequately differentiate it from Aggregation/Reporting or Feedback. I think we need to differentiate this kind of communication. See my comments here. (Noam)]

Aside from routine data submission and the formal Reports and Feedback covered in other sections, several additional types of communication are conducted in support of public health programs. Information collected and maintained in support of public health services and monitoring programs is shared, by either push or pull mechanisms, with the providers involved in the patient's care as such new information becomes available to the Destination Agency. For example, communication of consent, or of a change in consent to view data or to participate in a registry or aggregation of data, is an important area of supplemental communication.

  • Examples
    • Immunization - Some jurisdictions require patient consent for participation in a registry ("opt-in"), some mandate participation or allow selective "opt-out". When permitted, these choices can change over a patient's lifetime and need to be communicated to the registry. Additionally, some registries allow their immunization schedule decision support engines to provide decision support services to other systems through these ancillary communications.

Clarification

[The Clinical Trials folks did not know what this was, and the Quality folks understood it to be the ability of a data submitter to update data that was submitted, for clarity. I don't know what it means. (Noam)]

For public health purposes, event detection processes may require clarification from the information source. This process would serve event detection verification as well as event case investigation purposes.

Feedback

(aka Reports)

This information is covered in the Data Review/Feedback section above. [No, I don't think this is right. Data Review/Feedback is feedback related to data quality. Note that the clinical trials folks include formal reports from the clinical trials process in Communication and do not know what to do with Feedback; the Quality folks are more general in their Communication section and put formal reports here in Feedback. This could go in Aggregation/Reporting, in which case this category would fall away. This is what I suggest for this category below: (Noam)]

Various output and reports can be generated from the PH systems that have received data from outside sources. These reports may be used internally or circulated to various stakeholders including participating data providers, other levels of public health aggregated above the agency in question (e.g., state to Federal), or even the public. Real-time alerts may be patient-specific or more general based on analysis done within PH systems. Of particular note for public health is a need for communications with certain inventory resources (e.g. vaccine availability) or emergency responder/preparedness officials (e.g. for catastrophic outbreak detection). In many cases, this Feedback represents the formal results of the Analysis/Evaluate activity above.

  • Examples:
    • Immunization - Various standard reports are typical, including a Patient Report showing the immunizations received and due for a specific patient, practice or population coverage reports, practice-based vaccine usage and inventory reports, and reminder/recall reports.
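
The reminder/recall reports listed above rest on a simple comparison of recorded immunizations against a schedule. The Python sketch below illustrates that logic with a deliberately simplified, assumed schedule; real registries apply a full medically-accepted immunization schedule with age and interval rules.

  schedule = {"DTaP": 5, "MMR": 2}  # doses expected by a given age (assumed)

  def doses_due(immunization_history):
      # Count recorded doses and report any shortfall against the schedule.
      counts = {}
      for vaccine in immunization_history:
          counts[vaccine] = counts.get(vaccine, 0) + 1
      return {v: n - counts.get(v, 0)
              for v, n in schedule.items() if counts.get(v, 0) < n}

  print(doses_due(["DTaP", "DTaP", "DTaP", "MMR"]))  # {'DTaP': 2, 'MMR': 1}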

Similarities and Differences from Public Health perspective

  • Similarities:
    • Identify cohorts/populations fitting similar criteria
    • Select Cohort
    • Overlaps (e.g., PH/Quality: HCUP/HEDIS, pharmacovigilance – quality)
  • Differences:
    • E.g., PH environmental subjects of care (water, buildings, …)
    • Privacy Requirements (constraints)
    • Non-healthcare provider communication partners
    • Re-identification requirements for privacy-enhanced resources

Goals

Collaborative Goals

This section will describe the organizational goals of this joint collaboration (what “people”-related things we want to accomplish).

  1. Engaging the public health community in the development of the technical specifications for interoperable clinical and public health systems: IHE Profile Proposals for 2007-2009
    1. Identify public health domain/programs for the development of profiles – more on this in the next section
    2. Establish a program participation framework with processes for stakeholder input and collaboration with IHE engineering volunteers.
    3. Recruit vendors that service public health systems to contribute to the profile development.
    4. Identify and document value proposition
    5. Identify program sponsors
    6. Consider engaging the community as a focused domain.
  2. Educating the community (vendors, public health, etc.) on the new profiles or profile extensions for public health.
  3. Encourage adoption of our joint technical output in the community (both public and private)
    1. Identify communication forums aligning with the various public health programs (e.g. PHIN, …)
    2. Engage in connectathon new directions at HIMSS
    3. Publish, present and disseminate concepts, value proposition, and successes
    4. Establish IHE showcase demonstrations at domain-focused conferences


Technical Goals

This section will describe the goals related to new/existing technology that needs to be created/employed

  1. Proposed timeline (or prioritization list)
    1. Identify common (public health domain independent) core technologies that need to be established as profiles (or profile extensions, in the case that a profile exists that can satisfy a particular technical goal).
    2. Identify public health domain/programs for the development of profiles
    3. Semantic interoperability for information resources
    4. Interoperability with HIEs leveraging XDS and other IT Infrastructure profiles
    5. Enhancement of IT Infrastructure profiles to enable public health interoperability with clinical health information resources
    6. Electronic data capture through provider native electronic health record systems
    7. Cross-jurisdiction information sharing
    8. Bi-directional communications
    9. Decision Support
  2. Developing functional requirements and specifications (profiles) for interoperable clinical-public health systems
    1. Data content harmonization across public health systems
      • Document and message based
    2. Data sharing workflows
      • Document and message based
    3. Notification/Surveillance
    4. Privacy and security
    5. Public health domain-specific considerations
      • e.g., Immunization, Reportable Conditions, etc.
    6. Communications
    7. etc?
  3. Survey of existing international Standards and IHE profiles – This section will contain a list of standards, existing IHE profiles, etc. that will be under consideration for use in achieving the aforementioned technical goals.
  4. Identify public health programs that may leverage existing IHE profiles. For these programs, develop program profiles specifying the use of these profiles to conduct program activities.
  5. Identify public health programs that may leverage existing IHE profiles with minimal additional profile development. For these programs, develop program profiles specifying the use of these profiles to conduct program activities.
  6. Identify public health programs that may require multiple IHE profile development. For these programs, develop program profile roadmaps aligning where possible with other initiatives.

Quality

Introduction

What is Quality

Healthcare quality is a broad term defined differently by many. Common definitions of quality include:

  • a specific characteristic of an object,
  • the essence of an object,
  • the achievement or excellence of an object,
  • the meaning of excellence itself.

Many efforts are in progress for the standardization of quality measurement. An example of an international coordinating effort is the Organization for Economic Cooperation and Development (OECD) Health Care Quality Indicators (HCQI) Project, bringing together 23 OECD countries; international organizations such as the World Health Organization (WHO) and the European Commission (EC); expert organizations such as the International Society for Quality in Health Care (ISQua) and the European Society for Quality in Healthcare (ESQH); and several universities. (Information available at: International Journal for Quality in Health Care 2006 18(Supplement 1):1-4; doi:10.1093/intqhc/mzl019). The Institute of Medicine in the United States provided a direct, discrete, and usable definition by itemizing six attributes of healthcare by which to measure and identify healthcare quality – safe, effective, efficient, patient-centered, timely, and equitable – and further defined five key areas in which information technology could contribute to an improved health care delivery system:

  1. Access to the medical knowledge-base
  2. Computer-aided decision support systems
  3. Collection and sharing of clinical information
  4. Reduction in errors
  5. Enhanced patient and clinician communication

To most effectively achieve the goals of quality, measurement is a key component. Measurement of quality is intended to determine the effectiveness and efficiency of care and to facilitate improvement in care processes and ultimately patient outcomes. Therefore, for the purpose of this white paper, the Quality Domain includes:

  • Aggregate measures of performance,
  • Individual case reporting of adverse events
  • Concurrent delivery of care based on evidence-generated guidelines and protocols of care

In this regard, there are three dimensions of quality measurement as described by Donabedian (Donabedian A. Evaluating the quality of medical care. 1966. Milbank Q. 2005;83(4):691-729):

  • Structural (presence of specific factors in the environment)
  • Process (compliance with specific procedures)
  • Outcome (achievement of specific status by the patient)

These three dimensions can also be described by the previously stated six IOM aims (safe, effective, efficient, patient-centered, timely, and equitable) and with reference to overuse, underuse, and misuse of services.

Healthcare quality measures are quantitative indicators that are utilized to evaluate the quality of specific healthcare activities. Quality measures are developed by certain healthcare stakeholders for a variety of uses such as support for operations, resource utilization, and performance improvement. The measures, which are often called “indicators”, serve to inform the stakeholders or guide certain actions that will help improve performance or quality of healthcare delivery process or service. In some cases, the measures serve to enforce accountability for certain healthcare activities that are known to impact quality. Given the magnitude of the reliance on quality measures for health policy and provider accountability, these measures must be “meaningful, scientifically sound, and interpretable” to all the stakeholders.

An example of a quality measure evaluates recommended angiotensin converting enzyme inhibitor (ACEI) or angiotensin receptor blocker (ARB) treatments for patients with coronary artery disease (CAD) and acute myocardial infarction (AMI). These recommendations are based on clinical practice guidelines (CPGs) developed by the American Heart Association (AHA) and the American College of Cardiology (ACC). (CPGs are systematically developed statements that provide guidance to healthcare professionals and patients, focused on "what should be" in regard to the management of patients.) Clinical performance measures (CPMs), also called quality measures, are metrics derived from CPGs, or otherwise evidence based, to assess "what is".

There is abundant evidence of gaps in the care of patients with CAD and AMI. Assessing the performance of physicians, physician groups, hospitals, and integrated delivery systems should be undertaken for multiple purposes, including supporting internal quality improvement, public reporting to ensure accountability and to facilitate patient choice of provider, and support of pay-for-performance (P4P) programs.

For all these purposes, the data elements described in the measure numerator and denominator need to be recorded in the medical record and collected at the patient level, then used to construct the measure according to the detailed specification described by the measure developer. Once the measure is so constructed, aggregation is undertaken to create a performance rate. Rates of performance are then fed back to the healthcare professionals (physicians and others) for use in quality improvement projects, and reported externally to the public and other stakeholders as described above.

The use of performance measures needs to be integrated with similarly constructed point-of-care rules and alerts (see figure below).

[Figure: Decision Support Convergence]

Stakeholders

  1. Consumers – users of healthcare
  2. Providers (organizations) – Institutions / organizations that provide healthcare, including, but not limited to: hospitals, ambulatory practices, pharmacies, labs, radiology clinics
  3. Clinical Practitioners (individuals) – Individual clinicians involved in direct interaction with the consumers / patients, including, but not limited to: physicians practicing in any setting, nurses, mid-level practitioners, etc.
  4. Employers – Purchasers of healthcare for employees and, thus, financially sharing the burden of healthcare cost with employees in various proportions
  5. Policymakers – Individuals and groups who determine procedures for reimbursement, accountability and structural components of the healthcare enterprise
  6. Accreditors – Organizations that review the credentials as well as the financial, structural, clinical and service related performance of a provider or practitioner for approval and/or accreditation
  7. Research Community – Organizations and individuals that use patient data for scholarly, scientific investigation or inquiry to answer hypotheses about clinical care and response, leading to results that provide evidence to determine care delivery decisions. This community includes but is not limited to the Clinical Trials community.
  8. Vendor of healthcare information systems – Organizations that create and market software to be used by clinical practitioners as part of the healthcare delivery process
  9. Performance Measure Development Organizations – Organizations that develop clinical quality measures (structural, process and outcome) based on literature review of established peer-reviewed studies and committees comprised of clinical domain experts (examples: NCQA, JCAHO, American College of Cardiology <ACC>, etc. – note one organization may serve the role as measure developer and a measure adopter)
  10. Performance Measure Endorsers/ Approvers – Organizations that evaluate and endorse clinical quality measures (structural, process and outcome) using panels of clinical and epidemiology domain experts (examples: NQF, AQA, HQA)
  11. Performance Measure Adopter – Organizations that select endorsed clinical quality measures (structural, process and outcome) by which to hold accountable individual providers of care or provider organizations; accountability may be tied to accreditation, reimbursement, incentive payments or some combination of these (examples: CMS, JCAHO, NCQA, etc. – note one organization may serve the role as measure developer and a measure adopter) E.g., Payors, etc.
  12. Clearinghouse/ Outsourced Measure Calculator/ Benchmarking Service – Third Party Contractor Organizations that collect raw data, transform it as required, analyze it and coordinate performance reports based on requirements of the Performance Measure Adopter (examples: regional Quality Improvement Organizations <QIOs>, JCAHO approved data warehouse vendors, etc.)
  13. Performance Measure Implementers/ Receivers – Benchmarking Organizations that receive results of quality performance (structural, process and outcome measures) and create reports for comparison among various providers in a region or nationally; implementers can be the same organization as the Performance Measure Adopter, or approved 3rd party organizations


Tasks for Information Exchange

Express the Criteria

The Destination / Monitoring Agency defines the measure and the method of expression.

  • Candidate Query – Identify required data elements specific to the measure definition; categories across multiple systems:
    1. Demographics (sources: Patient identification, ADT, Financial systems <enrollment>)
    2. Results (Laboratory, Imaging)
    3. Substance Administration
    4. Procedures
    5. Location (current and destination)
    6. Events
    7. Clinical Observations / Findings
    8. Problems (Conditions, including but not limited to Allergies)
    9. Diagnoses
    10. History (patient or provider generated)


Identify a Patient Meeting Criteria

  • Resolve patient identity (as with PIX / PDQ)
  • Retrieve additional required data elements for patient meeting criteria (compositional – patient level)
    • Identify additional required data elements specific to the measure definition; categories across multiple systems:
      1. Demographics (sources: Patient identification, ADT, Financial systems <enrollment>)
      2. Results (Laboratory, Imaging)
      3. Substance Administration
      4. Procedures
      5. Location (current and destination)
      6. Events
      7. Clinical Observations / Findings
      8. Problems (Conditions, including but not limited to Allergies)
      9. Diagnoses
      10. History (patient or provider generated)
    • Identify cohort based on criteria

Reporting Data

For specific CAD measure:

Principal diagnosis, result data (Left Ventricular Ejection Fraction <LVEF>), age, discharge status and destination, problem and allergy data (contraindications for medication avoidance), medications (Angiotensin Converting Enzyme Inhibitors <ACEI> or Angiotensin Receptor Blockers <ARB>), and contextual data such as the patient ID and originating source of information (e.g., hospital location, address and identifier)
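
A patient-level record carrying these data elements might look like the Python sketch below. The field names and codes are illustrative assumptions, not the measure developer's specification.

  from dataclasses import dataclass, field

  @dataclass
  class CadMeasureRecord:
      patient_id: str
      facility_id: str                  # originating source of information
      principal_diagnosis: str          # e.g., a code for CAD or AMI
      lvef_percent: float               # Left Ventricular Ejection Fraction
      age: int
      discharge_status: str
      contraindications: list = field(default_factory=list)  # e.g., allergy
      medications: list = field(default_factory=list)        # ACEI/ARB given

  record = CadMeasureRecord("PAT123", "HOSP01", "CAD", 35.0, 67,
                            "discharged home", [], ["lisinopril"])
  print(record.medications)  # ['lisinopril']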

Data Review/Feedback

Review for completeness of data submission.

Analysis/Evaluate

In Quality, this typically occurs after mapping and validation:

Analyze individual patient-level data to determine whether the patient matches the denominator inclusion criteria, and for each such patient whether:

  • numerator processes or outcomes were met
  • numerator exclusions are met

From the data, determine whether the individual patient is adherent with, excluded from, or non-adherent with respect to the measure.
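
The classification just described can be sketched as follows: test denominator inclusion, then numerator exclusions, then adherence. The LVEF threshold and drug list below are illustrative assumptions for the ACEI/ARB example, not the published measure specification.

  ACEI_ARB = {"lisinopril", "losartan"}  # abbreviated, assumed drug list

  def classify(r):
      # Denominator inclusion, then numerator exclusion, then adherence.
      if not (r["principal_diagnosis"] in {"CAD", "AMI"}
              and r["lvef_percent"] < 40):
          return "not in denominator"
      if r["contraindications"]:         # numerator exclusion
          return "excluded"
      on_therapy = any(m in ACEI_ARB for m in r["medications"])
      return "adherent" if on_therapy else "non-adherent"

  patient = {"principal_diagnosis": "CAD", "lvef_percent": 35.0,
             "contraindications": [], "medications": ["lisinopril"]}
  print(classify(patient))  # 'adherent'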

Mapping (harmonizing semantics and concepts)

Map local terms to terms identified within the measure.
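
A minimal sketch of this mapping step follows, with assumed local codes and target codes (the LOINC code shown is for illustration only); unmapped codes are set aside for human review.

  local_to_measure = {
      "LAB_HBA1C_LOCAL": "4548-4",      # e.g., a LOINC code for HbA1c
      "MED_LISINOPRIL_LOCAL": "ACEI",   # measure-defined drug class
  }

  def map_codes(record_codes):
      # Split codes into mapped (translated) and unmapped (needing review).
      mapped, unmapped = [], []
      for code in record_codes:
          if code in local_to_measure:
              mapped.append(local_to_measure[code])
          else:
              unmapped.append(code)
      return mapped, unmapped

  print(map_codes(["LAB_HBA1C_LOCAL", "X_UNKNOWN"]))
  # (['4548-4'], ['X_UNKNOWN'])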

Validation (data integrity, correctness… QA)

Validate integrity of data, correctness based on expected values for field.

Aggregation/Reporting

Aggregate data for performance across all applicable patients by practitioner / provider site.
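
The aggregation step can be sketched as computing, per practitioner or site, the adherent count over the denominator after removing numerator exclusions, as described in the similarities section below. The input shape here is an assumption.

  results = [  # (practitioner, patient-level classification) — assumed shape
      ("dr_a", "adherent"), ("dr_a", "non-adherent"), ("dr_a", "excluded"),
      ("dr_b", "adherent"), ("dr_b", "adherent"),
  ]

  def performance_rates(results):
      tallies = {}
      for practitioner, outcome in results:
          t = tallies.setdefault(practitioner, {"adherent": 0, "denom": 0})
          if outcome == "excluded":
              continue                  # exclusions leave the denominator
          t["denom"] += 1
          if outcome == "adherent":
              t["adherent"] += 1
      return {p: t["adherent"] / t["denom"]
              for p, t in tallies.items() if t["denom"]}

  print(performance_rates(results))  # {'dr_a': 0.5, 'dr_b': 1.0}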

Communication

The results of the analysis are communicated to the care provider / practitioner.

  • Privacy / Consent
  • Raw Data: Communication includes raw data for clarification step such that individual clinical practitioners can provide additional data that exist in poorly accessible data sources
  • Analyzed/aggregated data: Communication includes overall performance (percentage) of the applicable cohort from the clinical site / practitioner
  • Feedback / alerts: Recommendations for alteration in care process based on findings from the analysis (individual or aggregate)
  • Sharing: Patient-level data are made available (push, pull) at the site of any provider involved in the patient’s care as such new information is available to the Destination Agency
  • Care coordination (population-based): The individual responsible for monitoring performance provides general recommendations for alteration in care process based on the findings from aggregate analysis

Clarification

The practitioner / practice has an opportunity to update data with information that exists in poorly accessible data sources.

Feedback

Practitioners and organizational providers are provided with reports indicating individual and organizational performance with respect to each measure. Performance is most often provided with comparison to peers or similar organizations, including the top level performers as with benchmarking. Feedback can be provided at a local level comparing with other local practitioners and providers of care, at a regional level, a national level, or at an international level, comparing country performance.

Similarities and Differences from Quality perspective

Similarities:

  • Identify cohorts/populations fitting similar criteria
  • Evaluate inclusion criteria: The criteria in a measure (denominator) that prospective subjects must meet to be eligible for inclusion in a study cohort. Note: Exclusion and inclusion criteria define the study population.
  • Evaluate exclusion criteria: List of undesired characteristics for a measure, any one of which may exclude a potential subject from the study cohort. Note that exclusions may apply to the denominator of a measure or study population selection (e.g., all patients with diabetes EXCEPT those with gestational diabetes). Exclusions may also apply to the numerator of a measure or study population selection (e.g., those patients who have contraindications <such as allergy> to the expected medication or drug class). Individuals who represent numerator exclusions are removed from the denominator of the analysis.
  • Select Cohort
  • Maintain privacy, confidentiality constraints on sharing of patient-level data - Optional depending on measuring agency
  • Maintain re-identification requirements for privacy-enhanced resources - Optional depending on measuring agency


Differences:

  • Determine the level of attribution for quality performance assessment and performance improvement initiatives

Goals