Sunday, December 29, 2013

Improving the quality of mental health and substance use treatment: how can Informatics help?


According to the 2012 National Survey on Drug Use and Health, an estimated 43.7 million adults aged 18 or older in the United States had mental illness in the past year. This represents 18.6 percent of all adults in this country. Among those 43.7 million adults, 19.2 percent (8.4 million adults) met criteria for a substance use disorder (i.e., illicit drug or alcohol dependence or abuse). In 2012, an estimated 9.0 million adults (3.9 percent) aged 18 or older had serious thoughts of suicide in the past year.

Mental health and substance use are often associated with other issues such as:

  • Co-morbidity involving other chronic diseases like HIV, hepatitis, diabetes, and cardiovascular disease.

  • Overdose and emergency care utilization.

  • Social issues like incarceration, violence, homelessness, and unemployment.
It is now well established that addiction is a chronic disease of the brain and should be treated as such from a health and social policy standpoint.


The regulatory framework

  • The Affordable Care Act (ACA) requires non-grandfathered health plans in the individual and small group markets to provide essential health benefits (EHBs) including mental health and substance use disorder benefits.  

  • Starting in 2014, insurers can no longer deny coverage because of a pre-existing mental health condition.

  • The ACA requires health plans to cover recommended evidence-based prevention and screening services including depression screening for adults and adolescents and behavioral assessments for children.

  • On November 8, 2013, HHS and the Departments of Labor and Treasury released the final rules implementing the Paul Wellstone and Pete Domenici Mental Health Parity and Addiction Equity Act of 2008 (MHPAEA). 

  • Not all behavioral health specialists are eligible for the Meaningful Use EHR Incentive program created by the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009.

 

Implementing Clinical Practice Guidelines (CPGs) with Clinical Decision Support (CDS) systems

 

Clinical Decision Support (CDS) can help address key challenges in mental health and substance use treatment such as:

  • Shortages and high turnover in the addiction treatment workforce.

  • Inadequate clinician education in mental health and addiction medicine.

  • Limited implementation of available evidence-based clinical practice guidelines (CPGs) in mental health and addiction medicine.
For example, there are a number of scientifically validated CPGs for the Medication Assisted Treatment (MAT) of opioid addiction using methadone or buprenorphine. These evidence-based CPGs can be translated into executable CDS rules using business rule engines. These executable clinical rules should also be seamlessly integrated with clinical workflows.
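As a minimal illustration of what such an executable rule might look like, here is a plain-Java sketch. This is not actual clinical guidance: the class, fields, and conditions are all hypothetical, and a real deployment would express the rule declaratively in a rule engine rather than in procedural code.

```java
// Hypothetical sketch of one guideline recommendation expressed as an
// executable rule evaluated against patient data. Illustrative only.
import java.util.ArrayList;
import java.util.List;

class Patient {
    boolean opioidUseDisorder;       // diagnosis on the problem list
    boolean severeHepaticImpairment; // example contraindication check
    List<String> activeMedications = new ArrayList<>();
}

public class MatGuidelineRule {
    /** Returns a recommendation if the rule's conditions are met, else null. */
    static String evaluate(Patient p) {
        if (p.opioidUseDisorder
                && !p.severeHepaticImpairment
                && !p.activeMedications.contains("buprenorphine")
                && !p.activeMedications.contains("methadone")) {
            return "Consider evaluation for Medication Assisted Treatment (MAT)";
        }
        return null;
    }

    public static void main(String[] args) {
        Patient p = new Patient();
        p.opioidUseDisorder = true;
        System.out.println(evaluate(p));
    }
}
```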

The complexity and costs inherent in capturing the medical knowledge in clinical guidelines and translating that knowledge into executable code remain an impediment to the widespread adoption of CDS software. Therefore, there is a need for standards that facilitate the sharing and interchange of CDS knowledge artifacts and executable clinical guidelines. The ONC Health eDecision Initiative has published specifications to support the interoperability of CDS knowledge artifacts and services.

Ontologies as knowledge representation formalism are well suited for modeling complex medical knowledge and can facilitate reasoning during the automated execution of clinical guidelines based on patient data at the point of care.

The typical Clinical Practice Guideline (CPG) is 50 to 150 pages long. Clinical Decision Support (CDS) should also include other forms of cognitive aid such as Electronic Checklists, Data Visualization, Order Sets, and Infobuttons.

The issues of human factors and usability of CDS systems as well as CDS integration with clinical workflows have been the subject of many research projects in healthcare informatics. The challenge is to bring these research findings into the practice of developing clinical systems software.


Learning from Data


Learning what works and what does not work in clinical practice is important for building a learning health system. This can be achieved by incorporating the results of Comparative Effectiveness Research (CER) and Patient-Centered Outcome Research (PCOR) into CDS systems. Increasingly, outcomes research will be performed using observational studies (based on real world clinical data), which are recognized as complementary to randomized controlled trials (RCTs). For example, CER and PCOR can help answer questions about the comparative effectiveness of pharmacological and psychotherapeutic interventions in mental health and substance abuse treatment. This is a form of Practice-Based Evidence (PBE) that is necessary to close the evidence loop.

Three factors are contributing to the availability of massive amounts of clinical data: the rising adoption of EHRs by providers (thanks in part to the Meaningful Use incentive program), medical devices (including those used by patients outside of healthcare facilities), and medical knowledge (for example, in the form of medical research literature). Massively parallel computing platforms such as Apache Hadoop or Apache Spark can process humongous amounts of data (including in real time) to obtain actionable insights for effective clinical decision making.
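As a rough sketch of this kind of processing, the job below uses Spark's Java API (with Java 8 lambda syntax) to count emergency department visits per patient. The event log format, field positions, and HDFS paths are assumptions made for the example:

```java
// Sketch: counting ED visits per patient from a large, hypothetical
// "patientId,eventType" event log using Apache Spark's Java API.
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class EdVisitCounts {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("ed-visits").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        sc.textFile("hdfs:///clinical/events.csv")           // hypothetical path
          .map(line -> line.split(","))
          .filter(fields -> fields[1].equals("ED_VISIT"))    // keep ED events
          .mapToPair(fields -> new Tuple2<>(fields[0], 1))   // (patientId, 1)
          .reduceByKey(Integer::sum)                         // visits per patient
          .saveAsTextFile("hdfs:///clinical/ed-visit-counts");
        sc.stop();
    }
}
```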

The use of predictive modeling for personalized medicine (based on statistical computing and machine learning techniques) is becoming a common practice in healthcare delivery as well. These models can predict the health risk of patients (for pro-active care) based on their individual health profiles and can also help predict which treatments are more likely to lead to positive outcomes.
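A minimal sketch of such a predictive model follows, assuming made-up logistic regression coefficients rather than coefficients fit to real patient data:

```java
// Minimal sketch of a clinical risk prediction model: a logistic regression
// scorer with invented coefficients (a real model would be trained on data).
public class RiskModel {
    // Hypothetical coefficients: intercept, age (years), prior admissions.
    static final double B0 = -5.0, B_AGE = 0.04, B_ADMITS = 0.8;

    /** Probability of, e.g., 30-day readmission for one patient. */
    static double predict(double age, int priorAdmissions) {
        double z = B0 + B_AGE * age + B_ADMITS * priorAdmissions;
        return 1.0 / (1.0 + Math.exp(-z)); // logistic function
    }

    public static void main(String[] args) {
        System.out.printf("risk = %.2f%n", predict(67, 2));
    }
}
```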

Embedding Visual Analytics capabilities into CDS systems can help clinicians obtain deep insight for effective understanding, reasoning, and decision making through the visual exploration of massive, complex, and often ambiguous data. For example, Visual Analytics can help in comparing different interventions and care pathways and their respective clinical outcomes for a patient or population of patients over a certain period of time through the vivid showing of causes, variables, comparisons, and explanations.


Genomics of Addiction and Personalized Medicine


Advances in genomics and pharmacogenomics are helping researchers understand treatment response variability among patients in addiction treatment. Clinical Decision Support (CDS) systems can also be used to provide cognitive support to clinicians in providing genetically guided treatment interventions.


Quality Measurement for Mental Health and Substance Use Treatment


An important implication of the shift from a fee-for-service to a value-based healthcare delivery model is that existing process measures and the regulatory requirements to report them are no longer sufficient.

Patient-reported outcomes (PROs) and patient-centered measures include essential metrics such as mortality, functional status, time to recovery, severity of side effects, and remission (depression remission at six and twelve months). These measures should take into account the values, goals, and wishes of the patient. Therefore patient-centered outcomes should also include the patient's own evaluation of the care received.

Another issue to be addressed is the lack of data elements in Electronic Medical Record (EMR) systems for capturing, reporting, and analyzing PROs. This is the key to accountability and quality improvement in mental health and substance use treatment.


Using Natural Language Processing (NLP) for the automated processing of clinical narratives


Electronic documentation in mental health and substance use treatment is often captured in the form of narrative text such as psychotherapy notes. Natural Language Processing (NLP) and machine learning tools and techniques (such as named entity recognition) can be used to extract clinical concepts and other insight from clinical notes.
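As a toy illustration of concept extraction, here is a naive dictionary-based recognizer in plain Java. Real systems (Apache cTAKES, for example) use trained statistical models and standard terminologies such as SNOMED CT; the dictionary below is invented:

```java
// Sketch: a naive dictionary-based named entity recognizer for clinical
// notes. The terms and semantic types below are illustrative only.
import java.util.LinkedHashMap;
import java.util.Map;

public class ConceptExtractor {
    static final Map<String, String> DICTIONARY = new LinkedHashMap<>();
    static {
        DICTIONARY.put("major depressive disorder", "Diagnosis");
        DICTIONARY.put("buprenorphine", "Medication");
        DICTIONARY.put("anxiety", "Symptom");
    }

    public static void main(String[] args) {
        String note = "Patient with major depressive disorder and anxiety, "
                    + "started on buprenorphine.";
        String lower = note.toLowerCase();
        DICTIONARY.forEach((term, type) -> {
            if (lower.contains(term)) {
                System.out.println(type + ": " + term);
            }
        });
    }
}
```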

Another area of interest is Clinical Question Answering (CQA), which would allow clinicians to ask questions in natural language and extract clinical answers from very large amounts of unstructured sources of medical knowledge. PubMed has more than 23 million citations for biomedical literature from MEDLINE, life science journals, and online books. It is impossible for the human brain to keep up with that amount of knowledge.



Computer-Based Cognitive Behavioral Therapy (CCBT) and mHealth


According to a report published last year by the California HealthCare Foundation and titled The Online Couch: Mental Health Care on the Web:

"Computer-based cognitive behavioral therapy (CCBT) cost-effectively leverages the Internet for coaching patterns in self-driven or provider-assisted programs. Technological advances have enabled computer systems designed to replicate aspects of cognitive behavior therapy for a growing range of mental health issues".
An example of a successful nationwide adoption of CCBT is the online behavioral therapy site Beating the Blues in the United Kingdom which has been proven to help patients suffering from anxiety and mild to moderate depression. Beating the Blues has been recommended for use in the NHS by the National Institute for Health and Clinical Excellence (NICE).

In addition, there is growing evidence to support the efficacy of mobile health (mHealth) technologies for supporting patient engagement and activation in health behavior change (e.g., smoking cessation).

 

Technologies in support of a Collaborative Care Model


There is sufficient evidence to support the efficacy of the collaborative care model (CCM) in the treatment of chronic mental health and substance use conditions. The CCM is based on the following principles:
  • Coordinated care involving a multi-disciplinary care team.

  • Longitudinal care plan as the backbone of care coordination.

  • Co-location of primary care and mental health and substance use specialists.

  • Case management by a Care Manager. 
Implementing an effective collaborative care model will require a new breed of advanced clinical collaboration tools and capabilities such as:
  • Conversations and knowledge sharing using tools like video conferencing for virtual two-way face-to-face communication between clinicians (see my previous post titled Health IT Innovations for Care Coordination).

  • Clinical content management and case management tools.

  • File sharing and syncing allowing the longitudinal care plan to be synchronized and shared among all members of the care team.

  • Light-weight and simple clinical data exchange standards and protocols for content, transport, security, and privacy. 

 

Patient Consent and Privacy


Because of the stigma associated with mental health and substance use, it is important to give patients control over the sharing of their medical records. Patients' consent should be obtained regarding what type of information is shared, with whom, and for what purpose. The patient should also have access to an audit trail of all data exchange-related events. Current paper-based consent processes are inefficient and lack accountability. Web-based consent management applications facilitate the capture and automated enforcement of patient consent directives (see my previous post titled Patient privacy at web scale).
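A minimal sketch of what automated consent enforcement with an audit trail might look like follows; all class and field names are hypothetical, and a real system would use standard consent directive formats:

```java
// Sketch: checking patient consent directives before a record disclosure
// and writing an audit trail entry. All types here are hypothetical.
import java.time.Instant;
import java.util.List;

public class ConsentEnforcer {
    static class ConsentDirective {
        final String patientId, recipientOrg, purpose;
        ConsentDirective(String patientId, String recipientOrg, String purpose) {
            this.patientId = patientId;
            this.recipientOrg = recipientOrg;
            this.purpose = purpose;
        }
    }

    static boolean isDisclosureAllowed(List<ConsentDirective> directives,
            String patientId, String recipientOrg, String purpose) {
        boolean allowed = directives.stream().anyMatch(d ->
                d.patientId.equals(patientId)
                && d.recipientOrg.equals(recipientOrg)
                && d.purpose.equals(purpose));
        // Every decision is recorded so the patient can review an audit trail.
        System.out.printf("%s AUDIT disclosure patient=%s to=%s purpose=%s allowed=%b%n",
                Instant.now(), patientId, recipientOrg, purpose, allowed);
        return allowed;
    }
}
```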

Sunday, November 10, 2013

Toward Polyglot Programming on the JVM

In my previous post titled Treating Javascript as a first class language, I wrote about how the Java Virtual Machine (JVM) is evolving with new languages and frameworks like Groovy, Grails, Scala, Akka, and the Play Framework. In this post, I report on my experience in learning and evaluating these emerging technologies and their roles in the Java ecosystem.

A KangaRoo on the JVM


On a previous project, I used Spring Roo to jumpstart the software development process. Spring Roo was created by Ben Alex, an Australian engineer who is also the creator of Spring Security. Spring Roo was a big productivity boost and generated a significant amount of code and configuration based on the specification of the domain model. Spring Roo automatically generated the following:

  • The domain entities with support for JPA annotations.
  • Repository and service layers. In addition to JPA, Spring Roo also supports NoSQL persistence for MongoDB based on the Spring Data repository abstraction.
  • A web layer with Spring MVC controllers and JSP views with support for Tiles-based layout, theming, and localization. The JSP views were subsequently replaced with a combination of Thymeleaf (a next generation server-side HTML5 template engine) and Twitter Bootstrap to support a Responsive Web Design (RWD) approach. Roo also supports GWT and JSF.
  • REST and JSON remoting for all domain types.
  • Basic configuration for Spring Security, Spring Web Flow, Spring Integration, JMS, Email, and Apache Solr.
  • Entity mocking, automatic generation of test data ("Data on Demand"),  in-container integration testing, and end-to-end Selenium integration tests.
  • A Maven build file for the project and full integration with Spring STS.
  • Deployment to Cloud Foundry.
Roo also supports other features such as database reverse engineering and Ajax. Another benefit of using Roo is that it helped enforce Spring best practices and other architectural concerns such as proper application layering.

For my future projects, I am looking forward to taking developer productivity and innovation to the next level. I have several criteria in mind:

  • Being able to do more with less. This means being able to write code that is concise, expressive, requires less configuration and boilerplate coding, and is easier to understand and maintain (particularly for difficult concerns like concurrency which is a key factor in scalability).
  • Interoperability with the Java language and the ability to run on the JVM, so that I can take advantage of the large and rich Java ecosystem of tools and frameworks.
  • Lastly, my interest in responsive, massively scalable, and fault-tolerant systems has picked up recently.


Getting Groovy


Maven has been a very powerful build system for several projects that I have worked on. My goal now is to support continuous delivery pipelines as a pattern for achieving high quality software. Large open source projects like Hibernate, Spring, and Android have already moved to Gradle. Gradle builds are written in a Groovy DSL and are more concise than Maven POM files which are based on a more verbose XML syntax. Gradle supports Java, Groovy, and Scala out-of-the box. It also has other benefits like incremental builds, multi-project builds, and plugins for other essential development tools like Eclipse, Jenkins, SonarQube, Ivy, and Artifactory.

Grails is a full-stack framework based on Groovy, leveraging its concise syntax (which includes Closures), dynamic language programming, metaprogramming, and DSL support. The core principle of Grails is "convention over configuration". Grails also integrates well with existing and popular Java projects like Spring Security, Hibernate, and Sitemesh. Roo generates code at development time and makes use of AOP. Grails on the other hand generates code at run-time, allowing the developer to do more with less code. The scaffolding mechanism is very similar in Roo and Grails.

Grails has its own view technology called Groovy Server Pages (GSP) and its own ORM implementation called Grails Object Relational Mapping (GORM) which uses Hibernate under the hood. There is also decent support for REST/JSON and URL routing to controller actions. This makes it easy to use Grails together with Javascript MVC frameworks like AngularJS in creating more responsive user experiences based on the Single Page Application (SPA) architectural pattern.

There are many factors that can influence the decision to use Roo vs. Grails (e.g., the learning curve associated with Groovy and Grails for a traditional Java team). There is also a new high-productivity framework called Spring Boot that is emerging as part of the soon to be released Spring Framework 4.0.


Becoming Reactive


I am also interested in massively scalable and fault-tolerant systems. This is no longer a requirement solely for big internet players like Google, Twitter, Yahoo, and LinkedIn that need to scale to millions of users. These requirements (including response time and up time) are also essential in mission-critical applications such as healthcare.

The recently published "Reactive Manifesto" makes the case for a new breed of applications called "Reactive Applications". According to the manifesto, the Reactive Application architecture allows developers to build "systems that are event-driven, scalable, resilient, and responsive." That is the premise of the other two prominent languages on the JVM: Scala and Clojure. They are based on a different programming paradigm (than traditional OOP) called Functional Programming that is becoming very popular in the multi-core era.

Twitter uses Scala and has open-sourced some of their internal Scala resources like "Effective Scala" and "Scala School". One interesting framework based on Scala is Akka, a concurrency framework built on the Actor Model.
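A minimal sketch of the Actor Model using Akka's Java API follows (the actor and message are trivial placeholders; a real system would model domain messages and supervision):

```java
// Sketch: an Akka actor that processes messages asynchronously, one at a
// time, with no shared mutable state between actors.
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;

public class HelloAkka {
    public static class Greeter extends UntypedActor {
        @Override
        public void onReceive(Object message) {
            if (message instanceof String) {
                System.out.println("Hello, " + message);
            } else {
                unhandled(message);
            }
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        ActorRef greeter = system.actorOf(Props.create(Greeter.class), "greeter");
        greeter.tell("Akka", ActorRef.noSender()); // fire-and-forget message
        system.shutdown();
    }
}
```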

The Play Framework 2 is a full-stack web application framework based on Scala which is currently used by LinkedIn (which has over 225 million registered users worldwide). In addition to its elegant design, Play's unique benefits include:

  • An embedded Java NIO (New I/O) non-blocking server based on JBoss Netty, providing the ability to call collaborating services asynchronously without relying on thread pools to handle I/O. This new breed of servers is called "Evented Servers" (NodeJS is another implementation) as opposed to the old "Threaded Servers". Older frameworks like Spring MVC use a threaded and synchronous approach which is more difficult to scale.
  • The ability to make changes to the source code and just refresh the browser page to see the changes (this is called hot reload).
  • Type-safe Scala templates (errors are displayed in the browser during development).
  • Integrated support for Akka which provides (among other benefits) fault-tolerance, the ability to quickly recover from failure.
  • Asynchronous responses (based on the concepts of "Future" and "Promise" also found in AngularJS), caching, iteratees (for processing large streams of data), and support for real-time push-based technologies like WebSockets and Server-Sent Events.
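To illustrate the Future/Promise style mentioned in the last point in plain Java: Play and Akka provide their own Future and Promise types, so the sketch below uses Java 8's CompletableFuture as a stand-in for composing non-blocking calls:

```java
// Sketch: composing asynchronous, non-blocking calls in the Future/Promise
// style using Java 8's CompletableFuture (a stand-in for Play/Akka types).
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    static CompletableFuture<String> fetchPatientName(String id) {
        // Stand-in for a non-blocking call to a collaborating service.
        return CompletableFuture.supplyAsync(() -> "Patient-" + id);
    }

    public static void main(String[] args) {
        fetchPatientName("42")
            .thenApply(String::toUpperCase)   // transform when complete
            .thenAccept(System.out::println)  // consume the result
            .join();                          // block only in this demo
    }
}
```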
The biggest challenge in moving to Scala is that the move to Functional Programming can be a significant learning curve for developers with a traditional OOP background in Java. Functional Programming is not new. Languages like Lisp and Haskell are functional programming languages. More recently, XML processing languages like XSLT and XQuery have adopted functional programming ideas.


Bringing Clojure to the JVM


Clojure is a dialect of Lisp and a dynamically typed functional programming language which compiles to JVM bytecode. Clojure supports multithreaded programming and immutable data structures. One interesting application of Clojure is Incanter, a statistical computing and data visualization environment enabling big data analysis on the JVM.

Thursday, August 15, 2013

Health IT Innovations for Care Coordination

The Business Case


According to an article by Bodenheimer et al. published in the January/February 2009 issue of Health Affairs and titled Confronting The Growing Burden Of Chronic Disease: Can The U.S. Health Care Workforce Do The Job?:

In 2005, 133 million Americans were living with at least one chronic condition. In 2020, this number is expected to grow to 157 million. In 2005, sixty-three million people had multiple chronic illnesses, and that number will reach eighty-one million in 2020. 

Patients with co-morbidities are typically treated by multiple clinicians working for different healthcare organizations. Care Coordination is necessary for the effective treatment of these patients and reducing costs. Effective Care Coordination can reduce the number of redundant tests and procedures, hospital admissions and readmissions, medical errors, and patient safety issues related to the lack of medication reconciliation. 

According to a paper by Dennison and Hughes published in the Journal of Cardiovascular Nursing and titled Progress in Prevention: Imperative to Improve Care Transitions for Cardiovascular Patients, direct communication between the hospital and primary care setting occurred only 3 percent of the time. According to the same paper, at discharge, a summary was provided only 12 percent of the time, and availability remained poor at 4 weeks post-discharge, with only 51 percent of practitioners providing a summary. The paper concluded that these communication gaps affected quality of care in 25 percent of follow-up visits.

Health Information Exchanges (HIEs) and emerging delivery models like the Accountable Care Organization (ACO) and the Patient-Centered Medical Home (PCMH) were designed to promote care coordination. However, according to an article by Furukawa et al. published in the August 2013 issue of Health Affairs and titled Hospital Electronic Health Information Exchange Grew Substantially In 2008–12:

In 2012, 51 percent of hospitals exchanged clinical information with unaffiliated ambulatory care providers, but only 36 percent exchanged information with other hospitals outside the organization. . . . In 2012 more than half of hospitals exchanged laboratory results or radiology reports, but only about one-third of them exchanged clinical care summaries or medication lists with outside providers.                      


Furthermore, the financial sustainability of many HIEs remains an issue. According to another article by Adler-Milstein et al. published in the same issue of Health Affairs and titled Operational Health Information Exchanges Show Substantial Growth, But Long-Term Funding Remains A Concern, "74 percent of health information exchange efforts report struggling to develop a sustainable business model".  

There are other obstacles to care coordination including the existing fee-for-service healthcare delivery model (as opposed to a value-based model), the lack of interoperability between healthcare information systems, and the lack of adoption of effective collaboration tools.

According to a report by the Institute of Medicine (IOM) titled The Healthcare Imperative: Lowering Costs and Improving Outcomes, a program designed to improve care coordination could result in national annual savings of $240.1 billion.

What Can We Learn From High Risk Operations in Other Industries?


Similar breakdowns in communication during shift handovers have also been observed in risky operating environments, sometimes with devastating consequences. In the aerospace industry, human factors research and training have played an important role in successfully addressing the issue. A research paper by Parke and Mishkin titled Best Practices in Shift Handover Communication: Mars Exploration Rover Surface Operations included the following recommendations:

  • Two-way Communication, Preferably Face-to-Face. . . . Two-way communication enables the incoming worker to ask questions and rephrase the material to be handed over, so as to expose these differences [in mental model].


  • Face-to-Face Handovers with Written Support. Face-to-face handovers are improved if they are supported by structured written material—e.g., a checklist of items to convey, and/or a position log to review. 


  • Content of Handover Captures Intent. Handover communication works best if it captures problems, hypotheses, and intent, rather than simply lists what occurred.
While the logistics of healthcare delivery do not always permit physical face-to-face communication between clinicians during transitions of care, the web has seen an explosion in online collaboration tools. Innovative organizations have embraced these technologies, giving rise to a new breed of enterprise software known as Enterprise 2.0 or Social Enterprise Software. This new breed of software is not only social, but also mobile and cloud-based.

Care Coordination in the Health Enterprise 2.0


  • Collaborative Authoring of a Longitudinal Care Plan. From a content perspective, the Care Plan is the backbone of Care Coordination. The Care Plan should be comprehensive and standardized (similar to the checklist in aerospace operations). It should include problems, medications, orders, results, care goals (taking into consideration the patient's wishes and values), care team members and their responsibilities, and actual patient outcomes (e.g., functional status). Clinical Decision Support (CDS) tools can be used to dynamically generate a basic Care Plan based on the patient's specific clinical data. This basic Care Plan can be used by members of the care team to build a more elaborate Longitudinal Care Plan. CDS tools can also automatically generate alerts and reminders for the care team.


  • Communication and Collaboration using Enterprise 2.0 Software. These tools should be used to enable collaboration between all members of the care team, which includes not only clinicians, but also non-clinician caregivers and the patient herself. Beyond email, these tools allow conversations and knowledge sharing through instant messaging, video conferencing (for virtual two-way face-to-face communication), content management, file syncing (allowing the longitudinal care plan to be synchronized and shared among all members of the care team), search, and enterprise social networking (because clinical work is a social activity like most human activities). A provider directory should make it easy for users to find a specific provider and all their contact information based on search criteria such as location, specialty, knowledge, experience, and telephone number.


  • Lightweight Standards and Protocols for Content, Transport, Security, and Privacy. The foundation standards are REST, JSON, OAuth2, and OpenID Connect. An emerging approach that could really help put patients in control of the privacy of their electronic medical record is the OAuth2.0-based User-Managed Access (UMA) protocol of the Kantara Initiative (see my previous post titled Patient Privacy at Web Scale). Initiatives like the ONC-sponsored RESTful Health Exchange (RHEX) project and the HL7 Fast Healthcare Interoperability Resources (FHIR) hold great promise (a minimal FHIR retrieval sketch follows this list).


  • Case Management Tools. They are typically used by Nurse Practitioners (Case Managers) in Medical Homes, a concept popularized by the Patient-Centered Medical Home healthcare delivery model to coordinate care. These tools integrate various capabilities such as risk stratification (using predictive modeling) to identify at-risk patients, content management (check-in, check-out, versioning), workflows (human tasks), communication, business rule engine, and case reporting/analytics capabilities.
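As a rough illustration of the RESTful approach mentioned above, the sketch below retrieves a FHIR Patient resource over plain HTTP. The server base URL, resource id, and the DSTU-era media type are assumptions:

```java
// Sketch: reading a FHIR Patient resource over REST. The endpoint is
// hypothetical; real servers require authentication (e.g., OAuth2).
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class FhirRead {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://fhir.example.org/Patient/123");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json+fhir"); // FHIR JSON
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            in.lines().forEach(System.out::println); // the Patient resource
        }
    }
}
```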

Sunday, June 9, 2013

Essential IT Capabilities Of An Accountable Care Organization (ACO)

The Certification Commission for Health Information Technology (CCHIT) recently published a document entitled A Health IT Framework for Accountable Care. The document identifies the following key processes and functions necessary to meet the objectives of an ACO:

  • Care Coordination
  • Cohort Management
  • Patient and Caregiver Relationship Management
  • Clinician Engagement
  • Financial Management
  • Reporting
  • Knowledge Management.

The key to success is a shift to data-driven healthcare delivery. The following is my assessment of the most critical IT capabilities for ACO success:

  • Comprehensive and standardized care documentation in the form of electronic health records including, at a minimum: patients' signs and symptoms, diagnostic tests, diagnoses, allergies, social and family history, medications, lab results, care plans, interventions, and actual outcomes. Disease-specific Documentation Templates can support the effective use of automated Clinical Decision Support (CDS). Comprehensive electronic documentation is the foundation of accountability and quality improvement.

  • Care coordination through the secure electronic exchange and the collaborative authoring of the patient's medical record and care plan (this is referred to as clinical information reconciliation in the CCHIT Framework). This also requires health IT interoperability standards that are easy to use and designed following rigorous and well-defined software engineering practices. Unfortunately, this has not always been the case, resulting in standards that are actually obstacles to interoperability as opposed to enablers of interoperability. Case Management tools used by Medical Homes (a concept popularized by the Patient-Centered Medical Home model) can greatly facilitate collaboration and Care Coordination.

  • Patients' access to and ownership of their electronic health records including the ability to edit, correct, and update their records. Patient portals can be used to increase patients' health literacy with health education resources. Decision aids comparing the benefits and harms of various interventions (Comparative Effectiveness Research) should be available to patients. Patients' health behavior change remains one of the greatest challenges in Healthcare Transformation. mHealth tools have demonstrated their ability to support Patient Activation.

  • Secure communication between patients and their providers. Patients should have the ability to specify with whom, for what purpose, and the kind of medical information they want to share. Patients should have access to an audit trail of all access events to their medical records just as consumers of financial services can obtain their credit record and determine who has inquired about their credit score.

  • Clinical Decision Support (CDS) as well as other forms of cognitive aids such as Electronic Checklists, Data Visualization, Order Sets, Infobuttons, and more advanced Clinical Question Answering (CQA) capabilities (see my previous post entitled Automated Clinical Question Answering: The Next Frontier in Healthcare Informatics). The unaided mind (as Dr. Lawrence Weed, the father of the Problem-Oriented Medical Record calls it) is no longer able to cope with the large amounts of data and knowledge required in clinical decision making today. CDS should be used to implement clinical practice guidelines (CPGs) and other forms of Evidence-Based Medicine (EBM).

    However, the delivery of care should also take into account the unique clinical characteristics of individual patients (e.g., co-morbidities and social history) as well as their preferences, wishes, and values. Standardized Clinical Assessment And Management Plans (SCAMPs) promote care standardization while taking into account patient preferences and the professional judgment of the clinician. CDS should be well integrated with clinical workflows (see my previous post entitled Addressing Challenges to the Adoption of Clinical Decision Support (CDS) Systems).

  • Predictive risk modeling to identify at-risk populations and provide them with pro-active care including early screening and prevention. For example, predictive risk modeling can help identify patients at risk of hospital re-admission, an important ACO quality measure.

  • Outcomes measurement with an emphasis on patient outcomes in addition to existing process measures. Examples of patient outcome measures include: mortality, functional status, and time to recovery.

  • Clinical Knowledge Management (CKM) to disseminate knowledge throughout the system in order to support a learning health system. The Institute of Medicine (IOM) released a report titled  Digital Infrastructure for the Learning Health System: The Foundation for Continuous Improvement in Health and Health Care. The report describes the learning health system as:

    "delivery of best practice guidance at the point of choice, continuous learning and feedback in both health and health care, and seamless, ongoing communication among participants, all facilitated through the application of IT."

  • Applications of Human Factors research to enable the effective use of technology in clinical settings. Examples include: implementation of usability guidelines to reduce Alert Fatigue in Clinical Decision Support (CDS), Checklists, and Visual Analytics. There are many lessons to be learned from other mission-critical industries that have adopted automation. Following several incidents and accidents related to the introduction of the Glass Cockpit about 25 years ago, Human Factors training known as Cockpit Resource Management (CRM) is now standard practice in the aviation industry.

Sunday, April 28, 2013

How I Make Technology Decisions

The open source community has responded to the increasing complexity of software systems by creating many frameworks which are supposed to facilitate the work of developing software. Software developers spend a considerable amount of time researching, learning, and integrating these frameworks to build new software products. Selecting the wrong technology can cost an organization millions of dollars. In this post, I describe my approach to selecting these frameworks. I also discuss the frameworks that have made it to my software development toolbox.

Understanding the Business


The first step is to build a strong understanding of the following:

  • The business goals and challenges of the organization. For example, the healthcare industry is currently shifting to a value-based payment model in an increasingly tightening regulatory environment. Healthcare organizations are looking for a computing infrastructure that supports new demands such as the Accountable Care Organization (ACO) model, patient-centered outcomes, patient engagement, care coordination, quality measures, bundled payments, and Patient-Centered Medical Homes (PCMH).

  • The intended buyers and users of the system and their concerns. For example, what are their pain points? Which devices are they using? And what are their security and privacy concerns?

  • The standards and regulations of the industry.

  • The competitive landscape in the industry. To build a system that is relevant, it is important to have some idea of the following: What is the competition? What are the current capabilities of their systems? What is on their road map? And what are customers saying about their products? This knowledge can help shape a Blue Ocean Strategy.

  • Emerging trends in technologies.

This type of knowledge comes with industry experience and a habit of continuously paying attention to these issues. For example, on a daily basis, I read industry news as well as scientific and technical publications. As a member of the American Medical Informatics Association (AMIA), I receive the latest issue of the Journal of the American Medical Informatics Association (JAMIA), which allows me to access cutting-edge research in medical informatics. I speak at industry conferences when possible, and this allows me not only to hone my presentation skills, but also to attend all sessions for free or at a discounted price. For the latest in software development, I turn to publications like InfoQ, DZone, and TechCrunch.

To better understand the users and their needs and concerns, I perform early usability testing (using sketches, wireframes, or mockups) to test design ideas and obtain feedback before actual development starts. For generating innovative design ideas, I recommend the following book: Universal Methods of Design: 100 Ways to Research Complex Problems, Develop Innovative Ideas, and Design Effective Solutions by Bruce Hanington and Bella Martin.

 

Architecting the Solution


Armed with a solid understanding of the business and technological landscape as well as the domain, I can start creating a solution architecture. Software development projects can be chaotic. Based on my experience working on many software development projects across industries, I found that Domain Driven Design (DDD) can help foster a disciplined approach to software development. For more on my experience with DDD, see my previous post entitled How Not to Build A Big Ball of Mud, Part 2.

Frameworks evolve over time. So, I make sure that the architecture is framework-agnostic and focused on supporting the domain. This allows me to retrofit the system in the future with new frameworks as they emerge.


 

Due Diligence


Software development is a rapidly evolving field. I keep my eyes on the radar and try not to drink the vendors' Kool-Aid. For example, not all vendors have a good track record in supporting standards, interoperability, and cross-platform solutions.

The ThoughtWorks Technology Radar is an excellent source of information and analysis on emerging trends in software. Its contributors include software thought leaders like Martin Fowler and Rebecca Parsons. I also look at surveys of the developer community to determine the popularity, community size, and usage statistics of competing frameworks and tools. Sites like InfoQ often conduct these types of surveys, like the recent InfoQ survey on Top JavaScript MVC Frameworks. I also like Matt Raible's Comparing JVM Web Frameworks.

I value the opinion of recognized experts in the field of interest. I read their books, blogs, and watch their presentations. Before formulating my own position, I make sure that I read expert opinions on opposing sides of the argument. For example, in deciding on a pure Java EE vs. Spring Framework approach, I read arguments by experts on both sides (experts like Arun Gupta, Java EE Evangelist at Oracle and Adrian Colyer, CTO at SpringSource).

Finally, consider a peer review of the architecture using a methodology like the Architecture Tradeoff Analysis Method (ATAM). Simply going through the exercise of explaining the architecture to stakeholders and receiving feedback can significantly help in improving it.


Rapid Prototyping 

 

It's generally a good idea to create a rapid prototype to quickly learn and demonstrate the capabilities and value of the framework to the business. This can also generate excitement in the development team, particularly if the framework can enhance the productivity of developers and make their life easier.

 

The Frameworks I've Selected


The Spring Framework

I am a big fan of the Spring Framework. I believe it is really designed to support the needs of developers from a productivity standpoint. In addition to dependency injection (DI), Aspect Oriented Programming (AOP), and Spring MVC, I like the Spring Data repository abstraction for JPA, MongoDB, Neo4J, and Hadoop. Spring supports Polyglot Persistence and Big Data today. I use Spring Roo for rapid application development, and this allows me to focus on modeling the domain. I use the Roo scaffolding feature to generate a lot of Spring configuration and Java code for the domain, repository (Roo supports JPA and MongoDB), service, and web layers (Roo supports Spring MVC, JSF, and GWT). Spring also supports unit and integration testing with the recent release of Spring MVC Test.
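As a small example of the Spring Data repository abstraction (the entity and query method are hypothetical), a repository interface is all that needs to be written; Spring generates the implementation, including the query derived from the method name:

```java
// Sketch: a Spring Data JPA repository for a hypothetical Patient entity.
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

@Entity
class Patient {
    @Id @GeneratedValue Long id;
    String lastName;
}

public interface PatientRepository extends JpaRepository<Patient, Long> {
    // Derived query: SELECT p FROM Patient p WHERE p.lastName = ?1
    List<Patient> findByLastName(String lastName);
}
```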

I use Spring Security, which allows me to use AOP and annotations to secure methods and supports advanced features like Remember Me and regular expressions for URLs. I think that JAAS is too low-level. Spring Security allows me to meet all OWASP Top Ten requirements (see my previous post entitled Application-Level Security in Health IT Systems: A Roadmap).
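A minimal sketch of that annotation-driven method security (the role name and service are hypothetical):

```java
// Sketch: securing a service method with a Spring Security annotation.
// Requires @EnableGlobalMethodSecurity(prePostEnabled = true) in the config.
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;

@Service
public class MedicalRecordService {
    @PreAuthorize("hasRole('ROLE_CLINICIAN')") // enforced via AOP proxy
    public String getRecord(String patientId) {
        return "record for " + patientId;
    }
}
```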

Spring Social makes it easy to connect a Spring application to social network sites like Facebook, Twitter, and LinkedIn using the OAuth2 protocol. From a tooling standpoint, Spring STS supports many Spring features and I can deploy directly to Cloud Foundry from Spring STS. I look forward to evaluating Grails and the Play Framework which use convention over configuration and are built on Groovy and Scala respectively.

Thymeleaf, Twitter Bootstrap, and JQuery

I use Twitter Bootstrap because it is based on HTML5, CSS3, JQuery, and LESS, and also supports a Responsive Web Design (RWD) approach. The size of the components library and the community is quite impressive.

Thymeleaf is an HTML5 templating engine and a replacement for traditional JSP. It is well integrated with Spring MVC and supports a clear division of labor between back-end and front-end developers. Twitter Bootstrap and Thymeleaf work well together.


AngularJS

For Single Page Applications (SPA) my definitive choice is AngularJS. It provides everything I need including a clean MVC pattern implementation, directives, view routing, Deep Linking (for bookmarking), dependency injection, two-way databinding, and BDD-style unit testing with Jasmine. AngularJS has its own dedicated debugging tool called Batarang. There are also several learning resources (including books) on AngularJS.

Check this page comparing the performance of AngularJS vs. KnockoutJS, as well as this survey of the popularity of top JavaScript MVC frameworks.

 

D3.js 

D3.js is my favorite for data visualization in data-intensive applications. It is based on HTML5, SVG, and Javascript. For simple charting and plotting, I use jqPlot which is based on JQuery. See my previous post entitled Visual Analytics for Clinical Decision Making.

 

I use R for statistical computing, data analysis, and predictive analytics. See my previous post entitled Statistical Computing and Data Mining with R.


Development Tools


My development tools include: Git (Distributed Version Control), Maven or Gradle (build), Jenkins (Continuous Integration), Artifactory (Repository Manager), and Sonar (source code quality management). My testing toolkit includes Mockito, DBUnit, Cucumber JVM, JMeter, and Selenium.

Sunday, April 14, 2013

Addressing Challenges to the Adoption of Clinical Decision Support (CDS) Systems: A Practical Approach

Laptop and stethoscope by jfcherry is licensed under CC BY-SA 2.0
This post was updated on February 15, 2015.

Despite its potential to improve the quality of care, CDS is not widely used in health care delivery today. In technology marketing parlance, CDS has not crossed the chasm. There are several issues that need to be addressed including:

  • Clinicians' acceptance of the concept of automated execution of evidence-based clinical practice guidelines.

  • Integration into clinical workflows and care protocols.

  • Usability and human factors issues including alert fatigue and the human factors that influence alert acceptance.

  • With the expanding use of clinical prediction models for diagnosis and prognosis, there is a need for a better understanding of the probabilistic approach to clinical decision making under uncertainty.

  • Standardization to enable the interoperability and reuse of CDS knowledge artifacts and executable clinical guidelines.

  • The challenges associated with the automatic concurrent execution of multiple clinical practice guidelines for patients with co-morbidities.

  • Integration with modern information retrieval capabilities which allow clinicians to access up-to-date biomedical literature. These capabilities include text mining, Natural Language Processing (NLP), and more advanced Clinical Question Answering (CQA) tools. CQA allows clinicians to ask clinical questions in natural language and extracts answers from very large amounts of unstructured sources of medical knowledge. PubMed has more than 23 million citations for biomedical literature from MEDLINE, life science journals, and online books. The typical Clinical Practice Guideline (CPG) is 50 to 150 pages long. It is impossible for the human brain to keep up with that amount of knowledge.

  • The use of mathematical simulations in CDS to explore and compare the outcomes of various treatment alternatives.

  • Integration of genomics to enable personalized medicine as the cost of whole-genome sequencing (WGS) continues to fall.

  • Integration of outcomes research in the context of a shift to a value-based healthcare delivery model. This can be achieved by incorporating the results of Comparative Effectiveness Research (CER) and Patient-Centered Outcome Research (PCOR) into CDS systems. Increasingly, outcomes research will be performed using observational studies (based on real world clinical data), which are recognized as complementary to randomized controlled trials (RCTs) for discovering what works and what doesn't work in practice. This is a form of Practice-Based Evidence (PBE) that is necessary to close the evidence loop.

  • Support for a shared decision making process which takes into account the values, goals, and wishes of the patient.

  • The use of Visual Analytics in CDS to facilitate analytical reasoning over very large amounts of structured and unstructured data sources.

  • Finally, the challenges associated with developing hybrid decision support systems which seamlessly integrate all the technologies mentioned above including: machine learning predictive algorithms, real-time data stream mining, visual analytics, ontology reasoning, and text mining.

In response to a paper titled Grand challenges in clinical decision support by Sittig et al. [1], Fox et al. [2] outlined four theoretical foundations for the design and implementation of CDS systems: decision theory, theories of knowledge representation, process design, and organizational modeling. The practical approach discussed in this post is grounded in those four theoretical foundations.


CDS Interoperability


The complexity and cost inherent in capturing the medical knowledge in clinical guidelines and translating that knowledge into executable code remain a major impediment to the widespread adoption of CDS software. Therefore, there is a need for standards for the interchange and reuse of CDS knowledge artifacts and executable clinical guidelines.

Different formalisms, methodologies, and architectures have been proposed over the years for representing the medical knowledge in clinical guidelines. Examples include but are not limited to the following:

  • The Arden Syntax
  • GLIF (Guideline Interchange Format)
  • GELLO (Guideline Expression Language Object-Oriented)
  • GEM (Guidelines Element Model)
  • The Web Ontology Language (OWL)
  • PROforma
  • EON
  • PRODIGY
  • Asbru
  • GUIDE
  • SAGE.
More recently, HL7 has published the Clinical Decision Support (CDS) Knowledge Artifact Specification which provides guidance on shareable CDS knowledge artifacts including event-condition-action rules, order sets, and documentation templates.

The HL7 Context-Aware Knowledge Retrieval (Infobutton) specifications provide a standard mechanism for clinical information systems to request context-specific clinical knowledge to satisfy clinicians' and patients' information needs at the point of care.

Enabling the interoperability of executable clinical guidelines requires a standardized domain model for representing the medical information of patients and other contextual clinical information. The HL7 Virtual Medical Record (vMR) is a standardized domain model for representing the inputs and outputs of CDS systems. The ability to transform an HL7 CCDA document into an HL7 vMR document means that EHR systems that are Meaningful Use Stage 2 certified can consume these standard-compliant decision support services.

Because of the complexity and cost of developing CDS software, CDS software capabilities can be exposed as a set of services (part of a service-oriented architecture [16]) which can be consumed by other clinical systems such as EHR and Computerized Physician Order Entry (CPOE) systems. When deployed in the cloud, these CDS software services can be shared by several health care providers to reduce costs. The HL7 Decision Support Service (DSS) specification defines REST and SOAP web service interfaces using the vMR as the message payload for accessing interoperable decision support services.

In practice, executable CDS rules (like other complex types of business rules) can be implemented with a production rule system using forward chaining. This is the approach taken by OpenCDS and some other large scale CDS implementations in real-world healthcare delivery settings. This allows CDS software developers to externalize the medical knowledge contained in clinical practice guidelines in the form of declarative rules as opposed to embedding that knowledge in procedural code. Many viable open source business rule management systems (BRMS) are available today and provide capabilities such as a rule authoring user interface, a rules repository, and a testing environment. Furthermore, a rule execution environment can be integrated with business processes, ontologies, and predictive analytics models (more on that later).
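A minimal sketch of the forward-chaining idea in plain Java follows. A real BRMS (such as the Drools engine used by OpenCDS) adds rule authoring, a repository, and an optimized matching algorithm; the facts and rules below are invented for illustration:

```java
// Minimal sketch of forward chaining: rules fire against a working memory
// of facts until no rule can add anything new (a fixed point).
import java.util.HashSet;
import java.util.Set;
import java.util.function.Function;

public class ForwardChaining {
    // A rule maps the current fact set to a newly inferred fact, or null.
    interface Rule extends Function<Set<String>, String> {}

    public static void main(String[] args) {
        Set<String> facts = new HashSet<>();
        facts.add("diagnosis:opioid-use-disorder");

        Rule r1 = f -> f.contains("diagnosis:opioid-use-disorder")
                ? "candidate:MAT" : null;
        Rule r2 = f -> f.contains("candidate:MAT")
                ? "order:MAT-evaluation" : null;
        Rule[] rules = { r1, r2 };

        boolean changed = true;
        while (changed) {                // keep firing until nothing changes
            changed = false;
            for (Rule rule : rules) {
                String inferred = rule.apply(facts);
                if (inferred != null && facts.add(inferred)) changed = true;
            }
        }
        System.out.println(facts);       // includes the inferred order
    }
}
```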

The W3C Rule Interchange Format (RIF) specification is a possible solution to the interchange of executable CDS rules. The RIF Production Rule Dialect (RIF PRD) is designed as a common XML serialization syntax for multiple rule languages to enable rule interchange between different BRMS. For example, RIF-PRD would allow the exchange of executable rules between existing BRMS like JBoss Drools, IBM ILOG JRules, and Jess. RIF is currently a W3C Recommendation and is backed by several BRMS vendors. Cosentino et al. [3] described a model-driven approach for interoperability between IBM's proprietary ILOG rule language and RIF.


Seamless Integration into Clinical Workflows and Care Pathways


One of the main complaints against CDS systems is that they are not well integrated into clinical workflows and care protocols. Existing business process management standards like the Business Process Modeling Notation (BPMN) can provide a proven, practical, and adaptable approach to the integration of CDS rules and clinical pathways and protocols. Some existing open source and commercial BRMS already provide an integration of business rules and business processes out-of-the box and there are well-known patterns for integrating rules and processes [4, 5, 6] in business applications.

In 2014, the Object Management Group (OMG) released the Decision Model and Notation (DMN) specification which defines various constructs for modeling decision logic. The combination of BPMN and DMN [7, 8] provides a practical approach for modeling the decisions in clinical practice guidelines while integrating these decisions with clinical workflows. BPMN and DMN also support the modeling of decisions and processes that span functional and organizational boundaries.


Human Factors in the Use of Clinical Decision Support Systems


We need to do a better job of understanding the human factors that influence alert acceptance by clinicians in CDS. We also need clear and proven usability guidelines (backed by scientific research) that can be implemented by CDS software developers. Several research projects have sought to understand why clinicians accept or ignore alerts in medication-related CDS [9, 10]. Zacharia et al. [11] developed and validated an Instrument for Evaluating Human-Factors Principles in Medication-Related Decision Support Alerts (I-MeDeSA). I-MeDeSA measures CDS alerts on the following nine human factors principles: alarm philosophy, placement, visibility, prioritization, color, learnability and confusability, text-based information, proximity of task components being displayed, and corrective actions.

The British National Health Service (NHS) Common User Interface (CUI) Program has created standards and guidance in support of the usability of clinical applications with inputs from user interface design specialists, usability experts, and hundreds of clinicians with a diversity of background in using health information technology. The program is based on a rigorous development process which includes: research, design, prototyping, review, usability testing, and patient safety assessment by clinicians. In the US, the National Institute of Standards and Technology (NIST) has also published some guidance on the usability of clinical applications.

Studies have also shown that like in aviation, checklists can provide cognitive support to clinicians in the decision making process.


Integrating Genomic Data with CDS


The costs of whole-genome sequencing (WGS) and whole-exome sequencing (WES) continue to fall. Increasingly, both WGS and WES will be used in clinical practice for inherited disease risk assessment and pharmacogenomic findings [21]. There is a need for a modern CDS architecture that can support and facilitate the introduction and use of WGS and WES in clinical practice.

In a paper titled Technical desiderata for the integration of genomic data with clinical decision support [14], Welch et al. proposed technical requirements for the integration of genomic data with clinical decision support. In another paper titled A proposed clinical decision support architecture capable of supporting whole genome sequence information [15], Welch et al. proposed the following clinical decision support architecture for supporting whole genome sequence information:

Proposed service-oriented architecture (SOA) for whole genome sequence (WGS)-enabled CDS by Brandon M. Welch, Salvador Rodriguez Loya, Karen Eilbeck, and Kensaku Kawamoto is licensed under CC BY 3.0

The proposed architecture includes the following components: the genome variant knowledge base, the genome database, the CDS knowledge base, a CDS controller and the electronic health record (EHR). The authors suggest separating the genome data from the EHR data.


Practice-Based Evidence (PBE) needed for closing the evidence loop


Prospective randomized controlled trials (RCTs) are still considered the gold standard in evidence-based medicine. Although they can control for biases, RCTs are costly, time consuming, and must be performed under carefully controlled conditions.

The retrospective analysis of existing clinical data sources is increasingly recognized as complementary to RCTs for discovering what works and what doesn't work in real world clinical practice [23]. These retrospective studies will allow the creation of clinical prediction models which can provide personalized absolute risk and treatment outcome predictions for patients. They also facilitate what has been referred to as "rapid learning health care." [24]

Toward Data-Driven Clinical Decision Support (CDS)


William Osler (1849-1919) [19] famously said that "Medicine is a science of uncertainty and an art of probability."

The use of clinical prediction models for diagnosis and prognosis is becoming common practice in clinical care. These models can predict the health risks of patients based on their individual health data. Clinical Prediction Models provide absolute risk and treatment outcome prediction for conditions such as diabetes, kidney disease, cancer, cardiovascular disease, and depression. These models are built with statistical learning techniques and introduce new challenges related to their probabilistic approach to clinical decision making under uncertainty [20]. In his book titled Super Crunchers: Why Thinking-by-Numbers Is the New Way to Be Smart, Ian Ayres wrote:

Traditional experts make better decisions when they are provided with the results of statistical prediction. Those who cling to the authority of traditional experts tend to embrace the idea of combining the two forms of knowledge by giving the experts 'statistical support'. The purveyors of diagnostic software are careful to underscore that its purpose is only to provide support and suggestions. They want the ultimate decision and discretion to lie with the doctor. [12]

Furthermore, in order to leverage existing clinical domain knowledge from clinical practice guidelines and biomedical ontologies [22], machine learning algorithms' probabilistic approach to decision making under uncertainty must be complemented by technologies like production rule systems and ontology reasoners. Sesen et al. [18] designed a hybrid CDS for lung cancer care based on probabilistic reasoning with a Bayesian Network model and guideline-based recommendations using a domain ontology and an ontology reasoner.

Fox et al. [2] proposed an argumentation approach based on the construction, summarization, and prioritization of arguments for and against each generated candidate decision. These arguments can be either qualitative or quantitative in nature. On the importance of presenting evidence-based rationale in CDS systems, Fox et al. wrote:

In short, to improve usability of clinical user interfaces we advocate basing design around a firm theoretical understanding of the clinician’s perspective on the medical logic in a decision, the qualitative as well as quantitative aspects of the decision, and providing an evidence-based rationale for all recommendations offered by a CDS. [2]
In a paper titled A canonical theory of dynamic decision-making [13], Fox et al. proposed such a theory and presented the PROforma clinical guideline modeling language as an instance of it.

Clinical prediction model presentation techniques include traditional score charts, nomograms, and clinical rules [17]. However, clinical prediction models are easier to use and maintain when deployed as scoring services (as part of a service-oriented software architecture) and integrated into CDS systems. The scoring service can be deployed in the cloud to allow integration with multiple client clinical systems [20]. The Predictive Model Markup Language (PMML) specification published by the Data Mining Group (DMG) supports the interoperable deployment of predictive models in heterogeneous software environments.
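To make this concrete, here is a minimal sketch of what such a model could look like in PMML; the model, field names, and coefficients are all invented for illustration:

<PMML version="4.2" xmlns="http://www.dmg.org/PMML-4_2">
  <Header description="Illustrative logistic regression model for cardiovascular risk"/>
  <DataDictionary numberOfFields="3">
    <DataField name="age" optype="continuous" dataType="double"/>
    <DataField name="systolicBP" optype="continuous" dataType="double"/>
    <DataField name="risk" optype="categorical" dataType="string">
      <Value value="high"/>
      <Value value="low"/>
    </DataField>
  </DataDictionary>
  <RegressionModel functionName="classification" normalizationMethod="logit">
    <MiningSchema>
      <MiningField name="age"/>
      <MiningField name="systolicBP"/>
      <MiningField name="risk" usageType="target"/>
    </MiningSchema>
    <!-- Invented coefficients: P(high) = 1 / (1 + exp(-(-7.5 + 0.045*age + 0.03*systolicBP))) -->
    <RegressionTable intercept="-7.5" targetCategory="high">
      <NumericPredictor name="age" coefficient="0.045"/>
      <NumericPredictor name="systolicBP" coefficient="0.03"/>
    </RegressionTable>
    <RegressionTable intercept="0.0" targetCategory="low"/>
  </RegressionModel>
</PMML>

Because the model is purely declarative, a scoring service can load such a document at deployment time and serve predictions to multiple client clinical systems without any model-specific code.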

Visual analytics and data visualization techniques can also play an important role in the effective presentation of clinical prediction models to nonstatisticians, particularly in the context of shared decision making.


Concurrent execution of multiple guidelines for patients with co-morbidities


According to the Medicare 2012 chartbook, over two-thirds of Medicare beneficiaries have two or more chronic conditions, and 14% have six or more [25].

In Grand Challenges in Clinical Decision Support [1], Sittig et al. wrote:
The challenge is to create mechanisms to identify and eliminate redundant, contraindicated, potentially discordant, or mutually exclusive guideline-based recommendations for patients presenting with co-morbid conditions.
Wilk et al. [26] proposed a mitigation framework based on a first-order logic (FOL) approach.


A CDS Architecture for the era of Precision Medicine


I proposed a scalable CDS architecture for Precision Medicine in another post titled Toward a Reference Architecture for Intelligent Systems in Clinical Care.

 

References


[1] Sittig DF, Wright A, Osheroff JA, Middleton B, Teich JM, Ash JS, et al. Grand challenges in clinical decision support. J Biomed Inform 2008;41(2):387–92.

[2] Fox, J., Glasspool, D.W., Patkar, V., Austin, M., Black, L., South, M., et al. (2010). Delivering clinical decision support services: there is nothing as practical as a good theory. J. Biomed. Inform. 43, 831–843

[3] Valerio Cosentino, Marcos Didonet del Fabro, Adil El Ghali. A model driven approach for bridging ILOG Rule Language and RIF. RuleML, Aug 2012, Montpellier, France.

[4] Mauricio Salatino. (Processes & Rules) OR (Rules & Processes) 1/X. http://salaboy.com/2012/07/19/processes-rules-or-rules-processes-1x/. Retrieved February 15, 2015.

[5] Mauricio Salatino. (Processes & Rules) OR (Rules & Processes) 2/X. http://salaboy.com/2012/07/28/processes-rules-or-rules-processes-2x/. Retrieved February 15, 2015.

[6] Mauricio Salatino. (Processes & Rules) OR (Rules & Processes) 3/X. http://salaboy.com/2012/07/29/processes-rules-or-rules-processes-3x/. Retrieved February 15, 2015.

[7] Sylvie Dan. Modeling Clinical Rules with the Decision Modeling and Notation (DMN) Specification. http://sylviedanba.blogspot.com/2014/05/modeling-clinical-rules-with-decision.html. Retrieved February 15, 2015.

[8] Dennis Andrzejewski, Eberhard Beck, Laura Tetzlaff. The transparent representation of medical decision structures based on the example of breast cancer treatment. 9th International Conference on Health Informatics.

[9] Phansalkar S, Zachariah M, Seidling HM, Mendes C, Volk L, Bates DW. Evaluation of medication alerts in electronic health records for compliance with human factors principles. J Am Med Inform Assoc. 2014 Oct;21(e2):e332-40. doi: 10.1136/amiajnl-2013-002279.

[10] Seidling HM, Phansalkar S, Seger DL, et al. Factors influencing alert acceptance: a novel approach for predicting the success of clinical decision support. J Am Med Inform Assoc 2011;18:479–84.

[11] Zachariah M, Phansalkar S, Seidling HM, et al. Development and preliminary evidence for the validity of an instrument assessing implementation of human-factors principles in medication-related decision-support systems--I-MeDeSA. J Am Med Inform Assoc 2011;18(Suppl 1):i62–72.

[12] Ayres I. Super Crunchers: Why Thinking-by-Numbers Is the New Way to Be Smart. New York: Bantam; 2007.

[13] Fox J., Cooper R. P., Glasspool D. W. (2013). A canonical theory of dynamic decision-making. Front. Psychol. 4:150 10.3389/fpsyg.2013.00150.

[14] Welch BM, Eilbeck K, Del Fiol G, Meyer L, Kawamoto K. Technical desiderata for the integration of genomic data with clinical decision support. J Biomed Inform. 2014.

[15] Welch BM, Loya SR, Eilbeck K, Kawamoto K. A proposed clinical decision support architecture capable of supporting whole genome sequence information. J Pers Med. 2014 Apr 4;4(2):176-99. doi: 10.3390/jpm4020176.

[16] Loya SR, Kawamoto K, Chatwin C, Huser V. Service oriented architecture for clinical decision support: a systematic review and future directions. J Med Syst. 2014 Dec;38(12):140. doi: 10.1007/s10916-014-0140-z.

[17] Ewout W. Steyerberg. Clinical Prediction Models. A Practical Approach to Development, Validation, and Updating. New York: Springer, 2010.

[18] Sesen MB, Peake MD, Banares-Alcantara R, Tse D, Kadir T, Stanley R, Gleeson F, Brady M. 2014 Lung Cancer Assistant: a hybrid clinical decision support application for lung cancer care. J. R. Soc. Interface 11: 20140534. http://dx.doi.org/10.1098/rsif.2014.0534

[19] Bean RB, Bean WB. Sir William Osler: aphorisms from his bedside teachings and writings. New York; 1950.

[20] Joel Amoussou. How good is your crystal ball?: Utility, Methodology, and Validity of Clinical Prediction Models. http://efasoft.blogspot.com/2015/01/how-good-is-your-crystal-ball-utility.html. Retrieved February 15, 2015.

[21] Dewey FE, Grove ME, Pan C, et al. Clinical Interpretation and Implications of Whole-Genome Sequencing. JAMA. 2014;311(10):1035-1045. doi:10.1001/jama.2014.1717.

[22] Joel Amoussou. Ontologies for Addiction and Mental Disease: Enabling Translational Research and Clinical Decision Support. http://efasoft.blogspot.com/2014/08/ontologies-for-addiction-and-mental.html. Retrieved February 2015.

[23] Dekker ALAJ, Gulliford SL, Ebert MA, Orton CG. Future radiotherapy practice will be based on evidence from retrospective interrogation of linked clinical data sources rather than prospective randomized controlled clinical trials. Medical Physics 2014;41:030601. doi:10.1118/1.4832139.

[24] Lambin P, et al. 'Rapid Learning health care in oncology' – an approach towards decision support systems enabling customised radiotherapy. Radiotherapy and Oncology 109(1):159–164.

[25] Centers for Medicare & Medicaid Services. Chronic Conditions Among Medicare Beneficiaries, Chartbook: 2012 Edition. http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Chronic-Conditions/Downloads/2012Chartbook.pdf. Accessed Feb. 15, 2015.

[26] Szymon Wilk, Martin Michalowski, Xing Tan, Wojtek Michalowski: Using First-Order Logic to Represent Clinical Practice Guidelines and to Mitigate Adverse Interactions. KR4HC@VSL 2014: 45-61.



Sunday, February 27, 2011

The Greening of the HL7 CDA

I attended the HIMSS 2011 Conference this week in Orlando, FL. The GreenCDA was one of the big themes at the HL7 booth. The goal of the HL7 GreenCDA project is to provide a simple intermediary XML representation of the CDA to facilitate quick learning and ease of use for developers building healthcare data exchange solutions. Using the GreenCDA should not require prior knowledge of the HL7 Reference Information Model (RIM) and the associated model refinement process.

Developers should be able to generate code from the GreenCDA XML schema using data binding tools in any programming language of their choice. It should also be possible to create a round-trip transformation between the GreenCDA and the CDA. These requirements also apply to CDA implementations such as the HITSP C32. The GreenCDA will be available as an HL7 Implementation Guide, and the HL7 Structured Documents Working Group recently issued a GreenCDA wire format position statement.

In a previous post entitled "XML Processing in Healthcare Applications", I described some of the issues with the HL7 CDA and HITSP C32 XML structure and suggested some ideas on dealing with the complexity of the CDA schema and C32 generation process. In this post, I will share some thoughts on what can be done to ensure that the GreenCDA lives up to its full potential as the answer to the simplification challenge in healthcare data exchange standards.

XML Schemas In the Software Development Lifecycle

The XML schema is an important part of the service contract in Service Oriented Architecture (SOA). Service contracts also include the WSDL and WS-Policy documents. Using the recommended contract-first approach to web services development, developers generate client as well as server code using various tools and APIs in their native programming language and framework. Even when not using a pre-existing industry XML schema, the contract-first approach allows developers to decouple the service contract from platform-specific idiosyncrasies and adhere to cross-platform interoperability standards such as the WS-I Basic Profile.

On the Java platform, JAX-WS and JAXB allow developers to generate code from the WSDL and XML schema with tools like WSDL2Java.

On the .NET platform, the Windows Communication Foundation (WCF) and Visual Studio provide data binding tools out of the box, such as Svcutil. There is also an open source tool called WSCF.blue specifically designed to facilitate contract-first web services development on the .NET platform.

The GreenCDA XML schema could also be used in support of the "Canonical Data Model" enterprise integration pattern. Enterprise data architects typically extend industry XML schema components to satisfy custom needs.

Finally, the PCAST Report released in December 2010 recommended a universal exchange language that is "structured as individual data elements, together with metadata that provide an annotation for each data element". The report suggests that the metadata attached to each of these data elements

"...would include (i) enough identifying information about the patient to allow the data to be located (not necessarily a universal patient identifier), (ii) privacy protection information—who may access the mammograms, either identified or de-identified, and for what purposes, (iii) the provenance of the data—the date, time, type of equipment used, personnel (physician, nurse, or technician), and so forth."

Put together, these requirements argue in favor of a GreenCDA XML schema that supports the following:

  • Reusability
  • Extensibility
  • A well-defined versioning strategy
  • Seamless code generation in a variety of programming languages and development frameworks
  • A metadata facility per the PCAST recommendations.


Designing for Reuse and Extensibility

I suggest that the GreenCDA use only global, named simple and complex types to facilitate reuse and extensibility; in other words, anonymous type definitions should be avoided. Extensibility is typically implemented through the <xsd:extension> element. Reuse can also be achieved by assembling logically related schema components into separate schema documents and using the <xsd:include> and <xsd:import> constructs.

Common XML schema components (also called core components) such as HL7 datatypes as well as person, address, and organization should be in a separate schema file, ideally under a different namespace than the target namespace of the GreenCDA itself.
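A minimal sketch of this style, using invented component names rather than actual GreenCDA ones:

<!-- core.xsd: reusable, documented, globally named types -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:core="urn:example:greencda:core"
            targetNamespace="urn:example:greencda:core"
            elementFormDefault="qualified">

  <xsd:complexType name="PersonType">
    <xsd:annotation>
      <xsd:documentation>A named, reusable core component.</xsd:documentation>
    </xsd:annotation>
    <xsd:sequence>
      <xsd:element name="name" type="xsd:string"/>
      <xsd:element name="birthDate" type="xsd:date"/>
    </xsd:sequence>
  </xsd:complexType>

  <!-- Extension point: derived types add elements without touching the base -->
  <xsd:complexType name="PatientType">
    <xsd:complexContent>
      <xsd:extension base="core:PersonType">
        <xsd:sequence>
          <xsd:element name="medicalRecordNumber" type="xsd:string"/>
        </xsd:sequence>
      </xsd:extension>
    </xsd:complexContent>
  </xsd:complexType>

</xsd:schema>

A GreenCDA document schema in its own target namespace would then pull these core components in with <xsd:import>.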


Component Naming and Documentation

It would be nice to have different naming conventions for types versus elements and attributes. Also, schema component names should be spelled out for readability; a component name like "ivlTs" is not obvious to someone who is not familiar with HL7 datatypes.

Each type, element, or attribute should have a required <xsd:annotation> child element which describes its semantics in a child <xsd:documentation> element. In other words, all schema components should be documented.


Support for Data Binding Tools

Certain features of the XML Schema language such as mixed content models, <xsd:choice>, and dynamic type substitution with xsi:type are not well supported by various XML databinding tools. The need to use these constructs to accurately express the GreenCDA XML data structure should be balanced against the ability to seamlessly generate code from the GreenCDA XML schema using various XML databinding tools.
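For example, a content model like the following (a contrived sketch) combines mixed content and <xsd:choice>, which many databinding tools map to weakly typed, awkward generated code:

<!-- Mixed content plus choice: JAXB and similar tools typically collapse
     this into a single untyped list-of-content property -->
<xsd:complexType name="ObservationValueType" mixed="true">
  <xsd:choice>
    <xsd:element name="quantity" type="xsd:decimal"/>
    <xsd:element name="code" type="xsd:string"/>
  </xsd:choice>
</xsd:complexType>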

Before the GreenCDA is released for production use, I suggest at least two open source reference implementations on two different development platforms (such as Java and .NET), covering the end-to-end web services development cycle using the specific tooling provided by the respective platforms.


What Can Be Learned From the National Information Exchange Model (NIEM)

The ONC Standards and Interoperability Framework is leveraging the NIEM from a process perspective. However, I believe there is much to be learned from the design of the NIEM as an XML data exchange standard. This does not imply that the GreenCDA should use the NIEM Core. It simply means that the healthcare domain can leverage certain NIEM design principles that are not only backed by advanced research (at Georgia Tech Research Institute) in XML schema modeling, but are also proven by the numerous government agencies using the NIEM.

The NIEM embodies recognized XML Schema design patterns in its Naming and Design Rules (NDR). The NIEM provides a Schematron-based tool to automatically validate XML schemas against the rules defined in the NDR. For example, the Schematron schema can enforce component naming conventions or the requirement to document every schema component.
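For instance, a Schematron rule in this spirit (a sketch, not the actual NIEM NDR rule set) can flag any named schema component that lacks documentation:

<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron" queryBinding="xslt2">
  <sch:ns prefix="xsd" uri="http://www.w3.org/2001/XMLSchema"/>
  <sch:pattern>
    <!-- Every named element or type must carry documentation -->
    <sch:rule context="xsd:element[@name] | xsd:complexType[@name] | xsd:simpleType[@name]">
      <sch:assert test="xsd:annotation/xsd:documentation">
        The component named '<sch:value-of select="@name"/>' must be documented.
      </sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>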

The PCAST Report says:
"We think that a universal exchange language must facilitate the exchange of metadata tagged elements at a more atomic and disaggregated level, so that their varied assembly into documents or reports can itself be a robust, entrepreneurial marketplace of applications."

The NIEM defines an extensible metadata facility for adding metadata to any data element, in the spirit of the PCAST recommendations. The NIEM itself supports the exchange of "data items" at any level of granularity. These XML Schema design patterns are universal and can be applied to any domain, including healthcare.

Thursday, February 3, 2011

A Therapeutic Layered Cake

With all the talk about the PCAST Report, I've been doing some systems thinking on semantic interoperability in healthcare IT. Trying to put all the pieces together, I remembered Tim Berners-Lee's "Semantic Web Layer Cake".




The Semantic Web Layer Cake has gone through several iterations over the years (see James Hendler's presentation on that subject). However, I think it can still be very helpful in visualizing a unified framework for addressing the challenges of semantic interoperability in healthcare IT.

As we move to Stage 2 of Meaningful Use, I believe Clinical Decision Support (CDS) will take center stage. Beyond currently used XML-based data structures (such as HL7 v3 messages), this will put an increased emphasis on medical terminologies, ontologies, and knowledge representation in OWL. For example, ICD-11 is being developed using OWL to allow consistency checking and linking to other biomedical terminologies and ontologies. Equally important to knowledge representation, but not shown in the layer cake above is the Simple Knowledge Organization System (SKOS) specification.

In a report entitled "Semantic Interoperability Deployment and Research Roadmap", Alan Rector summarized the difference between the notions of ontology, knowledge representation, and data model:

  • Ontology – A representation of what is universally true, including what is true by definition

  • Knowledge Representation or "Background knowledge resource" – a representation of what is generally true, or widely known to be true in some specific instance. In general, the knowledge representation is formulated in terms of and indexed by the Ontology.

  • Information model or Data model – a model of how information is structured in a given software system, message, or electronic health record. In general, the data structures carry codes for the ontology as their content.

Clinical guidelines are published in the form of narrative text, sometimes with an evaluation algorithm. The translation of those guidelines into an executable representation is a complex and costly process. Several formalisms and standards have been proposed such as the Arden Syntax, GLIF, GELLO, and GEM. However, none of these standards has been widely adopted. Developed with inputs from the Business Rules, Logic Programming, and Semantic Web communities, the W3C Rule Interchange Format (RIF) can help with the interchange of executable Clinical Decision Support (CDS) rules in addition to adding reasoning capabilities to patient records. This example shows how decision support rules could be exchanged between two rules engines (Drools and Jess) using the RIF PRD syntax, a standard XML serialization format for production rule languages.

Existing patient records marked up in XML HITSP C32 or ASTM CCR can be lifted into RDF statements (with XSLT or XQuery for example) and queried using SPARQL.
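A minimal sketch of that lifting step with XSLT2, assuming a simplified patient summary structure rather than the full C32 markup:

<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:ex="http://example.org/clinical#">
  <!-- Lift each medication entry into an RDF resource -->
  <xsl:template match="/patientSummary">
    <rdf:RDF>
      <xsl:for-each select="medications/medication">
        <rdf:Description rdf:about="urn:example:medication:{@id}">
          <ex:drugName><xsl:value-of select="name"/></ex:drugName>
          <ex:code><xsl:value-of select="code"/></ex:code>
        </rdf:Description>
      </xsl:for-each>
    </rdf:RDF>
  </xsl:template>
</xsl:stylesheet>

The resulting RDF can then be loaded into a triple store and queried with SPARQL alongside other linked biomedical data.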

Proof, Trust, and Cryptography are currently being addressed by various standards and specifications in the healthcare industry, notably the OASIS Cross-Enterprise Security and Privacy Authorization (XSPA) profiles of XACML, SAML, and WS-Trust.

On the User Interface side, I see HTML5 giving both Flex and Silverlight a run for their money in the next few years. This will be driven in part by the demand for mobile health (mHealth).

Saturday, January 22, 2011

XML Processing in Healthcare Applications

Meaningful Use certification requires the ability to create patient summaries in either C32 or CCR format. One of the most frequently asked questions on the HL7 Structured Documents mailing list relates to processing the CDA XML schema with data binding tools such as JAXB or Castor. Initially, people are not able to generate Java classes with JAXB at all. After some changes to the schema, JAXB finally works and creates hundreds of classes that are not easy to work with or maintain. Then someone suggests using the Java-based Model-Driven Health Tools (MDHT) CDA tools. You face additional headaches if you're not developing on the Java platform.

In a paper presented at the Balisage 2009 conference, a team of engineers who implemented the "Laika" C32 compliance testing tool described the issues with the CDA and C32 XML structure:

  • Repeated use of overly abstract data structures: The HL7 CDA defines a number of very generic objects that are used to represent information in a given document. Differing information, such as medications and conditions, are represented using the same XML elements with very subtle changes in their nesting and attributes. This makes a CDA document difficult to process.

  • Underspecified implementation, including lack of a normative schema: While there is an XML schema for the HL7 CDA, a final schema does not exist for the HITSP C32 or other CDA-based documents due to their use of attributes for selecting templates. Thus, defining schemas for these documents is impossible. As a result, CDA-based constructs such as HITSP C32 cannot be automatically validated by XML parsers; standard object mapping tools, such as XML Beans or JAXB, cannot be used.

  • Ambiguous data types: Data can be represented in multiple ways in a CDA document. Consumers of CDA documents must, therefore, write software that handles any of the numerous permutations of these data types. This leads to bloated software, or more likely, software that does not implement the full specification and experiences interoperability problems when it receives data in an unexpected format.

  • Steep and long learning curve: Mastery of the CDA and its many specifications and constructs takes an experienced software engineer many months to achieve. Once learned, it is very cumbersome to employ in robust software applications and services. These difficulties drive up the cost and time to develop and maintain health care software, thus reducing the pace of innovation.

In a previous post entitled "The Future of Healthcare Data Exchange Standards", I suggested some ideas on how to develop standard XML schemas that support the software development process as opposed to hindering it. Since we're not there yet, in this post I will suggest some ideas on dealing with the complexity of the CDA schema and C32 generation process.

The key is to leverage the power of XML-related technologies such as XPath2, XSLT2, XQuery, XProc, ISO Schematron, and even XML Schema 1.1 (for assertions and business rule constraints) to simplify the task. First, generate a simple, perhaps flat XML representation (let's call it simpleC32) of the patient summary from your domain objects or database (through a data transfer object or DTO, for example). That simpleC32 contains all the content needed to populate the C32 templates and generate a valid C32 document. You can create your own XML schema for your simpleC32 and use it for validation and data binding.
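As an illustration, a simpleC32 medication entry could be as flat as this (an invented design, not a published schema; the codes are placeholders):

<patientSummary xmlns="urn:example:simplec32">
  <patient>
    <name>Jane Doe</name>
    <birthDate>1970-01-01</birthDate>
  </patient>
  <medications>
    <!-- One element per medication; no nested template machinery -->
    <medication id="m1">
      <code codeSystem="RxNorm" code="00000"/>
      <name>Example Drug 10mg oral tablet</name>
      <startDate>2010-06-01</startDate>
    </medication>
  </medications>
</patientSummary>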

Once you have a valid simpleC32 document, you can use XSLT2 to transform the patient summary from your simpleC32 representation into a C32 document that can be validated against the NIST Meaningful Use C32 Validator. This is roughly the idea behind the GreenCDA project; use it as an inspiration for creating a simple representation of the C32. You can even use the GreenCDA XML schema as your simpleC32, but don't hesitate to create your own if the GreenCDA does not work for you: the target is still the C32, and the idea is to have an intermediary representation (an Adapter) that makes your life easier. This approach also allows you to isolate your domain model and prevent the complexity of the C32 data model from leaking into your domain layer (see my previous post on the concept of Anti-Corruption Layer in Domain-Driven Design).
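The transformation step then maps each flat element onto the corresponding CDA templates. A heavily abbreviated XSLT2 sketch (real C32 entries require templateIds, statusCode, effectiveTime, and more):

<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:s="urn:example:simplec32"
    xmlns="urn:hl7-org:v3">
  <!-- Map a simpleC32 medication onto a CDA substanceAdministration entry -->
  <xsl:template match="s:medication">
    <entry>
      <substanceAdministration classCode="SBADM" moodCode="EVN">
        <consumable>
          <manufacturedProduct>
            <manufacturedMaterial>
              <!-- 2.16.840.1.113883.6.88 is the RxNorm code system OID -->
              <code code="{s:code/@code}" codeSystem="2.16.840.1.113883.6.88">
                <originalText><xsl:value-of select="s:name"/></originalText>
              </code>
            </manufacturedMaterial>
          </manufacturedProduct>
        </consumable>
      </substanceAdministration>
    </entry>
  </xsl:template>
</xsl:stylesheet>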

Why is this approach not used more often? Some developers who code in imperative programming languages (such as Java, C#, or JavaScript) are not comfortable with declarative programming in languages like XSLT2 and XQuery. I've recently seen a Java developer use JAXB to create hundreds of classes and thousands of hard-to-maintain lines of code for a simple transformation from the CDA to a different target XML schema.

The basic difference between declarative (and functional) programming languages and imperative languages is that the former specify the "what" (the intent) as opposed to the "how" (the algorithm). However, declarative programming with XSLT2 and XQuery can be mastered through training and practice: see my previous posts entitled "In Defense of XSLT", "Why XProc Rocks", and "Putting XQuery to Work in Healthcare".

While Java and C# are general-purpose languages, XSLT2, XQuery, and XProc are based on the XQuery 1.0 and XPath 2.0 Data Model (XDM) and are specifically designed for manipulating XML documents. This is particularly helpful when dealing with a complex and deep structure such as the HL7 CDA and other HL7 V3 messages. These XML-centric processing languages use XPath2 to navigate the XML tree. In general, consider using them in the following cases:

  • Applications that require dealing with a complex industry data exchange XML schema which is not easy to process with your databinding and other development tools. In that case, create an intermediary simple XML representation and map it to the industry data exchange XML schema using XSLT2 or XQuery (XQuery is not just for querying native XML databases; it is also a powerful language for processing XML documents).

  • Applications that require translation from an XML schema to another target XML schema (for example a mapping from the HL7 CCD to the ASTM CCR or from the C32 to XHTML).

  • Applications that require translation from an XML representation to a non-XML representation and round-trip (for example HL7 v2.x to HL7 V3, C32 XML to JSON, or C32 to a non-XML serialization of RDF).

  • Consider using XProc if you need to chain multiple XML processing steps such as: query a data source with XQuery, expand XIncludes, validate against XML schema, validate against a schematron schema, transform with XSLT2, generate a PDF document with XSL FO, and so on.
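A sketch of such a pipeline in XProc (file names are placeholders):

<p:pipeline xmlns:p="http://www.w3.org/ns/xproc" version="1.0">
  <!-- Expand XIncludes in the source document -->
  <p:xinclude/>
  <!-- Validate the structure against an XML schema -->
  <p:validate-with-xml-schema>
    <p:input port="schema">
      <p:document href="simpleC32.xsd"/>
    </p:input>
  </p:validate-with-xml-schema>
  <!-- Check business rules with Schematron -->
  <p:validate-with-schematron>
    <p:input port="schema">
      <p:document href="c32-rules.sch"/>
    </p:input>
  </p:validate-with-schematron>
  <!-- Transform to the final C32 document -->
  <p:xslt>
    <p:input port="stylesheet">
      <p:document href="simpleC32-to-c32.xsl"/>
    </p:input>
  </p:xslt>
</p:pipeline>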

The Universal Exchange Language proposed by the PCAST Report could be an opportunity to address the issues listed above.

Wednesday, September 22, 2010

The Future of Healthcare Data Exchange Standards

The Meaningful Use Final Rule has finally been released, and I think now is a good time to start thinking about where we want to be five years from now in terms of healthcare data exchange standards.


Listening to the Concerns of Implementers


I think it is very important that we listen to the concerns of the implementers of the current set of standards. They are the users of those standards, and good software engineers like to get feedback from their end users to fix bugs and improve their software. The following post details some of the concerns out there regarding the current HL7 v3 XML schema development methodology, and I believe they should not be ignored: Why Make It Simple If It Can Be Complicated?


Using an Industry Standard XML Schema: A Developer's Perspective

XML documents are not just viewed by human eyeballs through the use of an XSLT stylesheet. The XML schema has become an important part of the service contract in Service Oriented Architecture (SOA). SOA has emerged during the last few years as a set of design principles for integrating applications within and across organizational boundaries.

In the healthcare sector, for example, the Nationwide Health Information Network (NHIN) and many Health Information Exchanges (HIEs) are being built on a decentralized service-oriented architecture using web services standards such as SOAP, WSDL, WS-Addressing, MTOM, and WS-Policy. The Web Services Interoperability (WS-I) Basic Profile and Basic Security Profile provide additional guidelines that should be followed to ensure cross-platform interoperability, for example between .NET and Java EE platforms. Some of the constraints defined by the WS-I Basic Profile are related to the design of XML schemas used in web services.

An increasingly popular alternative to the WS-* stack is RESTful web services. The REST architectural style does not mandate the use of web services contracts such as XML schema, WSDL, and WS-Policy. However, the Web Application Description Language (WADL) has been proposed to play the role of service contract for RESTful web services. This post will not engage in the SOAP vs. REST debate, except to mention that both are used in implementation projects today.

On top of these platform-agnostic web services standards, each platform defines a set of specifications and tooling for building web services applications. In the Java world, these specifications include:

  • The Java API for XML Web Services (JAX-WS)
  • The Java Architecture for XML Binding (JAXB)
  • The Java API for RESTful Web Services (JAX-RS)
  • The Streaming API for XML (StAX).

JAX-WS and JAXB allow developers to generate a significant amount of Java code from the WSDL and XML schema with tools like WSDL2Java. The quality of a standard XML schema largely depends on how well it supports the web services development process, and that's why I believe that creating a reference implementation should be a necessary step before the release of a new standard. An industry standard XML schema that is hard to use translates directly into high implementation costs resulting from development project delays, for example.


Embracing Design Patterns

Beyond our personal preferences (such as the NIEM vs. HL7 debate), there are well established engineering practices and methodologies that we can agree on. In terms of software development, design patterns have emerged as a well known approach to building effective software solutions. For example, the following two books have had a strong influence in the fields of object-oriented design and enterprise application integration respectively (and they sit proudly on my bookshelf):

  • Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides
  • Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions by Gregor Hohpe and Bobby Woolf.

An interesting design pattern from the "Enterprise Integration Patterns" book that is relevant to the current discussion on industry standard XML schemas is the "Canonical Data Model" design pattern. Enterprise data architects tasked with creating such canonical data models often reuse components from industry standard XML schemas. That approach makes sense but cannot succeed if the industry standard XML schema is not designed to support reusability, extensibility, and a clearly specified versioning strategy.


Modeling Data In Transit vs. Data at Rest

Modeling data at rest (e.g. data stored in relational databases) is a well established discipline. For example, data modeling patterns for relational data have been captured by Len Silverston and Paul Agnew in their book entitled "The Data Model Resource Book, Vol. 3: Universal Patterns for Data Modeling".

There is a need to apply the same engineering rigor to modeling data in transit (e.g. data in web services messages). The XML Schema specification became a W3C Recommendation more than nine years ago, and I think there is now enough implementation experience to start building consensus around a set of XML Schema design patterns. The latter should address the following issues:

  1. Usability: the factors that affect the ability of an average developer to quickly learn and use an XML schema in a software development project
  2. Component Reusability
  3. Web services cross-platform interoperability constraints. Some of those constraints are defined by the WS-I Basic Profile
  4. Issues surrounding the use of XML databinding tools such as JAXB. This is particularly important since developers use those tools for code generation in creating web services applications. It is well known that existing databinding tools do not provide adequate support for all XML Schema language features
  5. Ability to manipulate instances with XML APIs such as StAX
  6. Schema extensibility, versioning, and maintainability.

These design patterns should be packaged into a Naming and Design Rules (NDR) document to ensure a consistent and proven approach to developing future XML vocabularies for the healthcare domain.

The XML Schema 1.1 specification is currently a W3C Candidate Recommendation. It defines new features such as conditional type assignments and assertions, which allow schema developers to consolidate structural and business rule constraints into a single schema. This could help alleviate some of the pain associated with the multiple layers of Schematron constraints currently specified by HITSP C32, IHE PCC, and the HL7 CCD (sometimes referred to as the "HITSP Onion"). Saxon already supports some of these new features.
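For example, an XML Schema 1.1 assertion can capture, directly in the schema, a rule that previously needed a separate Schematron layer (element names are illustrative):

<xsd:complexType name="MedicationActivityType">
  <xsd:sequence>
    <xsd:element name="startDate" type="xsd:date"/>
    <xsd:element name="stopDate" type="xsd:date" minOccurs="0"/>
  </xsd:sequence>
  <!-- Business rule enforced by the schema itself: no stop before start -->
  <xsd:assert test="not(stopDate) or (startDate le stopDate)"/>
</xsd:complexType>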

Developing Standards the Way We Develop Software

The final point I'd like to make is that we should start creating healthcare standards the same way we develop software. I am a proponent of agile development methodologies such as Extreme Programming and Scrum. These methodologies are based on practices such as user stories, iteration (sprint) planning, unit test first, refactoring, continuous integration, and acceptance testing. Agile programming helps create better software and I believe it can help create better healthcare standards as well.