On March 1, 2019, the B.C. Health Information Management Professionals Society (BCHIMPS) is hosting its 2019 Spring Education Forum, focusing on Prediction, Prevention and Promotion. The event, which will be facilitated by Yoel Robens-Paradise, Gevity’s Vice-President, Canada West – B.C., will include a presentation by Seattle, WA-based KenSci on integrating machine learning and artificial intelligence (AI) into clinical workflows.

We spoke to Dr. Greg McKelvey, KenSci’s Chief Medical Officer, about the work his company is doing and what attendees can look forward to learning from KenSci’s presentation.

Q: Tell us a bit about your background and how you got involved with KenSci.

GM: My background is in preventive medicine and occupational and environmental health. I’ve been at KenSci since 2016 as the head of clinical informatics, where I’m in charge of the firm’s healthcare expertise. I manage a number of full- and part-time health informaticists, support user training and product implementation and delivery, conduct research and development, and participate in thought leadership activities, particularly in the domains of regulatory affairs and patient safety.

Q: Can you tell us a bit more about KenSci?

GM: KenSci is essentially a software platform company built by medical, technology and data experts to use data, machine learning and artificial intelligence (AI) to help health systems and clinicians predict health outcomes to better prevent and treat illness. It’s a spin-out from the University of Washington, and the company’s name is a combination of the word Ken, meaning knowledge, and Sci, as in the business end of science. The company has grown considerably from when I started, and we now have a staff of about 85 people.

Q: Do you have any Canadian clients?

GM: Not yet. We currently have customers in the U.S., including large managed care organizations; Asia, where we work with the Health Promotion Board of Singapore; Australia; and the UK, including the NHS, so a lot of that experience is highly relevant to the Canadian healthcare model.

Q: KenSci works at different levels of the healthcare system – from the system level down to the clinician level. How does your company’s offering provide benefits at each of these layers?

GM: Healthcare data is unique in terms of being complex and messy. We’ve done a lot of work so that data can be handled in the cloud at scale and machine learning can be used to look at data at different levels. Those insights – about, for example, which patients or which types of patients are driving readmissions or increasing costs – are provided at the executive level, enabling health system managers to recognize patterns, detect variations in care, and take steps to bring down costs and improve care at a system level. Individual or encounter-level information revolves around prediction – what is the likelihood, for example, that a patient will be hospitalized for a long time or readmitted within a certain timeframe? That information is provided back to care providers in near real time, so they are better supported to make informed decisions at the point of care.

Q: What will KenSci be speaking about at the BCHIMPS conference?

GM: First of all, I want to say that we’re really excited to be invited to participate in this event. We have a ton to learn about Canada! Jeff Lumpkin, our Vice-President of Customer Advising and Analytics, will be addressing the broad application of advanced analytics for tackling the opioid crisis – how, at the macro level, unsupervised machine learning, which uses pattern and anomaly detection, can help guide policies; and how, at the patient level, supervised machine learning, which is used more at the individual level for risk stratification, can help guide care decisions.
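To make that macro-level, unsupervised side of the distinction concrete, here is a minimal illustrative sketch of anomaly detection over aggregate prescribing patterns. The feature names and data are invented for illustration and are not drawn from KenSci’s platform; the sketch simply shows the kind of pattern and anomaly detection the presentation describes.

```python
# Illustrative only: unsupervised anomaly detection over aggregate
# prescribing patterns, using scikit-learn's IsolationForest.
# Feature names and data are hypothetical, not KenSci's.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# One row per prescriber: [monthly opioid scripts, avg daily dose (MME), % long-acting]
typical = rng.normal(loc=[40, 50, 0.2], scale=[10, 15, 0.05], size=(500, 3))
unusual = rng.normal(loc=[160, 180, 0.7], scale=[20, 30, 0.1], size=(10, 3))
X = np.vstack([typical, unusual])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = detector.predict(X)  # -1 = anomalous pattern, 1 = typical
print(f"Flagged {np.sum(flags == -1)} prescribers for review")
```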

Q: Can you elaborate on that concept of risk stratification in the context of predictive analytics?

GM: Basically, risk stratification involves assigning a probability of something bad happening in the future. We’re using supervised machine learning at the level of a patient or hospital encounter to anticipate outcomes. In the opioid space, for example, we’re looking at patients and things like the potential for opioid abuse, the risk of overdose, the probability of recovery, and even the anticipated timelines associated with those predicted outcomes.
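As a rough illustration of that patient-level risk stratification, a supervised classifier can be trained on historical encounters and then asked for a probability for a new encounter. This is a generic sketch with made-up features and synthetic data, not KenSci’s model.

```python
# Illustrative sketch of supervised risk stratification: predict the
# probability of an adverse outcome (e.g., readmission) per encounter.
# Features and data are hypothetical, not KenSci's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
# Hypothetical features: age, prior admissions, daily opioid dose (MME)
X = np.column_stack([
    rng.integers(18, 90, n),
    rng.poisson(1.0, n),
    rng.gamma(2.0, 20.0, n),
])
# Synthetic labels: more prior admissions and higher dose -> higher risk
logit = -4 + 0.02 * X[:, 0] + 0.6 * X[:, 1] + 0.02 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Risk score in [0, 1] for each held-out encounter
risk = model.predict_proba(X_test)[:, 1]
print("Highest-risk encounter score:", round(risk.max(), 2))
```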

Q: A lot of patient-related data comes from sources outside the EMR. What does that mean to the healthcare system, facility or clinician looking to harness AI to better understand their patient population?

GM: We’re very aware that most of the information we want to know about a person exists outside the walls of a facility or their patient records. It may also come from wearables or remote monitoring, and it may even be locked inside their genome or their behaviour. The silver lining, though, is that we are far from having plumbed the depths of the health record, so even if we’re missing data from those other sources, there is still a lot we can use to make accurate and useful predictions. The other valuable thing is that as those data sources become available, it’s easy to append them to existing algorithms.

Q: How is control over AI-assisted decision-making built into your platform? How does that work in a practical sense?

GM: Decision-making is the thing we’re trying to support; in fact, clinical decision support is most of what we facilitate. But it’s not a new field; we’re just adding a new layer of sophistication to the current state. Our platform has to ensure two things: one, that users can take action given the output, and two, that they understand how the output was determined. We’ve built functionality into our platform so the user can interrogate the output and confirm that it’s accurate. We also enable users to review the output against gold-standard clinical comparisons and benchmarks that don’t use machine learning.

Q: How do you see the issue of transparency in the collection and use of patient data?

GM: That’s extremely relevant to every one of us – we’re all patients at some point in our lives. It’s also highly relevant to us as a company, and we’ve invested a lot in making sure our algorithms are fair and that they perform appropriately, especially when they are applied to vulnerable populations. A lot of these checks are automated and built into the software, but to a certain degree transparency is also enforced through compliance and regulations such as the General Data Protection Regulation in the UK and the EU. In the U.S., we use de-identified information where possible, and where we use identified information, we follow the relevant security and privacy laws. We also obtain review board approvals in the research domain, so we’re largely building on existing processes and safeguards wherever possible.

Q: There is a lot of discussion about issues such as bias and accountability in AI. How does KenSci compensate for or reduce the likelihood of those problems?

GM: You have to be proactive; you have to look for it. We have invested a lot in ensuring everyone is aware of the risks; we’ve engineered processes and we monitor for it. We check the algorithms to ensure we have enough information on different subsets of populations and that our software’s predictions are holding true.
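One common way to operationalize that kind of check, shown here as a generic sketch rather than KenSci’s internal tooling, is to score a model’s predictions separately for each population subgroup and flag any group whose performance falls below an agreed threshold.

```python
# Generic sketch of a subgroup performance check: compare a model's
# discrimination (AUC) across population subsets. Data are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

# Each row: a patient's subgroup, true outcome, and the model's risk score
df = pd.DataFrame({
    "subgroup": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "outcome":  [1,   0,   1,   0,   1,   0,   0,   1],
    "score":    [0.9, 0.2, 0.7, 0.4, 0.6, 0.5, 0.3, 0.4],
})

auc_by_group = {
    group: roc_auc_score(g["outcome"], g["score"])
    for group, g in df.groupby("subgroup")
}

# Flag subgroups where predictions are not "holding true" as well
THRESHOLD = 0.80
for group, auc in auc_by_group.items():
    print(f"Subgroup {group}: AUC = {auc:.2f}")
    if auc < THRESHOLD:
        print(f"  Review needed: performance below {THRESHOLD:.2f}")
```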

Q: What is the one key takeaway you want people to get from the BCHIMPS presentation?

GM: Jeff does a great job of demystifying machine learning and AI and explaining how supervised and unsupervised machine learning can be incorporated as additional tools to bring great benefits to the patients and populations you care for.