Informaticians Help Fine-Tune Pediatric Clinical Guidelines for EHRs

A group of informaticians has been helping the American Academy of Pediatrics improve the quality of the clinical guidelines and policy statements that standardize the delivery of care. Regenstrief Institute research scientist Randall Grout, MD, MS, and Stephen Downs, MD, MS, of Wake Forest spoke with Healthcare Innovation to explain how the Partnership for Policy Implementation (PPI) works to eliminate ambiguity in clinical recommendations, which in turn eases the implementation of guideline recommendations by physicians and EHR developers.

In addition to his role at Regenstrief, Grout is also the director of health informatics at Eskenazi Health and an assistant professor of pediatrics at Indiana University School of Medicine.

Downs is a professor and associate director for clinical informatics at the Center for Biomedical Informatics and vice chair for learning health systems in the Department of Pediatrics at Wake Forest University. He was the founding director of Children’s Health Services Research at Indiana University, where he retains adjunct faculty status. He is a co-developer of the Child Health Improvement through Computer Automation system, known as CHICA. The system helps pediatricians maximize the time they have with their patients and address care guidelines by using information gathered from electronic health records and from parents to set an agenda for the visit based on the specific needs of the child.

Healthcare Innovation: Has the role of medical associations such as the American Academy of Pediatrics in the development and implementation of practice guidelines evolved in recent years, and has it been affected by the widespread use of EHRs?

Grout: I believe that the role as it applies to electronic health records has definitely changed. These associations have always tried to be an authoritative and clear voice for good evidence-based medicine, so they have produced guidelines through various mechanisms, typically a journal or a paper article. But now, as we shift to electronic health records, more and more of practice is done electronically. Orders are placed electronically. A lot of decision support is happening electronically. Having that at our electronic fingertips makes it much more effective to implement these guidelines. That is where something like the Partnership for Policy Implementation comes in, to say: “Let’s take these guidelines that we are building as a group of pediatric experts and make them implementable in the electronic health record.” The main work of the PPI is to help with this implementation process. I think many professional associations and societies are seeing the importance of taking their recommendations and putting them in a translatable format that someone can use at the point of care.

HCI: Did the PPI grow out of seeing problems with guidelines that were not clear enough or that had contradictions?

Downs: The PPI arose because an informatician colleague of mine, Paul Biondich, had been invited to serve on one of these guideline committees. He came back from one of his meetings and said: “I don’t know how to interpret this … everything is being stated in these vague terms, like ‘you should evaluate children regularly’ or ‘you should pay additional attention to children who exhibit this problem.’ And nobody knows what that means.” I was thinking in a very specific way about how to put this on a computer. You can’t program a computer to remind you to do something regularly. You have to decide what that really means. So I said: “Well, why don’t we encourage them to use an algorithm, a real formal flow diagram, to describe care,” and they really loved it, right? They thought it was very useful.

At that time, we said this is probably useful for all AAP guidelines. So we approached the American Academy of Pediatrics, and the federal government actually helped us a bit. The Maternal and Child Health Bureau gave a small grant to the American Academy of Pediatrics to fund a group of informaticians to get together and start developing processes so that all of its clinical guidelines and reports would follow this type of recommendation. I will say that we were not the only ones thinking about this. A Yale researcher named Rick Shiffman had been thinking and working for years on how to make guidelines unambiguous and easy to interpret.

HCI: I read in your article on this topic in the journal Pediatrics that informaticians help through the use of a variety of tools that support guideline authorship. What kinds of tools, and how does that work?

Downs: One of them is clinical algorithms, as I mentioned. For guidelines that recommend a specific flow of care, such as here is the diagnostic process or here is the therapy process, we will produce these standardized flow diagrams. The idea is that if you show a committee a very precise description of what you think they are recommending for care, it is extremely useful as a communication device, because either people will say, “Oh, yes, that is exactly what I meant,” or you will uncover a lot of hidden disagreements in the committee. So that is a very useful tool.
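The standardized flow diagrams Downs describes are, to an implementer, decision trees over patient data. As a rough illustration of the precision they force, here is a minimal sketch, assuming invented Action and Decision node types and an invented two-step branch; it is not drawn from any actual AAP algorithm.

```python
from dataclasses import dataclass
from typing import Callable, Union

# A minimal, hypothetical way to represent a clinical algorithm as a flow
# diagram: decision nodes branch on explicit, computable criteria, and
# action nodes name who does what. The branch below is invented for
# illustration and is not an actual AAP algorithm.

@dataclass
class Action:
    actor: str   # who performs the step (never left implicit)
    step: str    # the concrete action to take

@dataclass
class Decision:
    question: str
    test: Callable[[dict], bool]        # explicit criterion, not "regularly" or "young"
    if_yes: Union["Action", "Decision"]
    if_no: Union["Action", "Decision"]

def run(node: Union[Action, Decision], patient: dict) -> Action:
    """Walk the flow diagram for one patient until an action node is reached."""
    while isinstance(node, Decision):
        node = node.if_yes if node.test(patient) else node.if_no
    return node

# Invented example: the age cutoff and follow-up interval are stated exactly.
algorithm = Decision(
    question="Is the child younger than 24 months?",
    test=lambda p: p["age_months"] < 24,
    if_yes=Action(actor="clinician", step="prescribe first-line antibiotic"),
    if_no=Decision(
        question="Are symptoms severe?",
        test=lambda p: p["severe_symptoms"],
        if_yes=Action(actor="clinician", step="prescribe first-line antibiotic"),
        if_no=Action(actor="clinician",
                     step="offer observation and schedule follow-up within 48-72 hours"),
    ),
)

print(run(algorithm, {"age_months": 30, "severe_symptoms": False}).step)
```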

Another was produced by Rick Shiffman at Yale, called BRIDGE-Wiz, which is actually a piece of web-based software that helps create language for these key action statements that Randy mentioned, so that they are precise and unambiguous. It actually asks you questions; you answer the questions, and then it proposes different ways you could word the statement that would be unambiguous.

Grout: All of these tools add informatics expertise to the process of taking guidelines and putting them into practice. Sometimes it is just a meticulous, detail-oriented eye, combined with the experience of having programmed things into an electronic health record before, and an understanding of whether what I am reading will translate well. You can look at a sentence and say: if we format it in this structure, with this kind of standardized vocabulary, these are the action commands and these are the decision words. That can help you determine: Is it a must? Is it a should? Is it a may?

HCI: So, in a way, it is a kind of linguistic challenge …

Downs: It is definitely linguistic. And we really have some words or phrases that are considered a trigger. We do not like: “You should consider doing this”, because considering doing something is not really an action. But you see it all the time in clinical guide. We also seek the use of passive voice, because passive voice masks who the actor is in the recommendation. So, if he says: “The child should receive an antibiotic”, who is supposed to give them the antibiotic, right? Instead of saying that the doctor must prescribe an antibiotic. Every time we see a document that recommends that a doctor do something with a patient or family, we want me to say who should do what and under what circumstances, right?
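The kinds of checks Downs describes here, flagging vague verbs like “consider” and wording that hides the actor, are easy to picture in code. The sketch below is purely illustrative: the phrase lists, the ACTOR_MASKING pattern, and the check_key_action_statement function are assumptions of mine, not part of BRIDGE-Wiz or any published PPI tooling.

```python
import re

# Hypothetical, illustrative checks in the spirit of what Downs describes.
# The phrase lists and patterns below are assumptions, not an actual
# PPI or BRIDGE-Wiz rule set.

VAGUE_PHRASES = ["consider", "regularly", "as appropriate", "pay attention to"]
# Constructions that hide the actor: true passive voice ("should be given")
# or patient-as-subject phrasing ("the child should receive ...").
ACTOR_MASKING = re.compile(r"\b(shall|should|must|may)\s+(be\s+\w+ed|receive)\b", re.IGNORECASE)
OBLIGATION_LEVELS = {"must": "must", "shall": "must", "should": "should", "may": "may"}

def check_key_action_statement(statement: str) -> list:
    """Return warnings about vague or ambiguous wording in a recommendation."""
    warnings = []
    lowered = statement.lower()

    # "Consider doing X" is not an executable action.
    for phrase in VAGUE_PHRASES:
        if phrase in lowered:
            warnings.append(f"Vague phrase: '{phrase}' is not a concrete action")

    # The statement should name who acts, not just who is acted upon.
    if ACTOR_MASKING.search(statement):
        warnings.append("Actor is masked: say who should do what")

    # A well-formed statement carries exactly one obligation level.
    levels = {OBLIGATION_LEVELS[w] for w in OBLIGATION_LEVELS
              if re.search(rf"\b{w}\b", lowered)}
    if not levels:
        warnings.append("No obligation level (must / should / may) found")
    elif len(levels) > 1:
        warnings.append(f"Mixed obligation levels: {sorted(levels)}")

    return warnings

# The actor-masking statement from the interview vs. a rewritten version.
print(check_key_action_statement("The child should receive an antibiotic."))
print(check_key_action_statement(
    "The clinician should prescribe amoxicillin for confirmed acute otitis media."))
```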

HCI: We often write about clinical quality measures that come from CMS and other payers. I know they are working to digitize many of these measures now, and provider organizations and ACOs say it is very difficult to get them into the EHR. Are they dealing with some of the same problems as guideline developers, or different ones?

Downs: They are extremely closely related. In fact, a well-formed key action statement should say that under this circumstance, this actor should take this action, and that is essentially the same thing as a quality measure. The first part of that becomes the denominator, right? What are the circumstances? And the action becomes the numerator of any quality metric. So if you have a well-formed key action statement, and you have it electronically integrated into your EHR, every time the rule fires, something belongs in the denominator, and every time the user responds, you get a count in the numerator. Therefore, the act of building decision support for these recommendations automatically creates your quality metric at the same time.
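Downs’s point, that the circumstance in a well-formed key action statement is the measure’s denominator and the recommended action is its numerator, can be made concrete with a small sketch. The GuidelineRule class and its counters below are hypothetical illustrations, not a real EHR rules engine or eCQM API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch of Downs's point: the trigger condition of a
# decision-support rule is the quality measure's denominator, and the
# completed action is its numerator. GuidelineRule is an invented class,
# not a real EHR or eCQM interface.

@dataclass
class GuidelineRule:
    circumstance: Callable[[dict], bool]  # "under this circumstance..." -> denominator
    action_done: Callable[[dict], bool]   # "...this actor takes this action" -> numerator
    reminder: str
    denominator: int = 0
    numerator: int = 0

    def on_visit(self, encounter: dict) -> Optional[str]:
        """Evaluate the rule at the point of care; measurement is a side effect."""
        if not self.circumstance(encounter):
            return None                   # patient is not in the measure population
        self.denominator += 1
        if self.action_done(encounter):
            self.numerator += 1
            return None                   # action already done, no reminder needed
        return self.reminder              # show the reminder to the clinician

    def rate(self) -> float:
        """The quality metric falls out of the same rule definition."""
        return self.numerator / self.denominator if self.denominator else 0.0
```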

Grout: Yes, I was going to say that these quality measures and the recommendations are really just two sides of the same coin. So while we are trying to build a very actionable and unambiguous recommendation, the quality measure should be very obvious when you look at those same criteria.

HCI: But is this shift to electronic clinical quality measures really that difficult for providers?

Grout: It absolutely is difficult. I think the scope and volume of the CMS measures are what make it so difficult. For example, in our pediatrics space, our guidelines are often aimed at a certain population, a certain circumstance, a certain disease process, or something like that. So perhaps we have a narrower scope, but even within that scope, when you try to account for the edge cases in a flow diagram, imagine how the tree branches. If the scope is general health in the United States, as it is for CMS, you can imagine how many branches and use cases and edge cases there are to account for, and it becomes an immense job to try to program it. So you either get something very vague and broad, or you get something so unwieldy that it becomes almost impossible to program. I think the sheer complexity of trying to capture something so broad in that much detail is certainly a monumental task.

Downs: I think one of the other problems that is probably important is that people often struggle to find the data needed to compute these quality metrics. They will say: “Well, we don’t measure that.” And that is why, from the PPI’s point of view, what really should happen is that you have to go upstream. You have to say: “OK, if we have decided that what is really important is that we are going to screen all adolescents for depression, then we have to go upstream and have a way to capture that information.” And to our earlier point, as long as you are going to do that, why don’t you build a decision support system that reminds people about the depression screening at each visit? Then your decision support system is capturing your denominator and numerator.
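Under the same assumptions as the hypothetical GuidelineRule sketch above, the adolescent depression screening example Downs gives would just be one configuration of that rule: the reminder that nudges care upstream and the measure that reports on it come from a single definition. The age range and the phq9_documented field are invented for illustration.

```python
# Configure the hypothetical GuidelineRule for adolescent depression screening.
depression_screen = GuidelineRule(
    circumstance=lambda e: 12 <= e["age_years"] <= 18,       # denominator: adolescent visits
    action_done=lambda e: e.get("phq9_documented", False),   # numerator: screening documented
    reminder="Adolescent visit: depression screening not yet documented",
)

for encounter in [
    {"age_years": 15, "phq9_documented": True},
    {"age_years": 16, "phq9_documented": False},
    {"age_years": 7},
]:
    message = depression_screen.on_visit(encounter)
    if message:
        print(message)   # the point-of-care reminder Downs describes

# The same counters double as the quality measure.
print(f"Screening rate: {depression_screen.numerator}/{depression_screen.denominator}")
```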

HCI: So you are saying they should have the reminder to prompt the action first, and then you can measure whether it is done frequently enough?

Downs: Exactly. This is my whole argument for CHICA. If something is important enough to measure, then it is important enough to go upstream and work to improve it. Then measuring it is not a big problem, because you have already built it into your system for improving it. That is not the way the system currently works. The way the system currently works is that someone decides, here is a quality metric, and the ACOs and the clinics set their hair on fire because they say we now have to work to improve this, and then they drop all the other balls they are carrying and focus on that one thing. We believe that if we went upstream, we could simplify things.
