Monday, November 11, 2013

Clinical Research Oriented Workshop (CROW) Meeting: November 7, 2013



Present:  Marianne Burke, Abby Crocker, Kairn Kelley, Rodger Kessler, Ben Littenberg, Connie van Eeghen
Guest:     Mark Kelly

Start Up: Technology assessment has to adjust to “letting the genie out of the bottle” – i.e., when a technology becomes so widely available in the field, or users demand access to it until they all have it, that there is no comparative control group.

1.                  Discussion: Rodger Kessler’s review of an evaluation tool for integrated behavioral health, using a previously developed “Lexicon” of integration
a.       Sites willing to participate:
                                                  i.      Community health centers (probably low scorers)
                                                ii.      Primary care sites
                                              iii.      Co-located primary care/behavioral health sites
                                              iv.      Other interested sites: 2 large health systems 
b.      Considering testing and validating the evaluation tool on different models of integrated behavioral care; this may become an NIH R01
                                                  i.      The validation phase must be independent of the tool’s use for evaluation
                                                 ii.      The Lexicon tool went through 3 rounds of “expert opinion” development and review
1.      Next: develop 3 scenarios for scoring, test on “expert opinion” panel
2.      Or, use willing sites (from above) to test
                                              iii.      Develop a relationship between evaluation scores and patient outcomes
c.       Validation as a process
                                                  i.      There is a Platonic ideal of the “Integrated Practice”; the tool measures how close any one practice is to that ideal. Integration falls on a spectrum; it is not a “yes/no” determination
                                                ii.      There are a variety of constructs associated with the ideal (“care team function,” “spatial arrangement”)
1.      The tool must address the constructs, and the measures in the tool must represent the paradigm of each construct. Furthermore, the measures must belong to the construct domains, and each domain must be represented by the measures (construct or domain validity)
2.      The measures in the tool must make sense (face validity)
3.      Separate measures of the same construct can demonstrate the degree to which evaluations converge, i.e., the experts’ own opinions and the experts’ use of the tool (convergent validity)
4.      A gold standard by which to evaluate the strength of a measure does not exist (no criterion validity)
5.      Does the language express the construct accurately and precisely? (sensibility)
6.      Each of these can be used to evaluate the strength of validity and the tasks that are needed
d.      Application to this study
                                                  i.      There are 5–7 domains
                                                 ii.      The present scale offers 4 choices per item (no middle choice), scored 0–3, which can be aggregated (mean or median) by domain
                                               iii.      The tool produces a profile of 5–7 scores, which can be used for quality improvement purposes
                                               iv.      The primary purpose of the tool is its ability to predict patient outcomes
1.      Test each domain relative to outcomes; evaluate the domains (see the second sketch after this outline):
a.       Referral rates
b.      Treatment initiation rates
c.       Health outcome (?)
d.      ED utilization
e.       Total cost
2.      The R01 should focus on developing this model, with an analysis plan (see the first sketch after this outline) that measures:
a.       Correlation of items within domain (should be high)
b.      Correlation of domains (should be somewhat high)
c.       Plan to remove items where correlation is very high
d.      Plan to add items where correlation is too low
e.       Action steps:
                                                  i.      Kairn will circulate an article on a framework of validation concepts
                                                ii.      Vignette study, to confirm consistent outcomes by experts
                                               iii.      Field test the tool at pilot sites
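
First sketch: a minimal, hypothetical illustration (in Python, with simulated data) of how the 0–3 item scores could be aggregated into a domain profile and how the planned item/domain correlation checks might look. The domain names, item counts, and cutoff values are illustrative assumptions, not decisions from the meeting.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated ratings: rows are practice sites, columns are Lexicon items grouped
# into two of the 5-7 domains (domain and item names are made up for illustration).
domains = {
    "care_team_function": ["ctf_1", "ctf_2", "ctf_3"],
    "spatial_arrangement": ["spa_1", "spa_2", "spa_3"],
}
items = [item for cols in domains.values() for item in cols]
scores = pd.DataFrame(
    rng.integers(0, 4, size=(30, len(items))),  # 4-choice scale scored 0-3, no middle option
    columns=items,
)

# Domain profile: aggregate (mean here; median also possible) the items in each domain.
profile = pd.DataFrame({name: scores[cols].mean(axis=1) for name, cols in domains.items()})
print(profile.head())

# (a) Correlation of items within each domain -- expected to be high.
for name, cols in domains.items():
    print(name)
    print(scores[cols].corr().round(2))

# (b) Correlation of domain scores -- expected to be somewhat high.
print(profile.corr().round(2))

# (c)/(d) Flag near-duplicate items (very high correlation) or domains whose items
# agree too weakly; the 0.9 and 0.2 thresholds are arbitrary placeholders.
for name, cols in domains.items():
    corr = scores[cols].corr().to_numpy()
    upper = corr[np.triu_indices_from(corr, k=1)]
    if (upper > 0.9).any():
        print(f"{name}: consider removing a redundant item")
    if (upper < 0.2).any():
        print(f"{name}: consider adding or revising items")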
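
Second sketch: an equally hypothetical illustration of the primary-purpose test, relating each domain score to a patient outcome. ED utilization is simulated here; referral rates, treatment initiation, or total cost would follow the same pattern, and the actual R01 analysis plan would specify the models and outcomes.

import numpy as np

rng = np.random.default_rng(1)
n_sites = 30
domain_names = ["care_team_function", "spatial_arrangement"]  # illustrative
domain_scores = rng.uniform(0, 3, size=(n_sites, len(domain_names)))

# Simulated outcome: ED visits per 1,000 patients, loosely tied to the first domain.
ed_visits = 400 - 60 * domain_scores[:, 0] + rng.normal(0, 40, n_sites)

# Test each domain relative to the outcome (simple correlation as a first pass).
for j, name in enumerate(domain_names):
    r = np.corrcoef(domain_scores[:, j], ed_visits)[0, 1]
    print(f"{name}: correlation with ED utilization = {r:+.2f}")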

2.                  Next Workshop Meeting(s): Thursdays, 11:45 a.m. – 1:15 p.m., at Given Courtyard South Level 4.   
a.       November 14: Abby: cracking open the prescribing database
b.      Future agenda to consider:
                                                  i.      Peter Callas or other faculty on multi-level modeling
                                                ii.      Charlie MacLean: demonstration of Tableau; or Rodger’s examples of Prezi
                                              iii.      Journal article: Gomes, 2013, Opioid Dose and MVA in Canada (Charlie)
                                              iv.      Ben: Tukey chapter reading assignments, or other book of general interest

Recorder: Connie van Eeghen
