Saturday, November 16, 2013

Clinical Research Oriented Workshop (CROW) Meeting: November 14, 2013



Present:  Marianne Burke, Kat Cheung, Abby Crocker, Kairn Kelley, Rodger Kessler, Ben Littenberg, Connie van Eeghen (by phone)

Start Up: Ben has been reading a Genghis Khan biography – a clever and thoughtful thug who developed a systematic communication process, carried by song, among illiterate troop leaders spread over thousands of miles.  Abby: the Dust Bowl of the Midwest and its socio-economic impact.  Kairn: The Warmth of Other Suns, on black migration in the US since Reconstruction. 

1.                  Discussion: Kairn Kelley asked for feedback on a draft data collection form (parent questionnaire) and recruitment materials.  Kairn’s goal is to find a short, valid (face validity at a minimum) screening tool for use in her study.
a.       Materials shared:
                                                  i.      Screening instruments (two): Fisher’s and SIFTER (Screening Instrument for Targeting Educational Risk)
1.      Fisher’s: 1976, yes/no questions, not all are related to auditory processing disorders (APD).
2.      The group pilot tested 10 key questions on CROW members and their recollections of their children. The questions may not discriminate between auditory and other issues (attention, tone sensitivity, listening, understanding), but a small sample of typically developing kids scored below 3.  Focus: do these kids have any symptoms that might be related to APD?
                                                ii.      Article on a children’s auditory processing scale – CHAPPS, the most commonly used scale now, published in 1992; Appendix A contains the scale itself
                                              iii.      Symptoms of APD from Bellis and from the AAA (American Academy of Audiology) Clinical Guidelines (dated ~2010)
1.      The final page in this list, based on common behavioral manifestations, was suggested by the group as the best approach for developing a parent questionnaire.
2.      Questions could be parallel – “How often does your child (have difficulty with) …” – with a scaled range of answers (e.g., 0-3) across 13 questions (highest possible score 39), with missing answers excluded from the average (see the scoring sketch after this list)
3.      Another possible article to consider: look at Steckle (PHQ-9) for a description of how that screening tool was developed.
4.      CROW members rechecked their scores against this list of questions; it looks like a good start.
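
As a concrete illustration of the proposed scoring rule, here is a minimal Python sketch, assuming 13 items scored 0-3 and proration of missing answers through the item average; the function name and the example answers are hypothetical, not part of the draft instrument.

```python
# Minimal sketch of the proposed parent-questionnaire scoring rule:
# 13 items answered 0-3 ("How often does your child ..."), with
# missing answers excluded from the average. Names are hypothetical.

def score_questionnaire(answers):
    """Return (raw_total, prorated_total) for a list of 13 answers.

    Each answer is an int 0-3, or None if the parent skipped the item.
    The prorated total rescales the mean of the answered items back to
    the 0-39 range, so forms with missing items remain comparable.
    """
    answered = [a for a in answers if a is not None]
    if not answered:
        return None, None  # nothing to score
    raw_total = sum(answered)
    mean_item = raw_total / len(answered)  # average over answered items only
    return raw_total, mean_item * 13       # rescale to the full 13-item range

# Example: two skipped items; the average uses the 11 answered items.
answers = [2, 1, 0, 3, 2, None, 1, 0, 2, None, 1, 3, 0]
print(score_questionnaire(answers))  # (15, ~17.7)
```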
b.      Research Questions:
                                                  i.      What is the reliability of dichotic test scores under test/retest repetition?
                                                ii.      Do the different lists rank the children similarly?
                                              iii.      Why don’t these tests give the same result each time (is there anything about the children that can help predict the size of the differences)?
c.       Analysis (a hypothetical sketch in Python follows this list):
                                                  i.      Within subject variance (how much scores changed for each subject, time 1 to time 2)
                                                ii.      Number of children whose scores changed category (normal/abnormal)
                                              iii.      Covariance of scores on different lists
                                              iv.      Predictive model including subject characteristics
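
A hypothetical sketch of this analysis plan in Python, using simulated data in place of the study’s dichotic test scores; the cutoff, sample size, and subject characteristic (age) are invented for illustration only.

```python
# Sketch of the four analyses above on simulated test/retest data.
# All numbers are invented; nothing here is the study's real data.
import numpy as np

rng = np.random.default_rng(0)
n = 30
time1 = rng.normal(70, 10, n)        # hypothetical time-1 dichotic scores
time2 = time1 + rng.normal(0, 6, n)  # retest scores with added noise

# (i) Within-subject change, time 1 to time 2
diffs = time2 - time1
print("mean change:", diffs.mean().round(2), "SD:", diffs.std(ddof=1).round(2))

# (ii) Number of children whose scores changed category (cutoff assumed)
cutoff = 65
changed = int(np.sum((time1 < cutoff) != (time2 < cutoff)))
print("children changing normal/abnormal category:", changed)

# (iii) Covariance of scores on two different word lists
list_b = time1 + rng.normal(0, 8, n)  # hypothetical second list
print("covariance:", np.cov(time1, list_b)[0, 1].round(2))
print("correlation:", np.corrcoef(time1, list_b)[0, 1].round(2))

# (iv) Simple predictive model: |change| as a function of age
age = rng.integers(6, 12, n)
slope, intercept = np.polyfit(age, np.abs(diffs), 1)
print(f"|change| ~ {intercept:.1f} + {slope:.2f} * age")
```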
d.      Today’s challenge: How to characterize subjects as having/not having APD issues
                                                  i.      Which questions get moved to the parent questionnaire (see discussion under 1.a. above)
                                                ii.      These questionnaires have been used in multiple studies but have not been validated systematically
e.       Next steps:
                                                  i.      Draft instrument, to be sent around to CROW members for trialing

2.                  Next Workshop Meeting(s): Thursdays, 11:45 a.m. – 1:15 p.m., at Given Courtyard South Level 4.   
a.       November 21: Abby – data set diving for the Natural History of Opioids project
b.      Future agenda to consider:
                                                  i.      Peter Callas or other faculty on multi-level modeling
                                                ii.      Charlie MacLean: demonstration of Tableau; or Rodger’s examples of Prezi
                                              iii.      Journal article: Gomes, 2013, Opioid Dose and MVA in Canada (Charlie)
                                              iv.      Ben: Tukey chapter reading assignments, or other book of general interest

Recorders: Connie van Eeghen and Kairn Kelley

Monday, November 11, 2013

Clinical Research Oriented Workshop (CROW) Meeting: November 7, 2013



Present:  Marianne Burke, Abby Crocker, Kairn Kelley, Rodger Kessler, Ben Littenberg, Connie van Eeghen
Guest:     Mark Kelly

Start Up: Technology assessment has to adjust to “letting the genie out of the bottle” – i.e., when a technology becomes so widely available in the field, or users demand access to it until they all get it, that no comparative control group remains.

1.                  Discussion: Rodger Kessler’s review of an evaluation tool for integrated behavioral health, using a previously developed “Lexicon” of integration
a.       Sites willing to participate:
                                                  i.      Community health centers (probably low scorers)
                                                ii.      Primary care sites
                                              iii.      Co-located primary care/behavioral health sites
                                              iv.      Other interested sites: 2 large health systems 
b.      Considering testing and validating the evaluation tool on different models of integrated behavioral care; this may become an NIH R01
                                                  i.      The validation phase must be independent of the tool’s use as an evaluative instrument
                                                ii.      The Lexicon tool went through 3 rounds of “expert opinion” development and review
1.      Next: develop 3 scenarios for scoring, test on “expert opinion” panel
2.      Or, use willing sites (from above) to test
                                              iii.      Develop a relationship between evaluation scores and patient outcomes
c.       Validation as a process
                                                  i.      There is a Platonic ideal of the “Integrated Practice”; the tool measures how close any one practice is to that ideal.  Integration is a spectrum, not a “yes/no” determination
                                                ii.      There are a variety of constructs associated with the ideal (“care team function,” “spatial arrangement”)
1.      The tool must address the constructs, and the measures in the tool must represent the paradigm of each construct.  Furthermore, the measures must belong to the construct domains, and each domain must be represented by the measures (construct or domain validity)
2.      The measures in the tool must make sense (face validity)
3.      Separate measures of the same construct can demonstrate the degree to which evaluations converge, i.e., the experts’ own opinions and the experts’ use of the tool (convergent validity; see the sketch after this list)
4.      A gold standard by which to evaluate the strength of a measure does not exist (no criterion validity)
5.      Does the language express the construct accurately and precisely (sensibility)?
6.      Each of these can be used to evaluate the strength of validity and the tasks that are needed
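
To make the convergent-validity check in item 3 concrete, a small Python sketch follows; the expert ratings and tool scores are simulated, and the 0-3 scale is assumed from the discussion of the scale below.

```python
# Hypothetical convergent-validity check: correlate each expert's direct
# opinion of a site's integration with the score that same expert
# produces using the tool. All data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_sites = 12
expert_opinion = rng.uniform(0, 3, n_sites)  # direct global ratings, 0-3
tool_score = np.clip(expert_opinion + rng.normal(0, 0.4, n_sites), 0, 3)

r = np.corrcoef(expert_opinion, tool_score)[0, 1]
print(f"convergent validity (opinion vs. tool score): r = {r:.2f}")
```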
d.      Application to this study
                                                  i.      There are 5 – 7 domains
                                                ii.      The present scale has four answer choices (no middle choice), from 0 to 3, that can be aggregated (mean or median) by domain
                                              iii.      The tool produces a profile of 5 – 7 scores, which can be used for quality improvement purposes
                                              iv.      The primary purpose of the tool is its ability to predict patient outcomes
1.      Test each domain relative to outcome; evaluate the domains
a.       Referral rates
b.      Treatment initiation rates
c.       Health outcome (?)
d.      ED utilization
e.       Total cost
2.      The R01 should focus on developing this model, with an analysis plan (sketched after this list) that measures:
a.       Correlation of items within domain (should be high)
b.      Correlation of domains (should be somewhat high)
c.       Plan to remove items where correlation is very high
d.      Plan to add items where correlation is too low
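
A hypothetical Python sketch of the aggregation and correlation checks above: items scored 0-3 are averaged into domain scores, then item-within-domain and domain-domain correlations are examined. The domain names, item counts, and site data are invented; they are not the Lexicon’s actual structure.

```python
# Sketch of the analysis plan: domain profiles plus correlation checks.
# Domain structure and responses are simulated, not the Lexicon's.
import numpy as np

rng = np.random.default_rng(2)
n_sites = 25
domains = {"care_team_function": 4, "spatial_arrangement": 3, "shared_care": 3}

# Simulated 0-3 item responses per site (columns = items in a domain)
data = {d: rng.integers(0, 4, size=(n_sites, k)) for d, k in domains.items()}

# Profile: one aggregated 0-3 score (mean) per domain per site
profile = {d: items.mean(axis=1) for d, items in data.items()}

# (a) Correlation of items within each domain (plan: should be high;
#     very high pairs are candidates for removal as redundant)
for d, items in data.items():
    r = np.corrcoef(items, rowvar=False)        # item-by-item matrix
    off_diag = r[np.triu_indices_from(r, k=1)]  # unique item pairs
    print(d, "mean item-item r:", off_diag.mean().round(2))

# (b) Correlation between domain scores (plan: should be somewhat high;
#     a too-low domain may need additional items)
scores = np.column_stack([profile[d] for d in domains])
print("domain-domain r:\n", np.corrcoef(scores, rowvar=False).round(2))
```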
e.       Action steps:
                                                  i.      Kairn will circulate an article on a framework of validation concepts
                                                ii.      Vignette study, to confirm consistent outcomes by experts
                                              iii.      Field test the tool at pilot sites

2.                  Next Workshop Meeting(s): Thursdays, 11:45 a.m. – 1:15 p.m., at Given Courtyard South Level 4.   
a.       November 14: Abby: cracking open the prescribing database
b.      Future agenda to consider:
                                                  i.      Peter Callas or other faculty on multi-level modeling
                                                ii.      Charlie MacLean: demonstration of Tableau; or Rodger’s examples of Prezi
                                              iii.      Journal article: Gomes, 2013, Opioid Dose and MVA in Canada (Charlie)
                                              iv.      Ben: Tukey chapter reading assignments, or other book of general interest

Recorder: Connie van Eeghen