Sunday, December 22, 2013
# Hash tags
Last week at CROW, we were talking about social media and the convention of putting the symbol "#" in front of key words to support searching and indexing. The question came up: what is that hash tag character (the number sign, the pound sign, the musical sharp sign) really called? The answer is ... the octothorpe. For more than you ever needed to know about the octothorpe, click here.
Wednesday, December 18, 2013
CTS Seminar Schedule for Winter 2014
Winter 2014 Schedule
Workshop in Clinical Research (CROW)
Starting January 9, 2014:
Thursdays
Assemble 11:30 AM
Presentation 11:45 AM - 12:45 PM
Given Courtyard S457 (FRED)
Seminar in Clinical and Translational Science
Starting January 17, 2014
Fridays
12:00 PM - 1:15 PM
Given Courtyard S359
Clinical Research Oriented Workshop (CROW) Meeting: December 12, 2013
Present: Marianne Burke, Kat Cheung, Abby Crocker, Kairn Kelley, Amanda Kennedy, Rodger Kessler, Ben Littenberg, Connie van Eeghen
Start Up: The value of a “D” degree (PharmD, DPT, DrPH, PhD), whether in 3 years or 6 after the baccalaureate; mostly positive experiences, but it depends.
1. Discussion: CROW’s schedule for Spring Semester is set for every Thursday. We’ll gather at 11:30, with topic discussion from 11:45 – 12:45.
2. Discussion: Development of an analytic plan for medical student evaluation data
a. Connie is working with Alan Rubin and Cate Nicholas on an article about introducing an EHR curriculum in a pre-clinical doctoring skills course. Medical students are evaluated by Standardized Patients (SPs) during Clinical Skills Exams (CSEs) on a variety of skills. Among these, six questions evaluate their PRISM skills and one evaluates their patient-centered skills while using PRISM. Note that this is not a research area that falls inside Connie’s FINER goals, but it provides great opportunities for networking, skill building, and development of future opportunities.
b. The group discussion identified many key questions/issues for Connie to clarify. These included:
i. Are the co-authors willing to publish, regardless of results?
ii. Have they submitted an IRB protocol yet? Can Connie be included as "key personnel"? Can the rest of CROW be included, to participate in data analysis?
iii. Understand the 7 questions (6 PRISM and 1 patient-centered) on which the students are evaluated. Do the SPs first complete a checklist, which they then use to score the questions? Or, at the end of the CSE, do they just score the 7 items from memory? What is the process used to create the data? How are scores of "yes," "unsatisfactory," and "no" determined? Will some of the data be missing?
iv. It's customary to describe the population being studied in a general way. Are demographic data about the students available (age at the time of the test (or year of birth) and gender)?
v. It's possible that these 7 questions are related to the score received for each CSE as a whole. In other words, if a student is having a bad day, test-wise, the score for the entire CSE will reflect this. Consider adding the final score for each CSE to the data set.
vi. Make sure medical student identities are coded, to prevent identification. Consider whether demographic data are, by themselves, identifiers.
vii. Find out whether SPs scored "patient-centered" characteristics on any CSEs last year, when PRISM was not being used. This might be a way to see how students scored on patient-centeredness when NOT distracted by PRISM.
c. Analytical approach (see the hedged code sketch after these notes):
i. Descriptive: look at (graph) the medians by time period
ii. Look at a segmented bar graph, in which the segments are the three score categories
iii. Put ALL the dots on the graph; fit a lowess curve (non-parametric)
iv. Identify how many students passed each question for each test (Pareto diagram)
v. Consider looking at within-subject variation (Kairn is willing to help with this)
d. Thank you, everyone!
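The descriptive plots above are straightforward to produce in Python with pandas and matplotlib. The sketch below is illustrative only: the file name and the column names (exam_date, item_score, score_category) are placeholders, not the actual variables in the evaluation data set.

```python
# Minimal sketch of the descriptive analyses discussed above.
# File and column names are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("cse_prism_scores.csv", parse_dates=["exam_date"])

# Median item score by time period (here, exam month)
df["period"] = df["exam_date"].dt.to_period("M")
print(df.groupby("period")["item_score"].median())

# Segmented (stacked) bar: counts of the three score categories per period
counts = (df.groupby(["period", "score_category"]).size()
            .unstack(fill_value=0)
            .reindex(columns=["yes", "unsatisfactory", "no"], fill_value=0))
counts.plot(kind="bar", stacked=True)
plt.ylabel("Number of ratings")
plt.title("SP ratings by period")
plt.tight_layout()
plt.show()
```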
a. December 20: POTLUCK! Along with a presentation by Ben on Depression and social networks on the web, with Chris Danforth and Peter Dobbs.
Tuesday, December 10, 2013
Clinical Research Oriented Workshop (CROW) Meeting: December 5, 2013
Present: Kairn Kelley, Rodger Kessler, Ben Littenberg, Connie van Eeghen, Jon Van Luling
Start Up: Nelson Mandela… moved the dot, and our society, measurably and immeasurably.
1. Discussion: Rodger is seeking a set of measures that can be used to reliably rate the degree of behavioral health integration, and a method to get an expert panel to assess a set of clinical vignettes that will serve as an approximate gold standard of different classes of integration.
a. The first set of vignettes has been developed; these need review and refinement. There are five vignettes at this time; the goal is for them to be consistently and unambiguously categorized according to the measures.
b. Anchors were set up at four points, but not at quartiles: 0%, 1-49%, 50-99%, 100% (see the small code illustration after this discussion). Discussion was vigorous about where these points should be placed, what range of responses they should include, and how they should be described.
c. Statements (also called stems) were selected at random and reviewed from multiple perspectives. Questions were raised about how the statements reflect key aspects of the paradigm being tested. These questions will be reviewed with the author of the paradigm case.
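For illustration only, here is one way the four anchor points (which are deliberately not quartiles) might be applied to a 0-100 rating. The function name and labels are assumptions, not part of the actual instrument.

```python
# Map a 0-100 rating onto the four anchor categories discussed above.
def anchor_category(percent: float) -> str:
    if percent <= 0:
        return "0%"
    if percent < 50:
        return "1-49%"
    if percent < 100:
        return "50-99%"
    return "100%"

print([anchor_category(p) for p in (0, 25, 50, 99, 100)])
# ['0%', '1-49%', '50-99%', '50-99%', '100%']
```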
a. December 10: Connie’s analytic plan for medical student evaluation data
b. Future agenda to consider:
i. Peter Callas or other faculty on multi-level modeling
ii. Charlie MacLean: demonstration of Tableau; or Rodger’s examples of Prezi
iii. Journal article: Gomes, 2013, Opioid Dose and MVA in Canada (Charlie)
iv. Ben: Tukey chapter reading assignments, or other book of general interest
Saturday, November 16, 2013
Clinical Research Oriented Workshop (CROW) Meeting: November 14, 2013
Present: Marianne Burke, Kat Cheung, Abby Crocker, Kairn Kelley, Rodger Kessler, Ben Littenberg, Connie van Eeghen (by phone)
Start Up: Ben has been reading a Genghis Khan biography – a clever and thoughtful thug who developed a systematic communication process among illiterate troop leaders over thousands of miles – through song. Abby: the Dust Bowl of the Midwest and its socio-economic impact. Kairn: The Warmth of Other Suns – black migration in the US since Reconstruction.
1. Discussion: Kairn Kelley asked for feedback on a draft data collection form (parent questionnaire) and recruitment materials. Kairn’s goal is to find a short, valid (face validity at a minimum) screening tool for use in her study.
a. Materials shared:
i. Screening instruments (two): Fisher’s and SIFTER
1. Fisher’s: 1976, yes/no questions, not all of which are related to auditory processing disorders (APD).
2. The group pilot tested 10 key questions on CROW members and their recollections of their children. The questions may not discriminate between auditory and other issues (attentional, tone sensitivity, listening, understanding), but a small sample of typically developing kids had scores below 3... Focus: do these kids have any symptoms that might be related to APD?
ii. Article on children’s auditory processing scale – Appendix A: the scale itself – CHAPPS – most commonly used now, published 1992
iii. Symptoms of APD from Bellis and from the AAA Clinical Guidelines (dated ~2010)
1. The final page in this list, based on common behavioral manifestations, was suggested by the group as the best approach for developing a parent questionnaire.
2. Questions could be parallel: “How often does your child (have difficulty with) …” with a scaled range of answers (e.g. 0-3) for 13 questions (highest possible score of 39), with missing answers not included in the average (see the scoring sketch after this materials list).
3. Another possible article to consider! Look at Steckle (PHQ-9) to see a description of the development of this screening tool.
4. CROW members rechecked their scores with this list of questions; it looks like a good start.
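A minimal sketch of the scoring rule proposed in item 2 above (13 items scored 0-3, maximum total of 39, with missing answers left out of the average). The item responses shown are made up for illustration.

```python
# Score a 13-item parent questionnaire; None marks a missing answer.
def questionnaire_score(answers):
    answered = [a for a in answers if a is not None]
    if not answered:
        return 0, None
    return sum(answered), sum(answered) / len(answered)

responses = [2, 3, None, 1, 0, 2, 2, None, 3, 1, 0, 2, 1]  # 13 items, two missing
total, mean = questionnaire_score(responses)
print(total, round(mean, 2))  # 17 1.55
```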
b. Research Questions:
i. What is the reliability of dichotic test scores under test/retest repetition?
ii. Do the different lists rank the children similarly?
iii. Why don’t these tests give the same result each time (is there anything about the children that can help predict the size of the differences)?
c. Analysis (see the hedged code sketch after this list):
i. Within-subject variance (how much scores changed for each subject, time 1 to time 2)
ii. Number of children whose scores changed category (normal/abnormal)
iii. Covariance of scores on different lists
iv. Predictive model including subject characteristics
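A hedged sketch of the analyses listed above, using pandas. The file name, column names (score_t1, score_t2, list_a, list_b), and the normal/abnormal cutoff are placeholders, not the actual study variables.

```python
# Minimal sketch: within-subject change, category shifts, and covariance of lists.
import pandas as pd

df = pd.read_csv("dichotic_scores.csv")  # one row per child; hypothetical file

# Within-subject change from time 1 to time 2
df["change"] = df["score_t2"] - df["score_t1"]
print(df["change"].describe())

# How many children changed category (normal/abnormal) between tests
CUTOFF = 70  # hypothetical threshold
cat_t1 = df["score_t1"] >= CUTOFF
cat_t2 = df["score_t2"] >= CUTOFF
print("changed category:", (cat_t1 != cat_t2).sum())

# Covariance / correlation of scores on the two word lists
print(df[["list_a", "list_b"]].cov())
print(df[["list_a", "list_b"]].corr())
```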
d. Today’s challenge: how to characterize subjects as having/not having APD issues
i. Which questions get moved to the parent questionnaire (see discussion under 1.a. above)
ii. These questionnaires have been used for multiple studies but have not been validated systematically
e. Next steps:
i. Draft instrument, to be sent around to CROW members for trialing
a. November 21: Abby – data set diving for the Natural History of Opioids project
b. Future agenda to consider:
i. Peter Callas or other faculty on multi-level modeling
ii. Charlie MacLean: demonstration of Tableau; or Rodger’s examples of Prezi
iii. Journal article: Gomes, 2013, Opioid Dose and MVA in Canada (Charlie)
iv. Ben: Tukey chapter reading assignments, or other book of general interest
Monday, November 11, 2013
Clinical Research Oriented Workshop (CROW) Meeting: November 7, 2013
Present: Marianne Burke, Abby Crocker, Kairn Kelley, Rodger Kessler, Ben Littenberg, Connie van Eeghen
Guest: Mark Kelly
Start Up: Technology assessment has to adjust to “letting the genie out of the bottle” – i.e., when the technology becomes so widely available in the field, or users demand access until they all get it, that there is no comparative control group.
1. Discussion: Rodger Kessler’s review of an evaluation tool for integrated behavioral health, using a previously developed “Lexicon” of integration
a. Sites willing to participate:
i. Community health centers (probably low scorers)
ii. Primary care sites
iii. Co-located primary care/behavioral health sites
iv. Other interested sites: 2 large health systems
b. Considering testing and validating the evaluation tool on different models of integrated behavioral care; may become an NIH R01
i. The validation phase must be independent of its use as an evaluative tool
ii. The Lexicon tool went through 3 rounds of “expert opinion” development and review
1. Next: develop 3 scenarios for scoring, test on the “expert opinion” panel
2. Or, use willing sites (from above) to test
iii. Develop a relationship between evaluation scores and patient outcomes
c. Validation as a process
i. There is a Platonic ideal of the “Integrated Practice;” the tool measures how close any one practice is to that ideal. There is a spectrum of integration, not a “yes/no” determination.
ii. There are a variety of constructs associated with the ideal (“care team function,” “spatial arrangement”)
1. The tool must address the constructs, and the measures in the tool must represent the paradigm of each construct. Furthermore, the measures must belong to the construct domains and each domain must be represented by the measures (construct or domain validity)
2. The measures in the tool must make sense (face validity)
3. Separate measures of the same construct can demonstrate the degree to which evaluations converge, i.e. the experts’ own opinion and the experts’ use of the tool (convergent validity)
4. A gold standard by which to evaluate the strength of a measure does not exist (no criterion validity)
5. Does the language express the construct accurately and precisely? (sensibility)
6. Each of these can be used to evaluate the strength of validity and the tasks that are needed
d. Application to this study
i. There are 5 – 7 domains
ii. The present scale has 4 points of choice (no middle choice), from 0 – 3, that can be aggregated (mean or median) by domain
iii. The tool produces a profile of 5 – 7 scores, which can be used for quality improvement purposes
iv. The primary purpose of the tool is its ability to predict patient outcomes
1. Test each domain relative to outcome; evaluate the domains
a. Referral rates
b. Treatment initiation rates
c. Health outcome (?)
d. ED utilization
e. Total cost
2. The R01 should be focused on developing this model, with an analysis plan that measures (see the hedged code sketch after this list):
a. Correlation of items within a domain (should be high)
b. Correlation of domains (should be somewhat high)
c. Plan to remove items where correlation is very high
d. Plan to add items where correlation is too low
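A hedged sketch of the domain scoring and correlation checks described above, in Python with pandas. The domain names, item names, and data layout are assumptions for illustration only, not the actual Lexicon tool.

```python
# Minimal sketch: per-domain profile scores and the correlation checks in the plan.
import pandas as pd

items = pd.read_csv("lexicon_item_scores.csv")  # one row per site, items scored 0-3
domains = {
    "care_team_function": ["ctf_1", "ctf_2", "ctf_3"],   # hypothetical item names
    "spatial_arrangement": ["sa_1", "sa_2"],
}

# Profile: one aggregated score (mean of 0-3 items) per domain, per site
profile = pd.DataFrame({d: items[cols].mean(axis=1) for d, cols in domains.items()})
print(profile.head())

# Correlation of items within each domain (expected to be high)
for d, cols in domains.items():
    print(d, "\n", items[cols].corr())

# Correlation between domain scores (expected to be somewhat high)
print(profile.corr())
```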
e. Action steps:
i. Kairn will circulate an article on a framework of validation concepts
ii. Vignette study, to confirm consistent outcomes by experts
iii. Field test the tool on pilot sites
a. November 14: Abby: cracking open the prescribing database
b. Future agenda to consider:
i. Peter Callas or other faculty on multi-level modeling
ii. Charlie MacLean: demonstration of Tableau; or Rodger’s examples of Prezi
iii. Journal article: Gomes, 2013, Opioid Dose and MVA in Canada (Charlie)
iv. Ben: Tukey chapter reading assignments, or other book of general interest