Present: Kristi Johnson, Rodger Kessler, Connie van Eeghen
1. Start Up: What is our level of “I” (Interesting) in the FINER evaluation of this toolkit? As it turns out, really very high. So on with the toolkit!
2. Presentation: Connie and Rodger’s R03 Critique - Aims and Approach
a. We debated the value of two Aim statements vs. three. Aims 2 and 3 are currently both about measurement; we decided that it will be simpler for the reader to have just one aim about measurement, with two sub-parts.
b. Whether to select, as a primary outcome, one mental health screening score or a general statement about improving mental health status overall was hotly (well, maybe warmly) debated. One screening score doesn’t cover the breadth of what Primary Care Behavioral Health (PCBH) services provide, but it does make it easier to understand how we will evaluate the study. TBC.
c. Reviewers questioned the significance of a toolkit relative to outcomes of care. Rodger will further research sources that provide supporting theoretical constructs and results.
d. One of the innovations proposed by the study is the use of a mixed methods approach, which one reviewer criticized as “not highly innovative or under-specified.” The Office of Behavioral and Social Sciences Research, however, has published a “best practices” guide specifically intended to help NIH investigators develop and evaluate mixed methods research applications. Few published studies in this field use this method, and we believe we are relatively early among researchers adopting it. Whether this is truly an innovation, or an area where the field is simply catching up, is still open for discussion.
e. The method of approaching the second aim, measurement, is also a work in progress. If we use one screening measure (such as the PHQ) as the primary outcome, then we can conduct a paired-samples t test and adjust for clustering by practice using the intra-class correlation coefficient, following Donner’s method, with the coefficient estimated from Ben’s data from the VDIS study (a rough sketch of this adjustment follows this list). This gives one measure to answer the question “Does the toolkit work?”, and many other measures can help us do this as well. Patients will be compared to themselves at three points in time. We could also run other comparisons, such as against patients in the practice who screened positive but did not receive PCBH services, but it is not clear whether adding more examples of analysis to the Approach section strengthens the application or makes it more confusing.
f. The group agreed that there is a balance between making the application as comprehensive, and therefore scientifically interesting, as possible and making it as streamlined as possible, thereby making it more accessible to the reviewers who will be our “advocates” during the larger section meeting. This is a debate that we can continue to have for a while. Next step: new draft for Rodger and Connie to work on.
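For reference, here is a minimal sketch of the clustering adjustment discussed in item e: a paired-samples t test whose standard error is inflated by Donner’s design effect, deff = 1 + (m − 1) × ICC, where m is the average practice size. The function name, the ICC value (0.05), and the simulated scores are illustrative placeholders only, not the study’s actual data or analysis plan.

```python
import numpy as np
from scipy import stats

def clustered_paired_t(pre, post, cluster_sizes, icc):
    """Paired t test with a Donner-style design-effect adjustment.

    deff = 1 + (m_bar - 1) * icc, where m_bar is the average cluster
    (practice) size. Inflating the naive standard error by sqrt(deff)
    is equivalent to shrinking the sample to n_eff = n / deff.
    """
    diff = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    n = diff.size
    m_bar = float(np.mean(cluster_sizes))       # average practice size
    deff = 1.0 + (m_bar - 1.0) * icc            # variance inflation factor
    se = diff.std(ddof=1) / np.sqrt(n) * np.sqrt(deff)
    n_eff = n / deff                            # effective sample size
    t_stat = diff.mean() / se
    p = 2.0 * stats.t.sf(abs(t_stat), df=n_eff - 1.0)  # two-sided p value
    return t_stat, p

# Illustrative run: 60 patients across 4 practices of 15, ICC assumed 0.05.
rng = np.random.default_rng(0)
pre = rng.normal(12, 4, 60)          # e.g., baseline PHQ screening scores
post = pre - rng.normal(2, 3, 60)    # scores after PCBH services
t_stat, p = clustered_paired_t(pre, post, [15, 15, 15, 15], icc=0.05)
print(f"t = {t_stat:.2f}, p = {p:.4f}")
```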
3. Workshop Goals for 2012:
a. Journal club: identify UVM guests and articles; invite to CROW ahead of time
b. Research updates: share work-in-process
i. Aug 2: Abby – “How we picked the predictive model for the NAS article”
ii. Aug 9: (no Abby)
iii. Aug 16: Kairn – review of draft article on IRR (no Abby)
iv. Aug 23:
v. Aug 30: (new schedule?)
c. Future agenda to consider:
i. Ben: budgeting exercise for grant applications
ii. Journal Club: “Methods and metrics challenges of delivery-system research,” Alexander and Hearld, March 2012 (for later in the year?). UVM authors who have published interesting design articles (Kim, Osler)
iii. Rodger: Mixed methods article; article on Behavior’s Influence on Medical Conditions (unpublished); drug company funding. Also: discuss design for PCBH clinical and cost research.
iv. Amanda: presentation and interpretation of data in articles
v. Sharon Henry: article by Cleland, Thoracic Spine Manipulation, Physical Therapy 2007
Recorder: Connie van Eeghen