Present: Marianne Burke, Kat Cheung, Abby Crocker, Kairn Kelley, Amanda Kennedy, Rodger Kessler, Ben Littenberg, Connie van Eeghen
Start Up: The value of a “D” degree (PharmD, DPT, DrPH, PhD), whether earned in 3 years or 6 after the baccalaureate; mostly positive experiences, but it depends.
1. Discussion: CROW’s schedule for Spring Semester is set for every Thursday. We’ll gather at 11:30, with topic discussion from 11:45 to 12:45.
2. Discussion: Development of an analytic plan for medical student evaluation data
a. Connie is working with Alan Rubin and Cate Nicholas on an article about introducing an EHR curriculum in a pre-clinical doctoring skills course. Medical students are evaluated by Standardized Patients (SPs) during Clinical Skills Exams (CSEs) on a variety of skills. Among these, six questions evaluate their PRISM skills and one evaluates their patient-centered skills while using PRISM. Note that this is not a research area that falls inside Connie’s FINER goals, but it provides great opportunities for networking, skill building, and development of future opportunities.
b. The group discussion identified many key questions/issues for Connie to clarify. These included:
i. Are the co-authors willing to publish, regardless of results?
ii. Have they submitted an IRB protocol yet? Can Connie be included as "key personnel"? Can the rest of CROW be included to participate in data analysis?
iii. Understand the 7 questions (6 PRISM and 1 patient-centered) on which the students are evaluated. Do the SPs first complete a checklist, which they then use to score the questions? Or, at the end of the CSE, do they just score the 7 items from memory? What is the process used to create the data? How are scores of "yes," "unsatisfactory," and "no" determined? Will some of the data be missing?
iv. It's customary to describe the population being studied in a general way. Are demographic data about the students available, such as age at the time of the test (or year of birth) and gender?
v. It's possible that these 7 questions are related to the score received for each CSE as a whole. In other words, if a student is having a bad day, test-wise, the score for the entire CSE will reflect this. Consider adding the final score for each CSE to the data set.
vi. Make sure the medical student identifiers are coded, to prevent identification. Consider whether demographic data are, by themselves, identifiers. (A rough coding sketch appears after this list.)
vii.
Find
out if SPs score for "patient-centered" characteristics on any CSEs
last year when PRISM was not being used.
This might be a way to see how they scored on patient-centeredness when
NOT distracted by PRISM.
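For item vi, a minimal Python sketch of one way to code identifiers is below. This is only an illustration under assumptions: the file name (cse_scores.csv) and the column names (student_id, age_at_test, test_date) are hypothetical, and a salted one-way hash is just one coding option the authors might choose.

```python
import hashlib
import pandas as pd

# Hypothetical input: one row per student per CSE; column names are assumptions.
df = pd.read_csv("cse_scores.csv")

SALT = "keep-this-string-private-and-out-of-the-shared-data-set"

def code_id(student_id) -> str:
    # One-way salted hash, truncated to a short study code.
    return hashlib.sha256((SALT + str(student_id)).encode()).hexdigest()[:10]

df["study_id"] = df["student_id"].map(code_id)
df = df.drop(columns=["student_id"])  # never share the raw identifier

# Demographics can act as quasi-identifiers; coarsen age to year of birth before sharing.
df["birth_year"] = pd.to_datetime(df["test_date"]).dt.year - df["age_at_test"]
df = df.drop(columns=["age_at_test"])

df.to_csv("cse_scores_deidentified.csv", index=False)
```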
c. Analytical approach
i. Descriptive: look at (graph) the medians by time period
ii. Look at a segmented bar graph, in which the segments are the three score categories
iii. Put ALL the dots on the graph; add a lowess curve (non-parametric)
iv. Identify how many students passed each question for each test (Pareto diagram)
v. Consider looking at within-subject variation (Kairn is willing to help with this). See the sketch after this list.
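To make the plan above concrete, here is a minimal Python sketch (pandas, matplotlib, statsmodels). Everything about the data layout is an assumption for illustration: the file name, the long format (one row per student per CSE per question), the column names, and the 0/1/2 numeric coding of the three score categories. The real structure will depend on how the SP scoring process creates the data (question iii above).

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

# Assumed columns: study_id, cse_date, question (1-7), score ("yes"/"unsatisfactory"/"no").
df = pd.read_csv("cse_scores_deidentified.csv", parse_dates=["cse_date"])

# Map the three categories to a numeric scale (an assumption, not a group decision).
df["score_num"] = df["score"].map({"no": 0, "unsatisfactory": 1, "yes": 2})

# i. Medians by time period (here: by month).
medians = df.groupby(df["cse_date"].dt.to_period("M"))["score_num"].median()
medians.plot(marker="o", title="Median score by month")
plt.show()

# ii. Segmented (stacked) bar: counts of each score category per month.
counts = (df.groupby([df["cse_date"].dt.to_period("M"), "score"])
            .size().unstack(fill_value=0))
counts.plot(kind="bar", stacked=True, title="Score categories by month")
plt.show()

# iii. All the dots plus a lowess (non-parametric) smoother over time.
x = df["cse_date"].map(pd.Timestamp.toordinal)
smooth = lowess(df["score_num"], x, frac=0.5)
plt.scatter(x, df["score_num"], alpha=0.3)
plt.plot(smooth[:, 0], smooth[:, 1], color="red")
plt.title("All scores with lowess smoother")
plt.show()

# iv. Pareto-style view: how many students passed ("yes") each question, sorted.
passes = (df[df["score"] == "yes"].groupby("question")["study_id"]
            .nunique().sort_values(ascending=False))
passes.plot(kind="bar", title="Students passing each question")
plt.show()

# v. Within-subject variation: spread of each student's own scores across items and tests.
within = df.groupby("study_id")["score_num"].std()
print(within.describe())
```

The lowess fit stands in for the non-parametric curve mentioned in item iii; the frac argument controls how much smoothing is applied and would need tuning to the actual data.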
d. Thank you, everyone!
a. December 20: POTLUCK! Along with a presentation by Ben on Depression and social networks on the web, with Chris Danforth and Peter Dobbs.