Thursday, July 20, 2017

Fwd: Round 6 NHATS Beta Data Released

 

The National Health and Aging Trends Study (NHATS) is pleased to announce that a beta version of the Round 6 data files is now available at www.nhatsdata.org.  Data are available in both SAS and Stata formats. This beta release also includes sensitive data from NHATS.  Information on how to apply for sensitive data can be found at https://www.nhatsdata.org/ResDataFiles.aspx.  

Updated documentation, including a revised User Guide, technical papers, and a crosswalk between the instruments and the codebook, has been posted on http://www.nhats.org.    



- Ben Littenberg
 

Group 8


Friday, July 14, 2017

Clinical Research Oriented Workshop (CROW) Meeting: July 13, 2017



Present:   Levi Bonnell, Marianne Burke, Jessica Clifton, Justine Dee, Nancy Gell, Kairn Kelley (phone), Lillian Savard, Juvena Hitt, Ben Littenberg, Tim Plante, Connie van Eeghen

Start Up: Big crowd – time to move downstairs?  Introductions and welcome to Levi, Jessica, and Tim.
1.                   Manuscript on SBIRT: van Eeghen, Hitt (in the cone of silence)
a.       There is an editorial from about 5 years ago on how to write for Academic Medicine
b.       Summary: impact of an education program on SBIRT for an inter-professional group, with pre/post assessment using self-reported data
c.       Title:
                                                   i.      First title: add “in providers” or “of providers”
                                                  ii.      Needed clarity on whether they were students or providers, and what level of degrees they had or were trying to get.  Were they actually engaged with patients as they went through the program?
d.       Strengths: well-written, interesting, IPE (hot), novel
e.       Weaknesses: self-report data; add “perceived” to title
                                                   i.      Recalled that there was a knowledge quiz; not just self-report
                                                 ii.      No objective measure of skills
                                               iii.      Abstract: was the intervention split into two different groups?  Yes, but not clear until much later in the article
1.       Does not match the core of the paper.
                                               iv.      What problem is it solving:
1.       If you don’t train them together, they do worse (no data on this) – this is not a measure of IPE; did not justify the relevance
a.       Did not talk about the administrative difficulty of pulling this off; was it worth it?  Does it work better than not going through this trouble?  Did not respond to the assertion made in line 88 about “better utilization of shrinking health care workforce” – Ben got it.
b.       IPE is about learning about each other’s roles.  Did the surveys evaluate that?
2.       Does training improve attitudes, knowledge, skills
a.       But measured whether the effect was different between the 2 groups AND whether the groups were different at baseline – what was the story?
                                                                                                                           i.      “We had three goals” – and put them on different groups
                                                                                                                         ii.      People probably care that the groups are different – is that what this study is about?  Where is the literature?  What is the hypothesis?  This came out of left field.  Needs to be in intro. 
                                                                                                                       iii.      How does comparing learners at baseline say something about the training?
                                                 v.      IM residents had a different intervention – of course they show up differently
1.       IM bundled with FM and NP student groups: what was the rationale?  Not clear – because they were bio-medical-ish?  Aren’t the other groups (SW, Counseling) also bio-medical?  There is no specification of what “nursing” means – PCPs? 
2.       Should they analyze all five groups separately?  Consider changing the unit of analysis
f.        Background: did not specify the gap
g.       Method:
                                                   i.      Language about didactic training is out of place
                                                 ii.      Line 127 is a sub-sub heading of 123
h.       Curriculum:
                                                   i.      Good detail
                                                 ii.      Table 1 – supplementary appendix
                                               iii.      Why didn’t the survey address the items on IPE in Table 1?  E.g. mutual respect.
1.       The survey was more about SBIRT, not about IPE
i.         Survey methods – line 174
                                                   i.      Hard to parse
                                                 ii.      Likert scale not identified up front
                                               iii.      Analyzed dichotomously – but all questions were analyzed that way.  The methods are “scattered all over”
                                                iv.      No comment about whether the surveys were validated.  If derived from other surveys, were those validated?  Need to be explicit, otherwise suspect.  (Discussion of validation would be another paper)
j.         Analysis
                                                   i.      Why dichotomized?
1.       Better: a domain score (e.g., communication, knowledge) for each student, averaging the change in scores across the questions within the domain; better power
a.       Makes the figures easier to interpret
b.       For the domain: indicates the change by domain, rather than each item
c.       Null: the average change is 0
Likert example (0–5 scale), one student (“Joe”), first and second (pre/post) responses on a 3-question domain:

Question            1st response    2nd response    Change
Joe Q1                    1               3             +2
Joe Q2                    1               4             +3
Joe Q3                    2               1             -1
Average change                                        +1.33

Across all students, the change can be expressed as a mean, range, standard error, CI…  With this number of students, will probably have a small enough standard error to show a difference.
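
A minimal sketch of this domain-score approach (not the authors' method), assuming a hypothetical wide table with one row per student and made-up pre/post item columns; the summary gives the mean change, its standard error, and a 95% CI against the null of zero change:

import numpy as np
import pandas as pd
from scipy import stats

def domain_change(df, items):
    """Average pre-to-post change across a domain's items: one score per student."""
    changes = pd.DataFrame({q: df[f"{q}_post"] - df[f"{q}_pre"] for q in items})
    return changes.mean(axis=1)

def summarize(change):
    """Mean change with standard error and 95% CI; the null is a mean change of 0."""
    change = change.dropna()
    n = len(change)
    mean, se = change.mean(), change.std(ddof=1) / np.sqrt(n)
    half_width = stats.t.ppf(0.975, df=n - 1) * se
    return {"n": n, "mean": mean, "se": se, "ci": (mean - half_width, mean + half_width)}

# One student ("Joe") and one 3-item domain, matching the Likert example above:
joe = pd.DataFrame({"q1_pre": [1], "q1_post": [3],
                    "q2_pre": [1], "q2_post": [4],
                    "q3_pre": [2], "q3_post": [1]})
print(domain_change(joe, ["q1", "q2", "q3"]))  # 1.33, Joe's average change

With all students in the table, summarize(domain_change(...)) returns the mean change, standard error, and CI described above.
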
                                                 ii.      There are four domains; represent as a box and whisker diagram, with each box a domain along the x-axis
1.       Perception (attitude)
2.       Perception (skills)
3.       Perception (communication)
4.       Knowledge (which was not graphically presented, with no explanation why)
                                               iii.      “Who improved more” has to be part of the plan, explicitly
1.       Each domain as a box/whisker plot, in total and breaking out the FOUR groups (lump IM and FM together) – see the plotting sketch below
                                                iv.      There is way too much detail in the bar charts for the Academic Medicine audience: deans
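
A sketch of the suggested figure, assuming a hypothetical long-format table with one row per student per domain and illustrative column names ('domain', 'group', 'change' – the per-student domain change score from the sketch above):

import matplotlib.pyplot as plt
import pandas as pd

def domain_boxplot(long_df, by_group=False):
    """Box/whisker of per-student domain change scores, one box per domain;
    if by_group is True, split each domain into the four training groups."""
    fig, ax = plt.subplots()
    by = ["domain", "group"] if by_group else "domain"
    long_df.boxplot(column="change", by=by, ax=ax, rot=45)
    ax.axhline(0, linestyle="--")   # null: no pre-to-post change
    ax.set_ylabel("Mean change in Likert response (post - pre)")
    ax.set_title("")                # clear pandas' automatic axes title
    fig.suptitle("")                # clear pandas' automatic figure title
    plt.tight_layout()
    return fig

One figure with the four domains (perceived attitude, skills, communication, and knowledge) along the x-axis would replace the per-item bar charts.
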
k.       Title redux:
                                                   i.      A comparison of the SBIRT impact on different professions OR
                                                 ii.      The overall impact of SBIRT OR
                                               iii.      What the students are like at baseline
                                               iv.      If all three: set up as 3 goals, 3 methods, 3 results, 3 discussions (a long article)
                                                 v.      What about the first line of survey methods: what students thought about the training program – what happened to that?
l.         Discussion - limitations:
                                                   i.      Line 270: differences in trainees are actually a strength
                                                  ii.      Line 289: there was great power
                                               iii.      What was the n: did it count just the paired responses?  Be clear
m.     Conclusions:
                                                   i.      No previous mention of expense
                                                 ii.      Last 3 sentences – not needed.  Leave at two sentences… this is what could get it into the journal.
n.       Figures
                                                   i.      Put questions in appendix
                                                  ii.      Consistent p values (decimals)

2.                   Next Workshop Meeting(s): Thursdays, 11:30 a.m. – 12:45 p.m., at Given Courtyard South Level 4, through the end of August 2017.
a.       July 20: Jessica – publication plan for dissertation (no Ben)
b.       July 27: (no Ben)
c.       August 3: (no Ben, Connie, Jessica, Lillian)
d.       Future topics:
a.       Juvena: protocol development
b.       LaMantia: predictors of successful R01 applications: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0155060

Recorder: Connie van Eeghen