Saturday, November 14, 2015

Clinical Research Oriented Workshop (CROW) Meeting: November 12, 2015



Present:  Kairn Kelley, Rodger Kessler, Ben Littenberg, Connie van Eeghen

Start Up:  Did you know that presentations are like articles, and co-authors should be informed before their names are used?  New to one of us…

1. Discussion: Kairn’s update
   a. Kairn has been working on interpretation of the equations related to the findings of her study. Her goal is to plan her discussion for an audience of clinical audiologists.
      i. There is a FINER article on the correction of the existing model (which needs covariance)
   b. The current paper may focus on any one of several interesting issues that Kairn is curious about:
      i. How much change occurs on retest?
      ii. Is there a learning effect on the SCAN (the words test); do most kids really do better?
      iii. Was there a test that was more reliable than the others?
         1. Classification (normal/abnormal) by the Digits test was not consistent (because normal statistics were applied to a skewed distribution)
         2. Various cut-offs for the Syllabus test were tried; none resulted in consistent classification
      iv. Either the Digits test has false positives or the Words test has false negatives
         1. Kids who did not do well on the Words test had parent reports of concerns (4/5 had parent concerns)
      v. It is not possible to test ear difference on a 40-item Words test (20 right and 20 left)
      vi. We know the detectable change on each test; the smallest effect that exceeds chance is larger than expected, i.e., the tests lack precision. The minimally detectable change is too large for use in normal clinical settings.
         1. There is little evidence of variability outside the binomial variation. The larger the sample size (item count on the test), the better the precision.
      vii. The population was assumed to be normal, but parent report indicated otherwise for some; how do those score results differ from the rest?
         1. There is a relationship across all the data; this is not a helpful outcome.
      viii. Virtues: the tests are stable over time, with no extraneous sources of variability, and not tightly correlated (or not fully correlated), so they are not measuring exactly the same thing; and no one should use a 40-item test. Areas for investigation:
         1. How to build an efficient, more reliable test
         2. How to create a gold standard that depends on a construct: what do we mean by auditory disorder?
      ix. Key findings:
         1. A repeatability coefficient (graphed against the number of items: a power law) for each test, which could identify the number of items for a desired level of precision
      x. Focus of article:
         1. How big the changes were
         2. Any change score smaller than x is too small to be meaningful
         3. The tests are not long enough; don’t rely on them
         4. Organize by score or by category? Unsure
   c. Next steps:
      i. Rework draft and return for review
      ii. Consider an SBIR grant for an “item response theory”-driven design (e.g., adaptive testing)
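The precision point above can be sketched numerically. Assuming, as the minutes suggest, that scores vary only binomially, the Bland–Altman repeatability coefficient shrinks with the square root of the item count. The function names and the p = 0.5 item difficulty below are illustrative assumptions, not values from the study:

```python
import math

def repeatability_coefficient(n_items: int, p: float = 0.5) -> float:
    """Repeatability coefficient (in proportion-correct units) for an
    n_items test, assuming pure binomial variation with per-item success
    probability p (no extraneous sources of variability)."""
    sd_score = math.sqrt(p * (1 - p) / n_items)  # SD of the proportion score
    return 1.96 * math.sqrt(2) * sd_score        # 95% limit for a retest difference

def items_needed(target_rc: float, p: float = 0.5) -> int:
    """Smallest item count whose repeatability coefficient is <= target_rc."""
    n = 2 * (1.96 ** 2) * p * (1 - p) / target_rc ** 2
    return math.ceil(n)

# A 40-item test at p = 0.5: retest changes smaller than ~22 percentage
# points cannot be distinguished from chance.
print(round(repeatability_coefficient(40), 3))  # → 0.219
print(items_needed(0.10))  # items needed to detect a 10-point change → 193
```

On this assumption, a 40-item test cannot separate retest changes of less than roughly 22 percentage points from chance, which matches the concern above that the minimally detectable change is too large for normal clinical settings.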

2. Next Workshop Meeting(s): Thursdays, 1:00 p.m. – 2:00 p.m., at Given Courtyard South 4.
   a. November 19: Marianne’s update
   b. November 26: Cancelled
   c. December 3: Kairn’s 1st draft of manuscript (no Ben)
   d. December 10: TBD (no Ben)
   e. December 17: Rodger on data set of “at-risk type 2 DM”

Recorder: Connie van Eeghen

Friday, November 6, 2015

NIH Loan Repayment Programs

The NIH Loan Repayment Programs are accepting applications now through November 16, 2015.

In exchange for a two-year research commitment, the NIH LRPs will repay up to $70,000 in eligible student loan debt to researchers and health professionals conducting qualifying research funded by domestic non-profit or government organizations.

The 2016 Loan Repayment Programs application deadline is November 16, 2015 at 8:00 PM. Please visit our webpage at www.lrp.nih.gov to learn more about eligibility and to apply.

If you have any questions about the application process, please contact our Information Center, Monday–Friday, from 9:00 AM – 5:00 PM, EST. We can be reached by phone at 866-849-4047 and by email at lrp@nih.gov.

Our Information Center will also be available to answer your questions on November 11 (Veterans Day) and on Saturday, November 14, from 10:00 AM – 3:00 PM, EST.

Apply Today!

https://www.lrp.nih.gov/apply


Wednesday, November 4, 2015

Kessler to deliver talk to The Vermont Center on Behavior and Health

The Vermont Center on Behavior and Health (VCBH) is very pleased to welcome UVM’s own Rodger Kessler, Ph.D. for November’s Lecture Series presentation.  
Dr. Kessler’s talk, “An Emerging Bold Standard for Conducting Relevant Research in a Changing World,” takes place on November 18, from Noon to 1 p.m., in the Davis Auditorium.