Monday, September 29, 2014

A THOUGHT FOR TODAY

This came through my email today...
There are two possible outcomes: If the result confirms the hypothesis, then you've made a measurement. If the result is contrary to the hypothesis, then you've made a discovery. -Enrico Fermi, physicist and Nobel laureate (1901-1954) 

(Of course, I suppose it could also be an error!)

Ben

Thursday, September 25, 2014

How I learned about Science on Facebook

Maybe there's hope for me yet...
http://www.xojane.com/issues/pop-science-may-be-annoying-but-its-necessary

Clinical Research Oriented Workshop (CROW) Meeting: Sept 25, 2014



Present:  Sylvie Frisbie, Kairn Kelley, Amanda Kennedy, Rodger Kessler, Ben Littenberg, Connie van Eeghen

Start Up:  On the Rise (in Richmond) is closing!  Many tears were shed while we happily ate their excellent food. 

1.                  Discussion: Honors College Course proposal
a.       Population: honors sophomores, about 19 years old
b.      Challenges
                                                  i.      Model for improvement: is this too challenging for the population?
                                                ii.      Interviews with new Americans: have the families come to class?
                                              iii.      Consider partnering with Rich and Halley to visit with new American office visits
                                              iv.      Consider two visits (one new American and one not) and compare/contrast;
1.      Remember that translator visits are scheduled, so easy to identify ahead of time
                                                v.      Consider community health center visits; also Planned Parenthood
                                              vi.      Consider migrant farmers – how to include?  Not necessarily
                                            vii.      The proposal describes a sociological course with some analytical skills; it does not, from a presentation perspective, talk about quality and health care – these terms and concepts must be explicitly defined and discussed
                                          viii.      Should this course “touch” on diversity?  Or be redesigned to focus on diversity?
1.      The focus of the course is to help learners understand that the U.S. health care system is not perfect AND that it can be changed. 
2.      The Model for Improvement project should be explicitly linked to a health care story of improvement, using the same tool
3.      “Diversity” can mean diversity of values (different stakeholders with different perspectives), rather than diversity in backgrounds or culture.  Consider using the issue of diversity as another perspective in an already complex course topic, not as one of its primary foci.  It may not meet the D1 or D2 definitions of UVM’s diversity requirements, but it doesn’t have to – it just has to address diversity.
c.       Suggestions:
                                                  i.      Change language to “Multiple perspectives”
                                                ii.      Define key concepts as the course progresses.  When it's time to talk about diversity, ask: "What do we mean by diversity?"  "What do we mean by difference/disparity/discrimination?"  And how do you know what people really need or want: by knowing their culture or by asking them?
                                              iii.      Guest speakers can focus on a diverse range of perspectives: migrant workers, low-SES populations, etc.
                                              iv.      Key points
1.      What is health care?
2.      What is quality?
3.      What are the values that influence perceptions of quality in health care?
4.      What is improvement (and how do you do it)?
d.      Questions:
                                                  i.      Is it a good idea for students to go to family homes, when they might not know the limits of what they should get (in terms of information) and what they might be expected to provide for support?
                                                ii.      Is IRB approval needed for community interviews?
e.       Resources:
                                                  i.      CUPS office: support for service/community learning projects (Office of Community-University Partnerships & Service Learning found at http://www.uvm.edu/partnerships/)

2.                  Next Workshop Meeting(s): Thursdays, 11:30 a.m. – 12:45 p.m., at Given Courtyard South Level 4.   Remember: the first 15 minutes are for checking in with each other.
a.       Oct 2: Kairn’s written response to an editor for resubmission (no Amanda)
b.      Oct 9: Rodger’s recently published article and commentary in response – Rodger to circulate
c.       Oct 16: Marianne’s topic - TBD

Recorder: Connie van Eeghen

Sunday, September 21, 2014

Friday, September 19

19 September 2014, Friday, 12 noon to 5:30 pm. 28th Annual University of Vermont Fall Imaging Seminar, College of Medicine.

Lecture 1: Cardiac Abnormalities Encountered on Routine Chest CT, Curtis Green, MD. Take-home message: there are a number of cardiac abnormalities that can be observed on routine chest CT scanning: calcium, chamber abnormalities, aneurysms, myocardial fat, vascular anomalies.

Lecture 2: MRI Artifacts, Trevor Andrews, PhD. Take-home message: recognizing various types of artifacts, their causes, and how to mitigate them.

Lecture 3: Functional Brain MRI, Joshua Nickerson, MD. Take-home message: functional MRI in the clinical setting is useful for presurgical planning but has limitations; BOLD vs. pCASL methods and future developments were discussed.

Thursday, September 18, 2014

CTS Seminar on Friday, September 19

For this week's seminar, we will do a journal club on this recent article:
Gagne JJ, Choudhry NK, Kesselheim AS, Polinski JM, Hutchins D, Matlin OS, et al. Comparative Effectiveness of Generic and Brand-Name Statins on Patient Outcomes: A Cohort Study. Ann Intern Med. 2014;161:400-407. doi:10.7326/M13-2942

Please read the article carefully and be prepared to say something about it in each of these areas:
What was it trying to do?
What were the methods?
What were the conclusions?
To whom do they apply?
What were its strengths?
What were its weaknesses?
Should it influence health care or policy?


Fun times for all!

Here is the link to the article:
http://www.ncbi.nlm.nih.gov/pubmed?term=%22Annals+of+internal+medicine%22[Jour]+AND+2014[pdat]+AND+Gagne+J[author]&cmd=detailssearch

Friday, September 12, 2014

Clinical Research Oriented Workshop (CROW) Meeting: Sept 11, 2014



Present:  Kairn Kelley, Amanda Kennedy, Ben Littenberg, Connie van Eeghen

Start Up: Cats, dogs, and the work involved… when working on everything else!  Kairn’s manuscript with Ben has been accepted – congratulations!  A recent court decision by the U.S. Ninth Circuit Court of Appeals defined “auditory processing disorder” as an “other health impairment” under the Individuals with Disabilities Education Act. It would be nice to have an evidence base for diagnosis and accommodations.

1.                  Discussion: Kairn’s lit review update
a.       Kairn has found that “Ear and Hearing” does publish systematic literature reviews
b.      The purpose of this discussion is to identify the kind of meaningful findings this review will include
c.       Kairn plans to evaluate different diagnostic dichotic listening tests to “support audiologists’ decisions about test selection and interpretation.”
                                                  i.      There are 9 recorded tests (some of which have several versions)
                                                ii.      There is no coherent biologic model to support these tests
                                              iii.      Many tests that have been used to support common models aren’t commercially available
                                              iv.      The commercially available tests are not well validated for reliability (and other domains – see below) or by rigorous trials
d.      The focus is comparing the tests and identifying whether any are supported for use
                                                  i.      There is plenty of literature reviewing the process of auditory testing, but none that review the tests themselves
                                                ii.      The concept of reliability also has to be well explained, as do the other domains:
1.      Reliability
2.      Accuracy
3.      Usefulness
4.      Value (not referenced in the literature – which should be so declared)
                                              iii.      Previous reviews reference correlation, which is not an adequate evaluation (this is for the Discussion section)
                                              iv.      Conclude with plan for next steps in research – of which the field is apparently wide open
e.       The methods section has been written; it originally used a broad net and this can be more tightly defined
f.       The objective of this review is to describe/summarize/identify/evaluate research studies that use any of these 9 tests and contain some evidence of reliability, accuracy, usefulness, and value.  Ben: Summarize the literature on reliability, accuracy, etc. for 9 commercially available tests.
                                                  i.      Eligibility criteria: Papers were eligible if they:
1.      One of 9 tests
2.      Children with normal hearing, between ages 6 and 14, who are neurologically intact
3.      “We reported any test… with a gold standard diagnosis.”  (None meet this standard.)  Must reference reliability or accuracy or influenced care (useful) or good value for money.
4.      Or state as an exclusion: we will exclude any study that does not include reliability data or accuracy or usefulness or value
a.       Will not include review articles
                                                ii.      Intervention: reported results of these tests and any evidence
1.      Reliability: including correlation, Bland Altman plots, test-retest coefficients of variation, inter-rater reliability
2.      Accuracy: Kairn originally thought that since there is no agreement on “auditory processing,” there are no studies meeting this requirement.  Ben argued that the study is eligible for inclusion if it studies association with any reference test or gold standard.
a.       The fact that the accuracy data that does exist only includes brain tumor cases is still a reason to discuss it and to identify the opportunity to study accuracy in children who are neurologically intact
3.      Usefulness: TBD
4.      Value: TBD
                                              iii.      Quality issues:
1.      Sample size
g.      Next steps:
                                                  i.      Look at Kristi Johnson’s lit review on evaluating lymphedema diagnostic tools.
                                                ii.      Rewrite the methods section, focusing just on the steps related to selecting and analyzing the literature ONLY through the four elements of evaluation (nothing else).  One result of this will be a simple count of the number of articles that included each of the four elements.
1.      Stay close to the systematic review process
2.      OK to collect notes as a narrative assessment (or a grading system) on the quality of the studies (but this is NOT the purpose of the study – do NOT lose focus; this review will not lead to a high level quantitative study of the literature)
3.      Keep track of common evaluation criteria: age of study, size of sample, population of kids… 
                                              iii.      Define Usefulness and Value in an unambiguous way. 
                                              iv.      Sort the lit reviews based on the criteria for each of the four elements; document exactly where the evidence is that satisfies the criteria
h.      Remember: do not point out every deficiency in the study.  Stick to the focus: a story that convinces your colleagues that your evaluation is right – that no studies meet the criteria for evaluating auditory tests.
                                                  i.      One study (Wilson) looked for connections between any of the test results and parent surveys for children referred to clinic
                                                ii.      It did not include reliability data (like test/retest)
                                              iii.      It did perform linear regression, but did not report accuracy data
                                              iv.      The article may be reported separately as included in the original filter, but excluded due to lack of test-evaluative data
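As an aside, the reliability evidence discussed above (test-retest comparisons, Bland-Altman plots, correlations) can be summarized with a few lines of code. This is a hypothetical sketch, not part of Kairn's review: the function names and the sample scores are made up for illustration.

```python
# Hypothetical sketch: test-retest reliability summaries for paired scores
# from the same children tested twice. Bland-Altman limits of agreement are
# mean difference +/- 1.96 * SD of differences; Pearson correlation is the
# (weaker) evidence many reviews rely on instead.

from statistics import mean, stdev


def bland_altman_limits(test1, test2):
    """Return (bias, lower limit, upper limit) for paired test/retest scores.

    Roughly 95% of test-retest differences are expected to fall inside the
    limits of agreement.
    """
    diffs = [a - b for a, b in zip(test1, test2)]
    d_mean = mean(diffs)
    d_sd = stdev(diffs)
    return d_mean, d_mean - 1.96 * d_sd, d_mean + 1.96 * d_sd


def pearson_r(x, y):
    """Pearson correlation between two score lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den


if __name__ == "__main__":
    # Made-up percent-correct scores for 6 children tested twice
    first = [82, 75, 90, 68, 88, 79]
    second = [80, 78, 87, 70, 85, 81]
    bias, lo, hi = bland_altman_limits(first, second)
    print(f"bias={bias:.2f}, limits of agreement=({lo:.2f}, {hi:.2f})")
    print(f"r={pearson_r(first, second):.3f}")
```

This is also why correlation alone is inadequate (as noted for the Discussion section): two tests can correlate highly while disagreeing by a large, systematic amount, which the Bland-Altman bias and limits make visible.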

2.                  Next Workshop Meeting(s): Thursdays, 11:30 a.m. – 12:45 p.m., at Given Courtyard South Level 4.   Remember: the first 15 minutes are for checking in with each other.
a.       Sept 18: Marianne: draft of IRB application
b.      Sept 25: ???

Recorder: Connie van Eeghen

Thursday, September 11, 2014

Clinical Research Oriented Workshop (CROW) Meeting: Sept 4, 2014



Present:  Marianne Burke, Kat Cheung, Kairn Kelley, Amanda Kennedy, Rodger Kessler, Ben Littenberg, Connie van Eeghen

Start Up: Ben did a demonstration of a spontaneously dictated letter of support for Rodger and Connie’s SIM grant to the State of Vermont.   We also reviewed Rodger and Connie’s data flow management diagram and began to define the operational issues.  There is an obstacle regarding linkage of claims and clinical data.  Plan B is limited to incoming data streams from the practice EHRs that provide only primary care service information.  Incoming data streams from the all-payer claims data might be able to match to primary care services from the EHRs.  This doesn’t provide patient-level analysis, but may provide practice-level analysis.

1.                  Discussion: Rodger and Connie: SIM grant draft review
a.       What data will be available for analysis:
                                                  i.      Facilitation data
                                                ii.      Clinical data
                                              iii.      Focus group
b.      It was not clear from the start that this is an intervention: MORE than just a demonstration/observation project. 
                                                  i.      Which are being facilitated?  All 15? 
                                                ii.      Before and after comparisons
                                              iii.      What is the non-study comparator – for all the other practices we can know about?  In other words, how to evaluate secular trends?
1.      Before and after for arm 3
2.      Before and after for arms 1 and 2
c.       Suggestion: Identify 15 or more practices
                                                  i.      Recruit twice as many practices as can be facilitated (12 facilitated; 10-20 control; 30 total)
                                                ii.      Randomly assigned to intervention
1.      Two arms
a.       Integrated or not
b.      Improve VIP scores or not
                                                                                                                          i.      Do VIP scores correlate with outcomes?
2.      Three arms, with sub-stratification to prevent downgrading
                                              iii.      Intervention for 18 months: use hard driving facilitation
                                              iv.      Compare integration with non-integration, using step-wedge (Rodger added this later)
d.      Questions
                                                  i.      Do VIP scores correlate with outcomes?
                                                ii.      Can VT do integration: did VIP scores go up?
                                              iii.      Did it make a difference: did changed VIP scores correlate with changed outcomes?
                                              iv.      (Per Rodger, these questions are consistent with NCQA questions)
e.       Conclusion: 15 practices, all receiving facilitation over time, all with before/after comparisons of VIP and patient assessment scores
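The "recruit twice as many practices as can be facilitated" suggestion amounts to a simple randomization step. A minimal sketch, with illustrative practice names and counts (not taken from the grant):

```python
# Hypothetical sketch: randomly assign recruited practices to a facilitated
# (intervention) arm and a control arm, as in the suggestion to recruit
# twice as many practices as can be facilitated.

import random


def assign_arms(practices, n_facilitated, seed=0):
    """Randomly split practices into facilitated and control groups."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = practices[:]
    rng.shuffle(shuffled)
    return {
        "facilitated": sorted(shuffled[:n_facilitated]),
        "control": sorted(shuffled[n_facilitated:]),
    }


if __name__ == "__main__":
    recruited = [f"practice_{i:02d}" for i in range(1, 25)]  # 24 recruited
    arms = assign_arms(recruited, n_facilitated=12)
    print(len(arms["facilitated"]), len(arms["control"]))
```

A concurrent randomized control arm would also answer the secular-trends question raised above, which a before/after design in all 15 practices cannot.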

2.                  Next Workshop Meeting(s): Thursdays, 11:30 a.m. – 12:45 p.m., at Given Courtyard South Level 4.   Remember: the first 15 minutes are for checking in with each other.
a.       Sept 11: Kairn: draft of lit review
b.      Sept 18: Marianne: draft of IRB application
c.       Sept 25: ???

Recorder: Connie van Eeghen