Sunday, September 10, 2017

Fwd: A new Funding Paradigm for highly scored but unfunded research applications - OnPAR

Interesting new model to find funding...
Benjamin Littenberg, MD

Henry and Carleen Tufo Professor of Medicine and Professor of Nursing, University of Vermont
89 Beaumont Avenue, Room S459, Burlington, Vermont 05405
c 802-343-2830

---------- Forwarded message ----------
From: Hilda Alajajian <>
Date: Thu, Aug 31, 2017 at 12:38 PM
Subject: A new Funding Paradigm for highly scored but unfunded research applications - OnPAR

OnPAR (Online Partnership to Accelerate Research) is a recently announced funding paradigm for highly scored but unfunded research applications. OnPAR is a matchmaking platform, developed by Leidos, where applicants and non-government funders come together. Government agencies, private foundations, pharmaceutical companies, venture capital funds, and other funders of biomedical research are part of a funding ecosystem that supports global biomedical research. The players are Leidos (the company supplying the platform), Partners (national and international government funding agencies who work with Leidos to announce OnPAR to their unfunded, high-scoring applicants), Members (non-government and other private organizations and companies seeking to fund research), and Applicants (researchers with high-scoring, unfunded research applications).


OnPAR started as a pilot partnership between NIH and Leidos to find private support for highly scored but unfunded NIH biomedical research applications. It is now in the process of expanding to other areas (energy and agriculture) and adding additional Partner agencies (NSF, DoD, DOE, NASA, and others). Currently, NIH is the only funding agency actively partnering with OnPAR to alert unfunded applicants, but any eligible applicant is encouraged to register at OnPAR and submit their highly scored, unfunded application abstract for consideration. OnPAR retains abstracts for one year.


Some past articles about OnPAR:

·         JACC: Basic to Translational Science - Funding Research Through the Online Partnership to Accelerate Research (OnPAR)

·         NIH blog from Mike Lauer - A Pilot Partnership to Find Private Support for Unfunded Applications

·         AAMC News – Program Provides Alternative Path to Funding for Research Grants


For questions contact: Martin A. Dueñas, MPA | Leidos

Director, Health Research Management Practice | Manager | Life Sciences | Health Solutions Group
Mobile (1): 202.905.4582 | Mobile (2): 917.318.6521 | Skype: mduenas58

Time Zone: Eastern Standard Time



Hilda Alajajian, MLS

Grant Resources Specialist

Sponsored Project Administration

University of Vermont

(802) 656-1322


Friday, August 18, 2017

Congratulations to Emily Tarleton

Congratulations to Emily Tarleton on a successful dissertation defense earlier this week. Emily gave a wonderful presentation to a packed classroom. Nicely done! We are all very proud of you, Dr. Tarleton!

Emily Tarleton will join Peter Durda on the list presented to the University Senate in October 2017 as candidates to receive their PhDs in CTS. Program Director Dr. Benjamin Littenberg looks forward to participating in the subsequent hooding ceremony.

Thursday, July 20, 2017

Fwd: Round 6 NHATS Beta Data Released


The National Health and Aging Trends Study (NHATS) is pleased to announce that a beta version of the Round 6 data files is now available at  Data are available in both SAS and Stata formats. This beta release also includes sensitive data from NHATS.  Information on how to apply for sensitive data can be found at  

Updated documentation, including a revised User Guide, technical papers, and a crosswalk between the instruments and the codebook, has been posted on    

- Ben Littenberg

Group 8

Friday, July 14, 2017

Clinical Research Oriented Workshop (CROW) Meeting: July 13, 2017

Present:   Levi Bonnell, Marianne Burke, Jessica Clifton, Justine Dee, Nancy Gell, Kairn Kelley (phone), Lillian Savard, Juvena Hitt, Ben Littenberg, Tim Plante, Connie van Eeghen

Start Up: Big crowd – time to move downstairs?  Introductions and welcome to Levi, Jessica, and Tim.
1.                   Manuscript on SBIRT: van Eeghen, Hitt (in the cone of silence)
a.       There is an editorial from about 5 years ago on how to write for Academic Medicine
b.       Summary: impact of education program on SBIRT for inter-professional group with pre/post assessment using self-reported data
c.       Title:
                                                   i.      First title: add “in providers” or “of providers”
                                                 ii.      Needed clarity on whether they were students or providers, and what level of degrees they had or were trying to get.  Were they actually engaged with patients as they went through the program?
d.       Strengths: well-written, interesting, IPE (hot), novel
e.       Weaknesses: self-report data; add “perceived” to title
                                                   i.      Recalled that there was a knowledge quiz; not just self-report
                                                 ii.      No objective measure of skills
                                               iii.      Abstract: was the intervention split into two different groups?  Yes, but not clear until much later in the article
1.       Does not match the core of the paper.
                                               iv.      What problem is it solving:
1.       If you don’t train them together, they do worse (no data on this) – this is not a measure of IPE; did not justify the relevance
a.       Did not talk about the administrative difficulty of pulling this off; was it worth it?  Does it work better than not going through this trouble?  Did not respond to assertion made in line 88 about “better utilization of shrinking health care workforce” – Ben got it.
b.       IPE is about learning about each other’s roles.  Did the surveys evaluate that?
2.       Does training improve attitudes, knowledge, skills
a.       But measured whether the effect was different between the 2 groups AND whether the groups were different at baseline – what was the story?
                                                                                                                           i.      “We had three goals” – and put them on different groups
                                                                                                                         ii.      People probably care that the groups are different – is that what this study is about?  Where is the literature?  What is the hypothesis?  This came out of left field.  Needs to be in intro. 
                                                                                                                       iii.      How does comparing learners at baseline say something about the training?
                                                 v.      IM residents had a different intervention – of course they show up differently
1.       IM bundled with FM and NP student groups: what was the rationale?  Not clear – because they were bio-medical-ish?  Aren’t the other groups (SW, Counseling) also bio-medical?  There is no specification of what “nursing” means – PCPs? 
2.       Should they analyze all five groups separately?  Consider changing the unit of analysis
f.        Background: did not specify the gap
g.       Method:
                                                   i.      Language about didactic training is out of place
                                                 ii.      Line 127 is a sub-sub heading of 123
h.       Curriculum:
                                                   i.      Good detail
                                                 ii.      Table 1 – supplementary appendix
                                               iii.      Why didn’t the survey address the items on IPE in Table 1?  E.g. mutual respect.
1.       The survey was more about SBIRT, not about IPE
i.         Survey methods – 174
                                                   i.      Hard to parse
                                                 ii.      Likert scale not identified up front
                                               iii.      Analyzed dichotomously – but all questions were analyzed that way.  The methods are “scattered all over”
                                               iv.      No comment about whether the surveys were validated.  If derived from other surveys, were those validated?  Need to be explicit; otherwise suspect.  (Discussion of validation would be another paper)
j.         Analysis
                                                   i.      Why dichotomized?
1.       Better: domain score (communication, knowledge); here is the score for each student; average out the change in scores for each question within domain; better power
a.       Makes the figures easier to interpret
b.       For the domain: indicates the change by domain, rather than each item
c.       Null: the average change is 0
Likert scale (0–5), example for one student ("Joe"):
Joe Q1: 1st response = 1, 2nd response = 3, change = +2
Joe Q2: 1st response = 1, 2nd response = 4, change = +3
Joe Q3: 1st response = 2, 2nd response = 1, change = -1
Average change for Joe: (+2 + 3 - 1) / 3 = +1.33

Across all students, the change can be expressed as a mean, range, standard error, CI…  With this number of students, will probably have a small enough standard error to show a difference.
                                                 ii.      There are four domains; represent as a box and whisker diagram, with each box a domain along the x-axis
1.       Perception (attitude)
2.       Perception (skills)
3.       Perception (communication)
4.       Knowledge (which was not graphically presented, but not explained why)
                                               iii.      “Who improved more” has to be part of the plan, explicitly
1.       Each domain as a box/whisker plot, in total and breaking out the FOUR groups (lump IM and FM together)
                                               iv.      There is way too much detail in the bar charts for the audience of AM: deans
k.       Title redux:
                                                   i.      A comparison of the SBIRT impact on different professions OR
                                                 ii.      The overall impact of SBIRT OR
                                               iii.      What the students are like at baseline
                                               iv.      If all three: set up as 3 goals, 3 methods, 3 results, 3 discussions (a long article)
                                                 v.      What about the first line of survey methods: what students thought about the training program – what happened to that?
l.         Discussion - limitations:
                                                   i.      270: differences in trainees is actually a strength
                                                 ii.      289: there was great power
                                               iii.      What was the n: did it count just the paired responses?  Be clear
m.     Conclusions:
                                                   i.      No previous mention of expense
                                                 ii.      Last 3 sentences – not needed.  Leave at two sentences… this is what could get it into the journal.
n.       Figures
                                                   i.      Put questions in appendix
                                                 ii.      Consistent p values (consistent decimal places)
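The domain-score analysis suggested above (average the pre-to-post change for each question within a domain, then summarize across students as a mean with a standard error and confidence interval) can be sketched in Python. This is a minimal illustration only: the student names, item-to-domain groupings, and Likert responses below are hypothetical placeholders, not data from the study.

```python
# Sketch of the proposed domain-score analysis (hypothetical data).
from statistics import mean, stdev
from math import sqrt

# Items grouped by domain (groupings here are illustrative).
domains = {
    "attitude": ["Q1", "Q2"],
    "knowledge": ["Q3"],
}

# Each student's (pre, post) Likert responses (0-5) per item.
students = {
    "Joe": {"Q1": (1, 3), "Q2": (1, 4), "Q3": (2, 1)},
    "Ann": {"Q1": (2, 4), "Q2": (3, 5), "Q3": (1, 3)},
    "Raj": {"Q1": (0, 2), "Q2": (2, 3), "Q3": (3, 4)},
}

def domain_change(responses, items):
    """Average pre-to-post change across a domain's items for one student."""
    return mean(post - pre for pre, post in (responses[q] for q in items))

for name, items in domains.items():
    changes = [domain_change(resp, items) for resp in students.values()]
    m = mean(changes)
    se = stdev(changes) / sqrt(len(changes))   # standard error of the mean
    ci = (m - 1.96 * se, m + 1.96 * se)        # rough 95% CI (normal approximation)
    print(f"{name}: mean change = {m:+.2f}, SE = {se:.2f}, "
          f"95% CI = ({ci[0]:+.2f}, {ci[1]:+.2f})")
```

The null hypothesis is that the mean change is 0; with a realistic number of students the standard error should be small enough to detect a difference, and one box-and-whisker per domain (optionally split by profession group) follows directly from the `changes` lists.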

2.                   Next Workshop Meeting(s): Thursdays, 11:30 a.m. – 12:45 p.m., at Given Courtyard South Level 4, through the end of August 2017.
a.       July 20: Jessica – publication plan for dissertation (no Ben)
b.       July 27: (no Ben)
c.       August 3: (no Ben, Connie, Jessica, Lillian)
d.       Future topics:
a.       Juvena: protocol development
b.       LaMantia: predictors of successful R01 applications

Recorder: Connie van Eeghen