Saturday, March 20, 2010

Clinical Research Oriented Workshop (CROW) Meeting: Mar 19, 2010

Present: Matt Bovee, Kairn Kelley

1. Start up: Book club – “The Checklist Manifesto”: The first half of the session was general discussion of Gawande’s The Checklist Manifesto.
At the last meeting we agreed to discuss chapters 5-6 today. However, there was also a question about whether The Checklist Manifesto is worth many more sessions. At today’s session “we” (Kairn and Matt) decided that we would like one more session to a) wrap up for those not in attendance today, and b) get a roundtable review of the “so what”:
• What is the book’s value/take-away according to CROW and individual members
• What can or should be done based on the book’s message(s)? How? By whom? For what purpose?

Other observations:
-Huge amounts of effort, experience, and money go into producing a good checklist, which then has to be modified on the ground by the ‘locals’ applying it
-Checklists are similar to decision support tools of other types (they can involve a lot of effort and cost to develop, and the benefits of application differ for experienced users versus novices)
-A checklist changes/impacts the distribution of authority within a team. Even so, everybody knows the checklist is there and has a responsibility to chime in if they have concerns or see a problem
-It is all about distributed cognition: a team of people decides what's important, tests it out, refines it, and applies it. A checklist focuses the attention of a whole lot of people on a single process
-The process of attending to and evaluating a process and its outcome(s) is in itself likely to produce beneficial results, though not all of what Gawande claims for checklists
-The checklist seems to eliminate the isolation of the provider (and others) and changes the context: it says the outcome is more important than any one individual implementing part of the process, and the process is important to the outcome. Anyone can question the process whenever they have concerns, without it being perceived as insubordination
-NB: one of the first steps in facilitating business process redesign is to get people to set aside point-by-point criticism, both to elicit information about the process and to help build a team approach and mentality
-Introducing everyone on the team is a 'revolutionary' step that makes explicit one of the aspects of team building
-There are lots of similarities/connections among business process redesign, team building, simulation, knowledge management, and decision support

Questions
-Would a checklist have an impact on people's performance simply because someone could audit whether they used it?
-Does the checklist act as a buffer that protects people's sensitivities when someone challenges gaps in the ongoing process?


2. Presentation: Peter Callas – “Data Hygiene”
Handout on “Data Management” (distributed separately by Matt)
Data coding choices/schemes
-You can do it the traditional way, with a full stand-alone data dictionary, or... Peter puts all the definitions for the columns of his Excel data in the first row of the worksheet. That way each definition sits right above the column in which the data are captured.
-Different schemes work for different folks. There are advantages to using numbers as raw data (instead of “F” and “M”, or “Female” and “Male”, for example), but document the coding and be consistent.
-If you prefer descriptive text, plan on additional processing steps to convert those values prior to analysis (see the recoding sketch after this list).
-Typically in spreadsheets, rows = observations and columns = variables/measurements. However, a table can be transposed if needed, either in Excel or programmatically.
-Given constraints of money, time, imposition on the study participants, etc. it is still important to capture all the details you'll need, whatever scheme you prefer.
-There is no single "best" statistical software package for analyses. Popular ones include Stata, SAS, and SPSS.
-If your data are in multiple tables, they can be merged into a single table prior to analysis, or merged programmatically during analysis by the statistical software. You need identifiers in each table to match up the rows, though (see the merge sketch after this list).
-Differences between data sets collected at different times or in different ways can lead to loss of power, loss of specificity, or loss of generalizability. Sometimes there is the happy ending that the sets are completely mergeable, or the early data can be treated as a 'pilot' and everything is OK after that.
-Peter: “we double data enter everything in our department using Excel.” It would be nice to do this with immediate feedback, e.g., a color change that traps errors as they are entered (see the double-entry check sketch after this list).
-It is recommended to use different folders for different projects, and to include the date and a version number in data sets that change over time. For example, if you recode or otherwise transform a particular piece of data for all subjects, save the revised data set in a new file whose name includes the date and version number of the new data set (see the file-naming sketch after this list). That way you can go back and reconstruct edits over time if need be.
-When you merge data sets, do they have the same number of variables? Are there variables with the same name in the two sets (and are they still what they were originally supposed to be, or did one get copied over the other)? It’s important to have logic checks in place to ensure the data are correct, whether you’ve just entered them or are transforming/recoding them.
-Logic checks can be created easily in Excel for double data entry, for example (see attached handout, item 2).
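
As a concrete illustration of the recoding point above, here is a minimal sketch in Python/pandas (the column names and codes are made-up examples, not from Peter's handout):

    import pandas as pd

    # Toy data entered as descriptive text
    df = pd.DataFrame({"subject_id": [1, 2, 3],
                       "sex": ["Female", "Male", "Female"]})

    # Document the coding scheme once, then apply it consistently
    sex_codes = {"Female": 1, "Male": 2}
    df["sex_code"] = df["sex"].map(sex_codes)

    # Any value not in the scheme (e.g., a typo) becomes NaN,
    # which makes entry errors easy to spot
    print(df)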
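
Similarly, a sketch of merging two tables on a shared identifier (the table and column names here are hypothetical):

    import pandas as pd

    demographics = pd.DataFrame({"subject_id": [1, 2, 3],
                                 "age": [34, 52, 47]})
    labs = pd.DataFrame({"subject_id": [1, 2, 4],
                         "hdl": [55, 42, 61]})

    # Rows are matched via the shared identifier; an outer join keeps
    # subjects that appear in only one table, so mismatches stay visible
    merged = demographics.merge(labs, on="subject_id", how="outer")
    print(merged)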
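
A sketch of a programmatic double-entry check, as one analogue of the color-change idea (toy data; it assumes both entries share exactly the same layout):

    import pandas as pd

    # Two independent entries of the same records
    entry1 = pd.DataFrame({"subject_id": [1, 2, 3], "sex_code": [1, 2, 2]})
    entry2 = pd.DataFrame({"subject_id": [1, 2, 3], "sex_code": [1, 2, 1]})

    # Cell-by-cell comparison; True marks a discrepancy
    mismatches = entry1.ne(entry2)

    # Flag rows with any disagreement for manual review
    print(entry1[mismatches.any(axis=1)])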
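
And a small sketch of the dated, versioned file-naming idea (the naming scheme itself is illustrative, not something prescribed in the talk):

    from datetime import date

    import pandas as pd

    df = pd.DataFrame({"subject_id": [1, 2], "sex_code": [1, 2]})

    # Save each transformed data set to a new file stamped with the
    # date and a version number, so edits can be reconstructed later
    version = 2
    fname = f"mystudy_{date.today():%Y%m%d}_v{version}.csv"
    df.to_csv(fname, index=False)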

Topics and other plans for the next meeting were not otherwise addressed.


3. Next Fellows Meeting(s): Mar 26, 2010 from 9:30 – 11:00 a.m., at Given Courtyard Level 4
a. Mar 26: Book club (chapters 7-9 and wrap-up); follow-up session with Connie’s data
b. April 2: Topic suggestions: How to predict medical events effectively OR Mapping new NHANES data with mortality - Ben
c. Future agenda to consider:
i. Skype demo: Connie & Matt? Wait until Amanda K is back. Or do twice?
ii. Future: Review of different types of journal articles (lit review, case study, original article, letter to editor…), when each is appropriate, tips on planning/writing (Abby)
iii. Future: Informed consent QI: Connie to follow up with Nancy Stalnaker; Alan Rubin to follow up with Alan Wortheimer or Rob McCauly

Recorder: Matt Bovee

4 comments:

  1. Skype demos are quick and easy to do, and I typically have my webcam in my book bag. So doing this twice is not a big deal; it would take 10-15 minutes, maybe 20 at most if there were a lot of questions.

  2. Also, Peter Callas would be happy to come back for another session, either one more specific to individual CROW members' needs or one on other data/stats issues of interest to CROW.

  3. Finally, Chuck Hulse would be interested in presenting his talk on computer simulation and complex systems to CROW. It might be a fertile connection.

  4. The notes on Peter's talk are excellent. It sounds like he presented some great stuff.

    Ben

