- To study the processes and outcomes of cross-disciplinary team science initiatives using multi-method approaches
- To develop and apply new methods, metrics, definitions, models, and approaches for evaluating cross-disciplinary team science initiatives
The SciTS Team has been engaged in a number of research and program evaluation efforts centered on the transdisciplinary center grant initiatives funded through BRP/DCCPS, including: (1) the Transdisciplinary Tobacco Use Research Centers (TTURC) initiative, (2) the Transdisciplinary Research on Energetics and Cancer (TREC) initiative, and (3) the Centers for Population Health and Health Disparities (CPHHD) initiative. Below is a summary of the SciTS Team’s research and evaluation work related to each of these initiatives.
Transdisciplinary Tobacco Use Research Centers (TTURC) Initiative
The Transdisciplinary Tobacco Use Research Centers (TTURC) initiative was funded from 1999–2009 through the Tobacco Control Research Branch in BRP. The initiative aimed to facilitate integrated transdisciplinary approaches to tobacco use research. It was the first transdisciplinary research center initiative funded at NIH.
The SciTS Team was formed in 2001 in response to the launch of the TTURC initiative, as it became apparent that no frameworks or metrics were available to evaluate an initiative with such a unique structure (e.g., transdisciplinary collaborations, career planning components, higher-level management of the initiative, and other administrative and logistical challenges). The team’s charge was to evaluate TTURC and to develop evaluation tools that could be used in future evaluations of team science grant initiatives.
TTURC Evaluation Logic Model
A logic model was created to help develop an evaluation plan and guide research questions. It was created using a concept mapping strategy and was led by William Trochim, a professor at Cornell University and an expert in program evaluation, who played a leadership role in the SciTS team at the time.
TTURC Evaluation System
A battery of 20 instruments was developed during a pilot program evaluation of the TTURC initiative. This evaluation utilized a mixed method approach and involved participation from a variety of stakeholders in an effort to explore potential methods and measures that could adequately assess the complexities of a large transdisciplinary research initiative. Details of the methods and results of this evaluation can be found in the following publication:
Trochim, W.M., Marcus, S.E., Masse, L.C., Moser, R.P., & Weld, P.C. (2008). The evaluation of large research initiatives: A participatory mixed-method approach. American Journal of Evaluation, 29, 8–27.
TTURC Bibliometric Study
As funding for the TTURC initiative was coming to an end, there was interest in better understanding the outcomes and impacts associated with transdisciplinary center grant initiatives. To address this question, the SciTS team conducted a quasi-experimental longitudinal study comparing bibliometric indicators of productivity, collaboration, and impact between the TTURCs (as representative of transdisciplinary center grant initiatives) and R01s (traditional investigator-initiated grants) in the same field. Details on the methods and results from this study are included in a paper by Hall et al. (2012).
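The kind of group comparison described above can be illustrated with a small, entirely hypothetical sketch: given per-publication records tagged by grant type, compute simple productivity, collaboration, and impact indicators for each group. The record fields and all numbers below are invented for illustration and are not TTURC data.

```python
from statistics import mean

# Hypothetical publication records: (grant_type, n_authors, n_citations)
publications = [
    ("center", 6, 40), ("center", 9, 25), ("center", 4, 60),
    ("r01", 3, 30), ("r01", 2, 15), ("r01", 4, 45),
]

def indicators(records, grant_type):
    """Simple indicators for one grant group: publication count
    (productivity), mean author count (collaboration), and mean
    citation count (impact)."""
    group = [r for r in records if r[0] == grant_type]
    return {
        "n_pubs": len(group),
        "mean_authors": mean(r[1] for r in group),
        "mean_citations": mean(r[2] for r in group),
    }

center = indicators(publications, "center")
r01 = indicators(publications, "r01")
```

A real bibliometric study would add time (to follow trajectories across funding years) and field-normalized citation measures, but the per-group summary structure is the same.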
TTURC Visualization and Global Mapping
The SciTS team worked in collaboration with Katy Borner, an expert in data visualization, to apply innovative visualization methodologies to the TTURC publication data. These visualization techniques provide a richer understanding of the relationships identified through other methodological approaches (e.g., bibliometrics).
Transdisciplinary Research on Energetics and Cancer (TREC) Initiative
The Transdisciplinary Research on Energetics and Cancer (TREC) initiative (2004–2009; refunded 2011–2016) is funded through the Health Behaviors Research Branch with the goal of fostering transdisciplinary research in nutrition, physical activity, energy balance, obesity, and cancer. The SciTS team has been involved in program evaluation efforts with the TREC initiative since it began in 2004.
TREC Evaluation Logic Model
The TREC logic model identifies the temporal relationships among key process and outcome variables related to team science, including collaborative readiness, collaborative capacity, and outcomes. The model was developed as the result of a systematic review of the literature from a variety of disciplines (e.g., team research, virtual teams, informatics, human and computer interaction) and served as the basis for a stage-dependent evaluation strategy of the TREC initiative.
TREC Baseline Researcher Survey
The TREC Baseline Researcher Survey was developed to gather baseline information on collaborative readiness and early evidence of cross-disciplinary collaboration.
It included sections on respondents’ demographic characteristics, history of collaboration, research orientation (i.e., degree of proclivity toward transdisciplinary collaborative research), collaborative resources, collaborative processes, collaboration activities, attitudes about other center participants, institutional transdisciplinary research culture, and training resources, activities, and opportunities. Details on the methods and results from this survey are included in a paper by Hall et al. (2008).
This survey is adaptable for use in the evaluation of other large cross-disciplinary team science initiatives.
TREC Follow-up Researcher Survey
The TREC Follow-up Researcher Survey gathered information about ongoing collaboration and training activities, processes surrounding collaboration, and the benefits and challenges of implementing and participating in TREC research and training activities. The survey repeated sections included in the TREC Baseline Researcher Survey such as demographic information, collaborative and training activities, collaborative processes, and researchers’ orientation toward transdisciplinary research. It also included new measures that focused on assessing TREC training activities and perceived benefits and challenges of collaboration and training within the TREC initiative.
Results were analyzed in conjunction with results of the baseline survey in order to assess the relationships among antecedent conditions and early collaborative processes measured in the baseline survey with collaborative outcomes and products measured in the follow-up survey.
Like the baseline survey, the follow-up survey is adaptable for use in the evaluation and monitoring of other large cross-disciplinary team science collaboration and training.
Written Products Protocol
The Written Products Protocol was a rating instrument developed to assess the extent to which transdisciplinary integration occurred in TREC Developmental Pilot Projects. These pilot projects were developed by investigators beginning in the second year of the TREC initiative to elaborate upon pre-existing research activities within TREC through innovative transdisciplinary approaches.
External reviewers used the instrument to rate research proposals on the following factors: (1) collaboration within or across TREC centers, (2) inclusion of multiple disciplinary perspectives, and (3) the scope of transdisciplinary integration with respect to research methods and analytical approaches.
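As a purely illustrative sketch of how such ratings might be aggregated (the rating scale, factor names, and equal weighting below are assumptions, not the actual protocol), each proposal's factor ratings could be averaged across reviewers and then combined into an overall score:

```python
from statistics import mean

# Hypothetical reviewer ratings on a 1-5 scale for the three factors:
# collaboration across centers, multiple disciplines, and scope of
# transdisciplinary integration.
ratings = {
    "proposal_A": [
        {"collaboration": 4, "disciplines": 5, "integration": 3},
        {"collaboration": 3, "disciplines": 4, "integration": 4},
    ],
}

def proposal_score(reviews):
    """Average each factor across reviewers, then average the three
    factor means (equal weights assumed) for an overall score."""
    factors = reviews[0].keys()
    factor_means = {f: mean(r[f] for r in reviews) for f in factors}
    return factor_means, mean(factor_means.values())

factor_means, overall = proposal_score(ratings["proposal_A"])
```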
TREC Lessons Learned Study
As the TREC initiative neared the end of its first funding cycle (2004–2009), qualitative in-depth interviews were conducted to capitalize on knowledge gained and lessons learned by both grantees and NCI program staff about engaging in transdisciplinary center grant initiatives. The study goals were: (1) to record TREC grantees’ strategies for success, challenges, and lessons learned related to engaging in a transdisciplinary center grant initiative; (2) to record TREC Coordinating Center and NCI TREC program staff members’ success, challenges, and lessons learned related to supporting a transdisciplinary center grant initiative; and (3) to document broad impacts of TREC, including advances in the science of energetics and cancer; impacts on participating scientists, trainees, and academic institutions; and impacts on other NCI and NIH activities.
Interview guides were developed for different types of TREC participants, including center directors, senior and junior investigators, trainees, coordinating center staff, and program staff. These guides were tailored to capture the unique perspective of each interview participant, based on his or her role within TREC. Interview questions addressed diverse aspects of successful team science, including factors at the levels of the academic institution and center that facilitated team science; strategies used by investigators to facilitate team science; training benefits and impacts on professional development; and coordinating center and NCI strategies to support effective transdisciplinary and cross-center collaboration.
Centers for Population Health and Health Disparities (CPHHD) Initiative
The CPHHD initiative (2003–2008; refunded 2010–2015) was established to accelerate research on health disparities by supporting transdisciplinary approaches that could help advance the science on causal factors and effective interventions to address health disparities.
Social Network Survey
During the second funding period for CPHHD, the SciTS team implemented a baseline social network and collaboration readiness survey beginning in late 2010. More than 90 percent of the key investigators involved in CPHHD centers completed the survey. Data were collected on investigators’ current and prior collaborations and on the type of each reported collaboration (e.g., co-authored publication, mentoring/trainee relationship, committee or work group participation, etc.).
A follow-up survey will be conducted toward the end of the five-year funding period in order to assess the impact of the initiative on transdisciplinary collaborations.
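Collaboration data of this kind are naturally represented as a network of investigators connected by typed ties. A minimal sketch, with invented investigators and ties (a real analysis would likely use a dedicated library such as networkx), builds an undirected collaboration network and computes each investigator's degree, one simple baseline measure that a follow-up survey could be compared against:

```python
from collections import defaultdict

# Hypothetical reported ties: (investigator_a, investigator_b, tie_type)
ties = [
    ("A", "B", "co-authored publication"),
    ("A", "C", "mentoring/trainee relationship"),
    ("B", "C", "committee or work group participation"),
    ("C", "D", "co-authored publication"),
]

# Build an undirected adjacency structure: each tie links both directions.
adjacency = defaultdict(set)
for a, b, _tie_type in ties:
    adjacency[a].add(b)
    adjacency[b].add(a)

# Degree = number of distinct collaborators per investigator.
degree = {person: len(neighbors) for person, neighbors in adjacency.items()}
```

Comparing degree (and, with center affiliations added, the count of cross-center ties) between baseline and follow-up would show whether collaboration broadened over the funding period.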
Review of Evaluations of Center and Network Initiatives at NIH
In an effort to better understand the scope of program evaluations for center and network initiatives supported by NIH, in 2010 the SciTS team contracted a review of published and unpublished evaluations of these initiatives produced over the preceding three decades (1978–2009). A total of 61 evaluation cases were identified, and reports and articles produced by these evaluations were analyzed. Study design characteristics, including variables used, research methods, and data sources, were summarized. Implications for further program planning and evaluation design were discussed.