Trends in Large Proposal Development at Major Research Institutions

Volume XLVII, Number 1
Authors: 
Lorraine M. Mulfinger, PhD
The Pennsylvania State University, University Park, PA
Kevin A. Dressler, PhD
The Pennsylvania State University, University Park, PA
L. Eric James, JD, MS
Huron Consulting Group, Chicago, IL
Niki Page
The Pennsylvania State University, University Park, PA
Eduardo Serrano
Huron Consulting Group, Chicago, IL
Jorge Vazquez
Huron Consulting Group, Chicago, IL

Background

The Highly Competitive Funding Environment

The National Science Foundation (NSF) recently reported to the National Science Board (NSB) that, according to the NSF Enterprise System, the number of all proposals acted upon from 2001 to 2013 increased by 53% while the percent of submissions receiving awards (i.e., the proposal success rate) decreased by 9% over the same period (National Science Foundation, 2014c). The same report noted a similar trend for research awards over the 2001-2013 period, with the success rate declining from 27% in 2001 to 19% in 2013. NSF noted to the NSB that specific factors (e.g., an increase in mean award size and budget changes such as the Budget Control Act of 2011 and the American Taxpayer Relief Act of 2012) limited the number of new awards that could be made in 2013, resulting in a 5% decrease from 2012 to 2013. The overall increase in the total number of awards since 2001 is one story, but the decrease in proposal success rates (for proposals acted upon by NSF) tells another. Although the American Recovery and Reinvestment Act (ARRA) of 2009 provided temporary relief from the downward trend in funding rates at NSF (boosting the rate to 32% in 2009), the impact was short-lived (NSF, 2014a; 2014c).

A similar funding history is seen at the National Institutes of Health (NIH). While current budget discussions offer hope for a significant NIH budget increase in the near future, the agency saw its overall proposal success rate drop by more than 14 percentage points between 1999 and 2013: the overall success rate for all award types was 34% in 1999 and reached an all-time low of 18% in 2013 (NIH, 2014). ARRA had much less of an impact at NIH. There, the biggest drop in success rates occurred between 2003 and 2004 (from 32% to 26%), coinciding with the end of the historic annual budget increases that doubled the NIH budget between 1998 and 2003 (Smith, 2006).

Large Research Proposals and Team Science

The second factor impacting the size of grant requests and awards has been funding agencies' increased emphasis on collaboration across scientific disciplines, reflected in a growing number of multiple principal investigator (multi-PI) grants (including centers and other multi-year programs) and larger average award sizes. At NSF alone, the number of single-PI and multi-PI awards increased by 4.8% and 18.6%, respectively, between 2004 and 2013, while the success rate of multi-PI grants remained mostly unchanged, with a slight decrease from 18% to 17% (NSF, 2014c). This shift to larger, multi-PI research grants is even more pronounced at NIH, where the number of multi-PI grants grew by two orders of magnitude from 2006 to 2013 (National Research Council Committee on the Science of Team Science, 2015). The opportunity presented by this growing proportion of multi-PI grants is therefore real, though competition remains as stiff as it was more than a decade ago. Multi-PI programs are especially attractive to research institutions not only because of their large per-award dollar amounts, but because most also span a longer period (5-7 years) than typical single-investigator grants (2-3 years), providing a level of economic stability not available from smaller, single awards. Validation for this increased emphasis on team science is provided by Stipelman et al. (2014), who showed that team-based transdisciplinary research had more rapid and broader impact across the science community than investigator-initiated programs.

Team science approaches to research are clearly a developing trend among academic researchers, reflected in the nature of both publications and grants. A co-authorship analysis of articles published in three leading science journals (Nature, Proceedings of the National Academy of Sciences USA, and Science) shows a steady increase since 1958 in the number of authors per publication, extrapolating to a predicted average of 19 co-authors per publication by 2040 (Pavlidis, Petersen, & Semendeferi, 2014). While some agencies, such as NSF, have long recognized multiple principal investigators on grants, NIH formalized multi-PI status only in 2007 (NIH, 2006). NIH currently makes about a fifth of its external awards to projects with multiple PIs, and some suggest this trend could and should grow at NIH and other funding agencies in the coming years (Chronicle Staff, 2014).

As team science has blossomed, agencies have responded by making more large awards (Figure 1). Between 2000 and 2014 (U.S. Office of Management and Budget, 2014), a general trend toward more awards in the $1M-$5M and/or $5M-$25M ranges can be seen across at least four major agencies: NIH, NSF, the US Department of Agriculture (USDA), and the Department of Defense (DoD). Despite the effects of federal budgeting delays, economic policy changes, and special initiatives such as the 2009 American Recovery and Reinvestment Act, both the trends and special windows of opportunity (e.g., DoD and the Department of Energy [DoE] in 2010) are apparent for five of the six major agencies examined, even when award sizes are adjusted for inflation.

Anecdotal evidence suggests that research administrator interest in the topic of large research proposals is high, arguably in response to the trends described above. Feedback from a session at a major research administrator conference (Dressler et al., 2014), a related webinar, and additional informal conversations about large proposal development suggested that support for large, multi-investigator proposals is heterogeneous across institutions. An obvious question of interest for this group is whether evidence exists that specific support models affect proposal funding success. A team from Penn State University and the Huron Consulting Group therefore developed and administered a survey to better understand the models being used to support these large, multi-investigator proposals. Many studies have examined the science of team science, the most recent comprehensive one being the 2015 report of the National Research Council Committee on the Science of Team Science, which focused on opportunities to enhance the effectiveness of collaborative research in science teams, research centers, and institutes. Whereas typical science of team science studies focus on the teaming aspect of these groups, this study focused specifically on the large proposals such teams submit. In this way, our research is complementary, since proposal development is one of many activities these teams perform in pursuit of research and education outcomes.

Institutional Responses to Changes in the Funding Climate

A seminal study of the characteristics of research administration infrastructures at colleges and universities was conducted in 1996 by a team from Oak Ridge Associated Universities (ORAU) (Baker & Wohlpart, 1996). The ORAU study surveyed 80 institutions representing a wide range of Carnegie Classifications (Carnegie Commission on Higher Education, 1973), from Research 1 (R1) to Master’s 1 (M1). While the research administration landscape has changed over the past 20 years, the Carnegie Classification and the expenditures reported in NSF’s Higher Education Research and Development (HERD) Survey remain important institutional characteristics reflecting mission and size. Because R1 institutions can be expected to submit large proposals more frequently, institutions in this category were chosen as the focus for this first exploration of large proposal support, and total HERD expenditures were used as an indicator of the relative size of an institution’s research enterprise. The ORAU survey explored many of the same or similar features of “Office Functions” and “Office Resources,” but without differentiating the type or extent of services specifically devoted to large proposals, as this study does.

The Penn State/Huron survey was designed with input from researchers and research administrators to determine how large proposals are supported at different research institutions. The survey had two main objectives: 1) to characterize the heterogeneity of large proposal support models, and 2) to determine whether a relationship exists between funding success rates and proposal support services or the models themselves. Three working hypotheses related to the second objective were tested: 1) research institutions with centralized, dedicated Research Development Offices (RDOs)/Large Proposal Offices (LPOs) are more successful at submitting large proposals and having them funded; 2) a relationship exists between the number of dedicated RDO/LPO staff full-time equivalents (FTEs) and the success of large proposals; and 3) research institutions with RDOs/LPOs have a higher award rate for large proposals than those without.

The support models included LPO offices, LPO-type activity spread across different units, and combinations of support elements ranging from fully centralized to fully decentralized. In every case, the focus of this study was whether an institution supports strategic proposals differently than other proposals and, if so, how. Success was measured as the percent of submitted proposals that were ultimately funded by the target agency (i.e., the funding rate).
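To make the success metric concrete, the following minimal sketch (in Python) computes a funding rate as defined here; the counts are hypothetical and are not taken from the survey data.

```python
# Minimal sketch of the study's success metric: the percent of
# submitted proposals ultimately funded by the target agency.
# Counts are hypothetical, not taken from the survey dataset.

def funding_rate(funded: int, submitted: int) -> float:
    """Return the funding (success) rate as a percentage."""
    if submitted <= 0:
        raise ValueError("submitted must be a positive count")
    return 100.0 * funded / submitted

# Example: 7 funded out of 40 submitted proposals over $1M
print(f"{funding_rate(7, 40):.1f}%")  # -> 17.5%
```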

Methods

The survey content was developed in three main steps: 1) a six-member Penn State/Huron research team developed a draft survey based on the team’s knowledge and experience in research administration at multiple institutions; 2) the survey concept was shared in a discussion session at the 2014 annual meeting of the National Council of University Research Administrators (NCURA); and 3) a focus group was held by videoconference to solicit input from research administrators representing eight large institutions. Upon development of the draft survey in step one, the survey and research project plans were submitted to the Penn State Office for Research Protections for review, and the project was determined to be exempt from Institutional Review Board review requirements (IRB #44907).

An important function of the focus group was to provide input on the definition of a large proposal. For the purposes of this survey, the focus group’s consensus was to define large proposals as having two or more of the following attributes: 1) requesting funding totaling more than $1M per year; 2) involving more than two collaborating research institutions (i.e., subawards, federal laboratories/partners, industry partners, sites, or other); 3) involving two or more internal university departments; or 4) responding to a funding opportunity for which submissions are limited by the funder. A fifth attribute, sufficient on its own to define a large proposal, was requesting support for an activity designated as strategic by the institution. The focus group also refined the large proposal support model definitions.
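For illustration, this definition can be expressed as a simple decision rule. The sketch below is a hypothetical encoding, not part of the survey instrument; the field names are assumptions made for readability.

```python
# A sketch of the focus group's "large proposal" definition: two or more
# of the first four attributes qualify a proposal, and an institutional
# strategic designation qualifies it on its own. Field names are
# illustrative assumptions, not taken from the survey instrument.
from dataclasses import dataclass

@dataclass
class Proposal:
    annual_request_usd: float        # funding requested per year
    collaborating_institutions: int  # subawards, federal labs, industry, etc.
    internal_departments: int        # participating university departments
    limited_submission: bool         # funder caps the number of submissions
    designated_strategic: bool       # flagged as strategic by the institution

def is_large(p: Proposal) -> bool:
    if p.designated_strategic:       # fifth attribute: sufficient by itself
        return True
    attributes = [
        p.annual_request_usd > 1_000_000,
        p.collaborating_institutions > 2,
        p.internal_departments >= 2,
        p.limited_submission,
    ]
    return sum(attributes) >= 2

# Example: $1.5M/year across three internal departments, no subawards
print(is_large(Proposal(1_500_000, 1, 3, False, False)))  # -> True
```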

After the survey was adjusted according to this feedback, an invitation to participate was distributed to senior administrators at the top 100 institutions by Research and Development (R&D) expenditures, as reported by NSF for 2013 (NSF, 2014b). The top 100 were selected as the sample group because of the higher probability that they regularly submit large proposals, have established tracking systems, and have considered purposeful mechanisms for supporting such efforts. The survey was executed online using Qualtrics and managed by professional survey staff at the Penn State Survey Research Center.

Data

Survey participants were assured that the research team would not share the identities of the participating institutions and that published reports would avoid the inclusion of data that potentially could be used to identify individual institutions.

Following completion of the online survey, a data cleansing step was performed: the research team held a telephone conference with each responding team to ensure that the survey questions had been interpreted consistently across participants and to verify input. These contacts used a standardized set of data follow-up questions. Subsequently, a portion of institutions made minor adjustments (e.g., to the number of faculty, to reported R&D expenditures, or to the inclusion of overhead when estimating proposal or award value). The data and analysis in this report include those minor adjustments; importantly, none of the adjustments significantly affected the reported results.

Institutions were invited to report on either FY 2012 or FY 2013, depending on the window for which they could provide the most recent complete data. ARRA-funded projects were included if present in reported expenditure data for these fiscal years, but they do not affect the success rates for either 2012 or 2013 because those awards were made only in 2009 and 2010. Because expenditure reporting was used only as a surrogate for institutional size, it is not viewed as a confounding factor in analyzing survey data on success rates and proposal support during the 2012-2013 timeframe.

Proposal support data were differentiated among six models (Table 1), as determined by feedback from the 2014 NCURA conference and the pre-survey pilot group. James et al. (2015) described the six models in more detail, noting that the models may not be all-inclusive but were meant to capture the heterogeneity of support infrastructure known to this team and reflected in the pre-survey feedback mentioned previously. Institutions can employ a combination of models for large proposal support, drawing elements from different models; for example, an institution may offer support functions that are both centralized (Model 1) and decentralized (Model 3). A summary of model definitions is provided below and may also be found in James et al. (2015), where the models were used to develop a conceptual model.

Results

Participating Institution Demographics

Twenty of the 100 invited top-ranked research institutions (NSF, 2014b) provided partial or complete responses to the survey. As Table 2 shows, the 20-institution sample reflected the diversity of institution types and classifications represented in the overall top 100 from the 2013 HERD survey, and the mix of public and private institutions was very similar.

Proposal Success Rates by Award Size

Proposal funding success rates were requested across four dollar ranges defined by $250K steps up to $1M. These results are summarized in Figure 2. Not surprisingly, a clear trend toward lower mean funding rates as proposal values increase is evident. Of interest, however, is that the range of institutional success rates for proposals above $1M is larger than for any other category. This uniquely wide range might indicate institution-specific variables that affect proposal success in this size range more than in the lower ranges.

Institutional Expenditures and Proposal Success Rate

Given the different mean and greater variability in success rates for proposals over $1M, the survey data were next analyzed to determine whether the size of a respondent institution’s research funding base correlates with proposal success.

As a standardized metric for institutional size and research funding, institutions were asked to provide the total research expenditures reported to NSF for the HERD reporting year corresponding to their other reported survey data (FY2012 or FY2013) (NSF, 2013; 2014b). To preserve anonymity, each expenditure figure was then converted to a Relative R&D Expenditure Percentage based on the highest reported institutional spending level (i.e., the institution with the highest reported spending has a relative R&D expenditure of 100%). Using this metric, success rates for proposals >$1M and across all award sizes were explored for association with institutional relative R&D expenditures (Figure 3). A low but positive correlation with R&D expenditure level was noted across the survey respondents for awards >$1M (Figure 3A; note the positive slope, with R² = 0.1845) (NSF, 2013; 2014b). However, no positive correlation was evident between the success rates of all proposals (i.e., any award size) and relative R&D expenditures (Figure 3B; note the negative slope). The correlation of expenditures with success rates for awards over $1M (Figure 3A), but not for awards in general (Figure 3B), suggests that institutions with larger expenditures may be doing something differently to facilitate large proposal success. Moreover, the weak R² suggests that expenditure level is not the only variable, validating the need to examine other institutional characteristics captured by the survey in order to identify what drives success.
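As an illustration of the normalization and correlation steps just described, the sketch below rescales hypothetical HERD expenditures to a Relative R&D Expenditure Percentage and fits a least-squares line against >$1M success rates; all numbers are invented for the example and do not reproduce the survey data or the reported R² of 0.1845.

```python
# Sketch of the anonymization and correlation analysis described above,
# using invented numbers (not the survey data).
import numpy as np

herd_expenditures = np.array([2.1e9, 1.4e9, 9.0e8, 6.5e8, 4.0e8])  # hypothetical
success_gt1m_pct = np.array([22.0, 18.0, 15.0, 17.0, 12.0])        # hypothetical

# Relative R&D Expenditure Percentage: rescale against the top reporter
relative_pct = 100.0 * herd_expenditures / herd_expenditures.max()

# Least-squares fit and R^2, analogous to the plot in Figure 3A
slope, intercept = np.polyfit(relative_pct, success_gt1m_pct, 1)
r_squared = np.corrcoef(relative_pct, success_gt1m_pct)[0, 1] ** 2
print(f"slope={slope:.3f}, R^2={r_squared:.3f}")  # positive slope, modest R^2
```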

Support Model Types and Funding Rate of Proposals

The next step of the data analysis was to look for correlations between proposal success rates and any of the six large proposal support models reported by institutions. Table 3 lists the 20 participant institutions in order of overall proposal funding rate, including their ranking within the survey sample based on R&D expenditures (i.e., relative R&D expenditure ranking), their funding rates for two larger proposal categories ($750K-$999K and >$1M), and their large proposal support models. When analyzed with respect to >$1M funding rates, there is clear heterogeneity in support model infrastructure among the institutions, with 50% employing a combination of models. The College/Department/Center (CDC) support model was the most prevalent, present at 70% of the institutions (highlighted in the last column). Only three institutions reported separate LPO models, and these were broadly distributed across success rates.

Percent Effort in Relation to Proposal Funding

Data on the number of staff FTEs (full-time equivalents) dedicated to large proposal support were requested from survey participants. Percent FTEs were converted to hours using the formula 100% FTE = 40 hours per week for 48 weeks, or 1,920 hours per year; a worked sketch of this conversion appears below. This information was then plotted against the percentage funding of large proposals (Figure 4). Recognizing that this effort might be quantified through several highly variable approaches, two templates were offered to participants for systematically collecting this information.
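The conversion itself is straightforward; the sketch below applies the stated formula, with the example inputs invented for illustration.

```python
# FTE-to-hours conversion used in the survey analysis:
# 100% FTE = 40 hours/week x 48 weeks = 1,920 hours/year.
HOURS_PER_FTE_YEAR = 40 * 48  # 1920

def fte_to_hours(percent_fte: float) -> float:
    """Convert a percent-FTE figure to annual hours."""
    return percent_fte / 100.0 * HOURS_PER_FTE_YEAR

print(fte_to_hours(100))  # -> 1920.0
print(fte_to_hours(250))  # -> 4800.0, e.g., 2.5 FTEs on large proposal support
```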

In Figure 5, the percent funding of awards greater than $1M is plotted against that of all awards. The R² value of approximately 0.4 indicates that the two are related, suggesting that success factors for large proposals overlap with success factors for proposals in general, and vice versa. In short, successful institutions tend to be successful overall and to resource personnel time for large proposals.

Discussion

This study is a baseline assessment of pre-award support for large proposals and the various support models employed at research-intensive institutions. The results provide a first look into how successful institutions with diverse characteristics address large proposals. A strong trend toward decreasing success rates as proposal size increases is evident in the institutional medians, but trends are weak or inconclusive when success rates are associated with specific institutional characteristics such as overall R&D expenditures or support models.

A weak but positive trend was seen when considering the amount of personnel time spent on large proposals. The response rate for this portion of the survey suggests that the data were indeed challenging to collect: only 14 respondents provided them, and only 21% confirmed using a template. While it might be expected that institutions with LPOs would devote greater personnel time, Table 3 shows that only three institutions had LPOs, and their data suggested no trend relating LPO offices to the number of awards above $1M. Two of the three institutions with a specific LPO, the 4th- and 12th-ranked, were within one and two standard deviations of the mean, respectively. Two of the respondents (R&D ranks 14 and 16; see Table 3) did not report funding rates. However, the respondent institutions with Large Proposal Offices all indicated that they employ varied selection processes for determining which proposals to support, and none indicated that their LPOs support all large proposals. These are key points because they confound any attempt to assess the impact of Large Proposal Offices on funding success rates for proposals >$1M in this survey dataset.

The sample size for this study was relatively small, and the results could be confounded by a number of reporting variables. Data inquiry follow-ups revealed that certain participants chose to report for a single institutional unit rather than institution-wide, and others indicated that their success rates were likely boosted by a large relative percentage of non-competing renewals in their portfolios.

Conclusions

This study was a baseline investigation into large proposal support. Conclusive findings are limited to three: 1) the decentralized College/Department/Center (CDC) model is the most commonly used large proposal support model; 2) large proposal offices and units apply similar criteria in selecting proposals to support, the most common being awards equaling or exceeding $1M; and 3) institutional setting is a greater factor in success rates for larger proposals than for smaller ones, as evidenced by the greater variability in those rates.

While the conclusions are limited by data originating from a sample of 20 participants out of a possible 100, the study had broad representation (Table 2) and is valuable in providing a structure for the data and metrics needed to more fully assess proposal support infrastructure. For example, in addition to simply quantifying the number of staff FTEs involved in the support process, the characteristics and experience of these personnel may be important. Looking forward, as more institutions consider establishing LPOs, it will be of interest to know how these offices select research teams for proposal development support and how they identify funding opportunities worth pursuing.

Over the long term, it will be worthwhile to assess whether certain LPO support models grow or diminish in popularity. Information that could inform institutions’ choice of LPO models would include data on how large proposal success rates are affected by how long a specific model has been in place at a particular institution. For example, institutions that adopt new support models and infrastructures such as an LPO could track the overall number of proposals submitted that exceed $1M as well as the number of such proposals they supported, allowing them to measure the impact of support infrastructure changes on the funding rate of large proposals. If an institution’s overall funding rate drops from 21% to 19% while the funding rate for large proposals not supported by an LPO rises from 14% to 17% and the rate for LPO-supported proposals rises from 14% to 29%, a closer look at the metrics associated with these two models would be warranted. This would enable institutional resource decisions based on quantifiable data and return on investment. A major caution, however, is that environmental factors (e.g., uneven funding priorities across disciplines, geographical priorities among agencies, consistency among review panels) can be at play in large competitions, leading to comparisons of “apples to oranges” from one proposal support unit to another or even within a competition. Large proposals are developed in teams, and the direct impact of any single input or activity is difficult to measure, especially since proposal reviews do not generally identify the items that produce tipping points, positive or negative. Thus, it is often difficult to measure the direct impact of LPO support on a proposal because of these and other confounding factors.
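The kind of before-and-after tracking suggested above could be tabulated very simply; the sketch below uses the hypothetical rates from the preceding example and is illustrative only.

```python
# Sketch of the suggested tracking comparison, using the hypothetical
# funding rates from the text above (illustrative only).
before = {"overall": 21.0, "large, no LPO support": 14.0, "large, LPO-supported": 14.0}
after  = {"overall": 19.0, "large, no LPO support": 17.0, "large, LPO-supported": 29.0}

for category in before:
    delta = after[category] - before[category]
    print(f"{category}: {before[category]:.0f}% -> {after[category]:.0f}% ({delta:+.0f} pts)")

# The +15-point gain for LPO-supported large proposals against a flat or
# declining baseline would flag the LPO model for a closer look, subject
# to the environmental caveats noted above.
```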

While funding rate is a typical metric used by administrators to understand the bottom line, it is not a user-centric (i.e., faculty-centered) assessment of overall impact. Additional user-centric metrics not assessed in this study, but equally important to successful proposal support models, include PI satisfaction, repeat PI customers, PI-valued services (e.g., budgeting, reviews, grant writing, proposal coordination), and other support infrastructure variables (e.g., data management, outreach or diversity programs, dedicated proposal staff). Understanding faculty needs and the services they value most may offer the best potential for increasing the number of skilled faculty participating in large proposals. An essential element of large proposal success is the leadership of an experienced, credible PI; thus, PI satisfaction with the process is essential to retaining a solid pool of willing PI candidates.

While this study focused only on pre-award proposal development support, post-award administration may be equally important to future large proposal success. Institutional records of post-award management are often part of agency evaluation and selection criteria when awarding large projects. A limited set of ancillary questions and follow-up made it apparent that post-award management of strategic awards is complicated but highly valued. Moreover, strategic awards often undergo greater scrutiny by sponsors and external auditors. In light of this potential for increased scrutiny and complexity, concerns expressed by participants ranged from the need for specialized training for individuals managing these strategic awards to the significant administrative burdens arising from reporting requirements, necessary relationships with subawardees, and daily oversight. Thus, future studies may want to address the relationship between resources and success in post-award management and future funding success for large proposals.

Author’s Note

This manuscript reports the results of a collaborative investigation conducted by the six authors that has been described in part during presentations at the 2014 and 2015 annual national meetings of the National Council of University Research Administrators (NCURA) and in a poster prepared for the 2015 national meeting of the Society of Research Administrators. Survey administration services were purchased from the Penn State Survey Research Center. This research was supported in part by The Pennsylvania State University and Huron Consulting Group; however, the authors are responsible for the content, which does not represent the opinions or endorsements of either organization. The authors thank Matthew Faris of Huron Consulting for connecting the research team and providing ongoing support through survey development, implementation, analysis, and dissemination. The authors also thank the institutional representatives who participated in the draft survey focus group and those who gathered the extensive data necessary for their institutions’ participation.

Lorraine M. Mulfinger, PhD
Director for Strategic Initiatives and Research Program Development
The Pennsylvania State University
101B Beecher Dock House
University Park, PA 16802, USA
Telephone: 814-865-7787
Fax: 814-863-2830
Email: LXM14@psu.edu

Kevin A. Dressler, PhD
2DCC-MIP, Operations and User Facilities Director
The Pennsylvania State University
N-339 Millennium Science Complex
University Park, PA 16802, USA

L. Eric James, JD, MS
Manager, Research Services, Education & Life Sciences Practice
Huron Consulting Group
550 W. Van Buren Street
Chicago, IL 60607, USA

Niki L. Page
Grants and Contracts Manager
The Pennsylvania State University
334 Health and Human Development Building
University Park, PA 16802, USA

Eduardo Serrano
Director, Research Services, Education & Life Sciences Practice
Huron Consulting Group
550 W. Van Buren Street
Chicago, IL 60607, USA

Jorge Vazquez
Associate
Huron Consulting Group
550 W. Van Buren Street
Chicago, IL 60607, USA
Mobile: 312-758-8236

Correspondence concerning this article should be addressed to Lorraine Mulfinger, PhD, Director for Strategic Initiatives and Research Program Development, The Pennsylvania State University, 101B Beecher Dock House, University Park, PA, 16802, USA, LXM14@psu.edu.

References: 

Baker, J. G., & Wohlpart, A. (1996). Research administration in colleges and universities: Characteristics and resources. Research Management Review, 10(1), 33-48. Retrieved from http://www.ncura.edu/Portals/0/Docs/RMR/v10n1.pdf

Carnegie Commission on Higher Education. (1973). A classification of institutions of higher education. Berkeley, CA.

Chronicle Staff. (2014, September 30). Team science is tied to growth in grants with multiple recipients. The Chronicle of Higher Education. Retrieved from http://chronicle.com/blogs/ticker/team-science-tied-to-growth-in-grants-with-multiple-recipients/87077

COINNEWS Media Group LLC. (2014). US inflation calculator. Retrieved from http://www.usinflationcalculator.com/

Dressler, K., Faris, M., James, E., Merki, A., Mulfinger, L., Page, N., & Serrano, E. (2014, August). Large research proposals and the process for successful proposals. Presented at the 56th Annual Meeting of the National Council of University Research Administrators, Washington, DC.

James, L. E., Mulfinger, L., Dressler, K., Page, N., Serrano, E., & Vazquez, J. (2015). 2x2 model for research administration organizational structures as they pertain to large proposal development. Paper presented at the 2015 Society of Research Administrators International Annual Conference, Las Vegas, NV. Retrieved from http://srainternational.org/sites/default/files/documents/SRA%20Symposium%20Paper_E%20James.pdf

National Institutes of Health. (2006). Establishment of multiple principal investigator awards for the support of team science projects (NOT-OD-07-017). NIH Funding Opportunities and Notices. Retrieved from http://grants.nih.gov/grants/guide/notice-files/NOT-OD-07-017.html

National Institutes of Health. (2014). Success rates and funding rates. Retrieved from http://report.nih.gov/NIHDatabook/Charts/Default.aspx?showm=Y&chartId=275&catId=13

National Research Council Committee on the Science of Team Science. (2015). Enhancing the effectiveness of team science. N. J. Cooke & M. Hilton (Eds.). Washington, DC: National Academies Press. doi:10.17226/19007

National Science Foundation. (2013). Higher education R&D expenditures, ranked by FY 2012 R&D expenditures: FYs 2003–12. Retrieved from http://ncsesdata.nsf.gov/herd/2012/html/HERD2012_DST_20.html

National Science Foundation. (2014a). Funding rate by state and organization. Retrieved from http://dellweb.bfa.nsf.gov/awdfr3/default.asp

National Science Foundation. (2014b). Higher education R&D expenditures, ranked by FY 2013 R&D expenditures: FYs 2004–13. Retrieved from http://ncsesdata.nsf.gov/herd/2013/html/HERD2013_DST_17.html

National Science Foundation. (2014c). Report to the National Science Board on the National Science Foundation’s merit review process. Retrieved from http://www.nsf.gov/nsb/publications/2014/nsb1432

Pavlidis, I., Petersen, A. M., & Semendeferi, I. (2014). Together we stand. Nature Physics, 10(10), 700–702. doi:10.1038/nphys3110

Smith, P. W. (2006). The National Institutes of Health (NIH): Organization, funding, and congressional issues: CRS Report for Congress RL33695. Washington, DC. Retrieved from http://digital.library.unt.edu/ark:/67531/metadc806903/m1/1/

Stipelman, B. A., Hall, K. L., Zoss, A., Okamoto, J., Stokols, D., & Börner, K. (2014). Mapping the impact of transdisciplinary research: A visual comparison of investigator-initiated and team-based tobacco use research publications. Journal of Translational Medicine and Epidemiology, 2(2), 1033–1039.

U.S. Office of Management and Budget. (2014). USA Spending. Retrieved from http://www.usaspending.gov/

Keywords: 

Large Proposals, Proposal Success Rates, Proposal Administrative Support, Research Development, Research Administration Organizational Structure, Team Science