Systems Engineering-Based Tool For Identifying Critical Research Systems

Volume XLVII, Number 2
Authors: 
Rodman P. Abbott
Naval Postgraduate School
Jerrell Stracener
Southern Methodist University

Introduction

Modern university research projects, even though they are still led primarily by individual Principal Investigator (PI)/Program Director (PD)-equivalent faculty members, must today rely on a series of institutional infrastructure systems for their facilities, instrumentation, travel, contracting, labor, and administrative needs to function effectively. Each of these systems has various costs associated with it (Haley, 2009; Haley, 2011; Grieb, Horon, Wong, Durkin, & Kunkel, 2014).

This paper addresses the use of systems engineering-based management concepts to develop a prototype research systems administrative and management information approach, one intended to allow research universities to improve the effectiveness of their research infrastructure administrative tools and policies.

The International Organization for Standardization (ISO) Subcommittee on Software Engineering (SC 7) developed the current measurement standard for software measurement processes. “This International Standard identifies the activities and tasks that are necessary to successfully identify, define, select, apply and improve measurement within an overall project or organizational measurement structure” (ISO, 2007). However, while the title of this standard (referred to as ISO/IEC 15939) uses the terminology of software engineering, it is explicitly meant to refer to systems engineering as well (Frenz, Roedler, Gantzer, & Baxter, 2010). ISO/IEC 15939 is also defined in terms of fields of application. In the context of a university research administrative organization, one of these fields of application, “by a supplier to implement a measurement process to address specific project or organizational information requirements,” can be seen as representing the necessary measurement needs of a research university administrative component organization (ISO/IEC 15939, 2007).

Of note, ISO/IEC 15939 is not a library of measurements nor does it provide any recommendation on which measures apply to an individual project or organization. It merely defines a process supporting the construction of defined and tailored measures for an organization’s individual requirements.

From an organizational management perspective, ISO/IEC 15939 also details the steps necessary for an organization to ensure that their measurement processes are optimized to form a set of “requirements” to ensure the maximum utilization of these same measurement processes. Figure 1 depicts this process, including the four fundamental measurement task activities.

Figure 1. ISO/IEC 15939 Measurement Process Model. (From ISO/IEC 15939, 2007)

Institutional Background

Between 2009 and 2012, the Naval Postgraduate School (NPS) experienced an unprecedented growth rate, exceeding thirty percent annually, in its research funding: from approximately $75M in 2009 to over $150M in 2012. The breadth and scope of research work done at NPS increased dramatically (NPS, 2014). Such growth was not without its problems, however. Several major new research program investments failed to achieve their intended results and goals. A root cause analysis was conducted at the project level by the Chairs and Dean of the associated schools. Additionally, several internal and external reviews were conducted of both the institutional research project acceptance and research project review processes. As a consequence, the top-level research management and administrative organization at NPS, the Research and Sponsored Programs Office (RSPO), began exploring additional technical and management processes to assist in preventing similar occurrences.

As a result of these technical and management process actions, the RSPO has adopted a multi-pronged institutional approach to managing the acquisition of key information required for support, improvement, and strategic planning for critical research activities. These include: (1) the development, formation, and maintenance of an institutional work acceptance policy, together with a joint academic, research, and leadership committee that monitors and serves as a senior decision body for research program acceptance according to the institutional strategic plan; and (2) the development, formation, and maintenance of an evolving research system, staff, and project research output measurement information construct. The latter element forms the major portion of this paper.

Research Administration Background

Education, research, and service constitute the three major responsibilities of universities (Boyer, 1996). Because of the inherent complexity of optimizing and simultaneously balancing these three different mission areas, the resulting university structure requires a critical, evolving, and well-thought-out management process, including the specific research management structure employed (Bosch & Taylor, 2011; Pettigrew, Lee, Meek, & Barros, 2013). Furthermore, in selecting the elements of this research management structure, Mintzberg (1979) lists four generic parameters, including an information-based decision-making system, as being necessary (Haines, 2012).

With respect to this decision-making system, Kirkland (2008) notes “a system to identify any emerging problems at an early stage” as being critical. Taylor (2006) supports this concept by stating that research management administration should be seen as “encouraging, supporting and monitoring” project entities.

The general advantages of developing systematic, information-based measurement tools for research administration are not new. Haines (2012) has pointed out that their uses include establishing and overseeing research business processes, defining responsibilities, controlling expectations, driving team motivation, assessing research staff performance, and upgrading tools for both research decision making and prioritization.

While many research administrative organizations use various information measurement constructs as a key portion of their overall responsibility, as Nguyen, Huong, and Meek (2015) point out, “the need for an effective, evidence-based metric standard that captures the complexity of the (Research Management) field remains unmet.”

As Nguyen et al. (2015) have also pointed out, universities typically use publication information, peer review, or a combination of the two to quantify research personnel outputs at the individual principal investigator, department, or school level. This combination, or bibliometrics, includes such elements as impact factors and/or citation rates. While there are arguments both for (Taylor, 2011) and against (Adams, 2009) the use of bibliometrics, the inclusion of specific system-derived information directly relevant to the integrated System of Systems (SoS) research project outputs has, to the best of the authors’ knowledge, not been attempted.

Literature Review

Cost functions as an almost universally sanctioned means of exchange and value throughout communities (Newlyn, 1978). Investments in a particular quantity, whether equipment, labor, or a project, can all be associated with the appropriate process if there is an accurate accounting system to assign “cost” in dollars to these processes and interactions (Langford, 2012). The systems engineering usage of cost as the basis for modelling is also well known (Boehm et al., 2000; Blanchard, 1998).

Any performance management information system that can actively predict performance must also meet two additional criteria: (1) it must have a program management framework; and (2) it must have a procedural framework (Folan & Browne, 2005). The numeric basis of cost makes it as attractive as many other gauges of system measurement (Beamon, 1998). Beamon (1998) also concluded, however, that a unitary performance measure such as cost is unlikely to be sufficient; a conglomerate of performance measures is therefore required for precise assessment.

The need to further classify research university project performance measures as they relate to performance information management is apparent: effective performance information management involves more than merely quantifying usefulness or benefit as the produced outcome of any organizational undertaking (Macbryde & Mendibil, 2003). Monitoring the processes responsible for those outcomes is equally important in order to influence the determination of that usefulness or benefit (Busi & Bitici, 2006). The outcome-process difference is also related to a ‘systems’ view of research university organizational functioning, in that the effectiveness or performance of the organizational system in question is an emergent property of the interactions and processes among and within the elements that comprise the system as a whole (Atkinson, Waterhouse, & Wells, 1997).

Systems engineering (SE) involves the utilization of multiple academic disciplines to integrate various concepts “encompassing a wide range of engineering fields and associated analytical thought processes” (Kossiakoff & Sweet, 2003). It further defines an overall ‘systems’ thinking that embraces the broader perspective of the interaction and integration of individual parts (Cowan, Allen, & Mistree, 2006; Haskins, 2006). Systems engineering also bears a strong resemblance to an implementation of General Systems Theory, in that the “general nature of a complex problem is to find a solution using systems ideas and principles” (von Bertalanffy, 1962).

DeLaurentis and Callaway (2004) define a SoS as “the combination of a set of different systems [that] forms a larger system of systems that performs a function not performable by a single system alone.” In the context of a university research administration application, however, the definition by Jamshidi (2009) as “an integration of a finite number of constituent systems which are independent and operable, and which are networked together for a period of time to achieve a certain higher goal” seems to serve the present application best.

To be considered a SoS, a system must exhibit at least five identified traits: “operational independence, managerial independence, emergent behavior, evolutionary development, and geographical distribution” (DeLaurentis, 2007; Boardman, DiMario, Sauser, & Verma, 2006). Maier (1998) interprets operational independence as: “if the SoS is disassembled into its component systems, the component systems must be able to usefully operate independently.” Managerial independence is defined similarly: “the component systems not only can operate independently, they do operate independently.”

In summary, a SoS performs tasks and fulfills purposes that do not and cannot reside in any single constituent system (Engell, 2014). These functions are emergent properties of the total SoS itself and are not confined to any single constitutive system. The foremost functions of the SoS emerge from, and are in turn satisfied by, the SoS (Maier, 1998). Hence, SoS are comprised of multiple systems that are each managed and operated independently. At the SoS level, they deliver additional benefits. The enterprises at the systems level all have managerial and operational independence. Together, however, the individual component systems collaborate to develop and operate the SoS (Maier, 1998).

The current state of NPS research administration operations is seen to require a more developed research output measurement information construct. NPS is not alone in this regard (Nguyen et al., 2015). A systems engineering-based method of analysis offers one possible way to gain additional perspective on the many complexities observed in managing a research university by providing an additional dimension through which to view research conduct.

Method

At NPS, the increased fidelity of individual research project financial data has provided valuable spending pattern accounting information. It was not until 2011, however, that questions arose as to whether the individual total annual system cost structure items afforded by use of the institution's Kuali Financial System (KFS) (labor, equipment, etc.) might be treated as representative of the individual systems they originate from, in order to afford additional insight into NPS RSPO research operations (Kuali, 2014). This further implied that a possible new integrated RSPO planning and control information mechanism could be constructed, utilizing continuous process measurement techniques to track the state(s) of the system(s) variance trends.

The detailing of research project total expenditures, broken down by labor, equipment, travel, contracting, and “indirect” costs, is conceived as five operationally and managerially independent systems functioning as a SoS. Labor, Equipment, Contracting, Indirect, and Travel are all individually operationally separate and managed organizational entities. This includes their “input personnel, hardware requirements, software requirements, facility spaces, office policies, and in house documents that interact substantially through processes, feedback, and boundaries” (INCOSE, 2006). The outputs from each system, with the exception of Indirect, include “specifically different qualities products, properties, characteristics, functions, behaviors, and performances” (INCOSE, 2006). The individual system elements (Labor, Equipment, Travel, and Contracting) are all “managerial and operationally independent elements” (Boardman et al., 2006). They also possess the traits of “emergent behavior and evolutionary development” (DeLaurentis, 2007). Geographical distribution of elements may be more problematic: the individual system offices are physically co-located within a half-square-mile area; however, the geographical network responsible for the core policies and procedures governing each individual system is nationwide (DeLaurentis, 2007).

Knowing one measure regarding a system presages a degree of comprehension of an alternate measure of the system (Kuhn, 1962). Systems engineering ‘integration’ can be defined as the act of ‘combining two different knowledge representations together’ (Kim & Porter, 2007). The integrative framework that is systems integration serves as a tool to analyze the anatomical basis of an item, the act of integration serving to identify missing objects, quantities, and processes (Langford, 2007).

The use of academic faculty performance measurement tools has been a subject of the academic literature since 1961 (Gustad, 1961). The evaluation of a researcher's output is of obvious interest, given the benefits of developing criteria that are not biased or unfair. If such a metric could be devised, it could serve as the basis for research project funding, faculty comparison, promotion, and management (Sidiropoulos, Katsaros, & Manolopoulos, 2007). In an attempt to address some of the drawbacks of such simple indices, researchers have developed a myriad of academic productivity indices based upon aggregation, time, seniority, dynamic properties, multi-dimensionality, trends, and the like, which mathematically combine other simple indices into new ones (Garcia-Perez, 2009; Boell & Wilson, 2010).

In terms of evaluating the various indices outlined above with respect to quantitatively tying results to specific process or outcome factors, however, there is precious little research. One example of such quantitative empirical analysis found in the literature was used to identify the “input-output” efficiency of Chinese universities (Chen, Shen, & Fang, 2010). The six output indicators for this study were scientific and technical awards value, academic writing, academic theses, national research projects, number of patents granted, and technology transfer income. These factors were correlated against input factors that included the numbers of staff personnel in various categories (Chen et al., 2010).

In the present case, we are looking for quantities to tie and compare directly against individual research project system costs; we therefore postulate the existence of a metric: Faculty Integrated Outputs (FIO). We assume that this is composed of the numerical sum of all individual research program subject area publications, presentations, reports, and citations on the one hand, and student thesis and dissertation numbers on the other, for a particular funded research program within two years of the onset of program project funding. The two-year figure is based upon the observed local output data: an NPS principal investigator normally publishes at least one scholarly article within two years of project onset, and students normally produce an M.S. degree thesis within the same period. In another research university implementation, either of these two values may differ. We also specifically include two separate categories of FIO in our discussion in order to recognize the dynamic yet dual nature of research administration in the university environment: the need to publish, in whatever form, to meet the faculty member's academic research performance milestones, balanced against the need to meet the teaching institution's requirement for passing along information to the next generation of academics/students via theses and dissertations.
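
As a minimal illustration of the two FIO categories defined above, the following sketch sums the output counts over the two-year window; the field and method names are hypothetical, not drawn from the study:

    from dataclasses import dataclass

    @dataclass
    class ProgramOutputs:
        """Output counts for one funded research program, tallied
        within two years of project onset (the NPS-observed window;
        another institution may tune this horizon)."""
        publications: int = 0
        presentations: int = 0
        reports: int = 0
        citations: int = 0
        theses: int = 0
        dissertations: int = 0

        def fio_pubs(self) -> int:
            # Publication-side FIO: sum of scholarly outputs.
            return (self.publications + self.presentations
                    + self.reports + self.citations)

        def fio_dt(self) -> int:
            # Student-side FIO: theses and dissertations.
            return self.theses + self.dissertations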

The FIO variables of “Publications” and “Dissertation & Thesis” outputs are clearly terms for “performance” since, in totality or individually, they are commonly accepted metrics of functionality (Langford, 2012). Their measurement is also assumed to be “continuous and quantifiable” (Langford, 2012). The individual performance metrics are also impacted by both temporal changes and events. While they are each individually discrete, we will make the assumption that we may manipulate them mathematically as a means of determining a total FIO quotient for each faculty member, department, school, Center, or institute. This approach is common in academia: several authors, as stated above, have attempted to identify various mathematical constructs (e.g., h-index, m-index, relative value unit, academic readiness level) (Iyengar, Wang, Chow, & Charney, 2009; Thompson, Callen, & Nahata, 2009; Mezrich & Nagy, 2007). None of these expressions, however, is intended to constitute a single “determinator” asserting that one citation, publication, or thesis is any more valuable than any other; we simply assume that they may be manipulated mathematically as a means of quantifying and discovering relationships between FIO and other research administrative metrics.

Before displaying several FIO and other critical example FIO data, a status review of the characteristics of the proposed model is in order. We postulate a multiple-input (four assumed independent variables: labor, travel, equipment, contract), single-output (Faculty Integrated Output: Dissertation & Thesis or Publication) model. What we do not have at the moment is more than a general theoretical idea of how the independent variables should interact in order to produce a predictive trend of how the individual NPS researcher and research group “products” should evolve.

Why the concern? There are computationally effective methods available to the research administration community that can optimize such “black-box models” with undetermined, structurally complex interactions; neural networks and genetic algorithms are but two possible examples. In order to narrow down the mathematical analysis approach further, we should consider exactly which attributes of the FIO product we are looking for. The insight developed from FIO use is meant to motivate deeper involvement in both discovering and fixing the issues brought to light through that use. Second, given the scope, breadth, and multiple interaction pathways experienced by all research projects at a university, the widest possible search process for the best near-optimal solution would seem appropriate. As the introduction of a new FIO metric is also expected to be continuous, the solution space for any research group FIO trend data is expected to evolve over time. For example, if the labor system cost factor evolves as a more important component of total FIO data for one research group versus another over time, this in itself provides motivation for determining the reasons why.

How the FIO would be implemented into the day-to-day operations of the research administrative organization is also important. We are not talking about a requirement to generate instant and complex time-sensitive solutions; we are talking about the incorporation of data over long periods of time, wherein the individual “optimal FIO solution” from one research entity is combined with other metrics under the watchful eyes of presumably very experienced and trained research administrative personnel before any decisions are made. Lastly, as all system independent variables are tracked as cost components, which individual system components (and combinations thereof) play the largest role in research project X or research group Y should be readily available for research administration observation; that is, the ability to choose and adapt other functional constraints on the problem solution should be easily adaptable.

Research Questions

In the context of university research administration, the specific research questions are:

  1. Can a research institution develop a mathematical computational analysis method linking the independent system variables of Labor, Travel, Equipment, and Contract on the one hand with the dependent variable of FIO on the other?
  2. Provided (1) is successful, what increased situational awareness does the analysis of the system-to-FIO data provide research administrators?

Chosen Computational Method

There are several basic computational analysis options available to analyze the above quantitative multi-variate data. The authors have chosen to use the Mahalanobis-Taguchi System (MTS) (Taguchi & Jugulum, 2002) analysis technique for this proof-of-concept trial. In the setting of research administration operations, MTS has been shown to offer a number of important advantages over other computational analysis methods: (1) it offers the possibility of independent variable reduction; (2) it can identify both “abnormal” and “normal” data sets; (3) the degree of “abnormality” can be measured relatively simply; (4) it requires no assumptions as to the distribution form for the data; (5) the controls governing what is “normal” and what is “abnormal” can be easily changed and the effects viewed; and (6) it is not computationally complex (Kumano, Mikami, & Aoyama, 2011; Holcomb, 2016).

MTS comprises four stages. Performing a multivariate analysis utilizing MTS requires the requisite data. Once the data are obtained, including all variables that can influence any outcomes, the researcher must determine the parameters of “normal” versus “abnormal” data. From these separate identities, a Mahalanobis Space (MS), composed of the Mahalanobis Distances (MD) for each sample, is calculated. The MD represents a scaled distance over all variables that accounts for the correlations between them. Two calculation methods are known for determining MD: (1) the Inverse Matrix Method; and (2) the Gram-Schmidt Orthogonalization Process (Taguchi & Jugulum, 2002). The Inverse Matrix Method (IMM), described in Equation 1, is used in this work.
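
Assuming the standard inverse-matrix form given by Taguchi and Jugulum (2002), the scaled MD for observation j is:

    \[ MD_j = \frac{1}{k}\, \mathbf{Z}_j^{\top} \mathbf{C}^{-1} \mathbf{Z}_j, \qquad z_{ij} = \frac{x_{ij} - \bar{x}_i}{s_i} \tag{1} \]

where k is the number of variables, Z_j is the vector of standardized variable values for observation j, and C is the correlation matrix of the normal group.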

In stage two of MTS, the MDs for the abnormal group are calculated and compared to the normal group values. To calculate the abnormal MDs, the abnormal observations must be standardized using the mean and standard deviation of the normal group; the correlation matrix and its inverse likewise come from the normal group. The abnormal threshold is validated if the MDs for this group have higher values than those of the normal group.
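
A minimal sketch of stages one and two, assuming each row of a NumPy array holds one observation's system-cost variables; the function name and data layout are illustrative, not taken from the study:

    import numpy as np

    def mahalanobis_distances(normal, abnormal):
        """Scaled Mahalanobis distances per Equation 1.

        normal, abnormal: (n_samples, k) arrays of the k system-cost
        variables. Abnormal rows are standardized with the NORMAL
        group's mean and standard deviation and use the normal
        group's inverse correlation matrix, as the method requires.
        """
        mean = normal.mean(axis=0)
        std = normal.std(axis=0, ddof=1)
        z_normal = (normal - mean) / std
        corr_inv = np.linalg.inv(np.corrcoef(z_normal, rowvar=False))
        k = normal.shape[1]

        def md(z):
            # Row-wise quadratic form (1/k) * z C^{-1} z^T.
            return np.einsum('ij,jk,ik->i', z, corr_inv, z) / k

        z_abnormal = (abnormal - mean) / std
        return md(z_normal), md(z_abnormal)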

The third stage of MTS is to determine the functional variables that influence the outcome, using Orthogonal Arrays (OAs) and Signal-to-Noise Ratios (SNRs). In MTS, an OA is a design matrix with two identities for each variable: either the variable is present in a run or it is not. For each run specified by the OA, the MDs are recalculated. The SNR attempts to recognize the functional variables against variations in the system. SNRs are calculated from the MDs of each run using Equation 3.

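Assuming the larger-the-better form that is standard in MTS (Taguchi & Jugulum, 2002), the SNR for OA run q, computed over the t abnormal-group MDs of that run, is:

    \[ \eta_q = -10 \log_{10}\!\left( \frac{1}{t} \sum_{i=1}^{t} \frac{1}{MD_i} \right) \tag{3} \]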

Non-functional variables are removed based on the overall gain of the system; only functional variables that increase the system's overall gain are retained. The last stage of MTS is monitoring the MD values and applying suitable conclusions or remedial actions based on the observed values.
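
As a sketch of this stage-three screening, assuming the larger-the-better SNR of Equation 3 and an OA coded 1 (variable present) / 2 (variable absent); the function names are illustrative:

    import numpy as np

    def run_snr(mds):
        # Larger-the-better SNR (Equation 3) over one run's abnormal MDs.
        return -10.0 * np.log10(np.mean(1.0 / np.asarray(mds)))

    def variable_gains(snrs, oa):
        """Gain per OA column: mean SNR when the variable is present
        minus mean SNR when it is absent. A positive gain marks a
        functional variable to retain."""
        snrs = np.asarray(snrs)
        return np.array([snrs[oa[:, v] == 1].mean() - snrs[oa[:, v] == 2].mean()
                         for v in range(oa.shape[1])])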

In the present research administration case, where a large variation in the incidences of correlations is expected because of the diversity of research and principal investigators, as well as the small individual and group data sets, MTS would seem to offer an initial advantage over other techniques for the reasons stated above, provided we select the “normal” and “abnormal” groups correctly. In this proof-of-concept trial, we have set the initial publication and thesis & dissertation abnormal/normal differentiation boundary at zero. Thus, if the research unit (e.g., department, Center, or Institute), faculty type (e.g., tenure track versus non-tenure track), or individual principal investigator produces a publication or thesis/dissertation, it is considered “normal”; if not, it is considered “abnormal.” In future iterations, where the normal/abnormal boundary could be set at a non-zero quantity, as in the case of mandated research project quarterly or monthly reports, we might have to modify the selection criteria to include cluster analysis or publication quantity averaging to determine the boundary standard.
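
The zero-threshold rule reduces to a simple partition; a hypothetical sketch, with the record layout assumed:

    def split_groups(records, key='pubs', threshold=0):
        """Partition unit/faculty/PI records into MTS groups using the
        proof-of-concept rule: any output above the threshold (zero
        here) is 'normal', none is 'abnormal'. A future non-zero
        boundary drops in via the threshold argument."""
        normal = [r for r in records if r[key] > threshold]
        abnormal = [r for r in records if r[key] <= threshold]
        return normal, abnormal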

System Uncertainty Sources

In the desire to link research project system costs to the research project outputs of D/T numbers and publication output, we have assumed that the expressions for system costs capture the true costs in total annual labor, travel, equipment, and contract dollars as they relate to those same outputs. How valid is this assumption?

From work in the 1980s concerning production “technical efficiencies” (Schmidt & Sickles, 1984), we know that at least some airline companies' ability to produce “maximal output” is in the range of seventy to one hundred percent, implying inefficiencies on the order of zero to thirty percent. Such items as marketing research costs, testing costs, and administrative costs are usually not correctly costed in research project accounting (Skaife, Swenson, & Wangerin, 2013). So-called R&D “intangible” expenses have also been identified (Siegel & Borgia, 2007), including hardware/software, technical expertise, training programs, and customer service capability, that could be left out of any research project total accounting scheme (Johnson, 1964). Hence, we know that uncertainty components are likely to be missing from individual or group research project costs, depending on whether these “costs” are accounted for. In an attempt to quantify the specific NPS case, Langford (2015) has calculated that there may be a roughly 20-25% uncertainty rate in the labor cost accounting alone. To deal computationally with these cost uncertainty “noise” issues, a fifth, random variable has been inserted into the MTS calculation along with the four independent variable system costs. Because of the number of variables and the lack of any known variable interactions, the L8(2^7) Taguchi table was chosen as the orthogonal array for our MTS calculations (Taguchi & Jugulum, 2002).
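
For concreteness, the following sketch sets up the array under these choices: the standard L8(2^7) design with the four system costs assigned to the first four columns and the random noise variable to the fifth. The cost figures here are synthetic stand-ins, not KFS data:

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Standard Taguchi L8(2^7) array: 8 runs x 7 two-level columns
    # (1 = include the variable in the MD calculation, 2 = exclude it).
    L8 = np.array([[1, 1, 1, 1, 1, 1, 1],
                   [1, 1, 1, 2, 2, 2, 2],
                   [1, 2, 2, 1, 1, 2, 2],
                   [1, 2, 2, 2, 2, 1, 1],
                   [2, 1, 2, 1, 2, 1, 2],
                   [2, 1, 2, 2, 1, 2, 1],
                   [2, 2, 1, 1, 2, 2, 1],
                   [2, 2, 1, 2, 1, 1, 2]])

    variables = ['labor', 'travel', 'equipment', 'contract', 'noise']

    # Synthetic stand-in for per-project annual system costs ($).
    costs = rng.lognormal(mean=10.0, sigma=1.0, size=(30, 4))
    noise = rng.normal(size=(30, 1))    # fifth, random "noise" variable
    data = np.hstack([costs, noise])    # columns follow `variables`;
                                        # OA columns 5-6 stay unassigned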

Results

Data Sources

Between 2013 and 2015, the authors collected and reduced the 2010-2012 NPS FIO research outputs and matched them with the correlated system cost data for over three thousand research projects from fourteen different university departments, centers, and institutes in order to establish a possible basis for an additional system research administrative information metric tool. Dissertation and Thesis (D/T) and Publication (Pubs) specific MTS Gain Factors were obtained for the fourteen research units, as well as for the faculty sub-unit types Tenure Track (TT) and Non-Tenure Track (NTT) and, in some cases, individual Principal Investigators (PIs).

Data Summary

Table 1. NPS Group MTS Gain Factor Success Percentages.

Table 2. NPS Faculty Type MTS Gain Factor Success Percentages.

Table 3. Group 4 Integrated TT/NTT/PI MTS Gain Factors vs. Budget Expenditure Percentages.

Table 4. Group 4 Calculated Standard Errors and Signal-to-Noise Ratios for FY10-12 D/T Data.

Table 5. Group 4 Calculated Standard Errors and Signal-to-Noise Ratios for FY10-12 Pubs Data.

Table 6. Group 4 TT/NTT/Individual Calculated Standard Errors and Signal-to-Noise Ratios for FY10-12 DT and Pubs Data.

 

Discussion

Aligning Findings to the Research Questions

Research Question 1: System to FIO Analysis Method. The use of the Mahalanobis-Taguchi System (MTS) method allows determination of the specific system variables and their associated degree of contribution to both the identified Pubs and D/T outputs at the research unit level. As can be seen in Table 1, the majority of the fourteen NPS research units (departments, Centers, and Institutes) researched provided MTS Gain Factors that identify the systems affecting research outputs. At the faculty sub-unit levels, as can be seen in Table 2, the percentages are much lower. This result is not surprising: all groups under MTS evaluation require both a minimum number of acceptable independent variable value sets and adherence to compliance rules for determining the validity/usefulness of the derived Mahalanobis Distance. In the case of TT and NTT, there are simply fewer data available. For the individual NPS PIs researched, the situation is even worse: over a three-year period, only five individual PIs successfully developed MTS Gain Factors, for the same reasons. These Gain Factors were also derived for individual years only and hence were not available for tracking over a multi-year period. Table 3 displays the integrated research group, faculty, and principal investigator results for one of the fourteen research groups observed. As can be seen, the differences are indicative of factors impacting the research group and its sub-elements.

Between FY10 and FY12, Group 4's total research budget increased 12%. D/T Gain Factor information was not available for FY10 because all abnormal D/T MD values for that year were negative, negating any MTS Gain Factor results. Raw thesis and dissertation output numbers more than tripled from FY10 to FY12 for Group 4. In FY11, neither the Equipment nor the Contract system contributed to the generation of this group's D/T output, as the MTS Gain Factors were both negative. Labor was both the largest budget percentage and the largest system contributor to the generation of D/T output between FY11 and FY12.

The publication output numbers from this group increased 13% from FY10 to FY12. What is interesting here is that the MTS Gain Factor numbers associated with the Equipment system for this group remained negative for the three-year period, meaning that the Equipment system variable is not seen as contributing to the Pubs outputs. More in-depth analysis of the specific dynamics of this group would be required to account for this.

While there were slight differences between TT and NTT budget percentages in FY12, the budget was dominated by the Labor system contribution. NTT personnel were responsible for thirty-three percent of the raw dissertation and thesis output in FY12 and fifty percent of the raw publications output. The Labor system also dominated the contributions to NTT D/T and Pubs outputs. Individual E produced twelve percent of the FY11 group raw dissertation and thesis output; Individual B produced thirty-two percent of the FY11 raw publications output. For both individuals, there are substantial differences between their MTS Gain Factors and those of their member research group. Unfortunately, there were not enough data to support a direct TT or NTT comparative analysis for these years; more in-depth analysis of the specific dynamics of this group would be required to account for this.

Research Question 2: Research Administration Situational Awareness. By providing additional system information insight into the relevant output behaviors for research institutions, we provide an additional system contribution perspective valuable to the execution of research administration. For example, if we have advance knowledge of macro research system changes (e.g., changes to contracting or travel organization/policy execution), we may have advance knowledge of which specific research groups' outputs may be vulnerable to those systems in the near term.

 

Conclusion

Summary

This study addressed the postulated relationship between the research system independent variables of Labor, Travel, Equipment, and Contract total annual cost and the dependent variables of total two-year publication and thesis & dissertation outputs. The three-year study data captured the identification of which research system variables are responsible for these same outputs, by magnitude and degree. The findings of the study also help to identify which critical systems are responsible for research group outputs independent of budget percentage expenditures. At higher levels of research group fidelity, the MTS Gain Factor identification was shown to be problematic due to a combination of acceptable variable value set and MTS compliance rule requirements. Through such concrete research system identification, however, research administrative personnel may have the possibility of directly identifying the association between the likely effects of reduced or affected research systems and the research outputs themselves.

Future Research

The authors will continue to develop the systems engineering-based FIO construct described above, including possible extension to a non-research-university laboratory data set. The authors will also review any potential system structural effects that may have been responsible for the NPS results. As MTS has also been used with success to forecast the behavior of other multivariate physical systems (Soylemezoglu, Sarangapani, & Saygin, 2011; Hu, Zhang, & Liang, 2013), we will specifically include this possibility in our future research administration information development construct.

 

Author’s Note

The research contained in this work has been derived from the lead author’s continuing doctoral dissertation work. The lead author would like to acknowledge the contributions of Dr. Karl van Bibber and Dr. Jeff Paduan, NPS Deans of Research for providing inspiration and institutional data for this work. The views expressed in this article are those of the author and do not necessarily reflect the official policies, position, or views of the U.S. Department of Defense, the U.S. Department of the Navy, or the Naval Postgraduate School. The content has been cleared for unlimited public release with no restrictions.

Rodman P. Abbott, BA, MS, MSIB, PhD
Program Manager
Naval Postgraduate School Naval Research Program
699 Dyer Road, Halligan Hall, Room 201A
Monterey, CA 93940
(831) 656-2579
rpabbott@nps.edu

Jerrell Stracener, PhD
Founding Director, Department of Engineering Management, Information, and Systems
Department of EMIS
Lyle School of Engineering
Southern Methodist University
P.O. Box 750123
Dallas, TX 75275-0123
jerrells@lyle.smu.edu
(214) 768-1535

Jerrell Stracener is Professor of Practice and founding Director of the Southern Methodist University (SMU) Systems Engineering Program. He teaches graduate-level courses in engineering probability and statistics, systems reliability and supportability analysis, and integrated logistics support (ILS), and supervises PhD student research. He is the SMU Lead Senior Researcher in the U.S. DoD-sponsored Systems Engineering Research Center (SERC). Prior to joining SMU full time in January 2000, Dr. Stracener was employed by LTV/Vought/Northrop Grumman, where he conducted and directed systems engineering studies and analysis and systems reliability and supportability projects, and was ILS program manager, on many of the nation's most advanced military aircraft. Jerrell served in the U.S. Navy and was co-founder and leader of the SAE Reliability, Maintainability and Supportability (RMS) Division (G-11). He is an SAE Fellow and AIAA Associate Fellow. Dr. Stracener earned PhD and MS degrees in Statistics from SMU and a BS in Math from Arlington State College (now the University of Texas at Arlington).

References: 
  • Adams, J. (2009). The use of bibliometrics to measure research quality in UK higher education institutions. Archivum Immunologiae et Therapiae Experimentalis, 57(1), 19–32. doi:10.1007/s00005-009-0003-3
  • Atkinson, A. A., Waterhouse, J. H., & Wells, R. B. (1997). A stakeholder approach to strategic performance measurement. Sloan Management Review, 38(3), 25–37.
  • Beamon, B. (1998). Supply chain design and analysis: models and methods. International Journal of Production Economics, 55(3), 281–294. doi:10.1016/S0925-5273(98)00079-6.
  • Blanchard, B. (1998). Systems engineering management (2nd ed.). New York: John Wiley & Sons.
  • Boardman, J., DiMario, M., Sauser, B., & Verma, D. (2006). System of systems characteristic and interoperability in joint command and control. Paper presented at the Annual System of Systems Engineering Conference, Ft. Belvoir, Virginia, Defense Acquisition University, July 25-26, 2006.
  • Boehm, B., Abts, C., Brown, A. W., Chulani, S., Clark, B. K., Horowitz, E., Steece, B., & Reifer, D. (2000). Software cost estimation with COCOMO II. Upper Saddle River, NJ: Prentice-Hall.
  • Boell, S. K., & Wilson, C. S. (2010). Journal impact factors for evaluating scientific performance: Use of h-like indicators. Scientometrics, 82(3), 613-626. doi:10.1007/s11192-010-0175-y.
  • Bosch, A., & Taylor, J. (2011). A proposed framework of institutional research development phases. Journal of Higher Education Policy and Management, 33(5), 443-457. doi:10.1080/1360080X.2011.585742.
  • Boyer, E. L. (1996). From scholarship reconsidered to scholarship assessed. Quest, 48(2), 129-139. doi:10.1080/00336297.1996.10484184.
  • Busi, M., & Bitici, U. (2006). Collaborative performance management: Present gaps and future research. International Journal of Productivity and Performance Management, 55(1), 7-25. doi:10.1108/17410400610635471
  • Chen, L., Shen, G., & Fang, Y. (2010). Comparative study of input-output efficiency on college scientific research. Paper presented at the 2nd International Workshop on Database Technology and Applications, Wuhan, China, November 27-28, 2010. doi:10.1109/DBTA.2010.5659057.
  • Cowan, F. S., Allen, J. K., & Mistree, F. (2006). Functional modelling in engineering design: A perspectival approach featuring living systems theory. Systems Research and Behavioral Science, 23, 365–381. doi:10.1002/sres.733
  • DeLaurentis, D., & Callaway, R. K. (2004). System-of-systems perspective for public policy decisions. Review of Policy Research, 21(6), 829–837. doi:10.1111/j.1541-1338.2004.00111.x
  • DeLaurentis, D. (2007). Research foundations. West Lafayette, IN: School of Aeronautics and Astronautics, Purdue University.
  • Engell, S. (2014). Cyber physical SoS-Definition and core research and development areas. Working paper of the Support Action CPSoS. Retrieved from http://www.cpsos.eu/wp-content/uploads/2015/07/CPSoS-Scope-paper-vOct-26...
  • Folan, P., & Browne, J. (2005). A review of performance measurement: Towards performance management. Computers in Industry, 56(7), 663–680. doi:10.1016/j.compind.2005.03.001
  • Frenz, P., Roedler, G., Gantzer, D. J., & Baxter, P. (2010). Systems engineering measurement primer: A basic introduction to measurement concepts and use for systems engineering (Version 2.0). San Diego, CA: International Council on System Engineering (INCOSE), INCOSE‐TP‐2010‐005‐02.
  • Garcia-Perez, M. A. (2009). A multidimensional extension to Hirsch's h-index. Scientometrics, 81(3), 779-785. doi:10.1007/s11192-009-2290-1
  • Gorod, A., Sauser, B., & Boardman, J. (2008). System-of-systems engineering management: A review of modern history and a path forward. IEEE Systems Journal, 2(4), 484–499.
  • Grieb, T., Horon, J., Wong, C., Durkin, J., & Kunkel, S. (2014). Optimizing institutional approaches to enable research. Journal of Research Administration, 45(2).
  • Gustad, J. W. (1961). Policies and practices in faculty evaluation. Educational Record, 42, 194-211.
  • Haines, N. (2012). Metrics for research administration offices (Parts 1/2). Journal of Clinical Research Best Practices, 8(6/7). Retrieved from www.huronconsultinggroup.com/
  • Haley, R. (2009). A framework for managing core facilities within the research enterprise. Journal of Biomolecular Techniques, 20(4), 226-230.
  • Haley, R. (2011). Institutional management of core facilities during challenging financial times. Journal of Biomolecular Techniques, 22(4), 127-130.
  • Holcomb, S. (2016). Mahalanobis Taguchi System (MTS) for pattern recognition, prediction, and optimization. MODSIM World, Paper No. 34. Retrieved from http://www.modsimworld.org/papers/2016/Mahalanobis_Taguchi_System_for_Pattern_Recognition_Prediction_and_Optimization.pdf
  • Hu, J., Zhang, L., & Liang, W. (2013). Dynamic degradation observer for bearing fault by MTS-SOM system. Mechanical Systems and Signal Processing, 36(2), 385-400. doi:10.1016/j.ymssp.2012.10.006
  • INCOSE. (2006, June). Systems engineering handbook: A guide for system life cycle processes and activities (version 3). Retrieved from http://www.las.inpe.br/~perondi/21.06.2010/SEHandbookv3.pdf
  • International Organization for Standardization. (2007). ISO/IEC 15939:2007: Systems and software engineering-Measurement process. Retrieved from http://www.iso.org/iso/catalogue_detail.htm?csnumber=44344
  • Iyengar, R., Wang, Y. P., Chow, J., & Charney, D. S. (2009). An integrated approach to evaluate faculty members' research performance. Academic Medicine, 84(11), 1610-1616.
  • Jamshidi, M. (2009). System of systems engineering: principles and applications. Boca Raton, FL: CRC Press.
  • Kim, D. S., & Porter, B. (2007). Handling granularity differences in knowledge integration. Association for the Advancement of Artificial Intelligence. Retrieved from http://www.aaai.org
  • Kirkland, J. (2008). University research management: An emerging profession in the developing world. Technology Analysis and Strategic Management, 20(6), 717-726. doi:10.1080/09537320802426416
  • Kossiakoff, A., & Sweet, W. N. (2003). Systems engineering principles and practice. Hoboken, NJ: John Wiley & Sons.
  • Kuali. (2014). Kuali Financial System. Retrieved from http://www.kuali.org/kfs
  • Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago, IL: University of Chicago Press.
  • Kumano, S., Mikami, N., & Aoyama, K. (2011). Advanced gas turbine diagnostics using pattern recognition. Paper presented at the ASME 2011 Turbo Expo, Volume 3, Vancouver, British Columbia, Canada, June 6–10, 2011. Paper No. GT2011-45670, pp. 179-187. doi:10.1115/GT2011-45670
  • Langford, G. (2007). Fundamentals of management for systems engineering. Paper presented at the NDIA 10th Annual Systems Engineering Conference, San Diego, California, October 22-25, 2007.
  • Langford, G. O. (2012). Engineering systems integration: Theory, metrics, and methods. New York: CRC Press.
  • Langford, G. O. (2015). Personal communication.
  • Macbryde, J. C., & Mendibil, K. (2003). Designing performance measurement systems for teams: Theory and practice. Management Decision, 41(8), 722-33. doi:10.1108/00251740310496233
  • Maier, M. W. (1998). Architecting principles for systems-of-systems. Systems Engineering, 1(4), 267-284. doi:10.1002/(SICI)1520-6858(1998)1:4<267::AID-SYS3>3.0.CO;2-D
  • Mezrich, R., & Nagy, G. (2007). The academic RVU: a system for measuring academic productivity. Journal of the American College of Radiology, 4(7), 471-478. doi:10.1016/j.jacr.2007.02.009
  • Mintzberg, H. (1979). The structuring of organization: A synthesis of the research. Englewood Cliffs, NJ: Prentice-Hall.
  • Newlyn, W. T. (1978). Theory of money. Oxford University Press.
  • Nguyen, T., Huong, L., & Meek, V. L. (2015). Key considerations in organizing and structuring university research. The Journal of Research Administration, 46(1), 41-62.
  • NPS. (2014). Retrieved from http://intranet.nps.edu/
  • Pettigrew, A., Lee, M., Meek, L., & Barros, F. B. D. (2013). A typology of knowledge and skills requirements for effective research and innovation management. In A. Olsson, & L. Meek (Eds.), Effectiveness of research and innovation management at policy and institutional levels: Cambodia, Malaysia, Thailand and Vietnam (pp. 29-74). OECD Publishing. Retrieved from https://www.oecd.org/
  • Schmidt, P., & Sickles, R. (1984). Production frontiers and panel data. Journal of Business & Economic Statistics, 2(4), 367-374. doi:10.2307/1391278
  • Sidiropoulos, A., Katsaros, D., & Manolopoulos, Y. (2007). Generalized Hirsch h-index for disclosing latent facts in citation networks. Scientometrics, 72(2), 253–280. doi:10.1007/s11192-007-1722-z
  • Siegel, P., & Borgia, C. (2007). The measurement and recognition of intangible assets. Journal of Business and Public Affairs, 1(1). Retrieved from http://www.scientificjournals.org/journals2007/articles/1006.htm
  • Skaife, H., Swenson, L., & Wangerin, D. (2013). A study of discretionary R&D reporting. UC Davis School of Management. Retrieved from http://gsm.ucdavis.edu/faculty/hollis-skaife
  • Soylemezoglu, A., Sarangapani, J., & Saygin, C. (2011). Mahalanobis-Taguchi system as a multi-sensor based decision making prognostics tool for centrifugal pump failures. IEEE Transactions on Reliability, 60(4), 864-878. doi:10.1109/TR.2011.2170255
  • Taguchi, G., & Jugulum, R. (2002). The Mahalanobis-Taguchi Strategy. New York, NY: John Wiley & Sons.
  • Taylor, J. (2006). Managing the unmanageable: The management of research in research-intensive universities. Higher Education Management & Policy, 18(2), 1-25. Retrieved from http://www.oecd.org/edu/imhe/42348780.pdf
  • Taylor, J. (2011). The assessment of research quality in UK universities: Peer review or metrics? British Journal of Management, 22(2), 202-217. doi:10.1111/j.1467-8551.2010.00722.x
  • Thompson, D. F., Callen, E. C., & Nahata, M. C. (2009). New indices in scholarship assessment. American Journal of Pharmaceutical Education, 73(6), 111. Retrieved from PubMed Central.
  • von Bertalanffy, L. (1962). General System Theory - A critical review. General Systems, 7, 1–20.
Keywords: 

Faculty Integrated Outputs, Mahalanobis Taguchi System, System of Systems