58 results for "L. Allison Jones"
Search Results
2. Artificial intelligence and statistics for quality technology: an introduction to the special issue
- Author
-
Enrique Castillo, Bianca Maria Colosimo, L. Allison Jones-Farmer, and Kamran Paynabar
- Subjects
Artificial intelligence, Computer science, Strategy and Management, Design of experiments, Statistical process monitoring, Replicate, Management Science and Operations Research, Machine learning, Quality technology, Set (abstract data type), Quality (business), Safety, Risk, Reliability and Quality, Industrial and Manufacturing Engineering
- Abstract
In many applied and industrial settings, the use of Artificial Intelligence (AI) for quality technology is gaining growing attention. AI refers to the broad set of techniques which replicate human ...
- Published
- 2021
- Full Text
- View/download PDF
3. Robustness of the one‐class Peeling method to the Gaussian Kernel Bandwidth
- Author
-
Lina Lee, L. Allison Jones-Farmer, Waldyn G. Martinez, and Maria L. Weese
- Subjects
Computer science, Robustness (computer science), Bandwidth (computing), Gaussian function, Management Science and Operations Research, Safety, Risk, Reliability and Quality, Algorithm, Class (biology)
- Published
- 2021
- Full Text
- View/download PDF
4. Personalized and Nonparametric Framework for Detecting Changes in Gait Cycles
- Author
-
Lora Cavuoto, Fadel M. Megahed, Jiyeon Kang, L. Allison Jones-Farmer, and Saeb Ragani Lamooki
- Subjects
Computer science, Feature extraction, Nonparametric statistics, Machine learning, Gait, Visual inspection, Inertial measurement unit, Gait analysis, Trajectory, Task analysis, Artificial intelligence, Electrical and Electronic Engineering, Instrumentation
- Abstract
Gait analysis is a standard practice used by clinicians and researchers to identify abnormalities, examine disease progression, or assess the success of interventions. Traditionally, assessments were performed with visual inspection by a trained professional. However, with the recent breakthroughs in sensing technologies, there is a growing body of literature that uses features extracted from sensing data as inputs to machine learning methods. These models require a large representative sample of gait cycles labeled according to each category of interest (e.g., standard, anomalous) for model training. This paper provides a personalized, nonparametric statistical framework that can be used for detecting and interpreting gait changes in individuals while requiring only a small number of baseline gait cycles. This framework can be applied using the acceleration trajectory or features from a single Inertial Measurement Unit (IMU). The individualized framework does not require the gait cycles to be labeled and does not require the assumption that the observed patterns are consistent across subjects. The personalized framework is applied to gait cycles extracted from a material handling task that simulates moving heavy loads in a warehouse. Twelve subjects were monitored, and significant changes in personalized gait patterns consistent with perceived exertion were observed. Further interpretation of the changes illustrates that participants exhibit individualized patterns in gait as they approach the fatigued state.
- Published
- 2021
- Full Text
- View/download PDF
5. A one‐class peeling method for multivariate outlier detection with applications in phase I SPC
- Author
-
L. Allison Jones-Farmer, Maria L. Weese, and Waldyn G. Martinez
- Subjects
Clustering high-dimensional data, Computer science, Phase (waves), Pattern recognition, Multivariate outlier detection, Management Science and Operations Research, Class (biology), Gaussian function, Control chart, Artificial intelligence, Safety, Risk, Reliability and Quality
- Published
- 2020
- Full Text
- View/download PDF
6. Explaining Predictive Model Performance: An Experimental Study of Data Preparation and Model Choice
- Author
-
Hamidreza Ahady Dolatsara, Fadel M. Megahed, Robert D. Leonard, Ying-Ju Chen, and L. Allison Jones-Farmer
- Subjects
Information Systems and Management, Process (engineering), Heuristic, Computer science, Model selection, Behavioural sciences, Feature selection, Machine learning, Computer Science Applications, Applied research, Imputation (statistics), Artificial intelligence, Categorical variable, Information Systems
- Abstract
Although confirmatory modeling has dominated much of applied research in medical, business, and behavioral sciences, modeling large data sets with the goal of accurate prediction has become more widely accepted. The current practice for fitting predictive models is guided by heuristic-based modeling frameworks that lead researchers to make a series of often isolated decisions regarding data preparation and cleaning that may result in substandard predictive performance. In this article, we use an experimental design to evaluate the impact of six factors related to data preparation and model selection (techniques for numerical imputation, categorical imputation, encoding, subsampling for unbalanced data, feature selection, and machine learning algorithm) and their interactions on the predictive accuracy of models applied to a large, publicly available heart transplantation database. Our factorial experiment includes 10,800 models evaluated on 5 independent test partitions of the data. Results confirm that some decisions made early in the modeling process interact with later decisions to affect predictive performance; therefore, the current practice of making these decisions independently can negatively affect predictive outcomes. A key result of this case study is to highlight the need for improved rigor in applied predictive research. By using the scientific method to inform predictive modeling, we can work toward a framework for applied predictive modeling and a standard for reproducibility in predictive research.
- Published
- 2021
- Full Text
- View/download PDF
7. Leveraging industrial statistics in the data revolution: The Youden Memorial Address at the 63rd Annual Fall Technical Conference
- Author
-
L. Allison Jones-Farmer
- Subjects
Printing press, Analytics, Political science, Economic history, Safety, Risk, Reliability and Quality, Industrial and Manufacturing Engineering
- Abstract
We are in the midst of a “Data Revolution” that is transforming our economy. This revolution is as large and profound as other major economic shifts from the introduction of the printing press to t...
- Published
- 2019
- Full Text
- View/download PDF
8. Explaining the Varying Patterns of COVID-19 Deaths Across the United States: 2-Stage Time Series Clustering Framework
- Author
-
Fadel M Megahed, L Allison Jones-Farmer, Yinjiao Ma, and Steven E Rigdon
- Subjects
Influenza A Virus, H1N1 Subtype, Time Factors, SARS-CoV-2, Public Health, Environmental and Occupational Health, COVID-19, Cluster Analysis, Humans, Health Informatics, United States
- Abstract
Background Socially vulnerable communities are at increased risk for adverse health outcomes during a pandemic. Although this association has been established for H1N1, Middle East respiratory syndrome (MERS), and COVID-19 outbreaks, understanding the factors influencing the outbreak pattern for different communities remains limited. Objective Our 3 objectives are to determine how many distinct clusters of time series there are for COVID-19 deaths in 3108 contiguous counties in the United States, how the clusters are geographically distributed, and what factors influence the probability of cluster membership. Methods We proposed a 2-stage data analytic framework that can account for different levels of temporal aggregation for the pandemic outcomes and community-level predictors. Specifically, we used time-series clustering to identify clusters with similar outcome patterns for the 3108 contiguous US counties. Multinomial logistic regression was used to explain the relationship between community-level predictors and cluster assignment. We analyzed county-level confirmed COVID-19 deaths from Sunday, March 1, 2020, to Saturday, February 27, 2021. Results Four distinct patterns of deaths were observed across the contiguous US counties. The multinomial regression model correctly classified 1904 (61.25%) of the counties’ outbreak patterns/clusters. Conclusions Our results provide evidence that county-level patterns of COVID-19 deaths are different and can be explained in part by social and political predictors.
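The two-stage framework described in this abstract (time-series clustering, then multinomial logit on cluster membership) can be sketched minimally. The sketch below is illustrative only: it uses synthetic curves and a plain two-cluster k-means in place of the authors' time-series clustering method, and it stops at stage 1.

```python
import numpy as np

def two_cluster_kmeans(X, iters=20):
    """Plain Lloyd's k-means with k=2 on whole time series.
    Deterministic seeding: the first series, plus the series farthest from it."""
    c0 = X[0]
    c1 = X[((X - c0) ** 2).sum(axis=1).argmax()]
    centers = np.vstack([c0, c1])
    for _ in range(iters):
        # assign each series to its nearest center (squared Euclidean distance)
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.vstack([X[labels == 0].mean(axis=0),
                             X[labels == 1].mean(axis=0)])
    return labels

# Synthetic county-level "weekly deaths" curves: an early-peak group and a
# late-peak group, standing in for the four patterns found in the paper.
rng = np.random.default_rng(1)
t = np.arange(52)
early = np.exp(-0.5 * ((t - 10) / 4.0) ** 2)
late = np.exp(-0.5 * ((t - 40) / 4.0) ** 2)
X = np.vstack([early + 0.05 * rng.standard_normal(52) for _ in range(20)]
              + [late + 0.05 * rng.standard_normal(52) for _ in range(20)])

labels = two_cluster_kmeans(X)
# Stage 2 in the paper regresses cluster membership on county-level
# predictors with a multinomial logit; that step is omitted here.
```

With well-separated shapes, the clustering recovers the two planted groups exactly.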
- Published
- 2021
9. Explaining the Varying Patterns of COVID-19 Deaths Across the United States: 2-Stage Time Series Clustering Framework (Preprint)
- Author
-
Fadel M Megahed, L Allison Jones-Farmer, Yinjiao Ma, and Steven E Rigdon
- Abstract
BACKGROUND Socially vulnerable communities are at increased risk for adverse health outcomes during a pandemic. Although this association has been established for H1N1, Middle East respiratory syndrome (MERS), and COVID-19 outbreaks, understanding the factors influencing the outbreak pattern for different communities remains limited. OBJECTIVE Our 3 objectives are to determine how many distinct clusters of time series there are for COVID-19 deaths in 3108 contiguous counties in the United States, how the clusters are geographically distributed, and what factors influence the probability of cluster membership. METHODS We proposed a 2-stage data analytic framework that can account for different levels of temporal aggregation for the pandemic outcomes and community-level predictors. Specifically, we used time-series clustering to identify clusters with similar outcome patterns for the 3108 contiguous US counties. Multinomial logistic regression was used to explain the relationship between community-level predictors and cluster assignment. We analyzed county-level confirmed COVID-19 deaths from Sunday, March 1, 2020, to Saturday, February 27, 2021. RESULTS Four distinct patterns of deaths were observed across the contiguous US counties. The multinomial regression model correctly classified 1904 (61.25%) of the counties’ outbreak patterns/clusters. CONCLUSIONS Our results provide evidence that county-level patterns of COVID-19 deaths are different and can be explained in part by social and political predictors.
- Published
- 2021
- Full Text
- View/download PDF
10. A Statistical (Process Monitoring) Perspective on Human Performance Modeling in the Age of Cyber-Physical Systems
- Author
-
L. Allison Jones-Farmer, Steven E. Rigdon, Manar Mohamed, Fadel M. Megahed, and Miao Cai
- Subjects
Identification (information), Authentication, Industry 4.0, Computer science, Frame (networking), Mobile computing, Cyber-physical system, Wearable computer, Data science, Host (network)
- Abstract
With the continued technological advancements in mobile computing, sensors, and artificial intelligence methodologies, computer acquisition of human and physical data, often called cyber-physical convergence, is becoming more pervasive. Consequently, personal device data can be used as a proxy for human operators, creating a digital signature of their typical usage. Examples of such data sources include wearable sensors, motion capture devices, and sensors embedded in workstations. Our motivation behind this paper is to encourage the quality community to investigate relevant research problems that pertain to human operators. To frame our discussion, we examine three application areas (with distinct data sources and characteristics) for human performance modeling: (a) identification of physical human fatigue using wearable sensors/accelerometers; (b) capturing changes in a driver's safety performance based on fusing on-board sensor data with online API data; and (c) human authentication for cybersecurity applications. Through three case studies, we identify opportunities for applying industrial statistics methodologies and present directions for future work. To encourage future examination by the quality community, we host our data, code, and analysis in an online repository.
- Published
- 2021
- Full Text
- View/download PDF
11. Modeling the differences in the time-series profiles of new COVID-19 daily confirmed cases in 3,108 contiguous U.S. counties: A retrospective analysis
- Author
-
L. Allison Jones-Farmer, Longwen Zhao, Steven E. Rigdon, and Fadel M. Megahed
- Subjects
Viral Diseases, Coronavirus disease 2019 (COVID-19), Epidemiology, Science, Political Science, Explanatory model, Population Dynamics, Social Sciences, Research and Analysis Methods, Models, Biological, Governments, Clustering Algorithms, Medical Conditions, Population Metrics, Retrospective analysis, Medicine and Health Sciences, Cluster Analysis, Humans, Public and Occupational Health, Location, Socioeconomic status, Pandemics, Retrospective Studies, Population Density, Multidisciplinary, Population Biology, Applied Mathematics, Simulation and Modeling, Outbreak, Biology and Life Sciences, United States, Socioeconomic Aspects of Health, Geographic Distribution, Health Care, Geography, Infectious Diseases, Time and Motion Studies, Physical Sciences, Medicine, Mathematics, Algorithms, Demography, Research Article
- Abstract
Objective The COVID-19 pandemic in the U.S. has exhibited a distinct multiwave pattern beginning in March 2020. Paradoxically, most counties do not exhibit this same multiwave pattern. We aim to answer three research questions: (1) How many distinct clusters of counties exhibit similar COVID-19 patterns in the time-series of daily confirmed cases? (2) What is the geographic distribution of the counties within each cluster? and (3) Are county-level demographic, socioeconomic and political variables associated with the COVID-19 case patterns? Materials and methods We analyzed data from counties in the U.S. from March 1, 2020 to January 2, 2021. Time series clustering identified clusters in the daily confirmed cases of COVID-19. An explanatory model was used to identify demographic, socioeconomic and political variables associated with the outbreak patterns. Results Three patterns were identified from the cluster solution including counties in which cases are still increasing, those that peaked in the late fall, and those with low case counts to date. Several county-level demographic, socioeconomic, and political variables showed significant associations with the identified clusters. Discussion The pattern of the outbreak is related both to the geographic location within the U.S. and several variables including population density and government response. Conclusion The reported pattern of cases in the U.S. is observed through aggregation of the daily confirmed COVID-19 cases, suggesting that local trends may be more informative. The pattern of the outbreak varies by county, and is associated with important demographic, socioeconomic, political and geographic factors.
- Published
- 2020
12. Guest editorial
- Author
-
Bianca Maria Colosimo, L. Allison Jones-Farmer, Ross Sparks, and David M. Steinberg
- Subjects
Safety, Risk, Reliability and Quality, Industrial and Manufacturing Engineering
- Published
- 2020
- Full Text
- View/download PDF
13. A forecasting framework for predicting perceived fatigue: Using time series methods to forecast ratings of perceived exertion with features from wearable sensors
- Author
-
Ehsan Rashedi, Sahand Hajifar, Lora Cavuoto, Fadel M. Megahed, L. Allison Jones-Farmer, and Hongyue Sun
- Subjects
Computer science, Physical Exertion, Wearable computer, Physical Therapy, Sports Therapy and Rehabilitation, Human Factors and Ergonomics, Perceived exertion, Machine learning, Vector autoregression, Wearable Electronic Devices, Gait (human), Humans, Autoregressive integrated moving average, Safety, Risk, Reliability and Quality, Engineering (miscellaneous), Fatigue, Error correction model, Autoregressive model, Research Design, Artificial intelligence, Material handling, Forecasting
- Abstract
Advancements in sensing and network technologies have increased the amount of data being collected to monitor worker conditions. In this study, we consider the use of time series methods to forecast physical fatigue using subjective ratings of perceived exertion (RPE) and gait data from wearable sensors captured during a simulated in-lab manual material handling task (Lab Study 1) and a fatiguing squatting with intermittent walking cycle (Lab Study 2). To determine whether time series models can accurately forecast individual response and for how many time periods ahead, five models were compared: the naive method, autoregression (AR), the autoregressive integrated moving average (ARIMA), vector autoregression (VAR), and the vector error correction model (VECM). For forecasts of three or more time periods ahead, the VECM model that incorporates historical RPE and wearable sensor data outperformed the other models, with median mean absolute errors (MAE) of 1.24 and 1.22 across all participants for Lab Study 1 and Lab Study 2, respectively. These results suggest that wearable sensor data can support forecasting a worker's condition, and the forecasts obtained are as good as those of current state-of-the-art models that use multiple sensors for current-time prediction.
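As a rough illustration of the comparison in this abstract, the sketch below pits the naive (last-value) forecast against a least-squares AR(1) on a synthetic RPE-like series, scoring both by MAE as in the paper. Everything here (the data, the 10-step test horizon) is invented for illustration; the paper's VAR/VECM models additionally fuse the wearable-sensor features.

```python
import numpy as np

def ar1_fit(y):
    """Least-squares AR(1) with intercept: y[t] = a + b*y[t-1] + e."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return a, b

def ar1_forecast(y, a, b, h):
    """Iterate the fitted recursion h steps past the end of y."""
    f, last = [], y[-1]
    for _ in range(h):
        last = a + b * last
        f.append(last)
    return np.array(f)

rng = np.random.default_rng(0)
# Synthetic slowly rising RPE-like series (Borg-style 6-20 scale assumed).
n = 120
y = 6 + 0.08 * np.arange(n) + 0.3 * rng.standard_normal(n)
train, test = y[:110], y[110:]

a, b = ar1_fit(train)
naive = np.repeat(train[-1], len(test))    # naive method: carry last value forward
ar = ar1_forecast(train, a, b, len(test))  # AR(1) multi-step forecasts

mae_naive = np.abs(test - naive).mean()
mae_ar = np.abs(test - ar).mean()
```

Scoring each model's multi-step-ahead MAE on a held-out tail, as above, mirrors the paper's evaluation design.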
- Published
- 2020
14. On the Selection of the Bandwidth Parameter for the k-Chart
- Author
-
Waldyn G. Martinez, L. Allison Jones-Farmer, and Maria L. Weese
- Subjects
Computer science, Bandwidth (signal processing), Management Science and Operations Research, Data description, Support vector machine, Chart, Gaussian function, One-class classification, Data mining, Safety, Risk, Reliability and Quality, Algorithm
- Abstract
The k-chart, based on support vector data description, has received recent attention in the literature. We review four different methods for choosing the bandwidth parameter, s, when the k-chart is designed using the Gaussian kernel. We provide results of extensive Phase I and Phase II simulation studies varying the method of choosing the bandwidth parameter along with the size and distribution of sample data. In very limited cases, the k-chart performed as desired. In general, we are unable to recommend the k-chart for use in a Phase I or Phase II process monitoring study in its current form. Copyright © 2017 John Wiley & Sons, Ltd.
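The statistic behind the k-chart can be sketched without the full SVDD optimization: with a Gaussian kernel and equal weights on all Phase I points, the monitored quantity reduces to a kernel distance from the sample's kernel centroid. The sketch below is a simplified illustration (equal weights rather than fitted support-vector weights) and uses the mean pairwise distance as one common bandwidth heuristic; it is not necessarily any of the four specific methods the paper reviews.

```python
import numpy as np

def gauss_kernel(A, B, s):
    """Gaussian kernel matrix K[i, j] = exp(-||a_i - b_j||^2 / s^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / s ** 2)

def mean_pairwise_bandwidth(X):
    """One common heuristic: set the bandwidth s to the mean pairwise distance."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2)[np.triu_indices(len(X), k=1)].mean()

def kernel_distance(z, X, s):
    """Squared kernel distance of z from the equal-weight kernel centroid:
    K(z,z) - (2/n) sum_i K(z,x_i) + (1/n^2) sum_ij K(x_i,x_j), with K(z,z)=1."""
    Kzx = gauss_kernel(z[None, :], X, s)[0]
    Kxx = gauss_kernel(X, X, s)
    return 1.0 - 2.0 * Kzx.mean() + Kxx.mean()

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))        # in-control Phase I sample
s = mean_pairwise_bandwidth(X)

d_in = kernel_distance(np.zeros(2), X, s)             # central point: small
d_out = kernel_distance(np.array([6.0, 6.0]), X, s)   # far outlier: large
```

A k-chart plots this statistic against a control limit; the paper's point is that the limit's behavior is very sensitive to how s is chosen.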
- Published
- 2017
- Full Text
- View/download PDF
15. Statistical Learning Methods Applied to Process Monitoring: An Overview and Perspective
- Author
-
L. Allison Jones-Farmer, Maria L. Weese, Fadel M. Megahed, and Waldyn G. Martinez
- Subjects
Multivariate statistics, Artificial neural network, Computer science, Process (engineering), Strategy and Management, Big data, Feature selection, Management Science and Operations Research, Data type, Data science, Industrial and Manufacturing Engineering, Support vector machine, Control chart, Data mining, Safety, Risk, Reliability and Quality
- Abstract
The increasing availability of high-volume, high-velocity data sets, often containing variables of different data types, brings an increasing need for monitoring tools that are designed to handle these big data sets. While the research on multivariate st..
- Published
- 2016
- Full Text
- View/download PDF
16. Knowledge, Skills, and Abilities for Entry-Level Business Analytics Positions: A Multi-Method Study
- Author
-
Casey G. Cegielski and L. Allison Jones-Farmer
- Subjects
Knowledge management, Multimethodology, Big data, Entry Level, Delphi method, Information technology, Public relations, Education, Business analytics, Analytics, Business intelligence, Business, Management and Accounting (miscellaneous), Decision Sciences (miscellaneous)
- Abstract
It is impossible to deny the significant impact of the emergence of big data and business analytics on the fields of Information Technology, Quantitative Methods, and the Decision Sciences. Both industry and academia seek to hire talent in these areas with the hope of developing organizational competencies. This article describes a multi-method research agenda that was executed to ascertain insights regarding which knowledge, skills, and abilities (KSAs) are valued by employers seeking to hire entry-level analytics professionals from schools of business. Current undergraduate business analytics programs are first examined to define the research scope. A triangulated mixed-method research approach is then used to determine the knowledge, skills, and abilities that are in demand for entry-level jobs in this area. Finally, the multi-method triangulation of data is combined with experiences in building academic programs in business analytics at two nationally ranked state universities to offer insights for those seeking to develop academic programs in this area.
- Published
- 2016
- Full Text
- View/download PDF
17. The Conditional In-Control Performance of Self-Starting Control Charts
- Author
-
L. Allison Jones-Farmer, William H. Woodall, and Matthew J. Keefe
- Subjects
Computer science, Control (management), Process (computing), Baseline data, Statistical process control, Industrial and Manufacturing Engineering, Data set, Statistics, Control chart, Data mining, Safety, Risk, Reliability and Quality, Shewhart individuals control chart, x̄ and R chart
- Abstract
The recommended size of the Phase I data set used to estimate the in-control parameters has been discussed many times in the process monitoring literature. Collecting baseline data, however, can be difficult or slow in some applications. Such issues hav..
- Published
- 2015
- Full Text
- View/download PDF
18. A Distribution-Free Multivariate Phase I Location Control Chart for Subgrouped Data from Elliptical Distributions
- Author
-
Nedret Billor, L. Allison Jones-Farmer, and Richard C. Bell
- Subjects
Statistics and Probability, Multivariate statistics, Chart, Applied Mathematics, Modeling and Simulation, Monte Carlo method, Statistics, Outlier, Nonparametric statistics, Radar chart, Control chart, Multivariate normal distribution, Mathematics
- Abstract
In quality control, a proper Phase I analysis is essential to the success of Phase II monitoring. A literature review reveals no distribution-free Phase I multivariate techniques in existence. This research develops a Phase I location control chart for multivariate elliptical processes. The resulting in-control reference sample can then be used to estimate the parameters for Phase II monitoring. Using Monte Carlo simulation, the proposed method is compared with the Hotelling's T² Phase I chart. Although Hotelling's T² chart is preferred when the data are multivariate normal, the proposed method is shown to perform significantly better under nonnormality. This article has supplementary material online.
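For context, the benchmark in this comparison can be written down compactly. The sketch below computes the Phase I Hotelling T² statistic for subgroup means (grand mean plus pooled within-subgroup covariance) on synthetic data with one shifted subgroup. This is a generic textbook version, not the authors' distribution-free chart, and the control limit is omitted.

```python
import numpy as np

def phase1_t2(subgroups):
    """Hotelling T^2 statistic for each subgroup mean, using the grand mean
    and the pooled within-subgroup covariance estimate."""
    m, n, p = subgroups.shape
    means = subgroups.mean(axis=1)        # (m, p) subgroup means
    grand = means.mean(axis=0)
    # pooled within-subgroup covariance (average of subgroup covariances)
    S = sum(np.cov(g, rowvar=False) for g in subgroups) / m
    Sinv = np.linalg.inv(S)
    dev = means - grand
    # quadratic form n * dev_i' Sinv dev_i, one value per subgroup
    return n * np.einsum('ij,jk,ik->i', dev, Sinv, dev)

rng = np.random.default_rng(0)
m, n, p = 25, 5, 2                 # 25 subgroups of size 5, two variables
data = rng.standard_normal((m, n, p))
data[-1] += 3.0                    # shift the last subgroup's mean
t2 = phase1_t2(data)
```

Each T² value is compared to a Phase I limit; the shifted subgroup stands out by an order of magnitude here.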
- Published
- 2014
- Full Text
- View/download PDF
19. Identifying Characteristics of Dissemination Success Using an Expert Panel
- Author
-
L. Allison Jones-Farmer, Chetan S. Sankar, David Bourrie, and Casey G. Cegielski
- Subjects
Knowledge management, Information Dissemination, Delphi method, Public relations, Diffusion of innovations, Education, Work (electrical), Business, Management and Accounting (miscellaneous), Decision Sciences (miscellaneous), Sociology, Research question
- Abstract
Although considerable work has been done to develop new educational innovations, few have found widespread acceptance in the classroom. To improve the likelihood of adoption of educational innovations, researchers need to understand why some innovations are adopted and routinely used, while others are not. An initial aspect of the diffusion of innovations, as defined in the classical sociological literature, involves the communication of ideas and concepts related to innovations between individuals. This article presents an expert panel's answer to the following question: “What are the most important characteristics that relate to the dissemination of educational innovations?” As dissemination is a critical facet of the diffusion of an innovation, 45 researchers who received technology and engineering grants from the National Science Foundation (NSF) participated in a Delphi study designed to address this research question. In three rounds, the experts identified and ranked 11 characteristics of educational innovations, 6 characteristics of students, 13 characteristics of faculty members, and 5 characteristics of administrators that can relate to the successful dissemination of educational innovations. The results of this study led to the formation of a Characteristics of Dissemination Success (CODS) framework. This framework offers useful guidance for educational innovators seeking a better understanding of the influences on the dissemination of educational innovations.
- Published
- 2014
- Full Text
- View/download PDF
20. Using Visual Data Mining to Enhance the Simple Tools in Statistical Process Control: A Case Study
- Author
-
Fadel M. Megahed, L. Allison Jones-Farmer, Mark Clark, and Huw D. Smith
- Subjects
Visual analytics, Process (engineering), Computer science, Process capability, Management Science and Operations Research, Statistical process control, Visualization, Ishikawa diagram, Quality (business), Seven Basic Tools of Quality, Data mining, Safety, Risk, Reliability and Quality
- Abstract
Statistical process control (SPC) is a collection of problem-solving tools used to achieve process stability and improve process capability through variation reduction. Because of its sound statistical basis and intuitive use of visual displays, SPC has been extensively used in manufacturing and health care and service industries. Deploying SPC involves both a technical aspect and a proper environment for continuous improvement activities based on management support and worker empowerment. Many of the commonly used SPC tools, including histograms, fishbone diagrams, scatter plots, and defect concentration diagrams, were proposed prior to the advent of microcomputers as efficient methods to record and visualize data for single (or few) variable(s) processes. As the volume, variety, and velocity of data continues to evolve, there are opportunities to supplement and improve these methods for understanding and visualizing process variation. In this paper, we propose enhancements to some of the basic quality tools that can be easily applied with a desktop computer. We demonstrate how these updated tools can be used to better characterize, understand, and/or diagnose variation in a case study involving a US manufacturer of structural tubular metal products. Finally, we create the quality visualization toolkit to allow practitioners to implement some of these visualization tools without the need for training, extensive statistical background, and/or specialized statistical software. Copyright © 2014 John Wiley & Sons, Ltd.
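One of the "simple tools" being enhanced here is the control chart; a minimal individuals-chart limit calculation (the X side of the classic X-mR pair) looks like the following. This is a generic textbook sketch, not code from the paper's quality visualization toolkit.

```python
import numpy as np

def xmr_limits(x):
    """Limits for the individuals (X) chart of the X-mR pair:
    center +/- 2.66 * mean moving range (2.66 = 3/d2, d2 = 1.128 for n = 2)."""
    mr = np.abs(np.diff(x)).mean()   # mean moving range of consecutive points
    center = x.mean()
    return center - 2.66 * mr, center, center + 2.66 * mr

rng = np.random.default_rng(0)
x = 10 + 0.5 * rng.standard_normal(100)   # a stable process measurement
lcl, center, ucl = xmr_limits(x)
out = (x < lcl) | (x > ucl)               # flag out-of-control points
```

The visual enhancements the paper proposes sit on top of basic calculations like this one.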
- Published
- 2014
- Full Text
- View/download PDF
21. Data quality for data science, predictive analytics, and big data in supply chain management: An introduction to the problem and suggestions for research and applications
- Author
-
Jeremy D. Ezell, Benjamin T. Hazen, Christopher A. Boone, and L. Allison Jones-Farmer
- Subjects
Economics and Econometrics, Supply chain management, Computer science, Supply chain, Data management, Big data, Context (language use), Management Science and Operations Research, Predictive analytics, Statistical process control, General Business, Management and Accounting, Data science, Industrial and Manufacturing Engineering, Data governance, Data quality
- Abstract
Today's supply chain professionals are inundated with data, motivating new ways of thinking about how data are produced, organized, and analyzed. This has provided an impetus for organizations to adopt and perfect data analytic functions (e.g., data science, predictive analytics, and big data) in order to enhance supply chain processes and, ultimately, performance. However, management decisions informed by the use of these data analytic methods are only as good as the data on which they are based. In this paper, we introduce the data quality problem in the context of supply chain management (SCM) and propose methods for monitoring and controlling data quality. In addition to advocating for the importance of addressing data quality in supply chain research and practice, we also highlight interdisciplinary research topics based on complementary theory.
- Published
- 2014
- Full Text
- View/download PDF
22. An Overview of Phase I Analysis for Process Improvement and Monitoring
- Author
-
William H. Woodall, Stefan H. Steiner, L. Allison Jones-Farmer, and Charles W. Champ
- Subjects
Data collection, Operations research, Computer science, Strategy and Management, Process improvement, Management Science and Operations Research, Work in process, Statistical process control, Phase (combat), Industrial and Manufacturing Engineering, Systems engineering, Data analysis, Control chart, Safety, Risk, Reliability and Quality
- Abstract
An overview and perspective are provided on the Phase I collection and analysis of data for use in process improvement and control charting.
- Published
- 2014
- Full Text
- View/download PDF
23. Performance expectancy and use of enterprise architecture: training as an intervention
- Author
-
LeeAnn Kung, L. Allison Jones-Farmer, Casey G. Cegielski, and Benjamin T. Hazen
- Subjects
Expectancy theory, Engineering, Knowledge management, Business process, General Decision Sciences, Enterprise architecture, Affect (psychology), Structural equation modeling, Test (assessment), Survey methodology, Management of Technology and Innovation, Information system, Information Systems
- Abstract
Purpose – Enterprise architecture (EA) aligns information systems with business processes to enable firms to reach their strategic objectives and, when effectively employed by organizations, can lead to enhanced levels of performance. However, while many firms may adopt EA, it is often not used extensively. The purpose of this paper is to examine how performance expectancy (PE) and training affect the degree to which organizations use EA. Design/methodology/approach – The paper employed a survey method to gather data from IT professionals, senior managers, and consultants who work within organizations that have adopted EA. Covariance-based structural equation modeling was used to analyze the research model and test the hypotheses. Findings – The paper found PE to be a significant predictor of EA use. In addition, training is also shown to enhance use of EA while also playing a mediating role within the relationship between PE and use of EA. Research limitations/implications – The study is limited by the focus only on training as an intervention. Other mediators and/or moderators such as top management support and organization culture may also play an important role and should be examined in future studies. Nonetheless, the study demonstrates the critical role that training can play in facilitating widespread use of EA within organizations. Practical implications – Widespread use is a critical success factor for organizations that want to gain the maximum possible benefit from EA. To achieve extensive use, the study suggests that organizations that adopt EA should consider implementing a formal and robust education and training program. Originality/value – This study extends the research on information technology training by examining the role of training as an intervention within the technology acceptance paradigm. The paper also contributes to the literature regarding post-adoption innovation diffusion by demonstrating the efficacy of organizational training in promoting widespread usage.
- Published
- 2014
- Full Text
- View/download PDF
24. Applying Control Chart Methods to Enhance Data Quality
- Author
-
Jeremy D. Ezell, Benjamin T. Hazen, and L. Allison Jones-Farmer
- Subjects
Statistics and Probability ,Quality management ,business.industry ,Computer science ,Process (engineering) ,Applied Mathematics ,computer.software_genre ,Data science ,Variety (cybernetics) ,Data governance ,Analytics ,Modeling and Simulation ,Data quality ,Data analysis ,Control chart ,Data mining ,business ,computer - Abstract
As the volume and variety of available data continue to proliferate, organizations increasingly turn to analytics in order to enhance business decision-making and ultimately, performance. However, the decisions made as a result of the analytics process are only as good as the data on which they are based. In this article, we examine the data quality problem and propose the use of control charting methods as viable tools for data quality monitoring and improvement. We motivate our discussion using an integrated case study example of a real aircraft maintenance database. We include discussions of the measures of multiple data quality dimensions in this online process. We highlight the lack of appropriate statistical methods for the analysis of this type of problem and suggest opportunities for research in control chart methods within the data quality environment. This article has supplementary material online.
- Published
- 2014
- Full Text
- View/download PDF
25. The Robustness of ME/I Evaluations to Among-Group Dependence
- Author
-
L. Allison Jones-Farmer, Brian Luis Perdomo, Bryan D. Edwards, and Daniel J. Svyantek
- Subjects
Sociology and Political Science ,Monte Carlo method ,General Decision Sciences ,Confirmatory factor analysis ,Standard error ,Modeling and Simulation ,Likelihood-ratio test ,Statistics ,Econometrics ,Measurement invariance ,Specific model ,Psychology ,General Economics, Econometrics and Finance ,Equivalence (measure theory) ,Factor analysis - Abstract
This study evaluates the robustness of multisample confirmatory factor analysis approaches to measurement equivalence/invariance (ME/I) evaluations under a specific model misspecification: when independence among the groups is assumed, but groups are dependent. Monte Carlo simulation is used to investigate the robustness of several ME/I evaluation procedures including the likelihood ratio test (Δχ²), ΔCFI, ΔSRMR, and ΔRMSEA. The effect of this model misspecification on the factor loadings and their standard errors is also considered. Assuming the groups are independent when they are not is shown to have no practical effect on the results of the Δχ², ΔCFI, ΔSRMR, or ΔRMSEA procedures because the effect of model misspecification is canceled out through differencing. Similarly, the estimated factor loadings and standard errors are not significantly affected by incorrectly assuming independence among the groups. These results are used to develop recommendations for researchers who conduct multigroup analyses using struct...
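The chi-square difference (likelihood ratio) comparison described in the abstract above can be sketched as follows. This is a minimal illustration of the nested-model comparison, not the authors' simulation code; the fit values and degrees of freedom are hypothetical.

```python
# Sketch of the chi-square difference test used in multigroup CFA
# measurement invariance evaluations. All numeric values are hypothetical.

def chi_square_difference(chisq_constrained, df_constrained,
                          chisq_unconstrained, df_unconstrained):
    """Return (delta_chisq, delta_df) for a nested model comparison."""
    return (chisq_constrained - chisq_unconstrained,
            df_constrained - df_unconstrained)

# Hypothetical fit results: a constrained (equal-loadings) model vs. an
# unconstrained (configural) model fit to the same multigroup data.
d_chi, d_df = chi_square_difference(156.4, 94, 148.2, 88)
# Invariance is rejected if d_chi exceeds the chi-square critical value
# with d_df degrees of freedom.
```

The key point of the abstract is that the bias introduced by wrongly assuming independent groups appears in both the constrained and unconstrained fit statistics, so it largely cancels in this difference.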
- Published
- 2014
- Full Text
- View/download PDF
26. One-Class Peeling for Outlier Detection in High Dimensions
- Author
-
Waldyn G Martinez, Weese, Maria L, and L Allison Jones-Farmer
- Published
- 2017
- Full Text
- View/download PDF
27. Consumer reactions to the adoption of green reverse logistics
- Author
-
L. Allison Jones-Farmer, Dianne J. Hall, Casey G. Cegielski, Benjamin T. Hazen, and Yun Wu
- Subjects
Marketing ,Economics and Econometrics ,Supply chain ,media_common.quotation_subject ,Green logistics ,Reverse logistics ,Reuse ,Competitive advantage ,Willingness to pay ,Loyalty ,Business ,Business and International Management ,Remanufacturing ,media_common - Abstract
Firms are beginning to find that the adoption of certain reverse logistics practices may offer a lucrative approach to greening their supply chain. Consisting of remanufacturing, reusing, and recycling, this idea of green reverse logistics is currently being diffused throughout the supply chain. A recently published logistics diffusion model with its basis in resource-advantage theory suggests that a logistics innovation is more likely to be adopted if it enhances competitive advantage for the firm considering adoption. Our study uses diffusion of innovation and resource-advantage theories as a foundation to investigate consumer reactions to firms that implement green reverse logistics practices. We investigate whether a firm's adoption of green reverse logistics leads to higher levels of consumer loyalty and a willingness to pay more for the firm's products. Our findings suggest that consumers' satisfaction with firms that adopt green reverse logistics leads to increased levels of loyalty to the firm, wh...
- Published
- 2012
- Full Text
- View/download PDF
28. Adoption of cloud computing technologies in supply chains
- Author
-
Casey G. Cegielski, Benjamin T. Hazen, L. Allison Jones-Farmer, and Yun Wu
- Subjects
Process management ,Supply chain management ,business.industry ,Computer science ,Supply chain ,Delphi method ,Information processing ,Information technology ,Transportation ,Cloud computing ,Information processing theory ,Enabling ,Business and International Management ,Marketing ,business - Abstract
Purpose – The purpose of this paper is to employ organizational information processing theory to assess how a firm's information processing requirements and capabilities combine to affect the intention to adopt cloud computing as an enabler of electronic supply chain management systems. Specifically, the paper examines the extent to which task uncertainty, environmental uncertainty, and inter‐organizational uncertainty affect intention to adopt cloud computing technology and how information processing capability may moderate these relationships. Design/methodology/approach – The paper uses a multiple method approach, thus examining the hypothesized model with both quantitative and qualitative methods. To begin, the paper incorporates a Delphi study as a way in which to choose a practically relevant characterization of the moderating variable, information processing capability. The authors then use a survey method and hierarchical linear regression to quantitatively test their hypotheses. Finally, the authors employ interviews to gather additional qualitative data, which they examine via use of content analysis in order to provide additional insight into the tenability of the proposed model. Findings – The quantitative analysis suggests that significant two‐way interactions exist between each independent variable and the moderating variable; each of these interactions is significantly related to intention to adopt cloud computing. The qualitative results support the assertion that information processing requirements and information processing capability affect intention to adopt cloud computing. These findings support the relationships addressed in the hypothesized model and suggest that the decision to adopt cloud computing is based upon complex circumstances. Research limitations/implications – This research is limited by the use of single key informants for both the quantitative and qualitative portions of the study. Nonetheless, this study enhances understanding of electronic supply chain management systems, and specifically cloud computing, through the application of organizational information processing theory. The authors' mixed‐methods approach allowed them to draw more substantive conclusions; the findings provide a theoretical and empirical foundation for future research in this area, and also suggest the use of additional theoretical perspectives. Practical implications – This study provides insight that can help supply chain managers to better understand how requirements, when coupled with capabilities, may influence the decision to adopt cloud computing as an enabler of supply chain management systems. Originality/value – As an emerging technology, cloud computing is changing the form and function of information technology infrastructures. This study enhances the understanding of how this technology may diffuse within the supply chain.
- Published
- 2012
- Full Text
- View/download PDF
29. A Proposed Framework for Educational Innovation Dissemination
- Author
-
Benjamin T. Hazen, L. Allison Jones-Farmer, Chetan S. Sankar, and Yun Wu
- Subjects
Educational research ,Knowledge management ,business.industry ,Computer science ,Process (engineering) ,Multimethodology ,Education theory ,Technology transfer ,Educational technology ,Predictor variables ,business ,Dissemination - Abstract
Although the need for new educational technologies is increasing, the process for disseminating these innovations remains a challenge. A literature review shows that few studies have thoroughly investigated this area. Furthermore, there is no comprehensive framework or coordinated research agenda that may be used to guide such investigation. This study draws on diffusion of innovation, technology acceptance, and related literatures as a basis to examine the process by which educational innovations are disseminated. In this article, we develop a framework for educational innovation dissemination and illustrate the process described by the framework using online education as an example. The stages of the dissemination process are discussed and it is shown how characteristics of the innovation, adopter, and environment may affect how well an innovation progresses through these stages. This leads to the development of a series of propositions that can be used as the basis for future investigation.
- Published
- 2012
- Full Text
- View/download PDF
30. The role of ambiguity tolerance in consumer perception of remanufactured products
- Author
-
Hubert S. Feild, L. Allison Jones-Farmer, Robert E. Overstreet, and Benjamin T. Hazen
- Subjects
Economics and Econometrics ,media_common.quotation_subject ,Ambiguity ,Management Science and Operations Research ,General Business, Management and Accounting ,Industrial and Manufacturing Engineering ,Structural equation modeling ,Willingness to pay ,Order (business) ,Perception ,Quality (business) ,Business ,Marketing ,Activity-based costing ,Remanufacturing ,media_common - Abstract
This study examines ambiguity tolerance, perceived quality, and willingness to pay for remanufactured products. We found evidence to support a direct relationship between a consumer's tolerance for ambiguity and their willingness to pay for remanufactured products. There was also support for an indirect relationship between ambiguity tolerance and willingness to pay that is mediated through perceived quality. Extant literature often lacks an empirical justification regarding costing and quality assumptions for remanufactured products. This research provides such justification while also offering an explanation as to why consumers view remanufactured products as being of lower quality and are less willing to pay for them. Those employed in the remanufacturing industry are advised to reduce the level of ambiguity associated with their remanufacturing processes in order to command higher prices for their products in the marketplace.
- Published
- 2012
- Full Text
- View/download PDF
31. A Distribution-Free Phase I Control Chart for Subgroup Scale
- Author
-
L. Allison Jones-Farmer and Charles W. Champ
- Subjects
Distribution free ,021103 operations research ,Scale (ratio) ,Strategy and Management ,Monte Carlo method ,0211 other engineering and technologies ,Nonparametric statistics ,Phase (waves) ,02 engineering and technology ,Management Science and Operations Research ,Statistical process control ,01 natural sciences ,Industrial and Manufacturing Engineering ,010104 statistics & probability ,Statistics ,Process control ,Control chart ,0101 mathematics ,Safety, Risk, Reliability and Quality ,Mathematics - Abstract
A Phase I control chart that is distribution free is proposed for processes that are in control. The proposed method achieves in-control performance when used with both normal and nonnormal data and is sensitive to subgroup differences in process scale...
- Published
- 2010
- Full Text
- View/download PDF
32. The Effect of Among-Group Dependence on the Invariance Likelihood Ratio Test
- Author
-
L. Allison Jones-Farmer
- Subjects
Sociology and Political Science ,General Decision Sciences ,Latent variable ,Factor structure ,Confirmatory factor analysis ,Goodness of fit ,Modeling and Simulation ,Likelihood-ratio test ,Statistics ,Econometrics ,Measurement invariance ,General Economics, Econometrics and Finance ,Equivalence (measure theory) ,Mathematics ,Statistical hypothesis testing - Abstract
When comparing latent variables among groups, it is important to first establish the equivalence or invariance of the measurement model across groups. Confirmatory factor analysis (CFA) is a commonly used methodological approach to examine measurement equivalence/invariance (ME/I). Within the CFA framework, the chi-square goodness-of-fit test and chi-square difference tests are used to evaluate ME/I, and these tests rely on the assumption of independence among groups. Limitations in the study design can hinder the practicality of the independence among groups assumption. This article illustrates, algebraically, the effects of violations of independence on the chi-square goodness-of-fit and chi-square difference test statistics in a multigroup CFA.
- Published
- 2010
- Full Text
- View/download PDF
33. Distribution-Free Phase I Control Charts for Subgroup Location
- Author
-
L. Allison Jones-Farmer, Charles W. Champ, and Victoria S. Jordan
- Subjects
021103 operations research ,Operations research ,Computer science ,Strategy and Management ,0211 other engineering and technologies ,Nonparametric statistics ,02 engineering and technology ,Management Science and Operations Research ,Statistical process control ,01 natural sciences ,Industrial and Manufacturing Engineering ,010104 statistics & probability ,Heavy-tailed distribution ,Outlier ,Process control ,Control chart ,0101 mathematics ,Robust control ,Safety, Risk, Reliability and Quality ,Completeness (statistics) ,Algorithm - Abstract
Statistical quality control often relies on the completion of a Phase I study. Many Phase I control charts, however, rely on an assumption of normally distributed process observations that may not be reasonable in the early stages of process control. A ..
- Published
- 2009
- Full Text
- View/download PDF
34. A Note on Multigroup Comparisons Using SAS PROC CALIS
- Author
-
R. Kelly Rainer, L. Allison Jones-Farmer, and Jennifer P. Pitts
- Subjects
Sociology and Political Science ,Degrees of freedom (statistics) ,General Decision Sciences ,Sampling (statistics) ,Structural equation modeling ,Standard error ,Goodness of fit ,Sample size determination ,Modeling and Simulation ,Statistics ,Measurement invariance ,Psychology ,Equal size ,General Economics, Econometrics and Finance ,Algorithm - Abstract
Although SAS PROC CALIS is not designed to perform multigroup comparisons, it is believed that SAS can be “tricked” into doing so for groups of equal size. At present, there are no comprehensive examples of the steps involved in performing a multigroup comparison in SAS. The purpose of this article is to illustrate these steps. We demonstrate procedures using an example to evaluate the measurement invariance of communication satisfaction and organizational justice across 2 groups. Following the approach outlined in Byrne (2004), we conduct the same analysis in AMOS and compare the results to those obtained in SAS. We show that the sample size must be input correctly and the degrees of freedom must be adjusted in order for the standard errors and goodness-of-fit statistics to be correct. In addition, several of the fit indexes must be modified to obtain the correct values.
- Published
- 2008
- Full Text
- View/download PDF
35. Properties of Multivariate Control Charts with Estimated Parameters
- Author
-
L. Allison Jones-Farmer and Charles W. Champ
- Subjects
Statistics and Probability ,Multivariate statistics ,Statistical distance ,Chart ,Covariance matrix ,Multivariate exponentially weighted moving average ,Modeling and Simulation ,Statistics ,Length distribution ,Multivariate control charts ,Mathematics - Abstract
Hotelling's T2, multivariate exponentially weighted moving average (MEWMA), and several multivariate cumulative sum (MCUSUM) charts are examined in this paper. Two descriptions are given of each chart with estimated parameters for monitoring the mean of a vector of quality measurements. For each chart, one description explains how the chart can be applied with estimated parameters in practice, and the other description is useful for analyzing the run length performance of the chart. It is shown that, if the covariance matrix is in control, the run length distribution of most of these charts depends only on the distributional parameters through the size of the process shift in terms of statistical distance. Simulation is used to provide performance analyses and comparisons of these charts. An example is given to illustrate the MCUSUM and MEWMA charts when parameters are estimated.
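The T2 statistic at the core of the charts in the abstract above can be sketched for a single bivariate observation. This is an illustrative computation only; the mean vector and covariance matrix below are made-up values, not estimates from any real process.

```python
# Minimal sketch of the Hotelling T^2 statistic for one bivariate observation:
# T^2 = (x - mean)' S^{-1} (x - mean), with a 2x2 covariance matrix S.
# All numeric values are illustrative.

def t_squared_2d(x, mean, cov):
    """Squared statistical (Mahalanobis) distance of x from mean under cov."""
    d0, d1 = x[0] - mean[0], x[1] - mean[1]
    a, b = cov[0]
    c, d = cov[1]
    det = a * d - b * c
    # apply the inverse of the 2x2 covariance to the deviation vector
    inv_d0 = (d * d0 - b * d1) / det
    inv_d1 = (-c * d0 + a * d1) / det
    return d0 * inv_d0 + d1 * inv_d1

t2 = t_squared_2d(x=[1.0, 2.0], mean=[0.0, 0.0], cov=[[1.0, 0.0], [0.0, 1.0]])
# with an identity covariance, T^2 reduces to the squared Euclidean distance
```

The abstract's point that run length performance depends on the shift only through "statistical distance" refers to exactly this quantity: shifts with equal T2-type distance are equally detectable.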
- Published
- 2007
- Full Text
- View/download PDF
36. An assessment of attraction toward affirmative action organizations: investigating the role of individual differences
- Author
-
Jeremy B. Bernerth, William F. Giles, L. Allison Jones-Farmer, H. Jack Walker, and Hubert S. Feild
- Subjects
Attractiveness ,Organizational Behavior and Human Resource Management ,Affirmative action ,Equity (economics) ,Sociology and Political Science ,education ,Psychology ,Attraction ,Equal employment opportunity ,Social psychology ,humanities ,General Psychology ,Applied Psychology - Abstract
Our study investigated applicant characteristics in response to organizations incorporating an affirmative action policy (AAP) statement in recruitment material. Study participants (N = 217; White upper-level management students) randomly received recruitment material containing one of three statements (e.g., affirmative action, equal employment opportunity (EEO), or no statement regarding affirmative action or EEO) and were asked to evaluate the attractiveness of the organization publicizing the designated policy. Results indicated that individuals responded negatively to AAPs in recruitment material because of prejudice attitudes, the perceived unfairness of such programs (which we relate to equity sensitivity), or in an attempt to protect their own self-interest (which we relate to general self-efficacy). Additionally, individuals' equity sensitivity and general self-efficacy both moderated the relationship between racial prejudice and organizational attractiveness. Specifically, the negative relationships between participants' prejudice attitudes and the attractiveness of organizations publicizing an affirmative action policy were stronger for benevolents (persons tolerant of situations where they are under-rewarded) and for persons low in self-efficacy. Implications of our findings for organizational recruitment practices are discussed. Copyright © 2006 John Wiley & Sons, Ltd.
- Published
- 2007
- Full Text
- View/download PDF
37. Effects of Parameter Estimation on Control Chart Properties: A Literature Review
- Author
-
Charles W. Champ, L. Allison Jones-Farmer, William H. Woodall, and Willis A. Jensen
- Subjects
021103 operations research ,business.industry ,Estimation theory ,Strategy and Management ,0211 other engineering and technologies ,02 engineering and technology ,Conditional probability distribution ,Management Science and Operations Research ,Statistical process control ,01 natural sciences ,Industrial and Manufacturing Engineering ,010104 statistics & probability ,Control limits ,Sample size determination ,Statistics ,Control chart ,0101 mathematics ,Marginal distribution ,Safety, Risk, Reliability and Quality ,business ,Quality assurance ,Mathematics - Abstract
Control chart limits are often calculated using parameter estimates from an in-control Phase I reference sample. In Phase II monitoring, statistics based on new samples are compared with the estimated control limits to monitor for departures from the in..
- Published
- 2006
- Full Text
- View/download PDF
38. Properties of the T2 Control Chart When Parameters Are Estimated
- Author
-
Charles W. Champ, Steven E. Rigdon, and L. Allison Jones-Farmer
- Subjects
Statistics and Probability ,Chart ,Control limits ,Covariance matrix ,Sample size determination ,Applied Mathematics ,Modeling and Simulation ,Statistics ,X-bar chart ,Control chart ,Multivariate normal distribution ,Statistical process control ,Mathematics - Abstract
Moments of the run length distribution are often used to design and study the performance of quality control charts. In this article the run length distribution of the T2 chart for monitoring a multivariate process mean is analyzed. It is assumed that the in-control process observations are iid random samples from a multivariate normal distribution with unknown mean vector and covariance matrix. It is shown that the in-control run length distribution of the chart does not depend on the unknown process parameters. Furthermore, it is shown that the out-of-control run length distribution of the chart depends only on the statistical distance between the in-control and out-of-control mean vectors. It follows that a performance analysis can be given without knowledge of the in-control values of the parameters or their estimates. The performance of charts constructed using traditional F-distribution–based control limits is studied. Recommendations are given for sample size requirements necessary to achieve desir...
- Published
- 2005
- Full Text
- View/download PDF
39. Designing Phase I X̄ Charts with Small Sample Sizes
- Author
-
L. Allison Jones and Charles W. Champ
- Subjects
Simple (abstract algebra) ,Joint probability distribution ,Statistics ,Phase (waves) ,Estimator ,Control chart ,Multivariate t-distribution ,Management Science and Operations Research ,Safety, Risk, Reliability and Quality ,\bar x and R chart ,Standard deviation ,Mathematics - Abstract
Methods are examined for obtaining probability limits for Phase I Shewhart X̄ charts when the process mean and standard deviation are estimated. The design methods assume m independent random samples of size n will be taken periodically from a normally distributed process. It is shown that the joint distribution of the standardized subgroup means follows either an approximate or exact multivariate t distribution, depending on the standard deviation estimator used. The multivariate t distribution is used to define the probability limits for the Phase I X̄ chart. Extensive simulation compares the performance of the proposed limits with other design procedures. Tables of design constants are given for , and simple procedures for obtaining design constants are given for . Copyright © 2004 John Wiley & Sons, Ltd.
- Published
- 2004
- Full Text
- View/download PDF
40. The Run Length Distribution of the CUSUM with Estimated Parameters
- Author
-
Charles W. Champ, Steven E. Rigdon, and L. Allison Jones
- Subjects
021103 operations research ,Average run length ,Computer science ,Strategy and Management ,0211 other engineering and technologies ,Process (computing) ,CUSUM ,02 engineering and technology ,Management Science and Operations Research ,01 natural sciences ,Industrial and Manufacturing Engineering ,Cusum control chart ,010104 statistics & probability ,Statistics ,Econometrics ,Production (economics) ,Control chart ,Length distribution ,0101 mathematics ,Safety, Risk, Reliability and Quality - Abstract
The performance of the CUSUM control chart used to monitor the performance of production processes is usually evaluated with the assumption that the process parameters are known. In practice, however, the parameters are seldom known and are often replace..
- Published
- 2004
- Full Text
- View/download PDF
41. Statistical Perspectives on 'Big Data'
- Author
-
Fadel M. Megahed and L. Allison Jones-Farmer
- Subjects
Clustering high-dimensional data ,business.industry ,Analytics ,Computer science ,Big data ,Control chart ,Information infrastructure ,business ,Statistical process control ,Data science ,Term (time) ,Variety (cybernetics) - Abstract
As our information infrastructure evolves, our ability to store, extract, and analyze data is rapidly changing. Big data is a popular term that is used to describe the large, diverse, complex and/or longitudinal datasets generated from a variety of instruments, sensors and/or computer-based transactions. The term big data refers not only to the size or volume of data, but also to the variety of data and the velocity or speed of data accrual. As the volume, variety, and velocity of data increase, our existing analytical methodologies are stretched to new limits. These changes pose new opportunities for researchers in statistical methodology, including those interested in surveillance and statistical process control methods. Although it is well documented that harnessing big data to make better decisions can serve as a basis for innovative solutions in industry, healthcare, and science, these solutions can be found more easily with sound statistical methodologies. In this paper, we discuss several big data applications to highlight the opportunities and challenges for applied statisticians interested in surveillance and statistical process control. Our goal is to bring the research issues into better focus and encourage methodological developments for big data analysis in these areas.
- Published
- 2015
- Full Text
- View/download PDF
42. The Statistical Design of EWMA Control Charts with Estimated Parameters
- Author
-
L. Allison Jones
- Subjects
Service (business) ,021103 operations research ,Statistical design ,Average run length ,Computer science ,Strategy and Management ,0211 other engineering and technologies ,Exponentially weighted moving average ,02 engineering and technology ,Management Science and Operations Research ,01 natural sciences ,Integral equation ,Industrial and Manufacturing Engineering ,010104 statistics & probability ,Statistics ,Control chart ,EWMA chart ,0101 mathematics ,Safety, Risk, Reliability and Quality - Abstract
[This abstract is based on the author's abstract.] In designing exponentially weighted moving average (EWMA) control charts it is generally assumed that the parameters are known. In most industrial and service applications, however, the parameters are u..
- Published
- 2002
- Full Text
- View/download PDF
43. A Self-Starting Control Chart for Multivariate Individual Observations
- Author
-
L. Allison Jones and Joe H. Sullivan
- Subjects
Statistics and Probability ,u-chart ,Chart ,Control limits ,Applied Mathematics ,Modeling and Simulation ,X-bar chart ,Statistics ,Control chart ,EWMA chart ,Shewhart individuals control chart ,Statistical process control ,Mathematics - Abstract
Multivariate versions of cumulative sum, exponentially weighted moving average (EWMA), and Hotelling's T2 charts typically assume knowledge of the in-control process parameters or, with a new or changed process, use parameter estimates from an in-control reference sample of preliminary observations. In contrast, the self-starting chart begins controlling the process without the need for preliminary observations, an advantage when production is slow or when the cost of early out-of-control production is high. Furthermore, the use of estimated parameters substantially degrades the expected performance of conventional charts, a problem avoided by the proposed chart. The self-starting chart uses the deviation of each observation vector from the average of all previous observations. These deviations, or innovations, can be plotted on a T2 control chart or accumulated in a multivariate EWMA (MEWMA) chart. The run-length performance is evaluated for step shifts occurring at various points, and the MEWMA charts a...
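The self-starting idea described in the abstract above, in which each observation is compared with the average of all previous observations so that no preliminary reference sample is needed, can be sketched in a univariate form. The chart in the paper is multivariate; this simplification and the data values are for illustration only.

```python
# Sketch of the self-starting "innovation": the deviation of each new
# observation from the running mean of all of its predecessors.
# Univariate simplification of the multivariate scheme; data are illustrative.

def self_starting_innovations(observations):
    """Return the deviation of each observation from the mean of prior ones."""
    innovations = []
    running_sum = 0.0
    for i, x in enumerate(observations):
        if i > 0:  # the first point has no predecessors to compare against
            innovations.append(x - running_sum / i)
        running_sum += x
    return innovations

innov = self_starting_innovations([10.0, 12.0, 11.0, 20.0])
# innovations: [2.0, 0.0, 9.0] -- the last point deviates sharply
```

These innovations are the quantities that would then be fed into a T2 or MEWMA chart, per the abstract.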
- Published
- 2002
- Full Text
- View/download PDF
44. Phase I control charts for times between events
- Author
-
L. Allison Jones and Charles W. Champ
- Subjects
c-chart ,Engineering ,Exponential distribution ,business.industry ,Phase (waves) ,Process (computing) ,Poisson process ,Management Science and Operations Research ,Reliability engineering ,symbols.namesake ,Control limits ,Statistics ,symbols ,Control chart ,Safety, Risk, Reliability and Quality ,business ,Type I and type II errors - Abstract
A count of the number of defects is often used to monitor the quality of a production process. When defects rarely occur in a process, it is often desirable to monitor the time between the occurrence of each defect rather than a count of the number of defects. An exponential distribution often provides a useful model of the time between defects. Phase I control charts for exponentially distributed processes are discussed. Methods for computing the control limits are given and the overall Type I error rates of these charts are evaluated. Copyright © 2002 John Wiley & Sons, Ltd.
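For an exponential model of times between defects as in the abstract above, equal-tail probability limits follow from the exponential quantile function. This is one common textbook construction under an assumed known mean, not necessarily the exact Phase I limits derived in the paper; the mean value is illustrative.

```python
import math

# Equal-tail probability limits for a time-between-events chart assuming
# observations follow an exponential distribution with known mean theta.
# A common textbook construction; theta here is an illustrative value.

def exponential_limits(theta, alpha=0.0027):
    """Limits at the alpha/2 and 1 - alpha/2 quantiles of Exp(mean=theta)."""
    lcl = -theta * math.log(1.0 - alpha / 2.0)  # lower (alpha/2) quantile
    ucl = -theta * math.log(alpha / 2.0)        # upper (1 - alpha/2) quantile
    return lcl, ucl

lcl, ucl = exponential_limits(theta=100.0)
# note the strong asymmetry of the limits, a consequence of the skewed
# exponential distribution
```

An unusually short time between defects signals process deterioration (a point below the LCL), while a long time may signal improvement.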
- Published
- 2002
- Full Text
- View/download PDF
45. The Performance of Exponentially Weighted Moving Average Charts With Estimated Parameters
- Author
-
Steven E. Rigdon, Charles W. Champ, and L. Allison Jones
- Subjects
Statistics and Probability ,Estimation ,u-chart ,Chart ,Applied Mathematics ,Modeling and Simulation ,X-bar chart ,Statistics ,Process (computing) ,Control chart ,Sensitivity (control systems) ,EWMA chart ,Mathematics - Abstract
The exponentially weighted moving average (EWMA) control chart is typically designed assuming that standards are given for the process parameters. In practice, the parameters are rarely known, and control charts are constructed using estimates in place of the parameters. This practice can affect the control chart's run-length performance in both in- and out-of-control situations. Specifically, estimation can lead to substantially more frequent false alarms and yet reduce the sensitivity of the chart to detecting process changes. In this article, the run-length distribution of the EWMA chart with estimated parameters is derived. The effect of estimation on the performance of the chart is discussed in a variety of practical scenarios.
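The EWMA recursion underlying the chart in the abstract above is Z_i = λX_i + (1 − λ)Z_{i−1}, started at the target mean. The sketch below uses illustrative data; in practice the starting target is usually an estimate, which is precisely the source of the performance degradation the abstract discusses.

```python
# Sketch of the EWMA statistic Z_i = lam * X_i + (1 - lam) * Z_{i-1},
# started at a (possibly estimated) target mu0. Data are illustrative.

def ewma(values, mu0, lam=0.2):
    """Return the EWMA series for the given observations."""
    z = mu0
    series = []
    for x in values:
        z = lam * x + (1.0 - lam) * z
        series.append(z)
    return series

series = ewma([10.0, 10.0, 14.0], mu0=10.0, lam=0.2)
# series is approximately [10.0, 10.0, 10.8]: the shift at the third
# observation moves the statistic only partway toward the new level
```

If mu0 is estimated with error, every Z_i inherits that error, which shifts the whole series relative to the control limits and alters the run-length distribution.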
- Published
- 2001
- Full Text
- View/download PDF
46. Exact Properties of Demerit Control Charts
- Author
-
Michael D. Conerly, William H. Woodall, and L. Allison Jones
- Subjects
021103 operations research ,Strategy and Management ,0211 other engineering and technologies ,02 engineering and technology ,Management Science and Operations Research ,Statistical process control ,01 natural sciences ,Industrial and Manufacturing Engineering ,Multi-vari chart ,Plot (graphics) ,010104 statistics & probability ,Product (mathematics) ,Statistics ,Control chart ,Rating system ,0101 mathematics ,Safety, Risk, Reliability and Quality ,Linear combination ,Statistic ,Mathematics - Abstract
A demerit rating system is used to simultaneously monitor counts of several types of defects in a complex product. The demerit statistic is a linear combination of the counts of these types of defects. The traditional recommendation is to plot the dem..
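The demerit statistic described in the abstract above is simply a severity-weighted sum of defect counts. The sketch below uses the classic 100/50/10/1 weighting scheme purely as an illustration; the paper does not prescribe these particular weights.

```python
# The demerit statistic: a linear combination of defect counts by severity
# class. The 100/50/10/1 weights and the counts are illustrative only.

def demerit_statistic(counts, weights=(100, 50, 10, 1)):
    """D = sum over severity classes of weight_j * count_j."""
    return sum(w * c for w, c in zip(weights, counts))

# counts of (critical, major, minor, incidental) defects in one inspected unit
d = demerit_statistic((0, 1, 3, 5))
# D = 0*100 + 1*50 + 3*10 + 5*1 = 85
```

Because D is a linear combination of (typically Poisson) counts, its exact distribution is not itself Poisson, which is why the paper's exact analysis matters for setting control limits.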
- Published
- 1999
- Full Text
- View/download PDF
47. The Performance of Bootstrap Control Charts
- Author
-
William H. Woodall and L. Allison Jones
- Subjects
Statistics::Theory ,021103 operations research ,Computer science ,Strategy and Management ,0211 other engineering and technologies ,02 engineering and technology ,Management Science and Operations Research ,Statistical process control ,01 natural sciences ,Industrial and Manufacturing Engineering ,010104 statistics & probability ,Control limits ,Statistics ,Econometrics ,Statistics::Methodology ,Control chart ,0101 mathematics ,Safety, Risk, Reliability and Quality ,Shewhart individuals control chart ,Parametric statistics - Abstract
The bootstrap is a statistical technique that substitutes computing-power for traditional parametric assumptions. Recently, several authors have considered the application of the bootstrap to statistical quality control charts. Simulation studies have b..
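To illustrate the idea in the abstract, the sketch below builds control limits for an individuals chart from bootstrap quantiles instead of the usual normal-theory formula. This is a minimal sketch of one common bootstrap-limit construction, not the specific procedures evaluated in the article; the function name and defaults are assumptions:

```python
import numpy as np

def bootstrap_limits(phase1, alpha=0.0027, n_boot=2000, seed=1):
    """Bootstrap control limits for an individuals chart.

    Resamples the Phase I data with replacement, takes the tail
    quantiles of each resample, and averages them, avoiding the
    normality assumption behind classical Shewhart limits.
    """
    rng = np.random.default_rng(seed)
    phase1 = np.asarray(phase1)
    boot = rng.choice(phase1, size=(n_boot, len(phase1)), replace=True)
    lcl = np.quantile(boot, alpha / 2, axis=1).mean()
    ucl = np.quantile(boot, 1 - alpha / 2, axis=1).mean()
    return lcl, ucl
```

The simulation comparisons the abstract refers to ask whether such computed limits actually achieve their nominal false-alarm rate in small Phase I samples.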
- Published
- 1998
- Full Text
- View/download PDF
48. A runs rule alternative to level crossings in statistical process control
- Author
-
L. Allison Jones and William H. Woodall
- Subjects
Statistics and Probability ,Applied Mathematics ,Monte Carlo method ,Autocorrelation ,Process (computing) ,Rule-based system ,Level crossing ,Statistical process control ,Modeling and Simulation ,Statistics ,Control chart ,Statistics, Probability and Uncertainty ,Marginal distribution ,Algorithm ,Mathematics - Abstract
We develop a method for monitoring autocorrelated and independent processes based on the number of consecutive values above or below a threshold level. Our runs-rule-based method is compared to the level crossing control chart proposed by Willemain and Runger (1994). Monte Carlo simulations are used to show that the runs-rule-based method detects large and small shifts in the process mean more quickly than the level crossing method. The runs-rule-based method is widely applicable to high-output processes, robust to the marginal distribution of the data, highly accessible to practitioners, and effectively detects shifts in the process mean.
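The rule in the abstract signals when enough consecutive observations fall on the same side of a threshold. A minimal sketch of that idea (the function name, the `>=` side convention, and the signal-time return value are assumptions for illustration, not the paper's exact procedure):

```python
def runs_rule_signal(values, threshold, k):
    """Signal when k consecutive observations fall on the same side
    of the threshold level; return the 1-based index of the first
    signal, or None if the rule never fires."""
    run, prev_side = 0, None
    for i, v in enumerate(values, start=1):
        side = v >= threshold  # which side of the threshold this point is on
        run = run + 1 if side == prev_side else 1
        prev_side = side
        if run >= k:
            return i
    return None
```

Because the rule depends only on which side of the threshold each value falls, not on its magnitude, it is robust to the marginal distribution of the data, as the abstract notes.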
- Published
- 1997
- Full Text
- View/download PDF
49. Discussion of 'Analyzing behavioral big data: Methodological, practical, ethical, and moral issues'
- Author
-
L. Allison Jones-Farmer and Nathaniel T. Stevens
- Subjects
010104 statistics & probability ,021103 operations research ,Computer science ,business.industry ,Big data ,0211 other engineering and technologies ,Engineering ethics ,02 engineering and technology ,0101 mathematics ,Safety, Risk, Reliability and Quality ,business ,01 natural sciences ,Industrial and Manufacturing Engineering - Abstract
Galit Shmueli provides a thought-provoking introduction to the Behavioral Big Data (BBD) landscape in “Analyzing Behavioral Big Data: Methodological, practical, ethical and moral issues.” We are gr...
- Published
- 2016
- Full Text
- View/download PDF
50. Discussion of 'Bridging the gap between theory and practice in basic statistical process monitoring'
- Author
-
L. Allison Jones-Farmer and Nathaniel T. Stevens
- Subjects
010104 statistics & probability ,021103 operations research ,Management science ,0211 other engineering and technologies ,Engineering ethics ,Statistical process monitoring ,02 engineering and technology ,Sociology ,0101 mathematics ,Safety, Risk, Reliability and Quality ,01 natural sciences ,Industrial and Manufacturing Engineering ,Bridging (programming) - Abstract
We congratulate Professor William H. Woodall on his thorough articulation of both the reasons behind the gap between theory and practice and also his recommended solutions for bridging this gap in ...
- Published
- 2016
- Full Text
- View/download PDF