11 results for "Oguz Akkas"
Search Results
2. Driver Movement Patterns Indicate Distraction and Engagement.
- Author
- Robert G. Radwin, John D. Lee, and Oguz Akkas
- Published
- 2017
- Full Text
- View/download PDF
3. Identifying Multiple Sclerosis Relapses from Clinical Notes Using Combined Rule-based and Deep Learning Methodologies (P3-3.001)
- Author
- Iris Chin, Heather Moss, Kathryn Sands, Manan Kocher, Oguz Akkas, and Aracelis Torres
- Published
- 2023
- Full Text
- View/download PDF
4. Driver Movement Patterns Indicate Distraction and Engagement
- Author
- John D. Lee, Oguz Akkas, and Robert G. Radwin
- Subjects
Adult, Automobile Driving, Psychometrics, Poison control, Human Factors and Ergonomics, Motor Activity, Suicide prevention, Occupational safety and health, Behavioral Neuroscience, Physical medicine and rehabilitation, Distraction, Injury prevention, Humans, Attention, Applied Psychology, Biomechanical Phenomena, Psychology, Psychomotor Performance
- Abstract
Objective This research considers how driver movements in video clips of naturalistic driving are related to observer subjective ratings of distraction and engagement behaviors. Background Naturalistic driving video provides a unique window into driver behavior unmatched by crash data, roadside observations, or driving simulator experiments. However, manually coding many thousands of hours of video is impractical. An objective method of identifying driver behaviors suggestive of distracted or disengaged driving is needed so that automated computer vision analysis can access this rich source of data. Method Visual analog scales ranging from 0 to 10 were created, and observers rated their perception of driver distraction and engagement behaviors from selected naturalistic driving videos. Driver kinematics time series were extracted from frame-by-frame coding of driver motions, including head rotation, head flexion/extension, and hands on/off the steering wheel. Results The ratings were consistent among participants. A statistical model predicting average ratings from the kinematic features accounted for 54% of distraction rating variance and 50% of engagement rating variance. Conclusion Rated distraction behavior was positively related to the magnitude of head rotation and the fraction of time the hands were off the wheel. Rated engagement behavior was positively related to the variation of head rotation and negatively related to the fraction of time the hands were off the wheel. Application If automated computer vision can code simple kinematic features, such as driver head and hand movements, then large volumes of naturalistic driving video could be automatically analyzed to identify instances when drivers were distracted or disengaged.
- Published
- 2017
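The model described in the abstract above predicts mean observer ratings from per-clip kinematic features. A minimal sketch of that kind of model, using ordinary least squares on fabricated data; the feature set, coefficients, and sample sizes here are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch: OLS regression predicting mean distraction ratings
# from per-clip kinematic features. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_clips = 120

# Illustrative per-clip features: mean |head rotation| (deg), head rotation
# variability (deg), and fraction of time hands were off the wheel.
X = np.column_stack([
    rng.uniform(0, 60, n_clips),
    rng.uniform(0, 20, n_clips),
    rng.uniform(0, 1, n_clips),
])
# Synthetic 0-10 ratings loosely following the reported signs: more head
# rotation and more hands-off time -> higher rated distraction.
y = np.clip(0.08 * X[:, 0] + 4.0 * X[:, 2] + rng.normal(0, 1, n_clips), 0, 10)

# Fit y = X b + b0 by least squares and report R^2.
A = np.column_stack([X, np.ones(n_clips)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"coefficients: {coef[:-1]}, intercept: {coef[-1]:.2f}, R^2: {r2:.2f}")
```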
5. Measuring exertion time, duty cycle and hand activity level for industrial tasks using computer vision
- Author
- Oguz Akkas, David Rempel, Yu Hen Hu, Carisa Harris Adamson, Cheng Hsien Lee, and Robert G. Radwin
- Subjects
Engineering, Feature vector, Physical Exertion, Decision tree, Video Recording, Physical Therapy, Sports Therapy and Rehabilitation, Human Factors and Ergonomics, Repetitive motion, Computer vision algorithms, Humans, Computer vision, Exertion, Sensitivity (control systems), Simulation, Computers, Hand, Duty cycle, Time and Motion Studies, Artificial intelligence, Algorithms
- Abstract
Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was −5.8% for the Decision Tree (DT) algorithm and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found that HAL remained unaffected when the DC error was less than 5%. Thus, a DC error of less than 10% will affect HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
- Published
- 2017
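Duty cycle in this line of work is the percentage of time spent exerting. A minimal sketch, assuming frame-level exertion labels are available, of how DC could be computed and how a manual-versus-automatic DC error might be measured; the labels below are simulated, not from the study:

```python
# Sketch: duty cycle from per-frame exertion labels, with a simulated
# comparison between manual and computer-vision coding. Data are fabricated.
import numpy as np

def duty_cycle(exerting: np.ndarray) -> float:
    """Percent of frames labeled as exertion."""
    return 100.0 * exerting.mean()

rng = np.random.default_rng(1)
manual = rng.random(3000) < 0.45         # manual frame-by-frame labels
flips = rng.random(3000) < 0.03          # pretend CV disagrees on ~3% of frames
automatic = np.where(flips, ~manual, manual)

dc_manual = duty_cycle(manual)
dc_auto = duty_cycle(automatic)
print(f"manual DC: {dc_manual:.1f}%  automatic DC: {dc_auto:.1f}%  "
      f"error: {dc_auto - dc_manual:+.1f} percentage points")
```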
6. Measuring Elemental Time and Duty Cycle Using Automated Video Processing
- Author
- Yu Hen Hu, Cheng-Hsien Lee, Oguz Akkas, Robert G. Radwin, and Thomas Y. Yen
- Subjects
Male, Computer science, Feature vector, Movement, Acceleration, Decision tree, Video Recording, Physical Therapy, Sports Therapy and Rehabilitation, Human Factors and Ergonomics, Kinematics, Article, Task Performance and Analysis, Image Processing, Computer-Assisted, Humans, Computer vision, Ground truth, Repetitive task, Video processing, Hand, Biomechanical Phenomena, Duty cycle, Video tracking, Muscle Fatigue, Female, Artificial intelligence, Algorithms
- Abstract
A marker-less 2D video algorithm measured hand kinematics (location, velocity and acceleration) in a paced repetitive laboratory task for varying hand activity levels (HAL). The decision tree (DT) algorithm identified the trajectory of the hand using spatiotemporal relationships during the exertion and rest states. The feature vector training (FVT) method utilised a k-nearest neighbourhood classifier, trained using either a sample set or the first cycle. The average duty cycle (DC) error using the DT algorithm was 2.7%. The FVT algorithm had an average error of 3.3% when trained using the first-cycle sample of each repetitive task, and an average error of 2.8% when trained using several representative repetitive cycles. Error for HAL was 0.1 for both algorithms, which was considered negligible. Elemental times, stratified by task and subject, were not statistically different from ground truth (p
- Published
- 2016
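The FVT approach described above amounts to a nearest-neighbour classifier over per-frame kinematic feature vectors, trained on analyst-labeled frames from the first cycle. A hedged sketch of that idea; the two-feature representation, the data, and the class structure are invented for illustration:

```python
# Sketch of feature-vector training: a k-NN classifier labels each frame as
# exertion or rest from hand kinematic features, trained on first-cycle
# frames an analyst has labeled. All data below are synthetic.
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, query, k=5):
    """Majority vote among the k nearest training feature vectors."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(train_y[nearest]).most_common(1)[0][0]

rng = np.random.default_rng(2)
# First-cycle frames (analyst-labeled): columns = [hand speed, acceleration].
train_X = np.vstack([rng.normal([2.0, 1.0], 0.3, (40, 2)),   # exertion frames
                     rng.normal([0.3, 0.1], 0.2, (40, 2))])  # rest frames
train_y = np.array([1] * 40 + [0] * 40)

# Classify a frame from a later cycle of the same task.
frame = np.array([1.8, 0.9])
print("exertion" if knn_predict(train_X, train_y, frame) else "rest")
```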
7. A hand speed-duty cycle equation for estimating the ACGIH hand activity level rating
- Author
- Oguz Akkas, Sheryl S. Ulin, Robert G. Radwin, David P. Azari, Thomas J. Armstrong, David Rempel, Chia Hsiung Eric Chen, and Yu Hen Hu
- Subjects
Engineering, Work, Threshold Limit Values, Movement, Monte Carlo method, Physical Exertion, Physical Therapy, Sports Therapy and Rehabilitation, Human Factors and Ergonomics, Residual, Article, Root mean square, Statistics, Linear regression, Task Performance and Analysis, Range (statistics), Humans, Simulation, Occupational Health, Anthropometry, Regression analysis, Hand, United States, Biomechanical Phenomena, Military Personnel, Duty cycle
- Abstract
An equation was developed for estimating hand activity level (HAL) directly from tracked root mean square (RMS) hand speed (S) and duty cycle (D). A table lookup, an equation, or marker-less video tracking can estimate HAL from motion/exertion frequency (F) and D. Since automatically estimating F is sometimes complex, HAL may be more readily assessed using S. Hands from 33 videos originally used for the HAL rating were tracked to estimate S, scaled relative to hand breadth (HB), and single-frame analysis was used to measure D. Since HBs were unknown, a Monte Carlo method was employed for iteratively estimating the regression coefficients from US Army anthropometry survey data. The equation HAL = 10[e^(−15.87 + 0.02D + 2.25 ln S) / (1 + e^(−15.87 + 0.02D + 2.25 ln S))], R² = 0.97, had a residual range of ±0.5 HAL. The S equation fit the Latko et al. (1997) data better and predicted independently observed HAL values (Harris 2011) more accurately (MSE = 0.16) than the F equation (MSE = 1.28).
- Published
- 2014
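Written out as code, the fitted equation above maps RMS hand speed S and duty cycle D to a 0-10 HAL value through a scaled logistic function. The coefficients come from the abstract; the example inputs below are arbitrary:

```python
# The hand speed-duty cycle equation from the abstract, as a function.
# D is percent duty cycle; S is RMS hand speed scaled relative to hand
# breadth. Example inputs are illustrative only.
import math

def hal_from_speed_duty_cycle(S: float, D: float) -> float:
    """HAL = 10 * e^z / (1 + e^z), with z = -15.87 + 0.02*D + 2.25*ln(S)."""
    z = -15.87 + 0.02 * D + 2.25 * math.log(S)
    return 10.0 * math.exp(z) / (1.0 + math.exp(z))

print(round(hal_from_speed_duty_cycle(S=740.0, D=50.0), 1))  # ~5.0 here
```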
8. Measuring Duty Cycle Using Automated Video Processing
- Author
- Yu Hen Hu, Robert G. Radwin, and Oguz Akkas
- Subjects
Medical Terminology, Computer science, Duty cycle, Real-time computing, Medical Assisting and Transcription
- Published
- 2015
- Full Text
- View/download PDF
9. Are Driver Movement Patterns Indicators of Distraction and Engagement?
- Author
- Robert G. Radwin, Oguz Akkas, and John D. Lee
- Subjects
Medical Terminology, Physical medicine and rehabilitation, Movement, Distraction, Psychology, Medical Assisting and Transcription
- Published
- 2015
- Full Text
- View/download PDF
10. Measuring Customer Satisfaction in Turk Telekom Company Using Structural Equation Modeling Technique
- Author
- Selim Zaim, Ali Turkyilmaz, Mehves Tarim, Bilal Ucar, and Oguz Akkas
- Published
- 2010
- Full Text
- View/download PDF
11. How do computer vision upper extremity exposure measures compare against manual measures?
- Author
- Alysha R. Meyers, Oguz Akkas, David Rempel, Yu Hen Hu, Carisa Harris-Adamson, Robert G. Radwin, Stephen Bao, Jia-Hua Lin, and Cheng-Hsien Lee
- Subjects
Quantification methods, Computer science, Physical medicine and rehabilitation, Medical Terminology, Medical Assisting and Transcription
- Abstract
Background: Various quantification methods have been used to measure exposure to risk factors for musculoskeletal injuries, including observation, video-based frame-by-frame analysis, and direct measurements. Each technique has advantages and disadvantages. The American Conference of Governmental Industrial Hygienists (2017) Threshold Limit Value® (TLV®) uses the hand activity level (HAL) rating scale, a 10-point visual analog scale based on hand speed and rest pauses. HAL may be determined subjectively by an observer, from a lookup table, or from an equation by measuring exertion frequency (F) and percent duty cycle (D). This study compares task-level physical exposure variables measured manually and using computer vision for jobs drawn from a selected subset of the Upper Limb MSD Consortium prospective study. We compared F and D calculated both by manual single-frame MVTA analysis and by automatic computer vision (Akkas et al., 2015, 2016, 2017; Greene et al., 2017).
Methods: This study utilized exposure data from prospective studies conducted by the National Institute for Occupational Safety and Health (NIOSH), the Safety & Health Assessment & Research for Prevention (SHARP) program in the State of Washington, and the University of California, San Francisco (UCSF). Some data from these prospective cohort studies had been previously pooled and analyzed as part of the Upper Limb MSD Consortium, a group of seven prospective cohort studies (Bao et al., 2015; Harris-Adamson et al., 2013a, 2013b, 2014; Kapellusch et al., 2013, 2014; Fan et al., 2015). Because the videos were created for a different purpose, not all were suitable for computer vision analysis. To date, we have selected 1001 videos to which we applied hand tracking and data checking; thus, not all study sites are equally represented. The occurrence of each exertion was first identified in all the videos by human analysts for manually calculating the frequency (exertions/second) and duty cycle (percent exertion time per cycle time). The hands were tracked using marker-less video tracking, and a feature vector training (FVT) algorithm (Akkas et al., 2016, 2017) was trained using the first-cycle exertions identified by an analyst to automatically estimate subsequent exertions in the videos. We then applied the FVT algorithm to the 1001 video clips and automatically identified the video frames representing exertions of the dominant hand. From these we counted the total number of exertion frames as well as the total number of exertions to calculate F and D.
Results: The D (%) and F (Hz) errors were calculated as the average difference between the manual frame-by-frame and the computer vision estimates. We found an average error of 12.7% (SD = 36.8%) for D and 0.06 Hz (SD = 0.38 Hz) for F. The average HAL error was 1.3 (SD = 2.2), which was considered negligible.
Conclusions: The results indicate that computer vision can reliably estimate important exposure variables for many tasks. Since the videos used in this study were taken for a different purpose, we anticipate the algorithms will perform better when videos are recorded specifically for computer vision analysis.
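The task-level variables compared above, exertion frequency F (exertions/second) and duty cycle D (percent), can both be derived from a per-frame exertion mask such as the FVT tracker produces. A minimal sketch under that assumption; the mask and frame rate are invented for illustration:

```python
# Sketch: F (Hz) and D (%) from a per-frame exertion mask. Data are toy values.
import numpy as np

def exposure_from_mask(exerting: np.ndarray, fps: float):
    """Return (F in Hz, D in percent) from per-frame exertion labels."""
    # An exertion starts wherever the mask rises from 0 to 1.
    starts = np.flatnonzero(np.diff(exerting.astype(int)) == 1)
    n_exertions = len(starts) + int(exerting[0])  # count a clip-initial exertion
    duration_s = len(exerting) / fps
    return n_exertions / duration_s, 100.0 * exerting.mean()

# Toy clip: 0.5 s of exertion alternating with 0.5 s of rest, at 30 fps.
mask = np.tile(np.array([1] * 15 + [0] * 15), 20).astype(bool)
F, D = exposure_from_mask(mask, fps=30.0)
print(f"F = {F:.2f} Hz, D = {D:.0f}%")  # expect F = 1.00 Hz, D = 50%
```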