11 results for "Katsavounidis I"
Search Results
2. Topology control with coverage and lifetime optimization of wireless sensor networks with unequal energy distribution
- Author
- Xenakis, A., Foukalas, F., Stamoulis, G., and Katsavounidis, I.
- Published
- 2017
3. Efficient video processing at scale using MSVP
- Author
- Tescher, Andrew G., Ebrahimi, Touradj, Reddy, H. M., Chen, Y., Lan, J., Katsavounidis, I., Anandharengan, B., Lalgudi, H. G., Alaparthi, S., Hua, G., Chuang, H.-C., Wu, P.-H., Lei, Z., Mastro, A., Petersen, C., Chaudhari, G., Prakash, P., Regunathan, S., Reddy, S., Venkatapuram, P., Rao, V., Noru, K., Bjorlin, A., Zeile, M., Lewis, A., Singh, A., Sunil, A., Chen, C.-C., Lin, C.-F., Chen, C., Sundar, D. P., Jayaraman, D., Ucar, H., Li, H., Singh, J., Liu, J. C. C., Rachamreddy, K. R., Sriadibhatla, K., Datla, K., Berg, L. V. D., Feng, L., Jampani, P., Moola, R., Mallya, R., Jha, S., Pan, S., Srinivasan, S., Vaduganathan, V., Zha, X., Wang, Z., Sengottuvel, A. K., Alluri, B., Oshin, B., Kanumetta, C., Sahin, E., Athaide, J. M., Wu, J., Kurapati, K. C., Manthati, K., Thottempudi, K., Chennamsetti, R. R., Jagannath, K. R., Arvapalli, S., Kala, T., Wang, T., Chopda, P., Gandhi, K., Ramesh, A., Gupta, R., Fadnavis, S., Qassoud, A., Friedt, C., Li, F., Gao, H., Lee, J., Dixit, M., Ugaji, S., Karuturi, T., Xie, X., Narasimha, A., Jakka, B., Dodds, B., Yang, J., Skandakumaran, K., Modi, M., Modi, P., Stejerean, C., Ronca, D., Wang, H., Pham, N., Lu, L., Shen, H., Ning, J., Narayanan, K., Chen, L., Avidan, N., Arnold, W., Xu, F., Patil, G., Balan, V., and Grandhi, S. D.
- Published
- 2023
4. Subjective and Objective Quality Assessment of Rendered Human Avatar Videos in Virtual Reality.
- Author
- Chen YC, Saha A, Chapiro A, Hane C, Bazin JC, Qiu B, Zanetti S, Katsavounidis I, and Bovik AC
- Subjects
- Humans, Avatar, Video Recording methods, Virtual Reality, Algorithms, Image Processing, Computer-Assisted methods
- Abstract
We study the visual quality judgments of human subjects on digital human avatars (sometimes referred to as "holograms" in the parlance of virtual reality [VR] and augmented reality [AR] systems) that have been subjected to distortions. We also study the ability of video quality models to predict human judgments. As streaming of human avatar videos in VR and AR becomes increasingly common, more advanced human avatar video compression protocols will be required to address the trade-offs between faithfully transmitting high-quality visual representations and adapting to changeable bandwidth scenarios. During transmission over the internet, the perceived quality of compressed human avatar videos can be severely impaired by visual artifacts. To optimize trade-offs between perceptual quality and data volume in practical workflows, video quality assessment (VQA) models are essential tools. However, very few VQA algorithms have been developed specifically to analyze human body avatar videos, due, at least in part, to the dearth of appropriate and comprehensive datasets of adequate size. Towards filling this gap, we introduce the LIVE-Meta Rendered Human Avatar VQA Database, which contains 720 human avatar videos processed using 20 different combinations of encoding parameters, labeled by corresponding human perceptual quality judgments that were collected in six-degrees-of-freedom VR headsets. To demonstrate the usefulness of this new and unique video resource, we use it to study and compare the performances of a variety of state-of-the-art Full Reference and No Reference video quality prediction models, including a new model called HoloQA. As a service to the research community, we publicly release the metadata of the new database at https://live.ece.utexas.edu/research/LIVE-Meta-rendered-human-avatar/index.html.
- Published
- 2024
5. One Transform To Compute Them All: Efficient Fusion-Based Full-Reference Video Quality Assessment.
- Author
- Venkataramanan AK, Stejerean C, Katsavounidis I, and Bovik AC
- Abstract
The Visual Multimethod Assessment Fusion (VMAF) algorithm has recently emerged as a state-of-the-art approach to video quality prediction that now pervades the streaming and social media industry. However, since VMAF requires the evaluation of a heterogeneous set of quality models, it is computationally expensive. Given other advances in hardware-accelerated encoding, quality assessment is emerging as a significant bottleneck in video compression pipelines. To alleviate this burden, we propose a novel Fusion of Unified Quality Evaluators (FUNQUE) framework, which enables computation sharing and uses a transform that is sensitive to visual perception to boost accuracy. Further, we expand the FUNQUE framework to define a collection of improved low-complexity fused-feature models that advance the state of the art in video quality prediction with respect to both accuracy, by 4.2% to 5.3%, and computational efficiency, by factors of 3.8 to 11.
- Published
- 2023
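The computation-sharing idea described in the abstract, deriving several quality features from one shared transform of each frame instead of running every quality model on the raw frames separately, can be sketched as follows (a minimal NumPy illustration, not the actual FUNQUE implementation; the Haar approximation band and the two toy features are stand-ins for the paper's perceptually sensitive transform and fused features):

```python
import numpy as np

def haar_approx(img):
    """One-level 2D Haar approximation band: 2x2 block averages.
    A stand-in for a shared, perceptually weighted wavelet transform."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def fused_features(ref, dis):
    """Compute multiple quality features from ONE shared transform of the
    reference and distorted frames, so the transform cost is paid once."""
    a_ref, a_dis = haar_approx(ref), haar_approx(dis)
    band_mse = float(np.mean((a_ref - a_dis) ** 2))
    # SSIM-style luminance comparison on the shared band (toy version).
    mu_r, mu_d = a_ref.mean(), a_dis.mean()
    band_lum = float((2 * mu_r * mu_d + 1e-3) / (mu_r**2 + mu_d**2 + 1e-3))
    return {"band_mse": band_mse, "band_luminance": band_lum}

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
features = fused_features(ref, ref + 0.1 * rng.standard_normal((32, 32)))
```

A fusion model would then combine such shared-transform features (e.g. by regression against subjective scores) rather than evaluating independent full quality models.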
6. Study of Subjective and Objective Quality Assessment of Mobile Cloud Gaming Videos.
- Author
- Saha A, Chen YC, Davis C, Qiu B, Wang X, Gowda R, Katsavounidis I, and Bovik AC
- Abstract
We present the outcomes of a recent large-scale subjective study of Mobile Cloud Gaming Video Quality Assessment (MCG-VQA) on a diverse set of gaming videos. Rapid advancements in cloud services, faster video encoding technologies, and increased access to high-speed, low-latency wireless internet have all contributed to the exponential growth of the Mobile Cloud Gaming industry. Consequently, the development of methods to assess the quality of real-time video feeds delivered to end-users of cloud gaming platforms has become increasingly important. However, due to the lack of a large-scale public Mobile Cloud Gaming Video dataset containing a diverse set of distorted videos with corresponding subjective scores, there has been limited work on the development of MCG-VQA models. To accelerate progress towards these goals, we created a new dataset, named the LIVE-Meta Mobile Cloud Gaming (LIVE-Meta-MCG) video quality database, composed of 600 landscape and portrait gaming videos, on which we collected 14,400 subjective quality ratings in an in-lab subjective study. Additionally, to demonstrate the usefulness of the new resource, we benchmarked multiple state-of-the-art VQA algorithms on the database. The new database will be made publicly available on our website: https://live.ece.utexas.edu/research/LIVE-Meta-Mobile-Cloud-Gaming/index.html.
- Published
- 2023
7. Towards Perceptually Optimized Adaptive Video Streaming-A Realistic Quality of Experience Database.
- Author
- Bampis CG, Li Z, Katsavounidis I, Huang TY, Ekanadham C, and Bovik AC
- Abstract
Measuring Quality of Experience (QoE) and integrating these measurements into video streaming algorithms is a multi-faceted problem that fundamentally requires the design of comprehensive subjective QoE databases and objective QoE prediction models. To achieve this goal, we have recently designed the LIVE-NFLX-II database, a highly realistic database which contains subjective QoE responses to various design dimensions, such as bitrate adaptation algorithms, network conditions, and video content. Our database builds on recent advancements in content-adaptive encoding and incorporates actual network traces to capture realistic network variations on the client device. The new database focuses on low-bandwidth conditions, which are more challenging for bitrate adaptation algorithms that must often navigate trade-offs between rebuffering and video quality. Using our database, we study the effects of multiple streaming dimensions on user experience, evaluate video quality and QoE prediction models, and analyze their strengths and weaknesses. We believe that the tools introduced here will help inspire further progress on the development of perceptually optimized client adaptation and video streaming strategies. The database is publicly available at http://live.ece.utexas.edu/research/LIVE_NFLX_II/live_nflx_plus.html.
- Published
- 2021
8. Image Coding with Data-Driven Transforms: Methodology, Performance and Potential.
- Author
- Zhang X, Yang C, Li X, Liu S, Yang H, Katsavounidis I, Lei SM, and Kuo CJ
- Abstract
Image compression has been an important topic over the last decades due to the explosive growth in the number of images. Popular image compression formats are based on transforms that convert images from the spatial domain into a compact frequency domain to remove spatial correlation. In this paper, we focus on the exploration of a data-driven transform, the Karhunen-Loève transform (KLT), whose kernels are derived from specific images via Principal Component Analysis (PCA), and design a highly efficient KLT-based image compression algorithm with variable transform sizes. To explore the optimal compression performance, multiple transform sizes and categories are utilized and determined adaptively according to their rate-distortion (RD) costs. Moreover, comprehensive analyses of the transform coefficients are provided, and a band-adaptive quantization scheme is proposed based on the coefficient RD performance. Extensive experiments are performed on several class-specific images as well as general images, and the proposed method achieves significant coding gain over popular image compression standards, including JPEG and JPEG 2000, and over state-of-the-art dictionary-learning-based methods.
- Published
- 2020
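The core KLT pipeline described above, deriving transform kernels from image data via PCA and discarding weak coefficients, can be sketched in a few lines (a minimal NumPy illustration under simplifying assumptions: the 8x8 patch size, the toy image, and the fixed coefficient budget `k` are arbitrary choices, and a real codec would add adaptive size selection, quantization, and entropy coding):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a smooth gradient plus noise, split into 8x8 patches.
img = (np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
       + 0.05 * rng.standard_normal((64, 64)))
patches = img.reshape(8, 8, 8, 8).swapaxes(1, 2).reshape(-1, 64)  # 64 patches

# Derive the KLT kernel via PCA: eigenvectors of the patch covariance matrix.
mean = patches.mean(axis=0)
cov = np.cov((patches - mean).T)
eigvals, eigvecs = np.linalg.eigh(cov)
kernel = eigvecs[:, ::-1]  # columns sorted by decreasing eigenvalue

# "Compress" by keeping only the k strongest coefficients per patch.
k = 8
coeffs = (patches - mean) @ kernel
coeffs[:, k:] = 0.0
recon = coeffs @ kernel.T + mean

mse = np.mean((recon - patches) ** 2)  # small: energy compacts into few bands
```

Because the kernel is learned from the image statistics themselves, most signal energy concentrates in the leading coefficients, which is what makes band-adaptive quantization of the remaining coefficients effective.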
9. Recurrent and Dynamic Models for Predicting Streaming Video Quality of Experience.
- Author
- Bampis CG, Li Z, Katsavounidis I, and Bovik AC
- Abstract
Streaming video services represent a very large fraction of global bandwidth consumption. Due to the exploding demands of mobile video streaming services, coupled with limited bandwidth availability, video streams are often transmitted through unreliable, low-bandwidth networks. This unavoidably leads to two types of major streaming-related impairments: compression artifacts and/or rebuffering events. In streaming video applications, the end-user is a human observer; hence being able to predict the subjective Quality of Experience (QoE) associated with streamed videos could lead to the creation of perceptually optimized resource allocation strategies driving higher quality video streaming services. We propose a variety of recurrent dynamic neural networks that conduct continuous-time subjective QoE prediction. By formulating the problem as one of time-series forecasting, we train a variety of recurrent neural networks and non-linear autoregressive models to predict QoE using several recently developed subjective QoE databases. These models combine multiple, diverse neural network inputs, such as predicted video quality scores, rebuffering measurements, and data related to memory and its effects on human behavioral responses, using them to predict QoE on video streams impaired by both compression artifacts and rebuffering events. Instead of finding a single time-series prediction model, we propose and evaluate ways of aggregating different models into a forecasting ensemble that delivers improved results with reduced forecasting variance. We also deploy appropriate new evaluation metrics for comparing time-series predictions in streaming applications. Our experimental results demonstrate improved prediction performance that approaches human performance. An implementation of this work can be found at https://github.com/christosbampis/NARX_QoE_release.
- Published
- 2018
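A nonlinear autoregressive model with exogenous inputs (NARX), one of the model families the abstract mentions, can be illustrated with a linear toy version that predicts the next QoE value from recent QoE history plus quality and rebuffering inputs (a hedged sketch on synthetic data; the paper's actual models are recurrent neural networks and NARX models trained on subjective QoE databases, and the feature choices here are assumptions):

```python
import numpy as np

def narx_forecast(qoe_hist, quality, rebuffer, order=2):
    """Fit a toy NARX-style predictor: next QoE as a linear function of the
    last `order` QoE values (autoregressive part) plus exogenous inputs
    (a per-time-step quality score and a rebuffering indicator)."""
    X, y = [], []
    for t in range(order, len(qoe_hist)):
        X.append(np.r_[qoe_hist[t - order:t], quality[t], rebuffer[t], 1.0])
        y.append(qoe_hist[t])
    X, y = np.array(X), np.array(y)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
    return float(X[-1] @ w), w                 # prediction for the last step

# Synthetic trace: QoE tracks video quality but drops during rebuffering.
rng = np.random.default_rng(2)
quality = 60 + 10 * np.sin(np.linspace(0, 6, 200))
rebuffer = (rng.random(200) < 0.05).astype(float)
qoe = 0.8 * quality - 20 * rebuffer + rng.standard_normal(200)
pred, w = narx_forecast(qoe, quality, rebuffer)
```

The ensemble idea in the abstract would then average forecasts from several such models (with different orders or nonlinearities) to reduce forecasting variance.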
10. Study of Temporal Effects on Subjective Video Quality of Experience.
- Author
- Bampis CG, Li Z, Moorthy AK, Katsavounidis I, Aaron A, and Bovik AC
- Abstract
HTTP adaptive streaming is being increasingly deployed by network content providers, such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE). We have recently created a new video quality database, which simulates a typical video streaming application, using long video sequences and interesting Netflix content. Going beyond previous efforts, the new database contains highly diverse and contemporary content, and it includes the subjective opinions of a sizable number of human subjects regarding the effects on QoE of both rebuffering and compression distortions. We observed that rebuffering is always obvious and unpleasant to subjects, while bitrate changes may be less obvious due to content-related dependencies. Transient bitrate drops were preferable over rebuffering only on low complexity video content, while consistently low bitrates were poorly tolerated. We evaluated different objective video quality assessment algorithms on our database and found that objective video quality models are unreliable for QoE prediction on videos suffering from both rebuffering events and bitrate changes. This implies the need for more general QoE models that take into account objective quality models, rebuffering-aware information, and memory. The publicly available video content as well as metadata for all of the videos in the new database can be found at http://live.ece.utexas.edu/research/LIVE_NFLXStudy/nflx_index.html.
- Published
- 2017
11. Low-Complexity Hand Gesture Recognition System for Continuous Streams of Digits and Letters.
- Author
- Poularakis S and Katsavounidis I
- Subjects
- Accelerometry, Humans, Sign Language, Algorithms, Gestures, Hand physiology, Pattern Recognition, Automated methods
- Abstract
In this paper, we propose a complete gesture recognition framework based on maximum cosine similarity and fast nearest neighbor (NN) techniques, which offers high recognition accuracy and great computational advantages for three fundamental problems of gesture recognition: 1) isolated recognition; 2) gesture verification; and 3) gesture spotting on continuous data streams. To support our arguments, we provide a thorough evaluation on three large publicly available databases, examining various scenarios, such as noisy environments, limited numbers of training examples, and time delay in the system's response. Our experimental results suggest that this simple NN-based approach is quite accurate for trajectory classification of digits and letters and could become a promising approach for implementations on low-power embedded systems.
- Published
- 2016
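The maximum-cosine-similarity nearest-neighbor rule at the heart of the framework can be sketched directly (a minimal NumPy illustration; the template vectors, labels, and the 4-bin direction-histogram features are hypothetical stand-ins for the paper's trajectory features):

```python
import numpy as np

def cosine_nn_classify(query, templates, labels):
    """Classify a trajectory feature vector by maximum cosine similarity
    against a bank of labeled template vectors (1-NN)."""
    q = query / np.linalg.norm(query)
    t = templates / np.linalg.norm(templates, axis=1, keepdims=True)
    sims = t @ q                          # cosine similarity to every template
    best = int(np.argmax(sims))
    return labels[best], float(sims[best])

# Hypothetical toy templates: direction histograms for two gestures.
templates = np.array([[1.0, 0.1, 0.0, 0.0],   # "swipe right"
                      [0.0, 0.0, 1.0, 0.1]])  # "swipe down"
labels = ["right", "down"]
label, score = cosine_nn_classify(np.array([0.9, 0.2, 0.05, 0.0]),
                                  templates, labels)
```

Because the comparison reduces to normalized dot products, the classifier needs only multiply-accumulate operations, which is what makes the approach attractive for low-power embedded systems; verification and spotting can reuse the same similarity score against a threshold.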