339 results on '"Thin Client"'
Search Results
2. Application of the Method of Problem Learning in the Study of the Discipline 'Information Security'
- Author
-
V. A. Sizov, D. M. Malinichev, Kh. K. Kuchmezov, and V. V. Mochalov
- Subjects
Service (systems architecture), teaching methodology, Process (engineering), Computer science, Teaching method, Information security, threats to information security, Computer security, Special aspects of education, Information protection policy, Thin client, state information systems, terminal access devices, problem learning method, Set (psychology), Competence (human resources)
- Abstract
The purpose of this article is to develop students’ critical thinking for solving problems in the field of information security by using the method of problem learning in teaching the discipline “Information Security”. The role of this method in developing students’ critical thinking and research creativity, and in helping them reach a better understanding of educational material in the field of information security, is emphasized. Materials and research methods. The main conditions for the effectiveness of problem learning in the study of the discipline “Information Security” are identified through an analysis of the subject area: motivation of students; the feasibility and significance of the problem situations offered to students on various aspects of information security; and dialogical, friendly communication between lecturer and students. As research material, an example of using the method of problem learning to solve the task of information protection in state information systems with terminal access devices is considered. The example presents the problem of increasing the efficiency of information protection in state information systems with terminal access devices, i.e. state information systems using the “thin client” architecture, as well as a way to solve it by assessing threats and improving the relevant information security mechanisms set out in the regulatory documents governing the requirements for information protection in such systems. Results.
The paper considers the practical task of creating and resolving a problem situation concerning the protection of information in state information systems with terminal access devices, which can be used in the educational process to solve similar tasks by the method of problem learning. The creation of a problem situation is based on existing contradictions in the regulations governing the functioning and information protection of this type of system, in which the protected information is processed in order to comply with legislation and ensure the functioning of public authorities. Using a systematic approach, which treats the information protection process as a sequence of stages in forming requirements for state information systems built on the “thin client” architecture and in improving the regulatory framework, the trainees formulate proposals for protecting information in such systems so as to ensure their security by design, taking into account the current set of threats to information security. Solving the problem situation in the considered task requires general cultural competencies from the trainees, such as identifying contradictions, juxtaposing opposing points of view, comparing facts, considering the problem from different perspectives, generalizing, concretizing facts, etc. Conclusions. Thus, the paper substantiates the method of problem learning in the study of the discipline “Information Security” and presents an example of its use in solving the problem of information protection in state information systems with terminal access devices.
As a result, the trainees must identify threats that are absent in the information security threat databank of the Federal Service for Technical and Export Control of the Russian Federation (FSTEC of Russia) and determine the directions for further development of information security and information protection in state information systems with terminal access devices. The practical solution of this problem by a group of students within the framework of the study of the discipline “Information Security” showed a high level of competence development.
- Published
- 2021
3. Distributed Internet voting architecture: A thin client approach to Internet voting
- Author
-
Jim E Helm
- Subjects
Biometrics, Computer science, Electronic voting, Strategy and Management, Networking & telecommunications, Library and Information Sciences, Computer security, Merkle tree, Thin client, Voting, The Internet, Smart card, Architecture, Information Systems
- Abstract
Principles required for secure electronic voting over the Internet are known and published. Although Internet voting functionalities and technologies are well defined, none of the existing state-sponsored Internet voting approaches in use incorporates a total Internet-based system approach that includes voter registration, the voting process, and vote counting. The distributed Internet voting architecture concept discussed in this article uses a novel thin client approach to Internet voting. The architecture uses existing technologies and knowledge to create a viable whole-system approach to Internet voting. This article describes the various aspects and processes necessary to support an integrated approach. The application programming interface software for many of the critical functions was developed in Python and functionally tested. A virtual network, including cloud-based functionality, was created and used to evaluate the various conceptual aspects of the proposed architecture. This included the concepts associated with programming and accessing smart cards, capturing and saving fingerprint data, structuring virtual private networks using tunneling and Internet Protocol Security, encrypting ballots using asymmetric encryption, using symmetric encryption for secret cookies, thin client interaction, and creating hash functions to be used within a blockchain structure in a Merkle tree architecture. The system’s primary user targets are individuals located remotely from their home voting precincts and senior citizens who have limited mobility and mostly reside in assisted living facilities. The research supports the contention that existing technology can be used to create a cybersecure Internet voting system that significantly reduces the opportunity for mail-in voter fraud and helps to ensure privacy for the voter, including nonrepudiation, nonattribution, receipt freeness, and vote acknowledgment.
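The abstract mentions hash functions arranged within a blockchain structure in a Merkle tree architecture. As an illustration only (the article's actual construction is not reproduced here), a minimal Merkle-root sketch over hashed ballots, where an odd leftover node is promoted unchanged:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash each leaf, then repeatedly hash adjacent pairs until one root remains."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(sha256(level[i] + level[i + 1]))
            else:
                nxt.append(level[i])  # odd node carried up unchanged
        level = nxt
    return level[0]
```

Changing any single ballot changes the root, which is what lets a verifier detect tampering with an individual vote without rehashing the whole set along every path.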
- Published
- 2021
- Full Text
- View/download PDF
4. Implementation of a Compatibility Layer in a Diskless Server Network Based on Lubuntu 18.04 LTS
- Author
-
Ade Silvia Handayani, Ibnu ziad, and Farid Jatri Abiyyu
- Subjects
Source code, Computer science, Secure Shell, Terminal server, Thin client, Packet loss, Computer cluster, Cross-platform, Operating system, Jitter
- Abstract
A diskless server is a cluster computer network that uses the SSH (Secure Shell) protocol to grant the client access to the host's directory and modify its content, so that the client does not need a hard disk (thin client). One way to build a diskless server is to use the Linux Terminal Server Project (LTSP), an open source-based script for Linux. However, using Linux has its own drawback: it cannot natively run the Windows-based applications that are commonly used. This drawback can be overcome by using a compatibility layer that converts a Windows-based application's code. The data monitored are the result of the compatibility layer implementation, together with throughput, packet loss, delay, and jitter. Measurement of these four parameters yielded ratings of "Excellent" for throughput, "Perfect" for packet loss and delay, and "Good" for jitter.
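The four monitored parameters can all be derived from per-packet send/receive timestamps. A minimal sketch of that derivation; the function name, dictionary layout, and the choice of jitter as the mean absolute difference of successive one-way delays are illustrative assumptions, not definitions from the paper (which also does not reproduce the thresholds behind its "Excellent"/"Perfect"/"Good" ratings):

```python
def qos_metrics(sent, received, payload_bytes):
    """sent/received: dicts mapping packet sequence number -> timestamp (seconds).
    Returns packet loss (%), mean one-way delay (s), jitter as the mean
    absolute change in delay (s), and throughput over the receive window (bit/s)."""
    delays = [received[s] - sent[s] for s in sorted(received)]
    loss_pct = 100.0 * (len(sent) - len(received)) / len(sent)
    mean_delay = sum(delays) / len(delays)
    diffs = [abs(b - a) for a, b in zip(delays, delays[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    span = max(received.values()) - min(received.values())
    throughput_bps = (len(received) * payload_bytes * 8) / span if span else 0.0
    return {"loss_pct": loss_pct, "delay": mean_delay,
            "jitter": jitter, "throughput_bps": throughput_bps}
```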
- Published
- 2020
- Full Text
- View/download PDF
5. Improvement of the Regulatory Framework of Information Security for Terminal Access Devices of the State Information System
- Author
-
V. A. Sizov, V. V. Mochalov, and D. M. Malinichev
- Subjects
Service (systems architecture), Computer science, Information security, Computer security, Special aspects of education, state information system, Information protection policy, File server, secure information systems, Thin client, Information security management, Confidentiality, State (computer science), data processing center
- Abstract
The aim of the study is to increase the effectiveness of information security management for state information systems (SIS) with terminal access devices by improving regulatory legal acts, which should be logically interconnected, free of mutual contradictions, and based on a single professional thesaurus that makes it possible to understand and describe information security processes. Currently, state information systems with terminal access devices are used to ensure the realization of the legitimate interests of citizens in information interaction with public authorities [1]. One type of such systems is public systems [2]. They are designed to provide electronic services to citizens, such as paying taxes, obtaining certificates, and filing applications. The personal data processed may belong to special, biometric, publicly available, and other categories [3]. Various categories of personal data, concentrated in large volumes about a large number of citizens, can lead to significant damage as a result of their leakage, which means that this creates information risks. There are several basic types of state information system architectures: systems based on the “thin client”; peer-to-peer network systems; file server systems; data processing centers; systems with remote user access; systems using different types of operating systems (heterogeneous environments); systems using applications independent of operating systems; and systems using dedicated communication channels [4]. Such diversity and heterogeneity of state information systems, on the one hand, and the need for high-quality state regulation of information security in these systems, on the other, require the study and development of legal acts that take into account, first of all, the features of systems with the typical modern “thin client” architecture. Materials and research methods.
The protection of the state information system is regulated by a large number of legal acts that are constantly being improved through changes and additions to their content. At the substantive level, it includes many stages, such as the formation of SIS requirements, the development of a security system, its implementation, and certification. The protected information is processed in order to enforce the law and ensure the functioning of the authorities. The need to protect confidential information is established by the legislation of the Russian Federation [5, 6]. Therefore, to assess the quality of the regulatory framework of information security for terminal access devices of the state information system, the main regulatory legal acts are analyzed and, on that basis, proposals are developed, by analogy, to improve the existing regulatory documents in the field of information security. Results. The paper develops proposals for improving the regulatory framework of information security for terminal access devices of the state information system: for uniformity and unification, terms with corresponding definitions are justified for establishment in the documents of the Federal Service for Technical and Export Control (FSTEC) or Rosstandart; and rules are proposed for forming requirements for terminals that are equivalent to the requirements for computer equipment in the “Concept for the protection of computer equipment and automated systems from unauthorized access to information”. Conclusion. General recommendations on information protection in state information systems using the “thin client” architecture are proposed, specific threats that are absent from the FSTEC threat bank are substantiated, and directions for the further development of information security for the class of state information systems under consideration are identified.
Due to the large number of stakeholders involved in the coordination and development of unified solutions, a more specific consideration of the problems and issues raised is possible only with the participation of representatives of authorized federal executive bodies and business representatives for discussion.
- Published
- 2020
- Full Text
- View/download PDF
6. THIN CLIENT IN MASSIVE RLS WITH CLOUD APPLICATION
- Author
-
Pavel Beňo and František Schauer
- Subjects
Thin client, Computer science, Operating system, Cloud computing, General Medicine
- Published
- 2020
- Full Text
- View/download PDF
7. Analysis of server-side and client-side Web-GIS data processing methods on the example of JTS and JSTS using open data from OSM and geoportal
- Author
-
Marcin Kulawiak, Agnieszka Dawidowicz, and Marek Emanuel Pacholczyk
- Subjects
Geographic information system, Computer science, Client-side, Open data, Software, Thin client, Plug-in, Computers in Earth Sciences, Server-side, Geoportal, Information Systems
- Abstract
The last decade has seen a rapid evolution of the processing, analysis, and visualization of freely available geographic data using Open Source Web-GIS. In the beginning, Web-based Geographic Information Systems employed a thick-client approach that required the installation of platform-specific browser plugins. Later on, research focus shifted to platform-independent thin client solutions in which data processing and analysis were performed by the server machine. More recently, however, the rapid development of computer hardware as well as software technologies such as HTML5 has enabled the creation of platform-independent thick clients that offer advanced GIS functionalities such as geoprocessing. This article aims to analyse the current state of Open Source technologies and publicly available geographic data sources in the context of creating cost-effective Web-GIS applications for the integration and processing of spatial data. For this purpose the article discusses the availability and potential of Web-GIS architectures, software libraries, and data sources. The analysis of freely available data sources includes a discussion of the quality and accuracy of crowd-sourced as well as public sector data, while the investigation of software libraries and architectures involves a comparison of server-side and client-side data processing performance under a set of real-world scenarios. The article concludes with a discussion of the choice of cost-effective Web-GIS architectures, software libraries, and data sources in the context of the institution and environment of system deployment.
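What makes the server-side (JTS, Java) versus client-side (JSTS, JavaScript) comparison meaningful is that both libraries expose the same spatial predicates, so the same operation can be timed in either tier. For illustration, a pure-Python ray-casting sketch of one such predicate, point-in-polygon; this is the generic algorithm, not code from the study:

```python
def point_in_polygon(pt, ring):
    """Ray-casting test: cast a horizontal ray to the right of pt and count
    how many polygon edges it crosses; an odd count means the point is inside.
    ring is a list of (x, y) vertices of a simple closed polygon."""
    x, y = pt
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y-level
            # x-coordinate where this edge crosses the horizontal line at y
            xc = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xc:
                inside = not inside
    return inside
```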
- Published
- 2019
- Full Text
- View/download PDF
8. DESIGN AND IMPLEMENTATION OF INTELLIGENT COMMUNITY SYSTEM BASED ON THIN CLIENT AND CLOUD COMPUTING
- Author
-
Dongfeng Yuan, Weitao Xu, and Liangfei Xue
- Subjects
Intelligent Community System, Thin client, Cloud Computing, Virtualization, Distributed File System, Computer science, Mobile cloud computing, Shared resource, Virtual machine, Server, Information system, Mobile device, Computer network, Software Engineering (cs.SE), Computers and Society (cs.CY)
- Abstract
With the continuous development of science and technology, the intelligent development of community systems has become a trend. Meanwhile, smart mobile devices and cloud computing technology are increasingly used in intelligent information systems; however, smart mobile devices such as smartphones and smart pads, also known as thin clients, limited by either their capacities (CPU, memory, or battery) or their network resources, do not always meet users' expectations of mobile services. Mobile cloud computing, in which resource-rich virtual machines backing a smart mobile device are provided to the customer as a service, can be a terrific solution for overcoming the limitations of real smart mobile devices, but resource utilization is low and information cannot be shared easily. To address the problems above, this paper proposes an information system for an intelligent community, composed of thin clients, a wideband network, and cloud computing servers. On the one hand, thin clients, with their energy efficiency, high robustness, and high computing capacity, can efficiently avoid the problems encountered with the PC architecture and mobile devices. On the other hand, the cloud computing servers in the proposed information system solve the problem of resource-sharing barriers. Finally, the system is built in a real environment to evaluate its performance. We deploy the proposed system in a community with more than 2000 residents, and it is demonstrated that the proposed system is robust and efficient.
- Published
- 2021
- Full Text
- View/download PDF
9. Development of SOA-based WebGIS framework for education sector
- Author
-
Sonam Agrawal and R. D. Gupta
- Subjects
Geographic information system, Geospatial analysis, Computer science, Service delivery framework, Interoperability, Data sharing, Resource (project management), Thin client, Open standard, General Earth and Planetary Sciences, General Environmental Science
- Abstract
The applications of Geographic Information Systems (GIS) in the education sector are increasing day by day. Geospatial information can be published, discovered, searched, analyzed, and displayed through webGIS-based applications. The lack of an open source geospatial resource-based platform for data sharing, discovery, and service delivery in the education sector is a critical issue in managing the education of the large population of India. The use of open standards for geospatial web services, developed by the Open Geospatial Consortium (OGC), results in the interoperability of geographic information. In this paper, an interoperable and secure service-oriented architecture (SOA)-based webGIS framework is developed to handle the technical and non-technical issues in the education sector. In this research work, spatial analysis of schools is performed along with the design and development of the webGIS framework using SOA, OGC standards, and open source software. The developed webGIS framework, named EduGIS, is interoperable and secure and is implemented for the education sector. The development of the webGIS framework is based upon a three-tier thin client architecture. The present research work has investigated an optimized adoption of various free and open source software packages, such as Quantum GIS, GeoServer, Apache Tomcat, PostGIS, and uDig, in the different tiers of the developed webGIS framework. The interoperability of the developed EduGIS ensures that it can be shared across different technologies, data, platforms, and organizations. The development of an open source-based webGIS framework will serve as a means of reducing licensing costs in developing countries like India and will promote indigenous technological development for primary education in rural areas.
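In a three-tier thin client setup like the one described, the browser tier typically requests map images from GeoServer through OGC-standard calls such as WMS GetMap. A sketch of how such an interoperable request is assembled; the endpoint and layer name below are hypothetical placeholders, not EduGIS identifiers:

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, size=(512, 512)):
    """Build an OGC WMS 1.3.0 GetMap URL for a PNG map tile.
    bbox is (min_x, min_y, max_x, max_y) in the chosen CRS."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",              # server default style
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

# Hypothetical GeoServer endpoint and layer name, for illustration only
url = wms_getmap_url("http://example.org/geoserver/wms",
                     "edu:schools", (8.0, 76.0, 9.0, 77.0))
```

Because the parameters are defined by the OGC specification rather than by any one product, the same request works against any conformant server, which is the interoperability the abstract refers to.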
- Published
- 2020
- Full Text
- View/download PDF
10. A thin client friendly trusted execution framework for infrastructure-as-a-service clouds
- Author
-
Masoom Alam, Imran Khan, Habib ur Rehman, Mohammad Alkhatib, and Zahid Anwar
- Subjects
Distributed Computing Environment, Computer Networks and Communications, Computer science, Cloud computing, Trusted Computing, Client-side, Trusted third party, Computer security, Thin client, Hardware and Architecture, Scalability, Mobile device, Software
- Abstract
Individuals and businesses are moving to cloud-based services to benefit from their pay-as-you-go and elastic scalability features. The main barrier to wide adoption of cloud-based services is the lack of protection of clients’ data and computation from various outsider as well as insider attacks, which threaten to compromise client data confidentiality and integrity. Trusted computing provides a foundation for designing security services that are resilient to various threats and attacks in a distributed environment such as the cloud. Current trusted computing based solutions are ill-suited to the cloud, as they inadvertently disclose too many details about the underlying infrastructure to clients and at the same time involve the complex task of attestation and verification on the client side. Additionally, direct verification of the security properties of the cloud platform by each and every client introduces computational bottlenecks. In this work, we propose a scalable framework that enables verification of the properties of the cloud platform through a trusted third party, without the direct involvement of the client. Our proposed framework is thin client (mobile device) friendly, as the client is relieved of the direct attestation and verification process. Performance analysis shows that the cost of the presented approach is lower by an order of magnitude compared with traditional trusted computing based solutions.
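The key idea, that the thin client checks a compact verdict issued by a trusted third party instead of parsing attestation quotes itself, can be sketched as follows. This toy uses an HMAC with a shared demo key as a stand-in for a signature, and every name in it is hypothetical; a real deployment would use asymmetric signatures so that clients hold only the TTP's public key:

```python
import hashlib
import hmac
import json

# Stand-in for the trusted third party's signing key (demo value only).
TTP_KEY = b"ttp-demo-key"

def ttp_issue_verdict(platform_id: str, measurements_ok: bool) -> dict:
    """The TTP attests the cloud platform once, then signs a small verdict."""
    claim = json.dumps({"platform": platform_id, "trusted": measurements_ok},
                       sort_keys=True)
    tag = hmac.new(TTP_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def client_check(verdict: dict) -> bool:
    """The thin client recomputes one MAC -- no quote parsing, no PCR logs."""
    expected = hmac.new(TTP_KEY, verdict["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, verdict["tag"])
            and json.loads(verdict["claim"])["trusted"])
```

The client-side work is constant regardless of platform complexity, which is where the claimed order-of-magnitude saving over per-client attestation would come from.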
- Published
- 2018
- Full Text
- View/download PDF
11. Repurposing end of life notebook computers from consumer WEEE as thin client computers – A hybrid end of life strategy for the Circular Economy in electronics
- Author
-
Colin Fitzpatrick, M. Molly McMahon, and Damian Coughlan
- Subjects
Renewable Energy, Sustainability and the Environment, Computer science, Strategy and Management, Benchmarking, Reuse, Industrial and Manufacturing Engineering, Test (assessment), Product (business), Identification (information), Thin client, Electronics, Repurposing, General Environmental Science
- Abstract
This paper presents an investigation into the feasibility of repurposing end-of-life notebook computers as thin client computers. Repurposing is the identification of a new use for a product that can no longer be used in its original form; it has the potential to become a hybrid reuse/recycling end-of-life strategy for suitable e-waste when direct reuse is not economically or technically feasible. In this instance, the target was to produce thin client computers using the motherboards, processors, and memory from used laptops while recycling all other components. Notebook computers are of interest for this type of strategy because they have a substantial environmental impact in manufacturing but often lack the option of direct reuse, as they are prone to damage and experience a rapid loss of value over time. They also contain multiple critical raw materials with very low recycling rates. The notebook computers were sourced from Civic Amenity (CA) sites and originated from business-to-consumer (B2C) channels. A total of 246 notebook computers were collected and analysed. The paper outlines a methodology developed to identify, test, analyse, and disassemble suitable devices for repurposing. The methodology consists of the following stages, with associated pass rates: a Visual Inspection & Power-on Test (32%); an initial-stage functionality test comprising a Functionality Test (56%), Diagnostics (100%), and Benchmarking (86%); Disassembly (100%); a Post-Disassembly Validation Test (86%); and a final-stage functionality test (61%). The overall results show that 9% of the notebook computers were suitable for repurposing as thin client computers.
It recommends the following design changes to notebooks/laptops that would support repurposing: 1) a PCB-mounted fan and heatsink assembly; 2) elimination of daughter and I/O boards; 3) a separate power-button assembly; and 4) a reduction in the surface area or physical size of the motherboard. These design changes would allow a more efficient transition to a new role. A streamlined lifecycle analysis based on Cumulative Energy Demand (CED) was undertaken to compare the impact of repurposed notebook computers with that of new thin client computers. The results indicated that significant savings can be made by extending lifetimes and offsetting the production of new thin client computers, under a range of assumptions.
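Chaining the reported stage pass rates multiplicatively (an assumption about how the sequential stages compose, since the paper reports per-stage figures) roughly reproduces the overall yield, about 8% against the reported 9%, with the gap plausibly down to rounding of the individual stage rates:

```python
# Stage pass rates as reported for the 246 collected notebooks
stages = {
    "visual inspection & power-on": 0.32,
    "functionality test": 0.56,
    "diagnostics": 1.00,
    "benchmarking": 0.86,
    "disassembly": 1.00,
    "post-disassembly validation": 0.86,
    "final functionality test": 0.61,
}

# Overall yield = product of the per-stage survival rates
yield_fraction = 1.0
for rate in stages.values():
    yield_fraction *= rate

units = round(246 * yield_fraction)  # estimated repurposable notebooks
```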
- Published
- 2018
- Full Text
- View/download PDF
12. Implementing Machine Learning on Edge Devices with Limited Working Memory
- Author
-
Saksham Jhawar, B. S. Anisha, P. Ramakanth Kumar, and A. Harish
- Subjects
Edge device, Computer science, Working memory, Computation, Process (computing), Machine learning, Thin client, Immediacy, Artificial intelligence, Architecture
- Abstract
The architecture is aimed at pushing computing towards the edge. When most of the computation occurs at the edge device where the data is generated, processing becomes faster and more efficient. This reduces the user’s wait time and delivers results sooner. Machine learning techniques are implemented in the edge devices. While one can process data at the sensor, what one can do is limited by the processing power available on each IoT device. Data is at the heart of an IoT architecture, and one needs to choose between immediacy and depth of insight when processing that data. The more immediate the need for information, the closer to the end devices the processing needs to be. We propose an architecture for running machine learning algorithms within the limited memory of the edge device.
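One common way to fit inference into limited working memory, consistent with the abstract's goal though not spelled out in it, is to quantize model weights to 8-bit integers and accumulate in integer arithmetic, dequantizing only once at the end. A toy sketch with hypothetical weights (the model and scale are illustrative, not from the paper):

```python
# Hypothetical 8-bit quantized linear classifier: each weight costs
# 1 byte instead of a 4-byte float, and inference needs no float buffers.
SCALE = 0.05       # dequantization step fixed when the model was quantized
W_Q = [12, -7, 3]  # int8 weights
B_Q = 4            # int8 bias

def predict(features):
    """features: integer sensor readings. Returns the class label 0 or 1."""
    acc = B_Q                      # integer accumulator
    for w, x in zip(W_Q, features):
        acc += w * x
    return 1 if acc * SCALE > 0 else 0  # dequantize once, then threshold
```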
- Published
- 2020
- Full Text
- View/download PDF
13. Measuring Key Quality Indicators in Cloud Gaming: Framework and Assessment Over Wireless Networks
- Author
-
Carlos Baena, Raquel Barco, Oswaldo Sebastian Peñaherrera-Pulla, Sergio Fortes, and Eduardo Baena
- Subjects
wireless networks, service architecture, Computer science, Cloud gaming, Cloud computing, Quality of experience, Rendering (computer graphics), service performance, Graphics, Video game, Wireless network, Testbed, key quality indicators, Service-oriented architecture, Computer network
- Abstract
Cloud Gaming is a cutting-edge paradigm in video game provision where the graphics rendering and logic are computed in the cloud. This allows a user’s thin client system, with much more limited capabilities, to offer an experience comparable to traditional local and online gaming while requiring reduced hardware. In exchange, this approach stresses the communication networks between the client and the cloud. In this context, it is necessary to know how to configure the network in order to provide the service with the best quality. To that end, the present work defines a novel framework for Cloud Gaming performance evaluation. This system is implemented in a real testbed and evaluates the Cloud Gaming approach for different transport networks (Ethernet, WiFi, and LTE (Long Term Evolution)) and scenarios, automating the acquisition of the gaming metrics. From this, the impact on the overall gaming experience is analyzed, identifying the main parameters involved in its performance. Hence, future lines for QoE-based (Quality of Experience) optimization of Cloud Gaming are established for new-generation networks, such as 4G and 5G (the Fourth and Fifth Generations of Mobile Networks).
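Typical cloud-gaming key quality indicators, such as frame rate, frame-time jitter, and input-to-photon latency, can be computed from timestamps captured on the client. The helper below is an illustrative sketch of that computation, not the framework's actual metric definitions:

```python
def gaming_kqis(frame_times, input_to_photon_ms):
    """frame_times: display timestamps (s) of successive frames.
    input_to_photon_ms: measured per-action latencies (ms).
    Returns (fps, frame-time jitter in ms, mean latency in ms)."""
    intervals = [b - a for a, b in zip(frame_times, frame_times[1:])]
    mean_interval = sum(intervals) / len(intervals)
    fps = 1.0 / mean_interval
    # Jitter as mean absolute deviation of frame intervals from their mean
    jitter_ms = 1000.0 * sum(abs(i - mean_interval) for i in intervals) / len(intervals)
    latency_ms = sum(input_to_photon_ms) / len(input_to_photon_ms)
    return fps, jitter_ms, latency_ms
```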
- Published
- 2021
- Full Text
- View/download PDF
14. Use of virtualized IT infrastructure in the normal operation of automation systems for technological objects
- Subjects
Engineering, RAID, Hypervisor, Process automation system, Virtualization, Thin client, Windows Server, Virtual machine, Backup, Embedded system, Operating system
- Abstract
Taking into account global trends and the experience of implementing modern information technologies in production processes, with the aim of updating and increasing the competitiveness of Ukrainian industrial complexes, the issues and methods of using hardware, software, and technological solutions in the field of virtualization are considered. The main research method is computer simulation: the simulation of real automation systems (including the server component) using virtualization tools (Microsoft Hyper-V). The essence of the method is the creation of a virtual environment (infrastructure), including a primary and a backup server with the process control system and workstations. The virtual machines of the automation systems fully match their physical analogues in their characteristics. Ways of using traditional automation systems deployed on the Hyper-V virtualization platform are considered. The main way of using traditional automation software is to deploy it on a server operating system that supports one of the many virtualization technologies, such as MS Hyper-V, VMware vSphere, Citrix XenServer, and others. The feasibility of practical operation of automation systems on the basis of a virtualized hardware and software server complex, with thin clients as workstations, is demonstrated for the Experion PKS system and the Honeywell C200 controller. The process control system is deployed in a virtualized environment on the basis of server (Windows Server 2012 R2) and desktop (Windows 10) operating systems. The possible positive effect of implementing a modern IT infrastructure for technological objects is also analyzed.
It lays in the fact of theoretically increase of fault tolerance level, practical simplification of system administration, and creation of bank for backup of virtual machines.This result is associated with a more rational and efficient use of capabilities of modern computer systems (CPU and RAM), data storage systems (using of RAID hard drives) and software.
- Published
- 2017
- Full Text
- View/download PDF
15. Evaluation of Different Thin-Client Based Virtual Classroom Architectures
- Author
-
Jozsef Domokos, Konrád József Kiss, and Örs Darabont
- Subjects
Multimedia ,Thin client ,Computer science ,business.industry ,Information technology ,General Medicine ,computer.software_genre ,business ,Energy engineering ,computer ,Virtual classroom - Abstract
This paper presents an evaluation of different methods used to deliver virtual machines capable of being accessed remotely by thin clients. The objective of the research was to provide a recommendation for building a cost-effective computer infrastructure for use in two scenarios: as a programming lab, and as an office infrastructure. We have found that thin-client solutions based on single-board computers are a reliable replacement for commercially available thin clients, because they can run free Linux-based operating systems, can handle the Remote Desktop Protocol, have lower acquisition costs and lower power consumption, and offer almost the same computing performance. For providing remote desktops, several methods and virtualization platforms are available. We benchmarked some of these platforms in order to choose the one best suited for implementation. Our conclusion is that Microsoft Remote Desktop Services outperforms the virtualization-based solutions, but it entails high license fees. Of the virtualization solutions tested, the VMware ESXi-based one is the most reliable choice.
- Published
- 2016
- Full Text
- View/download PDF
16. Power-saving control framework for cloud services based on set-top box
- Author
-
Eui-Suk Jung, Yong-Tae Lee, Hyun-Woo Lee, Eunjung Kwon, and Hyunho Park
- Subjects
Service (business) ,Engineering ,Database ,business.industry ,020206 networking & telecommunications ,Cloud computing ,02 engineering and technology ,Service provider ,computer.software_genre ,020202 computer hardware & architecture ,Energy conservation ,Thin client ,Return on investment ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Operating system ,Electrical and Electronic Engineering ,Standby power ,business ,computer ,Efficient energy use - Abstract
Recently, as regulations on energy conservation have been increasingly emphasized, home appliance manufacturers have made competitive efforts to improve the energy efficiency of their products. However, there has been no correlation analysis between energy-efficient home appliances and cloud technologies. While cloud services have been rapidly adopted in many service areas to guarantee a high return on investment (ROI) for service providers, previous studies on power saving have either aimed to reduce the standby power consumption of home appliances, such as a set-top box (STB) that can act as a thin client for cloud services, or have only modeled the power of a virtual machine (VM) on the cloud side without considering the power-consumption states of its corresponding thin clients. This paper therefore proposes a power-saving control framework (PSCF) that keeps power consumption low on both the cloud side and the STB while providing cloud services. The experimental results show that the proposed framework outperforms, in terms of power efficiency, a setup in which no power-saving method is applied to the cloud services.
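The coordinated client/cloud power control described above can be sketched as a toy two-state machine; the class, states, and trigger are illustrative assumptions, not the paper's actual PSCF design:

```python
# Toy model of PSCF-style coordinated power control: the set-top box (STB)
# and its cloud-side virtual machine change power state together.
# Names and the two-state model are illustrative, not from the paper.

class PowerController:
    def __init__(self):
        self.stb_state = "active"
        self.vm_state = "active"

    def on_session_change(self, active_sessions: int) -> None:
        # No active cloud sessions: both sides drop to standby.
        # Any active session: both sides wake up again.
        target = "active" if active_sessions > 0 else "standby"
        self.stb_state = target
        self.vm_state = target

ctl = PowerController()
ctl.on_session_change(0)
print(ctl.stb_state, ctl.vm_state)  # standby standby
```

The point of the sketch is the coupling: the VM is never left running hot while its thin client sleeps, and vice versa.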
- Published
- 2016
- Full Text
- View/download PDF
17. THIN CLIENT FOR REAL-TIME MONITORING OF COMMUNICATION INFRASTRUCTURE
- Author
-
Titus Bălan, Iulian Iliescu, Oana Garoiu, and Sorin Zamfir
- Subjects
Representational state transfer ,business.product_category ,computer.internet_protocol ,business.industry ,Computer science ,Core network ,computer.software_genre ,Computer security ,Data access ,Thin client ,Internet access ,The Internet ,Web service ,business ,computer ,Mobile device ,Computer network - Abstract
Monitoring the telecom infrastructure reflects not only a technical perspective but also a business perspective, highlighting, besides possible technical issues, the network areas that need improvements or investments. This paper describes a method for tele-monitoring of the communication infrastructure responsible for switching and signaling in the mobile converged core network. The application displays real-time information on a thin client; thus, decisions related to the infrastructure (resource management, optimizations, redeployments) can be taken based on notifications received on a mobile device (e.g. an Android smartphone). Network operators are dealing with the challenge of reducing operational expenses (OPEX) and optimizing the functionality of distributed infrastructure, with fast response times and methods to predict and avoid network issues through automatic maintenance solutions. The purpose of the implementations presented in this paper is to obtain a remote monitoring solution that provides mobility and has an interface accessible to any user. Users have access to data wherever they are, as long as an Internet connection is available. The scope of the implemented demonstrator is an intuitive system that is easy to set up and use, open to further development, and offers a very good performance/price ratio. The user can view graphs that highlight errors, analyze detailed logs, and perform real-time monitoring of the activities of the system responsible for backup actions. Data communication between the server and the Android application is achieved through REST ("Representational State Transfer") Web services (1). The multiplicity of standards and protocols used on the Internet has made it possible for two or more systems connected to the Internet, at a distance from one another, to communicate.
In industry software development, architectural models based on Web services appear increasingly often, offering numerous advantages, especially when used together with a DAO ("Data Access Objects") (2) architecture, which brings innovation in the interpretation of data and the modeling of data objects. This concept is detailed later, with examples related to the demonstrations.
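As a rough illustration of the REST-based data path, the following sketch parses a hypothetical JSON status report from the server and decides whether a notification should be pushed to the mobile thin client; the payload fields and the threshold are invented for the example, not taken from the demonstrator:

```python
import json

# Illustrative only: payload fields and threshold are assumptions,
# not the demonstrator's actual REST API.
def needs_alert(payload: str, max_error_rate: float = 0.05) -> bool:
    """Decide whether a monitoring notification should be pushed to the
    Android thin client, based on a JSON status report from the server."""
    report = json.loads(payload)
    errors = report.get("errors", 0)
    total = max(report.get("requests", 1), 1)  # guard against division by zero
    return errors / total > max_error_rate

sample = json.dumps({"node": "mss-01", "requests": 1000, "errors": 87})
print(needs_alert(sample))  # True (8.7% error rate)
```

A real client would obtain the payload from an HTTP GET against the monitoring service and render the result as a notification or graph.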
- Published
- 2016
- Full Text
- View/download PDF
18. Authentication algorithm for participants of information interoperability in process of operating system remote loading on thin client
- Author
-
Yu.A. Gatchin and O. A. Teploukhova
- Subjects
Computer science ,Email authentication ,computer.software_genre ,lcsh:QA75.5-76.95 ,thin client ,operating system ,Lightweight Extensible Authentication Protocol ,lcsh:QC350-467 ,Data Authentication Algorithm ,public key infrastructure ,Authentication ,business.industry ,Mechanical Engineering ,Multi-factor authentication ,Atomic and Molecular Physics, and Optics ,Chip Authentication Program ,trusted loading module ,Computer Science Applications ,Electronic, Optical and Magnetic Materials ,message authentication code ,Thin client ,digital signature ,Authentication protocol ,Operating system ,authentication ,lcsh:Electronic computers. Computer science ,business ,computer ,lcsh:Optics. Light ,Information Systems ,Computer network - Abstract
Subject of Research. This paper presents a solution to the authentication problem for all participants of information interoperability in the process of operating system network loading on a thin client from a terminal server. System Definition. In the proposed solution, the operating system integrity check is performed by a hardware-software module, including a USB token with protected memory for secure storage of cryptographic keys and a loader. The key requirement for the solution is mutual authentication of four participants: the terminal server, the thin client, the token and the user. We have created two algorithms to solve the problem. The first algorithm compares an encrypted one-time password (a random number) with the reference value stored in the memory of the token and updates this number after successful authentication. The second algorithm uses the public and private keys of the token and the server. As a result of the cryptographic transformations, the participants are authenticated and a secure channel is formed between the token, the thin client and the terminal server. Main Results. Additional research was carried out to find out whether the designed algorithms meet the necessary requirements. The criteria used included applicability in a multi-access terminal system architecture, evaluation of potential threats, and overall system security. According to the analysis results, it is recommended to use the algorithm based on PKI due to its high scalability and usability. The high level of data security is ensured by the use of asymmetric cryptography, with the guarantee that participants' private keys are never transmitted during the authentication process. Practical Relevance. The designed PKI-based algorithm solves the problem with state-standard cryptographic algorithms even in the absence of a state standard for asymmetric cryptography. Thus, it can be applied in State Information Systems with increased information security requirements.
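The first algorithm (a one-time random number that rotates on success) can be sketched as follows; HMAC-SHA256 stands in for the paper's unspecified encryption, and a plain dict stands in for the token's protected memory:

```python
import hashlib
import hmac
import secrets

# Sketch of the first algorithm under assumed primitives: HMAC-SHA256
# replaces the unspecified cipher, and the "token" is a dict, not real
# secure hardware.

def make_token(shared_key: bytes) -> dict:
    return {"key": shared_key, "otp": secrets.token_bytes(16)}

def prove(token: dict) -> bytes:
    # Client side: "encrypt" (MAC) the current one-time value.
    return hmac.new(token["key"], token["otp"], hashlib.sha256).digest()

def verify_and_rotate(token: dict, proof: bytes) -> bool:
    # Server side: recompute the reference value and compare in constant time.
    expected = hmac.new(token["key"], token["otp"], hashlib.sha256).digest()
    if not hmac.compare_digest(expected, proof):
        return False
    token["otp"] = secrets.token_bytes(16)  # rotate after successful auth
    return True

token = make_token(b"shared-secret")
proof = prove(token)
print(verify_and_rotate(token, proof))  # True
print(verify_and_rotate(token, proof))  # False: the one-time value rotated
```

Rotation after each success is what gives the scheme replay resistance: a captured proof is useless once authentication completes.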
- Published
- 2016
- Full Text
- View/download PDF
19. LBAC Web
- Author
-
Zheng Song and Ying Yang
- Subjects
Computer science ,business.industry ,Principle of least privilege ,020207 software engineering ,Lattice-based access control ,Access control ,Usability ,02 engineering and technology ,Computer security model ,Computer security ,computer.software_genre ,Thin client ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,business ,Cloud storage ,Virtual desktop ,computer - Abstract
E-government puts forward growing security demands on information systems, and in this context thin-client-based solutions have appeared. The thin client provides functions similar to a rich client but better security, through cloud storage and centralized management. The main technologies today are virtual desktop infrastructure (VDI) and the Web client. Recently, the development of Web-based operating systems (Web OSes) has promoted the Web client, but these Web OSes mainly aim at mobility and cross-platform use and give little confidentiality and integrity support for e-government. Firstly, this paper analyzes four open-source Web OSes with respect to their application type, access control policies and security problems, and abstracts a common security model of Web OSes. Secondly, based on this common security model, it constructs LBACWeb (a lattice-based access control model for Web OS), which combines a confidentiality label, an integrity label and a category set on the basis of a lattice structure. Finally, it applies the least-privilege principle to trusted subjects and defines a special privileged subject to enhance flexibility and usability. An analysis and verification are given at last to elaborate the security and applicability of the LBACWeb model.
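A minimal sketch of how a combined lattice label (confidentiality level, integrity level, category set) could be checked for dominance; the ordering rules below (Bell-LaPadula-style confidentiality, Biba-style integrity) are assumptions for illustration, not the paper's definition of LBACWeb:

```python
from dataclasses import dataclass

# Assumed lattice rules, not LBACWeb's actual definition: reads require
# no-read-up in confidentiality and no-read-down in integrity, plus
# category containment.

@dataclass(frozen=True)
class Label:
    conf: int                     # higher = more confidential
    integ: int                    # higher = more trusted
    cats: frozenset = frozenset() # compartment categories

def can_read(subject: Label, obj: Label) -> bool:
    return (subject.conf >= obj.conf          # no read up (confidentiality)
            and subject.integ <= obj.integ    # no read down (integrity)
            and obj.cats <= subject.cats)     # categories contained

officer = Label(conf=2, integ=1, cats=frozenset({"finance"}))
clerk = Label(conf=1, integ=1, cats=frozenset({"finance"}))
doc = Label(conf=2, integ=1, cats=frozenset({"finance"}))
print(can_read(officer, doc), can_read(clerk, doc))  # True False
```

The product of the two orderings (with integrity dualized) and the category powerset is itself a lattice, which is what makes label comparison a single dominance check.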
- Published
- 2019
- Full Text
- View/download PDF
20. Data Governance on Local Storage in Offsite
- Author
-
K. Shyamala and G. Priyadharshini
- Subjects
Information sensitivity ,Thin client ,Air gap (networking) ,Computer science ,End user ,Information leakage ,Data security ,Data breach ,Computer security ,computer.software_genre ,Virtual desktop ,computer - Abstract
The major challenges faced by the software industry are how to restrict confidential data and protect copyright or intellectual-property information between the customer location and the delivery center. Typically, a customer will have multiple vendors spread across various geographical locations. In the global delivery model, customer-sensitive information is exchanged between teams, and there is a possibility of a data breach from the customer network. Though these delivery centers are firewall-segregated or on an air-gapped network, it is difficult to prevent end users from storing information on a local device. Thin-client installation at the delivery center or a Virtual Desktop Infrastructure (VDI) setup at the customer location are the usual solutions to ensure that information does not leave the network. However, traditional offshore development centers may not have a thin-client setup and instead use workstations with local storage. Converting these workstations into thin clients is expensive, and the customer may not be ready to provide a VDI setup for the offsite location. This paper provides a simple, zero-investment solution that makes local workstations behave like thin clients and ensures that there is no data or information leakage through local storage.
- Published
- 2019
- Full Text
- View/download PDF
21. Computational and Network Utilization in the Application of Thin Clients in Cloud-Based Virtual Applications
- Author
-
Glenn A. Martin, Chandler Lattin, Steven Zielinski, and Shehan Sirigampola
- Subjects
business.industry ,Computer science ,Cloud computing ,02 engineering and technology ,computer.software_genre ,Thin client ,Virtual machine ,Server ,0202 electrical engineering, electronic engineering, information engineering ,Virtual training ,020201 artificial intelligence & image processing ,Graphics ,business ,computer ,Computer network - Abstract
Typically, training using virtual environments uses a client-server or a fully distributed approach. In either arrangement, the clients are full computers (PCs) with an adequate processor, memory, and graphics capability. These are reasonably costly, require maintenance, and raise security concerns. In the office desktop environment, the use of thin clients is well known; however, the application of thin clients with cloud-based servers to virtual training is relatively new. Thin clients have a lower initial cost, require less setup and maintenance, and centralize the virtual environment configuration, maintenance, and security on virtual cloud servers. Rather than housing an expensive computer (a so-called thick client) at each station, its functionality is replaced by a streaming protocol, a remote server, and a thin client through which the user interacts. This paper reviews two game-focused streaming protocols running across a set of four thin clients (of various capability and cost) from both local and remote cloud-based data centers. Data were gathered to measure latency and network and computational utilization across each client using two scenarios in both local and remote conditions. The results of these experiments indicate that thin clients are viable for virtual training regardless of local or remote server location.
- Published
- 2019
- Full Text
- View/download PDF
22. Enhanced Resource Management for Web Based Thin Clients Using Cross-Platform Progressive Offline Capabilities
- Author
-
Vlad Fernoaga, Maurizio Murroni, Titus Balan, George-Alex Stelea, and Vlad Popescu
- Subjects
Intranet ,Multimedia ,business.industry ,Computer science ,media_common.quotation_subject ,computer.software_genre ,Workflow ,Thin client ,Cross-platform ,Web application ,The Internet ,Resource management ,business ,Function (engineering) ,computer ,media_common - Abstract
Web-based thin clients are applications that deliver content from the Internet or an intranet and are accessed via the browser on the end device. These clients are portable and cross-device compatible and have a large spectrum of applications, performing tasks ranging from tele-measurement to management and information centralization. The capability of Web-based thin clients to function offline remains indispensable for many companies, because offline-enabled thin clients allow users to continue working without workflow disturbance, preventing the loss of data even when the connection to the Internet is missing or malfunctioning. This paper is dedicated to a "barrier-free" cross-platform responsive and progressive Web-based thin client, presenting its architecture and development, as well as its offline capabilities based on caching techniques and its advantages in resource management and in information back-up and security.
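The offline fallback behavior described above can be sketched as a cache-first policy; the fetch callable and the fallback rule are illustrative stand-ins for the browser caching techniques the paper uses:

```python
# Sketch of an offline-capable thin client's cache policy: serve fresh
# content when the network is up (refreshing the cache), fall back to the
# cached copy when it is down. The fetch callable is a stand-in, not a
# real browser API.

class OfflineCache:
    def __init__(self, fetch):
        self.fetch = fetch   # callable: url -> content, raises OSError offline
        self.store = {}

    def get(self, url: str) -> str:
        try:
            content = self.fetch(url)
            self.store[url] = content    # cache the fresh copy
            return content
        except OSError:
            if url in self.store:
                return self.store[url]   # offline: serve from cache
            raise                        # never seen and offline: give up

state = {"online": True}

def fetch(url):
    if not state["online"]:
        raise OSError("network down")
    return "page-v1"

cache = OfflineCache(fetch)
print(cache.get("https://example.com/app"))  # page-v1 (from network)
state["online"] = False
print(cache.get("https://example.com/app"))  # page-v1 (from cache)
```

In a real progressive web app, the same logic lives in a service worker; the sketch only shows the policy, not the platform.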
- Published
- 2019
- Full Text
- View/download PDF
23. A Privacy-Preserving Thin-Client Scheme in Blockchain-Based PKI
- Author
-
Hongwei Li, Wenbo Jiang, Guowen Xu, Xiaodong Lin, Mi Wen, and Guishan Dong
- Subjects
021110 strategic, defence & security studies ,Security analysis ,Blockchain ,Computer science ,Node (networking) ,0211 other engineering and technologies ,Public key infrastructure ,02 engineering and technology ,Computer security ,computer.software_genre ,Web of trust ,Thin client ,Certificate authority ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,computer ,Private information retrieval - Abstract
Traditional centralized PKIs are vulnerable due to a single point of failure. A feasible solution is to build a decentralized PKI without a certificate authority (CA). Web of Trust was the first step toward a decentralized PKI, but it still has limitations such as missing incentives and leaking users' privacy. Blockchain's numerous desirable properties, such as cryptographic security, its decentralized nature and unalterable transaction records, make it a suitable tool for implementing a decentralized PKI. However, the latest research findings on blockchain-based PKI are still incompatible with thin clients, which have too little storage to download the entire blockchain. To address this, we first present a Privacy-preserving Thin-client Scheme (PTS) utilizing the idea of k-anonymity, which enables thin clients to run normally as full-node users while protecting users' privacy. After that, in order to reduce cost, we further propose an Efficient Privacy-preserving Thin-client Scheme (EPTS) employing private information retrieval (PIR). A security analysis and a functional comparison are then performed to demonstrate the high security and comprehensive functionality of EPTS compared with existing schemes. Finally, extensive experiments confirm that EPTS reduces computational and communication costs impressively.
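The k-anonymity idea behind PTS can be illustrated with a short sketch: the thin client queries the full node about k addresses, only one of which is its own, so the full node cannot tell which one the client actually cares about. The decoy-sampling details here are assumptions, not the paper's construction:

```python
import random

# Sketch of a k-anonymous query: hide the real address among k-1 decoys
# drawn from known addresses. Uniform decoy sampling is a simplification;
# PTS's actual selection strategy is not reproduced here.

rng = random.SystemRandom()  # OS entropy, not a seeded PRNG

def anonymized_query(real_addr: str, known_addrs: list, k: int) -> list:
    decoys = [a for a in known_addrs if a != real_addr]
    query = rng.sample(decoys, k - 1) + [real_addr]
    rng.shuffle(query)  # position must not leak which entry is real
    return query

peers = [f"addr{i}" for i in range(20)]
query = anonymized_query("addr3", peers, 5)
print(len(query), "addr3" in query)  # 5 True
```

From the full node's viewpoint, each of the k addresses is equally likely to be the client's, which is exactly the k-anonymity guarantee.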
- Published
- 2018
- Full Text
- View/download PDF
24. A Semi-Virtualized Testbed Cluster with a Centralized Server for Networking Education
- Author
-
Andreas Stovkmayer, Michael Menth, Florian Heimgaertner, and Mark Schmidt
- Subjects
Computer science ,Patch panel ,Testbed ,020206 networking & telecommunications ,02 engineering and technology ,Network interface ,computer.software_genre ,Networking hardware ,Thin client ,Virtual machine ,Server ,0202 electrical engineering, electronic engineering, information engineering ,Operating system ,computer ,Software configuration management - Abstract
Hands-on computer networking labs are essential in many computer science curricula. They are conducted either on physical testbeds consisting of PCs, routers, switches, cables, etc., or on fully virtualized testbeds. The latter consist of only virtual machines (VM) that can be interconnected via software configuration. Fully virtualized testbeds require less resources (hardware, space, energy) than physical testbeds but students miss important hands-on experience with networking equipment. In this work, we present a semi-virtualized testbed: students are given physical access to networking interfaces of VMs via patch panels so that they can interconnect them through cables. Similarly to virtualized testbeds, the semi-virtualized testbed requires only little hardware and maintenance effort while preserving the hands-on experience of physical testbeds. We present a Python-based orchestration platform for several virtual student workspaces on a single physical server. Each virtual student workspace contains several VMs acting as clients, servers, and routers that can be configured by students. It is made available to a physical workspace on a 19-inch cabinet consisting of a thin client and patch panels allowing students to physically interconnect their VMs with cables.
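A hypothetical data model for such per-student workspaces might look as follows; all names are invented for illustration and are not taken from the platform described in the paper:

```python
from dataclasses import dataclass, field

# Invented model of a per-student virtual workspace: VMs acting as clients,
# servers, and routers, each exposing NICs as patch-panel port labels that
# students cable by hand.

@dataclass
class VM:
    name: str
    role: str                                  # "client", "server", or "router"
    nics: list = field(default_factory=list)   # patch-panel port labels

@dataclass
class Workspace:
    student: str
    vms: list = field(default_factory=list)

    def patch_ports(self) -> list:
        """All patch-panel ports this workspace exposes on the cabinet."""
        return [port for vm in self.vms for port in vm.nics]

ws = Workspace("alice", [
    VM("client1", "client", ["A1"]),
    VM("router1", "router", ["A2", "A3"]),
    VM("server1", "server", ["A4"]),
])
print(ws.patch_ports())  # ['A1', 'A2', 'A3', 'A4']
```

An orchestrator along these lines would map each port label to a virtual NIC on the shared server, so that plugging a cable between two panel ports bridges the corresponding VMs.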
- Published
- 2018
- Full Text
- View/download PDF
25. Migration from Standalone Legacy Systems to Thin Client Server Architecture
- Author
-
Saurabh Gupta, Amod Kumar, and Saba Suhail
- Subjects
Client–server model ,Non-repudiation ,Thin client ,Computer science ,Technological change ,Obsolescence ,Legacy system ,Hot spare ,Computer security ,computer.software_genre ,computer ,LEAPS - Abstract
Today, technology is progressing in leaps and bounds: the rate of technological progress keeps accelerating, a trend famously captured by Moore's law. Technologies become outdated as quickly as within six months of their roll-out, so the rate of obsolescence is also increasing rapidly. This is now a major problem in several industries, primarily because of the large investment required to set up any system, and it raises the question of whether to change with the times or not. In this paper, we list the benefits of migrating from a standalone environment, where multiple applications run on multiple standalone desktop systems, to a client/server model, where all applications run on a single server and users access them through thin clients.
- Published
- 2018
- Full Text
- View/download PDF
26. Identification of Delay Thresholds Representing the Perceived Quality of Enterprise Applications
- Author
-
Thomas Zinner, Matthias Hirth, Stanislav Lange, and Kathrin Borchert
- Subjects
Computer science ,Quality of service ,media_common.quotation_subject ,computer.software_genre ,Set (abstract data type) ,Data set ,Perceived quality ,Identification (information) ,Thin client ,Quality (business) ,Quality of experience ,Data mining ,computer ,media_common - Abstract
Modern enterprise applications are often designed as distributed architectures, e.g., thin-client computing, and thus degradations in network-related Quality of Service (QoS) parameters may negatively impact the user-perceived Quality of Experience (QoE) of the application. In this work, we create a model to predict the perceived application quality based on measurements of objective technical parameters. For this, we gathered a data set in a cooperating enterprise over a timespan of nearly three months. As the obtained data set is subject to bias originating from seasonal effects as well as from a limited and predefined set of technical parameters, we further evaluate how to identify segments of the data that lead to misclassification. Last, we quantify the trade-off between the gain in QoE prediction accuracy and the amount of filtered data.
- Published
- 2018
- Full Text
- View/download PDF
27. A New Model of WEDM-CNC System with Digitizer/Player Architecture
- Author
-
Huang Guangwei, Wang Junqi, Tao Xumuye, Zhao Wansheng, Zheng Junmin, Chen Hao, Chen Mo, Xi Xuecheng, and Xia Weiwen
- Subjects
0209 industrial biotechnology ,Engineering ,business.product_category ,business.industry ,Real-time computing ,02 engineering and technology ,010501 environmental sciences ,Client-side ,G-code ,Motion control ,01 natural sciences ,Machine tool ,Client–server model ,020901 industrial engineering & automation ,Thin client ,Control theory ,General Earth and Planetary Sciences ,Bitstream format ,business ,computer ,Computer hardware ,0105 earth and related environmental sciences ,General Environmental Science ,computer.programming_language - Abstract
Nowadays, typical WEDM-CNC systems are built as a monolithic application program with multiple threads or multiple processes. In the light of the modern computational paradigm, there is no doubt that intelligent manufacturing systems must be built on the basis of the Internet infrastructure. This paper proposes a new model of CNC system for WEDM based on a digitizer/player architecture, implemented in the Client/Server mode. The predefined continuous tool path described in G code is digitized by a digitizer with a specific tool-path interpolator, namely the Generalized Unit Arc Length Increment (GUALI) method. The increments for each moving axis are encoded in a bitstream format, compressed, and stored in a file for later execution. On the client side, the real-time motion control of the tool path is implemented by a player, which plays back the motion bitstream by fetching the data from the server; the feedrate along the tool path can be modulated by the gap discharge status to maintain a stable machining process. With the new model of WEDM-CNC system, the machine tool controller becomes extremely simple, in accordance with the thin-client concept of the network-computing environment.
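The digitizer/player split can be illustrated with a simplified two-axis sketch: a Bresenham-style line interpolator stands in for the GUALI method (an assumption, not the paper's interpolator), producing unit increments per axis that a player replays one servo tick at a time:

```python
# Simplified digitizer/player sketch. digitize_line() plays the role of the
# digitizer (Bresenham stands in for GUALI); play() is the client-side
# player summing unit increments.

def digitize_line(dx: int, dy: int):
    """Yield (step_x, step_y) unit increments for a line from (0,0) to (dx,dy)."""
    sx, sy = (1 if dx > 0 else -1), (1 if dy > 0 else -1)
    dx, dy = abs(dx), abs(dy)
    err, x, y = dx - dy, 0, 0
    while (x, y) != (dx, dy):
        e2 = 2 * err
        step = [0, 0]
        if e2 > -dy and x < dx:
            err -= dy; x += 1; step[0] = sx
        if e2 < dx and y < dy:
            err += dx; y += 1; step[1] = sy
        yield tuple(step)

def play(stream):
    # Player: replay the increment stream, one servo tick per increment.
    x = y = 0
    for step_x, step_y in stream:
        x += step_x
        y += step_y
    return x, y

print(play(digitize_line(5, 3)))  # (5, 3)
```

Because every increment is a unit step per axis, the stream compresses well and the player needs no interpolation logic of its own, which is what keeps the machine tool controller thin.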
- Published
- 2016
- Full Text
- View/download PDF
28. Cloud Gaming: Understanding the Support From Advanced Virtualization and Hardware
- Author
-
Jiangchuan Liu, Ryan Shea, and Di Fu
- Subjects
business.industry ,Computer science ,Full virtualization ,Cloud gaming ,Cloud computing ,Virtualization ,computer.software_genre ,Shared resource ,Thin client ,Server ,Media Technology ,Operating system ,Electrical and Electronic Engineering ,business ,computer ,Computer hardware - Abstract
Existing cloud gaming platforms have mainly focused on private, nonvirtualized environments with proprietary hardware. Modern public cloud platforms rely heavily on virtualization for efficient resource sharing, the potential of which has yet to be explored. Migrating gaming to a public cloud is nontrivial, however, particularly considering the overhead of virtualization and the fact that graphics processing units (GPUs) for game rendering have long been an obstacle to virtualization. This paper takes a first step toward bridging online gaming systems and public cloud platforms. We present the design and implementation of a fully virtualized cloud gaming platform with the latest hardware support for both remote servers and local clients. We explore many critical design issues inherent in cloud gaming, including the choice between hardware and software video encoding, and the configuration and detailed power consumption of the thin client. We demonstrate that, with the latest hardware and virtualization support, gaming over a virtualized cloud can be made possible with careful optimization and integration of the different modules. We also highlight critical challenges toward full-fledged deployment of gaming services over the public virtualized cloud.
- Published
- 2015
- Full Text
- View/download PDF
29. Case Notes: Factors Influencing the Adoption of Virtual Desktop Infrastructure (VDI) Within the South African Banking Sector
- Author
-
Matthews Sekwakwa and Sello Mokwena
- Subjects
Downtime ,Engineering ,lcsh:T58.5-58.64 ,business.industry ,lcsh:T ,lcsh:Information technology ,Legacy system ,Data security ,Usability ,Virtualization ,computer.software_genre ,perceived characteristics of innovations ,lcsh:Technology ,Banking sector ,thin client ,virtualisation ,innovation in banking ,vdi ,Operations management ,Case note ,virtual desktop infrastructure ,business ,Telecommunications ,Virtual desktop ,computer - Abstract
In the 21st century, portable computers and wide area networks are fast becoming the paradigm for computing presence in commercial and industrial settings. The concept of virtualisation in computing originated in the 1960s. Several virtualisation technologies have emerged over the past decade, with the most notable being VMWare, Citrix and Microsoft VDI solutions, including Azure RemoteApp. This paper explores factors influencing the adoption of VDI in the South African banking sector by implementing Rogers’ “perceived characteristics of innovations”. The study found that the relative advantage of VDI, as perceived in banking institutions, includes improved data security and staff working experience; reduced time to deploy devices; and reduced computer downtime. The findings on compatibility factors indicate that good VDI compatibility with legacy software and hardware has a direct relationship with users’ successful adoption. The findings on complexity of use show that other factors, such as the flexibility that comes with remote access, may be a greater influence on adoption than ease of use. Observability of reduced IT support time and increased productivity of remote access have a positive relationship with adoption.
- Published
- 2015
30. The development of a RFID and agent-based lot management controller for PROMIS in a client/server structure for IC assembly firm
- Author
-
Hsien-Pin Hsu
- Subjects
business.industry ,Computer science ,Controller (computing) ,Integrated circuit ,computer.software_genre ,Competitive advantage ,Industrial and Manufacturing Engineering ,Manufacturing engineering ,law.invention ,Software development process ,Client–server model ,Unified Modeling Language ,Thin client ,Control and Systems Engineering ,law ,Operating system ,Radio-frequency identification ,business ,computer ,computer.programming_language - Abstract
Many integrated circuit (IC) assembly firms today are struggling in a low-profit environment. To survive in the semiconductor industry, assembly firms must make every effort to enhance their competitive advantages. One effective way for an assembly firm to enhance its competitiveness is to introduce advanced technologies into its shop floor control system (SFCS) to improve assembly yield and lower assembly costs. In this study, two advanced technologies, Radio Frequency Identification (RFID) and an agent-based approach, were used to build a lot management controller (LMC), which acted as a thin client in a client/server architecture cooperating with PROMIS, an SFCS widely used in the semiconductor industry. In addition, an eight-stage software process based on the Unified Modeling Language (UML) is proposed to facilitate the development of the LMC.
- Published
- 2015
- Full Text
- View/download PDF
31. A dynamic binary translation system in a client/server environment
- Author
-
Wei-Chung Hsu, Jan-Jan Wu, Ding-Yong Hong, Pangfeng Liu, and Chun-Chen Hsu
- Subjects
Computer science ,business.industry ,Mobile computing ,Client ,Remote evaluation ,computer.software_genre ,Client–server model ,Fat client ,Thin client ,Server farm ,Hardware and Architecture ,Server ,Operating system ,business ,computer ,Software ,Computer network - Abstract
With rapid advances in mobile computing, multi-core processors and expanded memory resources are being made available in new mobile devices. This trend will allow a wider range of existing applications to be migrated to mobile devices, for example, running desktop applications in IA-32 (x86) binaries on ARM-based mobile devices transparently using dynamic binary translation (DBT). However, the overall performance could significantly affect the energy consumption of the mobile devices, because it is directly linked to the number of instructions executed and the overall execution time of the translated code. Hence, even though the capabilities of today's mobile devices will continue to grow, concerns over translation efficiency and energy consumption place more constraints on a DBT for mobile devices, in particular for thin mobile clients, than on one for servers. With increasing network accessibility and bandwidth in various environments, many network servers are highly accessible to thin mobile clients. Those network servers are usually equipped with a substantial amount of resources. This provides an opportunity for a DBT on thin clients to leverage such powerful servers. However, designing such a DBT for a client/server environment requires many critical considerations. In this work, we examined those design issues and developed a distributed DBT system based on a client/server model. It consists of two dynamic binary translators: an aggressive dynamic binary translator/optimizer on the server that services translation/optimization requests from thin clients, and a thin DBT on each thin client that performs lightweight binary translation and basic emulation functions on its own. With such a two-translator client/server approach, we successfully off-load the DBT overhead of the thin client to the server and achieve a significant performance improvement over the non-client/server model.
Experimental results show that the DBT of the client/server model could achieve 37% and 17% improvement over that of non-client/server model for x86/32-to-ARM emulation using MiBench and SPEC CINT2006 benchmarks with test inputs, respectively, and 84% improvement using SPLASH-2 benchmarks running two emulation threads.
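The two-translator division of labour described in this abstract can be illustrated with a small sketch (all names are hypothetical; this is a toy model of the idea, not the authors' system): the client emulates cold code with a cheap local translation and off-loads hot regions to the server-side optimizer, caching the result.

```python
class OptimizerServer:
    """Stand-in for the aggressive translator/optimizer on the server."""
    def translate(self, block):
        # a real server would run heavyweight optimization passes here
        return f"optimized({block})"

class ThinClientDBT:
    """Stand-in for the lightweight translator on the thin client."""
    HOT_THRESHOLD = 3  # executions before a code block counts as hot

    def __init__(self, server):
        self.server = server
        self.cache = {}       # block -> cached server translation
        self.exec_count = {}  # block -> times executed

    def run(self, block):
        self.exec_count[block] = self.exec_count.get(block, 0) + 1
        if block in self.cache:                  # reuse server-optimized code
            return self.cache[block]
        if self.exec_count[block] >= self.HOT_THRESHOLD:
            code = self.server.translate(block)  # off-load the hot region
            self.cache[block] = code
            return code
        return f"basic({block})"                 # cheap local translation

client = ThinClientDBT(OptimizerServer())
results = [client.run("loop1") for _ in range(4)]
# first two runs use the local translation; later runs hit the cached
# server-optimized version
```

The cache is what makes the off-load pay off: the server is consulted once per hot region, after which the client executes the optimized code locally.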
- Published
- 2015
- Full Text
- View/download PDF
32. Efficient 3-D Scene Prefetching From Learning User Access Patterns
- Author
-
Zhong Zhou, Jingchang Zhang, and Ke Chen
- Subjects
Information retrieval ,Multimedia ,Computer science ,computer.software_genre ,Computer Science Applications ,Rendering (computer graphics) ,Thin client ,Virtual machine ,Signal Processing ,Media Technology ,Algorithm design ,Cache ,Electrical and Electronic Engineering ,Cluster analysis ,computer - Abstract
Rendering large-scale 3-D scenes on a thin client is attracting increasing attention with the development of the mobile Internet. Efficient scene prefetching to provide timely data with a limited cache is one of the most critical issues for remote 3-D data scheduling in networked virtual environment applications. Existing prefetching schemes predict the future positions of each individual user based on user traces. In this paper, we investigate scene content sequences accessed by various users instead of user viewpoint traces and propose a user access pattern-based 3-D scene prefetching scheme. We apply relationship-graph-based clustering to partition history user access sequences into several clusters and choose representative sequences from among these clusters as user access patterns. Then, these user access patterns are prioritized by their popularity and users' personal preferences. Based on these access patterns, the proposed prefetching scheme predicts the scene contents that will most likely be visited in the future and delivers them to the client in advance. The experimental results demonstrate that our user access pattern-based prefetching approach achieves a high hit ratio and outperforms the prevailing prefetching schemes in terms of access latency and cache capacity.
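The pattern-mining pipeline summarized above (cluster historical access sequences, pick representatives, prefetch from the best-matching pattern) can be sketched as follows; the greedy Jaccard clustering stands in for the paper's relationship-graph clustering, and all names and thresholds are illustrative:

```python
def jaccard(a, b):
    """Set similarity between two access sequences."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cluster_sequences(seqs, threshold=0.5):
    """Greedily group history access sequences by similarity."""
    clusters = []
    for s in seqs:
        for c in clusters:
            if jaccard(s, c[0]) >= threshold:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

def representative(cluster):
    # the member most similar to the rest serves as the access pattern
    return max(cluster, key=lambda s: sum(jaccard(s, t) for t in cluster))

def prefetch(patterns, visited, budget=2):
    """Fetch the next scene chunks from the best-matching pattern."""
    best = max(patterns, key=lambda p: jaccard(p, visited))
    pending = [chunk for chunk in best if chunk not in visited]
    return pending[:budget]

history = [["a", "b", "c", "d"], ["a", "b", "c", "e"], ["x", "y", "z", "w"]]
patterns = [representative(c) for c in cluster_sequences(history)]
to_fetch = prefetch(patterns, visited=["a", "b"])
```

A user who has visited chunks "a" and "b" matches the first pattern, so the scheme prefetches its remaining chunks within the cache budget.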
- Published
- 2015
- Full Text
- View/download PDF
33. Desktop Computer Virtualization for Improvement Security, Power Consumption and Cost by SBC (Server Based Computer)
- Author
-
Lee Yong Hui, Kim Hwan Seok, and Kim Baek Ki
- Subjects
General Computer Science ,business.industry ,Desktop virtualization ,Computer science ,Full virtualization ,Hardware virtualization ,Thin provisioning ,Storage virtualization ,Virtualization ,computer.software_genre ,Thin client ,Embedded system ,Operating system ,business ,computer ,Data virtualization - Abstract
Desktop virtualization can dramatically reduce maintenance costs and improve security compared with previous desktop environments by using various virtualization techniques. Also, by blocking in advance the information leakage caused by data centralization, it makes information security easy to manage. Desktop virtualization provides creation and duplication of data and standardized desktop environments through easy and fast virtualization operations, making it possible to improve the efficiency, stability and flexibility of virtualization. In this paper, with desktop virtualization, a power-saving effect from 65,750 kWh to 7,300 kWh is obtained, i.e. from 480 W to 50 W for one desktop used for 8 hours per day. In addition, 62 desktops and 62 monitors are consolidated onto one operational server with 62 thin clients. As a result, security is greatly improved by data centralization, with users accessing the main server from thin clients in the given space.
- Published
- 2015
- Full Text
- View/download PDF
34. A behavioral anomaly detection strategy based on time series process portraits for desktop virtualization systems
- Author
-
Hong Liu, Cong-Cong Xing, Yuan Zhong, Yanbing Liu, Gong Bo, and Yunpeng Xiao
- Subjects
Thin client ,Computer Networks and Communications ,Computer science ,Process (engineering) ,Virtual machine ,Desktop virtualization ,Server ,Operating system ,Anomaly detection ,computer.software_genre ,computer ,Software - Abstract
As the application of desktop virtualization systems (DVSs) continues to gain momentum, the security of DVSs becomes increasingly critical and is extensively studied. Unfortunately, the majority of current research on DVSs focuses only on the virtual machines (VMs) on the servers, and overlooks to a large extent the security of the clients. In addition, traditional security techniques are not completely suitable for the DVSs' particularly thin client environment. To address these problems, we propose a novel behavioral anomaly detection method for DVS clients that creates and uses process portraits. Based on the correlations between users, virtualized desktop processes (VDPs), and VMs in DVSs, the proposed method describes the process behaviors of clients by the CPU utilization rates of VMs located on the server, constructs process portraits for VDPs by hidden Markov models and by considering user profiles, and detects anomalies of VDPs by contrasting VDPs' behaviors against the constructed process portraits. Our experimental results show that the proposed method is effective and successful.
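A much-simplified sketch of the portrait idea: learn a transition model of discretized VM CPU-utilization states from normal traces, then flag traces whose likelihood falls far below the baseline. A plain first-order Markov chain is used here instead of the paper's hidden Markov models, and the bins and threshold are illustrative:

```python
import math

def discretize(trace, bins=(25, 50, 75)):
    """Map CPU-utilization percentages to coarse states 0..len(bins)."""
    return [sum(u > b for b in bins) for u in trace]

def train_portrait(traces, n_states=4, alpha=1.0):
    """First-order Markov 'portrait' with Laplace smoothing."""
    counts = [[alpha] * n_states for _ in range(n_states)]
    for t in traces:
        s = discretize(t)
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return [[c / sum(row) for c in row] for row in counts]

def log_likelihood(portrait, trace):
    """How well a new trace fits the learned portrait."""
    s = discretize(trace)
    return sum(math.log(portrait[a][b]) for a, b in zip(s, s[1:]))

# normal behaviour: consistently low CPU utilization
normal = [[10, 12, 15, 11, 9, 14, 10, 12]] * 20
portrait = train_portrait(normal)
baseline = log_likelihood(portrait, normal[0])

# a sudden sustained CPU spike looks nothing like the portrait
spike = log_likelihood(portrait, [10, 95, 97, 90, 92, 96, 91, 90])
is_anomaly = spike < baseline - 5.0  # threshold chosen for illustration
```

The spike trace takes transitions the portrait has essentially never seen, so its log-likelihood collapses relative to the baseline.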
- Published
- 2015
- Full Text
- View/download PDF
35. Web GIS and its architecture: a review
- Author
-
Sonam Agrawal and R. D. Gupta
- Subjects
Distributed GIS ,Geographic information system ,Computer science ,business.industry ,0211 other engineering and technologies ,Cloud computing ,02 engineering and technology ,computer.software_genre ,Client–server model ,World Wide Web ,Thin client ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,General Earth and Planetary Sciences ,The Internet ,Web mapping ,Web service ,business ,computer ,021101 geological & geomatics engineering ,General Environmental Science - Abstract
Phenomenal progress has been witnessed in the field of Geographic Information System (GIS) in recent times. The development of webGIS is the result of the growth of the Internet and consequently the World Wide Web. The webGIS architecture is continuously changing according to contemporary technologies and requirements. In this paper, webGIS and its architectures are reviewed. Firstly, GIS and the invention of the Internet are discussed. The evolution of webGIS is then covered along with major milestones of its development and open source initiatives. This is followed by a discussion of client-server architecture and its types. Thick and thin client architectures are then described and compared. As the paradigm in the computing world has shifted towards web services, service-oriented architecture (SOA) is also discussed in the context of webGIS. Spatial cloud computing and cloud-based architecture for webGIS are then described. The paper also provides a comparison of different webGIS architectures so that a suitable architecture can be selected by the user based upon their requirements.
- Published
- 2017
- Full Text
- View/download PDF
36. Tele-measurement with virtual instrumentation using web-services
- Author
-
George-Alex Stelea, Vlad Fernoaga, Dan Robu, and Florin Sandu
- Subjects
Representational state transfer ,Intranet ,Multimedia ,Virtual instrumentation ,computer.internet_protocol ,Computer science ,business.industry ,Cloud computing ,computer.software_genre ,Thin client ,Server ,Plug-in ,Web service ,business ,computer - Abstract
The web services for tele-measurement are approached through REST (Representational State Transfer) as a unified representation of state-transition events, enabling true integration of resource management in a distributed environment by simplifying the algorithmic state machines associated with both services and the behavioral patterns of instruments. Each sensor we used has its own ESP8266-based Wi-Fi micro-system to publish its values to the Cloud. Forwarding from the public address of the router to the Intranet address is monitored by a National Instruments Web-Shared Variables server. Data produced by the sensors, together with their presentation methods, are taken over by a web-based "thin client": a dynamic, content-to-terminal-adaptive "Control Board" developed to manage and connect the tele-measurement services through a graphical user interface, with simple browser access and without the need to install additional third-party modules or plugins.
- Published
- 2017
- Full Text
- View/download PDF
37. Thin Client Technology for Higher Education at Universities of Saudi Arabia: Implementation, Challenges and Lesson Learned
- Author
-
Wazir Zada Khan, Shakeel Ahmed, Mohammed Y. Aalsalem, and Ibrahim Ahmed Ghashem
- Subjects
Engineering ,Random access memory ,Class (computer programming) ,Virtual class ,Multimedia ,Higher education ,business.industry ,Public relations ,computer.software_genre ,Thin client ,Server ,business ,computer ,Implementation - Abstract
Recent advancements in technology are reshaping our daily lives. This transformation has turned our traditional classrooms into virtual classrooms, enabling millions of people to access knowledge at their fingertips. Thin Client is currently one of the most widely adopted technologies in higher education, and universities in Saudi Arabia are among the pioneering international universities that have embraced it. In this paper, we focus on the challenges faced and lessons learned during the implementation phase of Thin Client technology at Saudi Arabian universities. In addition, we perform a comparative analysis of the Thin Client technology implemented at various Saudi Arabian universities.
- Published
- 2017
- Full Text
- View/download PDF
38. Custodian Controlled Data Repository - supporting the timely, easy and cost effective access to linked data
- Author
-
Suzi Adams and Christopher Radbone
- Subjects
Engineering ,Information Systems and Management ,Data custodian ,business.industry ,Health Informatics ,Linked data ,Trusted third party ,Information repository ,Computer security ,computer.software_genre ,Data governance ,Data flow diagram ,Thin client ,business ,computer ,Information Systems ,Demography ,Custodians - Abstract
Objectives: In response to Data Custodians' and Researchers' requests for assistance in improving the timeliness and ease of data extractions, SA NT DataLink established the Custodian Controlled Data Repository (CCDR). Issues including conflicting work priorities and limited resources can prevent timely data extractions. SA NT DataLink, as a trusted third party, enables Custodians to hold separate copies of their de-identified content data ready for timely release. The CCDR is opt-in for Custodians; its key strength is that the data remain under the control of the participating Custodians, who update and correct their own data. SA NT DataLink staff work with Data Custodians to clean, standardise, update and prepare data in advance of anticipated approvals, including spatially enabled GIS variables, and, under the direction of the Data Custodians, take responsibility for preparing the data extractions required for each approved project. Approach: The SA NT DataLink Custodian Controlled Data Repository (CCDR) takes advantage of the Secure Unified Research Environment (SURE), a secure remote-access data laboratory operated by the Sax Institute in Sydney, Australia. Using thin clients and two-factor authentication, Data Custodians from South Australia and across Australia are able to securely store and maintain de-identified copies of their content data in SURE, ready for standardisation, quality review, and more timely release for approved use and data linkage projects. Results: The CCDR functional diagram provides an understanding of the data flow and processes that support more timely, easy and cost-effective collation and provision of de-identified and privacy-protected data. The Curated Gateway feature of SURE manages all the data coming into and being released from the Repository. 
Agreed regular updates of Data Custodians' data can be stored in their sub-directories, with access to each sub-directory managed by authentication and passwords. SA NT DataLink Analysts are able to perform the role of data integrator for approved projects and uses, also running privacy-protecting algorithms and verifying the data provided against the approvals. Conclusion: The CCDR securely stores the de-identified data ready for it to be integrated and released to Researchers. The use of secure remote-access technologies allows Data Custodians to maintain control of their preloaded and pre-cleaned data in the CCDR. This in turn allows Custodians to authorise its use, from which SA NT DataLink staff, dedicated to working only in the CCDR, integrate and release the data in a more timely manner.
- Published
- 2017
- Full Text
- View/download PDF
39. Clinical Impact and Value of Workstation Single Sign-On
- Author
-
George S Conklin, John A Gillean, George A Gellert, Lynn A Gibson, John F. Crouch, and S. Luke Webster
- Subjects
Workstation ,Computer science ,Desktop virtualization ,Cost-Benefit Analysis ,Information Storage and Retrieval ,Health Informatics ,Efficiency, Organizational ,Login ,computer.software_genre ,law.invention ,Access to Information ,03 medical and health sciences ,0302 clinical medicine ,law ,Physicians ,Electronic Health Records ,Humans ,Operations management ,030212 general & internal medicine ,Productivity ,Computer Security ,health care economics and organizations ,030504 nursing ,End user ,Identification (information) ,Thin client ,Operating system ,Single sign-on ,0305 other medical science ,computer ,Software - Abstract
Background CHRISTUS Health began implementation of computer workstation single sign-on (SSO) in 2015. SSO technology utilizes a badge reader placed at each workstation where clinicians swipe or "tap" their identification badges. Objective To assess the impact of SSO implementation in reducing clinician time logging in to various clinical software programs, and in financial savings from migrating to a thin client that enabled replacement of traditional hard-drive computer workstations. Methods Following implementation of SSO, a total of 65,202 logins were sampled systematically during a 7-day period among 2256 active clinical end users in 6 facilities, and the time saved was compared to pre-implementation. Dollar values were assigned to the time saved by 3 groups of clinical end users: physicians, nurses and ancillary service providers. Results The reduction of total clinician login time over the 7-day period showed a net gain of 168.3 h per week of clinician time, i.e. 28.1 h (2.3 shifts) per facility per week. Annualized, 1461.2 h of mixed physician and nursing time is liberated per facility per annum (121.8 shifts of 12 h per year). The annual dollar cost saving of this reduction of time expended logging in is $92,146 per hospital per annum and $1,658,745 per annum in the first-phase implementation of 18 hospitals. Computer hardware equipment savings due to desktop virtualization increase annual savings to $2,333,745. Qualitative value contributions to clinician satisfaction, reduction in staff turnover, facilitation of adoption of EHR applications, and other benefits of SSO are discussed. Conclusions SSO had a positive impact on clinician efficiency and productivity in the 6 hospitals evaluated, and is an effective and cost-effective method to liberate clinician time from repetitive and time-consuming logins to clinical software applications.
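The abstract's time-savings arithmetic can be reproduced directly (figures taken from the abstract; the dollar extrapolation is left out because the published totals appear to use unrounded intermediate values):

```python
# 168.3 hours of clinician time saved per week across the 6 study facilities
total_saved_week = 168.3
per_facility_week = round(total_saved_week / 6, 2)   # 28.05 h, reported as 28.1 h

# annualize from the reported per-facility figure, as the abstract does
per_facility_year = round(28.1 * 52, 1)              # hours per facility per annum
shifts_per_year = round(per_facility_year / 12, 1)   # 12-hour shifts per annum
```

This reproduces the abstract's 1461.2 liberated hours and 121.8 twelve-hour shifts per facility per year.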
- Published
- 2017
- Full Text
- View/download PDF
40. A method of DDoS attack detection using HTTP packet pattern and rule engine in cloud computing environment
- Author
-
Pankoo Kim, Chang Choi, Junho Choi, and Byeongkyu Ko
- Subjects
Web server ,Computer science ,business.industry ,Network packet ,Software as a service ,Denial-of-service attack ,Access control ,Cloud computing ,Virtualization ,computer.software_genre ,Theoretical Computer Science ,Thin client ,Utility computing ,Grid computing ,Server ,Geometry and Topology ,business ,computer ,Software ,Computer network - Abstract
Cloud computing is a more advanced technology for distributed processing than, e.g., thin clients and grid computing, and is implemented by means of virtualization technology for servers and storage together with advanced network functionalities. However, this technology has certain disadvantages, such as monotonous routing for attacks and easy attack methods and tools, which means that in the worst case all network resources and operations are blocked at once. Various studies, such as pattern analyses and network-based access control for infringement response based on Infrastructure as a Service, Platform as a Service and Software as a Service in cloud computing services, have therefore been conducted recently. This study proposes a method that integrates detection of HTTP GET flooding, one of the Distributed Denial-of-Service attacks, with MapReduce processing for fast attack detection in a cloud computing environment. In addition, experiments on processing time were conducted to compare its performance with pattern detection of the attack features using Snort detection based on HTTP packet patterns and log data from a Web server. The experimental results show that the proposed method is better than Snort detection because its processing time is shorter under increasing congestion.
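The GET-flood counting that MapReduce parallelizes reduces to a word-count-style job: map each log chunk to (client IP, 1) pairs for GET requests, then sum per IP and flag heavy hitters. A minimal single-process sketch (the log format and threshold are assumptions, not the paper's):

```python
from collections import Counter
from itertools import chain

def map_phase(log_chunk):
    """Emit (client_ip, 1) for every HTTP GET request in one log chunk."""
    return [(line.split()[0], 1) for line in log_chunk if '"GET ' in line]

def reduce_phase(pairs):
    """Sum the request counts per client IP."""
    counts = Counter()
    for ip, n in pairs:
        counts[ip] += n
    return counts

def detect_flooders(chunks, threshold):
    """IPs whose GET-request count across all chunks exceeds the threshold."""
    pairs = chain.from_iterable(map_phase(c) for c in chunks)
    return {ip for ip, n in reduce_phase(pairs).items() if n > threshold}

chunks = [
    ['1.2.3.4 "GET /index HTTP/1.1"'] * 50,
    ['1.2.3.4 "GET /index HTTP/1.1"'] * 60 + ['5.6.7.8 "GET /a HTTP/1.1"'],
]
flooders = detect_flooders(chunks, threshold=100)
```

In a real MapReduce deployment each chunk would be mapped on a separate worker; only the per-IP reduction changes scale, not shape.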
- Published
- 2014
- Full Text
- View/download PDF
41. On the performance of OnLive thin client games
- Author
-
Alexander Grant, Michael Solano, David Finkel, and Mark Claypool
- Subjects
Multimedia ,Computer Networks and Communications ,business.industry ,Computer science ,Network packet ,Lag ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Cloud computing ,computer.software_genre ,Game client ,Thin client ,Hardware and Architecture ,Media Technology ,Upstream (networking) ,Network performance ,business ,Downstream (networking) ,computer ,Software ,Information Systems ,Computer network - Abstract
Computer games stand to benefit from "cloud" technology by doing heavy-weight, graphics-intensive computations at the server, sending only the visual game frames down to a thin client, with the client sending only the player actions upstream to the server. However, computer games tend to be graphically intense with fast-paced user actions, necessitating bitrates and update frequencies that may stress end-host networks. Understanding the traffic characteristics of thin client games is important for building traffic models and traffic classifiers, as well as adequately planning network infrastructures to meet future demand. While there have been numerous studies detailing online game traffic and streaming video traffic, this paper provides the first detailed study of the network characteristics of OnLive, a commercially available thin client game system. Carefully designed experiments measure OnLive game traffic for several game genres, analyzing the bitrates, packet sizes and inter-packet times for both upstream and downstream game traffic, and analyzing frame rates for the games. Results indicate OnLive rapidly sends large packets downstream, similar to, yet still significantly different from, live video. Upstream, OnLive less frequently sends much smaller packets, significantly different from traditional upstream game-client traffic. OnLive supports only the top frame rates with high-capacity end-host connections, but provides good frame rates with moderate end-host connections. The results should be a useful beginning for building effective traffic models and traffic classifiers and for preparing end-host networks to support this upcoming generation of computer games.
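The per-direction statistics analyzed in the paper (bitrate, packet sizes, inter-packet times) can be computed from a packet trace in a few lines; the trace format below is an assumption for illustration, not OnLive's:

```python
def traffic_stats(trace):
    """trace: list of (timestamp_seconds, payload_bytes) packets, time-ordered."""
    sizes = [size for _, size in trace]
    times = [t for t, _ in trace]
    duration = times[-1] - times[0]
    gaps = [b - a for a, b in zip(times, times[1:])]  # inter-packet times
    return {
        "bitrate_bps": sum(sizes) * 8 / duration,
        "mean_packet_bytes": sum(sizes) / len(sizes),
        "mean_gap_ms": 1000 * sum(gaps) / len(gaps),
    }

# a synthetic downstream-style flow: large packets sent frequently
downstream = [(i * 0.002, 1400) for i in range(1000)]  # 1400 B every 2 ms
stats = traffic_stats(downstream)
```

Running the same function over upstream and downstream captures side by side is enough to expose the asymmetry the paper describes: large frequent packets down, small infrequent packets up.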
- Published
- 2014
- Full Text
- View/download PDF
42. The Wide-Area Virtual Service Migration Problem: A Competitive Analysis Approach
- Author
-
Stefan Schmid, Anja Feldmann, Gregor Schaffrath, Johannes Grassler, and Marcin Bienkowski
- Subjects
Service (systems architecture) ,Competitive analysis ,Computer Networks and Communications ,Computer science ,business.industry ,Distributed computing ,Quality of service ,Network virtualization ,Virtualization ,computer.software_genre ,Computer Science Applications ,Thin client ,The Internet ,Electrical and Electronic Engineering ,Online algorithm ,business ,Mobile device ,computer ,Software ,Computer network - Abstract
Today's trend toward network virtualization and software-defined networking enables flexible new distributed systems where resources can be dynamically allocated and migrated to locations where they are most useful. This paper proposes a competitive analysis approach to design and reason about online algorithms that find a good tradeoff between the benefits and costs of a migratable service. A competitive online algorithm provides worst-case performance guarantees under any demand dynamics, and without any information or statistical assumptions on the demand in the future. This is attractive especially in scenarios where the demand is hard to predict and can be subject to unexpected events. As a case study, we describe a service (e.g., an SAP server or a gaming application) that uses network virtualization to improve the quality of service (QoS) experienced by thin client applications running on mobile devices. By decoupling the service from the underlying resource infrastructure, it can be migrated closer to the current client locations while taking into account migration costs. We identify the major cost factors in such a system and formalize the wide-area service migration problem. Our main contributions are a randomized and a deterministic online algorithm that achieve a competitive ratio of $O(\log {n})$ in a simplified scenario, where $n$ is the size of the substrate network. This is almost optimal. We complement our worst-case analysis with simulations in different specific scenarios and also sketch a migration demonstrator.
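The flavour of such online migration rules can be conveyed with a rent-or-buy sketch (this is an illustration of the competitive-analysis setting, not the paper's randomized or deterministic algorithm): serve clients remotely until the accumulated remote-access cost since the last migration reaches the migration cost, then migrate.

```python
def migrate_schedule(access_costs, migration_cost):
    """Rent-or-buy style online rule: accumulate the per-round cost of
    serving clients from the service's current location, and trigger a
    migration once that accumulated cost reaches the migration cost.
    Returns the rounds in which a migration is triggered."""
    migrations, accumulated = [], 0.0
    for t, cost in enumerate(access_costs):
        accumulated += cost
        if accumulated >= migration_cost:
            migrations.append(t)
            accumulated = 0.0
    return migrations

# clients drift away over time, so the remote-access cost per round grows
# until a migration pays for itself
rounds = migrate_schedule([1, 1, 2, 3, 5, 1, 1, 6], migration_cost=6)
```

The rule never spends more than twice what an offline optimum would between two of its migrations, which is the intuition behind competitive guarantees of this kind; the paper's $O(\log n)$ algorithms handle the much harder setting of choosing *where* to migrate in a network.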
- Published
- 2014
- Full Text
- View/download PDF
43. A Development of Thin Client based Video Guide Service Using Video Virtualization and WebRTC
- Author
-
Kwang-Yong Kim, Il-Gu Jung, and Won Ryu
- Subjects
Web browser ,Service (systems architecture) ,Thin client ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Communication methods ,Operating system ,Virtualization ,computer.software_genre ,computer ,WebRTC ,Information display ,Data compression - Abstract
In this paper, we have developed a remote multi-lingual video guide application that allows interaction with the set-top box (STB) of a thin client using WebRTC, a web browser-based communication method. A server accesses a web camera in conjunction with the digital information display of a remotely connected thin client and exchanges guide information with the user. The guide can be played on the thin-client STB using virtualization-based high-quality video compression technology. Furthermore, it does not depend on the performance of the STB and provides the video guide remotely.
- Published
- 2013
- Full Text
- View/download PDF
44. Data Visualization Design of Bus Information Terminal using Smart Client Platform
- Author
-
Joohwan Kim and Doohee Nam
- Subjects
Ajax ,business.industry ,Computer science ,Client ,computer.software_genre ,World Wide Web ,Fat client ,Thin client ,Web application ,Look and feel ,Smart client ,Web service ,business ,computer ,computer.programming_language - Abstract
Smart client is a term describing an application environment that delivers applications over a web HTTP connection and does not require installation and/or updates. The term "Smart Client" refers to simultaneously capturing the benefits of a "thin client" (zero install, auto update) and a "fat client" (high performance, high productivity). A "Smart Client" application can be created with several very different technologies. Over the past few years, ITS has started to move towards smart clients, also called rich clients. The trend is a move from traditional client/server architecture to a Web-based model. Closer to a fat client than a thin client, smart clients are Internet-connected devices that allow a user's local applications to interact with server-based applications through the use of Web services. Smart Client applications in a BIT (Bus Information Terminal) bridge the gap between web applications and desktop applications: they provide the benefits of a web application while still providing the snappy look and feel inherent to desktop applications.
- Published
- 2013
- Full Text
- View/download PDF
45. FEASIBILITY OF DESKTOP VIRTUALIZATION PER SOFTWARE SERVICES AND LOCAL HARDWARE BASED ON THE NETWORK THROUGHPUT
- Author
-
Vitor Chaves de Oliveira, Lia Toledo Moreira Mota, Alexandre de Assis Mota, and Inacio Henrique Yano
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Desktop virtualization ,Full virtualization ,Quality of service ,Virtualization ,computer.software_genre ,Client–server model ,Thin client ,Artificial Intelligence ,Server ,Operating system ,Quality of experience ,business ,computer ,Software ,Computer hardware ,Computer network - Abstract
In recent years, virtualization has become a worldwide reality, present in the datacenter servers of most organizations. The motivations for the use of this solution are focused primarily on cost reduction and on increases in the availability, integrity and security of data. Based on these benefits, this technology has recently begun to be used for personal computers as well, that is, for desktops, giving birth to so-called desktop virtualization. Given the technical advantages of the approach, its growth has been so significant that it is expected to be present in over 90% of organizations before 2014. However, this new method is completely based on a physical client-server architecture, which increases the importance of the communication network that makes the technique possible. Analyzing the network in order to investigate these effects in the implemented environment therefore becomes crucial. In this study, the client's local hardware and the application, i.e. the service used, are varied. The purpose was to detail their effects on computer networks through a Quality of Service (QoS) parameter, throughput; secondarily, perceptions regarding the Quality of Experience (QoE) are outlined. This culminated in an assessment of the feasibility of applying this technology.
- Published
- 2013
- Full Text
- View/download PDF
46. VDI Performance Optimization with Hybrid Parallel Processing in Thick Client System under Heterogeneous Multi-Core Environment
- Author
-
Myeong-Seob Kim and Eui-Nam Huh
- Subjects
Multi-core processor ,business.industry ,Computer science ,Cloud computing ,computer.software_genre ,CUDA ,Thin client ,Parallel processing (DSP implementation) ,Operating system ,Quality of experience ,business ,computer ,Mobile device ,Virtual desktop - Abstract
Recently, the demand to process High Definition (HD) video and 3D applications on low-powered mobile devices has expanded, and content data have increased as well. This is becoming a major issue in Cloud computing, where a Virtual Desktop Infrastructure (VDI) service needs efficient data-processing capability to provide Quality of Experience (QoE). In this paper, we propose three kinds of Thick-Thin VDI services that can share and delegate VDI service based on a Thick Client using the CPU and GPU. Furthermore, we propose and discuss a VDI service optimization method for a mixed CPU and GPU heterogeneous environment, using OpenMP for CPU parallel processing and CUDA for GPU parallel processing.
- Published
- 2013
- Full Text
- View/download PDF
47. Green Computing Under Cloud Environment Proposed architecture using cloud computing & thin client
- Author
-
K. Senthil Kumar and T. Chandrasekar
- Subjects
Cloud computing security ,Cost efficiency ,Computer science ,business.industry ,Distributed computing ,Cloud computing ,computer.software_genre ,Fat client ,Green computing ,Thin client ,Utility computing ,Cloud testing ,Operating system ,business ,computer - Abstract
Private cloud computing provides attractive and cost-efficient Server Based Computing (SBC). Implementing thin-client computing for a private cloud reduces IT costs and consumes less power. Most cloud services run in a browser-based environment, so a fat client is not needed in the private cloud environment. Implementing thin-client technology along with private cloud computing can reduce IT operational costs by 90% through savings in power, space and maintenance, requiring only minimal power for cooling the infrastructure. A thin client with private cloud computing can be referred to as the purest form of green computing and carbon-free computing.
- Published
- 2013
- Full Text
- View/download PDF
48. Education Portal on Climate Change with Web GIS Client
- Author
-
Aleš Vávra and Vilém Pechanec
- Subjects
Engineering ,Information Systems and Management ,Multimedia ,business.industry ,Strategy and Management ,media_common.quotation_subject ,E-learning (theory) ,Climate change ,computer.software_genre ,ArcGIS Server ,Field (computer science) ,Computer Science Applications ,World Wide Web ,Thin client ,Work (electrical) ,ComputingMilieux_COMPUTERSANDEDUCATION ,The Internet ,Quality (business) ,business ,computer ,Information Systems ,media_common - Abstract
E-learning, as the use of new multimedia technologies and the Internet, is widely employed to improve the quality of learning, leading to an improved learning approach for pupils and students. The aim of this research is to create e-learning courses with a thematic focus on the climate and its change, containing accurate information from the field of climate change and the environment. The main objective of the courses is to provide educational materials to various groups of users, focusing on the natural and social sciences related to climate change. During the creation of the e-learning courses, the authors faced the following problem: the e-learning system they used, Moodle, did not include any modules for working with maps or geodata. Their solution is based on the LMS Moodle and a thin client created with ArcGIS Server.
- Published
- 2013
- Full Text
- View/download PDF
49. CyberLiveApp: A secure sharing and migration approach for live virtual desktop applications in a cloud environment
- Author
-
Yu Jia, Lu Liu, Jianxin Li, and Tianyu Wo
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Cloud computing ,Virtualization ,computer.software_genre ,Software ,Thin client ,Hardware and Architecture ,Application sharing ,Virtual machine ,Operating system ,The Internet ,business ,Virtual desktop ,computer ,Protocol (object-oriented programming) - Abstract
In recent years, we have witnessed the rapid advent of cloud computing, in which remote software is delivered as a service and accessed by users using a thin client over the Internet. In particular, a traditional desktop application can execute in the remote virtual machines of clouds without re-architecture and provide a personal desktop experience to users through remote display technologies. However, existing cloud desktop applications have isolated environments with virtual machines (VMs), which cannot adequately support application-oriented collaborations between multiple users and VMs. In this paper, we propose a flexible collaboration approach, named CyberLiveApp, to enable live virtual desktop application sharing, based on a cloud and virtualization infrastructure. CyberLiveApp supports secure application sharing and on-demand migration among multiple users or equipment. To support VM desktop sharing among multiple users, we develop a secure access mechanism to distinguish their view privileges, in which window operation events are tracked to compute hidden areas of windows in real time. A proxy-based window filtering mechanism is also proposed to deliver desktops to different users. To achieve the goals of live application sharing and migration between VMs, a presentation redirection approach based on the VNC protocol and a VM cloning service based on the Libvirt interface are used. These approaches have been preliminarily evaluated on an extended MetaVNC. The results of the evaluations verify that these approaches are effective and useful.
- Published
- 2013
- Full Text
- View/download PDF
50. IMPACTS OF APPLICATION USAGE AND LOCAL HARDWARE ON THE THROUGHPUT OF COMPUTER NETWORKS WITH DESKTOP VIRTUALIZATION
- Author
- Vitor Chaves de Oliveira, Lia Toledo Moreira Mota, and Alexandre de Assis Mota
- Subjects
Multidisciplinary ,Computer science ,Desktop virtualization ,Full virtualization ,business.industry ,Quality of service ,Virtualization ,computer.software_genre ,Client–server model ,Thin client ,Data integrity ,Quality of experience ,business ,computer ,Computer network - Abstract
Currently, virtualization solutions are employed in the vast majority of organizations around the world. The reasons for this are the benefits gained from the approach, chiefly improvements in security, availability, and data integrity. These advantages also apply to a newer technique built on the same concept, called desktop virtualization. Driven by these advantages, this method has grown considerably and is likely to be implemented in more than three-quarters of organizations before 2014. Because the technique is based on a client–server architecture, all processing takes place on a central computer, which responds to user interaction through clients that are physically elsewhere. This means the technique depends on the communication network that makes the interaction possible. The importance of the network is therefore increased, and it is important to study its behavior compared to a traditional desktop solution, that is, a local one. This article demonstrates the impact on a Quality of Service (QoS) parameter, throughput, which varied considerably depending on the implemented computational environment. Additional results show a decay in Quality of Experience (QoE) with a thin client, and a significant QoS benefit of virtualization when remote access is required.
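The throughput parameter studied in this abstract is conventionally derived from the bytes transferred over a measurement interval. A minimal sketch of that conversion (the function name and units are assumptions for illustration, not taken from the paper):

```python
def throughput_mbps(bytes_transferred: int, seconds: float) -> float:
    """Average throughput in megabits per second over an interval.

    bytes_transferred: total payload bytes observed on the link
    seconds: duration of the measurement window
    """
    bits = bytes_transferred * 8          # convert bytes to bits
    return bits / seconds / 1e6           # bits/s -> Mbit/s

# Example: 1,250,000 bytes in one second is a 10 Mbit/s average.
rate = throughput_mbps(1_250_000, 1.0)
```

Comparing such per-interval averages for a local desktop versus a thin-client session is one straightforward way to quantify the network impact the article investigates.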
- Published
- 2013
- Full Text
- View/download PDF