243 results for "File area network"
Search Results
2. Analyzing the Overhead of the Memory Mapped File I/O for In-Memory File Systems
- Author
- Hwansoo Han and Jung-Sik Choi
- Subjects
- File Control Block, Indexed file, Computer science, Computer file, Operating system, File area network, Versioning file system, Parallel computing, Unix file types, Flash file system, Memory-mapped file
- Published
- 2016
3. Analyzing Google File System and Hadoop Distributed File System
- Author
- Nader Gemayel
- Subjects
- File system, Distributed computing, General Computer Science, Database, Computer science, Computer file, Unix file types, Virtual file system, Torrent file, Self-certifying File System, Operating system, File area network, File system fragmentation
- Published
- 2016
4. Content-based File Sharing in Peer-to-peer Networks Using Threshold
- Author
- Amol P. Bhagat, Kiran A. Dongre, and Radhika Chaudhari
- Subjects
- Interest extraction, Computer science, BitTorrent tracker, Stub file, Threshold-based sharing, File sharing, Data file, Versioning file system, Interest-oriented file sharing, Global Namespace, SSH File Transfer Protocol, File system fragmentation, General Environmental Science, File system, Peer-to-peer network, Device file, File sharing system, Torrent file, Shared resource, Self-certifying File System, Content-based file sharing, Journaling file system, General Earth and Planetary Sciences, File area network, Computer network
- Abstract
In a content-based file sharing peer-to-peer (P2P) [1] network model, nodes share files directly with each other without a centralized server. In such a file sharing system, nodes meet and exchange requests and files in the form of text, short videos, and voice clips across different interest categories. With the rapid development of wireless communication technology, content is varied, and sharing of large files such as multimedia content is required. File sharing can also mean having an allocated amount of personal file storage in a common file system. For efficient file searching, a threshold-based P2P content sharing system takes advantage of node mobility by designating stable nodes, which have the most frequent contact with community members, as community coordinators for intra-community searching, and highly mobile nodes that visit other communities frequently as community ambassadors for inter-community searching. Sharing large files requires a more stable end-to-end path and a longer transmission time. Last but not least, relationships between nodes can be used to promote the file sharing process. Content-based file sharing is helpful for taking certain decisions during file transmission, and these decisions benefit the proper utilization of network resources. In this paper, a content-based file sharing scheme using a threshold is proposed. The proposed scheme determines the user's interest before searching and sharing files in the peer-to-peer network, and the resources in the network are utilized according to the contents of the files to be shared. The performance evaluation shows that the proposed system significantly lowers transmission cost and improves the file sharing success rate compared to current methods.
- Published
- 2016
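The coordinator/ambassador selection described in the abstract above can be sketched as a simple threshold rule. This is a minimal illustration with hypothetical node names, contact counts, and thresholds, not the paper's actual algorithm: stable nodes (most frequent contact with their own community) become coordinators, and highly mobile nodes (most visits to other communities) become ambassadors.

```python
# Hypothetical illustration of threshold-based role assignment:
# nodes whose intra-community contact count passes one threshold become
# coordinators; nodes whose inter-community visit count passes another
# threshold become ambassadors.

def assign_roles(intra_contacts, inter_visits, coord_threshold, amb_threshold):
    """Return (coordinators, ambassadors) chosen by simple thresholds."""
    coordinators = {n for n, c in intra_contacts.items() if c >= coord_threshold}
    ambassadors = {n for n, v in inter_visits.items() if v >= amb_threshold}
    return coordinators, ambassadors

# Example data (invented for illustration only).
intra_contacts = {"n1": 42, "n2": 7, "n3": 30}   # contacts with own community
inter_visits = {"n1": 1, "n2": 25, "n3": 3}      # visits to other communities

coords, ambs = assign_roles(intra_contacts, inter_visits, 25, 20)
print(sorted(coords))  # ['n1', 'n3']
print(sorted(ambs))    # ['n2']
```

A real system would derive the thresholds from observed mobility traces rather than fixing them by hand.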
5. Design of Object Storage Using OpenNVM for High-performance Distributed File System
- Author
- Fuyumasa Takatsu, Kohei Hiraga, and Osamu Tatebe
- Subjects
- General Computer Science, Database, Computer science, Computer file, Stub file, Unix file types, Object storage, Self-certifying File System, Operating system, File area network, Versioning file system, Distributed File System
- Published
- 2016
6. Performance Optimization for Managing Massive Numbers of Small Files in Distributed File Systems
- Author
- Ligang He, Chenlin Huang, Kenli Li, Songling Fu, and Xiangke Liao
- Subjects
- Flat file database, Computer science, Stub file, File size, Server, Data file, Versioning file system, Distributed File System, SSH File Transfer Protocol, File system fragmentation, Database server, File system, Indexed file, Database, Distributed database, ext4, Computer file, Device file, Everything is a file, Unix file types, Virtual file system, Torrent file, File Control Block, Self-certifying File System, Computational Theory and Mathematics, Hardware and Architecture, Journaling file system, Signal Processing, Operating system, File area network, Fork (file system)
- Abstract
The processing of massive numbers of small files is a challenge in the design of distributed file systems. Currently, the combined-block-storage approach is prevalent. However, this approach employs traditional file systems such as ExtFS and may cause inefficiency when accessing small files randomly located on the disk. This paper focuses on optimizing the performance of data servers in accessing massive numbers of small files. We present a Flat Lightweight File System (iFlatLFS) to manage small files, based on a simple metadata scheme and a flat storage architecture. iFlatLFS is designed to substitute for the traditional file system on data servers and can be deployed underneath distributed file systems that store massive numbers of small files. iFlatLFS can greatly simplify the original data access procedure. The new metadata proposed in this paper occupies only a fraction of the metadata size of traditional file systems. We have implemented iFlatLFS in CentOS 5.5 and integrated it into an open-source Distributed File System (DFS) called Taobao FileSystem (TFS), which is developed by Alibaba, a top B2C service provider in China, and manages over 28.6 billion small photos. We have conducted extensive experiments to verify the performance of iFlatLFS. The results show that when the file size ranges from 1 to 64 KB, iFlatLFS is faster than Ext4 by 48 and 54 percent on average for random read and write in the DFS environment, respectively. Moreover, after iFlatLFS is integrated into TFS, iFlatLFS-based TFS is faster than the existing Ext4-based TFS by 45 and 49 percent on average for random read access and hybrid access (a mix of read and write accesses), respectively.
- Published
- 2015
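The flat-storage idea behind iFlatLFS can be sketched as follows. This is our own simplified illustration, not the paper's implementation: all small files live in one large flat region, and a simple in-memory table maps each file name to an (offset, length) pair, so a read needs one table lookup plus one seek instead of a directory-tree traversal.

```python
# Minimal sketch of flat storage for small files: a single byte region
# (standing in for the raw disk area) plus a flat name -> (offset, length)
# table. Names and layout here are illustrative assumptions.
import io

class FlatStore:
    def __init__(self):
        self.region = io.BytesIO()   # stands in for the raw flat disk area
        self.table = {}              # name -> (offset, length)

    def write(self, name, data):
        offset = self.region.seek(0, io.SEEK_END)  # append at the end
        self.region.write(data)
        self.table[name] = (offset, len(data))

    def read(self, name):
        offset, length = self.table[name]
        self.region.seek(offset)                   # one seek, one read
        return self.region.read(length)

store = FlatStore()
store.write("photo1.jpg", b"abc")
store.write("photo2.jpg", b"defgh")
print(store.read("photo2.jpg"))  # b'defgh'
```

The table is the analogue of the paper's compact metadata: one small fixed-size record per file instead of full inode and directory structures.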
7. LocoFS
- Author
- Siyang Li, Yang Hu, Tao Li, Jiwu Shu, and Youyou Lu
- Subjects
- Computer science, Directory, Distributed data store, Data file, Global Namespace, Distributed File System, SSH File Transfer Protocol, File system, Database, Storage Resource Broker, Computer file, Directory information tree, Replication (computing), Torrent file, Metadata, Self-certifying File System, Scalability, File area network
- Abstract
Key-value stores provide scalable metadata service for distributed file systems. However, the metadata organization itself, a directory tree structure, does not fit the key-value access pattern, which limits performance. To address this issue, we propose LocoFS, a distributed file system with a loosely-coupled metadata service, to bridge the performance gap between file system metadata and key-value stores. LocoFS is designed to decouple the dependencies between different kinds of metadata with two techniques. First, LocoFS decouples the directory content and structure, organizing file and directory index nodes in a flat space while reversely indexing the directory entries. Second, it decouples the file metadata to further improve key-value access performance. Evaluations show that LocoFS with eight nodes boosts metadata throughput by 5 times, approaching 93% of the throughput of a single-node key-value store, compared to 18% for the state-of-the-art IndexFS.
- Published
- 2017
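The flattened, reversely indexed namespace described above can be sketched with a plain dictionary standing in for the key-value store. This is a sketch under our own simplifying assumptions, not LocoFS's actual schema: every file or directory inode is keyed by its full path in a flat space, and each entry records its parent instead of the parent listing its children.

```python
# Sketch of a flat key-value metadata layout: path -> metadata record.
# Each record reversely indexes its parent directory, so no directory
# block has to be read or updated when a child is created.
kv = {}  # flat key-value store: full path -> metadata dict

def create(path, is_dir=False):
    parent = path.rsplit("/", 1)[0] or "/"
    kv[path] = {"parent": parent, "is_dir": is_dir}  # reverse index to parent

create("/home", is_dir=True)
create("/home/alice", is_dir=True)
create("/home/alice/a.txt")

# Listing a directory becomes a scan for entries whose parent matches,
# rather than reading a directory's own child list.
children = sorted(p for p, m in kv.items() if m["parent"] == "/home/alice")
print(children)  # ['/home/alice/a.txt']
```

A production system would replace the linear scan with an index over the parent field; the point here is only the decoupling of directory structure from directory content.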
8. An Efficient Gear-shifting Power-proportional Distributed File System
- Author
- Hieu Hanh Le, Satoshi Hikida, and Haruo Yokota
- Subjects
- Computer science, Distributed computing, Computer file, Device file, Unix file types, Virtual file system, Replication (computing), Torrent file, File Control Block, Self-certifying File System, Metadata management, Versioning file system, Network File System, File area network, SSH File Transfer Protocol, Distributed File System, File system fragmentation
- Abstract
Recently, power-aware distributed file systems for efficient big data processing have increasingly moved toward power-proportional designs. However, inefficient gear-shifting in such systems is an important issue that can seriously degrade their performance. To address this issue, we propose and evaluate an efficient gear-shifting power-proportional distributed file system. The proposed system utilizes flexible data placement that reduces the amount of reflected data and has an architecture that improves metadata management to achieve high-efficiency gear-shifting. Extensive empirical experiments using actual machines based on HDFS demonstrated that the proposed system gains up to 22% better throughput-per-watt performance. Moreover, a suitable metadata management setting corresponding to the amount of data updated while in low gear is found from the experimental results.
- Published
- 2015
9. A Proximity-Aware Interest-Clustered P2P File Sharing System
- Author
- Lee Ward, Haiying Shen, and Guoxin Liu
- Subjects
- Computer science, BitTorrent tracker, Stub file, Overlay, Class implementation file, File replication, File sharing, Server, Versioning file system, Distributed File System, SSH File Transfer Protocol, Global Namespace, File system fragmentation, Database, Device file, Data structure, File sharing system, Torrent file, Self-certifying File System, Computational Theory and Mathematics, Hardware and Architecture, Journaling file system, Signal Processing, File area network, Computer network
- Abstract
Efficient file query is important to the overall performance of peer-to-peer (P2P) file sharing systems. Clustering peers by their common interests can significantly enhance the efficiency of file query, and clustering peers by their physical proximity can also improve file query performance. However, few current works are able to cluster peers based on both peer interest and physical proximity. Although structured P2Ps provide higher file query efficiency than unstructured P2Ps, it is difficult to realize such clustering in them due to their strictly defined topologies. In this work, we introduce a Proximity-Aware and Interest-clustered P2P file sharing System (PAIS) based on a structured P2P, which forms physically close nodes into a cluster and further groups physically close, common-interest nodes into a sub-cluster based on a hierarchical topology. PAIS uses an intelligent file replication algorithm to further enhance file query efficiency: it creates replicas of files that are frequently requested by a group of physically close nodes in their location. Moreover, PAIS enhances intra-sub-cluster file searching through several approaches. First, it further classifies the interest of a sub-cluster into a number of sub-interests, and clusters common-sub-interest nodes into a group for file sharing. Second, PAIS builds an overlay for each group that connects lower-capacity nodes to higher-capacity nodes for distributed file querying while avoiding node overload. Third, to reduce file searching delay, PAIS uses proactive file information collection so that a file requester can know whether its requested file is on its nearby nodes. Fourth, to reduce the overhead of this file information collection, PAIS uses bloom-filter-based file information collection and corresponding distributed file searching. Fifth, to improve file sharing efficiency, PAIS ranks the bloom filter results in order. Sixth, considering that a recently visited file tends to be visited again, the bloom filter based approach is enhanced by checking only the newly added bloom filter information to reduce file searching delay. Trace-driven experimental results from the real-world PlanetLab testbed demonstrate that PAIS dramatically reduces overhead and enhances the efficiency of file sharing both with and without churn. Further, the experimental results show the high effectiveness of the intra-sub-cluster file searching approaches in improving file searching efficiency.
- Published
- 2015
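The bloom-filter-based file information collection mentioned above can be sketched with a generic bloom filter (the parameters and hashing scheme below are our own assumptions, not PAIS's): each node summarizes the file names it holds in a small bit array, so a requester can cheaply check a nearby node's summary before issuing a real query. False positives are possible; false negatives are not.

```python
# Generic bloom filter sketch: k hash positions per item over an m-bit array.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item):
        # Derive k deterministic positions by salting a SHA-256 hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # True means "possibly present"; False means "definitely absent".
        return all(self.bits[pos] for pos in self._positions(item))

summary = BloomFilter()
print(summary.might_contain("movie.mp4"))  # False: an empty filter never matches
summary.add("movie.mp4")
print(summary.might_contain("movie.mp4"))  # True: added items always match
```

Exchanging such summaries instead of full file lists is what keeps the collection overhead low.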
10. HybridFS - a high performance and balanced file system framework with multiple distributed file systems
- Author
- Hongji Yang, Lidong Zhang, Yeh-Ching Chung, Tse-Chuan Hsu, Yongwei Wu, and Ruini Xue
- Subjects
- Java, Computer science, Stub file, Design rule for Camera File system, Server, Data file, Versioning file system, SSH File Transfer Protocol, Distributed File System, File system fragmentation, File system, Indexed file, Computer file, Device file, Everything is a file, Unix file types, Virtual file system, Torrent file, File Control Block, Self-certifying File System, Journaling file system, Operating system, File area network, Fork (file system)
- Abstract
In the big data era, the distributed file system is becoming more and more significant due to its scale-out capability, high availability, and high performance. Different distributed file systems may have different design goals. For example, some are designed for good small-file performance, such as GlusterFS, while others are designed for large file operations, such as the Hadoop distributed file system. With the divergence of big data applications, a distributed file system may provide good performance for some applications but fail for others; that is, there is no universal distributed file system that can produce good performance for all applications. In this paper, we propose a hybrid file system framework, HybridFS, which can deliver satisfactory performance for all applications. HybridFS is composed of multiple distributed file systems and integrates their advantages. In HybridFS, on top of multiple distributed file systems, we have designed a metadata management server that performs three functions: file placement, partial metadata store, and dynamic file migration. File placement is performed based on a decision tree. The partial metadata store is performed for files whose size is less than a few hundred bytes, to increase throughput. Dynamic file migration is performed to balance the storage usage of the distributed file systems without throttling performance. We have implemented HybridFS in Java on eight nodes and chose Ceph, HDFS, and GlusterFS as the designated distributed file systems. The experimental results show that, in the best case, HybridFS can achieve up to 30% performance improvement for read/write operations over a single distributed file system. In addition, if the difference in storage usage among the multiple distributed file systems is less than 40%, the performance of HybridFS is guaranteed, that is, there is no performance degradation.
- Published
- 2017
11. File Hosting Service Based on Single-Board Computer
- Author
- Jiri Vojtesek and Lukas Mlynek
- Subjects
- Service (systems architecture), Computer science, Computer file, Internet hosting service, Self-certifying File System, Single-board computer, Backup, Operating system, File area network, SSH File Transfer Protocol
- Abstract
Single-Board Computers (SBCs) are very popular nowadays, mainly because of their low price and sufficient performance for basic automation, multimedia, networking, and similar tasks. The goal of this contribution was to find an appropriate low-priced SBC and free software for creating a personal file hosting service for sharing, distributing, and backing up files in a small network. Two candidates were chosen from the group of Pi-based SBCs, the Raspberry Pi 2 and the Banana Pi M2, which were then submitted to performance tests. The open-source ownCloud instance was chosen for the file hosting task. Disadvantages and problems of SBCs are also discussed, together with improvements and solutions to these problems.
- Published
- 2017
12. Methodologies for Geotagging in FAT and ExFAT File Systems for Smart Phones
- Author
- G. T. Raju, Keshava Munegowda, and Veeramanikandan Raju
- Subjects
- Computer science, Stub file, Directory, exFAT, Design rule for Camera File system, File allocation table, Data file, Versioning file system, SSH File Transfer Protocol, File system fragmentation, Flash file system, File system, Indexed file, Computer file, Device file, Byte, Unix file types, Virtual file system, Mac OS, Torrent file, File Control Block, Self-certifying File System, Journaling file system, Computer data storage, ZAP File, Operating system, File area network, Fork (file system), Image file formats
- Abstract
The File Allocation Table (FAT) file system is the most common file system used in embedded devices such as smart phones, digital cameras, smart TVs, tablets, etc. Typically these embedded devices use Solid State Drives (SSDs) as storage devices. The ExFAT file system is the future file system for embedded devices and is optimal for SSDs. This paper discusses methodologies for geotagging as file system metadata instead of file data in the FAT and ExFAT file systems. The designed methodologies adhere to compatibility with the FAT file system specification and existing ExFAT file system implementations. Keywords: Cluster, Contiguous, ExFAT, EXIF, FAT, File system, Flash memories, GPS, MMC, Multimedia, Micro SD, NAND, NOR, SSD, Storage, Video, XMP. 1. INTRODUCTION Multimedia Cards (MMC) [1] / Secure Digital (SD) [2] / Micro SD cards are composed of NOR and/or NAND flash memories [3]. NOR and NAND flash memories are also called "solid state drives". Flash memories are the default choice for any embedded device as they are low-priced, smaller in size, and of higher storage capacity. The FAT [4] file system is the most widely adopted file system on SSDs. Multimedia applications such as video imaging and audio/video playback and recording use the FAT file system for storing and retrieving the data of multimedia files. The first version of the FAT file system was FAT12 by Microsoft Corporation; it was later extended as FAT16 and further as FAT32 to support higher storage capacities. The FAT file system was initially developed for use on floppy disks and Hard Disk Drives (HDDs). Since most operating systems for Personal Computers (PCs), such as Windows, Linux, and Mac OS, implement the FAT file system, it has become a default and world-wide compatible storage format for embedded devices. Usually a device with a FAT implementation is recognized as removable storage media on a PC.
Even though the FAT file system does not define flash management techniques such as wear levelling and bad block management [4], embedded devices implement this file system along with dedicated flash block management algorithms. In the FAT file system, the storage device is represented as a group of linear clusters. A cluster is the basic unit of data storage in the FAT file system and is a group of blocks or sectors of the storage device. In the FAT file system, a file or directory is a linked list of clusters. The File Allocation Table stores the status of all available clusters of the device; the status of a cluster can be allocated (as part of file/directory data), free, or reserved. Every entry of the File Allocation Table indicates the status of the corresponding cluster. The FAT file system specification limits the maximum supported storage size to 32 GB, while the maximum size supported by FAT32 implementations is 128 GB. But today, flash storage cards of more than 32 GB are available in the market. The ExFAT [5] [6] file system was developed by Microsoft Corporation as the successor of the FAT32 file system. It is optimally designed to support large flash storage cards with higher read and write performance. The maximum storage size supported by the ExFAT file system is 128 petabytes (PB). Global Positioning System (GPS) based positioning [7][8] is the positioning technique commonly used in smartphones. Most smartphones contain a GPS receiver which continuously receives signals from satellites. These satellite signals contain the exact time the message was sent and the location information. The GPS receiver uses the received signals of four or more satellites to determine the current position based on trilateration. Many smartphones use multiple sensors to improve the accuracy of the GPS position [9]. In Android-based mobile phones, the Location Manager [10] can be used for geotagging of multimedia files.
The eXtensible Metadata Platform (XMP) [11] and Exchangeable image file format (EXIF) [12] are the formats used to store geographical coordinates and other related information in multimedia and Portable Document Format (PDF) files. In both XMP and EXIF formats, the geographical data is always placed as multimedia file data. Due to this, the file needs to be opened and a read operation is required to retrieve the geographical location of the file. This paper designs techniques to place the geographical data of a file as an attribute of the file instead of as user data of the file. In the FAT and ExFAT file systems, the attributes of a file contain the file name, size in bytes, and date and time of creation and last write; the attributes of a file are referred to as the metadata of the file. This paper adds geographical data to the attributes of a file/directory by the following methodologies: i) geotagging with reserved sectors of FAT and ExFAT file systems; ii) geotagging with reserved clusters of FAT file systems. Files created/updated with the above techniques adhere to compatibility with the existing FAT and ExFAT file system implementations and specifications. This means files created with geographical data as attributes are accessible for read and write operations in other FAT and ExFAT file system implementations without the geotagging facility. The geotagging methodologies described in this paper store the geographical
- Published
- 2014
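The idea of storing coordinates as a file attribute rather than inside the file data can be sketched with a fixed-size binary record, much like the date and size fields of a directory entry. The layout below is our own illustration and not the FAT/ExFAT on-disk format:

```python
# Sketch of a fixed-size geotag attribute record: latitude and longitude
# packed as two little-endian doubles. The layout is a hypothetical
# illustration, not the actual FAT/ExFAT directory-entry format.
import struct

GEO_ATTR = struct.Struct("<dd")  # latitude, longitude

def pack_geotag(lat, lon):
    return GEO_ATTR.pack(lat, lon)

def unpack_geotag(raw):
    return GEO_ATTR.unpack(raw)

raw = pack_geotag(12.9716, 77.5946)   # hypothetical coordinates
print(len(raw))                        # 16 bytes: small enough for a reserved area
print(unpack_geotag(raw))              # (12.9716, 77.5946)
```

Because such a record is small and fixed-size, it can live in a reserved sector or cluster and be read without opening the file's data at all, which is the point of the paper's approach.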
13. A Transparent File Encryption Scheme Based on FUSE
- Author
- Yihong Long, Xiang He, and Liheng Zheng
- Subjects
- Computer science, Stub file, Data security, Class implementation file, Encryption, Design rule for Camera File system, Filesystem-level encryption, User space, Versioning file system, SSH File Transfer Protocol, File system fragmentation, File system, Computer file, Device file, Filter driver, Unix file types, Virtual file system, Torrent file, Memory-mapped file, File Control Block, Self-certifying File System, Journaling file system, Operating system, File area network, Fork (file system), Cache
- Abstract
Transparent file encryption is an important means to protect file data security. However, in traditional file encryption systems based on a file filter driver, constant cache cleaning is required to ensure the correctness of the data in the cache, which greatly reduces the efficiency of file operations. In this paper, a transparent file encryption system based on FUSE (Filesystem in Userspace) is proposed to overcome the shortcomings of traditional transparent file encryption systems. To avoid cleaning the system file cache frequently, file redirection is adopted to transfer file operations to FUSE, which, with respect to the cache, processes file operations initiated by trusted and non-trusted processes differently. This system can not only be used for local file protection, but can also be applied to secure files on cloud storage.
- Published
- 2016
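The trusted/non-trusted distinction described above can be sketched as follows. This is a toy model, not the paper's system: XOR stands in for a real cipher, the process names are hypothetical, and the dictionary stands in for the backing store. The user-space file system decrypts on read only for trusted processes, so untrusted ones see only ciphertext.

```python
# Toy sketch of transparent encryption with per-process trust.
KEY = 0x5A
TRUSTED = {"wordproc"}  # hypothetical trusted process names

def xor(data, key=KEY):
    # Stand-in for a real cipher; XOR is its own inverse.
    return bytes(b ^ key for b in data)

class EncryptedFS:
    def __init__(self):
        self.disk = {}  # path -> ciphertext, as stored at rest

    def write(self, path, plaintext):
        self.disk[path] = xor(plaintext)      # always encrypted on disk

    def read(self, path, process):
        data = self.disk[path]
        # Decrypt only for trusted processes; others get raw ciphertext.
        return xor(data) if process in TRUSTED else data

fs = EncryptedFS()
fs.write("/doc.txt", b"secret")
print(fs.read("/doc.txt", "wordproc") == b"secret")  # True: decrypted
print(fs.read("/doc.txt", "backup") == b"secret")    # False: ciphertext only
```

In the real system this logic would sit inside FUSE callbacks, with the process identity taken from the request context.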
14. Access Control Based on File View
- Author
- Xin Yu Mao
- Subjects
- File system, Indexed file, Database, Flat file database, Computer science, Computer file, Stub file, General Medicine, Unix file types, Torrent file, File Control Block, Self-certifying File System, Journaling file system, Data file, File area network, Versioning file system, Fork (file system), File synchronization, SSH File Transfer Protocol, File system fragmentation, Block (data storage)
- Abstract
By constructing a tree-type logical structure for a file and introducing the view mechanism of database systems into the file system, a hierarchical structure of view files is created. Different files (view files) are provided to different users according to their levels or interests by mapping layer by layer, starting from the minimum access unit (the logical block). This can solve the problem of discrepancies among users' views.
- Published
- 2013
15. Hybrid File System - A Strategy for the Optimization of File System
- Author
- Pei Rong Wang, Rui Liu, and Ya Rong Wang
- Subjects
- Computer science, Stub file, Design rule for Camera File system, Data file, Versioning file system, Cloud storage system, Distributed File System, SSH File Transfer Protocol, File system fragmentation, File system, Indexed file, Database, Computer file, Directory information tree, General Engineering, Device file, Unix file types, Virtual file system, Torrent file, File Control Block, Self-certifying File System, Journaling file system, Operating system, File area network, Fork (file system)
- Abstract
The hybrid file system is designed to optimize the latency of file system I/O responses and to extend the capacity of the local file system to the cloud by taking advantage of the Internet. Our hybrid file system consists of an SSD, an HDD, and the Amazon S3 cloud file system. We store small files, the directory tree, and the metadata of all files on the SSD, because the SSD performs well for small, random I/Os. The HDD is good at serving big, sequential I/Os, so we use it like a warehouse to store big files, which are linked by symbolic files on the SSD. We also extend the local file system to the cloud in order to enlarge its capacity. In this paper we describe the design and implementation details of our hybrid file system along with its test data.
- Published
- 2013
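The SSD/HDD split described above can be sketched with a size threshold and a symbolic-link placeholder. The 1 MiB threshold and the dictionary "devices" are assumptions for illustration: small files and all placeholders stay on the SSD so the directory tree lives in one place, while big files go to the HDD.

```python
# Sketch of tiered placement: small files on the "SSD", big files on the
# "HDD" with a symlink-like placeholder left on the SSD. The threshold is
# an invented example value.
SMALL_LIMIT = 1 << 20  # 1 MiB

ssd = {}  # small files, metadata, and symlink placeholders
hdd = {}  # big files

def store(name, data):
    if len(data) < SMALL_LIMIT:
        ssd[name] = data
    else:
        hdd[name] = data
        ssd[name] = ("symlink", "hdd:" + name)  # placeholder pointing to HDD

def load(name):
    entry = ssd[name]
    if isinstance(entry, tuple) and entry[0] == "symlink":
        return hdd[name]        # follow the link to the HDD tier
    return entry                # served directly from the SSD tier

store("note.txt", b"x" * 10)
store("video.bin", b"y" * (2 << 20))
print(load("note.txt") == b"x" * 10)          # True, served from SSD
print(load("video.bin") == b"y" * (2 << 20))  # True, followed to HDD
```

A third branch of the same `store` decision could push cold big files onward to the cloud tier.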
16. PRIVACY PRESERVATION FOR FILE SHARING SCHEME USING SECURED FILE BLOCK ID WITH BINARY TREES
- Author
- J. Bhuvana, S. Chenthur Pandian, and M. Balamurugan
- Subjects
- Multidisciplinary, Indexed file, Database, Computer science, Computer file, Stub file, Unix file types, File Control Block, File area network, Versioning file system, SSH File Transfer Protocol
- Abstract
Privacy, the protection of information from unauthorized access, is increasingly needed on the Internet, and yet it becomes progressively more important as each user acts as both consumer and producer. The lack of privacy mainly affects peer-to-peer file sharing applications, in which users in the network share files with each other and their actions are easily monitored by unauthorized users. Several techniques have been presented to monitor unauthorized access to files in the network. Our previous work described secured file sharing using cryptographic key-value pairs, which shares a file among users based on the key location of the file, but it does not provide an efficient privacy preservation scheme for file sharing. To advance that work, in this study we propose the design and implementation of secured online file sharing by assigning a secured file block and a participant ID that provide users with explicit, configurable control over their files. A File Security Packet (FSP) is developed to maintain a collection of users' files, each assigned its respective file and block ID, without disclosing the users' private data to the public. File sharing is then done by relating the file block ID to the participant ID using binary trees, which represent the exact location of the data present in the file to be shared. The binary trees keep all files to be shared, with a relevant file and block ID for each user's file, in a tree-pattern framework. The proposed secured file sharing using B-trees is optimized for systems that read and write large blocks of files sequentially. An experimental evaluation with several user clients, in terms of communication key rounds, number of participants, and the size of the file exchanged, estimates the performance of the proposed privacy-preserving file sharing using secured file block IDs with binary trees (PFSBT).
- Published
- 2013
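Locating shared data through a tree keyed by file block ID, as the scheme above suggests, can be sketched with a plain binary search tree. The node fields and peer names are our own illustration, not the paper's data structure: each node maps a block ID to where that block of the shared file lives.

```python
# Sketch of a binary search tree keyed by block ID, mapping each block to
# its (hypothetical) location in the sharing network.
class Node:
    def __init__(self, block_id, location):
        self.block_id, self.location = block_id, location
        self.left = self.right = None

def insert(root, block_id, location):
    if root is None:
        return Node(block_id, location)
    if block_id < root.block_id:
        root.left = insert(root.left, block_id, location)
    else:
        root.right = insert(root.right, block_id, location)
    return root

def find(root, block_id):
    while root is not None:
        if block_id == root.block_id:
            return root.location
        root = root.left if block_id < root.block_id else root.right
    return None  # block ID not shared

root = None
for bid, loc in [(40, "peer-2"), (20, "peer-1"), (60, "peer-3")]:
    root = insert(root, bid, loc)

print(find(root, 20))  # peer-1
print(find(root, 99))  # None
```

The paper's B-tree variant generalizes this to wide nodes, which suits the large sequential block reads and writes it targets.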
17. File System Backup to Object Storage for On-Demand Restore
- Author
- Atsushi Sutoh, Masaaki Iwasaki, and Jun Nemoto
- Subjects
- Computer science, Computer file, Working directory, Stub file, Data loss, Backup, Operating system, File area network, Versioning file system, File system fragmentation
- Abstract
A new backup method, which achieves on-demand restore as part of file system backup to object storage, is proposed. On-demand restore is a function that restores only a certain directory or file according to a request from an end user. The proposed method backs up the relationships between files and objects on a directory basis in order to handle these relationships efficiently during restore. It is experimentally shown that the proposed method reduces the response time of file access involving restore by over an order of magnitude.
- Published
- 2016
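The per-directory relationship backup described above can be sketched as follows. The object names and index layout are illustrative assumptions: the backup records, for each directory, which object holds each file, so restoring one requested file consults only that directory's record instead of scanning the whole backup.

```python
# Sketch of on-demand restore: a per-directory index of file -> object.
backup_index = {  # directory -> {file name -> object name in object storage}
    "/home/alice": {"a.txt": "obj-001", "b.txt": "obj-002"},
    "/home/bob":   {"c.txt": "obj-003"},
}
object_store = {"obj-001": b"A", "obj-002": b"B", "obj-003": b"C"}

def restore_on_demand(path):
    """Restore a single file by consulting only its directory's record."""
    directory, name = path.rsplit("/", 1)
    obj = backup_index[directory][name]   # one directory record consulted
    return object_store[obj]

print(restore_on_demand("/home/bob/c.txt"))  # b'C'
```

Keeping the index at directory granularity is what lets a restore of one file or one directory avoid touching the rest of the backup.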
18. Fileshader: Entrusted Data Integration Using Hash Server
- Author
- Juhyeon Oh and Chae Y. Lee
- Subjects
- Database, Computer science, Stub file, Unix file types, Torrent file, Self-certifying File System, File area network, File transfer, Versioning file system, SSH File Transfer Protocol, Computer network
- Abstract
The importance of security is increasing in current network systems. We have found a significant security weakness in file integrity when people download or upload a file, and we propose a novel solution to ensure the security of a file. In particular, a hash value can be applied to verify a file, owing to the speed and architecture of file transfer. A hash server stores all the hash values, which are updated by the file provider, and a client can use these values to trust a file when it downloads. FileShader detects file changes correctly, and we observed that it did not show significant performance degradation. We expect that FileShader can be applied to current network systems practically and can increase the security level of all Internet users.
- Published
- 2016
19. A performance study of the Babel file system
- Author
-
Ricardo Marcelín-Jiménez, Orlando Munoz-Texzocotetla, and Jorge Luis Ramirez-Ortiz
- Subjects
File size ,Indexed file ,Self-certifying File System ,Computer science ,Computer file ,Stub file ,Operating system ,Versioning file system ,File area network ,computer.software_genre ,Unix file types ,computer - Abstract
The performance of a distributed file system is determined by its hardware components as well as its operational parameters. Even a slight change in a working condition may have a major impact, for instance, on service response times. In this paper, we propose a set of experiments on the Babel file system using a client that sends requests for either file storage or retrieval, with two file sizes (512 MB and 1 GB). The aim is to find the best working conditions. To measure the performance of the Babel system, we recorded the throughput and the response time while either of the two operations (storage or retrieval) was running. The analysis of the results showed that, for a given set of operational parameters, there is an optimal file size that gets the best performance out of the system.
- Published
- 2016
20. Base Station Assisted Device-to-Device Communications for Content Update Network
- Author
-
Jianxin Chen, Zhifeng Wu, Mingkai Chen, Tao Yu, and Yujie Ma
- Subjects
Base station ,SIMPLE (military communications protocol) ,Computer science ,business.industry ,Real-time computing ,Physical layer ,File area network ,Cloud computing ,Cache ,business ,Computer network - Abstract
Inspired by D2D communication, which is regarded as one of the most promising technologies for the next generation of wireless communication, we propose a novel framework in which a cluster head may update its cached files and a user acquires a file under eNB guidance. The content cluster update deployment method is based on a device-to-device (D2D) system in which devices have cached video files from a media cloud. Moreover, each device downloads a file from other nearby devices through a direct link or multiple hops without going through the eNB. In particular, we model the physical layer as a simple "protocol model" without interference. The highlights of this work lie in defining the cluster selection and file update methods and the shortest multi-hop problem, and in analyzing the method through simulation results.
- Published
- 2015
21. Overlay Structure Efficiency in Peer-to-peer File-Sharing Networks
- Author
-
G.V. Poryev
- Subjects
Structure (mathematical logic) ,business.industry ,Computer science ,Peer to peer file sharing ,Overlay network ,File area network ,Overlay ,business ,Computer network - Published
- 2011
22. Measurement Based Analysis of One-Click File Hosting Services
- Author
-
Pere Barlet-Ros, Josep Solé-Pareta, and Josep Sanjuàs-Cuxart
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Strategy and Management ,Internet traffic ,computer.file_format ,Torrent file ,World Wide Web ,Self-certifying File System ,File sharing ,Hardware and Architecture ,File area network ,business ,Global Namespace ,SSH File Transfer Protocol ,Distributed File System ,computer ,Information Systems ,Computer network - Abstract
It is commonly believed that file sharing traffic on the Internet is mostly generated by peer-to-peer applications. However, we show that HTTP based file sharing services are also extremely popular. We analyzed the traffic of a large research and education network for three months, and observed that a large fraction of the inbound HTTP traffic corresponds to file download services, which indicates that an important portion of file sharing traffic is in the form of HTTP data. In particular, we found that two popular one-click file hosting services are among the top Internet domains in terms of served traffic volume. In this paper, we present an exhaustive study of the traffic generated by such services, the behavior of their users, the downloaded content, and their server infrastructure.
- Published
- 2011
23. CA-NFS
- Author
-
James Lentini, Arkady Kanevsky, Thomas Talpey, Alexandros Batsakis, and Randal Burns
- Subjects
File system ,Computer science ,computer.software_genre ,Virtual file system ,Shared resource ,Self-certifying File System ,File server ,Hardware and Architecture ,Data_FILES ,Operating system ,File area network ,Network File System ,SSH File Transfer Protocol ,computer - Abstract
We develop a holistic framework for adaptively scheduling asynchronous requests in distributed file systems. The system is holistic in that it manages all resources, including network bandwidth, server I/O, server CPU, and client and server memory utilization. It accelerates, defers, or cancels asynchronous requests in order to directly improve application-perceived performance. We employ congestion pricing via online auctions to coordinate the use of system resources by the file system clients so that they can detect shortages and adapt their resource usage. We implement our modifications in the Congestion-Aware Network File System (CA-NFS), an extension to the ubiquitous Network File System (NFS). Our experimental results show that CA-NFS yields a 20% improvement in execution times compared with NFS for a variety of workloads.
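The congestion-pricing idea can be illustrated with a toy model: the server derives a "price" from its resource utilization, and a client defers an asynchronous request when the price exceeds the benefit of sending it now. The pricing function and thresholds below are illustrative assumptions, not the paper's actual auction mechanism.

```python
def server_price(cpu_util, mem_util, net_util):
    """Price grows with the most congested resource (each in 0.0 .. 1.0)."""
    return max(cpu_util, mem_util, net_util)

def schedule_write(benefit, price):
    """Accelerate when the server is cheap, defer when it is congested."""
    return "send now" if benefit >= price else "defer"

# A client with a moderate-benefit write sees two server states:
idle = schedule_write(benefit=0.5, price=server_price(0.2, 0.1, 0.3))
busy = schedule_write(benefit=0.5, price=server_price(0.9, 0.4, 0.7))
```

The point of the scheme is that clients adapt using only the advertised price, without needing to know which server resource is the bottleneck.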
- Published
- 2009
24. Improving Data Accessibility with File Area Networks
- Author
-
D. Geer
- Subjects
General Computer Science ,Database ,business.industry ,Computer science ,Data management ,Unstructured data ,computer.software_genre ,World Wide Web ,Storage area network ,Data model ,Information system ,File area network ,business ,computer - Abstract
Many companies have complex information systems with a growing amount of unstructured data - information that isn't organized into fixed categories. Databases have built-in tools for understanding and managing structured data. However, managing unstructured data - including migrating it to new storage equipment, backing it up, maintaining user access to material, and keeping information to satisfy governmental regulatory requirements - is a challenge. The current approaches - which entail manually moving files and examining the information they contain to determine how best to handle them - are not adequate for coping with the exploding quantity of unstructured data. Because of this, companies are turning to a new file-management approach: the file area network. A FAN is a set of technologies that organize, route, switch, replicate, and otherwise handle files over networks, all without interrupting user access to information, thereby providing a flexible, intelligent, cost-effective platform to move and manage data.
- Published
- 2007
25. Secure parallel file distribution through a streaming worm network
- Author
-
Michael J. Sheehan
- Subjects
business.industry ,Computer science ,Computer file ,computer.software_genre ,Unix file types ,Data_FILES ,Operating system ,File area network ,File transfer ,Network File System ,Versioning file system ,Electrical and Electronic Engineering ,Secure copy ,SSH File Transfer Protocol ,business ,computer ,Computer network - Abstract
This paper introduces the novel concept of streaming worms and applies the concept to secure parallel file transfer. A streaming worm (sworm) is a powerful class of software that can replicate itself as well as a chunk of arbitrary payload code on a predetermined set of nodes in a network very quickly, while streaming data between all of the nodes in parallel. By harnessing the parallelism and scalability of sworms in a file distribution application, large gigabyte files can be efficiently and securely distributed to a large number of nodes over a Transmission Control Protocol/Internet Protocol (TCP/IP) network without congesting the network. But unlike traditional file transfer tools such as File Transfer Protocol (FTP), remote copy (RCP), or secure copy (SCP), the total sworm transfer time is relatively independent of the number of target nodes for large files. As such, this method of parallel file distribution is particularly useful when a large array or cluster of similar computers has to be quickly updated with a large amount of identical software or data. © 2007 Alcatel-Lucent.
- Published
- 2007
26. The Design of New Journaling File Systems: The DualFS Case
- Author
-
José M. García, Juan Piernas, and Toni Cortes
- Subjects
Computer science ,Stub file ,computer.software_genre ,Theoretical Computer Science ,Design rule for Camera File system ,Data file ,Data_FILES ,Versioning file system ,SSH File Transfer Protocol ,File system fragmentation ,Flash file system ,File system ,Indexed file ,resolv.conf ,Database ,Computer file ,Device file ,computer.file_format ,Unix file types ,Virtual file system ,Torrent file ,File Control Block ,Self-certifying File System ,Computational Theory and Mathematics ,Hardware and Architecture ,Journaling file system ,Operating system ,File area network ,Fork (file system) ,computer ,Software - Abstract
This paper describes the foundation, design, implementation, and evaluation of DualFS, a new high-performance journaling file system that provides the same consistency guarantees as traditional journaling file systems but with greater performance. DualFS places data and metadata on different devices (usually two partitions of the same storage device) and manages them in very different ways. The metadata device is organized as a log-structured file system, whereas the data device is organized in groups. The new design allows DualFS not only to recover consistency quickly after a system crash, but also to improve overall file system performance. We have evaluated DualFS and found that it greatly reduces the total I/O time taken by the file system in most workloads compared with other file systems (Ext2, Ext3, ReiserFS, XFS, and JFS). The work carried out has also allowed us to draw some lessons that ought to be taken into account when implementing new file systems.
- Published
- 2007
27. File Systems-FAT 12/16
- Author
-
Frederic Guillossou and Albert J. Marcella
- Subjects
File Control Block ,Indexed file ,Computer science ,Computer file ,Operating system ,File area network ,Versioning file system ,computer.software_genre ,Unix file types ,computer - Published
- 2015
28. A small file performance optimization algorithm on P2P distributed file system
- Author
-
Yuchang Zhang, Jie Ren, Yinchao Xue, Chen Yingzhuang, Er-teng Liu, Qifei Zhang, and Chaofan Tu
- Subjects
Computer science ,Stub file ,computer.software_genre ,Data file ,Data_FILES ,Versioning file system ,File synchronization ,SSH File Transfer Protocol ,Distributed File System ,File system fragmentation ,Random access memory ,Indexed file ,business.industry ,Computer file ,Device file ,computer.file_format ,Unix file types ,Virtual file system ,Torrent file ,File Control Block ,Self-certifying File System ,Journaling file system ,Operating system ,File area network ,Fork (file system) ,business ,computer ,Merge (version control) ,fstab ,Computer network - Abstract
With the further development of the Internet, the amount of data on the network has grown exponentially in recent years, and the fastest-growing objects are the massive numbers of small files from blogs, forums, and similar services. Distributed file systems with a Master/Slave structure have some shortcomings, such as poor access performance for small files and single-point bottlenecks. Although these problems have been solved to some extent in P2P-structured distributed file systems, there is still room to improve small-file access performance, so a small file merging strategy (SFMS) is proposed in this paper. The throughput of reading small files is increased significantly: experiments show that it is improved by 90% compared with the original system, and is 25 times higher than that of TFS, which is based on a Master/Slave structure.
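The core of a small-file merging strategy is to pack many small files into one large blob plus an offset/length index, so that a read touches one large object instead of many tiny ones. A minimal sketch of that idea (the index layout and names are assumptions, not SFMS internals):

```python
import io

blob = io.BytesIO()  # one large merged file standing in for on-disk storage
index = {}           # file name -> (offset, length) inside the blob

def merge(name, data):
    """Append a small file to the blob and record where it landed."""
    index[name] = (blob.tell(), len(data))
    blob.write(data)

def read(name):
    """Serve a small file with one indexed slice of the large blob."""
    offset, length = index[name]
    return blob.getvalue()[offset:offset + length]

merge("post1.txt", b"first small file")
merge("post2.txt", b"second")
```

Because the metadata per small file shrinks to one index entry, lookups avoid a full per-file metadata path, which is where the throughput gain comes from.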
- Published
- 2015
29. A NOVEL MULTIMEDIA FILE SPLITTING TECHNIQUE FOR MEDICAL DATA GRID STORAGE
- Author
-
Rohaya Latip, Mien May Chong, Masnida Hussin, and Hamidah Ibrahim
- Subjects
Multimedia ,Computer science ,Computer file ,Stub file ,Grid file ,General Engineering ,computer.software_genre ,File Control Block ,Data_FILES ,File area network ,Versioning file system ,SSH File Transfer Protocol ,computer ,File system fragmentation - Abstract
Nowadays, video and image dimensions have grown from 1D and 2D into 3D or 4D (three spatial dimensions plus time), and this improvement has also increased the size of the data. Therefore, a single small storage device is no longer sufficient for storing them. To overcome this problem, a grid-based file storage service is introduced. However, quality of service is very important for such applications, so several quality-of-service requirements, such as the delay of data transmission, the average CPU time per chunk, and the total download time for a complete video file, need to be investigated. In this paper, our grid-based file storage test-bed architecture and previously existing multimedia file splitting techniques are discussed, and a new multimedia file splitting technique, the "Exponential-And-Uniform-based Splitting Technique", is proposed.
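The abstract does not specify how the "Exponential-And-Uniform-based" technique sizes its pieces, so the following is only a generic chunking sketch: split a large file into fixed-size chunks for distribution across grid storage nodes, then reassemble them. The chunk size and names are assumptions for illustration.

```python
CHUNK_SIZE = 4  # tiny for illustration; real systems use MB-scale chunks

def split(data, size=CHUNK_SIZE):
    """Cut a byte string into consecutive chunks of at most `size` bytes."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def reassemble(chunks):
    """Concatenate chunks back into the original file."""
    return b"".join(chunks)

video = b"0123456789abcdef0"  # stand-in for a large multimedia file
chunks = split(video)
```

Whatever sizing rule is used, the round trip must be lossless, which is the property the quality-of-service measurements (per-chunk CPU time, total download time) are layered on top of.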
- Published
- 2015
30. A massive tile data organization and management strategy based on file tree
- Author
-
Ping Jiang, Xiangxiang Li, Hongqiang Wang, and Fanghong Gao
- Subjects
Hierarchy ,Database ,Computer science ,Computer file ,Working directory ,Root directory ,Directory ,computer.software_genre ,Tree (data structure) ,visual_art ,Data_FILES ,Operating system ,visual_art.visual_art_medium ,File area network ,Tile ,computer - Abstract
This paper proposes a massive remote-sensing tile data organization and management strategy based on a file tree. The storage rule of the tree-structured file directory is: root directory \ tile pyramid level \ subdirectory \ tile file data. Here, subdirectories are created dynamically by a strategy of alternating growth. The strategy maintains a dynamic balance between the number of subdirectories and the number of files at each level of the tile directory hierarchy. However, some remaining problems still need to be elaborated.
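The root \ level \ subdirectory \ tile layout can be sketched directly. How the subdirectory is chosen under the "alternating growth" strategy is not given in the abstract, so the column-grouping rule below is an assumption made only to bound the fan-out per directory.

```python
import posixpath

def tile_path(root, level, col, row, group_size=8):
    """Build root/level/subdirectory/tile, grouping tiles by column
    so no directory holds more than `group_size` columns of tiles."""
    subdir = "g%d" % (col // group_size)   # assumed grouping rule
    return posixpath.join(root, "L%02d" % level, subdir,
                          "%d_%d.png" % (col, row))

p = tile_path("/tiles", 3, 19, 7)
```

Keeping both the number of subdirectories and the number of files per directory bounded is what lets the tree stay balanced as the tile pyramid grows.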
- Published
- 2015
31. An Energy-Aware File Relocation Strategy Based on File-Access Frequency and Correlations
- Author
-
Yuhui Deng and Cheng Hu
- Subjects
business.industry ,Computer science ,Device file ,Cold storage ,Energy consumption ,computer.software_genre ,Self-certifying File System ,Node (computer science) ,Operating system ,File area network ,business ,Distributed File System ,computer ,File system fragmentation ,Computer network - Abstract
Energy consumption has become a big challenge for traditional storage systems due to the explosive growth of data, and considerable research effort has been invested in reducing the energy consumption of those systems. Traditionally, the frequently accessed data are concentrated on a small set of hot storage nodes, and the other, cold storage nodes are switched to a low-power state, thus saving energy. However, due to the energy and time penalties, switching a cold storage node from a low-power state to an active state takes extra energy and introduces additional delay. In contrast to existing work, this paper proposes a Skew File Relocate (SFR) strategy which aggregates correlated cold files onto the same cold storage node in addition to concentrating the frequently accessed files on the hot nodes. Because correlated files are normally accessed together, SFR can significantly reduce the number of power-state transitions and lengthen the idle periods that the cold storage nodes experience, thus saving more energy and improving system response time. Furthermore, three other relocation strategies are designed to explore the performance behavior of SFR. Experimental results demonstrate that SFR can significantly reduce energy consumption while maintaining system performance at an acceptable level.
- Published
- 2015
32. A Subscription Overlay Network for Large-Scale and Efficient File Parallel Downloading
- Author
-
Cristopher Barrientos, Patricio Galdames, and Claudio Gutiérrez-Soto
- Subjects
Upload ,business.industry ,Computer science ,Operating system ,Overlay network ,Graph (abstract data type) ,File area network ,Cloud computing ,Latency (engineering) ,computer.software_genre ,business ,computer ,Computer network - Abstract
This paper presents a subscription-based overlay network that supports parallel file downloading for cloud collaboration. First, our system lets users register with a central server and allows this server to incrementally build a topology graph containing the network connections among the subscribers. With this topology graph in place, we address the challenges of minimizing network traffic and choosing the best set of nodes storing a chosen file for parallel downloading. When a subscriber wants to access a file stored in the cloud, our system obtains a list of nodes holding that file. The nodes in this list are sorted considering both their network distance to the subscriber and their workloads. Second, a bandwidth-aware parallel downloading technique is executed over the top-ranked nodes. Finally, our proposed system also leverages idle nodes for file downloading. More specifically, subscribers who are online but not participating in a download are recruited to reduce both network traffic and average latency.
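The node-selection step can be sketched as a weighted ranking over (distance, workload) pairs. The weights and the normalization are illustrative assumptions; the paper does not give its exact scoring function.

```python
def rank_nodes(candidates, w_dist=0.5, w_load=0.5):
    """candidates: list of (node, distance_ms, load in 0..1).
    Lower combined score is better; distance is scaled to ~0..1."""
    return sorted(candidates,
                  key=lambda c: w_dist * c[1] / 100.0 + w_load * c[2])

# Three nodes holding the requested file: near-but-busy, near-and-idle, far.
candidates = [("n1", 80, 0.9), ("n2", 20, 0.2), ("n3", 40, 0.1)]
best = [node for node, _, _ in rank_nodes(candidates)[:2]]
```

The top-ranked slice of this list is then handed to the parallel downloader, so a nearby but overloaded node can lose to a slightly more distant idle one.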
- Published
- 2015
33. The broken-point continuingly-transferring scheme of large files based on HTML5
- Author
-
Limin Wang, Zhenghe Liang, and Quanfeng Duan
- Subjects
Computer science ,Stub file ,Class implementation file ,JavaScript ,computer.software_genre ,Upload ,Data file ,Data_FILES ,Web application ,Versioning file system ,SSH File Transfer Protocol ,File synchronization ,File system fragmentation ,Server-side ,computer.programming_language ,Indexed file ,business.industry ,Computer file ,Device file ,computer.file_format ,Client-side ,Unix file types ,Virtual file system ,Torrent file ,Memory-mapped file ,File Control Block ,Self-certifying File System ,Journaling file system ,Operating system ,File area network ,Fork (file system) ,business ,computer - Abstract
In Web applications, it is often necessary to upload a file to the server. With current file upload methods, it is difficult to handle large file uploads, and the user experience is poor: uploading big files often fails because of network interruptions, and the client then has to re-upload from the beginning. With the development of HTML5, a series of APIs for file operations has emerged. This makes it possible to use JavaScript on the client side to slice local files and thereby implement broken-point continuing transfer (resumable upload). On this basis, this paper solves the timeout problem of merging files and the correctness problem of the final file on the server side.
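The server-side half of a resumable upload can be sketched as follows: numbered chunks arrive in any order and across reconnects, the server merges only once all are present, and a client-supplied hash confirms the correctness of the final file. All names here are illustrative assumptions, not the paper's implementation.

```python
import hashlib

received = {}  # chunk index -> bytes

def receive_chunk(i, data):
    received[i] = data  # re-sent chunks overwrite, so retries are safe

def try_merge(total_chunks, expected_sha256):
    """Merge in index order once complete; verify the final file's hash."""
    if len(received) < total_chunks:
        return None  # still waiting; the client can resume later
    merged = b"".join(received[i] for i in range(total_chunks))
    assert hashlib.sha256(merged).hexdigest() == expected_sha256
    return merged

original = b"a large upload split into pieces"
parts = [original[i:i + 8] for i in range(0, len(original), 8)]
digest = hashlib.sha256(original).hexdigest()

receive_chunk(0, parts[0])
partial = try_merge(len(parts), digest)   # interrupted: not all chunks yet
for i, p in enumerate(parts):
    receive_chunk(i, p)                   # client resumes and sends the rest
final = try_merge(len(parts), digest)
```

Merging by index rather than by arrival order is what makes the final file correct regardless of how many times the transfer was interrupted.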
- Published
- 2015
34. Configuring a File Server
- Author
-
Sander van Vugt
- Subjects
Software_OPERATINGSYSTEMS ,Computer science ,Stub file ,Unix file types ,computer.software_genre ,AppleShare ,File server ,Self-certifying File System ,Data_FILES ,Operating system ,File area network ,Versioning file system ,SSH File Transfer Protocol ,computer - Abstract
A very common task that people use Linux for is to configure it as a file server. For this task, Linux is very versatile; it offers support for all common protocols. In this chapter, you'll learn how to configure Linux as a file server using either Samba or NFS.
- Published
- 2015
35. OPTIMAL CLUSTERING SIZE OF SMALL FILE ACCESS IN NETWORK ATTACHED STORAGE DEVICE
- Author
-
Na Helian, Yuhui Deng, Ke Zhou, Frank Z. Wang, and Dan Feng
- Subjects
business.industry ,Computer science ,Computer file ,Stub file ,computer.software_genre ,Unix file types ,Theoretical Computer Science ,Storage area network ,Hardware and Architecture ,Data_FILES ,Operating system ,Versioning file system ,File area network ,business ,SSH File Transfer Protocol ,computer ,Software ,File system fragmentation ,Computer network - Abstract
Email and the short message service are pervasive on the Internet and continue to grow rapidly, which propels research on small-file access in storage systems. Clustering technology places the logical data blocks of multiple small files on physically contiguous disk blocks and accesses them as a single unit; it is normally adopted to improve small-file access performance. This paper constructs a mathematical analysis model to find the optimal clustering size for small-file access in Network Attached Storage (NAS). The analysis indicates that the optimal clustering size for small-file access is the product of one cylinder size and the number of disks in the NAS. Experimental results provide a useful validation of our analysis. The analysis results can be applied to optimize NAS-oriented system software and the design of the corresponding application software.
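The quoted result (optimal clustering size = one cylinder size × number of disks) is easy to evaluate numerically. The disk geometry figures below are made-up illustrative values, not measurements from the paper.

```python
# Assumed disk geometry for illustration only.
sectors_per_track = 500
tracks_per_cylinder = 8   # i.e. number of recording surfaces
sector_bytes = 512
disks = 4                 # number of disks in the NAS

# One cylinder = all sectors reachable without a seek on one disk.
cylinder_bytes = sectors_per_track * tracks_per_cylinder * sector_bytes

# Optimal clustering size per the abstract: cylinder size times disk count.
optimal_cluster_bytes = cylinder_bytes * disks
```

Intuitively, a cluster of this size fills whole cylinders across all disks, so a clustered read of many small files proceeds without extra seeks on any spindle.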
- Published
- 2006
36. File- and device-sharing
- Author
-
Mike Hendry
- Subjects
File Control Block ,Indexed file ,Computer science ,Computer file ,Stub file ,Operating system ,ZAP File ,File area network ,Versioning file system ,computer.software_genre ,Unix file types ,computer - Published
- 2014
37. Small file access optimization based on GlusterFS
- Author
-
Liang A-lei and Xie Tao
- Subjects
Computer science ,Stub file ,Class implementation file ,computer.software_genre ,Design rule for Camera File system ,Data file ,Data_FILES ,Versioning file system ,Distributed File System ,SSH File Transfer Protocol ,File system fragmentation ,File system ,Indexed file ,Database ,Computer file ,ext4 ,Device file ,computer.file_format ,Unix file types ,Virtual file system ,Torrent file ,File Control Block ,Self-certifying File System ,Journaling file system ,Operating system ,File area network ,Fork (file system) ,computer ,Merge (version control) - Abstract
This paper describes a strategy to optimize small-file read and write performance on a traditional distributed file system. A traditional distributed file system such as GlusterFS stores data in a local file system (XFS, EXT3, EXT4, etc.), which exhibits a significant bottleneck in file metadata lookup. We redesign the metadata structure by merging small files into large files, reducing the size of the metadata so that all file metadata can be kept in main memory. We design and implement the whole strategy on GlusterFS; test results show a substantial performance improvement for small-file operations.
- Published
- 2014
38. Design Methodologies of Transaction-Safe Cluster Allocations in TFAT File System for Embedded Storage Devices
- Author
-
Keshava Munegowda, G. T. Raju, Veeramanikandan Raju, and T. N. Manjunath
- Subjects
Computer science ,Stub file ,computer.software_genre ,Flash memory ,Design rule for Camera File system ,File allocation table ,Data_FILES ,Versioning file system ,Distributed File System ,SSH File Transfer Protocol ,File system fragmentation ,Flash file system ,File system ,business.industry ,Computer file ,Windows CE ,Device file ,computer.file_format ,Unix file types ,Virtual file system ,Torrent file ,File Control Block ,Self-certifying File System ,Journaling file system ,Embedded system ,Computer data storage ,Operating system ,ZAP File ,File area network ,Fork (file system) ,business ,computer - Abstract
The File Allocation Table (FAT) file system is a widely used file system in tablet personal computers, mobile phones, digital cameras, and other embedded devices for data storage and multimedia applications such as video imaging, audio/video playback, and recording. The FAT file system is not power-fail-safe: uncontrolled power loss or abrupt removal of the storage device from the computer or embedded system causes file system corruption. The TFAT (Transaction-safe FAT) file system is an extension of the FAT file system that provides a power-fail-safe feature. This paper explores the design methodologies of the cluster allocation algorithms of the TFAT file system by conducting various combinations of file system operations on the Windows CE (Compact Embedded) 6.0 operating system (OS). This paper also reports performance benchmarking of the TFAT file system in comparison with the FAT file system.
- Published
- 2014
39. RWFS: Design and implementation of file system executing access control based on user's location
- Author
-
Yuki Yagi, Yoshito Tobe, Hiroki Saito, and Naofumi Kitsunezaki
- Subjects
Computer science ,Stub file ,Directory ,computer.software_genre ,Design rule for Camera File system ,Data file ,Data_FILES ,Versioning file system ,SSH File Transfer Protocol ,File system fragmentation ,File system ,Indexed file ,Database ,Computer file ,Working directory ,Device file ,computer.file_format ,Unix file types ,Torrent file ,File Control Block ,Self-certifying File System ,Journaling file system ,ZAP File ,Operating system ,File area network ,Fork (file system) ,computer - Abstract
In this research, we designed and implemented the Real-World File System (RWFS), which can manage files as if we could place them at, or pick them up from, places in the real world. RWFS treats places in the real world as directories of the file system by associating a directory with a place. We create directories called Real-World Directories (RWDs), which form a hierarchical structure reflecting the natural properties of places. In addition to the conventional access rights of read, write, and execute as implemented in other file systems, RWFS incorporates the location of the target user into access rights: RWFS can decide whether or not a user can access a particular file or directory based on the user's location. Therefore, the files accessible to a user change depending on the user's location. This mechanism enables creating information that can be read or written only by users who are physically at a particular place. We evaluated the system by measuring the turnaround time of file system operations, together with simulation.
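The location-aware access check can be sketched as a distance test against the place bound to a directory. The coordinates, radius, and flat-distance approximation below are illustrative assumptions, not RWFS internals.

```python
import math

# Each Real-World Directory is bound to a place: path -> (center, radius).
places = {"/rwd/library": ((35.0, 139.0), 0.01)}

def can_read(path, user_pos):
    """Allow access only if the user's position lies within the
    place's radius (simple planar distance for illustration)."""
    (cx, cy), radius = places[path]
    return math.hypot(user_pos[0] - cx, user_pos[1] - cy) <= radius

inside = can_read("/rwd/library", (35.001, 139.001))
outside = can_read("/rwd/library", (35.5, 139.5))
```

The same file thus appears accessible or inaccessible as the user moves, which is the mechanism behind place-bound readable/writable information.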
- Published
- 2014
40. Architecture distributed file system
- Author
-
I. A. Botygin and Vladimir Popov
- Subjects
File Control Block ,Self-certifying File System ,Computer science ,Computer file ,Operating system ,File area network ,computer.file_format ,SSH File Transfer Protocol ,Unix file types ,computer.software_genre ,computer ,File system fragmentation ,Torrent file - Published
- 2014
41. CSFS: a Java enabled network file storage system
- Author
-
Xiaoming Li, Hua Han, and Yafei Dai
- Subjects
Name server ,File Transfer Protocol ,Computer Networks and Communications ,Computer science ,Download ,Stub file ,computer.software_genre ,Theoretical Computer Science ,Upload ,File server ,Data_FILES ,Versioning file system ,SSH File Transfer Protocol ,Global Namespace ,Distributed File System ,File system fragmentation ,File system ,Indexed file ,resolv.conf ,Database ,Computer file ,Andrew File System ,Device file ,computer.file_format ,Everything is a file ,Unix file types ,Virtual file system ,Computer Science Applications ,Torrent file ,File Control Block ,Self-certifying File System ,Computational Theory and Mathematics ,Journaling file system ,Operating system ,File area network ,Fork (file system) ,computer ,File storage ,Software - Abstract
The CSFS (cryptographic storage file system) is a network file storage system that is suitable for small-to-medium sized networks such as a campus network. In a CSFS, many distributed file servers are organized in a star-like architecture. File names are stored in a name server and file data are stored on distributed file servers. A CSFS has good scalability and is able to accommodate hundreds of file servers. We implemented a CSFS in pure Java and tested the system with a benchmark. The test results show that CSFS delivers acceptable performance for general file operations. Its file upload and download performance is as efficient as FTP, and it can support more than 450 concurrent online users. Copyright © 2005 John Wiley & Sons, Ltd.
- Published
- 2005
42. The MOSIX Direct File System Access Method for Supporting Scalable Cluster File Systems
- Author
-
Lior Amar, Amnon Barak, and Amnon Shiloh
- Subjects
Computer Networks and Communications ,Computer science ,Stub file ,Access method ,Class implementation file ,computer.software_genre ,File server ,Computer cluster ,Data file ,Data_FILES ,Versioning file system ,Global Namespace ,Distributed File System ,SSH File Transfer Protocol ,Process migration ,File system fragmentation ,File system ,Computer file ,Device file ,computer.file_format ,Unix file types ,Virtual file system ,Torrent file ,File Control Block ,Self-certifying File System ,Journaling file system ,Operating system ,File area network ,Fork (file system) ,computer ,Software - Abstract
MOSIX is a cluster management system that supports preemptive process migration. This paper presents the MOSIX Direct File System Access (DFSA), a provision that can improve the performance of cluster file systems by allowing a migrated process to directly access files in its current location. This capability, when combined with an appropriate file system, could substantially increase the I/O performance and reduce the network congestion by migrating an I/O intensive process to a file server rather than the traditional way of bringing the file's data to the process. DFSA is suitable for clusters that manage a pool of shared disks among multiple machines. With DFSA, it is possible to migrate parallel processes from a client node to file servers for parallel access to different files. Any consistent file system can be adjusted to work with DFSA. To test its performance, we developed the MOSIX File-System (MFS) which allows consistent parallel operations on different files. The paper describes DFSA and presents the performance of MFS with and without DFSA.
- Published
- 2004
43. Global namespace for files
- Author
-
R. Sarkar, M. Pereira, J. Xu, Owen T. Anderson, Leo Shyh-Wei Luan, and C. Everhart
- Subjects
General Computer Science ,Database ,business.industry ,Computer science ,Storage Resource Broker ,Computer file ,computer.software_genre ,Computer Graphics and Computer-Aided Design ,Virtual file system ,Theoretical Computer Science ,Self-certifying File System ,Computational Theory and Mathematics ,Data_FILES ,File area network ,Versioning file system ,SSH File Transfer Protocol ,business ,Global Namespace ,computer ,Software ,Information Systems ,Computer network - Abstract
We propose a name service that enables construction of a uniform, global, hierarchical namespace, a key feature needed to create a file-system grid. Combined with other grid replication and location-lookup mechanisms, it supports independence of position for users and applications as well as transparency of data location in a scalable and secure fashion. This name service enables federation of individual files as well as file-system trees that are exported by a variety of distributed file systems and is extensible to include nonfile-system data such as databases or live data feeds. Such a federated namespace for files can be rendered by network file servers, such as NFS (Network File System) or CIFS (Common Internet File System) servers, proxies supporting the NAS (network-attached storage) protocol, or grid data service interfaces. File access proxies, which handle protocol translation, can also include caching and replication support to enhance data access performance. A uniform namespace with global scope and hierarchical ownership allows sharing file data between and within organizations without compromising security or autonomy.
- Published
- 2004
44. GnuViz – Mapping the Gnutella Network to its Geographical Locations
- Author
-
Gerald Kunzmann and Rüdiger Schollmeier
- Subjects
Computer science ,business.industry ,Server ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Network mapping ,File area network ,Overlay network ,Context (language use) ,Web crawler ,business ,Host (network) ,Protocol (object-oriented programming) ,Computer network - Abstract
Gnutella is a classical peer-to-peer network designed for file sharing. The absence of pure servers is one of its main properties: every Gnutella host is client and server in one. It uses the resources of the participants to distribute content, e.g. MP3-compressed audio files, and shares their processing capacity to provide the routing and searching capabilities of the network. In this work we present GnuViz, a tool to visualize the geographical context of the virtual overlay network established by the Gnutella protocol. To this end, a Gnutella network crawler is used to perform real-life measurements of the Gnutella network. With the additional aid of a geographical database, the acquired IP address of each logged participant can be assigned to its geographical coordinates. A Java-based script finally displays the network structure on arbitrary world maps. Using GnuViz, we are able to substantiate the shortcomings of Gnutella and propose protocol modifications to improve its network behavior.
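The core step of the GnuViz pipeline — assigning each crawled peer's IP address to coordinates via a geographical database — reduces to a prefix lookup. The table entries below are illustrative stand-ins for a real IP-geolocation database.

```python
# Toy version of the GnuViz geolocation step: match a crawled peer's IP
# against network prefixes and return coordinates. The prefixes and
# coordinates below are illustrative, not real database contents.
import ipaddress

GEO_DB = {
    ipaddress.ip_network("129.187.0.0/16"): (48.15, 11.57),   # e.g. Munich
    ipaddress.ip_network("18.0.0.0/8"):     (42.36, -71.09),  # e.g. Cambridge, MA
}

def locate(ip: str):
    """Return (latitude, longitude) for an IP, or None if no prefix matches."""
    addr = ipaddress.ip_address(ip)
    for net, coords in GEO_DB.items():
        if addr in net:
            return coords
    return None
```

Feeding every logged participant through such a lookup yields the coordinate set that the visualization script plots on world maps.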
- Published
- 2003
45. ALGORITHMS FOR HIGH PERFORMANCE, WIDE-AREA DISTRIBUTED FILE DOWNLOADS
- Author
-
Scott Atchley, James S. Plank, Micah Beck, and Ying Ding
- Subjects
business.industry ,Computer science ,computer.software_genre ,Storage model ,Theoretical Computer Science ,Upload ,Resource (project management) ,Hardware and Architecture ,Wide area network ,Computer data storage ,Redundancy (engineering) ,Operating system ,File area network ,Cache ,business ,Algorithm ,computer ,Software - Abstract
As peer-to-peer and wide-area storage systems come into vogue, the issue of delivering content that is cached, partitioned and replicated in the wide area, with high performance, becomes of great importance. This paper explores three algorithms for such downloads. The storage model is based on the Network Storage Stack, which allows for flexible sharing and utilization of writable storage as a network resource. The algorithms assume that data is replicated in various storage depots in the wide area, and the data must be delivered to the client either as a downloaded file or as a stream to be consumed by an application, such as a media player. The algorithms are threaded and adaptive, attempting to get good performance from nearby replicas while still utilizing the faraway replicas. After defining the algorithms, we explore their performance downloading a 50 MB file replicated on six storage depots in the U.S., Europe and Asia, to two clients in different parts of the U.S. One algorithm, called progress-driven redundancy, exhibits excellent performance characteristics for both file and streaming downloads.
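The progress-driven redundancy idea can be sketched as a greedy scheduler: each block goes to the depot with the earliest estimated finish, and a redundant copy is requested from another depot only when estimated progress falls behind a threshold. This single-threaded simulation is an illustrative simplification of the paper's threaded algorithm; depot speeds are made-up numbers.

```python
# Simplified sketch of progress-driven redundancy. depots maps a depot name
# to its cost in seconds per block. Each block is assigned to the depot with
# the earliest estimated finish; if even that finish time exceeds the
# threshold, a redundant request is issued to the next-best depot.
def schedule(num_blocks, depots, redundancy_threshold):
    finish = {d: 0.0 for d in depots}          # estimated busy-until time
    plan = []
    for b in range(num_blocks):
        ranked = sorted(depots, key=lambda d: finish[d] + depots[d])
        best = ranked[0]
        finish[best] += depots[best]
        extra = None
        if finish[best] > redundancy_threshold and len(ranked) > 1:
            extra = ranked[1]                  # redundant copy from runner-up
            finish[extra] += depots[extra]
        plan.append((b, best, extra))
    return plan
```

With a generous threshold no block is duplicated; as the threshold tightens, slow tail blocks start being fetched redundantly, which is exactly what keeps streaming downloads from stalling.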
- Published
- 2003
46. Hybrid Channel Model in Parallel File System
- Author
-
Dae Wha Seo, Yoon Young Lee, and Jun Hyung Hwangbo
- Subjects
File system ,File Control Block ,Self-certifying File System ,Computer science ,Computer file ,Stub file ,Operating system ,Device file ,File area network ,computer.software_genre ,SSH File Transfer Protocol ,computer - Abstract
A parallel file system resolves the I/O bottleneck by storing a file in a distributed fashion and reading it in parallel, exchanging messages among multiple computers connected by high-speed networks. However, existing systems do not consider the characteristics of these messages, and performance suffers as a result. Accordingly, the current study proposes the Hybrid Channel Model (HCM) as a message-management method, whereby the messages of a parallel file system are classified by their characteristics into control messages and file data blocks, and the communication channel is divided into a message channel and a data channel. The message channel transfers the control messages reliably through TCP/IP, while the data channel, implemented with the Virtual Interface Architecture (VIA), transfers the file data blocks at high speed. In tests, a parallel file system implemented with HCM exhibited considerably improved performance.
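HCM's central mechanism — classifying each message by type and routing it to the matching channel — can be shown in a few lines. The message-type names and channel labels below are illustrative assumptions, not identifiers from the paper.

```python
# Sketch of HCM's message classification: control messages travel over the
# reliable TCP/IP message channel, file data blocks over the fast VIA data
# channel. Type and channel names here are illustrative.
CONTROL_TYPES = {"open", "close", "metadata", "lock"}

def pick_channel(msg_type: str) -> str:
    """Route a parallel-file-system message to the appropriate channel."""
    return "tcp_control" if msg_type in CONTROL_TYPES else "via_data"
```

The design choice mirrors the abstract: small, loss-sensitive control traffic pays for TCP's reliability, while bulk block transfers bypass that overhead.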
- Published
- 2003
47. [Untitled]
- Author
-
Darrell C. Anderson and Jeffrey S. Chase
- Subjects
Computer Networks and Communications ,Computer science ,Stub file ,computer.software_genre ,Storage area network ,File server ,Data_FILES ,Network File System ,SSH File Transfer Protocol ,Distributed File System ,Global Namespace ,File system fragmentation ,Block (data storage) ,Atomicity ,business.industry ,Computer file ,Device file ,Unix file types ,Virtual file system ,Shared resource ,File Control Block ,Self-certifying File System ,Scalability ,Computer data storage ,Operating system ,File area network ,Data striping ,business ,computer ,Software ,Computer network - Abstract
This paper presents a recovery protocol for block I/O operations in Slice, a storage system architecture for high-speed LANs incorporating network-attached block storage. The goal of the Slice architecture is to provide a network file service with scalable bandwidth and capacity while preserving compatibility with off-the-shelf clients and file server appliances. The Slice prototype virtualizes the Network File System (NFS) protocol by interposing a request switching filter at the client's interface to the network storage system. The distributed Slice architecture separates functions typically combined in central file servers, introducing new challenges for failure atomicity. This paper presents a protocol for atomic file operations and recovery in the Slice architecture, and related support for reliable file storage using mirrored striping. Experimental results from the Slice prototype show that the protocol has low cost in the common case, allowing the system to deliver client file access bandwidths approaching gigabit-per-second network speeds.
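The mirrored-striping scheme the paper relies on for reliable storage can be sketched as a placement rule: every stripe unit lands on two distinct nodes, so any single node failure leaves a complete copy readable. The round-robin rule below is an illustrative layout, not Slice's actual placement algorithm.

```python
# Illustrative mirrored-striping layout (not Slice's real placement):
# each stripe unit is written to a primary node and a different mirror node.
def place(num_units, nodes):
    """Return [(primary_node, mirror_node), ...] for each stripe unit."""
    n = len(nodes)
    assert n >= 2, "mirroring requires at least two nodes"
    layout = []
    for i in range(num_units):
        primary = nodes[i % n]
        mirror = nodes[(i + 1) % n]   # (i+1) % n != i % n whenever n >= 2
        layout.append((primary, mirror))
    return layout
```

Because primary and mirror indices always differ, losing one node never loses both copies of a unit, which is the invariant the recovery protocol depends on.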
- Published
- 2002
48. Dynamic Routing in Distributed File System
- Author
-
Radek Strejc, Jindřich Skupa, Jiří Šafařík, Ladislav Pesicka, and Luboš Matějka
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Computer file ,Stub file ,computer.software_genre ,Self-certifying File System ,Operating system ,Versioning file system ,File area network ,Network File System ,Fork (file system) ,business ,SSH File Transfer Protocol ,computer ,Computer network - Abstract
Mobile devices, such as smart phones, tablets, and netbooks, are now becoming increasingly popular. Effective access to remote files on servers in the network should be granted to the user. Today's common distributed file systems, such as OpenAFS, Coda, and others, are not suitable for use on mobile devices. The user is forced either to access a particular server or at least to write to a server containing a master write replica. The path to the data is not chosen by the file system, so the data transmission cannot reflect the user's actual position. We propose a new file system supporting multi-master write replicas and routing of data requests.
- Published
- 2014
49. Evolution Towards Distributed Storage in a Nutshell
- Author
-
Mocanu Mariana, Moldoveanu Florica, Negru Catalin, Asavei Victor, Moldoveanu Alin, Geanta Horia, and Pistirica Sorin Andrei
- Subjects
Computer science ,Distributed computing ,Cloud computing ,inode ,computer.software_genre ,Storage area network ,Server ,Distributed data store ,Storage security ,Data_FILES ,Network File System ,Versioning file system ,SSH File Transfer Protocol ,Global Namespace ,Distributed File System ,File system fragmentation ,File system ,Distributed database ,business.industry ,Storage Resource Broker ,Computer file ,Andrew File System ,computer.file_format ,Unix file types ,Virtual file system ,Replication (computing) ,Torrent file ,Object storage ,Self-certifying File System ,Journaling file system ,File area network ,Data center ,business ,computer ,Computer network - Abstract
Distributed storage systems have greatly evolved due to the cloud computing upsurge of the past several years. Distributed file systems inherit many components from centralized ones and use them in a distributed manner. There are two ways to grow storage capacity: by scaling up, or by scaling out and increasing the number of storage devices in a storage system. The growth in storage devices imposes many challenges related to interconnection protocols and topologies, error handling, data consistency, security, and so on. In this article we study how distributed and parallel storages have evolved from direct-connected storages in terms of architecture, data management, and organization, and how the new challenges imposed by data distribution have been solved. We have selected for study several of the most representative distributed storage solutions: Andrew File System, Google File System, General Parallel File System, Lustre, and Ceph. First, we emphasize how a generic distributed storage layout was inspired by a structured disk layout (the Berkeley Fast File System). Second, we describe the evolution path of distributed storages from a wide variety of perspectives, including distribution units, which are moving from blocks to objects due to their undeniable advantages, and distribution methods, which have evolved from lists, much like inode mapping, to deterministic hash functions such as RUSH or CRUSH. Third, networks are evolving very fast in terms of topologies and protocols; using graph theory, researchers are continuously improving different aspects of cluster networks. Fourth, storage security is a critical component, given the demand to store sensitive data for the long term and to share it securely while impacting system performance as little as possible.
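The survey's contrast between list-based placement (inode-style lookup tables) and deterministic hashing (in the spirit of RUSH and CRUSH) can be made concrete. Note that real CRUSH walks a weighted device hierarchy; the flat modulo-hash below is only a minimal illustration, and the node names are made up.

```python
# Two placement methods from the survey, side by side. place_by_table needs
# a stored, queried map of every object's location; place_by_hash lets any
# client compute the location independently, with no lookup service.
import hashlib

def place_by_table(obj_id, table):
    """Centralized, inode-style lookup: location metadata must be maintained."""
    return table[obj_id]

def place_by_hash(obj_id, nodes):
    """Deterministic hash placement (flat sketch of the RUSH/CRUSH idea)."""
    digest = hashlib.sha256(obj_id.encode()).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]
```

The hash variant removes the metadata bottleneck, which is one reason the survey identifies the shift from lists to deterministic functions as a major evolutionary step.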
- Published
- 2014
50. I-Shadow: A Wide Area File Service Using Autonomous Disks
- Author
-
Jun Miyazaki, Tsutomu Fujiwara, and Shunsuke Uemura
- Subjects
Computer science ,business.industry ,Computer file ,Stub file ,Unix file types ,computer.software_genre ,File Control Block ,Self-certifying File System ,Data_FILES ,Operating system ,File area network ,SSH File Transfer Protocol ,business ,computer ,File system fragmentation ,Computer network - Abstract
In this paper, we propose the architecture of a wide-area file service infrastructure, called I-Shadow. I-Shadow can provide both short-delay file access and dependable file management for mobile users by using a set of active rules. We also show, through a simulation study, that the average file access time of I-Shadow is shorter than that of the cache-based Coda distributed file system.
- Published
- 2014