18 results for "Dong, Guiqiang"
Search Results
2. Hybrid hard-/soft-decision LDPC decoding strategy for NAND flash memory
- Author
- Zhao, Wenzhe, Dong, Guiqiang, Sun, Hongbin, Zhang, Tong, and Zheng, Nanning
- Published
- 2014
- Full Text
- View/download PDF
3. Reducing latency overhead caused by using LDPC codes in NAND flash memory
- Author
- Zhao, Wenzhe, Dong, Guiqiang, Sun, Hongbin, Zheng, Nanning, and Zhang, Tong
- Published
- 2012
- Full Text
- View/download PDF
4. LDPC Decoding with Limited-Precision Soft Information in Flash Memories
- Author
- Wang, Jiadong, Dong, Guiqiang, Courtade, Thomas, Shankar, Hari, Zhang, Tong, and Wesel, Richard
- Subjects
- FOS: Computer and information sciences, Information Theory (cs.IT), Computer Science - Information Theory, Data_CODINGANDINFORMATIONTHEORY, Computer Science::Information Theory
- Abstract
This paper investigates the application of low-density parity-check (LDPC) codes to Flash memories. Multiple cell reads with distinct word-line voltages provide limited-precision soft information for the LDPC decoder. The values of the word-line voltages (also called reference voltages) are optimized by maximizing the mutual information (MI) between the input and output of the multiple-read channel. Constraining the maximum mutual-information (MMI) quantization to enforce a constant-ratio constraint provides a significant simplification with no noticeable loss in performance. Our simulation results suggest that for a well-designed LDPC code, the quantization that maximizes the mutual information will also minimize the frame error rate. However, care must be taken to design the code to perform well in the quantized channel. An LDPC code designed for a full-precision Gaussian channel may perform poorly in the quantized setting. Our LDPC code designs provide an example where quantization increases the importance of absorbing sets, thus changing how the LDPC code should be optimized. Simulation results show that small increases in precision enable the LDPC code to significantly outperform a BCH code with comparable rate and block length (but without the benefit of the soft information) over a range of frame error rates.
- Published
- 2012
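The reference-voltage optimization summarized in the abstract above can be illustrated with a minimal sketch: a Python routine (not the authors' code; the Gaussian channel model, the brute-force grid search, and every function name are assumptions made here for illustration) that picks a single word-line voltage by maximizing the mutual information of the resulting quantized read channel.

```python
import math

def norm_cdf(x, mu, sigma):
    # Gaussian CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def mutual_information(thresholds, mus, sigma):
    # MI (bits) between an equiprobable binary input (two Gaussian
    # threshold-voltage distributions with means in `mus`) and the
    # region index induced by the sorted read thresholds.
    edges = [-math.inf] + sorted(thresholds) + [math.inf]
    mi = 0.0
    for lo, hi in zip(edges, edges[1:]):
        p_y = 0.0
        p_xy = []
        for mu in mus:
            p = norm_cdf(hi, mu, sigma) - norm_cdf(lo, mu, sigma)
            p_xy.append(0.5 * p)   # P(x) = 1/2 for each stored bit value
            p_y += 0.5 * p
        for pxy in p_xy:
            if pxy > 0:
                mi += pxy * math.log2(pxy / (0.5 * p_y))
    return mi

def best_single_threshold(mus, sigma, lo=-4.0, hi=4.0, steps=400):
    # Brute-force sweep of one word-line voltage; real designs optimize
    # several read thresholds jointly (and the paper adds a
    # constant-ratio constraint to simplify that search).
    grid = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    return max(grid, key=lambda t: mutual_information([t], mus, sigma))
```

For two symmetric level distributions the MI-optimal single read voltage lands midway between them, matching the intuition that the hard-decision threshold is the first read one would place.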
5. Exploiting Intracell Bit-Error Characteristics to Improve Min-Sum LDPC Decoding for MLC NAND Flash-Based Storage in Mobile Device.
- Author
- Sun, Hongbin, Zhao, Wenzhe, Lv, Minjie, Dong, Guiqiang, Zheng, Nanning, and Zhang, Tong
- Subjects
- DATA integrity, NAND gates, LOW density parity check codes, SOLID-state lasers, MOBILE operating systems
- Abstract
The multilevel-per-cell (MLC) technique significantly improves storage density but also poses serious data integrity challenges for NAND flash memory. This consequently makes low-density parity-check (LDPC) codes and soft-decision memory sensing indispensable in next-generation flash-based solid-state storage devices. However, the use of LDPC codes inevitably increases memory read latency and hence degrades speed performance. Motivated by the observation of intracell unbalanced bit error probability and data dependence in MLC NAND flash memory, this paper proposes two techniques, i.e., intracell data placement interleaving and intracell data-dependence-aware LDPC decoding, to efficiently improve the LDPC decoding throughput and energy efficiency of MLC NAND flash-based storage in a mobile device. Experimental results show that, by exploiting the intracell bit-error characteristics, the proposed techniques together can improve the LDPC decoding throughput by up to 84.6% and reduce the energy consumption by up to 33.2% while incurring less than 0.2% silicon area overhead. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
6. Improving min-sum LDPC decoding throughput by exploiting intra-cell bit error characteristic in MLC NAND flash memory.
- Author
- Zhao, Wenzhe, Sun, Hongbin, Lv, Minjie, Dong, Guiqiang, Zheng, Nanning, and Zhang, Tong
- Published
- 2014
- Full Text
- View/download PDF
7. Quasi-nonvolatile SSD: Trading flash memory nonvolatility to improve storage system performance for enterprise applications.
- Author
- Pan, Yangyang, Dong, Guiqiang, Wu, Qi, and Zhang, Tong
- Abstract
This paper advocates a quasi-nonvolatile solid-state drive (SSD) design strategy for enterprise applications. The basic idea is to trade data retention time of NAND flash memory for other system performance metrics, including program/erase (P/E) cycling endurance and memory programming speed, and meanwhile use explicit internal data refresh to accommodate very short data retention time (e.g., a few weeks or even days). We also propose SSD scheduling schemes to minimize the impact of internal data refresh on normal I/O requests. Based upon detailed memory cell device modeling and SSD system modeling, we carried out simulations that clearly show the potential of this simple quasi-nonvolatile SSD design strategy to improve system cycling endurance and speed performance. We also performed detailed energy consumption estimation, which shows that the energy consumption overhead induced by data refresh is negligible. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
8. Reducing data transfer latency of NAND flash memory with soft-decision sensing.
- Author
- Dong, Guiqiang, Zou, Yuelin, and Zhang, Tong
- Abstract
With the aggressive technology scaling and use of multi-bit per cell storage, NAND flash memory is subject to continuous degradation of raw storage reliability and demands more and more powerful error correction codes (ECC). This inevitable trend makes conventional BCH code increasingly inadequate, and iterative coding solutions such as LDPC codes become very natural alternative options. However, these powerful coding solutions demand soft-decision memory sensing, which results in longer on-chip memory sensing latency and memory-to-controller data transfer latency. This paper presents two simple design techniques that can reduce the memory-to-controller data transfer latency. The key is to appropriately apply entropy coding to compress the memory sensing results. Simulation results show that the proposed design solutions can reduce the data transfer latency by up to 64% for soft-decision memory sensing. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
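The key idea in the abstract above, applying entropy coding to compress soft-decision sensing results before the memory-to-controller transfer, can be sketched with a toy Huffman coder (this is not the paper's actual scheme; the function names, the symbol distribution, and the fixed-length baseline are all assumptions for illustration). Soft sensing of most cells resolves confidently to a few level indices, so the index stream is highly skewed and compresses well.

```python
import heapq
from collections import Counter

def huffman_lengths(freqs):
    # Optimal prefix-code lengths from symbol frequencies (classic
    # Huffman merge); assumes at least two distinct symbols.
    heap = [(f, i, (s,)) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            lengths[s] += 1          # every symbol under a merge sinks one level
        heapq.heappush(heap, (f1 + f2, tiebreak, syms1 + syms2))
        tiebreak += 1
    return lengths

def avg_bits(sense_indices, fixed_bits):
    # Compare a fixed-length transfer (fixed_bits per sensed level) with
    # a Huffman-coded transfer for one page of soft sensing level indices.
    freqs = Counter(sense_indices)
    lengths = huffman_lengths(freqs)
    total = sum(freqs.values())
    coded = sum(freqs[s] * lengths[s] for s in freqs) / total
    return fixed_bits, coded
```

On a skewed page where one level dominates, the coded average drops well below the fixed-length cost, which is the latency saving the paper exploits.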
9. Using Lifetime-Aware Progressive Programming to Improve SLC NAND Flash Memory Write Endurance.
- Author
- Dong, Guiqiang, Pan, Yangyang, and Zhang, Tong
- Subjects
- NAND gates, LOGIC circuits, FLASH memory, RANDOM access memory, COMPUTER programming, FAULT-tolerant computing
- Abstract
This paper advocates a lifetime-aware progressive programming concept to improve single-level-per-cell NAND flash memory write endurance. NAND flash memory program/erase (P/E) cycling gradually degrades memory cell storage noise margin, and sufficiently strong fault tolerance must be used to ensure the memory P/E cycling endurance. As a result, the relatively large cell storage noise margin in early memory lifetime is essentially wasted in conventional design practice. This paper proposes to always fully utilize the available cell storage noise margin by adaptively adjusting the number of storage levels per cell, and to progressively use these levels to realize multiple 1-bit programming operations between two consecutive erase operations. This simple progressive programming design concept is realized by two different implementation strategies, which are discussed and compared in detail. On the basis of an approximate NAND flash memory device model, we carried out simulations to quantitatively evaluate this design concept. The results show that it can improve the write endurance by 35.9% while also improving the average programming speed by 12% without sacrificing read speed. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
10. Enhanced Precision Through Multiple Reads for LDPC Decoding in Flash Memories.
- Author
- Wang, Jiadong, Vakilinia, Kasra, Chen, Tsung-Yi, Courtade, Thomas, Dong, Guiqiang, Zhang, Tong, Shankar, Hari, and Wesel, Richard
- Subjects
- FLASH memory, LINEAR codes, DECODERS & decoding, ELECTRIC potential, ERROR rates, PERFORMANCE evaluation
- Abstract
Multiple reads of the same Flash memory cell with distinct word-line voltages provide enhanced precision for LDPC decoding. In this paper, the word-line voltages are optimized by maximizing the mutual information (MI) of the quantized channel. The enhanced precision from a few additional reads allows frame error rate (FER) performance to approach that of full-precision soft information and enables an LDPC code to significantly outperform a BCH code. A constant-ratio constraint provides a significant simplification in the optimization with no noticeable loss in performance. For a well-designed LDPC code, the quantization that maximizes the mutual information also minimizes the FER in our simulations. However, for an example LDPC code with a high error floor caused by small absorbing sets, the MMI quantization does not provide the lowest frame error rate. The best quantization in this case introduces more erasures than would be optimal for the channel MI in order to mitigate the absorbing sets of the poorly designed code. The paper also identifies a trade-off in LDPC code design when decoding is performed with multiple precision levels; the best code at one level of precision will typically not be the best code at a different level of precision. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
11. Reducing Read Latency of Shingled Magnetic Recording With Severe Intertrack Interference Using Transparent Lossless Data Compression.
- Author
- Venkataraman, Kalyana Sundaram, Dong, Guiqiang, Xie, Ningde, and Zhang, Tong
- Subjects
- MAGNETIC recording media, ELECTRIC interference, LOSSLESS data compression, SIGNAL-to-noise ratio, DATA compression, MAGNETIC recording heads
- Abstract
With the distinct advantage of retaining conventional head and media, the emerging shingled recording technology improves areal storage density through intentional track overlapping that nevertheless introduces severe intertrack interference (ITI). An economically tenable option for shingled drives is to utilize a single read head. As we continue to increase its areal storage density, there will be a higher probability that a read operation demands reading multiple adjacent tracks for explicit ITI compensation. This directly results in a significant read latency penalty when using a single read head. In this work, we propose a simple design strategy to reduce such ITI-induced read latency penalty. If a sector of user data can be compressed to a certain extent, it will leave more storage space for coding redundancy and, hence, opportunistically enable the use of a stronger-than-normal error correction code (ECC). The stronger ECC can accordingly reduce the probability of reading multiple adjacent tracks for explicit ITI compensation. Beyond a simple intrasector compression, the absence of the update-in-place feature in shingled recording makes it feasible to apply lossless compression across multiple consecutive sectors. This can further improve the compression efficiency and, hence, reduce the probability of reading multiple adjacent tracks. We carried out simulations that successfully demonstrate the effectiveness of the proposed design strategies on reducing the read latency penalty caused by severe ITI in shingled recording. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
12. Error Rate-Based Wear-Leveling for nand Flash Memory at Highly Scaled Technology Nodes.
- Author
- Pan, Yangyang, Dong, Guiqiang, and Zhang, Tong
- Subjects
- BIT error rate, FLASH memory, ALGORITHMS, ERROR correction (Information theory), SIGNAL processing, TECHNOLOGY
- Abstract
This brief presents a nand Flash memory wear-leveling algorithm that explicitly uses the memory raw bit error rate (BER) as the optimization target. Although nand Flash memory wear-leveling has been well studied, all the existing algorithms aim to equalize the number of program/erase cycles among all the memory blocks. Unfortunately, such a conventional design practice becomes increasingly suboptimal as inter-block variation becomes increasingly significant with technology scaling. This brief presents a dynamic BER-based greedy wear-leveling algorithm that uses BER statistics as the measure of memory-block wear-out pace, and guides dynamic memory-block data swapping to maximize wear-leveling efficiency. Simulations have been carried out to quantitatively demonstrate its advantages over existing wear-leveling algorithms. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
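The BER-based greedy wear-leveling described in the abstract above can be sketched in a few lines of Python. This is only an illustrative decision rule under assumptions made here (a per-block BER estimate, a per-block write-heat counter, and a spread threshold `ber_gap`); the brief's actual algorithm and parameters may differ.

```python
def pick_swap(block_ber, block_write_heat, ber_gap):
    # Greedy BER-based wear-leveling decision: when the raw-BER spread
    # across blocks exceeds `ber_gap`, migrate hot data off the most
    # worn (highest-BER) block and park cold data there instead.
    worn = max(block_ber, key=block_ber.get)     # fastest-wearing block
    fresh = min(block_ber, key=block_ber.get)    # slowest-wearing block
    if block_ber[worn] - block_ber[fresh] <= ber_gap:
        return None          # wear is balanced enough; do nothing
    if block_write_heat[worn] <= block_write_heat[fresh]:
        return None          # hot data already sits on the fresh block
    return (worn, fresh)     # swap the contents of these two blocks
```

The point of keying on measured BER rather than P/E counts is that two blocks with identical cycle counts can wear at different rates at scaled nodes; BER tracks the actual wear-out pace.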
13. Using Quasi-EZ-NAND Flash Memory to Build Large-Capacity Solid-State Drives in Computing Systems.
- Author
- Pan, Yangyang, Dong, Guiqiang, Xie, Ningde, and Zhang, Tong
- Subjects
- FLASH memory, RANDOM access memory, COMPUTER storage device industry, COMPUTER systems, DIGITAL signal processing, DIGITAL communications
- Abstract
Future flash-based solid-state drives (SSDs) must employ increasingly powerful error correction code (ECC) and digital signal processing (DSP) techniques to compensate for the negative impact of technology scaling on NAND flash memory device reliability. Currently, all the ECC and DSP functions are implemented in a central SSD controller. However, the use of more powerful ECC and DSP makes such design practice subject to significant speed performance degradation and complicated controller implementation. An EZ-NAND (Error Zero NAND) flash memory design strategy is emerging in the industry, which moves all the ECC and DSP functions to each memory chip. Although EZ-NAND flash can simplify controller design and achieve high system speed performance, its high silicon cost may not be affordable for large-capacity SSDs in computing systems. We propose a quasi-EZ-NAND design strategy that hierarchically distributes ECC and DSP functions across both the NAND flash memory chips and the central SSD controller. Compared with the EZ-NAND design concept, it can maintain almost the same speed performance while reducing silicon cost overhead. Assuming the use of low-density parity-check (LDPC) codes and a postcompensation DSP technique, trace-based simulations show that SSDs using quasi-EZ-NAND flash can realize almost the same speed as SSDs using EZ-NAND flash, and both can reduce the average SSD response time by over 90 percent compared with conventional design practice. Silicon design at the 65 nm node shows that quasi-EZ-NAND can reduce the silicon cost overhead by up to 44 percent compared with EZ-NAND. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
14. Techniques Mitigating Update-Induced Latency Overhead in Shingled Magnetic Recording.
- Author
- Venkataraman, Kalyana Sundaram, Dong, Guiqiang, and Zhang, Tong
- Subjects
- MAGNETIC recorders & recording, SIGNAL processing, DENSITY currents, SIGNAL-to-noise ratio, BIT error rate, TRACKING control systems, SIMULATION methods & models
- Abstract
Shingled writing has recently emerged as a promising candidate to sustain the historical growth of magnetic recording storage areal density. However, since the convenient update-in-place feature is no longer available in shingled recording, in order to update one sector, many sectors must be read and written back, leading to a significant update-induced latency overhead. This work develops two simple design techniques that can reduce such a latency overhead. Because the spatial locality of update-invoked read operations naturally enables the use of the 2-D read channel signal processing, the first technique aims to reduce update-invoked read latency by trading the SNR gain obtained by a 2-D read channel for higher disk rotation speed. Since update-induced latency overhead strongly depends on the location of the sectors being updated within each shingled region, the second technique aims to reduce the latency overhead by leveraging the data access locality in most real-time workloads in order to determine appropriate data placement. Through extensive simulations, we show that disk rotation speed boost assisted by a 2-D read channel can reduce the update latency by up to 33%, and data access characteristic sector placement can reduce the update latency by over one order of magnitude. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
15. Enabling NAND Flash Memory Use Soft-Decision Error Correction Codes at Minimal Read Latency Overhead.
- Author
- Dong, Guiqiang, Xie, Ningde, and Zhang, Tong
- Subjects
- NAND gates, FLASH memory, ERROR correction (Information theory), ITERATIVE decoding, DATA compression, SYSTEMS design
- Abstract
With the aggressive technology scaling and use of multi-bit per cell storage, NAND flash memory is subject to continuous degradation of raw storage reliability and demands more and more powerful error correction codes (ECC). This inevitable trend makes conventional BCH codes increasingly inadequate, and iterative coding solutions such as LDPC codes become very natural alternative options. However, these powerful coding solutions demand soft-decision memory sensing, which results in longer on-chip memory sensing latency and memory-to-controller data transfer latency. Leveraging well-established lossless data compression theories, this paper presents several simple design techniques that can reduce the latency penalty caused by soft-decision ECCs. Their effectiveness has been well demonstrated through extensive simulations, and the results suggest that the latency can be reduced by up to 85.3%. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
16. On the Use of Soft-Decision Error-Correction Codes in nand Flash Memory.
- Author
- Dong, Guiqiang, Xie, Ningde, and Zhang, Tong
- Subjects
- FLASH memory, COMPUTER simulation, ERROR analysis in mathematics, MATHEMATICAL formulas, INFORMATION storage & retrieval systems, COMPUTER storage devices
- Abstract
As technology continues to scale down, nand Flash memory has been increasingly relying on error-correction codes (ECCs) to ensure overall data storage integrity. Although advanced ECCs such as low-density parity-check (LDPC) codes can provide significantly stronger error-correction capability than the BCH codes used in current practice, their decoding requires soft-decision log-likelihood ratio (LLR) information. This results in two critical issues. First, accurate calculation of LLRs demands fine-grained memory-cell sensing, which nevertheless tends to incur implementation overhead and access latency penalty. Hence, it is critical to minimize the fine-grained memory sensing precision. Second, accurate calculation of LLRs also demands the availability of a memory-cell threshold-voltage distribution model. As the major source of memory-cell threshold-voltage distribution distortion, cell-to-cell interference must be carefully incorporated into the model. However, these two critical issues have not previously been addressed in the open literature. This paper attempts to address these open issues. We derive mathematical formulations to approximately model the threshold-voltage distribution of memory cells in the presence of cell-to-cell interference, based on which the calculation of LLRs is mathematically formulated. This paper also proposes a nonuniform memory sensing strategy to reduce the memory sensing precision and, thus, sensing latency while still maintaining good error-correction performance. In addition, we investigate these design issues under the scenario where we can also sense interfering cells and hence explicitly estimate cell-to-cell interference strength. We carry out extensive computer simulations to demonstrate the effectiveness and involved tradeoffs, assuming the use of LDPC codes in 2-bits/cell nand Flash memory. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
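The core computation in the abstract above, turning quantized soft sensing into LLRs for an LDPC decoder, can be sketched as follows. This toy uses plain Gaussian threshold-voltage distributions and omits the cell-to-cell interference modeling that is the paper's actual contribution; all function names and parameters are assumptions made here for illustration.

```python
import math

def norm_cdf(x, mu, sigma):
    # Gaussian CDF via the error function; handles +/-inf edges
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def region_llr(lo, hi, mu0, mu1, sigma):
    # LLR of the stored bit given only that the sensed threshold voltage
    # fell between read levels `lo` and `hi` (quantized soft sensing).
    # mu0 / mu1: mean threshold voltage when the stored bit is 0 / 1.
    p0 = norm_cdf(hi, mu0, sigma) - norm_cdf(lo, mu0, sigma)
    p1 = norm_cdf(hi, mu1, sigma) - norm_cdf(lo, mu1, sigma)
    eps = 1e-12                      # guard against log(0) in the tails
    return math.log((p0 + eps) / (p1 + eps))

def page_llrs(read_levels, mu0, mu1, sigma):
    # One LLR per quantization region defined by the sorted read levels;
    # a nonuniform placement of `read_levels` (as the paper proposes)
    # changes these values without changing this computation.
    edges = [-math.inf] + sorted(read_levels) + [math.inf]
    return [region_llr(lo, hi, mu0, mu1, sigma)
            for lo, hi in zip(edges, edges[1:])]
```

With three read levels between two symmetric distributions, the four resulting LLRs are monotone from confident-zero to confident-one, which is exactly the limited-precision soft information the LDPC decoder consumes.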
17. Using Data Postcompensation and Predistortion to Tolerate Cell-to-Cell Interference in MLC nand Flash Memory.
- Author
- Dong, Guiqiang, Li, Shu, and Zhang, Tong
- Subjects
- FLASH memory, RANDOM access memory, INFORMATION storage & retrieval systems, ELECTRONIC data processing, TELECOMMUNICATION
- Abstract
With its appealing storage-density advantage, multilevel-per-cell (MLC) nand Flash memory that stores more than 1 bit in each memory cell now largely dominates the global Flash memory market. However, due to the inherently smaller noise margin, MLC nand Flash memory is more subject to various device/circuit variability and noise, particularly as the industry pushes the limit of technology scaling and a more aggressive use of MLC storage. Cell-to-cell interference has been well recognized as a major noise source responsible for raw-memory-storage reliability degradation. Leveraging the fact that cell-to-cell interference is a deterministic data-dependent process and can be mathematically described with a simple formula, we present two simple yet effective data-processing techniques that can well tolerate significant cell-to-cell interference at the system level. These two techniques essentially originate from two signal-processing techniques widely used in digital communication systems to compensate for communication-channel intersymbol interference. The effectiveness of these two techniques has been well demonstrated through computer simulations and analysis under an information-theoretical framework, and the involved design tradeoffs are discussed in detail. [ABSTRACT FROM PUBLISHER]
- Published
- 2010
- Full Text
- View/download PDF
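The "simple formula" the abstract above alludes to is commonly modeled as a linear coupling of neighbor programming shifts onto the victim cell, which makes the two proposed techniques easy to sketch. The following is only an illustrative sketch under that assumed linear model (coupling ratios, function names, and numbers are hypothetical), not the paper's implementation.

```python
def interference(neighbor_shifts, coupling):
    # Linear cell-to-cell coupling model: the victim cell's threshold
    # voltage moves by a weighted sum of its neighbors'
    # programming-induced voltage shifts.
    return sum(c * dv for c, dv in zip(coupling, neighbor_shifts))

def postcompensate(sensed_v, neighbor_shifts, coupling):
    # Data postcompensation: after also sensing the neighbors, subtract
    # the estimated interference from the victim's sensed voltage.
    return sensed_v - interference(neighbor_shifts, coupling)

def predistort(target_v, expected_neighbor_shifts, coupling):
    # Data predistortion: program the victim below its nominal target so
    # that the anticipated interference lands it on target.
    return target_v - interference(expected_neighbor_shifts, coupling)
```

Postcompensation pays its cost at read time (extra neighbor sensing), while predistortion pays it at write time (the neighbor data must be known when programming); that is the system-level tradeoff the paper analyzes.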
18. Using Lossless Data Compression in Data Storage Systems: Not for Saving Space.
- Author
- Xie, Ningde, Dong, Guiqiang, and Zhang, Tong
- Subjects
- DATA compression, DATA security, INFORMATION retrieval, ERROR analysis in mathematics, DATA analysis, FLASH memory, DATA disk drives
- Abstract
Lossless data compression for data storage has become less popular as mass data storage systems become increasingly cheap. This leaves many files stored on mass data storage media uncompressed although they are losslessly compressible. This paper proposes to exploit the lossless compressibility of those files to improve underlying storage system performance metrics such as energy efficiency and access speed, rather than saving storage space as in conventional practice. The key idea is to apply runtime lossless data compression to enable the opportunistic use of a stronger error correction code (ECC) with more coding redundancy in data storage systems, and to trade such opportunistic extra error correction capability for other system performance metrics at runtime. Since data storage is typically realized in units of equal-sized sectors (e.g., 512 B or 4 KB of user data per sector), we apply this strategy to each individual sector independently in order to be completely transparent to the firmware, operating systems, and users. Using low-density parity-check (LDPC) codes as the ECC in storage systems, this paper quantitatively studies the effectiveness of this design strategy in both hard disk drives and NAND flash memories. For hard disk drives, we use this design strategy to reduce average read channel signal processing energy consumption, and results show that up to 38 percent read channel energy saving can be achieved. For NAND flash memories, we use this design strategy to improve average write speed, and results show that up to 36 percent write speed improvement can be achieved for 2 bits/cell NAND flash memories. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library