248 results for "Zhengya Zhang"
Search Results
2. Visualization of micro-agents and surroundings by real-time multicolor fluorescence microscopy
3. A Fully Integrated Reprogrammable CMOS-RRAM Compute-in-Memory Coprocessor for Neuromorphic Applications
4. Optimization of Grinding Parameters for the Workpiece Surface and Material Removal Rate in the Belt Grinding Process for Polishing and Deburring of 45 Steel
5. Design Approach for Die-to-Die Interfaces to Enable Energy-Efficient Chiplet Systems.
6. 2.8 A 21.9ns 15.7 Gbps/mm² (128,15) BOSS FEC Decoder for 5G/6G URLLC Applications.
7. Arvon: A Heterogeneous System-in-Package Integrating FPGA and DSP Chiplets for Versatile Workload Acceleration.
8. TetriX: Flexible Architecture and Optimal Mapping for Tensorized Neural Network Processing.
9. TT-CIM: Tensor Train Decomposition for Neural Network in RRAM-Based Compute-in-Memory Systems.
10. An 11.4mm² 40.2Gbps 17.4pJ/b/Iteration Soft-Decision Open Forward Error Correction Decoder for Optical Communications.
11. AR-PIM: An Adaptive-Range Processing-in-Memory Architecture.
12. eNODE: Energy-Efficient and Low-Latency Edge Inference and Training of Neural ODEs.
13. ANSA: Adaptive Near-Sensor Architecture for Dynamic DNN Processing in Compact Form Factors.
14. Arvon: A Heterogeneous SiP Integrating a 14nm FPGA and Two 22nm 1.8TFLOPS/W DSPs with 1.7Tbps/mm2 AIB 2.0 Interface to Provide Versatile Workload Acceleration.
15. TAICHI: A Tiled Architecture for In-Memory Computing and Heterogeneous Integration.
16. VOTA: A Heterogeneous Multicore Visual Object Tracking Accelerator Using Correlation Filters.
17. HiMA: A Fast and Scalable History-based Memory Access Engine for Differentiable Neural Computer.
18. Point-X: A Spatial-Locality-Aware Architecture for Energy-Efficient Graph-Based Point-Cloud Deep Learning.
19. Exploration of Energy-Efficient Architecture for Graph-Based Point-Cloud Deep Learning.
20. DNC-Aided SCL-Flip Decoding of Polar Codes.
21. Control of Magnetically-Driven Screws in a Viscoelastic Medium.
22. QuickNN: Memory and Performance Optimization of k-d Tree Based Nearest Neighbor Search for 3D Point Clouds.
23. Near-Sensor Distributed DNN Processing for Augmented and Virtual Reality.
24. A Configurable Successive-Cancellation List Polar Decoder Using Split-Tree Architecture.
25. SNAP: An Efficient Sparse Neural Acceleration Processor for Unstructured Sparse Deep Neural Network Inference.
26. A 0.58-mm² 2.76-Gb/s 79.8-pJ/b 256-QAM Message-Passing Detector for a 128 × 32 Massive MIMO Uplink System.
27. An 8-bit 20.7 TOPS/W Multi-Level Cell ReRAM-based Compute Engine.
28. NetFlex: A 22nm Multi-Chiplet Perception Accelerator in High-Density Fan-Out Wafer-Level Packaging.
29. CASCADE: Connecting RRAMs to Extend Analog Dataflow In An End-To-End In-Memory Processing Paradigm.
30. An SRAM-Based Accelerator for Solving Partial Differential Equations.
31. A 2048-Neuron Spiking Neural Network Accelerator with Neuro-Inspired Pruning and Asynchronous Network on Chip in 40nm CMOS.
32. A 1.87-mm² 56.9-GOPS Accelerator for Solving Partial Differential Equations.
33. Efficient Post-Processors for Improving Error-Correcting Performance of LDPC Codes.
34. HiMA: A Fast and Scalable History-based Memory Access Engine for Differentiable Neural Computer.
35. A 0.23mW Heterogeneous Deep-Learning Processor Supporting Dynamic Execution of Conditional Neural Networks.
36. LEIA: A 2.05mm² 140mW Lattice Encryption Instruction Accelerator in 40nm CMOS.
37. Inference and Learning Hardware Architecture for Neuro-Inspired Sparse Coding Algorithm.
38. Optimizing a Fuzzy Equivalent Sliding Mode Control Applied to Servo Drive Systems.
39. Efficient Post-Processors for Improving Error-Correcting Performance of LDPC Codes.
40. A 135-mW 1.70TOPS Sparse Video Sequence Inference SoC for Action Classification.
41. A 2.4-mm² 130-mW MMSE-Nonbinary LDPC Iterative Detector-Decoder for 4×4 256-QAM MIMO in 65-nm CMOS.
42. Editorial TVLSI Positioning - Continuing and Accelerating an Upward Trajectory.
43. VOTA: A 2.45TFLOPS/W Heterogeneous Multi-Core Visual Object Tracking Accelerator Based on Correlation Filters.
44. PETRA: A 22nm 6.97TFLOPS/W AIB-Enabled Configurable Matrix and Convolution Accelerator Integrated with an Intel Stratix 10 FPGA.
45. A 256Gb/s/mm-shoreline AIB-Compatible 16nm FinFET CMOS Chiplet for 2.5D Integration with Stratix 10 FPGA on EMIB and Tiling on Silicon Interposer.
46. A 2.56mm² 718GOPS Configurable Spiking Convolutional Sparse Coding Processor in 40nm CMOS.
47. A 1.25pJ/bit 0.048mm² AES Core with DPA Resistance for IoT Devices.
48. Post-Processing Methods for Improving Coding Gain in Belief Propagation Decoding of Polar Codes.
49. A 1.8Gb/s 70.6pJ/b 128×16 link-adaptive near-optimal massive MIMO detector in 28nm UTBB-FDSOI.
50. A Maximum-Likelihood Sequence Detection Powered ADC-Based Serial Link.