99 results for "Peng, Xiaochen"
Search Results
2. Cost-effectiveness thresholds or decision-making threshold: a novel perspective
3. New Security Challenges on Machine Learning Inference Engine: Chip Cloning and Model Reverse Engineering
4. DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-chip Training
5. Estimating Power, Performance, and Area for On-Sensor Deployment of AR/VR Workloads Using an Analytical Framework
6. 34.4 A 3nm, 32.5TOPS/W, 55.0TOPS/mm2 and 3.78Mb/mm2 Fully-Digital Compute-in-Memory Macro Supporting INT12 × INT12 with a Parallel-MAC Architecture and Foundry 6T-SRAM Bit Cell
7. 3-D Heterogeneous Integration of RRAM-Based Compute-In-Memory: Impact of Integration Parameters on Inference Accuracy
8. Cross-point memory design challenges and survey of selector device characteristics
9. Estimation of the value of curative therapies in oncology: a willingness-to-pay study in China.
10. Achieving High In Situ Training Accuracy and Energy Efficiency with Analog Non-Volatile Synaptic Devices
11. Secure XOR-CIM Engine: Compute-In-Memory SRAM Architecture With Embedded XOR Encryption
12. Heterogeneous 3-D Integration of Multitier Compute-in-Memory Accelerators: An Electrical-Thermal Co-Design
14. Thermal Reliability Considerations of Resistive Synaptic Devices for 3D CIM System Performance
15. Compute-in-Memory: From Device Innovation to 3D System Integration
16. RRAM for Compute-in-Memory: From Inference to Training
17. A Runtime Reconfigurable Design of Compute-in-Memory–Based Hardware Accelerator for Deep Learning Inference
18. NeuroSim Simulator for Compute-in-Memory Hardware Accelerator: Validation and Benchmark
19. Compute-in-RRAM with Limited On-chip Resources
20. NeuroSim Validation with 40nm RRAM Compute-in-Memory Macro
21. Structured Pruning of RRAM Crossbars for Efficient In-Memory Computing Acceleration of Deep Neural Networks
22. Exploiting Process Variations to Protect Machine Learning Inference Engine from Chip Cloning
23. Cryogenic Performance for Compute-in-Memory Based Deep Neural Network Accelerator
24. First Experimental Demonstration of Robust HZO/β-Ga₂O₃ Ferroelectric Field-Effect Transistors as Synaptic Devices for Artificial Intelligence Applications in a High-Temperature Environment
25. Impact of Multilevel Retention Characteristics on RRAM based DNN Inference Engine
26. A Runtime Reconfigurable Design of Compute-in-Memory based Hardware Accelerator
27. Compute-in-Memory Chips for Deep Learning: Recent Trends and Prospects
28. Benchmarking Monolithic 3D Integration for Compute-in-Memory Accelerators: Overcoming ADC Bottlenecks and Maintaining Scalability to 7nm or Beyond
29. Thermal Modeling of 3D Polylithic Integration and Implications on BEOL RRAM Performance
30. Cryogenic Benchmarks of Embedded Memory Technologies for Recurrent Neural Network based Quantum Error Correction
31. Ferroelectric Transistors for Synaptic Devices: Challenges and Prospects
32. XOR-CIM
33. MINT: Mixed-Precision RRAM-Based In-Memory Training Architecture
34. A Variation Robust Inference Engine Based on STT-MRAM with Parallel Read-Out
35. Architectural Design of 3D NAND Flash based Compute-in-Memory for Inference Engine
36. Benchmark of the Compute-in-Memory-Based DNN Accelerator With Area Constraint
37. A Two-way SRAM Array based Accelerator for Deep Neural Network On-chip Training
38. Optimizing Weight Mapping and Data Flow for Convolutional Neural Networks on Processing-in-Memory Architectures
39. Overcoming Challenges for Achieving High in-situ Training Accuracy with Emerging Memories
40. Compute-in-Memory with Emerging Nonvolatile-Memories: Challenges and Prospects
41. CIMAT: A Compute-In-Memory Architecture for On-chip Training Based on Transpose SRAM Arrays
42. Benchmark of Ferroelectric Transistor-Based Hybrid Precision Synapse for Neural Network Accelerator
43. DNN+NeuroSim: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators with Versatile Device Technologies
44. CIMAT
45. Inference engine benchmarking across technological platforms from CMOS to RRAM
48. MLP+NeuroSimV3.0
49. Design Guidelines of RRAM based Neural-Processing-Unit
50. MAX2: An ReRAM-Based Neural Network Accelerator That Maximizes Data Reuse and Area Utilization
Discovery Service for Jio Institute Digital Library