485 results for "Hoi-Jun Yoo"
Search Results
2. DualNet: Efficient Integration of Artificial Neural Network and Spiking Neural Network with Equivalent Conversion.
3. A 3.55 mJ/frame Energy-efficient Mixed-Transformer based Semantic Segmentation Accelerator for Mobile Devices.
4. Two-Step Spike Encoding Scheme and Architecture for Highly Sparse Spiking-Neural-Network.
5. A 28.6 mJ/iter Stable Diffusion Processor for Text-to-Image Generation with Patch Similarity-based Sparsity Augmentation and Text-based Mixed-Precision.
6. LUTein: Dense-Sparse Bit-Slice Architecture With Radix-4 LUT-Based Slice-Tensor Processing Units.
7. A Low-power and Real-time Neural-Rendering Dense SLAM Processor with 3-Level Hierarchical Sparsity Exploitation.
8. A Low-Power Neural Graphics System for Instant 3D Modeling and Real-Time Rendering on Mobile AR/VR Devices.
9. NoPIM: Functional Network-on-Chip Architecture for Scalable High-Density Processing-in-Memory-based Accelerator.
10. 20.7 NeuGPU: An 18.5mJ/Iter Neural-Graphics Processing Unit for Instant-Modeling and Real-Time Rendering with Segmented-Hashing Architecture.
11. 20.5 C-Transformer: A 2.6-18.1μJ/Token Homogeneous DNN-Transformer/Spiking-Transformer Processor with Big-Little Network and Implicit Weight Generation for Large Language Models.
12. 20.8 Space-Mate: A 303.5mW Real-Time Sparse Mixture-of-Experts-Based NeRF-SLAM Processor for Mobile Spatial Computing.
13. LOG-CIM: An Energy-Efficient Logarithmic Quantization Computing-In-Memory Processor With Exponential Parallel Data Mapping and Zero-Aware 6T Dual-WL Cell.
14. Scaling-CIM: eDRAM In-Memory-Computing Accelerator With Dynamic-Scaling ADC and Adaptive Analog Operation.
15. An Overview of Computing-in-Memory Circuits With DRAM and NVM.
16. A 92 fps and 2.56 mJ/Frame Computing-In-Memory-Based Human Pose Estimation Accelerator With Resource-Efficient Macro for Mobile Devices.
17. A 2.31μJ/Inference Ultra-Low Power Always-on Event-Driven AI-IoT SoC With Switchable nvSRAM Compute-in-Memory Macro.
18. An 8.81 TFLOPS/W Deep-Reinforcement-Learning Accelerator With Delta-Based Weight Sharing and Block-Mantissa Reconfigurable PE Array.
19. A Low-Power Artificial-Intelligence-Based 3-D Rendering Processor With Hybrid Deep Neural Network Computing.
20. COOL-NPU: Complementary Online Learning Neural Processing Unit.
21. C-DNN: An Energy-Efficient Complementary Deep-Neural-Network Processor With Heterogeneous CNN/SNN Core Architecture.
22. DynaPlasia: An eDRAM In-Memory Computing-Based Reconfigurable Spatial Accelerator With Triple-Mode Cell.
23. A 3.8-mW 1.9-mΩ/√Hz Electrical Impedance Tomography IC With High Input Impedance and Loading Effect Calibration for 3-D Early Breast Cancer Detection System.
24. MetaVRain: A Mobile Neural 3-D Rendering Processor With Bundle-Frame-Familiarity-Based NeRF Acceleration and Hybrid DNN Computing.
25. A 3.8 mW 1.9 mΩ/√Hz Electrical Impedance Tomography Imaging with 28.4 MΩ High Input Impedance and Loading Calibration.
26. A 332 TOPS/W Input/Weight-Parallel Computing-in-Memory Processor with Voltage-Capacitance-Ratio Cell and Time-Based ADC.
27. A 5.99 TFLOPS/W Heterogeneous CIM-NPU Architecture for an Energy Efficient Floating-Point DNN Acceleration.
28. A Reconfigurable 1T1C eDRAM-based Spiking Neural Network Computing-In-Memory Processor for High System-Level Efficiency.
29. A 15.9 mW 96.5 fps Memory-Efficient 3D Reconstruction Processor with Dilation-based TSDF Fusion and Block-Projection Cache System.
30. Sibia: Signed Bit-slice Architecture for Dense DNN Acceleration with Slice-level Sparsity Exploitation.
31. Opportunities and Challenges of DRAM-based Computing-in-Memory for AI Accelerator.
32. A Software-Hardware Co-Optimized Sense Amplifier for 2T1C Cell-based DRAM In-Memory-Computing.
33. NeRF-Navi: A 93.6-202.9µJ/task Switchable Approximate-Accurate NeRF Path Planning Processor with Dual Attention Engine and Outlier Bit-Offloading Core.
34. Dyamond: A 1T1C DRAM In-memory Computing Accelerator with Compact MAC-SIMD and Adaptive Column Addition Dataflow.
36. A 0.5V, 6.2μW, 0.059mm² Sinusoidal Current Generator IC with 0.088% THD for Bio-Impedance Sensing.
37. LOG-CIM: A 116.4 TOPS/W Digital Computing-In-Memory Processor Supporting a Wide Range of Logarithmic Quantization with Zero-Aware 6T Dual-WL Cell.
38. A 33.6 FPS Embedding based Real-time Neural Rendering Accelerator with Switchable Computation Skipping Architecture on Edge Device.
39. An Energy-Efficient Heterogeneous Fourier Transform-Based Transformer Accelerator with Frequency-Wise Dynamic Bit-Precision.
40. A Resource-Efficient Super-Resolution FPGA Processor with Heterogeneous CNN and SNN Core Architecture.
41. COOL-NPU: Complementary Online Learning Neural Processing Unit with CNN-SNN Heterogeneous Core and Event-driven Backpropagation.
42. A Low-power Neural 3D Rendering Processor with Bio-inspired Visual Perception Core and Hybrid DNN Acceleration.
43. A Mobile 3-D Object Recognition Processor With Deep-Learning-Based Monocular Depth Estimation.
44. C-DNN V2: Complementary Deep-Neural-Network Processor With Full-Adder/OR-Based Reduction Tree and Reconfigurable Spatial Weight Reuse.
45. SNPU: An Energy-Efficient Spike Domain Deep-Neural-Network Processor With Two-Step Spike Encoding and Shift-and-Accumulation Unit.
46. DSPU: An Efficient Deep Learning-Based Dense RGB-D Data Acquisition With Sensor Fusion and 3-D Perception SoC.
47. An Efficient Deep-Learning-Based Super-Resolution Accelerating SoC With Heterogeneous Accelerating and Hierarchical Cache.
48. Neuro-CIM: ADC-Less Neuromorphic Computing-in-Memory Processor With Operation Gating/Stopping and Digital-Analog Networks.
49. NeuGPU: A Neural Graphics Processing Unit for Instant Modeling and Real-Time Rendering on Mobile AR/VR Devices.
50. Space-Mate: A 303.5mW Real-Time NeRF SLAM Processor with Sparse-Mixture-of-Experts-based Acceleration.