
Showing 10 results

Search Results

1. SWPU: A 126.04 TFLOPS/W Edge-Device Sparse DNN Training Processor With Dynamic Sub-Structured Weight Pruning.

2. Neural Network Training on In-Memory-Computing Hardware With Radix-4 Gradients.

3. PL-NPU: An Energy-Efficient Edge-Device DNN Training Processor With Posit-Based Logarithm-Domain Computing.

4. Sparse Compressed Spiking Neural Network Accelerator for Object Detection.

5. LSMCore: A 69k-Synapse/mm² Single-Core Digital Neuromorphic Processor for Liquid State Machine.

6. QuantBayes: Weight Optimization for Memristive Neural Networks via Quantization-Aware Bayesian Inference.

7. Base-2 Softmax Function: Suitability for Training and Efficient Hardware Implementation.

8. A High Performance Multi-Bit-Width Booth Vector Systolic Accelerator for NAS Optimized Deep Learning Neural Networks.

9. TSUNAMI: Triple Sparsity-Aware Ultra Energy-Efficient Neural Network Training Accelerator With Multi-Modal Iterative Pruning.

10. DetectX—Adversarial Input Detection Using Current Signatures in Memristive XBar Arrays.