Search

Your search for '"Benini, Luca"' returned 126 results.

Search Constraints

You searched for: Author: "Benini, Luca"; Topic: 020202 computer hardware & architecture

Search Results

1. Compressing Subject-specific Brain-Computer Interface Models into One Model by Superposition in Hyperdimensional Space

2. Laelaps: An Energy-Efficient Seizure Detection Algorithm from Long-term Human iEEG Recordings without False Alarms

3. Independent Body-Biasing of P-N Transistors in a 28nm UTBB FD-SOI ULP Near-Threshold Multi-Core Cluster

4. HERO: Heterogeneous Embedded Research Platform for Exploring RISC-V Manycore Accelerators on FPGA

5. HPC Cooling: A Flexible Modeling Tool for Effective Design and Management

6. Exploring Shared Virtual Memory for FPGA Accelerators with a Configurable IOMMU

7. Streamlining the OpenMP Programming Model on Ultra-Low-Power Multi-core MCUs

8. XwattPilot: A Full-stack Cloud System Enabling Agile Development of Transprecision Software for Low-power SoCs

9. XpulpNN: Accelerating Quantized Neural Networks on RISC-V Processors Through ISA Extensions

10. TRANSPIRE: An energy-efficient TRANSprecision floating-point Programmable archItectuRE

11. Countdown Slack: A Run-Time Library to Reduce Energy Footprint in Large-Scale MPI Applications

12. Pushing On-chip Memories Beyond Reliability Boundaries in Micropower Machine Learning Applications

13. A Multi-Sensor and Parallel Processing SoC for Miniaturized Medical Instrumentation

14. Quantifying the Impact of Variability and Heterogeneity on the Energy Efficiency for a Next-Generation Ultra-Green Supercomputer

15. The Quest for Energy-Efficient I$ Design in Ultra-Low-Power Clustered Many-Cores

16. YodaNN: An Architecture for Ultralow Power Binary-Weight CNN Acceleration

17. Origami: A 803-GOp/s/W Convolutional Network Accelerator

18. Smart Energy-Efficient Clock Synthesizer for Duty-Cycled Sensor SoCs in 65 nm/28 nm CMOS

19. Energy-Efficient Near-Threshold Parallel Computing: The PULPv2 Cluster

20. Logic-Base Interconnect Design for Near Memory Computing in the Smart Memory Cube

21. Controlling NUMA effects in embedded manycore applications with lightweight nested parallelism support

22. Paving the Way Toward Energy-Aware and Automated Datacentre

23. An Energy-Efficient Integrated Programmable Array Accelerator and Compilation flow for Near-Sensor Ultra-low Power Processing

24. Thermal Analysis and Interpolation Techniques for a Logic + WideIO Stacked DRAM Test Chip

25. A 60 GOPS/W, −1.8 V to 0.9 V body bias ULP cluster in 28 nm UTBB FD-SOI technology

26. Optimizing memory bandwidth exploitation for OpenVX applications on embedded many-core accelerators

27. ANTAREX: A DSL-based Approach to Adaptively Optimizing and Enforcing Extra-Functional Properties in High Performance Computing

28. A Scalable Framework for Online Power Modelling of High-Performance Computing Nodes in Production

29. Live Demonstration: Body-Bias Based Performance Monitoring and Compensation for a Near-Threshold Multi-Core Cluster in 28nm FD-SOI Technology

30. Neuraghe: Exploiting CPU-FPGA synergies for efficient and flexible CNN inference acceleration on Zynq SoCs

31. Synergistic HW/SW Approximation Techniques for Ultralow-Power Parallel Computing

32. Hardware Transactional Memory Exploration in Coherence-Free Many-Core Architectures

33. Micro Kinetic Energy Harvesting for Autonomous Wearable Devices

34. Runtime Support for Multiple Offload-Based Programming Models on Clustered Manycore Accelerators

35. Work-in-Progress: Quantized NNs as the Definitive solution for inference on low-power ARM MCUs?

36. A Heterogeneous Multi-Core System-on-Chip for Energy Efficient Brain Inspired Computing

37. Leakage bounds for Gaussian side channels

38. Always-ON visual node with a hardware-software event-based binarized neural network inference engine

39. HePREM: Enabling predictable GPU execution on heterogeneous SoC

40. XNOR Neural Engine: a Hardware Accelerator IP for 21.6 fJ/op Binary Neural Network Inference

41. CHIPMUNK: A Systolically Scalable 0.9 mm², 3.08 Gop/s/mW @ 1.2 mW Accelerator for Near-Sensor Recurrent Neural Network Inference

42. Energy proportionality in near-threshold computing servers and cloud data centers: Consolidating or Not?

43. Efficient, long-term logging of rich data sensors using transient sensor nodes

44. Hyperdrive: A systolically scalable binary-weight CNN Inference Engine for mW IoT End-Nodes

45. Mr. Wolf: A 1 GFLOP/s Energy-Proportional Parallel Ultra Low Power SoC for IoT Edge Processing

46. A 64-mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

47. An 826 MOPS, 210 µW/MHz Unum ALU in 65 nm

48. QUENN: Quantization engine for low-power neural networks

49. An energy efficient E-skin embedded system for real-time tactile data decoding

50. Design Automation for Binarized Neural Networks: A Quantum Leap Opportunity?
