44 results for "Arindam Mallik"
Search Results
2. Accelerating Large Language Model Training with In-Package Optical Links for Scale-Out Systems.
3. Evaluating the Effects of FeFET Device Variability on Charge Sharing Based AiMC Accelerator.
4. Performance Modeling and Workload Analysis of Distributed Large Language Model Training and Inference.
5. SAfEPaTh: A System-Level Approach for Efficient Power and Thermal Estimation of Convolutional Neural Network Accelerator.
6. AIMC Modeling and Parameter Tuning for Layer-Wise Optimal Operating Point in DNN Inference.
7. DIANA: An End-to-End Hybrid DIgital and ANAlog Neural Network SoC for the Edge.
8. Tiny ci-SAR A/D Converter for Deep Neural Networks in Analog in-Memory Computation.
9. Write-Verify Scheme for IGZO DRAM in Analog in-Memory Computing.
10. DIANA: An End-to-End Energy-Efficient Digital and ANAlog Hybrid Neural Network SoC.
11. AERO: Design Space Exploration Framework for Resource-Constrained CNN Mapping on Tile-Based Accelerators.
12. Dynamic Quantization Range Control for Analog-in-Memory Neural Networks Acceleration.
13. Design-Technology Space Exploration for Energy Efficient AiMC-Based Inference Acceleration.
14. Charge Sharing and Charge Injection A/D Converters for Analog In-Memory Computing.
15. Noise tolerant ternary weight deep neural networks for analog in-memory inference.
16. Sequential 3D: Key integration challenges and opportunities for advanced semiconductor scaling.
17. Analog In-memory Computing in FeFET-based 1T1R Array for Edge AI Applications.
18. A 22 nm, 1540 TOP/s/W, 12.1 TOP/s/mm² in-Memory Analog Matrix-Vector-Multiplier for DNN Acceleration.
19. Lateral NWFET optimization for beyond 7nm nodes.
20. FQ-Conv: Fully Quantized Convolution for Efficient and Accurate Inference.
21. Design Technology co-optimization for N10.
22. TEASE: a systematic analysis framework for early evaluation of FinFET-based advanced technology nodes.
23. Automatic Extraction of Pipeline Parallelism for Embedded Software Using Linear Programming.
24. Automatic parallelization of embedded software using hierarchical task graphs and integer linear programming.
25. Mapping Embedded Applications on MPSoCs: The MNEMEE Approach.
26. Mapping Embedded Applications on MPSoCs: The MNEMEE Approach.
27. A framework for automatic parallelization, static and dynamic memory optimization in MPSoC platforms.
28. User- and process-driven dynamic voltage and frequency scaling.
29. PICSEL: measuring user-perceived performance to control dynamic frequency scaling.
30. Learning and Leveraging the Relationship between Architecture-Level Measurements and Individual User Satisfaction.
31. Automated task distribution in multicore network processors using statistical analysis.
32. Smart bit-width allocation for low power optimization in a systemc based ASIC design environment.
33. Engineering Over-Clocking: Reliability-Performance Trade-Offs for High-Performance Register Files.
34. Load elimination for low-power embedded processors.
35. A Case for Clumsy Packet Processors.
36. Design and implementation of correlating caches.
37. Low-Power Optimization by Smart Bit-Width Allocation in a SystemC-Based ASIC Design Environment.
38. Application-Level Error Measurements for Network Processors.
39. Low Power Correlating Caches for Network Processors.
40. MNEMEE: a framework for memory management and optimization of static and dynamic data in MPSoCs.
41. Power reduction through measurement and modeling of users and CPUs: summary.
42. User-Driven Frequency Scaling.
43. Variable latency caches for nanoscale processor.
44. The user in experimental computer systems research.
Discovery Service for Jio Institute Digital Library