1. Efficient large-scale transistor-level transient analysis
- Author
- Zhu, Zhengyong
- Subjects
Computer Science::Hardware Architecture, UCSD Dissertations, Academic Computer science (Discipline), Computer Science::Emerging Technologies, Hardware_INTEGRATEDCIRCUITS
- Abstract
With increasing design complexity, the huge volume of extracted interconnect data is pushing transistor-level simulation tools to their capacity limits. Direct methods, such as the LU decomposition used in Berkeley SPICE and its variants, are prohibitive due to their superlinear complexity. Over the last decade, various numerical techniques, such as circuit partitioning, fast linear solvers, model order reduction, approximate device models, simplified numerical integration or linearization, and piecewise-linear waveform approximation, have been introduced to improve simulation performance under the rising demands of advanced technologies. Although these methods achieve significant runtime improvements, they usually trade accuracy for speed. Inaccurate simulation results can lead to over-design that increases product cost, especially for nanometer high-performance integrated circuit designs. Motivated by the widening gap between design complexity and post-layout transistor-level simulation tools, we propose two efficient transistor-level analysis approaches with SPICE accuracy for deep-submicron and nanometer VLSI circuits: (1) An efficient technique for solving the linearized circuit equations: a novel two-stage Newton-Raphson approach dynamically models the interfaces between the linear network and the nonlinear devices. Coupled linear networks are solved by an adaptive algebraic multigrid method. Circuit latency and activity variations are captured by adaptive strategies that largely avoid unnecessary repeated computation. The proposed approach employs extra iterations between the linear and nonlinear circuits inside the linearization process to ensure global convergence. (2) A new numerical integration procedure: we propose a generalized operator splitting method for transistor-level transient analysis and demonstrate that the generalized method is unconditionally stable.
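To make the Newton-Raphson linearization at the heart of approach (1) concrete, here is a minimal illustrative sketch, not the dissertation's implementation: a single nonlinear device (a diode, with assumed parameters IS and VT) coupled to a linear network (a series resistor and source). Each Newton iteration linearizes the device and solves the resulting linear equation, the same inner/outer structure that the two-stage approach generalizes to large coupled networks.

```python
import math

# Hypothetical one-node example: source VS -- resistor R -- diode to ground.
# KCL at the diode node v:  (VS - v)/R = IS * (exp(v/VT) - 1)
VS, R = 5.0, 1e3            # source voltage [V], series resistance [ohm]
IS, VT = 1e-12, 0.02585     # diode saturation current [A], thermal voltage [V]

def solve_node_voltage(v=0.6, tol=1e-12, max_iter=100):
    """Newton-Raphson on the KCL residual f(v); the 1x1 'linear solve'
    stands in for the coupled linear-network solve in the real method."""
    for _ in range(max_iter):
        f = (VS - v) / R - IS * (math.exp(v / VT) - 1.0)   # residual
        df = -1.0 / R - (IS / VT) * math.exp(v / VT)       # Jacobian (1x1)
        dv = -f / df                                       # linearized update
        v += dv
        if abs(dv) < tol:                                  # converged
            break
    return v

v = solve_node_voltage()
```

In a full simulator the scalar Jacobian becomes a large sparse conductance matrix, which is where a fast linear solver such as algebraic multigrid replaces direct LU factorization.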
Following the generalized approach, we partition the circuit and alternate explicit and implicit numerical integration between the partitions. The splitting algorithm is derived to significantly reduce the overhead of LU factorization, so the robust direct method remains efficient even for large-scale circuits. Unlike existing fast transistor-level simulation methods, both approaches proposed in this dissertation offer guaranteed simulation accuracy as well as a significant runtime advantage. They can be used in post-layout transistor-level analysis of large-scale digital and mixed-signal circuits.
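The alternation of explicit and implicit integration between partitions can be sketched as follows. This is an illustrative toy, not the dissertation's algorithm: two coupled RC nodes (assumed conductances g, gc and capacitance C) stand in for two circuit partitions, and each time step one partition takes a backward-Euler (implicit) update while the other takes a forward-Euler (explicit) update, with the roles swapped on the next step, so only small per-partition systems are ever solved.

```python
# Toy two-partition system:  C * dv/dt = -G * v,
# with G = [[g+gc, -gc], [-gc, g+gc]] split along the partition boundary.
C = 1.0            # node capacitance [F] (assumed, same on both nodes)
g, gc = 1.0, 0.5   # grounded and coupling conductances [S] (assumed)

def step(v1, v2, h, implicit_first):
    """One splitting step of size h; 'implicit_first' selects which
    partition receives the implicit (backward-Euler) update."""
    if implicit_first:
        # partition 1 implicit (coupling to v2 lagged), partition 2 explicit
        v1 = (v1 + h * gc * v2 / C) / (1.0 + h * (g + gc) / C)
        v2 = v2 + h * (-(g + gc) * v2 + gc * v1) / C
    else:
        v2 = (v2 + h * gc * v1 / C) / (1.0 + h * (g + gc) / C)
        v1 = v1 + h * (-(g + gc) * v1 + gc * v2) / C
    return v1, v2

def simulate(v1=1.0, v2=0.0, h=0.1, steps=200):
    """Alternate the implicit/explicit roles between partitions each step."""
    for n in range(steps):
        v1, v2 = step(v1, v2, h, implicit_first=(n % 2 == 0))
    return v1, v2
```

Because G is positive definite, both node voltages decay toward zero; each step factors only a per-partition system (here scalar divisions) rather than the full matrix, which is the source of the LU-overhead reduction described above.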
- Published
- 2005