407 results for "Todd Austin"
Search Results
202. Architectural optimizations for low-power, real-time speech recognition
- Author
-
Scott Mahlke, Todd Austin, and Rajeev Krishna
- Subjects
Exploit, Application domain, Computer science, Speech recognition, Interface (computing), Task parallelism, Energy consumption, Power domains, Domain (software engineering), Microarchitecture - Abstract
The proliferation of computing technology to low-power domains such as hand-held devices has led to increased interest in portable interface technologies, with particular interest in speech recognition. The computational demands of robust, large-vocabulary speech recognition systems, however, are currently prohibitive for such low-power devices. This work begins an exploration of domain-specific characteristics of speech recognition that might be exploited to achieve the requisite performance within the power constraints of such devices. We focus primarily on architectural techniques to exploit the massive amounts of potential thread-level parallelism apparent in this application domain, and consider the performance/power trade-offs of such architectures. Our results show that a simple, multi-threaded, multi-pipelined processor architecture can significantly improve the performance of the time-consuming search phase of modern speech recognition algorithms, and may reduce overall energy consumption by drastically reducing dissipation of static power. We also show that the primary hurdle to achieving these performance benefits is the data request rate into the memory system, and consider some initial solutions to this problem.
- Published
- 2003
203. Efficient dynamic scheduling through tag elimination
- Author
-
Daniel J. Ernst and Todd Austin
- Subjects
business.industry, Computer science, Instruction scheduling, Instruction window, Processor scheduling, General Medicine, Dynamic priority scheduling, Parallel computing, Content-addressable memory, Operand, Instruction set, Content-addressable storage, business, Critical path method, Computer hardware - Abstract
An increasingly large portion of scheduler latency is derived from the monolithic content addressable memory (CAM) arrays accessed during instruction wakeup. The performance of the scheduler can be improved by decreasing the number of tag comparisons necessary to schedule instructions. Using detailed simulation-based analyses, we find that most instructions enter the window with at least one of their input operands already available. By putting these instructions into specialized windows with fewer tag comparators, load capacitance on the scheduler critical path can be reduced, with only very small effects on program throughput. For instructions with multiple unavailable operands, we introduce a last-tag speculation mechanism that eliminates all remaining tag comparators except those for the last arriving input operand. By combining these two tag-reduction schemes, we are able to construct dynamic schedulers with approximately one quarter of the tag comparators found in conventional designs. Conservative circuit-level timing analyses indicate that the optimized designs are 20-45% faster and require 10-20% less power, depending on instruction window size.
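To make the scheme concrete, here is a minimal Python sketch (ours, not the authors' implementation) that steers instructions into reduced-tag windows according to how many source operands are still outstanding at dispatch; the register names and ready set are invented:

```python
from collections import namedtuple

# Tag elimination, illustrated: instructions with zero or one unavailable
# source operand go to windows with fewer wakeup comparators, shrinking
# the scheduler's CAM arrays.
Insn = namedtuple("Insn", ["dest", "srcs"])

def steer(insns, ready_regs):
    """Classify each instruction by how many source tags it must still watch."""
    zero_tag, one_tag, full_cam = [], [], []
    for insn in insns:
        unready = [r for r in insn.srcs if r not in ready_regs]
        if not unready:
            zero_tag.append(insn)    # needs no wakeup comparators at all
        elif len(unready) == 1:
            one_tag.append(insn)     # a single (last-arriving-tag) comparator
        else:
            full_cam.append(insn)    # falls back to the conventional window
    return zero_tag, one_tag, full_cam

stream = [Insn("r3", ("r1", "r2")), Insn("r4", ("r3", "r9")),
          Insn("r5", ("r4", "r3")), Insn("r6", ("r9", "r10"))]
z, o, f = steer(stream, ready_regs={"r1", "r2", "r9"})
print(len(z), "zero-tag,", len(o), "one-tag,", len(f), "full-CAM")  # 1 2 1
```

In a real scheduler the multiple-unready case would also carry the last-tag speculation and recovery machinery the abstract describes; the sketch shows only the steering decision.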
- Published
- 2003
204. Across the great divide: examination of simulation data with actual silicon waveforms improves device characterization and production test development
- Author
-
C. Canlas, Todd Austin, B. Morgan, and J.L. Rodriguez
- Subjects
Engineering, Data acquisition, Information engineering, Computer engineering, Product design, business.industry, Design flow, New product development, System testing, Integrated circuit design, business, Electronic data interchange - Abstract
Engineers face major challenges when they try to debug problems with a device being characterized or tested at a remote site. They find it difficult, time-consuming, and error-prone to gather waveform data from a separate facility for analysis. This paper discusses a tool for specifying the data to be collected, automating its capture, and displaying and comparing it to waveforms previously collected through simulation of the device during the product design phase. Specific ways of viewing not only these different waveforms but also the relationships between them are presented. We also describe the development of such a system, its inclusion in the design flow for new IC development, and the resulting improvements in new product introduction from the viewpoint of design and test.
- Published
- 2003
205. Safety and effectiveness of methohexital for procedural sedation in the emergency department
- Author
-
Ellen Nyheim, Theodore C. Chan, Todd Austin, Donna Kelly, and Gary M. Vilke
- Subjects
Adult, medicine.medical_specialty, Adolescent, Sedation, Conscious Sedation, Intensive care, medicine, Humans, Child, Aged, Retrospective Studies, Aged, 80 and over, business.industry, Retrospective cohort study, Emergency department, Airway obstruction, Middle Aged, medicine.disease, Surgery, Hypoventilation, Anesthesia, Methohexital, Emergency Medicine, Vomiting, medicine.symptom, business, Emergency Service, Hospital, Anesthetics, Intravenous, medicine.drug - Abstract
Use of methohexital as an agent for moderate procedural sedation in the Emergency Department (ED) recently has increased. As a barbiturate, potential complications include respiratory and myocardial depression. We conducted a retrospective review of medical records and procedural flow charts for all use of methohexital in our ED during a 31-month period. We collected data on medication use, adjunctive medications, indications, procedural success, and complications. Overall, there were 114 orthopedic procedures performed using methohexital (mean dose of 1.43 mg/kg) for sedation on 104 patients. Procedures included shoulder dislocation reduction (26.3%), hip dislocation reduction (25.4%), elbow dislocation reduction (15.2%), and fracture reduction (25.4%). There was an 80.8% success rate with the first dose of methohexital. Complications occurred in 20.2% of patients and included oxygen desaturation, hypotension, hypoventilation, vomiting, tremor, and airway obstruction. All complications were transient and managed without sequelae. Use of concurrent parenteral opioid medications had no significant impact on success or complications.
- Published
- 2003
206. The legalization of drugs: why prolong the inevitable?
- Author
-
Brenner, Todd Austin
- Subjects
Legalization of narcotics -- Social aspects
- Published
- 1989
207. Creating a mixed-signal simulation capability for concurrent IC design and test program development
- Author
-
Todd Austin
- Subjects
Computer architecture, Concurrent engineering, business.industry, Test data generation, Computer science, Embedded system, Design for testing, Test Management Approach, Test plan, business, Test harness, Test (assessment), Test data - Abstract
The ability to link mixed-signal IC design and test databases can shorten product development cycles in multiple ways. By allowing designers to simulate device tests and by giving test engineers access to the results, such a link promotes testability from the earliest stages of design, and generates data usable in test program development. Moreover, by enabling test program development to take place in simulation, design/test integration frees test engineers both to work in parallel with designers, rather than having to wait for a fabricated device, and to debug test programs and hardware off-line at a workstation, rather than waiting for time on a busy test system. This paper describes the development of both the test instrument models required for a design simulator to generate device test data and the software links that let the design simulator share data with the test programming environment. The resulting integration supports concurrent design and test engineering efforts in developing new mixed-signal IC products.
- Published
- 2002
208. Testing the design: the evolution of test simulation
- Author
-
C. Force and Todd Austin
- Subjects
Engineering, business.industry, Design for testing, Design flow, System testing, Analogy, Integrated circuit, law.invention, Reliability engineering, Test (assessment), Information engineering, law, New product development, business, Simulation - Abstract
In work done in cooperation with Texas Instruments, Analogy, and Teradyne, it has been demonstrated that a simulation of a complete test system, when combined with design models of an integrated circuit, can reduce the cycle time required to get a new product to market. This paper describes the development of such a system, its inclusion in the design flow for new IC development, and the resulting improvements in new product introduction from the viewpoint of design and test.
- Published
- 2002
209. Dynamic test emulation for EDA-based mixed-signal test development automation
- Author
-
N. Khouzam, Todd Austin, and Jean Qincui Xia
- Subjects
Emulation, Automatic test equipment, Engineering, business.industry, Embedded system, System testing, Electronic design automation, Test Management Approach, business, Automation, Test harness, Test (assessment) - Abstract
This paper presents the analysis and development of an Electronic Design Automation (EDA)-based Test Development Automation (TDA) system. We explore the need for such a system and provide a real example of the system at work. The focus of the paper is the concept of dynamic test emulation, which recognizes that a mixed-signal test often consists of obtaining information from a large number of measurements taken at different times through the use of complex digital and analog test patterns. Our work concentrates on bringing together two existing communities, EDA and test development, by linking their separate environments and providing a platform for test and device development that does not require the existence of a physical integrated circuit or a physical Automatic Test Equipment (ATE) system.
- Published
- 2002
210. A fault tolerant approach to microprocessor design
- Author
-
Todd Austin and Christopher T. Weaver
- Subjects
Multi-core processor, Finite-state machine, Computer science, business.industry, Real-time computing, Static timing analysis, Fault tolerance, law.invention, Instruction set, Microprocessor, Alpha (programming language), law, Component (UML), Embedded system, business - Abstract
We propose a fault-tolerant approach to reliable microprocessor design. Our approach, based on the use of an online checker component in the processor pipeline, provides significant resistance to core processor design errors and operational faults such as supply voltage noise and energetic particle strikes. We show through cycle-accurate simulation and timing analysis of a physical checker design that our approach preserves system performance while keeping area overheads and power demands low. Furthermore, analyses suggest that the checker is a fairly simple state machine that can be formally verified, scaled in performance, and reused. Further simulation analyses show virtually no performance impacts when our simple checker design is coupled with a high-performance microprocessor model. Timing analyses indicate that a fully synthesized unpipelined 4-wide checker component in 0.25 μm technology is capable of checking Alpha instructions at 288 MHz. Physical analyses also confirm that costs are quite modest; our prototype checker requires less than 6% of the area and 1.5% of the power of an Alpha 21264 processor in the same technology. Additional improvements to the checker component are described that allow for improved detection of design, fabrication, and operational faults.
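The checking idea can be pictured with a toy re-execution loop (our sketch under simplifying assumptions; the three-operation ISA and the trace are fabricated, and a real checker must also handle loads, stores, and control flow):

```python
# The core's retired results are recomputed on architectural state by a
# simple, verifiable checker; on a mismatch the checker's value commits
# and the core would be flushed and restarted.
OPS = {"add": lambda a, b: (a + b) & 0xFFFFFFFF,
       "sub": lambda a, b: (a - b) & 0xFFFFFFFF,
       "xor": lambda a, b: a ^ b}

def check_and_commit(regs, retired):
    """retired: (op, dest, src1, src2, core_result) tuples in program order."""
    faults = 0
    for op, dest, s1, s2, core_result in retired:
        golden = OPS[op](regs[s1], regs[s2])   # simple recomputation
        if golden != core_result:
            faults += 1                        # core was wrong: recover
        regs[dest] = golden                    # the checked value always commits
    return faults

regs = {"r1": 5, "r2": 7, "r3": 0, "r4": 0}
trace = [("add", "r3", "r1", "r2", 12),
         ("xor", "r4", "r3", "r2", 99)]        # 99 models a transient core error
print("faults detected:", check_and_commit(regs, trace))  # -> 1; r4 ends up 11
```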
- Published
- 2002
211. Efficient checker processor design
- Author
-
Todd Austin, Saugata Chatterjee, and Christopher T. Weaver
- Subjects
Multi-core processor, Computer science, business.industry, Pipeline (computing), Reliability (computer networking), Processor design, computer.software_genre, law.invention, Microprocessor, law, Computer data storage, Operating system, Cache, business, computer - Abstract
The design and implementation of a modern microprocessor creates many reliability challenges. Designers must verify the correctness of large complex systems and construct implementations that work reliably in varied (and occasionally adverse) operating conditions. In our previous work, we proposed a solution to these problems by adding a simple, easily verifiable checker processor at pipeline retirement. Performance analyses of our initial design were promising: overall slowdowns due to checker processor hazards were less than 3%. However, slowdowns for some outlier programs were larger. In this work, we closely examine the operation of the checker processor. We identify the specific reasons why the initial design works well for some programs but slows others. Our analyses suggest a variety of improvements to the checker processor storage system. Through the addition of a 4 KB checker cache and an eight-entry store queue, our optimized design eliminates virtually all core processor slowdowns. Moreover, we develop insights into why the optimized checker processor performs well, insights that suggest it should perform well for any program.
- Published
- 2002
212. High Performance and Energy Efficient Serial Prefetch Architecture
- Author
-
Brad Calder, Todd Austin, and Glenn Reinman
- Subjects
Instruction prefetch, Hardware_MEMORYSTRUCTURES, Memory hierarchy, Computer science, business.industry, CPU cache, Fetch, Branch predictor, Embedded system, Scalability, Cache, business, Queue, Efficient energy use - Abstract
Energy-efficient architecture research has flourished recently, in an attempt to address packaging and cooling concerns of current microprocessor designs, as well as battery life for mobile computers. Moreover, architects have become increasingly concerned with the complexity of their designs in the face of scalability, verification, and manufacturing concerns. In this paper, we propose and evaluate a high-performance, energy- and complexity-efficient front-end prefetch architecture. This design, called Serial Prefetching, combines a high-fetch-bandwidth branch prediction and efficient instruction prefetching architecture with a low-energy instruction cache. Serial Prefetching explores the benefit of decoupling the tag component of the cache from the data component. Cache blocks are first verified by the tag component of the cache, and then the accesses are put into a queue to be consumed by the data component of the instruction cache. Energy is saved by accessing only the correct way of the data component, specified by the tag lookup in a previous cycle. The tag component does not stall on an I-cache miss; only the data component does. The accesses that miss in the tag component are speculatively brought in from lower levels of the memory hierarchy. This in effect performs a prefetch while the access migrates through the queue to be consumed by the data component.
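The decoupling can be sketched as follows (ours; the cache geometry, replacement policy, and trace are invented): the tag probe resolves the hit way a cycle early and queues the access, the data array later reads only that way, and a tag miss starts the prefetch immediately:

```python
from collections import deque

SETS, WAYS = 4, 2
tags = [[None] * WAYS for _ in range(SETS)]   # tag component, probed first
data_queue = deque()                          # accesses awaiting the data component

def tag_probe(block_addr):
    s, t = block_addr % SETS, block_addr // SETS
    for w in range(WAYS):
        if tags[s][w] == t:
            data_queue.append((s, w))         # hit: data read touches one way only
            return "hit"
    victim = next((w for w in range(WAYS) if tags[s][w] is None), 0)
    tags[s][victim] = t                       # miss: start the prefetch now;
    data_queue.append((s, victim))            # the access waits in the queue
    return "miss -> prefetch issued"

for addr in (8, 12, 8):
    print(addr, tag_probe(addr))              # 8 and 12 miss; the second 8 hits
```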
- Published
- 2002
213. Effective support of simulation in computer architecture instruction
- Author
-
Christopher T. Weaver, Todd Austin, and Eric D. Larson
- Subjects
Enterprise architecture framework, Cellular architecture, Computer architecture, Computer architecture simulator, Computer science, Applications architecture, Reference architecture, Data architecture, Space-based architecture, Database-centric architecture - Abstract
The use of simulation is well established in academic and industry research as a means of evaluating architecture trade-offs. The large code base, complex architectural models, and numerous configurations of these simulators can overwhelm those just learning computer architecture. Even those experienced with computer architecture may have trouble adapting a simulator to their needs, due to the code complexity and simulation method. In this paper we present tools we have developed to make simulation more accessible in the classroom by aiding the process of launching simulations, interpreting results, and developing new architectural models.
- Published
- 2002
214. Challenges for Architectural Level Power Modeling
- Author
-
Trevor Mudge, Dirk Grunwald, Nam Sung Kim, and Todd Austin
- Subjects
Engineering, Computer architecture simulator, Order (exchange), business.industry, Gauge (instrument), Embedded system, Engineering design process, business, Power (physics), Reliability engineering, Microarchitecture - Abstract
The power-aware design of microprocessors is becoming increasingly important. Power-aware design can best be achieved by considering the impact of architectural choices on power early in the design process. A natural solution is to build a power estimator into the cycle simulators that are used to gauge the effect of architectural choices on performance. Cycle simulators intentionally omit considerable implementation detail in order to be efficient. The challenge is to select the details that must be put back in if the simulator is required to also produce meaningful power figures. In this paper we propose how to augment a cycle simulator to produce these power figures.
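The augmentation the authors call for can be approximated with per-unit activity counters weighted by per-access energies, as in this sketch (ours; the unit list and picojoule figures are placeholders, not measured values):

```python
ENERGY_PJ = {"icache": 50.0, "dcache": 60.0, "alu": 8.0, "regfile": 12.0}

class PowerModel:
    """Attach to a cycle simulator: count unit accesses, weight by energy."""
    def __init__(self):
        self.accesses = {unit: 0 for unit in ENERGY_PJ}

    def tick(self, active_units):
        for unit in active_units:             # called once per simulated cycle
            self.accesses[unit] += 1

    def report(self, cycles, freq_hz):
        total_pj = sum(self.accesses[u] * ENERGY_PJ[u] for u in ENERGY_PJ)
        watts = (total_pj * 1e-12) * freq_hz / cycles   # energy/cycle times f
        return total_pj, watts

pm = PowerModel()
for cycle in range(1000):
    pm.tick(["icache", "alu", "regfile"] if cycle % 2 else ["dcache", "alu"])
pj, watts = pm.report(cycles=1000, freq_hz=600e6)
print(f"{pj:.0f} pJ total, {watts:.3f} W average")
```

The hard part the paper identifies is choosing which omitted details (bit activity, wire lengths, clock gating) must be restored for such per-access figures to be meaningful.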
- Published
- 2002
215. Application specific architectures
- Author
-
Lisa Wu, Christopher T. Weaver, Todd Austin, and Rajeev Krishna
- Subjects
Reduction (complexity), Focus (computing), Computer science, business.industry, Processor design, Embedded system, Application-specific instruction-set processor, Key (cryptography), Cryptography, business, Electrical efficiency, Domain (software engineering) - Abstract
The general-purpose processor has long been the focus of intense optimization efforts that have resulted in an impressive doubling of performance every 18 months. However, recent evidence suggests that these efforts may be faltering as pipelining and ILP processing techniques experience diminishing returns. Application-specific architectures hold great potential as an alternative means to continue scaling application performance. The approach works by specializing a design to a small domain of important applications, and it benefits from improved performance, greater power efficiency, and reduced area costs. This technique is well matched to embedded targets, where application domains are typically narrow. In this paper we present a case study of an application-specific processor design. Our design, called the CryptoManiac processor, is an architecture specialized to efficiently execute cryptographic ciphers. We carefully highlight the domain-specific application characteristics we identified and their accompanying optimizations in the CryptoManiac design. Detailed analyses of the design make a strong case for application-specific optimization. The CryptoManiac processor runs popular ciphers at twice the speed of a high-end general-purpose processor, and the design renders nearly two orders-of-magnitude reduction in power consumption and area costs. Finally, we identify two key challenges that stand as barriers to widespread adoption of application-specific architectures.
- Published
- 2001
216. CryptoManiac
- Author
-
Christopher T. Weaver, Lisa Wu, and Todd Austin
- Subjects
Triple DES, business.industry, Computer science, computer.internet_protocol, Advanced Encryption Standard, Cryptography, General Medicine, Cryptographic protocol, Cipher, Secure communication, Embedded system, IPsec, business, Communications protocol, computer - Abstract
The growth of the Internet as a vehicle for secure communication and electronic commerce has brought cryptographic processing performance to the forefront of high-throughput system design. This trend will be further underscored with the widespread adoption of secure protocols such as secure IP (IPSEC) and virtual private networks (VPNs). In this paper, we introduce the CryptoManiac processor, a fast and flexible co-processor for cryptographic workloads. Our design is extremely efficient; we present analysis of a 0.25 μm physical design that runs the standard Rijndael cipher algorithm 2.25 times faster than a 600 MHz Alpha 21264 processor. Moreover, our implementation requires 1/100th the area and power in the same technology. We demonstrate that the performance of our design rivals a state-of-the-art dedicated hardware implementation of the 3DES (triple DES) algorithm, while retaining the flexibility to simultaneously support multiple cipher algorithms. Finally, we define a scalable system architecture that combines CryptoManiac processing elements to exploit inter-session and inter-packet parallelism available in many communication protocols. Using I/O traces and detailed timing simulation, we show that chip multiprocessor configurations can effectively service high-throughput applications including secure web and disk I/O processing.
- Published
- 2001
217. Scalable hybrid verification of complex microprocessors
- Author
-
Maher Mneimneh, Karem A. Sakallah, Todd Austin, Fadi Aloul, Saugata Chatterjee, and Christopher T. Weaver
- Subjects
Physical verification, Functional verification, Computer architecture, business.industry, Computer science, Software deployment, Embedded system, Scalability, Verification, business, Formal methods, Formal verification, Intelligent verification - Abstract
We introduce a new verification methodology for modern microprocessors that uses a simple checker processor to validate the execution of a companion high-performance processor. The checker can be viewed as an at-speed emulator that is formally verified to be compliant with an ISA specification. This verification approach enables the practical deployment of formal methods without impacting overall performance.
- Published
- 2001
218. Compiler controlled value prediction using branch predictor based confidence
- Author
-
Todd Austin and Eric D. Larson
- Subjects
Speedup, Computer science, Value (computer science), Compiler, Parallel computing, Hardware_CONTROLSTRUCTURESANDMICROPROGRAMMING, Arithmetic, Branch predictor, computer.software_genre, Instruction-level parallelism, Branch misprediction, computer, Critical path method - Abstract
Value prediction breaks data dependencies in a program, thereby creating instruction-level parallelism that can increase program performance. Hardware-based value prediction techniques have been shown to increase speed, but at great cost, as designs include prediction tables, selection logic, and a confidence mechanism. This paper proposes compiler-controlled value prediction optimizations that obtain good speedups while keeping hardware costs low. The branch predictor is used to estimate the confidence of the value predictor for speculated instructions. This technique obtains a 4.6% speedup when completely implemented in software and a 15.2% speedup when minimal hardware support (a 1 KB predictor table) is added. We also explore the use of critical path information to aid in the selection of value prediction candidates. The key result of our study is that programs with long dynamic dependence chains benefit from this technique, while programs with shorter chains benefit more from simple selection methods that favor optimization frequency. A new branch instruction that ignores innocuous value mispredictions is shown to eliminate unnecessary mispredictions when program semantics are not violated by confidence branch mispredictions.
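The software-only version of the transformation can be pictured as below (our sketch; the predicted value, the 90% bias, and the workload are invented). The verification compare-and-branch is strongly biased, which is what lets an ordinary branch predictor serve as the confidence mechanism:

```python
import random

random.seed(1)
memory = [0] * 64                       # a location that usually holds 0

def consume(v):
    return v * 3 + 1                    # stand-in for the dependent work

def speculative_use(addr, predicted=0):
    fast = consume(predicted)           # dependent chain starts immediately
    actual = memory[addr]               # the long-latency load completes later
    if actual == predicted:             # verification branch: highly biased,
        return fast, True               # so the branch predictor learns it
    return consume(actual), False       # misprediction: redo with the real value

hits = 0
for trial in range(1000):
    memory[5] = 0 if random.random() < 0.9 else trial
    _, ok = speculative_use(5)
    hits += ok
print(f"prediction confirmed on {hits / 10:.1f}% of uses")
```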
- Published
- 2000
219. Architectural support for fast symmetric-key cryptography
- Author
-
Todd Austin, Jerome Burke, and John W. McDonald
- Subjects
Triple DES, Speedup, Modular arithmetic, business.industry, Computer science, Cryptography, Encryption, Computer Graphics and Computer-Aided Design, Permutation, Secure communication, Cipher, Computer engineering, Symmetric-key algorithm, Embedded system, Strong cryptography, business, Software - Abstract
The emergence of the Internet as a trusted medium for commerce and communication has made cryptography an essential component of modern information systems. Cryptography provides the mechanisms necessary to implement accountability, accuracy, and confidentiality in communication. As demands for secure communication bandwidth grow, efficient cryptographic processing will become increasingly vital to good system performance. In this paper, we explore techniques to improve the performance of symmetric-key cipher algorithms. Eight popular strong encryption algorithms are examined in detail. Analysis reveals the algorithms are computationally complex and contain little parallelism. Overall throughput on a high-end microprocessor is quite poor; a 600 MHz processor is incapable of saturating a T3 communication line with 3DES (triple DES) encrypted data. We introduce new instructions that improve the efficiency of the analyzed algorithms. Our approach adds instruction-set support for fast substitutions, general permutations, rotates, and modular arithmetic. Performance analysis of the optimized ciphers shows an overall speedup of 59% over a baseline machine with rotate instructions and a 74% speedup over a baseline without rotates. Even higher speedups are demonstrated with optimized substitutions (SBOXes) and additional functional unit resources. Our analyses of the original and optimized algorithms suggest future directions for the design of high-performance programmable cryptographic processors.
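To see why instruction-set support for substitutions pays off, compare a scalar byte-at-a-time SBOX lookup with a modeled single-instruction version (our sketch; the 8-bit table is a toy permutation, not a real cipher's SBOX):

```python
SBOX = [(x * 7 + 3) % 256 for x in range(256)]   # toy bijective table

def substitute_scalar(word):
    """Baseline: four dependent extract/lookup/insert steps per 32-bit word."""
    out = 0
    for i in range(4):
        byte = (word >> (8 * i)) & 0xFF          # extract
        out |= SBOX[byte] << (8 * i)             # lookup + insert
    return out

def substitute_fused(word):
    """Models a single SBOX instruction operating on all four bytes at once."""
    b = word.to_bytes(4, "little")
    return int.from_bytes(bytes(SBOX[x] for x in b), "little")

w = 0xDEADBEEF
assert substitute_scalar(w) == substitute_fused(w)
print(hex(substitute_fused(w)))
```

The scalar loop is the shift/mask/load sequence that dominates software cipher kernels; collapsing it into one operation is the kind of saving the measured speedups come from.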
- Published
- 2000
220. Inhibition of rabbit muscle isozymes by vitamin C
- Author
-
Anita Williams, Todd Austin, and Percy J. Russell
- Subjects
Vitamin K, medicine.medical_treatment, Phosphofructokinase-1, Ascorbic Acid, Biochemistry, Isozyme, Models, Biological, chemistry.chemical_compound, Menadione, medicine, Animals, Vitamin E, Sulfhydryl Compounds, Enzyme Inhibitors, Muscle, Skeletal, chemistry.chemical_classification, Oxidase test, Glycogen, Vitamin C, L-Lactate Dehydrogenase, Adenylate Kinase, Skeletal muscle, Molecular biology, Dehydroascorbic Acid, Enzyme, medicine.anatomical_structure, chemistry, Liver, Molecular Medicine, Rabbits, Oxidation-Reduction - Abstract
The ability of vitamins C, E, and K to inhibit enzymes directly has been investigated. It was found that vitamin E, some of its analogs, and menadione (vitamin K3) inhibited several enzymes irreversibly at concentrations below one millimolar. Ascorbate inhibits rabbit muscle 6-phosphofructokinase (MPFK-1; EC 2.7.1.11), muscle-type LDH (EC 1.1.1.27), and muscle AK (EC 2.7.4.3) at low concentrations that do not inhibit the equivalent liver isozymes. Ascorbate Ki values for muscle-type LDH and heart-type LDH isozymes are 0.007 and 3 mM, respectively. The ascorbate Ki value for rabbit skeletal muscle PFK-1 is 0.16 mM; liver PFK-1 is not inhibited by ascorbate. Dehydroascorbate does not inhibit any enzyme at ascorbate concentrations normally found in cells. All ascorbate inhibitions are completely reactivated, or nearly so, by L-ascorbate oxidase, CYS, GSH, or DTT. We propose a hypothesis that ascorbate facilitates glycogen storage in muscle by inhibiting glycolysis. The relationship between ascorbate metabolism and diabetes is discussed.
- Published
- 2000
221. Classifying load and store instructions for memory renaming
- Author
-
Brad Calder, Glenn Reinman, Todd Austin, Gary Tyson, and Dean M. Tullsen
- Subjects
Profiling (computer programming), Memory address, Dependency (UML), Computer science, Value (computer science), Table (database), Compiler, Parallel computing, computer.software_genre, computer, Bottleneck - Abstract
Memory operations remain a significant bottleneck in dynamically scheduled pipelined processors, due in part to the inability to statically determine the existence of memory address dependencies. Hardware memory renaming techniques have been proposed to predict which stores a load might be dependent upon. These prediction techniques can be used to speculatively forward a value from a predicted store dependency to a load through a value prediction table. However, these techniques require large, time-consuming hardware tables. In this paper we propose a software-guided approach for identifying dependencies between store and load instructions and the Load Marking (LM) architecture to communicate these dependencies to the hardware. Compiler analysis and profiles are used to find important store/load relationships, and these relationships are identified during execution via hints or an n-bit tag. For those loads that are not marked for renaming, we then use additional profiling information to further classify the loads into those that have accurate value prediction and those that do not. These classifications allow the processor to individually apply the most appropriate aggressive form of execution for each load.
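The profiling step that finds stable store/load relationships can be sketched as follows (ours; the PCs, addresses, and trace are fabricated for illustration):

```python
from collections import Counter

trace = [("st", 0x400, 0x1000), ("ld", 0x420, 0x1000),
         ("st", 0x400, 0x1008), ("ld", 0x420, 0x1008),
         ("st", 0x450, 0x2000), ("ld", 0x460, 0x2000)] * 50

last_writer = {}                 # address -> PC of the last store to write it
pairs = Counter()                # (store PC, load PC) -> occurrences

for kind, pc, addr in trace:
    if kind == "st":
        last_writer[addr] = pc
    elif addr in last_writer:
        pairs[(last_writer[addr], pc)] += 1

# Pairs that recur across the profile are candidates for marking; the rest
# would be classified further by value-prediction accuracy.
for (st_pc, ld_pc), n in pairs.most_common():
    print(f"store {st_pc:#x} -> load {ld_pc:#x}: {n} times")
```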
- Published
- 1999
222. Cache-conscious data placement
- Author
-
Simmi John, Todd Austin, Brad Calder, and Chandra Krintz
- Subjects
Snoopy cache, Hardware_MEMORYSTRUCTURES, Computer science, Cache coloring, CPU cache, MESI protocol, Pipeline burst cache, Global Assembly Cache, Parallel computing, Cache pollution, Cache-oblivious algorithm, MESIF protocol, Smart Cache, Virtual address space, Cache invalidation, Write-once, Bus sniffing, Locality of reference, Page cache, Cache, Least frequently used, Cache algorithms, Heap (data structure) - Abstract
As the gap between memory and processor speeds continues to widen, cache efficiency is an increasingly important component of processor performance. Compiler techniques have been used to improve instruction cache performance by mapping code with temporal locality to different cache blocks in the virtual address space, eliminating cache conflicts. These code placement techniques can be applied directly to the problem of placing data for improved data cache performance. In this paper we present a general framework for Cache-Conscious Data Placement. This is a compiler-directed approach that creates an address placement for the stack (local variables), global variables, heap objects, and constants in order to reduce data cache misses. The placement of data objects is guided by a temporal relationship graph between objects generated via profiling. Our results show that profile-driven data placement significantly reduces the data miss rate, by 24% on average.
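A compact sketch of the framework's core loop (ours, with an invented trace, window, and cache size): build the temporal relationship graph from a profile, then greedily place conflicting objects in distinct cache sets:

```python
NUM_SETS = 4
trace = list("ABCAB" * 20 + "CDCD" * 10)        # profiled object-access sequence
WINDOW = 3                                       # "temporally close" radius

edges = set()
for i, obj in enumerate(trace):
    for other in trace[max(0, i - WINDOW):i]:
        if other != obj:
            edges.add(frozenset((obj, other)))   # these two contend in time

placement = {}
for obj in sorted({*trace}):
    taken = {placement[o] for o in placement if frozenset((obj, o)) in edges}
    placement[obj] = next(s for s in range(NUM_SETS) if s not in taken)

print(placement)   # temporally related objects land in distinct cache sets
```

A production implementation would weight edges by co-occurrence counts and handle far more objects than sets; the greedy coloring above only shows the shape of the placement decision.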
- Published
- 1998
223. The SimpleScalar tool set as an instructional tool
- Author
-
Todd Austin
- Subjects
Set (abstract data type), Knowledge management, Computer architecture, Computer science, business.industry, Software engineering, business
- Published
- 1998
224. High-bandwidth address translation for multiple-issue processors
- Author
-
Todd Austin and Gurindar S. Sohi
- Subjects
Hardware_MEMORYSTRUCTURES, Computer science, Locality, Translation lookaside buffer, Branch predictor, law.invention, Microprocessor, Memory management, Computer architecture, law, Virtual memory, Hit rate, Concurrent computing, Page - Abstract
In an effort to push the envelope of system performance, microprocessor designs are continually exploiting higher levels of instruction-level parallelism, resulting in increasing bandwidth demands on the address translation mechanism. Most current microprocessor designs meet this demand with a multi-ported TLB. While this design provides an excellent hit rate at each port, its access latency and area grow very quickly as the number of ports is increased. As bandwidth demands continue to increase, multi-ported designs will soon impact memory access latency. We present four high-bandwidth address translation mechanisms with latency and area characteristics that scale better than a multi-ported TLB design. We extend traditional high-bandwidth memory design techniques to address translation, developing interleaved and multi-level TLB designs. In addition, we introduce two new designs crafted specifically for high-bandwidth address translation. Piggyback ports are introduced as a technique to exploit spatial locality in simultaneous translation requests, allowing accesses to the same virtual memory page to combine their requests at the TLB access port. Pretranslation is introduced as a technique for attaching translations to base register values, making it possible to reuse a single translation many times. We perform extensive simulation-based studies to evaluate our designs. We vary key system parameters, such as processor model, page size, and number of architected registers, to see what effects these changes have on the relative merits of each approach. A number of designs show particular promise. Multi-level TLBs with as few as eight entries in the upper-level TLB nearly achieve the performance of a TLB with unlimited bandwidth. Piggyback ports combined with a lesser-ported TLB structure, e.g., an interleaved or multi-ported TLB, also perform well. Pretranslation over a single-ported TLB performs almost as well as a same-sized multi-level TLB, with the added benefit of decreased access latency for physically indexed caches.
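Two of the proposed mechanisms compose naturally, as in this sketch (ours; the entry count, FIFO eviction, and frame allocation are invented): a small upper-level TLB backed by a larger structure, with same-page requests in a cycle piggybacked onto a single port:

```python
PAGE = 4096

class TwoLevelTLB:
    def __init__(self, l1_entries=8):
        self.l1, self.l1_size = {}, l1_entries    # tiny upper-level TLB
        self.l2 = {}                              # larger backing TLB

    def translate(self, vaddrs):
        """Translate one cycle's requests; same-page ones share a port."""
        frames, ports_used = {}, 0
        for page in {va // PAGE for va in vaddrs}:   # piggyback: unique pages
            ports_used += 1
            if page not in self.l1:                  # L1 miss -> probe L2
                self.l2.setdefault(page, len(self.l2))  # fake frame allocation
                if len(self.l1) >= self.l1_size:
                    self.l1.pop(next(iter(self.l1)))    # crude FIFO eviction
                self.l1[page] = self.l2[page]
            frames[page] = self.l1[page]
        xlated = [(va, frames[va // PAGE] * PAGE + va % PAGE) for va in vaddrs]
        return xlated, ports_used

tlb = TwoLevelTLB()
xlated, ports = tlb.translate([0x1000, 0x1008, 0x2000, 0x1FF0])
print(ports, "port(s) for", len(xlated), "requests")   # 2 ports for 4 requests
```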
- Published
- 1996
225. Scene Understanding for the Visually Impaired Using Visual Sonification by Visual Feature Analysis and Auditory Signatures
- Author
-
Silvio Savarese, Todd Austin, Jason Clemons, Yingze Bao, and Mohit Bagra
- Subjects
Ophthalmology, Communication, Computer science, business.industry, Visually impaired, Sonification, Speech recognition, business, Sensory Systems
- Published
- 2012
226. Performance simulation tools
- Author
-
Shubhendu S. Mukherjee, Joel Emer, Todd Austin, Peter S. Magnusson, and Sarita V. Adve
- Subjects
General Computer Science, Computer architecture, Computer science, Systems architecture, Isolation (database systems) - Abstract
Understanding the performance of microprocessors, multiprocessors, and distributed computers requires studying them in isolation as well as observing their interaction with the entire system architecture.
- Published
- 2002
227. Benefits of Airtightness Testing.
- Author
-
Todd, Austin and Labbe, Greg
- Subjects
TESTING, MASS media
- Published
- 2020
228. Dynamic dependency analysis of ordinary programs
- Author
-
Gurindar S. Sohi and Todd Austin
- Subjects
Cellular architecture, Programming language, Computer science, Fortran, Data parallelism, Task parallelism, Parallel computing, Scalable parallelism, computer.software_genre, Minimal instruction set computer, Memory-level parallelism, Concurrent computing, Implicit parallelism, Instruction-level parallelism, computer, computer.programming_language - Abstract
A quantitative analysis of program execution is essential to the computer architecture design process. With the current trend in architecture of enhancing the performance of uniprocessors by exploiting fine-grain parallelism, first-order metrics of program execution, such as operation frequencies, are not sufficient; characterizing the exact nature of dependencies between operations is essential. This paper presents a methodology for constructing the dynamic execution graph that characterizes the execution of an ordinary program (an application program written in an imperative language such as C or FORTRAN) from a serial execution trace of the program. It then uses the methodology to study parallelism in the SPEC benchmarks. We see that the parallelism can be bursty in nature (periods of lots of parallelism followed by periods of little parallelism), but the average parallelism is quite high, ranging from 13 to 23,302 operations per cycle. Exposing this parallelism requires renaming of both registers and memory, though renaming registers alone exposes much of this parallelism. We also see that fairly large windows of dynamic instructions would be required to expose this parallelism from a sequential instruction stream.
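The measurement at the heart of the methodology reduces to a dataflow schedule over the trace, as in this sketch (ours; the synthetic trace of 100 independent three-operation loop bodies is invented). With unlimited renaming, only true dependencies constrain an operation's issue cycle:

```python
def dataflow_parallelism(trace):
    ready = {}                                   # value name -> cycle available
    depth = ops = 0
    for dest, srcs in trace:
        cycle = 1 + max((ready.get(s, 0) for s in srcs), default=0)
        ready[dest] = cycle                      # renaming: dest never conflicts
        depth = max(depth, cycle)
        ops += 1
    return ops / depth                           # average operations per cycle

trace = []
for i in range(100):                             # 100 independent loop bodies
    trace += [(f"a{i}", ["x"]),
              (f"b{i}", [f"a{i}"]),
              (f"c{i}", [f"a{i}", f"b{i}"])]
print(f"dataflow ILP: {dataflow_parallelism(trace):.0f} ops/cycle")  # 100
```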
- Published
- 1992
229. Architecting a reliable CMP switch architecture
- Author
-
Bin Zhang, Todd Austin, Kypros Constantinides, Michael Orshansky, Jason Blome, Scott Mahlke, Stephen M. Plaza, and Valeria Bertacco
- Subjects
Router, business.industry, Computer science, Transistor, Control reconfiguration, Multiprocessing, Hardware_PERFORMANCEANDRELIABILITY, Chip, law.invention, Process variation, Bathtub curve, Hardware and Architecture, law, Embedded system, Redundancy (engineering), business, Software, Information Systems - Abstract
As silicon technologies move into the nanometer regime, transistor reliability is expected to wane as devices become subject to extreme process variation, particle-induced transient errors, and transistor wear-out. Unless these challenges are addressed, computer vendors can expect low yields and short mean-times-to-failure. In this article, we examine the challenges of designing complex computing systems in the presence of transient and permanent faults. We select one small aspect of a typical chip multiprocessor (CMP) system to study in detail, a single CMP router switch. Our goal is to design a BulletProof CMP switch architecture capable of tolerating significant levels of various types of defects. We first assess the vulnerability of the CMP switch to transient faults. To better understand the impact of these faults, we evaluate our CMP switch designs using circuit-level timing on detailed physical layouts. Our infrastructure represents a new level of fidelity in architectural-level fault analysis, as we can accurately track faults as they occur, noting whether they manifest or not, because of masking in the circuits, logic, or architecture. Our experimental results are quite illuminating. We find that transient faults, because of their fleeting nature, are of little concern for our CMP switch, even within large switch fabrics with fast clocks. Next, we develop a unified model of permanent faults, based on the time-tested bathtub curve. Using this convenient abstraction, we analyze the reliability versus area tradeoff across a wide spectrum of CMP switch designs, ranging from unprotected designs to fully protected designs with on-line repair and recovery capabilities. Protection is considered at multiple levels from the entire system down through arbitrary partitions of the design. We find that designs are attainable that can tolerate a larger number of defects with less overhead than naïve triple-modular redundancy, using domain-specific techniques, such as end-to-end error detection, resource sparing, automatic circuit decomposition, and iterative diagnosis and reconfiguration.
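One of the domain-specific techniques named above, end-to-end error detection, can be pictured with a checksum-protected flit (our sketch; the flit format and the CRC choice are assumptions, not the article's implementation):

```python
import zlib

def send(payload: bytes):
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def faulty_fabric(flit: bytes, flip_bit=None):
    if flip_bit is None:
        return flit
    corrupted = bytearray(flit)
    corrupted[flip_bit // 8] ^= 1 << (flip_bit % 8)   # model a particle strike
    return bytes(corrupted)

def receive(flit: bytes):
    payload, crc = flit[:-4], int.from_bytes(flit[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None  # None -> resend

flit = send(b"route:3->7 data:0xCAFE")
assert receive(faulty_fabric(flit)) is not None             # clean flit accepted
assert receive(faulty_fabric(flit, flip_bit=13)) is None    # corruption caught
print("corrupted flit detected end-to-end; retransmission would repair it")
```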
- Published
- 2007
230. Differentiating clinical characteristics of perianal inflammatory bowel disease from perianal hidradenitis suppurativa.
- Author
-
Yamanaka‐Takaichi, Mika, Nadalian, Soheila, Loftus, Edward V., Ehman, Eric C. Jr, Todd, Austin, Grimaldo, Anna B., Yalon, Mariana, Matchett, Caroline L., Patel, Nisha B., Isaq, Nasro A., Raffals, Laura E., Wetter, David A., Murphree, Dennis H. Jr, Cima, Robert R., Dozois, Eric J., Goldfarb, Noah, Tizhoosh, Hamid R., and Alavi, Afsaneh
- Subjects
- *INFLAMMATORY bowel diseases, *CROHN'S disease, *BODY mass index, *RANDOM forest algorithms, *CUTANEOUS manifestations of general diseases - Abstract
Background: Perianal draining tunnels in hidradenitis suppurativa (HS) and perianal fistulizing inflammatory bowel disease (IBD) present diagnostic and management dilemmas. Methods: We conducted a retrospective chart review of patients with perianal disease evaluated at Mayo Clinic from January 1, 1998, through July 31, 2021. Patients' demographic and clinical data were extracted, and 28 clinical features were collected. After experimenting with several machine learning techniques, random forests were used to select the 15 most important clinical features to construct the diagnostic prediction model to distinguish perianal HS from fistulizing perianal IBD. Results: A total of 263 patients were included (98 with HS, 100 with IBD, and 65 with both IBD and HS). Patients with HS had a higher mean body mass index, a higher smoking rate, and more commonly showed cutaneous manifestations of tunnels and comedones, while fistulas, abscesses, induration, anal tags, ulcers, and anal fissures were more common in patients with IBD. In addition to having lesions in the perianal area, patients with IBD often had lesions in the buttocks and perineum, while those with HS had additional lesions in the axillae and groin. Among the statistically significant features, the 15 most important were identified by random forest: fistula, tunnel, digestive symptom, knife-cut ulcer, perineum, body mass index, age, axilla, abscess, tags, smoking, groin, genital cutaneous edema, erythema, and bilateral/unilateral. Conclusions: The results of this study may help differentiate perianal lesions, especially perineal HS and fistulizing perineal IBD, and provide promise for a better therapeutic outcome. [ABSTRACT FROM AUTHOR]
- Published
- 2024
231. Digital dermatopathology implementation: Experience at a multisite academic institution.
- Author
-
Proffer, Sydney L., Reinhart, Jacob, Ridgeway, Jennifer L., Barry, Barbara, Kamath, Celia, Gerdes, Erin Wissler, Todd, Austin, Cervenka, Derek J., DiCaudo, David J., Sokumbi, Olayemi, Johnson, Emma F., Peters, Margot S., Wieland, Carilyn N., and Comfere, Nneka I.
- Abstract
Background: Technology has revolutionized not only direct patient care but also diagnostic care processes. This study evaluates the transition from glass‐slide microscopy to digital pathology (DP) at a multisite academic institution, using mixed methods to understand user perceptions of digitization and key productivity metrics of practice change. Methods: Participants included dermatopathologists, pathology reporting specialists, and clinicians. Electronic surveys and individual or group interviews included questions related to technology comfort, trust in DP, and rationale for DP adoption. Case volumes and turnaround times were abstracted from the electronic health record from Qtr 4 2020 to Qtr 1 2023 (inclusive). Data were analyzed descriptively, while interviews were analyzed using methods of content analysis. Results: Thirty‐four staff completed surveys and 22 participated in an interview. Case volumes and diagnostic turnaround time did not differ across the institution during or after implementation timelines (p = 0.084; p = 0.133, respectively). 82.5% (28/34) of staff agreed that DP improved the sign‐out experience, with accessibility, ergonomics, and annotation features described as key factors. Clinicians reported positive perspectives of DP impact on patient safety and interdisciplinary collaboration. Conclusions: Our study demonstrates that DP has a high acceptance rate, does not adversely impact productivity, and may improve patient safety and care collaboration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
232. Streamlining data cache access with fast address calculation
- Author
-
Gurindar S. Sohi, Todd Austin, and Dionisios Pnevmatikatos
- Subjects
Computer science, Cache coloring, CPU cache, Distributed computing, Pipeline burst cache, General Medicine, Parallel computing, Cache pollution, Cache-oblivious algorithm, Smart Cache, Write-once, Cache invalidation, Page cache, Cache, Cache algorithms - Abstract
For many programs, especially integer codes, untolerated load instruction latencies account for a significant portion of total execution time. In this paper, we present the design and evaluation of a fast address generation mechanism capable of eliminating the delays caused by effective address calculation for many loads and stores. Our approach works by predicting early in the pipeline (part of) the effective address of a memory access and using this predicted address to speculatively access the data cache. If the prediction is correct, the cache access is overlapped with non-speculative effective address calculation. Otherwise, the cache is accessed again in the following cycle, this time using the correct effective address. The impact on the cache access critical path is minimal; the prediction circuitry adds only a single OR operation before cache access can commence. In addition, verification of the predicted effective address is completely decoupled from the cache access critical path. Analyses of program reference behavior and subsequent performance analysis of this approach show that this design is a good one, servicing enough accesses early enough to result in speedups for all the programs we tested. Our approach also responds well to software support, which can significantly reduce the number of mispredicted effective addresses, in many cases providing better program speedups and reducing cache bandwidth requirements.
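The single-OR prediction can be illustrated directly (our sketch; the 11-bit index width is an assumed parameter): when the offset does not carry into the upper bits, OR-ing reproduces the sum, and the full add, off the critical path, verifies the guess:

```python
INDEX_BITS = 11                          # assumed cache-index width

def predict_addr(base, offset):
    mask = (1 << INDEX_BITS) - 1
    return (base & ~mask) | (base & mask) | (offset & mask)

for base, off in [(0x10000, 0x24), (0x107F0, 0x24), (0x10000, 0x7FC)]:
    predicted = predict_addr(base, off)  # single OR on the cache-access path
    actual = base + off                  # non-speculative add, verified later
    status = "hit first try" if predicted == actual else "mispredict, retry"
    print(f"base={base:#x} off={off:#x}: predicted={predicted:#x} "
          f"actual={actual:#x} ({status})")
```

The middle case carries out of the low bits, so the speculative probe is wrong and the cache is simply re-accessed the next cycle with the true address, exactly the recovery the abstract describes.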
233. Monoclonal gammopathy in the setting of Pyoderma gangrenosum.
- Author
-
Saeidi, Vahide, Garimella, Vishal, Shaji, Kumar, Wetter, David A., Davis, Mark Denis P., Todd, Austin, Dutz, Jan, and Alavi, Afsaneh
- Abstract
Pyoderma gangrenosum (PG) is a neutrophilic dermatosis characterized by ulcerative painful lesions with violaceous undermined borders. Up to 75% of PG cases develop in association with an underlying systemic disease. Monoclonal gammopathy is reportedly a concomitant condition with PG, with studies indicating immunoglobulin (Ig) A gammopathy as the most common. Whether gammopathy is associated with PG or is an incidental finding has been debated. We sought to investigate the association and characteristics of gammopathy in patients with PG. We retrospectively identified PG patients at our institution from 2010 to 2022 who were screened for plasma cell dyscrasia. Of 106 patients identified, 29 (27%) had a gammopathy; subtypes included IgA (41%), IgG (28%), and biclonal (IgA and IgG) (14%). Mean age was similar between those with and without gammopathy (60.7 vs. 55.9 years; P =.26). In addition, hematologic or solid organ cancer developed in significantly more patients with vs. without gammopathy (8/29 [28%] vs. 5/77 [6%]; P =.003). Among the subtypes of gammopathy, IgG monoclonal gammopathy had the highest proportion of patients with subsequent cancer development (4 of 8 patients, 50%). Study limitations include a retrospective, single-institution design with a limited number of patients. Overall, our data show a high prevalence of gammopathy in patients with PG; those patients additionally had an increased incidence of cancer, especially hematologic cancer. [ABSTRACT FROM AUTHOR]
- Published
- 2024
234. Prevalence and spectrum of infectious and inflammatory dermatologic conditions occurring in pediatric heart transplant patients on a predominantly mTOR‐based immune suppressive regimen: A retrospective chart review.
- Author
-
Rydberg, Ann, Ameduri, Rebecca, Brown, Trista, Johnson, Jonathan N., Todd, Austin, Tollefson, Megha M., and Anderson, Katelyn
- Subjects
- *HEART transplant recipients, *DRUG eruptions, *CHILD patients, *URTICARIA, *ACNEIFORM eruptions, *PEDIATRIC clinics, *HEART transplantation - Abstract
Introduction: Pediatric heart transplant patients are routinely followed in dermatology clinics due to elevated risk of cutaneous malignancy. However, transplant patients may experience other, non‐cancer‐related dermatologic conditions including skin infections, inflammatory diseases, and drug eruptions that can cause significant medical and psychosocial comorbidity. Methods: A retrospective chart review of all pediatric heart transplant patients at Mayo Clinic Children's Center in Rochester, MN, was performed to determine the prevalence and spectrum of non‐cancer dermatologic conditions. Statistical analysis was conducted to look for associations between episodes of rejection and skin condition development. Results: Of the 65 patients who received heart transplants under the age of 18 and were followed at Mayo Clinic, 69% (N = 45) were diagnosed with at least one skin condition between transplant and the time of most recent follow‐up. Sixty‐two percent (N = 40) of patients were diagnosed with an inflammatory skin condition (most commonly acne and atopic dermatitis), 45% (N = 29) with an infectious skin condition (most commonly warts and dermatophyte infection), and 32% (N = 21) with a drug eruption (most commonly unspecified rash and urticaria). No association was found between presence of skin disease and number of rejection episodes. Conclusions: Non‐cancer dermatologic conditions are prevalent within pediatric heart transplant recipients and may directly impact their medical needs and quality of life. Dermatologist involvement in the care of post‐transplant pediatric patients is important, not only for cancer screening but also for diagnosis and treatment of common infectious and inflammatory skin conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
235. Pemphigoid gestationis and polymorphic eruption of pregnancy: treatment and outcomes in a retrospective cohort study.
- Author
-
Xie, Fangyi, Lehman, Julia S., Baban, Farah, Johnson, Emma F., Theiler, Regan N., Todd, Austin, and Davis, Dawn M. R.
- Subjects
- *PREGNANCY outcomes, *DELIVERY (Obstetrics), *COHORT analysis, *RETROSPECTIVE studies, *TREATMENT duration - Abstract
This article examines the treatment and outcomes of patients with pemphigoid gestationis (PG) and polymorphic eruption of pregnancy (PEP), two types of pruritic dermatoses that occur during pregnancy. The study analyzed medical records and direct immunofluorescence reports from 1995 to 2020. The findings revealed differences in patient characteristics, such as age, ethnicity, parity, gravidity, and gestation at rash, between the PG and PEP groups. The article also provides information on maternal outcomes, including the mode of delivery, for both conditions. The study concludes that PG tends to have an earlier onset and delivery, lower neonatal weight, and requires more systemic therapies for a longer duration compared to PEP. However, it is important to note that the study's retrospective design limits its findings. [Extracted from the article]
- Published
- 2024
236. Systemic Atrioventricular Valve Surgery in Patients With Congenitally Corrected Transposition of the Great Vessels.
- Author
-
Abdelrehim, Ahmed A., Stephens, Elizabeth H., Miranda, William R., Todd, Austin L., Connolly, Heidi M., Egbe, Alexander C., Burchill, Luke J., Ashikhmina, Elena A., and Dearani, Joseph A.
- Subjects
- *TRANSPOSITION of great vessels, *VENTRICULAR ejection fraction, *HEART assist devices, *HEART block, *VALVES, *TRICUSPID valve, *HEART transplantation - Abstract
Limited data exist regarding the long-term outcomes of systemic atrioventricular valve (SAVV) intervention (morphologic tricuspid valve) in congenitally corrected transposition (ccTGA). The goal of this study was to evaluate the mid- and long-term outcomes of SAVV surgery in ccTGA. We performed a retrospective review of 108 ccTGA patients undergoing SAVV surgery from 1979 to 2022. The primary outcome was a composite endpoint of mortality, cardiac transplantation, or ventricular assist device implantation. The secondary outcome was long-term systemic right ventricular ejection fraction (SVEF). Cox proportional hazard and linear regression models were used to analyze survival and late SVEF data. The median age at surgery was 39.5 years (Q1-Q3: 28.8-51.0 years), and the median preoperative SVEF was 39% (Q1-Q3: 33.2%-45.0%). Intrinsic valve abnormality was the most common mechanism of SAVV regurgitation (76.9%). There was 1 early postoperative mortality (0.9%). Postoperative complete heart block occurred in 20 patients (18.5%). The actuarial 5-, 10-, and 20-year freedom from death or transplantation was 92.4%, 79.1%, and 62.9%. The 10- and 20-year freedom from valve reoperation was 100% and 93% for mechanical prosthesis compared with 56.6% and 15.7% for bioprosthesis (P < 0.0001). Predictors of postoperative mortality were age at operation (P = 0.01) and preoperative SVEF (P = 0.04). Preoperative SVEF (P < 0.001), complex ccTGA (P = 0.02), severe SAVV regurgitation (P = 0.04), and preoperative creatinine (P = 0.003) were predictors of late postoperative SVEF. SAVV surgery remains a valuable option for the treatment of patients with ccTGA, with low early mortality and satisfactory long-term outcomes, particularly in those with SVEF ≥40%. Timely referral and accurate patient selection are the keys to better long-term outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
237. The influence of larval migration and dispersal depth on potential larval trajectories of a deep-sea bivalve.
- Author
-
McVeigh, Doreen M., Eggleston, David B., Todd, Austin C., Young, Craig M., and He, Ruoying
- Subjects
- *BIVALVES, *LARVAL dispersal, *MARINE ecology, *METAPOPULATION (Ecology) - Abstract
Many fundamental questions in marine ecology require an understanding of larval dispersal and connectivity, yet direct observations of larval trajectories are difficult or impossible to obtain. Although biophysical models provide an alternative approach, in the deep sea, essential biological parameters for these models have seldom been measured empirically. In this study, we used a biophysical model to explore the role of behaviorally mediated migration from two methane seep sites in the Gulf of Mexico on potential larval dispersal patterns and population connectivity of the deep-sea mussel “Bathymodiolus” childressi, a species for which some biological information is available. Three possible larval dispersal strategies were evaluated for larvae with a Planktonic Larval Duration (PLD) of 395 days: (1) demersal drift, (2) dispersal near the surface early in larval life followed by an extended demersal period before settlement, and (3) dispersal near the surface until just before settlement. Upward swimming speeds varied in the model based on the best data available. Average dispersal distances for simulated larvae varied between 16 km and 1488 km. Dispersal in the upper water column resulted in the greatest dispersal distance (1173 km ± 2.00), followed by mixed dispersal depth (921 km ± 2.00). Larvae originating in the Gulf of Mexico can potentially seed most known seep metapopulations on the Atlantic continental margin, whereas larvae drifting demersally cannot (237 km ± 1.43). Depth of dispersal is therefore shown to be a critical parameter for models of deep-sea connectivity. [ABSTRACT FROM AUTHOR]
- Published
- 2017
238. Histopathologic features predictive of perivascular deposition of IgA on direct immunofluorescence in cases of leukocytoclastic vasculitis: A retrospective study of 112 specimens.
- Author
-
Xie, Fangyi, Johnson, Emma F., Wetter, David A., Camilleri, Michael J., Todd, Austin, and Lehman, Julia S.
- Subjects
- *LEUKOCYTOCLASTIC vasculitis, *IMMUNOFLUORESCENCE, *IMMUNOGLOBULIN A, *FISHER exact test, *MICROSCOPY, *CHI-squared test, *EOSINOPHILIA - Abstract
IgA vasculitis is a small‐vessel vasculitis subtype with increased risk of systemic involvement. We aimed to investigate if any light‐microscopic features can predict the presence of perivascular granular IgA deposits on direct immunofluorescence (DIF) microscopy. We performed a retrospective search of cutaneous pathology reports from our internal and consultation practice (January 1, 2010–October 5, 2021) with a diagnosis of leukocytoclastic vasculitis and accompanying DIF. A blinded dermatopathologist reviewed standard microscopy slides for predetermined histopathological features. Fifty‐six biopsies (48 patients) and 56 biopsies (42 patients) met inclusion criteria for IgA+ and IgA−, respectively. The presence of eosinophils and mid and deep dermal inflammation were statistically more associated with IgA− (41/56 [73.2%] and 31/56 [55.4%], respectively) than IgA+ cases (28/56 [50.0%] and 14/56 [25.0%]; p = 0.049 and 0.006, respectively, chi‐squared test). Other microscopic criteria recorded were not significantly different between the two groups (p > 0.05, chi‐squared and Fisher's exact tests). In this retrospective study of 112 cases, we found that while the absence of eosinophils and absence of mid‐ and deep inflammation were correlated with increased likelihood of IgA perivascular deposition on DIF, no other histopathological features on light microscopy tested could reliably predict the presence of IgA perivascular deposition on DIF. Therefore, DIF remains a necessary component for the accurate diagnosis of cutaneous IgA vasculitis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
239. Then and Now: A Comparison of the Attacks of December 7, 1941 and September 11, 2001 as Seen in the New York Times with an Analysis of the Construction of the Current Threat to the National Interest
- Author
-
Williams, Todd Austin
- Subjects
- Sociology, General, September 11, 2001, Pearl Harbor, New York Times, Newspaper Coverage, Terrorism
- Abstract
The aim of this research is to compare and contrast the coverage of the state of affairs in the United States in the two-month periods following the attack on Pearl Harbor (December 7, 1941) and the attacks on the World Trade Center and Pentagon (September 11, 2001) as reflected in the primary news articles and OP/ED/Letters pieces in the New York Times. The comparative examination is carried out by means of quantitative content analysis as well as qualitative observations of the news, graphic, and advertising content of the New York Times for the periods examined. The study is supplemented by an analysis of the claims made by high-ranking governmental officials regarding the nature and urgency of the threat of terrorism to the national interest of the United States. The study reports on the construction of reality via mass-mediated news narratives during times of national crisis.
- Published
- 2003
240. Histopathological features of pemphigoid gestationis and polymorphic eruption of pregnancy: A blinded retrospective comparative study of 31 cases.
- Author
-
Baban, Farah, Xie, Fangyi, Lehman, Julia S., Theiler, Regan, Todd, Austin, Davis, Dawn M., and Johnson, Emma F.
- Subjects
- *
HISTOPATHOLOGY , *PREGNANCY , *REFERENCE values , *EOSINOPHILS , *COMPARATIVE studies , *EOSINOPHILIA - Abstract
Background: Pemphigoid gestationis (PG) and polymorphic eruption of pregnancy (PEP) are pregnancy‐related dermatoses. Definitive diagnosis often relies upon histopathology and direct immunofluorescence (DIF). PG is associated with fetal and neonatal risks, while PEP confers minimal risk. Objective: We aimed to compare histopathologic features to determine key differentiators. Methods: A retrospective cohort study of PG and PEP cases with accompanying DIF, conducted from 1995 to 2020. Skin biopsies were examined independently, in a blinded fashion, by two dermatopathologists for a list of histopathological features. Results: Twenty‐one cases of PG and 10 cases of PEP were identified. PG had a significantly denser eosinophilic infiltrate than PEP (mean 155 vs. 48 cells/5 hpf; p < 0.018). PG also showed eosinophilic spongiosis and eosinophils at the dermal–epidermal junction more frequently than PEP (80% vs. 10%; p < 0.001). A mean cutoff value of 86 eosinophils yielded a mean optimal sensitivity of 81% and specificity of 83% for distinguishing PG from PEP on the basis of eosinophil density. Subepithelial separation was seen exclusively in PG (40% vs. 0%; p < 0.007). Conclusion: Eosinophilic spongiosis, eosinophilic epitheliotropism, and dense superficial dermal eosinophils were diagnostic of PG. Given the overlapping clinicopathologic features, however, DIF results with clinicopathologic correlation remain the gold standard. [ABSTRACT FROM AUTHOR]
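The cutoff analysis above is ordinary threshold classification. The sketch below shows how sensitivity and specificity fall out of a count cutoff; only the cutoff of 86 eosinophils is taken from the abstract, while the per-biopsy counts are invented for illustration, since the study's raw data are not given.

```python
# Hypothetical eosinophil counts (cells/5 hpf); only the cutoff of 86
# comes from the abstract.
pg_counts  = [120, 200, 95, 310, 88, 150, 60, 240]  # pemphigoid gestationis
pep_counts = [30, 75, 50, 110, 20, 45]              # polymorphic eruption

CUTOFF = 86  # counts at or above the cutoff are called PG

tp = sum(c >= CUTOFF for c in pg_counts)   # PG correctly called PG
fn = len(pg_counts) - tp
tn = sum(c < CUTOFF for c in pep_counts)   # PEP correctly called PEP
fp = len(pep_counts) - tn

print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```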
- Published
- 2023
- Full Text
- View/download PDF
241. MILD IMPAIRMENT IN PRE-OPERATIVE CARDIOPULMONARY EXERCISE TESTING DOES NOT ADVERSELY IMPACT LONG TERM PROGNOSIS AFTER SURGERY FOR SEVERE PRIMARY MITRAL REGURGITATION.
- Author
-
Afoke, Jonathan, Schaff, Hartzell V., Crestanello, Juan A., Elbenawi, Hossam, Chopko, Trevor, Aguilar, Myriam Monserrat Martinez, Todd, Austin, Michelena, Hector I., Nkomo, Vuyisile Tlhopane, Allison, Thomas G., and Bagameri, Gabor
- Subjects
- *
PREHABILITATION , *MITRAL valve insufficiency , *EXERCISE tests , *PROGNOSIS , *SURGERY - Published
- 2024
- Full Text
- View/download PDF
242. Microscopy with ultraviolet surface excitation (MUSE): A novel approach to real‐time inexpensive slide‐free dermatopathology.
- Author
-
Qorbani, Amir, Fereidouni, Farzad, Levenson, Richard, Lahoubi, Sana Y., Harmany, Zachary T., Todd, Austin, and Fung, Maxwell A.
- Subjects
- *
DERMATOPATHOLOGY , *BIOELECTRONICS , *ULTRAVIOLET radiation , *MICROSCOPICAL technique , *MEDICAL microscopy - Abstract
Traditional histology relies on processing and physically sectioning either frozen or formalin‐fixed paraffin‐embedded (FFPE) tissue into thin slices (typically 4‐6 μm) prior to staining and viewing on a standard wide‐field microscope. Microscopy with ultraviolet (UV) surface excitation (MUSE) is a novel alternative method that uses oblique cis‐illumination with UV excitation to generate high‐quality images from the cut surface of fresh or fixed tissue after brief staining, with no requirement for fixation, embedding, or histological sectioning of tissue specimens. We examined its potential utility in dermatopathology. Concordance between MUSE images and hematoxylin and eosin (H&E) slides was assessed by scoring MUSE images on their suitability for identifying 10 selected epidermal and dermal structures in minimally fixed tissue, including stratum corneum, stratum granulosum, stratum spinosum, stratum basale, nerve, vasculature, collagen and elastin, sweat glands, adipose tissue, and inflammatory cells, as well as in 4 cases of basal cell carcinoma and 1 case of pseudoxanthoma elasticum retrieved from deparaffinized histology blocks. Our results indicate that MUSE can identify nearly all normal skin structures seen on routine H&E as well as some histopathologic features, and it appears promising as a fast, reliable, and cost‐effective diagnostic approach in dermatopathology. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
243. Improved Test Solutions for COTS-Based Systems in Space Applications
- Author
-
Ernesto Sanchez, Andrea Floridia, Matteo Sonza Reorda, Sara Carbonara, Riccardo Cantoro, Jan-Gerd Mess, Politecnico di Torino = Polytechnic of Turin (Polito), German Aerospace Center (DLR), Nicola Bombieri, Graziano Pravadelli, Masahiro Fujita, Todd Austin, Ricardo Reis, TC 10, and WG 10.5
- Subjects
010302 applied physics ,Focus (computing) ,Matching (statistics) ,MaMMoTH-Up ,Computer science ,business.industry ,COTS ,Testing ,Space Applications ,02 engineering and technology ,Space (commercial competition) ,01 natural sciences ,020202 computer hardware & architecture ,Reliability engineering ,Test (assessment) ,Identification (information) ,Software ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Dependability ,[INFO]Computer Science [cs] ,Electronics ,business - Abstract
In order to widen the spectrum of available products, companies involved in space electronics are exploring the possible adoption of COTS components instead of space-qualified ones. However, the adoption of COTS devices and boards requires suitable solutions able to guarantee the same level of dependability. A mix of different solutions can be considered for this purpose. Test techniques play a major role, since they must guarantee that a high percentage of permanent faults can be detected (both at the end of manufacturing and during the mission) while matching several constraints in terms of system accessibility and hardware complexity. In this paper we focus on the test of the electronics used within launchers and outline an approach based on Software-based Self-test (SBST). The proposed solutions are currently being adopted within the MaMMoTH-Up project, targeting the development of an innovative COTS-based system to be used on the Ariane5 launcher. The approach aims at testing both the OR1200 processor and the different peripheral modules adopted in the system, while providing new techniques for the identification of safe faults. The results show the effectiveness and current limitations of the method, and include a comparison between functional and structural test approaches.
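The principle behind SBST is easy to show: ordinary code applies deterministic patterns to the unit under test and compacts the responses into a signature checked against a golden value. The Python sketch below mimics this on a toy ALU model; a real SBST routine would be C or assembly running on the OR1200 itself, and the patterns, compaction scheme, and golden value here are illustrative assumptions.

```python
def alu(op, a, b):
    """Toy model of the unit under test (a 32-bit ALU)."""
    mask = 0xFFFFFFFF
    return {"add": (a + b) & mask,
            "xor": (a ^ b) & mask,
            "and": a & b}[op]

def compact(sig, value):
    """MISR-like compaction: rotate the running signature and XOR."""
    sig = ((sig << 1) | (sig >> 31)) & 0xFFFFFFFF
    return sig ^ value

# Deterministic patterns chosen to toggle carry and logic paths
# (illustrative, not a qualified production pattern set).
patterns = [(0x00000000, 0xFFFFFFFF), (0xAAAAAAAA, 0x55555555),
            (0x80000000, 0x80000000), (0x12345678, 0x87654321)]

signature = 0
for a, b in patterns:
    for op in ("add", "xor", "and"):
        signature = compact(signature, alu(op, a, b))

GOLDEN = signature  # in practice, captured once from a known-good unit
print(f"signature = 0x{signature:08X}:",
      "PASS" if signature == GOLDEN else "FAIL")
```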
- Published
- 2019
- Full Text
- View/download PDF
244. Mapping Spiking Neural Networks on Multi-core Neuromorphic Platforms: Problem Formulation and Performance Analysis
- Author
-
Andrea Acquaviva, Enrico Macii, Francesco Barchi, Gianvito Urgese, Dipartimento di Automatica e Informatica [Torino] (DAUIN), Politecnico di Torino = Polytechnic of Turin (Polito), Nicola Bombieri, Graziano Pravadelli, Masahiro Fujita, Todd Austin, Ricardo Reis, TC 10, and WG 10.5
- Subjects
Spiking neural network ,Neuromorphic Computing ,Multi-core processor ,Multicore neuromorphic architecture ,Spiking neural networks ,Computer science ,Reliability (computer networking) ,Globally asynchronous locally synchronous ,Pipeline (computing) ,Distributed computing ,Graph mapping ,Multicore neuromorphic architectures ,Neuromorphic Engineering ,02 engineering and technology ,020202 computer hardware & architecture ,Reduction (complexity) ,03 medical and health sciences ,Task (computing) ,0302 clinical medicine ,Neuromorphic engineering ,0202 electrical engineering, electronic engineering, information engineering ,[INFO]Computer Science [cs] ,030217 neurology & neurosurgery - Abstract
In this paper, we propose a methodology for efficiently mapping concurrent applications onto a globally asynchronous locally synchronous (GALS) multi-core architecture designed for simulating a Spiking Neural Network (SNN) in real-time. The neuron-to-core mapping problem matters because an inefficient allocation can compromise the real-time behavior and reliability of SNN execution. We designed a task placement pipeline capable of analysing the network of neurons and producing a placement configuration that reduces communication between computational nodes. We compared four placement techniques by evaluating the overall post-placement synaptic elongation, that is, the cumulative distance that spikes generated by neurons running on a core must travel to reach their destination core. Results show that mapping solutions that take into account the directionality of the SNN application provide a better placement configuration.
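To make the objective concrete, the sketch below computes the synaptic elongation of a mapping (total Manhattan distance spikes travel between cores on a 2-D grid) and compares a naive round-robin placement with an order-respecting placement that keeps feed-forward neighbours on the same core. The toy network, grid size, and core capacity are assumptions for illustration; the paper's pipeline evaluates four placement techniques on real SNN workloads.

```python
# Toy feed-forward SNN: synapses as (src, dst) pairs over 16 neurons.
synapses = [(i, i + 1) for i in range(15)] + [(0, 8), (4, 12)]

GRID = 2       # 2x2 grid of cores
CAPACITY = 4   # neurons per core

def core_xy(core):
    return core % GRID, core // GRID

def elongation(placement):
    """Total Manhattan distance travelled by spikes between cores."""
    total = 0
    for src, dst in synapses:
        (x1, y1), (x2, y2) = core_xy(placement[src]), core_xy(placement[dst])
        total += abs(x1 - x2) + abs(y1 - y2)
    return total

# Naive: scatter neurons round-robin across the four cores.
round_robin = {n: n % (GRID * GRID) for n in range(16)}
# Direction-aware: fill each core with consecutive neurons, following
# the feed-forward direction of the network.
sequential = {n: n // CAPACITY for n in range(16)}

print("round-robin elongation:", elongation(round_robin))  # higher
print("sequential  elongation:", elongation(sequential))   # lower
```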
- Published
- 2019
- Full Text
- View/download PDF
245. Optimizing Performance and Energy Overheads Due to Fanout in In-Memory Computing Systems
- Author
-
Adnan Zaman, Rajeev Joshi, Srinivas Katkoori, University of South Florida [Tampa] (USF), Nicola Bombieri, Graziano Pravadelli, Masahiro Fujita, Todd Austin, Ricardo Reis, TC 10, and WG 10.5
- Subjects
In-memory computing ,Computer science ,Resistive memory ,Crossbar ,020208 electrical & electronic engineering ,Initialization ,02 engineering and technology ,Memristor ,Parallel computing ,MAGIC ,Logic synthesis ,020202 computer hardware & architecture ,law.invention ,Resistive random-access memory ,Fanout ,law ,Control theory ,In-Memory Processing ,0202 electrical engineering, electronic engineering, information engineering ,[INFO]Computer Science [cs] ,Crossbar switch ,Energy (signal processing) - Abstract
For NOR-NOT based memristor crossbar architectures, we propose a novel approach to address the fanout overhead problem. Instead of copying the logic value as inputs to the driven memristors, the controller reads the logic value and then applies it in parallel to the driven memristors. We consider two different cases based on whether the memristors are initialized to logic-1 at the locations where we want to keep the first input memristor of the driven gates: if the memristors are initialized, it falls under case 1; otherwise, case 2. In comparison to recently published works, experimental evaluation on ISCAS’85 benchmarks resulted in average performance improvements of 51.08%, 38.66%, and 63.18% for case 1 and 50.94%, 42.08%, and 60.65% for case 2, considering three different mapping scenarios (average, best, and worst). With regard to energy dissipation, we obtained average improvements of 91.30%, 88.53%, and 74.04% for case 1 and 86.03%, 78.97%, and 51.89% for case 2 under the same scenarios.
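The source of the saving is that per-destination copies are replaced by one read plus one parallel write, so the broadcast cost stops growing with fanout. A back-of-the-envelope sketch (the cycle costs are assumptions, not the paper's timing model):

```python
def cycles_copy(fanout, t_copy=2):
    """Baseline: materialize the value at every driven memristor,
    one copy operation (assumed 2 steps) per destination."""
    return fanout * t_copy

def cycles_read_broadcast(t_read=1, t_write=1, t_init=1):
    """Proposed style: the controller reads the value once, then
    applies it to all driven memristors in parallel. Case 1 assumes
    destinations are pre-initialized to logic-1; case 2 pays an
    extra initialization step."""
    return t_read + t_write, t_read + t_init + t_write

for k in (2, 4, 8, 16):
    case1, case2 = cycles_read_broadcast()
    print(f"fanout {k:2d}: copy = {cycles_copy(k):2d} cycles, "
          f"read+broadcast case 1 = {case1}, case 2 = {case2}")
```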
- Published
- 2019
- Full Text
- View/download PDF
246. Assessment of Low-Budget Targeted Cyberattacks Against Power Systems
- Author
-
Anastasis Keliris, Charalambos Konstantinou, XiaoRui Liu, Marios Sazos, Michail Maniatakos, Florida State University [Tallahassee] (FSU), Florida Agricultural and Mechanical University (FAMU), University of Florida [Gainesville] (UF), NYU Tandon School of Engineering, New York University [Abu Dhabi], NYU System (NYU), Nicola Bombieri, Graziano Pravadelli, Masahiro Fujita, Todd Austin, Ricardo Reis, TC 10, and WG 10.5
- Subjects
Public infrastructure ,Spoofing attack ,Computer science ,business.industry ,020209 energy ,Phasor ,02 engineering and technology ,Computer security ,computer.software_genre ,Units of measurement ,Electric power system ,Electric power transmission ,Assisted GPS ,0202 electrical engineering, electronic engineering, information engineering ,Global Positioning System ,[INFO]Computer Science [cs] ,business ,computer - Abstract
The security and well-being of societies and economies are tied to the reliable and resilient operation of power systems. In the coming decades, power systems are expected to become more heavily loaded and to operate closer to their stability limits and operating constraints. On top of that, in recent years, cyberattacks against computing systems and networks integrated into the power grid infrastructure have become a real and growing threat. Such attacks, especially in industrial environments such as power systems, are generally deemed feasible only for resource-wealthy nation-state actors. This chapter challenges that perception and presents a methodology, named Open Source Exploitation (OSEXP), which utilizes information from public infrastructure to assess an advanced attack vector on power systems. The attack targets Phasor Measurement Units (PMUs), which depend on Global Positioning System (GPS) signals to provide time-stamped circuit quantities of power lines. Specifically, we present a GPS time spoofing attack using low-cost commercial devices and open source software. The information needed to instantiate the OSEXP attack is extracted by developing a test case model of the power system in a digital real-time simulator (DRTS). The DRTS is also employed to evaluate the effectiveness and impact of the developed OSEXP attack methodology. The presented targeted attack demonstrates that an actor with a limited budget has the ability to cause significant disruption to a nation.
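Why GPS time spoofing is so effective against PMUs is simple arithmetic: phasor angles are referenced to GPS time, so a timing offset Δt becomes a phase error of 2π·f·Δt. The sketch below evaluates this relation at 60 Hz; the offsets are illustrative, and the 26.5 µs entry corresponds roughly to the 1% total vector error limit of IEEE C37.118, which is why even sub-millisecond spoofing is disruptive.

```python
F_GRID = 60.0  # nominal system frequency (Hz)

def phase_error_deg(dt_seconds, f=F_GRID):
    """Phase-angle error induced by a GPS timing offset of dt seconds."""
    return 360.0 * f * dt_seconds

# Illustrative spoofed offsets (not values from the chapter).
for dt in (26.5e-6, 100e-6, 1e-3):
    print(f"dt = {dt * 1e6:7.1f} us -> "
          f"phase error = {phase_error_deg(dt):6.2f} deg")
```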
- Published
- 2018
- Full Text
- View/download PDF
247. Energy-Accuracy Scalable Deep Convolutional Neural Networks: A Pareto Analysis
- Author
-
Andrea Calimera, Valentino Peluso, Department of Computer Engineering (DAUIN), Politecnico di Torino = Polytechnic of Turin (Polito), Nicola Bombieri, Graziano Pravadelli, Masahiro Fujita, Todd Austin, Ricardo Reis, TC 10, and WG 10.5
- Subjects
Artificial neural network ,Heuristic (computer science) ,Computer science ,business.industry ,020208 electrical & electronic engineering ,Pareto principle ,02 engineering and technology ,Machine learning ,computer.software_genre ,Convolutional neural network ,020202 computer hardware & architecture ,Keyword spotting ,0202 electrical engineering, electronic engineering, information engineering ,[INFO]Computer Science [cs] ,Artificial intelligence ,Quantization (image processing) ,business ,Pareto analysis ,computer ,Assignment problem - Abstract
This work deals with the optimization of Deep Convolutional Neural Networks (ConvNets). It elaborates on the concept of Adaptive Energy-Accuracy Scaling through multi-precision arithmetic, a solution that allows ConvNets to be adapted at run-time to meet different energy budgets and accuracy constraints. The strategy is particularly suited for embedded applications run at the “edge” on resource-constrained platforms. After introducing the basics of the proposed adaptive strategy, the paper recalls the software-to-hardware vertical implementation of precision-scalable arithmetic for ConvNets, then focuses on the energy-driven per-layer precision assignment problem, describing a meta-heuristic that searches for the most suitable representation of both the weights and the activations of the neural network. The same heuristic is then used to explore the optimal trade-off, providing the Pareto points in the energy-accuracy space. Experiments conducted on three different ConvNets deployed in real-life applications, i.e. Image Classification, Keyword Spotting, and Facial Expression Recognition, show that adaptive ConvNets reach a better energy-accuracy trade-off than conventional static fixed-point quantization methods.
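As a loose illustration of per-layer precision assignment (a generic greedy sketch with stubbed cost models, not the paper's meta-heuristic or energy model): start at full precision and repeatedly lower the bit-width of whichever layer saves the most energy while the estimated accuracy stays above the constraint.

```python
# Stub cost models; a real flow would measure these for the network.
LAYER_MACS = [200_000, 1_500_000, 800_000]  # hypothetical per-layer MAC counts
BITWIDTHS = [16, 8, 4, 2]                   # allowed precisions
MIN_ACC = 0.88                              # accuracy constraint

def energy(bits):
    """Assumed energy ~ MACs * bits^2 per layer (illustrative model)."""
    return sum(m * b * b for m, b in zip(LAYER_MACS, bits))

def accuracy(bits):
    """Stub estimate: 0.92 at full precision, with a penalty that
    grows as layers get narrower (illustrative only)."""
    return 0.92 - sum(0.004 * (16 // b - 1) for b in bits)

bits = [16, 16, 16]
while True:
    moves = []
    for i, b in enumerate(bits):
        if b != BITWIDTHS[-1]:
            trial = bits[:]
            trial[i] = BITWIDTHS[BITWIDTHS.index(b) + 1]
            if accuracy(trial) >= MIN_ACC:
                moves.append((energy(bits) - energy(trial), trial))
    if not moves:
        break
    bits = max(moves)[1]  # take the move with the largest energy saving

print("per-layer bits:", bits, f"energy = {energy(bits):,}",
      f"estimated accuracy = {accuracy(bits):.3f}")
```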
- Published
- 2018
- Full Text
- View/download PDF
248. A 65 nm CMOS Synthesizable Digital Low-Dropout Regulator Based on Voltage-to-Time Conversion with 99.6% Current Efficiency at 10-mA Load
- Author
-
Tetsuya Iizuka, Kunihiro Asada, Toru Nakura, Naoki Ojima, The University of Tokyo (UTokyo), Fukuoka University, VLSI Design and Education Center, Nicola Bombieri, Graziano Pravadelli, Masahiro Fujita, Todd Austin, Ricardo Reis, TC 10, and WG 10.5
- Subjects
Physics ,Low-dropout regulator ,Comparator ,business.industry ,020208 electrical & electronic engineering ,Electrical engineering ,02 engineering and technology ,Phase detector ,020202 computer hardware & architecture ,CMOS ,0202 electrical engineering, electronic engineering, information engineering ,Inverter ,Verilog ,[INFO]Computer Science [cs] ,business ,computer ,Voltage reference ,computer.programming_language ,Voltage - Abstract
A synthesizable digital LDO implemented with a standard-cell-based digital design flow is proposed. The difference between the output and reference voltages is converted into a delay difference using inverter chains as voltage-controlled delay lines, and is then compared in the time domain. Since the time-domain difference is straightforwardly captured by a simple DFF-based phase detector, the proposed LDO does not need an analog voltage comparator, which would require careful manual design. All the components of the LDO can be described in Verilog from their specifications and placed-and-routed with a commercial EDA tool. This automated layout design reduces implementation effort and time and enhances process portability. The proposed LDO, implemented in a 65 nm standard CMOS technology, occupies 0.015 mm² of area. With a 10.4 MHz internal clock, the tracking response of the LDO to a 200 mV step in the reference voltage is ~4.5 µs, and the transient response to a 5 mA change in the load current is ~6.6 µs. At a 10 mA load current, the quiescent current consumed by the LDO core is as low as 35.2 µA, which leads to a 99.6% current efficiency.
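The control idea, comparing output and reference in the time domain and bang-bang adjusting a pass-transistor array, can be shown with a small behavioral simulation. Every number below (load, device strength, clock, delay model) is a placeholder rather than a figure from the 65 nm design; the point is the loop structure, with the time-domain comparator reduced to comparing two delays.

```python
# Behavioral sketch of a bang-bang digital LDO control loop.
VREF = 1.0         # target output voltage (V)
N_MAX = 64         # pass-transistor array size
I_UNIT = 0.4e-3    # current per enabled device (A), assumed
I_LOAD = 5e-3      # load current (A), assumed
C_OUT = 1e-9       # output capacitance (F), assumed
T_CLK = 100e-9     # control clock period (s), assumed

def delay(v):
    """Voltage-controlled delay line: a higher input voltage gives a
    shorter delay (crude stand-in for the inverter-chain VTC)."""
    return 1.0 / max(v, 1e-3)

vout, n_on = 0.0, 0
for _ in range(200):
    # Time-domain comparison replaces the analog comparator: the
    # shorter delay identifies the higher voltage.
    if delay(vout) > delay(VREF):       # vout below target
        n_on = min(n_on + 1, N_MAX)
    else:                               # vout at or above target
        n_on = max(n_on - 1, 0)
    vout += (n_on * I_UNIT - I_LOAD) * T_CLK / C_OUT
    vout = max(vout, 0.0)

print(f"settled near vout = {vout:.3f} V with {n_on}/{N_MAX} devices on")
```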
- Published
- 2018
- Full Text
- View/download PDF
249. Efficient Hardware/Software Co-design for NTRU
- Author
-
Konstantin Braun, Thomas Schamberger, Georg Maringer, Christoph Frisch, Johanna Sepulveda, Tim Fritzmann, Technische Universität München - Technical University of Munich [Munich, Germany] (TUM), Nicola Bombieri, Graziano Pravadelli, Masahiro Fujita, Todd Austin, Ricardo Reis, TC 10, and WG 10.5
- Subjects
business.industry ,NTRU ,Computer science ,Cryptography ,02 engineering and technology ,Padding ,Side-Channel Attack ,020202 computer hardware & architecture ,Software ,HW/SW co-design ,Embedded system ,0202 electrical engineering, electronic engineering, information engineering ,Cryptosystem ,Hardware acceleration ,020201 artificial intelligence & image processing ,[INFO]Computer Science [cs] ,Side channel attack ,Lattice-based cryptography ,business - Abstract
The fast development of quantum computers represents a risk for secure communications: current traditional public-key cryptography will not withstand attacks performed on quantum computers. In order to prepare for such a quantum threat, electronic systems must integrate efficient and secure post-quantum cryptography that is able to meet different application requirements and to resist implementation attacks. The NTRU cryptosystem is one of the main candidates for practical implementations of post-quantum public-key cryptography. The standardized version of NTRU (IEEE-1363.1) provides security against a large range of attacks through a special padding scheme. So far, both NTRU hardware and software solutions have been proposed. However, the hardware solutions either do not include the padding scheme or use optimized architectures that degrade the security level. In addition, NTRU software implementations are flexible but usually exhibit lower performance than hardware solutions. In this work, for the first time, we present a hardware/software co-design approach compliant with the IEEE-1363.1 standard. Our solution combines the flexibility of a software NTRU implementation with the speedup of the hardware accelerator specially designed in this work. Furthermore, we provide a refined security reduction analysis of an optimized NTRU hardware implementation presented in a previous work.
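The operation that dominates NTRU, and the usual target for the hardware accelerator, is multiplication in the ring Z_q[x]/(x^N - 1), i.e., a cyclic convolution of coefficient vectors. A minimal reference implementation is sketched below; the parameters are toy values for illustration, not an IEEE-1363.1 parameter set (which uses, e.g., N = 743), and the standard's padding scheme is not shown.

```python
def ring_mult(a, b, N, q):
    """Cyclic convolution in Z_q[x]/(x^N - 1), the core NTRU operation."""
    c = [0] * N
    for i in range(N):
        if a[i] == 0:
            continue  # NTRU polynomials are sparse; skip zero terms
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % q
    return c

# Toy parameters for illustration only.
N, q = 7, 64
f = [1, -1, 0, 1, 0, 0, -1]    # small ternary polynomial (e.g., private key)
h = [3, 14, 15, 9, 26, 5, 35]  # stand-in public polynomial
print(ring_mult(f, h, N, q))
```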
- Published
- 2018
- Full Text
- View/download PDF
250. Mohs Micrographic Surgery With Melanocytic Immunostains for T1a/b Invasive Melanoma Yields <1% Local Recurrence and Disease-specific Mortality.
- Author
-
Bangalore Kumar A, Trischman T, Asamoah E, Todd A, Vidal NY, and Demer AM
- Subjects
- Humans, Male, Retrospective Studies, Female, Aged, Middle Aged, Aged, 80 and over, Adult, Neoplasm Staging, Neoplasm Invasiveness, Immunohistochemistry, Melanocytes pathology, Mohs Surgery, Melanoma surgery, Melanoma mortality, Melanoma pathology, Skin Neoplasms surgery, Skin Neoplasms mortality, Skin Neoplasms pathology, Neoplasm Recurrence, Local epidemiology, Neoplasm Recurrence, Local pathology
- Abstract
Background: The use of Mohs micrographic surgery with melanocytic immunostains (MMS-I) for cutaneous melanoma is increasing., Objective: To assess local recurrence and melanoma-specific death rates in patients with invasive melanoma treated with MMS-I., Materials and Methods: A single-center retrospective review of patients with invasive melanoma treated with MMS-I from January 2008 to December 2018., Results: Three hundred fifty-two patients (359 melanomas) were included. The median age was 71 years; most patients were male (252; 71.6%). Most tumors were T1a/b (341; 95%), located on the head or neck (322; 89.7%), and of lentigo maligna subtype (281; 78.3%). At a median follow-up of 4.3 years, local recurrence rates were 1.4% (5) and 0.9% (3) among all-stage and T1a/b tumors, respectively. There were 3 melanoma-related deaths (0.9%)., Conclusion: MMS-I is associated with <1% risk of local recurrence and disease-specific mortality for T1a/b melanomas., (Copyright © 2024 by the American Society for Dermatologic Surgery, Inc. Published by Wolters Kluwer Health, Inc. All rights reserved.)
- Published
- 2025
- Full Text
- View/download PDF