Note: Slides from these talks are available on the Archive for 2025 page.
After sixty years of relentless scaling, we've reached the point where a single chip holds more transistors than there are neurons in the human brain. So it's not entirely surprising that we can now build machines that seem human, or even super-human, in their intelligence. The underlying Large Language Models require brute-force compute that is only possible thanks to the scale and integration density of modern chips. Decades of steady progress in Electronic Design Automation have enabled the design of such massive chips. Correctly synthesizing, placing, and wiring billions of components while balancing cost, performance, power, and reliability is no small feat. This talk will start with an overview of the key algorithms and methodologies that made it possible to design chips at this scale, and the stubborn EDA challenges that have remained. We'll look at how AI can realistically improve design productivity to match the scale of future chips and AI systems. Will it be “agentic” AI solutions that are basically glorified for-loops, or will AI methods also replace existing “classic” EDA algorithms?
The exponential growth of high-performance computing—driven by AI and other data-intensive applications—is pushing traditional semiconductor technologies to their limits, as Moore's Law and Dennard Scaling slow. Silicon Photonics offers a breakthrough path forward, enabling faster, more energy-efficient data transmission that reduces latency and power usage in next-generation data centers. In parallel, advanced packaging approaches—such as 2.5D/3D integration and chiplet architectures—tackle power, thermal, and performance bottlenecks by increasing interconnect density and silicon efficiency. This talk explores how these two technologies complement each other to meet future computing demands. We will emphasize the importance of an error-free design philosophy and the need to leverage multi-physics analysis—spanning optical, signal integrity, power integrity, thermal, and mechanical domains—to deliver robust, high-volume-ready designs. The discussion will cover the current state of the art, ongoing integration efforts, and key industry challenges that must be addressed to achieve scalable adoption.
The talk explores the critical hurdles encountered at every layer of the modern semiconductor and system design stack. From the physical realities of transistors to the vast complexity of AI data centers, each stage introduces unique challenges in device modeling, circuit complexity, system integration, packaging, and large-scale infrastructure management. The discussion will detail the essential EDA and CAE tools required for high-performance, high-speed (RF) operation, emphasizing the vital importance of signal and power integrity. Finally, the session will map out how AI-driven methodologies are transforming design and optimization processes, driving the next evolutionary leap from silicon up to data centers.
Harnessing machine learning (ML) and agentic AI offers transformative potential for the simulation of co-packaged optics in 3D heterogeneous integration. This talk presents an intelligent framework where advanced ML models, combined with agent-based orchestration, dynamically optimize photonic–electronic co-design for next-generation packaging. By leveraging agentic AI's distributed, adaptive capabilities, our approach enables more accurate thermal, stress, power, signal, and performance co-optimization, accelerates multi-physics modeling and simulation, and streamlines the optimal design of co-packaged optics.
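One building block behind such a framework is replacing an expensive multi-physics solver with a cheap ML surrogate that an optimizing agent can query thousands of times. The sketch below is illustrative only, not from the talk: `thermal_sim` stands in for a real thermal solver, and the surrogate is a simple least-squares fit over a handful of solver samples.

```python
def thermal_sim(power_w):
    """Stand-in for an expensive multi-physics thermal solver:
    junction temperature (C) as ambient plus thermal-resistance * power.
    The 25.0 and 18.5 constants are illustrative assumptions."""
    return 25.0 + 18.5 * power_w

def fit_linear_surrogate(samples):
    """Ordinary least-squares line through (power, temperature) samples --
    the cheap surrogate an agent queries instead of rerunning the solver."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return lambda p: intercept + slope * p

# Sample the "solver" at a few design points, then fit the surrogate.
samples = [(p, thermal_sim(p)) for p in (0.5, 1.0, 2.0, 4.0)]
surrogate = fit_linear_surrogate(samples)
```

In practice the surrogate would be a nonlinear model trained on many coupled thermal/stress/signal simulations, but the division of labor is the same: slow solver for training data, fast surrogate inside the co-optimization loop.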
AI is poised to revolutionize hardware design just as it has transformed software—but the path is more complex, more domain-specific, and rich with opportunity. In this keynote, we explore how large language models and agentic systems are reshaping the hardware design stack, with RTL coding as a proving ground. We trace the evolution from post-training approaches that deliver high-quality Verilog through synthetic data generation and reasoning-augmented test-time compute, to agentic systems capable of autonomous repair and synthesis using planning and waveform feedback. We'll highlight frontier works in agentic RTL optimization for PPA, debug assistance for formal verification, and broader design task orchestration with dynamic self-improving agents. These examples hint at a larger future: AI agents that integrate with hardware design tools, reason over hardware-specific languages, and automate the design and verification process end-to-end. Realizing this vision will require advances in agent/model co-optimization, toolchain integration, and domain-specific language programming—but the path to autonomous hardware design has already begun.
Agentic AI is the latest advancement in Generative AI, and is set to transform every facet of our lives. With powerful intelligence and learning in the loop, agents can independently plan, orchestrate, reason and make decisions to achieve high level goals. Applications span from personalized web interactions to scientific discovery and complex engineering workflows, including for the semiconductor industry. In this talk, we discuss the state of the art and the future of agentic systems, human-agent collaboration, emerging issues like trust, security, persistence, and the need for guardrails as we embrace this new reality.
As designs grow in complexity, traditional EDA workflows struggle to keep pace with the demands for speed, precision, and adaptability. Agentic AI—AI systems capable of autonomous decision-making and collaborative task execution—offers a paradigm shift. This keynote explores what agentic AI is and how it could redefine 3D IC and PCB design by automating both process and design tasks such as constraint resolution, layout optimization, and intelligent design reuse. Drawing on real-world examples, we'll examine how AI agents function as digital co-designers, orchestrating workflows across schematic and layout. Attendees will gain insights into the architecture of agentic systems, their integration into existing EDA tools, and the roadmap toward more autonomous design environments.
This presentation will discuss our multi-agent approach to designing high-performance systems from concept through design and validation. AI agents are well suited for domain knowledge ingestion (RAG agent) and for controlling EDA tools for design and simulation (EDA API agent). However, most system-level SI/PI work involves optimization in high-dimensional spaces. We will discuss the limitations and adjustment of AI agents for design optimization tasks. Real-life DDR5 and PAM4 SerDes performance tuning will be used as examples to demonstrate the capabilities and limitations of the multi-agent design flow.
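To make the division of roles concrete, here is a minimal sketch of the kind of loop an optimization agent might run against an "EDA API agent". All names, parameter ranges, and the objective are hypothetical stand-ins, not the talk's actual flow: `simulate_eye_height` plays the role of a channel simulator, and the optimizer is the simplest possible baseline (random search) over a small SerDes tuning space.

```python
import random

# Hypothetical SerDes tuning knobs; names and ranges are illustrative.
PARAM_SPACE = {
    "tx_pre_tap":   (-0.3, 0.0),   # TX FFE pre-cursor weight
    "tx_post_tap":  (-0.4, 0.0),   # TX FFE post-cursor weight
    "ctle_gain_db": (0.0, 12.0),   # RX CTLE peaking gain
}

def simulate_eye_height(params):
    """Stand-in for the EDA API agent driving a real channel simulator;
    here a smooth synthetic objective with a known optimum."""
    return (1.0
            - (params["tx_pre_tap"] + 0.15) ** 2
            - (params["tx_post_tap"] + 0.20) ** 2
            - 0.002 * (params["ctle_gain_db"] - 6.0) ** 2)

def random_search(n_trials=200, seed=0):
    """Baseline high-dimensional tuning: sample the space at random and
    keep the best simulated result. Real flows would move to surrogate-
    or gradient-based methods once this plateaus."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = {k: rng.uniform(lo, hi)
                     for k, (lo, hi) in PARAM_SPACE.items()}
        score = simulate_eye_height(candidate)
        if score > best_score:
            best_params, best_score = candidate, score
    return best_params, best_score
```

The point of the sketch is the interface, not the optimizer: the agent only needs a propose-simulate-score loop against the simulator, which is exactly where high dimensionality makes naive strategies expensive.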
Developing AI engineers has become a central challenge at the intersection of engineering and AI. In this talk, we begin with our recent work on benchmarking the engineering design capabilities of LLMs, aimed at advancing toward engineering AGI. Our EngDesign benchmark evaluates LLM systems across nine engineering domains using simulation-based verification rather than traditional question answering. The results reveal a clear hierarchy of difficulty, with analog integrated circuit (IC) design emerging as the most challenging domain due to its reliance on creative topology synthesis, deep intuition about device physics, and the navigation of intricate trade-offs. Building on these insights, we further discuss several of our advancements on test-time scaling and reinforcement learning for improving LLMs in the engineering context, and outline how these techniques can potentially lead to a plausible path for building AI analog IC design engineers with lightweight, open-source LLM bases.
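The core idea of simulation-based verification—grading a candidate design by its simulated behavior rather than by string-matching a reference answer—can be illustrated with a toy analog example. Everything below is a hypothetical sketch, not EngDesign's actual harness: a single-pole amplifier model stands in for a circuit simulator, and the spec check mirrors how a generated design would be accepted or rejected.

```python
import math

def simulate_gain_db(pole_hz, dc_gain_db, freq_hz):
    """Single-pole low-pass gain response in dB; a toy stand-in
    for a real circuit simulator."""
    return dc_gain_db - 10 * math.log10(1 + (freq_hz / pole_hz) ** 2)

def passes_spec(design, spec):
    """Simulation-based verification: accept the candidate only if its
    simulated behavior meets the spec (minimum DC gain, and gain within
    3 dB of DC at the required bandwidth)."""
    gain_at_dc = simulate_gain_db(design["pole_hz"], design["dc_gain_db"], 1.0)
    gain_at_bw = simulate_gain_db(design["pole_hz"], design["dc_gain_db"],
                                  spec["bandwidth_hz"])
    return (gain_at_dc >= spec["min_dc_gain_db"]
            and gain_at_bw >= gain_at_dc - 3.0)
```

A candidate whose pole sits well above the required bandwidth passes; one whose pole falls below it fails, regardless of how plausible its textual description might look—which is precisely the property that makes this style of grading harder to game than question answering.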
We present ChipAgentsBench, the first real-world agentic benchmark for LLM agents in chip design and verification. Compared to previous benchmarks in hardware design, ChipAgentsBench is more realistic: a) it focuses on agentic tasks with complex file structures and environments, requiring not only the ability to generate RTL code, but also to navigate a complex project; b) its input context is significantly larger than existing benchmarks, with 80x more lines and 30x more files; c) it has new, unique tasks reflective of industry standards such as UVM generation and waveform debugging. We present results and initial analyses of multiple agents' performance on the benchmark and call for community contributions to further develop the benchmark.
With the evolution of AI and IoT technologies, the Edges of networks are increasingly being used for data collection and inference. Collected data is traditionally sent to central servers in a data center. There, it is processed to train new AI models, which are then available for decision-making in the field. Adding sufficient intelligence and storage capacities to the Edge devices will enable them to make incremental updates to the model and use them to perform AI inference operations in the field. This paper introduces a smart helmet with sensors connected to a data center for remote monitoring and real-time decision-making. Such a product has the potential to improve road safety and save lives.
The AI supercycle surrounds us. It is both powered by the silicon and systems that we create and has the power to help us design the next generation of products which will deliver the future advances of AI. We see three phases in the adoption of AI. The most immediate is Infrastructure AI – the buildout of the data centers and the edge compute needed to train new AI models and deliver AI to consumers. We're at the dawn of the next phase: physical AI – the pervasive autonomy for transportation, drones, and robotics. And in the future, the application of AI across all sciences: medicines, materials, and the environment. Through each of these phases, the complexity and volume of silicon and system design is ever increasing. We'll examine the relentless pursuit of automation and quality of results from the tools used by the engineers and scientists powering this revolution, how AI is integral to this process, and an example of using a state-of-the-art, custom-built AI-appliance to unlock the future of design.