Cadence (CDNS) Q4 2025 Earnings Call Transcript


Motley Fool Transcribing, The Motley Fool | February 18, 2026 at 5:20 AM


Tuesday, Feb. 17, 2026 at 5 p.m. ET

CALL PARTICIPANTS -

President and Chief Executive Officer — Anirudh Devgan

Senior Vice President and Chief Financial Officer — John M. Wall

Corporate Vice President, Finance and Investor Relations — Richard Gu


TAKEAWAYS -

Annual revenue -- $5.3 billion for the year, representing 14% growth.

Q4 revenue -- $1.4 billion for the quarter.

Operating margin -- Fiscal 2025 GAAP operating margin was 28.2%, and non-GAAP operating margin was 44.6% (period ended Dec. 31, 2025).

Q4 operating margin -- GAAP operating margin was 32.2%, and non-GAAP operating margin was 45.8%.

EPS -- Fiscal 2025 GAAP EPS was $4.06, and non-GAAP EPS was $7.14.

Q4 EPS -- GAAP EPS was $1.42, and non-GAAP EPS was $1.99.

Backlog -- Ended fiscal 2025 with a record backlog of $7.8 billion.

Cash and debt -- Year-end cash balance was $3.0 billion; principal value of outstanding debt was $2.5 billion.

Operating cash flow -- $553 million in Q4; $1.7 billion for the year.

Share repurchases -- $925 million used to repurchase shares during fiscal 2025.

Core EDA revenue growth -- 13% increase in fiscal 2025.

Recurring software growth -- Returned to double-digit growth in the fourth quarter.

Hardware business -- Achieved another record year, adding over 30 new customers with heightened AI and hyperscaler demand.

Digital full flow wins -- 25 new digital full flow customer logos added in fiscal 2025.

AI product launch -- Introduced ChipStack AI SuperAgent, claimed as "the world's first agentic AI solution for automating chip design and verification," with up to 10x productivity improvement for some tasks.

AI adoption -- Samsung US realized "4x productivity improvement" with Cerebrus AI Studio; Altera cited "7 to 10x productivity improvement" in targeted flow segments.

IP revenue -- Nearly 25% growth in fiscal 2025, driven by strength in IP portfolio and demand from AI, HPC, and automotive sectors.

System design & analysis revenue growth -- 13% increase in fiscal 2025.

3D IC platform -- Identified as a "key enabler" for multichip architectures for next-generation AI and high-performance computing.

Q1 2026 guidance -- Revenue expected between $1.4 billion and $1.5 billion, GAAP operating margin 30%-31%, non-GAAP operating margin 44%-45%, GAAP EPS $1.16-$1.22, and non-GAAP EPS $1.89-$1.95.

2026 full-year guidance -- Revenue of $5.9 billion-$6.0 billion; GAAP operating margin 31.75%-32.75%, non-GAAP operating margin 44.75%-45.75%, GAAP EPS $4.95-$5.05, non-GAAP EPS $8.05-$8.15, and operating cash flow of approximately $2.0 billion.

2026 share repurchase plan -- Plans to use about 50% of free cash flow for share repurchases.

Backlog contribution to 2026 revenue -- Approximately 67% of 2026 revenue expected to come from starting backlog.

Geographic mix -- China represented 13% of fiscal 2025 revenue and expected to contribute 12%-13% in 2026.

Recurring revenue mix -- Recurring revenue expected to remain around 80% in 2026.

Cadence Design Systems (NASDAQ:CDNS) reported that its agentic AI solutions are increasing customer tool usage and driving new monetization opportunities across its product portfolio. Management provided guidance reflecting continued growth in both recurring and upfront business, supported by a record backlog and strong customer demand in AI, high-performance computing, and automotive end markets. The company emphasized ongoing strategic relationships and expanded collaborations with leading foundries and hyperscalers focused on advanced AI design and deployment. The AI-driven transformation in chip and system design, including the launch of ChipsTech AI SuperAgent and customer-reported productivity improvements, was highlighted as a key competitive differentiation and driver of future growth.

Management asserted, "we have seen absolutely no discussion with customers of reducing the usage; on the contrary, you know, all these AI tools are increasing the usage of our tools."

John M. Wall stated that "Around 67% of 2026 revenue is coming from beginning backlog," providing strong multiyear visibility.

Operating cash flow guidance excludes any potential impact from the pending Hexagon acquisition.

The company indicated pricing models for agentic AI flows may include value-based, usage-based, and virtual engineer licensing, suggesting new recurring revenue potential beyond core models.

Annual incremental margin reached 59% in fiscal 2025, with guidance for 51% in 2026, described as "one of the strongest guides that we've ever had" (see the short calculation sketch after these notes).

The company communicated that "multiyear subscription remains at the core of our business," with AI solutions expected to amplify demand rather than disrupt the existing revenue structure.
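
For readers modeling the incremental margin comment above, incremental margin is simply the change in operating income divided by the change in revenue. Below is a minimal Python sketch of that calculation; the fiscal 2025 inputs are figures stated on the call, while the fiscal 2024 inputs are illustrative assumptions rather than numbers given in this article.

    # Incremental (flow-through) margin: change in operating income / change in revenue.
    def incremental_margin(rev_prev, margin_prev, rev_curr, margin_curr):
        op_prev = rev_prev * margin_prev      # prior-year operating income
        op_curr = rev_curr * margin_curr      # current-year operating income
        return (op_curr - op_prev) / (rev_curr - rev_prev)

    # Assumed fiscal 2024 inputs (illustrative only): ~$4.64B revenue, ~42.5% non-GAAP margin.
    # Fiscal 2025 inputs from the call: $5.297B revenue, 44.6% non-GAAP operating margin.
    result = incremental_margin(4.64, 0.425, 5.297, 0.446)
    print(f"Incremental margin: {result:.0%}")  # lands near the ~59% cited for fiscal 2025

With those assumed prior-year figures, the formula returns roughly 59%, which is why an incremental margin above the current roughly 45% operating margin implies overall margin can keep rising as revenue grows.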

INDUSTRY GLOSSARY -

EDA: Electronic Design Automation, software and hardware tools for designing and verifying integrated circuits.

Agentic AI: AI workflow employing autonomous agents that call multiple underlying tools to automate aspects of chip design and verification.

RTL: Register-Transfer Level, a design abstraction representing digital circuit data flow, used for verification and synthesis.

COT: Customer-Owned Tooling, a semiconductor production arrangement where customers use their own design tools and flows for chip manufacturing.

3D IC: Three-Dimensional Integrated Circuit, a chip architecture stacking silicon dies to improve performance and integration.

PPA: Power, Performance, and Area—key optimization metrics in integrated circuit and semiconductor design.

SDA: System Design and Analysis, a product group for system-level design, modeling, and simulation of complex systems.

Dynamic Duo: Refers to Cadence customers using both its software and hardware systems for design, as noted for top accounts in 2025.

Digital twin: A virtual representation of physical assets, including systems or devices, used for simulation and analysis.

Token model: Usage-based licensing method in which customers consume tokens for running additional tool capacity or features.

Full Conference Call Transcript

Anirudh Devgan: Cadence Design Systems, Inc. delivered excellent results in the fourth quarter, closing an outstanding 2025 with 14% revenue growth and 45% operating margin for the year. We finished 2025 with a record backlog of $7.8 billion, well ahead of plan, reflecting broad-based portfolio strength and increasing contributions from our AI solutions. I would like to emphasize the essential nature of Cadence Design Systems, Inc.'s engineering software. As I have stated previously, our platform is best viewed as a three-layer cake framework: accelerated compute as the base layer, principled simulation and optimization as the critical middle layer, and AI as the top layer to drive intelligent exploration and generation. This holistic approach ensures that our AI solutions are not just fast, but physically accurate and grounded in scientific truth.

Building on this foundation, we are deploying agentic AI workflows powered by intelligent agents that autonomously call our underlying tools. AI flows act as a force multiplier, enabling our customers to significantly expand design exploration and accelerate time to market, while driving increased product usage and deeper engagement across our entire platform. We see growing momentum on both AI for design and design for AI fronts. On AI for design, our Cadence AI portfolio continues to gain traction with market-shaping customers. Last week, we launched ChipStack AI SuperAgent.

It is the world's first agentic AI solution for automating chip design and verification. It's built upon proven, physically accurate products and provides up to 10x productivity improvement for various tasks, including design coding, generating test benches, and debugging. ChipStack has received compelling endorsements from Qualcomm, NVIDIA, Altera, and Tenstorrent, among others. Our other AI products, such as Cadence Cerebrus, Verisium, and Allegro X AI, are proliferating at scale, and our LLM-based design agents, powered by the JedAI data platform, are delivering impressive results. On design for AI, the infrastructure AI phase is in full swing, with AI architectures growing in scale and complexity.

Customers are increasingly standardizing on Cadence Design Systems, Inc.'s full flows to address their performance, power, and time-to-market challenges. We continue to closely collaborate with market leaders on their next-generation AI designs spanning training, inference, and scaling. We deepened our long-standing partnership with Broadcom through a strategic collaboration to develop pioneering agentic AI workflows to help design Broadcom's next-generation products. We also expanded our footprint at multiple marquee hyperscalers across our EDA, hardware, IP, and system software solutions. And we are particularly excited by the emerging physical AI opportunity; our broad-based portfolio uniquely positions us to enable autonomous driving and robotics companies to address multimodal silicon and system challenges.

In addition, we are increasingly applying AI internally to improve efficiency across engineering, go-to-market, and operations. In 2025, we also furthered our partnerships with leading foundries. We expanded our collaboration with TSMC to power next-gen AI flows on TSMC's N2 and A16 technologies. We strengthened our engagement with Intel Foundry by officially joining the Intel Foundry Accelerator Design Services Alliance. Rapidus made a wide-ranging commitment to our core EDA software portfolio across digital, custom analog, and verification solutions. And Samsung Foundry expanded its collaboration with Cadence Design Systems, Inc., leveraging our AI-driven design solutions and IP solutions.

Now turning to product highlights for Q4 and 2025. Accelerating compute demand, driven by the AI infrastructure build-out and demanding next-generation data center requirements, continues to create significant opportunities for our core EDA portfolio. Our core EDA business delivered strong performance, with revenue growing 13% in 2025. Our recurring software business reaccelerated to double-digit growth in Q4, a testament to the strength and durability of our model.

Our hardware business delivered another record year, with over 30 new customers and substantially higher repeat demand from AI and hyperscaler customers. Seven of the top 10 customers in 2025 were Dynamic Duo customers, underscoring the differentiated value provided by our hardware systems. With a strong backlog entering 2026, we expect this year to be yet another record year for hardware. Our digital portfolio delivered a strong year, driven by continued proliferation of our full flow solutions, as we added 25 new digital full flow logos in 2025. We expanded our footprint at a top hyperscaler, growing our AI-driven synthesis and implementation solutions, including our 3D IC platforms.

A marquee hyperscaler embraced the Cadence Design Systems, Inc. digital full flow for its first full customer-owned tooling AI chip tape-out. Broad proliferation of Cadence Cerebrus continues, and adoption of our Cadence Cerebrus AI Studio is accelerating. Recently, Samsung US used it to tape out the SF2 design, achieving a 4x productivity improvement.

In custom and analog, our Spectre circuit simulator saw significant growth at leading AI and memory companies. Our flagship Virtuoso Studio, the industry standard for custom and mixed-signal design, saw continued traction in AI-driven design migration across its vast installed base. A top multinational electronics and EV customer reported a 30% layout efficiency gain using our AI-driven design migration. Our IP business saw strong momentum, with revenue growing nearly 25% in 2025, reflecting both the strength of our expanding IP portfolio and the critical role our star IP solutions play in the AI, HPC, and automotive verticals. We achieved both significant expansions and meaningful competitive wins at marquee customers.

This demonstrates the superior performance capabilities of our IP solutions across HBM, UCIe, PCIe, DDR, and SerDes titles. We are seeing particularly strong adoption of our industry-leading memory IP solutions, including our groundbreaking LPDDR6 memory IP, which is enabling customers to achieve the memory performance and efficiency required for next-generation AI workloads. In Q4, we launched our Tensilica HiFi IQ DSP, offering up to 8x higher AI performance and more than 25% energy savings for the automotive infotainment, smartphone, and home entertainment markets.

Our System Design and Analysis business delivered 13% revenue growth in 2025. Earlier in the year, we introduced the new Millennium M2000 AI supercomputer, featuring NVIDIA Blackwell, which is ramping nicely with growing customer interest across multiple end markets. Our 3D IC platform has become a key enabler for the industry's transition to multichip architectures, which are increasingly critical for next-generation AI infrastructure, HPC, and advanced mobile applications. Adoption of our AI-driven Allegro X platform is accelerating. Earlier in Q3, Infineon standardized on Allegro X, and in Q4, STMicroelectronics decided to adopt our Allegro X solution to design printed circuit boards.

Our Reality Digital Twin solution continued its strong momentum and was deployed at several leading hyperscalers and marquee AI companies. BETA CAE continues to unlock tremendous opportunities, particularly in the automotive segment. With our previously announced acquisition of Hexagon's design and engineering business, we'll be poised to accelerate our strategy around physical AI, including in autonomous vehicles and robotics. In closing, I'm pleased with our strong performance in 2025, and I'm excited about the strong momentum across our business.

As the AI era continues to accelerate, our AI-driven EDA, SDA, and IP portfolio, powered by new AI agents and accelerated computing, positions Cadence Design Systems, Inc. extremely well to capture these massive opportunities. Now I will turn it over to John to provide more details on the Q4 results and our 2026 outlook.

John M. Wall: Thanks, Anirudh. Good afternoon, everyone. I'm pleased to report that Cadence Design Systems, Inc. delivered an excellent finish to 2025, with broad-based momentum across all our businesses. Robust design activity and strong customer demand drove 14% revenue growth and 20% EPS growth for the year. Productivity improvements across the company helped us achieve an operating margin of 44.6% for the year. Fourth quarter bookings were exceptionally strong, and we began 2026 with a record backlog of $7.8 billion.

Here are some of the financial highlights from the fourth quarter and the year, starting with the P&L. Total revenue was $1.44 billion for the quarter and $5.297 billion for the year. GAAP operating margin was 32.2% for the quarter and 28.2% for the year. Non-GAAP operating margin was 45.8% for the quarter and 44.6% for the year. GAAP EPS was $1.42 for the quarter and $4.06 for the year. Non-GAAP EPS was $1.99 for the quarter and $7.14 for the year.

Next, turning to the balance sheet and cash flow. Our cash balance was $3.0 billion at year-end, while the principal value of debt outstanding was $2.5 billion. Operating cash flow was $553 million in the fourth quarter and $1.73 billion for the full year. DSOs were 64 days, and we used $925 million to repurchase Cadence Design Systems, Inc. shares during the year. Before I provide our outlook for 2026, I'd like to share that it contains our usual assumption that export control regulations that exist today remain substantially similar for the remainder of the year. Our current 2026 outlook does not include our pending acquisition of Hexagon's design and engineering business.

For our outlook for 2026, we expect revenue in the range of $5.9 billion to $6.0 billion, GAAP operating margin in the range of 31.75% to 32.75%, non-GAAP operating margin in the range of 44.75% to 45.75%, GAAP EPS in the range of $4.95 to $5.05, non-GAAP EPS in the range of $8.05 to $8.15, and operating cash flow of approximately $2.0 billion, and we expect to use approximately 50% of our free cash flow to repurchase Cadence Design Systems, Inc. shares in 2026. For Q1, we expect revenue in the range of $1.42 billion to $1.46 billion, GAAP operating margin in the range of 30% to 31%, non-GAAP operating margin in the range of 44% to 45%, GAAP EPS in the range of $1.16 to $1.22, and non-GAAP EPS in the range of $1.89 to $1.95.

As usual, we published a CFO commentary document on our investor relations website, which includes our outlook for additional items as well as further analysis and GAAP to non-GAAP reconciliations. In conclusion, I am pleased that we delivered strong top-line and earnings growth for 2025, and we finished the year with a record backlog and ongoing business momentum, setting ourselves up for a great 2026. As always, I'd like to thank our customers, partners, and employees for their continued support. And with that, operator, we will now take questions.

Operator: Thank you. And at this time, I would like to remind everyone who wants to ask a question to please press star and then the number one on your telephone keypad. As a courtesy to all participants, we ask that you please limit yourself to one question. And we will pause for just a moment to compile the Q&A roster. And our first question comes from the line of Vivek Arya with Bank of America Securities. Your line is open.

Vivek Arya: Thanks for taking my question. Anirudh, I'm curious: have you seen any disruption or change of thinking whatsoever at your customers in terms of them using AI to reduce or eliminate demand for EDA or IP or any other computer-aided engineering tools? Is there a scenario at all that you have discussed, or that your customers might contemplate, where they can use more of their internal tools or AI to displace what you're doing right now? Thank you.

Anirudh Devgan: Yeah. Hi, Vivek. Thank you for the question. I know this is a topical question, top of mind for investors. But like I said before, for us, we always look at things as a three-layer cake. There are different kinds of software. There's a lot of discussion in terms of whether AI will replace some forms of software, but, anyway, there are different kinds of software. Our software is engineering software. You're doing very, very complex, you know, physics-based mathematical operations.

So any AI tools that we are developing, or our customers are using, basically, in the end, call our software to get the job done properly. So what we are seeing instead, and you can see that in our results, and we can see this in our discussions with customers, is that as we move to these agentic flows, it uses more of our software to get the job done, not the other way around. So even our own super agent, which is ChipStack, you know, it is doing a part of the flow, first of all, that was not automated.

You know, even in regular AI, there is a lot of automation in coding. That's one of the big applications. But if you move that over to chip design, if you look at our flow, there is an equivalent of coding, which is RTL code, which describes the chip or the system. But that part has been mostly manual. And then after that, our tools kick in, you know, to optimize the RTL and to simulate and verify the RTL.

So what we are doing with our AI flows, the top layer, is we are adding extra tools that will automate the writing of RTL, but then it still calls a lot of the middle-layer tools, a lot of the base tools, to implement and verify that. And I've said before, what we are seeing at our customers is they want to use more AI, and I think they will all invest more in R&D. I think they will also hire more engineers, but as a percentage of spend, more spend will go to automation and compute. Because the other thing which is unique to our end market is that the workload is exponential.

You know, if the chip goes from 100 billion transistors now to 1 trillion in a few years, they need to do a lot more work. And then some of the work will be done by AI agents, you know, calling our base tools. So overall, to answer your question, we have seen absolutely no discussion with customers of reducing the usage; on the contrary, you know, all these AI tools are increasing the usage of our tools. And, of course, the AI build-out also, as customers design more and more chips, is increasing the usage of our tools.

Operator: And our next question comes from the line of Joseph D. Vruwink with Baird. Your line is open.

Joseph D. Vruwink: Great. Thanks. I maybe wanted to ask about how you're approaching the outlook for 2026. It looks like recurring revenue is set to accelerate, and that's normally well supported by backlog. Maybe can you talk about the key contributors to the recurring improvement? And then, just on the 20% or so of revenue that comes from upfront sources, obviously it had an incredible 2025 with your hardware platforms, and it sounds like you're expecting growth there again. I think we're in year two of that platform now. Can you see a repeat of what you observed back in 2023? That was a very strong year too for the second-gen product. How are you thinking about that product and just where it is in its life cycle?

John M. Wall: Yeah. Thanks for the question, Joe. This is John. As usual at this time of the year, our guidance reflects what we believe to be a prudent and well-calibrated view of the year. We finished the year with very strong momentum on backlog, and we saw that strength right across the board, across all lines of business. And as Anirudh says, our view of the AI era is that it increases workload faster than headcount grows, and Cadence Design Systems, Inc. monetizes workload through broad portfolio proliferation across EDA, IP, hardware, and SDA, and we're seeing that flow through into all lines of business for us.

Now, typically at this time of the year, hardware is a pipeline business. We're expecting a very strong first half for hardware, but because we only typically see two quarters in the pipeline, we're quite prudent about the second half of the year in this current guide. But that's no different from what we normally do; we typically try to derisk the guide for things like hardware and China at this time of the year.

And if you look at how China's performed in the last two years, I think it was 12% of our revenue in 2024 and 13% in 2025, and we expect it to be in that kind of range, 12% to 13% of our revenue, for this year as well. But, yeah, we're seeing absolutely huge strength across the board. We're delighted with the strength of the guide. And a key transparency metric you'll see in the CFO commentary: around 67% of 2026 revenue is coming from beginning backlog. That gives us strong visibility into the multiyear recurring base. So we're very, very happy to see that recurring base get back to kind of double-digit, kind of low-teens growth.

Operator: Our next question comes from the line of Joseph Michael Quatrochi with Wells Fargo.

Joseph Michael Quatrochi: Yeah. Thanks for taking the questions. Just kind of curious, maybe following up on that, on the verification and emulation hardware cycle: any sort of help on just kind of where you think you are in that cycle? And then, is there anything we should think about just in terms of memory availability from that perspective, or just anything about margins, given the pretty significant price increases that we've seen across the DRAM spectrum?

Anirudh Devgan: Yeah. Good question. So hardware, like, you know, every year is a record year for hardware, and I expect that trend to continue. And the reason being, of course, these hardware systems become indispensable to the design of complex chips and systems. Actually, no complex AI chip, or any other mobile or automotive chip, any complex chip, is designed without hardware systems, and we have the best hardware system on the market because, just to remind you, we design our own chips, made by TSMC, and we sell full racks.

You know, these things have trillions of transistors to emulate other chips. So even though it is reported upfront, as you know, because the customers will buy and use the systems for multiple years, the big customers are buying them almost every year. Okay? And I don't see that trend changing. And, like I indicated when we launched Z3, even Z2 was a very good system. So the fact that this is the second year now, I think it still has capacity to design systems of a trillion transistors, which will last for several years to come. And, you know, in a few years, anyway, we launch our next system.

So we're always ahead of what the market will need. In terms of demand, we don't see any difference; if you ask me this year versus last year, the demand is only stronger, and you can see that in the backlog. And then how much this will grow, we will see. Like John said, at the beginning of the year we are a little careful with hardware, but we'll update you in the middle of the year depending on how things are going. But the hardware systems business is performing well. We are taking share, and, actually, what I feel is we are taking share in all our major product segments.

So we are taking share in hardware. We are taking share in IP, which is really good to see. Now it will be almost the third year of strong IP growth. You know us, right? One year doesn't make a trend for us. But after three years, I can say that I feel good about our IP business. Hardware has been strong for a while. EDA, our core business, is doing phenomenally. Okay. 3D IC, we are taking share. Agentic AI, we are first to market. We already have customers using our agentic AI flows.

So not only do I feel good about the hardware business and where it is; actually, I feel really good about our overall portfolio and how we are performing.

Operator: And our next question comes from the line of James Edward Schneider with Goldman Sachs. Your line is open.

James Edward Schneider: Good afternoon. Thanks for taking my question. I was wondering if you could talk a little bit more about your AI workflows and whether it's possible to quantify any of the benefits that your customers are getting from those workflows today, whether that be time to market, enhanced productivity per seat, or so on. And maybe separately, kind of address how you're able to monetize that, and then how broad that is across your portfolio today? Thank you.

Anirudh Devgan: Yes, Jim. I mean, first of all, the results are quite remarkable with AI. You know, a few years ago, there was some skepticism about how much AI can benefit. But now, and this is true in other areas too, but definitely in chip design, the results are fantastic and real. And I think there is a difference, I believe, between chip design and other industries. Because one of the issues with AI flows is that you really don't know whether the AI result is correct or not.

This has been one issue even in vibe coding or software: okay, generate some code, but then you spend a lot of time verifying whether it is correct or not. And in some other industries, there are no formal languages to design things. But in chip design, first of all, we have formal languages to design things, which is RTL. Over the last twenty, thirty years, we have built all these products whose job is to make sure that the RTL is correct. Okay? So with all our middle-layer tools, you know, verification, simulation, optimization, AI can be a force multiplier and accelerant in chip design versus other areas. Okay?

And the results, just to highlight: we talked about Samsung getting 4x productivity. You know, this is a quote from the customer. Or Altera talking about 7 to 10x productivity improvement. Now, these are on parts of the flow, like RTL writing, which has been kind of manual. So there can be massive improvement in productivity. And in the back end, for example, in physical design, there could be 7%, 10%, 12% PPA improvement, in that range.

So just so you know, when you go from one node to another node, like five to three or three nanometer to two nanometer, the gain could be, like, 10% to 20%. So you're getting half the gain, or almost the same gain, as a node migration through better optimization with AI. Okay. I think the results are real. We have demand from almost all customers now to engage rapidly, because they want to deploy AI in their R&D function. And you have to remember, the way our customers apply AI in their R&D function is through Cadence Design Systems, Inc. and Cadence Design Systems, Inc. tools. Right?

So they are all very anxious to try all these things. We have all these engagements with all the top customers. And on monetization, I always said in the past that it takes some time for monetization to happen; it takes two contract cycles, and I think we are well into that now. So I think we are seeing the monetization now. It is reflected in our results. It is reflected in our record backlog. And agentic AI can give further monetization. The way we go to market with agentic AI will be different, because this is a new tool category, something that EDA never automated: the writing of RTL or test benches, right? So we will price it as a virtual engineer or agent. So that would be extra business, and our customers are willing to spend on it because it is a productivity improvement for them. And then, on top of that, just like before, it will call the base tools, and a lot more licenses or usage will happen on our base tools. And the reason for that is, in a non-AI flow, it is a misnomer that we are, like, seat-count limited. We are exploration limited.

Even if a manual user is running our tool, they will run three or four or five experiments in parallel to see what is the best PPA. But with the agentic AI flow, it could run 10 or 100 experiments in parallel. So our plan for monetization, which is working well, will add the agentic AI part. We will charge for the agentic flows as a virtual engineer, for things like RTL writing, and then, of course, for the licenses in the base layer. And we'll see how that goes. But from a customer standpoint, I mean, there's a lot of demand to try all these new tools.

Operator: And our next question comes from the line of Gary Wade Mobley with Loop Capital. Your line is open.

Gary Wade Mobley: Hi, guys. Let me extend my congratulations on the strong finish to the year. John, I believe there's been an effort to move your SDA customers onto one-year license terms, and, if we're not mistaken, that's been an impediment to growth. So the question is, is that the reason why SDA revenue grew only 13% in 2025? And what's the consideration for 2026, and then what's the consideration for Hexagon when you roll in that business? I believe they were at a $240 million revenue run rate. Is that number limited because of this one-year license term transition?

John M. Wall: Yeah. Thanks for the question, Gary. And you're right in terms of SDA. You know, we lapped some tough comps in SDA in Q4 2025, partly due to the multiyear business. We did some multiyear business in Q4 2024 through our BETA subsidiary, and we have deliberately been moving to more annual subscription arrangements for BETA in 2025, and that impacts the year-over-year numbers. And seeing all that, we're very pleased with SDA's strategic trajectory and its role in the chip-to-systems thesis. From a mix standpoint, SDA was, like, 16% of revenue in '25, consistent with '24, when you look at the year.

And we expect it to grow. We expect all product groups to grow, but we're not guiding by segment. In relation to Hexagon, I think we've said this before at some fireside chats that the annualized revenue for Hexagon is about $200 million on a full-year basis. Now, what that means, of course, is that it's kind of like BETA, where BETA did a lot of January 1 deals. So if the deal closed by the 1st, you're probably looking at about $150 million of revenue for the year. But we're not guiding; we don't have final numbers for anything like that now. So we haven't got anything to do with Hexagon in this guide.

Operator: And our next question comes from the line of Charles Shi with Needham. Your line is open.

Charles Shi: Anirudh, I thought that the highlight of the quarter was the announcement around the marquee hyperscaler customer adopting the Cadence digital full flow; I think you characterized it as for the first full COT chip that they're going to tape out. So it sounds like we should expect that particular hyperscaler having a COT chip coming out two or three years down the road. And I just kind of want to ask: how many hyperscaler customers right now are doing COT, and even for that particular customer with the first chip on COT, what do you think the ramp is going to be? Like, how will they proliferate COT for the other chips they are developing? Because every hyperscaler these days has more than one chip, that's my understanding. I just want to get some sense from you of where you are in terms of that whole COT proliferation. I believe this is one of the great stories about Cadence and about EDA in general, but I wanted to get your view on that. Thank you.

Anirudh Devgan: Yeah. Thanks for the question, Charles. I mean, without getting into specifics of a particular customer, I have said for some time now, because we work with our customers confidentially, we share our road map with them and they share their road maps with us, and we are in a unique position to work with all the leading companies across the globe. Right? And so I have said for a while that this trend, first of all, the trend that the customers, especially these big hyperscalers, will do their own chips, is even more firm now than one or two years ago.

And it's evident now with some of the big hyperscalers, the success they're having with their own chips, right, especially in the last six months. That has become evident. Because it was not clear one or two years ago; people thought, you know, people will not design their own chips. That doesn't mean that the merchant semis will not do well. The merchant semis will do fabulously. But the big customers will design their own chips. Okay. And then it is also true that, over time, the big customers will do more and more things in-house.

You know, starting with ASIC, to hybrid COT, to COT. I mean, there's another step these days versus the old days, which is hybrid COT, because these chips have multiple chiplets in them. So the customers can do some of the chiplets themselves, some can be outsourced, and then they can do all of them themselves. So I think this trend is going to happen, and the reason we talk about it is that it is happening. You know? And different customers will do it at a different pace. But, eventually, I think there will be multiple customers with their own chips.

There will be multiple, of course, very significant standard, general-purpose semi chips. And almost all of them will, over time, do more and more COT, and like you said, they do multiple chips now, at least three major platforms for each hyperscaler. So all this is good for us. Good for more EDA consumption at the system companies, more IP being used internally, of course more hardware, more system tools, because they are system companies in nature. We just want to make sure we are well positioned for that. But the trend is only accelerating of these big companies doing more themselves, and then, as you know, this will also apply to other verticals like automotive and, you know, robotics and things like that.

Operator: And our next question comes from the line of Sitikantha Panigrahi with Mizuho. Your line is open.

Sitikantha Panigrahi: Thanks for taking my question. You talked about robust design activity. Can you give us some color on any kind of improvement in your traditional semi segment versus AI or automotive? If you could give some color, that would be helpful. And, Anirudh, on the physical AI side, that was a big focus at CES recently. Have you started seeing any traction in that space? When do you think that will be a significant contributor?

Anirudh Devgan: Yes. Thanks for the question, Siti. On both fronts, I mean, the design activity is accelerating, like I was saying, and that's true for system companies and semi companies. And, actually, a lot of the projections are that the semi industry might hit $1 trillion this year, which, you know, it used to be 2030, and we are, like, four years ahead of that. This is very good news for the industry. And, of course, we have deep partnerships with all the major semi players, and definitely the AI leaders, like with NVIDIA and with Broadcom; actually, in the prepared remarks, we also highlighted our new collaboration with Broadcom.

They are, of course, doing phenomenally well, and so is NVIDIA. And all the memory companies are doing phenomenally well. So, overall, I think the semi companies, along with the system companies, are doing great. And I do see strength especially in AI and memory, but we also see the general market, and I'm sure you follow that, the mixed-signal companies, the regular, let's call it, the regular semi companies, also, I think, have a better outlook for '26 than '25. So it's good to see broad-based strength in the semi business, which is about 55% of our business.

And that just creates a better environment for us to deploy our new solutions, and they all want to deploy AI, like we discussed earlier. And that's true for both semi and system companies. So, overall, I feel that the environment is much healthier starting '26 than it was a year ago.

Operator: And our next question comes from the line of Lee John Simpson with Morgan Stanley. Your line is open.

Lee John Simpson: Great. Thanks for squeezing me in here. I just wanted to go back to ChipStack, if I could. I mean, it seems relatively clear that you see the super agent as something that can transform Verilog, or RTL, or the coding thereof at least. And then it would pull in basically your tools for debug and optimization, so you get a more deterministic outcome for customers. But you teased us a little bit with the idea about where the further monetization would come. It didn't sound like it would be on a subscription basis; it would be on a sort of value-to-customer basis. I wonder if you can maybe just expand a little on that and how that would be monetized. And maybe in particular, whether or not this would be margin accretive. You know, you're at 45% now already, so could this help kick that on? Thanks.

John M. Wall: That's a great question, Lee. If I might jump in here on the monetization side: we don't see AI forcing a wholesale change from subscriptions to consumption. Our customers still want predictable access to trusted sign-off engines and certified flows, so multiyear subscription remains at the core of our business. What AI does is change how much customers run the tools and where value is created. There's more automation, there are more iterations, there's more compute. So we'll attach more usage-based pricing for incremental capacity and AI-driven optimization. We have card models and token models that handle all those things.

And then in a few areas, on the services side, we can offer outcome-oriented packages that are structured around measurable improvements, like cycle time and closure productivity, with clear scope and governance, and that's kind of how we've been going to market in recent times. It's worked out well for us, and you can see how it's already turning around our recurring revenue. Now, we've been prudent in our outlook, and we're not expecting an uptick in that, but there is definitely plenty of opportunity for Cadence Design Systems, Inc. in AI.

But, you know, as Anirudh said in his opening comments, there are two real things that differentiate Cadence Design Systems, Inc. First, we're engineering software, anchored in physics and mathematically rigorous optimization. And that's not a nice-to-have; it's a core truth that our customers require as complexity rises. And secondly, AI is not replacing our products; it's amplifying demand and accelerating adoption. And you see that in our results for 2025, and I think you see it in our guide for 2026. Anything to add?

Anirudh Devgan: No. It's great, John. Yeah.

Operator: And our next question comes from the line of Jason Vincent Celino with KeyBanc Capital Markets. Your line is open.

Jason Vincent Celino: Hey. Great. Thank you for taking my question. It looks like IP had a phenomenal year. I know you have a slate of exciting new titles coming out, but I just wanted to ask how that translates to pipeline. Like, does it take time to sell these new IP titles? And then, with the guide overall, it looks mostly first-half weighted. Does your visibility in IP today look more first half or second half? Thanks.

Anirudh Devgan: IP is doing great. I mean, like I said, we want to see multiple years of performance before we call it out. And starting last year, I started to call it out because we saw multiple years and a good outlook into '26, which I think should come true. So our starting backlog and everything in IP is strong. And then we are also talking to, I mean, not just our traditional business with TSMC, which is doing phenomenally, but we have the opportunity to engage with the newer foundries. So overall, I think IP will be good this year, and we'll see how it progresses. We'll keep you posted. But it should be a strong year for IP in '26.

Operator: And our next question comes from the line of Jay Vleeschhouwer with Griffin Securities.

Jay Vleeschhouwer: Thank you. Good evening. Anirudh, if we think about what's currently occurring with the AI phenomenon in larger EDA historical terms, the last time I would argue that there was a major, let's call it generational, technical, and procedural change in the industry was in the early 2000s. And I'd like to ask how this time might be different from that phenomenon, in the sense that the last time it was fairly narrowly based in terms of the number of products that grew or were newly adopted. We saw the very interesting phenomenon where average contract durations actually shrank, I think, as customers were looking to perhaps mitigate technical risk and wanted to retain some vendor flexibility or optionality; hence the shorter durations at that time.

Would you say that this time around, the adoption phenomenon might last longer than just the few years of the earlier generation I mentioned, and that there wouldn't necessarily be an adverse effect on contract durations, perhaps maybe even a lengthening with longer commitments from customers? And maybe talk about how, in those big respects, this phenomenon might be broader and more long lasting than what occurred, again, many years ago, but which has some similarities.

Anirudh Devgan: Yeah. That's a great point, Jay. And I mean, we have to see how it unfolds, because each time is similar but different. But we are not seeing any change in the duration, which is good. But there is always more opportunity to see more and more add-ons, like we have mentioned in the past. Now it will affect all parts of the flow, you know, like in the three-layer cake; the top two layers will fuse together, AI and the core engines. And I think there is opportunity to add, like I said, new product categories.

Especially in the front end, this kind of super agent to write RTL, which is different from regular vibe coding. What is special about ChipStack is it's not just writing RTL, but also writing test benches, writing verification flows. Because, you know, Jay, verification is as important as chip design. If you can't verify, then, you know, because all our customers want things to be first-time right. So I think the opportunities of AI in verification are huge, because that's the NP-complete, exponential problem.

I think what is also exciting to me about the new agentic AI tools is the ability to verify much more accurately, and then we go from there. I mean, at this point, I feel good about all three layers of the cake. We have been innovating. We have been first to market in porting our software to new hardware platforms, whether they're parallel CPUs or GPUs or custom chips. Our base tools are performing remarkably well. We are taking share in almost all segments. And then we are first to market with agentic AI. So I feel good about the portfolio.

I feel good about the engagement. Now, how exactly it will unfold, I think it should be more long lasting, but it's very difficult to say, so we'll keep you posted. But so far, so good.

John M. Wall: Yeah, this is John. Just to add, you know, we've been around a long time, in terms of chasing Moore's Law for the longest time, and we've built sales models that generally adapt to aligning price with value while preserving the durability of our recurring revenue model. I think what you can count on us to do is that we won't undermine customer predictability; subscriptions will remain the anchor in terms of our primary engagement with our customers. And we won't take unbounded outcome risk either: outcomes will be scoped and measurable, and we'll price on value metrics.

Customers can control things like jobs and runs and compute and throughput and things like that. So it'll be very, very deliberate and thoughtful in terms of how we grow, as we always are.

Operator: And our next question comes from the line of Gianmarco Paolo Conti with Deutsche Bank. Your line is open.

Gianmarco Paolo Conti: Yes. Hi. Thank you for squeezing me in, and congrats on a great quarter. I have a long question. Could we start with some detail on how we can bridge the gap between ChipStack, which we know is about automation, and Cerebrus, which is about implementation? I guess my question is about whether there could be some cannibalization in the future. And staying on the AI theme, given the progress happening in AI models, are you seeing more competition, particularly from AI startups? I'm wondering whether that's coming up whenever you're pitching this to clients. Finally, just to cap it off, are there any hardware constraints when you're running more agents, given that you're going to require more compute, especially at high design scales?

Anirudh Devgan: Yeah. Hi. Sorry, there's some noise on the line, so I think I got the gist of the question, but I may not have gotten all the points, so apologies in advance. I think your question is about the front-end agent versus Cerebrus, and also startups. So first of all, I think Cerebrus is super critical. I think there will be several kinds of agentic AI flows that will be needed. Now, we highlighted ChipStack because it's kind of new and it's a new category of RTL design and verification. But there are at least several agents that we are actively developing, okay?

You know, with Cerebrus, we also extended Cerebrus to the full flow. So there has to be a front-end design agent, just like Cerebrus is a back-end agent for physical implementation, because that takes a lot of time right now, and there's a lot of demand for making the implementation more efficient. And similar principles apply in Cerebrus AI Studio: we do more exploration, and the customer gets better results as a result of that. There will be a lot of activity we will highlight in the future on the back end, on physical design. So digital design and verification is one area, physical design another area.

Analog, of course, is ripe as well; finally, we have new technology to see if we can automate more and more of analog and, you know, migration flows. And then there's packaging and system design. So we highlight ChipStack because we're super excited about it, but that doesn't mean the others aren't important; there are four or five big agentic flows that we are developing. On the startups, we always watch all the startups. You know, we have a history of also acquiring them if they are good, but more at the earlier stages, like we did with ChipStack. I think that was the best AI startup out there.

And we are very confident in our own R&D. You know, we have, like, 10,000 people, the best R&D team in computational software. Half of them have advanced degrees. We have 3,000 customer support engineers. We're regularly meeting with customers, you know, with big customers; in a given week, we'll have multiple R&D meetings with their R&D. So we keep track of what the customer wants. We have massive investment in R&D.

And, typically, I think the startups are successful in areas we don't focus on, or if we want to enter new areas. But in terms of AI, we are completely focused, and, you know, we always use a startup as an accelerant if we need to, but we will have massive investment in this space, in all the major domains that our customers want. Yeah.

Operator: And our next question comes from the line of Ruben Roy with Stifel. Your line is open.

Ruben Roy: Yes. Thank you. Anirudh, you answered bits and pieces of what I'm about to ask, but I was hoping to put together a question on SDA, just to understand sort of the longer-term strategy. It seems like some companies, enterprises, industrials, or otherwise are maybe thinking about pulling some simulation workloads in-house or partnering with the AI infrastructure ecosystem. We've seen Synopsys and NVIDIA talk about targeting Omniverse digital twins for that type of thing. How should investors think about your strategy?

Is it sort of a neutral strategy where you'll work with accelerated compute providers, etcetera, and their tools, or are you trying to build sort of an ecosystem that's Cadence Design Systems, Inc. specific? I'm just trying to understand your longer-term strategy and thinking around SDA. Thank you.

Anirudh Devgan: Yeah. Thank you for the question. So in SDA, there are two critical areas for us. One is 3D IC and all the innovation that's happening, you know, both at the package-level analysis, and then the other is physical AI, physical simulation, like for planes and cars and robots and drones, and that's one of the big reasons to acquire BETA and then Hexagon. But we are focused on building the core engines. Okay? And the core engines will work with accelerated compute.

Like, we have done joint GPU work with Jensen and NVIDIA for years, and we were the first to port all our software solvers to the accelerated compute platform, because physical simulation, you know, like cars and planes and robots, kind of CFD and structural simulation, and I've said this before, is naturally, without getting too technical, matrix multiply. Okay? And GPUs and, you know, NVIDIA are exceptional at that, because AI at its core is matrix multiply.

So it's a good fit, and we work with Omniverse and all, but Omniverse is a great platform, and when they actually run Omniverse, they will run our tools through that. So that's another way to go to market, and then also directly, you know, with customers. So we are neutral on that, but Omniverse is a great platform to deploy our products, and NVIDIA has highlighted that with several of our customers. But our goal is to build the basic, you know, we are an engineering software company.

We build the basic solvers that solve the most difficult problems, combine them with AI, combine them with compute, and deploy to all platforms. Okay? So I feel good about our position that way. Yeah.

Operator: And our next question comes from the line of Joshua Alexander Tilton with Wolfe Research. Your line is open.

Joshua Alexander Tilton: Hey, guys. Thanks for sneaking me in, and I will echo my congratulations on a strong quarter. I kind of have a high-level one. I know a lot of times you focus on, like, what the three-year CAGR has been. And I think on this call, Anirudh mentioned that semi companies now represent, or still represent, my understanding is, about 55% of the business. So my question is, how do we think about growth over the next three years as the mix of semis and systems levels out, and what feels like the mix of upfront and recurring levels out, at what I'm assuming is kind of a more sustainable level than the shifts you've seen over the last few years?

Anirudh Devgan: Yeah. I think, you know, we are super excited about the system companies doing more silicon. There have been some questions in the past, but like I had said before, I think this is an irreversible and accelerating trend. Okay? And, of course, we gave several examples this time. And especially because of AI, the system companies will do a lot, and then with physical AI, they will do even more. Now, that number, 55/45, first of all, moves very, very slowly, because the semi companies are doing well too. I mean, we are growing at a record pace, but both of them are growing.

You know, semi companies, okay, what NVIDIA has done, of course, is phenomenal. What is happening with Broadcom is phenomenal. And then Qualcomm, MediaTek, there are so many semi companies that are doing phenomenally well. So the ratio, I think, more and more, the system companies will contribute more, but it doesn't move as fast as you would think, which is a good thing, because the semi companies are also growing rapidly. And, of course, semi companies will have an essential role in the build-out of AI, which is driving all this growth. So that's what I would like to say. Yeah.

John M. Wall: Yeah. And, Josh, you know, I think I mentioned before, we expect the recurring revenue mix to remain around 80% in fiscal '26, and that's consistent with 2025. And when we say that we have a prudent guide for 2026, I think there's as much upside on the recurring revenue side of the business as there is on the upfront side. Strategically, we like the balance. Recurring provides durability; upfront reflects areas where customer demand is accelerating and we have differentiated assets. But we're seeing strength right across the board, and I think that's why Anirudh is talking about share gains.

Operator: And our final question comes from the line of Nay Soe Naing with Berenberg. Your line is open.

Nay Soe Naing: Hi. Thank you for taking my question. Maybe one for John. I think you've mentioned leveraging AI internally, and I was wondering how we should think about that in our models, and how we should think about your incremental margin going forward. I think with your '26 guide, what you're implying is incremental margins of about 51%, you know, which is slightly below the rate that you've been trending at in recent years. So I just wanted to triangulate that with the internal AI leverage: how are you regarding margin for '26, and how should we think about margin a bit longer term in the age of AI? Thank you.

John M. Wall: Yeah, thanks for the question. I think if you have a look at what we achieved in 2025, we achieved an incremental margin of 59%, I think. And I think that points to the fact that there's no near-term ceiling on operating leverage for the company. I mean, the company performed at about 45% operating margin, so there's a lot of upside in that incremental margin of 59% that we achieved in 2025. Now, generally, you know, we're more prudent with our guide at the start of the year; we try to build from there.

So I think the right compare for the 51% that's in the current guide is probably against what we would guide for incremental margin at the start of each year. And I think it's one of the strongest guides that we've ever had. And then, in relation to your commentary about AI and our use of that internally, that's absolutely right. That's what Anirudh has been talking about for years now: design for AI and AI for design. Internally, we learn a huge amount from our own internal groups in terms of how AI is used.

And, if you like, I mean, we've built a great business around emulating hardware, and a lot of our AI usage is, like, emulating engineering flows. And, you know, we take advantage of those, and they're helping us to get more value out of the R&D investments that we're making. But we expect to do the same as our customers, in that when you have access to more engineering capability and are able to do things faster and leverage AI, you probably do more R&D, and it'll be more people and more AI, not fewer people.

Operator: I will now turn the call back to Anirudh Devgan for closing remarks.

Anirudh Devgan: Thank you all for joining us this afternoon. It's an exciting time for Cadence Design Systems, Inc. as we begin 2026 with product leadership and strong business momentum. Our continued execution of the intelligent system design strategy, customer-first mindset, and high-performance culture is driving accelerated growth. Great Place to Work and Fortune magazine recognized Cadence Design Systems, Inc. as one of the Fortune 100 Best Companies to Work For in 2025, ranking it number 11. On behalf of our employees and our board of directors, we thank our customers, partners, and investors for their continued trust and confidence in Cadence Design Systems, Inc.

Operator: And ladies and gentlemen, thank you for participating in today's Cadence Design Systems, Inc. fourth quarter and fiscal year 2025 earnings conference call. This concludes today's call, and you may now disconnect.


This article is a transcript of this conference call produced for The Motley Fool. While we strive for our Foolish Best, there may be errors, omissions, or inaccuracies in this transcript. Parts of this article were created using Large Language Models (LLMs) based on The Motley Fool's insights and investing approach. It has been reviewed by our AI quality control systems. Since LLMs cannot (currently) own stocks, it has no positions in any of the stocks mentioned. As with all our articles, The Motley Fool does not assume any responsibility for your use of this content, and we strongly encourage you to do your own research, including listening to the call yourself and reading the company's SEC filings. Please see our Terms and Conditions for additional details, including our Obligatory Capitalized Disclaimers of Liability.

The Motley Fool has positions in and recommends Cadence Design Systems. The Motley Fool has a disclosure policy.
