Circular Financing: Does Nvidia's $110B Bet Echo the Telecom Bubble?
by miltava
I worked at a mom and pop ISP in the 90s. Lucent did seem to be at the forefront of internet equipment at the time. We used Portmaster 3s to handle dial up connections. We also looked into very early wireless technology from Lucent.
Something I wanted to mention, only somewhat tangential: the Telecommunications Act of 1996 forced telecommunication companies to lease out their infrastructure. It massively reduced the prices an ISP had to pay to get a T1, because, suddenly, there was competition. I think a T1 went from 1800 a month in 1996, to around 600 a month in 1999. It was a long time ago, so my memory is hazy.
But, wouldn't you know it, the telecommunications companies sued the FCC and the Telecommunications Act was gutted in 2003.
https://en.wikipedia.org/wiki/Competitive_local_exchange_car...
> I think a T1 went from 1800 a month in 1996, to around 600 a month in 1999. It was a long time ago, so my memory is hazy.
It varied a lot by region. At the mom and pop ISP I worked at, we went from paying around $1,500/month for a T1 to $500, and eventually to around $100/month for the T1 loop to the customer, plus a few grand a month for an OC12 SONET ring that we used to backhaul the T1 (and other circuits) back to our datacenter.
But, all of it was driven by the Telecommunications Act requirement for ILECs to sell unbundled network facilities - all of the CLECs we purchased from were using the local ILEC for the physical part of the last mile for most (> 75%) of the circuits they sold us.
One interesting thing that happened was that for a while in the late 90's, when dialup was still a thing, we could buy a voice T1 PRI for substantially less than a data T1 ($250 for the PRI vs $500 for the T1.) The CLEC's theory was that our dialup customers almost all had service from the local ILEC, and the CLEC would be paid "reciprocal compensation" fees by the ILEC for the CLEC accepting calls from them.
In my market, when the Telecommunications Act reforms were gutted, the ILEC just kept on selling wholesale/unbundled services to us. I think they had figured out at that point that it was a very profitable line of business if they approached it the right way.
I worked at a startup during the telecom boom. Then, startups were getting acquired by the likes of Cisco before the startup had a deployed product. And, back then, IPOs were the only form of liquidity event and engineers were locked up for 6 months. The lucky ones had their startups go IPO or get acquired with enough time to spare to get out before the ensuing bust. After the bust, funding dried up and most startups folded including the one I worked at. There was wipeout and desolation for a few years. Subsequently, green shoots started appearing in the form of a new wave of tech companies.
Capitalism only works if there is lots of competition.
Monopolies gum up the system, reward the institutional capital rather than innovation capital, and prevent new entrants from de-ossifying and being the renewing forest fire.
We've been so lax on antitrust. Google, Apple, Meta, Amazon - they all need to be broken up. Our economy and our profession would be better for it.
Innovation should be a treadmill.
YC and a16z want this.
I don't agree with everything in this book, Goliath,[0] but his main argument was that business monopoly and democracy are in direct opposition. And that since the 1970s we've chosen to enable monopoly again and again.
0: https://www.amazon.com/Goliath-100-Year-Between-Monopoly-Dem...
How does Stoller explain why since the 1970’s there’s been massive turnover whether you look at the largest 10 companies, the largest 100, or the largest 1,000?
> YC and a16z want this.
I don't think so. I think they want their own monopolies. That is what Peter Thiel's book[1] recommends.
You're implying only 4 years of regulation was enough to shift the balance of power between telecoms and smaller ISPs.
If it's true that this regulation was what helped jumpstart the internet it's an interesting counterpoint to the apocalyptic predictions of people when these regulations are undone. (net neutrality comes to mind as well)
I've never heard anyone claim before that just having these laws on the books for a small period of time is "enough".
> I've never heard anyone claim before that just having these laws on the books for a small period of time is "enough".
Why would it be enough? This legislation prevents monopolies from abusing position, therefore we will repeal it the moment it turns out to be useful?
Yeah, it takes time to consolidate power again, that does not mean the legislation is not good.
> Why would it be enough?
It worked out just fine? Are you saying that post-2003 internet access should have had more regulation to allow open access?
I've never heard anyone complain about that before- is there a specific issue that could have been fixed?
You bring up an interesting point. This is something I discovered years ago when I would have "political discussion lunch" with three of my friends who all had very different political views. After years of going to lunch together and debating things like, "Was giving AT&T a monopoly in 1913, good, or bad?"
https://en.wikipedia.org/wiki/Kingsbury_Commitment
The answer is... nobody will ever agree on anything. You can always cherry pick some detail to bolster your case, whatever it may be.
We can never visit the alternate reality where another choice was made and so you can not win an argument.
Now, you can go and find similar circumstances. You can find other countries who did not grant a monopoly (for instance). But then, your opponent will argue all the differences between that instance and what occurred.
Also, I think it is a shame your original reply is getting voted down. I am against people voting down comments just because they disagree. Voting down should be used for comments that are low quality.
The gutting of telecom competition, the allowance of total monopoly power, was a travesty of the court system. The law was quite plain & clear & the courts decided all on their own that, well, since fiber to the home is expensive to deploy, we are going to overrule the legislative body. The courts aren't supposed to be able to overturn laws they don't like as they please but that's what happened here.
Regarding the price of connection, it's also worth mentioning that while T1 and other T-carrier and OCx connections remained in high use, 1996-1999 is also the period where DSL became readily available & was a very fine choice for many needs. This certainly created significant cost pressure on other connectivity options.
Changes in the legislative landscape may have influenced the timing of the price war and the telecoms bubble.
But the price war was inevitable. And the telecoms bubble was highly likely in any case.
Telecoms investment was a response to crazy valuations of dot-com stocks.
Fiber networks were using less than 0.002% of available capacity, with potential for 60,000x speed increases. It was just too early.
I doubt we will see unused GPU capacity. As soon as we can prompt "Think about the codebase over night. Try different ways to refactor it. Tomorrow, show me your best solution." we will want as much GPU time at the current rate as possible. If a minute of GPU usage is currently $0.10, a night of GPU usage is 8 * 60 * 0.1 = $48. Which might very well be worth it for an improved codebase. Or a better design of a car. Or a better book cover. Or a better business plan.
> I doubt we will see unused GPU capacity
I'd argue we very certainly will. Companies are gobbling up GPUs like there's no tomorrow, assuming demand will remain stable and continue growing indefinitely. Meanwhile LLM fatigue has started to set in, models are getting smaller and smaller and consumer hardware is getting better and better. There's no way this won't end up with a lot of idle GPUs.
>Meanwhile LLM fatigue has started to set in
Has it?
I think there is this compulsion to think that LLMs are made for senior devs, and if devs are getting wary of LLMs, the experiment is over.
I'm not a programmer, my day job isn't tech, and the only people I know who express discontent with LLMs are a few programmer friends of mine. Which I get, but everyone else is using them gleefully for all manner of stuff. And now I am seeing the very first inklings of completely non-technical people making bespoke applets for themselves.
From OpenAI, programming is ~4% of ChatGPT's usage. That's 96% being used for other stuff.
I don't see any realistic or grounded forecast that includes a diminishing demand for compute. We're still at the tip of adoption...
You should get on Reddit, people hate AI with a passion there. People I meet in real life hate it also. I think the public actually hates AI more than it should now.
I spent 13 years chronically on reddit before stumbling into an exit hatch of the bubble chamber.
Those people (well really it's teens and college kids) live on reddit; they are so far from an accurate representation of reality it's insane.
Worse when you find out there’s a couple dozen of the same moderators running nearly all the top 500 subreddits.
That makes some sense. Are they paid for this?
By reddit? No. By the users? No.
But they are paid.
You’d like to substantiate that? I’d love to have someone pull the curtain back and learn by whom, if that’s the case.
I don't know about money, but they definitely get ego strokes.
So they own the media...
Everyone should learn the concept of a Skinner Box. [1]
Reddit is a Skinner Box. HN is too, though to a much lesser extent [2]. Every Skinner Box has one dominant opinion on every matter, which means, by simply using the product, your beliefs on any matter will shift towards the dominant opinion of the platform.
I was a chronically online Reddit user once. I can spot any chronically online Reddit user in just a few minutes in any social event by their mannerisms and the way they talk. I’ll ask and without fail indeed they are a daily Reddit user. It’s even more obvious in writing where you can spot them in just a few always-grammatically-correct text messages flavored with reddit-funny remarks and snarks and jokes.
Same goes for chronic X users. Their signature behavior is talking about social/political issues unprompted. It’s even easier to spot them.
I think the main reason behind platforms shaping user behavior is this: The most upvoted content will always surface to the top, where it will be seen by most users, meaning, its belief-shaping impact is exponential instead of linear. In the same manner unpopular opinions will be pushed to the bottom, and will have exponentially small impact. Some opinions will even be banned or shadowbanned, which means they are beyond the Overton Window of the specific platform.
This way, the platform both nudges you towards the dominant opinion and limits the range of possible opinions you will be exposed to. Over time, this affects your personality and character.
1: https://en.m.wikipedia.org/wiki/Operant_conditioning_chamber
2: The HN moderators and the algorithm both actively resist the effect and try to increase diversification.
I had to quit Reddit after a decade of heavy use because of the doomerism. It's a place you go if you want to kill your spirit. It's just not healthy.
The irony of this is that so many Reddit comments these days are AI generated.
It's a pretty biased sample. Not to mention that people who are neutral and are just using AI won't be bothered to comment. So you only ever see one extreme or the other.
The view I have is that people hate having AI slop spewed at them, but will find value in asking an LLM about things they're interested in / help with things.
> From OpenAI, programming is ~4% of ChatGPT's usage. That's 96% being used for other stuff.
I think it's important to remember that a good bunch of this is going to be people using it as an artificial friend, which is not really productive. Really that's destructive, because in that time you could be creating a relationship with an actual person instead of a context soon to be deleted.
But on the other hand, some people are using it as an artificial smart friend, asking it questions that they would be embarrassed to ask to other people, and learning. That could be a very good thing, but it's only as good as the people who own and tune the LLMs are. Sadly, they seem to be a bunch of oligarchs who are half sociopaths and half holy warriors.
As for compute, people using it as an artificial friend are either going to have a low price ceiling, or in an even worse case scenario they are not and it's going to be like gambling addiction.
Productive or destructive, demand is there, so it isn’t late bubble. It’s still early. (Which is scary, I’ll readily admit.)
But demand isn't there (or rather, proven to be there.) Demand is measured in dollars, and right now VC is paying. This is peak bubble - farthest distance between valuation and income.
Even if it's there, will it be in 10 years?
Microsoft and Meta are not VCs and they're spending money on data centers like there's no tomorrow; doesn't seem very low demand.
Test time compute has made consumption highly elastic. More compute = better results. Marginal cost of running these GPUs when they would otherwise be idle is relatively very low. It will be utilized.
Look at the human body.
2% of it is dedicated to thinking.
My guess is that as a species, we will turn a similar percentage of our environment into thinking matter.
If there are a billion houses on planet earth, 2% of that is 20 million datacenters we still have to build.
An analogy is not proof. It is not even evidence.
What's the lifetime of these things once they've been running hot for 2-3 years?
> There's no way this won't end up with a lot of idle GPUs.
Nvidia is betting the farm on reinventing GPU compute every 2 years. The GPUs wont end up idle, because they will end up in landfills.
Do I believe that's likely? No, but it is what I believe Nvidia is aiming for.
Your bet is that people will simply use less compute, for the first time in the history of the human race?
No, mostly less external compute
This. I just found out that for my MCP needs, Qwen3 4B running local is good enough! So I just stopped using Gemini API.
> As soon as we can prompt "Think about the codebase over night. Try different ways to refactor it. Tomorrow, show me your best solution." we will want as much GPU time at the current rate as possible.
That is nothing. Coding is done via text. Very soon people will use generative AI for high resolution movies. Maybe even HDR and high FPS (120 maybe?). Such videos will very likely cost in the range of $100-$1000 per minute. And will require lots and lots of GPUs. The US military (and I bet others as well) are already envisioning generative AI use for creating a picture of the battlespace. This type of generation will be even more intensive than high resolution videos.
> improved codebase
I've seen lots of claims about AI coding skill, but that one might be able to improve (and not merely passably extend) a codebase is a new one. I'd want to see it before I believe it.
It depends what you're fitting to. At the simplest, you can ask for a reduction in cyclomatic/cognitive complexity measured using a linter, extraction of methods (where a paragraph of code serves no purpose other than to populate a variable) or complex conditionals, move from an imperative to a declarative approach, etc. These are all things that can be caught through pattern matching and measured using a linter or code review tool (CodeRabbit, Sourcery or Codescene).
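To make "extraction of methods" concrete, here is a minimal sketch of the kind of mechanical change being described; the function names and discount rules are invented for illustration, not taken from any real codebase or tool:

    # Hypothetical "before": a paragraph of code that exists only to populate a variable.
    def monthly_invoice(customer, line_items):
        # inline discount calculation buried in the middle of the function
        discount = 0.0
        if customer.get("loyalty_years", 0) >= 5:
            discount = 0.10
        elif customer.get("is_nonprofit"):
            discount = 0.15
        subtotal = sum(item["price"] * item["qty"] for item in line_items)
        return subtotal * (1 - discount)

    # "After": the paragraph is extracted into a named helper. This is exactly the
    # kind of behaviour-preserving change a linter can verify (lower cognitive
    # complexity, same output) and that an agent can be asked to perform.
    def _discount_rate(customer):
        if customer.get("loyalty_years", 0) >= 5:
            return 0.10
        if customer.get("is_nonprofit"):
            return 0.15
        return 0.0

    def monthly_invoice_refactored(customer, line_items):
        subtotal = sum(item["price"] * item["qty"] for item in line_items)
        return subtotal * (1 - _discount_rate(customer))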
Other things might need to be done in two stages. You might ask the agent to first identify where code violates CQRS, then for each instance, explain the problem, and spawn a sub-agent to address that problem.
Other things the agent might identify this way: duplicate implementations, use of conflicting APIs, poor separation of concerns at a module or class level.
I don't typically let the agent do any of this end to end; I manually review the findings before spawning subagents with those findings.
With improvements on the algorithm side and new techniques, even older hardware will become useful.
I get what you're saying and the reasoning behind it, but older hardware has never been useful where power consumption is part of determining usefulness.
This is the biggest threat to the GPU economy – software breakthroughs that enable inference on commodity CPU hardware or specialized ASIC boards that hyperscalers can fabricate themselves. Google has a stockpile of TPUs that seem fairly effective, although it’s hard to tell for certain because they don’t make it easy to rent them.
I don't think we will need to wait for anything as unpredictable as a breakthrough. Optimizing inference for the most clearly defined tasks, which are also the tasks where value is most readily quantified, like coding, is underway now.
More efficient inference = more reasoning tokens. Hyperscaler ASICs are closing the gap at the hardware/system level, yes.
> As soon as we can prompt…
This is the fundamental error I see people making. LLMs can’t operate independently today, not on substantive problems. A lot of people are assuming that they will some day be able to, but the fact is that, today, they cannot.
The AI bubble has been driven by people seeing the beginning of an S-curve and combining it with their science-fiction fantasies about what AI is capable of. Maybe they’re right, but I’m skeptical, and I think the capabilities we see today are close to as good as LLMs are going to get. And today, it’s not good enough.
Getting gold in the math Olympiad is a pretty strong indicator of operating independently on substantive problems.
A year ago they needed an extensive harness to get silver, and two years ago they could hardly multiply 1000x10000.
Terence Tao tweeted yesterday about using GPT5 to help quickly solve a problem he was working on.
Yes but why did ChatGPT work on math Olympiad problems? Because it got a prompt giving it the instruction and context etc.
Why did GPT5 help Terence Tao solve a math problem, because he gave it a prompt and the context etc.
None of these models are useful without a human prompting them and giving it tasks, goals, context etc, they don't operate independently, they don't get ideas of work to be done, they don't operate over long time horizons, they can't accept long term goals and sub-divide those goals into sub goals, and sub tasks etc.
They are useless without humans telling them what to do.
You should see what happens when you let them talk to each other
This is such a short sighted take, glaringly omitting a crucial ingredient in learning or improvement, for humans and machines alike: feedback loops.
And you can't really hack / outsmart feedback loops.
Just because something is conceptually possible, interaction with the real rest of the world separates a possible from an optimal solution.
The low hanging fruits/ obvious incremental improvements might be quickly implemented by LLMs based on established patterns in their training data.
That doesn't get you from 0 to 1 dollar, though and that's what it's all about.
This. It was highlighted rather starkly by Sutton in a recent podcast.
LLMs are a great tool. But, the real world is far too nuanced to be captured in text and tokens. So, LLMs will be a great productivity boosting tool like a calculator or a spreadsheet. Expecting it to do more is science fiction.
I just had to double check (have not been paying attention for a couple of years) but indeed it seems GPU underutilization remains a fact and the numbers are pretty significant. The main issue is being memory bound, so the compute sits idle.
The actual computation speed isn't as important nowadays but it doesn't really change the conclusion with respect to whether they're underutilized.
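A back-of-the-envelope roofline check shows why single-stream LLM decoding ends up memory bound; all of the hardware and model numbers below are illustrative assumptions, not measurements of any particular chip:

    # Rough roofline-style estimate for single-batch LLM decoding.
    # All figures are illustrative assumptions, not vendor specs.

    peak_flops = 1.0e15        # assumed peak compute, FLOP/s
    mem_bandwidth = 3.0e12     # assumed memory bandwidth, bytes/s

    # Generating one token with a dense model touches every weight roughly once:
    params = 70e9              # assumed 70B-parameter model
    bytes_per_param = 2        # fp16/bf16 weights
    flops_per_token = 2 * params          # ~2 FLOPs per parameter per token
    bytes_per_token = params * bytes_per_param

    arithmetic_intensity = flops_per_token / bytes_per_token   # FLOPs per byte moved
    ridge_point = peak_flops / mem_bandwidth                    # intensity needed to be compute bound

    print(f"arithmetic intensity: {arithmetic_intensity:.1f} FLOP/byte")
    print(f"ridge point:          {ridge_point:.1f} FLOP/byte")
    # Intensity (~1 FLOP/byte) sits far below the ridge point (~hundreds), so
    # single-stream decoding is memory bound and the compute units mostly idle;
    # batching requests is what pushes intensity back up.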
Because the main reason for the price premiums in AI-class GPUs are the gobs of insanely fast memory, and that is very much not underutilized. AI companies grab GPUs with as much memory (at the fastest memory bandwidth) as possible and underclock the GPU to save on power. Linus Tech Tips had a great video about the H200 that touched on this this week: https://www.youtube.com/watch?v=lNumJwHpXIA
Tasks being memory bound is not the same thing as GPU's being idle for economic reasons though.
Knowing history of past bubbles is only mildly informative. The dotcom bubble was different than the railroads bubble etc.
The only thing to keep in mind is that all of this is about business and ROI.
Given the colossal investments, even if the companies finances are healthy and not fraudulent, the economic returns have to be unprecedented or there will be a crash.
They are all chasing a golden goose.
With Bezos openly stating the goal to build 10 GW+ data-centers in space, it almost feels in question whether this is about ROI, or simply building the Neuromancer future where the ultra-rich can finally be free of their need for the rest of us. Not needing labor at all would be the final return on investment. https://news.ycombinator.com/item?id=45465480 https://www.datacenterdynamics.com/en/news/jeff-bezos-claims...
I'm not sure we should pay too much attention to what Bezos says now he's out of the day to day running of Amazon. It feels like a lot of his life choices now are more about being a megawealthy playboy than anything economically motivated.
>With Bezos openly stating the goal to build 10 GW+ data-centers in space, it almost feels in question whether this is about ROI, or simply building the Neuromancer future where the ultra-rich can finally be free of their need for the rest of us. Not needing labor at all would be the final return on investment.
Why does "Not needing labor at all" need to be in space?
It's so the billions of disgruntled former workers can't storm the castle.
Still doesn't make sense. The PLA (largest army in the world) can't even capture Taiwan. If they wanted an impenetrable fortress a random island is all they need.
It is folly to take these statements at their words.
Bezos is just saying shit to generate hype. All these executives are just saying shit. There is no plan. You must treat these people as morons who understand nothing.
Anyone who knows even the slightest details about datacenter design knows that moving heat is the biggest problem. This is the exact thing that being in space makes infinitely harder. "Datacenters in space" is an idea you come up with only if you are a moron who knows nothing about either datacenters or space.
If nothing else this is the singular reason you should treat AI as a bubble. All of the people at the helm of it have not a single fucking clue what they're talking about. They all keep running their mouth with utter nonsense like this.
> only mildly informative
I agree, but would like to maybe build out that theory. When we start talking about the mechanisms of the past we end up over-constricting the possibility space. There were a ton of different ways the dotcom bubble COULD have played out, and only one way it did. If we view the way it did as the only way it possibly could have, we'll almost certainly miss the way the next bubble will play out.
I’m concerned that the accounting differences mentioned between Lucent and Nvidia, Microsoft, OpenAI, Google just mean we have gotten much better at lying and misrepresenting things as true. Then the bubble pops and you get the real numbers and we are all like “yep it was the same thing all over again”.
Of course, CFOs are all very aware of what failed previously.
This time they have fiat money and government on their side. so that is also different.
It just means we're all going to be hurt by the collapse, not just investors. In line with socialized losses, privatized profits.
That is also true :)
I was there in the middle of the dotcom crash and the telecoms crash which was much worse. Fiber does not rust, and while there was vast overcapacity, not all of it was lit, or indeed worth lighting. 10 years after, thanks to DWDM there were 8 strand cables where only 2 strands were lit, albeit with many more wavelengths than envisaged before. Even though demand had grown.
How much is a 10 year old GPU worth? Where is the “dwdm but for GPUs?”.
There truly are interesting times and we have the benefit of being in them.
I think the fundamental issue is the uncertainty of achieving AGI with baked in fundamentals of reasoning.
Almost 90% of topline investments appear to be geared around achieving that in the next 2-5 years.
If that doesn't come to pass soon enough, investors will lose interest.
Interest has been maintained by continuous growth in benchmark results. Perhaps this pattern can continue for another 6-12 months before fatigue sets in; there are no new math olympiads to claim a gold medal on…
What's next is to show real results, in true software development, cancer research, robotics.
I am highly doubtful the current model architecture will get there.
AGI is not near. At best, the domains where we send people to years of grad school so that they can do unnatural reasoning tasks in unmanageable knowledge bases, like law and medicine, will become solid application areas for LLMs. Coding, most of all, will become significantly more productive. Thing is, once the backlog of shite code gets re-engineered, the computing demand for new code creation will not support bubble levels of demand for hardware.
> AGI is not near
There's also plenty of argument to be made that it's already here. AI can hold forth on pretty much any topic, and it's occasionally even correct. Of course to many (not saying you), the only acceptable bar is perfect factual accuracy, a deep understanding of meanings, and probably even a soul. Which keeps breathing life into the old joke "AI is whatever computers still can't do".
I will give you an example from just two days ago, I asked chatgpt pro to take some rough address data and parse it into street number, street name, street type, city, state, zip fields.
The first iteration produced decent code, but there was an issue some street numbers had alpha characters in it that it didn't treat as street numbers, so I asked it to adjust the logic of code so that even if the first word is alpha or numeric consider it a valid street number. It updated the code, and gave me both the sample code and sample output.
Sample output was correct, but the code wasn't producing correct output.
It spent more than 5 mins on each of the iterations (significantly less time than a normal developer would take, but a normal developer would not come back with broken code).
I can't rely on this kind of behavior, and this was a completely greenfield, straightforward input and straightforward output. This is not AGI in my book.
When an agent can work independently over an 8 hour day, incorporating new information and balancing multiple conflicting goals—then apply everything it learned in context to start the next day with the benefit of that learning, repeat day after day—then I'll call it AGI.
> AI can hold forth on pretty much any topic, and it's occasionally even correct.
You consider occasionally being correct AGI?
Did I really need to use the /s tag? But a four-year-old is occasionally correct. Are they not intelligent? My cat can't answer math problems, is it a mere automaton? If we can't define what "true" intelligence is, then perhaps a simulation that fools people into calling it "close enough" is actually that, close enough.
> There's also plenty of argument to be made that it's already here
Given you start with that I would say yes the /s is needed.
A 4 year old isn’t statistically predicting the next word to say; its intelligence is very different from an LLM. Calling an LLM “intelligent” seems more marketing than fact based.
I actually meant that first sentence too. One can employ sarcasm to downplay their own arguments as well, which was my intent, as in that it might be possible that AGI might not be a binary definition like "True" AI, and that we're seeing something that's senile and not terribly bright, but still "generally intelligent" in some limited sense.
And now after having to dissect my attempt at lightheartedness, like a frog or a postmodern book club reading, all the fun has gone out. There's a reason I usually stay out of these debates, but I guess I wouldn't have been pointed to that delightful pdf if I hadn't piped up.
https://ai.vixra.org/pdf/2506.0065v1.pdf
I don't think I'll ever stop finding this funny.
It was written by Opus 4 too.
It's still forgetting what it's talking about from minute to minute. I'm honestly getting tired of bullying them into following the directions I've already given them three times.
I think the main problem with AGI as a goal (other than I don't think it's possible with current hardware, maybe it's possible with hypothetical optical transistors) is that I'm not sure AGI would be more useful. AGI would argue with you more. People are not tools for you, they are tools for themselves. LLMs are tools for you. They're just very imperfect because they are extremely stupid. They're a method of forcing a body of training material to conform to your description.
But to add to the general topic: I see a lot of user interfaces to creative tools being replaced not too long from now by realtime stream of consciousness babbling by creatives. Give those creatives a clicker with a green button for happy and a red button for sad, and you might be able to train LLMs to be an excellent assistant and crew on any mushy project.
How many people are creative, though, as compared to people who passively consume? It all goes back to the online ratio of forum posters to forum readers. People who post probably think 3/5 people post, when it's probably more like 1/25 or 1/100, and the vast majority of posts are bad, lazy and hated. Poasting is free.
Are there enough posters to soak up all that compute? How many people can really make a movie, even given a no-limit credit card? Have you noticed that there are a lot of Z-grade movies that are horrible, make no money, and have budgets higher than really magnificent films, budgets that in this day and age give them access to technology that stretches those dollars farther than they ever could e.g. 50 years ago? Is there a glut of unsung screenwriters?
Not sure why you're getting downvoted.
If you speak with AI researchers, they all seem reasonable in their expectations.
... but I work with non-technical business people across industries and their expectations are NOT reasonable. They expect ChatGPT to do their entire job for $20/month and hire, plan, budget accordingly.
12 months later, when things don't work out, their response to AI goes to the other end of the spectrum -- anger, avoidance, suspicion of new products, etc.
Enough failures and you have slowing revenue growth. I think if companies see lower revenue growth (not even drops!), investors will get very very nervous and we can see a drop in valuations, share prices, etc.
I'm not sure either - for a second I thought perhaps llm agents are prowling around to ensure the right messages are floating up, but who knows...
> their expectations are NOT reasonable. They expect ChatGPT to do their entire job for $20/month and hire, plan, budget accordingly.
This is entirely on the AI companies and their boosters. Sam Altman literally says gpt 5 is "like having a team of PhD-level experts in your pocket." All the commercials sell this fantasy.
This is really the biggest red flag, non-technical people's (by extension investors and policymakers) general lack of understanding of the technology and its limitations.
Of course the valuation is going to be insanely inflated if investors think they are investing in literal magic.
I mean, the AI companies have £200 a month plans for a reason. And if you look at Blitzy for example, their plans sit at the £1000 a month mark.
I would blame the business people for being so gullible too.
There's some blame there, sure. But generally people would agree that between a con man and his victims, the con man has the greater moral failing.
In general, yes. But here we're talking about businessmen who are paid quite a lot of money literally to make decisions like these.
It is kind of like when a cop allows his gun to be stolen. Yes, the criminal is the guilty one, but the cop was also the one person supposed to guard against it.
> If you speak with AI researchers, they all seem reasonable in their expectations.
An extraordinary claim for which I would like to see the extraordinary evidence. Because every single interview still available on YT from 3 years ago had these researchers putting AGI 3 to 5 years out... A complete fairy tale, as the track to AGI is not even in sight.
If you want to colonize the Solar System, the track is clear. If you want to have fusion, the track is clear. The AGI track?
Fair point, and I should be more clear. The AI researchers I speak with don't expect AGI and are more reasonable in trying to build good tech rather than promising the world. My point was that these AI researchers aren't the ones inflating the bubble.
Hyperscalers are only spending less than half of their operating cash flows on AI capex. Full commitment to achieving AGI within a few years would look much different.
The biggest issue with Nvidia is that their revenue is not recurring, but the market is treating their stock (and, by correlation, all semi stocks) as if it were, when what's really driving it is a one-time massive CAPEX investment cycle lasting 1-2 years.
It's as simple as that, which is why it's just not possible for this to continue.
NVDA stock does not trade at a huge multiple. Only 25x EPS despite very rapid top line growth and a dominant position at the eve of possibly the most important technology transition in the history of humankind. The market is (and has been) pricing in a slowdown.
What do you think is going to happen to their earnings when CAPEX slows?
Their earnings will certainly decline or at least decelerate if capex slows. I’m just saying, if the market wasn’t pricing in a slowdown, NVDA would be trading at 40-60x next year EPS, not 25x.
The most important technology transition in the history of humankind, but Nvidia itself is not leading the software part? Are they just selling shovels, or why else would they give away the role of being the one to develop the AGI and GOD?
They just committed to invest $100b (!) in OpenAI and said $100b is only the start.
Since you aren't talking about the green transition, whatever technology transition you have in mind is obviously second most important at best.
If we get ASI it will figure out how to do the green transition for us!
This is peak techno-solutionism
If we are not headed to ASI, the spending will slow down and the problem will solve itself.
> > the eve of possibly the most important technology transition in the history of humankind.
Funnily enough, when you spend some months thinking about this intensively, the result is that a monetary investment into the company that will bring about the singularity / AGI is the most irrational thing one can do.
If the enterprise is successful and the singularity/AGI is benign, you won't need money anymore; if the experiment fails, the possibility of things going rogue is very high, or at least of panic from a possible series of rogue events.
So for the first time the rational thing would be to either spend that money to learn poker/chess/videogames or whatever game we will play with each other to feel cool while the AI takes care of everything else, or maybe outright spend the money on coke and strippers given the chance of doomsday.
They are doing it for the love of the game, IMO
What are you talking about? It's trading at 55x EPS and 41x forward EPS.
Consensus EPS for FY27 (~CY26) is $6.40. Buy side is higher!
That's 30x for earnings that will be known Jan 2027 (1 year and 3 months away). It's 40x earnings for Jan 2026 (3 months away).
For reference Enron was 40x earnings for current year forward estimates early in the year of the crash.
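For anyone following the arithmetic, every multiple in this exchange is just a share price divided by a different EPS estimate. The price and the trailing/FY26 EPS figures below are back-solved assumptions for illustration; only the $6.40 FY27 consensus figure comes from the comment above:

    # multiple = share price / EPS; the price and the first two EPS values are
    # assumptions chosen so the output matches the multiples quoted in the thread.
    share_price = 192.0            # hypothetical NVDA price

    eps_estimates = {
        "trailing 12 months": 3.5,   # assumption, implies ~55x
        "FY26 (Jan 2026)":    4.8,   # assumption, implies ~40x
        "FY27 (Jan 2027)":    6.4,   # consensus figure quoted above, implies ~30x
    }

    for period, eps in eps_estimates.items():
        print(f"{period:<20} EPS ${eps:.2f}  ->  {share_price / eps:.0f}x")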
Same could be said of the covid darlings - Zoom, Peloton etc. They got bid up assuming the present would continue into the future. That is the nature of the markets. Same story with fake meat companies. Across time you will find this pattern - 3d printing etc, all ushering in some new faddish technology. Also, it explains the investments into OpenAI as a hedge against capex slowdown, so there is a captive customer.
This. It’s basic economics. The second there’s a blip the market will be flooded with cheap used GPUs and there will be zero reason to buy new ones. At that point it will be impossible for NVidia to sustain their revenue numbers.
I'm pretty sure Nvidia sells out all their cards instantly once they are announced.
Their margin is ridiculous and they are still unable to meet demand.
TSLA is the same. The market is basically a new rich person's bank, abstracted by loans and lines of credit.
Obviously it's a bubble, but that's meaningless for anyone but the richest to manage.
The rest of us are just ants.
The one thing I don't understand is this assumption that demand for GPUs for training is going to keep growing at the rate they grew so far.
I get the demand for new applications, which require inference, but nowadays with so many good (if not close to SOTA) models available for free and the ability to run them on consumer hardware (apple M4 or AMD Max APUs), is there any demand for applications that justify a crazy amount of investment in GPUs?
Apologies for the second reply, but it also occurs to me that reinforcement learning is the new battleground. Look at the changes between o1, o3 and GPT-5 thinking. Sonnet 3.7, Sonnet 4, and Sonnet 4.5. And so forth.
I expect models will get larger again once everyone is doing their inference on B200s, but the RL training budget is where the insatiable appetite sits right now.
Isn’t the whole point of the arms race that the more GPUs you have the closer you get to AGI? Which is the supposed goal here.
I do not believe for a second that any of those people investing tens of billions of dollars are doing it to "get to AGI". They would only be able to profit from an AGI if it could be simultaneously (a) weaponized and (b) strictly controlled by one party, and no one is crazy enough to believe that both of these could be achieved.
If you tell me that people are pouring all that money into data centers because they believe that most applications will use some form of LLM or VLM as the main driver of machine-to-machine and machine-to-person interface, I'd be more inclined to buy it. But then I'd respond that it seems that LLMs are reaching a point of diminishing returns and the big next move is to make it easier and faster to distill/fine-tune the LLMs for specific business needs, which is something that should be possible to do with the existing infra already (I guess?)
I suspect continuous learning will be the next driver of GPU usage.
Inference will be cheapest when run in a shared cloud environment, simply due to LLMs' roofline characteristics. Thus, most B2B use cases are likely to be datacenter based, like AWS today.
Of course, CERN is still going to use their FPGAs hyper-optimized for their specific trigger model for the LHC, and Apple is going to use a specialized low power ASIC running a quantized model for hello Siri, but I meant the majority use case.
I do not buy this premise. I think it will end up being cheaper to simply run the LLMs directly on the user device.
I think that there are plenty of competitors in the "LLMs with open weights" space to essentially make the models a commodity, so all that is left is the compute cost and there is no way that someone will be running a datacenter in a way that is cheaper than "the computer that I already have running on my desk".
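A toy cost-per-token comparison shows how sensitive this argument is to assumptions; every number below is hypothetical, and the point is only the shape of the calculation, not the verdict:

    # Toy comparison of local vs. cloud inference cost per million tokens.
    # Every figure is an assumption chosen only to show how the comparison works;
    # real numbers vary wildly by model and hardware.

    seconds_per_year = 365 * 24 * 3600

    local_power_watts = 300
    electricity_per_kwh = 0.20       # USD
    tokens_per_sec = 30              # assumed local throughput for a small model
    utilization = 0.25               # fraction of the day actually generating

    tokens_per_year = tokens_per_sec * seconds_per_year * utilization
    energy_cost = (local_power_watts / 1000) * 24 * 365 * utilization * electricity_per_kwh

    hw_cost, lifetime_years = 3000.0, 3      # if you have to buy the box
    local_buying = (hw_cost / lifetime_years + energy_cost) / (tokens_per_year / 1e6)
    local_sunk = energy_cost / (tokens_per_year / 1e6)   # "the computer already on my desk"

    api_price = 0.50                 # assumed USD per million output tokens

    print(f"local (buying hardware): ${local_buying:.2f} / Mtok")
    print(f"local (sunk hardware):   ${local_sunk:.2f} / Mtok")
    print(f"cloud API (assumed):     ${api_price:.2f} / Mtok")
    # The ordering flips depending on utilization, model size, and whether the
    # hardware is already paid for, which is exactly the disagreement here.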
> Nvidia’s vendor financing becomes exposure to customers building competitive alternatives.
This is surely the most important line in the piece? In what world would this much demand not lead to alternatives emerging?
(Assuming the upside, yes if the demand is not there in two years then yes it’s all going to burn)
There was a similar circular effect in the dot com boom around ads. VCs poured money into startups, which put the money into ads on Yahoo and other properties. Yahoo was getting huge revenue from the ads, which pumped up the stock price. The rising price and revenue, and hence stock price of Yahoo pumped up the market for other dot coms, as it proved you could make money on the Internet, so the market for dot com IPOs was strong. That drew more VC money. More VC money meant more ads.
At a "telecom of telecom" we (they) were still lighting up dark fiber 15 years later (2015) when mobile data for cell carriers finally created enough demand. Hard to fathom the amount of overbuild.
The only difference is fiber optic lines remained useful the whole time. Will these cards have the same longevity?
(I have no idea just sharing anecdata)
New fiber isn't significantly more power efficient. The other side of the coin is that backhoes haven't become more efficient since the fiber was buried.
Directional drilling is a game changer and has become accessible in the last decade.
You're looking for advancement in carriages, unaware of the 'automobile' that made 5G and FTTH deployment at scale possible.
In 2005 telecom was a cash cow because of long distance charges and if your mechanical phone switch was paid off you were printing money (regulations guaranteed revenue)
This didn't last that much longer, and many places were trying to diversify into managed services (Datadog-style monitoring for companies' on-prem network and server equipment, etc.), which they call "unregulated" revenue.
As with anything in business, irrational exuberance can kill you.
I think the chips themselves won't have longevity, but the r&d gone into them is useful. Question is whether the value of that can be captured.
Depends on which companies we're talking about. Nvidia's annualized operating income is so high right now that it'll be capturing more value (op income) in the next four quarters (~$120 billion) than its R&D expenditures have cost over its 32 year history combined. For Nvidia the return has long since been achieved.
As the AI spending bubble gives out, Nvidia's profit growth will slow dramatically (single digits) before slamming into a wall (as Cisco did during the telecom bubble; leading up to the telecom crash, Cisco was producing rather insane quarter over quarter growth rates).
> Will these cards have the same longevity?
The article cites anecdotal 1-2 years due to the significant stress.
I think the high-density data centers that are being built to support the hyperscalers are more analogous to the dark fiber overbuild. When you lit that fiber in 2015, you (presumably) were not using a line card bought back in 1998.
Everything in that data center is depreciating as soon as they turn on the power.
AC, UPS, generators, not to mention the servers.
That's the thing with fiber: it was still useful. The cards at either end are easy to add, waaaayyy cheaper and higher perf (there were no cards on the end of dark lines) 15 years later.
In reference 14 we read
> However, what’s become clear is that OpenAI plans to pay for Nvidia’s graphics processing units (GPUs) through lease arrangements, rather than upfront purchases
I wish someone here could explain it to a dummy like me. Nvidia tells OpenAI: here's some GPUs, can you pay for them over 5 years? How is this an "investment" by Nvidia? That reference keeps calling this an investment, but what they describe is a lease agreement. Why do they call it an investment? What am I missing?
That Nvidia has to front the costs of the product at the beginning and arguably the risk that the allocation of assets end up not being paid off (bankruptcy, etc.) By carrying those costs early and the associated risk, Nvidia expects a return on that. If the risk is realized they'll lose but otherwise they'll gain. That has all the hallmarks of an investment.
Thanks for your reply, but:
NVidia could protect itself against OpenAI bankruptcy by adding a clause to the lease saying that if OpenAI goes bankrupt, Nvidia gets its GPUs back. So the risk would only be that the lease would be aborted sooner than expected.
> So the risk would only be that the lease would be aborted sooner than expected.
That is, in fact, the risk.
But I'm saying something different than the person I was responding to. They said that the risk was due to the company going bankrupt and therefore Nvidia losing its "investment" - read: the GPUs that it leased. Whereas I'm saying that the risk is due to the company going bankrupt, Nvidia getting its GPUs back, but now they have more at hand than they can usefully deploy/sell. The two are risks triggered by the same event, but the former is about 1 order of magnitude greater than the latter. The former is lost capital, the latter is lost opportunity - read: return on the capital.
In leasing to OpenAI, they aren't deploying those GPUs to other productive uses.
The costs are not just the raw cost of the physical asset, but also the opportunity costs of doing "thing a" vs. "thing b".
The GPUs will also depreciate during the time they're in OpenAI's possession, so again, if that investment doesn't pay off they aren't just getting GPUs back, they're getting "used GPUs" or "more used GPUs" back, not the original or full value of the originally leased asset. Naturally, the lease has to have those costs built in to be a good deal, but the lease has to fulfill to terms for that to happen.
In the end, leasing them means they carry the early risk of the lease not being fulfilled... but they should gain more than if they just straight up sold them.
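A toy present-value sketch of the same point; every number below is invented and nothing here reflects actual deal terms:

    # Why a lease looks like an investment from the vendor's side.
    # All numbers are made up for illustration.

    gpu_sale_price = 1.0            # normalize: selling outright today = 1.0
    discount_rate = 0.08            # vendor's cost of capital (assumption)
    lease_years = 5
    annual_lease_payment = 0.26     # assumed payment sized to beat an outright sale
    residual_value = 0.05           # what a heavily used GPU might fetch afterwards (assumption)

    def present_value(cashflows, rate):
        """Discount a list of (year, amount) cashflows back to today."""
        return sum(amount / (1 + rate) ** year for year, amount in cashflows)

    # If the lessee pays for the full term, the lessor earns more than a straight sale...
    full_term = [(year, annual_lease_payment) for year in range(1, lease_years + 1)]
    full_term.append((lease_years, residual_value))
    pv_full = present_value(full_term, discount_rate)

    # ...but if the lessee fails after year 2, the lessor gets back used hardware
    # worth far less than the remaining payments.
    default_after = 2
    defaulted = [(year, annual_lease_payment) for year in range(1, default_after + 1)]
    defaulted.append((default_after, residual_value))
    pv_default = present_value(defaulted, discount_rate)

    print(f"sell outright today:          {gpu_sale_price:.2f}")
    print(f"lease, paid in full (PV):     {pv_full:.2f}")
    print(f"lease, default in year {default_after} (PV): {pv_default:.2f}")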
In the world in which OpenAI goes bankrupt, their used GPUs would be worth perhaps 10% of their value today.
Speculation by a non-accountant:
At the end Nvidia retains ownership of what are probably very low value assets.
Contrast that with car leases: there is a robust market for used cars.
Nvidia is in effect financing the GPUs by not requiring the full payment up front.
Do the lease payments add up to the total cost?
The GPUs do/might lose value over time, but perhaps Nvidia is betting that they won't? Going by recent years, their value went up in some cases.
EDIT: Interesting note
> CPUs historically have 5-10 years of useful life, while GPUs in AI datacenters last 1-3 years in practice, despite 6-year accounting assumptions.[30,31] Evidence from Google architects shows GPUs at 60-70% utilization survive 1-2 years, with 3 years maximum.[31] Meta's Llama 3 training experienced 9% annual GPU failure rates, suggesting 27% failure over 3 years.[31]
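A quick sanity check of the quoted figures, and of why the accounting assumption matters; the fleet cost is a made-up round number, while the 9% failure rate and the 3- vs 6-year lives come from the quote:

    # Cumulative failure implied by a 9% annual failure rate over 3 years.
    annual_failure_rate = 0.09
    surviving_after_3y = (1 - annual_failure_rate) ** 3
    print(f"cumulative failure over 3 years: {1 - surviving_after_3y:.1%}")   # ~24.6%, roughly the quoted 27%

    # What the useful-life assumption does to annual depreciation expense.
    fleet_cost = 10e9          # hypothetical $10B of GPUs on the books
    for useful_life_years in (6, 3):
        annual_depreciation = fleet_cost / useful_life_years    # straight-line, no salvage value
        print(f"{useful_life_years}-year schedule: ${annual_depreciation/1e9:.2f}B depreciation per year")
    # Halving the assumed life roughly doubles the annual expense, which is why
    # the depreciation assumption moves reported earnings so much.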
Where can you track GPU utilization rates? Assuming private data but curious if not.
Are these companies developing InfiniBand-class interconnects to pair with their custom chips? Without equivalent fabric, they can’t replace NVIDIA GPUs for large-scale training.
A recent Huang podcast went into this, making the point that custom chips won't be competitive with Nvidia's, as they are now making specialised chips instead of just 'GPUs'.
https://open.spotify.com/episode/2ieRvuJxrpTh2V626siZYQ?si=2...
Thank you for the pointer!
I wonder if the buying customers of Nvidia are going to find themselves left with the overcapacity. Certainly people are waking up to LLM challenges, and as budgets focus more on useful applications and smaller language models, how much of that demand will remain?
Also, depreciation schedules beyond the useful life of an asset may not be fraud, but I'd call it a bit too creative for my liking.
Time will tell.
Distinguishing that in-hindsight Lucent was committing accounting fraud and present firms aren't is a load-bearing assumption here; for all we know the big players in the AI bubble just haven't been outed yet.
Good analysis.
But the answer is, "kinda"? There are similarities, but the AI buildout is worse in some ways (more concentration, GPU backed debt) and better in others (capacity is being used, vendors actually have cash flow).
The conclusion:
> Unlike the telecom bubble, where demand was speculative & customers burned cash, this merry-go-round has paying riders.
Seems a little short sighted to me. IMO, there is a definite echo, but we are in the mid-late stage, not the end stage.
It's simply not fair to compare Lucent at the end of a bubble with Nvidia in the middle, and that is what the author did.
If you haven't listened to the referenced interview between Thompson and Kedrosky, I'd do so: https://www.theringer.com/podcasts/plain-english-with-derek-...
This article is pretty confusing, doesn't really have a thesis, just listing some stats. Maybe that's the intent.
I had the same response.
Certainly it suggests that “this time is different” without saying it in a quotable fashion.
The metrics it provides seem useful. What are the metrics it is missing?
The thing that this doesn't get is that $10bn a year is basically 70%+ of OpenAI's yearly R&D and inference budget... so this Nvidia deal is actually great (for OpenAI) in terms of protecting its cashflow.
The telecom bubble built infrastructure for something that didn't exist; they built anticipating a need for high bandwidth that didn't arrive in time.
The GPU bubble is different. Nvidia is actually selling GPUs in spades. So it's not comparable to the telecom bubble. Now the question remains: how many more GPUs can they sell? That depends on the kind of services that are built and how their adoption takes off. So is it a bubble or just frothy at the top? There is definitely going to be a pull back and some adjustment, but I cannot say how bad it will be.
The smartest finance folks I know say that this “irrational exuberance” works until it doesn’t. Meaning nobody really thinks it’s sustainable, but companies and VCs chasing the AI hype bubble have backed themselves into a corner where the only way to stop the bubble from bursting is to keep inflating the bubble.
The fate of the bubble will be decided by Wall Street not tech folks in the valley. Wall Street is already positioning itself for the burst and there’s lots of finance types ready to call party over and trigger the chaos that lets them make bank on the bubble’s implosion.
These finance types (family offices, small secret investment funds) eat clueless VCs throwing cash on the fire for lunch… and they’re salivating at what’s ahead. It’s a “Big Short” once in 20-30 years type opportunity.
>These finance types (family offices, small secret investment funds) eat clueless VCs throwing cash on the fire for lunch… and they’re salivating at what’s ahead. It’s a “Big Short” once in 20-30 years type opportunity.
No - it's very hard to successfully bet against anything in finance, and VCs and non-public investments are particularly hard. When you go long, you simply buy something and hold it until you decide to sell. If you short, you have to worry about borrowing shares, paying short fees, and having unlimited risk.
How would you even begin to bet against OpenAI specifically? The closest proxy I can think of is shorting NVDA.
There's also nobody whose job it is to make big one-time shorts. Like you said, it's a once in 20-30 years opportunity, so no one builds a hedge fund dedicated to sitting around for decades waiting for that opportunity. There will certainly be exceptions, and maybe they'll make a Big Short 2 about the scrappy underdogs who saw the opening and timed it perfectly. But the vast majority of Wall Street desperately wants the party to continue.
Oracle's announcement of a $300B purchase commitment from OpenAI was followed soon after by a $100B investment into OpenAI. The pace and size of these announcements is reaching a fever pitch, which seems like an attempt to keep the music playing.
> have backed themselves into a corner where the only way to stop the bubble from bursting is to keep inflating the bubble.
They are not in any corner. They rightly believe that they won't be allowed to fail. There's zero cost to inflating the bubble. If they tank a loss, it's not their money and they'll go on to somewhere else. If they get lucky (maybe skillful?) they get out of the bubble before anyone else, but get to ride it all the way to the top.
The only way they lose is if they sit by and do nothing. The upside is huge, and the downside is non-existent.
Great points. I am bullish on AI but also wary of accounting practices. Tom says Nvidia's financials are different from Lucent's but that doesn't mean we shouldn't be wary.
The Economist has a great discussion on depreciation assumptions having a huge impact on how the finances of the cloud vendors are perceived[1].
Revenue recognition and expectations around Oracle could also be what bursts the bubble. Coreweave or Oracle could be the weak point, even if Nvidia is not.
[1] https://www.economist.com/business/2025/09/18/the-4trn-accou...
> Evidence from Google architects shows GPUs at 60-70% utilization survive 1-2 years, with 3 years maximum.
Really?! I'm not used to chips having such a short lifespan.
This sounds like major FUD unless the data is public.
Feels like a teensy tiny conflict of interest, coming from GCP.
Additionally - GPUs have multiple components. Which parts are at 60-70% load, the SM unit or the memory controller? If you're throttling the GPU but not the memory, it makes perfect sense why you're burning the damn card out...
Looking at the last chapter of the essay, there was a lot of illegal activity by Lucent in the runup to the collapse. Today, we won't know the list of shady practices until the bubble bursts. I doubt Tom could legally speculate; he'd likely be sued into oblivion if he even hinted at malfeasance by these trillion dollar companies.
Glad to see Tom's blog on HN - as usual a great write up. A number of us have been chatting about this for several months now, and the take is fairly sober.
Meta commentary, but I've grown weary of how commentary by actual domain experts in our industry is underrepresented and underdiscussed on HN in favor of emotionally charged takes.
> actual domain experts
Calling a VC a "domain expert" is like calling an alcoholic a "libation engineer." VC blogs are, in the best case, mildly informative, and in the worst, borderline fraudulent (the Sequoia SBF piece being a recent example, but there are hundreds).
The incentives are, even in a true "domain expert" case (think: doctors, engineers, economists), often opaque. But when it comes to VCs, this gets ratcheted up by an order of magnitude.
Martin Casado is a counter-example. His writings on technology starting with his phd thesis are very informative. [0] He’s the real thing as are many others.
Tom has had a fairly solid track record at Redpoint and now Theory in Data, Enterprise SaaS, and AI/ML. And it's not like we see many posts by engineers, doctors, or economists on HN either - most posts are listicles about the "culture" of technology, an increased amount of political articles growing increasingly tenuously related to the tech industry, and a portion of actually interesting technical content.
Some great insights with some less interesting in there. I didn’t know about the SPVs, that’s sketchy and now I wanna know how much of that is going on. The MIT study that gets pulled out for every critical discussion of AI was an eye roll for me. But very solid analysis of the quants.
How much of a threat custom silicon is to Nvidia remains an open question to me. I kinda think, by now, we can say they're similar but different enough to coexist in the competitive compute landscape?
> How much of a threat custom silicon is to Nvidia remains an open question to me
Nvidia has also begun trying to enter the custom silicon sector as well, but it's still largely dominated by Broadcom, Marvell, and Renesas.
With all the major players like Amzn, Msft and Alphabet going for their own custom chips and restrictions on selling to China it will be interesting to see how Nvidia does.
I personally would prefer China to get to parity on node size and get competitive with nvidia. As that is the only way I see the world not being taken over by the tech oligarchy.
The custom chips don’t seem to be gaining traction at scale. On paper the specs look good but the ecosystem isn’t there. The bubble popping and flooding the market with CUDA GPUs means it will make even less sense to switch.
I think we are at the PS3/Xbox 360 phase of AI.
By that I mean, those were the last consoles where performance improvements delivered truly new experiences, where the hardware mattered.
Today, any game you make for a modern system is a game you could have made for the PS3/Xbox 360 or perhaps something slightly more powerful.
Certainly there have been experiences that use new capabilities that you can’t literally put on those consoles, but they aren’t really “more” in the same way that a PS2 offered “more” than the PlayStation.
I think in that sense, there will be some kind of bubble. All the companies that thought that AI would eventually get good enough to suit their use case will eventually be disappointed and quit their investment. The use cases where AI makes sense will stick around.
It’s kind of like how we used to have pipe dreams of certain kinds of gameplay experiences that never materialized. With our new hardware power we thought that maybe we could someday play games with endless universes of rich content. But now that we are there, we see games like Starfield prove that dream to be something of a farce.
> By that I mean, those were the last consoles where performance improvements delivered truly new experiences, where the hardware mattered.
The PS3 is the last console to have actual specialized hardware. After the PS3, everything is just regular ol' CPU and regular ol' GPU running in a custom form factor (and a stripped-down OS on top of it); before then, with the exception of the Xbox, everything had customized coprocessors that are different from regular consumer GPUs.
> By that I mean, those were the last consoles where performance improvements delivered truly new experiences, where the hardware mattered.
I hope that's where we are, because that means my experience will still be valuable and vibe coding remains limited to "only" tickets that take a human about half a day, or a day if you're lucky.
Given the cost needed for improvements, it's certainly not implausible…
…but it's also not a sure thing.
I tried "Cursor" for the first time last week, and just like I've been experiencing every few months since InstructGPT was demonstrated, it blew my mind.
My game metaphor is 3D graphics in the 90s: every new release feels amazing*, such a huge improvement over the previous release, but behind the hype and awe there was enough missing for us to keep that cycle going for a dozen rounds.
* we used to call stuff like this "photorealistic": https://www.reddit.com/r/gaming/comments/ktyr1/unreal_yes_th...
We did get more: the return of VR wouldn't have been possible without drastically improved hardware.
But the way it stayed niche shows that it's not just about new gameplay experiences.
Compare with the success of Wii Sports and Wii Fit, which I would guess managed it better, though through a different kind of hardware than you are thinking about?
And I kind of expect the next Nintendo console to have a popular AR glasses option, which also would only have been made possible thanks to improving hardware (of both kinds).
That’s exactly what I mean, too. We obviously will get much better AI. It just seems like the value that most people are getting out of it is already captured, just like how technically impressive stuff like VR is very niche.
I could be very wrong, obviously.
"This time it's different"
This reminds me of SGI at the peak of the dot-com bubble.
SGI (Silicon Graphics) made the 3D hardware that many companies relied on for their own businesses, in the days before Windows NT and Nvidia came of age.
Alias|Wavefront and Discreet were two companies whose product cycles were very tied to the SGI product cycles, with SGI having some ownership, whether wholly owned or spun out (as SGI collapsed). I can't find the reporting from the time, but it seemed to me that the SGI share price was propped up by product launches from the likes of Alias|Wavefront or Discreet. Equally, the 3D software houses seemed to have share prices propped up by SGI product launches.
There was also the small matter of insider trading. If you knew the latest SGI boxes were lemons then you could place your bets of the 3D software houses accordingly.
Eventually Autodesk, Computer Associates and others owned all the software, or, at least, the user bases. Once upon a time these companies were on the stock market and worth billions, but then they became just another bullet point in the Autodesk footer.
My prediction is that a lot of AI is like that, a classic bubble, and, when the show moves on, all of these AI products will get shoehorned into the three companies that will survive, with competition law meaning that it will be three rather than two eventual winners.
Equally, much like what happened with SGI, Nvidia will eventually come a cropper when the valuations built on today's hype and hubris fail to deliver.
One of the things about the market before AI was that capital had limited growth opportunities. Tech, which was basically a universe of scaled-out CRUD apps, was where capital kept going back to.
AI is a lot more useful than hyper-scaled-up CRUD apps. Comparing this to the past is really overfitting imho.
The only argument against accumulating GPUs is that they get old and stop working. Not that it sucks, not that it’s not worth it. As in, the argument against it is actually in the spirit of “I wish we could keep the thing longer”. Does that sound like there’s no demand for this thing?
The AI thesis requires getting on board with what Jensen has been saying:
1) We have a new way to do things
2) The old ways have been utterly outclassed
3) If a device has any semblance of compute power, it will need to be enhanced, updated, or wholesale replaced with an AI variant.
There is no middle ground to this thesis. There is no “and we’ll use AI here and here, but not here, therefore we predictably know what is to come”.
Get used to the unreal. Your web apps could truly one day be generated frame by frame by a video model. Really. The amount of compute we’ll need will be staggering.
> Your web apps could truly one day be generated frame by frame by a video model. Really. The amount of compute we’ll need will be staggering.
We've technically been able to play board games by entering our moves into our telephones, sending them to a CPU to be combined, then printing out a new board on paper to conform to the new board state. We do not do this because it would be stupid. We can not depend on people starting to do this saving the paper, printer, and ink industries. Some things are not done because they are worthless.
TLDR: Lucent was committing various forms of accounting fraud, had an unhealthy cash flow position, and had its primary customers on economically dangerous ground. Nvidia meanwhile appears to be above board, has strong cash flow, and has extremely strong dominant customers (eg customers that could reduce spending but can survive a downturn). Therefore there's no clear takeaway: similarities but also differences. Risk and a lot of debt, as well as hyperscalers insulating themselves from some of that risk... but at the same time a lot more cash to burn.