I could see Chinese RAM manufacturers like CXMT filling the void left in the market for consumer-grade RAM modules. I appreciate that they face challenges (like lack of access to cutting-edge EUV machines), but the RAM just needs to be fast enough and affordable enough for the average user for these companies to make significant inroads into the market that Micron, Samsung and SK Hynix are abandoning to chase the AI server market.
Their scale is simply too small to affect the market outside China; the majority of their chips will be eaten up by HBM3 production with a still-unknown yield rate.
They are forbidden from buying foreign equipment beyond their current process node, which is already obsolete; their die size is about 40% bigger than Samsung's, not to mention lithography, where the big three are using EUV while they are stuck with lobotomized DUV.
They can start making some decent money now, but vastly expanding capacity as-is means enormous losses if the cycle turns downward a few years later; that's how all the previous makers went bankrupt.
They can squeeze out a bit more performance if they are ready to go beyond their current node using only domestic equipment and be blacklisted by the US government.
But the cap is there: unless they can make a working EUV machine in 5 years, they are doomed to be a minor player, if the current cycle even lasts that long.
They will grow exponentially and catch the western market unawares in 10-15 years with a sudden flood of cheap, effective chips. Just like everything else China makes. Electric vehicles for example.
Sure, if they've got production grade EUV, but right now they don't even have production grade DUV.
I'm also sure they can go as far as 5nm like SMIC if they really wanted to, since it's strategic for China, but the cost would only be justified if the current cycle lasts long enough.
I was corrected elsewhere when I thought RAM was more expensive 10 years ago. RAM was actually cheaper 10 years ago, when it was DDR3/DDR4 too. If any company can replicate the 10 year old SOTA, they can bring prices down.
This is what I expect to happen. 2016's RAM was good enough for consumers then and probably still is for a huge class of consumers now. I'd rather have 32GB of DDR3 than 8GB of DDR5.
DRAM rarely breaks; I have bought cottage-industry recycled DDR3 with no problems whatsoever.
The problem, however, is that IO controller support has been dropped; many new CPUs don't even support DDR4 anymore, especially mobile ones.
You can get like terabytes of DDR3 used. No one wants that shit. Too slow. Power hog.
This is where China's crazy solar advantage affects real day-to-day outcomes. When your electricity costs come down to 6-8 cents per kWh, you can run older nodes that slurp more electricity. And they aren't even done lowering the price. I've thought about this recently: if the dream of meterless electricity came to fruition, those terabytes of DDR3 could essentially be run until they literally burned out and then be recycled back into their core components. The sun provides more power than the entirety of society could possibly use right now, so it's a shame that the RAM is being tossed instead of used.
Well yes, but it isn't that cheap for how old it is.
Some dude literally gave away a couple of terabytes of it on the homelab subreddit the other day.
China has a luxury of being able to not really care about the cost when it comes to what they view as a strategic advantage.
This option is available to any sovereign country.
In fact, it is what most European countries used to do for their strategic industries.
The problem is that the AI bubble bursting seems far more likely to happen first than the 10-15-year scenario you describe.
Plus, the RAM manufacturing cycle moves like this all the time.
Atrioc does a really good job explaining these cycles[0]
But the point is that AI demand peaked when supply was at its lowest, which is why we are caught up in this messed-up timeline. Something like this has happened in the past too, and this industry is notorious for it (again, watch the video, definitely worth it imo).
But it still feels like we are stuck in this for at least a year or two. Micron is, iirc, suggesting hundreds of billions of $ in factory investment right now and saying that the fastest fab might open in early 2027.
Some estimates say 2028, idk. I do feel like the chances of the AI bubble popping around that time are significant too.
But still, for at least 1-2 years, we, either as consumers or as small VPS providers (yes, the people who create VPS providers are the same kind of people as you and me), are absolutely f*ed, and that's what the question is about imo.
[0] The AI Tax Has Started: https://www.youtube.com/watch?v=nipeaKC3dWs
The thing is, even if the market bursts, prices are already inflated. RAM manufacturers know that everything 5xed, and they aren't likely to rush out and drop price levels back to pre-expansion. Once the AI market bursts, you can expect slow and methodical decreases in price (if any).
And that will ultimately buy China a lot of time to shove their RAM into the market, cutting the incumbent RAM manufacturers out of most non-US markets.
I think the major memory manufacturers are simply banking on their ability to flood the market if worst comes to worst. That, or I could see some standards trickery around DDR6 (or some new BS standard). It'd not shock me if they coordinated with AMD/Intel to keep the standard secret as long as possible simply to give themselves a lead in production.
Graphics card prices normalized quite quickly after the crypto boom. Before going nuts for AI training, of course.
They're already producing 10nm DRAM with their current nodes, and they're working on producing 3d DRAM which may make node size somewhat moot.
Not 10nm, they are producing with 18.5nm and 17nm now, which technically already is in breach of US restrictions, the US government can blacklist them if they feel like it.
3D DRAM is no magic, it will only give them maybe 2 generations' breathing room if they got the required etching equipment figured out. But others will be doing 3D DRAM with EUV by then.
Are you sure?
>CXMT has begun mass production of DRAM using a D1z (sub-16nm) process.
https://www.globalsmt.net/advanced-packaging/decoding-cxmt-d...
They call it "10nm class" later in the article.
It's hard to find much concrete info tbh.
If the US government bans the import of RAM, it guarantees the immediate collapse of the US.
I don't think so, Lutnick just made sure the US wouldn't need to import DRAM with his newest threats.
Wow. They are 100% tariffing all RAM. RAM is not made in the USA.
This destroys all the AI companies, and more.
> RAM is not made in the USA
Micron Fab 6 currently makes about 2% of global memory production in Virginia.
The plant is currently being expanded and upgraded to Micron's 1a node.
I don't understand why we don't just cut to the chase and elect the Joker in 2028. The same people will vote for him, and he won't be any worse for the country.
If only a single look could capture the appropriate reaction to that suggestion .. https://imageio.forbes.com/specials-images/imageserve/65545b...
I think that happened in 2024.
> They can squeeze out a bit more performance if they are ready to go beyond their current node using only domestic equipment and be blacklisted by the US government.
Which suits the rest of the world just fine. More for the rest of us, and if the single-digit-percent portion of their market that the US represents wants to lock itself out, no skin off anyone else's nose.
An obsolete process node hardly matters when the rest of the market is bottlenecked on production capacity; small overall scale still might. Expanding capacity may or may not make sense; it depends on your prediction of the way the market will go.
China also needs RAM for AIs, especially since they have plenty of electrical power and building speed to pump out data-centers.
Turns out their wind "overcapacity" maybe isn't. Maybe they are trading chip efficiency for raw power.
Something I’ve been sort of wondering about—LLM training seems like it ought to be the most dispatchable possible workload (easy to pause the thing when you don’t have enough wind power, say). But, when I’ve brought this up before people have pointed out that, basically, top-tier GPU time is just so valuable that they always want to be training full speed ahead.
But, hypothetically if they had a ton of previous gen GPUs (so, less efficient) and a ton of intermittent energy (from solar or wind) maybe it could be a good tradeoff to run them intermittently?
Ultimately a workload that can profitably consume "free" watts (and therefore flops) from renewable overprovisioning would be good for society I guess.
First: Almost anything can be profitable if you have free inputs.
Second: Even solar and wind are not really "free" as the capital costs still depreciate over the lifetime of the plant. You might be getting the power for near-zero or even negative cost for a short while, but the power cost advantage will very quickly be competed away since it's so easy to spend a lot of energy. Even remelting recycled metals would need much less capital investment than even a previous-gen datacentre.
That leaves the GPUs. Even previous gen GPUs will still cost money if you want to buy them at scale, and those too depreciate over time even if you don't use them. So to get the maximum value out of them, you'd want to run them as much as possible, but that contradicts the business idea of utilizing low cost energy from intermittent sources.
Long story short: it might work in very specific circumstances if you can make the numbers work. But the odds are heavily stacked against you, because typically energy costs are relatively minor compared to capital costs, especially if you intend to run only a small fraction of the time when electricity is cheap. Do your own math for your own situation, of course. If you live in Iceland things might be completely different.
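To make that concrete, here's a rough back-of-the-envelope sketch; every number in it (GPU price, power draw, lifetime, electricity rates) is an illustrative assumption of mine, not a real quote:

    # Effective cost per GPU-hour when the card only runs during "cheap power"
    # windows. Capital depreciation is paid whether or not the GPU runs;
    # energy is only paid while it runs. All figures are made-up assumptions.
    def cost_per_gpu_hour(capex_usd, lifetime_years, duty_cycle,
                          power_kw, energy_usd_per_kwh):
        run_hours = 365 * 24 * lifetime_years * duty_cycle
        energy = run_hours * power_kw * energy_usd_per_kwh
        return (capex_usd + energy) / run_hours

    # Hypothetical previous-gen GPU: $5,000, 0.5 kW, 4-year life.
    always_on_grid = cost_per_gpu_hour(5000, 4, 1.0, 0.5, 0.10)   # ~0.19 $/GPU-hr
    for duty in (1.0, 0.5, 0.25):
        cheap = cost_per_gpu_hour(5000, 4, duty, 0.5, 0.02)       # near-free surplus power
        print(f"duty={duty:.2f}: {cheap:.2f} $/GPU-hr vs always-on grid {always_on_grid:.2f}")

With numbers like these, the intermittent "free power" case already loses to always-on grid power once the duty cycle drops below roughly 80%, which is exactly the point about capital costs dominating.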
They are amazing at making batteries as well. How does adding batteries to the mix change the calculation?
> top-tier GPU time is just so valuable that they always want to be training full speed ahead.
I don't think this makes much sense because the "waste" of hardware infrastructure by going from 99.999% duty cycle to 99% is still only ~1%. It's linear in the fraction of forgone capacity, while the fraction of power costs you save from simply shaving off the costliest peaks and shifting that demand to the lows is superlinear.
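A toy illustration of that asymmetry (the price curve is entirely synthetic, chosen only to have cheap baseline hours plus rare expensive peaks):

    # Skipping only the costliest 1% of hours forgoes 1% of compute but, with a
    # skewed price curve, saves several times that share of the power bill.
    import random

    random.seed(0)
    prices = [0.05 + (0.50 if random.random() < 0.03 else 0.0) + random.random() * 0.02
              for _ in range(8760)]                               # synthetic hourly prices for one year

    skipped = sorted(prices, reverse=True)[:len(prices) // 100]   # costliest 1% of hours
    print(f"capacity forgone: {len(skipped) / len(prices):.1%}")
    print(f"power cost saved: {sum(skipped) / sum(prices):.1%}")

The exact number depends on the assumed price distribution, but the qualitative asymmetry (linear capacity loss, superlinear cost savings) is the parent's point.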
I think as such intermittent power comes on the grid in the coming decades, people will find creative uses for it.
They will first fill the local demand for all their electronics manufacturing. Then their massive computer infra and AI. And if any is left, it will be bundled to local PC exporters like Lenovo.
It’s fine if it’s just filling Chinese manufacturing. Low-cost VPS hosts are going to be using brands like Supermicro anyways. It still gets exported.
Except for RAM from YMTC, which the USA gave a near-death sentence to by placing it on the Dept. of Commerce “Entity List” so no USA-associated business can do business with YMTC now.
This is true.
We use ASRock Rack servers, mainly because the only option for our industry are OEMs like Supermicro and ASRock. Dell and HPE are non-starters, except for our "storage" offering.
Back in 2019, HPE was a good midrange option. Then came ASRock Rack, who obliterated HPE with the X470D4U, relegating HPE to high-end enterprise servers, but also made Ryzen-based VPS hosts possible, including yours truly, BuyVM, et al.
Doesn't YMTC focus on NAND (i.e. flash storage) rather than DRAM? Regardless, point stands.
>market that Micron, Samsung and SK Hynix are abandoning to chase the AI server market
These three have collectively committed what, approaching $50B towards construction of new facilities and fabs in response to the demand?
The memory industry has traditionally projected demand several years out and proactively scheduled construction and manufacturing to be able to meet the projected demand. The last time they did that, in the crypto boom, the boom quickly turned into a bust and the memory makers got burned with a bad case of oversupply for years. With that context, can you blame them for wanting to go a bit more slowly with this boom?
Sure, the new fabs won't be up and at volume production until late 2027 / early 2028, but committing tens of billions of dollars to new production facilities, including to facilities dedicated to DRAM rather than NAND or HBM, is hardly 'abandoning'. They're pivoting to higher profit margin segments - rational behavior for a for-profit corporation - but thanks to the invisible hand of the (not quite as free as it should be) market, this is, partially, a self-solving issue, as DRAM margins soar while HBM margins compress, and we're already seeing industry response to that dynamic, too: https://www.guru3d.com/story/samsung-reallocates-of-hbm3-cap...
That's probably what is going to happen. It's a strategic opportunity for the Chinese government: there's big market demand that nobody else wants to serve, and it can fuel their domestic production capabilities.
It would be a strategic opportunity for Intel, if they weren't run by imbeciles. DDR4 doesn't require the latest and greatest nodes. It's boring old technology. Even DDR5 is pretty boring. Intel could clean up fabbing DRAM (like they used to). But alas no. They're part of the semiconductor cartel and uninterested in the supply of DRAM increasing. Prices would drop and the fabs would only make stupid margins instead of disgusting margins.
Intel would be in the very same bind as every other DRAM producer who's trying to expand production today, only far worse because they have practically no experience fabbing DRAM compared to logic. (You can fab eDRAM on logic processes, but you'd only do that out of sheer desperation since the cost per bit is much higher.)
What they could do very easily in this market is bring back frickin Optane and hook it up to a modern PCIe bus with modern PCIe performance.
I am very hopeful about CXMT. But then it could take a while for them to ramp up production. Maybe by then, the AI bubble will have burst.
One problem with US sanctions is that they can hurt US companies too, as in the case of cutting-edge EUV and CXMT. This is a case where China is actually a hero and not a villain.
We can certainly do with less plastic junk and fast fashion. But on the high end it's hard to argue that cheaper Chinese products are ever a bad thing.
If corporations in western (aligned) countries stopped feeding sovereign wealth funds and private equity with profits and actually invested something maybe they could compete with China more closely, even with whatever shenanigans the CPC get up to with state support.
I truly wish Chinese RAM manufacturers luck in filling this market. Seriously, the importance of RAM and its downstream effects can hardly be overstated imo; it really just starts impacting everything.
While not a small host, I thought I would mention what I observed with OVH's VPS offering. I was considering their line of VPSes recently because of how generous the core/RAM quantities were for the price. For example, the smallest offering is 4 cores / 8GB at just over $4 a month.
What I found is that it is cheap because the cores, and presumably the RAM, are old. Like, 2013-era Xeon E3-1275 v3 old. But that's fine! Old hardware like this uses old RAM that is less affected by the current shortage. It's good enough for my needs.
I previously ran another VPS host who did the same exact thing (before OVH did it?).
Unlike Fourplex.net which uses modern ASRock Ryzen 9000 servers, Qeru.net used older HPE DL360 Gen9 servers.
I gave 3GB of RAM for $3-4/mo then. But these servers weren't very fast. I ended up selling the business, and am happy I did.
Question: how can a person provide VPS hosting services without being a reseller? Did you own the hardware? I am super interested. Hey, I would even pay for a series of articles about this!
I used to do site/VPS hosting (millions of sites); we just bought slightly older 1U rack servers off eBay (and later our rack neighbour in the hosting room, who always had to buy the latest for his clients, allowed us to buy off him) and kept filling local racks. We paid for a large, bad-quality bandwidth pipe for the big bandwidth eaters and good quality for the smaller-traffic properties; we served some percentage of traffic per host through the good pipe and after that through the bad one. The porn/warez or whatever people didn't care, and the business or hobby folk were happy as well.
In the end it got to be too much work with that many servers; something is always broken. We had some virtualisation and failover stuff, but you have to go and repair it. We were basically two full-time guys plus some freelance help, and that meant getting into a car and driving for an hour every other day, storm or ice on the roads, Christmas or birthday, doesn't matter. It made good money and we sold nicely, but servers are loud, and we were there so often for hours that even with ear protection I think it messed something up. Also, crawling in small spaces for wiring etc. isn't great for your body. I did learn a lot about Linux and the popular packages; we had our own patches and versions of most of them, to shield against 0-days and cut processing waste so we could put more on one machine without degrading quality.
I would not do it with reselling; you have no control. When the police called us about illegal/child materials and so on, we were in charge of removing/blocking it; if you resell, they will likely first just shut down everything you have and then ask you for an explanation. And after a few times they delete you and that's it. Or when something goes wrong technically, you are left holding the bag anyway, as many will just blame you (and then after 'some time' your service 'suddenly' is back). You can hire much more expensive providers with much better support, but in our experience that does not help much when there is a LOT of abuse (and with millions of sites, you have a lot of abuse that you cannot check).
It's fun though; just too much work.
Start reading /r/homelab if you don't already. Old enterprise hardware can be had for pennies.
You obviously won't host the service at home, but it's a good intro to the hardware side of things.
This is something I'd love to know, too. I like servers, infrastructure, and terminals, so doing something like this has been in the back of my head for a while now
OVH doesn't list the Eco range in their site nav now; their servers start at $90/mo unless you search for "OVH Eco" and go directly to that page.
This article looks like actual gibberish to me?
It goes on about DSL and dial-up for some reason?
And yes VPS providers are affected by ram shortage. Turns out things that need ram are affected by ram shortages. 5 stars for the insight
The Bell / telephony analogy is unrelated and forced. Feels like the author had an idea and asked an LLM to force the analogy.
I agree. I think the author had some thoughts in their head that they forgot to write down, because it feels like at least a few paragraphs making the connection are missing. Happens to me all the time when I'm writing something, too, honestly.
Most low-end providers will just keep using old hardware for longer.
IPv4 shortages didn’t kill it, and I don’t think this will either.
For providers like us, we have to lease IPv4. We came long after IPv4 was already depleted. IPv4 prices did go down. Despite that, the $15/year 128MB BuyVM plan is long-gone.
But for a new provider like us, we'd have to spend more than an established player like BuyVM or RackNerd who bought most of their servers pre-AI-boom.
Have you tried doing ipv6-only plans?
Vultr has one that's $2.5/month v6 only. Probably good if you just need something tiny to run some automation.
What are the issues faced by v6-only hosts and are there countries where it is a non-issue?
GitHub is the main problem currently. Some software like composer does not work due to this.
https://github.com/orgs/community/discussions/10539
As for countries, if you mean connecting to the VPS, a lot of countries have good IPv6 connectivity now. For me, both ISPs I use have native v6. This will differ from person to person.
or just offer v4 HTTPS LB bundled. Never understood why more places didn't do this.
Responsibility and controls. If the host/DC assigns dedicated addresses, the contract can essentially be "the customer assumes all liability for the traffic." With NAT/LB you need, at the very least, quite robust, evidence-grade monitoring mechanisms tagging all traffic and keeping historical data. In practice, some form of active abuse prevention is required; otherwise a huge chunk of your address space is going to linger in blacklist limbo.
That is, if being unreachable below "presentation layer" is acceptable in the first place, but I guess the question kind of presupposes this.
Exactly this. The old hardware from a year ago is fine as the typical use-case for a VPS didn't change.
I don't know if you can consider Netcup "small", but their RS 4000 G12 ("root server", basically a VPS with dedicated/guaranteed resources) costs ~€31 for a monthly contract for any location in Europe without VAT included.
It's 12 dedicated cores of a modern EPYC CPU, 32GB RAM, 1TB NVMe.
I got that offer during their Black Friday sale and pay €25/month (price before VAT), plus the offer I got has a 2TB NVMe instead of the 1TB one.
So with 8 customers per box (the AMD EPYC 9645 CPU has 96 cores), assuming single-CPU boxes, that would need 256 GiB of RAM.
CPU launch price 11000 USD. RAM will likely be another 10000 USD
20000 / 8 customers / 40 USD/mo = 62 months just to recoup CPU and RAM let alone other components.
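As a reusable one-liner (same rough figures as above; none of these are actual provider costs):

    # Months to recoup hardware: cost / (customers per box * monthly price).
    # 20000 USD is the rough CPU+RAM guess from the comment above.
    def payback_months(hw_cost_usd, customers_per_box, price_per_month_usd):
        return hw_cost_usd / (customers_per_box * price_per_month_usd)

    print(payback_months(20000, 8, 40))   # ~62.5 months, before other components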
Weird, whenever I napkin math offers of any HW for renting, I get that I could buy it myself in 1-2 years of rent. Sometimes faster.
Do they not intend to recoup the costs of HW? :)
Oversubscribe your hardware and you can do better. Most of your customers are idle most of the time.
Yea some cloud providers are notorious for it.
Netcup does oversubscribe/overshare, but not that much. I have a server there and I don't really observe much of it, although I haven't really looked into ways to measure that steal factor. There are definitely scripts to detect it; maybe I will run one some day, but oh well, the laziness.
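For what it's worth, the kind of steal-detection script alluded to can be as simple as sampling /proc/stat twice; this is a generic sketch (Linux-only, not any particular published tool):

    # Report what fraction of CPU time the hypervisor "stole" from this guest
    # over a short interval. The 8th value on the aggregate "cpu" line of
    # /proc/stat is steal time on modern kernels.
    import time

    def cpu_times():
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]

    def steal_fraction(interval_s=5):
        a = cpu_times()
        time.sleep(interval_s)
        b = cpu_times()
        delta = [y - x for x, y in zip(a, b)]
        total = sum(delta)
        steal = delta[7] if len(delta) > 7 else 0
        return steal / total if total else 0.0

    print(f"steal over 5s: {steal_fraction():.2%}")

Consistently seeing more than a few percent steal is usually the sign of the heavy oversubscription people complain about.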
The most oversubscribed VPS provider I know of is Contabo. Literally search anywhere there are people (Reddit, LowEndBox, anywhere) and they mention something like a ~20-30% figure (off the top of my head) for how much could be oversubscribed.
I am not exactly sure, but my point is that when I first saw them, I found them the cheapest option (with their Contabo auctions for something at real scale, like 96 GB of RAM), but they are out of my book even as a frugal guy, just because of how unstable they are and how consistently I have seen people struggle with Contabo. It's simply not recommended imho. Netcup is 10x more pleasant, judging by other people's reactions, and that sort of aligns with my experience with them too. People do mention some steal factor on Netcup, but overall it's really good.
Their goal should be to keep stealing close to zero while utilizing all their hardware. You want this too, because it keeps costs down. It does mean sometimes there will be stealing spikes, but most customers don't want to spend twice as much to avoid that, so it's a win/win.
Agreed. From what I know of the hosting business/VPS providers, they usually run about a 2x factor. And honestly it's mostly not even noticeable, as you mention, unless they have a crazy factor like what I hear Contabo has from the forums.
This is also the reason why VPSes are said not to run at 100% usage 24/7, as it can be noisy for other people.
Another interesting idea from Netcup is that they launched virtual dedicated cores (still a VPS at heart) where you actually can run at 100% usage 24/7. On sites like LowEndTalk I have heard that described as eerily similar to a bare-metal instance or to this new "cloud metal" instance terminology, but the difference to me is that virtual dedicated cores, at least on Netcup, aim to be cheaper than dedicated servers in many cases. I haven't compared them myself; that's just how I've heard them described on LowEndTalk.
> 20000 / 8 customers / 40 USD/mo = 62 months just to recoup CPU and RAM let alone other components.
The most important question is power: such a setup will easily blow through 500W. Granted, a datacenter may not pay 33 ct/kWh, but even then, a third of the monthly income would just go towards raw electricity, not including cooling.
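Spelling out that estimate with the same assumed figures (500 W draw, 33 ct/kWh, 8 customers at roughly 40 per month; currencies mixed loosely, as above):

    # Monthly electricity vs. monthly income for the hypothetical box above.
    kwh_per_month = 0.5 * 730            # 500 W running 24/7 -> ~365 kWh
    electricity = kwh_per_month * 0.33   # ~120 per month at 33 ct/kWh
    income = 8 * 40                      # 320 per month
    print(f"{electricity:.0f} of {income} per month -> {electricity / income:.0%} on power alone")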
Electricity costs are ridiculous.
I got a Netcup server (500 GB storage, 8 GB RAM, 4 vCPUs) and, being my frugal self, used a 5 euro voucher; I was also following their advent calendar, which offered 1 month free.
So of the first 3 months, I got 1 month free, another month covered by the 5 euro voucher, and essentially only paid for the last month.
I ended up paying $8 in total for the 3 months. After that I have to pay more, around $5-6 per month, but it's still a recurring deal, so I guess I might as well continue it (given I have set everything up).
Some of it is definitely loss-making on their side, to lock you in. I usually try to notice what gets me to buy a deal, because I usually don't buy many things, especially online, and it's usually some deal where I feel like I'm "winning" that gets me.
But to be honest, I did some napkin math, at least for my VPS (it's usually idle, but I have hosted many services on it and experimented with it; it's honestly pretty fun, and I somehow feel so much more freedom after getting a VPS, it's hard to explain, or maybe not, idk).
My napkin math suggests they lose quite a lot of money, at least on my deal in particular, so I came out better with this deal than if I were Netcup (since they probably lose money net on my deal if I actually use my allocated limits), at least for the 3 months. But Netcup is still one of the cheaper options, and I got a recurring discount too, so I would guess they only make marginal profits on the long-term deal as well.
So I guess overall, this was a really profitable deal for me.
Your deal's pretty cool too, honestly, at that larger scale. I have scraped LowEndTalk, analyzed it, and made many infrastructure decisions from it, and I will say that for anything somewhat larger scale, like the 32 GB you mention, I found Hetzner boxes to be the best/most-referred-to option, and OVH's Kimsufi line exists to be the cheapest dedicated boxes.
But yeah, regarding OVH, I recommend this website I found for the OVH KS line (or OVH in general): https://checkservers.ovh
Just checked it right now and I see a Xeon E-2274G (4c/8t, 4.4 GHz+), 32 GB DDR4 ECC 2666 MHz, 2 x 960 GB NVMe for around 37-38 euros.
Also, Netcup is definitely not small. I looked at the Anexia group and I wouldn't call them small. For what "small" means: when I say small, I mean LowEndBox single-shop providers and the like.
Also, regarding Netcup, its payment flow felt really bad. It was the first company that made me feel like I wanted to give it money but had to create a CCP account or something and literally go through 3-4 really long steps, each of which made me say "wtf" in my head and frustrated me, only to hand me a Stripe link in the end.
It had me so pissed that for 2-3 days the idea of simplifying that whole process for them and other such providers stuck in my head. The whole "I want to pay -> create account -> actually pay" flow had like 5-6, maybe 7 steps, and I seriously can't tell you how frustrated I was, but the deal was too lucrative lol.
The RAM prices could cause serious scaling issues for everyone right now, including small businesses that deal with healthcare for example. Speaking from personal experience.
RAM is still cheaper than 10 years ago. Every time you ask "are RAM prices killing X?" ask yourself if we had X 10 years ago.
I'm not sure this holds up when you look at the actual numbers. In 2016, you could get a DDR4-2400 16GB kit for $81 [1], and a comparable G.SKILL Ripjaws V kit was around $52 [2]. Today (January 2026), that same G.SKILL kit costs $105-146 [3], and most 16GB DDR4 kits are running $95-150 [4]. So RAM is actually more expensive now than it was 10 years ago, not cheaper.
The "did we have X 10 years ago" argument also misses a a lot of modern software requirements. Yes, we had mid-range laptops and budget gaming PCs in 2016, but the memory footprint of everyday applications has exploded since then. Electron-based apps (Slack, Teams, VS Code, Discord) routinely consume 500MB-1GB+ each [5], and it's entirely normal to have multiple running simultaneously. A typical "light" workload in 2026 easily uses 2-3x the RAM that a comparable workflow needed in 2016.
So we're getting squeezed from both sides: applications demand more memory whilst memory itself costs more. An 8GB laptop was perfectly serviceable for office work in 2016; in 2026, it's borderline unusable with a few Chrome tabs and Teams open. The same kit that cost $55 in June 2025 now costs $150 [2, again]. That's a 150%+ increase in six months, pushing capable systems out of reach for budget-conscious users precisely when software bloat is making RAM more essential than ever.
[1] https://gamersnexus.net/industry/3212-ram-price-investigatio... [2] https://gamersnexus.net/news/ram-wtf [3] https://www.newegg.com/p/pl?d=ddr4+ram+8gb [4] https://pcpartpicker.com/trends/price/memory/ [5] https://josephg.com/blog/electron-is-flash-for-the-desktop/
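For what it's worth, the percentage changes implied by those quoted prices (the prices themselves come from the sources above and aren't re-verified here):

    # Percent increase between the prices quoted in the comment above.
    def pct_increase(old, new):
        return (new - old) / old * 100

    print(pct_increase(52, 105), pct_increase(52, 146))   # 2016 -> Jan 2026: roughly +102% to +181%
    print(pct_increase(55, 150))                          # Jun 2025 -> Jan 2026: ~+173%, i.e. "150%+"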
I don't think people really understand how big of a wall the CPU, RAM, and even flash markets have hit, outside of GPUs/TPUs. We're paying as much if not more for the same stuff we were buying 4-5 years ago.
I do think we're more efficient and matrix operations are better (again, GPU/TPUs), but by and large, the computer hardware world has stopped exponential growth.
The period from 1990 to 2005 was amazing. Both in transistor counts and clock speed, it seemed like we were nearly *doubling* performance with every new generation. Memory and disk space had similar gains.
In late 2024 I bought a 32GB kit with unbuffered ECC for ~80$ /shrug
late 2024 is neither 2016 nor is it 2026, it's irrelevant to the discussion.
Oh you're right, I was thinking in the context of self-hosting and not a VPS nor the same era. oops.
In March 2015 I could buy a "Corsair Vengeance RED 32GB (4x8GB) DDR3 PC3-14900C10 1866MHz Dual/Quad Channel Kit" for £259.99 [1]
Today, I can buy a "Corsair Vengeance LPX 32GB (4x8GB) DDR4 PC4-28800C18 3600MHz Quad Channel Kit" for £259.99 [2]
[1] https://web.archive.org/web/20150303141114/http://www.overcl... [2] https://www.overclockers.co.uk/corsair-vengeance-lpx-32gb-4x...
I suspended my current K8S startup plan exactly due to the RAM shortage. Instead I sold my existing investment in RAM for a pretty good markup
Out of curiosity, what advantages do the small VPS hosts offer compared to the big 3 (AWS, Azure, Google Cloud)? Customer Service? Pricing? Local Data Center?
Until recently I had a 4gb ram 80gb ssd+2tb hd VPS running debian in a Montreal data centre with a real use 700 mbit pipe to my city with a budget provider for the equivalent of $80USD/year. When fio speeds were slow they moved me to a less crowded server. I gave it up as don't need it and moved my personal sites back to NFS for peanuts a year and services to my NAS. The pricing, offsite storage for my backups, Canadian sovereignty, lack of perceived complexity with a big provider was all attractive. I'm a physician with a tech hobby and last serious tech work was in the LAMP days with perl and php. Trying to think of learning about AWS and screwing up usage based billing was daunting!
Yeah, don't try AWS. I tried it once and now I'm stuck with $0 bill emails coming each month that I can't stop.
A few months ago I was going through my secondary email and noticed I was getting a $0.01 monthly bill from AWS.
Having not used AWS for years, I logged in to check it out, navigated through the Kafkaesque maze of their services until I found what I was looking for:
A lone S3 storage bucket, with one file, "Squirrel.jpg". A 200kB picture of a squirrel that I uploaded 8 years ago and can't remember why.
> I was getting a $0.01 monthly bill from AWS.
I wonder what the cost to AWS was for keeping track of that and running your CC. There's no way they made money off you / that 12 cents/year cost them *at least* 12 cents to collect every year
That's funny. I kept getting a -$100 bill from a credit card for a few months after closing it. Eventually called them and suggested they can send me a cheque instead of a bill next time for similar reasons...
IIRC the CC they had on hand had long expired and they never actually managed to charge me for these minuscule amounts, which is why I didn't notice it for so long.
My vps provider bills in $5 blocks
That should be below the threshold for AWS’s free tier. I have more than that in S3 and I’m not being charged a cent.
AWS did some weird security thing and it invalidated my 2FA. I can't login to my account to update my expired card.
I have $6 in charges and so now my account is locked. Lol. Fuck off AWS.
> Trying to think of learning about AWS and screwing up usage based billing was daunting!
One of the hard rules we learned pre-pandemic was that services attached to usage based billing should really exit on error. It's a lesson I'm keeping in mind working with agents and routing (and the main reason I'm local-first).
Canadian here, could you share the name of the provider? I'd love to move to something more local and just need a basic small vps for a simple apache host. I know of a couple providers but never talked to anyone actually using one.
I did a detailed review of a few Canadian VPS providers last year.
https://lukecyca.com/2025/canadian-vps-review.html
Last year, I moved from DigitalOcean to FullHost (their Vancouver datacentre) for hosting a small SaaS and a bunch of personal projects. It's cheaper and FAR better performance.
Thanks! I'll check it out!
It was ServaRICA as someone else suggested. It was a Black Friday hybrid VPS deal from a few years ago, looks like they still have comparable stuff on their site. For the cost I would generally assume anything important needs to be duplicated in case the company folds or a fire unless you pay them for such a service. (I don't have any vested interest in suggesting them.)
Thanks! Nothing important, personal site with the source stored in a git repo replicated to a few places, so them folding would just be a minor inconvenience.
They're probably talking about ServaRICA. They post deals on LowEndTalk.
Thank you!
Much better prices, and simplicity. The power you get from Hetzner or Kimsufi is crazy compared to AWS.
If I need to host something small, I don’t want to mess around with the many permissions and quirks that are required to deal with AWS. It is often much easier to just setup the server on a standalone service.
This.
When I worked at Microsoft, I seldom used Azure for personal use due to it being expensive and complicated.
Whereas I have plenty of Fourplex.net servers because even on half the salary, it's affordable enough for 16 Tor exit relays and two personal web/email/Mastodon servers.
How's the legal exposure of running 16 exit relays?
A fixed plan that is not so flexible, not pay-as-you-go, but predictable and economical. Elastic clouds are elastic in the sense that you can change the compute you want, you can change the storage (block or object), and you can use their premium network as much as you like, as long as you have the money and get clearance at the end of the month. Scaling is therefore what those elastic clouds offer, albeit at a premium price.
Meanwhile, small service providers might not actually need those premium features; they just want something that is cheap and makes economic sense. They don't need state-of-the-art hardware, just something that works.
That's why, while the AAGO (AWS, Azure, GCP and Oracle) attracted the big corporations, that is, almost all of the Fortune 500 use them, DigitalOcean and Vultr, with their $5 plans, are the ones who won the small businesses.
Most of my customers (small VPS host here) don't like the companies behind AWS, Azure, and Google Cloud, especially the amount of influence they have in the world and how they wield it. And the pricing often isn't that much different between a small VPS host and either a cloud provider or one of the larger VPS providers (Akamai/Linode, Digital Ocean, etc.) - larger providers have economies of scale, but smaller providers don't have as much overhead for paying sales and C-suite.
There's also the human touch in terms of who you talk to: a lot of the smallest VPS hosts are 1-2 people, both technical, so customer support = sysadmin = contact for everything.
If the only thing you need is "server, accessible via the internet, always online", and you're not interested in all the vendor lock-in masquerading as useful services offered by the big cloud providers, then small VPS hosts are 100% the way to go. For mid-sized servers they're cheaper (i.e., stuff that wouldn't be free on the big clouds, but not "I want a petaflop"), with more transparent pricing (I pay $12, every month. If I get inundated with traffic, I'll get cut off until I choose to pay more).
If you need a lot of power you should also look into dedicated servers from small hosts, they usually have very good power to cost ratios.
The set of companies that sell VPS and the set of companies that sell dedicated servers aren't the same set.
I could pay like 30 bucks a month for an absolutely overspecc'd VPS (64GB/16c) that would cost around 20X on AWS (According to ChatGPT; which sounds about right based on the last time I cared to even look into it)
Does it have a billion 9's of reliability? No, but I don't care, it has literally never not worked when I've used it
Customer Service so far has been human, but that will vary greatly for the provider
I also use a different provider for work-related hosting, and the reduced latency of being within 20 ms of the DC has been probably the single biggest (perceived) perf improvement my users have ever seen, especially on the legacy WebForms platform we recently decommissioned. (We're a bit too geographically far from most datacenters of most large providers.)
I'd use digital ocean over AWS for any SMB or lean startup (so... anyone not attached to an infinite money hose that has to either scale to NEED AWS, or die trying) just because of 1) their UI not being broken glass you have to crawl over and 2) not having eight trillion features that make doing simple things hard and 3) pricing
Same. For small projects, I always recommend Vultr or Digital Ocean. Vultr has some neat network features, like BGP support.
Price. 1 vCore, 2GB RAM, 20GB SSD, unlimited traffic (though throttled to 200mbit/s after transferring 2TB within 24 hours) = 1.85€
That is a nice way to have a static IP on the internet and enough resources to do small things like host a nameserver and/or OpenVPN/Wireguard.
I may have had 4 hours of downtime in one year, always announced days in advance.
I used a vps service hosted in a country with strong digital privacy laws to host a personal wireguard+pihole vpn. I could probably think up a decent argument why that privacy with the smaller guy was only nominal but I could absolutely think up a good argument why doing that on a big name would have no privacy guarantees at all, especially as someone who would be in the bottom rung payment-wise.
Never had problems with downtime, and I paid, like, 40 bucks a year over 3 years. I think I had to restart the thing once because of something dumb I did on my end.
Low cost, simplicity and customer service.
AWS does offer Lightsail which is similar pricing.
The terms of use of Lightsail say you can't use it in a way that's intended to reduce your costs over EC2. So they have more free bandwidth, but you're not allowed to intentionally send your traffic through Lightsail to avoid the extortion of EC2 bandwidth pricing. They have cheaper CPU, but you're not allowed to use 100% of your CPU.
Very much like a similarly priced VPS though. It lets me persuade people to use a VPS because it is from a familiar big brand. I have come across a lot of people who use EC2 (usually just a single instance) when a VPS would work fine for them.
You cannot use Lightsail to work around EC2 costs, but you can use Lightsail instead of using EC2.
There are ones that don’t price gouge you on bandwidth.
Much cheaper. The big 3 are charging you as if it's still 2009, and they don't change this because people have an irrational attachment to overpaying.
The only advantage is cheapness, for personal use.
If you’re a government agency or a company you don’t care about saving $14/month, you want a secure provider. And these hosts are not secure, you’re basically just on your own.
Without hiring/being a cloud expert, it's hard to be sure that you didn't leave some door wide open due to a configuration error. Both approaches offer more than enough opportunities to royally screw up.
If you're a government agency or anyone for whom a security failure costs more than sending an apology letter, you should really have your equipment in a locked rack, if not on prem.
Speaking for myself, managing a team of 3, the simpler management interface on Hetzner compared to AWS is a major professional advantage.
Not being US owned can be an important one in this geopolitical climate.
Small VPS hosts oversell like crazy and offer much lower prices. Their reliability might also be worse, because they don't migrate VMs between hosts.
AWS EC2 does not migrate between hosts either. They want you to pay more for redundancy, and they encourage you to use all the tools (paying for every single byte processed or transferred, of course). And if it goes down, it is because you didn't follow AWS best practices (= paying even more).
Some. Depends on your needs though, and the provider. You can get shit performance for cheap, you can also get good performance for cheaper than AWS.
Spending caps is the biggest reason for me. Granted, some VPS don't offer this (vital!) feature, but none of the big 3 or similar services do.
10-100x cheaper
Where small VPS hosts can make a difference: require no KYC, accept crypto payments.
Some, but those are usually much more expensive to make up for the legal risk because half of their customers are doing something illegal.
Illegal according to whom? The servers are usually located in places where things such as sharing music, copyrighted or not under U.S. law, is not illegal.
Providers who go out of their way to avoid KYC tend to attract the types of customers who do things that are illegal in any country.
Not all of the people who want to avoid KYC are doing something illegal ... but all of the people who are doing something illegal are looking for a no-KYC provider.
Simplicity, price, stability.
Others have mentioned the general pricing, simplicity etc.
Outbound data pricing is a potentially huge saving.
AWS is as much as $90/TB outbound with 1GB free. Hetzner is $1.20/TB (in EU and US) with 1TB/20TB (US/EU) free.
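As a quick sketch using just the per-TB figures quoted above (treat these as the comment's numbers, not an official price list):

    # Rough monthly egress bill for a hypothetical 10 TB/month of outbound traffic.
    def monthly_egress_cost(tb, free_tb, usd_per_tb):
        return max(tb - free_tb, 0) * usd_per_tb

    tb = 10
    print("AWS:    ", monthly_egress_cost(tb, 0.001, 90))   # ~900 USD (1 GB free)
    print("Hetzner:", monthly_egress_cost(tb, 20, 1.20))    # 0 USD, within the 20 TB included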
(Good) Smaller places are more likely to have actual technical staff you can talk to.
I have a dedicated server at one provider with $0.30/TB overage, and it's the cheapest I've ever seen. This provider appears to optimize for bandwidth-heavy use, advertising their connectivity heavily.
I like Vultr for the simplicity of my own projects. I really hate spending my time on provisioning and similar labyrinths.
Pricing (both cheaper and more predictable), and reduced complexity.
This is why I moved off of Azure and over to Hetzner's US VPS's. For what I was deploying (a few dozen websites, some relatively complex .NET web apps, some automated scripts, etc.), the pricing on Azure just wasn't competitive. But worse for me was the complexity; I found that using Azure encouraged me to introduce more and more complex deployment pipelines, when all I really needed was Build the container -> SCP it into a blue/green deployment scheme on a VPS -> flip a switch after testing it.
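For illustration, that whole flow fits in a handful of commands; the host, image, ports, and container names below are hypothetical placeholders, not the setup described above:

    # Minimal blue/green-style deploy sketch: build locally, ship the image to
    # the VPS, start it on the inactive slot, then flip the reverse proxy to the
    # new port once it passes a smoke test. Assumes docker + ssh access.
    import subprocess

    HOST = "deploy@vps.example.com"   # hypothetical VPS
    IMAGE = "myapp:latest"            # hypothetical image tag

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def deploy_to_slot(slot, port):
        run(["docker", "build", "-t", IMAGE, "."])
        run(["docker", "save", "-o", "/tmp/image.tar", IMAGE])
        run(["scp", "/tmp/image.tar", f"{HOST}:/tmp/image.tar"])
        run(["ssh", HOST,
             f"docker load -i /tmp/image.tar && "
             f"docker rm -f myapp-{slot} 2>/dev/null; "
             f"docker run -d --name myapp-{slot} -p {port}:8080 {IMAGE}"])
        # After smoke-testing the new slot on {port}, point the nginx upstream
        # at it and reload -- that's the "flip a switch" step.

    deploy_to_slot("green", 8081)     # blue is assumed to be live on 8080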
Interesting. I'd have thought these giants would have better pricing because of the scale...
The last comparison I did was Hetzner offers 14x the performance per dollar
Not including the faster SSD & included traffic
Is that based on their cloud or dedicated offering?
Quite the opposite. They have mindshare lock–in and don't face competitive pressure to reduce prices. AWS boasts it never increased prices but it also never reduced them by much, even as hardware got an order of magnitude cheaper.
They might be if they were trying to compete on price. But my understanding is their margins are... healthy shall we say.
They're selling all their capabilities; using them as a VPS is like using a battleship to cut cheese.
But if all you really do with cloud stuff is "ssh into a server I have" (which covers a ton!) then you'll find much cheaper/more performant elsewhere.
> They're selling all their capabilities; using them as a VPS is like using a battleship to cut cheese.
A lot of people do it.
People feel the battleship is safe and familiar. For most businesses the extra cost is not even noticed. Even a small business spending $500/month on hosting instead of $50 is not going to notice.
Also, if something goes wrong (e.g. your AWS region goes down) its far easier to explain to a manager or client that "its Amazon's fault and lots of stuff is down", rather than "its Digital Ocean's fault".
They give potentially worse pricing on a lot of the basic things (egress bandwidth, basic VM hosting, storage pricing) because their real value-add are all the extra managed services they offer on top of those things, the scale they're able to offer, and the more enterprise features.
If you're using AWS/GCP/Azure to just host a couple of VMs for a small group you're massively overpaying.
I haven't been professionally involved in AWS in some time, and never was involved in pricing.
Personally, the only thing I know of that is a true deal vs. competition is cold storage of data. Using the s3 glacier tiers for long term data that is saved solely for emergencies is really cheap, something like $1/100GB a month or less.
AWS is usually not the cheapest EVER when it comes to offerings like EC2. If you aren't doing cloud-native or serverless at AWS, you're probably spending too much.
Glacier Deep Archive is around $1/TB/month. This is also about the good deal price for storage servers right now, although Glacier offers redundancy which storage servers don't.
They don't. AWS is the most expensive hosting provider in the world.
AWS outbound data is as much as 75x the cost of eg Hetzner.
I view a large percentage of "cloud" usage like Teslas stock price: it's completely detached from reality by people who have drunk the kool aid and can't get out.
Predictable and extremely low costs for less critical stuff. My 2 main ones are respectively around 4 and 8 EUR per _year_.
I use them to run wireguard to evade geoblocks when I'm travelling, a few redundant monitoring scripts alerting me of reachability issues of more critical stuff I care about, they serve as contingency access channels to my home (and home assistant) if my primary channels are down.
I get no support, no updates, it's all on me - which is fine, it allows me to stay current and not lose hands-on practice on skills which I anyway need for my job (and which are anyway my passion). I don't even get an entire IPv4 - I get.... 1/3000th of it? (21 ports, the rest are forwarded to other customers). Suits me fine.
And it's always that price, apart from bandwidth overage on some but not all providers.
Simplicity, and low price.
VPS services are usually really, really simple and fairly cheap.
I'd say that actually VPS prices is where we actually see computing prices going down rather than on the big 3.
AWS used to optimize further and pass down the savings to the customers back in the day, now they don't do it anymore.
Pricing mostly, and also simplicity. Big cloud is incredibly expensive when you really look at it. The markup is huge.
It’s weird because in most of this industry scale results in lower prices. Not in cloud.
Pricing. They overprovision aggressively but most people actually just need a 0.1 CPU available remotely for the majority of their use cases.
I replaced with a home server and it costs way more just in power hahaha.
Not being Amazon, Microsoft and Google.
I moved from AWS to Hetzner, because: 1. lower prices, 2. not American
If everyone is being hit by the same cost issues, small VPS hosts just need to charge more to operate the same. Most small VPS hosts are dirt cheap and I don't think many people would be shocked if prices go up in this environment.
Yes, even if prices would triple, VPSs would still be an attractive offering.
At a certain price people choose not to buy
Sure, the market may shrink some, but it's also relative to the alternatives, which are also increasing in price. There's no evidence in the article that the market is shrinking or that pricing is causing it to. The people leaving the market over this kind of cost will likely be people who keep VPSes as idle resources, like some forgotten subscription they don't use but never canceled because it's <$10 per year. If that goes up to $20 per year, maybe it triggers them to cancel. It's so low-cost that I know customers like this are a portion of providers' profit, but it's just a business problem to solve and shouldn't kill any provider on its own.
The comparison to the 2000s telecom market does not seem apt to me at all.
$10 per year is extremely rare. $5 per month is common, and if it goes up to $10, a lot of people will still keep it if they're using it.
The provider is still making the same money for the same cost on that pre–existing service, so they might not even choose to raise the price for old customers. They're only losing an opportunity cost, and different businesses weight that differently.
I dunno if DSL-based ISPs were going to last in the US. I mean, in a big country the range limits of DSL make it hard to compete with cable. I get 20Mbps at my location with fiber-to-the-node, but people a few miles down the road get 10x that speed with Time Warner cable for the same price. In some place like the Netherlands or South Korea it might be different, but not here.
DSL is infinitely faster than a nonexistent cable wiring that will never be financially viable to provide.
These companies were already paid to wire up the US with fiber using taxpayer dollars.
and they took the money and didn't do it and didn't get prosecuted
Exactly. Points out how "will never be financially viable to provide" is a completely BS argument in this context.
It isn't. I gave up on waiting and switched to Starlink. The existence of Starlink provides even less financial reward for paying to wire up my house.
They have no examples in the article of any VPS hosts that have recently died, though?
hello,
as always: imho (!)
idk ... the article is like ... comparing apples with oranges:
telecommunication-networks and similar kind of infrastructure are also called "natural monopolies"
* https://en.wikipedia.org/wiki/Natural_monopoly
and often those monopolies were (initially) built with public funding.
i don't think these characteristics apply to DRAM/semiconductor related facilities as we have them today.
i think the only "thing" which could "save us" from our own and the DRAM manufactures "greed" are new factories ... anywhere, but right now china looks "the most promising" at least to me.
additionally: i think these articles could be seen as somewhat related
"TSMC Risk"
* https://stratechery.com/2026/tsmc-risk/
and
"The Benefits of Bubbles"
* https://stratechery.com/2025/the-benefits-of-bubbles/
in a nutshell: the "upside" / result if the ongoing AI bubble pops could be
1. more semiconductor facilities
2. more power-generation facilities
cheers a..z
The US passed the Chips act in 2022. Is there really nobody reaching for those billions ready to crank out some memory four years later?
This was in the news recently: https://investors.micron.com/news-releases/news-release-deta...
Coming soon, in late 2030.
https://www.syracuse.com/micron/2025/11/micron-chip-factorie...
The idea behind the CHIPS act made some sense, but with these monkeys in charge (and the voters who elected them), no one has any idea what's going to happen next week, next month, next year, or over the next 10 years. It turns out that predictable, consistent, and coherent trade policy is actually fucking important. Who would have guessed?
Can we get smaller VM hosts? I’ve seen some minimums at 512MB for a host. I need 8MB at most sometimes.
Update: Fourplex (this host) uses a 1GB minimum.
What's the use case for a VM with 8MB memory?
A simple web server with some static content to serve.
Running on what?... a quarter-century old linux kernel and busybox?
Rackspace is doubling the costs across the board of everything on their Rackspace Cloud products. They gave us 30 days notice and told us to switch to the new Rackspace Cloud (OpenStack Flex) with no tools to do so.
If we follow events at high-zoom:
- first graphics cards
- then RAM
- now NVME drives
The result? PCs become hyper-expensive and also hard to assemble for sheer lack of components; then we get to VPSes, and it keeps going. The computing that remains is mobile and cloud, "the only integrated platforms," meaning the absence of digital ownership for most people and total control by the giants.
Think of it in these terms: nowadays GNU/Linux is starting to be a sought-after desktop, we can self-host with a level of accessibility even for newbies that we didn't have in the past, and the interest in doing so among those who know a bit is stronger than ever. And here come the reasons not to do it, economic and also physical.
Does it sound like "flat-earthism"? Well, it also sounds very realistic, at least in terms of effects. We can call it conspiracy theorizing, but "to think ill of others is a sin, yet you often guess right," as a prime minister with a list of extraordinary scandals behind him said a long time ago.
Okay, this is the type of post I want to see, and the answer is absolutely a yes.
For context, I have recently been part of the LowEndBox community, discussing things with providers etc.
Recently one of the providers there directly attributed their price increase to the RAM increase.
I was in talks with other VPS providers, one of whom literally shared a photo in the forum of what it costs them to get RAM in their area.
I have been in talks with many, many people within the VPS community and have said this countless times here and on LowEndTalk (you can read my comments, though I was saying this ~1 month ago iirc).
Heck, I wanted to build my own cloud. My father has worked in the bandwidth business (essentially broadband/WiFi) for ~10-15 years and has very strong ties in it.
I, on the other hand, love tinkering with the software side of things, including virtualization, and have made projects around it. I brought the idea up with my dad a month ago and he said he was interested in the datacenter idea.
But I shit you not, it literally doesn't make sense, even though a) my father has his own office, b) he can get lots of bandwidth essentially for free, and c) we live in a country with very cheap electricity compared to many others.
And even if you have lots of hardware, it just doesn't make sense to keep going when the price of replacement has increased so far.
There is also a forum called LowEndSpirit, and I saw someone there shut down their service citing this as well.
There are many posts on LowEndBox (which I have commented in) talking about the same thing as well.
I don't quite know how to state it, but the 4x price increase in RAM simply didn't make sense. No thanks. I (or we?) are gonna wait out the AI bubble, or do something else, work more on the software side, or just relax instead of building this.
I haven't even read the post but this is the most connected post I have felt in a long time!
To be really honest, I will admit that right now, personally I am either looking at mac 512 gb ultra or something which are still not ramflated from what I can tell or something or to be really honest, most cloud providers are burning money to not create a frenzy or still have pricing the same.
Personally, I love small cloud providers, but recent times indicate to me that I should favour stability. I have a netcup server myself, but Hetzner is an absolute love too from what I can tell and I can recommend either, and OVH is another brilliant option. All European.
Because these have, or probably have, large supplies. I know that OVH literally builds their own machines, buys land, and does everything themselves, and they are a publicly listed company, so they are a capital-expenditure-to-maximum-profit machine in some sense (eliminating as many middlemen as possible). So I'd guess OVH has a lot of money to burn through to survive, brand reputation is crucial too, and many of these providers usually run Black Friday deals as well.
I guess another factor is that most cloud providers can offset their losses right now with hardware that may get cheap when the AI hardware bubble bursts, but I don't really know. I guess OVH or others might internally discuss this possibility too.
At least for me, I am going to build software around these projects. I have found a huge discrepancy where WHMCS and others have an almost-monopoly, so I am probably going to spend a few months creating an open-source alternative (technically I already have a preview working with gVisor, which I used on my netcup VPS itself, because gVisor can run inside KVM machines without requiring nested KVM; a rough sketch of that kind of setup is below).
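As a rough illustration of that gVisor point (a sketch only, taken from gVisor's documented Docker integration; /usr/local/bin/runsc is just the usual install path): registering runsc as an extra Docker runtime with its userspace "systrap" platform gives you sandboxed containers without the host needing nested virtualization, which is why it runs fine on an ordinary KVM VPS.

  /etc/docker/daemon.json:
  {
    "runtimes": {
      "runsc": {
        "path": "/usr/local/bin/runsc",
        "runtimeArgs": ["--platform=systrap"]
      }
    }
  }

  $ sudo systemctl restart docker
  $ docker run --rm --runtime=runsc alpine uname -a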
To be honest, shit has really hit the fan in this industry in ways I can't fully describe. I have been in talks with a UK provider (https://xhosts.uk) [promoting him because he seems like a good guy] and he said that he's suffering the losses / taking the hits (for additional context, he is disabled but IIRC doesn't take disability checks), and from what I can tell he is genuinely impacted by these rising costs.
So, being honest, this mad fuckery of an AI bubble is hurting a disabled person who wants to make a genuine living doing things he likes.
I really want this bubble to pop right now. Like, please. Just pop it and bring hardware prices back down, because I talk to these providers and they are normal people like you and me, and their business is impacted too. All the while, large companies like AWS, GCP, and Azure are what most people rent from in the first place, rent-seeking as they are, while you can get things cheaper on lowendbox and other places. Of course you won't get the same stability (though you will get pretty damn close, IMO) for much cheaper. But what many people really want is the ability to deflect blame: when half the internet shuts down because AWS us-east-1 just feels like it, customers won't blame them directly for picking AWS. Whereas if they use something even as reliable as Hetzner and Hetzner has one downtime (which is rare, but let's assume so), then customers, business owners, or anyone higher or lower in the team might push back on the person who originally suggested Hetzner, and that person might get fired(?). These are the justifications I see when people ask: hey, AWS is bloated and burns money so quickly, why not use alternatives like Hetzner and many other awesome services?
Thanks for listening to my rant, and have a nice day (as one might have after the AI bubble). No matter how harshly I write, no matter what I do, I quite frankly have no say in the AI bubble. So what I personally like to do is use AI for free or on the cheap, burn as much of these AI companies' money as possible, and actually build an open-source cloud solution when the bubble pops. It's a bit of an ideal, but it's something I am working on, I guess, with the gVisor thing.
I don't quite know how to put it, but the tech behind AI is cool. There are going to be some persisting changes (open-weights models like Kimi K2.5, etc.), but I just don't know.
I really don't like the current timeline I live in regarding AI, man. And I would consider myself someone who actually uses AI a lot, but on net I feel AI might have been a negative, or at least AI in the past year, for some reason. Claude 3.5 Sonnet, DeepSeek V3, and so on were going okay. Beyond that, something definitely changed; there was already a bubble in the DeepSeek V3 days, but the stock market was somehow still realistic. Right now I feel a fundamental disconnect between reality and the market, and the longer it continues, the worse its aftermath will be.
I really don't know how to describe what I am feeling, but hopefully I have done a good job (with this lengthy comment) of explaining all the things I felt like talking about.
Thank you to the author for writing this article so that we can have this relevant discussion. I haven't read it yet, so I am going to go ahead and do that right now.
RAM shortage or competent programmer shortage?
Can't get a Linux box to idle (or even install) in under 512 MB these days.
Can't find a web developer worth a shit who doesn't think he needs a Python backend application server to print "Hello, world" when you could do it with a static page served by something like OpenBSD with two-digit (megabyte) RAM requirements (see the sketch after this comment).
It's not the RAM that's changed; it's everyone around the RAM.
A coddled generation that was taught AWS is the Internet and lives in abstractions certainly hasn't helped.
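For what it's worth, the static-page setup above really is just a few lines on OpenBSD's base httpd(8). A minimal sketch, where "example.org" and the docroot are placeholders and files live under the default /var/www chroot:

  # /etc/httpd.conf
  server "example.org" {
          listen on * port 80
          root "/htdocs/example.org"
  }

  $ doas rcctl enable httpd
  $ doas rcctl start httpd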
My NixOS SSH jump-host server here idles at 234 MB, of which 64 MB is systemd-journald (which I assume can be reduced with some settings for how much to keep in RAM).
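For reference, much of journald's resident memory is typically mmapped journal files, so it can be capped in journald.conf. A sketch only; the sizes are arbitrary examples, and on NixOS you would set this through the journald module (e.g. services.journald.extraConfig) rather than editing the file directly:

  # /etc/systemd/journald.conf
  [Journal]
  Storage=volatile      # keep the journal in /run (RAM) only
  RuntimeMaxUse=16M     # cap the RAM-backed journal
  SystemMaxUse=64M      # only matters with Storage=persistent/auto

  $ sudo systemctl restart systemd-journald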
>which 64 MB is systemd-journald
why
Windows NT was routinely run with 32 MB of RAM TOTAL and the event log is basically unchanged 30 years later.
Achtung, you will draw the ire of the systemd downvote zealots.
Edit: Haha, within a handful of seconds I got a downvote. :-D
>which 64 MB is systemd-journald
Wait till they rewrite it in Rust!
You definitely can run Linux with a few simple servers in 128 MB of RAM.
Installation can be tricky indeed, but if you already have an installed system, it's easier.
Yeah I'll need conclusive proof of that.
This is not difficult: you just need to run `htop` and add up the RES column (which is in KB unless a unit is shown). Example:
USER RES▽ Command
root 70436 systemd-journald
root 14268 amazon-ssm-agent
root 13508 systemd
root 12160 systemd --user
root 10240 sshd: root@pts/0
root 9088 sshd: root [priv]
root 8944 systemd-udevd
root 8704 systemd-logind
root 8320 nix-daemon --daemon
systemd-ti 8192 systemd-timesyncd
systemd-oo 7808 systemd-oomd
root 6492 -zsh
nscd 6272 nsncd
messagebus 5888 dbus-daemon --system --address=systemd: --nofork --nopidfile -
root 5888 htop
sshd 4904 sshd: root [net]
root 4736 sshd: sshd -D -f /etc/ssh/sshd_config [listener] 1 of 10-100
root 2960 (sd-pam)
root 2816 agetty --login-program login ttyS0 --keep-baud
root 2192 dhcpcd: [privileged proxy]
dhcpcd 1680 dhcpcd: [manager] [ip4] [ip6]
dhcpcd 1468 dhcpcd: [BPF ARP] ens5 172.31.8.86
dhcpcd 1168 dhcpcd: [control proxy]
dhcpcd 1040 dhcpcd: [network proxy]
>> You definitely can run Linux with a few simple servers in 128 MB of RAM.
>
> This is not difficult: you just need to run `htop` and add up the RES column (which is in KB unless a unit is shown). Example:
I'm not quite sure what point this makes... That's supposed to fit in 128 MB? And it doesn't include memory consumed by the kernel itself (which is not negligible at this scale), and Linux needs spare memory for cache to work remotely decently.
$ awk '{ tot+=$2 } END { print tot /1024 }' < list
214.035
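Worth adding: summing RES double-counts shared pages and misses kernel/slab memory, so a more honest check of whether a box fits in 128 MB is the kernel's own accounting, e.g. (a quick sketch, run on the same machine):

  $ free -m                  # the 'available' column is what matters
  $ grep -E 'MemTotal|MemAvailable|Slab' /proc/meminfo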
I'm sure you can run Linux with 128 MB of RAM, but certainly not with systemd and a default kernel... Perhaps DSL (Damn Small Linux) or Alpine. Tom's Root Boot (tomsrtbt) is what you need! It used to fit on a floppy disk.
Why are you using systemd in a minimalist system?
NetBSD! ... oh wait, not Linux... damn
Small VPS hosts shouldn’t really exist. They’re either resellers or just half-assing it.
How can you trust Gary from GaryHosting not to just steal all your data? How can you trust him to have redundant networks? You just can’t.
On the contrary, it's impossible to trust Amazon not to be evil, because eventually some suit with an MBA is going to go, "We can make 0.001% more money this year by having orphans hand-deliver packets across the freeway, Frogger-style. What are people going to do, leave? Where else will they host their application built around our proprietary FireHouse LightWave Message Comorbidifier?"
On the other hand, I can trust Gary. Gary's personally responsible for GaryHosting, and he obviously takes that role seriously, given he slapped his name on the front. And if Gary fails, I can just switch to a different provider. Gary doesn't have a moat, he sells a commodity. His only advantage in this world is treating his customers well enough that they don't leave.
Ideally all cloud applications hostable on any platform would just provide the following services to clients:
A. rendezvous services so clients can connect to one another,
B. storage/retrieval of encrypted data where the host does not have the key to decrypt,
C. transport of encrypted data which cannot be read by the host, per B above (a rough sketch follows below).
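A rough sketch of what B and C can look like with stock tools (openssl stands in for whatever client-side encryption you prefer; the host name and paths are placeholders). The provider only ever stores and moves ciphertext; decryption happens at the endpoints:

  # encrypt locally; the passphrase never leaves the client
  $ openssl enc -aes-256-cbc -pbkdf2 -salt -in backup.tar -out backup.tar.enc

  # ship the ciphertext to Gary's box; he can store and forward it, not read it
  $ scp backup.tar.enc gary@vps.example.net:/srv/blobs/

  # later, on another endpoint you control
  $ scp gary@vps.example.net:/srv/blobs/backup.tar.enc .
  $ openssl enc -d -aes-256-cbc -pbkdf2 -in backup.tar.enc -out backup.tar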
> How can you trust him to have redundant networks
You can't, so abstract that away at the application layer. Make it not dependent on a single host or network.
Your data has to be decrypted somewhere to be usable; how would that work?
at the endpoints