Show HN: Shadeform – Single Platform and API for Provisioning GPUs
by edgoode
Hi HN, we are Ed, Zach, and Ronald, creators of Shadeform (https://www.shadeform.ai/), a GPU marketplace to see live availability and prices across the GPU market, as well as to deploy and reserve on-demand instances. We have aggregated 8+ GPU providers into a single platform and API, so you can easily provision instances like A100s and H100s where they are available.
From our experience working at AWS and Azure, we believe that cloud could evolve from all-encompassing hyperscalers (AWS, Azure, GCP) to specialized clouds for high-performance use cases. After the launch of ChatGPT, we noticed GPU capacity thinning across major providers and emerging GPU and HPC clouds, so we decided it was the right time to build a single interface for IaaS across clouds.
With the explosion of Llama 2 and open source models, we are seeing individuals, startups, and organizations struggling to access A100s and H100s for model fine-tuning, training, and inference.
This encouraged us to help everyone access compute and increase flexibility with their cloud infra. Right now, we’ve built a platform that allows users to find GPU availability and launch instances from a unified platform. Our long term goal is to build a hardwareless GPU cloud where you can leverage managed ML services to train and infer in different clouds, reducing vendor lock-in.
We shipped a few features to help teams access GPUs today:
- a “single pane of glass” for GPU availability and prices;
- a “single control plane” for provisioning GPUs in any cloud through our platform and API;
- a reservation system that monitors real-time availability and launches GPUs as soon as they become available (rough sketch of the flow below).
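To make that concrete, here's roughly what the flow looks like from code. This is an illustrative sketch only: the endpoint paths and field names are placeholders rather than our exact API.

```python
# Illustrative sketch -- endpoint paths, parameters, and response fields
# here are placeholders, not Shadeform's exact API.
import requests

API = "https://api.shadeform.ai/v1"  # placeholder base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# 1. See live availability and prices across providers.
offers = requests.get(
    f"{API}/instances/availability",
    params={"gpu_type": "A100", "num_gpus": 8},
    headers=HEADERS,
).json()

# 2. Provision the cheapest available offer.
cheapest = min(offers, key=lambda o: o["hourly_price"])
requests.post(f"{API}/instances", json={"offer_id": cheapest["id"]}, headers=HEADERS)

# 3. Or reserve: we monitor availability and launch as soon as capacity frees up.
requests.post(
    f"{API}/reservations",
    json={"gpu_type": "H100", "num_gpus": 8},
    headers=HEADERS,
)
```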
Next up, we’re building multi-cloud load-balanced inference, streamlining self-hosting of open-source models, and more.
You can try our platform at https://platform.shadeform.ai. You can provision instances in your own accounts by adding your cloud credentials and API keys, or you can leverage “ShadeCloud” and provision GPUs in our accounts. If you deploy in your account, it’s free; if you deploy in our accounts, we charge a 5% platform fee.
We’d love your feedback on how we’re approaching this problem. What do you think?
First off, the color and the font of the hero look so neat together. Just giving straight up simple, professional but modern vibes. Good job whoever picked it!
Now, regarding the product - this is amazing. From both the perspective of saving time and money digging through providers to the part I actually find the most impactful - the simplification of the AWS console mess for a niche use case. While I understand GPUs are the hot thing now and there is a scramble for a single FLOP, if you ever decide to pivot, I'd gladly pay more money each month to use such a simplified niche AWS/generic cloud console.
Can't wait to have a chance to play with this more, keep up the good work and good luck!
Thank you for the kind words! I'm Ronald, one of the cofounders of Shadeform.
Simplifying provisioning instances in AWS is definitely one of our goals! With our current AWS integration, we set up a VPC networking stack so all users have to do is pick their instance. We also hope to integrate more cloud features and managed services to make this a fully fledged cross-cloud console.
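For a sense of what that automates away, here's a simplified sketch of the standard VPC boilerplate the integration handles for you (illustrative, not our exact code):

```python
# Simplified sketch of the VPC networking boilerplate we automate on the
# AWS side -- illustrative only, not our production code.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# VPC and a public subnet for the GPU instance.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24")["Subnet"]["SubnetId"]

# Internet gateway plus a default route so the instance is reachable.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)

# Security group that allows inbound SSH.
sg_id = ec2.create_security_group(
    GroupName="gpu-instance-sg", Description="SSH access", VpcId=vpc_id
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg_id, IpProtocol="tcp", FromPort=22, ToPort=22, CidrIp="0.0.0.0/0"
)
```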
The problem for our use case is that saving on GPUs is pointless if we have to keep paying egress fees for our 250 TB training dataset.
The single interface for any cloud GPU is cool, but hard to imagine it taking off without some additional features.
I think for lots of shops the hardest part isn't the compute but moving the data around. E.g., for us: we use S3, some Lustre caching, and spot instance nodegroups. We are a deep learning research team that spends roughly $40-50k/month on AWS compute for training jobs. I imagine this is somewhat mid-tier, maybe more than some but certainly far less than others.
For inference, data egress costs could be less of an issue, but your service would really need to be almost invisible. It probably would be pretty complicated for a number of reasons, but if you could design a "virtual on-demand nodegroup"™ that I could add to my existing clusters and then map to whatever k8s stuff I want, that would probably be useful. I would need to be able to auto deploy a base image to the machine and then provision the node and register with my cluster.
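Roughly what I have in mind, as a sketch: the provisioning call below is a made-up stand-in for whatever the marketplace API exposes, and the join step is just standard kubeadm.

```python
# Sketch of a "virtual on-demand nodegroup": provision a GPU node wherever
# it's cheapest and have it self-register with an existing cluster via a
# standard kubeadm join. provision_instance() and its endpoint are
# hypothetical stand-ins, not a real API.
import requests

# Standard kubeadm join command; the token and CA hash come from your cluster.
JOIN_CMD = (
    "kubeadm join 10.0.0.1:6443 "
    "--token <token> --discovery-token-ca-cert-hash sha256:<hash>"
)

# cloud-init user data applied to a GPU base image: join the cluster on first boot.
USER_DATA = f"""#cloud-config
runcmd:
  - [sh, -c, "{JOIN_CMD}"]
"""

def provision_instance(gpu_type: str, user_data: str) -> dict:
    """Hypothetical marketplace call: launch a matching node with user data."""
    resp = requests.post(
        "https://api.example-marketplace.com/v1/instances",  # made-up endpoint
        json={"gpu_type": gpu_type, "user_data": user_data},
    )
    resp.raise_for_status()
    return resp.json()

node = provision_instance("A100", USER_DATA)
print(node)
```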
Just some unorganized thoughts. Good luck and have fun.
What use cases justify spending $40-50k/month just on training?
Here are two demos of provisioning and reserving GPUs through our platform:
Provisioning: https://www.youtube.com/watch?v=7WyKPMS80Pk
Reservations: https://www.youtube.com/watch?v=Ab5GmfMYWKA
- SSH access isn’t super useful. If I have to author a bootstrapping script for my system it’s too much friction.
- the people who thrive at this use orchestration, like Slurm or Kubernetes. So the nodes I buy should automatically join my orchestration control plane.
- people who don’t use orchestration or don’t own their orchestration will not run big jobs or be repeat customers. It doesn’t make sense to use nonstandard orchestration. I understand that it is something that people do, but it’s dumb.
- so basically I would pay for a ClusterAutoscaler across clouds. I would even pay a 5% fee for it automatically choosing the cheapest of the fungible nodes (see the sketch after this list). I am basically describing Karpenter for multiple clouds. Then at least the whole offering makes sense from a sophisticated person’s POV: your Karpenter clone can see e.g. a Ray CRD and size the nodes, giving me a firm hourly rate or even upfront price to approve.
- I wouldn’t pay that fee to use your control plane, I don’t want to use a startup’s control plane or scheduler.
- I’m not sure why the emphasis on GPU availability or blah blah blah. Either AWS/GCE/AKS grants you quota or it doesn’t. Your thing ought to delegate and automate the quota requests, maybe you even have an account manager at every major cloud for that to bundle it all.
- as you probably have noticed, the off brand clouds play lots of games with their supposed inventory. They don’t have any expertise running applications or doing networking, they are ex crypto miners. I understand that they offer a headline price that is attractive but for an LLM training job, they “vast”ly overpromise their “core” offering.
- if you really want to save people money on GPUs, buy a bunch of servers and rack them and sell a lower hourly rate.
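The node-picking logic itself is the easy part; it's basically this (a sketch: the Offer shape is made up, and the off-brand cloud entry is invented for illustration):

```python
# Sketch of price-aware node selection across clouds. The Offer shape is
# made up; the p4d.24xlarge rate is roughly AWS's on-demand price, and the
# "neocloud" entry is invented for illustration.
from dataclasses import dataclass

@dataclass
class Offer:
    cloud: str
    instance_type: str
    gpus: int
    gpu_type: str
    hourly_usd: float

def cheapest_fitting(offers: list[Offer], gpu_type: str, min_gpus: int) -> Offer:
    """Pick the cheapest offer, from any cloud, that satisfies the request."""
    fitting = [o for o in offers if o.gpu_type == gpu_type and o.gpus >= min_gpus]
    if not fitting:
        raise RuntimeError("no capacity anywhere -- queue the job or fall back")
    return min(fitting, key=lambda o: o.hourly_usd)

offers = [
    Offer("aws", "p4d.24xlarge", 8, "A100", 32.77),
    Offer("neocloud", "a100-8x", 8, "A100", 14.00),
]
print(cheapest_fitting(offers, "A100", 8))
```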
Thank you for the feedback. We're still early in this and are planning on moving in some of the directions you mentioned.
- We agree that moving towards 'Karpenter for multiple clouds' would be more valuable for some use cases and hope to support that feature soon.
- We do help customers with one-off quota requests, and it is a feature we want to bake into our platform on top of aggregating demand in our accounts. Many companies with AWS/GCE/AKS quota still cannot reliably get on-demand instances due to capacity shortages.
Yeah I mean I’m sure you look at Karpenter and think “well it does everything for free, and the code to choose the cheapest node would be straightforward.” Kubernetes already has sophisticated scheduling algorithms that could consider price as a constraint.
I can’t say what people will actually pay for, because CTOs and engineers are penny pinchers; they will go through a lot of pain to pay $0. They are the worst customers. IMO most allegedly B2B Y Combinator offerings are really B2C in disguise, selling productivity apps and pretty interfaces to 22 year olds with busy schedules of Bumble swiping who happen to work as developers and PMs at big enterprises. Because the senior people I know with the real budgets, they look at a thing and think “I’d program this with my headcount to save a 5% fee.” This is coming from someone who does charge a royalty, only because it is customary in my business to do so.
People who spend money love their pricing “formatted” a certain way. CTOs love it to be formatted as “free” with a bunch of trickle priced exorbitant usage gotchas (Snowflake). They don’t love prices formatted as royalties. Time will tell of course.
Anyway, most use cases don’t even make sense, they are deep in the negative for ROI. Most enterprises cannot do software R&D like LLM model training or even serving. The biggest success story in town uses Kubernetes. I’m not sure if there’s space for 10 more control planes to run on top of your control planes, they add a lot of complexity for little gain.
A bunch of Kubernetes manifests to fine-tune LLaMA 2 on a dataset hosted in blob storage on DGX machines is a commodity. People think it’s sensitive, and there’s a bonanza for people who can author that YAML, but it’s inevitable that someone will release a proper multi-node training job with vanilla resources. Yet here we are, with a dozen “free” trickle-priced weird CRD control-plane-esque products obscuring this.
Time will tell, but I think you're discounting the number of amateurs who will join the gold rush with a dream, an angel, and just enough knowledge to run a customized alpaca-lora job, looking for GPUs while an outsider controls the runway.
This is an iPhone 3 level event, with new millionaires being created out of the LLM equivalent of the mobile fart button.
People selling tools of any kind will make bank.
Congrats on the launch!
My co-founder and I always joke that there are only two hair-on-fire problems in 2023 and they can be summarised in 6 letters: GPU & PMF.
Really love what you're building.
Appreciate it! We're hoping we can help solve one while achieving the other.
What's PMF?
Product Market Fit
Be super careful inserting yourself as a reseller of GPUs (ShadeCloud).
You'll quickly find that your platform's primary use is to turn stolen credit cards into cryptominers.
crypto doesn't use gpus to mine any more. after ETH switched to PoS, the whole gpu mining world was decimated (thankfully). even on a free tier, you'd be lucky to get a few dollars a day, for a whole lot of upfront work.
that said, i agree that you do have to be careful reselling anything... people will find nefarious uses, it just isn't mining anymore.
I'm not referring to normal users that are trying to generate ROI. When your actual cost is $0, even GPU mining Monero or shitcoins is cash flow positive and relatively low risk.
A "few dollars a day" is good money for people in some parts of the world.
I didn’t discount what you said at all. I just clarified that mining is less of a concern these days. It isn’t even a few dollars a day at this point, it is pennies.
Monero is far more efficient to mine with CPUs, not GPUs.
Easy to mitigate with credit card signup and individual approval.
Go try to get an account with CoreWeave and you’ll see what I mean.
Appreciate the feedback and will definitely watch out!
Surprisingly little engagement with this post. I'm not in the market, but can people who use GPUs and didn't find this offering attractive explain why?
The pricing, or lack thereof, is a huge turn-off. We use Replicate and Runpod. On Replicate everyone just shares the GPU cloud; on Runpod I had to bundle 8 GPUs together into my own cloud. I can't see that this service solves my problems (queueing image generation and restarting jacked Stable Diffusion nodes).
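For context, this is the kind of babysitting we end up hand-rolling today (simplified sketch; the node addresses and restart hook are placeholders):

```python
# Simplified sketch of the watchdog we effectively run by hand: poll each
# Stable Diffusion node's health endpoint and restart the ones that stop
# responding. Node addresses and the restart hook are placeholders.
import time
import requests

NODES = ["http://10.0.0.11:8080", "http://10.0.0.12:8080"]  # placeholder IPs

def healthy(node: str) -> bool:
    """A node is healthy if its health endpoint answers quickly with a 2xx."""
    try:
        return requests.get(f"{node}/health", timeout=5).ok
    except requests.RequestException:
        return False

def restart(node: str) -> None:
    # Placeholder: in practice this is a provider API call or a reboot hook.
    print(f"restarting jacked node {node}")

while True:
    for node in NODES:
        if not healthy(node):
            restart(node)
    time.sleep(30)
```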
This is great feedback for us. We'll make it more transparent. We charge whatever the underlying provider charges + 5% when you deploy into one of our accounts. All provider prices are displayed on our main platform page.
It’s like Cloud66 but with the GPU in the headline, isn’t it? What’s the difference from Cloud66?
Any plans to add providers like TensorDock or Vast?
We're working on adding those as well.