Just ordered a $12k mac studio w/ 512GB of integrated RAM.
Can't wait for it to arrive and crank up LM Studio. It's literally the first install. I'm going to download it with Safari.
LM Studio is newish and not a perfect interface yet, but it's fantastic at what it does, which is bringing local LLMs to the masses without them having to know much.
There is another project that people should be aware of: https://github.com/exo-explore/exo
Exo is this radically cool tool that automatically clusters all hosts on your network running Exo and uses their combined GPUs for increased throughput.
Like HPC environments, you're going to need ultra-fast interconnects, but it's just IP-based.
I love LM Studio, but I'd never waste $12k like that. The memory bandwidth is too low, trust me.
Get the RTX Pro 6000 for $8.5k with double the bandwidth. It will be way better.
Why would they pay 2/3 of the price for something with 1/5 of the RAM?
The whole point of spending that much money, for them, is to run massive models, like the full R1, which the Pro 6000 can't.
Because waiting forever for initial prompt processing, with a realistic number of MCP tools enabled on a prompt, is going to suck without the most bandwidth possible.
And you are never going to sit around waiting for anything larger than the 96+ GB of VRAM that the RTX Pro has.
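For context on the bandwidth argument: token generation is largely memory-bandwidth-bound, so a crude back-of-envelope estimate (my own, using approximate spec-sheet bandwidth figures, so treat the numbers as ballpark) is bandwidth divided by model size:

```python
def est_decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    # Rule of thumb: generating one token reads every weight from memory
    # once, so decode speed is roughly memory bandwidth / model size.
    return bandwidth_gb_s / model_size_gb

# Approximate published bandwidth figures (ballpark, not gospel):
mac_studio = est_decode_tokens_per_sec(819, 350)  # M3 Ultra, ~350 GB model
rtx_pro = est_decode_tokens_per_sec(1792, 60)     # RTX Pro 6000, ~60 GB model
```

The trade-off in one line: the Mac can hold a huge model at a few tokens/sec, while the GPU is an order of magnitude faster on anything that fits in its VRAM.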
If you’re using it for background tasks and not coding it’s a different story
If the MCP tools come first in the conversation, it should be technically possible to cache the activations, so you do not have to recompute them each time.
(Replying to both siblings questioning this)
If the primary use case is input heavy, which is true of agentic tools, there’s a world where partial GPU offload with many channels of DDR5 system RAM leads to an overall better experience. A good GPU will process input many times faster, and with good RAM you might end up with decent output speed still. Seems like that would come in close to $12k?
And there would be no competition for models that do fit entirely inside that VRAM, for example Qwen3 32B.
You can't run DeepSeek-V3/R1 on the RTX Pro 6000, not to mention the upcoming 1-million-context Qwen models, or the current Qwen3-235B.
I'd love to host my own LLMs but I keep getting held back from the quality and affordability of Cloud LLMs. Why go local unless there's private data involved?
Offline is another use case.
Nothing like playing around with LLMs on an airplane without an internet connection.
If I can afford a seat above economy with room to actually, comfortably work on a laptop, I can afford the couple bucks for wifi for the flight.
If you are assuming that your Hainan airlines flight has wifi that isn't behind the GFW, even outside of cattle class, I have some news for you...
Getting around the GFW is trivially easy.
I'm using it on MacBook Air M1 / 8 GB RAM with Qwen3-4B to generate summaries and tags for my vibe-coded Bloomberg Terminal-style RSS reader :-) It works fine (the laptop gets hot and slow, but fine).
Probably should just use llama.cpp server/ollama and not waste a gig of memory on Electron, but I like GUIs.
8 GB of RAM with local LLMs in general is iffy: an 8-bit quantized Qwen3-4B is 4.2 GB on disk and likely more in memory. 16 GB is usually the minimum to run decent models without resorting to heavy quantization.
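The arithmetic behind that: weight memory is roughly parameter count times bytes per weight, and the on-disk/in-memory figure ends up a bit higher once you add embeddings, KV cache, and runtime overhead. A quick sketch:

```python
def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    # Weights only: params * (bits / 8) bytes. Real usage is higher once
    # you add the KV cache and runtime overhead.
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

# Qwen3-4B at different quantization levels (weights only):
q8 = weight_memory_gb(4.0, 8)  # ~4 GB, consistent with the 4.2 GB on disk
q4 = weight_memory_gb(4.0, 4)  # ~2 GB
```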
But 8GB of Apple RAM is 16GB of normal RAM.
https://www.pcgamer.com/apple-vp-says-8gb-ram-on-a-macbook-p...
Interestingly it was AI (Apple Intelligence) that was the primary reason Apple abandoned that hedge.
I concur. I just upgraded from an M1 Air with 8 GB to an M4 with 24 GB. Excited to run bigger models.
I love LM Studio. It's a great tool. I'm waiting for another generation of Macbook Pros to do as you did :).
> I'm going to download it with Safari
Oof you were NOT joking
Safari to download LM Studio. LM Studio to download models. Models to download Firefox.
The modern Ninite.
I've been using openwebui and am pretty happy with it. Why do you like lm studio more?
Not OP, but with LM Studio I get a chat interface out-of-the-box for local models, while with openwebui I'd need to configure it to point to an OpenAI API-compatible server (like LM Studio). It can also help determine which models will work well with your hardware.
LM Studio isn't FOSS though.
I did enjoy hooking up OpenWebUI to Firefox's experimental AI Chatbot. (browser.ml.chat.hideLocalhost to false, browser.ml.chat.provider to localhost:${openwebui-port})
Open WebUI can leverage the built in web server in LM Studio, just FYI in case you thought it was primarily a chat interface.
I recently tried Open WebUI, but it was so painful to get it to run with a local model. That "first run experience" of LM Studio is pretty fire in comparison. Can't really speak to actually working with it yet, though; still waiting for the 8 GB download.
Interesting. I run my local LLMs through Ollama, and it's zero trouble to get that working in Open WebUI as long as the Ollama server is running.
Nice. Ironically well suited for non-Apple Intelligence.
What are you going to do with the LLMs you run?
Currently I'm using Gemini 2.5 and Claude 3.7 Sonnet for coding tasks.
I'm interested in using models for code generation, but I'm not expecting much in that regard.
I'm planning to attempt fine tuning open source models on certain tool sets, especially MCP tools.
I already got one of these. I’m spoiled by Claude 4 Opus; local LLMs are slower and lower quality.
I haven’t been using it much. All it has on it is LM Studio, Ollama, and Stats.app.
> Can't wait for it to arrive and crank up LM Studio. It's literally the first install. I'm going to download it with safari.
lol, yup. same.
Yup, I'm spoiled by Claude 3.7 Sonnet right now. I had to stop using Opus for plan mode in my agent because it is just so expensive. I'm using Gemini 2.5 Pro for that now.
I'm considering ordering one of these today: https://www.newegg.com/p/N82E16816139451?Item=N82E1681613945...
It looks like it will hold 5 GPUs, with a single slot open for InfiniBand.
Then local models might be lower quality, but it won't be slow! :)
I was using Claude 3.7 exclusively for coding, but it sure seems like it got worse suddenly about 2–3 weeks back. It went from writing pretty solid code I had to make only minor changes to, to being completely off the rails: altering files unrelated to my prompt, undoing fixes from the same conversation, reinventing db access, and ignoring coding 'standards' established in the existing codebase. It became so untrustworthy I finally gave OpenAI o3 a try, and honestly, I was pretty surprised how solid it has been. I've been using o3 since, and I find it generally does exactly what I ask, especially if you have a well-established project with plenty of code for it to reference.
Just wondering if Claude 3.7 has seemed different lately for anyone else? It was my go-to for several months, and I'm no fan of OpenAI, but o3 has been rock solid.
Could be the prompt and/or tool descriptions in whatever tool you are using Claude in that degraded. Have definitely noticed variance across Cursor, Claude Code, etc even with the exact same models.
Prompts + tools matter.
Me too. (re: Claude; I haven’t switched models.) It sucks because I was happily paying >$1k/mo in usage charges and then it all went south.
I’m firehosing about $1k/mo at Cursor on pay-as-you-go and am happy to do it (it’s delivering 2-10k of value each month).
What cards are you gonna put in that chassis?
The GPUs are the hard things to find unless you want to pay like 50% markup
That’s just what they cost; MSRP is irrelevant. They’re not hard to find, they’re just expensive.
MCP terminology is already super confusing, but this seems to just introduce "MCP Host" randomly in a way that makes no sense to me at all.
> "MCP Host": applications (like LM Studio or Claude Desktop) that can connect to MCP servers, and make their resources available to models.
I think everyone else is calling this an "MCP Client", so I'm not sure why they would want to call themselves a host - makes it sound like they are hosting MCP servers (definitely something that people are doing, even though often the server is run on the same machine as the client), when in fact they are just a client? Or am I confused?
MCP Host is terminology from the spec. It's the software that makes LLM calls, builds prompts, interprets tool-call requests and performs them, etc.
So it is, I stand corrected. I googled "MCP host" and the LM Studio link was the first result.
Some more discussion on the confusion here https://github.com/modelcontextprotocol/modelcontextprotocol... where they acknowledge that most people call it a client and that that's ok unless the distinction is important.
I think host is a bad term for it though as it makes more intuitive sense for the host to host the server and the client to connect to it, especially for remote MCP servers which are probably going to become the default way of using them.
The initial experience with LMStudio and MCP doesn't seem to be great, I think their docs could do with a happy path demo for newcomers.
Upon installing, the first model offered is google/gemma-3-12b, which in fairness is pretty decent compared to others.
It's not obvious how to show the right sidebar they're talking about: it's the flask icon, which turns into a collapse icon when you click it.
I set the MCP up with playwright, asked it to read the top headline from HN and it got stuck on an infinite loop of navigating to Hacker News, but doing nothing with the output.
I wanted to try it out with a few other models, but figuring out how to download new models isn't obvious either; it turned out to be the search icon. Anyway, other models didn't fare much better; some outright ignored the tools despite advertising 'tool use' capability.
Gemma3 models can follow instructions but were not trained to call tools, which is the backbone of MCP support. You would likely have a better experience with models from the Qwen3 family.
That latter issue isn't an LM Studio issue, though; it's a model issue.
Claude going MCP-over-remote kind of normalized the protocol for inference routing. Now, with LM Studio running as a local MCP host, you can just tunnel it (cloudflared/ngrok), drop a tiny gateway script, and boom, your laptop basically acts like an MCP node in a hybrid mesh. Short prompts hit Qwen locally, heavier ones go to Claude. With the same payload and interface, you can actually get multi-host local inference clusters wired together by MCP.
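The "tiny gateway script" could be as simple as a routing rule. A hypothetical sketch (the endpoint URLs and the 512-token budget are placeholders I made up, not anything from LM Studio or Claude):

```python
# Hypothetical routing rule: short prompts go to a local endpoint,
# heavier ones to a remote provider. URLs are placeholders.

LOCAL_URL = "http://localhost:1234/v1/chat/completions"    # e.g. local host
REMOTE_URL = "https://remote.example/v1/chat/completions"  # e.g. hosted model

def pick_endpoint(prompt: str, local_token_budget: int = 512) -> str:
    # Crude token estimate: ~4 characters per token for English text.
    est_tokens = len(prompt) / 4
    return LOCAL_URL if est_tokens <= local_token_budget else REMOTE_URL
```

A real gateway would also consider tool requirements and context length, but length-based routing is the simplest starting point.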
Great to see more local AI tools supporting MCP! Recently I've also added MCP support to recurse.chat. When running locally (llama.cpp and Ollama), local models still need to catch up in tool-calling capabilities (for example, tool-call accuracy and parallel tool calls) compared to the well-known providers, but it's starting to get pretty usable.
hey! we're building Cactus (https://github.com/cactus-compute), effectively Ollama for smartphones.
I'd love to learn more about your MCP implementation. Wanna chat?
LMStudio works surprisingly well on M3 Ultra 64gb and 27b models.
Nice to have a local option, especially for some prompts.
I’ve been wanting to try LM Studio but I can’t figure out how to use it over local network. My desktop in the living room has the beefy GPU, but I want to use LM Studio from my laptop in bed.
Any suggestions?
Use an OpenAI-compatible API client on your laptop and LM Studio on your desktop, and point the client at the desktop. LM Studio's server can serve an LLM on a desired port using the OpenAI-style chat-completions API. You can also install Open WebUI on the desktop, connect to it via a web browser, and configure it to use the LM Studio connection for its LLM.
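Concretely, the client just POSTs an OpenAI-style chat-completion request to the desktop's address. A minimal sketch of the request shape; the host IP, port, and model name are placeholders (LM Studio shows the real values in its server tab):

```python
import json

host = "192.168.1.50"  # desktop running LM Studio (placeholder IP)
port = 1234            # placeholder port
url = f"http://{host}:{port}/v1/chat/completions"

# OpenAI-style chat-completion request body.
payload = {
    "model": "local-model",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Hello from the laptop in bed!"},
    ],
}
body = json.dumps(payload)  # what the client POSTs to `url`
```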
[>_] -> [.* Settings] -> Serve on local network ( o)
Any OpenAI-compatible client app should work; use the IP address of the host machine as the API server address. The API key can be bogus or blank.
What models are you using on LM Studio, for what task, and with how much memory?
I have a 48 GB MacBook Pro, and Gemma3 (one of the abliterated ones) fits my non-code use case perfectly (generating crime stories where the reader tries to guess the killer).
For code, I still call Google to use Gemini.
I've been using the Google Gemma QAT models in 4B, 12B, and 27B with LM Studio with my M1 Max. https://huggingface.co/lmstudio-community/gemma-3-12B-it-qat...
I would recommend Qwen3 30B A3B for you. The MLX 4bit DWQ quants are fantastic.
Is there an app that uses OpenRouter / Claude or something locally but has MCP support?
I’m looking for something like this too. Msty is my favourite LLM UI (supports remote + local models) but unfortunately has no MCP support. It looks like they’re trying to nudge people into their web SaaS offering which I have no interest in.
I've been considering building this. Havent found anything yet.
VS Code with Roo Code... just use the chat window :S
LM Studio has quickly become the best way to run local LLMs on an Apple Silicon Mac: no offense to vllm/ollama and other terminal-based approaches, but LLMs have many levers for tweaking output and sometimes you need a UI to manage it. Now that LM Studio supports MLX models, it's one of the most efficient too.
I'm not bullish on MCP, but at the least this approach gives a good way to experiment with it for free.
Ollama doesn’t even have a way to customize the context size per model and persist it. LM studio does :)
This isn't true. You can `ollama run {model}`, `/set parameter num_ctx {ctx}` and then `/save`. Recommended to `/save {model}:{ctx}` to persist on model update
I just wish they did some facelifting of the UI. Right now it's too colorful for me, with many different shades of similar colors. I wish they'd copy a color palette from Google AI Studio, or from Trae or PyCharm.
> I'm not bullish on MCP
You gotta help me out. What do you see holding it back?
tl;dr: the current hype around it is a solution looking for a problem, and at a high level it's just a rebrand of the Tools paradigm.
It's "Tools as a service", so it's really trying to make tool calling easier to use.
Near as I can tell it's supposed to make calling other people's tools easier. But I don't want to spin up an entire server to invoke a calculator. So far it seems to make building my own local tools harder, unless there's some guidebook I'm missing.
You're not spinning up a whole server, lol. Most MCPs can be run locally and talked to over stdio; they're just apps that the LLM can call. What they talk to or do is up to the MCP writer. It's easier to have an MCP that communicates what it can do and handles the back-and-forth than to write non-standard middleware to handle, say, calls to an API, or AppleScript, or VMware, or something else...
It's a protocol that doesn't dictate how you are calling the tool. You can use in-memory transport without needing to spin up a server. Your tool can just be a function, but with the flexibility of serving to other clients.
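A toy illustration of the "your tool can just be a function" point, using a made-up dispatcher rather than the actual MCP SDK (in real MCP, the call arrives as a JSON-RPC message over stdio or HTTP, but the tool itself is still just a function):

```python
# Toy sketch (not the MCP SDK): a "tool" is a function plus a name the
# model can reference; a dispatcher routes tool-call requests to it.

def calculator(expression: str) -> str:
    # Deliberately tiny: only handles "a <op> b" style input.
    a, op, b = expression.split()
    ops = {"+": lambda x, y: x + y, "-": lambda x, y: x - y}
    return str(ops[op](float(a), float(b)))

TOOLS = {"calculator": calculator}

def handle_tool_call(name: str, arguments: dict) -> str:
    # In real MCP this request would arrive over a transport; here it's
    # just a local function call.
    return TOOLS[name](**arguments)

result = handle_tool_call("calculator", {"expression": "2 + 3"})
```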
LM Studio is quite good on Windows with Nvidia RTX also.
Care to elaborate? I have an RTX 4070 with 12 GB VRAM + 64 GB RAM; I wonder what models I can run with it. Anything useful?
It'll be nice to have the MCP servers exposed through LM Studio's OpenAI-like endpoints.
I wish LM Studio had a pure daemon mode. It's better than ollama in a lot of ways but I'd rather be able to use BoltAI as the UI, as well as use it from Zed and VSCode and aider.
What I like about ollama is that it provides a self-hosted AI provider that can be used by a variety of things. LM Studio has that too, but you have to have the whole big chonky Electron UI running. Its UI is powerful but a lot less nice than e.g. BoltAI for casual use.
Oh, that horrible Electron UI. Under Windows it pegs a core on my CPU at all times!
If you're just working as a single user via the OpenAI protocol, you might want to consider koboldcpp. It bundles a GUI launcher, then starts in text-only mode. You can also tell it to just run a saved configuration, bypassing the GUI; I've successfully run it as a system service on Windows using nssm.
https://github.com/LostRuins/koboldcpp/releases
Though there are a lot of roleplay-centric gimmicks in its feature set, its context-shifting feature is singular. It caches the intermediate state used by your last query, extending it to build the next one. As a result you save on generation time with large contexts, and also any conversation that has been pushed out of the context window still indirectly influences the current exchange.
There's a "headless" checkbox in settings->developer
Still, you need to install and run the AppImage at least once to enable the "lms" CLI, which can be used afterwards. It would be nice to have a completely GUI-less installation/use method too.
The UI is the product. If you just want the engine, use mlx-omni-server (for MLX) or llama-swap (for GGUF) and huggingface-cli (for model downloads).
good.
Not to be confused with FL Studio
Closed source - won't touch.
I use https://ollamac.com/ to run Ollama and it works great. It has MCP support also.
Is this related to the open source ollamac at all? https://github.com/kevinhermawan/Ollamac
That's clearly your own product (it links to Koroworld in the footer and you've posted about that on Hacker News in the past).
Are you sharing any of your revenue from that $79 license fee with the https://ollama.com/ project that your app builds on top of?
It is even worse; they are offering a commercial product under the same name of the open source project it is based on: https://github.com/kevinhermawan/Ollamac?tab=readme-ov-file#... and https://github.com/gregorym/ollamac-pro/issues/1.
The UI isn't even as nice as LM Studio's, lol. WTF, and they're gonna charge $79?!