This is all just too much Stockholm syndrome. Apple’s DX (developer experience) has always been utterly abysmal, and these continued blog posts just go to show how bad it is.
Proprietary technologies, poor or no documentation, silent deprecations and removals of APIs, a slow trickle of yearly WWDC releases that each enable just a bit more functionality, introducing newer, more entrenched ways to do things while still never allowing the basics that every other developer platform has made possible on day 1.
A broken UI system that is confusing and quickly becomes undebuggable once you do anything complex; it replaces Auto Layout, but over a decade of apps still have to transition over. The Combine framework? Is it dead? Is it alive? Networking APIs that require a third-party library because the native APIs don’t even handle the basics easily. Core Data, a complete mess of a local storage system, still not thread safe. Xcode, the only IDE Apple forces on you, and possibly the worst-rated app on the store. Every update is a nearly one-hour process of unxipping (yes, .xip) that needs verification; if you skip it, bad actors could inject code into your application from within a tampered copy of Xcode unbeknownst to you. And it crashes all the time. Swift? Ha. Unused everywhere but Apple platforms. Swift on the server is dead; IBM pulled out over 5 years ago, and no one wants to use Swift anywhere but Apple platforms, where it’s required.
The list goes on. Yet Apple developers love to be abused by corporate. Ever talk to DTS or sit in their 1-1 WWDC sessions? It’s one of the most condescending, out-of-touch experiences there is. “You have to use our API this way, and there’s this trick of setting it to this but then changing to that, and it’ll work. Undocumented, but now you know!”
Just leave the platform and make it work cross platform. That’s the only way Apple will ever learn that people don’t want to put up with their nonsense.
I don't disagree with you, but there simply isn't an alternative for pro audio developers. You go where the users are and the majority of the market (by revenue) are Mac users.
Now a lot of people may reply that Windows isn't that bad with ASIO (a third-party driver framework) or modern APIs like WASAPI (which is still lacking), or that PipeWire is changing things on Linux so you don't need JACK anymore (but god forbid you want to write PipeWire-native software in a language besides C, since the only documented API is a set of macros). Despite these changes, you have to go where the revenue is, which is on macOS.
> You go where the users are and the majority of the market (by revenue) are Mac users.
One of the worst things about Apple is how much time and effort they spend trying to lock you into their platform once you decide to support it. There's no excuse for it. Even once they have you on their system, they're doing everything in their power to lock you into their workflows and development environments. It's actually insane how shamelessly hostile OSX is.
On the old Mac OS, and in the early OS X days, the documentation was great. I dunno what happened to the documentation team.
Swift on the server is for Apple ecosystem developers to share code, much like all the reasons people apparently use JavaScript on the server instead of something saner.
> JavaScript on the server instead of something saner
JS on the server is actually really fast and well supported. Not really sure what you're driving at here.
Marketing pushed for features faster than engineering could build them to the former standard of quality and documentation.
It's honestly nuts that so many developers continue to try to make software on macOS. I understand the appeal of their current hardware, and I used to even be a big fan of the user experience, but it really seems like attempting to build software on macOS is like trying to build a house on a sandbar.
Apple has done nothing and continues to do nothing to engender any confidence in their platform as a development target.
> Apple has done nothing and continues to do nothing to engender any confidence in their platform as a development target.
You're missing the forest for the trees. Apple is very difficult to work with indeed, but they have a shit-ton of paying users. Still to this day, iOS is a better revenue maker than Android. Same for macOS compared to Windows. You want to make a living? Release on macOS. People there pay for software.
> Same for macOS compared to Windows.
This hasn't ever been my experience. Maybe if you're in a really specific market niche where most of the userbase is on Mac. Only 5% of users on Windows paying for the software still absolutely dwarfs 100% of Mac users paying for it. We have more sales on Linux than we do Mac.
Paying users is the key.
I’d imagine that developers have failed to attract paying users on Linux or Windows, and they know that people use their software via piracy.
I don't believe this. For iOS, sure. But for macOS? The number of people who use Windows dwarfs macOS.
Dwarfs macOS, sure; but the user base has been conditioned, like Android's, to never purchase anything. Why would they purchase anything, when most of their time is spent in the web browser and maybe a few Adobe apps?
iOS is 27% of the mobile market, but total revenue through the App Store in 2024 was $103 billion; for Google Play, it was $46 billion. Double the sales, from a market roughly a third of the size. Whether we like it or not, the open platform of Windows being a breeding ground for viruses and piracy, and the ongoing cultural expectations that set, has had a direct effect on people's willingness to buy Windows software from unknown publishers without a third party (Steam, Microsoft Store) vetting them.
I expect it's highly situational. Don't expect to sell many games on Mac. However, I do find it interesting that services like SetApp exist on Mac, but nobody has tried anything with that level of quality on Windows. SetApp also hasn't shown any interest in expanding to Windows.
The problem is that this could be easily applied to many things. To paraphrase:
It’s honestly nuts that so many developers continue to try to make software using a bloated JavaScript framework and thousands of Node dependencies.
That might also be true, but it misses the point: programming is not engineering; nothing is done to an engineer’s preferred standard, and it probably never will be.
It’s like being a CNC technician and complaining about how 90% of stuff on store shelves is plastic. A metal gallon jug of milk would be so much more durable! Less milk would be spilled from punctures! Production costs, and how they flow downstream, are being ignored.
(Edit for the downvotes: dispute me if you care enough, but literally nobody other than computer programmers ogles your clean code. Just like nobody other than CNC machinists is going to ogle a milk carton made on a lathe.)
It is still better engineered than dealing with the distribution of the day, reinventing the way to do sound, the graphics stack, the UI...
Once upon a time I thought either GNOME or KDE would win, and we could all enjoy the one Linux distribution. I was proven wrong.
Then again, I have been back on Windows as main OS since Windows 7.
I don't know why you are downvoted here.
The engineering standards, and churn within the Linux desktop, are hilariously bad.
Nobody who uses it has a right to complain about how node_modules has a thousand dependencies and makes your JavaScript app brittle. Their superior Linux desktop won't even be able to run the same software build outside of a Flatpak without crashes three years from now.
As for lack of documentation, good luck pulling together all the pieces you need to write a fully native Linux application without using Qt, GTK, or a cross-platform solution. A simple request, fairly achievable on a Mac. The lack of documentation outside that privileged route will make Apple's documentation look like a gold standard. Heck, even if you stay on the privileged route, you're probably still in for a bad time.
I’ve worked in two high profile companies with very prominent apps on the Apple App Store.
The team we talked to at Apple never once cared about our problems, but very often invited us to their office to discuss the latest feature they were going to announce at WWDC, to strong-arm us into supporting it. That was always the start and end of their engagement with us. We had to burn technical support tickets to ever get any insight into why their buggy software wasn’t working.
Apple’s dev relations are not serious people.
Some folks may have seen my Show HN post for Anukari here: https://news.ycombinator.com/item?id=43873074
In that thread, the topic of macOS performance came up. Basically, Anukari works great for most people on Apple silicon, including base-model M1 hardware. I've done all my testing on a base M1 and it works wonderfully. The hardware is incredible.
But to make it work, I had to implement an unholy abomination of a workaround to get macOS to increase the GPU clock rate for the audio processing to be fast enough. The normal heuristics that macOS uses for the GPU performance state don't understand the weird Anukari workload.
Anyway, I finally had time to write down the full situation, in terrible detail, so that I could ask for help getting in touch with the right person at Apple, probably someone who works on the Metal API.
Help! :)
> This is going to be a VERY LONG HIGHLY TECHNICAL post, so either buckle your seatbelt or leave while you still can.
Well, I read it all and found it not too long, extremely clear and well-written, and informative! Congrats on the writing.
I've never owned a Mac and my PC is old and lacks a serious GPU, so it's unlikely that I'll get to use Anukari soon, but I regret that very much, as it looks sooo incredibly cool.
Hope this gets resolved fast!
Interesting post & problem. I wonder if the idea of running the tasks on the same queue fails for the same reason you have the problem in the first place: a variable clock rate means it’s impossible to schedule precisely, and you end up aliasing your spin-stop time against the ideal time, depending on how the OS decided to clock the GPU. But that suggests your spin job isn’t complex enough to run the GPU at the highest clock, because if it were running at max, you should be able to reliably time the end of the spin even without adding a software PLL (which may not be a bad idea). I didn’t see a detailed explanation of how the spin is implemented, and I suspect a more thorough spin loop that consistently drives more of the GPU might be more effective at keeping the clock rate at max performance.
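For illustration, here's roughly what such a spin workload could look like in Swift with Metal. Everything here (kernel body, loop count, dispatch sizes) is my own assumption, not Anukari's actual implementation; it's just a sketch of the "waste work to hold the clock up" idea:

```swift
import Metal

// Trivial compute kernel dispatched in a loop purely to keep the GPU busy,
// so the OS performance heuristics hold a higher clock state.
let spinSource = """
#include <metal_stdlib>
using namespace metal;

kernel void spin(device atomic_uint *counter [[buffer(0)]]) {
    // Relaxed atomic adds so the compiler cannot optimize the loop away.
    for (uint i = 0; i < 10000; i++) {
        atomic_fetch_add_explicit(counter, 1, memory_order_relaxed);
    }
}
"""

let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeLibrary(source: spinSource, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "spin")!)
let queue = device.makeCommandQueue()!
let counter = device.makeBuffer(length: 4, options: .storageModeShared)!

var keepSpinning = true  // assumed flag; real code would clear it when real work resumes
while keepSpinning {
    let cmd = queue.makeCommandBuffer()!
    let enc = cmd.makeComputeCommandEncoder()!
    enc.setComputePipelineState(pipeline)
    enc.setBuffer(counter, offset: 0, index: 0)
    enc.dispatchThreads(MTLSize(width: 1024, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
    enc.endEncoding()
    cmd.commit()
    cmd.waitUntilCompleted()
}
```

A denser kernel (more threads, more ALU work per thread) would presumably be what "drives more of the GPU", at the cost of wasting more power.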
Did you try this entitlement? https://developer.apple.com/documentation/bundleresources/en...
wonder if com.apple.developer.sustained-execution also goes the other way around...
I missed the Show HN, but the first thing that came to mind after seeing it was that this looks like it would lend itself well to making some very creative ASMR soundscapes with immersive multidimensional audio. I selfishly hope you or one of your users will make a demo. Congrats on the project and I hope you receive help on your Apple issues.
[flagged]
It’s technical to over half of programmers who don’t need to know these types of details about hw/sw interactions.
Have you filed a feedback? Seems like the right next step.
The post opens with the following TL;DR, snipped for brevity:
> It would be great if someone can connect me with the right person inside Apple, or direct them to my feedback request FB17475838 as well as this devlog entry.
Feedbacks often go into a black hole unless:

1. A bunch of people file effectively the same bug report (unlikely here)
2. An individual Apple employee champions the issue internally
3. Someone makes a fuss on Twitter/X and it starts to go viral
Sounds like the OP is trying to get #2 to happen, which is probably his best bet.
Feedback is about as effective as a change.org petition asking some politician to please stop doing crimes. You'll be lucky to get an acknowledgement that something's a real issue after months.
Side note: Anukari should put out a Mick Gordon sound pack and share revenue with him. That dude is making some crazy, crazy stuff; his demo is awesome. Pairing up with artists once you have such a strong tool is good business and good for the world. If you like Mick Gordon. Which I do.
The problem with exposing an API for this is that far too many developers will force the highest performance state all the time. I don't know if there's really a good way to stop that and have the API at the same time.
Developers aren't (yet) abusing audio workgroups for all their thread pools to get P-core scheduling and higher priority. So it would follow that if an audio workgroup is issuing commands to the GPU, the GPU's downclocking should be gated by a timeout based on the last time a workgroup sent it work; a sketch of the existing workgroup mechanism is below.
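For context, this is roughly what the existing mechanism looks like: a real-time audio thread joining an audio device's os_workgroup. The property lookup and the Swift bridging of the C workgroup API here are my best-effort assumptions rather than verified production code:

```swift
import CoreAudio
import Foundation

// Sketch: fetch the device's I/O thread workgroup and join the calling
// thread to it, so the scheduler treats the thread as part of the
// real-time audio deadline. Error handling is deliberately minimal.
func joinIOWorkgroup(of deviceID: AudioObjectID) {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyIOThreadOSWorkgroup,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)

    var workgroup: os_workgroup_t?  // assumes the C typedef bridges to Swift as written
    var size = UInt32(MemoryLayout<os_workgroup_t?>.size)
    guard AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &workgroup) == noErr,
          let wg = workgroup else { return }

    var token = os_workgroup_join_token_s()
    if os_workgroup_join(wg, &token) == 0 {
        // ... real-time work on this thread ...
        os_workgroup_leave(wg, &token)
    }
}
```

The comment's suggestion amounts to extending this: GPU command queues fed from a joined workgroup thread would keep the GPU's clock from dropping until some timeout after the last submission.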
GPU audio is extremely niche these days, though with the company mentioned in TFA releasing their SDK recently it may become more popular. Although I don't buy it, because if you're doing things on the GPU you're saying you don't care about latency, so just bump your I/O buffer sizes.
There is already an unending number of ways for just one app to waste charge on battery-powered devices. It all already relies on developers not running energy-intensive tasks unnecessarily, whether intentionally or accidentally. Adding one more API that has the potential to waste energy if used inappropriately won't change that.
macOS also has a bunch of mechanisms to inform the user about this! IIRC the battery menu has entries for apps draining a lot of power (iTerm always shows up there for me!)
My potentially incorrect understanding is that iTerm generally only shows up when the processes you run inside it are consuming a bunch of energy. It only shows up in the battery menu for me when I’m running simulations or other big CPU intensive stuff on the command line.
The article mentions game mode, which is a feature of the latest Apple operating systems that is optimised for cases like this. Game mode pops up a notification when it’s enabled, which most applications wouldn’t want to happen. So far I haven’t seen anything abuse it.
But as the author mentions, they can already do it by having a process spin indefinitely. If they want to abuse it, they already can.
It's better to trust; the number of people who won't abuse it far outweighs the number who will.
Abusing the API would still be more efficient than running fake busy workloads to do the same, which apps can already do without the API (or the permissions the API could require).
A manual permission? Maybe hidden somewhere; it's probably necessary for very niche apps.
And default deny at the OS level for Zoom, Teams and web browsers :)
>The Metal profiler has an incredibly useful feature: it allows you to choose the Metal “Performance State” while profiling the application. This is not configurable outside of the profiler.
Seems like there might be a private API for this. Maybe it's easier to go the reverse engineering route? Unless it'll end up requiring some special entitlement that you can't bypass without disabling SIP.
There has to be a private API for this; the post says:
> The Metal profiler has an incredibly useful feature: it allows you to choose the Metal “Performance State” while profiling the application. This is not configurable outside of the profiler.
How would the Metal profiler be able to do that if not for a private API? (Could some debugging tool find out what's going on by watching the profiler?)
Lol I just read the parent comment without noticing that they were quoting the exact same sentence from the blog! ;-)
Sorry about that!
It's an interesting trade-off. For decades the answer to having a reliable Windows computer has been to turn off as many power-saving features as possible. Power saving on USB ports, for instance, can make your machine crash. Let your CPU state drop to the minimum and you'll find your $3000 desktop computer takes about a second to respond to keypresses. The power savings might not be real, but the crashes and poor performance are very real.
Best way to do this:
1. Go through WWDC videos and find the engineer who seems the most knowledgeable about the issue you're facing.
2. Email them directly with this format: mthomson@apple.com for Michael Thomson.
Or his brother Pichael at pthomson.
One thing I don’t understand: if latency is important for this use case, why isn’t the CPU busy preparing the next GPU ‘job’ while a GPU ‘job’ is running?
Is that a limitation of the audio plug-in APIs?
That's pipelining: it's good for throughput, but it sacrifices latency. Audio is not a continuous bit stream but a series of small packets. To begin working on the next one on the CPU while the previous one is on the GPU requires two buffers in flight, which necessarily means higher latency. (For example, at 48 kHz with 128-sample buffers, each extra buffer in flight adds about 2.7 ms.)
I don’t see that. If the CPU part starts processing packet #2 while the GPU processes packet #1, rather than after it has done so, it will have the data to send to the GPU for packet #2 ready earlier, so it can send it earlier, potentially the moment the GPU has finished processing packet #1 (if the GPU is powerful enough, possibly even before that).
That’s why I asked about the plug-in APIs. They may have to be async, with functions not returning when they’re fully done processing a ‘packet’ but as soon as they can accept more data, which may be earlier.
Audio is already asynchronous.
But in general, no: you can't begin processing a buffer before finishing the previous buffer, because the processing is stateful and you would introduce a data race. And you can't synchronize the state with something simple like a lock, because locking on the audio playback path is forbidden in real time.
You can buffer ahead of time, but this introduces latency, which again is the entire thing people are trying to avoid. You cannot do things ahead of time without introducing delay, because of causality. Essentially, you can't start processing packet #2 while packet #1 is in flight because packet #2 _hasn't happened yet_.
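To make the statefulness concrete, here's a minimal sketch (a one-pole low-pass filter with an arbitrary assumed coefficient): the filter state at the end of buffer N is the input state for buffer N+1, so the two buffers can't be processed concurrently without changing the output.

```swift
// One-pole low-pass filter: `z1` persists across buffers, so buffer N+1's
// output depends on where buffer N left off.
struct OnePole {
    var z1: Float = 0     // filter state carried from buffer to buffer
    let a: Float = 0.1    // smoothing coefficient (assumed value)

    mutating func process(_ buffer: inout [Float]) {
        for i in buffer.indices {
            z1 += a * (buffer[i] - z1)  // new state depends on old state
            buffer[i] = z1
        }
    }
}

// Buffers must be processed in order: packet #2's result
// depends on packet #1's final z1.
var filter = OnePole()
var packet1: [Float] = [1, 1, 1, 1]
var packet2: [Float] = [0, 0, 0, 0]
filter.process(&packet1)
filter.process(&packet2)
```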
This might trick the heuristics in the right direction: feed the GPU a bunch of small tasks (i.e., with a small number of samples each) instead of big tasks.
I have zero need for this app, but it's so cool. Apps like these bring the "fun" back into computing. I don't mean there's no fun at the moment, but it reminds me of the old days, with more graphical and experimental programs floating around, even the demoscene.
Be careful what you wish for here. Knowing Apple, they will stonewall any API requests, and may very well shut your app out for the private API workarounds described.
https://xkcd.com/1172/ feels a lot like the workaround OP describes
That's more like "I had to trick the OS into thinking that spacebar was held for my application to run at all".
>Any MTLCommandQueue managed by an Audio Workgroup thread could be treated as real-time and the GPU clock could be adjusted accordingly.
>The Metal API could simply provide an option on MTLCommandQueue to indicate that it is real-time sensitive, and the clock for the GPU chiplet handling that queue could be adjusted accordingly.
Real-time scheduling on a GPU and what the GPU is clocked at are separate concepts. From the article, it sounds like the issue is with clock speeds, not with how the work is being scheduled. It sounds like you need something else to hint that a higher GPU clock is needed.
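To make the distinction concrete, here is a sketch of what a clock-rate hint (as opposed to a scheduling priority) might look like. Note the hinted property is entirely hypothetical; no such flag exists in Metal today:

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

// HYPOTHETICAL — not a real Metal API. The idea would be a hint telling
// the clock governor (not the scheduler) that work on this queue is
// latency-critical, so the performance state should stay high:
// queue.performanceStateHint = .sustainedHigh
```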
[flagged]
Some of them probably do... This is still a funny comment though