Everybody knows the joke about standards and adding a new one, but I must say if I trust one company to get it right, it is Bitwig. The way they implemented modulations, MIDI Polyphonic Expression (MPE), etc. within their own DAW shows that they really do care about creating good solutions with a fresh look — something that I would not trust AVID or Steinberg with at all.
https://news.ycombinator.com/item?id=31759128 10 days ago|135 comments
quite some time ago, i used to write audio plugins. i'd started by using WDL-OL[1], but quickly ended up re-writing most of the front-end code to use Direct2D/Cocoa + OpenGL, as i began to run into limitations with the rendering capabilities. in time, i ended up rewriting most of the framework in order to take advantage of VST3 and AAX's more.. separated APIs.
Steinberg did some really good work with the design of VST3, and it's frustrating that it never really took off. it's the only plugin API that can actually guarantee sample-accuracy. the initial design required UI and processor separation.
it turns out that latter point seems to trip up many developers. contrast with AAX (which i'm also keen on): AAX allows for even more modular architectures (which you'd need, if you wanted to use their DSP hardware), however AVID - perhaps learning from VST3's mistakes - made "singleton" architectures allowable straight away.
a subsequent release of VST3 allowed for singleton architectures, given that so many people seemed to be put off by the (imo slightly) increased complexity, but it was too late. at least until recently, people were still writing VST2 plugins, and VST3 support across DAWs is not as extensive as VST2's.
EDIT: ah! it looks like CLAP has delivered something i've been after in an audio API for some time: it appears to allow for dynamic parameters. i'm surprised it took this long! overall it looks well written too. i wonder if this will gain any traction in the DAW space.
Besides the fact that you're the first person I've ever heard say something positive about VST3 (I'm quite active in the audio developer community, frequently doing contract work for the major players), this part is incorrect:
> it's the only plugin API that can actually guarantee sample-accuracy.
This is part of both the AudioUnit and AAX specification, and there's nothing stopping a host implementing sample accurate event processing (audio processing is always sample-accurate) for VST2.
> processor separation
???. Again, AU and AAX are designed with this in mind, but there's nothing about VST3 that guarantees this. A VST3 host can throw _everything_ in its own thread if it wants.
> VST3 support across DAWs is not as extensive as VST2's
Every major DAW that supports VST2 also supports VST3, except Cubase/Nuendo, which are VST3-only now. I suppose if you include some smaller/older software with dozens of users? Even products like Renoise and Radium support VST3, though.
> [sample accuracy] is part of both the AudioUnit and AAX specification
heh, i did wonder if i'd get picked up on this :)
whilst practically speaking often true, this is actually not the case. w.r.t. AAX: during parameter automation events that involve a continuous change (e.g., a linear increase in gain), the process block size is reduced to 64 samples (if i recall correctly), and parameters are interpolated by the host and fed into the processor as fixed values for that time period. it means that it's not possible to answer the question: "at sample n, what is the value of this parameter?"
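a sketch of the behaviour i'm describing, with made-up names, and with the 64-sample figure taken from my (possibly faulty) recollection above: the host flattens a continuous ramp into one held value per fixed-size slice, so "what is the parameter at sample n?" is only answerable to slice granularity:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of host-side automation flattening: a linear ramp
// from `start` to `end` over `totalSamples` is chopped into fixed-size
// slices, and the processor sees ONE interpolated value per slice.
constexpr std::size_t kSliceSize = 64; // block size the host reportedly drops to

std::vector<double> flattenRamp(double start, double end, std::size_t totalSamples) {
    std::vector<double> perSlice;
    for (std::size_t offset = 0; offset < totalSamples; offset += kSliceSize) {
        // the host interpolates at the slice boundary and holds that value
        double t = static_cast<double>(offset) / static_cast<double>(totalSamples);
        perSlice.push_back(start + (end - start) * t);
    }
    return perSlice;
}
```

so a 256-sample ramp reaches the processor as just four fixed values; anything finer than the slice boundary is lost.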
> Again, AU and AAX are designed with this in mind
you're right, i omitted AU (only because i didn't have fond memories of it), but i stated AAX allows for this type of separation.
> but there's nothing about VST3 that guarantees this.
which is the same guarantee that AAX and AU give you. i'm not talking about how the host is implemented, i'm talking about what the API models. VST3 - like AAX - defines its presentation and processing interfaces completely separately. VST3 (initially) didn't allow for a combined processor and interface model, whereas AAX did (through its "singleton" description).
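to make the distinction concrete, a hedged sketch (these are NOT the real SDK classes, just the shape of the model): presentation and processing are distinct interfaces, and a "singleton" plugin is simply one object implementing both:

```cpp
#include <cassert>

// Hypothetical sketch of the separation the VST3/AAX APIs model:
// presentation and processing are distinct interfaces that only
// communicate through the host.
struct IEditController {                       // presentation side
    virtual void setParamNormalized(int id, double value) = 0;
    virtual ~IEditController() = default;
};
struct IAudioProcessor {                       // processing side
    virtual void process(float* buffer, int numSamples) = 0;
    virtual ~IAudioProcessor() = default;
};

// A "singleton" plugin implements both on one object -- the form AAX
// allowed from day one and VST3 only permitted in a later revision.
struct SingletonPlugin : IEditController, IAudioProcessor {
    double gain = 1.0;
    void setParamNormalized(int, double v) override { gain = v; }
    void process(float* buf, int n) override {
        for (int i = 0; i < n; ++i) buf[i] *= static_cast<float>(gain);
    }
};

// tiny demo: automate gain to 0.5, process one sample of value 2.0
double demoProcessedSample() {
    SingletonPlugin p;
    p.setParamNormalized(0, 0.5);
    float buf[1] = {2.0f};
    p.process(buf, 1);
    return buf[0];
}
```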
> Every major DAW that supports VST2 also supports VST3
you're right, i was being pessimistic: at the time, i wanted to drop VST2 support, but couldn't because of ableton live, although i see they now support VST3. but you still have lots of dinosaurs; i still had to support RTAS...
> whilst practically speaking often true, this is actually not the case. w.r.t. AAX: during parameter automation events that involve a continuous change (e.g., a linear increase in gain), the process block size is reduced to 64 samples (if i recall correctly), and parameters are interpolated by the host and fed into the processor as fixed values for that time period. it means that it's not possible to answer the question: "at sample n, what is the value of this parameter?"
This is correct if you implement parameter events naively (i.e. following the documentation!), but there's a way around it to get per-sample events. The unfortunate thing here is that it's not possible to talk about the protocol openly, AND a lot of devs use JUCE (or whatever wrapper) which doesn't support AAX's capabilities in this area.
The bigger issue here is that VST3 doesn't pass MIDI events (CC/RPN/NRPN) directly to/through the plugin, which is a major hassle if you're doing anything that heavily relies on those facilities like scripters for orchestral plugins.
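For readers unfamiliar with the workaround: a VST3 plugin typically has to expose proxy parameters and let the host translate each CC into a parameter change, which is exactly the indirection that hurts CC-heavy plugins. Sketched here with hypothetical types (not the SDK's actual MIDI-mapping interface):

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Hypothetical sketch of the CC-to-parameter indirection: the plugin
// can't receive CC events directly, so it publishes one proxy parameter
// per controller number and the host converts incoming CCs into
// parameter changes on those proxies.
struct CcProxyMap {
    std::map<uint8_t, int> ccToParam;  // controller number -> proxy param id

    int paramForCc(uint8_t cc) const {
        auto it = ccToParam.find(cc);
        return it == ccToParam.end() ? -1 : it->second;
    }
};
```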
> This is correct if you implement parameter events naively (i.e. following the documentation!)
interesting, i tried to solve this issue - i ended up talking to rob major at AVID (as far as i can tell, the lead engineer behind AAX), over this exact matter. he didn’t think there was a way around this.
this was 2017 though, so perhaps it’s changed. if you have something i could read, i’m interested!
> It's the only plugin API that can actually guarantee sample-accuracy.
Nothing about VST3 guarantees sample accurate automation, next to no plugins implement it correctly, and users don't seem to care all that much about it.
It's actually hard to write the process loop correctly since VST3 doesn't hand you a list of events and sample offsets, it hands you a list of queues of events and sample offsets so you need to buffer internally and sort to avoid allocation in realtime.
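To illustrate the shape of the problem (these types are mine, not the SDK's IParamValueQueue): each parameter arrives as its own queue of (sampleOffset, value) points, so producing a single time-ordered stream means merging the queues yourself:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical event types modelling the shape of the data a VST3-style
// host hands you: one queue per parameter, each sorted by offset, but
// not merged across parameters.
struct ParamPoint { int sampleOffset; int paramId; double value; };
using ParamQueue = std::vector<ParamPoint>;

// Merge per-parameter queues into a single timeline sorted by offset.
// (A realtime-safe version would use preallocated storage; sorting a
// growing vector here is for illustration only.)
std::vector<ParamPoint> mergeQueues(const std::vector<ParamQueue>& queues) {
    std::vector<ParamPoint> timeline;
    for (const auto& q : queues)
        timeline.insert(timeline.end(), q.begin(), q.end());
    std::stable_sort(timeline.begin(), timeline.end(),
        [](const ParamPoint& a, const ParamPoint& b) {
            return a.sampleOffset < b.sampleOffset;
        });
    return timeline;
}
```

The allocation in the sketch is precisely what you have to engineer away on the audio thread, which is why getting this right is more work than it looks.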
In practice most plugins just dump all the parameter handling to the top of the process callback and ignore it. This is what JUCE does, for example.
> Nothing about VST3 guarantees sample accurate automation, next to no plugins implement it correctly, and users don't seem to care all that much about it.
i’m not entirely sure what you mean here;
> It's actually hard to write the process loop correctly
i agree that it was a bit of a leap from VST2, but i don’t think it’s hard
you get a sequence of parameter values and the sample offsets at which they apply; linear interpolation will give you the value of that parameter at any specific sample. given that the spec doesn’t allow for quadratic/cubic/parabolic interpolation between values, this is in fact sufficient for sample accuracy.
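a minimal sketch of that interpolation, using a simplified point type of my own (not the SDK's): given points of (offset, value), the value at any sample n falls out of a linear interpolation between the surrounding pair, clamping at the ends:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A parameter automation point: value `v` holds exactly at sample `offset`.
struct AutoPoint { int offset; double v; };

// Sample-accurate parameter value via linear interpolation between the
// surrounding points, clamping before the first and after the last.
double valueAtSample(const std::vector<AutoPoint>& pts, int n) {
    if (pts.empty()) return 0.0;
    if (n <= pts.front().offset) return pts.front().v;
    if (n >= pts.back().offset)  return pts.back().v;
    for (std::size_t i = 1; i < pts.size(); ++i) {
        if (n <= pts[i].offset) {
            const AutoPoint& a = pts[i - 1];
            const AutoPoint& b = pts[i];
            double t = double(n - a.offset) / double(b.offset - a.offset);
            return a.v + (b.v - a.v) * t;
        }
    }
    return pts.back().v;
}
```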
> In practice most plugins just dump all the parameter handling to the top of the process callback and ignore it. This is what JUCE does, for example.
yes, i seem to recall WDL-OL’s approach was to use the parameter value at the start of the process block for the entire block. it’s “mostly good enough” (and honestly it probably was). my school of thought at the time was “sample accurate means sample accurate”. i hadn’t yet learned that the perfect is the enemy of the good :)
> EDIT: ah! it looks like CLAP has delivered something i've been after in an audio API for some time: it appears to allow for dynamic parameters. i'm surprised it took this long!
What does this mean? I can imagine it being reasonable to have, for example, a wet/dry pot for a thing that's "not present" if you didn't enable the thing, since that pot is useless. Whether you want to reflect that as the pot magically "not existing" or just disable it or something so that the user knows it isn't important is a small matter.
I'm way more dubious about genuinely dynamic parameters. If I can't look at your plug-in and decide it has 24 parameters, but must instead reason that it could have any number of parameters and those might change (every sample? every bar? On reload? Once per week?) then automation becomes unmanageable. If you say, but some of the parameters are not automatable - too bad, somebody will want to automate those parameters too anyway and they will fight you to do it.
> What does this mean?
so the particular use-case was for a sampler. we had a bunch of different samples that each had their own domain parameters, which needed to be automatable. hell, even multisample selection (choosing which samples of a multisampled set to use) was automatable. it was hell.. aha.
that said, i envisaged a few other scenarios, although they mostly revolve around variable IO (e.g. variable numbers of sidechained inputs, etc.)
> I'm way more dubious about genuinely dynamic parameters.
you're right to be. the approach i came up with was conceptually a mess - although i guess it worked - the dynamic portions of automation could be mapped basically the same way that one routes audio in the DAW: you have, say, 10 automatable gain parameters visible to the host, then in the plugin you route each of those to your chosen internal parameter.
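a minimal sketch of that routing idea, with invented names: the host only ever sees a fixed bank of automatable slots, and the plugin resolves each internal parameter against whatever slot the user routed to it:

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Sketch of the workaround: the host sees a fixed bank of N automatable
// "slots"; the plugin routes each slot to an internal target, much like
// routing audio in a DAW.
constexpr std::size_t kNumSlots = 10;

struct ParamRouter {
    std::array<int, kNumSlots>    target{};  // internal target per slot (-1 = unrouted)
    std::array<double, kNumSlots> value{};   // last automated value per slot

    ParamRouter() { target.fill(-1); }

    void route(std::size_t slot, int internalTarget) { target[slot] = internalTarget; }
    void automate(std::size_t slot, double v)        { value[slot] = v; }

    // what the internal parameter currently receives, or a fallback if unrouted
    double resolve(int internalTarget, double fallback) const {
        for (std::size_t s = 0; s < kNumSlots; ++s)
            if (target[s] == internalTarget) return value[s];
        return fallback;
    }
};
```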
no joke, there was the question of making the parameter mapping itself automatable. i told them it was impossible and i would die on that hill, which is technically not a lie.
I think they were referring to the audio API itself when mentioning wanting dynamic parameters to implement when writing plugins. I may be wrong though!
Also, I just want to say in general, devs in music tech need a lot more love :( Fighting a dev over a design decision in a plugin is really commonplace online for some reason, and it's unfortunate given that most complaints could be easily solved with a solid foundation in sound and digital audio.
In this context, by "fight you" I meant things like: even though you surface an automation API as expected, since they can't automate the things they want through your API, they run your GUI inside a virtual environment in which the "pointer" is controlled by their software so as to click on gadgets at the appropriate time, and then they whine that your plug-in (which is rendering a GUI) is too bloated and slow compared to the plug-ins they're doing conventional automation for.
I don't know a ton about this domain but isn't the name "CLAP" stupidly hard to search for compared to "VST"? It'll certainly confuse a few search engines for a while...