Working at a company that uses React Native, I wish for nothing more than the end of app stores and differing platform languages.
We're strongly considering just having a website next year, with a mobile app using a WebView plus native code for notifications, GPS, and HealthKit / Health Connect.
I feel like AI is changing the equation; it's nearly better to write your business UI three times, once for each platform.
I did this and never looked back.
It’s called a “WebView app”, and you can get a really good experience on all platforms with one. Just:
- don’t make any crazy decisions on your fundamental UI components, like breadcrumbs, select dropdowns, etc
- add a few platform-specific specialisations to those same components, to make them feel a bit more familiar, such as button styling, or using a self-simplifying back-stack on Android
- test to make sure your webview matches the native browser’s behaviour where it matters. For example, sliding up the view when the keyboard is opened on mobile, navigating back & forth with edge-swipes on iOS, etc
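For the keyboard case above, a minimal sketch of one way to handle it from the page side using the standard visualViewport API; the 0.75 threshold and the scroll behaviour are my own assumptions, not something from the comment:
    // Watch the visual viewport: it shrinks when the on-screen keyboard opens.
    const viewport = window.visualViewport;
    if (viewport) {
      viewport.addEventListener("resize", () => {
        // Heuristic: treat a noticeably shorter visual viewport as "keyboard open".
        const keyboardLikelyOpen = viewport.height < window.innerHeight * 0.75;
        if (keyboardLikelyOpen) {
          // Keep the focused input visible, mimicking the native browser behaviour.
          document.activeElement?.scrollIntoView({ block: "center", behavior: "smooth" });
        }
      });
    }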
I also went the extra step and got service workers working for a basic offline experience, and added a native automatic network diagnostic tool that runs on app startup and checks “Can reach local network”, “Can reach internet (1.1.1.1)”, “Can resolve our app’s domain”, etc., so users can share where it failed and get support more quickly. But this is an app for small-to-medium businesses, not consumer-facing, and the HTML5 part is served from the server and cached. I haven’t thought much about what you might need to do additionally for a consumer app, or a local-first app.
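The comment above describes that diagnostic as native code; purely as an illustration, the same sequence of checks could look roughly like this in TypeScript (the router address, app domain, and timeout are placeholders, and the local-network probe would realistically have to stay native because of mixed-content and CORS restrictions):
    // Report whether a request completed at the network level within a timeout.
    async function canReach(url: string, timeoutMs = 3000): Promise<boolean> {
      try {
        // "no-cors" avoids needing CORS headers; any network-level success counts.
        await fetch(url, { mode: "no-cors", signal: AbortSignal.timeout(timeoutMs) });
        return true;
      } catch {
        return false;
      }
    }
    // Run the checks in order so users can report exactly where things failed.
    async function runStartupDiagnostics() {
      return {
        localNetwork: await canReach("http://192.168.1.1"),   // "Can reach local network"
        internet: await canReach("https://1.1.1.1"),          // "Can reach internet (1.1.1.1)"
        appDomain: await canReach("https://app.example.com"), // "Can resolve our app's domain"
      };
    }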
I have never once experienced a WebView app that I would say had “a really good experience.”
I made a (hobby) project that utilized this strategy (Flutter + wrapped webview app), and it honestly seems like the way to go for my needs.
Works until you need complex native code for things like automatic image capture assisted by a bounding model.
I personally have a preference for Apple's native frameworks. From a purely engineering standpoint, they're very well thought out and have very clear separations of concerns. Spending my time with their libraries helped me write good, scalable code for platforms beyond their own.
That said, platform lock-in is bad for business because it makes operations dependent on a single provider, but I have no delusions that a web front-end is better.
From an engineering standpoint, front-end web frameworks are less complete and require too many third-party libraries and tooling to assemble. From a UX standpoint, it's actually worse--almost every website you visit today spams you upfront with Google sign-in and invasive cookie permission requests that you can't refuse. But never mind that--from a purely business standpoint, a single platform accessible anywhere saves costs. Most importantly, however, the web is a "safe space" for deploying software anti-patterns without an intermediary entity (i.e. an app store) to police your code, so you can do whatever the heck you want.
I'd wish for nothing more than the end of web and app front-ends in favor of purely structured data derived from users' natural language prompts. The more realistic view, however, seems to be that the front-end layer is such a high-level abstraction with such a low barrier to entry that its tech stack will be in constant flux, shaped by whichever best-financed entity is currently seeking the most market share, the most developer mind-share, and the most behavioral control over its users.
That. And specifically, fuck Apple and their prohibition on JITs.
We have a React Native app that shares some code with a web app and needs to do some geometry processing, so we're constantly playing the game of "will it interpret quick enough". Everything works fine in browsers, but in an RN app it often slows down to unusable speeds.
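One low-tech way to play that game explicitly (a sketch of my own, not from the comment; the routine name and the 16 ms frame budget are illustrative) is to wrap the geometry calls in a timer and log when they blow past a budget on-device:
    // Time a synchronous routine and warn when it exceeds the given budget.
    function timed<T>(label: string, budgetMs: number, fn: () => T): T {
      const start = Date.now(); // Date.now() is available in browsers and React Native alike
      const result = fn();
      const elapsed = Date.now() - start;
      if (elapsed > budgetMs) {
        console.warn(`${label}: ${elapsed}ms (budget ${budgetMs}ms)`);
      }
      return result;
    }
    // e.g. timed("polygon intersection", 16, () => intersectPolygons(a, b));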
Wholeheartedly agree.
> Valdi is a cross-platform UI framework that delivers native performance without sacrificing developer velocity. Write your UI once in declarative TypeScript, and it compiles directly to native views on iOS, Android, and macOS—no web views, no JavaScript bridges.
“We’ve got both kinds. Country and western!”
I was at Snap during this project’s early days (Screenshop!) and spent a bit of time debugging some stuff directly with Simon. He’s a wonderful engineer and I’m thrilled to see this project out in the open. Congratulations Snap team! Well deserved.
Definitely one of the cooler projects to watch while I was there. I recall the goal was to open-source it from early on, so I'm glad to see it come to fruition!
Would you use this framework for a project today?
“Composer” ;)
I’m not sure I trust Snap, of all companies, to make a good cross-platform framework after how terrible their Android app has been.
I think it’s been changed since, but wow was it weird finding out that instead of taking photos, the Android app used to essentially take a screenshot of the camera view.
I worked on the Snapchat Android app back in 2017. It's only weird for people who have never had to work with cameras on Android :) Google's done their best to wrangle things with CameraX, but there's basically a bajillion phones out there with different performance and quality characteristics. And Snap is (rightfully) hyper-fixated on the ability to open the app and take a picture as quickly as possible. The trade-off they made was a reasonable one at the time.
I worked on the camera in Instagram iOS for a while. There, at least, there could be a 5,000 ms latency delta between the “screen preview” and the actual full-quality image asset from the camera DSP in the SoC.
I don’t know a thing about Android camera SDK but I can easily see how this choice was the right balance for performance and quality at the time on old hardware (I’m thinking 2013 or so).
Users didn’t want the full quality at all, they’d never zoom. Zero latency would be far more important for fueling the viral flywheel.
> Users didn’t want the full quality at all, they’d never zoom.
Dating apps use awful quality versions of the photos you upload too. Seems to be good enough for most people.
:) This is exactly how we used to do it even on iOS, back in the days before the camera APIs were made public, but Steve Jobs personally allowed such apps to be published in the iOS App Store (end of 2009) ...
Things have improved since then, but as I understand it, the technical reason is that only the camera viewfinder API used to be universal across devices. Every manufacturer implemented their cameras differently, so developers had to write per-model camera handling to take high-quality photos and video.
This is so cool! I'm a React-Native developer, and I'm glad to see more options like this coming into existence.
I wish the native iOS part was written in Swift rather than Objective-C like RN.
Why though? You aren’t interacting with it. What difference does it make?
How are you not interacting with it? It’s a UI library, no?
This looks promising. I would love to see more examples of what this can do, along with screenshots. As is, there is a single Hello World and the components library is “coming soon”. But if it can deliver what it promises, that would be pretty cool. React Native is, I think, the most popular framework in this space.
Not related to this, but abandoning KeyDB was the worst thing they could do.
So this is like all those other frameworks that compile to native components, except this one is natively TypeScript?
I’ll take it
I think? There isn't a TypeScript runtime, just a build-time step? I'm not positive how business logic gets executed, but:
> it compiles directly to native views
One of Valdi's authors here. It's using native views under the hood, like React Native, and there are 3 modes of compilation/execution for the TS source. It can be interpreted from JS (TS compiled to minified JS source), interpreted from JS bytecode (TS compiled to JS source, minified, then compiled to JS bytecode ahead of time), or compiled to native code directly (TS compiled to C ahead of time).
An AOT TS -> C compiler is fantastic - how much of the language is supported, and what are the limitations on TS support? I assume highly dynamic stuff and eval are out of scope?
Most of the TS language is supported; things that are not can be considered bugs that we need to fix. Eval is supported, but it won't be able to capture variables outside of the eval string itself. We took the reverse approach from most other TS-to-native compiler projects: we wanted the compiler to be as compatible with JS as possible, at the expense of some initial performance, to make it possible to adopt the native compiler incrementally at scale.
There are significant trade-offs with this compiler at the moment: it produces a much larger binary than minified JS or JS bytecode, and the performance improvement ranges from 2x down to sometimes zero. It's a work in progress; it's pretty far along in what it supports, but its value proposition is not yet where it needs to be.
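A tiny illustration of the eval limitation described above, as I read it (my own example, not from the Valdi docs):
    const base = 10;
    eval("1 + 2");    // fine: the eval'd string is self-contained
    eval("base + 2"); // works in a regular JS engine, but under the AOT TS-to-C compiler
                      // the eval'd code cannot capture `base` from the enclosing scope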
Rename it Snapp
So now I can finally implement the most god-awful, ugly, cumbersome and unintuitive GUI methodology ever to face a large population of users into my own apps? This abomination that started the whole user-experience decline by making this kind of yuck the gold standard for apps today is finally open source?
Color me yellow.
I hope it has "load spam ads directly into the list the user was about to touch somehow the millisecond before they touch it using magical force field technology so they click the wrong thing every time" functionality. I've been missing that in my apps
Now offering 4 swipe directions!
With instant, subpixel precision!
God forbid someone try something different. The app isn’t really made for people that only know how to doom scroll.
Not to troll, but do you need such shims in the era of LLMs?
Yes? Dear lord I want determinism