Show HN: VoxConvo – "X but it's only voice messages"
by siim
Hi HN,
I saw this tweet: "Hear me out: X but it's only voice messages (with AI transcriptions)" - and couldn't stop thinking about it.
So I built VoxConvo.
Why this exists:
AI-generated content is drowning social media. ChatGPT replies, bot threads, AI slop everywhere.
When you hear someone's actual voice - their tone, hesitation, excitement - you know it's real. That authenticity is what we're losing.
So I built a simple platform where voice is the ONLY option.
The experience:
Every post is voice + transcript with word-level timestamps:
Read mode: scan the transcript like normal text. Listen mode: hit play and words highlight in real time.
You get the emotion of voice with the scannability of text.
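The play-position-to-word sync could look something like this (a minimal sketch, assuming each transcript word carries start/end timestamps in seconds - field names are my invention, not VoxConvo's actual code):

```typescript
interface TimedWord {
  text: string;
  start: number; // seconds
  end: number;   // seconds
}

// Binary search for the word under the playhead; returns its index,
// or -1 if the playhead falls in a gap between words.
function wordAt(words: TimedWord[], t: number): number {
  let lo = 0;
  let hi = words.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (t < words[mid].start) hi = mid - 1;
    else if (t >= words[mid].end) lo = mid + 1;
    else return mid;
  }
  return -1;
}

const words: TimedWord[] = [
  { text: "hear", start: 0.0, end: 0.4 },
  { text: "me", start: 0.4, end: 0.6 },
  { text: "out", start: 0.7, end: 1.1 },
];
```

On the client you'd call something like this from the audio element's `timeupdate` event and toggle a highlight class on the matching word.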
Key features:
- Voice shorts
- Real-time transcription
- Visual voice editing - clicking a word in the transcript deletes the corresponding audio segment, to remove filler words, mistakes, and pauses
- Word-level timestamp sync
- No LLM content generation
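The click-to-delete editing above reduces to simple interval math on the word timestamps. A hedged sketch (my assumed data shape and names, not the actual implementation): compute the time ranges to keep, then have a renderer concatenate those ranges from the original audio.

```typescript
interface TimedWord {
  text: string;
  start: number; // seconds
  end: number;   // seconds
}

// Given the full transcript, the set of deleted word indices, and the
// total clip length, return the audio ranges to keep, in order.
function keepRanges(
  words: TimedWord[],
  deleted: Set<number>,
  total: number,
): [number, number][] {
  const cuts = words
    .filter((_, i) => deleted.has(i))
    .map((w) => [w.start, w.end] as [number, number])
    .sort((a, b) => a[0] - b[0]);
  const out: [number, number][] = [];
  let pos = 0;
  for (const [s, e] of cuts) {
    if (s > pos) out.push([pos, s]); // keep audio before this cut
    pos = Math.max(pos, e);          // skip the deleted segment
  }
  if (pos < total) out.push([pos, total]);
  return out;
}

// Deleting the filler "um" leaves two ranges to splice back together.
const clip: TimedWord[] = [
  { text: "so", start: 0.0, end: 0.4 },
  { text: "um", start: 0.5, end: 0.9 },
  { text: "yeah", start: 1.0, end: 1.5 },
];
const ranges = keepRanges(clip, new Set([1]), 2.0);
```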
Technical details:
Backend running on Mac Mini M1:
- TypeGraphQL + Apollo Server
- MongoDB + Atlas Search (community mongo + mongot)
- Redis pub/sub for GraphQL subscriptions
- Docker containerization, so it's ready to scale
Transcription:
- VOSK real-time Gigaspeech model (eats about 7 GB RAM)
- WebSocket streaming for real-time partial results
- Word-level timestamp extraction plus punctuation model
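For anyone unfamiliar with VOSK: with word timestamps enabled, the recognizer emits JSON messages of the form `{"partial": "..."}` for interim hypotheses and `{"result": [{word, start, end, conf}, ...], "text": "..."}` for finalized segments. A sketch of dispatching one message from the streaming socket (handler names are my invention):

```typescript
interface VoskWord {
  word: string;
  start: number; // seconds
  end: number;   // seconds
  conf: number;  // confidence 0..1
}

interface Handlers {
  onPartial: (text: string) => void;                    // update live caption
  onFinal: (words: VoskWord[], text: string) => void;   // persist timestamps
}

// Dispatch one JSON message received over the VOSK WebSocket stream.
function handleVoskMessage(raw: string, h: Handlers): void {
  const msg = JSON.parse(raw);
  if (typeof msg.partial === "string") {
    h.onPartial(msg.partial);
  } else if (Array.isArray(msg.result)) {
    h.onFinal(msg.result as VoskWord[], msg.text ?? "");
  }
}

// Feed it one partial and one final message to see both paths.
const seen = { partial: "", text: "", lastEnd: 0 };
const handlers: Handlers = {
  onPartial: (t) => { seen.partial = t; },
  onFinal: (w, t) => { seen.text = t; seen.lastEnd = w[w.length - 1].end; },
};
handleVoskMessage('{"partial":"hear me"}', handlers);
handleVoskMessage(
  '{"result":[{"word":"hear","start":0.1,"end":0.4,"conf":0.99}],"text":"hear"}',
  handlers,
);
```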
Storage:
- Audio files are stored in AWS S3
- Everything else is local
Why Mac Mini for MVP? Validation first, scaling later. Architecture is containerized and ready to migrate. But I'd rather prove demand on gigabit fiber than burn cloud budget.
Impressive tech execution, but the format has fundamental scaling issues.
Clubhouse lost 93% of users from peak. WhatsApp sends 7 billion voice messages daily - but those are DMs, not feeds.
The math doesn't work: reading is 50-80% faster than listening. You can skim 50 text posts in 100 seconds. 50 voice posts? 15 minutes.
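Spelled out (assuming an ~18-second average clip, which is what makes the 15-minute figure work):

```typescript
// Back-of-envelope feed-consumption math: ~2 s to skim a text post
// vs an ~18 s average voice clip (both assumed figures).
const posts = 50;
const skimSeconds = 2 * posts;             // 100 s to skim the text feed
const listenSeconds = 18 * posts;          // 900 s = 15 min of audio
const ratio = listenSeconds / skimSeconds; // listening is 9x slower here
```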
Voice works async 1-to-1. You built Twitter where every tweet is a 30-second voicemail nobody has time to listen to.
The transcription proves it - users will read, not listen. Which makes this "text feed with worse UX".
How would this prevent someone from just plugging ElevenLabs into it? Or the inevitable more realistic voice models? Or just a prerecorded spam message? It's already nearly impossible to tell if some speech is human or not. I do like the idea of recovering the emotional information lost in speech -> text, but I don't think it'd help the LLM issue.
Detecting "human speech" means shutting out people who cannot speak and rely on TTS for verbal communication.
Also speech impediments, accents, physical disabilities, etc etc.
Tech culture just refuses to even be aware of people as physical beings. It's just spherical users in a vacuum and if you don't fit the mold, tough.
Or also a genuine human voice reading a script that’s partially or almost entirely LLM written? I think there must be some video content creators who do that.
> I saw this tweet: "Hear me out: X but it's only voice messages (with AI transcriptions)" - and couldn't stop thinking about it.
> Why this exists: AI-generated content is drowning social media.
> Real-time transcription
... So you want to filter out AI content by requiring users to produce audio (not really any harder for AI than text), and you add AI content afterward (the transcriptions) anyway?
I really think you should think this through more.
The "authenticity" problem is fundamentally about how users discover each other. You get flooded with AI slop because the algorithm is pushing it in front of you. And that algorithm is easily gamed, and all the existing competitors are financially incentivized to implement such an algorithm and not care about the slop.
Also, I looked at the page source and it gives a strong impression that you are using AI to code the project and also that your client fundamentally works by querying an LLM on the server. It really doesn't convey the attitude supposedly motivating the project.
Nice tech demo though, I guess.
Neat idea! Not sure if I'm willing to register just to try it, though. Having the main feed public would be nice! Or even a sample feed.
So you're going to reject recordings detected as computer generated, or human recorded from a computer-generated script?
I feel like you are making your users jump through hoops to do bot and slop detection, when you ought to be investing in technology to do the same. Here is a focusing question: would you still demand audio recordings if you had that technology?
Maybe you will court an interesting set of users when you do this? I just know I will not be one of them; ain't got time for that. Good luck.
Did you ever use AirChat?
Idea is cool, but the STT is bad (at least with an accent), and having to edit each word is too cumbersome.
“Sign in with Google”
:grimace:
Sorry, but I have to pass.