The statement in the article's title is very strong, and I have not found a confirmation of it in a logical sense. The author observes the current state of LLMs and draws a conclusion from how things turned out, somewhat fitting the conclusion to the observation.
I personally find it nicer when the AI communicates quite clearly "Hi there, sorry to interrupt, but I have just launched a nuclear first strike on the enemy. This I thought would be best for the current situation." instead of "WARNING! Nuclear first strike began".
Gives destruction that human touch.
Why are we counting sand grains at the beach? Yesterday we were talking about AI-driven weapons of mass destruction, and today we're arguing about whether AIs should have a personality or not. F'A!
"But you nuked the wrong target??"
"You are absolutely right and I apologize. Let me try a different approach..."
I think this misses something, which is that there is absolutely the option to progress towards a region that is more "tool-like". See the difference between Kimi K2 and many of the leading LLM providers. It's a lot better at avoiding sycophancy, avoiding emotive reasoning, etc. It's not as capable as others, and it is of course possible that that's why, but I find use for it regardless because of its personality.
I don't think personality is an issue either way. Long-term memory seems like a much stronger candidate for psychosis: if the person goes down a rabbit hole, the bot not only amplifies that but does so over an extended time in an enduring way.
The actor playing Data in Star Trek has a personality, but can give a neutral-sounding answer to a question.
I still think someone should set up a voice chat bot that answers to "Computer!" and has Majel Barrett's monotone voice.
This is a very optimistic, pro-technology-cleverness point of view.
I recommend reading the linked persona selection model document. It's Anthropic through and through - enthusiastic while embracing uncertainty - but ultimately lots of rationalisation for (what others believe is) dangerous obfuscation.
ok but then why is ChatGPT's personality so infuriating? "It's not just X, it's Y." "Here it is, no extra text, no fluff."
I used ChatGPT often but switched to Lumo a few days ago. I like Lumo a lot. It almost never ends with a follow-up question, and if it does, it's a sensible/useful one. It readily searches the web if it's not quite sure what the correct answer is. Also, it's privacy-first. It's based on a Mistral model.
I tried ChatGPT over the holidays (paid) vs. claude.ai (paid). After trying some prompts that worked well on Claude in ChatGPT, I understand why people are so annoyed about AI slop. The speech patterns in text output for ChatGPT are both obvious and annoying, and impossible to unsee when people use them in written communication.
Claude isn't without problems ("You're absolutely right"), but I feel that some of the perception there is around the limited set of phrases the coding agent uses regularly, and comes less from the multi-paragraph responses from the chatbot.
Part of what makes it so infuriating is that it uses the same patterns so often, the other part is that it's not very good at using them—the revelation that it's Y and not X is typically incredibly banal, not some profound observation.
But it was always going to attempt, too often, some things it's not good at. It's these things in particular because skilled human writers do use similar flourishes quite a lot. Imitating them allows the model to superficially appear like a good writer, which is worse than actually being a good writer, but better than superficially appearing like a bad writer.
A different training process might try to limit the model to only attempt things it can do 100% perfectly, but then there wouldn't be a lot it could do at all.