The really creepy thing is the way they force you to give up your data with these products. If it were just useful add-ons, it wouldn't bother me, but the fact that Gemini requires you to turn activity history off on paid plans in exchange for the promise that they won't train on your data or allow a person to view your prompts is insanity. If you're paying $20 for Pro or $249.99 for Ultra, you should be able to keep activity history without them training on, reviewing, or storing your data for several years.
I have a pixel watch, and my main use for it is setting reminders, like "reminder 3pm put the laundry in the dryer". It's worked fine since the day I bought it.
Last week, they pushed an update that broke all of the features on the watch unless I agreed to allow Google to train their AI on my content.
My Android phone comes hobbled unless I give it all my data to be used for training data (or whatever). I just asked, "Ok Google, play youtube music." And it responded with, "I cannot play music, including YouTube Music, as that tool is currently disabled based on your preferences. I can help you search for information about artists or songs on YouTube, though. By the way, to unlock the full functionality of all Apps, enable Gemini Apps Activity."
I'm new to Android, so maybe I can somehow still preserve some privacy and have basic voice commands, but from what I saw, it required me to enable Gemini Apps Activity with a wall of text I had to agree to in order to get a simple command to play some music to work.
Just stop talking to your computer and use the screen interface, that still works.
We need consumer protection laws that protect against functional regressions like this -- if a widget could do X when I bought it, it should keep doing X for the life of the product and I shouldn't have to "agree" to an updated license for it to be able to keep doing X.
Internationally coordinated action, with consumers around the world taking a company to small claims court at the same time to seek redress for defective products, would be an effective strategy.
Are you proposing a "World Sue A Tech Giant Day"? A global bonanza of micro-litigation that bleeds AI-leviathans dry by a thousand cuts?
I'm in, but let's have it in October or something when I'm less busy.
I like this idea, though I'm concerned about how we could make sure the courts are ready to handle the deluge of activity.
I’m in too, but give me a couple weeks to divest my portfolio from big tech real quick.
Did you agree, or did you give up your data?
> I have a pixel watch
you rented/leased a watch for an undefined amount of time.
And the fact that even if you don't want it and don't use it, they still charge you as if you do.
https://gitlab.com/natural_aliens/geminichatsaver/-/tree/mai... pull requests and any other feedback welcome
Lots of things in life seem to be the majority having to go along with the decisions of the minority. I remember in 2012 when Facebook put white chevrons for previous- and next-photo in the web photo gallery product and thinking how this one product decision by a handful of punks has now been foisted on the world. At the time I was really into putting my photography on FB and, somewhat pretentiously, it really pissed me off to start having UI elements stuck on it!
Car dashboards without buttons, TVs sold with 3D glasses (remember that phase?), material then flat design, larger and larger phones: the list is embarrassing to type because it feels like such a stereotypical nerd complaint list. I think it’s true though — the tech PMs are autocrats and have such a bizarrely outsized impact on our lives.
And now with AI, too. I just interacted with duck.ai, duck duck go’s stab at a bot. I long for a little more conservatism.
> And let’s be clear: We don't need AGI (Artificial General Intelligence).
In general, I think we want to have it, just like nuclear fusion, interplanetary and interstellar colonization, curing cancer, etc. etc.
We don't "need" it, just as people in the 1800s didn't need electric cars or airports.
Who owns AGI, or what purpose the AGI believes it has, is a separate discussion - similar to how airplanes can be used to transport people or to fight wars. Fortunately, today most airplanes are made to transport people and connect the world.
Microsoft is all about this. You know how they also force stuff you don't want on the OS? Somewhere within Microsoft there might be a dashboard where they show their investors people are using Bing and Copilot. Borderline financial scam if you think about it.
They've been all about this since Windows 95.
Copy and paste is not working reliably in Windows anymore; coincidentally, it's breaking at the same time Msoft is moving to replace all copy/paste with OCR only. It's garbage.
Genuinely, VSCode has been broken for me with copying due to it desperately trying to vibe code for me. You've reminded me to fix that.
Had the same issue switching to Cursor. Cmd + K multiple-selection skip is no longer the key map. Drives me fucking nuts.
I haven't noticed this, also how exactly would OCR copy paste work? In order to copy text I would need to select text, which would mean it's already encoded as text.
That's why this was the year I finally dropped Windows and VSCode forever. Not that hard for me because all the games I play work flawlessly in Proton, and I already used Linux at work.
What is your replacement for VSCode?
Not who you responded to, but for a GUI editor I tend to like Zed, and for terminal I like Helix. Yes, Neovim is probably better to learn because Vim motions are everywhere, but I like Helix's more "batteries included" approach.
> I will not allow AI to be pushed down my throat just to justify your bad investment.
Pretty much my sentiment too.
The neat thing about all this is that you don’t get a choice!
Your favorite services are adding “AI” features (and raising prices to boot), your data is being collected and analyzed (probably incorrectly) by AI tools, you are interacting with AI-generated responses on social media, viewing AI-generated images and videos, and reading articles generated by AI. Business leaders are making decisions about your job and your value using AI, and political leaders are making policy and military decisions based on AI output.
It’s happening, with you or to you.
I do have a choice, I just stop using the product. When messenger added AI assistants, I switched to WhatsApp. Now WhatsApp has one too, now I’m using Signal. Wife brought home a win11 laptop, didn’t like the cheeky AI integration, now it runs Linux.
Reasonably far off topic:
Visa hasn't worked for online purchases for me for a few months, seemingly because of a rogue fraud-detection AI their customer service can't override.
Is there any chance that's just a poorly implemented traditional solution rather than feeding all my data into an LLM?
If by "traditional solution" you mean a bunch of data is fed into creating an ML model, your individual transaction is fed into that, and it spits out a fraud score, then no, they're not using LLMs. But at this high a level, what's the difference? Whether their ML model uses a transformer-based architecture or not, what difference does it make?
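To make the "features in, fraud score out" pipeline concrete: here's a minimal, purely illustrative logistic scorer. The feature names and hand-set weights are my own assumptions for the sketch; a real issuer's model is learned from labeled transaction history and uses hundreds of signals, but the shape of the thing is the same whether or not a transformer is involved.

```python
import math

def fraud_score(amount_usd, is_foreign, txns_last_hour):
    """Toy logistic fraud scorer with made-up coefficients."""
    z = (
        -4.0                    # bias: most transactions are legitimate
        + 0.002 * amount_usd    # larger amounts raise suspicion slightly
        + 1.5 * is_foreign      # cross-border purchase
        + 0.8 * txns_last_hour  # rapid-fire activity
    )
    return 1 / (1 + math.exp(-z))  # squash to a [0, 1] score

# An issuer would decline or flag anything above some threshold.
score = fraud_score(amount_usd=1200, is_foreign=1, txns_last_hour=4)
print(round(score, 3))  # → 0.957
```

The failure mode the grandparent describes (a "rogue" model that customer service can't override) is a thresholding and process problem, not something specific to any one architecture.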
One hallucinates data, one does not?
almost the same as RTO mandates:
we’ll force you to come back to justify sunk money in office space.
Don't forget about the poor local businesses. Someone needs to pay to keep the executives' lunch spots open.
AI reminds me of the time Google+ was being shoved down our throats. If you randomly clicked on more than 7 hyperlinks on the internet, you'd magically sign up for Google Plus.
Around that time, one of my employer's websites had added Google Plus share buttons to all the links on the homepage. It wasn't a blog, but imagine a blog homepage with previews of the last 30 articles; now each article had a Google Plus tag on it. I was called in to help because the load time for the page had grown from seconds to a few minutes. For each article, they were adding a new script tag and a Google Plus dynamic tag.
It was fixed, but so many resources were wasted on something that eventually disappeared. AI will probably not disappear, but I'm tired of the busy work around it.
The difference was that Google Plus was actually kind of cool. I'm not excusing them shoving it down your throat, but at least it was well designed.
Most of the AI currently represents misadventures in software design, at a time when my Fitbit Charge can't even play nice with my Pixel 7 phone. How does that even happen?
What I also dislike about the AI is that it promotes a mainframe-like development workflow. Schedule your computation, pay for the usage, etc. Any chance this particular trend stops or reverses? Are we ever going to have local AI that is in practice comparable and sufficient?
This might be obvious, but I think the only way forward is to disengage from services offered by these mega-tech companies. Degoogling has become popular enough to foster open communities that prioritize their time and effort to keep software free from parasitic enterprises.
For instance, I am fiddling with LineageOS on a Pixel (ironically enough) that minimizes my exposure to Google's AI antics. That's not to say it is easy or sustainable, but enough of us need to stop participating in their bad bets to force that realization upon them.
I'm hoping "degoogle" is the 2026 word of the year.
No one with a white-collar job in the US can get away from Google and Microsoft. We're forced to use one or the other, and some of us are forced to use both.
That's not to mention all the other tech companies pushing AI (which is honestly all of them).
Agreed. At the level of companies, it is hard to find any practical solution. Personally, I am trying to do what I can.
My healthcare provider's app in Germany refuses to work on anything that isn't a phone running an official, Google^tm-verified^(r), hardware-attested OS. Same with some banks.
I feel things like these should be illegal. There must be other genuine ways to verify the end user.
Is it possible to permanently disable Gemini on Android? I keep getting it inserted into my messages and other places, and it's horrible to think that I'm one misclick away from turning it on.
Sorry, you've irrevocably consented by touching a button that appeared above what you were trying to tap half a millisecond earlier.
That only happens with Apple, so it's fine.
My feeling is we need laws to stop it
The industry agrees with you, hence the regulatory capture.
Too big to fail now
If it only takes a few years for a private entity to become "too big to fail" and quasi-immune to government regulation, we have a real problem.
You don't like some features being added to products so you want laws against adding certain features?
I might not like a certain feature, but I'd dislike the government preventing companies from adding features a whole lot more. The thought of that terrifies me.
(To be clear, legitimate regulations around privacy, user data, anti-fraud, etc. are fine. But just because you find AI features to be something you don't... like? That's not a legitimate reason for government intervention.)
I think it's more about enforcing having easy mechanisms to opt out, which seem to be absent with regards to AI integration.
It's better to assume good faith when providing a counter argument.
What is so terrifying about exerting democratic control over software critical to exist in society?
> over software critical to exist in society?
I don't know what that means grammatically.
But you could ask, what is so terrifying about exerting democratic control over people's free speech, over the political opinions they're allowed to express?
The answer is, because it infringes on freedom. As long as these AI features aren't harming anyone -- if your only complaint is you find their presence annoying, in a product you have a free choice in using or not using -- then there's no democratic justification for passing laws against them. Democratic rights take precedence.
> As long as these AI features aren't harming anyone
Why do you say this? They are clearly harming people's privacy. Or don't you believe in privacy as a right? A lot of people do - democratically.
This is the argument against all customer protection as well as things like health codes, right?
Nobody is FORCING you to go to that restaurant, so it's anti-democratic to take away their freedom to not wash their hands when they cook?
Newsflash
If voters Democratically decide to do something, that's democracy at work.
You're trying to make it sound like a corporation's right to force AI on us is equivalent to an individual's right to speech, which is idiotic on its face. But I'd also point out that speech is regulated in the US, so you're still not making the point you think you're making.
And as far as I'm concerned, as long as Google and Apple have a monopoly on smartphone software, they should be regulated into the ground. Consumers have no alternatives, especially if they have a job.
I uninstall the Gemini app and disable the Google app. They seem heavily linked, so removing it may do the trick. As a practice I don't use any Google apps if I can find a good replacement, so I'm not sure if Messages is impacted.
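For what it's worth, on a stock device the same thing can be done per-user over adb without root. The package names below are what Gemini and the Google app ship under at the time of writing, but treat them as assumptions and confirm them on your own device first:

```shell
# Confirm the exact package names installed on your device
adb shell pm list packages | grep google

# Remove Gemini for the current user (reinstallable from the Play Store)
adb shell pm uninstall --user 0 com.google.android.apps.bard

# Disable the Google app, which hosts the Assistant/Gemini integration
adb shell pm disable-user --user 0 com.google.android.googlequicksearchbox
```

Note that `pm uninstall --user 0` only removes the app for your user profile; a system update or factory reset can bring it back.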
> We don't need AGI (Artificial General Intelligence). We don't need a digital god. We just need software that works.
Yeah, I think the days of working software are over (at least deterministically)
> We don't need to do it this quarter just because some startup has to do an earnings call.
What startups are doing earning calls?
All the public ones?
I'd say at that point they are no longer startups, they've already started up
Lots of businesses like to claim being a "startup" as it brings connotations of innovation, dynamism, coolness, being the "next big thing" etc. There are many senses of the word, and it can be used in different ways (e.g. I work at a small business which has some elements of startup culture, and it's not an incorrect way to give people a sense of what it's like here - but we're definitely well established) but I think often being one of the "cool kids" is part of the motivation.
VSCode feels like it’s in the “brand withdrawal” phase of its lifespan. I’ve turned off the sneakily named “Chat” and yet it still shows the chat sometimes when I toggle the bottom bar visibility.
> We will work with the creators, writers, and artists, instead of ripping off their life's work to feed the model.
I’m not sure I have an idea of what this might look like. Do they want money? What might that model look like? Do they want credit? How would that be handled? Do they want to be consulted? How does that get managed?
It probably starts with a reexamination of copyright law, which has always been a pragmatic rather than a principled system, but has not noticeably changed since the digital revolution.
Copyright is meant to encourage more publication by providing publishers with (temporary) control over their products after having released them to the public. Once the copyright expires, the work enters the public domain, which is really the end goal of copyright law. If publishers start to feel like LLMs are undermining that control, they might publish less original work, and therefore less stuff will eventually enter the public domain, which is considered bad for society. We're already seeing some effects of this as traffic (and ad revenue) to various websites has fallen significantly in the wake of LLM proliferation, which lowers the incentive to publish anything original in the first place.
Anyway, I'm not sure how best to adapt copyright law to this new world. Some people have thought about it though: https://en.wikipedia.org/wiki/Copyright_alternatives
Just like one of crypto’s biggest real world uses ended up being scams, LLMs are tools for bypassing copyright and enabling plagiarism with plausible deniability.
Boomers in the manager class love AI because it sells the promise of what they've longed for for decades: a perfect servant that produces value with no salary, no need for breaks, no pushback, no workers comp suits, etc.
The thing is, AI did suck in 2023, and even in 2024, but recently the best AI models are veering into not-sucking territory. Which, when you look at it from a distance, makes sense: if you throw the smartest researchers on the planet and billions of dollars at a problem, eventually something will give and the wheels will start turning.
There is a strange blindness many people have on here, a steadfast belief that AI will just never end up working, or will always be a scam. But the massive capex on AI now is predicated on the eventual turning of fledgling LLMs into self-adaptive systems that can manage any cognitive task better than a human. I don't see how the improvements we've seen over the past few years in AI aren't heading in that direction.
It still kinda sucks though. You can make it work, but you can also easily end up wasting a huge amount of time trying to make it do something that it's just incapable of. And it's impossible to know upfront if it will work. It's more like gambling.
I personally think we have reached some kind of local maximum. I work 8 hours per day with claude code, so I'm very much aware of even subtle changes in the model. Taking into account how much money was thrown at it, I can't see much progress in the last few model iterations. Only the "benchmarks" are improving, but the results I'm getting are not. If I care about some work, I almost never use AI. I also watch a lot of people streaming online to pick up new workflows and often they say something like "I don't care much about the UI, so I let it just do its thing". I think this tells you more about the current state of AI for coding than anything else. Far from _not sucking_ territory.
The Copilot button that comes on new laptops is the Darkest Pattern I have ever seen. UI exploitation that has jumped the software / hardware gap.
A student will be showing me something on their laptop, their thumb accidentally grazes it because it's larger than the modifier keys and positioned so this happens as often as possible. The computer stops all other activity and shifts all focus to the Copilot window. Unprompted, the student always says something like "God, I hate that so much."
If it was so useful they wouldn't have to trick you into using it.
> Unprompted, the student always says something like "God, I hate that so much."
... Dare I ask how Copilot typically responds to that? (They're doing voice detection now, right?)
> If it was so useful they wouldn't have to trick you into using it.
They delude themselves that they're doing no such thing. Of course the feature is so useful that you'd want to be able to access it as easily as possible in any context.
And the telemetry doesn't lie! Look how many people are clicking that button! KPIs go brrrrrrr
What's sad is how real this is.
BTW, the context is that one thing I teach is 3d modeling software, so the students are following my instructions to enter keyboard commands. It's usually Rhino3d where using the spacebar to repeat the last command is common.
> ... Dare I ask how Copilot typically responds to that? (They're doing voice detection now, right?)
Just as a generalization, the dozen or so times this has happened this semester the pop-up is accompanied by an "ugh" then after the window pops up from the taskbar the student immediately clicks back into the program we're using. It seems like they're used to dealing with it already. I haven't seen any voice interaction.
I mean, the statistics say the students use AI plenty - they just seem annoyed by the interruption. Which I can agree with.
> They delude themselves that they're doing no such thing. Of course the feature is so useful that you'd want to be able to access it as easily as possible in any context.
Exactly.
I'm having a hard time believing any of this, and am tempted to think this might be in bad faith. It's true it's a bit ambitious on their part that they replaced the right-side key, but it isn't larger than normal and it's not positioned any differently than normal keys. Working with hundreds of laptops and humans, several ham-fisted, on a daily basis, I've not seen this at all.
Further, a dark pattern is where you are led towards a specific outcome but are pulled insidiously towards another. This doesn't really fall into that definition.
Amazon's price history feature certainly doesn't need to open their AI assistant, but in addition to the graph I came for, I get a little summary of the graph. I really hope they aren't using an LLM for that when all it's doing is telling me it's the lowest price in 30 days.
That's the kind of lazy bullshit idea that, to me, exemplifies the AI hype slop era we're in. The point of a chart is to communicate visually. If the chart isn't clear without a supplemental explanation, why is it there?
If user research indicates your chart isn't clear enough, then improve the chart. But what are the odds they did any user research? They probably just ran an A/B test and saw number go up because of the novelty factor.
What examples of AI integrations annoy you? Because I have such a wonderful time randomly discovering AI integrations where they actually fit nicely: 1) marimo's documentation has an ask button to quickly get some help, kind of like a way smarter RAG; 2) Postman has an AI that can write little scripts to visualize responses however you want (for example, I turned a bunch of user IDs into profile links so that I could visit all of them); 3) the Grok button on each Twitter post is just amazing for quickly getting into what a post even references and talks about; 4) Google's AI Mode has saved me many clicks, and even just Gemini being able to quickly fetch when a certain TV show goes live and make a reminder is amazing.
> Grok button on each Twitter post is just amazing to quickly get into what post even references and talks about.
when Charlie Kirk was shot, and the video was posted to Twitter, people asked Grok to "fact-check" it...and Grok told them the videos were fake and Kirk was alive. [0]
Grok also spread misinformation about the identity of the shooter. [1]
> On Friday morning, after Utah Gov. Spencer Cox announced that the suspect in custody was Robinson, Grok's replies to X users' inquiries about him were contradictory. One Grok post said Robinson was a registered Republican, while others reported he was a nonpartisan voter. Voter registration records indicate Robinson is not affiliated with a political party.
and that's just one particularly egregious event in a long string of problems, such as the MechaHitler thing. [2] and the Elon Musk piss-drinking thing. [3]
so if you're going to defend these "AI" integrations as being useful and helpful...I dunno, Grok is probably not a good example to point to.
0: https://www.engadget.com/ai/grok-claimed-the-charlie-kirk-as...
1: https://www.cbsnews.com/news/ai-false-claims-charlie-kirk-de...
2: https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-...
3: https://www.404media.co/elon-musk-could-drink-piss-better-th...
On Windows: Notepad and Edge Developer Tools
In addition to the annoyances mentioned, the pushing of AI may be leading to a massive waste of money and resources. I'm sure that if, instead of shoving AI in whether you want it or not, they said "pay $1 if you want AI", the number of data centers needed would be reduced dramatically.
>> the pushing of AI may be leading to a massive waste of money and resources
And massive amounts of energy to run these new fangled AI data centers. Not sure if you lumped that in with "resources", but yes we're already seeing it:
> A typical AI data center uses as much electricity as 100,000 households, and the largest under development will consume 20 times more, according to the International Energy Agency (IEA). They also suck up billions of gallons of water for systems to keep all that computer hardware cool.
https://www.npr.org/2025/10/14/nx-s1-5565147/google-ai-data-...
I don’t think we’re at the “companies bought too many GPUs” stage yet. My understanding is they still can’t get enough GPUs, or data centers to put them in, or power to run them. Most companies don’t even own them; they rent from the clouds.
We are, however, at the “we need an AI strategy” stage, so execs will throw anything and everything at the wall to see what sticks.
They can't get enough GPUs right now, with VCs pumping money into them, but that can very quickly turn into "we're out of money, what do we do with all these GPUs?"
The worst usage of AI is “content dilution” where you take a few bullet points and generate 5 paragraphs of nauseating slop. These days, I would gladly take badly written content from humans filled with grammatical errors and spelling mistakes over that.
Comment was deleted :(
> generate 5 paragraphs of nauseating slop
Which then nobody will ever read, they'll just copy it into the AI bot to summarize into a few bullet points.
The amount of waste is quite staggering in this back and forth game.
> Which then nobody will ever read, they'll just copy it into the AI bot to summarize into a few bullet points.
Which more often than not will lose or distort the original intention behind the first 5 bullet points.
Which is why I avoid using LLMs for writing.
It's pretty awesome that we now have nondeterministic .zip
/dev/yolo
As usual, those who spent too much money on it use it as a way to show their investors they didn't waste all that money and to get them to spend even more. That's why it's so messed up.
[dead]
The AI push is not just hype, it’s a scramble for cash. For now the only game plan is to scale up massively with a giant investment gamble, to try to get beyond the obvious limitations that threaten to burst the bubble.
Plus, the general economic outlook is negative, and AI is the bright spot. They are striving to keep growth up amid downward pressure.
This seems like a fairly reasoned screed. I can't find much to disagree with.
I am, personally, quite optimistic about the potential for "AI," but there's still plenty of warts.
Just a few minutes ago, I was going through a SwiftUI debugging session with ChatGPT. SwiftUI View problems are notoriously difficult to debug, and the LLM got to the "let's try circling twice widdershins" stage. At that point, I re-engaged my brain, and figured out a much simpler solution than the one proposed.
However, it gave me the necessary starting point to figure it out.
If AI was as great as they pretend, there would be no need to force it on us.
It's the classic "You don't need to tell a child that something is fun. The more times you tell a child something is fun, the more they will doubt you. It's easy to tell if something is fun, because it's fun."
It's the only game in town, and reasonably expected to be close to the last.
What do you mean by that?
A few days ago I took a photo of some water pipes and asked ChatGPT to review it.
Unbeknownst to me, there was an issue. It pointed out multiple signs of slow leaks and then described what I should do to test and repair it easily.
I see a lot of negative energy about the 'AI' tech we have today, to the point where you will get mass-downvoted for saying something positive.
I mean, we're in the upslope stage of the hype/bubble cycle. Once this pops and 80% of invested people lose their shirts, the long-term adoption cycle will play out much more reasonably, more like OP wishes.
Assume it is being ‘done wrong’, not due to the usual trifecta of greed/evil/stupidity, but due to socio-economic pressure that demands this approach.
AI really needs R&D time, where we first figure out what it’s good for and how best to exploit it.
But R&D for SW is dead. Users proved to be super-resilient to buggy or mismatched SW. They adapt. ‘Good enough’ often doesn’t look it. Private equity sez throw EVERYTHING at the wall and chase what sticks…
yeah, we're f&%^ed
Fired?
Your regex parser is broken if your answer is a five char string.
As I click into this thread from the front page, "Writing a Good Claude.md" appears immediately above it. Sigh.
Ok fair enough, feel more of the AGI.
>> It is time to do AI the right way.
Some "No True Scotsman"-flavored cope.
Try watching a televised American football game. So many ads for AI. Of course, ads appeal most to the gullible.
Have you seen the Workday ads featuring all the washed-up rock stars? They're pushing managing people AND AI agents - using AI. sigh...
Oh how I'd like the AI bubble to pop already when ROIs don't justify the cost. I like AI for things like getting recommendations or classifying images. And yet execs feel the need to force every possible use case down our throats even if they don't make any sense or make quality worse.
E.g. Programming - and I do judge not only those who use AI to code but execs who force people to use AI to code. Sorry, I'd like to know how my code works. Sorry, you're not an efficient worker, you're just making yourself dumber and churning out garbage software. It will be a competitive advantage for me when slop programmers don't know how to do anything and I can actually solve problems. Silicon Valley tech utopians cannot convince me otherwise. I don't think poorly socialized dweebs know much about anything other than their AI girlfriends providing them with a simulation of what it feels like to not be lonely.
If AI was amazing you wouldn’t need to push it, people would demand it!
You need to push slop, because people don’t really want it.
> We will work with the creators, writers, and artists, instead of ripping off their life's work to feed the model.
I support this, but the Smarter Than Me types say it's impossible. It's not possible to track down an adequate number of copyright holders, much less get their permission, much less afford to pay them, for the number of works required to get an LLM to achieve "liftoff".
I would think that as I use Claude for coding, it would work just as well if it didn't suck down the last three years of NYT articles as if it did. There's a vast amount of content in the public domain, and if you're ChatGPT you can make deals with a bunch of big publishers to get more modern content too. But that's my know-nothing take.
Maybe the issue is more about the image content. Screw the image content (and definitely the music content; Spotify pushing this slop is immensely offensive), pay the artists. My code, OTOH, is open source, MIT licensed. It's not art at all. Go get it (though throw me a few thousand bucks every year because you want to do the right thing).
It's not 'impossible', it's economically unviable. There's a difference. We really should mandate that companies that don't pay fair market prices for the data they use to train their models must open source everything as reparation to humanity.
It is not an axiom that LLMs even have the right to achieve "liftoff". They are obvious instruments of plagiarism that often just reorder sentences so as not to get caught. They can be forbidden.
If you don't mind the oligarchs stealing your code, that is your prerogative. Many other people do mind.
Comment was deleted :(
AI can be thought of as a parasitic lifeform: it feeds on truth and excretes slop. We know AI is no good for us, but those pushing it have a nefarious plan: make people dependent on it, so we can't get rid of this parasite without destroying our society.
[dead]
[flagged]
And this is the new, fashionable, easy way to insult someone. Just say their work sounds like AI, and job done.
In this case they're not doing themselves any favors by leading their posts with obviously AI generated images. That just primes the reader to suspect the author is slopping it up before they even start reading.
You are right. And the fact that "your work sounds like AI" is an insult says everything you need to know about what AI generates :)
Do you want AI pushed down your throat?
Yeah, my Windows rebooted and then bam, this web page opened itself.
And the subscription fee for your browser went up 40% because adding anti-AI content is a value add. And no, you can't opt out.
You're implying they're the same in some way, but you haven't explained why.
Disable script execution and cookie storage for the OP's site and then attempt to view it. The page and its content load fine; the host injecting coercive messages urging you to enable tracking cookies and scripts is the same reason AI has been integrated into everything.
You either comply or face unnecessary roadblocks. OP has complied by sharing the link. My right to choose whether to accept tracking cookies and script execution is parallel to my right not to use, or be forced to use, AI. This issue has to be addressed universally: it's not simply "no AI" on the web; it's freedom to use the web, or compliance with the violation of that freedom.
The article renders well even if you block JavaScript, though.
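The claim that the article renders without JavaScript is easy to sanity-check: if the content is served in the initial HTML, stripping every script element leaves the text intact. A minimal sketch in Python; the HTML snippet is a made-up stand-in for the OP's page, not its actual markup:

```python
import re

# Hypothetical stand-in for a served page: article text in plain HTML,
# plus scripts that would normally run analytics / cookie prompts.
html = """
<html><body>
<article><h1>Example Post</h1>
<p>The actual post content lives in plain HTML.</p></article>
<script src="https://example.com/analytics.js"></script>
<script>document.cookie = "tracking=1";</script>
</body></html>
"""

# Drop every <script>...</script> block, simulating a no-JS browser.
no_js = re.sub(r"<script\b[^>]*>.*?</script>", "", html,
               flags=re.DOTALL | re.IGNORECASE)

# The article text survives with no script execution at all.
print("plain HTML" in no_js)      # True
print("analytics.js" in no_js)    # False
```

If a site's text disappears under this kind of stripping, its content is being rendered client-side, which is exactly the coupling the comments above object to.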
GP's point is that the page attempts to run unnecessary JS and that this is objectionable.
Hyperbole much? These are two different issues; you can of course write your own blog post about that.
Benefit of the doubt: this person wants to get their word out and it's more energy than they had to track down a pristine, pure, sparkling blogging engine.
Can you explain the similarity between JavaScript/cookies and LLMs?
You have the ability to turn that off…
I think the author cares about both.
Did that single sentence in this relatively short, 36-sentence post really make you flip the table as hard as you imply? That's surprising if so.
On the contrary, cannibalizing the commercial viability of original content creation is possibly the most short-sighted aspect of the current AI push. That isn't 'political', it's just a relatively conservative assessment of the content market.
Replace AI with Blockchain. With cloud. With big data. With ERP. With service models. With basically almost any fad since virtualization. It’s the same thing: over-hyped tools with questionable value being shoe-horned into every possible orifice so the investors can make money.
I’m not opposed to any of the above, necessarily. I’ve just always been the type to want to adopt things as they are needed and bring demonstrable value to myself or my employer and not a moment before. That is the problem that capital has tried to solve through “founder mode” startups and exploitative business models: “It doesn’t matter whether you need it or not, what matters is that we’re forcing you to pay for it so we get our returns.”
How can you say cloud is overhyped when we're at a point where running physical machines is a rare, specialized skill and companies can no longer run their own hardware?
The difference is the level of investment and consumer application for each service: most customers would never be able to tell you what an ERP is.
> Replace AI with Blockchain. With cloud. With big data. With ERP. With service models. With basically almost any fad since virtualization. It’s the same thing: over-hyped tools with questionable value being shoe-horned into every possible orifice so the investors can make money.
They said the exact same thing when electricity was invented, too. Gas companies said electricity was a fad. Some doctors said electric light harms the eyes. It's too expensive for practical use. It needs too much infrastructure investment. AC will kill people with shocks. Electrification will destroy jobs, said the gas-lamp unions. It's unnatural, said some clergy. And on and on and on...
Electric light indeed harms the eyes.
This post is just one multipurpose category error.
What an utterly bizarre comparison.
Even the blockchain comparison isn't valid, because it didn't consist of an "AI" button getting crammed into every single product and website, turned into a popover, etc.
There are nuances to the examples.
For example, I'm not a big fan of blockchain. In fact, I think crypto is just 99% scam.
But big data led to machine learning and LLMs, right? Cloud led to cheaper software with faster deploy times, right? In fact, cloud also meant many browser-based apps replacing old Windows apps.
None of these were fads. They are still tremendously useful today.
AI will almost certainly increase the productivity of some percentage of people, and the rest will fall behind, perhaps dramatically. Effectiveness with AI can still be a grind beyond simple prompting, and we are getting lots of expensive AI tools heavily subsidized right now; that may not always be the case.
Crafted by Rajat