Alright, I don’t like the direction of AI any more than the next person, but this is a pretty fucking wild stance. There are multiple valid applications of AI that I’ve implemented myself: LTV estimation, document summary / search / categorization, fraud detection, clustering and scoring, video and audio recommendations… “Using AI” is not the problem; “AI charlatan-ing” is. Or in this guy’s case, “wholesale anti-AI stanning”. Shoehorning AI into everything is admittedly a waste, but writing off the entirety of a very broad category (AI) is just silly.
I don’t think AI is actually that good at summarizing. It doesn’t understand the text and is prone to hallucinate. I wouldn’t trust an AI summary for anything important.
Also, search just seems like overkill. If I type in “population of london”, I just want to be taken to a reputable site like Wikipedia. I don’t want a guessing machine to tell me.
Other use cases maybe. But there are so many poor uses of AI, it’s hard to take any of it seriously.
If I understand how AI works (predictive models), it kinda seems perfectly suited for translating text. That’s also exactly how I’ve been using it with Gemini: translating all the memes in ich_iel 🤣. Unironically, it works really well, and the only ones that aren’t understandable are cultural, not linguistic.
Oh, that’s the best part: since they’re memes, you honestly never know if they’re even meant to be completely sensible. So even if it does hallucinate, it just adds a bit of spice 🤌🤌
I also like the thought that billions were probably spent to make something that is best suited for deep-frying memes.
I feel like letting your skills in reading and communicating in writing atrophy is a poor choice. And skills do atrophy without use. I used to be able to read a book and write an essay critically analyzing it. If I tried to do that now, it would be a rough start.
I don’t think people are going to just up and forget how to write, but I do think they’ll get even worse at it if they don’t do it.
Our plant manager likes to use it (Copilot) to summarize meetings. It in fact does not summarize to a bullet-point list in any useful way: it breaks the notes into a header for each topic, then bullet points. The header is a brief summary. The bullet points? The exact same summary, just broken up sentence by sentence into individual points. Truly stunning work. Even better with a “Please review the meeting transcript yourself as AI might not be 100% accurate” disclaimer.
Truly worthless.
That being said, I’ve got a few vision systems using an “AI” to recognize product that doesn’t meet the pre-taught pattern. It’s very good at this.
I think your manager has a skill issue if his output is being that badly formatted. I’d tell him to include a formatting guideline in his prompt. It won’t solve his issues, but I’d gain some favor. Just gotta make it clear I’m no damn prompt engineer. lol
I didn’t think we should be using it at all, from a security standpoint. Let’s run potentially business-critical information through the plagiarism machine that Microsoft has unrestricted access to. So I’m not going to attempt to help make its use better at all. Hopefully, if it’s trash enough, it’ll blow over once no one reasonable uses it. Besides, the man’s derided by production operators and non-Kool-Aid-drinking salaried folk. He can keep it up. Lol
Okay, then self host an open model. Solves all of the problems you highlighted.
Right, I just don’t want him to think that, or he’d have me tailor the prompts for him and give him an opportunity to micromanage me.
But if the text you’re working on is small, you could just do it yourself. You don’t need an expensive guessing machine.
Like, if I built a Rube Goldberg machine using twenty rubber ducks, a diesel engine, and a blender to tie my shoes, and it got it right most of the time, that would be impressive. But also kind of a stupid waste, because I could’ve just tied them with my hands.
I guess this really depends on the solution you’re working with.
I’ve built a voting system that relays the same query to multiple online and offline LLMs and uses a consensus to complete a task. I chunk a task into smaller, more manageable components and pass those through the system, so one abstract, complex query becomes a series of simpler asks with a higher chance of success.

Is this system perfect? No, but I am not relying on a single LLM to complete it. Deficiencies in one LLM are usually made up for by at least one other, so the system works pretty well. I’ve also reduced the possible kinds of queries to a much more limited subset, so testing and evaluating the results is easier / possible.

This system needs to evaluate the topic and sensitivity of millions of websites, which isn’t something I can do manually in any reasonable amount of time. A human will be reviewing the websites we flag under very specific conditions, but this cuts down on a lot of manual review work.
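The setup described above (fan one query out to several models, take the majority answer, split a big task into simpler sub-queries) can be sketched roughly like this. This is a toy illustration, not the commenter’s actual code: the `backends` here are stub functions standing in for real online/offline LLM calls, and `classify_site` is a hypothetical example of chunking one task into sub-queries.

```python
from collections import Counter

def consensus(query, backends, threshold=0.5):
    """Relay the same query to several model backends and return the
    answer that more than `threshold` of them agree on, else None."""
    answers = [backend(query) for backend in backends]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes / len(backends) > threshold:
        return answer
    return None  # no consensus; flag for human review

def classify_site(url, backends):
    """Split one abstract task ("evaluate this website") into smaller,
    simpler sub-queries, each run through the voting layer."""
    topic = consensus(f"Topic of {url}, one word:", backends)
    sensitive = consensus(f"Is {url} sensitive? yes/no:", backends)
    return topic, sensitive

# Stub "models": two agree on each sub-query, one dissents.
backends = [
    lambda q: "news" if q.startswith("Topic") else "no",
    lambda q: "news" if q.startswith("Topic") else "no",
    lambda q: "blog" if q.startswith("Topic") else "yes",
]
print(classify_site("example.com", backends))  # → ('news', 'no')
```

The point of the voting layer is exactly what the comment describes: a blind spot in one model gets outvoted by the others, and anything without a clear majority falls through to human review.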
When I said search, I meant offline document search. Like “find all software patents related to fly-by-wire aircraft embedded control systems” from a folder of patents. Something like Elasticsearch would usually work well here too, but then I can dive further and get it to reason about results surfaced from the first query. I absolutely agree that AI-powered web search is a shitshow.
It’s just a statistics game. When 99% of stuff that uses or advertises the use of “AI” is garbage, having a mental heuristic that filters it out is very effective. Yes, you will miss the 1% of useful things, but that’s not really an issue for most people. If you need it, you can still look for it.
But what about me and my overly simplistic world views where there is no room for nuance? Have you thought about that?
I have ADHD and I have to ask A LOT of questions to get my brain around concepts sometimes, often because I need to understand fringe cases before it “clicks”. AI has been so fucking helpful for being able to just copy a line from a textbook and say “I’m not sure what they mean by this, can you clarify?” or “it says this, but also this, aren’t these two conflicting?” and having it explain. It has been a game changer for me. I still have to be sure to have my bullshit radar on, but that’s solved by actually reading to understand and not just taking the answer as is. In fact, scrutinizing the answer against what I’ve learned and asking further questions has felt like it’s made me more engaged with the material.
Most issues with AI are issues with capitalism.
Congratulations to the person who downvoted this
They use a tool to improve their life?! Screw them!
Here’s hoping over the next few years we see little baby-sized language models running on laptops entirely devour the big tech AI companies, and that those models are not only open source but ethically trained. I think that will change this community here.
I get why they’re absolutist (AI sucks for many humans today), but even above your post you see so much drive-by downvoting, which will obviously chill discussion.
Edit for clarity: Don’t hate the science behind the tech, hate the people corrupting the tech for quick profit.