How much do you use AI?
-
I have no need for a confidently incorrect plagiarism machine.
-
You're using the term "plagiarism machine" derogatorily (I assume). Putting aside the discussion of "can it truly reason" and whatnot and assuming it can be accurately summed up as a plagiarism machine, why is that a bad thing?
-
There is no discussion to be had about current LLMs being able to reason. They cannot, full stop. They are an advanced form of autocomplete, nothing more. If you genuinely think that LLMs can reason, or are even close to reasoning, you need to research how they work. I'm not saying an AI that can genuinely reason is out of the question; it just can't be achieved with the methods used to create LLMs.
Also, plagiarism is bad? Like, many of the ways LLMs use the content they scrape would be illegal for a human to do, and as the current lawsuits are settled, will likely become illegal for LLMs to do as well.
-
What exactly do you mean when you say "reason"? 90% of AI discourse is people using the same words and talking past each other because they mean different things, so it's good to define that sort of thing up front.
-
Sure, when I say "reason" here I am using this dictionary definition: "The capacity for logical, rational, and analytic thought; intelligence."
-
It's going to be tough to explore this through internet comments, but that just raises the question of "what do you mean by thought and intelligence?", which then turns into "what do you mean by understanding?" and lots of other similar questions, down a deep rabbit hole. I don't think it's really possible to make strong statements either way until we've come up with a more coherent theory underlying basic terms like that. I'd love to see a rigid and objective definition that we can measure LLMs against.
-
Depends on where you draw the line on what counts as AI.
AI as noise isolation on phones and VoIP
- I use noise cancellation all the time: NVIDIA's, Discord's, and whatever Android has.
AI as image processing, text recognition, and translation
- I use Google Lens extensively to translate menus and signs
- I use translation apps (many of which are built on neural networks) to communicate
- I use image processing to improve photos: remove noise, fix lighting, etc.
- I use image-to-text tools all the time on PDFs and via ImageMagick, and I read OCRed books (a rough sketch of the OCR step follows this list)
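For illustration, here's a minimal sketch of that kind of OCR step, assuming the pytesseract and Pillow packages with a local Tesseract install (my tooling choice for the sketch, not necessarily what's actually used above):

```python
# Hypothetical OCR example: extract text from a scanned page image.
# Assumes Tesseract is installed locally (pip install pytesseract pillow).
from PIL import Image
import pytesseract

def ocr_page(path: str) -> str:
    """Return the text Tesseract recognizes in a single page image."""
    return pytesseract.image_to_string(Image.open(path))

print(ocr_page("scan.png"))  # "scan.png" is a placeholder filename
```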
AI as text summarization
- When I post YouTube videos on Lemmy I use AI to summarize the transcript, because Lemmy is text-based and a translation from a visual medium to a text medium helps people participate in the conversation
AI as voice transcription
- I use video subtitle generation all the time, and I use voicemail text transcription all the time (a sketch of the transcription step follows below)
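For a concrete picture, here's roughly what local speech-to-text looks like with the open-source openai-whisper package (my choice of tool for the sketch; the comment doesn't name one):

```python
# Hypothetical transcription example using the openai-whisper package.
# Assumes ffmpeg is installed; "voicemail.mp3" is a placeholder filename.
import whisper

model = whisper.load_model("base")          # small general-purpose model
result = model.transcribe("voicemail.mp3")  # runs speech-to-text locally
print(result["text"])
```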
AI as search
- Unavoidably, I've been served AI "results" for searches on Google, DuckDuckGo, and others... and honestly, when I'm searching for something like "What is the hotkey to mute voice chat in application X" and the AI result is "the U key", I'm just going to hit the U key and see if it works.
AI as a chatbot
- Never; I don't trust the hallucinations. Knowing enough about Markov chains and token generation means I will avoid using AI as a source of truth for any decision-making process (see the toy Markov chain sketch below).
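To make the Markov chain point concrete, here's a toy word-level Markov chain text generator. It's a sketch of the general idea of sampling the next token from observed frequencies, not how any production LLM actually works:

```python
# Toy word-level Markov chain: sample each next word from counts of
# what followed the current word in the training text.
import random
from collections import defaultdict

def build_chain(text):
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)  # duplicates encode transition probabilities
    return chain

def generate(chain, start, length=10):
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
print(generate(build_chain(corpus), "the"))
```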
AI as image generation
- Never; I'm not philosophically opposed, but the services that do this all want some account/relationship. There isn't a duck.ai-style open and anonymous service for it yet
The term "AI" is being used to mean "algorithm" nowadays, even when we talk about hand-crafted human algorithms, self-weighting "machine learning" matrices, Markov chains, and multi-layer neural networks (which are just matrices again). There is no artificial intelligence that does its own logically consistent reasoning. Right now everything called AI is just some tool or content generated by a tool.
Human effort asymmetry
Human time is expensive; AI time is cheap. When someone uses something cheap to burn an expensive resource, it's an insult and antagonistic.
AI is often used to waste human time, and I think this is the source of most of the anger. Talking to a chatbot when you want to talk to a human is wasting your time. Spam calls, spam messages, and spam posts all exploit this asymmetry offensively.
Dead internet theory (pretty much a given at this point) is the logical conclusion of this asymmetry of effort.
I like places like Lemmy, or reputation-based forums, where interactions are genuine and have a high probability of being from a human. I'll gladly spend all day with humans because I feel that my contributions are improving someone's day. I wouldn't do that without the humans.
-
LLMs generate tokens based on probabilities - they do not create thoughts that they can perform discrete logic with.
The chat bots are deceptive because you can ask questions with discrete logic requirements and they answer convincingly well, but that is because their training data set had many such questions in it, so it's really just token generation.
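As a toy illustration of what "token generation" means (all numbers invented): the model scores every token in its vocabulary, and the next token is drawn from the resulting probability distribution.

```python
# Toy next-token sampling: softmax over invented scores, then a weighted draw.
# Real LLMs do this over vocabularies of tens of thousands of tokens.
import math, random

logits = {"Paris": 4.1, "London": 2.0, "banana": -1.5}  # hypothetical scores
exps = {tok: math.exp(v) for tok, v in logits.items()}
total = sum(exps.values())
probs = {tok: v / total for tok, v in exps.items()}     # softmax

next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_token)
```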
If you've never played with an old-school "chat with ELIZA" bot, it's worth the effort. LLMs are just that, supercharged; there has to be some input to train on to produce the response.
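For anyone who hasn't tried it, an ELIZA-style bot is just pattern-matching rules with canned reply templates, something like this minimal sketch (the rules here are invented; the 1966 original worked on the same principle):

```python
# Minimal ELIZA-style responder: regex patterns with canned reply templates.
import re

RULES = [
    (r"\bI am (.+)", "Why do you say you are {0}?"),
    (r"\bI feel (.+)", "How long have you felt {0}?"),
]

def eliza_reply(utterance):
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # fallback keeps the conversation moving

print(eliza_reply("I am tired of AI hype"))
```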
Of course people are trying to glue math and discrete algebraic systems on top of LLM output, but that still does not solve the problem of artificial general intelligence.
-
Why don't they "create thoughts"? I mentioned this in another comment, but most discussions around AI are people talking past each other because they use the same words to mean different things.
It might seem absurd, but it's a lot harder to define words like "thought" than you'd think, because often the definition just leads to more questions. Wikipedia for example says "In their most common sense, they are understood as conscious processes that can happen independently of sensory stimulation.", but then what does "conscious" mean? Until we have a rigid definition for words like that all the way down to first principles, I wouldn't agree with definitive statements.
ELIZA is fundamentally different from an LLM though; it's much more an expert system.
-
I see what you're doing, but you're asking for too much formalism in a casual context. Defining the entire vocabulary from first principles would be a non-trivial task; it's so daunting I don't even want to attempt it here.