They sound *exactly* the same
-
> A probable answer. That's not a sensible question, so a ridiculous answer is expected.
"Strait" is misspelled. Both straits and sounds are bodies of water, so it's a very sensible question. You might also ask what the difference between a cove and a bight is.
-
Didn't make the connection. Very difficult for transformers, since they don't listen to the words. They don't read the letters either. So this is a "don't use an AI for something it fundamentally cannot do" example.
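To make that concrete: the model only ever sees integer token IDs, not letters or sounds. A minimal sketch using OpenAI's tiktoken tokenizer (assuming you have it installed; the exact splits vary by model):

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by several OpenAI chat models.
enc = tiktoken.get_encoding("cl100k_base")

for word in ["strait", "straight", "sound"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{word!r} -> token ids {ids} -> pieces {pieces}")

# Nothing in those integers encodes spelling or pronunciation, so
# "sounds the same" is not something the model can check directly.
```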
-
An error in a question should either result in correcting the question or indicating that the question doesn't make sense.
Calling "straight" and "sound" homophones is a pure demonstration of the LLM's ignorance. Maybe it got fooled by "straight" and "strait" being homophones and somehow crossed wires, but that's actually the point. It is ignorant, despite how "intelligent" it might sound.
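The homophone claim is even checkable mechanically. A minimal sketch, assuming the pronouncing package (a small wrapper around the CMU Pronouncing Dictionary):

```python
# pip install pronouncing
import pronouncing

# Look up dictionary pronunciations (ARPAbet) for each word.
for word in ["straight", "strait", "sound"]:
    print(word, "->", pronouncing.phones_for_word(word))

# "straight" and "strait" share the pronunciation S T R EY1 T, so they
# are genuine homophones; "sound" (S AW1 N D) obviously is not.
```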
-
I think you're holding a fundamental misunderstanding of what today's LLMs are.
> An error in a question should either result in correcting the question
LLMs don't have the ability to reason about what you may have meant. The most they can do, if they are exposed to the right training, is something like: "people who have used words or patterns similar to yours meant X, Y, or Z, and of those, the highest-probability meaning given the words you chose is X." This is exactly what it did.
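In effect it's doing something like this toy sketch, where the candidate readings and the numbers are entirely made up for illustration:

```python
# Toy illustration only: real models score token sequences, not whole
# interpretations, and these probabilities are invented.
candidate_readings = {
    "difference between a strait and a sound (geography)": 0.55,
    "difference between the words 'straight' and 'sound'": 0.30,
    "something else entirely": 0.15,
}

# Answer as if the user meant whichever reading scored highest...
best = max(candidate_readings, key=candidate_readings.get)
print(f"Answering as if the user meant: {best}")

# ...with no separate step that asks "does this question make sense?"
```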
> or indicating that the question doesn't make sense.
This would require the holy grail of AI, which doesn't exist yet: Artificial General Intelligence (AGI).
AGI is the ability to reason the way humans (and some animals) can. None of today's LLMs (Grok, Claude, ChatGPT, etc.) are AGI. They are all the much more limited ANI (Artificial Narrow Intelligence). ANI can only work with whatever training data you give it, and even today's giant LLMs are only a tiny fraction of what a system would need to be AGI. None of our current technology can take the data we have today and build an AGI model, and as the models scale up, LLMs start to fracture and fall apart at their limits.
-
> I think you're holding a fundamental misunderstanding of what today's LLMs are.
I think you have a severe misunderstanding of what this community is.
-
I... assumed it was a community to point out where AI should work, but doesn't. In the example we have here, it's not a flaw of the LLM; rather, what is being asked of it is beyond its limits.
I don't make fun of my screwdriver because it's horrible at hammering in nails. If that's what this community is for, then the mistake is mine for posting here. My apologies.
-
> I... assumed it was a community to point out where AI should work, but doesn't.
...and that's what's happening in this case. You're acting like it's completely impossible for an LLM to handle a question that contains a misspelling, just because it isn't AGI. In fact, to be useful, an LLM should handle this better. It certainly shouldn't start making up weird, unrelated connections.
Also, it's not impossible, and I guarantee that some LLMs would give a more appropriate answer. But this particular LLM couldn't handle it, and went completely off the rails. Why are we not allowed to make fun of that? Why are you defending it from ridicule?
> I don't make fun of my screwdriver because it's horrible at hammering in nails.
Holy strawman. We aren't asking the LLM to be a different tool. The LLM is supposed to handle language, and a simple misspelling of a homophone caused it to misunderstand the question completely and sent it down the path of calling two completely different words "homophones". Yeah, I wouldn't make fun of my screwdriver for not being able to hammer nails, but I would be pretty annoyed if it constantly slipped due to slight imperfections in how screws were manufactured.
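For what it's worth, recovering from one letter of drift is old, boring technology. A minimal sketch using Python's standard-library difflib (the vocabulary list here is a made-up stand-in):

```python
import difflib

# Hypothetical vocabulary of geographic terms the question could be about.
geo_terms = ["strait", "sound", "cove", "bight", "bay", "channel"]

# "straight" isn't a geographic term, but fuzzy matching recovers the
# likely intended word before any answering happens.
print(difflib.get_close_matches("straight", geo_terms, n=1, cutoff=0.6))
# -> ['strait']
```

Not that the bot should literally run difflib; the point is that surviving a one-letter slip is a solved problem for tools whose entire job is language.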
-
I started typing out a point-by-point response to your post. You have many things wrong in it, but you've already communicated to me that this place isn't for discussion about how LLMs work or their underlying limits. I respect that this is your Lemmy community, and I have no intention of coming into your clubhouse and crapping all over your hobby, however you define it. This is your space, I will play by your rules, and I'll take my criticisms with me on my way out.
If I've misunderstood and you want me to respond to your post, I'm happy to do so, but I won't without your permission.
-
Go ahead, I'd love to see what you have to say. I'd much prefer that to an arrogant implication of my stupidity.
-
Not knowing how the underlying technology works isn't stupidity, but I can tell from your tone that you're spoiling for a fight and not interested in a friendly exchange of ideas. As I said, I'm not here to create drama in your community. I'll step away. I hope you have a great day.