They sound *exactly* the same

Artificial Ignorance · 12 posts · 4 posters
johnnycanuck@lemmy.ca #1

Sent to me by a friend; don't judge the misspelling of "strait" lol.

[Link preview image]
gladaed@feddit.org #2

A probable answer. That's not a sensible question, so a ridiculous answer is expected.

drbob@lemmy.ca #3

"Strait" is misspelled. Both straits and sounds are bodies of water, so it's a very sensible question. You might also ask what the difference between a cove and a bight is.

gladaed@feddit.org #4

Didn't make the connection. That's very difficult for transformers, since they don't hear the words, and they don't read the letters either. So this is a "don't use an AI for something it fundamentally cannot do" example.
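The "don't read the letters" point comes from subword tokenization. A toy sketch of the idea (the vocab here is invented for illustration and is not any real model's tokenizer): the model only ever sees opaque integer IDs, which carry no spelling and no pronunciation.

```python
# Invented toy vocab, purely illustrative.
vocab = {"stra": 0, "it": 1, "ight": 2, "sound": 3}

def tokenize(text):
    """Greedy longest-match subword tokenizer over the toy vocab."""
    ids = []
    for word in text.lower().split():
        i = 0
        while i < len(word):
            # Try the longest piece starting at i first.
            for j in range(len(word), i, -1):
                if word[i:j] in vocab:
                    ids.append(vocab[word[i:j]])
                    i = j
                    break
            else:
                i += 1  # no piece matched; skip this character

    return ids

print(tokenize("strait"))    # [0, 1]
print(tokenize("straight"))  # [0, 2]
```

"Strait" and "straight" are perfect homophones, but they come out as different ID sequences, and nothing about `[0, 1]` versus `[0, 2]` encodes that they sound alike.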

johnnycanuck@lemmy.ca #5

            An error in a question should either result in correcting the question or indicating that the question doesn't make sense.

Calling "straight" and "sound" homophones is a pure demonstration of the LLM's ignorance. Maybe it got fooled by "straight" and "strait" being homophones and somehow crossed wires, but that's actually the point. It is ignorant, despite how "intelligent" it might sound.

partial_accumen@lemmy.world #6

              I think you're holding a fundamental misunderstanding of what today's LLMs are.

> An error in a question should either result in correcting the question

LLMs don't have the ability to reason about what you may have meant. The most they can do, given the right training, is something like: "people who have used words or patterns similar to what you are using now meant X, Y, or Z, and of those, the highest probability with the words you chose would be X." This is exactly what it did.

> or indicating that the question doesn't make sense.

This would require the holy grail of AI, which doesn't exist yet: Artificial General Intelligence (AGI).

AGI is the ability to reason the way humans (and some animals) can. None of today's LLMs (Grok, Claude, ChatGPT, etc.) are AGI. They are all the much more limited ANI (Artificial Narrow Intelligence). ANI can only work with whatever training data you give it, and even giant LLMs today are only a tiny fraction of what a system would need to have AGI. None of our current technology can take the data we have today and build an AGI model. As the models scale, LLMs start to hit their limits and fall apart.
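The "highest probability" behaviour described above can be sketched with a toy next-word counter (the corpus is invented, and real LLMs are vastly more sophisticated, but the underlying statistical idea is the same: answer with the continuation seen most often after similar input, with no check on whether the question made sense):

```python
from collections import Counter

# Invented mini-corpus standing in for training data.
corpus = ("a strait is a narrow passage a sound is a wide inlet "
          "a strait is narrow").split()

# Count which word follows each word.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation: a crude stand-in for
    an LLM's next-token probability distribution."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("strait"))  # "is"
```

The counter always produces *some* high-probability continuation; it has no mechanism for replying "your question doesn't make sense."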

johnnycanuck@lemmy.ca #7

> I think you're holding a fundamental misunderstanding of what today's LLMs are.

I think you have a severe misunderstanding of what this community is.

partial_accumen@lemmy.world #8

I... assumed it was a community to point out where AI should work, but doesn't. In the example we have here, it's not a flaw of the LLM; instead, what is being asked of it is beyond its limits.

I don't make fun of my screwdriver because it's horrible at hammering in nails. If that's what this community is for, then the mistake is mine for posting here. My apologies.

johnnycanuck@lemmy.ca #9

> I... assumed it was a community to point out where AI should work, but doesn't.

...and that's what's happening in this case. You're acting like it's completely impossible for an LLM to handle a question containing a misspelling because it isn't AGI. In fact, to be useful, an LLM should handle this better. It certainly shouldn't start making up weird unrelated connections.

                    Also, it's not impossible, and I guarantee that some LLMs would give a more appropriate answer. But this particular LLM couldn't handle it, and went completely off the rails. Why are we not allowed to make fun of that? Why are you defending it from ridicule?

> I don't make fun of my screwdriver because it's horrible at hammering in nails.

Holy strawman. We aren't asking the LLM to be a different tool. The LLM is supposed to handle language, and a simple misspelling of a homophone caused it to misunderstand the question completely and sent it down a path of calling completely different words "homophones". Yeah, I wouldn't make fun of my screwdriver for not being able to hammer nails, but I would be pretty annoyed if it constantly slipped due to slight imperfections in how screws were manufactured.

partial_accumen@lemmy.world #10

I started typing out a point-by-point response to your post. You have many things wrong in it, but you've already communicated that this place isn't for discussion about how LLMs work or their underlying limits. I respect that this is your Lemmy community, and I have no intention of coming into your clubhouse and crapping all over your hobby however you define it. This is your space; I will play by your rules and take my criticisms with me on my way out.

                      If I've misunderstood and you want me to respond to your post, I'm happy to do so, but I won't without your permission.

johnnycanuck@lemmy.ca #11

                        Go ahead, I'd love to see what you have to say. I'd much prefer that to an arrogant implication of my stupidity.

partial_accumen@lemmy.world #12

> Go ahead, I'd love to see what you have to say. I'd much prefer that to an arrogant implication of my stupidity.

Not knowing how the underlying technology works isn't stupidity, but I can tell from your tone that you're spoiling for a fight and not interested in a friendly exchange of ideas. As I said, I'm not here to create drama in your community. I'll step away. I hope you have a great day.
