iSURG


How much do you use AI?

Discuss
11 Posts 4 Posters 2 Views
m_f@discuss.online (#1)

This post did not contain any content.
salacious_coaster@infosec.pub (#2), replying to m_f@discuss.online
m_f@discuss.online (#3), replying to salacious_coaster@infosec.pub

You're using the term "plagiarism machine" derogatorily (I assume). Putting aside the discussion of "can it truly reason" and whatnot, and assuming it can be accurately summed up as a plagiarism machine, why is that a bad thing?
carrot@lemmy.today (#4), replying to m_f@discuss.online

There is no discussion to be had about current LLMs being able to reason. They cannot, full stop. They are an advanced form of autocomplete, nothing more. If you genuinely think that LLMs can reason, or are even close to reasoning, you need to research how they work. I'm not saying an AI that can genuinely reason is out of the question; it just can't be achieved with the methods used to create LLMs.

Also, plagiarism is bad? Many of the ways LLMs use the content they scrape would be indirectly illegal for a human to do, and as lawsuits are settled, will likely be illegal for LLMs to do as well.
m_f@discuss.online (#5), replying to carrot@lemmy.today

What exactly do you mean when you say "reason"? 90% of AI discourse is people using the same words and talking past each other because they mean different things, so it's good to define that sort of thing up front.
carrot@lemmy.today (#6), replying to m_f@discuss.online

Sure, when I say "reason" here I am using this dictionary definition: "The capacity for logical, rational, and analytic thought; intelligence."
m_f@discuss.online (#7), replying to carrot@lemmy.today

It's going to be tough to explore this through internet comments, but that just raises the question of "what do you mean by thought and intelligence?", which then turns into "what do you mean by understanding?" and lots of other similar questions, down a deep rabbit hole. I don't think it's really possible to make strong statements either way until we've come up with a more coherent theory underlying basic terms like that. I'd love to see a rigid and objective definition that we can measure LLMs against.
jet@hackertalks.com (#8), replying to m_f@discuss.online

Depends where you draw the line on what is AI.

AI as noise isolation on phones and VoIP
• I use noise cancellation all the time: Nvidia's, Discord's, and whatever Android has.

AI as image processing, text recognition, translation
• I use Google Lens extensively to translate menus and signs
• I use translation apps (many of which are built on neural networks) to communicate
• I use image processing to improve photos: remove noise, increase lighting, etc.
• I use image-to-text grabbers all the time in PDFs and ImageMagick, and I read OCRed books

AI as text summarization
• When I post YouTube videos on Lemmy I use AI to summarize the transcript, because Lemmy is text-based, and a translation from a visual medium to a text medium helps people participate in the conversation

AI as voice transcription
• I use video subtitle generation all the time, and voicemail text transcription all the time

AI as search
• Unavoidably I've been served AI "results" for searches in Google, DuckDuckGo, and others... and honestly, when I'm searching for something like "What is the hotkey to mute voice chat in application X" and the AI result is "the U key", I'm just going to hit the U key and see if it works.

AI as a chatbot
• Never. I don't trust the hallucinations. Knowing enough about Markov chains and token generation means I will avoid using AI as a source of truth for any decision-making process.

Image generation
• Never. I'm not philosophically opposed, but the services that do this all want some account/relationship. There isn't a duck.ai-style open and anonymous service yet.

The term AI is used for any algorithm nowadays, even when we're talking about hand-crafted human algorithms, self-weighting "machine learning" matrices, Markov chains, and multi-layer neural networks (which are just matrices again)... There is no artificial intelligence that does its own logically consistent reasoning. Right now everything called AI is just some tool or generated content (a tool).
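To make that concrete, here is a minimal sketch of the Markov chain idea mentioned above, in Python. The corpus and function names are invented for illustration; the point is that each next word is picked purely from observed successor frequencies, with no reasoning involved:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8):
    """Walk the chain, picking each next word at random from observed successors."""
    out = [start]
    for _ in range(length):
        successors = chain.get(out[-1])
        if not successors:  # dead end: no observed successor
            break
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

The output is locally plausible but has no global plan, which is the property being contrasted with genuine reasoning.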

Human effort asymmetry

Human time is expensive; AI time is cheap. When someone uses something cheap to burn an expensive resource, it's an insult and antagonistic.

AI is often used to waste human time, and I think this is the source of most of the anger. Talking to a chatbot when you want to talk to a human wastes your time. Spam calls, spam messages, and spam posts all exploit this asymmetry offensively.

Dead internet theory (pretty much a given at this point) is the logical conclusion of this asymmetry of effort.

I like places like Lemmy, or reputation-based forums, where interactions are genuine and have a high probability of being from a human. I'll gladly spend all day with humans because I feel that my contributions are improving someone's day. I wouldn't do that without the humans.

jet@hackertalks.com (#9), replying to m_f@discuss.online
                    LLMs generate tokens based on probabilities - they do not create thoughts that they can perform discrete logic with.

The chatbots are deceptive because you can ask questions with discrete logic requirements and they answer convincingly well, but that is because their training data set had many such questions in it, so it's really just token generation.
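The token-generation process being described can be sketched minimally: at each step the model assigns a probability to every possible next token, and one is sampled. The distribution below is invented for illustration (a real model scores tens of thousands of tokens per step), but it shows why a well-represented question gets a convincing answer without any logic being performed:

```python
import random

# Toy next-token distribution after the prompt "2 + 2 =".
# The "answer" is just the highest-probability continuation
# seen in training, not the result of arithmetic.
next_token_probs = {"4": 0.92, "5": 0.05, "four": 0.03}

def sample(probs):
    """Sample one token according to its probability mass."""
    r = random.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point rounding

print(sample(next_token_probs))  # usually "4", but not by reasoning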

If you've never played with an old-school "chat with ELIZA" bot, it's worth the effort. LLMs are just that, supercharged: there has to be some input to train on to make the response.
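ELIZA's core mechanism was pattern matching: a list of rules pairing an input pattern with a canned response template. A minimal sketch of that idea (the rules here are invented, not the original 1966 script):

```python
import re

# ELIZA-style rules: first matching pattern wins.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),  # catch-all
]

def respond(message):
    """Return the response of the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = pattern.match(message)
        if match:
            return template.format(*match.groups())

print(respond("I feel tired"))  # Why do you feel tired?
```

Nothing is understood; the reply is assembled from the input by a fixed rule, which is the comparison being made to token generation at a much larger scale.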

                    Of course people are trying to glue math and discrete algebraic systems on top of LLM output, but that still does not solve the problem of artificial general intelligence.

m_f@discuss.online (#10), replying to jet@hackertalks.com

                      Why don't they "create thoughts"? I mentioned this in another comment, but most discussions around AI are people talking past each other because they use the same words to mean different things.

                      It might seem absurd, but it's a lot harder to define words like "thought" than you'd think, because often the definition just leads to more questions. Wikipedia for example says "In their most common sense, they are understood as conscious processes that can happen independently of sensory stimulation.", but then what does "conscious" mean? Until we have a rigid definition for words like that all the way down to first principles, I wouldn't agree with definitive statements.

ELIZA is fundamentally different from an LLM, though; it's much closer to an expert system.

jet@hackertalks.com (#11), replying to m_f@discuss.online
I see what you're doing, but you're asking for too much formalism in a casual context. Satisfying the entire vocabulary from first principles would be a non-trivial task; it's so daunting I don't even want to attempt it here.
