• 0 Posts
  • 90 Comments
Joined 3 years ago
Cake day: June 22nd, 2023






  • I mean, I agree that it’s probably vastly overvalued as a whole; the leap between current LLM capabilities and an actual trusted engineer is pretty big, and it seems like a lot of people are valuing them at the level of engineer capabilities.

    But one caveat is that simulated neural networks are a technological avenue that could theoretically get there eventually (probably; there are still a lot of unknowns about cognition, but large AI models are starting to approach the scale of the neurons in the human brain, and as far as we can tell there’s no quantum magic involved in cognition, just synapses firing, which neural networks can simulate).

    And the other caveat is the bear trash can analogy… the whole park ranger story about how it’s impossible to make a bear-proof trash can because there’s significant overlap between the smartest bears and the dumbest humans.

    Now I don’t think AI is even that close to bear level in terms of general intelligence, but a lot of current jobs don’t require that much intelligence; we just have people doing them because there’s some inherent step in the process that’s semantic or fuzzy-pattern-matching based, and computers / traditional software just couldn’t do it before. So we have humans doing stuff like processing applications where they’re just mindlessly reading, looking for a few keywords, and stamping (a rough sketch of what I mean is at the end of this comment). There are a lot of industries where AI could literally be the key algorithm needed to fully automate the industry, or radically reduce the number of human workers needed.

    Crypto was like ‘hey, that decentralized database implementation is pretty cool, but in what situations would that be useful?’ And the answer was basically just ‘laundering money’.

    Neural network algorithms on the other hand present possible optimizations for a truly massive number of tasks in society that were otherwise unautomatable.
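
    To make the ‘mindlessly reading, looking for keywords, stamping’ step concrete, here’s a rough hypothetical sketch; the keywords, field names, and approval rule are all made up for illustration:

    ```python
    # Hypothetical sketch of the "read, look for keywords, stamp" step that a
    # human (or very rigid traditional software) performs today. All keywords
    # and example applications are made up.
    REQUIRED_KEYWORDS = {"full-time", "licensed", "five years"}

    def stamp_application(application_text: str) -> str:
        """Approve only if every required keyword literally appears in the text."""
        text = application_text.lower()
        if all(keyword in text for keyword in REQUIRED_KEYWORDS):
            return "APPROVED"
        return "REJECTED"

    # The literal keyword check passes the first application but fails the second,
    # even though a human reader would approve both. That fuzzy, semantic gap is
    # exactly the step a language model could plausibly fill.
    print(stamp_application("Full-time, licensed, five years of experience."))        # APPROVED
    print(stamp_application("I have been licensed and working full time since 2019."))  # REJECTED
    ```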








  • I don’t misunderstand how they work at all.

    Quite frankly, what you’re saying doesn’t matter in the context of my point. It literally does not matter that they are language based rather than logic based, as long as they produce helpful results, and they inarguably do. You are making the same type of point my middle school librarians made about Wikipedia: you’re getting hung up on how it works, and since that’s different from how previous information sources worked, you’re declaring that they cannot be trusted, ignoring the fact that regardless of how they work, they are still right most of the time.

    As I said, it is far faster to ask Copilot web a question about Salesforce and verify its answers than it is to try to search their nightmarish docs manually. The same goes for numerous other things.

    Everyone seems so caught up in the idea that it’s just a fancy text prediction machine and fails to consider what it says about our intelligence that those text prediction machines are correct so often. Anthropological research has long suggested that language is a core part of why humans are so intelligent, yet everyone clowns on a language-based collection of simulated neurons as if it couldn’t have anything remotely to do with intelligence.
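
    For what it’s worth, ‘fancy text prediction’ at its most basic looks something like this toy bigram sketch; real LLMs learn billions of parameters instead of counting word pairs, but the task being solved is the same:

    ```python
    # Toy "text prediction": predict the next word from counts of what followed
    # each word in a tiny training text. Purely illustrative.
    from collections import Counter, defaultdict

    training_text = "the cat sat on the mat and the cat slept on the mat"

    counts = defaultdict(Counter)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1

    def predict_next(word: str) -> str:
        """Return the word most often seen after `word` in the training text."""
        return counts[word].most_common(1)[0][0]

    print(predict_next("the"))   # "cat" (ties broken by first occurrence)
    print(predict_next("cat"))   # "sat"
    ```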


  • … Y’know, everyone sucking off “AI” while it’s still wrong a great number of times kinda makes sense, when you factor in how fucking stupid most people are…

    Everyone who clowns on AI for being wrong sometimes sounds like my hysterical middle school teachers insisting you can’t trust Wikipedia because anyone can edit it.

    There are lots of systems that are known to inherently produce errors and rely on error correction mechanisms to accommodate them. The RAM in satellites is constantly being bombarded by cosmic rays that randomly flip bits, but it can account for this with error-correcting memory. At a simplified level, error correction works like this: when you write data, you store each bit as three bits instead of one; to read a bit back, you check all three and discard any outlier (see the sketch at the end of this comment). Real ECC memory uses more sophisticated math so it only needs 8 error-correcting bits for every 64 data bits, but that is the general principle of error correction.

    Similarly, quantum computers have inherent fluctuations and unpredictability in their results due to the underlying nature of quantum mechanics. But they are still so much faster at solving certain problems that you can run them multiple times, discard the outliers, and still get your answer orders of magnitude faster than a classical computer could.

    AI being wrong sometimes is like this and this is why not everyone thinks it’s a huge deal. Copilot web can still parse and search the nightmarish spider web of salesforce docs and give me an answer orders of magnitude faster than I can, even using Google. It doesn’t matter if it’s occasionally wrong and it’s answers require me to double check them when it’s that much faster to give each answer.