I see this unfolding pretty rapidly in my tech job. Rather than write code by hand, many people now use AI. I can usually tell when someone is doing that, because the result doesn't really work, and the AI code generators apparently lack a good template for packaging things up the way a person would. There's weird emphasis on nonsensical details, and big things are missing.
Similarly, people do "research" by collecting a bunch of AI slop into documents, so the whitepapers and other artifacts that corporations produce to generate and work with ideas are now just computer-generated stuff that's often filled with incorrect information. For some strange reason, people assume the AI-generated stuff is complete and authoritative, in spite of numerous contrary anecdotes on YouTube and contrary personal experience.
In my personal experience, the LLM-generated slop has maybe a 50/50 chance of being "correct" in a useful way on specific queries. Some code it generates contains syntax errors, for example, which is pretty bizarre to me; I thought it would somehow run things through a compiler to cross-check its generated code, but apparently not. Answers on esoteric subjects with very little online content are generally just outright incorrect.
The AI looks "productive," but it's actually akin to rabbit meat for a starving man, or ocean water for a thirsty one. The "value" for a person, or for an institution like a corporation, in having people work on ideas is that it builds up a sort of shared concept model of whatever it is people are doing. This enables people to work together and make things. The reports, white papers, plans, and similar dross that corporations produce are themselves actually worthless; often nobody ever looks at them again.
The AI sloppification of work will produce piles of the same dross, but with no associated creation of a shared concept model, because people will be engaged with what is basically a hallucination of their ego projected into a chatbot, rather than constructively building concepts with their own brains. There are some reports out there of people descending into psychosis after chatting with their chatbot hallucinations for hundreds of hours. They end up acting like the Hollywood depictions of insanity. Back in the 70s and 80s, one of the movie tropes about insane people was that they thought they were Napoleon or Jesus. People are chatting themselves into a similar scenario.
It will not take long for this scenario to reach its logical conclusion. The stuff companies make, especially at tech companies, at similarly IP-heavy companies like law firms, and at tech-dependent companies like manufacturers who rely on programmed machine tools, will slowly and steadily become moribund and dysfunctional. Most corporations are already some version of that. The zero-nutrition AI stuff will exacerbate the tendency.