I am not anti-AI
In every corner of my life right now I cannot seem to escape the increasing fervor of the ongoing debate around AI and machine learning. On one side you have people who are completely anti-AI, on the other people who are completely pro-AI, and any point made against either position gets read as a hardline stance: an attack to repel, or an error to correct.
As with basically _everything_ there is nuance, along with a lot of conflicting ideas that tend to get collapsed into single talking points, and I am getting caught in the middle. Over the last few days this has reached a fever pitch, to the point that I feel like I’m constantly on the defensive and on the edge of a panic attack.
I’m also very tired, and I want to just write this thing in my own little space where I can get some complete thoughts out before someone jumps down my throat in the middle of a series of posts. I’m not sure that this will have any real positive effect, but maybe a public rumination will help me work through the latest panic attack that woke me up after a particularly emotionally draining day.
First off, a big part of the problem is that “AI” is a very broad term that means a lot of different things. In the current discourse, it mostly comes down to LLMs (Large Language Models) and similar approaches to consolidating all of human knowledge into a gigantic model that can do all the things.
But AI covers a lot of ground. Even the same basic techniques for building an AI can have different implications at different scales.
_Generally speaking,_ as a rule of thumb: I am in favor of things that operate at a smaller scale, and opposed to things that operate at a larger scale.
If you are locally training a model on data that you can store yourself and vet for accuracy, sure! That’s great! This covers a lot of ground: data modeling, prediction, local search over internal knowledge, image segmentation, image classification, parameter tuning. All good stuff.
But if you are training a gigantic model that requires oodles of processing power and energy, that requires the non-consensual harvesting of data at a large scale, that causes operational harm to other people’s websites, services, and livelihoods, and that consumes every last shred of data on the Internet, that’s where I have a problem.
If you are building something that works as an assistive technology for someone, that’s great. Helping people with disabilities, helping people come up with ideas for how to build things, helping with education and learning, boosting someone’s creativity? I love that.
If you are building something that _replaces_ human creativity or understanding, though, I have a problem.
If your justification for something is “well, people have been doing this for years,” and “this” refers to something that would generally be considered cheating if a human were caught doing it (such as copying other people’s work, especially without attribution), that’s not a great defense.
For an example of what I mean by that: a common defense of LLM-based programming is that “people just copy-paste stuff from Stack Overflow, this is no different.” But it _is_ different:
* People post answers to Stack Overflow for the purpose of being used; there is _consent_ involved
* People post to Stack Overflow to help other people learn how to do things, not to have the code directly copied
* People (hopefully) consult Stack Overflow to learn how to do a thing, and not just how to get the specific piece of code to do the specific thing they’re trying to solve
Another defense of LLM-based stuff comes down to “citing Wikipedia” or the like, and many of the same counterarguments apply. People edit Wikipedia for the purpose of teaching others, and Wikipedia is not meant to be a primary source but a place to find primary sources. It is a perfectly fine starting point for further research, but it is not a thing to copy from wholesale. It is not a homework engine.
Another thing: I am totally in favor of technology that helps people not have to work so hard. But one of the awful side-effects of the current AI push is that it makes people work harder, while making the work they do less creative and more mundane.
I want AI to help me to automate my mundane things away so that I have more time and energy to work on the stuff that I enjoy doing. But the AI that’s being shoved down my throat is trying to replace the things I _like_ doing, such as solving problems or making music or drawing artwork, while I’m still left holding the bag of the tedium that exacerbates my chronic pain, like filling out forms and re-entering metadata ad infinitum.
AI can be an amazing, powerful tool to help people do more things more easily, but that isn’t how I see it being used. Instead, I see it as a force that gives businesses a justification to lay off knowledge workers while exploiting the ones who remain as babysitters for the AI: LLMs generate all of the web content and the code behind it, and humans are only brought in to clean up the messes left behind when the statistical models fail to see the whole picture.
I am tired of having to constantly find new ways to keep the AI-written scrapers that feed AI models from bringing down my hobby websites. Even though I go _out of my way_ to send the appropriate signals about which pages are worth scraping and which ones are just redundant views _for humans to use_, these crawlers pretend to be human while hammering my server at inhuman levels from hundreds of thousands of IP addresses, trying to extract as much data as possible, as if _this_ combination of tags is going to suddenly make new information appear out of the ether.
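(For the curious: the “appropriate signals” I mean are ordinary, well-documented mechanisms like robots.txt. Here’s a minimal sketch of the kind of thing I’m talking about; the user agents shown are real, documented AI crawlers, but the blocked paths are hypothetical examples rather than my actual site layout.)

```
# robots.txt: advisory rules that well-behaved crawlers are
# supposed to honor. The user agents below are documented
# AI-training crawlers; this list is illustrative, not exhaustive.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# For everyone else: skip the redundant, human-oriented views.
# (These paths are hypothetical examples.)
User-agent: *
Disallow: /search/
Disallow: /tags/
```

The catch, of course, is that all of this is purely advisory. A crawler that spoofs a browser’s user agent simply ignores it, which is exactly the problem.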
I am tired of how every time I open up my monthly budget spreadsheet I get told that things could be so much better if I pay $12/month for access to all-new creative ways of having AI “improve” the “creativity.”
I am tired of every web search leading to pages full of garbage based on the random dissociation of facts that look like they’re related, all because words mean different things in different contexts.
I am tired of every single webpage I visit having an obnoxious chat bot appear in the corner, trying to answer questions that I don’t have. And when I’m trying to get information that isn’t already on the website, it all goes through an AI that just regurgitates information from it and every other website, when the reason I’m asking in the first place is that it wasn’t on any website to begin with.
I am tired of looking for reference images of something, only to find that the images are complete fabrications, often deceptive ones.
I am tired of every semiconductor that I need for my own purposes becoming super expensive because the manufacturing capacity has been pre-committed to gigantic data centers that have yet to be built, all to fill a supposed need for the very things being brandished as a threat.
I am tired of being told that if I don’t embrace the tools of my own destruction I’m going to be left behind, when I’m already struggling just to survive after having been chewed up and spit out by an industry I used to be excited to be a part of.
I am tired of how whenever I run into a problem with an AI-driven knowledge base and point out flaws in its reasoning, I am told that I am simply prompting it wrong.
I am tired of the feeling that everything that I do must be in service of the AI models, and that the only thing that is of value is that which increases “global productivity,” usually at the expense of my own ability to survive.
I am tired of every decision about my ability to live being handed off to AI models trained on flawed data, models that never get the complete picture.
I am fucking **_tired._**
Anyway, the thing I’m trying to get at: Tools can be useful, but the tool is a means to an end, not the end itself. Don’t confuse the two.
And with that off my chest, maybe now I can get some sleep.