I’ve been in a bit of a funk for the last week or so. I’m really good at giving myself misery. The funk mostly comes from the kind of work I’m having to do with my Day Job. It really has me down, which leads to me thinking all sorts of dark thoughts.
At the moment I’m dealing with the funk by sitting in a Starbucks with Michael Gallowglas. We’re both working on different writing projects. He’s writing in a notebook with different colored pens. I’m typing on a keyboard I made myself, writing a post for a blog that I’ve been maintaining for over 10 years. From a certain perspective, our writing is a never-ending pile of work.
Unlike the Day Job tasks, when I look at all the writing I want to do, I don’t feel dread. I don’t feel overwhelmed. It’s the opposite, actually. I look forward to the writing. I’m glad I have so many writing projects in front of me, and I look forward to those times when I have the energy and time to invest.
Yesterday, I spent most of the day moving things around in my garage. It’s labor that I’ve needed to do for some time, but I kept putting it off. I still have more to do, and sneaking off to Starbucks is a way for me to put it off further. There is a lot of work to do, but at some point yesterday, I stopped dreading it so much. I started to see the benefits of having a clean space in the garage, with things put in their right place. There is value in getting everything in order out there, because it also means I’ll get to work on keyboards again.
Looking back at my Day Job, things there have changed in such a way that there is always a mountain of work in front of me. It is overwhelming and discouraging, and I’m starting to hate my job. I’m not sure how much longer I can keep going like this, and I’ve been there long enough that I’m afraid that anywhere else I go will be just as bad or worse. Furthermore, I don’t think I’m as good a programmer as I was five or ten years ago. I’m still extremely valuable to my current employer, even though the work has changed to something I cannot stand.
It would be really nice if I could make writing my day job.
I’m not afraid to work hard. I look forward to it. I just want it to be work that I believe in, and that fulfills me. We should all be striving for that. In a perfect world, all of our basic needs are taken care of, leaving us to pursue our passions so that we can make the world an even better place through our art and the things we love. Maybe that isn’t a perfect world as much as a fantastical one, but it’s still worth pursuing.
There is a difference between work that is put on our shoulders and work that we pick up ourselves. I’m looking forward to getting home and getting the rest of my workspace in order so that I can work on a keyboard that I intend to give away at the Writing Excuses Retreat at the beginning of September. There is a lot of effort between here and a finished product, but that effort will satisfy me and leave me feeling fulfilled.
And even some unexpected work can be a treat if it’s the right kind of work. To bring this full circle, the keyboard that I’m using right now needs work. I accidentally left it in the car for the last week, and the Sacramento sun did some very unfortunate reshaping of the thing. It’s still fully functional, but the case is cracked and warped. It looks like ass. I discovered it in this state a few minutes ago, and what I see is an opportunity to reprint it, rebuild it, and make it shiny and new again. It’s a lot of work that I didn’t expect, but I kind of love it, all the same.
If you’ve spent any time at all on the Internet, you’ve probably seen someone state information that is completely incorrect. They may or may not be stating this information with a degree of confidence. Perhaps it’s a comment on a YouTube video. Perhaps it is a blog post, like this one. How do you respond? What is your emotional journey when presented with something you know to be false?
Have you ever been wrong on the Internet? What was that experience like for you?
From what I have observed, the greatest sin on the Internet is to be wrong about something or to hold an unpopular opinion. This is what I want to address today, sort of as an exercise in empathy.
Basic Ignorance and XKCD
Let’s start with basic ignorance, and Randall Munroe’s take on it:
If you read much XKCD, you’ll find a genuine enthusiasm for learning things. The first step in learning is admitting that you don’t know a thing, that there is a void in your knowledge, and then the process of learning is what we do to fill that void.
I find that basic ignorance is relatively easy to forgive.
Just now, I described ignorance as a void in our knowledge. Sometimes, we can ignore that void. Other times, we bridge over it with assumptions. Let me give you an embarrassing example.
How the seasons work
I was deep into my 20’s before I learned that the seasons were opposite between the northern and southern hemispheres. I knew that the Earth was tilted on its axis, but I didn’t attribute the changing of the seasons to that tilt. Maybe I wasn’t taught that part in school, or maybe I wasn’t paying attention that day. I don’t know that I have an excuse.
I knew we had seasons, I knew the Earth orbited the sun, and I knew that our orbit is somewhat elliptical. In my mind, it made sense that summer came when the Earth was closest to the sun, so I assumed that the seasons were driven entirely by Earth’s proximity to our nearest star.
How was I corrected? I was talking with someone about writing a weather system for a game, and it grew into a very technical discussion on how to implement the system according to the tilt of a planet’s axis. My conversation partner was a genuine rocket scientist, and she was alarmed to discover how wrong I was on this subject.
She was alarmed, but she wasn’t cruel. She tried to correct me, and I provided some resistance at first, because my assumptions were deeply embedded. I wasn’t rude, and then I learned something, and I felt embarrassed.
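For anyone curious what the corrected model looks like in practice, here’s a minimal sketch of the tilt-based idea we were talking about. The function name and the simple cosine approximation are my own illustration, not the actual weather system from that conversation, but it shows why the hemispheres get opposite seasons (and why proximity can’t be the driver; Earth is actually closest to the sun in early January, in the middle of northern winter).

```python
import math

AXIAL_TILT_DEG = 23.44  # Earth's axial tilt

def solar_declination(day_of_year: int, tilt_deg: float = AXIAL_TILT_DEG) -> float:
    """Rough solar declination (degrees) for a given day of the year.

    Positive: the sun sits north of the equator, so the northern hemisphere
    gets the more direct light. Negative: the south gets its turn.
    """
    # Day ~172 is the June solstice; a cosine is a decent approximation.
    return tilt_deg * math.cos(2 * math.pi * (day_of_year - 172) / 365.25)

print(solar_declination(1))    # ~ -23 degrees: southern summer, northern winter
print(solar_declination(182))  # ~ +23 degrees: northern summer, southern winter
```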
Knowledge is knowledge, ignorance is ignorance
The example I just gave is a fairly harmless subject. We were literally just talking about the weather. I want to suggest that the subject doesn’t matter when it comes to knowledge or ignorance. We have either learned something or we haven’t, and regardless of the subject, not knowing is not a moral shortcoming.
A more harmful subject is homophobia. Here are some simple facts that homophobes do not know: homosexuality isn’t a choice; gay people have the same feelings as straight folks; exposure to literature that features homosexuality does not make a person gay. There are lots of facts a homophobe may not be aware of, actually, but that seems like a good list to start with.
Why am I choosing this particular topic? Because it is another one in which I was the ignorant asshole. Correcting that ignorance has had a profound impact on my life.
I grew up in the 70’s and 80’s, and my parents were conservative. In my teens, I had a girlfriend who went to church, and then I started going and became a bit of a Bible thumper myself. The environment I grew up in wasn’t particularly cruel to gay people, but it was not kind or empathetic, either. No one sat down and had a conversation with me about homosexuality. The subject wasn’t brought up until I went to church, at which point, I was taught it was a sin.
During that time, using “gay” as a derogatory descriptor was not out of the question. It was in my vocabulary. And I didn’t really think about it.
Fast forward to early 1994. I had not met Melissa yet, and I was stationed at Holloman AFB, where “there is a beautiful woman behind every tree.” I was still feeling trauma after a painful breakup, and I was single and lonely. That’s when I found out my high school friend Nancy was going to school in Phoenix, which was just an easy 6-hour drive away. There had never been any kind of romantic attraction between Nancy and me, but you can bet I contemplated the possibilities during that long drive to go visit her.
We met and had a good time. We talked about high school, and cars, and what it was like for each of us escaping Oregon. Then, back at her apartment, she opened up to me. She trusted me enough to tell me about her girlfriend.
I didn’t blow up or yell at her, but I didn’t respond very well. I was not understanding or supportive. Honestly, I don’t remember what I told her. I probably regurgitated some garbage I’d picked up from church. I didn’t raise my voice, but I did make her cry.
That gave me another 6-hour drive to think about things, and I knew I’d somehow made a mistake. I hurt my friend. I didn’t mean to. But I hurt her, and to this day, I haven’t spoken to her since. I wish I could. I truly hope that she’s happy and free to be herself, the true self she was born to be.
Ignorance is forgivable, but actions have consequences
The consequences of my actions were that I lost a dear friend and have a weight of regret to carry in my heart for the rest of my days. What I did or said in ignorance was light by comparison to what we’ve seen bigots do in recent years.
I want to make it clear that it is okay to be wrong about something. The ignorance is forgivable. But you are still responsible for what you do in your ignorance. The ignorance itself is not an excuse to be cruel or do evil.
We are always responsible for our actions, no matter how ignorant we are while we’re doing them.
What to do when we find an ignorant person
For starters, try not to attack them.
I’ve given two examples of my own points of ignorance, and in neither case was I personally insulted or attacked for what I did not know. When you start with personal attacks, the other person’s mind closes. They stop listening. They are no longer capable of learning. All they can and will do is defend their position, regardless of how little standing they have.
Everyone has their pride. If you allow someone to hold onto their pride, they are more likely to retain the information they are given.
All too often, I see people get absolutely dog-piled for saying something stupid on the Internet. They get called names. They get insulted. They are torn down. They are given no room to retain any sort of composure. It’s like people want to make an example of them, and keep other ignorant people from speaking up.
My words here are not going to change the way people behave on the Internet. However, if you’re reading what I’m saying, and sometime in the future you find yourself presented with a person who is espousing something wildly inaccurate, please take a moment to consider why they might be so wrong, and what will happen if you attack them for their wrongness.
That’s probably all I have to say on this subject for now. Please be patient when you can. While I implore everyone to treat each other with a greater amount of kindness and empathy, I know that this is a big ask sometimes, and it’s okay to just ignore someone’s mistakes and ignorance, too. There are a lot of ignorant people on the Internet, and it is not your responsibility to correct everyone.
A couple of days ago, I wrote down a bunch of thoughts on A.I. I didn’t really get into how it works. I talked about what it produces, and how it is best to use it to assist artists rather than replace them. By not talking about how these large language models work, I skipped right over the biggest issue with this software, so that’s why I’m revisiting this topic so soon.
Current A.I.’s are trained on tremendous datasets. ChatGPT would not be able to create a story in the style of Mark Twain if it wasn’t fed a bunch of Mark Twain stories. Art A.I.’s are trained on art in much the same way. We feed a huge amount of data into the machine, and then based on what we’ve given it, we ask for extrapolations from its datasets in order to generate a “new” product.
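To make that concrete on a toy scale, here’s a minimal sketch of the idea using a tiny word-level Markov chain rather than a real large language model. It is nothing like a transformer under the hood, but the principle I’m pointing at is the same: nothing comes out of the machine that wasn’t statistically derived from what was fed into it.

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Build a table of which word tends to follow which, from the training text."""
    words = text.split()
    table = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        table[current].append(nxt)
    return table

def generate(table: dict, start: str, length: int = 20) -> str:
    """Extrapolate a 'new' sequence purely from the training table."""
    word, output = start, [start]
    for _ in range(length):
        followers = table.get(word)
        if not followers:  # the model has nothing it wasn't fed
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the reports of my death are greatly exaggerated and the reports keep coming"
model = train(corpus)
print(generate(model, "the"))
```

The output can look novel, but every word and every transition came from the corpus. Scale that up by a few billion and you have the spicy question below.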
It’s the training data where things get spicy. Were the writers and artists who produced the work being fed into the machine asked whether their work could be used in such a way? If not, how is that not theft?
As I said just after my birthday, you cannot copyright style. But let’s not be cowards and hide behind the law. Is it moral and right for Grammarly to feed its A.I. using customer data? Is it okay for Google to scan emails and Google Drive documents in order to feed its A.I.?
The cost to acquire this technological marvel is the dataset that is fed into the machine. These large companies are taking a socialist approach to handling that cost, but then preparing to turn around and sell the product in the most capitalist way. That’s one way to look at it, a way to unemotionally determine that what is happening with A.I. is wrong.
But fuck unemotional. No one wants their voice stolen and then reused without their consent. A writer or artist brings their voice to the material of their stories and their art. When we talk about an artist’s voice, are we not talking about style? Isn’t that what we’re training these machines to duplicate?
In my previous post, I truthfully stated that I think the technology has the potential to be amazing. It does. Now I’m saying that the technology comes with a price tag, which is that your art, stories, and voice will get used, poured into a digital stew and served up to a machine god that will always be hungry for more.
Would it make any difference if the A.I. created from all of our combined voices could not be used to create profit for someone else?
I think that would help, but it wouldn’t solve the whole problem. I put this blog up for free, and I’m always hoping that my words will enter other people’s eyeballs or ears and provide pleasure or sustenance to someone else’s brain. If someone (foolishly) trained an A.I. on this publicly accessible version of my voice, even if they didn’t make a dime off of it (which they wouldn’t), they would still have stolen my voice. Maybe I’d be okay with it if they used my voice to create some kind of public good. I think I would still want to be credited, or have some say in how my voice is used.
In summary, once again, I’m a fan of the technology in principle, but I am not a fan of how the technology is built. I think there are still questions of morality around building A.I. using the work of other people. And I still believe that artists, writers, and programmers are going to be hurt by this technology as companies learn to really misuse it.
One of the key points of the current strikes has to do with the use of A.I. in the arts. Writers don’t want to be replaced. Actors don’t want their voice and appearance copied and duplicated forever, used over and over by the studios without compensating the original actors. There are other factors involved, but today I’m focusing on A.I. and its use in the arts.
There are some talking points with regard to A.I. that get passed around. One is about semantics, in which people say, “It’s not really artificial intelligence.” There is another argument regarding the creativity of A.I., which says, “It’s not producing anything new, it’s just stealing from what’s come before.” I think there are a few other points, but these are the main two I see on social media.
Personally, I don’t think it matters if you call it A.I. or not. It performs. It’s generating stories, and artwork, and it’s writing code. The quality of what is being generated is up for debate, but what it’s creating is passable, and getting better all the time.
With regard to what the A.I.’s are producing, I don’t know that it matters if the machine is actually creating or not. To the artists who are getting duplicated, it matters. Do not misunderstand what I’m saying here. What I’m saying is that a machine is taking a bunch of different sources and combining them to produce things that can appear novel. I don’t think it matters if the muse whispered into the A.I.’s figurative ear or not.
As an example, I asked the 3rd generation of ChatGPT to write a short Cyberpunk story in the style of Mark Twain. It generated what I asked for. Was it good? Not really. But it was interesting, and it was novel. It took styles that existed in the world before and blended them together to make something new. Arguing that the machine was not creative does not matter in the face of the evidence, which was a Cyberpunk story in the style of a writer who had been dead for more than seventy years before the genre was even born.
Am I saying that any of this is good? Am I arguing for the machines, advocating for the A.I. right now?
Hell no.
The machines should be used to do the labor we don’t want to do, freeing up humans to make art and beauty. It should not be the other way around, which is what the studio execs seem to want.
I have a coworker who routinely uses ChatGPT to generate code for him. It is not making him a better programmer, but I can see the logic. My coworker is using the machine to do the menial tasks he doesn’t want to do himself. He’s trying to get A.I. to save him time, which seems like the correct use of the application.
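For a sense of what that workflow looks like, here’s a short sketch using the pre-1.0 interface of the openai Python package. The model name and the prompt are my own made-up example of a menial task, not my coworker’s actual setup.

```python
import os
import openai  # pre-1.0 interface of the openai package

openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask for the kind of boilerplate nobody enjoys writing by hand.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Write a Python function that validates a US phone "
                    "number string and returns it normalized to digits only."}
    ],
)

print(response["choices"][0]["message"]["content"])
```

The human still has to read the result, test it, and own it. The machine just saves some typing.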
I can see a future where gaming is made even more awesome with the addition of A.I. Imagine being a GM: you’re running a game, and the players decide to take a path you’re completely unprepared for. Imagine being able to turn to the A.I. and have it generate bits of the world you hadn’t considered, with some spontaneous NPCs and maybe a dungeon or encounter to unveil. I don’t know. I think it could be a pretty cool asset in those cases.
Imagine a computer game where NPCs behave in interesting, semi-realistic ways, because an A.I. is powering their dialog and their reactions. Sandbox games could be immediately elevated with that kind of technology behind them.
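As a sketch of what that might look like under the hood, picture each NPC carrying a little persona and memory, and handing them to a text generator along with the player’s line. Everything here is my own illustration; generate_reply is a stand-in stub for whatever model a real game would call.

```python
from dataclasses import dataclass, field

def generate_reply(prompt: str) -> str:
    # Stand-in for a real model call; a game would swap in an actual A.I. here.
    return "Hmm. That's not a road many travelers take, friend."

@dataclass
class NPC:
    name: str
    persona: str                                 # who this character is, in plain text
    memory: list = field(default_factory=list)   # running conversation history

    def talk(self, player_line: str) -> str:
        # Build a prompt from persona, memory, and the player's input,
        # then hand it to whatever text model powers the game.
        prompt = (
            f"You are {self.name}, {self.persona}.\n"
            + "\n".join(self.memory)
            + f"\nPlayer: {player_line}\n{self.name}:"
        )
        reply = generate_reply(prompt)
        self.memory.append(f"Player: {player_line}")
        self.memory.append(f"{self.name}: {reply}")
        return reply

innkeeper = NPC("Mirra", "a weary innkeeper who has seen too many adventurers")
print(innkeeper.talk("We want to cross the Ashen Marsh tonight."))
```

The designers still write the persona, the world, and the stakes; the model only fills in the small talk.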
Just because I think there are applications for A.I. does not mean I’m in favor of them taking jobs or joy away from people. The GM should still be in charge of the world and the story they’re running, even if they have an A.I. assistant. The game developers should still be able to inject their own story and fun into the game, even if it has A.I. elements to assist with making the world they created feel more alive.
In all of the examples I can think of where A.I. would be cool, it is acting as an assistant and not replacing real writers, programmers, or artists.
While I’m taking a stance that I think is pragmatic and open to the new technology, I’ve seen people advertising books and products on how to use A.I. to write stories or make art, and I think that goes too far. There are people putting out guides on how to hand the creative components over to the A.I., and I can’t help but think that these people are not serious writers or artists themselves. They are opportunists trying to cash in on the new technology while it’s still relevant.
I don’t know any writers or artists that are looking at what they do and thinking, “If only I could get rid of the part of this process where I get to actually be creative.”
Patrick H. Willems recently put out a video talking about the dangers of A.I. in filmmaking, and he highlighted a studio that uses A.I. exclusively, including the generation of the ideas. I don’t know what the point would be. Writers and artists may be hoping to get paid for their work, but most aren’t making a lot, and it’s the satisfaction of creating art that sustains us.
Let’s wrap this up. I said a lot, but I might not have been perfectly clear or focused. To summarize, I think the current, common arguments against A.I. are shallow, and they matter less than the real problem: we live in a capitalist society that values money over humanity, which means that as A.I. gets better and better, humanity (specifically artists, programmers, and writers) will be made to suffer in the name of profits. The technology itself has the potential to be amazing, but how we use it will determine whether it is good or bad.