• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Should we allow for AI-generated fiction writing?

But what if the AI develops a plot about an AI plotting to take over fan-fiction by developing all the plots?

(And how do you know I didn't use AI to develop that plot?)

rbs
 
I wouldn't, but I also don't think it particularly matters. We've already run up against the hard limits of LLMs, and usage has declined or stagnated. Sure, there are some niche applications out there, but most of it is just being used for glorified chatbots on Discord or SEO flooding, neither of which is particularly threatening to individual fan fiction groups or RPGs. Anyone who does use it for fan fiction is just going to be writing bad, nonsensical fan fiction that no one's going to read, and will probably get bored and leave since they're clearly not interested in actually engaging with the subject.
 
My chiropractor, who is a fan of my writing, was suggesting to me the other day that I make use of AI to write a screenplay for my novel. My stomach twisted up just hearing that. I know he meant well, but it was infuriating.
My daughter is an art student at university right now, and next year every friend she has is dropping their art major and moving over to business because they don't see a future for their industry as AI becomes more prevalent. I had to then explain, in detail, to my father why AI would put his granddaughter out of a job. I did so by prompting ChatGPT to write my OWN NOVEL, coming up with plot points from my book that I hadn't even fed it. It was disturbing to see.

My answer to using AI? No truly creative person should WANT to use it. Just like no race car driver would want to use a self-driving car on the track, and no chef would want to just use a replicator.

I think it borders on dystopic that we are developing AI, and the first thing we're rushing to do with it is eliminate art, literature, and performance. What the hell is wrong with us that we'd like to just fob off our creative works to a machine so that we can spend more time, what, working? Mindlessly consuming the chum that AI determines we'd like to have instead of creating something ourselves? We're outsourcing the only thing that makes us unique.

Someday, maybe, we'll have something as sophisticated as Data, capable of creating something on its own. But right now we have programs regurgitating chunks of other people's hard work and feeding it back to us as something "new". And people want to encourage that?
 
^You expressed my thoughts on the matter far more cogently than anything I'd come up with to this point. Thank you.

The only AI I want to see 'creating' works of art is one that has no ability to directly lift elements of other people's (often copyrighted, so it's not just plagiarism but also theft) work.
 
"Should we allow" is a strange way of putting it, because at this point you can't really stop it from happening.
Eh, what? This is specifically about how the people who frequent this subforum feel about allowing or disallowing AI generated contributions. It’s not about whether AI generated fan fiction can be stopped or not. It’s about finding a consensus as to what is the agreed upon standard for posting creative works here. It’s about establishing a norm or sort of moral code of what goes and what doesn’t. Sure, the question that follows might be about how to go about bringing forth this standard, and the difficulty this might present. But that’s not what I understood this to be about.

When a society figures out what their established norms and morals are, the question of how to enforce them must be secondary. There are so many societal norms and standards that are not and could never really be enforced. Think of stuff like being truthful, being kind to others, dress codes, etc.; in most cases it's impossible to really force someone to adhere to them. But this doesn't mean that it's not important for a society to reach a consensus on what these norms are.

Training an AI on existing content is no different from training a human on existing content.
Well, I certainly view these very differently. The human imagination is not like an algorithm trying to piece together the data it’s being fed in a way that tries to appear like an original idea. Sure, feeding the human imagination with art that’s already out there in the world can help form it, but there’s also room for stuff like complete randomness, individual quirks and sensibilities, intuition, self-awareness or ethics and morals. The human imagination has the ability to generate completely new ideas and concepts that are not explicitly derived from existing data. Humans can think beyond the constraints of already existing patterns and rules, while AI algorithms operate based on patterns and data they have been trained on, which means they can never have a truly original idea.

I think it borders on dystopic that we are developing AI, and the first thing we're rushing to do with it is eliminate art, literature, and performance. What the hell is wrong with us that we'd like to just fob off our creative works to a machine so that we can spend more time, what, working? Mindlessly consuming the chum that AI determines we'd like to have instead of creating something ourselves? We're outsourcing the only thing that makes us unique.
Exactly this. It’s not a question of whether we can use AI to create art for us. The question is: Why the hell should we? Being creative is one of the most fundamentally human things we do. Why are so many people so eager to have a “tool” (I’d argue it isn’t really a tool, since tools are meant to help you do something, not take you out of the whole process and do everything for you) that takes creating art from us?
 
there’s also room for stuff like complete randomness, individual quirks and sensibilities, intuition, self-awareness or ethics and morals. The human imagination has the ability to generate completely new ideas and concepts that are not explicitly derived from existing data. Humans can think beyond the constraints of already existing patterns and rules, while AI algorithms operate based on patterns and data they have been trained on, which means they can never have a truly original idea.
Complete randomness is impossible because the brain consists of atoms, and all atoms behave deterministically. Functional magnetic resonance imaging can accurately predict actions before the person is consciously aware of choosing to perform them. The human mind is far more complex than any current artificial intelligence, but its complexity is finite: 86 billion neurons with up to a quadrillion synapses. Whether it takes hundreds, thousands, or millions of years, both artificial and our own enhanced human intelligence will eventually exceed the capacity of the current human mind in every way by many orders of magnitude, revealing to all what a few have already realized, that the human brain is capable of only so many thoughts, ideas, and expressions. Vastly more than any current artificial intelligence, but also vastly fewer than the human and artificial intelligences of the distant future. We're also constrained by existing patterns and rules; it's just that the set of patterns and rules to which we're limited is vastly larger than the set available to current AIs.

While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use ~10³³ - 10³⁶ operations as a rough estimate[10]. As we gain more experience with virtual reality, we will get a better grasp of the computational requirements for making such worlds appear realistic to their visitors. But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for our argument. We noted that a rough approximation of the computational power of a planetary-mass computer is 10⁴² operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) by using less than one millionth of its processing power for one second. A posthuman civilization may eventually build an astronomical number of such computers. We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even if it allocates only a minute fraction of its resources to that purpose. We can draw this conclusion even while leaving a substantial margin of error in all our estimates.
~ Nick Bostrom
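For what it's worth, the arithmetic in the quoted passage is easy to verify (a back-of-the-envelope check using Bostrom's own figures, which are rough estimates, not established facts):

```python
# Back-of-the-envelope check of the figures in the quote above.
# All numbers are Bostrom's rough estimates, not established facts.
planetary_ops_per_sec = 1e42  # "computational power of a planetary-mass computer"
fraction_used = 1e-6          # "less than one millionth of its processing power"
duration_sec = 1.0            # "for one second"

ops_spent = planetary_ops_per_sec * fraction_used * duration_sec
print(f"{ops_spent:.0e} operations")  # 1e+36, the upper end of the ~10^33 - 10^36 estimate
```

So one millionth of one second of such a machine's time already covers the upper end of the simulation-cost estimate, which is the point the quote is making.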
 
Honestly, some of the arguments here remind me of when I expressed to a friend why I refused to eat at Chick-fil-A because I disapproved of their policies, and he responded that a) my personal boycotting of them would make no difference, so why bother? and b) their food is delicious.

Even if it really was entirely inevitable that AI would permeate everything we do, and even if human brains weren't substantially different from AI, I would still maintain that that doesn't mean we can't make the effort to retain some level of (perceived, if you prefer) strictly human creativity.

That said, if the admins want to create a separate area for AI-generated writing, or where it can comingle with human-created writing, they have my blessing...as long as the context is clearly presented.
 
^You expressed my thoughts on the matter far more cogently than anything I'd come up with to this point. Thank you.

The only AI I want to see 'creating' works of art is one that has no ability to directly lift elements of other people's (often copyrighted, so it's not just plagiarism but also theft) work.
Copyrighted works can legally be used without the rightsholders' permission or knowledge—or even in spite of their vehement disapproval—in any way that meets the legal definition of fair use. Studying them is one such legal use. It's legal when humans do it and it's legal when AI does it, too. George Lucas didn't violate any copyrights when he viewed and read copyrighted works such as Flash Gordon, John Carter of Mars, Dune, and Triumph of the Will and incorporated various elements of them into new creations, and AI doesn't violate any copyrights when it studies existing works and incorporates elements of them into new creations, either.
 
Well, I certainly view these very differently. The human imagination is not like an algorithm trying to piece together the data it’s being fed in a way that tries to appear like an original idea. Sure, feeding the human imagination with art that’s already out there in the world can help form it, but there’s also room for stuff like complete randomness, individual quirks and sensibilities, intuition, self-awareness or ethics and morals. The human imagination has the ability to generate completely new ideas and concepts that are not explicitly derived from existing data. Humans can think beyond the constraints of already existing patterns and rules, while AI algorithms operate based on patterns and data they have been trained on, which means they can never have a truly original idea.

This is essentially why I don't think it's a big deal in fan fiction circles. What we're seeing in terms of LLMs' ability to generate anything of interest is as good as it's going to get; it's all downhill from this year. AI companies have already exhausted existing sources of data to put into the model; the internet isn't filled with enough high-quality data to make these models any better than they are or will be in the next few months. Some AI companies are even considering buying out publishers in an attempt to get more. As the web continues to be choked by poorly generated AI content, it's only going to have that content to feed itself on in order to grow. This leads to a Habsburg problem where it's trained on garbage created by itself and becomes less and less useful as it does so. They might be able to stave it off for a couple years with supposedly high-quality artificial data, but I really doubt they can do so for very long at the scales they're trying to operate at.

They're not going to be able to make anything more interesting than what we've already seen, and people are already growing bored or outright hostile to what it can do now.
 
Copyrighted works can legally be used without the rightsholders' permission or knowledge—or even in spite of their vehement disapproval—in any way that meets the legal definition of fair use. Studying them is one such legal use. It's legal when humans do it and it's legal when AI does it, too. George Lucas didn't violate any copyrights when he viewed and read copyrighted works such as Flash Gordon, John Carter of Mars, Dune, and Triumph of the Will and incorporated various elements of them into new creations, and AI doesn't violate any copyrights when it studies existing works and incorporates elements of them into new creations, either.
There is a fundamental difference there. He did not literally take them, chop their writing up into little pieces and deterministically reassemble it based on a prompt. He was inspired by it, but the Star Wars screenplay was completely original in that sense.
Artificial intelligence will evolve far beyond large language models. This is the beginning, not the end.
That's the thing! LLMs are not intelligence. Artificial Intelligence could be vital, but this is not artificial intelligence. Artificial intelligence's meaning in compsci circles is not the same as that in science fiction.
 
Complete randomness is impossible because the brain consists of atoms, and all atoms behave deterministically. Functional magnetic resonance imaging can accurately predict actions before the person is consciously aware of choosing to perform them. The human mind is far more complex than any current artificial intelligence, but its complexity is finite: 86 billion neurons with up to a quadrillion synapses. Whether it takes hundreds, thousands, or millions of years, both artificial and our own enhanced human intelligence will eventually exceed the capacity of the current human mind in every way by many orders of magnitude, revealing to all what a few have already realized, that the human brain is capable of only so many thoughts, ideas, and expressions. Vastly more than any current artificial intelligence, but also vastly fewer than the human and artificial intelligences of the distant future. We're also constrained by existing patterns and rules; it's just that the set of patterns and rules to which we're limited is vastly larger than the set available to current AIs.

While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use ~10³³ - 10³⁶ operations as a rough estimate[10]. As we gain more experience with virtual reality, we will get a better grasp of the computational requirements for making such worlds appear realistic to their visitors. But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for our argument. We noted that a rough approximation of the computational power of a planetary-mass computer is 10⁴² operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) by using less than one millionth of its processing power for one second. A posthuman civilization may eventually build an astronomical number of such computers. We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even if it allocates only a minute fraction of its resources to that purpose. We can draw this conclusion even while leaving a substantial margin of error in all our estimates.
~ Nick Bostrom
Well, this is the fan fiction section after all, so knock yourself out, I guess. :p

They're not going to be able to make anything more interesting than what we've already seen, and people are already growing bored or outright hostile to what it can do now.
I’ve observed some of that in creative circles, sure. Visual artists have begun to recognize generated images and their various tells (I’ve seen some refer to it as “AI spew”), growing more and more tired of how they are flooding image searches, stock libraries and basically all social media. But I’m not so sure the general, non-artistically inclined population is anywhere near that right now.

Are we sure about that? We still know very little about the human mind.
Let me preface this by saying that I’m only conveying my personal understanding of these matters. While it’s true that we might not know everything about how exactly the human mind works right now, we do have an understanding of how the algorithms of AI are programmed. I’m sure it’s an oversimplification to put it that way, but it’s my understanding that AI’s main function is to generate something that fundamentally seems like it could be something a human has created. In terms of text-generating AIs, it’s basically taking guesses as to what word might follow another because it often follows that word in human texts, to put it crudely. It has learned from studying all available text how common or likely particular phrases are as a response to a particular prompt. The human imagination, as I understand it, does not work like that. When I create a piece of artwork, my creative process does not include guessing what pixel is more likely to appear next to another pixel. I’m trying to render something I’m seeing in reality or something I’m “seeing with my mind’s eye”. I’ve learned to take into account how light and shadow work, how shapes in three dimensions work, etc. The process might include experimentation, including aspects of randomness in my work, with me deciding every time if something works to convey my idea or not. And to me it does not seem like these two approaches are very similar.
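The "guessing what word might follow another" idea can be illustrated with a toy sketch (purely an illustration of the statistical principle; real LLMs use neural networks over subword tokens, not raw word counts like this):

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then generate text by repeatedly sampling an observed next word.
corpus = "the crew of the starship explored the nebula and the crew returned".split()

# Record every observed follower of each word.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start, length=6, seed=0):
    """Generate text by sampling each next word from its observed followers."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # dead end: the last word was never followed by anything
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

A real model replaces the raw frequency lists with learned probabilities over an enormous vocabulary and context window, but the core loop of "pick a statistically plausible continuation, append, repeat" is the same.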
 
Copyrighted works can legally be used without the rightsholders' permission or knowledge—or even in spite of their vehement disapproval—in any way that meets the legal definition of fair use. Studying them is one such legal use. It's legal when humans do it and it's legal when AI does it, too.
It's not legal when AI does it, mainly because AI doesn't actually study anything.
George Lucas didn't violate any copyrights when he viewed and read copyrighted works such as Flash Gordon, John Carter of Mars, Dune, and Triumph of the Will and incorporated various elements of them into new creations, and AI doesn't violate any copyrights when it studies existing works and incorporates elements of them into new creations, either.

Tell me you've never written anything professionally without telling me you've never written anything professionally.

Feeding my hard work into a program so it can mimic my style is not "studying". And it sure as hell is not fair fucking use.
 
Let me preface this by saying that I’m only conveying my personal understanding of these matters. While it’s true that we might not know everything about how exactly the human mind works right now, we do have an understanding of how the algorithms of AI are programmed. I’m sure it’s an oversimplification to put it that way, but it’s my understanding that AI’s main function is to generate something that fundamentally seems like it could be something a human has created. In terms of text-generating AIs, it’s basically taking guesses as to what word might follow another because it often follows that word in human texts, to put it crudely. It has learned from studying all available text how common or likely particular phrases are as a response to a particular prompt. The human imagination, as I understand it, does not work like that. When I create a piece of artwork, my creative process does not include guessing what pixel is more likely to appear next to another pixel. I’m trying to render something I’m seeing in reality or something I’m “seeing with my mind’s eye”. I’ve learned to take into account how light and shadow work, how shapes in three dimensions work, etc. The process might include experimentation, including aspects of randomness in my work, with me deciding every time if something works to convey my idea or not. And to me it does not seem like these two approaches are very similar.
Bingo. AI has no idea about grammar or the structure of language. It doesn't understand anything outside of the positive reinforcement it gets from randomly stringing words together correctly. Midjourney doesn't understand the form of the dog, it just understands pixels in a certain order probably mean dog.
 
Copyrighted works can legally be used without the rightsholders' permission or knowledge—or even in spite of their vehement disapproval—in any way that meets the legal definition of fair use. Studying them is one such legal use. It's legal when humans do it and it's legal when AI does it, too. George Lucas didn't violate any copyrights when he viewed and read copyrighted works such as Flash Gordon, John Carter of Mars, Dune, and Triumph of the Will and incorporated various elements of them into new creations, and AI doesn't violate any copyrights when it studies existing works and incorporates elements of them into new creations, either.

Just because one can do something doesn't mean one should do something.

The question isn't "Can we legally permit AI generated fiction writing?" The question is, "Should we allow for AI generated fiction writing?" I've expressed my views, which aren't limited to whether we can legally post such 'content'.
 
It doesn't understand anything outside of the positive reinforcement it gets from randomly stringing words together correctly.
Now, if AI actually experienced positive reinforcement and learned from it, we would truly be looking at intelligence, as well as the possibility of consciousness. There is no positive reinforcement in computers today. They do what they're programmed to do, the way they are programmed to do it, which includes learning algorithms. Computers and their programming experience no pleasure, and they don't know when they have it right, only that the output meets the parameters fed to them in their programming. Currently, if an AI program were never asked to perform a task, it wouldn't perform any tasks. It would just wait, no matter how much positive reinforcement it had previously received.

As an artist, I take my art, on occasion, to craft and art fairs where I display my drawings, paintings, and woodburnings (pyrography) to the public in a 10'x10' booth. There are a lot of artists that do this. Many of them post signs that say something like, "No pictures, please."

I don't put up such signs because I'm not selling pictures. If people just want a picture to hang on their wall or something like that, they can order a poster online, or shop at TJ Maxx. I sell stories.

When a customer buys from me, they get an original, one-of-a-kind, hand-done piece of me and my story. That can't be reproduced. That's why a Van Gogh original will sell for millions, while a poster of a Van Gogh will only cost you $14.99 on Amazon.

If, as an art student trying to develop a career, your prospects are in marketing or product design, you might have a serious concern for your job because of AI, but if you're trying to break into the highly competitive, and not so secure or lucrative world of high art, you will always have patrons. Right now, AI art is a novelty that fascinates buyers. But that will pale over time and people will want a different story to buy. They'll want something painted by an elephant, or a chimpanzee, or someone like me. Take all the pictures of my work you want. Show them to your friends, tell them where they can find me so they can take their own pictures. Maybe one of them will want an original WDG.

Writing is a little different from visual art. The story as art is what customers want. People don't usually care much about the unique, hand-produced nature of the stories they choose to read. If another Star Trek novel came out and it satisfied that hunger for more Star Trek novels, people would buy and read it. But if you are a creative person and writing feels like the right outlet to express that artist's desire, you'll write it yourself regardless. Hopefully it's original (synthesis) and good, so that other people also like it and a publisher will pick it up.

The thesis, antithesis, synthesis process of creation is the commonly understood way to create original works of anything. Every creation came from somewhere. Every experience leaves an impression on the artist. Conscious of it or not, original ideas develop out of our past experiences.

-Will
 
He was inspired by it, but the Star Wars screenplay was completely original in that sense.
Frank Herbert disagreed, as do others.
Well, this is the fan fiction section after all, so knock yourself out, I guess. :p
Rigorous academic extrapolation is hardly fanfiction. The possibilities of going to the Moon and flying supersonically were mathematically proven before they were accomplished.
It's not legal when AI does it, mainly because AI doesn't actually study anything.
According to the law, it is legal.
Tell me you've never written anything professionally without telling me you've never written anything professionally.
I have written professionally as a ghostwriter.
And it sure as hell is not fair fucking use.
According to established law, it is certainly fair use in general, as supported by a wide range of academics, library associations, civil society groups, startups, major US companies, creators, authors, and others who have submitted comments to the US Copyright Office, as well as in the European Union, Japan, Singapore, Israel, and elsewhere around the world.
 
AI isn’t going to stop people from picking up pencil and paper, or musical instruments. Those have been with us since the dawn of mankind.

It will create a more competitive environment for those who wish to do it as a living. I’m not sure that is a bad thing.
 