writers' strike and Trek

which it totally shouldn't because it's plagiarism, and why that isn't the end of the debate right there is bewildering to me)
Because people do not believe plagiarism is as big of a deal anymore. I recall the big push to avoid it when I was in grad school, and then you have all these people doing workarounds to avoid doing the work. It's not a new problem, but I feel it highlights just how much plagiarism is considered "Meh."
 
a friend of mine teaches high school and she says they have tools to try and figure out if something has been generated by AI.

Not sure how it works.
 
a friend of mine teaches high school and she says they have tools to try and figure out if something has been generated by AI.

Not sure how it works.
In my grad school program you had to submit your paper to a "comparison checker" (can't recall the name right off). It basically looked for common phrases or exact sentences shared with other sources. Annoying, because it flagged material quoted from referenced articles too, so I had to rework my paper so it didn't flag as much from quoting articles.
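If I had to guess at the mechanics, it probably boils down to matching runs of consecutive words between your paper and a database of sources. Something like this toy sketch (my guess at the general idea, not the actual tool):

```python
# Toy "comparison checker": flag any run of N consecutive words that a
# submission shares with a known source. Real tools are far more
# sophisticated, but this is the basic idea as I understand it.

def ngrams(text, n=6):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_phrases(submission, source, n=6):
    # Any n-word sequence appearing in both texts gets flagged.
    return ngrams(submission, n) & ngrams(source, n)

source = "the gods in the iliad mirror the moral dilemmas that the mortals face"
paper = "as one critic notes the gods in the iliad mirror the moral dilemmas of war"

for phrase in sorted(shared_phrases(paper, source)):
    print("flagged:", " ".join(phrase))
```

Which would also explain why my quotes kept getting flagged: to a matcher like that, a properly cited quotation and copied text are the same thing, just identical word sequences.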
 
There is so much CSI and NCIS for an AI to learn from that I wouldn't be surprised if it could put out decent content in five years.
If those shows were just "crime is committed, crime is investigated, crime is solved and arrest is made" then, maybe, yeah. But there's more to the shows than that, particularly the character interactions and character-based subplots each week, which liven up the episodes and aren't something AI can just churn out like butter, probably not even in five years.
In what way is this BBS considered a workplace environment?:shrug::guffaw:
You don't come here when you're on the job? Because I certainly don't. :shifty: ;)
 
In my grad school program you had to submit your paper to a "comparison checker" (can't recall the name right off). It basically looked for common phrases or exact sentences shared with other sources. Annoying, because it flagged material quoted from referenced articles too, so I had to rework my paper so it didn't flag as much from quoting articles.
Yeah, I think it essentially uses a variation of the same technology.
 
AI is going to change our lives so much. Supposedly it will eventually put 49% of people out of work.

It already makes life easier. A friend used ChatGPT to write an appeal that got his parking fine cut down to a third of what it was. I've used it to make custom bedtime stories, combining Star Trek with Goldilocks and the Three Bears (and it even threw in Ewoks without being asked).

It will certainly be used to assist with writing going forward. Rough outlines will be quicker. AI can be used to figure out plausible ways to get characters from A to B.

Surely someone will make an AI-written movie as a gimmick sometime soon, but writers won't be eliminated entirely, any more than any other profession will be (which goes back to that 49% we're looking at long-term).
 
This is a really intriguing story from the Chronicle of Higher Education:

Look at any student academic-integrity policy, and you’ll find the same message: Submit work that reflects your own thinking or face discipline. A year ago, this was just about the most common-sense rule on Earth. Today, it’s laughably naïve.

There’s a remarkable disconnect between how professors and administrators think students use generative AI on written work and how we actually use it. Many assume that if an essay is written with the help of ChatGPT, there will be some sort of evidence — it will have a distinctive “voice,” it won’t make very complex arguments, or it will be written in a way that AI-detection programs will pick up on. Those are dangerous misconceptions. In reality, it’s very easy to use AI to do the lion’s share of the thinking while still submitting work that looks like your own. Once that becomes clear, it follows that massive structural change will be needed if our colleges are going to keep training students to think critically.

The common fear among teachers is that AI is actually writing our essays for us, but that isn’t what happens. You can hand ChatGPT a prompt and ask it for a finished product, but you’ll probably get an essay with a very general claim, middle-school-level sentence structure, and half as many words as you wanted. The more effective, and increasingly popular, strategy is to have the AI walk you through the writing process step by step. You tell the algorithm what your topic is and ask for a central claim, then have it give you an outline to argue this claim. Depending on the topic, you might even be able to have it write each paragraph the outline calls for, one by one, then rewrite them yourself to make them flow better.

As an example, I told ChatGPT, “I have to write a 6-page close reading of the Iliad. Give me some options for very specific thesis statements.” (Just about every first-year student at my university has to write a paper resembling this one.) Here is one of its suggestions: “The gods in the Iliad are not just capricious beings who interfere in human affairs for their own amusement but also mirror the moral dilemmas and conflicts that the mortals face.” It also listed nine other ideas, any one of which I would have felt comfortable arguing. Already, a major chunk of the thinking had been done for me. As any former student knows, one of the main challenges of writing an essay is just thinking through the subject matter and coming up with a strong, debatable claim. With one snap of the fingers and almost zero brain activity, I suddenly had one.


My job was now reduced to defending this claim. But ChatGPT can help here too! I asked it to outline the paper for me, and it did so in detail, providing a five-paragraph structure and instructions on how to write each one. For instance, for “Body Paragraph 1: The Gods as Moral Arbiters,” the program wrote: “Introduce the concept of the gods as moral arbiters in the Iliad. Provide examples of how the gods act as judges of human behavior, punishing or rewarding individuals based on their actions. Analyze how the gods’ judgments reflect the moral codes and values of ancient Greek society. Use specific passages from the text to support your analysis.” All that was left now was for me to follow these instructions, and perhaps modify the structure a bit where I deemed the computer’s reasoning flawed or lackluster.

The vital takeaway here is that it’s simply impossible to catch students using this process, and that for them, writing is no longer much of an exercise in thinking. The problem isn’t with a lack of AI-catching technology — even if we could definitively tell whether any given word was produced by ChatGPT, we still couldn’t prevent cheating. The ideas on the paper can be computer-generated while the prose can be the student’s own. No human or machine can read a paper like this and find the mark of artificial intelligence.

There are two possible conclusions. One is that we should embrace the role AI is beginning to play in the writing process. “So what that essays are easier to write now? AI is here for good; students might as well learn to use it.” Of course, it’s important to learn to put together a cohesive piece of written work, so it makes perfect sense to embrace AI on assignments that are meant to teach this skill. In fact, it would be counterproductive not to: If a tool is useful and widely available, students should learn how to use it. But if this is our only takeaway, we neglect the essay’s value as a method for practicing critical thinking. When we want students to learn how to think — something I’m sure all educators consider a top priority — assignments become essentially useless once AI gets involved.

So rather than fully embracing AI as a writing assistant, the reasonable conclusion is that there needs to be a split between assignments on which using AI is encouraged and assignments on which using AI can’t possibly help. Colleges ought to prepare their students for the future, and AI literacy will certainly be important in ours. But AI isn’t everything. If education systems are to continue teaching students how to think, they need to move away from the take-home essay as a means of doing this, and move on to AI-proof assignments like oral exams, in-class writing, or some new style of schoolwork better suited to the world of artificial intelligence.

As it stands right now, our systems don’t accomplish either of those goals. We don’t fully lean into AI and teach how to best use it, and we don’t fully prohibit it to keep it from interfering with exercises in critical thinking. We’re at an awkward middle ground where nobody knows what to do, where very few people in power even understand that something is wrong. At any given time, I can look around my classroom and find multiple people doing homework with the help of ChatGPT. We’re not being forced to think anymore.

People worry that ChatGPT might “eventually” start rendering major institutions obsolete. It seems to me that it already has.
 
AI is going to change our lives so much. Supposedly it will eventually put 49% of people out of work.

If it actually were AI, maybe. That's just hype. ChatGPT and the like are not actually "artificial intelligence." They're predictive text programs. That's all. People are just embracing exaggerated hype, as they usually do about technological progress.
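If you want a concrete sense of what "predictive text" means, here's a toy sketch (an illustration of the principle only; real chatbots use huge neural networks rather than a lookup table, but their training objective is likewise to predict the next word):

```python
import random
from collections import defaultdict

# Toy "predictive text": record which words follow which in a sample,
# then generate by repeatedly picking a word that plausibly comes next.
# Nothing here understands anything; it only reshuffles what it was fed.

sample = (
    "the gods in the iliad are not just capricious beings "
    "the gods act as judges of human behavior "
    "the gods reflect the moral codes of ancient greek society"
)

follows = defaultdict(list)
words = sample.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

word = "the"
output = [word]
for _ in range(12):
    if word not in follows:
        break  # reached a word with no recorded continuation
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))
```

Run that and you get strings that look locally fluent and mean nothing, because meaning was never part of the process. Scale it up by billions of parameters and you get fluent paragraphs, but the principle is the same.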

As I've been trying to point out, the problem here is not AI. The problem is rapacious executives who are trying to undo generations of labor rights progress and are using the hype about AI as an excuse to disempower their employees. Dwelling on the tech is doing their job for them, letting them distract us from what's really going on.


It will certainly be used to assist with writing going forward. Rough outlines will be quicker. AI can be used to figure out plausible ways to get characters from A to B.

Hardly. The sample I quoted earlier was dreadfully structured and had no sense of the kind of specifics you're talking about. There wasn't a shred of plausible character motivation or behavior; characters just did things with no setup and for no good reason. To advance characters plausibly through a story, you have to understand their motivations and feelings, and all chatbots can do is predict the probability of the next word in a sentence.

Anyway, at best all these things could do is average out a generic sample of typical story structure, and any competent writer will already know that from experience with reading. Our goal is to try to come up with something that isn't generic, that rises above the mean by doing things differently. Most of the work of "refining" a chatbot outline would probably be tossing out its generic suggestions and irrelevant verbiage and thinking up something better to replace them, which would make the process take longer.


Surely someone will make an AI-written movie as a gimmick sometime soon

As the quoted sample showed, we are still very far from the point where AI could generate a script that was actually usable as a production document without top-to-bottom rewriting. Scripts are different from prose; they aren't just telling a story to an audience, they're giving the filmmakers instructions on how to tell that story: what specific actions need to be depicted, what physical items need to be designed and built, what special effects and sounds need to be created, etc. This needs to be specified so that the filmmakers can estimate the probable budget and work out the logistical needs of the production in advance. The sample chatbot "script" had nothing of that. It wasn't a usable filming document, it was just something that superficially resembled a script.

Also, you're still ignoring the elephant in the room: chatbot writing is intrinsically plagiaristic. It's using many people's copyrighted work without permission. No professional studio should legally be allowed to use its output.
 
You're assuming a plateau. It's flawed now, but give it a few years.

The legalities are another matter, but someone is going to try it as a gimmick soon, I'm sure.
 
You're assuming a plateau. It's flawed now, but give it a few years.

That's assuming that what these programs do is actually the same as what writers do, just less so. My point is that it's fundamentally different, and it only superficially appears like writing to laypeople who don't know what our job actually entails. The only reason it looks like actual writing is because it's plagiarizing actual writing and rearranging the bits in a way that looks superficially coherent. There's not a shred of actual creativity in it, except for the creativity of the human beings whose work it's reshuffling.

Since I'm lazy by nature, I admit I've wondered whether I could use this tech to handle some of the drudge work of writing, to make it easier and faster, though of course I'd rephrase anything it produced rather than quote plagiarized text. But I just don't see how it could work for me as a science fiction writer, because I'm not writing about average or ordinary situations, but creating new worlds and species and putting characters in exceptional situations. Anything a chatbot put out would probably be too generic to be useful for me. I'm not just producing text, I'm exploring ideas and characters.


And again: The real issue here isn't technology; that's a distraction from the attack on worker rights. Even if the tech were as revolutionary as claimed, business could either find a way to incorporate it while protecting worker rights, or use it as an excuse to strip away worker rights so the execs could buy bigger yachts when their old ones get wet. The issue here is that they're doing the latter. That's what we need to focus on and talk about. The question of what the technology can and can't do is a smokescreen. It's not what's actually at stake here.


The legalities are another matter, but someone is going to try it as a gimmick soon, I'm sure.

And hopefully they'll be sued massively for it and it will be a deterrent for future such abuses. It will also probably be awful.
 
I'm cheering for these strikes to go on for a long while.

It may mean interesting things for what is in development for Trek.

Hopefully Directors and Actors are next.
 
I'm cheering for these strikes to go on for a long while.

Please don't. The strikes mean hardship for a great many people. The writers are striking because changes in studio policies have deprived them of income, so they're already struggling, but striking means they won't be able to earn money and make a living. This is a terrible situation for them to be in, and they're fighting for their livelihood. The best thing that can happen is for the AMPTP executives to cave as soon as possible.


Hopefully Directors and Actors are next.

If they are, then the strike will probably be resolved far more quickly, since that will shut things down completely.
 
Studios are preparing for such a shutdown. Content slates are being revised as we speak so that completed projects can be delayed and their releases staggered.

Issues like the AI situation (for example) are not going to make this strike an easy one to resolve. It doesn't sound like Hollywood wants to play ball right now.

Especially when you have guys like Zaslav showing complete apathy to the striking writers.
 
Studios are preparing for such a shutdown. Content slates are being revised as we speak so that completed projects can be delayed and their releases staggered.

Which is insane. They've already lost far more money than they'd need to spend on fulfilling every one of the writers' demands. They're only hurting themselves financially for the sake of being stubborn.
 
One rumor going around is that the strike will last long enough for the studios to invoke force majeure and walk away from bad deals.
 
Hardly. The sample I quoted earlier was dreadfully structured and had no sense of the kind of specifics you're talking about. There wasn't a shred of plausible character motivation or behavior; characters just did things with no setup and for no good reason. To advance characters plausibly through a story, you have to understand their motivations and feelings, and all chatbots can do is predict the probability of the next word in a sentence.

So AI can construct a poor story. This sounds like the writers of "The Flash," which is one of the worst examples of human creativity. The show is so bad I feel like I blacked out and missed something; it really just has people doing things in the next scene as if days' worth of other events happened to get them to that point. But hey, what should anyone expect from a show that acts like having several Ph.D.s is a sign of success instead of a sign of being unwilling to lead any real work yourself.
 
So AI can construct a poor story. This sounds like the writers of "The Flash," which is one of the worst examples of human creativity.

No. The point is not that it's a bad story, the point is that it's not even a story, just a machine-generated text that mimics the superficial appearance of a story. People have been snowed by the use of "artificial intelligence" to describe these programs. That is an erroneous use of the term, or at least a misleading one. They are nothing more than predictive text algorithms trained by humans to produce a plausible mimicry of actual writing. They're not artists, they're parrots. Hell, they're less than parrots, because parrots are actually highly intelligent.

Also, again, please remember that every single text produced by these programs is a work of plagiarism. That alone outweighs any opinions about the quality of the work. Stealing should be off the table, full stop. https://www.theglobeandmail.com/opi...gpt-are-built-on-mass-copyright-infringement/
 