Discussion in 'Science and Technology' started by rahullak, Mar 6, 2020.
Maybe we need more than four-and-a-half Earths...given Americans' propensity to blow things up.
At least a dozen
Well it's said the meek will inherit the earth. We'll be happy with everything else, thanks.
There is an interpretation where the AI's logic is not wrong, but it does not explain its reasoning very well. Luis's or Drake's middle name might be Michael, or Matilda's middle name might be Michelle, and one of them might prefer to be known as Mike. (I'm assuming gender based on name, but that doesn't affect the argument.) It isn't uncommon for people to prefer to be called by their middle name, or even by a name different to the one they were given. For example, people called John might prefer to be referred to as Jack.
However, any normal human being will point out that the fourth name is 'Mike'. Anyone who brings up the above logic would come across as pedantic. This is where there is a lot more about human social interaction that ChatGPT is not trained on: certain "hidden knowledge", if you will. Things like how we detect sarcasm in areas or domains we are ignorant of. ChatGPT would fail unless it were explicitly trained on that particular statement, or set of statements, as being sarcastic.
Thanks for calling me pedantic. It can also be hard for humans to detect sarcasm, particularly in writing, as is often demonstrated in posts on this site. The example demonstrates that the AI has no curiosity. It doesn't ask why its response isn't what the questioner expected nor can it make an argument like the one I gave, I suspect.
Ah, didn't mean to. I should've been more specific: humans understand context a lot better and have a wider variety of contextual knowledge. The logic you brought up would be normal in certain contexts and pedantic in others. For instance, if this were an exam question in a school in India, that logic would not apply at all; but perhaps it does in the UK, or in certain schools in the UK. Maybe it depends on the level of education, i.e. middle school, high school, university, work training, etc.
Humans can also be wary of a question that is set up to be a "Gotcha!". It comes from experience yes, but it need not be a very specific experiential learning. Once I know what a "Gotcha" is and have some context in which the question is being asked, I'd have several answers or query further.
You're right, though, that the AI doesn't ask follow-up questions, nor is it smart enough to point out that there are multiple answers to the question depending on intent and context.
The person who posed the question to try to catch out the AI is not as smart as he thinks he is. I have seen similar failings in logic questions posed to UK school children in 11+ exams. We had to learn to give the answer that the examiner expected.
Well, "smart" is not a universal standard either. And giving the answer that the examiner expected is what I mean by context. Since the AI doesn't know the context, it should have asked clarifying questions. Even humans should do the same when they don't know the context.
You can't ask questions in the middle of an examination, but yes, the AI falls short in this particular respect. I wonder if it's because it's not allowed to learn anything new, but instead it can only work with the knowledge on which it has been trained.
But you can ask questions when you are being taught, or in the classroom.
Yes, AFAIK ChatGPT at the present time cannot be trained in real time. Google's upcoming competing product is said to have reinforcement-learning features.
A class-action lawsuit has been filed against Stable Diffusion, Midjourney and DeviantArt in the US for their scraping of billions of copyrighted images for AI training. This could finally set a precedent in this field that's lacking in regulation.
Stable Diffusion Litigation
That'll be an interestingly different test of whether the art produced by the AIs is possibly overfitted. The similarity of elements in many of the generated images suggests to me that it might be. If the suit succeeds, would it open the door to artists bringing suits against other human creators whose work they feel is too similar? What would an objective metric of similarity be, in any case? Some sort of cross-correlation statistic? A similar example with only humans involved that springs to mind is Roger Dean's failed case against the designs used in James Cameron's Avatar, which to me seemed like obvious plagiarism, but what do I know?
Judge Says “No” to Roger Dean’s Avatar Lawsuit: Should He Have Said “Yes” Instead? - Office of Copyright (nova.edu)
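The cross-correlation idea above can be sketched crudely. The following is a minimal illustration, not anything a court actually uses: a Pearson-style normalized cross-correlation between two same-sized grayscale grids, where 1.0 means identical up to brightness/contrast and values near 0 mean little linear relationship. The pixel values below are invented toy data.

```python
import math

def normalized_cross_correlation(a, b):
    """Pearson-style similarity between two equal-sized grayscale grids.

    Returns a value in [-1, 1]: 1 means identical up to a brightness or
    contrast shift, values near 0 mean no linear relationship.
    """
    xs = [p for row in a for p in row]
    ys = [p for row in b for p in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

original = [[10, 20], [30, 40]]
brighter = [[20, 30], [40, 50]]   # same "picture", brightness shifted by 10
noise    = [[40, 10], [20, 35]]   # unrelated pixels

print(normalized_cross_correlation(original, brighter))  # 1.0
print(normalized_cross_correlation(original, noise))
```

Of course, a measure this naive is blind to cropping, rotation, and style, which is part of why no simple statistic has settled these disputes.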
You bring up an interesting angle.
IMO, even if you consider the "total concept and feel" of art (ALL art: visual, auditory, written), would we then be saying it is impossible to learn from another person's work and create our own work that uses abstract elements or concepts from it?
The court in the Roger Dean lawsuit ruled that if a layperson cannot find similarities, then the work isn't copied. Well, who is a layperson? Can laypersons not have varying degrees of intelligence, interest, or the ability to see what an "expert" can see? This seems flimsy to me.
@Serveaux you might be interested in these happenings.
Well, if the end goal is to turn every creator in the world into their own miniature Disney Corp. this is one way to go about it.
BTW, speaking as a layperson who's been a fan of Dean's work since the 70s, I don't see a case there.
As you've said, nothing is sui generis. The courts are in trouble, I think. They're gonna eventually rule some way and no one is going to like it. Maybe that's the best case. I wouldn't know what that middle ground is. Maybe what @Asbo Zaprudder suggested: some technical measure of similarity.
Even a panel of expert witnesses might disagree about how similar two pictures are. The nonsense that sometimes goes on with alleged music copyright infringement is another example: borrowed chord progressions are not sufficient grounds, but slight variations in melody are, and again there is no systematic metric for comparison. In the case of music, surely it should be easier than in the visual medium to establish some algorithm.
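For melodies, one simple algorithmic approach (a sketch of the idea, not any court's actual method) is to compare the sequences of pitch intervals rather than the notes themselves, which makes the comparison transposition-invariant, and then score the difference with Levenshtein edit distance. The MIDI note numbers below are made-up toy melodies.

```python
def intervals(pitches):
    """Convert MIDI note numbers to successive intervals, so a
    transposed copy of a melody maps to the same sequence."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def edit_distance(s, t):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(t) + 1))
    for i, x in enumerate(s, 1):
        cur = [i]
        for j, y in enumerate(t, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (x != y)))   # substitution
        prev = cur
    return prev[-1]

melody     = [60, 62, 64, 65, 67]   # C D E F G
transposed = [62, 64, 66, 67, 69]   # same tune, up a whole tone
variant    = [60, 62, 63, 65, 67]   # one note altered

print(edit_distance(intervals(melody), intervals(transposed)))  # 0
print(edit_distance(intervals(melody), intervals(variant)))     # 2
```

Even this leaves open the hard legal question: how small must the distance be before a "slight variation" becomes infringement?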
Fair enough, but when I saw some screenshots before the movie was in cinema, I thought "oh, cool, they've got Roger Dean on board for the designs".
Now, if there were pygmy elephants soaring on dragonfly wings on Pandora, I'd have screamed "rip-off."
Are there elephantine critters on Pandora in the first movie? I forget. They've got their pterodactyls and dragons and megapanther predators...
So, okay, I can Google...
They got a "Tapirus:"
And they got Titanotheres: