There's a calculation that was done: if every person on Earth had the lifestyle of the average American, we would need four-and-a-half Earths.
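For the curious, the figure comes from dividing per-person resource demand by the planet's per-person biocapacity. A back-of-the-envelope sketch, using assumed numbers in the ballpark of Global Footprint Network estimates (not authoritative values):

```python
# Rough "how many Earths" arithmetic.
# Both figures below are assumptions for illustration, in global
# hectares (gha) per person, roughly in line with Global Footprint
# Network-style estimates.
us_footprint_gha = 7.2       # assumed average American ecological footprint
world_biocapacity_gha = 1.6  # assumed biocapacity available per person

earths_needed = us_footprint_gha / world_biocapacity_gha
print(f"Earths needed: {earths_needed:.1f}")  # -> 4.5
```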
Maybe we need more than four-and-a-half Earths... given Americans' propensity to blow things up.
There is an interpretation under which the AI's logic is not wrong, though it does not explain its reasoning very well. Luis's or Drake's middle name might be Michael, or Matilda's middle name might be Michelle, and one of them might prefer to be known as Mike. (I'm assuming gender based on name, but that doesn't affect the argument.) It isn't uncommon for people to prefer to be called by their middle name, or even by a name different from the one they were given; for example, people called John might prefer to be referred to as Jack.
Thanks for calling me pedantic. However, any normal human being will point out that the fourth name is 'Mike'; anyone who brings up the above logic is being pedantic. This is where there is a lot about human social interaction that ChatGPT is not trained on: certain "hidden knowledge", if you will, such as how we detect sarcasm in domains we are otherwise ignorant of. ChatGPT would fail unless it were explicitly trained on that particular statement, or set of statements, as being sarcastic.
You can't ask questions in the middle of an examination, but yes, the AI falls short in this particular respect. I wonder if it's because it's not allowed to learn anything new; instead it can only work with the knowledge on which it has been trained.

Well, "smart" is also not a universal notion, and giving the answer the examiner expected is what I mean by context. Since the AI doesn't know the context, it should have asked clarifying questions. Even humans should do the same when they don't know the context.
That'll be an interestingly different test of whether the art produced by the AIs is possibly overfitted; the similarity of elements in many of the generated images suggests to me that it might be. If successful, would it open the door to artists bringing suit against other human creators whose work they feel is too similar? And what would an objective metric of similarity be in any case: some sort of cross-correlation statistic (see the sketch after the link below)? A similar example with only humans involved that springs to mind is Roger Dean's failed case against the designs used in James Cameron's Avatar, which to me seemed like obvious plagiarism, but what do I know?

A class-action lawsuit has been filed against Stable Diffusion, Midjourney and DeviantArt in the US for their scraping of billions of copyrighted images for AI training. This could finally set a precedent in a field that's lacking in regulation.
Stable Diffusion Litigation
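To make the "cross-correlation statistic" idea above concrete, here is a minimal sketch of normalized cross-correlation between two same-sized grayscale images. This is my own illustration in Python/NumPy, not anything from the litigation, and it assumes pixel-level comparison only:

```python
import numpy as np

def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson-style similarity between two same-sized grayscale images.

    Returns a value in [-1, 1]; 1.0 means identical up to overall
    brightness and contrast. This is only one crude notion of
    "similarity" and says nothing about style or composition.
    """
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()  # remove brightness offset
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

# Toy usage with random arrays standing in for real artwork scans:
rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = img + 0.1 * rng.standard_normal((64, 64))
print(normalized_cross_correlation(img, img))    # ~1.0
print(normalized_cross_correlation(img, noisy))  # high, but below 1.0
```

Of course, a score like this only captures pixel-level resemblance; a court would presumably care about composition and style, which a statistic like this cannot measure.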
Fair enough, but when I saw some screenshots before the movie was in cinemas, I thought, "Oh, cool, they've got Roger Dean on board for the designs."

BTW, speaking as a layperson who's been a fan of Dean's work since the 70s, I don't see a case there.