And why would we bother to do this in the first place, when artificial intelligence is a lot easier to create, a lot more efficient at what we want it to do, and a lot less expensive to work with?
I am not going to presume to answer for the future, but I assume those cheap AIs you mention might be interested in doing some of that unethical experimentation you also mention, just to better understand from whence they came.
lol what?
They would actually get better results by googling "history of AI research" and then reading in depth the biographies and/or autobiographies of the individual researchers themselves.
But that proceeds from the incredibly bizarre assumption that AIs would have any reason to "understand from whence they came." Unless somebody is developing a computerized philosophy teacher, such understanding is totally irrelevant to the tasks they are programmed to carry out.
Or my grandchildren might want to run ancestor simulations.
Which, again, would be more easily accomplished by a well-programmed AI.
Or the same kind of folks that play Sim City now would play a much more sophisticated form of Sim City a hundred years from now.
Which, AGAIN, would be more easily accomplished by a well-programmed AI.
This is almost like suggesting that some day production companies are going to replace television shows with shape-shifting androids that give live performances in your living room. It's actually a lot simpler to just film and broadcast a performance. More importantly, with the inevitable growth of computing power it may soon be simpler to use CG animation than actors, especially once you can get a computer to render a truly lifelike human form in a truly lifelike environment.
Just to be clear on something: nobody is ever going to develop an AI with the intention of acting out a part in a screenplay. They WILL, however, develop an AI that is capable of writing a screenplay, based on the data the AI collects on just what kind of storylines sell best with audiences and what parameters (budget, subject matter, run time, rendering capabilities) it has to work with.
This AI may very well be asked to write a screenplay examining the origins of machine intelligence and to predict what the future may hold for AI evolution. A smart AI will probably download the entire collected works of Isaac Asimov and Robert Heinlein and dozens of others, indexing references to AI and machine intelligence and cross-referencing themes from sci-fi/action movies of the 20th and 21st centuries to come up with a combination of winning story elements; it'll run a use-case analysis to avoid (what have become) tired cliches, and it'll scan research articles and metadata to look for effective ways of inverting/subverting/modifying existing tropes in order to produce original material. At some point it'll produce a rough draft of its screenplay for the producers, who will like some of its ideas and ask for a rewrite on others.
The one thing such an AI is never going to do, at ANY level of sophistication, is examine its origins just for curiosity's sake. Even were it to do so, that "origin data" would be just that:
data. Information it can use to accomplish its task.
As far as fidelity goes, isn't our universe "granular" if you take the space below the Planck scale to be a void?
Yes, but the universe doesn't suffer compression losses or single-bit errors when moving objects from one place to the next. Nor does the universe have a finite bandwidth for data transmission or a finite capacity for storage. So there is a practical upper limit to how much fidelity you need in order to simulate a particular thing.
When a child jumps into a snowbank, that action has billions upon billions of effects, from the macroscopic to the subatomic and everything in between. If your goal is to model the interaction between toddlers and snowbanks, then the toddler and the snowbank only need to be modeled with enough fidelity to capture the macroscopic behavior of the snowbank and the toddler who dives into it. If your goal is to model the behavior of individual snowflakes, then that sets the lower limit of your simulation's fidelity. You could, with a large enough computer, probably model the behavior of individual atoms within the snowflakes and the individual atoms within the toddler to get an almost-realistic simulation of that interaction, but even THAT simulation cannot account for all possible interactions, since its data set grows more limited the further down you go in scale.
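The storage argument can be made concrete with a back-of-the-envelope sketch. All the element counts and the bytes-per-element figure below are order-of-magnitude assumptions chosen purely for illustration, not measured data:

```python
def state_bytes(n_elements, bytes_per_element=64):
    """Rough bytes of state per time step for a simulation tracking
    n_elements things; 64 bytes each (position, velocity, a little
    local state) is an assumed figure."""
    return n_elements * bytes_per_element

# Three fidelity floors for the toddler-meets-snowbank simulation
# (all counts are order-of-magnitude guesses):
macroscopic = state_bytes(10**6)   # ~1 cm^3 voxels over a ~1 m^3 scene
per_flake   = state_bytes(10**8)   # ~1e8 snowflakes in a snowbank
per_atom    = state_bytes(10**27)  # ~1e27 atoms in toddler + snowbank

print(f"macroscopic: {macroscopic:.1e} B per step")
print(f"per-flake:   {per_flake:.1e} B per step")
print(f"per-atom:    {per_atom:.1e} B per step")
```

The point isn't the particular numbers; it's that dropping the fidelity floor from "snowflake" to "atom" multiplies the state you must track by nineteen orders of magnitude, which is why a simulator picks the coarsest fidelity its purpose allows.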
Now consider that the simulation you're talking about only has precise data on human brains. That means you can model the states of the scanned brain down to the limit of the scanner's resolution (which is inevitably much lower than reality, uncertainty principle being what it is). Since NOTHING ELSE in that simulation is so precisely modeled, for all practical purposes the only thing you've simulated is a disembodied brain in a digital jar (like the "Betas" in Alastair Reynolds's "Revelation Space" novels. It's a good way of preserving the knowledge and experience of living people, but everyone knows Betas aren't real people).
And as for the eventual cost of ultra-high-fidelity brain scan emulators, I don't know. If you'd asked me in 1985 how much a handheld computer phone with millions of times the storage and speed of my Sanyo MBC 555 would eventually cost, I'd have said the same thing.
And yet the Sanyo MBC 555 cost a little under a thousand dollars when it was first released. The capabilities of such machines have increased a thousandfold over the years, but the class they belong to -- desktop computers -- hasn't gotten any cheaper.
The kinds of systems that could handle the simulations you're talking about would be the futuristic equivalents of IBM's Watson or Deep Blue. And yes, it would (and will) be highly interesting to see those computers managing to produce realistic simulations of existing people along with all the knowledge and experience they once possessed. But by that time, nobody will be wondering anymore if the simulated personalities are genuinely conscious or not, since the AIs that created them clearly aren't.