The OpenWorm project is an international effort to simulate the roundworm Caenorhabditis elegans at the cellular level. C. elegans has one of the simplest nervous systems found among organisms, yet despite the worm's simplicity, it provides keen insight into biological systems. The project itself is not finished yet, but it suggests that simulating an entire organism is indeed possible. Simulating an organism using such a bottom-up approach, as opposed to a top-down approach, means that any sort of intelligent behavior would be an emergent trait: intelligence itself is not simulated, only the underlying biological systems that give rise to intelligent behavior.
It is not hard to imagine that if we are able to accurately simulate a simple organism such as C. elegans, we could theoretically also simulate more complex organisms, such as humans. It would require a tremendous amount of processing power to simulate the behavior of all the individual cells (or, for even more accuracy, organelles) in the human body, of which there are trillions. To illustrate: C. elegans has 302 neurons in its entire body, while Homo sapiens has roughly 86 billion neurons in the brain alone. For the sake of argument, we will assume that we have enough processing power to do the calculations needed for the simulation. The idea here is that simulating individual cells and their interactions with one another results in simulated tissue, which would result in simulated organs, which would result in an authentically simulated organism. In a sense, this achieves a faithful synthetic copy of the organic original. Naturally, such an approach requires a physicalist view of our universe, in which mental aspects such as consciousness and intelligence are emergent traits that stem from mere material interactions.
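To make the scale gap concrete, a back-of-envelope calculation comparing the two neuron counts mentioned above can be sketched as follows. This is purely illustrative: real simulation cost depends on synapse counts, cell types, and time resolution, not neuron count alone.

```python
# Illustrative comparison of nervous-system scale only.
# Figures are the commonly cited counts used in the text above.
C_ELEGANS_NEURONS = 302                 # entire C. elegans body
HUMAN_BRAIN_NEURONS = 86_000_000_000    # human brain alone (estimate)

ratio = HUMAN_BRAIN_NEURONS / C_ELEGANS_NEURONS
print(f"The human brain has about {ratio:,.0f} times "
      f"as many neurons as all of C. elegans.")
```

Even by this crudest possible measure, a human simulation would be hundreds of millions of times larger, before accounting for the trillions of non-neuronal cells in the rest of the body.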
The article Ethics of testing on a conscious simulation discusses the ethical implications of performing medical tests on such a simulated organism, but there is another interesting aspect to these kinds of organisms: should those simulated organisms be placed inside a simulated world, it would be impossible for them to know whether or not they live inside a simulation. From their perspective, the input that their senses receive is indistinguishable from the input found in the "real" world. Many philosophers have thought about this problem, and the brain-in-a-vat argument is the main premise behind the popular movie The Matrix (1999). In short, the argument is based on the idea that there is no way to know for certain that you are not, in fact, just a brain in a vat hooked up to a computer that perfectly manipulates your senses (read: neurons). Our version of the argument goes one step further: not only are the senses being fed simulated information, but the senses themselves are simulated as well!
From an experimental perspective, the possibilities for anthropological and biological research are astounding. Scientists could simulate all sorts of scenarios with many different types of simulated humans (or other conscious organisms). The simulated organisms would be, essentially, puppets that we, their creators and gods, could play with endlessly. We could place them into literal nightmares just to see how they would respond. We could cause a war to help us develop more effective fighting strategies. We would completely own them and their world. This raises the question: is that ethically justifiable? Do we, the gods of their universe, have a right to interfere with their simulated lives as we wish? Are they, despite their simulated origin, truly so different from us? Behaviorists see the mind as a black box whose inner workings are unimportant; to them, how it reacts and responds to stimuli is what matters. Behaviorism could be stretched into the philosophical idea that physicality is not important at all. If the simulated organisms behaved intelligently, wouldn't that entitle them to the same human rights that we hold in such high regard? If so, would shutting down the simulation program amount to the total genocide of an entire race of intelligent beings? Should we, having considered all these issues, even want to ever face this dilemma? Perhaps the best solution would be to not attempt to simulate complex organisms at all, although I doubt the exploratory nature of humanity would allow that.
Appendix: what about us?
At this point, it is not a stretch to conclude that if you were living inside a simulation, you could never know for sure either. Zohar Ringel and Dmitry Kovrizhin of Oxford University concluded that we couldn't be in a simulation, as they found a physical property a computer couldn't simulate. However, if we are in a simulation, who is to say what the physical laws of the parent system are? The only thing one can be perfectly sure about is one's own consciousness. All sensory input and bodily functions could be simulated or manipulated. From my perspective, the only thing I can be absolutely sure about is that I am currently thinking of the fact that I cannot be sure about anything but my own thinking. It's interesting food for thought. Perhaps your reading this article is the result of an intervention by the beings responsible for simulating our world.