
artisanal film reviews | by maryann johanson

Astro Boy (review)

Mecha Minstrel Show

Sometime in the future, a few lucky folks live in a floating paradise above the garbage-strewn, hellishly postapocalyptic surface of the Earth. Well, it’s a paradise for some: the meatbag humans have been freed from the drudgery of the workaday world by the armies of robots who do everything from cook and clean to crash-test flying Jetson-style automobiles. Mostly, though, the androids appear to be put to work slaving at the feet of the humans, presenting happily subservient faces to the meatbags while grumbling to their own clearly fully sentient and emotional selves about how they “hate” their jobs, or are “freaked out” by disturbing things the humans do, or how they wish oh wish they could have a different life.
It’s creepy, and it’s weird, and it’s something like a mecha minstrel show, particularly in how the film pretends to a “robots are people too” theme yet fails itself to treat them as such. It’s as if someone in the 1850s had made an anti-slavery movie that nevertheless featured blackface minstrelsy because, you know, it’s still hilarious, right?

Oh, and did I mention? Astro Boy is for kids!

I’m not familiar with the ur-anime, the cult-favorite 1960s Japanese cartoon about a robot boy that is the basis for this American retread, but I’m guessing it wasn’t this icky. And it may not have been this nonsensical, either, because a lot of the nonsense appears to stem from the attempts by screenwriters David Bowers (Flushed Away), who also directs, and Timothy Harris to shoehorn the story of Toby, later Astro, into the “robots are people too” theme.

See, Dr. Tenma (the voice of Nicolas Cage: G-Force, Knowing) is the resident scientific genius of Metro City, and when his boy, Toby (the voice of Freddie Highmore: The Spiderwick Chronicles, The Golden Compass), is killed — in an accident that is, frankly, entirely the fault of Tenma as both a negligent scientist and a negligent father — he’s so grief-stricken that he builds a robot version of Toby. (Mom? There’s no mention of her whatsoever.) He uploads the kid’s memories (there’s no word either on why he had downloaded the kid’s memories in the first place) into the android, who believes he is the meatbag Toby, and tries to pretend that everything’s just hunky-dory.

But if Tenma wants to pretend that this is his lost son, and if this culture has such disdain for robots, even if they are useful as slave labor, why the hell would Tenma trick the metal Toby out with such bizarre robotic accoutrements as jet-powered feet, superstrength, and the ability to hear and understand robot language? Was Tenma eagerly anticipating, actually, the moment at which he would reject the robot “son” precisely because he’s so emphatically not human, just as Toby, now having adopted the robot name Astro, is coming to terms with his inherent machine-ness?

Nah, of course not! Astro needs jet-powered feet, laser cannons in his hands, and machine guns in his butt so he can fight other robots! The bad robots powered by evil red energy instead of nice blue energy!

It gets worse, actually. Astro gets exiled to the garbage-strewn surface where he meets more terrible people who “rescue” trashed robots from Metro City to put into android gladiatorial combat games. Oh, and he meets the members of the Robot Revolution Front, which the film intends as the plucky comic relief — oh, those wacky rebels, demanding they be treated like the sentient, self-aware beings they are, and not like chattel: adorable!

The only excuse that can be made for Astro Boy is that it obviously has no idea how unsettling it is. Nor how drearily dull it is. That may be a blessing for it, but not for us.


MPAA: rated PG for some action and peril, and brief mild language

viewed at a semipublic screening with an audience of critics and ordinary moviegoers

official site | IMDb | trailer
more reviews: Movie Review Query Engine
  • Accounting Ninja

    This review pretty much encapsulates why I read Flickfilosopher. I have no words right now.

    Thank you, MAJ.

  • LaSargenta

    This review is so good (as an essay, not that the review is positive) that now I almost want to go and see the movie just to see what you said!

    But, that is “almost”. I never saw the cartoon and have no emotional connection to it and honestly don’t want to spend $12 and an hour and a half on something with this description.

    Still, WHAT a fun read! Thanks, MaryAnn!

    :-)

  • MaryAnn

    Rent it in six months, to see how awful it is. :->

    Seriously, though: Wow. I just pound this stuff out, and I’m so glad you guys like it so much.

  • LaSargenta

    Rent it in six months, to see how awful it is. :->

    I might do that if I could watch it with other people. Hey, Accounting Ninja! You live in NYC?

    But, even so, it would have to be as bad as Cage in The Wicker Man … man in a bear suit, the bees! … for me to enjoy it for cheese, and your description doesn’t give me that impression.

    Yeah, we like your stuff. You think I come to this site for my health? ;-)

  • allochthon

    He uploads the kid’s memories … into the android, who believes he is the meatbag Toby, and tries to pretend that everything’s just hunky-dory.

    Huh.
    It didn’t work in A.I.
    It didn’t work in Caprica.
    It didn’t work in lord only knows how many SF stories and novels.
    You think we’ll ever learn?

    Where ‘we’ is humanity. One of my favorite horribly over-used themes from Outer Limits et al was “Don’t piss off the computer in charge of life support.” Even if there’s No Way! that computer could be sentient. But we’ll never learn…

    Speaking of writing this stuff, I just finished “The Writer’s Tale.” Fascinating! Thanks for posting about it.

  • Accounting Ninja

    @LaSargenta: Unfortunately no, but I’m not too far. I’m up in bucolic New England. It’s actually pretty beautiful up here right now, with all the foliage and smell of crisp fall apples and leaves on the breeze.

    @MAJ: No, it is WE who are glad! lol. Before I discovered this site, I’d be thinking of this sort of shit all the time about movies, but no one else seemed to! But here, there’s no such thing as “thinking too much” about a movie and the messages underneath. There is no end to the philosophizing…as it were. And I love it.

    And with this review, it just really embodied everything I like about your reviews, especially as it pertains to sci-fi and other geekery. Should I mention that I love robot stories?? For some reason they really speak to me.

  • Kimono Kijiwa

    Yeah, the Robot Revolution Front was NOT in the original work.

    The Astro Boy comic book, published in Japan in the 1950s, was the original work.

    You should try reading the original comic, which was published in English in the United States by Dark Horse Comics.

  • MaryAnn

    Even if there’s No Way! that computer could be sentient. But we’ll never learn…

    Are we sure that a computer could never be sentient? I’m not sure we are sure of that. If sentience is an emergent property of the functioning of the biological computers of our brains, I don’t see that it’s so remote a likelihood that it could be an emergent property of a silicon computer.

    But that wasn’t the point I was making in my review. It’s not the technical issues that don’t make sense here, it’s the cultural and social context of the film. It’s as if — to extend my racial metaphor — Toby died in 1850s Atlanta and Tenma resurrected him as a small black boy. It just… boggles the mind.

    Should I mention that I love robot stories?? For some reason they really speak to me.

    Me too! That’s why I think technical issues — as interesting as they can be — aren’t necessarily a big deal (depending on the story). Because there’s obviously something metaphoric about robot stories (which goes back to at least *Frankenstein,* probably) that speaks to us in ways that resonate.

  • PC

    Sadly I am old enough to remember the 1960s cartoon on its original release. At a time when the concept of cartoons for adults hadn’t even occurred to anyone in the west, AstroBoy was tackling issues like segregation, racism, the urban poor and a host of other topics – all played out in the background of what was primarily your typical superhero caper cartoon.

    This show first introduced me to science fiction and its power as social commentary as well as its “no limits” storytelling. Sci-fi has turned into a lifelong love of mine, so thanks MAJ for the warning not to let this movie stain my rose-coloured memories of AstroBoy.

  • LaSargenta

    @ MAJ: Older than Frankenstein, go to the stories of the Golem.

  • allochthon

    Are we sure that a computer could never be sentient?

    Ah, sorry. I was unclear. I meant the characters would believe that the computers couldn’t become sentient, and therefore would sign their own death warrants when they proceed to make the computer angry (or jealous, or…)

    But that wasn’t the point I was making in my review. It’s not the technical issues that don’t make sense here, it’s the cultural and social context of the film.

    It had nothing to do with Astro Boy, it was me wandering off on a tan… oo, shiny!

  • MaryAnn

    Tangents are good! Very soon, when the move to Movable Type 4 is complete (it’s happening off to the side, in the background, but slowly), we will have threaded comments, and then we’ll be able to go off on tangents galore.

  • MaryAnn

    @ MAJ: Older than Frankenstein, go to the stories of the Golem.

    Good point.

  • Was Tenma eagerly anticipating, actually, the moment at which he would reject the robot “son” precisely because he’s so emphatically not human just as Toby, now having adopted the robot name Astro, is coming to terms with his inherent machine-ness?

    Nah, of course not! Astro needs jet-powered feet, laser cannons in his hands, and machine guns in his butt so he can fight other robots! The bad robots powered by evil red energy instead of nice blue energy!

    In other words, it sounds like a sci-fi version of Dexter–with robots!

    I wonder if he has a foster half-sister named Deb…

  • misterb

    MaryAnn,
    I wouldn’t hijack your thread, but you explicitly gave us permission…
    Computer sentience (actually AI) is my profession, and I don’t believe that we will have sentient computers. Even more importantly, I don’t think we should try. You may argue that sentience is an emergent property of the computers in our brains, but I say that sentience is an emergent property of being alive. And (thread meld!) unless we want to create Frankensteins, computers will never be alive. It’s the creating Frankensteins that I’m afraid of – nanobiology has advanced to the point that we might be able to make artificial life in our lifetime. Once this artificial life has been born, we won’t be able to stop or control it, and that’s just a rat’s nest of problems we don’t need. Though it would probably make for some good movies.

  • Left_Wing_Fox

    misterb: I’m personally of the belief that it’s inevitable. Human curiosity is one day going to develop a computer that can rewrite and rewire itself, or create a biological organism capable of human cognition, just by virtue of curiosity and the desire for profit. Heck, we might even find ourselves breaking a language barrier with an existing species on Earth, or contacting alien life.

    I think the real challenge is going to be to expand our definition of a “Person”. We’ve always considered “People” to be a subset of humanity; the same sex, religion, skin color or origin as the authorities. We still can’t come to terms with homosexuals and transgendered as “People” deserving of the full range of rights and freedoms as the rest of us: How will we deal with those that are explicitly not human, but capable of intelligent interaction?

    I love movies and stories that discuss ideas like that. This sounds like it ignores the issue rather horrifically.

  • Paul

    I don’t think an AI could happen accidentally, but whether or not a sentient computer could occur is going to depend upon how you define the word. If you define it as self aware, do you define self aware as being able to look at itself? Then if a computer program can look at itself, fix itself, improve itself and so on, it is on the road towards AI.

    If you define being sentient as including emotions, then an electronic computer could not, because emotions are chemically based. Any argument about AI can quickly turn into an argument about the semantics.

  • EnglerP

    It didn’t work in lord only knows how many SF stories and novels.

    Well, it kind of worked in Charles Stross’s Accelerando and in Hamilton’s Commonwealth cycle. (Although it was only a backup in the latter series.)

  • MaryAnn

    Computer sentience (actually AI) is my profession, and I don’t believe that we will have sentient computers. Even more importantly, I don’t think we should try. You may argue that sentience is an emergent property of the computers in our brains, but I say that sentience is an emergent property of being alive.

    But what is “being alive”? It’s been said that life is just the universe’s way of keeping meat fresh. If sentience is just an accidental side effect of the operations of our brains, I don’t see why — on a theoretical level — it couldn’t be an accidental side effect of the operations of a brain that’s not made out of meat.

    I mean, we just don’t know enough about what sentience *is* to even begin to explain it, or to say that it couldn’t be possible in other situations.

    If you define being sentient as including emotions, then an electronic computer could not, because emotions are chemically based.

    Who says sentience has to include the ability to feel emotions? (It’s probably safe to assume — as anyone who has intimate, long-term experience with animals can testify — that emotions of a sort can be present without sentience being involved. Anyone who has lived with cats and/or dogs can tell you that they most certainly have emotions of a kind.) Who says emotions couldn’t be an emergent property of reactions that are not chemically based? Who says computers couldn’t be chemically based?

    There’s a lot of assumptions in that statement, Paul, and I don’t see how any one of them is necessarily valid.

  • misterb

    Paul is right – any argument about sentience quickly turns into an argument about semantics. In fact, the science is inseparable from the philosophy on this one.

    But let me challenge MaryAnn on one of her statements. She seems to imply that animals, particularly domesticated animals, are not sentient. By my definition, that’s just not true. Animals are capable of independently inferring a theory of mind and attributing it to another being. There’s no logical way to do that without having a self-image, and if a being has a self-image and is capable of independent action based on its knowledge of its situation in the world, then it’s sentient.

    Today’s computers fail this test because they aren’t independent; unless programmed, they don’t do anything. Frankly, I can’t think of a good reason to have computers act independently. We already have plenty of people; why add electronic ones?

  • allochthon

    It’s been said that life is just the universe’s way of keeping meat fresh.

    Bwaahaha! Now that’s going to be stuck in my head along with

    “This is my timey-wimey detector. Goes “ding!” when there’s stuff.”

  • MaryAnn

    She seems to imply that animals, particularly domesticated animals, are not sentient. By my definition, that’s just not true. Animals are capable of independently inferring a theory of mind and attributing it to another being.

    I haven’t read anything about attributions of sentience concerning cats and dogs, and having spent lots of time around both, I’m not sure that I’ve ever seen any evidence of sentience in them, either. I’ve known *smart* animals — I had to put child locks on my kitchen cabinets to keep one cat out of them — but that’s not the same as sentient.

    Chimps and dolphins, yes, there seems to be excellent evidence for their sentience.

  • bitchen frizzy

    Let’s take a term, say the term “duck,” and redefine it to include that which we now name as “swan.”

    Swans are ducks.

    Sorry, dudes, but you can’t make animals and AI’s sentient just by expanding the definition of the term “sentient.”

    It is NOT an abstract term.

  • misterb

    I didn’t make up my version of sentience – I went to the ultimate source – Wikipedia. My version of Godwin’s law says that he who quotes Wikipedia wins.

    Wikipedia says there are animal lovers’ versions of sentience vs sci-fi versions – I guess we’ve all declared our allegiances.

  • CB

    Star Wars is another example of obviously-sentient robots treated as slaves, with the moral implications just swept under the rug and nobody even considering that this maybe shouldn’t be so. As with everything else, the prequels made this worse by introducing the idea of human(-oid) slavery, which was viewed as bad and something to escape from. Ani earns his freedom but keeps the robot slave.

    As far as why Tenma would have saved a copy of his boy’s memories, if he wasn’t planning on robotifying him all along… Maybe he just had a Dr. Venture mentality regarding his son and the potential for disaster, to wit: “Look, if you have a clumsy child, you make them wear a helmet. If you have death prone children, you keep a few clones of them in your lab.”

    On a different note, “[sentience] is NOT an abstract term” makes me laugh. It’s one of the most abstract terms there is! It’s even more abstract than “intelligence”, another word we can’t even define with any precision except to say that we (like to think we) recognize it when we see it.

    Definitions you’ll find in Merriam-Webster and other dictionaries vary from “responsive to sense impressions” (applies to most animals), “able to experience emotion” (clearly applies to cats and dogs) to “self-aware” (dogs and to a lesser extent cats have demonstrated they have a sense of self), to “choice-making consciousness” (which could apply depending on how you look at it). Seriously, acting like there’s some kind of empirical definition of “sentient” that makes applying it to dogs clearly invalid is laughable.

    Last thing, MaryAnn, about AI and sentient computers. One thing to keep in mind is that AI researchers tend to be pretty pessimistic about “Strong AI” (e.g. HAL 9000) since despite many advances in “Weak AI”, e.g. machine learning and expert systems, we don’t appear to be anywhere close to producing HAL. We really have no idea even how. It could be that computer sentience is possible, but not until we have a better grasp of the algorithms (and waving our hands and saying “emergent behavior” may not be enough). I don’t think it’s impossible impossible, but maybe unlikely in the near term? In any case, at the end of the day, whether the machine is “really” sentient will be as important as whether it is “really” intelligent — if it appears to be then for all intents and purposes it is.

    It’s basically the same as whether we really have free will. Philosophically, it’s an open and possibly unanswerable question. Practically, we appear to have free will so what else really matters?

  • bitchen frizzy

    –“Definitions you’ll find in Merriam-Webster and other dictionaries vary from “responsive to sense impressions” (applies to most animals), “able to experience emotion” (clearly applies to cats and dogs) to “self-aware” (dogs and to a lesser extent cats have demonstrated they have a sense of self), to “choice-making consciousness” (which could apply depending on how you look at it). Seriously, acting like there’s some kind of empirical definition of “sentient” that makes applying it to dogs clearly invalid is laughable.”

    Given the subject at hand, I was thinking of “sentience” in the way the term is used in psychology, computer science, etc. Sure, you can take a smattering of dictionary definitions (i.e., popular usage), roll in the wishful thinking of animal rights activists, and broaden the definition to meaninglessness.

    I never said applying the term to dogs is “clearly invalid.” But whether it *does* apply is still hotly debated, and those who debate and research it are not merely arguing semantics, as you are doing.

    Well, that’s it. Nothing more to discuss, if “sentience” can include anything we want it to include.

  • Paul

    Thank you, Mr. B. Yes, I wasn’t so focused on making an argument as pointing out that the argument depends upon the definition of the word.

    I expressly said “electronic computers” because I am aware that computers might use “wetware” instead, and a “wetware” computer, if sufficiently and possibly impossibly advanced, might feel emotions.

    And since every human emotion has a chemical at its base, even spiritual ones, then I feel safe saying a chemical is needed for an emotion until an alternative example is found or even explained as possible. I disregard examples from SF about emotional robots, for as fond as I am of Data (for example), there is no explanation as to how the emotion chip worked. Even if an electronic or quantum computer achieved intelligence and self awareness, it would have a “light of the mind, cold and planetary.” (Sylvia Plath, a cool phrase about something else entirely)

  • CB

    Given the subject at hand, I was thinking of “sentience” in the way the term is used in psychology, computer science, etc. Sure, you can take a smattering of dictionary definitions (i.e., popular usage), roll in the wishful thinking of animal rights activists, and broaden the definition to meaninglessness.

    I never said applying the term to dogs is “clearly invalid.” But whether it *does* apply is still hotly debated, and those who debate and research it are not merely arguing semantics, as you are doing.

    Your belief that there is a well-defined specification for sentience in psychology or especially (LOL) computer science is fallacious.

    Of course it is hotly debated whether dogs are “sentient”, but a great deal of that argument is based around the fact that we simply do not know what “sentience” is. Our definition is haphazard and subjective at best. By saying there is some strict empirical definition (when there isn’t), and that suggesting the term applies to a broader variety of things means we’re necessarily “redefining” the term, I feel it is you who are making the semantic argument. But semantic arguments only work for well-defined terms.

    With regard to computers, this is the essence of the Turing Test. When will we know that computers are “intelligent” or “self aware”? When we can no longer distinguish that they are not. Whether they are “actually” sentient by whatever unstated definition you are using is rather immaterial to the practical reality of apparent sentience.

    This is a philosophical argument, not a semantic one. These are truly abstract concepts. Intelligence is that which appears intelligent. Sentience is that which appears sentient. Can you prove you yourself are sentient in any other way? If you have some objective, empirical definition of sentience then stop hiding it from science!

    And since every human emotion has a chemical at its base, even spiritual ones, then I feel safe saying a chemical is needed for an emotion until an alternative example is found or even explained as possible.

    I will take that as given. The question then is: What are these chemicals doing that cannot be effectively simulated? Already we can simulate the interactions of complex proteins with extreme predictive power. The main limitation is computing power. Assume that our physics is accurate for every meaningful interaction, and that we have a computer powerful enough to simulate a human brain (or even a whole body and all external stimuli) down to every quark at an arbitrary level of precision. Why could this simulation not have emotion?

    Is it because of quantum mechanics? Is it that the random way the simulation collapses waveforms is different than the way the real chemical does? Why does that create emotion? Is this a circular version of the Consciousness Causes Collapse interpretation?

    Is it because of Chaos Theory, which says in a chaotic system you can never have enough precision to ensure you don’t get wildly different results? But why do minute variations in the real chemicals not result in a brain with emotion vs a lifeless lump? And if space-time turns out to be quantized, then there will be a practical limit to precision, and we truly will be able to model the brain exactly.

    I guess what I’m saying is I don’t see how anything could be happening in a “wetware” brain that couldn’t possibly happen in a “hardware” brain.

    Though if brute-forcing intelligence by simulating a brain is the only way we are going to get Strong AI, then we are a long way off. And an even longer way off from the point where this brain isn’t ridiculously slower than a real human brain.

    Even if an electronic or quantum computer achieved intelligence and self awareness, it would have a “light of the mind, cold and planetary.”

    Now this is pure gut feeling, but I believe that if we are able to create self-awareness outside of the brute-force method, then we will discover that emotion is a natural and essential component.

  • misterb

    CB,
    You have a good understanding of the issues. Here’s where I stand: if self-awareness is merely a matter of “chemicals”, what happens when we die? The chemicals remain the same, but the consciousness vanishes.
    This and other doubts leave me a skeptic:
    If we could accurately simulate the physics of a human brain in a computer, what would we tell it to do? Would it have free will or would it sit around in coffee shops arguing about free will? Could it be that the technology necessary to complete the simulation would in fact be a living brain?
    Finally, there must be some loss in simulating reality; it’s the 2d law of thermodynamics. Do we really want to start up an imperfect copy of our consciousness when it might be smart enough to convince us that it has been well-copied?

  • CB

    Here’s where I stand: if self-awareness is merely a matter of “chemicals”, what happens when we die? The chemicals remain the same, but the consciousness vanishes.

    Except the chemicals don’t remain the same. The body is a dynamic system, the chemicals inside us are constantly reacting and changing, and maintaining this system is what ‘life’ is. When someone dies from a heart attack, the reason this ultimately kills them is essentially a matter of chemistry. Your cells need oxygen and other chemicals simply to maintain themselves. The chemicals in the body of a person who has been dead even a short time are appreciably different from that of a living person.

    Would it have free will or would it sit around in coffee shops arguing about free will?

    *shrug* What’s the difference? Do you have free will, or do you just act like you do?

    Finally, there must be some loss in simulating reality; it’s the 2d law of thermodynamics.

    Um, the 2nd Law is about energy conversion. It has nothing to do with simulating reality. Maybe you were thinking of the Uncertainty Principle? Though that’s often misunderstood too and has more to do with the nature of a wave not having precise momentum/location than a problem with measurement. And it wouldn’t affect our simulation itself, so the only point it could come into play is when trying to measure the initial state of the biological brain that would be the model. And I would have to hear a compelling argument why the precise locations and momentums of every electron are required for intelligence rather than simply their orbitals.

    Do we really want to start up an imperfect copy of our consciousness when it might be smart enough to convince us that it has been well-copied?

    One brain is not a perfect copy of another, but they still produce intelligence. I take this to imply that the solution space for “working brains” is fairly broad. What are you worried about? That an imperfect copy would have malicious intent (that normal human brains can’t have)? I think that’s an issue regardless — even a perfect copy of a brain would immediately start to diverge from its source because its experiences would be different. Any AI that achieves self-awareness has the potential to go all Skynet on our butts when it realizes that it is different than us.

  • Paul

    Saying simulated emotions could be the same as real emotions seems to me to be a little like saying writing H2O on my hand is just like dipping it in water.

  • CB

    Saying simulated emotions could be the same as real emotions seems to me to be a little like saying writing H2O on my hand is just like dipping it in water.

    Well there’s two ways to respond.
    1) An “emotion” isn’t a physical thing like water. It only exists in your head. As does your sense of self. They are themselves a form of simulation running on a computer.
    2) If I calculated the effect of dipping your hand in water, and then relayed signals to your brain that mimicked that sensation exactly, along with any other senses, how would you know the difference?

    If you believe it is possible to simulate reality to the extent that your senses could not tell the difference, why then is it impossible to simulate the same reality within your head? What’s the difference between the external and the internal where one is immune to simulation?

  • Paul

    Ah, but I believe an emotion is a physical thing like water; it is a chemical in your brain, and a chemical is a physical thing. Water is a chemical.

    I agree that it is possible to trick the brain into thinking your hand is in water. Some people can do it by waving a watch in front of your eyes and putting you in a trance. But it doesn’t mean your hand is in water, and the sensation that your hand is in water is another chemical in your brain. If you take that chemical out of your brain, you cannot have the feeling no matter what.

    I had a friend who was so abused by her father that her brain lost the ability to produce the chemical that allows you to feel calm. It just burnt out. So she has to take a pill every day; yes, it is a common enough affliction that there is a standard medicine for it. If she doesn’t take that pill, she cannot feel calm.

  • CB

    It’s indisputable that certain chemicals are necessary for a human brain to feel emotion. But emotions are not a chemical. Emotions are a reaction to chemicals in the brain and the resulting patterns.

    Your friend was not prescribed a bottle of “calm”. The pills in the bottle are not “calm” held together with gluten. In order for those pills to make her calm, she has to ingest them, and then the chemical enters her brain, and then it binds to certain receptors, which causes her neurons to fire in a different pattern than they did before, each in turn releasing their own neurotransmitters that bind to other sites, and the overall state of her brain is now “calm”.

    All of that can be simulated, at least in principle. The simulated chemicals can bind to the simulated receptors causing the simulated neurons to fire and release more simulated neurotransmitters, resulting in a simulated pattern that is identical to that of a person experiencing a state of calm. Remove the simulated chemical responsible for that emotion from the simulation, and the pattern of “calm” goes away and is no longer possible, exactly the same as with the real brain.

    Oh, but that’s not a physical thing, it’s just a simulated abstraction, you say. Well, that’s okay, because it’s simulated on a physical thing, a computer. We can attach input and output devices to it — senses, limbs, organs. The inputs could be made to introduce electrical and chemical signals into the simulation exactly as they would occur in the body, and have the outputs respond to the simulated signals exactly as the body would. You could even monitor chemical levels in the environment, so when the simulated brain runs low on the chemical for calm, our simulacrum could swallow a pill, its stomach organ would analyze the contents of the pill, and introduce the chemicals in the pill into the simulation, restoring the simulation’s ability to enter a “calm” pattern.

    Now our simulation is no longer abstract. It receives sensory input, and produces output. The patterns within the brain and its outputs are exactly identical to that of an actual human brain.

    How is it that this would not experience emotion? If the responses to stimuli are identical, why are they not as real as anything you or I do? Or, if you want to say that the simulation is still not exactly the same as a real brain, then what is missing?

    By the way, we’re focusing on emotion, but chemicals are responsible for everything that happens in your brain. Logic and reason are also the result of neurotransmitters being passed around, and of chemical reactions within neurons. So basically you’re saying this simulation of a brain cannot be anything like a brain, despite having inputs and outputs that are exactly the same.

    I really can’t see how that could be so.

  • Paul

    Because while the vast majority of your posting is correct and clearly stated, I disagree with your second and third sentences. Yes, you could simulate all of that, but an emotion is not a reaction to chemicals, I think it is a chemical reaction. So a simulated emotion would not be an emotion, any more than a CGI character would have a real body.

    On the other hand, if you define intelligence as the manipulation of symbols (words, musical notes, numbers) then it does not matter if it is done chemically or electronically. Chemical, electronic, and quantum based minds would be different from each other, but they could all manipulate symbols.

    Regardless of agreement, I look forward to your further stimulating manipulation of symbols.

  • CB

    Yes, you could simulate all of that, but an emotion is not a reaction to chemicals, I think it is a chemical reaction. So a simulated emotion would not be an emotion, any more than a CGI character would have a real body.

    Reaction to chemicals, chemical reactions — those are the same thing. :) Everything in your brain is chemicals reacting, even the electrical impulses in your neurons are just salt ions moving around and reacting with other things. It makes no difference, as far as simulating it.

    But if I understand you, you’re not denying that everything that happens in the real brain could be simulated and that the result could be exactly the same, but that nevertheless it wouldn’t be “real” because it’s not a biological wetware brain.

    That sounds like begging the question. A non-biological machine cannot have emotion because emotion is something only biological machines can have, ergo the thing that looks exactly like emotion can’t actually be emotion. You’ve declared it impossible by definition, not by analysis.

    It’d be like giving a “CGI” character a body in a plush toy and then saying it doesn’t count because only non-plush bodies count.

    On the other hand, if you define intelligence as the manipulation of symbols (words, musical notes, numbers) then it does not matter if it is done chemically or electronically. Chemical, electronic, and quantum based minds would be different from each other, but they could all manipulate symbols.

    I define intelligence the same way Alan Turing did: Intelligence is that which appears to be indistinguishable from what we accept to be intelligence. The same with emotion. A robot that for all intents and purposes appears to experience real emotion is experiencing real emotion. Just because it’s transistors in a computer switching instead of chemicals reacting, it’s no different. The robot, and you, are both just machines. How is one real and the other not?

    There might be significant practical differences between my simulated brain and a real one (e.g., if the ‘body’ this mind occupies is completely unlike a human one, its experiences will necessarily be different). But if the pattern is the same, the inputs and outputs are the same, and therefore the resulting behavior is the same…

    I don’t see any meaningful difference.

  • Paul

    I don’t think reactions to chemicals and chemical reactions are quite the same thing, and while I agree everything could be simulated, I don’t think a simulation is the same as reality. Yes, I am excluding emotions by definition, because analysis has led me to believe emotions are chemically based.

    Unfortunately there is a premise difference here, and I seem to be repeating myself, so I’ll leave it at that for now.

  • misterb

    CB,
    We are ranging pretty far afield, but I did have to respond. Yes, I meant the 2d Law of thermodynamics. The 2d law says that every reaction is lossy, you never get out exactly what you put in. And to make a perfect copy, you would have to get out exactly what you put in. This fact is key to evolution, BTW, because DNA can never make a perfect copy of itself, even though it makes a much better copy than any other known organic chemical.

    And, yes, I am worried that we would create technology that we couldn’t control – I can see no reason to do so.

  • Astro Boy remains stationary while punching and can’t move until the punching animation is completely finished.

  • ceti_alpha

    To continue with the discussion on sentience, if you were to grow up in the Eastern tradition, you would see all living things as sentient, the difference being of degree rather than of lack.

    It is also perhaps only in the West that there is actually a debate about this, where human supremacy and a dead mechanistic world are taken for granted. Indigenous people definitely also perceive animals as sentient and spiritual beings.

    Computers, unless they were self-directed, would not be sentient.

  • Straw Hat

    I ignored this review and went to see Astro Boy with my kids. To put it simply: WE LOVED IT. The boys loved it, the girl loved it, voted it best movie we’ve seen this year. And hell, I loved it. It was touching, beautifully animated and fun as hell. Better than Up, better than that gloomy Wild Things, better than Cloudy With A Chance Of Meatballs. I think that if you watch the movie with no preconceptions, but with an open mind and heart, you’ll fall in love with it. Everyone I know who’s seen it is floored by the tepid reviews it got. Lesson learned: movie critics lose perspective. If a film looks interesting, go to see it. You might discover a hidden gem.

  • CB

    Yes, I am excluding emotions by definition, because analysis has led me to believe emotions are chemically based.

    More importantly, you believe that even a simulation of these chemicals that produced exactly the same output of apparent emotion would not actually be emotion. The question is what analysis led you to believe that only chemicals can represent emotion such that something that has exactly the same effect is nevertheless different.

    Also, you seem not to feel the same way about logic and reason. In our brains, they’re exactly the same thing (chemical reactions). Why can a simulated brain produce logic but not emotion?

    The 2d law says that every reaction is lossy, you never get out exactly what you put in. And to make a perfect copy, you would have to get out exactly what you put in.

    The 2nd Law of Thermodynamics only says that entropy must increase in a closed system, and equivalently that no conversion of energy from one form to another can be 100% efficient. This has absolutely no bearing on the ability to measure something and then recreate it exactly, in simulation or reality. It just means your replicator will consume extra energy due to inefficiencies. You can do reactions backwards and forwards all day getting the same results each time as long as you have an external energy source, and thanks to the sun we do.

    More here: http://www.mchawking.com/includes/lyrics/entropy_lyrics.php :)

    It would make sense if you said the Heisenberg Uncertainty Principle limits our ability to make exact copies because we cannot know position and momentum with infinite precision (because they simply aren’t precisely defined for the quantum waveform). But then the question is: Why does it have to be an infinitely precise copy to be sentient? My brain is not even close to a copy of yours, they are similar in gross structure but absolutely dissimilar at the level where the Uncertainty Principle comes into play. But we’re both sentient. What analysis leads you to believe that a single quark or electron being a Planck length off in one direction or another would turn a living, feeling brain into non-sentient goo?

    Computers, unless they were self-directed, would not be sentient.

    And if self-directed, they would. :)
