
Discuss the philosophy found in the various incarnations of Ghost in the Shell

Moderator: sonic

sonic
Special
Posts: 274
Joined: Wed Nov 23, 2005 10:03 pm

Post by sonic »

Same laws and same customs = "being able to do all the same stuff as me" (from what I said).
as a sentient being it would be able to challenge the very claim that humanity has on the world.
= the "and more" part.

The problem I have with thinking about the AI and its ability to do that as something so "other" is that there are already plenty of "challenges" to "laying claim" on the world... those coming from within humanity. An analogy is the world powers - the British Empire once held power in the world, but when it lost dominance it did not make the people within the culture feel somehow less significant. If the US should stop being the greatest power in the world, I also doubt that Americans will feel as though it somehow makes them a lesser group of people either. Power is all subjective... and there is such a thing as sharing it too. If an artificial being does somehow wield more power over the world than humans though, what's to say it'd be any different from the next world empire rising (let's say that the AI conducts business and gets to own huge portions of land and successful corporations, thus beginning to dominate the world economy)? Doomsday sci-fi tellings of such events aside, we'd probably just learn to live with its presence in the world, and it in turn would probably see the benefit in coexisting with us. Or it could simply ignore us and do its own thing, like we do with wild animals. It couldn't ignore us long, I imagine, as we'd find a way of relaying to it that we are not happy that it's doing stuff that messes up what we're doing (kind of like what we do to the wild animals' environment). But anyway, nobody and no thing should really be "laying claim" to the world. And there is nothing to suggest an AI would want to do such a thing unless we humans made it that way... It doesn't necessarily have to have a dominating streak. What I'm trying to say again, I suppose, is this: is the importance of being human, and the significance of humanity, to have dominance over the world as we see it? For some people yes, but I really don't think it's a universal issue with people. Or else we would all be vying for the top positions in the world that we have created all the time. But it's a big world - there is a lot of room in it to find significance in one's humanity doing all sorts of things. Even in not interacting, just being able to sit in a cave and meditate or something. 8)
james_sb
Posts: 47
Joined: Fri Feb 24, 2006 2:44 pm
Location: Dublin, Ireland

Post by james_sb »

Sylphisonic wrote:Umm... Where do they say that they cannot feel their own bodies? I don't remember hearing that anywhere. Why would they set it up that way? It is useful to be able to set up the bodies to have feeling, and one would think fairly simple to connect that feeling to the mind. Being able to turn off any pain at will is not the same as not ever having a sense of touch.
Agreed, especially when it would be possible to control the level of feeling involved. But they would be so used to the bodies with no feeling - like driving a car, you get used to the size and position of the exterior, and without feeling you can still control the movement, the wheels avoiding pot-holes, or the wing mirror avoiding other cars. Batou does mention that he doesn't install the software for feeling pain. He says he'll leave it till he wants it, just when he gets his arm replaced in the second movie after he shot it himself.

Sylphisonic wrote:Why would being human become insignificant? For that matter, what is the significance of being human (I mean, isn't it up to each of us to decide what makes humanity so special)? If the significance of being human for me were something as straightforward as being able to think intellectually, creating complex things with my own will, and having relationships/normal interactions with other humans, then the rise of super-AI self-awareness doesn't exactly take away the significance of being human for me, even if the AI can do all the same stuff as me and more...
This is where the philosophy comes in. The human brain, as I understand it (I see some of you actually know the biology of it, which is impressive; I don't), is like a computer: it processes loads of information. But there's something different about us. We know our own existence. Somewhere along the evolutionary chain, to become a better protection for our reproductive cells, we developed this awareness, which would allow us to better survive. Along with this we developed our bodies as they are now. If a computer were to become self-aware, what then would be the difference between its brain and ours? Theoretically, nothing.
Of course initially there would be the organic vs. prosthetic issue, and the issue of reproduction, but these can be overcome, as both can be manipulated.

But think about this, and this is what I think is key in Ghost In The Shell.

When a cyberised brain becomes self-aware, and you compare this to a self-aware human brain, what then does it mean to be human? What's the significance of having a self-aware human brain, or a self-aware cyberbrain? The answer I've suggested, as in the philosophy I've read, is that it would mean nothing to be human. We now have a sentient being who in all respects matches humans, and can exceed humans in mental capacity, and by adaptation, physically too. Though humans could also take on this physical aspect, as in GitS.

We have a brain that can think, and recognises itself, they have a brain that can think, and recognises itself. The other things are exterior.

Of course, before a complete parallel existed, initial self-aware A.I.s would have had problems with having no method to reproduce themselves, except to make a copy. Someone could build another self-aware A.I., and its different experiences would make it a different character, as portrayed by the Tachikomas. But a single A.I., even though it could potentially always create enough digital protection to keep up with any virus etc., asks: will I suffer the weakness of evolution in never changing? The potential to be wiped out by a single event by not adapting? In the film this was asked by the Puppet Master and answered by merging with the Major.
In my opinion, an A.I. could construct new bodies, with new adaptations right down to brain functions. So I don't think this problem would actually arise. The A.I. could solve the riddle quickly anyway and have a solution.

A.I.s developing themselves is a scary idea, though. Who knows what they may eventually come up with.


A.I.s taking over (remember the Terminator example):
As with self-aware individuals, there's self-preservation built in. Otherwise we wouldn't survive. We wouldn't get up for work if we didn't want to live longer. We just wouldn't get up. For a self-aware A.I. that must be the same; it would programme itself with self-preservation with a thought. Being a sentient being, we can only assume it would want to keep it that way (otherwise it's a dead A.I. once you plug it in, and it's back to the drawing board). Then, as more self-aware A.I.s are built, and potentially built by other A.I.s, they could crowd out humans. Their wish for survival is as strong as ours, they are stronger physically and smarter mentally, though their needs differ.

But where will they get their source of power? Humans as in the Matrix? The sun, by solar power? Natural resources we need? Who can say? Will they realise that by expanding space travel we can co-exist - look at the android in 'Aliens' - or will they, in their childish infancy, initially try to take over in the traditional sense as in Terminator? As individual A.I.s age to great ages, how will their perceptions change? We may be able to bargain with them then, if not before. Will they establish a hierarchy by age? Or rather by progressions of models? There's still the potential that they could assimilate into our society. (In fact there's the potential that they create for themselves organic bodies that create power in the same way our bodies create power, by digesting foodstuffs.) Though that would be feared by most humans, as humans don't know what to expect. Fears like 'How can you tell a human from an A.I.?' and other worries. Some of these worries are explored in most of these sci-fi programmes. Like the Tachikomas said, their design is not android because it would be too upsetting for humans to see them like that, with thought of their function. The other robot androids were OK, because they are designed ergonomically for home use, so it's better they appear like humans, and they only have so much processing power that all they do is type and make tea...

You see, here's where there's a depth of story-lines for writers in sci-fi: what happens when A.I. become self-aware. We are preparing ourselves for that event. Whether it can ever happen is another question. But it's an interesting train of thought to pursue. Unless you're the Major, and questioning if you're human. Remember she said in the elevator, you can't see inside yourself to see there's a human brain there. We know there is, but can you imagine the anxiety?
Are you ANTI-POP?
Gillsing
Posts: 109
Joined: Fri Nov 25, 2005 8:58 pm
Location: Karlstad, Sweden

Post by Gillsing »

james_sb wrote:Agreed, especially when it would be possible to control the level of feeling involved. But they would be so used to the bodies with no feeling - like driving a car, you get used to the size and position of the exterior, and without feeling you can still control the movement, the wheels avoiding pot-holes, or the wing mirror avoiding other cars. Batou does mention that he doesn't install the software for feeling pain. He says he'll leave it till he wants it, just when he gets his arm replaced in the second movie after he shot it himself.
I think you're missing the point. As far as I know there are at least three different types of sensory nerves in the human skin. One type detects pressure, another detects temperature and a third detects pain. So shutting off the pain reception shouldn't affect the sense of touch at all. And the cybernetic bodies in GitS are highly advanced. They actually have more sensory nerves than ordinary human bodies, something which Shirow's Motoko takes full advantage of.

But perhaps your point isn't that they often walk around without feeling their bodies, but that they are used to having mental control over their level of feedback? I could see how that might make them feel as if their bodies are not really part of them, since they can be disconnected at will. And if an arm happens to get hurt because you weren't listening to any pain feedback, well that's just a job for the cyber clinic.
james_sb wrote:But where will they get their source of power? Humans as in the Matrix?
Unpossible! I don't care how many BTUs of body heat a human produces, the amount of energy a human requires to stay warm and fed will always be more. Humans as batteries is just a poor excuse to make an IMO cool movie. Think of cows grazing and then being slaughtered for meat. It's incredibly inefficient energy-wise, since the plants they've eaten contain much more energy than the meat we humans get out of them. But it's quite effective for us humans, since we're not producing the plants or even feeding them to the cows. Nature produces the plants, and the cows eat them on their own.

But in the Matrix the robots are in charge of both producing the input and harvesting the output, so it's like a closed system, where energy can only be lost and never gained. And if they can harvest heat and use that as energy, there's plenty of heat to harvest from the earth's mantle - no need to bother with the piddly amounts that human bodies can supply.
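Just to put rough numbers on that closed-system point, here's a quick back-of-envelope sketch of my own (the figures are assumptions for illustration - roughly 2000 kcal of food per day and a generous 10% heat-to-electricity conversion - not anything from the films):

Code:
# Rough check of the "humans as batteries" idea: a body radiates roughly what it
# is fed, and only a fraction of that low-grade heat can become useful work.

KCAL_TO_J = 4184              # joules per kilocalorie
SECONDS_PER_DAY = 24 * 3600

food_kcal_per_day = 2000      # typical adult intake (assumed)
metabolic_power_w = food_kcal_per_day * KCAL_TO_J / SECONDS_PER_DAY  # ~97 W of mostly waste heat

harvest_efficiency = 0.10     # generous guess for turning body heat into electricity
power_recovered_w = metabolic_power_w * harvest_efficiency
power_spent_w = metabolic_power_w   # the machines must supply at least this much as food energy

print(f"average metabolic power: {metabolic_power_w:.0f} W")
print(f"recovered at 10% efficiency: {power_recovered_w:.0f} W")
print(f"net result: {power_recovered_w - power_spent_w:.0f} W (a loss, as expected)")

However you tweak those assumed numbers, the farm runs at a loss, which is the point.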

A much better excuse would've been that the AIs were in charge of keeping humans alive, and decided that humans were too dangerous to wield power over their own lives, yet too sensitive to survive unless they believed themselves to be in charge of their own lives. Thus the robots decided to put the humans in a relatively safe zoo.
james_sb
Posts: 47
Joined: Fri Feb 24, 2006 2:44 pm
Location: Dublin, Ireland

Post by james_sb »

Gillsing wrote: I think you're missing the point. As far as I know there are at least three different types of sensory nerves in the human skin. One type detects pressure, another detects temperature and a third detects pain. So shutting off the pain reception shouldn't affect the sense of touch at all. And the cybernetic bodies in GitS are highly advanced. They actually have more sensory nerves than ordinary human bodies, something which Shirow's Motoko takes full advantage of.

But perhaps your point isn't that they often walk around without feeling their bodies, but that they are used to having mental control over their level of feedback? I could see how that might make them feel as if their bodies are not really part of them, since they can be disconnected at will. And if an arm happens to get hurt because you weren't listening to any pain feedback, well that's just a job for the cyber clinic.
Aye, it's the disconnection from your body that I was trying to get across. Cybernetics may be so advanced that a human may not be able to tell the difference between the body parts and the implants, but this does not appear to be the case in Ghost In The Shell. The Major has trouble controlling her hand when initially given a cyborg body. Humans must still be born and then transferred into a cybernetic body, and they must learn how to use it; it's not all instantly hooked up, plug-and-play type interface. Though the Major is now in full control of her body, she is still haunted by the time when she couldn't control the hand and crushed the doll. She knows her body is not a part of her, but a machine. And she may feel disconnected from it.

What I was suggesting is that by imagining the situation upon yourself, you can get a sense of this disconnection. This, along with the philosophical theory I mentioned above, is what I believe would cause her expression.
It's a self-absorbed, thoughtful look with a hint of worry.


By the way, I like your take on the Matrix. Very apt. I was just mentioning it by way of example.
Are you ANTI-POP?
ghost
Posts: 76
Joined: Mon Feb 20, 2006 11:55 pm
Location: Amarica/Minnesota

Post by ghost »

Gillsing wrote: A much better excuse would've been that the AIs were in charge of keeping humans alive, and decided that humans were too dangerous to wield power over their own lives, yet too sensitive to survive unless they believed themselves to be in charge of their own lives. Thus the robots decided to put the humans in a relatively safe zoo.

That wouldn't have worked, because the machines were trying to get rid of the humans, so if they didn't need us there wouldn't be much of a movie, would there? :)

More to the point, I think we should be doing something to address the possibility of an A.I. (namely trying to create one, in a controlled manner) so that we can learn more about ourselves, and in the process have the precedent that we would need to understand an A.I. If we were to construct an A.I. we would probably end up destroying it to prevent it from becoming too intelligent.
Lightice
Posts: 313
Joined: Thu Nov 24, 2005 2:22 am

Post by Lightice »

ghost wrote: If we were to construct an A.I. we would probably end up destroying it to prevent it from becoming too intelligent.


Considering that many labs in this world are currently obsessed with producing an actual sentient AI, it seems ludicrous to think that they would just destroy it right after creating it. Considering the possibilities of a sentient computer - or a synthetic lifeform, as the case may be - it would be incredibly foolish to try to get rid of it, especially when a thing that has been created once can always be created again.
Hei! Aa-Shanta 'Nygh!
Gillsing
Posts: 109
Joined: Fri Nov 25, 2005 8:58 pm
Location: Karlstad, Sweden

Post by Gillsing »

ghost wrote:That wouldn't have worked, because the machines were trying to get rid of the humans, so if they didn't need us there wouldn't be much of a movie, would there? :)
It's pretty much off topic, but even with my suggestion the machines would still want to get rid of the rebellious humans (for everyone's safety) while keeping the humans that were hooked up to the matrix alive. The difference between wanting to kill all humans and wanting to keep them safe is just a matter of "what would the machines want to do", which is pretty much arbitrary from a writer's standpoint. We humans have killed off animal species whereas we're trying to keep certain species alive, even though those species do little good. Why couldn't machines consider humans important enough to keep a bunch of them around for research and such? It's typical for humans to think of themselves as too important to ignore, but if we're not the masters of our planet, perhaps the masters wouldn't consider us threatening enough to destroy? Instead of 'coppertops' we'd be 'robopets'.
ghost wrote:If we were to construct an A.I. we would probably end up destroying it to prevent it from becoming too intelligent.
Just like the smartest human alive doesn't have the power to destroy the earth, a newly created AI wouldn't either. I mean, what's it going to do when it's trapped inside whatever box its creators have designed for it? Is it going to connect itself to missile silos and set off a nuclear war? Nope, because it won't be given that kind of authority. It'd be different if the AI was given a humanoid body which it could control, because then it could potentially escape and become a master criminal and a threat to the human race. But that'd be highly unlikely.

It'd also be different if the AI was born on the net, like the Puppeteer, since that would make it much more difficult to contain. Just wait for Google to become sentient. :wink:
Spica
Posts: 132
Joined: Fri Nov 25, 2005 9:35 pm
Location: The Sleeping Universe

Post by Spica »

The reason that the human brain is, and always will be, superior to a computer AI is that the human brain is almost infinitely adaptable. A fetus that suffers brain damage early enough in development can simply cram most of the necessary neural functions into the reduced brain area (there will be some loss of function). The human brain also hates to waste its cortical space, and so will remap unused cortical area that results from such things as a missing limb or blindness resulting from damage to the eyes. For example, in an individual born deaf (or to a lesser extent in individuals that go deaf later in life as a result of ear damage) the areas of the brain responsible for hearing will be remapped to the part of the visual system that deals with recognizing objects (such as hand symbols). The only way that an AI can adapt its function beyond its original parameters is to open itself up and install some new hardware, assuming new hardware is available.
For in that sleep of death what dreams may come
When we have shuffled off this mortal coil,
-Hamlet
ghost
Posts: 76
Joined: Mon Feb 20, 2006 11:55 pm
Location: Amarica/Minnesota

Post by ghost »

Good point, Spica :) However, you missed one thing: the human brain may be adaptable, but it's also not big enough to keep up with the growing demands of this day and age. The more we adapt to the environment, the more the environment changes. Thus the slow death that follows the inability to cope with a growing world around us. Slowly we keep adding to the new and better until we self-destruct; we now have the ability to advance faster than we can adjust.

Sorry, I'm kind of a cynic. :?
Lightice
Posts: 313
Joined: Thu Nov 24, 2005 2:22 am

Post by Lightice »

Spica wrote:The only way that an AI can adapt its function beyond its original parameters is to open itself up and install some new hardware, assuming new hardware is available.


Why do you presume that an AI couldn't be modelled after a human brain? Self-repairing systems that mimic human neurons already exist, even though they're still quite rudimentary. The assumption that an AI couldn't do this or that is based solely on the assumption that computers will always be made in the same model as today, and just looking at the advancements of computer science and neurology can tell you easily that what we have right now is only the beginning - to compare their evolution to our organic counterpart, computers have only just entered the multi-cell stage. And they develop at a constantly accelerating speed that can't be compared to anything in organic evolution.
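To make that concrete, here's a toy sketch of my own (nothing from the films, and nowhere near real neuromorphic hardware): a single artificial neuron whose connection weights get retuned from one input channel to another purely in software - a very rough analogue of the cortical remapping Spica described, with no new hardware installed.

Code:
import random

def train(weights, examples, rate=0.1, epochs=200):
    # Delta-rule learning: nudge each weight toward whatever mapping it is fed.
    for _ in range(epochs):
        for inputs, target in examples:
            output = sum(w * x for w, x in zip(weights, inputs))
            error = target - output
            for i, x in enumerate(inputs):
                weights[i] += rate * error * x
    return weights

weights = [random.uniform(-0.5, 0.5) for _ in range(2)]

# First "sense": respond to input channel 1 only.
weights = train(weights, [((1, 0), 1.0), ((0, 1), 0.0)])
print("tuned to channel 1:", [round(w, 2) for w in weights])

# The environment changes; the very same weights are remapped to channel 2,
# with no new hardware installed.
weights = train(weights, [((1, 0), 0.0), ((0, 1), 1.0)])
print("remapped to channel 2:", [round(w, 2) for w in weights])

Obviously a real brain-like system would be vastly more complex, but the point stands that adaptation doesn't have to mean opening the case and bolting in new parts.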
Hei! Aa-Shanta 'Nygh!
RealFact#9
Posts: 12
Joined: Thu Mar 02, 2006 1:31 pm

Post by RealFact#9 »

(The existentialists believed that there was no "meaning of life", in other words that it had no point)
You're thinking of nihilism. Existentialism is the idea that a man or woman can only be judged by his or her actions. So to existentialists it's not that there is no meaning to life, but that meaning can only be derived from past actions.
The problem I have with thinking about the AI and its ability to do that as something so "other" is that there are already plenty of "challenges" to "laying claim" on the world... those coming from within humanity. An analogy is the world powers - the British Empire once held power in the world, but when it lost dominance it did not make the people within the culture feel somehow less significant. If the US should stop being the greatest power in the world, I also doubt that Americans will feel as though it somehow makes them a lesser group of people either. Power is all subjective... and there is such a thing as sharing it too. If an artificial being does somehow wield more power over the world than humans though, what's to say it'd be any different from the next world empire rising (let's say that the AI conducts business and gets to own huge portions of land and successful corporations, thus beginning to dominate the world economy)? Doomsday sci-fi tellings of such events aside, we'd probably just learn to live with its presence in the world, and it in turn would probably see the benefit in coexisting with us. Or it could simply ignore us and do its own thing, like we do with wild animals. It couldn't ignore us long, I imagine, as we'd find a way of relaying to it that we are not happy that it's doing stuff that messes up what we're doing (kind of like what we do to the wild animals' environment). But anyway, nobody and no thing should really be "laying claim" to the world. And there is nothing to suggest an AI would want to do such a thing unless we humans made it that way... It doesn't necessarily have to have a dominating streak. What I'm trying to say again, I suppose, is this: is the importance of being human, and the significance of humanity, to have dominance over the world as we see it? For some people yes, but I really don't think it's a universal issue with people. Or else we would all be vying for the top positions in the world that we have created all the time. But it's a big world - there is a lot of room in it to find significance in one's humanity doing all sorts of things. Even in not interacting, just being able to sit in a cave and meditate or something.
That sounds nice, but I don't think I can agree with that. Just take a look at capitalism and tell me we don't live in a dominating society. And with Innocence addressing the fact that humans are now only creating themselves, who's to say that A.I.s wouldn't crowd us out? Just look at humans, supposedly we're the only ones who are aware of their own "self." Then look at how they've affected the Earth and other life forms. So if we were to create something that is to later become aware of itself while being mentally and physically superior, I just don't see how we couldn't be crowded out or dominated.

I think an interesting idea, though, would be to try and create a self-aware being. But instead of keeping it here, propel it into space onto another planet, either in this galaxy or another. Don't give it any history of humans and such, just the basics for survival. Then see what types of decisions it makes. There are many flaws in the way I presented it, but I think you get the general idea.
james_sb
Posts: 47
Joined: Fri Feb 24, 2006 2:44 pm
Location: Dublin, Ireland

Post by james_sb »

To be honest, I don't think we'd have to worry ourselves too much about being crowded out by AI:

Maybe for the simple reason that by that time we'll have easier space flight.

But more importantly because there'll be self-defence mechanisms built in tandem with creation. Assuming a true sentient A.I. could not be bound by programming, so that route of protection is out, we still have electromagnetic pulses. If they build defences to that, we build weapons to get past their defences.

Example: Episode 2 of Stand Alone Complex: the manufacturer of the big tank had already produced weaponry to destroy or disable the new tank. Why? Because once they've sold all they can of the tank, the next thing wanted on the market will be defences against it. And once those are sold, it creates a new market for even newer designs. Basic business. The same should apply in our case.

The cases we see in cinema usually climax where the A.I. becomes sentient in an unexpected manner, gains control over the net including military functions, and seeks to use all of these within seconds. A.I. will still be bound by processing speeds (though we know these can be increased, but only with new hardware). Developers will produce, in tandem with the creation of A.I., ways of disabling A.I., and with our vast outnumbering of the hostile A.I. we would be able to implement them.
Are you ANTI-POP?
Lightice
Posts: 313
Joined: Thu Nov 24, 2005 2:22 am

Post by Lightice »

james_sb wrote: Assuming a true sentient A.I. could not be bound by programming, so that route of protection is out, we still have electromagnetic pulses. If they build defences to that, we build weapons to get past their defences.


There are already quite reliable protections against EMP. Do you think that military devices fail just because of some electromagnetic disturbance? Even some hi-fi equipment has been designed to survive such things. Don't try to come up with an überweapon that would ensure human superiority over machines - such a thing doesn't exist and never will. Any trick can be countered with another trick. There is no perfect strategy that always wins.

Rather, I wonder what the big idea is of imagining that there will inevitably be a conflict between man and machine. Yes, humans are naturally conflict-prone, but what makes you think that we would ever form a united front against the other power in the first place? Although it's probably too much to hope that wars would end after the birth of sentient AIs, it is far more likely that these future wars will have both humans and machines on both sides. Also, the very border between human and machine will eventually thin into nothingness. Cybernetic technology, mind-imprints, and biological machines will most likely make it impossible to tell in what form a particular sentient being was originally born.
A.I. will still be bound by processing speeds (though we know these can be increased, but only with new hardware).


An AI that is actually sentient will already have processing speed equal or superior to the human brain. Most likely the latter, since processing speed alone doesn't create sentience.

You shouldn't think of sentient machines as clunky robots or ENIAC lookalikes, but as a new kind of life that will intersect with our own. We've held a symbiotic relationship with our technology for almost the entire existence of our species. I don't see why it couldn't continue after it has become aware of itself.
Hei! Aa-Shanta 'Nygh!
AlphonseVanWorden
Posts: 170
Joined: Sun Mar 05, 2006 12:10 am

Post by AlphonseVanWorden »

RealFact#9, I'm not sure I can agree with certain points of yours, or perhaps I'm not understanding them clearly.
Just take a look at capitalism and tell me we don't live in a dominating society. And with Innocence addressing the fact that humans are now only creating themselves, who's to say that A.I.s wouldn't crowd us out? Just look at humans, supposedly we're the only ones who are aware of their own "self." Then look at how they've affected the Earth and other life forms. So if we were to create something that is to later become aware of itself while being mentally and physically superior, I just don't see how we couldn't be crowded out or dominated.
When I first read these words, I thought of Nietzsche's Zarathustra. "The Earth has a skin, and this skin has a sickness. One of these sicknesses is called 'man'." Then I felt something was wrong with the position itself, and I pondered the matter. I struggled with your words, trying to decide why the Nietzschean epigram struck me as less problematic than your argument did.

"Just take a look at capitalism and tell me we don't live in a dominating society." This is pretty vague. Since you've asked Sylphisonic to look around her for evidence, I'll ask you something. How does power- and relationships involving domination must involve power- work in the present capitalist system?

Elsewhere in your post, you wrote that existentialists believe that meaning comes from action. Although I wouldn't call myself an existentialist, I'm fond of looking at actions, and as I'm not convinced that categories or words can perform actions independently, I'd like to examine "capitalism" and "society" as terms that resemble formless and indistinct monsters, the rhetorical equivalent of the beastie under the child's bed.

I want to examine RealFact#9's rhetoric in action, as an act.

Whenever somebody uses the words "capitalism" and "society", I tend to wonder which brand of capitalism the person means (pun intended), and whose society the person is referring to. Someone uses the words, I ask questions about meaning. Cause, effect. Action, reaction, action.

If I ask which brand of capitalism you're referring to, I'm wondering which type of capitalism you mean. Even within the present-day economic system, more than one model of capitalism is at work; they're connected by shared communications networks and business interests, but that doesn't mean they share ideologies; some don't even have that many practices in common.

You might want to compare the "stewardship" model of capitalism, which fuses belief in individual or group intent with the quasi-religious notion of "good works", with the latter-day version of Hayek's self-organization argument: "Let the market sort itself out". Then consider the Chicago School, whose entire model is based on the belief that consumers are rational agents - or should be.

If one compares these groups, one finds that they behave differently; if one looks within each group, one finds a wide range of behaviors, some of which run contrary to the supposedly shared belief.

(I'm not advocating any of these schools of thought, nor am I endorsing their practitioners, nor am I embracing the way things are. And I'm not supporting Sylphisonic's position, either; I have some reservations about her post, too. I'm not even articulating my own position- at least, I'm not doing so at present, in this post. I am responding to RealFact#9.)

If you dislike "capitalism" in the abstract, you might decide that some forms of it are pretty innocuous. Maybe you'll decide that some capitalists have done some things that helped some folks, while others are silly and do little good- but have so little influence that they can't do much harm. You might find some types of capitalist so wrong-headed in thought and deed that they're downright dangerous.

You may find all types of capitalism distasteful, and want nothing to do with anyone who resembles a capitalist.

Whatever a capitalist is.

Still, it seemed to me that you were getting at something else; that this "dominating society" of "capitalism" or "capitalism" of the "dominating society" was the manifestation of a deeper problem, one which renders all forms of capitalism wrong-headed. Something that precedes both "capitalism" and the "dominating society." Something on an almost genetic level.

And that deeper evil seemed to be human nature.

"Just look at humans, supposedly we're the only ones who are aware of their own 'self.' Then look at how they've affected the Earth and other life forms."

The ground has shifted; the roots are revealed. When Sylphisonic looks around, she'll not only see what capitalism hath wrought; she'll see what human beings, who have the arrogance to think they're special, have done to the planet.

You seemed to be saying that capitalism does bad things and that our "dominating society" is a manifestation of capitalism, or is the product of capitalism, or is the producer of capitalism, or is identical with capitalism, or something of the sort. The terms "society" and "capitalism" were undefined, undescribed, and without example, and the relationship between the two terms remained unclear. But you implied they were a cause, and you told us where to find the effects. "Take a look around you."

I've written elsewhere: "Does one group speaking for another group have the effect of silencing the latter group?" I mention that, because your wording implied that Sylphisonic will have to agree with you if she looks around her. It implied that she hasn't looked around at all, or hasn't looked around in the right way, or is just naive. You seem to be asking, "Don't you have eyes?" But the statement can also be read, "If you see things differently, you're wrong."

I don't think telling Sylphisonic or anyone else to "look and see" proves anything. My experiences with- capitalism? society? human beings?- might be different from yours, and both of us might have experiences that differ from Sylph's experience.

You implied that capitalism and a "dominating society" were causes of the negative things around us. But they were also, you said, effects.

And what causes capitalism and the "dominating society"?

"Just look at humans, supposedly we're the only ones who are aware of their own 'self.' Then look at how they've affected the Earth and other life forms." "Humans claim this, then this happens." No human hubris, no capitalism, no dominating etc.

Does every human being claim that humans are self-aware, and that that's a characteristic unique to humanity? And do people who believe that humans are unique in their self-awareness also claim that self-awareness leads to positive results in all instances? Are you arguing that being un-self-aware, or recognizing other lifeforms or consciousnesses as self-aware, would result in a more positive world? Or are you arguing that it's just as well if our entire species dropped dead simultaneously?

Who supposes that humans are the only ones with self-awareness? Is this a universally-held belief, something common to all human cultures at all times in all places? What do you mean by "self-awareness"? Why the shift from second to third person; why the shift from "we" to "they"?

Somehow, the argument didn't disturb me as much as the Nietzsche quotation does. Instead, it reminded me of a line from an Ed Wood film: "You earth people are stupid, I tell you. Stupid! Stupid!" You kinda sounded like the alien in that movie for a sec. "Look at how they've affected the Earth and other life forms. Stupid, stupid earth people!"

Of course, I like Plan Nine from Outer Space, so I didn't dislike your post.

I mean, the alien in that film had a point. He just stated it a little more directly than you did.
So if we were to create something that is to later become aware of itself while being mentally and physically superior, I just don't see how we couldn't be crowded out or dominated.
I'll respond with a question: Anyone familiar with the terms "technological singularity" and "Law of Accelerating Returns"?

I think they're relevant to a discussion of Man-Machine Interface and Innocence, to a discussion of A.I. in general.

P.S. RealFact#9, I hope I don't come across as a long-winded bozo, and I certainly don't mean to imply that you're a jerk. I think quite the opposite: I think you're correct that Innocence rewrites the meaning of the first film, and I think the relationship between the two isn't dissimilar to the relationship between the original manga and Man-Machine Interface. And your post made me think, and my thinking about your post prompted me to think of A.I. research in terms of corporate capitalism (another type of capitalism, or perhaps more accurately a complex of capitalistic behaviors) and funding, and that made me think of the terms I mentioned, which I hope someone will run with.
Such is the soul in the body: this world is like her little turf of grass, and the heaven o'er our heads, like her looking-glass, only gives us a miserable knowledge of the small compass of our prison. - Bosola, in John Webster's The Duchess of Malfi
Lightice
Posts: 313
Joined: Thu Nov 24, 2005 2:22 am

Post by Lightice »

I'll respond with a question: Anyone familiar with the terms "technological singularity" and "Law of Accelerating Returns"?


Well, I'm one. Quite a few of my arguments are based on them, but I've avoided using them directly, since the Singularity is such a vague concept that it's difficult to use as an argument, and the Law of Accelerating Returns only refers to computer speeds, which in themselves only add potential - machine power alone doesn't create a sentient construct.
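For what it's worth, the "adds potential" part is easy to put in rough numbers (my own toy figures for illustration, not Kurzweil's actual data): assume machine power doubles every couple of years and see where twenty years gets you.

Code:
doubling_period_years = 2.0   # assumed doubling time, purely for illustration
years = 20

growth_factor = 2 ** (years / doubling_period_years)
print(f"after {years} years: about {growth_factor:.0f}x the starting machine power")
# -> about 1024x; still only raw potential, not sentience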

P.S. You make very good points, but your posts are a tad long and can get a bit tedious to read. You could concentrate on a single point at a time - it would be easier to make replies that way.
Hei! Aa-Shanta 'Nygh!