The Ghost in the Machine

Discuss the philosophy found in the various incarnations of Ghost in the Shell

Moderator: sonic

Sergeant X
Posts: 20
Joined: Wed Mar 26, 2014 6:22 pm

The Ghost in the Machine

Post by Sergeant X »

Allegedly this is the book the show was named after. I've heard good things, but I haven't read it. The whole holon concept seems interesting to me; I think it might relate to that Tachikoma episode where they talk about the Selfish Gene and Gaia theory.

Has anyone read the book? Thoughts?

http://en.wikipedia.org/wiki/The_Ghost_in_the_Machine
“Poets are always taking the weather so personally. They're always sticking their emotions in things that have no emotions.”
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Post by Freitag »

I've been boning up on Western philosophers recently and I'll have to throw this in my pot and stir it around too.

I came across an article that sort of agrees (I think it does, anyway; they both mention Descartes):
https://aeon.co/ideas/the-body-is-the-m ... t-machines

The body is the missing link for truly intelligent machines
It's tempting to think of the mind as a layer that sits on top of more primitive cognitive structures. We experience ourselves as conscious beings, after all, in a way that feels different to the rhythm of our heartbeat or the rumblings of our stomach. If the operations of the brain can be separated out and stratified, then perhaps we can construct something akin to just the top layer, and achieve human-like artificial intelligence (AI) while bypassing the messy flesh that characterises organic life.

I understand the appeal of this view, because I co-founded SwiftKey, a predictive-language software company that was bought by Microsoft. Our goal is to emulate the remarkable processes by which human beings can understand and manipulate language. We've made some decent progress: I was pretty proud of the elegant new communication system we built for the physicist Stephen Hawking between 2012 and 2014. But despite encouraging results, most of the time I'm reminded that we're nowhere near achieving human-like AI. Why? Because the layered model of cognition is wrong. Most AI researchers are currently missing a central piece of the puzzle: embodiment.
People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Post by Freitag »

Dang it. I tried posting the rest of the article, but it was too large, and I got a message from the firewall saying my IP address has been blocked. Again. Sigh.

I know from experience that the block will time out, but until then I'll have to access the forum from my cell phone instead of my home computer.
People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
Jeff Georgeson
Site Admin
Posts: 436
Joined: Wed Nov 23, 2005 12:40 am

Re: The Ghost in the Machine

Post by Jeff Georgeson »

That's an amazing article! I've also thought machine learning has some particular ... dead ends? blind spots? issues with categorisation? and wished I could create a better way to mimic the modelling this article talks about. I tried a sort of categorising algorithm in my RealMemory system, which is meant to give an AI a human-like memory, complete with forgetfulness and misremembering. It is definitely limited, and it would be so much nicer if it could use a more efficient, more autonomous way of learning about the world around it (or, in this case, of handling and retrieving memories; e.g., being able to say everything it knows about cats without having to work from just a rote, given set of information). Given a broad enough modelling system, it could even make up its own cats, with all kinds of believable variances, and possibly 'dream' of slightly impossible cats whilst still being able to recognise them as cats.

I guess I'm also saying that it would understand, in some way, what cats are, rather than just guessing from a massive trove of data whether cats or giraffes are in front of it or being talked about.
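Roughly the flavour of what I mean, though this toy is nothing like the real implementation (every name below is invented for illustration; none of it is RealMemory code): give each memory a strength that decays over time, and make retrieval probabilistic, with the occasional wrong memory intruding.

Code: Select all

# Hypothetical sketch of decay-based memory with misremembering.
# This is NOT RealMemory's API; every name here is invented.
import math
import random
import time

class MemoryStore:
    def __init__(self, half_life=3600.0, misremember_rate=0.1):
        self.half_life = half_life            # seconds until strength halves
        self.misremember_rate = misremember_rate
        self.items = []                       # (timestamp, fact) pairs

    def remember(self, fact):
        self.items.append((time.time(), fact))

    def strength(self, timestamp):
        # Exponential decay: older memories are harder to retrieve.
        age = time.time() - timestamp
        return math.exp(-age * math.log(2) / self.half_life)

    def recall(self):
        # Retrieval is probabilistic: weak memories are 'forgotten', and
        # occasionally a different memory intrudes (misremembering).
        retrieved = []
        for ts, fact in self.items:
            if random.random() < self.strength(ts):
                if random.random() < self.misremember_rate and len(self.items) > 1:
                    _, fact = random.choice(self.items)   # wrong memory intrudes
                retrieved.append(fact)
        return retrieved

store = MemoryStore(half_life=5.0)
store.remember("cats have whiskers")
store.remember("cats chase mice")
print(store.recall())   # soon after: probably both facts, occasionally swapped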

Thanks for the link!
Jeff Georgeson
Quantum Tiger Games
www.quantumtigergames.com
the cat is out of the box
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Re: The Ghost in the Machine

Post by Freitag »

There are two things from my experience that I think are relevant here.

Back in my college days, some of us from the Computer Science and Engineering classes created a Thursday night group that we called Mad Hackers Night. This was before the media stole the word hacker and redefined it as thief. One of the guys built what was then called a Neural Net and trained it on "pictures" of letters of the alphabet (this was 1986/87, way before OCR was everyday technology). The pictures were made of asterisks and spaces; I'll see if I can reproduce one here.

Code: Select all

   ***************
   ***************
   ***************
   ***
   ***
   ***
   ************
   ************
   ************
   ***
   ***
   ***
   ***************
   ***************
   ***************
Anyway, one image per letter. Then he created a program to introduce "noise". It took a noise level in percent as an argument, read a letter picture file from standard input, and wrote to stdout a corrupted version of the image with the specified percentage of the "pixels" randomly inverted.

Here's roughly the command to corrupt the letter E by ten percent and save the result (I don't recall the program's actual name):

Code: Select all

noise 10 < e.txt > e_bad.txt
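If I were recreating that noise filter today, it would be something like this little Python script; the asterisk/space format is as I described, but the details are from a thirty-five-year-old memory, so treat it as a sketch:

Code: Select all

#!/usr/bin/env python3
# From-memory sketch of the old "noise" filter: read an asterisk/space
# letter picture on stdin, invert the given percentage of pixels at
# random, and write the corrupted picture to stdout.
import random
import sys

def corrupt(text, percent):
    chars = list(text)
    # Treat asterisks and spaces as pixels; leave newlines alone.
    pixels = [i for i, c in enumerate(chars) if c in ("*", " ")]
    for i in random.sample(pixels, int(len(pixels) * percent / 100)):
        chars[i] = " " if chars[i] == "*" else "*"
    return "".join(chars)

if __name__ == "__main__":
    level = int(sys.argv[1])                  # e.g. 10 for ten percent
    sys.stdout.write(corrupt(sys.stdin.read(), level))

Invoked the same way: python3 noise.py 10 < e.txt > e_bad.txt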
Then he would feed a corrupted image into the trained software, and it would spit out what it thought the image was. The images were low resolution (fast), but it ran on a 1980s-era multi-user computer (slow); in all it took less than ten seconds to recognize a letter. It always worked at low levels of corruption, and usually worked even at levels approaching 50%. But something astounding happened when the corruption exceeded 50%: it emitted a letter image that it had never been trained on, a "photo-negative" of the original with asterisks and spaces swapped. That surprised us all. We had expected it to trigger the low-confidence warning, but instead it became more confident, and other than the inversion it was correct.
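Looking back, that's exactly what you'd expect from a Hopfield-style associative memory, which is what most mid-80s neural nets were: the network's energy is unchanged if you flip every unit at once, so the photo-negative of each trained pattern is just as stable an attractor as the pattern itself, and an input with more than half its pixels inverted is closer to the negative than to the original. I can't swear his net was exactly this, but a toy version reproduces the behaviour:

Code: Select all

# Toy Hopfield network (a guess at what his 1986 net was doing).
# Stored patterns AND their negatives are attractors, so >50% pixel
# corruption makes the net settle on the "photo-negative".
import numpy as np

def train(patterns):
    # Hebbian learning: sum of outer products, zero diagonal.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, state, max_steps=20):
    # Synchronous updates until the state stops changing.
    for _ in range(max_steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

rng = np.random.default_rng(0)
letter = rng.choice([-1, 1], size=100)      # stands in for a letter bitmap
W = train(letter[None, :])

for noise in (0.10, 0.45, 0.60):            # fraction of pixels inverted
    corrupted = letter.copy()
    flipped = rng.choice(100, size=int(noise * 100), replace=False)
    corrupted[flipped] *= -1
    result = recall(W, corrupted)
    label = ("original" if np.array_equal(result, letter)
             else "negative" if np.array_equal(result, -letter)
             else "neither")
    print(f"{noise:.0%} noise -> settled on the {label}")

The 10% and 45% runs settle on the original; the 60% run settles on the negative, exactly like his photo-negative letters.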


The second thing was from an episode of the YouTube channel Smarter Every Day, where Destin (the host) was learning how to pilot a helicopter. His trainer, and several other pilots he spoke to, said that you struggle to keep the machine obeying your intent until, at some moment, it just "clicks": making your hands and feet work the controls correctly stops being a top-level executive function and becomes as natural as walking. It happened to him on-camera.


Until machines can experience that "click", I don't think they are artificially intelligent; I think they are just executing pattern-recognition code, and can produce unexpected and possibly awkward or dangerous output. You don't want a surgery bot suddenly switching sides of your body while it has a knife in its grippers.
People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
Jeff Georgeson
Site Admin
Posts: 436
Joined: Wed Nov 23, 2005 12:40 am

Re: The Ghost in the Machine

Post by Jeff Georgeson »

That letter-recognition programme seems so far ahead of its time! And it also demonstrates issues we still have today.

That's interesting about the 'click', where we suddenly move something from, in effect, training mode into 'learned it' mode. I guess we sort of manually create that 'click' in algorithms when we ship the final product, as most AI systems that have been trained in some way have that training ability removed (as though they've 'learned it') so that the end user can't mess something up. (My own engine is made to keep learning and changing, even though it isn't using neural nets, but I had to include a way to shut the learning/changing down so that games programmers could force NPCs to do certain things without fear that the NPC would have 'learned' that the player was, say, a deceitful jerk and refuse to give out vital info.)
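Conceptually, the shut-off switch is just a flag that gates the update step. A toy version (invented names, nothing like the actual engine internals):

Code: Select all

# Hypothetical sketch of an NPC whose learning can be frozen at ship time.
# All names are invented for illustration; this is not engine code.
class NPC:
    def __init__(self, name, disposition=0.0):
        self.name = name
        self.disposition = disposition   # -1.0 (hates player) .. 1.0 (trusts)
        self.learning_enabled = True     # the 'shipped product' switch

    def observe(self, player_action_score):
        # Disposition updates only while learning is on; once frozen,
        # the NPC behaves as though it has 'learned it' and stays put.
        if self.learning_enabled:
            self.disposition = max(-1.0, min(1.0,
                self.disposition + 0.2 * player_action_score))

    def give_quest_info(self):
        if self.disposition < -0.5:
            return f"{self.name} glares at you and says nothing."
        return f"{self.name} tells you where the key is hidden."

guard = NPC("Gate Guard")
guard.observe(-1.0)              # player lies to the guard once
guard.learning_enabled = False   # designer freezes learning for a scripted scene
guard.observe(-1.0)              # further lies no longer register
print(guard.give_quest_info())   # the guard still hands over the info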
Jeff Georgeson
Quantum Tiger Games
www.quantumtigergames.com
the cat is out of the box
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Re: The Ghost in the Machine

Post by Freitag »

Jeff Georgeson wrote: Thu Sep 23, 2021 1:07 pm ... but I had to include a way to shut the learning/changing down so that games programmers could force NPCs to do certain things without fear that the NPC would have 'learned' that the player was, say, a deceitful jerk and refuse to give out vital info.)
Having one NPC that can learn, and that isn't the only source of some quest info, could be a fun addition to a game. It could add replayability through conflicting achievements: you get achievement one by getting the info from the NPC, and achievement two by getting them to hate you enough to withhold it.
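Reusing the toy NPC class from Jeff's sketch above (still all invented names), the two achievements would hang off the same disposition check:

Code: Select all

# Continues the hypothetical NPC sketch above: two mutually exclusive
# achievements keyed to whether the guard trusts or hates the player.
def check_achievements(npc):
    if npc.disposition < -0.5:
        return "Achievement unlocked: Persona Non Grata (info withheld)"
    return "Achievement unlocked: Silver Tongue (info obtained)"

surly_guard = NPC("Gate Guard")
for _ in range(4):
    surly_guard.observe(-1.0)         # player repeatedly antagonises the guard
print(surly_guard.give_quest_info())  # the guard refuses to talk
print(check_achievements(surly_guard))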
People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.