Real tech is trying to catch up to fictional tech

Discuss the technology of any incarnation of Ghost in the Shell
User avatar
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Post by Freitag »

Memory manipulation is back in the news


http://www.bbc.com/future/story/2016100 ... in-therapy
Would it be ethical to implant false memories in therapy?

We can implant false memories with increasing ease – and it may well help you to live a healthier, happier life. But what are the ethics?


By Robert Nash

6 October 2016

Take a moment to remember an event that you experienced as a child. Pick something that’s important to you – an event that really shaped for the better the person you are today. Now ask yourself: are you sure this event truly happened?

Suppose, for example, that some well-intentioned person could have deliberately planted a vivid false memory of this fictional event in your consciousness, believing that the memory would change you in ways that would benefit your life. How would you feel to discover that this was the case? Perhaps you’d be touched that someone cared so much about your wellbeing that they would give you such a personal and life-changing ‘gift’? Or maybe outraged, that this person had brainwashed you without your consent?

The scenario sounds like a plot from a science fiction novel, but it’s not necessarily as implausible – at least in principle – as it might seem. For a start, memory researchers have known for decades that our recollections of the past are often inaccurate, and that sometimes we remember entire events that never happened at all. These false memories can occur spontaneously, but they are especially likely to occur when someone plants the seed of a false suggestion in our mind, a seed that grows into a more and more detailed recollection each time we think about it.

Importantly, just like memories of events that truly happened, we know that even false memories can influence how we behave. In one experiment that demonstrates this point, a group of research participants were told that their responses to numerous questionnaires had been fed into a clever computer algorithm, which could predict the likelihood of various childhood experiences. Apparently based on their results, the participants were falsely informed that during their childhoods they became sick from eating spoiled peach yoghurt. A second group of adults did not hear this false suggestion.

Two weeks later, both groups completed a taste test, sampling various foods as part of what seemed to be an unrelated study. The researchers found that both groups ate similar amounts of most of the foods, yet those people who had received the false suggestion ate about 25% less peach yoghurt than the others. The avoidance of peach yoghurt was most pronounced among those people who now said they could ‘remember’ the fictional sickly incident. In short, it isn’t too far-fetched in principle that somebody could deliberately give you a false memory, nor that the right kinds of false memory could have positive effects on your life. Inspired by several studies like the peach yoghurt experiment, some commentators have even imagined taking the idea one step further by inventing the “False Memory Diet.”

Could planting ‘beneficial’ false memories be the next big thing for tackling obesity, or myriad other health complaints from fear of the dentist to depression? Even if such an intervention is scientifically plausible, there still remains the fundamental question of whether it could ever be ethically justifiable.

Certainly, it would be naïve to say that nobody would ever try it. In fact, even looking back several decades, we can find documented cases in which therapists claimed to have tackled their clients’ psychological troubles by manipulating their memories. Asking ourselves whether this kind of intervention is justifiable, then, is important: not only because we can conceive of a future in which false-memory interventions are on the menu, but also because in at least some rare cases, practitioners have been ordering from that menu for years.

In new research funded by the Wellcome Trust, and published in the journal Applied Cognitive Psychology, we described a fictional ‘false memory therapy’ to almost one thousand members of the public from the UK and the USA. These participants were asked to imagine the case of an obese client seeking professional support for weight loss. Without this client’s knowledge, the therapist would attempt to plant false childhood events in the client’s memory – events designed to change the client’s unhealthy relationship with fatty foods. The therapist, however, would only reveal their deception many months after the therapy was complete. Our question for participants was: Would this fictional therapy be acceptable?


Remarkably, there was very little consensus on the answer. In fact, whereas 41% of respondents said it would generally be unacceptable for a therapist to treat them in this way if they were obese, 48% said that it would be acceptable. And whereas just over a quarter of people said that the therapy would be completely unethical, one in ten believed it completely ethical. Many people, it appears, are quite open in principle to the idea of deliberately manipulating memories, if doing so could benefit the patient.

These are striking findings, but they mirror those from a 2011 study examining people’s attitudes to so-called “memory dampening” drugs. In that study, just over half of people said that if they were the victim of a major trauma, they would want the option of receiving a drug that would weaken their traumatic memory. And in a separate Pew poll published this July, 23% of American adults said it would be morally acceptable to surgically implant devices in healthy people’s brains to improve their cognitive abilities. Incidentally, rather more of the respondents, 34% in total, said they would want such a device in their own brains.

So why are so many of us repulsed by the thought of having beneficial false memories planted in our minds, while so many others are positively enthusiastic about the prospect? To delve deeper, we asked 200 of our participants to elaborate on their reactions to the fictional ‘false memory therapy.’ For those who found the therapy appealing, the lure of helping people to improve their health was far more important than any other qualms they might have. Some even wished they could receive such a treatment themselves, or provide it for their loved ones. For many of these people, the potential drawbacks of ‘false memory therapy’ seemed no worse than certain existing health interventions. One American man wrote:

I do not see it as a problem... After all, many medical treatments involve taking drugs or having surgical operations. These involve putting real things into the body. Sometimes they do not turn out beneficial and may even result in more harm than good. So, just putting false thoughts into someone's thoughts (sic) does not seem nearly as invasive or potentially harmful.


In contrast, many people found the fictional therapy unappealing and sinister, to say the least. Their reasons were more varied. Some were principally troubled by the mechanics of the therapy, pointing out that the notion of health professionals lying to their patients is hugely unethical. Others foresaw “mission creep”, with the intervention eventually being used for nefarious purposes. As one British woman wrote:

Far too dangerous. The first application I can see would be to persuade gay people they "ought" to be heterosexual. How long before the ruling party used it to "cure" people who voted for the opposition? That may seem far-fetched now but it may not if they actually had the power.

But for many people, the most unsettling idea was that planting false memories would rob us of our free will and authenticity – our personalities would no longer be genuine, our life decisions no longer truly ours. That’s no doubt a perspective with which we all can empathise, one that exposes both the intimacy of our relationship with memory, and the great value we place upon being able to trust it. After all, even those of us who study other people’s memory errors can still find ourselves hopelessly addicted to the misbelief that our own memories can be trusted just fine.

Somehow, I can’t foresee us ever truly endorsing the planting of false memories for widespread therapeutic use, but who knows what the future may hold? If memory-modifying treatments are possible, and if a substantial chunk of the population find the idea of planting memories strongly appealing, then we may need to ask ourselves important questions about the kind of relationship we wish to have with our memories.

Even if the day never arrives when your family doctor can prescribe a course of false memories, reflecting on this ethical minefield may remind us that recollections are among our most precious assets. Maybe false memories can be just as precious.
People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
User avatar
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Post by Freitag »

I'm reminded of Motoko's bragging about her skin sensitivity. I can't find a link to an image of it in the manga to put in here, though. This article suggests that in addition to the perverted uses of more sensitive skin, there are actual technical advantages to it too.


http://spectrum.ieee.org/automaton/robo ... e-touching
Usually, if your robot is warm to the touch, it’s symptomatic of some sort of horrific failure of its cooling system. Robots aren’t supposed to be warm—they’re supposed to be steely and cold. Or at least, steely and ambient temperature. Heat is almost always a byproduct that needs to be somehow accounted for and dealt with. Humans and many other non-reptiles expend a lot of energy keeping at a near-constant temperature, and as it turns out, being warmish all the time provides a lot of fringe benefits, including the ability to gather useful information about things that we touch. Now robots can have this ability, too.

Most of the touch sensors used by robots are force detectors. They can tell how hard a surface is, and sometimes what kind of texture it has. You can also add some temperature sensors into the mix to tell you whether the surface is warm or cold. However, most of the time, objects around you aren’t warm or cold, they’re ambient—whatever the temperature is around them is the temperature they are.
Georgia Tech’s tactile robot skin uses an array of “taxels,” which the researchers built by layering piezoresistive fabric, thermistors, and a heating strip. They say the combination of force and thermal sensing works significantly better than force sensing alone.

When we humans touch ambient temperature things, we often experience them as feeling slightly warmer or colder than they really are. There are two reasons for this: The first reason is that we’re toasty warm, so we’re feeling the difference in temperature between our skin and the thing. The second reason is that we’re also feeling how much the thing is sucking up our toasty warmness. In other words, we’re measuring how quickly the thing is absorbing our body heat, and in even more other words, we’re measuring its thermal conductivity. Try it: Something metal will feel cooler to you than something fabric or wood, even if they’re both the same temperature, because the metal is more thermally conductive and is sucking the heat out of you faster. The upshot of this is that we have the ability to gather additional data about materials that we touch because our fingers are warm.
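The "metal feels colder than wood" effect described above can be checked numerically. When two bodies touch, the interface temperature is weighted by each side's thermal effusivity e = √(k·ρ·c). The material constants below are textbook-style illustrative values, not figures from the article:

```python
import math

def effusivity(k, rho, c):
    """Thermal effusivity e = sqrt(k * rho * c)."""
    return math.sqrt(k * rho * c)

def contact_temp(t1, e1, t2, e2):
    """Interface temperature of two semi-infinite bodies pressed together."""
    return (e1 * t1 + e2 * t2) / (e1 + e2)

# Rough constants (k in W/m·K, rho in kg/m³, c in J/kg·K) — illustrative only.
skin = effusivity(0.37, 1000, 3500)
aluminum = effusivity(237, 2700, 900)
oak = effusivity(0.16, 700, 2000)

# A 37 °C finger touching 20 °C objects:
print(contact_temp(37, skin, 20, aluminum))  # ~20.8 °C — skin surface plunges, feels cold
print(contact_temp(37, skin, 20, oak))       # ~32.0 °C — stays near body temp, feels warm
```

Both objects are at the same 20 °C, but the aluminum's high effusivity drags the skin surface almost all the way down to the object's temperature, which is exactly the signal a heated taxel can exploit.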

Joshua Wade, Tapomayukh Bhattacharjee, and Professor Charlie Kemp from Georgia Tech presented a paper at an IROS workshop last month introducing a new kind of robotic skin that incorporates active heating. When combined with traditional force sensing, the active heating results in a multimodal touch sensor that helps to identify the composition of objects.
Image: Georgia Tech
Georgia Tech's multimodal fabric-based tactile sensing skin prototype.

Okay, so it’s not much to look at, but the combination of force and active thermal sensing works significantly better than force sensing alone. The fabric is made of an array of “taxels,” each of which consists of resistive fabric sandwiched between two layers of conductive fabric, two passive thermistors, and two active thermistors placed on top of a carbon fiber resistive heating strip. Using all three of these sensing modalities to validate each other, the researchers were able to identify wood and aluminum by touch up to 96 percent of the time while pressing on it, or 84 percent of the time with a sliding touch.
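As a toy illustration of why combining force and active-thermal readings separates materials better than force alone, here is a nearest-centroid sketch. Every feature name and number below is invented for illustration; the paper's actual features, data, and classifier are not reproduced here:

```python
import math

# Hypothetical taxel signatures: (stiffness proxy from the piezoresistive
# fabric, heat-loss rate in °C/s measured at the actively heated thermistor).
# Invented values — wood and aluminum feel similarly stiff, but aluminum
# pulls heat out of the warmed taxel far faster.
centroids = {
    "aluminum": (0.9, 1.8),
    "wood":     (0.7, 0.3),
}

def classify(sample):
    """Guess the touched material by nearest centroid in feature space."""
    return min(centroids, key=lambda m: math.dist(sample, centroids[m]))

print(classify((0.85, 1.6)))  # → aluminum
print(classify((0.72, 0.35)))  # → wood
```

The point of the sketch is that the two materials are nearly indistinguishable along the force axis alone; the thermal axis is what pulls the clusters apart.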

We should mention that this isn’t the first active thermal sensor—the BioTac sensor from SynTouch also incorporates a heater, although it’s only a fingertip, as opposed to the whole-arm fabric-based tactile skin that Georgia Tech is working on.

Tapo Bhattacharjee told us that there are plenty of different potential applications for a sensor like this. “A robot could use this skin for manipulation in cluttered or human environments. Knowing the haptic properties of the objects that a robot touches could help in devising intelligent manipulation strategies, [for example] a robot could push a soft object more than say a hard object. Or, if the robot knows it is touching a human, it can be more conservative in terms of applied forces.”
“Force and Thermal Sensing With a Fabric-Based Skin,” by Joshua Wade, Tapomayukh Bhattacharjee, and Charles C. Kemp from Georgia Tech, was presented at the Workshop on Multimodal Sensor-Based Robot Control for HRI and Soft Manipulation at IROS 2016 in Seoul, South Korea.
People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
User avatar
GhostLine
Posts: 638
Joined: Mon Dec 19, 2005 10:34 pm
Location: "the net is vast and infinite..."

Post by GhostLine »

User avatar
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Post by Freitag »

That's some cool stuff. Beats the heck out of a pirate's wooden leg. A similar wrap-up of tech in ten years ought to be pretty cool. All these semi- and fully autonomous robots have been practicing walking, and that data ought to make its way back into the prosthetic research groups by then.
People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
User avatar
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Post by Freitag »

Electric muscles are here. Video in the link.

http://spectrum.ieee.org/video/robotics ... y-swimming
Robot Ray Swims Using High-Voltage Artificial Muscles
This transparent, soft robot fish propels itself by flapping fins made of dielectric elastomers
By Evan Ackerman and Celia Gorman
Posted 16 Apr 2017 | 14:00 GMT

This robotic ray, developed at Zhejiang University in Hangzhou, China, is propelled by soft flapping wings made of dielectric elastomers, which bend when electricity is applied to them. Dielectric elastomers respond very quickly with relatively large motions, but they require very high voltages (on the order of 10 kilovolts) to get them to work. Traditionally, dielectric elastomers are covered in insulation, but for this aquatic application the researchers instead just submerged everything insulation free, relying on the water to act as both electrode and electric ground.

There are several other reasons why this design is notable. First, it’s almost entirely transparent, with the body, fins, tail, and elastomer muscles being completely see-through. The effect is slightly spoiled when you add the electronics and batteries required for untethered operation, but the fact that it can be self-contained at all is notable as well: A 450-mAh, 3.7-V battery will keep it swimming along at 1.1 centimeters per second for a solid 3 hours and 15 minutes, and it can even carry a tiny camera. Maximum untethered speed is 6.4 cm/s, and the robot fish will happily swim around in water temperatures ranging from slightly above freezing to nearly 75° C.

The overall efficiency of this robot is comparable to a rainbow trout, in the sense that a 25-cm long trout expends about 0.03 watt to move at 10 cm/s. A real trout can move much faster and more dynamically, of course, but for robots, hitting that biological level of efficiency is much more significant. The researchers aren’t yet ready to suggest any specific applications for the robot, so it’s probably better to simply look at it as proof that these technologies work, leaving a practical robot for the next generation.

I like the electric muscle thing. Might be good for artificial arms/legs one day. I'm a little perplexed by the high-voltage claims, though. They say it needs high voltage, then they give an example that is certainly high (10,000 volts), and then they describe the battery as being 3.7 volts and 450 mAh (wussy by present-day cell phone standards). Assuming that both sets of info are correct, I guess they've omitted a step-up transformer from the description?
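The article does skip the power electronics, but the numbers it gives are easy to sanity-check: a high-voltage boost stage (which is how dielectric-elastomer drivers are typically built, though the article doesn't say) would need roughly a 2700× step-up, and the stated battery and runtime imply only about half a watt of average draw:

```python
# Back-of-the-envelope check on the robot-fish figures from the article.
battery_wh = 0.450 * 3.7        # 450 mAh at 3.7 V ≈ 1.67 Wh of stored energy
runtime_h = 3.25                # 3 hours 15 minutes of swimming
avg_power_w = battery_wh / runtime_h
print(f"average draw ≈ {avg_power_w:.2f} W")   # ≈ 0.51 W

# Dielectric elastomers need ~10 kV, so the drive electronics must
# multiply the 3.7 V battery voltage by about:
step_up = 10_000 / 3.7
print(f"step-up ratio ≈ {step_up:.0f}x")       # ≈ 2703x
```

High voltage at sub-watt power is not exotic: the currents involved are tiny (tens of microamps), which is why a battery this small can do the job at all.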
People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
User avatar
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Post by Freitag »

Neuralink.

https://www.rt.com/viral/385780-neurali ... ace-urban/

Wow, did they ever use an uncanny valley photo of Elon in that article. That's a creeper-looking eyeroll if I ever saw one.
Elon Musk’s Neuralink could represent next stage of human evolution
Published time: 23 Apr, 2017 13:26
Elon Musk's Neuralink will merge the human brain with AI. © Rashid Umar Abbasi / Reuters
More details on Elon Musk’s futuristic Neuralink venture have been revealed in an illustrated blog post, including how the advanced technology could see us communicate wirelessly just by thinking within 10 years.

It's finally here: the full story on Neuralink. I knew the future would be nuts but this is a whole other level. https://twitter.com/waitbutwhy/status/8 ... 04/photo/1
— Tim Urban (@waitbutwhy) April 20, 2017
<<<NOTE: original link was broken - I fixed when pasting article>>>

Wait But Why published the 36,400-word deep dive into Neuralink using stick figure illustrations to explain in simple terms what Musk has in mind for, well, our minds.

The blog post comes one month after Musk announced Neuralink, his latest plan to merge the human brain with AI using brain implants.

Long Neuralink piece coming out on @waitbutwhy in about a week. Difficult to dedicate the time, but existential risk is too high not to.
— Elon Musk (@elonmusk) March 28, 2017

The post, however, was written by blogger and cartoonist Tim Urban after studying Neuralink for six weeks, a process which involved meeting with Musk. Urban was secretly working on his Neuralink breakdown weeks before the billionaire inventor made the official announcement.

Eager fans have been waiting to hear more about the venture ever since Musk promised a long piece on the subject the day of the announcement.

@waitbutwhy @hstaudmyer I have probably checked your website about 100 times to see if the neuralink post has been complete!
— Tolga Kana (@tolgakana) April 19, 2017

@waitbutwhy @hstaudmyer Is it done? Is it done? Is it done?
— Shankar Narayanan (@archi_alchemist) April 17, 2017

Urban said he’s “convinced that Neuralink somehow manages to eclipse Tesla and SpaceX in both the boldness of its engineering undertaking and the grandeur of its mission.”

Urban has written other detailed breakdowns on Musk’s projects for Wait But Why, including Tesla and SpaceX.

READ MORE: New Elon Musk venture aims to connect human brain with AI

Neuralink’s “whole brain interface” will use tiny brain electrodes that will eventually allow us to communicate wirelessly with the world, and to share our exact thoughts and visions without having to use spoken or written language. The brain could also learn faster and have access to all the world’s knowledge.

“I think we are about eight to ten years away from this being usable by people with no disability,” Musk said in an interview with Urban.


According to Musk, who has taken on the role as Neuralink’s CEO, today’s existing technology already makes us “digitally superhuman.”

“The thing that would change is the interface,” Musk explained. “Having a high-bandwidth interface to your digital enhancements.”

Musk’s plan is to first create cutting-edge brain machine interfaces (BMIs) to be used to help people with brain injuries, building on existing technology used in medicine.


“We are aiming to bring something to market that helps with certain severe brain injuries (stroke, cancer lesion, congenital) in about four years,” Musk told Urban.

This will help fund the company to make additional breakthroughs in implantation and increasing the bandwidth needed for the technology, which will in turn create industry-wide innovation.

It will eventually lead to mass adoption of the “whole brain interface,” which will allow us to communicate wirelessly with the world in a way that feels as natural as thinking does, Urban explains.


Musk met with over 1,000 people to find the right cross-disciplinary team of experts who each bring their own unique knowledge and expertise to create a group that can think as a single “mega-expert.”

The SpaceX CEO describes the whole-brain interface as a “digital tertiary layer” for our brains, and says his vision isn’t so far from where we are now.

“We already have a digital tertiary layer in a sense,” Musk said, “In that you have your computer or your phone or your applications.”

“You can ask a question via Google and get an answer instantly. You can access any book or any music. With a spreadsheet, you can do incredible calculations.”

“You’re already a different creature than you would have been twenty years ago, or even ten years ago,” he said, adding that people are already “kind of merged with their phone and their laptop and their applications and everything.”


Musk’s plan faces a number of challenges before it can become widespread, including having enough ‘bandwidth’ to process the advanced technology and finding non-invasive and biocompatible methods to implant BMIs so they do not feel like a foreign object in our bodies.

The ten-year timeline also depends on a number of factors. “It is important to note that this depends heavily on regulatory approval timing and how well our devices work on people with disabilities.” Musk said.

Urban highlighted a number of risks associated with mass adoption of the whole brain interface, including computer-like bugs and brain hacking.

As for fears that having such technology will allow people to see your thoughts, Musk explained, “People won’t be able to read your thoughts—you would have to will it. If you don’t will it, it doesn’t happen.”
People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
User avatar
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Musculoskeletal Robot Driven by Multifilament Muscles

Post by Freitag »

It's a long way from a shelling sequence, but it's a start

Musculoskeletal Robot Driven by Multifilament Muscles

https://www.youtube.com/watch?v=0ZBD2tcKOU4
People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
User avatar
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Post by Freitag »

Artificial iris responds to light like real eyes

https://www.engadget.com/2017/06/25/art ... real-eyes/
The human iris does its job of adjusting your pupil size to meter the amount of light hitting the retina behind without you having to actively think about it. And while a camera's aperture is designed to work the same way as a biological iris, it's anything but automatic. Even point-and-shoots rely on complicated control mechanisms to keep your shots from becoming overexposed. But a new "artificial iris" developed at Tampere University of Technology in Finland can autonomously adjust itself based on how bright the scene is.

Scientists from the Smart Photonic Materials research group developed the iris using a light-sensitive liquid crystal elastomer. The team also employed photoalignment techniques, which accurately position the liquid crystal molecules in a predetermined direction within a tolerance of a few picometers. This is similar to the techniques originally used in LCD TVs to improve viewing angle and contrast, which have since been adapted for smartphone screens. "The artificial iris looks a little bit like a contact lens," TUT Associate Professor Arri Priimägi said. "Its center opens and closes according to the amount of light that hits it."

The team hopes to eventually develop this technology into an implantable biomedical device. However, before that can happen, the TUT researchers need to first improve the iris' sensitivity so that it can adapt to smaller changes in brightness. They also need to get it to work in an aqueous environment. This new iris is therefore still a long ways away from being ready so we'll just have to keep shoving mechanical cameras into our eye sockets until then.
People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
User avatar
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

‘The Third Thumb’ for all the times two just aren’t enough

Post by Freitag »

https://www.engadget.com/2017/07/06/thi ... nt-enough/
‘The Third Thumb’ is for all the times two just aren’t enough
The 3D-printed digit straps onto your hand for extra gripping power.

How many times have you wished for a third hand while trying to carry too many things? Well, you can't have that yet because it's not a thing (at least not an available thing), but maybe you can get yourself another thumb, which is almost as good. Dani Clode, a graduate student at the Royal College of Art in London, created The Third Thumb, a 3D-printed prosthetic that straps onto your hand.

The thumb is motorized and connected by cables to a bracelet. Pressure sensors underneath the wearer's feet connect to the thumb's motors via Bluetooth. So, working the extra digit just requires you to press down with your foot. Clode said that she linked the thumb to foot controls because with actions like driving, using a sewing machine or playing piano, we already have practice completing tasks that require hands and feet to work together.

Clode says the project is meant to explore how we can add capabilities to our bodies with prosthetics. "The origin of the word 'prosthesis' meant 'to add, put onto', so not to fix or replace, but to extend," Clode said to Dezeen, "The Third Thumb is inspired by this word origin, exploring human augmentation and aiming to reframe prosthetics as extensions of the body."

In the video of The Third Thumb, which is just a prototype, people use the extra digit while playing cards, carrying wine glasses, cracking eggs and even playing guitar. Overall, extra appendages feel like a move towards Orphan Black's "Neolution", but The Third Thumb seems much less permanent and way less creepy.
https://player.vimeo.com/video/220291411
People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
User avatar
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Post by Freitag »

I sort of knew this was possible, but had considered it so improbable that actually seeing it is astounding.

Things are opaque, translucent, or transparent because of how they interact with photons that bang into the electron (probability) cloud around an atom.

At the atomic scale things are really really far apart. That photons actually run into things is pretty amazing.

Putting this all together, you get that cool effect when you stick a small flashlight in your nostril or mouth (at night) and your whole face lights up. You're seeing the photons that were not totally absorbed.

Now imagine a camera that can capture those photons as they come out and tell which ones bounced around and which ones came out directly (I'd presume by the changes in polarization, and because you can add timestamp data to photons by emitting them in timed, pulsed streams). The camera would collect the photons over time (think of a long exposure) and then render an image of what it thinks it sees.
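That time-of-flight idea can be sketched directly: given the depth of the light source, the straight-line flight time sets a cutoff, and anything arriving much later must have scattered. A minimal gating sketch, where the depth, arrival times, and window are all made-up numbers, and which ignores that tissue's refractive index (~1.4) actually slows the light:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_photons(arrivals_ns, source_depth_m, window_ns=0.05):
    """Split photon arrival times (in ns) into 'ballistic' (came straight
    through) and 'scattered' (bounced around first), using the straight-line
    flight time from the source plus a small acceptance window."""
    cutoff_ns = source_depth_m / C * 1e9 + window_ns
    ballistic = [t for t in arrivals_ns if t <= cutoff_ns]
    scattered = [t for t in arrivals_ns if t > cutoff_ns]
    return ballistic, scattered

# 20 cm of tissue: straight-line flight time is about 0.667 ns, so photons
# arriving a nanosecond or more later clearly took the scenic route.
direct, bounced = gate_photons([0.67, 0.70, 1.4, 2.9], source_depth_m=0.20)
```

Real single-photon cameras do something far more sophisticated with the full arrival-time histogram, but the core discrimination is this same earliest-arrivals-are-ballistic cut.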

https://www.sciencealert.com/scientists ... human-body

We've already had cameras that can "see" around corners for several years using similar techniques, but this is just super cool.

I expect to see a variation of this in security checkpoints in the future. It's already in science fiction (Total Recall [1990], Ultraviolet, Ghost in the Shell: Arise) and may soon come to an airport near you.
A Completely New Type of Camera Can Actually See Through The Human Body

Way better than X-ray!
PETER DOCKRILL
5 SEP 2017

Medical techniques for looking inside our bodies have come a long way, but in the future it looks like doctors may be able to see absolutely everything going on under our skin.

Researchers have invented a new kind of camera that can actually see through structures inside the human body, detecting light sources behind as much as 20 centimetres (7.9 inches) of bodily tissue.

The current prototype, developed by researchers from the University of Edinburgh in the UK, is designed to work in conjunction with endoscopes – long, slender instruments that are often equipped with cameras, sensors and lights to peer inside hollow cavities inside the human body.

Endoscopes are valuable tools for all sorts of medical procedures, but up until now it's been difficult to externally confirm exactly where in the body the instrument is looking, without resorting to things like X-ray scans.

Image: University of Edinburgh

Now that's no longer a problem, due to the new camera's capability to detect sources of light inside the body, such as the illuminated tip of the endoscope's long flexible tube.

Thanks to thousands of integrated photon detectors inside the camera, the device can detect individual particles of light being beamed through human tissue.

When photons come into contact with bodily structures, light usually scatters or bounces off the tissue, but the camera's sensitivity enables it to pick up any tiny traces of light that make it through.

By reconciling light signals that come directly to the camera with scattered photons – which travel longer distances and so take longer to reach it – the device is able to determine where the light-emitting endoscope is placed inside the body.

This technique, which differentiates between scattered and ballistic (direct) photons, is called ballistic imaging, and it could help physicians pinpoint the exact location of the bodily interior they're looking at with the endoscope – which may be hugely valuable in terms of determining treatments.
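The gating idea described above can be sketched in a few lines: because ballistic photons travel the direct path, they arrive earlier than scattered photons, which wander through the tissue first. This is a toy illustration only – the time values, gate threshold, and function names are assumptions, not details from the actual Edinburgh device.

```python
# Toy sketch of time-gated "ballistic imaging": ballistic photons take the
# direct path and arrive early; scattered photons take longer routes and
# arrive late. All numbers here are illustrative, not from the real camera.

def classify_photons(arrival_times_ns, gate_ns):
    """Split photon arrival times into ballistic (early) and scattered (late)."""
    ballistic = [t for t in arrival_times_ns if t <= gate_ns]
    scattered = [t for t in arrival_times_ns if t > gate_ns]
    return ballistic, scattered

# Example: photon arrivals (nanoseconds after the endoscope's light pulse).
# The earliest arrivals trace the direct line of sight to the endoscope tip.
times = [0.9, 1.0, 1.1, 2.4, 3.7, 5.2, 6.8]
ballistic, scattered = classify_photons(times, gate_ns=1.5)
```

In the real device the "gate" is effectively set by the timing resolution of the photon detectors; the principle of separating early from late arrivals is the same.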

(Image: University of Edinburgh)

In the image above, you can see an example of the light the camera detects from an optical endomicroscope in use in sheep lungs.

The image on the left is what the prototype sees, with the ballistic imaging revealing the precise location of the instrument in the lungs.

On the right, the shot reveals what the scene looks like to a conventional camera, with the sensor picking up lots of noise in terms of scattered light, but unable to determine where the photons are originating, as the light particles bounce around the lung structures.

"This is an enabling technology that allows us to see through the human body," says senior researcher Kev Dhaliwal.

"The ability to see a device's location is crucial for many applications in healthcare, as we move forwards with minimally invasive approaches to treating disease."

Dhaliwal is the chief investigator of a collaborative, multi-institutional project called Proteus, which is researching a range of new imaging technologies to help visualise previously unseen biological secrets, with a focus on lung and respiratory diseases.

In this case, the researchers say the improved vision provided by the camera will enable doctors to visualise both the tip and length of the endoscope they're using, and the resolution of the imagery is expected to be refined in the future.

There's no word yet on when we can expect to see this camera used in clinical treatments, but it's a promising development in imaging and diagnostic technologies.

The findings are reported in Biomedical Optics Express.
People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
User avatar
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Post by Freitag »

Micromachine therapy is real!


https://www.engadget.com/2018/02/12/res ... mors-mice/
Researchers use nanorobots to kill tumors in mice
The bots cut off the tumors’ blood supply.



Our current methods of fighting malignant tumors are wildly inadequate. Chemotherapy and radiation treatments, while sometimes successful, come with massive side effects, mainly because every other cell in the body is also getting bombarded with chemicals and radiation even though the main targets are the tumor cells. Finding a way to specifically target tumor cells while leaving healthy cells alone is something that many researchers are working toward, and a new study out today demonstrates that nanorobots made out of DNA could be an effective option.

The research team took DNA from a virus and turned it into a sort of DNA sheet. That sheet was then loaded with an enzyme called thrombin -- a chemical that can clot blood -- and the sheet was then rolled into a tube, with the thrombin kept protected inside. To the ends of that DNA tube, the researchers attached small bits of DNA that specifically bind to a molecule found in tumor cells, and they served as a kind of guide for the DNA nanorobots. The idea is that once the nanorobots are introduced into an organism, they'll travel around and when those guiding bits of DNA come into contact with those tumor-associated molecules, they'll attach. Then, the DNA tube will open up, exposing the thrombin within. That thrombin will then clot the blood supply to the tumor, effectively cutting off its nutrients and ultimately killing it.
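The bind-then-open logic described above amounts to a simple state machine, which can be sketched as follows. This is a hedged toy model of the article's description only: the marker name and class design are my own illustrative assumptions, not part of the study.

```python
# Toy state machine for the article's DNA nanorobot: the rolled tube stays
# closed until its guide DNA binds a tumour-associated molecule, then it
# opens and exposes the thrombin payload. "tumour_marker" is a hypothetical
# label standing in for the actual target molecule.

class DNANanorobot:
    def __init__(self):
        self.open = False
        self.thrombin_exposed = False

    def encounter(self, surface_molecules):
        # Guide DNA binds only the tumour-associated molecule; binding
        # triggers the tube to unroll and expose the clotting enzyme.
        if "tumour_marker" in surface_molecules:
            self.open = True
            self.thrombin_exposed = True

bot = DNANanorobot()
bot.encounter({"healthy_cell_marker"})  # healthy tissue: tube stays closed
bot.encounter({"tumour_marker"})        # tumour vessel: opens, clots blood
```

The key safety property the researchers tested is exactly what this sketch encodes: without the tumour marker, the thrombin stays shielded inside the tube.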

To test their nanorobots, the researchers injected them into mice infected with human breast cancer cells and human ovarian cancer cells as well as mouse models of human melanoma and lung cancer. In each case, the nanorobots extended the life of the mice and slowed or reversed tumor growth. Further, in the case of the melanoma model, the nanorobots appeared to be able to prevent the spread of melanoma to the liver and with the lung cancer model, the lungs even showed an ability to begin repairing themselves once the tumor growth had slowed.

Of course, the ability to treat tumors would be moot if the nanorobots themselves posed a risk to people. But the team showed that the bots didn't clot blood outside of the tumors and they didn't trigger any significant immune responses in either mice or pigs.

While they're still experimental and haven't been tested in humans, these nanorobots show a lot of promise for treating cancer. "Our research shows that DNA-based nanocarriers have been shown to be an effective and safe cancer therapy," Guangjun Nie, one of the researchers on the project, said in a statement. "We are currently working with a biotech firm to translate this revolutionary technology into a viable anti-tumor therapeutic."

The research was published today in Nature Biotechnology.

People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
User avatar
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Post by Freitag »

I recall the conversation between the Tachikomas and the generic secretary AI, where they lead it astray and trap it in a thought process too complicated for its ability to process. They pass the Turing test; she does not.

https://www.engadget.com/2018/05/08/pre ... ring-test/

So that whole Turing test metric, wherein we gauge how human-like an AI system appears to be based on its ability to mimic our vocal affectations? At the 2018 I/O developers conference on Tuesday, Google utterly dismantled it. The company did so by having its AI-driven Assistant book a reservation. On the phone. With a live, unsuspecting human on the other end of the line. And it worked flawlessly.

During the on-stage demonstration, Google played calls to a number of businesses including a hair salon and a Chinese restaurant. At no point did either of the people on the other end of the line appear to suspect that the entity they were interacting with was a bot. And how could they when the Assistant would even throw in random "ums", "ahhs" and other verbal fillers people use when they're in the middle of a thought? According to the company, it's already generated hundreds of similar interactions over the course of the technology's development.

This robo-vocalization breakthrough comes as the result of Google's Duplex AI system, which itself grew out of earlier Deep Learning projects such as WaveNet. As with Google's other AI programs, like AlphaGo, Duplex is designed to perform a narrowly defined task but do it better than any human. In this case, that task is talking to people over the phone.

Duplex's initial application, according to the company, will be in automated customer service centers. So rather than repeatedly shouting "operator" into your handset to get competent help the next time you call your bank, cable provider or power company, the digital agent will be able to assist you directly. "For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine," the release read.

But don't expect to be able to ask the agent any random question that pops into your head. "Duplex can only carry out natural conversations after being deeply trained in such domains," a Google release points out, "It cannot carry out general conversations." Fingers crossed that the company trained Duplex to at least say "does not compute" in such situations.

This news comes as the customer service industry continues to lean on automation and robotics to reduce operating costs and streamline wait times. Everyone from DoNotPay and Kodak to Microsoft and Facebook is looking to leverage chatbots and other automated respondents to their advantage. It's not even that hard to simply code your own. However, like every technology before it, Duplex can only be as useful as we make it. And unfortunately, we've not had a particularly sterling record when it comes to training AIs to be upstanding members of online society. Yes, Tay, I'm looking right at you. And your racist chatbot buddy, Zo, too.

Of course, casual racism is only one of a number of ethical and societal challenges that such emerging technologies face. As always, there's the issue of privacy and data collection. "To obtain its high precision, we trained Duplex's RNN on a corpus of anonymized phone conversation data," Google's release explained. This AI wasn't taught how to talk like people just by conversing with its developers (we're still years away from that level of machine learning, unfortunately). No, it was trained on hundreds, if not thousands, of hours of recorded phone conversations skimmed from opted-in customer calls. Which brings us, yet again, back to the same debate we've been having for the better part of a decade between maintaining personal data privacy and advancing the boundaries of technological convenience.

Existential solutions to the tragedy of the public commons aside, the emergence of Duplex AI exposes a number of tangible issues that should probably be addressed before we start talking to machines like we do other people. For one thing, WaveNet is really, really good. WaveNet is Google's voice synthesizing program and unlike conventional text-to-speech engines, which have a voice actor basically read through the dictionary in order to populate a database of words and speech fragments, it relies on a smaller grouping of raw audio waveforms on which the computer's words and responses are built. This results in a system that is faster to train and which boasts a broader, more natural-sounding response range.

To illustrate, remember when TomToms were still a thing and had installable voices like James Earl Jones or Morgan Freeman to give you driving directions? Those were generated using the conventional "read a dictionary" method and were unable to pass for the real person given their stiff intonations. The new John Legend voice available for Google Assistant sounds far more like the man himself thanks to WaveNet's capabilities.

But what's to prevent someone from illicitly leveraging recordings of a public figure to train an AI and generate falsified audio? I'm not saying we're facing a "my name is my password" situation just yet, but given that people are already able to fake images and video (even porn), being able to incorporate a layer of false audio will only make Google and Facebook's content moderation jobs even harder. Especially since their omega-level objectionable-content censors are human -- exactly who the technology is designed to fool.

That said, the mere possibility of misuse should not be a deal-breaker for this technology (or any other). Duplex AI has the potential to revolutionize how we interact with computer systems, effectively making tactile inputs like keyboards, mice and touchpads obsolete. Just, next time a "family member" calls out of the blue asking for your social security number, maybe try to independently confirm their identity first by asking them a series of unrelated questions.
Another link to the same story. This one has an embedded Twitter post containing a video that lets you listen to part of the hair-appointment booking conversation.
https://futurism.com/google-assistant-b ... ut-duplex/

People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
User avatar
Jeff Georgeson
Site Admin
Posts: 436
Joined: Wed Nov 23, 2005 12:40 am

Post by Jeff Georgeson »

Yeah, I saw that news, too. I think part of it is the usual smoke and mirrors used to get a chatbot to sound a bit more real (the "shell" of the conversation, maybe?), but there's definitely something deeper to it. I'd love to have a little conversation with it to see how easily it gets tripped up. (And now I note that in the article you posted Google does say the AI's deeply trained in very specific domains, so they're definitely making only incremental steps ... but being Google, with all their resources, they could make big advances pretty quickly ... or drop the project as they've done with so many others lol.)
Jeff Georgeson
Quantum Tiger Games
www.quantumtigergames.com
the cat is out of the box
User avatar
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Micromachines need micro computers to control them

Post by Freitag »

https://techxplore.com/news/2018-06-world-smallest.html
IBM's announcement that they had produced the world's smallest computer back in March raised a few eyebrows at the University of Michigan, home of the previous champion of tiny computing.

Now, the Michigan team has gone even smaller, with a device that measures just 0.3 mm to a side—dwarfed by a grain of rice.

The reason for the curiosity is that IBM's claim calls for a re-examination of what constitutes a computer. Previous systems, including the 2x2x4mm Michigan Micro Mote, retain their programming and data even when they are not externally powered.

Unplug a desktop computer, and its program and data are still there when it boots itself up once the power is back. These new microdevices, from IBM and now Michigan, lose all prior programming and data as soon as they lose power.

"We are not sure if they should be called computers or not. It's more of a matter of opinion whether they have the minimum functionality required," said David Blaauw, a professor of electrical and computer engineering, who led the development of the new system together with Dennis Sylvester, also a professor of ECE, and Jamie Phillips, an Arthur F. Thurnau Professor and professor of ECE.

In addition to the RAM and photovoltaics, the new computing devices have processors and wireless transmitters and receivers. Because they are too small to have conventional radio antennae, they receive and transmit data with visible light. A base station provides light for power and programming, and it receives the data.

One of the big challenges in making a computer about 1/10th the size of IBM's was how to run at very low power when the system packaging had to be transparent. The light from the base station—and from the device's own transmission LED—can induce currents in its tiny circuits.

"We basically had to invent new ways of approaching circuit design that would be equally low power but could also tolerate light," Blaauw said.

For example, that meant exchanging diodes, which can act like tiny solar cells, for switched capacitors.

Another challenge was achieving high accuracy while running on low power, which makes many of the usual electrical signals (like charge, current and voltage) noisier.

Designed as a precision temperature sensor, the new device converts temperatures into time intervals, defined with electronic pulses. The intervals are measured on-chip against a steady time interval sent by the base station and then converted into a temperature. As a result, the computer can report temperatures in minuscule regions—such as a cluster of cells—with an error of about 0.1 degrees Celsius.
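The readout scheme described above – encoding temperature as a time interval and normalising it against the base station's reference interval – can be sketched like this. The linear calibration constants and function names are made up for illustration; the real chip's transfer curve is not given in the article.

```python
# Sketch of the pulse-interval temperature readout: the sensor stretches or
# shrinks a time interval with temperature; dividing by the base station's
# steady reference interval cancels clock drift. Calibration values
# (slope, offset) below are hypothetical.

def decode_temperature(measured_interval_us, reference_interval_us,
                       slope=50.0, offset=25.0):
    """Map the on-chip interval, normalised by the base-station reference,
    to a temperature in degrees Celsius via an assumed linear calibration."""
    ratio = measured_interval_us / reference_interval_us
    return offset + slope * (ratio - 1.0)

# With these made-up constants, an interval 2% longer than the reference
# decodes to one degree above the 25 °C offset:
temp_c = decode_temperature(102.0, 100.0)  # → 26.0
```

Ratioing against a reference sent by the base station is what lets such a tiny, low-power chip achieve the quoted ~0.1 °C accuracy without carrying a precise clock of its own.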

The system is very flexible and could be reimagined for a variety of purposes, but the team chose precision temperature measurements because of a need in oncology. Their longstanding collaborator, Gary Luker, a professor of radiology and biomedical engineering, wants to answer questions about temperature in tumors.

Some studies suggest that tumors run hotter than normal tissue, but the data isn't solid enough for confidence on the issue. Temperature may also help in evaluating cancer treatments.

"Since the temperature sensor is small and biocompatible, we can implant it into a mouse and cancer cells grow around it," Luker said. "We are using this temperature sensor to investigate variations in temperature within a tumor versus normal tissue and if we can use changes in temperature to determine success or failure of therapy."

Even as Luker's experiments run, Blaauw, Sylvester and Phillips look forward to what purposes others will find for their latest microcomputing device.

"When we first made our millimeter system, we actually didn't know exactly all the things it would be useful for. But once we published it, we started receiving dozens and dozens and dozens of inquiries," Blaauw said.

And that device, the Michigan Micro Mote, may turn out to be the world's smallest computer even still—depending on what the community decides are a computer's minimum requirements.

What good is a tiny computer? Applications of the Michigan Micro Mote:

Pressure sensing inside the eye for glaucoma diagnosis
Cancer studies
Oil reservoir monitoring
Biochemical process monitoring
Surveillance: audio and visual
Tiny snail studies
The study was presented June 21 at the 2018 Symposia on VLSI Technology and Circuits. The paper is titled "A 0.04mm3 16nW Wireless and Batteryless Sensor System with Integrated Cortex-M0+ Processor and Optical Communication for Cellular Temperature Measurement."

People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.
User avatar
Jeff Georgeson
Site Admin
Posts: 436
Joined: Wed Nov 23, 2005 12:40 am

Post by Jeff Georgeson »

So cool!!!
Jeff Georgeson
Quantum Tiger Games
www.quantumtigergames.com
the cat is out of the box
Post Reply