Posts Tagged ‘terminator’

This week the U.S. military took one step toward making the freakish humanoid robots of Arnold Schwarzenegger's Terminator films a reality. For real. Thank heavens there's friendlier (and tastier!) bot news, too.
 


Bot Vid: DARPA’s New Pet Is Petman

DARPA just announced its most recent Robotics Challenge, a "game" that solicits innovative solutions to hypothetical future war problems. Mere moments later … it announced a winner! What gives? Well, since the new Challenge was for humanoid robots, and DARPA is already funding a hugely advanced Terminator-like machine from Boston Dynamics called PETMAN, it'll be no surprise to learn that PETMAN is the winner. To celebrate, there's a new PETMAN video to send Arnie-like chills right up your spine.

Bot Vid: Sushi Bot

Sushi is an art – just check out the astonishingly charming film Jiro Dreams of Sushi for proof – but it's also a delicious kind of food with global popularity, prompting mechanization of the delicate production process to suit the mass market. Cue Suzumo's new SushiBots, which can kick out up to 3,600 maki rolls an hour, reportedly with all the subtle touches a human would apply, without cutting so much as a grain of rice.

Bot Vid: Shapely Balls

Once considered by the world's thinkers to be the "perfect" shape, a sphere is evidently a pretty ideal form for many objects to take: it can roll, it can be transported smoothly through chutes and pipes, and it is structurally strong. Now there's a bot that borrows the sphere aesthetic and marries it to a standard hexapod walking system to make a compound machine that can maneuver using three different modes, depending on terrain and user requirements. It's called MorpHex.

Bot News

Robots to protect the Titanic. Concerned that the effects of nearby shipping and tourist deep-dive vessels visiting the site are causing the wreck to degrade rapidly, Robert Ballard, the original discoverer of the Titanic's broken body on the floor of the Atlantic Ocean, is now proposing that a fleet of deep-sea robots permanently "man" the location. They would paint the vessel with anti-fouling paint so that bacteria wouldn't eat any more of its iron skin, and also monitor human visitor missions to make sure they don't touch the wreck or damage it in any way. As of now, the 100-year-old shipwreck is a UNESCO-protected heritage site.

Did NASA's robot find life on Mars in 1976? The robot Viking missions were a striking and powerful symbol of our early successes in space exploration, landing on the distant surface of Mars in the mid-'70s and returning photographs and science data that were the most revelatory ever about the planet's makeup and life-bearing potential. At the time, scientists concluded that the experiments designed to search for evidence of life drew a blank. But new analysis of the data (which survives as printouts) suggests that there really was evidence of complex behavior indicative of life in the soil samples the doughty little robot investigated. And if we want proof, the University of Southern California team suggests, all we need to do is fly a sufficiently powerful microscope to Mars … and we'd see microbes.

UAVs get a new launch trick. Utah Water Research scientists have looked at the rather tricky question of how best to launch surveillance drones into the air, and have come up with a fabulously biblical solution: a slingshot launch system. Their bungee-slingshot UAVs are being used to map the environment.

Bot Futures: Robot production line workers

The ongoing, sticky mess involving worker conditions in Foxconn's plants in China has this week resulted in almost unprecedented access for a journalist to the iPad production line. What we see is a highly human-centered process, but with countless pieces of machinery assisting almost every step of the assembly.

We know that Western public condemnation of worker conditions has pushed Apple to make unmatched efforts to improve the situation (even though much of the “condemnation” may be a little misplaced, particularly when it comes to worker salaries of “$14 a day,” due to misunderstanding global currency economics), but it’s definitely evident that the production line jobs are tedious and repetitive to the nth degree. That’s something Foxconn’s CEO has pledged to change, by augmenting his factories with still more robotic assistance.

But the rise of China as a manufacturing force for all sorts of goods, not just electronics, may actually change the local and global landscape for robotic workers. That's partly because of rising wages, which make 24-hour-reliable robots more efficient employees, partly because of the greater precision a robot can achieve, and partly because of international criticism of Chinese working conditions. Kuka, Europe's most successful maker of industrial bots, is now reported to be building a Chinese hub, making 5,000 robots a year in the country instead of the 1,000 or so it was making just two years ago. Other robot firms across the EU, in Japan, and in the U.S. are also predicting rapid growth in China's demand for robotic production-line units, and this rush is expected to push the global market value of industrial robots to about $41 billion by 2020. China may swiftly outpace Japan and South Korea as the most robotized nation.

Which is both good news and bad news for Chinese workers. What if the robots don't just displace workers from tedious or dangerous roles into ones where humans excel and robots can't yet compete (such as quality assurance), but displace them out of work entirely? And then there's the bigger question of the rise of robotic workers across the world. Some vocal Apple critics demanded that it move its manufacturing facilities to the U.S., but can you picture a future where Apple did this, yet peopled its floors with thousands upon thousands of robot workers rather than fleshy ones? This could get complicated for the Teamsters.

Article Source Link: http://www.fastcompany.com/1830837/this-week-in-bots?partner=gnews

JP

Robots that ‘bleed’ like Arnold Schwarzenegger’s Terminator have come one step closer to reality.

Scientists have created a plastic ‘skin’ that oozes red blood when cut.

It can also ‘heal’ itself, building tiny molecular bridges inside in response to damage.

The red 'blood' might sound like a pointless Halloween novelty – but the idea is that the 'skin' can warn engineers that a structure such as an aircraft wing has been damaged.

The material could provide self-healing surfaces for a multitude of products ranging from mobile phones and laptops to cars, say researchers.

When cut, the plastic turns from clear to red along the line of the damage, mimicking what happens to skin.

It reacts to ordinary light, or changes in temperature or acidity, by mending broken molecular ‘bridges’ to heal itself.

U.S. scientists described how they created the material at the American Chemical Society's annual meeting in San Diego, California.

Lead researcher Professor Marek Urban, from the University of Southern Mississippi, said: ‘Mother Nature has endowed all kinds of biological systems with the ability to repair themselves.

‘Some we can see, like the skin healing and new bark forming in cuts on a tree trunk. Some are invisible, but help keep us alive and healthy, like the self-repair system that DNA uses to fix genetic damage to genes.

‘Our new plastic tries to mimic nature, issuing a red signal when damaged and then renewing itself when exposed to visible light, temperature or pH changes.’

The material could flag up damage to critical aircraft structures, said Prof Urban. A decision could then be taken whether to replace the component or ‘heal’ it with a burst of intense light.

Scratches on vehicle fenders could be repaired the same way.

Prof Urban's team is now working on incorporating the technology into plastics that can withstand high temperatures.

 
 
JP

In this weekly series, Life’s Little Mysteries explores the plausibility of popular sci-fi concepts. Warning: Some spoilers ahead!

If a bunch of sci-fi flicks have it right, a war pitting humanity against machines will someday destroy civilization. Two popular movie series based on such a “robopocalypse,” the “Terminator” and “Matrix” franchises, are among those that suggest granting greater autonomy to artificially intelligent machines will end up dooming our species. (Only temporarily, of course, thanks to John Connor and Neo.)

Given the current pace of technological development, does the “robopocalypse” scenario seem more far-fetched or prophetic? The fate of the world could tip in either direction, depending on who you ask.

While researchers in the computer science field disagree on the road ahead for machines, they say our relationship with machines probably will be harmonious, not murderous. Yet there are a number of scenarios that could lead to non-biological beings aiming to exterminate us.

“The technology already exists to build a system that will destroy the whole world, intentionally or unintentionally, if it just detects the right conditions,” said Shlomo Zilberstein, a professor of computer science at the University of Massachusetts.

Machines at our command

Let’s first consider the optimistic viewpoint: that machines always will act as our servants, not the other way around.

“One approach is not to develop systems that can be so dangerous if they are out of control,” Zilberstein said.

Something like Skynet – the computerized defense network in "The Terminator" that decides to wipe out humanity – is already possible. So why has such a system not been built? A big reason: Nuclear-armed nations such as the United States would not want to turn over any of the responsibility for launching warheads to a computer. "What if there is a bug in the system? No one is going to take that risk," said Zilberstein.

On a smaller scale, however, a high degree of autonomy has been granted to Predator drones flying in the Middle East. "The number of robotic systems that can actually pull the trigger autonomously is already growing," said Zilberstein.

Still, a human operator monitors a drone and is given the final say on whether to proceed with a missile strike. That certainly is not the case with Skynet, which, in the "Terminator" films, is given control of America's entire nuclear arsenal.

In “The Terminator,” the military creates the program with the objective of reducing human error and slowness of response in case of an attack on the U.S.

When the human controllers realize the danger posed by an all-powerful Skynet, they try to shut it down. Skynet interprets this as a threat to its existence and, to counter its perceived human enemy, launches America's nukes at Russia, provoking a retaliatory strike. Billions die in a nuclear holocaust. Skynet then goes on to build factories that churn out robot armies to eliminate the remainder of humankind.

In a real-life scenario, Zilberstein thinks simple safeguards would prevent an autonomous system from threatening more people than it is designed to, in guarding a country's borders, for example. Plus, no system would be programmed with the ability to make broad strategic decisions the way Skynet does.

"All the systems we're likely to build in the near future will have specific abilities," Zilberstein said. "They will be able to monitor a region and maybe shoot, but they will not replace a [human] general."

Robots exceeding our grasp

Michael Dyer, a computer scientist at the University of California, Los Angeles, is less optimistic. He thinks "humans will ultimately be replaced by machines" and that the transition might not be peaceful.

The continued progress in artificial intelligence research will lead to machines as smart as we are in the next couple hundred years, Dyer predicts. “Advanced civilizations reach a point of enough intelligence to understand how their own brain works, and then they build synthetic versions of themselves,” he says.

The desire to do so might come from attempts at establishing our own immortality – and that opportunity might be too much for humanity to resist. (Who wouldn't want to spend their ever-after with their consciousness walking around in a robot shell?)

Maybe that sort of changeover from biology to technology goes relatively smoothly. Other rise-of-the-machines scenarios are less smooth.

Dyer suggests a new arms race of robotic systems could result in one side running rampant. "In the case of warfare, by definition, the enemy side has no control of the robots that are trying to kill them," Dyer said. Like Skynet, the manufactured might turn against the manufacturers.

Or an innocuous situation of overdependency on robots spirals out of control. Suppose a factory that makes robots is not following human commands, so an order is issued to shut off power to the factory. “But unfortunately, robots happen to manage the power station and so they refuse. So a command is issued by humans to stop the trucks from delivering necessary materials to the factory, but the drivers are robots, so they also refuse,” Dyer says.

Perhaps using the Internet, robotic intelligences wrest control of a society that depends too much on its automata. (“The Animatrix,” a 2003 collection of short cartoons, including some back stories for “The Matrix” movies, describes such a situation.)

Overall, a bit of wisdom would prevent humankind from falling into the traps dreamed up by Hollywood screenwriters. But the profit motive has certainly pushed companies toward ever more automation, and the Cold War's reliance on the threat of mutually assured destruction is a reminder that rationality does not always win.

“Doomsday scenarios are pretty easy to create, and I wouldn’t rule out that kind of possibility,” said Zilberstein. “But I’m personally not that worried.”

Plausibility rating: Military leaders and corporations probably will not be so stupid as to add high levels of programmed autonomy to catastrophically strong weapon systems and critical industrial sectors. We give the “robopocalypse” two out of four Rocketboys.

Article Source Link: http://news.yahoo.com/science-fiction-fact-could-robopocalypse-wipe-humans-175403124.html;_ylt=ApN8d.EGDv74bFaZSfNwaBsPLBIF;_ylu=X3oDMTNqODY0ZXZoBGNjb2RlA2N0LmMEcGtnA2JjYzgwZjFkLTQyODAtMzJlYy05YWIyLTVlMGNlZDIxYmZjZgRwb3MDNgRzZWMDbW9zdF9wb3B1bGFyBHZlcgNhN2Y0MTZlMC01ZjExLTExZTEtOWZkYi0wYTFkNGIzNWY0ZTk-;_ylg=X3oDMTFycGwxa2xhBGludGwDdXMEbGFuZwNlbi11cwRwc3RhaWQDBHBzdGNhdANzY2llbmNlBHB0A3NlY3Rpb25zBHRlc3QD;_ylv=3

Radical human modification is coming, like it or not, by the end of this century—if not earlier. How much are you willing to alter yourself?


This is my first column on TheAtlantic.com, which will regularly cover new discoveries in the life sciences and how they impact people and society — and other random topics.

Last fall at the TEDMED meeting in San Diego, I watched a man who was paralyzed from the waist down walk. Paul Thacker hadn't been able to stand since breaking his back in a snowmobile accident a year earlier. Yet here he was walking, thanks to an early-stage exoskeleton device attached to his legs.

This wasn’t exactly on the level of “exos” we’ve seen in sci-fi films like Avatar and Aliens, which enable people to run faster, carry heavier loads, and smash things better. But Thacker’s device, called eLEGS — manufactured by Ekso Bionics in Berkeley, California — is one harbinger of what’s coming in the next decade or two to treat the injured and the ill with radical new technologies.

Other portents include first-generation machines and treatments that range from deep brain implants that can stop epileptic seizures to stem cells that scientists are using experimentally to repair damaged retinas.

No one would deny that these technologies, should they fulfill their promise, would be nothing short of miraculous for Paul Thacker and others who need them. Yet none of this technology is going to remain exclusively in the realm of pure therapeutics. Even now, some of it is breaking through the barrier between remedies for the sick and enhancements for the healthy.

Take the drug Adderall. A highly addictive pharmaceutical prescribed for patients with Attention Deficit Hyperactivity Disorder (ADHD), the drug works as a stimulant in people without ADHD — and is now used by at least one out of five college students to bump up their energy and attention when they want to perform well on tests or pull all-nighters.

Saying that college students are popping pills is like Claude Rains in Casablanca saying to Humphrey Bogart: “I’m shocked, shocked to find that gambling is going on in here.” Yet the widespread use — and acceptance — of Adderall and other stimulants by students to enhance their academic performance is bumping up against something new. It’s pushing us into a realm where taking powerful pharmaceuticals that boost, say, attention or memory is becoming acceptable beyond pure recreation.

Can we be too far from a greater acceptance of surgically implanted devices that increase our ability to hear or see? Or new legs that allow us to run like cheetahs and scramble up walls like geckos?

Or that allow us to run in the Olympics like Oscar Pistorius, the South African sprinter who may qualify for the games in London this year despite missing his lower legs? He runs using two sleek, metallic “legs” that combine with his natural speed and skill to do far more than overcome a disability.

Which leads us to the crucial question for the approaching age of human enhancement: How far would you go to modify yourself using the latest medtech?

Would you replace perfectly good legs with artificial ones if they made you faster and stronger?

Would you take a daily pill that not only stimulated your brain to help you do your best on a test, but also bumped up your memory?

Would you sign up for a genetic alteration that would make you taller and stronger?

Let’s up the ante and declare that these fixes had no deleterious side effects, and were deemed safe by a newly appointed U.S. Agency for Human Augmentation. Would this change your mind? (As an aside, I’m trying to imagine what the candidates now vying for the Republican nomination for president would say about an Agency for Human Augmentation.)

And what if everyone else at work — or all of the rest of the kids in your child’s class at school — were taking advantage of these enhancements?

Currently, none of these hypothetical modifications would be ethical, and most are illegal. Yet one doesn't need to spend much time delving into the world of near-future medtech to understand that each of these possibilities is likely to occur in one form or another within the lifetime of those college kids now swallowing Adderall.

For now, the device attached to Paul Thacker's legs is clunky. The apparatus is little more than a pair of sophisticated braces with whirring mechanics attached to a computer he wears on his back — which is guided by a technician walking behind him, holding a control box attached to the computer with a wire. But it won't be long before this 37-year-old former champion snowmobile jumper is walking with ease using an advanced exoskeleton.

In a few more years, you might be wearing your own eLEGS to carry heavy loads around the house, or as a soldier on patrol in some distant corner of the world (assuming we aren’t using only drones). Flash forward a few more years, and you may have the option of permanently implanting in your legs the “eLEGS LXII,” an endo-skeletal implant that stays with you like a futuristic hip or knee implant does today.

Back at TEDMED, Paul Thacker wasn’t thinking about anything nearly as grandiose as this. When I asked him what he wishes for most using the new eLEGS technology, he smiled and said something refreshingly mundane considering he is a herald of the future.

“Right now I’d like to be able to stand up and pee,” he said. “I really miss being able to do that.”

Article Source Link: http://www.theatlantic.com/health/archive/2012/02/redesigning-people-how-medtech-could-expand-beyond-the-injured/253236/

JP

 

Our ability to “upgrade” the bodies of soldiers through drugs, implants, and exoskeletons may be upending the ethical norms of war as we’ve understood them.


If we can engineer a soldier who can resist torture, would it still be wrong to torture this person with the usual methods? Starvation and sleep deprivation won't affect a super-soldier who doesn't need to sleep or eat. Beatings and electric shocks won't break someone who can't feel pain or fear like we do. This isn't a comic-book story; these are plausible scenarios based on actual military projects today.

In the next generation, our warfighters may be able to eat grass, communicate telepathically, resist stress, climb walls like a lizard, and much more. Impossible? We only need to look at nature for proofs of concept. For instance, dolphins don't sleep (or they'd drown); Alaskan sled dogs can run for days without rest or food; bats navigate with echolocation; and goats will eat pretty much anything. Find out how they work, and maybe we can replicate that in humans.

As you might expect, there are serious moral and legal risks to consider on this path. Last week in the UK, the Royal Society released its report "Neuroscience, Conflict and Security." This timely report raised concerns about the risks posed by cognitive enhancements to military personnel, as well as whether new nonlethal tactics, such as directed-energy weapons, could violate either the Biological or Chemical Weapons Conventions.

While an excellent start, the report doesn't go far enough, as I have been explaining to the US intelligence community, the National Research Council, DARPA, and other organizations internationally. The impact of neural and physical human enhancements is more far-reaching than that, extending even to the question of torturing the enhanced. Other issues, described below, pose real challenges to military policies and to broader society.

Why Enhancements?

Technology makes up for our absurd frailty. Unlike other animals, we’re not armed with fangs, claws, running speed, flight, venom, resilience, fur, or other helpful features to survive a savage world. We naked apes couldn’t survive at all, if it weren’t for our tool-making intellect and resourcefulness.

And therein lies a fundamental problem with how Homo sapiens wage war: As impressive as our weapon systems may be, one of the weakest links in armed conflicts, as well as one of the most valuable assets, continues to be the warfighters themselves. Hunger, fatigue, and the need for sleep can quickly drain troop morale and cause a mission to fail. Fear and confusion in the "fog of war" can lead to costly mistakes, such as friendly-fire casualties. Emotions and adrenaline can drive otherwise-decent individuals to perform vicious acts, from verbal abuse of local civilians to torture and illegal executions, turning a routine patrol into an international incident. And post-traumatic stress can take a devastating toll on families and add pressure to already-burdened health services.

To be sure, military training seeks to address these problems, but it can do only so much, and science and technology help to fill those gaps. In this case, what’s needed is an upgrade to the basic human condition. We want our warfighters to be made stronger, more aware, more durable, more maneuverable in different environments, and so on. The technologies that enable these abilities fall in the realm of human enhancement, and they include neuroscience, biotechnology, nanotechnology, robotics, artificial intelligence, and more.


While some of these innovations are external devices, such as exoskeletons that give the wearer super-strength, our technology is continually shrinking in size. Our mobile phones today have more computing power than the computers that guided the Apollo missions to the moon. So there's good reason to think that these external enhancements someday could be small enough to be integrated with the human body, for an even greater military advantage.

The use of human enhancement technologies by the military is not new. Broadly construed, vaccinations could count as an enhancement of the human immune system, and this would place the first instance of military human enhancement (as opposed to mere tool-use) at our very first war, the American Revolutionary War in 1775-1783. George Washington, as commander-in-chief of the Continental Army, ordered the vaccinations of American troops against smallpox, as the British Army was suspected of using the virus as a form of biological warfare. (Biowarfare existed for centuries prior, such as in catapulting corpses to spread the plague during the Middle Ages.) At the time, the Americans largely were not exposed to smallpox in childhood and therefore had not built up immunity to the disease, as the British had.

Since then, militaries worldwide have used caffeine and amphetamines to keep their troops awake and alert, countering an age-old problem in war. In fact, some pilots are required to take drugs, known as "go pills," on long-distance missions, or else lose their jobs. And there's ongoing interest in using pharmaceuticals such as modafinil (a cognitive enhancer), dietary supplements, and gene therapy to boost the performance of warfighters.

The Questions

Some of the issues with military enhancements echo now-familiar debates, such as: whether the use of anabolic steroids by athletes is harmful to their health; whether that would set a bad example for impressionable children; whether Ritalin use in academia is cheating and unfair to others; whether longevity would bankrupt pension plans; whether manipulating biology amounts to "playing God"; and so on. But there are new concerns as well.

Ethical and safety issues

Established standards in biomedical ethics, such as the Nuremberg Code and the Declaration of Helsinki, govern the research stage of enhancements, that is, experimentation on human subjects. But "military necessity" or the exigencies of war can justify waiving requirements that would otherwise apply, such as obtaining a patient's voluntary consent. Under what conditions, then, could a warfighter be ordered to accept (or be permitted to refuse) a risky or unproven enhancement, such as a vaccine against a new biological weapon? Because some enhancements could be risky or pose long-term health dangers, such as addiction to "go pills," should military enhancements be reversible? What are the safety considerations related to more permanent enhancements, such as bionic parts or a neural implant?


Tactical and logistical implications

Once ethical and safety issues are resolved, militaries will need to attend to the impact of human enhancements on their operations. For instance, how would integrating both enhanced and unenhanced warfighters into the same unit affect their cohesion? Would enhanced soldiers rush into riskier situations, when their normal counterparts would not? If so, one solution could be to confine enhancements to a small, elite force. (This could also solve the consent problem.) As both an investment in and potential benefit to the individual warfighters, is it reasonable to treat them differently from the unenhanced, such as on length of service and promotion requirements? On the other hand, preferential treatment to any particular group could lower overall troop morale.

Legal and policy issues

More broadly, how do enhancements impact international humanitarian law, or the laws of war? The Geneva and Hague Conventions prohibit torture of enemy combatants, but enhanced soldiers could reasonably be exempt if the underlying assumptions disappear (that humans respond to a certain level of pain and need sleep and food), as I suggested at the beginning. Further, enhancements that transform our biology could violate the Biological Weapons Convention, if enhanced humans (or animals) plausibly count as "biological agents," which is not a well-defined term. International law aside, there may be policy questions: Should we allow frightening enhancements, as was the point of fierce Viking helmets and samurai masks? Could that exacerbate hostilities by prompting charges of dishonor and cowardice, the same charges we're now hearing about military robots?

Military-civilian issues

As history shows, we can expect the proliferation of every military technology we invent. The method of diffusion is different and more direct with enhancements, though: most warfighters return to society as civilians (our veterans) and would carry back any permanent enhancements and addictions with them. The US has about 23 million veterans, or one out of every 10 adults, in addition to 3 million active and reserve personnel, so this is a significant segment of the population. Would these enhancements, such as a drug or an operation that subdues emotions, make it harder for veterans to assimilate into civilian life? And would they create problems for other civilians who may be at a competitive disadvantage to an enhanced veteran who, for instance, has bionic limbs and enhanced cognition?

Soldier 2.0 is a Hybrid


The military technology getting the most public attention now is robotics, but we can think of it as sharing the same goal as human enhancement. Robotics aims to create a super-soldier from an engineering approach: robots are our proxy mech-warriors. However, there are some important limitations to those machines. For one thing, they don't have a sense of ethics, of what is right and wrong, which can be essential on the battlefield. Where a human finds it child's play to identify a ball, a coffee mug, or a gun, it's notoriously tough for a computer to do the same. This doesn't give us much confidence that a robot can reliably distinguish friend from foe, at least in the foreseeable future.

In contrast, cognitive and physical enhancements aim to create a super-soldier from a biomedical direction, such as with modafinil and other drugs. For battle, we want our soft organic bodies to perform more like machines. Somewhere in between robotics and biomedical research, we might arrive at the perfect future warfighter: one that is part machine and part human, striking a formidable balance between technology and our frailties.

In changing human biology, we also may be changing the assumptions behind existing laws of war and even human ethics. If so, we would need to reexamine the foundations of our social and political institutions, if prevailing norms can't stretch to cover new technologies. In comic books and science fiction, we can ignore or suspend disbelief about these details. But in the real world, as life imitates art and "mutant powers" really are changing the world, the details matter.

Acknowledgements: This article is adapted from a research report, in progress, funded by The Greenwall Foundation, with co-investigators Maxwell Mehlman (Case Western Reserve University) and Keith Abney (Cal Poly).


 
JP


The Terminator: Real cyborgs will be less scary – the researchers think the technology could help people suffering from brain malfunctions such as Parkinson's disease by replacing damaged or malfunctioning tissue with chips.

Faulty parts of living brains have been replaced by electronic chips, in an astonishing and controversial scientific breakthrough.

It’s a move that has been anticipated many times in science fiction, with creatures such as The Terminator, a ‘cyborg’ hybrid of flesh and machinery.

But now, researchers at Tel Aviv University have successfully created circuits that can replace motor functions – such as blinking – and implanted them into brains.

They hope the technology could in the future help people suffering from brain malfunctions such as Parkinson’s disease – by replacing damaged or malfunctioning tissue with chips that perform the same function.

‘Imagine there’s a small area in the brain that is malfunctioning, and imagine that we understand the architecture of this damaged area,’ said Professor Matti Mintz, a psychobiologist, speaking to the BBC.

‘So we try to replicate this part of the brain with electronics.’

Mintz has already successfully implanted a robotic cerebellum into the skull of a rodent with brain damage, restoring its capacity for movement.

However, anti-vivisection campaigners have described the experiments as ‘grotesque’.


The cerebellum is responsible for co-ordinating movement, says Mintz.

When wired to the brain, his ‘robo-cerebellum’ receives, interprets, and transmits sensory information from the brain stem, facilitating communication between the brain and the body.

To test this robotic interface between body and brain, the researchers taught a brain-damaged rat to blink whenever they sounded a particular tone.

The rat could only perform the behavior when its robotic cerebellum was functional.

According to the researcher, the chip is designed to mimic natural neuronal activity.

‘It’s a proof of the concept that we can record information from the brain, analyze it in a way similar to the biological network, and then return it to the brain,’ says Prof. Mintz, who recently presented his research at the Strategies for Engineered Negligible Senescence meeting in Cambridge, UK.

In the future, this robo-cerebellum could lead to electronic implants that replace damaged tissues in the human brain.

'This type of research raises enormous ethical concerns, let alone the poor animals whose lives are wasted on dubious and ego-driven experiments,' said Jan Creamer, CEO of the National Anti-Vivisection Society, in an interview with the BBC.

JP

Guilt, tiredness, stress, shock – can specialised drugs help to mute the qualities that make soldiers human, asks Michael Hanlon?

The ancient Spartans believed that battlefield training began at birth. Those who failed the first round of selection, which took place at the ripe old age of 48 hours, were left at the foot of a mountain to die. The survivors would, in years to come, often wonder if these rejects were the lucky ones. Because to harden them up, putative Spartan warriors were subjected to a vigorous regime involving unending physical violence, severe cold, a lack of sleep and constant sexual abuse.

As with the English public schools, which used similar tactics to produce the warriors who carved out the British Empire, the Spartan regime worked; the alumni were the most feared soldiers in the eastern Mediterranean. And ever since then, military chiefs have wondered whether it may be possible to short-cut the long and demanding Spartan regime to produce a soldier who kills without care or remorse, shows no fear, can fight battle after battle without fatigue and generally behave more like a machine than a man.

In the post-war era, the future of fighting was thought to be about tanks and missiles, large impersonal machines that would fight huge battles over the open terrain of Northern Europe. The soldiers would be pressing buttons in a command centre. But despite the advent of drone aircraft, much of 21st-century warfare is turning out to be a drawn-out, messy business, fought on a human scale in the mud and dust of Afghanistan. And fought against a mercurial army of irregulars who melt away into the fields and farms once the skirmish is over. Modern soldiers are not the cannon fodder of before. Highly trained and super fit, each one represents a huge investment by the nation that sends them into battle. A soldier who is too tired to fight effectively, who has gone mad or who is suffering from severe stress is like a broken-down tank, no use to anybody. What if soldiers could be made that did not break down?

The era of The Terminator, the perfect robotic killing machine, is decades away; to date, all efforts to create a humanoid robot that can climb the stairs, let alone fight the Taliban, have been risible. But scientists are reporting breakthroughs with the next-best thing – the creation of human terminators, who feel less pain, less terror and less fatigue than “non-enhanced” soldiers and whose very bodies may be augmented by powerful machines.

Efforts to understand the brain of the soldier and put this knowledge to good use have been going on for some time. Professor Jonathan Moreno, a bioethicist at the University of Pennsylvania, studies the way neuroscience is being co-opted by the military. "Right now, this is the fastest-growing area of science," he says.

The Pentagon is currently spending $400m a year researching ways to “enhance” the human fighter. The defence giant Lockheed recently unveiled its “Hulc” (Human Universal Load Carrier), a science fiction-like, battery-powered exoskeleton that allows a human to lift 100kg weights and carry them at a fast run of 16kph (10mph). The videos of the Hulc in action are truly impressive. Superman strength is one thing, but soldiers still need to sleep. In Afghanistan the average soldier in combat gets only four hours’ rest a day and sleep deprivation is the single biggest factor in reducing fighting performance. Not only are tired soldiers less physically able to fight and run, they make more mistakes with the complex weapons systems at their disposal – mistakes that can prove deadly to themselves and their comrades.

Using chemistry to attack fatigue is, of course, nothing new. Two centuries ago, Prussian soldiers used cocaine to remain alert and Inca warriors used coca leaves to stay alert long before that. Since then, nicotine, amphetamines, caffeine and a new class of stimulants including the drug Modafinil have all been used successfully, to the extent that American soldiers can now operate normally even after 48 hours without sleep. Now the chemists are trying to tweak the molecular structure of this drug so that it will switch off the desire for sleep for even longer.

Tiredness is not the only psychological problem faced by soldiers. Combat is immensely stressful and although proper training means that men and women can remain focused while in mortal danger, it is afterwards that problems begin. During the Vietnam War, one in three soldiers was treated for post-traumatic stress disorder (PTSD) and in the Second World War a significant proportion of Allied conscripts never fired a shot in anger because of stress and fear before the battle had even begun. Up to now, PTSD has been treated by a mix of psychotherapy and antidepressants – effective techniques but expensive and time-consuming. But as with fatigue there may be a chemical shortcut for PTSD.

The trick is to erase unwanted memories, or at least take away their sting. Professor Roger Pitman, a psychiatrist at Harvard Medical School in the US, has been experimenting with a drug called propranolol, a “beta blocker” normally used to treat high blood pressure, which he believes can erase the effects of terrifying memories.

Professor Pitman has given the drug to young volunteers who have suffered extreme trauma in, for example, road accidents. Those given placebos suffered nightmares and remained fearful of the road. When exposed to recordings describing their accidents, they suffered typical stress responses – sweating, a racing heart, dilated pupils. But those who had been on a course of propranolol showed no response at all. It was as though the trauma had not happened. For a soldier, memory-altering drugs such as this could mean violent combat becoming no more troubling, retrospectively, than a visit to the gym.

"The problem is," Professor Moreno says, "what else are they blocking when they do this? Do we want a generation of veterans who return without guilt?"

You may not even need drugs to short-circuit the unwanted side effects of battle. Dr Albert "Skip" Rizzo, a psychologist from the University of Southern California, has created a "virtual Iraq" video game, in which veterans have been able to re-enact their experiences to release pent-up stress.

Generals not only want stronger, more alert and less stressed soldiers; they want smarter ones, too. One of the most bizarre neuroscience findings in recent years is that by immersing the human brain in a powerful magnetic field, its powers of reasoning and learning are almost magically enhanced.

No one knows exactly how “transcranial magnetic stimulation” (TMS) works, but the Australian neuroscientist Professor Allan Snyder believes that magnetic fields in some way “switch off” the higher levels of mental processing that normally cloud our thoughts, allowing a “pure” form of reasoning to take over.

“Each of us could draw like a professional, do lightning-fast arithmetic,” he says. In fact, some subjects in TMS experiments have acquired (temporarily) similar abilities to the rare “autistic savants”, people who are able to perform astounding arithmetical feats and memorise whole telephone directories (an autistic savant was played by Dustin Hoffman in the film Rain Man).

In 2009, a US National Academy of Sciences report concluded that within 20 years we could be using TMS to enhance soldiers' fighting capabilities. As Professor Moreno says, "there is talk of TMS machines being used on the battlefields within 10 years in vehicles and in 10 years more in helmets." Why? Being a soldier demands a high level of technical expertise. It is no longer just a case of pointing a gun and shooting. Even combat rifles are now "systems," and mastering battlefield electronics requires a lot of training.

It may seem clear that if you could create a man with no scruples, who feels little pain and no fear, you would have an excellent fighting machine, but this may be a case of be careful what you wish for. We get scared for a reason – to avoid danger to ourselves and others. Fatigue may force us to rest before sustaining damaging injury. Even post-traumatic stress disorder may have a beneficial role. Moral scruples help soldiers to act as an effective team – in battle, troops will always say they are fighting for their mates before Queen and country.

Take away the humanity of the soldiers and there is a danger that the battles and wars we fight will become inhuman as well. Most of all there is, surely, a danger that these techniques, far from producing better soldiers, will actually produce a squad of zoned-out zombies, who will be no match for the determined, driven and highly motivated zealots of the Taliban.

SOURCE ARTICLE LINK:  http://www.independent.co.uk/news/science/super-soldiers-the-quest-for-the-ultimate-human-killing-machine-6263279.html

JP