
Hollywood Comes Real: Future of Warfare ‘Decided by Drones’ Not Humans

21st Century Wire says…

The technocratic nightmare born out of transhumanism and eugenics receives another gift from the military industrial complex at the expense of humanity.

Drones will soon be able to perform targeted killing without the consultation of their human masters, working autonomously, coldly responding to a set of criteria…

This latest sci-tech insanity proves that humanity is one step closer to an AI singularity, a moment in the future when artificial intelligence will move beyond the abilities of the everyday human, putting our very existence at risk. This idea has been seeded into the public psyche by Hollywood for decades, most notably in director James Cameron’s Terminator series of films.


DRONES IN FILM: The notorious and deadly ‘Hunter Killer’ from the Terminator film series.

To see how dangerous artificial intelligence has become, all one has to do is look at the work of Eliezer Yudkowsky, a Research Fellow at the Machine Intelligence Research Institute (MIRI).

Eliezer’s work focuses on the evolution of AI self-modification, where strong artificial intelligence or ‘Seed AI’ will be able to program itself, optimizing its own cognitive functions much like the malevolent computer HAL in Stanley Kubrick’s 2001: A Space Odyssey.


IMAGE: Eliezer Yudkowsky (controlling the narrative of AI)

The 2013 Hollywood hit science-fiction film Oblivion depicts this same concept in a different way: Earth is controlled by a large hovering drone-computer overseeing the destruction of the planet while claiming to protect it, as scavengers (“scavs”) seek refuge from the techno-nightmare created by the machines themselves.


IMAGE: Still from the film Oblivion depicting the cloud platform for drones.

The dangers of drone technology are well documented. In an article from PolicyMic just a year ago, we see a clearer picture of how drones have been used by clandestine intelligence agencies:

“There are estimates as high as 98% of drone strike casualties being civilians (50 for every one “suspected terrorist”). The Bureau of Investigative Journalism issued a report detailing how the CIA is deliberately targeting those who show up at the site of an attack, rescuers, and mourners at funerals as part of a “double-tap” strategy eerily reminiscent of methods used by terrorist groups like Hamas.”

Continuing, the article challenges how drones are currently being used:

“While the CIA claims that the drone program operates “under a framework of legal and close government oversight,” multiple legal experts are challenging the legality of the drone program under both American and international law. But much like how the Obama administration is blocking any challenges to the provisions in the NDAA that essentially nullify habeas corpus and Posse Comitatus, any lawsuit or inquiry into the drone program has been met with staunch opposition — especially concerning the targeted assassinations by drones of Anwar Al-Awlaki and his 16-year-old son, both U.S. citizens.”

In addition to the military aspect of drones, there have been many high-profile accidents involving drones. Just recently, a small helicopter drone crashed on a sidewalk in New York City, nearly injuring a man.

In his book Our Final Invention, James Barrat outlines how machines could gain superintelligence at the expense of humans: AI superseding human thought and using us to accomplish its own goals.

Machine researchers tout the prowess of “Friendly AI” as a way to avoid a dystopian future created by machines, appointing industry insiders, ranging from scientists to economists, to program human values into machines.

What the machine-makers won’t admit is that our human operating system was hijacked long ago by those in charge of various fields, designing our future every day…

IMAGE: Predator drone operated by the U.S. Office of Air and Marine (OAM) before its surveillance flight near the Mexican border at Fort Huachuca in Sierra Vista, Arizona.

Soon, Drones May Be Able to Make Lethal Decisions on Their Own


By Joshua Foust, Defense One
National Journal

Scientists, engineers and policymakers are all figuring out ways drones can be used better and more smartly: more precisely, less damaging to civilians, with longer range and better staying power. One method under development is increasing the autonomy of the drone itself.

Eventually, drones may have the technical ability to make even lethal decisions autonomously: to respond to a programmed set of inputs, select a target and fire their weapons without a human reviewing or checking the result. Yet the idea of the U.S. military deploying a lethal autonomous robot, or LAR, is sparking controversy. Though autonomy might address some of the current downsides of how drones are used, it introduces new downsides that policymakers are only just learning to grapple with.
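The decision rule described here (programmed inputs in, a target selection out, no human check) can be sketched minimally in Python. Every field name, value and criterion below is hypothetical, invented purely to illustrate how such a rule-matching step might look; real systems of this kind do not yet exist, as the article notes.

```python
# Hypothetical sketch of a LAR-style selection step: match sensor contacts
# against a programmed set of criteria with no human in the loop.
# All field names and criteria are invented for illustration.

def select_target(contacts, criteria):
    """Return the first contact matching every programmed criterion, or None."""
    for contact in contacts:
        if all(contact.get(key) == value for key, value in criteria.items()):
            return contact
    return None

# Invented example inputs:
contacts = [
    {"id": "A1", "emitter": "radar", "hostile_signature": False},
    {"id": "B2", "emitter": "radio", "hostile_signature": True},
]
criteria = {"hostile_signature": True}

target = select_target(contacts, criteria)
# With these inputs, contact "B2" matches the programmed criteria.
```

The sketch makes the policy concern concrete: the entire judgment reduces to whichever criteria were programmed in advance, with no step at which a human reviews the match before action is taken.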

The basic conceit behind a LAR is that it can outperform and out-think a human operator. “If a drone’s system is sophisticated enough, it could be less emotional, more selective and able to provide force in a way that achieves a tactical objective with the least harm,” said Purdue University Professor Samuel Liles. “A lethal autonomous robot can aim better, target better, select better, and in general be a better asset with the linked ISR [intelligence, surveillance, and reconnaissance] packages it can run.”

Though the pace of drone strikes has slowed down — only 21 have struck Pakistan in 2013, versus 122 in 2010, according to the New America Foundation — unmanned vehicles remain a staple of the American counterinsurgency toolkit. But drones have built-in vulnerabilities that military planners have not yet grappled with. Last year, for example, an aerospace engineer told the House Homeland Security Committee that with some inexpensive equipment he could hack into a drone and hijack it to perform some rogue purpose.

Drones have been hackable for years. In 2009, defense officials told reporters that Iranian-backed militias used $26 worth of off-the-shelf software to intercept the video feeds of drones flying over Iraq. And in 2011, it was reported that a virus had infected some drone control systems at Creech Air Force Base in Nevada, raising concerns about the security of unmanned aircraft.

It may be that the only way to make a drone truly secure is to allow it to make its own decisions without a human controller: if it receives no outside commands, then it cannot be hacked (at least not as easily). And that’s where LARs might be most attractive.

Though they do not yet exist, and are not possible with current technology, LARs are the subject of fierce debate in academia, the military and policy circles. Still, many treat their development as an inevitability. But how practical would LARs be on the battlefield?

Heather Roff, a visiting professor at the University of Denver, said many conflicts, such as the civil war in Syria, are too complex for LARs. “It’s one thing to use them in a conventional conflict,” where large militaries fight away from cities, “but we tend to fight asymmetric battles. And interventions are not only military campaigns — the civilian effects matter.”

Continue this article here

READ MORE ROBOT NEWS AT: 21st Century Wire Robot Files

SEE MORE DARPA NEWS AT: 21st Century Wire DARPA Files