Tag: artificial intelligence (page 1 of 3)

Ascended Twin Flame Andre – Date for an Encounter – 08JAN2017


Saint Germain – Portal Now Opened to incredible possibilities – November-12-2016


Mind Control Programs Exposed – Your Thoughts Are Not Your Own

Vic Bishop, Staff Writer

Research into the structure and function of the human brain continues to accelerate. Collaborations such as the Human Brain Project in Europe and the BRAIN Initiative in the United States are making great advances in understanding the brain's circuitry and computing principles. The supposed goals of these research initiatives are to understand the causes of brain disorders and improve their treatment, to create neu [...]


Hawking: Humans may lose to machines in a hundred years or so without even knowing it.

Excerpt from esbtrib.com

Stephen Hawking, the scientist (not Stephen King, the novelist), has made some dire predictions about the coming conquest of humans by their own creations: robots. King could write something to that effect, but he would have a hard time surpassing the number one robot movie of all time, The Terminator.
Humans' dependence on electronic technology to make life comfortable and easier may one day backfire on them. The scientist said that humans have become so complacent that they may not survive in the future.
At a recent conference, Hawking noted that robots and artificial intelligence could take over the world and conquer mankind within the next ten decades; by 2115, the world as we know it today could cease to exist. Speaking at the Zeitgeist conference in London, Hawking explained that humans need to come to terms with how to go forward and not fall into complacency as robotics and artificial intelligence take over without them even knowing it.
"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all," he continued.
Hawking went on to explain that technology that outsmarts financial markets, out-invents human researchers, puts world leaders under its influence and produces advanced weaponry is slowly putting humans at a disadvantage. Research should be done into what AI would mean for humanity.
The creation of AI would no doubt be the greatest human achievement, if humans can pull it off. It might also be their last act if they are not careful about it.
Humans' biological evolution is notoriously slow, and their ability to challenge AI is almost nonexistent compared with what the machines can muster. Elon Musk agrees with Hawking about the dangers posed by AI.


Stephen Hawking Says Artificial Intelligence Will Take over Humanity in the Near Future

Excerpt from regaltribune.com

Technology has advanced so much that some scientists fear that one day robots will take over the world and humans will not be able to do anything about it.  
One of those scientists is Stephen Hawking, the most famous physicist and cosmologist in the world.
Hawking stated during a recent conference that robots, and artificial intelligence in particular, could conquer humanity in the next 100 years.
The renowned scientist spoke at the Zeitgeist conference held in London, saying that computers will one day overtake us humans with their artificial intelligence and this could happen in less than 100 years.
Hawking added that if this happens, humans will need to be sure that the robots have goals similar to our own.
But this is not the first time the author of “A Brief History of Time” has made this kind of gloomy statement about the future of humanity at the robotic hands of artificial intelligence.

At the beginning of this year, Stephen Hawking expressed his opinion on this matter, saying that artificial intelligence will advance so much that it could bring about the end of the human race.
Also, in an interview with the BBC, Hawking said that even though A.I. is not a threat to us humans at present, in the future robots will become more intelligent and much stronger than their makers, the humans.
The scientist added that robots would start to redesign themselves and would evolve at an ever-increasing rate, one humans will not be able to keep pace with.
Hawking added that:
“Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
And Hawking is not the only famous scientist who has a gloomy vision regarding our future.
Elon Musk, Tesla Motors CEO, said that artificial intelligence poses a real threat to the human race.

According to Musk, humans must be extremely careful about artificial intelligence, because it could turn out to be our “biggest existential threat”. Musk even compared A.I. with a “demon”.
However, not every scientist envisions a dark future for the human race. While many think of artificial intelligence as the driving force behind robots, A.I. also powers many everyday devices, such as smartphones, tablets, laptops and apps.
Artificial intelligence is also used to filter spam out of email.
Giant companies like Google and Facebook are currently working on developing new systems, which will one day lead to advanced artificial intelligence.


Consciousness Does Not Compute (and Never Will), Says Korean Scientist

Daegene Song's research into strong AI could be key to answering fundamental brain science questions.

Excerpt from prnewswire.com

Within some circles in the scientific community, debate rages about whether computers will achieve technological singulari...


Desperately Seeking ET: Fermi’s Paradox Turns 65 ~ Part 2

Excerpt from huffingtonpost.com

Introduction

Why is it so hard to find ET? After 50 years of searching, the SETI project has so far found nothing. In the latest development, on April 14, 2015 Penn State researchers announced that after searching through...


Google’s AI Program Is Better At Video Games Than You


IBM's Watson supercomputer may be saving lives and educating children, but Google's new AI program can master video games without human guidance.

The artificial intelligence system from London-based DeepMind, which Google acquired last year for a reported $400 million, represents a major step toward a future of smart machines.

Computers running the deep Q-network (DQN) algorithm were exposed to 49 retro games on the Atari 2600 and told to play them, without any direction from researchers. Using the same network architecture and tuning parameters, the machines were given only raw screen pixels, available actions, and game score as input.
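The raw-pixel setup described above is worth unpacking. The published DQN work shrank each frame to a small grayscale image and stacked the last few frames so the network could perceive motion; here is a rough pure-Python sketch of that preprocessing idea (the `FrameStack` helper and the rows-of-RGB-tuples frame format are our illustrative assumptions, not details from this article):

```python
from collections import deque

def to_grayscale(frame):
    # frame: a list of rows, each row a list of (r, g, b) tuples
    return [[(r + g + b) // 3 for (r, g, b) in row] for row in frame]

class FrameStack:
    """Keep the last k preprocessed frames as the agent's observation.

    Stacking recent frames lets a feed-forward network infer motion
    (e.g. which way the ball is moving in Breakout) from still images.
    """
    def __init__(self, k=4):
        self.k = k
        self.frames = deque(maxlen=k)

    def push(self, frame):
        self.frames.append(to_grayscale(frame))

    def observation(self):
        # Until k frames have arrived, pad by repeating the oldest frame.
        frames = list(self.frames)
        while len(frames) < self.k:
            frames.insert(0, frames[0])
        return frames
```

Each `observation()` is the kind of input a DQN-style network would consume; the game score supplies the reward signal, and the legal joystick moves supply the action set.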

For each level passed or high score earned, the computer was automatically rewarded with a digital treat.

"Strikingly, DQN was able to work straight 'out of the box' across all these games," DeepMind's Dharshan Kumaran and Demis Hassabis wrote in a blog post. The executives cited classic titles like Breakout, River Raid, Boxing, and Enduro.

The AI crushed even the most expert humans at 29 games, sometimes composing what the creators called "surprisingly far-sighted strategies" that allowed maximum scoring possibilities. It also outperformed previous machine-learning methods in 43 of 49 instances.

Google DeepMind's findings were presented in a paper published in this week's issue of the journal Nature, which describes the key DQN features that allow it to learn.

"This work offers the first demonstration of a general purpose learning agent that can be trained end-to-end to handle a wide variety of challenging tasks," the researchers said. "This kind of technology should help us build more useful products."

Imagine asking the Google app to complete a complex task—like plan a backpacking trip through Europe, for example.

Google's DeepMind also hopes its technology will give researchers new ways to make sense of large-scale data, opening the door to discoveries in fields like climate science, physics, medicine, and genomics.

"And it may even help scientists better understand the process by which humans learn," Kumaran and Hassabis said, citing physicist Richard Feynman, who famously said, "What I cannot create, I do not understand."



Is playing ‘Space Invaders’ a milestone in artificial intelligence?

Excerpt from latimes.com

Computers have beaten humans at chess and "Jeopardy!," and now they can master old Atari games such as "Space Invaders" or "Breakout" without knowing anything about their rules or strategies.

Playing Atari 2600 games from the 1980s may seem a bit "Back to the Future," but researchers with Google's DeepMind project say they have taken a small but crucial step toward a general learning machine that can mimic the way human brains learn from new experience.

Unlike the Watson and Deep Blue computers that beat "Jeopardy!" and chess champions with intensive programming specific to those games, the Deep-Q Network built its winning strategies from keystrokes up, through trial and error and constant reprocessing of feedback to find winning strategies.


“The ultimate goal is to build smart, general-purpose [learning] machines. We’re many decades off from doing that," said artificial intelligence researcher Demis Hassabis, coauthor of the study published online Wednesday in the journal Nature. "But I do think this is the first significant rung of the ladder that we’re on." 
The Deep-Q Network computer, developed by the London-based Google DeepMind, played 49 old-school Atari games, scoring "at or better than human level," on 29 of them, according to the study.
The algorithm approach, based loosely on the architecture of human neural networks, could eventually be applied to any complex and multidimensional task requiring a series of decisions, according to the researchers. 

The algorithms employed in this type of machine learning depart strongly from approaches that rely on a computer's ability to weigh stunning amounts of inputs and outcomes and choose programmed models to "explain" the data. Those approaches, known as supervised learning, required artful tailoring of algorithms around specific problems, such as a chess game.

The computer instead relies on random exploration of keystrokes bolstered by human-like reinforcement learning, where a reward essentially takes the place of such supervision.
“In supervised learning, there’s a teacher that says what the right answer was," said study coauthor David Silver. "In reinforcement learning, there is no teacher. No one says what the right action was, and the system needs to discover by trial and error what the correct action or sequence of actions was that led to the best possible desired outcome.”
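Silver's teacher-versus-trial-and-error distinction can be made concrete with a toy example. The sketch below is plain tabular Q-learning on a five-state corridor, where the only feedback is a reward for reaching the rightmost state; it is not DeepMind's DQN (which swaps the table for a deep network and adds tricks such as experience replay), and every name and parameter here is illustrative:

```python
import random

def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, epsilon=0.5, seed=0):
    """Learn action values for a corridor: start at state 0, reward 1 at the end."""
    rng = random.Random(seed)
    actions = (-1, +1)  # step left, step right
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: sometimes explore at random, otherwise exploit the table
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0  # reward only at the goal
            # No teacher: update toward reward plus discounted best future value.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions) - q[(s, a)])
            s = s2
    return q
```

After training, the greedy action in every non-terminal state is "step right", discovered purely from the delayed reward signal with no one ever labeling the correct action.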

The computer "learned" over the course of several weeks of training, in hundreds of trials, based only on the video pixels of the game -- the equivalent of a human looking at screens and manipulating a cursor without reading any instructions, according to the study.

Over the course of that training, the computer built up progressively more abstract representations of the data in ways similar to human neural networks, according to the study.
There was nothing about the learning algorithms, however, that was specific to Atari, or to video games for that matter, the researchers said.
The computer eventually figured out such insider gaming strategies as carving a tunnel through the bricks in "Breakout" to reach the back of the wall. And it found a few tricks that were unknown to the programmers, such as keeping a submarine hovering just below the surface of the ocean in "Seaquest."

The computer's limits, however, became evident in the games at which it failed, sometimes spectacularly. It was miserable at "Montezuma's Revenge," and performed nearly as poorly at "Ms. Pac-Man." That's because those games also require more sophisticated exploration, planning and complex route-finding, said coauthor Volodymyr Mnih.

And though the computer may be able to match the video-gaming proficiency of a 1980s teenager, its overall "intelligence" hardly reaches that of a pre-verbal toddler. It cannot build conceptual or abstract knowledge, doesn't find novel solutions, and can get stuck trying to exploit its accumulated knowledge rather than abandoning it and resorting to random exploration, as humans do.

“It’s mastering and understanding the construction of these games, but we wouldn’t say yet that it’s building conceptual knowledge, or abstract knowledge," said Hassabis.

The researchers chose the Atari 2600 platform in part because it offered an engineering sweet spot -- not too easy and not too hard. They plan to move into the 1990s, toward 3-D games involving complex environments, such as the "Grand Theft Auto" franchise. That milestone could come within five years, said Hassabis.

“With a few tweaks, it should be able to drive a real car,” Hassabis said.

DeepMind was formed in 2010 by Hassabis, Shane Legg and Mustafa Suleyman, and received funding from Tesla Motors' Elon Musk and Facebook investor Peter Thiel, among others. It was purchased by Google last year, for a reported $650 million. 

Hassabis, a chess prodigy and game designer, met Legg, an algorithm specialist, while studying at the Gatsby Computational Neuroscience Unit at University College, London. Suleyman, an entrepreneur who dropped out of Oxford University, is a partner in Reos, a conflict-resolution consulting group.


Robots Can Learn to Perform Tasks by “Watching” YouTube Videos

University of Maryland computer scientist Yiannis Aloimonos (center) is developing robotic systems able to visually recognize objects and generate new behavior based on those observations. DARPA is funding this research through its Mathematics of Sensing, Exploitation and Execution (MSEE) program. (University of Maryland Photo)

From darpa.mil

January 29, 2015

DARPA program advances robots’ ability to sense visual information and turn it into action  

Robots can learn to recognize objects and patterns fairly well, but to interpret and be able to act on visual input is much more difficult.  Researchers at the University of Maryland, funded by DARPA’s Mathematics of Sensing, Exploitation and Execution (MSEE) program, recently developed a system that enabled robots to process visual data from a series of “how to” cooking videos on YouTube. Based on what was shown on a video, robots were able to recognize, grab and manipulate the correct kitchen utensil or object and perform the demonstrated task with high accuracy—without additional human input or programming.  

“The MSEE program initially focused on sensing, which involves perception and understanding of what’s happening in a visual scene, not simply recognizing and identifying objects,” said Reza Ghanadan, program manager in DARPA’s Defense Sciences Offices. “We’ve now taken the next step to execution, where a robot processes visual cues through a manipulation action-grammar module and translates them into actions.”

Another significant advance to come out of the research is the robots’ ability to accumulate and share knowledge with others. Current sensor systems typically view the world anew in each moment, without the ability to apply prior knowledge.

“This system allows robots to continuously build on previous learning—such as types of objects and grasps associated with them—which could have a huge impact on teaching and training,” Ghanadan said. “Instead of the long and expensive process of programming code to teach robots to do tasks, this research opens the potential for robots to learn much faster, at much lower cost and, to the extent they are authorized to do so, share that knowledge with other robots. This learning-based approach is a significant step towards developing technologies that could have benefits in areas such as military repair and logistics.”

The DARPA-funded researchers presented their work today at the 29th meeting of the Association for the Advancement of Artificial Intelligence. The University of Maryland paper is available here: http://ow.ly/I30im


The Future of Technology in 2015?

Excerpt from

The year gone by brought us more robots, worries about artificial intelligence, and difficult lessons on space travel. The big question: where's it all taking us?

Every year, we capture a little bit more of the future -- and yet the future insists on staying ever out of reach.
Consider space travel. Humans have been traveling beyond the atmosphere for more than 50 years now -- but aside from a few overnights on the moon four decades ago, we have yet to venture beyond low Earth orbit.
Or robots. They help build our cars and clean our kitchen floors, but no one would mistake a Kuka or a Roomba for the replicants in "Blade Runner." Siri, Cortana and Alexa, meanwhile, are bringing some personality to the gadgets in our pockets and our houses. Still, that's a long way from HAL or that lad David from the movie "A.I. Artificial Intelligence."
Self-driving cars? Still in low gear, and carrying some bureaucratic baggage that prevents them from ditching certain technology of yesteryear, like steering wheels.
And even when these sci-fi things arrive, will we embrace them? A Pew study earlier this year found that Americans are decidedly undecided. Among the poll respondents, 48 percent said they would like to take a ride in a driverless car, but 50 percent would not. And only 3 percent said they would like to own one.
"Despite their general optimism about the long-term impact of technological change," Aaron Smith of the Pew Research Center wrote in the report, "Americans express significant reservations about some of these potentially short-term developments" such as US airspace being opened to personal drones, robot caregivers for the elderly or wearable or implantable computing devices that would feed them information.
Let's take a look at how much of the future we grasped in 2014 and what we could gain in 2015.

Space travel: 'Space flight is hard'

In 2014, earthlings scored an unprecedented achievement in space exploration when the European Space Agency landed a spacecraft on a speeding comet, with the potential to learn more about the origins of life. No, Bruce Willis wasn't aboard. Nobody was. But when the 220-pound Philae lander, carried to its destination by the Rosetta orbiter, touched down on comet 67P/Churyumov-Gerasimenko on November 12, some 300 million miles from Earth, the celebration was well-earned.
A shadow quickly fell on the jubilation, however. Philae could not stick its first landing, bouncing into a darker corner of the comet where its solar panels would not receive enough sunlight to charge the lander's batteries. After two days and just a handful of initial readings sent home, it shut down. For good? Backers have allowed for a ray of hope as the comet passes closer to the sun in 2015. "I think within the team there is no doubt that [Philae] will wake up," lead lander scientist Jean-Pierre Bibring said in December. "And the question is OK, in what shape? My suspicion is we'll be in good shape."
The trip for NASA's New Horizons spacecraft has been much longer: 3 billion miles, all the way to Pluto and the edge of the solar system. Almost nine years after it left Earth, New Horizons in early December came out of hibernation to begin its mission: to explore "a new class of planets we've never seen, in a place we've never been before," said project scientist Hal Weaver. In January, it will begin taking photos and readings of Pluto, and by mid-July, when it swoops closest to Pluto, it will have sent back detailed information about the dwarf planet and its moon, en route to even deeper space.

Also in December, NASA made a first test spaceflight of its Orion capsule on a quick morning jaunt out and back, to just over 3,600 miles above Earth (or approximately 15 times higher than the International Space Station). The distance was trivial compared to those traveled by Rosetta and New Horizons, and crewed missions won't begin till 2021, but the ambitions are great -- in the 2030s, Orion is expected to carry humans to Mars.
In late March 2015, two humans will head to the ISS to take up residence for a full year, in what would be a record sleepover in orbit. "If a mission to Mars is going to take a three-year round trip," said NASA astronaut Scott Kelly, who will be joined in the effort by Russia's Mikhail Kornienko, "we need to know better how our body and our physiology performs over durations longer than what we've previously on the space station investigated, which is six months."
There were more sobering moments, too, in 2014. In October, Virgin Galactic's sleek, experimental SpaceShipTwo, designed to carry deep-pocketed tourists into space, crashed in the Mojave Desert during a test flight, killing one test pilot and injuring the other. Virgin founder Richard Branson had hoped his vessel would make its first commercial flight by the end of this year or in early 2015, and what comes next remains to be seen. Branson, though, expressed optimism: "Space flight is hard -- but worth it," he said in a blog post shortly after the crash, and in a press conference, he vowed "We'll learn from this, and move forward together." Virgin Galactic could begin testing its next spaceship as soon as early 2015.
The crash of SpaceShipTwo came just a few days after the explosion of an Orbital Sciences rocket lofting an unmanned spacecraft with supplies bound for the International Space Station. And in July, Elon Musk's SpaceX had suffered the loss of one of its Falcon 9 rockets during a test flight. Musk intoned, via Twitter, that "rockets are tricky..."
Still, it was on the whole a good year for SpaceX. In May, it unveiled its first manned spacecraft, the Dragon V2, intended for trips to and from the space station, and in September, it won a $2.6 billion contract from NASA to become one of the first private companies (the other being Boeing) to ferry astronauts to the ISS, beginning as early as 2017. Oh, and SpaceX also has plans to launch microsatellites to establish low-cost Internet service around the globe, saying in November to expect an announcement about that in two to three months -- that is, early in 2015.
One more thing to watch for next year: another launch of the super-secret X-37B space plane to do whatever it does during its marathon trips into orbit. The third spaceflight of an X-37B -- a robotic vehicle that, at 29 feet in length, looks like a miniature space shuttle -- ended in October after an astonishing 22 months circling the Earth, conducting "on-orbit experiments."

Self-driving cars: Asleep at what wheel?

Spacecraft aren't the only vehicles capable of autonomous travel -- increasingly, cars are, too. Automakers are toiling toward self-driving cars, and Elon Musk -- whose name comes up again and again when we talk about the near horizon for sci-fi tech -- says we're less than a decade away from capturing that aspect of the future. In October, speaking in his guise as founder of Tesla Motors, Musk said: "Like maybe five or six years from now I think we'll be able to achieve true autonomous driving where you could literally get in the car, go to sleep and wake up at your destination." (He also allowed that we should tack on a few years after that before government regulators give that technology their blessing.)
Prototype, unbound: Google's ride of the future, as it looks today. (Google)
That comment came as Musk unveiled a new autopilot feature -- characterizing it as a sort of super cruise control, rather than actual autonomy -- for Tesla's existing line of electric cars. Every Model S manufactured since late September includes new sensor hardware to enable those autopilot capabilities (such as adaptive cruise control, lane-keeping assistance and automated parking), to be followed by an over-the-air software update to enable those features.
Google has long been working on its own robo-cars, and until this year, that meant taking existing models -- a Prius here, a Lexus there -- and buckling on extraneous gear. Then in May, the tech titan took the wraps off a completely new prototype that it had built from scratch. (In December, it showed off the first fully functional prototype.) It looked rather like a cartoon car, but the real news was that there was no steering wheel, gas pedal or brake pedal -- no need for human controls when software and sensors are there to do the work.
Or not so fast. In August, California's Department of Motor Vehicles declared that Google's test vehicles will need those manual controls after all -- for safety's sake. The company agreed to comply with the state's rules, which went into effect in September, and began testing the cars on private roads in October.
Regardless of who's making your future robo-car, the vehicle is going to have to be not just smart, but actually thoughtful. It's not enough for the car to know how far it is from nearby cars or what the road conditions are. The machine may well have to make no-win decisions, just as human drivers sometimes do in instantaneous, life-and-death emergencies. "The car is calculating a lot of consequences of its actions," Chris Gerdes, an associate professor of mechanical engineering, said at the Web Summit conference in Dublin, Ireland, in November. "Should it hit the person without a helmet? The larger car or the smaller car?"

Robots: Legging it out

So when do the robots finally become our overlords? Probably not in 2015, but there's sure to be more hand-wringing about both the machines and the artificial intelligence that could -- someday -- make them a match for homo sapiens. At the moment, the threat seems more mundane: when do we lose our jobs to a robot?
The inquisitive folks at Pew took that very topic to nearly 1,900 experts, including Vint Cerf, vice president at Google; Web guru Tim Bray; Justin Reich of Harvard University's Berkman Center for Internet & Society; and Jonathan Grudin, principal researcher at Microsoft. According to the resulting report, published in August, the group was almost evenly split -- 48 percent thought it likely that, by 2025, robots and digital agents will have displaced significant numbers of blue- and white-collar workers, perhaps even to the point of breakdowns in the social order, while 52 percent "have faith that human ingenuity will create new jobs, industries, and ways to make a living, just as it has been doing since the dawn of the Industrial Revolution."

Still, for all of the startling skills that robots have acquired so far, they're often not all there yet. Here's some of what we saw from the robot world in 2014:
Teamwork: Researchers at the École Polytechnique Fédérale de Lausanne in May showed off their "Roombots," cog-like robotic balls that can join forces to, say, help a table move across a room or change its height.
A sense of balance: We don't know if Boston Dynamics' humanoid Atlas is ready to trim bonsai trees, but it has learned this much from "The Karate Kid" (the original from the 1980s) -- it can stand on cinder blocks and hold its balance in a crane stance while moving its arms up and down.
Catlike jumps: MIT's cheetah-bot gets higher marks for locomotion. Fed a new algorithm, it can run across a lawn and bound like a cat. And quietly, too. "Our robot can be silent and as efficient as animals. The only things you hear are the feet hitting the ground," MIT's Sangbae Kim, a professor of mechanical engineering, told MIT News. "This is kind of a new paradigm where we're controlling force in a highly dynamic situation. Any legged robot should be able to do this in the future."
Sign language: Toshiba's humanoid Aiko Chihira communicated in Japanese sign language at the CEATEC show in October. Her rudimentary skills, limited for the moment to simple messages such as signed greetings, are expected to blossom by 2020 into areas such as speech synthesis and speech recognition.
Dance skills: Robotic pole dancers? Tobit Software brought a pair, controllable by an Android smartphone, to the Cebit trade show in Germany in March. More lifelike was the animatronic sculpture at a gallery in New York that same month -- but what was up with that witch mask?
Emotional ambition: Eventually, we'll all have humanoid companions -- at least, that's always been one school of thought on our robotic future. One early candidate for that honor could be Pepper, from Softbank and Aldebaran Robotics, which say the 4-foot-tall Pepper is the first robot to read emotions. This emo-bot is expected to go on sale in Japan in February.

Ray guns: Ship shape

Damn the photon torpedoes, and full speed ahead. That could be the motto for the US Navy, which in 2014 deployed a prototype laser weapon -- just one -- aboard a vessel in the Persian Gulf. Through some three months of testing, the device "locked on and destroyed the targets we designated with near-instantaneous lethality," Rear Adm. Matthew L. Klunder, chief of naval research, said in a statement. Those targets were rather modest -- small objects mounted aboard a speeding small boat, a diminutive Scan Eagle unmanned aerial vehicle, and so on -- but the point was made: the laser weapon, operated by a controller like those used for video games, held up well, even in adverse conditions.

Artificial intelligence: Danger, Will Robinson?

What happens when robots and other smart machines can not only do, but also think? Will they appreciate us for all our quirky human high and low points, and learn to live with us? Or do they take a hard look at a species that's run its course and either turn us into natural resources, "Matrix"-style, or rain down destruction?
When the machines take over, will they be packing laser weapons like this one the US Navy just tried out? (John F. Williams/US Navy)
As we look ahead to the reboot of the "Terminator" film franchise in 2015, we can't help but recall some of the dire thoughts about artificial intelligence from two people high in the tech pantheon, the very busy Musk and the theoretically inclined Stephen Hawking.
Musk himself more than once in 2014 invoked the likes of the "Terminator" movies and the "scary outcomes" that make them such thrilling popcorn fare. Except that he sees a potentially scary reality evolving. In an interview with CNBC in June, he spoke of his investment in AI-minded companies like Vicarious and Deep Mind, saying: "I like to just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome."
He has put his anxieties into some particularly colorful phrases. In August, for instance, Musk tweeted that AI is "potentially more dangerous than nukes." And in October, he said this at a symposium at MIT: "With artificial intelligence, we are summoning the demon. ... You know all those stories where there's the guy with the pentagram and the holy water and he's like... yeah, he's sure he can control the demon, [but] it doesn't work out."
Musk has a kindred spirit in Stephen Hawking. The physicist allowed in May that AI could be the "biggest event in human history," and not necessarily in a good way. A month later, he was telling John Oliver, on HBO's "Last Week Tonight," that "artificial intelligence could be a real danger in the not too distant future." How so? "It could design improvements to itself and outsmart us all."
But Google's Eric Schmidt is having none of that pessimism. At a summit on innovation in December, the executive chairman of the far-thinking tech titan -- which in October teamed up with Oxford University to speed up research on artificial intelligence -- said that while our worries may be natural, "they're also misguided."


Is AI a threat to humanity?

Excerpt from cnn.com
Imagine you're the kind of person who worries about a future when robots become smart enough to threaten the very existence of the human race. For years, you've been dismissed as a crackpot, consigned to the same category of peop...


How will the world end? From ‘demonic’ AI to nuclear war — seven scenarios that could end the human race


Humanity may have already created its own nemesis, Professor Stephen Hawking warned last week. The Cambridge University physicist claimed that new developments in the field of artificial intelligence (AI) mean that within a few decades, computers thousands of times more powerful than those in existence today may decide to usurp their creators and effectively end humanity’s 100,000-year dominance of Earth.
This Terminator scenario is taken seriously by many scientists and technologists. Before Prof. Hawking made his remarks, Elon Musk, the genius behind the Tesla electric car and PayPal, had stated that “with artificial intelligence, we are summoning the demon,” comparing it unfavourably with nuclear war as the most potent threat to humanity’s existence.
Aside from the rise of the machines, many potential threats have been identified to our species, our civilization, even our planet. To keep you awake at night, here are seven of the most plausible.
Getty Images / ThinkStock: An artist's depiction of an asteroid approaching Earth.
Our solar system is littered with billions of pieces of debris, from the size of large boulders to objects hundreds of kilometres across. We know that, from time to time, these hit the Earth. Sixty-five million years ago, an object – possibly a comet a few times larger than the one on which the Philae probe landed last month – hit the Mexican coast and triggered a global winter that wiped out the dinosaurs. In 1908, a smaller object hit a remote part of Siberia and devastated hundreds of square kilometres of forest. Last week, 100 scientists, including Lord Rees of Ludlow, the Astronomer Royal, called for the creation of a global warning system to alert us if a killer rock is on the way.
Probability: remote in our lifetime, but one day we will be hit.
Result: there has been no strike big enough to wipe out all life on Earth – an “extinction-level event” – for at least three billion years. But a dino-killer would certainly be the end of our civilization and possibly our species.
Warner Bros.: When artificial intelligence becomes self-aware, there is a chance it will look something like this scene from Terminator 3.
Prof. Hawking is not worried about armies of autonomous drones taking over the world, but something more subtle – and more sinister. Some technologists believe that an event they call the Singularity is only a few decades away. This is a point at which the combined networked computing power of the world’s AI systems begins a massive, runaway increase in capability – an explosion in machine intelligence. By then, we will probably have handed over control to most of our vital systems, from food distribution networks to power plants, sewage and water treatment works, and the global banking system. The machines could bring us to our knees without a shot being fired. And we cannot simply pull the plug, because they control the power supplies.

Probability: unknown, although computing power is doubling every 18 months. We do not know if machines can be conscious or “want” to do anything, and sceptics point out that the cleverest computers in existence are currently no brighter than cockroaches.
Result: if the web wakes up and wants to sweep us aside, we may have a fight on our hands (perhaps even something similar to the man vs. machines battle in the Terminator films). But it is unlikely that the machines will want to destroy the planet – they “live” here, too.
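The "doubling every 18 months" figure in the probability note above is easy to underestimate; a minimal sketch (the 18-month doubling period is the article's own figure, everything else is plain arithmetic on that assumption) shows how quickly it compounds:

```python
# How much does capability grow if it doubles every 18 months?
# The 18-month doubling period is the article's own figure; the rest
# is plain arithmetic on that assumption.

DOUBLING_PERIOD_YEARS = 1.5  # 18 months

def growth_factor(years: float) -> float:
    """Multiplier on computing power after `years` of steady doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (10, 20, 30):
    print(f"after {years} years: x{growth_factor(years):,.0f}")
```

At a constant 18-month doubling, thirty years buys a factor of 2^20, roughly a million, which is why even sceptics who rate today's machines at cockroach level hesitate to extrapolate far ahead.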
Handout/AFP/Getty Images: Laboratory technicians and physicians work on samples during research on the evolving Ebola disease in bats, at the Center for Emerging and Zoonotic Diseases Research Laboratory of the National Institute for Communicable Diseases in Pretoria on Nov. 21, 2011.
This is possibly the most terrifying short-term threat because it is so plausible. The reason Ebola has not become a worldwide plague – and will not do so – is because it is so hard to transmit, and because it incapacitates and kills its victims so quickly. However, a modified version of the disease that can be transmitted through the air, or which allows its host to travel around for weeks, symptom-free, could kill many millions. It is unknown whether any terror group has the knowledge or facilities to do something like this, but it is chilling to realize that the main reason we understand Ebola so well is that its potential to be weaponized was quickly recognized by defence experts.
Probability: someone will probably try it one day.
Result: potentially catastrophic. “Ordinary” infectious diseases such as avian-flu strains have the capability to wipe out hundreds of millions of people.
AP Photo/U.S. Army via Hiroshima Peace Memorial Museum: A mushroom cloud billows about one hour after a nuclear bomb was detonated above Hiroshima, Japan, on Aug. 6, 1945.
This is still the most plausible “doomsday” scenario. Despite arms-limitations treaties, there are more than 15,000 nuclear warheads and bombs in existence – many more, in theory, than would be required to kill every human on Earth. Even a small nuclear war has the potential to cause widespread devastation. In 2011, a study by NASA scientists concluded that a limited atomic war between India and Pakistan involving just 100 Hiroshima-sized detonations would throw enough dust into the air to cause temperatures to drop more than 1.2C globally for a decade.
Probability: high. Nine states have nuclear weapons, and more want to join the club. The nuclear wannabes are not paragons of democracy.
Result: it is unlikely that even a global nuclear war between Russia and NATO would wipe us all out, but it would kill billions and wreck the world economy for a century. A regional war, we now know, could have effects far beyond the borders of the conflict.
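For scale, the "100 Hiroshima-sized detonations" in the NASA study represent a tiny slice of the world's arsenals in yield terms; assuming roughly 15 kilotons per device (the commonly cited Hiroshima yield, not a figure from the article), the entire exchange sums to about 1.5 megatons:

```python
# Combined yield of the limited-war scenario in the NASA study.
# Assumption (not from the article): ~15 kt per Hiroshima-sized bomb,
# the commonly cited yield of the 1945 device.

HIROSHIMA_KT = 15.0
DETONATIONS = 100

total_kt = HIROSHIMA_KT * DETONATIONS
print(f"total yield: {total_kt:.0f} kt = {total_kt / 1000:.1f} Mt")
```

The striking point of the study is that even this modest total, a fraction of a single modern warhead's potential, was enough to produce a decade of global cooling in the model.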
CERN/MCT: This is one of the huge particle detectors in the Large Hadron Collider, a 17-mile-long tunnel under the French-Swiss border. Scientists are searching for evidence of what happened right after, and perhaps before, the Big Bang.
Before the Large Hadron Collider (LHC), the massive machine at CERN in Switzerland that detected the Higgs boson a couple of years ago, was switched on, there was a legal challenge from a German scientist called Otto Rossler, who claimed the atom-smasher could theoretically create a small black hole by mistake – which would then go on to eat the Earth.
The claim was absurd: the collisions in the LHC are far less energetic than those caused naturally by cosmic rays hitting the planet. But it is possible that, one day, a souped-up version of the LHC could create something that destroys the Earth – or even the universe – at the speed of light.
Probability: very low indeed.
Result: potentially devastating, but don’t bother cancelling the house insurance just yet.
AP Photo/Oculus Rift/Fox: This photo shows a scene from the X-Men: Days of Future Past virtual reality experience.
Many scientists have pointed out that there is something fishy about our universe. The physical constants – the numbers governing the fundamental forces and masses of nature – seem fine-tuned to allow life of some form to exist. The great physicist Sir Fred Hoyle once wondered if the universe might be a “put-up job”.
More recently, the Oxford University philosopher Nick Bostrom has speculated that our universe may be one of countless “simulations” running in some alien computer, much like a computer game. If so, we have to hope that the beings behind our fake universe are benign – and do not reach for the off-button should we start misbehaving.
Probability: according to Professor Bostrom’s calculations, if certain assumptions are made, there is a greater than 50% chance that our universe is not real. And the increasingly puzzling absence of any evidence of alien life may be indirect evidence that the universe is not what it seems.
Result: catastrophic, if the gamers turn against us. The only consolation is the knowledge that there is absolutely nothing we can do about it.
AP Photo/Charles Rex Arbogast: Floodwaters from the Souris River surround homes near Minot State University in Minot, N.D., on June 27, 2011. Global warming is rapidly turning America the beautiful into America the stormy and dangerous, according to the National Climate Assessment report released Tuesday, May 6, 2014.
Almost no serious scientists now doubt that human carbon emissions are having an effect on the planet’s climate. The latest report by the Intergovernmental Panel on Climate Change suggested that containing temperature rises to below 2C above the pre-industrial average is now unlikely, and that we face a future three or four degrees warmer than today.
This will not literally be the end of the world – but humanity will need all the resources at its disposal to cope with such a dramatic shift. Unfortunately, the effects of climate change will really start to kick in just at the point when the human population is expected to peak – at about nine billion by the middle of this century. Millions of people, mostly poor, face losing their homes to sea-level rises (by up to a metre or more by 2100) and shifting weather patterns may disrupt agriculture dramatically.
Probability: it is now almost certain that CO2 levels will keep rising to 600 parts per million and beyond. It is equally certain that the climate will respond accordingly.
Result: catastrophic in some places, less so in others (including northern Europe, where temperature rises will be moderated by the Atlantic). The good news is that, unlike with most of the disasters here, we have a chance to do something about climate change now.
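As a rough check on that 600 ppm threshold, a minimal linear projection (assuming a present-day level of about 420 ppm and recent growth of about 2.5 ppm per year; both values are assumptions here, not figures from the article) puts the crossing several decades out:

```python
# Rough linear projection of atmospheric CO2.
# Assumptions (not from the article): ~420 ppm today and recent
# growth of ~2.5 ppm per year. Actual growth has been accelerating,
# so this is a ballpark, not a forecast.

CURRENT_PPM = 420.0
GROWTH_PPM_PER_YEAR = 2.5

def years_until(target_ppm: float) -> float:
    """Years until `target_ppm` is reached at a constant growth rate."""
    return (target_ppm - CURRENT_PPM) / GROWTH_PPM_PER_YEAR

print(f"600 ppm in roughly {years_until(600):.0f} years")
```

Because the real growth rate has been creeping upward, a constant-rate projection like this one tends to overestimate the time remaining.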



This work is licensed under a Creative Commons Attribution 4.0 International License unless otherwise marked.

