DeepMind’s New Research on Linking Memories, and How It Applies to AI

Your daily selection of the latest science news!

According to Singularity Hub (This article and its images were originally posted on Singularity Hub September 26, 2018 at 11:06AM.)

There’s a cognitive quirk humans have that seems deceptively elementary. For example: every morning, you see a man in his 30s walking a boisterous collie. Then one day, a white-haired lady who bears a striking resemblance to him comes down the street with the same dog.

Subconsciously, we immediately make a series of deductions: the man and woman might be from the same household. The lady may be the man’s mother, or some other close relative. Perhaps she’s taking over his role because he’s sick, or busy. We weave an intricate story about those strangers, pulling material from our memories to make it coherent.

This ability to link one past memory with another looks effortless, yet scientists don’t understand how we do it. It’s not just an academic curiosity: our ability to integrate multiple memories is the first cognitive step that lets us gain new insight into experiences and generalize patterns across those encounters. Without this step, we’d forever live in a disjointed world.


This article and its images were originally posted on [Singularity Hub] September 26, 2018 at 11:06AM. Credit to the original author and Singularity Hub | ESIST.T>G>S Recommended Articles Of The Day.


DeepMind Created a Test to Measure an AI’s Ability to Reason

Your daily selection of the hottest trending tech news!

According to Futurism (This article and its images were originally posted on Futurism July 13, 2018 at 11:43AM.)

GENERAL INTELLIGENCE. AI has gotten pretty good at completing specific tasks, but it’s still a long way from having general intelligence, the kind of all-around smarts that would let AI navigate the world the same way humans or even animals do.

One of the key elements of general intelligence is abstract reasoning — the ability to think beyond the “here and now” to see more nuanced patterns and relationships and to engage in complex thought. On Wednesday, researchers at DeepMind — a Google subsidiary focused on artificial intelligence — published a paper detailing their attempt to measure various AIs’ abstract reasoning capabilities, and to do so, they looked to the same tests we use to measure our own.

HUMAN IQ. In humans, we measure abstract reasoning using fairly straightforward visual IQ tests. One popular test, called Raven’s Progressive Matrices, features several rows of images with the final row missing its final image. It’s up to the test taker to choose the image that should come next based on the pattern of the completed rows.

The test doesn’t outright tell the test taker what to look for in the images — maybe the progression has to do with the number of objects within each image, their color, or their placement. It’s up to them to figure that out for themselves using their ability to reason abstractly.

To apply this test to AIs, the DeepMind researchers created a program that could generate unique matrix problems. Then, they trained various AI systems to solve these matrix problems.
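DeepMind’s actual generator produced rendered image matrices at scale; the sketch below is only illustrative (the panel representation and all names are hypothetical, not DeepMind’s code). It generates one family of problems in which the hidden rule is a progression in the number of shapes:

```python
import random

def make_count_progression_problem(rng=random.Random(0)):
    """Build one toy Raven-style problem whose hidden rule is
    'shape count increases by a fixed step across each row'."""
    step = rng.randint(1, 2)
    shape = rng.choice(["circle", "square", "triangle"])

    # Three rows of three panels; each panel is an abstract description
    # that a real generator would render to an image.
    rows = [[{"shape": shape, "count": start + step * j} for j in range(3)]
            for start in (rng.randint(1, 3) for _ in range(3))]

    answer = rows[2].pop()                         # withhold the bottom-right panel
    context = [p for row in rows for p in row]     # the 8 visible panels

    # Distractor choices: same shape, wrong counts.
    choices = [answer] + [{"shape": shape, "count": answer["count"] + d}
                          for d in (-2, -1, 1, 2)]
    rng.shuffle(choices)
    return {"context": context, "choices": choices,
            "target": choices.index(answer)}
```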

Finally, they tested the systems. In some cases, they used test problems with the same abstract factors as the training set — like both training and testing the AI on problems that required it to consider the number of shapes in each image. In other cases, they used test problems incorporating different abstract factors than those in the training set. For example, they might train the AI on problems that required it to consider the number of shapes in each image, but then test it on ones that required it to consider the shapes’ positions to figure out the right answer.
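One way to picture these two regimes: tag each generated problem with the abstract factor its rule manipulates, then either share factors between training and test sets or hold one out entirely. Again a hedged sketch, with hypothetical factor names:

```python
import random

FACTORS = ["shape_count", "shape_position", "object_color", "line_type"]

def split_by_held_out_factor(problems, held_out):
    """Extrapolation regime: every test problem uses an abstract factor the
    model never saw during training. (In the interpolation regime, train and
    test problems would simply be sampled from the same set of factors.)"""
    train = [p for p in problems if p["factor"] != held_out]
    test = [p for p in problems if p["factor"] == held_out]
    return train, test

rng = random.Random(0)
problems = [{"id": i, "factor": rng.choice(FACTORS)} for i in range(1000)]

# Train on count/color/line problems; test purely on position problems.
train, test = split_by_held_out_factor(problems, held_out="shape_position")
```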

BETTER LUCK NEXT TIME. The results of the test weren’t great. When the training problems and test problems focused on the same abstract factors, the systems fared OK, correctly answering the problems 75 percent of the time. However, the AIs performed very poorly when the testing set differed from the training set, even when the difference was minor (for example, training on matrices that featured dark-colored objects and testing on matrices that featured light-colored objects).

Ultimately, the team’s AI IQ test shows that even some of today’s most advanced AIs can’t figure out problems we haven’t trained them to solve. That means we’re probably still a long way from general AI. But at least we now have a straightforward way to monitor our progress.


This article and images were originally posted on [Futurism] July 13, 2018 at 11:43AM. Credit to Author Kristin Houser and Futurism | ESIST.T>G>S Recommended Articles Of The Day.


This DeepMind AI Spontaneously Developed Digital Navigation ‘Neurons’ Like Ours


Your daily selection of the latest science news!

According to Singularity Hub


When Google DeepMind researchers trained a neural network to tackle a virtual maze, it spontaneously developed digital equivalents to the specialized neurons called grid cells that mammals use to navigate. Not only did the resulting AI system have superhuman navigation capabilities, but the research could also provide insight into how our brains work.

Grid cells were the subject of the 2014 Nobel Prize in Physiology or Medicine, alongside other navigation-related neurons. These cells are arranged in a lattice of hexagons, and the brain effectively overlays this pattern onto its environment. Whenever the animal crosses a point in space represented by one of the corners of these hexagons, a neuron fires, allowing the animal to track its movement.

Mammalian brains actually have multiple arrays of these cells. These arrays create overlapping grids of different sizes and orientations that together act like an in-built GPS. The system even works in the dark and independently of the animal’s speed or direction.
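The description above corresponds to a standard toy model from neuroscience (not DeepMind’s network): summing three cosine gratings oriented 60 degrees apart produces a hexagonal firing map, and varying the spacing and orientation gives the overlapping modules just described. A minimal sketch:

```python
import numpy as np

def grid_cell_rate(x, y, spacing=0.5, orientation=0.0, phase=(0.0, 0.0)):
    """Classic three-cosine grid-cell model: three plane waves at 60-degree
    offsets interfere to produce a hexagonal lattice of firing fields."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)      # wave number for this spacing
    rate = np.zeros_like(x, dtype=float)
    for i in range(3):
        theta = orientation + i * np.pi / 3     # 0, 60, 120 degrees
        rate += np.cos(k * (np.cos(theta) * (x - phase[0]) +
                            np.sin(theta) * (y - phase[1])))
    return np.maximum(rate, 0.0)                # firing rates cannot be negative

# Overlapping modules of different scales and orientations, as in the text.
xs, ys = np.meshgrid(np.linspace(0, 2, 200), np.linspace(0, 2, 200))
modules = [grid_cell_rate(xs, ys, spacing=s, orientation=o)
           for s, o in [(0.3, 0.0), (0.5, 0.2), (0.8, 0.5)]]
```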

Exactly how these cells work and the full range of their functions is still something of a mystery, though. One recently proposed hypothesis suggests they could be used for vector-based navigation—working out the distance and direction to a target “as the crow flies.”

That’s a useful capability because it makes it possible for animals or artificial agents to quickly work out and choose the best route to a particular destination and even find shortcuts.

So, the researchers at DeepMind decided to see if they could test the idea in silico using neural networks, as they roughly mimic the architecture of the brain.

To start with, they used simulations of how rats move around square and circular environments to train a neural network to do path integration—a technical name for using dead-reckoning to work out where you are by keeping track of what direction and speed you’ve moved from a known point.
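Path integration itself is simple to state in code. Here is a minimal sketch of the supervised setup (the details below are assumptions for illustration, not DeepMind’s implementation): the network receives a stream of speeds and headings, and its training targets are the positions that dead reckoning produces.

```python
import numpy as np

def dead_reckon(speeds, headings, start=(0.0, 0.0), dt=0.02):
    """Dead reckoning: accumulate each step's displacement, computed from
    speed and heading, to track position from a known starting point."""
    pos = np.array(start, dtype=float)
    path = [pos.copy()]
    for v, theta in zip(speeds, headings):
        pos += v * dt * np.array([np.cos(theta), np.sin(theta)])
        path.append(pos.copy())
    return np.array(path)

# A simulated wandering trajectory: inputs are (speed, heading) sequences,
# and the training targets are the integrated positions.
rng = np.random.default_rng(0)
headings = np.cumsum(rng.normal(0.0, 0.2, size=500))   # smoothly drifting heading
speeds = np.abs(rng.normal(0.25, 0.05, size=500))      # speed in arbitrary units
targets = dead_reckon(speeds, headings)
```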

They found that, after training, patterns of activity that looked very similar to grid cells spontaneously appeared in one of the layers of the neural network. The researchers hadn’t programmed the model to exhibit this behavior.

To test whether these grid cells could play a role in vector-based navigation, they augmented the network so it could be trained using reinforcement learning. They set it to work navigating challenging virtual mazes and tweaked its performance by giving rewards for good navigation.
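DeepMind’s agent was a deep network trained with reinforcement learning in rich 3D mazes; as a much smaller analogue of “rewards for good navigation,” here is tabular Q-learning on a toy grid maze. Everything below is illustrative, not their setup:

```python
import random

WALLS = {(1, 1), (1, 2), (3, 1)}
GOAL, SIZE = (4, 4), 5
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(state, action):
    nxt = (state[0] + action[0], state[1] + action[1])
    if nxt in WALLS or not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE):
        nxt = state                         # blocked: stay put
    reward = 1.0 if nxt == GOAL else -0.01  # step cost rewards short routes
    return nxt, reward, nxt == GOAL

Q, rng = {}, random.Random(0)
for episode in range(500):
    s = (0, 0)
    for _ in range(100):
        # Epsilon-greedy action selection over the learned values.
        a = (rng.randrange(4) if rng.random() < 0.1
             else max(range(4), key=lambda i: Q.get((s, i), 0.0)))
        s2, r, done = step(s, ACTIONS[a])
        best_next = max(Q.get((s2, i), 0.0) for i in range(4))
        Q[(s, a)] = Q.get((s, a), 0.0) + 0.1 * (r + 0.9 * best_next - Q.get((s, a), 0.0))
        s = s2
        if done:
            break
```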

The agent quickly learned how to navigate the mazes, taking shortcuts when they became available and outperforming a human expert, according to results published in the journal Nature this week.

To test whether the digital grid cells were responsible for this performance, the researchers carried out another experiment where they prevented the artificial grid cells from forming, which significantly reduced the ability of the system to efficiently navigate. The DeepMind team says this suggests these cells are involved in vector-based navigation as had been hypothesized.

“It is striking that the computer model, coming from a totally different perspective, ended up with the grid pattern we know from biology,” Edvard Moser, a neuroscientist at the Kavli Institute for Systems Neuroscience in Trondheim, Norway, and one of the Nobel winners who discovered grid cells, told Nature.

But how much the experiment can actually teach us about the human brain is up for debate.

Stefan Leutgeb, a neurobiologist at the University of California, San Diego, told Quanta that the research makes a good case for grid cells being involved in vector navigation, but that it is ultimately limited by being a simulation on a computer. “This is a way in which it could work, but it doesn’t prove that it’s the way it works in animals,” he says.

Importantly, the research doesn’t really seem to explain how grid cells help with these kinds of navigation tasks, simply that they do. That’s in part due to the difficulty of interpreting neural networks, neuroscientists Francesco Savelli and James Knierim at Johns Hopkins University write in an accompanying opinion article in Nature.

“That the network converged on such a solution is compelling evidence that there is something special about grid cells’ activity patterns that supports path integration,” they write. “The black-box character of deep-learning systems, however, means that it might be hard to determine what that something is.”

The DeepMind researchers are more optimistic though. In a blog post, they say their findings not only support the theory that grid cells are involved in vector-based navigation, but also more broadly demonstrate the potential of using AI to test theories about how the brain works. That knowledge in turn could eventually be put back to use in developing more powerful AI systems.

In general, DeepMind is profoundly interested in how the fields of neuroscience and AI can connect and inform each other—writing papers on the subject and using inspiration from the brain to make powerful neural networks capable of amazing and surprising feats.

The research on grid cells is still very much basic science, but being able to mimic the powerful navigational capabilities of animals could be extremely useful for everything from robots to drones to self-driving cars.

Image Credit: DeepMind


This article and images were originally posted on [Singularity Hub] May 14, 2018 at 11:02AM. Credit to Author and Singularity Hub | ESIST.T>G>S Recommended Articles Of The Day


Google DeepMind partnership with UK’s National Health Service ruled to be illegal

The UK’s privacy body, the Information Commissioner’s Office (ICO), has ruled that a research partnership arrangement between Google DeepMind and the National Health Service (NHS) was illegal …


Some 1.6 million patient records were shared with Google in an attempt to use AI to predict which patients would be at risk from kidney damage. While the initiative was well-intentioned, it was suggested back in May that the legal basis for the data-sharing was ‘inappropriate.’ The ICO has today found that it was in fact illegal.

Today my office has announced that the Royal Free NHS Foundation Trust did not comply with the Data Protection Act when it turned over the sensitive medical data of around 1.6 million patients to Google DeepMind, a private sector firm, as part of a clinical safety initiative.

It was the NHS, rather than Google, which broke the law.

The finding hinges on something of a technicality. The law says that patients are ‘implied’ to have consented to data being shared for the purposes of their direct care, but as the aim here was to develop an app that would help future patients, no consent could be assumed. It was therefore ruled that patient consent should have been sought to use their data for research purposes. In practice, most patients consent to both forms of sharing, so it’s likely that a similar number of records would have been shared either way.

The NHS Trust in question has now agreed to change the way in which it shares data, and the ICO is keen to stress that it does not believe that there need be a conflict between privacy and research.

It’s welcome that the trial looks to have been positive. The Trust has reported successful outcomes. Some may reflect that data protection rights are a small price to pay for this.

But what stood out to me on looking through the results of the investigation is that the shortcomings we found were avoidable. The price of innovation didn’t need to be the erosion of legally ensured fundamental privacy rights. I’ve every confidence the Trust can comply with the changes we’ve asked for and still continue its valuable work. This will also be true for the wider NHS as deployments of innovative technologies are considered.

The ICO is simply reminding NHS bodies that they must ensure the correct patient consent is in place.

The project is one of a number of medical research partnerships between the NHS and Google’s DeepMind.


This article and images were originally posted on [9to5Google] July 3, 2017 at 09:24AM.


OpenAI Just Beat Google DeepMind at Atari With an Algorithm From the 80s

AI research has a long history of repurposing old ideas that have gone out of style. Now researchers at Elon Musk’s open source AI project have revisited “neuroevolution,” a field that has been around since the 1980s, and achieved state-of-the-art results.

The group, led by OpenAI’s research director Ilya Sutskever, has been exploring the use of a subset of algorithms from this field, called “evolution strategies,” which are aimed at solving optimization problems.

Despite the name, the approach is only loosely linked to biological evolution, the researchers say in a blog post announcing their results. On an abstract level, it relies on allowing successful individuals to pass on their characteristics to future generations. The researchers have taken these algorithms and reworked them to work better with deep neural networks and run on large-scale distributed computing systems.


To validate their effectiveness, they then set them to work on a series of challenges seen as benchmarks for reinforcement learning, the technique behind many of Google DeepMind’s most impressive feats, including beating a champion Go player last year.

One of these challenges is to train the algorithm to play a variety of computer games developed by Atari. DeepMind made the news in 2013 when it showed it could use Deep Q-Learning—a combination of reinforcement learning and convolutional neural networks—to successfully tackle seven such games. The other is to get an algorithm to learn how to control a virtual humanoid walker in a physics engine.

To do this, the algorithm starts with a random policy—the set of rules that govern how the system should behave to get a high score in an Atari game, for example. It then creates several hundred copies of the policy—with some random variation—and these are tested on the game.

These policies are then mixed back together again, but with greater weight given to the policies that got the highest score in the game. The process repeats until the system comes up with a policy that can play the game well.
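That loop is essentially the evolution-strategies update from OpenAI’s paper. A minimal sketch, with a toy fitness function standing in for a game score:

```python
import numpy as np

def evolution_strategies(fitness, dim, pop_size=200, sigma=0.1, lr=0.02, iters=300):
    """Perturb the current policy parameters with Gaussian noise, score each
    perturbed copy, and move the parameters toward the higher-scoring noise."""
    theta = np.zeros(dim)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        noise = rng.standard_normal((pop_size, dim))      # the random variations
        scores = np.array([fitness(theta + sigma * eps) for eps in noise])
        scores = (scores - scores.mean()) / (scores.std() + 1e-8)
        theta += lr / (pop_size * sigma) * noise.T @ scores   # reward-weighted mix
    return theta

# Toy objective standing in for an episode score: best at the point (3, -2).
best = evolution_strategies(lambda p: -np.sum((p - np.array([3.0, -2.0])) ** 2),
                            dim=2)
```

Because each perturbed copy is scored independently, workers need to exchange little more than scalar scores, which is the limited-communication property discussed below.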


In one hour of training on the Atari challenge, the algorithm reached a level of mastery that took a reinforcement-learning system published by DeepMind last year a whole day to learn. On the walking problem, the system took 10 minutes, compared to 10 hours for Google’s approach.

One of the keys to this dramatic performance was the fact that the approach is highly “parallelizable.” To solve the walking simulation, they spread computations over 1,440 CPU cores, while in the Atari challenge they used 720.

This is possible because it requires limited communication between the various “worker” algorithms testing the candidate policies. Scaling reinforcement algorithms like the one from DeepMind in the same way is challenging because there needs to be much more communication, the researchers say.

The approach also doesn’t require backpropagation, a common technique in neural network-based approaches, including deep reinforcement learning. Backpropagation effectively compares the network’s output with the desired output and then feeds the resulting information back into the network to help optimize it.

The researchers say this makes the code shorter and the algorithm between two and three times faster in practice. They also suggest it will be particularly suited to longer challenges and situations where actions have long-lasting effects that may not become apparent until many steps down the line.

The approach does have its limitations, though. These kinds of algorithms are usually compared based on their data efficiency—the number of iterations required to achieve a specific score in a game, for example. On this metric, the OpenAI approach does worse than reinforcement learning approaches, although this is offset by the fact that it is highly parallelizable and so can carry out iterations more quickly.

For supervised learning problems like image classification and speech recognition, which currently have the most real-world applications, the approach can also be as much as 1,000 times slower than other approaches that use backpropagation.

Nevertheless, the work demonstrates promising new applications for out-of-style evolutionary approaches, and OpenAI is not the only group investigating them. Google has been experimenting with similar strategies to devise better image recognition algorithms. Whether this represents the next evolution in AI, we will have to wait and see.

Image Credit: Shutterstock


This article and images were originally posted on Singularity Hub.

Google’s AlphaGo bested the world’s top Go players

When Google’s artificial intelligence program AlphaGo made history by taking down Korea’s Lee Sedol—one of the world’s best Go players—in a landslide 4-1 victory in March, Chinese player Ke Jie was skeptical. He famously wrote on Weibo the next day, “Even if AlphaGo can defeat Lee Sedol, it can’t beat me,” and has since agreed to take on the AI at an undecided time.

But now even Ke, the reigning top-ranked Go player, has acknowledged that human beings are no match for robots in the complex board game, after he lost three games to an AI that mysteriously popped up online in recent days.

The AI turned out to be AlphaGo in disguise.

On Jan. 4, after winning more than 50 games against several of the world’s best Go players, Ke included, a user registered with an ID of “Master” on two Chinese board game platforms came forward to identify itself as AlphaGo.

“I’m AlphaGo’s Doctor Huang,” the user “Master” wrote on foxwq.com, according to screenshots from Chinese media reports. Taiwanese developer Aja Huang is a member of Google’s DeepMind team behind the AI.

DeepMind co-founder Demis Hassabis, whose London-based AI startup was acquired by Google in 2014, later confirmed on Twitter that Master is a new version of AlphaGo under “unofficial testing.”

Since Dec. 29, Master has defeated a long list of top Go players, including Korea’s Park Jung-hwan (world No. 3), Japan’s Iyama Yuta (No. 5), and Ke, in fast-paced games. It won 51 games straight before its 52nd rival, Chen Yaoye, went offline, forcing the game to be recorded as a tie. By Jan. 4, when the test was completed, Master had racked up 60 wins, plus the one tie, and zero losses, according to numerous reports (link in Chinese).

“A new storm is coming,” Ke writes. (Weibo)

Ke took to Weibo again (link in Chinese, registration required) on Dec. 31, after his first two losses, writing:

“I have studied Go software for over half a year since March, learning theories and putting them into practice countless times. I only wondered why computers are better… Humans have refined the game over thousands of years—but computers now tell us humans are all wrong. I think no one is even close to knowing the basics of Go.”

He added that Go players should now start to learn from computers to improve their skills.

Before Master claimed to be AlphaGo, there were unconfirmed reports by several Chinese media outlets, including the Western China Metropolis Daily (link in Chinese), that Master is an updated version of AlphaGo in a beta test, and that the two Chinese board game platforms had signed nondisclosure agreements with Google.

Even before Google confirmed the AI’s identity, there was little doubt that Master was not human. For one thing, just look at the unthinkable number of consecutive victories over top players in such a short period of time. As Chinese national Go team coach Yu Bin argued, only a robot could sustain a move almost every five seconds in a fast-paced format that required each player to make at least three moves every 20 seconds.

AlphaGo’s March victory over Lee was the first powerful demonstration of how AI could buck conventional wisdom in a game that originated in China two or three millennia ago. Master’s—or AlphaGo’s—sweeping wins have now driven home the point more emphatically.

China’s Go world champion, Nie Weiping, said (link in Chinese) after he lost by 7.5 points to Master on Jan. 4, “Go is not as simple as we thought; there’s still huge room for us humans to explore. Either AlphaGo or Master, it’s sent by the ‘Go God’ to guide humans.”


This article was originally posted on Quartz

By Zheping Huang


Google Cuts Its Giant Electricity Bill With DeepMind-Powered AI


Google just paid for part of its acquisition of DeepMind in a surprising way.

The internet giant is using technology from the DeepMind artificial intelligence subsidiary for big savings on the power consumed by its data centers, according to DeepMind Co-Founder Demis Hassabis.

In recent months, the Alphabet Inc. unit put a DeepMind AI system in control of parts of its data centers to reduce power consumption by manipulating computer servers and related equipment like cooling systems. It uses a technique similar to the one DeepMind’s software used to teach itself to play Atari video games, Hassabis said in an interview at a recent AI conference in New York.

The system cut power usage in the data centers by several percentage points, “which is a huge saving in terms of cost but, also, great for the environment,” he said.

The savings translate into a 15 percent improvement in power usage efficiency, or PUE, Google said in a statement. PUE measures how much electricity Google uses for its computers, versus the supporting infrastructure like cooling systems.

Google said it used 4,402,836 MWh of electricity in 2014, equivalent to the average yearly consumption of about 366,903 U.S. family homes. A significant proportion of Google’s spending on electricity comes from its data centers, which support its globe-spanning web services and mobile apps.
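For readers unfamiliar with the metric: PUE is the ratio of total facility energy to the energy delivered to the computing equipment itself, so 1.0 is the theoretical ideal. A small worked example follows; the data-center figures are illustrative assumptions, not Google’s published numbers, and only the MWh and homes figures come from the article:

```python
# PUE = total facility energy / energy used by the IT equipment itself.
it_energy = 100.0                 # server energy, arbitrary units (assumed)
pue_before = 1.12                 # an illustrative modern data-center PUE

# Reading a "15 percent improvement in PUE" as 15% less overhead above 1.0:
overhead = pue_before - 1.0
pue_after = 1.0 + overhead * (1 - 0.15)     # -> 1.102

# Sanity-checking the consumption comparison quoted above:
total_mwh, homes = 4_402_836, 366_903
print(total_mwh / homes)          # ~12 MWh per home per year, matching the article
```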


Source: Google Cuts Its Giant Electricity Bill With DeepMind-Powered AI – Bloomberg