DeepMind’s New Research on Linking Memories, and How It Applies to AI

Your daily selection of the latest science news!

According to Singularity Hub (This article and its images were originally posted on Singularity Hub September 26, 2018 at 11:06AM.)

There’s a cognitive quirk humans have that seems deceptively elementary. For example: every morning, you see a man in his 30s walking a boisterous collie. Then one day, a white-haired lady who bears a striking resemblance to him comes down the street with the same dog.

Subconsciously we immediately make a series of deductions: the man and woman might be from the same household. The lady may be the man’s mother, or some other close relative. Perhaps she’s taking over his role because he’s sick, or busy. We weave an intricate story of those strangers, pulling material from our memories to make it coherent.

This ability—to link one past memory with another—is nothing but pure genius, and scientists don’t yet understand how we do it. It’s not just an academic curiosity: our ability to integrate multiple memories is the first cognitive step that lets us gain new insight into experiences, and generalize patterns across those encounters. Without this step, we’d forever live in a disjointed world.

  • Got any news or tips, or want to contact us directly? Feel free to email us: esistme@gmail.com.

To see more posts like these, please subscribe to our newsletter. By entering a valid email, you’ll receive top trending reports delivered to your inbox.

This article and its images were originally posted on [Singularity Hub] September 26, 2018 at 11:06AM. Credit to the original author and Singularity Hub | ESIST.T>G>S Recommended Articles Of The Day.

 

Donations are appreciated and go directly to supporting ESIST.Tech. Thank you in advance for helping us to continue to be a part of your online entertainment!

 

 

 

Researcher Discloses New Zero-Day Affecting All Versions of Windows

Your daily selection of the hottest trending tech news!

According to The Hacker News (This article and its images were originally posted on The Hacker News September 21, 2018 at 01:33PM.)

A security researcher has publicly disclosed an unpatched zero-day vulnerability in all supported versions of the Microsoft Windows operating system (including server editions) after the company failed to patch a responsibly disclosed bug within the 120-day deadline.

Discovered by Lucas Leong of the Trend Micro Security Research team, the zero-day vulnerability resides in the Microsoft JET Database Engine and could allow an attacker to remotely execute malicious code on any vulnerable Windows computer.

The Microsoft JET Database Engine, or simply JET (Joint Engine Technology), is a database engine integrated within several Microsoft products, including Microsoft Access and Visual Basic.


This article and images were originally posted on [The Hacker News] September 21, 2018 at 01:33PM. Credit to the original author and The Hacker News | ESIST.T>G>S Recommended Articles Of The Day.

 


Military Pilots Can Control Three Jets at Once via a Neural Implant

Your daily selection of the hottest trending tech news!

According to Futurism (This article and its images were originally posted on Futurism September 19, 2018 at 10:28AM.)

MIND CONTROL

The military is making it easier than ever for soldiers to distance themselves from the consequences of war. When drone warfare emerged, pilots could, for the first time, sit in an office in the U.S. and drop bombs in the Middle East.

Now, one pilot can do it all, just using their mind — no hands required.

Earlier this month, DARPA, the military’s research division, unveiled a project that it had been working on since 2015: technology that grants one person the ability to pilot multiple planes and drones with their mind.

“As of today, signals from the brain can be used to command and control … not just one aircraft but three simultaneous types of aircraft,” Justin Sanchez, director of DARPA’s Biological Technologies Office, said, according to Defense One.


This article and images were originally posted on [Futurism] September 19, 2018 at 10:28AM. Credit to the original author and Futurism | ESIST.T>G>S Recommended Articles Of The Day.

 


Are Digital Devices Altering Our Brains?

Your daily selection of the latest science news!

According to Scientific American (This article and its images were originally posted on Scientific American Content September 11, 2018 at 08:04AM.)

Ten years ago, technology writer Nicholas Carr published an article in the Atlantic entitled “Is Google Making Us Stupid?” He strongly suspected the answer was “yes.” Finding himself less and less able to focus, remember things or absorb more than a few pages of text, he accused the Internet of radically changing people’s brains. And that is just one of the grievances leveled against the Internet and the various devices we use to access it, including cell phones, tablets, game consoles and laptops. Often the complaints target video games that involve fighting or war, arguing that they cause players to become violent.

But digital devices also have fervent defenders—in particular the promoters of brain-training games, who claim that their offerings can help improve attention, memory and reflexes. Who, if anyone, is right?

The answer is less straightforward than you might think. Take Carr’s accusation. As evidence, he quoted findings of neuroscientists who showed that the brain is more plastic than previously understood. In other words, it has the ability to reprogram itself over time, which could account for the Internet’s effect on it. Yet in a 2010 opinion piece in the Los Angeles Times, psychologists Christopher Chabris, then at Union College, and Daniel J. Simons of the University of Illinois at Urbana-Champaign rebutted Carr’s view: “There is simply no experimental evidence to show that living with new technologies fundamentally changes brain organization in a way that affects one’s ability to focus,” they wrote. And the debate goes on.


This article and its images were originally posted on [Scientific American Content] September 11, 2018 at 08:04AM. All credit to both the author Elena Pasquinelli and Scientific American Content | ESIST.T>G>S Recommended Articles Of The Day.

 


Scientists Say They’ve Found The Driver of False Beliefs, And It’s Not a Lack of Intelligence

Your daily selection of the latest science news!

According to ScienceAlert (This article and its images were originally posted on ScienceAlert September 9, 2018 at 07:59AM.)

This is why some people are so confident in their “facts”.

Why is it sometimes so hard to convince someone that the world is indeed a globe, or that climate change is actually caused by human activity, despite the overwhelming evidence?

Scientists think they might have the answer, and it has less to do with a lack of understanding and more to do with the feedback people are getting.

Getting positive or negative reactions to something you do or say has a greater influence on your thinking than logic and reasoning, the new research suggests – so if you’re in a group of like-minded people, that’s going to reinforce your thinking.

Receiving good feedback also encourages us to think we know more than we actually do.

In other words, the more sure we become that our current position is right, the less likely we are to take into account other opinions or even cold, hard scientific data.


This article and its images were originally posted on [ScienceAlert] September 9, 2018 at 07:59AM. All credit to both the author David Nield and ScienceAlert | ESIST.T>G>S Recommended Articles Of The Day.

 


The 4 Waves of AI: Who Will Own the Future of Technology?

Your daily selection of the latest science news!

According to Singularity Hub (This article and its images were originally posted on Singularity Hub September 7, 2018 at 12:11PM.)

Recently, I picked up Kai-Fu Lee’s newest book, AI Superpowers.

Kai-Fu Lee is one of the most plugged-in AI investors on the planet, managing over $2 billion between six funds and over 300 portfolio companies in the US and China.

Drawing from his pioneering work in AI, executive leadership at Microsoft, Apple, and Google (where he served as founding president of Google China), and his founding of VC fund Sinovation Ventures, Lee shares invaluable insights about:

  1. The four factors driving today’s AI ecosystems;
  2. China’s extraordinary inroads in AI implementation;
  3. Where autonomous systems are headed;
  4. How we’ll need to adapt.

With a foothold in both Beijing and Silicon Valley, Lee looks at the power balance between Chinese and US tech behemoths—each turbocharging new applications of deep learning and sweeping up global markets in the process.


This article and its images were originally posted on [Singularity Hub] September 7, 2018 at 12:11PM. All credit to both the author and Singularity Hub | ESIST.T>G>S Recommended Articles Of The Day.

 


Forecasting earthquake aftershock locations with AI-assisted science

Your daily selection of the hottest trending tech news!

According to The Official Google Blog (This article and its images were originally posted on The Official Google Blog August 30, 2018 at 01:32PM.) – Cover image via Techherald

From hurricanes and floods to volcanoes and earthquakes, the Earth is continuously evolving in fits and spurts of dramatic activity. Earthquakes and subsequent tsunamis alone have caused massive destruction in the last decade—even over the course of writing this post, there were earthquakes in New Caledonia, Southern California, Iran, and Fiji, just to name a few.

Earthquakes typically occur in sequences: an initial “mainshock” (the event that usually gets the headlines) is often followed by a set of “aftershocks.” Although these aftershocks are usually smaller than the mainshock, in some cases they may significantly hamper recovery efforts. And although the timing and size of aftershocks have been understood and explained by established empirical laws, forecasting the locations of these events has proven more challenging.


This article and images were originally posted on [The Official Google Blog] August 30, 2018 at 01:32PM. Credit to Author Phoebe DeVries and The Official Google Blog | ESIST.T>G>S Recommended Articles Of The Day.

 


China Is Quickly Becoming an AI Superpower

Your daily selection of the latest science news!

According to Singularity Hub (This article and its images were originally posted on Singularity Hub August 29, 2018 at 11:04AM.)

Last year, China’s government put out its plan to lead the world in AI by 2030.

As Eric Schmidt has explained, “It’s pretty simple. By 2020, they will have caught up. By 2025, they will be better than us. By 2030, they will dominate the industries of AI.”

And the figures don’t lie.

With a $14 trillion GDP, China is predicted to account for over 35 percent of global economic growth from 2017 to 2019—nearly double the US GDP’s predicted 18 percent.

And AI is responsible for a big chunk of that.

PricewaterhouseCoopers recently projected AI’s deployment will add $15.7 trillion to the global GDP by 2030, with China taking home $7 trillion of that total, dwarfing North America’s $3.7 trillion in gains. In 2017, China accounted for 48 percent of the world’s total AI startup funding, compared to America’s 38 percent.

Already, Chinese investments in AI, chips, and electric vehicles have reached an estimated $300 billion. Meanwhile, AI giant Alibaba has unveiled plans to invest $15 billion in international research labs from the US to Israel, with others following suit.

Beijing has now mobilized local government officials around AI entrepreneurship and research, led by billions in guiding funds and VC investments. And behind the scenes, a growing force of driven AI entrepreneurs trains cutting-edge algorithms on some of the largest datasets available to date.

As discussed by Kai-Fu Lee in his soon-to-be-released book AI Superpowers, four main drivers are tipping the balance in China’s favor:

  1. Abundant data
  2. Hungry entrepreneurs empowered by new tools
  3. Growing AI expertise
  4. Mass government funding and support

Let’s dive in.

1. Abundant Data

Perhaps China’s biggest advantage is the sheer quantity of its data. Tencent’s WeChat platform alone has over one billion monthly active users. That’s more than the entire population of Europe. Take mobile payments spending: China outstrips the US by a ratio of 50 to 1. Chinese e-commerce purchases are almost double US totals.

But China’s data advantage involves more than just quantity. As China witnesses an explosion of O2O (online-to-offline) startups, their data is creating a new intelligence layer unparalleled in the West.

Whereas American users’ payment and transportation data are fragmented across various platforms, Chinese AI giants like Tencent have created unified online ecosystems that concentrate all your data in one place.

Take mobile payment data, for instance. While the US saw $112 billion worth of mobile payments in 2016, Chinese mobile payments exceeded $9 trillion in the same year.

That means mobile payment platforms like WeChat Wallet and Alipay have data on everything from your dumplings purchase from a street vendor to your recent 100 RMB donation to an earthquake relief fund. This allows them to generate complex maps charting hundreds of millions of users’ every move.

With the unequaled rise of bike-sharing startups like China’s ofo and Mobike, Chinese companies can now harness deeply textured maps of population movement, allowing them to intuit everything from your working habits to your grocery shopping routine.

And as China’s facial recognition capacities explode, these maps are increasingly populated with faces even when you’re not online.

As Chinese tech companies continue merging users’ online behavior with their physical world, the data they collect offers them a tremendous edge over their Silicon Valley counterparts.

This brings me to our second AI driver: hungry entrepreneurs.

2. Hungry Entrepreneurs

While China’s ‘copycat’ era saw a massive wave of mediocre-quality products and unoriginal mimicry, it also forged some of the most competitive, rapidly iterating entrepreneurs in the world.

Refined by fire, Chinese tech entrepreneurs have stopped at nothing to beat the competition, pulling every trick and tactic to smear, outpace and outsmart parallel startups.

Former founder-director of Google Brain Andrew Ng noted the hunger raging among Chinese entrepreneurs: “The velocity of work is much faster in China than in most of Silicon Valley. When you spot a business opportunity in China, the window of time you have to respond is very short.”

But as China’s AI expertise has exploded, and startups have learned to tailor American copycat products to a Chinese audience, these entrepreneurs are finally shrugging off their former ‘copycat’ reputation, building businesses with no analogs in the West.

Now home to three of the seven AI giants (Baidu, Alibaba, and Tencent), China also sees a thriving AI startup ecosystem.

Just this year, China’s computer vision startup SenseTime became the most valuable AI startup in the world. Capable of identifying your face, gauging your age and even your potential purchasing habits, SenseTime is now a world-class leader in facial recognition technologies, applying their AI to everything from traffic surveillance to employee authorization.

After a $600 million Alibaba-led funding round in April, SenseTime raised a further $620 million in its ‘Series C+’ round announced in May, now claiming a valuation of over $4.5 billion.

And SenseTime is not alone. As of this past April, China is home to 168 unicorns, collectively valued at over $628 billion.

But in order to leverage AI for billion-dollar startups, China counts on its growing expertise.

3. AI Expertise

It is important to note that China is still new to the game. When deep learning got its big break in 2012—when a neural network decimated the competition in an international computer vision contest—China had barely woken up to the AI revolution.

But in a few short years, China’s AI community has caught up fast. While the world’s most elite AI researchers still largely cluster in the US, favoring companies like Google, Chinese tech giants are quickly closing the gap.

Already in academia, Chinese AI researchers stand shoulder-to-shoulder with their American contemporaries. At AAAI’s 2017 conference, an equal number of accepted papers came from US- and China-based researchers.

We’ve also seen increased collaboration between China’s top tech firms and emerging student talent. Tencent, for instance, sponsors scholarships for students at a lab in Hong Kong’s University of Science and Technology, granting them access to masses of WeChat data.

Meanwhile, Baidu, Didi, and Tencent have all set up their own research labs.

China’s Face++ now leads the world in face and image recognition AI, beating out top teams from Google, Microsoft and Facebook at the 2017 COCO image-recognition competition.

Voice recognition software company iFlyTek has not only outcompeted teams from Alphabet’s DeepMind, Facebook and IBM Watson in natural-language processing, but has done so in its “second language” of English.

Now the most valuable AI speech company in the world, iFlyTek’s cutting edge technology could one day enable translation earpieces that instantaneously translate speech into any language.

But perhaps the greatest unifying force behind China’s skyrocketing AI industry is the country’s very own central government.

4. China’s Government Directive

The day DeepMind’s AlphaGo beat top-ranking Chinese Go player Ke Jie has gone down in history as China’s “Sputnik Moment.”

Within two months of the AI’s victory, China’s government issued its plan to make China the global center of AI innovation, aiming for a 1 trillion RMB (about $150 billion USD) AI industry by 2030.

But there is a critical difference between China’s New Generation AI Development Plan (released in July 2017) and America’s 2016 AI strategic plan, released under the Obama Administration to encourage ramped-up AI R&D.

While the White House report got modest news coverage and a mildly enthusiastic response from the AI community, this was barely a hiccup in comparison to China’s clarion call. When the CCP speaks, everyone listens.

Within a year, Chinese VC investors were pouring record sums into AI startups, surpassing the US to make up 48 percent of AI venture funding globally. Over the past decade, Chinese government spending on STEM research has grown by double digits year on year.

And China’s political system is set up such that local officials are incentivized to outcompete others for leadership in CCP initiatives, each striving to lure in AI companies and entrepreneurs with generous subsidies and advantageous policies.

Mayors across the country (largely in eastern China) have built out innovation zones, incubators and government-backed VC funds, even covering rent and clearing out avenues for AI startups and accelerators.

Beijing plans to invest $2 billion in an AI development park, which would house up to 400 AI enterprises and a national AI lab, driving R&D, patents and societal innovation.

Hangzhou, home to Alibaba’s HQ, has also launched its own AI park, backed by a fund of 10 billion RMB (nearly $1.6 billion USD). But Hangzhou and Beijing are just two of the 19 different cities and provinces investing in AI-driven city infrastructure and policy.

As I discussed last week, cities like Xiong’an New Area are building out entire AI cities in the next two decades, centered around autonomous vehicles, solar panel-embedded roads, and computer vision-geared infrastructure.

Lastly, local governments have begun to team with China’s leading AI companies to build up party-corporate complexes.

Acting as a “national team,” companies like Baidu, Alibaba, Tencent, and iFlyTek collaborate with national organizations like China’s National Engineering Lab for Deep Learning Technologies to pioneer research and supercharge innovation.

Pulling out all the stops, China’s government is flooding the market with AI-targeted funds as Chinese tech giants and adrenalized startups rise to leverage this capital.

Final Thoughts

Once disregarded as a market of ‘copycats’ looking to Silicon Valley for inspiration and know-how, China’s AI ecosystem has long departed this stage.

Propelled by an abundance of government funds, smart infrastructure overhauls, leading AI research, and some of the world’s most driven entrepreneurs, China’s AI ecosystem is unstoppable.

Join Me in China

(1) Webinar with Dr. Kai-Fu Lee: Dr. Kai-Fu Lee — one of the world’s most respected experts on AI — will discuss his latest book AI Superpowers: China, Silicon Valley, and the New World Order. Artificial Intelligence is reshaping the world as we know it. With US-Sino competition heating up, who will own the future of technology? Register here for the free webinar on September 4th, 2018 from 11am – 12:30pm PST

(2) Abundance Global — China: This year, I’m expanding A360 into three key emerging global markets: Central/South America (Rio de Janeiro, Brazil); MENA (Dubai, UAE); and Asia (Shanghai, China). Following my annual China Platinum Trip, A360 Shanghai will dive into China’s remarkable strides in AI and other newly emerging industries.

Image Credit: Toa55/Shutterstock.com

 

Continue reading… | Stay even more current with our live science feed.


This article and its images were originally posted on [Singularity Hub] August 29, 2018 at 11:04AM. All credit to both the author and Singularity Hub | ESIST.T>G>S Recommended Articles Of The Day.

 


Artificial intelligence nails predictions of earthquake aftershocks

Your daily selection of the latest science news!

According to Nature (This article and its images were originally posted on Nature August 29, 2018 at 01:15PM.)

An earthquake and its aftershocks rocked Japan’s Kumamoto prefecture in 2016, causing 48 deaths. Credit: Aflo/REX/Shutterstock

A machine-learning study that analysed hundreds of thousands of earthquakes beat the standard method at predicting the location of aftershocks.

Scientists say that the work provides a fresh way of exploring how changes in ground stress, such as those that occur during a big earthquake, trigger the quakes that follow. It could also help researchers to develop new methods for assessing seismic risk.

“We’ve really just scratched the surface of what machine learning may be able to do for aftershock forecasting,” says Phoebe DeVries, a seismologist at Harvard University in Cambridge, Massachusetts. She and her colleagues report their findings on 29 August in Nature.

Aftershocks occur after the main earthquake, and they can be just as damaging as the initial shock, or more so. A magnitude-7.1 earthquake near Christchurch, New Zealand, in September 2010 didn’t kill anyone, but a magnitude-6.3 aftershock, which followed more than 5 months later and hit closer to the city centre, resulted in 185 deaths.

Seismologists can generally predict how large aftershocks will be, but they struggle to forecast where the quakes will happen. Until now, most scientists used a technique that calculates how an earthquake changes the stress in nearby rocks and then predicts how likely that change is to result in an aftershock in a particular location. This stress-failure method can explain aftershock patterns successfully for many large earthquakes, but it doesn’t always work.

There are large amounts of data available on past earthquakes, and DeVries and her colleagues decided to harness them to come up with a better prediction method. “Machine learning is such a powerful tool in that kind of scenario,” DeVries says.

Neural networking

The scientists looked at more than 131,000 mainshock and aftershock earthquakes, including some of the most powerful tremors in recent history, such as the devastating magnitude-9.1 event that hit Japan in March 2011. The researchers used these data to train a neural network that modelled a grid of cells, 5 kilometres to a side, surrounding each main shock. They told the network that an earthquake had occurred, and fed it data on how the stress changed at the centre of each grid cell. Then the scientists asked it to provide the probability that each grid cell would generate one or more aftershocks. The network treated each cell as its own little isolated problem to solve, rather than calculating how stress rippled sequentially through the rocks.
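For readers who want a feel for that per-cell setup, here is a minimal sketch of such a classifier in Python. It is not the published model: the number of stress-change features, the layer sizes and the training data below are placeholders chosen only to illustrate the idea of mapping each grid cell’s stress change to an aftershock probability.

```python
# Rough sketch of a per-cell aftershock classifier (illustrative, not the
# published model): each 5 km grid cell is treated as an independent sample
# of stress-change features mapped to a probability of hosting an aftershock.
import torch
import torch.nn as nn

n_features = 6          # assumed number of stress-change components per cell
model = nn.Sequential(
    nn.Linear(n_features, 50),
    nn.Tanh(),
    nn.Linear(50, 1),
    nn.Sigmoid(),       # probability that the cell generates >= 1 aftershock
)

loss_fn = nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in data: one row per grid cell, with made-up labels.
X = torch.randn(10_000, n_features)
y = (X[:, 0].abs() > 1.0).float().unsqueeze(1)

for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print("final training loss:", float(loss))
```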

When the researchers tested their system on 30,000 mainshock-aftershock events, the neural-network forecast predicted aftershock locations more accurately than did the usual stress-failure method. Perhaps more importantly, DeVries says, the neural network also hinted at some of the physical changes that might have been happening in the ground after the main shock. It pointed to certain parameters as potentially important — ones that describe stress changes in materials such as metals, but that researchers don’t often use to study earthquakes.

The findings are a good step towards examining aftershocks with fresh eyes, says Daniel Trugman, a seismologist at the Los Alamos National Laboratory in New Mexico. “The machine-learning algorithm is telling us something fundamental about the complex processes underlying the earthquake triggering,” he says.

The latest study won’t be the final word on aftershock forecasts, says Gregory Beroza, a geophysicist at Stanford University in California. For instance, it doesn’t take into account a type of stress change that happens as seismic waves travel through Earth. But “this paper should be viewed as a new take on aftershock triggering”, he says. “That’s important, and it’s motivating.”


Continue reading… | Stay even more current with our live science feed.


This article and its images were originally posted on [Nature] August 29, 2018 at 01:15PM. All credit to both the author and Nature | ESIST.T>G>S Recommended Articles Of The Day.

 


AI Can Transform Anyone Into a Professional Dancer

Your daily selection of the hottest trending tech news!

According to NVIDIA (This article and its images were originally posted on the NVIDIA Developer News Center August 24, 2018 at 03:39PM.)

Think of it as style transfer for dancing, a deep learning based algorithm that can convincingly show a real person mirroring the moves of their favorite dancers.

The work, developed by a team of researchers from the University of California Berkeley, allows anyone to portray themselves as a world-class ballerina or a pop superstar like Bruno Mars.

“With our framework, we create a variety of videos, enabling untrained amateurs to spin and twirl like ballerinas, perform martial arts kicks or dance as vibrantly as pop stars,” the researchers stated in their paper. “Using pose detections as an intermediate representation between source and target, we learn a mapping from pose images to a target subject’s appearance,” the team explained.

Using NVIDIA TITAN Xp and GeForce GTX 1080 Ti GPUs, with the cuDNN-accelerated PyTorch deep learning framework for both training and inference, the team first trained their conditional generative adversarial network on video of amateur dancers performing a range of poses filmed at 120 frames per second. Each subject completed the poses for at least 20 minutes.

The team then extracted pose key points for the body, face, and hands using the architecture provided by OpenPose, a state-of-the-art pose detector.

For the image translation, the team based their algorithm on the pix2pixHD architecture developed by NVIDIA researchers.

(Top) Training: the model uses a pose detector P to create pose stick figures from video frames of the target subject. During training, the mapping G is learned alongside an adversarial discriminator D, which attempts to distinguish between the “real” correspondence pair (x, y) and the “fake” pair (G(x), y). (Bottom) Transfer: a pose detector P : Y′ → X′ obtains pose joints for the source person, which are transformed by the normalization process Norm into joints for the target person, from which pose stick figures are created. The trained mapping G is then applied.
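As a rough idea of what that adversarial training step looks like in code, the sketch below wires up a tiny generator and discriminator on dummy tensors. It only illustrates the conditional-GAN idea in the caption; the layer sizes, losses and image shapes are placeholders, not the pix2pixHD configuration the researchers actually used.

```python
# Toy conditional-GAN step: G maps a pose stick-figure image to a frame,
# D scores (pose, frame) pairs as real or fake. Illustrative only.
import torch
import torch.nn as nn

G = nn.Sequential(                      # pose image (3 ch) -> synthesized frame (3 ch)
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)
D = nn.Sequential(                      # (pose, frame) concatenated -> real/fake score
    nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

x = torch.rand(4, 3, 64, 64)            # pose stick figures (dummy batch)
y = torch.rand(4, 3, 64, 64)            # real target-subject frames (dummy batch)

# Discriminator step: real pair (x, y) vs fake pair (x, G(x)).
opt_d.zero_grad()
d_loss = bce(D(torch.cat([x, y], 1)), torch.ones(4, 1)) + \
         bce(D(torch.cat([x, G(x).detach()], 1)), torch.zeros(4, 1))
d_loss.backward()
opt_d.step()

# Generator step: try to make D accept the fake pair.
opt_g.zero_grad()
g_loss = bce(D(torch.cat([x, G(x)], 1)), torch.ones(4, 1))
g_loss.backward()
opt_g.step()
```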

“Overall our model can create reasonable and arbitrarily long videos of a target person dancing given body movements to follow through an input video of another subject dancing,” the team said.

The researchers concede their model isn’t perfect. “Even though we try to inject temporal coherence through our setup and pre smoothing key points, our results often still suffer from jittering. Errors occur particularly in transfer videos when the input motion or motion speed is different from the movements seen at training time,” the team explained.

To address these issues, the team is exploring a different pose estimation framework that is optimized for motion transfer.

The work was published on ArXiv this week.

Stay even more current with our live technology feed.


This article and images were originally posted on [NVIDIA Developer News Center] August 24, 2018 at 03:39PM. Credit to the original author and the NVIDIA Developer News Center | ESIST.T>G>S Recommended Articles Of The Day.

 


A “GPS for inside your body”

Your daily selection of the hottest trending tech news!

According to MIT News (This article and its images were originally posted on MIT News – Computer Science and Artificial Intelligence Laboratory (CSAIL) August 20, 2018 at 12:21AM.)

Investigating inside the human body often requires cutting open a patient or swallowing long tubes with built-in cameras. But what if physicians could get a better glimpse in a way that is less expensive, less invasive, and less time-consuming?

A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) led by Professor Dina Katabi is working on doing exactly that with an “in-body GPS” system dubbed ReMix. The new method can pinpoint the location of ingestible implants inside the body using low-power wireless signals. These implants could be used as tiny tracking devices on shifting tumors to help monitor their slight movements.

In animal tests, the team demonstrated that they can track the implants with centimeter-level accuracy. The team says that, one day, similar implants could be used to deliver drugs to specific regions in the body.

ReMix was developed in collaboration with researchers from Massachusetts General Hospital (MGH). The team describes the system in a paper that’s being presented at this week’s Association for Computing Machinery’s Special Interest Group on Data Communications (SIGCOMM) conference in Budapest, Hungary.

Tracking inside the body

To test ReMix, Katabi’s group first implanted a small marker in animal tissues. To track its movement, the researchers used a wireless device that reflects radio signals off the patient. This was based on a wireless technology that the researchers previously demonstrated to detect heart rate, breathing, and movement. A special algorithm then uses that signal to pinpoint the exact location of the marker.

Interestingly, the marker inside the body does not need to transmit any wireless signal. It simply reflects the signal transmitted by the wireless device outside the body. Therefore, it doesn’t need a battery or any other external source of energy.

A key challenge in using wireless signals in this way is the many competing reflections that bounce off a person’s body. In fact, the signals that reflect off a person’s skin are actually 100 million times more powerful than the signals of the metal marker itself.

To overcome this, the team designed an approach that essentially separates the interfering skin signals from the ones they’re trying to measure. They did this using a small semiconductor device, called a “diode,” that mixes signals together so the team can then filter out the skin-related signals. For example, if the skin reflects at frequencies of F1 and F2, the diode creates new combinations of those frequencies, such as F1-F2 and F1+F2. When all of the signals reflect back to the system, the system only picks up the combined frequencies, filtering out the original frequencies that came from the patient’s skin.
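A quick way to see why mixing helps is to push two tones through a simple square-law nonlinearity, a crude stand-in for the diode, and look at the spectrum. The sketch below is only an illustration of that effect, not the ReMix hardware; the sample rate and the two frequencies are made up for the demo.

```python
# Toy illustration of diode-style frequency mixing: a square-law nonlinearity
# applied to two tones produces components at the sum and difference
# frequencies, which can then be separated from the originals by filtering.
import numpy as np

fs = 10_000                      # sample rate in Hz (arbitrary for the demo)
t = np.arange(0, 1.0, 1 / fs)
f1, f2 = 900.0, 650.0            # two made-up reflection frequencies

signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
mixed = signal ** 2              # crude stand-in for the diode's nonlinearity

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)

# The strongest components include |f1 - f2| = 250 Hz and f1 + f2 = 1550 Hz.
peaks = freqs[np.argsort(spectrum)[-5:]]
print(np.sort(peaks))
```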

One potential application for ReMix is in proton therapy, a type of cancer treatment that involves bombarding tumors with beams of magnet-controlled protons. The approach allows doctors to prescribe higher doses of radiation, but requires a very high degree of precision, which means that it’s usually limited to only certain cancers.

Its success hinges on something that’s actually quite unreliable: a tumor staying exactly where it is during the radiation process. If a tumor moves, then healthy areas could be exposed to the radiation. But with a small marker like ReMix’s, doctors could better determine the location of a tumor in real-time and either pause the treatment or steer the beam into the right position. (To be clear, ReMix is not yet accurate enough to be used in clinical settings. Katabi says a margin of error closer to a couple of millimeters would be necessary for actual implementation.)

“The ability to continuously sense inside the human body has largely been a distant dream,” says Romit Roy Choudhury, a professor of electrical engineering and computer science at the University of Illinois, who was not involved in the research. “One of the roadblocks has been wireless communication to a device and its continuous localization. ReMix makes a leap in this direction by showing that the wireless component of implantable devices may no longer be the bottleneck.”

Looking ahead

There are still many ongoing challenges for improving ReMix. The team next hopes to combine the wireless data with medical data, such as that from magnetic resonance imaging (MRI) scans, to further improve the system’s accuracy. In addition, the team will continue to reassess the algorithm and the various tradeoffs needed to account for the complexity of different bodies.

“We want a model that’s technically feasible, while still complex enough to accurately represent the human body,” says MIT PhD student Deepak Vasisht, lead author on the new paper. “If we want to use this technology on actual cancer patients one day, it will have to come from better modeling a person’s physical structure.”

The researchers say that such systems could help enable more widespread adoption of proton therapy centers. Today, there are only about 100 centers globally.

“One reason that [proton therapy] is so expensive is because of the cost of installing the hardware,” Vasisht says. “If these systems can encourage more applications of the technology, there will be more demand, which will mean more therapy centers, and lower prices for patients.”

Katabi and Vasisht co-wrote the paper with MIT PhD student Guo Zhang, University of Waterloo professor Omid Abari, MGH physicist Hsaio-Ming Lu, and MGH technical director Jacob Flanz.

Stay even more current with our live technology feed.


This article and images were originally posted on [MIT News – Computer Science and Artificial Intelligence Laboratory (CSAIL)] August 20, 2018 at 12:21AM. Credit to Author Adam Conner-Simons | Rachel Gordon and MIT News – Computer Science and Artificial Intelligence Laboratory (CSAIL) | ESIST.T>G>S Recommended Articles Of The Day.

 


EGaming, the Humble Book Bundle: Machine Learning is LIVE!

 

The Humble Book Bundle: Machine Learning by O’Reilly just launched on Monday, August 27 at 11 a.m. Pacific time! Get titles like Introduction to Machine Learning with Python, Learning TensorFlow, and Thoughtful Machine Learning with Python. Plus, bundle purchases will support Code for America!

Humble Book Bundle: Machine Learning by O'Reilly


This article and images were originally posted on [ESIST] and sponsored by HUMBLE BUNDLE

ESIST may receive a commission for any purchases made through our affiliate links. All commissions made will be used to support and expand ESIST.Tech

 

 

 

Universal Method to Sort Complex Information Found

Your daily selection of the latest science news!

According to Quanta Magazine (This article and its images were originally posted on Quanta Magazine August 13, 2018 at 01:18PM.)

If you were opening a coffee shop, there’s a question you’d want answered: Where’s the next closest cafe? This information would help you understand your competition.

This scenario is an example of a type of problem widely studied in computer science called “nearest neighbor” search. It asks, given a data set and a new data point, which point in your existing data is closest to your new point? It’s a question that comes up in many everyday situations in areas such as genomics research, image searches and Spotify recommendations.

And unlike the coffee shop example, nearest neighbor questions are often very hard to answer. Over the past few decades, top minds in computer science have applied themselves to finding a better way to solve the problem. In particular, they’ve tried to address complications that arise because different data sets can use very different definitions of what it means for two points to be “close” to one another.

Now, a team of computer scientists has come up with a radically new way of solving nearest neighbor problems. In a pair of papers, five computer scientists have elaborated the first general-purpose method of solving nearest neighbor questions for complex data.

“This is the first result that captures a rich collection of spaces using a single algorithmic technique,” said Piotr Indyk, a computer scientist at the Massachusetts Institute of Technology and influential figure in the development of nearest neighbor search.

Distance Difference

We’re so thoroughly accustomed to one way of defining distance that it’s easy to miss that there could be others. We generally measure distance using “Euclidean” distance, which draws a straight line between two points. But there are situations in which other definitions of distance make more sense. For example, “Manhattan” distance forces you to make 90-degree turns, as if you were walking on a street grid. Using Manhattan distance, a point 5 miles away as the crow flies might require you to go across town for 3 miles and then uptown another 4 miles.
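In code, the two notions of distance from that example look like this, with the 3-miles-across, 4-miles-uptown trip as the data:

```python
# Euclidean distance is the straight line; Manhattan distance follows the grid.
import math

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

start, cafe = (0, 0), (3, 4)
print(euclidean(start, cafe))   # 5.0  ("as the crow flies")
print(manhattan(start, cafe))   # 7    (3 across town + 4 uptown)
```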

It’s also possible to think of distance in completely nongeographical terms. What is the distance between two people on Facebook, or two movies, or two genomes? In these examples, “distance” means how similar the two things are.

There exist dozens of distance metrics, each suited to a particular kind of problem. Take two genomes, for example. Biologists compare them using “edit distance.” Using edit distance, the distance between two genetic sequences is the number of insertions, deletions and substitutions required to convert one into the other.
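Edit distance is usually computed with a short dynamic-programming routine like the one below; the two genome-style strings are invented for the example.

```python
# Edit (Levenshtein) distance: the minimum number of insertions, deletions
# and substitutions needed to turn one sequence into the other.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,                 # delete ca
                curr[j - 1] + 1,             # insert cb
                prev[j - 1] + (ca != cb),    # substitute (or keep if equal)
            ))
        prev = curr
    return prev[-1]

print(edit_distance("GATTACA", "GACTATA"))   # 2
```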

Edit distance and Euclidean distance are two completely different notions of distance — there’s no way to reduce one to the other. This incommensurability is true for many pairs of distance metrics, and it poses a challenge for computer scientists trying to develop nearest neighbor algorithms. It means that an algorithm that works for one type of distance won’t work for another — that is, until this new way of searching came along.

Squaring the Circle

To find a nearest neighbor, the standard approach is to partition your existing data into subgroups. Imagine, for instance, your data is the location of cows in a pasture. Draw circles around groups of cows. Now place a new cow in the pasture and ask, which circle does it fall in? Chances are good — or even guaranteed — that your new cow’s nearest neighbor is also in that circle.

Then repeat the process. Partition your circle into subcircles, partition those partitions, and so on. Eventually, you’ll end up with a partition that contains just two points: an existing point and your new point. And that existing point is your new point’s nearest neighbor.

Algorithms draw these partitions, and a good algorithm will draw them quickly and well — with “well” meaning that you’re not likely to end up in a situation where your new cow falls in one circle but its nearest neighbor stands in another. “From these partitions we want close points to end up in the same disc often and far points to end up in the same disc rarely,” said Ilya Razenshteyn, a computer scientist at Microsoft Research and coauthor of the new work along with Alexandr Andoni of Columbia University, Assaf Naor of Princeton University, Aleksandar Nikolov of the University of Toronto and Erik Waingarten of Columbia University.

Over the years, computer scientists have come up with various algorithms for drawing these partitions. For low-dimensional data — where each point is defined by only a few values, like the locations of cows in a pasture — algorithms create what are called “Voronoi diagrams,” which solve the nearest neighbor question exactly.
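For a low-dimensional data set like the cow example, an exact answer is cheap to get. The sketch below uses SciPy’s k-d tree, a different exact index standing in here for the Voronoi-diagram approach described above, with randomly generated pasture coordinates:

```python
# Exact nearest-neighbour lookup for low-dimensional points (cow locations).
# A k-d tree is used as the exact index; it is an illustrative stand-in,
# not the Voronoi-diagram construction mentioned in the article.
import numpy as np
from scipy.spatial import KDTree

rng = np.random.default_rng(0)
cows = rng.uniform(0, 100, size=(500, 2))   # existing points in a 2-D pasture

tree = KDTree(cows)
new_cow = np.array([42.0, 17.0])

dist, idx = tree.query(new_cow)             # exact nearest neighbour
print(idx, dist, cows[idx])
```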

For higher-dimensional data, where each point can be defined by hundreds or thousands of values, Voronoi diagrams become too computationally intensive. So instead, computer scientists draw partitions using a technique called “locality sensitive hashing (LSH)” that was first defined by Indyk and Rajeev Motwani in 1998. LSH algorithms draw partitions randomly. This makes them faster to run but also less accurate — instead of finding a point’s exact nearest neighbor, they guarantee you’ll find a point that’s within some fixed distance of the actual nearest neighbor. (You can think of this as being like Netflix giving you a movie recommendation that’s good enough, rather than the very best.)
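To make the LSH idea concrete, here is a minimal sketch for Euclidean distance: random projections are quantised into buckets, so nearby points tend to share a bucket, and only candidates from the query’s bucket are checked. The parameter values are illustrative, not tuned, and the answer is approximate by design.

```python
# Minimal locality-sensitive hashing sketch for Euclidean distance.
import numpy as np

rng = np.random.default_rng(1)
dim, n_points, n_proj, width = 20, 5_000, 8, 2.0

data = rng.normal(size=(n_points, dim))
A = rng.normal(size=(n_proj, dim))          # random projection directions
b = rng.uniform(0, width, size=n_proj)      # random offsets

def bucket(x):
    return tuple(np.floor((A @ x + b) / width).astype(int))

table = {}
for i, x in enumerate(data):
    table.setdefault(bucket(x), []).append(i)

query = data[123] + 0.01 * rng.normal(size=dim)   # a point near data[123]
candidates = table.get(bucket(query), range(n_points))

best = min(candidates, key=lambda i: np.linalg.norm(data[i] - query))
print(best)   # usually 123: approximate, not guaranteed
```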

Since the late 1990s, computer scientists have come up with LSH algorithms that give approximate solutions to the nearest neighbor problem for specific distance metrics. These algorithms have tended to be very specialized, meaning an algorithm developed for one distance metric couldn’t be applied to another.

“You could get a very efficient algorithm for Euclidean distance, or Manhattan distance, for some very specific important cases. But we didn’t have an algorithmic technique that worked on a large class of distances,” said Indyk.

Because algorithms developed for one distance metric couldn’t be used in another, computer scientists developed a workaround strategy. Through a process called “embedding,” they’d overlay a distance metric for which they didn’t have a good algorithm on a distance metric for which they did. But the fit between metrics was usually imprecise — a square peg in a round hole type of situation. In some cases, embeddings weren’t possible at all. What was needed instead was an all-purpose way of answering nearest neighbor questions.

A Surprise Result

In this new work, the computer scientists began by stepping back from the pursuit of specific nearest neighbor algorithms. Instead, they asked a broader question: What prevents a good nearest neighbor algorithm from existing for a distance metric?

The answer, they thought, had to do with a particularly troublesome setting in which to find nearest neighbors, called an “expander graph.” An expander graph is a specific type of graph — a collection of points connected by edges. Graphs have their own distance metric. The distance between two points on a graph is the minimum number of edges you need to traverse to get from one point to the other. You could imagine a graph representing connections between people on Facebook, for example, where the distance between people is their degree of separation. (If Julianne Moore had a friend who had a friend who is friends with Kevin Bacon, then the Moore-Bacon distance would be 3.)
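That graph distance is just a breadth-first search away. A toy version with an invented friendship graph reproduces the Moore-Bacon example:

```python
# Graph distance: the minimum number of edges between two points,
# computed with breadth-first search. The friendship graph is made up.
from collections import deque

friends = {
    "Julianne Moore": ["A"],
    "A": ["Julianne Moore", "B"],
    "B": ["A", "Kevin Bacon"],
    "Kevin Bacon": ["B"],
}

def graph_distance(graph, start, goal):
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        if node == goal:
            return d
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None   # not connected

print(graph_distance(friends, "Julianne Moore", "Kevin Bacon"))   # 3
```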

An expander graph is a special type of graph that has two seemingly contradictory properties: It’s well-connected, meaning you cannot disconnect points without cutting many edges. But at the same time, most points are connected to very few other points. As a result of this last trait, most points end up being far away from each other (because the low-connectivity means you have to take a long, circuitous route between most points).

This unique combination of features — well-connectedness, but with few edges overall — has the consequence that it’s impossible to perform fast nearest neighbor search on expander graphs. The reason it’s impossible is that any effort to partition points on an expander graph is likely to separate close points from each other.

“Any way to cut the points on an expander into two parts would be cutting many edges, splitting many close points,” said Waingarten, a coauthor of the new work.

In the summer of 2016, Andoni, Nikolov, Razenshteyn and Waingarten knew that good nearest neighbor algorithms were impossible for expander graphs. But what they really wanted to prove was that good nearest neighbor algorithms were also impossible for many other distance metrics — metrics where computer scientists had been stymied trying to find good algorithms.

Their strategy for proving that such algorithms were impossible was to find a way to embed an expander metric into these other distance metrics. By doing so, they could establish that these other metrics had unworkable expanderlike properties.

The four computer scientists went to Assaf Naor, a mathematician and computer scientist at Princeton University, whose previous work seemed well-suited to this question about expanders. They asked him to help prove that expanders embed into these various types of distances. Naor quickly came back with an answer, but it wasn’t the one they had been expecting.

“We asked Assaf for help with that statement, and he proved the opposite,” said Andoni.

Naor proved that expander graphs don’t embed into a large class of distance metrics called “normed spaces” (which include distances like Euclidean distance and Manhattan distance). With Naor’s proof as a foundation, the computer scientists followed this chain of logic: If expanders don’t embed into a distance metric, then a good partitioning must be possible (because, they proved, expanderlike properties were the only barrier to a good partitioning). Therefore, a good nearest neighbor algorithm must also be possible — even if computer scientists hadn’t been able to find it yet.

The five researchers — the first four, now joined by Naor — wrote up their results in a paper completed last November and posted online in April. The researchers followed that paper with a second one they completed earlier this year and posted online this month. In that paper, they use the information they had gained in the first paper to find fast nearest neighbor algorithms for normed spaces.

“The first paper showed there exists a way to take a metric and divide it into two, but it didn’t give a recipe for how to do this quickly,” said Waingarten. “In the second paper, we say there exists a way to split points, and, in addition, the split is this split and you can do it with fast algorithms.”

Altogether, the new papers recast nearest neighbor search for high-dimensional data in a general light for the first time. Instead of working up one-off algorithms for specific distances, computer scientists now have a one-size-fits-all approach to finding nearest neighbor algorithms.

“It’s a disciplined way of designing algorithms for nearest neighbor search in whatever metric space you care about,” said Waingarten.

Update: On August 14, the researchers posted online the second of their two papers. This article has been updated with links to the second paper.

Continue reading… | Stay even more current with our live science feed.


This article and its images were originally posted on [Quanta Magazine] August 13, 2018 at 01:18PM. All credit to both the author Kevin Hartnett and Quanta Magazine | ESIST.T>G>S Recommended Articles Of The Day.


Watch 100,000 Volts of Electricity Course Through a Circuit Board in Slow Motion

Your daily selection of the latest science news!

According to Motherboard (This article and its images were originally posted on Motherboard August 22, 2018 at 10:51AM.)

Illinois’s resident mad scientist and moth-lover, Drake Anthony, is at it again. On his styropyro YouTube channel, Anthony conducts experiments with powerful lasers and chemical reactions. In his latest, he uses a large power supply to run high voltage through ignition coils to create a cool blue arc of high powered electricity.

He doesn’t stop there. The ignition coil arc is pretty but hooking up a blank, perforated circuit board to those ignition coils forces the electric arc down its various paths to create a gorgeous random pattern of dangerous electricity.

Electricity always follows the path of least resistance, so you’d expect it to ignore the copper board. But Anthony explains, using math, that the path of least resistance isn’t always obvious, that traveling the length of the copper board doesn’t add any resistance and that, in most cases, it is actually easier for the electricity than moving through the air.

Math is cooler when its results are as gorgeous and strange as using ignition coils to run high voltage through a circuit board. “I wish this thing wasn’t so lethal because I’d love one for my room,” Anthony said.

Continue reading… | Stay even more current with our live science feed.


This article and its images were originally posted on [Motherboard] August 22, 2018 at 10:51AM. All credit to both the author and Motherboard | ESIST.T>G>S Recommended Articles Of The Day.


EGaming, the Humble Book Bundle: Big Data is LIVE!

 


The Humble Book Bundle: Big Data by Packt just launched on Monday, August 13 at 11 a.m. Pacific time! Get titles like Mastering MongoDB 3.x, Learning Elastic Stack 6.0, and Mastering Tableau 10. Plus, bundle purchases will support Mental Health Foundation – and a charity of your choice!

Humble Book Bundle: Big Data by Packt


This article and images were originally posted on [ESIST] and sponsored by HUMBLE BUNDLE

ESIST may receive a commission for any purchases made through our affiliate links. All commissions made will be used to support and expand ESIST.Tech

 

 

 

Build an oscilloscope using Raspberry Pi and Arduino

Your daily selection of the hottest trending tech news!

According to Raspberry Pi (This article and its images were originally posted on Raspberry Pi July 20, 2018 at 08:38AM.)

In this tutorial from The MagPi issue 71, Mike Cook takes us through the process of building an oscilloscope using a Raspberry Pi and an Arduino. Get your copy of The MagPi in stores now, or download it as a free PDF here.

The oscilloscope is on the wish list of anyone starting out with electronics. Your author used to tell his students that it was your eyes, making electricity visible. Unfortunately, they are quite expensive: from a few hundred pounds to up to £5000 and beyond. However, by using an Arduino and some software on the Raspberry Pi, you can make a passable beginner’s oscilloscope.

Last September, in The MagPi #61, there was an article outlining the way the Raspberry Pi and the Arduino could be used together. We at the Bakery have been doing this for some time: we first had a major project in the Raspberry Pi Projects books by Andrew Robinson and Mike Cook. The big advantage of the Arduino from a signal processing point of view is that there is minimal interruption from the operating system and you can gather data at a constant uninterrupted rate. This is just what we need for making an oscilloscope. The idea is that the Arduino gathers a whole heap of voltage samples as fast as it can and stores it in memory. It then transfers that memory to the Raspberry Pi, again as fast as possible. The Pi plots the data and displays it, then the software allows measurements to be made on the samples.

So you can measure the time and voltage difference, known as a delta, between any two points on the samples. You can even display the frequency that the ‘time delta’ corresponds to by taking its reciprocal. These are features found in expensive oscilloscopes. We have also built in a trigger function; this is to synchronise the onset of the rapid data gathering with the occurrence of a positive transition on the input signal through a specified voltage. The result is that regular waveforms can look stable on the display.

The hardware

The schematic of the Arduino data acquisition module is shown in Figure 1.

[Figure 1: Schematic of the Arduino data acquisition module]

You will notice that it is quite simple. It consists of three potentiometers for the oscilloscope’s controls and an AC coupled biased voltage input.

The capacitor ensures that no DC components from the input get through and gives a modicum of protection against overvoltage. The reference voltage, or ground, is similarly biased as +2.5V above the Pi’s ground level.

The use of a BNC socket for the input ensures that you can use this with proper oscilloscope probe leads; these normally have an X10 switchable attenuator fitted, thus allowing voltages of +/- 25V to be measured. Full construction details can be found in the numbered steps.


Arduino software

The software, or sketch, you need to put into the Arduino is shown in the Gather_A0.ino listing, and is quite simple. Normally an Arduino of this type will take samples at a rate of 10 000 per second — or as we say, a 10k sample rate. This is not too good for an oscilloscope, but we can increase this sample rate by speeding up the A/D converter’s clock speed from the default rate. It does not appear to affect the reading accuracy too much. By making this change, we can speed up the sample rate to 58k. This is much better and allows useful measurements to be made in the audio range.
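The listing itself is in the magazine, but the arithmetic behind that speed-up is easy to sketch. Assuming a 16 MHz AVR-type board where one conversion takes roughly 13 ADC clock cycles (typical for ATmega parts, and an assumption here rather than a figure from the article), the sample-rate ceiling scales directly with the ADC clock prescaler:

```python
# Back-of-the-envelope sample-rate arithmetic for an AVR-type Arduino.
# Assumptions (not taken from the article's listing): 16 MHz CPU clock and
# roughly 13 ADC clock cycles per conversion, typical for ATmega parts.

CPU_HZ = 16_000_000
CYCLES_PER_CONVERSION = 13

def max_sample_rate(prescaler):
    """Theoretical ADC sample rate for a given ADC clock prescaler."""
    adc_clock = CPU_HZ / prescaler
    return adc_clock / CYCLES_PER_CONVERSION

for prescaler in (128, 64, 32, 16):
    print(f"prescaler {prescaler:>3}: ~{max_sample_rate(prescaler) / 1000:.1f} kS/s")

# The Arduino default prescaler of 128 works out to roughly 9.6 kS/s, close to
# the ~10k figure quoted above; lowering the prescaler raises the ceiling far
# enough that 58 kS/s becomes reachable once loop overhead is included.
```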


So, first, the trigger function is optionally called and then the samples are gathered in and sent to the Pi. The trigger function has a time-out that means it will trigger anyway after one second, whether it sees a transition on the input signal or not. Then the three pots are measured and also sent to the Pi. Note here that the samples are ten bits wide and so have to be sent as two bytes that get joined together again in the Pi’s software.
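The exact framing used in Gather_A0.ino isn’t reproduced here, so treat the byte order below as an assumption (high byte first); the point is simply that the Pi shifts and ORs the two received bytes back into one 10-bit value:

```python
# Rejoining a 10-bit ADC sample that arrived as two bytes over serial.
# The high-byte-first ordering is an assumption, not taken from Gather_A0.ino.

def join_sample(high_byte: int, low_byte: int) -> int:
    """Combine two received bytes into a single 10-bit reading (0-1023)."""
    return ((high_byte << 8) | low_byte) & 0x03FF

# Example: bytes 0x02 and 0x9A rebuild to 0x29A = 666 counts out of 1023.
print(join_sample(0x02, 0x9A))
```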

Also note the use of the double read for the pots, with a bit of code between each. This ensures a more stable reading, as the input capacitor of the Arduino’s sample and hold circuit needs time to charge up, and it has less time than normal to do this due to the speeding up of the A/D. It does not affect the waveform samples too much, as in most waveforms one sample voltage is close to the previous one.


At the end of the transfer, the Arduino sits in a loop waiting for an acknowledge byte from the Pi so it can start again. This acknowledge byte also carries the information as to whether or not to use a trigger on the next sample.


Finally, before each buffer full of data is gathered, pin 13 on the board is lit, and turned off after. This is so that we could time the process on a commercial oscilloscope to find the sample rate — something you will not have to do if you use the recommended AVR-type Arduinos running at 16MHz.

Pi software

The software for the Raspberry Pi is written in Python 3 and uses the Pygame framework. It proved to be a lot more tricky to write than we first imagined, and is shown in the Scope.py listing. Python 3 uses Unicode characters by default, which allowed us to display the delta (Δ) and mu (μ) Greek characters for the difference and the time. The code first sets up the non-display part of the window; this is only drawn once, and then parts of it are updated when necessary. Depending on what type of Arduino you have, it can show up as a different USB port; we found that ours showed up as one of two ports. Comment out whichever one is not applicable when defining the sampleInput variable at the start of the listing.
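As a rough sketch of that serial exchange on the Pi side (this is not the article’s Scope.py; the port name, baud rate, buffer length, and acknowledge values are all assumptions), a pyserial loop along these lines would read one buffer of samples and request the next sweep:

```python
# Minimal sketch of the Pi side of the transfer; this is not Scope.py.
# Port name, baud rate, buffer length, and acknowledge values are assumptions.
import serial  # pyserial

PORT = "/dev/ttyACM0"        # some Arduinos enumerate as /dev/ttyUSB0 instead
SAMPLES = 1024               # hypothetical buffer length
ACK_FREE_RUN, ACK_TRIGGERED = 0, 1

with serial.Serial(PORT, 115200, timeout=2) as link:
    raw = link.read(2 * SAMPLES)                 # two bytes per 10-bit sample
    samples = [((raw[i] << 8) | raw[i + 1]) & 0x3FF   # assumed high byte first
               for i in range(0, len(raw) - 1, 2)]
    link.write(bytes([ACK_TRIGGERED]))           # ask for a triggered sweep next

print(f"received {len(samples)} samples, first few: {samples[:5]}")
```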

Finally, we cobbled together a 168×78 pixel logo for the top-left corner, using a piece of clip art and fashioning the word ‘Oscilloscope’ from an outlined version of the Cooper Black font. We called it PyLogo.png and placed it in an images folder next to the Python code.

Using the oscilloscope

The oscilloscope samples at 58 kHz, which in theory means you can measure waveforms at 29 kHz. But that only gives you two samples per cycle, and as the samples can be anywhere on the waveform, they do not look very good. As a rough guide, you need at least ten points on a waveform to make it look like a waveform, so that gives a top practical frequency of 5.8 kHz. However, by using the Time Magnify options along with the Freeze function, you can measure much higher frequencies. The time and voltage cursor lines let you find out the values at any point on the waveform, and clicking the Save functions replaces the current cursor with a fixed dotted line, so measurements can be made relative to it. The oscilloscope in action can be seen in Figure 2.

[Figure 2: The oscilloscope in action]

Note that pressing the S key on the keyboard produces a screen dump of the display.

Taking it further

There are lots of ways you can take this project further. A simple upgrade would involve you having a second data buffer to allow you to display a saved waveform to compare against the current live one. You could also add a lower-speed acquisition mode to see slower waveforms. You can go the other way and use a faster Arduino so you can see the higher frequencies. This oscilloscope is AC coupled; you could add a DC coupling option with a switch potential divider and amplifier to the front end to extend the range of voltages you can measure. All these improvements, however, will need changes to the software to allow the measuring to take place on these wider-range parameters.

Finish the project

For the complete project code, download the free PDF of The MagPi issue 71, available on The MagPi website.

 

| Stay even more current with our live technology feed.

  • Got any news, tips or want to contact us directly? Feel free to email us: esistme@gmail.com.

To see more posts like these; please subscribe to our newsletter. By entering a valid email, you’ll receive top trending reports delivered to your inbox.

__

This article and images were originally posted on [Raspberry Pi] July 20, 2018 at 08:38AM. Credit to Author Mike Cook and Raspberry Pi | ESIST.T>G>S Recommended Articles Of The Day.

 

 

 

 

 

Synopsis: Making Quantum Computations Behave

Your daily selection of the latest science news!

According to Physics – spotlighting exceptional research (This article and its images were originally posted on Physics – spotlighting exceptional research July 17, 2018 at 12:08PM.)

A new computational method tackles many-body quantum calculations that have defied a suite of existing approaches.

Calculations of a quantum system’s behavior can spiral out of control when they involve more than a handful of particles. So for just about anything more complicated than the hydrogen atom, physicists forget about finding an exact solution to the Schrödinger equation and rely instead on approximation methods. Dean Lee of Michigan State University, East Lansing, and colleagues have now proposed an alternative method for when even the best approximation schemes fail. Their approach should be applicable to a variety of many-particle problems in atomic, nuclear, and particle physics.

The researchers considered the popular Bose-Hubbard model to illustrate their idea. In the model, which has been used to describe atoms in an optical lattice and in superconductors, bosons hop from point to point on a cubic grid, but they interact with one another only when they sit on the same site. Physicists are interested in how the particles behave as the strength of this interaction, U, varies. Using the so-called perturbative approach, the particles’ wave function can be calculated for a simple case (U=0) and then approximated at greater interaction strengths in terms of a power series in U. But this formula blows up when U is too large.

Instead, the team’s approach was to track the wave function’s changing shape at a few values of U where the functions can be accurately calculated. They then used this shape “trajectory” to predict the ground-state wave function at values of U that perturbation theory can’t reach, demonstrating the accuracy of their method for four bosons on a 4×4×4 grid.

Lee says the technique, which he and his colleagues have dubbed “eigenvector continuation,” should work well for calculations that involve a smoothly varying parameter, like interaction strength, but it might struggle with a discretely varying parameter, like particle number. The researchers are now planning to dive into some computations that are known to defy conventional methods, such as simulations involving large nuclei.
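As a toy illustration of the idea (a small random Hamiltonian H(U) = H0 + U*V, not the Bose-Hubbard calculation from the paper), the “trajectory” of ground states sampled at a few small values of U spans a subspace, and diagonalizing H at a larger U inside that subspace gives the eigenvector-continuation estimate:

```python
# Toy sketch of eigenvector continuation on a random Hermitian H(U) = H0 + U*V.
# This is only an illustration of the idea, not the paper's Bose-Hubbard setup.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 60

def herm(m):
    """Random Hermitian matrix of size m x m."""
    a = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    return (a + a.conj().T) / 2

H0, V = herm(n), herm(n)
H = lambda U: H0 + U * V

# "Training": exact ground states at a few easy parameter values.
train_U = [0.1, 0.2, 0.3]
basis = np.column_stack([eigh(H(U))[1][:, 0] for U in train_U])

# Prediction at a larger U: a small generalized eigenproblem in that subspace.
U_target = 1.5
H_small = basis.conj().T @ H(U_target) @ basis
S_small = basis.conj().T @ basis
ec_energy = eigh(H_small, S_small)[0][0]

exact_energy = eigh(H(U_target))[0][0]
print(f"eigenvector continuation: {ec_energy:.4f}, exact: {exact_energy:.4f}")
```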

 

Continue reading… | Stay even more current with our live science feed.

  • Got any news, tips or want to contact us directly? Feel free to email us: esistme@gmail.com.

To see more posts like these; please subscribe to our newsletter. By entering a valid email, you’ll receive top trending reports delivered to your inbox.
__

This article and its images were originally posted on [Physics – spotlighting exceptional research] July 17, 2018 at 12:08PM. All credit to both the author Jessica Thomas and Physics – spotlighting exceptional research | ESIST.T>G>S Recommended Articles Of The Day.

 

 

 

Neuroscientists uncover secret to intelligence in parrots

Your daily selection of the latest science news!

According to Latest Science News — ScienceDaily (This article and its images were originally posted on Latest Science News — ScienceDaily July 3, 2018 at 01:59PM.)

University of Alberta neuroscientists have identified the neural circuit that may underlie intelligence in birds, according to a new study. The discovery is an example of convergent evolution between the brains of birds and primates, with the potential to provide insight into the neural basis of human intelligence.

“An area of the brain that plays a major role in primate intelligence is called the pontine nuclei,” explained Cristian Gutierrez-Ibanez, postdoctoral fellow in the Department of Psychology. “This structure transfers information between the two largest areas of the brain, the cortex and cerebellum, which allows for higher-order processing and more sophisticated behaviour. In humans and primates, the pontine nuclei are large compared to other mammals. This makes sense given our cognitive abilities.”

Birds have very small pontine nuclei. Instead, they have a similar structure called the medial spiriform nucleus (SpM) that has similar connectivity. Located in a different part of the brain, the SpM does the same thing as the pontine nuclei, circulating information between the cortex and the cerebellum. “This loop between the cortex and the cerebellum is important for the planning and execution of sophisticated behaviours,” said Doug Wylie, professor of psychology and co-author on the new study.

Not-so-bird brain

Using samples from 98 birds from the largest collection of bird brains in the world, including everything from chickens and waterfowl to parrots and owls, the scientists studied the brains of birds, comparing the relative size of the SpM to the rest of the brain. They determined that parrots have a SpM that is much larger than that of other birds.

“The SpM is very large in parrots. It’s actually two to five times larger in parrots than in other birds, like chickens,” said Gutierrez. “Independently, parrots have evolved an enlarged area that connects the cortex and the cerebellum, similar to primates. This is another fascinating example of convergence between parrots and primates. It starts with sophisticated behaviours, like tool use and self-awareness, and can also be seen in the brain. The more we look at the brains, the more similarities we see.”

Next, the research team hopes to study the SpM in parrots more closely, to understand what types of information go there and why.

“This could present an excellent way to study how the similar, pontine-based, process occurs in humans,” added Gutierrez. “It might give us a way to better understand how our human brains work.”

Story Source:

Materials provided by University of Alberta. Original written by Katie Willis. Note: Content may be edited for style and length.

Continue reading…

  • Got any news, tips or want to contact us directly? Feel free to email us: esistme@gmail.com. To see more posts like this please subscribe to our newsletter by entering your email. By subscribing you’ll receive the top trending news delivered to your inbox.
    __

This article and its images were originally posted on [Latest Science News — ScienceDaily] July 3, 2018 at 01:59PM. All credit to both the author and Latest Science News — ScienceDaily | ESIST.T>G>S Recommended Articles Of The Day.

 

 

 

World’s tiniest ‘computer’ makes a grain of rice seem massive

Your daily selection of the hottest trending tech news!

According to Engadget (This article and its images were originally posted on Engadget June 23, 2018 at 06:03PM.)

You didn’t think scientists would let IBM’s “world’s smallest computer” boast go unchallenged, did you? Sure enough, University of Michigan has produced a temperature sensing ‘computer’ measuring 0.04 cubic millimeters, or about a tenth the size of IBM’s former record-setter. It’s so small that one grain of rice seems gigantic in comparison — and it’s so sensitive that its transmission LED could instigate currents in its circuits.

The size limitations forced researchers to get creative to reduce the effect of light. They switched from diodes to switched capacitors, and had to fight the relative increase in electrical noise that comes from running on a device that uses so little power.

The result is a sensor that can measure changes in extremely small regions, like a group of cells in your body. Scientists have suspected that tumors are slightly hotter than healthy tissue, but it’s been difficult to verify this until now. The minuscule device could both check this claim and, if it proves true, gauge the effectiveness of cancer treatments. The team also envisions this helping to diagnose glaucoma from inside the eye, monitor biochemical processes and even study tiny snails.

Why the air quotes around computer, then? The tiny size is leading the University to question what a computer is. This does have a full-fledged processor (based on an ARM Cortex-M0+ design), but it loses all data when it loses power, just like IBM’s device. That might be a deal-breaker for people who expect a computer to be more complete. Still, this pushes the limits of computing power and suggests that nearly invisible computing may be relatively commonplace before long.

 

Continue reading…

  • Got any news, tips or want to contact us directly? Feel free to email us: esistme@gmail.com. To see more posts like this please subscribe to our newsletter by entering your email. By subscribing you’ll receive the top trending news delivered to your inbox.

__

This article and images were originally posted on [Engadget] June 23, 2018 at 06:03PM. Credit to Author Jon Fingas@jonfingas and Engadget | ESIST.T>G>S Recommended Articles Of The Day.

 

 

 

Microsoft Just Put a Data Center on the Bottom of the Ocean

Your daily selection of the latest science news!

According to Motherboard (This article and its images were originally posted on Motherboard June 6, 2018 at 10:03AM.)

Microsoft just sent its first self-sufficient, waterproof data center to the bottom of the ocean floor near the Orkney Islands in Scotland, the company announced on Tuesday. About the size of a shipping container, the tubular data center holds 12 racks loaded with 864 servers and is attached to a large triangular weight that anchors it to the seabed over 100 feet beneath the ocean surface.

The deployment of the data center represents the culmination of a nearly four-year research effort code-named Project Natick, which aimed to develop rapidly deployable data centers that can support cloud computing services near major cities.

Microsoft deploying the submarine data center to the ocean floor near Scotland. Image: Microsoft

In addition to cutting down the amount of time needed to create a data center on land from about 2 years to around 90 days, the submarine data center has the added benefit of natural cooling from the ocean, eliminating one of the biggest costs of running a data center on land. The bottom of the ocean is also isolated from many disasters that could affect land based data centers, such as war or hurricanes, although Microsoft did not mention how difficult it would be to make repairs to the servers inside the container should they malfunction.

The Orkney Islands were a strategic choice for the first data center since the islands are also testing experimental renewable energy projects. The islands are home to the European Marine Energy Center, which takes advantage of the naturally turbulent water to harvest tidal energy, in addition to a substantial amount of wind energy generated on land, to create 100 percent renewable energy for the islands. The EMEC generates more than enough energy for the islands’ 10,000 residents, and a cable linked to the Orkney Islands grid powers Microsoft’s underwater data center.

The move is part of a larger push at Microsoft to become a leader in cloud computing, which is at the heart of most consumer-facing web applications you use on a day to day basis. Considering that over half of the world’s population lives within 120 miles of the coast, submarine data centers can ensure that major cities are always close to the physical servers that comprise the cloud.

Microsoft already runs more than 100 data centers around the world for its Azure cloud computing platform. Project Natick may allow the company to rapidly deploy dozens of other data centers in the coming years, but for now, Microsoft says its first submarine data center is an applied research project meant to determine the viability of deploying the data centers at scale. It will monitor the container over the next year to assess its performance before deploying another.

Read More: Why I’m Quitting Apple, Amazon, Facebook, Google and Microsoft for a Month

“We are learning about disk failures, about rack design, about the mechanical engineering of cooling systems and those things will feedback into our normal datacenters,” Peter Lee, the leader of Microsoft’s New Experiences and Technologies group, said in a statement. “When you go for a moonshot, you might not ever get to the moon. It is great if you do, but, regardless, you learn a lot.”

Continue reading…

  • Got any news, tips or want to contact us directly? Feel free to email us: esistme@gmail.com. To see more posts like this please subscribe to our newsletter by entering your email. By subscribing you’ll receive the top trending news delivered to your inbox.
    __

This article and its images were originally posted on [Motherboard] June 6, 2018 at 10:03AM. All credit to both the author Daniel Oberhaus and Motherboard | ESIST.T>G>S Recommended Articles Of The Day.

 

 

International Natural Product Sciences Taskforce (INPST) 2018 Science Communication Award

Your daily selection of the latest science news!

According to International Natural Product Sciences Taskforce (INPST)

Rules for participation

  1. The INPST 2018 Science Communication Award will be given in Gold (2000 USD), Silver (1000 USD), and Bronze (500 USD) to the authors of the three best blog posts that will be published on the INPST website in 2018.

  2. Each blog post entered for the INPST 2018 Science Communication Award needs to have a minimum of 1000 words and at least one image (photo, scheme, or other graphical representation), and needs to be sent as a Word file to marc.diederich@me.com (the submission deadline is December 31, 2018). An example of the required submission format can be viewed here.

  3. The submitted blog posts need to focus on a life sciences-related topic and be written in easily understandable (layman’s) terms. Participation with more than one blog post is allowed. Blog posts with more than one author are allowed (if a blog post with several authors wins, the award will be divided into equal parts among the participating authors). An example of a published blog can be viewed here.

  4. The winners will be selected based on the quality of the writing and on the public interest it provokes (e.g., reflected in parameters such as the number of page views and the number of shares on social media). The winners will be announced in March 2019.

  5. Why participate in the INPST 2018 Science Communication Award contest? In addition to the monetary awards, each of the three winners will be honored with a certificate (in Gold/Silver/Bronze) issued by the distinguished Evaluation Committee. Each blog post published on the INPST website will confer exceptional scientific and public visibility on both the participating authors and the covered topic (which could be a great chance to promote a scientific topic or research of particular personal interest).

 

Keywords: science communication, blogging, science writing awards, blogger contest, science communication awards, blogs, bloggers, blogging competition, science writing contest.

 

Evaluation Committee of the INPST 2018 Science Communication Award: Atanas G. Atanasov, Bernd L. Fiebich, Ge Lin, Marc Diederich (Chair of the Committee), Michael Heinrich, Oliver Grundmann, Rachel Mata, and Volkmar Weissig.

 

The INPST 2018 Science Communication Award is sponsored by Envision Biotechnology.

Read more…

  • Got any news, tips or want to contact us directly? Feel free to email us: esistme@gmail.com. To see more posts like this please subscribe to our newsletter by entering your email. By subscribing you’ll receive the top trending news delivered to your inbox.
    __

This article and images were originally posted on [International Natural Product Sciences Taskforce (INPST)] May 2, 2018 at 04:16AM. Credit to Author and International Natural Product Sciences Taskforce (INPST) | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

This DeepMind AI Spontaneously Developed Digital Navigation ‘Neurons’ Like Ours

 

Your daily selection of the latest science news!

According to Singularity Hub


When Google DeepMind researchers trained a neural network to tackle a virtual maze, it spontaneously developed digital equivalents to the specialized neurons called grid cells that mammals use to navigate. Not only did the resulting AI system have superhuman navigation capabilities, the research could provide insight into how our brains work.

Grid cells were the subject of the 2014 Nobel Prize in Physiology or Medicine, alongside other navigation-related neurons. These cells are arranged in a lattice of hexagons, and the brain effectively overlays this pattern onto its environment. Whenever the animal crosses a point in space represented by one of the corners of these hexagons, a neuron fires, allowing the animal to track its movement.

Mammalian brains actually have multiple arrays of these cells. These arrays create overlapping grids of different sizes and orientations that together act like an in-built GPS. The system even works in the dark and independently of the animal’s speed or direction.

Exactly how these cells work, and the full range of their functions, is still somewhat of a mystery, though. One recently proposed hypothesis suggests they could be used for vector-based navigation—working out the distance and direction to a target “as the crow flies.”

That’s a useful capability because it makes it possible for animals or artificial agents to quickly work out and choose the best route to a particular destination and even find shortcuts.

So, the researchers at DeepMind decided to see if they could test the idea in silico using neural networks, as they roughly mimic the architecture of the brain.

To start with, they used simulations of how rats move around square and circular environments to train a neural network to do path integration—a technical name for using dead-reckoning to work out where you are by keeping track of what direction and speed you’ve moved from a known point.
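Path integration itself is simple to illustrate (this is just the task definition, not DeepMind’s network): keep a running position estimate from nothing but speed and heading signals.

```python
# Path integration (dead reckoning) in its simplest form: accumulate a position
# estimate from speed and heading alone, with no landmarks. This illustrates
# the task the network was trained on, not the network itself.
import math

def integrate_path(start, steps):
    """steps is a sequence of (speed, heading_radians) pairs, one per unit time."""
    x, y = start
    for speed, heading in steps:
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return x, y

# A rat-like wander: three steps forward, a quarter turn left, two more steps.
trajectory = [(1.0, 0.0)] * 3 + [(1.0, math.pi / 2)] * 2
print(integrate_path((0.0, 0.0), trajectory))   # approximately (3.0, 2.0)
```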

They found that, after training, patterns of activity that looked very similar to grid cells spontaneously appeared in one of the layers of the neural network. The researchers hadn’t programmed the model to exhibit this behavior.

To test whether these grid cells could play a role in vector-based navigation, they augmented the network so it could be trained using reinforcement learning. They set it to work navigating challenging virtual mazes and tweaked its performance by giving rewards for good navigation.

The agent quickly learned how to navigate the mazes, taking shortcuts when they became available and outperforming a human expert, according to results published in the journal Nature this week.

To test whether the digital grid cells were responsible for this performance, the researchers carried out another experiment where they prevented the artificial grid cells from forming, which significantly reduced the ability of the system to efficiently navigate. The DeepMind team says this suggests these cells are involved in vector-based navigation as had been hypothesized.

“It is striking that the computer model, coming from a totally different perspective, ended up with the grid pattern we know from biology,” Edvard Moser, a neuroscientist at the Kavli Institute for Systems Neuroscience in Trondheim, Norway, and one of the Nobel winners who discovered grid cells, told Nature.

But how much can actually be learned about the human brain from the experiment is up for debate.

Stefan Leutgeb, a neurobiologist at the University of California, San Diego, told Quanta that the research makes a good case for grid cells being involved in vector navigation, but that it is ultimately limited by being a simulation on a computer. “This is a way in which it could work, but it doesn’t prove that it’s the way it works in animals,” he says.

Importantly, the research doesn’t really seem to explain how grid cells help with these kinds of navigating tasks, simply that they do. That’s in part due to the difficulty of interpreting neural networks, neuroscientists Francesco Savelli and James Knierim at Johns Hopkins University write in an accompanying opinion article in Nature.

“That the network converged on such a solution is compelling evidence that there is something special about grid cells’ activity patterns that supports path integration,” they write. “The black-box character of deep-learning systems, however, means that it might be hard to determine what that something is.”

The DeepMind researchers are more optimistic though. In a blog post, they say their findings not only support the theory that grid cells are involved in vector-based navigation, but also more broadly demonstrate the potential of using AI to test theories about how the brain works. That knowledge in turn could eventually be put back to use in developing more powerful AI systems.

In general, DeepMind is profoundly interested in how the fields of neuroscience and AI can connect and inform each other—writing papers on the subject and using inspiration from the brain to make powerful neural networks capable of amazing and surprising feats.

The research on grid-cells is still very much basic science, but being able to mimic the powerful navigational capabilities of animals could be extremely useful for everything from robots to drones to self-driving cars.

Image Credit: DeepMind

Read more…

  • Got any news, tips or want to contact us directly? Feel free to email us: esistme@gmail.com. To see more posts like this please subscribe to our newsletter by entering your email. By subscribing you’ll receive the top trending news delivered to your inbox.
    __

This article and images were originally posted on [Singularity Hub] May 14, 2018 at 11:02AM. Credit to Author and Singularity Hub | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

An electronic rescue dog

Your daily selection of the latest science news!

According to Latest Science News — ScienceDaily

Trained rescue dogs are still the best disaster workers — their sensitive noses help them to track down people buried by earthquakes or avalanches. Like all living creatures, however, dogs need to take breaks every now and again. They are also often not immediately available in disaster areas, and dog teams have to travel from further afield.

A new measuring device from researchers at ETH Zurich led by Sotiris Pratsinis, Professor of Process Engineering, however, is always ready for use. The scientists had previously developed small and extremely sensitive gas sensors for acetone, ammonia, and isoprene — all metabolic products that we emit in low concentrations via our breath or skin. The researchers have now combined these sensors in a device with two commercial sensors for CO2 and moisture.

Chemical “fingerprint”

As shown by laboratory tests in collaboration with Austrian and Cypriot scientists, this sensor combination can be quite useful when searching for entrapped people. The researchers used a test chamber at the University of Innsbruck’s Institute for Breath Research in Dornbirn as an entrapment simulator. Volunteers each remained in this chamber for two hours.

“The combination of sensors for various chemical compounds is important, because the individual substances could come from sources other than humans. CO2, for example, could come from either a buried person or a fire source,” explains Andreas Güntner, a postdoc in Pratsinis’ group and lead author of the study, published in the journal Analytical Chemistry. The combination of sensors provides the scientists with reliable indicators of the presence of people.
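A toy sketch of that reasoning, with invented thresholds rather than values from the study, shows why agreement across several channels is more trustworthy than any single reading:

```python
# Sketch of why combining sensors matters: no single reading is conclusive,
# but several human-associated signals rising together is a strong indicator.
# The thresholds below are invented for illustration, not taken from the study.

def human_likely(acetone_ppb, ammonia_ppb, isoprene_ppb, co2_ppm, humidity_pct):
    markers = [
        acetone_ppb > 100,    # metabolite mostly carried in breath
        isoprene_ppb > 50,    # metabolite mostly carried in breath
        ammonia_ppb > 100,    # mostly emitted through the skin
        co2_ppm > 800,        # could also be a fire, hence never used alone
        humidity_pct > 60,    # exhaled air is humid
    ]
    return sum(markers) >= 3  # require agreement across several channels

print(human_likely(acetone_ppb=150, ammonia_ppb=180,
                   isoprene_ppb=70, co2_ppm=1200, humidity_pct=75))  # True
```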

Suitable for inaccessible areas

The researchers also showed that there are differences between the compounds emitted via our breath and skin. “Acetone and isoprene are typical substances that we mostly breathe out. Ammonia, however, is usually emitted through the skin,” explains ETH professor Pratsinis. In the experiments in the entrapment simulator, the participants wore a breathing mask. In the first part of the experiment, the exhaled air was channelled directly out of the chamber; in the second part, it remained inside. This allowed the scientists to create separate breath and skin emission profiles.

The ETH scientists’ gas sensors are the size of a small computer chip. “They are about as sensitive as most ion mobility spectrometers, which cost thousands of Swiss francs and are the size of a suitcase,” says Pratsinis. “Our easy-to-handle sensor combination is by far the smallest and cheapest device that is sufficiently sensitive to detect entrapped people. In a next step, we would like to test it during real conditions, to see whether it is suited for use in searches after earthquakes or avalanches.”

While electronic devices are already in use during searches after earthquakes, these work with microphones and cameras. These only help to locate entrapped people who are capable of making themselves heard or are visible beneath ruins. The ETH scientists’ idea is to complement these resources with the chemical sensors. They are currently looking for industry partners or investors to support the construction of a prototype. Drones and robots could also be equipped with the gas sensors, allowing difficult-to-reach or inaccessible areas to also be searched. Further potential applications could include detecting stowaways and exposing human trafficking.

Story Source:

Materials provided by ETH Zurich. Note: Content may be edited for style and length.

Read more…

  • Got any news, tips or want to contact us directly? Feel free to email us: esistme@gmail.com. To see more posts like this please subscribe to our newsletter by entering your email. By subscribing you’ll receive the top trending news delivered to your inbox.
    __

This article and images were originally posted on [Latest Science News — ScienceDaily] May 16, 2018 at 03:07PM. Credit to Author and Latest Science News — ScienceDaily | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

12 data science mistakes to avoid

Your daily selection of the hottest trending tech news!

According to CIO

AI, machine learning and analytics aren’t just the latest buzzwords; organizations large and small are looking at AI tools and services in hopes of improving business processes, customer support and decision making with big data, predictive analytics and automated algorithmic systems. IDC predicts that 75 percent of enterprise and ISV developers will use AI or machine learning in at least one of their applications in 2018.

But expertise in data science isn’t nearly as widespread as the interest in using data to make decisions and improve results. If your business is just getting started with data science, here are some common mistakes that you’ll want to avoid making.

1. Assuming your data is ready to use — and all you need

You need to check both the quality and volume of the data you’ve collected and are planning to use. “The majority of your time, often 80 percent of your time, is going to be spent getting and cleaning data,” says Jonathan Ortiz, data scientist and knowledge engineer at data.world. “That’s assuming that you’re even tracking what you need to be tracking for a data scientist to do their work.”…
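A small pandas sketch of the kind of checking and cleaning that consumes that 80 percent (the file name and column names here are hypothetical):

```python
# The unglamorous 80 percent: checking and cleaning before any modelling.
# The file name and column names are hypothetical, for illustration only.
import pandas as pd

df = pd.read_csv("events.csv")

print(df.shape)                                               # enough rows at all?
print(df.isna().mean().sort_values(ascending=False).head())   # worst columns first

df = df.drop_duplicates()                                     # remove exact repeats
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")   # junk values -> NaN
df = df.dropna(subset=["amount", "customer_id"])              # keep usable rows only
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
```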

Read more…

  • Got any news, tips or want to contact us directly? Feel free to email us: esistme@gmail.com. Also subscribe now to receive daily or weekly posts.

__

This article and images were originally posted on [CIO] May 9, 2018 at 06:07AM. Credit to Author and CIO | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Computer scientists have found the longest straight line you could sail without hitting land

Your daily selection of the hottest trending tech news!

According to New on MIT Technology Review

Back in 2012, a curious debate emerged on the discussion website Reddit, specifically on a subreddit called /r/MapPorn. Here the user Kepleronlyknows posted a map of the world purporting to show the longest navigable straight-line path over water without hitting land. The route began in Pakistan and followed a great circle under Africa and South America until it hit eastern Russia.

The post generated huge debate, with much head-scratching and pawing over charts and globes. The big question was whether the claim was correct—could there be a different straight-line route over water that was longer but uninterrupted by land of any kind? At the same time, the same question arose for land—what was the longest straight-line route uninterrupted by lakes or seas?

For cartographers, it is clear that the answers would have to follow a great circle: an arc along one of the many largest imaginary circles that can be drawn around a sphere. Great circles always follow the shortest path between two points on a sphere. But how to find the great circles that contain the solutions?

The longest straight-line land journey on Earth.

We now have an answer thanks to the work of Rohan Chabukswar at the United Technologies Research Center in Ireland and Kushal Mukherjee at IBM Research in India. These guys have developed an algorithm for calculating the longest straight-line path on land or sea.

One way to solve this problem is by brute force—measuring the length of every possible straight-line path over land and water. This would be time-consuming to say the least. A global map with resolution of 1.85 kilometers has over 230 billion great circles. Each of these consists of 21,600 individual points, making a total of over five trillion points to consider.

The longest straight-line sea journey without hitting land.

But Chabukswar and Mukherjee have developed a quicker method using an algorithm that exploits a technique known as branch and bound.

This works by considering potential solutions as branches on a tree. Instead of evaluating all solutions, the algorithm checks one branch after another. That’s called branching, and it is essentially the same as a brute-force search. But another technique, called bounding, significantly reduces the task. Each branch contains a subset of potential solutions, one of which is the optimal solution. The trick is to find a property of the subsets that depends on how close the solutions come to the optimal one.

The bounding part of the algorithm measures this property to determine whether the subset of solutions is closer to the optimal value. If it isn’t, the algorithm ignores this branch entirely. If it is closer, this becomes the best subset of solutions, and the next branch is compared against it.

This process continues until all branches have been tested, revealing the one that contains the optimal solution. The branching algorithm then divides this branch up into smaller branches and the process repeats until it arrives at the single optimal solution.

The trick that Chabukswar and Mukherjee have perfected is to find a mathematical property of great-circle paths that bounds the optimal solution for straight-line paths. They then create an algorithm that uses this to find the longest path.
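A generic branch-and-bound sketch on a toy 0/1 knapsack problem (not the authors’ great-circle algorithm) shows the interplay described above: an optimistic bound lets whole branches be discarded without ever enumerating them.

```python
# Generic branch and bound on a toy 0/1 knapsack problem, to illustrate the
# branching/bounding interplay; it is not the authors' great-circle algorithm.

items = [(60, 10), (100, 20), (120, 30)]   # (value, weight), sorted by value/weight
CAPACITY = 50

def fractional_bound(i, value, weight):
    """Optimistic bound: fill the remaining capacity with fractional items."""
    for v, w in items[i:]:
        if weight + w <= CAPACITY:
            value, weight = value + v, weight + w
        else:
            return value + v * (CAPACITY - weight) / w
    return value

best = 0

def search(i, value, weight):
    global best
    if weight > CAPACITY:
        return
    if i == len(items):
        best = max(best, value)
        return
    if fractional_bound(i, value, weight) <= best:
        return                                                # prune this branch
    search(i + 1, value + items[i][0], weight + items[i][1])  # take item i
    search(i + 1, value, weight)                              # skip item i

search(0, 0, 0)
print(best)   # 220: the second and third items fill the knapsack exactly
```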

“The algorithm returned the longest path in about 10 minutes of computation for water path, and 45 minutes of computation for land path on a standard laptop,” say the researchers.

It turns out that Kepleronlyknows was entirely correct. The longest straight-line path over water begins in Sonmiani, Balochistan, Pakistan, passes between Africa and Madagascar and then between Antarctica and Tierra del Fuego in South America, and ends in the Karaginsky District, Kamchatka Krai, in Russia. It is 32,089.7 kilometers long.

“This path is visually the same one as found by kepleronlyknows, thus proving his [sic] assertion,” say Chabukswar and Mukherjee.

The longest path over land runs from near Jinjiang, Fujian, in China, weaves through Mongolia, Kazakhstan, and Russia, and finally reaches Europe to finish near Sagres in Portugal. In total the route passes through 15 countries over 11,241.1 kilometers.

The question now is: who will be the first to make these journeys, when, and how?

Ref: arxiv.org/abs/1804.07389 : Longest Straight Line Paths on Water or Land on the Earth

Read more…

  • Got any news, tips or want to contact us directly? Feel free to email us: esistme@gmail.com. Also subscribe now to receive daily or weekly posts.

__

This article and images were originally posted on [New on MIT Technology Review] April 30, 2018 at 02:54PM. Credit to Author and New on MIT Technology Review | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

AI better than most human experts at detecting cause of preemie blindness

Your daily selection of the latest science news!

According to Medical Xpress

This image of an eye shows how twisted and dilated vessels of the retina can indicate retinopathy of prematurity, or ROP, the leading cause of childhood blindness. Credit: Michael Chiang/OHSU

An algorithm that uses artificial intelligence can automatically and more accurately diagnose a potentially devastating cause of childhood blindness than most expert physicians, a paper published in JAMA Ophthalmology suggests.

The finding could help prevent blindness in more babies with the disease, called retinopathy of prematurity, or ROP. Musician Stevie Wonder went blind due to this condition.

The algorithm accurately diagnosed the condition in images of infant eyes 91 percent of the time. On the other hand, a team of eight physicians with ROP expertise who examined the same images had an average accuracy rate of 82 percent.

“There’s a huge shortage of ophthalmologists who are trained and willing to diagnose ROP. This creates enormous gaps in care, even in the United States, and sadly leads too many children around the world to go undiagnosed,” said the study’s co-lead researcher, Michael Chiang, M.D., a professor of ophthalmology and medical informatics & clinical epidemiology in the OHSU School of Medicine and a pediatric ophthalmologist at the Elks Children’s Eye Clinic in the OHSU Casey Eye Institute.

“This algorithm distills the knowledge of ophthalmologists who are skilled at identifying ROP and puts it into a mathematical model so clinicians who may not have that same wealth of experience can still help babies receive a timely, accurate diagnosis,” said the other lead researcher, Jayashree Kalpathy-Cramer, Ph.D., of the Athinoula A. Martinos Center for Biomedical Imaging at Massachusetts General Hospital, who is also an associate professor of radiology at Harvard Medical School.

Leading cause of childhood blindness

Retinopathy of prematurity is caused by abnormal blood vessel growth near the retina, the light-sensitive portion in the back of an eye. The condition is common in premature infants and is the leading cause of childhood blindness globally.

The National Eye Institute of the National Institutes of Health reports that up to 16,000 U.S. babies experience retinopathy of prematurity to some degree, but only up to 600 become legally blind each year as a result. The condition is becoming more common as medical care for premature babies improves.

The disease is diagnosed by visually inspecting a baby’s eye. Physicians typically use a magnifying device that shines light into a baby’s dilated eye, but that approach can lead to variable and subjective diagnoses.

Computational smarts

Artificial intelligence, also called AI, enables machines to think like humans and is a growing field in health care. Last month, the FDA approved an AI device that detects diabetes-related eye disease. Others have tried developing computerized systems to diagnose retinopathy of prematurity, but none have been able to match the accuracy of visual diagnosis by physicians.

This algorithm specifically uses deep learning, a form of AI that mimics how humans perceive the world through vision, including identifying objects. The MGH researchers combined two existing AI models to create the algorithm, while the OHSU researchers developed extensive reference standards to train it.

They first trained the algorithm to identify retinal vessels in more than 5,000 pictures taken during infant visits to an ophthalmologist. Next, they trained it to differentiate between healthy and diseased vessels. Afterward, they compared the algorithm’s accuracy with that of trained experts who viewed the same images and discovered it performed better than most of the expert physicians.
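As a purely generic sketch of that training idea (not the architecture the study combined, and using random stand-in tensors rather than real fundus photographs), a two-class deep image classifier can be set up and trained like this:

```python
# Generic sketch of a two-class image classifier, purely to illustrate the
# training idea; this is not the model from the JAMA Ophthalmology study, and
# the data here is random stand-in tensors rather than real retinal images.
import torch
from torch import nn
from torchvision import models

model = models.resnet18()                       # a standard backbone, untrained
model.fc = nn.Linear(model.fc.in_features, 2)   # healthy vs diseased vessels

images = torch.randn(8, 3, 224, 224)            # stand-in batch of photographs
labels = torch.randint(0, 2, (8,))              # stand-in expert labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(3):                           # a few illustrative steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```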

The full research team is now working with a collaborator in India to see if the algorithm can diagnose ROP in Indian babies as well as it did for the group of primarily Caucasian infants involved in this study. They are also exploring whether the algorithm can diagnose the condition in images of other parts of the retina besides vessels. The ultimate goal is to enable physicians to incorporate the technology into their clinical practices.

Read more…

  • Got any news, tips or want to contact us directly? Feel free to email us: esistme@gmail.com. Also subscribe now to receive daily or weekly posts.
    __

This article and images were originally posted on [Medical Xpress] May 3, 2018 at 02:15AM. Credit to Author and Medical Xpress | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

AI will be used by humanitarian organisations – this could deepen neocolonial tendencies

Your daily selection of the hottest trending tech news!

According to Science + Technology – The Conversation

Artificial intelligence, or AI, is undergoing a period of massive expansion. This is not because computers have achieved human-like consciousness, but because of advances in machine learning, where computers learn from huge databases how to classify new data. At the cutting edge are the neural networks that have learned to recognise human faces or play Go.

Recognising patterns in data can also be used as a predictive tool. AI is being applied to echocardiograms to predict heart disease, to workplace data to predict if employees are going to leave, and to social media feeds to detect signs of incipient depression or suicidal tendencies. Any walk of life where there is abundant data – and that means pretty much every aspect of life – is being eyed up by government or business for the application of AI.

One activity that currently seems distant from AI is humanitarianism; the organisation of on-the-ground aid to fellow human beings in crisis due to war, famine or other disaster. But humanitarian organisations too will adopt AI. Why? Because it seems able to answer questions at the heart of humanitarianism – questions such as who we should save, and how to be effective at scale. AI also resonates strongly with existing modes of humanitarian thinking and doing, in particular the principles of neutrality and universality. Humanitarianism (it is believed) does not take sides, is unbiased in its application and offers aid irrespective of the particulars of a local situation.

The way machine learning consumes big data and produces predictions certainly suggests it can both grasp the enormity of the humanitarian challenge and provide a data-driven response. But the nature of machine learning operations mean they will actually deepen some of the problems of humanitarianism, and introduce new ones of their own.

The maths

Exploring these questions requires a short detour into the concrete operations of machine learning, if we are to bypass the misinformation and mystification that attaches to the term AI. Because there is no intelligence in artificial intelligence. Nor does it really learn, even though its technical name is machine learning.

AI is simply mathematical minimisation. Remember how at school you would fit a straight line to a set of points, picking the line that minimises the differences overall? Machine learning does the same for complex patterns, fitting input features to known outcomes by minimising a cost function. The result becomes a model that can be applied to new data to predict the outcome.
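That straight-line fit, done the machine-learning way, is a few lines of code: define a cost (here, mean squared error) and repeatedly nudge the parameters downhill.

```python
# Machine learning as minimisation: fit y ≈ a*x + b by repeatedly nudging
# a and b in the direction that reduces the mean squared error.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 200)   # noisy data with a known pattern

a, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    error = (a * x + b) - y                 # prediction minus known outcome
    a -= lr * 2 * np.mean(error * x)        # gradient of the cost w.r.t. a
    b -= lr * 2 * np.mean(error)            # gradient of the cost w.r.t. b

print(round(a, 2), round(b, 2))             # recovers roughly 3 and 2
```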

Any and all data can be pushed through machine learning algorithms. Anything that can be reduced to numbers and tagged with an outcome can be used to create a model. The equations don’t know or care if the numbers represent Amazon sales or earthquake victims.

This banality of machine learning is also its power. It’s a generalised numerical compression of questions that matter – there are no comprehensions within the computation; the patterns indicate correlation, not causation. The only intelligence comes in the same sense as military intelligence; that is, targeting. The operations are ones of minimising the cost function in order to optimise the outcome.

And the models produced by machine learning can be hard to reverse into human reasoning. Why did it pick this person as a bad parole risk? What does that pattern represent? We can’t necessarily say. So there is an opacity at the heart of the methods. It doesn’t augment human agency but distorts it.

Compartmentalising the world.
Zapp2Photo/Shutterstock.com

Logic of the powerful

Machine learning doesn’t just make decisions without giving reasons; it modifies our very idea of reason. That is, it changes what is knowable and what is understood as real.

For example, in some jurisdictions in the US, if an algorithm produces a prediction that an arrested person is likely to re-offend, that person will be denied bail. Pattern-finding in data becomes a calculative authority that triggers substantial consequences.

Machine learning, then, is not just a method but a machinic philosophy where abstract calculation is understood to access a truth that is seen as superior to the sense-making of ordinary perception. And as such, the calculations of data science can end up counting more than testimony.

Of course, the humanitarian field is not naive about the perils of datafication. It is well known that machine learning could propagate discrimination because it learns from social data which is itself often biased. And so humanitarian institutions will naturally be more careful than most to ensure all possible safeguards against biased training data.

But the problem goes beyond explicit prejudice. The deeper effect of machine learning is to produce the categories through which we will think about ourselves and others. Machine learning also produces a shift to preemption: foreclosing futures on the basis of correlation rather than causation. This constructs risk in the same way that Twitter determines trending topics, allocating and withholding resources in a way that algorithmically demarcates the deserving and the undeserving.

What will AI add to the disciplinary ordering of the refugee camp?
Clemens Bilan/EPA

We should perhaps be particularly worried about these tendencies because despite its best intentions, the practice of humanitarianism often shows neocolonial tendencies. By claiming neutrality and universality, algorithms assert the superiority of abstract knowledge generated elsewhere. By embedding the logic of the powerful to determine what happens to people at the periphery, humanitarian AI becomes a neocolonial mechanism that acts in lieu of direct control.

As things stand, machine learning and so-called AI will not be any kind of salvation for humanitarianism. Instead, it will deepen the already deep neocolonial and neoliberal dynamics of humanitarian institutions through algorithmic distortion.

But no apparatus is a closed system; the impact of machine learning is contingent and can be changed. This is as important for humanitarian AI as for AI generally – for, if an alternative technics is not mobilised by approaches such as people’s councils, the next generation of humanitarian scandals will be driven by AI.

 

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Science + Technology – The Conversation] April 23, 2018 at 10:54AM. Credit to Author and Science + Technology – The Conversation | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Robot developed for automated assembly of designer nanomaterials

Your daily selection of the latest science news!

According to Latest Science News — ScienceDaily


A current area of intense interest in nanotechnology is van der Waals heterostructures, which are assemblies of atomically thin two-dimensional (2D) crystalline materials that display attractive conduction properties for use in advanced electronic devices.

A representative 2D semiconductor is graphene, which consists of a honeycomb lattice of carbon atoms that is just one atom thick. The development of van der Waals heterostructures has been restricted by the complicated and time-consuming manual operations required to produce them. That is, the 2D crystals typically obtained by exfoliation of a bulk material need to be manually identified, collected, and then stacked by a researcher to form a van der Waals heterostructure. Such a manual process is clearly unsuitable for industrial production of electronic devices containing van der Waals heterostructures.

Now, a Japanese research team led by the Institute of Industrial Science at The University of Tokyo has solved this issue by developing an automated robot that greatly speeds up the collection of 2D crystals and their assembly to form van der Waals heterostructures. The robot consists of an automated high-speed optical microscope that detects crystals, the positions and parameters of which are then recorded in a computer database. Customized software is used to design heterostructures using the information in the database. The heterostructure is then assembled layer by layer by robotic equipment directed by the computer algorithm. The findings were reported in Nature Communications.

“The robot can find, collect, and assemble 2D crystals in a glove box,” study first author Satoru Masubuchi says. “It can detect 400 graphene flakes an hour, which is much faster than the rate achieved by manual operations.”

When the robot was used to assemble graphene flakes into van der Waals heterostructures, it could stack up to four layers an hour with just a few minutes of human input required for each layer. The robot was used to produce a van der Waals heterostructure consisting of 29 alternating layers of graphene and hexagonal boron nitride (another common 2D semiconductor). The record layer number of a van der Waals heterostructure produced by manual operations is 13, so the robot has greatly increased our ability to access complex van der Waals heterostructures.

“A wide range of materials can be collected and assembled using our robot,” co-author Tomoki Machida explains. “This system provides the potential to fully explore van der Waals heterostructures.”

The development of this robot will greatly facilitate production of van der Waals heterostructures and their use in electronic devices, taking us a step closer to realizing devices containing atomic-level designer materials.

Story Source:

Materials provided by Institute of Industrial Science, The University of Tokyo. Note: Content may be edited for style and length.

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Latest Science News — ScienceDaily] April 18, 2018 at 02:42PM. Credit to Author and Latest Science News — ScienceDaily | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

FDA approves AI-powered diagnostic that doesn’t need a doctor’s help

Your daily selection of the hottest trending tech news!

According to New on MIT Technology Review

On Wednesday the US Food and Drug Administration gave permission to a company called IDx to market the first diagnostic device that uses artificial intelligence. The approval marks medicine’s entrance into an era of “diagnosis by software.”

What it does: The software program is designed to detect greater than a mild level of diabetic retinopathy, a cause of vision loss, in adults with diabetes, a disease that affects 30 million people in the US. Diabetic retinopathy occurs when high blood sugar damages blood vessels in the retina.

How it works: The software uses an AI algorithm to analyze images of the eye taken with a special retinal camera. A doctor uploads the images to a cloud server, and the software then delivers a positive or negative result.

A regulatory milestone: The FDA has recently cleared a few other products that use AI. But this is the first device authorized by the agency to provide a screening decision without the need for a doctor to also interpret the image or results.

A look ahead: In a series of tweets by FDA commissioner Scott Gottlieb today, he hinted that more AI devices could get the agency’s seal of approval soon. Gottlieb said the FDA is “taking steps to promote innovation and support the use of artificial intelligence-based medical devices.” Not to worry, though: such AI diagnostics probably won’t be replacing doctors or other medical professionals anytime soon.

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [New on MIT Technology Review] April 11, 2018 at 04:06PM. Credit to Author and New on MIT Technology Review | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

EGaming, the Humble Book Bundle: A.I. by Packt is LIVE!

 

[Image: Humble Book Bundle: A.I. logo]

The Humble Book Bundle: A.I. by Packt just launched on Monday, March 12 at 11 a.m. Pacific time! We’ve teamed up with Packt for our newest bundle. Get titles like Unreal Engine 4 AI Programming Essentials, Machine Learning with R, and Deep Learning with TensorFlow. Also, this bundle supports Code For America – or a charity of your choice!

Humble Book Bundle: A.I. by Packt

__

This article and images were originally posted on [ESIST] and sponsored by HUMBLE BUNDLE
+ Got any news, tips or want to contact us directly? Email esistme@gmail.com

ESIST may receive a commission for any purchases made through our affiliate links. All commissions made will be used to support and expand ESIST.Tech

 

 

 

Seeing is believing—precision atom qubits achieve major quantum computing milestone

Your daily selection of the latest science news!

According to Phys.org – latest science and technology news stories

A scanning tunnelling microscope image showing the electron wave function of a qubit made from a phosphorus atom precisely positioned in silicon. Credit: UNSW

The unique Australian approach of creating quantum bits from precisely positioned individual atoms in silicon is reaping major rewards, with UNSW Sydney-led scientists showing for the first time that they can make two of these atom qubits “talk” to each other.

The team – led by UNSW Professor Michelle Simmons, Director of the Centre of Excellence for Quantum Computation and Communication Technology, or CQC2T – is the only group in the world that has the ability to see the exact position of their qubits in the solid state.

Simmons’ team creates the atom qubits by precisely positioning and encapsulating individual phosphorus atoms within a silicon chip. Information is stored on the quantum spin of a single phosphorus electron.

The team’s latest advance – the first observation of controllable interactions between two of these qubits – is published in the journal Nature Communications. It follows two other recent breakthroughs using this unique approach to building a quantum computer.

By optimising their nano-manufacturing process, Simmons’ team has also recently created quantum circuitry with the lowest recorded electrical noise of any semiconductor device.

And they have created an electron spin with the longest lifetime ever reported in a nano-electric device – 30 seconds.

“The combined results from these three research papers confirm the extremely promising prospects for building multi-qubit systems using our atom qubits,” says Simmons.

2018 Australian of the Year inspired by Richard Feynman

Simmons, who was named 2018 Australian of the Year in January for her pioneering research, says her team’s ground-breaking work is inspired by the late physicist Richard Feynman.

“Feynman said: ‘What I cannot create, I do not understand’. We are enacting that strategy systematically, from the ground up, atom by atom,” says Simmons.

“In placing our phosphorus atoms in the silicon to make a qubit, we have demonstrated that we can use a scanning probe to directly measure the atom’s wave function, which tells us its exact physical location in the chip. We are the only group in the world who can actually see where our qubits are.

“Our competitive advantage is that we can put our high-quality qubit where we want it in the chip, see what we’ve made, and then measure how it behaves. We can add another qubit nearby and see how the two wave functions interact. And then we can start to generate replicas of the devices we have created,” she says.

UNSW Professor Michelle Simmons, Director of the Centre of Excellence for Quantum Computation and Communication Technology, with a scanning tunnelling microscope. Credit: UNSW

For the new study, the team placed two qubits – one made of two phosphorus atoms and one made of a single phosphorus atom – 16 nanometres apart in a silicon chip.

 

“Using electrodes that were patterned onto the chip with similar precision techniques, we were able to control the interactions between these two neighbouring qubits, so the quantum spins of their electrons became correlated,” says study lead co-author, Dr Matthew Broome, formerly of UNSW and now at the University of Copenhagen.

“It was fascinating to watch. When the spin of one electron is pointing up, the other points down, and vice versa.

“This is a major milestone for the technology. These type of spin correlations are the precursor to the entangled states that are necessary for a quantum computer to function and carry out complex calculations,” he says.
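
For readers who want the textbook picture, the anticorrelated behaviour Broome describes is captured by a two-spin superposition of the form below. This is the generic form of such a state, not necessarily the exact state prepared in the UNSW experiment.

```latex
% Generic anticorrelated (singlet-like) two-spin state: measuring one spin "up"
% implies the other is "down", and vice versa.
\[
  \lvert \psi \rangle \;=\; \frac{1}{\sqrt{2}}
  \left( \lvert \uparrow \downarrow \rangle \;-\; \lvert \downarrow \uparrow \rangle \right)
\]
```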

Study lead co-author, UNSW’s Sam Gorman, says: “Theory had predicted the two qubits would need to be placed 20 nanometres apart to see this correlation effect. But we found it occurs at only 16 nanometres apart.

“In our quantum world, this is a very big difference,” he says. “It is also brilliant, as an experimentalist, to be challenging the theory.”

Leading the race to build a quantum computer in silicon

UNSW scientists and engineers at CQC2T are leading the world in the race to build a quantum computer in silicon. They are developing parallel patented approaches using single atom and quantum dot qubits.

“Our hope is that both approaches will work well. That would be terrific for Australia,” says Simmons.

The UNSW team have chosen to work in silicon because it is among the most stable and easily manufactured environments in which to host qubits, and its long history of use in the conventional computer industry means there is a vast body of knowledge about this material.

In 2012, Simmons’ team, who use scanning tunnelling microscopes to position the individual phosphorus atoms in silicon and then molecular beam epitaxy to encapsulate them, created the world’s narrowest conducting wires, just four atoms across and one atom high.

In a recent paper published in the journal Nano Letters, they used similar atomic scale control techniques to produce circuitry about 2-10 nanometres wide and showed it had the lowest recorded electrical noise of any semiconductor circuitry. This work was undertaken jointly with Saquib Shamim and Arindam Ghosh of the Indian Institute of Science.

An artist’s impression of two qubits — one made of two phosphorus atoms and one made of a single phosphorus atom — placed 16 nanometres apart in a silicon chip. UNSW scientists were able to control the interactions between the two qubits so the quantum spins of their electrons became correlated. When the spin of one electron is pointing up, the other points down. Credit: UNSW

“It’s widely accepted that electrical noise from the circuitry that controls the qubits will be a critical factor in limiting their performance,” says Simmons.

“Our results confirm that silicon is an optimal choice, because its use avoids the problem most other devices face of having a mix of different materials, including dielectrics and surface metals, that can be the source of, and amplify, electrical noise.

“With our precision approach we’ve achieved what we believe is the lowest electrical noise level possible for an electronic nano-device in silicon – three orders of magnitude lower than even using carbon nanotubes,” she says.

In another recent paper in Science Advances, Simmons’ team showed their precision qubits in silicon could be engineered so the electron spin had a record lifetime of 30 seconds – up to 16 times longer than previously reported. The first author, Dr Thomas Watson, was at UNSW undertaking his PhD and is now at Delft University of Technology.

“This is a hot topic of research,” says Simmons. “The lifetime of the electron spin – before it starts to decay, for example, from spin up to spin down – is vital. The longer the lifetime, the longer we can store information in its quantum state.”

In the same paper, they showed that these long lifetimes allowed them to read out the electron spins of two qubits in sequence with an accuracy of 99.8 percent for each, which is the level required for practical error correction in a quantum processor.

Australia’s first quantum computing company

Instead of performing calculations one after another, like a conventional computer, a quantum computer would work in parallel and be able to look at all the possible outcomes at the same time. It would be able to solve problems in minutes that would otherwise take thousands of years.

Last year, Australia’s first quantum computing company – backed by a unique consortium of governments, industry and universities – was established to commercialise CQC2T’s world-leading research.

Operating out of new laboratories at UNSW, Silicon Quantum Computing Pty Ltd has the target of producing a 10-qubit demonstration device in silicon by 2022, as the forerunner to a silicon-based quantum computer.

The Australian government has invested $26 million in the $83 million venture through its National Innovation and Science Agenda, with an additional $25 million coming from UNSW, $14 million from the Commonwealth Bank of Australia, $10 million from Telstra and $8.7 million from the NSW Government.

It is estimated that industries comprising approximately 40% of Australia’s current economy could be significantly impacted by quantum computing. Possible applications include software design, machine learning, scheduling and logistical planning, financial analysis, stock market modelling, software and hardware verification, climate modelling, rapid drug design and testing, and early disease detection and prevention.

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Phys.org – latest science and technology news stories] March 7, 2018 at 05:42AM. Credit to Author and Phys.org – latest science and technology news stories | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

EGaming, the Humble Book Bundle: Mad Scientist is LIVE!


The Humble Book Bundle: Mad Scientist by Make: just launched on Wednesday, March 7 at 11 a.m. Pacific time! We’ve teamed up with Make: for our newest bundle! Get titles like Illustrated Guide to Home Chemistry, Make: High-Power Rockets, and Make: Fire. Plus, this bundle supports Maker Education!

Humble Book Bundle: Mad Scientist by Make:

 


This article and images were originally posted on [ESIST] and sponsored by HUMBLE BUNDLE
+ Got any news, tips or want to contact us directly? Email esistme@gmail.com

ESIST may receive a commission for any purchases made through our affiliate links. All commissions made will be used to support and expand ESIST.Tech

 

 

 

Why Hasn’t AI Mastered Language Translation?

Your daily selection of the latest science news!

According to Singularity Hub


In the myth about the Tower of Babel, people conspired to build a city and tower that would reach heaven. Their creator observed, “And now nothing will be restrained from them, which they have imagined to do.” According to the myth, God thwarted this effort by creating diverse languages so that they could no longer collaborate.

In our modern times, we’re experiencing a state of unprecedented connectivity thanks to technology. However, we’re still living under the shadow of the Tower of Babel. Language remains a barrier in business and marketing. Even though technological devices can quickly and easily connect, humans from different parts of the world often can’t.

Translation agencies step in, making presentations, contracts, outsourcing instructions, and advertisements comprehensible to all intended recipients. Some agencies also offer “localization” expertise. For instance, if a company is marketing in Quebec, the advertisements need to be in Québécois French, not European French. Risk-averse companies may be reluctant to invest in these translations. Consequently, these ventures haven’t achieved full market penetration.

Global markets are waiting, but AI-powered language translation isn’t ready yet, despite recent advancements in natural language processing and sentiment analysis. AI still has difficulties processing requests in one language, without the additional complications of translation. In November 2016, Google added a neural network to its translation tool. However, some of its translations are still socially and grammatically odd. I spoke to technologists and a language professor to find out why.

“To Google’s credit, they made a pretty massive improvement that appeared almost overnight. You know, I don’t use it as much. I will say this. Language is hard,” said Michael Housman, chief data science officer at RapportBoost.AI and faculty member of Singularity University.

He explained that the ideal scenario for machine learning and artificial intelligence is something with fixed rules and a clear-cut measure of success or failure. He named chess as an obvious example, and noted that machines were also able to beat the best human Go player. This happened faster than anyone anticipated because of the game’s very clear rules and limited set of moves.

Housman elaborated, “Language is almost the opposite of that. There aren’t as clearly-cut and defined rules. The conversation can go in an infinite number of different directions. And then of course, you need labeled data. You need to tell the machine to do it right or wrong.”

Housman noted that it’s inherently difficult to assign these informative labels. “Two translators won’t even agree on whether it was translated properly or not,” he said. “Language is kind of the wild west, in terms of data.”

Google’s technology is now able to consider the entirety of a sentence, as opposed to merely translating individual words. Still, the glitches linger. I asked Dr. Jorge Majfud, Associate Professor of Spanish, Latin American Literature, and International Studies at Jacksonville University, to explain why consistently accurate language translation has thus far eluded AI.

He replied, “The problem is that considering the ‘entire’ sentence is still not enough. The same way the meaning of a word depends on the rest of the sentence (more in English than in Spanish), the meaning of a sentence depends on the rest of the paragraph and the rest of the text, as the meaning of a text depends on a larger context called culture, speaker intentions, etc.”

He noted that sarcasm and irony only make sense within this widened context. Similarly, idioms can be problematic for automated translations.

“Google translation is a good tool if you use it as a tool, that is, not to substitute human learning or understanding,” he said, before offering examples of mistranslations that could occur.

“Months ago, I went to buy a drill at Home Depot and I read a sign under a machine: ‘Saw machine.’ Right below it, the Spanish translation: ‘La máquina vió,’ which means, ‘The machine did see it.’ Saw, not as a noun but as a verb in the preterit form,” he explained.
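
The “saw” mix-up is easy to reproduce with a toy dictionary lookup. The snippet below is only an illustration of why word-by-word substitution fails without sense disambiguation; it is not how Google Translate or any real system works, and the tiny dictionary is invented.

```python
# Toy illustration of the "Saw machine" mistranslation: a word-by-word lookup
# cannot tell the noun "saw" (la sierra) from the past tense of "to see" (vio).
# The dictionary and default part of speech are made up for this example.
word_senses = {
    "saw": {"NOUN": "sierra", "VERB_PAST": "vio"},
    "machine": {"NOUN": "máquina"},
}

def naive_translate(words, default_pos="VERB_PAST"):
    # Picks a sense with no regard for context, like the sign at the store.
    out = []
    for w in words:
        senses = word_senses.get(w.lower(), {})
        out.append(senses.get(default_pos) or next(iter(senses.values()), w))
    return " ".join(out)

print(naive_translate(["Saw", "machine"]))  # "vio máquina" (wrong sense of "saw")
```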

Dr. Majfud warned, “We should be aware of the fragility of their ‘interpretation.’ Because to translate is basically to interpret, not just an idea but a feeling. Human feelings and ideas that only humans can understand—and sometimes not even we, humans, understand other humans.”

He noted that cultures, gender, and even age can pose barriers to this understanding and also contended that an over-reliance on technology is leading to our cultural and political decline. Dr. Majfud mentioned that Argentinean writer Julio Cortázar used to refer to dictionaries as “cemeteries.” He suggested that automatic translators could be called “zombies.”

Erik Cambria is an academic AI researcher and assistant professor at Nanyang Technological University in Singapore. He mostly focuses on natural language processing, which is at the core of AI-powered language translation. Like Dr. Majfud, he sees the complexity and associated risks. “There are so many things that we unconsciously do when we read a piece of text,” he told me. Reading comprehension requires multiple interrelated tasks, which haven’t been accounted for in past attempts to automate translation.

Cambria continued, “The biggest issue with machine translation today is that we tend to go from the syntactic form of a sentence in the input language to the syntactic form of that sentence in the target language. That’s not what we humans do. We first decode the meaning of the sentence in the input language and then we encode that meaning into the target language.”

Additionally, there are cultural risks involved with these translations. Dr. Ramesh Srinivasan, Director of UCLA’s Digital Cultures Lab, said that new technological tools sometimes reflect underlying biases.

“There tend to be two parameters that shape how we design ‘intelligent systems.’ One is the values and you might say biases of those that create the systems. And the second is the world if you will that they learn from,” he told me. “If you build AI systems that reflect the biases of their creators and of the world more largely, you get some, occasionally, spectacular failures.”

Dr. Srinivasan said translation tools should be transparent about their capabilities and limitations. He said, “You know, the idea that a single system can take languages that I believe are very diverse semantically and syntactically from one another and claim to unite them or universalize them, or essentially make them sort of a singular entity, it’s a misnomer, right?”

Mary Cochran, co-founder of Launching Labs Marketing, sees the commercial upside. She mentioned that listings in online marketplaces such as Amazon could potentially be auto-translated and optimized for buyers in other countries.

She said, “I believe that we’re just at the tip of the iceberg, so to speak, with what AI can do with marketing. And with better translation, and more globalization around the world, AI can’t help but lead to exploding markets.”

Image Credit: igor kisselev / Shutterstock.com

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Singularity Hub] March 4, 2018 at 11:03AM. Credit to Author and Singularity Hub | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Google Wants to Make Military Spy Drones Even Smarter

Your daily selection of the latest science news!

According to Live Science


Google has partnered with the U.S. Department of Defense to help the agency develop smarter drone software. According to a report from Gizmodo, Google has agreed to provide the DOD with machine-learning software that will help the department’s computers better detect objects in surveillance drone footage.
The new partnership, which was leaked from an internal Google mailing list last week and confirmed yesterday (March 6) in a statement, is part of a DOD initiative called Project Maven (also known as the Algorithmic Warfare Cross-Function Team). According to a DOD news release issued last July, Project Maven aims to improve America’s ability to “[win] wars with computer algorithms and artificial intelligence” by rapidly upgrading the military’s ability to analyze drone footage.
The project’s first goal is to develop artificial intelligence capable of automatically detecting “38 classes of objects” regularly seen in military drone footage, the DOD said. This will ultimately help data analysts parse the “millions of hours of video” captured each year by drones surveilling combat zones in such countries as Iraq and Syria.
“AI will not be selecting a target [in combat] … any time soon,” Marine Corps Col. Drew Cukor, chief of the Algorithmic Warfare Cross-Function Team, said at a defense tech summit last year. “What AI will do is complement the human operator.”
Google will reportedly help the department achieve this goal by providing software building blocks known as TensorFlow application programming interfaces (APIs), which are often used in building neural networks.
“This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data,” a Google representative said in a statement. “The technology flags images for human review, and is for non-offensive uses only.”
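
Because the deal centres on open-source TensorFlow APIs for object recognition, here is a minimal sketch of that general capability using a stock pretrained image classifier from tf.keras. This is ordinary public TensorFlow usage, not Project Maven’s code, and a whole-image classifier is a simpler stand-in for the detection models such a project would actually need.

```python
# Minimal sketch: label the contents of a single image with a stock pretrained
# TensorFlow model. Illustrative only; not the software used in Project Maven.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")          # generic 1000-class classifier

img = tf.keras.preprocessing.image.load_img("frame.jpg", target_size=(224, 224))
x = preprocess_input(
    np.expand_dims(tf.keras.preprocessing.image.img_to_array(img), 0))

preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.3f}")               # e.g. "jeep: 0.42"
```
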
This explanation doesn’t sit well with some Google employees, Gizmodo reported, and some staffers are “outraged” at the company’s agreement to lend its technology to controversial drone operations. Countries around the world are nevertheless pouring funding into developing artificial intelligence for military purposes, which Cukor described as “an AI arms race.”
“No area will be left unaffected by the impact of this technology,” he said.
It is unknown whether the DOD is working with any other major tech companies as part of Project Maven at this time, or if Google stands alone.

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Live Science] March 7, 2018 at 11:07AM. Credit to Author and Live Science | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Is Art Created by AI Really Art?

Your daily selection of the latest science news!

According to Scientific American Content: Global


You’ve probably heard that automation is becoming commonplace in more fields of human endeavor. Or, in headline-speak: “Are Robots Coming for Your Job?”

 

You may also have heard that the last bastions of human exclusivity will probably be creativity and artistic judgment. Robots will be washing our windows long before they start creating masterpieces. Right?

 

Not necessarily. In reporting a story for CBS Sunday Morning, for example, I recently visited Rutgers University’s Art and Artificial Intelligence Laboratory, where Ahmed Elgammal’s team has created artificial-intelligence software that generates beautiful, original paintings.

 

Software is doing well at composing music, too. At Amper Music (www.ampermusic.com), you can specify what kind of music you want based on mood, instrumentation, tempo and duration. You click “Render,” and boom! There’s your original piece, not only composed but also “performed” and “mixed” by AI software.

 

Amper’s software doesn’t write melodies. It does, however, produce impressive background tracks—that is, mood music. This company is going after stock-music houses, companies that sell ready-to-download music for reality TV shows, Web videos, student movies, and so on.

 

I found these examples of robotically generated art and music to be polished and appealing. But something kept nagging at me: What happens in a world where effort and scarcity are no longer part of the definition of art?

 

A mass-produced print of the Mona Lisa is worth less than the actual Leonardo painting. Why? Scarcity—there’s only one of the original. But Amper churns out another professional-quality original piece of music every time you click “Render.” Elgammal’s AI painter can spew out another 1,000 original works of art with every tap of the enter key. It puts us in a weird hybrid world where works of art are unique—every painting is different—but require almost zero human effort to produce. Should anyone pay for these things? And if an artist puts AI masterpieces up for sale, what should the price be?

 

That’s not just a thought experiment, either. Soon the question “What’s the value of AI artwork and music?” will start impacting flesh-and-blood consumers. It has already, in fact.

 

Last year the music-streaming service Spotify lured AI researcher François Pachet away from Sony, where he’d been working on AI software that writes music.

 

Earlier, reporters at the online trade publication Music Business Worldwide discovered something fishy about many of Spotify’s playlists: according to the report, songs within them appeared to be credited to nonexistent composers and bands. These playlists have names like Peaceful Piano and Ambient Chill—exactly the kind of atmospheric, melodyless music AI software is good at.

 

Is Spotify using software to compose music to avoid paying royalties to human musicians? The New York Times reported that the tracks with pseudonyms have been played 500 million times, which would ordinarily have cost Spotify $3 million in payments.

 

But Spotify says Pachet was hired to create tools for human composers. And it has flatly denied that the tracks in question were created by “fake” artists to avoid royalties: while posted under pseudonyms, they were written by actual people receiving actual money for work that they own. (It’s still possible Spotify is paying lower royalties to these mysterious music producers.) But the broader issue remains. Why couldn’t Spotify, or any music service, start using AI to generate free music to save itself money? Automation is already on track to displace millions of human taxi drivers, truck drivers and fast-food workers. Why should artists and musicians be exempt from the same economics?

 

Should there be anything in place—a union, a regulation—to stop that from happening? Or will we always value human-produced art and music more than machine-made stuff? Once we’ve answered those questions, we can tackle the really big one: When an AI-composed song wins the Grammy, who gets the trophy?

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Scientific American Content: Global] February 14, 2018 at 08:31AM. Credit to Author and Scientific American Content: Global | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Silicon qubits plus light add up to new quantum computing capability

Your daily selection of the latest science news!

According to Phys.org – latest science and technology news stories

In a step forward for quantum computing in silicon — the same material used in today’s computers — researchers successfully coupled a single electron’s spin, represented by the dot on the left, to light, represented as a wave passing over the electron, which is trapped in a double-welled silicon chamber known as a quantum dot. The goal is to use light to carry quantum information to other locations on a futuristic quantum computing chip. Credit: Emily Edwards, University of Maryland.

A silicon-based quantum computing device could be closer than ever due to a new experimental device that demonstrates the potential to use light as a messenger to connect quantum bits of information—known as qubits—that are not immediately adjacent to each other. The feat is a step toward making quantum computing devices from silicon, the same material used in today’s smartphones and computers.

The research, published in the journal Nature, was led by researchers at Princeton University in collaboration with colleagues at the University of Konstanz in Germany and the Joint Quantum Institute, which is a partnership of the University of Maryland and the National Institute of Standards and Technology.

 

The team created qubits from single electrons trapped in silicon chambers known as double dots. By applying a magnetic field, they showed they could transfer quantum information, encoded in the electron property known as spin, to a particle of light, or photon, opening the possibility of transmitting the quantum information.

 

“This is a breakout year for silicon spin qubits,” said Jason Petta, professor of physics at Princeton. “This work expands our efforts in a whole new direction, because it takes you out of living in a two-dimensional landscape, where you can only do nearest-neighbor coupling, and into a world of all-to-all connectivity,” he said. “That creates flexibility in how we make our devices.”

 

Quantum devices offer computational possibilities that are not possible with today’s computers, such as factoring large numbers and simulating chemical reactions. Unlike conventional computers, the devices operate according to the quantum mechanical laws that govern very small structures such as single atoms and sub-atomic particles. Major technology firms are already building quantum computers based on superconducting qubits and other approaches.

 

“This result provides a path to scaling up to more complex systems following the recipe of the semiconductor industry,” said Guido Burkard, professor of physics at the University of Konstanz, who provided guidance on theoretical aspects in collaboration with Monica Benito, a postdoctoral researcher. “That is the vision, and this is a very important step.”

 

Jacob Taylor, a member of the team and a fellow at the Joint Quantum Institute, likened the light to a wire that can connect spin qubits. “If you want to make a quantum computing device using these trapped electrons, how do you send information around on the chip? You need the quantum computing equivalent of a wire.”

 

 

Silicon spin qubits are more resilient than competing technologies to outside disturbances such as heat and vibrations, which disrupt inherently fragile quantum states. The simple act of reading out the results of a quantum calculation can destroy the quantum state, a phenomenon known as “quantum demolition.”

 

The researchers theorize that the current approach may avoid this problem because it uses light to probe the state of the quantum system. Light is already used as a messenger to bring cable and internet signals into homes via fiber optic cables, and it is also being used to connect superconducting qubit systems, but this is one of the first applications in silicon spin qubits.

 

In these qubits, information is represented by the electron’s spin, which can point up or down. For example, a spin pointing up could represent a 0 and a spin pointing down could represent a 1. Conventional computers, in contrast, use the electron’s charge to encode information.
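
In standard notation (a general statement, not specific to this device), a spin qubit with “up” standing for 0 and “down” for 1 can also sit in any superposition of the two:

```latex
% General single-spin qubit state, with |up> encoding 0 and |down> encoding 1.
\[
  \lvert \psi \rangle \;=\; \alpha \,\lvert \uparrow \rangle + \beta \,\lvert \downarrow \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]
```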

 

Connecting silicon-based qubits so that they can talk to each other without destroying their information has been a challenge for the field. Although the Princeton-led team successfully coupled two neighboring electron spins separated by only 100 nanometers (100 billionths of a meter), as published in Science in December 2017, coupling spin to light, which would enable long-distance spin-spin coupling, has remained a challenge until now.

 

In the current study, the team solved the problem of long-distance communication by coupling the qubit’s information—that is, whether the spin points up or down—to a particle of light, or photon, which is trapped above the qubit in the chamber. The photon’s wave-like nature allows it to oscillate above the qubit like an undulating cloud.

 

Graduate student Xiao Mi and colleagues figured out how to link the information about the spin’s direction to the photon, so that the light can pick up a message, such as “spin points up,” from the qubit. “The strong coupling of a single spin to a single photon is an extraordinarily difficult task akin to a perfectly choreographed dance,” Mi said. “The interaction between the participants—spin, charge and photon—needs to be precisely engineered and protected from environmental noise, which has not been possible until now.” The team at Princeton included postdoctoral fellow Stefan Putz and graduate student David Zajac.

 

The advance was made possible by tapping into light’s electromagnetic wave properties. Light consists of oscillating electric and magnetic fields, and the researchers succeeded in coupling the light’s electric field to the electron’s spin state.

 

The researchers did so by building on the team’s finding, published in December 2016 in the journal Science, that demonstrated coupling between a single electron charge and a single particle of light.

 

To coax the qubit to transmit its spin state to the photon, the researchers place the electron spin in a large magnetic field gradient such that the spin has a different orientation depending on which side of the quantum dot it occupies. The magnetic field gradient, combined with the charge coupling demonstrated by the group in 2016, couples the qubit’s spin direction to the photon’s electric field.

 

Ideally, the photon will then deliver the message to another qubit located within the chamber. Another possibility is that the photon’s message could be carried through wires to a device that reads out the message. The researchers are working on these next steps in the process.

 

Several steps are still needed before making a silicon-based quantum computer, Petta said. Everyday computers process billions of bits, and although qubits are more computationally powerful, most experts agree that 50 or more qubits are needed to achieve quantum supremacy, where quantum computers would start to outshine their classical counterparts.

 

Daniel Loss, a professor of physics at the University of Basel in Switzerland who is familiar with the work but not directly involved, said: “The work by Professor Petta and collaborators is one of the most exciting breakthroughs in the field of spin qubits in recent years. I have been following Jason’s work for many years and I’m deeply impressed by the standards he has set for the field, and once again so with this latest experiment to appear in Nature. It is a big milestone in the quest of building a truly powerful quantum computer as it opens up a pathway for cramming hundreds of millions of qubits on a square-inch chip. These are very exciting developments for the field — and beyond.”

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Phys.org – latest science and technology news stories] February 14, 2018 at 01:30PM. Credit to Author and Phys.org – latest science and technology news stories | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Job One for Quantum Computers: Boost Artificial Intelligence

Your daily selection of the latest science news!

According to Quanta Magazine

In the early ’90s, Elizabeth Behrman, a physics professor at Wichita State University, began working to combine quantum physics with artificial intelligence — in particular, the then-maverick technology of neural networks. Most people thought she was mixing oil and water. “I had a heck of a time getting published,” she recalled. “The neural-network journals would say, ‘What is this quantum mechanics?’ and the physics journals would say, ‘What is this neural-network garbage?’”

Today the mashup of the two seems the most natural thing in the world. Neural networks and other machine-learning systems have become the most disruptive technology of the 21st century. They out-human humans, beating us not just at tasks most of us were never really good at, such as chess and data-mining, but also at the very types of things our brains evolved for, such as recognizing faces, translating languages and negotiating four-way stops. These systems have been made possible by vast computing power, so it was inevitable that tech companies would seek out computers that were not just bigger, but a new class of machine altogether.

Quantum computers, after decades of research, have nearly enough oomph to perform calculations beyond any other computer on Earth. Their killer app is usually said to be factoring large numbers, which are the key to modern encryption. That’s still another decade off, at least. But even today’s rudimentary quantum processors are uncannily matched to the needs of machine learning. They manipulate vast arrays of data in a single step, pick out subtle patterns that classical computers are blind to, and don’t choke on incomplete or uncertain data. “There is a natural combination between the intrinsic statistical nature of quantum computing … and machine learning,” said Johannes Otterbach, a physicist at Rigetti Computing, a quantum-computer company in Berkeley, California.

If anything, the pendulum has now swung to the other extreme. Google, Microsoft, IBM and other tech giants are pouring money into quantum machine learning, and a startup incubator at the University of Toronto is devoted to it. “‘Machine learning’ is becoming a buzzword,” said Jacob Biamonte, a quantum physicist at the Skolkovo Institute of Science and Technology in Moscow. “When you mix that with ‘quantum,’ it becomes a mega-buzzword.”

Yet nothing with the word “quantum” in it is ever quite what it seems. Although you might think a quantum machine-learning system should be powerful, it suffers from a kind of locked-in syndrome. It operates on quantum states, not on human-readable data, and translating between the two can negate its apparent advantages. It’s like an iPhone X that, for all its impressive specs, ends up being just as slow as your old phone, because your network is as awful as ever. For a few special cases, physicists can overcome this input-output bottleneck, but whether those cases arise in practical machine-learning tasks is still unknown. “We don’t have clear answers yet,” said Scott Aaronson, a computer scientist at the University of Texas, Austin, who is always the voice of sobriety when it comes to quantum computing. “People have often been very cavalier about whether these algorithms give a speedup.”

Quantum Neurons

The main job of a neural network, be it classical or quantum, is to recognize patterns. Inspired by the human brain, it is a grid of basic computing units — the “neurons.” Each can be as simple as an on-off device. A neuron monitors the output of multiple other neurons, as if taking a vote, and switches on if enough of them are on. Typically, the neurons are arranged in layers. An initial layer accepts input (such as image pixels), intermediate layers create various combinations of the input (representing structures such as edges and geometric shapes) and a final layer produces output (a high-level description of the image content).

Crucially, the wiring is not fixed in advance, but adapts in a process of trial and error. The network might be fed images labeled “kitten” or “puppy.” For each image, it assigns a label, checks whether it was right, and tweaks the neuronal connections if not. Its guesses are random at first, but get better; after perhaps 10,000 examples, it knows its pets. A serious neural network can have a billion interconnections, all of which need to be tuned.
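
As a concrete, purely classical illustration of that guess-check-tweak loop, here is a toy single-layer “neuron” trained on a handful of labelled points. Real networks have many layers and millions of weights, but the update logic is the same in spirit; the data and learning rate below are invented.

```python
# Toy classical neuron: guess a label, check it, nudge the weights if wrong.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.2], [0.8, 0.1]])  # toy features
y = np.array([0, 0, 1, 1])                                      # toy labels

w = rng.normal(size=2)   # random initial "wiring"
b = 0.0
lr = 0.5                 # how hard to tweak after a mistake

for epoch in range(20):
    for xi, yi in zip(X, y):
        guess = 1 if xi @ w + b > 0 else 0   # the neuron "votes"
        error = yi - guess                   # was the guess right?
        w += lr * error * xi                 # tweak the connections if not
        b += lr * error

print([(1 if xi @ w + b > 0 else 0) for xi in X])  # should match y: [0, 0, 1, 1]
```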

On a classical computer, all these interconnections are represented by a ginormous matrix of numbers, and running the network means doing matrix algebra. Conventionally, these matrix operations are outsourced to a specialized chip such as a graphics processing unit. But nothing does matrices like a quantum computer. “Manipulation of large matrices and large vectors are exponentially faster on a quantum computer,” said Seth Lloyd, a physicist at the Massachusetts Institute of Technology and a quantum-computing pioneer.

For this task, quantum computers are able to take advantage of the exponential nature of a quantum system. The vast bulk of a quantum system’s information storage capacity resides not in its individual data units — its qubits, the quantum counterpart of classical computer bits — but in the collective properties of those qubits. Two qubits have four joint states: both on, both off, on/off, and off/on. Each has a certain weighting, or “amplitude,” that can represent a neuron. If you add a third qubit, you can represent eight neurons; a fourth, 16. The capacity of the machine grows exponentially. In effect, the neurons are smeared out over the entire system. When you act on a state of four qubits, you are processing 16 numbers at a stroke, whereas a classical computer would have to go through those numbers one by one.

Lloyd estimates that 60 qubits would be enough to encode an amount of data equivalent to that produced by humanity in a year, and 300 could carry the classical information content of the observable universe. (The biggest quantum computers at the moment, built by IBM, Intel and Google, have 50-ish qubits.) And that’s assuming each amplitude is just a single classical bit. In fact, amplitudes are continuous quantities (and, indeed, complex numbers) and, for a plausible experimental precision, one might store as many as 15 bits, Aaronson said.
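
The counting behind those estimates is just the exponential growth of the joint state: n qubits have 2^n basis states, each carrying its own amplitude. The numbers quoted above follow directly from that growth (the worked figures here are plain arithmetic, not claims from the article):

```latex
% n qubits -> 2^n amplitudes; the quoted figures follow from this growth.
\[
  \lvert \psi \rangle = \sum_{x \in \{0,1\}^{n}} a_{x}\,\lvert x \rangle ,
  \qquad
  2^{4} = 16, \quad
  2^{60} \approx 1.2 \times 10^{18}, \quad
  2^{300} \approx 2 \times 10^{90}.
\]
```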

But a quantum computer’s ability to store information compactly doesn’t make it faster. You need to be able to use those qubits. In 2008, Lloyd, the physicist Aram Harrow of MIT and Avinatan Hassidim, a computer scientist at Bar-Ilan University in Israel, showed how to do the crucial algebraic operation of inverting a matrix. They broke it down into a sequence of logic operations that can be executed on a quantum computer. Their algorithm works for a huge variety of machine-learning techniques. And it doesn’t require nearly as many algorithmic steps as, say, factoring a large number does. A computer could zip through a classification task before noise — the big limiting factor with today’s technology — has a chance to foul it up. “You might have a quantum advantage before you have a fully universal, fault-tolerant quantum computer,” said Kristan Temme of IBM’s Thomas J. Watson Research Center.

Let Nature Solve the Problem

So far, though, machine learning based on quantum matrix algebra has been demonstrated only on machines with just four qubits. Most of the experimental successes of quantum machine learning to date have taken a different approach, in which the quantum system does not merely simulate the network; it is the network. Each qubit stands for one neuron. Though lacking the power of exponentiation, a device like this can avail itself of other features of quantum physics.

The largest such device, with some 2,000 qubits, is the quantum processor manufactured by D-Wave Systems, based near Vancouver, British Columbia. It is not what most people think of as a computer. Instead of starting with some input data, executing a series of operations and displaying the output, it works by finding internal consistency. Each of its qubits is a superconducting electric loop that acts as a tiny electromagnet oriented up, down, or up and down — a superposition. Qubits are “wired” together by allowing them to interact magnetically.

Processors made by D-Wave Systems are being used for machine learning applications.

To run the system, you first impose a horizontal magnetic field, which initializes the qubits to an equal superposition of up and down — the equivalent of a blank slate. There are a couple of ways to enter data. In some cases, you fix a layer of qubits to the desired input values; more often, you incorporate the input into the strength of the interactions. Then you let the qubits interact. Some seek to align in the same direction, some in the opposite direction, and under the influence of the horizontal field, they flip to their preferred orientation. In so doing, they might trigger other qubits to flip. Initially that happens a lot, since so many of them are misaligned. Over time, though, they settle down, and you can turn off the horizontal field to lock them in place. At that point, the qubits are in a pattern of up and down that ensures the output follows from the input.

It’s not at all obvious what the final arrangement of qubits will be, and that’s the point. The system, just by doing what comes naturally, is solving a problem that an ordinary computer would struggle with. “We don’t need an algorithm,” explained Hidetoshi Nishimori, a physicist at the Tokyo Institute of Technology who developed the principles on which D-Wave machines operate. “It’s completely different from conventional programming. Nature solves the problem.”

The qubit-flipping is driven by quantum tunneling, a natural tendency that quantum systems have to seek out their optimal configuration, rather than settle for second best. You could build a classical network that worked on analogous principles, using random jiggling rather than tunneling to get bits to flip, and in some cases it would actually work better. But, interestingly, for the types of problems that arise in machine learning, the quantum network seems to reach the optimum faster.
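
The “random jiggling” alternative mentioned above is essentially classical simulated annealing on an Ising-style energy function. The toy sketch below shows that idea; it is a classical analogue with made-up couplings, not a simulation of D-Wave’s hardware or of quantum tunneling.

```python
# Classical analogue of the annealing process: random spin flips ("jiggling")
# accepted or rejected according to a falling temperature. Couplings are made up.
import math
import random

# Toy Ising energy: E(s) = -sum_ij J[i][j] * s[i] * s[j], with s[i] in {-1, +1}.
J = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.8}

def energy(s):
    return -sum(j * s[a] * s[b] for (a, b), j in J.items())

def anneal(n_spins=3, steps=5000, t_start=2.0, t_end=0.01):
    s = [random.choice([-1, 1]) for _ in range(n_spins)]
    for k in range(steps):
        t = t_start + (t_end - t_start) * k / steps  # cooling schedule
        i = random.randrange(n_spins)
        old = energy(s)
        s[i] *= -1                       # propose a random spin flip
        delta = energy(s) - old
        if delta > 0 and random.random() >= math.exp(-delta / t):
            s[i] *= -1                   # reject the uphill move, undo the flip
    return s, energy(s)

print(anneal())  # a low-energy spin configuration and its energy
```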

The D-Wave machine has had its detractors. It is extremely noisy and, in its current incarnation, can perform only a limited menu of operations. Machine-learning algorithms, though, are noise-tolerant by their very nature. They’re useful precisely because they can make sense of a messy reality, sorting kittens from puppies against a backdrop of red herrings. “Neural networks are famously robust to noise,” Behrman said.

In 2009 a team led by Hartmut Neven, a computer scientist at Google who pioneered augmented reality — he co-founded the Google Glass project — and then took up quantum information processing, showed how an early D-Wave machine could do a respectable machine-learning task. They used it as, essentially, a single-layer neural network that sorted images into two classes: “car” or “no car” in a library of 20,000 street scenes. The machine had only 52 working qubits, far too few to take in a whole image. (Remember: the D-Wave machine is of a very different type from the state-of-the-art 50-qubit systems coming online in 2018.) So Neven’s team combined the machine with a classical computer, which analyzed various statistical quantities of the images and calculated how sensitive these quantities were to the presence of a car — usually not very, but at least better than a coin flip. Some combination of these quantities could, together, spot a car reliably, but it wasn’t obvious which. It was the network’s job to find out.

The team assigned a qubit to each quantity. If that qubit settled into a value of 1, it flagged the corresponding quantity as useful; 0 meant don’t bother. The qubits’ magnetic interactions encoded the demands of the problem, such as including only the most discriminating quantities, so as to keep the final selection as compact as possible. The result was able to spot a car.
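
Stripped of the quantum hardware, the selection problem Neven’s team encoded can be written as a small binary optimization: reward discriminating quantities, penalize selecting too many, and search over bit strings. The brute-force toy below only shows the shape of that objective; the per-feature scores and the penalty weight are invented.

```python
# Toy version of "pick a compact set of useful image statistics": each bit says
# whether a quantity is used; the score rewards discriminative power and
# penalizes large selections. All numeric values below are made up.
from itertools import product

usefulness = [0.9, 0.1, 0.6, 0.4, 0.8]   # invented per-feature scores
penalty = 0.35                            # invented cost per selected pair

def objective(bits):
    gain = sum(u for u, b in zip(usefulness, bits) if b)
    crowding = penalty * sum(bits) * (sum(bits) - 1) / 2  # pairwise penalty
    return gain - crowding

best = max(product([0, 1], repeat=len(usefulness)), key=objective)
print(best, round(objective(best), 2))  # the most compact useful selection
```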

Last year a group led by Maria Spiropulu, a particle physicist at the California Institute of Technology, and Daniel Lidar, a physicist at USC, applied the algorithm to a practical physics problem: classifying proton collisions as “Higgs boson” or “no Higgs boson.” Limiting their attention to collisions that spat out photons, they used basic particle theory to predict which photon properties might betray the fleeting existence of the Higgs, such as momentum in excess of some threshold. They considered eight such properties and 28 combinations thereof, for a total of 36 candidate signals, and let a late-model D-Wave at the University of Southern California find the optimal selection. It identified 16 of the variables as useful and three as the absolute best. The quantum machine needed less data than standard procedures to perform an accurate identification. “Provided that the training set was small, then the quantum approach did provide an accuracy advantage over traditional methods used in the high-energy physics community,” Lidar said.

Maria Spiropulu, a physicist at the California Institute of Technology, used quantum machine learning to find Higgs bosons.

In December, Rigetti demonstrated a way to automatically group objects using a general-purpose quantum computer with 19 qubits. The researchers did the equivalent of feeding the machine a list of cities and the distances between them, and asked it to sort the cities into two geographic regions. What makes this problem hard is that the designation of one city depends on the designation of all the others, so you have to solve the whole system at once.

The Rigetti team effectively assigned each city a qubit, indicating which group it was assigned to. Through the interactions of the qubits (which, in Rigetti’s system, are electrical rather than magnetic), each pair of qubits sought to take on opposite values — their energy was minimized when they did so. Clearly, for any system with more than two qubits, some pairs of qubits had to consent to be assigned to the same group. Nearby cities assented more readily since the energetic cost for them to be in the same group was lower than for more-distant cities.

To drive the system to its lowest energy, the Rigetti team took an approach similar in some ways to the D-Wave annealer. They initialized the qubits to a superposition of all possible cluster assignments. They allowed qubits to interact briefly, which biased them toward assuming the same or opposite values. Then they applied the analogue of a horizontal magnetic field, allowing the qubits to flip if they were so inclined, pushing the system a little way toward its lowest-energy state. They repeated this two-step process — interact then flip — until the system minimized its energy, thus sorting the cities into two distinct regions.
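
Classically, the Rigetti task is a small weighted partitioning problem: put each city in one of two groups so that far-apart cities tend to land in different groups, which is the same as keeping nearby cities together. A brute-force sketch with invented city names and distances:

```python
# Toy classical version of the two-region city clustering: minimize the total
# distance between cities assigned to the SAME group, so nearby cities cluster.
# City names and distances are invented for illustration.
from itertools import product

cities = ["A", "B", "C", "D"]
dist = {("A", "B"): 1.0, ("A", "C"): 5.0, ("A", "D"): 6.0,
        ("B", "C"): 5.5, ("B", "D"): 6.5, ("C", "D"): 1.2}

def same_group_cost(assignment):
    return sum(d for (i, j), d in dist.items()
               if assignment[i] == assignment[j])

best = min(
    (dict(zip(cities, bits)) for bits in product([0, 1], repeat=len(cities))),
    key=same_group_cost,
)
print(best)  # nearby pairs (A, B) and (C, D) end up in opposite groups
```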

These classification tasks are useful but straightforward. The real frontier of machine learning is in generative models, which do not simply recognize puppies and kittens, but can generate novel archetypes — animals that never existed, but are every bit as cute as those that did. They might even figure out the categories of “kitten” and “puppy” on their own, or reconstruct images missing a tail or paw. “These techniques are very powerful and very useful in machine learning, but they are very hard,” said Mohammad Amin, the chief scientist at D-Wave. A quantum assist would be most welcome.

D-Wave and other research teams have taken on this challenge. Training such a model means tuning the magnetic or electrical interactions among qubits so the network can reproduce some sample data. To do this, you combine the network with an ordinary computer. The network does the heavy lifting — figuring out what a given choice of interactions means for the final network configuration — and its partner computer uses this information to adjust the interactions. In one demonstration last year, Alejandro Perdomo-Ortiz, a researcher at NASA’s Quantum Artificial Intelligence Lab, and his team exposed a D-Wave system to images of handwritten digits. It discerned that there were 10 categories, matching the digits 0 through 9, and generated its own scrawled numbers.

Bottlenecks Into the Tunnels

Well, that’s the good news. The bad is that it doesn’t much matter how awesome your processor is if you can’t get your data into it. In matrix-algebra algorithms, a single operation may manipulate a matrix of 16 numbers, but it still takes 16 operations to load the matrix. “State preparation — putting classical data into a quantum state — is completely shunned, and I think this is one of the most important parts,” said Maria Schuld, a researcher at the quantum-computing startup Xanadu and one of the first people to receive a doctorate in quantum machine learning. Machine-learning systems that are laid out in physical form face parallel difficulties of how to embed a problem in a network of qubits and get the qubits to interact as they should.

Once you do manage to enter your data, you need to store it in such a way that a quantum system can interact with it without collapsing the ongoing calculation. Lloyd and his colleagues have proposed a quantum RAM that uses photons, but no one has an analogous contraption for superconducting qubits or trapped ions, the technologies found in the leading quantum computers. “That’s an additional huge technological problem beyond the problem of building a quantum computer itself,” Aaronson said. “The impression I get from the experimentalists I talk to is that they are frightened. They have no idea how to begin to build this.”

And finally, how do you get your data out? That means measuring the quantum state of the machine, and not only does a measurement return only a single number at a time, drawn at random, it collapses the whole state, wiping out the rest of the data before you even have a chance to retrieve it. You’d have to run the algorithm over and over again to extract all the information.

Yet all is not lost. For some types of problems, you can exploit quantum interference. That is, you can choreograph the operations so that wrong answers cancel themselves out and right ones reinforce themselves; that way, when you go to measure the quantum state, it won’t give you just any random value, but the desired answer. But only a few algorithms, such as brute-force search, can make good use of interference, and the speedup is usually modest.
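
The standard textbook example of such an interference-friendly brute-force search is Grover’s algorithm (named here for context; the article itself does not name it), and its speedup is quadratic rather than exponential:

```latex
% Unstructured search over N items: classical vs. Grover query counts.
\[
  T_{\text{classical}} = O(N), \qquad T_{\text{Grover}} = O(\sqrt{N}),
  \qquad \text{e.g. } N = 10^{6}: \;\; 10^{6} \ \text{vs.} \ \approx 10^{3} \ \text{queries}.
\]
```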

In some cases, researchers have found shortcuts to getting data in and out. In 2015 Lloyd, Silvano Garnerone of the University of Waterloo in Canada, and Paolo Zanardi at USC showed that, for some kinds of statistical analysis, you don’t need to enter or store the entire data set. Likewise, you don’t need to read out all the data when a few key values would suffice. For instance, tech companies use machine learning to suggest shows to watch or things to buy based on a humongous matrix of consumer habits. “If you’re Netflix or Amazon or whatever, you don’t actually need the matrix written down anywhere,” Aaronson said. “What you really need is just to generate recommendations for a user.”

All this invites the question: If a quantum machine is powerful only in special cases, might a classical machine also be powerful in those cases? This is the major unresolved question of the field. Ordinary computers are, after all, extremely capable. The usual method of choice for handling large data sets — random sampling — is actually very similar in spirit to a quantum computer, which, whatever may go on inside it, ends up returning a random result. Schuld remarked: “I’ve done a lot of algorithms where I felt, ‘This is amazing. We’ve got this speedup,’ and then I actually, just for fun, write a sampling technique for a classical computer, and I realize you can do the same thing with sampling.”

If you look back at the successes that quantum machine learning has had so far, they all come with asterisks. Take the D-Wave machine. When classifying car images and Higgs bosons, it was no faster than a classical machine. “One of the things we do not talk about in this paper is quantum speedup,” said Alex Mott, a computer scientist at Google DeepMind who was a member of the Higgs research team. Matrix-algebra approaches such as the Harrow-Hassidim-Lloyd algorithm show a speedup only if the matrices are sparse — mostly filled with zeroes. “No one ever asks, are sparse data sets actually interesting in machine learning?” Schuld noted.

Quantum Intelligence

On the other hand, even the occasional incremental improvement over existing techniques would make tech companies happy. “These advantages that you end up seeing, they’re modest; they’re not exponential, but they are quadratic,” said Nathan Wiebe, a quantum-computing researcher at Microsoft Research. “Given a big enough and fast enough quantum computer, we could revolutionize many areas of machine learning.” And in the course of using the systems, computer scientists might solve the theoretical puzzle of whether they are inherently faster, and for what.

Schuld also sees scope for innovation on the software side. Machine learning is more than a bunch of calculations. It is a complex of problems that have their own particular structure. “The algorithms that people construct are removed from the things that make machine learning interesting and beautiful,” she said. “This is why I started to work the other way around and think: If I have this quantum computer already — these small-scale ones — what machine-learning model actually can it generally implement? Maybe it is a model that has not been invented yet.” If physicists want to impress machine-learning experts, they’ll need to do more than just make quantum versions of existing models.

Just as many neuroscientists now think that the structure of human thought reflects the requirements of having a body, so, too, are machine-learning systems embodied. The images, language and most other data that flow through them come from the physical world and reflect its qualities. Quantum machine learning is similarly embodied — but in a richer world than ours. The one area where it will undoubtedly shine is in processing data that is already quantum. When the data is not an image, but the product of a physics or chemistry experiment, the quantum machine will be in its element. The input problem goes away, and classical computers are left in the dust.

In a neatly self-referential loop, the first quantum machine-learning systems may help to design their successors. “One way we might actually want to use these systems is to build quantum computers themselves,” Wiebe said. “For some debugging tasks, it’s the only approach that we have.” Maybe they could even debug us. Leaving aside whether the human brain is a quantum computer — a highly contentious question — it sometimes acts as if it were one. Human behavior is notoriously contextual; our preferences are formed by the choices we are given, in ways that defy logic. In this, we are like quantum particles. “The way you ask questions and the ordering matters, and that is something that is very typical in quantum data sets,” Perdomo-Ortiz said. So a quantum machine-learning system might be a natural way to study human cognitive biases.

Neural networks and quantum processors have one thing in common: It is amazing they work at all. It was never obvious that you could train a network, and for decades most people doubted it would ever be possible. Likewise, it is not obvious that quantum physics could ever be harnessed for computation, since the distinctive effects of quantum physics are so well hidden from us. And yet both work — not always, but more often than we had any right to expect. On this precedent, it seems likely that their union will also find its place.

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Quanta Magazine] January 29, 2018 at 05:25PM. Credit to Author and Quanta Magazine | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

MIT’s new UV-sensitive ink allows 3D-printed objects to change color on demand

Your daily selection of the hottest trending tech news!

According to Digital Trends

A paper describing the work has been accepted to the ACM CHI Conference on Human Factors in Computing Systems, which takes place in April in Montreal.

Do you remember the color-changing dress that proved especially divisive when it made the rounds a couple of years back? Was it black and blue or white and gold? The Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is looking to stir up a similar controversy — although this time it may be possible for the dress in question to be both black and blue and white and gold.

No, we’re not talking about any kind of “Schrödinger’s cat” thought experiment, but rather a new system called ColorFab that allows 3D printed objects to change color, courtesy of special dyes that can be activated and deactivated when exposed to different wavelengths of light.

“With the amount of buying, consuming, and wasting that exists, we wanted to figure out a way to update materials in a more efficient way, which was largely the motivation behind this project,” MIT professor Stefanie Mueller, who led the project, told Digital Trends. “We’ve developed a system for repeatedly changing the colors of 3D-printed objects after fabrication in just over 20 minutes. Specifically, we can recolor multicolored objects using a projector model and our own 3D printable ink that changes color when exposed to UV and visible light.”

Image: ColorFab items of different colors with their activation areas. Credit: MIT CSAIL

According to Mueller, the technology could allow users to change the color of different items of clothing in order to accessorize them, or let a retail store customize its products in real time if a buyer wants to see an item in a different color. It currently takes 23 minutes to change an object’s color, but the researchers hope to speed up the process as the project advances. The hope is that it could one day be used like the color-changing nails in the movie Total Recall, in which a receptionist changes their color simply by touching them with her pen.

“This is just a research prototype at this point, so there are no immediate plans to commercialize,” Mueller said. “As a next step, we hope to speed up the printing process by using a more powerful light and potentially adding more light-adaptable dye to the ink. We also hope to improve the granularity of the colors so that more nuanced patterns can be printed.”

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Digital Trends] January 29, 2018 at 02:27PM. Credit to Author and Digital Trends | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Better than holograms: A new 3-D projection into thin air

Your daily selection of the latest science news!

According to Phys.org – latest science and technology news stories

This photo provided by the Dan Smalley Lab at Brigham Young University in January 2018 shows a projected image of researcher Erich Nygaard in Provo, Utah. Scientists have figured out how to manipulate tiny nearly unseen specks in the air and use them to produce images more realistic than most holograms, according to a study published on Wednesday, Jan. 23, 2018, in the journal Nature. (Dan Smalley Lab, Brigham Young University via AP)

One of the enduring sci-fi moments of the big screen—R2-D2 beaming a 3-D image of Princess Leia into thin air in “Star Wars”—is closer to reality thanks to the smallest of screens: dust-like particles.

Scientists have figured out how to manipulate nearly unseen specks in the air and use them to create 3-D images that are more realistic and clearer than holograms, according to a study in Wednesday’s journal Nature. The study’s lead author, Daniel Smalley, said the new technology is “printing something in space, just erasing it very quickly.”

 

In this case, scientists created a small butterfly appearing to dance above a finger and an image of a graduate student imitating Leia in the Star Wars scene.

 

Even with all sorts of holograms already in use, this new technique is the closest to replicating that Star Wars scene.

 

“The way they do it is really cool,” said Curtis Broadbent, of the University of Rochester, who wasn’t part of the study but works on a competing technology. “You can have a circle of people stand around it and each person would be able to see it from their own perspective. And that’s not possible with a hologram.”

 

The tiny specks are controlled with laser light, like the fictional tractor beam from “Star Trek,” said Smalley, an electrical engineering professor at Brigham Young University. Yet it was a different science fiction movie that gave him the idea: The scene in the movie “Iron Man” when the Tony Stark character dons a holographic glove. That couldn’t happen in real life because Stark’s arm would disrupt the image.

 

Going from holograms to this type of technology—technically called volumetric display—is like shifting from a two-dimensional printer to a three-dimensional printer, Smalley said. Holograms appear to the eye to be three-dimensional, but “all of the magic is happening on a 2-D surface,” Smalley said.

 

The key is trapping and moving the particles around potential disruptions—like Tony Stark’s arm—so the “arm is no longer in the way,” Smalley said.


Initially, Smalley thought gravity would make the particles fall and make it impossible to sustain an image, but the energy changes air pressure in a way to keep them aloft, he said.


This photo provided by the Dan Smalley Lab at Brigham Young University in January 2018 shows a projected three-dimensional triangular prism in Provo, Utah. A study on this volumetric display was published in the journal Nature on Wednesday, Jan. 23, 2018. By shining light on specks in the air and then having the particles beam light back out, study lead author Smalley said the new technology is like “you really are printing something in space, just erasing it very quickly.” (Dan Smalley Lab, Brigham Young University via AP)

Other versions of volumetric display use larger “screens” and “you can’t poke your finger into it because your fingers would get chopped off,” said Massachusetts Institute of Technology professor V. Michael Bove, who wasn’t part of the study team but was Smalley’s mentor.

 

The device Smalley uses is about one-and-a-half times the size of a children’s lunchbox, he said.

 

So far the projections have been tiny, but with more work and multiple beams, Smalley hopes to have bigger projections.

 

This method could one day be used to help guide medical procedures—as well as for entertainment, Smalley said. It’s still years away from daily use.


This photo provided by the Dan Smalley Lab at Brigham Young University in January 2018 shows a projected image of the earth above a finger tip in Provo, Utah. Scientists have figured out how to manipulate tiny nearly unseen specks in the air and use them to produce images more realistic than most holograms, according to a study published on Wednesday, Jan. 23, 2018, in the journal Nature. (Dan Smalley Lab, Brigham Young University via AP)

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Phys.org – latest science and technology news stories] January 24, 2018 at 01:36PM. Credit to Author and Phys.org – latest science and technology news stories | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

The Era of Quantum Computing Is Here. Outlook: Cloudy

Your daily selection of the latest science news!

According to Quanta Magazine

Quantum computers have to deal with the problem of noise, which can quickly derail any calculation.

After decades of heavy slog with no promise of success, quantum computing is suddenly buzzing with almost feverish excitement and activity. Nearly two years ago, IBM made a quantum computer available to the world: the 5-quantum-bit (qubit) resource they now call (a little awkwardly) the IBM Q experience. That seemed more like a toy for researchers than a way of getting any serious number crunching done. But 70,000 users worldwide have registered for it, and the qubit count in this resource has now quadrupled. In the past few months, IBM and Intel have announced that they have made quantum computers with 50 and 49 qubits, respectively, and Google is thought to have one waiting in the wings. “There is a lot of energy in the community, and the recent progress is immense,” said physicist Jens Eisert of the Free University of Berlin.

There is now talk of impending “quantum supremacy”: the moment when a quantum computer can carry out a task beyond the means of today’s best classical supercomputers. That might sound absurd when you compare the bare numbers: 50 qubits versus the billions of classical bits in your laptop. But the whole point of quantum computing is that a quantum bit counts for much, much more than a classical bit. Fifty qubits has long been considered the approximate number at which quantum computing becomes capable of calculations that would take an unfeasibly long time classically. Midway through 2017, researchers at Google announced that they hoped to have demonstrated quantum supremacy by the end of the year. (When pressed for an update, a spokesperson recently said that “we hope to announce results as soon as we can, but we’re going through all the detailed work to ensure we have a solid result before we announce.”)

It would be tempting to conclude from all this that the basic problems are solved in principle and the path to a future of ubiquitous quantum computing is now just a matter of engineering. But that would be a mistake. The fundamental physics of quantum computing is far from solved and can’t be readily disentangled from its implementation.

Even if we soon pass the quantum supremacy milestone, the next year or two might be the real crunch time for whether quantum computers will revolutionize computing. There’s still everything to play for and no guarantee of reaching the big goal.

IBM’s quantum computing center at the Thomas J. Watson Research Center in Yorktown Heights, New York, holds quantum computers in large cryogenic tanks (far right) that are cooled to a fraction of a degree above absolute zero.

Shut Up and Compute

Both the benefits and the challenges of quantum computing are inherent in the physics that permits it. The basic story has been told many times, though not always with the nuance that quantum mechanics demands. Classical computers encode and manipulate information as strings of binary digits — 1 or 0. Quantum bits do the same, except that they may be placed in a so-called superposition of the states 1 and 0, which means that a measurement of the qubit’s state could elicit the answer 1 or 0 with some well-defined probability.

To perform a computation with many such qubits, they must all be sustained in interdependent superpositions of states — a “quantum-coherent” state, in which the qubits are said to be entangled. That way, a tweak to one qubit may influence all the others. This means that somehow computational operations on qubits count for more than they do for classical bits. The computational resources increase in simple proportion to the number of bits for a classical device, but adding an extra qubit potentially doubles the resources of a quantum computer. This is why the difference between a 5-qubit and a 50-qubit machine is so significant.
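
A small sketch (not from the article) makes that bookkeeping point concrete: a classical simulator has to track 2**n complex amplitudes for n qubits, so every added qubit doubles the memory required, and measuring a superposed qubit yields 0 or 1 with probabilities given by the squared amplitudes.

```python
# Minimal sketch (not from the article): the classical cost of tracking n qubits.
# A general n-qubit state needs 2**n complex amplitudes, so each extra qubit
# doubles the memory a classical simulator must hold.
import numpy as np

for n in (5, 30, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2**30            # complex128 = 16 bytes each
    print(f"{n:2d} qubits -> {amplitudes:.3e} amplitudes (~{gib:.3e} GiB)")

# A single qubit in an equal superposition: measuring it returns 0 or 1,
# each with probability |amplitude|**2 = 0.5.
state = np.array([1, 1]) / np.sqrt(2)
samples = np.random.default_rng(1).choice([0, 1], p=np.abs(state) ** 2, size=10)
print(samples)
```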

Note that I’ve not said — as it often is said — that a quantum computer has an advantage because the availability of superpositions hugely increases the number of states it can encode, relative to classical bits. Nor have I said that entanglement permits many calculations to be carried out in parallel. (Indeed, a strong degree of qubit entanglement isn’t essential.) There’s an element of truth in those descriptions — some of the time — but none captures the essence of quantum computing.

Inside one of IBM’s cryostats wired for a 50-qubit quantum system.

It’s hard to say qualitatively why quantum computing is so powerful precisely because it is hard to specify what quantum mechanics means at all. The equations of quantum theory certainly show that it will work: that, at least for some classes of computation such as factorization or database searches, there is tremendous speedup of the calculation. But how exactly?

Perhaps the safest way to describe quantum computing is to say that quantum mechanics somehow creates a “resource” for computation that is unavailable to classical devices. As quantum theorist Daniel Gottesman of the Perimeter Institute in Waterloo, Canada, put it, “If you have enough quantum mechanics available, in some sense, then you have speedup, and if not, you don’t.”

Some things are clear, though. To carry out a quantum computation, you need to keep all your qubits coherent. And this is very hard. Interactions of a system of quantum-coherent entities with their surrounding environment create channels through which the coherence rapidly “leaks out” in a process called decoherence. Researchers seeking to build quantum computers must stave off decoherence, which they can currently do only for a fraction of a second. That challenge gets ever greater as the number of qubits — and hence the potential to interact with the environment — increases. This is largely why, even though quantum computing was first proposed by Richard Feynman in 1982 and the theory was worked out in the early 1990s, it has taken until now to make devices that can actually perform a meaningful computation.

Quantum Errors

There’s a second fundamental reason why quantum computing is so difficult. Like just about every other process in nature, it is noisy. Random fluctuations, from heat in the qubits, say, or from fundamentally quantum-mechanical processes, will occasionally flip or randomize the state of a qubit, potentially derailing a calculation. This is a hazard in classical computing too, but it’s not hard to deal with — you just keep two or more backup copies of each bit so that a randomly flipped bit stands out as the odd one out.
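
Here is a minimal sketch (not from the article) of that classical fix: a three-copy repetition code with a majority vote, which pushes the residual error rate well below the per-bit flip probability.

```python
# Minimal sketch (not from the article): keep redundant copies of each bit and
# let a majority vote expose a randomly flipped copy.
import random

def send_with_redundancy(bit, copies=3, flip_prob=0.05):
    received = [bit ^ (random.random() < flip_prob) for _ in range(copies)]
    return int(sum(received) > copies // 2)   # majority vote

random.seed(0)
trials = 100_000
errors = sum(send_with_redundancy(1) != 1 for _ in range(trials))
print(f"residual error rate with 3 copies: {errors / trials:.4%}")
```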

Researchers working on quantum computers have created strategies for how to deal with the noise. But these strategies impose a huge debt of computational overhead — all your computing power goes to correcting errors and not to running your algorithms. “Current error rates significantly limit the lengths of computations that can be performed,” said Andrew Childs, the codirector of the Joint Center for Quantum Information and Computer Science at the University of Maryland. “We’ll have to do a lot better if we want to do something interesting.”

Andrew Childs, a quantum theorist at the University of Maryland, cautions that error rates are a fundamental concern for quantum computers.

A lot of research on the fundamentals of quantum computing has been devoted to error correction. Part of the difficulty stems from another of the key properties of quantum systems: Superpositions can only be sustained as long as you don’t measure the qubit’s value. If you make a measurement, the superposition collapses to a definite value: 1 or 0. So how can you find out if a qubit has an error if you don’t know what state it is in?

One ingenious scheme involves looking indirectly, by coupling the qubit to another “ancilla” qubit that doesn’t take part in the calculation but that can be probed without collapsing the state of the main qubit itself. It’s complicated to implement, though. Such solutions mean that, to construct a genuine “logical qubit” on which computation with error correction can be performed, you need many physical qubits.
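
As a rough classical stand-in (not the quantum ancilla circuit itself), the sketch below shows the spirit of that indirect check in a three-bit repetition code: two parity measurements locate a single flipped bit without ever reading out the encoded value.

```python
# Minimal sketch (not from the article): a classical stand-in for syndrome
# extraction in the 3-bit repetition code. The two parity checks locate a
# single flipped bit without revealing the encoded logical value, which is
# the spirit of the ancilla-qubit trick described above.
def correct(block):
    s1 = block[0] ^ block[1]                 # parity of bits 0 and 1
    s2 = block[1] ^ block[2]                 # parity of bits 1 and 2
    flipped = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get((s1, s2))
    if flipped is not None:
        block[flipped] ^= 1                  # undo the detected flip
    return block

print(correct([1, 1, 0]))   # bit 2 was flipped -> restored to [1, 1, 1]
print(correct([0, 0, 0]))   # no error -> unchanged
```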

How many? Quantum theorist Alán Aspuru-Guzik of Harvard University estimates that around 10,000 of today’s physical qubits would be needed to make a single logical qubit — a totally impractical number. If the qubits get much better, he said, this number could come down to a few thousand or even hundreds. Eisert is less pessimistic, saying that on the order of 800 physical qubits might already be enough, but even so he agrees that “the overhead is heavy,” and for the moment we need to find ways of coping with error-prone qubits.

An alternative to correcting errors is avoiding them or canceling out their influence: so-called error mitigation. Researchers at IBM, for example, are developing schemes for figuring out mathematically how much error is likely to have been incurred in a computation and then extrapolating the output of a computation to the “zero noise” limit.
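
A minimal sketch of that zero-noise-extrapolation idea, using made-up measurement values for illustration: run the same circuit at deliberately amplified noise levels, fit the trend, and extrapolate back to zero noise.

```python
# Minimal sketch (not from the article): zero-noise extrapolation.
# The noisy_values below are hypothetical results, not real measurements.
import numpy as np

noise_scale = np.array([1.0, 1.5, 2.0, 2.5])        # noise amplification factors
noisy_values = np.array([0.82, 0.74, 0.67, 0.59])   # hypothetical measured results

slope, intercept = np.polyfit(noise_scale, noisy_values, deg=1)
print(f"extrapolated zero-noise value: {intercept:.3f}")
```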

Some researchers think that the problem of error correction will prove intractable and will prevent quantum computers from achieving the grand goals predicted for them. “The task of creating quantum error-correcting codes is harder than the task of demonstrating quantum supremacy,” said mathematician Gil Kalai of the Hebrew University of Jerusalem in Israel. And he adds that “devices without error correction are computationally very primitive, and primitive-based supremacy is not possible.” In other words, you’ll never do better than classical computers while you’ve still got errors.

Others believe the problem will be cracked eventually. According to Jay Gambetta, a quantum information scientist at IBM’s Thomas J. Watson Research Center, “Our recent experiments at IBM have demonstrated the basic elements of quantum error correction on small devices, paving the way towards larger-scale devices where qubits can reliably store quantum information for a long period of time in the presence of noise.” Even so, he admits that “a universal fault-tolerant quantum computer, which has to use logical qubits, is still a long way off.” Such developments make Childs cautiously optimistic. “I’m sure we’ll see improved experimental demonstrations of error correction, but I think it will be quite a while before we see it used for a real computation,” he said.

Living With Errors

For the time being, quantum computers are going to be error-prone, and the question is how to live with that. At IBM, researchers are talking about “approximate quantum computing” as the way the field will look in the near term: finding ways of accommodating the noise.

This calls for algorithms that tolerate errors, getting the correct result despite them. It’s a bit like working out the outcome of an election regardless of a few wrongly counted ballot papers. “A sufficiently large and high-fidelity quantum computation should have some advantage even if it is not fully fault-tolerant,” said Gambetta.

One of the most immediate error-tolerant applications seems likely to be of more value to scientists than to the world at large: to simulate stuff at the atomic level. (This, in fact, was the motivation that led Feynman to propose quantum computing in the first place.) The equations of quantum mechanics prescribe a way to calculate the properties — such as stability and chemical reactivity — of a molecule such as a drug. But they can’t be solved classically without making lots of simplifications.

In contrast, the quantum behavior of electrons and atoms, said Childs, “is relatively close to the native behavior of a quantum computer.” So one could then construct an exact computer model of such a molecule. “Many in the community, including me, believe that quantum chemistry and materials science will be one of the first useful applications of such devices,” said Aspuru-Guzik, who has been at the forefront of efforts to push quantum computing in this direction.

Quantum simulations are proving their worth even on the very small quantum computers available so far. A team of researchers including Aspuru-Guzik has developed an algorithm that they call the variational quantum eigensolver (VQE), which can efficiently find the lowest-energy states of molecules even with noisy qubits. So far it can only handle very small molecules with few electrons, which classical computers can already simulate accurately. But the capabilities are getting better, as Gambetta and coworkers showed last September when they used a 6-qubit device at IBM to calculate the electronic structures of molecules, including lithium hydride and beryllium hydride. The work was “a significant leap forward for the quantum regime,” according to physical chemist Markus Reiher of the Swiss Federal Institute of Technology in Zurich, Switzerland. “The use of the VQE for the simulation of small molecules is a great example of the possibility of near-term heuristic algorithms,” said Gambetta.
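
As a toy illustration of the VQE loop (a schematic sketch, not the IBM experiment), a classical optimizer can tune the parameter of a trial state to minimize the measured energy of a tiny two-level Hamiltonian:

```python
# Toy sketch (not the IBM experiment): the VQE loop in miniature. A classical
# optimizer tunes the parameter of a trial quantum state to minimize the
# energy <psi(theta)|H|psi(theta)> of a made-up 2x2 Hamiltonian.
import numpy as np
from scipy.optimize import minimize_scalar

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])                  # toy Hamiltonian

def energy(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])   # trial state
    return psi @ H @ psi                     # expectation value of H

result = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
exact = np.linalg.eigvalsh(H)[0]             # exact ground-state energy
print(f"VQE estimate: {result.fun:.4f}   exact: {exact:.4f}")
```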

But even for this application, Aspuru-Guzik confesses that logical qubits with error correction will probably be needed before quantum computers truly begin to surpass classical devices. “I would be really excited when error-corrected quantum computing begins to become a reality,” he said.

“If we had more than 200 logical qubits, we could do things in quantum chemistry beyond standard approaches,” Reiher added. “And if we had about 5,000 such qubits, then the quantum computer would be transformative in this field.”

What’s Your Volume?

Despite the challenges of reaching those goals, the fast growth of quantum computers from 5 to 50 qubits in barely more than a year has raised hopes. But we shouldn’t get too fixated on these numbers, because they tell only part of the story. What matters is not just — or even mainly — how many qubits you have, but how good they are, and how efficient your algorithms are.

Any quantum computation has to be completed before decoherence kicks in and scrambles the qubits. Typically, the groups of qubits assembled so far have decoherence times of a few microseconds. The number of logic operations you can carry out during that fleeting moment depends on how quickly the quantum gates can be switched — if this time is too slow, it really doesn’t matter how many qubits you have at your disposal. The number of gate operations needed for a calculation is called its depth: Low-depth (shallow) algorithms are more feasible than high-depth ones, but the question is whether they can be used to perform useful calculations.
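
A back-of-the-envelope sketch of that budget, using illustrative numbers rather than measurements from any particular device:

```python
# Minimal sketch (illustrative numbers, not measurements from a real device):
# coherence time and gate speed jointly set the usable circuit depth.
coherence_time_ns = 5_000      # "a few microseconds", as quoted above
gate_time_ns = 100             # assumed time per gate layer (hypothetical)

max_depth = coherence_time_ns // gate_time_ns
print(f"rough depth budget: about {max_depth} gate layers")
```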

What’s more, not all qubits are equally noisy. In theory it should be possible to make very low-noise qubits from so-called topological electronic states of certain materials, in which the “shape” of the electron states used for encoding binary information confers a kind of protection against random noise. Researchers at Microsoft, most prominently, are seeking such topological states in exotic quantum materials, but there’s no guarantee that they’ll be found or will be controllable.

Researchers at IBM have suggested that the power of a quantum computation on a given device be expressed as a number called the “quantum volume,” which bundles up all the relevant factors: number and connectivity of qubits, depth of algorithm, and other measures of the gate quality, such as noisiness. It’s really this quantum volume that characterizes the power of a quantum computation, and Gambetta said that the best way forward right now is to develop quantum-computational hardware that increases the available quantum volume.
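
The sketch below is an illustrative stand-in, not IBM’s published formula: a quantum-volume-style figure of merit that rewards qubit count only insofar as the error rate allows circuits deep enough to actually use those qubits.

```python
# Illustrative stand-in (not IBM's exact definition): a quantum-volume-style
# figure of merit that balances qubit count against achievable circuit depth.
def achievable_depth(n_qubits, error_rate):
    # Roughly, a circuit fails once n_qubits * depth * error_rate approaches 1.
    return 1.0 / (n_qubits * error_rate)

def quantum_volume_like(n_qubits, error_rate):
    effective = min(n_qubits, achievable_depth(n_qubits, error_rate))
    return effective ** 2

for n, eps in [(5, 0.01), (50, 0.01), (50, 0.001)]:
    print(f"{n} qubits, error rate {eps}: volume-like score "
          f"{quantum_volume_like(n, eps):.1f}")
```

With these made-up numbers, 50 noisy qubits score worse than 5 good ones, which is the point of the measure.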

This is one reason why the much vaunted notion of quantum supremacy is more slippery than it seems. The image of a 50-qubit (or so) quantum computer outperforming a state-of-the-art supercomputer sounds alluring, but it leaves a lot of questions hanging. Outperforming for which problem? How do you know the quantum computer has got the right answer if you can’t check it with a tried-and-tested classical device? And how can you be sure that the classical machine wouldn’t do better if you could find the right algorithm?

So quantum supremacy is a concept to handle with care. Some researchers prefer now to talk about “quantum advantage,” which refers to the speedup that quantum devices offer without making definitive claims about what is best. An aversion to the word “supremacy” has also arisen because of the racial and political implications.

Whatever you choose to call it, a demonstration that quantum computers can do things beyond current classical means would be psychologically significant for the field. “Demonstrating an unambiguous quantum advantage will be an important milestone,” said Eisert — it would prove that quantum computers really can extend what is technologically possible.

That might still be more of a symbolic gesture than a transformation in useful computing resources. But such things may matter, because if quantum computing is going to succeed, it won’t be simply by the likes of IBM and Google suddenly offering their classy new machines for sale. Rather, it’ll happen through an interactive and perhaps messy collaboration between developers and users, and the skill set will evolve in the latter only if they have sufficient faith that the effort is worth it. This is why both IBM and Google are keen to make their devices available as soon as they’re ready. As well as a 16-qubit IBM Q experience offered to anyone who registers online, IBM now has a 20-qubit version for corporate clients, including JP Morgan Chase, Daimler, Honda, Samsung and the University of Oxford. Not only will that help clients discover what’s in it for them; it should create a quantum-literate community of programmers who will devise resources and solve problems beyond what any individual company could muster.

“For quantum computing to take traction and blossom, we must enable the world to use and to learn it,” said Gambetta. “This period is for the world of scientists and industry to focus on getting quantum-ready.”

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Quanta Magazine] January 24, 2018 at 04:08PM. Credit to Author and Quanta Magazine | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

MIT Engineers Have Designed a Chip That Behaves Just Like Brain Cell Connections

Your daily selection of the latest science news!

According to ScienceAlert

The most promising artificial synapse to date.

For those working in the field of advanced artificial intelligence, getting a computer to simulate brain activity is a gargantuan task, but it may be easier to manage if the hardware is designed more like brain hardware to start with.

This emerging field is called neuromorphic computing. And now engineers at MIT may have overcome a significant hurdle – the design of a chip with artificial synapses.

For now, human brains are much more powerful than any computer – they contain around 80 billion neurons, and over 100 trillion synapses connecting them and controlling the passage of signals.

How computer chips currently work is by transmitting signals in a language called binary. Every piece of information is encoded in 1s and 0s, or on/off signals.

To get an idea of how this compares to a brain, consider this: in 2013, one of the world’s most powerful supercomputers ran a simulation of brain activity, achieving only a minuscule result.

RIKEN’s K computer used 82,944 processors and a petabyte of main memory – the equivalent of around 250,000 desktop computers at the time.

It took 40 minutes to simulate one second of the activity of 1.73 billion neurons connected by 10.4 trillion synapses. That may sound like a lot, but it’s really equivalent to just one percent of the human brain.
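
Working through the quoted figures shows just how far from real time that was:

```python
# Minimal sketch, derived from the figures quoted above: how far the K computer
# simulation was from real time.
simulated_seconds = 1
wall_clock_seconds = 40 * 60          # 40 minutes

slowdown = wall_clock_seconds / simulated_seconds
print(f"{slowdown:.0f}x slower than real time, for about 1% of the brain")
```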

But if a chip used synapse-like connections, the signals used by a computer could be much more varied, enabling synapse-like learning. Synapses mediate the signals transmitted through the brain, and neurons activate depending on the number and type of ions flowing across the synapse. This helps the brain recognise patterns, remember facts, and carry out tasks.

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [ScienceAlert] January 23, 2018 at 09:19PM. Credit to Author and ScienceAlert | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

This Neural Network Built by Japanese Researchers Can ‘Read Minds’

Your daily selection of the latest science news!

According to Singularity Hub

It already seems a little like computers can read our minds; features like Google’s auto-complete, Facebook’s friend suggestions, and the targeted ads that appear while you’re browsing the web sometimes make you wonder, “How did they know?” For better or worse, it seems we’re slowly but surely moving in the direction of computers reading our minds for real, and a new study from researchers in Kyoto, Japan is an unequivocal step in that direction.

A team from Kyoto University used a deep neural network to read and interpret people’s thoughts. Sound crazy? This actually isn’t the first time it’s been done. The difference is that previous methods—and results—were simpler, deconstructing images based on their pixels and basic shapes. The new technique, dubbed “deep image reconstruction,” moves beyond binary pixels, giving researchers the ability to decode images that have multiple layers of color and structure.

“Our brain processes visual information by hierarchically extracting different levels of features or components of different complexities,” said Yukiyasu Kamitani, one of the scientists involved in the study. “These neural networks or AI models can be used as a proxy for the hierarchical structure of the human brain.”

The study lasted 10 months and consisted of three people viewing images of three different categories: natural phenomena (such as animals or people), artificial geometric shapes, and letters of the alphabet for varying lengths of time.

Reconstructions utilizing the DGN. Three reconstructed images correspond to reconstructions from three subjects.

The viewers’ brain activity was measured either while they were looking at the images or afterward. To measure brain activity after people had viewed the images, they were simply asked to think about the images they’d been shown.

Recorded activity was then fed into a neural network that “decoded” the data and used it to generate its own interpretations of the peoples’ thoughts.

In humans (and, actually, all mammals) the visual cortex is located at the back of the brain, in the occipital lobe, which is above the cerebellum. Activity in the visual cortex was measured using functional magnetic resonance imaging (fMRI), and those measurements were then translated into the hierarchical features of a deep neural network.

Starting from a random image, the system repeatedly optimizes that image’s pixel values until the neural network’s features for the generated image become similar to the features decoded from brain activity.
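
Schematically (this is an illustrative sketch, not the Kyoto group’s code), the loop looks like the following, where `feature_extractor` stands in for a pretrained deep network and `decoded_features` for the features decoded from fMRI activity:

```python
# Schematic sketch (not the Kyoto group's code): iterative feature-matching
# reconstruction. `feature_extractor` is a placeholder for a pretrained deep
# network and `decoded_features` a placeholder for fMRI-decoded features.
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_extractor = nn.Sequential(                  # placeholder network
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
)
decoded_features = torch.randn(1, 8 * 4 * 4)        # placeholder target features

image = torch.rand(1, 3, 64, 64, requires_grad=True)   # start from a random image
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = F.mse_loss(feature_extractor(image), decoded_features)
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)                          # keep pixels in a valid range

print(f"final feature-matching loss: {loss.item():.4f}")
```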

Importantly, the team’s model was trained using only natural images (of people or nature), but it was able to reconstruct artificial shapes. This means the model truly ‘generated’ images based on brain activity, as opposed to matching that activity to existing examples.

Not surprisingly, the model did have a harder time trying to decode brain activity when people were asked to remember images, as compared to activity when directly viewing images. Our brains can’t remember every detail of an image we saw, so our recollections tend to be a bit fuzzy.

The reconstructed images from the study retain some resemblance to the original images viewed by participants, but mostly, they look like minimally-detailed blobs. However, the technology’s accuracy is only going to improve, and its applications will increase accordingly.

Imagine “instant art,” where you could produce art just by picturing it in your head. Or what if an AI could record your brain activity as you’re asleep and dreaming, then re-create your dreams in order to analyze them? Last year, completely paralyzed patients were able to communicate with their families for the first time using a brain-computer interface.

There are countless creative and significant ways to use a model like the one in the Kyoto study. But brain-machine interfaces are also one of those technologies we can imagine having eerie, Black Mirror-esque consequences if not handled wisely. Neuroethicists have already outlined four new human rights we would need to implement to keep mind-reading technology from going sorely wrong.

Despite this, the Japanese team certainly isn’t alone in its efforts to advance mind-reading AI. Elon Musk famously founded Neuralink with the purpose of building brain-machine interfaces to connect people and computers. Kernel is working on making chips that can read and write neural code.

Whether it’s to recreate images, mine our deep subconscious, or give us entirely new capabilities, though, it’s in our best interest that mind-reading technology proceeds with caution.

Image Credit: igor kisselev / Shutterstock.com

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Singularity Hub] January 14, 2018 at 11:05AM. Credit to Author and Singularity Hub | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

AI Uses Titan Supercomputer to Create Deep Neural Nets in Less Than a Day

Your daily selection of the latest science news!

According to Singularity Hub

You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.

The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.

The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate neural networks as good if not better than any developed by a human in less than a day.

It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms. Such a system is modeled after the human brain and is known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.

Computing Power

Of course, Google Brain project engineers only had access to 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.

The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.

That requires a different approach from the Google and Facebook AI platforms of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.

“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”

AI for Science

One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.

The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.

In another case involving a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL improved the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.

“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.

What makes MENNDL particularly adept is its ability to define the best or most optimal hyperparameters—the key variables—to tackle a particular dataset.

“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
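
The sketch below is not MENNDL itself, but it shows the evolutionary idea in miniature: keep a population of candidate hyperparameter sets, score them, and breed mutated copies of the best performers. The `score` function here is a placeholder for training and evaluating a real network.

```python
# Minimal sketch (not MENNDL): evolutionary search over hyperparameters.
# `score` is a stand-in for training and evaluating a real neural network.
import random

random.seed(0)

def score(params):                        # placeholder fitness function
    target = {"layers": 4, "lr_exp": -3}
    return -((params["layers"] - target["layers"]) ** 2
             + (params["lr_exp"] - target["lr_exp"]) ** 2)

def mutate(params):
    child = dict(params)
    child["layers"] = max(1, child["layers"] + random.choice([-1, 0, 1]))
    child["lr_exp"] = child["lr_exp"] + random.choice([-1, 0, 1])
    return child

population = [{"layers": random.randint(1, 10), "lr_exp": random.randint(-6, -1)}
              for _ in range(20)]

for generation in range(15):
    population.sort(key=score, reverse=True)
    parents = population[:5]                          # keep the best candidates
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

print("best hyperparameters found:", max(population, key=score))
```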

A Virtual Data Scientist

That’s not dissimilar to the approach of a company called H2O.ai, a startup out of Silicon Valley that uses open source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.

“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”

The company’s latest product, Driverless AI, promises to deliver the data scientist equivalent of a chessmaster to its customers (the company claims several such grandmasters in its employ and advisory board). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify what features should be included in the computer model to make the most of the data based on the best “chess moves” of its grandmasters.

“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”

Inside the Black Box

Not unlike how the human brain reaches a conclusion, it’s not always possible to understand how a machine, despite being designed by humans, reaches its own solutions. The lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.

“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.

Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.

The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.

“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”

Moving Forward

Much digital ink has been spilled over the dearth of skilled data scientists, so automating certain design aspects for developing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as parsing the inherent biases that exist within the data used by machine learning today.

“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”

The team at ORNL expects it can also make bigger impacts beginning next year when the lab’s next supercomputer, Summit, comes online. While Summit will boast only 4,600 nodes, it will sport the latest and greatest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, the world’s fifth-most powerful supercomputer today.

“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.

It’s all in a day’s work.

Image Credit: Gennady Danilkin / Shutterstock.com

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Singularity Hub] January 3, 2018 at 11:07AM. Credit to Author and Singularity Hub | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

How a Machine That Can Make Anything Would Change Everything

Your daily selection of the latest science news!

According to Singularity Hub

“Something is going to happen in the next forty years that will change things, probably more than anything else since we left the caves.” –James Burke

James Burke has a vision for the future. He believes that by the middle of this century, perhaps as early as 2042, our world will be defined by a new device: the nanofabricator.

These tiny factories will be large at first, like early computers, but soon enough you’ll be able to buy one that can fit on a desk. You’ll pour in some raw materials—perhaps water, air, dirt, and a few powders of rare elements if required—and the nanofabricator will go to work. Powered by flexible photovoltaic panels that coat your house, it will tear apart the molecules of the raw materials, manipulating them on the atomic level to create…anything you like. Food. A new laptop. A copy of Kate Bush’s debut album, The Kick Inside. Anything, providing you can give it both the raw materials and the blueprint for creation.

It sounds like science fiction—although, with the advent of 3D printers in recent years, less so than it used to. Burke, who hosted the BBC show Tomorrow’s World, which introduced bemused and excited audiences to all kinds of technologies, has a decades-long track record of technological predictions. He isn’t alone in envisioning the nanofactory as the technology that will change the world forever. Eric Drexler, thought by many to be the father of nanotechnology, wrote in the 1990s about molecular assemblers, hypothetical machines capable of manipulating matter and constructing molecules on the nano level, with scales of a billionth of a meter.

Richard Feynman, the famous inspirational physicist and bongo-playing eccentric, gave the lecture that inspired Drexler as early as 1959. Feynman’s talk, “Plenty of Room at the Bottom,” speculated about a world where moving individual atoms would be possible. While this is considered more difficult than molecular manufacturing, which seeks to manipulate slightly bigger chunks of matter, to date no one has been able to demonstrate that such machines violate the laws of physics.

In recent years, progress has been made towards this goal. It may well be that we make faster progress by mimicking the processes of biology, where individual cells, optimized by billions of years of evolution, routinely manipulate chemicals and molecules to keep us alive.

“If nanofabricators are ever built, the systems and structure of the world as we know them were built to solve a problem that will no longer exist.”

But the dream of the nanofabricator is not yet dead. What is perhaps even more astonishing than the idea of having such a device—something that could create anything you want—are the potential consequences it could have for society. Suddenly, all you need is light and raw materials. Starvation ceases to be a problem. After all, what is food? Carbon, hydrogen, nitrogen, phosphorus, sulphur. Nothing that you won’t find with some dirt, some air, and maybe a little biomass thrown in for efficiency’s sake.

Equally, there’s no need to worry about not having medicine as long as you have the recipe and a nanofabricator. After all, the same elements I listed above could just as easily make insulin, paracetamol, and presumably the superior drugs of the future, too.

What the internet did for information—allowing it to be shared, transmitted, and replicated with ease, instantaneously—the nanofabricator would do for physical objects. Energy will be in plentiful supply from the sun; your Santa Claus machine will be able to create new solar panels and batteries to harness and store this energy whenever it needs to.

Suddenly only three commodities have any value: the raw materials for the nanofabricator (many of which, depending on what you want to make, will be plentiful just from the world around you); the nanofabricators themselves (unless, of course, they can self-replicate, in which case they become just a simple ‘conversion’ away from raw materials); and, finally, the blueprints for the things you want to make.

In a world where material possessions are abundant for everyone, will anyone see any necessity in hoarding these blueprints? Far better for a few designers to tinker and create new things for the joy of it, and share them with all. What does ‘profit’ mean in a world where you can generate anything you want?

As Burke puts it, “This will destroy the current social, economic, and political system, because it will become pointless…every institution, every value system, every aspect of our lives have been governed by scarcity: the problem of distributing a finite amount of stuff. There will be no need for any of the social institutions.”

In other words, if nanofabricators are ever built, the systems and structure of the world as we know them were built to solve a problem that will no longer exist.

In some ways, speculating about such a world that’s so far removed from our own reminds me of Eliezer Yudkowsky’s warning about trying to divine what a superintelligent AI might make of the human race. We are limited to considering things in our own terms; we might think of a mouse as low on the scale of intelligence, and Einstein as the high end. But superintelligence breaks the scale; there is no sense in comparing it to anything we know, because it is different in kind. In the same way, such a world would be different in kind to the one we live in today.

We, too, will be different in kind, liberated more than ever before from the drive for survival, the great struggle of humanity. No human attempt at measurement can comprehend what is inside a black hole, a physical singularity. Similarly, inside the veil of this technological singularity, no human attempt at prognostication can really comprehend what the future will look like. The one thing that seems certain is that human history will be forever divided in two. We may well be living in the Dark Age before this great dawn. Or it may never happen. But James Burke, just as he did over forty years ago, has faith.

Image Credit: 3DSculptor / Shutterstock.com

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Singularity Hub] December 25, 2017 at 11:03AM. Credit to Author and Singularity Hub | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Physicists set new record with 10-qubit entanglement

Your daily selection of the latest science news!

According to Phys.org – latest science and technology news stories

False-color circuit image showing 10 superconducting qubits (star shapes) interconnected by a central bus resonator B (gray). Credit: Song et al. ©2017 American Physical Society

 

(Phys.org)—Physicists have experimentally demonstrated quantum entanglement with 10 qubits on a superconducting circuit, surpassing the previous record of nine entangled superconducting qubits. The 10-qubit state is the largest multiqubit entangled state created in any solid-state system and represents a step toward realizing large-scale quantum computing.

Lead researcher Jian-Wei Pan and co-workers at the University of Science and Technology of China, Zhejiang University, Fuzhou University, and the Institute of Physics, China, have published a paper on their results in a recent issue of Physical Review Letters.

In general, one of the biggest challenges to scaling up multiqubit entanglement is addressing the catastrophic effects of decoherence. One strategy is to use superconducting circuits, which operate at very cold temperatures and consequently have longer coherence times.

In the new set-up, the researchers used qubits made of tiny pieces of aluminum, which they connected to each other and arranged in a circle around a central bus resonator. The bus is a key component of the system, as it controls the interactions between qubits, and these interactions generate the entanglement.

As the researchers demonstrated, the bus can create entanglement between any two qubits, can produce multiple entangled pairs, or can entangle up to all 10 qubits. Unlike some previous demonstrations, the entanglement does not require a series of logic gates, nor does it involve modifying the physical wiring of the circuit, but instead all 10 qubits can be entangled with a single collective qubit-bus interaction.

To measure how well the qubits are entangled, the researchers used quantum tomography to determine the probability of measuring every possible state of the system. Although there are thousands of such states, the resulting probability distribution yielded the correct state about 67% of the time. This fidelity is well above the threshold for genuine multipartite entanglement (generally considered to be about 50%).
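
As an illustration of how such a fidelity number is read off (a sketch that assumes a GHZ-type target state rather than reproducing the paper’s analysis), one can mix an ideal ten-qubit state with white noise and compute its overlap with the target:

```python
# Illustrative sketch (not the experiment's analysis): reading off a fidelity.
# The target is assumed to be a 10-qubit GHZ-type state, and the "measured"
# density matrix is the ideal state mixed with a white-noise fraction p.
import numpy as np

n = 10
dim = 2 ** n
ghz = np.zeros(dim)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)             # (|00...0> + |11...1>) / sqrt(2)
rho_ideal = np.outer(ghz, ghz)

p = 0.35                                      # assumed noise fraction
rho_noisy = (1 - p) * rho_ideal + p * np.eye(dim) / dim

fidelity = ghz @ rho_noisy @ ghz              # <GHZ| rho |GHZ> for a pure target
print(f"fidelity = {fidelity:.3f} (genuine-entanglement threshold: 0.5)")
```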

In the future, the physicists’ goal is to develop a quantum simulator that could simulate the behavior of small molecules and other quantum systems, which would allow for a more efficient analysis of these systems compared to what is possible with classical computers.


Explore further:
Quantum computing on the move

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Phys.org – latest science and technology news stories] November 29, 2017 at 09:33AM. Credit to Author and Phys.org – latest science and technology news stories | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Synopsis: Quantum Circulators Simplified

Your daily selection of the latest science news!

According to Physics – spotlighting exceptional research

Synopsis figure: B. J. Chapman et al., Phys. Rev. X (2017)

The superconducting qubit is a leading candidate for building a quantum computer. So far, however, quantum circuits with only a small number of such qubits have been demonstrated. As researchers scale up the qubit number, they need devices that can route the microwave signals with which these qubits communicate. Benjamin J. Chapman at JILA, the University of Colorado, and the National Institute of Standards and Technology, all in Boulder, Colorado, and co-workers have designed, built, and tested a compact on-chip microwave circulator that could be integrated into large qubit architectures.

Circulators are multiple-port devices that transmit signals directionally—a signal entering from port i will exit from port i+1. This property can be used to shield qubits from stray microwave fields, which could perturb the qubits’ coherence. The device’s directional, or nonreciprocal, behavior requires a symmetry-breaking mechanism. Commercial circulators exploit the nonreciprocal polarization rotation of microwave signals in a permanent magnet’s field, but they are too bulky for large-scale quantum computing applications. Newly demonstrated circulators, based on the nonreciprocity of the quantum Hall effect, can be integrated on chips (see Synopsis: Quantum Circulator on a Chip) but require tesla-scale magnetic fields to operate or initialize them.

The team’s chip-based scheme can instead be operated with very small magnetic fields (10–100 μT). Inside the device, simple circuits shift the signals in frequency and time, in a sequence that is different for each input port. These noncommutative temporal and spectral shifts provide the symmetry-breaking mechanism that gives the device its directionality. Experimental tests prove that the circulator works at high speed and with minimal losses, while an analysis of the device’s noise performance indicates that up to 1000 of these circulators could in principle be integrated in a single-superconducting-qubit setup.
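
A minimal sketch (not from the paper) of the routing rule described above: an ideal three-port circulator is just a cyclic permutation acting on the port amplitudes, and its scattering matrix is not symmetric, which is what nonreciprocal means.

```python
# Minimal sketch (not from the paper): an ideal three-port circulator as a
# scattering matrix. A signal entering port i leaves from port i+1, and the
# matrix is not symmetric, the hallmark of nonreciprocal routing.
import numpy as np

S = np.array([[0, 0, 1],     # row j, column i: amplitude routed from port i to port j
              [1, 0, 0],
              [0, 1, 0]])

signal_in = np.array([1, 0, 0])          # unit signal entering port 1
print("output ports:", S @ signal_in)    # -> exits port 2
print("reciprocal?", np.array_equal(S, S.T))
```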

This research is published in Physical Review X.

 

Read more…

  • Got any news, tips or want to contact us directly? Email esistme@gmail.com

__

This article and images were originally posted on [Physics – spotlighting exceptional research] November 22, 2017 at 01:05PM. Credit to Author and Physics – spotlighting exceptional research | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

An Artificial Intelligence Just Found 56 New Gravitational Lenses

Your daily selection of the latest science news!

This illustration shows how gravitational lensing works. The gravity of a large galaxy cluster is so strong, it bends, brightens and distorts the light of distant galaxies behind it. The scale has been greatly exaggerated; in reality, the distant galaxy is much further away and much smaller. Credit: NASA, ESA, L. Calcada

According to Universe Today

Gravitational lenses are an important tool for astronomers seeking to study the most distant objects in the Universe. This technique involves using a massive concentration of matter (usually a galaxy or galaxy cluster) between a distant light source and an observer to better see light coming from that source. In an effect that was predicted by Einstein’s Theory of General Relativity, this allows astronomers to see objects that might otherwise be obscured.

Recently, a group of European astronomers developed a method for finding gravitational lenses in enormous piles of data. Using the same artificial intelligence algorithms that Google, Facebook and Tesla have used for their purposes, they were able to find 56 new gravitational lensing candidates from a massive astronomical survey. This method could eliminate the need for astronomers to conduct visual inspections of astronomical images.

The study describing their research, titled “Finding strong gravitational lenses in the Kilo Degree Survey with Convolutional Neural Networks,” recently appeared in the Monthly Notices of the Royal Astronomical Society. Led by Carlo Enrico Petrillo of the Kapteyn Astronomical Institute, the team also included members of the National Institute for Astrophysics (INAF), the Argelander-Institute for Astronomy (AIfA) and the University of Naples.

While useful to astronomers, gravitational lenses are a pain to find. Ordinarily, this would consist of astronomers sorting through thousands of images snapped by telescopes and observatories. While academic institutions are able to rely on amateur and citizen astronomers like never before, there is simply no way to keep up with the millions of images that are being regularly captured by instruments around the world.

To address this, Dr. Petrillo and his colleagues turned to what are known as “Convolutional Neural Networks” (CNNs), a type of machine-learning algorithm that mines data for specific patterns. Google used these same neural networks to win a match of Go against the world champion, Facebook uses them to recognize things in images posted on its site, and Tesla has been using them to develop self-driving cars.
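The study's actual network and training setup are not reproduced in this excerpt, so the following is only a minimal sketch of the general approach: a small convolutional network that scores colour image cutouts as lens candidates. The layer sizes, cutout size and scoring step are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of a lens / non-lens classifier for survey image cutouts.
# The architecture and cutout size are illustrative assumptions, not the
# network used in the KiDS study.
class LensCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)   # one logit per cutout

    def forward(self, x):                    # x: (batch, 3, H, W) colour cutouts
        return self.classifier(self.features(x).flatten(1))

model = LensCNN()
cutouts = torch.randn(8, 3, 101, 101)        # stand-ins for real survey cutouts
scores = torch.sigmoid(model(cutouts))       # high scores become lens candidates
print(scores.squeeze())
```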

As Petrillo explained in a recent press article from the Netherlands Research School for Astronomy:

“This is the first time a convolutional neural network has been used to find peculiar objects in an astronomical survey. I think it will become the norm since future astronomical surveys will produce an enormous quantity of data which will be necessary to inspect. We don’t have enough astronomers to cope with this.”

The team then applied these neural networks to data derived from the Kilo-Degree Survey (KiDS). This project relies on the VLT Survey Telescope (VST) at the ESO’s Paranal Observatory in Chile to map 1500 square degrees of the southern night sky. This data set consisted of 21,789 color images collected by the VST’s OmegaCAM, a multiband instrument developed by a consortium of European scientists in conjunction with the ESO.

These images all contained examples of Luminous Red Galaxies (LRGs), three of which were known to be gravitational lenses. Initially, the neural network found 761 gravitational lens candidates within this sample. After inspecting these candidates visually, the team was able to narrow the list down to 56 lenses. These still need to be confirmed by space telescopes in the future, but the results were quite positive.

As they indicate in their study, such a neural network, when applied to larger data sets, could reveal hundreds or even thousands of new lenses:

“A conservative estimate based on our results shows that with our proposed method it should be possible to find ∼100 massive LRG-galaxy lenses at z ≳ 0.4 in KiDS when completed. In the most optimistic scenario this number can grow considerably (to maximally ∼2400 lenses), when widening the colour-magnitude selection and training the CNN to recognize smaller image-separation lens systems.”

In addition, the neural network rediscovered two of the known lenses in the data set, but missed the third one. However, this was because that lens was particularly small and the neural network was not trained to detect lenses of this size. In the future, the researchers hope to correct for this by training their neural network to notice smaller lenses and to reject false positives.

But of course, the ultimate goal here is to remove the need for visual inspection entirely. In so doing, astronomers would be freed up from having to do grunt work, and could dedicate more time towards the process of discovery. In much the same way, machine learning algorithms could be used to search through astronomical data for signals of gravitational waves and exoplanets.

Much as other industries are seeking to make sense of terabytes of consumer and other “big data”, the fields of astrophysics and cosmology could come to rely on artificial intelligence to find the patterns in a Universe of raw data. And the payoff is likely to be nothing less than an accelerated process of discovery.

Read more…

__

This article and images were originally posted on [Universe Today] October 26, 2017 at 07:18PM

Credit to Author and Universe Today | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

 

This neural network aims to give your smartphone photos DSLR-like quality

Scientists from the Computer Vision Lab at ETH Zürich university have developed the new tool which uses deep learning to automatically improve photos.

Your daily selection of the hottest trending tech news!

According to Android Authority

Developments in AI and neural networks have serious implications for photography. It’s through deep learning that we can achieve things like improving photos before they’ve even been taken or creating realistic-looking fake celebrities. Thanks to a new report (via Engadget), we’ve gotten a look at an exciting new neural network that’s designed to upscale your smartphone photos to DSLR-quality.

Scientists from the Computer Vision Lab at ETH Zürich university have developed the new tool which uses deep learning to automatically improve photos. Using a database of high-quality images as a reference point, the neural network can make adjustments to smartphone photos, bringing them closer to what a DSLR camera would provide.

Of course, this isn’t really applying any kind of DSLR tech to a smartphone, it’s simply trying to mimic the properties visible in DSLR photos on non-DSLR images. But it’s nonetheless interesting: see the original image on top of the collage below (and right in the image above) with the same photo after processing underneath.
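As a rough sketch of how such a system can be set up (assumed details only; this is not the ETH Zürich team's actual architecture or loss), a small network can be trained on paired patches to nudge a phone photo toward its DSLR reference:

```python
import torch
import torch.nn as nn

# Minimal sketch of learning a phone-to-DSLR mapping from paired patches.
# A generic residual CNN with an L1 loss, not the ETH Zurich model.
class EnhanceNet(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a residual correction on top of the input photo.
        return torch.clamp(x + self.body(x), 0.0, 1.0)

net = EnhanceNet()
optimiser = torch.optim.Adam(net.parameters(), lr=1e-4)

phone = torch.rand(4, 3, 100, 100)   # stand-ins for paired training patches
dslr = torch.rand(4, 3, 100, 100)    # the corresponding DSLR reference patches

optimiser.zero_grad()
loss = nn.functional.l1_loss(net(phone), dslr)   # pull outputs toward the DSLR targets
loss.backward()
optimiser.step()
print(float(loss))
```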

At the moment, the images look more like what you might expect from something like the Google Photos “auto-enhance” feature than from a DSLR camera (notice how the exposure on the left image at the top of the page is completely overblown), but the team is aiming to build on this in the future. Apparently, automatic correction for certain shooting conditions, like making an overcast day appear brighter, is in the works.

 

Read more…

__

This article and images were originally posted on [Android Authority] October 30, 2017 at 10:11AM

Credit to Author and Android Authority

 

 

 

 

Can good design be cost-effective: Team builds massive database of mobile-app designs

By mining designs at scale, semantic relationships can be found between seemingly unrelated apps and learned from, Kumar says.

Your daily selection of the latest science news!

According to Phys.org


Scroll through your smartphone screen and you’ll no doubt see a small sea of apps for everything from watching sports to tracking the movements of the stock market.

The number of apps has exploded in recent years along with the proliferation of smartphones, tablets, and the ways they can be used.

But designing those apps for maximum utility is mostly a hit or miss process, according to Illinois Computer Science Professor Ranjitha Kumar. There are only limited guides to what works and what doesn’t.

Kumar would like to change that, and she believes it is possible with the recent release of Rico, a huge database of mobile-app designs collected by her and a group of other researchers.

Their paper on Rico will be presented at the ACM Symposium on User Interface Software and Technology (UIST), which starts Oct. 22 in Quebec City, Canada.

“Existing practice involves inspecting a bunch of design examples by hand. What you’ll usually do when you have a new project is you’ll go look at other apps that are doing similar things, and you would actually print them out and try to visualize, ‘These are the screens a user would go through to perform this task in this app,'” she said.

But that manual approach is slow and expensive, so designers are likely to look only at what they know. A developer of, say, a diabetes app might try to limit her time and expense by looking first—and perhaps only—at other similar medical apps.

But other apps that seem to have little or no relation might offer design elements that could help them be more engaging, Kumar says. The diabetes app might benefit from a screen where users log the foods they eat, something that might be built into a food-blogging app the designer might never look at.

By mining designs at scale, semantic relationships can be found between seemingly unrelated apps and learned from, Kumar says.
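A minimal sketch of what mining designs at scale can look like in practice (the embeddings below are random placeholders for vectors learned from a corpus such as Rico, and the screen names are invented): represent every screen as a vector and retrieve its nearest neighbours across all apps.

```python
import numpy as np

# Minimal sketch: surface semantically similar screens across apps by
# nearest-neighbour search over screen embeddings. The 64-dimensional vectors
# below are random placeholders for embeddings learned from a design corpus.
rng = np.random.default_rng(0)
screen_ids = [f"app{i // 10}/screen{i % 10}" for i in range(200)]
embeddings = rng.normal(size=(200, 64))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def similar_screens(query_idx, k=5):
    """Return the k screens closest to the query by cosine similarity."""
    sims = embeddings @ embeddings[query_idx]
    order = np.argsort(-sims)
    return [(screen_ids[j], round(float(sims[j]), 3)) for j in order[1:k + 1]]

# A food-logging screen in one app may turn up as a neighbour of a
# diabetes-tracking screen in another, even if the apps seem unrelated.
print(similar_screens(0))
```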

 

Read more…

__

This article and images were originally posted on [Phys.org – latest science and technology news stories] October 25, 2017 at 02:45PM

Credit to Author and Phys.org

 

 

 

 

Google’s Machine Learning Software Has Learned to Replicate Itself

These results are meaningful because even at Google, few people have the requisite expertise to build next generation AI systems. It takes a rarified skill set to automate this area, but once it is achieved, it will change the industry.

Your daily selection of the latest science news!

According to ScienceAlert

Back in May, Google revealed its AutoML project: artificial intelligence (AI) designed to help it create other AIs.

Now, Google has announced that AutoML has beaten the human AI engineers at their own game by building machine-learning software that’s more efficient and powerful than the best human-designed systems.

An AutoML system recently broke a record for categorising images by their content, scoring 82 percent.

While that’s a relatively simple task, AutoML also beat the human-built system at a more complex task integral to autonomous robots and augmented reality: marking the location of multiple objects in an image.

For that task, AutoML scored 43 percent versus the human-built system’s 39 percent.
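Google's AutoML drives this process with a learned controller that proposes candidate architectures, which is not reproduced here; as a much simpler stand-in, the outer search loop can be sketched as sampling configurations, evaluating each, and keeping the best. The search space and scoring function below are placeholders.

```python
import random

# Minimal stand-in for architecture search: sample candidate configurations,
# evaluate each one, keep the best. Google's AutoML uses a learned controller
# rather than random sampling.
SEARCH_SPACE = {
    "num_layers": [2, 4, 6, 8],
    "filters": [16, 32, 64, 128],
    "kernel_size": [3, 5, 7],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def evaluate(config):
    """Placeholder: in practice, build the candidate network, train it, and
    return its validation accuracy. Here a random score stands in."""
    return random.random()

best_config, best_score = None, -1.0
for _ in range(20):                    # 20 trials of the search loop
    config = {name: random.choice(choices) for name, choices in SEARCH_SPACE.items()}
    score = evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config, round(best_score, 3))
```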

Read more…

__

This article and images were originally posted on [ScienceAlert] October 17, 2017 at 07:09PM

Credit to Author and ScienceAlert

 

 

 

 

FDA-approved robot assistant gives surgeons force feedback

Your daily selection of the hottest trending tech news!

According to Engadget

Surgeons are trained to operate on you accurately when you need it, but robotic assistants could help them get to hard-to-reach areas and boost their accuracy even more. Senhance, the robotic surgical assistant that has just earned the FDA’s approval, was designed to accomplish both of those. The machine can help surgeons carry out minimally invasive surgery — in fact, the FDA approved its use after a pilot test involving 150 patients, from which the agency concluded that Senhance is as accurate as the da Vinci robot for gynecological and colorectal procedures.

Now that Senhance has been approved by the FDA, you’ll likely start seeing it — from afar, we hope, and not while you’re on the operating table — in hospitals across the US. Here’s a sample procedure being performed with the machine’s help if you’d like to watch it in action.

Source: FDA, TransEnterix

Read more…

__

This article and images were originally posted on [Engadget] October 15, 2017 at 01:30AM

Credit to Author and Engadget

 

 

 

 

Quantum Inside: Intel Manufactures an Exotic New Chip

The work was done in collaboration with QuTech, a Dutch company spun out of the University of Delft that specializes in quantum computing. QuTech has made significant progress in recent years toward developing more stable qubits.

Your daily selection of the hottest trending tech news!

According to New on MIT Technology Review

Intel has begun manufacturing chips for quantum computers.

The new hardware is too feeble to do much real work, but it offers a strong signal that the technology is inching closer to real-world applications. “We’re [moving] quantum computing from the academic space to the semiconductor space,” says Jim Clarke, director of quantum hardware at Intel.

While regular computers store and manipulate data by representing binary 1s and 0s, a quantum computer uses quantum bits or “qubits,” exploiting quantum phenomena to represent data in more than one state at once. This makes it possible to compute information in a fundamentally different way, and to perform some parallel calculations in the same time it would take to perform a single one.

Quantum computing has long been an academic curiosity, and there are enormous challenges to handling quantum information reliably. The sense is now growing, however, that the technology could emerge from research labs within a matter of years (see “10 Breakthrough Technologies 2017: Practical Quantum Computers”).

Intel’s quantum chip uses superconducting qubits. The approach builds on an existing electrical circuit design but uses a fundamentally different electronic phenomenon that only works at very low temperatures. The chip, which can handle 17 qubits, was developed over the past 18 months by researchers at a lab in Oregon and is being manufactured at an Intel facility in Arizona.

Read more…

__

This article and images were originally posted on [New on MIT Technology Review] October 10, 2017 at 04:51PM

Credit to Author and New on MIT Technology Review

 

 

 

Scientists develop machine-learning method to predict the behavior of molecules

Your daily selection of the latest science news!

According to Phys.org – latest science and technology news stories


A new learning algorithm is illustrated on a molecule known as malonaldehyde that undergoes an internal chemical reaction. The distribution of red points corresponds to molecular configurations used to train the algorithm. The blue points represent configurations generated independently by the learning algorithm. The turquoise points confirm the predictions in an independent numerical experiment. Credit: Leslie Vogt.

An international, interdisciplinary research team of scientists has come up with a machine-learning method that predicts molecular behavior, a breakthrough that can aid in the development of pharmaceuticals and the design of new molecules that can be used to enhance the performance of emerging battery technologies, solar cells, and digital displays.

The work appears in the journal Nature Communications.

“By identifying patterns in molecular data, the learning algorithm or ‘machine’ we created builds a knowledge base about atomic interactions within a molecule and then draws on that information to predict new phenomena,” explains New York University’s Mark Tuckerman, a professor of chemistry and mathematics and one of the paper’s primary authors.

The paper’s other primary authors were Klaus-Robert Müller of Berlin’s Technische Universität (TUB) and the University of California Irvine’s Kieron Burke.

The work combines innovations in machine learning with physics and chemistry. Data-driven approaches, particularly in the area of machine learning, allow everyday devices to learn automatically from limited sample data and, subsequently, to act on new input information. Such approaches have transformed how we carry out common tasks like online searching, text analysis, image recognition, and language translation.

In recent years, related development has occurred in the natural sciences, with efforts directed toward engineering, materials science, and molecular design. However, machine-learning approaches in these fields have generally not explored the creation of methodologies—tools that could advance science in ways that have already been achieved in banking and public safety.

The research team created a machine that can learn complex interatomic interactions, which are normally prescribed by complex quantum-mechanical calculations, without having to perform such intricate calculations.

In constructing their machine, the researchers created a small sample set of the molecule they wished to study in order to train the algorithm and then used the machine to simulate complex chemical behavior within the molecule. As an illustrative example, they chose a chemical process that occurs within a simple molecule known as malonaldehyde. To weigh the viability of the tool, they examined how the machine predicted the chemical behavior and then compared their prediction with our current chemical understanding of the molecule. The results revealed how much the machine could learn from the limited training data it had been given.
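The team's actual learning machine is not detailed in this excerpt; a common recipe in this area, shown here only as a hedged sketch with synthetic data, is to fit a kernel-based surrogate that maps configuration descriptors to reference energies from a small training set and then predicts energies for new configurations cheaply.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Minimal sketch of a machine-learned surrogate for molecular energies:
# train on a small set of configurations with known (expensive) reference
# energies, then predict energies of new configurations cheaply. Descriptors
# and energies below are synthetic stand-ins, not malonaldehyde data.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 12))    # descriptor vectors (e.g. inverse atom distances)
y_train = np.sin(X_train).sum(axis=1)   # stand-in for reference energies

surrogate = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1)
surrogate.fit(X_train, y_train)

X_new = rng.normal(size=(5, 12))        # configurations along a simulated trajectory
print(surrogate.predict(X_new))         # fast surrogate energies
```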

“Now we have reached the ability to not only use AI to learn from data, but we can probe the AI model to further our scientific understanding and gain new insights,” remarks Klaus-Robert Müller, professor for machine learning at Technical University of Berlin.

A video demonstrating, for the first time, a chemical process that was modelled by machine learning—a proton transferring within the malonaldehyde molecule—can be viewed here: http://ift.tt/2y8CQAH … ch/machine-learning/ .

Read more…

__

This article and images were originally posted on [Phys.org – latest science and technology news stories] October 11, 2017 at 05:03AM

Credit to Author and Phys.org – latest science and technology news stories

 

 

 

 

Microsoft makes play for next wave of computing with quantum computing toolkit

That quantum computing future is, fortunately, still likely to be many years off. For now, Microsoft is taking sign ups for its quantum preview today.

Your daily selection of the best tech news!

According to Ars Technica


 

At its Ignite conference today, Microsoft announced its moves to embrace the next big thing in computing: quantum computing. Later this year, Microsoft will release a new quantum computing programming language, with full Visual Studio integration, along with a quantum computing simulator. With these, developers will be able to both develop and debug quantum programs implementing quantum algorithms.

Quantum computing uses quantum features such as superposition and entanglement to perform calculations. Where traditional digital computers are made from bits, each bit representing either a one or a zero, quantum computers are made from some number of qubits (quantum bits). Qubits represent, in some sense, both one and zero simultaneously (a quantum superposition of 1 and 0). This ability for qubits to represent multiple values gives quantum computers exponentially more computing power than traditional computers.

Traditional computers are built up of logic gates—groups of transistors that combine bits in various ways to perform operations on them—but this construction is largely invisible to people writing programs for them. Programs and algorithms aren’t written in terms of logic gates; they use higher level constructs, from arithmetic to functions to objects, and more. The same is not really true of quantum algorithms; the quantum algorithms that have been developed so far are in some ways more familiar to an electronic engineer than a software developer, with algorithms often represented as quantum circuits—arrangements of quantum logic gates, through which qubits flow—rather than more typical programming language concepts.
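A minimal sketch of that circuit picture (generic linear algebra, not Microsoft's forthcoming language or simulator): qubit states are vectors, gates are unitary matrices, and a program is a sequence of gates. Here a Hadamard gate followed by a CNOT turns two qubits into an entangled Bell state.

```python
import numpy as np

# Minimal sketch of the circuit picture: qubit states are vectors, gates are
# unitary matrices, and a program is a sequence of gates.
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Two qubits start in |00>; H on the first qubit then CNOT gives a Bell state.
state = np.kron(ket0, ket0)
state = np.kron(H, np.eye(2)) @ state
state = CNOT @ state
print(state.round(3))            # (|00> + |11>)/sqrt(2)

# Measurement probabilities for the outcomes 00, 01, 10, 11:
print(np.abs(state) ** 2)        # [0.5, 0, 0, 0.5]
```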

 

Read more…

__

This article and images were originally posted on [Ars Technica] September 25, 2017 at 09:02AM

Credit to Author and Ars Technica

 

 

 

 

Scientists create world’s first ‘molecular robot’ capable of building molecules

Whilst building and operating such tiny machine is extremely complex, the techniques used by the team are based on simple chemical processes.

Your daily selection of the latest science news!

According to Phys.org


Artist’s impression of the molecular robot manipulating a molecule. Credit: Stuart Jantzen, biocinematics.com

Scientists at The University of Manchester have created the world’s first ‘molecular robot’ that is capable of performing basic tasks including building other molecules.

The robots, which are a millionth of a millimetre in size, can be programmed to move and build molecular cargo, using a tiny robotic arm.

Each individual robot is capable of manipulating a single molecule and is made up of just 150 carbon, hydrogen, oxygen and nitrogen atoms. To put that size into context, a billion billion of these robots piled on top of each other would still only be the same size as a single grain of salt.

The robots operate by carrying out chemical reactions in special solutions which can then be controlled and programmed by scientists to perform the basic tasks.

In the future such robots could be used for medical purposes, advanced manufacturing processes and even building molecular factories and assembly lines. The research will be published in Nature on Thursday 21st September.

Professor David Leigh, who led the research at the University’s School of Chemistry, explains: ‘All matter is made up of atoms and these are the basic building blocks that form molecules. Our robot is literally a molecular robot constructed of atoms just like you can build a very simple robot out of Lego bricks. The robot then responds to a series of simple commands that are programmed with chemical inputs by a scientist.

‘It is similar to the way robots are used on a car assembly line. Those robots pick up a panel and position it so that it can be riveted in the correct way to build the bodywork of a car. So, just like the robot in the factory, our molecular version can be programmed to position and rivet components in different ways to build different products, just on a much smaller scale at a molecular level.’

The benefit of having machinery that is so small is it massively reduces demand for materials, can accelerate and improve drug discovery, dramatically reduce power requirements and rapidly increase the miniaturisation of other products. Therefore, the potential applications for molecular robots are extremely varied and exciting.

Read more…

__

This article and images were originally posted on [Phys.org – latest science and technology news stories] September 20, 2017 at 01:09PM

Credit to Author and Phys.org

 

 

 

 

Here’s What’s Really Going on With That Study Saying AI Can Detect Your Sexual Orientation

This included fixed features, such as the shape of a person’s nose, as well as transient features, such as grooming style.

Your daily selection of the latest science news!

According to ScienceAlert

Last week, scientists made headlines around the world when news broke of an artificial intelligence (AI) that had been trained to determine people’s sexual orientation from facial images more accurately than humans.

According to the study, when looking at photos this neural network could correctly distinguish between gay and heterosexual men 81 percent of the time (and 74 percent for women), but it didn’t take long before news of the findings provoked an uproar.

On Friday, sex and gender diversity groups GLAAD and the Human Rights Campaign (HRC) issued a joint statement decrying what they called “dangerous and flawed research that could cause harm to LGBTQ people around the world”.

The AI, which was trained by researchers from Stanford University on more than 35,000 public images of men and women sourced from an American dating site, used a predictive model called logistic regression to classify their sexual orientation (also made public on the site) based on their facial features.

Read more…

__

This article and images were originally posted on [ScienceAlert] September 12, 2017 at 04:02AM

Credit to Author and ScienceAlert

 

 

 

 

Breaking: An Entirely New Type of Quantum Computing Has Been Invented

Your daily selection of the latest science news!

According to ScienceAlert

Australian researchers have designed a new type of qubit – the building block of quantum computers – that they say will finally make it possible to manufacture a true, large-scale quantum computer.

Broadly speaking, there are currently a number of ways to make a quantum computer. Some take up less space, but tend to be incredibly complex. Others are simpler, but if you want it to scale up you’re going to need to knock down a few walls.

Some tried and true ways to capture a qubit are to use standard atom-taming technology such as ion traps and optical tweezers that can hold onto particles long enough for their quantum states to be analysed.

Others use circuits made of superconducting materials to detect quantum superpositions within the insanely slippery electrical currents.

The advantage of these kinds of systems is their basis in existing techniques and equipment, making them relatively affordable and easy to put together.

The cost is space – the technology might do for a relatively small number of qubits, but when you’re looking at hundreds or thousands of them linked into a computer, the scale quickly becomes unfeasible.

Thanks to coding information in both the nucleus and electron of an atom, the new silicon qubit, which is being called a ‘flip-flop qubit’, can be controlled by electric signals, instead of magnetic ones. That means it can maintain quantum entanglement across a larger distance than ever before, making it cheaper and easier to build into a scalable computer.

Read more…

__

This article and images were originally posted on [ScienceAlert] September 6, 2017 at 05:20AM

Credit to Author and ScienceAlert

 

 

 

 

Keep on marching for science education

Your daily selection of the latest science news!

According to Nature – Issue – nature.com science feeds

The new school year is beginning in the United States, and science education in Florida is at risk from laws that passed earlier this summer. It leaves me wondering: where have those who joined April’s March for Science gone?

That global action was probably the most popular science-advocacy event of this generation. I took part in Titusville, Florida, and was impressed with the attendance, enthusiasm and creative slogans. In the speeches that followed, I warned against pending legislation that would allow any citizen to demand a hearing to challenge instructional materials. Both critics and advocates see this as a way to stifle teaching about evolution and climate change. We had the summer to make our case.

The science-advocacy group Florida Citizens for Science — for which I volunteer as a board member and communications officer — led the battle to kill, or at least modify, those bills. We lost on all fronts. The bills are now law.

Where were those marchers when we needed them? I know several science cheerleaders who took some concrete steps to forestall the legislation (by phoning elected representatives, for example), but I can count on one hand the number of working scientists who offered their expertise to our group. And I didn’t hear of any who approached lawmakers on their own.

Having the scientific community more actively involved might have had an impact. The final vote in the state senate was tight. Advocates of the law were widely quoted as claiming that evolution is just a theory and that anthropogenic global warming is in doubt. It would have been invaluable if scientists at local universities had issued simple statements: yes, evolution is a fact; the word ‘theory’ is used differently in science from how it’s used in casual conversation; and the basics of human-caused global warming need to be taught. Perhaps authoritative voices from the state’s universities would have swayed a senator or two.

Since the laws were passed, dozens of articles about them have been published statewide and even nationally. Social media has been buzzing. But the scientific community is still woefully quiet.

Hey, scientists, beleaguered high-school science teachers could use your support.

Other US states have endured attacks on science education. Legislatures in Alabama and Indiana passed non-binding resolutions that encourage ‘academic freedom’ for science teachers who cover topics — including biological evolution and the chemical origins of life — that the lawmakers deem controversial.

In Iowa, state lawmakers proposed a law requiring teachers to balance instruction on evolution and global warming with opposing views. That effort dwindled without concrete action, but not because of pressure from the scientific community.

“Hey, scientists, beleaguered high-school science teachers could use your support.”

We have had some help in our efforts: Jiri Hulcr and Andrea Lucky, scientists at the University of Florida in Gainesville, spoke out with me against these bad educational bills in a newspaper opinion piece. We argued that the choice was stark: training students for careers in the twenty-first century, or plunging them into the Middle Ages.

And Paul D. Cottle at Florida State University in Tallahassee is unrelenting in pursuing his goal of preparing elementary and high-school students for their adult lives. He’s an integral part of Future Physicists of Florida, a middle-school outreach programme that identifies students with mathematical ability and guides them into courses that will prepare them for university studies in science and engineering. More generally, he makes sure that students, parents and school administrators hear the message that the path to high-paying, satisfying careers using skills acquired in mathematics and science starts long before university, and depends on accurate instruction.

Plenty of issues need attention. The pool of qualified science and maths teachers is shrinking. Florida students’ performance in state-mandated science exams has been poor and stagnant for nearly a decade. This year, the state’s education department will begin to review and select science textbooks that will be used in classrooms across the state for at least the next five years.

We need scientists who are willing to take the time and effort to push back against the textbook challenges that these new laws will encourage. We need expert advisers eager to review and recommend quality science textbooks for our schools. We need bold scientists ready to state unapologetically that evolution, global warming — and, yes, even a round Earth — are facts of life.

You’re busy. I know. And some of you are uncomfortable in the spotlight. But doing something, even on a small scale, is better than doing nothing. Sign up for action alerts from the National Center for Science Education and your state’s science-advocacy group, if you have one. Be a voice within any organizations you belong to, urging them to make statements supporting science education as issues arise. Introduce yourself to teachers at local elementary and high schools.

Even if all you have to offer are ideas and emotional support, we’ll take them. Politicians, school administrators, business leaders, parents and even children need to know that you support high-quality science education.

The March for Science was a beneficial, feel-good event. It’s over. But we need you to keep on marching!

Read more…

__

This article and images were originally posted on [Nature – Issue – nature.com science feeds] August 30, 2017 at 01:07PM

Credit to Author and Nature

 

 

 

EGaming, the Humble Book Bundle: Data Science is LIVE!

Pick up this Data Science Book Bundle from O’Reilly Media and get Thoughtful Machine Learning with Python, R in a Nutshell, Doing Data Science, Head First Data Analysis, and more.

The Humble Book Bundle: Data Science presented by O’Reilly just launched on Wednesday, August 30 at 11 a.m. Pacific time! Is your programming inadequate to the task? Well, Data, we’ve got the bundle for you. Pick up this Data Science Book Bundle from O’Reilly Media and get Thoughtful Machine Learning with Python, R in a Nutshell, Doing Data Science, Head First Data Analysis, and more. You’ll excel in no time – and remember, the effort yields its own rewards!


 

 

ESIST may receive a commission for any purchases made through our affiliate links. All commissions made will be used to support and expand ESIST.Tech

This article and images were originally posted on [ESIST]

 

 

 

AI Could Predict Alzheimer’s Disease Two Years in Advance

The technology is still in its early stages, but the findings suggest that AI analysis of brain scans could offer better results than relying on humans alone, Rosa-Neto told Live Science.

Your daily selection of the latest science news!

According to Live Science

An artificial-intelligence-driven algorithm can recognize the early signs of dementia in brain scans, and may accurately predict who will develop Alzheimer’s disease up to two years in advance, a new study finds.

The algorithm — which accurately predicted probable Alzheimer’s disease 84 percent of the time — could be particularly useful in selecting patients for clinical trials for drugs intended to delay disease onset, said lead study author Sulantha Sanjeewa, a computer scientist at McGill University in Canada.

“If you can tell from a group of individuals who is the one that will develop the disease, one can better test new medications that could be capable of preventing the disease,” said co-lead study author Dr. Pedro Rosa-Neto, an associate professor of neurology, neurosurgery and psychiatry, also at McGill University. [6 Big Mysteries of Alzheimer’s Disease]
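The excerpt does not describe the study's model in detail, so the following is only a hedged sketch of the general setup, with random stand-in data: a classifier trained on scan-derived features (for example, regional amyloid-PET values) to predict progression within two years.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Minimal sketch: predict progression to Alzheimer's within two years from
# features derived from brain scans. The data are random stand-ins; the
# study's actual features and model are not reproduced here.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 20))      # e.g. regional amyloid-PET uptake values
y = rng.integers(0, 2, size=300)    # 1 = progressed within two years

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```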

 

Read more…

__

This article and images were originally posted on [Live Science] August 29, 2017 at 06:21PM

Credit to Author and Live Science

 

 

 

How machine learning could help to improve climate forecasts

Conventional computer algorithms rely on programmers entering reams of rules and facts to guide the system’s output. Machine-learning systems — and a subset, deep-learning systems, which simulate complex neural networks in the human brain — derive their own rules after combing through large amounts of data.

Your daily selection of the latest science news!

According to Nature 

Greg Kendall-Ball

Many of the latest climate models seek to increase the detail in simulations of cloud structure.

As Earth-observing satellites become more plentiful and climate models more powerful, researchers who study global warming are facing a deluge of data. Some are now turning to the latest trend in artificial intelligence (AI) to help trawl through all the information, in the hope of discovering new climate patterns and improving forecasts.

“Climate is now a data problem,” says Claire Monteleoni, a computer scientist at George Washington University in Washington DC who has helped to pioneer the marriage of machine-learning techniques with climate science. In machine learning, AI systems improve in performance as the amount of data that they analyse grows. This approach is a natural fit for climate science: a single run of a high-resolution climate model can produce a petabyte of data, and the archive of climate data maintained by the UK Met Office, the national weather service, now holds about 45 petabytes of information — and adds 0.085 petabytes a day.

Researchers hoping to wrangle all these data will meet next month in Boulder, Colorado, to assess the state of science in the field known as climate informatics. Work in this area has grown rapidly. In the past several years, researchers have used AI systems to help them to rank climate models, spot cyclones and other extreme weather events — in both real and modelled climate data — and identify new climate patterns. “The pace seems to be picking up,” says Monteleoni.
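One of the tasks mentioned above, spotting extreme events in climate data, can be sketched with a simple baseline (the toy data and threshold below are illustrative): flag grid cells that exceed their local 99th percentile, the kind of rule that learned detectors aim to improve on.

```python
import numpy as np

# Minimal sketch of one task mentioned above: flagging extreme events in
# gridded climate output. A per-cell percentile threshold is the simple
# baseline that learned detectors (e.g. CNNs) aim to improve on. Toy data.
rng = np.random.default_rng(7)
precip = rng.gamma(shape=2.0, scale=3.0, size=(365, 90, 180))  # day x lat x lon

threshold = np.percentile(precip, 99, axis=0)   # local 99th percentile
extreme = precip > threshold                    # True where precipitation is extreme

# Days with the largest area of extreme precipitation:
area_per_day = extreme.sum(axis=(1, 2))
print(np.argsort(area_per_day)[-5:])            # indices of the five most widespread events
```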

 

Read more…

__

This article and images were originally posted on [Nature – Issue – nature.com science feeds] August 23, 2017 at 01:17PM

Credit to Author and Nature – Issue – nature.com science feeds

 

 

 

Quantum Computing Is Coming at Us Fast, So Here’s Everything You Need to Know

In the quantum world, objects can exist in what is called a superposition of states: a hypothetical atomic-level light bulb could simultaneously be both on and off. This strange feature has important ramifications for computing.

Your daily selection of the latest science news!

According to ScienceAlert

In early July, Google announced that it will expand its commercially available cloud computing services to include quantum computing. A similar service has been available from IBM since May. These aren’t services most regular people will have a lot of reason to use yet.

But making quantum computers more accessible will help government, academic and corporate research groups around the world continue their study of the capabilities of quantum computing.

Understanding how these systems work requires exploring a different area of physics than most people are familiar with.

From everyday experience we are familiar with what physicists call “classical mechanics,” which governs most of the world we can see with our own eyes, such as what happens when a car hits a building, what path a ball takes when it’s thrown and why it’s hard to drag a cooler across a sandy beach.

Quantum mechanics, however, describes the subatomic realm – the behaviour of protons, electrons and photons. The laws of quantum mechanics are very different from those of classical mechanics and can lead to some unexpected and counterintuitive results, such as the idea that an object can have negative mass.
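In standard notation (a textbook statement, not something specific to this article), the superposition mentioned above means a single qubit's state can be written as

\[
\lvert\psi\rangle \;=\; \alpha\lvert 0\rangle + \beta\lvert 1\rangle,
\qquad |\alpha|^{2} + |\beta|^{2} = 1,
\]

where a measurement returns 0 with probability \(|\alpha|^{2}\) and 1 with probability \(|\beta|^{2}\).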

Physicists around the world – in government, academic and corporate research groups – continue to explore real-world deployments of technologies based on quantum mechanics. And computer scientists, including me, are looking to understand how these technologies can be used to advance computing and cryptography.

Read more…

__

This article and images were originally posted on [ScienceAlert] August 26, 2017 at 12:20AM

Credit to Author and ScienceAlert

 

 

 

Major leap towards data storage at the molecular level

The potential for molecular data storage is huge. To put it into a consumer context, molecular technologies could store more than 200 terabits of data per square inch – that’s 25,000 GB of information stored in something approximately the size of a 50p coin, compared to Apple’s latest iPhone 7 with a maximum storage of 256 GB.

Your daily selection of the latest science news!

According to Phys.org 

Credit: CC0 Public Domain

From smartphones to supercomputers, the growing need for smaller and more energy efficient devices has made higher density data storage one of the most important technological quests.

Now scientists at the University of Manchester have proved that storing data with a class of molecules known as single-molecule magnets is more feasible than previously thought.

The research, led by Dr David Mills and Dr Nicholas Chilton, from the School of Chemistry, is being published in Nature. It shows that magnetic hysteresis, a memory effect that is a prerequisite of any data storage, is possible in individual molecules at -213 °C. This is extremely close to the temperature of liquid nitrogen (-196 °C).

The result means that data storage with single molecules could become a reality because the data servers could be cooled using relatively cheap liquid nitrogen at -196 °C instead of far more expensive liquid helium (-269 °C). The research provides proof-of-concept that such technologies could be achievable in the near future.
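The density figure quoted in the summary above checks out with a simple unit conversion (a rough sketch; the 50p-coin comparison assumes its face is roughly a square inch).

```python
# Rough unit-conversion check of the density figure quoted in the summary above.
terabits_per_sq_inch = 200
gigabytes_per_sq_inch = terabits_per_sq_inch * 1000 / 8   # 1 Tb = 1000 Gb; 8 bits per byte
print(gigabytes_per_sq_inch)   # 25000.0 GB in roughly the face area of a 50p coin
```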

Provided by:
University of Manchester

 

Read more…

__

This article and images were originally posted on [Phys.org – latest science and technology news stories] August 23, 2017 at 01:03PM

Credit to Author and Phys.org 

 

 

 

Breakthrough ink discovery could transform the production of new laser and optoelectronic devices

The research titled Black phosphorus ink formulation for inkjet printing of optoelectronics and photonics has been published today in Nature Communications and was funded by the Royal Academy of Engineering and the Engineering and Physical Sciences Research Council (EPSRC).

Your daily selection of the latest science news!

According to Phys.org 


Black phosphorus (BP) crystal before it is converted into functional ink. Credit: smart-elements.com

A breakthrough ‘recipe’ for inkjet printing, which could enable high-volume manufacturing of next-generation laser and optoelectronic technologies, has been uncovered by Cambridge researchers.

The research, led by Dr Tawfique Hasan, of the Cambridge Graphene Centre, University of Cambridge, found that black phosphorus (BP) ink – a unique two-dimensional material similar to graphene – is compatible with conventional inkjet techniques, making possible – for the first time – the scalable mass manufacture of BP-based laser and optoelectronic devices.

An interdisciplinary team of scientists from Cambridge as well as Imperial College London, Aalto University, Beihang University, and Zhejiang University, carefully optimised the chemical composition of BP to achieve a stable ink through the balance of complex and competing fluidic effects. This enabled the production of new functional laser and optoelectronic devices using high-speed printing.

Read more…

__

This article and images were originally posted on [Phys.org – latest science and technology news stories] August 17, 2017 at 08:33AM

Credit to Author and Phys.org

 

 

 

Featured video: A self-driving wheelchair

Your daily selection of the best tech news!

According to MIT News 

Singapore and MIT have been at the forefront of autonomous vehicle development. First, there were self-driving golf buggies. Then, an autonomous electric car. Now, leveraging similar technology, MIT and Singaporean researchers have developed and deployed a self-driving wheelchair at a hospital.

Spearheaded by Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of MIT’s Computer Science and Artificial Intelligence Laboratory, this autonomous wheelchair is an extension of the self-driving scooter that launched at MIT last year — and it is a testament to the success of the Singapore-MIT Alliance for Research and Technology, or SMART, a collaboration between researchers at MIT and in Singapore.

 

Read more…

__

This article and images were originally posted on [MIT News] July 26, 2017 at 03:35PM

Credit to Author and MIT News – Computer Science and Artificial Intelligence Laboratory (CSAIL)

 

 

 

This Insane Nanochip Device Can Heal Tissue Just by Touching The Skin Once

By using our novel nanochip technology, injured or compromised organs can be replaced,” says one of the study leaders, Chandan Sen. “We have shown that skin is a fertile land where we can grow the elements of any organ that is declining. Wait, are we in Star Trek now?

Your daily selection of the latest science news!

According to ScienceAlert

Imagine buzzing the skin over an internal wound with an electrical device and having it heal over just a few days – that’s the promise of new nanochip technology that can reprogram cells to replace tissue or even whole organs.

It’s called Tissue Nanotransfection (TNT), and while it’s only been tested on mice and pigs so far, the early signs are encouraging for this new body repair tool – and it sounds like a device straight out of science-fiction.

The prototype device, developed by a team at Ohio State University, sits on the skin and uses an intense electrical field to deliver specific genes to the tissue underneath it. Those genes create new types of cells that can be used nearby or elsewhere in the body.

“By using our novel nanochip technology, injured or compromised organs can be replaced,” says one of the study leaders, Chandan Sen. “We have shown that skin is a fertile land where we can grow the elements of any organ that is declining.”

Read more…

__

This article and images were originally posted on [ScienceAlert] August 9, 2017 at 03:17AM

Credit to Author and ScienceAlert

 

 

 

This Teenage Girl Invented an AI-Based App That Can Quickly Diagnose Eye Disease

This ingenious system is called Eyeagnosis, and Kopparapu – a high school junior – recently presented it at the O’Reilly Artificial Intelligence conference in New York.

Your daily selection of the latest science news!

According to ScienceAlert

You’ve probably heard that diabetes can lead to blindness. This complication is called diabetic retinopathy (DR), when blood vessels are progressively damaged in the retina. It’s the leading cause of preventable blindness in the world.

Screening and early diagnosis are crucial for treating this problem, but more than 50 percent of all cases go unnoticed. Now 16-year-old Kavya Kopparapu, whose grandfather in India was diagnosed with DR, has invented a simple, cheap new screening tool for it.

“The lack of diagnosis is the biggest challenge,” she told IEEE Spectrum. “In India, there are programs that send doctors into villages and slums, but there are a lot of patients and only so many ophthalmologists.”

Her solution? To develop a smartphone app that can screen for the disease with the help of a specially trained artificial intelligence program and a simple 3D-printed lens attachment.

Read more…

__

This article and images were originally posted on [ScienceAlert] August 9, 2017 at 03:17AM

Credit to Author and ScienceAlert

What Is Science?

Science is a systematic and logical approach to discovering how things in the universe work. It is also the body of knowledge accumulated through the discoveries about all the things in the universe. 

Your daily selection of the latest science news!

Source: Live Science 

Science is a systematic and logical approach to discovering how things in the universe work. It is also the body of knowledge accumulated through the discoveries about all the things in the universe.

The word “science” is derived from the Latin word scientia, which is knowledge based on demonstrable and reproducible data, according to the Merriam-Webster Dictionary. True to this definition, science aims for measurable results through testing and analysis. Science is based on fact, not opinion or preferences. The process of science is designed to challenge ideas through research. One important aspect of the scientific process is that it focuses only on the natural world, according to the University of California. Anything that is considered supernatural does not fit into the definition of science.

Read more…

__

This article and images were originally posted on [Live Science] August 7, 2017 at 11:08AM

Credit to Author and Live Science

Meet the adorable robot camera Japan’s space agency sent to the ISS

Science can be cute as hell when it wants to be – take the JEM Internal Ball Camera (“Int-Ball” for short). The device, created by the Japan Aerospace Exploration Agency (JAXA), was delivered to the International Space Station on June 4, 2017, and now JAXA is releasing its first video and images.

The purpose of Int-Ball is to give scientists on the ground the ability to remotely capture images and video, via a robot that can move autonomously around in space and capture both moving and still imagery. The 3D-printed drone offers real-time monitoring for “flight controllers and researchers on the ground,” according to JAXA, and the media it gathers can also be fed back to the ISS crew.

Int-Ball’s unique design is obviously made possible by the zero-G environment in which it operates. It aims to be able to move around “anywhere at any time via autonomous flight and record images from any angle,” says Japan’s space agency, and it should help the onboard ISS crew by reducing to zero the time they spend taking photos and video themselves. That currently accounts for around 10 percent of ISS crew time, JAXA says.

Int-Ball contains actuators, rotational and acceleration sensors and electromagnetic brakes to help it orient itself in space, and JAXA is exploring the tech for other applications including in satellites. The mission on the ISS, following its initial verification, which is underway now, includes improving its performance and seeking ways to help it better operate – both in experiments inside and outside spaceborne vehicles.

No word on its friendship capabilities but I have to imagine they’re very high.

__

This article and images was originally posted on [TechCrunch] July 17, 2017 at 09:33AM

by  

 

 

 

U.S. to Fund Advanced Brain-Computer Interfaces

Matt Angle’s claim might have sounded eccentric before: for years, he insisted that the key to solving one of neuroscience’s most intractable challenges lay in a 1960s-era technology invented in the tiny nation of Moldova.

It’s a lot harder to dismiss Angle’s approach now. Today, the U.S. Department of Defense selected Angle’s small San Jose-based company, Paradromics Inc., to lead one of six consortia it is backing with $65 million to develop technologies able to record from one million individual neurons inside a human brain simultaneously.

Recording from large numbers of neurons is essential if engineers are ever to create a seamless, high-throughput data link between the human brain and computers, including to restore lost senses.

That goal has been in the news a lot lately. In April, electric car and rocket entrepreneur Elon Musk announced he was backing a $100 million brain-computer interface company called Neuralink. Facebook followed up by saying that it had started work on a thought-to-text device to let people silently compose e-mails or posts.

The announcements generated worldwide headlines but also skepticism, since neither Musk nor Facebook disclosed how they’d pull off such feats (see “With Neuralink, Elon Musk Promises Human-to-Human Telepathy. Don’t Believe It.”).

Now the federal contracts, handed out by DARPA, offer a peek into what cutting-edge technologies could make a “brain modem” really possible. They include flexible circuits that can be layered onto the brain, sand-sized wireless “neurograins,” and holographic microscopes able to observe thousands of neurons at once. Two other projects aim at restoring vision with light-emitting diodes covering the brain’s visual cortex.

Brain-computer interfaces convey information out of the brain using electronics. Here, a close-up shows how miniature wires are bonded together to create an electrical contact. This is the end that stays outside the brain.

Paradromics’s haul is as much as $18 million, but the money comes with a “moonshot”-like list of requirements—the implant should be not much bigger than a nickel, must record from 1 million neurons, and must also be able to send signals back into the brain.

“We are trying to find the sweet spot—and I think we have found it—between being at that cutting edge and getting as much information out at one time, but at the same time not being so far out that you can’t implement it,” says Angle.

Since learning to listen in on the electrical chatter of a single neuron a century ago using metal electrodes, scientists have never managed to simultaneously record from more than a few hundred at once in a living human brain, which has about 80 billion neurons in all.

Angle, 32, says he ran into that problem as a graduate student. He wanted to study the way that odors are represented in the olfactory bulb, a part of your brain right behind your nose. But the effort was stymied by the lack of any way to record from more than a handful of neurons at a time.

That’s when a professor at Howard University, and the father of one of Angle’s old college friends, mentioned an obscure Moldovan company that had developed a way to stretch hot metal and mass produce coils of extremely thin insulated wires, just 20 microns thick.

The technique, similar to how fiber optic strands are made, is used to create antennas and to make magnetic wires that can be sewn into towels by hotels to prevent customers from stealing them. But Angle and his collaborators—Andreas Schaefer of the Francis Crick Institute and Nick Melosh of Stanford University—realized the materials could let them make electrical contact with large numbers of brain cells at once.

Today, Angle says, his team orders spools of the wire and then bundles strands together in cords that are 10,000 wires thick. One end of the wires can be sharpened, creating a brush-like surface that can penetrate the brain like needles would. Angle says the thickness of the wires is calibrated so that it is strong enough not to buckle as it is pushed into the brain, but thin enough not to cause much damage.

The other ends of the wires are glued together, polished, and then pressed onto a microprocessor with tens of thousands of randomly spaced “landing pads,” some of which bond to the wires. These pads detect the electrical signals conveyed through the wires from the brain so they can be tallied and analyzed.  Angle says “connectorizing” so many wires is what’s held such concepts back in the past.

In Paradromics’s case, the eventual objective is a high-density connection to the speech center of the brain that could let the company tap into what words a person is thinking of saying. But if the technology works out, it could also vastly expand the ability of neuroscientists to listen in as large ensembles of neurons generate complex behaviors, knit together sensory stimuli, and even create consciousness itself.

__

This article and images was originally posted on [New on MIT Technology Review] July 10, 2017 at 10:06AM

by Adam Piore

 

 

 

A future without fakes thanks to quantum technology


Gold microchip. Credit: Lancaster University

Counterfeit products are a huge problem – from medicines to car parts, fake technology costs lives.

Every year, imports of counterfeited and pirated goods around the world cost nearly US $0.5 trillion in lost revenue.

Counterfeit medicines alone cost the industry over US $200 billion every year. They are also dangerous to our health – around a third contain no active ingredients, resulting in a million deaths a year.

And as the Internet of Things expands, there is the need to trust the identity of smart systems, such as the brake system components within connected and autonomous vehicles.

But researchers exhibiting at the Royal Society Summer Science Exhibition believe we are on the verge of a future without fakes thanks to new technology.

Whether aerospace parts or luxury goods, the researchers say the new technology will make counterfeiting impossible.

Scientists have created unique atomic-scale IDs based on the irregularities found in 2-D materials like graphene.

On an atomic scale, quantum physics amplifies these irregularities, making it possible to ‘fingerprint’ them in simple electronic devices and optical tags.

The team from Lancaster University and spin-out company Quantum Base will be announcing their new patent in optical technology to read these imperfections at the “Future without Fakes” exhibit of the Royal Society’s Summer Science Exhibition.

For the first time, the team will be showcasing this via a smartphone app which can read whether a product is real or fake, and enable people to check the authenticity of a product through their smartphones.

The customer will be able to scan the optical tag on a product with a smartphone, which will match the 2-D tag with the manufacturer’s database.
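The patented optical readout is not described in detail here, so the verification step is sketched below only under stated assumptions: the phone-measured fingerprint is compared with the manufacturer's enrolled fingerprint and accepted if they differ by less than a noise tolerance. The bit length, error rate and threshold are illustrative.

```python
import numpy as np

# Minimal sketch of the verification step: compare a noisy phone readout of a
# tag's fingerprint with the fingerprint enrolled by the manufacturer. The bit
# length, error rate and acceptance threshold are illustrative only.
rng = np.random.default_rng(3)

enrolled = rng.integers(0, 2, size=256)     # fingerprint stored in the database
noise = rng.random(256) < 0.05              # ~5% of bits flip on re-measurement
measured = enrolled ^ noise                 # what the smartphone reads back

def is_genuine(measured, enrolled, max_fraction_differing=0.15):
    """Accept the tag if the fractional Hamming distance is below a tolerance."""
    return float(np.mean(measured != enrolled)) <= max_fraction_differing

print(is_genuine(measured, enrolled))                        # True: genuine tag
print(is_genuine(rng.integers(0, 2, size=256), enrolled))    # ~50% of bits differ: rejected
```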

This has the potential to eradicate product counterfeiting and forgery of digital identities, two of the costliest crimes in the world today.

This patented technology and the related application can be expected to be available to the public in the first half of 2018, and it has the potential to fit on any surface or any product, so all global markets may be addressed.

Professor Robert Young of Lancaster University, world leading expert in quantum information and Chief scientist at Quantum Base says: “It is wonderful to be on the front line, using scientific discovery in such a positive way to wage war on a global epidemic such as counterfeiting, which ultimately costs both lives and livelihoods alike.”



More information:
Optical identification using imperfections in 2D materials. arXiv. http://ift.tt/2sKvNKS

Journal reference:
arXiv

__

This article and images was originally posted on [Phys.org – latest science and technology news stories] July 5, 2017 at 07:24AM

Provided by: Lancaster University

 

 

 

Tiny, Lens-Free Camera Could Hide in Clothes, Glasses

The lens-free camera is so thin it could be embedded anywhere, according to researchers.

Credit: Caltech/Hajimiri Lab

A tiny, paper-thin camera that has no lens could turn conventional photography on its head, according to new research.

The device, a square that measures just 0.04 inches by 0.05 inches (1 by 1.2 millimeters), has the potential to switch its “aperture” among wide angle, fish eye and zoom instantaneously. And because the device is so thin, just a few microns thick, it could be embedded anywhere. (For comparison, the average width of a human hair is about 100 microns.)

“The entire backside of your phone could be a camera,” said Ali Hajimiri, a professor of electrical engineering and medical engineering at the California Institute of Technology (Caltech) and the principal investigator of the research paper, describing the new camera. [Photo Future: 7 High-Tech Ways to Share Images]

It could be embedded in a watch or in a pair of eyeglasses or in fabric, Hajimiri told Live Science. It could even be designed to launch into space as a small package and then unfurl into very large, thin sheets that image the universe at resolutions never before possible, he added.

“There’s no fundamental limit on how much you could increase the resolution,” Hajimiri said. “You could do gigapixels if you wanted.” (A gigapixel image has 1 billion pixels, or 1,000 times more than an image from a 1-megapixel digital camera.)

Hajimiri and his colleagues presented their innovation, called an optical phased array, at the Optical Society’s (OSA) Conference on Lasers and Electro-Optics, which was held in March. The research was also published online in the OSA Technical Digest.

The proof-of-concept device is a flat sheet with an array of 64 light receivers that can be thought of as tiny antennas tuned to receive light waves, Hajimiri said. Each receiver in the array is individually controlled by a computer program.

In fractions of a second, the light receivers can be manipulated to create an image of an object on the far right side of the view, or on the far left, or anywhere in between. And this can be done without pointing the device at the object, which would be necessary with a conventional camera.
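
What “manipulating” the receivers means in practice is applying a different, precisely controlled phase shift to each element so that light arriving from one chosen direction adds up while light from other directions cancels. The sketch below illustrates the idea with textbook one-dimensional phased-array math; the wavelength, element spacing, and steering scheme are assumptions for illustration, not details taken from the Caltech paper.

```python
# Illustrative sketch of how a phased array "points" electronically.
# One row of an 8 x 8 = 64 element array; spacing and wavelength are assumed.
import numpy as np

wavelength = 1.55e-6        # assumed near-infrared operating wavelength (m)
spacing = wavelength / 2    # assumed half-wavelength element pitch (m)
n_elements = 8

def steering_phases(theta_deg):
    """Per-element phase shifts that align signals arriving from angle theta."""
    theta = np.radians(theta_deg)
    k = 2 * np.pi / wavelength
    return (k * spacing * np.arange(n_elements) * np.sin(theta)) % (2 * np.pi)

def array_response(theta_deg, look_deg):
    """Relative gain toward theta when the array is steered to look_deg."""
    theta = np.radians(theta_deg)
    k = 2 * np.pi / wavelength
    arrival = k * spacing * np.arange(n_elements) * np.sin(theta)
    weights = np.exp(-1j * steering_phases(look_deg))
    return abs(np.sum(weights * np.exp(1j * arrival))) / n_elements

print(round(array_response(20, 20), 2))   # ~1.0: full gain in the steered direction
print(round(array_response(20, -30), 2))  # much smaller: off-beam light is rejected
```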

“The beauty of this thing is that we create images without any mechanical movement,” he said.

Hajimiri called this feature a “synthetic aperture.” To test how well it worked, the researchers laid the thin array over a silicon computer chip. In experiments, the synthetic aperture collected light waves, and then other components on the chip converted the light waves to electrical signals that were sent to a sensor.

The resulting image looks like a checkerboard with illuminated squares, but this basic low-resolution image is just a first step, Hajimiri said. The device’s ability to manipulate incoming light waves is so precise and fast that, theoretically, it could capture hundreds of different kinds of images in any kind of light, including infrared, in a matter of seconds, he said.

“You can make an extremely powerful and large camera,” Hajimiri said.

Achieving a high-power view with a conventional camera requires that the lens be very big, so that it can collect enough light. This is why professional photographers on the sidelines of sporting events wield huge camera lenses.

But bigger lenses require more glass, and that can introduce light and color flaws in the image. The researchers’ optical phased array doesn’t have that problem, or any added bulk, Hajimiri said.

For the next stage of their research, Hajimiri and his colleagues are working to make the device larger, with more light receivers in the array.

“Essentially, there’s no limit on how much you could increase the resolution,” he said. “It’s just a question of how large you can make the phased array.”
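
That scaling claim lines up with the ordinary diffraction limit, which ties the finest resolvable angle to the size of the collecting aperture, roughly theta ≈ λ/D. The numbers below are a back-of-the-envelope illustration under an assumed operating wavelength, not figures from the paper.

```python
# Back-of-the-envelope check of "resolution scales with array size",
# using the textbook diffraction limit theta ~ lambda / D.
wavelength = 1.55e-6  # assumed operating wavelength, metres

for aperture_mm in (1.2, 12, 120, 1200):
    d = aperture_mm / 1000.0
    theta_rad = wavelength / d
    print(f"{aperture_mm:7.1f} mm aperture -> ~{theta_rad*1e6:8.1f} microradian resolution")
```

Making the array a thousand times wider sharpens the resolvable angle by the same factor, which is why Hajimiri frames resolution as a question of how large the phased array can be made.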

Original article on Live Science.

__

This article and images were originally posted on [Live Science] June 28, 2017 at 12:20PM

By Tracy Staedter, Live Science Contributor

 

 

 

The world’s most powerful computer

China only started producing its first computer chips in 2001. But its chip industry has developed at an awesome pace.

So much so that Chinese-made chips power the world’s most powerful supercomputer, which is Chinese too.

The computer, known as the Sunway TaihuLight, contains some 41,000 chips and can carry out 93 quadrillion calculations per second. That’s twice as fast as the next-most-powerful supercomputer on the planet (which also happens to be Chinese).

The mind-boggling amount of calculations computers like this can carry out in the blink of an eye can help crunch incredibly complicated data – such as variations in weather patterns over months and years and decades.
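
To put the headline figure in perspective, 93 quadrillion calculations per second is 93 petaflops. The comparison below is a rough illustration only; the workload size and the laptop figure are assumptions, not numbers from the BBC piece.

```python
# Quick arithmetic on the quoted 93 quadrillion calculations per second.
taihulight_flops = 93e15   # 93 petaflops, as quoted
laptop_flops = 100e9       # assumed ~100 gigaflops for a typical laptop

workload = 1e21            # a hypothetical 10^21-operation simulation
print(f"TaihuLight: {workload / taihulight_flops / 3600:.1f} hours")
print(f"Laptop:     {workload / laptop_flops / 3600 / 24 / 365:.0f} years")
```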

The BBC’s Cameron Andersen visited the TaihuLight at its home in the National Supercomputing Center in the eastern Chinese city of Wuxi.

Watch the video above to see his exclusive look inside the world’s most powerful computer – and the challenges that this processing powerhouse might try and solve in the near future.


__

This article and images were originally posted on [BBC Future] June 24, 2017 at 09:42AM

 

 

 

 

 

These 7 Disruptive Technologies Could Be Worth Trillions of Dollars

Scientists, technologists, engineers, and visionaries are building the future. Amazing things are in the pipeline. It’s a big deal. But you already knew all that. Such speculation is common. What’s less common? Scale.

How big is big?

“Silicon Valley, Silicon Alley, Silicon Dock, all of the Silicons around the world, they are dreaming the dream. They are innovating,” Catherine Wood said at Singularity University’s Exponential Finance in New York. “We are sizing the opportunity. That’s what we do.”

Catherine Wood at Exponential Finance.

Wood is founder and CEO of ARK Investment Management, a research and investment company focused on the growth potential of today’s disruptive technologies. Prior to ARK, she served as CIO of Global Thematic Strategies at AllianceBernstein for 12 years.

“We believe innovation is key to growth,” Wood said. “We are not focused on the past. We are focused on the future. We think there are tremendous opportunities in the public marketplace because this shift towards passive [investing] has created a lot of risk aversion and tremendous inefficiencies.”

In a new research report, released this week, ARK took a look at seven disruptive technologies, and put a number on just how tremendous they are. Here’s what they found.

(Check out ARK’s website and free report, “Big Ideas of 2017,” for more numbers, charts, and detail.)

1. Deep Learning Could Be Worth 35 Amazons

Deep learning is a subcategory of machine learning which is itself a subcategory of artificial intelligence. Deep learning is the source of much of the hype surrounding AI today. (You know you may be in a hype bubble when ads tout AI on Sunday golf commercial breaks.)

Behind the hype, however, big tech companies are pursuing deep learning to do very practical things. And whereas the internet, which unleashed trillions in market value, transformed several industries—news, entertainment, advertising, etc.—deep learning will work its way into even more, Wood said.

As deep learning advances, it should automate and improve technology, transportation, manufacturing, healthcare, finance, and more. And as is often the case with emerging technologies, it may form entirely new businesses we have yet to imagine.

“Bill Gates has said a breakthrough in machine learning would be worth 10 Microsofts. Microsoft is $550 to $600 billion,” Wood said. “We think deep learning is going to be twice that. We think [it] could approach $17 trillion in market cap—which would be 35 Amazons.”
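
For readers who want to check the arithmetic, the sketch below simply recombines the figures quoted above (the $550–600 billion Microsoft range and the $17 trillion, 35-Amazons projection); it adds no outside data.

```python
# Cross-checking the figures in the quote above (all values in US dollars).
deep_learning_cap = 17e12                 # ARK's projected market cap
microsoft_cap = (550e9 + 600e9) / 2       # midpoint of the quoted $550-600B range

ten_microsofts = 10 * microsoft_cap
print(f"10 Microsofts:            ${ten_microsofts/1e12:.2f} trillion")
print(f"ARK deep learning figure: ${deep_learning_cap/1e12:.0f} trillion "
      f"(~{deep_learning_cap/ten_microsofts:.1f}x the Gates benchmark)")
print(f"Implied 'one Amazon':     ${deep_learning_cap/35/1e9:.0f} billion")
```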

2. Fleets of Autonomous Taxis to Overtake Automakers

Wood didn’t mince words about a future when cars drive themselves.

“This is the biggest change that the automotive industry has ever faced,” she said.

Today’s automakers have a global market capitalization of a trillion dollars. Meanwhile, mobility-as-a-service companies as a whole (think ridesharing) are valued at around $115 billion. If this number took into account expectations of a driverless future, it’d be higher.

The mobility-as-a-service market, which will slash the cost of “point-to-point” travel, could be worth more than today’s automakers combined, Wood said. Twice as much, in fact. As gross sales grow to something like $10 trillion in the early 2030s, her firm thinks some 20% of that will go to platform providers. It could be a $2 trillion opportunity.

Wood said a handful of companies will dominate the market, and Tesla is well positioned to be one of those companies. Tesla is developing both the hardware (electric cars) and the software (self-driving algorithms). And although analysts tend to view the company as just an automaker right now, that’s not all it will be down the road.

“We think if [Tesla] got even 5% of this global market for autonomous taxi networks, it should be worth another $100 billion today,” Wood said.
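
The dollar figures in this section follow from two multiplications, reproduced below. Pairing the 5% Tesla scenario with the $2 trillion platform opportunity is an interpretation, but it is consistent with the quoted numbers.

```python
# Reproducing the mobility-as-a-service arithmetic quoted above.
gross_sales_2030s = 10e12       # ~$10 trillion projected gross sales
platform_share = 0.20           # ~20% captured by platform providers
platform_opportunity = platform_share * gross_sales_2030s
print(f"Platform opportunity: ${platform_opportunity/1e12:.0f} trillion")

tesla_share = 0.05              # the 5% scenario Wood describes
print(f"Tesla at 5%:          ${tesla_share * platform_opportunity/1e9:.0f} billion")
```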

3. 3D Printing Goes Big With Finished Products at Scale

3D printing has become part of mainstream consciousness thanks, mostly, to the prospect of desktop printers for consumer prices. But these are imperfect, and the dream of an at-home replicator still eludes us. The manufacturing industry, however, is much closer to using 3D printers at scale.

Not long ago, we wrote about Carbon’s partnership with Adidas to mass-produce shoe midsoles. This is significant because, whereas industrial 3D printing has focused on prototyping to date, improving cost, quality, and speed are making it viable for finished products.

According to ARK, 3D printing may grow into a $41 billion market by 2020, and Wood noted a McKinsey forecast of as much as $490 billion by 2025. “McKinsey will be right if 3D printing actually becomes a part of the industrial production process, so end-use parts,” Wood said.

4. CRISPR Starts With Genetic Therapy, But It Doesn’t End There

According to ARK, the cost of genome editing has fallen 28x to 52x (depending on reagents) in the last four years. CRISPR is the technique leading the genome editing revolution, dramatically cutting time and cost while maintaining editing efficiency. Despite its potential, Wood said she isn’t hearing enough about it from investors yet.

“There are roughly 10,000 monogenic or single-gene diseases. Only 5% are treatable today,” she said. ARK believes treating these diseases is worth an annual $70 billion globally. Other areas of interest include stem cell therapy research, personalized medicine, drug development, agriculture, biofuels, and more.

Still, the big names in this area—Intellia, Editas, and CRISPR—aren’t on the radar.

“You can see if a company in this space has a strong IP position, as Genentech did in 1980, then the growth rates can be enormous,” Wood said. “Again, you don’t hear these names, and that’s quite interesting to me. We think there are very low expectations in that space.”

5. Mobile Transactions Could Grow 15x by 2020

By 2020, 75% of the world will own a smartphone, according to ARK. Amid smartphones’ many uses, mobile payments will be one of the most impactful. Coupled with better security (biometrics) and wider acceptance (NFC and point-of-sale), ARK thinks mobile transactions could grow 15x, from $1 trillion today to upwards of $15 trillion by 2020.

In addition to making sharing-economy transactions more frictionless, mobile payments are key to financial inclusion in emerging and developed markets, ARK says. And big emerging markets, such as India and China, are at the forefront, thanks to favorable regulations.

“Asia is leading the charge here,” Wood said. “You look at companies like Tencent and Alipay. They are really moving very quickly towards mobile and actually showing us the way.”

6. Robotics and Automation to Liberate $12 Trillion by 2035

Robots aren’t just for auto manufacturers anymore. Driven by continued cost declines and easier programming, more businesses are adopting robots. Amazon’s robot workforce in warehouses has grown from 1,000 to nearly 50,000 since 2014. “And they have never laid off anyone, other than for performance reasons, in their distribution centers,” Wood said.

But she understands fears over lost jobs.

This is only the beginning of a big round of automation driven by cheaper, smarter, safer, and more flexible robots. She agrees there will be a lot of displacement. Still, some commentators overlook associated productivity gains. By 2035, Wood said US GDP could be $12 trillion more than it would have been without robotics and automation—that’s a $40 trillion economy instead of a $28 trillion economy.

“This is the history of technology. Productivity. New products and services. It is our job as investors to figure out where that $12 trillion is,” Wood said. “We can’t even imagine it right now. We couldn’t imagine what the internet was going to do with us in the early ’90s.”

7. Blockchain and Cryptoassets: Speculatively Spectacular

Blockchain-enabled cryptoassets, such as Bitcoin, Ethereum, and Steem, have caused more than a stir in recent years. In addition to Bitcoin, there are now some 700 cryptoassets of various shapes and hues. Bitcoin still rules the roost with a market value of nearly $40 billion, up from just $3 billion two years ago, according to ARK. But it’s only half the total.

“This market is nascent. There are a lot of growing pains taking place right now in the crypto world, but the promise is there,” Wood said. “It’s a very hot space.”

Like all young markets, ARK says, cryptoasset markets are “characterized by enthusiasm, uncertainty, and speculation.” The firm’s blockchain products lead, Chris Burniske, uses Twitter—which is where he says the community congregates—to take the temperature. In a recent Twitter poll, 62% of respondents said they believed the market’s total value would exceed a trillion dollars in 10 years. In a followup, more focused on the trillion-plus crowd, 35% favored $1–$5 trillion, 17% guessed $5–$10 trillion, and 34% chose $10+ trillion.

Looking past the speculation, Wood believes there’s at least one big area blockchain and cryptoassets are poised to break into: the $500-billion, fee-based business of sending money across borders known as remittances.

“If you look at the Philippines-to-South Korean corridor, what you’re seeing already is that Bitcoin is 20% of the remittances market,” Wood said. “The migrant workers who are transmitting currency, they don’t know that Bitcoin is what’s enabling such a low-fee transaction. It’s the rails, effectively. They just see the fiat transfer. We think that that’s going to be a very exciting market.”

Stock media provided by NomadSoul1/Pond5.com

__

This article and images were originally posted on [Singularity Hub] June 16, 2017 at 12:03PM

 

 

 

Wireless charging of moving electric vehicles overcomes major hurdle


Stanford scientists have created a device that wirelessly transmits electricity to a movable disc. The technology could some day be used to charge moving electric vehicles and personal devices. Credit: Sid Assawaworrarit/Stanford University

If electric cars could recharge while driving down a highway, it would virtually eliminate concerns about their range and lower their cost, perhaps making electricity the standard fuel for vehicles.

Now Stanford University scientists have overcome a major hurdle to such a future by wirelessly transmitting electricity to a nearby moving object. Their results are published in the June 15 edition of Nature.

“In addition to advancing the wireless charging of vehicles and personal devices like cellphones, our new technology may untether robotics in manufacturing, which also are on the move,” said Shanhui Fan, a professor of electrical engineering and senior author of the study. “We still need to significantly increase the amount of electricity being transferred to charge electric cars, but we may not need to push the distance too much more.”

The group built on existing technology developed in 2007 at MIT for transmitting electricity wirelessly over a distance of a few feet to a stationary object. In the new work, the team transmitted electricity wirelessly to a moving LED lightbulb. That demonstration only involved a 1-milliwatt charge, whereas electric cars often require tens of kilowatts to operate. The team is now working on greatly increasing the amount of electricity that can be transferred, and tweaking the system to extend the transfer distance and improve efficiency.

Driving range

Wireless charging would address a major drawback of plug-in electric vehicles – their limited driving range. Tesla Motors expects its upcoming Model 3 to go more than 200 miles on a single charge and the Chevy Bolt, which is already on the market, has an advertised range of 238 miles. But electric vehicle batteries generally take several hours to fully recharge. A charge-as-you-drive system would overcome these limitations.

“In theory, one could drive for an unlimited amount of time without having to stop to recharge,” Fan explained. “The hope is that you’ll be able to charge your electric car while you’re driving down the highway. A coil in the bottom of the vehicle could receive electricity from a series of coils connected to an electric current embedded in the road.”

Some transportation experts envision an automated highway system where driverless electric vehicles are wirelessly charged by solar power or other renewable energy sources. The goal would be to reduce accidents and dramatically improve the flow of traffic while lowering greenhouse gas emissions.


Wireless technology could also assist the GPS navigation of driverless cars. GPS is accurate to within about 35 feet. For safety, autonomous cars need to stay in the center of the lane, where the transmitter coils would be embedded; following those coils could provide far more precise positioning than GPS alone.

 

Magnetic resonance

Mid-range wireless power transfer, as developed at Stanford and other research universities, is based on magnetic resonance coupling. Just as major power plants generate alternating currents by rotating coils of wire between magnets, electricity moving through wires creates an oscillating magnetic field. This field also causes electrons in a nearby coil of wires to oscillate, thereby transferring power wirelessly. The transfer efficiency is further enhanced if both coils are tuned to the same magnetic resonance frequency and are positioned at the correct angle.
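
To get a feel for why resonance matters, a standard result from coupled-resonator theory says the best achievable link efficiency depends only on the coupling coefficient k and the coils’ quality factors Q. The snippet below uses that textbook formula with illustrative values; it is generic theory, not the parity–time-symmetric analysis in the Nature paper.

```python
# Textbook maximum efficiency of a two-coil resonant inductive link:
# eta_max = U^2 / (1 + sqrt(1 + U^2))^2, with U = k * sqrt(Q1 * Q2).
import math

def max_efficiency(k, q1, q2):
    u = k * math.sqrt(q1 * q2)
    return u**2 / (1 + math.sqrt(1 + u**2))**2

for k in (0.001, 0.01, 0.1):          # coupling falls off with distance
    print(f"k = {k:<5} -> eta_max = {max_efficiency(k, 100, 100):.2%}")
```

With high-Q resonant coils, even very weak coupling (a distant or misaligned receiver) can still transfer a useful fraction of the power, which is the point of tuning both coils to the same resonance.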

However, the continuous flow of electricity can only be maintained if some aspects of the circuits, such as the frequency, are manually tuned as the object moves. So, either the energy transmitting coil and receiver coil must remain nearly stationary, or the device must be tuned automatically and continuously – a significantly complex process.

To address the challenge, the Stanford team eliminated the radio-frequency source in the transmitter and replaced it with a commercially available voltage amplifier and feedback resistor. This system automatically figures out the right frequency for different distances without the need for human intervention.

“Adding the amplifier allows power to be very efficiently transferred across most of the three-foot range and despite the changing orientation of the receiving coil,” said graduate student Sid Assawaworrarit, the study’s lead author. “This eliminates the need for automatic and continuous tuning of any aspect of the circuits.”
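
One way to see why tuning is needed at all: once two identical resonant coils couple, the system no longer resonates at its original frequency but at two split modes near f0/√(1 ± k), and the split grows as the coils move closer together. A fixed-frequency source therefore drifts off resonance, whereas a self-oscillating source effectively rides one of the split modes. The numbers below come from standard coupled-LC theory and illustrative values, not from the Stanford circuit itself.

```python
# Mode splitting of two identical coupled LC resonators: f_modes ~ f0 / sqrt(1 +/- k).
# Illustrates why a fixed drive frequency falls off resonance as coupling changes.
import math

f0 = 1.0e6  # assumed 1 MHz uncoupled resonant frequency

print(" k      lower mode    upper mode    shift from f0")
for k in (0.02, 0.05, 0.10, 0.20):
    f_low = f0 / math.sqrt(1 + k)
    f_high = f0 / math.sqrt(1 - k)
    shift = max(abs(f_low - f0), abs(f_high - f0))
    print(f"{k:4.2f}   {f_low/1e6:.4f} MHz   {f_high/1e6:.4f} MHz   {shift/1e3:6.1f} kHz")
```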

Assawaworrarit tested the approach by placing an LED bulb on the receiving coil. In a conventional setup without active tuning, LED brightness would diminish with distance. In the new setup, the brightness remained constant as the receiver moved away from the source by a distance of about three feet. Fan’s team recently filed a patent application for the latest advance.

The group used an off-the-shelf, general-purpose amplifier with a relatively low efficiency of about 10 percent. They say custom-made amplifiers can improve that efficiency to more than 90 percent.

“We can rethink how to deliver electricity not only to our cars, but to smaller devices on or in our bodies,” Fan said. “For anything that could benefit from dynamic, wireless charging, this is potentially very important.”


Explore further:
Wireless power could revolutionize highway transportation, researchers say

More information:
Sid Assawaworrarit et al. Robust wireless power transfer using a nonlinear parity–time-symmetric circuit, Nature (2017). DOI: 10.1038/nature22404

Journal reference:
Nature

__

This article and images were originally posted on [Phys.org] June 14, 2017 at 01:09PM

Provided by: Stanford University

 

 

 

 

3-in-1 device offers alternative to Moore’s law


Illustration of the reconfigurable device with three buried gates, which can be used to create n- or p-type regions in a single semiconductor flake. Credit: Dhakras et al. ©2017 IOP Publishing Ltd

 

In the semiconductor industry, there is currently one main strategy for improving the speed and efficiency of devices: scale down the device dimensions in order to fit more transistors onto a computer chip, in accordance with Moore’s law. However, the number of transistors on a computer chip cannot exponentially increase forever, and this is motivating researchers to look for other ways to improve semiconductor technologies.

In a new study published in Nanotechnology, a team of researchers at SUNY-Polytechnic Institute in Albany, New York, has suggested that combining multiple functions in a single device can improve device functionality and reduce fabrication complexity, thereby providing an alternative to scaling down the device’s dimensions as the only method to improve functionality.

To demonstrate, the researchers designed and fabricated a reconfigurable device that can morph into three fundamental semiconductor devices: a p-n diode (which functions as a rectifier, for converting alternating current to direct current), a MOSFET (for switching), and a bipolar junction transistor (or BJT, for current amplification).

“We are able to demonstrate the three most important semiconductor devices (p-n diode, MOSFET, and BJT) using a single reconfigurable device,” coauthor Ji Ung Lee at the SUNY-Polytechnic Institute told Phys.org. “While these devices can be fabricated individually in modern semiconductor fabrication facilities, often requiring complex integration schemes if they are to be combined, we can form a single device that can perform the functions of all three devices.”

The multifunctional device is made of two-dimensional tungsten diselenide (WSe2), a recently discovered transition metal dichalcogenide semiconductor. This class of materials is promising for electronics applications because the bandgap is tunable by controlling the thickness, and it is direct in single-layer form. The bandgap is one of the advantages of 2D transition metal dichalcogenides over graphene, which has zero bandgap.

In order to integrate multiple functions into a single device, the researchers developed a new doping technique. Since WSe2 is such a new material, until now there has been a lack of doping techniques. Through doping, the researchers could realize properties such as ambipolar conduction, which is the ability to conduct both electrons and holes under different conditions. The doping technique also means that all three of the functionalities are surface-conducting devices, which offers a single, straightforward way of evaluating their performance.

“Instead of using traditional semiconductor fabrication techniques that can only form fixed devices, we use gates to dope,” Lee said. “These gates can dynamically change which carriers (electrons or holes) flow through the semiconductor. This ability to change allows the reconfigurable device to perform multiple functions.

“In addition to implementing these devices, the reconfigurable device can potentially implement certain logic functions more compactly and efficiently. This is because adding gates, as we have done, can save overall area and enable more efficient computing.”
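
Conceptually, the reconfiguration amounts to choosing a polarity pattern for the three buried gates, which electrostatically defines n- and p-type regions in the flake above them. The sketch below is only a conceptual illustration of that mapping, under assumed gate patterns; the actual bias values and contact schemes are in the Nanotechnology paper and are not reproduced here.

```python
# Conceptual sketch of "gating instead of doping": each buried gate induces an
# n- or p-type region in the WSe2 flake above it, and the pattern picks the device.
# Patterns are illustrative, not taken from the paper.
ELECTRON, HOLE = "n", "p"

GATE_PATTERNS = {
    # (left, middle, right) region type induced by the three buried gates
    "p-n diode":   (HOLE, HOLE, ELECTRON),        # one junction: rectification
    "n-MOSFET":    (ELECTRON, HOLE, ELECTRON),    # middle gate also swings the channel
    "BJT (n-p-n)": (ELECTRON, HOLE, ELECTRON),    # same layout, base driven by current
}

def configure(device):
    pattern = GATE_PATTERNS[device]
    return f"{device}: induced regions {'-'.join(pattern)}"

for name in GATE_PATTERNS:
    print(configure(name))
```

Note that the MOSFET and the n-p-n BJT share the same induced layout; what distinguishes them is how the middle region is driven, which is exactly the kind of dynamic reconfiguration Lee describes.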

In the future, the researchers plan to further investigate the applications of these multifunctional devices.

“We hope to build complex computer circuits with fewer device elements than those using the current process,” Lee said. “This will demonstrate the scalability of our device for the post-CMOS era.”


Explore further:
Team engineers oxide semiconductor just single atom thick

More information:
Prathamesh Dhakras, Pratik Agnihotri, and Ji Ung Lee. “Three fundamental devices in one: a reconfigurable multifunctional device in two-dimensional WSe2.” Nanotechnology. DOI: 10.1088/1361-6528/aa7350

Journal reference:
Nanotechnology

__

This article and images were originally posted on [Phys.org] June 14, 2017 at 07:00AM

by Lisa Zyga