Japan’s New Supercomputer Is the Fastest Ever for Astronomy Research

Your daily selection of the hottest trending tech news!

According to ExtremeTech

Scientists have incredibly powerful telescopes at their disposal today like Hubble and the Very Large Telescope in Chile. Even more powerful instruments like the James Webb Space Telescope and Extremely Large Telescope are coming down the pike. However, observations will only get you so far. To understand how the universe works, scientists often need to turn to computer simulations. Those simulations are getting more powerful thanks to a new Japanese supercomputer called ATERUI II. It’s the fastest system in the world dedicated entirely to astronomy research.

ATERUI II lives at the National Astronomical Observatory of Japan (NAOJ) and currently ranks as number 83 on the top 500 list of most powerful supercomputers. Most of these devices are shared among multiple fields or exist only for government use, but ATERUI II is all about making astronomical research faster and better.

The new ATERUI II is a Cray XC50 system roughly three times more powerful than its predecessor. Researchers brought it online in June with more than 40,000 processing cores supporting up to three quadrillion operations per second (more than 3,000 teraflops, or about 3 petaflops). It uses Intel Xeon Gold 6148 processors; each costs around $3,000 and comes with 20 cores (40 threads), a maximum frequency of 3.7GHz, and 27.5MB of cache. ATERUI II also packs a whopping 385 terabytes of RAM.
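Those headline figures hang together on a back-of-the-envelope estimate. The sketch below is a rough check, not an official NAOJ calculation: it assumes roughly 40,000 cores, 32 double-precision floating-point operations per core per cycle from the Xeon Gold 6148's AVX-512 FMA units, and a sustained clock near the chip's 2.4GHz base frequency rather than its 3.7GHz turbo.

```python
# Back-of-the-envelope peak-performance estimate for ATERUI II.
# Assumptions (not from the article): ~40,000 cores, AVX-512 with two FMA
# units giving 32 double-precision FLOPs per core per cycle, and a
# sustained clock near the Xeon Gold 6148's 2.4 GHz base frequency.

cores = 40_000
flops_per_core_per_cycle = 32   # AVX-512: 2 FMA units x 8 doubles x 2 ops
clock_hz = 2.4e9                # sustained base clock, not the 3.7 GHz turbo

peak_flops = cores * flops_per_core_per_cycle * clock_hz
print(f"Estimated peak: {peak_flops:.2e} FLOP/s "
      f"(~{peak_flops / 1e12:,.0f} teraflops)")
# -> roughly 3e15 FLOP/s, about 3,000 teraflops, consistent with the
#    "three quadrillion operations per second" figure above.
```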


This article and images were originally posted on ExtremeTech, September 10, 2018 at 08:15AM. Credit to the author and ExtremeTech.

 


AI Uses Titan Supercomputer to Create Deep Neural Nets in Less Than a Day

Your daily selection of the latest science news!

According to Singularity Hub

You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.

The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.

The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can, in less than a day, generate neural networks as good as, if not better than, any developed by a human.

It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms. The system is modeled after the human brain and known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.

Computing Power

Of course, Google Brain project engineers only had access to 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.

The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.

That requires a different approach from the Google and Facebook AI platforms of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.

“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”

AI for Science

One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.

The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.

In another case involving a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL improved the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.

“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.

What makes MENNDL particularly adept is its ability to define the best or most optimal hyperparameters—the key variables—to tackle a particular dataset.

“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
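MENNDL's code isn't reproduced here, but the evolutionary idea Young describes (generate many candidate network configurations, score them, keep the best, mutate, repeat) can be sketched in a few lines. The hyperparameter ranges and the toy scoring function below are illustrative assumptions standing in for real training runs, not ORNL's actual implementation.

```python
import random

def random_candidate():
    # Illustrative hyperparameters; real candidates would describe full networks.
    return {
        "layers": random.randint(2, 20),
        "filters": random.choice([16, 32, 64, 128]),
        "learning_rate": 10 ** random.uniform(-5, -1),
    }

def fitness(cand):
    # Stand-in for "train the network and measure validation accuracy".
    # An arbitrary smooth function so the example runs instantly.
    return (-abs(cand["layers"] - 8)
            - abs(cand["filters"] - 64) / 32
            - abs(cand["learning_rate"] - 1e-3) * 100)

def mutate(cand):
    child = dict(cand)
    key = random.choice(list(child))
    if key == "layers":
        child[key] = max(2, child[key] + random.choice([-1, 1]))
    elif key == "filters":
        child[key] = random.choice([16, 32, 64, 128])
    else:
        child[key] *= 10 ** random.uniform(-0.3, 0.3)
    return child

population = [random_candidate() for _ in range(50)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # keep the best
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]    # refill by mutation

print("best configuration found:", max(population, key=fitness))
```

On a system like Titan, the expensive step is the fitness evaluation (actually training each candidate network), which is presumably where the thousands of GPUs come in; the evolutionary bookkeeping itself is cheap.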

A Virtual Data Scientist

That’s not dissimilar to the approach of a company called H2O.ai, a startup out of Silicon Valley that uses open source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.

“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”

The company’s latest product, Driverless AI, promises to deliver the data scientist equivalent of a chessmaster to its customers (the company claims several such grandmasters in its employ and advisory board). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify what features should be included in the computer model to make the most of the data based on the best “chess moves” of its grandmasters.

“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”

Inside the Black Box

Much as it can be impossible to trace how a human brain reaches a conclusion, it’s not always possible to understand how a machine, despite being designed by humans, reaches its own solutions. The lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.

“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.

Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.

The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.

“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”
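Candel's zip-code example translates directly into a simple derived feature. The sketch below uses invented column names and toy data purely for illustration (it is not Driverless AI's actual feature engineering): each customer record is enriched with the average churn rate of everyone sharing their zip code, which a turnover model could then use alongside per-customer fields.

```python
import pandas as pd

# Toy customer table; column names and values are illustrative assumptions.
customers = pd.DataFrame({
    "customer_id":   [1, 2, 3, 4, 5, 6],
    "zip_code":      ["02139", "02139", "94105", "94105", "94105", "10001"],
    "monthly_spend": [80, 120, 60, 55, 200, 90],
    "churned":       [0, 1, 1, 1, 0, 0],
})

# Derived feature: average churn rate among customers in the same zip code.
# (In practice this would be computed out-of-fold to avoid target leakage.)
customers["zip_churn_rate"] = (
    customers.groupby("zip_code")["churned"].transform("mean")
)

print(customers[["customer_id", "zip_code", "monthly_spend", "zip_churn_rate"]])
```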

Moving Forward

Much digital ink has been spilled over the dearth of skilled data scientists, so automating certain design aspects for developing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as parsing the inherent biases that exist within the data used by machine learning today.

“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”

The team at ORNL expects it can also make bigger impacts beginning next year when the lab’s next supercomputer, Summit, comes online. While Summit will boast only 4,600 nodes, it will sport the latest and greatest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, the world’s fifth-most powerful supercomputer today.

“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.

It’s all in a day’s work.

Image Credit: Gennady Danilkin / Shutterstock.com


This article and images were originally posted on Singularity Hub, January 3, 2018 at 11:07AM. Credit to the author and Singularity Hub.

 

 

 

The Female Supercomputer Designer Who Inspired Steve Jobs

Your daily selection of the hottest trending tech news!

According to Co.Design


The product designer and mechanical engineer Tamiko Thiel was working for the Cambridge, Massachusetts, company Thinking Machines. She and her colleagues were building a supercomputer that proposed a radical new concept. Instead of using one giant processor to crunch large amounts of data, they were going to use thousands of processors, each tackling a little part of the data-crunching in parallel. It was the early 1980s, and Thinking Machines was trying to build an artificially intelligent machine based on the human brain. As the project’s lead designer, Thiel was charged with the question: What should this new kind of technology look like?

Tamiko Thiel in front of Apple’s Richard Feynman “Think Different” poster, San Francisco, 1998. [Image: © Tamiko Thiel/Lew Tucker (photo)]

It was a hard question to answer, because most computers at the time resembled refrigerators–and because it would be difficult to convince the company’s future clients that yet another giant beige box was truly a new kind of machine. The stakes for the supercomputer’s industrial design were high.

Months earlier, Thiel had designed a logo for the company that visualized a hypercube–a 12-dimensional cube-in-a-cube that was the underlying structure of the supercomputer’s hardware. She printed it on T-shirts for the team–which would gain greater fame when the Nobel Prize-winning physicist Richard Feynman wore it in an early Apple “Think Different” advertisement.

The logo was the perfect metaphor: a symbolic abstraction that expressed the deeper functions of the machine they were building. So Thiel began to translate her visualization into a real, working prototype for the supercomputer’s hardware. The result? The now iconic Connection Machine, a supercomputer made of eight black cubes that form a larger cube, with transparent panels that reveal the blinking lights of the 4,096 chips whirring away inside. Now, 30 years after its invention, the Connection Machine has been acquired by the Museum of Modern Art, where it is currently on view as part of the exhibition Thinking Machines: Art and Design in the Computer Age, 1959–1989.
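The hypercube the logo visualized can be stated very compactly: give each of the machine's 4,096 router chips a 12-bit address (2^12 = 4,096) and connect two chips whenever their addresses differ in exactly one bit, so every chip has 12 direct neighbors and any chip can reach any other in at most 12 hops. Here is a small sketch of that addressing scheme (an illustration of the general idea, not Thinking Machines' actual routing code):

```python
# 12-dimensional hypercube addressing, as used to wire the Connection
# Machine's 4,096 router chips (illustrative sketch, not the original code).
DIMENSIONS = 12
NUM_CHIPS = 2 ** DIMENSIONS          # 4,096

def neighbors(address):
    """Chips one hop away: flip each of the 12 address bits in turn."""
    return [address ^ (1 << bit) for bit in range(DIMENSIONS)]

def hops(a, b):
    """Minimum hops between two chips = number of differing address bits."""
    return bin(a ^ b).count("1")

print(neighbors(0))            # chip 0 links to chips 1, 2, 4, ..., 2048
print(hops(0, NUM_CHIPS - 1))  # opposite corners are only 12 hops apart
```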

CM-2 with DataVault mass storage device, 1987. [Image: © Thinking Machines Corporation, 1987/Steve Grohe (photo)/courtesy Tamiko Thiel]

The Connection Machine’s design was so striking that it influenced Steve Jobs, according to Joanna Hoffman, Jobs’s colleague at NeXT who worked with him after he was kicked out of Apple, and a friend of Thiel’s.

“[Joanna] told me way after the fact–way too late!–that Steve Jobs came to her and said, ‘Find out who designed the Connection Machine, I want them to design my NeXT computer,’” Thiel recalls. “She said, ‘Sorry, you’re too late. Tamiko’s gone to Germany to become an artist.’ I’m going, ‘Joanna, you should have found me!’”

Regardless, the design seemed to stick in Jobs’s mind. Thiel points to the design of Jobs’s products before and after the Connection Machine. While usability and design were always important to him, there was a notable departure between the original Macintosh and the NeXTcube, which took the form of a perfect black cube–very similar to the Connection Machine: “[The Macintosh] still looked like, well, a cute little nerdy computer, but still a nerdy beige computer. And with the NeXT Cube, which really was a perfect cube, it was a designed form that was separate from the necessities of computer design.”

From there, Jobs’s design aesthetic continued in a similarly minimalist vein with the iMacs, iPods, and eventually iPhones. “They were all sort of objects from outer space. Each one had this quality of the sublime that went beyond their functionality as an object into imagining a different sphere of human existence,” Thiel says. While Jobs never directly acknowledged her impact during his lifetime, the combination of Hoffman’s testimony paired with the evolution of Apple design point to the Connection Machine’s lasting influence on the products that many of us still use.

Perhaps that’s why today, the Connection Machine’s rectilinear, geometric form feels like an obvious form for the computer to take. “It was not obvious. We’re talking 1983, before the Macintosh came out,” Thiel says. “The general image of computers was IBM computers, racks of electronics. They looked like refrigerators or heating units. They didn’t have any identity.”

Ted Bilodeau, Arlene Chung, Dick Clayton, CM-1 prototype, Tamiko Thiel, Brewster Kahle, Carl Feynman, 1985. [Photo: © Tamiko Thiel]

Thiel’s influential ideas about computer design are rooted in her background. Thiel’s father was a naval engineer and architect turned Bauhaus-influenced designer who knew and worked with luminaries like Walter Gropius and Marcel Breuer. “I grew up in a household that was ‘form follows function,’” Thiel says. “Basically design had to be functional and it had to express its function. And this was one of the 10 commandments in our household.”

When faced with the problem of how to design the form of a supercomputer that was based on the human brain and had the potential to herald a new dawn of computing, Thiel originally thought to put it inside a glass box. But she realized that simply showing the machine’s mechanical parts didn’t give any indication what it did–its function was encoded on tiny chips that even experts wouldn’t be able to see using only their eyes. Instead, the machine’s symbolic function was far more important to demonstrate. So she began to talk to the team’s engineers, to learn about the metaphors they used to describe the nature of the machine. That’s when she remembered her hypercube logo, which became the model for her industrial design.

The Connection Machine’s final form doesn’t just give shape to the internal structure of the device–it also gives the machine a powerful presence of its own, designed to match its groundbreaking technology. As Thiel recalls Thinking Machines founder Danny Hillis telling their team: “We want to build a machine that can be proud of us.” The saying, which cheekily points to the team’s goal of building an artificial intelligence, shows that the industrial design of the Connection Machine was an important goal for the engineers behind the project as well.

It also helped give expression to their hope in the future of computing. “I was steeped in all the science fiction visions and the [idea that] we will build artificial intelligences that will surpass us and hopefully be our companions and not our overlords,” Thiel says. “I wanted personally to express these fantasies, these visions of what we were building, or what the machine could evolve into.”

Tamiko Thiel working on CM-1 prototype, 1985. [Photo: © Tamiko Thiel]

The Connection Machine’s design had another important role: to sell supercomputers at a time when the supercomputer market was virtually nonexistent. In fact, Thiel says that most other people working on supercomputers and researching AI thought the ideas behind the Connection Machine were pure ivory tower silliness. “If you bring a new customer into a computer room and you say, you’ve never seen a machine like this before in your life, and you show them something that looks like their refrigerator back home, you’re going to have a lot of work to do to convince them,” Thiel says. “If you can bring them into the room, they stop dead in their tracks, their jaw drops, and they say, ‘Oh my god, I’ve never seen anything like this before in my life’–you’ve convinced them emotionally. At a deep down level you speak to their own fantasies and visions of what the future can bring.”

The Connection Machine was a success–to an extent. In 1989, the second generation of Connection Machine, the CM-2, which Thiel designed, won the Gordon Bell Prize as the fastest machine on the planet, and its fifth generation successor won the same prize in 1993. At a time when computers weren’t powerful enough to recognize human faces or process natural language, the Connection Machine was indexing texts and identifying what it thought were important concepts. Before the internet, it was the first search engine where you could type in a natural language query and get an answer. It could even navigate–a proto-version of the mapping services we all use today with the tap of a touch screen.

But there wasn’t a market for these devices outside of scientific research or big business, including Dow Jones–“You weren’t going to get someone to put a Connection Machine in your car as you drive down the Mass Pike,” Thiel says–and the company’s intense focus on AI research, combined with government funds drying up after the Cold War, eventually contributed to its demise. Thiel herself went on to live in Germany, where she’s still based, to pursue a career as an artist. She creates installations with augmented and virtual reality and has exhibited her artwork all over the world. This year, she’s one of Google’s Tilt Brush Artists-in-Residence.

We live in a different era today. Artificial intelligence has arrived and is trickling into everything we do online, from the way we unlock our phones to how we do our jobs. Now, with the MoMA exhibition, the Connection Machine is getting its due. “We felt like we’d come up with a design that was revolutionary, that could change the way people looked at computers or technological products,” Thiel says. “Thirty years later it’s a confirmation when the MoMA Design Department acquires it, saying, ‘Yes, we do think this is very, very important.’”


This article and images were originally posted on Co.Design, November 16, 2017 at 07:04AM. Credit to the author and Co.Design.

 

 

 

 

The world’s most powerful computer

China only started producing its first computer chips in 2001. But its chip industry has developed at an awesome pace.

So much so that Chinese-made chips power the world’s most powerful supercomputer, which is Chinese too.

The computer, known as the Sunway TaihuLight, contains some 41,000 chips and can carry out 93 quadrillion calculations per second. That’s twice as fast as the next-most-powerful supercomputer on the planet (which also happens to be Chinese).
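Taking the article's rounded figures at face value, that works out to a little over two teraflops of sustained performance per chip (a rough division, not an official specification):

```python
total_flops = 93e15   # 93 quadrillion calculations per second
chips = 41_000        # "some 41,000 chips"

print(f"~{total_flops / chips / 1e12:.1f} teraflops per chip")  # ~2.3
```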

The mind-boggling number of calculations computers like this can carry out in the blink of an eye can help crunch incredibly complicated data – such as variations in weather patterns over months, years and decades.

The BBC’s Cameron Andersen visited the TaihuLight at its home in the National Supercomputing Center in the eastern Chinese city of Wuxi.

Watch the video above to see his exclusive look inside the world’s most powerful computer – and the challenges that this processing powerhouse might try and solve in the near future.


This article and images were originally posted on BBC Future, June 24, 2017 at 09:42AM.

 

 

 

 

 

China aims to build world’s first exascale supercomputer prototype by 2017 


Building supercomputers is a digital arms race, and China is moving quickly to solidify its lead. Last year, the country unveiled the world’s fastest supercomputer, the Sunway TaihuLight (above). This year, according to state news agency Xinhua, the government has set its sights on completing the world’s first prototype exascale computer; a machine capable of making a billion, billion calculations per second.
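In concrete numbers, exascale means 10^18 operations per second: a billion times a billion. Compared with the Sunway TaihuLight figure quoted above, that is roughly a tenfold jump (simple arithmetic on the article's numbers, nothing more):

```python
exaflop = 1e9 * 1e9        # a billion, billion calculations per second
taihulight = 93e15         # TaihuLight's 93 quadrillion per second

print(f"exascale is about {exaflop / taihulight:.1f}x TaihuLight")  # ~10.8x
```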

The prototype computer will be ready before the end of the year, said Zhang Ting, an engineer at the country’s National Supercomputer Center, but the finished product won’t be operational for several years more. “A complete computing system of the exascale supercomputer and its applications can only be expected in 2020,” said Zhang. “[It] will be 200 times more powerful than the country’s first petaflop computer Tianhe-1, recognized as the world’s fastest in 2010.”

It’s not clear exactly how this prototype system will relate to the finished exascale computer in terms of capability, but the news suggests China will at least be first to reach such a milestone. A number of nations — including Japan and the US — are planning to build exascale computers. The US Department of Energy says its current schedule is to have an exascale system operational by 2023.

As of last June, China has more supercomputers in the world’s top 500 than the US — 167 compared with 165. (The US has more machines in the top 10, though: five to China’s two.) These systems are used for a number of tasks, ranging from life sciences to national defense. In 2015, the US actually blocked the export of Intel chips to China for its then-fastest supercomputer, fearing that the machine would be used for nuclear research. China responded by building an even faster system (the Sunway TaihuLight) using its own processors instead.


 

 

 

Supercomputer comes up with a profile of dark matter: Standard Model extension predicts properties of candidate particle

Simulated distribution of dark matter approximately three billion years after the Big Bang (illustration not from this work). Credit: The Virgo Consortium/Alexandre Amblard/ESA

 

 

In the search for the mysterious dark matter, physicists have used elaborate computer calculations to come up with an outline of the particles of this unknown form of matter. To do this, the scientists extended the successful Standard Model of particle physics which allowed them, among other things, to predict the mass of so-called axions, promising candidates for dark matter. The German-Hungarian team of researchers led by Professor Zoltán Fodor of the University of Wuppertal, Eötvös University in Budapest and Forschungszentrum Jülich carried out its calculations on Jülich’s supercomputer JUQUEEN (BlueGene/Q) and presents its results in the journal Nature.

“Dark matter is an invisible form of matter which until now has only revealed itself through its gravitational effects. What it consists of remains a complete mystery,” explains co-author Dr Andreas Ringwald, who is based at DESY and who proposed the current research. Evidence for the existence of this form of matter comes, among other things, from the astrophysical observation of galaxies, which rotate far too rapidly to be held together only by the gravitational pull of the visible matter. High-precision measurements using the European satellite “Planck” show that almost 85 percent of the entire mass of the universe consists of dark matter. All the stars, planets, nebulae and other objects in space that are made of conventional matter account for no more than 15 percent of the mass of the universe.

“The adjective ‘dark’ does not simply mean that it does not emit visible light,” says Ringwald. “It does not appear to give off any other wavelengths either – its interaction with photons must be very weak indeed.” For decades, physicists have been searching for particles of this new type of matter. What is clear is that these particles must lie beyond the Standard Model of particle physics, and while that model is extremely successful, it currently only describes the conventional 15 percent of all matter in the cosmos. From theoretically possible extensions to the Standard Model, physicists expect not only a deeper understanding of the universe, but also concrete clues as to the energy range in which it is particularly worthwhile to look for dark-matter candidates.

The unknown form of matter can either consist of comparatively few, but very heavy particles, or of a large number of light ones. The direct searches for heavy dark-matter candidates using large detectors in underground laboratories and the indirect search for them using large particle accelerators are still going on, but have not turned up any so far. A range of physical considerations make extremely light particles, dubbed axions, very promising candidates. Using clever experimental setups, it might even be possible to detect direct evidence of them. “However, to find this kind of evidence it would be extremely helpful to know what kind of mass we are looking for,” emphasises theoretical physicist Ringwald. “Otherwise the search could take decades, because one would have to scan far too large a range.”

The existence of axions is predicted by an extension to quantum chromodynamics (QCD), the quantum theory that governs the strong interaction, responsible for the nuclear force. The strong interaction is one of the four fundamental forces of nature alongside gravitation, electromagnetism and the weak nuclear force, which is responsible for radioactivity. “Theoretical considerations indicate that there are so-called topological quantum fluctuations in quantum chromodynamics, which ought to result in an observable violation of time reversal symmetry,” explains Ringwald. This means that certain processes should differ depending on whether they are running forwards or backwards. However, no experiment has so far managed to demonstrate this effect.

The extension to quantum chromodynamics (QCD) restores the invariance of time reversals, but at the same time it predicts the existence of a very weakly interacting particle, the axion, whose properties, in particular its mass, depend on the strength of the topological quantum fluctuations. However, it takes modern supercomputers like Jülich’s JUQUEEN to calculate the latter in the temperature range that is relevant in predicting the relative contribution of axions to the matter making up the universe. “On top of this, we had to develop new methods of analysis in order to achieve the required temperature range,” notes Fodor who led the research.

The results show, among other things, that if axions do make up the bulk of dark matter, they should have a mass of 50 to 1500 micro-electronvolts, expressed in the customary units of particle physics, and thus be up to ten billion times lighter than electrons. This would require every cubic centimetre of the universe to contain on average ten million such ultra-lightweight particles. Dark matter is not spread out evenly in the universe, however, but forms clumps and branches of a weblike network. Because of this, our local region of the Milky Way should contain about one trillion axions per cubic centimetre.
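The “up to ten billion times lighter than electrons” comparison is easy to verify against the quoted mass range, taking the electron's rest mass as roughly 511,000 electronvolts (a standard value, not stated in the article):

```python
electron_mass_ev = 511_000.0   # electron rest mass, ~511 keV
axion_low_ev = 50e-6           # 50 micro-electronvolts
axion_high_ev = 1500e-6        # 1,500 micro-electronvolts

print(f"{electron_mass_ev / axion_low_ev:.1e}")   # ~1.0e10, ten billion
print(f"{electron_mass_ev / axion_high_ev:.1e}")  # ~3.4e8
```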

Thanks to the Jülich supercomputer, the calculations now provide physicists with a concrete range in which their search for axions is likely to be most promising. “The results we are presenting will probably lead to a race to discover these particles,” says Fodor. Their discovery would not only solve the problem of dark matter in the universe, but at the same time answer the question why the strong interaction is so surprisingly symmetrical with respect to time reversal. The scientists expect that it will be possible within the next few years to either confirm or rule out the existence of axions experimentally.

The Institute for Nuclear Research of the Hungarian Academy of Sciences in Debrecen, the Lendület Lattice Gauge Theory Research Group at the Eötvös University, the University of Zaragoza in Spain, and the Max Planck Institute for Physics in Munich were also involved in the research.


More information: S. Borsanyi et al, Calculation of the axion mass based on high-temperature lattice quantum chromodynamics, Nature (2016). DOI: 10.1038/nature20115


Original article on phys.org