A Japanese startup is planning an artificial shooting star show by 2020

Your daily selection of the hottest trending tech news!

According to Digital Trends (This article and its images were originally posted on Digital Trends July 22, 2018 at 12:03PM.)

Leonid meteor shower. Image: Barcroft Media/Getty Images

Who says you have to wait around for a meteor shower? Certainly not ALE Co., a Tokyo-based startup that wants you to wish upon a shooting star anytime you’d like. The company is looking to develop a system that would offer paying customers “shooting stars on demand,” and per a report in The Japan Times, the first man-made meteor shower in history could take place in just two years.

The system depends on two satellites, both of which are currently in development. The first should be launched into orbit in March 2019, while its sibling would take flight sometime next summer. Each satellite will carry around 400 tiny spheres, each containing a proprietary chemical formula designed to mimic falling stars in the sky. Think of them, in some ways, as extraterrestrial fireworks. The little spheres could be reused, meaning the system could be repurposed for between 20 and 30 artificial shooting star shows.

The satellites purportedly have a lifespan of around 24 months and would be programmed to eject the tiny fireworks at the right position, speed, and direction to achieve visible illumination even over an extremely crowded metropolitan area (think Tokyo or New York City). And because they would be released in space, millions of viewers would be able to enjoy the show from their own homes, ALE claims.

“We are targeting the whole world, as our stockpile of shooting stars will be in space and can be delivered across the world,” ALE CEO Lena Okajima told reporters.

Should all systems continue to operate smoothly during the planning and production phases, both satellites could be in place by February 2020, and an initial test run could be ready to go later that spring. That means we have less than two years to think of all of our most pressing desires and make as many wishes as possible.

The first test is slated to take place over Hiroshima, which was chosen for its weather, landscape, and cultural background, the company said. It’s unclear exactly how much you’ll have to pay to order a meteor shower of your own. The initial tests have a budget of $20 million, which includes the cost of launching the pair of satellites.

Continue reading… | Stay even more current with our live technology feed.

  • Got any news, tips or want to contact us directly? Feel free to email us: esistme@gmail.com.

To see more posts like these; please subscribe to our newsletter. By entering a valid email, you’ll receive top trending reports delivered to your inbox.

__

This article and images were originally posted on [Digital Trends] July 22, 2018 at 12:03PM. Credit to Author  and Digital Trends | ESIST.T>G>S Recommended Articles Of The Day.


This Japanese AI security camera shows the future of surveillance will be automated


According to The Verge (This article and its images were originally posted on The Verge June 26, 2018 at 07:34AM.)

The world of automated surveillance is booming, with new machine learning techniques giving CCTV cameras the ability to spot troubling behavior without human supervision. And sooner or later, this tech will be coming to a store near you — as illustrated by a new AI security cam built by Japanese telecom giant NTT East and startup Earth Eyes Corp.

The security camera is called the “AI Guardman” and is designed to help shop owners in Japan spot potential shoplifters. It uses open source technology developed by Carnegie Mellon University to scan live video streams and estimate the poses of any bodies it can see. (Think of it like a Kinect camera, but using 2D instead of 3D data to track people.) The system then tries to match this pose data to predefined ‘suspicious’ behavior. If it sees something noteworthy, it alerts shopkeepers via a connected app.
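The matching step described above (per-frame pose data checked against predefined ‘suspicious’ patterns) can be sketched as a simple rule layer. Everything in this sketch — the pose labels, the sequence, and the function names — is hypothetical illustration, not NTT East’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """One per-frame result from a pose estimator (e.g., OpenPose),
    already classified into a coarse action label (hypothetical)."""
    frame: int
    label: str  # e.g., "reach_shelf", "look_around", "conceal_item"

# A predefined 'suspicious' pattern: actions that must occur in this order.
SUSPICIOUS_SEQUENCE = ("reach_shelf", "look_around", "conceal_item")

def contains_sequence(poses, pattern):
    """True if the pose labels contain `pattern` as an in-order subsequence."""
    labels = iter(p.label for p in poses)
    return all(step in labels for step in pattern)

def alert_needed(poses):
    """Decide whether to push an alert to the shopkeeper's app."""
    return contains_sequence(poses, SUSPICIOUS_SEQUENCE)
```

A real system would also need to cope with the benign look-alikes the article mentions later, such as clerks restocking shelves.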

Earth Eyes has published a demo video of a prototype version of the technology that gives a good overview of how it works.

AI Guardman has been under development for at least a few years, but last month NTT East and Earth Eyes shared the results of some early trials with the camera. As you might expect from a PR blast, the feedback was glowing: according to a report from Japan’s IT Media, NTT East and Earth Eyes claim that AI Guardman reduced shoplifting in stores by around 40 percent.

Without independent verification, these claims should be taken with a pinch of salt, but the underlying technology is certainly solid. New deep learning techniques have enabled us to analyze video footage more quickly and cheaply than ever before, and a growing number of companies in Japan, America, and China are developing products with similar capabilities. Similar features are also making their way into home security cameras, with companies like Amazon and Nest offering rudimentary AI analysis, like spotting the difference between pets and people.

AI Guardman is notable, though, as a product with advanced features that customers will be able to buy, plug in, and start running without too much delay. A spokesperson for NTT East told The Verge that the camera would go on sale at the end of July, with an up-front price of around $2,150 and a monthly subscription fee of $40 for cloud support. NTT says it hopes to introduce the camera to 10,000 stores in the next three years. “Our primary target is big businesses although we do not have the intention to omit small ones,” said a spokesperson.


An illustration of AI Guardman. The camera uses deep learning to spot suspicious behavior and alerts a shopkeeper via an app so they can approach the individual. Image: NTT East

But there are a lot of potential problems with automated surveillance, including privacy, accuracy, and discrimination. Although AI can reliably look at a person and map out where their body parts are, it’s challenging to match that information to ‘suspicious’ behavior, which tends to be context dependent. NTT East admitted as much, and said that “common errors” by the AI Guardman included misidentifying both indecisive customers (who might pick up an item, put it back, then pick it up again) and salesclerks restocking shelves as potential shoplifters.

It’s difficult to know the extent of this problem, as NTT East said it had not published any studies on the technology’s accuracy, and could not share any information on statistics like its false positive rate (that is, how often it identifies innocent behavior as something suspicious).
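For readers unfamiliar with the metric, the false positive rate is easy to compute; the tallies below are invented purely to illustrate why even a low rate matters in a busy store:

```python
# Invented one-day tallies for a single store (not NTT East's data).
false_positives = 18    # innocent behavior flagged (e.g., restocking clerks)
true_negatives = 982    # innocent behavior correctly ignored

# False positive rate: share of innocent events that trigger an alert.
false_positive_rate = false_positives / (false_positives + true_negatives)
print(f"{false_positive_rate:.1%}")  # 1.8% -- still 18 needless alerts a day
```

Even a seemingly low rate translates into a steady stream of alerts at the scale of thousands of shoppers, which is why the unreported number matters.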

It’s also possible that the training data might be biased towards certain groups, or that the technology might be used as a pretext for discrimination. (For example, a security guard following someone around the store because ‘the computer said they’re suspicious.’) NTT East denied that the technology could be discriminatory, as it “does not find pre-registered individuals.”

Evaluating technology like this is difficult at a distance, but it’s clear that this sort of automated surveillance is only going to become more common in the future, with researchers working on advanced analysis like spotting violent behavior in crowds, and tech companies selling tools like facial recognition to law enforcement. Next time you walk past a CCTV camera, your concern won’t be who is watching, but what.



__

This article and images were originally posted on [The Verge] June 26, 2018 at 07:34AM. Credit to Author and The Verge | ESIST.T>G>S Recommended Articles Of The Day.


This Neural Network Built by Japanese Researchers Can ‘Read Minds’

Your daily selection of the latest science news!

According to Singularity Hub

It already seems a little like computers can read our minds; features like Google’s auto-complete, Facebook’s friend suggestions, and the targeted ads that appear while you’re browsing the web sometimes make you wonder, “How did they know?” For better or worse, it seems we’re slowly but surely moving in the direction of computers reading our minds for real, and a new study from researchers in Kyoto, Japan, is an unequivocal step in that direction.

A team from Kyoto University used a deep neural network to read and interpret people’s thoughts. Sound crazy? This actually isn’t the first time it’s been done. The difference is that previous methods—and results—were simpler, deconstructing images based on their pixels and basic shapes. The new technique, dubbed “deep image reconstruction,” moves beyond binary pixels, giving researchers the ability to decode images that have multiple layers of color and structure.

“Our brain processes visual information by hierarchically extracting different levels of features or components of different complexities,” said Yukiyasu Kamitani, one of the scientists involved in the study. “These neural networks or AI models can be used as a proxy for the hierarchical structure of the human brain.”

The study lasted 10 months and involved three people viewing, for varying lengths of time, images from three categories: natural images (such as animals or people), artificial geometric shapes, and letters of the alphabet.

Reconstructions utilizing the DGN. The three reconstructed images correspond to three subjects.

The viewers’ brain activity was measured either while they were looking at the images or afterward. To measure brain activity after people had viewed the images, they were simply asked to think about the images they’d been shown.

Recorded activity was then fed into a neural network that “decoded” the data and used it to generate its own interpretations of the people’s thoughts.

In humans (and, in fact, all mammals) the visual cortex is located at the back of the brain, in the occipital lobe, above the cerebellum. Activity in the visual cortex was measured using functional magnetic resonance imaging (fMRI), and the measurements were translated into hierarchical features of a deep neural network.

Starting from a random image, the network repeatedly optimizes that image’s pixel values until the network’s features of the input image become similar to the features decoded from brain activity.
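The loop just described can be sketched with a toy stand-in for the feature extractor. The study used a deep neural network and fMRI-decoded features; here a fixed random linear map plays the network’s role, purely to show the “optimize pixels until features match” idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "feature extractor": a fixed random linear map. The real study
# used a deep neural network; this toy keeps only the optimization idea.
W = rng.normal(size=(16, 64))
features = lambda img: W @ img

target = rng.normal(size=64)   # stands in for the viewed image
decoded = features(target)     # stands in for features decoded from fMRI

img = rng.normal(size=64)      # start from a random image
lr = 0.01
for _ in range(500):
    err = features(img) - decoded
    img -= lr * (W.T @ err)    # gradient of 0.5 * ||W @ img - decoded||^2

print(np.linalg.norm(features(img) - decoded))  # residual shrinks toward zero
```

The key property the study reports — generating images rather than retrieving stored ones — comes from optimizing pixels directly, not from looking anything up in a database.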

Importantly, the team’s model was trained using only natural images (of people or nature), but it was able to reconstruct artificial shapes. This means the model truly ‘generated’ images based on brain activity, as opposed to matching that activity to existing examples.

Not surprisingly, the model did have a harder time trying to decode brain activity when people were asked to remember images, as compared to activity when directly viewing images. Our brains can’t remember every detail of an image we saw, so our recollections tend to be a bit fuzzy.

The reconstructed images from the study retain some resemblance to the original images viewed by participants, but mostly they look like minimally detailed blobs. However, the technology’s accuracy is only going to improve, and its applications will increase accordingly.

Imagine “instant art,” where you could produce art just by picturing it in your head. Or what if an AI could record your brain activity as you’re asleep and dreaming, then re-create your dreams in order to analyze them? Last year, completely paralyzed patients were able to communicate with their families for the first time using a brain-computer interface.

There are countless creative and significant ways to use a model like the one in the Kyoto study. But brain-machine interfaces are also one of those technologies we can imagine having eerie, Black Mirror-esque consequences if not handled wisely. Neuroethicists have already outlined four new human rights we would need to implement to keep mind-reading technology from going sorely wrong.

Despite this, the Japanese team certainly isn’t alone in its efforts to advance mind-reading AI. Elon Musk famously founded Neuralink with the purpose of building brain-machine interfaces to connect people and computers. Kernel is working on making chips that can read and write neural code.

Whether it’s to recreate images, mine our deep subconscious, or give us entirely new capabilities, though, it’s in our best interest that mind-reading technology proceeds with caution.

Image Credit: igor kisselev / Shutterstock.com



__

This article and images were originally posted on [Singularity Hub] January 14, 2018 at 11:05AM. Credit to Author and Singularity Hub | ESIST.T>G>S Recommended Articles Of The Day


Japanese console market grows for the first time in three years

The Japanese console market has seen an increase in sales revenue for the first time in three years, and that’s down to the Switch.

According to Famitsu figures, translated and broken down by Japanese games industry analyst Dr. Serkan Toto, combined console hardware and software sales from December 26, 2016 to June 25, 2017, totaled 153.2 billion yen ($1.35 billion).

That’s a year-over-year increase of 14.8 percent, and marks the first revenue rise in three years for the Japanese console market (during that six-month period, at least).

As explained by Toto, in recent years consoles have struggled to keep up with the booming popularity of mobile games. And in 2016, Famitsu reported an overall drop-off in console hardware and software sales.

With that in mind, the Japanese publication is unsurprisingly attributing the market’s sudden turnaround in fortunes to Nintendo’s console-handheld hybrid.

The system has already sold over 1 million units in Japan, despite only launching on March 3, and its solid debut performance has clearly been a boon to the region’s console market.

Whether or not the Switch can maintain that momentum remains to be seen, and the device’s immediate success will depend on Nintendo’s ability to manufacture enough hardware to meet consumer demand.

__

This article and images were originally posted on [Gamasutra News] July 4, 2017 at 12:12PM

By Chris Kerr


Japanese white collar workers are already being replaced by artificial intelligence


Most of the attention around automation focuses on how factory robots and self-driving cars may fundamentally change our workforce, potentially eliminating millions of jobs. But AI systems that can handle knowledge-based, white-collar work are also becoming increasingly competent.

One Japanese insurance company, Fukoku Mutual Life Insurance, is reportedly replacing 34 human insurance claim workers with “IBM Watson Explorer,” starting in January 2017.

The AI will scan hospital records and other documents to determine insurance payouts, according to a company press release, factoring injuries, patient medical histories, and procedures administered. Automation of these research and data gathering tasks will help the remaining human workers process the final payout faster, the release says.

Fukoku Mutual will spend $1.7 million (200 million yen) to install the AI system, and $128,000 per year for maintenance, according to Japan’s The Mainichi. The company saves roughly $1.1 million per year on employee salaries by using the IBM software, meaning it hopes to see a return on the investment in less than two years.
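The payback claim checks out arithmetically using the article’s round figures:

```python
# Figures from the article, in approximate USD.
install_cost = 1_700_000           # one-time cost of the Watson system
maintenance_per_year = 128_000     # annual maintenance
salary_savings_per_year = 1_100_000

net_savings_per_year = salary_savings_per_year - maintenance_per_year
payback_years = install_cost / net_savings_per_year
print(round(payback_years, 2))  # about 1.75 years, i.e. under two, as claimed
```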

Watson AI is expected to improve productivity by 30%, Fukoku Mutual says. The company was encouraged by its use of similar IBM technology to analyze customers’ voices during complaints. The software typically takes the customer’s words, converts them to text, and analyzes whether those words are positive or negative. Similar sentiment analysis software is also being used by a range of US companies for customer service; incidentally, a large benefit of the software is understanding when customers get frustrated with automated systems.
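The words-to-polarity step described above can be sketched with a toy lexicon. The words and scores below are invented for illustration; production systems use trained models over far richer features than isolated words:

```python
# Toy lexicon-based sentiment scoring, mirroring the words-to-polarity step.
# The lexicon and its scores are invented, not IBM's.
LEXICON = {"great": 1, "thanks": 1, "helpful": 1,
           "broken": -1, "refund": -1, "frustrated": -1}

def sentiment(transcript: str) -> str:
    """Classify a speech-to-text transcript by summing word polarities."""
    score = sum(LEXICON.get(word.strip(".,!?").lower(), 0)
                for word in transcript.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

Detecting frustration with an automated system, as the article mentions, would amount to watching this score trend negative over the course of a call.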

The Mainichi reports that three other Japanese insurance companies are testing or implementing AI systems to automate work such as finding ideal plans for customers. An Israeli insurance startup, Lemonade, has raised $60 million on the idea of “replacing brokers and paperwork with bots and machine learning,” says CEO Daniel Schreiber.

Artificial intelligence systems like IBM’s are poised to upend knowledge-based professions, like insurance and financial services, according to the Harvard Business Review, due to the fact that many jobs can be “composed of work that can be codified into standard steps and of decisions based on cleanly formatted data.” But whether that means augmenting workers’ ability to be productive, or replacing them entirely remains to be seen.

“Almost all jobs have major elements that—for the foreseeable future—won’t be possible for computers to handle,” HBR writes. “And yet, we have to admit that there are some knowledge-work jobs that will simply succumb to the rise of the robots.”

Join our fans by liking us on Facebook, or follow us on Twitter, Google+, Feedly, Flipboard, and Instagram.

If you enjoy reading our posts, check out our Flipboard magazine, ESIST. All of our handpicked articles will be delivered to the Flipboard magazine every day.

This article was originally posted on Quartz

By Dave Gershgorn