Sunday, 20 March 2016

Stagefright exploit reliably attacks Android phones


You may know that the Stagefright security flaw is theoretically dangerous, but it hasn't been that risky in practice -- it's just too difficult to exploit reliably on an Android device. Or rather, it was. Security researchers at NorthBit have developed a proof-of-concept Stagefright exploit, Metaphor, that reliably compromises Android phones. The key is a back-and-forth procedure that gauges a device's defenses before diving in. Visit a website with a maliciously designed MPEG-4 video and the attack will crash Android's media server, send hardware data back to the attacker, send another video file, collect additional security data, and deliver one last video file that actually infects the device.
It sounds laborious, but it works quickly: a typical attack breaks into a phone within 20 seconds. And while it's most effective on a Nexus 5 with stock firmware, it's known to work on the customized Android variants found on phones like the HTC One, LG G3 and Samsung Galaxy S5.
This doesn't amount to an in-the-wild attack, and you'll be fine if you're running Android 6.0 Marshmallow or any other OS version patched against Stagefright. The catch is that relatively few people are in that boat -- most Android users are running Lollipop or earlier, and only some of those devices have Stagefright patches. You're probably fine if you own a relatively recent device, but your friend with a years-old Android phone is at risk.
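If you're not sure where a particular device stands, one rough, do-it-yourself check is to read the security patch level the device reports. The sketch below is illustrative only (it assumes Python 3.7+, the adb tool on your PATH, and a phone with USB debugging enabled): devices that received the Stagefright fixes generally report a patch level of around October 2015 or later, while many older builds don't expose the property at all, which is itself a bad sign.

```python
import subprocess
from datetime import date

# Illustrative sketch, not from the article. Assumes Python 3.7+, adb on
# your PATH, and a connected device with USB debugging enabled.
STAGEFRIGHT_CUTOFF = date(2015, 10, 1)  # rough cutoff for the fixes

def security_patch_level(serial=None):
    """Read ro.build.version.security_patch via adb; None if it's absent."""
    cmd = ["adb"] + (["-s", serial] if serial else [])
    cmd += ["shell", "getprop", "ro.build.version.security_patch"]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
    return date.fromisoformat(out) if out else None

def likely_patched(serial=None):
    level = security_patch_level(serial)
    return level is not None and level >= STAGEFRIGHT_CUTOFF

if __name__ == "__main__":
    print("Likely patched against Stagefright:", likely_patched())
```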

Friday, 11 March 2016

A NEW APP AUTOMATICALLY SENDS THAT GROUP PHOTO TO YOUR FRIENDS

[Image: The Knoto app on an iPhone. The app automatically sends the photos you take of your friends to them. Credit: Dave Gershgorn / Popular Science]
"Sure, I'll send you that photo."
We say it all the time at parties, on vacation, or during formal events. But some of us (like me) are forgetful, terrible friends, and the photo never gets sent.
The Knoto app uses artificial intelligence to try to fix that problem. When you take a picture, Knoto's facial recognition algorithm detects who's in the photo, and automatically sends them a copy.
"It's really a different experience when you're getting these photos," Jonas Lee, CEO of Knoto's parent company PhotoKharma, told Popular Science. "On the receiving end, you're getting a broadcast of you, from your friends."
The concept behind Knoto's technology is to make photo sharing easier. Instead of thinking about sending photos to people or uploading them to Facebook, it's done automatically, and with the photos that people are most likely to want: the pictures of themselves.
When you first launch the app, Knoto attempts to connect with Facebook to know who you are, and what your friends look like. After authorization (you must allow Knoto to connect to Facebook for it to work), the app goes through all your tagged Facebook images, and your images where your friends are tagged, using them to recognize the people in your life. It also goes through all the photos in your phone, and tries to make matches. You can help the poor algorithm out by manually tagging people you take pictures of often, as well.
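To make that bootstrapping step concrete, here's a minimal sketch of how an app could build a gallery of known faces from a folder of tagged photos. This isn't Knoto's actual code or stack; it just illustrates the general technique using the open-source face_recognition Python library, with a made-up folder layout.

```python
import os
import face_recognition  # open-source library, standing in for Knoto's own models

def build_face_gallery(tagged_dir):
    """Map each friend's name to face encodings learned from tagged photos.

    Assumes a hypothetical layout of tagged_dir/<friend_name>/<photo>.jpg,
    e.g. tagged photos exported from a social network.
    """
    gallery = {}
    for name in os.listdir(tagged_dir):
        person_dir = os.path.join(tagged_dir, name)
        if not os.path.isdir(person_dir):
            continue
        encodings = []
        for filename in os.listdir(person_dir):
            image = face_recognition.load_image_file(os.path.join(person_dir, filename))
            # One 128-dimensional encoding per face found in the photo.
            encodings.extend(face_recognition.face_encodings(image))
        if encodings:
            gallery[name] = encodings
    return gallery
```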
[Image: Knoto app screenshots on an iPhone. The app automatically recognizes the faces of people in your photo library and can use that information to automatically send pictures of them.]
Once you take a photo, the Knoto app finds the faces in the image, crops them out, and sends them to the encrypted Knoto server. There, the faces are matched against other known faces, and the resulting identities are transmitted back to your phone.
The photos are then delivered through the Knoto app, which ends up presenting a stream of photos of yourself, your friends, and your family. If the recipient doesn't have the app, they get a text with an image and a link to download the other photos.
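As a rough illustration of that matching-and-routing step (again, not Knoto's implementation; this reuses the hypothetical gallery from the sketch above and the same open-source face_recognition library), the per-photo flow boils down to: find faces, encode them, compare them against the known encodings, and collect the names of whoever matched.

```python
import face_recognition

def people_in_photo(photo_path, gallery, tolerance=0.6):
    """Return names of gallery members whose faces appear in the photo."""
    image = face_recognition.load_image_file(photo_path)
    recipients = set()
    for encoding in face_recognition.face_encodings(image):
        for name, known_encodings in gallery.items():
            # One True/False per known encoding within `tolerance` of this face.
            matches = face_recognition.compare_faces(
                known_encodings, encoding, tolerance=tolerance)
            if any(matches):
                recipients.add(name)
    return recipients

# Hypothetical usage: whoever is recognized gets the photo.
# gallery = build_face_gallery("tagged_photos/")
# for name in people_in_photo("party.jpg", gallery):
#     send_photo_to(name, "party.jpg")  # placeholder delivery function
```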
In terms of privacy, Lee says that cropping the photos down to just faces, and transmitting only those cropped images, eliminates a lot of concerns.
"Everybody is naked from the face up," Lee said, adding that no human looks at the photos, either. (This of course disregards those doing handstands.)
As you take more photos, the app's software gets better over time. Those two acts, learning from photos and then applying that knowledge, are the artificial intelligence at work. Knoto uses convolutional neural networks, a flavor of machine learning that works well with images, to identify specific people in photos.
When I tried it, Knoto processed all the photos in my phone, which included every member of my grandfather's Navy crew from WWII, Paulie from Goodfellas, and Alphabet executive chairman Eric Schmidt. I feel confident it won't be sending them any photos (especially Paulie).
Still, the app worked well. When I tried to send 24 photos to myself, I quickly got a text with an image and links to the rest of the photos.
There are some limitations, however. Namely, the Knoto app needs to be perpetually open in the background, so it just doesn't work if you're one of those compulsive app closers. (We all know the type.)
I also didn't get options to send photos to my Facebook friends via Messenger or the like, and after the app had looked through my Facebook photos, I still had to manually tag a bunch of people, which seems to defeat the whole point of Facebook integration.
Knoto is great for people who take lots of selfies with friends, or go on trips together and are forgetful about social media. It's a product that relies on people being lazy, and too busy doing other things to deal with their photos after they've been taken. But are people ready for automatic sharing? That's the gamble Knoto makes.
The Knoto app is available for free on the iOS App Store today.

GOOGLE'S ALPHAGO A.I. DEFEATS WORLD CHAMPION AT THE GAME OF GO

[Image: Google's AlphaGo beats world champion Lee Se-dol in Go. Credit: Google / Screenshot]
The boundary of what machines cannot do has been pushed a little farther, as Google DeepMind's AlphaGo has beaten Go world champion Lee Se-dol in the first of five matches.
After a 3 1/2 hour game, Se-dol conceded to the computer.
This is only the first match, but Se-dol had expected a 5-0 sweep in his own favor. During a press conference after the match, Se-dol hung his head.
“I didn’t know AlphaGo would play such a perfect game,” he said.
Demis Hassabis, founder and CEO of DeepMind, compared this win to landing on the moon in a tweet after the match was called.
Go was thought to be unplayable by a machine at a champion level, because the game is so complex. This idea was shattered in October 2015, when AlphaGo beat European champion Fan Hui, but skeptics still believed that Se-dol would best the machine.
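Just how complex is easy to show with a back-of-the-envelope calculation. Using commonly cited ballpark figures -- roughly 35 legal moves per turn over about 80 plies for chess, versus roughly 250 moves over about 150 plies for Go (rough estimates, not figures from the article) -- the two game trees differ by hundreds of orders of magnitude:

```python
# Rough game-tree size estimate: (average branching factor) ** (game length in plies).
# The figures below are commonly cited ballpark values, not exact counts.
chess_tree = 35 ** 80    # roughly 10^123 positions
go_tree = 250 ** 150     # roughly 10^359 positions

print(f"chess: about 10^{len(str(chess_tree)) - 1}")
print(f"go:    about 10^{len(str(go_tree)) - 1}")
```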
Behind the scenes, Google has been tweaking AlphaGo, learning from the mistakes it made in the 2015 match with Hui. That seems to have paid off: even Se-dol, who had studied the few previous AlphaGo games, was surprised by how well the machine performed.
Much like Garry Kasparov facing Deep Blue in 1997, Se-dol has not had much access to games that AlphaGo has played, besides the five against Hui. Normally, both players would be able to analyze the other's playing style and prepare accordingly. On the other hand, it might not have mattered, since the computer doesn't have a set personality or preferred style -- it just looks to win.
The second match will take place Thursday, March 10 at 1 p.m. KST (or Wednesday, March 9 at 11 p.m. EST for us in the West). It's available to stream on DeepMind's YouTube channel.

GOOGLE'S ROBOTS ARE LEARNING HOW TO PICK THINGS UP

[Image: Google's grasping robots in action. Credit: Google. Arms with brains: Google's grasping robots rely on neural networks to discover new ways to pick up objects.]
When babies learn to grasp things, they combine two systems, vision and motor skills. These two mechanisms, coupled with lots of trial and error, are how we learn to pick up a pencil differently than a stapler, and now robots are starting to learn the same way.
Google is teaching its robots a simple task: picking up objects in one bin and placing them in another. They're not the first robots to pick something up, but these robots are actually learning new ways to pick up objects of different shapes, sizes, and characteristics based on constant feedback. For instance, the robots have learned to pick up a soft object differently than a hard object.
Other projects, like Cornell's DeepGrasping paper, analyze an object once for the best place to grasp, attempt to pick it up, then try again if they fail. Google's approach continuously analyzes the object and the robot hand's relation to it, making it more adaptable, like a human.
These robots are really just arms with brains, hooked up to a camera. They have two grasping fingers attached to a triple-jointed arm, and they are controlled by two deep neural networks. Deep neural networks are a popular flavor of artificial intelligence because of their aptitude for making predictions from large amounts of data. In this case, one neural network looks at photos of the bin and predicts whether the robot's hand can correctly grasp the object. The other interprets how well the hand is grasping, so it can inform the first network to make adjustments.
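To make that division of labor concrete, here is a heavily simplified, hypothetical sketch of the "look, predict, adjust" loop the article describes. It is not Google's code: the grasp-prediction network is stood in for by a placeholder scoring function, and the candidate motions are just small random perturbations.

```python
import numpy as np

rng = np.random.default_rng(0)

def grasp_success_score(image, motion):
    """Placeholder for the CNN that predicts grasp success from the camera
    image plus a proposed hand motion. A real system would run a trained
    network here; this stub just returns a dummy score."""
    return float(rng.random())

def servo_step(image, current_pose, n_candidates=16):
    """Sample candidate motions and keep the one the 'network' scores highest."""
    candidates = [rng.normal(scale=0.01, size=3) for _ in range(n_candidates)]
    best = max(candidates, key=lambda m: grasp_success_score(image, m))
    return current_pose + best  # move the gripper a small step

# Hypothetical control loop: re-observe and re-plan after every small move,
# which is what lets the arm keep adjusting as the object shifts.
pose = np.zeros(3)
for _ in range(20):
    frame = None  # stand-in for a fresh camera frame
    pose = servo_step(frame, pose)
print("final gripper offset:", pose)
```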
Researchers noted that the robots didn't need to be calibrated based on different camera placement. As long as the camera had a clear view of the bin and arm, the neural network would be able to adjust and continue learning to pick up objects.
Over the course of two months, Google had its robots pick up objects 800,000 times. Six to 14 robots were working on picking up objects at any given time, and the only human role was to reload the robot's bin of objects. The objects were ordinary household objects: office supplies, children's toys, and sponges.
The most surprising outcome to the researchers, noted in the paper published on arXiv.org, was that the robots learned to pick up hard and soft objects in different ways. For an object that was perceived as rigid, the grippers would just grasp the outer edges of the object and squeeze to hold it tightly. But for a soft object like a sponge, the neural network realized it would be easier to place one gripping finger in the middle and one around the edge, and then squeeze.
What sets this work apart from other grasping robots is the constant, direct feedback that helps the neural network learn, with very little human interference. This allows the robot arm to pick up even things it has never seen before, with a high rate of success. Researchers logged a failure rate of 10 to 20 percent on new objects, depending on the object, and if the robot failed to pick up an object, it would try again. Google fares a little worse than Cornell's DeepGrasping project, which ranged from consistent success on things like plush toys to 16 percent failure on hard objects.
Teaching robots to understand the world around them, and their physical limits, is an important process for things like self-driving cars, autonomous robots, delivery drones, and every other futuristic idea that involves robots interacting with the natural world.
Next, the researchers want to test the robots in real-world conditions, outside of the lab. That means varying lighting and location, objects that move, and wear and tear on the robot.

APPLE TO ANNOUNCE NEW PRODUCTS ON MARCH 21

[Image: Apple's March 21 event invitation. Credit: Apple / Tech Insider. Cupertino will announce new products in the fourth week of March.]
Apple will hold an event on March 21 to showcase new products. Invites were sent out to media for the company's March event with the tagline "Let Us Loop You In." Many rumors have pointed to the company's plans to introduce a new, small-screened iPhone 5se. While Apple tends to save its phone announcements for September, sites like 9to5Mac and Bloomberg have pointed to similar rumors surrounding an updated version of the 4-inch device.
Taking after Android phone makers like Samsung, HTC, LG and more, Apple used its last major hardware revision to increase the size of its devices' screens. The iPhone 6 and 6 Plus--bearing a 4.7-inch and 5.5-inch screen, respectively--have seen regular updates.
The NFC payment system Apple Pay and optical image stabilization have been among the updates the larger phones have seen. And even more improvements made their way to the "tock update" iPhone 6s. But the 4-inch option has remained at the level of an iPhone 5S--the same phone the company released back in 2013. If rumors are to be believed, that could change with this March 21 event.
The iPhone 5se is expected to have the same 4-inch screen size, but much beefier internal specs than the original 5S. Presumed to stand for "special edition," the 5se will likely boast an A8 or A9 processor, more storage than the current 16GB base option, and an improved 8-megapixel camera to match the iPhone 6 or 6S series of devices. Similar to the 6S, the ability to summon Siri with only one's voice could come to the iPhone 5se as well.
But it may not be all about the iPhone — Apple could bring updates to the iPad and maybe even its Mac line of computers at the California event. With the company keeping a tight lid on things as usual, we won't know for sure until March 21 arrives.
The week will be an eventful one for the iPhone-maker. Those following Apple's battle with the FBI over weakening the security of a phone used by one of the San Bernardino shooting suspects know that the company will head to court one day after the March 21 event, at 1 p.m. Pacific.
You can tune in to Apple's March 21 event on the company's devices at 10 a.m. Pacific, 1 p.m. Eastern.