Author Topic: How Apple used deep neural networks to bring face detection to iPhone and iPad
HCK
« on: November 17, 2017, 04:05:18 pm »
How Apple used deep neural networks to bring face detection to iPhone and iPad

'Apple started using deep learning for face detection in iOS 10. With the release of the Vision framework, developers can now use this technology and many other computer vision algorithms in their apps.'
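
For anyone who wants to try this in their own app, here's a minimal sketch of basic face detection with the Vision framework (available on iOS 11 and later); the detectFaces function name is just for illustration and isn't from the article:

Code:
import CoreGraphics
import Vision

// Minimal sketch: ask Vision for face bounding boxes in a CGImage.
func detectFaces(in image: CGImage) {
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard error == nil,
              let faces = request.results as? [VNFaceObservation] else { return }
        // Bounding boxes are normalized (0...1) with the origin at the lower left.
        for face in faces {
            print("Face at \(face.boundingBox)")
        }
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}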

Apple doesn't want to store your data on its servers. There's just no way to guarantee your privacy once your data leaves your device. But providing services on-device is a huge challenge as well.

From the Apple Machine Learning Journal:


  We faced several challenges. The deep-learning models need to be shipped as part of the operating system, taking up valuable NAND storage space. They also need to be loaded into RAM and require significant computational time on the GPU and/or CPU. Unlike cloud-based services, whose resources can be dedicated solely to a vision problem, on-device computation must take place while sharing these system resources with other running applications. Finally, the computation must be efficient enough to process a large Photos library in a reasonably short amount of time.
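
Those constraints bite third-party apps too. One hedged example of keeping a Vision request off the GPU when it's needed elsewhere (usesCPUOnly is a VNRequest property on iOS 11+, later deprecated in favor of letting the system decide):

Code:
import CoreGraphics
import Vision

// Sketch: trade some latency for lower GPU contention by forcing the
// face-detection request onto the CPU.
func detectFacesOnCPU(in image: CGImage) throws -> [VNFaceObservation] {
    let request = VNDetectFaceRectanglesRequest()
    request.usesCPUOnly = true
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results as? [VNFaceObservation] ?? []
}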

Source: How Apple used deep neural networks to bring face detection to iPhone and iPad