Artificial Intelligence at Apple

By Jessica Yung


A summary of the article ‘An Exclusive Look at how AI and Machine Learning work at Apple’ by Steven Levy posted on Backchannel.

Apple has been keeping a low profile on its artificial intelligence developments, so much so that critics thought it was far behind companies such as Facebook and Google. In this interview, Apple executives discuss how sophisticated Artificial Intelligence (AI) has been quietly deployed in Apple products in the past few years.

Outline of Article Summary:

  1. How Apple views Machine Learning (ML)
    • ML talent at Apple: Distribution, acquisition and problems
    • How Apple approaches new technologies
  2. External Opinion: Problems with Artificial Intelligence (AI) at Apple
    • How the AI Establishment views AI at Apple
    • Conflict between adoption of ML and Apple’s principles on user privacy
  3. Examples of Machine Learning at Apple
    • Ways Apple uses ML in its products
    • ML techniques Apple uses

1. How Apple views Machine Learning

‘It’s transformational, but not more so than other advances, like touch screens, or flat panels, or object-oriented programming. In Apple’s view, machine learning isn’t the final frontier.’

The machine learning (ML) mindset seems at odds with the Apple ethos of controlling the user experience. With ML, results emerge from the data and not directly from Apple designers. More on this below.

Machine Learning Talent at Apple
  • Distribution: Machine learning talent is distributed throughout the company and available to product teams.
  • Problems: It may be harder for Apple to attract talent because (1) its concerns about user privacy reduce the pool of data that engineers can use to train models, and (2) it prioritises product over publication.
  • High-profile acquisitions include Turi (Aug 2016) and Siri (2010).
How Apple approaches new technology

Two phases:

  1. Apply externally available technology to get a new capability off the ground.
  2. “As it becomes clear a technology area is critical to our ability to deliver a great product over time, we build our in-house capabilities to deliver the experience we want. To make it great, we want to own and innovate internally.” – Senior Vice President of Software Engineering Craig Federighi

E.g. speech technology: when Siri Senior Director Alex Acero arrived three years ago, Apple was still licensing much of Siri’s speech technology from a third party. That has since changed: Apple now has its own speech technology.

Apple believes it has an advantage in ML because it controls the entire product delivery system, from silicon to software.

  • Apple makes its own chips, so Siri Senior Director Acero was able to work directly with the silicon design team and the engineers who write the firmware for the devices to maximize performance of the neural net.
  • The needs of the Siri team influenced even aspects of the iPhone’s design.

Not everyone views Apple’s AI contributions favourably.

2. External opinion: Problems with AI at Apple

How the AI establishment views AI at Apple
  • Apple is constrained by its lack of a search engine (which can deliver the data that helps to train neural networks) and its inflexible insistence on protecting user information (which potentially denies Apple data it otherwise might use).
  • Apple isn’t seen as part of the AI community and was thought to be behind the pack with respect to AI; for example, it did not publicise Siri’s transition to neural nets in July 2014.
  • Apple doesn’t publish much.
Conflict between adoption of ML and Apple’s principles on user privacy

‘Probably the biggest issue in Apple’s adoption of machine learning is how the company can succeed while sticking to its principles on user privacy.’

Two key problems:

  1. Protecting personal preferences and information that neural nets identify.
  2. Gathering information required to train neural nets to recognise behaviours WITHOUT collecting the personal information of users.

Problem 1: Protecting personal preferences and information that neural nets identify. E.g. if a machine learning model infers that you like apples.

Solution: Keep the most sensitive ML processing entirely local to the device (e.g. the words people type using the iPhone QuickType keyboard). That is, these inferences are not sent to Apple’s servers.

  • Example from Federighi: you might be having a conversation and someone mentions a term that is a potential search. Other companies might have to analyze the whole conversation in the cloud to identify those terms, he says, but an Apple device can detect it without having the data leave the user’s possession, because the system is constantly looking for matches in a knowledge base kept on the phone. (It’s part of an on-device ‘brain’ of roughly 200 megabytes described in the article.)
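
To make the on-device idea concrete, here is a minimal Python sketch of local term spotting. Everything in it (the knowledge-base contents, the function name) is invented for illustration; the article only says that matching happens against a knowledge base stored on the phone, so the raw conversation never leaves the device.

```python
# Illustrative sketch only: the article does not describe Apple's actual
# implementation. The idea is that matching happens against a knowledge
# base stored on the device, so raw conversation text is never uploaded.

# A tiny stand-in for the ~200 MB on-device knowledge base.
LOCAL_KNOWLEDGE_BASE = {
    "golden gate bridge": "point_of_interest",
    "warriors game": "sports_event",
    "sushi": "restaurant_search",
}

def detect_search_terms(conversation_text: str) -> list[tuple[str, str]]:
    """Scan text on-device for terms that could seed a search suggestion."""
    text = conversation_text.lower()
    return [
        (term, category)
        for term, category in LOCAL_KNOWLEDGE_BASE.items()
        if term in text
    ]

# The raw text is processed locally; only the resulting suggestion exists.
matches = detect_search_terms("Want to grab sushi after the Warriors game?")
print(matches)  # [('warriors game', 'sports_event'), ('sushi', 'restaurant_search')]
```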

Problem 2: Gathering information required to train neural nets to recognise behaviours WITHOUT collecting the personal information of users.

Solution:

  • Train models on publicly available corpora of information.
  • Anonymise data, tagging it with random identifiers not associated with Apple IDs.
  • ‘Differential privacy’: a method that crowd-sources information in a way that doesn’t identify individuals at all. Apple implements this by adding mathematical noise to certain pieces of data (via virtual coin-tossing and cryptographic protocols) so that it can detect usage patterns without identifying individual users.
    • E.g.: to surface newly popular words that aren’t in Apple’s knowledge base or its vocabulary, links that suddenly emerge as more relevant answers to queries, or a surge in the usage of certain emojis.
    • “The traditional way that the industry solves this problem is to send every word you type, every character you type, up to their servers, and then they trawl through it all and they spot interesting things. We do end-to-end encryption, so we don’t do that.” – Federighi
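
The ‘virtual coin-tossing’ mentioned above is usually a form of randomised response, which is simple enough to sketch. This is not Apple’s actual protocol (which reportedly involves additional cryptographic machinery); it only shows the core trick: a coin flip makes any individual report deniable, while aggregate usage rates stay recoverable.

```python
import random

def randomized_response(truth: bool, p_honest: float = 0.5) -> bool:
    """With probability p_honest report the truth; otherwise report a
    uniformly random answer, so any single report is deniable."""
    if random.random() < p_honest:
        return truth
    return random.random() < 0.5

def estimate_true_rate(reports: list[bool], p_honest: float = 0.5) -> float:
    """Invert the noise: observed = p_honest * true + (1 - p_honest) * 0.5,
    solved for the true rate."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_honest) * 0.5) / p_honest

# Simulate 100,000 users, 30% of whom actually use a new emoji.
users = [random.random() < 0.30 for _ in range(100_000)]
reports = [randomized_response(u) for u in users]
print(f"estimated usage rate: {estimate_true_rate(reports):.3f}")  # ~0.300
```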

Header image from Levy’s article about the interview.

3. Examples of Machine Learning at Apple

As far as Siri, the core product, is concerned, Senior VP for Internet Software and Services Eddy Cue cites four components:

  1. speech recognition (to understand when you talk to it),
  2. natural language understanding (to grasp what you’re saying),
  3. execution (to fulfill a query or request), and
  4. response (to talk back to you).
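
A schematic Python sketch of how those four components compose; the function bodies below are placeholders invented for illustration, not real Apple APIs.

```python
# Schematic only: stub functions standing in for the four stages Cue
# describes. None of these are real Apple APIs.

def recognize_speech(audio: bytes) -> str:
    """1. Speech recognition: audio in, text transcript out."""
    return "what's the weather in london"  # placeholder output

def understand(transcript: str) -> dict:
    """2. Natural language understanding: text in, structured intent out."""
    return {"intent": "get_weather", "location": "london"}

def execute(intent: dict) -> dict:
    """3. Execution: fulfil the query or request (e.g. call a weather service)."""
    return {"condition": "cloudy", "temp_c": 14}

def respond(result: dict) -> str:
    """4. Response: turn the result into speech-ready text."""
    return f"It's {result['condition']} and {result['temp_c']}°C."

print(respond(execute(understand(recognize_speech(b"...")))))
```
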
Ways Apple uses ML in its Products

Examples:

  • When the phone identifies a caller who isn’t in your contact list (but did email you recently).
  • When you swipe on your screen to get a shortlist of the apps you are most likely to open next.
  • When you get a reminder of an appointment that you never got around to putting into your calendar.
  • When a map location pops up for the hotel you’ve reserved, before you type it in.
  • When the phone points you to where you parked your car, even though you never asked it to.
  • When a machine learning model for “palm rejection” enables the screen sensor to detect the difference between a swipe, a touch, and a pencil input with a very high degree of accuracy (see the sketch after this list).
  • Changes in Siri’s voice
    • Previously Siri’s remarks came from a database of recordings collected in a voice center. Each sentence was a stitched-together patchwork of those chunks.
    • Machine learning smooths these chunks out and makes Siri sound more like an actual person. These changes will be in the iOS 10 release in Fall 2016.
  • Siri began using ML to understand user intent in November 2014, and released a version with deeper learning a year later.
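
The “palm rejection” item above is at heart a small classification problem. Below is a hypothetical sketch using scikit-learn’s nearest-neighbour classifier; the features (contact area, pressure, duration) and the toy training data are invented for illustration, not Apple’s actual inputs or model.

```python
# Hypothetical sketch: palm rejection framed as classification.
# Features and data are invented; the article does not describe
# Apple's actual model or inputs.
from sklearn.neighbors import KNeighborsClassifier

# Features per touch event: [contact_area_mm2, pressure, duration_ms]
# (a real system would scale the features; skipped here for brevity)
X_train = [
    [2.0, 0.3, 40],    # quick fingertip swipe
    [3.1, 0.5, 120],   # deliberate finger touch
    [0.4, 0.8, 90],    # fine pencil-tip input
    [45.0, 0.2, 300],  # broad resting palm
]
y_train = ["swipe", "touch", "pencil", "palm"]

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X_train, y_train)

# A new touch event: the large contact area marks it as a palm to ignore.
print(clf.predict([[40.0, 0.25, 280]]))  # ['palm']
```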

Note: Apple is finally opening Siri up to other developers!

ML techniques Apple uses:

Some of the previous techniques remained operational — if you’re keeping score at home, this includes “hidden Markov models” — but now the system leverages machine learning techniques, including deep neural networks (DNN), convolutional neural networks, long short-term memory units, gated recurrent units, and n-grams.
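
Of those techniques, n-grams are the simplest to illustrate. Here is a toy bigram model in Python, which assumes nothing about Apple’s systems: it estimates the probability of the next word from counted word pairs, roughly the kind of statistic a QuickType-style suggestion feature relies on.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model would be trained on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each other word.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_probs(prev: str) -> dict[str, float]:
    """P(next | prev), estimated from bigram counts."""
    counts = bigram_counts[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```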
