Apple’s Differential Privacy

Business executives at Apple have always been somewhat ambivalent about customer privacy. On the one hand, they routinely claim to hold user data to a far higher standard of confidentiality than many other technology firms do. And yet, on the other hand, artificial intelligence services like Siri cannot learn their users’ preferences without access to precisely that personal information.

Last week, Apple drew attention to its new operating system by announcing that it will employ a technique known as differential privacy to balance these countervailing business imperatives. The term refers to the practice of injecting statistical noise, in effect dummy (i.e. false) data, into a large data set so that it becomes far more difficult for any party with access to the data to identify a particular user.

How does it work? Imagine, for instance, a bachelor who owns a single residential property. A fictitious wife and a vacation home might be added to his record in the aggregated “big data” set, even though neither appears in his individual personal profile.
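Apple has not published the exact mechanism it uses, but a classic building block of differential privacy is “randomized response,” which captures the idea described above. The Python sketch below is purely illustrative: the 75% truth-telling probability, the 30% ownership rate, and all names are assumptions, not details from Apple’s system. It shows how individual answers can be falsified while the aggregate statistic is still recoverable.

```python
import random

def randomized_response(truth: bool, p: float = 0.75) -> bool:
    """Report the true answer with probability p; otherwise flip a fair coin.
    Any single report is deniable, since it may well be the dummy answer."""
    if random.random() < p:
        return truth
    return random.random() < 0.5

def estimate_true_rate(responses: list[bool], p: float = 0.75) -> float:
    """Recover the population rate from the noisy reports.
    E[reported] = p * true_rate + (1 - p) / 2, so invert that linear relation."""
    reported = sum(responses) / len(responses)
    return (reported - (1 - p) / 2) / p

# Simulate 100,000 users, 30% of whom (hypothetically) own a vacation home.
population = [random.random() < 0.3 for _ in range(100_000)]
noisy = [randomized_response(owns) for owns in population]
print(f"recovered estimate: {estimate_true_rate(noisy):.3f}")  # close to 0.300
```

The key point of the inversion step is that the aggregator never needs to know which individual answers were genuine; it only needs to know the rate at which noise was injected.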

It’s a potentially effective strategy, but it’s a risky one as well. After all, a hacker might thwart its intent by discovering a way to identify and then delete the false content. Or the firm might mismanage its systems and lose the ability to distinguish between the true and the false data.

Given such concerns, perhaps Apple should consider a simpler approach to protecting user data. At the moment, it requires users to accept its nearly incomprehensible fine-print disclosures before they install its software on their devices.

Instead, perhaps the firm could simply explain the benefits and risks of its data management practices in plain layperson’s language. Each prospective user could then make an informed decision about whether the benefits of using the services justify the risks of doing so.

Such a policy would place Apple squarely on the side of the principle of information transparency. It would also eliminate the need to engage in differential privacy techniques.

But what if Apple doesn’t opt for this policy? Then it’s quite possible that the firm will continue to employ such techniques for the foreseeable future, mixing its good data with the bad.