Tag Archives: Cybersecurity

American Capitalism

Remember Yahoo? Twenty years ago, it was a titan of the internet, with services that ranged from email to search to web hosting to video.

But it failed to maintain its competitive position against emerging firms like Google, Facebook, and Amazon. And although it hired Marissa Mayer away from Google in 2012 to become its new Chief Executive Officer, its market share continued to decline.

Even worse, the firm suffered multiple massive data breaches during Mayer’s tenure. Hackers gained access to the personal information of (quite literally) billions of users. Ultimately, Mayer arranged for the firm’s core American assets to be sold to Verizon.

To be fair, one can certainly argue that Yahoo was already beyond any chance of resuscitation when Mayer came aboard. And yet one cannot deny that the firm clearly failed on her watch.

So what will happen to Mayer after Verizon acquires Yahoo? Apparently, she’ll receive a $23 million severance package. And earlier this month, the firm publicly clarified that she will earn these benefits on top of $56 million worth of previously earned stock options.

Ms. Mayer undeniably risked her career by moving from Google to Yahoo. And according to the principles of American capitalism, she should have expected to receive lavish economic rewards if she had succeeded at reviving the firm.

But according to those same principles, stakeholders in failed organizations should expect to share in the losses of their business entities. After all, if they are eager to share in the spoils of success, they should also be willing to bear the risks of failure.

But in Mayer’s case, and in many similar cases, the very corporate officers who preside over the failure of their firms are immensely (and perversely) rewarded despite their outcomes. In other words, they receive the spoils of success, whether they actually succeed or fail.

That may simply represent an ingrained feature of American capitalism. But it cannot possibly be a productive condition for the long-term health of the American economy.

How Fast Is Facebook?

We’re all generally aware that the web servers of social networking platforms like Facebook are capable of processing data very quickly. But do we really comprehend how quickly?

Until recently, I didn’t really comprehend data processing speeds at all. But then I signed up for a new Facebook account. Although I originally opened a personal account many years ago, I deleted it after becoming frustrated at the platform’s constant modifications to its privacy controls. Frankly, I didn’t see why I couldn’t simply instruct the service that “only I should be able to post items to my account pages” once and once only.

But after a colleague convinced me that the platform’s social networking capabilities might warrant a second look, I ventured onto Facebook’s home page and reviewed the sign-up instructions.

I was asked for my name, an email address, and two or three other brief items of identification. That seemed reasonable to me! I was then asked whether I wished to give Facebook access to the electronic address book that is associated with my email account, so that the social network could help me locate my friends. Thanks, but no thanks! I declined that offer.

After a brief moment’s delay, I logged into my new account. And to my astonishment, I was immediately presented with a list of people whom (according to Facebook) I might know, and whom I might wish to “friend.”

Why was I astonished? Well, most of the names on that list were recognizable to me. They ranged from good friends whom I contact often, to total strangers whom I briefly contacted for business reasons on a single occasion many years ago.

For a while, I was flummoxed. How could Facebook know so many of my past and present contacts, across such a broad range of personal and business relationships, if I declined to open my electronic address book to the service? And then the answer struck me.

Although Facebook didn’t have access to my address book, it did know my email address. And if many of Facebook’s existing users had opened their address books to Facebook when they first signed up for the social network, the algorithms could have searched through many (or perhaps even all) of those address books for my email address.

So quickly, though? During that single brief moment while I signed up for the service? Apparently, Facebook is fast. Really fast.
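That speed is easier to believe once you consider how such a lookup might be structured. A minimal sketch, using entirely hypothetical names and a toy data set: if the service maintains a precomputed "inverted index" that maps each email address to every member whose uploaded address book contains it, then matching a brand-new signup against billions of uploaded contacts is a single dictionary lookup, not a search.

```python
from collections import defaultdict

# Toy stand-in for uploaded address books: each existing member
# who granted access is indexed by the addresses in their book.
contacts_of = {
    "alice": ["carol@example.com", "dave@example.com"],
    "bob":   ["carol@example.com", "erin@example.com"],
}

# Build the inverted index once, ahead of time:
# email address -> members who have that address in their book.
index = defaultdict(set)
for member, book in contacts_of.items():
    for email in book:
        index[email].add(member)

def suggest_friends(new_user_email: str) -> list:
    """Effectively constant-time lookup at signup: everyone who
    already holds this address is a candidate "person you may know"."""
    return sorted(index.get(new_user_email, set()))

print(suggest_friends("carol@example.com"))  # ['alice', 'bob']
```

Facebook's real pipeline is, of course, vastly larger and more sophisticated, but the design choice is the same: the expensive work happens when address books are uploaded, so the moment of signup costs almost nothing.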

Of course, it might be worth pondering a couple of follow-up questions. Do most of the individuals who open their address books to Facebook when they sign up for accounts really understand how the social network plans to utilize that access? And is it really fair for Facebook to ask for access only once, and then to utilize it forever without ever asking again?

Reasonable minds may certainly differ over the answers to those questions. And yet there is one impressive fact that is not debatable at all: once we permit Facebook to access our personal information, it can make very fast use of our data.

Apple’s Differential Privacy

Business executives at Apple have always been somewhat ambivalent about the issue of customer privacy. On the one hand, they routinely claim that they maintain a much higher standard of confidentiality towards their user data than many other technology firms. And yet, on the other hand, artificial intelligence programs like Siri cannot learn the preferences of their users without accessing such personal information.

Last week, Apple drew attention to its new computer operating system by announcing that it will employ a technique known as differential privacy to balance these countervailing business imperatives. The term refers to the practice of injecting random noise, including dummy (i.e. false) data, into a large data set so that aggregate patterns remain measurable while it becomes much more difficult for a party with data access to identify any particular user.

How does it work? Imagine, for instance, a bachelor who owns a single residential property. A fictitious wife and a vacation home might be added to his “big data” file without being included in his individual personal profile.

It’s a potentially effective strategy, but it’s a risky one as well. After all, a hacker might thwart its intent by discovering a way to identify and then delete the false content. Or the firm might mismanage its systems and lose the ability to distinguish between the true and the false data.

Given such concerns, perhaps Apple should consider a simpler approach to protecting user data. At the moment, it requires users to read its incomprehensible tiny-print disclosure language before they install its software on their devices.

Instead, perhaps the firm could simply explain the benefits and risks of its data management practices in basic layperson’s language. Each prospective user could then make an informed decision about whether the benefits of utilizing the services justify the risks of doing so.

Such a policy would place Apple squarely on the side of the principle of information transparency. It would also eliminate the need to engage in differential privacy techniques.

But what if Apple doesn’t opt for this policy? Then it’s quite possible that the firm will continue to employ such techniques for the foreseeable future, mixing its good data with the bad.