Reputation and Accountability in the Age of Algorithms

Whenever artificial intelligence and algorithms are the topic of conversation among communicators, the focus is usually on how these technologies will take over tactical activities – think of automated reporting, chatbots or big data analytics.

This article is not about that. Rather, it is about the revolutionary shifts that these technologies bring to organizations as a whole, and how these shifts create new challenges for communicators. It is about understanding the ways in which these technologies reshape how organizations engage with their stakeholders.

Public Unease about Algorithms
Increasingly, operations, choices and decisions that not long ago fell under the control of human actors are at least partially delegated to computerized algorithms. These systems outperform humans in identifying important relationship patterns across vast and distributed datasets, and have already become instrumental in, for example, online shopping, equity trading, hiring and promotion, or even recommending medical treatments to physicians and sentences to judges. However, these systems can and do fail. They may reinforce social inequality, encroach on consumer privacy, unethically influence stock and commodity markets, or even sway election outcomes.

In response, there is growing public unease about these technologies and their social ramifications, paired with increasing calls for more transparency. Public concerns about algorithms are central not only for their creators but also for the rapidly growing number of organizations that employ them. As more and more people interact on a constant basis with algorithms, the public perception of organizations increasingly depends upon them. Algorithms not only represent and shape user experiences of the organization that owns them, but also affect the reputations of organizations that rely on third-party algorithms as part of their value chain.

As many organizations interconnect with the influential algorithms of Amazon, Google, Facebook and the like, their reputations also partly depend upon the algorithmic activities of these large players.

Communicators will increasingly be in charge of managing reputational concerns about algorithms. To do this effectively, they need to understand the specificities of algorithms and the public’s concerns about them.

Algorithms Are Quickly Reshaping All Kinds of Decisions
Broadly speaking, algorithms are “encoded procedures for transforming input data into a desired output, based on specified calculations”.1 As such, algorithms can, in principle, be performed by humans and can be found in any culture with mathematical procedures. However, as performed by computerized systems, they have quickly proliferated as a rational means of everyday decision-making. Within the last two decades, algorithmic decision-making has been popularized by, for example, Amazon’s product recommendations, Google’s search results and Facebook’s timeline algorithm. Initially, the debate about algorithms focused mostly on ‘soft’ decisions, such as the question of how algorithmic recommendations might change the book market or how a ‘filter bubble’ might alter public discourse over the long run.
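To make this definition concrete, here is a minimal, hypothetical sketch in Python of such an encoded procedure: a toy “customers also bought” recommender in the spirit of the product recommendations mentioned above. The function, item names and shopping baskets are all invented for illustration.

```python
from collections import Counter

def recommend(purchase_history, all_baskets, top_n=3):
    """Toy recommender: counts how often other items co-occur with the
    user's purchases across all shopping baskets (invented logic)."""
    co_counts = Counter()
    for basket in all_baskets:
        if any(item in basket for item in purchase_history):
            for item in basket:
                if item not in purchase_history:
                    co_counts[item] += 1
    return [item for item, _ in co_counts.most_common(top_n)]

baskets = [
    {"novel", "bookmark"},
    {"novel", "reading lamp"},
    {"novel", "reading lamp", "tea"},
    {"cookbook", "apron"},
]
print(recommend({"novel"}, baskets))  # e.g. ['reading lamp', 'bookmark', 'tea']
```

The point is not the specific rule but the structure: input data (baskets) go in, specified calculations (co-occurrence counts) run, and a desired output (a ranked recommendation) comes out.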

During the last five years, however, something changed. First, it became obvious that these algorithmic decisions were not so ‘soft’ after all: the Brexit vote and the 2016 American presidential election triggered public debate about the role played by Facebook’s timeline algorithm in favoring extreme political positions, which resulted in several parliamentary hearings with Facebook board members. Second, algorithmic decision-making systems became the object of increased public scrutiny.

Two prominent cases stand out. Criminal justice algorithms (CJAs) for risk assessment and predictive policing have been criticized for reproducing existing social differences, because their machine learning models are trained on historical cases. And patient assessment systems (PAS) have been likened to ‘uncertified doctors’, ultimately taking over decisions about life and death in cases where uncertain data do not allow clear, binary yes-or-no decisions. Today, algorithmic decision-making is no longer a topic for technicians and specialists alone. Algorithms and the proliferation of machine-based decisions are quickly reshaping countless spheres of life.
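To illustrate the mechanism behind the CJA criticism, consider this deliberately simplified, hypothetical Python sketch: a “risk score” that merely learns re-arrest rates from invented historical records will echo back whatever disparities those records contain.

```python
from collections import defaultdict

# Invented historical records: (district, was re-arrested?). If district A
# was historically over-policed, its records contain more re-arrests.
history = [("district_A", 1), ("district_A", 1), ("district_A", 0),
           ("district_B", 0), ("district_B", 0), ("district_B", 1)]

counts = defaultdict(lambda: [0, 0])  # district -> [re-arrests, total cases]
for district, rearrested in history:
    counts[district][0] += rearrested
    counts[district][1] += 1

def risk_score(district):
    """'Learned' score: simply the historical base rate for the district."""
    rearrests, total = counts[district]
    return rearrests / total

print(f"district_A: {risk_score('district_A'):.2f}")  # 0.67
print(f"district_B: {risk_score('district_B'):.2f}")  # 0.33
```

The score is not malicious; it simply reproduces the pattern of the older cases it was fed.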

Reputational Concerns about Algorithms
There are multiple ways in which algorithms can cause stakeholders to be concerned. Three types of concern can be distinguished:2 evidence concerns, outcome concerns, and opacity concerns.

Evidence concerns can surface on three levels. First, decision-making algorithms can be criticized because they may give inconclusive evidence by producing merely probable outcomes. Their calculations allow for “best guesses” based on probabilities, but never for certain results. Second, these algorithms may give inscrutable evidence when knowledge about the input data and their use is limited. Finally, they may give misguided evidence when their conclusions rely on inadequate inputs; in other words, “garbage in, garbage out”. Evidence concerns are highly apparent in, for example, PAS, which are used to project the success of medical treatments and the likelihood of patients’ deaths.

PAS use information about medical treatments, diagnoses of particular patients, and comparative patterns of common therapies. They may neglect, however, individual factors of personality and psyche, such as a patient’s will to survive, which have proven critical to treatment success. As doctors themselves often do not know the information basis and estimation procedures of proprietary PAS, these systems have recently become a topic of strong public concern.
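A hypothetical sketch can make the probabilistic character of such evidence concrete. The model below mimics a patient-assessment score: all weights and inputs are invented, the output is always a probability strictly between 0 and 1 – a “best guess”, never a certain result – and the model is silent about anything, such as a patient’s will to survive, that is not encoded in its inputs.

```python
import math

def treatment_success_probability(age, comorbidities, prior_response):
    """Hypothetical risk score: a weighted sum of inputs squashed through a
    logistic function. The weights are invented for illustration only."""
    score = 2.0 - 0.03 * age - 0.8 * comorbidities + 1.5 * prior_response
    return 1 / (1 + math.exp(-score))  # strictly between 0 and 1, never certain

p = treatment_success_probability(age=70, comorbidities=2, prior_response=1)
print(f"Estimated probability of treatment success: {p:.2f}")  # ~0.45
```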

Second, algorithms pose outcome concerns. Algorithms are not perfect. They may produce unfair, biased or factually incorrect results. They have, for instance, been found to discriminate against certain groups of people (as in the case of profiling algorithms). Such outcome concerns are apparent in the case of automated content: several news agencies use news robots to produce, for example, financial news. Stock market data are automatically translated into text, a model that pays off precisely because no human editors are needed to check the output. Any error in these outputs would obviously raise serious issues for related trades and for the respective news agency.
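As a rough sketch of how such a news robot can work (actual vendor systems differ; everything here is assumed for illustration), structured market data are slotted into pre-written sentence templates without any human editor in the loop:

```python
def stock_news(company, ticker, close, prev_close):
    """Toy template-based news robot: turns structured market data into a
    one-sentence report. No human editor checks the result."""
    change = (close - prev_close) / prev_close * 100
    direction = "rose" if change >= 0 else "fell"
    return (f"{company} ({ticker}) shares {direction} {abs(change):.1f}% "
            f"to close at ${close:.2f}.")

print(stock_news("Acme Corp", "ACME", close=102.30, prev_close=99.10))
# Acme Corp (ACME) shares rose 3.2% to close at $102.30.
# A single wrong input (say, a misreported prev_close) flows straight into
# published text -- precisely the outcome concern described above.
```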

Three Fundamental Concerns with Algorithms

  • Evidence Concerns: Algorithms may entail inconclusive, inscrutable and/or misguided evidence
  • Outcome Concerns: Algorithms may generate unfair, biased or factually incorrect outcomes
  • Opacity Concerns: Complex decision-making systems based on algorithms pose fundamental challenges to transparency

Both evidence and outcome concerns are common, but they are not necessarily linked to complex algorithmic decision-making systems. However, the third set, opacity concerns, is qualitatively different in this regard. These concerns arise in the context of nearly all complex decision-making systems, as such systems remain – at least in part – opaque. And this is not just about companies actively keeping them secret in order to protect their competitive advantage. The fluidity of these systems makes it excessively difficult, and in some cases even impossible, to detect problems and identify causes even if organizations grant access. Why is that the case?

Understanding Algorithmic Opacity
In light of ever more complex algorithmic decision-making systems, calls for transparency are indeed “disappointingly limited” and “doomed to fail”.3 This is not simply because algorithms are the property of corporations that do not want to lose their competitive edge and do not want users to manipulate their algorithms; it is because merely seeing mathematical operations does not make them meaningful or comprehensible.

To understand an algorithm means to understand the problem that it helps to solve, not to simply study a mechanism and its hardware: “Trying to understand (an algorithm) by reducing actions to lines of code would be […] like trying to understand bird flight by studying only feathers”.4

This holds especially true for machine learning algorithms, which are in large part shaped by the training data they use, but also for digital data in general, as “Data have no value or meaning in isolation. All parts of the infrastructure are in flux […]”.5 Opacity is thus not only a result of technical complexity, but also of the fact that, in practice, these technologies are not simply reducible to their parts.

The last one or two decades have brought about a new kind of algorithm, the self-learning algorithm, which poses even stronger opacity challenges. Self-learning algorithms are sets of rules defined not by programmers but by algorithmically produced rules of learning. In other words, these are algorithms that program ever new algorithms. As a result, they can be assessed only experimentally, not logically.
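A minimal sketch, in pure Python with invented data, can make this experimental character concrete. The decision rule of the tiny perceptron-style learner below is written by the training loop, not by a programmer; after training, the rule exists only as learned numbers, so the only way to know what the system decides is to probe it with inputs:

```python
import random

random.seed(0)
# Invented training data: points in the unit square, labeled 1 when
# x1 + 2*x2 > 1 (a rule the learner is never told explicitly).
points = [(random.random(), random.random()) for _ in range(200)]
data = [((x1, x2), int(x1 + 2 * x2 > 1)) for x1, x2 in points]

w1 = w2 = b = 0.0                      # the 'rule' starts empty of meaning
for _ in range(50):                    # perceptron-style learning loop
    for (x1, x2), label in data:
        pred = int(w1 * x1 + w2 * x2 + b > 0)
        w1 += 0.1 * (label - pred) * x1
        w2 += 0.1 * (label - pred) * x2
        b  += 0.1 * (label - pred)

# The learned rule lives in numbers shaped by the data, not in legible code;
# we assess it experimentally, by probing:
print(f"learned parameters: w1={w1:.2f}, w2={w2:.2f}, b={b:.2f}")
print("probe (0.9, 0.9) ->", int(w1 * 0.9 + w2 * 0.9 + b > 0))  # expect 1
print("probe (0.1, 0.1) ->", int(w1 * 0.1 + w2 * 0.1 + b > 0))  # expect 0
```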

There are instances, then, where algorithmic opacity must be accepted as a practical reality. Machine learning systems, unlike older rule-based algorithms, are often neither conducive to nor designed with human understanding in mind. There are several reasons for this, chief among them that corporations keep such systems secret for strategic reasons and that we do not (yet) have the socio-technical means to make them comprehensible to human collectives.

However, as Kirsten Martin has recently argued, if the argument “too complicated to explain” simply sufficed, organizations would be incentivized to produce complicated systems precisely in order to avoid accountability.6

What’s Next for Communicators?
The proliferation of algorithms in organizations brings about a new set of concerns with organizational conduct that communications leaders must address. Addressing these concerns effectively demands a basic understanding of how algorithmic systems work and of how they shape the different kinds of reputational concerns that will inevitably emerge around these technologies.

A key reputational concern here is algorithmic opacity and the challenge it poses to communicators in safeguarding organizational accountability: how can communicators manage accountability when their organizations introduce more and more systems that are essentially “black boxes” and are perceived as poorly transparent, even “creepy”, technology?

Dr. Alexander Buhmann is a researcher working at the intersection of communication, new technology, and management. He is currently assistant professor at the Department of Communication and Culture at BI Norwegian Business School, co-director of the BI Centre for Corporate Communication, and research fellow at the USC Annenberg School for Communication and Journalism’s Center on Public Diplomacy.

References:

1 Gillespie, T. (2014). The Relevance of Algorithms. In T. Gillespie, P.J. Boczkowski & K.A. Foot (Eds.), Media Technologies: Essays on Communication, Materiality, and Society. Cambridge/MA: MIT Press, pp. 167-194.

2 See for an overview: Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The Ethics of Algorithms: Mapping the Debate. Big Data & Society, 3(2).

3 Crawford, K. (2016). Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics. Science, Technology & Human Values, 41(1), 77-92.

4 Marr, D. (1982). Vision: A Computational Investigation Into the Human Representation and Processing of Visual Information. San Francisco: W.H. Freeman & Company.

5 Borgman, C. L. (2015). Big Data, Little Data, No Data: Scholarship in the Networked World. Cambridge/MA: MIT Press.

6 Martin, K. (2018). Ethical Implications and Accountability of Algorithms. Journal of Business Ethics, Online First, pp. 1-16.
