Abstract

This article analyzes the various ways algorithmic logic structures, streamlines, and delimits the conception of time and memory; orders the logics of social arrangement; and delimits the political. The author considers the ways in which algorithms extend racial discrimination, rendering it less visible, less discernible, and so more difficult to address. He briefly formulates a notion of crypto-value embedded within algorithmic self-conception and elaborates an algorithmic ontology. The latter is distinguished from the contemporary understanding of the post-human. The essay concludes with a reflection on a politics of street encounter as a counter to prevailing algorithmic constraints on the political. “Coding time” accordingly concerns the coding of time, the conception of time embedded in coding, the sociality and value that coding produces, and the implications for being and being human that the time of coding is manifesting.

Human memory is nonlinear. In this it is like cognition more generally, as neuroscientists have been discovering of late. It is layered and relayered through feedback loops, crosscurrents, lateral networks, intersections, and intercessions. There may indeed be a deeper order—a deep structure—at play, as some neuroscientists inevitably seek and as leading AI and machine-learning efforts such as Google Brain are working to emulate. But it remains to date elusive.1

Algorithms, in their computational composition, have sought to offer something of an order. But they do so only by insisting on an imposed structure, an imperialism of arrangement. They challenge our experience and understanding of the relationship of past and present to the future in fundamental ways. And as such they insist on a different, novel sort of memory, a largely linear and nonlateral conception of time and relation, and by extension of the political.

Algorithmic memory is made up of myriad data points, latent until invoked, static until plugged into algorithmic movement with a beginning and end in exactly that order, formulaically bounded. Computational memory is a virtual infrastructure, hosted in hardware of course, for housing data. Algorithmically enacted memory involves a code-driven matching of current to saved data profiles in a queried domain. Algorithmic memory, then, is a data bank or structured storage of these encoded matches. Google Brain has trained itself, not inconsequentially, to recognize a cat on the basis of matching the image capture of a cat passing by with ten thousand stored cat images. Facial recognition technology on our smart devices is designed to operate exactly on this logic. Recognizability, as now generally acknowledged, is predicated on existing database profiles. What falls outside these stored profiles—almost invariably those that are not racially white or light—will fail to be identified, for better and for worse.2
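The matching logic can be suggested in a few lines. This is a minimal sketch: the encodings, threshold, and stored profiles are hypothetical stand-ins, not any actual system's.

```python
import numpy as np

# A minimal sketch of profile matching: a query counts as "recognized" only
# if it lies close enough to some already-stored profile. All data here are
# invented placeholders.
stored_profiles = np.random.rand(10_000, 128)  # e.g., 10,000 encoded images
labels = ["cat"] * 10_000                      # every stored profile is a cat

def recognize(query: np.ndarray, threshold: float = 0.5):
    distances = np.linalg.norm(stored_profiles - query, axis=1)
    best = int(np.argmin(distances))
    # What falls outside the stored profiles is simply not identified.
    return labels[best] if distances[best] < threshold else None
```

Recognition here is nothing but proximity to what the data bank already holds; the threshold decides what remains unrecognizable.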

Algorithms recognize patterns, which they in fact establish and enact. Algorithms are patterned and patterning thought. What does not fit the pattern of its making is either correctable error or literally irrelevant, meaningless, counterproductive, mere noise or detritus. Perhaps at the limit it is the very unthought, or at least the unrecognized, what in a different if suggestively related context Stuart Hall characterized as “the constitutive outside.”3

As self-generating, algorithms seek to defy the historical, to render the historical—history itself—beside the point. History, in contrast to the stored data points from the past, is instantaneously erased as the next algorithmically produced instant is immediately brought about. Even historical data become present the moment they are algorithmically invoked. Not the first time the end of history has been declared, to be sure, but this is perhaps the first more or less full enactment of perpetual-motion technology. The algorithmically recognized past exists only as a present. Perpetual virtuality reaches for virtual perpetuity.

It follows that thinking, on this profile, has no temporality other than instantaneity.4 This structural prescription contours thinking. It restricts how thought and memory operate as much as it formalistically delimits the legitimate modes and, by implication, the objects of critical thinking. Anything can be thought so long as it fits the modeling profile of this prescriptively formalistic reasoning, of the temporality of instantaneity. It follows that the critical insistence on preserving or renewing the receding conditions of slow time, slow thought, and slow being amounts to a politics of the counter, of a resistance to or, more radically, a break with the temporalities of algo-instantaneity.

“Coding time” accordingly concerns the coding of time, the conception of time embedded in coding, the sociality and value coding produces, and the implications for being and being human to which the time of coding is giving rise.

___________________

Anthropomorphic memory is made up of the recollection of pasts. While drawing on the bits of data in people's memory banks, such memory is not simply reducible to these data bits. It involves the recounting, the narratives composed to make sense of how those data fit together, re-membering the occurrence. There is a relation constitutive of human memory as such, threading together registers of the past and their being made present, their re-presentation to themselves and others.

For algorithmic memory, by contrast, the data and their operationalization are all there is. There is but a singularity to the plane of data qua data. It is not coincidental that we speak of algorithmic memory only in the singular. Algorithms produce no traumatic or happy memories, no affective reminiscences or recollections. Algorithmic memory acquires significance solely as perpetual presence. It may be tempting to suggest that human memory is a mode of interpretative mediation, whereas algorithmic memory is not. This would be a mistake. It would be more accurate to say that a different mode of mediation operates for each, interpretational narration in the former instance, data match-mapping in the latter. (The composition of the latter, of course, embeds interpretative assumptions.) Algorithmic memory consists not of memories produced by algorithms so much as the data bank, the cache, of algorithmically produced matches. It is no more or less than buckets of coded information.

___________________

Where technology once largely served human determination and decision-making, human judgment is now increasingly giving way to the projected “wisdom” of techno-determination. Advanced algorithms offer processing, beyond human capabilities, of almost inconceivably large quantities of data across a near limitless number of quantifiable factors. As a result, techno-generated predictions are being fashioned about future events, such as market behavior, fraud, accidents, and now an increasingly wide variety of human action. These predictions, in turn, strongly incline toward, if they do not directly activate, undertakings to make the predicted outcomes more likely, to mitigate them, or to benefit from them.

A number of jurisdictions in the United States, for example, have implemented an algorithm to determine bail eligibility on the basis of predictions about the future behavior of the accused. From inputted data about the accused's past actions, financial records, and record of reliability, the algorithm calculates the accused's probability of showing up to the next required court appearance or of jumping bail. The judge's experience and intuition, as well as her biases, are preempted from the decision-making. By all accounts to date, this has delimited personal bias as well as expressed racial and gendered bias.5 But it introduces new biases, notably a weighting toward class determinations. These, of course, embed the original racial and gendered biases, if less overtly and in less easily recognizable forms. The lack of transportation, or of the funds to pay for a public transportation ticket, to show up to past hearings will translate—instantaneously—into bail denial.
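What such a calculation amounts to can be suggested with a drastically simplified, hypothetical scoring function. The features, weights, and threshold below are invented for illustration and correspond to no deployed pretrial tool.

```python
# A hypothetical, drastically simplified pretrial risk score. Every feature
# name and weight is invented for illustration.
def flight_risk_score(prior_failures_to_appear: int,
                      prior_convictions: int,
                      stable_income: bool) -> float:
    score = 2.0 * prior_failures_to_appear + 1.0 * prior_convictions
    if stable_income:
        score -= 1.5
    return score

# A missed hearing raises the score identically whatever its cause, a lack
# of bus fare included: the class proxy, with its embedded racial and
# gendered histories, registers only as another failure to appear.
deny_bail = flight_risk_score(prior_failures_to_appear=2,
                              prior_convictions=0,
                              stable_income=False) > 3.0  # True
```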

The use of algorithms such as VITAL (Validating Investment Tool for Advancing Life Sciences) to make decisions is also increasingly widespread. VITAL surveys very large tranches of data to project which investments are likely to prove most profitable as a consequence of the patterns revealed in the reviewed data. Deep Knowledge Ventures, a Hong Kong venture capital fund, saw fit in 2014 to appoint VITAL to its board of directors to take advantage of its investment recommendations.6 Such algorithmic decision-making, however, is based, as with algorithmic predictiveness, on surface pattern recognition and the identification of regularities and/or irregularities. In other words, to understand algorithmic knowledge is also to rethink the normativity of the surface. Giuliana Bruno theorizes surfaces in terms of their vital materiality, reading them as critical sites of encounter, connectivity, and communication: they are, she writes, “sites that are able to hold in their structure substantial forms of haptic, folded, material experience; their ‘superficial’ texture is touchable and therefore touching, and can even convey material relations, creating forms of encounter.”7 Algorithms may embed their coders' desires or biases, of course. But they recognize, insofar as they can be said to recognize anything at all, only data, or coded and codable information.

Algorithms thus place probabilizing predictiveness on steroids: more data crunching, more patterns, more statistical application, more assurances not of y pertaining per se but of the probability of its likelihood. The dominance of algorithms in such predictive analytics creates a world of empirical measurement, tabulation, instrumentation and instrumentalization, statistical modeling, and utilitarian calculation. In short, a contouring of the social that delimits the thinkable as much as it opens up to productive possibility.

Predictions, no matter by whom or what, have always been made in the same way: analyzing data from the past or present in order to forecast the future. Predictions are not speculations. They take the form of hypothetical imperatives: if x then y, given both the probability that x occurs and the degree of likelihood that, once occurring, it gives rise to y. (The occurrence of x + 1 leads to a different probabilistically established outcome than y, the variation depending on the magnitude of the “+ 1.”)
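Formalized, and assuming y arises only by way of x, the prediction compounds two probabilities:

$$P(y) = P(x)\cdot P(y \mid x)$$

A forecast of y at 80 percent likelihood given x, where x itself occurs half the time, predicts y at only 40 percent.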

These techno-commanding instrumentalizations amount to the command of hypothetical imperatives, of the means-ends variety. Indeed, we could push even further to say that the hypothetical has become categorical in the sense that human subjects across a wide variety of domains are now techno-compelled to pursue nothing but the ends of their instrumentation. These are the ends given by technology of the kind by which human subjects are now being more or less universally networked. As algorithmic effect (if not intention, where the algorithm can be said to have intention), the algorithmic is both the giving of orders, commanding ways of doing and by implication ways of being, and the creating of order, even the imposing of it.

The algorithmic accordingly produces the repetitive, the patterned, the structured, the ordered. It serves as “a process, a program with clearly defined limits, a finite instruction sequence . . . a formula capable of accommodating different values and yielding different results.”8 Its repetitions structure the rhythm of lived experience and sensibility. They shape sensibilities and affects rhythmed by—when not (occasionally) disrupting—the structures of social arrangement.

___________________

Algorithmic time, then, points to a sense in which time generally could be said to be always in a critical condition. As instantaneity, as immediate presence, time ceases to be upon its instantiation. It is not, as it is. Its presence is immediately—instantaneously—its past. For algorithmic memory, this immediately instantiated past becomes just another data point, deposited in the data bank. This highlights the contrast with the multiplicitous temporalities that make up the relational play of anthropomorphic thinking, the entanglements of pasts, presents, and futures both irreducible to each other and, to paraphrase Jean-Luc Godard, not necessarily in that conventional order.

Algorithmic reasoning, in contrast to the slow(er) time of anthropomorphic memory, is constitutively disposed to futurity-made-present. The time of instantaneity, of time as instant, is the time for which the algorithmic reaches, its endpoint. It takes 250 microseconds to techno-read one megabyte of memory: that's 0.00025 seconds. If you can imagine this as speed, it means reading information more than thirty times the length of this entire article in less than the blink of an eye! It is almost unimaginably fast.
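A back-of-envelope check of those figures, taking the essay's length and the blink's duration as rough assumptions:

```python
# Checking the cited figure: 1 MB read in 250 microseconds.
read_time = 250e-6              # seconds per megabyte, per the text
mb = 1_000_000                  # bytes
article_bytes = 30_000          # assume ~30 KB of text in this essay
blink = 0.1                     # assume ~100 ms per blink

print(mb / article_bytes)       # ~33: one megabyte is ~30x this essay
print(read_time < blink)        # True, by a factor of 400
print(mb / read_time / 1e9)     # 4.0: a sustained 4 GB per second
```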

This virtuality—in which each endpoint only reaches for the next, and the next after that, ad infinitum—is most readily evidenced in algorithmic trading. High-frequency trading is calculated and enacted on the basis of massive volumes of data, updated instantaneously with data about other trades, trends, earnings reports, the constantly shifting volatility index, breaking news about politics, takeovers, buyouts, and so forth. The update necessitates the next trade, and then the next. When a large volume of stock is sold, the algorithm will instantaneously recognize the resultant dip in the stock's price, issue a buy order in a matter of microseconds, and then, when the price rises as a result of the increased algo-demand, sell the stock seconds later at the higher price. A one-cent increase in the price of a million shares will generate an algo-trading profit of ten thousand dollars in seconds. The lag time between microseconds and seconds is a function not of algorithmic application so much as the relatively slow time of real-world trading registration. There is no rest, either for the data inputs and outputs or for the generated trade. Conventionally set clocks are forgone, overridden by the rhythms of data in, calculations, trades, data out, new data in . . . repeat . . . twenty-four/seven. The world is awash not in crypto-currency so much as algo-cryptic virtual value. Algorithmic temporality is teleological, reaching always for its immediate(d) end(s), only to be redirected instantaneously to a new end as soon as the previous one has been realized.
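A toy rendering of that dip-and-rebound loop, with invented prices and share volume, confirms the arithmetic:

```python
# A toy rendering of the high-frequency loop described above. Prices,
# volume, and timing are invented; no trading API is modeled.
shares = 1_000_000
buy_price = 10.00    # bought microseconds after the detected dip
sell_price = 10.01   # sold seconds later, one cent up on the rebound
profit = shares * (sell_price - buy_price)
print(f"${profit:,.0f}")  # $10,000, as in the example above
```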

Algorithms also frame and so structure the limits of the conceivable and doable regarding the subject (matter) of their address and reference point. The algorithm makes the world molded to its enumeration. Algorithmic life is, in short, a lifeworld, life and world delimited by the reach of algorithmic reference. If the limit of the world is the limit of its language, then it could be said (riffing on Ludwig Wittgenstein) that the limit of the world—in shape and formulation and reach and temporality—is now the limit of the algorithmic (itself a linguistic product, after all). “World-limit” (or world possibility) is now more and less set by the scope and movements—the rhythm—of the algorithm.

So it is no longer adequate simply to track the development of computational power. What the computer—or, more broadly and perhaps less materially, artificial intelligence—is capable of next, in terms of how quickly and comprehensively it can learn so as to respond intelligently to input, is growing beyond the capacities of human reasoning. It is becoming pure potentiality. It is no longer that the limits of language are the limits of world-making. Rather, the limits of data processing have surpassed, if not replaced, language as the driving limit case of instantaneous worlding.

The world—or, one might better say, worlds—is awash with data that algorithms are capable of processing at speeds and levels of sophistication far outstripping human capabilities. And alongside this, even as a result at least in part, there is an unpredictable, even ironically incalculable potential of computer processing power due to computers' capabilities for learning.

Self-driving cars, creeping closer to coming to market, navigate on the basis of algorithmically accumulated data about surroundings and destinations. They predicate driving decisions on algorithmic instructions obtained from other algorithms along with quite comprehensive data feeds about the driving environment, Google-mapped directions, and so forth. Ultimately, such data could include everything conceivably known about and by other cars around one, from direction and speed and signaled changes to destinations and mapped directions. Before long, robotics will populate a vast array of work functions and workplaces, dramatically transforming a modernist political economy predicated on the dignity of non-alienated laboring to one ordered around artificial intelligence, techno-determination, and (as with repeated historic predictions of machinic domination) a renewed challenge to human relevance. This challenge and its attendant worries, if not its dreads, find cultural expression from Frankenstein to Ian McEwan's robotic “Adam.” McEwan's “Adam,” as the name suggests, threatens the genesis of a new order in which robots rule the home and the bedroom, reproduction and not just the production of surplus value.9

___________________

Algorithms, accordingly, are key to these increasingly deep learning capabilities, not only in terms of their sophistication but also in the rapidity of their development and improvement. The periodic doubling of computer processing power described by Moore's Law amounts, crucially, not to an incremental but to an exponential advance in computational capability. Numerous commentators now argue that we are at a late stage of this exponential increase, its doublings slowing.10 In short, artificial intelligence milestones have been reached with accelerating, though now diminishing, rapidity.
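The arithmetic of repeated doubling can be sketched in two lines; the two-year doubling period is the canonical assumption, not a law of nature.

```python
# Repeated doubling is exponential growth: with an assumed two-year
# doubling period, capability after t years is 2 ** (t / 2) times the base.
def capability_multiple(years: float, doubling_period: float = 2.0) -> float:
    return 2 ** (years / doubling_period)

print(capability_multiple(20))  # 1024.0: a thousandfold in two decades
```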

Algorithms increasingly interact among themselves. The trading algorithms mentioned above trade with other trading algorithms, resulting in multitudes of transactions executed at speeds and in quantities never to be witnessed by or even made intelligible to humans. Even the most sophisticated quants have been at a loss to explain exactly how particular algo-trading decisions are arrived at. And those programming AlphaGo are completely incapable of explaining the algo-generated strategies in the program's victories over the world's leading Go players. Trading decisions may be made by algorithmic computation with complete automation, or auto-generation, so that human intervention is further downplayed or even excluded altogether. James Bridle worries that “the machines are learning to keep their secrets.”11

The logic implicit here is best evidenced by the workings of preemption. This newly emergent “ontopower,” as Brian Massumi puts it, anticipates threats or challenges and seeks to preempt their occurrence by acting on their predicted causes. It “seeks to act,” in his characterization, “on the time before . . . the time of threat.”12 Preemption thus is a time machine of our time, altering the future by redirecting the time presently inhabited or that between the time inhabited and the actualization of the projected threat or challenge. It is an acting (out) on the present future, to make a determined future present. Time itself has become techno-directed, operationalized technologically. The logic of preemption mirrors that of algorithmic being: data induce the perception of a threat or challenge; the system then acts not just to prevent the threat's occurrence but to bring about a state of affairs rendering its coming to be impossible. If t then p, so as to ensure not-t. Preemption entails altering the causative conditions so the threatened or challenging outcome is derailed, disentailed. History in the making instantaneously amounts to remade history.

This slippery reach for maker history, for DIY more or less instantaneous future-presents, overlooks the hold of pasts embedded in the material culture of the present. Temporal simultaneities abound, haunting any makeover. Think of pasts embedded in metropolitan architecture. In Bordeaux, for example, reliefs of slavery scenes on the exteriors of nineteenth-century buildings make those pasts present. The memory of slavery is cemented in the buildings, made present, given new, contemporized meaning, itself forwarded and transformed in its futurity. Urban walkabouts subliminally embed images—“memory recalls”—of domination. Memories accordingly are rendered layered, competitive, subliminally contested, multidirectional, multimodal in the transforming temporalities of their re-constitution.

Ann Stoler gets at these tensions in her most recent book, Duress: Imperial Durabilities in Our Time.13 Duress, as she puts it in part, means lasting, continuity, persistence, endurance, extension. To act under duress is to act under compulsion, to be made to do something, as in “serving time.” If serving time can be invoked as a more extended metaphor for the past's relation to time, the algorithmic seeks to stand that relation on its head: time is remade to serve this iteration of the technological. Time endured, unlike algorithmic time, tests one's durability, one's persistence in, through, and across time. Algorithmic time tests only one's capacity to keep pace.

___________________

Evident here is the distinction—perhaps better, the tension—between memory and make-believe. History in the making has become make-believe in constant demand. Make-believe is different from mistaken memory, or from fantasy, for that matter. Make-believe amounts to fabrication, collapsing the distinctions between sewing together a fabric of narration, misleading oneself, and compelling others to believe the made-up account or representation.

Judgment itself becomes functionally algorithmic, nothing but the determination of the decision by algorithmic computation. Judgment, in other words, becomes—is reducible to—computationally recognized and computable data. For example, Google Assistant (Assist), a Siri-on-steroids, is intended to be the omniscient database of all and any information relevant to the decisions facing one, and to yield the optimal outcomes based on the available information, including one's history of preferences regarding the matter at hand. Assume one wants to book an air ticket to a defined destination. Assist would search out and purchase the best flights based on its “knowledge”—the data inputs available to it—about one's flying preferences, including cost, route, airline, seating, timing, and so forth. The decision would be a function of the decision algorithm and relevant data culled from Google's vast database of over a billion people and things. The purchase would be transacted simply by hitting “enter.” The algo-automated learning machine would “understand exactly what you mean and give you back exactly what you want,” as Larry Page has put it.14 If only you knew yourself!
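A hedged sketch of such judgment-as-computation: the decision reduces to an argmax over a weighted preference score. The weights, fields, and flights below are invented; this is in no way Google's actual system.

```python
# Judgment reduced to computation: rank options by a weighted score over
# logged preferences. All weights, fields, and flights are invented.
preferences = {"cost": -1.0, "duration_hrs": -0.5, "preferred_airline": 2.0}

def score(flight: dict) -> float:
    return sum(w * flight.get(k, 0) for k, w in preferences.items())

flights = [
    {"id": "A", "cost": 320, "duration_hrs": 6, "preferred_airline": 1},
    {"id": "B", "cost": 290, "duration_hrs": 9, "preferred_airline": 0},
]
best = max(flights, key=score)  # the "decision" is simply the argmax
```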

What of the role of having a feeling or intuition about the data, taking into account the feelings or sensibilities of others concerning the data or even the judgment options themselves? What of a choice in the moment that deviates from one that your record strongly suggests you would make? I usually choose a particular seat in the plane. It is open now when I am booking my flight but has the seat next to it taken. The row behind is empty even though it has slightly less legroom. In the moment, I choose the empty row, chancing that it might not fill up. I wouldn't always make that choice. It would depend on data inputs, to be sure. But it might just as well turn on a feeling in the moment. It may be one I will later regret or celebrate, to be sure, but it is emphatically one no auto-calculus would generate. Algorithms have no hunches.

That algorithms have no hunches means that their commitment to rendering human lives and living more efficient—their repeatedly stated utilitarian endgame—imposes order, as stated at the outset. But in doing so, it exacts a profound discount on human being. Having a hunch is not just exercising an intuition on the basis of which I take a calculated gamble. The algo-evaporation of hunching and intuition actually makes life less interesting, in some circumstances even more dangerous. An asylum-seeking refugee might Google-map routes across a landscape of escape from a country of residence but then decide on an “instinct” for a more circuitous route to avoid more obvious and so more policed crossings. The technological erasure of hunches means there are fewer stories to share, fewer lessons learned. At the very least this sort of techno-determination, once generalized across all domains, narrows, if it does not obliterate, the range of possibilities for freedom to be exercised. Given that wisdom comes from experience, from learning from failure, a reduction of life to a series of algo-efficiencies is likely to lead to less self-reflection, to making the world less wise, to rendering the political less open and less free.

It is telling, then, that with the emergence of crypto-value and proliferating technologies of encryption, calls for transparency have grown louder. As processes of value creation, of intrusiveness, and of the curtailment of privacy have become more coded and so less readily obvious, concerns about openness in decision-making have grown. And yet the speed with which these cryptic social processes have taken hold has far outstripped general comprehension of their operational logics, their spiraling application across increasing dimensions of social life, and any sustainable sense of how to regulate or control their applicability and social impacts. The socio-logics of encryption have had less direct impacts as well. Policy-makers, coterminously emboldened by this expansive culture, have likewise become more secretive, more cryptic and obscure, and less transparent in their decision-making. Hence the spiraling concerns about corruption in policy-making, electioneering, political self-dealing, global trade policies, and governance.

That anyone with access to such technological proficiency could benefit from its application evidences the democratizing possibilities of these developments, as many commentators have celebrated and as the information sharing of sites like WikiLeaks has perhaps evidenced. But the obscuring possibilities constitutive of the technology quickly place in question any straightforward social benefits, as the vast expanse of the Dark Web and its nefarious proclivities immediately suggest.

___________________

The algorithmic is a hyper-discriminating machine. It is discriminating in both of that term's driving senses. First, algorithms differentiate and distinguish between relevant and irrelevant data, between what's of interest and what's not, based on past patterns that then don't just inform but drive relevance. And in so doing, second, algorithms trade on and intensify discriminating trends, disadvantaging the already disadvantaged. But they do so by rendering invisible—almost irrelevant because virtually untraceable—the grounds of the discrimination. Service work, long disproportionately peopled by Black, brown, and women workers, is being steadily replaced by work performed by service machines, from food preparation to reservations staffing and, in the near future, to bus and cab driving.15 In the United States, for as long as unemployment data have been tracked, the Black unemployment rate has invariably been double that of whites. Technological developments will dramatically, and invisibly, exacerbate these trends.

Algorithms discriminate with utter indifference, without affect. The reproducing discriminations consequently become more or less naturalized, seemingly sewn into the condition of social being (and not just the social condition of being) itself. The algorithmic has no reflexive or reflective memory of the discrimination, only accumulated data bits about employment, residential address, education, or consumption. The algorithmic suffers no guilt or shame, not even qualms or self-doubt. It is a being of a different ontological register. The algorithmic is the perfect post-racial discriminating machine.

One telling instance, among many. In May 2016, laying the ground for Trump's presidential election, Breitbart and InfoWars pressured Facebook to replace the human staff determining trending topics with a click-driven algorithm. From this moment on, driven by the acceleration of bot-circulated stories and clicks, fake news proliferated (actual fake news, not presidential accusations of it, which perhaps could be characterized as an extension of fake news itself). African American voters were especially targeted, to discourage them from voting, given the overwhelming likelihood that they would vote for Hillary Clinton and Democrats rather than Donald Trump and Republicans. The voting rate among the Black electorate declined in 2016, most notably in key swing states, driven in part by this campaign. Some have argued that there is nothing in the nature, the being of the algorithmic in and of itself, to produce these discriminating trends. But then there is nothing inherent in algorithmic being to delimit them either. And insofar as algorithms more readily tend invisibly to reify the given and established—a product of long-existing data feeds—they likely exacerbate ongoing discrimination. Algorithms are also the post-racial discriminatory reification machine.

Facial recognition algorithms point to another way in which algo-subjectivity drives social policy formation. I take “algo-subjectivity” to be the “subject” formation of the algorithmic in its own terms, not the determination of human subjectivity by the algorithm nor the anthropomorphizing of algorithmic subject formation. Algorithms have no fingers to scald, no skin to pierce, no ears burning for gossip. Algorithms drive the capacity of machines to learn, to create and compose, to optimize not just cost-benefit calculation but actually to engage in decision making, whether behavior-inducing rational choice, utilitarian, or strategic.

Half the states in the United States now make available driver's-license photographs and identifying data to local police to conduct algorithmically determined criminal identification. Criminal suspects' images are “lined up” against the driver's-license photographic database, the facial recognition algorithm seeking to make a match. No policy protocols currently exist to oversee mismatches. Race and gender once again loom large here. Darker skin tones currently make it more difficult for the software to recognize facial contours, and the algorithm is incapable of controlling for facial cosmetics.16 The relative techno-incapacity to make fine-grained distinctions for darker skin tones means mismatches for people of color are more likely, heightening the likelihood of criminological misattribution. With human line-up error, defense lawyers can at least cross-examine both police and prosecutorial conduct of the line-up as well as the witness, seeking to establish faulty process and memory. With algorithmic identification, cross-examination is rendered obsolete, at least in large part. The prosecution alienates the defense from one more avenue of interrogation, in turn foreshortening the time of prosecution, the time to a (mis-)determination of guilt.
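Why mismatch oversight matters can be put in miniature: a line-up “match” is just the nearest stored encoding, so the system returns somebody even when the true face is absent from the database or poorly encoded. The data below are wholly invented.

```python
import numpy as np

# Hypothetical encoded license photos; all data invented for illustration.
license_db = np.random.rand(50_000, 128)

def line_up(suspect_encoding: np.ndarray) -> int:
    distances = np.linalg.norm(license_db - suspect_encoding, axis=1)
    return int(np.argmin(distances))  # always returns *someone*
```

Absent a protocol for rejecting weak matches, the nearest neighbor is offered up regardless of how distant it is.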

There is an unfortunate analog to this among progressive activists, too. Following the notorious white nationalist march in Charlottesville, Virginia, activists sought to identify attendees from media photographic matches with online databases like Facebook in order to out and shame them with relatives, friends, employers. There were numerous successes, to be sure. Top Dog (a store in Berkeley, California) fired an employee for actively joining the racist-slogan-chanting, volatile, and ultimately murderous rally. A North Carolina family publicly excommunicated a family member for attending, too. But there were failures as well, misidentifications causing stress, embarrassment, and indeed harassment to the wrongly identified. Facial recognition or matching software will not solve all mismatch determinations. And in any case what one does in the case of a match requires political and moral judgment—wise practical reasoning—not ceding the political and ethical to an autonomous techno-fix.

___________________

A question here is whether algorithms express desires. Films like Her and Ex Machina intimate that they do, a theme Ian McEwan explores in his ambiguously titled Machines Like Me.17 They can no doubt express desires in the sense of stating them. But are they desiring machines in the sense of animating desires, of desiring as such? They have agency of a kind, of course, but can the computational also aspire, amount to, or become dispositional? Algorithms certainly present the possible content to be or become the objects of desire. But the question is whether algorithms can be programmed or program themselves to move or be moved to acquire or to satisfy those objects, to be satisfied by them. Projecting the mental states of others, rightly or wrongly, is not tantamount to having such mental states oneself, or itself.

For every object of desire, there is a marginal utility, or so economists would have us believe. For what is demand if not at basis the expression of desire, even if reductively so? The marginal utility of x is the additional satisfaction derived from consuming one more unit of it; diminishing marginal utility names the point at which each additional unit satisfies less. That is what and where I would be prepared to pay less for x, whether monetarily or in terms of the effort to acquire and satisfy. It is the onset point of diminishing returns. Algorithmic calculation hyper-inflates and intensifies the speed of these diminutions. As our social media, email, and internet browsing pages testify, algo-technology is an incessant desire-preempting and -driving machine, too.
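The standard formalization, with a toy utility function assumed purely for illustration: if $U(x) = \sqrt{x}$, then

$$MU(x) = \frac{dU}{dx} = \frac{1}{2\sqrt{x}},$$

which falls as consumption x grows; the hundredth unit satisfies a tenth as much as the first.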

By contrast, time as processual is an unfolding, revealing what it holds in store not all in the instant. Or even in a Sartrean series of instants. A processual conception of time necessitates finding sense in the relational as such, on its own terms. One could say that what processual time—and by extension the desiring profiles that emanate from it—holds in store remains in part opaque even to “itself” until it has transpired. Hence the failure of much prediction.

This counter-algorithmic critique then also suggests a break with the project of the posthuman, which is ultimately teleological. The posthuman is underpinned by temporality—the very temporality of the human, in fact. It refers to an “after” of the human, where observational and aspirational senses are often indistinguishable. Posthumanness is also indexed to a presumptive spatiality that accompanies humanness (the body in some more or less ideational sense). Even in its purported break from the presumptions of humanisms, posthumanness is nevertheless bounded by the spatial and temporal preconceptions associated with how the human is imagined, if not outright defined. In the end, the posthuman reaches for a subject that is perfectly inclusive, perfectly diverse, perfectly just, perfectly polymorphous. The dystopian version likewise is predicated on this sense, insisting that the reach for the necessarily unachievable aspiration of the posthuman is exactly what gives rise to its destructive or “decadent” streak.

Algorithmic being is akin to human subjectivity only in the sense that it reduces the latter to the computational surrogate of the former (its alter-ego transmogrifying into its algo-ego?). Algo-being trades on a logic akin to behavioral positivism. Eschewing the metaphysical in favor of the behavioral directives of coding syntax, algorithmic being operates purely processually, a techno-rhythmic temporality. Object-oriented programming (in contrast to object-oriented ontology) drives things on the basis of coded procedures operating on objects as fields of contained data (analogous to DNA as the procedural logics of human being). Popular cultural projections of human affect—emotion, desire—onto algorithms and the robotic perhaps render the latter more approachable and less alien, but they mistake the operating logics of the one for the character of the other. They anthropomorphize and anthropologize the technomorphic and technological. Object-oriented programming proved effective in programming website windows and buttons but highly challenged and overly convoluted in capturing complex categories and relational databases.18 Instructively, algorithms more generally are enormously effective in enabling a considerable range of functionality but disturbingly delimiting and constraining of vast swathes of “permissible” activity, relation, interaction, expression, and social movement.19
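What “coded procedures operating on objects as fields of contained data” amounts to can be put in miniature; the class below is an invented illustration.

```python
# Object-oriented programming in miniature: an object is a field of
# contained data; its methods are the coded procedures operating on it.
class Account:
    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner      # contained data
        self.balance = balance

    def deposit(self, amount: float) -> None:  # procedure over the data
        self.balance += amount
```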

Machine-embedded algorithms do not so much “listen to” or “overhear” or “eavesdrop” on our conversations, “probe” our bodies, or “read” our minds. Instruments may do that, in part driven by algorithmic directions. Algorithms, at core, may direct or command the machines and collect the resultant data. They make matches with existing database entries, make predictions, even spew out resultant directives or commands. And they do so only in contingently materialized or “embodied” ways. They can operate quite effectively without ears and eyes, sans sensors. It is we who may indeed be spooked by this. We may hanker for “machines like us” as well as those that like us. But at heart, insofar as the metaphor holds, algorithms as such are constitutively—ontologically—indifferent to anything but data. Algo-being, to push the point, is a being that ultimately may become a DIY producer. But while it represents structuring architecture and so conditions, it will do so with no disposition, no inclination to call its own. Dylan's classic line, “no direction home, like a rolling stone,” metaphorically expresses algo-being's characteristic data-inclined condition, if literally not at all.

___________________

Joi Ito insists on calling the algorithmic “extended intelligence,” an enabling supplement to human thinking and production, exhorting us rather to “forget artificial intelligence.”20 This, of course, takes for granted the centering of the human in posthuman articulations. The implication of my argument is that it would be analytically more accurate neither to privilege the singularity of human intelligence nor to reduce multiple, interacting intelligences to mere supplements or extensions of the human.

Consider, in closing, the all-too-present logics today of deportation, forced migrations, refugees: what Shahram Khosravi has deemed “estranged citizenship” and multiple abandonments, expulsions, social marginalizations, and dismemberships.21 We might call this, without irony, social dismembering. Fugitive time, the endless time between, is often referred to by refugees and deportees as “deadtime,” the “time of [social] death,” a suspended time of life unlived. It is a time reduced by those exercising the power of the state to endless waiting, a time wasted. Not so much a pessimism, which itself is anticipatory if negatively so, as a time altogether lost, neither past nor future, fully irrecuperable because there is nothing as such to recuperate. A time of non-becoming, non-destination, and so non-arrival. It is a state, a condition, of or at least akin to nonbeing.

Once the public site of the political, the street is historically where political populism is manifest. But this hides the fact that the political has long been expressed, if at all, across variable and distributed sites, and especially so today: the sea, the border, the refugee camp, virtual online sites. This fact complicates the relation between the public and the private, the political and the cultural. But it also complicates the relation to mobilization, and in mobilization to memory and memorialization. In short, it raises anew the question of where the political “lands” in relation to these shifts, and in relation to algo-capitals as they shift from the national-local to the global, from the sited to the virtual and cryptic.

What, in light of this, it might be asked, is the driving mode of the political today? The political is not so much deflating as diffusing in the shifts and circulations from and through the bodily manifestations in bare life and the biopolitical to virtualities in all their complexities. But the bodily also shifts in its making, in its making up and made up being and condition, too. So what of the emergence of the technopolitical, in its long history of formation, transformation, and now dominance? Here the political resonates in all kinds of ways from contestations around categorization to the relation to memory, memorialization, and its shifts, themselves requiring the work of making and making up. Just as art is not the same after photography, memory is quite radically remade in the digital turn. Heroes, for one, are differently fashioned, emerge and disappear, are cemented, fragmented, virtualized in a digital moment.

A counter-project of algo-being that takes it not on anthropomorphizing but on its own ontological terms is ultimately spatiotemporal: a being that occupies a space-time in defiance of the mutual exclusivity of the human and the algorithmic. Such a being promises from its own standpoint an emergent subjectivity not simply beyond the human, but that refuses or collapses the human-technological contrast. A being that, in algo-“self”-characterization, is representationally symbolic, fluidly metamorphic, instantaneously transfigured, pure onto-pliability. And perhaps always in that sense other to its immediate self, as close to mediated immediation as can be gotten (or begotten, its actionable errors and failures not immediately forgotten, excusable, forgiven).

The overriding question, then, is this: what of the possibility of a future, one not just lived together but worth living in its entanglements, in relation to the pasts of the lived gendered, classed conditions of raciality, and what of the possibilities of their fused co-making? Claims to—assertions of—a mother- or fatherland always rest on fabricated claims to history. Mixed blood, culturally, socially, institutionally, even genetically, is to a greater or lesser degree the historical legacy of all. The same must be made of, insisted for, algo-being, of and for the algo-anthropological in their insistent, mixed, complicated, and complicating fusings.

Acknowledgments

Conversations and exchanges with numerous people enabled elaboration of my thinking here. I owe debts of gratitude to Jenna Ng, Achille Mbembe, Debarati Sanyal, Anirban Gupta-Nigam, and the editors and reviewers of Critical Times.


Works Cited

Benthall, Sebastian, and Bruce D. Haynes. “Racial Categories in Machine Learning.” Paper presented at FAT* ’19: Conference on Fairness, Accountability, and Transparency, Atlanta, January 29–31, 2019. arxiv.org/pdf/1811.11668.pdf.

Bridle, James. The New Dark Age: Technology and the End of the Future. London: Verso, 2018.

Bruno, Giuliana. Surface: Matters of Aesthetics, Materiality, and Media. Chicago: University of Chicago Press, 2015.

Buolamwini, Joy. “We're Training Machines to Be Racist: The Fight against Bias Is On.” Wired UK, April 10, 2018. www.youtube.com/watch?v=N-Lxw5rcfZg.

Daniels, Serena Maria. “When We Talk about Automation, We Also Need to Talk about Race.” Huffington Post, June 22, 2018. www.huffingtonpost.com/entry/automationrace_us_5b20eb7ae4b0adfb826f9f48.

Fuller, Matthew, and Graham Harwood. “Abstract Urbanism.” In Code and the City, edited by Rob Kitchin and Sung-Yueh Perng, 61–71. New York: Routledge, 2016.

Hall, Stuart. “Introduction: Who Needs ‘Identity’?” In Questions of Cultural Identity, edited by Stuart Hall and Paul du Gay, 1–9. Thousand Oaks, CA: Sage, 1996.

Ito, Joi. “Forget about Artificial Intelligence, Extended Intelligence Is the Future.” Wired UK, April 24, 2019. www.wired.co.uk/article/artificial-intelligence-extended-intelligence.

Khosravi, Shahram. After Deportation: Ethnographic Perspectives. Basingstoke: Palgrave Macmillan, 2019.

Kowalik, Zbigniew, Andrzej Wrobel, and Andrzej Rydk. “Why Does the Human Brain Need to Be a Nonlinear System?” Behavioral and Brain Sciences, February 4, 2010. www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/why-does-the-human-brain-need-to-be-a-nonlinear-system/E296DB3553F379C13E217D58B181129F.

Massumi, Brian. Ontopower: War, Powers, and the State of Perception. Durham, NC: Duke University Press, 2015.

McEwan, Ian. Machines Like Me. New York: Penguin, 2019.

Pasquinelli, Matteo. “The Eye of the Algorithm: Cognitive Anthropocene and the Making of the World Brain.” November 5, 2014. matteopasquinelli.com/eye-of-the-algorithm/.

Siegel, Eric. “What If the Data Tells You to Be Racist? When Algorithms Explicitly Penalize.” KDnuggets, September 2018. www.kdnuggets.com/2018/09/siegel-when-algorithms-explicitly-penalize.html.

Simonite, Tom. “Moore's Law Is Dead. Now What?” MIT Technology Review, May 13, 2016. www.technologyreview.com/s/601441/moores-law-is-dead-now-what/.

Skow, Bradford. Objective Becoming. Oxford: Oxford University Press, 2015.

Srinivasan, Ramesh. “How Bias in Technology Drives Inequality: Interview with Ana Kasparian.” #NoFilter, YouTube TV, April 15, 2019. www.youtube.com/watch?v=_meJ0j2Eels&t=412s.

Stoler, Ann Laura. Duress: Imperial Durabilities in Our Time. Durham, NC: Duke University Press, 2016.

Talin. “The Rise and Fall of Object Oriented Programming.” Medium, November 19, 2018. medium.com/machine-words/the-rise-and-fall-of-object-oriented-programming-d67078f970e2.

Uricchio, William. “The Algorithmic Turn: Photosynth, Augmented Reality and the Changing Implications of the Image.” In Cultural Technologies: The Shaping of Culture in Media and Society, edited by Goran Bolin, 19–35. New York: Routledge, 2012.

Wile, Rob. “A Venture Capital Firm Just Named an Algorithm to Its Board of Directors—Here's What It Actually Does.” Business Insider, May 13, 2014. www.businessinsider.com/vital-named-to-board-2014-5.

Yeung, Ken. “Google's Page Not Worried about Facebook's Graph Search, Says He Is ‘Confident about Our Core Business.’” Next Web, January 22, 2013. thenextweb.com/google/2013/01/22/googles-larry-page-not-worried-about-facebook-graph-search/.
This is an open access article distributed under the terms of a Creative Commons license (CC BY-NC-ND 3.0).