BY INDERJEET MANI
–
Our front balcony faces the Gulf of Thailand, and on evenings when the moon is full or nearly so, we love to watch it rising over the sea, its luminous presence marked by those great basaltic plains once mistaken for seas. The moon is naturally the subject of countless iPhone pictures that I share on social media. In a network driven by mutual admiration, getting those likes from friends and acquaintances is now essential to the rituals of picture taking.
My memories of the moonlight I snapped a few nights ago are tied not only to the appearance of the moon, but also to what was going on when I took that picture. As it happens, my wife and I were enjoying a penne with spinach sauce. I remarked on the moon, and as we watched it, we held hands briefly. The moon that night also brought to mind memories of a much earlier time when my father and I would stand together observing those more distant moons.
When I look at my moon picture now, I recall the feeling of the wash of moonlight over air and water, and the presence of my wife beside me. For dozens of other moon pictures, unlike birthday or work-related ones, I have no recollection of the occasion on which I took them. While writing this essay helps preserve my personal memories, it’s possible that my clicking away at the scenery around me might be diminishing or even erasing them.
In a recent psychological experiment, people touring a museum who were asked to photograph certain exhibits had trouble remembering them, whereas the exhibits they didn’t photograph proved easier to remember. Another set of experiments has revealed the extent to which people rely on machines to relieve themselves of the burden of memory. Humans are willing to forget information if they believe it is available online, remembering where it can be found rather than the information itself. It’s sad enough to find memories of friends and distant places dimmed by age, without having to deal with technology ruining them further.
Not so long ago, the link between photographs and memories was celebrated simply and effectively. We sat around the fireside with our families, thumbing through those vintage photo albums with their wrinkled plastic sheets, remarking on a stooped grandfather’s piercing eyes, or admiring those glimpses of a daughter playing in the tub with her faded rubber ducky. Today, our kids, now grown up, show little interest in those family albums, offering only a brief nod and maybe an “uh-huh” while Snapchatting their friends about something far more interesting. The nostalgic world of physical photo albums is now an attic curiosity, like those fraying wedding saris and locks of forgotten hair. What the world offers us instead is the vast ocean of online repositories where we drop our little snapshots, hoping that our memories won’t face death by a thousand clicks.
All is not lost, however, in that sea. When I uploaded my moon shots that night to the cloud, the system knew not only when and where they were taken, based on information available on my phone, but also the fact that the moon was involved, along with moonlight and the sea. My wife, leaning in on one of the shots, was accurately identified. Realizing that some of my moon photos were taken in quick succession, Google Photos stitched them together into an animation, which I duly shared on Facebook. I also shared various digitally enhanced versions, including one that resembled an oil painting. And I got those likes.
The systems we are tethered to are in possession of numerous potentially memory-jogging bits of information. The weather on the evening of the moon shot was lovely, as reflected in the temperature, barometric pressure, humidity, and wind velocity. Earlier that day, my calendar had thoughtfully reminded me that it was the birthday of an 87-year-old aunt in India. The powers that be must also realize by now that when I take my moon shots, my wife and I are often seated together at dinner, sometimes in the company of friends, on a balcony at a considerable height above the sea. My wife’s emotional state might also be inferred from her facial expressions. My mood would be easy to discover from my tweets (some of which are already entirely predictable).
In the near future, systems will be able to assemble such information and generate verbal summaries of our photos, explaining what was happening at the time. These summaries will include rich descriptions of image content. Today, photo captioning algorithms can not only provide tags but also describe entities and scenes (which is especially helpful to the visually impaired). These descriptions are generated using natural language processing, drawing on pre-existing image captions as well as online textual content related to the objects and scenes found in the photograph. Taken together, these smarts may help resurrect, from their synaptic slumber, personal memories associated with a picture.
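For the technically curious, here is a minimal sketch of what such captioning looks like in practice, assuming a pretrained image-to-text model accessed through the Hugging Face transformers library; the particular model name and the image filename are illustrative assumptions, not details from the essay.

```python
# Minimal captioning sketch, assuming the Hugging Face "transformers" library
# and a publicly available pretrained image-to-text model. The model name and
# the image filename below are illustrative assumptions.
from transformers import pipeline

# Load a pretrained captioning pipeline (a vision encoder paired with a language decoder).
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# Generate a short natural-language description of the photograph.
result = captioner("moon_over_gulf.jpg")
print(result[0]["generated_text"])  # e.g. "a full moon rising over a body of water"
```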
While technology may help our personal memories, memories themselves are not as cool to share as pictures. Even though a digital photo today is the result of a complex computational reinvention of the scene, it is still understood as a view of reality, and as such, on an equal footing with experience. After all, no matter how much it may be staged or edited, a photo must resemble the scene from which it was mechanically generated. In the language of semiotics, photographs are signs that are inherently iconic and indexical. Those characteristics, in turn, allow us to conveniently forget that a moon shot is entirely different from the moon that we view through our native visual system. As Susan Sontag observed forty years ago in On Photography, “Photographed images do not seem to be statements about the world so much as pieces of it, miniatures of reality that anyone can make or acquire.”
Future technological innovations may continue to warp our definition of what is real and personal. We are already reeling from the disruptive impact of social network algorithms and search tools deciding which collective events we should focus on. When fully programmable cameras become commercially viable, algorithms worn on our faces and bodies will decide when and where and how to take photographs, choosing and framing those experiences for us. From the buzzing confusion of images, shots with features that are popular across users, or that fit with a machine’s deep fantasies, will likely be preferred. And once virtual reality truly takes hold in online gaming and entertainment, almost all of the visual experiences we savor will have been selected by machines that capture and render them based on their own perspectives. By then, we will be used to living mind-bogglingly virtual lives.
In his 1922 essay Photography and the New God, the photographer Paul Strand wrote about the need to humanize the machine, “lest it in turn dehumanize us.” Nearly a century later, the direction we’re heading as a species seems to involve ceding key cognitive functions to intelligent mechanical appendages that we attend to more than we do each other. Some of our most treasured moments are now bits of electronic information, ghostly images desperately craving attention. But unlike us, they have a chance to persist far into the future.
Just as we get that eerie feeling watching archival footage of Tolstoy or Tagore, anthropologists and historians of the future may wonder as they interpret our personal photos. It behooves us to try to provide an honest and human-centered telling, mediated by technology, of what they were originally about. After all, it was we who were present, like our ancestors before us, observing the moon on an enchanting evening.
–
Inderjeet Mani is a writer and specialist in AI and computational linguistics. His books include The Imagined Moment, and his work has also been published in 3:AM Magazine, Aeon, Apple Valley Review, Areo Magazine, Babel Magazine, Drunken Boat, Eclectica, New World Writing, Nimrod, Short Fiction Journal, Slow Trains, Storgy, Unsung Stories, Word Riot, and other venues. On Twitter, he is @InderjeetMani, and his website is http://tinyurl.com/inderjeetmani