A Letter to 1994: Considerations for your Digital Dreams
In the last 20 years technology has become such an intrinsic part of our lives and identity that we hardly even notice the full impact it has on us. Back in 1994, things were very different: the commercialisation of the internet around 1995 would mark the beginning of the (tech-)world as we know it. Imagine we could send a letter back to the about-to-be tech geniuses of 1994, revealing the lessons we’ve learned in the past two decades and warning them about the challenges to come. This article takes the shape of one such letter; though the means of sending it back in time unfortunately do not exist yet, I remain hopeful that I will be able to send it someday.
Dear 1994, You don’t know it yet but you are at the dawn of the digital age. Personal computers are still a luxury of the few. In order to keep in touch with your connections, you must make time to physically visit them, call them at home, or send them a letter. Privacy invasion only happens when someone walks into your room (or house) without your permission.
In 2019, things look quite different. While we are enthusiastic about the benefits technology has to offer (sorry, no hoverboards yet, but you should check out our robots!), there are some concerns attached to these benefits, three of which are highlighted below.
1. Spotify: I heard it through the grapevine
Spotify's Discover Weekly is an algorithmically generated playlist that brings you a weekly set of new artists and songs to check out. The algorithm seems to do this so well that, within the first six months of its launch, Spotify users had already streamed songs from their Discover Weekly playlists 1.7 billion times.
So how does it work? Spotify's algorithm collects and processes various types of data, both about its users and about the songs they listen to: for example, streaming counts of tracks, whether a user saves songs to a playlist, and a tune’s characteristics, like time signature, tempo, and loudness. The service then looks for similarities between users and between songs, and uses the outcome of this comparison as a basis for recommending new songs.
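To make the "similar users" idea concrete, here is a minimal sketch of user-based collaborative filtering. The data, names, and scoring are entirely hypothetical; Spotify's real system is far more sophisticated, combining collaborative filtering with audio analysis and other signals. This only illustrates the core principle described above: find the listener most similar to you, then suggest what they stream that you haven't heard.

```python
import math

# Hypothetical play counts: user -> {song: number of streams}.
plays = {
    "ana":  {"song_a": 10, "song_b": 3},
    "ben":  {"song_a": 8, "song_b": 1, "song_c": 5},
    "cleo": {"song_c": 9, "song_d": 7},
}

def cosine(u, v):
    """Cosine similarity between two users' play-count vectors."""
    songs = set(u) | set(v)
    dot = sum(u.get(s, 0) * v.get(s, 0) for s in songs)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def recommend(user, k=1):
    """Suggest up to k songs the most similar user streams but `user` hasn't."""
    others = [name for name in plays if name != user]
    nearest = max(others, key=lambda n: cosine(plays[user], plays[n]))
    unheard = [s for s, count in plays[nearest].items()
               if count > 0 and plays[user].get(s, 0) == 0]
    # Rank by how heavily the similar user streams each unheard song.
    return sorted(unheard, key=lambda s: -plays[nearest][s])[:k]

print(recommend("ana"))  # ana resembles ben, so she gets ben's "song_c"
```

In this toy example, ana and ben both stream song_a heavily, so ben is her nearest neighbour and his favourite track she hasn't heard becomes her recommendation; at Spotify's scale the same comparison runs over millions of users and songs.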
At this point Spotify’s discover weekly knows me so well that if it proposed I'd say yes
We like people who are similar to us, and the more we like someone, the more likely we are to be persuaded by them. Robert Cialdini devoted an entire principle of persuasion to this topic in his book 'Influence: The Psychology of Persuasion’. Spotify’s algorithm has become so good at creating this affinity that one user tweeted "At this point Spotify’s discover weekly knows me so well that if it proposed I'd say yes”.
The underlying danger may not be that obvious, though. Once a user has put a fair amount of trust in the algorithm, adding a couple of songs from a specific promoted artist, or with a political theme, may go unnoticed. More than that, users could potentially be persuaded to, for example, pay more for the service, or even to support the company in its political, social, and environmental goals.
Additionally, Spotify not only collects behavioural information from its users but can also gain access to their Facebook data and the data stored on their phone (e.g. likes, the kinds of posts they write, photos, and their smartphone-related behaviour).
Does the value you gain from Spotify as a user outweigh the personal information you give up for it? What is the non-monetary price of Spotify?
2. Notifications: You have 325 new messages
We are increasingly distracted by our devices: how often does one go to a party nowadays to find everyone checking their smartphone instead of socialising? And have a look around at the bus stop the next time you take public transport: most likely, at least half of the people there will be fully absorbed by their phone.
One of the main reasons for our phone addiction is dopamine. Every time we receive a message or get ‘likes', a bit of dopamine is released. Dopamine is a neurotransmitter which makes you feel good every time it’s released, motivating you to repeat the activities or experiences that triggered its release, like listening to your favourite song, eating your favourite food, getting a compliment, having sex …or checking your phone.
While fear of missing out has always been there, the explosion of social media has launched young people headfirst into the FOMO experience
But there is a danger here: many people nowadays prefer getting their dopamine fix through their phone rather than through more demanding activities. Anastasia Dedyukhina, an authority on the topic of digital distractions and author of the book ‘Homo Distractus’, tells us that having a device near your bed can seriously damage your sex life: why go through the whole process of seduction, taking your clothes off, and having sex if you can get just as much satisfaction from checking your notifications?
Alongside limiting our social contact, our devices can also have a severe impact on our stress levels and physical health, both through the pressure of having to instantly answer messages and by generating FOMO (Fear of Missing Out). Psychology Today states that "while fear of missing out has always been there, the explosion of social media has launched young people headfirst into the FOMO experience. Now we have the ability (or curse) to easily see what all our peers are doing all the time.” In a recent study published in Motivation and Emotion, scientists at Carleton University and McGill University linked FOMO with negative outcomes such as fatigue, stress, sleep problems, and psychosomatic symptoms.
Let’s start early in creating awareness of the effects of digital devices on our experience of life. This way, before we get unconsciously addicted, we may discover that there is more to life, and thereby set the right priorities.
3. A.I.: Robot lovers
The concept of Artificial Intelligence has been around since the start of computing. In 1950, Alan Turing proposed that a machine that could converse with humans without the humans knowing it was a machine would win the “imitation game” and could be said to be “intelligent”. The term was officially adopted in 1956, when American computer scientist John McCarthy organised the Dartmouth Conference, after which research centres popped up across the United States to explore the potential of A.I.
After a slowdown in A.I. research from the 70s to the 90s (A.I. applications required processing enormous amounts of data, which the computers of the time could not yet handle), the field picked up speed again in the late 90s with the advances of modern computing, and it has been rising ever since.
Microsoft also joined the party when they had to silence their A.I. bot Tay, after Twitter users taught it racism
The main purpose of Artificial Intelligence is to make machines or programmes learn from experience. But Artificial Intelligence can also get it wrong, and we have ample proof. In March 2018, one of Uber's self-driving cars killed a pedestrian because it didn't detect her quickly enough, and Google's autonomous test vehicles have been involved in several crashes over the years. Facebook shut down Alice and Bob, two of their A.I.-driven chatbots, which had developed their own secret language and were carrying on conversations with each other. Moreover, it was discovered that ads run on the platform could be specifically targeted at people interested in xenophobic topics like "How to burn Jews”. Facebook said those categories were created by an algorithm, not a human, and removed them as an option. Microsoft also joined the party when they had to silence their A.I. bot Tay after Twitter users taught it racism.
There are two causes at play here: firstly, an A.I. might be programmed to do something beneficial but develop a destructive method for achieving its goal(s). As in the example of the self-driving cars: the vehicles may have done exactly what their users asked for, but the way they went about it wasn't what the users wanted. That prompts the question: is it even possible to make a foolproof A.I.? This would imply predicting an A.I.'s actions, making sure all of its goals are aligned with ours, and, finally, making sure we didn’t make any mistakes in those predictions.
The second cause is external influences (like the example of Twitter users influencing Microsoft's bot): can these be controlled? And can they be predicted beforehand?
Gartner, the research and advisory firm, predicts that A.I. technologies such as autonomous driving, deep neural nets, conversational platforms, and virtual assistants will go mainstream in the next two to five years. So, before we (as a society, but also as companies) put all our trust in this technology, we should consider whether we can, in fact, retain control over it.
A different perspective
Dear 1994, I may come across as critical of technological developments, even though they have, of course, brought us many benefits over the years. But maybe, now that you’re aware of the impact technology has on our lives in 2019, you can make different, better decisions? I’ll leave that up to you. Greetings from the future.
As it stands, we do not yet have a time machine, and cannot travel to either the past or the future.
But as designers, considering our past experiences and imagining what might happen in the future will help us make better decisions for the products we will be creating. Now if you’ll excuse me, I’ll be in the lab, figuring out how to combine design, ethics, and spacetime physics (did anyone say DeLorean?).
All illustrations by Dolinde van Beek