The last UX meetup I attended in 2019 involved an open, thought-provoking conversation among designers and non-designers about whether we trust tech, meaning digital products and services, and a deeper look at our relationship with it.

The first few responses were optimistic, but as the conversation evolved, the room started to divide into “Nayers”, “Yayers” and “Unsurers”. The Nayers vilified social media companies in particular for their persuasive design and advertising-led business models, and for the consequences these have in users’ daily lives, including distraction and over-dependency, both personal and professional. They also voiced concerns about data privacy, and how, for some, the only solution to these problems was to delete their accounts and apps from their smartphones.

On the other hand, the Yayers (including myself at the time) highlighted how we already trust tech by actively engaging on platforms such as eBay, Uber and Airbnb, and praised social media companies for the benefits they’ve created, from promoting and keeping businesses afloat to facilitating long-distance relationships and friendships.

Having both participated in the discussion and listened to the various perspectives flying around that evening, I’d like to share my point of view on how we should approach digital trust in 2020, both as a user and as a designer.

The reality is that, today, our inherent trust in tech companies has been permanently fractured. As we continue to lead our lives online, leaving behind a digital footprint in the form of data points, our willingness to believe that tech companies use these with our best interests at heart is dissipating swiftly.

For example, ‘always-on’ services, once the height of convenience, are now viewed as both an addiction and a curse. Or take social media: once seen as a mirror of society, it is now blamed as the very reason for society’s downfall. The scale and reach these companies command are fundamentally changing the very social fabric of our world, and not necessarily for the better.

At the same time, the media has painted a picture of technology as addictive. But who exactly is addicted to it? Some people may be, but surely not everyone. So, before we can assess our relationship with the products and services we’ve come to use and the experiences we enjoy, it’s worth noting that this relationship probably looks very different for the knowledge worker who rides an electric scooter to work and streams productivity podcasts than for the single working mother who can barely carve out enough “me time” to take a shower.

That being said, people’s tech savviness and subsequent online behaviour varies across socio-demographic and economic groups, with some being more vulnerable than others.

Nonetheless, we can all improve our relationship with tech, starting with our values: the attributes of the person you aspire to be. Personally, I value personal development, fitness and nutrition, positive and healthy relationships, and mindfulness and meditation. So I try to make time for my values, both by literally blocking time for them on my calendar and by ensuring that the content I consume and the tech tools I use align with them.

This takes conscious effort and intention, however. It’s about examining your life and figuring out which apps or tools best serve you right now. Ask yourself:

  • Why am I using [X] product?
  • How much am I using it?
  • What am I doing when I use it?
  • What else would I be doing instead of using it?

By practising self-awareness, you can try to understand the internal triggers that drive you towards the tech tools you use. Is it an escape from some kind of uncomfortable sensation? Is it boredom? Is it a genuine desire and curiosity to learn something new?

Once you address the internal triggers, you can adjust the external triggers of tech tools, for example by disabling notifications or changing the default settings so they best suit your preferences and needs.

And when a certain product no longer serves you, it will probably be because you no longer value, or no longer prioritise, whatever that product was enabling you to do, or the way it made you feel when you used it.

Then again, depending on the person using the tech, this is easier said than done. We are bombarded at every turn with persuasive design that exploits our psychological weaknesses and often leads us into temptation, habituation and distraction. It’s not our fault and not everyone can or should be expected to muster superhuman levels of self-regulation, just to adapt to this all-out war tech companies are waging for our attention.

Instead, if we truly want to build and maintain a trusted relationship with technology, it needs to be properly designed and incentivised to help us do so.

One of the first things I learned as a UX designer, particularly when conducting usability testing, is to reassure test participants that it’s not their fault if and when something goes wrong: it’s the interface’s fault. So, as designers, we should always strive to make it easier for users to control how they interact with tech. This is especially important when designing functional elements related to data management, privacy and security, ensuring they can be easily discovered, understood and used, and making them a central part of the user’s experience.

As such, big tech companies and startups alike should regularly review their design patterns and how these align with their design principles.

Take, for example, how “transparency and collaboration” is embodied in open channels on Slack, or how the idea of “capturing life’s unique moments” translates into Instagram’s photo feed. That idea, however, has become diluted by the kind of social interactions, or should I say “addictive triggers”, in the form of likes and comments that have given rise to influencers, as well as to unintended psychological effects such as social comparison. Nonetheless, people are becoming increasingly mindful of how tech affects their mental and emotional state, forcing companies to respond one way or another.

But how much damage is acceptable before someone steps in to help and ensure no one else gets hurt?

Companies should therefore deeply reconsider their design patterns and the core behaviours they are designed to encourage. This involves taking a step back to re-evaluate the ethos and purpose of their products, and whether the way they work and are used today still reflects that.

Companies should also embrace transparency by making people aware of any changes or updates, and of how their data is being used, and by creating opportunities for them to react or give feedback.

For example, LinkedIn periodically reminds me that they import and store my contacts to suggest connections and show me relevant updates. Within the same pop-up, I’m also presented with an option to turn off the contacts sync, thereby controlling who I connect with.

I think consumers are starting to take back control by reconsidering which platforms they are willing to trust, and the responsibility placed on those platforms to be trustworthy. As they continue to do so, companies will have to work harder to earn the trust of all kinds of users, from CEOs to front-desk employees and soccer moms alike.