Moving towards an implicit notion of interaction in UX
This post is me sharing some thoughts I’ve been playing with for a few weeks; consider it an opinion piece on a topic I find really interesting.
Interaction design is as much about using design conventions to create intuitive experiences as it is about thinking ahead to what it will become.
Sometimes we think far ahead and dive into the field of speculative design, and sometimes we look around at technological advancements or societal changes and predict how they will affect our relationship with computers in general.
This summer, I did a short course on UX for AI design at TheStarter. That, combined with the countless hours I’ve spent researching for my master’s thesis, got me thinking about how the notion of an interaction might change.
Specifically, it got me thinking about how embedded systems are becoming ubiquitous in our daily lives, from automobiles to household appliances and everything in between. Connectedness is becoming the norm, and this increased connectedness is giving interactivity to objects we would not usually consider interactive.
This is interesting because it changes the scope of what we normally consider interacting with a computer.
In the past, we pictured sitting at a desk, looking at a screen, and interacting with a WIMP interface via keyboard and mouse. Today, we mean something like standing in a subway station, interacting with a smartphone by touch, closer to a direct-manipulation philosophy.
Notice how the agent that starts the dialogue changes between these two scenarios. With a laptop, the user turns it on and interacts with it. They might get a notification and shift their attention to another task, but the human provides the initial motivation.
With the smartphone (and this goes for wearables like smartwatches), the very nature of the devices that house the interfaces increases the likelihood that the computer calls for our attention. The computer starts the conversation via haptic, auditory, or visual cues.
And while this is massively effective for many reasons, it also creates some problems. Addiction to certain apps is an obvious one, but in my opinion, interruption is the main issue with the modern philosophy of interaction. While today’s interfaces seem integrated into our day-to-day life, interactions with technology exist in small bubbles throughout the day, where we interrupt whatever we are doing (or sometimes multitask) to give partial attention to an interface.
These scenarios also differ in another way: our involvement with the interaction is decreasing. Human-Computer Interaction no longer requires full attention (probably thanks to years of UX and usability work). Interfaces are easy enough to use that the interaction can be an afterthought while you do something else.
And these two trends might intertwine in the future, representing a big change in the field of HCI.
Now, abstractly, let’s imagine that we simply interact with our world and the system reacts accordingly, with no need for an intermediary. As Golden Krishna put it, the best interface is no interface.
Last semester, I worked on a project designing an assistive technology for deaf drivers that translates auditory cues from the environment into haptic feedback the user can perceive (you can read about it in my portfolio). I am not sure this is the best example, but in this case, the user interacts with the environment, specifically by driving, and as a result, the interface reciprocates with appropriate feedback.
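To make the idea concrete, here is a purely illustrative sketch of that kind of mapping: detected sound events are translated into vibration patterns. All names, events, and values are hypothetical, not the actual implementation from my project.

```python
# Hypothetical mapping from a detected sound event to a haptic pattern,
# expressed as a list of (duration_ms, intensity 0.0-1.0) pulses.
HAPTIC_PATTERNS = {
    "siren": [(200, 1.0), (100, 0.0), (200, 1.0)],           # urgent double pulse
    "horn": [(300, 0.8)],                                     # single firm pulse
    "engine_warning": [(100, 0.5), (100, 0.0), (100, 0.5)],   # soft triple tap
}

def to_haptic(event: str) -> list[tuple[int, float]]:
    """Translate a detected auditory event into a vibration pattern.

    Unknown events fall back to a gentle default pulse, so the user
    still knows the system noticed *something* in the environment.
    """
    return HAPTIC_PATTERNS.get(event, [(100, 0.3)])
```

The point is not the code itself but the shape of the interaction: the user never addresses the interface directly; the environment produces the input, and the system answers through a channel the user can perceive.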
What if the future of interfaces is making them invisible, both visually and practically? What will the role of the UI designer be? Is there a UI to be designed at all?
While this future is certainly full of questions to be answered, I feel that some things will remain the same. One of these is the trio of Affordance, Signifier, and State. Whether we interact with a computer directly or indirectly, its presence must be acknowledged. Users must know that a system is active and “listening” for an interaction (its state). More than that, users must know what the system offers as possibilities (its affordances) and how to activate those possibilities (through signifiers).
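One way I like to think about this trio is as a minimal data model: even an implicit system must expose its state and pair each affordance with a perceivable signifier. This is just a thought sketch with hypothetical names, not a real framework.

```python
from dataclasses import dataclass, field

@dataclass
class ImplicitSystem:
    """A toy model of an 'invisible' system that still communicates
    its state, affordances, and signifiers to the user."""
    name: str
    active: bool = False                 # state: is the system "listening"?
    affordances: dict[str, str] = field(default_factory=dict)  # action -> signifier

    def perceivable_signifiers(self) -> list[str]:
        """What can the user actually perceive right now?"""
        if not self.active:
            return [f"{self.name} is off"]
        return [f"{action}: {cue}" for action, cue in self.affordances.items()]

# Hypothetical example: a thermostat with no screen at all.
thermostat = ImplicitSystem(
    "smart thermostat",
    active=True,
    affordances={"adjust temperature": "soft LED glow when presence is detected"},
)
```

Even here, where there is no conventional UI, the design work has not disappeared; it has moved into deciding which states and affordances exist and which cues signify them.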
This does not mean that when implicit interactions get more recognition, more conventional styles of HCI will become obsolete. There is a time and a place for everything, and Interaction Design is, at its core, contextual.
Leaving the world of abstractions and turning these concepts into actual products will be a long and enriching journey, and one that has already begun. The automotive sector already has examples of implicit interactions, as do multiple IoT applications, and there are papers from ten or more years ago discussing this topic.