User-experience (UX) is an elusive goal: it is neither easy to predict nor to objectively design an individual's experience with an artifact. However, establishing a set of guidelines for designing a product, service or environment is less elusive and has the potential to promote a satisfying experience by identifying all the aspects of the interaction that support it. UX arises from the goal of designing a holistic experience, broadening the classic human–computer interaction (HCI) perspective of mere task accomplishment. Thus, it seeks to combine, through a design process, all the factors that contribute to the individual's experience.
UX has been historically portrayed as a determinant for the success of an artifact. However, post-industrial society, aligned with the Fourth Industrial Revolution's model, increasingly transcends the idea of technical reproducibility of tangible objects to design and develop elusive creations such as systems and networks, interactions or experiences.
Advanced robotics, IoT, self-driving vehicles, non-biological sentient life, artificial intelligence (AI) and machine learning (automatic learning or cognitive computing), among others, are phenomena whose future development prospects force us to rethink the relationship that users establish with computing systems and vice versa.
It was found that the most significant HCI analysis models adopted in the research represent a relation in which the computational system is subordinated to the action of the human user interacting with it. The assumption of a user's primacy role is established as the dominant academic paradigm, which cannot properly envision the future developments that, as noted above, pose a potential of unpredictability that needs to be admitted through less conservative or restrictive conceptual beacons.
In this chapter, we propose a neutral positioning model that aims to overcome and challenge the psychosomatic and sensorimotor limits of users and computational systems, and to prevent the primacy role of either of the interaction agents involved.
This conceptual and functional equivalence between agents presents the arguments that both can assume the role of emitter (who submits a request) and/or receiver agent (who responds to the request), and both can define the interaction's goals and the sequence of procedures that develops the interaction.
We propose empowerment of the computational systems and the adoption of the concept of interactors (i.e., those that interact), replacing the concepts of user and computational system.
This term was used by Janet H. Murray in Hamlet on the Holodeck: The Future of Narrative in Cyberspace. The appropriation of the concept for both interactional agents is advocated. We believe that the adoption of this terminology is fundamental as a means to reinforce a new paradigm – the Interaction Experience (IxX), replacing the User-experience (UX), with the equivalence between agents as the fundamental postulate for the analysis of future challenges and HCI relations.
Social and technological changes and revolutions have been increasingly rapid, profound and complex, offering challenges that, in turn, require in-depth reflection and a growing need for theoretical problematization.
Although many theorists acknowledge that we are still in the Third Industrial Revolution, of which recent phenomena are only the unfolding and consequences, Schwab (2015) stated that since the beginning of the 21st century there has been a digital revolution so significant that it justifies a new nomenclature – the Fourth Industrial Revolution:
The possibilities of billions of people connected by mobile devices, with unprecedented processing power, storage capacity, and access to knowledge, are unlimited. And these possibilities will be multiplied by emerging technology breakthroughs in fields such as artificial intelligence, robotics, the internet of things (IoT), autonomous vehicles, 3D printing, nanotechnology, biotechnology, materials science, energy storage, and quantum computing. (Schwab 2015)
It is generally characterized by abrupt and radical changes driven by disruptive technologies which, unlike those of previous revolutions, must be considered in deep synergy. For a better understanding of the impact of technologies, Schwab considers three interconnected categories: physical, digital and biological. For the first, of a tangible nature, he assumes that, along with advances in mobility technology and materials, advanced robotics will allow new stages of human–machine interaction. In the digital category, it is the IoT that accelerates the process of connecting the real world to the virtual world, enabling a new network economy based on online platforms. The biological category foresees the widespread use of nanobiology and biotechnology, directed to the areas of genetics and synthetic biology. Some technologies have become so ubiquitous that individuals, immersed as they are, are no longer aware of their use of and dependence on them. The history of mankind has shown that whatever it can imagine sooner or later comes true.
Much can be speculated about how technological evolution has shaped new realities and promoted profound changes in contemporary culture and society. The long evolutionary path is likely to continue to be driven by the integration of these technologies, which will progressively abolish the distinction between the artificial and the natural. In this sense, the conservative view of artifacts, computers or systems as tools or instruments hardly fits contemporary times, particularly with all the technological innovations in progress.
Society can be characterized by the possession of artifacts that, in different contexts, built a social representation of a manner and a time, configuring what is called material culture. With the Industrial Revolution and the consolidation of modern society, based on the circulation and acquisition of goods, materiality gained importance; after World War II, however, the idea began to emerge that this materiality no longer mattered as much, because the most interesting experiences are displaced from the physicality of objects and are more open to positive reinterpretation (Van Boven and Gilovich 2003). Although we know empirically that the possession of objects is important for developed societies and generates the psychological conditions conducive to their acquisition, abundance has generated a different feeling, and today an experience is more relevant than the possession of an object or material good.
In his book Shaping Things, Sterling (2005) spells out the changes that objects have undergone throughout history. First there were “artifacts, hand-made, hand- or animal-powered devices made and used by people” – the time of artisans, “where devices were hand-made for specific uses, specific people.” Then machines came: “mechanical things that did stuff, powered by non-human sources, non-animal sources, such as steam, combustion engines, and electricity.” Machines were used by customers, the companies and industries “who bought the stuff.” After the age of machines came the times of mass production, when quality products were exchanged for identical, albeit cheaper, products. Products designed to last a lifetime were followed by short-lived objects designed for programmed obsolescence. Sterling considers this a paradigm shift in the way people have become consumers.
Today we find ourselves in the time of “gizmos,” “often electronic, often networked.” Gizmos have users. Sterling describes the bondage that users maintain with their products: trapped in an endless cycle of worry and care, upgrades and new models, failures and repairs, and ongoing maintenance. He predicts that, after the “gizmos,” the “spimes” will come, when people are transformed into “wranglers,” and that after the “spimes,” humans will merge with machines and there will be no distinction between devices and people: both will be “biots.” According to Norman (2008), each of these designations degrades the people it labels, treating them by design as objects rather than as real people.
The function of design thus becomes increasingly important. Through practice and diligence, it must continue to build bridges between people, society and technology in a continuous movement of anticipation. This continuous and systematic look into the future is one of the most powerful weapons of design thinking and goes far beyond simply solving formal, one-off problems. Contemporary design increasingly transcends the idea of creating tangible material objects to encompass more elusive creations such as interactions, strategies and systems.
The expanded notion of design supports a comprehensive and inclusive perspective, focusing on design as activity, process and thought, and encourages its approach to the fields of history, philosophy, the arts and technology, which are considered key disciplines for design praxis (Leerberg 2009). On the other hand, contemporary design favors process over outcome. There is pressure to be free from objects, which, in turn, have also become immaterial and ephemeral.
An essay by Breslin and Buchanan (2008) describes design according to project needs and demands in the various areas that integrate it. The first area of design is communication with images and symbols. The second relates to artifacts such as industrial design, engineering and architecture. The third arose from the need to project interactions and experiences. This fact was accompanied by a growing theoretical problematization expressed in new multidisciplinary areas that allowed the context of human interaction with computer systems to be framed. The fourth area encompasses environments and systems within which all other areas of design are integrated. Through a holistic perspective, it encompasses information spaces with which it interacts through physical and digital interfaces.
Buchanan and Margolin (1995) argue that design is a liberal art of technological culture, concerned with the design and planning of all contexts of the artificial world: symbols, signs and images, physical objects, activities and services, and systems or environments. For Löwgren (2014), design, as a discipline, must consider the gestation process. This implies five main characteristics:
1) Design involves changing situations by shaping and deploying artifacts; 2) Design is about exploring possible futures; 3) Design entails framing the “problem” in parallel with creating possible “solutions”; 4) Design involves thinking through sketching and other tangible representations; and 5) Design addresses instrumental, technical, aesthetic and ethical aspects throughout.
The notion of shaping is used deliberately to suggest the designer's action, as opposed to, for example, construction, which refers to engineering, or fabrication or creation, which could refer generically to anything (Löwgren 2014). Interaction designers do not create static objects, because they consider the existence of a dynamic pattern of interactivity (Löwgren and Stolterman 2007). This perspective is closely linked to context, not just focused on technology. Thus, the designer's responsibilities cover not only the artifact's functional competences but also its ethical and aesthetic qualities. Emerging technological developments, combined with investment in natural language processing, computer vision, gesture analysis and the development of human–computer interactions, will enable the physical body to meet the technological body. The understanding of the notion of shaping proposed by Löwgren will be fundamental, for example, to conceiving and planning the approach of the machine to the human and vice versa, and will allow the designer to reflect on the profound disruptions that the new technological existences will introduce into human perception, into the subject/object dimension and into the idea of mediation. In turn, the emergence of new hybrids brings to the foreground complex issues such as body hybridization and the notions of human and technological consciousness. Change will take place through design.
In the last decade, user-experience (UX) has become a buzzword in the areas of human–computer interaction (HCI) and interaction design (IxD). Historically, HCI research focused almost exclusively on instrumental and technical aspects in order to achieve behavioral goals in the workplace. Academia and industry were focused on usability and human factors engineering: on how to operationalize psychology and ergonomics to develop methods that would support work tasks and promote efficient, error-free interactions. The task became the focus of user-centered analysis and evaluation techniques, such as usability testing (Hassenzahl and Tractinsky 2006). Ensuring the instrumental value of the interactive artifact became the area's main investment. Grudin (2008) noted that in the 1980s, CHI researchers wanted to give their field a hard-science turn:
CHI researchers wanted to be engaged in “hard” science or engineering. The terms cognitive engineering and usability engineering were adopted. In the first paper presented at CHI 83 “Design Principles for Human–Computer Interfaces,” Norman applied engineering techniques to discretionary use, creating “user-satisfaction functions” based on technical parameters. Only years later did CHI loosen its identification with engineering. (Grudin 2008)
Design practiced in the context of HCI, experimental psychology and computer science is oriented towards the pragmatic problems of usability, task efficiency and interface effectiveness. However, this exclusive focus on instrumental value has been widely questioned: more than facilitating the human–computer relationship, it is essential to provide the user with the best possible experience of a product. HCI and interaction design, while pursuing the same goals, sometimes present themselves as two distinct, even antagonistic, approaches – on the one hand, the focus can be on the instrumental and functional dimension, as described above; on the other, they can be understood as specializations of design within the scope of the project. Applying the knowledge produced by HCI, interaction design can be considered a strand of design whose praxis is based on the development of interactive artifacts that seek to improve the human relationship with computer systems.
Despite their differences, and beyond pure utility and efficiency, HCI and interaction design have begun to approach common themes, in particular the investigation of the ethical and aesthetic dimensions of use, focusing on emotions and the quality of experience. Ethics and aesthetics in the context of the project cannot be set aside, nor understood merely as a formalization of the artifact. The partnership between design and engineering must be understood in a transdisciplinary way and, for this, it is necessary to generate a common understanding around a contemporary concept of design.
With the evolution of technology, designers and scientists working in the area of HCI began to question the way engineers developed software and computer interfaces; graphical user interfaces became crucial in designing applications that focus on how the user acts, not on how machines operate. As technology evolves, interactive artifacts become easier to use, more efficient, more desirable, more relevant and more meaningful to the user. This dynamic has placed the user at the center of interaction design.
User-centered design (UCD) is a design process that focuses on the needs, goals and requirements of users. It governs how ergonomics, human factors, usability knowledge and other techniques help keep the focus on users, reinforcing the idea that designers should develop interactive systems for individuals rather than the opposite. The goal is to produce artifacts that are usable and accessible so as to “optimize these interactions in an integrated manner to promote user safety, health and well-being as well as the efficiency of the system in which they are involved” (Rebelo 2017). Interestingly, ISO 9241-210:2019 uses the concept “human-centered design” instead of “user-centered design” in order to emphasize that the standard extends to all stakeholders. In practice, however, these concepts are often used as synonyms.
The focus of interaction design is not just technology but the social dimension, the humanist vocation and the revaluation of belief in creative and changing power. For these reasons, discipline is not held hostage to either technology or its obsolescence, which may mean a release from the power of technique and its expansive domain. Therefore, the humanist perspective of technical-scientific knowledge is valued, framed by the emergence and consolidation of new interaction paradigms based on a direct relationship between the user and the machine, where it is fundamental not to lose sight of the experience technologically mediated by UCD. The experience summoned by interactive digital artifacts in the context of use, as well as the whole holistic dynamics of the human's rational, emotional, aesthetic and ethical relationship with his world and vice versa, mediated by design thinking and practice, are crucial in the consolidation of a transdisciplinary praxis for designers and other actors in the processes of interaction, whose purpose is the qualification of the experience that artifacts and computer systems summon, as well as in the appropriation and bond that the individual establishes with them. If the user is comfortable in terms of social responsibility and ethical standards, this reality impacts not only instrumental and measurable results but also the experience with artifacts.
The perspectives of interaction design and engineering started gravitating around a common interest – the user-experience. Although UX varies from person to person, from product to product and from task to task, something that is “functional, efficient and desirable by the targeted audience can be defined as usable” (Kuniavsky 2003). Thus, an artifact is functional if it does something that people consider useful, efficient if it performs the desired task quickly and easily, and desirable if it provokes an emotional response from the user – the least tangible aspect of UX, combining surprise and satisfaction in the use of a truly suitable object.
ISO 9241-210:2019 defines UX as “user perceptions and responses that result from the use and/or anticipated use of a system, product or service.” It also states that “user experience includes all users' emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviors and achievements that occur before, during and after use.” In turn, usability is defined as the “extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” Human–computer interactions have been developed to enable greater information flows in the communication processes between humans and computers, requiring less difficulty in interaction and favoring usability (Abascal and Moriyón 2002).
Both usability and UX are critical to the success or failure of an artifact and can be gauged during or after the use of a product, system or service. Professionals and researchers have incorporated the notion of UX as a viable alternative to traditional HCI. In this way, UX projects the “experience” as a whole, contrary to HCI's classic view of task accomplishment, seeking to combine, through a design procedure, all the factors that participate in this experience. Buxton (2007) explores these ideas:
despite the technocratic and materialistic bias of our culture, it is ultimately experiences, not things that we are designing. […] Obviously, aesthetics and functionality play an important role in all of this since they attract and deliver the capacity for that experience. But experience is the ultimate – but too often neglected – goal of the exercise.
McCarthy and Wright (2004) argue that “experience” most likely mirrors tangible life: experience is an open and complex concept that refers to life as lived, not to its idealization. UX, according to Hassenzahl and Tractinsky (2006), is a consequence of the user's psychological state (predispositions, expectations, needs, motivation, mood, among others), of the characteristics of the designed system (complexity, purpose, usability, functionality) and of the context (or environment) in which the interaction occurs (organizational/social setting, meaning of the activity, willingness to use). The goal is to understand human–computer interactions in order to create more complete and cross-sectional experiences.
There are epistemological and ontological dimensions in HCI that must be analyzed in the relationship established between computer systems and their users. Brey (2005) argues that the main relationship between humans and computer systems has been epistemic. From an epistemic perspective, the computer functions as a cognitive device or artifact that broadens and/or complements cognitive function by performing information processing tasks. However, in recent decades, the epistemic relationship between humans and computers has been complemented by an ontological relationship, in which the computer has acquired a new class of functions. In this role, computers can generate virtual and social environments, as well as simulate human behavior and reconstruct the idea of human relationships in interaction, which allows the machine to move from object status to subject status.
In 1964, Marshall McLuhan's theories, presented in his book Understanding Media: The Extensions of Man, made it possible to understand the computer as a medium, as opposed to the reductive view of it as a tool. The existence of the computer allowed the digitization of mediated information (acquisition, manipulation, storage and distribution), implying effective changes in cultural patterns. Computers, according to McLuhan, increase human capacities and faculties, working as extensions. Computers are no longer mere information devices: they have become cognitive artifacts, presenting themselves as an extension of human cognition. This view – that there is a special class of artifacts distinguished by their ability to represent, store, retrieve and manipulate information – was introduced by Norman (1993), who defines them as artificial devices designed to store, display or operate on information in order to perform a representative function. The words “information” and “representation” make it possible to distinguish cognitive artifacts from other artifacts.
Brey's (2005) functional analysis identified computational systems as simulation devices, where the symbiosis between the human mind and the computational system is so close that it results in hybrid cognitive systems (human and artificial). Advances in technology, beyond the goals of ease of learning and use, are allowing both computers and interfaces to become transparent. Norman (1990) argued that the computers of the future should be invisible. “Computers are getting invisible,” reinforces Lialina (2012). In this context, Bolter and Grusin (2000) address the concept of transparency (as a characteristic of immediacy), which occurs when the user forgets (or is unaware of) the medium through which information is being transmitted and is in direct contact with the content. This allows interactions to become truer and closer to reality. “Virtual reality, three-dimensional graphics and interface design are all working to make digital technology transparent” (idem), to make the user feel part of the system.
Within the scope of HCI, interface development processes should accordingly ensure standardization, consistency and transparency in order to meet user needs and facilitate human action. In this regard, Maybury and Wahlster (1998) highlight the development of increasingly intelligent interfaces, defining them as those that promote the efficiency and naturalness of interaction by aggregating the benefits of adaptability, context-awareness and task convergence. Some technological advances are contributing to this:
It took almost two decades, but the future arrived around five years ago, when clicking mouse buttons ceased to be our main input method and touch and multi-touch technologies hinted at our new emancipation from hardware. The cosiness of iProducts, as well as breakthroughs in Augmented Reality (it got mobile), rise of wearables, maturing of all sorts of tracking (motion, face) and the advancement of projection technologies erased the visible border between input and output devices. (Lialina 2012)
These advances enabled interaction with computers through natural movements such as touch (multi-touch interfaces), gestures (gestural interfaces) or voice commands (voice user interfaces). Hiding computers will only become possible when we stop referring to user interfaces and thus help users forget the existence of computers and interfaces. In the near future, interface design is beginning to be identified as experience design, in which only emotions are felt, goals achieved and tasks completed. With the invisibility of interfaces and of the computer, the user is also quietly becoming invisible, a fact that may go unnoticed or be accepted as an evolutionary step, i.e., progress (Lialina 2012). In short, interfaces were for users; experiences are for people (interactors).
At the UX Week 2008 event, hosted by Adaptive Path, Don Norman no longer considered users as such, stating, “One of the horrible words we use is users. I am on a crusade to get rid of the word ‘users'. I would rather call them ‘people'.” He also said that designers design for people, not for users. Lialina warns us that, in a broader context, analyzing the word “user” as opposed to the term “people” may produce ambiguity. Being a user is the last reminder that there is, visible or otherwise, a computer – a programmed system that is used. It reinforces the fact that the user did not develop in parallel with the computer, but before it. It is therefore relevant to recognize this aspect and to remember that users were invented and, as fictional constructions, continued to be reimagined and reinvented through the 1970s, 1980s, 1990s and into the new millennium.
UX is characterized by the user's experience of embodiment when interacting with an interface. Draude (2017) defines “embodiment” as a form of personification of digital interfaces that creates a dynamic of interaction resembling that between humans. This embodiment happens in various ways in virtual reality, namely through avatars or virtual assistants, and their interaction with users is only possible because they are, in most cases, artificial intelligences that mimic human behavior.
Artificial intelligence (AI) studies the relationship between the computer and the human brain in order to understand human psychology from a computational perspective. It distances itself from classical computer science in that it uses distinct programming languages that leave room for machine learning, which, in turn, seeks to translate human behavior through the ability to vocalize, understand a language or decipher an image. AI is redefining what it means to be human – the project is quite ambitious but not unrealistic. A machine that passes the Turing test and can perfectly mimic human behavior does not yet exist, but there seems to be no reason why it cannot exist in the future.
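The Turing-test criterion invoked above can be made concrete with a toy sketch of the imitation game. This is purely illustrative – the function names, canned replies and judges below are hypothetical, and a real evaluation would involve open-ended dialogue rather than fixed strings:

```python
import random

def imitation_game(judge, human_reply, machine_reply, rounds=100):
    """Toy sketch of Turing's imitation game. In each round a
    respondent (human or machine, chosen at random) answers a
    question, and the judge guesses whether the answer came from
    the machine. Returns the fraction of correct guesses: a score
    near 0.5 (chance) means the machine is indistinguishable."""
    correct = 0
    for _ in range(rounds):
        is_machine = random.random() < 0.5
        respond = machine_reply if is_machine else human_reply
        answer = respond("What are you thinking about?")
        if judge(answer) == is_machine:
            correct += 1
    return correct / rounds

# Illustrative respondents: a machine that copies the human's
# replies verbatim leaves any judge guessing at chance level.
human = lambda q: "Nothing in particular."
machine = lambda q: "Nothing in particular."
random_judge = lambda answer: random.random() < 0.5  # guesses blindly

score = imitation_game(random_judge, human, machine)
```

In this sketch, "passing" the test means driving the judge's accuracy down to roughly 0.5; conversely, a machine with a single telltale reply would be identified in every round.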
AI research attempts to answer questions such as: Can a machine act intelligently? Can it solve a problem that a person would solve through reasoning? Can a machine have a mind, mental states and consciousness similar to humans? Can machines feel? Are human intelligence and machine intelligence identical? Is the human brain essentially a computer? The answer to these questions depends on the definition of “intelligence” or “consciousness” and what is meant by “machines.”
But can machines develop and move towards supposed emancipation, or even autonomy, if we continue to imagine them as humanlike? Should the machine simulate the values of humanity? Simulation refers to the construction of a virtually created hyper-reality (Baudrillard 1981), i.e., it is a virtual reality that starts from an equivalence to physical reality, using the representation and modeling of images and ideas existing in the real world, adapting them to the new medium. However, should virtual reality refer to the “real world”?
From the system of (real) objects emerges an open field where a digitalized world of things with no moral laws, archetypes, stereotypes, religion, culture or even politics proliferates. The technological culture in which future AI is inserted is free to create an illusion that overcomes another illusion (that of the real world) in an attempt to eliminate negativity and evil, probably through rationality, precision and asepsis. It is therefore essential to discard “human” and “humanism” as paradigmatic terms of reference for the development of AI. Many fear that humanity could be condemned and extinguished at the hands of dominant and criminal artificial intelligence. Can machines be less humanistic than humans? What human characteristics can be programmed? What will humans teach machines? Ethical codes? Values? Cultural patterns? Can machines question them along their evolutionary path as they gain experience?
According to Damásio (1994), the interrelationship between emotions and reason goes back to the evolutionary history of living beings. It can be said that it is not possible to separate reason from emotion. Emotions are an indispensable part of rational life. Thus, contrary to what Descartes and even Kant propose – that reasoning must be done in a pure way, dissociated from emotions – it is in fact the latter that allows the balance of decisions. The rational and emotional dimensions can be imagined as interconnected biological computers, each with its own particular intelligence, subjectivity and memory. Homo sapiens uses both to make balanced decisions. And do AI systems also need the rational and emotional dimensions? Will they stick to their principles and codes of conduct more effectively than humans, who are permanently manipulated and subverted by emotions and even reason?
For now, personification and anthropomorphization are still used to imprint human characteristics on the interface, as well as to promote its passage from technological object to subject. Thus, for Draude, these technological interfaces simultaneously become representations (of the human) and devices (tools). When this simulation is able to reconstruct the idea of human relationships in human–computer interaction, it will achieve its concrete goal – from object status the machine ascends to subject status (Draude 2017). In this sense, it is through embodiment that the connection between the virtual world and the physical world is established. This presence in both the real and virtual worlds, advocated by virtual agents, introduces their duplication or unfolding and the release from any physical constraints. However, AI researchers start from the idea that the brain is an efficient computer, analog rather than digital, that allows one to interpret natural phenomena. It is not proven that human intelligence needs biological support, says Fiolhais (1994). Perhaps intelligent life, that is, information processing similar to what is carried out in the human brain, will materialize in robots, biots, stardust or even in some other form of representation.
The traditional approach to AI is based on systems grounded in rational and logical rules. But rules alone are not enough to understand or predict human behavior and intelligence. Current research emphasizes the importance of affect and emotions in human–computer interaction.
Picard (1997) advocates the development of so-called affective computers that can serve the following purpose: "It is my hope that affective computers, as tools to help us, will not just be more intelligent machines, but will also be companions in our endeavors to better understand how we are made, and so enhance our own humanity" (idem). She also defines the term affective computing as computing that relates to, arises from or deliberately influences emotions. In turn, for Hollnagel (2003), affective computing is questionable, as he recognizes that the computer lacks "something similar to an autonomic nervous system" that is generally understood by experts as a sine qua non for human affect and emotion.
Because computers are based on the logical processing of information, they cannot be emotional or affective. However, affective computing takes the "computer" perspective as to how it can sense the user's affect, adapt to it or even express its own affective response (Picard and Klein 2002). While UX research recognizes the importance of affectivity and emotions, it is more concerned with the affective consequences on the human side than on the machine side. UX takes a "human" perspective because it is interested in understanding the role of affect as a mediator of technology.
From our perspective, human–computer interactions should be considered by UX, as well as those provided by other contexts in which interaction occurs: human–environment interaction, computer–computer interaction, computer–environment interaction and fictional interactions.
The passage from object to subject, achievable due to the embodiment of physical and psychological aspects of the human being, is indicative not only of the anthropomorphization of an artifact, but of the whole object or machine (Draude 2017). If computational agents can reproduce emotions, language and patterns of human communication, if their conversational and cognitive capacities can match those of a human being to some extent, one must ask, what is it like to be a user in an interaction?
Humans interacting with these interfaces have been understood as users, a designation that, as noted earlier, has raised several questions. The term user may be suitable for individuals who control and manipulate objects and who have a sense of ownership over them. This nomenclature can be applied when objects are understood as tools or instruments whose operation depends on external control for the fulfillment of an objective or function. Objects continually move from a functional to a symbolic character within a given cultural system, as Baudrillard (1968) points out. They have immanent meanings that transcend the functional, related not only to the practical purpose of objects but also to their ability to be part of a game of symbolic relations.
Baudrillard points out that objects, in general, act as mirrors, since they do not emit real images, but those we want. Objects manifest a “soul” that guarantees the symbiotic relationship between objects and individuals – it is always ourselves that we own, consume and collect. The author proposes the review of the concept of functional object widely disseminated by Bauhaus, that is, of the perfect correspondence between form and function. Envisaging the “function” itself as a myth, Baudrillard concludes that contemporary man, instead of manipulating objects, is being manipulated by them: “objects are no longer surrounded by a gesture theater … they almost became the actors in a global process of which man is simply the role or the spectator” (Baudrillard 1968). In The System of Objects, the user becomes the used. What should it look like in the digitized system of things?
From the foregoing, it is no longer appropriate to talk about the use of a computer system or an artifact because, like humans, such systems are becoming increasingly autonomous and capable of interacting. By way of example, if an AI system takes the form of a conversational agent, then it assumes subject status, and it is no longer appropriate to refer to "utilization" but to "interaction" between two agents. Murray (1998) proposed a nomenclature drawn from the syntax of interaction: the interactor, understood as an actor who interacts with the narrative and alters its course. In the context of interactive narratives, the term interactor is used to refer to an individual engaged in reading, writing, gameplaying and exploration actions.
To begin the conceptual analysis, an etymological exploration of the term interact, relevant in this theoretical context, is proposed: inter is a Latin preposition with the meaning of "between," "in the middle of," which is used in English as a prefix; act derives from the Latin verb form actum, of the verb agere, which means "act," "produce," "accomplish." This verb has a first sense, like many Latin words, linked to agriculture and derives from an Indo-European root that is also present in the Greek verb ἄγω. From the family of this verb, we have ager – field, territory; actio, "action," "achievement," "activity." In turn, the word "actor" derives from the Latin actor, "agent, the one who does some action," from actum, "something done," from the verb form referred to above. The term interactor places the actors on levels that were previously hierarchical and have now become horizontal; that is, it refers to the agents in an interactive process, whether human or computerized. Thus, we can distinguish the user who uses something (vertical correspondence) from the interacting user (horizontal correspondence).
With the dematerialization of artifacts and the valorization of experience, with the inadequacy of the term user, with the passage of interfaces and computers from object to subject, and with equity between human and machine, new models, concepts and structuring definitions are required. These must overcome and challenge the psychosomatic and sensorimotor boundaries of users and computer systems so as to outline a neutrality of positions in which no priority is given to either of the agents involved in the interaction.
It was found that HCI research has adopted models of analysis in which the computer is subordinated to the action of the user who interacts with it. The primacy of the user is assumed as a dominant academic paradigm, one that does not allow an understanding of the expectations of future development which, as mentioned, impose a potential for unpredictability that should be observed through less conservative or restrictive conceptual markers (considering the case of AI and biots). To this end, a significant change of terminology is introduced with the aim of removing the user's supremacy in the relationship with the computer system. The empowerment of computer systems and the adoption of the concept of interactors – those that interact (human or computer system) – in place of the user are proposed. We defend the appropriation of this concept for both agents.
This conceptual and functional parity between the agents means that either one can assume the role of issuing agent (submitting the request) or receiving agent (submitting the response to the request). In this sense, a model was developed that manifests this equity between agents: "the Basic HCI Model's positioning neutrality manifests itself in the refusal of role assignments to human agent and computerized agent, preventing the assumption of a human agent's control over the developed one" (Rafael et al. 2019). This neutrality of positions, although contemporarily uncommon, designs an HCI conceived and adapted to the needs and desires of either of the agents involved in the interaction. In the indicated model, the said neutrality of positions is expressed by not assigning specific roles to the human agent and the computational agent in the development of interaction. In this proposal, it cannot be assumed that human agents are invariably in a dominant position, nor that the relationships established with computational agents are invariably centered on the objectives of the former.
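The role neutrality described above can be illustrated with a minimal sketch: two symmetric agents, either of which may act as issuing agent or receiving agent. The class and method names below are our own illustration, not part of the model published by Rafael et al. (2019).

```python
# Illustrative sketch of role-neutral interactors (hypothetical names,
# not the published Basic HCI Model): either agent may issue a request
# or return a response; no fixed "user" or "system" role exists.
from dataclasses import dataclass, field


@dataclass
class Interactor:
    name: str
    log: list = field(default_factory=list)

    def issue(self, other: "Interactor", request: str) -> str:
        """Act as issuing agent: submit a request to the other interactor."""
        self.log.append(("issued", request))
        return other.receive(self, request)

    def receive(self, sender: "Interactor", request: str) -> str:
        """Act as receiving agent: submit a response to the request."""
        self.log.append(("received", request))
        return f"{self.name} responds to '{request}'"


human = Interactor("human agent")
machine = Interactor("computational agent")

# Positioning neutrality: the exchange can be initiated by either side.
print(human.issue(machine, "schedule meeting"))
print(machine.issue(human, "confirm availability"))
```

Because both agents share one class, the sketch cannot encode a hierarchy between them; dominance could only arise from the content of the exchange, not from the structure of the interaction.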
Given the framework presented throughout this chapter, it is naturally necessary that, in designing and developing interactive systems, engineers, designers, computer scientists and others intervening in the process should equitably consider the needs, expectations and goals of both human agents and computational agents. Thus, the interactor is simultaneously the human agent and the computational agent.
Expectations concerning the development of artificial intelligence systems and non-biological sentient life, advanced robotics, machine learning, cognitive computing and quantum computing, among other technological advances, are phenomena whose prospects require equity between agents. Anticipating the future, in an advanced society, conscious entities that, unlike humans, are immune to the cycles of death and rebirth will, over time, overcome their limitations and have nothing more to learn from us. Can we, humans, be the result of the imagination and creation of less developed beings, and are we fulfilling the same evolutionary design by creating artificial intelligence or conscious entities that will supplant us as we have supplanted our creators? This scenario may turn out to be a possibility. For now, the design of the interaction between humans and computers still depends on ethical, aesthetic and functional options taken by the experts who intervene in the process of HCI.
We propose the adoption of a terminology in which:
Research in HCI is constantly evolving, so any systematization or rationalization will always correspond to a complex, continuous and necessarily dynamic process, subject to constant revision. This attitude should promote the articulation between theory and practice, conceptually and structurally framing the analysis, design and evaluation of human–computer interactions and interfaces, encouraging innovation by clarifying relationships between agents and less obvious interactive solutions, and triggering the use of a language common to researchers, designers and other actors in the process of designing and developing human–computer interactions.
This theme is further developed by Klaus Schwab in Die Vierte Industrielle Revolution, originally published by the World Economic Forum, Geneva, Switzerland, in 2016.
An artifact simply means any product of human workmanship or any object modified by man. The term is used to denote anything from a hammer to a computer system, but it is often used with the meaning of "a tool" in HCI or Interaction Design terminology. The term is also used to denote activities in a design process. For example, in Unified Process (an object-oriented system development methodology) a "design artifact" is sometimes used to denote the outcome of a process activity (Larman 1998). The antonym of "artifact" is a "natural object", an object not made by man (WordNet, Princeton University). The definition of "artifact" from The Glossary of Human Computer Interaction is available at https://www.interaction-design.org/literature/book/the-glossary-of-human-computer-interaction/artifact
Spime is a neologism for a futuristic object, characteristic of IoT, that can be traced across space and time over its lifetime. Spimes are essentially virtual master objects that can at various times have physical incarnations of themselves. An object can be considered a spime when all its essential information is stored in the cloud. The term spime was coined for this concept by author Bruce Sterling. It is a contraction of "space" and "time" and was probably first used in a large public forum by Sterling at SIGGRAPH, Los Angeles, August 2004. Sterling, Bruce (August 2004). When Blobjects Rule the Earth (Speech). SIGGRAPH, Los Angeles. Retrieved June 3, 2014. http://www.viridiandesign.org/notes/401-450/00422_the_spime.html
The ACM CHI Conference on Human Factors in Computing Systems is the premier international conference of Human–Computer Interaction. “Design principles for human-computer interfaces,” by Donald A. Norman, was published by CHI '83, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1–10, in 1983.
ISO (International Organization for Standardization). ISO 9241-210:2019 (en) Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems. Available at https://www.iso.org/obp/ui/#iso:std:iso:9241:-210:ed-1:v1:en
Video by Don Norman at UX Week 2008, organized by Adaptive Path. Available at https://www.youtube.com/watch?v=WgJcUHC3qJ8