Design and AI: prospects for dialogue

Diseño e IA: perspectivas de diálogo

Ceconello, M.; Spallazzo, D.; Sciannamè, M.

Polimi - Dipartimento di Design - Politecnico di Milano


ABSTRACT: As artificial intelligence (AI) begins to pervade everyday life, design needs to raise questions and take a leading role in the dialogue between the latest technological innovations and their users. Through an analysis of the current literature and an empirical study, this article aims to highlight the role designers can assume in this context, as AI becomes both their ideal object of study and a working tool. In particular, our analysis of the function, language and meaning of products integrating AI, carried out from a design standpoint, seeks to demonstrate that the two fields can be symbiotic, culminating in the prospect of an interaction that is not only functional and understandable, but also practically and emotionally significant. We show that, besides meeting many of the theoretical requirements, Interaction Design already has sufficient experience to serve as a mediator between artificial intelligence and human beings. Pursuing natural interaction, through tangible materialization and an empathic focus on users rather than on technology, might represent a pathway towards inquiry and experimentation combining these two disciplines.

KEYWORDS: artificial intelligence; Interaction Design; design-driven scenarios; aesthetics of interaction; natural language

RESUMEN: Como la inteligencia artificial (IA) está empezando a invadir nuestra vida cotidiana, el diseño tiene que plantear preguntas y desempeñar un papel rector en el diálogo entre las más recientes tecnologías y sus usuarios. A través de un análisis de la literatura actual y de un estudio empírico, esa argumentación tiene como objetivo la investigación sobre la función que los diseñadores pueden asumir en ese contexto, mientras que la IA se convierte en su ideal objeto de estudio e instrumento de trabajo. En particular, por medio de nuestro análisis sobre función, lenguaje y significado de los productos que integran IA, llevado a cabo del punto de vista del diseño, se quiere demostrar que los dos ámbitos puedan ser simbióticos, culminando en la prospectiva de una interacción que no sea solamente funcional y comprensible, sino también prácticamente y emocionalmente significativa. Se desprende de lo anterior que, además de muchos requisitos teóricos, Interaction Design, ya posee experiencia suficiente para que pueda ser considerado un mediador entre inteligencia artificial y ser humano. Averiguar una interacción más natural, a través de una materialización tangible y una empática focalización en el usuario en lugar de la tecnología, podría representar un camino hacia la investigación teórica y la experimentación que entrecrucen las dos disciplinas.

PALABRAS CLAVE: Inteligencia artificial; Interaction Design; design-driven escenarios; estética de la interacción; lenguaje natural

1. Framing artificial intelligence

According to Paola Antonelli, interviewed during the AI-Artificial Imperfection roundtable held in New York in March 2018, Artificial Intelligence (hereafter AI) is the material designers will be called on to engage with in the coming years (Antonelli, 2018). This statement by the curator of MoMA's design section may seem provocative, but it actually establishes a convergence between the fields of AI and design.

AI – most broadly defined as a system capable of learning, reasoning and acting autonomously and adaptively – is currently spreading throughout industrial products, services and interfaces intended for the public, to be used at home, in the workplace or in the public sphere. These applications typically fall within the scope of design and introduce new ways of interacting that designers have yet to thoroughly investigate. There are several examples of AI-based artefacts, from the Nest thermostat – which programs itself according to users' habits and optimizes energy consumption while maintaining a high level of environmental comfort – to the most widespread systems that enhance tools we use daily, such as Netflix's or Amazon's recommendations.

In the sphere of AI, the most interesting potential areas of intervention for design can be found in Machine Learning (ML): a process of constant learning based on the statistical analysis of enormous amounts of data in order to recognize patterns, which are used to formulate predictions that then influence the machine's response to the inputs it receives. ML can be supervised, based on administering data categorized by humans; reinforcement-based, learning from errors and receiving rewards (reinforcements) on achieving set goals; or unsupervised, in which the machine interprets uncategorized data in complete freedom, developing skills and behaviours as a result (Hao, 2018). While the unpredictability of this last model makes it less interesting to designers, the recently introduced personal assistant devices clearly identify a possible design matter. These objects are fundamentally technology-driven, manifesting their ability to learn through continuous conversation with their owners. They represent a frontier that draws ever closer to the idea traditionally associated with AI – that of sentient robots capable of simulating human behaviour to a believable degree – yet they show no hint of a user-centred or interaction-focused design.
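
To make the supervised paradigm concrete, the following minimal sketch (ours, not drawn from the article; the thermostat-like labels and feature values are invented for illustration) shows a machine "learning" from human-labelled examples and then formulating a prediction for an unseen input, here with a toy nearest-neighbour rule in plain Python.

```python
# Toy illustration of supervised ML: the model is given data categorized
# by humans (labelled examples) and predicts a category for new input.
# A 1-nearest-neighbour rule is used as the simplest possible learner.

def predict(labelled_examples, new_point):
    """Return the label of the closest training example (1-NN)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(labelled_examples, key=lambda ex: distance(ex[0], new_point))
    return closest[1]

# Hypothetical human-labelled data: ((room temperature, occupant activity), category)
training = [
    ((20.0, 0.1), "heating_off"),   # warm room, little motion
    ((15.0, 0.9), "heating_on"),    # cold room, occupants active
    ((16.0, 0.8), "heating_on"),
    ((21.0, 0.2), "heating_off"),
]

print(predict(training, (15.5, 0.7)))  # → heating_on
```

Reinforcement and unsupervised learning differ only in what drives the update: rewards on reaching goals, or statistical structure found in uncategorized data.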

These applications exemplify the double interpretation of AI that has long enlivened the scientific debate: on the one hand, McCarthy's position focuses on creating a super brain capable of simulating human behaviour; on the other, Engelbart's position revolves around amplifying human potential through AI (Winograd, 2006).

Engelbart's concept of augmentation views AI as an instrument capable of enhancing the human intellect and potential rather than replacing them (Engelbart, 1962): an approach much more similar to those historically expressed in the field of Human-Computer Interaction (HCI) (Grudin, 2006), which focused more on the user than on the machine.

Winograd summarizes these two different positions as two distinct approaches to the theme of AI: a rationalistic approach based on the conviction that essential aspects of thought can be grasped in a formal symbolic representation, and a design-oriented approach focused on people’s interactions with their surrounding environments rather than modelling the mechanisms that operate within intelligent systems (Winograd, 2006).

These definitions bring the world of AI closer to our other field of interest, Interaction Design, a field that considers interaction in a holistic and more inclusive way than HCI: as the mutual influence among people, artefacts and the contexts in which they are positioned; as dialogue, connection, and social involvement (Kolko, 2011). Since Interaction Design is interpreted as the art of facilitating interaction among human beings through products and services (Saffer, 2009), it becomes clear that AI falls fully within the scope of designers' activity.

Currently, theoretical reflections and experimentations portraying the convergence of AI and design are quite limited. Thus, the aim of this contribution is to advance arguments on the role of design in the context described above, setting a possible path for future research. For this purpose, we pose two main research questions: what can design do for AI? And, conversely, what can AI do for design?


2. Methodology

Based on an in-depth bibliographic review of the currently available sources and an analysis of specific case studies, we propose a preliminary path in response to those questions.

In particular, this discussion is part of a broader research context and builds on three filters typical of a design viewpoint: function, language and meaning (Kolko, 2011).

In this case, the investigation of the function(s) of AI serves as a means to better understand this technological field. Indeed, function represents the most objective feature of AI outcomes, as it can be inferred from the observation and evaluation of defined aspects, in particular those related to an object's functional use, affordance and modalities of interaction. For this purpose, the theoretical enquiry is supported by an empirical analysis: ten virtual assistants materialized into home devices have been investigated according to their (i) shape, (ii) input and output modalities, (iii) feedback systems and (iv) discoverability (Saffer, 2009) of functions, specifically considering how proactive those artefacts are (Table 1). They were selected according to four criteria: they had to be (i) multipurpose home assistants, (ii) specifically designed as first-party hardware, (iii) already commercialized or coming in the near future and (iv) able to control other smart home appliances. In the case of families of products, we only considered the first release (therefore, we only analysed Google Home and Amazon Echo).

Then, a closer look at the relationship that can be established between the two fields of design and AI is fostered by some reflections on the language that AI is currently adopting and the role that design may have in shaping it. This approach (which foresees designers giving expression to the spreading technology) comes from a theoretical investigation of the mediating role of design – and Interaction Design in particular – and is integrated with evidence derived from the critical considerations pointed out by the empirical research on domestic virtual assistants as emblematic representatives of current AI manifestations.

Finally, the essence of the argument can be found in the meaning of AI technology, representing the comprehensive synthesis of function and language solutions towards the definition of a shared direction for future work. This last part mostly follows the authors' sensibility as designers in highlighting the potential of those systems, integrating design into AI-based systems and vice versa.


Table 1 – Comparative analysis of home virtual assistants


3. The designer’s challenge: translating function

Clarifying the function of an AI-based system is as difficult a task as defining the capabilities and potential of a human being. A primary hint on this subject comes from shape. As is commonly recognized, form follows function, and this is confirmed by the analysis of home virtual assistants. In fact, two main formal paths emerge (with just a few exceptions): on the one hand, there are smart speakers following basic and mainly regular shapes; on the other, anthropomorphic or zoomorphic robots built according to geometric abstractions, but with recognizable heads and bodies. As a result, in the first case, the function of these objects is to showcase an effective speaking technology, while the anthropomorphic shape corresponds to devices aiming at establishing a social contact.

Yet, beyond this general indication, the properly defined functions of virtual assistants are endless: Amazon's Alexa system, in December 2017, had approximately 26,000 functions (White, 2018) and the numbers are rapidly increasing. In such a context, everything that design, and Interaction Design in particular, has taught us over the years is put to the test when the object of design is not a simple system, be it a product or interface, but rather a system we might define as sentient.

Let us take the concept of affordance, for instance. Discussed for the first time by the psychologist Gibson (1979) as the possibility of perceptible action, that is, a product's ability to suggest possible actions that might be carried out with it, this concept was then successfully revisited and expanded in the field of design by Don Norman. Playing with the concepts of real and perceived affordance, the author (1988) investigated the difference between the real potential of a product/interface and what it suggests to the user, especially if (s)he is approaching it for the first time. It is clear that the gap between the real and perceived affordance of devices such as the Amazon Echo is enormous: in a recent article addressing the affordance of virtual assistants, White uses the term discovery to indicate the activity of a user who is exploring the functionality of such devices (White, 2018). In fact, there is no question that users may have trouble discovering the thousands of functions potentially offered by a device when they have such minimal interactions outside of voice commands. In the above-mentioned article, White proposes some possible solutions to facilitate exploration on the part of users, including a system for suggesting potentially interesting functions based on general and specific usage data and context (White, 2018). Proactivity thus becomes a huge facilitating feature to help users discover a virtual assistant's multiple functions, which is why we investigated its presence in relation to those objects, highlighting its still scarce diffusion. In fact, it is limited to some anthropomorphic assistants integrating a camera. In this way, they can rely on more data to inform their suggestions: they not only evaluate noises or routines, but also read body language and can understand what their users are doing. Furthermore, they can recognize their users and be triggered even just when they are passing by.
In this context, the highest point in terms of empathic interaction and proactivity is represented by Olly, which proves its adaptability by developing and manifesting its own personality according to its interlocutor's. On the contrary, speaker-based assistants are limited to being unobtrusive respondents when prompted.

This example shows that the contribution of design can benefit AI-based systems and that, conversely, design can benefit from these systems. The solution White proposes, in fact, represents an entire field of possibilities for design, a stimulus for creating new application scenarios (Manzini, 2001) that facilitate the act of employing the device and making the most of it.
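
To give a flavour of the kind of discovery aid discussed above, here is a deliberately simple sketch of suggesting unexplored functions from usage data. It is our own illustration, not White's actual system: the skill names, the skill-relatedness mapping and the ranking rule are all invented assumptions.

```python
# Hypothetical sketch: suggest unexplored assistant skills by ranking
# related skills according to how often their "anchor" skill is used.

from collections import Counter

# Assumed mapping: frequently used skill -> related skills the user may not know.
RELATED_SKILLS = {
    "play_music": ["set_sleep_timer", "multi_room_audio"],
    "weather": ["commute_traffic", "umbrella_reminder"],
}

def suggest(command_log, already_known):
    """Return unknown related skills, ordered by the usage of their anchor skill."""
    usage = Counter(command_log)
    suggestions = []
    for skill, _count in usage.most_common():
        for candidate in RELATED_SKILLS.get(skill, []):
            if candidate not in already_known and candidate not in suggestions:
                suggestions.append(candidate)
    return suggestions

log = ["play_music", "play_music", "weather", "play_music"]
print(suggest(log, already_known={"play_music", "weather"}))
```

Even this crude frequency-based heuristic hints at the design space: which signals (routines, context, body language) feed the ranking is precisely where designers can intervene.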

Indeed, the traditional way of conceiving Interaction Design, with feedback and feedforward (Saffer, 2009), only proves efficient if the object of study/design is a system programmed to respond consistently to a given input. In the example of virtual assistants, their behaviour is quite unpredictable for the user (and potentially for the designer and/or programmer) who makes a request. In many cases, (s)he can only imagine or expect a certain outcome, being sure only of the more usual interactions (through apps or buttons) and of the basic, routine commands that (s)he has performed various times.

The proactivity and partial unpredictability of devices enhanced by the use of AI present designers with what Rittel and Webber (1973) would define as a wicked problem: a complex challenge that is difficult to define and solve.

In fact, imagining the use scenarios of a product by outlining the so-called User Experience (UX) involves knowing how the product itself will behave in every possible condition. This certainty is unthinkable in the case of an intelligence which, although artificial, is able to learn over time and eventually display different responses to the same input.

Not surprisingly, UX is one of the aspects that researchers have analysed most thoroughly in studying AI-related design. Although rather preliminary, studies at Carnegie Mellon University and Aarhus University have examined the UX enabled by introducing AI. In particular, this research focused on ML as a design material and on its effects in terms of user experience (Dove, Halskov, Forlizzi, & Zimmerman, 2017; Yang, Sciuto, Zimmerman, Forlizzi, & Steinfeld, 2018), and in-depth studies on the use of virtual assistants in everyday life were also conducted (Sciuto, Saini, Forlizzi, & Hong, 2018). These studies show that designers have a very generic, non-specific understanding of ML and, consequently, have made little investment in capitalizing on the integration of ML services in their UX activities (Dove et al., 2017). This lack of maximization is due to designers' lack of academic training on the subject and the fact that they have very few opportunities to engage with design-driven projects. As in the early days of HCI, in fact, engineering fields are still leading the way, and design is currently still in the phase of intoxication and experimentation that normally characterizes the introduction of a new technology (Antonelli, 2018).

This also emerges from the analysis of interactions with domestic virtual assistants. In particular, we considered input, output and feedback modalities as relevant factors for understanding how typical design elements are currently treated. Among these, vocal inputs and outputs are the most employed, reflecting the latest technological achievements of AI, especially in the Natural Language Processing (NLP) field, which has improved to the point that it can easily understand human requests and respond accordingly. All the other input systems do not differ from those usually associated with devices like smartphones or tablets: through-app interaction, buttons, touch surfaces and displays are the most common. Jibo and Aido are the only examples responding to inputs that depart from those references in favour of a more natural and tangible dimension. The outputs also demonstrate the influence of existing technology: they vary according to the integrated tools, but mostly take advantage of audio and video content. Yet, in this dimension, it is possible to observe some effects linked to AI technology and purpose: in order to respond to a specific demand, all the selected devices are able to interact with other home appliances, while some of them, namely those aiming to establish a social contact, also move. For instance, Jibo can dance to entertain its users, Aido and Zenbo may move around the house to welcome guests or check on something, whereas Olly uses movement as a reinforcement for communication. These aspects already show new possibilities which can be exploited by designers to create new experiences. However, probably the most important ingredient in simulating an interpersonal interaction is the feedback system.
Some of these are typical augmented feedbacks (Saffer, 2009), revealing the internal state of the object while a function is processing: almost all the analysed objects use lights to communicate their status. For smart speakers in particular, this is the only visible feedback, while other devices like Olly turn lights into a true expressive system. Additionally, voice is a feedback directly consequential to the AI technology: whether in a rigorous or more informal way, with a robotic or person-like tone, these devices let their users know if and what they have understood before providing the requested content. This kind of feedback is especially positive in the interaction with anthropomorphic assistants, as it gives the impression of being engaged in an actual conversation with a companion, not just talking to a machine. Moreover, the home assistants equipped with a display or able to move take advantage of these characteristics when giving feedback, trying to provide a natural interaction and making the machine appear more alive: they can turn their heads towards the speakers or blink the eyes displayed on their screens.

Because physical actions are required only in a very limited manner, virtual assistants offer almost no inherent feedback (White, 2018).

At the same time, the study of user experiences among the owners of virtual home assistants, and Alexa in particular, shows that users are quick to discover the objective limits of these devices, especially limits stemming from difficulties in understanding and a lack of affordance, and rapidly settle for a routine use involving only a few repetitive commands (Sciuto et al., 2018). As the authors themselves suggest, this situation calls out for designers to step in and begin conceiving of new scenarios of interaction.


4. In search of a language for AI

By mediating between technology and aesthetics without losing sight of the human dimension (Kolko, 2011), designers play a fundamental role especially in this blurry transitional period. The mediation between innovation and habits may serve to reassure users about a field that remains relatively unknown and still needs to be defined and regulated in society. As the pervasive advertising campaigns aired during the last Super Bowl indicate (Patterson, 2019), a technology that is increasingly similar to us, one capable of surpassing abilities we previously considered to be exclusively human and that tends to permeate everyday life, can generate anxiety and fear. However, as Antonelli (2018) argues, the skill of translating abstract and monstrous concepts into something familiar and ordinary is typical of designers and artists: just think of the time when computers were used by only a few individuals and the masses had not yet grasped their potential. To explain the real potential of applying AI, designers must use a language, the form and content of which is expressed through context and use (Kolko, 2011). In other words, they must tackle the aesthetics of interaction, an aspect that goes beyond formal beauty or usability to also speak to more complex emotional and cognitive processes (Xenakis & Arnellos, 2013). The fields of HCI and Human-Centred Design (HCD) understand it respectively as experience, the study of people using computational machines, and expression, referring to the ways individuals interact with artefacts (Graves Petersen, Hallnäs, & Jacob, 2008). Focusing on use, therefore, aesthetics translates into choices about the form, material, finishing details and behaviour of interactive objects. 
As the preliminary work by Soranzo, Petrelli, Ciolfi, and Reidy (2018) demonstrates, design can contribute widely to this field: research into the effects that the materials of the language of design may have in a multi-sensory interaction is still in its early stages. Designers' work of most effectively integrating and conveying the potential of this technology into everyday objects must also extend to psychological considerations: from the do-level, or scope of interaction, to the motor-level, or how to achieve it, and the be-level, that is, the reason why it makes sense to pursue it (Lenz, Diefenbach, & Hassenzahl, 2014). To date, the literature has not yet provided an unambiguous definition of the relationship between these parameters, and designers beginning to work on the aesthetics of interaction tend to focus either on the quality and underlying motivations of the interaction, without investigating ways to achieve them, or on a detailed description of the aesthetics of interaction while ignoring how these modes of interaction might be significant (Lenz et al., 2014).

According to Sullivan (1896), the form of an artefact should be closely linked to its function, just as a specific interaction should imply clear motivations. This can prove difficult, however, when talking about AI: in addition to the multiple functions we referred to earlier, the potential of these systems can transcend even the intentions of their creators. Besides software, interactive speakers represent a tangible rendering of AI in that they aim to generate interpersonal interaction by recognizing and reproducing human speech. However, plagued as it is by mutual misunderstandings, this dialogue mainly appears to highlight the still-significant gap between the human and digital spheres, just as early HCI did.

To facilitate more natural and user-centred interaction, initial research shows that users perceive a complex sensorial experience in which the tactile component plays a fundamental role, as it is perceived as more human. People involved in such interactions significantly modify their behaviour and reactions (Liu & London, 2016; Soranzo et al., 2018), a finding that in turn suggests a productive direction for further research.

If it is true that design can constructively investigate interactions between users and AI, it is equally true that, when AI is translated into the physical world, it likewise offers designers completely new stimuli. Today, designers must necessarily foresee, favour and facilitate interactive modes that are delimited within a restricted range of possibilities, thus giving rise to static interactions with analogue as well as digital objects. Despite users' infinite creativity, in fact, in terms of programming, the response of an object or system remains trapped inside the consequential logic of "if... then". The adaptive nature of AI algorithms, in contrast, allows artefacts to evolve and even to recognize and respond appropriately to the emotions of their users (Liu & London, 2016), thereby providing design with a vast range of new scenarios to explore and make the most of.
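
The contrast between consequential "if... then" logic and adaptive behaviour can be sketched as follows. This is our own illustration under invented assumptions (the commands, options and reward scheme are hypothetical): a static mapping always returns the same response, while an adaptive agent revises its choice based on feedback from past interactions.

```python
# Static interaction: the same input always yields the same output.
def static_response(command):
    rules = {"lights": "lights on", "music": "playing default playlist"}
    return rules.get(command, "sorry, I did not understand")

# Adaptive interaction: the response to the same input can change over time.
class AdaptiveAgent:
    """Learns which option the user prefers via simple reward feedback."""
    def __init__(self, options):
        self.scores = {opt: 0 for opt in options}

    def respond(self, _command):
        # Pick the option with the highest learned score.
        return max(self.scores, key=self.scores.get)

    def feedback(self, option, reward):
        self.scores[option] += reward

agent = AdaptiveAgent(["default playlist", "jazz playlist"])
agent.feedback("jazz playlist", +1)   # the user corrected the choice once
print(static_response("music"))       # unchanged over time
print(agent.respond("music"))         # now "jazz playlist"
```

However reduced, the second pattern captures the design-relevant difference: the artefact's behaviour is a moving target, which is exactly what makes UX scenarios harder to foresee.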

Looking at the ten domestic virtual assistants under study, it is clear that designers are still in search of a language for AI: our study underlines that these devices are designed and interpreted according to their similarity to other, known objects, or to the abstract expectations that our culture's speculations have encouraged. As highlighted in the previous section, virtual assistants are in fact embodied in products that either resemble a loudspeaker or aim for an anthropomorphic feeling.

It is no coincidence, then, that the most common use of speaker-shaped intelligent assistants is to play music (Sciuto et al., 2018). Despite the introduction of NLP features, smart speakers seem to be perceived according to how they appear, namely as loudspeakers with 360° sound diffusion, and the producers' own communication encourages this perception. On the other hand, anthropomorphic devices are commonly described as actual assistants, companions and home managers, and they perfectly embody these features by assuming a humanoid figure.

The challenge for design is to achieve a balance between product familiarity and freedom of innovation (Clapper, 2018), between designer control and the overwhelming power of technology. The risk we run, in fact, is that of creating artefacts that cannot be deciphered – except by those who created them – or that are carried away by the unpredictability of ML. As a matter of fact, domestic assistants are usually characterized by tens of thousands of skills that continuously grow and develop, thanks also to engaged communities. And yet, those multiple skills are rarely used, as discussed by White (2018) and confirmed by Sciuto and colleagues (2018) through quantitative and qualitative analyses of Alexa use, and they are mostly unknown to final users.

By setting our sights on responding to human needs and taking into account their specificities (including the mechanisms of automation caused by habit), however, designers can capitalize on just the right degree of predictability to balance novelty and usability through a shared language that avoids complicating the UX (Fisher, 2018).


5. Conclusions. The contribution of design – a meaningful synthesis

Although design, in its attention to detail, might make possible an ethereal dialogue with users (Kolko, 2011), its relevance depends on the meaning that receivers attribute to it. This factor has yet to be sufficiently investigated in the context of AI. What significance do we attribute to this technology? What is its role in society? And above all, why do we need it?

Our impression is that the field is still in the experimental phase that characterizes the introduction of every new technology and that design should take a leading role in guiding a human-centred transition towards meaningful products. Programmers have solved these issues in some specific cases, typically those involving software integration, and yet voice assistants are emblematic proof that we are still plagued by ambiguity. So, how can our interactions with AI become meaningful?

On one hand, HCD can be a source of meaning: by bringing the use of AI into alignment with human needs, we could avoid the trap of creating extremely powerful systems designed to solve problems that don’t exist (Lovejoy, 2018). On the other hand, well-established theories (Merleau-Ponty, 1945) argue that our experience of the world through the body and situated action is a generator of meaning. It follows that experience and intuition are more important than abstraction (Hummels & Overbeeke, 2010) and, therefore, by identifying appropriate languages and functions we can make interaction mechanical, tangible and meaningful once again.

Conversely, if we consider AI to be a replica of human intelligence in its very nature and apply this similarity to the ways it expresses itself as well, the meaning of interaction can be sought precisely in the features of interpersonal interactions. Although quite different from ML, the digital agent described in Marti's work (2010) identifies a key point for solving this problem: what generates meaning in interaction is not only our perception of the world, but also our experience of being perceived by it. Mutual influence has been shown to lie at the foundations of the aesthetics of the experience between two sentient beings, and this mutuality is expressed through perceptual crossing, that is, involving all the senses across the board when interacting with artefacts.

By synthesizing the key principles identified for clarifying the function of a communicative language and developing it, interactive systems based on AI would be able to demonstrate their responsiveness to users on multiple levels and thereby generate significant interaction. Apple seems to be moving in this direction with its HomePod, which will be able to interact through gestures and tactile feedback, as well as by interpreting emotions.

The challenge posed by Antonelli thus proves to be a forecast for the future: designers acting as mediators between technological innovation (AI) and everyday life to create innovative scenarios.



Antonelli, P. (2018, February 8). AI Is Design’s Latest Material. Retrieved from

Clapper, G. (2018, October 4). Control and Simplicity in the Age of AI. Retrieved from

Dove, G., Halskov, K., Forlizzi, J., & Zimmerman, J. (2017). UX Design Innovation: Challenges for Working with Machine Learning As a Design Material. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 278–288). New York, NY, USA: ACM.

Fisher, K. (2018, March 20). Predictably Smart. Retrieved from

Gibson, J. J. (1979). The Ecological Approach to Visual Perception (1st ed.). New York, London: Taylor & Francis.

Graves Petersen, M., Hallnäs, L., & Jacob, R. J. K. (2008). Introduction to Special Issue on the Aesthetics of Interaction. ACM Transactions on Computer-Human Interaction, 15(4).

Hao, K. (2018). What is machine learning? MIT Technology Review.

Hummels, C., & Overbeeke, K. (2010). Special Issue Editorial: Aesthetics of Interaction. International Journal of Design, 4(2), 1–2.

Kolko, J. (2011). Thoughts on Interaction Design (2nd ed.). Burlington, Massachusetts: Morgan Kaufmann.

Lenz, E., Diefenbach, S., & Hassenzahl, M. (2014). Aesthetics of Interaction – A Literature Synthesis. In NordiCHI ’14. Helsinki: ACM New York.

Liu, X., & London, K. (2016). T.A.I: A Tangible AI Interface to Enhance Human-Artificial Intelligence (AI) Communication Beyond the Screen. In DIS ’16. Brisbane: ACM New York.

Lovejoy, J. (2018, January 25). The UX of AI. Retrieved from

Manzini, E. (2001). Sustainability and scenario building. Scenarios of sustainable wellbeing and sustainable solutions development. In Proceedings Second International Symposium on Environmentally Conscious Design and Inverse Manufacturing (pp. 97–102).

Marti, P. (2010). Perceiving While Being Perceived. International Journal of Design, 4(2).

Merleau-Ponty, M. (1945). Phenomenology of Perception. London: Routledge.

Patterson, T. (2019, February 2). The 2019 Super Bowl Ads Are a Case Study in Technological Dread. Retrieved from

Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155–169.

Saffer, D. (2009). Designing for Interaction: Creating Innovative Applications and Devices (2nd ed.). Berkeley, CA: New Riders Pub.

Sciuto, A., Saini, A., Forlizzi, J., & Hong, J. I. (2018). “Hey Alexa, What’s Up?”: A Mixed-Methods Study of In-Home Conversational Agent Usage. In Proceedings of the 2018 Designing Interactive Systems Conference (pp. 857–868). New York: ACM.

Soranzo, A., Petrelli, D., Ciolfi, L., & Reidy, J. (2018). On the perceptual aesthetics of interactive objects. Quarterly Journal of Experimental Psychology, 71(12).

Sullivan, L. H. (1896). The tall office building artistically considered. Retrieved from

White, R. W. (2018). Skill Discovery in Virtual Assistants. Communications of the ACM, 61(11), 106–113.

Xenakis, I., & Arnellos, A. (2013). The relation between interaction aesthetics and affordances. Design Issues, 34(1), 57–73.

Yang, Q., Sciuto, A., Zimmerman, J., Forlizzi, J., & Steinfeld, A. (2018). Investigating How Experienced UX Designers Effectively Work with Machine Learning. In Proceedings of the 2018 Designing Interactive Systems Conference (pp. 585–596). New York: ACM.

Reference according to APA Style, 5th edition:
Ceconello, M., Spallazzo, D., & Sciannamè, M. (2019). Design and AI: prospects for dialogue. Convergências - Revista de Investigação e Ensino das Artes, XII(23). Retrieved from journal URL: