Designing interactive solutions today means facing the spread of devices and systems with embedded sensors that have created a hyper-connected world. The use of personal data, and the information derived from it, has become a critical point in creating solutions that build additional knowledge and value on top of the service provided. While in the creative process designers tend toward utopian goals, the results they formalize embed not only functional solutions to problems but also possibly unconsidered individual and societal consequences, in terms of both possibilities and problems. My two cents here are twelve critical themes that emerge when we design services that rely on personal information.


Consent or denial of access

In the sphere of experimentation on interactive artifacts, smart materials, and digital fabrication, design fiction and speculative design play a key role: they allow designers to imagine and illustrate a future in which the artifact/technology/innovation is already present, integrated, and operational, and to explore assumptions, concepts, and possible implications, including critical discussions that investigate complexity in social, cultural, ethical, and environmental terms. The InDATA research project produces both a theoretical-practical framework and a platform that, starting from open-access data, nourish the construction of contextual scenarios so as to validate the existence of what is designed, triggering possible innovations that take advantage of hybrid and smart materials and of new modes of interaction.


Awareness of data tracking, sharing and use

Although most of the time users are well aware of the data being tracked, issues arise when the individual is not completely conscious of when tracking occurs, what data is collected, with whom it is shared or sold, who is using it, and what kind of profit they make. When users agree to terms and conditions for the services they subscribe to, they rarely read and understand them, yet providers consider agreement to the terms and conditions a sufficient green light for the use of personal information as the contract establishes. They rarely provide additional material about how data is tracked, collected, used, shared, or even sold. Users' unawareness that tracking is occurring, and the related impossibility of hiding from it, can deprive individuals of the power to control the exposure of personal and intimate information. Service providers and third parties can profit from users' data without involving the users in the trading, and even without letting them know their information is being used for a specific purpose.


Rights on data access management

Data ownership and control of access concern the user's right to decide who can see or use the information, which kind of information is used, its granularity, and the level of personalization. They affect the services the user receives, influencing the balance between individual freedom and privacy with regard to policies, health, and safety. The increasing tailoring and optimization of tasks and services require a lot of data; its use by third parties, and the user's power to deny access, are not always clear. This is especially true for access requests coming from authorities and governments. Hidden tracking, as well as the pervasive availability of data shared on the web, makes controlling the use of information difficult.


Automation of actions and services

The suggestions and filtering options provided by tailored services, through algorithms that analyze personal data, create the mechanisms known as the 'filter bubble' and the 'echo chamber' (selective exposure to online content and information) (Liao & Fu, 2013). While tailoring and proactivity are changing the paradigms of services, such as the shift from cure to self-care and prevention in healthcare, the automation of personal-data analysis raises doubts about how technologies such as AI can be used. Persuasion and decision-making based on personal-data analysis are nothing new, but the perfection of gathering and collection, as well as the automation of analysis, raises several issues, especially when services employ AI to automate procedures and manage complexity. Biases and prejudices in learning algorithms become particularly critical when AI is applied to justice and lawmaking because of the socio-political implications. Even for programmers, how algorithms make decisions is not always clear. It is therefore even harder for non-technicians to understand the cause-effect inferences behind decision-making results.
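The filter-bubble mechanism can be made concrete with a deliberately toy sketch (the articles, topics, and scoring rule below are my own illustrative assumptions, not anything from the text): a recommender that ranks unseen items purely by similarity to past clicks keeps feeding back the user's initial preference, so exposure narrows instead of widening.

```python
# Toy filter-bubble sketch (hypothetical data and scoring, for illustration only).
from collections import Counter

# Each article is tagged with a single topic.
ARTICLES = {
    "a1": "politics", "a2": "politics", "a3": "politics",
    "a4": "sports", "a5": "sports", "a6": "science",
}

def recommend(history, k=1):
    """Rank unseen articles by how often their topic appears in the click history."""
    topic_counts = Counter(ARTICLES[a] for a in history)
    unseen = [a for a in ARTICLES if a not in history]
    # The highest score goes to the most-clicked topic: a self-reinforcing loop.
    return sorted(unseen, key=lambda a: -topic_counts[ARTICLES[a]])[:k]

history = ["a1"]              # the user starts with a single politics click
for _ in range(2):
    history += recommend(history)

topics_seen = {ARTICLES[a] for a in history}
print(topics_seen)            # → {'politics'}: sports and science never surface
```

Even in three steps the loop never proposes sports or science; a real recommender with richer features behaves the same way whenever relevance is defined only as similarity to past behavior.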


Alteration of cognitive load

The cognitive load of decision-making and task completion can be both lowered and raised by automation. Proactive and reactive services can lower cognitive load, allowing the user to focus on experiences instead of repetitive tasks. However, the return of information and knowledge, whether as visual feedback or as insights and suggestions, can raise cognitive load: it can burden users with new issues that have psychological effects (e.g. raising their expectations), creating problems caused by the invasiveness of the service or by the return of too many details due to the granularity of the gathered information.


Alteration of risk of judgment

Automatic detection and analysis are moving the burden of task completion from humans to machines, and the analysis is mostly performed on automatically detected data rather than on data actively provided by the human. While the automation of processes through AI makes choices and task completion easier, it also removes the effort of judgment from humans, raising the risk of inattentiveness in decision-making processes and creating the 'automation paradox' (systems take decisions while humans mentally 'switch off'). Decisions based on personal information, as well as judgments made by machines, could open new possibilities for discrimination. On the other hand, by changing users' attitudes during the experience, interaction with bots and AI can make people feel less judged than when interacting with another human being. Moreover, the difference between human-human interaction and technology-mediated human-machine interaction is blurring. Technological advancement is making it difficult for users to distinguish between humans and chatbots. AI is substituting humans in many tasks that involve chat or voice interaction and can even pretend to be a real human.


Self-mirroring into data

Users' self-perception changes in relation to the self-knowledge they acquire while understanding their own data. When users receive information back from the service, the feedback can be returned as information visualization (with different levels of granularity) or, through the analysis of the information, as insights, suggestions, and tailored proactivity. The return of information about hidden mechanisms, such as inner body functions and behavioral patterns, makes visible something the individual is usually unaware of. When individuals analyze the feedback they receive, they experience disembodiment: knowing themselves through data creates a disconnection between the knowledge and the physical self. Users can mirror themselves in their data doppelgänger, but they can also be misrepresented by their digital identity.


Information overload

The detail and granularity of data gathered by advanced sensors can be useful for the precision of the information and the knowledge extracted from it. It is, however, important to consider the psychological impact the information has when it is received by the user. Self-knowledge can be negatively perturbed by an overload of information that is irrelevant or too detailed. Overexposure to information can mislead users, or even make them worry about irrelevant knowledge, leading to further consequences such as control addiction. The increasing amount of available data raises questions about its usefulness and about the possibility of extracting valuable knowledge from it.


Alteration of attitude and quality of life

The use of personal information alters users' attitudes toward actions and behaviors, and affects their quality of life. The increasing availability and pervasive use of sensors that detect people's data alter their attitude toward everyday behaviors due to the 'observer effect', which can be conscious or unconscious: people behave differently when they know (or think) they are being observed. Systems can take advantage of the 'observer effect' for the user's good, aiming at changing a bad behavior according to the user's goal settings. Ubiquitous and pervasive connectivity allows users to be 'always present' in their digital representations; however, the impossibility of hiding and disconnecting can raise concerns about changes in quality of life and in approaches to everyday activities (e.g. changes in the way people work when they are always connected).


Data use for public benefit

A utilitarian ethical approach to the use of personal information as a benefit for large groups of people, or even the whole of society, can produce byproducts in the form of problematic effects for individuals and for society itself. While gathered data can be used for public services and to increase public knowledge, driving better decisions such as the creation of policies or energy-saving strategies, massive amounts of data about people can make them targets of mass surveillance, or deny them access to services according to their own and other people's data.


Creation and management of communities of value

The interaction of people through their data often creates a community by itself, thanks to the sharing of the values connected to the purpose of data tracking that the community members have in common. It is, however, important to ensure that the exposure of data within the community is voluntary, and to limit self-exposure and self-disclosure to the information related to the purpose of the service. Furthermore, people who share those values are not always tracking their data, so their presence in the community of values can be hidden from the other members. Even if the amount of data is big enough to support a decision, only the collected data contributes to the decision-making process, while the 'voice' of non-tracked people is cut out and not represented in the results.
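How the 'voice' of non-tracked people disappears from a data-driven decision can be shown with a minimal numeric sketch (the population, step counts, and the assumption that self-trackers differ systematically from non-trackers are all hypothetical, chosen only to make the gap visible):

```python
# Hypothetical illustration: a decision computed only on tracked users can
# misrepresent the whole community when non-trackers differ systematically.

population = [
    {"tracks": True,  "steps": 11000},   # self-trackers: already fitness-minded
    {"tracks": True,  "steps": 12000},
    {"tracks": False, "steps": 4000},    # non-trackers: invisible to the service
    {"tracks": False, "steps": 3000},
]

tracked = [p["steps"] for p in population if p["tracks"]]
everyone = [p["steps"] for p in population]

print(sum(tracked) / len(tracked))    # 11500.0 — what the collected data 'says'
print(sum(everyone) / len(everyone))  # 7500.0  — the community's actual average
```

A service that sets goals or policies from the first number alone would treat 11,500 steps as the community norm, even though half the community, the non-tracked half, averages far less.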


Democratization of services

The use of personal data to enable remote interaction makes it possible to provide affordable or even free services and features for lower-income people, as well as to broaden the user base. The democratization of services, however, is granted only to people who can provide their data, while those who have no Internet access or cannot afford the necessary devices are cut out. Another element that can hinder access to services is the user's willingness to share data, which may be lacking because they are not part of a specific interest group or community, because they do not even know the service is available, or because they have doubts about how the collector will use the data.

Further readings

  • Boyd, D., & Crawford, K. (2011). Six Provocations for Big Data
  • Colombo, S. (2018). Morals, Ethics, and the New Design Conscience. In Rampino, L. (Ed.), Evolving Perspectives in Product Design: From Mass Production to Social Awareness (p. 15). Franco Angeli
  • Hughes, B., Joshi, I., & Wareham, J. (2008). Health 2.0 and Medicine 2.0: Tensions and Controversies in the Field. Journal of Medical Internet Research, 10(3), e23
  • Joinson, A., Reips, U.-D., Buchanan, T., & Schofield, C. B. P. (2010). Privacy, Trust, and Self-Disclosure Online. Human-Computer Interaction, 25(1), 1–24
  • Li, I., Dey, A. K., & Forlizzi, J. (2011). Understanding my data, myself: Supporting self-reflection with ubicomp technologies. Proceedings of the 13th International Conference on Ubiquitous Computing – UbiComp ’11, 405
  • Liao, Q. V., & Fu, W.-T. (2013). Beyond the filter bubble: Interactive effects of perceived threat and topic involvement on selective exposure to information. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems – CHI ’13, 2359
  • Marr, B. (2015). Big data: Using smart big data, analytics and metrics to make better decisions and improve performance. Wiley
  • Mitchell, W. J. (2010). Me++: The cyborg self and the networked city. MIT
  • Neff, G., & Nafus, D. (2016). Self-tracking. The MIT Press.
  • Varisco, L. (2019). Personal Interaction Design: Introducing the discussion on the consequences of the use of personal information in the design process (PhD Dissertation). Politecnico di Milano, Milan, Italy.
  • Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136