Invisibility in Ubicomp I

Ubiquitous computing technologies, according to Weiser’s vision, fade into the background of the user’s attention. This has been discussed as invisibility, and the debate continues on whether this invisibility is physical, as when sensors are hidden or embedded in everyday products, or psychological, as when the user’s engagement with the technology is such that the interface “disappears” (see note 1).

Invisibility, however, is also a challenge from the user’s perspective. Invisible infrastructures subtract the spatial dimension from technologies and make it difficult for users to understand them. In some cases the benefit and ease of using the infrastructure are so great that users adopt it even without understanding it. However, there are cases where invisibility stands in the way of adoption, as infrastructures require users to constantly engage with them. The ubiquitous computing field is full of examples: the automatic adjustment of settings in a smart home, the myriad of eco-feedback technologies, and pervasive CPU sharing or cyberforaging. If users do not engage with these infrastructures, they cannot deliver an acceptable service, hindering adoption again.

My idea, simple and intuitive, is to make these infrastructures available to perception. This approach addresses two objectives: first, it allows users to explore the infrastructure before engaging with it, and to obtain feedback from their interaction with it; second, it turns infrastructures into actors in the physical space where interaction takes place.

NOTE 1: Ubiquitous computing technologies are built on top of infrastructures, and together with them they similarly offer infrastructures through which services are delivered. Leigh Star describes infrastructures as invisible (embeddedness and transparency). Star’s notion of invisibility relates more to the fact that users obviate the infrastructures and focus on the tasks they are to accomplish. Taking the example of the gas pipe: it’s difficult to argue that the pipe is invisible (even if camouflaged), but it’s obvious that for most gas users the pipes are something we know about, yet nothing we think about when we use gas.


7 thoughts on “Invisibility in Ubicomp I”

  1. Hello Jo,

    I have read your “I bet you look good on the wall paper” and I am very impressed with what you’ve built. I have a few comments/ideas, though.

    1- Your motivation for building the system is the lack of users’ understanding of technology due to its invisibility. Thus you try to make it visible through your very nice projection system, but at the same time you step back and do not show it all the time, resorting to it as a “debug” tool. My idea is that perhaps you do not need such a detailed view (input/output/actions) all the time, but you do need some indication that, first, some action is being taken and, second, that there is a possibility to enter a debug mode. Think of cases where the output of the system is not visible to the user at the moment: the user is on the ground floor and issues the order “turn off the lights in the basement”. How does the user know that the lights are effectively turned off? A good metaphor for this is the computer’s system tray. There could be an AmbientHome “system wall area” (user-defined, of course) where a few icons show the state of the system at all times.

    2- The debug mode: do users really want to debug? It’s my understanding that not all users want to debug software, not even at the higher levels of input/action/output, let alone the elderly and the non-techy. Also, you make the point that highly complex/interconnected systems can be difficult to understand, even with your visualization. How about different degrees of detail? Level one could be just a metaphorical representation of the system (more adequate for highly complex situations), and each level above that could add an increasing amount of detail.
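    As a rough illustration of this layered-detail idea (a minimal sketch; the function name and event fields are invented for the example, not part of the actual system):

```python
# Hypothetical sketch of "degrees of detail" for smart-home feedback.
# All names here (render_event, the event dictionary fields) are
# illustrative assumptions, not the real AmbientHome design.

def render_event(event, level):
    """Describe a smart-home event at a given level of detail."""
    if level == 0:
        # Level 0: purely metaphorical/iconic, e.g. a glyph on the wall
        return event["icon"]
    if level == 1:
        # Level 1: what happened, in plain language
        return f"{event['device']}: {event['action']}"
    # Level 2+: full input/action/output trace (the "debug" view)
    return (f"{event['device']}: {event['action']} "
            f"(trigger={event['trigger']}, output={event['output']})")

event = {
    "icon": "[lamp-off]",
    "device": "basement lights",
    "action": "turned off",
    "trigger": "voice command",
    "output": "relay 4 opened",
}

for level in range(3):
    print(f"level {level}: {render_event(event, level)}")
```

    The point is simply that the same event can back all detail levels, so the user can drill down from the icon to the full trace on demand.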

    3- The cancel command. You point out some problems with the calling of the cancel command. My hypothesis is that the problem is due to two things: place and mode. When activating the video by touching the display, users identify a place and a mode: the place is the display and the mode is touch. Everything that happens as a result of this initial interaction is rooted (psychologically) in that first action. My feeling is that manipulating the results of the interaction should naturally happen through the same place and mode; ergo, touching a cancel button on the screen. Having to look at the wall and issue a voice command alters both the place and the mode of the original interaction. For novice users, such alterations should be minimized. I concede, though, that an experienced user could picture the interaction with the screen as part of the AmbientHome and thus have no problem issuing the voice command. But if the problem to tackle is the lack of mental models, then we need to resort to what’s natural and intuitive. Perhaps the way out is to support both: cancellation in the same place/mode (for novice users) and through voice commands (for advanced users).

    4- You have surely thought about this, but let me point it out. Given that applications/devices/services will come from diverse providers, how do you think they can be integrated into your environment? Option one is to expect that they stick to standards in their products. Option two is to open the Rendering Engine of your system so device manufacturers can provide their own software. I am not very sure how to approach this issue… I just keep having it in the back of my mind.

    OK, that’s it for now. Great job! I couldn’t read your other paper as it is not available for download.

    Juan David

  2. Hi Juan David,

    Thanks for taking the time to look into my papers, and for your kind words! I certainly agree with point 1: it is important to think about the granularity of showing the system state, and the current design is more of an all-or-nothing approach. It would be useful to provide users with just enough cues to understand what’s happening, and to allow them to ask for more details whenever necessary. This is, of course, a delicate balance. A nice approach was presented by Ju and Klemmer in their CSCW 2008 paper, in which they propose three techniques to provide feedback about the system’s understanding and what it is doing without overwhelming the user.

    Considering point 2, debugging might indeed go a bit too far, although existing work has shown that people perform better when they are able to ask “why” questions about a (desktop) application’s behavior. Again, I agree that it would be good to provide information at different levels of detail.

    A broken connection between place and mode might indeed explain the problem with the voice-controlled cancel command; it is very useful to think about it this way, thanks! I agree that it can be useful to provide different control options for novices and experts.

    That’s a difficult problem indeed. It’s the same problem faced by researchers who design a new (ubicomp) framework: they have to make assumptions and trade-offs to move forward. The Crystal system that I mentioned before also only works when applications are written from the ground up using this approach. I think the best way to do it is to use a specific format for this kind of information that different software components, devices and sensors can adhere to. My colleague used an approach where he provided a very generic ontology, which could be merged with application- or domain-specific extensions.
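    As a rough illustration of that last idea (everything here is an invented example, not my colleague’s actual ontology): a generic device description could be merged with a vendor- or domain-specific extension like so:

```python
# Minimal sketch, assuming dictionary-based device descriptions.
# GENERIC_LAMP and VENDOR_EXTENSION are invented examples.

GENERIC_LAMP = {
    "type": "lamp",
    "capabilities": ["on", "off"],
    "state": "off",
}

VENDOR_EXTENSION = {
    "capabilities": ["on", "off", "dim"],  # vendor refines capabilities
    "vendor": "AcmeLighting",              # vendor-specific field
}

def merge(generic, extension):
    """Merge a domain-specific extension into a generic description.
    Extension entries override or refine the generic ones."""
    merged = dict(generic)
    merged.update(extension)
    return merged

device = merge(GENERIC_LAMP, VENDOR_EXTENSION)
print(device)
```

    A shared visualization layer would then only need to understand the generic part, while still being able to surface the vendor-specific refinements.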

    Thanks again for your very useful comments! I’ll send you my other paper through e-mail, it will also be online soon!


    — Jo

  3. Pingback: Invisibility in Ubicomp II – The introduction of infrastructure awareness systems « Peripeteia

