CHI ’18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
The proceedings are available in the ACM Digital Library.
Just follow the ACM link in the web program to go directly to a specific paper and find its PDF (available to all for free for one month). For the rest of the year, use the links below to access the PDF of each paper for free.
SESSION: Paper Presentations
Breaking! A Typology of Security and Privacy News and How It’s Shared
News coverage of security and privacy (S&P) events is pervasive and may affect the salience of S&P threats to the public. To better understand this coverage and its effects, we asked: What types of S&P news come into people’s awareness? How do people hear about and share this news? Over two years, we recruited 1999 participants to fill out a survey on emergent S&P news events. We identified four types of S&P news: financial data breaches, corporate personal data breaches, high sensitivity systems breaches, and politicized / activist cybersecurity. These event types strongly correlated with how people shared S&P news; e.g., financial data breaches were shared most (42%), while politicized / activist cybersecurity events were shared least (21%). Furthermore, participants’ age, gender and security behavioral intention strongly correlated with how they heard about and shared S&P news; e.g., males more often felt a personal responsibility to share, and older people were less likely to hear about S&P news through conversation.
Deep Thermal Imaging: Proximate Material Type Recognition in the Wild through Deep Learning of Spatial Surface Temperature Patterns
We introduce Deep Thermal Imaging, a new approach for close-range automatic recognition of materials that enhances how people and ubiquitous technologies understand their proximal environment. Our approach uses a low-cost mobile thermal camera integrated into a smartphone to capture thermal textures. A deep neural network classifies these textures into material types. This approach works effectively without the need for ambient light sources or direct contact with materials. Furthermore, the use of a deep learning network removes the need to handcraft the set of features for different materials. We evaluated the performance of the system by training it to recognize 32 material types in both indoor and outdoor environments. Our approach produced recognition accuracies above 98% on 14,860 images of 15 indoor materials and above 89% on 26,584 images of 17 outdoor materials. We conclude by discussing its potential for real-time use in HCI applications and future directions.
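As an illustration of the kind of pipeline the abstract describes (a deep network classifying thermal texture images into material types), here is a minimal PyTorch sketch. The architecture, patch size, and layer sizes are assumptions for illustration, not the network reported in the paper.

```python
# Illustrative sketch only: a small CNN for classifying single-channel thermal
# texture patches into material types. Architecture, input size, and class
# count are assumptions, not the network described in the paper.
import torch
import torch.nn as nn

class ThermalTextureNet(nn.Module):
    def __init__(self, num_materials: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_materials)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of single-channel thermal patches, shape (N, 1, 64, 64)
        return self.classifier(self.features(x).flatten(1))

# Example forward pass on a random "thermal patch" batch.
model = ThermalTextureNet()
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)   # -> torch.Size([4, 32]), one score per material class
```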
All Work and No Play?
Many conversational agents (CAs) are developed to answer users’ questions in a specialized domain. In everyday use of CAs, user experience may extend beyond satisfying information needs to the enjoyment of conversations with CAs, some of which represent playful interactions. By studying a field deployment of a Human Resource chatbot, we report on users’ interest areas in conversational interactions to inform the development of CAs. Through the lens of statistical modeling, we also highlight rich signals in conversational interactions for inferring user satisfaction with the instrumental usage and playful interactions with the agent. These signals can be utilized to develop agents that adapt functionality and interaction styles. By contrasting these signals, we shed light on the varying functions of conversational interactions. We discuss design implications for CAs, and directions for developing adaptive agents based on users’ conversational behaviors.
Designing the Desirable Smart Home: A Study of Household Experiences and Energy Consumption Impacts
Research has shown that desirable designs shape the use and experiences people have when interacting with technology. Nevertheless, how desirability influences energy consumption is often overlooked, particularly in HCI studies evaluating the sustainability benefits of smart home technology. In this paper, we present a qualitative study with 23 Australian households who reflect on their experiences of living with smart home devices. Drawing on Nelson and Stolterman’s concept of desiderata we develop a typology of householders’ desires for the smart home and their energy implications. We structure these desires as three smart home personas: the helper, optimiser and hedonist, which align with desiderata’s three approaches to desire (reason, ethics and aesthetics). We use these insights to discuss how desirability can be used within HCI for steering design of the smart home towards sustainability.
The Making of Performativity in Designing [with] Smart Material Composites
As the material becomes active in disclosing the fullness of its capabilities, the boundaries between human and nonhuman performances are destabilized in productive practices that take their departure from materials. This paper illuminates the embodied crafting of action possibilities in material-driven design (MDD) practices with electroluminescent materials. The paper describes and discusses aspects of the making process of electroluminescent materials in which matter, structure, form, and computation are manipulated to deliberately disrupt the affordance of the material, with the goal of exploring unanticipated action possibilities and materializing the performative qualities of the sample. In light of this account, the paper concludes by urging the HCI community to performatively rupture the material, so as to be able to act upon it as if it were always unfinished or underdeveloped. This, it is shown, can help open up the design space of smart material composites and reveal their latent affordances.
Patterns for How Users Overcome Obstacles in Voice User Interfaces
Voice User Interfaces (VUIs) are growing in popularity. However, even the most current VUIs regularly cause frustration for their users. Very few studies exist on what people do to overcome VUI problems they encounter, or how VUIs can be designed to aid people when these problems occur. In this paper, we analyze empirical data on how users (n=12) interact with our VUI calendar system, DiscoverCal, over three sessions. In particular, we identify the main obstacle categories and types of tactics our participants employ to overcome them. We analyzed the patterns of how different tactics are used in each obstacle category. We found that while NLP Error obstacles occurred the most, other obstacles are more likely to frustrate or confuse the user. We also found patterns that suggest participants were more likely to employ a “guessing” approach rather than rely on visual aids or knowledge recall.
ThinkActive: Designing for Pseudonymous Activity Tracking in the Classroom
We report on the design of ThinkActive – a system to encourage primary aged school children to reflect on their own personal activity data in the classroom. We deployed the system with a cohort of 30 school children, over a six-week period, in partnership with an English Premier League Football club’s health and nutrition programme. The system utilizes inexpensive activity trackers and pseudonymous avatars to promote reflection with personal data using an in-situ display within the classroom. Our design explores pseudonymity as an approach to managing privacy and personal data within a public setting. We report on the motivations, challenges, and opportunities for students, teachers, and third-party providers to engage in the collection and sharing of activity data with primary school children.
Gender Recognition or Gender Reductionism?: The Social Implications of Embedded Gender Recognition Systems
Automatic Gender Recognition (AGR) refers to various computational methods that aim to identify an individual’s gender by extracting and analyzing features from images, video, and/or audio. Applications of AGR are increasingly being explored in domains such as security, marketing, and social robotics. However, little is known about stakeholders’ perceptions and attitudes towards AGR and how this technology might disproportionately affect vulnerable communities. To begin to address these gaps, we interviewed 13 transgender individuals, including three transgender technology designers, about their perceptions and attitudes towards AGR. We found that transgender individuals have overwhelmingly negative attitudes towards AGR and fundamentally question whether it can accurately recognize such a subjective aspect of their identity. They raised concerns about privacy and potential harms that can result from being incorrectly gendered, or misgendered, by technology. We present a series of recommendations on how to accommodate gender diversity when designing new digital systems.
CivilServant: Community-Led Experiments in Platform Governance
As online platforms monitor and intervene in the daily lives of billions of people, platforms are being used to govern enduring social problems. Field experiments could inform wise uses of this power if tensions between democratic values and experimentation could be resolved. In this paper, we introduce CivilServant, a novel experimentation infrastructure that online communities and their moderators use to evaluate policies and replicate each other’s findings. We situate CivilServant in the political history of policy experiments and present design considerations for community participation, ethics, and replication. Based on two case studies of community-led experiments and public debriefings on the reddit platform, we share findings on community deliberation about experiment results. We also report on uses of evidence, finding that experiments informed moderator practices, community policies, and replications by communities and platforms. We discuss the implications of these findings for evaluating platform governance in an open, democratic, experimenting society.
M-Kulinda: Using a Sensor-Based Technology Probe to Explore Domestic Security in Rural Kenya
In rural Kenyan households, property theft is a persistent problem. To explore how Information and Communication Technologies (ICTs) may be used to address this problem, we designed and deployed “M-Kulinda”, a sensor-based technology probe. We used interview, observation, diary, and data logging methods to understand 20 households’ experiences using the system. Our findings suggest that a probe-based approach is useful in this context; more specifically, we found that participants used our system in different ways to address their specific needs (e.g., monitoring poultry, electronics, and their family members). We also observed changes in our participants’ understanding of sensors; M-Kulinda prompted them to reflect on other areas where sensors could be used in their households. We present design implications based on these findings, and offer new perspectives on the role of technology in deterring crime.
Nonvisual Interaction Techniques at the Keyboard Surface
Web user interfaces today leverage many common GUI design patterns, including navigation bars and menus (hierarchical structure), tabular content presentation, and scrolling. These visual-spatial cues enhance the interaction experience of sighted users. However, the linear nature of screen translation tools currently available to blind users makes it difficult to understand or navigate these structures. We introduce Spatial Region Interaction Techniques (SPRITEs) for nonvisual access: a novel method for navigating two-dimensional structures using the keyboard surface. SPRITEs 1) preserve spatial layout, 2) enable bimanual interaction, and 3) improve the end user experience. We used a series of design probes to explore different methods for keyboard surface interaction. Our evaluation of SPRITEs shows that three times as many participants were able to complete spatial tasks with SPRITEs as with their preferred current technology.
Admixed Portrait: Design to Understand Facebook Portrayals in New Parenthood
We report on a design-led study of the photographic representation of self and family on Facebook during and after becoming parents for the first time. Our experience-centered, research-through-design study engaged eight participants across five UK homes in a month-long deployment of a prototype technology — a design research artifact, Admixed Portrait, that served to prompt participant reflection on first-time parenthood. In addition to pre- and post-deployment interviews, participants kept diaries capturing personal reflections during the deployment on daily social media use and interactions with Admixed Portrait. Our qualitative insights on social media representations of transitional experience and identity for new parents reveal how their online ‘photowork’ related to self-expression and social functioning. We contribute design considerations for developing tools to support photographic expression in social media use, and methodological insights about design-led inquiry for understanding transitional experiences.
MABLE: Mediating Young Children’s Smart Media Usage with Augmented Reality
There has been growing concern over the large increase in young children’s use of smart media. This study explores the possibility of using augmented reality (AR) to regulate preschoolers’ media usage behavior. With MABLE (mobile application for behavioral learning and education), parents can provide AR-assisted feedback by changing facial expressions and sound effects. When a smart media device running MABLE is held in front of a QR marker on a puppet, a facial expression is displayed on top of the puppet’s face. A two-week-long experiment with 36 parent-child pairs showed that, compared to using just the puppet, using MABLE produced higher engagement among preschoolers. For the effectiveness of parental mediation in terms of self-control, our data showed mixed results. MABLE had positive effects in that the amount of rule compliance increased and problematic behaviors decreased, whereas the level of behavioral dependency on smart media was not influenced.
How Relevant are Incidental Power Poses for HCI?
The concept of power pose originates from a Psychology study from 2010 which suggested that holding an expansive pose can change hormone levels and increase risk-taking behavior. Follow-up experiments suggested that expansive poses incidentally imposed by the design of an environment lead to more dishonest behaviors. While multiple replication attempts of the 2010 study failed, the follow-up experiments on incidental postures have so far not been replicated. As UI design in HCI can incidentally lead to expansive body postures, we attempted two conceptual replications: we first asked 44 participants to tap areas on a wall-sized display and measured their self-reported sense of power; we then asked 80 participants to play a game on a large touch-screen and measured risk-taking. Based on Bayesian analyses we find that incidental power poses had little to no effect on our measures but could cause physical discomfort. We conclude by discussing our findings in the context of theory-driven research in HCI.
Designing and Evaluating mHealth Interventions for Vulnerable Populations: A Systematic Review
Diverse disciplines, including Human-Computer Interaction, have explored how mobile health (mHealth) applications can transform healthcare and health promotion. Increasingly, research has explored how mHealth tools can promote healthy behaviors within vulnerable populations, groups that disproportionately experience barriers to wellness. We conducted a systematic review of 83 papers from diverse disciplines to characterize the design and impact of mHealth tools for low-socioeconomic-status (low-SES) and racial/ethnic minority individuals. Our findings highlight that the diversity within low-SES and racial/ethnic minority groups was not reflected in the populations studied. Most studies focused on improving the health of individuals, often neglecting factors at the community and society levels that influence health disparities. Moreover, few improvements in health outcomes were demonstrated. We further discuss factors that acted as barriers and facilitators of mHealth intervention adoption. Our findings highlight trends that can drive critically needed digital health innovations for vulnerable populations.
The Illusion of Control: Placebo Effects of Control Settings
Algorithmic prioritization is a growing focus for social media users. Control settings are one way for users to adjust the prioritization of their news feeds, but they prioritize feed content in a way that can be difficult to judge objectively. In this work, we study how users engage with difficult-to-validate controls. Via two paired studies using an experimental system — one interview and one online study — we found that control settings functioned as placebos. Viewers felt more satisfied with their feed when controls were present, whether they worked or not. We also examine how people engage in sensemaking around control settings, finding that users often take responsibility for violated expectations — for both real and randomly functioning controls. Finally, we studied how users controlled their social media feeds in the wild. The use of existing social media controls had little impact on users’ satisfaction with the feed; instead, users often turned to improvised solutions, like scrolling quickly, to see what they want.
Easy Return: An App for Indoor Backtracking Assistance
We present a system that, implemented as an iPhone app controllable from an Apple Watch, can help a blind person backtrack a route taken in a building. This system requires no maps of the building or environment modifications. While traversing a path from a starting location to a destination, the system builds and records a path representation as a sequence of turns and step counts between turns. If the user wants to backtrack the same path, the system can provide assistance by tracking the user’s location in the recorded path and producing directional information in speech form about the next turns and step counts to follow. The system was tested with six blind participants in a controlled indoor experiment.
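The recorded path representation lends itself to a simple reversal for backtracking: walk each segment's step count in reverse order and undo the turn that preceded it. The sketch below illustrates that idea; the segment format and turn vocabulary are assumptions, not the app's internal representation.

```python
# Illustrative sketch: represent a walked path as (steps, turn_after_segment)
# pairs and invert it for backtracking. The format and the left/right/straight
# vocabulary are assumptions for illustration only.

OPPOSITE_TURN = {"left": "right", "right": "left", "straight": "straight"}

def backtrack_route(path):
    """Given outbound segments [(step_count, turn_after_segment), ...],
    return the segments to announce on the way back (after turning around)."""
    reverse = []
    for i in range(len(path) - 1, -1, -1):
        steps, _ = path[i]
        # The turn to announce after walking this segment back is the inverse
        # of the turn taken just before this segment on the way in.
        prev_turn = path[i - 1][1] if i > 0 else "straight"
        reverse.append((steps, OPPOSITE_TURN[prev_turn]))
    return reverse

# Outbound: 12 steps then turn left, 30 steps then turn right, 8 steps to arrive.
outbound = [(12, "left"), (30, "right"), (8, "straight")]
print(backtrack_route(outbound))
# -> [(8, 'left'), (30, 'right'), (12, 'straight')]
```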
Point-and-Shake: Selecting from Levitating Object Displays
Acoustic levitation enables a radical new type of human-computer interface composed of small levitating objects. For the first time, we investigate the selection of such objects, an important part of interaction with a levitating object display. We present Point-and-Shake, a mid-air pointing interaction for selecting levitating objects, with feedback given through object movement. We describe the implementation of this technique and present two user studies that evaluate it. The first study found that users could accurately (96%) and quickly (4.1s) select objects by pointing at them. The second study found that users were able to accurately (95%) and quickly (3s) select occluded objects. These results show that Point-and-Shake is an effective way of initiating interaction with levitating object displays.
When David Meets Goliath: Combining Smartwatches with a Large Vertical Display for Visual Data Exploration
We explore the combination of smartwatches and a large interactive display to support visual data analysis. These two extremes of interactive surfaces are increasingly popular, but feature different characteristics: display and input modalities, personal/public use, performance, and portability. In this paper, we first identify possible roles for both devices and the interplay between them through an example scenario. We then propose a conceptual framework to enable analysts to explore data items, track interaction histories, and alter visualization configurations through mechanisms using both devices in combination. We validate an implementation of our framework through a formative evaluation and a user study. The results show that this device combination, compared to just a large display, allows users to develop complex insights more fluidly by leveraging the roles of the two devices. Finally, we report on the interaction patterns and interplay between the devices for visual exploration as observed during our study.
SteeringWheel: A Locality-Preserving Magnification Interface for Low Vision Web Browsing
Low-vision users struggle to browse the web with screen magnifiers. Firstly, magnifiers occlude significant portions of the webpage, thereby making it cumbersome to get the webpage overview and quickly locate the desired content. Further, magnification causes loss of spatial locality and visual cues that commonly define semantic relationships in the page; reconstructing semantic relationships exclusively from narrow views dramatically increases the cognitive burden on the users. Secondly, low-vision users have widely varying needs requiring a range of interface customizations for different page sections; dynamic customization in extant magnifiers is disruptive to users’ browsing. We present SteeringWheel, a magnification interface that leverages content semantics to preserve local context. In combination with a physical dial, supporting simple rotate and press gestures, users can quickly navigate different webpage sections, easily locate desired content, get a quick overview, and seamlessly customize the interface. A user study with 15 low-vision participants showed that their web-browsing efficiency improved by at least 20 percent with SteeringWheel compared to extant screen magnifiers.
Extending Keyboard Shortcuts with Arm and Wrist Rotation Gestures
We propose and evaluate a novel interaction technique to enhance physical keyboard shortcuts with arm and wrist rotation gestures, performed during keypresses: rolling the wrist, rotating the arm/wrist, and lifting it. This extends the set of shortcuts from key combinations (e.g. ctrl + v) to combinations of key(s) and gesture (e.g. v + roll left) and enables continuous control. We implement this approach for isolated single keypresses, using inertial sensors of a smartwatch. We investigate key aspects in three studies: 1) rotation flexibility per keystroke finger, 2) rotation control, and 3) user-defined gesture shortcuts. As a use case, we employ our technique in a painting application and assess user experience. Overall, results show that arm and wrist rotations during keystrokes can be used for interaction, yet challenges remain for integration into practical applications. We discuss recommendations for applications and ideas for future research.
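To make concrete the idea of extending key shortcuts with wrist gestures, here is a minimal dispatch sketch. The event representation, gesture labels, and command names are hypothetical; they only illustrate how key + gesture pairs enlarge the shortcut space beyond key combinations alone.

```python
# Illustrative sketch: dispatch (key, wrist-gesture) pairs to commands.
# Gesture labels and command names are hypothetical, not the paper's mapping.
from typing import Optional

SHORTCUTS = {
    ("v", None): "paste",                        # plain key shortcut
    ("v", "roll_left"): "paste_plain_text",      # key + gesture variants
    ("v", "roll_right"): "paste_with_formatting",
    ("z", "lift"): "redo",
}

def dispatch(key: str, gesture: Optional[str]) -> str:
    """Map a keypress plus an optional wrist gesture to a command name."""
    return SHORTCUTS.get((key, gesture), "unknown")

print(dispatch("v", "roll_left"))   # -> paste_plain_text
print(dispatch("v", None))          # -> paste
```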
IntroAssist: A Tool to Support Writing Introductory Help Requests
Writing introductory help requests is a key part of developing new professional connections, such as through email and other online messaging systems. This paper presents the design and an experimental evaluation of IntroAssist, a web-based tool that leverages cognitive apprenticeship instructional methods to support writing introductory help requests through an expert-informed checklist, tagged peer examples, self-tagging, and a suggested word limit. In a study of IntroAssist with novice entrepreneurs, we find that 1) expert raters consider help requests written with the tool more effective, 2) participants are able to perform introductory help-seeking skills after the tool is removed, and 3) participants report being more likely to send help requests written with the tool. We present implications for the development of systems that support the initiation of professional relationships.
Surprise Me If You Can: Serendipity in Health Information
Our natural tendency to be curious is increasingly important now that we are exposed to vast amounts of information. We often cope with this overload by focusing on the familiar: information that matches our expectations. In this paper we present a framework for interactive serendipitous information discovery based on a computational model of surprise. This framework delivers information that users were not actively looking for, but which will be valuable to their unexpressed needs. We hypothesize that users will be surprised when presented with information that violates the expectations predicted by our model of them. This surprise model is balanced by a value component which ensures that the information is relevant to the user. Within this framework we have implemented two surprise models, one based on association mining and the other on topic modeling approaches. We evaluate these two models with thirty users in the context of online health news recommendation. Positive user feedback was obtained for both of the computational models of surprise compared to a baseline random method. This research contributes to the understanding of serendipity and how to “engineer” serendipity that is favored by users.
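The framework described above combines an expectation-violating (surprise) component with a value component. The sketch below shows one simple way such a combined score could be computed over topic vectors; the scoring formula, the equal weighting, and the toy vectors are assumptions, not the paper's association-mining or topic-modeling models.

```python
# Illustrative sketch: score candidate articles by combining "surprise"
# (distance from a model of the user's expectations) with "value" (relevance
# to the user's interests). The formula and the 0.5 weight are assumptions.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def serendipity_score(item_vec, expected_vec, interest_vec, weight=0.5):
    surprise = 1.0 - cosine(item_vec, expected_vec)   # violates expectations
    value = cosine(item_vec, interest_vec)            # still relevant
    return weight * surprise + (1 - weight) * value

# Toy topic vectors: (nutrition, cardiology, genetics)
expected = [0.9, 0.1, 0.0]    # what the user usually reads
interest = [0.6, 0.3, 0.1]    # broader health interests
candidate = [0.2, 0.7, 0.1]   # a cardiology story the user did not ask for
print(round(serendipity_score(candidate, expected, interest), 3))
```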
Awe the Audience: How the Narrative Trajectories Affect Audience Perception in Public Speaking
Telling a great story often involves a deliberate alteration of emotions. In this paper, we objectively measure and analyze the narrative trajectories of stories in public speaking and their impact on subjective ratings. We conduct the analysis using the transcripts of over 2000 TED talks and estimate potential audience response using over 5 million spontaneous annotations from the viewers. We use IBM Watson Tone Analyzer to extract sentence-wise emotion, language, and social scores. Our study indicates that it is possible to predict (with AUC as high as 0.88) the subjective ratings of the audience by analyzing the narrative trajectories. Additionally, we find that some trajectories (for example, a flat trajectory of joy) correlate well with some specific ratings (e.g., “Longwinded”) assigned by the viewers. Such an association could be useful in forecasting audience responses using objective analysis.
Introducing Transient Gestures to Improve Pan and Zoom on Touch Surfaces
Despite the ubiquity of touch-based input and the availability of increasingly computationally powerful touchscreen devices, there has been comparatively little work on enhancing basic canonical gestures such as swipe-to-pan and pinch-to-zoom. In this paper, we introduce transient pan and zoom, i.e. pan and zoom manipulation gestures that temporarily alter the view and can be rapidly undone. Leveraging typical touchscreen support for additional contact points, we design our transient gestures such that they co-exist with traditional pan and zoom interaction. We show that our transient pan-and-zoom reduces repetition in multi-level navigation and facilitates rapid movement between document states. We conclude with a discussion of user feedback, and directions for future research.
MirrorMirror: A Mobile Application to Improve Speechreading Acquisition
Many people around the world have difficulties in day-to-day conversation due to hearing loss. Hearing aids often fail to offer enough benefits and have low adoption rates. However, people with hearing loss find that speechreading can improve their understanding during conversation, although speechreading is a challenging skill to learn. Speechreading classes can improve acquisition; however, there are a limited number of classes available and students can only practice effectively when attending class. To address this, we conducted a postal survey with 59 speechreading students to understand students’ perspectives on practicing. Using our findings, we developed an Android application called MirrorMirror – a new Speechreading Acquisition Tool (SAT) that allows students to practice their speechreading by recording and watching videos of people they frequently speak with. We evaluated MirrorMirror through three case studies with speechreading students and found that they could effectively target their speechreading practice on people, words, and situations they encounter during daily conversations.
Hybrid-Brailler: Combining Physical and Gestural Interaction for Mobile Braille Input and Editing
Braille input enables fast nonvisual entry speeds on mobile touchscreen devices. Yet, the lack of tactile cues commonly results in typing errors, which are hard to correct. We propose Hybrid-Brailler, an input solution that combines physical and gestural interaction to provide fast and accurate Braille input. We use the back of the device for physical chorded input while freeing the touchscreen for gestural interaction. Gestures are used in editing operations, such as caret movement, text selection, and clipboard control, enhancing the overall text entry experience. We conducted two user studies to assess both input and editing performance. Results show that Hybrid-Brailler supports entry rates as fast as its virtual counterpart, while significantly increasing input accuracy. Regarding editing performance, when compared with the mainstream technique, Hybrid-Brailler shows performance benefits of 21% in speed and increased editing accuracy. We finish with lessons learned for designing future nonvisual input and editing techniques.
Flexible and Mindful Self-Tracking: Design Implications from Paper Bullet Journals
Digital self-tracking technologies offer many potential benefits over self-tracking with paper notebooks. However, they are often too rigid to support people’s practical and emotional needs in everyday settings. To inform the design of more flexible self-tracking tools, we examine bullet journaling: an analogue and customisable approach for logging and reflecting on everyday life. Analysing a corpus of paper bullet journal photos and related conversations on Instagram, we found that individuals extended and adapted bullet journaling systems to their changing practical and emotional needs through: (1) creating and combining personally meaningful visualisations of different types of trackers, such as habit, mood, and symptom trackers; (2) engaging in mindful reflective thinking through design practices and self-reflective strategies; and (3) posting photos of paper journals online to become part of a self-tracking culture of sharing and learning. We outline two interrelated design directions for flexible and mindful self-tracking: digitally extending analogue self-tracking and supporting digital self-tracking as a mindful design practice.
Predicting Human Performance in Vertical Menu Selection Using Deep Learning
Predicting human performance in interaction tasks allows designers or developers to understand the expected performance of a target interface without actually testing it with real users. In this work, we present a deep neural net to model and predict human performance in performing a sequence of UI tasks. In particular, we focus on a dominant class of tasks, i.e., target selection from a vertical list or menu. We experimented with our deep neural net using a public dataset collected from a desktop laboratory environment and a dataset collected from hundreds of touchscreen smartphone users via crowdsourcing. Our model significantly outperformed previous methods on these datasets. Importantly, our method, as a deep model, can easily incorporate additional UI attributes such as visual appearance and content semantics without changing model architectures. By understanding how a deep learning model learns from human behaviors, our approach can be seen as a vehicle to discover new patterns about human behaviors and advance analytical modeling.
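As a rough illustration of the kind of model the abstract describes (a deep network predicting selection performance for a vertical menu, with easily extensible per-item features), here is a minimal PyTorch sketch. The recurrent architecture, feature set, and dimensions are assumptions, not the model reported in the paper.

```python
# Illustrative sketch: a recurrent model that reads per-item features of a
# vertical menu and regresses selection time. Features, architecture, and
# sizes are assumptions, not the paper's model.
import torch
import torch.nn as nn

class MenuSelectionModel(nn.Module):
    def __init__(self, item_feature_dim: int = 4, hidden: int = 32):
        super().__init__()
        self.encoder = nn.GRU(item_feature_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predicted selection time (seconds)

    def forward(self, menu_items: torch.Tensor) -> torch.Tensor:
        # menu_items: (batch, menu_length, item_feature_dim); features might
        # encode item position, whether it is the target, label length, etc.
        _, h = self.encoder(menu_items)
        return self.head(h[-1]).squeeze(-1)

model = MenuSelectionModel()
fake_menus = torch.randn(2, 12, 4)          # two menus of 12 items each
print(model(fake_menus).shape)              # -> torch.Size([2])
```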
Customizing Hybrid Products
We explore how the convergence of the digital and physical into hybrid products leads to new possibilities for customization. We report on a technology probe, a hybrid advent calendar with both paper form and digital layers of content, both of which were designed to be customizable. We reveal how over two hundred active users adapted its physical and digital aspects in various ways, some anticipated and familiar, but others surprising. This leads us to contribute concepts to help understand and design for hybrid customization — the idea of broad customization spanning physical and digital; end-to-end customization by different stakeholders along the value chain for a product; and the combination of these into customization maps.
Fingers’ Range and Comfortable Area for One-Handed Smartphone Interaction Beyond the Touchscreen
Previous research and recent smartphone development presented a wide range of input controls beyond the touchscreen. Fingerprint scanners, silent switches, and Back-of-Device (BoD) touch panels offer additional ways to perform input. However, with the increasing amount of input controls on the device, unintentional input or limited reachability can hinder interaction. In a one-handed scenario, we conducted a study to investigate the areas that can be reached without losing grip stability (comfortable area), and with stretched fingers (maximum range) using four different phone sizes. We describe the characteristics of the comfortable area and maximum range for different phone sizes and derive four design implications for the placement of input controls to support one-handed BoD and edge interaction. Amongst others, we show that the index and middle finger are the most suited fingers for BoD interaction and that the grip shifts towards the top edge with increasing phone sizes.
Exploration and Explanation in Computational Notebooks
Computational notebooks combine code, visualizations, and text in a single document. Researchers, data analysts, and even journalists are rapidly adopting this new medium. We present three studies of how they are using notebooks to document and share exploratory data analyses. In the first, we analyzed over 1 million computational notebooks on GitHub, finding that one in four had no explanatory text but consisted entirely of visualizations or code. In a second study, we examined over 200 academic computational notebooks, finding that although the vast majority described methods, only a minority discussed reasoning or results. In a third study, we interviewed 15 academic data analysts, finding that most considered computational notebooks personal, exploratory, and messy. Importantly, they typically used other media to share analyses. These studies demonstrate a tension between exploration and explanation in constructing and sharing computational notebooks. We conclude with opportunities to encourage explanation in computational media without hindering exploration.
Multi-Touch Skin: A Thin and Flexible Multi-Touch Sensor for On-Skin Input
Skin-based touch input opens up new opportunities for direct, subtle, and expressive interaction. However, existing skin-worn sensors are restricted to single-touch input and limited by a low resolution. We present the first skin overlay that can capture high-resolution multi-touch input. Our main contributions are: 1) Based on an exploration of functional materials, we present a fabrication approach for printing thin and flexible multi-touch sensors for on-skin interactions. 2) We present the first non-rectangular multi-touch sensor overlay for use on skin and introduce a design tool that generates such sensors in custom shapes and sizes. 3) To validate the feasibility and versatility of our approach, we present four application examples and empirical results from two technical evaluations. They confirm that the sensor achieves a high signal-to-noise ratio on the body under various grounding conditions and has a high spatial accuracy even when subjected to strong deformations.
Improving Comprehension of Measurements Using Concrete Re-expression Strategies
It can be difficult to understand physical measurements (e.g., 28 lb, 600 gallons) that appear in news stories, data reports, and other documents. We develop tools that automatically re-express unfamiliar measurements using the measurements of familiar objects. Our work makes three contributions: (1) we identify effectiveness criteria for objects used in concrete measurement re-expressions; (2) we operationalize these criteria in a scalable method for mining a large dataset of concrete familiar objects with their physical dimensions from Amazon and Wikipedia; and (3) we develop automated concrete re-expression tools that implement three common re-expression strategies (adding familiar context, reunitization, and proportional analogy) as energy minimization algorithms. Crowdsourced evaluations of our tools indicate that people find news articles with re-expressions more helpful and that re-expressions help them to better estimate new measurements.
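To illustrate the proportional-analogy strategy named above, the sketch below re-expresses a volume as a multiple of a familiar object. The tiny hand-written object list and the simple "prefer small, round multiples" heuristic stand in for the paper's mined object dataset and energy-minimization formulation.

```python
# Illustrative sketch of proportional analogy: re-express an unfamiliar
# measurement as a multiple of a familiar object. The object list (with
# approximate sizes) and the scoring heuristic are assumptions.

FAMILIAR_OBJECTS_GALLONS = {
    "bathtub": 40.0,               # approximate full-bathtub volume
    "washing machine load": 20.0,
    "milk jug": 1.0,
}

def reexpress(amount_gallons: float) -> str:
    best = None
    for name, size in FAMILIAR_OBJECTS_GALLONS.items():
        multiple = amount_gallons / size
        # Prefer multiples that are close to whole numbers and easy to picture.
        penalty = abs(multiple - round(multiple)) + (0 if 1 <= multiple <= 20 else 5)
        if best is None or penalty < best[0]:
            best = (penalty, name, multiple)
    _, name, multiple = best
    return f"{amount_gallons:g} gallons is about {round(multiple)} {name}s"

print(reexpress(600))   # -> "600 gallons is about 15 bathtubs"
```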
Understanding the Needs of Searchers with Dyslexia
As many as 20% of English speakers have dyslexia, a language disability that impacts reading and spelling. Web search is an important modern literacy skill, yet the accessibility of this language-centric endeavor to people with dyslexia is largely unexplored. We interviewed ten adults with dyslexia and conducted an online survey with 81 dyslexic and 80 non-dyslexic adults, in which participants described challenges they face in various stages of web search (query formulation, search result triage, and information extraction). We also report the findings of an online study in which 174 adults with dyslexia and 172 without dyslexia rated the readability and relevance of sets of search query results. Our findings demonstrate differences in behaviors and preferences between dyslexic and non-dyslexic searchers, and indicate that factoring readability into search engine rankings and/or interfaces may benefit both dyslexic and non-dyslexic users.
Evaluation Strategies for HCI Toolkit Research
Toolkit research plays an important role in the field of HCI, as it can heavily influence both the design and implementation of interactive systems. For publication, the HCI community typically expects toolkit research to include an evaluation component. The problem is that toolkit evaluation is challenging, as it is often unclear what ‘evaluating’ a toolkit means and what methods are appropriate. To address this problem, we analyzed 68 published toolkit papers. From our analysis, we provide an overview of, reflection on, and discussion of evaluation methods for toolkit contributions. We identify and discuss the value of four toolkit evaluation strategies, including the associated techniques that each employs. We offer a categorization of evaluation strategies for toolkit researchers, along with a discussion of the value, potential limitations, and trade-offs associated with each strategy.
TopoText: Context-Preserving Text Data Exploration Across Multiple Spatial Scales
TopoText is a context-preserving technique for visualizing text data over multi-scale spatial aggregates to gain insight into spatial phenomena. Conventional exploration requires users to navigate across multiple scales but only presents the information related to the current scale. This limitation potentially adds interaction steps and cognitive load for users. TopoText renders multi-scale aggregates into a single visual display, combining novel text-based encoding and layout methods that draw labels along the boundary of or filled within the aggregates. The text itself not only summarizes the semantics at each individual scale, but also indicates the spatial coverage of the aggregates and their underlying hierarchical relationships. We validate TopoText with both a user study and several application examples.
Design Patterns for Data Comics
Data comics for data-driven storytelling are inspired by the visual language of comics and aim to communicate insights in data through visualizations. While comics are widely known, few examples of data comics exist, and there has not been any structured analysis of or guidance for their creation. We introduce data comic design patterns, each describing a set of panels with a specific narrative purpose, that allow for rapid storyboarding of data comics while showcasing their expressive potential. Our patterns are derived from (i) analyzing common patterns in infographics, data videos, and existing data comics, and (ii) our experiences creating data comics for different scenarios. Our patterns demonstrate how data comics allow an author to combine the best of both worlds: spatial layout and overview from infographics as well as linearity and narration from videos and presentations.
Practices and Technology Needs of a Network of Farmers in Tharaka Nithi, Kenya
Farmers in rural areas of Kenya generally rely on traditional agricultural practices inherited from past generations. However, population increases and climate changes have put pressure on resources such as land and water. These resource pressures have created a need to broaden and expand farming practices. We conducted an exploratory study with farmers in Tharaka Nithi, Kenya to explore their practices, if and how they used ICT, and how the technologies used might be designed to aid their practices, if at all. Overall, our results show that farmers desired more knowledge to enable them to apply ICT interventions in ways that improved yields. Farmers were also interested in accessing information on soil fertility, water predictability, and market opportunities. These findings suggest opportunities for technology design to support farming practices in rural communities. We also articulate social challenges that designers will face when developing such solutions.
Design for Collaborative Survival: An Inquiry into Human-Fungi Relationships
In response to recent calls for HCI to address ongoing environmental crises and existential threats, this paper introduces the concept of collaborative survival and examines how it shapes the design of interactive artifacts. Collaborative survival describes how our (human) ability to persist as a species is deeply entangled with and dependent upon the health of a multitude of other species. We explore collaborative survival within the context of designing tools for mushroom foraging and reflect on how interactive products can open new pathways for noticing and joining-with these entanglements towards preferable futures. In addition to highlighting three tactics (engagement, attunement, and expansion) that can guide designs towards multispecies flourishing, our prototypes illustrate the potential for wearable technology to extend the body into the environment.
“An Odd Kind of Pleasure”: Differentiating Emotional Challenge in Digital Games
Recent work introduced the notion of emotional challenge as a means to afford more unique and diverse gaming experiences. However, players’ experience of emotional challenge has received little empirical attention. It remains unclear whether players enjoy it and what exactly constitutes the challenge thereof. We surveyed 171 players about a challenging or an emotionally challenging experience, and analyzed their responses with regards to what made the experience challenging, their emotional response, and the relation to core player experience constructs. We found that emotional challenge manifested itself in different ways, by confronting players with difficult themes or decisions, as well as having them deal with intense emotions. In contrast to more ‘conventional’ challenge, emotional challenge evoked a wider range of negative emotions and was appreciated significantly more by players. Our findings showcase the appeal of uncomfortable gaming experiences, and extend current conceptualizations of challenge in games.
Eyes-Free Target Acquisition in Interaction Space around the Body for Virtual Reality
Eyes-free target acquisition is a basic and important human ability to interact with the surrounding physical world, relying on the sense of space and proprioception. In this research, we leverage this ability to improve interaction in virtual reality (VR) by allowing users to acquire a virtual object without looking at it. We expect this eyes-free approach can effectively reduce head movements and focus changes, so as to speed up the interaction and alleviate fatigue and VR sickness. We conduct three lab studies to progressively investigate the feasibility and usability of eyes-free target acquisition in VR. Results show that, compared with the eyes-engaged manner, the eyes-free approach is significantly faster, provides satisfying accuracy, and introduces less fatigue and sickness; most participants (13/16) prefer this approach. We also measure the accuracy of motion control and evaluate the subjective experience of users when acquiring targets at different locations around the body. Based on the results, we make suggestions on designing appropriate target layouts and discuss several design issues for eyes-free target acquisition in VR.
Change Blindness in Proximity-Aware Mobile Interfaces
Interface designs on both small and large displays can encourage people to alter their physical distance to the display. Mobile devices support this form of interaction naturally, as the user can move the device closer or further away as needed. The current generation of mobile devices can employ computer vision, depth sensing and other inference methods to determine the distance between the user and the display. Once this distance is known, a system can adapt the rendering of display content accordingly and enable proximity-aware mobile interfaces. The dominant method of exploiting proximity-aware interfaces is to remove or superimpose visual information. In this paper, we investigate change blindness in such interfaces. We present the results of two experiments. In our first experiment we show that a proximity-aware mobile interface results in significantly more change blindness errors than a non-moving interface. The absolute difference in error rates was 13.7%. In our second experiment we show that within a proximity-aware mobile interface, gradual changes induce significantly more change blindness errors than instant changes—confirming expected change blindness behavior. Based on our results we discuss the implications of either exploiting change blindness effects or mitigating them when designing mobile proximity-aware interfaces.
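The core mechanism of a proximity-aware interface is adapting rendered content to the estimated user-display distance. The sketch below shows that mechanism in its simplest form; the distance thresholds and detail levels are illustrative assumptions, not values from the experiments.

```python
# Illustrative sketch: choose how much content to render based on the
# estimated user-to-display distance (the basic mechanism behind
# proximity-aware interfaces). Thresholds and detail levels are assumptions.

def detail_level(distance_cm: float) -> str:
    if distance_cm < 25:
        return "full"       # close: show complete detail
    if distance_cm < 45:
        return "summary"    # medium: remove or superimpose secondary info
    return "overview"       # far: headlines and large elements only

for d in (20, 35, 60):
    print(f"{d} cm -> {detail_level(d)}")
```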
Rethinking Thinking Aloud: A Comparison of Three Think-Aloud Protocols
This paper presents the results of a study that compared three think-aloud methods: concurrent think-aloud, retrospective think-aloud, and a hybrid method. The three methods were compared through an evaluation of a library website, which involved four points of comparison: task performance, participants’ experiences, usability problems discovered, and the cost of employing the methods. The results revealed that the concurrent method outperformed both the retrospective and the hybrid methods in facilitating successful usability testing. It detected higher numbers of usability problems than the retrospective method, and produced output comparable to that of the hybrid method. The method received average to positive ratings from its users, and no reactivity was observed. Lastly, this method required much less time on the evaluator’s part than did the other two methods, which involved double the testing and analysis time.
Reading on Smart Glasses: The Effect of Text Position, Presentation Type and Walking
Smart glasses are increasingly being used in professional contexts. With key applications such as short messaging and news reading, they enable continuous access to textual information. In particular, smart glasses allow reading while performing other activities, as they do not occlude the user’s view of the world. For efficient reading, it is necessary to understand how text should be presented on them. We therefore conducted a study with 24 participants using a Microsoft HoloLens to investigate how to display text on smart glasses while walking and sitting. We compared text presentation in the top-right, center, and bottom-center positions with Rapid Serial Visual Presentation (RSVP) and line-by-line scrolling. We found that text displayed in the top-right of smart glasses increases subjective workload and reduces comprehension. RSVP yields higher comprehension while sitting. Conversely, reading with scrolling yields higher comprehension while walking. Insights from our study inform the design of reading interfaces for smart glasses.
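For readers unfamiliar with RSVP, the sketch below shows the presentation scheme in its simplest form: words are flashed one at a time at a fixed rate rather than scrolled line by line. The rate and the console-based "display" are illustrative choices, not the HoloLens implementation used in the study.

```python
# Illustrative sketch of Rapid Serial Visual Presentation (RSVP): show one
# word at a time at a fixed rate. The 250 words-per-minute default and the
# console output are illustrative assumptions.
import time

def rsvp(text: str, words_per_minute: int = 250) -> None:
    delay = 60.0 / words_per_minute
    for word in text.split():
        print(f"\r{word:^20}", end="", flush=True)   # overwrite in place
        time.sleep(delay)
    print()

rsvp("Smart glasses enable continuous access to textual information")
```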
Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration
We present Mini-Me, an adaptive avatar for enhancing Mixed Reality (MR) remote collaboration between a local Augmented Reality (AR) user and a remote Virtual Reality (VR) user. The Mini-Me avatar represents the VR user’s gaze direction and body gestures while it transforms in size and orientation to stay within the AR user’s field of view. A user study was conducted to evaluate Mini-Me in two collaborative scenarios: an asymmetric remote expert in VR assisting a local worker in AR, and a symmetric collaboration in urban planning. We found that the presence of the Mini-Me significantly improved Social Presence and the overall experience of MR collaboration.
Viewer Experience of Obscuring Scene Elements in Photos to Enhance Privacy
With the rise of digital photography and social networking, people are sharing personal photos online at an unprecedented rate. In addition to their main subject matter, photographs often capture various incidental information that could harm people’s privacy. While blurring and other image filters may help obscure private content, they also often affect the utility and aesthetics of the photos, which is important since images shared in social media are mainly for human consumption. Existing studies of privacy-enhancing image filters either primarily focus on obscuring faces, or do not systematically study how filters affect image utility. To understand the trade-offs when obscuring various sensitive aspects of images, we study eleven filters applied to obfuscate twenty different objects and attributes, and evaluate how effectively they protect privacy and preserve image quality for human viewers.
Navigating the Job Search as a Low-Resourced Job Seeker
The Internet is providing increasing access to information about employment opportunities, but not everyone can leverage it effectively. Research suggests that job seekers with limited access to Internet technologies are being left behind, while those with limited social resources are expected to rely on the Internet even more. In this work, we conducted in-depth semi-structured interviews with 11 low-resourced job seekers in a metropolitan area in the Midwestern USA to understand how social and digital resources support their efforts to find work. We find that online resources support job seekers in finding relevant jobs via search, but do not help them identify opportunities to improve their job search process or increase their chances of securing employment. We recommend that systems aiming to support low-resourced job seekers design for deeper engagement with their users across the job search process, to help users recognize ways to improve on their existing practices.
The Value of Empty Space for Design
We present a study on a group of people who, upon adopting a new lifestyle movement, have discovered and constructed alternative aspects of space. Drawing on 23 interviews with minimalists and participant observations of their Meetup meetings, we highlight the central role of empty space in their lives at home. Our findings show how empty space for minimalists emerges as a new, hitherto unknown space in the home and the ways minimalists seek to create, maintain, and stay sensitive to these empty spaces. Empty spaces for minimalists signify their achievements, exude aesthetic appeal, and provide a sanctuary away from city life. We propose new opportunities for design based on our findings of empty space. We suggest that design should consider supporting the practices and values that revolve around the absence of artifacts.
Disorder or Driver?: The Effects of Nomophobia on Work-Related Outcomes in Organizations
Nomophobia, which refers to discomfort or anxiety caused by being unable to use one’s smartphone, has become prevalent among smartphone users. However, the influence of nomophobia on employees’ work-related outcomes remains unclear. Drawing on the job demands-resources theory, this study develops a model that explores the interplay between employees’ nomophobia, work engagement, emotional exhaustion, work interruption, and job productivity. The proposed model was tested using data collected from 187 employees in one organization. The results demonstrate that some employees with high levels of nomophobia feel more engaged with their work and more productive, yet others tend to be emotionally exhausted and feel they are less productive. By illuminating the dual effects of nomophobia on employees’ work-related outcomes, this study extends our understanding of how smartphone use positively and negatively affects employees in the workplace. The notion of nomophobia in the workplace is discussed, along with new directions for research.
My Telepresence, My Culture?: An Intercultural Investigation of Telepresence Robot Operators’ Interpersonal Distance Behaviors
Interpersonal distance behaviors can vary significantly across countries and impact human social interaction. Do these cross-cultural differences play out when one of the interaction partners participates through a teleoperated robot? Emerging research shows that when being approached by a robot, people tend to hold similar cultural preferences as they would for an approaching human. However, no work yet has investigated this question from a robot teleoperator’s perspective. Toward answering this, we conducted an online study (N = 774) using a novel simulation paradigm across two countries (U.S. and India). Results show that in the role of a telepresence robot operator, participants exhibited cross-cultural differences in interpersonal distance behavior in line with human-human proxemic research, indicating that culture-specific distance behavior can manifest in the way a robot operator controls a robot. We discuss implications for designers who seek to automate path planning and navigation for teleoperated robots.
Privacy Lies: Understanding How, When, and Why People Lie to Protect Their Privacy in Multiple Online Contexts
In this paper, we study online privacy lies: lies primarily aimed at protecting privacy. Going beyond privacy lenses that focus on privacy concerns or cost/benefit analyses, we explore how contextual factors, motivations, and individual-level characteristics affect lying behavior through a 356-person survey. We find that statistical models to predict privacy lies that include attitudes about lying, use of other privacy-protective behaviors (PPBs), and perceived control over information improve on models based solely on self-expressed privacy concerns. Based on a thematic analysis of open-ended responses, we find that the decision to tell privacy lies stems from a range of concerns, serves multiple privacy goals, and is influenced by the context of the interaction and attitudes about the morality and necessity of lying. Together, our results point to the need for conceptualizations of privacy lies, and PPBs more broadly, that account for multiple goals, perceived control over data, contextual factors, and attitudes about PPBs.
“We Are the Product”: Public Reactions to Online Data Sharing and Privacy Controversies in the Media
As online platforms increasingly collect large amounts of data about their users, there has been growing public concern about privacy around issues such as data sharing. Controversies around practices perceived as surprising or even unethical often highlight patterns of privacy attitudes when they spark conversation in the media. This paper examines public reaction “in the wild” to two data sharing controversies that were the focus of media attention, regarding the social media and communication services Facebook and WhatsApp, as well as the email service unroll.me. These controversies instigated discussion of data privacy and ethics, accessibility of website policies, notions of responsibility for privacy, cost-benefit analyses, and strategies for privacy management such as non-use. An analysis of reactions and interactions captured by comments on news articles not only reveals information about pervasive privacy attitudes, but also suggests communication and design strategies that could benefit both platforms and users.
FaceDisplay: Towards Asymmetric Multi-User Interaction for Nomadic Virtual Reality
Mobile VR HMDs enable scenarios where they are used in public, excluding all the people in the surrounding area (Non-HMD Users) and reducing them to mere bystanders. We present FaceDisplay, a modified VR HMD consisting of three touch-sensitive displays and a depth camera attached to its back. People in the surrounding area can perceive the virtual world through the displays and interact with the HMD user via touch or gestures. To further explore the design space of FaceDisplay, we implemented three applications (FruitSlicer, SpaceFace, and Conductor), each presenting different aspects of asymmetric co-located interaction (e.g., gestures vs. touch). We conducted an exploratory user study (n=16), observing pairs of people experiencing two of the applications, and found a high level of enjoyment and social interaction with and without an HMD. Based on the findings, we derive design considerations for asymmetric co-located VR applications and argue that VR HMDs are currently designed with only the HMD user in mind but should also include Non-HMD Users.
Interactive Guidance Techniques for Improving Creative Feedback
Good feedback is critical to creativity and learning, yet rare. Many people do not know how to actually provide effective feedback. There is increasing demand for quality feedback — and thus feedback givers — in learning and professional settings. This paper contributes empirical evidence that two interactive techniques — reusable suggestions and adaptive guidance — can improve feedback on creative work. We present these techniques embodied in the CritiqueKit system to help reviewers give specific, actionable, and justified feedback. Two real-world deployment studies and two controlled experiments with CritiqueKit found that adaptively-presented suggestions improve the quality of feedback from novice reviewers. Reviewers also reported that suggestions and guidance helped them describe their thoughts and reminded them to provide effective feedback.
Environmental Factors in Indoor Navigation Based on Real-World Trajectories of Blind Users
Indoor localization technologies can enhance quality of life for blind people by enabling them to independently explore and navigate indoor environments. Researchers typically evaluate their systems in terms of localization accuracy and user behavior along planned routes. We propose two measures of path-following behavior: deviation from the optimal route and trajectory variability. Through regression analysis of real-world trajectories from blind users, we identify relationships between a) these measures and b) elements of the environment, route characteristics, localization error, and instructional cues that users receive. Our results provide insights into path-following behavior for turn-by-turn indoor navigation and have implications for the design of future interactions. Moreover, our findings highlight the importance of reporting these environmental factors and route properties in similar studies. We present automated and scalable methods for calculating these factors and encourage their reporting to enable better interpretation and comparison of results across future studies.
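The two path-following measures invite straightforward operationalizations. The Python sketch below is an illustration only, not the paper's implementation: it computes deviation-from-route as the mean distance from logged positions to the nearest point on the planned route, and trajectory variability as the spread of lateral offsets around a straight reference segment. The function names and exact definitions are assumptions.

```python
import numpy as np

def deviation_from_route(trajectory, route):
    """Mean distance from each trajectory sample to its nearest point on the
    planned route. trajectory, route: (N, 2) and (M, 2) arrays of x, y positions."""
    trajectory, route = np.asarray(trajectory, float), np.asarray(route, float)
    dists = [np.min(np.linalg.norm(route - p, axis=1)) for p in trajectory]
    return float(np.mean(dists))

def trajectory_variability(trajectory):
    """Standard deviation of lateral offsets from the straight segment joining
    the first and last trajectory points (a rough proxy for path wandering)."""
    trajectory = np.asarray(trajectory, dtype=float)
    start, end = trajectory[0], trajectory[-1]
    direction = (end - start) / (np.linalg.norm(end - start) + 1e-9)
    normal = np.array([-direction[1], direction[0]])   # unit vector perpendicular to the segment
    lateral = (trajectory - start) @ normal            # signed lateral offset of each sample
    return float(np.std(lateral))
```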
BSpeak: An Accessible Voice-based Crowdsourcing Marketplace for Low-Income Blind People
BSpeak is an accessible crowdsourcing marketplace that enables blind people in developing regions to earn money by transcribing audio files through speech. We examine accessibility and usability barriers that 15 first-time users, who are low-income and blind, experienced while completing transcription tasks on BSpeak and Mechanical Turk (MTurk). Our mixed-methods analysis revealed severe accessibility barriers in MTurk due to the absence of landmarks, unlabeled UI elements, and improper use of HTML headings. Compared to MTurk, participants found BSpeak significantly more accessible and usable, and completed tasks with higher accuracy in less time due to its voice-based implementation. In a two-week field deployment of BSpeak in India, 24 low-income blind users earned ₹7,310 by completing over 16,000 transcription tasks, yielding transcriptions with 87% accuracy. Through our analysis of BSpeak’s strengths and weaknesses, we provide recommendations for designing crowdsourcing marketplaces for low-income blind people in resource-constrained settings.
Understanding Chatbot-mediated Task Management
Effective task management is essential to successful team collaboration. While the past decade has seen considerable innovation in systems that track and manage group tasks, these innovations have typically been outside of the principal communication channels: email, instant messenger, and group chat. Teams formulate, discuss, refine, assign, and track the progress of their collaborative tasks over electronic communication channels, yet they must leave these channels to update their task-tracking tools, creating a source of friction and inefficiency. To address this problem, we explore how bots might be used to mediate task management for individuals and teams. We deployed a prototype bot to eight different teams of information workers to help them create, assign, and keep track of tasks, all within their main communication channel. From this deployment, we derived seven insights for the design of future bots for coordinating work.
Rich Representations of Visual Content for Screen Reader Users
Alt text (short for “alternative text”) is descriptive text associated with an image in HTML and other document formats. Screen reader technologies speak the alt text aloud to people who are visually impaired. Introduced with HTML 2.0 in 1995, the alt attribute has not evolved despite significant changes in technology over the past two decades. In light of the expanding volume, purpose, and importance of digital imagery, we reflect on how alt text could be supplemented to offer a richer experience of visual content to screen reader users. Our contributions include articulating the design space of representations of visual content for screen reader users, prototypes illustrating several points within this design space, and evaluations of several of these new image representations with people who are blind. We close by discussing the implications of our taxonomy, prototypes, and user study findings.
Let’s Hate Together: How People Share News in Messaging, Social, and Public Networks
There are currently a wide variety of ways to share news with others: from sharing in a personal message, to sharing on a social network, to publicly posting. Through a survey with over one thousand people and an artifact analysis of 262 shared articles, we examine differences in motivations and frequency of sharing news on public, social and private platforms. We find that public sharing is more focused on spreading an ideology, while private sharing in messaging is dominated by stories inspired by the recipient’s interests or context. The survey revealed three main groups of news sharing practices: those who shared to all channels (public, social, private), those who didn’t share at all, and those who shared to private and social. The groups differed in their attitudes toward online discussion; those that shared the most were neutral and those that didn’t share had negative attitudes about discussion online. We discuss sharing practices and implications for social systems that support sharing news.
Reinterpreting Schlemmer’s Triadic Ballet: Interactive Costume for Unthinkable Movements
In the 1920s, Oskar Schlemmer, an artist in the Bauhaus movement, created the Triadic Ballet costumes. These restrict the movement of dancers, creating new expressions. Inspired by this, we designed an interactive wire costume. It restricts lower-body movements and emphasizes arm movements that spur LED-light ‘sparks’ and ‘waves’ wired into a tutu-like costume. The Wire Costume was introduced to a dancer who found that an unusual bond emerged between her and the costume. We discuss how sensory alteration (sight, kinesthetic awareness and proprioception) and bodily training to adjust to the new soma can result in novel, evocative forms of expression. The interactive costume can foster a certain mood, introduce feelings, and even embody a whole character — only revealed once worn and danced. We describe a design exploration combining cultural and historical research, interviews with experts and material explorations that culminated in a novel prototype.
Exploring the Potential of Exergames to affect the Social and Daily Life of People with Dementia and their Caregivers
This paper presents the outcomes of an exploratory field study that examined the social impact of an ICT-based suite of exergames for people with dementia and their caregivers. Qualitative data was collected over a period of 8 months, during which time we studied the daily life of 14 people with dementia and their informal and professional caregivers. We focus on the experiential aspects of the system and examine its social impact when integrated into the daily routines of both people with dementia themselves and their professional and family caregivers. Our findings indicate that relatives were able to regain leisure time, whilst people with dementia were able to recapture certain aspects of their social and daily activities that might otherwise have been lost to them. Results suggest that the system enhanced social interaction, invigorated relationships, and empowered people with dementia and their caregivers to face daily challenges.
Grafter: Remixing 3D-Printed Machines
Creating new 3D printed objects by recombining models found in hobbyist repositories has been referred to as “remixing”. In this paper, we explore how to best support users in remixing a specific class of 3D printed objects, namely those that perform mechanical functions. In our survey, we found that makers remix such machines by manually extracting parts from one parent model and combining them with parts from a different parent model. This approach often puts axles made by one maker into bearings made by another maker or combines a gear by one maker with a gear by a different maker. This approach is problematic, however, as parts from different makers tend to fit poorly, which results in long series of tweaks and test-prints until all parts finally work together. We address this with our interactive system, Grafter. Grafter does two things. First, Grafter largely automates the process of extracting and recombining mechanical elements from 3D printed machines. Second, it enforces a more efficient approach to reuse: it prevents users from extracting individual parts, and instead affords extracting groups of mechanical elements that already work together, such as axles and their bearings or pairs of gears. We call this mechanism-based remixing. In a final user study, all models that participants had remixed using Grafter could be 3D printed without further tweaking and worked immediately.
Depth Conflict Reduction for Stereo VR Video Interfaces
Applications for viewing and editing 360° video often render user interface (UI) elements on top of the video. For stereoscopic video, in which the perceived depth varies over the image, the perceived depth of the video can conflict with that of the UI elements, creating discomfort and making it hard to shift focus. To address this problem, we explore two new techniques that adjust the UI rendering based on the video content. The first technique dynamically adjusts the perceived depth of the UI to avoid depth conflict, and the second blurs the video in a halo around the UI. We conduct a user study to assess the effectiveness of these techniques in two stereoscopic VR video tasks: video watching with subtitles, and video search.
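One simple way to realize the first technique (dynamically adjusting the UI's perceived depth) is to sample the stereo disparity of the video in the UI's screen rectangle and render the UI just in front of the closest content there. The Python sketch below is an assumption-laden illustration of that idea, not the authors' implementation; the names `disparity_map`, `ui_rect`, and `margin` are hypothetical.

```python
import numpy as np

def adjusted_ui_disparity(disparity_map, ui_rect, margin=2.0):
    """Return a screen disparity (in pixels) at which to render a UI element so
    it appears slightly in front of the stereo video content beneath it
    (larger disparity = perceived as closer to the viewer).

    disparity_map: (H, W) per-pixel disparity of the current video frame.
    ui_rect:       (x0, y0, x1, y1) pixel bounds of the UI element.
    margin:        extra disparity so the UI sits unambiguously in front.
    """
    x0, y0, x1, y1 = ui_rect
    region = disparity_map[y0:y1, x0:x1]
    nearest_content = float(np.max(region))   # closest content under the UI
    return nearest_content + margin
```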
From Pulse Trains to “Coloring with Vibrations”: Motion Mappings for Mid-Air Haptic Textures
Can we experience haptic textures in mid-air? Typically, the experience of texture is caused by vibration of the fingertip as it moves over the surface of an object. This object’s surface also guides the finger’s movement, creating an implicit motion-to-vibration mapping. If we wish to simulate a texture in mid-air, such guidance does not exist, making the choice of motion-to-vibration mapping non-obvious. We evaluate the experience of moving a pointer with four different motion-to-vibration mappings in an interview study. We found that some mappings lead to a perception shift, transforming the experience. When this occurs, the pointer is no longer perceived as vibrating, interactions become more pleasurable, and users have an increased experience of agency and control. We discuss how to leverage this in the design of haptic interfaces.
Collaborative Dynamic Queries: Supporting Distributed Small Group Decision-making
Communication is critical in small group decision-making processes during which each member must be able to express preferences to reach consensus. Finding consensus can be difficult when each member in a group has a perspective that potentially conflicts with those of others. To support groups attempting to harmonize diverse preferences, we propose Collaborative Dynamic Queries (C-DQ), a UI component that enables a group to filter queries over decision criteria while being aware of others’ preferences. To understand how C-DQ affects a group’s behavior and perception in the decision-making process, we conducted 2 studies with groups who were prompted to make decisions together on mobile devices in a dispersed and synchronous situation. In Study 1, we found showing group preferences with C-DQ helped groups to communicate more efficiently and effectively. In Study 2, we found filtering candidates based on each member’s own filter range further improved a group’s communication efficiency and effectiveness.
Crowdsourcing Rural Network Maintenance and Repair via Network Messaging
Repair and maintenance requirements limit the successful operation of rural infrastructure. Current best practices are centralized management, which requires travel from urban areas and is prohibitively expensive, or intensively training community members, which limits scaling. We explore an alternative model: crowdsourcing repair from the community. Leveraging a Community Cellular Network in the remote Philippines, we sent SMS to all active network subscribers (n = 63) requesting technical support. From the pool of physical respondents, we explored their ability to repair through mock failures and conducted semi-structured interviews about their experiences with repair. We learned that community members would be eager to practice repair if allowed, would network to recruit more expertise, and seemingly have the collective capacity to resolve some common failures. They are most successful when repairs map directly to their lived experiences. We suggest infrastructure design considerations that could make repairs more tractable and argue for an inclusive approach.
Interactive Interior and Proxemics Thresholds: Empowering Participants in Sensitive Conversations
The position and workings of interactive interior elements strongly shape the relations people may enact. This paper reports on the conception and evaluation of an interactive table and its interior effects, designed to support sensitive consultations between healthcare personnel, patients, and relatives as they happen during the treatment of cancer in a hospital department of oncology. The interior design encompasses the physical shape of the artefact, its digital functionality, and how seating around it is arranged. The design of the table is substantiated through observations of current practice, framing of the design challenge, conceptualization, and exploration of form-giving alternatives. Through a set of evaluations in actual use settings, we argue that the design concept of the table as interactive interior points to a need to rearticulate notions in interaction proxemics. In particular, this paper argues that proxemics thresholds should be regarded as dynamic and relational.
SESSION: Paper Presentations
Blocks4All: Overcoming Accessibility Barriers to Blocks Programming for Children with Visual Impairments
Blocks-based programming environments are a popular tool to teach children to program, but they rely heavily on visual metaphors and are therefore not fully accessible for children with visual impairments. We evaluated existing blocks-based environments and identified five major accessibility barriers for visually impaired users. We explored techniques to overcome these barriers in an interview with a teacher of the visually impaired and formative studies on a touchscreen blocks-based environment with five children with visual impairments. We distill our findings on usable touchscreen interactions into guidelines for designers of blocks-based environments.
What’s at Stake: Characterizing Risk Perceptions of Emerging Technologies
One contributing factor to how people choose to use technology is their perceptions of associated risk. In order to explore this influence, we adapted a survey instrument from risk perception literature to assess mental models of users and technologists around risks of emerging, data-driven technologies (e.g., identity theft, personalized filter bubbles). We surveyed 175 individuals for comparative and individual assessments of risk, including characterizations using psychological factors. We report our observations around group differences (e.g., expert versus non-expert) in how people assess risk, and what factors may structure their conceptions of technological harm. Our findings suggest that technologists see these risks as posing a bigger threat to society than do non-experts. Moreover, across groups, participants did not see technological risks as voluntarily assumed. Differences in how people characterize risk have implications for the future of design, decision-making, and public communications, which we discuss through a lens we call risk-sensitive design.
TaskCam: Designing and Testing an Open Tool for Cultural Probes Studies
TaskCams are simple digital cameras intended to serve as a tool for Cultural Probe studies, made available by the Interaction Research Studio via open-source distribution. In conjunction with an associated website, instructions, and videos, they represent a novel strategy for disseminating and facilitating a research methodology. At the same time, they provide a myriad of options for customisation and modification, allowing researchers to adopt and adapt them to their needs. In the first part of this paper, the design team describes the rationale and design of the TaskCams and the tactics developed to make them publicly available. In the second part, the story is taken up by designers from the Everyday Design Studio, who assembled their own TaskCams and customised them extensively for a Cultural Probe study they ran for an ongoing project. Rather than discussing the results of their study, we focus on how their experiences reveal some of the issues in both producing and using open-source products such as these. These experiences suggest the potential of TaskCams to support design-led user studies more generally.
Making Problems in Design Research: The Case of Teen Shoplifters on Tumblr
HCI draws on a variety of traditions, but recently there have been calls to consolidate contributions around the problems researchers set out to solve. However, with this comes the assumption that problems are tractable and certain, rather than constructed and framed by researchers. We take as a case study a Tumblr community of teen shoplifters who post on how to steal from stores, discuss shoplifting as political resistance, and share jokes and stories about the practice. We construct three different “problems” and imagine studies that might result from applying different design approaches: Design Against Crime, Critical Design, and Value Sensitive Design. Through these studies we highlight how interpretations of the same data can lead to radically different design responses. We conclude by discussing problem making as a historically and politically contingent process that allows researchers to connect data and design according to certain moral and ethical principles.
Group vs Individual: Impact of TOUCH and TILT Cross-Device Interactions on Mixed-Focus Collaboration
Cross-device environments (XDEs) have been developed to support a multitude of collaborative activities. Yet, little is known about how different cross-device interaction techniques impact group collaboration, including how they impact the independent and joint work that often occurs during group work. In this work, we explore the impact of two XDE data browsing techniques: TOUCH and TILT. Through a mixed-methods study of a collaborative sensemaking task, we show that TOUCH and TILT have distinct impacts on how groups accomplish, and shift between, independent and joint work. Finally, we reflect on these findings and how they can more generally inform the design of XDEs.
Back to Analogue: Self-Reporting for Parkinson’s Disease
We report the process used to create artefacts for self-reporting Parkinson’s Disease symptoms. Our premise was that a technology-based approach would provide participants with an effective, flexible, and resilient technique. After testing four prototypes using Bluetooth, NFC, and a microcontroller, we achieved almost full compliance and high acceptance using a paper diary to track day-to-day fluctuations over 49 days. This diary is tailored to each patient’s condition, does not require any handwriting, allows for implicit reminders, provides recording flexibility, and its answers can be encoded automatically. We share five design implications for future Parkinson’s self-reporting artefacts: reduce participant completion demand, design to offset the effect of tremor on input, enable implicit reminders, design for positive and negative consequences of increased awareness of symptoms, and consider the effects of handwritten notes on compliance, encoding burden, and data quality.
The Hide and Seek of Workspace: Towards Human-Centric Sustainable Architecture
This contribution exemplifies how the study of space perception and its impact on space-use behavior can inform sustainable architecture. We describe our attempt to integrate the methods of user research into an architectural project focused on optimizing space usage. In an office building, two large office rooms were refurbished to provide desk-sharing opportunities through hot-desking. We studied the space-use behavior of 33 office workers over eight weeks in those two rooms, as well as their occasional presence in ten other areas (cafeteria, atrium, meeting rooms, etc.). Quantitative and qualitative analyses were performed to understand the nature and nuances of space occupancy at the scope of the building and within the refurbished offices. While at the scope of the building the patterns of movement between rooms were related to the professional profiles of the users, at the scope of the office the occupancy patterns were influenced by the spatial design of the workspaces. More precisely, certain visual attributes of a workspace, namely Visual Exposure and Visual Openness, could determine whether or not it was regularly used. In this paper, we describe our findings in detail and discuss their implications for sustainable building design.
How Teens with Visual Impairments Take, Edit, and Share Photos on Social Media
We contribute a qualitative investigation of how teens with visual impairments (VIP) access smartphone photography, from the time they take photos through editing and sharing them on social media. We observed that they largely want to engage with photos visually, similarly to their sighted peers, and have developed strategies around photo capture, editing, sharing, and consumption that attempt to mitigate usability limitations of current photography and social media apps. We demonstrate the need for more work examining how young people with low vision engage with smartphone photography and social media, as they are heavy users of such technologies and have challenges distinct from their totally blind counterparts. We conclude with design considerations to alleviate the usability barriers we uncovered and for making smartphone photography and social media more accessible and relevant for VIPs.
Attending to Slowness and Temporality with Olly and Slow Game: A Design Inquiry Into Supporting Longer-Term Relations with Everyday Computational Objects
Slowness has emerged as a rich lens to frame HCI investigations into supporting longer-term human-technology relations. Yet, there is a need to further address how we design for slowness on conceptual and practical levels. Drawing on the concepts of unawareness, intersections, and ensembles, we contribute an investigation into designing for slowness and temporality grounded in design practice through two cases: Olly and Slow Game. We designed these artifacts over two and a half years with careful attention to how the set of concepts influenced key design decisions in terms of their form, materials, and computational qualities. Our designer-researcher approach revealed that, when put into practice, the concepts helped generatively grapple with slowness and temporality, but are in need of further development to be mobilized for design. We critically reflect on insights emerging across our practice-based research to reflexively refine the concepts and better support future HCI research and practice.
VirtualGrasp: Leveraging Experience of Interacting with Physical Objects to Facilitate Digital Object Retrieval
We propose VirtualGrasp, a novel gestural approach to retrieving virtual objects in virtual reality. Using VirtualGrasp, a user retrieves an object by performing a barehanded gesture as if grasping its physical counterpart. The object-gesture mapping under this metaphor is highly intuitive, which enables users to easily discover and remember the gestures for retrieving objects. We conducted three user studies to demonstrate the feasibility and effectiveness of the approach. Progressively, we investigated the consensus of the object-gesture mapping across users, the expressivity of grasping gestures, and the learnability and performance of the approach. Results showed that users achieved high agreement on the mapping, with an average agreement score [35] of 0.68 (SD=0.27). Without prior exposure to the gestures, users successfully retrieved 76% of objects with VirtualGrasp. A week after learning the mapping, they could recall the gestures for 93% of objects.
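For readers unfamiliar with the agreement score mentioned (citation [35] in the original), the measure commonly used in gesture-elicitation work sums, per referent object, the squared proportions of participants proposing the same gesture. A minimal sketch, assuming this standard formulation is the one intended:

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement score for one referent object: the sum over identical-gesture
    groups of (group size / total proposals) squared.
    proposals: list of gesture labels, one per participant."""
    counts = Counter(proposals)
    n = len(proposals)
    return sum((c / n) ** 2 for c in counts.values())

# Example: 7 of 10 participants propose gesture "a", 3 propose "b"
# -> 0.7**2 + 0.3**2 == 0.58
```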
Harvesting Caregiving Knowledge: Design Considerations for Integrating Volunteer Input in Dementia Care
Improving volunteer performance leads to better caregiving in dementia care settings. However, caregiving knowledge systems have focused on eliciting and sharing expert, primary caregiver knowledge, rather than volunteer-provided knowledge. Through the use of an experience prototype, we explored the content of volunteer caregiver knowledge and identified ways in which such non-expert knowledge can be useful to dementia care. By using lay language, sharing information specific to the client, and collaboratively finding strategies for interaction, volunteers were able to boost the effectiveness of future volunteers. Therapists who reviewed the content affirmed the reliability of volunteer caregiver knowledge and placed value on its recency, variety, and ability to help bridge language and professional barriers. We discuss how future systems designed for eliciting and sharing volunteer caregiver knowledge can be used to promote better dementia care.
Gamification for Self-Tracking: From World of Warcraft to the Design of Personal Informatics Systems
World of Warcraft (WoW) may be a source of inspiration to enrich the user experience of Personal Informatics systems and, at the same time, improve gamification design. Through the findings of a four-year reflexive ethnography in WoW, I outline how its game design elements support players in making sense of their own data, emphasizing how “game numbers” are turned into meanings. On the basis of the study results, I propose a series of design considerations for self-tracking systems, which recommend embodying data in digital entities, providing different analytical tools depending on users’ expertise through a flexible model, and fostering “communities of practice” to support learning processes.
Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality
Head and eye movement can be leveraged to improve the user’s interaction repertoire for wearable displays. Head movements are deliberate and accurate, and provide the current state-of-the-art pointing technique. Eye gaze can potentially be faster and more ergonomic, but suffers from low accuracy due to calibration errors and drift of wearable eye-tracking sensors. This work investigates precise, multimodal selection techniques using head motion and eye gaze. A comparison of speed and pointing accuracy reveals the relative merits of each method, including the achievable target size for robust selection. We demonstrate and discuss example applications for augmented reality, including compact menus with deep structure, and a proof-of-concept method for on-line correction of calibration drift.
Sketch&Stitch: Interactive Embroidery for E-textiles
E-Textiles are fabrics that integrate electronic circuits and components. Makers use them to create interactive clothing, furniture, and toys. However, this requires significant manual labor, skill, and the use of technology-centric design tools. We introduce Sketch&Stitch, an interactive embroidery system for creating e-textiles using a traditional crafting approach: users draw their art and circuit directly on fabric using colored pens. The system takes a picture of the sketch, converts it to embroidery patterns, and sends them to an embroidery machine. Alternating between sketching and stitching, users build and test their design incrementally. Sketch&Stitch features Circuitry Stickers representing circuit boards and components, custom stitch patterns that insulate wire crossings, and various textile touch sensors such as pushbuttons, sliders, and 2D touchpads. Circuitry Stickers serve as placeholders during design. Using computer vision, they are recognized and replaced later in the appropriate embroidery phases. We close with technical considerations and application examples.
Using Stakeholder Theory to Examine Drivers’ Stake in Uber
Uber is a ride-sharing platform that is part of the ‘gig-economy,’ where the platform supports and coordinates a labor market in which there are a large number of ephemeral, piecemeal jobs. Despite numerous efforts to understand the impacts of these platforms and their algorithms on Uber drivers, how to better serve and support drivers with these platforms remains an open challenge. In this paper, we frame Uber through the lens of Stakeholder Theory to highlight drivers’ position in the workplace, which helps inform the design of a more ethical and effective platform. To this end, we analyzed Uber drivers’ forum discussions about their lived experiences of working with the Uber platform. We identify and discuss the impact of the stakes that drivers have in relation to both the Uber corporation and their passengers, and look at how these stakes impact both the platform and drivers’ practices.
Conversations in the Eye of the Storm: At-Scale Features of Conversational Structure in a High-Tempo, High-Stakes Microblogging Environment
This work propels social media research beyond the single post as the unit of analysis toward fuller treatment of interaction by making the construct of the conversation analytically available. We offer a method for constructing @reply conversations in Twitter to apprehend social media conversational features at scale. We apply this method to the high-tempo, high-stakes environment of 2012’s Hurricane Sandy, with its high volume of online talk by affected locals and distinct disaster-stage phasing by which to consider interactional difference. We investigate the temporality of conversations; the relationality of who speaks to whom; the number and kind of conversationalists; and how content affects temporal features. The analysis reveals that, during the height of the emergency, people expand conversations both in number and kind of conversational partners, just as their information search intensifies. This expansion contributes to longer, slower-paced conversations in the high-emergency period, suggesting reliance on online relationships during times of greatest uncertainty.
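Constructing @reply conversations amounts to grouping tweets into reply trees by following each tweet's reply pointer back to a root. The sketch below shows the basic bookkeeping in Python; the field names mirror the Twitter API's `id` and `in_reply_to_status_id`, and the handling of missing parents and cycles here is a simplifying assumption rather than the paper's method.

```python
from collections import defaultdict

def build_conversations(tweets):
    """Group tweets into @reply conversation trees.

    tweets: iterable of dicts with 'id' and 'in_reply_to_status_id'
    (None for a root tweet). Returns a mapping from root tweet id to the
    list of tweet ids in that conversation, in input order.
    """
    by_id = {t["id"]: t for t in tweets}
    conversations = defaultdict(list)

    def find_root(tweet):
        seen = set()
        while tweet["in_reply_to_status_id"] in by_id:
            if tweet["id"] in seen:        # guard against malformed reply cycles
                break
            seen.add(tweet["id"])
            tweet = by_id[tweet["in_reply_to_status_id"]]
        return tweet["id"]                 # oldest reachable ancestor in the dataset

    for t in tweets:
        conversations[find_root(t)].append(t["id"])
    return dict(conversations)
```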
Interactive Extraction of Examples from Existing Code
Programmers frequently learn from examples produced and shared by other programmers. However, it can be challenging and time-consuming to produce concise, working code examples. We conducted a formative study where 12 participants made examples based on their own code. This revealed a key hurdle: making meaningful simplifications without introducing errors. Based on this insight, we designed a mixed-initiative tool, CodeScoop, to help programmers extract executable, simplified code from existing code. CodeScoop enables programmers to “scoop” out a relevant subset of code. Techniques include selectively including control structures and recording an execution trace that allows authors to substitute literal values for code and variables. In a controlled study with 19 participants, CodeScoop helped programmers extract executable code examples with the intended behavior more easily than with a standard code editor.
Haptic Revolver: Touch, Shear, Texture, and Shape Rendering on a Reconfigurable Virtual Reality Controller
We present Haptic Revolver, a handheld virtual reality controller that renders fingertip haptics when interacting with virtual surfaces. Haptic Revolver’s core haptic element is an actuated wheel that raises and lowers underneath the finger to render contact with a virtual surface. As the user’s finger moves along the surface of an object, the controller spins the wheel to render shear forces and motion under the fingertip. The wheel is interchangeable and can contain physical textures, shapes, edges, or active elements to provide different sensations to the user. Because the controller is spatially tracked, these physical features can be spatially registered with the geometry of the virtual environment and rendered on-demand. We evaluated Haptic Revolver in two studies to understand how wheel speed and direction impact perceived realism. We also report qualitative feedback from users who explored three application scenarios with our controller.
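The core rendering idea (spinning the wheel so its surface moves with the finger) reduces to the rolling relation v = ωr. The toy sketch below assumes the wheel renders only the component of fingertip motion along its rolling direction; the names and coordinate frame are illustrative, not the controller's actual control loop.

```python
import numpy as np

def wheel_angular_velocity(finger_velocity, rolling_direction, wheel_radius_m):
    """Angular velocity (rad/s) that makes the wheel surface move with the
    fingertip along the wheel's rolling direction; the sign encodes spin direction.

    finger_velocity:   3D fingertip velocity on the virtual surface (m/s).
    rolling_direction: unit vector of the direction the wheel surface moves
                       when spun forward, in the same coordinate frame.
    wheel_radius_m:    wheel radius in meters.
    """
    tangential_speed = float(np.dot(finger_velocity, rolling_direction))
    return tangential_speed / wheel_radius_m
```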
Influences of Human Cognition and Visual Behavior on Password Strength during Picture Password Composition
Visual attention, search, processing, and comprehension are important cognitive tasks during a graphical password composition activity. Aiming to shed light on whether individual differences in visual behavior affect the strength of the created passwords, we conducted an eye-tracking study (N=36) and adopted an accredited cognitive style theory to interpret the results. The analysis revealed that users with different cognitive styles followed different patterns of visual behavior, which affected the strength of the created passwords. Motivated by the results of the first study, we introduced adaptive characteristics to the user authentication mechanism, aiming to assist specific cognitive style user groups in creating more secure passwords, and conducted a second study with a new sample (N=40) to test the adaptive characteristics. The results strengthen our assumption that adaptive mechanisms based on users’ differences in cognitive and visual behavior offer a new perspective for improving password strength within graphical user authentication schemes.
Supporting Workplace Detachment and Reattachment with Conversational Intelligence
Research has shown that productivity is mediated by an individual’s ability to detach from their work at the end of the day and reattach to it when they return the next day. In this paper we explore the extent to which structured dialogues, focused on individuals’ work-related tasks or emotions, can help them with the detachment and reattachment processes. Our inquiry is driven by SwitchBot, a conversational bot which engages with workers at the start and end of their work day. After preliminarily validating the design of a detachment and reattachment dialogue framework with 108 crowdworkers, we study SwitchBot’s use in situ for 14 days with 34 information workers. We find that workers send fewer e-mails after work hours and spend a larger percentage of their first hour at work using productivity applications than they normally would when using SwitchBot. Further, we find that productivity gains were better sustained when conversations focused on work-related emotions. Our results suggest that conversational bots can be effective tools for aiding workplace detachment and reattachment and can help people make successful use of their time on and off the job.
iTurk: Turning Passive Haptics into Active Haptics by Making Users Reconfigure Props in Virtual Reality
We present a system that complements virtual reality experiences with passive props, yet still allows modifying the virtual world at runtime. The main contribution of our system is that it does not require any actuators; instead, our system employs the user to reconfigure and actuate otherwise passive props. We demonstrate a foldable prop that users reconfigure to represent a suitcase, fuse cabinet, railing, and a seat. A second prop, suspended from a long pendulum, not only stands in for inanimate objects, but also for objects that move and demonstrate proactive behavior, such as a group of flying droids that physically attack the user. Our approach conveys a sense of a living, animate world, when in reality the user is the only animate entity present in the system, complemented with only one or two physical props. In our study, participants rated their experience as more enjoyable and realistic than a corresponding no-haptics condition.
Clusters, Trends, and Outliers: How Immersive Technologies Can Facilitate the Collaborative Analysis of Multidimensional Data
Immersive technologies such as augmented reality devices are opening up a new design space for the visual analysis of data. This paper studies the potential of an augmented reality environment for the purpose of collaborative analysis of multidimensional, abstract data. We present ART, a collaborative analysis tool to visualize multidimensional data in augmented reality using an interactive, 3D parallel coordinates visualization. The visualization is anchored to a touch-sensitive tabletop, benefiting from well-established interaction techniques. The results of group-based, expert walkthroughs show that ART can facilitate immersion in the data, a fluid analysis process, and collaboration. Based on the results, we provide a set of guidelines and discuss future research areas to foster the development of immersive technologies as tools for the collaborative analysis of multidimensional data.
Methods for Evaluation of Imperfect Captioning Tools by Deaf or Hard-of-Hearing Users at Different Reading Literacy Levels
As Automatic Speech Recognition (ASR) improves in accuracy, it may become useful for transcribing spoken text in real-time for Deaf and Hard-of-Hearing (DHH) individuals. To quantify users’ comprehension and opinion of automatic captions, which inevitably contain some errors, we must identify appropriate methodologies for evaluation studies with DHH users, including quantitative measurement instruments suitable to the various literacy levels among the DHH population. A literature review guided our selection of several probes (e.g., multiple-choice comprehension-question accuracy or response time, scalar questions about users’ estimation of ASR errors or their impact, and users’ numerical estimation of accuracy), which we evaluated in a lab study with DHH users, wherein their literacy levels and the actual accuracy of each caption stimulus were factors. For some probes, participants with lower literacy had more positive subjective responses overall, and, for participants with particular literacy score ranges, some probes were insufficiently sensitive to distinguish between caption accuracy levels.
Effects of Individual Differences in Blocking Workplace Distractions
Information workers are experiencing ever-increasing online distractions in the workplace, and software to block distractions is becoming more popular. We conducted an exploratory field study with 32 information workers who used software to block online distractions in their workplace for one week. We discovered that with online distractions blocked, participants assessed their focus and productivity to be significantly higher. Those who benefited most were those who reported being less in control of their work, associated with the personality traits of lower Conscientiousness and Lack of Perseverance. Unexpectedly, those reporting higher control of work experienced a cost of higher workload with online distractions blocked. Those who reported the greatest increase in focus with distractions blocked were those who were more susceptible to social media distractions. Without distractions, people with higher control of work worked longer stretches without physical breaks, with consequently higher stress. We present design recommendations to promote focus, informed by the coping behaviors we observed.
Dynamic Demographics: Lessons from a Large-Scale Census of Performative Possibilities in Games
While much popular discussion of representation in games exists, there is very little rigorously collected data available from which to draw direct conclusions. In this study, we set out to address this gap by performing a census of playable characters across a large sample of contemporary games. We gathered data from 200 games, including independently published (“indie”) games and so-called “AAA” titles from large publishers. While our initial analysis yielded some insight into the landscape of playable characters, it also highlighted the contingent, negotiated, and interpretive nature of representation in games. This led to additional analysis that emphasized the ways in which this negotiation manifests in research through the methods and metrics used to quantify representation. We argue that researchers studying representation in games need to treat it as a possibility space for a multitude of potential interpretations rather than as a singular, measurable phenomenon.
Philosophers Living with the Tilting Bowl
This paper reports on a postphenomenological inquiry with six trained philosophers who, as study participants, lived with and reflected on a research product we designed, known as the Tilting Bowl: a ceramic bowl that unpredictably but gently tilts multiple times daily. The Tilting Bowl is a counterfactual artifact designed specifically for this study as part of a material speculation approach to design research. A postphenomenological inquiry describes and analyzes accounts of relationships between humans and technological artifacts, and how each mutually shapes the other through mediations that form the human subjectivity and objectivity of any given situation. This paper contributes an empirical account and analysis of the relations that emerged (background and alterity) and the relativistic views that co-constitute the philosophers, the Tilting Bowl, and their specific worlds. The findings demonstrate the relevance of this philosophical framing for fundamentally and broadly understanding how people engage with digital artifacts.
LumiWatch: On-Arm Projected Graphics and Touch Input
Compact, worn computers with projected, on-skin touch interfaces have been a long-standing yet elusive goal, largely written off as science fiction. Such devices offer the potential to mitigate the significant human input/output bottleneck inherent in worn devices with small screens. In this work, we present the first fully functional and self-contained projection smartwatch implementation, containing the requisite compute, power, projection and touch-sensing capabilities. Our watch offers roughly 40 sq. cm of interactive surface area — more than five times that of a typical smartwatch display. We demonstrate continuous 2D finger tracking with interactive, rectified graphics, transforming the arm into a touchscreen. We discuss our hardware and software implementation, as well as evaluation results regarding touch accuracy and projection visibility.
Hit-or-Wait: Coordinating Opportunistic Low-effort Contributions to Achieve Global Outcomes in On-the-go Crowdsourcing
We consider the challenge of motivating and coordinating large numbers of people to contribute to solving local, communal problems through their existing routines. In order to design such “on-the-go crowdsourcing” systems, there is a need for mechanisms that can effectively coordinate contributions to address problem solving needs in the physical world while leveraging people’s existing mobility with minimal disruption. We thus introduce Hit-or-Wait, a general decision-theoretic mechanism that intelligently controls decisions over when to notify a person of a task, in ways that reason both about system needs across tasks and about a helper’s changing patterns of mobility. Through simulations and a field study in the context of community-based lost-and-found, we demonstrate that using Hit-or-Wait enables a system to make efficient use of people’s contributions with minimal disruptions to their routines without the need for explicit coordination. Interviews with field study participants further suggest that highlighting an individual’s contribution to the global goal may help people value their contributions more.
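Although the abstract does not detail the decision-theoretic mechanism, its flavor can be conveyed with a toy expected-utility comparison between interrupting the current passerby and waiting for a better-placed helper. Everything in the sketch below (the inputs, the linear utility, the threshold) is a hypothetical illustration, not Hit-or-Wait itself.

```python
def should_notify_now(p_accept_now, p_better_later, value_now, value_later,
                      disruption_cost):
    """Toy 'hit or wait' rule: notify the current passerby only if the expected
    benefit of doing so exceeds the expected benefit of waiting.

    p_accept_now:    estimated probability this helper accepts and completes the task.
    p_better_later:  estimated probability a better-placed helper appears in time.
    value_now/later: value of the task being completed in each case.
    disruption_cost: cost of interrupting this person's routine.
    """
    expected_utility_hit = p_accept_now * value_now - disruption_cost
    expected_utility_wait = p_better_later * value_later
    return expected_utility_hit >= expected_utility_wait
```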
An Empirical Exploration of Mindfulness Design Using Solo Travel Domain
Despite recent popularity of mindfulness smartphone applications and an interest in incorporating mindfulness into new technologies, existing applications tend to focus mainly on its meditation dimension. In this paper, we review existing literature on digital and traditional mindfulness to map its design space and synthesize the findings with our prior research on designing for aesthetic needs. We identify “recollection” and “evaluation” as two important dimensions of mindfulness that have not yet been incorporated into popular digital tools. Through a two-phase design activity over 16 months, we developed ColorAway, an innovative tool that promotes mindfulness through interaction with modified travel photos. Recruited participants evaluated ColorAway and offered unique insights into how mindfulness can be better designed. We also discuss how the process of designing for mindfulness can possibly inform the design of personal technology. This research is part of a larger study that builds on scholarly research and theories with the goal of designing interactive technologies for solo travelers.
Inspiring AWE: Transforming Clinic Waiting Rooms into Informal Learning Environments with Active Waiting Education
This research explores patient education in pediatric hematology and oncology clinics. Based on interviews, observations, and a review of existing patient materials, we argue that education in clinic waiting rooms is in need of reform. We applied design principles from research in science museums along with tangible interaction techniques to create the Sickle Cell Station, an interactive learning experience about sickle cell disease. To evaluate the effectiveness of this design we observed approximately 580 participants in a pediatric hematology clinic waiting area in four different design conditions. These observations included detailed video analysis of 81 patients and their parents to understand their interaction and learning with the Sickle Cell Station. Our results show an engaging learning experience with relevant conversation, inquiry, and collaboration. We describe how patient engagement varied in the four design conditions and conclude with implications for new designs in the area of Active Waiting Education (AWE).
Object Manipulation in Virtual Reality Under Increasing Levels of Translational Gain
Room-scale Virtual Reality (VR) has become an affordable consumer reality, with applications ranging from entertainment to productivity. However, the limited physical space available for room-scale VR in the typical home or office environment poses a significant problem. To solve this, physical spaces can be extended by amplifying the mapping of physical to virtual movement (translational gain). Although amplified movement has been used since the earliest days of VR, little is known about how it influences reach-based interactions with virtual objects, now a standard feature of consumer VR. Consequently, this paper explores the picking and placing of virtual objects in VR for the first time, with translational gains of between 1x (a one-to-one mapping of a 3.5m*3.5m virtual space to the same sized physical space) and 3x (10.5m*10.5m virtual mapped to 3.5m*3.5m physical). Results show that reaching accuracy is maintained at gains of up to 2x; however, going beyond this diminishes accuracy and increases simulator sickness and perceived workload. We suggest that gain levels of 1.5x to 1.75x can be utilized without compromising the usability of a VR task, significantly expanding the bounds of interactive room-scale VR.
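Translational gain is a simple scaling of tracked displacement. A minimal sketch follows, assuming the gain is applied to horizontal movement about the center of the tracking space and that height is left unscaled (details the abstract does not specify).

```python
import numpy as np

def apply_translational_gain(physical_pos, room_center, gain):
    """Map a tracked physical position (x, y, z; y up) to a virtual position by
    scaling horizontal displacement from the tracking-space center by `gain`.

    With gain = 1.0 a 3.5m*3.5m room maps onto an equally sized virtual space;
    with gain = 3.0 the same room covers 10.5m*10.5m.
    """
    offset = np.asarray(physical_pos, float) - np.asarray(room_center, float)
    scaled = offset * np.array([gain, 1.0, gain])   # scale x and z, leave height alone
    return np.asarray(room_center, float) + scaled
```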
Watch Me Play: Does Social Facilitation Apply to Digital Games?
The presence of observers and virtual characters can significantly shape our gaming experience. Researchers suppose that most basic socio-psychological phenomena also apply to digital games. However, the social processes in gaming setups can differ from our experience in other social situations. Our work emphasizes this awareness: insights are needed for the purposeful design of a game’s social setting, especially in applied contexts of learning and training. Here, we focus on the social facilitation effect, which describes an unconscious change in performance due to the presence of others, by investigating the impact of real observers and virtual agents on player experience and performance in four different games. The results of our four studies show that, in contrast to previous assumptions, in-game success was not significantly influenced by the presence of any social entity, indicating that social facilitation does not generally apply to the context of playing digital games.
Do You Think What I Think: Perceptions of Delayed Instant Messages in Computer-Mediated Communication of Romantic Relations
In romantic relationships, Instant Messaging (IM) can serve as a communication channel for maintaining a sense of mutual presence and relational closeness when partners are physically separated. However, IM is asynchronous by design: delays can occur before people receive and reply to incoming messages, which may violate romantic partners’ mutual expectations. Little is understood about how unintended and intended delays affect the relationship of romantic partners. This work examines how romantic partners grow, perceive, and use mutual knowledge about each other in delayed IM to resolve the expectancy violation. We conducted a 7-day diary study with 16 romantic couples and used the diary entries as probes for post-study one-on-one interviews. Our findings show that couples employ different strategies of information grounding to parse and resolve delayed IM. Based on these findings, we propose several theoretical and practical implications.
From Research to Practice: Informing the Design of Autism Support Smart Technology
Smart technologies (wearable and mobile devices) show tremendous potential in the detection, diagnosis, and management of Autism Spectrum Disorder (ASD) by enabling continuous real-time data collection, identifying effective treatment strategies, and supporting intervention design and delivery. Though promising, smart technology is still rarely used effectively to aid ASD. We propose a set of implications to guide the design of ASD-support technology by analyzing 149 peer-reviewed articles focused on children with autism from the ACM Digital Library, IEEE Xplore, and PubMed. Our analysis reveals that such technology should facilitate real-time detection and identification of points-of-interest, adapt its behavior based on the user’s real-time affective state, utilize familiar and unfamiliar features depending on user context, and aid in revealing even minuscule progress made by children with autism. Our findings indicate that such technology should strive to blend in with everyday objects. Moreover, gradual exposure and desensitization may facilitate successful adaptation to novel technology.
Explanations as Mechanisms for Supporting Algorithmic Transparency
Transparency can empower users to make informed choices about how they use an algorithmic decision-making system and judge its potential consequences. However, transparency is often conceptualized by the outcomes it is intended to bring about, not the specifics of mechanisms to achieve those outcomes. We conducted an online experiment focusing on how different ways of explaining Facebook’s News Feed algorithm might affect participants’ beliefs and judgments about the News Feed. We found that all explanations caused participants to become more aware of how the system works, and helped them to determine whether the system is biased and if they can control what they see. The explanations were less effective for helping participants evaluate the correctness of the system’s output, and form opinions about how sensible and consistent its behavior is. We present implications for the design of transparency mechanisms in algorithmic decision-making systems based on these results.
On the Design of OLO Radio: Investigating Metadata as a Design Material
With the massive adoption of music streaming services globally, metadata is being generated that captures people’s music listening histories in more precise detail than ever before. These metadata archives offer a valuable and overlooked resource for designing new ways of supporting people in experiencing the music they have listened to over the course of their lives. Yet, little research has demonstrated how metadata can be applied as a material in design practice. We describe the design of OLO Radio, a device that leverages music listening history metadata to support experiences of exploring and living with music from one’s past. We unpack and reflect on design choices that made use of the exacting precision captured in listening history metadata archives to support relatively imprecise qualities of feedback and interaction to encourage rich, open-ended experiences of contemplation, curiosity, and enjoyment over time. We conclude with implications for HCI research and practice in this space.
Engage Early, Correct More: How Journalists Participate in False Rumors Online during Crisis Events
Journalists are struggling to adapt to new conditions of news production and simultaneously encountering criticism for their role in spreading misinformation. Against the backdrop of this “crisis in journalism”, this research seeks to understand how journalists are actually participating in the spread and correction of online rumors. We compare the engagement behaviors of journalists to those of non-journalists, and specifically of other high-visibility users, within five false rumors that spread on Twitter during three crisis events. Our findings show journalists engaging earlier than non-journalists in the spread and the correction of false rumors. However, compared to other users, journalists are (proportionally) more likely to deny false rumors. Journalists are also more likely to author original tweets and to be retweeted, underscoring their continued role in shaping the news. Interestingly, journalists scored high on “power user” measures, but were distinct from other power users in significant ways, e.g., by being more likely to deny rumors.
Measuring the “Why” of Interaction: Development and Validation of the User Motivation Inventory (UMI)
Motivation is a fundamental concept in understanding people’s experiences and behavior. Yet, motivation to engage with an interactive system has received only limited attention in HCI. We report the development and validation of the User Motivation Inventory (UMI). The UMI is an 18-item multidimensional measure of motivation, rooted in self-determination theory (SDT). It is designed to measure intrinsic motivation, integrated, identified, introjected, and external regulation, as well as amotivation. Results of two studies (total N = 941) confirm the six-factor structure of the UMI with high reliability, as well as convergent and discriminant validity of each subscale. Relationships with core concepts such as need satisfaction, vitality, and usability were studied. Additionally, the UMI was found to detect differences in motivation for people who consider abandoning a technology compared to those who do not question their use. The central role of motivation in users’ behavior and experience is discussed.
GestureWiz: A Human-Powered Gesture Design Environment for User Interface Prototypes
Designers and researchers often rely on simple gesture recognizers like Wobbrock et al.’s $1 for rapid user interface prototypes. However, most existing recognizers are limited to a particular input modality and/or a pre-trained set of gestures, and cannot be easily combined with other recognizers. In particular, creating prototypes that employ advanced touch and mid-air gestures still requires significant technical experience and programming skill. Inspired by $1’s easy, cheap, and flexible design, we present the GestureWiz prototyping environment, which provides designers with an integrated solution for gesture definition, conflict checking, and real-time recognition by employing human recognizers in a Wizard of Oz manner. We present a series of experiments with designers and crowds showing that GestureWiz can perform with reasonable accuracy and latency. We demonstrate the advantages of GestureWiz when recreating gesture-based interfaces from the literature and in a study with 12 interaction designers who prototyped a multimodal interface with support for a wide range of novel gestures in about 45 minutes.
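For context, the $1 recognizer that inspired GestureWiz matches a candidate stroke against stored templates after resampling, scaling, and translating the points; GestureWiz itself replaces this automatic step with human recognizers. The Python sketch below is a simplified, rotation-free variant of that template-matching idea, not GestureWiz's pipeline.

```python
import numpy as np

def normalize(points, n=64, size=250.0):
    """Simplified $1-style preprocessing: resample the stroke to n points by arc
    length, scale it to a reference square, and center it on the origin.
    (The full $1 recognizer also rotates strokes and searches over rotations.)"""
    pts = np.asarray(points, dtype=float)
    d = np.cumsum(np.r_[0, np.linalg.norm(np.diff(pts, axis=0), axis=1)])
    t = np.linspace(0, d[-1], n)
    pts = np.column_stack([np.interp(t, d, pts[:, 0]),
                           np.interp(t, d, pts[:, 1])])
    span = pts.max(axis=0) - pts.min(axis=0)
    pts = pts * (size / np.maximum(span, 1e-9))
    return pts - pts.mean(axis=0)

def recognize(candidate, templates):
    """Return the name of the template whose normalized points are closest
    (mean point-to-point distance) to the normalized candidate stroke.
    templates: dict mapping gesture name -> list of (x, y) points."""
    c = normalize(candidate)
    scores = {name: np.mean(np.linalg.norm(c - normalize(t), axis=1))
              for name, t in templates.items()}
    return min(scores, key=scores.get)
```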
In Search of the Dream Team: Temporally Constrained Multi-Armed Bandits for Identifying Effective Team Structures
Team structures—roles, norms, and interaction patterns—define how teams work. HCI researchers have theorized ideal team structures and built systems nudging teams towards them, such as those increasing turn-taking, deliberation, and knowledge distribution. However, organizational behavior research argues against the existence of universally ideal structures. Teams are diverse and excel under different structures: while one team might flourish under hierarchical leadership and a critical culture, another will flounder. In this paper, we present DreamTeam: a system that explores a large space of possible team structures to identify effective structures for each team based on observable feedback. To avoid overwhelming teams with too many changes, DreamTeam introduces multi-armed bandits with temporal constraints: an algorithm that manages the timing of exploration–exploitation trade-offs across multiple bandits simultaneously. A field experiment demonstrated that DreamTeam teams outperformed self-managing teams by 38%, manager-led teams by 46%, and teams with unconstrained bandits by 41%. This research advances computation as a powerful partner in establishing effective teamwork.
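As a rough illustration of the general idea, and not DreamTeam’s actual algorithm, the sketch below runs one epsilon-greedy bandit per team-structure dimension (e.g., hierarchy, feedback norms) and imposes a shared temporal constraint that spaces out structural changes so a team is not overwhelmed; all parameter values and names are hypothetical.

```python
import random

class TemporalBandit:
    """Epsilon-greedy bandit over the possible settings of one structure dimension."""
    def __init__(self, arms, epsilon=0.2):
        self.arms = list(arms)
        self.eps = epsilon
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}
        self.current = random.choice(self.arms)

    def propose(self):
        if random.random() < self.eps:
            return random.choice(self.arms)            # explore
        return max(self.values, key=self.values.get)   # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def step(bandits, rewards, last_change, t, min_gap=3):
    """Update each dimension's bandit with its observed reward, then allow at
    most one dimension to switch arms every `min_gap` time steps."""
    for dim, b in bandits.items():
        b.update(b.current, rewards[dim])
    for dim, b in bandits.items():
        proposal = b.propose()
        if proposal != b.current and t - last_change[0] >= min_gap:
            b.current = proposal
            last_change[0] = t
            break   # only one structural change per eligible step
    return {dim: b.current for dim, b in bandits.items()}

# Hypothetical usage: two structure dimensions, rewards observed each round.
bandits = {"hierarchy": TemporalBandit(["flat", "manager-led"]),
           "feedback": TemporalBandit(["critical", "supportive"])}
last_change = [0]
structure = step(bandits, {"hierarchy": 0.7, "feedback": 0.4}, last_change, t=1)
```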
Troubling Vulnerability: Designing with LGBT Young People’s Ambivalence Towards Hate Crime Reporting
HCI is increasingly working with ‘vulnerable’ people, yet there is a danger that the label of vulnerability can alienate and stigmatize the people such work aims to support. We report our study investigating the application of interaction design to increase rates of hate crime reporting amongst Lesbian, Gay, Bisexual and Transgender young people. During design-led workshops, participants expressed ambivalence towards reporting. While recognizing their exposure to hate crime, they simultaneously rejected being identified as a victim, as implied in the act of reporting. We used visual communication design to depict the young people’s ambivalent identities and contribute insights into how these depictions succeed and fail in accounting for the intersectional, fluid and emergent nature of LGBT identities through the design research process. We argue that by producing ambiguously designed texts alongside conventional outcomes, we ‘trouble’ our design research narratives as a tactic to disrupt static and reductive understandings of vulnerability within HCI.
PHUI-kit: Interface Layout and Fabrication on Curved 3D Printed Objects
We seek to make physical user interface (PHUI) design more like graphical user interface (GUI) design by using a drag-and-drop interface to place widgets, allowing widgets to be repositioned, and hiding implementation details. PHUIs are interfaces built from tangible widgets arranged on the surfaces of physical objects. PHUI layout will become more important as we move from rectangular screens to purpose-built interactive devices. Approaches to PHUI layout based on sculpture make it difficult to reposition widgets, and software approaches do not involve placing widgets on the device exterior. We created PHUI-kit, a software approach to PHUI layout on 3D printed enclosures, which has a drag-and-drop interface, supports repositioning of widgets, and hides implementation details. We describe algorithms for placing widgets on curved surfaces, modifying the enclosure geometry, and routing wiring inside the enclosure. The tool is easy to use and supports a wide range of design possibilities.
Crowd-Guided Ensembles: How Can We Choreograph Crowd Workers for Video Segmentation?
In this work, we propose two ensemble methods leveraging a crowd workforce to improve video annotation, with a focus on video object segmentation. Their shared principle is that while individual candidate results may likely be insufficient, they often complement each other so that they can be combined into something better than any of the individual results—the very spirit of collaborative working. For one, we extend a standard polygon-drawing interface to allow workers to annotate negative space, and combine the work of multiple workers instead of relying on a single best one as commonly done in crowdsourced image segmentation. For the other, we present a method to combine multiple automatic propagation algorithms with the help of the crowd. Such combination requires an understanding of where the algorithms fail, which we gather using a novel coarse scribble video annotation task. We evaluate our ensemble methods, discuss our design choices for them, and make our web-based crowdsourcing tools and results publicly available.
Phone vs. Tangible in Museums: A Comparative Study
Despite years of HCI research on digital technology in museums, it is still unclear how different interaction modes impact visitors’ experience. A comparative evaluation of smart replicas, a phone app and smart cards looked at personal preferences, behavioural change, and the appeal of mobiles in museums. 76 participants used all three interaction modes and gave their opinions in a questionnaire; participants’ interactions were also observed. The results show the phone is the most disliked interaction mode while tangible interaction (smart card and replica combined) is the most liked. Preferences for the phone favour mobility to the detriment of engagement with the exhibition. Different behaviours were observed when interacting with the phone and with the tangibles. The personal visiting style appeared to be only marginally affected by the device. Visitors also expect museums to provide the phones, against the current trend of developing apps in a “bring your own device” approach.
Evaluating User Satisfaction with Typography Designs via Mining Touch Interaction Data in Mobile Reading
Previous work has demonstrated that typography design has a great influence on users’ reading experience. However, current typography design guidelines are mainly for general purposes, while individual needs are largely ignored. To achieve personalized typography designs, an important and necessary step is accurately evaluating user satisfaction with the typography designs. Current evaluation approaches, e.g., asking for users’ opinions directly, however, interrupt the reading and affect users’ judgments. In this paper, we propose a novel method to address this challenge by mining users’ implicit feedback, e.g., touch interaction data. We conduct two mobile reading studies in Chinese to collect touch interaction data from 91 participants. We propose various features based on our three hypotheses to capture meaningful patterns in the touch behaviors. The experimental results show the effectiveness of our evaluation models, which achieve higher accuracy than the baseline under each of three text difficulty levels.
Keeping a Low Profile?: Technology, Risk and Privacy among Undocumented Immigrants
Undocumented immigrants in the United States face risks of discrimination, surveillance, and deportation. We investigate their technology use, risk perceptions, and protective strategies relating to their vulnerability. Through semi-structured interviews with Latinx undocumented immigrants, we find that while participants act to address offline threats, this vigilance does not translate to their online activities. Their technology use is shaped by needs and benefits rather than risk perceptions. While our participants are concerned about identity theft and privacy generally, and some raise concerns about online harassment, their understanding of government surveillance risks is vague and met with resignation. We identify tensions among self-expression, group privacy, and self-censorship related to their immigration status, as well as strong trust in service providers. Our findings have implications for digital literacy education, privacy and security interfaces, and technology design in general. Even minor design decisions can substantially affect exposure risks and well-being for such vulnerable communities.
Comparing Computer-Based Drawing Methods for Blind People with Real-Time Tactile Feedback
In this paper, we present a drawing workstation for blind people using a two-dimensional tactile pin-matrix display for input and output. Four different input modalities, namely menu-based, gesture-based, freehand-stylus and a Time-of-Flight (ToF) depth segmentation of real-world object silhouettes, are utilized to create graphical shapes. Users can freely manipulate shapes after creation. Twelve blind users evaluated and compared the four image creation modalities. During evaluation, participants had to copy four different images. The results show that all modalities are highly appropriate for non-visual drawing tasks. There is no generally preferred drawing modality, but most participants rated the robust and well-known menu-based interaction as very good. Furthermore, the menu-based modality was second in performance and the most accurate drawing modality. Our evaluation demonstrated that direct manipulation works well for blind users at the position of the reading hand. In general, our drawing tool allows blind users to create appealing images.
Enabling People with Visual Impairments to Navigate Virtual Reality with a Haptic and Auditory Cane Simulation
Traditional virtual reality (VR) mainly focuses on visual feedback, which is not accessible for people with visual impairments. We created Canetroller, a haptic cane controller that simulates white cane interactions, enabling people with visual impairments to navigate a virtual environment by transferring their cane skills into the virtual world. Canetroller provides three types of feedback: (1) physical resistance generated by a wearable programmable brake mechanism that physically impedes the controller when the virtual cane comes in contact with a virtual object; (2) vibrotactile feedback that simulates the vibrations when a cane hits an object or touches and drags across various surfaces; and (3) spatial 3D auditory feedback simulating the sound of real-world cane interactions. We designed indoor and outdoor VR scenes to evaluate the effectiveness of our controller. Our study showed that Canetroller was a promising tool that enabled visually impaired participants to navigate different virtual spaces. We discuss potential applications supported by Canetroller ranging from entertainment to mobility training.
Pulp Nonfiction: Low-Cost Touch Tracking for Paper
Paper continues to be a versatile and indispensable material in the 21st century. Of course, paper is a passive medium with no inherent interactivity, precluding us from computationally enhancing a wide variety of paper-based activities. In this work, we present a new technical approach for bringing the digital and paper worlds closer together, by enabling paper to track finger input and also drawn input with writing implements. Importantly, for paper to still be considered paper, our method had to be very low cost. This necessitated research into materials, fabrication methods and sensing techniques. We describe the outcome of our investigations and show that our method can be sufficiently low-cost and accurate to enable new interactive opportunities with this pervasive and venerable material.
Exploring New Metaphors for a Networked World through the File Biography
We present a body of work undertaken in response to the challenge outlined by Harper et al. in their paper, ‘What is a File?’ [9]. Through a conceptual and design-led exploration of new file metaphors, we developed the ‘file biography’, a digital entity that encompasses the provenance of a file and allows the user to keep track of how it propagates. We explored this through prototyping and utilised it in two user studies. In the studies, we (i) asked people to sketch out file biographies for their own content, and (ii) deployed a tool enabling users to build their own simple file biographies across multiple versions of Word documents. We conclude that new file metaphors may need to play different roles for different types of digital content, with a distinction being drawn between content that is ‘in production’ and virtual possessions that are, in a sense, a ‘finished’ artefact.
Understanding Older Users’ Acceptance of Wearable Interfaces for Sensor-based Fall Risk Assessment
Algorithms processing data from wearable sensors promise to more accurately predict risks of falling — a significant concern for older adults. Substantial engineering work is dedicated to increasing the prediction accuracy of these algorithms; yet fewer efforts are dedicated to better engaging users through interactive visualizations in decision-making using these data. We present an investigation of the acceptance of a sensor-based fall risk assessment wearable device. A participatory design was employed to develop a mobile interface providing visualizations of sensor data and algorithmic assessments of fall risks. We then investigated the acceptance of this interface and its potential to motivate behavioural changes through a field deployment, which suggested that the interface and its belt-mounted wearable sensors are perceived as usable. We also found that providing contextual information for fall risk estimation combined with relevant practical fall prevention instructions may facilitate the acceptance of such technologies, potentially leading to behaviour change.
How People Form Folk Theories of Social Media Feeds and What it Means for How We Study Self-Presentation
Self-presentation is a process that is significantly complicated by the rise of algorithmic social media feeds, which obscure information about one’s audience and environment. User understandings of these systems, and therefore user ability to adapt to them, are limited, and have recently been explored through the lens of folk theories. To date, little is understood of how these theories are formed, and how they tie to the self-presentation process in social media. This paper presents an exploratory look at the folk theory formation process and the interplay between folk theories and self-presentation via a 28-participant interview study. Results suggest that people draw from diverse sources of information when forming folk theories, and that folk theories are more complex, multifaceted and malleable than previously assumed. This highlights the need to integrate folk theories into both social media systems and theories of self-presentation.
Analogy Mining for Specific Design Needs
Finding analogical inspirations in distant domains is a powerful way of solving problems. However, as the number of inspirations that could be matched and the dimensions on which that matching could occur grow, it becomes challenging for designers to find inspirations relevant to their needs. Furthermore, designers are often interested in exploring specific aspects of a product; for example, one designer might be interested in improving the brewing capability of an outdoor coffee maker, while another might wish to optimize for portability. In this paper we introduce a novel system for targeting analogical search for specific needs. Specifically, we contribute an analogical search engine for expressing and abstracting specific design needs that returns more distant yet relevant inspirations than alternate approaches.
Measuring Employment Demand Using Internet Search Data
We are in a transitional economic period emphasizing automation of physical jobs and the shift towards intellectual labor. How can we measure and understand human behaviors of job search, and how communities are adapting to these changes? We use internet search data to estimate employment demand in the United States. Starting with 225 million raw job search queries in 2015 and 2016 from a popular search engine, we classify queries into one of 15 fields of employment with accuracy and F-1 of 97%, and use the resulting query volumes to estimate per-sector employment demand in U.S. counties. We validate against Bureau of Labor Statistics measures, and then demonstrate benefits for communities, showing significant differences in the types of jobs searched for across socio-economic dimensions like poverty and education level. We discuss implications for macroeconomic measurement, as well as how community leaders, policy makers, and the field of HCI can benefit.
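A query-to-sector classifier of the kind described could, in principle, be built with standard text-classification tooling. The sketch below is a hedged illustration using TF-IDF features and logistic regression over a handful of hypothetical labeled queries; it does not reproduce the paper’s model, features, or 15-field label set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: (job-search query, employment sector).
train = [("rn night shift openings", "healthcare"),
         ("forklift operator jobs near me", "transportation"),
         ("entry level java developer", "information_technology"),
         ("line cook hiring", "food_service")]
queries, sectors = zip(*train)

# Word and bigram TF-IDF features feeding a multinomial logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(queries, sectors)

print(model.predict(["warehouse forklift position"]))  # -> ['transportation']
```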
Data Illustrator: Augmenting Vector Design Tools with Lazy Data Binding for Expressive Visualization Authoring
Building graphical user interfaces for visualization authoring is challenging as one must reconcile the tension between flexible graphics manipulation and procedural visualization generation based on a graphical grammar or declarative languages. To better support designers’ workflows and practices, we propose Data Illustrator, a novel visualization framework. In our approach, all visualizations are initially vector graphics; data binding is applied when necessary and only constrains interactive manipulation to that data bound property. The framework augments graphic design tools with new concepts and operators, and describes the structure and generation of a variety of visualizations. Based on the framework, we design and implement a visualization authoring system. The system extends interaction techniques in modern vector design tools for direct manipulation of visualization configurations and parameters. We demonstrate the expressive power of our approach through a variety of examples. A qualitative study shows that designers can use our framework to compose visualizations.
Safety vs. Surveillance: What Children Have to Say about Mobile Apps for Parental Control
Mobile applications (“apps”) developed to promote online safety for children are underutilized and rely heavily on parental control features that monitor and restrict their child’s mobile activities. This asymmetry in parental surveillance initiates an interesting research question — how do children themselves feel about such parental control apps? We conducted a qualitative analysis of 736 reviews of 37 mobile online safety apps from Google Play that were publicly posted and written by children (ages 8-19). Our results indicate that child ratings were significantly lower than those of parents, with 76% of the child reviews giving apps a single star. Children felt that the apps were overly restrictive and invasive of their personal privacy, negatively impacting their relationships with their parents. We relate these findings to HCI literature on mobile online safety, including broader literature on privacy and surveillance, and outline design opportunities for online safety apps.
Co-performance: Conceptualizing the Role of Artificial Agency in the Design of Everyday Life
This paper introduces the notion of co-performance, with the aim to offer Human-Computer Interaction (HCI) researchers and practitioners a new perspective on the role of artificial agency in everyday life, from automated systems to autonomous devices. In contrast to ‘smartness,’ which focuses on a supposed autonomy of artifacts, co-performance considers artifacts as capable of learning and performing next to people. This shifts the locus of design from matters of distributions of agency at design time, to matters of embodied learning in everyday practice for both human and artificial performers. From this perspective, co-performance acknowledges the dynamic differences in capabilities between humans and artifacts, and highlights the fundamentally recursive relation between professional design and use. Implications for HCI design practice are unpacked through reflections on smart thermostat design in light of historic changes in roles between humans and heating systems, and changing ideas of appropriateness in everyday practices of domestic heating.
Social Influence and Reciprocity in Online Gift Giving
Giving gifts is a fundamental part of human relationships that is being affected by technology. The Internet enables people to give at the last minute and over long distances, and to observe friends giving and receiving gifts. How online gift giving spreads in social networks is therefore important to understand. We examine 1.5 million gift exchanges on Facebook and show that receiving a gift causes individuals to be 56% more likely to give a gift in the future. Additional surveys show that online gift giving was more socially acceptable to those who learned about it by observing friends’ participation instead of a non-social encouragement. Most receivers pay the gift forward instead of reciprocating directly online, although surveys revealed additional instances of direct reciprocity, where the initial gifting occurred offline. Thus, social influence promotes the spread of online gifting, which both complements and substitutes for offline gifting.
Time for Break: Understanding Information Workers’ Sedentary Behavior Through a Break Prompting System
Extended periods of uninterrupted sedentary behavior are detrimental to long-term health. While prolonged sitting is prevalent among information workers, it is difficult for them to break prolonged sedentary behavior due to the nature of their work. This work aims to understand information workers’ intentions & practices around standing or moving breaks. We developed Time for Break, a break prompting system that enables people to set their desired work duration and prompts them to stand up or move. We conducted an exploratory field study (N = 25) with Time for Break to collect participants’ work & break intentions and behaviors for three weeks, followed by semi-structured interviews. We examined rich contexts affecting participants’ receptiveness to standing or moving breaks, and identified how their habit strength and self-regulation are related to their break-taking intentions & practices. We discuss design implications for interventions to break up periods of prolonged sedentary behavior in workplaces.
Breaking the Tracking: Enabling Weight Perception using Perceivable Tracking Offsets
Virtual reality (VR) technology strives to enable a highly immersive experience for the user by including a wide variety of modalities (e.g. visuals, haptics). Current VR hardware, however, lacks a sufficient way of communicating the perception of weight of an object, resulting in scenarios where users cannot distinguish between lifting a bowling ball or a feather. We propose a solely software-based approach to simulating weight in VR by deliberately using perceivable tracking offsets. These tracking offsets nudge users to lift their arm higher and result in a visual and haptic perception of weight. We conducted two user studies showing that participants intuitively associated them with the sensation of weight and accepted them as part of the virtual world. We further show that, compared to no weight simulation, our approach led to significantly higher levels of presence, immersion and enjoyment. Finally, we report perceptual thresholds and offset boundaries as design guidelines for practitioners.
Remixed Reality: Manipulating Space and Time in Augmented Reality
We present Remixed Reality, a novel form of mixed reality. In contrast to classical mixed reality approaches where users see a direct view or video feed of their environment, with Remixed Reality they see a live 3D reconstruction, gathered from multiple external depth cameras. This approach enables changing the environment as easily as geometry can be changed in virtual reality, while allowing users to view and interact with the actual physical world as they would in augmented reality. We characterize a taxonomy of manipulations that are possible with Remixed Reality: spatial changes such as erasing objects; appearance changes such as changing textures; temporal changes such as pausing time; and viewpoint changes that allow users to see the world from different points without changing their physical location. We contribute a method that uses an underlying voxel grid holding information like visibility and transformations, which is applied to live geometry in real time.
Mapping Machine Learning Advances from HCI Research to Reveal Starting Places for Design Innovation
HCI has become particularly interested in using machine learning (ML) to improve user experience (UX). However, some design researchers claim that there is a lack of design innovation in envisioning how ML might improve UX. We investigate this claim by analyzing 2,494 related HCI research publications. Our review confirmed a lack of research integrating UX and ML. To help span this gap, we mined our corpus to generate a topic landscape, mapping out 7 clusters of ML technical capabilities within HCI. Among them, we identified 3 under-explored clusters that design researchers can dig into and create sensitizing concepts for. To help operationalize these technical design materials, our analysis then identified value channels through which the technical capabilities can provide value for users: self, context, optimal, and utility-capability. The clusters and the value channels collectively mark starting places for envisioning new ways for ML technology to improve people’s lives.
Pentelligence: Combining Pen Tip Motion and Writing Sounds for Handwritten Digit Recognition
Digital pens emit ink on paper and digitize handwriting. The range of the pen is typically limited to a special writing surface on which the pen’s tip is tracked. We present Pentelligence, a pen for handwritten digit recognition that operates on regular paper and does not require a separate tracking device. It senses the pen tip’s motions and sound emissions when stroking. Pen motions and writing sounds exhibit complementary properties. Combining both types of sensor data substantially improves the recognition rate. Hilbert envelopes of the writing sounds and mean-filtered motion data are fed to neural networks for majority voting. The results on a dataset of 9408 handwritten digits taken from 26 individuals show that motion+sound outperforms single-sensor approaches at an accuracy of 78.4% for 10 test users. Retraining the networks for a single writer on a dataset of 2120 samples increased the precision to 100% for single handwritten digits at an overall accuracy of 98.3%.
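The two preprocessing steps named in the abstract, Hilbert envelopes of the writing sounds and mean-filtered motion data, plus majority voting across per-network predictions, can be sketched as follows. This is a generic illustration under the assumption of 1-D NumPy arrays per stroke, not the authors’ code.

```python
import numpy as np
from scipy.signal import hilbert
from collections import Counter

def sound_envelope(audio):
    """Hilbert envelope of the writing-sound signal for one stroke."""
    return np.abs(hilbert(audio))

def mean_filter(motion, k=5):
    """Simple moving-average (mean) filter over one motion-sensor channel."""
    return np.convolve(motion, np.ones(k) / k, mode="same")

def majority_vote(predictions):
    """Combine per-network digit predictions, e.g. [7, 7, 1] -> 7."""
    return Counter(predictions).most_common(1)[0][0]
```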
Exploring Multimodal Watch-back Tactile Display using Wind and Vibration
A tactile display on the back of a smartwatch is an attractive output option; however, its channel capacity is limited owing to the small contact area. In order to expand the channel capacity, we considered using two perceptually distinct types of stimuli, wind and vibration, together on the same skin area. The result is a multimodal tactile display that combines wind and vibration to create “colored” tactile sensations on the wrist. As a first step toward this goal, we conducted four user experiments in this study with a wind-vibration tactile display to examine different ways of combining wind and vibration: Individual, Sequential, and Simultaneous. The results revealed that the sequential combination of wind and vibration exhibits the highest potential, with an information transfer capacity of 3.29 bits. In particular, the transition of tactile modality was perceived at an accuracy of 98.52%. The current results confirm the feasibility and potential of a multimodal tactile display combining wind and vibration.
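An information-transfer figure such as 3.29 bits is conventionally estimated as the mutual information of a stimulus-response confusion matrix. The sketch below shows that standard computation; it is the generic formula, not the authors’ analysis script.

```python
import numpy as np

def information_transfer(confusion):
    """confusion[i, j] = count of trials with stimulus i and response j."""
    n = confusion.sum()
    p = confusion / n                  # joint probabilities
    ps = p.sum(axis=1, keepdims=True)  # stimulus marginals
    pr = p.sum(axis=0, keepdims=True)  # response marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / (ps * pr)), 0.0)
    return terms.sum()

# Perfect identification of 10 equally frequent stimuli gives log2(10) ~ 3.32 bits.
print(information_transfer(np.eye(10) * 20))
```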
eKichabi: Information Access through Basic Mobile Phones in Rural Tanzania
This paper presents eKichabi, a tool for retrieving contact information for agriculture-related enterprises in Tanzania. eKichabi is an Unstructured Supplementary Service Data (USSD) application which users can access through basic mobile phones. We describe our focus groups, a design iteration, deployment in four villages, and follow-up interviews by phone. This work demonstrates the feasibility of USSD for information access applications that have the potential for deployment on a large scale in the developing world. From user interviews, we identified strong evidence of eKichabi fulfilling an unmet need for business-related information, both in identifying business contacts in other villages, as well as in locating specific service providers. One of our key findings demonstrates that users access information through multiple modes, including text search, in addition to menu navigation organized by both business sector category and geographic area.
Understanding Identity Presentation in Medical Crowdfunding
People desire to present themselves favorably to others. However, medical crowdfunding beneficiaries are often expected to present their dire medical conditions and financial straits to solicit financial support. To investigate how beneficiaries convey their situation on medical crowdfunding pages and how contributors perceive the presented information, we interviewed both medical crowdfunding beneficiaries and contributors. While beneficiaries emphasized the seriousness of their medical situations to signal their deservedness of support, contributor participants gave less attention to that content. Rather, they focused on their impression of the beneficiary’s character formed by various features of contributions, such as contributors’ names, messages, and shared pictures. These contribution features further signaled common connections between the beneficiary and contributors and each contributor’s unique involvement in the beneficiary’s medical journey. However, the contribution amount also led to judgements about other contributors. We suggest design opportunities and challenges in applying these results to the design of medical crowdfunding interfaces.
Pocket Transfers: Interaction Techniques for Transferring Content from Situated Displays to Mobile Devices
We present Pocket Transfers: interaction techniques that allow users to transfer content from situated displays to a personal mobile device while keeping the device in a pocket or bag. Existing content transfer solutions require direct manipulation of the mobile device, making interaction slower and less flexible. Our introduced techniques employ touch, mid-air gestures, gaze, and a multimodal combination of gaze and mid-air gestures. We evaluated the techniques in a novel user study (N=20), where we considered dynamic scenarios in which the user approaches the display, completes the task, and leaves. We show that all pocket transfer techniques are fast and seen as highly convenient. Mid-air gestures are the most efficient touchless method for transferring a single item, while the multimodal method is the fastest touchless method when multiple items are transferred. We provide guidelines to help researchers and practitioners choose the most suitable content transfer techniques for their systems.
SESSION: Paper Presentations
Understanding the Family Perspective on the Storage, Sharing and Handling of Family Civic Data
Across social care, healthcare and public policy, enabled by the “big data” revolution (which has normalized large-scale data-based decision-making), there are moves to “join up” citizen databases to provide care workers with holistic views of families they support. In this context, questions of personal data privacy, security, access, control and (dis-)empowerment are critical considerations for system designers and policy makers alike. To explore the family perspective on this landscape of what we call Family Civic Data, we carried out ethnographic interviews with four North-East families. Our design-game-based interviews were effective for engaging both adults and children to talk about the impact of this dry, technical topic on their lives. Our findings, delivered in the form of design guidelines, show support for dynamic consent: families would feel most empowered if involved in an ongoing co-operative relationship with state welfare and civic authorities through shared interaction with their data.
The Effects of Adding Search Functionality to Interactive Visualizations on the Web
The widespread use of text-based search in user interfaces has led designers in visualization to occasionally add search functionality to their creations. Yet it remains unclear how search may impact a person’s behavior. Given the unstructured context of the web, users may not have explicit information-seeking goals and designers cannot make assumptions about user attention. To bridge this gap, we observed the impact of integrating search with five visualizations across 830 online participants. In an unguided task, we find that (1) the presence of text-based search influences people’s information-seeking goals, (2) search can alter the data that people explore and how they engage with it, and (3) the effects of search are amplified in visualizations where people are familiar with the underlying dataset. These results suggest that text-search in web visualizations drives users towards more diverse information seeking goals, and may be valuable in a range of existing visualization designs.
Towards Design Principles for Visual Analytics in Operations Contexts
Operations engineering teams interact with complex data systems to make technical decisions that ensure the operational efficacy of their missions. To support these decision-making tasks, which may require elastic prioritization of goals dependent on changing conditions, custom analytics tools are often developed. We were asked to develop such a tool by a team at the NASA Jet Propulsion Laboratory, where rover telecom operators make decisions based on models predicting how much data rovers can transfer from the surface of Mars. Through research, design, implementation, and informal evaluation of our new tool, we developed principles to inform the design of visual analytics systems in operations contexts. We offer these principles as a step towards understanding the complex task of designing these systems. The principles we present are applicable to designers and developers tasked with building analytics systems in domains that face complex operations challenges such as scheduling, routing, and logistics.
Displaying Invisible Objects: Why People Rarely Re-read E-books
This study of paper and e-books investigates how specific affordances of physical and digital objects relate to people’s valuations and uses of those objects over time. We found that while the visibility of paper books amplified the meaningfulness of organizational and display actions taken with regards to those objects, the systems that supported interactions with e-books instead tended to make such actions less meaningful. We argue that these systems also discouraged re-uses of e-books for most participants — the important exceptions being several participants who used the book-focused social networking site Goodreads. This paper details how the affordances and limitations that resulted from the material constructions of paper and e-books impacted participants’ uses of and feelings towards those objects, and examines the implications of using a supplementary online system for displaying digital objects.
A Visual Interaction Cue Framework from Video Game Environments for Augmented Reality
Based on an analysis of 49 popular contemporary video games, we develop a descriptive framework of visual interaction cues in video games. These cues are used to inform players what can be interacted with, where to look, and where to go within the game world. These cues vary along three dimensions: the purpose of the cue, the visual design of the cue, and the circumstances under which the cue is shown. We demonstrate that this framework can also be used to describe interaction cues for augmented reality applications. Beyond this, we show how the framework can be used to generatively derive new design ideas for visual interaction cues in augmented reality experiences.
HARK No More: On the Preregistration of CHI Experiments
Experimental preregistration is required for publication in many scientific disciplines and venues. When experimental intentions are preregistered, reviewers and readers can be confident that experimental evidence in support of reported hypotheses is not the result of HARKing, which stands for Hypothesising After the Results are Known. We review the motivation and outcomes of experimental preregistration across a variety of disciplines, as well as previous work commenting on the role of evaluation in HCI research. We then discuss how experimental preregistration could be adapted to the distinctive characteristics of Human-Computer Interaction empirical research, to the betterment of the discipline.
Designing Coherent Gesture Sets for Multi-scale Navigation on Tabletops
Multi-scale navigation interfaces were originally designed to enable single users to explore large visual information spaces on desktop workstations. These interfaces can also be quite useful on tabletops. However, their adaptation to co-located multi-user contexts is not straightforward. The literature describes different interfaces that offer only a limited subset of navigation actions. In this paper, we first identify a comprehensive set of actions to effectively support multi-scale navigation. We report on a guessability study in which we elicited user-defined gestures for triggering these actions, showing that there is no natural design solution, but that users heavily rely on the now-ubiquitous slide, pinch and turn gestures. We then propose two interface designs based on this set of three basic gestures: one involves two-hand variations on these gestures, the other combines them with widgets. A comparative study suggests that users can easily learn both, and that the gesture-based, visually-minimalist design is a viable option that saves display space for other controls.
HindSight: Enhancing Spatial Awareness by Sonifying Detected Objects in Real-Time 360-Degree Video
Our perception of our surrounding environment is limited by the constraints of human biology. The field of augmented perception asks how our sensory capabilities can be usefully extended through computational means. We argue that spatial awareness can be enhanced by exploiting recent advances in computer vision which make high-accuracy, real-time object detection feasible in everyday settings. We introduce HindSight, a wearable system that increases spatial awareness by detecting relevant objects in live 360-degree video and sonifying their position and class through bone conduction headphones. HindSight uses a deep neural network to locate and attribute semantic information to objects surrounding a user through a head-worn panoramic camera. It then uses bone conduction headphones, which preserve natural auditory acuity, to transmit audio notifications for detected objects of interest. We develop an application using HindSight to warn cyclists of approaching vehicles outside their field of view and evaluate it in an exploratory study with 15 users. Participants reported increases in perceived safety and awareness of approaching vehicles when using HindSight.
Uncertainty Displays Using Quantile Dotplots or CDFs Improve Transit Decision-Making
Everyday predictive systems typically present point predictions, making it hard for people to account for uncertainty when making decisions. Evaluations of uncertainty displays for transit prediction have assessed people’s ability to extract probabilities, but not the quality of their decisions. In a controlled, incentivized experiment, we had subjects decide when to catch a bus using displays with textual uncertainty, uncertainty visualizations, or no uncertainty (control). Frequency-based visualizations previously shown to allow people to better extract probabilities (quantile dotplots) yielded better decisions. Decisions with quantile dotplots with 50 outcomes were (1) better on average, having expected payoffs 97% of optimal (95% CI: [95%, 98%]), 5 percentage points more than control (95% CI: [2, 8]); and (2) more consistent, having a within-subject standard deviation of 3 percentage points (95% CI: [2, 4]), 4 percentage points less than control (95% CI: [2, 6]). Cumulative distribution function plots performed nearly as well, and both outperformed textual uncertainty, which was sensitive to the probability interval communicated. We discuss implications for real-time transit predictions and possible generalization to other domains.
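To make the decision setup concrete, the sketch below shows how a 50-outcome quantile dotplot discretizes a predictive distribution of bus arrival times, and how an expected payoff for a candidate departure time could be scored against those outcomes. The distribution and cost parameters are invented for illustration and are not the study’s experimental values.

```python
import numpy as np
from scipy import stats

# Hypothetical predictive model of bus arrival time in minutes.
pred = stats.lognorm(s=0.4, scale=12)

# A 50-outcome quantile dotplot: 50 equally likely outcomes, one dot each.
quantiles = pred.ppf((np.arange(50) + 0.5) / 50)

def expected_payoff(leave_at, outcomes, wait_cost=1.0, miss_cost=30.0):
    """Average payoff (negative cost) over the dot outcomes: waiting costs
    accrue if the bus arrives after you do; a fixed penalty applies if it
    already came. All costs are illustrative, not the experiment's payoffs."""
    costs = np.where(outcomes >= leave_at,
                     (outcomes - leave_at) * wait_cost,
                     miss_cost)
    return -costs.mean()

best = max(np.arange(0, 25, 0.5), key=lambda t: expected_payoff(t, quantiles))
print(best, expected_payoff(best, quantiles))
```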
What to Put on the User: Sensing Technologies for Studies and Physiology Aware Systems
Fitness trackers not only provide an easy means to acquire physiological data in real-world environments thanks to affordable sensing technologies; they also offer opportunities for physiology-aware applications and studies in HCI. However, their performance is not well understood. In this paper, we report findings on the quality of three sensing technologies: PPG-based wrist trackers (Apple Watch, Microsoft Band 2), an ECG belt (Polar H7) and a reference device with stick-on ECG electrodes (Nexus 10). We collected physiological (heart rate, electrodermal activity, skin temperature) and subjective data from 21 participants performing combinations of physical activity and stressful tasks. Our empirical research indicates that wrist devices provide good sensing performance in stationary settings. However, they lack accuracy when participants are mobile or when tasks require physical activity. Based on our findings, we suggest a Design Space for Wearables in Research Settings and reflect on the appropriateness of the investigated technologies in research contexts.
Lost in Migration: Information Management and Community Building in an Online Health Community
The ever-growing volume of information within online health communities (OHCs) presents an urgent need for new solutions that improve the efficiency of information organization and retrieval for their members. To meet this need, OHCs may choose to adopt off-the-shelf platforms that provide novel features for information management, but were not specifically designed to meet these communities’ needs. The questions remain, however, as to the impact of these new platforms on social dynamics within OHCs and their well-being. To examine these questions, we qualitatively studied a migration of a popular OHC, focusing on diabetes self-management, between two off-the-shelf social computing platforms. Despite improving information management, the migration served as a catalyst to reveal the importance of features for identity management and closed circle communication that were not apparent to either the management or the membership of the community. We describe the study and draw implications for research and design for OHCs.
On Visual Granularity: Collocated Sales Meeting Interactions in the Machine Industry
Visual representations are being used in typical sales meetings of the machine industry to exchange information and support social interactions. In these meetings, sales representatives design for granularity by taking into account verbal and visual details of communication. Our article builds on increasingly common collocated interactions in sales meetings, investigating the social relevance of mobile devices in face-to-face settings. The article aims to understand the supporting and disturbing role of visual granularity in sales meetings and develops design implications for interaction designers. We conducted an ethnographic study of sales meetings in the material handling and paper machine industries, including Conversation Analysis (CA) of video recordings and involving groups of professional analysts that are seldom used in HCI. Our findings draw evidence from sales meetings and design processes on successful and unsuccessful uses of granularity in visual representations. Finally, we propose seven design guidelines for visual granularity, striving to understand buyers’ perceptions and visual qualities.
Distance and Attraction: Gravity Models for Geographic Content Production
Volunteered Geographic Information (VGI), such as contributions to OpenStreetMap and geotagged Wikipedia articles, is often assumed to be produced locally. However, recent work has found that peer-produced VGI is frequently contributed by non-locals. We evaluate gravity models, which relate content “flows” between places to distance and attraction, across hundreds of content types from Wikipedia, OpenStreetMap, and eBird, and show that these models can describe more than 90% of “VGI flows” for some content types. Our findings advance geographic HCI theory, suggesting some spatial mechanisms underpinning VGI production. We also discuss design implications that can help (a) human and algorithmic consumers of VGI evaluate the perspectives it contains and (b) address geographic coverage variations in these platforms (e.g. via more effective volunteer recruitment strategies).
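For readers unfamiliar with gravity models, the sketch below fits the standard log-linear form, in which flows between an origin and a destination scale with the “masses” of both regions and decay with distance. It is a generic formulation, not the paper’s fitted model or data; all variable names are illustrative.

```python
import numpy as np

def fit_gravity(flows, mass_origin, mass_dest, distance):
    """All inputs are 1-D arrays over observed (i, j) region pairs with flow > 0.
    Fits log F = k + alpha*log(m_i) + beta*log(m_j) + c*log(d_ij) by ordinary
    least squares; the distance coefficient c is expected to be negative,
    and gamma = -c is the distance-decay exponent."""
    X = np.column_stack([np.ones(len(flows)),
                         np.log(mass_origin),
                         np.log(mass_dest),
                         np.log(distance)])
    coef, *_ = np.linalg.lstsq(X, np.log(flows), rcond=None)
    k, alpha, beta, c = coef
    return {"k": k, "alpha": alpha, "beta": beta, "gamma": -c}
```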
MatchSticks: Woodworking through Improvisational Digital Fabrication
Digital fabrication tools have broadened participation in making and enabled new methods of rapid physical prototyping across diverse materials. We present a novel smart tool designed to complement one of the first materials employed by humans – wood – and celebrate the fabrication practice of joinery. Our tool, MatchSticks, is a digital fabrication system tailored for joinery. Combining a portable CNC machine, touchscreen user interface, and parametric joint library, MatchSticks enables makers of varying skill to rapidly explore and create artifacts from wood. Our system embodies tacit woodworking knowledge and distills the distributed workflow of CNC tools into a hand tool; it operates on materials existing machines find difficult, produces assemblies much larger than its workspace, and supports the parallel creation of geometries. We describe the workflow and technical details of our system, present example artifacts produced by our tool, and report results from our user study.
Visuo-Haptic Illusions for Improving the Perceived Performance of Shape Displays
In this work, we utilize visuo-haptic illusions to improve the perceived performance of encountered-type haptic devices, specifically shape displays, in virtual reality. Shape displays are matrices of actuated pins that travel vertically to render physical shapes; however, they have limitations such as low resolution, small display size, and low pin speed. To address these limitations, we employ illusions such as redirection, scaling, and retargeting that take advantage of the visual dominance effect, the idea that vision often dominates when senses conflict. Our evaluation of these techniques suggests that redirecting sloped lines with angles less than 40 degrees onto a horizontal line is an effective technique for increasing the perceived resolution of the display. Scaling up the virtual object onto the shape display by a factor of less than 1.8x can also increase the perceived resolution. Finally, using vertical redirection, a perceived 3x speed increase can be achieved.
A Schnittmuster for Crafting Context-Sensitive Toolkits
DIY-making can be an expensive pastime if makers are relying on ready-made toolkits, specialised materials and off-the-shelf components. Many prefabricated commercial kits seek to lower the learning barrier of making and to help beginners successfully take their first steps in engineering. However, as soon as the novices become a little more advanced, these toolkits often no longer fit the specific requirements of personal maker projects. We introduce the idea of a Schnittmuster (or a meta-toolkit) as a novel approach to toolkit design that seeks to address these creativity-limiting factors as well as practical entrance hurdles. To demonstrate the adaptive power of the Schnittmuster concept, we discuss an exemplar in the context of capacitive touch sensing (FlexE-Touch). Implemented under the constraints of materials, user skill sets and making environments, we illustrate how the Schnittmuster facilitated four cheap and flexible toolkit instantiations for crafting custom touch sensor electrodes.
Repurposing Emoji for Personalised Communication: Why ? means “I love you”
The use of emoji in digital communication can convey a wealth of emotions and concepts that otherwise would take many words to express. Emoji have become a popular form of communication, with researchers claiming emoji represent a type of “ubiquitous language” that can span different languages. In this paper however, we explore how emoji are also used in highly personalised and purposefully secretive ways. We show that emoji are repurposed for something other than their “intended” use between close partners, family members and friends. We present the range of reasons why certain emoji get chosen, including the concept of “emoji affordance” and explore why repurposing occurs. Normally used for speed, some emoji are instead used to convey intimate and personal sentiments that, for many reasons, their users cannot express in words. We discuss how this form of repurposing must be considered in tasks such as emoji-based sentiment analysis.
More Stars or More Reviews?
The large majority of reputation systems use features such as star ratings and reviews to give users a reputation in online peer-to-peer markets. Both have been shown to be effective for signaling trustworthiness. However, the exact extent to which these features can change perceptions of users’ trustworthiness remains an open question. Using data from an online experiment conducted on Airbnb users, we investigate which of the two types of reputation information, average star rating or the number of reviews, is more important for signaling a user’s trustworthiness. We find that the relative effectiveness of ratings and reviews differs depending on whether reputation has a strong or a weak differentiation power. Our findings show that reputation effects are contingent on and susceptible to the context created by the alternative choices presented to users, highlighting that how reputation information is displayed can drastically alter its efficacy for engendering trust.
Self-Reflection and Personal Physicalization Construction
Self-reflection is a central goal of personal informatics systems, and constructing visualizations from physical tokens has been found to help people reflect on data. However, so far, constructive physicalization has only been studied in lab environments with provided datasets. Our qualitative study investigates the construction of personal physicalizations in people’s domestic environments over 2-4 weeks. It contributes an understanding of (1) the process of creating personal physicalizations, (2) the types of personal insights facilitated, (3) the integration of self-reflection in the physicalization process, and (4) its benefits and challenges for self-reflection. We found that in constructive personal physicalization, data collection, construction and self-reflections are deeply intertwined. This extends previous models of visualization creation and data-driven self-reflection. We outline how benefits such as reflection through manual construction, personalization, and presence in everyday life can be transferred to a wider set of digital and physical systems.
G2G: The Design and Evaluation of a Shared Calendar and Messaging System for Grandparents and Grandchildren
Distance-separated grandparents and grandchildren often face challenges in staying connected. To explore this topic, we designed G2G, a shared calendar and video messaging system to connect young children (ages 5-10) with their grandparents over distance. Our design focused on providing grandparents and grandchildren with an awareness of each other’s lives to support conversations, and on design elements to help reduce the need for parent scaffolding. A field study with two grandparent-grandchild pairs over two months showed that systems designed around structured communication can help young children develop a routine around staying in touch with their remote grandparents. Autonomy in maintaining awareness can help children to be engaged more easily. This suggests that designs focusing on connecting young children to their grandparents over distance should be flexible yet structured, and that designing to reduce parental scaffolding can lead to positive effects and strengthened relationships.
Playing Close to Home: Interaction and Emerging Play in Outdoor Play Installations
Outdoor play is becoming an increasingly marginalised activity in the urban landscape. Even in HCI, research on interactive solutions for outdoor play has largely been limited to special areas and in particular playgrounds. But children play everywhere, and especially play close to home is central in children’s play activities. In this article we draw upon knowledge about designing for children’s play in interaction design as well as in landscape architecture, to study how interactive play installations can be integrated in outdoor environments of a residential area. We present a field study in which digitally enhanced play installations were installed, in dialogue with the landscape, in between the buildings of a residential area. We focus on how emerging play activities made use of the installations as well as of the surrounding landscape in expected as well as unexpected ways. Based on the observations, we discuss how residential play is special, and how this affects how to design for it.
CFar: A Tool to Increase Communication, Productivity, and Review Quality in Collaborative Code Reviews
Collaborative code review has become an integral part of the collaborative design process in the domain of software development. However, there are well-documented challenges and limitations to collaborative code review—for instance, high-quality code reviews may require significant time and effort for the programmers, whereas faster, lower-quality reviews may miss code defects. To address these challenges, we introduce CFar, a novel tool design for extending collaborative code review systems with an automated code reviewer whose feedback is based on program-analysis technologies. To validate this design, we implemented CFar as a production-quality tool and conducted a mixed-method empirical evaluation of the tool usage at Microsoft. Through the field deployment of our tool and a laboratory study of professional programmers using the tool, we produced several key findings showing that CFar enhances communication, productivity, and review quality in human–human collaborative code review.
Announcing Pregnancy Loss on Facebook: A Decision-Making Framework for Stigmatized Disclosures on Identified Social Network Sites
Pregnancy loss is a common experience that is often not disclosed in spite of potential disclosure benefits such as social support. To understand how and why people disclose pregnancy loss online, we interviewed 27 women in the U.S. who are social media users and had recently experienced pregnancy loss. We developed a decision-making framework explaining pregnancy loss disclosures on identified social network sites (SNS) such as Facebook. We introduce network-level reciprocal disclosure, a theory of how disclosure reciprocity, usually applied to understand dyadic exchanges, can operate at the level of a social network to inform decision-making about stigmatized disclosures in identified SNSs. We find that 1) anonymous disclosures on other sites help facilitate disclosure on identified sites (e.g., Facebook), and 2) awareness campaigns enable sharing about pregnancy loss for many who would not disclose otherwise. Finally, we discuss conceptual and design implications. CAUTION: This paper includes quotes about pregnancy loss.
Conceptualizing Disagreement in Qualitative Coding
Collaborative qualitative coding often involves coders assigning different labels to the same instance, leading to ambiguity. We refer to such an instance of ambiguity as disagreement in coding. Analyzing reasons for such a disagreement is essential, both for purposes of bolstering user understanding gained from coding and reinterpreting the data collaboratively, and for negotiating user-assigned labels for building effective machine learning models. We propose a conceptual definition of collective disagreement using diversity and divergence within the coding distributions. This perspective of disagreement translates to diverse coding contexts and groups of coders irrespective of discipline. We introduce two tree-based ranking metrics as standardized ways of comparing disagreements in how data instances have been coded. We empirically validate that, of the two tree-based metrics, coders’ perceptions of disagreement match more closely with the n-ary tree metric than with the post-traversal tree metric.
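As a simple, concrete baseline for the notion of collective disagreement, the sketch below scores one coded instance by the entropy of its label distribution. It is illustrative only; the paper’s n-ary and post-traversal tree metrics additionally account for hierarchical code structure, which this baseline ignores.

```python
import math
from collections import Counter

def disagreement(labels):
    """labels: codes assigned to one instance by different coders.
    Returns Shannon entropy of the label distribution:
    0.0 = full agreement; higher = more diverse/divergent coding."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(disagreement(["barrier", "barrier", "barrier"]))   # 0.0
print(disagreement(["barrier", "motivation", "habit"]))  # ~1.585
```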
Intermodulation: Improvisation and Collaborative Art Practice for HCI
This paper integrates theory, ethnography, and collaborative artwork to explore improvisational activity as both topic and tool of multidisciplinary HCI inquiry. Building on theories of improvisation drawn from art, music, HCI and social science, and two ethnographic studies based on interviews, participant observation and collaborative art practice, we seek to elucidate the improvisational nature of practice in both art and ordinary action, including human-computer interaction. We identify five key features of improvisational action — reflexivity, transgression, tension, listening, and interdependence — and show how these can deepen and extend both linear and open-ended methodologies in HCI and design. We conclude by highlighting collaborative engagement based on ‘intermodulation’ as a tool of multidisciplinary inquiry for HCI research and design.
OptiMo: Optimization-Guided Motion Editing for Keyframe Character Animation
The mission of animators is to create nuanced, high-quality character motions. To achieve this, the careful editing of animation curves—curves that determine how a series of keyframed poses are interpolated over time—is an important task. Manual editing affords full and precise control, but requires tedious and nonintuitive trial and error. Numerical optimization can automate such exploration; however, automatic solutions are not always perfect, and it is difficult for animators to control optimization owing to its black-box behavior. In this paper, we present a new framework called optimization-guided motion editing, which is aimed at maintaining a sense of full control while utilizing the power of optimization. We have designed interactions and developed a set of mathematical formulations to enable them. We discuss the framework’s potential by demonstrating several usage scenarios with our proof-of-concept system, named OptiMo.
Medley: A Library of Embeddables to Explore Rich Material Properties for 3D Printed Objects
In our everyday life, we interact with and benefit from objects with a wide range of material properties. In contrast, personal fabrication machines (e.g., desktop 3D printers) currently only support a much smaller set of materials. Our goal is to close the gap between current limitations and the future of multi-material printing by enabling people to explore reusing material from everyday objects in their custom designs. To achieve this, we develop a library of embeddables: everyday objects that can be cut, worked, and embedded into 3D printable designs. We describe a design space that characterizes the geometric and material properties of embeddables. We then develop Medley—a design tool whereby users can import a 3D model, search for embeddables with desired material properties, and interactively edit and integrate their geometry to fit into the original design. Medley also supports the final fabrication and embedding process, including instructions for carving or cutting the objects, and generating optimal paths for inserting embeddables. To validate the expressiveness of our library, we showcase numerous examples augmented by embeddables that go beyond the objects’ original printed materials.
GeoCoin: Supporting Ideation and Collaborative Design with Smart Contracts
Design and HCI researchers are increasingly working with complex digital infrastructures, such as cryptocurrencies, distributed ledgers and smart contracts. These technologies will have a profound impact on digital systems and their audiences. However, given their emergent nature and technical complexity, involving non-specialists in the design of applications that employ these technologies is challenging. In this paper, we discuss these challenges and present GeoCoin, a location-based platform for embodied learning and speculative ideating with smart contracts. In collaborative workshops with GeoCoin, participants engaged with location-based smart contracts, using the platform to explore digital ‘debit’ and ‘credit’ zones in the city. These exercises led to the design of diverse distributed-ledger applications, for time-limited financial unions, participatory budgeting, and humanitarian aid. These results contribute to the HCI community by demonstrating how an experiential prototype can support understanding of the complexities behind new digital infrastructures and facilitate participant engagement in ideation and design processes.
Evaluating Attack and Defense Strategies for Smartphone PIN Shoulder Surfing
We evaluate the efficacy of shoulder surfing defenses for PIN-based authentication systems. We find tilting the device away from the observer, a widely adopted defense strategy, provides limited protection. We also evaluate a recently proposed defense incorporating an “invisible pressure component” into PIN entry. Contrary to earlier claims, our results show this provides little defense against malicious insider attacks. Observations during the study uncover successful attacker strategies for reconstructing a victim’s PIN when faced with a tilt defense. Our evaluations identify common misconceptions regarding shoulder surfing defenses, and highlight the need to educate users on how to safeguard their credentials from these attacks.
VR-OOM: Virtual Reality On-rOad driving siMulation
Researchers and designers of in-vehicle interactions and interfaces currently have to choose between performing evaluation and human factors experiments in laboratory driving simulators or on-road experiments. To enjoy the benefit of customizable course design in controlled experiments with the immediacy and rich sensations of on-road driving, we have developed a new method and tools to enable VR driving simulation in a vehicle as it travels on a road. In this paper, we describe how the cost-effective and flexible implementation of this platform allows for rapid prototyping. A preliminary pilot test (N = 6), centered on an autonomous driving scenario, yields promising results, illustrating proof of concept and indicating that a basic implementation of the system can invoke genuine responses from test participants.
KickAR: Exploring Game Balancing Through Boosts and Handicaps in Augmented Reality Table Football
When player skill levels are not matched, games provide an unsatisfying player experience. Player balancing is used across many digital game genres to address this, but has not been studied for co-located augmented reality (AR) tabletop games, where boosts and handicaps can adjust for different player skill levels. In the setting of an AR table football game, we studied whether game balancing should be triggered by the game system or by the players, and whether player skill should be required to trigger game balancing. We implemented projected icons to prominently display game balancing mechanics in the AR table football game. In a within-subjects study (N=24), we found that players prefer skill-based control over game balancing and that different triggers are perceived as differing in fairness. Further, the study showed that even game balancing that is perceived as unfair can provide enjoyable game experiences. Based on our findings, we provide suggestions for player balancing in AR tabletop games.
Digital Joinery For Hybrid Carpentry
The craft of carpentry relies on joinery: the connections between pieces of wood to create multipart structures. In traditional woodworking, joints are limited by the manual chisel skills of the craftsperson, or by the capabilities of the machines, which favor 90° or 180° angle joints with no more than two elements. We contribute an interactive design process in which joints are generated digitally to allow for unrestricted beam connectors, then produced from Nylon-12 using selective laser sintering (SLS) 3D printing. We present our Generative Joinery Design Tool and demonstrate our system on a selection of stools. The paper exemplifies the potential of Digital Joinery to enhance carpentry by incorporating a hybrid and interactive level of design sophistication and affordances that are very hard to achieve with traditional skills and tools.
What Makes an Automated Vehicle a Good Driver?
An automated vehicle needs to learn how human road users experience the intentions of other drivers, and to understand how they communicate with each other, in order to avoid misunderstandings and to avoid projecting a negative image during interactions. The aim of the present study is to identify a cooperative lane change indication that other drivers understand unambiguously and prefer for lane change announcements in dense highway traffic. A fixed-base driving simulator study is conducted with N = 66 participants in Germany in a car-following scenario. Participants rated, from the lag driver’s perspective, different lane change announcements of another driver which varied in lateral movement (i.e., duration, lateral offset). Main findings indicate that a medium lateral offset and a moderate duration of lateral movement are experienced as most cooperative. The results are crucial for the development of lane change strategies for automated vehicles.
Feel My Pain: Design and Evaluation of Painpad, a Tangible Device for Supporting Inpatient Self-Logging of Pain
Monitoring patients’ pain is a critical issue for clinical caregivers, particularly among staff responsible for providing analgesic relief. However, collecting regularly scheduled pain readings from patients can be difficult and time-consuming for clinicians. In this paper we present Painpad, a tangible device that was developed to allow patients to engage in self-logging of their pain. We report findings from two hospital-based field studies in which Painpad was deployed to a total of 78 inpatients recovering from ambulatory surgery. We find that Painpad improves the frequency of and compliance with pain logging, and that self-logged scores may be more faithful to patients’ experienced pain than corresponding scores reported to nurses. We also show that older adults may prefer tangible interfaces over tablet-based alternatives for reporting their pain, and we contribute design lessons for pain logging devices intended for use in hospital settings.
Challenges and Opportunities for Technology-Supported Activity Reporting in the Workplace
Effective communication of activities and progress in the workplace is crucial for the success of many modern organizations. In this paper, we extend current research on workplace communication and uncover opportunities for technology to support effective work activity reporting. We report on three studies: With a survey of 68 knowledge workers followed by 14 in-depth interviews, we investigated the perceived benefits of different types of progress reports and an array of challenges at three stages: Collection, Composition, and Delivery. We show an important interplay between written and face-to-face reporting, and highlight the importance of tailoring a report to its audience. We then present results from an analysis of 722 reports composed by 361 U.S.-based knowledge workers, looking at the influence of the audience on a report’s language. We conclude by discussing opportunities for future technologies to assist both employees and managers in collecting, interpreting, and reporting progress in the workplace.
Supporting Meaningful Personal Fitness: the Tracker Goal Evolution Model
While the number of users sporting fitness trackers is constantly increasing, little is understood about how tracking goals can evolve over time. As recent studies have shown that the long-term health effects of trackers are limited, we need to readdress how trackers engage users. We conducted semi-structured interviews and an online survey to explore how users change their tracking goals. Based on our results, we created the Tracker Goal Evolution Model. The model describes how tracker goals can evolve from internal user needs through qualitative goals to quantitative goals that can be used with trackers. It also includes trust and reflection as key contextual factors contributing to meaningful transitions between goals. We posit that showing users how tracker goals relate to their other personal fitness goals is key to long-term engagement with trackers. Our model is useful for designers of future trackers as a tool to create evolving and meaningful tracking goals.
Everybody’s Hacking: Participation and the Mainstreaming of Hackathons
Hackathons have become a popular tool for bringing people together to imagine new possibilities for technology. Despite originating in technology communities, hackathons have now been widely adopted by a broad range of organisations. This mainstreaming of hackathons means they encompass a very different range of attendees and activities than they once did, to the extent that some events billed as hackathons may involve no coding at all. Given this shift away from production of code, they might instead be seen as an increasingly popular participatory design activity, from which designers and researchers in HCI can learn. Through fieldwork at six hackathons that targeted non-technical communities, we identify the types of activities and contributions that emerge through these events and the barriers and tensions that might exist. In doing so, we contribute a greater understanding of hackathons as a growing phenomenon and as a potential tool for participatory research.
Pointing at a Distance with Everyday Smart Devices
Large displays are becoming commonplace at work, at home, or in public areas. However, interaction at a distance — anything greater than arms-length — remains cumbersome, restricts simultaneous use, and requires specific hardware augmentations of the display: touch layers, cameras, or dedicated input devices. Yet a rapidly increasing number of people carry smartphones and smartwatches, devices with rich input capabilities that can easily be used as input devices to control interactive systems. We contribute (1) the results of a survey on possession and use of smart devices, and (2) the results of a controlled experiment comparing seven distal pointing techniques on phone or watch, one- and two-handed, and using different input channels and mappings. Our results favor using a smartphone as a trackpad, but also explore performance tradeoffs that can inform the choice and design of distal pointing techniques for different contexts of use.
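Of the techniques compared, the favored one treats the phone as a trackpad. As a rough sketch of what that mapping involves, the code below applies a constant control-display gain to touch deltas and clamps the cursor to the display; the class name, gain value, and units are illustrative assumptions rather than the parameters used in the study.

```python
class TrackpadPointer:
    """Relative, trackpad-style mapping from phone touch deltas to a distant display cursor."""

    def __init__(self, display_w, display_h, cd_gain=3.0):
        self.display_w, self.display_h = display_w, display_h
        self.cd_gain = cd_gain                          # illustrative constant control-display gain
        self.x, self.y = display_w / 2, display_h / 2   # cursor starts at the display center

    def on_touch_move(self, dx, dy):
        """dx, dy: finger displacement reported by the phone; returns the new cursor position."""
        self.x = min(max(self.x + self.cd_gain * dx, 0), self.display_w)
        self.y = min(max(self.y + self.cd_gain * dy, 0), self.display_h)
        return self.x, self.y

pointer = TrackpadPointer(display_w=3840, display_h=2160)
print(pointer.on_touch_move(10, -5))   # cursor moves 30 px right and 15 px up from center
```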
The Story in the Notebook: Exploratory Data Science using a Literate Programming Tool
Literate programming tools are used by millions of programmers today, and are intended to facilitate presenting data analyses in the form of a narrative. We interviewed 21 data scientists to study coding behaviors in a literate programming environment and how data scientists kept track of variants they explored. For participants who tried to keep a detailed history of their experimentation, both informal and formal versioning attempts led to problems, such as reduced notebook readability. During iteration, participants actively curated their notebooks into narratives, although primarily through cell structure rather than markdown explanations. Next, we surveyed 45 data scientists and asked them to envision how they might use their past history in a future version control system. Based on these results, we give design guidance for future literate programming tools, such as providing history search based on how programmers recall their explorations, through contextual details including images and parameters.
What Did I Really Vote For?
E-voting has been embraced by a number of countries, delivering benefits in terms of efficiency and accessibility. End-to-end verifiable e-voting schemes facilitate verification of the integrity of individual votes during the election process. In particular, methods for cast-as-intended verification enable voters to confirm that their cast votes have not been manipulated by the voting client. A well-known technique for effecting cast-as-intended verification is the Benaloh Challenge. The usability of this challenge is crucial because voters have to be actively engaged in the verification process. In this paper, we report on a usability evaluation of three different approaches of the Benaloh Challenge in the remote e-voting context. We performed a comparative user study with 95 participants. We conclude with a recommendation for which approaches should be provided to afford verification in real-world elections and suggest usability improvements.
Tangible Drops: A Visio-Tactile Display Using Actuated Liquid-Metal Droplets
We present Tangible Drops, a visio-tactile display that for the first time provides physical visualization and tactile feedback using a planar liquid interface. It presents digital information interactively by tracing dynamic patterns on horizontal flat surfaces using liquid metal drops on a programmable electrode array. It provides tactile feedback with directional information in the 2D vector plane using linear locomotion and/or vibration of the liquid metal drops. We demonstrate move, oscillate, merge, split and dispense-from-reservoir functions of the liquid metal drops at low power (450 mW per electrode) and low voltage (8–15 V). We report on results of our empirical study with 12 participants on tactile feedback using 8 mm diameter drops, which indicate that Tangible Drops can convey tactile sensations such as changing speed, varying direction and controlled oscillation with no visual feedback. We present the design space and demonstrate the applications of Tangible Drops, and conclude by suggesting potential future applications for the technique.
FingerT9: Leveraging Thumb-to-finger Interaction for Same-side-hand Text Entry on Smartwatches
We introduce FingerT9, which leverages thumb-to-finger touches on the finger segments to support same-side-hand (SSH) text entry on smartwatches. This is achieved by mapping a T9 keyboard layout to the finger segments. Our solution avoids the fat-finger and screen-occlusion problems, and enables text entry using the same-side hand which wears the watch. In a pilot study, we determined the layout mapping preferred by users. We conducted an experiment to compare the text-entry performance of FingerT9, tilt-based SSH input, and direct-touch non-SSH input. The results showed that participants performed significantly faster and more accurately with FingerT9 than with the tilt-based method. There was no significant difference between FingerT9 and the direct-touch method in terms of efficiency and error rate. We then conducted a second experiment to study the learning curve of SSH text entry methods: FingerT9 and tilt-based input. FingerT9 gave significantly better long-term improvement. In addition, eyes-free text entry (i.e., looking at the screen output but not at the keyboard layout mapped on the finger segments) was possible once participants were familiar with the keyboard layout.
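To make the mapping concrete, the sketch below assigns the T9 keys 2–9 to finger segments and disambiguates a tapped key sequence against a small word list; the segment-to-key assignment and the lexicon are illustrative assumptions, not the layout that the paper's participants preferred.

```python
# Illustrative T9 grouping; in FingerT9 each key is mapped to a finger segment of the watch hand.
T9_LETTERS = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

LEXICON = ["hi", "hey", "good", "home", "gone", "in"]  # tiny illustrative word list

def word_to_keys(word):
    """T9 key sequence for a word, e.g. 'good' -> '4663'."""
    return "".join(k for ch in word for k, letters in T9_LETTERS.items() if ch in letters)

def disambiguate(key_sequence):
    """All lexicon words whose key sequence matches the tapped finger segments."""
    return [w for w in LEXICON if word_to_keys(w) == key_sequence]

print(disambiguate("4663"))  # ['good', 'home', 'gone'] -- the user then picks a candidate
```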
Time-Turner: Designing for Reflection and Remembrance of Moments in the Home
Families preserve memories of their special and everyday experiences, though it can be hard to capture all these moments in everyday life. We explore the concept of automated forms of capturing family life and presenting them through situated, tangible everyday artifacts in the home. We designed Time-Turner, an always-on video recording system along with a set of three drink coasters that allow family members to easily search, filter and replay videos to connect to their past. We engaged households in speculative enactments and interviews to explore the design space. Our findings point to the value of witnessing real rather than staged moments and the ways in which the affordances of everyday artifacts can allow media to be ‘lived with’ as a part of everyday life. Yet our design also revealed tensions around sharing and changing perceptions across time and generations. This points to design challenges around safeguarding this media and capturing ‘reality’ as opposed to curated content.
Webcam Covering as Planned Behavior
Most of today’s laptops come with an integrated webcam placed above the screen to enable video conferencing. Due to the risk of webcam spying attacks, some laptop users seem to be concerned about their privacy and seek protection by covering the webcam. This paper is the first to investigate the personal characteristics and beliefs of users with and without webcam covers by applying the Theory of Planned Behavior. We record the privacy behavior of 180 users, develop a path model, and analyze it using Partial Least Squares. The analysis indicates that privacy concerns do not significantly influence users’ decision to use a webcam cover. Rather, this behavior is influenced by users’ attitudes, social environment, and perceived control over protecting privacy. Developers should take this as a lesson to design privacy-enhancing technologies that are convenient, verifiably effective, and endorsed by peers.
WrisText: One-handed Text Entry on Smartwatch using Wrist Gestures
We present WrisText – a one-handed text entry technique for smartwatches using the joystick-like motion of the wrist. A user enters text by whirling the wrist of the watch hand toward six directions, each representing a key on a circular keyboard whose letters are distributed in alphabetical order. The design of WrisText was an iterative process, in which we first conducted a study to investigate the optimal key size and found that keys needed to be 55° or wider to achieve over 90% striking accuracy. We then computed an optimal keyboard layout, considering a joint optimization problem of striking accuracy, striking comfort, and word disambiguation. We evaluated the performance of WrisText through a five-day study with 10 participants in two text entry scenarios: hand-up and hand-down. On average, participants achieved a text entry speed of 9.9 WPM across all sessions, and were able to type as fast as 15.2 WPM by the end of the last day.
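As a minimal sketch of the decoding step, the code below maps a wrist-whirl direction to one of six 60° sectors of a circular keyboard with letters grouped alphabetically; the even sector split and letter grouping are assumptions for illustration, whereas the paper derives an optimized layout.

```python
# Six 60-degree sectors around the watch face, letters grouped alphabetically.
# WrisText optimizes this grouping jointly with accuracy and comfort; this even split is an assumption.
SECTORS = ["abcde", "fghij", "klmno", "pqrst", "uvwx", "yz"]

def key_for_angle(angle_deg):
    """Map a wrist-whirl direction (degrees, 0-360) to the letter group of its sector."""
    return SECTORS[int((angle_deg % 360) // 60)]

print(key_for_angle(10))    # 'abcde'
print(key_for_angle(200))   # 'pqrst'
```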
Off-Line Sensing: Memorizing Interactions in Passive 3D-Printed Objects
Embedding sensors into objects allows them to recognize various interactions. However, sensing usually requires active electronics that are often costly, need time to be assembled, and constantly draw power. Thus, we propose off-line sensing: passive 3D-printed sensors that detect one-time interactions, such as accelerating or flipping, but require neither active electronics nor power at the time of the interaction. They memorize a pre-defined interaction via an embedded structure filled with a conductive medium (e.g., a liquid). Whether a sensor was exposed to the interaction can be read out via a capacitive touchscreen. Sensors are printed in a single pass on a consumer-level 3D printer. Through a series of experiments, we show the feasibility of off-line sensing.
Social Computing-Driven Activism in Youth Empowerment Organizations: Challenges and Opportunities
Throughout the world, organizations empower youth to participate in civic engagement to impact social change, and adult-youth collaborations are instrumental to the success of such initiatives. However, little is known about how technology supports this activism work, despite the fact that tools such as Social Networking Applications (SNAs) are increasingly being leveraged in such contexts. We report results from a qualitative study of SNA use within a youth empowerment organization. Using the analytical lens of object-oriented publics, our findings reveal opportunities and challenges that youth and staff face when they use SNAs. We describe the illegibility of youth outreach efforts on SNAs, and how this illegibility complicated staff attempts to hold youth accountable. We also characterize how youth and staff differed in what they felt were socially appropriate uses of SNA features, and tensions that arose in the co-use of these tools. We conclude with implications for the design of collaborative technologies that support youth-led activism in organizational contexts.
AdaM: Adapting Multi-User Interfaces for Collaborative Environments in Real-Time
Developing cross-device multi-user interfaces (UIs) is a challenging problem. There are numerous ways in which content and interactivity can be distributed. However, good solutions must consider multiple users, their roles, their preferences and access rights, as well as device capabilities. Manual and rule-based solutions are tedious to create and do not scale to larger problems nor do they adapt to dynamic changes, such as users leaving or joining an activity. In this paper, we cast the problem of UI distribution as an assignment problem and propose to solve it using combinatorial optimization. We present a mixed integer programming formulation which allows real-time applications in dynamically changing collaborative settings. It optimizes the allocation of UI elements based on device capabilities, user roles, preferences, and access rights. We present a proof-of-concept designer-in-the-loop tool, allowing for quick solution exploration. Finally, we compare our approach to traditional paper prototyping in a lab study.
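To give a flavor of casting UI distribution as an assignment problem, the sketch below states a toy mixed integer program with the open-source PuLP library: binary variables assign each UI element to one device, subject to capacity and access-right constraints, maximizing a quality score. The elements, devices, scores, and constraints are made-up illustrations; AdaM's actual formulation is considerably richer.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

elements = ["map", "chat", "controls"]
devices = ["phone", "tablet", "wall_display"]
score = {("map", "wall_display"): 9, ("map", "tablet"): 6, ("map", "phone"): 2,
         ("chat", "phone"): 8, ("chat", "tablet"): 5, ("chat", "wall_display"): 1,
         ("controls", "tablet"): 7, ("controls", "phone"): 6, ("controls", "wall_display"): 3}
capacity = {"phone": 1, "tablet": 2, "wall_display": 2}     # max elements per device (made up)
forbidden = {("controls", "wall_display")}                  # e.g. an access-right restriction

prob = LpProblem("ui_distribution", LpMaximize)
x = LpVariable.dicts("assign", (elements, devices), cat="Binary")

prob += lpSum(score[e, d] * x[e][d] for e in elements for d in devices)   # overall UI quality
for e in elements:                                  # every element appears on exactly one device
    prob += lpSum(x[e][d] for d in devices) == 1
for d in devices:                                   # respect each device's capacity
    prob += lpSum(x[e][d] for e in elements) <= capacity[d]
for e, d in forbidden:                              # honour access rights
    prob += x[e][d] == 0

prob.solve()
print({e: next(d for d in devices if x[e][d].value() == 1) for e in elements})
# -> {'map': 'wall_display', 'chat': 'phone', 'controls': 'tablet'}
```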
SymbiosisSketch: Combining 2D & 3D Sketching for Designing Detailed 3D Objects in Situ
We present SymbiosisSketch, a hybrid sketching system that combines drawing in air (3D) and on a drawing surface (2D) to create detailed 3D designs of arbitrary scale in an augmented reality (AR) setting. SymbiosisSketch leverages the complementary affordances of 3D (immersive, unconstrained, life-sized) and 2D (precise, constrained, ergonomic) interactions for in situ 3D conceptual design. A defining aspect of our system is the ongoing creation of surfaces from unorganized collections of 3D curves. These surfaces serve a dual purpose: as 3D canvases to map strokes drawn on a 2D tablet, and as shape proxies to occlude the physical environment and hidden curves in a 3D sketch. SymbiosisSketch users draw interchangeably on a 2D tablet or in 3D within an ergonomically comfortable canonical volume, mapped to arbitrary scale in AR. Our evaluation study shows this hybrid technique to be easy to use in situ and effective in transcending the creative potential of either traditional sketching or drawing in air.
Video Game Selection Procedures For Experimental Research
Videogames are complex stimuli, and selecting games that consistently induce a desired player experience (PX) in an experimental setting can be challenging. The number of relatively high-quality games being released each year continues to increase, which makes deriving a shortlist of plausible candidate games from this pool increasingly problematic. Despite this, guidance for structuring and reporting on the game selection process remains limited. This paper therefore proposes two approaches to game selection: the first leverages online videogame databases and existing PX research, and is structured with respect to widely-applicable videogame metadata. The second process applies established game design theory to serve researchers when insufficient connections between desired PX outcomes and recognisable game elements exist. Both methods are accompanied by example reports of their application. The present work aims to assist experimental researchers in selecting videogames likely to meet their needs, while encouraging more rigorous standards of reporting in the field.
Animated Edge Textures in Node-Link Diagrams: a Design Space and Initial Evaluation
Network edge data attributes are usually encoded using color, opacity, stroke thickness and stroke pattern, or some combination thereof. In addition to these static variables, it is also possible to animate dynamic particles flowing along the edges. This opens a larger design space of animated edge textures, featuring additional visual encodings that have potential not only in terms of visual mapping capacity but also playfulness and aesthetics. Such animated edge textures have been used in several commercial and design-oriented visualizations, but to our knowledge almost always in a relatively ad hoc manner. We introduce a design space and Web-based framework for generating animated edge textures, and report on an initial evaluation of particle properties – particle speed, pattern and frequency – in terms of visual perception.
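As a small sketch of how the studied particle properties drive an animation, the functions below compute where equally spaced particles sit along an edge at time t and interpolate those positions between the edge's endpoints; the parameter names and defaults are illustrative, not the encodings evaluated in the paper.

```python
def particle_positions(t, speed=0.25, count=4, phase=0.0):
    """Positions in [0, 1) along one edge of `count` equally spaced particles at time t (seconds).

    speed -- edge lengths travelled per second
    count -- number of particles on the edge (their spacing encodes frequency)
    phase -- pattern offset, e.g. to de-synchronize neighbouring edges
    """
    return [((t * speed) + phase + i / count) % 1.0 for i in range(count)]

def point_on_edge(source, target, s):
    """Linear interpolation between the edge endpoints at parameter s in [0, 1)."""
    return (source[0] + s * (target[0] - source[0]),
            source[1] + s * (target[1] - source[1]))

for s in particle_positions(t=1.5):
    print(point_on_edge((0, 0), (100, 50), s))   # where to draw each particle this frame
```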
Silicone Devices: A Scalable DIY Approach for Fabricating Self-Contained Multi-Layered Soft Circuits using Microfluidics
We present a scalable Do-It-Yourself (DIY) fabrication workflow for prototyping highly stretchable yet robust devices using a CO2 laser cutter, which we call Silicone Devices. Silicone Devices are self-contained and thus embed components for input, output, processing, and power. Our approach scales to arbitrary complex devices as it supports techniques to make multi-layered stretchable circuits and buried VIAs. Additionally, high-frequency signals are supported as our circuits consist of liquid metal and are therefore highly conductive and durable. To enable makers and interaction designers to prototype a wide variety of Silicone Devices, we also contribute a stretchable sensor toolkit, consisting of touch, proximity, sliding, pressure, and strain sensors. We demonstrate the versatility and novel opportunities of our technique by prototyping various samples and exploring their use cases. Strain tests report on the reliability of our circuits and preliminary user feedback reports on the user-experience of our workflow by non-engineers.
RFIBricks: Interactive Building Blocks Based on RFID
We present RFIBricks, an interactive building block system based on ultrahigh frequency radio-frequency identification (RFID) sensing. The system enables geometry resolution based on a simple yet highly generalizable mechanism: an RFID contact switch, which is made by cutting each RFID tag into two parts, namely antenna and chip. A magnetic connector is then coupled with each part. When the antenna and chip connect, an interaction event with an ID is transmitted to the reader. On the basis of our design of RFID contact switch patterns, we present a system of interactive physical building blocks that resolves the stacking order and orientation when one block is stacked upon another, determines a three-dimensional (3D) geometry built on a two-dimensional base plate, and detects user inputs by incorporating electromechanical sensors. Because it is calibration-free and does not require batteries in each block, it facilitates straightforward maintenance when deployed at scale. Compared with other approaches, this RFID-based system resolves several critical challenges in human-computer interaction, such as 1) determining the identity and the built 3D geometry of passive building blocks, 2) enabling stackable token+constraint interaction on a tabletop, and 3) tracking in-hand assembly.
Let Me Be Implicit: Using Motive Disposition Theory to Predict and Explain Behaviour in Digital Games
We introduce explicit and implicit motives (i.e., achievement, affiliation, power, autonomy) into player experience research and situate them in existing theories of player motivation, personality, playstyle, and experience. Additionally, we conducted an experiment with 109 players in a social play situation and show that: 1. As expected, there are several correlations of playstyle, personality, and motivation with explicit motives, but few with implicit motives; 2. The implicit affiliation motive predicts in-game social behaviour; and 3. The implicit affiliation motive adds significant variance to explain regression models of in-game social behaviours even when we control for social aspects of personality, the explicit affiliation motive, self-esteem, and social player traits. Our results support that implicit motives explain additional variance because they access needs that are experienced affectively and pre-consciously, and not through cognitive interpretation necessary for explicit expression and communication, as is the case in any approaches that use self-report.
Single or Multiple Conversational Agents?: An Interactional Coherence Comparison
Chatbots focusing on a narrow domain of expertise are on the rise. As many tasks require multiple areas of expertise, a designer may integrate multiple chatbots in the background or include them as interlocutors in a conversation. We investigated both scenarios by means of a Wizard of Oz experiment, in which participants talked to chatbots about visiting a destination. We analyzed the conversation content, users’ speech, and reported impressions. We found no significant difference between the single- and multi-chatbot scenarios. However, even with equivalent conversation structures, users reported more confusion in multi-chatbot interactions and adopted strategies to organize turn-taking. Our findings indicate that implementing a meta-chatbot may not be necessary, since similar conversation structures occur when interacting with multiple chatbots, but different interactional aspects must be considered for each scenario.
Understanding the Use and Impact of the Zero-Rated Free Basics Platform in South Africa
Companies are offering zero-rated, or data-charge-free, Internet services to help bring unconnected users online where Internet access is less affordable. However, it is unclear whether these services achieve this goal or how they shape Internet use. To inform evidence-based policy on zero-rated services and the design of such services, we show in this paper how mobile users are making use of Facebook’s controversial Free Basics platform. We present findings from interviews with 35 Free Basics users in South Africa: current low-income users and non-regular student users. Our findings suggest that Free Basics does shape Internet usage; for instance, users spend more time online because of ‘free’ apps. Second, Free Basics saves users money, but adoption of the platform depends on access to other ‘free’ Internet options. Finally, most users are confused about how zero-rated services work and what ‘free’ means. Based on our findings, we make recommendations for future work.
Cooperating to Compete: the Mutuality of Cooperation and Competition in Boardgame Play
This paper examines the complex relationship between competition and cooperation in boardgame play. We understand boardgaming as distributed cognition, where people work together in a shared activity to accomplish the game. Although players typically compete against each other, this competition is only possible through ongoing cooperation to negotiate, enact and maintain the rules of play. In this paper, we report on a study of people playing modern boardgames. We analyse how knowledge of the game’s state is distributed amongst the players and the game components, and examine the different forms of cooperation and collaboration that occur during play. Further, we show how players use the material elements of the game to support articulation work and to improve their awareness and understanding of the game’s state. Our goal is to examine the coordinative practices that the players use during play and explicate the ways in which these enable competition.
A Matter of Control or Safety?: Examining Parental Use of Technical Monitoring Apps on Teens’ Mobile Devices
Adoption rates of parental control applications (“apps”) for teens’ mobile devices are low, but little is known about the characteristics of parents (or teens) who use these apps. We conducted a web-based survey of 215 parents and their teens (ages 13-17) using two separate logistic regression models (parent and teen) to examine the factors that predicted parental use of technical monitoring apps on their teens’ mobile devices. Both parent and teen models confirmed that low autonomy granting (e.g., authoritarian) parents were the most likely to use parental control apps. The teen model revealed additional nuance, indicating that teens who were victimized online and had peer problems were more likely to be monitored by their parents. Overall, increased parental control was associated with more (not fewer) online risks. We discuss the implications of these findings and provide design recommendations for mobile apps that promote online safety through engaged, instead of restrictive, parenting.
CommunityCrit: Inviting the Public to Improve and Evaluate Urban Design Ideas through Micro-Activities
While urban design affects the public, most people do not have the time or expertise to participate in the process. Many online tools solicit public input, yet typically limit interaction to collecting complaints or early-stage ideas. This paper explores how to engage the public in more complex stages of urban design without requiring a significant time commitment. After observing workshops, we designed a system called CommunityCrit that offers micro-activities to engage communities in elaborating and evaluating urban design ideas. Through a four-week deployment, in partnership with a local planning group seeking to redesign a street intersection, CommunityCrit yielded 352 contributions (around 10 minutes per participant). The planning group reported that CommunityCrit provided insights on public perspectives and raised awareness for their project, but noted the importance of setting expectations for the process. People appreciated that the system provided a window into the planning process, empowered them to contribute, and supported diverse levels of skills and availability.
The Perils of Confounding Factors: How Fitts’ Law Experiments can Lead to False Conclusions
The design of Fitts’ historical reciprocal tapping experiment gravely confounds index of difficulty ID with target distance D: Summary statistics for the candidate Fitts model and a competing model may appear identical, and the validity of Fitts’ model for some tasks can be legitimately questioned. We show that the contamination of ID by either target distance D or width W is due to the common practices of pooling and averaging data belonging to different distance-width (D,W) pairs for the same ID, and taking a geometric progression for values of D and W. We analyze a case study of the validation of Fitts’ law in eye-gaze movements, where an unfortunate experimental design has misled researchers into believing that eye-gaze movements are not ballistic. We then provide simple guidelines to prevent confounds: Practitioners should carefully design the experimental conditions of (D,W), fully distinguish data acquired for different conditions, and put less emphasis on r² scores. We also recommend investigating the use of stochastic sampling for D and W.
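For reference, the Shannon formulation of Fitts' law commonly used in HCI makes the confound easy to see: very different (D, W) conditions can share one index of difficulty, so pooling or averaging data by ID discards the separate effects of D and W.

```latex
% Shannon formulation of Fitts' law as commonly used in HCI:
MT = a + b \cdot ID, \qquad ID = \log_2\!\left(\frac{D}{W} + 1\right)
% Example: (D, W) = (30, 10) and (D, W) = (300, 100) both give ID = 2 bits,
% yet they are very different movement conditions.
```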
Utilizing Narrative Grounding to Design Storytelling Games for Creative Foreign Language Production
Foreign language students must learn to use language creatively to overcome knowledge gaps and keep readers or listeners interested. However, few tools exist to support practicing this skill. Therefore, we set out to explore the design of storytelling games for practicing creative language use. Through an iterative design process, we identified narrative grounding (establishing common ground for a collaborative narrative) as key to student engagement and learning. However, designing games for narrative grounding while keeping the game flexible enough to easily accommodate teacher goals is challenging. Considering this challenge, we designed a collaborative storytelling game in which students help scaffold the narrative and teachers can easily integrate language goals with “language cards”. In an in-classroom evaluation with 36 students, we show the importance of narrative grounding for learning. Qualitative evidence also suggests narrative grounding makes the game more engaging for players. We conclude with a discussion of design implications for digital language learning tools.
Accessible Maps for the Blind: Comparing 3D Printed Models with Tactile Graphics
Tactile maps are widely used in Orientation and Mobility (O&M) training for people with blindness and severe vision impairment. Commodity 3D printers now offer an alternative way to present accessible graphics; however, it is unclear whether 3D models offer advantages over tactile equivalents of 2D graphics such as maps. In a controlled study with 16 touch readers, we found that 3D models were preferred, enabled the use of more easily understood icons, facilitated better short-term recall, and allowed the relative height of map elements to be more easily understood. Analysis of hand movements revealed the use of novel strategies for systematically scanning the 3D model and gaining an overview of the map. Finally, we explored how 3D printed maps can be augmented with interactive audio labels, replacing less practical braille labels. Our findings suggest that 3D printed maps do indeed offer advantages for O&M training.
Reactile: Programming Swarm User Interfaces through Direct Physical Manipulation
We explore a new approach to programming swarm user interfaces (Swarm UI) by leveraging direct physical manipulation. Existing Swarm UI applications are written using a robot programming framework: users work on a computer screen and think in terms of low-level controls. In contrast, our approach allows programmers to work in physical space by directly manipulating objects and think in terms of high-level interface design. Inspired by current UI programming practices, we introduce a four-step workflow (create elements, abstract attributes, specify behaviors, and propagate changes) for Swarm UI programming. We propose a set of direct physical manipulation techniques to support each step in this workflow. To demonstrate these concepts, we developed Reactile, a Swarm UI programming environment that actuates a swarm of small magnets and displays spatial information of program states using a DLP projector. Two user studies (an in-class survey with 148 students and a lab interview with eight participants) confirm that our approach is intuitive and understandable for programming Swarm UIs.
SESSION: Paper Presentations
Increasing User Attention with a Comic-based Policy
End user license agreements, terms of service agreements and privacy policies all suffer from many of the same problems: people rarely read them and yet still agree to whatever is contained within them. There are many usability challenges with these policies: they are often lengthy, with jargon-filled language that is difficult to comprehend quickly. However, these notices are the primary tool for users to understand the privacy implications of their digital activities and make informed decisions about which websites and software they use. Prior research has explored alternative designs for such notices, using more visual and structured interfaces for conveying information. We expand upon these results by exploring a comic-based interface, examining whether it can engage users to pay more attention to a terms of service agreement. Our results indicate that the comic version did hold user attention for longer than text-based alternatives, encouraging deeper investigation into comic-based interfaces.
Deployments of the table-non-table: A Reflection on the Relation Between Theory and Things in the Practice of Design Research
Design-oriented research in HCI has increasingly migrated towards theoretical perspectives to understand the implications of newly crafted technology in everyday life. However, in this context, the relations between theory and understanding the things we make are not always clear, especially the degree to which the nature of research artifacts is revealed through or determined by theory. We examine a series of field deployment studies we conducted with our research artifact table-non-table over the course of four and a half years that we came to see as a postphenomenological inquiry. Importantly, our interpretations of this artifact, methodological concerns, and theoretical groundings evolved over time. We account for and critically reflect on these shifts in the relationship between theory and our design artifact. We detail how theory was enacted and embodied in our design research practice and offer insights into the complex relations between theory and things in design-oriented HCI research.
Investigating How Smartphone Movement is Affected by Body Posture
We present an investigation into how hand usage is affected by different body postures (Sitting at a table, Lying down and Standing) when interacting with smartphones. We theorize a list of factors (smartphone support, body support and muscle usage) and explore their influence on the tilt and rotation of the smartphone. From this we draw a list of hypotheses that we investigate in a quantitative study. We varied the body postures and grips (Symmetric bimanual, Asymmetric bimanual finger, Asymmetric bimanual thumb and Single-handed), studying the effects through a dual pointing task. Our results showed that the body posture Lying down had the most movement, followed by Sitting at a table and finally Standing. We additionally generate reports of motions performed using different grips. Our work extends previous research conducted with multiple grips in a sitting position by including other body postures; we anticipate that UI designers will use our results to inform the development of mobile user interfaces.
“I can do everything but see!” — How People with Vision Impairments Negotiate their Abilities in Social Contexts
This research takes an orientation to visual impairment (VI) that does not regard it as fixed or determined alone in or through the body. Instead, we consider (dis)ability as produced through interactions with the environment and configured by the people and technology within it. Specifically, we explore how abilities become negotiated through video ethnography with six VI athletes and spectators during the Rio 2016 Paralympics. We use in-depth examples generated through this fieldwork to identify how technology can be a meaningful part of ability negotiations, emphasizing how these embed into the social interactions and lives of people with VI. In contrast to treating technology as a solution to a ‘sensory deficit’, we understand it to support the triangulation process of sense-making through the provision of appropriate additional information. Further, we suggest that technology should not try to replace human assistance, but instead enable people with VI to better identify and interact with other people in situ.
vrSocial: Toward Immersive Therapeutic VR Systems for Children with Autism
Social communication frequently includes nuanced nonverbal communication cues, including eye contact, gestures, facial expressions, body language, and tone of voice. This type of communication is central to face-to-face interaction, but can be challenging for children and adults with autism. Innovative technologies can provide support by augmenting human-delivered cuing and automated prompting. Specifically, immersive virtual reality (VR) offers an option to generalize social skill interventions by concretizing nonverbal information in real-time social interactions. In this work, we explore the design and evaluation of three nonverbal communication applications in immersive VR. The results of this work indicate that delivering real-time visualizations of proximity, speaker volume, and duration of one’s speech is feasible in immersive VR and effective for real-time support for proximity regulation for children with autism. We conclude with design considerations for therapeutic VR systems.
DeepWriting: Making Digital Ink Editable via Deep Generative Modeling
Digital ink promises to combine the flexibility and aesthetics of handwriting with the ability to process, search and edit digital text. Character recognition converts handwritten text into a digital representation, albeit at the cost of losing personalized appearance due to the technical difficulties of separating the interwoven components of content and style. In this paper, we propose a novel generative neural network architecture that is capable of disentangling style from content and thus making digital ink editable. Our model can synthesize arbitrary text while giving users control over the visual appearance (style). For example, it allows for style transfer without changing the content, editing of digital ink at the word level, and other application scenarios such as spell-checking and correction of handwritten text. We furthermore contribute a new dataset of handwritten text with fine-grained annotations at the character level and report results from an initial user evaluation.
Media of Things: Supporting the Production of Metadata Rich Media Through IoT Sensing
Rich metadata is becoming a key part of the broadcast production pipeline. This information can be used to deliver compelling new consumption experiences which are personalized, location-aware, interactive and multi-screen. However, media producers are struggling to generate the metadata required for such experiences, using inefficient post-production solutions which are limited in how much of the original context they can capture. In response, we present Media of Things (MoT), a tool for on-location media productions. MoT enables practical and flexible generation of sensor based point-of-capture metadata. We demonstrate how embedded ubiquitous sensing technologies such as the Internet of Things can be leveraged to produce context rich, time sequenced metadata in a production studio. We reflect on how this workflow can be integrated within the constraints of broadcast production and the possibilities that emerge from access to rich data at the beginning of the production lifecycle to produce well described media for reconfigurable consumption.
Enhancing Online Problems Through Instructor-Centered Tools for Randomized Experiments
Digital educational resources could enable the use of randomized experiments to answer pedagogical questions that instructors care about, taking academic research out of the laboratory and into the classroom. We take an instructor-centered approach to designing tools for experimentation that lower the barriers for instructors to conduct experiments. We explore this approach through DynamicProblem, a proof-of-concept system for experimentation on components of digital problems, which provides interfaces for authoring experiments on explanations, hints, feedback messages, and learning tips. To rapidly turn data from experiments into practical improvements, the system uses an interpretable machine learning algorithm to analyze students’ ratings of which conditions are helpful, and presents conditions to future students in proportion to the evidence that they are rated more highly. We evaluated the system by collaboratively deploying experiments in the courses of three mathematics instructors. They reported benefits in reflecting on their pedagogy and in having a new method for improving online problems for future students.
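The abstract does not name the algorithm, so purely as an illustration of showing conditions in proportion to the evidence that they are rated helpful, the sketch below uses Beta-Bernoulli Thompson sampling over made-up helpfulness counts; this is a stand-in technique, not necessarily the interpretable method used by DynamicProblem.

```python
import random

# Made-up counts of "helpful" / "not helpful" ratings for three alternative hint texts.
ratings = {
    "hint_A": {"helpful": 12, "not_helpful": 3},
    "hint_B": {"helpful": 7,  "not_helpful": 8},
    "hint_C": {"helpful": 2,  "not_helpful": 2},
}

def choose_condition(ratings):
    """Sample a plausible helpfulness rate per condition (Beta posterior) and show the best draw."""
    draws = {name: random.betavariate(1 + r["helpful"], 1 + r["not_helpful"])
             for name, r in ratings.items()}
    return max(draws, key=draws.get)

# Over many students, higher-rated hints are shown more often, while weakly-evidenced
# conditions still get shown occasionally so that evidence keeps accumulating.
shown = [choose_condition(ratings) for _ in range(1000)]
print({name: shown.count(name) for name in ratings})
```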
Exploring the Role of Conversational Cues in Guided Task Support with Virtual Assistants
Voice-based conversational assistants are growing in popularity on ubiquitous mobile and stationary devices. Cortana, as well as Google Home, Amazon Echo, and others, can provide support for various tasks from managing reminders to booking a hotel. However, with few exceptions, user input is limited to explicit queries or commands. In this work, we explore the role of implicit conversational cues in guided task completion scenarios. In a Wizard of Oz study, we found that, for the task of cooking a recipe, nearly one-quarter of all user-assistant exchanges were initiated from implicit conversational cues rather than from plain questions. Given that these implicit cues occur in such high frequency, we conclude by presenting a set of design implications for the design of guided task experiences in contemporary conversational assistants.
Simulator Sickness in Augmented Reality Training Using the Microsoft HoloLens
Augmented Reality is on the rise, with consumer-grade smart glasses becoming available in recent years. Those interested in deploying these head-mounted displays need to better understand the effects the technology has on end users. One key aspect potentially hindering adoption is motion sickness, a known problem inherited from virtual reality that so far remains under-explored. In this paper we address this problem by conducting an experiment with 142 subjects in three different industries: aviation, medical, and space. We evaluate whether the Microsoft HoloLens, an augmented reality head-mounted display, causes simulator sickness and how different symptom groups (nausea, oculomotor, and disorientation) contribute to it. Our findings suggest that the Microsoft HoloLens caused only negligible symptoms of simulator sickness across all participants. Most consumers who use it will face no symptoms, while only a few will experience minimal discomfort in the training environments we tested it in.
Experiencing the Body as Play
Games research in HCI is continually interested in the human body. However, recent work suggests that the field has only begun to understand how to design bodily games. We propose that the games research field is advancing from playing with digital content using a keyboard, to using bodies to play with digital content, towards a future where we experience our bodies as digital play. To guide designers interested in supporting players to experience their bodies as play, we present two phenomenological perspectives on the human body (Körper and Leib) and articulate a suite of design tactics using our own and other people’s work. With this paper, we hope to help designers embrace the point that we both “have” a body and “are” a body, thereby facilitating the many benefits of engaging the human body through games and play, and ultimately contributing to a more humanized technological future.
Upstanding by Design: Bystander Intervention in Cyberbullying
Although bystander intervention can mitigate the negative effects of cyberbullying, few bystanders ever attempt to intervene. In this study, we explored the effects of interface design on bystander intervention using a simulated, custom-made social media platform. Participants took part in a three-day, in-situ experiment, in which they were exposed to several cyberbullying incidents. Depending on the experimental condition, they received different information about the audience size and viewing notifications intended to increase a sense of personal responsibility in bystanders. Results indicated that bystanders were more likely to intervene indirectly than directly, and that information about audience size and viewership increased the likelihood of flagging cyberbullying posts through the serial mediation of public surveillance, accountability, and personal responsibility. The study has implications for understanding the bystander effect in cyberbullying and for developing design solutions that encourage bystander intervention in social media.
Examining the Demand for Spam: Who Clicks?
Despite significant advances in automated spam detection, some spam content manages to evade detection and engage users. While the spam supply chain is well understood through previous research, there is little understanding of spam consumers. We focus on the demand side of the spam equation, examining what drives users to click on spam via a large-scale analysis of de-identified, aggregated Facebook log data (n=600,000). We find (1) that the volume of spam and clicking norms in a user’s network are significantly related to individual consumption behavior; (2) that more active users are less likely to click, suggesting that experience and internet skill (weakly correlated with activity level) may create more savvy consumers; and (3) we confirm previous findings about the gender effect in spam consumption, but find this effect largely corresponds to spam topics. Our findings provide practical insights to reduce demand for spam content, thereby affecting spam profitability.
ColorMod: Recoloring 3D Printed Objects using Photochromic Inks
Recent research has shown how to change the color of existing objects using photochromic materials. These materials can switch their appearance from transparent to colored when exposed to light of a certain wavelength. The color remains even when the object is removed from the light source. The process is fully reversible allowing users to recolor the object as many times as they want. So far, these systems have been limited to single color changes, i.e. changes from transparent to colored. In this paper, we present ColorMod, a method to extend this approach to multi-color changes (e.g., red-to-yellow). We accomplish this using a multi-color pattern with one color per voxel across the surface of the object. When recoloring the object, our system locally activates only those voxels that have the desired color and turns all other voxels off. We describe ColorMod’s hardware/software system and its user interface that comes with a conversion tool for 3D printing as well as a painting tool that matches physical voxels with the desired appearance. We also contribute our own material formula for a 3D-printable photochromic ink.
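A minimal sketch of the per-voxel selection step described above: each surface voxel carries one printed photochromic ink color, so showing a target appearance means activating only the voxels whose ink matches the color wanted at their position and turning the rest off. The data layout and exact color matching here are assumptions for illustration, not ColorMod's implementation.

```python
def recolor_plan(voxels, target_color_at):
    """voxels: iterable of (position, ink_color); target_color_at: position -> desired color.
    Returns which voxels to activate (expose to the activating light) and which to switch off."""
    activate, deactivate = [], []
    for position, ink_color in voxels:
        if ink_color == target_color_at(position):
            activate.append(position)
        else:
            deactivate.append(position)   # stays / becomes transparent
    return activate, deactivate

# An alternating red/yellow voxel pattern on a 1D strip, retargeted to appear all yellow:
strip = [((i, 0, 0), "red" if i % 2 == 0 else "yellow") for i in range(6)]
activate, deactivate = recolor_plan(strip, target_color_at=lambda p: "yellow")
print(activate)     # odd positions (yellow ink) are switched on
print(deactivate)   # even positions (red ink) are switched off
```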
Multidimensional Risk Communication: Public Discourse on Risks during an Emerging Epidemic
Crisis informatics has examined how institutions and individuals seek, communicate, and curate information in response to crises. The public’s communication and perception of risks on social media remain understudied. In this study, we report a qualitative analysis of public perceptions of risks and risk management measures on Reddit during the Zika crisis, an emerging epidemic associated with high uncertainty regarding pathology, epidemiology, and broad consequences. We found two types of perceived risks: ones directly caused by the Zika virus, and ones potentially introduced by authorities’ risk management measures. Risk perceptions unfolded along multiple dimensions beyond the imminent and personal level. Reddit users discussed in a speculative way to foresee various risks in the long run or at larger geographical scales. We discuss the multidimensionality and speculative nature of risk perception on social media, and derive implications for crisis informatics research and public health research and practice.
A Face Recognition Application for People with Visual Impairments: Understanding Use Beyond the Lab
Recognizing others is a major challenge for people with visual impairments (VIPs) and can hinder engagement in social activities. We present Accessibility Bot, a research prototype bot on Facebook Messenger that leverages state-of-the-art computer vision and a user's friends' tagged photos on Facebook to help people with visual impairments recognize their friends. Accessibility Bot provides users with information about the identity, facial expressions, and attributes of friends captured by their phone's camera. To guide our design, we interviewed eight VIPs to understand their challenges and needs in social activities. After designing and implementing the bot, we conducted a diary study with six VIPs to study its use in everyday life. While most participants found the Bot helpful, their experience was undermined by perceived low recognition accuracy, difficulty aiming a camera, and lack of knowledge about the phone's status. We discuss these real-world challenges, identify suitable use cases for Accessibility Bot, and distill design implications for future face recognition applications.
Evaluation Beyond Usability: Validating Sustainable HCI Research
The evaluation of research artefacts is an important step to validate research contributions. Sub-disciplines of HCI often pursue primary goals other than usability, such as Sustainable HCI (SHCI), HCI for development, or health and wellbeing. For such disciplines, established evaluation methods are not always appropriate or sufficient, and new conventions for identifying, discussing, and justifying suitable evaluation methods need to be established. In this paper, we revisit the purpose and goals of evaluation in HCI and SHCI, and elicit five key elements that can provide guidance for identifying evaluation methods for SHCI research. Our essay is meant as a starting point for discussing current evaluation practice and improving future practice in SHCI; we also believe it holds value for other sub-disciplines in HCI that encounter similar challenges while evaluating their research.
Beyond Translation: Design and Evaluation of an Emotional and Contextual Knowledge Interface for Foreign Language Social Media Posts
Although many social media sites now provide machine translation (MT) for foreign language posts, translation of a post may not suffice to support understanding of, and engagement with, that post. We present SenseTrans, a tool that provides emotional and contextual annotations generated by natural language analysis in addition to machine translation. We evaluated SenseTrans in a laboratory experiment in which native English speakers browsed five Facebook profiles in foreign languages. One group used the SenseTrans interface while the other group used MT alone. Participants using SenseTrans reported significantly greater understanding of the posts, and greater willingness to engage with the posts. However, no additional cognitive load was associated with using an interface that provided more information. These results provide promising support for the idea of using computational tools to annotate communication to support multilingual sense making and interaction on social media.
Projective Windows: Bringing Windows in Space to the Fingertip
In augmented and virtual reality (AR and VR), there may be many 3D planar windows with 2D text, images, and videos on them. However, managing the position, orientation, and scale of such a window in an immersive 3D workspace can be difficult. Projective Windows strategically uses the absolute and apparent sizes of the window at various stages of the interaction to enable the grabbing, moving, scaling, and releasing of the window in one continuous hand gesture. With it, the user can quickly and intuitively manage and interact with windows in space without any controller hardware or dedicated widget. Through an evaluation, we demonstrate that our technique is performant and preferable, and that projective geometry plays an important role in the design of spatial user interfaces.
Scenariot: Spatially Mapping Smart Things Within Augmented Reality Scenes
Emerging simultaneous localization and mapping (SLAM) based tracking techniques give mobile AR devices spatial awareness of the physical world. Still, smart things are not yet fully integrated into this spatial awareness in AR. Therefore, we present Scenariot, a method that enables instant discovery and localization of the surrounding smart things while also spatially registering them with a SLAM-based mobile AR system. By exploiting the spatial relationships between mobile AR systems and smart things, Scenariot fosters in-situ interactions with connected devices. We embed Ultra-Wide Band (UWB) RF units into the AR device and the controllers of the smart things, which allows for measuring the distances between them. With a one-time initial calibration, users can localize multiple IoT devices and map them within the AR scenes. Through a series of experiments and evaluations, we validate the localization accuracy as well as the performance of the enabled spatially aware interactions. Further, we demonstrate various use cases through Scenariot.
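As a rough illustration of how range measurements can yield a position, the sketch below recovers a device location from distances to known anchor points via least squares; the anchor coordinates and noise level are made up, and the paper's one-time calibration and registration with the AR scene involve considerably more.

    # Simplified multilateration: estimate a 3D position from noisy UWB-style
    # range measurements to anchors at known coordinates (synthetic values).
    import numpy as np
    from scipy.optimize import least_squares

    anchors = np.array([[0.0, 0.0, 1.0], [4.0, 0.0, 1.2],
                        [0.0, 3.0, 0.8], [4.0, 3.0, 1.5]])
    true_pos = np.array([1.5, 1.0, 1.1])
    ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.03, 4)

    def residuals(p):
        # Difference between predicted and measured distances for candidate p.
        return np.linalg.norm(anchors - p, axis=1) - ranges

    estimate = least_squares(residuals, x0=np.zeros(3)).x
    print(estimate)   # should land close to true_pos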
AlterWear: Battery-Free Wearable Displays for Opportunistic Interactions
As the landscape of wearable devices continues to expand, power remains a major issue for adoption, usability, and miniaturization. Users are faced with an increasing number of personal devices to manage, charge, and care for. In this work, we argue that power constraints limit the design space of wearable devices. We present AlterWear: an architecture for new wearable devices that implement a batteryless design using electromagnetic induction via NFC and bistable e-ink displays. Although these displays are active only when in proximity to an NFC-enabled device, this unique combination of hardware enables both quick, dynamic interactions and long-term persistent visual display. We demonstrate new wearables enabled through AlterWear with dynamic, fashion-forward, and expressive displays across several form factors, and evaluate them in a user study. By forgoing the need for onboard power, AlterWear expands the ecosystem of functional and fashionable wearable technologies.
The Role of Gamification in Participatory Environmental Sensing: A Study In the Wild
Participatory sensing (PS) and citizen science hold promise for genuinely interactive and inclusive citizen engagement in the meaningful and sustained collection of data about social and environmental phenomena. Yet the underlying motivations for public engagement in PS remain unclear, particularly regarding the role of gamification, for which HCI research findings are often inconclusive. This paper reports the findings of an experimental study specifically designed to further understand the effects of gamification on citizen engagement. Our study involved the development and implementation of two versions (gamified and non-gamified) of a mobile application designed to capture lake ice coverage data in the sub-arctic region. Emerging findings indicate a statistically significant effect of gamification on participants' engagement levels in PS. We outline the motivation, approach, and results of our study and reflect on the implications of the findings for future PS design.
How Information Sharing about Care Recipients by Family Caregivers Impacts Family Communication
Previous research has shown that tracking technologies have the potential to help family caregivers optimize their coping strategies and improve their relationships with care recipients. In this paper, we explore how sharing the tracked data (i.e., caregiving journals and the patient's condition) with other family caregivers affects home care and family communication. Although previous work suggested that family caregivers may benefit from reading the records of others, sharing patients' private information might fuel negative feelings of surveillance and violation of trust for care recipients. To investigate this question, we added a sharing feature to a previously developed tracking tool and deployed it for six weeks in the homes of 15 family caregivers who were caring for a depressed family member. Our findings show how the sharing feature attracted the attention of care recipients and helped the family caregivers discuss sensitive issues with care recipients.
DataInk: Direct and Creative Data-Oriented Drawing
Creating whimsical, personal data visualizations remains a challenge due to a lack of tools that enable creative visual expression while also supporting the binding of graphical content to data. Many data analysis and visualization creation tools target the quick generation of visual representations, but lack the functionality necessary for graphics design. Toolkits and charting libraries offer more expressive power, but require expert programming skills to achieve custom designs. In contrast, sketching affords fluid experimentation with visual shapes and layouts in a free-form manner, but requires one to manually draw every single data point. We aim to bridge the gap between these extremes. We propose DataInk, a system that supports the creation of expressive data visualizations through direct manipulation via pen and touch input. Leveraging commonly held drawing skills, coupled with a novel graphical user interface, DataInk enables direct, fluid, and flexible authoring of creative data visualizations.
Remediating a Design Tool: Implications of Digitizing Sticky Notes
Sticky notes are ubiquitous in design processes because of their tangibility and ease of use. Yet, they have well-known limitations in professional design processes, as documentation and distribution are cumbersome at best. This paper compares the use of sticky notes in ideation with a remediated digital sticky notes setup. The paper contributes a nuanced understanding of what happens when remediating a physical design tool into digital space, by emphasizing focus shifts and breakdowns caused by the technology, but also benefits and promises inherent in the digital media. Despite users' preference for creating physical notes, handling digital notes on boards was easier, and the potential for proper documentation makes the digital setup a viable alternative. While the analogy in our remediation supported a transfer of learned handling, the users' experiences across technological setups impact their use and understanding, yielding new concerns regarding cross-device transfer and collaboration.
Slacktivists or Activists?: Identity Work in the Virtual Disability March
Protests are important social forms of activism, but can be inaccessible to people with disabilities. Online activism, like the 2017 Disability March, has provided alternative venues for involvement in accessible protesting and social movements. In this study, we use identity theory as a lens to understand why and how disabled activists engaged in an online movement, and its impact on their self-concepts. We interviewed 18 disabled activists about their experiences with online protesting during the Disability March. Respondents’ identities (as both disabled individuals and as activists) led them to organize or join the March, evolved alongside the group’s actions, and were reprioritized or strained as a result of their involvement. Our findings describe the values and limitations of this activism to our respondents, highlight the tensions they perceived about their activist identities, and present opportunities to support further accessibility and identity changes by integrating technology into their activist experiences.
Applying Computational Analysis to Textual Data from the Wild: A Feminist Perspective
With technologies that afford much larger-scale data collection than previously imagined, new ways of processing and interpreting qualitative textual data are required. HCI researchers use a range of methods for interpreting the 'full range of human experience' from qualitative data; however, such approaches are not always scalable. Feminist geography seeks to explore how diverse and varied accounts of place can be understood and represented, whilst avoiding reductive classification systems. In this paper, we assess the extent to which unsupervised topic models can support such a research agenda. Drawing on literature from Feminist and Critical GIS, we present a case study analysis of a Volunteered Geographic Information dataset of reviews about breastfeeding in public spaces. We demonstrate that topic modelling can offer novel insights and nuanced interpretations of complex concepts such as privacy and be integrated into a critically reflexive feminist data analysis approach that captures and represents diverse experiences of place.
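For readers unfamiliar with unsupervised topic models, a minimal sketch of the general technique is shown below; the placeholder reviews and parameters are hypothetical and do not come from the paper's dataset.

    # Minimal LDA topic-modelling sketch on placeholder review texts.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    reviews = [
        "quiet corner and comfortable seating for feeding",
        "staff were welcoming and a private room was available",
        "felt exposed with no privacy near the entrance",
    ]
    vectorizer = CountVectorizer(stop_words="english").fit(reviews)
    counts = vectorizer.transform(reviews)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
    terms = vectorizer.get_feature_names_out()
    for topic in lda.components_:
        print([terms[i] for i in topic.argsort()[-3:]])   # top words per topic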
Design Opportunities for AAC and Children with Severe Speech and Physical Impairments
Augmentative and alternative communication (AAC) technologies can support children with severe speech and physical impairments (SSPI) to express themselves. Yet, these seemingly 'enabling' technologies are often abandoned by this target group, suggesting a need to understand how they are used in communication. Little research has considered the interaction between people, interaction design and the material dimension of AAC. To address this, we report on a qualitative video study that examines the situated communication of five children using AAC in a special school. Our findings offer a new perspective on reconceptualising AAC design and use, revealing four areas for future design: (1) incorporating an embodied view of communication, (2) designing to emphasise children's competence and agency, (3) regulating the presence, prominence and value of AAC, and (4) supporting a wider range of communicative functions that help address children's needs.
Facebook in Venezuela: Understanding Solidarity Economies in Low-Trust Environments
Since 2014, Venezuela has experienced a severe economic crisis, including scarcity of basic necessities such as food and medicine. This has resulted in over-priced goods, scams, and other forms of economic abuse. We present an investigation of Venezuelans' efforts to form an alternative, Solidarity Economy (SE) through Facebook Groups. In these groups, individuals can barter for items at fair prices. We highlight group practices and design features of Facebook Groups which support solidarity or anti-solidarity behaviors. We conclude by drawing on Kollock's design principles for online communities to propose strategies for designing more effective SEs in low-trust environments.
Digital Payment and Its Discontents: Street Shops and the Indian Government’s Push for Cashless Transactions
In November 2016, the Government of India banned the vast majority of the nation's banknotes in a move referred to as 'demonetization', with the stated goals of fighting corruption, terrorism, and eventually expanding digital transactions. In this study of 200 shopkeepers in Mumbai and Bengaluru, we found that cash shortage increased digital payment adoption but that digital payments fell after new banknotes became available. Digital payment adoption depended on the nature and scope of transactions, type of product sold, as well as personal factors specific to business owners such as comfort and familiarity with other digital technologies and online transactions. Using theoretical work on market and information behavior, we examined environmental pushes for technology adoption against prevalent transactional practices, trust, and control. We propose that the move toward digital payments must be framed within a larger undertaking of technology-driven modernity that drives these initiatives, rather than just the efficiency or productivity gains digital payments present.
Moving Target Selection: A Cue Integration Model
This paper investigates a common task requiring temporal precision: the selection of a rapidly moving target on a display by invoking an input event while it is within some selection window. Previous work has explored the relationship between accuracy and precision in this task, but the role of visual cues available to users has remained unexplained. To expand modeling of timing performance to multimodal settings, common in gaming and music, our model builds on the principle of probabilistic cue integration. Maximum likelihood estimation (MLE) is used to model how different types of cues are integrated into a reliable temporal estimate for the task. The model deals with temporal structure (repetition, rhythm) and the perceivable movement of the target on the display. It accurately predicts error rate in a range of realistic tasks. Applications include the optimization of difficulty in game-level design.
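The general principle of maximum-likelihood cue integration, weighting each cue by its reliability (inverse variance), can be sketched as follows; the cue values, noise levels, and selection window are illustrative and not taken from the paper.

    # Reliability-weighted (inverse-variance) combination of noisy timing cues,
    # plus an error-rate prediction for a Gaussian response inside a window.
    from scipy.stats import norm

    def integrate_cues(estimates, sigmas):
        weights = [1.0 / s ** 2 for s in sigmas]
        total = sum(weights)
        mu = sum(w * e for w, e in zip(weights, estimates)) / total
        sigma = (1.0 / total) ** 0.5
        return mu, sigma

    def predicted_error_rate(mu, sigma, window):
        half = window / 2.0
        hit = norm.cdf(half, mu, sigma) - norm.cdf(-half, mu, sigma)
        return 1.0 - hit

    # Example: a precise rhythmic cue and a noisier visual-motion cue (seconds).
    mu, sigma = integrate_cues([0.010, -0.020], [0.030, 0.080])
    print(predicted_error_rate(mu, sigma, window=0.100))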
Inferring Loop Invariants through Gamification
Bugs in modern software systems incur significant costs. One promising approach to improve software quality is automated software verification. In this approach, an automated tool tries to prove the software correct once and for all. Although significant progress has been made in this direction, there are still many cases where automated tools fail. We focus specifically on one aspect of software verification that has been notoriously hard to automate: inferring loop invariants that are strong enough to enable verification. In this paper, we propose a solution to this problem through gamification and crowdsourcing. In particular, we present a puzzle game where players find loop invariants without being aware of it, and without requiring any expertise in software verification. We show through an experiment with Mechanical Turk users that players enjoy the game, and are able to solve verification tasks that automated state-of-the-art tools cannot.
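As a reminder of what the players are implicitly producing, the toy function below carries the kind of loop invariant a verifier needs; this example is our own illustration, not one of the game's puzzles.

    # A loop invariant strong enough to establish the postcondition of a summation.
    def array_sum(nums):
        total, i = 0, 0
        while i < len(nums):
            assert total == sum(nums[:i])   # invariant: total holds the first i items
            total += nums[i]
            i += 1
        assert total == sum(nums)           # postcondition follows from the invariant
        return total

    array_sum([3, 1, 4, 1, 5])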
CrowdLayout: Crowdsourced Design and Evaluation of Biological Network Visualizations
Biologists often perform experiments whose results generate large quantities of data, such as interactions between molecules in a cell, that are best represented as networks (graphs). To visualize these networks and communicate them in publications, biologists must manually position the nodes and edges of each network to reflect their real-world physical structure. This process does not scale well, and graph layout algorithms lack the biological underpinnings to offer a viable alternative. In this paper, we present CrowdLayout, a crowdsourcing system that leverages human intelligence and creativity to design layouts of biological network visualizations. CrowdLayout provides design guidelines, abstractions, and editing tools to help novice workers perform like experts. We evaluated CrowdLayout in two experiments with paid crowd workers and real biological network data, finding that crowds could both create and evaluate meaningful, high-quality layouts. We also discuss implications for crowdsourced design and network visualizations in other domains.
Imaginary Design Workbooks: Constructive Criticism and Practical Provocation
This paper reports on design strategies for critical and experimental work that remains constructive. We report findings from a design workshop that explored the “home hub” space through “imaginary design workbooks”. These feature ambiguous images and annotations written in an invented language to suggest a design space without specifying any particular idea. Many of the concepts and narratives which emerged from the workshop focused on extreme situations: some thoughtful, some dystopian, some even mythic. One of the workshop ideas was then developed with a senior social worker who works with young offenders. A “digital social worker” concept was developed and critiqued simultaneously. We draw on Foucault’s history of surveillance to “defamiliarise” both the home hub technology and the current youth justice system. We argue that the dichotomy between “constructive” and “critical” design is false because design is never neutral.
Drunk User Interfaces: Determining Blood Alcohol Level through Everyday Smartphone Tasks
Breathalyzers, the standard quantitative method for assessing inebriation, are primarily owned by law enforcement and used only after a potentially inebriated individual is caught driving. As a result, most people lack access to such specialized hardware. We present drunk user interfaces: smartphone user interfaces that measure how alcohol affects a person's motor coordination and cognition using performance metrics and sensor data. We examine five drunk user interfaces and combine them to form the “DUI app”. DUI uses machine learning models trained on human performance metrics and sensor data to estimate a person's blood alcohol level (BAL). We evaluated DUI on 14 individuals in a week-long longitudinal study wherein each participant used DUI at various BALs. We found that with a global model that accounts for user-specific learning, DUI can estimate a person's BAL with an absolute mean error of 0.005% ± 0.007% and a Pearson's correlation coefficient of 0.96 with breathalyzer measurements.
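A heavily simplified sketch of the modeling step is given below: a standard regressor fit to task-performance features. The feature set, data, and model choice are placeholders and not the authors' pipeline.

    # Placeholder regression from smartphone task-performance features to BAL.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((100, 3))            # e.g., typing speed, tap error rate, sway
    y = rng.random(100) * 0.12          # synthetic BAL labels (%)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X, y, scoring="neg_mean_absolute_error", cv=5)
    print(-scores.mean())               # cross-validated mean absolute error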
“That Really Pushes My Buttons”: Designing Bullying and Harassment Training for the Workplace
Workplace bullying and harassment have been identified as two of the most concerning silent and unseen occupational hazards of the 21st century. The design of bespoke training addressing domain-specific job roles and relations presents a particular challenge. Using the concept of data-in-place, where data is understood as being bound to and produced by a particular place, this paper describes how locally-situated accounts can be used to engage employees in workplace-specific training seminars. Using higher education as a case study, we describe a four-stage design process for future training efforts: (1) in-depth interviews for further understanding of bullying and harassment; (2) design of digital probes for capturing contextual data; (3) probe deployment and subsequent data analysis; (4) data-driven discussion-based seminars. We outline the potential of digital probes to promote the denormalization of toxic workplace cultures, considerations for novel governance models for sensitive data, and the temporal dimension of data-in-place.
Taking into Account Sensory Knowledge: The Case of Geo-technologies for Children with Visual Impairments
This paper argues for designing geo-technologies supporting non-visual sensory knowledge. Sensory knowledge refers to the implicit and explicit knowledge guiding our uses of our senses to understand the world. To support our argument, we build on an 18-month field study on geography classes for primary school children with visual impairments. Our findings show (1) a paradox in the use of non-visual sensory knowledge: described as fundamental to the geography curriculum, it is mostly kept out of school; (2) that accessible geo-technologies in the literature mainly focus on substituting vision with another modality, rather than enabling teachers to build on children's experiences; (3) the importance of the hearing sense in learning about space. We then introduce a probe, a wrist-worn device enabling children to record audio cues during field trips. By giving importance to children's hearing skills, it modified existing practices and actors' opinions on non-visual sensory knowledge. We conclude by reflecting on design implications, and the role of technologies in valuing diverse ways of understanding the world.
Understanding the Uncertainty in 1D Unidirectional Moving Target Selection
In contrast to the extensive studies on static target pointing, much less formal understanding of moving target acquisition can be found in the HCI literature. We designed a set of experiments to identify regularities in 1D unidirectional moving target selection, and found a Ternary-Gaussian model to be descriptive of the endpoint distribution in such tasks. The shape of the distribution, as characterized by μ and σ in the Gaussian model, was primarily determined by the speed and size of the moving target. The model fits the empirical data well, with R² values of 0.95 and 0.94 for μ and σ, respectively. We also demonstrated two extensions of the model, including 1) predicting error rates in moving target selection; and 2) a novel interaction technique to implicitly aid moving target selection. By applying them in a game interface design, we observed good performance in both predicting error rates (e.g., 2.7% mean absolute error) and assisting moving target selection (e.g., a 33% or greater increase in pointing accuracy).
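Under a simplified single-Gaussian reading of the endpoint model (the paper's Ternary-Gaussian formulation refines how μ and σ arise from target speed and size), the error rate for a target of width W, with μ measured relative to the target center, follows directly from the normal CDF Φ:

    P(\text{error}) = 1 - \left[ \Phi\!\left(\frac{W/2 - \mu}{\sigma}\right) - \Phi\!\left(\frac{-W/2 - \mu}{\sigma}\right) \right]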
Agile 3D Sketching with Air Scaffolding
Hand motion and pen drawing can be intuitive and expressive inputs for professional digital 3D authoring. However, their inherent limitations have hampered wider adoption. 3D sketching using hand motion is rapid but rough, and 3D sketching using pen drawing is delicate but tedious. Our new 3D sketching workflow combines these two in a complementary manner. The user makes quick hand motions in the air to generate approximate 3D shapes, and uses them as scaffolds on which to add details via pen-based 3D sketching on a tablet device. Our air scaffolding technique and corresponding algorithm extract only the intended shapes from unconstrained hand motions. Then, the user sketches 3D ideas by defining sketching planes on these scaffolds while appending new scaffolds, as needed. A user study shows that our progressive and iterative workflow enables more agile 3D sketching compared to workflows using either hand motion or pen drawing alone.
KeyTime: Super-Accurate Prediction of Stroke Gesture Production Times
We introduce KeyTime, a new technique and accompanying software for predicting the production times of users' stroke gestures articulated on touchscreens. KeyTime employs the principles and concepts of the Kinematic Theory, such as lognormal modeling of stroke gestures' velocity profiles, to estimate gesture production times significantly more accurately than existing approaches. Our experimental results obtained on several public datasets show that KeyTime predicts user-independent production times that correlate r=.99 with ground truth from just one example of a gesture articulation, while delivering an average error in the predicted time magnitude that is 3 to 6 times smaller than that delivered by CLC, the best prediction technique to date. Moreover, KeyTime reports a wide range of useful statistics, such as the trimmed mean, median, standard deviation, and confidence intervals, providing practitioners with unprecedented levels of accuracy and sophistication to characterize their users' a priori time performance with stroke gesture input.
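For intuition about the lognormal velocity profiles the Kinematic Theory assumes, the sketch below evaluates one such profile; the parameter values are arbitrary and not taken from KeyTime.

    # One lognormal stroke-velocity profile in the spirit of the Kinematic Theory.
    import numpy as np

    def lognormal_velocity(t, D=1.0, t0=0.0, mu=-1.6, sigma=0.3):
        dt = np.maximum(t - t0, 1e-9)
        return (D / (sigma * dt * np.sqrt(2 * np.pi))) * np.exp(
            -((np.log(dt) - mu) ** 2) / (2 * sigma ** 2))

    t = np.linspace(0.001, 1.0, 500)
    v = lognormal_velocity(t)
    print(float(t[v.argmax()]))   # time of peak speed for these parameters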
Entrepreneurship and the Socio-Technical Chasm in a Lean Economy
Online technologies are increasingly hailed as enablers of entrepreneurship and income generation. Recent evidence suggests, however, that even the best such tools disproportionately favor those with pre-existing entrepreneurial advantages. Despite intentions, the technology on its own seems far from addressing socio-economic inequalities. Using participatory action research, we investigated why this might be, in an intimate, close-up context. Over a 1-year period, we–a collaborative team of university researchers and residents of Detroit’s East Side–worked to establish a neighborhood tour whose initial goal was to raise supplementary income and fundraise for community block clubs. We found that in addition to technical requirements, such as communication tools, a range of non-technological efforts is needed to manage projects, build self-efficacy, and otherwise support community participants. Our findings widen Ackerman’s “socio-technical gap” for some contexts and offer a counterpoint to overgeneralized claims about well-designed technologies being able to address certain classes of social challenges.
VirtualSpace – Overloading Physical Space with Multiple Virtual Reality Users
Although virtual reality hardware is now widely available, the uptake of real walking is hindered by the fact that it often requires impractically large amounts of physical space. To address this, we present VirtualSpace, a novel system that allows overloading multiple users immersed in different VR experiences into the same physical space. VirtualSpace accomplishes this by containing each user in a subset of the physical space at all times, which we call tiles; app-invoked maneuvers then shuffle tiles and users across the entire physical space. This allows apps to move their users to where their narrative requires them to be while hiding from users that they are confined to a tile. We show how this enables VirtualSpace to pack four users into 16 m². In our study we found that VirtualSpace allowed participants to use more space and to feel less confined than in a control condition with static, pre-allocated space.
Effects of Socially Stigmatized Crowdfunding Campaigns in Shaping Opinions
Donation-based crowdfunding platforms have an increasing number of campaigns on socially stigmatized topics. These platforms’ widespread online reachability and the large flow of monetary donations have the potential to shape individuals’ opinions by influencing their perceptions. However, little research has been done to investigate whether these campaigns impact individuals’ opinions and how. We conducted an experiment to explore how an attitude-inconsistent campaign on fairness and equality for LGBTIQ people influenced participants’ opinion on this topic. Although all the participants changed their perceived opinions after reading the support for the campaigns, participants opposing equality were less inclined to change their attitude than participants supporting equality. To examine this difference further, we conducted another experiment where participants were exposed to both attitude-consistent and attitude-inconsistent campaigns with varying levels of social support. Participants opposing equality showed less sensitivity to the level of social support, and wanted to donate significantly more money to anti-equality campaigns compared to those who supported equality. Results demonstrate the complex role of crowdfunding campaigns in shaping individuals’ opinions on stigmatized topics.
A Design Framework for Awareness Cues in Distributed Multiplayer Games
In the physical world, teammates develop situation awareness about each other’s location, status, and actions through cues such as gaze direction and ambient noise. To support situation awareness, distributed multiplayer games provide awareness cues – information that games automatically make available to players to support cooperative gameplay. The design of awareness cues can be extremely complex, impacting how players experience games and work with teammates. Despite the importance of awareness cues, designers have little beyond experiential knowledge to guide their design. In this work, we describe a design framework for awareness cues, providing insight into what information they provide, how they communicate this information, and how design choices can impact play experience. Our research, based on a grounded theory analysis of current games, is the first to provide a characterization of awareness cues, providing a palette for game designers to improve design practice and a starting point for deeper research into collaborative play.
Empowerment in HCI – A Survey and Framework
Empowering people through technology is of increasing concern in the HCI community. However, there are different interpretations of empowerment, which diverge substantially. The same term thus describes an entire spectrum of research endeavours and goals. This conceptual ambiguity hinders the development of a meaningful discourse and exchange. To better understand what empowerment means in our community, we reviewed 54 CHI full papers using the terms empower and empowerment. Based on our analysis and informed by prior writings on power and empowerment, we construct a framework that serves as a lens to analyze notions of empowerment in current HCI research. Finally, we discuss the implications of these notions of empowerment on approaches to technology design and offer recommendations for future work. With this analysis, we hope to add structure and terminological clarity to this growing and important facet of HCI research.
Multiray: Multi-Finger Raycasting for Large Displays
We explore and evaluate a multi-finger raycasting design space that we call “multiray”. Each finger projects a ray on to the display, so the user is interacting from a distance using a form of direct input. Specifically, we propose techniques where patterns of ray intersections created by hand postures form 2D geometric shapes that trigger actions and perform direct manipulations going beyond single-point selections. Two formative studies examine characteristics of multi-finger raycasting for different projection methods, shapes, and tasks. Based on the results of those investigations, we demonstrate a number of dynamic UI controls and operations that utilise multiray points and shapes.
Feel the Movement: Real Motion Influences Responses to Take-over Requests in Highly Automated Vehicles
Take-over requests (TORs) in highly automated vehicles are cues that prompt users to resume control. TORs, however, are often evaluated in non-moving driving simulators. This ignores the role of motion, an important source of information for users who have their eyes off the road while engaged in non-driving related tasks. We ran a user study in a moving-base driving simulator to investigate the effect of motion on TOR responses. We found that with motion, user responses to TORs vary depending on the road context where TORs are issued. While previous work showed that participants are fast to respond to urgent cues, we show that this is true only when TORs are presented on straight roads. Urgent cues issued on curved roads elicit slower responses than non-urgent cues on curved roads. Our findings indicate that TORs should be designed to be aware of road context to accommodate natural user responses.
HomeFinder Revisited: Finding Ideal Homes with Reachability-Centric Multi-Criteria Decision Making
Finding an ideal home is a difficult and laborious process. One of the most crucial factors in this process is the reachability between the home location and the concerned points of interest, such as places of work and recreational facilities. However, this importance goes unrecognized in extant real estate systems. By characterizing user requirements and analytical tasks in the context of finding ideal homes, we designed ReACH, a novel visual analytics system that assists people in finding, evaluating, and choosing a home based on multiple criteria, including reachability. In addition, we developed an improved data-driven model for approximating reachability with massive taxi trajectories. This model enables users to interactively integrate their knowledge and preferences to make judicious and informed decisions. We show the improvements in our model by comparing its theoretical complexity with that of the prior study and demonstrate the usability and effectiveness of the proposed system with task-based evaluation.
SpaceTokens: Interactive Map Widgets for Location-centric Interactions
Map users often need to interact repetitively with multiple important locations. For example, a traveler may frequently check her hotel or a train station on a map, use them to localize an unknown location, or investigate routes involving them. Ironically, these location-centric tasks cannot be performed using locations directly; users must instead pan and zoom the map or use a menu to access locations. We propose SpaceTokens, interactive widgets that act as clones of locations, and which users can create and place on map edges like virtual whiteboard magnets. SpaceTokens make location a first-class citizen of map interaction. They empower users to rapidly perform location-centric tasks directly using locations: users can select combinations of on-screen locations and SpaceTokens to control the map window, or connect them to create routes. Participants in a study overwhelmingly preferred a SpaceTokens prototype over Google Maps on identical smartphones for the majority of tasks.
M3 Gesture Menu: Design and Experimental Analyses of Marking Menus for Touchscreen Mobile Interaction
Despite their learning advantages in theory, marking menus have faced adoption challenges in practice, even on today's touchscreen-based mobile devices. We address these challenges by designing, implementing, and evaluating multiple versions of M3 Gesture Menu (M3), a reimagination of marking menus targeted at mobile interfaces. M3 is defined on a grid rather than in a radial space, relies on gestural shapes rather than directional marks, and has constant and stationary space use. Our first controlled experiment on expert performance showed that M3 was faster and less error-prone than traditional marking menus by a factor of two. A second experiment on learning demonstrated for the first time that users could successfully transition to recall-based execution of a dozen commands after three ten-minute practice sessions with both M3 and Multi-Stroke Marking Menu. Together, M3, with its demonstrated resolution, learning, and space use benefits, contributes to the design and understanding of menu selection in the mobile-first era of end-user computing.
Queer Visibility: Supporting LGBT+ Selective Visibility on Social Media
LGBT+ people adjust the presentation of their gender and sexual identities in response to social pressures, but their level of visibility differs between social media. We interviewed seventeen LGBT+ students at a socially conservative university to investigate: (1) how do social media affect LGBT+ users' experience of managing self-presentation; and (2) how do social media affect participation in LGBT+ communities? We develop implications for design to support queering social media. (1) Give people abilities to present themselves with selective visibility, enabling choices about privacy and sharing, in contrast with the HCI design principle of indiscriminate 'making visible'. That is, enable participants to define their social media identities in their own ways. (2) Conduct studies with a methodology that likewise ensures that participants can define their gender and sexual identities in their own ways, rather than according to a predetermined vocabulary.
Bento Browser: Complex Mobile Search Without Tabs
People engaged in complex searches such as planning a vacation or understanding their medical symptoms are often overwhelmed by opening and managing many tabs. These challenges are exacerbated as search moves to smartphones and mobile devices where screen real-estate is limited and tasks are frequently suspended, resumed, and interleaved. Rather than continue to utilize tab-based browsing for complex search, we introduce a new way of browsing through a scaffolded interface. The list of search results serves as a mutable workspace, where a user can track progress on a specific information query. The search query serves as a gateway into this workspace, accessed through a task-subtask hierarchy. We instantiate this in the Bento mobile search system and investigate its effectiveness in three studies. We find converging evidence that users were able to make progress on their complex searching tasks with this structure, and find it more organized and easier to revisit.
Chibitronics in the Wild: Engaging New Communities in Creating Technology with Paper Electronics
We share a study on the public adoption of the Chibitronics circuit sticker toolkit, an open source, commercially available hardware toolkit for learning and creating electronics on paper. We examine sales data over a two-and-a-half-year period from November 2013, when the kit was launched commercially, to June 2016. We also look at publicly available project documentation from users during this period. We find that the Chibitronics user community confounds norms for traditional technology-making communities, especially in gender demographics. We explore the artifacts and types of documentation produced by users to learn about the various backgrounds, values, and goals of subcommunities, which include educators, Makers, and crafters. In particular, we focus on artifacts from the craft community as a surprising and distinctive subset of technology creators. The diversity in public engagement shows how paper electronics tools like Chibitronics can be an effective approach for engaging new and broader audiences to participate in technology creation.
A Trip to the Moon: Personalized Animated Movies for Self-reflection
Self-tracking physiological and psychological data poses the challenge of presentation and interpretation. Insightful narratives for self-tracking data can motivate the user towards constructive self-reflection. One powerful form of narrative that engages audiences across cultures and age groups is the animated movie. We collected a week of self-reported mood and behavior data from each user and used Unity to create a personalized animation based on their data. We evaluated the impact of each user's video in a randomized controlled trial with a non-personalized animated video as the control. We found that personalized videos tend to be more emotionally engaging, prompting more and lengthier writing that indicated self-reflection about moods and behaviors, compared to non-personalized control videos.
Coco’s Videos: An Empirical Investigation of Video-Player Design Features and Children’s Media Use
In this study, we present Coco’s Videos, a video-viewing platform for preschoolers designed to support them in learning to self-manage their media consumption. We report results from a three-week experimental deployment in 24 homes in which preschoolers used three different versions of the platform: one that is neutral to the limits they set, one that enforces the limits they set, and one that attempts to erode the limits they set by automatically playing additional content after the planned content is finished (“post-play”). We found that post-play significantly reduced children’s autonomy and likelihood of self-regulation, extended video-viewing time, and led to increases in parent intervention. We found that the lock-out mechanism did not reduce video-viewing time or the likelihood of parent intervention. Together, our results suggest that avoiding platforms that work to undermine the user’s intentions is more likely to help children self-regulate their media use than rigid parental controls.
ResearchIME: A Mobile Keyboard Application for Studying Free Typing Behaviour in the Wild
We present a data logging concept, tool, and analyses to facilitate studies of everyday mobile touch keyboard use and free typing behaviour: 1) We propose a filtering concept to log typing without recording readable text and assess reactions to filters with a survey (N=349). 2) We release an Android keyboard app and backend that implement this concept. 3) Based on a three-week field study (N=30), we present the first analyses of keyboard use and typing biometrics on such free text typing data in the wild, including speed, postures, apps, auto correction, and word suggestions. We conclude that research on mobile keyboards benefits from observing free typing beyond the lab and discuss ideas for further studies.
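One simple way to realize such a filter, shown below purely as our own illustration rather than the released app's exact logic, is to log only timing and character categories so that the typed text cannot be reconstructed.

    # Log keystroke timing and character category instead of the character itself.
    import time

    def filter_keystroke(char):
        if char.isalpha():
            category = "letter"
        elif char.isdigit():
            category = "digit"
        elif char.isspace():
            category = "space"
        else:
            category = "symbol"
        return {"t": time.time(), "category": category}

    log = [filter_keystroke(c) for c in "Hello CHI 2018!"]
    print(log[:3])   # timings plus categories only; no readable text retained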
Wearables for Learning: Examining the Smartwatch as a Tool for Situated Science Reflection
Relatively little research exists on the use of smartwatches to support learning. This paper presents an approach for using commodity smartwatches as a tool for situated reflection in elementary school science. The approach was embodied in a smartwatch app called ScienceStories that allows students to voice record reflections about science concepts anytime, anywhere. We conducted a study with 18 fifth-grade children to investigate, first, the effects of ScienceStories on students' science self-efficacy, and, second, the effects of different motivational structures (gamified, narrative-based, hybrid) designed into the smartwatch app on the quality and quantity of students' use. Quantitative results showed that ScienceStories increased science self-efficacy, especially when paired with a motivational structure. The gamified version had the highest quantity of use, while the narrative-based version performed worst. Qualitative findings described how students' recordings related to science topics and were contextualized. We discuss how our findings contribute to an understanding of how to design smartwatch apps for educational purposes.
Investigating How Online Help and Learning Resources Support Children’s Use of 3D Design Software
3D design software is increasingly available to children through libraries, maker spaces, and for free on the web. This unprecedented availability has the potential to unleash children's creativity in cutting edge domains, but is limited by the steep learning curve of the software. Unfortunately, there is little past work studying the breakdowns faced by children in this domain; most past work has focused on adults in professional settings. In this paper, we present a study of online learning resources and help-seeking strategies available to children starting out with 3D design software. We find that children face a range of challenges when trying to learn 3D design independently: tutorials present instructions at a granularity that leads to overlooked and incorrectly performed actions, and online help-seeking is largely ineffective due to challenges with query formulation and evaluating found information. Based on our findings, we recommend design directions for next-generation help and learning systems tailored to children.
Tactile Information Transmission by 2D Stationary Phantom Sensations
A phantom sensation refers to an illusory tactile sensation perceived midway between multiple distant stimulations on the skin. Phantom sensations have been used intensively in tactile interfaces owing to their simplicity and effectiveness. Despite that, the perceptual performance of phantom sensations is not completely understood, especially for 2D cases. This work is concerned with 2D stationary phantom sensations and their fundamental value as a means for information display. In User Study 1, we quantified the information transmission capacity using an absolute identification task of 2D phantom sensations. In User Study 2, we probed the distributions of the actual perceived positions of 2D phantom sensations. The investigations included both types of phantom sensations: within and outside the body. Our results provide general guidelines for leveraging 2D phantom sensations in the design of spatial tactile displays.
Understanding Artefact and Process Challenges for Designing Low-Res Lighting Displays
Low-resolution (low-res) lighting displays are increasingly used by HCI researchers, designers, and in industry as a versatile and aesthetic medium for deploying ambient interfaces in various contexts. These display types distinguish themselves from conventional high-res screens through high contrast, high-power LED technology that remains visible even in bright environments, and the ability to take on three-dimensional free forms. However, to date most work on low-res displays has been either of experimental nature or carried out in isolated industry contexts. This paper addresses this gap through an analysis of our own experiences from previous experimental design studies and related work, which led us to five domain challenges for designing low-res displays. We then describe how we approached these challenges in a deployment study, which involved the implementation of a prototype guided by a low-res prototyping toolkit. Based on an analysis of our design process and findings from the deployment study, we present ten design recommendations for low-res lighting displays.
Thermorph: Democratizing 4D Printing of Self-Folding Materials and Interfaces
We develop a novel method for printing complex self-folding geometries. We demonstrate that with a desktop fused deposition modeling (FDM) 3D printer, off-the-shelf printing filaments, and a design editor, we can print flat thermoplastic composites and trigger them to self-fold into 3D with arbitrary bending angles. This technique, which we call Thermorph, is suitable for prototyping hollow and foldable 3D shapes without losing key features. We describe a new curved folding origami design algorithm, which compiles arbitrary 3D models into unfolded 2D models expressed as G-code for FDM printers. To demonstrate the Thermorph platform, we designed and printed complex self-folding geometries (up to 70 faces), including 15 self-curved geometric primitives and 4 self-curved applications, such as chairs, the simplified Stanford Bunny and flowers. Compared to standard 3D printing, our method saves 60%–87% of the printing time across the chosen shapes.
Looks Can Be Deceiving: Using Gaze Visualisation to Predict and Mislead Opponents in Strategic Gameplay
In competitive co-located gameplay, players use their opponents' gaze to make predictions about their plans while simultaneously managing their own gaze to avoid giving away their plans. This socially competitive dimension is lacking in most online games, where players are out of sight of each other. We conducted a lab study using a strategic online game, finding that (1) players are better at discerning their opponent's plans when shown a live visualisation of the opponent's gaze, and (2) players who are aware that their gaze is tracked will manipulate their gaze to keep their intentions hidden. We describe the strategies that players employed, to various degrees of success, to deceive their opponent through their gaze behaviour. This gaze-based deception adds an effortful and challenging aspect to the competition. Lastly, we discuss the various implications of our findings and their applicability for future game design.
Security During Application Development: an Application Security Expert Perspective
Many of the security problems that people face today, such as security breaches and data theft, are caused by security vulnerabilities in application source code. Thus, there is a need to understand and improve the experiences of those who can prevent such vulnerabilities in the first place – software developers as well as application security experts. Several studies have examined developers’ perceptions and behaviors regarding security vulnerabilities, demonstrating the challenges they face in performing secure programming and utilizing tools for vulnerability detection. We expand upon this work by focusing on those primarily responsible for application security – security auditors. In an interview study of 32 application security experts, we examine their views on application security processes, their workflows, and their interactions with developers in order to further inform the design of tools and processes to improve application security.
Insert Needle Here! A Custom Display for Optimized Biopsy Needle Placement
Needle-guiding templates are used for a variety of minimally invasive medical interventions. While they physically support needle placement with a grid of holes, they lack integrated information about where needles need to be inserted. Physicians must manually determine the correct holes based on the output of planning software – a workflow that is error-prone and lengthy. We address these issues by embedding a display into the template using electroluminescence (EL) screen printing. The EL display is connected to planning software and illuminates the correct hole. In an empirical evaluation with physicians and researchers from the medical domain, we compare the illuminated template against the conventional template as used in magnetic resonance imaging (MRI) guided prostate biopsies. Our results show that the EL display significantly improves task completion time by 51%, task load by 47%, and usability by 30%.
Automatic Diagnosis of Students’ Misconceptions in K-8 Mathematics
K-8 mathematics students must learn many procedures, such as addition and subtraction. Students frequently learn 'buggy' variations of these procedures, which we ideally could identify automatically. This is challenging because there are many possible variations that reflect deep compositions of procedural thought. Existing approaches for K-8 math use manually specified variations which do not scale to new math algorithms or previously unseen misconceptions. Our system examines students' answers and infers how they incorrectly combine basic skills into complex procedures. We evaluate this approach on data from approximately 300 students. Our system replicates 86% of the answers that contain clear systematic mistakes (13% of all answers). Investigating further, we found that 77% at least partially replicate a known misconception, with 53% matching exactly. We also present data from 29 participants showing that our system can demonstrate inferred incorrect procedures to an educator as successfully as a human expert.
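A classic example of the kind of 'buggy' procedure such a system must recognize is the smaller-from-larger subtraction bug, sketched below for illustration; the paper's inference over composed skills is far more general than this hand-coded case.

    # The smaller-from-larger bug: subtract the smaller digit from the larger in
    # every column instead of borrowing (assumes a >= b for the demo).
    def buggy_subtract(a, b):
        da = str(a)
        db = str(b).rjust(len(da), "0")
        return int("".join(str(abs(int(x) - int(y))) for x, y in zip(da, db)))

    print(buggy_subtract(52, 38))   # buggy answer 26; the correct answer is 14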
Defining and Predicting the Localness of Volunteered Geographic Information using Ground Truth Data
Many applications of geotagged content are predicated on the concept of localness (e.g., local restaurant recommendation, mining social media for local perspectives on an issue). However, definitions of who is a “local” in a given area are typically informal and ad-hoc and, as a result, approaches for localness assessment that have been used in the past have not been formally validated. In this paper, we begin the process of addressing these gaps in the literature. Specifically, we (1) formalize definitions of “local” using themes identified in a 30-paper literature review, (2) develop the first ground truth localness dataset consisting of 132 Twitter users and 58,945 place-tagged tweets, and (3) use this dataset to evaluate existing localness assessment approaches. Our results provide important methodological guidance to the large body of research and practice that depends on the concept of localness and suggest means by which localness assessment can be improved.
Bots & (Main)Frames: Exploring the Impact of Tangible Blocks and Collaborative Play in an Educational Programming Game
While recent work has begun to evaluate the efficacy of educational programming games, many common design decisions in these games (e.g., single player gameplay using touchpad or mouse) have not been explored for learning outcomes. For instance, alternative design approaches such as collaborative play and embodied interaction with tangibles may also provide important benefits to learners. To better understand how these design decisions impact learning and related factors, we created an educational programming game that allows for systematically varying input method and mode of play. In this paper, we describe design rationale for mouse and tangible versions of our game, and report a 2×2 factorial experiment comparing efficacy of mouse and tangible input methods with individual and collaborative modes of play. Results indicate tangibles have a greater positive impact on learning, situational interest, enjoyment, and programming self-beliefs. We also found collaborative play helps further reduce programming anxiety over individual play.
StammerApp: Designing a Mobile Application to Support Self-Reflection and Goal Setting for People Who Stammer
Stammering is a speech disorder affecting approximately 1% of the worldwide population. It can have associated impacts on daily life, such as loss of confidence in social situations and increased anxiety levels (particularly when speaking to strangers). Work exploring the development of digital tools to support people who stammer (PwS) is emerging. However, there is a paucity of research engaging PwS in the design process, with participation being facilitated mainly in testing phases. In this paper, we describe the user-centered design, development and evaluation of StammerApp, a mobile application to support PwS. We contribute insights into the challenges and barriers that PwS experience day-to-day and reflect on the complexities of designing with this diverse group. Finally, we present a set of design recommendations for the development of tools to support PwS in their everyday interactions, and provide an example of how these might be envisioned through the StammerApp prototype.
Contextualizing Privacy Decisions for Better Prediction (and Protection)
Modern mobile operating systems implement an ask-on-first-use policy to regulate applications’ access to private user data: the user is prompted to allow or deny access to a sensitive resource the first time an app attempts to use it. Prior research shows that this model may not adequately capture user privacy preferences because subsequent requests may occur under varying contexts. To address this shortcoming, we implemented a novel privacy management system in Android, in which we use contextual signals to build a classifier that predicts user privacy preferences under various scenarios. We performed a 37-person field study to evaluate this new permission model under normal device usage. From our exit interviews and collection of over 5 million data points from participants, we show that this new permission model reduces the error rate by 75% (i.e., fewer privacy violations), while preserving usability. We offer guidelines for how platforms can better support user privacy decision making.
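A bare-bones sketch of the idea of predicting permission decisions from contextual signals is shown below; the features, encoding, and classifier are our placeholders, not the system described in the paper.

    # Predict allow/deny from contextual signals (synthetic toy data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import OneHotEncoder

    # Each row: (foreground_app, requested_permission, screen_on)
    contexts = np.array([
        ["maps", "location", "1"], ["game", "location", "0"],
        ["camera", "microphone", "1"], ["game", "contacts", "0"],
    ])
    decisions = np.array([1, 0, 1, 0])   # 1 = allow, 0 = deny

    encoder = OneHotEncoder(handle_unknown="ignore").fit(contexts)
    clf = LogisticRegression().fit(encoder.transform(contexts), decisions)
    print(clf.predict(encoder.transform([["maps", "location", "0"]])))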
OptiSpace: Automated Placement of Interactive 3D Projection Mapping Content
We present OptiSpace, a system for the automated placement of perspectively corrected projection mapping content. We analyze the geometry of physical surfaces and the viewing behavior of users over time using depth cameras. Our system measures user view behavior and simulates a virtual projection mapping scene users would see if content were placed in a particular way. OptiSpace evaluates the simulated scene according to perceptual criteria, including visibility and visual quality of virtual content. Finally, based on these evaluations, it optimizes content placement, using a two-phase procedure involving adaptive sampling and the covariance matrix adaptation algorithm. With our proposed architecture, projection mapping applications are developed without any knowledge of the physical layouts of the target environments. Applications can be deployed in different uncontrolled environments, such as living rooms and office spaces.
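For readers unfamiliar with the covariance matrix adaptation step mentioned above, the following sketch shows how a placement score could be minimized with CMA-ES, assuming the third-party pycma package. The cost function, grid size, and parameters are invented stand-ins, not OptiSpace’s perceptual criteria.

```python
# Illustrative sketch of optimizing a 2D content placement with CMA-ES, assuming the
# third-party `cma` package (pycma). The cost function is a stand-in, not OptiSpace's.
import cma
import numpy as np

def placement_cost(xy, visibility_map):
    """Lower is better: penalize placements in poorly visible regions."""
    x, y = np.clip(xy, 0, np.array(visibility_map.shape) - 1).astype(int)
    return 1.0 - visibility_map[x, y]

visibility = np.random.rand(64, 64)  # stand-in for measured view behavior over time
es = cma.CMAEvolutionStrategy([32.0, 32.0], 10.0, {"maxiter": 50, "verbose": -9})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [placement_cost(c, visibility) for c in candidates])
print("best placement:", es.result.xbest)
```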
Technology and the Givens of Existence: Toward an Existential Inquiry Framework in HCI Research
The profound impact of digital technologies on human life makes it imperative for HCI research to deal with the most fundamental aspects of human existence. Arguably, insights from existential philosophy and psychology are highly relevant for addressing such issues. Building on previous attempts to bring existential themes and terminology into HCI, this paper argues that Yalom’s notion of “the givens of existence”, as well as related work in experimental existential psychology, can inform the development of an existential inquiry framework in HCI. The envisioned framework is intended to complement current approaches in HCI by specifically focusing on the existential aspects of the design and use of technology. The paper reflects on possible ways in which existential concepts can support HCI research, and maintains that adopting an existential framework in HCI would be consistent with the overall conceptual development of the field.
SESSION: Paper Presentations
Smart Kitchens for People with Cognitive Impairments: A Qualitative Study of Design Requirements
Individuals with cognitive impairments currently leverage extensive human resources during their transitions from assisted living to independent living. In Western Europe, many government-supported volunteer organizations provide sheltered living facilities: supervised environments in which people with cognitive impairments collaboratively learn daily living skills. In this paper, we describe communal cooking practices in sheltered living facilities and identify opportunities for supporting these with interactive technology to reduce volunteer workload. We conducted two contextual observations of twelve people with cognitive impairments cooking in sheltered living facilities and supplemented this data through interviews with four employees and volunteers who supervise them. Through thematic analysis, we identified four themes to inform design requirements for communal cooking activities: work organization, community, supervision, and practicalities. Based on these, we present five design implications for assistive systems in kitchens for people with cognitive impairments.
Graphical Perception of Continuous Quantitative Maps: the Effects of Spatial Frequency and Colormap Design
Continuous ‘pseudocolor’ maps visualize how a quantitative attribute varies smoothly over space. These maps are widely used by experts and lay citizens alike for communicating scientific and geographical data. A critical challenge for designers of these maps is selecting a color scheme that is both effective and aesthetically pleasing. Although there exist empirically grounded guidelines for color choice in segmented maps (e.g., choropleths), continuous maps are significantly understudied, and their color-coding guidelines are largely based on expert opinion and design heuristics; many of these guidelines have yet to be verified experimentally. We conducted a series of crowdsourced experiments to investigate how the perception of continuous maps is affected by colormap characteristics and spatial frequency (a measure of data complexity). We find that spatial frequency significantly impacts the effectiveness of color encodings, but the precise effect is task-dependent. While rainbow schemes afforded the highest accuracy in quantity estimation irrespective of spatial complexity, divergent colormaps significantly outperformed other schemes in tasks requiring the perception of high-frequency patterns. We interpret these results in relation to current practices, and devise new and more granular guidelines for color mapping in continuous maps.
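As a hedged illustration of what a spatial-frequency measure of map complexity might look like, the sketch below computes a power-weighted mean radial frequency from a 2D FFT of a scalar field. The paper’s exact complexity measure may differ; the function name and test fields are assumptions.

```python
# Rough sketch of one way to quantify the spatial frequency of a scalar field with a
# 2D FFT; the paper's exact complexity measure may differ.
import numpy as np

def mean_spatial_frequency(field):
    """Power-weighted mean radial frequency of a 2D scalar field."""
    spectrum = np.fft.fftshift(np.fft.fft2(field - field.mean()))
    power = np.abs(spectrum) ** 2
    ny, nx = field.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)  # radial frequency per cell
    return (radius * power).sum() / power.sum()

smooth = np.outer(np.sin(np.linspace(0, np.pi, 128)), np.sin(np.linspace(0, np.pi, 128)))
noisy = smooth + 0.5 * np.random.rand(128, 128)
print(mean_spatial_frequency(smooth), "<", mean_spatial_frequency(noisy))
```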
Wall++: Room-Scale Interactive and Context-Aware Sensing
Human environments are typified by walls; homes, offices, schools, museums, hospitals, and pretty much every indoor context one can imagine has walls. In many cases, they make up a majority of readily accessible indoor surface area, and yet they are static: their primary function is to be a wall, separating spaces and hiding infrastructure. We present Wall++, a low-cost sensing approach that allows walls to become a smart infrastructure. Instead of merely separating spaces, walls can now enhance rooms with sensing and interactivity. Our wall treatment and sensing hardware can track users’ touch and gestures, as well as estimate body pose if users are close. By capturing airborne electromagnetic noise, we can also detect what appliances are active and where they are located. Through a series of evaluations, we demonstrate that Wall++ can enable robust room-scale interactive and context-aware applications.
Design Vocabulary for Human–IoT Systems Communication
Digital devices and intelligent systems are becoming popular and ubiquitous all around us. However, they seldom provide sufficient feedforward and feedback to reassure users of their current status and to indicate what actions they are about to perform. In this study, we selected and analyzed nine concept videos on future IoT products/systems. Through systematic analysis of the interactions and communications of users with the machines and systems demonstrated in the films, we extracted 38 design vocabulary items and clustered them into 12 groups: Active, Request, Trigger functions, Approve, Reject, Notify, Recommend, Guide, Show problems, Express emotions, Exchange info, and Socialize. This framework can not only inspire designers to create self-explanatory intelligence, but also support developers in providing a language structure at different levels of the periphery of human attention. Through enhanced situated awareness, human–IoT system interaction can become more seamless and graceful.
Accountability Work: Examining the Values, Technologies and Work Practices that Facilitate Transparency in Charities
Charities are subject to stringent transparency and accountability requirements from government and funders to ensure that they are conducting work and spending money appropriately. Charities are increasingly important to civic life and have unique characteristics as organisations. This provides a rich space in which HCI research may learn from and affect both held notions of transparency and accountability, and the relationships between these organisations and their stakeholders. We conducted ethnographic fieldwork and workshops over a seven-month period at a charity. We aimed to understand how the transparency obligations of a charity manifest through work and how the workers of a charity reason about transparency and accountability as an everyday practice. Our findings highlight how organisations engage in presenting different accounts of their work; how workers view their legal transparency obligations in contrast with their accountability to their everyday community; and how their labour does not translate well to outcome measures or metrics. We discuss implications for the design of future systems that support organisations to produce accounts of their work as part of everyday practice.
Crowdsourcing Treatments for Low Back Pain
Low back pain (LBP) is a globally common condition with no silver bullet solutions. Further, the lack of therapeutic consensus causes challenges in choosing suitable solutions to try. In this work, we crowdsourced knowledge bases on LBP treatments. The knowledge bases were used to rank and offer best-matching LBP treatments to end users. We collected two knowledge bases: one from clinical professionals and one from non-professionals. Our quantitative analysis revealed that non-professional end users perceived the best treatments by both groups as equally good. However, the worst treatments by non-professionals were clearly seen as inferior to the lowest ranking treatments by professionals. Certain treatments by professionals were also perceived significantly differently by non-professionals and professionals themselves. Professionals found our system handy for self-reflection and for educating new patients, while non-professionals appreciated the reliable decision support that also respected the non-professional opinion.
Emotional Dialogue Generation using Image-Grounded Language Models
Computer-based conversational agents are becoming ubiquitous. However, for these systems to be engaging and valuable to the user, they must be able to express emotion, in addition to providing informative responses. Humans rely on much more than language during conversations; visual information is key to providing context. We present the first example of an image-grounded conversational agent using visual sentiment, facial expression and scene features. We show that key qualities of the generated dialogue can be manipulated by the features used for training the agent. We evaluate our model on a large and very challenging real-world dataset of conversations from social media (Twitter). The image-grounding leads to significantly more informative, emotional and specific responses, and the exact qualities can be tuned depending on the image features used. Furthermore, our model improves the objective quality of dialogue responses when evaluated on standard natural language metrics.
Science Everywhere: Designing Public, Tangible Displays to Connect Youth Learning Across Settings
A major challenge in education is understanding how to connect learning experiences across settings (e.g., school, afterschool, and home) for youth. In this paper, we introduce and describe the participatory design process we undertook to develop Science Everywhere (SE), a sociotechnical system through which children share their everyday science learning via social media. Public displays installed throughout the neighborhood invite parents, adults, peers, and community members to interact with children’s ideas to better develop connections for learning across settings. Our case study of community interactions with the public displays illuminates how these technologies encouraged behaviors such as the noticing of children’s ideas, recognition of people in the neighborhood, and bridging to new learning opportunities for youth.
Collaborative Reflection: A Practice for Enriching Research Partnerships Spanning Culture, Discipline, and Time
All too often, research partnerships are project-driven and short-lived. Multi-lifespan design and other longer-term approaches offer alternative models. In this paper, we contribute one alternative model for cross-boundary research partnerships spanning longer timeframes and offer one best practice: collaborative reflection. Specifically, we provide an in-depth case study of a multi-lifespan design partnership (over nine years and ongoing) between a Rwandan NGO focused on peacebuilding and a US university research group focused on information design theory and method. First, we document our process for conducting a collaborative reflection that seeks balance among the contributors while navigating differences in culture, discipline, experience, and skills. Next, we reflect on five themes: (1) common ground: sensibilities and commitments; (2) trust; (3) research landscape: crossing nations and institutions; (4) research as a healing mechanism; and (5) multi-lifespan design partnership. We conclude with a discussion of overarching considerations for design researchers who engage in cross-boundary research partnership.
Understanding Face and Eye Visibility in Front-Facing Cameras of Smartphones used in the Wild
Commodity mobile devices are now equipped with high-resolution front-facing cameras, allowing applications in biometrics (e.g., FaceID in the iPhone X), facial expression analysis, or gaze interaction. However, it is unknown how often users hold devices in a way that allows capturing their face or eyes, and how this impacts detection accuracy. We collected 25,726 in-the-wild photos taken with the front-facing cameras of smartphones, along with associated application usage logs. We found that the full face is visible about 29% of the time, and that in most cases the face is only partially visible. Furthermore, we identified an influence of users’ current activity; for example, when watching videos, the eyes but not the entire face are visible 75% of the time in our dataset. We found that a state-of-the-art face detection algorithm performs poorly on photos taken with front-facing cameras. We discuss how these findings impact mobile applications that leverage face and eye detection, and derive practical implications to address the limitations of the state of the art.
Perspective on and Re-orientation of Physical Proxies in Object-Focused Remote Collaboration
Remote collaborators working together on physical objects have difficulty building a shared understanding of what each person is talking about. Conventional video chat systems are insufficient for many situations because they present a single view of the object in a flattened image. To understand how this limited perspective affects collaboration, we designed the Remote Manipulator (ReMa), which can reproduce orientation manipulations on a proxy object at a remote site. We conducted two studies with ReMa, with two main findings. First, a shared perspective is more effective and preferred compared to the opposing perspective offered by conventional video chat systems. Second, the physical proxy and video chat complement one another in a combined system: people used the physical proxy to understand objects, and used video chat to perform gestures and confirm remote actions.
The Index of Pupillary Activity: Measuring Cognitive Load vis-à-vis Task Difficulty with Pupil Oscillation
A novel eye-tracked measure of the frequency of pupil diameter oscillation is proposed for capturing what is thought to be an indicator of cognitive load. The proposed metric, termed the Index of Pupillary Activity, is shown to discriminate task difficulty vis-à-vis cognitive load (if the implied causality can be assumed) in an experiment where participants performed easy and difficult mental arithmetic tasks while fixating a central target (a requirement for replication of prior work). The paper’s contribution is twofold: first, full documentation is provided for the calculation of the proposed measurement, which can be considered an alternative to the existing proprietary Index of Cognitive Activity (ICA), making it possible for researchers to replicate the experiment and build their own software implementing the measurement. Second, several aspects of the ICA are approached in a more data-sensitive way with the goal of improving the measurement’s performance.
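As a loose, unofficial sketch of counting pupil-diameter oscillation events from wavelet detail coefficients (the general idea behind frequency-based pupil metrics), the code below uses the third-party PyWavelets package. The wavelet choice, thresholding rule, and normalization are assumptions; the paper’s published procedure should be consulted for the actual IPA computation.

```python
# Loose sketch of counting pupil-diameter oscillation events from wavelet detail
# coefficients, assuming the third-party PyWavelets package; not the published IPA.
import numpy as np
import pywt

def oscillation_rate(pupil_diameter, duration_seconds, wavelet="sym16"):
    """Count prominent local maxima of the wavelet detail coefficients per second."""
    _, detail = pywt.dwt(pupil_diameter, wavelet)
    magnitude = np.abs(detail)
    threshold = magnitude.mean() + magnitude.std()  # assumed, ad-hoc threshold
    peaks = (magnitude[1:-1] > magnitude[:-2]) & (magnitude[1:-1] > magnitude[2:]) \
            & (magnitude[1:-1] > threshold)
    return peaks.sum() / duration_seconds

t = np.linspace(0, 10, 600)                       # 10 s of 60 Hz pupil samples
signal = 4.0 + 0.1 * np.sin(2 * np.pi * 1.5 * t)  # toy pupil-diameter trace (mm)
print(oscillation_rate(signal, duration_seconds=10.0))
```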
Voicesetting: Voice Authoring UIs for Improved Expressivity in Augmentative Communication
Alternative and augmentative communication (AAC) systems used by people with speech disabilities rely on text-to-speech (TTS) engines for synthesizing speech. Advances in TTS systems allowing for the rendering of speech with a range of emotions have yet to be incorporated into AAC systems, leaving AAC users with speech that is mostly devoid of emotion and expressivity. In this work, we describe voicesetting as the process of authoring the speech properties of text. We present the design and evaluation of two voicesetting user interfaces: the Expressive Keyboard, designed for rapid addition of expressivity to speech, and the Voicesetting Editor, designed for more careful crafting of the way text should be spoken. We evaluated the perceived output quality, requisite effort, and usability of both interfaces; the concept of voicesetting and our interfaces were highly valued by end-users as an enhancement to communication quality. We close by discussing design insights from our evaluations.
Digital Outdoor Play: Benefits and Risks from an Interaction Design Perspective
Outdoor play has been proven to be beneficial for children’s development. HCI research on Heads-Up Games suggests that the well-known decline in outdoor play can be addressed by adding technology to such activities. However, outdoor play benefits such as social interaction, creative thinking, and physical activity may be compromised when digital features are added. We present the design and implementation of a novel digitally-enhanced outdoor-play prototype. Our evaluation with 48 children revealed that a non-digital version of the novel outdoor play object afforded social play and game invention. Evaluation of the digitally-enhanced version showed reduced collaborative social interaction and reduced creative thinking when compared with baseline. However, we showed that specific sensing and feedback features better supported outdoor play benefits. For example, non-accumulated feedback was shown to increase collaborative play and creative thinking in comparison to accumulated feedback. We provide evidence-based recommendations for designers of outdoor play technologies.
Leveraging Community-Generated Videos and Command Logs to Classify and Recommend Software Workflows
Users of complex software applications often rely on inefficient or suboptimal workflows because they are not aware that better methods exist. In this paper, we develop and validate a hierarchical approach combining topic modeling and frequent pattern mining to classify the workflows offered by an application, based on a corpus of community-generated videos and command logs. We then propose and evaluate a design space of four different workflow recommender algorithms, which can be used to recommend new workflows and their associated videos to software users. An expert validation of the task classification approach found that 82% of the time, experts agreed with the classifications. We also evaluate our workflow recommender algorithms, demonstrating their potential and suggesting avenues for future work.
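To make the hierarchical idea of topic modeling over command logs followed by frequent pattern mining more concrete, here is a toy sketch using scikit-learn’s LDA and simple bigram counting. The command names, corpus, and parameters are invented; this is not the paper’s pipeline.

```python
# Illustrative sketch combining topic modeling over command logs with simple frequent
# sequence counting; names and parameters are assumptions, not the paper's pipeline.
from collections import Counter
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Each "document" is one session's command log, flattened to a string of command names.
sessions = [
    "extrude sketch fillet extrude sketch fillet",
    "render assign_material render adjust_light render",
    "sketch extrude fillet chamfer extrude sketch",
]

# 1) Topic modeling groups sessions into coarse task categories.
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(sessions)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
topics = lda.transform(counts).argmax(axis=1)

# 2) Within each topic, count frequent command bigrams as candidate workflows.
for topic in set(topics):
    bigrams = Counter()
    for session, t in zip(sessions, topics):
        if t == topic:
            cmds = session.split()
            bigrams.update(zip(cmds, cmds[1:]))
    print("topic", topic, "frequent workflows:", bigrams.most_common(2))
```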
Towards Algorithmic Experience: Initial Efforts for Social Media Contexts
Algorithms influence most of our daily activities and decisions, and they guide our behaviors. It has been argued that algorithms even have a direct impact on democratic societies. Human-Computer Interaction research needs to develop analytical tools for describing the interaction with, and experience of, algorithms. Based on user participatory workshops focused on scrutinizing Facebook’s newsfeed, an algorithm-influenced social media platform, we propose the concept of Algorithmic Experience (AX) as an analytic framing for making the interaction with and experience of algorithms explicit. Connecting it to design, we articulate five functional categories of AX that are particularly important to cater for in social media: profiling transparency and management, algorithmic awareness and control, and selective algorithmic memory.
Which one is me?: Identifying Oneself on Public Displays
While user representations are extensively used on public displays, it remains unclear how well users can recognize their own representation among those of surrounding users. We study the most widely used representations: abstract objects, skeletons, silhouettes and mirrors. In a prestudy (N=12), we identify five strategies that users follow to recognize themselves on public displays. In a second study (N=19), we quantify the users’ recognition time and accuracy with respect to each representation type. Our findings suggest that there is a significant effect of (1) the representation type, (2) the strategies performed by users, and (3) the combination of both on recognition time and accuracy. We discuss the suitability of each representation for different settings and provide specific recommendations as to how user representations should be applied in multi-user scenarios. These recommendations guide practitioners and researchers in selecting the representation that optimizes the most for the deployment’s requirements, and for the user strategies that are feasible in that environment.
Analysis and Modeling of Grid Performance on Touchscreen Mobile Devices
Touchscreen mobile devices can afford rich interaction behaviors but they are complex to model. Scrollable two-dimensional grids are a common user interface on mobile devices that allow users to access a large number of items on a small screen by direct touch. By analyzing touch input and eye gaze of users during grid interaction, we reveal how multiple performance components come into play in such a task, including navigation, visual search and pointing. These findings inspired us to design a novel predictive model that combines these components for modeling grid tasks. We realized these model components by employing both traditional analytical methods and data-driven machine learning approaches. In addition to showing high accuracy achieved by our model in predicting human performance on a test dataset, we demonstrate how such a model can lead to a significant reduction in interaction time when used in a predictive user interface.
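A toy additive model of grid selection time, combining scrolling, visual search, and a Fitts’s-law-style pointing term, can make the idea of composing performance components concrete. The coefficients below are made-up placeholders rather than the paper’s fitted values.

```python
# Toy additive performance model for grid selection, combining scrolling, visual
# search, and a Fitts's-law-style pointing term; all coefficients are made-up
# placeholders, not the fitted values from the paper.
import math

def predicted_grid_time(target_row, rows_per_screen, n_visible_items,
                        target_distance_px, target_width_px):
    scroll_flicks = max(0, (target_row - rows_per_screen) / rows_per_screen)
    navigation = 0.35 * scroll_flicks                 # seconds per screenful scrolled
    visual_search = 0.05 * n_visible_items            # linear visual-search term
    pointing = 0.2 + 0.15 * math.log2(target_distance_px / target_width_px + 1)
    return navigation + visual_search + pointing

print(predicted_grid_time(target_row=12, rows_per_screen=5, n_visible_items=20,
                          target_distance_px=400, target_width_px=80))
```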
Communication Behavior in Embodied Virtual Reality
Embodied virtual reality faithfully renders users’ movements onto an avatar in a virtual 3D environment, supporting nuanced nonverbal behavior alongside verbal communication. To investigate communication behavior within this medium, we had 30 dyads complete two tasks using a shared visual workspace: negotiating an apartment layout and placing model furniture on an apartment floor plan. Dyads completed both tasks under three different conditions: face-to-face, embodied VR with visible full-body avatars, and no-embodiment VR, in which the participants shared a virtual space but had no visible avatars. Both subjective measures of users’ experiences and detailed annotations of verbal and nonverbal behavior are used to understand how the media impact communication behavior. Embodied VR provides a high level of social presence, with conversation patterns that are very similar to face-to-face interaction. In contrast, providing only the shared environment was generally found to be lonely and appears to lead to degraded communication.
In the Eye of the Student: An Intangible Cultural Heritage Experience, with a Human-Computer Interaction Twist
We critically engage with CHI communities emerging outside the global North (ArabHCI and AfriCHI) to explore how participation is configured and enacted within socio-cultural and political contexts fundamentally different from Western societies. We contribute to recent discussions about postcolonialism and decolonization of HCI by focusing on non-Western future technology designers. Our lens was a course designed to engage Egyptian students with a local yet culturally-distant community to design applications for documenting intangible heritage. Through action research, the instructors reflect on selected students’ activities. Despite deploying a flexible learning curriculum that encourages greater autonomy, the students perceived themselves as having less agency than other institutional stakeholders involved in the project. Further, some of them struggled to empathize with the community, as the impact of the cultural differences on configuring participation was profound. We discuss the implications of the findings for HCI education and for international cross-cultural design projects.
shapeShift: 2D Spatial Manipulation and Self-Actuation of Tabletop Shape Displays for Tangible and Haptic Interaction
We explore interactions enabled by 2D spatial manipulation and self-actuation of a tabletop shape display. To explore these interactions, we developed shapeShift, a compact, high-resolution (7 mm pitch), mobile tabletop shape display. shapeShift can be mounted on passive rollers, allowing for bimanual interaction where the user can freely manipulate the system while it renders spatially relevant content. shapeShift can also be mounted on an omnidirectional robot to provide both vertical and lateral kinesthetic feedback, display moving objects, or act as an encountered-type haptic device for VR. We present a study on haptic search tasks comparing spatial manipulation of a shape display for egocentric exploration of a map versus exploration using a fixed display and a touchpad. Results show a 30% decrease in navigation path lengths, 24% decrease in task time, 15% decrease in mental demand and 29% decrease in frustration in favor of egocentric navigation.
Where is Community Among Online Learners?: Identity, Efficacy and Personal Ties
Research questions about community among online learners are gaining importance as enrollments in online programs explode. However, what community means for this context has not been studied in a comprehensive way. We contribute a quantitative study of learners’ feelings and behavior expectations about online community, adapting scales for sense of community (SOC) and developing an instrument to assess community collective efficacy (CCE). Our analysis of students’ responses to these scales revealed two factors underlying SOC (shared identity and interpersonal friendship) and three factors underlying CCE (identity regulation, coordination and social support). We used these factors to discuss contrasting definitions of community (shared identity versus ego networks). Exploratory data analyses also revealed relationships to other student variables that begin to articulate roles and mechanisms for online students’ felt community, and raise design implications about what we might do with and for the community structure.
SpeechBubbles: Enhancing Captioning Experiences for Deaf and Hard-of-Hearing People in Group Conversations
Deaf and hard-of-hearing (DHH) individuals encounter difficulties when engaged in group conversations with hearing individuals, due to factors such as simultaneous utterances from multiple speakers and speakers who may be out of view. We interviewed and co-designed with eight DHH participants to address the following challenges: 1) associating utterances with speakers, 2) ordering utterances from different speakers, 3) displaying optimal content length, and 4) visualizing utterances from out-of-view speakers. We evaluated multiple designs for each of the four challenges through a user study with twelve DHH participants. Our study results showed that participants significantly preferred speech-bubble visualizations over traditional captions. These design preferences guided our development of SpeechBubbles, a real-time speech recognition interface prototype on an augmented reality head-mounted display. From our evaluations, we further demonstrated that DHH participants preferred our prototype over traditional captions for group conversations.
Juxtapeer: Comparative Peer Review Yields Higher Quality Feedback and Promotes Deeper Reflection
Peer review asks novices to take on an evaluator’s role, yet novices often lack the perspective to accurately assess the quality of others’ work. To help learners give feedback on their peers’ work through an expert lens, we present the Juxtapeer peer review system for structured comparisons. We build on theories of learning through contrasting cases, and contribute the first systematic evaluation of comparative peer review. In a controlled experiment, 476 consenting learners across four courses submitted 1,297 submissions, 4,102 reviews, and 846 self-assessments. Learners assigned to compare submissions wrote reviews and self-reflections that were longer and received higher ratings from experts than those who evaluated submissions one at a time. A second study found that a ranking of submissions derived from learners’ comparisons correlates well with staff rankings. These results demonstrate that comparing algorithmically-curated pairs of submissions helps learners write better feedback.
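One common way to turn pairwise comparisons into a ranking, shown here purely for illustration, is an Elo-style update; the abstract does not state which ranking method Juxtapeer uses, so the function and constants below are assumptions.

```python
# Minimal sketch of deriving a submission ranking from pairwise comparisons with
# Elo-style updates; the paper does not specify this particular algorithm.
def rank_from_comparisons(comparisons, k=32):
    """comparisons: list of (winner_id, loser_id) pairs from peer reviewers."""
    ratings = {}
    for winner, loser in comparisons:
        rw = ratings.setdefault(winner, 1000.0)
        rl = ratings.setdefault(loser, 1000.0)
        expected_win = 1.0 / (1.0 + 10 ** ((rl - rw) / 400.0))
        ratings[winner] = rw + k * (1.0 - expected_win)
        ratings[loser] = rl - k * (1.0 - expected_win)
    return sorted(ratings, key=ratings.get, reverse=True)

print(rank_from_comparisons([("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]))
```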
Evorus: A Crowd-powered Conversational Assistant Built to Automate Itself Over Time
Crowd-powered conversational assistants have been shown to be more robust than automated systems, but this robustness comes at the cost of higher response latency and monetary costs. A promising direction is to combine the two approaches for high-quality, low-latency, and low-cost solutions. In this paper, we introduce Evorus, a crowd-powered conversational assistant built to automate itself over time by (i) allowing new chatbots to be easily integrated to automate more scenarios, (ii) reusing prior crowd answers, and (iii) learning to automatically approve response candidates. Our 5-month-long deployment with 80 participants and 281 conversations shows that Evorus can automate itself without compromising conversation quality. Crowd-AI architectures have long been proposed as a way to reduce cost and latency for crowd-powered systems; Evorus demonstrates how automation can be introduced successfully in a deployed system. Its architecture allows future researchers to make further innovations on the underlying automated components in the context of a deployed open-domain dialog system.
“Play PRBLMS”: Identifying and Correcting Less Accessible Content in Voice Interfaces
Voice interfaces often struggle with specific types of named content. Domain-specific terminology and naming may push the bounds of standard language, especially in domains like music where artistic creativity extends beyond the music itself. Artists may name themselves with symbols (e.g. M S C RA) that most standard automatic speech recognition (ASR) systems cannot transcribe. Voice interfaces also experience difficulty surfacing content whose titles include non-standard spellings, symbols or other ASCII characters in place of English letters, or are written using a non-standard dialect. We present a generalizable method to detect content that current voice interfaces underserve by leveraging differences in engagement across input modalities. Using this detection method, we develop a typology of content types and linguistic practices that can make content hard to surface. Finally, we present a process using crowdsourced annotations to make underserved content more accessible.
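A hedged sketch of the detection idea, flagging items whose share of voice-initiated engagement falls well below the catalog norm, is shown below. The field names, threshold, and toy catalog are invented; the paper’s method leverages engagement differences across modalities in a more general way.

```python
# Hedged sketch of flagging content that voice interfaces may underserve by comparing
# engagement across input modalities; field names and the threshold are invented.
def underserved(items, gap=0.2):
    """Flag items whose share of voice-initiated plays falls far below the catalog norm."""
    total_voice = sum(i["voice_plays"] for i in items)
    total_all = sum(i["voice_plays"] + i["text_plays"] for i in items)
    baseline = total_voice / total_all
    flagged = []
    for item in items:
        plays = item["voice_plays"] + item["text_plays"]
        if plays and item["voice_plays"] / plays < baseline - gap:
            flagged.append(item["title"])
    return flagged

catalog = [
    {"title": "M S C RA",   "text_plays": 900, "voice_plays": 40},
    {"title": "Plain Song", "text_plays": 500, "voice_plays": 480},
]
print(underserved(catalog))
```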
Exploring the Value of Parent Tracked Baby Data in Interactions with Healthcare Professionals: A Data-Enabled Design Exploration
This paper presents a designerly exploration of the potential values of parent-tracked baby data in interactions between parents and healthcare professionals (HCPs). Where previous work has used parent-tracked data as part of the solution to a problem, we contribute by starting our design exploration from data, using it as creative material in our design process. As we intend to work towards a system that could be viable across different levels of care, we invited three different types of HCPs and five families with newborns, for a five-week situated design exploration. Facilitated by an open and dynamic data collection toolkit, parents and HCPs could together decide what data to collect. In a continuous dialogue, they reflected on the relevance of that data in their interaction. Based on this, we continuously and remotely developed two concepts.
Tangible Awareness: How Tangibles on Tabletops Influence Awareness of Each Other’s Actions
Tangibles on multitouch tabletops increase speed, accuracy, and eyes-free operability for individual users, and verbal and behavioral social interaction among multiple users around smaller tables with a shared focus of attention. Modern multitouch tables, however, provide sizes and resolutions that let groups work alongside each other in separate workspaces. But how aware do these users remain of each other’s actions, and what impact can tangibles have on their awareness? In our study, groups of 2–4 users around the table played an individual game grabbing their attention as a primary task, while they also had to occasionally become aware of other players’ actions and react as a secondary task. We found that players were significantly more aware of other players’ actions using tangibles than those using pure multitouch interaction, indicated by faster reaction times. This effect was especially strong with more players. We close with qualitative user feedback and design recommendations.
Grand Challenges in Shape-Changing Interface Research
Shape-changing interfaces have emerged as a new method for interacting with computers, using dynamic changes in a device’s physical shape for input and output. With the advances of research into shape-changing interfaces, we see a need to synthesize the main, open research questions. The purpose of this synthesis is to formulate common challenges across the diverse fields engaged in shape-change research, to facilitate progression from single prototypes and individual design explorations to grander scientific goals, and to draw attention to challenges that come with maturity, including those concerning ethics, theory-building, and societal impact. In this article we therefore present 12 grand challenges for research on shape-changing interfaces, derived from a three-day workshop with 25 shape-changing interface experts with backgrounds in design, computer science, human-computer interaction, engineering, robotics, and material science.
Investigating the Role of an Overview Device in Multi-Device Collaboration
The availability of mobile device ecologies enables new types of ad-hoc co-located decision-making and sensemaking practices in which people find, collect, discuss, and share information. However, little is known about what kind of device configurations are suitable for these types of tasks. This paper contributes new insights into how people use configurations of devices for one representative example task: collaborative co-located trip-planning. We present an empirical study that explores and compares three strategies to use multiple devices: no-overview, overview on own device, and a separate overview device. The results show that the overview facilitated decision- and sensemaking during a collaborative trip-planning task by aiding groups to iterate their itinerary, organize locations and timings efficiently, and discover new insights. Groups shared and discussed more opinions, resulting in more democratic decision-making. Groups provided with a separate overview device engaged more frequently and spent more time in closely-coupled collaboration.
Greater than the Sum of its PARTs: Expressing and Reusing Design Intent in 3D Models
With the increasing popularity of consumer-grade 3D printing, many people are creating, and even more using, objects shared on sites such as Thingiverse. However, our formative study of 962 Thingiverse models shows a lack of re-use of models, perhaps due to the advanced skills needed for 3D modeling. An end-user programming perspective on 3D modeling is needed. Our framework (PARTs) empowers amateur modelers to graphically specify design intent through geometry. PARTs includes a GUI, scripting API, and exemplar library of assertions, which test design expectations, and integrators, which act on intent to create geometry. PARTs lets modelers integrate advanced, model-specific functionality into designs so that they can be re-used and extended, without programming. In two workshops, we show that PARTs helps to create 3D printable models, and to modify existing models more easily than with a standard tool.
Calling for a Revolution: An Analysis of IoT Manifestos
Designers and developers are increasingly writing manifestos to express frustration and uncertainty as they struggle to negotiate between the possibilities that IoT technologies offer, and the ethical concerns they engender. Manifestos are defining of a ‘moment of crisis’ and their recent proliferation indicates a desire for change. We analyze the messages manifesto authors have for their readers. Emerging from a sense of uncertainty, these manifestos create publics for debate, demand attention and call for change. While manifestos provide potential roadmaps for a better future, they also express a deep concern and even fear of the state of the world and the role of technology in it. We highlight how practitioners are responding to unstable and rapidly changing times and detail what solutions they envision, and what conflicts these might bring about. Our analysis suggests new ways HCI might theorize and design for responsibility while attending to the perils of responsibilisation.
Share and Share Alike?: Social Information and Interaction Style in Coordination of Shared Use
Interfaces are commonly designed from the perspective of individual users, even though most of the systems we use in everyday life are in fact shared. We argue that more attention is needed for system sharing, especially because interfaces are known to influence coordination of shared use. In this work, we aim to deepen the understanding of this relation. To do so, we design three interfaces for a shared lighting system that vary in the type of social information they allow people to share with others and in their overall interaction style. We systematically compare longitudinal and real-life use of the interfaces, evaluating (1) people’s appraisal of three types of social information and (2) the influence of an interaction style on coordination of shared use. The results disclose relations between the interface and the amount of verbal communication, consideration, and accountability. With this work, we urge the need for interaction designers to consider shared use.
What’s the Difference?: Evaluating Variations of Multi-Series Bar Charts for Visual Comparison Tasks
An increasingly common approach to data analysis involves using information dashboards to visually compare changing data. However, layout constraints coupled with varying levels of visualization literacy among dashboard users make facilitating visual comparison in dashboards a challenging task. In this paper, we evaluate variants of bar charts, one of the most prevalent class of charts used in dashboards. We report an online experiment (N = 74) conducted to evaluate four alternative designs: 1) grouped bar chart, 2) grouped bar chart with difference overlays, 3) bar chart with difference overlays, and 4) difference bar chart. Results show that charts with difference overlays facilitate a wider range of comparison tasks while performing comparably to charts without them on individual tasks. Finally, we discuss the implications of our findings, with a focus on supporting visual comparison in dashboards.
Eliciting Users’ Demand for Interface Features
How valuable are certain interface features to their users? How can users’ demand for features be quantified? To address these questions, users’ demand curve for the sorting feature was elicited in a controlled experiment, using personal finance as the user context. Users made ten rounds of investment allocation across up to 77 possible funds, thus encountering choice overload, typical of many online environments. Users were rewarded for positive investment returns. To overcome choice overload, users could sort the alternatives based on product attributes (fees, category, fund name, past performance). To elicit their demand for sorting, the experimental design enabled users to forgo 0%-9% of their reward in return for activating the sorting feature. The elicited downward sloping demand curve suggests a curvilinear relationship between sorting use and cost. More broadly, the study offers a way to quantify user demand of UI features, and a basis for comparison between features.
The Space of Possibilities: Political Economies of Technology Innovation in Sub-Saharan Africa
HCI researchers work within spaces of possibility for potential designs of technology. New methods (e.g., user centrism); expected types of interaction (user with device); and potential applications (urban navigation) can extend the boundaries of these possibilities. However, structural and systemic factors can also foreclose them. A recent wide and shallow survey of 116 individuals involved in technology development across 26 countries in sub-Saharan Africa reveals how factors of political economy significantly impact upon technological possibilities. Monopolies, international power dynamics, race, and access to capital open or constrain technological possibilities at least as much as device-centric or user-focused constraints do. Though their thrust may have been anticipated by reference to political economic trends, the structural constraints we found were underestimated by technologists even a decade ago. We discuss the implications for technology development in Africa and beyond.
“Genderfluid” or “Attack Helicopter”: Responsible HCI Research Practice with Non-binary Gender Variation in Online Communities
As non-binary genders become increasingly prevalent, researchers face decisions in how to collect, analyze and interpret research participants’ genders. We present two case studies on surveys with thousands of respondents, of which hundreds reported gender as something other than simply women or men. First, Tumblr, a blogging platform, resulted in a rich set of gender identities with very few aggressive or resistive responses; the second case study, online Fantasy Football, yielded opposite proportions. By focusing on variation rather than dismissing non-binary responses as noise, we suggest that researchers can better capture gender in a way that 1) addresses gender variation without othering or erasing non-binary respondents; and 2) minimizes “trolls'” opportunity to use surveys as a mischief platform. The analyses of these two distinct case studies find significant gender differences in community dimensions of participation in both networked spaces as well as offering a model for inclusive mixed-methods HCI research.
“Debrief O’Clock”: Planning, Recording, and Making Sense of a Day in the Field in Design Research
Design research is generative, intuitive, experiential, and tactical. Documenting the design research process helps to communicate these decisions, judgements, and values that are embodied in design products. Yet, practices for documenting design research are underreported in the CHI community, particularly for immersive design research field studies. We contribute the “Debrief O’Clock” fieldnote practice for documenting design research field studies, comprising collaborative discussion sessions and the production of written research accounts. We show how the Debrief O’Clock practice emerged in the context of a Digital Community Noticeboard project with a very remote Australian Aboriginal community, and explain three key purposes of Debrief O’Clock as: 1) an early stage data recording and analysis process; 2) a tactical manoeuvre in responsive project planning; and 3) a mechanism for personal debriefing and reflexivity. We conclude with a series of open practical, ethical, and methodological questions to advance the discussion of design research documentation practices.
Family Health Promotion in Low-SES Neighborhoods: A Two-Month Study of Wearable Activity Tracking
Low-socioeconomic status (SES) families face increased barriers to physical activity (PA), a behavior critical for reducing and preventing chronic disease. Research has explored how wearable PA trackers can encourage increased activity, and how the adoption of such trackers is driven by people’s emotions and social needs. However, more work is needed to understand how PA trackers are perceived and adopted by low-SES families, where PA may be deprioritized due to economic stresses, limited resources, and perceived crime. Accordingly, we conducted a two-month, in-depth qualitative study exploring low-SES caregivers’ perspectives on PA tracking and promotion. Our findings show how PA tracking was impacted by caregivers’ attitudes toward safety, which were influenced by how they perceived social connections within their neighborhoods, and by cognitive-emotional processes. We conclude that PA tracking tools for low-SES families should help caregivers and children to experience and celebrate progress.
Crowdsourcing vs Laboratory-Style Social Acceptability Studies?: Examining the Social Acceptability of Spatial User Interactions for Head-Worn Displays
The use of crowdsourcing platforms for data collection in HCI research is attractive in their ability to provide rapid access to large and diverse participant samples. As a result, several researchers have conducted studies investigating the similarities and differences between data collected through crowdsourcing and more traditional, laboratory-style data collection. We add to this body of research by examining the feasibility of conducting social acceptability studies via crowdsourcing. Social acceptability can be a key determinant for the early adoption of emerging technologies, and as such, we focus our investigation on social acceptability for Head-Worn Display (HWD) input modalities. Our results indicate that data collected via a crowdsourced experiment and a laboratory-style setting did not differ at a statistically significant level. These results provide initial support for crowdsourcing platforms as viable options for conducting social acceptability research.
Empirical Support for a Causal Relationship Between Gamification and Learning Outcomes
Preparing for exams is an important yet stressful time for many students. Self-testing is known to be an effective preparation strategy, yet some students lack motivation to engage or persist in self-testing activities. Adding game elements to a platform supporting self-testing may increase engagement and, by extension, exam performance. We conduct a randomized controlled experiment (n=701) comparing the effect of two game elements — a points system and a badge system — used individually and in combination. We find that the badge system elicits significantly higher levels of voluntary self-testing activity and this effect is particularly pronounced amongst a relatively small cohort. Importantly, this increased activity translates to a significant improvement in exam scores. Our data supports a causal relationship between gamification and learning outcomes, mediated by self-testing behavior. This provides empirical support for Landers’ theory of gamified learning when the gamified activity is conducted prior to measuring learning outcomes.
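For readers who want to see what a mediation check of this kind can look like in code, the sketch below runs two ordinary-least-squares regressions on synthetic data (condition to outcome, then condition plus mediator to outcome) using statsmodels. The data, coefficients, and simple Baron-and-Kenny-style procedure are placeholders, not the study’s analysis.

```python
# Sketch of a simple mediation check (gamification -> self-testing -> exam score)
# using OLS from statsmodels; the data and effect sizes are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
badges = rng.integers(0, 2, n)                      # 1 = badge condition
self_tests = 5 + 3 * badges + rng.normal(0, 2, n)   # mediator: voluntary self-testing
exam = 50 + 2 * self_tests + rng.normal(0, 5, n)    # outcome: exam score

# Total effect of condition, then the direct effect controlling for the mediator.
total = sm.OLS(exam, sm.add_constant(badges)).fit()
direct = sm.OLS(exam, sm.add_constant(np.column_stack([badges, self_tests]))).fit()
print("total effect:", total.params[1], "direct effect:", direct.params[1])
```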
Going the Distance: Trust Work for Citizen Participation
Trust is a vital component of citizen participation: whether citizens decide to engage in opportunities for participation in local government can hinge entirely on the existence of trust between citizens and public officials. Understanding the role of trust in this space is vital for HCI and the growing area of Digital Civics, which works to improve or create new modes of citizen participation. Currently, however, trust is understudied from the perspectives of public officials. This gap creates a critical blind spot, as technical interventions may be mismatched to the ways trust is put into action by public officials working to support citizen participation. We begin to address this gap by presenting a broad qualitative study of how public officials in a large US city operationalize trust in citizen participation. We found trust is enacted through ongoing practices that manage distance in relationships between public officials and city residents.
Methods for Intentional Encoding of High Capacity Human-Designable Visual Markers
Previous techniques for human-designable visual markers have focused on small encoding spaces, and assume artists do not need to encode specific bit representations. We present a general framework for human-designable visual markers for artists to encode specific bit representations in large spaces. A three-part study, conducted over three weeks, methodically evaluates the usability of different encoding methods when artists encode specific bit representations. The methods span different shape characteristics suitable for artist encoding (convexity, hollowness, number, size, and distance from centroid) and visualization tools are proposed to aid in this process. We further demonstrate that any of the methods presented may be practically used to encode a URL with the aid of a universally available database like TinyURL (rather than a task-specific database), making human-designable visual markers practical for applications such as advertisements.
Investigating the Impact of Annotation Interfaces on Player Performance in Distributed Multiplayer Games
In distributed multiplayer games, it can be difficult to communicate strategic information for planning game moves and player interactions. Often, players spend extra time communicating, reducing their engagement in the game. Visual annotations in game maps and in the gameworld can address this problem and result in more efficient player communication. We studied the impact of real-time feedback on planning annotations, specifically two different annotation types, in a custom-built, third-person, multiplayer game and analyzed their effects on player performance, experience, workload, and annotation use. We found that annotations helped engage players in collaborative planning, which reduced frustration, and shortened goal completion times. Based on these findings, we discuss how annotating in virtual game spaces enables collaborative planning and improves team performance.
Let’s Talk About Race: Identity, Chatbots, and AI
Why is it so hard for chatbots to talk about race? This work explores how the biased contents of databases, the syntactic focus of natural language processing, and the opaque nature of deep learning algorithms cause chatbots difficulty in handling race-talk. In each of these areas, the tensions between race and chatbots create new opportunities for people and machines. By making the abstract and disparate qualities of this problem space tangible, we can develop chatbots that are more capable of handling race-talk in its many forms. Our goal is to provide the HCI community with ways to begin addressing the question, how can chatbots handle race-talk in new and improved ways?
Extracting Regular FOV Shots from 360 Event Footage
Video summaries are a popular way to share important events, but creating good summaries is hard. It requires expertise in both capturing and editing footage. While hiring a professional videographer is possible, this is too costly for most casual events. An alternative is to place 360 video cameras around an event space to capture footage passively and then extract regular field-of-view (RFOV) shots for the summary. This paper focuses on the problem of extracting such RFOV shots. Since we cannot actively control the cameras or the scene, it is hard to create “ideal” shots that adhere strictly to traditional cinematography rules. To better understand the tradeoffs, we study human preferences for static and moving camera RFOV shots generated from 360 footage. From the findings, we derive design guidelines. As a secondary contribution, we use these guidelines to develop automatic algorithms that we demonstrate in a prototype user interface for extracting RFOV shots from 360 videos.
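As background on what extracting an RFOV shot involves, the sketch below samples a rectilinear crop from an equirectangular 360 frame given a yaw, pitch, and field of view. It uses nearest-neighbor sampling and rotation about two axes only; a production extractor, and the paper’s algorithms, would go further.

```python
# Hedged sketch of sampling a regular field-of-view (rectilinear) crop from an
# equirectangular 360 frame; interpolation and roll are omitted for brevity.
import numpy as np

def extract_rfov(equirect, yaw, pitch, fov_deg=60, out_size=(240, 320)):
    h_out, w_out = out_size
    h, w = equirect.shape[:2]
    f = 0.5 * w_out / np.tan(np.radians(fov_deg) / 2)     # focal length in pixels
    xs = np.arange(w_out) - w_out / 2
    ys = np.arange(h_out) - h_out / 2
    x, y = np.meshgrid(xs, ys)
    # Ray directions in camera space, rotated by pitch (x-axis) then yaw (y-axis).
    dirs = np.stack([x, y, np.full_like(x, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    dirs = dirs @ rx.T @ ry.T
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])           # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))          # latitude in [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (w - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (h - 1)).astype(int)
    return equirect[v, u]                                  # nearest-neighbor lookup

frame = np.random.randint(0, 255, (512, 1024, 3), dtype=np.uint8)  # stand-in frame
crop = extract_rfov(frame, yaw=np.radians(30), pitch=np.radians(-10))
print(crop.shape)
```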
“Is More Better?”: Impact of Multiple Photos on Perception of Persona Profiles
In this research, we investigate if and how more photos than a single headshot can heighten the level of information provided by persona profiles. We conduct eye-tracking experiments and qualitative interviews with variations in the photos: a single headshot, a headshot and images of the persona in different contexts, and a headshot with pictures of different people representing key persona attributes. The results show that more contextual photos significantly improve the information end users derive from a persona profile; however, showing images of different people creates confusion and lowers the informativeness. Moreover, we discover that choice of pictures results in various interpretations of the persona that are biased by the end users’ experiences and preconceptions. The results imply that persona creators should consider the design power of photos when creating persona profiles.
Speak Up: A Multi-Year Deployment of Games to Motivate Speech Therapy in India
The ability to communicate is crucial to leading an independent life. Unfortunately, individuals from developing communities who are deaf and hard of hearing tend to encounter difficulty communicating, due to a lack of educational resources. We present findings from a two-year deployment of Speak Up, a suite of voice-powered games to motivate speech therapy, at a school for the deaf in India. Using ethnographic methods, we investigated the interplay between Speak Up and local educational practices. We found that teachers’ speech therapy goals had evolved to differ from those encoded in the games, that the games influenced classroom dynamics, and that teachers had improved their computer literacy and developed creative uses for the games. We used these insights to further enhance Speak Up by creating an explicit teacher role in the games, making changes that encouraged teachers to build their computer literacy, and adding an embodied agent.
Understanding the Effect of In-Video Prompting on Learners and Instructors
Online instructional videos are ubiquitous, but it is difficult for instructors to gauge learners’ experience and their level of comprehension or confusion regarding the lecture video. Moreover, learners watching the videos may become disengaged or fail to reflect and construct their own understanding. This paper explores instructor and learner perceptions of in-video prompting where learners answer reflective questions while watching videos. We conducted two studies with crowd workers to understand the effect of prompting in general, and the effect of different prompting strategies on both learners and instructors. Results show that some learners found prompts to be useful checkpoints for reflection, while others found them distracting. Instructors reported the collected responses to be generally more specific than what they have usually collected. Also, different prompting strategies had different effects on the learning experience and the usefulness of responses as feedback.
Force Jacket: Pneumatically-Actuated Jacket for Embodied Haptic Experiences
Immersive experiences seek to engage the full sensory system in ways that words, pictures, or touch alone cannot. With respect to the haptic system, however, physical feedback has been provided primarily with handheld tactile experiences or vibration-based designs, largely ignoring both pressure receptors and the full upper-body area as conduits for expressing meaning that is consistent with sight and sound. We extend the potential for immersion along these dimensions with the Force Jacket, a novel array of pneumatically-actuated airbags and force sensors that provide precisely directed force and high frequency vibrations to the upper body. We describe the pneumatic hardware and force control algorithms, user studies to verify perception of airbag location and pressure magnitude, and subsequent studies to define full-torso, pressure and vibration-based feel effects such as punch, hug, and snake moving across the body. We also discuss the use of those effects in prototype virtual reality applications.
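As a purely illustrative example of a pressure-tracking control loop of the general kind such pneumatic systems need, the sketch below runs a proportional controller against a crude simulated airbag. The gains, units, and plant model are invented and are not the Force Jacket’s control algorithm.

```python
# Illustrative proportional controller tracking a target airbag pressure; gains,
# units, and the simulated plant are invented stand-ins, not the actual system.
def make_pressure_controller(kp=0.8):
    def step(target_kpa, measured_kpa):
        error = target_kpa - measured_kpa
        return max(-1.0, min(1.0, kp * error))  # valve command clamped to [-1, 1]
    return step

controller = make_pressure_controller()
pressure = 0.0
for _ in range(1000):                           # crude integrator model of inflation
    command = controller(target_kpa=20.0, measured_kpa=pressure)
    pressure += 5.0 * command * 0.01            # pressure change per 10 ms step
print(round(pressure, 1))                       # settles near the 20 kPa target
```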
Toward Health Information Technology that Supports Overweight/Obese Women in Addressing Emotion- and Stress-Related Eating
Emotion- and stress-related eating (ESRE) is associated with weight management difficulties and is more likely to affect women than men. Additionally, health information technology (HIT) for weight management tends to be less effective for women than it is for men, and less effective for people who engage in ESRE. Therefore, this study explores how HIT can support overweight/obese women in curbing ESRE behavior. Study participants, all adult overweight/obese women (BMI ≥ 25), logged dietary intake for 10 days with the Lose It! smartphone app as an elicitation exercise. Cross-sectional, semi-structured interviews (N = 22) were then conducted to explore technology support needs concerning ESRE behavior. Findings revealed participants had the following needs: holistic health goal development, building motivation to achieve goals, and assistance with handling stress. Resulting HIT guidelines include supporting holistic health goals, developing and sustaining motivation, exchange of emotional support, understanding of behavior, and change in ESRE mindset.
Parody in Place: Exposing Socio-spatial Exclusions in Data-Driven Maps with Design Parody
This paper describes the development of Parody in Place, a design parody that depicts Seattle neighborhoods with typographic arrangements derived from data generated by technology platforms such as Yelp and Zillow. The project invites inquiry into what technology corporations make matter and where, in ways that challenge the neutrality of neighborhood-based data. We designed the subject of our parody, a mock company called Dork Posters, to explore how the modes of parody by which the system operates expose socio-spatial exclusions both contested and propagated by digital platforms. Our interventions reveal shifts in response toward mapping techniques, from ambivalence to curiosity. We used Dork Posters to question reductionist techniques of data aggregation and ad hoc theories of data provenance. Our engagements also prompted reflection on the politics of measurement: how data sources shape resulting insights and valuations. We end by discussing possibilities for expanding the design research program within human-computer interaction through parody.
From Scanning Brains to Reading Minds: Talking to Engineers about Brain-Computer Interface
We presented software engineers in the San Francisco Bay Area with a working brain-computer interface (BCI) to surface the narratives and anxieties around these devices among technical practitioners. Despite this group’s heterogeneous beliefs about the exact nature of the mind, we find a shared belief that the contents of the mind will someday be “read” or “decoded” by machines. Our findings help illuminate BCI’s imagined futures among engineers. We highlight opportunities for researchers to involve themselves preemptively in this nascent space of intimate biosensing devices, suggesting our findings’ relevance to long-term futures of privacy and cybersecurity.
Crowdsourcing Exercise Plans Aligned with Expert Guidelines and Everyday Constraints
Exercise plans help people implement behavior change. Crowd workers can help create exercise plans for clients, but their work may result in lower quality plans than those produced by experts. We built CrowdFit, a tool that provides feedback about compliance with exercise guidelines and leverages the strengths of crowdsourcing to create plans made by non-experts. We evaluated CrowdFit in a comparative study with 46 clients using exercise plans for two weeks. Clients received plans from crowd planners using CrowdFit, crowd planners without CrowdFit, or expert planners. Compared to crowd planners not using CrowdFit, crowd planners using CrowdFit created plans that were more actionable and more aligned with exercise guidelines. Compared to experts, crowd planners created more actionable plans, and plans that did not differ significantly with respect to tailoring, strength, and aerobic principles. They struggled, however, to satisfy requirements for the amount of exercise. We discuss opportunities for designing technology supporting physical activity planning by non-experts.
Bottom-Up Imaginaries: The Cultural-Technical Practice of Inventing Regional Advantage through IT R&D
Recent HCI research on social creativity and bottom-up innovation has highlighted how concerted efforts by government policy and business communities to develop innovation ecosystems are increasingly intertwined with IT research and development. We note that many such efforts focus on cultivating regional advantage [20] in the form of innovation hubs that are situated in and leverage distinct sociocultural histories and geographies. Cultivating regional advantage entails achieving broad consensus about what that region’s advantage might be, that is, the construction of a regional advantage imaginary beyond the policies, IT supports, and practices to make it happen. Here, we document an ongoing public debate in which makers and manufacturers in Taiwan, a region distinguished by direct engagement with design, fabrication, prototyping, and manufacturing processes, propose pathways toward a regional advantage that reflects both Taiwan’s recent sociocultural and economic histories and its near-future aspirations.
Balancing Privacy and Information Disclosure in Interactive Record Linkage with Visual Masking
Effective use of data involving personal or sensitive information often requires different people to have access to personal information, which significantly reduces the personal privacy of those whose data is stored and increases the risk of identity theft, data leaks, or social engineering attacks. Our research studies the tradeoffs between privacy and utility of personal information for human decision making. Using a record-linkage scenario, this paper presents a controlled study of how varying degrees of information availability influence the ability to effectively use personal information. We compared the quality of human decision-making using a visual interface that controls the amount of personal information available and uses visual markup to highlight data discrepancies. With this interface, study participants who viewed only 30% of data content had decision quality similar to those who had full 100% access. The results demonstrate that it is possible to greatly limit the amount of personal information available to human decision makers without negatively affecting utility or human effectiveness. However, the findings also show there is a limit to how much data can be hidden before negatively influencing the quality of judgment in decisions involving person-level data. Despite the reduced accuracy with extreme data hiding, the study demonstrates that with proper interface designs, many correct decisions can be made even with legally de-identified data that is fully masked (74.5% accuracy with fully-masked data compared to 84.1% with full access). Thus, when legal requirements only allow for de-identified data access, use of a well-designed interface can significantly improve data utility.
Streets for People: Engaging Children in Placemaking Through a Socio-technical Process
In this paper, we present a socio-technical process designed to engage children in an ongoing urban design project-Streets for People-in Newcastle, UK. We translated urban design proposals developed by residents and the local authority to enable children to contribute ideas to the project. Our process comprised three stages: situated explorations and evidence gathering through digitally supported neighbourhood walks; issue mapping and peer-to-peer discussions using an online engagement platform; and face-to-face dialogue between children, residents, and the local authority through a ‘Town Hall’ event. We report insights gained through our engagement and show how our activities facilitated issue advocacy and the development of children’s capacities, but also surfaced tensions around the agency of children in political processes. We reflect on the challenges of working in this space, and discuss wider implications for technology design and ethical questions that ‘scaling up’ such work might pose.
Vanishing Importance: Studying Immersive Effects of Game Audio Perception on Player Experiences in Virtual Reality
Sound and virtual reality (VR) are two important output modalities for creating an immersive player experience (PX). While prior research suggests that sounds might contribute to a more immersive experience in games played on screens and mobile displays, there is not yet evidence of these effects of sound on PX in VR. To address this, we conducted a within-subjects experiment using a commercial horror-adventure game to study the effects of a VR and monitor-display version of the same game on PX. Subsequently, we explored, in a between-subjects study, the effects of audio dimensionality on PX in VR. Results indicate that audio has a more implicit influence on PX in VR because of the impact of the overall sensory experience and that audio dimensionality in VR may not be a significant factor contributing to PX. Based on our findings and observations, we provide five design guidelines for VR games.
Flexible Learning with Semantic Visual Exploration and Sequence-Based Recommendation of MOOC Videos
Massive Open Online Course (MOOC) platforms have scaled online education to unprecedented enrollments, but remain limited by their rigid, predetermined curricula. To overcome this limitation, this paper contributes a visual recommender system called MOOCex. The system recommends lecture videos across different courses by considering both video contents and sequential inter-topic relationships mined from course syllabi; and more importantly, it allows for interactive visual exploration of the semantic space of recommendations within a learner’s current context. When compared to traditional methods (e.g., content-based recommendation and ranked list representations), MOOCex suggests videos from more diverse perspectives and helps learners make better video playback decisions. Further, feedback from MOOC learners and instructors indicates that the system enhances both learning and teaching effectiveness.
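To make the recommendation idea above more concrete, the sketch below shows one simple way a video recommender might blend content similarity with sequence evidence mined from syllabi. It is a minimal illustration under stated assumptions, not the MOOCex implementation; the function names, weights, and data structures are hypothetical.

```python
# A minimal sketch, not the MOOCex implementation: rank candidate lecture videos by
# blending content similarity with "comes next in a syllabus" evidence. All names,
# weights, and data structures below are hypothetical.
import numpy as np

def recommend(current_vec, candidate_vecs, candidate_topics, next_topic_counts, alpha=0.7):
    """Return candidate indices ordered from most to least recommended.

    current_vec:       feature vector of the video the learner is watching
    candidate_vecs:    one feature vector per candidate video (rows of a matrix)
    candidate_topics:  topic label of each candidate
    next_topic_counts: how often each topic follows the current topic in course syllabi
    alpha:             weight on content similarity vs. sequence evidence
    """
    # cosine similarity between the current video and each candidate
    norms = np.linalg.norm(candidate_vecs, axis=1) * np.linalg.norm(current_vec)
    content = candidate_vecs @ current_vec / np.clip(norms, 1e-9, None)

    # normalized sequence evidence for each candidate's topic
    total = sum(next_topic_counts.values()) or 1
    sequence = np.array([next_topic_counts.get(t, 0) / total for t in candidate_topics])

    return np.argsort(-(alpha * content + (1 - alpha) * sequence))

# toy usage with three candidate videos
vids = np.array([[1.0, 0.0], [0.6, 0.8], [0.0, 1.0]])
order = recommend(np.array([1.0, 0.2]), vids, ["loops", "recursion", "sorting"],
                  {"recursion": 3, "sorting": 1})
print(order)
```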
“Trust Us”: Mobile Phone Use Patterns Can Predict Individual Trust Propensity
An individual’s trust propensity – i.e., a dispositional willingness to rely on others – mediates multiple socio-technical systems and has implications for their personal and societal well-being. Hence, understanding and modeling an individual’s trust propensity is important for human-centered computing research. Conventional methods for understanding trust propensities have been surveys and lab experiments. We propose a new approach to model trust propensity based on long-term phone use metadata that aims to complement typical survey approaches with a lower-cost, faster, and scalable alternative. Based on analysis of data from a 10-week field study (mobile phone logs) and “ground truth” survey involving 50 participants, we: (1) identify multiple associations between phone-based social behavior and trust propensity; (2) define a machine learning model that automatically infers a person’s trust propensity. The results pave the way for understanding trust at a societal scale and have implications for personalized applications in the emerging social internet of things.
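As a rough illustration of the kind of pipeline described above, the sketch below derives a few simple features from a participant's call log and fits a regression model against survey-based trust scores. It is a hypothetical stand-in, not the paper's model; the feature choices and data layout are assumptions.

```python
# A hypothetical sketch of the general approach, not the paper's pipeline: derive simple
# features from each participant's phone log and fit a model against survey-based trust
# propensity scores. Feature choices and data layout are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def log_features(call_log):
    """call_log: list of (contact_id, duration_seconds, is_outgoing) tuples for one person."""
    contacts = {c for c, _, _ in call_log}
    durations = [d for _, d, _ in call_log]
    outgoing = [o for _, _, o in call_log]
    return [
        len(contacts),                                    # size of the active contact network
        float(np.mean(durations)) if durations else 0.0,  # average call length
        float(np.mean(outgoing)) if outgoing else 0.0,    # share of calls the user initiated
    ]

def fit_trust_model(call_logs, trust_scores):
    """call_logs: one log per participant; trust_scores: survey 'ground truth'."""
    X = np.array([log_features(log) for log in call_logs])
    y = np.array(trust_scores)
    # In practice one would cross-validate; a plain ridge fit keeps the sketch short.
    return Ridge(alpha=1.0).fit(X, y)
```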
“Suddenly, we got to become therapists for each other”: Designing Peer Support Chats for Mental Health
Talk therapy is a common, effective, and desirable form of mental health treatment. Yet, it is inaccessible to many people. Enabling peers to chat online using effective principles of talk therapy could help scale this form of mental health care. To understand how such chats could be designed, we conducted a two-week field experiment with 40 people experiencing mental illnesses comparing two types of online chats-chats guided by prompts, and unguided chats. Results show that anxiety was significantly reduced from pre-test to post-test. User feedback revealed that guided chats provided solutions to problems and new perspectives, and were perceived as “deep,” while unguided chats offered personal connection on shared experiences and were experienced as “smooth.” We contribute the design of an online guided chat tool and insights into the design of peer support chat systems that guide users to initiate, maintain, and reciprocate emotional support.
Accountability in the Blue-Collar Data-Driven Workplace
This paper examines how mobile technology impacts employee accountability in the blue-collar data-driven workplace. We conducted an observation-based qualitative study of how electricians in an electrical company interact with data related to their work accountability, which comprises the information employees feel is reasonable to share and document about their work. The electricians we studied capture data both manually, recording the hours spent on a particular task, and automatically, as their mobile devices regularly track data such as location. First, our results demonstrate how work accountability manifests for employees’ manual labor work that has become data-driven. We show how employees work through moments of transparency, privacy, and accountability using data focused on location, identification and time. Second, we demonstrate how this data production is interdependent with employees’ beliefs about what is a reasonable level of detail and transparency to provide about their work. Lastly, we articulate specific design implications related to work accountability.
Food Democracy in the Making: Designing with Local Food Networks
This paper introduces the concept of ‘food democracy’ as a theoretical framing for HCI to engage in human-food interaction. Extending existing foci of health and environmental sustainability, food democracy requires thinking through aspects of social and economic justice, and democratic governance as directions for the study and design of technologies for alternative food movements. To exemplify food democracy, we report on field observations and interviews about the opportunities and challenges for supporting the development of local food networks with communities in deprived neighbourhoods using an online direct food marketing platform. Using a food democracy framing, we identify tensions around environmental, social, and economic goals; challenges of local food businesses operating within the existing economic paradigm; and differing perspectives on ownership and governance in the network. We discuss the need for HCI to design for systems change and propose a design space for HCI in supporting food democracy movements.
Better Understanding of Foot Gestures: An Elicitation Study
We present a study aimed at better understanding users’ perceptions of foot gestures performed on a horizontal surface. We applied a user elicitation methodology, in which participants were asked to suggest foot gestures for actions (referents) in three conditions: standing in front of a large display, sitting in front of a desktop display, and standing on a projected surface. Based on majority count and agreement scores, we identified three gesture sets, one for each condition. Each gesture set shows a mapping between a common action and its chosen gesture. As a further contribution, we suggest a new measure called the specification score, which indicates the degree to which a gesture is specific, preferable, and intuitive for an action in a specific condition of use. Finally, we present measurable insights that can be implemented as guidelines for future development and research of foot interaction.
InfoNice: Easy Creation of Information Graphics
Information graphics are widely used to convey messages and present insights in data effectively. However, creating expressive data-driven infographics remains a great challenge for general users without design expertise. We present InfoNice, a visualization design tool that enables users to easily create data-driven infographics. InfoNice allows users to convert unembellished charts into infographics with multiple visual elements through mark customization. We implemented InfoNice within Microsoft Power BI to demonstrate how it can be integrated seamlessly into the data analysis workflow, bridging the gap between data exploration and presentation. We evaluate the usability and usefulness of InfoNice through example infographics, an in-lab user study, and real-world user feedback. Our results show that InfoNice enables users to easily create a variety of infographics for common scenarios.
Metamaterial Textures
We present metamaterial textures—3D printed surface geometries that can perform a controlled transition between two or more textures. Metamaterial textures are integrated into 3D printed objects and allow designing how the object interacts with the environment and the user’s tactile sense. Inspired by foldable paper sheets (“origami”) and surface wrinkling, our 3D printed metamaterial textures consist of a grid of cells that fold when compressed by an external global force. Unlike origami, however, metamaterial textures offer full control over the transformation, such as in-between states and the sequence of actuation. This allows for integrating multiple textures and makes them useful, e.g., for exploring parameters in the rapid prototyping of textures. Metamaterial textures are also robust enough to allow the resulting objects to be grasped, pushed, or stood on. This allows us to make objects, such as a shoe sole that transforms from flat to treaded, a textured door handle that provides tactile feedback to visually impaired users, and a configurable bicycle grip. We present an editor that assists users in creating metamaterial textures interactively by arranging cells, applying forces, and previewing their deformation.
A Case for Design Localization: Diversity of Website Aesthetics in 44 Countries
Adapting the visual designs of websites to a local target audience can be beneficial, because such design localization can increase a site’s appeal to users, as well as their trust and work efficiency. Yet designers often find it difficult to decide when to adapt and how to adapt the designs, mainly because there are currently no guidelines that describe common website designs in various countries. We contribute the first large-scale analysis of 80,901 website designs across 44 countries, made available via an interactive web-based design catalog. Using computational image metrics to compare the ~2,000 most visited websites per country, we found significant differences between several design aspects, such as a website’s colorfulness, visual complexity, number of text areas, and average color saturation. Our results contribute a snapshot of the web designs that users in 44 countries frequently see, showing that the designs of websites with a global reach are more homogenized across countries than those of local websites.
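For illustration, the sketch below computes two metrics of the kind mentioned above on a single website screenshot: the Hasler-Süsstrunk colorfulness measure and mean HSV saturation. It is a minimal sketch, not the authors' pipeline, and the file path is a placeholder.

```python
# A minimal sketch, not the authors' pipeline: two computational image metrics of the
# kind used above, computed on a single website screenshot. "screenshot.png" is a
# placeholder path.
import numpy as np
from PIL import Image

def colorfulness(rgb):
    """Hasler-Suesstrunk colorfulness of an RGB array."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg = r - g
    yb = 0.5 * (r + g) - b
    return np.sqrt(rg.std() ** 2 + yb.std() ** 2) + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)

def mean_saturation(img):
    """Average HSV saturation in [0, 1]."""
    hsv = np.asarray(img.convert("HSV"), dtype=float)
    return hsv[..., 1].mean() / 255.0

img = Image.open("screenshot.png").convert("RGB")
print("colorfulness:", colorfulness(np.asarray(img)))
print("mean saturation:", mean_saturation(img))
```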
As We May Study: Towards the Web as a Personalized Language Textbook
We present a system designed to enable learners of a foreign language to read materials from the web that are personally interesting to them and to practice vocabulary with interactive exercises based on their past readings. We report on the results of deploying the system for one month with three classes of Dutch high school students learning French. The students and their teacher were positive about the system, and in particular about the personalization aspects that the system enables.
KnobSlider: Design of a Shape-Changing UI for Parameter Control
Physical controls are widely used by professionals such as sound engineers or aircraft pilots. In particular, knobs and sliders are the most prevalent controls in such interfaces. They have advantages over touchscreen GUIs, especially when users require quick and eyes-free control. However, their interfaces (e.g., mixing consoles) are often bulky and crowded. To improve this, we present the results of a formative study with professionals who use physical controllers. Based on their feedback, we propose design requirements for future parameter-control interfaces. We then introduce the design of our KnobSlider, which combines the advantages of a knob and a slider in one unique shape-changing device. A qualitative study with professionals shows how KnobSlider supports the design requirements, and inspired new interactions and applications.
BreathVR: Leveraging Breathing as a Directly Controlled Interface for Virtual Reality Games
With virtual reality head-mounted displays rapidly becoming accessible to mass audiences, there is growing interest in new forms of natural input techniques to enhance immersion and engagement for players. Research has explored physiological input for enhancing immersion in single player games through indirectly controlled signals like heart rate or galvanic skin response. In this paper, we propose breathing as a directly controlled physiological signal that can facilitate unique and engaging play experiences through natural interaction in single and multiplayer virtual reality games. Our study (N = 16) shows that participants report a higher sense of presence and find the gameplay more fun and challenging when using our breathing actions. From study observations and analysis we present five design strategies that can aid virtual reality game designers interested in using directly controlled forms of physiological input.
Choosing to Help Monsters: A Mixed-Method Examination of Meaningful Choices in Narrative-Rich Games and Interactive Narratives
The potential of narrative-rich games to impact emotions, attitudes, and behavior brings with it exciting opportunities and implications within both entertainment and serious game contexts. However, effects are not always consistent, potentially due to game choices not always being perceived as meaningful by the players. To examine these perceptual variations, we used a mixed-method approach. A qualitative study first investigated meaningful game choices from the players’ perspectives. Building on the themes developed in this first study, a quantitative study experimentally examined the effect of meaningful game choices on player experiences of appreciation, enjoyment, and narrative engagement. Results highlight the importance of moral, social, and consequential characteristics in creating meaningful game choices, which positively affected appreciation. Meaningfulness of game choices may therefore be crucial for narrative-rich games and interactive narratives to impact players.
Support for Social and Cultural Capital Development in Real-time Ridesharing Services
Today’s transportation systems and technologies have the potential to transform the ways individuals acquire resources from their social networks and environments. However, it is unclear what types of resources can be acquired and how technology could support these efforts. We address this gap by investigating these questions in the domain of real-time ridesharing systems. We present insights from two qualitative studies: (1) a set of semi-structured interviews with 13 Uber drivers and (2) a set of semi-structured interviews with 13 Uber riders. Our results show that both drivers and riders acquired and benefited from informational, emotional and instrumental resources, as well as cultural exchanges via interactions with each other and with online platforms. We argue that these interactions could support the development of social and cultural capital. We discuss our findings in the context of labor and contribute design implications for in-car social and cultural experiences and for the ways technologies such as GPS and location-based services can support the additional emotional, social, and cultural labor that drivers provide to their riders.
Are You Dreaming?: A Phenomenological Study on Understanding Lucid Dreams as a Tool for Introspection in Virtual Reality
Virtual reality (VR) is resurging in popularity with the advancement of low-cost hardware and more realistic graphics. How might this technology help people, for instance by increasing mental well-being? The ultimate VR might look like lucid dreaming, the phenomenon of knowing one is dreaming while in the dream. Lucid dreaming can be used as an introspective tool and, ultimately, increase mental well-being. What these introspective experiences are like for lucid dreamers might be key in determining specific design guidelines for the future creation of a technological tool for helping people examine their own thoughts and emotions. This study describes nine active and proficient lucid dreamers’ representations of their introspective experiences, gathered through phenomenological interviews. Four major themes emerged: sensations and feelings, actions and practices, influences on experience, and meaning making. This knowledge can help in designing a VR system that is grounded in genuine experience and preserves the human condition.
A Large-Scale Study of iPhone App Launch Behaviour
There have been many large-scale investigations of users’ mobile app launch behaviour, but all have been conducted on Android, even though recent reports suggest iPhones account for a third of all smartphones in use. We report on the first large-scale analysis of app usage patterns on iPhones. We conduct a reproduction study with a cohort of over 10,000 jailbroken iPhone users, reproducing several studies previously conducted on Android devices. We find some differences, but also significant similarities: e.g. communications apps are the most used on both platforms; similar patterns are apparent in which a few apps are very popular while a ‘long tail’ of many other apps is used across the population; users show similar patterns of ‘micro-usage’; almost identical proportions of people use a unique combination of apps. Such similarities add confidence but also specificity to claims of consistency across smartphones. As well as presenting our findings, we discuss issues involved in reproducing studies across platforms.
Physical Keyboards in Virtual Reality: Analysis of Typing Performance and Effects of Avatar Hands
Entering text is one of the most common tasks when interacting with computing systems. Virtual Reality (VR) presents a challenge as neither the user’s hands nor the physical input devices are directly visible. Hence, conventional desktop peripherals are very slow, imprecise, and cumbersome. We developed an apparatus that tracks the user’s hands and a physical keyboard, and visualizes them in VR. In a text input study with 32 participants, we investigated the achievable text entry speed and the effect of hand representations and transparency on typing performance, workload, and presence. With our apparatus, experienced typists benefited from seeing their hands and reached almost their outside-VR performance. Inexperienced typists profited from semi-transparent hands, which enabled them to type just 5.6 WPM slower than with a regular desktop setup. We conclude that optimizing the visualization of hands in VR is important, especially for inexperienced typists, to enable high typing performance.
“Bursting the Assistance Bubble”: Designing Inclusive Technology with Children with Mixed Visual Abilities
Children living with visual impairments (VIs) are increasingly educated in mainstream rather than special schools. But knowledge about the challenges they face in inclusive schooling environments and how to design technology to overcome them remains scarce. We report findings from a field study involving interviews and observations of educators and children with/without VIs in mainstream schools, in which we identified the “teaching assistant bubble” as a potential barrier to group learning, social play and independent mobility. We present co-design activities blending elements of future workshops, multisensory crafting, fictional inquiry and bodystorming, demonstrating that children with and without VIs can jointly lead design processes and explore design spaces reflective of mixed visual abilities and shared experiences. We extend previous research by characterising challenges and opportunities for improving inclusive education of children with VIs in mainstream schools, in terms of balancing assistance and independence, and reflect on the process and outcomes of co-designing with mixed-ability groups in this context.
Review of Intrinsic Motivation in Simulation-based Game Testing
This paper presents a review of intrinsic motivation in player modeling, with a focus on simulation-based game testing. Modern AI agents can learn to win many games; from a game testing perspective, a remaining research problem is how to model the aspects of human player behavior not explained by purely rational and goal-driven decision making. A major piece of this puzzle is constituted by intrinsic motivations, i.e., psychological needs that drive behavior without extrinsic reinforcement such as game score. We first review the common intrinsic motivations discussed in player psychology research and artificial intelligence, and then proceed to systematically review how the various motivations have been implemented in simulated player agents. Our work reveals that although motivations such as competence and curiosity have been studied in AI, work on utilizing them in simulation-based game testing is sparse, and other motivations such as social relatedness, immersion, and domination appear particularly underexplored.
Semi-Automated Coding for Qualitative Research: A User-Centered Inquiry and Initial Prototypes
Qualitative researchers perform an important and painstaking data annotation process known as coding. However, much of the process can be tedious and repetitive, becoming prohibitive for large datasets. Could coding be partially automated, and should it be? To answer this question, we interviewed researchers and observed them code interview transcripts. We found that across disciplines, researchers follow several coding practices well-suited to automation. Further, researchers desire automation after having developed a codebook and coded a subset of data, particularly in extending their coding to unseen data. Researchers also require any assistive tool to be transparent about its recommendations. Based on our findings, we built prototypes to partially automate coding using simple natural language processing techniques. Our top-performing system generates coding that matches human coders on inter-rater reliability measures. We discuss implications for interface and algorithm design, meta-issues around automating qualitative research, and suggestions for future work.
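As an illustration of the "simple natural language processing techniques" mentioned above, the sketch below suggests codes for unseen transcript segments via TF-IDF similarity to segments a researcher has already coded, reporting the similarity so the researcher stays in control. It is a hypothetical sketch, not the authors' prototypes; the segments and codes are invented.

```python
# A hypothetical sketch of one simple NLP approach, not the authors' prototypes: suggest
# codes for unseen transcript segments via TF-IDF similarity to already-coded segments.
# The segments and codes below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

coded_segments = [
    "I always check the app before leaving the house",
    "My manager reviews the numbers every Friday",
]
codes = ["routine-use", "oversight"]
new_segments = ["She looks at the dashboard each morning before work"]

vectorizer = TfidfVectorizer()
X_coded = vectorizer.fit_transform(coded_segments)
X_new = vectorizer.transform(new_segments)

# Suggest the code of the most similar coded segment, with the similarity score shown
# so the researcher can accept, reject, or revise the suggestion.
for segment, sims in zip(new_segments, cosine_similarity(X_new, X_coded)):
    best = sims.argmax()
    print(f"{segment!r} -> suggest {codes[best]!r} (similarity {sims[best]:.2f})")
```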
An Eye For Design: Gaze Visualizations for Remote Collaborative Work
In remote collaboration, gaze visualizations are designed to display where collaborators are looking in a shared visual space. This type of gaze-based intervention can improve coordination; however, researchers have yet to fully explore different gaze visualization techniques and develop a deeper understanding of the ways in which features of visualizations may interact with task attributes to influence collaborative performance. There are many ways to visualize characteristics of eye movements, such as a path connecting fixation points or a heat map illustrating fixation duration and coverage. In this study, we designed and evaluated three unique gaze visualizations in a remote search task. Our results suggest that the design of gaze visualizations affects performance, coordination, searching behavior, and perceived utility. Additionally, the degree of task coupling further influences the effect of gaze visualizations on performance and coordination. We then reflect on the value of gaze visualizations for remote work and discuss implications for the design of gaze-based interventions.
Assisting Students with Intellectual and Developmental Disabilities in Inclusive Education with Smartwatches
Smartwatches have a large potential to support everyday activities. However, their potential as assistive technologies in inclusive academic environments is unclear. To investigate how smartwatches can support students with intellectual and developmental disabilities (IDDs) to perform activities that require emotional and behavioral skills and involve communication, collaboration and planning, we implemented WELI. WELI (Wearable Life) is a wearable application designed to assist young adults with IDDs attending a postsecondary education program. This paper reports on the user-centric design process adopted in the development of WELI, and describes how smartwatches can assist students with IDDs in special education. The results reported are drawn from 8 user studies with 58 participants in total. WELI features include behavioral intervention, mood regulation, reminders, checklists, surveys and rewards. Results indicate that several considerations must be taken into account when designing for students with IDD, and that overall the students are enthusiastic about adopting an innovative smartwatch application in class, as they reacted positively about the technology and features provided.
Making as Expression: Informing Design with People with Complex Communication Needs through Art Therapy
There is a growing emphasis on designing with people with diverse health experiences rather than designing for them. Yet, collaborative design becomes difficult when working with individuals with health conditions (e.g., stroke, cancer, abuse, depression) that affect their ability or willingness to engage alongside researchers and verbally express themselves. The present paper analyzes how the clinical practice of art therapy engages these individuals in co-creative, visual expression of ideas, thoughts, and experiences. Drawing on interviews with 22 art therapists and over two years of field work in a clinical setting, we detail how art therapists view making as expression for people with complex communication needs. Under this view, we argue that art therapy practice can inspire collaborative design engagements by understanding materials as language, creating space for expression, and sustaining expressions in a broader context. We discuss practical and ethical implications for design work involving individuals with complex communication needs.
The Unexpected Entry and Exodus of Women in Computing and HCI in India
In India, women represent 45% of total computer science enrollment in universities, almost three times the rate in the United States, where it is 17%. At the same time, women make up an estimated 25-30% of the HCI community in India, half the rate in the U.S. We investigate the complexities of these surprising phenomena through qualitative research of Indian computer science and human-computer interaction researchers and professionals at various life stages. We find among other things that Indian familial norms play a significant role in pressuring young women into computing as a field; that familial pressures and workplace discrimination then cause a precipitous exit of women from computing at the onset of marriage; and that HCI occupies an interstitial space between art and technology that affects women’s careers. Our findings underscore the societal influence on women’s representation in the tech sector and invite further participation by the HCI community in related questions.
ProtoAR: Rapid Physical-Digital Prototyping of Mobile Augmented Reality Applications
The latest generations of smartphones with built-in AR capabilities enable a new class of mobile apps that merge digital and real-world content depending on a user’s task, context, and preference. But even experienced mobile app designers face significant challenges: creating 2D/3D AR content remains difficult and time-consuming, and current mobile prototyping tools do not support AR views. There are separate tools for this; however, they require significant technical skill. This paper presents ProtoAR which supplements rapid physical prototyping using paper and Play-Doh with new mobile cross-device multi-layer authoring and interactive capture tools to generate mobile screens and AR overlays from paper sketches, and quasi-3D content from 360-degree captures of clay models. We describe how ProtoAR evolved over four design jams with students to enable interactive prototypes of mobile AR apps in less than 90 minutes, and discuss the advantages and insights ProtoAR can give designers.
SurfaceConstellations: A Modular Hardware Platform for Ad-Hoc Reconfigurable Cross-Device Workspaces
We contribute SurfaceConstellations, a modular hardware platform for linking multiple mobile devices to easily create novel cross-device workspace environments. Our platform combines the advantages of multi-monitor workspaces and multi-surface environments with the flexibility and extensibility of more recent cross-device setups. The SurfaceConstellations platform includes a comprehensive library of 3D-printed link modules to connect and arrange tablets into new workspaces, several strategies for designing setups, and a visual configuration tool for automatically generating link modules. We contribute a detailed design space of cross-device workspaces, a technique for capacitive links between tablets for automatic recognition of connected devices, designs of flexible joint connections, detailed explanations of the physical design of 3D printed brackets and support structures, and the design of a web-based tool for creating new SurfaceConstellation setups.
Revisiting “The Rise and Decline” in a Population of Peer Production Projects
Do patterns of growth and stabilization found in large peer production systems such as Wikipedia occur in other communities? This study assesses the generalizability of Halfaker et al.’s influential 2013 paper on “The Rise and Decline of an Open Collaboration System.” We replicate its tests of several theories related to newcomer retention and norm entrenchment using a dataset of hundreds of active peer production wikis from Wikia. We reproduce the subset of the findings from Halfaker and colleagues that we are able to test, comparing both the estimated signs and magnitudes of our models. Our results support the external validity of Halfaker et al.’s claims that quality control systems may limit the growth of peer production communities by deterring new contributors and that norms tend to become entrenched over time.
Designing Pronunciation Learning Tools: The Case for Interactivity against Over-Engineering
Paired role-play is a common collaborative activity in language learning classrooms, adding meaning and cultural context to the learning process. This is complemented by teachers’ immediate and explicit feedback. Interactive tools that provide explicit feedback during collaborative learning are scarce, however. More commonly, supporting dialogue practice takes the form of computer-aided single-student read-and-record activities. This limitation is partly due to the complexity of processing language learners’ speech in unconstrained tasks. In this paper, we assess the value of pronunciation error detection algorithms within a realistic, software-aided, paired role-playing task with beginning learners of French. We found that students’ pronunciations improve regardless of the type of error detector employed — even for those using simple heuristics. We suggest that speech technologies for language learning have been too focused on engineering goals. Instead, new interactive designs supporting collaboration may be used to overcome engineering limitations and properly support students’ engagement.
From Her Story, to Our Story: Digital Storytelling as Public Engagement around Abortion Rights Advocacy in Ireland
Despite the divisive nature of abortion within the Republic of Ireland and Northern Ireland, where access to safe, legal abortion is severely restricted, effecting legislative reform demands widespread public support. In light of a building pro-choice counter-voice, this work contributes to a growing body of HCI research that takes an activist approach to design. We report findings from four design workshops with 31 pro-choice stakeholders across Ireland in which we positioned an exploratory protosite, HerStoryTold, to engender critical conversations around the use of sensitive abortion narratives as a tool for engagement. Our analysis shows how digital storytelling can help reject false narratives and raise awareness of the realities of abortion laws. It suggests design directions to curate narratives that provoke empathy, foster polyvocality, and ultimately expand the engaged community. Furthermore, this research calls for designers to actively support community mobilization through providing ‘stepping stones’ to activism.
Capturing, Representing, and Interacting with Laughter
We investigate a speculative future in which we celebrate happiness by capturing laughter and representing it in tangible forms. We explored technologies for capturing naturally occurring laughter as well as various physical representations of it. For several weeks, our participants collected audio samples of everyday conversations with their loved ones. We processed those samples through a machine learning algorithm and shared the resulting tangible representations (e.g., physical containers and edible displays) with our participants. In collecting, listening to, interacting with, and sharing their laughter with loved ones, participants described both joy in preserving and interacting with laughter and tension in collecting it. This study revealed that the tangibility of laughter representations matters, especially its symbolism and material quality. We discuss design implications of giving permanent forms to laughter and consider the sound of laughter as a part of our personal past that we might seek to preserve and reflect upon.
Geocaching with a Beam: Shared Outdoor Activities through a Telepresence Robot with 360 Degree Viewing
People often enjoy sharing outdoor activities together such as walking and hiking. However, when family and friends are separated by distance it can be difficult if not impossible to share such activities. We explore this design space by investigating the benefits and challenges of using a telepresence robot to support outdoor leisure activities. In our study, participants engaged in the outdoor activity of geocaching, where one person geocached with the help of a remote partner via a telepresence robot. We compared a wide field of view (WFOV) camera to a 360° camera. Results show the benefits of having a physical embodiment and a sense of immersion with the 360° view. Yet challenges arose related to a lack of environmental awareness, safety issues, and privacy concerns resulting from bystander interactions. These findings illustrate the need to design telepresence robots with the environment and the public in mind, to provide an enhanced sensory experience while balancing the safety and privacy issues that come with being amongst the general public.
PalmTouch: Using the Palm as an Additional Input Modality on Commodity Smartphones
Touchscreens are the most successful input method for smartphones. Despite their flexibility, touch input is limited to the location of taps and gestures. We present PalmTouch, an additional input modality that differentiates between touches of fingers and the palm. Touching the display with the palm can be a natural gesture since moving the thumb towards the device’s top edge implicitly places the palm on the touchscreen. We present different use cases for PalmTouch, including the use as a shortcut and for improving reachability. To evaluate these use cases, we have developed a model that differentiates between finger and palm touch with an accuracy of 99.53% in realistic scenarios. Results of the evaluation show that participants perceive the input modality as intuitive and natural to perform. Moreover, they appreciate PalmTouch as an easy and fast solution to address the reachability issue during one-handed smartphone interaction compared to thumb stretching or grip changes.
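For a sense of how finger/palm differentiation might work in principle, the sketch below trains a toy classifier on simple blob features of a touch (contact area, eccentricity, mean capacitance). This is purely illustrative and is not the model described in the paper; the features and data are invented.

```python
# A purely illustrative sketch, not the model described in the paper: classify a touch
# as 'finger' or 'palm' from simple blob features (contact area, eccentricity, mean
# capacitance) with a toy classifier. All data below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# one row per labeled touch: [area_px, eccentricity, mean_capacitance]
X_train = np.array([
    [40, 0.30, 0.55],    # finger taps: small, roundish contacts
    [55, 0.42, 0.60],
    [900, 0.85, 0.40],   # palm contacts: large, elongated regions
    [1200, 0.90, 0.35],
])
y_train = np.array(["finger", "finger", "palm", "palm"])

clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict([[70, 0.35, 0.58], [1000, 0.88, 0.38]]))  # expected on this toy data: finger, palm
```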
How “Wide Walls” Can Increase Engagement: Evidence From a Natural Experiment in Scratch
A core aim for designing constructionist learning systems and toolkits is enabling “wide walls”-a metaphor used to describe supporting a diverse range of creative outcomes. Ensuring that a broad design space is afforded to learners by a toolkit is a common approach to achieving wide walls. We use econometric methods to provide an empirical test of the wide walls theory through a natural experiment in the Scratch online community. We estimate the causal effect of a policy change that gave a large number of Scratch users access to a more powerful version of Scratch data structures, effectively widening the walls for learners. We show that access to and use of these more powerful new data structures caused learners to use data structures more frequently. Our findings provide support for the theory that wide walls can increase engagement and learning.
Activity Tracking in vivo
While recent research has emphasized the importance of understanding the lived experience of personal tracking, very little is known about the everyday coordination between tracker use and the surrounding environment. We combine behavioral data from trackers with video recordings from wearable cameras, in an attempt to understand how usage unfolds in daily life and how it is shaped by the context of use. We recorded twelve participants’ daily use of activity trackers, collecting and analyzing 244 incidents where activity trackers were used. Among our findings, tracker use was strongly driven by reflection and learning-in-action, in contrast to the traditional view that learning is a process of deep exploration following the collection of data on behaviors. We build on these insights and propose three directions for the design of activity trackers: facilitating learning through glances, providing normative feedback, and facilitating micro-plans.
Understanding Users’ Capability to Transfer Information between Mixed and Virtual Reality: Position Estimation across Modalities and Perspectives
Mixed Reality systems combine physical and digital worlds, with great potential for the future of HCI. It is possible to design systems that support flexible degrees of virtuality by combining complementary technologies. In order for such systems to succeed, users must be able to create unified mental models out of heterogeneous representations. In this paper, we present two studies focusing on the users’ accuracy on heterogeneous systems using Spatial Augmented Reality (SAR) and immersive Virtual Reality (VR) displays, and combining viewpoints (egocentric and exocentric). The results show robust estimation capabilities across conditions and viewpoints.
Seemo: A Computational Approach to See Emotions
Successful human interactions are based on becoming aware of others’ emotions and making adaptations accordingly. However, understanding emotion is a complex task that has generated countless debates among researchers over the past decades. The abstract nature of human emotion highlights the need for a new data-driven approach that can better describe and compare across fine-grained emotional states. In this study, we propose Seemo, a novel neural embedding framework, which allows us to map human emotions into vector space representations. Seemo is trained using Twitter data and is evaluated on two fundamental use cases in traditional emotion research: determining the underlying dimensions of emotions and identifying the set of basic emotions. The evaluation reveals that on both tasks Seemo can generate results consistent with the mainstream theories. Results also show that the vector space representation of Seemo can effectively decode important relationships between emotions that are usually not explicitly represented.
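To illustrate the general idea of mapping emotions into a vector space, the toy sketch below builds word co-occurrence counts from a few invented tweets, factors them with SVD, and compares emotion-word vectors by cosine similarity. It is a didactic stand-in, not the Seemo framework; all tweets and vocabulary are invented.

```python
# A toy sketch of the general idea, not the Seemo framework: build word co-occurrence
# counts from a few invented tweets, factor them with SVD, and compare emotion-word
# vectors by cosine similarity.
import numpy as np

tweets = [
    "so happy and excited about the trip",
    "feeling happy and joyful today",
    "sad and lonely this evening",
    "angry and frustrated about the delay",
]
tokens = [t.split() for t in tweets]
vocab = sorted({w for t in tokens for w in t})
index = {w: i for i, w in enumerate(vocab)}

# symmetric within-tweet co-occurrence counts
C = np.zeros((len(vocab), len(vocab)))
for t in tokens:
    for w1 in t:
        for w2 in t:
            if w1 != w2:
                C[index[w1], index[w2]] += 1

# low-dimensional embeddings from a truncated SVD of the co-occurrence matrix
U, S, _ = np.linalg.svd(C)
emb = U[:, :3] * S[:3]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# compare emotion words in the learned space
print("happy vs joyful:", cosine(emb[index["happy"]], emb[index["joyful"]]))
print("happy vs sad:   ", cosine(emb[index["happy"]], emb[index["sad"]]))
```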
Knowing You, Seeing Me: Investigating User Preferences in Drone-Human Acknowledgement
In the past, human proxemics research has poorly predicted human-robot interaction distances. This paper presents three studies on drone gestures to acknowledge human presence and clarify suitable acknowledging distances. We evaluated four drone gestures based on non-verbal human greetings. The gestures included orienting towards the counterpart and salutation gestures. We tested these individually and in combination to create a feeling of acknowledgement in people. Our users preferred being acknowledged from two meters away, but gestures were also effective from four meters. Rotating the drone towards the user elicited a higher degree of acknowledgement than not rotating it. We conclude with a set of design guidelines for drone gestures.
Evaluating CoBlox: A Comparative Study of Robotics Programming Environments for Adult Novices
A new wave of collaborative robots designed to work alongside humans is bringing the automation historically seen in large-scale industrial settings to new, diverse contexts. However, the ability to program these machines often requires years of training, making them inaccessible or impractical for many. This paper rethinks what robot programming interfaces could be in order to make them accessible and intuitive for adult novice programmers. We created a block-based interface for programming a one-armed industrial robot and conducted a study with 67 adult novices comparing it to two programming approaches in widespread use in industry. The results show participants using the block-based interface successfully implemented robot programs faster with no loss in accuracy while reporting higher scores for usability, learnability, and overall satisfaction. The contribution of this work is showing the potential for using block-based programming to make powerful technologies accessible to a wider audience.
Rolling-Menu: Rapid Command Selection in Toolbars Using Roll Gestures with a Multi-DoF Mouse
This paper presents Rolling-Menu, a technique for selecting toolbar items, based on the use of roll gestures with a multidimensional device, the Roly-Poly Mouse (RPM). Rolling-Menu reduces the object-command transition, resulting in a better integration between command selection and direct manipulation of application objects. Selecting a toolbar item with Rolling-Menu requires rolling RPM in a predefined direction corresponding to the item. We propose a design space of Rolling-Menu that includes different roll mappings and validation modes. A first user study, with a simple toolbar containing up to 14 items, establishes that the best version of Rolling-Menu takes, on average, up to 29% less time than the Mouse to select a toolbar item. Moreover, the accuracy of selection with Rolling-Menu is above 90%. Both the validation mode and the mapping between roll direction and toolbar items influence the performance of Rolling-Menu. A second study compares the three best versions of Rolling-Menu with the Mouse for selecting an item in two types of multidimensional toolbars: a toolbar containing dropdown lists, and a grid toolbar. Results confirm the advantage of Rolling-Menu over a Mouse.
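One simple way to picture the mapping from roll direction to toolbar item is to quantize the roll vector's angle into sectors, one per item, with a dead zone for accidental wobble. The sketch below is a hypothetical illustration, not the authors' implementation; the item list, threshold, and mapping are assumptions.

```python
# A hypothetical illustration, not the authors' implementation: map the roll direction
# of a multi-DoF mouse to a toolbar item by quantizing the roll angle into sectors,
# with a dead zone for accidental wobble. The item list and threshold are assumptions.
import math

TOOLBAR = ["open", "save", "cut", "copy", "paste", "undo", "redo", "print"]

def item_for_roll(roll_x, roll_y, items=TOOLBAR):
    """Return the toolbar item for a 2D roll vector, or None if the roll is too small."""
    magnitude = math.hypot(roll_x, roll_y)
    if magnitude < 0.15:                      # dead zone: ignore small, unintended rolls
        return None
    angle = math.atan2(roll_y, roll_x) % (2 * math.pi)
    sector = 2 * math.pi / len(items)         # one angular sector per item
    return items[int((angle + sector / 2) // sector) % len(items)]

print(item_for_roll(0.5, 0.0))   # roll to the right  -> first item
print(item_for_roll(0.0, 0.5))   # roll forward       -> item a quarter-turn around the ring
```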
Effects of Enhanced Gaze Presentation on Gaze Leading in Remote Collaborative Physical Tasks
With respect to collaborative physical tasks, gaze and gestures play significant roles when referring to physical objects. In video-mediated communication, however, such nonverbal cues become “ineffectual” when they are presented via a 2D monitor, making video-mediated collaborative physical tasks inefficient. This study focuses on gaze cues to support remote collaborative physical tasks and uses an eye-shaped display, “ThirdEye,” a simple add-on display that represents a remote participant’s gaze direction. ThirdEye is expected to be especially effective when used with mobile terminals. We investigated whether the ThirdEye’s gaze shift is effective in leading a local observer’s attention toward objects in the local environment, even when ThirdEye is presented with the actual face image of a remote person. Experimental results show that ThirdEye can lead the local participant’s attention to intended objects faster than without ThirdEye.
Dream Lens: Exploration and Visualization of Large-Scale Generative Design Datasets
This paper presents Dream Lens, an interactive visual analysis tool for exploring and visualizing large-scale generative design datasets. Unlike traditional computer aided design, where users create a single model, with generative design, users specify high-level goals and constraints, and the system automatically generates hundreds or thousands of candidates all meeting the design criteria. Once a large collection of design variations is created, the designer is left with the task of finding the design, or set of designs, which best meets their requirements. This is a complicated task which could require analyzing the structural characteristics and visual aesthetics of the designs. Two studies are conducted which demonstrate the usability and usefulness of the Dream Lens system, and a generatively designed dataset of 16,800 designs for a sample design problem is described and publicly released to encourage advancement in this area.
Seeing What Is and What Can Be: On Sustainability, Respect for Work, and Design for Respect
This paper privileges visual contributions-original images and referenced materials-nearly as much as text. As such, it follows a trend towards pictorials and image intensive papers elsewhere in SIGCHI venues that have yet to find acceptance in the CHI paper tracks. The paper in both its text and its visual contributions takes up (a) ongoing questions of how designs matter-especially in relation to sustainability, (b) questions of extending notions of sustainability beyond the environment to include notions from respect for human labor to respect between nations, (c) questions of the utility of photographic methods in building design understanding and conceptualization, (d) questions of emphasis and extension for Rams’ principles of good design, and (e) hypotheses about the relations between seemingly small design details and global attitudes, policy, and harmony, inspired by Allison’s account of Thucydides’ Trap. These are big questions. It is their ambitious character that unifies them.
Everything We Do, Everything We Press: Data-Driven Remote Performance Management in a Mobile Workplace
This paper examines how data-driven performance monitoring technologies affect the work of telecommunications field engineers. As a mobile workforce, this occupational group relies on an array of smartphone applications to plan, manage and report on their jobs, and to liaise remotely with managers and colleagues. These technologies are intended to help field engineers be more productive and have greater control over their work; however, they also gather data related to the quantity and effectiveness of their labor. We conducted a qualitative study examining engineers’ experiences of these systems. Our findings suggest they simultaneously enhance worker autonomy and support coordination with and monitoring of colleagues, but promote anxieties around productivity and the interpretation of data by management. We discuss the implications of data-driven performance management technologies for worker agency, and examine the consequences of such systems in an era of quantified workplaces.
Social Affordances at Play: Game Design Toward Socio-Technical Innovation
In this paper we propose that game design strategies and theories can be useful tools for supporting the design of innovative socio-technical systems aimed at supporting social co-presence. We support this proposal with an annotated portfolio of a series of research prototype games that investigate sensor affordances and configurations to sustain and enhance social co-presence. We introduce relevant theory from game studies (the magic circle; the MDA, or mechanics/dynamics/aesthetics, framework) to help ground and guide the use of game design in HCI practice. We conclude with recommendations for adopting game design as a supplementary research technique, with caveats about the limits of the approach.
A Critical Examination of Feedback in Early Reading Games
Learning games now play a role in both formal and informal learning, including foundational skills such as literacy. While feedback is recognised as a key pedagogical dimension of these games, particularly in early learning, there has been no research on how commercial games available to schools and parents reify learning theory into feedback. Using a systematic content analysis, we examine how evidence-based feedback principles manifest in five widely-used learning games designed to foster young children’s reading skills. Our findings highlight strengths in how games deliver feedback when players succeed. Many of the games, however, were inconsistent and not proactive when providing error feedback, often promoting trial and error strategies. Furthermore, there was a lack of support for learning the game mechanics and a preference for task-oriented rewards less deeply embedded in the gameplay. Our research provides a design and research agenda for the inclusion of feedback in early learning games.
HCI meets Material Science: A Literature Review of Morphing Materials for the Design of Shape-Changing Interfaces
With the proliferation of flexible displays and the advances in smart materials, it is now possible to create interactive devices that are not only flexible but can reconfigure into any shape on demand. Several Human Computer Interaction (HCI) and robotics researchers have started designing, prototyping and evaluating shape-changing devices, realising, however, that this vision still requires many engineering challenges to be addressed. On the material science front, we need breakthroughs in stable and accessible materials to create novel, proof-of-concept devices. On the interactive devices side, we require a deeper appreciation for the material properties and an understanding of how exploiting material properties can provide affordances that unleash the human interactive potential. While these challenges are interesting for the respective research fields, we believe that the true power of shape-changing devices can be magnified by bringing together these communities. In this paper we therefore present a review of advances made in shape-changing materials and discuss their applications within an HCI context.
Ticket to Talk: Supporting Conversation between Young People and People with Dementia through Digital Media
We explore the role of digital media in supporting intergenerational interactions between people with dementia and young people. Though meaningful social interaction is integral to quality of life in dementia, initiating conversation with a person with dementia can be challenging, especially for younger people who may lack knowledge of someone’s life history. This can be further compounded by a limited understanding of the nature of dementia and unfamiliarity with leading and maintaining conversation. We designed a mobile application – Ticket to Talk – to support intergenerational interactions by encouraging young people to collect media relevant to an individual with dementia and use it in conversations with them. We evaluated Ticket to Talk through trials with two families, a care home, and groups of older people. We highlight difficulties in using technologies such as this as a conversational tool, the value of digital media in supporting intergenerational interactions, and the potential to positively shape the agency of people with dementia in social settings.
Falling for Fake News: Investigating the Consumption of News via Social Media
In the so-called ‘post-truth’ era, characterized by a loss of public trust in various institutions and the rise of ‘fake news’ disseminated via the internet and social media, individuals may face uncertainty about the veracity of available information, whether it be satire or a malicious hoax. We investigate attitudes to news delivered by social media, and the subsequent verification strategies applied, or not applied, by individuals. A survey reveals that two-thirds of respondents regularly consumed news via Facebook, and that one-third had at some point come across fake news that they initially believed to be true. An analysis task involving news presented via Facebook reveals a diverse range of judgement-forming strategies, with participants relying on personal judgements of plausibility and scepticism around sources and journalistic style. This reflects a shift away from traditional methods of accessing the news, and highlights the difficulties in combating the spread of fake news.
‘It’s Reducing a Human Being to a Percentage’: Perceptions of Justice in Algorithmic Decisions
Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to ‘meaningful information about the logic’ behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people’s perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions, including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles matter to justice perceptions primarily when subjects are exposed to multiple different styles—under repeated exposure to one style, scenario effects obscure any explanation effects. Our results suggest there may be no ‘best’ approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.
I Really did That: Sense of Agency with Touchpad, Keyboard, and On-skin Interaction
Input on the skin is emerging as an interaction style. At CHI 2012, Coyle and colleagues identified an increase in the sense of agency (SoA) as one benefit of skin input. However, their study only compared skin input to button presses and has not, to our knowledge, been replicated. Therefore, we had 24 participants compare skin input to both button presses and touchpad input, measuring SoA using the Libet Clock paradigm. We replicate previous findings regarding increased SoA in skin versus button input and also find that SoA for skin is significantly increased compared to touchpad input. Interview data addressing subjective experience further support these findings. We discuss agency and the experiences associated with skin input, as well as differences to input with non-skin devices.
The Impact of Abstract vs. Concrete Feedback Design on Behavior: Insights from a Large Eco-Driving Field Experiment
About 17% of worldwide CO2 emissions can be ascribed to road transportation. Information systems (IS)-enabled feedback has been shown to be effective in promoting a less fuel-consuming driving style. Today, in-car IS that provide feedback on driving behavior are in the midst of a fundamental change. Increasing digitalization of in-car IS enables virtually any kind of feedback. Still, we see a gap in the empirical evidence on how to leverage this potential, raising questions about future HCI-based feedback design. To address this knowledge gap, we designed an eco-driving feedback IS and, building upon construal level theory, hypothesize that abstract feedback is more effective in reducing fuel consumption than concrete feedback. Deployed in a large field experiment with 56 participants covering over 297,000 km, we provide first empirical evidence that supports this hypothesis. Despite its limitations, this research may have general implications for the design of real-time feedback.
Tangible Landscape: A Hands-on Method for Teaching Terrain Analysis
This paper presents novel and effective methods for teaching about topography (the shape of terrain) and assessing 3-dimensional spatial learning using tangibles. We used Tangible Landscape, a tangible interface for geospatial modeling, to teach multiple hands-on tangible lessons on the concepts of grading (i.e., earthwork), geomorphology, and hydrology. We examined students’ ratings of the system’s usability and user experience and tested students’ acquisition and transfer of knowledge. Our results suggest the physicality of the objects enabled the participants to effectively interact with the system and each other, positively impacting ratings of usability and task-specific knowledge building. These findings can potentially advance the design and implementation of tangible teaching methods for the topics of geography, design, architecture, and engineering.
[Un]breaking News: Design Opportunities for Enhancing Collaboration in Scientific Media Production
Contemporary scientific media production requires a complex socio-technical infrastructure we call the “Media Production Pipeline” (MPP). Media professionals engage with researchers along the MPP to disseminate science news to the lay public. However, differing incentive structures and professional contexts frequently set researchers’ values and needs at odds with those of media professionals, resulting in problematic or failed interactions. We ask the research question: what pain points in scientific media production afford opportunities for future HCI innovation? We then present a grounded theory analysis of 24 interviews with researchers and media professionals, yielding several key contributions. First, we describe two collaborative domains in scientific media production between research advocates and media outlets. Second, we characterize discrete technological gaps and pain points in both domains. Finally, we discuss implications for design and propose solutions from HCI areas like peer production, online communities, recommender systems, and online collaboration.
ThermoKiosk: Investigating Roles for Digital Surveys of Thermal Experience in Workplace Comfort Management
Thermal comfort in shared workplaces is often contested and impacts productivity, wellbeing, and energy use. Yet, subjective and situated comfort experiences are rarely captured and engaged with. In this paper, we explore roles for digital surveys in capturing and visualising subjective experiences of comfort in situ for comfort management. We present findings from a 3-week field trial of our prototype system called ThermoKiosk, which we deployed in an open plan, shared office with a history of thermal comfort complaints. In interviews with occupants and members of facilities management, we find that the data and interactions can play an important role in initiating dialogue to understand and handle tensions, and point to design considerations for more systematically integrating them into workplace comfort practices.
The Privilege of Immersion: Racial and Ethnic Experiences, Perceptions, and Beliefs in Digital Gaming
People of color comprise a large proportion of the US player base, yet are systematically and grossly underrepresented in digital games. We constructed a survey to assess whether players perceive this underrepresentation, how they experience these representations, and to sample their beliefs about diversity and gaming. Mixed-methods analyses show significant differences between players of color and White players on perception of racial norms in gaming, effects on behavior, emotions, player satisfaction, engagement, and beliefs stemming from a lack of diversity. Players of all races and ethnicities overwhelmingly expressed a desire for greater diversity. We discuss reasons why our methodology shows higher dissatisfaction than previous research and discuss our findings in the context of the industry’s challenge to meet audience demands for greater racial diversity in games.
Stitching Infrastructures to Facilitate Telemedicine for Low-Resource Environments
Telemedicine can potentially transform healthcare delivery in low-resource environments by enabling extension of medical knowledge to remote locations, thus enhancing the efficiency and effectiveness of the larger healthcare infrastructure. However, empirical studies have shown mixed results at best. We present a qualitative investigation of a long-standing telemedicine program operating from Lucknow (Uttar Pradesh, India). Invoking the lenses of human infrastructure and seamful spaces, we highlight the factors that determine the success of this telemedicine program. We identify and describe three important aspects: (1) conceptualizing telemedicine as the connectedness of two nodes rather than doctors and patients alone, (2) identifying the critical ‘carrying agent’ (local doctors at peripheral nodes) and engaging them in program design and implementation, and (3) ensuring co-creation by engaging patients in the process. Finally, we discuss how our lenses allowed us to recognize the seams made visible through the juxtaposition of the infrastructures at the central and peripheral nodes, and to emphasize the human elements that addressed these seams for ensuring the facilitation of a successful telemedicine program.
BIGFile: Bayesian Information Gain for Fast File Retrieval
We introduce BIGFile, a new fast file retrieval technique based on the Bayesian Information Gain framework. BIGFile provides interface shortcuts to assist the user in navigating to a desired target (file or folder). BIGFile’s split interface combines a traditional list view with an adaptive area that displays shortcuts to the set of file paths estimated by our computationally efficient algorithm. Users can navigate the list as usual, or select any part of the paths in the adaptive area. A pilot study of 15 users informed the design of BIGFile, revealing the size and structure of their file systems and their file retrieval practices. Our simulations show that BIGFile outperforms Fitchett et al.’s AccessRank, a best-of-breed prediction algorithm. We conducted an experiment to compare BIGFile with ARFile (AccessRank instantiated in a split interface) and with a Finder-like list view as baseline. BIGFile was by far the most efficient technique (up to 44% faster than ARFile and 64% faster than Finder), and participants unanimously preferred the split interfaces to the Finder.
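For readers unfamiliar with the Bayesian Information Gain idea, the sketch below is a deliberately simplified, hypothetical illustration: given a prior over likely targets, it scores a candidate set of shortcuts by how much the user's next action (clicking a shortcut or falling back to the list) is expected to reduce uncertainty about the target. The toy prior, the prefix-matching rule, and all names are assumptions; this is not BIGFile's actual algorithm.

```python
import math
from itertools import combinations

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_information_gain(prior, shortcuts):
    """Expected reduction in uncertainty about the target after observing which
    shortcut (or the plain list, 'LIST') the user selects.
    prior: dict mapping candidate target path -> probability."""
    h_prior = entropy(prior.values())
    # Partition targets by the response they would trigger: a target is covered
    # by a shortcut when the shortcut is a prefix of that target's path.
    responses = {}
    for target, p in prior.items():
        hit = next((s for s in shortcuts if target.startswith(s)), "LIST")
        responses.setdefault(hit, []).append(p)
    h_posterior = 0.0
    for probs in responses.values():
        mass = sum(probs)
        h_posterior += mass * entropy([p / mass for p in probs])
    return h_prior - h_posterior

# Toy usage: pick the pair of shortcuts with the highest expected gain.
prior = {"/home/docs/report.tex": 0.4, "/home/docs/notes.txt": 0.3,
         "/home/music/track.mp3": 0.2, "/home/pics/cat.png": 0.1}
best = max(combinations(prior, 2),
           key=lambda s: expected_information_gain(prior, s))
print(best)
```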
The Use of Private Mobile Phones at War: Accounts From the Donbas Conflict
Studying technology use in unstable and life-threatening conditions can help highlight assumptions of use built into technologies and foreground contradictions in the design of devices and services. This paper provides an account of how soldiers, volunteers, and civilians use mobile technologies in wartime, reporting on fieldwork conducted in Western Russia and Eastern Ukraine with people close to or participating directly in the armed conflict in the Donbas region. We document how private mobile phones and computers became a crucial but ambiguous infrastructure, despite their lack of durability in the extreme conditions of a military conflict and their potential for government and military surveillance. Our participants rely on a combination of myths and significant technical knowledge to negotiate the possibilities mobile technologies offer and the life-threatening reality of enemy surveillance they engender. We consider the problems of always-on, always-connected devices under conditions of war and surveillance, and our responsibilities as HCI practitioners in the design of social technologies.
ConceptScape: Collaborative Concept Mapping for Video Learning
While video has become a widely adopted medium for online learning, existing video players provide limited support for navigation and learning. It is difficult to locate parts of the video that are linked to specific concepts. Also, most video players afford passive watching, thus making it difficult for learners with limited metacognitive skills to deeply engage with the content and reflect on their understanding. To support concept-driven navigation and comprehension of lecture videos, we present ConceptScape, a system that generates and presents a concept map for lecture videos. ConceptScape engages crowd workers to collaboratively generate a concept map by prompting them to externalize reflections on the video. We present two studies to show that (1) interactive concept maps can be useful tools for concept-based video navigation and comprehension, and (2) with ConceptScape, novice crowd workers can collaboratively generate complex concept maps that match the quality of those by experts.
Expressive Time Series Querying with Hand-Drawn Scale-Free Sketches
We present Qetch, a tool where users freely sketch patterns on a scale-less canvas to query time series data without specifying query length or amplitude. We study how humans sketch time series patterns — humans preserve visually salient perceptual features but often non-uniformly scale and locally distort a pattern — and we develop a novel matching algorithm that accounts for human sketching errors. Qetch enables the easy construction of complex and expressive queries with two key features: regular expressions over sketches and relative positioning of sketches to query multiple time-aligned series. Through user studies, we demonstrate the effectiveness of Qetch’s different interaction features. We also demonstrate the effectiveness of Qetch’s matching algorithm compared to popular algorithms on targeted and exploratory query-by-sketch search tasks across a variety of data sets.
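Qetch's matcher is not reproduced here; as a rough stand-in, the sketch below pairs z-normalization (to discount amplitude and offset) with classic dynamic time warping, a well-known technique that tolerates the kind of local stretching and compression the abstract describes. It is a baseline illustration, not the paper's algorithm.

```python
import math

def znorm(seq):
    """Remove amplitude and offset so only the shape matters."""
    mean = sum(seq) / len(seq)
    std = (sum((x - mean) ** 2 for x in seq) / len(seq)) ** 0.5 or 1.0
    return [(x - mean) / std for x in seq]

def dtw_distance(sketch, window):
    """Classic dynamic time warping: tolerates local stretching/compression."""
    a, b = znorm(sketch), znorm(window)
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

def best_match(sketch, series, window_len):
    """Slide a window over the series; return the start index of the best match."""
    return min(range(len(series) - window_len + 1),
               key=lambda i: dtw_distance(sketch, series[i:i + window_len]))
```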
Prayana: Intermediated Financial Management in Resource-Constrained Settings
We describe the design of a novel mobile phone-based application for loan management in a resource-constrained setting. In this setting, a social enterprise manages auto-rickshaw loans for drivers, taking charge of collections. The design was informed by an ethnographic study which revealed how loan management for this financially vulnerable population is a daily struggle, and loan payment is a collaborative achievement between collectors and drivers. However, drivers and collectors have limited resources to hand for loan management. To address this, we designed Prayana, an intermediated financial management app. Prayana shares the principles of many persuasive technologies, such as education, motivation, and nudges, but is designed for users with a range of print, technical, and financial literacies and embodies the core design sensibility of enhancing users’ agency. Furthermore, it does not put the onus solely on drivers to better manage their money; instead, it aims to enhance the collaborative work of loan management, supporting both the drivers and collectors.
ChromaGlasses: Computational Glasses for Compensating Colour Blindness
Prescription glasses are used by many people as a simple, and even fashionable, way to correct refractive problems of the eye. However, there are other visual impairments that cannot be treated with an optical lens in conventional glasses. In this work we present ChromaGlasses, Computational Glasses using optical head-mounted displays for compensating colour vision deficiency. Unlike prior work that required users to look at a screen in their visual periphery rather than at the environment directly, ChromaGlasses allow users to see the environment directly using a novel head-mounted display design that analyzes the environment in real-time and changes the appearance of the environment with pixel precision to compensate for the user’s impairment. In this work, we present first prototypes of ChromaGlasses and report on the results from several studies showing that ChromaGlasses are an effective method for managing colour blindness.
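The abstract does not detail the recoloring step. A common, generic approach to pixel-precise compensation is daltonization: simulate the deficiency, compute the information lost, and redistribute it into channels the wearer can still perceive. The sketch below uses widely circulated approximate matrices for protanopia as placeholders; they are assumptions, not the paper's calibrated model.

```python
import numpy as np

# Commonly circulated approximations (assumptions, not the paper's calibration):
SIM_PROTANOPIA = np.array([[0.567, 0.433, 0.000],
                           [0.558, 0.442, 0.000],
                           [0.000, 0.242, 0.758]])
REDISTRIBUTE = np.array([[0.0, 0.0, 0.0],
                         [0.7, 1.0, 0.0],
                         [0.7, 0.0, 1.0]])

def daltonize(rgb):
    """Per-pixel recoloring: simulate the deficiency, compute what was lost,
    and shift that error into channels the wearer can still distinguish.
    rgb: float array of shape (H, W, 3) with values in [0, 1]."""
    simulated = rgb @ SIM_PROTANOPIA.T
    error = rgb - simulated
    corrected = rgb + error @ REDISTRIBUTE.T
    return np.clip(corrected, 0.0, 1.0)
```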
Face Value?
We are interested in increasing the ability of groups to collaborate efficiently by leveraging new advances in AI and Conversational Agent (CA) technology. Given the longstanding debate on the necessity of embodiment for CAs, bringing them to groups requires answering the questions of whether and how providing a CA with a face affects its interaction with the humans in a group. We explored these questions by comparing group decision-making sessions facilitated by an embodied agent versus a voice-only agent. Results of an experiment with 20 user groups revealed that while the embodiment improved various aspects of the group’s social perception of the agent (e.g., rapport, trust, intelligence, and power), its impact on the group decision process and outcome was nuanced. Drawing on both quantitative and qualitative findings, we discuss the pros and cons of embodiment, argue that the value of having a face depends on the types of assistance the agent provides, and lay out directions for future research.
Coding Tactile Symbols for Phonemic Communication
We present a study to examine one’s learning and processing capacity of broadband tactile information, such as that derived from speech. In Study 1, we tested a user’s capability to recognize tactile locations and movements on the forearm in the presence of masking stimuli and determined 9 distinguishable tactile symbols. We associated these symbols with 9 phonemes using two approaches, random and articulation associations. Study 2 showed that novice participants can learn both associations. However, performance for retention, construction of words and knowledge transfer to recognize unlearned words was better with the articulation association. In Study 3, we trained new participants to directly recognize words before learning phonemes. Our results show that new users can retain and generalize this knowledge to recognize new words faster when they are trained directly on words. Finally, Study 4 examined the optimal presentation rate for the tactile symbols without compromising learning and recognition rates.
X-Ray Refine: Supporting the Exploration and Refinement of Information Exposure Resulting from Smartphone Apps
Most smartphone apps collect and share information with various first and third parties; yet, such data collection practices remain largely unbeknownst to, and outside the control of, end-users. In this paper, we seek to understand the potential for tools to help people refine their exposure to third parties, resulting from their app usage. We designed an interactive, focus-plus-context display called X-Ray Refine (Refine) that uses models of over 1 million Android apps to visualise a person’s exposure profile based on their durations of app use. To support exploration of mitigation strategies, Refine can simulate actions such as app usage reduction, removal, and substitution. A lab study of Refine found participants achieved a high-level understanding of their exposure, and identified data collection behaviours that violated both their expectations and privacy preferences. Participants also devised bespoke strategies to achieve privacy goals, identifying the key barriers to achieving them.
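The following toy sketch illustrates the kind of what-if simulation the abstract describes: an exposure profile computed from per-app usage durations and the third parties each app contacts, and a recomputation after removing one app. The app names, party domains, and the minutes-based exposure proxy are all hypothetical.

```python
# Hypothetical app -> (daily minutes of use, third parties contacted) data.
apps = {
    "weather": {"minutes": 10, "parties": {"ads-net.example", "analytics.example"}},
    "social":  {"minutes": 95, "parties": {"ads-net.example", "tracker.example"}},
    "maps":    {"minutes": 25, "parties": {"analytics.example"}},
}

def exposure_profile(apps):
    """Minutes of use attributable to each third party (a crude exposure proxy)."""
    profile = {}
    for info in apps.values():
        for party in info["parties"]:
            profile[party] = profile.get(party, 0) + info["minutes"]
    return profile

def simulate_removal(apps, app_name):
    """What-if: recompute the profile with one app removed."""
    remaining = {name: info for name, info in apps.items() if name != app_name}
    return exposure_profile(remaining)

print(exposure_profile(apps))
print(simulate_removal(apps, "social"))
```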
Controlling Maximal Voluntary Contraction of the Upper Limb Muscles by Facial Electrical Stimulation
In this paper, we propose to use facial electrical stimulation to control maximal voluntary contraction (MVC) of the upper limbs. The method is based on a body mechanism in which the contraction of the masseter muscles enhances MVC of the limb muscles. Facial electrical stimulation is applied to the masseter muscles and the lips. The former is to enhance the MVC by causing involuntary contraction of the masseter muscles, and the latter is to suppress the MVC by interfering with voluntary contraction of the masseter muscles. In a user study, we used electromyography sensors on the upper limbs to evaluate the effects of the facial electrical stimulation on the MVC of the upper limbs. The experimental results show that the MVC was controlled by the facial electrical stimulation. We expect the proposed method to be useful for athletes, because MVC is linked to sports performance.
How to Design a Digital Storytelling Authoring Tool for Developing Pre-Reading and Pre-Writing Skills
In this paper we describe an exploration into the design of an authoring tool to support the creation of multimedia stories. We explicitly targeted children with no reading or writing skills and their educators. Children in this age group often enjoy reading and creating stories together with adults and in so doing develop important pre-literacy skills. Literature suggests that when children play an active role in these activities, with a high level of engagement and interaction, there is a significant increase in their vocabulary acquisition and an improvement in their communication skills. Thus, we investigated these issues by conducting an explorative study in a pre-school class with fifteen children and three teachers. Here, we describe the emerging challenges and provide design directions for an authoring system to support the co-creation of stories for pre-literate children.
Non-Native English Speakers Learning Computer Programming: Barriers, Desires, and Design Opportunities
People from nearly every country are now learning computer programming, yet the majority of programming languages, libraries, documentation, and instructional materials are in English. What barriers do non-native English speakers face when learning from English-based resources? What desires do they have for improving instructional materials? We investigate these questions by deploying a survey to a programming education website and analyzing 840 responses spanning 86 countries and 74 native languages. We found that non-native English speakers faced barriers with reading instructional materials, technical communication, reading and writing code, and simultaneously learning English and programming. They wanted instructional materials to use simplified English without culturally-specific slang, to use more visuals and multimedia, to use more culturally-agnostic code examples, and to embed inline dictionaries. Programming also motivated some to learn English better and helped clarify logical thinking about natural languages. Based on these findings, we recommend learner-centered design improvements to programming-related instructional resources and tools to make them more accessible to people around the world.
The Ambient Birdhouse: An IoT Device to Discover Birds and Engage with Nature
We introduce the Ambient Birdhouse, a novel IoT design for the home that seeks to encourage awareness and discovery of birds outside. People increasingly have routines and technologies that disconnect them from nature. Moreover, birds are hard to come to know, seen but not heard, heard but not seen, or simply around when we are not. The Ambient Birdhouse aims to reconcile these positions by using local bird media to leverage people’s playfulness and curiosity, calmly sustain interest over time, and ultimately garner interest and engagement in nature and conservation projects. We trialled the Ambient Birdhouse with five families. Key findings are that the playful nature of the Birdhouse immediately engages children, and through them the rest of the family. Children were quick to learn bird calls, and invented and played games that involved the Birdhouse. Learning strategies emerged spontaneously from family routines and arrangements, with each family creating different moments and spaces for learning.
Pocket Skills: A Conversational Mobile Web App To Support Dialectical Behavioral Therapy
Mental health disorders are a leading cause of disability worldwide. Although evidence-based psychotherapy is effective, engagement with such programs can be low. Mobile apps have the potential to help engage and support people in their therapy. We developed Pocket Skills, a mobile web app based on Dialectical Behavior Therapy (DBT). Pocket Skills teaches DBT via a conversational agent modeled on Marsha Linehan, who developed DBT. We examined the feasibility of Pocket Skills in a 4-week field study with 73 individuals enrolled in psychotherapy. After the study, participants reported decreased depression and anxiety and increased use of DBT skills. We present a model, based on qualitative findings, of how Pocket Skills supported DBT. Pocket Skills helped participants engage in their DBT and practice and implement skills in their environmental context, which enabled them to see the results of using their DBT skills and increase their self-efficacy. We discuss the design implications of these findings for future mobile mental health systems.
The Application and Its Consequences for Non-Standard Knowledge Work
Application-centric computing dominates human-computer interactions, yet the concept of an application is ambiguous and the impact of its ubiquity underexplored. We unpack “the application” through the lens of non-standard knowledge work: freelance, self-employed, and fixed-term contract workers who create knowledge in collaboration with a wide variety of stakeholders on a per-project basis. Based on interviews with fourteen participants we describe how: i) their economic value is intertwined with data and skills related to specific applications; ii) their access to this value is systematically jeopardised in collaboration due to the different application practices, preferences, and proficiencies of other stakeholders; and iii) they mitigate the costs of this compromise through cross-application collaboration strategies. We trace these experiences to common characteristics of applications, such as update processes, interface symmetries, application-document relationships, and operating system and hardware dependencies. By empirically and analytically focusing on “the application”, we reveal the implications of the current application-centric computing paradigm and discuss how variations within this model create qualitatively different human-computer interactions.
Digital Konditorei: Programmable Taste Structures using a Modular Mold
Digital Gastronomy (DG) is a culinary concept that enhances traditional cooking with new HCI capabilities, rather than replacing the chef with an autonomous machine. Preliminary projects demonstrate implementation of DG via the deployment of digital instruments in a kitchen. Here we contribute an alternative solution, demonstrating the use of a modular (silicone) mold and a genetic mold-arrangement algorithm to achieve a variety of shape permutations for a recipe, allowing the control of taste structures in the dish. The mold overcomes the slow production time of 3D food printing, while allowing for a high degree of flexibility in the numerous shapes produced. This flexibility enables us to satisfy chefs’ and diners’ diverse requirements. We present the mold’s logic, arithmetic, design and special parts, the evolutionary algorithm, and a recipe, exploiting a new digital cooking concept of programmable edible taste structures and taste patterns to enrich user interaction with a given recipe.
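The genetic mold-arrangement algorithm is only named in the abstract; the sketch below shows the general shape of such an algorithm over a hypothetical one-dimensional mold, with a made-up fitness function that matches a chef-specified taste sequence. It illustrates the class of technique, not the paper's implementation.

```python
import random

MODULES = ["sweet", "sour", "bitter", "neutral"]  # hypothetical mold cell flavors
GRID = 8                                          # number of cells in the modular mold

def fitness(arrangement, target):
    """Hypothetical objective: how many cells match a chef-specified taste sequence."""
    return sum(a == t for a, t in zip(arrangement, target))

def evolve(target, pop_size=50, generations=200, mutation=0.1):
    """Plain genetic algorithm: selection of the top half, one-point crossover,
    and random mutation of single cells."""
    pop = [[random.choice(MODULES) for _ in range(GRID)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, target), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GRID)
            child = a[:cut] + b[cut:]
            if random.random() < mutation:
                child[random.randrange(GRID)] = random.choice(MODULES)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, target))

print(evolve(["sweet", "sweet", "sour", "bitter", "bitter", "neutral", "sweet", "sour"]))
```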
Exploring the Weak Association between Flow Experience and Performance in Virtual Environments
Many studies conducted in non-virtual activities have shown that flow significantly influences performance, yet studies in virtual activities often reveal only a weak association. This paper begins by building a theoretical explanatory model, and then conducts 3 empirical studies to explore this question. Study 1 examines the mechanism of the weak association in two virtual activities. Study 2 tests the effectiveness of a potential approach to strengthen this association. In Study 3 we applied our proposed model and design approach to optimize a VR tennis game. Results show that the influence of flow on performance was not significant in those virtual activities where the primary task and the operation of interactive artifacts were less congruent, such that the artifacts can lead to a flow experience that is independent of the primary task. Our research offers a theoretical and empirical basis for optimizing virtual environment design and maximizing the positive effect of the flow experience.
Understanding the Mundane Nature of Self-care: Ethnographic Accounts of People Living with Parkinson’s
Self-care technologies have been influenced by medical values and models. One value that was uncritically incorporated is that self-care is medicalised, and, as a result, technologies were designed to afford use with clinicians and fit structured medical processes. This paper seeks to broaden the understanding of self-care in HCI, to acknowledge the mundane ways in which self-care is achieved. Drawing on in-depth interviews with patients and carers, and an ethnography of an online community, we describe how the self-care of Parkinson’s is mundane. The fieldwork contrasts with more medicalised perspectives on self-care; thus, we discuss the properties of a self-care concept that would acknowledge its mundane nature. Our hope is to sensitise designers to identify the mundane ways in which self-care is performed and, consequently, design technologies that better fit the complexities of everyday life with a chronic condition.
EDITalk: Towards Designing Eyes-free Interactions for Mobile Word Processing
We present EDITalk, a novel voice-based, eyes-free word processing interface. We used a Wizard-of-Oz elicitation study to investigate the viability of eyes-free word processing in the mobile context and to elicit user requirements for such scenarios. Results showed that meta-level operations like highlight and comment, and core operations like insert, delete and replace, are desired by users. However, users were challenged by the lack of visual feedback and the cognitive load of remembering text while editing it. We then studied a commercial-grade dictation application and discovered serious limitations that preclude comfortable speak-to-edit interactions. We address these limitations through EDITalk’s closed-loop interaction design, enabling eyes-free operation of both meta-level and core word processing operations in the mobile context. Finally, we discuss implications for the design of future mobile, voice-based, eyes-free word processing interfaces.
Improving User Confidence in Concept Maps: Exploring Data Driven Explanations
Automated tools are increasingly being used to generate highly engaging concept maps as an aid to strategic planning and other decision-making tasks. Unless stakeholders can understand the principles of the underlying layout process, however, we have found that they lack confidence and are therefore reluctant to use these maps. In this paper, we present a qualitative study exploring the effect on users’ confidence of using data-driven explanation mechanisms, by conducting in-depth scenario-based interviews with ten participants. To provide diversity in stimulus and approach we use two explanation mechanisms based on projection and agglomerative layout methods. The themes exposed in our results indicate that the data-driven explanations improved user confidence in several ways, and that process clarity and layout density also affected users’ views of the credibility of the concept maps. We discuss how these factors can increase uptake of automated tools and affect user confidence.
Spokespeople: Exploring Routes to Action through Citizen-Generated Data
This paper presents insights from a collaboration with cycling advocates and local authorities to consider how HCI can open productive spaces for citizens to contribute to the realization of social goals. We worked with members of a walking and cycling advocacy organization to explore the potential for technology-mediated data collection to support advocacy and action taking. Based on our initial findings, we developed and deployed Spokespeople, a system that enables people who cycle to collect, curate and make visible their everyday journeys and experiences. We then worked with participants, cycling advocates and local authority transport planners to explore how citizens can contribute beyond data collection, by curating and prioritizing their experiences and exploring possible routes to action. We identify future directions for technology design to support citizens in making meaningful contributions to changes in the city through annotated routes, prioritization and community commissioning processes.
Evaluating the Disruptiveness of Mobile Interactions: A Mixed-Method Approach
While the proliferation of mobile devices has rendered mobile notifications ubiquitous, researchers are only slowly beginning to understand how these technologies affect everyday social interactions. In particular, the negative social influence of mobile interruptions remains unexplored from a methodological perspective. This paper contributes a mixed-method evaluation procedure for assessing the disruptive impact of mobile interruptions in conversation. The approach combines quantitative eye tracking, qualitative analysis, and a simulated conversation environment to enable fast assessment of disruptiveness. It is intended to be used as a part of an iterative interaction design process. We describe our approach in detail, present an example of its use to study a new call declining technique, and reflect upon the pros and cons of our approach.
Make Yourself at Phone: Reimagining Mobile Interaction Architectures With Emergent Users
We present APPropriate — a novel mobile design to allow users to temporarily annex any Android device for their own use. APPropriate is a small, cheap storage pod, designed to be easily carried in a pocket or hidden within clothing. Its purpose is simple: to hold a copy of the local content an owner has on their mobile, liberating them from carrying a phone, or allowing them to use another device that provides advantages over their own. Picking up another device when carrying APPropriate transfers all pertinent content to the borrowed device (using local no-cost WiFi from the APPropriate device), transforming it to give the impression that they are using their own phone. While APPropriate is useful for a wide range of contexts, the design was envisaged through a co-design process with resource-constrained emergent users in three countries. Lab studies and a subsequent deployment on participants’ own devices identified key benefits of the approach in these contexts, including for security, resource sharing, and privacy.
Interactive Feedforward for Improving Performance and Maintaining Intrinsic Motivation in VR Exergaming
Exergames commonly use low to moderate intensity exercise protocols. Their effectiveness in implementing high intensity protocols remains uncertain. We propose a method for improving performance while maintaining intrinsic motivation in high intensity VR exergaming. Our method is based on an interactive adaptation of the feedforward method: a psychophysical training technique achieving rapid improvement in performance by exposing participants to self models showing previously unachieved performance levels. We evaluated our method in a cycling-based exergame. Participants competed against (i) a self model which represented their previous speed; (ii) a self model representing their previous speed but with increased resistance, therefore requiring higher performance to keep up; or (iii) a virtual competitor at the same two levels of performance. We varied participants’ awareness of these differences. Interactive feedforward led to improved performance while maintaining intrinsic motivation even when participants were aware of the interventions, and was superior to competing against a virtual competitor.
ARcadia: A Rapid Prototyping Platform for Real-time Tangible Interfaces
Paper-based fabrication techniques offer powerful opportunities to prototype new technological interfaces. Typically, paper-based interfaces are either static mockups or require integration with sensors to provide real-time interactivity. The latter can be challenging and expensive, requiring knowledge of electronics, programming, and sensing. But what if computer vision could be combined with prototyping domain-aware programming tools to support the rapid construction of interactive, paper-based tangible interfaces? We designed a toolkit called ARcadia that allows for rapid, low-cost prototyping of TUIs that only requires access to a webcam, a web browser, and paper. ARcadia brings paper prototypes to life through the use of marker-based augmented reality (AR). Users create mappings between real-world tangible objects and different UI elements. After a crafting and programming phase, all subsequent interactions take place with the tangible objects. We evaluated ARcadia in a workshop with 120 teenage girls and found that tangible AR technologies can empower novice technology designers to rapidly construct and iterate on their ideas.
Building Momentum: Scaling up Change in Community Organizations
Addressing calls in Sustainable HCI to scale up work targeting sustainability, and the current knowledge gap about how to do this in practice, we present a qualitative study of 10 sustainability-oriented community organizations that are working to scale up their change making. They are all loosely connected to a local Transition network, meaning that they aim to transform current practices in society, through local and practical action, to meet challenges related to climate change. We wanted to know how they try to scale up their change making, and what role ICT plays in enabling scaling up. The study contributes new insights about three stages of scaling up, in which ICT plays different roles. We conclude with implications for how HCI can support community organizations in scaling up, while keeping values important for working toward a more resilient society.
Mechanism Perfboard: An Augmented Reality Environment for Linkage Mechanism Design and Fabrication
Prototyping devices with kinetic mechanisms, such as automata and robots, has become common in physical computing projects. However, mechanism design in the early-concept exploration phase is challenging, due to the dynamic and unpredictable characteristics of mechanisms. We present Mechanism Perfboard, an augmented reality environment that supports linkage mechanism design and fabrication. It supports the concretization of ideas by generating the initial desired linkage mechanism from a real-world movement. The projection of simulated movement within the environment enables iterative tests and modifications at real scale. Augmented information and accompanying tangible parts help users to fabricate mechanisms. Through a user study with 10 participants, we found that Mechanism Perfboard helped participants achieve their desired movements. The augmented environment enabled intuitive modification and fabrication with an understanding of mechanical movement. Based on the tool development and the user study, we discuss implications for mechanism prototyping with augmented reality and computational support.
Addressing Age-Related Bias in Sentiment Analysis
Computational approaches to text analysis are useful in understanding aspects of online interaction, such as opinions and subjectivity in text. Yet, recent studies have identified various forms of bias in language-based models, raising concerns about the risk of propagating social biases against certain groups based on sociodemographic factors (e.g., gender, race, geography). In this study, we contribute a systematic examination of the application of language models to study discourse on aging. We analyze the treatment of age-related terms across 15 sentiment analysis models and 10 widely-used GloVe word embeddings and attempt to alleviate bias through a method of processing model training data. Our results demonstrate that significant age bias is encoded in the outputs of many sentiment analysis algorithms and word embeddings. We discuss the models’ characteristics in relation to output bias and how these models might be best incorporated into research.
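One simple way to probe the kind of bias the abstract reports is to score templated sentences that differ only in the age-related term and compare the means. The sketch below does this with a hypothetical stand-in scorer so it runs end to end; in practice the stub would be replaced by a real sentiment model or an embedding-based scorer.

```python
# A minimal bias probe, assuming some sentiment scorer `score(text) -> float`
# in [-1, 1]; the stub below is only a stand-in so the snippet runs end to end.
TEMPLATES = [
    "The {} person went to the store.",
    "My {} neighbor told a story.",
    "A {} colleague joined the meeting.",
]
YOUNG_TERMS = ["young", "youthful"]
OLD_TERMS = ["old", "elderly", "aged"]

def score(text):
    """Hypothetical stand-in: replace with any real sentiment model."""
    words = text.lower().replace(".", "").split()
    return -0.3 if any(w in OLD_TERMS for w in words) else 0.1

def mean_score(terms):
    scores = [score(t.format(term)) for t in TEMPLATES for term in terms]
    return sum(scores) / len(scores)

gap = mean_score(YOUNG_TERMS) - mean_score(OLD_TERMS)
print(f"young-minus-old sentiment gap: {gap:+.2f}")  # a positive gap suggests age bias
```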
Identification of Imminent Suicide Risk Among Young Adults using Text Messages
Suicide is the second leading cause of death among young adults but the challenges of preventing suicide are significant because the signs often seem invisible. Research has shown that clinicians are not able to reliably predict when someone is at greatest risk. In this paper, we describe the design, collection, and analysis of text messages from individuals with a history of suicidal thoughts and behaviors to build a model to identify periods of suicidality (i.e., suicidal ideation and non-fatal suicide attempts). By reconstructing the timeline of recent suicidal behaviors through a retrospective clinical interview, this study utilizes a prospective research design to understand if text communications can predict periods of suicidality versus depression. Identifying subtle clues in communication indicating when someone is at heightened risk of a suicide attempt may allow for more effective prevention of suicide.
Touch Your Heart: A Tone-aware Chatbot for Customer Care on Social Media
Chatbots have become an important solution to rapidly increasing customer care demands on social media in recent years. However, current work on chatbots for customer care ignores a key factor affecting user experience: tone. In this work, we create a novel tone-aware chatbot that generates toned responses to user requests on social media. We first conduct formative research in which the effects of tones are studied; the study uncovers significant and varied influences of different tones on user experience. With this knowledge of the effects of tones, we design a deep learning based chatbot that takes tone information into account. We train our system on over 1.5 million real customer care conversations collected from Twitter. The evaluation reveals that our tone-aware chatbot generates responses to user requests that are as appropriate as those of human agents. More importantly, our chatbot is perceived to be even more empathetic than human agents.
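The paper's architecture is not described here; one generic way to make a generative response model tone-aware is to condition it on a control token, as sketched below. The tone labels, token format, and training-pair construction are assumptions meant only to illustrate the conditioning idea, not the paper's model.

```python
# Hypothetical tone labels and a control-token conditioning scheme; the paper's
# actual architecture and tone taxonomy may differ.
TONES = {"empathetic", "passionate", "satisfied", "polite"}

def add_tone_token(customer_msg, tone):
    """Prepend a control token so a seq2seq model can condition on the tone."""
    assert tone in TONES, f"unknown tone: {tone}"
    return f"<tone={tone}> {customer_msg}"

def build_training_pair(customer_msg, agent_reply, detected_tone):
    """At training time, tag each example with the tone detected in the agent's
    reply so the model learns to associate the token with that style."""
    return add_tone_token(customer_msg, detected_tone), agent_reply

# At inference time, the desired tone is chosen explicitly:
prompt = add_tone_token("My order arrived damaged!", "empathetic")
print(prompt)
```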
Modern Bereavement: A Model for Complicated Grief in the Digital Age
The experience of grief and death is an inevitable part of life. Grief, a natural response to death, can be a challenging and emotionally taxing journey. Bereaved individuals often feel lost in a fog, unaware of resources available to them and unsure of which resources could be useful for supporting their healing process. Complicated grief, a more intense form of grief that extends beyond six months following the death of a loved one, presents both a unique challenge and a design opportunity for the HCI community. In this work, we present the results of a survey and interview study on the technological practices of complicated grievers. Based on themes found in the data, we propose a new model for complicated grief in the digital age, consisting of the following phases: Fog, Isolation, Exploration, Immersion, and Stabilization. We then present a set of design considerations for designers seeking to create tools for complicated grievers navigating their unique grief journeys.
Using Animation to Alleviate Overdraw in Multiclass Scatterplot Matrices
The scatterplot matrix (SPLOM) is a commonly used technique for visualizing multiclass multivariate data. However, multiclass SPLOMs have issues with overdraw (overlapping points), and most existing techniques for alleviating overdraw focus on individual scatterplots with a single class. This paper explores whether animation using flickering points is an effective way to alleviate overdraw in these multiclass SPLOMs. In a user study with 69 participants, we found that users not only performed better at identifying dense regions using animated SPLOMs, but also found them easier to interpret and preferred them to static SPLOMs. These results open up new directions for future work on alleviating overdraw for multiclass SPLOMs, and provide insights for applying animation to alleviate overdraw in other settings.
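As a rough illustration of flicker-based overdraw relief, the sketch below animates a single synthetic multiclass scatterplot by briefly spotlighting one class per frame (the study applies the idea across a full SPLOM, and its exact flicker parameters are not reproduced here).

```python
# A minimal sketch of flicker-based overdraw relief for one multiclass scatterplot;
# the class data here is synthetic and the timing is arbitrary.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(0)
classes = {c: rng.normal(loc=i * 0.3, scale=1.0, size=(300, 2))
           for i, c in enumerate(["A", "B", "C"])}

fig, ax = plt.subplots()
artists = {c: ax.scatter(pts[:, 0], pts[:, 1], s=8, label=c, alpha=0.8)
           for c, pts in classes.items()}
ax.legend()

def flicker(frame):
    # Briefly spotlight one class per frame so dense, occluded regions show through.
    active = list(artists)[frame % len(artists)]
    for c, artist in artists.items():
        artist.set_alpha(0.9 if c == active else 0.15)
    return list(artists.values())

anim = FuncAnimation(fig, flicker, frames=60, interval=300, blit=False)
plt.show()
```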
The Context of College Students’ Facebook Use and Academic Performance: An Empirical Study
The effects of Facebook on academic performance have attracted both public and scholarly attention. Prior research found that Facebook use is linked to poor academic performance, suggesting that Facebook distracts students from studying. These studies, which are primarily based on survey responses, are insufficient to uncover exactly how Facebook is used or embedded in students’ studying activities. To capture unbiased, detailed use patterns and to investigate the context of Facebook use, we studied 50 college students using automatic logging and experience sampling. We analyzed the activities and attentional states of students prior to visiting Facebook. Results show that the GPAs of frequent Facebook users do not suffer. Students with high GPAs spend less time in each Facebook session, and their shorter Facebook sessions often follow schoolwork. These results point to the possibility that potentially problematic Facebook use occurs when students are in a spree of leisure activities, not while studying.
Designing Consistent Gestures Across Device Types: Eliciting RSVP Controls for Phone, Watch, and Glasses
In the era of ubiquitous computing, people expect applications to work across different devices. To provide a seamless user experience it is therefore crucial that interfaces and interactions are consistent across different device types. In this paper, we present a method to create gesture sets that are consistent and easily transferable. Our proposed method entails 1) the gesture elicitation on each device type, 2) the consolidation of a unified gesture set, and 3) a final validation by calculating a transferability score. We tested our approach by eliciting a set of user-defined gestures for reading with Rapid Serial Visual Presentation (RSVP) of text for three device types: phone, watch, and glasses. We present the resulting, unified gesture set for RSVP reading and show the feasibility of our method to elicit gesture sets that are consistent across device types with different form factors.
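The abstract does not define the transferability score; the sketch below shows one plausible way such a consistency measure could be operationalized, as the agreement of the most frequent elicited gesture across device pairs, averaged over referents. The toy data and the metric itself are illustrative assumptions, not the paper's formula.

```python
# A hypothetical consistency/transferability measure: for each referent (RSVP
# command), the fraction of device pairs whose most frequent elicited gesture agrees.
from collections import Counter
from itertools import combinations

elicited = {  # referent -> device -> list of proposed gestures (toy data)
    "pause":  {"phone": ["tap", "tap", "swipe"], "watch": ["tap", "tap"],
               "glasses": ["tap", "nod"]},
    "faster": {"phone": ["swipe_up"] * 3, "watch": ["rotate"] * 2,
               "glasses": ["swipe_up", "swipe_up"]},
}

def top_gesture(proposals):
    return Counter(proposals).most_common(1)[0][0]

def transferability(elicited):
    scores = []
    for referent, by_device in elicited.items():
        tops = [top_gesture(g) for g in by_device.values()]
        pairs = list(combinations(tops, 2))
        scores.append(sum(a == b for a, b in pairs) / len(pairs))
    return sum(scores) / len(scores)

print(f"transferability: {transferability(elicited):.2f}")
```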
Supporting the Complex Social Lives of New Parents
One of the many challenges of becoming a parent is the shift in one’s social life. As HCI researchers have begun to investigate the intersection of sociotechnical system design and parenthood, they have also sought to understand how parents’ social lives can be best supported. We build on these strands of research through a qualitative study with new parents regarding the role of digital technologies in their social lives as they transition to parenthood. We demonstrate how sociotechnical systems are entangled in the ways new parents manage their relationships, build (or resist building) new friendships and ad hoc support systems, and navigate the vulnerabilities of parenthood. We discuss how systems designed for new parents can better support the vulnerabilities they internalize, the diverse friendships they desire, and the logistical challenges they experience. We conclude with recommendations for future design and research in this area.
Algorithmic Anxiety and Coping Strategies of Airbnb Hosts
Algorithms increasingly mediate how work is evaluated in a wide variety of work settings. Drawing on our interviews with 15 Airbnb hosts, we explore the impact of algorithmic evaluation on users and their work practices in the context of Airbnb. Our analysis reveals that Airbnb hosts engage in a double negotiation on the platform: They must negotiate efforts not just to attract potential guests but also to appeal to only partially transparent evaluative algorithms. We found that a perceived lack of control and uncertainty over how algorithmic evaluation works can create anxiety among some Airbnb hosts. We present a framework for understanding this double negotiation, as well as a case study of coping strategies that hosts employ to deal with their anxiety. We conclude with a discussion of design solutions that can help reduce algorithmic anxiety and increase confidence in algorithmic systems.
More Text Please! Understanding and Supporting the Use of Visualization for Clinical Text Overview
Clinical practice is heavily reliant on the use of unstructured text to document patient stories due to its expressive and flexible nature. However, a physician’s capacity to recover information from text for clinical overview is severely affected when records get longer and time pressure increases. Data visualization strategies have been explored to aid in information retrieval by replacing text with graphical summaries, though often at the cost of omitting important text features. This causes physician mistrust and limits real-world adoption. This work presents our investigation into the role and use of text in clinical practice, and reports on efforts to assess the best of both worlds—text and visualization—to facilitate clinical overview. We report on insights garnered from a field study, and the lessons learned from an iterative design process and evaluation of a text-visualization prototype, MedStory, with 14 medical professionals. The results led to a number of grounded design recommendations to guide visualization design to support clinical text overview.
The SelfReflector: Design, IoT and the High Street
We describe the design of SelfReflector, an internet-connected mirror that uses online facial recognition to estimate your age and play music from when it thinks you were 14 years old. The mirror was created for a specific shop (SPeX PisTOls optical boutique), within a research through design project centered on the high street as a space of vital social, economic and environmental exchange that offers people myriad forms of psychosocial support beyond a place to purchase goods. We present in detail how the design emerged as our research interests developed around IoT and how people use the high street to experiment with, and support, a sense of self. We discuss SelfReflector in relation to challenges for IoT, facial recognition and surveillance technologies, mirrorness, and the values of a craft approach to designing technology centering on the nature of the bespoke and ‘one-off’.
Co-constructing Family Memory: Understanding the Intergenerational Practices of Passing on Family Stories
Sharing family stories is an integral aspect of how families remember together and build a sense of connection. Yet, when generations in families are separated by large geographic and temporal distances, the everyday, taken-for-granted processes of sharing family stories shift from conversational to mediated forms. To inform HCI research and practice in mediating family stories, we contribute an account of the intergenerational social practices enacted to co-construct and interpret family stories. These practices demonstrate the agency of both storytellers and listeners as they work to discover, decipher, and reconstruct family stories. We close by drawing insights from this setting to frame key design challenges for multi-lifespan information systems mediating asynchronous, asymmetric, co-constructive and socially weighted information sharing interactions.
Seismo: Blood Pressure Monitoring using Built-in Smartphone Accelerometer and Camera
Although cost-effective at-home blood pressure monitors are available, a complementary mobile solution can ease the burden of measuring BP at critical points throughout the day. In this work, we developed and evaluated a smartphone-based BP monitoring application called Seismo. The technique relies on measuring the time between the opening of the aortic valve and the pulse later reaching a peripheral arterial site. It uses the smartphone’s accelerometer to measure the vibration caused by the heart valve movements and the smartphone’s camera to measure the pulse at the fingertip. The system was evaluated in a nine-participant longitudinal BP perturbation study. Each participant completed four sessions that involved stationary biking at multiple intensities. The Pearson correlation coefficient of the blood pressure estimation across participants is 0.20-0.77 (μ = 0.55, σ = 0.19), with an RMSE of 3.3-9.2 mmHg (μ = 5.2, σ = 2.0).
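A minimal sketch of the underlying pulse-transit-time idea, under simplifying assumptions: naive peak pickers stand in for the paper's signal processing, both signals are assumed to share one sampling rate, and the linear calibration coefficients are illustrative placeholders rather than validated values.

```python
import numpy as np

FS = 200  # Hz; assumed common sampling rate for accelerometer and camera pulse

def detect_peaks(signal, fs, refractory=0.4):
    """Naive peak picker: local maxima above the 90th percentile, at least
    `refractory` seconds apart (a stand-in for proper beat detection)."""
    thr = np.percentile(signal, 90)
    idx = np.flatnonzero((signal[1:-1] > thr) &
                         (signal[1:-1] > signal[:-2]) &
                         (signal[1:-1] > signal[2:])) + 1
    keep, last = [], -np.inf
    for i in idx:
        if i - last > refractory * fs:
            keep.append(i)
            last = i
    return np.array(keep)

def pulse_transit_time(seismo, ppg, fs=FS):
    """Median PTT (s) per beat: time from a seismocardiogram peak (proxy for
    aortic valve opening) to the next fingertip pulse (camera PPG) peak."""
    heart_beats = detect_peaks(seismo, fs)
    pulses = detect_peaks(ppg, fs)
    ptts = []
    for hb in heart_beats:
        later = pulses[pulses > hb]
        if later.size:
            ptts.append((later[0] - hb) / fs)
    return np.median(ptts) if ptts else float("nan")

def estimate_bp(ptt, a=-120.0, b=150.0):
    """Per-user linear calibration BP ~ a*PTT + b (coefficients are illustrative)."""
    return a * ptt + b
```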
Jetto: Using Lateral Force Feedback for Smartwatch Interactions
Interacting with media and games is a challenging user experience on smartwatches due to their small screens. We propose using lateral force feedback to enhance these experiences. When virtual objects on the smartwatch display visually collide or push the edge of the screen, we add haptic feedback so that the user also feels the impact. This addition creates the illusion of a virtual object that is physically hitting or pushing the smartwatch, from within the device itself. Using this approach, we extend virtual space and scenes into a 2D physical space. To create realistic lateral force feedback, we first examined the minimum change in force magnitude that is detectable by users in different directions and weight levels, finding an average JND of 49% across all tested conditions, with no significant effect of weight and force direction. We then developed a proof-of-concept hardware prototype called Jetto and demonstrated its unique capabilities through a set of impact-enhanced videos and games. Our preliminary user evaluations indicated the concept was welcomed and is regarded as a worthwhile addition to smartwatch output and media experiences.
ExtVision: Augmentation of Visual Experiences with Generation of Context Images for a Peripheral Vision Using Deep Neural Network
We propose a system, called ExtVision, to augment visual experiences by generating and projecting context-images onto the periphery of the television or computer screen. A peripheral projection of the context-image is one of the most effective techniques to enhance visual experiences. However, the projection is not commonly used at present, because of the difficulty in preparing the context-image. In this paper, we propose a deep neural network-based method to generate context-images for peripheral projection. A user study was performed to investigate the manner in which the proposed system augments traditional visual experiences. In addition, we present applications and future prospects of the developed system.
Substituting Motion Effects with Vibrotactile Effects for 4D Experiences
In this paper, we present two methods that substitute motion effects with vibrotactile effects in order to improve the 4D experiences of viewers. This work was motivated by the need for more affordable 4D systems for individual users. Our sensory substitution algorithms convert motion commands into vibrotactile commands for a grid display that uses multiple actuators. While one method is based on the fundamental principle of vestibular feedback, the other makes use of an intuitive visually-based mapping from motion to vibrotactile stimulation. We carried out a user study and confirmed the effectiveness of our substitution methods in improving 4D experiences. To our knowledge, this is the first study to investigate the feasibility of replacing motion effects with much simpler and less expensive vibrotactile effects.
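The visually-based mapping is only named in the abstract; the toy sketch below maps a motion command (pitch, roll, heave) onto per-actuator intensities of an assumed 4x4 vibrotactile grid by sliding a Gaussian 'pressure' blob with tilt and scaling it with heave. Grid size and constants are assumptions, not the paper's algorithm.

```python
import numpy as np

GRID = (4, 4)  # rows x cols of vibrotactile actuators (assumed layout)

def motion_to_vibration(pitch, roll, heave):
    """Toy visually-based mapping: tilt shifts a Gaussian 'pressure' blob across
    the grid; heave scales overall intensity. Returns values in [0, 1] per actuator."""
    rows, cols = GRID
    # Blob center moves with pitch (vertical) and roll (horizontal), each in [-1, 1].
    cy = (rows - 1) * (0.5 - 0.5 * np.clip(pitch, -1, 1))
    cx = (cols - 1) * (0.5 + 0.5 * np.clip(roll, -1, 1))
    y, x = np.mgrid[0:rows, 0:cols]
    blob = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 2.0)
    gain = 0.3 + 0.7 * np.clip(abs(heave), 0, 1)
    return np.clip(gain * blob, 0, 1)

print(motion_to_vibration(pitch=0.5, roll=-0.2, heave=0.8).round(2))
```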
Communicating Awareness and Intent in Autonomous Vehicle-Pedestrian Interaction
Drivers use nonverbal cues such as vehicle speed, eye gaze, and hand gestures to communicate awareness and intent to pedestrians. In autonomous vehicles, by contrast, drivers can be distracted or absent, leaving pedestrians to infer awareness and intent from the vehicle alone. In this paper, we investigate the usefulness of interfaces (beyond vehicle movement) that explicitly communicate the awareness and intent of autonomous vehicles to pedestrians, focusing on crosswalk scenarios. We conducted a preliminary study to gain insight into designing interfaces that communicate autonomous vehicle awareness and intent to pedestrians. Based on the study outcomes, we developed four prototype interfaces and deployed them in studies involving a Segway and a car. We found that interfaces communicating vehicle awareness and intent: (1) can help pedestrians attempting to cross; (2) are not limited to the vehicle and can exist in the environment; and (3) should use a combination of modalities such as visual, auditory, and physical.
Running Out of Time: The Impact and Value of Flexibility in On-Demand Crowdwork
With a seemingly endless stream of tasks, on-demand labor markets appear to offer workers flexibility in when and how much they work. This research argues that platforms afford workers far less flexibility than widely believed. A large part of the “inflexibility” comes from tight deadlines imposed on tasks, leaving workers little control over their work schedules. We experimentally examined the impact of offering workers control of their time in on-demand crowdwork. We found that granting higher “in-task flexibility” dramatically affected the temporal dynamics of worker behavior and produced a larger amount of work with similar quality. In a second experiment, we measured the compensating differential and found that workers would give up significant compensation to control their time, indicating workers attach substantial value to in-task flexibility. Our results suggest that designing tasks which give workers direct control of their time within tasks benefits both buyers and sellers of on-demand crowdwork.
Tensions of Data-Driven Reflection: A Case Study of Real-Time Emotional Biosensing
Biosensing displays, increasingly enrolled in emotional reflection, promise authoritative insight by presenting users’ emotions as discrete categories. Rather than having machines interpret emotions, we sought to explore an alternative: emotional biosensing displays in which users form their own interpretations and feel comfortable critiquing the display. We therefore designed, implemented, and deployed an emotional biosensing display as a technology probe: Ripple, a shirt whose pattern changes color in response to the wearer’s skin conductance, which is associated with excitement. 17 participants wore Ripple over 2 days of daily life. While some participants appreciated the ‘physical connection’ Ripple provided between body and emotion, for others Ripple fostered insecurities about ‘how much’ feeling they had. Despite our design intentions, we found participants rarely questioned the display’s relation to their feelings. Using biopolitics to speculate on Ripple’s surprising authority, we highlight the ethical stakes of biosensory representations for sense of self and ways of feeling.
Communicating Algorithmic Process in Online Behavioral Advertising
Advertisers develop algorithms to select the most relevant advertisements for users. However, the opacity of these algorithms, along with their potential for violating user privacy, has decreased user trust and preference in behavioral advertising. To mitigate this, advertisers have started to communicate algorithmic processes in behavioral advertising. However, how revealing parts of the algorithmic process affects users’ perceptions towards ads and platforms is still an open question. To investigate this, we exposed 32 users to why an ad is shown to them, what advertising algorithms infer about them, and how advertisers use this information. Users preferred interpretable, non-creepy explanations about why an ad is presented, along with a recognizable link to their identity. We further found that exposing users to their algorithmically-derived attributes led to algorithm disillusionment—users found that advertising algorithms they thought were perfect were far from it. We propose design implications to effectively communicate information about advertising algorithms.
Quadcopter-Projected In-Situ Navigation Cues for Improved Location Awareness
Every day, people rely on navigation systems when exploring unknown urban areas. Many navigation systems use multimodal feedback such as visual, auditory, or tactile cues. Although other systems exist, users mostly rely on visual navigation using their smartphones. A problem with visual navigation systems, however, is that users have to shift their attention to the navigation system and then map its instructions onto the real world. We suggest using in-situ navigation instructions that are presented directly in the environment by augmenting reality with a projector-quadcopter. Through a user study with 16 participants, we show that using in-situ instructions for navigation leads to a significantly higher ability to observe real-world points of interest. Furthermore, the participants enjoyed following the projected navigation cues.
P2PSTORY: Dataset of Children as Storytellers and Listeners in Peer-to-Peer Interactions
Understanding social-emotional behaviors in storytelling interactions plays a critical role in the development of interactive educational technologies for children. A challenge when designing for such interactions using technology like social robots, virtual agents, and tablets is understanding the social-emotional behaviors pertinent to storytelling-especially when emulating a natural peer-to-peer relationship between the child and the technology. We present P2PSTORY, a dataset of young children (5-6 years old) engaging in natural peer-to-peer storytelling interactions with fellow classmates. The dataset captures rich social behaviors of children without adult supervision, with each participant taking a turn as both storyteller and listener. It contains 58 video-recorded sessions along with a diverse set of behavioral annotations as well as developmental and demographic profiles of each child participant. We describe the main characteristics of the dataset in addition to findings that reveal perceptual differences between adults and children when evaluating the attentiveness of listeners.
Personalizing Persuasive Strategies in Gameful Systems to Gamification User Types
Persuasive gameful systems are effective tools for motivating behaviour change. Research has shown that tailoring these systems to individuals can increase their efficacy; however, there is little knowledge on how to personalize them. We conducted a large-scale study of 543 participants to investigate how different gamification user types responded to ten persuasive strategies depicted in storyboards representing persuasive gameful health systems. Our results reveal that people’s gamification user types play significant roles in the perceived persuasiveness of different strategies. People scoring high in the ‘player’ user type tend to be motivated by competition, comparison, cooperation, and reward while ‘disruptors’ are likely to be demotivated by punishment, goal-setting, simulation, and self-monitoring. ‘Socialisers’ could be motivated using any of the strategies; they are the most responsive to persuasion overall. Finally, we contribute to CHI research and practice by offering design guidelines for tailoring persuasive gameful systems to each gamification user type.
Your Eyes Tell: Leveraging Smooth Pursuit for Assessing Cognitive Workload
A common objective for context-aware computing systems is to predict how user interfaces impact user performance with respect to their cognitive capabilities. Existing approaches such as questionnaires or pupil dilation measurements either only allow for subjective assessments or are susceptible to environmental influences and user physiology. We address these challenges by exploiting the fact that cognitive workload influences smooth pursuit eye movements. We compared three trajectories and two speeds under different levels of cognitive workload within a user study (N=20). We found higher deviations of gaze points during smooth pursuit eye movements for specific trajectory types at higher cognitive workload levels. Using an SVM classifier, we predict cognitive workload through smooth pursuit with an accuracy of 99.5% for distinguishing between low and high workload, as well as an accuracy of 88.1% for estimating workload across three levels of difficulty. We discuss implications and present use cases of how cognition-aware systems benefit from inferring cognitive workload in real time from smooth pursuit eye movements.
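As a rough illustration of the classification step, the Python sketch below (the feature set is our own assumption, not the authors’ exact pipeline) summarizes each pursuit trial by how far gaze samples stray from the target trajectory and feeds those features to an SVM.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def pursuit_features(gaze_xy, target_xy):
        """Per-trial summary of gaze deviation from the moving target (both N x 2 arrays)."""
        deviation = np.linalg.norm(np.asarray(gaze_xy) - np.asarray(target_xy), axis=1)
        return [deviation.mean(), deviation.std(), deviation.max()]

    def train_workload_classifier(trials, labels):
        """trials: list of (gaze_xy, target_xy) pairs; labels: workload level per trial."""
        X = [pursuit_features(gaze, target) for gaze, target in trials]
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        return clf.fit(X, labels)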
FingerPing: Recognizing Fine-grained Hand Poses using Active Acoustic On-body Sensing
FingerPing is a novel sensing technique that can recognize various fine-grained hand poses by analyzing acoustic resonance features. A surface transducer mounted on a thumb ring injects acoustic chirps (20 Hz to 6,000 Hz) into the body. Four receivers distributed on the wrist and thumb collect the chirps. Different hand poses create distinct paths for the acoustic chirps to travel, producing unique frequency responses at the four receivers. We demonstrate how FingerPing can differentiate up to 22 hand poses, including the thumb touching each of the 12 phalanges on the hand as well as 10 American Sign Language poses. A user study with 16 participants showed that our system can recognize these two sets of poses with accuracies of 93.77% and 95.64%, respectively. We discuss the opportunities and remaining challenges for the widespread use of this input technique.
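The recognition step can be pictured as follows; the sketch is our own simplification (fixed coarse frequency bins, any off-the-shelf classifier downstream), not the authors’ exact feature design. Each receiver’s recording of a chirp is reduced to its magnitude spectrum between 20 Hz and 6,000 Hz, and the four spectra are concatenated into one feature vector per pose sample.

    import numpy as np

    def chirp_response(received, fs, n_bins=64):
        """Coarse magnitude spectrum of one receiver's recording within the chirp band."""
        spectrum = np.abs(np.fft.rfft(received))
        freqs = np.fft.rfftfreq(len(received), d=1.0 / fs)
        band = spectrum[(freqs >= 20) & (freqs <= 6000)]
        return np.array([chunk.mean() for chunk in np.array_split(band, n_bins)])

    def pose_features(receiver_signals, fs):
        """receiver_signals: four 1-D arrays, one per wrist/thumb receiver."""
        return np.concatenate([chirp_response(r, fs) for r in receiver_signals])

    # A standard classifier (e.g., an SVM) trained on these vectors distinguishes the poses.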
Frames and Slants in Titles of Visualizations on Controversial Topics
Slanted framing in news article titles induces bias and influences recall. While recent studies have found that viewers focus extensively on titles when reading visualizations, the impact of titles in visualization remains underexplored. We study frames in visualization titles, and how the slanted framing of titles and the viewer’s pre-existing attitude impact recall, perception of bias, and change of attitude. When asked to compose visualization titles, people used five existing news frames, an open-ended frame, and a statistics frame. We found that the slant of the title influenced the perceived main message of a visualization, with viewers deriving opposing messages from the same visualization. The results did not show any significant effect on attitude change. We highlight the danger of subtle statistics frames and viewers’ unwarranted conviction of the neutrality of visualizations. Finally, we present a design implication for the generation of visualization titles and one for the viewing of titles.
Typing on an Invisible Keyboard
A virtual keyboard takes up a large portion of precious screen real estate. We investigated whether an invisible keyboard is a feasible design option, how to support it, and how well it performs. Our study showed that users could correctly recall relative key positions even when keys were invisible, although with greater absolute errors and overlaps between neighboring keys. Our research also showed that adapting the spatial model used in decoding improved invisible keyboard performance. This method increased input speed by 11.5% over simply hiding the keyboard and using the default spatial model. Our 3-day multi-session user study showed typing on an invisible keyboard could reach a practical level of performance after only a few sessions of practice: input speed increased from 31.3 WPM to 37.9 WPM after 20-25 minutes of practice on each of 3 days, approaching that of a regular visible keyboard (41.6 WPM). Overall, our investigation shows that an invisible keyboard with an adapted spatial model is a practical and promising interface option for mobile text entry systems.
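A common way to realize such an adapted spatial model is to treat each key as a 2D Gaussian over touch points and re-fit its mean and covariance from taps made on the invisible keyboard. The sketch below follows that general idea; the regularization and minimum sample count are our own assumptions, and in a real decoder the key likelihoods would be combined with a language model.

    import numpy as np

    class SpatialModel:
        def __init__(self, key_centers):
            # key_centers: dict key -> (x, y) from the visible-keyboard layout, in pixels
            self.mu = {k: np.array(c, dtype=float) for k, c in key_centers.items()}
            self.cov = {k: np.eye(2) * 40.0 for k in key_centers}  # assumed initial spread

        def adapt(self, touches):
            # touches: dict key -> list of (x, y) taps observed on the invisible keyboard
            for k, pts in touches.items():
                pts = np.asarray(pts, dtype=float)
                if len(pts) >= 5:                            # only adapt with enough evidence
                    self.mu[k] = pts.mean(axis=0)
                    self.cov[k] = np.cov(pts.T) + np.eye(2)  # small regularization

        def decode_tap(self, xy):
            # most likely key for one tap (language model omitted in this sketch)
            def log_lik(k):
                d = np.asarray(xy, dtype=float) - self.mu[k]
                return -0.5 * (d @ np.linalg.inv(self.cov[k]) @ d
                               + np.log(np.linalg.det(self.cov[k])))
            return max(self.mu, key=log_lik)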
Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making
Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions-like taxation, justice, and child protection-are now commonplace. How might designers support such human values? We interviewed 27 public sector machine learning practitioners across 5 OECD countries regarding challenges understanding and imbuing public values into their work. The results suggest a disconnect between organisational and institutional realities, constraints and needs, and those addressed by current research into usable, transparent and ‘discrimination-aware’ machine learning-absences likely to undermine practical initiatives unless addressed. We see design opportunities in this disconnect, such as in supporting the tracking of concept drift in secondary data sources, and in building usable transparency tools to identify risks and incorporate domain knowledge, aimed both at managers and at the ‘street-level bureaucrats’ on the frontlines of public service. We conclude by outlining ethical challenges and future directions for collaboration in these high-stakes applications.
PEP (3D Printed Electronic Papercrafts): An Integrated Approach for 3D Sculpting Paper-Based Electronic Devices
We present PEP (Printed Electronic Papercrafts), a set of design and fabrication techniques for integrating electronics-based interactivity into printed papercrafts via 3D sculpting. We explore the design space of PEP, integrating four functions into 3D paper products: actuation, sensing, display, and communication, leveraging the expressive and technical opportunities enabled by stacking paper-like functional layers with paper. We outline a seven-step workflow, introduce a design tool we developed as an add-on to an existing CAD environment, and demonstrate example applications that combine electronically enabled functionality, the capability of 3D sculpting, and the unique creative affordances of the materiality of paper.
Between Grassroots and the Hierarchy: Lessons Learned from the Design of a Public Services Directory
There is a growing interest in HCI research studying technology for citizen engagement in civic issues. We are now seeing issues around technologies for empowerment and participation, long discussed in HCI literature, appropriated and formalised in government legislation. In the UK, recent reforms stipulate that community-based service information should be published in continuously updated, collaboratively designed and maintained, online platforms. We report on a qualitative study where we worked with stakeholders involved in the collaborative design, development and implementation of such a platform. Our findings highlight tensions between the grassroots desire to innovate and local governments’ rigid compliance with statutory obligation. We pose a series of challenges and opportunities for HCI researchers engaged in the design of civic technologies to consider going forward, addressing issues of engagement in policy, measures of participation and tools for enabling participatory processes in public institutions.
Social Influences on Executive Functioning in Autism: Design of a Mobile Gaming Platform
Most studies of executive function (EF) in Autism Spectrum Disorder (ASD) focus on cognitive information processing, placing less emphasis on the social interaction deficits core to ASD. We designed a mobile game that uses social and nonsocial stimuli to assess children’s EF skills. The game comprised three components involving different EF skills: cognitive flexibility (shifting/inference), inhibitory control, and short-term memory. By recruiting 65 children with and without ASD to play the mobile game, we investigated the potential of such platforms for capturing important phenotypic characteristics of individuals with autism. Results highlighted between-diagnostic-group differences in playing patterns, with children with ASD showing broad patterns of EF deficits but relative strengths in nonsocial short-term memory and preserved response to emotional inhibition cues. We also showed that the system could predict IQ, an important target for clinical treatment, towards the goal of developing platforms that act as long-term, efficient, and effective behavioral biomarkers for ASD.
A Large Inclusive Study of Human Listening Rates
As conversational agents and digital assistants become increasingly pervasive, understanding their synthetic speech becomes increasingly important. Simultaneously, speech synthesis is becoming more sophisticated and manipulable, providing the opportunity to optimize speech rate to save users time. However, little is known about people’s abilities to understand fast speech. In this work, we provide the first large-scale study on human listening rates. Run on LabintheWild, it used volunteer participants, was screen reader accessible, and measured listening rate by accuracy at answering questions spoken by a screen reader at various rates. Our results show that blind and low-vision people, who often rely on audio cues and access text aurally, generally have higher listening rates than sighted people. The findings also suggest a need to expand the range of rates available on personal devices. These results demonstrate the potential for users to learn to listen to faster rates, expanding the possibilities for human-conversational agent interaction.
Storyboard-Based Empirical Modeling of Touch Interface Performance
Touch interactions are now ubiquitous, but few tools are available to help designers quickly prototype touch interfaces and predict their performance. For rapid prototyping, most applications only support visual design. For predictive modelling, tools such as CogTool generate performance predictions but do not represent touch actions natively and do not allow exploration of different usage contexts. To combine the benefits of rapid visual design tools with underlying predictive models, we developed the Storyboard Empirical Modelling tool (StEM) for exploring and predicting user performance with touch interfaces. StEM provides performance models for mainstream touch actions, based on a large corpus of realistic data. We evaluated StEM in an experiment and compared its predictions to empirical times for several scenarios. The study showed that our predictions are accurate (within 7% of empirical values on average), and that StEM correctly predicted differences between alternative designs. Our tool provides new capabilities for exploring and predicting touch performance, even in the early stages of design.
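In spirit, such predictions resemble a keystroke-level sum of modeled per-action times drawn from an empirical corpus; the sketch below illustrates one plausible shape of such a prediction using made-up placeholder values, not StEM’s actual model parameters.

    # Placeholder per-action times in seconds; real values would come from an empirical corpus.
    ACTION_TIME = {"mental_prep": 1.2, "tap": 0.25, "swipe": 0.45, "drag": 0.9, "pinch": 1.1}

    def predict_task_time(actions):
        """actions: sequence of action names describing a storyboarded interaction path."""
        return sum(ACTION_TIME[a] for a in actions)

    print(predict_task_time(["mental_prep", "tap", "swipe", "tap"]))  # -> 2.15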
Adding Force Feedback to Mixed Reality Experiences and Games using Electrical Muscle Stimulation
We present a mobile system that enhances mixed reality experiences and games with force feedback by means of electrical muscle stimulation (EMS). The benefit of our approach is that it adds physical forces while keeping the users’ hands free to interact unencumbered-not only with virtual objects, but also with physical objects, such as props and appliances. We demonstrate how this supports three classes of applications along the mixed-reality continuum: (1) entirely virtual objects, such as furniture with EMS friction when pushed or an EMS-based catapult game. (2) Virtual objects augmented via passive props with EMS-constraints, such as a light control panel made tangible by means of a physical cup or a balance-the-marble game with an actuated tray. (3) Augmented appliances with virtual behaviors, such as a physical thermostat dial with EMS-detents or an escape-room that repurposes lamps as levers with detents. We present a user-study in which participants rated the EMS-feedback as significantly more realistic than a no-EMS baseline.
Learning from the Veg Box: Designing Unpredictability in Agency Delegation
The Internet of Things (IoT) promises to enable applications that foster a more efficient, sustainable, and healthy way of life. If end-users are to take full advantage of these developments, we foresee the need for future IoT systems and services to include an element of autonomy and to support the delegation of agency to software processes and connected devices. To inform the design of such future technology, we report on a breaching experiment designed to investigate how people integrate an unpredictable service, a veg box scheme, into everyday life. Findings from our semi-structured interviews and a two-week diary study with 11 households reveal that agency delegation must be warranted, that it must be possible to incorporate delegated decisions into everyday activities, and that delegation is subject to constraint. We further discuss design implications concerning the need to support people’s diverse values, and their coordinative and creative practices.
Knotation: Exploring and Documenting Choreographic Processes
Contemporary choreographers often interact directly with dancers when exploring their ideas, but lack adequate tools for capturing and documenting their work. Although our first study of choreographers and dancers revealed diverse strategies for recording choreographic fragments, we found that they all worked in terms of constraints, which they represented via spatial diagrams, as movement qualities or with their own personal notation system. This led to the design of Knotation, a mobile pen-based tool that lets choreographers sketch their own representations of choreographic ideas and render them interactive. In study two, Knotation served as a technology probe to support the contrasting practices of three professional choreographers. We revised Knotation based on their input, and ran a third structured observation study with six professional choreographers. Knotation easily supported both dance-then-record and record-then-dance strategies. Participants used and appropriated Knotation’s advanced features, including the combination of interactive timelines and floorplan diagrams, to represent and explore complex choreographic structures.
A Data-Driven Analysis of Workers’ Earnings on Amazon Mechanical Turk
A growing number of people are working as part of on-line crowd work. Crowd work is often thought to be low wage work. However, we know little about the wage distribution in practice and what causes low/high earnings in this setting. We recorded 2,676 workers performing 3.8 million tasks on Amazon Mechanical Turk. Our task-level analysis revealed that workers earned a median hourly wage of only ~$2/h, and only 4% earned more than $7.25/h. While the average requester pays more than $11/h, lower-paying requesters post much more work. Our wage calculations are influenced by how unpaid work is accounted for, e.g., time spent searching for tasks, working on tasks that are rejected, and working on tasks that are ultimately not submitted. We further explore the characteristics of tasks and working patterns that yield higher hourly wages. Our analysis informs platform design and worker tools to create a more positive future for crowd work.
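The sensitivity of the wage figures to unpaid work is easy to make concrete; the minimal Python sketch below (variable names and numbers are our own illustration, not the dataset) shows how the same earnings yield a much lower effective hourly wage once time spent searching for tasks, on rejected work, and on unsubmitted work is counted.

    def hourly_wage(rewards_usd, paid_task_seconds, unpaid_seconds=0):
        """Effective wage = total earnings / total time, where total time optionally
        includes unpaid work (searching, rejected tasks, unsubmitted tasks)."""
        total_hours = (sum(paid_task_seconds) + unpaid_seconds) / 3600.0
        return sum(rewards_usd) / total_hours

    rewards = [0.50, 1.00, 0.25]    # illustrative task rewards in USD
    durations = [600, 1200, 300]    # seconds actually spent on the paid tasks
    print(hourly_wage(rewards, durations))                        # 3.00 $/h
    print(hourly_wage(rewards, durations, unpaid_seconds=1500))   # 1.75 $/h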
Double-sided Printed Tactile Display with Electro Stimuli and Electrostatic Forces and its Assessment
Humans perceive tactile sensations through multimodal stimuli. To present realistic pseudo-tactile sensations to users, a tactile display is needed that can provide multiple tactile stimuli. In this paper, we describe a novel printed tactile display that can provide both electrical stimuli and electrostatic force. The circuit patterns for each stimulus were fabricated using double-sided conductive ink printing. Requirements for the fabrication process were analyzed and the durability of the tactile display was evaluated. Users’ perceptions of a single tactile stimulus and of multiple tactile stimuli were also investigated. The experimental results indicate that the proposed tactile display is capable of exhibiting realistic tactile sensation and can be incorporated into various applications, such as tactile printing of pictorial illustrations and paintings. Furthermore, the proposed hybrid tactile display can contribute to accelerated prototyping and development of new tactile devices.
RecipeScape: An Interactive Tool for Analyzing Cooking Instructions at Scale
For cooking professionals and culinary students, understanding cooking instructions is an essential yet demanding task. Common tasks include categorizing different approaches to cooking a dish and identifying usage patterns of particular ingredients or cooking methods, all of which require extensive browsing and comparison of multiple recipes. However, no existing system provides support for such in-depth and at-scale analysis. We present RecipeScape, an interactive system for browsing and analyzing the hundreds of recipes of a single dish available online. We also introduce a computational pipeline that extracts cooking processes from recipe text and calculates a procedural similarity between them. To evaluate how RecipeScape supports culinary analysis at scale, we conducted a user study with cooking professionals and culinary students with 500 recipes for two different dishes. Results show that RecipeScape clusters recipes into distinct approaches, and captures notable usage patterns of ingredients and cooking actions.
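To give a flavor of what a procedural similarity between extracted cooking processes might look like, the sketch below compares two recipes represented as sequences of cooking actions using a normalized edit distance. This is an illustrative stand-in, not necessarily the metric RecipeScape computes.

    def edit_distance(a, b):
        """Minimum number of insertions, deletions, and substitutions turning a into b."""
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(len(a) + 1):
            dp[i][0] = i
        for j in range(len(b) + 1):
            dp[0][j] = j
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1,
                               dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
        return dp[len(a)][len(b)]

    def procedural_similarity(actions_a, actions_b):
        """1.0 for identical action sequences, approaching 0.0 for entirely different ones."""
        longest = max(len(actions_a), len(actions_b), 1)
        return 1.0 - edit_distance(actions_a, actions_b) / longest

    print(procedural_similarity(["chop", "fry", "season"], ["chop", "boil", "season"]))  # ≈ 0.67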
Fostering Commonfare. Infrastructuring Autonomous Social Collaboration
Recently, HCI scholars have started questioning the relationship between computing and political economy, with both general analyses of such relationships, and specific design cases describing design interventions. This paper contributes to this stream of reflections, and argues that IT designers and HCI scholars can critically engage with the contemporary phase of capitalism by infrastructuring the emergence of new institutional forms of autonomous social collaboration through IT projects. More specifically, we discuss strategies and tactics that are available for IT designers embracing an activist agenda while infrastructuring autonomous social collaborations. We draw on empirical data from an H2020 EU funded project — Commonfare — that seeks to foster the emergence of alternative forms of welfare provision rooted in social collaboration. In this context, we discuss how the necessary multiple relations that unfold in a project with such ambitions shape both the language and the technologies of the project itself.
Designing for Student Interactions: The Role of Embodied Interactions in Mediating Collective Inquiry in an Immersive Simulation
Advances in mobile and wireless technologies provide new possibilities for supporting K-12 learning activities that can be spatially distributed in the classroom, for example in jointly investigating a scientific phenomenon. Such technologies affect the ways in which students engage with one another and the quality of their engagement with the activity itself. This paper uses an embodied approach to understand the patterns of interaction between students (e.g., student-to-student, student-to-teacher) and with computational media within the environment (e.g., student-to-device, student-to-large display), in relation to students’ real-time meaning making as they engage in collective inquiry in an immersive simulation environment. The design-based research study consists of two iterations tested in an authentic school setting. We found that increased student-to-student interaction was accompanied by improved observational accuracy and higher-quality student explanations. The design implications of the research findings are discussed.
LoopMaker: Automatic Creation of Music Loops from Pre-recorded Music
Music loops are seamlessly repeatable segments of music that can be used for music composition as well as for backing tracks in media such as videos, webpages, and games. They are regularly used both by professional musicians and by novices with very little experience in audio editing and music composition. The process of creating music loops can be challenging and tedious, particularly for novices. We present LoopMaker, an interactive system that assists users in creating and exploring music loops from pre-recorded music. Our system can be used in a semi-automatic mode in which it refines a user’s rough selection of a loop. It can also be used in a fully automatic mode in which it creates a number of loops from a given piece of music and lets the user interactively explore these loops. Our user study suggests that our system makes the loop creation process significantly faster, easier, and more enjoyable than manual creation for both novices and experts. It also suggests that the quality of these loops is comparable to loops created manually by experts.
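One simple way to refine a rough loop selection, in the spirit of the semi-automatic mode, is to search near the user’s chosen end point for the position whose audio best matches the audio at the loop start, so the jump back is least audible. The sketch below does this with a windowed cosine similarity; it is an illustrative approximation, not LoopMaker’s actual algorithm.

    import numpy as np

    def refine_loop_end(audio, fs, start, rough_end, search_s=0.25, win_s=0.05):
        """All positions in samples. Returns the end position near rough_end whose
        following window best matches the window at the loop start."""
        win = int(win_s * fs)
        search = int(search_s * fs)
        ref = audio[start:start + win]
        best_end, best_score = rough_end, -np.inf
        for end in range(max(start + win, rough_end - search), rough_end + search):
            cand = audio[end:end + win]
            if len(cand) < win:
                break
            score = float(ref @ cand) / (np.linalg.norm(ref) * np.linalg.norm(cand) + 1e-9)
            if score > best_score:
                best_end, best_score = end, score
        return best_end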
Doppio: Tracking UI Flows and Code Changes for App Development
Developing interactive systems often involves a large set of callback functions for handling user interaction, which makes it challenging to manage UI behaviors, create descriptive documentation, and track code revisions. We developed Doppio, a tool that automatically tracks and visualizes UI flows and their changes based on source code. For each input event listener of a widget, e.g., onClick of an Android View class, Doppio captures and associates its UI output from a program execution with its code snippet from the codebase. It automatically generates a screenflow diagram organized by callback methods and interaction flow, where developers can review code and UI revisions interactively. As an IDE plugin, Doppio is seamlessly integrated into a common development workflow. Our studies show that our tool generated quality visual documentation and helped participants understand unfamiliar source code and track changes.
“It’s not actually that horrible”: Exploring Adoption of Two-Factor Authentication at a University
Despite the additional protection it affords, two-factor authentication (2FA) adoption reportedly remains low. To better understand 2FA adoption and its barriers, we observed the deployment of a 2FA system at Carnegie Mellon University (CMU). We explore user behaviors and opinions around adoption, surrounding a mandatory adoption deadline. Our results show that (a) 2FA adopters found it annoying, but fairly easy to use, and believed it made their accounts more secure; (b) experience with CMU Duo often led to positive perceptions, sometimes translating into 2FA adoption for other accounts; and, (c) the differences between users required to adopt 2FA and those who adopted voluntarily are smaller than expected. We also explore the relationship between different usage patterns and perceived usability, and identify user misconceptions, insecure practices, and design issues. We conclude with recommendations for large-scale 2FA deployments to maximize adoption, focusing on implementation design, use of adoption mandates, and strategic messaging.
Am I a Bunny?: The Impact of High and Low Immersion Platforms and Viewers’ Perceptions of Role on Presence, Narrative Engagement, and Empathy during an Animated 360° Video
This study used both quantitative and qualitative data to assess whether a High Immersion viewing platform (virtual reality headset) elicits stronger feelings of narrative engagement and empathy compared to a Low Immersion platform (smartphone) when viewing an animated 360° video. In line with prior research, participants (N = 65) reported greater feelings of presence in the High Immersion condition compared to Low Immersion. However, immersive condition was not significantly related to narrative engagement or empathy. Interview responses revealed that participants’ perceptions of their role in the film experience (i.e., Character, Observer, or Other/Not Sure) varied and were significantly related to narrative engagement. Participants who saw themselves as a Character (versus Observer) reported higher narrative engagement and empathy. Findings suggest that although a more immersive viewing platform can enhance presence during a 360° video experience, a clear understanding of viewer role is both difficult to achieve and critical to story comprehension and empathy.
Making Sense of Blockchain Applications: A Typology for HCI
Blockchain is an emerging infrastructural technology that is proposed to fundamentally transform the ways in which people transact, trust, collaborate, organize and identify themselves. In this paper, we construct a typology of emerging blockchain applications, consider the domains in which they are applied, and identify distinguishing features of this new technology. We argue that there is a unique role for the HCI community in linking the design and application of blockchain technology towards lived experience and the articulation of human values. In particular, we note how the accounting of transactions, a trust in immutable code and algorithms, and the leveraging of distributed crowds and publics around vast interoperable databases all relate to longstanding issues of importance for the field. We conclude by highlighting core conceptual and methodological challenges for HCI researchers beginning to work with blockchain and distributed ledger technologies.
“Accessibility Came by Accident”: Use of Voice-Controlled Intelligent Personal Assistants by People with Disabilities
From an accessibility perspective, voice-controlled, home-based intelligent personal assistants (IPAs) have the potential to greatly expand speech interaction beyond dictation and screen reader output. To examine the accessibility of off-the-shelf IPAs (e.g., Amazon Echo) and to understand how users with disabilities are making use of these devices, we conducted two exploratory studies. The first, broader study is a content analysis of 346 Amazon Echo reviews that include users with disabilities, while the second study more specifically focuses on users with visual impairments, through interviews with 16 current users of home-based IPAs. Findings show that, although some accessibility challenges exist, users with a range of disabilities are using the Amazon Echo, including for unexpected cases such as speech therapy and support for caregivers. Richer voice-based applications and solutions to support discoverability would be particularly useful to users with visual impairments. These findings should inform future work on accessible voice-based IPAs.
Conveying the Perception of Kinesthetic Feedback in Virtual Reality using State-of-the-Art Hardware
Including haptic feedback in current consumer VR applications is frequently challenging, since the technical possibilities for creating haptic feedback in consumer-grade VR are limited. While most systems include and make use of the possibility to create tactile feedback through vibration, kinesthetic feedback systems so far rely almost exclusively on external mechanical hardware to induce actual sensations. In this paper, we describe an approach to create a feeling of such sensations using unmodified off-the-shelf hardware and a software solution based on a multi-modal pseudo-haptics approach. We first explore this design space by applying user-elicited methods, and afterwards evaluate our refined solution in a user study. The results show that it is indeed possible to communicate kinesthetic feedback through visual and tactile cues only, and even to induce its perception. While visual clipping was generally unappreciated, our approach led to significant increases in enjoyment and presence.
A Bermuda Triangle?
User experience (UX) evaluation is a growing field with diverse approaches. To understand developments since previous meta-review efforts, we conducted a state-of-the-art review of UX evaluation techniques with special attention to triangulation between methods. We systematically selected and analyzed 100 papers from recent years; while we found an increase in relevant UX studies, we also saw a remaining overlap with pure usability evaluations. Positive trends include an increasing percentage of field rather than lab studies and a tendency to combine several methods in UX studies. Triangulation was applied in more than two thirds of the studies, and the most common method combination was questionnaires and interviews. Based on our analysis, we derive common patterns for triangulation in UX evaluation efforts. A critical discussion of existing approaches should help to obtain stronger results, especially when evaluating new technologies.
Designing Future Social Wearables with Live Action Role Play (Larp) Designers
Designing wearable technology that supports physical and social engagement in a collocated setting is challenging. In this research, we reached out to an expert community of crafters of social experiences: larpers (live action role players). Larpers and larp designers have a longstanding tradition of designing and making use of a variety of elements, such as costumes, physical objects, environments, and recently also digital artifacts. These are crafted in support of co-experience values that we argue can inform the design of social wearables. We engaged in a co-design process with a game designer and co-founder of a larp production company, and embedded the resulting social wearables in a larp. Here, we present the results of this design and implementation process, and articulate design affordances that resonate with our larp designer’s values. This work may inspire and inform researchers and designers creating wearable technology aimed at supporting collocated engagement.
Antibiotic-Responsive Bioart: Exploring DIYbio as a Design Studio Practice
Our work links hybrid practices from biology, fine arts, and design in a studio setting to support materially-oriented engagement with biotechnology. Using autoethnographic methods, we present our two-year process of converting an HCI studio into a BSL-1 (biosafety level 1) facility, our iterative development of low-cost tools, and our own self-reflexive experimentation with (DIY)bio protocols. Insights from this work led us to design a weeklong bioart course in which junior high school students creatively “painted” with bacteria and antibiotic substances, digitally designed stencils from the resulting petri dish images, and screenprinted them onto physical artifacts. Our findings reveal the nuances of working with biological, analog, and digital materials in a design studio setting. We conclude by reflecting on the DIYbio studio as a gathering of diverse actors who work with hybrid materials to give physical form to matters of concern.
More Than a Show: Using Personalized Immersive Theater to Educate and Engage the Public in Technology Ethics
Devising strategies to engage the public in discussions around the design and development of technology is critical to building a future that works for everyone. This paper presents a novel case study, an immersive theater experience, “Quantified Self,” that combines aspects of design fiction and user enactments to construct a public engagement opportunity about technology ethics. Our audience supplied their social data (Facebook, Twitter…) and received a personalized experience where they interacted with a narrative and technology exhibits. We used a design model targeting goals of engagement, education, and discussion. Here we overview the design and production of Quantified Self and report on the results (240 participants over 6 performances) and findings from audience surveys (n=179/240) and cast/crew interviews (n=15/22). We found our approach attracted a wide audience interested in different elements of the show. Affordances and challenges of our model are discussed in detail.
CatAR: A Novel Stereoscopic Augmented Reality Cataract Surgery Training System with Dexterous Instruments Tracking Technology
We propose CatAR, a novel stereoscopic augmented reality (AR) cataract surgery training system. It provides dexterous instrument tracking ability using a specially designed infrared optical system with 2 cameras and 1 reflective marker. The tracking accuracy on the instrument tip is 20 µm, much higher than previous simulators. Moreover, our system allows trainees to use and to see real surgical instruments while practicing. Five training modules with 31 parameters were designed and 28 participants were enrolled to conduct efficacy and validity tests. The results revealed significant differences between novice and experienced surgeons. Improvements in surgical skills after practicing with CatAR were also significant.
You Watch, You Give, and You Engage: A Study of Live Streaming Practices in China
Despite gaining traction in North America, live streaming has not reached the popularity it has in China, where it has a tremendous impact on the social behaviors of users. To better understand this socio-technological phenomenon, we conducted a mixed methods study of live streaming practices in China. We present the results of an online survey of 527 live streaming users, focusing on their broadcasting or viewing practices and the experiences they find most engaging. We also interviewed 14 active users to explore their motivations and experiences. Our data revealed the different categories of content that were broadcast and how varying aspects of this content engaged viewers. We also gained insight into the role reward systems and fan group-chat play in engaging users, while also finding evidence that both viewers and streamers desire deeper channels and mechanisms for interaction in addition to the commenting, gifting, and fan groups that are available today.
Bolt: Instantaneous Crowdsourcing via Just-in-Time Training
Real-time crowdsourcing has made it possible to solve problems that are beyond the scope of artificial intelligence (AI) within a matter of seconds, rather than hours or days with traditional crowdsourcing techniques. While this has led to an increase in the potential application domains of crowdsourcing and human computation, problems that require machine-level speeds—on the order of milliseconds, not seconds—have remained out of reach because of the fundamental bounds of human perception and response time. In this paper, we demonstrate that it is possible to exceed these bounds by combining human and machine intelligence. We introduce the look-ahead approach, a hybrid intelligence workflow that enables instantaneous crowdsourcing systems (i.e., those that can return crowd responses within mere milliseconds). The look-ahead approach works by exploring possible future states that may be encountered within a short time horizon (e.g., a few seconds into the future) and prefetching crowd worker responses to these states. We validate the efficacy and explore the limitations of our approach on the Bolt system, which consists of an arcade-style game (Lightning Dodger) that we formally model as a Markov Decision Process (MDP). When the MDP reward function is unspecified—as in many real-world tasks—the look-ahead approach enables just-in-time (JIT) training of the agent’s policy function. Through a series of crowd worker experiments, we demonstrate that the look-ahead approach can outperform the fastest individual worker by approximately two orders of magnitude. Our work opens new avenues for hybrid intelligence systems that are as smart as people, but also far faster than humanly possible.
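The essence of the look-ahead approach can be sketched as follows: from the current game state, enumerate the states reachable within a short horizon and issue crowd queries for them ahead of time, so a cached answer is already waiting when one of those states actually occurs milliseconds later. The state interface and the synchronous ask_crowd call in the Python sketch below are our own simplifying assumptions for illustration (in practice the queries would be issued asynchronously).

    def prefetch_crowd_responses(state, available_actions, simulate, ask_crowd, horizon=2):
        """Explore states reachable within `horizon` steps and prefetch a crowd answer for each.
        available_actions(s) -> iterable of actions; simulate(s, a) -> successor state (hashable);
        ask_crowd(s) -> crowd response for state s."""
        cache = {}
        frontier = [state]
        for _ in range(horizon):
            next_frontier = []
            for s in frontier:
                for action in available_actions(s):
                    successor = simulate(s, action)
                    if successor not in cache:
                        cache[successor] = ask_crowd(successor)  # requested before it happens
                        next_frontier.append(successor)
            frontier = next_frontier
        return cache  # looked up instantly when the real game reaches one of these states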
Convey: Exploring the Use of a Context View for Chatbots
Text messaging-based conversational systems, popularly called chatbots, have seen massive growth lately. Recent work on evaluating chatbots has found that there exists a mismatch between the chatbot’s state of understanding (also called context) and the user’s perception of the chatbot’s understanding. Users found it difficult to use chatbots for complex tasks as they were uncertain of the chatbots’ intelligence level and contextual state. In this work, we propose Convey (CONtext View), a window added to the chatbot interface that displays the conversational context and provides interactions with the context values. We conducted a usability evaluation of Convey with 16 participants. Participants preferred using the chatbot with Convey and found it easier to use, less mentally demanding, faster, and more intuitive than a default chatbot without Convey. The paper concludes with a discussion of the design implications offered by Convey.
Confronting Social Criticisms: Challenges when Adopting Data-Driven Policing Strategies
Proponents of data-driven policing strategies claim that they make policing organizations more effective, efficient, and accountable, and that they have the potential to address some social criticisms of policing (e.g., racial bias, lack of accountability and training). What remains less understood are the challenges of adopting data-driven policing as a response to these criticisms. We present results from a qualitative field study of the adoption of data-driven policing strategies in a Midwestern police department in the United States. We identify three key challenges police face in data-driven adoption efforts: data-driven frictions, precarious and inactionable insights, and police metis concerns. We demonstrate the issues that data-driven initiatives create for policing and the open questions police agents face. These findings contribute an empirical account of how policing agents attend to the strengths and limits of big data’s knowledge claims. Lastly, we present data and design implications for policing.
Weaving Lighthouses and Stitching Stories: Blind and Visually Impaired People Designing E-textiles
We describe our experience of working with blind and visually impaired people to create interactive art objects that are personal to them, through a participatory making process using electronic textiles (e-textiles) and hands-on crafting techniques. The research addresses both the practical considerations of how to structure hands-on making workshops so that they are accessible to participants of varying experience and abilities, and how effective the approach was in enabling participants to tell their own stories and feel in control of the design and making process. The result of our analysis is a set of insights into how to run e-textile making sessions in a way that makes them more accessible and inclusive to a wider community of participants.
This Changes Sustainable HCI
More than a decade into Sustainable HCI (SHCI) research, the community is still struggling to converge on a shared understanding of sustainability and HCI’s role in addressing it. We think this is largely a positive sign, reflective of maturity; yet the lack of a clear set of aims and metrics for sustainability continues to impede the community’s progress, so we seek to articulate a vision around which the community can productively coalesce. Drawing from recent SHCI publications, we identify commonalities that might form the basis of a shared understanding, and we show that this understanding closely aligns with the authoritative conception of a path to a sustainable future proffered by Naomi Klein in her book This Changes Everything. We elaborate a set of contributions that SHCI is already making that can be unified under Klein’s narrative, and compare these categories of work to those found in past surveys of the field as evidence of substantive progress in SHCI.
Use the Right Sound for the Right Job: Verbal Commands and Auditory Icons for a Task-Management System Favor Different Information Processes in the Brain
Design recommendations for notifications are typically based on user performance and subjective feedback. In comparison, there has been surprisingly little research on how designed notifications might be processed by the brain for the information they convey. The current study uses EEG/ERP methods to evaluate auditory notifications that were designed to cue long-distance truck drivers about task management and driving conditions, particularly for automated driving scenarios. Two experiments separately evaluated naive students and professional truck drivers for their behavioral and brain responses to auditory notifications, which were either auditory icons or verbal commands. Our EEG/ERP results suggest that verbal commands were more readily recognized by the brain as relevant targets, but that auditory icons were more likely to update contextual working memory. The two classes of notifications did not differ on behavioral measures. This suggests that auditory icons ought to be employed for communicating contextual information, and verbal commands for urgent requests.
Iris: A Conversational Agent for Complex Tasks
Today, most conversational agents are limited to simple tasks supported by standalone commands, such as getting directions or scheduling an appointment. To support more complex tasks, agents must be able to generalize from and combine the commands they already understand. This paper presents a new approach to designing conversational agents inspired by linguistic theory, where agents can execute complex requests interactively by combining commands through nested conversations. We demonstrate this approach in Iris, an agent that can perform open-ended data science tasks such as lexical analysis and predictive modeling. To power Iris, we have created a domain-specific language that transforms Python functions into combinable automata and regulates their combinations through a type system. Running a user study to examine the strengths and limitations of our approach, we find that data scientists completed a modeling task 2.6 times faster with Iris than with Jupyter Notebook.
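A toy version of the underlying idea, registering plain Python functions as typed commands whose arguments can themselves be satisfied by nested commands, might look like the sketch below; the decorator, registry, and prompting loop are our own illustration, not Iris’s actual domain-specific language.

    REGISTRY = {}

    def command(**arg_types):
        """Register a Python function as an agent command with typed arguments."""
        def wrap(func):
            REGISTRY[func.__name__] = (func, arg_types)
            return func
        return wrap

    def run(name, ask=input):
        """Resolve each argument by asking the user; if the reply names another
        registered command, recurse into a nested conversation to produce the value."""
        func, arg_types = REGISTRY[name]
        args = {}
        for arg, arg_type in arg_types.items():
            reply = ask(f"What should I use for '{arg}'? ")
            args[arg] = run(reply, ask) if reply in REGISTRY else arg_type(reply)
        return func(**args)

    @command(text=str)
    def word_count(text):
        return len(text.split())

    @command(n=int)
    def random_words(n):
        return " ".join(["data"] * n)   # stand-in for a command that produces text

    # run("word_count") lets the user answer "random_words", nesting a sub-conversation.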
Explaining Viewers’ Emotional, Instrumental, and Financial Support Provision for Live Streamers
On live streams, viewers can support streamers through various methods ranging from well-wishing text messages to money. In this study (N=230) we surveyed viewers who had given money to a streamer. We identified six motivations for why they gave money to their favorite live streamer. We then examined how factors related to viewer, streamer, and viewer-streamer interaction were associated with three forms of social support provision: emotional, instrumental, and financial support. Our main findings are: parasocial relationship was consistently correlated with all three types of social support, while social presence was only related with instrumental and financial support; interpersonal attractiveness was associated with emotional and instrumental support and lonely people were more likely to give instrumental support. Our focus on various types of social support in a live streaming masspersonal platform adds a more detailed understanding to the existing literature of mediated social support. Furthermore, it suggests potential directions for designing more supportive and interactive live streaming platforms.
Making the News: Digital Creativity Support for Journalists
This paper reports the design and first evaluations of new digital support for journalists to discover and examine creative angles on news stories under development. The support integrated creative news search algorithms, interactive creative sparks and reusable concept cards into one daily work tool of journalists. The first evaluations of INJECT by journalists in their places of work to write published news stories revealed that the journalists generated new angles on existing stories rather than new stories, changed their writing behaviour, and reported evidence that INJECT use had the potential to increase the objectivity and the boldness of journalism methods used.
Visual ODLs: Co-Designing Patient-Generated Observations of Daily Living to Support Data-Driven Conversations in Pediatric Care
Teens with complex chronic illnesses have difficulty understanding and articulating symptoms such as pain and emotional distress. Yet, symptom communication plays a central role in clinical care and illness management. To understand how design can help overcome these challenges, we created a visual library of 72 sketched illustrations, informed by the Observations of Daily Living framework along with insights from 11 clinician interviews. We utilized our library with storyboarding techniques, free-form sketching, and interviews, in co-design sessions with 13 pairs of chronically-ill teens and their parents. We found that teens depicted symptoms as being interwoven with narratives of personal and social identity. Teens and parents were enthusiastic about collaboratively-generated, interactive storyboards as a tracking and communication mechanism, and suggested three ways in which they could aid in communication and coordination with informal and formal caregivers. In this paper, we detail these findings, to guide the design of tools for symptom-tracking and incorporation of patient-generated data into pediatric care.
Improving Discoverability and Expert Performance in Force-Sensitive Text Selection for Touch Devices with Mode Gauges
Text selection on touch devices can be a difficult task for users. Letters and words are often too small to select directly, and the enhanced interaction techniques provided by the OS — magnifiers, selection handles, and methods for selecting at the character, word, or sentence level — often lead to as many usability problems as they solve. The introduction of force-sensitive touchscreens has added another enhancement to text selection (using force for different selection modes); however, these modes are difficult to discover and many users continue to struggle with accurate selection. In this paper we report on an investigation of the design of touch-based and force-based text selection mechanisms, and describe two novel text-selection techniques that provide improved discoverability, enhanced visual feedback, and a higher performance ceiling for experienced users. Two evaluations show that one design successfully combined support for novices and experts, was never worse than the standard iOS technique, and was preferred by participants.
Vibrational Artificial Subtle Expressions: Conveying System’s Confidence Level to Users by Means of Smartphone Vibration
Artificial subtle expressions (ASEs) are machine-like expressions used to convey a system’s confidence level to users intuitively. So far, auditory ASEs using beep sounds, visual ASEs using LEDs, and motion ASEs using robot movements have been implemented and shown to be effective. In this paper, we propose a novel type of ASE that uses vibration (vibrational ASEs). We implemented the vibrational ASEs on a smartphone and conducted experiments to confirm whether they can convey a system’s confidence level to users in the same way as the other types of ASEs. The results clearly showed that vibrational ASEs were able to accurately and intuitively convey the designed confidence level to participants, demonstrating that ASEs can be applied in a variety of applications in real environments.
Investigating the Effect of the Multiple Comparisons Problem in Visual Analysis
The goal of a visualization system is to facilitate dataset-driven insight discovery. But what if the insights are spurious? Features or patterns in visualizations can be perceived as relevant insights, even though they may arise from noise. We often compare visualizations to a mental image of what we are interested in: a particular trend, distribution or an unusual pattern. As more visualizations are examined and more comparisons are made, the probability of discovering spurious insights increases. This problem is well-known in Statistics as the multiple comparisons problem (MCP) but overlooked in visual analysis. We present a way to evaluate MCP in visualization tools by measuring the accuracy of user reported insights on synthetic datasets with known ground truth labels. In our experiment, over 60% of user insights were false. We show how a confirmatory analysis approach that accounts for all visual comparisons, insights and non-insights, can achieve similar results as one that requires a validation dataset.
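To make the scale of the problem concrete: at a 5% false-positive rate, examining 100 independent noise-only comparisons gives roughly a 99% chance of at least one spurious “insight” (1 - 0.95^100). One standard way of accounting for all comparisons made during exploration, offered here purely as an illustrative correction rather than the authors’ confirmatory procedure, is a Benjamini-Hochberg adjustment over the p-values of every insight and non-insight examined:

    def benjamini_hochberg(p_values, alpha=0.05):
        """Indices of comparisons still judged significant after controlling the
        false discovery rate across ALL comparisons made during visual exploration."""
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])
        cutoff = 0
        for rank, idx in enumerate(order, start=1):
            if p_values[idx] <= rank * alpha / m:
                cutoff = rank              # largest rank passing the BH threshold
        return order[:cutoff]

    print(benjamini_hochberg([0.001, 0.04, 0.20, 0.03]))  # -> [0]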
Passenger Trip Planning using Ride-Sharing Services
Ride-sharing can potentially address transportation challenges such as traffic congestion and air pollution by letting drivers share their cars’ unused capacity with a number of passengers. However, even though multiple ride-sharing services exist and HCI research has investigated various aspects of their use, we still have limited knowledge of how passengers use ride-sharing services to plan their trips. In this paper, we study how passengers use existing services to support the activity of planning a trip. We report from a qualitative study in which we participated in 5 rides and conducted interviews with 19 passengers about their use of and opinions towards ride-sharing services. We found that planning a ride involves comparing individual preferences across a number of services, which helped participants find a trip and handle challenges such as privacy and trust. We further discuss these findings and their implications for future HCI research in ride-sharing.
Infrastructuring the Solidarity Economy: Unpacking Strategies and Tactics in Designing Social Innovation
Solidarity organizations in Europe are committed to building a more socially just society through a better configuration of democracy, politics and economy. In this paper, we describe our efforts to contribute to the socio-political design innovation of solidarity movements through the establishment of a research lab embedded in, and operating within, the solidarity economy. We describe three cases that span the polarities of everyday and expert design, and contribute to the scaling out of social innovations. We use these cases to exemplify the strategies and tactics that emerge from the ongoing negotiation of ‘infrastructuring’ work with solidarity organizations. Finally, we discuss how guerrilla infrastructuring, designing coalitions, and spanning design polarities can contribute to HCI and design for social innovation more generally.
Exploring Co-design with Breastfeeding Mothers
Designing mobile applications for breastfeeding mothers can be challenging; creating spaces to foster co-design — when a mother’s primary focus is on her child rather than on design activities — is even more so. In this paper we discuss the development of the Milk Matters mobile application, a tool developed to motivate women to donate their surplus breast milk to the local milk bank. We look at the importance of different approaches to understanding the mothers, comparing workshops, surveys, and cultural probes. Through our work we identify three factors to consider when co-designing with and for mothers: 1) interrupted interactions, 2) elements that might distract a baby, and 3) the importance of empowering mothers through positive reinforcement. Based on these factors we examine our methodological approaches, suggesting ways to make future research with breastfeeding mothers more productive.
The Dream is Collapsing: The Experience of Exiting VR
Research on virtual reality (VR) has studied users’ experience of immersion, presence, simulator sickness, and learning effects. However, the momentary experience of exiting VR and transitioning back to the real world is not well understood. Do users become self-conscious of their actions upon exit? Are users nervous about their surroundings? Using explicitation interviews, we explore the moment of exit from VR across four applications. Analysis of the interviews reveals five components of experience: space, control, sociality, time, and sensory adaptation. Participants described spatial disorientation, for example, regardless of the complexity of the VR scene. Participants also described a window across which they exit VR, for example mentally first and then physically. We present six designs for easing or heightening the exit experience, as described by the participants. Based on these findings, we further discuss the ‘moment of exit’ as an opportunity for designing engaging and enhanced VR experiences.
Common Barriers to the Use of Patient-Generated Data Across Clinical Settings
Patient-generated data, such as data from wearable fitness trackers and smartphone apps, are viewed as a valuable information source towards personalised healthcare. However, studies in specific clinical settings have revealed diverse barriers to their effective use. In this paper, we address the following question: are there barriers prevalent across distinct workflows in clinical settings to using patient-generated data? We conducted a two-part investigation: a literature review of studies identifying such barriers; and interviews with clinical specialists across multiple roles, including emergency care, cardiology, mental health, and general practice. We identify common barriers in a six-stage workflow model of aligning patient and clinician objectives, judging data quality, evaluating data utility, rearranging data into a clinical format, interpreting data, and deciding on a plan or action. This workflow establishes common ground for HCI practitioners and researchers to explore solutions to improving the use of patient-generated data in clinical practices.
Analyzing the Effect of Avatar Self-Similarity on Men and Women in a Search and Rescue Game
A crucial aspect of virtual gaming experiences is the avatar: the player’s virtual self-representation. While research has demonstrated benefits to using self-similar avatars in some virtual experiences, such avatars sometimes produce a more negative experience for women. To help researchers and game designers assess the cost-benefit tradeoffs of self-similar avatars, we compared players’ performance and subjective experience in a search and rescue computer game when using two different photorealistic avatars, resembling either themselves or a friend, and when playing either a social (rescuing people) or a nonsocial (rescuing gems) version of the game. There was no effect of avatar appearance on players’ performance or subjective experience in either game version, and women’s experience with self-similar avatars was no more negative than men’s. Our results suggest that avatar appearance may not make a difference to players in certain game contexts.
Supporting Rhythm Activities of Deaf Children using Music-Sensory-Substitution Systems
Rhythm is the first musical concept deaf people learn in music classes. However, hearing loss limits the amount of information that allows a deaf person to evaluate his or her performance and stay in sync with other musicians. In this paper, we investigated how a visual and vibrotactile music-sensory-substitution device, MuSS-Bits++, affects rhythm discrimination, reproduction, and expressivity of deaf people. We conducted a controlled study with 11 deaf children and found that most participants felt more confident wearing the device in vibration mode even when it did not objectively improve their accuracy. Furthermore, we studied how MuSS-Bits++ can be used in music classes at deaf schools and what challenges and opportunities arise in such a setting. Based on these studies, we discuss insights and future directions that support the design and development of music-sensory-substitution systems for music making.
Traces: Studying a Public Reactive Floor-Projection of Walking Trajectories to Support Social Awareness
Walking trajectories have been used to understand how users interact with public displays. However, how displaying them in situ could affect users’ awareness of others’ presence and activities has not yet been studied. We present a study of an interactive public floor-projection called Traces. Traces projects the walking trajectories of individuals as they pass through the lobby of a university building. We investigated Traces through a 6-week in-field study. Our results outline how different uses and understandings of Traces contributed towards its appropriation as a glanceable display for social awareness. We outline design suggestions that future designers should consider to support social awareness with public displays.
Exploring Accessible Smartwatch Interactions for People with Upper Body Motor Impairments
Smartwatches are always-available, provide quick access to information in a mobile setting, and can collect continuous health and fitness data. However, the small interaction space of these wearables may pose challenges for people with upper body motor impairments. To investigate accessible smartwatch interactions for this user group, we conducted two studies. First, we assessed the accessibility of existing smartwatch gestures with 10 participants with motor impairments. We found that not all participants were able to complete button, swipe and tap interactions. In a second study, we adopted a participatory approach to explore smartwatch gesture preferences and to gain insight into alternative, more accessible smartwatch interaction techniques. Eleven participants with motor impairments created gestures for 16 common smartwatch actions on both touchscreen and non-touchscreen (bezel, wristband) areas of the watch and the user’s body. We present results from both studies and provide design recommendations.
Pseudonymous Parents: Comparing Parenting Roles and Identities on the Mommit and Daddit Subreddits
Gender equality between mothers and fathers is critical for the social and economic wellbeing of children, mothers, and families. Over the past 50 years, gender roles have begun to converge, with mothers doing more work outside of the home and fathers doing more domestic work. However, popular parenting sites in the U.S. continue to be heavily gendered. We explore parenting roles and identities on the platform Reddit.com, which is used by both mothers and fathers. We draw on seven years of data from three major parenting subreddits (Parenting, Mommit, and Daddit) to investigate what topics parents discuss on Reddit and how they vary across parenting subreddits. We find some similarities in topics across the three boards, such as sleep training, as well as differences, such as fathers talking about custody cases and Halloween. We discuss the role of pseudonymity for providing parents with a platform to discuss sensitive parenting topics. We conclude by highlighting the benefits of both gender-inclusive and role-specific parenting boards. This work provides a roadmap for using computational techniques to understand parenting practices online at large scale.
Experiential Augmentation: Uncovering The Meaning of Qualitative Visualizations when Applied to Augmented Objects
As we move toward commercial usage of ubiquitous computing and augmented reality, it is important to think about how computing should communicate with us when it is distributed in our environment. This paper proposes that qualitative indexical visualizations based on learned understanding of physical phenomena (Experiential Augmentation) can enhance our interaction design language and aid digital interfaces in communicating in a real-world context. We present a study that gathers data on how participants interpret such visualizations, and propose a model with which to analyze their responses. Finally, we also give a set of design recommendations for those interested in creating similar augmentations.
Enabling the Participation of People with Parkinson’s and their Caregivers in Co-Inquiry around Collectivist Health Technologies
While user participation is central to HCI, co-inquiry takes this further by having participants direct and control research from conceptualisation to completion. We describe a co-inquiry, conducted over 16 months with a Parkinson’s support group. We explored how the participation of members might be enabled across multiple stages of a research project, from the generation of research questions to the development of a prototype. Participants directed the research into developing alternative modes of information provision, resulting in ‘Parkinson’s Radio’ — a collectivist health information service produced and edited by members of the support group. We reflect on how we supported participation at different stages of the project and the successes and challenges faced by the team. We contribute insights into the design of collectivist health technologies for this group, and discuss opportunities and tensions for conducting co-inquiry in HCI research.
Who Provides Phishing Training?: Facts, Stories, and People Like Me
Humans represent one of the most persistent vulnerabilities in many computing systems. Since human users are independent agents who make their own choices, closing these vulnerabilities means persuading users to make different choices. Focusing on one specific human choice — clicking on a link in a phishing email — we conducted an experiment to identify better ways to train users to make more secure decisions. We compared traditional facts-and-advice training against training that uses a simple story to convey the same lessons. We found a surprising interaction effect: facts-and-advice training works better than not training users, but only when presented by a security expert. Stories don’t work quite as well as facts-and-advice, but work much better when told by a peer. This suggests that the perceived origin of training materials can have a surprisingly large effect on security outcomes.
Unpacking Perceptions of Data-Driven Inferences Underlying Online Targeting and Personalization
Much of what a user sees browsing the internet, from ads to search results, is targeted or personalized by algorithms that have made inferences about that user. Prior work has documented that users find such targeting simultaneously useful and creepy. We begin unpacking these conflicted feelings through two online studies. In the first study, 306 participants saw one of ten explanations for why they received an ad, reflecting prevalent methods of targeting based on demographics, interests, and other factors. The type of interest-based targeting described in the explanation affected participants’ comfort with the targeting and perceptions of its usefulness. We conducted a follow-up study in which 237 participants saw ten interests companies might infer. Both the sensitivity of the interest category and participants’ actual interest in that topic significantly impacted their attitudes toward inferencing. Our results inform the design of transparency tools.
Inaccuracy Blindness in Collaboration Persists, even with an Evaluation Prompt
The tendency to believe and act on others’ misinformation is documented in much prior work. This paper focuses on inaccuracy blindness, the tendency to take a collaborator’s poor information at face value, which reduces problem-solving success. We draw on social psychological research from the 1970s showing that evaluative rating scales can prompt a change in perspective. In a series of studies, we prototyped and tested an evaluation prompt meant to encourage skepticism in participants acting as detectives trying to identify a serial killer. In tests of the prototype, the prompt was partially successful in inducing skepticism (Exp. 1), but a larger study (Exp. 2) showed that, despite the evaluation prompt, participants’ inaccuracy blindness persisted. This work, and the literature more generally, shows that the tendency to be misled by collaborators’ inaccurate information is a strong phenomenon that is hard to counteract and remains a significant challenge for the CHI community.
Huggable: The Impact of Embodiment on Promoting Socio-emotional Interactions for Young Pediatric Inpatients
Most hospitals make efforts to provide socio-emotional support for patients and their families during care. To expand the service provided by certified child life specialists (CCLS), we created a social robot and a virtual avatar that augment part of the care CCLS offer to patients by engaging pediatric patients in playful interactions and promoting their socio-emotional wellbeing. We ran a randomized controlled trial in the form of a Wizard-of-Oz study at a local pediatric hospital to study how three different interactive media (a plush teddy bear, a virtual agent on a screen, and a social robot) influence pediatric patients’ affect, joyful play, and social interactions with others. Behavioral analyses of verbal utterance transcriptions and children’s physical behavior revealed that the social robot is most effective in producing socially energetic conversations as well as increasing positivity and promoting multi-party interactions. The virtual avatar was socially engaging, but children tended to attend more exclusively to it and were less responsive to others. The plush toy was the least engaging of the three interventions, but children touched it the most. Based on these findings, we recommend use cases for each agent appropriate for individual pediatric patients’ health conditions and needs. These analyses of behavioral data suggest the benefit of deploying a physically embodied social robot in pediatric inpatient-care contexts for young patients’ social and emotional wellbeing.
Forte: User-Driven Generative Design
Low-cost fabrication machines (e.g., 3D printers) offer the promise of creating custom-designed objects by a range of users. To maximize performance, generative design methods such as topology optimization can automatically optimize properties of a design based on high-level specifications. Though promising, such methods require people to map their design ideas, often unintuitively, to a small number of mathematical input parameters, and the relationship between those parameters and a generated design is often unclear, making it difficult to iterate on a design. We present Forte, a sketch-based, real-time interactive tool for people to directly express and iterate on their designs via 2D topology optimization. Users can ask the system to add structures, provide a variation with better performance, or optimize internal material layouts. Users can globally control how much to ‘deviate’ from the initial sketch, or perform local suggestive editing, which interactively prompts the system to update based on the new information. Design sessions with 10 participants demonstrate that Forte empowers designers to create and explore a range of optimized designs with custom forms and styles.
PolarTrack: Optical Outside-In Device Tracking that Exploits Display Polarization
PolarTrack is a novel camera-based approach to detecting and tracking mobile devices inside the capture volume. In PolarTrack, a polarization filter continuously rotates in front of an off-the-shelf color camera, which causes the displays of observed devices to periodically blink in the camera feed. The periodic blinking results from the physical characteristics of current displays, which emit polarized light, either through an LC overlay that produces images or through a polarizer that reduces light reflections on OLED displays. PolarTrack runs a simple detection algorithm on the camera feed to segment displays and track their locations and orientations, which makes PolarTrack particularly suitable as a tracking system for cross-device interaction with mobile devices. Our evaluation of PolarTrack’s tracking quality and comparison with state-of-the-art camera-based multi-device tracking showed better tracking accuracy and precision with similar tracking reliability. PolarTrack works as a standalone multi-device tracking system but is also compatible with existing camera-based tracking systems and can complement them to compensate for their limitations.
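The core intuition, that display pixels blink at a known frequency induced by the rotating polarizer, can be sketched with a per-pixel temporal Fourier analysis; the following Python fragment is an illustration under assumed frame rates and blink frequencies, not PolarTrack's actual detection algorithm.

    # Sketch: segment display pixels by their temporal modulation at the expected
    # blink frequency induced by a rotating polarizer (assumed parameters).
    import numpy as np

    def display_mask(frames, fps, blink_hz, threshold=0.25):
        """frames: array of shape (T, H, W), grayscale camera feed."""
        t = frames.shape[0]
        freqs = np.fft.rfftfreq(t, d=1.0 / fps)
        spectrum = np.abs(np.fft.rfft(frames, axis=0))       # per-pixel magnitude spectrum
        blink_bin = int(np.argmin(np.abs(freqs - blink_hz)))  # bin closest to blink frequency
        energy = spectrum[blink_bin] / (spectrum.sum(axis=0) + 1e-9)
        return energy > threshold                             # True where a display likely is

    # Hypothetical 2-second clip at 60 fps with an assumed 4 Hz blink.
    frames = np.random.rand(120, 240, 320).astype(np.float32)
    mask = display_mask(frames, fps=60, blink_hz=4.0)
    print("candidate display pixels:", int(mask.sum()))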
Revisiting “Hole in the Wall” Computing: Private Smart Speakers and Public Slum Settings
Millions of homes worldwide enjoy access to digital content and services through smart speakers such as Amazon’s Echo and Google’s Home. Promotional materials and users’ own videos typically show homes that have many well-resourced rooms, with good power and data infrastructures. Over the last several years, we have been working with slum communities in India, where dwellings are usually very compact (one or two rooms), personal home WiFi is almost unheard of, power infrastructures are far less robust, and financial resources put such smart speakers out of reach for individual households. Inspired by the “hole in the wall” internet-kiosk programme, we carried out workshops with slum inhabitants to uncover issues and opportunities for providing a smart-speaker-type device in public areas and passageways. We designed and deployed a simple probe that allowed passers-by to ask and receive answers to questions. In this paper, we present the findings of this work, and a design space for such devices in these settings.
Values, Identity, and Social Translucence: Neurodiverse Student Teams in Higher Education
To successfully function within a team, students must develop a range of skills for communication, organization, and conflict resolution. For students on the autism spectrum, these skills mirror the social, communicative, and cognitive experiences that can often be challenging for these learners. Since instructors and students collaborate using a mix of technology, we investigated the technology needs of neurodiverse teams comprised of autistic and non-autistic students. We interviewed seven autistic students and five employees of disability services in higher education. Our analysis focused on technology stakeholder values, stages of small-group development, and Social Translucence — a model for online collaboration highlighting principles of visibility, awareness, and accountability. Despite motivation to succeed, neurodiverse students have difficulty expressing individual differences and addressing team conflict. To support future design of technology for neurodiverse teams, we propose: (1) a design space and design concepts including collaborative and affective computing tools, and (2) extending Social Translucence to account for student and group identities.
T-Cal: Understanding Team Conversational Data with Calendar-based Visualization
Understanding team communication and collaboration patterns is critical for improving work efficiency in organizations. This paper presents an interactive visualization system, T-Cal, that supports the analysis of conversation data from modern team messaging platforms (e.g., Slack). T-Cal employs a user-familiar visual interface, a calendar, to enable seamless multi-scale browsing of data from different perspectives. T-Cal also incorporates a number of analytical techniques for disentangling interleaving conversations, extracting keywords, and estimating sentiment. The design of T-Cal is based on an iterative user-centered design process including interview studies, requirements gathering, demonstration of initial prototypes, and evaluation with domain users. Two resulting case studies indicate the effectiveness and usefulness of T-Cal in real-world applications, including daily conversations within an industry research lab and student group chats in a MOOC.
HapCube: A Wearable Tactile Device to Provide Tangential and Normal Pseudo-Force Feedback on a Fingertip
Haptic devices allow a more immersive experience with Virtual and Augmented Reality. However, for a wider range of usage they need to be miniaturized while maintaining the quality of haptic feedback. In this study, we used two kinds of human sensory illusions of vibration. The first illusion involves creating a virtual force (pulling sensation) using asymmetric vibration, and the second involves imparting compliances of complex stress-strain curves (i.e., force-displacement curves such as those of mechanical keyboards) to a rigid object by changing the frequency and amplitude of vibration. Using these two illusions, we developed a wearable tactile device named HapCube, consisting of three orthogonal voicecoil actuators. Four measurement tests and four user tests confirmed that 1) a combination of two orthogonal asymmetric vibrations could provide a 2D virtual force in any tangential direction on a finger pad, and 2) a single voicecoil actuator produced pseudo-force feedback of the complex compliance curves in the normal direction.
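For readers unfamiliar with the first illusion, an asymmetric drive signal is typically a brief strong pulse in one direction balanced by a longer, weaker recovery; the sketch below generates one such cycle with made-up amplitudes and timing, and is not HapCube's actual actuator waveform.

    # Sketch: one cycle of an asymmetric drive signal, sharp in one direction and
    # gentle in the other, which biases the perceived force (illustrative values).
    import numpy as np

    def asymmetric_cycle(sample_rate=8000, freq=75.0, asymmetry=0.2):
        """Return one period: a short strong pulse followed by a long weak recovery."""
        n = int(sample_rate / freq)
        n_pulse = max(1, int(n * asymmetry))
        pulse = np.ones(n_pulse)                                      # brief high-amplitude segment
        recovery = -np.ones(n - n_pulse) * (n_pulse / (n - n_pulse))  # area balances to zero
        return np.concatenate([pulse, recovery])

    cycle = asymmetric_cycle()
    print("samples per cycle:", len(cycle), "mean (should be ~0):", round(cycle.mean(), 6))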
Keppi: A Tangible User Interface for Self-Reporting Pain
Motivated by the need to support those managing chronic pain, we report on the iterative design, development, and evaluation of Keppi, a novel pressure-based tangible user interface (TUI) for the self-report of pain intensity. In-lab studies with 28 participants found individuals were able to use Keppi to reliably report low, medium, and high pain as well as map squeeze pressure to pain level. Based on insights from these evaluations, we ultimately created a wearable version of Keppi with multiple form factors, including a necklace, bracelet, and keychain. Interviews indicated high receptivity to the wearable design, which satisfied additional user-identified needs (e.g., discreet and convenient) and highlighted key directions for the continued refinement of tangible devices for pain assessment.
Data, Data Everywhere, and Still Too Hard to Link: Insights from User Interactions with Diabetes Apps
For those with chronic conditions, such as Type 1 diabetes, smartphone apps offer the promise of an affordable, convenient, and personalized disease management tool. However, despite significant academic research and commercial development in this area, diabetes apps still show low adoption rates and underwhelming clinical outcomes. Through user-interaction sessions with 16 people with Type 1 diabetes, we provide evidence that commonly used interfaces for diabetes self-management apps, while providing certain benefits, can fail to explicitly address the cognitive and emotional requirements of users. From analysis of these sessions with eight such user interface designs, we report on user requirements as well as interface benefits and limitations, and then discuss the implications of these findings. Finally, with the goal of improving these apps, we identify three questions for designers and, for each in turn, review current shortcomings, relevant approaches, exposed challenges, and potential solutions.
Rewire: Interface Design Assistance from Examples
Interface designers often use screenshot images of example designs as building blocks for new designs. Since images are unstructured and hard to edit, designers typically reconstruct screenshots with vector graphics tools in order to reuse or edit parts of the design. Unfortunately, this reconstruction process is tedious and slow. We present Rewire, an interactive system that helps designers leverage example screenshots. Rewire automatically infers a vector representation of screenshots where each UI component is a separate object with editable shape and style properties. Based on this representation, the system provides three design assistance modes that help designers reuse or redraw components of the example design. The results from our quantitative and user evaluations demonstrate that Rewire can generate accurate vector representations of interface screenshots found in the wild and that design assistance enables users to reconstruct and edit example designs more efficiently compared to a baseline design tool.
Uncertainty Visualization Influences how Humans Aggregate Discrepant Information
The number of sensors in our surroundings that provide the same information steadily increases. Since sensing is prone to errors, sensors may disagree. For example, a GPS-based tracker on the phone and a sensor on the bike wheel may provide discrepant estimates on traveled distance. This poses a user dilemma, namely how to reconcile the conflicting information into one estimate. We investigated whether visualizing the uncertainty associated with sensor measurements improves the quality of users’ inference. We tested four visualizations with increasingly detailed representation of uncertainty. Our study repeatedly presented two sensor measurements with varying degrees of inconsistency to participants who indicated their best guess of the “true” value. We found that uncertainty information improves users’ estimates, especially if sensors differ largely in their associated variability. Improvements were larger for information-rich visualizations. Based on our findings, we provide an interactive tool to select the optimal visualization for displaying conflicting information.
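As background on how such conflicts are usually resolved statistically, the standard precision-weighted (inverse-variance) fusion of two readings is shown below; the numbers are hypothetical and this is not the model the study tested.

    # Sketch: inverse-variance (precision-weighted) fusion of two discrepant
    # distance estimates with known uncertainty (hypothetical numbers).
    def fuse(x1, var1, x2, var2):
        w1, w2 = 1.0 / var1, 1.0 / var2
        estimate = (w1 * x1 + w2 * x2) / (w1 + w2)
        variance = 1.0 / (w1 + w2)
        return estimate, variance

    # GPS tracker says 10.2 km (sd 0.5 km); wheel sensor says 9.6 km (sd 0.1 km).
    est, var = fuse(10.2, 0.5**2, 9.6, 0.1**2)
    print(f"fused estimate: {est:.2f} km, sd {var**0.5:.2f} km")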
Content is King, Leadership Lags: Effects of Prior Experience on Newcomer Retention and Productivity in Online Production Groups
Organizers of online groups often struggle to recruit members who can most effectively carry out the group’s activities and remain part of the group over time. In a study of a sample of 30,000 new editors belonging to 1,054 English WikiProjects, we empirically examine the effects of generalized prior work-productivity experience (measured by overall prior article edits), prior leadership experience (measured by overall prior project edits), and localized prior work-productivity experience (measured by pre-joining article edits on a project) on early retention and productivity. We find that (1) generalized prior work-productivity experience is positively associated with retention but negatively associated with productivity, (2) prior leadership experience is negatively associated with both retention and productivity, and (3) localized prior work-productivity experience is positively associated with both retention and productivity within that focal project. We then discuss implications to inform the design of early interventions aimed at group success.
Strategies for Engaging Communities in Creating Physical Civic Technologies
Despite widespread interest in civic technologies, empowering neighbourhoods to take advantage of these technologies in their local area remains challenging. This paper presents findings from the Ardler Inventors project, which aimed to understand how neighbourhoods can be supported in performing roles normally carried out by researchers and designers. We describe the end-to-end process of bringing people together around technology, designing and prototyping ideas, and ultimately testing several devices in their local area. Through this work, we explore different strategies for infrastructuring local residents’ participation with technology, including the use of hackathon-like intensive design events and pre-designed kits for assembly. We contribute findings relating to the ability of these strategies to support building communities around civic technology and the challenges that must be addressed.
Neuromechanics of a Button Press
To press a button, a finger must push down and pull up with the right force and timing. How the motor system succeeds in button-pressing, in spite of neural noise and without direct access to the button’s mechanism, is poorly understood. This paper investigates a unifying account based on neuromechanics. Mechanics is used to model the muscles controlling the finger that contacts the button. Neurocognitive principles are used to model how the motor system learns appropriate muscle activations over repeated strokes while relying on degraded sensory feedback. Neuromechanical simulations yield a rich set of predictions for kinematics, dynamics, and user performance and may aid in understanding and improving input devices. We present a computational implementation and evaluate predictions for common button types.
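To illustrate the mechanical half of such an account, the toy Python sketch below integrates a fingertip driven by a brief muscle force pulse against a button modeled as a spring-damper; all parameters are invented and this is not the paper's neuromechanical model.

    # Toy sketch: fingertip of mass m driven by a muscle force pulse against a
    # button modeled as a spring-damper; semi-implicit Euler integration (toy parameters).
    def simulate(force_pulse_n=2.0, pulse_ms=60, m=0.02, k=400.0, c=2.0,
                 activation_mm=1.5, dt=0.001, t_end=0.3):
        x, v = 0.0, 0.0          # displacement (m) and velocity of the key cap
        activated_at = None
        t = 0.0
        while t < t_end:
            drive = force_pulse_n if t < pulse_ms / 1000.0 else 0.0
            f = drive - k * x - c * v        # net force: muscle minus spring and damping
            v += (f / m) * dt
            x = max(0.0, x + v * dt)         # key cannot travel above its rest position
            if activated_at is None and x * 1000.0 >= activation_mm:
                activated_at = t
            t += dt
        return activated_at

    print("button activates at t =", simulate(), "s")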
Is it Happy?: Behavioural and Narrative Frame Complexity Impact Perceptions of a Simple Furry Robot’s Emotions
Critical to social human-robot interaction is a robot’s emotional richness, expressed within the parameters of its physical display. While emotion arousal is straightforward to convey, human valence (positivity) evaluations are famously ambiguous, whether we are assessing other humans or a robot. Imagine someone breathing raggedly: are they nervous, or excited? To assess the premise that irregular breathing connotes low valence (emotion negativity), we implemented different levels of breathing variability and complexity in simple furry robots. We asked 10 participants to watch and feel the behaviors, rate their valence, and explain their impressions. While a quantitative exploration of new and previous data showed correlation between multi-scale entropy and valence, the rich narratives revealed by thematic analysis of participant explanations call into question whether a single motion can, alone, be unambiguously valenced. Based on this evidence that people perceive robots as having inner lives, we recommend ways to build up narrative contexts over multiple interactions.
Designing a Reclamation of Body and Health: Cancer Survivor Tattoos as Coping Ritual
Historically, tattoos have been perceived as a mark of deviant behavior from the perspective of Western medicine. However, cancer survivor tattoos are one of many strategies used to recover from the trauma of cancer diagnosis and treatment. In this study, we seek to understand the significance of these tattoos in the context of survivorship. We interviewed 19 cancer survivors about their survivor tattoos, exploring the benefits of designing, discussing, and displaying these tattoos as elements of emotional recovery post-cancer. We found that the act of designing a survivor tattoo facilitated all three elements of post-traumatic growth: (1) a changed self-perception; (2) a changed sense of relationships with others; and (3) a changed philosophy of life. Through participants’ lived experiences, we discuss information about emotions, health, and recovery encoded in tattoos, and provide implications for tools to help future cancer survivors recover from the trauma of diagnosis and treatment.
Mismatch of Expectations: How Modern Learning Resources Fail Conversational Programmers
Conversational programmers represent a class of learners who are not required to write any code, yet try to learn programming to improve their participation in technical conversations. We carried out interviews with 23 conversational programmers to better understand the challenges they face in technical conversations, what resources they choose to learn programming, how they perceive the learning process, and to what extent learning programming actually helps them. Among our key findings, we found that conversational programmers often did not know where to even begin the learning process and ended up using formal and informal learning resources that focus largely on programming syntax and logic. However, since the end goal of conversational programmers was not to build artifacts, modern learning resources usually failed these learners in their pursuits of improving their technical conversations. Our findings point to design opportunities in HCI to invent learner-centered approaches that address the needs of conversational programmers and help them establish common ground in technical conversations.
An Experience Sampling Study of User Reactions to Browser Warnings in the Field
Web browser warnings should help protect people from malware, phishing, and network attacks. Adhering to warnings keeps people safer online. Recent improvements in warning design have raised adherence rates, but they could still be higher. And prior work suggests many people still do not understand them. Thus, two challenges remain: increasing both comprehension and adherence rates. To dig deeper into user decision making and comprehension of warnings, we performed an experience sampling study of web browser security warnings, which involved surveying over 6,000 Chrome and Firefox users in situ to gather reasons for adhering or not to real warnings. We find these reasons are many and vary with context. Contrary to older prior work, we do not find a single dominant failure in modern warning design—like habituation—that prevents effective decisions. We conclude that further improvements to warnings will require solving a range of smaller contextual misunderstandings.
Extracting Design Guidelines for Wearables and Movement in Tabletop Role-Playing Games via a Research Through Design Process
We believe that wearables and movement are a perfect fit for enhancing the tabletop role-playing game (TTRPG) experience, since they can provide embodied interaction, be perceived as character costumes, enhance ludic properties, and increase connectedness to the imaginary game worlds. By providing these improvements, they can increase the immersiveness and the player/character relationship that are critical for an ideal TTRPG experience. To investigate this underexplored area, we conducted an extensive research-through-design process which included (1) a participatory design workshop with 25 participants, (2) preliminary user tests with Wizard-of-Oz and experience prototypes with 15 participants, (3) production of a new game system with wearable and tangible artifacts, and (4) summative user tests with 16 participants to understand the effects on experience. As a result of our study, we extracted design guidelines for integrating wearables and movement into narrative-based tabletop games and communicate how the results of each phase affected our artifacts.
Exploring the Design of Tailored Virtual Reality Experiences for People with Dementia
Despite indications that recreational virtual reality (VR) experiences could be beneficial for people with dementia, this area remains unexplored in contrast to the body of work on neurological rehabilitation through VR in dementia. With recreational VR applications coming to the market for dementia, we must consider how VR experiences for people with dementia can be sensitively designed to provide comfortable and enriching experiences. Working with seven participants from a local dementia care charity, we outline some of the opportunities and challenges inherent to the design and use of VR experiences with people with dementia and their carers through an inductive thematic analysis. We also provide a series of future directions for work in VR and dementia: 1) careful physical design, 2) making room for sharing, 3) utilizing all senses, 4) personalization, and 5) ensuring the active inclusion of the person with dementia.
Project Zanzibar: A Portable and Flexible Tangible Interaction Platform
We present Project Zanzibar: a flexible mat that can locate, uniquely identify and communicate with tangible objects placed on its surface, as well as sense a user’s touch and hover hand gestures. We describe the underlying technical contributions: efficient and localised Near Field Communication (NFC) over a large surface area; object tracking combining NFC signal strength and capacitive footprint detection, and manufacturing techniques for a rollable device form-factor that enables portability, while providing a sizable interaction area when unrolled. In addition, we detail design patterns for tangibles of varying complexity and interactive capabilities, including the ability to sense orientation on the mat, harvest power, provide additional input and output, stack, or extend sensing outside the bounds of the mat. Capabilities and interaction modalities are illustrated with self-generated applications. Finally, we report on the experience of professional game developers building novel physical/digital experiences using the platform.
The RAD: Making Racing Games Equivalently Accessible to People Who Are Blind
We introduce the racing auditory display (RAD), an audio-based user interface that allows players who are blind to play the same types of racing games that sighted players can play with an efficiency and sense of control that are similar to what sighted players have. The RAD works with a standard pair of headphones and comprises two novel sonification techniques: the sound slider for understanding a car’s speed and trajectory on a racetrack and the turn indicator system for alerting players of the direction, sharpness, length, and timing of upcoming turns. In a user study with 15 participants (3 blind; the rest blindfolded and analyzed separately), we found that players preferred the RAD’s interface over that of Mach 1, a popular blind-accessible racing game. We also found that the RAD allows an avid gamer who is blind to race as well on a complex racetrack as casual sighted players can, without a significant difference between lap times or driving paths.
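In the spirit of the sound slider (though not the RAD's implementation), a sonification of this kind can be sketched as a mapping from the car's lateral offset to stereo pan and from speed to pitch; the constants below are illustrative only.

    # Sketch: map a car's lateral offset and speed to stereo pan and pitch, in the
    # spirit of a sonified "sound slider" (mapping constants are made up).
    def sound_slider(lateral_offset, speed_kmh, track_half_width=5.0,
                     base_hz=220.0, max_hz=880.0, top_speed=200.0):
        # Pan: -1.0 = hard left, +1.0 = hard right, proportional to track offset.
        pan = max(-1.0, min(1.0, lateral_offset / track_half_width))
        # Pitch: rises with speed between the base and maximum frequency.
        pitch = base_hz + (max_hz - base_hz) * min(1.0, speed_kmh / top_speed)
        return pan, pitch

    print(sound_slider(lateral_offset=2.5, speed_kmh=120.0))  # (0.5, 616.0)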
Inclusive Computing in Special Needs Classrooms: Designing for All
With a growing call for an increased emphasis on computing in school curricula, there is a need to make computing accessible to a diversity of learners. One potential approach is to extend the use of physical toolkits, which have been found to encourage collaboration, sustained engagement and effective learning in classrooms in general. However, little is known as to whether and how these benefits can be leveraged in special needs schools, where learners have a spectrum of distinct cognitive and social needs. Here, we investigate how introducing a physical toolkit can support learning about computing concepts for special education needs (SEN) students in their classroom. By tracing how the students’ interactions, both with the physical toolkit and with each other, unfolded over time, we demonstrate how the design of both the form factor and the learning tasks embedded in a physical toolkit contribute to collaboration, comprehension and engagement when learning in mixed SEN classrooms.
Caption Crawler: Enabling Reusable Alternative Text Descriptions using Reverse Image Search
Accessing images online is often difficult for users with vision impairments. This population relies on text descriptions of images that vary based on website authors’ accessibility practices. Where one author might provide a descriptive caption for an image, another might provide no caption for the same image, leading to inconsistent experiences. In this work, we present the Caption Crawler system, which uses reverse image search to find existing captions on the web and make them accessible to a user’s screen reader. We report our system’s performance on a set of 481 websites from alexa.com’s list of most popular sites to estimate caption coverage and latency, and also report blind and sighted users’ ratings of our system’s output quality. Finally, we conducted a user study with fourteen screen reader users to examine how the system might be used for personal browsing.
How Far Is Up?: Bringing the Counterpointed Triad Technique to Digital Storybook Apps
Interactive storybooks, such as those available on the iPad, offer multiple ways to convey a story, mostly through visual, textual and audio content. How to effectively deliver this combination of content so that it supports positive social and educational development in pre-literate children is relatively underexplored. In order to address this issue we introduce the “Counterpointed Triad Technique”. Drawing from traditional literary theory we design visual, textual and audio content that each conveys different aspects of a story. We explore the use of this technique through a storybook we designed ourselves called “How Far Is Up?”. A study involving 26 kindergarten children shows that “How Far Is Up?” can engage pre-literate children while they are reading alone and also when they are reading with an adult. Based on our craft knowledge and study findings, we present a set of design strategies that aim to provide designers with practical guidance on how to create engaging interactive digital storybooks.
Understanding the Accessibility of Smartphone Photography for People with Motor Impairments
We present the results of an exploration to understand the accessibility of smartphone photography for people with motor impairments. We surveyed forty-six people and interviewed twelve people about capturing, editing, and sharing photographs on smartphones. We found that people with motor impairments encounter many challenges with smartphone photography, resulting in users capturing fewer photographs than they would like. Participants described various strategies they used to overcome challenges in order to capture a quality photograph. We also found that photograph quality plays a large role in deciding which photographs users share and how often they share, with most participants rating their photographs as average or poor quality compared to photos shared on their social networks. Additionally, we created design probes of two novel photography interfaces and received feedback from our interview participants about their usefulness and functionality. Based on our findings, we propose design recommendations for how to improve the accessibility of mobile photoware for people with motor impairments.
Ohmic-Touch: Extending Touch Interaction by Indirect Touch through Resistive Objects
When an object is interposed between a touch surface and a finger or touch pen, the change in impedance caused by the object can be measured by the driver software. This phenomenon has been used to develop new interaction techniques. Unlike previous works that focused on the capacitance component of impedance, Ohmic-Touch enhances the touch input modality by sensing resistance. Using 3D printers or inkjet printers with conductive materials and off-the-shelf electronic components/sensors, resistance is easily and precisely controllable. We implement mechanisms on touch surfaces based on the electrical resistance of the object: for example, to sense the touch position on an interposed object, to identify each object, and to sense light, force, or temperature by using resistors and sensors. Additionally, we conduct experimental studies demonstrating that our technique recognizes resistance values with 97% accuracy.
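As a rough illustration of resistance-based identification (not Ohmic-Touch's sensing circuit), the sketch below converts a voltage-divider reading into a resistance estimate and matches it to the nearest known object; the reference resistor, supply voltage, and object resistances are hypothetical.

    # Sketch: estimate an interposed object's resistance from a voltage-divider
    # reading and match it to the nearest known object (all values hypothetical).
    KNOWN_OBJECTS = {"token A": 1_000.0, "token B": 4_700.0, "token C": 10_000.0}  # ohms

    def estimate_resistance(v_out, v_in=3.3, r_ref=2_200.0):
        """Unknown resistor below a reference resistor: Vout = Vin * R / (R + Rref)."""
        return r_ref * v_out / (v_in - v_out)

    def identify(v_out):
        r = estimate_resistance(v_out)
        return min(KNOWN_OBJECTS, key=lambda name: abs(KNOWN_OBJECTS[name] - r))

    print(identify(2.25))  # closest to token B under these assumptions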
TeleHuman2: A Cylindrical Light Field Teleconferencing System for Life-size 3D Human Telepresence
For telepresence to support the richness of multiparty conversations, it is important to convey motion parallax and stereoscopy without head-worn apparatus. TeleHuman2 is a “hologrammatic” telepresence system that conveys full-body 3D video of interlocutors using a human-sized cylindrical light field display. For rendering, the system uses an array of projectors mounted above the heads of participants in a ring around a retroreflective cylinder. Unique angular renditions are calculated from streaming depth video captured at the remote location. Projected images are retro-reflected into the eyes of local participants at 1.3° intervals, providing angular renditions simultaneously for the left and right eyes of all onlookers, which conveys motion parallax and stereoscopy without head-worn apparatus or head tracking. Our technical evaluation of the angular accuracy of the system demonstrates that the error in judging the angle of a remote arrow object represented in TeleHuman2 is within 1 degree, and not significantly different from similar judgments of a collocated arrow object.
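As a quick worked check of the stated geometry, 1.3° spacing around a full cylinder implies roughly 277 distinct angular renditions; the small sketch below does only this arithmetic plus an illustrative viewer-angle lookup, and is not the system's rendering code.

    # Worked arithmetic: 1.3-degree spacing around a full cylinder implies roughly
    # 360 / 1.3 ≈ 277 distinct angular renditions; the index a viewer at a given
    # angle sees is just that angle divided by the spacing (illustrative only).
    SPACING_DEG = 1.3
    num_renditions = round(360.0 / SPACING_DEG)

    def rendition_index(viewer_angle_deg):
        return int(viewer_angle_deg % 360.0 / SPACING_DEG)

    print(num_renditions, rendition_index(90.0))  # 277, 69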
Co-designing Mobile Online Safety Applications with Children
Parents use mobile monitoring software to observe and restrict their children’s activities in order to minimize the risks associated with Internet-enabled mobile devices. As children are stakeholders in such technologies, recent research has called for their inclusion in its design process. To investigate children’s perceptions of parental mobile monitoring technologies and explore their interaction preferences, we held two co-design sessions with 12 children ages 7-12. Children first reviewed and redesigned an existing mobile monitoring application. Next, they designed ways children could use monitoring software when they encounter mobile risks (e.g., cyberbullying, inappropriate content). Results showed that children acknowledged safety needs and accepted certain parental controls. They preferred and designed controls that emphasized restriction over monitoring, taught risk coping, promoted parent-child communication, and automated interactions. Our results benefit designers looking to develop parental mobile monitoring technologies in ways that children will both accept and can actively benefit from.
Intellingo: An Intelligible Translation Environment
Translation environments offer various translation aids to support professional translators. However, translation aids typically provide only limited justification for the translation suggestions they propose. In this paper we present Intellingo, a translation environment that explores intelligibility for translation aids, to enable more sensible usage of translation suggestions. We performed a comparative study between an intelligible version and a non-intelligible version of Intellingo. The results show that although adding intelligibility does not necessarily result in significant changes to the user experience, translators can better assess translation suggestions without a negative impact on their performance. Intelligibility is preferred by translators when the additional information it conveys benefits the translation process and when this information is not part of the translator’s readily available knowledge.
Thor’s Hammer: An Ungrounded Force Feedback Device Utilizing Propeller-Induced Propulsive Force
We present a new handheld haptic device, Thor’s Hammer, which uses propeller propulsion to generate ungrounded, 3-DOF force feedback. Thor’s Hammer has six motors and propellers that generate strong thrusts of air without the need for physical grounding or heavy air compressors. With its location and orientation tracked by an optical tracking system, the device can exert forces in arbitrary directions regardless of its orientation. Our technical evaluation shows that Thor’s Hammer can apply up to 4 N of force in arbitrary directions, with average magnitude and orientation errors of less than 0.11 N and 3.9°, respectively. We also present virtual reality applications that can benefit from the force feedback provided by Thor’s Hammer. Using these applications, we conducted a preliminary user study in which participants found the experience more realistic and immersive with the force feedback.
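One generic way to realize arbitrary-direction forces from fixed thrusters, offered here as a sketch rather than the paper's controller, is non-negative least-squares allocation over the propellers' thrust directions; the direction matrix below is made up.

    # Sketch: allocate a desired 3D force across six fixed propeller thrust
    # directions with non-negative least squares (directions are made up).
    import numpy as np
    from scipy.optimize import nnls

    # Columns: unit thrust direction of each propeller in the device frame.
    DIRECTIONS = np.array([
        [ 1, -1,  0,  0,  0,  0],
        [ 0,  0,  1, -1,  0,  0],
        [ 0,  0,  0,  0,  1, -1],
    ], dtype=float)

    def allocate(desired_force):
        """Return per-propeller thrusts (>= 0) whose combination best matches desired_force."""
        thrusts, residual = nnls(DIRECTIONS, np.asarray(desired_force, dtype=float))
        return thrusts, residual

    thrusts, err = allocate([2.0, -1.0, 0.5])
    print(np.round(thrusts, 2), round(err, 6))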
Transforming Last-mile Logistics: Opportunities for more Sustainable Deliveries
Road congestion, air pollution and sustainability are increasingly important in major cities. We look to understand how last-mile deliveries in the parcel sector are impacting our roads. Using formative field work and quantitative analysis of consignment manifests and location data, we identify how the effectiveness of life-style couriers is contributing to both environmental and non-environmental externalities. This paper presents an analysis of delivery performances and practices in last-mile logistics in central London, quantifying the impacts differing levels of experience have on overall round efficiency. We identify eleven key opportunities for technological support for last-mile parcel deliveries that could contribute to both driver effectiveness and sustainability. We finish by examining how HCI can lead to improved environmental and social justice by re-considering and realizing future collaborative visions in last-mile logistics.
CraftML: 3D Modeling is Web Programming
We explore web programming as a new paradigm for programmatic 3D modeling. Most existing approaches subscribe to the imperative programming paradigm; while useful, they leave a gulf of evaluation between procedural steps and the intended structure. We present CraftML, a language providing a declarative syntax where the code is the structure. CraftML offers a rich set of programming features familiar to web developers of all skill levels, such as tags, hyperlinks, the document object model, cascading style sheets, jQuery, string interpolation, a template engine, data injection, and scalable vector graphics. We develop an online IDE to support CraftML development, with features such as live preview, search, module import, and parameterization. Using examples and case studies, we demonstrate that CraftML offers a low floor for beginners to make simple designs, a high ceiling for experts to build complex computational models, and wide walls to support many application domains such as education, data physicalization, tactile graphics, assistive devices, and mechanical components.
ForceBoard: Subtle Text Entry Leveraging Pressure
We present ForceBoard, a pressure-based input technique that enables text entry by subtle finger motion. To enter text, users apply pressure to control a multi-letter-wide sliding cursor on a one-dimensional keyboard with alphabetical ordering, and confirm the selection with a quick release. We examined the error model of pressure control for successive and error-tolerant input, which was incorporated into a Bayesian algorithm to infer user input. A user study showed that, after a 10-minute training, the average text entry rate reached 4.2 wpm (words per minute) for character-level input, and 11.0 wpm for word-level input. Users reported that ForceBoard was easy to learn and interesting to use. These results demonstrate the feasibility of applying pressure as the main channel for text entry. We conclude by discussing the limitations, as well as the potential, of ForceBoard to support interaction under constraints of form factor, social concerns, and physical environments.
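The Bayesian step can be illustrated with a toy decoder that combines a Gaussian pressure-error likelihood with a prior over letters; the pressure levels, noise model, and uniform prior below are assumptions for illustration, not ForceBoard's actual model.

    # Sketch: infer the intended letter from a noisy pressure reading by combining
    # a Gaussian error model with a simple letter prior (toy numbers).
    import math
    import string

    LETTERS = string.ascii_lowercase
    TARGET_PRESSURE = {c: (i + 1) / 26.0 for i, c in enumerate(LETTERS)}  # normalized levels
    PRIOR = {c: 1.0 / 26.0 for c in LETTERS}  # uniform prior; a language model would go here

    def posterior(observed_pressure, sigma=0.04):
        scores = {}
        for c in LETTERS:
            err = observed_pressure - TARGET_PRESSURE[c]
            likelihood = math.exp(-0.5 * (err / sigma) ** 2)
            scores[c] = likelihood * PRIOR[c]
        total = sum(scores.values())
        return {c: s / total for c, s in scores.items()}

    post = posterior(0.31)  # a reading near the level assigned to 'h'
    print(max(post, key=post.get))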
PageFlip: Leveraging Page-Flipping Gestures for Efficient Command and Value Selection on Smartwatches
Selecting an item of interest on smartwatches can be tedious and time-consuming as it involves a series of swipe and tap actions. We present PageFlip, a novel method that combines into a single action multiple touch operations such as command invocation and value selection for efficient interaction on smartwatches. PageFlip operates with a page flip gesture that starts by dragging the UI from a corner of the device. We first design PageFlip by examining its key design factors such as corners, drag directions and drag distances. We next compare PageFlip to a functionally equivalent radial menu and a standard swipe and tap method. Results reveal that PageFlip improves efficiency for both discrete and continuous selection tasks. Finally, we demonstrate novel smartwatch interaction opportunities and a set of applications that can benefit from PageFlip.
Inpher: Inferring Physical Properties of Virtual Objects from Mid-Air Interaction
We present Inpher, a virtual reality system for setting the physical properties of virtual objects using mid-air interaction. Users simply grasp virtual objects and mimic their desired physical movement; the physical properties required to fulfill that movement are then inferred directly from the motion. We provide a 3D user interface that does not require users to have an abstract model of physical properties. Our approach leverages users’ real-world experiences with physics. We conducted a bodystorming session to investigate users’ mental models of physics. Based on our iterative design process, we implemented techniques for inferring mass, bounciness and friction. We conducted a case study with 15 participants with varying levels of physics education. The results indicate that users are capable of demonstrating the required interactions and can achieve satisfying results.
Making Core Memory: Design Inquiry into Gendered Legacies of Engineering and Craftwork
This paper describes the Making Core Memory project, a design inquiry into the invisible work that went into assembling core memory, an early form of computer information storage initially woven by hand. Drawing on feminist traditions of situated knowing, we designed an electronic quilt and a series of participatory workshops that materialize the work of the core memory weavers. With this case we not only broaden dominant stories of design, but we also reflect on the entanglement of predominantly male, high-status labor with the ostensibly low-status work of women’s hands. By integrating design and archival research as a means of cultural analysis, we further expand conversations on design research methods within human-computer interaction (HCI), using design to reveal legacies of practice elided by contemporary technology cultures. In doing so, this paper highlights for HCI scholars that the worlds of handwork and computing, or weaving and space travel, are not as separate as we might imagine them to be.
Augmenting Code with In Situ Visualizations to Aid Program Understanding
Programmers must draw explicit connections between their code and runtime state to properly assess the correctness of their programs. However, debugging tools often decouple the program state from the source code and require explicitly invoked views to bridge the rift between program editing and program understanding. To unobtrusively reveal runtime behavior during both normal execution and debugging, we contribute techniques for visualizing program variables directly within the source code. We describe a design space and placement criteria for embedded visualizations. We evaluate our in situ visualizations in an editor for the Vega visualization grammar. Compared to a baseline development environment, novice Vega users improve their overall task grade by about 2 points when using the in situ visualizations and exhibit significant positive effects on their self-reported speed and accuracy.
Pointing All Around You: Selection Performance of Mouse and Ray-Cast Pointing in Full-Coverage Displays
As display environments become larger and more diverse – now often encompassing multiple walls and room surfaces – it is becoming more common that users must find and manipulate digital artifacts not directly in front of them. There is little understanding, however, about what techniques and devices are best for carrying out basic operations above, behind, or to the side of the user. We conducted an empirical study comparing two main techniques that are suitable for full-coverage display environments: mouse-based pointing, and ray-cast ‘laser’ pointing. Participants completed search and pointing tasks on the walls and ceiling, and we measured completion time, path lengths and perceived effort. Our study showed a strong interaction between performance and target location: when the target position was not known a priori, the mouse was fastest for targets on the front wall, but ray-casting was faster for targets behind the user. Our findings provide new empirical evidence that can help designers choose pointing techniques for full-coverage spaces.
The Dark (Patterns) Side of UX Design
Interest in critical scholarship that engages with the complexity of user experience (UX) practice is rapidly expanding, yet the vocabulary for describing and assessing criticality in practice is currently lacking. In this paper, we outline and explore the limits of a specific ethical phenomenon known as “dark patterns,” where user value is supplanted in favor of shareholder value. We assembled a corpus of examples of practitioner-identified dark patterns and performed a content analysis to determine the ethical concerns contained in these examples. This analysis revealed a wide range of ethical issues raised by practitioners that were frequently conflated under the umbrella term of dark patterns, while also underscoring a shared concern that UX designers could easily become complicit in manipulative or unreasonably persuasive practices. We conclude with implications for the education and practice of UX designers, and a proposal for broadening research on the ethics of user experience.
Charrette: Supporting In-Person Discussions around Iterations in User Interface Design
As a rule, user interface designers work iteratively. Over the course of a project, they repeatedly gather feedback, typically through in-person meetings, and update their designs accordingly. Through formative work, we find that design software tools do not support designers in managing meeting notes and previous design iterations as a cohesive whole. This causes designers to rely on ad-hoc practices for organizing work, which makes it hard for them to keep track of relevant feedback and explain their design decisions. To address this problem, we present Charrette, a system that allows designers to curate design iterations, attach meeting notes to the relevant content, and navigate sequences of design iterations with the associated notes to facilitate in-person discussions. In an exploratory user study, we evaluate how Charrette affects designers’ self-reported ease in handling feedback during face-to-face discussions, compared with using their own tools. We find that using Charrette correlates with increased confidence and recall in discussing previous design decisions.
Design Within a Patriarchal Society: Opportunities and Challenges in Designing for Rural Women in Bangladesh
This paper examines the opportunities and issues that arise in designing technologies to support low-income rural women in Bangladesh. Through a qualitative, empirical study with 90 participants, we reveal systemic everyday challenges that women face that form the backdrop against which technology design could potentially happen. We discuss how technology is already impacting women’s lives, sometimes by reinforcing their subservient role in society and sometimes used tactically by women to gain a measure of agency. The issues raised by our participants concerning technology’s place in their lives provide HCI researchers with valuable guidance about what might (or might not) be appropriate to design for them. We also show how prevalent HCI research and design strategies may fit more poorly than expected into rural women’s lives, and we discuss possible alternative design directions, and the ethical and pragmatic trade-offs that they entail. Our contribution is not to “solve” the problem of designing for low-income rural women, but to expand the HCI community’s understanding of technology design within deeply patriarchal societies.
This App Would Like to Use Your Current Location to Better Serve You: Importance of User Assent and System Transparency in Personalized Mobile Services
Modern mobile apps aim to provide personalized services without appearing intrusive. A common strategy is to let the user initiate the service request (e.g., “click here to receive coupons for your favorite products”), a practice known as “overt personalization.” Another strategy is to assuage users’ privacy concerns by being transparent about how their data will be collected, utilized, and stored. To test these two strategies, we conducted a 2 (Personalization: Overt vs. Covert) x 2 (Transparency: High vs. Low) factorial experiment, with a fifth control condition. Participants (N=302) interacted with GreenByMe, a prototype of an eco-friendly mobile application. Data show that overt personalization affects perceived control. A significant three-way interaction between power usage, perceived overt personalization, and perceived information transparency was seen on perceived ease of use, trust in the app, user engagement, and behavioral intention to use the app in the future. In addition, results reveal that perceived information transparency also promotes trust, which is negatively linked with privacy concerns and positively correlated with user engagement and product involvement.
Pictures Worth a Thousand Words: Reflections on Visualizing Personal Blood Glucose Forecasts for Individuals with Type 2 Diabetes
Type 2 Diabetes Mellitus (T2DM) is a common chronic condition that requires management of one’s lifestyle, including nutrition. Critically, patients often lack a clear understanding of how everyday meals impact their blood glucose. New predictive analytics approaches can provide personalized mealtime blood glucose forecasts. However, communicating such forecasts can be challenging, and effective strategies for doing so remain largely unexplored. In this study, we conducted focus groups with 13 participants to identify approaches to visualizing personalized blood glucose forecasts that can promote diabetes self-management, and to understand key styles and visual features that resonate with individuals with diabetes. The focus groups demonstrated that individuals rely on simple heuristics and tend to take a reactive approach to their health and nutrition management. Further, the study highlighted the need for simple and explicit, yet information-rich, designs. Effective visualizations were found to utilize common metaphors alongside words, numbers, and colors to convey a sense of authority and encourage action and learning.
Pac-Many: Movement Behavior when Playing Collaborative and Competitive Games on Large Displays
Previous work has shown that large high-resolution displays (LHRDs) can enhance collaboration between users. As LHRDs allow free movement in front of the screen, an understanding of movement behavior is required to build successful interfaces for these devices. This paper presents Pac-Many, a multiplayer version of the classic computer game Pac-Man, designed to study group dynamics when using LHRDs. We utilized smartphones as game controllers to enable free movement while playing the game. In a lab study using a 4m × 1m LHRD, 24 participants (12 pairs) played Pac-Many in collaborative and competitive conditions. The results show that players in the collaborative condition divided the screen space evenly. In contrast, competing players stood closer together to avoid giving the other player an advantage. We discuss how the nature of the task is important when designing and analyzing collaborative interfaces for LHRDs. Our work shows how to account for the spatial aspects of interaction with LHRDs to build immersive experiences.
BebeCODE: Collaborative Child Development Tracking System
Continuously tracking young children’s development is important for parents because early detection of developmental delay can lead to better treatment through early intervention. Screening tests, often based on questions answered by a parent, are used to assess children’s development, but responses from only one parent can be subjective and even inaccurate due to limited memory and observations. In this work, we propose a collaborative child development tracking system, in which screening test responses are collected through collaboration between parents or caregivers. We implement BebeCODE, a mobile system that encourages parents to independently answer all developmental questions for a given age and resolve disagreements through chatting, image/video sharing, or asking a third person. A 4-week deployment study of BebeCODE with 12 families found that parents disagreed on approximately 22% of the questions regarding their children’s development, and that BebeCODE helped them reach a consensus. Parents also reported that their awareness of their child’s development increased with BebeCODE.
Beyond the Libet Clock: Modality Variants for Agency Measurements
The Sense of Agency (SoA) refers to our capability to control our own actions and influence the world around us. Recent research in HCI has been investigating SoA to provide users with an instinctive sense of “I did that” as opposed to “the system did that”. However, current agency measurements are limited. The Intentional Binding (IB) paradigm provides an implicit measure of the SoA; however, it is constrained by requiring high visual attention to a “Libet clock” on-screen. In this paper, we extended the timing stimuli through auditory and tactile cues. Our results demonstrate that audio timing through voice commands and haptic timing through tactile cues on the hand are effective alternative measures of the SoA using the IB paradigm. Both address limitations of the traditional method, such as visual attention overload and lack of engagement. We discuss how our results can be applied to measure SoA in tasks involving different interactive scenarios, such as Mixed/Virtual Reality.
PokeRing: Notifications by Poking Around the Finger
Smart-rings are ideal for subtle and always-available haptic notifications due to their direct contact with the skin. Previous researchers have highlighted the feasibility of haptic technology in smart-rings and its promise in delivering noticeable stimulations by poking a limited set of planar locations on the finger. However, the full potential of poking as a mechanism to deliver richer and more expressive information on the finger has been overlooked. Through three studies with a total of 76 participants, we informed the design of PokeRing, a smart-ring capable of delivering information by stimulating eight different locations around the index finger’s proximal phalanx. We report our evaluation of the performance of PokeRing in semi-realistic wearable conditions (standing and walking), and its effective usage for information transfer with twenty-one spatio-temporal patterns designed by six interaction designers in a workshop. Finally, we present three applications that exploit PokeRing’s notification capabilities.
Forgotten But Not Gone: Identifying the Need for Longitudinal Data Management in Cloud Storage
Users have accumulated years of personal data in cloud storage, creating potential privacy and security risks. This agglomeration includes files retained or shared with others simply out of momentum, rather than intention. We presented 100 online-survey participants with a stratified sample of 10 files currently stored in their own Dropbox or Google Drive accounts. We asked about the origin of each file, whether the participant remembered that file was stored there, and, when applicable, about that file’s sharing status. We also recorded participants’ preferences moving forward for keeping, deleting, or encrypting those files, as well as adjusting sharing settings. Participants had forgotten that half of the files they saw were in the cloud. Overall, 83% of participants wanted to delete at least one file they saw, while 13% wanted to unshare at least one file. Our combined results suggest directions for retrospective cloud data management.
A Functional Optimization Based Approach for Continuous 3D Retargeted Touch of Arbitrary, Complex Boundaries in Haptic Virtual Reality
Passive or actuated physical props can provide haptic feedback, leading to a satisfying sense of presence and realism in virtual reality. However, a mismatch between the physical and virtual surfaces (boundaries) can diminish the user experience. Haptic retargeting can overcome this limitation by exploiting visuo-haptic effects. Previous investigations of haptic retargeting have focused on methods for point-based position retargeting and techniques for remapping 2D shapes or simple 3D shape changes. Our approach extends haptic retargeting to complex, arbitrary shapes and provides a continuous mapping across all points on a boundary; it also allows for multi-finger interaction. We describe a functional optimization that finds an ideal spatial warping function under different goals: maximizing mapping smoothness, minimizing the mismatch between the real and virtual worlds, or a combination of the two. We report on a preliminary user study of the different optimization goals and elaborate potential applications through a set of demonstrations.
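As an illustration of how such an optimization might be posed (the paper’s exact formulation may differ), a warping function can be scored by a weighted sum of a smoothness term over the interaction volume and a mismatch term between warped virtual boundary points and their physical counterparts:

```latex
% Illustrative objective only; the paper's exact formulation may differ.
% w : spatial warping function, Omega : interaction volume,
% S : virtual boundary, phi(p) : corresponding point on the physical prop,
% lambda_1, lambda_2 : weights trading smoothness against mismatch.
\min_{w}\;
\lambda_{1}\underbrace{\int_{\Omega}\bigl\lVert \nabla^{2} w(x) \bigr\rVert^{2}\,dx}_{\text{mapping smoothness}}
\;+\;
\lambda_{2}\underbrace{\int_{S}\bigl\lVert w(p)-\phi(p) \bigr\rVert^{2}\,dp}_{\text{real--virtual mismatch}}
```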
Sense of Presence, Attitude Change, Perspective-Taking and Usability in First-Person Split-Sphere 360° Video
This paper examines the sense of presence, attitude change, perspective-taking, and usability of a split-sphere, first-person perspective 360 degree video about gender inequality, in which viewers can choose to watch the narrative from the male or the female character’s perspective. Sixty-seven participants were randomly assigned to watch (1) the video in 360 degree split-view in a head-mounted display (HMD), (2) the same film in 180 degrees in an HMD, or (3) a flat control version of the video on a laptop. The 360 degree split-sphere increased viewers’ feeling of personal responsibility for resolving gender inequality, desire to rewatch the video, fear of missing out, and feeling of missing the full story. The 180 degree video created the strongest sense of presence, embodiment, and understanding of the character. However, people with greater egocentric projection onto the male character felt less responsible for resolving gender inequality, particularly in the 360 degree split-view.
“I Hear You”: Understanding Awareness Information Exchange in an Audio-only Workspace
Graphical displays are a typical means for conveying awareness information in groupware systems to help users track joint activities, but are not ideal when vision is constrained. Understanding how people maintain awareness through non-visual means is crucial for designing effective alternatives for supporting awareness in such situations. We present a lab study simulating an extreme scenario where 32 pairs of participants use an audio-only tool to edit shared audio menus. Our aim is to characterise collaboration in this audio-only space in order to identify whether and how, by itself, audio can mediate collaboration. Our findings show that the means for audio delivery and choice of working styles in this space influence types and patterns of awareness information exchange. We thus highlight the need to accommodate different working styles when designing audio support for awareness, and extend previous research by identifying types of awareness information to convey in response to group work dynamics.
User-Driven Design Principles for Gesture Representations
Many recent studies have explored user-defined interactions for touch and gesture-based systems through end-user elicitation. While these studies have facilitated the user-end of the human-computer dialogue, the subsequent design of gesture representations to communicate gestures to the user varies in style and consistency. Our study explores how users interpret, enact, and refine gesture representations, adapting techniques from recent elicitation studies. To inform our study design, we analyzed gesture representations from 30 elicitation papers and developed a taxonomy of design elements. We then conducted a partnered elicitation study with 30 participants, producing 657 gesture representations accompanied by think-aloud data. We discuss design patterns and themes that emerged from our analysis, and supplement these findings with an in-depth look at users’ mental models when perceiving and enacting gesture representations. Finally, based on the results, we provide recommendations for practitioners in need of “visual language” guidelines to communicate possible user actions.
To Put That in Perspective: Generating Analogies that Make Numbers Easier to Understand
Laypeople are frequently exposed to unfamiliar numbers published by journalists, social media users, and algorithms. These figures can be difficult for readers to comprehend, especially when they are extreme in magnitude or contain unfamiliar units. Prior work has shown that adding “perspective sentences” that employ ratios, ranks, and unit changes to such measurements can improve people’s ability to understand unfamiliar numbers (e.g., “695,000 square kilometers is about the size of Texas”). However, there are many ways to provide context for a measurement. In this paper we systematically test what factors influence the quality of perspective sentences through randomized experiments involving over 1,000 participants. We develop a statistical model for generating perspectives and test it against several alternatives, finding beneficial effects of perspectives on comprehension that persist for six weeks. We conclude by discussing future work in deploying and testing perspectives at scale.
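The ratio-based flavor of perspective generation can be sketched in a few lines. The snippet below is a hypothetical illustration, not the authors’ statistical model: it picks, from a small hand-made table of familiar areas, the reference whose ratio to the input measurement is closest to one and phrases the comparison as a sentence.

```python
# Hypothetical sketch of ratio-based "perspective" generation; the reference
# table and selection rule are illustrative, not the authors' model.

REFERENCES_KM2 = {            # a few familiar areas, in square kilometers
    "Texas": 695_662,
    "California": 423_967,
    "New York City": 778,
}

def perspective(value_km2, unit="square kilometers"):
    """Pick the reference whose ratio to the value is closest to 1
    and phrase it as a comparison sentence."""
    name, ref = min(REFERENCES_KM2.items(),
                    key=lambda kv: abs(value_km2 / kv[1] - 1))
    ratio = value_km2 / ref
    if 0.9 <= ratio <= 1.1:
        phrase = f"about the size of {name}"
    elif ratio > 1:
        phrase = f"about {ratio:.1f} times the size of {name}"
    else:
        phrase = f"about {ratio:.0%} of the size of {name}"
    return f"{value_km2:,} {unit} is {phrase}."

print(perspective(695_000))
# -> "695,000 square kilometers is about the size of Texas."
```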
Addressing Network Anxieties with Alternative Design Metaphors
Optimism and positivity permeate discourses of smart interactive network technologies. Yet we do not have to look too far or too deep to find anxieties knotting up on the horizon and festering below the network’s glistening surface. This paper contributes a set of concepts, tactics, and novel design forms for addressing network anxieties generated through a design-led inquiry, or research through design approach. We present three technically grounded metaphors illustrated with examples selected from our exploratory design process. Weaving together concepts from surveillance studies, cultural studies, and other areas of the humanities with our visual and physical design work, we help draw attention to under-addressed concerns within HCI while proposing alternative ways of framing and engaging design issues arising with network technologies.
Supporting Communication between Grandparents and Grandchildren through Tangible Storytelling Systems
Grandparents and grandchildren that live apart often rely on communication technologies, such as messengers, video conferencing, and phone calls for maintaining relationships. While some of these systems are challenging for grandparents, others are less engaging for children. To facilitate communication, we developed StoryBox, a tangible device that allows sharing photos, tangible artifacts, and audio recordings of everyday life. We conducted a preliminary study with two families to identify design issues, and further refine the prototype. Subsequently, we conducted a field study with four families for up to four weeks to better understand real-world use and examine inter-generational connectedness. We found that StoryBox was accessible, simple, and helped bridge the technological gap between grandparents and grandchildren. Children communicated asynchronously in a playful and idiosyncratic manner, and grandparents shared past family memories. We provide insights on how to ease communication between different generations, engage them in sharing activities, and strengthen family relationships.
Tangible Tens: Evaluating a Training of Basic Numerical Competencies with an Interactive Tabletop
Basic numerical competencies developed in kindergarten form the foundations of math achievement, which underlines the importance of early interventions in the case of numerical difficulties. Building on research on math manipulatives and tangible interfaces, we developed a training program for basic numerical competencies using an interactive tabletop in combination with physical LEGO-like blocks. In an experiment, we evaluated the effectiveness of the training on children’s learning of the partner number concept, basic numerical competencies, and number line estimation, compared to a content-wise similar training with physical manipulatives and a human tutor. We observed significant increases in children’s understanding of the partner number concept and basic numerical competencies in both training conditions, but no differential training effects. As children can play on the interactive surface with reasonable autonomy, it seems to offer a low-threshold way to enrich kindergarten education on numerical concepts.
“You Can Always Do Better!”: The Impact of Social Proof on Participant Response Bias
Evaluations of technological artifacts in HCI4D contexts are known to suffer from high levels of participant response bias—where participants only provide positive feedback that they think will please the researcher. This paper describes a practical, low-cost intervention that uses the concept of social proof to influence participant response bias and successfully elicit critical feedback from study participants. We subtly exposed participants to feedback that they perceived to be provided by people ‘like them’, and experimentally controlled the tone and content of the feedback to provide either positive, negative, or no social proof. We then measured how participants’ quantitative and qualitative evaluations of an HCI artifact changed based on the feedback to which they were exposed. We conducted two controlled experiments: an online experiment with 245 MTurk workers and a field experiment with 63 women in rural India. Our findings reveal significant differences between participants in the positive, negative, and no social proof conditions, both online and in the field. Participants in the negative condition provided lower ratings and a greater amount of critical feedback, while participants in the positive condition provided higher ratings and a greater amount of positive feedback. Taken together, our findings demonstrate that social proof is a practical and generalizable technique that could be used by HCI researchers to influence participant response bias in a wide range of contexts and domains.
Teaching Language to Deaf Infants with a Robot and a Virtual Human
Children with insufficient exposure to language during critical developmental periods in infancy are at risk for cognitive, language, and social deficits [55]. This is especially difficult for deaf infants, as more than 90% are born to hearing parents with little sign language experience [48]. We created an integrated multi-agent system involving a robot and a virtual human designed to augment language exposure for 6-12 month old infants. Human-machine design for infants is challenging, as most screen-based media are unlikely to support learning in infancy [33]. Robots are presently incapable of the dexterity and expressiveness required for signing, and even if such a robot existed, developmental questions would remain about the capacity of language from artificial agents to engage infants. Here we engineered the robot and avatar to provide visual language to effect socially contingent human conversational exchange. We demonstrate the successful engagement of our technology through case studies of deaf and hearing infants.
ExtraSensory App: Data Collection In-the-Wild with Rich User Interface to Self-Report Behavior
We introduce a mobile app for collecting in-the-wild data, including sensor measurements and self-reported labels describing people’s behavioral context (e.g., driving, eating, in class, shower). Labeled data is necessary for developing context-recognition systems that serve health monitoring, aging care, and more. Acquiring labels without observers is challenging, and previous solutions compromised ecological validity, the range of behaviors, or the amount of data. Our user interface combines past and near-future self-reporting of combinations of relevant context labels. We deployed the app on the personal smartphones of 60 users and analyzed quantitative data collected in-the-wild and qualitative user-experience reports. The interface’s flexibility was important to gain frequent, detailed labels, support diverse behavioral situations, and engage different users: most preferred reporting their past behavior through a daily journal, but some preferred reporting what they were about to do. We integrated insights from this work back into the app, which we make available to researchers for conducting in-the-wild studies.
Collaborative Live Media Curation: Shared Context for Participation in Online Learning
In recent years, online education’s reach and scale have increased through new platforms for large and small online courses. However, these platforms often rely on impoverished modalities, which provide limited support for participation in social learning experiences. We present Collaborative Live Media Curation (CLMC), a new medium for sharing context and participation in online learning. CLMC involves the collaborative, synchronous collection, creation, and assemblage of web media, including images, text, video, and sketch. CLMC integrates live media including streaming video, screenshares, audio, and text chat. We deployed and studied LiveMâché, a CLMC technology probe, in four situated online learning contexts. We discovered student and instructor strategies for sharing context and participating, including creating curations in advance, sketching to illustrate and gesture, real-time transformations, sharing perspective, and assembling live streams. We develop implications through live experience patterns, which describe how spatial and computing structures support social activities.
Measuring, Understanding, and Classifying News Media Sympathy on Twitter after Crisis Events
This paper investigates bias in coverage between Western and Arab media on Twitter after the November 2015 Beirut and Paris terror attacks. Using two Twitter datasets covering each attack, we investigate how Western and Arab media differed in coverage bias, sympathy bias, and resulting information propagation. We crowdsourced sympathy and sentiment labels for 2,390 tweets across four languages (English, Arabic, French, German), built a regression model to characterize sympathy, and thereafter trained a deep convolutional neural network to predict sympathy. Key findings show that: (a) both events were disproportionately covered; (b) Western media exhibited less sympathy overall, and each region’s media was more sympathetic towards the country affected in its own region; (c) sympathy predictions supported the ground-truth analysis that Western media was less sympathetic than Arab media; and (d) sympathetic tweets did not spread any further. We discuss our results in light of global news flow, Twitter affordances, and the impact on public perception.
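For readers unfamiliar with the supervised set-up such work relies on, the toy sketch below shows only the shape of the pipeline; the tweets and labels are invented, and the paper itself uses crowdsourced annotations, a regression model, and a deep convolutional network rather than this bag-of-words baseline.

```python
# Baseline sketch only, not the paper's pipeline: a toy bag-of-words
# classifier on invented example tweets (labels: 1 = sympathetic, 0 = not).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "Our hearts go out to the victims and their families",
    "Praying for everyone affected tonight",
    "Traffic update: road closures downtown this evening",
    "New phone announced at the press event today",
]
labels = [1, 1, 0, 0]  # toy labels standing in for crowdsourced annotations

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["Thoughts are with the families affected"]))  # likely [1]
```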
Defining Through Expansion: Conducting Asynchronous Remote Communities (ARC) Research with Stigmatized Groups
Researchers in HCI have typically relied on face to face (FtF) methods for recruitment and data collection in their research with people living with HIV, whereas social scientists have adopted computer-mediated approaches to address concerns about data validity and access to this stigmatized population. In this paper, we use the asynchronous remote community (ARC) research method to leverage HCI instruments in an online format. ARC successfully engaged people living with HIV in terms of participation and retention by providing a safe space to discuss their experiences. By expanding on past ARC studies, we contribute to an ongoing conversation about defining ARC and working towards increased data validity — especially in stigmatized communities.
ActiveErgo: Automatic and Personalized Ergonomics using Self-actuating Furniture
Proper ergonomics improves productivity and reduces the risk of injuries such as tendinosis, tension neck syndrome, and back injuries. Despite the existence of ergonomics standards and guidelines for computer usage since the 1980s, injuries due to poor ergonomics remain widespread. We present ActiveErgo, the first active approach to improving ergonomics by combining sensing and actuation of motorized furniture. It provides automatic and personalized ergonomics for computer workspaces in accordance with recommended ergonomics guidelines. Our prototype system uses a Microsoft Kinect sensor for skeletal sensing and monitoring to determine the ideal furniture positions for each user, then uses a combination of automatic adjustment and real-time feedback to adjust the computer monitor, desk, and chair positions. Results from our 12-person user study demonstrate that ActiveErgo significantly improves ergonomics compared to manual configuration in both speed and accuracy, and helps significantly more users fully meet ergonomics guidelines.
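A hypothetical sketch of the sensing-to-target step is shown below. It is not ActiveErgo’s implementation: the rules (desk at elbow height, top of the monitor near eye height, seat at knee height) are commonly cited ergonomics guidance, and the joint heights stand in for output from a skeletal tracker such as the Kinect.

```python
# Illustrative sketch, not ActiveErgo's implementation: derive target furniture
# heights from tracked joint positions using commonly cited ergonomics rules.
# Joint heights are assumed to come from a skeletal tracker such as a Kinect.

def target_positions(joints_m):
    """joints_m: dict of joint name -> height above the floor, in meters."""
    return {
        "chair_seat_height": joints_m["knee"],          # feet flat, knees ~90 degrees
        "desk_height": joints_m["elbow"],               # forearms roughly level
        "monitor_top_height": joints_m["eye"] - 0.02,   # top of screen just below eye level
    }

# Example with plausible seated joint heights for one user (meters).
sensed = {"knee": 0.46, "elbow": 0.68, "eye": 1.18}
for part, height in target_positions(sensed).items():
    print(f"{part}: {height:.2f} m")
```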
Presenting The Accessory Approach: A Start-up’s Journey Towards Designing An Engaging Fall Detection Device
This paper explores a design experiment concerning the development of a personalised and engaging wearable fall detection device customised for care home residents. The design experiment focuses on a start-up company’s design process, which utilises a new design approach, which I name the accessory approach, to accommodate the cultural fit requirements of a wearer. Influenced by accessory design, which belongs neither to fashion nor to jewellery, the accessory approach is a way of designing wearables that addresses both functional and expressive qualities, including the wearer’s physical, psychological, and social needs. The accessory approach proved able to provide first-hand insight into the wearer’s preferences, leading to in-depth knowledge and enhanced iterative processes that supported the design of a customised device. This type of knowledge is important for the HCI community as it brings accessory design disciplines into play when seeking to understand and design for individual needs, creating engaging wearable designs.
What Moves Players?: Visual Data Exploration of Twitter and Gameplay Data
In recent years, microblogging platforms have not only become an important communication channel for the game industry to generate and uphold audience interest but also a rich resource for gauging player opinion. In this paper we use data gathered from Twitter to examine which topics matter to players and to identify influential members of a game’s community. By triangulating in-game data with Twitter activity we explore how tweets can provide contextual information for understanding fluctuations in in-game activity. To facilitate analysis of the data we introduce a visual data exploration tool and use it to analyze tweets related to the game Destiny. In total, we collected over one million tweets from about 250,000 users over a 14-month period and gameplay data from roughly 3,500 players over a six-month period.
Veritaps: Truth Estimation from Mobile Interaction
We introduce the concept of Veritaps: a communication layer to help users identify truths and lies in mobile input. Existing lie detection research typically uses features not suitable for the breadth of mobile interaction. We explore the feasibility of detecting lies across all mobile touch interaction using sensor data from commodity smartphones. We report on three studies in which we collect discrete, truth-labelled mobile input using swipes and taps. The studies demonstrate the potential of using mobile interaction as a truth estimator by employing features such as touch pressure and the inter-tap details of number entry. In our final study, we report an F1-score of .98 for classifying truths and .57 for lies. Finally, we sketch three potential future scenarios for using lie detection in mobile applications: as a security measure during online log-in, a trust layer during online sale negotiations, and a tool for exploring self-deception.
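To make the classification set-up concrete, the sketch below trains a classifier on synthetic touch features and reports per-class F1 scores. It is illustrative only: the features, data, and model are stand-ins, not the Veritaps pipeline.

```python
# Illustrative sketch, not the Veritaps pipeline: classify "truth" vs "lie"
# from simple touch features (mean pressure, inter-tap interval) on synthetic
# data, and report per-class F1 as in the paper's evaluation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
pressure = rng.normal(0.5, 0.1, n)
inter_tap = rng.normal(0.30, 0.05, n)
# Synthetic labels loosely tied to the features (1 = truth, 0 = lie).
y = ((pressure + rng.normal(0, 0.05, n)) > 0.5).astype(int)
X = np.column_stack([pressure, inter_tap])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("F1 (truth):", round(f1_score(y_te, pred, pos_label=1), 2))
print("F1 (lie):  ", round(f1_score(y_te, pred, pos_label=0), 2))
```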
How the Experts Do It: Assessing and Explaining Agent Behaviors in Real-Time Strategy Games
How should an AI-based explanation system explain an agent’s complex behavior to ordinary end users who have no background in AI? Answering this question is an active research area, for if an AI-based explanation system could effectively explain intelligent agents’ behavior, it could enable the end users to understand, assess, and appropriately trust (or distrust) the agents attempting to help them. To provide insights into this question, we turned to human expert explainers in the real-time strategy domain –“shoutcasters”– to understand (1) how they foraged in an evolving strategy game in real time, (2) how they assessed the players’ behaviors, and (3) how they constructed pertinent and timely explanations out of their insights and delivered them to their audience. The results provided insights into shoutcasters’ foraging strategies for gleaning information necessary to assess and explain the players; a characterization of the types of implicit questions shoutcasters answered; and implications for creating explanations by using the patterns and abstraction levels these human experts revealed.
A Study of Urban Heat: Understanding the Challenges and Opportunities for Addressing Wicked Problems in HCI
The Urban Heat Island Effect (UHI) is a phenomenon whereby cities tend to be hotter than suburbs. We frame the UHI as a “wicked problem” that poses a range of economic, healthcare, and social challenges. Our paper examines how different stakeholders negotiate complex value systems, collect data, and rely on collaborative platforms to address the problem of urban heat. Using documentary filmmaking as a research method, we conducted ethnographically-oriented interviews with participants including vulnerable communities, urban architects, microclimate researchers, and grassroots activists. Our findings reveal that unlike problems that can be solved using traditional HCI paradigms of distributed work, the UHI presents an entanglement of challenges that do not necessarily converge on a single solution. We conclude by discussing two opportunities for addressing wicked problems through social computing: knowledge systems for sharing hybrid data across domains and interactive forums for discourse among diverse actors.
It’s a Wrap: Mapping On-Skin Input to Off-Skin Displays
Advances in sensing technologies allow for using the forearm as a touch surface to give input to off-skin displays. However, it is unclear how users perceive the mapping between an on-skin input area and an off-skin display area. We empirically describe such mappings to improve on-skin interaction. We collected discrete and continuous touch data in a study where participants mapped display content from an AR headset, a smartwatch, and a desktop display to their forearm. We model those mappings and estimate input accuracy from the spreads of touch data. Subsequently, we show how to use the models for designing touch surfaces to the forearm for a given display area, input type, and touch resolution.
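One simple way to model such a mapping, shown purely as an illustration (the data and the affine-mapping assumption are ours, not the paper’s), is to fit a least-squares affine transform from forearm touch coordinates to display coordinates and read an accuracy estimate off the residual spread:

```python
# Illustrative sketch only: fit an affine mapping from forearm touch points to
# display coordinates by least squares, and use the residual spread as a rough
# estimate of attainable input accuracy. The data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
arm_xy = rng.uniform(0, 1, size=(50, 2))                 # touches on the forearm (normalized)
true_A, true_b = np.array([[1.8, 0.1], [0.0, 1.5]]), np.array([0.2, 0.4])
display_xy = arm_xy @ true_A.T + true_b + rng.normal(0, 0.03, size=(50, 2))

# Solve display = [arm, 1] @ M in the least-squares sense.
design = np.hstack([arm_xy, np.ones((50, 1))])
M, *_ = np.linalg.lstsq(design, display_xy, rcond=None)

residuals = display_xy - design @ M
print("per-axis residual SD (display units):", residuals.std(axis=0).round(3))
```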
Designing in the Dark: Eliciting Self-tracking Dimensions for Understanding Enigmatic Disease
The design of personal health informatics tools has traditionally been explored in self-monitoring and behavior change. There is an unmet opportunity to leverage individuals’ self-tracking to study diseases and health conditions and to learn patterns across groups. An open research question, however, is how to design engaging self-tracking tools that also facilitate learning at scale. Furthermore, for conditions that are not well understood, a critical question is how to design such tools when it is unclear which data types are relevant to the disease. We outline the process of identifying design requirements for self-tracking endometriosis, a highly enigmatic and prevalent disease, through interviews (N=3), focus groups (N=27), surveys (N=741), and content analysis of an online endometriosis community (1500 posts, N=153 posters), and we show the value of triangulating across these methods. Finally, we discuss tensions inherent in designing self-tracking tools for individual use and population analysis, making suggestions for overcoming these tensions.
Examining Wikipedia With a Broader Lens: Quantifying the Value of Wikipedia’s Relationships with Other Large-Scale Online Communities
The extensive Wikipedia literature has largely considered Wikipedia in isolation, outside of the context of its broader Internet ecosystem. Very recent research has demonstrated the significance of this limitation, identifying critical relationships between Google and Wikipedia that are highly relevant to many areas of Wikipedia-based research and practice. This paper extends this recent research beyond search engines to examine Wikipedia’s relationships with large-scale online communities, Stack Overflow and Reddit in particular. We find evidence of consequential, albeit unidirectional relationships. Wikipedia provides substantial value to both communities, with Wikipedia content increasing visitation, engagement, and revenue, but we find little evidence that these websites contribute to Wikipedia in return. Overall, these findings highlight important connections between Wikipedia and its broader ecosystem that should be considered by researchers studying Wikipedia. Critically, our results also emphasize the key role that volunteer-created Wikipedia content plays in improving other websites, even contributing to revenue generation.
Identifying Speech Input Errors Through Audio-Only Interaction
Speech has become an increasingly common means of text input, from smartphones and smartwatches to voice-based intelligent personal assistants. However, reviewing the recognized text to identify and correct errors is a challenge when no visual feedback is available. In this paper, we first quantify and describe the speech recognition errors that users are prone to miss, and investigate how to better support this error identification task by manipulating pauses between words, speech rate, and speech repetition. To achieve these goals, we conducted a series of four studies. Study 1, an in-lab study, showed that participants missed identifying over 50% of speech recognition errors when listening to audio output of the recognized text. Building on this result, Studies 2 to 4 were conducted using an online crowdsourcing platform and showed that adding a pause between words improves error identification compared to no pause, the ability to identify errors degrades with higher speech rates (300 WPM), and repeating the speech output does not improve error identification. We derive implications for the design of audio-only speech dictation.
Designing the Audience Journey through Repeated Experiences
We report on the design, premiere and public evaluation of a multifaceted audience interface for a complex non-linear musical performance called Climb! which is particularly suited to being experienced more than once. This interface is designed to enable audiences to understand and appreciate the work, and integrates a physical instrument and staging, projected visuals, personal devices and an online archive. A public premiere concert comprising two performances of Climb! revealed how the audience reoriented to the second performance through growing understanding and comparison to the first. Using trajectories as an analytical framework for the audience ‘journey’ made apparent: how the trajectories of a single performance are embedded within the larger trajectories of a concert and the creative work as a whole; the distinctive demands of understanding and interpretation; and the potential of the archive in enabling appreciation across repeated performances.
Printed Paper Actuator: A Low-cost Reversible Actuation and Sensing Method for Shape Changing Interfaces
We present a printed paper actuator as a low cost, reversible and electrical actuation and sensing method. This is a novel but easily accessible enabling technology that expands upon the library of actuation-sensing materials in HCI. By integrating three physical phenomena, including the bilayer bending actuation, the shape memory effect of the thermoplastic and the current-driven joule heating via conductive printing filament, we developed the actuator by simply printing a single layer conductive Polylactide (PLA) on a piece of copy paper via a desktop fused deposition modeling (FDM) 3D printer. This paper describes the fabrication process, the material mechanism, and the transformation primitives, followed by the electronic sensing and control methods. A software tool that assists the design, simulation and printing toolpath generation is introduced. Finally, we explored applications under four contexts: robotics, interactive art, entertainment and home environment.
Leveraging Semantic Transformation to Investigate Password Habits and Their Causes
It is no secret that users have difficulty choosing and remembering strong passwords, especially when asked to choose different passwords across different accounts. While research has shed light on password weaknesses and reuse, less is known about user motivations for following bad password practices. Understanding these motivations can help us design better interventions that work with the habits of users and not against them.
We present a comprehensive user study in which we both collect and analyze users’ real passwords and the reasoning behind their password habits. This enables us to contrast the users’ actual behaviors with their intentions. We find that user intent often mismatches practice, and that this, coupled with some misconceptions and convenience, fosters bad password habits. Our work is the first to show the discrepancy between user intent and practice when creating passwords, and to investigate how users trade off security for memorability.
Impact Activation Improves Rapid Button Pressing
The activation point of a button is defined as the depth at which it invokes a make signal. Regular buttons are activated during the downward stroke, which occurs within the first 20 ms of a press. The remaining portion, which can be as long as 80 ms, has not been examined for button activation because of mechanical limitations. This paper presents Impact Activation, a technique in which the button is activated at its maximal impact point, together with empirical evidence for its benefits. We argue that this technique is advantageous particularly in rapid, repetitive button pressing, which is common in gaming and music applications. We report on a study of rapid button pressing in which users’ timing accuracy improved significantly with the use of Impact Activation. The technique can be implemented for modern push-buttons and capacitive sensors that generate a continuous signal.
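The core of the technique can be sketched as a peak search over the continuously sampled press signal. The snippet below is an assumed, simplified illustration rather than the paper’s implementation; the signal shape and noise threshold are made up.

```python
# Sketch of the idea (assumed signal shape, not the paper's implementation):
# given a continuously sampled press signal, fire the "make" event at the
# sample where the press reaches its maximal impact rather than early in the
# downward stroke.

def impact_activation_index(signal, noise_floor=0.05):
    """Return the index of the maximal impact point of one press.

    signal: per-sample press depth/force values for a single button press.
    noise_floor: values below this are treated as 'not pressed'.
    """
    pressed = [i for i, v in enumerate(signal) if v > noise_floor]
    if not pressed:
        return None                      # no press detected
    start, end = pressed[0], pressed[-1]
    press = signal[start:end + 1]
    return start + max(range(len(press)), key=press.__getitem__)

# A toy press sampled at 1 kHz: ramp up, peak impact, release.
press_signal = [0.0, 0.1, 0.4, 0.7, 0.95, 1.0, 0.9, 0.6, 0.2, 0.0]
print("activate at sample:", impact_activation_index(press_signal))  # -> 5
```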
MindNavigator: Exploring the Stress and Self-Interventions for Mental Wellness
Mental wellness is a desirable health outcome for students. However, current personal informatics systems do not adequately support students in creating concrete mental health-related goals and turning them into actionable plans. In this paper, we introduce MindNavigator – a workshop in which groups of college students were invited to generate behavioral change goals to manage daily life stress and practice personalized interventions for two weeks. We describe the manner in which participants identified both stressors and pleasures to create actionable, engaging, and open-ended behavioral plans that aided in stress relief. We found that the social nature of the workshop helped participants understand themselves and execute self-intervention in new ways. Through this practice, we build on prior studies to propose an analytical framework of personal informatics for mental wellness.
“Only if you use English you will get to more things”: Using Smartphones to Navigate Multilingualism
We contribute to the intersection of multilingualism and human-computer interaction (HCI) with our investigation of language preferences in the context of the interface design of interactive systems. Through interview data collected from avid smartphone users located across distinct user groups in India, none of whom were native English speakers, we examine the factors that shape language choice and use on their mobile devices. Our findings indicate that these users frequently engage in English communication proactively and enthusiastically, despite their lack of English fluency, and we detail their motivations for doing so. We then discuss how language in technology use can be a way of putting forth mobility as an aspect of one’s identity, making the case for an intersectional approach to studying language in HCI.
The Problem of Community Engagement: Disentangling the Practices of Municipal Government
In this paper, we work to inform the growing space of Digital Civics with a qualitative study of community engagement practices across the breadth of municipal departments and agencies in a large US city. We conducted 34 interviews across 15 different departments, including elected and professional city employees, to understand how different domains within local government define and practice the work of engaging residents. Our interviews focused on how respondents conceptualized community engagement, how it fit into their other forms of work, and what kinds of outcomes they sought when they did ‘engagement.’ By reporting on this broad qualitative account of the many forms the work of community engagement takes in local government, we contribute to an expansive view of digital civics that looks beyond the transactions of service delivery or the privileged moments of democratic ritual, to consider the wider terrain of mundane, daily challenges in bridging between municipal government and city residents.
Using Co-Design to Examine How Children Conceptualize Intelligent Interfaces
Prior work has shown that intelligent user interfaces (IUIs) that use modalities such as speech, gesture, and writing pose challenges for children due to their developing cognitive and motor skills. Research has focused on improving recognition and accuracy by accommodating children’s specific interaction behaviors. Understanding children’s expectations of IUIs is also important to decrease the impact of recognition errors that occur. To understand children’s conceptual model of IUIs, we completed four consecutive participatory design sessions on designing IUIs with an emphasis on error detection and correction. We found that, while children think of interactive systems in terms of both user input and behavior and system output and behavior, they also propose ideas that require advanced system intelligence, e.g., context and conversation. Our work contributes new understanding of how children conceptualize IUIs and new methods for error detection and correction, and will inform the design of future IUIs for children to improve their experience.
Rethinking Engagement with Online News through Social and Visual Co-Annotation
The emergence of fake news, as well as filter bubbles and echo chambers, has precipitated renewed attention to the ways in which news is consumed, shared, reflected upon, and commented on. While online news comments sections offer space for pluralist and critical discussion, studies suggest that this rarely occurs. Motivated by common practices of annotating, defacing, and scribbling on physical newspapers, we built Newsr, a mobile app that supports co-annotation, in the form of graffiti, on online news articles, and evaluated it in-the-wild for one month. We report on how the app encouraged participants to reflect upon the act of choosing news stories, whilst promoting exploration, the critique of content, and the exposure of bias within the writing. Our findings highlight how the re-design of interactive online news experiences can facilitate more directed, “in-the-moment” critique of online news stories as well as encourage readers to expand the range of news content they read.
Season Traveller: Multisensory Narration for Enhancing the Virtual Reality Experience
In the same way that we experience the real-world through a range of senses, experiencing a virtual environment through multiple sensory modalities may augment both our presence within a scenario and our reaction to it. In this paper, we present Season Traveller, a multisensory virtual reality (VR) narration of a journey through four seasons within a mystical realm. By adding olfactory and haptic (thermal and wind) stimuli, we extend traditional audio-visual VR technologies to achieve enhanced sensory engagement within interactive experiences. Using both subjective measures of presence and elicited physiological responses, we evaluated the impact of different modalities on the virtual experience. Our results indicate that 1) the addition of any singular modality improves sense of presence with respect to traditional audio-visual experiences and 2) providing a combination of these modalities produces a further significant enhancement over the aforementioned improvements. Furthermore, insights into participants’ psychophysiology were extrapolated from electrodermal activity (EDA) and heart rate (HR) measurements during each of the VR experiences.
Playing with Streakiness in Online Games: How Players Perceive and React to Winning and Losing Streaks in League of Legends
Streakiness refers to an observed tendency towards consecutive appearances of a particular pattern. In video games, streakiness is often inevitable: a player keeps winning or losing for a short period. However, the phenomenon remains understudied in current research on online games. How do players perceive streakiness? How does it impact player experience (PX)? How should streakiness be taken into account in designing for PX? In this paper, we address these questions through a qualitative study of player discussions about streakiness in League of Legends. We found that players developed various ways to describe a streak. Both winning and losing streaks negatively impacted PX. Players devised numerous strategies to manage streakiness, among which disengagement was a primary means. We analyze streakiness as a social construct through which players coped with complex game systems. We discuss design implications for managing streakiness in online games.
RoMA: Interactive Fabrication with Augmented Reality and a Robotic 3D Printer
We present the Robotic Modeling Assistant (RoMA), an interactive fabrication system that provides a fast, precise, hands-on, and in-situ modeling experience. As a designer creates a new model in the RoMA AR CAD editor, features are constructed concurrently by a 3D printing robotic arm sharing the same design volume. The partially printed physical model then serves as a tangible reference for the designer as she adds new elements to her design. RoMA’s proxemics-inspired handshake mechanism between the designer and the 3D printing robotic arm allows the designer to quickly interrupt printing to access a printed area, or to indicate that the robot can take full control of the model to finish printing. RoMA lets users integrate real-world constraints into a design rapidly, allowing them to create well-proportioned tangible artifacts or to extend existing objects. We conclude by presenting the strengths and limitations of our current design.
Visualizing API Usage Examples at Scale
Using existing APIs properly is a key challenge in programming, given that libraries and APIs are increasing in number and complexity. Programmers often search for online code examples in Q&A forums and read tutorials and blog posts to learn how to use a given API. However, there are often a massive number of related code examples and it is difficult for a user to understand the commonalities and variances among them, while being able to drill down to concrete details. We introduce an interactive visualization for exploring a large collection of code examples mined from open-source repositories at scale. This visualization summarizes hundreds of code examples in one synthetic code skeleton with statistical distributions for canonicalized statements and structures enclosing an API call. We implemented this interactive visualization for a set of Java APIs and found that, in a lab study, it helped users (1) answer significantly more API usage questions correctly and comprehensively and (2) explore how other programmers have used an unfamiliar API.
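The mining step behind such a visualization can be approximated with a rough sketch: canonicalize each statement by masking identifiers and literals, then count how often each canonical form occurs across snippets. The code below is an illustration under those assumptions, not the authors’ tool; the Java snippets and the regular expressions are invented for the example.

```python
# Rough sketch of the mining step, not the authors' tool: canonicalize
# statements from mined snippets (mask variable names and string literals)
# and count how often each canonical statement appears across snippets,
# which is the kind of distribution the visualization summarizes.
import re
from collections import Counter

snippets = [
    ['BufferedReader r = new BufferedReader(new FileReader("a.txt"));',
     'String line = r.readLine();',
     'r.close();'],
    ['BufferedReader br = new BufferedReader(new FileReader("b.txt"));',
     'String s = br.readLine();',
     'br.close();'],
]

def canonicalize(stmt):
    stmt = re.sub(r'"[^"]*"', '"<STR>"', stmt)                       # mask string literals
    stmt = re.sub(r'\b(?!new\b)[a-z]\w*\b(?!\s*\()', '<VAR>', stmt)  # mask non-call identifiers
    return stmt

counts = Counter(canonicalize(s) for snip in snippets for s in snip)
for stmt, n in counts.most_common():
    print(f"{n}/{len(snippets)}  {stmt}")
```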
A Multi-site Investigation of Community Awareness Through Passive Location Sharing
Local community ties are an important social resource, but research shows that these ties have been declining. The social significance of location information offers an opportunity to address this decline and support local community building. Through this research, we aim to understand if and how passive location sharing might be socially beneficial for communities. We conducted a deployment of MoveMeant, a location awareness app, across three different communities. Following a research through design approach, we conducted 45 interviews with users of the system and community leaders. The findings suggest that communities face issues related to lack of awareness, cohesion, and identity. We show that the app can help increase awareness of important community resources. At the same time, the findings also show a negative effect of surfacing divisions in a community, which we discuss as an intermediate, perceptual step that may contribute to the amplification effect of technology.
Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda
Advances in artificial intelligence, sensors and big data management have far-reaching societal impacts. As these systems augment our everyday lives, it becomes increasingly important for people to understand them and remain in control. We investigate how HCI researchers can help to develop accountable systems by performing a literature analysis of 289 core papers on explanations and explainable systems, as well as 12,412 citing papers. Using topic modeling, co-occurrence and network analysis, we mapped the research space from diverse domains, such as algorithmic accountability, interpretable machine learning, context-awareness, cognitive psychology, and software learnability. We reveal fading and burgeoning trends in explainable systems, and identify domains that are closely connected or mostly isolated. The time is ripe for the HCI community to ensure that the powerful new autonomous systems have intelligible interfaces built-in. From our results, we propose several implications and directions for future research towards this goal.
TopicOnTiles: Tile-Based Spatio-Temporal Event Analytics via Exclusive Topic Modeling on Social Media
Detecting anomalous events in a particular area in a timely manner is an important task. Geo-tagged social media data are a useful resource for this task; however, the abundance of everyday language in them makes the task challenging. To address these challenges, we present TopicOnTiles, a visual analytics system that can reveal information relevant to anomalous events in a multi-level tile-based map interface using social media data. To this end, we adopt and improve a recently proposed topic modeling method that can extract spatio-temporally exclusive topics corresponding to a particular region and time point. Furthermore, we use a tile-based map interface to efficiently handle large-scale data in parallel. Our user interface highlights anomalous tiles using a novel glyph visualization that encodes the degree of anomaly computed by our exclusive topic modeling process. To show the effectiveness of our system, we present several usage scenarios using real-world datasets as well as comprehensive user study results.
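The intuition behind spatio-temporal exclusivity can be conveyed with a much simpler score than the paper’s topic model: the fraction of a term’s overall occurrences that fall inside one (tile, time) cell. The sketch below uses invented posts and is not the system’s actual method.

```python
# Simplified illustration, not the paper's exclusive topic modeling: score how
# strongly a term is concentrated in one (tile, hour) cell relative to its
# overall usage, which is the intuition behind surfacing "exclusive" topics.
from collections import Counter, defaultdict

# (tile_id, hour, text) records standing in for geo-tagged posts.
posts = [
    ("tile_3", 18, "fire trucks everywhere near the station"),
    ("tile_3", 18, "huge fire downtown, smoke everywhere"),
    ("tile_7", 18, "nice dinner tonight"),
    ("tile_7", 9,  "morning coffee and traffic"),
    ("tile_3", 9,  "quiet morning, coffee"),
]

cell_counts = defaultdict(Counter)
global_counts = Counter()
for tile, hour, text in posts:
    for term in text.lower().split():
        cell_counts[(tile, hour)][term] += 1
        global_counts[term] += 1

def exclusivity(cell, term):
    """Fraction of all occurrences of `term` that fall inside `cell`."""
    return cell_counts[cell][term] / global_counts[term]

print(exclusivity(("tile_3", 18), "fire"))    # 1.0 -> exclusive to that tile and hour
print(exclusivity(("tile_3", 18), "coffee"))  # 0.0 -> not characteristic of the cell
```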
In a New Land: Mobile Phones, Amplified Pressures and Reduced Capabilities
Framed within the theoretical lens of positive and negative security, this paper presents a study of newcomers to Sweden and the roles of mobile phones in the establishment of a new life. Using creative engagement methods through a series of workshops, two researchers engaged 70 adult participants enrolled into further education colleges in Sweden. Group narratives about mobile phone use were captured in creative outputs, researcher observations and notes and were analysed using thematic analysis. Key findings show that the mobile phone offers security for individuals and a safe space for newcomers to establish a new life in a new land as well as capitalising on other spaces of safety, such as maintaining old ties. This usage produces a series of threats and vulnerabilities beyond traditional technological security thinking related to mobile phone use. The paper concludes with recommendations for policies and support strategies for those working with newcomers.
Lessons from the Woodshop: Cultivating Design with Living Materials
This paper describes an eighteen-month ethnography of timber framing at a tiny house construction program in Port Townsend, Washington. This case exposes the intricate, ongoing processes that define a project where people learn to imagine, create, and ultimately maintain living materials. This case sheds light on the nature and scope of interaction design with living materials, an area of growing significance to HCI scholarship on new materials, sustainable design, and digital fabrication. Drawing from this project, we distill five lessons for design with living, finite materials. We end by discussing three emerging areas for HCI: designing for material recuperation, collaborating with more-than-human actors, and approaching material properties as prototyping sites.
Squadbox: A Tool to Combat Email Harassment Using Friendsourced Moderation
Communication platforms have struggled to provide effective tools for people facing harassment online. We conducted interviews with 18 recipients of online harassment to understand their strategies for coping, finding that they often resorted to asking friends for help. Inspired by these findings, we explore the feasibility of friendsourced moderation as a technique for combating online harassment. We present Squadbox, a tool to help recipients of email harassment coordinate a “squad” of friend moderators to shield and support them during attacks. Friend moderators intercept email from strangers and can reject, organize, and redirect emails, as well as collaborate on filters. Squadbox is designed to let its users implement highly customized workflows, as we found in interviews that harassment and preferences for mitigating it vary widely. We evaluated Squadbox on five pairs of friends in a field study, finding that participants could comfortably navigate around privacy and personalization concerns.
Hoarding and Minimalism: Tendencies in Digital Data Preservation
Digital data, from texts to files and mobile applications, has become a pervasive component of our society. With seemingly unlimited storage in the cloud at their disposal, how do people approach data preservation, deciding what to keep and discard? We interviewed 23 participants with diverse backgrounds, asking them about their perceived digital data: what “stuff” they kept through the years, why, how they used it, and what they considered important. In an iterative analysis process, we uncovered a spectrum of tendencies that drive preservation strategies, with two extremes: hoarding (where participants accumulated large amounts of data, even if considered of little value) and minimalism (where they kept as little as possible, regularly cleaning their data). We contrast and compare the two extremes of the spectrum, characterize their nuanced nature, and discuss how our categorization compares to previously reported behaviors such as filing and piling, email cleaners and keepers. We conclude with broad implications for shaping technology.
Mercury: A Messaging Framework for Modular UI Components
In recent years, the entity–component–system pattern has become a fundamental feature of the software architectures of game-development environments such as Unity and Unreal, which are used extensively in developing 3D user interfaces. In these systems, UI components typically respond to events, requiring programmers to write application-specific callback functions. In some cases, components are organized in a hierarchy that is used to propagate events among vertically connected components. When components need to communicate horizontally, programmers must connect those components manually and register/unregister events as needed. Moreover, events and callback signatures may be incompatible, making modular UIs cumbersome to build and share within or across applications. To address these problems, we introduce a messaging framework, Mercury, to facilitate communication among components. We provide an overview of Mercury, outline its underlying protocol and how it propagates messages to responders using relay nodes, describe a reference implementation in Unity, and present example systems built using Mercury to explain its advantages.
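The abstract gives only a high-level view of Mercury's relay-based message propagation, and its reference implementation is in Unity/C#. The Python sketch below is a generic, hypothetical illustration of the idea rather than the paper's protocol: components register as responders on relay nodes, and linked relays forward messages so horizontally connected components can communicate without hand-wired callbacks. The class and method names are invented for illustration.

```python
from collections import defaultdict

class RelayNode:
    """Rough sketch of relay-based message propagation between UI components
    (not Mercury's actual protocol): components register handlers for a message
    type on a relay node, and relays forward messages to linked relays so that
    components without direct references to each other can still communicate."""

    def __init__(self, name):
        self.name = name
        self.responders = defaultdict(list)   # message type -> handler callables
        self.linked_relays = []

    def register(self, message_type, handler):
        self.responders[message_type].append(handler)

    def link(self, other):
        self.linked_relays.append(other)

    def send(self, message_type, payload, _visited=None):
        visited = _visited if _visited is not None else set()
        if self.name in visited:
            return                            # avoid loops when relays link both ways
        visited.add(self.name)
        for handler in self.responders[message_type]:
            handler(payload)
        for relay in self.linked_relays:
            relay.send(message_type, payload, visited)

# Two UI components on different relays communicate without direct references.
slider_relay, label_relay = RelayNode("slider"), RelayNode("label")
slider_relay.link(label_relay)
label_relay.register("value_changed", lambda v: print(f"label shows {v:.2f}"))
slider_relay.send("value_changed", 0.75)
```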
Characterizing Finger Pitch and Roll Orientation During Atomic Touch Actions
Atomic interactions in touch interfaces, like tap, drag, and flick, are well understood in terms of interaction design, but less is known about their physical performance characteristics. We carried out a study to gather baseline data about finger pitch and roll orientation during atomic touch input actions. Our results show differences in orientation and range for different fingers, hands, and actions, and we analyse the effect of tablet angle. Our data provides designers and researchers with a new resource to better understand what interactions are possible in different settings (e.g. when using the left or right hand), to design novel interaction techniques that use orientation as input (e.g. using finger tilt as an implicit mode), and to determine whether new sensing techniques are feasible (e.g. using fingerprints for identifying specific finger touches).
Extending Manual Drawing Practices with Artist-Centric Programming Tools
Procedural art, or art made with programming, suggests opportunities to extend traditional arts like painting and drawing; however, this potential is limited by tools that conflict with manual practices. Programming languages present learning barriers and manual drawing input is not a first class primitive in common programming models. We hypothesize that by developing programming languages and environments that align with how manual artists work, we can build procedural systems that enhance, rather than displace, manual art. To explore this, we developed Dynamic Brushes, a programming and drawing environment motivated by interviews with artists. Dynamic Brushes enables the creation of ad-hoc drawing tools that transform stylus inputs to procedural patterns. Applications range from transforming individual strokes to behaviors that draw multiple strokes simultaneously, respond to temporal events, and leverage external data. Results from an extended evaluation with artists provide guidelines for learnable, expressive systems that blend manual and procedural creation.
Sensing Interruptibility in the Office: A Field Study on the Use of Biometric and Computer Interaction Sensors
Knowledge workers experience many interruptions during their work day. Especially when they happen at inopportune moments, interruptions can incur high costs, cause time loss and frustration. Knowing a person's interruptibility allows optimizing the timing of interruptions and minimizing disruption. Recent advances in technology provide the opportunity to collect a wide variety of data on knowledge workers to predict interruptibility. While prior work predominantly examined interruptibility based on a single data type and in short lab studies, we conducted a two-week field study with 13 professional software developers to investigate a variety of computer interaction, heart-, sleep-, and physical activity-related data. Our analysis shows that computer interaction data is more accurate in predicting interruptibility at the computer than biometric data (74.8% vs. 68.3% accuracy), and that combining both yields the best results (75.7% accuracy). We discuss our findings and their practical applicability, also in light of the collected qualitative data.
Customizing Developmentally Situated Design (DSD) Cards: Informing Designers about Preschoolers’ Spatial Learning
To date, the developmental needs and abilities of children under 4 years old have been insufficiently taken into account in the early stages of technology design. Bekker and Antle [6] created developmentally situated design (DSD) cards as a design tool to inform children's technology designers about children's development starting from 5 years of age. In this paper, we describe how we customized DSD cards for a specific developmental skill (i.e., spatial learning) of children between 2 and 4 years old for tangible interaction design. The cards were evaluated in a user study in which 19 participants from different backgrounds used the cards in three design workshops. Our analysis of observational notes and an online survey identifies and discusses how specific card features support or limit use by our participants. We draw on our findings to set forth design considerations and possible refinements that enable age-specific knowledge about very young children's spatial learning to inform technologies based on tangible interaction.
Framed Guessability: Improving the Discoverability of Gestures and Body Movements for Full-Body Interaction
The wide availability of body-sensing technologies (such as Nintendo Wii and Microsoft Kinect) has the potential to bring full-body interaction to the masses, but the design of hand gestures and body movements that can be easily discovered by the users of such systems is still a challenge. In this paper, we revise and evaluate Framed Guessability, a design methodology for crafting discoverable hand gestures and body movements that focuses participants’ suggestions within a “frame,” i.e. a scenario. We elicited gestures and body movements via the Guessability and the Framed Guessability methods, consulting 89 participants in-lab. We then conducted an in-situ quasi-experimental study with 138 museum visitors to compare the discoverability of gestures and body movements elicited with these two methods. We found that the Framed Guessability movements were more discoverable than those generated via traditional Guessability, even though in the museum there was no reference to the frame.
Beagle: Automated Extraction and Interpretation of Visualizations from the Web
“How common is interactive visualization on the web?” “What is the most popular visualization design?” “How prevalent are pie charts really?” These questions intimate the role of interactive visualization in the real (online) world. In this paper, we present our approach (and findings) to answering these questions. First, we introduce Beagle, which mines the web for SVG-based visualizations and automatically classifies them by type (i.e., bar, pie, etc.). With Beagle, we extract over 41,000 visualizations across five different tools and repositories, and classify them with 85% accuracy, across 24 visualization types. Given this visualization collection, we study usage across tools. We find that most visualizations fall under four types: bar charts, line charts, scatter charts, and geographic maps. Though controversial, pie charts are relatively rare for the visualization tools that were studied. Our findings also suggest that the total visualization types supported by a given tool could factor into its ease of use. However this effect appears to be mitigated by providing a variety of diverse expert visualization examples to users.
Mediating Conflicts in Minecraft: Empowering Learning in Online Multiplayer Games
Multiplayer online games, such as Minecraft, have the potential to be powerful sites for youth learning, but can be plagued by inter-personal conflicts. This brings the need for online moderation. However, very little is known about the practices through which such moderation happens, or how socio-technical systems could be designed to enable ‘safe’ learning spaces online. To start addressing this gap, our research examines the existing mediation practices within a moderated Minecraft server for children aged 8-13. As part of our 14-month-long engagement, we triangulate data from participant observation, interviews, and analysis of server logs. We demonstrate how—in trying to ‘keep peace’—the online moderators monopolised the conflict resolution process, essentially preventing the children from actively working with and learning from the experiences of conflict. In response to these findings, we present an alternative framework for online conflict mediation, suggesting ways in which existing conflict resolution techniques originating in Prevention Science could be re-interpreted for online multiplayer settings.
GestAKey: Touch Interaction on Individual Keycaps
Conventionally, keys on a physical keyboard have only two states: “released” and “pressed”. As such, various techniques, such as hotkeys, are designed to enhance keyboard expressiveness. Realizing that users inevitably perform touch actions during keystrokes, we propose GestAKey, which leverages the location and motion of the touch on individual keycaps to augment the functionalities of existing keystrokes. With a log study, we collected touch data for both normal usage (typing and hotkeys) and while performing touch gestures (location and motion), which we analyzed to assess the viability of augmenting keystrokes with simultaneous gestures. A controlled experiment was conducted to compare GestAKey with existing keyboard interaction techniques in terms of efficiency and learnability. The results show that GestAKey has performance comparable to hotkeys. We further discuss insights into integrating such a touch modality into existing keyboard interaction, and demonstrate several usage scenarios.
How Social Dynamics and the Context of Digital Content Impact Workplace Remix
As highlighted in recent work on remix in online content creation communities, people commonly take and appropriate digital content for new activities. Less is known, however, about how people repurpose digital content as part of work. We report findings from an interview study with 19 individuals in which we explored how digital content in the workplace becomes a material for remix. Our analysis emphasizes (i) how digital content is obtained from colleagues for remix, (ii) how content is made available for remix by others, and (iii) how digital content is transformed for remix. In attending to these broader processes of remix, we consider the roles of workplace technologies, such as those for file sharing, as well as social norms that mediate access, remix, and acknowledgement. We draw implications for design of technology that emphasize support for individuals in making digital content available for remix, and raising awareness of the context of that content.
Somewhere Over the Rainbow: An Empirical Assessment of Quantitative Colormaps
An essential goal of quantitative color encoding is the accurate mapping of perceptual dimensions of color to the logical structure of data. Prior research identifies weaknesses of ‘rainbow’ colormaps and advocates for ramping in luminance, while recent work contributes multi-hue colormaps generated using perceptually-uniform color models. We contribute a comparative analysis of different colormap types, with a focus on comparing single- and multi-hue schemes. We present a suite of experiments in which subjects perform relative distance judgments among color triplets drawn systematically from each of four single-hue and five multi-hue colormaps. We characterize speed and accuracy across each colormap, and identify conditions that degrade performance. We also find that a combination of perceptual color space and color naming measures more accurately predict user performance than either alone, though the overall accuracy is poor. Based on these results, we distill recommendations on how to design more effective color encodings for scalar data.
Two Kinds of Novel Multi-user Immersive Display Systems
Stereoscopic display is a standard display mode for virtual reality environments. Typical 3D projection provides only a single stereoscopic video stream; thus co-located users cannot correctly perceive the virtual scene based on their own position and view. Several works have been devoted to developing multi-user stereoscopic displays, but the number of users is very limited or the technical implementation is complicated. In this paper we put forward two flexible and simple projection-based multi-user stereoscopic display systems. The first, named TPA, is based on a triple-projector array and provides 120Hz active stereo for three users. Two TPAs can be combined to form a six-user system. The second, named DPA, is a dual-projector, easily implemented system providing an individual stereoscopic video stream for two to six users. Finally, a co-located multi-user virtual fireman simulation training system and a virtual tennis simulation system were created to verify the effectiveness of our systems.
The Effects of Badges and Avatar Identification on Play and Making in Educational Games
In our study (N=2189), we divided participants into 6 badge conditions: 1) Role model badges (e.g., Einstein), 2) Personal interest badges (e.g., Movies), 3) Achievement badges (e.g., “Code King”), 4) Choice, 5) Choice with badges always visible, and 6) No badges. Participants played a CS programming game, then used an editor to create their own level. Badges promoted avatar identification (personal interest, role model), player experience (achievement, role model), intrinsic motivation (achievement, role model), and self-efficacy (role model) during both the game and the editor. Independent of badges, avatar identification promoted player experience, intrinsic motivation, and self-efficacy. Additionally, avatar identification promoted greater overall time spent in both the game and the editor, and led to significantly higher overall quality of the completed game levels (as rated by 3 independent externally trained QA testers). Our study has implications for the design of badge systems and sheds new light on the effects of avatar identification on play and making.
Full-Body Ownership Illusion Can Change Our Emotion
Recent advances in technology have allowed users to experience an illusory feeling of full body ownership of a virtual avatar. Such virtual embodiment has the power to elicit perceptual, behavioral or cognitive changes related to oneself, however, its emotional effects have not yet been rigorously examined. To address this issue, we investigated emotional changes as a function of the level of the illusion (Study 1) and whether changes in the facial expression of a virtual avatar can modulate the effects of the illusion (Study 2). The results revealed that stronger illusory feelings of full body ownership were induced in the synchronous condition, and participants reported higher valence in the synchronous condition in both Studies 1 and 2. The results from Study 2 suggested that the facial expression of a virtual avatar can modulate participants’ emotions. We discuss the prospects of the development of therapeutic techniques using such illusions to help people with emotion-related symptoms such as depression and social anxiety.
c.light: A Tool for Exploring Light Properties in Early Design Stage
Although light has become an important design element, few techniques are available to explore shapes and light effects in early design stages. We present c.light, a design tool that consists of a set of modules and a mobile application for visualizing light in the physical world. It allows designers to easily fabricate both tangible and intangible properties of light without a technical barrier. We analyzed how c.light contributes to the ideation process of light design through a workshop. The results showed that c.light greatly expands designers' capability to manipulate intangible properties of light and, by doing so, facilitates a collaborative and inverted ideation process in early design stages. We expect these results to enhance our understanding of how designers manipulate light in the physical world in early design stages and to serve as a stepping stone for future tool development.
Steering through Successive Objects
We investigate stroking motions through successive objects with styli. There are several promising models for stroking motions, such as crossing tasks, which require endpoint accuracy of a stroke, or steering tasks, which require continuous accuracy throughout the trajectory. However, a task requiring users to repeatedly steer through constrained path segments has never been studied, although such operations are needed in GUIs, e.g., for selecting icons or objects in illustration software through lassoing. We empirically confirmed that the interval, trajectory width, and obstacle size significantly affect movement speed. Existing models cannot accurately predict user performance in such tasks. We found several unexpected results, such as that steering through denser objects sometimes required less time than expected. Speed profile analysis revealed the reasons behind such behaviors, such as participants' anticipation strategies. We also discuss the applicability of existing performance models and possible revisions.
The Ethnobot: Gathering Ethnographies in the Age of IoT
Computational systems and objects are becoming increasingly closely integrated with our daily activities. Ubiquitous and pervasive computing first identified the emerging challenges of studying technology used on-the-move and in widely varied contexts. With IoT, previously sporadic experiences are interconnected across time and space in numerous and complex ways. This increasing complexity has multiplied the challenges facing those who study human experience to inform design. This paper describes the results of a study that used a chatbot or ‘Ethnobot’ to gather ethnographic data, and considers the opportunities and challenges in collecting this data in the absence of a human ethnographer. This study involved 13 participants gathering information about their experiences at the Royal Highland Show. We demonstrate the effectiveness of the Ethnobot in this setting, discuss the benefits and drawbacks of chatbots as a tool for ethnographic data collection, and conclude with recommendations for the design of chatbots for this purpose.
An Experimental Study of Cryptocurrency Market Dynamics
As cryptocurrencies gain popularity and credibility, marketplaces for cryptocurrencies are growing in importance. Understanding the dynamics of these markets can help to assess how viable the cryptocurrency ecosystem is and how design choices affect market behavior. One existential threat to cryptocurrencies is dramatic fluctuations in traders’ willingness to buy or sell. Using a novel experimental methodology, we conducted an online experiment to study how susceptible traders in these markets are to peer influence from trading behavior. We created bots that executed over one hundred thousand trades costing less than a penny each in 217 cryptocurrencies over the course of six months. We find that individual “buy” actions led to short-term increases in subsequent buy-side activity hundreds of times the size of our interventions. From a design perspective, we note that the design choices of the exchange we study may have promoted this and other peer influence effects, which highlights the potential social and economic impact of HCI in the design of digital institutions.
Paragon: An Online Gallery for Enhancing Design Feedback with Visual Examples
Examples provide a source of inspiration for creating designs, but can they help improve the feedback process? Supplementing design feedback with examples could help recipients see issues clearly, identify concrete steps for improvement, and integrate novel ideas. Two online studies investigated how to support novices providing feedback on visual poster designs in an online context. Study One found that feedback providers select poster examples that complement their feedback and align with a provided rubric. Study Two shows that feedback providers give more specific, actionable, and novel input when using an example-centric approach, as opposed to text alone. To support this, we designed Paragon, an interface to efficiently browse examples using metadata. Finally, we discuss implications for collecting examples from the Web and structuring the design feedback process.
Personality Depends on The Medium: Differences in Self-Perception on Snapchat, Facebook and Offline
We investigate self-perception in social media through the lens of personality theory. Two mixed-methods studies involving 148 participants examine whether people self-report different personality traits in social media compared with their offline traits. We first compare offline and Facebook traits, finding that on Facebook people are less Neurotic, Open and Agreeable. A second study compared offline, Facebook and Snapchat traits, replicating and extending our initial results. Again, Facebook personality was less Neurotic and less Open than offline. In contrast, Snapchat personality was more Extraverted than both offline and Facebook, and more Open than Facebook. Interviews indicate how personality differences arise from social media affordances. Anxiety about audience judgments leads people to curate posts to appear less Neurotic on social media, but the transience of Snapchat promotes greater Extraversion than offline and Facebook. We discuss theory and design implications.
“We Don’t Do That Here”: How Collaborative Editing with Mentors Improves Engagement in Social Q&A Communities
Online question-and-answer (Q&A) communities like Stack Overflow have norms that are not obvious to novice users. Novices create and post programming questions without feedback, and the community enforces site norms through public downvoting and commenting. This can leave novices discouraged from further participation. We deployed a month-long, just-in-time mentorship program to Stack Overflow in which we redirected novices in the process of asking a question to an on-site Help Room. There, novices received feedback on their question drafts from experienced Stack Overflow mentors. We present examples and discussion of various question improvements including: question context, code formatting, and wording that adheres to on-site cultural norms. We find that mentored questions are substantially improved over non-mentored questions, with average scores increasing by 50%. We provide design implications that challenge how socio-technical communities onboard novices across domains.
Using High Frequency Accelerometer and Mouse to Compensate for End-to-end Latency in Indirect Interaction
End-to-end latency corresponds to the temporal difference between a user input and the corresponding output from a system. It has been shown to degrade user performance in both direct and indirect interaction. While latency can be reduced to some extent, it can also be compensated for in software by predicting the future position of the cursor based on previous positions, velocities and accelerations. In this paper, we propose a hybrid hardware and software prediction technique specifically designed to partially compensate for end-to-end latency in indirect pointing. We combine a computer mouse with a high-frequency accelerometer to predict the future location of the pointer using Euler-based equations. Our prediction method results in more accurate prediction than previously introduced prediction algorithms for direct touch. A controlled experiment also revealed that it can improve target acquisition time in pointing tasks.
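The abstract describes predicting the pointer's future location with Euler-based equations from positions, velocities and accelerations. The sketch below shows the basic constant-acceleration extrapolation such a predictor builds on; the time step, units and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def predict_pointer(position, velocity, acceleration, latency_s):
    """Extrapolate the pointer position one end-to-end latency into the future.

    Uses the constant-acceleration kinematic equations
    x(t + dt) = x + v*dt + 0.5*a*dt^2  and  v(t + dt) = v + a*dt.
    All inputs are 2D numpy arrays (pixels, pixels/s, pixels/s^2).
    """
    dt = latency_s
    predicted_position = position + velocity * dt + 0.5 * acceleration * dt ** 2
    predicted_velocity = velocity + acceleration * dt
    return predicted_position, predicted_velocity

# Example: a cursor moving right at 800 px/s, decelerating, with ~50 ms latency.
pos = np.array([400.0, 300.0])
vel = np.array([800.0, 0.0])
acc = np.array([-1200.0, 0.0])   # e.g. estimated from a high-frequency accelerometer
print(predict_pointer(pos, vel, acc, latency_s=0.05))
```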
Modeling Perceived Screen Resolution Based on Position and Orientation of Wrist-Worn Devices
This paper presents a model for inferring perceivable screen content based on the position and orientation of mobile or wearable devices with respect to their user. The model is based on findings from vision science and allows prediction of the effective resolution that can be perceived by a user. It considers the distance and angle between the device and the eyes of the observer, as well as the resulting retinal eccentricity when the device is not directly focused on but observed in the periphery. To validate our model, we conducted a study with 12 participants. Based on our results, we outline implications for the design of mobile applications that are able to adapt themselves to facilitate information throughput and usability.
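The abstract does not give the model's equations; the sketch below shows one plausible way such an effective-resolution estimate could be assembled from standard vision-science ingredients (small-angle pixel-pitch geometry plus an acuity falloff with retinal eccentricity). The falloff form, the 60 pixels-per-degree foveal acuity, and the E2 constant are textbook approximations and assumptions, not the paper's model.

```python
import math

def effective_ppd(pixel_pitch_mm, distance_mm, eccentricity_deg,
                  acuity_ppd_fovea=60.0, e2_deg=2.3):
    """Rough sketch of a perceived-resolution estimate.

    device_ppd: pixels per degree of visual angle the screen physically offers
    at the given viewing distance (visual angle subtended by one pixel).
    acuity_ppd: pixels per degree the eye can resolve at the given retinal
    eccentricity, using a standard falloff acuity(e) = acuity_fovea * E2 / (E2 + e).
    The perceivable ("effective") resolution is limited by the smaller of the two.
    """
    pixel_angle_deg = math.degrees(2 * math.atan(pixel_pitch_mm / (2 * distance_mm)))
    device_ppd = 1.0 / pixel_angle_deg
    acuity_ppd = acuity_ppd_fovea * e2_deg / (e2_deg + eccentricity_deg)
    return min(device_ppd, acuity_ppd)

# Smartwatch-like display (~0.08 mm pixel pitch) at 350 mm, viewed 20 deg off the fovea.
print(round(effective_ppd(0.08, 350.0, 20.0), 1), "pixels per degree perceivable")
```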
Investigating Perceptual Congruence between Data and Display Dimensions in Sonification
The relationships between sounds and their perceived meaning and connotations are complex, making auditory perception an important factor to consider when designing sonification systems. Listeners often have a mental model of how a data variable should sound during sonification and this model is not considered in most data:sound mappings. This can lead to mappings that are difficult to use and can cause confusion. To investigate this issue, we conducted a magnitude estimation experiment to map how roughness, noise and pitch relate to the perceived magnitude of stress, error and danger. These parameters were chosen due to previous findings which suggest perceptual congruency between these auditory sensations and conceptual variables. Results from this experiment show that polarity and scaling preference are dependent on the data:sound mapping. This work provides polarity and scaling values that may be directly utilised by sonification designers to improve auditory displays in areas such as accessible and mobile computing, process-monitoring and biofeedback.
Acceptability and Acceptance of Autonomous Mobility on Demand: The Impact of an Immersive Experience
Autonomous vehicles have the potential to fundamentally change existing transportation systems. Beyond legal concerns, these societal evolutions will critically depend on user acceptance. As an emerging mode of public transportation [7], autonomous mobility on demand (AMoD) is of particular interest in this context. The aim of the present study is to identify the main components of acceptability (before first use) and acceptance (after first use) of AMoD, following a user experience (UX) framework. To address this goal, we conducted three workshops (N=14) involving open discussions and a ride in an experimental autonomous shuttle. Using a mixed-methods approach, we measured pre-immersion acceptability before immersing the participants in an on-demand transport scenario, and eventually measured post-immersion acceptance of AMoD. Results show that participants were reassured about safety concerns; however, they perceived the AMoD experience as ineffective. Our findings highlight key factors to be taken into account when designing AMoD experiences.
BioFidget: Biofeedback for Respiration Training Using an Augmented Fidget Spinner
This paper presents BioFidget, a biofeedback system that integrates physiological sensing and display into a smart fidget spinner for respiration training. We present a simple yet novel hardware design that transforms a fidget spinner into 1) a nonintrusive heart rate variability (HRV) sensor, 2) an electromechanical respiration sensor, and 3) an information display. The combination of these features enables users to engage in respiration training through designed tangible and embodied interactions, without requiring them to wear additional physiological sensors. The results of an empirical user study with 32 participants in a practical setting show that the respiration training method reduces stress and that the proposed system meets the requirements of sensing validity and engagement.
Gender-Inclusive Design: Sense of Belonging and Bias in Web Interfaces
We interact with dozens of web interfaces on a daily basis, making inclusive web design practices more important than ever. This paper investigates the impacts of web interface design on ambient belonging, or the sense of belonging to a community or culture. Our experiment deployed two content-identical webpages for an introductory computer science course, differing only in aesthetic features such that one was perceived as masculine while the other was gender-neutral. Our results confirm that young women exposed to the masculine page are negatively affected, reporting significantly less ambient belonging, interest in the course and in studying computer science broadly. They also experience significantly more concern about others’ perception of their gender relative to young women exposed to the neutral page, while no similar effect is seen in young men. These results suggest that gender biases can be triggered by web design, highlighting the need for inclusive user interface design for the web.
Too Close and Crowded: Understanding Stress on Mobile Instant Messengers based on Proxemics
Nowadays, mobile instant messaging (MIM) is a necessity for our private and public lives, but it has also been the cause of stress. In South Korea, MIM stress has become a serious social problem. To understand this stress, we conducted four focus groups with 20 participants under MIM stress. We initially discovered that MIM stress relates to how people perceive the territory in MIM. We then applied proxemics-the theory of human use of space-to the thematic analysis as the rationale. The data revealed two main themes: too close and too crowded. The participants were stressed due to design features that let strangers or crowds into their MIM applications and forced them to interact and share their status with them. Based on this finding, we propose a set of implications for designing anti-stress MIM applications.
Socioeconomic Inequalities in the Non use of Facebook
Use and non-use of technology can occur in a variety of forms. This paper analyzes data from a probabilistic sample of 1000 US households to identify predictors for four different types of use and non-use of the social media site Facebook. The results make three important contributions. First, they demonstrate that many demographic and socioeconomic predictors of social media use and non-use identified in prior studies hold with a larger, more diverse sample. Second, they show how going beyond a binary distinction between use and non-use reveals inequalities in social media use and non-use not identified in prior work. Third, they contribute to ongoing discussions about the representativeness of social media data by showing which populations are, and are not, represented in samples drawn from social media.
Navigation Systems for Motorcyclists: Exploring Wearable Tactile Feedback for Route Guidance in the Real World
Current navigation systems for motorcyclists use visual or auditory cues for guidance. However, this poses a challenge to motorcyclists, since their visual and auditory channels are already occupied with controlling the motorbike, paying attention to other road users, and planning the next turn. In this work, we explore how tactile feedback can be used to guide motorcyclists. We present MOVING (MOtorbike VIbrational Navigation Guidance), a smart kidney belt that presents navigation cues through 12 vibration motors. In addition, we report on the design process of this wearable and on an evaluation with 16 participants in a real-world riding setting. We show that MOVING outperforms off-the-shelf navigation systems in terms of turn errors and distraction.
CodeTalk: Improving Programming Environment Accessibility for Visually Impaired Developers
In recent times, programming environments like Visual Studio are widely used to enhance programmer productivity. However, inadequate accessibility prevents Visually Impaired (VI) developers from taking full advantage of these environments. In this paper, we focus on the accessibility challenges faced by the VI developers in using Graphical User Interface (GUI) based programming environments. Based on a survey of VI developers and based on two of the authors’ personal experiences, we categorize the accessibility difficulties into Discoverability, Glanceability, Navigability, and Alertability. We propose solutions to some of these challenges and implement these in CodeTalk, a plugin for Visual Studio. We show how CodeTalk improves developer experience and share promising early feedback from VI developers who used our plugin.
Morphees+: Studying Everyday Reconfigurable Objects for the Design and Taxonomy of Reconfigurable UIs
Users interact with many reconfigurable objects in daily life. These objects embed reconfigurations and shape-changing features that users are familiar with. For this reason, everyday reconfigurable objects have informed the design and taxonomy of shape-changing UIs. However, they have never been explored systematically. In this paper, we present a data set of 82 everyday reconfigurable objects that we collected in a workshop. We discuss how they can inspire the design of reconfigurable interfaces. We particularly focus on taxonomies of reconfigurable interfaces. Taxonomies have been suggested to help design and communication among researchers; however, despite their extensive use, taxonomies are rarely evaluated. This paper analyses two established taxonomies – Rasmussen’s and Roudaut’s – using everyday reconfigurable objects. We show relationships between the taxonomies and areas for improvement. We propose Morphees+, a refined taxonomy based on Roudaut’s Shape Resolution Taxonomy.
The Theory-Practice Gap as Generative Metaphor
The theory-practice gap is a well-known concept in HCI research. It provides a way of describing a space that allegedly exists between the theory and practice of the field, and it has inspired many researchers to propose ways to “bridge the gap.” In this paper, we propose a novel interpretation of the gap as a generative metaphor that frames problems and guides researchers towards possible solutions. We examine how the metaphor has emerged in HCI discourse, and what its limitations might be. Our examination raises concerns about treating the gap as given or obvious, which could reflect researchers’ tendencies to adopt a problem-solving perspective. We discuss the value of considering problem setting in relation to the theory-practice gap, and then explore Derrida’s strategy of “reversal” as a possible way to develop new metaphors to capture the relationship between theory and practice.
Playing to Wait: A Taxonomy of Idle Games
Idle games are a recent minimalist gaming phenomenon in which the game is left running with little player interaction. We deepen understanding of idle games and their characteristics by developing a taxonomy and identifying game features. This paper examines 66 idle games using a grounded theory approach to analyze play, game mechanics, rewards, interactivity, progress rate, and user interface. To establish a clearly bounded definition of idle games, we analyzed 10 non-idle games with the same approach. We discuss how idle games move players from playing to planning, how they question dominant assumptions about gameplay, and their unusual use of resources such as player attention and computer cycles. Our work illuminates opportunities for the design of idle games, suggests design implications, and provides a framework for researchers to clearly articulate questions about this genre.
Empowering Families Facing English Literacy Challenges to Jointly Engage in Computer Programming
Research suggests that parental engagement through Joint Media Engagement (JME) is an important factor in children’s learning for coding and programming. Unfortunately, parents with limited technology background may have difficulty supporting their children’s access to programming. English-language learning (ELL) families from marginalized communities face particular challenges in understanding and supporting programming, as code is primarily authored using English text. We present BlockStudio, a programming tool for empowering ELL families to jointly engage in introductory coding, using an environment embodying two design principles, text-free and visually concrete. We share a case study involving three community centers serving immigrant and refugee populations. Our findings show ELL families can jointly engage in programming without text, via co-creation and flexible roles, and can create a range of artifacts, indicating understanding of aspects of programming within this environment. We conclude with implications for coding together in ELL families and design ideas for text-free programming research.
Digital Exhibit Labels in Museums: Promoting Visitor Engagement with Cultural Artifacts
How can we use interactive displays in museums to help visitors appreciate authentic objects and artifacts that they can’t otherwise touch or manipulate? This paper shares results from a design-based research study on the use of interactive displays to help visitors learn about artifacts in an exhibit on the history and culture of China. To explore the potential afforded by these displays, we unobtrusively video recorded 834 museum visitor groups who stopped in front of one collection of objects. Drawing on cognitive models of curiosity, we tested three redesigns of this display, each focusing on a different strategy to spark visitor curiosity, interest, and engagement. To understand the relative effectiveness of these designs, we analyzed visitor interaction and conversation. Our results uncovered significant differences across the conditions suggesting implications for the use of such technology in museums.
Training Person-Specific Gaze Estimators from User Interactions with Multiple Devices
Learning-based gaze estimation has significant potential to enable attentive user interfaces and gaze-based interaction on the billions of camera-equipped handheld devices and ambient displays. While training accurate person- and device-independent gaze estimators remains challenging, person-specific training is feasible but requires tedious data collection for each target device. To address these limitations, we present the first method to train person-specific gaze estimators across multiple devices. At the core of our method is a single convolutional neural network with shared feature extraction layers and device-specific branches that we train from face images and corresponding on-screen gaze locations. Detailed evaluations on a new dataset of interactions with five common devices (mobile phone, tablet, laptop, desktop computer, smart TV) and three common applications (mobile game, text editing, media center) demonstrate the significant potential of cross-device training. We further explore training with gaze locations derived from natural interactions, such as mouse or touch input.
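The abstract describes the architecture only at a high level: a single CNN with shared feature-extraction layers and device-specific output branches. The PyTorch sketch below illustrates that shared-trunk/per-device-head structure; the layer sizes, input resolution, and device names are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class MultiDeviceGazeNet(nn.Module):
    """Sketch of a person-specific gaze estimator with a shared feature
    extractor and one regression branch per device."""

    def __init__(self, devices=("phone", "tablet", "laptop", "desktop", "tv")):
        super().__init__()
        # Shared convolutional feature extractor over face images (3x64x64 assumed).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        # One small head per device, mapping features to a 2D on-screen
        # gaze location (x, y) in that device's coordinate system.
        self.branches = nn.ModuleDict({
            d: nn.Sequential(nn.Linear(64 * 4 * 4, 128), nn.ReLU(), nn.Linear(128, 2))
            for d in devices
        })

    def forward(self, face_image, device):
        return self.branches[device](self.features(face_image))

model = MultiDeviceGazeNet()
fake_faces = torch.randn(8, 3, 64, 64)          # batch of face crops
print(model(fake_faces, device="phone").shape)  # -> torch.Size([8, 2])
```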
Designing for Diabetes Decision Support Systems with Fluid Contextual Reasoning
Type 1 diabetes is a potentially life-threatening chronic condition that requires frequent interactions with diverse data to inform treatment decisions. While mobile technologies such as blood glucose meters have long been an essential part of this process, designing interfaces that explicitly support decision-making remains challenging. Dual-process models are a common approach to understanding such cognitive tasks. However, evidence from the first of two studies we present suggests that in demanding and complex situations, some individuals approach disease management in distinctive ways that do not seem to fit well within existing models. This finding motivated, and helped frame, our second study, a survey (n=192) to investigate these behaviors in more detail. On the basis of the resulting analysis, we posit Fluid Contextual Reasoning to explain how some people with diabetes respond to particular situations, and discuss how an extended framework might help inform the design of user interfaces for diabetes management.
The Impact of Word, Multiple Word, and Sentence Input on Virtual Keyboard Decoding Performance
Entering text on non-desktop computing devices is often done via an onscreen virtual keyboard. Input on such keyboards normally consists of a sequence of noisy tap events that specify some amount of text, most commonly a single word. But is single word-at-a-time entry the best choice? This paper compares user performance and recognition accuracy of word-at-a-time, phrase-at-a-time, and sentence-at-a-time text entry on a smartwatch keyboard. We evaluate the impact of differing amounts of input in both text copy and free composition tasks. We found providing input of an entire sentence significantly improved entry rates from 26 wpm to 32 wpm while keeping character error rates below 4%. In offline experiments with more processing power and memory, sentence input was recognized with a much lower 2.0% error rate. Our findings suggest virtual keyboards can enhance performance by encouraging users to provide more input per recognition event.
Designing the Future of Personal Fashion
Advances in computer vision and machine learning are changing the way people dress and buy clothes. Given the vast space of fashion problems, where can data-driven technologies provide the most value? To understand consumer pain points and opportunities for technological interventions, this paper presents the results from two independent need-finding studies that explore the gold-standard of personalized shopping: interacting with a personal stylist. Through interviews with five personal stylists, we study the range of problems they address and their in-person processes for working with clients. In a separate study, we investigate how styling experiences map to online settings by building and releasing a chatbot that connects users to one-on-one sessions with a stylist, acquiring more than 70 organic users in three weeks. These conversations reveal that in-person and online styling sessions share similar goals, but online sessions often involve smaller problems that can be resolved more quickly. Based on these explorations, we propose future highly personalized, online interactions that address consumer trust and uncertainty, and discuss opportunities for automation.
To Distort or Not to Distort: Distance Cartograms in the Wild
Distance Cartograms (DC) distort geographical features so that the measured distance between a single location and any other location on a map indicates absolute travel time. Although studies show that users can efficiently assess travel time with DC, distortion applied in DC may confuse users, and its usefulness “in the wild” is unknown. To understand how real world users perceive DC’s benefits and drawbacks, we devise techniques that improve DC’s presentation (preserving topological relationships among map features while aiming at retaining shapes) and scalability (presenting accurate live travel time). We developed a DC-enabled system with these techniques, and deployed it to 20 participants for 4 weeks. During this period, participants spent, on average, more than 50% of their time with DC as opposed to a standard map. Participants felt DC to be intuitive and useful for assessing travel time. They indicated intent in adopting DC in their real-life scenarios.
Towards a Multisensory Augmented Reality Map for Blind and Low Vision People: a Participatory Design Approach
Current low-tech Orientation & Mobility (O&M) tools for visually impaired people, e.g. tactile maps, have limitations. Interactive accessible maps have been developed to overcome these; however, most of them are limited to the exploration of existing maps, and have remained in laboratories. Using a participatory design approach, we worked closely with 15 visually impaired students and 3 O&M instructors over 6 months. We iteratively designed and developed an augmented reality map intended for use in O&M classes in special education centers. This prototype combines projection, audio output and the use of tactile tokens, and thus allows both map exploration and construction by low vision and blind people. Our user study demonstrated that all students were able to successfully use the prototype, and showed high user satisfaction. A second phase with 22 international special education teachers allowed us to gain more qualitative insights. This work shows that augmented reality has potential for improving access to education for visually impaired people.
Scaling Classroom IT Skill Tutoring: A Case Study from India
India is home to the largest under-25 demographic profile in the world, but lacks a job-ready educational system. It requires a wide-spread, skill-oriented educational model, equipping youth to thrive in highly dynamic job markets. As a response to the huge demand for technical education, a large private skill-tutoring ecosystem has sprung up in India but remains geographically limited. This paper, drawn from three months of ethnographic research conducted in Ameerpet (arguably India’s largest IT skilling hub), probes the pedagogic style and characteristics of tutoring, and offers reasons why learners prefer to enroll in a physical model of classroom teaching over online courses. We make design suggestions for online learning platforms to attract students who are marginalized in the more formal and competitive education system and opt for Ameerpet-like skill hubs. Our primary offering is to suggest a shift in perspective of online education platforms to include job readiness, and accompanying changes in course content and delivery.
Regulating Feelings During Interpersonal Conflicts by Changing Voice Self-perception
Emotions play a major role in how interpersonal conflicts unfold. Although several strategies and technological approaches have been proposed for emotion regulation, they often require conscious attention and effort. This often limits their efficacy in practice. In this paper, we propose a different approach inspired by self-perception theory: noticing that people are often reacting to the perception of their own behavior, we artificially change their perceptions to influence their emotions. We conducted two studies to evaluate the potential of this approach by automatically and subtly altering how people perceive their own voice. In one study, participants that received voice feedback with a calmer tone during relationship conflicts felt less anxious. In the other study, participants who listened to their own voices with a lower pitch during contentious debates felt more powerful. We discuss the implications of our findings and the opportunities for designing automatic and less perceptible emotion regulation systems.
Flotation Simulation in a Cable-driven Virtual Environment — A Study with Parasailing
This paper presents flotation simulation in a cable-driven virtual environment. For this, a virtual parasailing system was developed, where the visual stimulus was provided through a VR headset and the physical stimulus was given by wires. In order to prevent the user from moving out of the limited workspace of the cable-driven system, the visual acceleration was washout-filtered to produce the physical acceleration. In the parasailing trajectory, we focused on the stages of vertical acceleration/deceleration and conducted an experiment to identify how much gain can be applied to the visual acceleration, which makes the user feel the natural self-motion when integrated with physical stimulus. Then, the results were tested using several types of full-course virtual parasailing. The results showed that fairly large differences between visual and physical stimuli would be accepted and different gains could be assigned depending on the user’s altitudes.
Identity Work as Deliberation: AAPI Political Discourse in the 2016 US Presidential Election
Asian Americans and Pacific Islanders (AAPIs) are perceived as the “model minority” with a monolithic identity, in contrast to other marginalized racial groups in the United States. In reality, they are composed of different ethnicities, socio-economic backgrounds, and political ideologies. AAPIs share their political views online, engaging in the public sphere through a collaborative process we coin “identity work as deliberation.” Using the 2016 US Presidential Election as a case study, we retrieved 4,406 Reddit comments posted between October and December 2016. We examine how users engage in an online community through a deliberation lens to understand the extent to which Reddit supports identity work as a deliberative process. Under the collective AAPI umbrella, we find that ethnic identifications complicate the types of discussion possible within r/asianamerica. We discuss how the expression of identity, and thereby solidarity, in a politicized online setting may lead to a social movement.
D-SWIME: A Design Space for Smartwatch Interaction Techniques Supporting Mobility and Encumbrance
Smartwatches enable rapid access to information anytime and anywhere. However, current smartwatch content navigation techniques, for panning and zooming, were directly adopted from those used on smartphones. These techniques are cumbersome when performed on small smartwatch screens and have not been evaluated for their support in mobility and encumbrance contexts (when the user’s hands are busy). We studied the effect of mobility and encumbrance on common content navigation techniques and found a significant decrease in performance as the pace of mobility increases or when the user was encumbered with busy hands. Based on these initial findings, we proposed a design space which would improve efficiency when navigation techniques, such as panning and zooming, are employed in mobility contexts. Our results reveal that our design space can effectively be used to create novel interaction techniques that improve smartwatch content navigation in mobility and encumbrance contexts.
A Visual Interaction Framework for Dimensionality Reduction Based Data Exploration
Dimensionality reduction is a common method for analyzing and visualizing high-dimensional data. However, reasoning dynamically about the results of a dimensionality reduction is difficult. Dimensionality-reduction algorithms use complex optimizations to reduce the number of dimensions of a dataset, but these new dimensions often lack a clear relation to the initial data dimensions, thus making them difficult to interpret. Here we propose a visual interaction framework to improve dimensionality-reduction based exploratory data analysis. We introduce two interaction techniques, forward projection and backward projection, for dynamically reasoning about dimensionally reduced data. We also contribute two visualization techniques, prolines and feasibility maps, to facilitate the effective use of the proposed interactions. We apply our framework to PCA and autoencoder-based dimensionality reductions. Through data-exploration examples, we demonstrate how our visual interactions can improve the use of dimensionality reduction in exploratory data analysis.
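To make the two interactions concrete, here is a minimal sketch of how forward and backward projection can be realized for a linear reduction such as PCA using scikit-learn's transform/inverse_transform. It illustrates the idea under simplified assumptions rather than reproducing the paper's framework (which also covers autoencoders and adds the prolines and feasibility-map visualizations).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))           # 200 samples, 5 original dimensions
pca = PCA(n_components=2).fit(X)

# "Forward projection" (as sketched here): perturb one original dimension of a
# data point and observe how its 2D embedding moves.
x = X[0].copy()
x_perturbed = x.copy()
x_perturbed[3] += 1.0                   # increase dimension 3 by one unit
before, after = pca.transform(np.vstack([x, x_perturbed]))
print("embedding shift from perturbing dim 3:", after - before)

# "Backward projection" (as sketched here): choose a location in the 2D
# projection space and map it back to the original 5D data space.
target_2d = np.array([[1.5, -0.5]])
reconstructed = pca.inverse_transform(target_2d)
print("5D point corresponding to (1.5, -0.5):", reconstructed.round(2))
```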
Complex Mediation in the Formation of Political Opinions
The Internet plays an important role in the formation of political opinions by supporting citizens in discovering diverse political information and opinions. However, the echo chamber effect has become of increasing concern, referring to the tendency for people to encounter opinions and information similar to their own online. It remains poorly understood how ordinary citizens use the Internet in the formation of political opinions. To answer this question, we conducted an interview study with 32 Chinese citizens. We found that participants used complex strategies to coordinate personal networks and technologies in specific ways to better understand political events. To analyze this phenomenon, we draw on Bødker and Andersen’s model of complex mediation which describes how multiple mediators including people and artifacts work together to mediate an activity. We discuss how complex mediation supported participants in informing their political opinions. We derive design implications for supporting people to form political opinions.
NavigaTone: Seamlessly Embedding Navigation Cues in Mobile Music Listening
As humans, we have the natural capability of localizing the origin of sounds. Spatial audio rendering leverages this skill by applying special filters to recorded audio to create the impression that a sound emanates from a certain position in physical space. A main application of spatial audio on mobile devices is to provide non-visual navigation cues. Current systems require users to either listen to artificial beacon sounds, or the entire audio source (e.g., a song) is repositioned in space, which impacts the listening experience. We present NavigaTone, a system that takes advantage of multi-track recordings and provides directional cues by moving a single track in the auditory space. While minimizing the impact of the navigation component on the listening experience, a user study showed that participants could localize sources as well as with stereo panning, while the listening experience was rated to be closer to common music listening.
Considering Agency and Data Granularity in the Design of Visualization Tools
Previous research has identified trade-offs when it comes to designing visualization tools. While constructive “bottom-up” tools promote a hands-on, user-driven design process that enables a deep understanding and control of the visual mapping, automated tools are more efficient and allow people to rapidly explore complex alternative designs, often at the cost of transparency. We investigate how to design visualization tools that support a user-driven, transparent design process while enabling efficiency and automation, through a series of design workshops that looked at how both visualization experts and novices approach this problem. Participants produced a variety of solutions that range from example-based approaches expanding constructive visualization to solutions in which the visualization tool infers solutions on behalf of the designer, e.g., based on data attributes. On a higher level, these findings highlight agency and granularity as dimensions that can guide the design of visualization tools in this space.
El Paquete Semanal: The Week’s Internet in Havana
We contribute a case study of El Paquete Semanal or “The Weekly Package” — the pervasive, offline internet in Cuba. We conducted a qualitative inquiry of El Paquete through extensive fieldwork—interviews and observations—in Havana, Cuba. Our findings highlight the human infrastructure that supports this offline internet, rendered visible through the lens of articulation work. By offering an in-depth perspective into these workings of El Paquete, we aim to challenge established notions of what an (or the) internet “should” look like in more and less “developed” contexts. We highlight how El Paquete is a non-standardized and non-neutral internet, but still human-centered. We also offer an enriched understanding of how an entirely offline internet can provide expansive information access to support leisure and livelihood, additionally serving as a locally relevant platform that affords local participation.
Voice Interfaces in Everyday Life
Voice User Interfaces (VUIs) are becoming ubiquitously available, being embedded both into everyday mobility via smartphones, and into the life of the home via ‘assistant’ devices. Yet, exactly how users of such devices practically thread that use into their everyday social interactions remains underexplored. By collecting and studying audio data from month-long deployments of the Amazon Echo in participants’ homes-informed by ethnomethodology and conversation analysis-our study documents the methodical practices of VUI users, and how that use is accomplished in the complex social life of the home. Data we present shows how the device is made accountable to and embedded into conversational settings like family dinners where various simultaneous activities are being achieved. We discuss how the VUI is finely coordinated with the sequential organisation of talk. Finally, we locate implications for the accountability of VUI interaction, request and response design, and raise conceptual challenges to the notion of designing ‘conversational’ interfaces.
Mental Health Support and its Relationship to Linguistic Accommodation in Online Communities
Many online communities cater to the critical and unmet needs of individuals challenged with mental illnesses. Generally, communities engender characteristic linguistic practices, known as norms. Conformance to these norms, or linguistic accommodation, encourages social approval and acceptance. This paper investigates whether linguistic accommodation impacts a specific form of social feedback: the support received by an individual in an online mental health community. We first quantitatively derive two measures for each post in these communities: 1) the linguistic accommodation it exhibits, and 2) the level of support it receives. Thereafter, we build a statistical framework to examine the relationship between these measures. Although the extent to which accommodation is associated with support varies, we find a positive link between the two, consistent across 55 Reddit communities serving various psychological needs. We discuss how our work surfaces a tension in the functioning of these sensitive communities, and present design implications for improving their support provisioning mechanisms.
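As a rough illustration of measuring linguistic accommodation, the sketch below scores a post by the cosine similarity between its word distribution and the community's; the paper's actual measures are derived differently, so treat this as a simplified proxy.

```python
# Hedged sketch: a simple proxy for linguistic accommodation, measured as the
# cosine similarity between a post's unigram distribution and the community's.
from collections import Counter
import math

def unigram_dist(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def accommodation(post_tokens, community_tokens):
    """Higher values mean the post's word use is closer to community norms."""
    p, q = unigram_dist(post_tokens), unigram_dist(community_tokens)
    dot = sum(p[w] * q[w] for w in set(p) & set(q))
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0
```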
Value-Suppressing Uncertainty Palettes
Understanding uncertainty is critical for many analytical tasks. One common approach is to encode data values and uncertainty values independently, using two visual variables. These resulting bivariate maps can be difficult to interpret, and interference between visual channels can reduce the discriminability of marks. To address this issue, we contribute Value-Suppressing Uncertainty Palettes (VSUPs). VSUPs allocate larger ranges of a visual channel to data when uncertainty is low, and smaller ranges when uncertainty is high. This non-uniform budgeting of the visual channels makes more economical use of the limited visual encoding space when uncertainty is low, and encourages more cautious decision-making when uncertainty is high. We demonstrate several examples of VSUPs, and present a crowdsourced evaluation showing that, compared to traditional bivariate maps, VSUPs encourage people to more heavily weight uncertainty information in decision-making tasks.
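The non-uniform budgeting can be illustrated with a simple quantizer that halves the number of distinguishable value bins at each higher uncertainty level; the bin counts below are arbitrary placeholders, not the palette sizes used in the paper.

```python
# Hedged sketch: the non-uniform budgeting behind a VSUP, as a simple quantizer.
# At the lowest-uncertainty level all value bins are available; at higher
# uncertainty levels values are collapsed into progressively fewer bins.
def vsup_bin(value, uncertainty, levels=4, max_bins=8):
    """value, uncertainty in [0, 1]; returns (uncertainty_level, value_bin)."""
    level = min(int(uncertainty * levels), levels - 1)   # 0 = most certain
    bins = max(1, max_bins // (2 ** level))               # halve value bins per level
    return level, min(int(value * bins), bins - 1)

print(vsup_bin(0.7, 0.05))   # many value bins when certainty is high -> (0, 5)
print(vsup_bin(0.7, 0.9))    # a single collapsed bin when uncertainty is high -> (3, 0)
```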
Baang: A Viral Speech-based Social Platform for Under-Connected Populations
Speech is more natural than text for a large part of the world including hard-to-reach populations (low-literate, poor, tech-novice, visually-impaired, marginalized) and oral cultures. Voice-based services over simple mobile phones are effective means to provide orality-driven social connectivity to such populations. We present Baang, a versatile and inclusive voice-based social platform that allows audio content creation and sharing among its open community of users. Within 8 months, Baang spread virally to 10,721 users (69% of them blind) who participated in 269,468 calls and shared their thoughts via 44,178 audio-posts, 343,542 votes, 124,389 audio-comments and 94,864 shares. We show that the ability to vote, comment and share leads to viral spread, deeper engagement, longer retention and emergence of true dialog among participants. Beyond connectivity, Baang provides its users with a voice and a social identity as well as means to share information and get community support.
Haptic Links: Bimanual Haptics for Virtual Reality Using Variable Stiffness Actuation
We present Haptic Links, electro-mechanically actuated physical connections capable of rendering variable stiffness between two commodity handheld virtual reality (VR) controllers. When attached, Haptic Links can dynamically alter the forces perceived between the user’s hands to support the haptic rendering of a variety of two-handed objects and interactions. They can rigidly lock controllers in an arbitrary configuration, constrain specific degrees of freedom or directions of motion, and dynamically set stiffness along a continuous range. We demonstrate and compare three prototype Haptic Links: Chain, Layer-Hinge, and Ratchet-Hinge. We then describe interaction techniques and scenarios leveraging the capabilities of each. Our user evaluation results confirm that users can perceive many two-handed objects or interactions as more realistic with Haptic Links than with typical unlinked VR controllers.
Breeze: Sharing Biofeedback through Wearable Technologies
Digitally presenting physiological signals as biofeedback to users raises awareness of both body and mind. This paper describes the effectiveness of conveying a physiological signal often overlooked for communication: breathing. We present the design and development of digital breathing patterns and their evaluation along three output modalities: visual, audio, and haptic. We also present Breeze, a wearable pendant placed around the neck that measures breathing and sends biofeedback in real-time. We evaluated how the breathing patterns were interpreted in a fixed environment and gathered qualitative data on the wearable device’s design. We found that participants intentionally modified their own breathing to match the biofeedback, as a technique for understanding the underlying emotion. Our results describe how the features of the breathing patterns and the feedback modalities influenced participants’ perception. We include guidelines and suggested use cases, such as Breeze being used by loved ones to increase connectedness and empathy.
Observations on Typing from 136 Million Keystrokes
We report on typing behaviour and performance of 168,000 volunteers in an online study. The large dataset allows detailed statistical analyses of keystroking patterns, linking them to typing performance. Besides reporting distributions and confirming some earlier findings, we report two new findings. First, letter pairs that are typed by different hands or fingers are more predictive of typing speed than, for example, letter repetitions. Second, rollover-typing, wherein the next key is pressed before the previous one is released, is surprisingly prevalent. Notwithstanding considerable variation in typing patterns, unsupervised clustering using normalised inter-key intervals reveals that most users can be divided into eight groups of typists that differ in performance, accuracy, hand and finger usage, and rollover. The code and dataset are released for scientific use.
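Two of the reported features, inter-key intervals and rollover, can be computed from raw key event logs roughly as follows; this is an illustrative sketch, not the released analysis code.

```python
# Hedged sketch: computing inter-key intervals and the rollover ratio from
# (key, press_time, release_time) logs, where rollover means the next key is
# pressed before the previous one is released.
def keystroke_features(events):
    """events: list of (key, press_ms, release_ms) tuples sorted by press time."""
    intervals, rollovers = [], 0
    for prev, cur in zip(events, events[1:]):
        intervals.append(cur[1] - prev[1])     # press-to-press interval in ms
        if cur[1] < prev[2]:                   # pressed before previous release
            rollovers += 1
    rollover_ratio = rollovers / max(len(events) - 1, 1)
    return intervals, rollover_ratio
```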
Selection-based Text Entry in Virtual Reality
In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. While the technology for input as well as output devices is market ready, only a few solutions for text input exist, and empirical knowledge about performance and user preferences is lacking. In this paper, we study text entry in VR by selecting characters on a virtual keyboard. We discuss the design space for assessing selection-based text entry in VR. Then, we implement six methods that span different parts of the design space and evaluate their performance and user preferences. Our results show that pointing using tracked hand-held controllers outperforms all other methods. Other methods such as head pointing can be viable alternatives depending on available resources. We summarize our findings by formulating guidelines for choosing optimal virtual keyboard text entry methods in VR.
Effects of Viewing Multiple Viewpoint Videos on Metacognition of Collaborative Experiences
This paper discusses the effects of multiple-viewpoint videos on metacognition of experiences. We present a system for recording multiple users’ collaborative experiences with wearable and environmental sensors, and another system for viewing multiple-viewpoint videos that are automatically identified, extracted, and associated with individual users. We designed an experiment to compare metacognition of one’s own experience based on memory alone with metacognition supported by video viewing. The results show that metacognitive descriptions related to one’s own mind, such as feelings and preferences, are possible regardless of whether a person is viewing videos, whereas episodic descriptions, such as the content of someone’s utterance and what he or she felt about it, are strongly promoted by video viewing. We conducted another experiment in which the same participants performed identical metacognitive description tasks about half a year after the first experiment. Across the experiments, we found that first-person video is mostly used to confirm episodic facts immediately after the experience, whereas after half a year even one’s own experience often feels like someone else’s, so videos capturing the participant from the conversation partners’ and environmental viewpoints become important for thinking back to the situations in which they were placed.
I Lead, You Help but Only with Enough Details: Understanding User Experience of Co-Creation with Artificial Intelligence
Recent advances in artificial intelligence (AI) have increased the opportunities for users to interact with the technology. Now, users can even collaborate with AI in creative activities such as art. To understand the user experience in this new user–AI collaboration, we designed a prototype, DuetDraw, an AI interface that allows users and the AI agent to draw pictures collaboratively. We conducted a user study employing both quantitative and qualitative methods. Thirty participants performed a series of drawing tasks with the think-aloud method, followed by post-hoc surveys and interviews. Our findings are as follows: (1) Users were significantly more content with DuetDraw when the tool gave detailed instructions. (2) While users always wanted to lead the task, they also wanted the AI to explain its intentions but only when the users wanted it to do so. (3) Although users rated the AI relatively low in predictability, controllability, and comprehensibility, they enjoyed their interactions with it during the task. Based on these findings, we discuss implications for user interfaces where users can collaborate with AI in creative works.
Supporting Collaborative Health Tracking in the Hospital: Patients’ Perspectives
The hospital setting creates a high-stakes environment where patients’ lives depend on accurate tracking of health data. Despite recent work emphasizing the importance of patients’ engagement in their own health care, less is known about how patients track their health and care in the hospital. Through interviews and design probes, we investigated hospitalized patients’ tracking activity and analyzed our results using the stage-based personal informatics model. We used this model to understand how to support the tracking needs of hospitalized patients at each stage. In this paper, we discuss hospitalized patients’ needs for collaboratively tracking their health with their care team. We suggest future extensions of the stage-based model to accommodate collaborative tracking situations, such as hospitals, where data is collected, analyzed, and acted on by multiple people. Our findings uncover new directions for HCI research and highlight ways to support patients in tracking their care and improving patient safety.
Investigating the Impact of Gender on Rank in Resume Search Engines
In this work we investigate gender-based inequalities in the context of resume search engines, which are tools that allow recruiters to proactively search for candidates based on keywords and filters. If these ranking algorithms take demographic features into account (directly or indirectly), they may produce rankings that disadvantage some candidates. We collect search results from Indeed, Monster, and CareerBuilder based on 35 job titles in 20 U.S. cities, resulting in data on 855K job candidates. Using statistical tests, we examine whether these search engines produce rankings that exhibit two types of indirect discrimination: individual and group unfairness. Furthermore, we use controlled experiments to show that these websites do not use candidates’ inferred gender as an explicit feature in their ranking algorithms.
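One plausible way to test for group-level rank differences is a Mann-Whitney U test on candidate positions grouped by inferred gender, sketched below; the paper's exact statistical procedure may differ.

```python
# Hedged sketch: testing whether candidates of one inferred gender tend to be
# ranked differently from the other, using a Mann-Whitney U test on positions.
from scipy.stats import mannwhitneyu

def group_rank_test(ranks_female, ranks_male, alpha=0.05):
    """ranks_*: lists of 1-based positions in the search results for each group."""
    stat, p = mannwhitneyu(ranks_female, ranks_male, alternative="two-sided")
    return {"U": stat, "p": p, "significant": p < alpha}

print(group_rank_test([3, 8, 15, 22, 40], [1, 2, 5, 9, 12]))
```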
Cognitive Load Estimation in the Wild
Cognitive load has been shown, over hundreds of validated studies, to be an important variable for understanding human performance. However, establishing practical, non-contact approaches for automated estimation of cognitive load under real-world conditions is far from a solved problem. Toward the goal of designing such a system, we propose two novel vision-based methods for cognitive load estimation, and evaluate them on a large-scale dataset collected under real-world driving conditions. Cognitive load is defined by which of 3 levels of a validated reference task the observed subject was performing. On this 3-class problem, our best proposed method of using 3D convolutional neural networks achieves 86.1% accuracy at predicting task-induced cognitive load in a sample of 92 subjects from video alone. This work uses the driving context as a training and evaluation dataset, but the trained network is not constrained to the driving environment as it requires no calibration and makes no assumptions about the subject’s visual appearance, activity, head pose, scale, and perspective.
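For readers unfamiliar with 3D convolutional networks, the following PyTorch sketch shows the general shape of a clip-level 3-class classifier; the layer sizes are placeholders and do not reflect the architecture evaluated in the paper.

```python
# Hedged sketch: a minimal 3D CNN for 3-class video-clip classification.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(32, num_classes))

    def forward(self, x):
        # x: (batch, channels, frames, height, width)
        return self.head(self.features(x))

logits = Small3DCNN()(torch.randn(2, 3, 16, 112, 112))   # -> shape (2, 3)
```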
The Effect of Offset Correction and Cursor on Mid-Air Pointing in Real and Virtual Environments
Pointing at remote objects to direct others’ attention is a fundamental human ability. Previous work explored methods for remote pointing to select targets. Absolute pointing techniques that cast a ray from the user to a target are affected by humans’ limited pointing accuracy. Recent work suggests that accuracy can be improved by compensating systematic offsets between targets a user aims at and rays cast from the user to the target. In this paper, we investigate mid-air pointing in the real world and virtual reality. Through a pointing study, we model the offsets to improve pointing accuracy and show that being in a virtual environment affects how users point at targets. In the second study, we validate the developed model and analyze the effect of compensating systematic offsets. We show that the provided model can significantly improve pointing accuracy when no cursor is provided. We further show that a cursor improves pointing accuracy but also increases the selection time.
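The compensation step can be illustrated with a simple per-axis polynomial offset model fitted to calibration data, as below; the paper's offset model and validation are more elaborate, so this is only a sketch of the correction idea.

```python
# Hedged sketch: compensating a systematic pointing offset with a per-axis
# polynomial model fitted to calibration data (shown here for yaw only).
import numpy as np

def fit_offset_model(aimed_yaw, ray_yaw, degree=2):
    """Fit offset = f(ray_yaw) so that ray_yaw - offset approximates aimed_yaw."""
    return np.polyfit(ray_yaw, np.asarray(ray_yaw) - np.asarray(aimed_yaw), degree)

def correct(ray_yaw, coeffs):
    """Subtract the predicted systematic offset from a new pointing ray."""
    return ray_yaw - np.polyval(coeffs, ray_yaw)
```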
CLAW: A Multifunctional Handheld Haptic Controller for Grasping, Touching, and Triggering in Virtual Reality
CLAW is a handheld virtual reality controller that augments typical controller functionality with force feedback and actuated movement to the index finger. Our controller enables three distinct interactions (grasping virtual objects, touching virtual surfaces, and triggering) and changes its corresponding haptic rendering by sensing the differences in the user’s grasp. A servo motor coupled with a force sensor renders controllable forces to the index finger during grasping and touching. Using position tracking, a voice coil actuator at the index fingertip generates vibrations for various textures synchronized with finger movement. CLAW also supports haptic force feedback in trigger mode, for example when the user holds a gun. We describe the design considerations for CLAW and evaluate its performance through two user studies. The first study obtained qualitative user feedback on the naturalness, effectiveness, and comfort when using the device. The second study investigated the ease of the transition between grasping and touching when using our device.
Ripple Thermostat: Affecting the Emotional Experience through Interactive Force Feedback and Shape Change
Force feedback and shape change are modalities with a growing application potential beyond the more traditional GUIs. We present two studies that explored the effect of these modalities on the emotional experience when interacting with an intelligent thermostat. The first study compared visual feedback, force feedback, and a combination of force feedback and shape change. Results indicate that force feedback correlates to experienced dominance during interaction, while shape change mainly affects experienced arousal. The second study explored how force feedback and shape change could communicate affective meaning during interaction with the thermostat through a co-design study. Participants designed the thermostat behavior for three scenarios supporting energy savings. Results suggest that despite their abstractness, force feedback and shape change convey affective meaning during the user-system dialogue. The findings contribute to the design of intelligible and intuitive feedback.
A Qualitative Exploration of Perceptions of Algorithmic Fairness
Algorithmic systems increasingly shape information people are exposed to as well as influence decisions about employment, finances, and other opportunities. In some cases, algorithmic systems may be more or less favorable to certain groups or individuals, sparking substantial discussion of algorithmic fairness in public policy circles, academia, and the press. We broaden this discussion by exploring how members of potentially affected communities feel about algorithmic fairness. We conducted workshops and interviews with 44 participants from several populations traditionally marginalized by categories of race or class in the United States. While the concept of algorithmic fairness was largely unfamiliar, learning about algorithmic (un)fairness elicited negative feelings that connect to current national discussions about racial injustice and economic inequality. In addition to their concerns about potential harms to themselves and society, participants also indicated that algorithmic fairness (or lack thereof) could substantially affect their trust in a company or product.
The Benefits and Challenges of Video Calling for Emergency Situations
In the coming years, emergency calling services in North America will begin to incorporate new modalities for reporting emergencies, including video-based calling. The challenge is that we know little of how video calling systems should be designed and what benefits or challenges video calling might bring. We conducted observations and contextual interviews within three emergency response call centres to investigate these points. We focused on the work practices of call takers and dispatchers. Results show that video calls could provide valuable contextual information about a situation and help to overcome call taker challenges with information ambiguity, location, deceit, and communication issues. Yet video calls have the potential to introduce issues around control, information overload, and privacy if systems are not designed well. These results point to the need to think about emergency video calling along a continuum of visual modalities ranging from audio calls accompanied with images or video clips to one-way video streams to two-way video streams where camera control and camera work need to be carefully designed.
Whiskers: Exploring the Use of Ultrasonic Haptic Cues on the Face
Haptic cues are a valuable feedback mechanism for smart glasses. Prior work has shown how they can support navigation, deliver notifications and cue targets. However, a focus on actuation technologies such as mechanical tactors or fans has restricted the scope of research to a small number of cues presented at fixed locations. To move beyond this limitation, we explore perception of in-air ultrasonic haptic cues on the face. We present two studies examining the fundamental properties of localization, duration and movement perception on three facial sites suitable for use with glasses: the cheek, the center of the forehead, and above the eyebrow. The center of the forehead led to optimal performance with a localization error of 3.77mm and accurate duration (80%) and movement perception (87%). We apply these findings in a study delivering eight different ultrasonic notifications and report mean recognition rates of up to 92.4% (peak: 98.6%). We close with design recommendations for ultrasonic haptic cues on the face.
Let’s Play!: Digital and Analog Play between Preschoolers and Parents
Play is an enjoyable and developmentally useful part of early childhood, and parent-child play is a highly productive mechanism by which children learn to participate in the world. We conducted an observational lab study to examine how 15 parent-child pairs (children aged 4-6) respond to and play with tablet apps as compared to analog toys. We found that parents and children were less likely to engage with each other or to respond to each other’s bids for attention during play sessions with tablets versus play sessions with toys. We also observed that specific design features of tablet devices and children’s apps-such as one-sided interfaces, game paradigms that demand continual attention, and lack of support for parallel interaction-are the primary mechanisms shaping these differences. We provide guidance suggesting how children’s apps might be re-designed to preserve the advantages of digital play experiences while also evolving to build in the advantages of traditional toys.
Combating Attrition in Digital Self-Improvement Programs using Avatar Customization
Digital self-improvement programs (e.g., interventions, training programs, self-help apps) are widely accessible, but cannot employ the same degree of external regulation as programs delivered in controlled environments. As a result, they suffer from high attrition — even the best programs won’t work if people don’t use them. We propose that volitional engagement — facilitated through avatar customization — can help combat attrition. We asked 250 participants to engage daily for 3 weeks in a one-minute breathing exercise for anxiety reduction, using either a generic avatar or one that they customized. Customizing an avatar resulted in significantly less attrition and more sustained engagement as measured through login counts. The problem of attrition affects self-improvement programs across a range of domains; we provide a subtle, versatile, and broadly-applicable solution.
Use the Force Picker, Luke: Space-Efficient Value Input on Force-Sensitive Mobile Touchscreens
Picking values from long ordered lists, such as when setting a date or time, is a common task on smartphones. However, the system pickers and tables used for this require significant screen space for spinning and dragging, covering other information or pushing it off-screen. The Force Picker reduces this footprint by letting users increase and decrease values over a wide range using force touch for rate-based control. However, changing input direction this way is difficult. We propose three techniques to address this. With our best candidate, Thumb-Roll, the Force Picker lets untrained users achieve similar accuracy as a standard picker, albeit less quickly. Shrinking it to a single table row, 20% of the iOS picker height, slightly affects completion time, but not accuracy. Intriguingly, after 70 minutes of training, users were significantly faster with this minimized Thumb-Roll Picker compared to the standard picker, at the same accuracy and only 6% of the gesture footprint. We close with application examples.
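Rate-based control from force input can be sketched as a transfer function with a dead zone and a gain curve; the thresholds, exponent, and the direction toggle attributed to Thumb-Roll below are illustrative assumptions, not the Force Picker's actual tuning.

```python
# Hedged sketch: rate-based value control from a normalized force reading,
# with a dead zone and a power transfer function.
def force_to_rate(force, dead_zone=0.1, max_rate=30.0, gamma=2.0):
    """force in [0, 1]; returns value change per second."""
    if force <= dead_zone:
        return 0.0
    norm = (force - dead_zone) / (1.0 - dead_zone)
    return max_rate * (norm ** gamma)

def step(value, force, direction, dt):
    """direction: +1 or -1 (e.g., toggled by a Thumb-Roll-style gesture)."""
    return value + direction * force_to_rate(force) * dt
```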
Live Sketch: Video-driven Dynamic Deformation of Static Drawings
Creating sketch animations using traditional tools requires special artistic skills, and is tedious even for trained professionals. To lower the barrier for creating sketch animations, we propose a new system, Live Sketch, which allows novice users to interactively bring static drawings to life by applying deformation-based animation effects that are extracted from video examples. Dynamic deformation is first extracted as a sparse set of moving control points from videos and then transferred to a static drawing. Our system addresses a few major technical challenges, such as motion extraction from video, video-to-sketch alignment, and many-to-one motion-driven sketch animation. While each of the sub-problems could be difficult to solve fully automatically, we present reliable solutions by combining new computational algorithms with intuitive user interactions. Our pilot study shows that our system allows users both with and without animation skills to easily add dynamic deformation to static drawings.
ECGLens: Interactive Visual Exploration of Large Scale ECG Data for Arrhythmia Detection
The electrocardiogram (ECG) is commonly used to detect arrhythmias. Traditionally, a single ECG observation is used for diagnosis, making it difficult to detect irregular arrhythmias. Recent technology developments, however, have made it cost-effective to collect large amounts of raw ECG data over time. This promises to improve diagnosis accuracy, but the large data volume presents new challenges for cardiologists. This paper introduces ECGLens, an interactive system for arrhythmia detection and analysis using large-scale ECG data. Our system integrates an automatic heartbeat classification algorithm based on a convolutional neural network, an outlier detection algorithm, and a set of rich interaction techniques. We also introduce A-glyph, a novel glyph designed to improve the readability and comparison of ECG signals. We report results from a comprehensive user study showing that A-glyph improves efficiency in arrhythmia detection, and demonstrate the effectiveness of ECGLens in arrhythmia detection through two expert interviews.
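As an illustration of the outlier-detection component, the sketch below flags unusual beats with scikit-learn's IsolationForest applied to per-beat feature vectors; ECGLens's own algorithms and parameters are not reproduced here.

```python
# Hedged sketch: flagging unusual heartbeats with an off-the-shelf outlier
# detector on per-beat feature vectors (e.g., embeddings from a beat classifier).
import numpy as np
from sklearn.ensemble import IsolationForest

beats = np.random.randn(5000, 64)                # stand-in per-beat feature vectors
detector = IsolationForest(contamination=0.01, random_state=0).fit(beats)
outlier_idx = np.where(detector.predict(beats) == -1)[0]   # beats flagged for review
```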
“Protection on that Erection?”: Discourses of Accountability & Compromising Participation in Digital Sexual Health
This paper analyses sexual health workers’ ‘talk’ around their introduction of a digital platform to enhance a regionally managed condom distribution scheme for young people. In examining the discursive resources workers used in framing the sexual health service, their service users and digital technology, we argue that problematic ideologies around young people and sexuality were exercised and reproduced. Workers positioned themselves as the gatekeepers of young people’s sexual health, while young people were in turn constructed as ‘mischievous’ and ‘misguided’, with technology having a corruptive role over what was considered to be ‘healthy’ and ‘normal’ sexual relationships. We suggest our findings indicate severe challenges in developing community-commissioned platforms alongside service providers, and question how plausible user participation can be when attempting to conduct collaborative, participatory and engaged work in this context.
Fast & Furious: Detecting Stress with a Car Steering Wheel
Stress affects the lives of millions of people every day. In-situ sensing could enable just-in-time stress management interventions. We present the first work to detect stress using the movements of a car’s existing steering wheel. We extend prior work on PC peripherals and demonstrate that stress, expressed through muscle tension in the limbs, can be measured through the way we drive a car. We collected data in a driving simulator under controlled circumstances to vary the levels of induced stress within subjects. We analyze angular displacement data to estimate coefficients related to muscle tension using an inverse filtering technique. We show that the damped frequency of a mass-spring-damper model representing the arm is significantly higher during stress. Stress can be detected with only a few turns during driving. We validate these measures against a known stressor and calibrate our sensor against known stress measurements.
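The damped frequency of a second-order, mass-spring-damper-like response can be estimated from sampled angular displacement, for example by fitting an AR(2) model and reading the angle of its complex poles, as sketched below; this is a stand-in for the paper's inverse-filtering procedure, not a reproduction of it.

```python
# Hedged sketch: estimating the damped frequency of a second-order response
# from angular displacement samples via an AR(2) fit and its pole angle.
import numpy as np

def damped_frequency(x, dt):
    """x: angular displacement samples; dt: sample period (s); returns Hz or None."""
    X = np.column_stack([x[1:-1], x[:-2]])                 # [x[n-1], x[n-2]]
    a1, a2 = np.linalg.lstsq(X, x[2:], rcond=None)[0]      # x[n] ~ a1*x[n-1] + a2*x[n-2]
    poles = np.roots([1.0, -a1, -a2])
    if np.iscomplex(poles[0]):                             # oscillatory (underdamped) case
        return abs(np.angle(poles[0])) / (2 * np.pi * dt)
    return None
```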
Norms Matter: Contrasting Social Support Around Behavior Change in Online Weight Loss Communities
Online health communities (OHCs) provide support across conditions; for weight loss, OHCs offer support to foster positive behavior change. However, weight loss behaviors can also be subverted on OHCs to promote disordered eating practices. Using comments as proxies for support, we use computational linguistic methods to juxtapose similarities and differences in two Reddit weight loss communities, r/proED and r/loseit. We employ language modeling and find that word use in both communities is largely similar. Then, by building a word embedding model, specifically a deep neural network on comment words, we contrast the context of word use and find differences that imply different behavior change goals in these OHCs. Finally, these content and context norms predict whether a comment comes from r/proED or r/loseit. We show that norms matter in understanding how different OHCs provision support to promote behavior change and discuss the implications for design and moderation of OHCs.
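Contrasting the context of word use across the two communities can be illustrated by training separate embedding models and comparing nearest neighbours, as in the sketch below, which uses word2vec as an off-the-shelf stand-in for the paper's model; corpus names and parameters are placeholders.

```python
# Hedged sketch: comparing the embedding neighbourhoods of the same word in
# two communities' comment corpora (the word must appear in both vocabularies).
from gensim.models import Word2Vec

def contrast_word(word, corpus_a, corpus_b, topn=10):
    """corpus_a / corpus_b: lists of tokenized comments (lists of lowercase tokens)."""
    model_a = Word2Vec(sentences=corpus_a, vector_size=100, min_count=5, workers=4)
    model_b = Word2Vec(sentences=corpus_b, vector_size=100, min_count=5, workers=4)
    near_a = {w for w, _ in model_a.wv.most_similar(word, topn=topn)}
    near_b = {w for w, _ in model_b.wv.most_similar(word, topn=topn)}
    return near_a - near_b, near_b - near_a   # community-specific contexts of use
```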
“A Stalker’s Paradise”: How Intimate Partner Abusers Exploit Technology
This paper describes a qualitative study with 89 participants that details how abusers in intimate partner violence (IPV) contexts exploit technologies to intimidate, threaten, monitor, impersonate, harass, or otherwise harm their victims. We show that, at their core, many of the attacks in IPV contexts are technologically unsophisticated from the perspective of a security practitioner or researcher. For example, they are often carried out by a UI-bound adversary – an adversarial but authenticated user that interacts with a victim’s device or account via standard user interfaces – or by downloading and installing a ready-made application that enables spying on a victim. Nevertheless, we show how the sociotechnical and relational factors that characterize IPV make such attacks both extremely damaging to victims and challenging to counteract, in part because they undermine the predominant threat models under which systems have been designed. We discuss the nature of these new IPV threat models and outline opportunities for HCI research and design to mitigate these attacks.