MojiBoard: Generating Parametric Emojis with Gesture Keyboards

ABSTRACT

Inserting emojis can be cumbersome when users must swap through panels. From our survey, we learned that users often use a series of consecutive emojis to convey rich, nuanced non-verbal expressions such as emphasis, changes of expression, or micro stories. We introduce MojiBoard, an emoji entry technique that enables users to generate dynamic parametric emojis from a gesture keyboard. With MojiBoard, users can switch seamlessly between typing and parameterizing emojis.

CCS CONCEPTS
• Information systems → User interfaces; Interaction styles.

KEYWORDS

emoji; continuous interaction; expressive communication; gesture input; gesture keyboard; mobile

ACM Reference Format:

Jessalyn Alvina, Chengcheng Qu, Joanna McGrenere, and Wendy E. Mackay. 2019. MojiBoard: Generating Parametric Emojis with Gesture Keyboards. In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI’19 Extended Abstracts), May 4–9, 2019, Glasgow, Scotland UK. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3290607.3312771

INTRODUCTION

Around 40% of mobile activities involve text-based communication [3]. Prior research has focused on improving text input efficiency, for example typing speed, word prediction, or spelling and grammar correction. However, text messaging is not simply about producing text: users also appropriate it to support other forms of non-verbal expression. In particular, emojis, such as 😊, are often substituted for individual words or used to supplement the text. Lee et al. identified three common patterns of emoji use: 1) to express emotion, e.g. context, intensity, and emphasis; 2) for strategic reasons, e.g. reaction, self-representation, impression formation, and social presence; and 3) for functional purposes, e.g. as a substitute for or a supplement to text [6].

Most text-messaging applications on mobile devices let users choose from a long list of emojis, including animated ones, to insert into their conversations. Due to limited screen real estate, these lists appear on multiple panels, sorted by category, such as ‘face’, ‘animal’, and ‘flag’ (for example, see Gboard in Fig. 1). This entry technique is inefficient and cumbersome: users must perform a linear search while swapping among different panels [7, 8], and the text entry rate declines significantly the more panels are swapped. EmojiZoom [8] tried to address this issue by displaying all emojis at a smaller scale in one panel, enabling focus+context exploration. Even so, the context panel can only include a certain number of emojis before the scale becomes too small for effective exploration. When users insert rarely used emojis [7] or include a series of identical or different consecutive emojis [4], the corresponding emoji entry rate is likely to drop even further.

We are interested in simplifying emoji entry while adding greater emoji expressivity, in a fun and easy-to-learn way. We first sent out a questionnaire to better understand emoji use. Then, based on the results, we designed MojiBoard, an emoji input technique that lets users take advantage of gesture typing to quickly enter and adjust the parameters of animated emojis, eliminating the cost of panel swapping while adding fine control over the resulting expression.

STUDY: QUESTIONNAIRE ON EMOJI USE

We sent a questionnaire to 62 unpaid participants to understand their use and appropriation of emojis in text-messaging apps. Participants were mainly young adults who text their closest friends and family: 41% mostly send messages to their partner, 31% to best friends or siblings, and 12% to other family members. Only 16% of the participants mentioned that they used text messaging most often in a professional context, e.g. with colleagues or employers. Our participants were heavy users of text-messaging apps: 96.8% messaged their primary texting partner at least once per day, and 38.7% messaged them more than six times a day.

A particularly interesting result was the participants’ use of sequences of consecutive emojis to express an intense emotion. We asked how often they did this, and which patterns they used: sequences of the same emoji, e.g. 😂😂😂, of different emojis, e.g. 😉👏🎉, or a mix of both. The majority of the participants (75%) reported regular use of emoji sequences, and 61% rated their frequency of use at least 3 on a 1-to-5 scale (1 = ‘Never’, 5 = ‘Constantly’). Of these, 37.2% use sequences of the same emoji, 32.6% use sequences of different emojis, and the remaining 30.2% use both. This suggests that young adults are not only motivated to use emojis, but also spend the time needed to create sequences of emojis when they text their closest friends and family.

We used Pohl et al.’s [7] classification to assess emoji similarity. Some participants, e.g. P7, P15, and P51, combined similar emojis to express a rich, complex emotion, for example “🐵🙈🙉” to express a feeling of having no clue or “don’t know”. Others used sequences of completely different emojis to express subtle changes of emotion, e.g. P53’s “😉😛😈”, or to describe a story or an action, e.g. P46’s “😈😈🤓” (see Fig. 2). These young adults create sequences of emojis to convey richer, nuanced meanings that are not easily captured by a single emoji, which suggests a design opportunity: how can we help users form emojis with greater expressivity in a fun, simple, and easy-to-learn way?


MOJIBOARD DESIGN


We introduce MojiBoard, which augments CommandBoard [1] to generate animated, parametric emojis. This enables users to convey nuanced meanings, such as changes in emphasis or varied emotions, or even to tell “micro stories”. Like the earlier Expressive Keyboard [2], we map gesture input variations to output parameters: here, the emoji’s expression changes according to how the user performs the gesture. We chose a gesture keyboard because it is already in widespread use, users can reliably control their gesture variation [2], and unistroke gestures offer a potentially infinite number of input variations, especially compared to a tap gesture.

MojiBoard establishes three discrete interaction spaces: the keyboard, the command bar, and the upper space (Fig. 5). The keyboard supports both text input, i.e. typing, and emoji input. To enter an emoji, the user gesture types an emoji keyword, such as “cry”, continues into the space above the keyboard, and draws a /\ gesture, all in a single stroke (Fig. 4). MojiBoard thus expands CommandBoard’s functionality: it not only accepts emoji keywords as a new type of command, but also controls their parameters, so users can create personalized, animated emojis with a single unistroke gesture.
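To make the three spaces concrete, the following minimal sketch (in Python) routes a touch point’s vertical position to an interaction space; the zone names and pixel thresholds are illustrative assumptions, not values from our prototype.

# A minimal sketch of routing a touch point to MojiBoard's three
# interaction spaces. The pixel thresholds are illustrative assumptions.

KEYBOARD_TOP = 1200     # y-coordinate where the keyboard begins (assumed)
COMMAND_BAR_TOP = 1050  # y-coordinate where the command bar begins (assumed)

def classify_zone(y: float) -> str:
    """Map a touch point's y-coordinate (screen origin at top-left,
    y growing downward) to an interaction space."""
    if y >= KEYBOARD_TOP:
        return "keyboard"      # gesture typing
    if y >= COMMAND_BAR_TOP:
        return "command_bar"   # cross through emoji candidates
    return "upper_space"       # parameterize with the /\ gesture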

Generating Parametric Emojis. Most emoji systems associate a set of keywords with each emoji, for example “smile” for 😊 and “sad, cry” for 😭. MojiBoard lets users gesture type emoji keywords for quick insertion into their text messages. When the user gesture types, the four most likely word candidates appear: the highest-probability word is treated as the chosen word, and the rest appear in the keyboard’s suggestion bar (Fig. 3), often as auto-completions of longer words (see Fig. 5). MojiBoard progressively checks for emoji keywords and, in the case of a match, displays a preview of the associated emoji. To accept this emoji, the user slides into the space above the keyboard and performs the /\ gesture (Fig. 4). MojiBoard calculates the gesture’s size, indicated by the green bounding box (shown in Figures 3 and 4), and its curviness ratio, i.e. the radius of curvature, in real time. The bigger the bounding box, the bigger the emoji; similarly, the curvier the gesture, the more intense the emoji’s expression. Fig. 3 shows a relatively small and straight gesture, which generates a sad face with a small frown. As the user wiggles the gesture, which increases the curviness and the size of the bounding box, the emoji grows bigger and the expression changes from a small frown into a crying face (Fig. 4). The matching keyword is maintained until the finger is lifted or a different word is typed, which reduces the likelihood of accidentally changing the keyword while wiggling or inflating the gesture.
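As one illustration of the parameter extraction, the sketch below derives a bounding box and a simple curviness proxy (total path length over chord length) from the gesture’s sampled points; this proxy stands in for the radius-of-curvature computation, whose exact formula we do not reproduce here.

import math

def gesture_parameters(points):
    """Derive the two features mapped to emoji parameters:
    bounding-box size -> emoji size, curviness -> expression intensity.
    `points` is a list of (x, y) samples from the unistroke gesture."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    size = (max(xs) - min(xs)) * (max(ys) - min(ys))  # bounding-box area

    # Curviness proxy: total path length over straight-line distance.
    # A straight gesture yields ~1.0; wiggling inflates the ratio.
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1]) or 1e-6
    return size, path / chord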

If several emojis are associated with a particular keyword, the most frequent options appear in the command bar above the keyboard (Fig. 5). The user can then cross through the desired emoji while moving into the upper area. MojiBoard considers each word in the phrase as a potential emoji keyword, e.g. typing “tears” or “joy” displays the “face with tears of joy” 😂 emoji. The resulting emoji uses a two-second animation that transitions from a small frown to a crying face, producing a more dramatic expression. The user can tap the emoji to replay the animation three times, for a total of six seconds. To cancel emoji generation, the user draws a straight gesture above the keyboard (Fig. 4).
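A minimal sketch of the keyword lookup follows, with a small hypothetical table standing in for the full emoji keyword set.

# Hypothetical keyword table; the real mapping ships with the emoji set.
EMOJI_KEYWORDS = {
    "smile": ["😊"],
    "sad":   ["😭", "😢"],   # several options surface in the command bar
    "cry":   ["😭", "😢"],
    "tears": ["😂"],
    "joy":   ["😂"],
}

def match_keyword(candidate: str) -> list:
    """Called progressively on each word candidate while the user
    gesture types; a non-empty result triggers the emoji preview."""
    return EMOJI_KEYWORDS.get(candidate.lower(), [])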

Selecting Random Emojis. In addition to generating animated parametric emojis, MojiBoard can insert a series of random emojis drawn from the current set of categories. This offers a simple and fun way of creating ‘micro stories’ from rarely used emojis. When the user types “random”, MojiBoard displays a preview of all the emoji categories in the command bar (Fig. 5). The user can cross through the categories they wish to include; choosing two or more emojis from the same category involves exiting and then re-entering the command bar at the desired locations. The preview displays the selected emojis, which the user can insert with the /\ gesture or cancel immediately.
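The random selection amounts to one draw per crossed category, as in the sketch below; the category table is a hypothetical stand-in for the keyboard’s emoji set.

import random

# Hypothetical category table standing in for the keyboard's emoji set.
CATEGORIES = {
    "face":   ["😊", "😭", "😂", "😈"],
    "animal": ["🐵", "🙈", "🙉"],
    "object": ["🎉", "👏", "🔍"],
}

def random_sequence(crossed_categories):
    """Pick one random emoji per crossed category, in crossing order;
    re-entering the bar at the same category yields another pick."""
    return "".join(random.choice(CATEGORIES[c]) for c in crossed_categories)

# e.g. random_sequence(["face", "animal", "object"]) might yield "😈🙉🎉"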


TECHNICAL IMPLEMENTATION

Current emojis are represented with a two-byte Unicode character, and those generated by MojiBoard need not exceed a three-byte representation: a parametric value such as the curviness ratio can be captured in a single additional byte. Each platform or text-messaging application must decide how to render these emojis. MojiBoard illustrates changes in size and animation, but other possibilities could be included, such as stickers, skin-tone modifiers, or GIF image parameters.
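As an illustration of this byte budget, the sketch below packs an emoji code point together with one quantized parameter byte. Three bytes are used for the code point so the sketch covers emojis above U+FFFF; the wire format and quantization step are assumptions, not a specified encoding.

def encode_parametric(codepoint: int, curviness: float) -> bytes:
    """Pack an emoji code point plus one parameter byte. Three bytes
    cover any Unicode code point (up to U+10FFFF)."""
    level = max(0, min(255, round(curviness * 32)))  # quantize to 0..255
    return codepoint.to_bytes(3, "big") + bytes([level])

def decode_parametric(data: bytes):
    """Recover the emoji character and its parameter value."""
    return chr(int.from_bytes(data[:3], "big")), data[3] / 32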

CONCLUSION AND FUTURE WORK

We found that users often use emojis to convey nuanced meaning, such as emphasis, changes of emotion, and ‘micro stories’. MojiBoard lets users manipulate features of their gestures to modify the look of a parameterized, animated emoji, e.g. the intensity of its expression. Users need not search through emoji widget panels but can instead switch seamlessly between gesture typing and inserting or generating emojis. Users thus create highly personal emojis whose expressions are mapped directly to their individual gestures. We believe that future, more complete and sophisticated emoji engines could provide significantly more personalizable emojis. For example, while the random function for selecting novel emojis is fun, future work should explore alternative methods for creating micro stories, or for modifying non-human emojis such as 🎉. We can also consider how designers, or possibly even users, could control the mapping between gesture variation and emoji parameters. This would significantly increase the potential for personalization and expressivity, while needing fewer bytes than inserting multiple consecutive emojis. Note that we believe MojiBoard should be considered an addition to, rather than a replacement for, current emoji systems, since users may still want to browse through panels of emojis or type keywords associated with pre-defined, static emojis. We hope to expand MojiBoard to include parameterization from both the keyboard and the emoji widget. We are particularly interested in creating a parametric emoji engine that interpolates across different expressions, e.g. from happy to shocked to crying, and plan to conduct a field study to observe how users adopt and adapt MojiBoard in their daily conversations.

ACKNOWLEDGMENTS

We thank Xiaojun Bi for providing the gesture keyboard prototype used in MojiBoard. This work was partially supported by the European Research Council (ERC) grant no. 321135 CREATIV: Creating Co-Adaptive Human-Computer Partnerships, and the Natural Sciences and Engineering Research Council of Canada (NSERC).

REFERENCES

[1]  Jessalyn Alvina, Carla F. Griggio, Xiaojun Bi, and Wendy E. Mackay. 2017. CommandBoard: Creating a General-Purpose Command Gesture Input Space for Soft Keyboard. In Proc. ACM UIST’17. ACM, NY, USA, 17–28. https://doi.org/10.1145/3126594.3126639

[2]  Jessalyn Alvina, Joseph Malloch, and Wendy E. Mackay. 2016. Expressive Keyboards: Enriching Gesture-Typing on Mobile Devices. In Proc. ACM UIST’16. ACM, NY, USA, 583–593. https://doi.org/10.1145/2984511.2984560

[3]  Barry Brown, Moira McGregor, and Donald McMillan. 2014. 100 Days of iPhone Use: Understanding the Details of Mobile Device Use. In Proc. ACM MobileHCI’14. ACM, NY, USA, 223–232. https://doi.org/10.1145/2628363.2628377

[4]  Zhenpeng Chen, Xuan Lu, Wei Ai, Huoran Li, Qiaozhu Mei, and Xuanzhe Liu. 2018. Through a Gender Lens: Learning Usage Patterns of Emojis from Large-Scale Android Users. In Proc. WWW’18. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 763–772. https://doi.org/10.1145/3178876.3186157

[5]  Joonhwan Lee, Soojin Jun, Jodi Forlizzi, and Scott E. Hudson. 2006. Using Kinetic Typography to Convey Emotion in Text-based Interpersonal Communication. In Proc. ACM DIS’06. ACM, NY, USA, 41–49. https://doi.org/10.1145/1142405.1142414

[6]  Joon Young Lee, Nahi Hong, Soomin Kim, Jonghwan Oh, and Joonhwan Lee. 2016. Smiley Face: Why We Use Emoticon Stickers in Mobile Messaging. In Proc. ACM MobileHCI’16. ACM, NY, USA, 760–766. https://doi.org/10.1145/2957265.2961858

[7]  Henning Pohl, Christian Domin, and Michael Rohs. 2017. Beyond Just Text: Semantic Emoji Similarity Modeling to Support Expressive Communication. ACM Trans. Comput.-Hum. Interact. 24, 1, Article 6 (March 2017), 42 pages. https://doi.org/10.1145/3039685

[8]  Henning Pohl, Dennis Stanke, and Michael Rohs. 2016. EmojiZoom: Emoji Entry via Large Overview Maps 😄🔍. In Proc. ACM MobileHCI’16. ACM, NY, USA, 510–517. https://doi.org/10.1145/2935334.2935382

[9]  Shumin Zhai and Per Ola Kristensson. 2012. The Word-gesture Keyboard: Reimagining Keyboard Interaction. Commun. ACM 55, 9 (Sept. 2012), 91–101. https://doi.org/10.1145/2330667.2330689
