Publications

Exploring Uni-manual Around Ear Off-Device Gestures for Earables

Published in IMWUT, 2024

The small form factor of earable (i.e., ear-mounted wearable) devices limits their physical input space. Off-device earable input in alternate mid-air and on-skin around-ear interaction spaces using uni-manual gestures can address this limitation. Segmenting these alternate interaction spaces into multiple gesture regions, so that off-device gestures can be reused across regions, can expand the earable input vocabulary by a large margin. Although prior earable interaction research has explored off-device gesture preferences and recognition techniques in such interaction spaces, supporting gesture reuse over multiple gesture regions needs further exploration. We collected and analyzed 7,560 uni-manual gesture motion samples from 18 participants to explore earable gesture reuse through segmentation of the on-skin and mid-air spaces around the ear. Our results show that gesture performance degrades significantly beyond 3 mid-air and 5 on-skin around-ear gesture regions for different uni-manual gesture classes (e.g., swipe, pinch, tap). We also present qualitative findings on the regions (and associated boundaries) that end-users most and least prefer for different uni-manual gesture shapes across both interaction spaces. Our results complement earlier elicitation studies and interaction technologies for earables, helping to expand the gestural input vocabulary and potentially drive future commercialization of such devices.
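
The vocabulary gain from region segmentation is multiplicative; a small worked example using the region limits above (the gesture-class count g = 6 is illustrative, not a result from the paper):

```latex
% Reusing g gesture classes over R distinguishable regions multiplies
% the vocabulary: |V| = g * R. With the limits reported above:
R = R_{\text{mid-air}} + R_{\text{on-skin}} = 3 + 5 = 8,
\qquad |V| = g \cdot R \quad (\text{e.g. } g = 6 \Rightarrow |V| = 48)
```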

Recommended citation: Shimon, S.S.A., Neshati, A., Sun, J., Xu, Q. and Zhao, J., 2024. Exploring Uni-manual Around Ear Off-Device Gestures for Earables. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 8(1), pp. 1-29. http://junweis.github.io/files/3643513.pdf

“I consider VR Table Tennis to be my secret weapon!”: An Analysis of the VR Table Tennis Players’ Experiences Outside the Lab

Published in SUI, 2023

Thanks to advances in stand-alone Virtual Reality (VR), users can play realistic simulations of real-life sports in their homes. In these game simulations, players control their avatars with the same movements as in real life (RL) while playing against a human or AI opponent, which makes VR sports attractive to players. In this paper, we surveyed a popular VR table tennis game community, focusing on understanding their demographics, challenges, and experiences with skill transfer between VR and RL. Our results show that VR table tennis players are primarily men, live in Europe or Asia, and are on average 38 years old. We also found that the current state of VR technology affects the player experience and that players see VR as a convenient way to play matches, while RL remains better for socialization. Finally, we identified skills, such as backhand and forehand strikes, that players perceived as transferring from VR to RL and vice versa. Our findings can serve as a valuable resource for VR table tennis game developers seeking to integrate mid-air controllers into their future projects.

Recommended citation: Karatas, E., Sunday, K., Apak, S.E., Li, Y., Sun, J., Batmaz, A.U. and Barrera Machuca, M.D., 2023, October. "I consider VR Table Tennis to be my secret weapon!": An Analysis of the VR Table Tennis Players' Experiences Outside the Lab. In Proceedings of the 2023 ACM Symposium on Spatial User Interaction (pp. 1-12). http://junweis.github.io/files/3607822.3614539.pdf

The Effect of the Vergence-Accommodation Conflict on Virtual Hand Pointing in Immersive Displays

Published in CHI, 2022

Previous work hypothesized that, for Virtual Reality (VR) and Augmented Reality (AR) displays, a mismatch between disparity and optical focus cues, known as the vergence-accommodation conflict (VAC), affects depth perception and thus limits user performance in 3D selection tasks within arm's reach (peri-personal space). To investigate this question, we built a multifocal stereo display that eliminates the influence of the VAC for pointing within the investigated distances. In a user study, participants performed a virtual hand 3D selection task with targets arranged laterally or along the line of sight, with and without a change in visual depth, in display conditions with and without the VAC. Our results show that the VAC influences 3D selection performance in common VR and AR stereo displays and that multifocal displays have a positive effect on 3D selection performance with a virtual hand.
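
The conflict is easy to quantify: a stereo display drives vergence toward the rendered target distance d while its optics fix accommodation at a single focal plane f. A minimal worked example using standard definitions, with an illustrative 1.5 m focal plane (not the study's apparatus):

```latex
\text{vergence angle: } \theta(d) = 2\arctan\!\left(\tfrac{\mathrm{IPD}}{2d}\right),
\qquad
\text{conflict: } \Delta = \left|\tfrac{1}{d} - \tfrac{1}{f}\right| \text{ (diopters)}
% Example: a peri-personal target at d = 0.5 m on a display focused at
% f = 1.5 m gives \Delta = |2.00 - 0.67| = 1.33 D of conflict; a
% multifocal display can set f = d, driving \Delta to zero.
```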

Recommended citation: Batmaz, A.U., Barrera Machuca, M.D., Sun, J. and Stuerzlinger, W., 2022, April. The Effect of the Vergence-Accommodation Conflict on Virtual Hand Pointing in Immersive Displays. In CHI Conference on Human Factors in Computing Systems (pp. 1-15). http://junweis.github.io/files/3491102.3502067.pdf

Global Scene Filtering, Exploration, and Pointing in Occluded Virtual Space

Published in INTERACT, 2021

Target acquisition in an occluded environment is challenging, given the omni-directional, first-person view in virtual reality (VR). We propose Solar-Casting, a global scene-filtering technique for managing occlusion in VR. To improve target search, users control a reference sphere centered at their head through varied occlusion-management modes: Hide, SemiT (Semi-Transparent), and Rotate. In a preliminary study, we find SemiT to be better suited for understanding context without sacrificing performance, by applying semi-transparency to targets within the controlled sphere. We then compare Solar-Casting to highly efficient selection techniques for acquiring targets in a dense and occluded VR environment. We find that Solar-Casting performs competitively with other techniques in known environments, where target location information is revealed. However, in unknown environments that require target search, Solar-Casting outperforms existing approaches. We conclude with scenarios demonstrating how Solar-Casting can be applied to crowded and occluded environments in VR applications.
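
A minimal sketch of the filtering idea in Python, with hypothetical names and an illustrative transparency level; the paper implements this inside a VR engine, and the Rotate mode is omitted here:

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    position: tuple          # (x, y, z) in metres
    opacity: float = 1.0
    visible: bool = True

def _distance(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def apply_solar_casting(objects, head_position, radius, mode):
    """Apply an occlusion-management mode to every object inside the
    user-controlled sphere centred at the head. mode is 'hide' or
    'semit'; the 0.3 opacity is an assumption, not the paper's value."""
    for obj in objects:
        inside = _distance(obj.position, head_position) <= radius
        obj.visible = not (inside and mode == "hide")
        obj.opacity = 0.3 if (inside and mode == "semit") else 1.0
```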

Recommended citation: Chen, Y., Sun, J., Xu, Q., Lank, E., Irani, P. and Li, W., 2021, August. Global Scene Filtering, Exploration, and Pointing in Occluded Virtual Space. In IFIP Conference on Human-Computer Interaction (pp. 156-176). Springer, Cham. http://junweis.github.io/files/978-3-030-85607-6_11.pdf

Empirical Evaluation of Moving Target Selection in Virtual Reality using Egocentric Metaphors

Published in INTERACT, 2021

Virtual hand and pointer metaphors are among the key approaches for target selection in immersive environments. However, targeting moving objects is complicated by factors including target speed, direction, and depth, such that a basic implementation of these techniques might fail to optimize user performance. We present the results of two empirical studies comparing characteristics of virtual hand and pointer metaphors for moving-target acquisition. In the first study, we examine the impact of depth on users' performance when targets move beyond and within arm's reach. We find that movement in depth strongly affects both metaphors. In a follow-up study, we design a reach-bounded Go-Go (rbGo-Go) technique to address the challenges of the virtual hand and compare it to Ray-Casting. We find that target width and speed are significant determinants of user performance, and we highlight the pros and cons of each technique in the given context. Our results inform UI design for the immersive selection of moving targets.
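
For context, the classic Go-Go mapping (Poupyrev et al.) extends the virtual hand nonlinearly once the physical hand passes a distance threshold. A sketch follows; the clamp is our reading of what "reach-bounded" adds, and all constants are illustrative:

```python
def gogo_reach(r_real, d=0.45, k=6.0):
    """Classic Go-Go: 1:1 mapping within d metres of the torso,
    quadratic amplification beyond it (constants illustrative)."""
    if r_real < d:
        return r_real
    return r_real + k * (r_real - d) ** 2

def rb_gogo_reach(r_real, d=0.45, k=6.0, r_max=3.0):
    """Assumed 'reach-bounded' variant: cap the virtual reach at
    r_max so the amplified gain stays controllable at a distance."""
    return min(gogo_reach(r_real, d, k), r_max)
```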

Recommended citation: Chen, Y., Sun, J., Xu, Q., Lank, E., Irani, P. and Li, W., 2021, August. Empirical evaluation of moving target selection in virtual reality using egocentric metaphors. In IFIP Conference on Human-Computer Interaction (pp. 29-50). Springer, Cham. http://junweis.github.io/files/978-3-030-85610-6_3.pdf

PenShaft: Enabling Pen Shaft Detection and Interaction for Touchscreens

Published in Augmented Human, 2021

PenShaft is a battery-free, easy-to-implement solution for augmenting the shaft of a capacitive pen with interactive capabilities. By applying conductive materials in a specific pattern on the pen's shaft, we can detect when the pen is laid on a capacitive touchscreen. This enables on-pen interactions, such as button clicks or swiping, and whole-shaft interactions, such as rotating and dragging the stylus. PenShaft supports at least six interactions, freeing the pen from having to interact with the layers or menus of a conventional user interface. We validate a device's capability to detect these six interactions, with all but two achieving success rates above 95%. We then present a series of applications to demonstrate the flexibility and utility of these interactions when using the pen.
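
One plausible way to recognise the shaft on a commodity touchscreen is to match the simultaneous contact points produced by the conductive pads against their known spacing. A sketch with made-up pad geometry; the paper's actual pattern and detector may differ:

```python
import itertools
import math

PAD_GAPS = (30.0, 45.0)  # hypothetical pad spacings along the shaft, mm
TOLERANCE = 3.0          # matching tolerance, mm

def detect_shaft(touches):
    """Return the ordered pad contacts if three touch points are
    collinear with the expected gaps, else None."""
    for a, b, c in itertools.permutations(touches, 3):
        ab, bc, ac = math.dist(a, b), math.dist(b, c), math.dist(a, c)
        collinear = abs(ab + bc - ac) < TOLERANCE  # b lies between a and c
        if (collinear and abs(ab - PAD_GAPS[0]) < TOLERANCE
                and abs(bc - PAD_GAPS[1]) < TOLERANCE):
            return a, b, c

# e.g. detect_shaft([(10, 10), (40, 10), (85, 10)]) matches the pattern;
# tracking the matched triplet across frames yields rotation and dragging.
```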

Recommended citation: Sun, J., Foley, M., Xu, Q., Li, C., Li, J., Irani, P. and Li, W., 2021, May. PenShaft: Enabling Pen Shaft Detection and Interaction for Touchscreens. In 12th Augmented Human International Conference (pp. 1-9). http://junweis.github.io/files/3460881.3460934.pdf

Extended Sliding in Virtual Reality

Published in VRST, 2019

Although precise 3D positioning is not always necessary in virtual environments, it is still an important task for current and future applications of Virtual Reality (VR), including 3D modelling, engineering, and scientific applications. We focus on 3D positioning techniques in immersive environments that use a 6DOF controller as the input device and present a new technique that improves 3D positioning performance in VR, in both speed and accuracy. Towards this goal, we adapted an extended sliding technique to VR systems with a controller as the input device and compared it with previously presented 3DOF positioning techniques. The results showed that our new Extended VR Sliding technique significantly improved accuracy for 3D positioning tasks, especially for targets in contact with the scene.

Recommended citation: Sun, J. and Stuerzlinger, W., 2019, November. Extended sliding in virtual reality. In 25th ACM Symposium on Virtual Reality Software and Technology (pp. 1-5). http://junweis.github.io/files/3359996.3364251.pdf

Object Sliding and Beyond: Investigating Object Manipulation in 3D User Interfaces

Published in SFU Thesis, 2019

3D manipulation is one of the fundamental tasks for interaction in virtual environments. Yet, it can be difficult for users to understand the spatial relationships between 3D objects and how to manipulate them in a 3D scene, as, unlike in the physical world, users have neither the same visual cues for understanding scene structure nor the constraints and affordances they can leverage for interaction. My goal is to create better user interfaces for 3D manipulation platforms, with a focus on positioning objects. I designed efficient, accurate, and easy-to-use 3D positioning techniques for both desktop and virtual reality (VR) systems. My work also contributes guidelines for designing and developing 3D modelling software for desktop and VR systems, enabling 3D content designers, game designers, and even novice users to benefit from improved efficiency and accuracy in 3D positioning tasks. Much of my thesis work builds on a 3D object sliding technique, where objects slide on surfaces behind them, which helps with some positioning tasks. First, I improved 3D positioning on a desktop system, with the mouse and keyboard as input devices. I presented two new techniques that significantly outperform the industry-standard widget-based 3D positioning technique for tasks involving floating objects or objects that can be at multiple positions in visual depth. Second, I proposed a new technique that allows users to select and position hidden objects. The new technique also outperformed 3D widgets. Then, I applied my techniques in a VR system with a head-mounted display (HMD) and compared the performance of different input devices. I found that the combination of the mouse with my new positioning technique is still the best solution, even in VR. In the remainder of my thesis work, focusing on tasks involving more distant objects, I investigated manipulation techniques in VR that do not rely on the availability of a mouse. I designed and implemented a technique that significantly improved accuracy for 3D positioning tasks for targets in contact with the scene.

Recommended citation: Sun, J., 2019. Object sliding and beyond: Investigating object manipulation in 3D user interfaces (Doctoral dissertation, Communication, Art & Technology: School of Interactive Arts and Technology). https://summit.sfu.ca/item/19803

Selecting and Sliding Hidden Objects in 3D Desktop Environments

Published in Graphics Interface, 2019

Selecting and positioning objects in 3D space are fundamental tasks in 3D user interfaces. We present two new techniques to improve 3D selection and positioning. We first augment 3D user interfaces with a new technique that enables users to select objects hidden from the current viewpoint. This layer-based technique for selecting hidden objects works for arbitrary objects and scenes. We also extend a mouse-based sliding technique to work even when the manipulated object is hidden behind other objects, by keeping the manipulated object fully visible through a transparency mask during drag-and-drop positioning. Our user study shows that with the new techniques, users can easily select hidden objects and that sliding with transparency is faster than the common 3D widgets technique.
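
The layer-based idea can be sketched in a few lines: gather every object the pick ray passes through, order the hits by depth, and let the user step an index through those "layers". A Python sketch that assumes each object exposes a hypothetical intersect() method:

```python
def pick_layer(scene, origin, direction, layer):
    """Return the object at depth index `layer` along the pick ray.
    Cycling `layer` (e.g. via scroll or a key) reaches objects hidden
    behind the front-most hit. Assumes obj.intersect(origin, direction)
    returns the hit distance, or None if the ray misses."""
    hits = []
    for obj in scene:
        t = obj.intersect(origin, direction)
        if t is not None:
            hits.append((t, id(obj), obj))  # id() breaks distance ties
    if not hits:
        return None
    hits.sort()  # near-to-far
    return hits[min(layer, len(hits) - 1)][2]
```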

Recommended citation: Sun, J. and Stuerzlinger, W., 2019, June. Selecting and Sliding Hidden Objects in 3D Desktop Environments. In Graphics Interface (Vol. 8, pp. 1-8). http://junweis.github.io/files/gi2019-8.pdf

Comparing Input Methods and Cursors for 3D Positioning with Head-Mounted Displays

Published in SAP, 2018

Moving objects is an important task in 3D user interfaces. In this work, we focus on (precise) 3D object positioning in immersive virtual reality systems, especially head-mounted displays (HMDs). To evaluate input-method performance for 3D positioning, we build on an existing sliding algorithm, in which objects slide on any contact surface. Sliding enables rapid positioning of objects in 3D scenes on a desktop system but had yet to be evaluated in an immersive system. We performed a user study that compared the efficiency and accuracy of different input methods (mouse, hand-tracking, and trackpad) and cursor display conditions (stereo cursor and one-eyed cursor) for 3D positioning tasks with the HTC Vive. The results showed that the mouse outperformed hand-tracking and the trackpad in terms of both efficiency and accuracy. The stereo and one-eyed cursors did not differ significantly in performance, yet the stereo cursor condition was rated more favourably. For situations where the user is seated in immersive VR, the mouse is thus still the best input device for precise 3D positioning.

Recommended citation: Sun, J., Stuerzlinger, W. and Riecke, B.E., 2018, August. Comparing input methods and cursors for 3D positioning with head-mounted displays. In Proceedings of the 15th ACM Symposium on Applied Perception (pp. 1-8). http://junweis.github.io/files/3225153.3225167.pdf

Selecting Invisible Objects

Published in IEEE VR, 2018

We augment 3D user interfaces with a new technique that enables users to select objects that are invisible from the current viewpoint. We present a layer-based method for selecting invisible objects, which works for arbitrary objects and scenes. Our user study shows that with the new technique, users can easily select hidden objects.

Recommended citation: Sun, J. and Stuerzlinger, W., 2018, March. Selecting invisible objects. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 697-698). IEEE. http://junweis.github.io/files/8446199.pdf

Fluid VR: Extended Object Associations for Automatic Mode Switching in Virtual Reality

Published in 3DUI Contest, 2018

Constrained interaction and navigation methods for virtual reality reduce the complexity of the interaction. Yet, with previously presented solutions, users need to learn new interaction tools or remember different actions for changing between different interaction methods. In this paper, we propose Fluid VR, a new 3D user interface for interactive virtual environments that lets users seamlessly transition between navigation and selection. Based on the selected object's properties, Fluid VR applies specific constraints to the interaction or navigation associated with the object. This way, users have better control over their actions, without having to change tools or activate different modes of interaction.
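
In the spirit of the idea (not the paper's implementation), object associations can be a simple lookup from an object's properties to the interaction its selection should trigger; every entry below is invented for illustration:

```python
# property tag -> (action, constraint); entries are hypothetical
ASSOCIATIONS = {
    "ground":  ("navigate", "walk-on-surface"),
    "vehicle": ("navigate", "follow-path"),
    "movable": ("manipulate", "unconstrained"),
    "hinged":  ("manipulate", "rotate-about-hinge"),
}

def mode_for(selected_tag):
    """Switch modes automatically from the selection itself, so the
    user never picks a tool; unknown objects default to manipulation."""
    return ASSOCIATIONS.get(selected_tag, ("manipulate", "unconstrained"))
```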

Recommended citation: Machuca, M.D.B., Sun, J., Pham, D.M. and Stuerzlinger, W., 2018, March. Fluid VR: Extended Object Associations for Automatic Mode Switching in Virtual Reality. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 846-847). IEEE. http://junweis.github.io/files/8446437.pdf

Shift-Sliding and Depth-Pop for 3D Positioning

Published in SUI, 2016

Moving objects is an important task in 3D user interfaces. We describe two new techniques for 3D positioning, designed for a mouse, but usable with other input devices. The techniques enable rapid, yet easy-to-use positioning of objects in 3D scenes. With sliding, the object follows the cursor and moves on the surfaces of the scene. Our techniques enable precise positioning of constrained objects. Sliding assumes that by default objects stay in contact with the scene’s front surfaces, are always at least partially visible, and do not interpenetrate other objects. With our new Shift-Sliding method the user can override these default assumptions and lift objects into the air or make them collide with other objects. Shift-Sliding uses the local coordinate system of the surface that the object was last in contact with, which is a new form of context-dependent manipulation. We also present Depth-Pop, which maps mouse wheel actions to all object positions along the mouse ray, where the object meets the default assumptions for sliding. For efficiency, both methods use frame buffer techniques. Two user studies show that the new techniques significantly speed up common 3D positioning tasks.
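
A CPU-side sketch of the Depth-Pop enumeration; the paper computes these positions efficiently with frame-buffer queries, so the scene predicates below are assumed stand-ins:

```python
def depth_pop_positions(scene, ray_origin, ray_dir, obj,
                        step=0.01, max_t=50.0):
    """March the object along the mouse ray and record one position per
    contiguous span where the sliding defaults hold: in contact with a
    front surface, at least partially visible, not interpenetrating.
    Mouse-wheel clicks then index into the returned list. Assumes a
    scene object exposing the three hypothetical predicates below."""
    positions, was_valid, t = [], False, 0.0
    while t < max_t:
        pos = tuple(o + t * d for o, d in zip(ray_origin, ray_dir))
        valid = (scene.in_contact(obj, pos)
                 and scene.partially_visible(obj, pos)
                 and not scene.interpenetrates(obj, pos))
        if valid and not was_valid:  # record each valid span once
            positions.append(pos)
        was_valid = valid
        t += step
    return positions
```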

Recommended citation: Sun, J., Stuerzlinger, W. and Shuralyov, D., 2016, October. Shift-sliding and depth-pop for 3D positioning. In Proceedings of the 2016 Symposium on Spatial User Interaction (pp. 69-78). http://junweis.github.io/files/2983310.2985748.pdf

Automatic Classification of Epilepsy Lesions

Published in UWO Thesis, 2012

Epilepsy is a common and diverse set of chronic neurological disorders characterized by seizures. Epileptic seizures result from abnormal, excessive, or hypersynchronous neuronal activity in the brain. Seizure types are organized firstly according to whether the source of the seizure within the brain is localized or distributed. In this work, our objective is to validate the use of MRI (Magnetic Resonance Imaging) for localizing the seizure focus for improved surgical planning. We apply computer vision and machine learning techniques to tackle the problem of epilepsy lesion classification. First, datasets of digitized histology images from the brain cortices of different patients were obtained by medical imaging scientists and provided to us, with some images pre-labeled as normal or lesion. We evaluate a variety of image feature types popular in the computer vision community to find those appropriate for epilepsy lesion classification. Finally, we train and test Boosting, Support Vector Machine (SVM), and Nearest Neighbor classifiers to label the images as normal or lesion. We obtain at least 90.0% accuracy in most of the classification experiments, with a best accuracy of 93.3%. We also automatically compute neuron densities. As far as we know, our work on histology image classification and automatic quantification of focal cortical dysplasia in the correlation study of MRI and epilepsy histopathology is the first of its kind. Our method could potentially provide useful information for surgical planning.
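
A minimal sketch of the classification stage with scikit-learn, using synthetic placeholder data in place of the histology features; the thesis also evaluates Boosting and Nearest Neighbor classifiers, and its feature types are not reproduced here:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 128))    # placeholder patch feature vectors
y = rng.integers(0, 2, size=60)   # placeholder labels: 0 normal, 1 lesion

# Standardize features, then fit an RBF-kernel SVM; report 5-fold
# cross-validated accuracy, as in the thesis's evaluation protocol.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```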

Recommended citation: Sun, J., 2012. Automatic Classification of Epilepsy Lesions (Master thesis, Western University). https://ir.lib.uwo.ca/etd/1037