Arata Jingu | 神宮 亜良太

Arata Jingu is a Ph.D. student in the Human-Computer Interaction Lab at Saarland University, advised by Prof. Dr. Jürgen Steimle.

He received his M.S. from the University of Tokyo, where he researched ultrasonic lip stimulation with Prof. Hiroyuki Shinoda. He received his B.E. from the University of Tokyo, where he studied iris image sensing using high-speed vision with Prof. Masatoshi Ishikawa. He completed research internships with Prof. Masahiko Inami on novel VR interactions and with Prof. Pedro Lopes on human-computer integration.

CV | Google Scholar | Twitter | YouTube | Email





Peer-Reviewed Publications

More projects are underway.

LipNotif: Use of Lips as a Non-Contact Tactile Notification Interface Based on Ultrasonic Tactile Presentation

Arata Jingu, Takaaki Kamigaki, Masahiro Fujiwara, Yasutoshi Makino, Hiroyuki Shinoda
In Proc. UIST'21 (full paper)

We propose LipNotif, a non-contact tactile notification system that presents airborne ultrasound tactile stimuli to the lips. The design builds on the lips' inherent structure and tactile properties under mid-air ultrasound haptics. LipNotif lets users receive tactile notifications through their lips without occupying busy eyes, ears, or hands, and without wearing bulky devices.

Paper | Video | Talk

Tactile Perception Characteristics of Lips Stimulated by Airborne Ultrasound

Arata Jingu, Masahiro Fujiwara, Yasutoshi Makino, Hiroyuki Shinoda
In Proc. WHC'21 (full paper)

We investigated the tactile perception characteristics of the lips in mid-air ultrasound haptics. Tactile thresholds were lowest at the valley-shaped area of the lips (location), under lateral modulation with periodic circular trajectories (LMc) (modulation type), and at 40 Hz (modulation frequency).

Paper | Talk

Dynamic Iris Authentication by High-speed Gaze and Focus Control

Tomohiro Sueishi, Arata Jingu, Shoji Yachida, Michiaki Inoue, Yuka Ogino, Masatoshi Ishikawa
In Proc. SII'21 (short paper)

We propose a dynamic iris authentication system using high-speed gaze and focus control driven by high-speed image processing. We steer high-speed rotational mirrors and a liquid-based variable-focus lens via triangulation with a wide-angle camera. We also modulate the liquid lens with an additive sine wave to capture the most sharply focused iris image.



VR Projects and Exhibitions

I have a background in VR development and exhibitions.

Be in"tree"sted in | きになるき

International Virtual Reality Contest 2019, Tokyo (Laval Virtual Award)
Laval Virtual 2020, Virtual

A VR experience of becoming a tree and living through the four seasons.
Audiences can also interact with the tree user through a miniature tangible tree.
(w/ three co-creators, my part: VR Development/Server Development)

Paper | Video

Autonomous Shadow | 自律する影

IIIExhibition 2018 "Dest-logy REBUILD", Tokyo

Your shadow begins to move autonomously.
(w/ two co-creators, my part: Python/OpenCV/Shadow Image Processing/OSC)



The 69th Komaba Festival (2018), Tokyo

A Splatoon-like AR battle game.