Research


Corneal imaging

  • Estimating Focused Object using Corneal Surface Image


Researchers are considering the use of eye tracking in head-mounted camera systems. Typical methods require detailed calibration in advance, but long periods of use disrupt the calibration between the eye and the scene camera. Moreover, even when a portable eye tracker estimates the point-of-regard, the object the user is focused upon cannot always be identified. Therefore, we propose a novel method for estimating the object that a user is focused upon, in which an eye camera captures the reflection on the corneal surface. Eye and environment information can be extracted from the corneal surface image simultaneously. We use inverse ray tracing to rectify the reflected image and the scale-invariant feature transform (SIFT) to identify the object at the point-of-regard. Unwarped images can also be generated continuously from corneal surface images. We consider that the proposed method could be applied to a guidance system, and we confirmed the feasibility of this application in experiments that estimated the focused object and the point-of-regard.
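
As a rough illustration of the matching step, the sketch below matches an unwarped corneal image against a reference image of a candidate object using SIFT in OpenCV. The file names and the ratio-test threshold are assumptions for the example, not details taken from the paper.

```python
# Minimal sketch: matching an unwarped corneal image against a candidate
# object image with SIFT (assumes OpenCV >= 4.4; file names are hypothetical).
import cv2

unwarped = cv2.imread("unwarped_corneal.png", cv2.IMREAD_GRAYSCALE)
candidate = cv2.imread("candidate_object.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(unwarped, None)
kp2, des2 = sift.detectAndCompute(candidate, None)

# Lowe's ratio test keeps only distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# A simple decision rule: report as the focused object the candidate with
# the most good matches near the estimated point-of-regard.
print(f"{len(good)} good SIFT matches")
```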

  • Remote Corneal Imaging by Integrating a 3D Face Model and an Eyeball Model


In corneal imaging, a 3D eyeball model is essential for generating an undistorted image, so the relationship between the eye and the eye camera is conventionally fixed with a head-mounted device. Remote corneal imaging has several potential applications, such as surveillance systems and driver monitoring. We therefore integrated a 3D eyeball model with a 3D face model to enable remote corneal imaging. We conducted evaluation experiments and confirmed the feasibility of remote corneal imaging. We showed that the center of the eyeball can be estimated from face tracking, so corneal imaging can function as continuous remote eye tracking.
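
A minimal sketch of how the eyeball center might be placed from a tracked face pose, assuming the face tracker outputs a rotation and translation from face-model coordinates to camera coordinates and that the eyeball center sits at a fixed, per-user calibrated offset in the face frame. The offset value below is hypothetical.

```python
# Minimal sketch: placing the 3D eyeball center from a tracked face pose.
# Assumes a face tracker yields rotation R (3x3) and translation t (3,)
# mapping face-model coordinates to camera coordinates; the offset is a
# hypothetical per-user calibration constant, in millimeters.
import numpy as np

EYEBALL_OFFSET_FACE = np.array([30.0, -35.0, -25.0])  # mm, hypothetical

def eyeball_center_camera(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Transform the fixed eyeball-center offset into camera coordinates."""
    return R @ EYEBALL_OFFSET_FACE + t
```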


3D PoG

  • Estimating 3D Point-of-Gaze and Focused Object


Unlike conventional portable eye-tracking methods, which estimate the point-of-regard only in the 2D image coordinates of the head-mounted camera, the techniques proposed here present richer information about a person’s gaze while they move over a wide area. They also support visualizing scanpaths while a user wearing a head-mounted device makes natural head movements. We employ a Visual SLAM technique to estimate the head pose and extract environmental information, so the proposed method obtains a 3D point-of-regard even while the head moves. Furthermore, scanpaths can be appropriately overlaid on image sequences to support quantitative analysis, and the reconstructed 3D environment is used to detect objects of focus and to visualize an attention map.
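
For the 3D point-of-regard itself, one natural formulation is to cast the gaze ray against the reconstructed geometry. The sketch below is a standard Möller–Trumbore ray-triangle intersection in NumPy, shown only as an illustration of that step; the variable names and conventions are ours, not taken from the paper.

```python
# Minimal sketch: a 3D point-of-regard as the intersection of the gaze ray
# (origin o, direction d, in SLAM world coordinates) with a reconstructed
# triangle (v0, v1, v2). Standard Möller-Trumbore intersection test.
import numpy as np

def gaze_hit(o, d, v0, v1, v2, eps=1e-9):
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:               # gaze ray parallel to the triangle
        return None
    inv = 1.0 / det
    s = o - v0
    u = (s @ p) * inv
    if not 0.0 <= u <= 1.0:
        return None
    q = np.cross(s, e1)
    v = (d @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv
    return o + t * d if t > 0 else None   # 3D point-of-regard
```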


RGB-IR

  • Tracking Iris and Pupil Simultaneously using RGB-IR Camera


We propose a novel eye-tracking method that uses a multispectral camera to simultaneously track the pupil and recognize the iris. Our hybrid approach combines existing methods so that each compensates for the weaknesses the others exhibit when used alone. Significantly, our method takes movements of the eye’s center of rotation into consideration, whereas most earlier studies treated it as a static point. Additionally, our method measures the pupil diameter quantitatively using just a single camera. To confirm its effectiveness, we conducted two experiments in which we estimated the area and shape of the iris, the point-of-gaze, and the size of the pupil. We observed that the proposed method outperforms previous methods, particularly when the eye is turned toward the inner corner of its socket.
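
As an illustration of the pupil side of such a pipeline, the sketch below extracts a pupil ellipse from the IR channel by dark-region thresholding and ellipse fitting with OpenCV; the fitted ellipse axes give a pixel-level pupil diameter. The threshold value and the assumption that the IR channel arrives as a separate 8-bit image are ours.

```python
# Minimal sketch: pupil extraction from the IR channel of an RGB-IR frame
# by dark-region thresholding and ellipse fitting (OpenCV; the threshold
# value and channel layout are assumptions).
import cv2
import numpy as np

def fit_pupil(ir: np.ndarray):
    """Return ((cx, cy), (major, minor), angle) of the pupil, or None."""
    _, mask = cv2.threshold(ir, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)   # largest dark blob
    if len(pupil) < 5:                           # fitEllipse needs >= 5 pts
        return None
    return cv2.fitEllipse(pupil)                 # axes ~ pupil diameter (px)
```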


Polarization

  • Cross-Ratio Based Gaze Estimation using Polarization Camera System


In eye tracking, near-infrared light is commonly emitted, and the cross-ratio based method requires at least four LEDs at the corners of the display to detect the screen plane. However, prolonged exposure to near-infrared light can fatigue the user. Therefore, in this study, we attempted to extract the screen area correctly without emitting near-infrared light. Displays contain an internal polarizing filter, so the visibility of the screen can be controlled by the polarization direction of an external polarizing filter. We propose gaze estimation based on the cross-ratio method using a polarization camera system we developed, which captures two polarized images at different angles simultaneously. We confirmed that the point-of-gaze can be estimated from the screen reflection detected by computing the difference between the two images, without any near-infrared emission.
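
A minimal sketch of the differencing step, assuming the camera delivers two registered images at orthogonal polarization angles: because the display light is linearly polarized, it survives in one image and is attenuated in the other, so a simple absolute difference highlights the screen reflection while unpolarized scene light largely cancels. The threshold is a placeholder.

```python
# Minimal sketch: isolating the screen reflection on the cornea by
# differencing two simultaneously captured polarized images (assumed to be
# registered, 8-bit, and taken at orthogonal polarization angles).
import cv2
import numpy as np

def screen_reflection_mask(img0: np.ndarray, img90: np.ndarray,
                           thresh: int = 30) -> np.ndarray:
    diff = cv2.absdiff(img0, img90)          # polarized display light remains
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask                               # candidate screen-reflection area
```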

  • Screen corner detection using polarization camera for cross-ratio based gaze estimation


Eye tracking, which measures the line of sight, is expected to advance as an intuitive and rapid input method for user interfaces. The cross-ratio based method, which calculates the point-of-gaze using homography matrices, has attracted attention because it does not require hardware calibration to determine the geometric relationship between the eye camera and the screen. However, this method requires near-infrared (NIR) light-emitting diodes (LEDs) attached to the display in order to detect the screen corners, so LEDs must be installed around the display to estimate the point-of-gaze. Removing this requirement would allow cross-ratio based gaze estimation to spread smoothly. We therefore propose the use of a polarization camera to detect the screen area reflected on the corneal surface. The reflection of the display is easily detected in the polarized image because the light radiated from the display is linearly polarized by its internal polarization filter. With the proposed method, the screen corners can be determined without NIR LEDs, and the point-of-gaze can be estimated using the corners detected on the corneal surface. We investigated the accuracy of the estimated point-of-gaze based on a cross-ratio method under various illumination and display conditions. Because the proposed method requires no infrared light sources at the display corners, cross-ratio based gaze estimation is expected to be adopted widely in commercial products.
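
To make the geometry concrete, the sketch below shows the homography step under a simplified cross-ratio formulation: the four corner reflections detected on the cornea are mapped to the physical screen corners, and the pupil center is projected through the same homography to yield a point-of-gaze. All coordinates here are illustrative, not measured values.

```python
# Minimal sketch of the homography step in a cross-ratio style method:
# map the four detected screen-corner reflections on the cornea to the
# screen corners, then project the pupil center through the same mapping.
import cv2
import numpy as np

corneal_corners = np.float32([[310, 215], [352, 213],
                              [354, 248], [312, 250]])   # detected on cornea
screen_corners = np.float32([[0, 0], [1920, 0],
                             [1920, 1080], [0, 1080]])   # display pixels

H = cv2.getPerspectiveTransform(corneal_corners, screen_corners)
pupil = np.float32([[[330, 231]]])                 # detected pupil center
pog = cv2.perspectiveTransform(pupil, H)[0, 0]     # point-of-gaze in pixels
print(pog)
```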


Bone / Tactile

  • Active Bone-Conducted Sound Sensing for Estimating Joint Angle


We propose a wearable sensor system that measures the joint angles of the elbow and fingers using actively emitted vibration. The novelty of this research is the use of active sensing to measure a joint angle: vibration and sound are emitted into a bone, and a microphone receives the propagated signal.
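
A minimal sketch of the receiving side, assuming the microphone signal is reduced to a power-spectral-density feature with Welch's method (SciPy); mapping that feature to a joint angle would then be a learned regression. The sample rate and frequency band are assumptions.

```python
# Minimal sketch: reducing the received bone-conducted signal to a
# band-limited power-spectral-density feature (Welch's method, SciPy).
import numpy as np
from scipy.signal import welch

FS = 44100  # Hz, microphone sample rate (assumed)

def psd_feature(signal: np.ndarray) -> np.ndarray:
    """PSD of the received signal restricted to the emission band."""
    freqs, psd = welch(signal, fs=FS, nperseg=2048)
    band = (freqs >= 100) & (freqs <= 1000)   # emission band, hypothetical
    return psd[band]                          # feature vector for regression
```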

  • Haptic-enabled Active Bone-Conducted Sound Sensing


We propose active bone-conducted sound sensing for estimating the joint angle of a finger while simultaneously serving as a haptic interface. For estimating the joint angle, an imperceptible vibration is input to the finger, and a perceptible vibration is additionally applied to provide haptic feedback. The joint angle is estimated by switching the estimation model depending on whether haptic feedback is active, and the average estimation error is within about seven degrees.

  • Hand Pose Estimation based on Active Bone-Conducted Sound Sensing


Estimating hand poses is essential for intuitive user interfaces. In virtual reality, an infrared (IR) camera is used for hand tracking so that direct manipulation can be accomplished with the hands, and wearable devices have also attracted attention because of their portability. We previously developed a wearable method that estimates a joint angle from the amplitude of vibration; however, the joints whose angles can be estimated are limited. Therefore, the proposed method determines the hand pose based on active bone-conducted sound sensing, toward intuitive user interfaces. We employ the power spectral density as a feature, enabling the hand pose to be classified with a support vector machine. We confirmed the recognition accuracy and the feasibility of the proposed method through evaluation experiments.
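
A minimal sketch of the classification stage with scikit-learn, assuming one PSD feature vector per recording; the data here is a random placeholder standing in for real recordings.

```python
# Minimal sketch: classifying hand poses from PSD features with an SVM
# (scikit-learn). X holds one PSD feature vector per recording, y the
# pose labels; random placeholders stand in for real data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(200, 42)            # placeholder PSD feature vectors
y = np.random.randint(0, 5, size=200)  # placeholder pose labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```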

  • Estimating contact force of fingertip


This study proposes a method for estimating the contact force of the fingertip by actively inputting vibrations. The use of active bone-conducted sound sensing had been limited to estimating the joint angles of the elbow and fingers; we applied it to estimating the contact force of the fingertip. Unlike related work, it is not necessary to mount a device on the fingertip, and tactile feedback is enabled using tangible vibrations.

  • Grip Force Estimation based on Active Bone-Conducted Sound Sensing


We propose a method for determining grip force based on active bone-conducted sound sensing, a form of active acoustic sensing. In our previous studies, we estimated the joint angle, hand pose, and contact force by emitting a vibration into the body. To expand the applications of active bone-conducted sound sensing, we estimated the grip force with a wrist-type device. The grip force was determined using the power spectral density as features and gradient boosted regression trees (GBRT). In evaluation experiments, the average error of the estimated grip force was around 15 N, and we confirmed that the grip force could be determined with high accuracy.
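
A minimal sketch of the regression stage with scikit-learn's gradient boosted regression trees, again with placeholder data standing in for the wrist-device recordings; hyperparameters are illustrative, not tuned values from the paper.

```python
# Minimal sketch: regressing grip force from PSD features with gradient
# boosted regression trees (scikit-learn). Placeholder data stands in
# for the wrist-device recordings.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X = np.random.rand(300, 42)        # placeholder PSD feature vectors
y = 50.0 * np.random.rand(300)     # placeholder grip forces in newtons

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbrt = GradientBoostingRegressor(n_estimators=200, max_depth=3)
gbrt.fit(X_tr, y_tr)
err = np.mean(np.abs(gbrt.predict(X_te) - y_te))
print(f"mean absolute error: {err:.1f} N")
```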

Robotics

Coming soon…