Omek and Jinni Partner to Demonstrate Gesture-Enabled Interface for Mood-Based Video Discovery
Tuesday, January 10th, 2012
Leading Gesture Technology and Video Discovery Companies Partner to Redefine How Households Find and Select Video Content on Their Living Room Screens
LAS VEGAS — 2012 International CES — Omek Interactive, the leading provider of gesture recognition and body tracking technology, and Jinni Media, the maker of the first and only taste- and mood-based engine powering video discovery, today announced a partnership to deliver a next-generation video content navigation system for consumer electronics manufacturers and content service providers.
The prototype solution, which is being demonstrated at the Omek CES booth and in the two companies’ CES hospitality suites, combines Omek’s Beckon™ technology with Jinni’s unique semantic discovery engine for movies and TV shows to create a gesture-enabled video discovery interface. The result gives viewers a natural, intuitive way to discover the film and television content best attuned to their tastes and moods: they use gestures to find and refine selections based on rich combinations of attributes (such as “thought-provoking love story” or “clever cons and scams”) that go far beyond the standard genres and popular categories typically used in content selection guides. Once a video is selected, the same gesture controls let users play, pause, and rewind the stream, and exit back to the video discovery interface.
The demonstration also simulates the next phase of the product’s development, in which Omek Beckon technology will distinguish between individuals based on their skeletal dimensions and other physical attributes. Omek enables a user to be identified and “recognized” by the TV, allowing the interface to retrieve that user’s pre-computed set of “tastes” created by Jinni’s proprietary Taste Profile algorithms and to deliver an effortless selection of recommendations reflecting his or her semantic taste based on previous preferences. Households will be able to set up profiles for each family member, and the Jinni semantic discovery engine will provide unique recommendations for each individual or, when needed, cross-reference profiles to deliver recommendations that reflect the tastes of everyone gathered around the TV at any given moment. This will be the ideal customized video guide for families and multi-member households, offering the perfect solution for “family night” or “date night.” Because facial gestures express a person’s mood, and that mood shapes the content a viewer will most likely want to watch, the future possibilities for a product that brings together the two companies’ approaches are endless.
“Through our partnership with Jinni, we are redefining the living room experience. Our two technologies combined put the consumer in the power seat, no longer encumbered by remote controls or at the mercy of scheduled programming and pre-packaged content,” said Jonathan Epstein, president of Omek.
“Jinni has already changed the way that all of us find the video content that is right for each of us, by providing a much more natural way of considering our tastes and moods,” said Yosi Glick, Jinni’s CEO and Co-Founder. “By working with Omek, we’re further enhancing the experience of using our system in a way that makes it more natural, personalized, and fun. Gesture and body language are a natural way to express one’s current mood or state of mind.”
The joint Omek-Jinni prototype will be available for viewing at CES 2012 in the Omek booth, #3619 in the LVCC North Hall, and by appointment at Omek’s and Jinni’s hospitality suites.