Jiku is a system that aims to improve the experience of attending urban events through experience personalization and sharing. Using their mobile phones, attendees can access video streams of an event captured by networked cameras provided by the organizers or by the mobile cameras of other attendees. They can browse, watch, interact with, record, and share these video streams, giving each attendee a personalized experience: they can watch the parts of the event that are relevant and interesting to them, possibly from an angle different from that of their physical location.
Jiku is part of the NExT Search Center, a research center funded by the Singapore National Research Foundation through the Interactive Digital Media R&D Programme Office of Media Development Authority, and is a joint project between the National University of Singapore and Tsinghua University.
Jiku Live is a system that streams live video to mobile phones, allowing multiple users to simultaneously zoom and pan to view different regions of interest within the same video stream.
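The idea of serving each user their own region of interest from one shared stream can be sketched as cropping a per-user viewport out of each frame. This is a minimal illustration, not the Jiku Live implementation; the frame representation and viewport coordinates below are hypothetical.

```python
# Serve per-user regions of interest from one shared frame.
# A frame is stored as a list of rows of pixel values (hypothetical).

def crop(frame, x, y, width, height):
    """Return the viewport at (x, y) with the given width and height."""
    return [row[x:x + width] for row in frame[y:y + height]]

# A tiny 4x4 "frame" of labeled pixels; two users view different regions.
frame = [[f"p{r}{c}" for c in range(4)] for r in range(4)]
print(crop(frame, 0, 0, 2, 2))  # user A's top-left viewport
print(crop(frame, 2, 2, 2, 2))  # user B's bottom-right viewport
```

In practice the server (or client) would crop and rescale encoded video tiles rather than raw pixel arrays, but the per-user viewport logic is the same.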
The Jiku Mobile App is the front end of the Jiku system. Attendees of an event can use the app to browse, watch, interact with, record, and share video streams from the cameras.
Jiku Recommend is an algorithm that runs on the mobile app. It monitors what users like or dislike about each video clip and, through collaborative filtering, recommends clips a user may enjoy. The algorithm exchanges video clips opportunistically, based both on what the user may like and on the coverage of the collaborative filtering algorithm.
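Collaborative filtering over like/dislike feedback can be sketched as follows: score unseen clips by the votes of other users, weighted by how much each of those users agrees with the target user. This is a minimal illustration under assumed data; the user names, clip names, and similarity measure are hypothetical, not the actual Jiku Recommend algorithm.

```python
# User-based collaborative filtering over +1 (like) / -1 (dislike) votes.

def recommend(ratings, target_user, k=2):
    """Recommend clips the target user has not yet rated, scored by
    similarity-weighted votes from other users.

    ratings: dict mapping user -> dict of clip -> +1 or -1
    """
    target = ratings[target_user]

    def similarity(other):
        # Agreements minus disagreements on commonly rated clips.
        common = set(target) & set(other)
        return sum(1 if target[c] == other[c] else -1 for c in common)

    scores = {}
    for user, their in ratings.items():
        if user == target_user:
            continue
        sim = similarity(their)
        for clip, vote in their.items():
            if clip not in target:
                scores[clip] = scores.get(clip, 0) + sim * vote

    # Return up to k clips with positive scores, best first.
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [clip for clip, score in ranked[:k] if score > 0]

ratings = {
    "alice": {"goal": 1, "halftime": -1, "parade": 1},
    "bob":   {"goal": 1, "halftime": -1, "encore": 1},
    "carol": {"goal": -1, "encore": -1},
}
print(recommend(ratings, "alice"))  # -> ['encore']
```

Here "encore" is recommended because bob, who agrees with alice on every commonly rated clip, liked it, and carol, who disagrees with alice, disliked it.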
Jiku Player is a mobile video player that supports zooming and panning, as well as automated tracking of objects.
Jiku Camera is a mobile video recording and streaming app that records video, improves video quality under low-light conditions, tags the video with sensor metadata, and uploads it to the Jiku Video Server.
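One common way to brighten low-light footage is gamma correction, shown here purely as an illustrative sketch; the choice of technique and the gamma value are assumptions, not the actual Jiku Camera enhancement.

```python
# Gamma correction on 8-bit pixel intensities:
# out = 255 * (in / 255) ** gamma.
# A gamma below 1 lifts dark regions more than bright ones.

def gamma_correct(pixels, gamma=0.5):
    """Brighten a sequence of 8-bit intensities (0..255)."""
    return [round(255 * (p / 255) ** gamma) for p in pixels]

print(gamma_correct([0, 16, 64, 255]))  # -> [0, 64, 128, 255]
```

Note how the dark intensity 16 is quadrupled while 255 is unchanged, which is the behavior wanted for low-light frames.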
Jiku Director is software that runs on the video server. It automatically analyzes the input video streams from the cameras at an event and generates a new video that switches between the input streams, producing a "directed" video stream of the event based on the interestingness of the content and the quality of the input streams.
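Switching between input streams based on interestingness and quality can be sketched as picking, for each time segment, the camera with the best weighted score while discouraging rapid cuts. The weights, segment scores, and switch penalty below are hypothetical illustrations, not Jiku Director's actual model.

```python
# Automatic stream switching: one camera chosen per time segment.

def direct(segments, switch_penalty=0.3):
    """Pick one camera per segment, maximizing a weighted sum of
    interestingness and quality, with a penalty for switching away
    from the currently selected camera.

    segments: list of dicts mapping camera id -> (interestingness,
              quality), each in [0, 1].
    Returns the list of chosen camera ids, one per segment.
    """
    chosen = []
    current = None
    for scores in segments:
        def score(cam):
            interest, quality = scores[cam]
            s = 0.7 * interest + 0.3 * quality
            if current is not None and cam != current:
                s -= switch_penalty  # discourage a cut
            return s
        current = max(scores, key=score)
        chosen.append(current)
    return chosen

segments = [
    {"camA": (0.9, 0.8), "camB": (0.5, 0.9)},
    {"camA": (0.6, 0.8), "camB": (0.7, 0.9)},
    {"camA": (0.2, 0.8), "camB": (0.9, 0.9)},
]
print(direct(segments))  # -> ['camA', 'camA', 'camB']
```

In the middle segment camB scores slightly higher on raw content, but the switch penalty keeps the cut to camA until camB is clearly better, which is what keeps a directed video from cutting on every segment.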