Canesta, the company founded in 1999 by Cyrus Bamji, Abbas Rafii, and Nazim Kareemi, has been swallowed up by that ruthless capital shark, Microsoft.
To find the best memory cues for Mr. Reznick’s experiences, the researchers — Anind K. Dey, a computer science professor at Carnegie Mellon University, and Matthew Lee, a graduate student — considered the types of images that had proved the most effective in previous SenseCam studies.
They soon realized that the capriciousness of memory made answers elusive. For one subject, a donkey in the background of a barnyard photo brought back a flood of recollections. For another, an otherwise unremarkable landscape reminded the subject of a snowfall that had not been expected.
Still, the researchers came up with some broad rules for identifying and retrieving images likely to serve as memory triggers. For a people-based experience like a family reunion, the system selects photographs in which faces are clearly discernible; for a location-based experience like a visit to a museum, it uses geographical positions provided by GPS and accelerometer data to judge what images might be most salient — for example, when a subject might be hovering at one spot, like in front of a painting.
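The "hovering at one spot" heuristic could be sketched roughly as follows. This is only a guess at the idea from the description above, not the researchers' actual method; the function names, data layout, and thresholds are all hypothetical:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two GPS fixes.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def dwell_photos(photos, radius_m=5.0, min_dwell_s=60.0):
    """Flag photos taken while the wearer hovered in one spot.

    `photos` is a list of dicts with 't' (seconds), 'lat', 'lon'.
    A photo counts as salient if the wearer stayed within
    `radius_m` of its position for at least `min_dwell_s` seconds
    (thresholds chosen arbitrarily for illustration).
    """
    salient = []
    for p in photos:
        nearby = [q for q in photos
                  if haversine_m(p['lat'], p['lon'],
                                 q['lat'], q['lon']) <= radius_m]
        span = max(q['t'] for q in nearby) - min(q['t'] for q in nearby)
        if span >= min_dwell_s:
            salient.append(p)
    return salient
```

In practice one would also fold in the accelerometer signal (low motion energy while stationary), but the GPS-dwell test alone captures the "standing in front of a painting" case.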
Research groups elsewhere are experimenting with other techniques to summarize and make use of SenseCam data. Alan Smeaton and colleagues at Dublin City University in Ireland are comparing images to categorize them by activity — shopping, for example — so the system can put together a visual summary of the day. At the University of Toronto, a group led by Ronald M. Baecker is investigating the usefulness of complementing SenseCam images with an audio narrative created by a loved one.
Once the system selects some photos from the hundreds taken, the caregiver winnows down the candidates, adding cues like audio from the voice recorder, verbal narration and brief text captions. The final product is a multimedia slide show on a tablet computer that allows the patient to dig deeper into highlighted parts of some images by tapping on the screen. The first tap plays audio, the second shows captions.
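The tap behavior just described amounts to a small per-hotspot state machine. A minimal sketch, with hypothetical class and method names (the article does not say what a third tap does; resetting is an assumption):

```python
class Hotspot:
    """A highlighted region of a slide-show image.

    Per the described interaction: the first tap plays the audio
    cue, the second reveals the text caption. A third tap resets
    the hotspot (an assumption, not stated in the article).
    """
    def __init__(self, audio_clip, caption):
        self.audio_clip = audio_clip
        self.caption = caption
        self.taps = 0

    def tap(self):
        # Cycle through the three states on successive taps.
        self.taps = (self.taps + 1) % 3
        if self.taps == 1:
            return ('play_audio', self.audio_clip)
        if self.taps == 2:
            return ('show_caption', self.caption)
        return ('reset', None)
```

Keeping the state per hotspot, rather than per slide, lets the patient explore different highlighted regions of the same image independently.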
“The design is intended to give the patient the ability to engage actively with the experience instead of simply flipping through some pictures,” said Mr. Lee, the graduate student. Testing the system with the Reznicks and two other couples, he and Dr. Dey found that it helped patients recall events more vividly and with greater confidence than when they simply went through all of the images.
Remember that the Building Rome in a Day project had both Microsoft and Google behind it; seeing both companies now roll out stitched street-view photos should come as no surprise.
Yesterday, I was invited to a private Project Natal sneak peek. The event was held at the beautiful EZ Studios in New York City. I met with a member of the Project Natal product team, who gave us a brief explanation of the Natal vision and shared only a few technical details:
- RGB camera
- depth camera
- Current games will not be compatible with Natal
- “Project Natal” is a code name and will not be the final name of the product
- [Work in progress] Use voice commands to control services, e.g. music, videos, movies
- [Work in progress] Facial recognition
- [Work in progress] Use gesturing to navigate the XBOX menu structure
- [Work in progress] Distinguishing between the primary player and a spectator
- [Work in progress] Suppressing background noise
Once we were briefed, we were given the opportunity to play a dodgeball-like game. It was a great experience. There I was, without a controller, standing, kicking, swatting, jumping and working up a sweat. The responsiveness of the Natal system was incredible and very accurate. If I physically moved forward or back, or swung faster or slower, my dodgeball avatar responded in kind.
“Our advances in computer vision and audio-signal processing have enabled the development of Project Natal, a new level of experience for Xbox, in which user gestures and voice take the place of the standard game controller,” said Rico Malvar, managing director of Microsoft Research Redmond.