Friday, 24 September 2010 10:10

IDF 2010, Research Day: Context-Aware Computing


Do you want the gadgets in your pocket to help you make decisions or monitor aspects of your daily life? Intel believes context-aware computing will take hardware to the next level of intelligence, but there are privacy issues to consider, too.

If you have a fairly current smart phone, it has some sensors built in. It likely has a digital camera, a motion sensor, a GPS radio, and possibly even a tiny gyroscope. Right now, though, your phone is just a collection of hardware with various bits of software running on it. Need to get somewhere? Fire up the GPS app and get directions. Or, use the GPS locator to automatically check you in on Facebook Places or Foursquare.

But what if your smart phone could be really...well, smart? What if your phone always had software running in the background, keeping track of what you do? We're not talking about giving up privacy: maybe the data on what you're doing is kept locally, or in a personal cloud, rather than with a big aggregator like Google or Facebook.

Over time, for example, your phone may learn that you prefer low-cost Chinese restaurants. If you travel somewhere new, your phone will pop up recommendations for cheap Chinese food. Oh, and you've always shown a preference for spicy food, so you get a list of cheap Szechwan or Hunan restaurants. You won't be locked into those choices, either; if you feel like a pizza, you can change the preferences.
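As a rough sketch of how such a local preference profile might work, here is a minimal Python illustration. The tags, the scoring rule, and the class itself are hypothetical; the article doesn't describe an actual mechanism.

```python
from collections import Counter

class PreferenceProfile:
    """Hypothetical local preference store: it counts the tags of past
    choices and ranks new candidates by overlap with that history."""

    def __init__(self):
        self.tag_counts = Counter()  # e.g. {"chinese": 12, "cheap": 9}

    def record_visit(self, tags):
        # Each logged visit is a set of descriptive tags.
        self.tag_counts.update(tags)

    def score(self, tags):
        # A candidate scores higher the more its tags match past behavior.
        return sum(self.tag_counts[t] for t in tags)

    def recommend(self, candidates, top_n=3):
        # candidates: (name, tags) pairs for restaurants in the new city.
        return sorted(candidates, key=lambda c: self.score(c[1]),
                      reverse=True)[:top_n]

profile = PreferenceProfile()
for visit in [{"chinese", "cheap", "spicy"}] * 5 + [{"pizza", "cheap"}]:
    profile.record_visit(visit)

new_city = [
    ("Golden Dragon", {"chinese", "cheap", "spicy"}),
    ("Luigi's", {"pizza"}),
    ("Chez Pierre", {"french", "expensive"}),
]
print(profile.recommend(new_city, top_n=2))  # Golden Dragon ranks first
```

Changing your mind about pizza, in this sketch, is just a matter of logging different visits: the profile follows behavior rather than locking it in.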

Context-aware computing, then, combines sensors that monitor what you do with databases that collect information on what you like, and it can even post that information to your blog or Facebook, if you choose.

Now, you’re probably thinking that the opportunities for privacy abuse are legion. Already, electronic kiosks in Japan will tailor advertising to you personally as you walk by. Is that intrusive? Perhaps.

Let’s take a somewhat more benign application: monitoring your elderly parent. As sensors become more compact, they can be woven into clothes or built into shoes or slippers. 

How Does This All Work?

Justin Rattner, who runs Intel’s research arm, and Lama Nachman, a senior researcher in Intel’s Interaction and Experience Research group, dove into the details of context-aware computing and how it works under the hood at this fall's IDF.

The key to making context-aware computing work is low-power, low-cost, flexible sensors: accelerometers, GPS locators, cameras, and so on. Note that these sensors don’t have to be built into a smart device; they can have radios (WiFi, for example) that communicate with a personal area or local area network.
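To make that concrete, here is a sketch of a standalone sensor node streaming readings to a hub over the local network. The fake accelerometer, the node name, and the hub address are all assumptions for illustration; the "radio" is modeled as plain UDP datagrams.

```python
import json
import random
import socket
import time

# Hypothetical stand-in for a real accelerometer driver; a real node
# would read from hardware instead of generating random values.
def read_accelerometer():
    return {"x": random.gauss(0, 1), "y": random.gauss(0, 1),
            "z": random.gauss(9.8, 0.1)}

# Assumed address of the phone or home hub that aggregates readings.
HUB = ("192.168.1.50", 5005)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(100):  # a real node would stream indefinitely
    reading = {"sensor": "foot-pod-01", "ts": time.time(),
               **read_accelerometer()}
    sock.sendto(json.dumps(reading).encode("utf-8"), HUB)
    time.sleep(0.1)  # 10 Hz sampling keeps power draw modest
```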

Imagine a sensor built into a small device that mounts on the foot or shoe. The sensor measures strike time, stride time, and other gait data, collecting it over a fairly extended period. Once it has that baseline, the system can detect if the user’s gait starts to stutter or change in a drastic way and issue an alert that the user might fall. That alert could also be communicated over the network to a care provider, who can intervene as needed.
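A minimal sketch of that idea in Python, assuming a simple statistical baseline. The window size and the three-sigma threshold are illustrative guesses, not values from the talk.

```python
import statistics

class GaitMonitor:
    """Learn a baseline of stride times over an extended period, then
    flag strides that deviate drastically from it."""

    def __init__(self, baseline_size=500, z_threshold=3.0):
        self.baseline = []              # stride times (s) seen while learning
        self.baseline_size = baseline_size
        self.z_threshold = z_threshold  # "drastic" = this many std devs out

    def observe(self, stride_time):
        if len(self.baseline) < self.baseline_size:
            self.baseline.append(stride_time)  # still learning "normal"
            return None
        mean = statistics.mean(self.baseline)
        stdev = statistics.stdev(self.baseline)
        z = abs(stride_time - mean) / stdev if stdev else 0.0
        if z > self.z_threshold:
            return (f"ALERT: stride of {stride_time:.2f}s is "
                    f"{z:.1f} std devs from baseline; possible fall risk")
        return None

monitor = GaitMonitor(baseline_size=5)  # tiny baseline just for the demo
for t in [1.02, 0.98, 1.01, 0.99, 1.00, 1.65]:
    alert = monitor.observe(t)
    if alert:
        print(alert)  # fires on the abnormal 1.65 s stride
```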

Another example mentioned by Nachman is a TV remote control augmented with a sensor, which monitors what buttons are pushed and also picks up characteristics about how the remote is used. It could tell who the user is, because everyone moves or handles the remote just a little differently. Then it could make recommendations for shows to watch, based on what you’ve watched before.
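One way to picture that is matching a new usage sample against stored per-person profiles. The features (press interval, grip tilt, channel-surf rate) and the nearest-profile matching below are assumptions for illustration; the talk didn't specify an algorithm.

```python
import math

# Assumed usage profiles for each household member.
PROFILES = {
    "alice": (0.8, 12.0, 0.30),  # (press interval s, tilt deg, surf rate)
    "bob":   (2.5, 40.0, 0.05),
}

def identify(sample):
    """Return the known user whose stored profile is closest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(PROFILES, key=lambda user: dist(PROFILES[user], sample))

user = identify((0.9, 14.0, 0.25))
print(user)  # -> "alice"; recommendations can now draw on her history
```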

Having small, low-power sensors with radios is one thing, but you need software that’s smart enough to do something with that data. This is where the inference pipeline comes in.
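The general shape of such a pipeline is: raw samples are reduced to features, and features are mapped to a high-level context label. Here is a minimal sketch; the stages and the variance thresholds are illustrative assumptions, not details from the talk.

```python
import random

def extract_features(window):
    """Stage 1: reduce a window of raw accelerometer magnitudes
    to summary features."""
    mean = sum(window) / len(window)
    variance = sum((x - mean) ** 2 for x in window) / len(window)
    return {"mean": mean, "variance": variance}

def infer_activity(features):
    """Stage 2: map features to a coarse context label."""
    if features["variance"] < 0.05:
        return "sitting"
    if features["variance"] < 1.0:
        return "walking"
    return "running"

def pipeline(samples, window_size=50):
    """Run the stages over a stream, yielding one label per window."""
    for i in range(0, len(samples) - window_size + 1, window_size):
        yield infer_activity(extract_features(samples[i:i + window_size]))

# Fake stream: 50 quiet samples (at rest), then 50 very noisy ones.
stream = ([random.gauss(9.8, 0.05) for _ in range(50)] +
          [random.gauss(9.8, 1.5) for _ in range(50)])
print(list(pipeline(stream)))  # e.g. ['sitting', 'running']
```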
