Face-tracking technology allows computers to read facial expressions, in case you want to know about your mood.
Olga Khazan
If you've ever been confused about how you're feeling, and it happened to be the 1970s, you could always count on the mood ring. The jewelry fad claimed to read wearers' levels of anxiety or ebullience by measuring body temperature.
Today there's a more reliable—but equally far-out—app that performs a similar function: the clmtrackr, a new emotion-analysis tool created by a Norwegian computer scientist named Audun Øygard.
You turn on your webcam, stare into your screen, and the program will tell you what emotions you're experiencing, and in what proportions, from anger to sadness to joy.
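Under the hood, clmtrackr is a JavaScript library, and for the technically inclined, hooking it up looks roughly like the sketch below (rendered here in TypeScript). The tracker calls (clm.tracker, init, start, getCurrentParameters) follow the library's published usage, while emotionClassifier and its pre-trained emotionModel come from the project's example code rather than the core library; the declarations at the top are stand-ins for the real script includes.

```typescript
// A rough sketch of reading emotions with clmtrackr in the browser.
// These declarations are stand-ins for the library's script includes;
// emotionClassifier and emotionModel ship with the project's examples.
declare namespace clm {
  class tracker {
    init(): void;
    start(video: HTMLVideoElement): void;
    getCurrentParameters(): number[];
  }
}
declare class emotionClassifier {
  init(model: object): void;
  meanPredict(params: number[]): { emotion: string; value: number }[] | false;
}
declare const emotionModel: object;

async function readMyMood(video: HTMLVideoElement): Promise<void> {
  // Ask for the webcam and pipe it into a <video> element.
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  const ctrack = new clm.tracker();
  ctrack.init();        // load the built-in face model
  ctrack.start(video);  // begin fitting the model to each frame

  const classifier = new emotionClassifier();
  classifier.init(emotionModel);

  // Every half second, map the fitted model's parameters to emotion scores.
  setInterval(() => {
    const scores = classifier.meanPredict(ctrack.getCurrentParameters());
    if (scores) {
      for (const { emotion, value } of scores) {
        console.log(`${emotion}: ${(value * 100).toFixed(0)}%`);
      }
    }
  }, 500);
}
```

(The half-second polling interval here is arbitrary; the library's own demos typically redraw on every animation frame.)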
The facial tracking is accomplished through a technique known as the constrained local model, or CLM, a type of algorithm that draws on thousands of existing pictures of faces to identify facial features and predict how they'll look when the face is scrunched into a smile, for example, or drooping in a frown.
"It has learned from prior training each of the facial landmarks," Jeffrey Cohn, a professor of psychology and robotics at Carnegie Mellon University, told me. "Then for a new face, it goes looking for those points that it has learned to find."
