Saturday, February 27, 2010

Collaborative Translation by Monolinguals with Machine Translators

Authors:
Daisuke Morita Kyoto University, Kyoto, Japan
Toru Ishida Kyoto University, Kyoto, Japan

Paper Link:
http://delivery.acm.org/10.1145/1510000/1502701/p361-morita.pdf?key1=1502701&key2=4634337621&coll=ACM&dl=ACM&CFID=76752576&CFTOKEN=55465958

Morita & Ishida created a collaborative translation process that allows monolingual people to communicate reliably, using a machine translator as the intermediary.

In this system, one person acts as the source-language provider and enters sentences in one language. Each source-language sentence is translated by a machine translator and viewed by the target user, who speaks only the target language.

The target-language user modifies the sentence so that it makes sense and sends it back to the source user via the machine translator. If the sentence sent back to the source user has the same meaning as the one originally sent, the source user accepts it, and the meaning of the message is confirmed for both sides of the conversation. If it does not have the same meaning, the process is repeated.
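The paper describes this loop only in prose; a minimal sketch in Python might look like the following, where translate() stands in for the machine translator and the other callbacks stand in for the two monolingual users (all names here are my own, not the authors').

```python
# Minimal sketch of the collaborative translation loop described above.
# All callbacks are hypothetical stand-ins: translate() for the MT
# system, the others for the two monolingual participants.

def collaborative_translate(sentence, translate, target_user_edit,
                            source_user_accepts, source_user_rephrase,
                            max_rounds=10):
    for _ in range(max_rounds):
        # Source -> target: the target user sees only this MT output.
        target_text = translate(sentence, direction="src->tgt")
        # The monolingual target user repairs the sentence so it makes sense.
        repaired = target_user_edit(target_text)
        # Target -> source: back-translation for the source user to check.
        back_translation = translate(repaired, direction="tgt->src")
        if source_user_accepts(back_translation):
            return repaired  # meaning confirmed on both sides
        # Otherwise the source user rephrases and the loop repeats.
        sentence = source_user_rephrase(sentence, back_translation)
    return None  # no agreement within the round limit
```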

The collaborative system solves two important problems in machine translation:
1) Mistranslation by the machine translator.
2) Nonsensical translations provided by the machine translator. (usually a result of puns in the source language or extreme differences in sentence structure)

The software associated with the system also provides highlighting to show each sentence's progress through the translation process.

_______________
My spill:

Since I regularly deal with Google Translate and the nonsensical translations it gives for any Japanese I feed it, I think this is a wonderful tool.

The main problem I see is that communicating this way would be agonizingly slow. I guess the real use for this system is when you really want to ensure there are no miscommunications, as in a business deal.
But for casual situations, this would be too slow.

It's also worth noting that this paper had a ton of grammatical errors, which made it hard to take the work seriously. (But translating between English and Japanese can be difficult, so I'll forgive them. ^_^)

In future work, I'd like to see some way to ensure not only that the meaning is preserved but also that the grammar stays intact and the quality of the sentence is maintained. That would be impressive.

Multi-touch Interaction for Robot Control

(comment left on: Jacob Faire's blog)

Authors:
Mark Micire University of Massachusetts Lowell, Lowell, MA, USA
Jill L. Drury The MITRE Corporation, Bedford, MA, USA
Brenden Keyes The MITRE Corporation, Bedford, MA, USA
Holly A. Yanco University of Massachusetts Lowell, Lowell, MA, USA

Paper Link:
http://delivery.acm.org/10.1145/1510000/1502712/p425-micire.pdf?key1=1502712&key2=3044037621&coll=ACM&dl=ACM&CFID=76752576&CFTOKEN=55465958

The researchers in this paper developed a multi-touch interface to control an urban search-and-rescue (USAR) robot.
Their primary objective was to observe how users would interact with the affordances provided in the control interface and what information could be gathered from those observations.

In their controller they provided a digital screen that showed:
1) A map generated by the robot as the user explored a space
2) A front-view display
3) A rear-view display
4) A generated display of the area immediately surrounding the robot
5) A control panel with 4 directional arrows, a speed-control slider, and a brake button



With this controller, they had 6 users who had been trained to operate the robot with a joystick try operating it with the multi-touch controller instead.

The study showed that the controller they had designed generated a wide array of emergent behaviors, and the results emphasized that they needed to provide clearer affordances in the controls and separate camera and movement controls.

_____________________

My spill:
I think the idea of using multi-touch for robots makes sense, but I feel that their approach wasn't very ambitious or original. First of all, most video game developers could have told them how to make an efficient controller for an entity separated from the user, and that separate camera and movement controls were essential.

For future work, I'd like to see them implement controls for a robot with something more of a human form, where the user could control its arms and legs with the user's own arms and legs.
That would be cool.

Providing clear affordances is important, and I think they really needed to focus on that a bit more.

Thursday, February 25, 2010

CRAFTing an Environment for Collaborative Reasoning

(comment left on Patrick Webster's blog)

Authors:
Susanne C. Hupfer IBM T.J. Watson Research Center, Cambridge, MA, USA
Steven I. Ross IBM T.J. Watson Research Center, Cambridge, MA, USA
Jamie C. Rasmussen IBM T.J. Watson Research Center, Cambridge, MA, USA
James E. Christensen IBM T.J. Watson Research Center, Cambridge, MA, USA
Stephen E. Levy IBM T.J. Watson Research Center, Cambridge, MA, USA
Daniel M. Gruen IBM T.J. Watson Research Center, Cambridge, MA, USA
John F. Patterson IBM T.J. Watson Research Center, Cambridge, MA, USA

Paper Link:
http://delivery.acm.org/10.1145/1510000/1502704/p379-hupfer.pdf?key1=1502704&key2=4081617621&coll=ACM&dl=ACM&CFID=76752576&CFTOKEN=55465958

This paper covers research on collaborative reasoning and sensemaking on large-scale "wicked" problems.
They describe sensemaking as "a motivated continuous effort to understand connections (which can be among people, places, and events) in order to anticipate their trajectories and act effectively."

Their research goals in collaborative reasoning and problem solving led them to focus on the aspects of semantics, collaboration, and adaptability, and to design a system that guides collaborative problem solving based on these goals.

To this end, the IBM team developed CRAFT (the Collaborative Reasoning and Analysis Framework and Toolkit), which provides a generalized, visual way to create an ontological model (basically an object-oriented visualization system) that tracks relationships between entities. This visualization system provides a lingua franca (common tongue) for members of an investigative team to exchange information about the problem being solved.
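To make the idea concrete, here is a minimal sketch of an entity/relationship store of the kind described, including a lookup for same-named entities (see the flagging behavior a few paragraphs below). CRAFT's actual model is far richer; every name in this sketch is illustrative, not from the paper.

```python
# Toy entity/relationship ("ontological") store in the spirit of CRAFT.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    kind: str                      # e.g. "person", "place", "event"
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    source: Entity
    target: Entity
    label: str                     # e.g. "works_for", "located_in"

class OntologyStore:
    def __init__(self):
        self.entities = []
        self.relationships = []

    def add_entity(self, entity):
        # Same-named entities are kept side by side; merging or
        # disambiguating them is left to the investigators.
        self.entities.append(entity)
        return entity

    def relate(self, source, target, label):
        self.relationships.append(Relationship(source, target, label))

    def same_named(self, name):
        """Entities sharing a name: candidates to merge or disambiguate."""
        return [e for e in self.entities if e.name == name]

    def neighbors(self, entity):
        """Entities directly related to the given one."""
        out = [r.target for r in self.relationships if r.source is entity]
        out += [r.source for r in self.relationships if r.target is entity]
        return out
```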



Alongside entity tracking, the system can continually update and evolve the existing data and metadata on these objects through continued inquiry and searching.

The system also gives awareness of the other users working in CRAFT, letting each see what inquiries and updates the others have made.

Making an inquiry into the system can uncover inquiries previously made by other users, and new entities that share a name can be flagged either to identify them as the same entity or to disambiguate the same-named entities.

CRAFT also provides investigation nodes that allow the user to question, hypothesize, inquire, and gather evidence for a particular model, question, or investigation. These investigation nodes let the user model a particular scenario and gather evidence for or against it.
(For example, an investigation node on a corporation might include stock quotes, relevant information, and expert opinion on the movement of the stock.)

_______________________

My spill:

This sounds like an interesting problem. You always hear in the news about multiple organizations that are unable to cooperate and share critical information, leading to some disaster or another. The CRAFT system seems like it could solve these kinds of problems.

They did mention that they needed to implement a system with access to the internet. I think that kind of feature is critical for a system of this kind, just so it can tap the gigantic amounts of data available on the web and collaborate with multiple entities across several networks.

I could see a big problem with information security on a collaborative system especially for criminal investigations and the like.

Intelligent Wheelchair (IW) Interface Using Face and Mouth Recognition

(comment left on Kerry Barone's blog)

Authors:
Jin Sun Ju Konkuk University, Seoul, South Korea
Yunhee Shin Konkuk University, Seoul, South Korea
Eun Yi Kim Konkuk University, Seoul, South Korea

Paper Link:
http://delivery.acm.org/10.1145/1510000/1502693/p307-ju.pdf?key1=1502693&key2=2377217621&coll=ACM&dl=ACM&CFID=76752576&CFTOKEN=55465958

Ju et al. developed an intelligent wheelchair (IW) system with four objectives:

1) Make a non-intrusive system for controlling a wheelchair that can be used by those disabled from the neck down.
2) Make the system usable at all times of the day.
3) Make the system accurately discriminate between intentional and unintentional commands, to decrease user frustration and increase system correctness.
4) Make the system able to recognize and avoid obstacles.



Their first objective, a non-intrusive system, meant avoiding any kind of object that touches the face or head to control the system. So they used a Logitech PC camera to monitor the face's orientation, eye movements, and mouth position. The user can tilt their face and eyes left or right to indicate movement in those respective directions, while the mouth shape controls forward movement: a "Go" position signals the IW forward and an "uhm" position signals it to stop.

Objective 3 was accomplished by making the system recognize whether the user was facing forward or looking in another direction. If the user was facing forward, commands were accepted; otherwise, they were ignored as unintended.
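Putting the two previous paragraphs together, the command logic might look something like the sketch below, assuming the vision pipeline already reports whether the user faces the camera, the head/eye tilt, and the mouth shape. The names and structure are mine, not the paper's.

```python
# Toy interpretation of the IW's face/mouth commands. "Facing forward"
# gates everything (objective 3); tilt here means the head's inclination
# while the user still faces the camera.
from enum import Enum

class Command(Enum):
    LEFT, RIGHT, GO, STOP, NONE = range(5)

def interpret(facing_forward: bool, tilt: str, mouth: str) -> Command:
    # Ignore everything while the user looks away, so glancing around
    # the room is not mistaken for steering.
    if not facing_forward:
        return Command.NONE
    if tilt == "left":
        return Command.LEFT
    if tilt == "right":
        return Command.RIGHT
    if mouth == "go":        # "Go" mouth shape drives the chair forward
        return Command.GO
    if mouth == "uhm":       # "uhm" mouth shape stops the chair
        return Command.STOP
    return Command.NONE
```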

Their fourth objective was achieved by implementing 10 range sensors (2 ultrasonic and 8 infrared) that monitor the area around the IW. Faults of the system included a few blind spots around the IW that caused it to bump into objects sitting in them.





In the first study, they measured the accuracy of the facial recognition interface by placing users in environments with varying lighting and backgrounds, and found that the average time to process a frame was 62 ms, or roughly 16 frames processed per second. They also measured the recall and precision of their four commands (left, right, stop, go) and found that the average recall was 96.5% and the precision of the commands was an impressive 100%. Half of the users were able-bodied and the other half had disabilities.

In the second study, 10 able-bodied users (half male, half female) used three kinds of wheelchair control (joystick, headband, and the IW system) to navigate a course, and the time to complete the run was measured.

They found that the joystick was the quickest method both before and after training, and that before training the headband method was about 2 seconds faster than their system. Once the users were trained in the control methods, the IW system was slightly faster (by a few milliseconds) than the headband method.

________________
My Spill:

I think this system gives the extremely disabled a very reliable way to navigate their wheelchair, free from the annoyance of intrusive methods of control, which is good.
However, I think they need to consider the people who can't necessarily control even their neck muscles (of which I know a few). But then again, you can't please everyone.

The fact that the system works as well as the headband method is encouraging, and it is interesting that they provide obstacle recognition.

They need to implement better sensing of the surrounding environment to have a truly intelligent chair, but I think that is a relatively minor problem. I am also worried about how much energy it takes to run all these sensors and to power the computer that processes all this data.

They also need to provide more complex controls to refine the movement of the IW.

Emotional Design

(comment left on Nic Lupfer's blog)

by: Donald A. Norman

In his book, Norman goes beyond simple usability and affordance in the design of everyday things and analyzes the emotional aspects of design and how we feel about the things we encounter on a day-to-day basis.

Norman identifies three levels of emotional design:

Visceral - Associated with the most primitive, inbuilt instincts and intuitions given to us by nature. This is how we judge things as "pretty" (which concerns symmetry, bright colors, etc...) or maybe "good tasting" (usually sweet things).

Behavioral - How the product feels and operates. This is the rational level of design and includes how well the instrument fulfills its purpose.

Reflective - Appeals to our emotions and includes the cultural influences that affect how we feel about a product. It is how we come to like bitter or sour things through "acquired taste," by reflecting on them.

Norman also emphasized the role of making things fun in improving the value of a product, and discussed the role of emotions in communication devices and in devices where attachment comes from interpersonal interactions.

From there, Norman addressed his findings as they apply to robotics and his idea that an effective artificial intelligence would need emotions reflecting the three levels of emotional design.

__________________
My spill:
Norman's ideas in this book differ greatly from his masterpiece "The Design of Everyday Things," but I think they encompass a much more realistic (if less concrete) representation of the value and use of everyday things.

Norman's split of emotional design into the three levels of visceral, behavioral, and reflective follows traditional theories of the self and seems to be an enlightening way to look at the world.

I thought Norman's ideas were most interesting when applied to the world of robotics. I've come to believe that "emotions" truly are the missing link in creating strong AI.

I will criticize Norman's work in that it doesn't seem to provide clear guidelines for maximizing any of the three aspects of emotional design, although it does provide many examples of good design in each area. This makes it hard for the engineer/designer to consistently make use of his ideas.

Wednesday, February 17, 2010

Learning from IKEA hacking: i'm not one to decoupage a tabletop and call it a day.

(Comment left on: Nicholas Lupfer's Blog)

Researchers:
Daniela Rosner - School of Information - University of California - Berkeley CA
Jonathan Bean - Department of Architecture - University of California - Berkeley CA

Paper Link:
http://delivery.acm.org/10.1145/1520000/1518768/p419-rosner.pdf?key1=1518768&key2=7380346621&coll=&dl=&CFID=76581338&CFTOKEN=95309429

Rosner and Bean conducted a qualitative study of a particular online community whose members identify themselves as IKEA hackers, to examine the growing interest in personalization and Do It Yourself (DIY) culture. IKEA hackers provide "an intersection between online culture and the material world of creative practitioners."

IKEA hackers take IKEA products and modify them to create unique products. Examples of IKEA hacking include the GYNEA chair, two IKEA chairs made into a single gynecology chair with comfortable leg rests.



The study was a simple set of nine 1-2 hour interviews in which the researchers questioned the hackers about their motivations, inspirations, and various creations.
The researchers discovered three themes:
1) Identity and Creativity
2) Technology
3) Hacking

Most hackers felt a kind of creative expression in IKEA hacking that made them feel like valued individuals while simultaneously identifying them as part of a community that shared the interest of creating useful and unique products.
One participant labeled this idea "non-concurrent collaboration."

Most hackers also felt a kind of satisfaction from the haptic sense of physically manipulating objects, and noted that they couldn't get this sensation from traditional computer-based hacking.
They also noted that RL (real life) hacking had a constructive feel, as opposed to the destructive feel of computer-based hacking.

____________
My Spill:
One interesting idea to come out of this study was that parties interested in collaborative design should provide tools that encourage the performance of collaborative values as well as a common medium for the collaboration itself.
There has to be some kind of business idea that could take advantage of that.

It is interesting to see how much web based culture can intersect with RL.

The drawback of this study is probably the lack of quantitative data to work with.
But from the perspective of the paper, that is probably only a minor drawback.
I would have liked them to discuss a few more IKEA hacking creations to give a better feel for the process.

Tuesday, February 16, 2010

Social Computing Privacy Concerns: Antecedents and Effects

(Comment left on: Kerry Barone's Blog)

Researchers:
Oded Nov - New York University Polytechnic Institute, New York NY
Sunil Wattal - Temple University, Philadelphia PA

Paper Link:
http://delivery.acm.org/10.1145/1520000/1518754/p333-nov.pdf?key1=1518754&key2=3172836621&coll=&dl=&CFID=78245382&CFTOKEN=17889818

Nov and Wattal address how privacy features and concerns affect the amount of information sharing in social networks.

More specifically, they addressed how the "antecedents of privacy concerns" (individual privacy concerns, trust in other users, socially learned privacy concerns) affect the "impact of privacy concerns on information sharing."

To this end, Nov and Wattal created several hypotheses:

-Internet privacy concerns are related to specific communities.

-Increased interaction with other members of the community negatively impacts the amount of data shared.

etc...

The researchers took their hypotheses and contacted users of Flickr to take part in a survey. Those who accepted filled out the survey and had their data monitored through an API. The number of public photos was measured to indicate the amount of sharing and trust of information on the social network.

The results of the survey and study indicated that trust in other users and the network's information-sharing norms had a negative effect on the amount of sharing that occurred, decreasing the amount of public information that was shared.

They also found that privacy concerns lead users to implement more restrictive settings and share less data.

___________________
My Spill:

The team put forth multiple hypotheses about privacy and social norms, which is interesting, but I feel like the research team really failed to delve into getting any real data out of any of them.
They chased too many rabbits and ended up losing most of them.

The fact that they only studied the Flickr community is a little disappointing given all the different social networking sites out there.
The amount of data shared could vary tremendously depending on the website, the kinds of data-sharing controls the site offers, and the perceptions of its general community. I feel like if the study were conducted again on a network like MySpace versus Facebook, we could get some good comparative data.

(Like Facebook users feeling more protected and sharing more information than MySpace users, or something like that.)

I felt like the results the team did come up with were pretty obvious.

I'd like to see how different sharing/protection mechanisms on the same site would affect amount of data shared.

Thursday, February 11, 2010

The Inmates are Running the Asylum (part 1)

(Comment left on: Brett Hlavinka's Blog)

Book: The Inmates are Running the Asylum - Why High-Tech Products Drive Us Crazy
and How to Restore the Sanity

Author: Alan Cooper


In his book, Alan Cooper addresses the problem of why software is difficult to use and frustrating for end users.
He calls this problem "The Dancing Bear," by which he means that people are presented with software products that function (the bear dances) but not easily or elegantly (the bear dances badly). Cooper says that people are divided into two groups when they use computer software:

1) Apologists
2) Survivors

Apologists make excuses for bad software and tell survivors that they should be amazed that the products work and fulfill their function at all. They say, "Look! The bear is dancing!"

Survivors, on the other hand, look at the ungraceful dance of the bear, feel something is wrong, and are frustrated with the bear's dance. (Software could be better, easier to use, and friendlier.)

The reason "dancing bearware," that is to say bad/hard-to-use software, exists is because software is created by the ultimate apologists: Software Engineers.

Software engineers are trained to think like computers: deterministic, apathetic, literal, sequential, predictable, etc.
They think this way to be able to more easily make efficient software.
Therefore, they lose the perspective of the end user and usability of the software system is sacrificed.

Cooper also identifies the problem that companies are run by businessmen who introduce the constraints of time, money, and marketability, which causes software to be produced hastily and with a list of features that may not be important.
Thus interface design for end users is sacrificed.

Cooper proposes a solution to the problem by saying that companies and software development teams should have special interface designers who act as a third force to ensure product quality and usability for the potential customers of software.

_______________________
My Spill:

I have mixed feelings reading this book.
On one hand, I feel like a survivor who's sympathetic to computer-illiterate folks who get frustrated with everyday computer interaction. In that mindset, I can see what Cooper is saying about trying to make software easier to use. That's a good cause that most software engineers need to take into account. We're all in this business because at some level we all enjoy working with computers. Certainly easy-to-use programs are part of what brought us here.

On the other hand, I am still entrenched in the views of apologists.

Although Cooper is a programmer himself, I think he is sometimes disconnected from all the difficulties and considerations that must be taken to develop good software.

At one point, he explains that the search-for-file function on the computer is too confusing for users because it gives them options to search by file name, type, and location. The user doesn't have to use all these options! They are there for people who know where and what they want to search for!

Honestly, people can learn how to use a computer more effectively; it's just that most people aren't willing to learn and understand!

With all that said, I think putting interaction designers into programming teams and companies is a good idea that is worth implementing. If we can make better software for people that way, then let's do it.

I also think feature creep and the over-computerization of certain products need to be reversed. Modern-day cameras can be overly complex, and alarm clocks should be made simple.

Making things simple is a good cause. I just wish he wouldn't insult programmers.

Tuesday, February 9, 2010

How Well do Visual Verbs Work in Daily Communication for Young and Old Adults?

(comment left on Brett Hlavinka's blog.)

Researchers: Xiaojuan Ma and Perry R. Cook of Princeton University

Paper Link: http://delivery.acm.org/10.1145/1520000/1518759/p361-ma.pdf?key1=1518759&key2=8393675621&coll=&dl=&CFID=77302547&CFTOKEN=53317107

This research was concerned with creating a set of visual methods to communicate a specific verb and identifying the differences in identifying those verbs between young and old adults.



They did this by identifying the 48 most-used verbs in English speech and removing redundancies in meaning across those verbs. Then, for each verb, they created a single static image, a panel of four images, an animation, and a video clip to try to convey that word visually. They also represented nouns and adjectives with single photos and constructed 68 phrases. From there, they rounded up young and old people, had them interpret each phrase, and rated each response in terms of:

1) Difficulty in interpreting a verb
2) Speed of coming up with a thought
3) Confidence in response
4) Personal preference
5) How much the context (of the phrase) helped interpretation

The results showed that, as predicted, older people took longer to interpret phrases and recall words.

The results also showed that videos were the most effective in conveying verbs especially when the verbs were categorized as "not imageable."

Another interesting result is that indirect symbols such as "?" for "wonder" and "<3" for "want" were quite confusing for the participants and should be avoided when constructing a visual representation of a verb.

The researchers came up with several design principles for "visual verbs":
• Multiple pictures/frames are better for conveying verbs.
• Utilize common gestures if applicable, but be aware of possible cultural differences.
• Carefully use symbols, especially when not obvious.
• Simplify backgrounds (some objects (i.e. desks) were distracting), and use common scenes and props.
• Carefully use special effects in videos, especially with elderly users who are less familiar with them and might mistake fast-forwarding for "busy" or "hurry."
• Consider age-related effects like cognitive overhead, response speed, visual degeneration, and preference.

_________________________
My Spill:
In the light of increasing globalization and increased exposure to multiple cultures and languages, there comes a need to be able to communicate. But given the fact that there are many prominent languages and even linguists can't learn them all, there comes the need to be able to convey ourselves beyond the bounds of spoken language.

This is where this kind of research comes in. Although there isn't a system in place to solve this problem yet, a way to express language on a digital medium through images and videos rather than words would be indispensable. So this research examines the effectiveness of expressing the most crucial part of any language, verbs, in terms of vision.

The research itself doesn't interest me so much as the idea of a way to communicate without words does. Imagine being able to speak to anyone without knowing any other language! While that might be cool, it does sound unfeasible. The researchers did say that the aim of their research was to give a way to communicate with people who suffer from aphasia, which is certainly a noble cause.

The real drawback of the research is the lack of any kind of tangible system to convey verbs effectively. Sure, they identified a few preferred ways to communicate, but language-less communication is still far off. I might also want to blame these researchers for not coming up with any really new knowledge, but I'll refrain from that.

Saturday, February 6, 2010

Sketch and Run: A Stroke-Based Interface for Home Robots

(Comment left on: Subodh Prabhu's Blog)

Researchers:
Daisuke Sakamoto
Koichiro Honda
Masahiko Inami
Takeo Igarashi

Paper Link:
http://delivery.acm.org/10.1145/1520000/1518733/p197-sakamoto.pdf?key1=1518733&key2=0508845621&coll=&dl=&CFID=75086870&CFTOKEN=52161669

Sakamoto et al. identified the problem that robots designed for the home are completely autonomous and lack controls for the user to specify what the robot needs to do.



So Sakamoto et al. developed what they called "Sketch and Run" for the Roomba (a vacuum-cleaning robot). The Sketch and Run system observes the robot through several cameras positioned on the ceiling looking straight down; the user can then issue pen-stroke commands on a hand-held controller (in this case, a laptop) to tell the robot where to move, vacuum, pause, resume, and go home.



An important part of the system is that the user can issue a command and leave the robot to its work, which is an important feature in any robot; hence the name "Sketch and Run." This principle is called asynchronism.
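A minimal sketch of that asynchronism, assuming a hypothetical Robot class and already-classified stroke names (none of which come from the paper): the UI thread queues commands and returns immediately, while a worker thread drives the robot through them.

```python
# Toy illustration of asynchronous stroke commands: the user "sketches"
# and leaves; the robot "runs" through the queued work on its own.
import queue
import threading
import time

class Robot:
    def execute(self, command):
        print(f"robot: executing {command}")
        time.sleep(0.5)        # stand-in for actual robot motion

def robot_worker(robot, commands):
    while True:
        command = commands.get()
        if command is None:    # sentinel: shut the worker down
            break
        robot.execute(command)

commands = queue.Queue()
worker = threading.Thread(target=robot_worker, args=(Robot(), commands))
worker.start()

# The UI thread classifies pen strokes into commands and returns
# immediately -- the user is free to walk away once a stroke is queued.
for stroke in ["move_path", "vacuum_area", "pause", "resume", "go_home"]:
    commands.put(stroke)

commands.put(None)             # in a real UI this happens at shutdown
worker.join()
```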

___________________________
My Spill:

Sketch and Run provides an easy-to-use way to asynchronously control robots that can easily be extended to other robot models. That's great.
I'd love to see robotics come into the home with a user-friendly model.


I have a few problems with Sketch and Run:
First, the command strokes for resume and stop seem too similar.
Second, the command strokes for go home and pause seem too complex.
Third, they claim they want to make a user-friendly control system that is affordable, yet they need a system of four ceiling-mounted cameras to make it work.
Finally, the camera system was flawed in that the Roomba would drop out of detection at the overlapping edges of the cameras, and control would become difficult.

These kinds of serious kinks need to be worked out in the future.
They need simpler control strokes and an easier way to detect the robot than a bunch of ceiling-mounted cameras.

Interactivity Attributes: A New Way of Thinking and Describing Interactivity

(Comment left on William Hodge's blog.)

Researchers:
Youn-kyung Lim, Sang-Su Lee, Kwang-young Lee
Department of Industrial Design, KAIST
Republic of Korea

paper link:
http://delivery.acm.org/10.1145/1520000/1518719/p105-lim.pdf?key1=1518719&key2=0569745621&coll=&dl=&CFID=76811067&CFTOKEN=85641568

Video demonstration link:
http://delivery.acm.org/10.1145/1520000/1518719/p105.mov?key1=1518719&key2=2289745621&coll=&dl=&CFID=76811067&CFTOKEN=85641568

The primary goal of this research was to define a set of attributes that let designers describe what they called "the shape of interactivity" of an interactive artifact.

In other words, they set out to describe interactive objects/interfaces.

With that knowledge, they believed designers could more effectively create good interfaces. They believed that the kind of interaction given to an interface could be concretely described, much like physical materials are.
More than that, this team set out to identify the kinds of emotional reactions people had to these kinds of interactions.

For example, they created a set of interactive flash modules that the users manipulated with a mouse. These flash modules demonstrated several kinds/shapes of interaction (a rough encoding of these attributes appears in the sketch after the list):



Concurrency (concurrent/sequential) - When clicked, all the marbles moved together or separately

Continuity (continuous/discrete) - A marble moves in a circle continuously or in steps

Expectedness (expected/unexpected) - Marbles moved to a corresponding slot or to a random slot

Movement Range (narrow/wide) - Marbles shuffle around when the mouse is near or far

Movement Speed (fast/slow) - Marbles move quickly or slowly when the mouse is near

Proximity (precise/proximate) - The user adjusts an area with measurements or without

Response Speed (delayed/prompt) - The user clicks a marble and the marble moves instantly or after a delay
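Since the authors argue that interactivity can be described like a physical material, here is a rough sketch of how those seven attribute poles might be written down as a concrete specification. The encoding (a Python dataclass with one field per attribute) is my own, not the authors' notation.

```python
# A "material specification" for the shape of an interaction,
# one field per attribute from the list above.
from dataclasses import dataclass

@dataclass
class InteractivityShape:
    concurrency: str      # "concurrent" or "sequential"
    continuity: str       # "continuous" or "discrete"
    expectedness: str     # "expected" or "unexpected"
    movement_range: str   # "narrow" or "wide"
    movement_speed: str   # "fast" or "slow"
    proximity: str        # "precise" or "proximate"
    response_speed: str   # "prompt" or "delayed"

# One plausible flash module described in these terms: marbles that
# move together, step-wise, into the expected slot, promptly.
module = InteractivityShape(
    concurrency="concurrent",
    continuity="discrete",
    expectedness="expected",
    movement_range="narrow",
    movement_speed="fast",
    proximity="precise",
    response_speed="prompt",
)
```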

After the users played with a particular flash module, the researchers asked them whether the interface felt as it was described and what kind of emotions they felt.

They found that certain kinds of interfaces produced a range of emotional responses, and they believe that when creating interfaces, designers should consciously choose the kind of interaction environment that produces the desired emotional response.

(For the kinds of emotions that are attached to which interactivity modules, please refer to the paper.)

___________

My spill:

This paper was interesting in that it sought to measure the kinds of emotions people had in response to varying kinds of interfaces. It would be useful if designers of systems and interfaces had a kind of chart where they could look up what kind of interface to use to evoke pleasure or introspection or sadness or whatever the designer may choose.

That said, I really don't think much was accomplished by this research. We already have ways of describing how objects react to certain actions and how objects move. To some extent, we also know how people generally feel about certain reactions. We know that for ease of use, we generally want quick response speeds, precise measurements, fast (but trackable) movements, and reactions we expect.

As far as future work goes, the research team itself said that they didn't test different styles of the same interactions and how users felt about those differences. For all we know, the emotional responses may be attributable to the particular flash modules they made rather than to the interactions themselves.

Friday, February 5, 2010

Sacred Imagery in Techno-Spiritual Design

(Comment left on: Jacob Faire's Blog)

Researchers:
Susan P. Wyche, Kelly E. Caine, Benjamin K. Davison, Shwetak N. Patel,
Michael Arteaga, and Rebecca E. Grinter

GVU Center
Georgia Institute of Technology

DUB Group
Computer Science & Engineering

Paper Link:



In this research, Wyche et al. created an example mobile-phone application called Sun Dial, designed to show how applications can go beyond simply providing functionality: by paying attention to religious aesthetic principles and beliefs and incorporating sacred imagery, a program can connect a user more intimately to a religious experience.

Sun Dial itself is a mobile-phone application that provides a visual interface so that Muslim users can identify the times of day when they need to offer their prayers. Researching Islamic aesthetics, they found that simplicity and a reverence for nature please Muslim sensibilities, so they created an interface that shows the sun moving from sunrise to sunset and settling into green circles that indicate the windows of opportunity for believers to pray. In the background they placed a picture of a mosque along with a skyline that changes color to indicate dawn, day, and sunset.
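As a toy illustration of the display logic described above (not the authors' implementation), the sketch below places the sun according to the fraction of daylight elapsed and checks whether the current time falls in a prayer window. All times are placeholder values; real prayer times depend on location and are computed astronomically.

```python
# Toy version of Sun Dial's display logic: sun position and prayer
# windows. Every time value here is an illustrative placeholder.
from datetime import datetime, time

SUNRISE, SUNSET = time(6, 0), time(18, 0)
PRAYER_WINDOWS = [(time(5, 30), time(6, 30)),   # example window
                  (time(12, 10), time(15, 0))]  # example window

def minutes(t):
    return t.hour * 60 + t.minute

def sun_position(now):
    """0.0 at sunrise, 1.0 at sunset; None while the sun is down."""
    m, rise, sets = minutes(now), minutes(SUNRISE), minutes(SUNSET)
    if not rise <= m <= sets:
        return None
    return (m - rise) / (sets - rise)

def in_prayer_window(now):
    m = minutes(now)
    return any(minutes(a) <= m <= minutes(b) for a, b in PRAYER_WINDOWS)

now = datetime.now().time()
print(sun_position(now), in_prayer_window(now))
```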

Users reported that the interface was simple, pleasing, and offered an easy reminder of the Muslim principles and history.

____________________

My spill:
To me, this paper just reinforces the fact that you should consider your target population when you're designing any product. If your target is a particular religious group, then you research what is important to that group, provide the features your application is supposed to address, and then give those features a seamless interface that is pleasing to the user's aesthetic.

The only fault I find is that this kind of research doesn't really uncover anything new. They only made an application for a target population. It just doesn't seem ambitious to me.

As for what could be improved in this kind of application, I would suggest a set of customizable backgrounds and maybe an audio alarm to announce prayer times.

It is interesting for a designer to take a look at a religious perspective. I will admit that the kind of demographic change going on in the world makes me a little uncomfortable. But it is good business practice to adapt to that kind of change.

Creating a Spoken Impact: Encouraging Vocalization through Audio Visual Feedback in Children with ASD

Comment: Nobody has blogged on this paper so I can't really comment...

Researchers:
Joshua Hailpern
Karrie Karahalios
Jim Halle
(All from the University of Illinois)

Paper Link:
http://delivery.acm.org/10.1145/1520000/1518774/p453-hailpern.pdf?key1=1518774&key2=2274935621&coll=&dl=&CFID=74933136&CFTOKEN=29791244

In this particular study, Hailpern et al. presented the Spoken Impact Project (SIP), designed to be used (with the help of a professional) by children with Autism Spectrum Disorder (ASD) in order to increase the rate of vocalizations the children produce. Children with low-functioning ASD have trouble communicating and may never develop speech or language skills. ASD children also tend to withdraw into their own world and dislike human interaction, which makes acquisition of language skills extremely difficult.



With this in mind, SIP included a computer system that reacted to sounds produced by a child by giving audio, visual, or combined audio and visual output meant to stimulate the child into producing more sound. The team focused on getting the child to produce what they called SSLVs, or Spontaneous Speech-Like Vocalizations, a measure based on classical autism treatments.
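The paper doesn't publish code, but the core loop is simple enough to sketch: watch the microphone and trigger feedback in the child's preferred modality whenever the sound level crosses a threshold. Everything here (the sounddevice library, the threshold value, the modality flag, the print placeholders) is my assumption, not SIP's actual implementation.

```python
# Toy sound-reactive feedback loop in the spirit of SIP.
import numpy as np
import sounddevice as sd

THRESHOLD = 0.02           # RMS level treated as a vocalization (assumed)
MODALITY = "audio+visual"  # per-child preference: "audio", "visual", or both

def give_feedback():
    if "audio" in MODALITY:
        print("play reward sound")        # stand-in for real audio output
    if "visual" in MODALITY:
        print("animate shapes on screen") # stand-in for real visuals

def callback(indata, frames, time, status):
    # Compute the RMS energy of the incoming audio block.
    rms = float(np.sqrt(np.mean(indata ** 2)))
    if rms > THRESHOLD:
        give_feedback()

# Listen to the microphone for ten seconds, reacting as sound arrives.
with sd.InputStream(channels=1, callback=callback):
    sd.sleep(10_000)
```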



After running a baseline test without feedback on 5 different children, the children used the system in several sessions, each with a different kind of feedback.

After the data were gathered and statistical analyses were run, it was found that 3 children reacted to/preferred audio feedback and 2 reacted to visual feedback. It was also found that providing feedback encouraged a positive response from the children. One particular child's responses were especially strong, so a follow-up study was run with a "Wizard-of-Oz" system based on SIP. The child was given the prompt "Say [word]" and was rewarded with both visual and audio feedback. By the end of the session, the child would repeat all the words that were given to him.

_______________________
My spill:
It is encouraging to see research being taken on to help these kinds of kids and actually producing some positive results. Future researchers and current therapists should focus on integrating a fun and flexible system that can adapt to a child's preference for either visual or audio feedback, and on getting that system to teach actual language skills and communication.

I think SIP is on the right track. The only drawback is that it needs to keep going.
Also, it would be nice to see a study with a larger population. Statistically speaking, they needed more participants and more trials to get significant results.

Thursday, February 4, 2010

"Pimp My Roomba" Designing for Personalization

(comment left on Patrick Webster's blog)

Researchers:
JaYoung Sung, Rebecca E. Grinter and Henrik I. Christensen
School of Interactive Computing
Georgia Institute of Technology

Paper Link:
http://delivery.acm.org/10.1145/1520000/1518732/p193-sung.pdf?key1=1518732&key2=6139035621&coll=&dl=&CFID=74772686&CFTOKEN=78984782

In this interestingly named paper, Sung et al. research the effects of personalization on the Roomba and test Blom and Monk's theory of personalization. They found that while there have been multiple studies on personalization in e-commerce and web applications, not much research had been done on "off-desktop interfaces." So the research team chose the Roomba as the object to be personalized and provided 30 households with Roombas. They gave 15 of those households personalization kits that included skins, stickers, and markers. The households were a relatively even mix of singles, married couples, and families.



The results of the study showed that 6 of the 15 households that were provided kits actually customized their Roomba, for reasons that included feeling a personal connection to and appreciation for their Roomba, wanting the Roomba to blend into the aesthetic design of the home, or, conversely, wanting the Roomba to stand out from the background of the home. Some owners even said that they felt the machine worked better now that it was "more human."



The others who did not customize their Roomba said that they couldn't find a skin that matched their household or reflected the design they wanted, or that they felt customizing their Roomba would not add value to the device.

The research team also found that of the 15 households that were not provided a kit, none went out of their way to customize their Roomba, because they felt it was too much trouble to do so.

The main conclusion of the research is that customization does increase the sense of ownership and appreciation for a device. As such, there is good reason for designers to include these kinds of features. The team also found that people need a kick-start to begin customizing their devices.

Other conclusions from the paper included that customization options for the Roomba should include easily removable decorations for collaborative environments such as a family household.
______________________

My Spill:
While it is important to have empirical proof that customization increases a sense of ownership, I really think they just confirmed the obvious in this paper.
They could improve the study by providing a wider set of customization tools.

It is interesting to note that none of the people who were not given kits customized their Roomba. I also think it's funny that some families fought over how to customize their Roomba. I could see my sister and me doing that back home.

I was provided a big set of stickers for my Guitar Hero controller, but I never use them because I think they somehow lower the value of the device, plus they're hard to remove once they're actually on. This also reminds me of a friend who sent in his iPod to have some stuff engraved onto the back of the case to make it feel more like "his own."

With a little common sense, designers and marketing people could make a device truly successful and engaging just by providing some customization tools. It goes to show that by adding even a little value, you can increase the desirability of a product by leaps and bounds.

Monday, February 1, 2010

The Design of Everyday Things

Book: The Design of Everyday Things
By: Donald A. Norman

(comment left on William Hodge's blog.)

In this book, Norman tackles the question of why people struggle with the use of everyday things and what design principles should be applied to solve these problems.
Norman takes the perspective of the user and stresses that things should be made easy to use and understand without instructions or prior experience; in other words, he stresses the design of intuitive devices. His guidelines for making such well-designed products include:

Provide natural mappings between the action and the desired result.
Provide feedback to the user.
Use knowledge provided by the environment and provide subtle clues of use in the design of the object.
Use standardization so the user only has to learn things once.
Design the system so that failure isn't painful.

_________
My views:

The beauty of Norman's approach is that it focuses on what really matters in the design of everyday things: the ease of use and the satisfaction it brings to the user.
With all the design processes, constraints, and considerations a product must go through, it is easy to forget that we make devices to make our lives easier and more productive, and it's good to see Norman really stress that. Another really neat aspect of this book is its age and perspective. The book was written in 1988, when Nintendo and the computer companies were still in their infancy. Norman talks about the trouble people have using computers and how the revolutionary take of using a mouse with a computer should be praised. While this kind of perception has its limitations in modern society, it is sometimes prophetic to see modern devices taking the book's principles and applying them.

Norman also presented his concerns about the "house of the future" and all the complications in control presented in the setup he imagines. Even 22 years later, we still don't see a lot of the innovations that troubled Norman's mind, but it is easy to see how these technologies could be potentially problematic, and we should strive to take his advice in designing the everyday things of Norman's future, our present, and the world of tomorrow.

With all of that said, I wish Norman had given more examples of both good and bad design and hadn't focused so much on the principles and concepts. It would have been a more interesting read that way.