Sunday, January 31, 2010

Ross's Ethnography: Japanese Lang Tools

Idea:
Observe members of the Japan club and other related organizations/classes for hints at what kinds of language tools they use and what features are most appreciated in those tools.

Here is a brainstorming outline:
-Native Members

-Non-Native Members
  -Speaks Japanese
    -Electronic Dictionary
      -Input and display options
        -Kanji
          -Onyomi
          -Kunyomi
          -Stroke order
          -Stroke number
          -JLPT ranking
          -Jukugo
          -Related vocabulary
        -Romaji
        -Hiragana
        -Katakana
      -Sketch recognition
      -Battery life
      -Ease of use
    -Dictionary
    -School-issued book
    -Flash cards
    -Other tools
  -Doesn't speak Japanese

Other observations could include a breakdown of gender, race, religion, income bracket, and political outlooks. More than likely, I'll stick to gender and tool usage.

So far, it's just me doing this ethnography but others are welcome to join.
Leave a comment if you want to.

Sunday, January 24, 2010

User Guided Audio Selection from Complex Sound Mixtures

(comment left on Nate Brown's blog.)

Research Group:
Paris Smaragdis
Adobe Systems Inc.

pdf link:
http://delivery.acm.org/10.1145/1630000/1622193/p89-smaragdis.pdf?key1=1622193&key2=4529534621&coll=ACM&dl=ACM&CFID=73635830&CFTOKEN=62086703


Summary:
Traditional audio editing and manipulation software typically represents an audio file as a waveform, and selecting and editing individual components from that waveform can be quite difficult even for trained experts. So Paris Smaragdis at Adobe Systems set out to create an audio selection algorithm that would provide an object-based way to identify, select, and edit an individual sound from an entire audio file.

Although the implementation of the algorithm is quite complex, the basic model Smaragdis used was Probabilistic Latent Component Analysis (PLCA), which is able to separate an audio signal into spectral bases, their temporal weights, and basis priors. In other words, with PLCA we can identify separate elements of an audio file.
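To make that concrete, here is a rough numpy sketch of PLCA's EM updates. This is my own simplified reconstruction of the published model, not Smaragdis's code; the function name, iteration count, and initialization are all assumptions.

```python
import numpy as np

# Toy PLCA: a magnitude spectrogram V (freq x time) is modeled as a sum
# of K components, each with a spectral basis P(f|k), a temporal weight
# curve P(t|k), and a component prior P(k).
def plca(V, K, iterations=50, seed=0):
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, K))           # spectral bases P(f|k)
    H = rng.random((K, T))           # temporal weights P(t|k)
    z = np.full(K, 1.0 / K)          # component priors P(k)
    W /= W.sum(axis=0)
    H /= H.sum(axis=1, keepdims=True)
    for _ in range(iterations):
        # E-step: compare the model's reconstruction against V
        R = (W * z) @ H + 1e-12
        ratio = V / R
        # M-step: reweight each factor by its share of the evidence
        W_new = W * z * (ratio @ H.T)
        H_new = H * ((W * z).T @ ratio)
        z = W_new.sum(axis=0)
        z /= z.sum()
        W = W_new / (W_new.sum(axis=0) + 1e-12)
        H = H_new / (H_new.sum(axis=1, keepdims=True) + 1e-12)
    return W, H, z
```

Each column of `W` is one "element" of the audio file, which is what makes selecting a single sound from the mixture possible.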

With this algorithm, Smaragdis just needed some way for the user to tell the software which audio elements the user wants to modify. This was done by allowing the user to mimic the sound they want to modify by whistling, humming, singing, or even playing a matching instrument. The software then runs the PLCA analysis on that input and matches it with a similar section in the audio file. The user is then free to modify that matched section.

Smaragdis ran a few experiments and found that his algorithm was able to correctly pick out voices in an audio file given input from several different users of both genders, as well as particular sounds in an audio file.

He also noted that while the algorithm is able to pick up unique sounds from an audio source, it sadly could not pick out, "...targets that strongly correlated to non-targets," or in other words, "one violin from an orchestra playing in unison."
________

Discussion: This paper is particularly interesting for novice or casual audio editors who want to edit isolated sounds in an audio file. Oftentimes it is difficult to really manipulate a given piece to give rise to your artistic expression, but this algorithm is paving the way for more user-friendly audio interaction. (Which is cool.)

It is a shame, though, that a conductor still can't edit those irritating off-tune instruments in a whole band or orchestra, but the algorithm is still a significant step closer to audio editing perfection.

I remember, a few years back, after listening to my roommate's friend's work in an audio program called Fruity Loops, that I would really have loved to load in the song "Time's Scar" and edit out the tambourine to give the song a more somber tone, but I just couldn't do it with that kind of software. Given this particular work, that just might now be possible.

Saturday, January 23, 2010

Ripples: Utilizing Per-Contact Visualizations to Improve User Interaction with Touch Displays

(comment left on Shaun Maier's blog.)

Ripples researchers:
Daniel Wigdor, Sarah Williams, Michael Cronin, Robert Levy, Katie White, Maxim Mazeev, Hrvoje Benko
Microsoft Surface | Microsoft Corp. | Microsoft Research

Paper pdf link:
http://delivery.acm.org/10.1145/1630000/1622180/p3-wigdor.pdf?key1=1622180&key2=5584724621&coll=GUIDE&dl=GUIDE&CFID=74475451&CFTOKEN=47541294

Ripples is essentially a feedback system for making sense of multi-touch input on tabletop or other touch screen interfaces.
The main goal of Ripples was to have some kind of standardized feedback that would let the user know how the tabletop was receiving the user's input. The Ripples system itself doesn't actually increase the accuracy of multi-touch input, but rather provides feedback that tells the user what kind of input the system has received. Clearer input feedback reduces the frustration the user experiences with tabletop input and makes the experience easier and more enjoyable.
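As a toy illustration of the idea: every raw touch event gets an immediate visual acknowledgment, even when the touch misses its target or tracking is dropped. The event names and visualization descriptions below are my own invention to show the concept, not the actual Ripples design.

```python
# Hypothetical per-contact feedback mapping: the point is that the user
# always sees SOMETHING, so they know how their touch was received.
def feedback_for(event):
    visuals = {
        "contact_down":  "ripple expands under the finger",
        "contact_move":  "trail follows the finger",
        "captured":      "ring snaps to the control that owns the touch",
        "missed_target": "ripple shows the touch landed, just not on a control",
        "contact_lost":  "fading ring marks where tracking was lost",
    }
    return visuals.get(event, "no visualization")
```

Because the mapping only consumes abstract touch events, it doesn't care how the hardware sensed the touch, which matches the paper's point about sensing independence.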

They conducted a few user studies with the Ripples system where they had users use several different programs, both with and without Ripples, and found a statistically significant improvement in user accuracy with Ripples on.

Here are a few pictures to show what Ripples actually does:

[Ripples screenshots from the original post]
_____________________
My spill:

The really cool thing about the Ripples system is that it is independent of the type of sensing a particular system might use to receive multi-touch input. For example, it could be FTIR detection, camera-based detection, capacitive sensing, or a hundred other things, and Ripples could still be used to enhance the user's interaction with the system.

It's hard for me to find a drawback to this system. The only thing I can really point to is that some users said the animations were distracting, but given that the designers took a minimalist approach and that you could turn a feature like this off pretty easily, I think it's a pretty minor drawback.

Personally, I've thought about this kind of idea ever since I first heard about tabletop computing. I love things that feel futuristic like this. Given all the problems the Microsoft team identified with multi-touch input and feedback, this system seems like it is really needed, and I hope it comes as a standard addition to tabletop systems and even other systems that utilize multi-touch. I'd love to see this implemented EVERYWHERE.

I've always thought touch screens needed a little something to help improve accuracy, and while Ripples doesn't technically do that directly, it certainly seems to achieve it.

Nice work, Microsoft.

Wednesday, January 20, 2010

Mouse 2.0: Multi-touch Meets the Mouse

(Comments left on Nate Brown's Blog)

The idea presented in this paper was to implement multi-touch technology on mouse-like devices that would allow for more dexterous manipulation and interaction with computer applications.
The research group then created and tested several multi-touch mouse devices and found out which designs/implementations were most intuitive to new users.

The research team for this paper developed 5 different mice.

The first mouse was an FTIR mouse that detected finger contact across an acrylic board by using frustrated total internal reflection. The acrylic board is lit with infrared light, and a finger on the board disperses the light, which can be detected with an infrared camera. The FTIR mouse also included a clickable section at the front of the mouse.
Disadvantages of the FTIR mouse include ergonomic problems, high power consumption, multi-touch sensing limited to the tip of the acrylic, and an IR camera that was too sensitive to sunlight and other external sources of IR light.
On the plus side, clicks were easy to detect, user interaction was robust, and the acrylic plate design is easily modified to be visually pleasing to the human sense of aesthetics.
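The FTIR sensing pipeline is conceptually simple: fingertips show up as bright spots in the IR camera image. Here is a hypothetical sketch of that detection step; the thresholds, function name, and grid representation are my assumptions, not the paper's implementation.

```python
# Threshold the IR frame and group bright pixels into blobs; each blob's
# centroid is one fingertip. This is illustrative, not the actual code.
def detect_touches(ir_frame, threshold=200, min_size=4):
    """ir_frame: 2D list of 0-255 IR intensities. Returns blob centroids."""
    rows, cols = len(ir_frame), len(ir_frame[0])
    seen = [[False] * cols for _ in range(rows)]
    touches = []
    for r in range(rows):
        for c in range(cols):
            if ir_frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill the connected bright region (one fingertip)
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and not seen[ny][nx]
                                and ir_frame[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(blob) >= min_size:  # ignore sensor noise specks
                    cy = sum(p[0] for p in blob) / len(blob)
                    cx = sum(p[1] for p in blob) / len(blob)
                    touches.append((cy, cx))
    return touches
```

The sunlight problem the paper mentions shows up right here: external IR raises the whole frame above `threshold`, and the blobs stop meaning fingers.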

The second mouse they introduced was the Orb mouse, which also used FTIR techniques to capture the position of the user's fingers. But this time the infrared camera was housed within the orb of the mouse, which was also clickable.
One disadvantage of the Orb mouse was that it was still susceptible to external infrared light, and the infrared camera consumed a lot of power, which is particularly bad for laptops with limited battery life.
Advantages of the Orb mouse included a larger and easier-to-manipulate multi-touch area, a smooth gradient on the multi-touch area, and a more ergonomic design than the FTIR mouse.

The third mouse was the capacitive-sensing, electrode-based mouse, or "Cap mouse."
This particular mouse looks like a regular mouse, but underneath the surface a grid of small capacitive electrodes, each 5 mm apart, senses the finger positions of the user.
This mouse had much more precise sensing than the infrared camera-based designs and featured lower power consumption. It also didn't suffer from any of the external-interference problems the previous two had. Unfortunately, the sensing area on the mouse was too small because the design was constricted to the shape of a regular mouse.
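As a guess at how a 5 mm electrode grid might be turned into a finger position, one common trick is a weighted centroid over the activated electrodes. This interpolation scheme is my assumption, not necessarily what the Cap mouse actually does.

```python
# Hypothetical capacitive-grid interpolation: weight each electrode's
# position by its reading, so the finger can be located between
# electrodes even though they are 5 mm apart.
def finger_position(grid, pitch_mm=5.0, noise_floor=10):
    """grid: 2D list of capacitance readings. Returns (x_mm, y_mm) or None."""
    total = wx = wy = 0.0
    for row_i, row in enumerate(grid):
        for col_i, reading in enumerate(row):
            if reading > noise_floor:        # ignore idle electrodes
                total += reading
                wx += reading * col_i * pitch_mm
                wy += reading * row_i * pitch_mm
    if total == 0:
        return None                          # no touch detected
    return (wx / total, wy / total)
```

This kind of interpolation is part of why capacitive sensing can beat a camera for precision while using far less power.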

The fourth mouse achieved sensing by using an infrared sheet illuminator beam projected from the sides of the mouse; thus, the mouse was named the Side mouse. Whenever the infrared beam is breached, a camera picks up the infrared light reflected back, and the finger is sensed by the mouse.
While the Side mouse offers the advantage of an interaction area that isn't limited by the size or design of the mouse itself, it suffers from the fact that objects the user never intended to be sensed can be picked up by the mouse. Also, the Side mouse, looking like a simple block with a rest for the palm, is ergonomically challenging, and performing a regular mouse click must be done by pressing down on the mouse with the palm. This made it difficult both to move and click simultaneously and to use multi-touch and click simultaneously. Furthermore, the mouse suffers from all the problems the other infrared mice had, like high power consumption and interference. The research team also mentioned that some users' hands were either too big or too small to use the mouse effectively.
The final mouse was the articulated mouse, called "Arty."
Arty was made with 3 high-resolution optical mouse sensors: one at the base of the mouse and one at the end of each of two articulated arms that are manipulated with the user's thumb and index finger. This design allowed for easy pinching action as well as the simple application of traditional 2-button clicking.
Arty also had Bluetooth technology, making it the only wireless mouse of the group. The research group noted that the use of 2 separate high-resolution optical mouse sensors, one under each finger, allowed "...high sensing fidelity compared with capacitive or camera-based sensing techniques."
____________________________________________
My thoughts:

While many of the mouse designs were interesting, I failed to really see how adding multi-touch technology to mice is a significant improvement in the ease of everyday computing. I do understand their passion for making more immersive technologies and for improving interaction with the computer through natural human hand dexterity, though. So I applaud their work nonetheless.

Really the biggest work that needs to be done here is making the public aware of and excited about the existence and uses of multi touch mice. Otherwise I don't see this going very far.

The side mouse is far from pleasing to the eye. It just looks like a block.
It was a cool idea though.

Bonfire - a tabletop/laptop integration

(Comments left on Jacob Faires' Blog.)



Bonfire was developed by Shaun Kane, Daniel Avrahami, Jacob O. Wobbrock, Beverly Harrison, Adam D. Rea, Matthai Philipose, and Anthony LaMarca of the DUB Group at the University of Washington.

The main goal of Bonfire was to create a self-contained, portable nomadic computing system that effectively combined the best features of both laptop and tabletop systems.

The Bonfire system has 2 laptop-mounted projectors and cameras that create an interaction area on either side of the laptop, allowing the user to manipulate data on either side of the laptop like a tabletop system while still providing the existing benefits of a laptop, such as keyboard and mouse input and existing laptop applications.

The Bonfire paper itself claims that the integration of the laptop and tabletop scheme:
"1) enables observing the periphery and responding appropriately to the casual placement of objects within its field of view.
2) Enables integration between physical and digital objects via computer vision.
3) Provides a horizontal surface in tandem with the usual vertical laptop display, allowing direct pointing and gestures.
4) Enlarges the input/output space to enrich existing applications."

Object selection - adaptive background subtraction:
At startup, the current view becomes the background. Areas added to the scene are classified as foreground objects. If an introduced object is not recognized, it is added to the background, and unused objects can also be incorporated into the background.
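The adaptive background subtraction loop described above could be sketched like this. It is a toy version based only on the paper's description; the class, names, and threshold are my own.

```python
# Toy adaptive background subtraction: first frame is the background,
# pixels that differ enough become foreground, and unrecognized or
# unused regions get folded back into the background.
class AdaptiveBackground:
    def __init__(self, first_frame, diff_threshold=30):
        self.background = [row[:] for row in first_frame]  # startup view
        self.threshold = diff_threshold

    def foreground_mask(self, frame):
        """Pixels that differ enough from the background are foreground."""
        return [[abs(p - b) > self.threshold
                 for p, b in zip(frow, brow)]
                for frow, brow in zip(frame, self.background)]

    def absorb(self, frame, mask):
        """Fold unrecognized/unused regions back into the background."""
        for r, mrow in enumerate(mask):
            for c, is_fg in enumerate(mrow):
                if is_fg:
                    self.background[r][c] = frame[r][c]
```

The `absorb` step is what makes it "adaptive": a coffee cup that just sits there eventually stops being flagged as a new object.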

Finger detection uses a color-based metric and identifies extended points as fingers. With a single finger, the tip of the finger serves as the cursor; with multiple fingers, their midpoint serves as the cursor.

A user can interact with the projection space by tapping, dragging, flicking, and crossing.

Taps - detected with both the accelerometer and the user's finger near the target
Drags - 2 fingers across the surface
Flicks - 1 finger across the surface, only in the cardinal directions
Crossing - detects when a finger moves across interactive elements
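The four gestures above could be classified from a touch trace roughly as follows. The real system uses the accelerometer for taps; here I approximate everything with distance, time, and speed thresholds that are my own assumed values, not the paper's.

```python
import math

# Hypothetical gesture classifier for a Bonfire-style surface.
def classify_gesture(points, duration, finger_count=1,
                     tap_time=0.2, tap_dist=5.0, flick_speed=100.0):
    """points: list of (x, y) samples; duration in seconds."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dist = math.hypot(x1 - x0, y1 - y0)
    if duration <= tap_time and dist <= tap_dist:
        return "tap"                        # brief contact near one spot
    if finger_count >= 2:
        return "drag"                       # two fingers across the surface
    if dist / max(duration, 1e-9) >= flick_speed:
        # Flicks are only recognized in the four cardinal directions
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            return "flick-east" if dx > 0 else "flick-west"
        return "flick-south" if dy > 0 else "flick-north"
    return "cross"                          # slow single-finger motion
```

The "cross" fallback is the loosest part of this sketch; the actual system checks whether the finger path intersects an interactive element rather than just falling through.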
______________________

My spill:
Now, that all sounds a tad complicated, but the basic idea is really cool. You have an area on each side of your laptop where you can have extra computing space that can not only be used for more window space, but also offers other cool features that interact with the physical world.

Features like object recognition. The computer can recognize objects like my spiral notebooks and then bring up the online PowerPoints from that class, or maybe open up a Word document that has my report based on lecture notes.

The Bonfire paper also had a pretty neat feature where the computer would recognize a cup of coffee and then update how much the user has spent on coffee and chart calorie intake.

As a gamer, my personal favorite example they provided was using the Bonfire system as an input menu space for an MMORPG, which would allow the user to quick-cast spells, or maybe display a map beside the laptop while freeing up space for other things on the main laptop screen.

This is definitely the direction I'd like to see computing going in: more useful interaction with the environment, intelligent use of space, and smart use of computer sensing and intelligence. Very cool.

The paper also touched on a very good point: the Bonfire system covers the weaknesses of both laptops and tabletops. Laptops lack interactivity with the physical world, while tabletops lack sophisticated input and portability.
That's progress.

The real faults Bonfire needs to be aware of concern the marketing and pricing of laptops equipped with this kind of technology. I would suspect that the initial introduction of these units would be pricey, and if people aren't aware of them, this kind of product could flop. The other thing I worry about is how much the system would actually be able to recognize. If the number of items it can recognize is too small, then Bonfire could end up looking like some novelty feature that really doesn't do much.





Tuesday, January 19, 2010

Test Post

Now launching:
Ross Versus the Machine
Testing...
1...
2...
3...