政堯 王 - inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation
1. The inFORM system displays shapes: it is a system for fast, real-time 2.5D shape actuation, co-located projected graphics, object tracking, and direct manipulation. The authors seek to bring the dynamism of the visually perceived affordances of GUIs to physical interaction by utilizing shape-changing UIs.
2. It introduces the concept and design space of dynamic affordances and constraints.
1. The system is difficult to implement, and the authors spent considerable cost and time building it.
2. The key contribution of this paper is the concept of a 2.5D shape display for human-computer interaction. This is a good idea and could be commercialized in the future, since similar technology has long been used successfully on stage to give audiences a stronger visual impression. In addition, just as we accept 2D displays showing 3D physical space, we can accept a 2.5D display showing "3.5D" physical space (the extra 0.5 being haptic).
3. Dynamic Affordances and Dynamic Constraints are opposing concepts. The authors challenge our intuition and show that constraints are also valuable: physical space is itself full of constraints, and constraints give users a more realistic feeling during interaction.
1. No user study. Without one, the authors cannot prove that their method of human-computer interaction works well.
2. Using physical objects on the display means the display must stay horizontal, which is inconvenient for mobile phones and other portable devices.
3. A good material is needed to build a device that gives users realistic feedback; the current dynamic display does not fit our mental model well.
This work outlines potential interaction techniques and introduces Dynamic Physical Affordances and Constraints with the inFORM system. In inFORM, affordances are physical elements that the user can touch. Depending on how they are rendered by the system, they either react directly to touch or react to displacement as the user pushes or pulls them. The authors explore the design space of Dynamic Physical Affordances and Constraints, and describe methods for actuating physical objects on actuated shape displays. A set of motivating examples demonstrates how dynamic affordances, constraints, and object actuation can create novel interaction possibilities.
+ Nice demo video, really cool! What sorcery is this?!
+ Lots of cool applications and scenarios.
- Super expensive, and the system's volume is too big.
- Low resolution and non-scalable.
This paper proposes the inFORM system, which provides variable-stiffness rendering and real-time user input through direct touch and tangible interaction. The authors also demonstrate a set of motivating examples of how dynamic affordances, constraints, and object actuation can create novel interaction possibilities.
[Positive: inFORM as Input or Output]
As an output, inFORM can guide interaction with dynamic constraints and actuate objects on its shape display. The most amazing part for me is inFORM as an input, transforming gestures into physical output. This feature lets people control objects remotely and make things happen even when they are not there.
[Positive: Implementation of inFORM]
In my opinion, a platform like inFORM is not a new idea, and inFORM is not perfect because of power consumption and resolution issues. However, the authors chose the straightforward solution (each actuator controls one pin) and showed determination in implementing it. Overall, the demos are overwhelming.
[Negative: User Evaluation]
inFORM emphasizes dynamically changing UIs, which differentiates it from previous work, and it uses restriction to guide interaction. Thus, how users react to 3D cues, and how they evaluate the system, are important open questions.
[Negative: Hardware Solution of inFORM]
This paper uses 900 mechanical actuators to control 900 pins, causing high power consumption and low actuator durability under repeated use. Furthermore, the one-to-one mapping constrains the resolution of the display. The hardware solution is not efficient and is hard to scale up for commercialization.
A brief summary
inFORM outlines potential interaction techniques and introduces Dynamic Physical Affordances and Constraints, including methods for actuating physical objects on actuated shape displays. The more general-purpose role of dynamic shape change opens possibilities for using shape change both for content and as UI elements. The system is built from 900 mechanical actuators, an overhead depth camera, and a projector. The paper presents many new interaction techniques for shape-changing UIs, such as dynamic affordances and dynamic constraints, and its examples demonstrate how dynamic affordances, constraints, and object actuation create novel interaction possibilities.
2 "positive" topics
1. new design for dynamically changing UI
Most past research on shape displays focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. However, inFORM gives us a great example that breaks through this stereotype, which may give us more chances to come up with greater ideas in the future!
2. physical feedback
The system provides physical feedback, which I think is a good design for certain situations. For example, a physical button can give the user feedback after it is pressed, making interaction more interesting and realistic. As another example, combining dynamic physical constraints with audio feedback provides a vivid interface between user and device. It is wonderful to undergo this experience.
1. too expensive
inFORM provides a complete device of a kind that had never been implemented before, which is quite amazing. However, although physical feedback provides a better user experience, it is too expensive to put into practice. In addition, when doing research it is difficult to obtain as much funding as inFORM's budget required. Even so, inFORM is quite amazing!
2. current system state is not obvious
inFORM has many strong capabilities in different facets. However, I cannot find an obvious hint about what it is doing at any given moment. For instance, inFORM can restrict a physical ball with dynamic constraints, but as a user I would not know whether it is currently constraining the ball or performing some other capability (e.g., rising as a pressable button).
Previous studies were often limited by the static nature of physical artifacts, which cannot easily change their form. In this study, the authors present inFORM to solve this problem: it can facilitate interaction by providing dynamic physical affordances through shape change, and can restrict interaction by guiding users with dynamic physical constraints. The system also enables users to manipulate the interface by actuating physical objects.
1. Most people have experience playing with construction toys, so I think the idea is not difficult to come up with. However, carrying out the idea is difficult, which I think is why this study stands out from others. In other words, this study presents new technology rather than new ideas.
2. Another contribution is that the study demonstrates many examples of how this technique can create novel interaction possibilities. One application is presenting 3D surfaces, which is really useful, since it is hard to imagine a 3D surface on a 2D interface.
1. According to Wikipedia, an affordance is a property of an object that gives an individual hints on how to use the object to perform an action. I think inFORM only realizes part of this property, because some of its functions are not intuitive and one can hardly imagine how to use them.
2. Another concern is that no user testing was conducted. Through user testing, we could learn more about the feasibility and value of this study.
3. Because the technique interacts with humans by changing the height of each pin, the system must occupy a lot of space. Thus, I think this technique cannot be used in mobile devices, which have to be thin and light.
They made cool hardware that makes virtual-physical interaction a reality. The device facilitates interaction through Dynamic Affordances, restricts it through Dynamic Constraints, and manipulates passive objects through shape change. They explored the design space of dynamic affordances and constraints, and of interacting with physical objects through shape displays. It is a state-of-the-art system for fast, real-time 2.5D shape actuation, co-located projected graphics, object tracking, and direct manipulation, with three applications demonstrating the potential of these interactions. In this way, users can interact well with the computer, because the computer can now respond not only visually but also physically through the moving pins.
1. They make virtual control of real objects come true. Just put objects on the inFORM device and you can interact with them physically from anywhere. If the object is autonomous (such as a robot), it might even use inFORM to interact with humans.
2. inFORM can guide the user toward what it wants them to do, so it could be combined with education, for example piano learning: the user can learn which key to play next as the corresponding pin falls or rises.
3. inFORM works well with the Kinect and projector to form a self-contained artifact. A huge inFORM might be good to use in a show.
1. It is very expensive and uses a lot of energy. The device is cool, but users can never bring it home (or may never get a chance to play with it).
2. There is no user study, so we don't know how effective it is when users interact with it. I think most users would agree inFORM is cool, but is it good to use? We'll never know.
The paper presents an interactive surface that can provide the user with physical shapes, a so-called “shape display”. The shape display consists of 900 pins that can move up and down. A projector and an RGBD-camera are placed above the surface.
The authors believe that shape displays need to provide three types of functionality: facilitating through Dynamic Affordances, restricting through Dynamic Constraints, and manipulating passive objects through shape change. Some uses of the surface presented by the authors are: showing and interacting with 3D surfaces, interacting with tokens, and providing user feedback for another device.
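The sensing-to-actuation pipeline described in these reviews (an overhead depth camera driving a grid of pins) can be sketched roughly as a downsampling and normalization step. The grid size, depth range, and pin travel below are hypothetical illustration values, not inFORM's actual specifications:

```python
import numpy as np

def depth_to_pin_heights(depth_mm, grid=(30, 30), z_min=500.0,
                         z_max=1100.0, travel_mm=100.0):
    """Map an overhead depth frame to target heights for a pin grid.

    depth_mm: 2D array of depth readings (mm) from a camera above the table.
    Nearer surfaces (smaller depth values) become taller pins.
    All parameter values here are illustrative assumptions.
    """
    h, w = depth_mm.shape
    gh, gw = grid
    # Average the patch of pixels that lands on each pin.
    patches = depth_mm[: h - h % gh, : w - w % gw].reshape(
        gh, h // gh, gw, w // gw)
    mean_depth = patches.mean(axis=(1, 3))
    # Normalize into [0, 1]: z_min (closest) -> 1, z_max (table) -> 0.
    t = np.clip((z_max - mean_depth) / (z_max - z_min), 0.0, 1.0)
    return t * travel_mm  # per-pin extension in millimetres
```

A real system would additionally filter noise and mask out the user's hands before actuating the pins; this sketch only shows the geometric mapping.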
政堯 王 - Lumino: tangible blocks for tabletop computers based on glass fiber bundles
1. The concept of using glass fiber bundles to extend sensing beyond the diffuser.
2. The use of fiber optics to resolve occlusion between tangible blocks.
3. The framework of blocks, markers, and mechanical constraints.
1. The concept of sensing beyond the diffuser reminds us to think about linking 3D physical space to the 2D display space, rather than only linking 2D physical space to 2D display space.
2. Capturing 3D information with a 2D sensor is a big problem today. The authors use optical theory and mechanical design to approach it, finding that if we treat objects as groups of basic geometric primitives such as spheres and cubes, the problem can be solved in a preliminary way.
3. The authors give three methods for designing the bundles and three methods for designing the markers in this system, and compare these methods. In addition, they give three examples illustrating usage situations. Their way of writing papers and designing experiments is something we should study.
1. Though the method can sense beyond the diffuser, sensing objects higher in a stack has very low precision and high demands. The authors did not do a user study, and I suspect the reason is that users would find the approach very hard to use. The method will also be hard to extend to sensing complex objects in the future.
2. Using a lot of physical objects on a display is not convenient for the user. The user may only need a pen or similar tool for feedback and input, since it can be carried anywhere and used anytime. A user study is needed to examine this, and then to create methods that solve the problems users really have.
In this paper, they demonstrate how to sense 3D arrangements of building blocks with a regular, unmodified diffuse illumination table. The main idea is to use glass fiber bundles to move the table's visual focus to the required location in the structure, and to use the same glass fiber bundle to rearrange marker images into a 2D configuration the table can read. They present six classes of blocks and matching marker designs, each of which is optimized for different requirements, and show three demo applications. They claim that this concept (each block containing a glass fiber bundle) has the potential to extend the applications of diffuse illumination display systems.
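The core trick summarized above, flattening a stack's markers into distinct regions of a single 2D footprint that the table's camera can read, can be sketched as a toy decoder. The region layout and bit encoding below are hypothetical illustrations, not Lumino's actual marker designs:

```python
def decode_stack(footprint, level_regions):
    """Read the block IDs of a stack from the 2D marker image seen by the table.

    footprint: dict mapping (x, y) cells of one block footprint to marker
    bits (0/1), as the camera would see them through the fiber bundles.
    level_regions: list (bottom to top) of the cell sets that each level's
    fiber bundle maps its marker into. Both are simplified assumptions.
    """
    stack = []
    for cells in level_regions:
        bits = [footprint.get(c, 0) for c in sorted(cells)]
        if not any(bits):
            break  # no block present at this level or above
        # Interpret the region's bits as a binary block ID.
        block_id = int("".join(str(b) for b in bits), 2)
        stack.append(block_id)
    return stack
```

The point of the sketch is that once each level's marker occupies its own region, recognizing a 3D stack reduces to ordinary 2D marker reading, which is exactly what the unmodified table already does.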
+ The idea of using glass fiber bundles to modify the light path, sensing 3D arrangements of building blocks with a regular, unmodified diffuse illumination table.
+ They present six classes of blocks with complete analysis and implementation details.
- No user study. I am not sure whether users would really like these applications.
- The scenarios are not strong, and I don't know how to evaluate this system.
This paper demonstrates how to track 3D arrangements of building blocks on a table's surface without modifying it. The authors present Lumino, a system of building blocks that uses a glass fiber bundle in each block, allowing the table's built-in camera to recognize its marker.
[Positive: Innovative Approach to Tabletop Interaction]
Many tangible physical building block systems sense connections between blocks, requiring extra effort to maintain batteries. This paper proposes sensing 3D arrangements of building blocks with a regular, unmodified diffuse illumination table. Using glass fiber bundles allows the surface to recognize blocks even when they are located above the surface.
[Positive: Great Writing Style]
Reading this paper is like reading a novel! In the introduction, the authors present the differences between their solution and related work, a brief intro to Lumino, and its benefits and contributions. They also emphasize the main contribution in the engineering domain, and analyze the specific strengths and limitations of each glass fiber block in detail.
[Negative: Solution for MS Surface]
The surface serves as the screen, displaying output to users. Thus, if we could move the camera above the table, the solution could change entirely. I think this paper proposes an amazing idea given the setting it describes; however, is this the best solution for tabletop interaction?
[Negative: Mechanical Constraints]
The authors propose mechanical constraints to control block rotation. I think the shape may influence the reflected illumination. Perhaps putting visual hints on the surface of the blocks would be a much easier solution.
A brief summary
Lumino demonstrates how to use objects arranged in a three-dimensional structure without modifying the table. To prevent the marker image from blurring, the authors present an effect called "deferred diffusion". They present three types of square fiber bundles (straight, offset, and demagnification). Because square fiber bundles lead to a certain amount of clipping under block rotation, they also provide three types of round bundles. They show many useful applications, and their main breakthrough is sensing multiple stacked blocks in the same place: glass fiber moves the table's visual focus to the required location in the structure, and the same fiber bundle rearranges the marker images into a 2D configuration the table can read.
2 "positive" topics
1. sensing multiple blocks in the same place
Because of their physical characteristics, objects cannot be penetrated by light. In this paper, the authors break through this restriction: using glass fiber bundles and some fabrication, they implement special blocks whose markers can be detected even when stacked on top of other markers, expanding the usable space from a 2D platform to 3D arrangements.
2. obvious tangible feedback
Although technical progress has been rapid in the 21st century, touchscreens still have limitations. One problem is the lack of physical feedback when a touchscreen device tries to simulate the real world. Lumino makes a big breakthrough on this topic, and there is no doubt that after this research there will be more options and possibilities for solving the problem more completely in the future.
1. No user study in this research
In spite of the great analysis in this paper, the authors did not run a user study on whether their design is good. Although their results are really attractive and awesome, a user study would give their results a stronger standing and persuade more readers.
2. clarity vs. expense
In order to sense multiple blocks in the same place, they use glass fiber bundles and some optical techniques, but this decreases resolution. To address it, we could choose larger blocks or a higher-resolution camera; however, large blocks are not convenient, and many high-resolution cameras could be very expensive. If they can solve this problem, Lumino will be a wonderful technique.
This study presents Lumino, tangible blocks that contain glass fiber bundles, to solve the problem that a diffuse illumination table cannot identify arrangements of 3D objects. The authors present three types of glass fiber bundle blocks: straight blocks, offset blocks, and demagnification blocks. Each type has its advantages and limitations; for example, demagnification blocks offer maximum flexibility, whereas offset blocks offer maximum marker capacity. Also, the blocks are unpowered and maintenance-free, keeping larger numbers of blocks manageable.
1. One contribution is that, without modifying the diffuse illumination table and without equipping blocks with power or magnets, we can still interact with the computer using 3D objects. This reminds us of a really important concept when designing an interface: we should not first try to solve the problem by adding something to the existing devices, but first consider how the problem can be solved without any additions.
2. In addition to presenting three types of glass fiber bundles, the authors also contrast the three block types, which gives us more information about how to choose an appropriate one when designing our own interface. They also describe the process they used to make the fiber bundles, so by following these steps we can make our own prototypes. The writing style of this study is also worth learning from because of its comprehensive and detailed content.
1. Even though the aim of the article is to provide a new technique that solves an existing problem, the authors should do user testing to verify that this interface is convenient or valuable to users. In daily life we do not use blocks to construct things or communicate with each other unless we are children or architects, so it is really hard for me to imagine its application in daily life.
2. Since the diffuse illumination table has to track objects by identifying their corresponding markers, the size of the objects may be a critical issue. For example, the table may not be able to track the arrangement of very small objects, such as needles.
This paper discusses how to improve tangible objects on tabletop computers. Tabletops use computer vision to recognize the objects on them, but they can only recognize objects in 2D. Lumino uses glass fiber to make a breakthrough: tangible objects can now be constructed in 3D, because the tabletop computer can see through the glass fiber and recognize the markers on a stack of objects. The authors develop three classes of blocks and matching marker designs, each of which is optimized for different requirements.
One remaining limitation is that if the stack of objects grows too tall, the optical signal becomes weak and the tabletop computer can no longer recognize it.
1. They bring tangible object techniques on tabletops from 2D to 3D. It's a big breakthrough: for years tangible objects could only be recognized in 2D, and Lumino makes them as fun as LEGO.
2. They find a new element, glass fiber, for constructing tangible objects. It's a new concept that objects might be constructed from different elements with different effects.
1. What about environmental light? I'd like to know whether stronger ambient light affects the glass fiber.
2. There is no study of the recognition rate. A user study may not be needed for this paper, but the recognition rate is important to me. I'd also like to know the advantage of using tangible objects on a tabletop compared with using a Kinect to recognize objects on the tabletop.
This paper presents a technique for tangible blocks on tabletops using glass fiber bundles. The authors are interested in this because it would allow multiple layers of tangible objects on a tabletop device.
The paper introduces three kinds of glass fiber usage, each with its own upsides and downsides: a straight block, which is easy to produce; an offset block, which allows for the largest sets of marker-based blocks; and finally, the demagnifying block, which allows maximum flexibility in transferring markers to the table. Three applications for these glass fiber bundle blocks are also introduced: a checkers game, a photo manipulation tool, and a construction tool in which blocks can be used to build structures.
This paper presents an approach to designing tabletop gestures that relies on eliciting gestures from non-technical users by first portraying the effect of a gesture and then asking users to perform its cause. The experiments yield a complete user-defined gesture set, quantitative agreement scores, implications for surface technology, and a taxonomy of surface gestures.
[Positive: Design gestures from non-tech users]
The participants are non-technical users, so their responses to commands are natural. The authors observed their behavior and created a user-defined set. The proposed set reveals a high degree of consistency across similar operations, and flexibility in the number of fingers, palms, or hand edges used.
[Positive: Interesting mental model observation]
This paper provides not only quantitative data but also qualitative data, showing how users think when they perform gestures. I summarize two interesting points here: (1) number of fingers: it may represent the force imposed on objects; (2) a land beyond the screen: users intuitively expect more interactive space beside the screen. The observations look deeply into what users want and what they expect to have.
[Negative: Relation between timing consumption and Likert scale]
There is no analysis of the relation between how long users think aloud and how much they like a gesture. Since this paper aims at finding natural gestures, a better protocol might let users test all referents first and report their favorites afterwards.
[Negative: Evaluation of the user-defined set]
The resulting user-defined gesture set is conflict-free and covers 57.0% of all gestures proposed. Conflicts arise when users assign the same gesture to different commands; the authors eliminate them by letting the referent with the largest group win the gesture. Given that, I would like to know how users evaluate the resulting set.
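The agreement scores these reviews mention come from the paper's elicitation methodology: for each referent, groups of identical gesture proposals Pi within the referent's proposal set Pr contribute (|Pi|/|Pr|)^2, and the score averages this over all referents. A minimal sketch, with gesture labels as plain strings for illustration:

```python
from collections import Counter

def agreement(proposals_by_referent):
    """Agreement score over elicited gesture proposals.

    proposals_by_referent: dict mapping each referent (command) to the
    list of gesture labels participants proposed for it. Identical labels
    form the groups Pi; each referent contributes sum((|Pi|/|Pr|)^2).
    """
    scores = []
    for gestures in proposals_by_referent.values():
        n = len(gestures)
        groups = Counter(gestures)  # group identical proposals
        scores.append(sum((size / n) ** 2 for size in groups.values()))
    return sum(scores) / len(scores)
```

A referent where everyone proposes the same gesture scores 1.0; a referent where every proposal differs approaches 1/|Pr|, which is why low-agreement commands suggest the need for on-screen widgets instead.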
◎ A brief summary
User-defined gestures have properties that make them good candidates for deployment in tabletop systems, for example ease of recognition, consistency, reversibility, and versatility through aliasing. The authors elicit gestures from users and then develop a user-defined gesture set; for instance, a user performs a gesture after being prompted by an animation demonstrating its effect on the device. In the taxonomy of surface gestures, 1080 gestures from 20 participants can be roughly classified along Form, Nature, Binding, and Flow. Given their rigorous research steps, I think their analysis is really clear and conscientious.
◎ 2 "positive" topics
1. great research analysis
There are many great points in this paper's research analysis that are very good for us to learn from. For instance, they ran their user study with non-technical users, which makes the results more accurate. In addition, they gathered a large sample (e.g., 1080 gestures) and performed conscientious statistical analysis after their user testing. Their taxonomy is also clear and detailed. In my opinion, although all of us know these points about doing research, the attitude they display is important and worth learning from.
2. useful for better user experience
With user-defined gesture design, users are no longer limited to gestures created by system designers. This conforms to the minds of tabletop users. Given the diverse backgrounds of device users, this design can deeply improve the user experience of tablets and mobile devices, and open up more possibilities for future device design.
◎ 2 "criticisms"
1. the idea has already been implemented
I think this design was a really good idea in 2009, but from my point of view I had already seen similar designs in other devices by 2013. I don't know whether the same capabilities in today's products originally came from this design or not, but one thing is for sure: it is a great idea for breaking through the limitations of products.
2. too complicated for other users
It’s just a case discussion. If one day we cancel all original input gesture for our mobile device and tablet, and all gesture need to be setting at the beginning of user’s first time usage. Then it will lead to some problem: Maybe it will have too many gesture which will also confuse user himself; Maybe lots of gesture been setting are useless for most of the time; Or maybe it would make more confuse for others who want to use your mobile device. For some condition mentioned above, I think it maybe have for risk so don’t let it go beyond the limit.
This paper describes a research method and tries to find out what the natural gestures on tabletop devices are. Although the conclusion is that there is actually no common natural gesture for complicated actions, they still find some gestures that most people accept. They also find that gestures are hard to define, even for a professional HCI designer. One funny thing I'd like to mention: they said this was a Windows world in 2009, but nowadays the Mac is more popular in the USA.
1. They find that some natural gestures exist, which is a big contribution for designers designing gestures for users. On the other hand, it means users are teachable.
2. I am interested in whether users would do the same thing twice for one task. I mean, if we ask a user to perform "Move" on Tuesday and again on Sunday, will they do the same gesture as last time?
1. I wonder whether tabletop devices are actually good to use, since users always put a lot of things on the table. Will they change their mind and accept that a tabletop device is not for piling up objects?
2. They made a big contribution for people who study gestures: no more studying needed XD.
In order to bring interactive surfaces closer to the hands and minds of tabletop users, a study of surface gestures is necessary. In this paper, 20 participants were observed; the 1080 gestures the authors witnessed were analyzed and paired with think-aloud data for 27 commands performed with one and two hands. Based on the data, the authors conclude that both the complete user-defined gesture set and the taxonomy of surface gestures are useful for tabletop user interface design.
In this paper, both the taxonomy of surface gestures and the user-defined gesture set are conducive to gesture design for surface technology, even though the data were not collected from intercultural users. Still, we have something to count on.
Overall, the authors did a good job on the user study. The data they collected are reliable, and the conclusions they drew are convincing. I believe the user study methodology is one of the contributions of this paper.
This paper does not have a hugely significant contribution. All I learned from it is that we need to conduct a user study before we design something. The user-defined gesture set is helpful; however, users' gestures can still vary with many other factors.
To make the conclusions more convincing, I recommend the authors examine learnability by comparing their user-defined gesture set with existing modern tabletop gestures. It seems to me that some of the gestures they give are not simple enough, that is, they take longer to perform.
A brief summary:
In recent years, surface computing has given users fixed gestures defined by system designers. By observing people who have no stereotypes about gesture operation, we can build a taxonomy of the gesture variants and provide users with less rigid gestures in advance.
2 positive topics:
1. This paper provides strong material you can use if you are going to design gesture operations. I think these materials can be used not only for surface gestures but also for mouse clicking and scrolling. Why do I say this? They analyze motions we perform in a desktop interface, such as moving an object a little, moving an object a lot, selecting a single object, rotating the screen, rotating an object, shrinking text or a diagram, deleting an object, zooming the screen in and out, opening a file or a hyperlink, going to the next page, cutting a connection, minimizing a window, accepting an alert confirmation, accessing a menu, asking for the help page, undoing something, and switching text or workspaces. We encounter these operations when using web pages and desktop operating systems through mouse and keyboard input. This paper also reminds me that mouse input is single-point input, and I started wondering whether mouse input could become multi-cursor input. Hopefully I can find some inspiration in this paper while I am in the shower...
2. Last week we wrote a review of gesture output, and this week we are reading a paper about the taxonomy of gesture input. The previous one is about computer output, and this week's is about user input to the computer. I am very confused about why the gestures they discuss are so completely different. We can classify the surface gestures people input to the computer into four main parts (form, nature, binding, and flow), but for computer output we could only use single-stroke English characters. Why can't we let the computer output the same gestures we use for input? Those gestures are the most mature way to represent operations. While trying to resolve this confusion, I found an issue: gesture output can only provide single-point movement, which is totally different from what we do by hand. We have ten fingers and two hands, so why not build multi-point gesture output for the computer? Maybe we could divide the foil on the screen into pieces and make them move individually.
2 criticisms:
1. As the article says, they want to harness the wisdom of crowds to build a much smarter gesture set, so they excluded users with Macintosh and Windows experience to avoid their influence. However, in my opinion the wisdom of crowds should not ban those people. Indeed, users with Macintosh and Windows experience have strong stereotypes about gesture input, but such people are the great majority and we should not exclude them. Besides, people cannot totally escape the influence of Macintosh and Windows: the gestures that control them are deeply rooted in people's minds through movies, science fiction, social media, and daily life. Even if such "pure" users exist, we had better group them into one set and compare them with the less pure users. So, in this paper, dividing out the pure users is less important than asking: what can we do about the influence of Macintosh and Windows?
 When this paper discusses developing user-defined gestures, it does not consider the size of the surface on which gestures are input; the authors used a Microsoft Surface computing device. This means all the research is based on a huge touch screen, as large as a lunch table. In this situation, gestures are not constrained to a single hand. But this situation may lead users into confusion. If we tested users on a limited surface, they would have to reduce their use of two-handed gestures and find replacements for gestures such as pan, maximize, and enlarge. However, would those replacements really be the same gestures users would choose on a large screen when restricted to one hand? That is an issue still to be explored. So I think this paper can only provide user-experience data for large screens; for mobile devices, it can only offer some direction.
This paper tries to develop a whole new gesture set that truly reflects users' minds. It aims to help designers avoid the nasty problems that arise when building gestures for systems, such as: what kinds of gestures do non-technical users make, and does the number of fingers matter as it does in many designer-defined gesture sets? There are four important findings in this paper. First, users rarely care about the number of fingers they employ. Second, desktop idioms strongly influence users' mental models. Third, one hand is preferred to two. Fourth, some commands elicit little gestural agreement, suggesting the need for on-screen widgets.
+ This work helps bring interactive surfaces closer to the hands and minds of users. One of the most important aspects is that the authors collected a large amount of test data to validate their theory of user-defined gestures. We can learn many important insights about users' minds from those data. For instance, referents' conceptual complexity correlates significantly with planning time but inversely with average ratings of gesture ease. Meanwhile, gesture articulation time does not significantly affect goodness ratings, but it does affect ease ratings. Moreover, gestures that take longer to perform are generally rated as easier, perhaps because they are smoother or less hasty.
+ Another important point I would like to make is that the taxonomy of surface gestures can be useful for analyzing and characterizing gestures in surface computing. There are three reasons supporting this conclusion. First, the taxonomy helps us quickly find the right category in which to develop the best gestures, thanks to the distinct properties of each category. Second, within the same category, we can use the different user preferences as design references. Finally, as the authors state, we can translate insights into users' mental models into implications for technology and design.
- There are two concerns about this work. First, the hypothesis that the "wisdom of crowds" generates a better gesture set than experts needs to be validated. The best interaction between users and computers may not be determined by the majority of people; design by majority vote can turn out terribly, as the world often shows. For this reason, the idea of user-defined gestures could be worthless. Second, the gestures users define may already be biased, as one participant's remark reveals: "Anything I can do that mimics Windows—that makes my life easier." I think part of our mental model has been shaped by Windows for so long that we simply behave as if we were using Windows when we use touch tablets.
- Two further things need to be considered, as the authors mention in the last paragraph. First, participants could not revise gestures they had defined earlier, even if they later found that a previous gesture would suit a new task better. Second, application context could impact users' choice of gestures, as could the larger contexts of organization and culture. The authors might obtain a totally different data set from participants with diverse cultural backgrounds.
This paper presents a user study on gesture input for tabletop systems. The participants were all educated Americans without a background in computer science or interface design, and none had ever used touch devices such as iPhones.
The authors presented each user with an animated action that would be the result of a gesture. The user then had to decide which gesture would be most suitable to cause that action. Users "thought aloud" while performing, and their gestures were recorded.