Resource (Tangible Interaction)
 
Interactive Surface 
CHI09 - Slap widgets: http://tinyurl.com/ojosqr8
CHI04 - I/O Brush: http://tinyurl.com/nr4jfr9
CHI02 - Illuminating Clay: http://tinyurl.com/prhjwoc
CHI14 - Kickables: http://tinyurl.com/ohlxoas
UIST02 - Actuated Workbench: http://tinyurl.com/oxqorsr
 
Token+Constraint 
CHI01 - DataTiles: http://tinyurl.com/q98xqnx
 
Constructive Assembly
CHI06 - Topobo Backpacks: http://tinyurl.com/ozoyomb
 
TUI on Tablets
CHI12 - CapStones: http://tinyurl.com/qdq3qhd
TEI13 - Magnetic Appcessories: http://tinyurl.com/qars6p2
UIST11 - Portico: http://tinyurl.com/lubnnvh
CHI13 - GaussBits: http://tinyurl.com/p853swn
CHI14 - GaussBricks: http://tinyurl.com/ondbb64
 
TUI on Materiality
UIST12 - Jamming UI: http://tinyurl.com/nhnjytc
TEI14 - JamSheet: http://tinyurl.com/p75rst3
 
 
Resource (Mobile Interaction)
 
Sensing Techniques

Camera
Cuddly: Enchant Your Soft Objects with a Mobile Phone, SIGGRAPH Asia 2013 Emerging Technologies
Paper link (not yet)

Capturing Additional Input Dimensions
Paper link (not yet)
Paper link (not yet)

Back-of-Device Interaction

Moving Interaction Into Free Space (Around-Device Interaction)

One-Handed Use

Mobile Text Entry (Typing)

Multi-Device Interaction
...
inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation
 
沈超
Summary:
1. The inFORM system is a shape display that supports fast, real-time 2.5D shape actuation, co-located projected graphics, object tracking, and direct manipulation. The authors seek to bring the dynamism of the visually perceived affordances of GUIs to physical interaction by using shape-changing UIs.
2. It introduces and explores the design space of dynamic physical affordances and constraints.
Positive:
1. The system is difficult to implement; the authors invested considerable cost and time to build it.
2. The key contribution is the concept of a 2.5D shape display for human-computer interaction. This is a good idea with commercial potential: similar actuated surfaces have long been used successfully on stage because they leave a strong visual impression on the audience. Moreover, just as we accept 2D displays as representations of 3D physical space, we can accept a 2.5D display that adds a haptic half-dimension.
3. Dynamic affordances and dynamic constraints are opposing concepts. The authors challenge our intuition by showing that constraints are also valuable: physical space is full of constraints, and constraints give the user a more realistic feeling in interaction.
Negative:
1. There is no user study, so the authors cannot show that this way of interacting between human and computer actually works well.
2. Because physical objects rest on the display, it must stay horizontal, which is inconvenient for mobile phones and other portable devices.
3. A better material is needed to give users realistic feedback; the dynamic display does not yet fit our mental model.
 
Han-Yu,Wang
Summary
        This work outlines potential interaction techniques and introduces Dynamic Physical Affordances and Constraints with the inFORM system. In inFORM, affordances are physical elements that the user can touch: depending on how the system renders them, they either react directly to touch or react to displacement as the user pushes or pulls them. The authors explore the design space of Dynamic Physical Affordances and Constraints and describe methods for actuating physical objects on actuated shape displays. A set of motivating examples demonstrates how dynamic affordances, constraints, and object actuation can create novel interaction possibilities.
 
Positive
        + Nice Demo Video, Really Cool!!  Waoooooooooooo~~ What the Sorcery is this !!!
 
        + Lots of cool applications and scenarios.
 
Negative
        - Oh my god... super expensive, and the volume of this system is too big.
 
        - low resolution and non-scalable
 
許嘉容
[Summary]
This paper proposes inFORM, a system providing variable stiffness rendering and real-time user input through direct touch and tangible interaction. The authors also demonstrate a set of motivating examples of how dynamic affordances, constraints, and object actuation can create novel interaction possibilities.
 
[Positive: inFORM as Input or Output] 
inFORM as an output can guide interaction with dynamic constraints and actuate objects on the shape display. The most amazing part for me is inFORM as an input, transforming gestures into physical output. This feature allows people to control objects remotely and make things happen even when they are not there.
 
[Positive: Implementation of inFORM] 
In my opinion, a platform like inFORM is not a new idea, and inFORM is not perfect because of power consumption and resolution issues. However, the authors chose the straightforward solution (each actuator controls one pin) and showed the determination to implement it. Overall, the demos are overwhelming.
 
[Negative: User Evaluation] 
inFORM emphasizes dynamically changing UIs, which differentiates it from previous work, and it also uses constraints to guide interaction. Thus, how users react to 3D cues and how they evaluate this system are important open questions.
 
[Negative: Hardware Solution of inFORM] 
The system uses 900 mechanical actuators to control 900 pins, causing high power consumption and limited actuator durability. Furthermore, the one-to-one mapping constrains the resolution of the display. The hardware solution is not efficient and is hard to scale up for commercialization.
 
許鈞彥
A brief summary
inFORM outlines potential interaction techniques and introduces Dynamic Physical Affordances and Constraints, including methods for actuating physical objects on actuated shape displays. The more general-purpose role of dynamic shape change opens possibilities for using shape change both for content and as UI elements. The system is built from 900 mechanical actuators, an overhead depth camera, and a projector. It enables many new interaction techniques for shape-changing UIs, such as dynamic affordances and dynamic constraints, and the examples demonstrate how dynamic affordances, constraints, and object actuation can create novel interaction possibilities.
 
2 "positive" topics
 
1. New design for dynamically changing UIs
Most past research on shape displays focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. inFORM gives us a great example that breaks this stereotype, which may give us more chances to think of greater ideas in the future!
 
2. Physical feedback
The system provides physical feedback, which is a good design for certain situations. For example, a physical button gives the user feedback after it is pressed, making the interaction more engaging and real. As another example, combining dynamic physical constraints with audio feedback provides a vivid interface between user and device. It would be wonderful to experience this.
 
2 "criticisms"
 
1. Too expensive
inFORM provides a complete device of a kind that had never been implemented before, which feels quite amazing. However, although physical feedback gives users a better experience, it is too expensive to build in practice, and when doing research it is difficult to have as much funding as inFORM's budget. All in all, inFORM is quite amazing!

2. Current function is not obvious
inFORM has strong capabilities in many respects. However, there is no obvious hint about what it is doing at a given moment. For instance, inFORM can restrict a physical ball with dynamic constraints, but as a user I cannot tell whether it is currently constraining the ball or performing some other capability (e.g., acting as a pressable button).
 
李姿誼
Review:
Previous studies were often limited by the static nature of physical artifacts, which cannot easily change their form. In this study, the authors present inFORM to address this problem: it can facilitate interaction by providing dynamic physical affordances through shape change and can restrict interaction by guiding users with dynamic physical constraints. The system also enables users to manipulate the interface by actuating physical objects.
 
Positive:
1. Most people have experience playing with construction toys, so the idea itself is not hard to come up with. Carrying it out, however, may be difficult, and I think that is why this study stands out from others. In other words, this study presents new technology rather than a new idea.
2. Another contribution is that the study demonstrates many examples of how this technique can create novel interaction possibilities. One of the applications is presenting 3D surfaces, which is really useful since it is hard to imagine a 3D surface on a 2D interface.
 
Criticism:
1. According to Wikipedia, an affordance is a property of an object that gives an individual hints on how to use the object to perform an action. I think inFORM realizes only part of this property, because some of its functions are not intuitive and one can hardly imagine how to use them.
2. Another concern is that no user testing was conducted. Through user testing, we could learn more about the feasibility and value of this study.
3. Because the technique interacts with humans by changing the height of each pin, the system must occupy a lot of space. Thus, I think this technique cannot be used in mobile devices, which have to be thin and light.
 
黃冠捷
Summary:
The authors built impressive hardware that makes virtual-physical interaction a reality. The device facilitates interaction through dynamic affordances, restricts it through dynamic constraints, and manipulates passive objects through shape change. They explore the design space of dynamic affordances and constraints, interact with physical objects through a shape display, and present a state-of-the-art system for fast, real-time 2.5D shape actuation, co-located projected graphics, object tracking, and direct manipulation, along with three applications that demonstrate the potential of these interactions. In this way, users can interact with the computer more richly, because the computer can now respond not only visually but also physically through the moving pins.
 
Positive topics:
 
1. They make remote control of real objects come true. Just put objects on the inFORM device and you can interact with them physically from anywhere. If the object is active (such as a robot), it may even use inFORM to interact with humans.
 
2. inFORM can guide the user toward what it wants the user to do, so it can be combined with education. For example, in piano learning, the user could learn which key to play next by watching a pin fall or rise.
 
3. inFORM works well with the Kinect and projector as a self-contained artifact. If there were a huge inFORM, it might be good to use in a show.
 
Criticisms:
 
1. It is very expensive and uses a lot of energy. The device is cool, but users can never bring it home (or may never have the chance to play with it).
 
2. There is no user study, so we do not know how well users actually interact with it. I think most users would agree inFORM is cool, but is it good to use? We will never know.
 
Sander Valstar
Brief summary
The paper presents an interactive surface that can provide the user with physical shapes, a so-called “shape display”. The shape display consists of 900 pins that can move up and down. A projector and an RGBD-camera are placed above the surface. 
The authors believe that shape displays need to provide three types of functionality: facilitating through dynamic affordances, restricting through dynamic constraints, and manipulating passive objects through shape change. Some uses of the surface presented by the authors are showing and interacting with 3D surfaces, interacting with tokens, and providing user feedback for another device.
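As an illustrative aside, here is a minimal Python sketch (not the authors' implementation) of the sensing-to-actuation loop described above: an overhead depth image is downsampled into target heights for a pin grid. The 30x30 factorization of the 900 pins, the depth range, and the pin travel are assumptions made for illustration only.

import numpy as np

GRID = 30                       # 900 pins assumed to be arranged as a 30x30 grid
PIN_TRAVEL_MM = 100.0           # assumed maximum pin extension
DEPTH_NEAR_MM, DEPTH_FAR_MM = 900.0, 1200.0   # assumed working range of the overhead camera

def depth_to_pin_heights(depth_mm):
    """Downsample an overhead depth image into target heights for the pin grid.

    Closer surfaces (e.g., a hand above the table) produce taller pins; this is only a
    sketch of a 2.5D relief mapping, not inFORM's actual control code.
    """
    h, w = depth_mm.shape
    # Average the depth inside each pin's cell of the image.
    cells = depth_mm[: h // GRID * GRID, : w // GRID * GRID]
    cells = cells.reshape(GRID, h // GRID, GRID, w // GRID).mean(axis=(1, 3))
    # Normalize: near -> 1.0 (pin fully up), far -> 0.0 (pin flush with the table).
    norm = np.clip((DEPTH_FAR_MM - cells) / (DEPTH_FAR_MM - DEPTH_NEAR_MM), 0.0, 1.0)
    return norm * PIN_TRAVEL_MM  # per-pin target height in millimetres

# Example: a flat scene with a closer blob in the middle raises the centre pins.
depth = np.full((480, 640), DEPTH_FAR_MM)
depth[200:280, 280:360] = 1000.0
heights = depth_to_pin_heights(depth)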
...
Lumino: tangible blocks for tabletop computers based on glass fiber bundles
 
沈超
Summary:
1. The concept of using glass fiber bundles to extend sensing beyond the diffuser.
2. The use of fiber optics to resolve occlusion between tangible blocks.
3. The framework of blocks, markers, and mechanical constraints.
Positive:
1. The concept of sensing beyond the diffuser reminds us to think about linking 3D physical space to the 2D display space, rather than only linking 2D physical space to it.
2. Capturing 3D information with a 2D sensor is a big problem today. The authors use optical theory and mechanical design to think about this problem, and find that if we treat an object as a group of basic geometric primitives such as spheres and cubes, the problem can be solved in a preliminary way.
3. The authors give three ways to design the bundles and three ways to design the markers in this system, compare these methods, and give three examples of usage situations. Their way of writing the paper and designing the experiments is worth studying.
Negative:
1. Although the method solves the problem of sensing beyond the diffuser, sensing higher-level objects has very low precision and demanding requirements. The authors did not run a user study, and I suspect the reason is that users would find the approach very hard to use. Because of its design, the method will also be hard to extend to sensing complex objects in the future.
2. Using many physical objects on a display is not convenient for the user. A user may only need a pen or a similar tool for feedback and input, because it can be carried anywhere and used any time. We need a user study to examine this and then create methods that solve the problems users really have.
 
Han-Yu,Wang
Summary
        In this paper, the authors demonstrate how to sense 3D arrangements of building blocks with a regular, unmodified diffuse illumination table. The main idea is to use glass fiber bundles to move the visual focus of the table to the required location in the structure, and to use the same glass fiber bundle to rearrange marker images into a 2D configuration the table can read. They present six classes of blocks and matching marker designs, each of which is optimized for different requirements, and show three demo applications. They claim that this concept, in which each block contains a glass fiber bundle, has the potential to extend the applications of diffuse illumination display systems.
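As a rough aside, the following Python sketch illustrates the decoding idea described above under heavily simplified assumptions: each block's fiber bundle is assumed to route its marker into its own horizontal strip of the shared footprint (a crude stand-in for the offset-block design), so the table camera sees all stacked layers side by side as a flat 2D pattern. The strip layout, bit coding, and thresholds are invented for illustration and are not the paper's actual marker design.

import numpy as np

LAYERS = 3        # assumed maximum stacking height for this sketch
BITS = 4          # assumed bits per block marker (16 block types)

def decode_stack(footprint):
    """Read the IDs of stacked blocks from one block footprint seen by the table camera."""
    h, w = footprint.shape
    ids = []
    for layer in range(LAYERS):
        strip = footprint[layer * h // LAYERS : (layer + 1) * h // LAYERS, :]
        if strip.mean() < 0.1:                 # empty strip: no block at this layer
            break
        bit_cells = np.array_split(strip, BITS, axis=1)
        bits = [1 if cell.mean() > 0.5 else 0 for cell in bit_cells]
        ids.append(int("".join(map(str, bits)), 2))
    return ids

# Example: a synthetic footprint encoding block 0b1010 on the table and 0b0110 stacked on it.
img = np.zeros((30, 40))
img[0:10, 0:10] = img[0:10, 20:30] = 1.0       # layer 0 marker bits: 1 0 1 0
img[10:20, 10:20] = img[10:20, 20:30] = 1.0    # layer 1 marker bits: 0 1 1 0
print(decode_stack(img))                       # -> [10, 6]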
 
Positive
        + The idea of using glass fiber bundles to modify the light path enables sensing 3D arrangements of building blocks with a regular, unmodified diffuse illumination table.
        
        + They present six classes of blocks with complete analysis and implementation details.
 
Negative
        - No User Study. I am not sure if user really like these applications.
 
        - Scenario is not strong, and I don't know how to evaluate this system.
        
許嘉容
[Summary]
This paper demonstrates how to track 3D arrangements of building blocks on a table's surface without modifying the table. The authors present Lumino, a system of building blocks, and use glass fiber bundles in each block, allowing the built-in camera to recognize each block's marker.
 
[Positive: Innovative Approach to Tabletop Interaction] 
Many tangible physical building block systems sense connections between blocks, requiring extra effort to maintain batteries. This paper proposes sensing 3D arrangements of building blocks with a regular, unmodified diffuse illumination table. Using glass fiber bundles allows the surface to recognize blocks even when they are located above the surface.
 
[Positive: Great Writing Style] 
Reading this paper is like reading a novel! In the introduction, the authors present the difference between their solution and related work, a brief introduction to Luminos, and their benefits and contribution. They also emphasize the main contribution in the engineering domain and analyze the specific strengths and limitations of each glass fiber block in detail.
 
[Negative: Solution for MS Surface] 
The surface serves as a screen, displaying output to users. Thus, if the camera could be moved above the table, the solution could be totally different. I think this paper proposes an amazing idea given the setup described in the paper. However, is this the best solution for tabletop interaction?
 
[Negative: Mechanical Constraints] 
The authors propose mechanical constraints to control rotation. I think the block shape may influence the reflected illumination. Perhaps marking the surfaces of the blocks with visual hints could be a much easier solution.
 
許鈞彥
A brief summary
Lumino demonstrates how to use objects arranged in a three-dimensional structure without modifying the table. To prevent the marker image from blurring, the authors exploit an effect they call "deferred diffusion". They present three types of square fiber bundles (straight, offset, and demagnification); because square bundles cause a certain amount of clipping when blocks are rotated, they also provide three types of round bundles. The system has many useful applications, and it breaks through with one main idea: sensing multiple blocks in the same place by using glass fiber to move the visual focus of the table to the required location in the structure, and using the same fiber bundle to rearrange the marker images into a 2D configuration the table can read.
 
2 "positive" topics
 
1. Sensing multiple blocks in the same place
Because of the characteristics of physical objects, they cannot normally be penetrated by light, but this paper breaks through that restriction. Using glass fiber bundles and some fabrication, the authors implement special blocks whose markers can be detected even when stacked on top of one another, expanding the usability limitation from 2D layouts to 3D arrangements.
 
2. Obvious tangible feedback
Although technology has progressed very quickly in the 21st century, touchscreens still have limitations. One problem is that there is no physical feedback when a touchscreen device wants to simulate the real world. Lumino makes a big breakthrough on this topic, and there is no doubt that after the Lumino research there will be more ideas and possibilities to solve this problem more completely in the future.
 
2 "criticisms"
 
1. No user study in this research
In spite of the thorough analysis in the Lumino research, the authors do not run a user study on whether their design works well. Although their results are really attractive and awesome, a user study would give the results a stronger standpoint and persuade more readers.
 
2. Resolution vs. expense
To sense multiple blocks in the same place, the authors use glass fiber bundles and some optical techniques, but this decreases the resolution. To compensate, we could use larger blocks or increase the camera resolution; however, large blocks are not convenient and many high-resolution cameras may be very expensive. If they can solve this problem, Lumino will be a wonderful technique.
 
李姿誼
Review:
This study presents Lumino, tangible blocks that contain glass fiber bundles, to solve the problem that a diffuse illumination table is not able to identify the arrangement of 3D objects. The authors present three types of glass fiber bundle blocks: straight blocks, offset blocks, and demagnification blocks. Each type has its advantages and limitations; for example, demagnification blocks offer maximum flexibility whereas offset blocks offer maximum marker capacity. Also, the blocks are unpowered and maintenance-free, keeping larger numbers of blocks manageable.
 
Positive:
1. One contribution is that, without modifying the diffuse illumination table and without equipping blocks with power or magnets, we can still interact with the computer using 3D objects. This reminds us of a really important concept when designing an interface: we should not first think of solving the problem by adding something to the existing devices, but should first ask how the problem could be solved without any additions.
2. In addition to presenting three types of glass fiber bundles, the authors also contrast these three types of blocks, which gives us more information about how to choose an appropriate one when designing our own interface. The authors also describe the process they use to make the fiber bundles, so by following these steps we could make our own prototype. The writing style of this study is also worth learning from because of its comprehensive and detailed content.
 
Criticism:
1. Even though the aim of the article is to provide a new technique to solve an existing problem, the authors should do user testing to verify that this interface is convenient and valuable to users. In our daily lives we do not use blocks to construct things or communicate with each other, since we are not children or architects, so it is really hard for me to imagine its application in daily life.
2. Since the diffuse illumination table has to track objects by identifying their corresponding markers, the size of the objects may be a critical issue that should be considered. For example, the table may not be able to track the arrangement of very small objects, such as a needle.
 
黃冠捷
Summary:
 
This paper is about improving tangible objects on tabletop computers. Tabletops use computer vision to recognize objects placed on them, but they can only recognize objects in 2D. Lumino uses glass fiber to make a breakthrough: tangible objects can now be constructed in 3D, because the tabletop computer can see through the glass fiber and recognize the markers in a stack of objects. The authors developed three classes of blocks and matching marker designs, each optimized for different requirements.
One remaining limitation is that if the stack grows too large, too little light gets through and the tabletop computer can no longer recognize it.
 
Positive topics:
 
1. They bring tangible objects on tabletops from 2D to 3D. It is a big breakthrough: for years tangible objects could only be recognized in 2D, and Lumino makes tangible objects as fun as LEGO.
 
2. They find a new element, glass fiber, for constructing tangible objects. It is a new concept that objects might be constructed from different materials with different effects.
 
Criticisms:
 
1. What about the light in the environment? I would like to know whether stronger environmental light affects the glass fiber.
 
2. There is no study of the recognition rate. A user study may not be needed in this paper, but the recognition rate is important for me. I would also like to know the advantage of using tangible objects on a tabletop compared with using a Kinect to recognize objects on the tabletop.
 
Sander Valstar
Brief summary
This paper presents a technique for tangible blocks on tabletops using Glass Fiber Bundles. The authors are interested in this, because it would allow for multiple layers of tangible objects on a tabletop device.
The paper introduces three kinds of glass fiber blocks, each with its own upsides and downsides: the straight block, which is easy to produce; the offset block, which allows for the largest sets of marker-based blocks; and the demagnifying block, which allows for maximum flexibility in transferring markers to the table. Three applications of these glass fiber bundle blocks are also introduced: a checkers game, a photo manipulation tool, and a construction tool in which blocks can be used to build structures.
...
Bringing physics to the surface
 
許嘉容
[Summary]
This paper explores the intersection of emerging surface technologies, capable of sensing multiple contacts and often shape information, and advanced games physics engines. The authors define a technique for modeling the data sensed from such surfaces as input within a physics simulation, and also retain a high fidelity of interaction.
 
[Positive: Simplification of input within a physics sim.] 
I like the idea mentioned in this paper that “one need not completely replicate the physics of object manipulation in order to construct useful applications exhibiting physics-inspired behavior.” This paper thinks outside the box and redefines the problem: the problem is not how to precisely describe each parameter of reality, but how to provide predictable 2D experiences.
 
[Positive: Pseudo-3D interaction like cupping a ball] 
Even though the interactive space is 2D, the proposed technique provides new modes of interaction, such as cupping a ball. Cupping a ball means lifting from the bottom. However, there is no depth sensing in a 2D interactive surface. It is interesting to see pseudo-3D interaction being realized in a 2D interface.
 
[Negative: The evaluation of fidelity ] 
This paper mainly contributes a simplification of coding while maintaining a high degree of fidelity. Nevertheless, in the user study we only see quantitative data on time to completion. Since fidelity is one of the focuses, the authors could also quantify a fidelity score.
 
[Negative: Manipulating objects in 3D through 2D surface] 
Overall, the system performs well not only in fidelity but also in its responses to gestures. I wonder how users evaluate one specific gesture category, such as manipulating objects in 3D. Manipulating 3D objects through a 2D surface conflicts with fidelity. I suggest evaluating the gestures and how the objects respond to them.
 
許鈞彥
◎ A brief summary
 
This paper provides a technique for modeling the data sensed from interactive surfaces as input within a physics simulation, which can be used to add real-world dynamics to such surfaces. Surface input can be injected into the simulation in several ways: direct forces, virtual joints, particles, deformable 2D/3D meshes, and proxy objects. This physics-based interaction takes advantage of the fidelity of sensing provided by vision-based interactive surfaces, for example piling objects, interacting with a ball using collisions and friction, and tearing a mesh. Ultimately, it can enable in the virtual domain the range of object manipulation strategies available to us in the real world.
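As an illustrative aside, here is a minimal Python sketch of the proxy idea described above, assuming the pymunk 2D physics library: each sensed contact point becomes a small, short-lived kinematic body in the simulation, so virtual objects are pushed by collisions and friction rather than by directly setting their positions. The contact format, radii, and constants are assumptions, not the paper's implementation.

import pymunk

space = pymunk.Space()
space.gravity = (0, 0)                      # top-down tabletop: no gravity in the plane

# One dynamic object on the "table": a box the user can push around.
box = pymunk.Body(mass=1.0, moment=pymunk.moment_for_box(1.0, (60, 60)))
box.position = (300, 200)
box_shape = pymunk.Poly.create_box(box, size=(60, 60))
box_shape.friction = 0.8
space.add(box, box_shape)

def step(contacts, prev_contacts, dt=1 / 60):
    """Advance one frame. `contacts` is a list of (x, y) points sensed on the surface;
    `prev_contacts` are the matching points from the previous frame (same order assumed)."""
    proxies = []
    for (x, y), (px, py) in zip(contacts, prev_contacts):
        body = pymunk.Body(body_type=pymunk.Body.KINEMATIC)
        body.position = (x, y)
        body.velocity = ((x - px) / dt, (y - py) / dt)   # proxies carry the contact's motion
        shape = pymunk.Circle(body, radius=5)
        shape.friction = 0.8
        space.add(body, shape)
        proxies.append((body, shape))
    space.step(dt)                           # collisions and friction move the box naturally
    for body, shape in proxies:              # proxies live for a single frame only
        space.remove(body, shape)

# Example: a finger sliding right along the box's left edge nudges the box to the right.
step(contacts=[(268, 200)], prev_contacts=[(266, 200)])
print(box.position)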
 
◎ 2 "positive" topics
 
1. Many new, useful directions for interactive technology
Interactive technology is becoming more and more important and valuable in today's world. With this paper's ideas, the authors create many new methods and examples for approaching interactive technology. For example, if it could capture the interest of the children's market, that would without a doubt be an enormous market. In addition, the smart city, which is also a big issue in the 21st century, is deeply influenced by interaction techniques.
 
2. Physics simulation
I think this may be a new way for humans to interact with mobile devices, and there are some cool ideas about physics simulation in this paper. For instance, under the topic of compound objects, a ball sits in a carton and can roll but is limited by the carton's border. This example breaks through the original idea that a device's border could be replaced by a physical object's border.
 
◎ 2 "criticisms"
 
1. No tangible feedback
In my opinion, this is a really great idea for interactive technology. But if it is to completely replace the interaction between people and physical objects, I think tangible feedback from physical objects is inevitably a big issue for this idea. The current design is really good, but if one day a mobile device's display could also provide physical feedback to users, it would undoubtedly become a fantastic design.

2. Faster detection of hand movement
The current design detects fingers only when they are placed on the device's display, and this mechanism introduces some lag between the user's finger gesture and the response on the display. If we could detect fingers before they touch the display, or strengthen the device's computing capability, the lag might be reduced, providing a better user experience.
 
黃冠捷
Summary:
 
This paper is about how to interact with objects displayed "inside" the table. The authors use a vision-based touch table to detect user gestures and implement physical characteristics for on-screen objects. It introduces the type of input used to interact with the surface, the basic principles of physics involved, and four solutions (direct force, joints and springs, proxy objects, and particles). Users can interact with the objects via the touch table, even with complicated gesture input. It could be a new interaction style for game engines: no controller is needed, just control the objects with your hands and natural gestures.
 
Positive topics:
 
1. The physics algorithm is good. If it were used on today's touch devices, the artistry of user interfaces would become greater.
 
2. The "big size" touch device with vision-type is much cheaper than electric type touch device.
 
Criticisms:
 
1. There is no physical feedback, so the user cannot feel the real surface of the object. A device that produced force feedback for the user would be great.
 
2. The interaction would become even better with a 3D display.
 
余子暘
Interactive surface technology can sense multiple touch points and also model more complex contact shapes. The authors therefore simulate the manipulation of real-world objects through interaction with virtual objects. The paper presents the techniques used to simulate real objects and reports the limitations and challenges faced during implementation and the user study.
 
Despite the shortcomings of the user study, it is conceivable that users will be able to interact with digital content in a way that adds a physical or tangible quality to the interaction. The idea is novel, and it is valuable to conduct further research.
                
This paper serves as a warning to researchers whose topic is mapping the physical world onto digital objects; the list of challenges shown in the paper is the main contribution of this study.
 
There is no significant contribution in this paper, since the proposed simulation does not match users' expectations; in that sense, the paper is not worth reading because the methods did not succeed.
 
This study also supports complex materials such as soft bodies and cloth in the simulation. This complicated goal causes an incomplete design and eventually leads to weak results. I believe that narrowing the scope of the study would help achieve a better outcome.
 
王俊豪
A brief summary:
The paper introduces today's real-time physics simulation and provides an object proxy model. It also provides comparative data on how people react when operating such real-time physics surfaces. Finally, it states the difficulties we have yet to conquer.
 
2 positive topics: 
[1] I think the user study is clever and convincing. It focuses on user experience, raising the research from the level of mechanism to the level of the human mind. The three tasks the authors designed analyze various behavioral and experiential aspects of interaction, and they offer reasonable explanations for the problems they encounter. Task 1 asks users to make an object disappear in certain situations, Task 2 to sort different assortments of objects, and Task 3 to steer. These are simple physical tasks we do every day, so everyone has expectations about how a rigid body should move at the next moment, yet the surface does not always behave as expected. The authors record people's different reactions, explain why things feel strange, and state what can be improved under the current limitations. Furthermore, their comparison with current real-time physics simulation prototypes also provides a powerful perspective.
 
[2] The proposed approach provides many useful physical simulations: controlling a ball with contours, stacking objects, folding, tearing, zooming, and anchoring flat objects, and even compound objects such as a trampoline. All of these simulations give us a three-dimensional-like impression. Take stacking as an example: in the real world we must lift something before we can stack it, and the lifting gesture is a problem on a surface; how do we grasp something, or put a finger on the bottom side of an object? Both gestures are easy in the real world but impossible on a two-dimensional plane, so I would expect stacking to fail or to behave strangely, with no control over which object ends up on top. The physics simulation in this implementation successfully conquers this problem.
 
2 criticisms: 
[1] Although the simulation conquers some of the problems that come from the lack of depth, it still offers no solution for controlling an object from underneath. The biggest difference between the real world and surface contact is whether we can touch the far side of an object. The authors simulate the visuals very convincingly, but they still run into this interaction problem, and the operation is not natural. At a meta level, I question why we should use natural behavior to control the simulation at all: instead of continuing to simulate three-dimensional effects on a flat surface, we could either develop three-dimensional displays or develop interaction styles that genuinely suit two-dimensional displays. I do not think continually digging into how to realize our three-dimensional world on a two-dimensional plate is a good idea.
 
[2] Although the authors introduce the object proxy model to produce more mature behavior, and all movement is based on fundamental physics, there is still a vivid gap between real-world behavior and the simulated motion. When we move objects, a peculiar vibration appears during displacement. I attribute this phenomenon to incomplete consideration of the physics involved. According to the article, the authors considered friction forces and collisions in the proxy-based simulation, but I do not think they sufficiently considered shearing forces: when two or more rigid objects push against each other, they tend to start vibrating. Another factor that may cause the vibration is the simulation's resolution in time or space, which can make us notice small displacements that also happen in daily life but that we never consciously perceived before.
 
呂永鈞
Summary: 
The aim of this paper is to allow practitioners to understand the nuances of various alternatives for simulating surface input within the physics world, so that they may further explore the intersection between interactive surfaces and physics. The authors demonstrate the applicability of creating natural and fluid physics-based interactions without the need either to explicitly program this behavior into the system or to recognize gestures; they use only a commercially available game physics engine and a prototype vision-based interactive surface.
 
Positive:
 
+ From this work we learn more about the future design of game physics engines. The particle proxy approach has two advantages. First, collisions appear more correct because they closely follow the shape of the contact; this is particularly important when using the flat or side of the hand, tangible objects, or generally any contacts other than fingertips. Second, because it makes few assumptions about the shape or movement of contacts, it imposes few limits on the manipulations a user may perform, whether they lead to collisions, friction forces, or a combination thereof.
 
+ This paper reviews much previous work and clearly explains the benefits and drawbacks of different approaches. Moreover, it outlines the differences between this approach and others, such as direct forces, virtual joints and springs, and deformable 2D/3D meshes. Most importantly, the authors take advantage of the fidelity of sensing provided by vision-based interactive surfaces, with the goal of enabling in a virtual domain the range of object manipulation strategies available to us in the real world.
 
Criticism: 
 
- The user study shows that this technique confuses users in many different ways, with comments such as “my hands are like magnets” or “I can press hard and stick my fingers”. It also makes users spend more effort to stop objects, because objects need opposing friction to stop. This suggests it may not be a good and friendly interface, even though the goal is to mimic the way we move objects in the real world.
 
- I do not see enough positive experimental feedback in this paper; more and varied user tests are needed to support the design. Moreover, the approach has two limitations. First, the system has no way of knowing how hard the user is pressing, so the amount of friction applied is instead proportional to the number of proxies applied to the object, which in turn depends on the surface area of the touch. Second, it is hard to grasp a virtual object by placing contacts on either side, because the system still offers only 2D control.
 
Sander Valstar
Brief summary
The paper presents a way of interacting with a tabletop system via a physics engine. This way, objects on the screen are not dragged around by directly linking them to finger positions, but by applying forces to them.
...
User-defined gestures for surface computing
 
許嘉容
[Summary]
This paper presents an approach to designing tabletop gestures that relies on eliciting gestures from non-technical users by first portraying the effect of a gesture and then asking users to perform its cause. The paper also contributes a complete user-defined gesture set, quantitative agreement scores, implications for surface technology, and a taxonomy of surface gestures.
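As a small aside, the agreement score mentioned here is computed, as the paper describes, by summing for each referent the squared fractions of participants who proposed identical gestures and then averaging over referents. The Python sketch below illustrates this; the gesture labels and data are hypothetical.

from collections import Counter

def agreement(proposals_by_referent):
    """proposals_by_referent maps a referent (command) to the list of gesture labels the
    participants proposed for it; identical labels are treated as identical gestures."""
    scores = []
    for referent, proposals in proposals_by_referent.items():
        n = len(proposals)
        groups = Counter(proposals)                      # group identical proposals
        scores.append(sum((size / n) ** 2 for size in groups.values()))
        # A referent where everyone agrees scores 1.0; all-different proposals score 1/n.
    return sum(scores) / len(scores)

# Hypothetical elicitation data: two referents, five participants each.
data = {
    "move object":   ["drag", "drag", "drag", "drag", "flick"],
    "delete object": ["drag offscreen", "scratch", "drag offscreen", "tap-hold", "scratch"],
}
print(round(agreement(data), 3))   # higher = more consensus across participants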
 
[Positive: Design gestures from non-tech users] 
The participants are non-technical users, so their responses to commands are natural. The authors observed their behavior and created a user-defined gesture set. The proposed set reveals a high degree of consistency across similar operations, and flexibility in the number of fingers, the palm, or the edges of the hands.
 
[Positive: Interesting mental model observation] 
This paper provides not only quantitative data but also qualitative data, presenting how users think when they perform gestures. I summarize two interesting points here: (1) the number of fingers may represent the force imposed on objects; (2) "a land beyond the screen": users intuitively expect more interactive space beside the screen. These observations look deeper into what users want and what they expect to have.
 
[Negative: Relation between timing consumption and Likert scale] 
There is no clear relation between how long users think aloud and how much they like a gesture. Since this paper aims at finding natural gestures, a better protocol might let users try all referents first and report their favorite gestures afterwards.
 
[Negative: Evaluation of the user-defined set ] 
The resulting user-defined gesture set is conflict-free and covers 57.0% of all gestures proposed. A conflict means that users control different commands with the same gesture; the authors eliminate conflicts by letting the referent with the largest group win the gesture. Given this, I would like to know how users evaluate the organized set.
 
許鈞彥
◎ A brief summary
 
Properties of the user-defined gesture set, such as ease of recognition, consistency, reversibility, and versatility through aliasing, make it a good candidate for deployment in tabletop systems. The authors rely on eliciting gestures from users and then developing a user-defined gesture set: a user performs a gesture after being prompted by an animation demonstrating its effect. Using a taxonomy of surface gestures, the 1080 gestures from 20 participants can be roughly classified along Form, Nature, Binding, and Flow. With such careful research steps, I think their analysis is really clear and conscientious.
 
◎ 2 "positive" topics
 
1. Great research analysis
There are many strong points in this paper's research analysis that are worth learning from. For instance, the user study recruits non-technical users, which makes the results more representative. In addition, the authors gather a large sample (e.g., 1080 gestures) and perform conscientious statistical analysis after the user testing, and their taxonomy is clear and detailed. In my opinion, although we all know these principles of doing research, the attitude the authors display is important and worth learning from.
 
2. Useful for better user experience
With user-defined gesture design, users are no longer limited to gestures created by system designers; the gestures conform to the minds of tabletop users. Given the diverse backgrounds of device users, this design can deeply improve the user experience of tablets and mobile devices, and open up more possibilities for future device design.
 
◎ 2 "criticisms"
 
1. The idea has since been implemented
This design was a really good idea in 2009, but from my point of view I have already seen similar designs in other devices by 2013. I do not know whether the same capability in today's products originates from this work, but one thing is certain: it is a great idea for breaking through the limitations of existing products.
 
2. Too complicated for other users
As a hypothetical case: if one day we removed all the built-in input gestures from our mobile devices and tablets, and every gesture had to be set up during the user's first use, several problems would follow. There might be too many gestures, confusing the user; many of the configured gestures might go unused most of the time; and other people wanting to use your device would be even more confused. Given these conditions, I think the approach carries some risk and should not be pushed beyond its limits.
 
黃冠捷
Summary:
 
This paper describes a research method and tries to find out what the natural gestures on a tabletop device are. Although the conclusion is that there is actually no common natural gesture for complicated actions, the study still finds some gestures that most people accept. The authors also find that gestures are hard to define, even for a professional HCI designer. One funny thing I would like to mention: they said this was a Windows world in 2009, but nowadays the Mac is more popular in the USA.
 
Positive topics:
 
1. They find that some natural gestures do exist, which is a big contribution for designers designing gestures for users. On the other hand, it also means users are teachable.
 
2. I am interested in whether users would be consistent if asked twice about the same referent. I mean, if we ask a user to perform "Move" on Tuesday and again on Sunday, will they do the same gesture as last time?
 
Criticisms:
 
1. I wonder whether a tabletop device is actually good to use, because users always put a lot of things on a table. Will they change their habits and stop using the tabletop as a place to lay objects?
 
2. They made a big contribution for people who study gestures: there is no need to study them anymore XD.
 
余子暘
To bring interactive surfaces closer to the hands and minds of tabletop users, a study of surface gestures is necessary. In this paper, 20 participants were observed, and the 1080 gestures the authors witnessed were analyzed and paired with think-aloud data for 27 commands performed with one and two hands. Based on this data, the authors conclude that both the complete user-defined gesture set and the taxonomy of surface gestures are useful for tabletop user interface design.
                                                   
Both the taxonomy of surface gestures and the user-defined gesture set are conducive to gesture design for surface technology, even though the data was not collected from a culturally diverse group of users. Still, we have something to count on.
 
Overall, the authors did a good job on the user study. The data they collected is reliable, and the conclusions they drew are convincing. I believe the user study methodology is one of the contributions of this paper.
 
On the other hand, this paper does not have a significant contribution; all I learned from it is that we need to conduct a user study before we design something. The user-defined gesture set is helpful, but a user's gestures can still vary with many other factors.
 
To make the conclusions more convincing, I recommend the authors examine learnability by comparing their user-defined gesture set with existing modern tabletop gestures. It seems to me that some of the gestures they propose are not simple enough, that is, they take longer to perform.
 
王俊豪
A brief summary:
In recent years, surface computing has given users fixed gestures defined by system designers. By observing people who have no preconceptions about gesture operation, we can build a taxonomy of the varied gestures and provide users with non-fixed gestures in advance.
 
2 positive topics: 
[1] This paper provides strong material you can use if you are going to design gesture operations. I think this material applies not only to surface gestures but also to mouse clicking and scrolling. Why do I say this? The authors analyze motions we commonly perform, such as move an object a little, move an object a lot, select a single object, rotate the screen, rotate an object, shrink text or a diagram, delete an object, zoom the screen in and out, open a file or hyperlink, go to the next page, cut a connection, minimize a window, accept an alert confirmation, access a menu, ask for the help page, undo, and switch text or workspaces. These are operations we encounter when using web pages and desktop operating systems with mouse and keyboard input. The paper also reminds me that mouse input is single-point input, and I started wondering whether mouse input could become multi-cursor input. I hope I can find some inspiration in this paper while in the shower...
 
[2] Last week we wrote a review on gesture output, and this week we are reading a paper on a taxonomy of gestures. The previous paper was about computer output, and this week's is about user input to the computer. I am puzzled that the gestures discussed are so different. Surface gestures that people use as input can be classified along four main dimensions: form, nature, binding, and flow, yet for computer output we can only use single-stroke characters. Why can the computer not output the same gestures we use for input? Those gestures are the most mature way to represent an operation. Thinking about this confusion, I found the issue: gesture output can only provide single-point movement, which is totally different from what we do with our hands; we have ten fingers and two hands. So why not build multi-point gesture output for the computer? Maybe we could divide the foil on the screen into pieces and make them move individually.
 
2 criticisms: 
[1] As the article says, the authors want to use the wisdom of crowds to build a better gesture set. They exclude users with Macintosh and Windows experience to prevent that influence. However, in my opinion the wisdom of crowds should not ban those people. Experience with Macintosh and Windows does create strong stereotypes about gesture input, but people with that experience are the great majority and should not be excluded. Besides, people can never completely escape the influence of Macintosh and Windows; the gestures used to control them are deeply rooted in people's minds through movies, science fiction, social media, and daily life. Even if truly uninfluenced people exist, it would be better to group them into one set and compare them with the less "pure" people. So in this paper, the important question is not how to separate out the people who are not "pure", but rather: given the influence of Macintosh and Windows, what can we do about it?
 
[2] When the paper develops user-defined gestures, it does not discuss the size of the input surface; the study uses a Microsoft Surface tabletop device. This means all the research is based on a huge touchscreen as large as a lunch table, so gestures are not constrained to a single hand. But this situation may lead users into confusion. If we tested users on a limited surface, they would have to reduce their use of two-handed gestures and find replacements for operations such as pan, maximize, and enlarge. Would users really make the same gestures on a large screen if they were restricted to one hand? That remains to be discovered. So I think this paper can only provide user experience data for large screens; for mobile devices it can only offer some direction.
 
呂永鈞
Summary:
This paper tries to develop a whole new gesture set that truly reflects users' minds. The authors aim to help designers avoid nasty questions when building gestures for systems, like what kinds of gestures non-technical users make, and whether the number of fingers matters as it does in many designer-defined gesture sets. There are four important findings in this paper. First, users rarely care about the number of fingers they employ. Second, desktop idioms strongly influence users' mental models. Third, one hand is preferred to two. Fourth, some commands elicit little gestural agreement, suggesting the need for on-screen widgets.
 
Positive Topics:
+ This work helps bring interactive surfaces closer to the hands and minds of users. One of the most important points is that the authors collect a lot of test data to validate their theory of user-defined gestures, and we can learn many insights about users' minds from that data. For instance, a referent's conceptual complexity correlates significantly with planning time but inversely with the average rating of gesture ease. Gesture articulation time does not significantly affect goodness ratings, but it does affect ease ratings: gestures that take longer to perform are generally rated as easier, perhaps because they are smoother or less hasty.
 
+ Another important point is that the taxonomy of surface gestures could be useful for analyzing and characterizing gestures in surface computing, for three reasons. First, the taxonomy can help us quickly find the right category in which to develop the best gestures, thanks to its distinct properties. Second, within the same category we can use the different preferences as design references. Finally, as the authors state, we can translate insights into users' mental models into implications for technology and design.
 
Criticisms:
- There are two concerns about this work. First, the hypothesis that the “wisdom of crowds” generates a better set than experts needs to be validated. The best interaction between users and computers may not be determined by the majority; it could be terrible design if everything were decided by majority vote, and for that reason the user-defined idea could be worthless. Second, the gestures users define may already be biased, as one participant noted: “Anything I can do that mimics Windows—that makes my life easier”. Part of our mental model has been shaped by Windows for so long that we simply behave as if using Windows when we use touch tablets.
 
- Two things need to be considered, as the authors mention in the last paragraph. First, participants could not go back and change gestures they had already defined, even if they later found a previous gesture would fit a task better. Second, application context could affect users' choice of gestures, as could the larger contexts of organization and culture; the authors might get a totally different data set from participants with diverse cultural backgrounds.
 
Sander Valstar
Brief summary
This paper presents a user study on gesture input for tabletop systems. The participants were all Educated Americans without a background in CS or interface Design. The participants also had never used touch devices such as iPhones etc.
The authors present the user with an animated action that is the result of a gesture. The user then has to decide which gesture is most suitable to cause this action. The users will “think out loud” and they are recorded.
...