[Record 1]
Title: Making Tabletops Useful with Applications, Frameworks and Multi-Tasking
Published: Barcelona: Universitat Pompeu Fabra, 01/2015
Pages: 210
Language: English
Abstract:
The progressive appearance of affordable tabletop technology urges human-computer interaction researchers to provide the methods needed to make such devices as useful as possible to their users. Studies show that tabletops have distinctive characteristics that can be especially useful for solving certain types of problems, but this potential has arguably not yet translated into real-world applications. We theorize that the key components that can transform these systems into useful tools are application frameworks that take the devices' affordances into account, a third-party application ecosystem, and multi-application systems supporting concurrent multitasking. In this dissertation we address these key components. First, we explore the distinctive affordances of tabletops through two cases: TurTan, a tangible programming language in the education context, and SongExplorer, a music collection browser for large databases. Next, to address the difficulty of building applications that exploit these affordances, we focus on software frameworks that support the process of building tabletop applications, with two different approaches: ofxTableGestures, targeting programmers, and MTCF, designed for music and sound artists. Finally, recognizing that making useful applications is only one part of the problem, we focus on a fundamental issue of multi-application tabletop systems: the difficulty of supporting multi-user concurrent multitasking with third-party applications. After analyzing the possible approaches, we present GestureAgents, a content-based, distributed, application-centric disambiguation mechanism and its implementation, which solves this problem in a generic fashion and is also useful for other shareable interfaces, including uncoupled ones.
Keywords: Applications; Collaboration; Frameworks; HCI; interaction; Multi-Tasking; Shared interfaces; tabletop
Author: Julià, Carles F.
URL: http://phenicx.upf.edu/node/192

[Record 2]
Title: GestureAgents: An Agent-Based Framework for Concurrent Multi-Task Multi-User Interaction
Published: ACM, 10/02/2013
Language: English
Abstract:
While the HCI community has put considerable effort into creating physical interfaces for collaboration, studying multi-user interaction dynamics, and creating specific applications to support (and test) these phenomena, it has not addressed the problem of having multiple applications share the same interactive space. Having an ecology of rich interactive programs sharing the same interfaces raises questions about how to deal with interaction ambiguity in a cross-application way while still allowing different programmers the freedom to build rich, unconstrained interaction experiences. This paper describes GestureAgents, a framework demonstrating several techniques for coordinating different applications in order to support concurrent multi-user, multi-task interaction while still handling gesture ambiguity across multiple applications.
Keywords: agent exclusivity; Concurrent interaction; gesture framework; multi-user
Authors: Julià, Carles F.; Jordà, S.; Earnshaw, Nicolas
URL: http://www.mtg.upf.edu/system/files/publications/2013%20TEI13%20GestureAgents.pdf