Emotion regulation has considerable impact on our everyday lives, notably through the emotional biases that affect our decision making. A serious game built to train emotion regulation is presented and evaluated here. The evaluation consisted of usability testing followed by an experiment targeting the game's difficulty. The results suggest adequate usability and a level of difficulty that requires players to manage their emotions in order to develop a winning strategy.
Research on financial decision-making shows that traders and investors with strong emotion regulation capabilities perform better in trading. But how can others learn to regulate their emotions? ‘Learning by doing’ sounds like a straightforward approach, but how can one learn by doing when there is no feedback? This problem particularly applies to learning emotion regulation, because learners receive practically no feedback on their level of emotion regulation. Our research aims at providing a learning environment that helps decision-makers improve their emotion regulation. The approach is based on a serious game with real-time biofeedback. The game is set in a financial context, and the decision scenario is directly linked to biofeedback from the learner’s heart rate data. More specifically, depending on the learner’s ability to regulate emotions, the decision scenario of the game continuously adjusts and thereby becomes more (or less) difficult. The learner wears an electrocardiogram sensor that transfers the data via Bluetooth to the game. The game itself is evaluated at several levels.
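The abstract does not specify how the heart rate signal is mapped to difficulty. One plausible reading uses heart rate variability (e.g., RMSSD over a sliding window of RR intervals) as a proxy for regulation, raising difficulty while the learner stays regulated. The following sketch illustrates that idea only under this assumption; all function names, thresholds, and step sizes are hypothetical, not taken from the paper.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (ms),
    a standard short-term heart-rate-variability measure."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def adjust_difficulty(rr_window_ms, baseline_rmssd, difficulty, step=0.1):
    """Raise difficulty when the learner regulates well (HRV at or above
    baseline); lower it when HRV drops, signalling poor regulation.
    Illustrative logic, not the paper's adaptation rule."""
    current = rmssd(rr_window_ms)
    if current >= baseline_rmssd:
        return min(1.0, difficulty + step)
    return max(0.0, difficulty - step)

# Example: a 10-beat window of RR intervals streamed from the ECG sensor.
window = [812, 798, 830, 805, 790, 845, 800, 815, 795, 820]
print(adjust_difficulty(window, baseline_rmssd=25.0, difficulty=0.5))
```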
The development of engineered systems with properties of autonomy and intelligence has been a visionary research goal of the twentieth century. However, a number of persistent and fundamental problems continue to frustrate this goal. Behind these problems lies an outmoded industrial foundation for the contemporary discourse and practices of intelligent robotics, one that must be superseded as engineering progresses more deeply into molecular and biological modalities. These developments inspire the proposal of a paradigm of engineered synthetic intelligence as an alternative to artificial intelligence, in which intelligence is pursued in a bottom-up way from systems of molecular and cellular elements, designed and fabricated from the molecular level up. This paradigm no longer emphasizes the definition of representations and the logic of cognitive operations. Rather, it emphasizes the design of self-replicating, self-assembling and self-organizing biomolecular elements capable of generating cognizing systems as larger-scale assemblies, analogous to the neurobiological system manifesting human cognition.
The distinction between implicit, unselfconscious design cultures on the one hand and explicit, self-conscious design cultures on the other provides a principle for interrelating a variety of game design approaches within a coherent game design meta-model. In order of increasing design self-consciousness, these approaches include implicit design, ‘cookbook’ design methods, taxonomy- and ontology-based game design, theory-driven design and formalist reflexive design. Implicit design proceeds by copying existing examples of game designs, while ‘cookbook’ methods generalize from examples to create lists of design heuristics. Taxonomy- and ontology-based game design rests upon more systematic models of the types, features, elements, structure and properties of games. The theory-driven level involves designing game systems to facilitate game play motivated by cognitive, scientific and/or rhetorical theories of game affect and functionality, or incorporating technical innovations that provide the basis for new game mechanics and experiences. The formalist level represents the application of reflexive contemporary artistic perspectives to games, resulting in games that reflect upon, question or reveal game form. By placing these different approaches within a hierarchy of increasing self-consciousness of design practice, the meta-model gives a clear account of the roles of research and artistic methods in game design and innovation, and lays a foundation for more explicit design decision making and for game education curricula integrated with higher-level research.
Schema theory provides a foundation for the analysis of game play patterns created by players during their interaction with a game. Schema models derived from the analysis of play provide a rich explanatory framework for the cognitive processes underlying game play, as well as detailed hypotheses about the hierarchical structure of pleasures and rewards motivating players. Game engagement is accounted for as a process of schema selection or development, while immersion is explained in terms of levels of attentional demand in schema execution. However, schemas may not only be used to describe play; they might also be used actively as cognitive models within a game engine. Predesigned schema models are knowledge representations constituting anticipated or desired learned cognitive outcomes of play. Automated analysis of player schemas and comparison with predesigned target schemas can provide a foundation for a game engine that adapts or tunes game mechanics to achieve specific effects of engagement, immersion, and cognitive skill acquisition by players. Hence, schema models may enhance the play experience as well as provide a foundation for achieving explicitly represented pedagogical or therapeutic functions of games. This paper has described an approach to the analysis of game play based upon schema theory and attention theory. An empirically based method has been described as a basis for identifying and validating hypothetical game play schemas. Automated schema recognition and the potential uses of explicit schema representations within game systems have been explored. This approach provides for explicit modeling of the cognitive systems and processes underlying game play, both for analytical studies of play and as a potential implementation mechanism for adaptive games. Work on the analysis of games using this approach is ongoing. It is hoped that the results of this work will provide the foundations for future implementation of schema-based adaptive game systems.
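The paper does not prescribe how observed play would be compared with a predesigned target schema. One simple illustration encodes both as action frequency profiles and scores their cosine similarity, which an adaptive engine could threshold when tuning mechanics. Everything in this sketch, from the action vocabulary to the comparison metric, is hypothetical.

```python
from collections import Counter
import math

def action_profile(actions, vocabulary):
    """Normalized frequency vector of player actions over a fixed vocabulary."""
    counts = Counter(actions)
    total = max(1, len(actions))
    return [counts[a] / total for a in vocabulary]

def schema_match(observed, target):
    """Cosine similarity between an observed play trace and a target schema,
    both expressed as action-frequency profiles."""
    dot = sum(o * t for o, t in zip(observed, target))
    norm = (math.sqrt(sum(o * o for o in observed))
            * math.sqrt(sum(t * t for t in target)))
    return dot / norm if norm else 0.0

vocab = ["explore", "fight", "trade", "flee"]
target = action_profile(["explore", "explore", "trade", "fight"], vocab)
trace = action_profile(["explore", "fight", "fight", "flee"], vocab)
print(f"schema match: {schema_match(trace, target):.2f}")  # low score -> tune mechanics
```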
This paper presents a novel approach for the classification of planar surfaces in unorganized point clouds. A feature-based planar surface detection method is proposed which classifies point cloud data into planar and non-planar points by learning a classification model from an example set of planes. The algorithm performs segmentation of the scene by applying a graph partitioning approach with an improved representation of association among graph nodes. The planarity of the points in a scene segment is then estimated by classifying input points as planar if they satisfy the planarity constraint imposed by the learned model. The resulting planes have potential application in solving the simultaneous localization and mapping problem for navigation of an unmanned aerial vehicle. The proposed method is validated on real and synthetic scenes. The real data consist of five datasets recorded by capturing three-dimensional (3D) point clouds while an RGB-D camera was moved through five different indoor scenes. A set of synthetic 3D scenes was constructed containing planar and non-planar structures, contaminated with Gaussian and random structural noise. The results of the empirical evaluation on both the real and the simulated data suggest that the method provides a generalized solution for plane detection even in the presence of noise and non-planar objects in the scene. Furthermore, a comparative study has been performed against multiple plane extraction methods.
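The abstract does not state which planarity features feed the learned model. A common choice in the point cloud literature is eigenvalue-based planarity computed from the covariance of a local neighborhood; the sketch below illustrates that standard measure, not the paper's own classifier.

```python
import numpy as np

def planarity(points):
    """Eigenvalue-based planarity of a local neighborhood (N x 3 array):
    (lambda2 - lambda3) / lambda1 with eigenvalues sorted in descending
    order. Near 1 for flat patches, near 0 for linear or volumetric
    structure."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]  # lambda1 >= lambda2 >= lambda3
    return (eig[1] - eig[2]) / eig[0] if eig[0] > 0 else 0.0

# Noisy patch of the plane z = 0: planarity should be close to 1.
rng = np.random.default_rng(0)
patch = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.01, 200)]
print(f"planarity: {planarity(patch):.3f}")
```

A segment's points could then be labeled planar when this score exceeds a learned threshold, which mirrors the classification step the abstract describes at a high level.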
Place recognition is an important ability for autonomous navigation of mobile robots. Visual cues extracted from images provide a way to represent and recognize visited places. In this article, a multi-cue place learning algorithm is proposed. The algorithm has been evaluated on a localization image database containing scene variations under different weather conditions, captured by moving a robot-mounted camera through an indoor environment. The results suggest that joining the features obtained from different cues provides a better representation than using a single feature cue.
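The specific cues are not named in the abstract. A typical multi-cue descriptor concatenates a coarse color histogram with a gradient-orientation histogram and matches places by nearest neighbor; the sketch below follows that common pattern under stated assumptions (OpenCV and NumPy), and is not the paper's feature set.

```python
import cv2
import numpy as np

def color_cue(image, bins=8):
    """Normalized HSV hue histogram as a coarse appearance cue."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180]).ravel()
    return hist / (hist.sum() + 1e-9)

def edge_cue(image, bins=8):
    """Normalized gradient-orientation histogram as a structural cue."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    angles = np.arctan2(gy, gx).ravel()
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / (hist.sum() + 1e-9)

def describe(image):
    """Joint multi-cue descriptor: concatenation of both cues."""
    return np.concatenate([color_cue(image), edge_cue(image)])

def recognize(query_descriptor, database_descriptors):
    """Return the index of the stored place whose descriptor is nearest."""
    dists = [np.linalg.norm(query_descriptor - d) for d in database_descriptors]
    return int(np.argmin(dists))
```

Concatenation is the simplest way to join cues; it lets a place that is ambiguous under one cue (e.g., similar colors) still be separated by the other.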
Spatial mapping is an important task in autonomous navigation of mobile robots. Rodents solve the navigation problem using place and head-direction cells, which exploit both idiothetic and allothetic information from the surroundings. This article proposes a spatial cognitive mapping model based on the cells of the rodent hippocampus. The model has been tested on position sensor data collected using a UAV platform.
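The abstract gives no model equations. In computational models of this kind, place cell activity is often approximated with Gaussian tuning over position and head-direction cell activity with von Mises tuning over heading; the sketch below uses those standard simplifications, with all parameters illustrative rather than taken from the article.

```python
import numpy as np

def place_cell(position, center, sigma=0.5):
    """Gaussian place-cell activation: peaks when the agent is at the
    cell's preferred location (sigma sets the place field width)."""
    d2 = np.sum((np.asarray(position) - np.asarray(center)) ** 2)
    return np.exp(-d2 / (2 * sigma ** 2))

def head_direction_cell(heading, preferred, kappa=4.0):
    """Von Mises head-direction tuning, normalized to peak at 1 when the
    heading matches the cell's preferred direction (kappa sets sharpness)."""
    return np.exp(kappa * (np.cos(heading - preferred) - 1.0))

# Agent at (1.0, 2.0) facing 30 degrees; cells tuned to (1.2, 2.1) and 45 degrees.
print(place_cell([1.0, 2.0], [1.2, 2.1]))
print(head_direction_cell(np.deg2rad(30), np.deg2rad(45)))
```

A population of such cells, driven by odometry (idiothetic) and landmark observations (allothetic), is the usual building block for cognitive-map models of this family.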