Utilizing state-of-art NeuroES and GPGPU to optimize Mario AI
Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
2014 (English) Student thesis
Abstract [en]

Context. Reinforcement Learning (RL) is a time-consuming effort that also requires substantial computational power. There are two main approaches to improving RL efficiency: the theoretical, mathematical and algorithmic approach, or the practical implementation approach. In this study, the two approaches are combined in an attempt to reduce time consumption.

Objectives. We investigate whether modern hardware and software, GPGPU, combined with a state-of-the-art Evolution Strategy, CMA-Neuro-ES, can increase the efficiency of solving RL problems.

Methods. Both an implementational and an experimental research method are used. The implementational research mainly involves developing and setting up an experimental framework in which efficiency is measured through benchmarking; the GPGPU/ES solution is then developed within this framework. Using this framework, experiments are conducted on a conventional sequential solution as well as on our own parallel GPGPU solution.

Results. The results indicate that utilizing GPGPU and a state-of-the-art ES to solve RL problems can be more efficient in terms of time consumption than a conventional, sequential CPU approach.

Conclusions. We conclude that our proposed solution requires additional work and research, but that it already shows promise in this initial study. As the study focuses primarily on generating benchmark performance data from the experiments, it lacks data on RL efficiency and thus motivation for using our approach. However, we do conclude that the suggested GPGPU approach allows less time-consuming RL problem solving.
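The core idea in the abstract, evaluating an Evolution Strategy's offspring population in parallel rather than one candidate at a time, can be illustrated with a minimal sketch. The snippet below is not the thesis code: it uses a plain (mu, lambda)-ES with a toy analytic fitness function rather than CMA-ES with full Mario episodes, and batched NumPy evaluation stands in for GPGPU parallelism. The names fitness_batch, simple_es, and TARGET are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for an RL episode return: fitness is higher the
# closer a weight vector is to some unknown target policy weights. In the
# thesis setting this would be a full Mario episode per candidate; here it
# is a cheap analytic function so the sketch runs anywhere.
TARGET = np.linspace(-1.0, 1.0, 32)

def fitness_batch(candidates: np.ndarray) -> np.ndarray:
    # candidates: (lam, dim) matrix, one row per ES offspring.
    # Evaluating the whole batch in one vectorized call mirrors how a
    # GPGPU solution scores all offspring in parallel, versus a
    # sequential CPU loop over candidates.
    return -np.sum((candidates - TARGET) ** 2, axis=1)

def simple_es(dim=32, lam=64, mu=16, sigma=0.3, generations=200, seed=0):
    # A plain (mu, lambda)-ES with isotropic Gaussian mutation; the thesis
    # uses CMA-ES (adapted covariance), which this sketch does not implement.
    rng = np.random.default_rng(seed)
    mean = rng.standard_normal(dim)
    for _ in range(generations):
        offspring = mean + sigma * rng.standard_normal((lam, dim))
        scores = fitness_batch(offspring)          # batched "GPU" step
        elite = offspring[np.argsort(scores)[-mu:]]  # keep best mu rows
        mean = elite.mean(axis=0)                  # recombination
    return mean, fitness_batch(mean[None, :])[0]

if __name__ == "__main__":
    best, score = simple_es()
    print(f"best fitness after search: {score:.4f}")
```

The design point is that the batched evaluation step scales with whatever parallel hardware is available, while a sequential loop over offspring does not; on a GPU the same structure would map each candidate's evaluation to its own thread or block.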

Place, publisher, year, edition, pages
2014, p. 62
Keywords [en]
Reinforcement Learning, Evolution Strategies, GPGPU, Artificial Neural Networks
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:bth-4386
Local ID: oai:bth.se:arkivexCD300C9EED8B0F11C1257D0400405635
OAI: oai:DiVA.org:bth-4386
DiVA id: diva2:831724
Educational program
PAACI Master of Science in Game and Software Engineering
Uppsok
Technology
Available from: 2015-04-22. Created: 2014-06-27. Last updated: 2018-01-11. Bibliographically approved.

Open Access in DiVA

fulltext (1605 kB), 396 downloads
File name: FULLTEXT01.pdf
File size: 1605 kB
Checksum (SHA-512): ff61c9b0f6739efc5c612e4be8bdc75d08a0ae7762ee5206700514fd16ca2a1e097272c1de9013b6ddf5d8683a22f996796e94af59cf56f2b55baea1f97e94f6
Type: fulltext
Mimetype: application/pdf
