Training an Adversarial Non-Player Character with an AI Demonstrator: Applying Unity ML-Agents
Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
2022 (English). Independent thesis, advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

Background. Game developers are continuously searching for new ways of populating their vast game worlds with competent and engaging Non-Player Characters (NPCs), and researchers believe Deep Reinforcement Learning (DRL) might be a route to emergent behavior. Consequently, interest in fusing NPCs with DRL has surged in recent years; however, the proposed solutions rarely outperform traditional script-based NPCs.

Objectives. This thesis explores a novel method of developing an adversarial DRL NPC by combining Reinforcement Learning (RL) algorithms. Our goal is to produce an agent that surpasses its script-based opponents by first mimicking their actions.

Methods. The experiment begins with Imitation Learning (IL), followed by supplementary DRL training in which the agent is expected to improve its strategies. Finally, all agents participate in 100-deathmatch tournaments so that their performances can be statistically evaluated and compared.
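The thesis title names Unity ML-Agents, where an IL-then-DRL pipeline of this kind is typically expressed in the trainer configuration: demonstration-driven signals (behavioral cloning and/or a GAIL reward) combined with PPO on the extrinsic game reward. The fragment below is an illustrative sketch only; the behavior name, demo path, and all hyperparameter values are assumptions, not the thesis's actual configuration.

```yaml
behaviors:
  AdversarialNPC:            # hypothetical behavior name
    trainer_type: ppo
    hyperparameters:
      batch_size: 1024
      buffer_size: 10240
      learning_rate: 3.0e-4
    reward_signals:
      extrinsic:             # the game's own deathmatch reward
        gamma: 0.99
        strength: 1.0
      gail:                  # adversarial imitation signal from demonstrations
        strength: 0.5
        demo_path: Demos/ScriptedExpert.demo
    behavioral_cloning:      # supervised pretraining on the same demonstrations
      demo_path: Demos/ScriptedExpert.demo
      strength: 0.5
      steps: 150000
    max_steps: 5000000
```

In ML-Agents, the `behavioral_cloning` and `gail` strengths can be annealed so the extrinsic reward dominates later training, which matches the "imitate first, then improve" structure described above.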

Results. Statistical tests reveal that the agents reliably differ from one another and that our learning agent performed poorly in comparison to its script-based opponents.
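The abstract does not name the statistical tests used. One common choice for comparing several agents' per-match scores is a Kruskal-Wallis omnibus test followed by pairwise Mann-Whitney U tests; the sketch below applies that to synthetic kill counts (the data, agent names, and score scale are illustrative, not the thesis's results).

```python
# Hypothetical per-agent kill counts from 100 deathmatches (synthetic data).
import random
from scipy.stats import kruskal, mannwhitneyu

random.seed(0)
ml_agent  = [random.gauss(5, 2) for _ in range(100)]   # learning agent
scripted1 = [random.gauss(12, 3) for _ in range(100)]  # script-based opponent
scripted2 = [random.gauss(11, 3) for _ in range(100)]  # script-based opponent

# Omnibus test: do the agents' score distributions differ at all?
h, p = kruskal(ml_agent, scripted1, scripted2)
print(f"Kruskal-Wallis H={h:.2f}, p={p:.3g}")

# Pairwise follow-up: does the learning agent score lower than a scripted one?
u, p_pair = mannwhitneyu(ml_agent, scripted1, alternative="less")
print(f"Mann-Whitney U={u:.1f}, p={p_pair:.3g}")
```

Non-parametric tests are a reasonable default here because per-match kill counts need not be normally distributed.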

Conclusions. Based on the computed statistics, we conclude that our solution failed to produce a proficient adversarial DRL agent, as it displayed no competence in deathmatches. Due to time constraints, no further improvements could be applied to the ML agent. However, we believe our outcome can serve as a stepping-stone for future experiments in this branch of research.

Place, publisher, year, edition, pages
2022.
Keywords [en]
Artificial intelligence (AI), reinforcement learning (RL), imitation learning (IL), non-player character (NPC), deathmatch
National Category
Computer Sciences; Software Engineering; Computer Engineering; Computer and Information Sciences
Identifiers
URN: urn:nbn:se:bth-23853
OAI: oai:DiVA.org:bth-23853
DiVA, id: diva2:1710854
Subject / course
DV2572 Master's Thesis in Computer Science
Educational program
DVACO Master's program in Computer Science, 120.0 hp
Available from: 2022-11-15. Created: 2022-11-14. Last updated: 2025-09-30. Bibliographically approved.

Open Access in DiVA

Training an Adversarial Non-Player Character with an AI Demonstrator - Applying Unity ML-Agents (8157 kB), 547 downloads
File information
File name: FULLTEXT02.pdf
File size: 8157 kB
Checksum (SHA-512): 09aa9b44c5d9d1b6fce394c5dea03294bda453a73413c8ef650b2387fe272c32a06dc1b6c848322cff2440e79e8c733a1653140cc13aee6f943008ac5b3ccd00
Type: fulltext
Mimetype: application/pdf
