Investigating the use of LLMs for automated test generation: challenges, benefits, and suitability
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
2024 (English). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
Abstract [en]

This thesis investigates the application of Large Language Models (LLMs) in automated test generation for software development, focusing on their challenges, benefits, and suitability for businesses. The study employs a mixed-methods approach, combining a literature review with empirical evaluations through surveys, interviews, and focus groups involving software developers and testers.

Key findings indicate that LLMs enhance the efficiency and speed of test case generation, offering substantial improvements in test coverage and reducing development costs. However, the integration of LLMs poses several challenges, including technical complexities, the need for extensive customization, and concerns about the quality and reliability of the generated test cases. Additionally, ethical issues such as data biases and the potential impact on job roles were highlighted.

The results show that while LLMs excel in generating test cases for routine tasks, their effectiveness diminishes in complex scenarios requiring deep domain knowledge and intricate system interactions. The study concludes that with proper training, continuous feedback, and iterative refinement, LLMs can be effectively integrated into existing workflows to complement traditional testing methods.
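For illustration only, the following is a minimal sketch of the kind of feedback loop the abstract describes: an LLM is asked to generate unit tests, the tests are executed, and any failures are fed back into the next prompt. The helper llm_complete, the prompt wording, and the use of pytest are assumptions made for this sketch, not details taken from the thesis.

    import subprocess
    import tempfile
    from pathlib import Path

    def llm_complete(prompt: str) -> str:
        # Hypothetical stand-in for an LLM API call; not a real library
        # function. Replace with an actual LLM client.
        raise NotImplementedError("plug in a real LLM client here")

    def generate_tests(source: str, max_rounds: int = 3) -> str:
        # Ask the model for pytest tests, run them, and feed failures back
        # into the next prompt (the "continuous feedback, iterative
        # refinement" idea from the abstract).
        prompt = f"Write pytest unit tests for this Python code:\n{source}"
        for _ in range(max_rounds):
            tests = llm_complete(prompt)
            with tempfile.TemporaryDirectory() as tmp:
                # Source and generated tests share one file so the sketch
                # avoids guessing module import names.
                Path(tmp, "test_generated.py").write_text(
                    source + "\n\n" + tests
                )
                result = subprocess.run(
                    ["pytest", tmp], capture_output=True, text=True
                )
            if result.returncode == 0:
                return tests  # every generated test passed
            # Refinement step: show the model its own failures.
            prompt = (
                "These pytest tests failed:\n" + result.stdout
                + "\nRewrite the tests so they pass for this code:\n" + source
            )
        raise RuntimeError("no passing test suite after refinement rounds")

The point of the sketch, echoing the abstract's conclusion, is that generated tests are never trusted blindly: each round is validated by actually running the suite before the output is accepted.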

Place, publisher, year, edition, pages
2024, p. 37
Keywords [en]
AI, LLMs, Machine Learning, Software Testing
National Category
Software Engineering
Identifiers
URN: urn:nbn:se:bth-26519
OAI: oai:DiVA.org:bth-26519
DiVA id: diva2:1875624
Subject / course
PA1445 Bachelor's Course in Software Engineering (Kandidatkurs i Programvaruteknik)
Educational program
PAGPT Software Engineering
Available from: 2024-06-24. Created: 2024-06-23. Last updated: 2024-06-24. Bibliographically approved.

Open Access in DiVA

fulltext (6667 kB), 985 downloads
File information
File name: FULLTEXT01.pdf
File size: 6667 kB
Checksum (SHA-512): d81d074b5c1096f3df0dc569ac26abbc6e8f32274241c266ac1d36b068352cbc6b08047f49c45cf0849240bd6680b7e645be85027a140003d7ca254856618eab
Type: fulltext
Mimetype: application/pdf
