Evaluating Assessment Practices in Team-Based Computing Capstone Projects
Virginia Tech, United States.
University of Auckland, New Zealand.
Newcastle University, United Kingdom.
Farmingdale State College, United States.
2026 (English). In: ITiCSE-WGR 2025 - Publication of the 2025 Working Group Reports on Innovation and Technology in Computer Science Education, Association for Computing Machinery (ACM), 2026, p. 277-312. Conference paper, Published paper (Refereed).
Abstract [en]

Team-based capstone projects are vital in preparing computer science students for real-world work by developing teamwork, communication, and industry-relevant technical skills. Their assessment, however, is challenging: it requires alignment between academic criteria and external stakeholder expectations, fair evaluation of individual contributions, recognition of diverse skills, and clarity about external partners' involvement in the evaluation process. The high stakes of these projects further demand transparent and equitable assessment methods that are perceived as fair by all involved. Our working group (WG) addresses the challenges of capstone project assessment by examining the perspectives of instructors, students, and external stakeholders to support fair and effective evaluation. Building on insights from our previous WG and a comprehensive review of the literature, we used a mixed-methods approach combining online surveys (quantitative) and in-depth interviews (qualitative) with instructors, students, and external stakeholders. In total, we collected 66 survey responses and conducted 30 interviews across multiple countries and institutions, capturing a diverse range of global perspectives on capstone course assessments. Insights from instructors and students revealed several commonalities, for example in the types of assessed components and in the challenges of identifying and addressing non-contributing group members. Our findings also revealed clear variation between instructor and student perspectives on how contributions are measured and weighted. Instructors were reluctant to rely heavily on peer or self-evaluation due to concerns about reliability, preferring scaffolded assessments and early-warning systems to gather contribution data and moderate team dynamics. They viewed contribution-based grading as positive but resource-intensive.
Students, in contrast, emphasized the need for more transparency, formative feedback, and accurate recognition of individual contributions. They also expressed concerns about the lack of recognition for hidden labor (e.g., project management, team coordination), assessor inconsistency, and a reluctance to critique peers. Instructors treated peer input as supplementary evidence, whereas students perceived it as high-stakes and socially risky. Stakeholder involvement in assessment was generally limited to providing formative feedback and participating in final showcase events. We also identified generative AI as a rapidly evolving challenge, with both students and instructors seeking guidance on acceptable use and exploring opportunities to automate aspects of assessment. Our results offer actionable evidence-based guidance for designing transparent and equitable assessment practices in team-based computing capstones. 

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2026. p. 277-312
Keywords [en]
capstone, individual contribution, team-based assessments, Curricula, Distributed computer systems, Education computing, Employment, Feedback, Human computer interaction, Human resource management, Project management, Students, Teaching, Assessment practices, Capstone projects, Computer science students, External stakeholders, Formative feedbacks, Real-world, Team-based assessment, Working groups, Grading
National Category
Computer Sciences; Pedagogy
Identifiers
URN: urn:nbn:se:bth-29268
DOI: 10.1145/3760545.3783974
Scopus ID: 2-s2.0-105031915816
ISBN: 9798400721670 (print)
OAI: oai:DiVA.org:bth-29268
DiVA id: diva2:2047515
Conference
30th Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE 2025), Nijmegen, June 27 - July 2, 2025
Available from: 2026-03-20. Created: 2026-03-20. Last updated: 2026-03-20. Bibliographically approved.

Open Access in DiVA

fulltext (FULLTEXT01.pdf, application/pdf, 1715 kB)
