Publications (10 of 136)
Pelgrom, N. & Grahn, H. (2025). Different Hallucinations calls for Different Solutions - A Categorisation of LLM Transcription Mistakes. In: Nowaczyk S., Vettoruzzo A. (Ed.), CEUR Workshop Proceedings: . Paper presented at 2025 Swedish AI Society Workshop, SAIS 2025, Halmstad, June 16-17, 2025 (pp. 39-52). Technical University of Aachen, 4037
2025 (English). In: CEUR Workshop Proceedings / [ed] Nowaczyk S., Vettoruzzo A., Technical University of Aachen, 2025, Vol. 4037, p. 39-52. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents a contribution to better interpretation of the results we get from GenAI models, more specifically, better interpretation of the mistakes that they make. We have conducted an analysis of 644 (from GPT-4o) + 4858 (from ARIA) mistakes made by two models on a key-value extraction task, and found that they may be categorised into three mutually exclusive groups: (i) problems identifying the requested information, (p) problems presenting the correct information, and (s) skewed training data. These categories could be used to indicate which action a user could take to reduce the number of mistakes. Further, we have found a strong correlation between the suggested categories and the Ratcliff/Obershelp pattern recognition score between the generated result and the expected result: all faulty results containing minor mistakes are more than 60% similar to the expected result. Only mistakes stemming from failure to identify what was requested had less than 60% similarity to the expected result.
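The Ratcliff/Obershelp score used here is available off the shelf: Python's difflib.SequenceMatcher computes a Ratcliff/Obershelp-style similarity ratio. A minimal sketch of the 60% threshold described above, using invented strings rather than data from the paper:

```python
from difflib import SequenceMatcher

def similarity(expected: str, generated: str) -> float:
    """Ratcliff/Obershelp-style similarity in [0, 1] (difflib's ratio)."""
    return SequenceMatcher(None, expected, generated).ratio()

def is_minor_mistake(expected: str, generated: str, threshold: float = 0.60) -> bool:
    # Per the paper's observation: faulty results with only minor mistakes
    # score above ~60% similarity to the expected result.
    return similarity(expected, generated) >= threshold

# A transposed digit keeps the result highly similar to the expected value,
# while a misidentified field does not.
print(similarity("4711 2390", "4711 2930"))   # high (> 0.6)
print(similarity("4711 2390", "Total: N/A"))  # low (< 0.6)
```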

Place, publisher, year, edition, pages
Technical University of Aachen, 2025
Series
CEUR Workshop Proceedings, E-ISSN 1613-0073
Keywords
Document analysis, Generative AI, LLM, Verification, Artificial intelligence, Errors, Human engineering, Information retrieval systems, Pattern recognition, Documents analysis, Key values, Strong correlation, Training data
National Category
Artificial Intelligence
Identifiers
urn:nbn:se:bth-28782 (URN), 2-s2.0-105017736795 (Scopus ID)
Conference
2025 Swedish AI Society Workshop, SAIS 2025, Halmstad, June 16-17, 2025
Available from: 2025-10-17. Created: 2025-10-17. Last updated: 2025-10-17. Bibliographically approved
Lundberg, L., Westerhagen, A., Ilie, D., Grahn, H., Granbom, B. & Svärd Olsson, A. (2025). Evaluating Short Forward Error Correction Codes for Avoiding Detection in Airborne Networks. In: International Conference on Military Communication and Information Systems, ICMCIS: . Paper presented at 2025 International Conference on Military Communication and Information Systems, ICMCIS 2025, Oeiras, May 13-14, 2025. Institute of Electrical and Electronics Engineers (IEEE)
2025 (English). In: International Conference on Military Communication and Information Systems, ICMCIS, Institute of Electrical and Electronics Engineers (IEEE), 2025. Conference paper, Published paper (Refereed)
Abstract [en]

We evaluate Forward Error Correction (FEC) codes in the context of HDARP+, a novel routing protocol for airborne networks. HDARP+ uses directional antennas and dynamic FEC coding to avoid detection by adversaries. The use of FEC coding is dynamic in the sense that different FEC codes, or no FEC code, will be used depending on the relative positions of friendly and adversary aircraft. Due to the real-time restrictions in airborne networks, encoding and decoding must be fast and done through table lookup. Since we use table lookup, the FEC codes must be short. We evaluate two types of short FEC codes: Reed-Solomon (RS) codes, and FEC codes found using greedy search (called GS codes). The results show that the RS codes are better than the GS codes at handling error bursts. However, the GS codes are more flexible when it comes to finding attractive trade-offs between the code's ability to increase the number of cases in which hostile detection can be avoided (related to the coding gain), the code rate, and the amount of memory required for implementing lookup tables.
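As a toy illustration of why table lookup forces the codes to be short (a (3,1) repetition code here, not the paper's RS or GS codes): the decode table needs one entry per possible received word, so it grows exponentially with the codeword length.

```python
# (3,1) repetition code: 1 data bit -> 3-bit codeword.
ENCODE = {0: 0b000, 1: 0b111}

# Precompute the decode table: every possible received 3-bit word maps to the
# nearest codeword's data bit (majority vote). 2^3 entries for a length-3 code.
DECODE = {word: 1 if bin(word).count("1") >= 2 else 0 for word in range(8)}

def encode(bit: int) -> int:
    return ENCODE[bit]

def decode(word: int) -> int:
    return DECODE[word]  # O(1) lookup; no arithmetic at decode time

# Any single bit flip is corrected by the lookup.
for i in range(3):
    assert decode(encode(1) ^ (1 << i)) == 1
```

For an n-bit code the decode table has 2^n entries, which is exactly the memory-versus-code-length trade-off the abstract refers to.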

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Avoiding Detection, Directional Antennas, Dynamic FEC, Forward Error Correction, Reed-Solomon Codes, Routing, Aircraft Detection, Block Codes, Network Coding, Signal Receivers, Table Lookup, Airborne Networks, Directional Antenna, Dynamic Forward Error Correction, Forward Error Correction Codes, Forward Error-correction, Lookups, Reed-Solomon Code, Routings, Directive Antennas, Economic And Social Effects
National Category
Telecommunications
Identifiers
urn:nbn:se:bth-28578 (URN), 10.1109/ICMCIS64378.2025.11048117 (DOI), 001542524800010 (), 2-s2.0-105013287144 (Scopus ID), 9798331537869 (ISBN)
Conference
2025 International Conference on Military Communication and Information Systems, ICMCIS 2025, Oeiras, May 13-14, 2025
Projects
Tillförlitliga Flygande Ad-Hoc Nätverk för Civil-Militär Uppdragskritiska Applikationer (FANET-MCA)
Funder
Vinnova, 2024-03181
Available from: 2025-09-03. Created: 2025-09-03. Last updated: 2025-11-03. Bibliographically approved
Pelgrom, N., Hagelbäck, J., Ericsson, M., Nordqvist, J. & Grahn, H. (2025). Hallucinations and Training-Data Bias: Results from Two Number Transcription Experiments Using GPT Models. In: Computational Science and Computational Intelligence: 11th International Conference, CSCI 2024, Las Vegas, NV, USA, December 11–13, 2024, Proceedings, Part I. Paper presented at 11th International Conference on Computational Science and Computational Intelligence, CSCI 2024, Las Vegas, Dec 11-13, 2024 (pp. 59-69). Springer Science+Business Media B.V.
2025 (English). In: Computational Science and Computational Intelligence: 11th International Conference, CSCI 2024, Las Vegas, NV, USA, December 11–13, 2024, Proceedings, Part I, Springer Science+Business Media B.V., 2025, p. 59-69. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents two experiments testing the capabilities of the models GPT-4-vision-preview and GPT-4o to transcribe images of randomly generated number strings. While transcribing invoices with these models, we noticed a tendency for longer number strings to be read inaccurately, so we created an experiment with minimal-noise images to see how many digits the models can transcribe with full accuracy. We further tested whether the mistakes that occur when transcribing longer strings recur when the images are re-tested. We analyzed the character of the mistakes made, and whether any digits are overrepresented among those involved in mistakes. We found, among other results described in the paper, that the models are 100% accurate up to 75 digits per image, that the same mistakes recur when the same image is rerun, and that hallucinations account for only 23% of all mistakes made by the models.

Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2025
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 2501
Keywords
Generative AI, Large Language Model, LLM Hallucinations, OCR, Experiment testing, Language model, LLM hallucination, Training data, Report generators
National Category
Artificial Intelligence
Identifiers
urn:nbn:se:bth-28946 (URN), 10.1007/978-3-031-90341-0_5 (DOI), 001585767000005 (), 2-s2.0-105006767018 (Scopus ID), 9783031903403 (ISBN)
Conference
11th International Conference on Computational Science and Computational Intelligence, CSCI 2024, Las Vegas, Dec 11-13 2024
Available from: 2025-12-01. Created: 2025-12-01. Last updated: 2025-12-01. Bibliographically approved
Javeed, A., Borg, A., Grahn, H., Lundberg, L., Patel, D. & Shirinbab, S. (2025). Improving Cloud Efficiency: A Machine Learning-Based Stacking Model for CPU Utilization Prediction. In: Proceedings - 2025 8th International Conference on Data Science and Machine Learning Applications, CDMA 2025: . Paper presented at 8th International Conference on Data Science and Machine Learning Applications, CDMA 2025, Riyadh, Feb 16-17, 2025 (pp. 120-125). Institute of Electrical and Electronics Engineers (IEEE)
2025 (English). In: Proceedings - 2025 8th International Conference on Data Science and Machine Learning Applications, CDMA 2025, Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 120-125. Conference paper, Published paper (Refereed)
Abstract [en]

With the rapid growth of internet technologies, IT businesses are moving to cloud-based systems, and cloud-based services are in high demand among internet users. Appropriate allocation of resources in cloud computing environments is therefore essential: companies can reduce costs and save energy by dynamically scaling the number of active servers up or down. In this context, this study presents a machine learning-based model for accurate prediction of CPU utilization. Previous studies employed timestamp-based data to predict CPU utilization in cloud computing, while the proposed work uses incoming user requests to predict CPU workload so that a timely decision can be made to scale the servers in a cloud computing environment up or down. The proposed model stacks several machine learning algorithms into a single model, called the stacking model, for CPU workload prediction. The effectiveness of the proposed stacking model was tested on several evaluation metrics to validate its performance. Furthermore, its performance is also compared with other state-of-the-art machine learning models such as support vector machines (SVM), decision trees (DT), random forests (RF), gradient boosting, and extreme gradient boosting (XGBoost).
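A minimal sketch of the stacking idea, assuming the usual scheme in which base-model predictions are combined by a meta learner. The base and meta learners below are deliberately trivial stand-ins for illustration, not the SVM/DT/RF/boosting ensemble evaluated in the paper:

```python
# Hypothetical stand-in models: predict CPU load (%) from request counts.

def fit_mean(xs, ys):
    """Base model 1: always predict the training mean."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Base model 2: 1-D least-squares line y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

def fit_stack(xs, ys):
    """Meta learner: convex combination of base predictions, with the
    weight w chosen by least squares on the (in-sample) base predictions."""
    base1, base2 = fit_mean(xs, ys), fit_linear(xs, ys)
    p1 = [base1(x) for x in xs]
    p2 = [base2(x) for x in xs]
    num = sum((a - b) * (y - b) for a, b, y in zip(p1, p2, ys))
    den = sum((a - b) ** 2 for a, b in zip(p1, p2)) or 1.0
    w = num / den
    return lambda x: w * base1(x) + (1 - w) * base2(x)

requests = [10, 20, 30, 40]      # incoming requests per interval
cpu = [12.0, 22.0, 31.0, 41.0]   # observed CPU utilization (%)
model = fit_stack(requests, cpu)
print(model(50))                 # forecast used to decide on scaling
```

A production stacking model would train the meta learner on out-of-fold base predictions to avoid leakage; the in-sample fit above only keeps the sketch short.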

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
cloud computing, CPU, machine learning, predicting workload, stacking model, Adversarial machine learning, Cloud platforms, Cloud computing environments, Cloud-based, Cloud-computing, CPU utilization, Gradient boosting, Machine-learning, Performance, Stacking models, Prediction models
National Category
Computer Sciences
Identifiers
urn:nbn:se:bth-27697 (URN), 10.1109/CDMA61895.2025.00026 (DOI), 2-s2.0-105001165019 (Scopus ID), 9798331539696 (ISBN)
Conference
8th International Conference on Data Science and Machine Learning Applications, CDMA 2025, Riyadh, Feb 16-17, 2025
Funder
Knowledge Foundation, 20220215
Available from: 2025-04-04. Created: 2025-04-04. Last updated: 2025-09-30. Bibliographically approved
Nordahl, C., Boeva, V., Grahn, H. & Netz Persson, M. (2025). On Evaluation of Data Stream Clustering Algorithms: A Survey. IEEE Access, 13, 139524-139546
2025 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 13, p. 139524-139546. Article, review/survey (Refereed). Published
Abstract [en]

Data stream mining is a research area that has grown enormously in recent years. The main challenge is extracting knowledge in real-time from a possibly unbounded data stream. Clustering, a process in which groupings within the data are identified, is a valuable technique for extracting and identifying underlying structures in the data. An open question in stream clustering is how to evaluate the proposed algorithms. In this survey, we review the literature in the domain to identify the common methodologies, datasets, and evaluation measures used to evaluate the algorithms. We provide a short summary of the stream clustering algorithms in the literature, but our primary focus is a survey of cluster validation relevant to the evaluation of data stream clustering algorithms. We begin our literature review with the inception of incremental clustering, namely the introduction of the balanced iterative reducing and clustering using hierarchies (BIRCH) algorithm. We find that the evaluation methodologies have focused primarily on performance, both computational performance and accuracy, since the inception of clustering data streams, and that aspects such as cluster quality are rarely considered. We also find that issues present in the conventional clustering domain carry over to data stream clustering. However, minor additions to the evaluation methods can improve both the applicability and usefulness of the algorithms.
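BIRCH, the survey's starting point, rests on a compact incremental summary: each cluster keeps a clustering feature CF = (N, LS, SS) that absorbs points one at a time without storing the stream. A sketch from the standard BIRCH definitions (not code from the survey):

```python
import math

class ClusteringFeature:
    """BIRCH clustering feature: (count, linear sum, sum of squared norms)."""

    def __init__(self, dim: int):
        self.n = 0
        self.ls = [0.0] * dim  # linear sum per dimension
        self.ss = 0.0          # sum of squared norms

    def add(self, point):
        # O(dim) update per point; the raw stream is never stored.
        self.n += 1
        for i, v in enumerate(point):
            self.ls[i] += v
        self.ss += sum(v * v for v in point)

    def centroid(self):
        return [s / self.n for s in self.ls]

    def radius(self):
        # Average distance to the centroid, derived from (N, LS, SS) alone:
        # sqrt(SS/N - ||centroid||^2).
        c2 = sum(c * c for c in self.centroid())
        return math.sqrt(max(self.ss / self.n - c2, 0.0))

cf = ClusteringFeature(dim=2)
for p in [(1.0, 1.0), (3.0, 1.0)]:
    cf.add(p)
print(cf.centroid(), cf.radius())  # [2.0, 1.0] 1.0
```

CF vectors are additive (two sub-clusters merge by summing their components), which is what makes the CF tree maintainable online.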

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Cluster analysis, cluster validation indices, cluster validation measures, clustering, data stream clustering, data stream mining, data streams, evaluation, review, streaming data, Clustering algorithms, Data mining, Iterative methods, Quality control, Cluster validation, Cluster validation index, Cluster validation measure, Clusterings, Data stream, Data streams mining, Validation index, Reviews
National Category
Computer Sciences
Identifiers
urn:nbn:se:bth-28542 (URN), 10.1109/ACCESS.2025.3596435 (DOI), 001550799400033 (), 2-s2.0-105013054673 (Scopus ID)
Funder
Knowledge Foundation, 20220068
Available from: 2025-09-01. Created: 2025-09-01. Last updated: 2025-09-30. Bibliographically approved
Pelgrom, N., Ericsson, M., Hagelbäck, J., Nordqvist, J. & Grahn, H. (2025). Using ChatGPT as a Combined Invoice OCR and Key-Value Extractor. In: 2025 10th International Conference on Big Data Analytics, ICBDA 2025: . Paper presented at 10th International Conference on Big Data Analytics, ICBDA 2025, Singapore, March 13-15, 2025 (pp. 320-325). Institute of Electrical and Electronics Engineers (IEEE)
2025 (English). In: 2025 10th International Conference on Big Data Analytics, ICBDA 2025, Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 320-325. Conference paper, Published paper (Refereed)
Abstract [en]

This paper provides details and analysis of findings from three experiments on the capabilities of the OpenAI chatbot ChatGPT-4 with vision capabilities as a dual-purpose tool for Optical Character Recognition (OCR) and key-value data extraction tasks. Our results could be relevant to any task where one is interested in extracting key information from images of approximately equal complexity, and are scalable to large datasets. We ran the main experiments in the OpenAI user interface for the model, and a smaller experiment using the API. The experiments used a dataset comprising 1000 digital invoices alongside 1000 photographic images of real receipts, collected from a broad spectrum of market sectors within Sweden. The main experiment achieved a high accuracy rate of 99.8% in extracting critical financial information from the digital invoices. Similarly, when applied to the photographic images of receipts, it maintained a high accuracy level of 99.5%. The smaller experiment gave an accuracy of 94.4%. These findings are particularly noteworthy not only because of the high accuracy levels but also because of the model's effectiveness in performing both OCR and key-value extraction as a one-step process. This dual functionality underscores the model's potential as a highly efficient and reliable solution for automating financial data extraction and processing tasks.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Key-Value Extraction, Large Language Model, Optical Character Recognition, Application programming interfaces (API), Data accuracy, Data handling, Data mining, Extraction, Financial data processing, Large datasets, User interfaces, Accuracy level, Chatbots, Data extraction, High-accuracy, Key values, Language model, Photographic image, Photography
National Category
Artificial Intelligence; Computer graphics and computer vision
Identifiers
urn:nbn:se:bth-28995 (URN), 10.1109/ICBDA65366.2025.11211156 (DOI), 2-s2.0-105023839703 (Scopus ID), 9798331503932 (ISBN)
Conference
10th International Conference on Big Data Analytics, ICBDA 2025, Singapore, March 13-15, 2025
Available from: 2025-12-12. Created: 2025-12-12. Last updated: 2025-12-15. Bibliographically approved
Ahlstrand, J., Borg, A., Grahn, H. & Boldt, M. (2025). Using Transformers for B2B Contractual Churn Prediction Based on Customer Behavior Data. In: Filipe J., Smialek M., Brodsky A., Hammoudi S. (Ed.), International Conference on Enterprise Information Systems, ICEIS - Proceedings: Volume 1. Paper presented at 27th International Conference on Enterprise Information Systems, ICEIS 2025, Porto, Apr 4-6, 2025 (pp. 562-571). SciTePress
2025 (English). In: International Conference on Enterprise Information Systems, ICEIS - Proceedings: Volume 1 / [ed] Filipe J., Smialek M., Brodsky A., Hammoudi S., SciTePress, 2025, p. 562-571. Conference paper, Published paper (Refereed)
Abstract [en]

In the competitive business-to-business (B2B) landscape, retaining clients is critical to sustaining growth, yet customer churn presents substantial challenges. This paper presents a novel approach to customer churn prediction using a modified Transformer architecture tailored to multivariate time-series data. We suggest that analyzing customer behavior patterns over time can indicate potential churn. Our findings suggest that while uncertainty remains high, the proposed model performs competitively against existing methods: the Transformer architecture achieves a top-decile lift of almost 5 and an AUC of 0.77. We assess the model's confidence by employing conformal prediction, providing valuable insights for targeted anti-churn campaigns. This work highlights the potential of Transformers to address churn dynamics, offering a scalable solution to identify at-risk customers and inform strategic retention efforts in B2B contexts.
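Top-decile lift, the headline metric above, compares the churn rate among the 10% of customers the model scores highest with the overall churn rate. A sketch with invented scores and labels, not the paper's data:

```python
def top_decile_lift(scores, labels):
    """Churn rate in the top 10% by predicted score / overall churn rate."""
    ranked = sorted(zip(scores, labels), key=lambda t: t[0], reverse=True)
    k = max(1, len(ranked) // 10)
    top_rate = sum(label for _, label in ranked[:k]) / k
    base_rate = sum(labels) / len(labels)
    return top_rate / base_rate

# 100 customers, 10% overall churn; the model places 5 churners in its top 10.
scores = [1.0 - i / 100 for i in range(100)]
labels = [1] * 5 + [0] * 5 + [1] * 5 + [0] * 85
print(top_decile_lift(scores, labels))  # 5.0: top decile churns at 5x base rate
```

A lift near 5, as reported above, means a retention campaign aimed at the model's top decile reaches roughly five times as many churners as a random sample of the same size.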

Place, publisher, year, edition, pages
SciTePress, 2025
Series
International Conference on Enterprise Information Systems (ICEIS), E-ISSN 2184-4992
Keywords
Churn prediction, B2B, Machine learning, Time-series data, Telecommunication, Conformal prediction
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:bth-27614 (URN), 10.5220/0013432500003929 (DOI), 2-s2.0-105019527699 (Scopus ID), 9789897587498 (ISBN)
Conference
27th International Conference on Enterprise Information Systems, ICEIS 2025, Porto, Apr 4-6, 2025
Note

This work was partially funded by Telenor Sverige AB.

Available from: 2025-03-18. Created: 2025-03-18. Last updated: 2025-11-03. Bibliographically approved
Åleskog, C., Grahn, H. & Borg, A. (2024). A Comparative Study on Simulation Frameworks for AI Accelerator Evaluation. In: IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2024: . Paper presented at 2024 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2024, San Francisco, May 27-31 2024 (pp. 321-328). Institute of Electrical and Electronics Engineers (IEEE)
2024 (English). In: IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 321-328. Conference paper, Published paper (Refereed)
Abstract [en]

Domain-Specific Hardware Accelerators (DSHA) are natural components in the evolution of general computers. However, designing and simulating hardware in Hardware Description Languages (HDL) often requires more effort for the developers and might not be suitable in all scenarios, which makes high-level language-based software simulators for computer hardware attractive. Yet, choosing which simulation framework to use can be challenging due to the lack of comparative studies of high-level language-based simulators. This paper presents a comparative evaluation of state-of-the-art simulation frameworks that simulate computer hardware in high-level languages like C++. The contemporary simulators used in this study were selected from the 79 articles introducing novel AI accelerators referenced in our previous survey. We have identified six simulators that are suitable for AI accelerator evaluation, and provide a deeper analysis of three of them.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
AI Accelerator, Comparative Study, Hardware Simulator, Simulation, C++ (programming language), Computer hardware, Computer simulation languages, Computer software, Comparatives studies, Domain specific, Hardware accelerators, Hardware simulators, High-level language, Higher-level languages, Simulation framework, Specific hardware, Computer hardware description languages
National Category
Computer Sciences
Identifiers
urn:nbn:se:bth-26819 (URN), 10.1109/IPDPSW63119.2024.00073 (DOI), 001284697300115 (), 2-s2.0-85200768653 (Scopus ID), 9798350364606 (ISBN)
Conference
2024 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2024, San Francisco, May 27-31 2024
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Available from: 2024-08-16. Created: 2024-08-16. Last updated: 2025-09-30. Bibliographically approved
van Dreven, J., Boeva, V., Abghari, S., Grahn, H. & Al Koussa, J. (2024). A systematic approach for data generation for intelligent fault detection and diagnosis in District Heating. Energy, 307, Article ID 132711.
2024 (English). In: Energy, ISSN 0360-5442, E-ISSN 1873-6785, Vol. 307, article id 132711. Article in journal (Refereed). Published
Abstract [en]

This study introduces a novel systematic approach to address the challenge of labeled data scarcity for fault detection and diagnosis (FDD) in District Heating (DH) systems. To replicate real-world DH fault scenarios, we have created a controlled laboratory emulation of a generic DH substation integrated with a climate chamber. Furthermore, we present an FDD pipeline using an isolation forest and a one-class support vector machine for fault detection alongside a random forest and a support vector machine for fault diagnosis. Our research analyzed the impact of data sampling frequencies on the FDD models, revealing that shorter intervals, such as 1-min and 5-min, significantly improve FDD performance. We provide detailed information on six scenarios, including normal operation, a minor valve leak, a valve leak, a stuck valve, a high heat curve, and a temperature sensor deviation. For each scenario, we present their signature, quantifying their unique behavior and providing deeper insights into the operational implications. The signatures suggest that, while variable, faults have a consistent pattern seen in the generic DH substation. While this work contributes directly to the DH field, our methodology also extends its applicability to a broader context where labeled data is scarce.

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Data mining, District Heating, Fault detection and diagnosis, Machine Learning, Outlier detection, Fault detection, Forestry, Learning systems, Support vector machines, Data generation, Data scarcity, District heating system, Heating substations, Labeled data, Machine-learning, Real-world, Support vectors machine, detection method, heating, pipeline, Anomaly detection
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering; Computer Sciences
Identifiers
urn:nbn:se:bth-26822 (URN), 10.1016/j.energy.2024.132711 (DOI), 001294250900001 (), 2-s2.0-85200802963 (Scopus ID)
Funder
Knowledge Foundation, 20220068
Available from: 2024-08-16. Created: 2024-08-16. Last updated: 2025-09-30. Bibliographically approved
Lundberg, L., Boldt, M., Borg, A. & Grahn, H. (2024). Bibliometric Mining of Research Trends in Machine Learning. AI, 5(1), 208-236
2024 (English). In: AI, E-ISSN 2673-2688, Vol. 5, no 1, p. 208-236. Article in journal (Refereed). Published
Abstract [en]

We present a method, including tool support, for bibliometric mining of trends in large and dynamic research areas. The method is applied to the machine learning research area for the years 2013 to 2022. A total of 398,782 documents from Scopus were analyzed. A taxonomy containing 26 research directions within machine learning was defined by four experts with the help of a Python program and existing taxonomies. The trends in terms of productivity, growth rate, and citations were analyzed for the research directions in the taxonomy. Our results show that the two directions, Applications and Algorithms, are the largest, and that the direction Convolutional Neural Networks is the one that grows the fastest and has the highest average number of citations per document. It also turns out that there is a clear correlation between the growth rate and the average number of citations per document, i.e., documents in fast-growing research directions have more citations. The trends for machine learning research in four geographic regions (North America, Europe, the BRICS countries, and The Rest of the World) were also analyzed. The number of documents during the time period considered is approximately the same for all regions. BRICS has the highest growth rate, and, on average, North America has the highest number of citations per document. Using our tool and method, we expect that one could perform a similar study in some other large and dynamic research area in a relatively short time.

Place, publisher, year, edition, pages
MDPI, 2024
Keywords
bibliometrics, geographic regions, machine learning, research directions, research trends, Scopus database
National Category
Information Studies; Computer Sciences
Identifiers
urn:nbn:se:bth-26110 (URN), 10.3390/ai5010012 (DOI), 001191509100001 (), 2-s2.0-85187507366 (Scopus ID)
Funder
Knowledge Foundation, 20220215
Available from: 2024-04-15. Created: 2024-04-15. Last updated: 2025-11-26. Bibliographically approved
Projects
Bigdata@BTH - Scalable resource-efficient systems for big data analytics [20140032]; Blekinge Institute of Technology; Publications
Khatibi, S., Wen, W. & Emam, S. M. (2024). Learning-Based Proof of the State-of-the-Art Geometric Hypothesis on Depth-of-Field Scaling and Shifting Influence on Image Sharpness. Applied Sciences, 14(7), Article ID 2748.
Yavariabdi, A., Kusetogullari, H., Celik, T., Thummanapally, S., Rijwan, S. & Hall, J. (2022). CArDIS: A Swedish Historical Handwritten Character and Word Dataset. IEEE Access, 10, 55338-55349.
Nordahl, C., Boeva, V., Grahn, H. & Netz Persson, M. (2022). EvolveCluster: an evolutionary clustering algorithm for streaming data. Evolving Systems (4), 603-623.
Devagiri, V. M., Boeva, V. & Abghari, S. (2021). A Multi-view Clustering Approach for Analysis of Streaming Data. In: Maglogiannis I., Macintyre J., Iliadis L. (Ed.), IFIP Advances in Information and Communication Technology: . Paper presented at the IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations, AIAI 2021, Virtual, Online, 25-27 June 2021 (pp. 169-183). Springer Science and Business Media Deutschland GmbH.
Petersson, S., Grahn, H. & Rasmusson, J. (2021). Blind Correction of Lateral Chromatic Aberration in Raw Bayer Data. IEEE Access, 9.
García Martín, E., Lavesson, N., Grahn, H., Casalicchio, E. & Boeva, V. (2021). Energy-Aware Very Fast Decision Tree. International Journal of Data Science and Analytics, 11(2), 105-126.
Borg, A., Ahlstrand, J. & Boldt, M. (2021). Improving Corporate Support by Predicting Customer e-Mail Response Time: Experimental Evaluation and a Practical Use Case. In: Filipe J., Śmiałek M., Brodsky A., Hammoudi S. (Ed.), Enterprise Information Systems: . Paper presented at the 22nd International Conference on Enterprise Information Systems, ICEIS 2020, Virtual, Online, 5-7 May 2020 (pp. 100-121). Springer Science and Business Media Deutschland GmbH.
Cheddad, A. (2021). Machine Learning in Healthcare: Breast Cancer and Diabetes Cases. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): . Paper presented at the AVI 2020 Workshop on Road Mapping Infrastructures for Artificial Intelligence Supporting Advanced Visual Big Data Analysis, AVI-BDA 2020, and the 2nd Italian Workshop on Visualization and Visual Analytics, ITAVIS 2020, Ischia, Italy, 29 September 2020 (pp. 125-135). Springer Science and Business Media Deutschland GmbH, 12585.
Cheddad, A., Kusetogullari, H., Hilmkil, A., Sundin, L., Yavariabdi, A., Aouache, M. & Hall, J. (2021). SHIBR - The Swedish Historical Birth Records: a semi-annotated dataset. Neural Computing & Applications, 33(22), 15863-15875.
Sidorova, J., Karlsson, S., Rosander, O., Berthier, M. & Moreno-Torres, I. (2021). Towards disorder-independent automatic assessment of emotional competence in neurological patients with a classical emotion recognition system: application in foreign accent syndrome. IEEE Transactions on Affective Computing, 12(4), 962-973.
Green Clouds – Load prediction and optimization in private cloud systems [20220215]; Blekinge Institute of Technology; Publications
Javeed, A., Borg, A., Grahn, H., Lundberg, L., Patel, D. & Shirinbab, S. (2025). Improving Cloud Efficiency: A Machine Learning-Based Stacking Model for CPU Utilization Prediction. In: Proceedings - 2025 8th International Conference on Data Science and Machine Learning Applications, CDMA 2025: . Paper presented at 8th International Conference on Data Science and Machine Learning Applications, CDMA 2025, Riyadh, Feb 16-17, 2025 (pp. 120-125). Institute of Electrical and Electronics Engineers (IEEE).
Lundberg, L., Boldt, M., Borg, A. & Grahn, H. (2024). Bibliometric Mining of Research Trends in Machine Learning. AI, 5(1), 208-236.
Identifiers
ORCID iD: orcid.org/0000-0001-9947-1088
