[dipl] School of Science (Perustieteiden korkeakoulu) / SCI

Permanent URI for this collection: https://aaltodoc.aalto.fi/handle/123456789/21


Recent Submissions

Now showing 1–20 of 5923
  • Hybrid cloud solution for mission critical applications: A case study on electricity market forecasting
    (2025-10-14) Suojärvi, Leo
    School of Science | Master's thesis
    To ensure the stability of the power grid, it is essential to keep the grid frequency close to a nominal value by balancing electricity production and consumption at every moment. Different electricity markets incentivize generator, storage, and load operators to actively participate in the balancing effort. One key category of markets is frequency reserve markets, in which the transmission system operator procures reserve capacity from all reserve providers that have generators, storages, or loads prequalified for the market. Providers submit bids before a gate closure deadline, and an auction mechanism determines bid acceptance and the clearing price. For all parties, it is beneficial that bids target the market intervals when prices are highest and reserves are therefore most needed. Placing bids at the times of the highest prices requires accurate and highly available market forecasts. One suitable forecasting model for this is the seasonal autoregressive integrated moving average model with exogenous variables (SARIMAX). Due to the gate closure deadline, there is only a limited time between the availability of the model's required input data and the deadline by which forecast users need the forecasts to be ready, which creates real-time requirements for forecasting. While prior research has addressed electricity price forecasting and building highly available real-time systems separately, their integration in the context of electricity market forecasting has not been studied. This thesis aims to bridge this gap by studying how the SARIMAX model can be used to forecast the prices of one of the Finnish electricity reserve markets with high availability, using a hybrid cloud approach with diverse redundancy. The thesis begins by introducing the concepts of hybrid cloud, workflow orchestration, and redundancy, and the SARIMAX forecasting model. It then presents a case study of a hybrid cloud system with SARIMAX forecasting for the frequency containment reserve for normal operation (FCR-N) market, along with the system's results. Based on the results, the SARIMAX model is a viable solution for predicting FCR-N market results, and the system architecture used is suitable for this kind of real-time system.
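    As a rough illustration of the forecasting setup described above (not the thesis's actual pipeline), a SARIMAX model with an exogenous regressor can be fitted and used to forecast upcoming market intervals with Python's statsmodels; the model orders, the 24-hour seasonal period, and the load_forecast regressor below are illustrative assumptions on synthetic data.

      # Hedged sketch: hourly prices with daily seasonality and one
      # exogenous regressor; model orders are illustrative, not tuned.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      rng = np.random.default_rng(0)
      idx = pd.date_range("2024-01-01", periods=24 * 60, freq="h")
      exog = pd.DataFrame({"load_forecast": rng.normal(size=len(idx))}, index=idx)
      prices = pd.Series(
          10 + 2 * np.sin(2 * np.pi * np.arange(len(idx)) / 24)
          + 0.5 * exog["load_forecast"] + rng.normal(scale=0.3, size=len(idx)),
          index=idx,
      )

      model = SARIMAX(prices, exog=exog, order=(1, 0, 1),
                      seasonal_order=(1, 1, 1, 24))
      res = model.fit(disp=False)

      # Forecast the next 24 hourly intervals before gate closure, given
      # an exogenous forecast for the same horizon.
      future_exog = pd.DataFrame({"load_forecast": rng.normal(size=24)})
      print(res.forecast(steps=24, exog=future_exog))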
  • Automatic speaking assessment of spontaneous Finnish
    (2025-11-24) Pauna, Olli-Pekka
    School of Science | Master's thesis
    Automatic speaking assessment systems attempt to evaluate the proficiency levels of language learners. Spontaneous speech and dialogues provide valuable information about language proficiency, but remain challenging for automatic speaking assessment. This thesis explores how hidden representations from end-to-end automatic speech recognition (ASR) models can be used for automatic assessment of spontaneous Finnish monologues and dialogues. Experiments compare features extracted from three different speech recognition models and evaluate their performance under categorical and ordinal classification. Models used for feature extraction include a Finnish-only Wav2vec 2.0 model, a multilingual Whisper model, and an English-only Wav2vec 2.0 model. Across all experiments, features generated with the Finnish-only Wav2vec 2.0 and Whisper models produce stable and moderately reliable predictions of proficiency levels. When an English-only model is used for feature generation, performance drops, but the deterioration is limited. This suggests that most of the predictive information encoded in the ASR features is language-agnostic. Furthermore, combining both Finnish-only Wav2vec 2.0 features and Whisper features does not yield significant improvements in performance. This indicates that the features extracted from the two speech recognition models encode overlapping information. The results raise questions about the robustness and fairness of relying solely on hidden speech recognition features for automatic speaking assessment. Such systems may depend too much on surface-level indicators of proficiency and fail to capture key aspects of Finnish proficiency. The results also highlight the need to incorporate language-modeling components into automatic speaking assessment. Despite limitations, the thesis establishes a baseline for Finnish dialogue-based automatic speaking assessment and contributes to research on reliable automatic speaking assessment of Finnish.
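    A minimal sketch of the feature-extraction step, with a stand-in checkpoint name (the thesis's exact models are not reproduced here): hidden states from a Wav2vec 2.0 encoder are mean-pooled into a fixed-size utterance embedding for a downstream proficiency classifier.

      # Hypothetical sketch: pool Wav2vec 2.0 hidden states into one
      # utterance-level feature vector; the checkpoint is a stand-in.
      import torch
      from transformers import AutoFeatureExtractor, AutoModel

      name = "facebook/wav2vec2-base"  # assumed stand-in checkpoint
      extractor = AutoFeatureExtractor.from_pretrained(name)
      model = AutoModel.from_pretrained(name)

      waveform = torch.randn(16000 * 5)  # dummy 5 s of 16 kHz audio
      inputs = extractor(waveform.numpy(), sampling_rate=16000,
                         return_tensors="pt")
      with torch.no_grad():
          hidden = model(**inputs).last_hidden_state   # (1, frames, dim)
      features = hidden.mean(dim=1)                    # (1, dim)
      # `features` would then feed a categorical or ordinal classifier.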
  • Detecting digital dependence: Inferring public-sector hosting arrangements from Internet infrastructural records
    (2025-10-05) Kilpi, Jaakko
    School of Science | Master's thesis
    Governments increasingly rely on digital infrastructures provided by companies, raising concerns about digital sovereignty and dependence on a small set of global cloud providers. This thesis asks whether the hosting providers of public-sector digital services can be inferred from publicly observable infrastructural records, and what forms of reliance such analysis reveals. A dataset of verified hosting arrangements was assembled through Freedom of Information (FOI) requests in the United Kingdom, Finland, and the Philippines, supplemented by confirmed cases of Chinese hyperscaler use. These disclosures provided a rare form of ground truth against which predictive models could be evaluated. Observable records, such as DNS records, were collected for each domain and transformed into categorical features. Whereas previous studies often relied on single-record heuristics to attribute hosting, this thesis evaluates predictive models trained with stratified cross-validation under different provider groupings. The findings show clear patterns of reliance. The UK and Finland relied heavily on Amazon Web Services and Microsoft Azure, while the Philippines retained significant self-hosting. No FOI responses indicated use of Chinese hyperscalers. Predictive models reproduced provider classifications with substantially higher accuracy than trivial baselines. Feature importance analysis further showed that accurate predictions did not hinge on a single record but instead drew on a combination of technical records across record types. The study demonstrates that public-sector hosting providers can be inferred from infrastructural records with reasonable reliability, though only under conditions of validated training data and carefully structured categories. Prediction cannot substitute for institutional transparency, but it can complement it by offering systematic and scalable visibility into otherwise opaque dependencies.
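    A schematic of the modeling setup under stated assumptions (the feature names and provider labels below are invented): categorical infrastructure records are one-hot encoded and a tree ensemble is scored with stratified cross-validation.

      # Invented toy data standing in for DNS-derived categorical features.
      import pandas as pd
      from sklearn.compose import make_column_transformer
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import OneHotEncoder

      X = pd.DataFrame({
          "ns_provider":  ["aws", "aws", "azure", "azure", "self", "self"],
          "mx_provider":  ["ms", "google", "ms", "ms", "self", "self"],
          "a_record_asn": ["AS16509", "AS16509", "AS8075", "AS8075",
                           "AS1234", "AS5678"],
      })
      y = ["AWS", "AWS", "Azure", "Azure", "Self-hosted", "Self-hosted"]

      pipe = make_pipeline(
          make_column_transformer(
              (OneHotEncoder(handle_unknown="ignore"), list(X.columns))),
          RandomForestClassifier(n_estimators=200, random_state=0),
      )
      cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
      print(cross_val_score(pipe, X, y, cv=cv))  # accuracy per fold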
  • Benchmarking large language models on biomedical knowledge graph content
    (2025-10-31) Koivuniemi, Lilja
    School of Science | Master's thesis
    Recently, large language models (LLMs) have shown remarkable natural language processing capabilities across many domains. One such domain is biomedicine, where the models’ emerging abilities have raised questions about whether they could substitute for biomedical knowledge graphs (BKGs) in tasks such as question answering. However, because LLMs are prone to generating false or nonsensical information (known as hallucinations), it is important to assess how accurately these models reproduce structured graph content. This thesis adopts a quantitative approach to evaluate current LLMs with a curated BKG, OpenBioLink. Connected entities (nodes) and their relations (edges) are sampled from the graph. These samples are then reformulated into multiple-choice questions (MCQs), in which models must identify a correct node based on its connection to another node. LLMs are evaluated using three datasets derived from OpenBioLink. Each dataset contains 2,040 MCQs, with three, six, or nine distractors per question. The results show that GPT-5, the best-performing model, answers 90% of the questions correctly in the three-distractor dataset. Accuracy gradually declines across all models as the number of distractors increases. Questions about gene expression consistently produce the most failures for the models. In general, biomedical sources whose nodes have a greater number of outgoing edges of the same type are more error-prone than ontological sources with lower connectivity. Connected nodes with lower confidence scores also yield higher failure rates. Two performance enhancement strategies are also explored. First, using weighted majority voting across models increases accuracy by 1.1–2.3% over GPT-5. Even greater improvements are achieved by selecting 10–20% of the most confident model responses. This yields nearly 100% (99.6%) accuracy for GPT-4o in both the six- and nine-distractor settings. Together, these findings indicate that LLMs cannot yet reliably replace BKGs when precise recall and reasoning over complex or uncertain biomedical associations are required. However, LLMs show promise as complementary tools to BKGs. This is particularly due to their high accuracy in hierarchical ontology-based relations and to their systematic improvements through performance enhancement strategies.
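    To make the evaluation protocol concrete, here is a hedged sketch of turning graph edges into MCQs; the triples are invented and do not come from OpenBioLink. Distractors share the relation type but are not connected to the question's head node.

      import random

      # Invented triples; not actual OpenBioLink content.
      triples = [
          ("GeneA", "expressed_in", "Liver"),
          ("GeneA", "expressed_in", "Brain"),
          ("GeneB", "expressed_in", "Kidney"),
          ("GeneC", "expressed_in", "Lung"),
          ("GeneD", "expressed_in", "Heart"),
      ]

      def make_mcq(triples, n_distractors=3, seed=0):
          rng = random.Random(seed)
          head, rel, answer = rng.choice(triples)
          # Distractors: same relation type, but not linked to `head`.
          pool = {t for h, r, t in triples if r == rel and h != head}
          pool -= {answer}
          options = rng.sample(sorted(pool), n_distractors) + [answer]
          rng.shuffle(options)
          return {"question": f"Which entity is linked to {head} "
                              f"via '{rel}'?",
                  "options": options, "answer": answer}

      print(make_mcq(triples))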
  • How industrial companies create product strategies for aftermarket products
    (2025-11-22) Rytkönen, Tuulia
    School of Science | Master's thesis
    Aftermarkets have become an increasingly important source of customer value, revenue and competitiveness for industrial companies. The management of aftermarket offerings typically falls within the product management function, where product strategy plays a central role in developing and managing products as well as creating business value. However, previous research has not addressed product strategy in the context of product management and aftermarkets. This thesis explores how industrial companies create strategies for their aftermarket products and what purposes such strategies have in product management. The study was conducted as qualitative research by interviewing a total of 17 product management professionals from four large industrial companies. The collected data was analyzed using categorization and thematic analysis. Creating aftermarket product strategies remains an emerging and partly unestablished practice within industrial companies. The main outcome of this thesis is a framework explaining how industrial companies create product strategies for aftermarket products. Strategy work begins with deciding to create a product strategy and defining its scope. Strategies are created by directors and product managers in collaboration with internal stakeholders, and they typically include elements such as a vision, background analyses, targets and focus areas, roadmaps, and an executive summary. Product strategies are seen as living documents that are aligned both vertically and horizontally with surrounding strategies. Once created, product strategies serve four broader purposes in product management: providing long-term direction and meaning for work, linking product-level activities to the company’s broader goals, clarifying the current situation and guiding focus, and supporting the communication of product-level insights and direction. This thesis brings existing product strategy research closer to product-level practice in the aftermarket context, offering practical recommendations to support product management when developing product strategies.
  • From smart contracts to smart applications: Leveraging composability to create a trading instrument in decentralized finance
    (2025-11-23) Tamminen, Tyko
    School of Science | Master's thesis
    This Master's thesis investigates the application of machine learning methods to cryptocurrency market prediction and the development of hybrid trading strategies that combine predictive signals with decentralized finance yield components. The study addresses how machine learning models can predict directional shifts in cryptocurrency markets and whether integrating DeFi yield elements can improve risk-adjusted portfolio returns compared to traditional buy-and-hold approaches. The empirical investigation examined multiple machine learning architectures for binary directional forecasting of Bitcoin price movements. Models were trained on data spanning January 2018 to August 2024 using walk-forward validation. A LightGBM regressor achieved 53% directional accuracy, while a Random Forest model reached 52%. Other tested models, including LSTM networks and MLP, performed within the 51–56% accuracy range. These results indicate that while machine learning methods demonstrate potential for market direction prediction when combined with properly formatted datasets and appropriate technical indicators, achieving high prediction accuracy remains challenging. A composed trading strategy was developed that integrated LSTM predictions with real-world DeFi yield rates from liquidity pools. The strategy utilized actual yield data to provide a realistic performance assessment. Despite a modest directional prediction accuracy of 53%, the hybrid approach reduced drawdown by 50% compared to the benchmark buy-and-hold strategy. The DeFi yield component compensated for imperfect directional signals, demonstrating that yield-enhanced strategies can achieve adequate risk-adjusted returns even without superior prediction accuracy. The study also examined structural differences between decentralized and traditional financial systems. DeFi offers global accessibility, programmable infrastructure, and fast settlement, but faces challenges including security vulnerabilities and regulatory uncertainty. However, the primary contribution lies in demonstrating that hybrid strategies combining machine learning signals with DeFi yield mechanisms represent a viable approach to portfolio management when effective risk management is implemented.
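    The walk-forward evaluation can be sketched as an expanding-window loop; the synthetic features, window sizes, and classifier below are placeholders rather than the thesis's configuration.

      # Minimal sketch of walk-forward (expanding-window) validation
      # for binary direction forecasting on synthetic data.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 8))            # e.g., technical indicators
      y = (rng.random(500) > 0.5).astype(int)  # 1 = price up next period

      window, step, scores = 250, 50, []
      for start in range(window, len(X), step):
          model = RandomForestClassifier(n_estimators=100, random_state=0)
          model.fit(X[:start], y[:start])      # train on all past data only
          test = slice(start, min(start + step, len(X)))
          scores.append((model.predict(X[test]) == y[test]).mean())

      print(f"mean directional accuracy: {np.mean(scores):.2%}")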
  • AI tools’ effects on working methods, productivity and project metrics
    (2025-11-21) Willberg, Tuomas
    School of Science | Master's thesis
    AI-powered tools for software development are rapidly becoming a common factor in developers’ daily workflows. These tools have significant potential to reshape the best practices in software development, yet realizing this potential ultimately depends on the user. While much research has been done on AI and its capabilities and performance, AI tool use cases in software development and their effects on productivity have received less attention. This master’s thesis studied AI tool usage and its effects on productivity through a literature review as well as a case study. The literature review examined the current state of the subject through scientific literature, whereas the case study aimed at understanding the practical application of AI tools in a real-world software development context. The case study also analysed numerical project metrics from the periods before and after the adoption of AI tools. The research findings showed that AI tools have the potential to increase the productivity of software developers. The most common factors behind the productivity increase were the automation of simple and monotonous tasks, help in learning new technologies and topics, and improved work comfort. The biggest benefits were recorded when AI tools were used with caution and predefined use cases.
  • Evaluating patent acquisitions for out-licensing purposes: A strategic framework
    (2025-11-23) Aaltonen, Ella-Maria
    School of Science | Master's thesis
    This thesis develops a strategic framework for evaluating patent acquisitions conducted for out-licensing purposes. The study was carried out in collaboration with a company in the technology industry engaged, amongst other activities, in intellectual property monetization. A Design Science Research approach was used to combine insights from academic literature on patent valuation and strategic portfolio management with empirical data from expert interviews. The resulting framework integrates financial, bibliometric, and strategic perspectives, offering a structured approach to assessing patent value for out-licensing. The study contributes to bridging the gap between patent valuation theory and practical decision-making in out-licensing contexts, and provides managerial guidance for organizations seeking to strengthen their licensable patent portfolios through acquisitions.
  • Asiakasarvon luominen tekoälyn avulla pienessä ohjelmistoyrityksessä [Creating customer value with artificial intelligence in a small software company]
    (2025-11-22) Vehniäinen, Anette
    School of Science | Master's thesis
    Although artificial intelligence has many benefits in business, its use remains limited in small companies. Studies indicate that small companies can face numerous challenges that hinder AI adoption. In addition, adopting AI in knowledge-intensive sectors such as software requires special attention to ensure it increases customer value. However, the relationship between AI use and customer value in the software sector has received little attention in the literature. The goal of this thesis was to investigate how a small software company can use AI to create customer value. The literature review explored the nature of customer value in business-to-business software sales, the challenges of AI adoption, and the relationship between AI and customer value based on previous research. The empirical research method chosen was a case study in which six key personnel from a small software company were interviewed. The results show that AI adoption challenges in small companies can arise at both the individual and organizational levels. Insufficient knowledge about AI was a major challenge linked both to individual know-how and to organizational structures. Other individual-level challenges included gaps in AI capabilities and negative perceptions of AI. Organizational-level challenges included the lack of mutually agreed-upon practices and issues related to data and technologies. The thesis also identified several potential roles for AI in creating customer value in a small software company. AI was seen as a means to produce information to support both the company and its customers, as well as to enhance communication between them. Customer awareness of AI use could strengthen relationship value by creating a positive image of the company. AI was also viewed as a means to develop the business more broadly, for example by increasing productivity. Challenges may include preserving professional and personalized service despite AI use, and ensuring that productivity gains are directed toward increasing customer value. The findings suggest that for AI adoption to support customer value, the company must have a good understanding of how AI works and of its strengths and weaknesses. This helps ensure that the resources of both humans and AI are allocated in ways that create customer value.
  • Enhancing software release velocity
    (2025-11-18) Syed, Maria
    School of Science | Master's thesis
    This thesis investigates critical software delivery latency at a large fintech organization, where a modern micro-application architecture was severely bottlenecked by a legacy, manual, ticketing-based approval system. This hybrid environment created an acute organizational bottleneck, imposing high coordination burdens and unpredictable delays on globally distributed feature teams. Using an Action Research (AR) methodology, the study first established a high-friction baseline, measuring the median Lead Time for Changes (LTC) at 20.2 hours. The core intervention involved replacing the mandatory manual approval gate with a fully automated, self-service deployment model integrated directly into the Continuous Integration/Continuous Delivery (CI/CD) pipeline. The intervention successfully drove significant organizational efficiency, yielding a 69% reduction in LTC, dropping the median time from 20.2 hours to 6.2 hours. Concurrently, Deployment Frequency (DF) increased by 113% (from 47 to 100 releases per week). This improvement solidified the organization's position within the DORA elite performance tier. The primary practical guidance derived from this case study is that sustained software acceleration requires prioritizing the decentralization of control over the deployment trigger. This is achieved not merely through technical automation, but by deliberately eliminating all mandatory human coordination steps via external systems (e.g., tickets), relying instead on real-time visibility tooling integrated into the developer workflow. Additionally, and more importantly, this required a complementary organizational culture shift, which involved transferring accountability for production stability directly from administrative roles, such as the Program Manager, to the autonomous development teams.
  • Incorporating test automation into existing software systems: A case study of incremental and maintainable practices
    (2025-11-11) Lippo, Markus
    School of Science | Master's thesis
    The importance of automated testing has increased as organizations aim to deliver software more frequently, without compromising software quality. However, the distribution of higher- and lower-level tests, differences in test creation techniques, and the principles behind continuous integration are mostly aimed towards new development projects. Furthermore, common pitfalls, such as unclear strategies and technical debt, have been identified. This thesis explores how test automation could be introduced into an existing system, while ensuring maintainability and future extensibility. This challenge was addressed by designing and implementing a pilot for a case company. The pilot consisted of four phases: planning, tool selection, implementation, and evaluation. The pilot defined what to automate, implemented a standalone test automation system, evaluated its feasibility, and outlined directions for future improvement. The case company’s multi-version web application demanded a modular and maintainable testing approach. The pilot prioritized high-value, repetitive steps, translating a specification document into automated regression tests of the user interface. This meaningfully reduced manual effort. The proof-of-concept scope enabled fast validation prior to large-scale commitment. Furthermore, aligning the strategy and tool selection with the product context proved essential. While various tools were considered, Robot Framework proved the best fit. This thesis applies established concepts to an existing software environment and highlights a gap in prior research, which often focuses on new development projects. It also confirms known trade-offs, such as increased execution time and maintenance from higher-level testing. Practical recommendations include starting with high-value tests, limiting the initial scope, selecting tools that support the product context, and defining a long-term goal. The findings demonstrate that an incremental and context-aware approach is effective for introducing test automation.
  • Designing a digital service offering and outcome-based contracts for a novel connected motor solution
    (2025-11-23) Granlund, Alex
    School of Science | Master's thesis
    Industrial machinery suppliers increasingly compete on digitally enabled outcomes rather than equipment. This thesis examines how a novel speed-controlled motor solution, MV Titanium, can be used to deliver digital services and performance-based contracts by migrating existing drive-based services. It focuses on four aspects: which services can be ported, what minimum interface and non-functional requirements they need, which customer outcomes they affect, and when a performance-based contract becomes viable. The study applies an abductive qualitative single-case design using interviews, focus groups, and internal technical and commercial documentation, centered on four drive-origin services: self-service powertrain monitoring, expert-led monitoring, an embedded application environment, and a collaborative diagnostics tool. Findings show that these services can be ported with modest adaptation when MV Titanium provides telemetry, connectivity, and adjusted analytics tuned to motor-specific parameters. Most services operate with read-only interfaces and relaxed latency requirements, while embedded applications require real-time interfaces and extended firmware. Services are linked to higher uptime, reduced energy consumption, and faster troubleshooting. A minimum viable performance-based contract is co-designed around uptime as the primary outcome metric, underpinned by agreed rules for data access, measurement and verification, attribution, and payment, with a roadmap for scaling this model. The thesis contributes to digital servitization and modular service platform literature by specifying portability conditions, minimum interface requirements, and governance elements that enable a connected motor system to function as a scalable service and outcome-based contracting platform.
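    As a purely hypothetical illustration of an uptime-anchored payment rule (the thresholds, bonus, and penalty below are invented, not the contract terms co-designed in the thesis):

      # Invented outcome-based payment rule keyed to measured uptime.
      def outcome_payment(uptime_pct: float, base_fee: float) -> float:
          """Adjust a base service fee by measured uptime."""
          if uptime_pct >= 99.5:        # target met: small bonus
              return base_fee * 1.05
          if uptime_pct >= 98.0:        # tolerated band: base fee only
              return base_fee
          # Below the floor: proportional penalty.
          return base_fee * max(0.0, 1 - (98.0 - uptime_pct) * 0.10)

      measured = 100 * (720 - 12) / 720   # 12 h downtime in a 720 h month
      print(f"uptime {measured:.2f}% -> "
            f"fee {outcome_payment(measured, 10_000):.0f}")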
  • Improving performance metrics in factory production and intralogistics
    (2025-11-23) Pelander, Sami
    School of Science | Master's thesis
    While many industrial companies use metrics to quantify operational performance, they can struggle to understand the connection between the measured phenomena and organizational outcomes. This thesis contributes to the general understanding of performance measurement in discrete manufacturing by examining how strategic and operational factors jointly shape factory-level metrics, and by proposing a structured framework for aligning measurement practices with decision-making needs. Conducted as part of the TwinFlow research project, the thesis analyses the performance measurement practices of Ponsse Plc, a Finnish forestry equipment manufacturer, through a single case study supported by triangulation interviews with two additional original equipment manufacturers. Based on the literature review, a set of strategic and operational factors influencing metric formation was identified and applied to evaluate the case company’s existing measurement system. By analyzing improvement opportunities in the case company’s measurement system and benchmarking best practices from comparable industrial firms, the research identifies three key development directions for Ponsse's measurement practices. These directions include enhancements to existing metrics used by the company while also introducing new measurement opportunities. Building on a synthesis of academic literature and empirical evidence, the study develops a conceptual framework that links key performance dimensions (productivity, quality, flexibility, time, and cost) to both operational and strategic objectives. The research extends existing theory by demonstrating how some performance dimensions are particularly well-suited to supporting different organizational processes, enabling a more nuanced understanding of the roles of different performance metrics. Through this, the thesis provides generalizable insights for designing performance measurement ecosystems and offers practical guidance for organizations seeking to align their metrics with decision-making needs across different managerial levels.
  • Customer selection for B2B integration with low-transaction-volume customers
    (2025-11-14) Minkkinen, Markus
    School of Science | Master's thesis
    Advances in business-to-business (B2B) integration have made it more accessible for small-scale customer relationships. Traditionally, such integrations have focused on high-volume customers due to their clear financial returns, leaving a research gap in understanding their feasibility and profitability for low-transaction-volume customers. This thesis addresses this gap by examining how low-volume customer integrations contribute to operational and financial performance and by developing a systematic model for selecting customers most suitable for integration projects. The study adopts a single-case research design within an international telecommunications technology company operating in the Nordic market. A mixed-methods approach is applied, combining quantitative analysis of internal performance data with qualitative interviews. The quantitative analysis investigates the impact of B2B integration on operational efficiency using metrics such as order handling time, invoice creation time, on-time delivery rate, and process error rates. Qualitative interviews identify cost components, implementation feasibility, and decision-making criteria. Additionally, a Multi-Criteria Decision-Making (MCDM) framework is tested for customer prioritization. The results reveal efficiency improvements from B2B integration. Order handling time decreased by up to 73%, invoice creation time by 94%, and process errors by 86%, indicating operational gains. Financially, most benefits come from improved working capital efficiency, with smaller savings in labor costs. Cost analysis identified system development as the primary cost driver. Based on these findings, a two-stage customer selection model was proposed: first, evaluating profitability through cost-benefit analysis, and second, assessing feasibility using qualitative criteria such as business continuity and eCommerce strategy alignment. The thesis concludes that B2B integration can be both feasible and financially beneficial for low-transaction-volume customers. The developed model offers practical value by helping companies prioritize integration initiatives more effectively. From an academic perspective, the research expands the understanding of supplier-initiated integration in low-transaction-volume environments. Future research is recommended to validate the model across industries.
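    An illustrative sketch of the two-stage selection logic, with invented criteria, weights, and figures: a cost-benefit gate on payback time, followed by weighted qualitative scoring in the spirit of an MCDM method.

      # Invented two-stage screen: profitability gate, then weighted score.
      customers = [
          {"name": "A", "annual_benefit": 40_000, "integration_cost": 25_000,
           "scores": {"continuity": 4, "ecommerce_fit": 5, "data_quality": 3}},
          {"name": "B", "annual_benefit": 12_000, "integration_cost": 30_000,
           "scores": {"continuity": 5, "ecommerce_fit": 2, "data_quality": 4}},
      ]
      weights = {"continuity": 0.5, "ecommerce_fit": 0.3, "data_quality": 0.2}

      def shortlist(customers, payback_limit_years=1.5):
          ranked = []
          for c in customers:
              payback = c["integration_cost"] / c["annual_benefit"]
              if payback > payback_limit_years:   # stage 1: profitability
                  continue
              score = sum(weights[k] * v for k, v in c["scores"].items())
              ranked.append((c["name"], round(payback, 2), round(score, 2)))
          return sorted(ranked, key=lambda r: -r[2])  # stage 2: feasibility

      print(shortlist(customers))  # customer B is filtered out at stage 1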
  • Success factors and technological solutions in employee relocation
    (2025-09-14) Ma, Zechen
    School of Science | Master's thesis
    Employee relocation has become an increasingly significant phenomenon in the contemporary labour market, influencing global talent attraction and retention. It has emerged as an essential factor for companies striving to maintain a competitive advantage and meet diverse workforce requirements. However, the practical relocation process for employees has posed several challenges, including process inefficiencies, communication issues and difficulties in expectation management. Given the substantial costs and inherent complexities involved, there is a growing interest in addressing these challenges through recent technological advancements, notably artificial intelligence. The aim of this thesis is to explore the key factors influencing successful employee relocation and to identify technological solutions and opportunities that support these factors. Specifically, the thesis investigates practical processes associated with long-term international relocations to Finland, encompassing stages from initial immigration tasks to integration into the host country. The research was conducted with a grounded theory approach, and the primary data is collected through semi-structured interviews. A total of 31 individuals from Finland were interviewed, including HR professionals, relocation consultants, and technology experts in the field. The thesis identifies compliance, process efficiency, transparency, stakeholder communication, and expectation management as the most critical factors contributing to successful employee relocation. Furthermore, it highlights several technological solutions to support these factors and enhance relocation processes, such as AI agents for information search and document drafting. This thesis contributes to academic literature by being among the first qualitative studies on this topic. Additionally, the research offers practical insights intended to guide practitioners in refining relocation processes.
  • Threat modeling in DevSecOps based development of a cloud-native SaaS system
    (2025-11-24) Turtinen, Henrik
    School of Science | Master's thesis
    Modern software engineering practices and technologies have accelerated delivery to the point where teams routinely deploy multiple times per day. This iterative high-velocity model conflicts with traditional threat modeling practices, which align better with linear and slower development cycles. Conventional threat modeling requires security knowledge, consumes substantial resources, and generates a lot of documentation that, while valuable, can quickly become obsolete in fast-moving development environments. To solve this problem, a prototype process was developed that combines a threat-modeling-as-code (TMAC) tool, agentic AI, and the Model Context Protocol. The aim was to make threat modeling more accessible and better aligned with modern DevSecOps workflows. In this process, the agentic AI autonomously gathers relevant system information, drafts and iterates TMAC syntax, and invokes the TMAC tool to generate diagrams and a report. In addition, the AI supplements the threats identified by the TMAC tool with findings derived from its own contextual reasoning. This design reduces the reliance on security experts and minimizes friction with rapid development practices. The prototype process was evaluated using two scenarios: one in which the AI had access to only the architectural plans of the application to be threat modeled, and another in which it had access to the application's source code. The evaluation showed that access to source code produced noticeably better results. Each scenario was executed twice to assess reproducibility, revealing significant variation between runs. While promising, problems were noted in the ratio of relevant to irrelevant threats.
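    The abstract does not name the TMAC tool used, so the following uses pytm, an open source TMAC library, purely as a generic example of the kind of artifact an AI agent could draft, iterate, and render:

      # Generic threat-modeling-as-code example with pytm; this is an
      # illustration of the style of artifact, not the thesis's tool.
      from pytm import TM, Actor, Boundary, Dataflow, Datastore, Server

      tm = TM("SaaS sample model")
      tm.description = "Minimal cloud-native SaaS threat model"

      internet = Boundary("Internet")
      cloud = Boundary("Cloud VPC")

      user = Actor("End user")
      user.inBoundary = internet

      api = Server("API service")
      api.inBoundary = cloud

      db = Datastore("Tenant database")
      db.inBoundary = cloud

      req = Dataflow(user, api, "HTTPS API request")
      req.protocol = "HTTPS"
      query = Dataflow(api, db, "SQL query")

      tm.process()  # run with e.g. `--dfd` or `--report` to emit outputs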
  • Parameter prediction for dilution refrigerators from multivariate time-series data
    (2025-09-30) Tervonen, Aku
    School of Science | Master's thesis
    Dilution refrigerators are among the most important tools for research at ultra-low temperatures, particularly in quantum information science. Efficient methods for simulating their thermal behavior during cooldown are essential for enabling rapid design choices, improving system understanding, and predicting cooldown times. This thesis introduces a parameterized heat-circuit approach for simulating dilution refrigerators, combined with a neural-network-based framework for predicting key component parameters that are otherwise uncertain in such complex systems. Multiple interdependent time-series signals are leveraged to estimate these parameters. The method is demonstrated on XLD-1000 dilution refrigerators, where an in-house thermal circuit simulator is used to generate training data, reducing the time required to produce a single data point from days to minutes. Model performance is evaluated on both simulated data and experimental cooldown measurements from four XLD-1000 cryostats. The results show excellent agreement with simulator-generated data, while larger deviations are observed in real-world tests. These discrepancies are attributed to limitations of the simplified circuit model but also provide valuable insights, pointing to systematic differences between simulations and actual cryostat behavior. Although absolute parameter values may remain uncertain, the predicted parameter variations successfully capture changes in cooldown curves. This makes the approach a promising diagnostic tool for detecting malfunctions and understanding system-to-system performance differences in dilution refrigerators and motivates further research on the topic.
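    A schematic of the regression setup under stated assumptions (the sizes and architecture below are placeholders, not the thesis's network): a small PyTorch MLP maps multivariate cooldown time series to component parameters, trained on simulator-generated data.

      # Placeholder architecture; dummy tensors stand in for the
      # simulator-generated training data described above.
      import torch
      from torch import nn

      n_signals, n_steps, n_params = 8, 256, 4   # illustrative sizes

      model = nn.Sequential(
          nn.Flatten(),                  # (B, signals, steps) -> (B, S*T)
          nn.Linear(n_signals * n_steps, 128),
          nn.ReLU(),
          nn.Linear(128, n_params),      # e.g., thermal resistances
      )
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()

      series = torch.randn(32, n_signals, n_steps)   # dummy batch
      params = torch.randn(32, n_params)

      for _ in range(5):                 # a few illustrative steps
          opt.zero_grad()
          loss = loss_fn(model(series), params)
          loss.backward()
          opt.step()
      print(f"final training loss: {loss.item():.3f}")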
  • Designing Kilpi, an authorization framework for web applications: Modeling and implementing an open source TypeScript framework
    (2025-10-07) Nevavuori, Jussi
    School of Science | Master's thesis
    Designing and implementing an authorization system for a web application is a difficult task that poses major operational and security risks if done incorrectly, due to the wide variety of existing authorization models and the complexity of the requirements such a system must meet. There is a lack of comprehensive open source authorization solutions for JavaScript and TypeScript applications: most existing solutions target enterprise-oriented languages, are proprietary paid third-party services, or are platform-specific. This thesis introduces Kilpi, an open source TypeScript library for modeling and implementing authorization systems for web applications. The thesis discusses the goals, requirements, and challenges of designing and implementing such a system, and evaluates the fully implemented and publicly available Kilpi library against these requirements. The most important goals of Kilpi are developer friendliness and the flexibility to suit most applications, use cases, and authorization models. Kilpi provides a flexible, functional, policy-based access control model in which policies are defined as TypeScript functions. Evaluated against guidelines from the literature and tested in multiple production applications, Kilpi is found to fulfill its goals well as a flexible authorization solution. Remaining future development consists primarily of superficial usability improvements.
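    A conceptual sketch of functional policy-based access control, the model Kilpi implements; this mirrors the policies-as-functions idea in Python and is not Kilpi's actual TypeScript API.

      # Policies as plain functions: (subject, resource) -> bool.
      from dataclasses import dataclass

      @dataclass
      class User:
          id: str
          role: str

      @dataclass
      class Document:
          owner_id: str

      def can_edit_document(user: User, doc: Document) -> bool:
          return user.role == "admin" or user.id == doc.owner_id

      def authorize(policy, user, resource):
          if not policy(user, resource):
              raise PermissionError("denied by policy")

      alice = User(id="u1", role="member")
      doc = Document(owner_id="u1")
      authorize(can_edit_document, alice, doc)  # passes: alice owns doc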
  • Valikoidut tiedonsiirrot relaatiotietokantojen välillä [Selective data transfers between relational databases]
    (2025-10-20) Jalava, Jouni
    School of Science | Master's thesis
    The purpose of this thesis was to propose a new method for provisioning test data, called selective data transfer, to compare it against other test data provisioning methods, and to develop new tools to support it. In a selective data transfer, testers select which parts of the production database they want to transfer to a test database. During the transfer, new data is copied from the production database to the test database, existing data in the test database is refreshed to match the production database, and data missing from the production database is deleted from the test database. Compared to other test data provisioning methods, selective data transfers give testers more freedom to choose which data to transfer from the production database. The disadvantages include the risk of test data leaking in a data breach even if it is anonymized; data taken from the production database cannot be used to test changes to the structure of the test database; and, because the method is new, it is not widely supported. Two new tools, designed to work with any relational database, were developed to support the two phases of the transfer: the first performs the copying phase, and the second deletes the extra data from the test database. The tools were tested against several databases representing different relational database structures. The copying tool copied the right rows in most databases, but could not copy the right rows in a database where the foreign keys form a cycle. The deletion tool worked with all databases it was tested against, and performed better than the copying tool because deleting extra rows is a simpler process. Overall, both tools worked with the usual database structures.
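    A simplified single-table sketch of the two transfer phases using sqlite3 (real tooling must also order tables by foreign-key dependencies, which is exactly where the cyclic-foreign-key case above fails):

      # Toy copy/refresh + delete sync between "production" and "test".
      import sqlite3

      prod = sqlite3.connect(":memory:")
      test = sqlite3.connect(":memory:")
      for db in (prod, test):
          db.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
      prod.executemany("INSERT INTO t VALUES (?, ?)",
                       [(1, "a"), (2, "b")])
      test.executemany("INSERT INTO t VALUES (?, ?)",
                       [(1, "old"), (3, "extra")])

      # Phase 1: copy new rows and refresh existing ones.
      for row in prod.execute("SELECT id, val FROM t"):
          test.execute(
              "INSERT INTO t (id, val) VALUES (?, ?) "
              "ON CONFLICT(id) DO UPDATE SET val = excluded.val", row)

      # Phase 2: delete test rows missing from production.
      prod_ids = [r[0] for r in prod.execute("SELECT id FROM t")]
      test.execute(
          f"DELETE FROM t WHERE id NOT IN ({','.join('?' * len(prod_ids))})",
          prod_ids)

      print(test.execute("SELECT * FROM t").fetchall())  # [(1,'a'), (2,'b')]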
  • Training AI agents to navigate web interfaces through visual input
    (2025-11-24) Pärssinen, Henrik
    School of Science | Master's thesis
    Modern multimodal large language models (LLMs) exhibit strong object detection and visual grounding capabilities, enabling vision-based agents capable of perceiving, reasoning, and acting on real web interfaces. However, even small perception errors can compound across steps and interfere with multi-step execution. In this thesis, we explore the training of vision-language models as web agents capable of visually grounded interaction within web interfaces. Using Qwen2.5-VL-32B, we perform prompt distillation from a teacher equipped with a hint to an identical student. This transfers reasoning traces and interaction strategies directly into the student model's weights. We train three different models with three distinct training setups, each cast as a visual question-answering task. We then evaluate the resulting models on agentic single-click web tasks to assess how task-specific fine-tuning transfers to realistic web interactions. Behavioral analysis reveals that both tuned and baseline agents exhibit a bias toward their initial action and text tokens, under-utilizing visual feedback from the environment. Nevertheless, all fine-tuned agents outperform the baseline model in our evaluation, demonstrating that successful task-specific fine-tuning transfers to agentic settings and confirming prompt distillation as a viable approach for improving vision-based agents.
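    A schematic of the prompt-distillation data construction (the prompts and the teacher stub are placeholders, not the thesis's setup): the teacher sees the task prompt plus a hint and produces a reasoning trace; the student is trained on the same prompt without the hint, targeting that trace.

      # Build (prompt-without-hint, teacher-trace) training records.
      import json

      def build_distillation_record(task_prompt, hint, call_teacher):
          teacher_prompt = f"{task_prompt}\n\nHint: {hint}"
          trace = call_teacher(teacher_prompt)   # reasoning + final action
          return {"prompt": task_prompt,         # hint removed for student
                  "completion": trace}

      def fake_teacher(prompt: str) -> str:      # stand-in for a real model
          return ("Thought: the login button is top-right. "
                  "Action: CLICK(912, 40)")

      record = build_distillation_record(
          "Screenshot attached. Click the login button.",
          "The login button is in the top-right corner.",
          fake_teacher,
      )
      print(json.dumps(record, indent=2))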