AI-assisted
software development

From speeding up documentation and development to rethinking how requirements are built, AI is now part of our day-to-day workflows and real-world projects delivered to clients. And the results? Tangible. Measurable. Exciting.



Results in numbers

These results aren’t projections or assumptions. They’re measured outcomes from real projects, real teams, and real codebases.

80%

less time spent on generating and maintaining public class documentation (measured with Cursor)

50%

faster creation of structured technical documentation for complex systems

30%

less effort to reach up to 100% unit test coverage, depending on the codebase

20%

faster identification and resolution of code violations through integrated tooling

15–20%

time savings for junior developers using AI tools, measured in real, controlled experiments across teams

How AI has improved our software development

We’ve moved beyond testing. AI is part of how we deliver real value. These results come straight from our projects, where carefully implemented AI tools are making a measurable difference every day.


Reduced time to onboard developers into new domains

especially in regulated industries (thanks to AI knowledge assistants like L.E.A.P.)


Higher quality requirements via AI-supported decomposition

using EARS syntax, reducing ambiguity and review time


Improved cross-team alignment

by capturing and reusing internal know-how via Retrieval-Augmented Generation systems


Fewer regression bugs and faster root-cause analysis

powered by AI-guided traceability and test generation


Documented AI usage patterns and tool performance tracking

ensuring continuous feedback and safety in deployment


Safe integration of our AI implementations

designed to align with ASPICE, ISO 26262, and other safety and security standards

As a QA Lead, I integrated ChatGPT into my daily workflow to support tasks like UML analysis, test case generation, and API test scripting (e.g., Postman, pytest). It significantly accelerated the creation of edge case scenarios and helped convert user stories into Gherkin format for BDD. I also used it to validate JSON structures, prepare test data, and generate technical documentation drafts. The model reduced repetitive manual work and improved consistency across test artifacts. For any QA working in fast-paced environments, it’s a highly effective tool.

Marcin Sikorski

Lead QA Automation Software Engineer

Implemented, observed, and measured: insights you can put into practice

Across teams and domains, we test, tweak, and track how AI changes the way we work. Here are real insights from real projects.

 

Through practical trials and developer feedback, we’ve learned:

  • AI is most valuable in repeatable, pattern-based tasks (e.g. documentation, interface scaffolding, boilerplate code)
  • AI is less helpful for abstract conceptual tasks unless guided by well-prepared prompts
  • Time savings are more likely in integration, testing, and automation tasks than in creative architecture design

With this clarity, we focus on adding real value instead of layering on complexity just for the sake of it.

    We ran hands-on experiments to see how AI really impacts developer speed. To make it measurable, we tested tools in real workflows. Here’s what works for us:

  • Side-by-side comparisons of AI-assisted vs. non-assisted teams on real project tasks
  • Use of Scrum metrics like team velocity and completion time for epics/stories
  • Analysis of AI effectiveness across seniority levels (Junior, Regular, Senior) and technology familiarity
  • Metrics tracked: task completion time, code quality, team velocity, and developer satisfaction

    We’ve seen up to 20% faster development, especially when building early-stage prototypes or jumping into new tech stacks.

    We built and tested our own audit framework to see where AI really fits. It helps us evaluate AI’s impact in real projects, so we’re not guessing, but making informed decisions.

    This goes beyond hype-driven adoption. It’s a structured evaluation of where AI provides value, with metrics like delivery speed, code quality, and how ready the team is to adopt new tools.

    We don’t silo innovation. Every AI success becomes a shared asset across the organisation. When an insight or workflow works in one domain, we don’t let it stop there: we scale it.

    We applied unified AI tooling, adoption standards, and documentation best practices across:

  • Automotive: High-assurance coding environments using GitHub Copilot and Cursor for quality-critical software, fully aligned with ASPICE and ISO 26262
  • Robotics: Applied LLM-powered requirement decomposition, mocking frameworks (like rtest), and accelerated CI/CD workflows in ROS 2 projects
  • Industrial automation: Used AI-supported documentation and test acceleration for PLC and embedded C/C++ code in production lines
  • Healthcare (early-stage): Piloted AI-assisted architecture design and requirement generation under IEC 62304 and ISO 13485 constraints

    To make AI-assisted development truly useful, we’ve developed internal tools and frameworks that bring clarity, consistency, and measurable impact across teams.

    They help us track what’s working, improve what’s not, and keep pushing boundaries.

    Prompt libraries for common development scenarios

    We’ve built prompt libraries that actually work.
    From generating unit tests in embedded C++, to refactoring old Java code, to translating safety requirements into EARS syntax, we have a growing collection of prompts tailored to real engineering challenges.

    They’re versioned, domain-specific, and help our teams get faster, more consistent results with AI tools.
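    As an illustration, a versioned prompt library can be as simple as templates keyed by scenario and version, so teams can pin, compare, and roll back prompts. The scenario names and prompt texts below are hypothetical, not our actual library:

```python
from string import Template

# Hypothetical sketch of a versioned prompt library: each scenario keeps
# dated versions so a team can pin a known-good prompt or A/B-test a new one.
PROMPTS = {
    ("unit_test_cpp", "v2"): Template(
        "Write GoogleTest unit tests for the following embedded C++ class. "
        "Cover boundary values and error paths.\n\n$code"
    ),
    ("requirements_ears", "v1"): Template(
        "Rewrite the requirement below using EARS syntax "
        "(When/While/If/Where patterns). Flag any ambiguity.\n\n$requirement"
    ),
}

def render_prompt(scenario: str, version: str, **fields) -> str:
    """Look up a prompt by scenario and version, then fill in its fields."""
    return PROMPTS[(scenario, version)].substitute(**fields)
```

    A caller then renders a pinned version, e.g. `render_prompt("requirements_ears", "v1", requirement="The system shall log errors.")`, which keeps results reproducible across the team.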

    CI/CD integrations that track AI use against delivery metrics

    We’ve embedded telemetry right into our pipelines to track exactly how AI tools impact story point burn-down, test coverage, and overall team velocity. These insights show us clearly where AI is making a real difference — and where it’s not quite hitting the mark.

    Engineering leads gain instant, real-time visibility into the ROI of AI adoption, helping them steer their teams smarter and faster.
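    The core of such telemetry is simply tagging work items with whether AI assistance was used and comparing the groups. A minimal sketch, with illustrative records and field names of our own invention:

```python
from statistics import mean

# Illustrative commit/story records: each notes whether an AI tool assisted
# and what the delivery metrics were (all field names and values are made up).
records = [
    {"ai_assisted": True,  "hours": 3.0, "coverage_delta": 2.1},
    {"ai_assisted": True,  "hours": 4.5, "coverage_delta": 1.4},
    {"ai_assisted": False, "hours": 6.0, "coverage_delta": 0.8},
    {"ai_assisted": False, "hours": 5.5, "coverage_delta": 1.0},
]

def compare(metric: str) -> dict:
    """Average a delivery metric for AI-assisted vs. unassisted work items."""
    assisted = [r[metric] for r in records if r["ai_assisted"]]
    baseline = [r[metric] for r in records if not r["ai_assisted"]]
    return {"ai": mean(assisted), "baseline": mean(baseline)}
```

    In a real pipeline the records would come from commit metadata and sprint tooling rather than a hard-coded list, but the comparison itself stays this simple.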

    AI response quality analysis and prompt refinement tools

    We’ve developed handy tools to assess the quality of AI-generated outputs, checking accuracy, coverage, and formatting against what we expect. These utilities can even suggest improved prompts and create alternative versions for easy comparison.

    That means prompt engineering stops being a guessing game and turns into a smooth, repeatable process.

    AI usage dashboards for team leads

    Our AI usage dashboards provide engineering and project leads with insights into how and where AI is being used. They show tool adoption rates, output quality trends, developer feedback, and correlations with sprint performance.

    Leaders can govern AI usage across distributed teams without micromanaging.

    We integrate tools like GitHub Copilot and Cursor to:

    • Reduce time spent on public class documentation by up to 80%
    • Generate structured technical documentation 50% faster
    • Cut effort to reach up to 100% unit test coverage by 30%, where applicable
    • Identify and fix code violations 20% earlier in the development process

     

     

    Another powerful use of AI is applying LLMs to decompose requirements with EARS syntax, boosting:

    • Clarity of stakeholder/system requirements
    • Detection of underspecified or corner-case behaviour
    • Compliance with safety-related design standards (e.g. IEC 61508, ASPICE)
    • Efficiency of reviews and approval workflows

    This includes hands-on validation in client automotive projects and internal quality checks for completeness and consistency.
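    For reference, EARS (Easy Approach to Requirements Syntax) constrains each requirement to one of a handful of templates, so even a toy checker can flag requirements that match none of them. This sketch is illustrative, not the validation we run in client projects:

```python
import re

# The five core EARS patterns. A requirement that matches none of them
# is a candidate for rewriting before review.
EARS_PATTERNS = {
    "ubiquitous":   re.compile(r"^The \w+ shall ", re.I),
    "event_driven": re.compile(r"^When .+, the \w+ shall ", re.I),
    "state_driven": re.compile(r"^While .+, the \w+ shall ", re.I),
    "unwanted":     re.compile(r"^If .+, then the \w+ shall ", re.I),
    "optional":     re.compile(r"^Where .+, the \w+ shall ", re.I),
}

def classify(requirement: str):
    """Return the EARS pattern a requirement matches, or None if it is free-form."""
    for name, pattern in EARS_PATTERNS.items():
        if pattern.match(requirement):
            return name
    return None
```

    An LLM does the hard part, rewriting vague stakeholder wording into one of these shapes; the checker only confirms the output landed in a valid template.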

     

     

    At Spyrosoft, we put AI to work in customer support – thoughtfully integrated to cut response times, reduce operational costs, and uncover fresh efficiencies.

    • Automate ticketing and case classification
    • Accelerate troubleshooting and root cause analysis
    • Support faster, data-driven decision-making
    • Power smart knowledge bases, chatbots, voicebots, and AI agents

     

     

    We built and deployed domain-specific Retrieval-Augmented Generation (RAG) systems for:

    • Fast, secure access to internal documentation, specs, and code.
    • Onboarding, debugging, reverse engineering, and system design.
    • Integration with CI/CD pipelines and developer tools.

    This provides engineers with instant, contextual answers from internal sources. No external data exposure, no retraining required.
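    At its core, a RAG system retrieves the most relevant internal passages and prepends them to the model’s prompt. A dependency-free sketch below uses word-overlap scoring as a stand-in for embedding similarity; a production system would use an embedding model and a vector store, and the documents here are invented:

```python
# Minimal RAG-style retrieval: score internal documents by word overlap
# with the query, then build a prompt grounded in the best match.
DOCS = {
    "build_guide": "Run colcon build from the workspace root before launching ROS 2 nodes.",
    "coding_rules": "All public classes require Doxygen comments per the ASPICE checklist.",
}

def retrieve(query: str, k: int = 1) -> list:
    """Return the k document ids sharing the most words with the query."""
    q = set(query.lower().split())
    ranked = sorted(
        DOCS,
        key=lambda d: len(q & set(DOCS[d].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved internal context only."""
    context = "\n".join(DOCS[d] for d in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

    Because the answer is constrained to retrieved internal text, no proprietary data leaves the environment and the base model never needs retraining.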

     

     

    We’re not just implementing AI; we’re also creating it.


    Qt AI Assistant

    Co-created with Qt, this Copilot-like tool for QML/Qt is transforming how developers code.

    XGenie

    A domain-specific AI co-pilot we developed for the insurance industry to support underwriting.

     CodeLlama-13B-QML

    We open-sourced a custom fine-tuned LLM with Qt on Hugging Face.

    L.E.A.P.

    Our internal natural-language assistant that helps engineers instantly access project knowledge.

    Real-life implementations

    Have a look at some use cases of AI-assisted development in actual commercial and R&D environments.

    01

    Automotive and embedded software projects

    • Integrated tools like Cursor into commercial workflows to automate: Public class documentation (↓ 80% time), structured technical documentation (↓ 50% time), violation detection and correction (↓ 20% time), unit test coverage support up to 100% (↓ 30% effort)
    • Used in compliance-driven domains, including projects aligned with ASPICE and ISO 26262.
    • Output verified through internal metrics and code review pipelines.

     

    02

    Automotive safety systems and functional specification writing

    • We leveraged LLMs to transform vague or complex stakeholder requirements into precise software-level specs using the EARS (Easy Approach to Requirements Syntax) method.
    • Automated detection of: Ambiguities and incomplete conditions, missing corner-case behaviours, invalid transitions in system states.
    • Helped teams design safety-critical systems faster and more securely, all while meeting standards like IEC 61508.

     

    03

    Knowledge reuse and developer support

    • We built a central prompt repository to support AI across common development tasks such as bug fixing, refactoring, test generation, and clarifying specifications.
    • This lets engineers reuse polished prompts and get consistent results, which is especially important in regulated environments.
    • Plus, prompt performance is tracked, versioned, and fine-tuned over time to keep things sharp.

    04

    Internal engineering enablement and onboarding

    • Built and deployed an AI-native Q&A engine (LEAP) using Retrieval-Augmented Generation (RAG) architecture.
    • Connects engineers with: Internal documentation, specifications, regulatory requirements, code snippets
    • Used securely on-premises with fine-tuned models trained on project-specific data.
    • LEAP dramatically cuts onboarding time and makes reverse-engineering legacy systems a breeze.
    05

    Benchmarking AI impact in real projects

    • Ran controlled experiments comparing development teams using and not using AI tools (e.g. Copilot, ChatGPT) across various dimensions: Team velocity (Scrum metrics), task completion time, quality of test coverage and documentation
    • Projects included: Mobile app with BLE integration (React Native), .NET enterprise system (AllPro project)
    • Findings confirmed 5–20% time reduction in development depending on developer seniority and tool use.
    06

    Safety- and security-critical development environments

    • Built custom wrappers and integrations to: Control AI-generated code input/output, enforce traceability for audits, prevent data leakage (e.g. no external API calls in safety projects)
    • Used in embedded and defence-adjacent systems where certification, reproducibility, and explainability are required.
    07

    DevSecOps observability and AI governance

    • AI-generated code and suggestions are: Tracked across commits, evaluated against developer velocity, version-controlled and documented
    • Integrated into GitLab workflows, including cursor snapshots and Copilot usage metrics.
    08

    Regression test creation and bug detection

    • LLMs are used to: Generate unit tests based on public method signatures, summarise regression patterns from failed CI pipelines, identify risky dependencies or functions
    • Applied in internal tools and commercial projects for automotive and industrial automation.
    09

    R&D projects and early-stage feature design

    • Used AI to assist developers when: Requirements are ambiguous or only exist as rough user stories, teams are exploring new technologies (e.g. Junior devs ramping up on React Native)
    • AI helps scaffold the initial code structure, shortening prototyping loops and accelerating experimentation.

    AI support
    across the company

    From developers and testers to product managers and CTOs, we’ve seen how AI speeds things up, improves quality, and makes everyday tasks smoother for every role on the team.


    Developers

    Code generation and refactoring, Copilot prompt libraries, faster documentation, and AI-assisted onboarding via domain-specific RAG systems.


    Testers

    Unit test generation, regression test gap detection, anomaly detection, automated bug reproduction, AI-assisted test coverage analysis.


    Product managers / business analysts

    Automated requirement decomposition (EARS syntax), faster documentation workflows, knowledge discovery through RAG systems.


    Engineering leaders

    AI opportunity audits, identifying high-impact integration areas, monitoring AI usage dashboards, scaling AI tools across projects and domains, CI/CD-integrated velocity tracking, measuring AI’s ROI on delivery, workforce enablement, reducing cost-to-deliver across engineering teams.


    Support teams

    Smart ticket classification and routing, 24/7 technical assistant agents, troubleshooting automation, AI-driven chatbots and voicebots tailored to client needs.

    Contact us

    Curious how AI could support your development process?

    AI-assisted software development is not a trend. It’s the next chapter of software engineering, and it’s happening now.

    Whether you’re looking to speed things up, cut costs, or simply get more out of your team, we’re ready to share what we’ve learned and help you make it real.

    Tomasz Smolarczyk

    Director of Artificial Intelligence