Blog
Implementing AI through the right use cases
How companies can use GenAI strategically
High competitive pressure and huge investments mean that AI solutions are improving ever faster. With tools such as ChatGPT and the GPT-5 language model released in August, AI can now build entire websites to new quality standards or autonomously handle tasks such as resolving support tickets.
The last two years show how rapidly generative AI is developing. According to the Stanford AI Index Report 2025, LLM systems could solve only 4.4% of the programming tasks in the SWE-bench benchmark in 2023; in 2024 they solved 71.7%[1], an improvement by a factor of roughly 16 in a single year.
We are all increasingly noticing how generative AI is establishing itself in our everyday lives. It is changing business processes, creating measurable added value and increasingly becoming a competitive advantage. For companies, this means that those who develop a strategic AI roadmap now can build these competitive advantages and at the same time prevent "shadow AI", i.e. uncontrolled experiments without governance.
Why structured use case development is crucial
Many organizations are experimenting with LLM chatbots or coding assistants, but only a few make the leap into productive operation. A central consideration here is the data situation. Experts emphasize that questions such as "What can AI do with my data, and is my data ready for it?" come first[2]. Everything else, such as budget, training, governance or the question of whether workloads belong in the cloud, follows from this. Jim Weaver, former CIO of North Carolina, puts it succinctly:
"Start with the use cases and look at the data elements that are needed"[2]
Romelia Flores from IBM adds that many organizations are reluctant to start with generative AI before they have their data sources under control[2]. We can confirm this challenge from our own experience. We frequently encounter heterogeneous data landscapes without end-to-end data consistency. The result is a fragmented foundation, which is a difficult starting point for AI systems.
A solid data foundation and clear guidelines for data quality, security and access are therefore a prerequisite for any AI implementation.
So how do you go about utilizing the potential of AI in a targeted manner and creating real added value?
Phase 1 - Recognizing potential and creating strategic foundations
Even before thinking about finding use cases, fundamental questions need to be answered:
- Operational readiness and governance.
Involving the management level, establishing an AI review board, defining approval procedures and assigning clear responsibilities lay the foundation for the responsible use of AI.
- Clarify processes and business goals.
Which strategic goals is the company pursuing? Which processes can be improved or automated with AI? Studies by Bain show that generative AI delivers significant added value in practice: manual responses in customer service become 20-35% faster, the creation of marketing and sales content can be shortened by 30-50%, developers save around 15% of coding time, and document comparisons in the back office can be automated by 20-50%[3].
- Technological infrastructure and data storage.
IT and specialist departments jointly analyze which systems and data sources are available for AI. According to StateTech, authorities and companies must first clarify whether their data is AI-ready at all before tackling complex use cases. Infrastructure decisions such as on-premises vs. cloud, GPU capacity or API interfaces should be made at an early stage[2].
- Potential analysis.
By analyzing business goals, process pain points and trends, an initial picture emerges of where AI can deliver the greatest benefit. Compliance, data protection, costs and ethics are also part of this analysis.
The result of this phase is a strategic basis: management is informed, there is a shared vision, and there is a rough picture of the relevant data and systems. This foundation is essential, yet it is often missing in AI projects, especially in the Swiss market, where it represents one of the biggest challenges[4].
Phase 2 - Identify use cases and analyze gaps
Once the foundation has been laid, the systematic use case finding begins:
- Collect and evaluate ideas.
Brainstorming with specialist departments, IT and data science yields numerous use case ideas. Industry studies and success stories from other companies can provide inspiration. Important: the use case list does not have to be perfect; it forms the basis for prioritization. A good approach is to consider both external, revenue-oriented applications and internal, cost-saving ones.

- Clustering according to impact and feasibility.
The use cases are clustered by impact (business value), maturity and technical feasibility. Low-threshold use cases in the back office make ideal starter projects; examples include document management, IT procurement bots, chatbots for internal knowledge bases or automated meeting minutes. These use cases carry little risk and deliver visible results quickly[2]. The following illustration shows in which functions companies can use AI to cut costs and increase revenue. Notably, service operations, IT and software engineering show the greatest impact.

The following use cases already showed particularly high added value after the introduction of AI in 2024:
- Customer support teams can respond to requests 20-35% faster.
- Engineering teams can reduce the development time for programming by 15%.
- Sales and marketing teams work 30-50% faster when creating content.
- Back office tasks are 20-50% more efficient[5].
- Create a prioritization matrix.
A matrix of business value, risk, technical requirements and effort helps to identify the most important use cases. Possible criteria: savings potential, customer satisfaction, regulatory risk, data availability and time to proof of concept.
- Define success criteria.
Clear KPI targets should be defined for each prioritized use case (e.g. "processing time -20%" or "95% routing accuracy"). Only then can you evaluate later whether the prototype delivers real added value. In practice, we also see room for optimization here.
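A prioritization matrix like the one described above can be made concrete as a simple weighted scoring model. The following sketch is purely illustrative: the criteria, weights and 1-5 scores are hypothetical assumptions and would need to be calibrated in a workshop with the specialist departments.

```python
# Hypothetical sketch: scoring use cases on prioritization criteria.
# Criteria, weights and scores are illustrative assumptions, not fixed values.

CRITERIA_WEIGHTS = {
    "business_value": 0.35,
    "data_availability": 0.25,
    "time_to_poc": 0.15,      # fast proof of concept -> higher score
    "regulatory_risk": 0.25,  # lower risk -> higher score
}

use_cases = {
    "Internal knowledge chatbot": {"business_value": 3, "data_availability": 4,
                                   "time_to_poc": 4, "regulatory_risk": 4},
    "Automated invoice checking": {"business_value": 4, "data_availability": 3,
                                   "time_to_poc": 3, "regulatory_risk": 3},
    "Customer-facing voicebot":   {"business_value": 5, "data_availability": 2,
                                   "time_to_poc": 2, "regulatory_risk": 2},
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of the 1-5 scores per criterion."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

# Rank use cases by total score, best first
ranking = sorted(use_cases.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranking:
    print(f"{weighted_score(scores):.2f}  {name}")
```

Even such a simple model forces the team to make trade-offs explicit and gives management a transparent basis for deciding which prototypes to fund first.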

Anthropic provides a good guide to the most important success metrics:
| Ticket routing | Content moderation | Customer chatbot | Code generation | Data analysis |
| --- | --- | --- | --- | --- |
| Routing accuracy | False positive rate | Cost per conversation | Time spent on routine coding | Time to insight |
| Rerouting rate | False negative rate | Conversation completion rate | Errors and bugs in the code | Decision accuracy |
| Time to resolution | Accuracy per category | Average resolution time | Project duration | Ability to process larger and more diverse data sets |
| Time in queue | User churn | First-contact resolution rate | Compliance with coding standards | Customer satisfaction with data-driven products or services |
| Cost per ticket | Appeal volume | Escalations to human agents | Developer productivity | Elimination of routine activities |
| Customer satisfaction (CSAT) | Cost per review | Share of users interacting with the chatbot | Code reuse | Time saved per analysis |
| Volume handled | Community health | Customer satisfaction (CSAT) | Test pass rate | Ability to process complex requests |
Source: Anthropic-enterprise-ebook-digital.pdf
- Gap analysis of infrastructure and tools.
The comparison between the actual architecture and the requirements of the use cases shows which data is missing, which interfaces need to be expanded and whether new tools are required. This step prevents use cases from ultimately failing due to a lack of integration. It forms the basis for the IT roadmap and subsequent MVP development. We describe how to get from the strategic idea and specification to the prototype or MVP in our blog article: Design Thinking IaC Automation.

At the end of phase 2, there is a presentable use case roadmap as a basis for follow-up. On the basis of this roadmap, management can decide whether and which projects are to be turned into prototypes or later into MVPs.
But what about the risks?
Governance, data policy and risk
AI without governance harbors risks such as bias, data breaches, hallucinations and unexpected costs. In our experience, these points need to be considered from the outset. McKinsey's 2024 analysis shows that only a small group of "high performers" attribute more than 10% of their EBIT to generative AI. These companies use AI in an average of three business areas and often develop their own models or heavily customize existing ones. They have also implemented robust risk management processes and data governance[6].
Important governance elements:
- AI review board
An interdisciplinary committee with representatives from IT, specialist departments, data protection and legal decides on use cases, monitors risks and defines guidelines.
- Data governance
Strict guidelines for data classification, access control, encryption and logging are key to ensuring compliance. A strong data policy framework can mitigate a large part of the potential AI risks[2].
- Change management
AI projects affect processes and people. Training and clear communication ease acceptance, but in Switzerland these measures often fail to reach employees in practice[4]. The vision must be made tangible, time for learning must be created, and motivating formats must be chosen. In addition, managers should lead by example: according to McKinsey, CEO-level involvement in AI governance correlates strongly with the EBIT contribution of generative AI[7].
Phases 3 and 4: From the use case roadmap to the MVP
Even though this article emphasizes the early phases, it is worth taking a brief look at the next steps:
- Phase 3: Architecture design & MVP
A target architecture is outlined for the prioritized use cases and different system variants are evaluated. Data models, integration layers and security concepts are developed. Prompt engineering and evaluation accompany this step before a minimum viable product (MVP) is created. A prototype, such as a chatbot for first-level support, is iteratively tested and improved. Tools such as GUI prototyping, APIs or infrastructure as code (Terraform, Ansible) are used.
- Phase 4: Implementation & profitability
Once the MVP has proven successful, the next step is productive operation. This includes implementation planning with a roadmap, a project charter, the appropriate operating model (e.g. on-premises, cloud or hybrid), planning of organization and processes, investment and resource planning, and risk management. Continuous monitoring and structured improvement are essential, as operating AI solutions is a learning process.
Practical examples for initial AI use cases
Intelligent customer service
A chatbot or voicebot can relieve first-level support by answering routine queries and handing over more complex cases to qualified employees. Bain data shows that the use of generative AI reduces response times in customer service by 20-35%[3]. However, such bots require a good connection to CRM systems, secure data storage and well thought-out escalation management. Other options include self-service portals with chatbots or voice recognition and the use of AIOps tools such as Splunk AI for in-depth analyses.
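The escalation management mentioned above can be sketched as a simple routing decision. Everything in this example is a hypothetical placeholder: the intent labels, the confidence threshold and the assumption that an upstream LLM or NLU service supplies an intent plus a confidence value.

```python
# Illustrative sketch of escalation management for a first-level support bot.
# Intents, threshold and the upstream classifier are hypothetical assumptions;
# in practice the intent/confidence pair would come from an LLM or NLU service.

ROUTINE_INTENTS = {"password_reset", "opening_hours", "order_status"}
CONFIDENCE_THRESHOLD = 0.8

def route_request(intent: str, confidence: float) -> str:
    """Decide whether the bot answers or a human agent takes over."""
    if intent in ROUTINE_INTENTS and confidence >= CONFIDENCE_THRESHOLD:
        return "bot"
    # Complex topics or uncertain classifications are escalated
    return "human_agent"

print(route_request("password_reset", 0.95))    # routine and confident -> bot
print(route_request("contract_dispute", 0.90))  # complex topic -> human agent
```

The key design point is that the escalation rule is explicit and auditable, which makes it easy to tighten the threshold or shrink the routine-intent list if the bot underperforms in production.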
Automated document processing
Back-office functions such as comparing or classifying documents can be significantly accelerated with GenAI. In Bain cases, 20-50% of tasks could be automated[3]. Examples include intelligent contract analysis, invoice verification or the extraction of data from forms. These use cases are relatively low-risk and primarily require well-prepared documents and rights management.
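As a minimal sketch of the extraction use case above, rule-based patterns can already pull structured fields out of well-prepared documents. The field names, patterns and sample invoice below are illustrative assumptions; real pipelines typically combine OCR, layout analysis and an LLM for robustness across document variants.

```python
import re

# Illustrative rule-based field extraction from an invoice text.
# Field names, patterns and the sample document are hypothetical.

INVOICE_TEXT = """
Invoice No: 2024-0815
Date: 12.03.2024
Total: CHF 1'250.00
"""

PATTERNS = {
    "invoice_number": r"Invoice No:\s*(\S+)",
    "date": r"Date:\s*([\d.]+)",
    "total": r"Total:\s*(CHF\s*[\d'.,]+)",
}

def extract_fields(text: str) -> dict:
    """Return every field whose pattern matches the document text."""
    fields = {}
    for name, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        if match:
            fields[name] = match.group(1)
    return fields

print(extract_fields(INVOICE_TEXT))
```

Because the extraction logic is deterministic, results are easy to validate against the source document, which keeps the regulatory risk of this use case low.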
Coding assistance and knowledge management
Thanks to the rapid progress in LLMs, developers can use AI co-pilots to generate code, write tests or prepare migrations. The SWE-bench jump from 4.4% to 71.7% of tasks solved in one year[1] shows how quickly these tools learn. Within the company, such assistants can be linked to internal code repos in order to comply with specific standards. Another use case is company-wide knowledge management: chatbots or retrieval augmented generation (RAG) systems can provide employees with quick access to guidelines, manuals or project knowledge.
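The retrieval step of a RAG system as described above can be sketched in a few lines. To stay self-contained, this toy example uses word overlap instead of vector embeddings; the documents and query are invented. Production systems use an embedding model plus a vector store and then pass the retrieved passages to the LLM as context.

```python
# Minimal sketch of the retrieval step in a RAG system, using word overlap
# instead of embeddings to stay self-contained. Documents are illustrative.

DOCUMENTS = {
    "vacation_policy": "Employees are entitled to 25 days of vacation per year",
    "expense_policy": "Travel expenses must be submitted within 30 days",
    "it_security": "Passwords must be rotated every 90 days",
}

def score(query: str, doc: str) -> int:
    """Count shared lowercase words between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, top_k: int = 1) -> list:
    """Return the keys of the top_k best-matching documents."""
    ranked = sorted(DOCUMENTS, key=lambda k: score(query, DOCUMENTS[k]), reverse=True)
    return ranked[:top_k]

print(retrieve("How many vacation days do employees get per year"))
```

In a real deployment, the retrieved passages would be inserted into the LLM prompt so that answers are grounded in company guidelines rather than in the model's general training data.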
Recommendations for companies
- Start small, but strategically.
Choose use cases with clear added value that can be implemented with existing data. Low-threshold back-office processes or internal chatbots are ideal candidates[2]. Draw on results from industry studies: customer service, marketing, DevOps and back-office workflows are already benefiting measurably from GenAI[3].
- Put data and governance at the center.
Clarify data sources, quality and access rights before starting AI projects[2]. A robust data policy reduces risks and can mitigate many challenges from the outset. Integrate data protection, IT security and compliance into your roadmap.
- Create an organizational framework.
Define responsibilities, set up an AI review board and involve management. McKinsey shows that high performers with clear governance achieve significant EBIT contributions through GenAI[6].
- Invest in skills and culture.
Make your vision tangible, train employees with engaging formats and create the time for this: skills such as prompt engineering, data literacy and the right change mindset must be fostered. What is needed is a culture in which AI is understood as a helpful tool, supported by formats that fit the company and are actually adopted by employees.
- Iterate and scale.
Use phases 3 and 4 to test prototypes quickly and scale the successful ones. Monitor the ROI of your AI solutions and adjust the strategy regularly. You can read more about AI operations here: AIOps and AI automation.

Conclusion
Generative AI is developing at a breathtaking pace. Within a year, the solve rate in the SWE-bench benchmark rose from 4.4% to 71.7%[1], a testament to how quickly the technology is maturing. At the same time, practical experience shows that considerable efficiency gains are already possible today: less manual work in customer service, shorter content creation times, faster software development and automated document processes. The pioneers' recipe for success is clear: they start with a use-case-driven analysis, focus on data quality and governance, and gradually build a scalable AI infrastructure[2]. This is the only way to prevent AI from becoming a shadow project with no business benefit.
If you want to take your company to the next level, we will be happy to support you: from the strategic analysis of potential to the implementation and operation of your GenAI use cases. Contact us to tap into your AI potential together.
Sources
[1] Technical Performance | The 2025 AI Index Report | Stanford HAI
https://hai.stanford.edu/ai-index/2025-ai-index-report/technical-performance
[2] Build AI Readiness Around Use Cases and Data, Experts Say | StateTech Magazine
https://statetechmagazine.com/article/2025/03/build-ai-readiness-around-use-cases-and-data-experts-say
[3] Generative AI: The Business Value Creation ROI
https://btr.geoactivegroup.com/2024/09/generative-ai-business-value-creation.html
[4] bbv Software Services AG, Demo SCOPE AG and Lucerne University of Applied Sciences and Arts: Swiss AI Impact Report 2025
Swiss AI Impact Report 2025 | bbv
[5] Bain & Company: Technology Report 2024:
Technology Report 2024 - Technology Industry Trends | Bain & Company
[6] Generative AI Adoption Soars: McKinsey
https://www.rtinsights.com/generative-ai-adoption-soars-insights-from-mckinseys-latest-survey/
[7] The State of AI: Global survey | McKinsey
https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai