The future of networks

The digital era has increased the importance of networks in our daily communication and interaction. In an effort to meet growing demands for security, reliability and efficiency, the SCION network protocol is emerging as a promising solution for wide-area communication (the Internet). In this blog post, we take a closer look at SCION and how it challenges existing paradigms.

Background: What is SCION?

SCION, which stands for "Scalability, Control, and Isolation On Next-Generation Networks", is an advanced network protocol that aims to overcome the shortcomings of traditional Internet architectures. Developed by researchers at the Swiss Federal Institute of Technology in Zurich (ETH Zurich), SCION offers an innovative solution to challenges such as security, scalability and efficiency.

SCION is based on a network of trusted participants and organizes existing autonomous systems (ASes) into independent routing planes, so-called isolation domains (ISDs). Each AS requires a corresponding certificate in order to be admitted into an ISD. SCION offers inherent security, as access to a communication network is always explicitly regulated and policies are enforced. SCION traffic is routed along predefined paths, giving users effective control over the route of their data. The multi-path approach keeps communication reliable even if one path fails, without compromising the path specifications.
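The interplay of path control and multi-path failover can be illustrated with a small sketch. This is not the SCION API; the `Path` class, the AS identifiers and the policy function are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Path:
    """Illustrative SCION-style end-to-end path: an ordered list of AS hops."""
    hops: list          # AS identifiers along the path (hypothetical names)
    healthy: bool = True

def select_path(paths, policy):
    """Return the first healthy path that satisfies the sender's policy."""
    for path in paths:
        if path.healthy and policy(path):
            return path
    raise RuntimeError("no policy-compliant path available")

# Example policy: only allow paths that avoid a distrusted AS ("AS-666").
policy = lambda p: "AS-666" not in p.hops

paths = [
    Path(hops=["AS-1", "AS-2", "AS-5"]),
    Path(hops=["AS-1", "AS-3", "AS-5"]),
]

primary = select_path(paths, policy)   # first healthy, compliant path
paths[0].healthy = False               # simulate a link failure
backup = select_path(paths, policy)    # failover, path policy still enforced
```

The key point the sketch captures: when the primary path fails, the sender falls back to another precomputed path without ever violating its own path policy.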


Security as the top priority

An outstanding feature of SCION is its strong focus on security. On the traditional Internet, threats such as DDoS attacks and routing manipulation are omnipresent. SCION counters these threats by strictly separating the control plane from the data plane. This isolates attacks on the control plane, which significantly strengthens the overall resilience of the network.

Trusted paths and improved scalability

SCION introduces the concept of "trusted paths", which are based on predefined routes. Unlike the traditional Internet, where packets often travel unpredictable paths through the network, trusted paths allow precise control over data traffic. This not only improves security, but also the efficiency and scalability of the network, as bottlenecks are avoided and latencies are reduced.

Decentralization of control: a paradigm shift

Another revolutionary aspect of SCION is the decentralization of network control. In the traditional Internet, control over routing and security is centralized by Internet Service Providers (ISPs) and routers. SCION, on the other hand, enables autonomous control at network level, which allows greater flexibility and adaptability. This decentralization not only promises better resistance to attacks, but also promotes innovation in network design.

Outlook for the future

With the ever-growing threat of cyberattacks and the increasing complexity of our digital world, the development of network solutions such as SCION is crucial. The combination of security, scalability and decentralization positions SCION as a promising candidate for the network architecture of the future. While the widespread implementation of SCION is still in its infancy, the first solutions such as the Secure Swiss Finance Network (SSFN) initiated by the Swiss National Bank (SNB) and SIX indicate that we could be witnessing a paradigm shift in the world of networks.

Our consultants will continue to follow the paradigm shift in the world of networks and support our customers in successfully implementing their network projects holistically.

Cloud Network Segmentation

With the possibility of integrating Platform as a Service (PaaS) services into a virtual network, the importance of a secure, scalable and efficiently operable network design also increases. In this blog post, we present variants of what cloud network segmentation can look like in the enterprise environment.

Cloud Network Segmentation Principles 

A hub-and-spoke network architecture has established itself as best practice in the cloud and is now used by most companies in the enterprise sector. Overarching network services such as firewalling, connectivity and routing are controlled centrally in a hub network. Workload networks (spokes) are connected to the hub via peerings, and all traffic between different spokes is controlled in the hub.
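The defining property of hub-and-spoke can be sketched in a few lines: spokes are peered only with the hub, so any spoke-to-spoke flow must transit the hub (and thus its central firewall). The network names are hypothetical.

```python
# Toy model: spokes are peered only with the hub, never with each other.
peerings = {
    "hub":        {"spoke-app1", "spoke-app2"},
    "spoke-app1": {"hub"},
    "spoke-app2": {"hub"},
}

def forwarding_path(src, dst):
    """Return the hop sequence for a flow: direct if peered, else via the hub."""
    if dst in peerings[src]:
        return [src, dst]
    if "hub" in peerings[src] and dst in peerings["hub"]:
        return [src, "hub", dst]
    raise ValueError("no path")

print(forwarding_path("spoke-app1", "spoke-app2"))
# ['spoke-app1', 'hub', 'spoke-app2'] - spoke-to-spoke traffic transits the hub
```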

While the path to microsegmentation is very complex on premises, this concept is already standard in the cloud. A zero-trust approach is increasingly being pursued, which enables much stricter isolation of individual applications. This is also reflected in controls from various best-practice security frameworks such as the Microsoft Cloud Security Benchmark (NS-1/NS-2), CIS Controls (3.12, 13.4, 4.4), NIST SP 800-53 (AC-4, SC-2, SC-7) and PCI-DSS (1.1, 1.2, 1.3).


For supplemental information on Zero Trust, see our dedicated blog post: 

Implementation of a Zero Trust Architecture

In the cloud, there are five basic tools that contribute to a functioning network segmentation: virtual networks, subnets, network security groups, route tables (user-defined routes) and central firewalls.

However, the question now arises as to how the aforementioned tools can be combined in the best possible way in order to achieve the goal of efficient, secure and scalable segmentation.  

For better readability, Microsoft Azure terminology is used in the following sections, but the principle applies equally to AWS and GCP. 

Variant 1 - Segmentation of virtual networks

Segmentation by means of virtual networks

This variant follows the principle that each application or service is hosted in a dedicated virtual network. The result is that a virtual shell is created around the application by default and communication within it is handled via subnets and network security groups.

The advantage of this variant lies primarily in the low complexity and the continuous isolation of applications, since the application boundaries are enforced directly by means of virtual networks. However, it should be noted that due to the limited number of peerings (currently 500 per hub in Azure), the maximum number of applications is limited accordingly. This limitation is even more significant if separate environments (dev, test, prod) are required for each application.
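The impact of the peering limit mentioned above can be made concrete with a back-of-envelope calculation. The 500-peerings-per-hub figure is the Azure limit cited in the text and is subject to change.

```python
PEERING_LIMIT_PER_HUB = 500   # Azure limit cited above (subject to change)

def max_applications(environments_per_app):
    """Each application environment consumes one spoke VNet, i.e. one hub peering."""
    return PEERING_LIMIT_PER_HUB // environments_per_app

print(max_applications(1))  # 500 applications with a single shared environment
print(max_applications(3))  # 166 applications with dedicated dev/test/prod VNets
```

With dedicated dev/test/prod networks per application, the effective ceiling drops to roughly a third of the nominal limit.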

Another factor to consider in this variant is cost. The number of VNets is not a direct cost factor, but all cross-VNet traffic is charged. One consideration may therefore be to cluster applications that frequently exchange large amounts of data in the same VNet.

Variant 2 - Segmentation by means of subnets

Segmentation by means of subnets

Instead of application-specific virtual networks, this variant uses larger, shared networks (e.g. per environment). The separation of applications within these virtual networks takes place (if at all desired) via subnets and network security groups. This means that open communication within an environment or zone is possible by default.

This variant has the advantage of being easy to use, provided that open, cross-application traffic is desired. If this is not the case, the increasing number of applications leads to a confusing landscape of subnets and network security groups. Above all, the fact that there is currently no overarching view for managing subnets or NSGs makes it difficult to manage rules for additional applications.

Variant 3 - Segmentation by means of route tables and firewall

Segmentation by means of route tables and firewall

This variant achieves a mix between the first two variants by placing applications in a large, shared network, but always routing traffic to the central firewall in the hub and controlling it there. This is made possible with multiple user-defined routes (UDRs) that override the default routes within the virtual network.

Although this variant could give the impression that it combines the advantages of the two previous variants, handling also becomes complex here as the number of applications increases. The reason is that the default routes can only be overridden by UDRs that are equally or more specific. A default route (0.0.0.0/0) therefore does not have the desired effect for internal traffic; instead, an additional route would have to be created for each subnet (max. 400 per route table).

Conclusion / Recommendation

In order to comply with the zero trust approach, we recommend variant 1 and thus strict network separation of applications with the aid of dedicated virtual networks. Microsoft has also indirectly anchored this approach in its cloud adoption framework and stipulates that applications are to be placed in dedicated subscriptions as a matter of principle (subscription democratization), which consequently also leads to dedicated virtual networks.

The reason for this recommendation lies mainly in the scalability required in the medium term. Although a shared virtual network may seem sensible at the beginning with a manageable cloud portfolio, the operating effort and complexity increase exponentially as the number of applications increases. Although an architecture change is also possible retrospectively, it means redeploying all components within the virtual network.

Basically, we recommend not to underestimate the network design in a cloud project and to deal with it already at the beginning. After all, the network, together with identity management and governance mechanisms (policies, RBAC, etc.), is one of the cornerstones on which the applications are built.

atrete IT consultants are your one-stop shop for specialized cloud solutions. At a time when the technology landscape is constantly changing, we have bundled our more than 25 years of IT infrastructure expertise in the areas of cloud networking, cloud security, cloud automation and cloud strategy. This is how we develop tailor-made solutions for your challenges.

Cloud service models - which skills are needed and to what extent?

The transformation of an application to a cloud provider not only requires new know-how, but also additional resources - relief only comes once existing infrastructures have been consistently dismantled.

What does the Cloud Journey mean for internal IT skills?

Cloud computing, or "the cloud" for short, as it is colloquially known today, is currently experiencing immense hype. Great promises are being made from all sides, expectations are being stoked, but concerns are also being expressed and caution is being urged. This is a typical situation that occurs time and again with disruptive technologies. For IT managers, this means that, together with their business managers, they have to deal with the new possibilities, risks and consequences and define their own path to the cloud.
One facet of the cloud journey which is underestimated, especially at the beginning, and often receives too little attention, is the question: What skills and what know-how does a company need if it produces its IT services mainly with cloud computing services? It is important not to be blinded by the widespread marketing promise, which suggests that cloud services can simply be consumed without any effort on your part.

In the real world, IT applications are rarely isolated systems, but almost always part of a network with interfaces to other applications and peripheral systems. From the user's point of view, the user experience is expected to be as uniform as possible. The integration of the various applications into a meaningful IT landscape is special for each company and individually adapted and optimized for the respective requirements.

The range of service models in use is multifaceted and can be roughly divided into the following generally known categories: on-premises, Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).

The trick now is to develop and operate an IT landscape that is as uniform, consistent, and cost-optimized as possible from all the available options. As already mentioned, attention must be paid to the aspect of the skills and know-how required for each service model.

Know-how, skills and service models

Depending on the service model, the required skills shift from highly technical to more service-oriented organizational know-how. In addition to the shift in skills, however, the effort required to perform the activities must also be taken into account. In the following graphic, we have compared relevant IT skills with the various service models. The evaluation is based on the required depth of knowledge, the complexity of the contexts, and the time and resources required.

Skills Matrix Cloud Service Models


The operating skills required in-house decrease linearly as the vertical integration of the service model decreases, since more and more infrastructure components are outsourced to the provider. Regular maintenance of hardware, operating systems and databases is thus gradually eliminated as one moves up the service-model stack.


A similar picture emerges with regard to engineering. However, the gradation of the required skills is significantly steeper than in operation. If new services are created or existing ones are further developed, the effort required for PaaS or IaaS solutions is significantly higher than for SaaS, since significantly more interfaces and compatibilities have to be taken into account.


In the area of architecture, internal skills can be saved primarily by using SaaS services. The conceptual effort required to integrate IaaS or PaaS services into the IT landscape is only partially less than designing new on-prem services. Here, too, issues such as compatibility and interfaces to existing services play a greater role.


Security in the IT landscape is equally relevant in all service models. However, the focus of security shifts from technical skills, such as hardening systems, to organizational efforts, such as verifying data locations. Internal efforts can therefore only be reduced by purchasing SaaS services, since all work relating to the underlying infrastructure is taken over by the provider.


In the area of identity management, the demands on internally required skills increase when services are outsourced from on-prem to an "as-a-service" model, and only decrease once all underlying infrastructure components are left to the provider. PaaS and IaaS thus combine the disadvantages of on-prem and SaaS in identity management without inheriting their advantages. An additional identity store comes into play, which must be integrated into the existing system or managed alongside it. Besides access to the application itself, authorizations must also be set for the underlying infrastructure components.

Data management

The relevant know-how for correct data storage differs greatly depending on the service model used. While a SaaS solution requires more thought in advance regarding data location and access, an on-prem approach requires more technical skills in the area of backup and availability. IaaS and PaaS services have higher requirements here, as the know-how for both topics must be readily available.

Cost management

An overview of the costs is required in all service models. However, the effort and knowledge required to create such an overview differs again between the service models. PaaS and SaaS services have particularly high requirements. Services such as an SQL database can be obtained in different functionalities, availability levels, scaling levels and sizes. In addition to this technical diversity, further distinctions can also be made in the billing models. The operating costs can thus quickly rise to an unexpected level and, depending on the configuration, only become transparently apparent at the end of the month. SaaS services with their user or device licenses are much simpler. Since on-prem is not based on the pay-as-you-go principle, it is much more complex to determine the costs of individual components. However, errors in the calculation do not result in an unexpectedly high bill.

Provider Management

Provider management involves the art of reacting quickly to changes that the provider makes to a related service or application, in order to counteract possible negative effects in a timely manner. The more control over one's own infrastructure is relinquished, the more effort must be expended here. In a worst-case scenario, for example, an application running on an IaaS VM can be migrated away much faster than the same application in a SaaS model. As vertical integration decreases, backup, disaster recovery and even availability are increasingly ensured only contractually rather than through an engineer's configurations.


Under the influence of the increasing use of cloud computing, an IT landscape is based on an ever greater variety of service models. In principle, it can be assumed that the more diverse the service models used, the greater the breadth of expertise required.

Mastering these different service models therefore does not necessarily require new skills, but often adaptations of existing know-how. Although, for example, the operating effort tends to decrease with the migration from on-premises to the cloud, a company should still deal with new topics such as IaC or DevOps.


For supplementary information on CI/CD and IaC, please see our dedicated blog post

CI/CD and IaC

However, new tasks generated by a migration, and the associated changes in the required know-how, always mean additional effort - until the existing environment is actively dismantled. The choice of service model has no influence here.

The cloud strategy must therefore carefully determine which service models are to be used in order to deduce the extent to which the skills available today will be needed in the future. Each company must therefore analyze in detail which know-how should be newly built up, which should no longer be maintained, and which should be brought in from outside if necessary.

As an independent consulting firm, atrete is constantly dealing with problems related to IT and the cloud for various customers. On the one hand, we can support companies in the context of their cloud journey in analyzing and defining the necessary skills. On the other hand, we also provide experts with specific skills for external support of internal teams.

atrete receives further reinforcement

Last month, the IT consulting company atrete received further reinforcement. Our new colleague strengthens the cloud practice area.

Marco Jenny, Consultant

Marco Jenny joined atrete on 01.09.2022 as a consultant in the cloud practice area. He has many years of professional experience in IT services in the areas of system engineering, application management and cloud. His university degree in business informatics from the FHNW, together with a certification as a requirements engineer (IREB Foundation), completes his profile.
Prior to atrete, Marco Jenny worked as an Application System Engineer in the Microsoft Cloud with a strong focus on the design and implementation of Modern Workplace. Due to the technological change in the past years, he gained insight and experience in different types of Microsoft Cloud projects.
Complementing this experience, Marco is currently in further education to become a "Microsoft Azure Solution Architect Expert".

Microsegmentation: Where are we today?

What is microsegmentation?

Microsegmentation has gained importance in the course of the virtualization of IT and network infrastructures in the data center, the growth due to general digitization, and the associated dynamics. The term microsegmentation describes security technologies and products that permit fine-grained assignment of security policies to individual servers, applications and workloads in the data center. This allows security models to be enforced deep within data center infrastructures and topologies, and not just at larger network and zone perimeters. This has become very important, as in modern digitized environments much of the traffic between applications and servers occurs within the data center rather than primarily from the outside in or vice versa.


Microsegmentation has become established to varying degrees.

In classic infrastructures, smaller network segments are formed with increased network virtualization and automation. Generally valid firewall rules, applied to entire zones, are being replaced by pin-holing with individual rules per server/application.

Server and network infrastructure has evolved from inflexible, individualized and manually maintained perimeter protection, through partially automated zones and server types and classes, to micro-segmented, highly structured, standardized and automated systems.

Server and network infrastructure

This also places different demands on the management and maintenance of security policies and firewall rule sets in particular, as many places do not have one or the other environment in pure form, but rather transitions and interfaces from old to new and software defined to classic infrastructure must be operated and ensured.

Among the available products and technologies, a distinction must be made between

Differences Products and Technologies Microsegmentation

The most common use cases can also be grouped somewhat.

Conventional server infrastructures primarily use products with network-based or OS-based microsegmentation. It is important to consider the extent to which older servers and OS versions are supported by OS-based products at all.

For classic virtualized and private cloud infrastructures, the hypervisor-based, and virtualization-integrated micro-segmentation solution is often used, and for public clouds, the cloud-native solution offered by the cloud provider.

Particularly in larger environments, it is apparent that combinations of technologies and products are frequently used and requirements are placed on cross-product management and administration of security policies and FW rules and objects. These requirements increase with the degree of virtualization, micro-segmentation and highly dynamic automated cloud and container infrastructures. It becomes a challenge to ensure dynamic and automated creation of instances and objects, as well as their deletion in end-to-end configurations in an automated manner. Today's developers are accustomed to creating and deleting entire application environments in an automated fashion, and the infrastructure must keep pace to ensure that the appropriate security policies and rules match the effective instantiations that exist.
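The reconciliation problem described above - keeping security policies in sync with instances that automation creates and deletes - can be sketched as a simple rule garbage-collection pass. The workload names and rule format are hypothetical.

```python
def reconcile(rules, live_instances):
    """Split rules into kept/stale and flag live workloads without any rule."""
    stale = [r for r in rules if r["workload"] not in live_instances]
    kept  = [r for r in rules if r["workload"] in live_instances]
    uncovered = live_instances - {r["workload"] for r in kept}
    return kept, stale, uncovered

rules = [
    {"workload": "app-frontend", "allow": "tcp/443"},
    {"workload": "app-legacy",   "allow": "tcp/8080"},   # instance was deleted
]
live = {"app-frontend", "app-backend"}                   # created by automation

kept, stale, uncovered = reconcile(rules, live)
print([r["workload"] for r in stale])   # ['app-legacy'] - rule to garbage-collect
print(sorted(uncovered))                # ['app-backend'] - workload missing a policy
```

In practice such a pass would run automatically on every provisioning event, so that the effective rule set always matches the effective instantiations.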

Our consultants will be happy to support you to make your microsegmentation project a complete success.

CI/CD and IaC

The integration, delivery and deployment of code are state of the art today, but they involve various non-technical challenges that need to be overcome.

In this blog post, we introduce Continuous Integration, Continuous Delivery and Continuous Deployment (CI/CD) and show what added value can be gained by using them. The article also explains what role CI/CD plays in the Infrastructure as Code (IaC) approach and which deployment variants exist on-prem and in the cloud.

Introduction - Concept CI/CD

In the time before CI/CD, the integration of changes to a digital product rarely went smoothly. The holistic consideration and testing of all dependencies between new and existing code proved to be difficult, and often time-consuming reworking of the code was necessary before it could finally be integrated into the productive system.

The processes, techniques and tools of the CI/CD concept create the preconditions for the continuous and immediate integration of new or changed code into the existing solution. Using CI/CD, applications can be continuously and automatically monitored throughout their entire life cycle (Software Development Life Cycle, or SDLC for short), from the integration phase, through the test phase, to the delivery and deployment of the application. The practices used in CI/CD are collectively referred to as the "CI/CD pipeline". This supports development and operations teams working according to the DevOps approach.


CI/CD in the use of IaC

The use of a CI/CD pipeline is also of central importance in the Infrastructure as Code (IaC) approach, among others. In the context of IaC, in addition to the automation logic, various properties of hardware- and software-based infrastructure components from the cloud or on-prem are stored in a repository. Such properties stored in files do not have to be 1:1 copies of current configurations. Parameterized files, for example in YAML notation, are often preferred. The big advantage is that the information in the configuration files in the repository, despite a high degree of automation, can still be read (human-readable) and understood by a person. Whether human-readable or not, in any case the CI/CD pipeline is the prerequisite for an automated, transparent and continuous validation of configuration files and the subsequent integration of changes in the infrastructure.
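A typical pipeline step on such a human-readable, parameterized configuration file is automated validation before any infrastructure change is applied. The following sketch assumes a made-up schema (required keys, allowed sizes) purely for illustration.

```python
# Minimal sketch of a CI validation job for a parameterized IaC configuration.
# The schema below (required keys, allowed sizes) is an assumed example.
ALLOWED_SIZES = {"small", "medium", "large"}
REQUIRED_KEYS = {"name", "environment", "size"}

def validate(config):
    """Return a list of findings; an empty list means the change may be merged."""
    findings = []
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        findings.append(f"missing keys: {sorted(missing)}")
    if config.get("size") not in ALLOWED_SIZES:
        findings.append(f"invalid size: {config.get('size')!r}")
    return findings

config = {"name": "web-01", "environment": "prod", "size": "xl"}
print(validate(config))   # ["invalid size: 'xl'"] - the pipeline blocks this change
```

Because the configuration stays human-readable, such findings can be understood and fixed directly in the repository before the change reaches the infrastructure.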

Continuous Integration (CI)

CI is the automated process that ensures the continuous integration of changes to an application. Continuous Integration supports developers in regularly making and publishing changes to the code. Automated testing processes such as code analysis or unit tests ensure that edited branches in the code repository can be merged without breaking the application. A branch is part of the development process and can be viewed as a fork referencing the existing, validated code. If developers need to change the existing code, they first create a new branch in which the change or enhancement is made. A merge then combines the final code changes with the existing code in the repository.

In summary, successful CI comprises the automated building, testing and merging of code changes.

Unless otherwise configured, a merge is only successful once all defined processes (jobs) within the CI/CD pipeline have completed successfully.
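This all-jobs-must-pass gate can be expressed in a few lines. The job names are placeholders; real pipelines would run linters, test suites and builds instead of stub functions.

```python
def pipeline_gate(jobs):
    """A merge is allowed only if every job in the pipeline succeeds."""
    results = {name: job() for name, job in jobs.items()}
    return all(results.values()), results

jobs = {
    "lint":       lambda: True,
    "unit_tests": lambda: True,
    "build":      lambda: False,   # a single failing job blocks the merge
}
merge_allowed, results = pipeline_gate(jobs)
print(merge_allowed)   # False
```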

Continuous Delivery (CD)

The abbreviation "CD" has two meanings: Continuous Delivery on the one hand, Continuous Deployment on the other. These similar concepts are often used synonymously. Both design the automated processes in the pipeline that follow CI. In some cases, the terms Continuous Delivery and Continuous Deployment are used to specify the degree of automation; what this means is explained in the section on Continuous Deployment below. A CI integrated into the pipeline is the prerequisite for an efficient CD process. Continuous Delivery enables automated deployment of applications to one or more environments after successful code validation by the CI. Developers typically use environments for the build, test and deploy phases. The possible processes (jobs) in the different phases are numerous.

For example, this can be

Successfully passing through all Continuous Delivery phases allows the operations team to quickly and easily deploy a validated application or infrastructure configuration to production.

Continuous Deployment (CD)

Continuous Deployment is an extension of Continuous Delivery. It enables the automated release of changes to an app or even to infrastructure configuration files, allowing seamless and fast integration of code adaptations into a productive environment, sometimes in just a few minutes. This is made possible by sophisticated and extensive automatic tests within the various phases of the CI/CD pipeline. However, fully automated deployment does not always make sense. If the automatic testing elements of the pipeline cannot cover compliance with all guidelines and thus ensure governance, deployment must be done manually - at least until the necessary preconditions have been established. These can be created through improvements such as standardization, adapting existing processes (where possible), or resolving external dependencies or taking them into account via an interface.

CI/CD and IaC deployment variants

In the cloud as well as on-prem, there are various use cases that are suitable for CI/CD. For example, the DevOps team can distribute serverless or fast, automated code updates using container technology. Depending on the target architecture and the scope of the adaptation, different distribution approaches are available. The following list contains the three best-known, which can be used in the cloud and on-prem.

CI/CD - Distribution approaches
Blue-Green deployment provides for development on parallel dedicated infrastructures. The production environment (blue) contains the latest, working version of applications or adjustments to the infrastructure. In the staging layer (green), the customized applications or the infrastructure modified by customized configuration files (IaC) are extensively tested for their functions and performance. This deployment approach enables efficient implementation of changes, but can be cost-intensive, depending on the complexity of the production environment.
Rolling deployment is used to distribute adjustments incrementally. This reduces the risk of failures and enables simple rollbacks to the previously functioning state in the event of problems. Consequently, as a prerequisite for rolling deployment, the services must be compatible with the old and new versions. Depending on the case, these are application versions or, in the case of IaC, infrastructure configuration files.
Side-by-side deployment is similar to Blue-Green deployment. In contrast to Blue-Green, the changes are not distributed across two environments, but are made available directly to a selected user group in production. Once the user group confirms the functionality and performance previously tested in the CI/CD pipeline, the updates can be deployed to all other users. This allows developers to run different versions in parallel, just as with rolling deployment, and additionally to gather real user feedback without high risk of downtime.
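The incremental update-and-rollback logic of rolling deployment can be sketched as follows. The instance names, version strings and health check are invented for illustration; real rollouts would also drain traffic and wait between steps.

```python
def rolling_deploy(instances, new_version, health_check):
    """Update instances one at a time; roll the instance back and stop on failure."""
    for inst in instances:
        previous = inst["version"]
        inst["version"] = new_version
        if not health_check(inst):
            inst["version"] = previous      # simple rollback to last good state
            return False                    # stop the rollout, rest stays on old version
    return True

fleet = [{"name": f"vm-{i}", "version": "1.0"} for i in range(3)]

# Simulated health check: vm-2 fails after the update.
ok = rolling_deploy(fleet, "1.1", health_check=lambda inst: inst["name"] != "vm-2")
print(ok)                                   # False: rollout stopped at vm-2
print([i["version"] for i in fleet])        # ['1.1', '1.1', '1.0']
```

Because only one instance changes at a time, the blast radius of a bad version is limited and the old version keeps serving traffic, which is exactly the compatibility prerequisite mentioned above.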

A recommendation of the deployment variant is situational and depends on the product to be deployed and the infrastructure.

Below we describe an exemplary excerpt of solutions for CI/CD and IaC projects:

Terraform - Project by HashiCorp, which is very flexible to use and compatible with well-known cloud providers such as AWS, Azure, GCP and OpenStack.
Ansible - Project from Red Hat; an orchestration and configuration tool that enables the automation of repetitive and complex processes using playbooks.
AWS CloudFormation - IaC service that enables managing, scaling and automating AWS resources using templates within the AWS environment.
Azure Resource Manager - IaC tool of the Azure environment, which enables, among other things, the deployment and management of Azure resources using ARM templates.
Google Cloud Deployment Manager - Infrastructure deployment service from Google that enables the creation, provisioning and configuration of GCP resources using templates and code.
Chef - Well-known IaC tool that can be used in AWS, Azure and GCP together with Terraform thanks to its flexible deployment options and its own API.
Puppet - Similar tool to Chef, commonly used for monitoring defined and provisioned IaC properties and automatically correcting deviations from the target state.
Vagrant - Another solution from HashiCorp, which enables rapid creation of development environments and is aimed at smaller environments with a small number of VMs.

In the hybrid cloud environment, the in-house CI/CD and IaC solutions of cloud platforms such as AWS, GCP and Azure are usually not sufficient. For example, Terraform and Ansible can be a suitable solution for IaC due to their high flexibility and compatibility, especially in multicloud environments.




Find out what else there is to consider regarding multicloud scenarios in our blog post about multicloud.

Implementation - opportunities and challenges

Implementing CI/CD opens up various opportunities. The most important ones are summarized in the following list as examples:

Shorter time to production and fast feedback - Automated testing and validation in the CI/CD pipeline eliminates tedious, manual and therefore time-consuming steps.
More robust releases and earlier detection of errors (bugs) - Code and functions are tested extensively, so simple errors are avoided.
High visibility (transparency) - Individual test results in the CI/CD pipeline can be inspected in detail. If defects or errors are discovered in new code, this is shown transparently.
Cost reduction - Fewer simple errors mean lower costs. In the long term, automation makes CI/CD less error-prone and thus sustainably cheaper.
Increased customer satisfaction - A consistent and reliable development process results in more reliable releases, updates and bug fixes, which increases customer satisfaction.
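The fast-feedback idea behind these points can be illustrated with a minimal pipeline runner that stops at the first failing stage. The stage names and dummy checks below are invented for illustration:

```python
from typing import Callable

def run_pipeline(stages: list[tuple[str, Callable[[], bool]]]) -> tuple[bool, list[str]]:
    """Run CI stages in order; abort on the first failure.

    Returns overall success and the log of executed stages, so a
    developer immediately sees which stage broke the build.
    """
    log = []
    for name, check in stages:
        ok = check()
        log.append(f"{name}: {'ok' if ok else 'FAILED'}")
        if not ok:
            return False, log  # fail fast: later stages never run
    return True, log

# Example pipeline with dummy checks.
ok, log = run_pipeline([
    ("lint", lambda: True),
    ("unit-tests", lambda: False),   # this stage fails ...
    ("integration", lambda: True),   # ... so this one is skipped
])
```

Failing fast is what produces the short feedback loop: the developer learns within one run exactly which stage needs attention.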

However, various challenges must also be mastered for the implementation and operation of CI/CD. One of the biggest and most important is standardization in the company's own infrastructure. It determines the degree of automation to a large extent. In general, it can be assumed that a homogeneous infrastructure enables a high and cost-efficient degree of automation. Heterogeneous environments should be standardized as far as possible if high automation is the goal. It should be noted that introducing or merely increasing standardization can be a major undertaking in itself. Other challenges that need to be mastered are:

Adaptations in the corporate culture - CI/CD is used and lived in the context of agile corporate cultures and approaches, especially DevOps. Consequently, teams must be comfortable and familiar with the iterative, agile way of working.
Expertise - Correct implementation of CI/CD requires a lot of expertise and experience, not only in the technical but also in the organizational area.
Reactive resource management - To ensure performance across all of CI/CD's automated processes even under increased demand, resource management should be monitored and responsive.
Initial development costs - The initial expenses for a development environment, the build-up of know-how, conceptual design, standardization and process adaptation can be high, but are justified by the added value gained from CI/CD.
Microservice environment - To ensure high scalability, a microservice architecture is ideally built. Those responsible for the architecture must be aware of the accompanying increase in complexity and dependencies and the requirement for their administration.

Our assessment

The setup and design of CI/CD depends on one's own development and infrastructure environment. In our estimation, there is no ready-made concept for the CI/CD pipeline. Which tests and validations are implemented in Continuous Integration, which phases are implemented in Continuous Delivery and how distribution is carried out in the context of CD must be determined and conceptualized individually, despite various best-practice approaches. Especially at the beginning, the know-how needed in the various specialist areas such as software development, quality assurance and, particularly with the IaC approach, the various infrastructure areas is often underestimated.

The IaC approach offers a visionary solution that has already proven itself in practice and enables infrastructure and its services to be designed adaptively, easily maintainable and secure, both on-prem and in the cloud. CI/CD is a core element that enables the correct and transparent mapping of configurations from a central repository to infrastructure components. As explained in this blog post, the use of CI/CD is worthwhile. Cost-effective and fast deployment of changes into production, more robust releases and the resulting increase in customer satisfaction are just a few of the many opportunities CI/CD offers. In order to take advantage of these opportunities, both the implementation and the operation of CI/CD must be managed successfully. Challenges such as a high level of standardization in the IT landscape, possible adaptation to an agile organizational form, and the procurement of expertise must be mastered.

We are happy to support you in analyzing where you stand in terms of requirements for the use of CI/CD (degree of standardization, form of organization, etc.) and in evaluating and designing possible CI/CD solutions in the cloud.

We bring years of experience in the essential disciplines to your individual Cloud Journey. With our support, you can master hot topics such as network, organization, availability, automation, CI/CD, IaC, governance/compliance, security & cost management.

Cloud network integration

No matter which cloud strategy is to be implemented, it makes sense to connect the local data centre with the cloud, since the move to the cloud is a step-by-step process. There are always services in the local data centre that need to be reached from the cloud, for example directory and identity services, or access to databases and DDI integration.

Cloud connection

However, the way in which the connection is realised depends on the required quality. Depending on the use case, the following questions must be asked and answered in advance.


Required quality


  • What quality (availability, loss, delay, jitter) of connection is required for my use case?
  • Do I need a quality guarantee from the service provider?

Connection variant

  • Can the required quality be achieved using IPSec VPN, or is a direct connection (e.g. Express Route) required?
  • Can I integrate the DC connection into an existing SD-WAN?

It should be noted that for IaaS and PaaS in the cloud, virtual networks are primarily created with private IP addresses (RFC 1918), which are then routed via the DC cloud connection. SaaS services, which include the cloud portal, are addressed with public IP addresses and are accessed via the same internet access that is also used for surfing the internet.
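Whether an address falls into the private RFC 1918 space, and therefore must travel over the DC cloud connection rather than the internet, can be checked directly with Python's standard ipaddress module; the sample addresses are illustrative:

```python
import ipaddress

# The three private IPv4 ranges defined by RFC 1918.
RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(address: str) -> bool:
    """True if the IPv4 address lies in the private RFC 1918 space."""
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in RFC1918_NETS)

# A private cloud VNet address vs. a public SaaS endpoint:
# is_rfc1918("10.20.0.5") -> True, is_rfc1918("8.8.8.8") -> False
```

Checking against the three RFC 1918 networks explicitly is deliberately narrower than `ip.is_private`, which also covers link-local and other special-purpose ranges.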

These graphics show the possibilities of connecting a public cloud to the local DC.

Connection to the public cloud via IPsec VPN

This type of connection is inexpensive and relatively quick to realise. However, it is not suitable for all quality requirements.

Direct connection

For high quality requirements, a direct connection of the cloud to the local data centre is suitable. Such a connection is made via a service provider who offers these services in the respective country. Here, too, the connection is usually encrypted.

Another variant is SD-WAN, which is usually accompanied by the connection of the users. SD-WAN represents a platform on which various IPsec VPNs with different topologies and transport networks can be realised and centrally managed. Both the IPsec VPN variant and the direct connection can be realised with SD-WAN or, in the case of the direct connection, integrated as a transport.


The following table shows which connection variants are suitable for the different use cases.


Internet IPsec VPN: This type of connection is sufficient for basic needs, unless a guarantee from the service provider is required. It is recommended to realise the connection by means of a TIER 2 internet provider.

Direct connection: Suitable for all use cases and quality requirements. Since this variant incurs higher costs and is more complex to implement, it should not be considered for use cases of a temporary nature.


If SD-WAN already exists for the connection of the users, it is advisable to use SD-WAN for the DC connection as well, since, as already mentioned, the direct connection is usually encrypted as well.

Cloud Networking

In addition to the connection to the local DC, the question arises as to what functional requirements are placed on cloud networking. This involves, on the one hand, network components such as routers, VPN gateways and load balancers, and on the other hand the question of whether a fabric technology used locally in the DC should be extended into the cloud.


Network components


  • What are my functional requirements for the cloud network components?
  • Do the cloud native network components meet these requirements or do I need to deploy virtual appliances of network components in the cloud?

DC Fabric Integration

  • Should the fabric technology used in the local DC be extended to the cloud? (e.g. Cisco ACI or VMware NSX)

The question regarding network components in the cloud can be answered as follows from a functional point of view.

Cloud-native components

Wherever possible, network components from the cloud provider, so-called cloud-native components, should be used. These are usually sufficient for the standard Layer 2 and Layer 3 functions.

Virtual appliances

If the cloud-native components do not support certain functions, virtual appliances are usually used. This is especially the case for load balancers, VPN gateways and transitions from legacy networking to SDN networking (ACI or NSX).

Whether a DC fabric should be extended into the cloud depends on which cloud strategy is being pursued. If the cloud is operated in hybrid mode for a longer period of time and the network segmentation in the cloud is to be managed with the same tools, then it makes sense to extend the local DC fabric into the cloud. However, if the cloud is used temporarily, such an extension makes little sense. In order to extend the local DC fabric into the cloud, "Cisco Cloud ACI" must be implemented and activated on the cloud side in the case of Cisco ACI, and "VMware NSX Cloud" in the case of VMware NSX. Both are available in the corresponding cloud marketplaces.

Our consultants have built up and developed the necessary know-how in countless cloud projects from conceptual design to implementation. We support our customers to successfully implement their cloud projects holistically, i.e. beyond the network level.

Multicloud - Does it really make sense?

In this blog, we will look at what multicloud means, for which organisations it really adds value and what needs to be done for a successful implementation. 

What is Multicloud?

What exactly do we, as atrete Cloud Consultants, mean by multicloud? If services are obtained from several cloud providers (e.g. Microsoft Azure, Google Cloud Platform (GCP), Amazon Web Services (AWS), or smaller providers), we speak of a multicloud scenario. The deployment model (public, private) or the service model (IaaS, PaaS, SaaS) is irrelevant here. We assume that today almost every organisation - consciously or unconsciously - is in a multicloud scenario.

We distinguish here between multicloud "service procurement" and "service provisioning". In the area of service procurement, we see that SaaS applications are usually obtained from any provider without problems. In the area of service provision, we assume that applications that generate value for the organisation or its customers could in principle be operated on different cloud platforms, but that this involves far greater hurdles and higher complexity.

If an application is effectively deployed in a multicloud setup, the goal should be that it can be moved or redirected to other providers at any time based on predefined triggers (availability, cost, location, ...) in a fully automated way. A solid cloud strategy should provide the appropriate framework for this.

In the remainder of this blog, we will go into more depth on the multicloud service provision of own services and leave service procurement aside.


Areas of application for Multicloud

Basically, the question arises as to why a service provision in a multicloud setup should be chosen at all. Due to the fact that each cloud is slightly different from the others, the choice of target platform(s) is driven by the use case to be effectively implemented. Cost aspects are no longer the only factor here. The most important distinguishing features and reasons for operating an application on a specific cloud provider lie in the platform services (serverless functions, BigData, database, machine learning, IoT or AI services).

If a specific service of a provider is integrated in the application development, e.g. a serverless database from AWS (Amazon Aurora), a counterpart (e.g. Azure SQL Managed Instance) is probably available from another provider, but cannot be accessed via the same API calls. Significant adjustments must therefore be made when porting an application to another provider. The platform services are therefore also a limiting element in the implementation of a multicloud architecture.

In order to be able to operate an application effectively in a provider-independent multicloud setup (horizontally moveable from provider to provider), it must be developed independently of the infrastructure.
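One common way to achieve this infrastructure independence is an adapter layer: the application codes against one neutral interface, and a thin adapter per provider translates to the provider-specific API. The sketch below is purely illustrative; the class and method names are invented and do not correspond to the real AWS or Azure SDKs:

```python
from abc import ABC, abstractmethod

class DatabaseBackend(ABC):
    """Neutral interface the application depends on."""
    @abstractmethod
    def query(self, sql: str) -> list: ...

class AuroraBackend(DatabaseBackend):
    def query(self, sql: str) -> list:
        # In reality this would call the AWS-specific client/API.
        return [f"aurora result for: {sql}"]

class AzureSqlBackend(DatabaseBackend):
    def query(self, sql: str) -> list:
        # In reality this would call the Azure-specific client/API.
        return [f"azure result for: {sql}"]

def report(db: DatabaseBackend) -> list:
    """Application logic: works unchanged on every provider."""
    return db.query("SELECT * FROM orders")

# Porting to another provider means swapping the adapter only:
rows = report(AuroraBackend())
```

The API differences between, say, Aurora and Azure SQL are then confined to the adapters, so porting the application no longer requires touching its business logic.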

Advantages and disadvantages

With the implementation of multicloud scenarios, there are advantages and, of course, also disadvantages. We would like to list a few of them here as examples:


Advantages:

  • Best of breed - effectively using the strengths of the providers
  • Avoid vendor lock-in - less dependence on a single provider
  • Global reach - expansion of the availability zones and simultaneous reduction of latencies
  • High availability design - building redundancies across multiple providers
  • Compliance - meeting internal & external requirements

Disadvantages:

  • Traffic costs - transfers from one provider to another cause costs (egress traffic)
  • Loss of integration advantages - restrictions in the platform services of the providers (PaaS)
  • Increasing complexity on several levels (architecture, service management, provider management, ...)
  • Skillset of staff must be available for all providers, which causes high costs
  • Reduced economies of scale (e.g. for billing models) compared to a single provider

Supposed advantages are offset by significant disadvantages. Each organisation must analyse and assess the relevant topics for itself. A possible multi-cloud scenario must be developed evolutionarily and the framework conditions must be managed to perfection. Only in this way can the envisaged added values actually be achieved. We will go into more detail in the next section.

Framework conditions

In the next sections, we will address the most important framework conditions. In principle, they also apply in a single-cloud approach, but in a multicloud scenario they must be managed with operational excellence.

"Infrastructure as Code" (IaC) and thus the fully code-based configuration of all resources is a key element of all cloud projects. This is the only way to ensure quality in recurring activities and to provision and decommission infrastructures in a fully automated way. For digitalisation, automation is therefore a key element in providing resources quickly and with high quality on the respective platforms based on customer needs.
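The core IaC idea is to declare the desired state in code and let tooling converge reality towards it. Stripped of all provider detail, this can be reduced to a reconciliation diff; the resource names below are invented for illustration:

```python
def reconcile(desired: dict[str, dict], actual: dict[str, dict]) -> dict[str, list]:
    """Compute which resources must be created, updated or deleted
    so that the actual infrastructure matches the code-defined state."""
    plan = {"create": [], "update": [], "delete": []}
    for name, spec in desired.items():
        if name not in actual:
            plan["create"].append(name)
        elif actual[name] != spec:
            plan["update"].append(name)
    for name in actual:
        if name not in desired:
            plan["delete"].append(name)
    return plan

desired = {"vm-web": {"size": "small"}, "vm-db": {"size": "large"}}
actual  = {"vm-web": {"size": "tiny"},  "vm-old": {"size": "small"}}
plan = reconcile(desired, actual)
```

Running the same reconciliation repeatedly against an unchanged desired state yields an empty plan, which is exactly the idempotency that makes code-based configuration reliable for recurring activities.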

In order to ensure the provision of services across several providers, it is necessary to operate a high-performance and secure "multicloud network". Seamless communication between all services and resources used must be enabled and controlled accordingly across all providers involved.

The resulting potential gateways for hackers or malware must be analysed and reduced as far as possible by means of appropriate security solutions.


Cloud Security Monitoring

Supplementary information on this topic can be found in our dedicated blog post.

Cloud management platforms (CMPs) can be used to monitor and control cloud environments (infrastructures & services). They provide an overview and control of orchestration, security, monitoring, costs incurred and optimisation options, so that the full potential can be exploited and the infrastructures can be operated efficiently.

In the multicloud scenario, a powerful and highly qualified team with the appropriate "know-how/skillset" across all providers and the technologies used is more essential than ever. The complexity with multiple providers increases significantly and the constantly changing services must be managed proactively and with high quality.

We see "standardisation" as the last and sometimes most important framework condition. Since in a multicloud setup an application must run not only on one provider's platform but also on all other potential platforms, all service components must be standardised and abstracted in such a way that they can be operated everywhere. In other words, specific PaaS services of individual providers cannot be used, as portability would otherwise not be ensured. One solution is certainly to build the applications container-based, so that direct dependence on underlying infrastructure services is reduced as far as possible. Solutions can also be implemented to ensure connectivity and infrastructure interoperability across cloud providers.

Here is an exemplary excerpt of providers/solutions for multicloud projects:

Kubernetes: Open source system for automating the deployment, scaling and management of containerised applications.
HashiCorp: Multicloud automation solutions for infrastructure, security, network and applications.
VMware: Multicloud virtualisation layer comparable to on-premise Software Defined Datacenter solutions.
Aviatrix: Multicloud network and network security automation solutions.



Success factors for multi-provider sourcing

In addition to all the technical aspects of multicloud, it is also extremely important to have the contractual management of all the providers involved under control.

Our assessment

As atrete Cloud Consultants, we see that in the medium term most SMEs should focus on a single cloud provider for service delivery. Operational excellence in the essential disciplines can be achieved most quickly in this way. Here, we see the most essential elements as consistently focusing on maximum availability, scaling infrastructures and full automation (infrastructure as code) of all resources. Once the "homework" has been done and there is an effective need / use case for a multicloud implementation, the appropriate framework conditions must be created.

We assume that a multicloud setup makes sense at most for a company's customer-facing core processes. In most other cases, the restrictions that result from providing services via several cloud providers outweigh the benefits. Accordingly, it makes more sense to remain within the ecosystem of one provider for all applications that do not explicitly require multicloud provisioning and to exploit the full potential of the available services (PaaS & SaaS). This makes it possible to operate the cloud infrastructures cost-efficiently.

We put our many years of experience in the essential disciplines to work for your individual cloud journey. With our support, you can master hot topics such as network, organisation, identities, availability, governance/compliance, automation, security and cost management.

atrete's team continues to grow

The IT consulting company atrete continues to grow. Two months ago, a new colleague joined the atrete team to strengthen the Cloud division.

Moritz Kuhn, Consultant

Moritz Kuhn joined atrete on 1 December 2021 as a consultant in the cloud practice area. Previously, he worked for several years as a system engineer at a cloud service provider. His main activities consisted of the development and implementation of new services and automations as well as the onboarding of new customers. In addition to his degree in computer science, Moritz Kuhn holds certifications in Microsoft Azure and ITIL. He is currently in the final semester of his Bachelor of Science FH in Business Informatics.

Cost implications of cloud strategies

With this blog post, we create transparency about how costs can develop under the different cloud strategies.

Cloud as a business enabler

By implementing a suitable cloud strategy, a company's flexibility, cost efficiency, scalability and innovative strength can be significantly increased. As a business enabler, the cloud makes it easier to keep up with a rapidly changing market environment. Different cost models make it possible to optimise the costs incurred. Thanks to flexible adjustments in real time, the scalability of services can be guaranteed even during peak times. Last but not least, the cloud, as the heart of the digital transformation, provides through its continuous and rapid development the innovative power that is the prerequisite for successful digitalisation.

Cloud strategies

Strategies such as cloud selective, cloud first and cloud only offer different approaches and procedures for the path to the cloud. Cloud selective provides for shifting individual, non-critical services to the cloud. This approach primarily serves to gain experience and is billed as needed on a pay-as-you-go basis. The on-premise infrastructure is supplemented by the cloud services but not dismantled for the time being. Consequently, operating resources increase by the cloud portion.

Cloud first strategically places the focus on the cloud. If technically and economically feasible, new and adapted services are operated in the cloud. Due to this gradual increase in the proportion of cloud services, their operation is also being intensified.

Cloud only aims to replace all on-premise systems with the cloud in a timely manner. The Lift & Shift approach enables a rapid shift to the cloud through the use of IaaS (Infrastructure as a Service), without having to make fundamental adjustments to the architecture.

Cost implications of cloud strategies

The implementation of the cloud strategies mentioned causes different cost developments in different areas. The costs vary depending on the phase or time horizon of the implementation. The following figure illustrates the differentiated cost generation and distribution per strategy on a short- and medium-term time horizon using an example list.

Cloud strategies

Within the framework of the cloud selective strategy, the cloud share is constantly increasing due to the selective shifting of individual applications. The cloud costs increase, whereby the on-premise infrastructure cannot be reduced or can only be reduced minimally. With the increasing number of productive systems in the cloud and their operation, corresponding resources must be allocated. The resulting parallel operation of cloud and on-premise results in a continuous increase in costs.

During the implementation of the cloud-first strategy, costs rise sharply in the short term due to the increasing cloud resources and their operation. However, on-premise resources cannot be reduced at the same rate. Staying too long in this phase can therefore result in a massive increase in overall costs. The introduction of various measures can counteract the rise in costs. On the one hand, the decreasing need for on-premise infrastructure must be taken into account in lifecycle projects as they arise. The reduction of hardware used offers the possibility of additional optimisation potential in the data centre (rack space). On the other hand, the further development of staff into new job profiles enables them to be used for more value-adding activities (in addition to operating the existing on-premise infrastructure).

The costs of implementing the cloud-only strategy will probably be higher than the on-premise costs in the short term, provided the migration is carried out without adjustments to the system architecture (lift & shift approach). In the medium term, optimisations such as refactoring and replacement, continuous rightsizing of the cloud resources and the application of suitable billing models (e.g. "reservations") will enable costs to be saved. At the same time, existing operating costs should be reduced or their resources used for value-adding activities.


The use of the cloud offers significant advantages in terms of innovation, new technologies, more efficient processes, higher product and service quality and scalability. Their weighting and evaluation can vary depending on the business model, overall strategy and size of a company.

Be it cloud selective, first or only: controlling and optimising costs is a continuous process that needs constant attention. The management of services and the resulting costs must be continuously checked and optimised. This is made possible, for example, by increased cost transparency through tagging, adjustments to cost models, rightsizing and scaling up/down.
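Cost transparency through tagging ultimately comes down to grouping billing line items by tag. A minimal sketch with invented sample data, assuming billing items arrive as simple dictionaries:

```python
from collections import defaultdict

def costs_by_tag(items: list[dict], tag: str) -> dict[str, float]:
    """Aggregate billing line items by the value of one tag.

    Untagged resources are collected under 'untagged' so that
    gaps in the tagging policy become visible as well.
    """
    totals = defaultdict(float)
    for item in items:
        key = item.get("tags", {}).get(tag, "untagged")
        totals[key] += item["cost"]
    return dict(totals)

billing = [
    {"cost": 120.0, "tags": {"team": "shop"}},
    {"cost": 80.5,  "tags": {"team": "crm"}},
    {"cost": 40.0,  "tags": {}},  # tagging gap
]
summary = costs_by_tag(billing, "team")
```

Surfacing the "untagged" bucket explicitly is the design choice that turns this from a report into a control instrument: a growing untagged share signals that the tagging policy is not being followed.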

In order to be able to exploit the full potential of the cloud, the opportunities and advantages that the cloud can offer your company must be constantly aligned and optimised. Moreover, the opportunities that arise from the continuous and rapid development of the cloud far exceed the potential of an on-premise landscape.

Accordingly, cost reductions should not be the primary incentive for the move to the cloud. In the short term, an increase in costs is always to be expected; in the medium term, only a cloud-only strategy can lead to a cost level comparable to today. In the pragmatic implementation of a cloud first strategy, the reduction of on-premise costs must take place as soon as possible so that the total costs do not get out of control.

The definition and implementation of your cloud journey and its cost management are a major challenge in which we will gladly accompany and support you competently.