Cloud Connect

Available regions

AWS, Microsoft Azure, Google Cloud and Oracle Cloud are present in only a few datacenters in Europe. Since NL-ix has a presence in those datacenters, we can transport data from any of our 100+ datacenters (PoPs) to the Cloud Service Providers' datacenters over a private fiber-optic Ethernet line (VLL and/or VPLS) using the NL-ix high-speed bandwidth network. NL-ix provides the last mile that brings the Cloud Provider port closer to the customer, so the customer does not need to invest in or manage this part of the infrastructure (i.e. (fiber) connectivity) to their own datacenter.

Microsoft Azure ExpressRoute

NL-ix offers both Microsoft Azure ExpressRoute Standard and Metro.

  • ExpressRoute Standard: a single VLAN ID with two paths configured at a single Microsoft datacenter site, with built-in redundancy.
  • ExpressRoute Metro: leverages physically and logically separated paths across two independent Microsoft infrastructures, providing greater fault tolerance and removing single points of failure on the Microsoft side. ExpressRoute Metro is designed to keep traffic strictly within a metro area, offering lower latency and a higher Microsoft SLA than ExpressRoute Standard.

The order and provisioning process for Microsoft Azure ExpressRoute contains the following steps:

  1. The customer creates the ExpressRoute circuit with the appropriate plan in the Azure portal, which generates a service key that is shared with NL-ix.

For ExpressRoute Standard:

  • Region North Europe: select Dublin2 as peering location and NL-ix as service provider
  • Region West Europe: select Amsterdam2 as peering location and NL-ix as service provider

For ExpressRoute Metro:

  • Region West Europe: select Amsterdam Metro as peering location and NL-ix as service provider

  2. NL-ix uses the service key to 'accept' the ExpressRoute on our ports and states which VLAN is used for this connection.
  3. The VLAN information is configured on the relevant (customer) ports and shared with the customer.
  4. The customer sets up the correct BGP peering to create a working connection.
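The handshake above can be sketched as follows. The region-to-peering-location mapping is taken from this page; the function and field names are illustrative and are not an NL-ix or Azure API:

```python
# Peering location and provider to select in the Azure portal, per region
# (ExpressRoute Standard; for Metro in West Europe use "Amsterdam Metro").
PEERING_LOCATIONS = {
    "North Europe": "Dublin2",
    "West Europe": "Amsterdam2",
}
SERVICE_PROVIDER = "NL-ix"


def order_expressroute(region: str, metro: bool = False) -> dict:
    """Return the portal settings and the remaining handshake steps.

    Hypothetical helper for illustration only.
    """
    if metro:
        if region != "West Europe":
            raise ValueError("ExpressRoute Metro is offered for West Europe only")
        location = "Amsterdam Metro"
    else:
        location = PEERING_LOCATIONS[region]
    return {
        "peering_location": location,
        "service_provider": SERVICE_PROVIDER,
        # Steps after the portal generates the service key:
        "next_steps": [
            "share service key with NL-ix",
            "NL-ix accepts the circuit and assigns a VLAN",
            "VLAN configured on the customer port",
            "customer establishes BGP peering",
        ],
    }
```

For example, `order_expressroute("North Europe")` selects Dublin2, while `order_expressroute("West Europe", metro=True)` selects Amsterdam Metro.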

AWS Direct Connect

For AWS, NL-ix serves as the transport provider for the Direct Connect service. In short, the process is as follows:

  1. The customer orders a Direct Connect at AWS, stating to which AWS locations they want to connect. For each Direct Connect, the customer receives a LOA (Letter of Authorization).
  2. The LOA is sent to NL-ix, and we forward it to the relevant datacenter to connect to our infrastructure.
  3. NL-ix ensures that a VLAN is configured over the created connection, from the demarcation with AWS to the customer port.
  4. The customer sets up the correct BGP peering to create a working connection.

For AWS, NL-ix offers dedicated connections of 1 Gbps, 10 Gbps, and 100 Gbps ports.
Hosted connections are not supplied by NL-ix.

The main difference between a hosted and a dedicated connection lies in the Virtual Interfaces (VIFs). With a Direct Connect you must configure Virtual Interfaces to enable access to AWS services, and each VIF communicates with one VPC. A hosted Direct Connect supports one (1) VIF, so if you have more than one VPC in AWS, you need multiple hosted Direct Connects to reach them. A dedicated Direct Connect supports 50 VIFs (51 in case of a VPC Transit Gateway) and can therefore be more efficient.

AWS Direct Connect supports Private VIFs for communication with your private cloud environment(s) in AWS and Public VIFs for access to public services such as Amazon S3.
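A quick back-of-the-envelope check of the VIF limits quoted above (one VIF per hosted connection, 50 or 51 per dedicated connection, one VPC per VIF) shows why a dedicated connection scales better. The helper below is illustrative, not an AWS API:

```python
import math


def connections_needed(vpcs: int, dedicated: bool,
                       transit_gateway: bool = False) -> int:
    """How many Direct Connect connections are needed to reach `vpcs` VPCs.

    Hosted: 1 VIF per connection; dedicated: 50 VIFs (51 with a VPC
    Transit Gateway), each VIF reaching one VPC.
    """
    vifs_per_connection = (51 if transit_gateway else 50) if dedicated else 1
    return math.ceil(vpcs / vifs_per_connection)
```

Reaching 8 VPCs thus takes 8 hosted connections but only a single dedicated one.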

Oracle Cloud FastConnect

The order and provisioning process for Oracle Cloud FastConnect contains the following steps:

  1. The customer orders a FastConnect in the Oracle Cloud domain, which generates an OCID that is shared with NL-ix.
  2. NL-ix uses the OCID to 'accept' the FastConnect, states to which Oracle ramp-up port this FastConnect is connected, and determines the transport VLAN.
  3. The VLAN is configured on the relevant (customer) ports and the information is shared with the customer.
  4. The customer sets up the correct BGP Peering in order to create a working connection.

Google Partner Interconnect

The order and provisioning process for Google Partner Interconnect contains the following steps:

  1. The customer orders a Google Partner Interconnect (VLAN attachment) in the Google Cloud domain, which generates a 'pairing key' that is shared with NL-ix.
  2. NL-ix uses the 'pairing key' to create a working VLAN attachment and determines the transport VLANs.
  3. The VLAN is configured on the relevant (customer) ports and the information is shared with the customer (or can be found in the Google Cloud overview)
  4. The customer sets up the correct BGP peering in order to create a working connection.
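The final step is the same in every flow above: bring up BGP over the delivered VLAN. The sketch below renders a vendor-neutral configuration fragment; all interface names, addresses and ASNs are made-up placeholders, not values assigned by NL-ix or any cloud provider:

```python
def render_bgp_config(vlan: int, local_ip: str, peer_ip: str,
                      local_asn: int, peer_asn: int) -> str:
    """Render an illustrative, vendor-neutral BGP peering fragment
    for the VLAN delivered by the cloud connect service."""
    return "\n".join([
        f"interface vlan{vlan}",
        f"  ip address {local_ip}",
        f"router bgp {local_asn}",
        f"  neighbor {peer_ip} remote-as {peer_asn}",
    ])


# Example with placeholder values only:
print(render_bgp_config(1234, "169.254.0.2/30", "169.254.0.1",
                        local_asn=65010, peer_asn=64512))
```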

SAP

Access your SAP HANA instance using your Azure ExpressRoute, AWS Direct Connect or Google Partner Interconnect.


SAP on Azure
ExpressRoute connects your on-premises network to a specific Azure region's Virtual Network (VNet).

  • If your SAP HANA instance resides in the same VNet (e.g., Azure West), or is peered with the VNet associated with your ExpressRoute circuit, you can access it without additional configuration, provided the VNet is properly linked.
  • If your SAP HANA instance is in a different Azure region or VNet, you need to use features like ExpressRoute Global Reach or VNet Peering to establish connectivity.
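The two cases above can be expressed as a small decision helper; the function and VNet names are illustrative only:

```python
def sap_hana_path(hana_vnet: str, circuit_vnet: str,
                  peered: bool = False) -> str:
    """Decide how to reach SAP HANA over an ExpressRoute circuit:
    same or peered VNet -> no extra configuration; otherwise
    Global Reach or VNet Peering is needed."""
    if hana_vnet == circuit_vnet or peered:
        return "direct (no additional configuration)"
    return "requires ExpressRoute Global Reach or VNet Peering"
```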


Configure SAP HANA via ExpressRoute

  • Create the ExpressRoute Circuit in Azure (see Azure)
  • Deploy SAP HANA:
    • Browse to the Manage tab in your Azure Data Factory or Synapse workspace, select Linked Services, then click New
    • Search for SAP and select the SAP HANA connector
    • Configure the service details, and create the new linked service
    • Place the SAP HANA instance in a subnet that is part of the VNet associated with the ExpressRoute
    • Test Connectivity to SAP HANA Instance using tools like ping or traceroute to the private IP assigned to the SAP HANA instance

SAP on AWS
Having a Direct Connect to AWS does not automatically mean that an enterprise can use that same connection to access their SAP HANA instance in AWS.

  • Dedicated Direct Connect: If the enterprise has a dedicated Direct Connect to AWS, a private VIF (Virtual Interface) must be established to connect the on-premises network to the VPC where SAP HANA is deployed.
  • Hosted Direct Connect: Hosted Direct Connect generally supports only a single VIF (virtual interface) per connection. If you need to access multiple AWS VPCs or services, you would need to create additional hosted connections or switch to a dedicated Direct Connect setup that supports multiple VIFs.

Configure SAP HANA via Direct Connect

  • Set up a Direct Connect in AWS (see AWS)
  • Deploy SAP HANA:
    • Use the AWS Launch Wizard for SAP or AWS CLI to create SAP HANA instances
    • Test Connectivity to SAP HANA Instance using tools like ping or traceroute to the private IP assigned to the SAP HANA instance
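ping and traceroute only verify IP reachability; a TCP probe to the HANA SQL port additionally confirms that the database answers. The sketch below assumes the conventional 3<instance-number>15 port numbering (e.g. 30015 for instance 00); host and instance number are placeholders for your own environment:

```python
import socket


def hana_port(instance: str = "00") -> int:
    """SQL port for a HANA instance number, per the 3<nn>15 convention."""
    return int(f"3{instance}15")


def hana_reachable(host: str, instance: str = "00",
                   timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the HANA SQL port succeeds."""
    try:
        with socket.create_connection((host, hana_port(instance)),
                                      timeout=timeout):
            return True
    except OSError:
        return False
```

Run it against the private IP assigned to the SAP HANA instance once the VIF or VNet path is in place, e.g. `hana_reachable("10.0.1.10")`.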
Cloud Connect overview per Cloud Service Provider:

|                              | Amazon AWS Direct Connect | Microsoft Azure ExpressRoute | Google Cloud InterConnect | Oracle Cloud FastConnect | SAP |
|------------------------------|---------------------------|------------------------------|---------------------------|--------------------------|-----|
| CSP datacenter locations     | Dublin, Frankfurt, London, Amsterdam (virtual on-ramp) | Amsterdam, Amsterdam Metro, Dublin | Amsterdam | Amsterdam, Frankfurt | Amsterdam (Azure), Frankfurt (AWS), Frankfurt (Google) |
| Available regions            | EU Central, EU West | West Europe, North Europe | Europe-west4 | Amsterdam, Frankfurt | See AWS or Azure |
| Regions on customer request  | EU West (Paris) | UK West, UK South, Germany North, Germany West Central, France Central, France South | Europe-west1 (BE), Europe-west2 (UK), Europe-west3 (DE) | - | - |
| Fabric options               | A-A | A-A or A-B | A-A or A-B | A-A or A-B | A-A or A-B |
| Redundancy                   | Customer can order single or redundant | Mandatory redundant by Microsoft | Mandatory redundant by Google | Customer needs to order 2 single for redundancy | See AWS or Azure |
| Supported bandwidths         | 1, 10, 100 Gbps | 50, 100, 200, 500 Mbps; 1, 2, 5, 10 Gbps | 50, 100, 200, 300, 400, 500 Mbps; 1, 2, 5, 10 Gbps & 20, 50 Gbps dedicated | 1, 2, 5, 10 Gbps | See AWS or Azure |