
Executive Summary

This document contains a preliminary study of network slicing from an end-to-end perspective. It analyzes the state of the art on network slice principles and concepts, use cases and business players. It also elaborates the service management and network slice orchestration aspects, considering life-cycle management operations, service exposure, interaction with the mobile network and multi-administrative domain support. In addition, it details considerations regarding control plane enhancements, focusing on slice sessions, slice attachment and the related requirements, and overviews the current data plane concepts and deployment options for enabling network slicing.

Purpose and Scope

Purpose

The purpose of this study is to investigate the concept of network slicing with respect to the BBF MSBN architecture. Network slicing is considered a fundamental enabler that the BBF MSBN needs to support in order to move from the “one architecture fits all” paradigm to that of a logical “network per service”. Network slicing will enable value creation for vertical segments that lack physical network infrastructure by offering network and cloud resources.

 

In particular, this project has the following objectives:

  • Address business needs for network slicing
  • Identify and analyze potentially relevant slice types to be supported in the BBF MSBN
  • Study existing work on network slicing of other industry bodies incl. ONF Architecture WG, 3GPP SA2/SA5, ITU-T, IETF, ETSI ISG NFV / MEC, MEF, IEEE, etc.
  • Provide the foundations to cooperate with other industry and standards developing organizations, particularly 3GPP SA2 and 3GPP SA5, as needed.
  • Identify specification gaps that need to be addressed to support the identified network slice types on the MSBN, including any potential extensions to enable network slicing

 

This study should explore network slicing considering combined network and cloud resource allocation, customized to address specific business demands by flexibly and dynamically combining resources across multiple domains and transport/IP technologies. Various network-slicing-related operations should also be considered, including, for instance, instantiation and orchestration procedures, multi-domain support and simultaneous connectivity to multiple network slices, without this list being exhaustive.

 

This work is intended to serve as a first step of a framework for enhancing the current MSBN to be able to accommodate end-to-end network slices across multiple administrative and/or heterogeneous transport/IP technology domains. From a BBF point of view, network slicing will be a significant capability for the delivery of innovative, ultra-fast, hyper-connected and value-added services supporting the BB20/20 vision.

Scope

The scope of this project is centered on the requirements for end-to-end network slicing including:

  • Identify umbrella use cases with relevance to the BBF MSBN, including the traditional and virtual RG/BG scenarios

  • Identify relevant business entities
  • Identify relevant network slicing new entities and/or functions
  • Identify transport and routing related characteristics, e.g. flexibility, on-demand provision, QoS, etc.
  • Identify slicing-related security and privacy issues that may have to be resolved.
  • Identify network slice operation, including isolation, e.g. at Layer 2, extending VLAN tagging to the home
  • Consider how an SLA/business request shall be placed to enable a slice
  • Consider the components that support a slice request and how it should be mapped into network resources (e.g. using a slice template, via VNF/SDN means, via decomposing VNFs, i.e. Cloud CO, etc.)
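As an illustration of the last bullet above, an SLA/business request could first be matched against a catalogue of slice templates before being mapped onto network resources. The following Python sketch is purely hypothetical; the template names, fields and thresholds are invented and not taken from any BBF or 3GPP specification.

```python
# Hypothetical sketch: match an incoming SLA request against slice templates.
# All names and thresholds are invented for illustration.

SLICE_TEMPLATES = {
    "low-latency": {"max_latency_ms": 10, "isolation": "dedicated"},
    "broadband":   {"max_latency_ms": 50, "isolation": "shared"},
}

def select_template(sla_request):
    """Pick the least strict template whose latency bound satisfies the SLA."""
    candidates = [
        name for name, tpl in SLICE_TEMPLATES.items()
        if tpl["max_latency_ms"] <= sla_request["latency_ms"]
    ]
    if not candidates:
        raise ValueError("no template satisfies the requested SLA")
    # Prefer the template with the loosest latency bound that still fits,
    # leaving stricter (more expensive) templates for stricter requests.
    return max(candidates, key=lambda n: SLICE_TEMPLATES[n]["max_latency_ms"])

assert select_template({"latency_ms": 30}) == "low-latency"
assert select_template({"latency_ms": 100}) == "broadband"
```

In a real deployment the selected template would then be instantiated, e.g. via VNF/SDN means as listed above; here only the mapping step is shown.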

References and Terminology

Conventions


In this Study Document, several words are used to signify the requirements of the specification. These words are always capitalized. More information can be found in RFC 2119.


MUST

This word, or the term "REQUIRED", means that the definition is an absolute requirement of the specification.

MUST NOT

This phrase means that the definition is an absolute prohibition of the specification.

SHOULD

This word, or the term "RECOMMENDED", means that there could exist valid reasons in particular circumstances to ignore this item, but the full implications need to be understood and carefully weighed before choosing a different course.

SHOULD NOT

This phrase, or the phrase "NOT RECOMMENDED", means that there could exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications need to be understood and the case carefully weighed before implementing any behavior described with this label.

MAY

This word, or the term "OPTIONAL", means that this item is one of an allowed set of alternatives. An implementation that does not include this option MUST be prepared to inter-operate with another implementation that does include the option.

References

The following references are of relevance to this Study Document. At the time of publication, the editions indicated were valid. All references are subject to revision; users of this Study Document are therefore encouraged to investigate the possibility of applying the most recent edition of the references listed below.
A list of currently valid Broadband Forum Technical Reports is published at www.broadband-forum.org.


[1] NGMN Alliance, “NGMN 5G WHITE PAPER”, White paper, Feb. 2015.

[2] NGMN Alliance, “Description of Network Slicing Concept”, NGMN 5G P1 Requirements & Architecture, Work Stream End-to-End Architecture, Version 1.0, Jan. 2016.

[3] 3GPP TR 23.799, Study on Architecture for Next Generation System, Rel.14, Dec. 2016.

[4] 3GPP TS 23.501, System Architecture for 5G System, Stage 2, Rel. 15, Mar. 2018.

[5] 3GPP TS 23.502, Procedures for the 5G System, Stage 2, Rel. 15, Mar. 2018.

[6] 3GPP TR 28.801, Study on Management and Orchestration of Network Slicing for Next Generation Network, Rel. 15, Jan. 2018.

[7] ITU-T Y.3011, Framework of Network Virtualization for Future Networks, Next Generation Networks – Future Networks, Jan. 2012.

[8] 3GPP TR 38.801, Study on new radio access technology: Radio access architecture and interfaces, Rel.14, Mar. 2017.

[9] 3GPP TR 22.891, Study on New Services and Markets Technology Enablers, Rel.14, Sep. 2016. 

[10] 3GPP TR 22.863, Feasibility Study on New Services and Markets Technology enablers for enhanced Mobile Broadband, Rel.14.1, Sep. 2016.

[11] 3GPP TR 22.862, Feasibility Study on New Services and Markets Technology enablers for Critical Communications, Rel.14.1, Sep. 2016.

[12] 3GPP TR 22.886, Study on enhancement of 3GPP Support for 5G V2X Services, Rel.15.1, Mar. 2017.

[13] 3GPP TR 22.861, Feasibility Study on New Services and Markets Technology Enablers for Massive Internet of Things, Rel.14.1, Sep. 2016.

[14] BBF SD-407, 5G Fixed Mobile Convergence Study.

[15] BBF TR-348, Hybrid Access Broadband Network Architecture, Issue 1, Jul. 2016.

[16] ATIS, 5G Reimagined: A North American Perspective, Nov. 2015.

[17] ONF TR-521, SDN Architecture, Issue 1.1, 2016.

[18] NGMN, 5G End-to-End Architecture Framework, v2.0.0, Feb. 2018.

[19] Q. Wu, S. Litkowski, L. Tomotaki, K. Ogaki, YANG Data Model for L3VPN Service Delivery, IETF RFC 8299, Jan. 2018.

[20] B. Wen, G. Fioccola, C. Xie, L. Jalil, A YANG Data Model for L2VPN Service Delivery, IETF Internet-Draft, Apr. 2018.

[21] 3GPP TS 28.531, Management and orchestration; Provisioning, Rel. 15, Aug. 2018.

[22] 3GPP TS 28.541, Management and orchestration; 5G Network Resource Model (NRM), Stage 2 and 3, Rel. 15, Jul. 2018.


Abbreviations

This Study Document uses the following abbreviations:

3GPP	3rd Generation Partnership Project
5G	5th Generation of Mobile Communications
AMF	Access and Mobility Management Function
API	Application Programming Interface
AR/VR	Augmented Reality / Virtual Reality
ANSM	Access Network Slice Management
BNG	Broadband Network Gateway
CAPEX	Capital Expenditures
CCTV	Closed-Circuit Television
CNSM	Core Network Slice Management
CPE	Customer Premises Equipment
CSMF	Communication Service Management Function
eMBB	Enhanced Mobile Broadband
ENI	Experiential Networked Intelligence
FMC	Fixed Mobile Convergence
IDS	Intrusion Detection System
IMS	IP Multimedia Subsystem
IoT	Internet of Things
KPI	Key Performance Indicator
LINP	Logically Isolated Network Partition
MANO	Management and Orchestration
mIoT	Massive Internet of Things
MP2P	Multipoint to Point
MSBN	Multi-Service Broadband Network
MTNSI	Mobile-Transport Network Slice Interface
NAT	Network Address Translation
NFV	Network Functions Virtualization
NGMN	Next Generation Mobile Networks
NSaaS	Network Slice as a Service
NSI	Network Slice Instance
NSMF	Network Slice Management Function
OAM	Operations, Administration and Management
OPEX	Operational Expenditures
P2MP	Point to Multipoint
P2P	Point to Point
PNF	Physical Network Function
PoP	Point of Presence
QoS	Quality of Service
RAN	Radio Access Network
RG	Residential Gateway
SAS	Slice Attachment Session
SD	Study Document
SDN	Software Defined Networking
SLA	Service Level Agreement
SMARTER	New Services and Market Enablers
S-NSI	Sub-Network Slice Instance
TNSM	Transport Network Slice Management
TOSCA	Topology and Orchestration Specification for Cloud Applications
UE	User Equipment
URLLC	Ultra-Reliable Low Latency Communications
V2X	Vehicle to Everything
VLAN	Virtual Local Area Network
VM	Virtual Machine
VNF	Virtual Network Function
VPN	Virtual Private Network
WAN	Wide Area Network



Study Document Impact

Energy Efficiency

The impact of network slicing on energy efficiency is a complex topic. On the one hand, replacing multiple physical networks with a set of network slices running on a single infrastructure could decrease power consumption. On the other hand, virtualized functions can be more power-hungry than bespoke physical hardware. The net result may well depend on how well the power management of the virtual infrastructure is done, e.g. shutting down compute and storage resources when they are not being used.

Security

In order to provide scalable support, Network Slicing will likely use network virtualization technologies, including those described in this Study Document, allowing for shared use of the underlying network operator’s infrastructure.

Security considerations for network virtualization are already documented in several of the relevant Standards. See, for example:

  • clause 8 (“Principles of Bridge Operations” – which includes some description of MAC layer security mechanisms), clause 17.4 (“Security Considerations” relating to Bridge management), and clause 27.20 (“Security considerations” relating to Shortest Path Bridging) of IEEE Std 802.1Q-2014,
  • the “Security Considerations” section of RFC 4364 – “BGP/MPLS VPNs”,
  • all of RFC 4381 – “Analysis of the Security of BGP/MPLS IP Virtual Private Networks (VPNs)”
  • the “Security Considerations” section of RFC 7432 – “BGP MPLS-Based Ethernet VPN”

BBF products specifying use of one or more technologies for which adequate Security Considerations already exist should simply provide appropriate references.


Where not explicitly described in directly relevant standards, BBF products relating to Network Slicing support should include a description of security considerations, possibly expanding on related referenceable standards.

In addition, BBF products defined in support of Network Slicing should include documentation of special requirements that apply specifically to Network Slicing – such as measures needed to ensure that:

  • only authorized parties can access configuration for the underlying network infrastructure,
  • traffic associated with one slice cannot be intercepted by users of other slices,
  • slice resources are reasonably protected against encroachment by the demands of other slices, or Denial of Service (DoS) attacks,
  • any other requirements specific to Network Slicing that are not covered by general network virtualization implementation and deployment are addressed.

Privacy

In addition to the security considerations described above, BBF products defined to support Network Slicing should document the privacy considerations used in defining the specific product, especially as those considerations apply specifically to Network Slicing (e.g. avoiding direct or inferable relationships between customer Personally Identifiable Information (PII) and slice identifiers, avoiding use of related identifiers during different stages of slice activation, use and deactivation, etc.).

As a general observation about privacy, it is in the BBF product consumer’s best interest to be reminded of the need to have appropriate privacy protection policies and procedures as defined by law and as commonly used in the industry.

For Network Slicing, slice identification and potentially geographical location of slice access points may constitute PII.

In addition, BBF products relating to Network Slicing should include (or reference) privacy considerations such as those in related standards and similar activities such as:

  • IEEE 802's current work to document recommended practices for creating new standards, as well as suggesting what to consider in implementing and deploying network technologies. This work may be published as early as 2019.
  • The IETF's Informational RFC 6973, “Privacy Considerations for Internet Protocols”, published in July 2013. RFC 6973 includes some of the history relating to privacy considerations and suggests generically applicable guidance that can be used as “food for thought” in designing network protocols, independent of any specific legal (or similar) frame of reference.

Network Slicing Concepts & Principles

Network Slicing Definitions

The network slicing concept, as defined by NGMN, facilitates multiple logical self-contained networks on top of a common physical infrastructure platform, enabling a flexible stakeholder ecosystem that allows technical and business innovation by integrating physical and/or logical network and cloud resources into a programmable, open, software-oriented, multi-tenant network environment [1]. Typically, more than one device or User Equipment (UE) may connect to a single slice, while a single device or UE may connect to multiple slices at the same time.

The network slicing concept consists of three main layers: (i) the service instance layer, (ii) the Network Slice Instance (NSI) layer, and (iii) the resource layer [2]. Each service instance reflects a service, while the network slice instance represents a set of abstracted resources customized to accommodate the performance requirements of the particular service.

Network operators adopt a Network Slice Blueprint to create an NSI type providing the network characteristics required by a service instance type. An NSI may be isolated or shared across multiple service instances and can be composed of virtualized and/or non-virtualized resources and/or functions/entities.

A Network Slice Blueprint provides a complete description of the structure, configuration and operations needed to instantiate and control the network slice instance during its life cycle, and refers to physical and logical resources and/or Sub-network Blueprint(s). A Sub-network Blueprint in turn describes the structure (and contained components, i.e. a set of network functions) and configuration of the sub-network instances and the related operations.

Each NSI consists of none, one or more Sub-Network Slice Instances (S-NSIs) (e.g. from distinct administrative and/or technology domains), fully or partly isolated from other network slice instances. Common abstractions of relevant resources and open programmable interfaces allow dynamic control and automation of network slice instances reflecting dynamic service demands.


Fig.1: NGMN network slice concept [2].

 

An overview of the NGMN network slice concept is illustrated in Fig. 1. It shows different scenarios for creating and using an NSI, which may contain none, one or a number of different S-NSIs, each being isolated or shared by a network slice instance. An S-NSI can be a network function, e.g. an IP Multimedia Subsystem (IMS) or a Gateway, or a sub-set of network functions realizing a part of the NSI. An NSI in turn can be exclusively used by a service instance or shared between different service instances, typically of the same type [2].
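The containment relations described above (a service instance served by an NSI, which may contain zero or more S-NSIs, possibly shared) can be sketched as a minimal data structure. The Python classes and field names below are illustrative only and do not appear in the NGMN documents.

```python
from dataclasses import dataclass, field

@dataclass
class SubNetworkSliceInstance:
    """S-NSI: a network function (e.g. IMS) or a sub-set of network functions."""
    name: str
    shared: bool = False  # a shared S-NSI may serve several NSIs

@dataclass
class NetworkSliceInstance:
    """NSI: may contain none, one or more S-NSIs."""
    name: str
    subnets: list = field(default_factory=list)

# A shared S-NSI (here an IMS function) used by two distinct NSIs.
ims = SubNetworkSliceInstance("IMS", shared=True)
nsi_voice = NetworkSliceInstance("voice-slice", [ims])
nsi_video = NetworkSliceInstance("video-slice", [ims])
assert nsi_voice.subnets == nsi_video.subnets == [ims]
```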

For 3GPP, “Network slicing enables the operator to create networks customized to provide optimized solutions for different market scenarios which demands diverse requirements, e.g. in the areas of functionality, performance and isolation” [3]. A network slice instance is a set of network functions and resources configured so as to form a complete logical network that satisfies specified network characteristics. Physical resource isolation means that allocated resources cannot be used by another NSI, in order to avoid negative performance effects. Currently, 3GPP is working towards a detailed architecture for network slicing [4], also analyzing the related procedures [5]. In addition, 3GPP SA5, i.e. the Management Working Group, is working on network slicing orchestration and management [6].

3GPP SA5 sees network slicing as a means to transform the static “one size fits all” paradigm into a new paradigm where logical networks or partitions are created with appropriate isolation, resources and optimized topology to serve a particular service category or individual customer(s) – a logical system created on demand. 3GPP also follows the NGMN network slice architecture as documented in [3]. 3GPP further defines in [6] the notions of:

  • Network slice instance: A set of network functions and the resources for these network functions which are arranged and configured, forming a complete logical network to meet certain network characteristics.
  • Network slice subnet instance: A set of network functions and the resources for these network functions which are arranged and configured to form a logical network.
  • Physical resource isolation: physical resources allocated to one network slice cannot be used by other network slices, in order to avoid negative effects between multiple network slice instances.

ITU-T defines network slicing as Logically Isolated Network Partitions (LINPs) composed of multiple virtual resources (i.e. abstractions of physical or logical resources), which are isolated from other LINPs and equipped with a programmable control plane and data plane [7].

Editor Note: We need to add an interpretation of network slicing from the BBF perspective, adopting the 3GPP definition as much as possible.

Network Slicing Principles

Since the market has already established basic parameters for broadband networks with regard to guaranteed and peak bit rates, which are measurable by the end-user (e.g. latency via SamKnows), network slicing needs to offer a significantly enhanced service in terms of functionality and/or performance to avoid all traffic continuing to go over the top.

Once a user connects to the network for the first time, a default slice is assigned until a slice selection function selects one or more additional slices depending on both user- and network-related information. Such a default slice, also referred to as the “Vanilla” slice, provides initial basic connectivity, which is likely to continue to be used unless and until it introduces some service limitations. The Vanilla slice would typically not offer any service-specific customization with regard to performance, Virtual Network Functions (VNFs), value-added services, etc. Networks that support a wide diversity of applications with particular requirements, e.g. low latency services, may need more than the Vanilla slice.
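The default-slice behaviour described above can be illustrated with a toy slice selection function: every device starts on the Vanilla slice, and additional slices are attached only when both user and network information warrant it. The service names and selection rule below are invented for this sketch.

```python
DEFAULT_SLICE = "vanilla"  # basic connectivity, no service-specific customization

def select_slices(subscribed_services, network_supports_low_latency):
    """Toy slice selection: start from the default slice, then add
    additional slices based on user- and network-related information."""
    slices = [DEFAULT_SLICE]
    if "cloud-gaming" in subscribed_services and network_supports_low_latency:
        slices.append("low-latency")  # hypothetical additional slice
    return slices

assert select_slices(set(), network_supports_low_latency=True) == ["vanilla"]
assert select_slices({"cloud-gaming"}, True) == ["vanilla", "low-latency"]
```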

Network slicing can be realized:

  • Statically, e.g. in fixed access where the available resources are essentially constant
  • Partially dynamically, e.g. in roaming scenarios
  • Fully dynamically on demand, i.e. a slice is needed for a specified time only

Even when a slice is statically configured, the resources may be virtualized and hence can move between physical resources during the life-cycle of an NSI.

A network slice can be allocated to:

  • A single tenant, i.e. a vertical, application provider, etc., who can use it to enable a single service or multiple services.
  • A network operator, for optimizing a particular service type offered to its customers.

Network slicing builds on the following six main principles that shape the concept and related operations:

  • Automation enables on-demand configuration of network slices without the need for manual intervention. Automation relies on signaling to allow 3rd parties to request a slice with the desired capacity, latency, jitter, security, etc., together with additional scheduling information covering the starting and ending time, duration or periodicity.
  • Isolation facilitates performance guarantees and addresses security aspects within and between slices. However, isolation may reduce multiplexing gain, depending on the means of resource separation. Isolation relates not only to the data plane but also to the control plane. From the resource perspective, isolation can be deployed (i) by using a different physical resource, (ii) by separating a shared resource via virtualization means, e.g. a VM, and (iii) by sharing a resource under the guidance of a policy that defines the sharing. Isolation can be applied in the (i) data plane, (ii) control plane and (iii) controller/orchestration system.
  • Customization ensures that the resources allocated to a particular tenant are utilized to meet the particular service requirements. Slice customization can be realized (i) at the network-wide level, considering the abstracted topology and the separation of data and control plane, (ii) on the data plane, with service-tailored network functions and data forwarding mechanisms, (iii) on the control plane, introducing programmable policies, operations and protocols, (iv) through value-added services such as big data and context awareness, and (v) by providing different management and controller/orchestration systems.
  • Adaptability allows the resources associated with a particular slice to be modified in response to changing network conditions, e.g. (i) radio and network conditions, (ii) the number of served users or (iii) a varying geographical serving area, e.g. due to user mobility. Such resource adaptability can be realized by scaling up/down or relocating VNFs and value-added services, or by adjusting the applied policy and re-programming the functionality of data and control plane elements.
  • Programmability enables 3rd-party players, through open Application Programming Interfaces (APIs) that expose network capabilities, to control allocated network resources either via the infrastructure provider or directly, allowing on-demand customization and adaptability.
  • Multi-domain support allows network slicing (i) to stretch across different administrative domains, i.e. a slice that combines resources belonging to distinct infrastructure providers, and (ii) to unify various heterogeneous resources, e.g. considering Radio Access Network (RAN), core network, transport, home networks and cloud. In particular, network slicing consolidates diverse resources enabling an overlaid service layer, which provides new opportunities for fixed-mobile convergence.
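The automation principle above implies a machine-readable slice request that carries both the desired characteristics and the scheduling information. A minimal validation sketch, assuming a hypothetical request schema:

```python
from datetime import datetime, timedelta

# Hypothetical request schema: desired characteristics plus scheduling
# information (starting time and duration). Field names are invented.
REQUIRED_FIELDS = {"capacity_mbps", "latency_ms", "start", "duration"}

def validate_request(req):
    """Check a 3rd-party slice request and derive its active time window."""
    missing = REQUIRED_FIELDS - req.keys()
    if missing:
        raise ValueError(f"incomplete slice request, missing: {sorted(missing)}")
    return req["start"], req["start"] + req["duration"]

start, end = validate_request({
    "capacity_mbps": 100,
    "latency_ms": 20,
    "start": datetime(2019, 1, 1, 8, 0),
    "duration": timedelta(hours=2),
})
assert end == datetime(2019, 1, 1, 10, 0)
```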


A hierarchical architecture is a property for achieving scalability that can be leveraged to facilitate network slicing. For example, the resources of a network slice allocated to a particular tenant can be further traded, either partially or fully, to yet another third player related to the network slice tenant, facilitating in this way another network slice service on top of the prior one.

Adaptability can also alter the amount of initially allocated resources, e.g. by selecting a different physical resource and modifying a VNF, by adding a different access technology or a new VNF, by adding a data offloading VNF, or by enhancing the VPN resources. Such a process may require inter-slice arbitration, since it may influence the performance of other slices that share the same network infrastructure.
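The inter-slice arbitration mentioned above can be reduced, in the simplest case, to a headroom check on the shared infrastructure before a scale-up is admitted. The capacity model and figures below are arbitrary illustration values, not taken from any specification.

```python
TOTAL_CAPACITY = 100  # abstract units of a shared infrastructure resource

def scale_up_admissible(allocations, extra):
    """Admit a scale-up only if the shared capacity still covers every
    slice's existing allocation plus the requested increase."""
    return sum(allocations.values()) + extra <= TOTAL_CAPACITY

allocations = {"iot-slice": 30, "video-slice": 50}
assert scale_up_admissible(allocations, 15) is True   # 95 <= 100
assert scale_up_admissible(allocations, 25) is False  # 105 > 100
```

A realistic arbiter would of course also weigh per-slice SLAs and priorities rather than a single aggregate capacity.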

While there have been some concerns about the vulnerability of a single network, slicing can contribute to network survivability, i.e. coping with a potentially massive network outage caused by fire, terrorism, cyber attack, etc., via its widespread use of softwarization. This can be done by exploiting the ability to change the physical location of VNFs and of network resources allocated to network slices.

Use Cases & Business Players

Network Slicing Use Cases

Network slicing use cases aim to provide an overview of the main business scenarios and point out the corresponding key requirements and business players.

The main objective is to capture the distinct requirements of different industry sectors, whose needs are currently met by bespoke, dedicated networks, or are not captured at all. Examples of such dedicated networks and their key requirements are: (i) financial networks, which require ultra-low latency (to allow automatic trading), 100% availability and very high levels of security and privacy, and (ii) video content production networks, which require very high, guaranteed bandwidth. Note that both of these are currently fairly limited in extent (e.g. to a few city centres) and have no obvious need for WAN wireless connectivity. Networks that don’t yet exist at all may include, e.g., 5G, vehicular (i.e. Vehicle-to-Everything, V2X) and massive Internet of Things (mIoT) networks.

While slicing is not just about 5G, 5G in particular is expected to facilitate a rich business ecosystem enabling innovative services for consumers and new industry stakeholders, such as verticals. The use cases considered in this study are organized into two different types, that is, Network Slice as a Service (NSaaS), focusing on fixed networks, and the 5G-related 3GPP use cases, including Fixed Mobile Convergence and some considerations in relation to network sharing.

Network Slice as a Service

NSaaS enables an on-demand, customized fixed broadband network resource leasing business model on top of a common network infrastructure. Typically, NSaaS can be realized via signaling means that communicate specific service requirements in order to form a customized logical network, which can be modified dynamically to suit service demands. The notion of customization is understood in terms of:

  • Resources including network functions, value-added services, network resource capabilities, transport/routing protocols
  • Allocating and providing logical connectivity of virtual network functions and value-added services
  • Control plane means and OAM services

Network slice customization can be achieved by means of programmability, either indirectly through the tools offered by an infrastructure provider or directly via 3rd-party tools installed by the infrastructure provider to operate on the allocated resources. NSaaS can support a number of use cases including:

  • Verticals enable various emerging applications, which impose diverse and potentially stringent requirements on the network infrastructure. In general, verticals don't own a network infrastructure and hence rely on leasing resources. One of the key benefits of NSaaS, from the verticals' and in particular from an IoT perspective, is that it adds value by offering network and cloud resources that can be used in an isolated, disjunctive or shared manner. In this context, network slicing can be used to support the very diverse requirements, i.e. in terms of network performance, network control and management, imposed by IoT services, as well as the flexibility and scalability to support massive connections of different natures. For example, remote control in the intelligent industry vertical would require low latency and high reliability, whilst smart metering in the smart grid vertical introduces more relaxed performance requirements but needs to support massive concurrent connectivity. Even different applications of the same vertical, e.g. CCTV and sensor data for security, could impose distinct requirements. In that case, if a vertical needs to support diverse applications over the same infrastructure, it has to guarantee appropriate isolation.
  • Emergency communications, supported by regulatory bodies, consider public safety as well as communications between individuals and the authorities. They require ultra-reliable, low latency, highly secure and high throughput communications with the capability of scaling resources up/down on demand in the geographical area of interest. Emergency communications should support basic traditional voice and potentially HD video sent from individuals to the authorities, who may return instructions. In addition, emergency communications may involve advanced services related to emergency responders, including real-time augmented reality of the region, location information of emergency responders and threat(s), data about the environment (measurements, e.g. air quality, or interaction with other sensors, e.g. on roads, buildings, etc.) and responder health, e.g. for alerts, which can be collected via wearable sensors.
  • Enterprise interconnect involves linking remote locations and/or enabling enterprises to migrate certain services to the cloud. Both cases require isolation, resource control and automation for establishing dynamic connectivity. The automation can be achieved by dynamically adjusting the bandwidth allocation or by establishing new connectivity, e.g. for utilizing the optimal cloud location, considering cost, performance or other business requirements.
  • Multi-domain interconnect addresses the potential for enabling a service across different administrative domains. It raises requirements related to security, service automation and performance assurance, especially since a service may employ different types of network technologies and resources, including data-plane and control-plane mechanisms related to underlying resources that belong to different administrative domains. A unified service management needs to be applied across the different domains, assuring a consolidated network performance, e.g. via stitching or other means.
  • Mobile backhaul/fronthaul concentrates on cases where an application/service provider or vertical requires connectivity or service leasing from an operator or infrastructure provider considering indoor, outdoor and private deployments.
    • Indoor deployments involve service provision, e.g. in a stadium or shopping mall, leasing resources from an infrastructure provider responsible only for the network connectivity. Various types of services can be realized, including location services, e.g. for advertisement, security services such as CCTV, network access services, entertainment services, e.g. video or camera selection in a football game, etc.
    • Outdoor deployment scenarios can involve leasing backhaul/fronthaul resources from a network operator for supporting service-oriented requirements specified by an application/service provider or vertical at specific wireless access points. Various services can be supported, including entertainment or tourism services, e.g. AR/VR or city information services, and automotive services, e.g. providing a bird's eye view by combining information from neighboring access points.
    • Private deployments consider a network within an organization, i.e. focusing on verticals, e.g. Industry 4.0, railway/metro networks, etc., where different types of services with diverse requirements need to be supported in isolation, assuring the desired performance.
    • Mobile Backhaul/Fronthaul integration accommodates convergence of both backhaul and fronthaul services onto common physical network links, supporting different data plane and control plane protocols in isolation. This allows different RAN realizations, i.e. different base station functional splits [8], adopted by distinct services that run on top of a common network infrastructure, thereby regulating the fronthaul capacity needed to support particular applications and traffic profiles.
    • Launching new services focuses on developing and testing new services in a wide-scale, real network environment before commercial adoption. This type of slice can be used by network operators, application/service providers and verticals. The key requirement is isolation, since new services may not be stable, i.e. it is not clear how they impact existing ones. Another requirement is flexible resource customization that allows room for experimentation.

Supporting 3GPP 5G Use Cases 

The emerging 5G networks are expected to support the tremendous growth in connectivity, density and volume of data traffic enabling an end-to-end ecosystem that assures a consistent performance. 5G will enable a plethora of use cases with diverse and often conflicting performance requirements, which can be realized efficiently utilizing different logical networks or slices on top of a common network infrastructure.

3GPP Services Working Group SA1 performed a study named New Services and Market Enablers (SMARTER) [9], specifying more than 70 use cases, which were later grouped into the following three service categories identifying service and network requirements, plus an additional group focusing on network operations and migration:

  • Enhanced Mobile Broadband (eMBB) [10] mainly focuses on facilitating high data rates, considering also the uplink direction, accommodating high data traffic volumes and high UE connectivity per area, and assuring a broadband experience even in densely populated areas. eMBB aims to enable broadband connectivity across heterogeneous networks, including fixed mobile convergence, assure broadband connectivity even under high user and vehicle mobility, and support devices with highly variable data rates. Hence, eMBB requires high network capacity, low latency and high network availability.
  • Ultra-Reliable Low Latency Communications (URLLC) [11] enables services requiring ultra-high reliability and low latency, facilitating mission critical services, vehicular communications, industrial automation and control, Augmented Reality/Virtual Reality (AR/VR), the tactile Internet, public safety, disaster and emergency response, etc. URLLC requires assuring the desired isolation, prioritization, rapid communication setup, very low jitter, location precision, etc. A special case of URLLC is:
  • Vehicle to Everything (V2X) [12], which includes direct communication between vehicles and vehicle-to-RAN/core network communication, considering also communication between base stations for collecting information from the greater geographical area. V2X focuses on (i) safety-related services, including bird's eye view, situation awareness, cooperative driving, etc. and (ii) autonomous driving, including platooning (i.e. closely linked vehicles) and teleoperated support (i.e. remote control).
  • Massive Internet of Things (mIoT) [13] facilitates connectivity for stationary devices with non-time-critical service requirements, considering a very high number/density of devices. mIoT requires isolation to assure the desired security, configuration and operational simplicity that allows long battery lifetimes for the involved devices, as well as enhanced reliability. mIoT is expected to enable wearables, e-Health and sensor networks that allow smart home/city services, farming services, smart utilities, etc. by enabling a common communication and interworking framework across various devices and diverse connectivity options.

From the MSBN perspective, the 3GPP use cases can be seen as a set of link requirements (e.g. topology, QoS parameters, etc.), derived from the 3GPP management system based on the requirements of a particular network slice request. Such link requirements are communicated to the transport network in order to support connectivity between the 3GPP RAN and/or core network nodes that belong to the network slice instance, while the 3GPP management system configures the corresponding 3GPP nodes to use such links [6].

Slicing across Fixed-Mobile Converged Networks 

A Fixed-Mobile Converged (FMC) network slice is formed by combining resources from both fixed and mobile, i.e. 3GPP, networks. An FMC slice builds on top of SD-407 [14], taking advantage of the development of common network functions and interfaces, i.e. the N1, N2 and N3 interfaces. An FMC slice aims to support stationary users or devices via an RG/CPE located in home or enterprise environments. It allows the support of a data plane across different accesses, i.e. hybrid access [15], with various degrees of deterministic performance in terms of throughput, latency, resiliency, etc. It also allows a common control plane that can optimize the service provision and availability, offering a continuous service experience across fixed and mobile networks.

Editor Note: To be updated according to the 3GPP developments in the topic.

Network Slicing vs. Network Sharing

Similarly to network slicing, network sharing relies on virtualization techniques enabling logical networks on top of a common network infrastructure, but with a different business model and hence scope of operation. Network sharing is infrastructure oriented, focusing on enabling Virtual Network Operator(s) to share network resources considering a coarse-grained, i.e. aggregated, set of performance requirements, primarily with respect to capacity and secondarily latency. In contrast, network slicing is service oriented, considering resource isolation and customization with respect to the requirements of a particular service.

In other words, network sharing focuses on resource assurance among pre-determined PoPs, while network slicing concentrates on service assurance, which can additionally be independent of network PoPs. Service assurance is typically related to particular customers that may have the option of connectivity towards different networks and/or being on the move. This makes a difference in the network slice life-cycle management and control, since network sharing concentrates on resource-oriented KPIs, while network slicing concentrates on service-oriented KPIs, without necessarily imposing requirements related to the specific location of the required resources.

High Level Business Drivers for Network Slicing

The initial business question regarding network slicing is whether it is a network optimization for the operator, e.g. to avoid a multiplicity of bespoke networks, or a way of getting more revenue via new, chargeable network capabilities, or both.

Network slicing can support, on a single network, a number of different use cases with distinct performance requirements, introducing potential savings for the network operator in terms of both Capital Expenditures (CAPEX) and Operational Expenditures (OPEX). One example of optimization is the pooling of functionality, Virtual Machines (VMs), etc., to obtain some statistical multiplexing gain.

The revenue model for network slices is more complex with regard to the value chain, network monitoring and charging than network optimization; however, it is widely believed to be necessary to help justify the investment. Note that the charging model may not only pertain to providing bespoke network connectivity and performance; other potentially chargeable services include monitoring, management, big data collection and analysis.

 

For the purposes of this SD, network slicing is considered both from the perspectives of network optimization for the operator, and a way of getting more revenue via new chargeable network capabilities.

Emerging Network Slicing Business Players 

5G is expected to facilitate a rich business ecosystem enabling innovative services not only for consumers, but also for new industry stakeholders, such as verticals. Among the key business players in the network slicing arena are the following:

  • Infrastructure providers own and offer a physical network infrastructure to 3rd parties and are responsible for the related maintenance, excluding services that may run on top. Currently, network operators hold this role, but in emerging 5G networks it is foreseen that independent players, which provide hardware, connectivity and related network management and orchestration services, can also be infrastructure providers. Infrastructure providers are responsible for (i) resource/billing negotiation with 3rd parties that request a network slice and (ii) controlling the slice allocation process, carrying out all operations related to inter-slice control and potentially intra-slice control on behalf of a network slice owner, depending on the specific network slice deployment arrangements.
  • Verticals are industry segments that provide non-telecom specific services, which flourished through the service digitalization era that took place in factories, transportation, health care, etc. Verticals own no networking facilities and rely on infrastructure providers for networking and service orchestration assistance.
  • Service providers can provide free Internet connectivity or free service access to public customers that agree with the imposed business terms. Two common models are considered in [16]:
    • Customer-centric model, which trades reduced-cost or free Internet connectivity and/or Internet-based services for access to information about end-customers and related service and content usage.
    • Usage-free metering model, where customers that receive a free access service will not experience any usage metering related to a conventional carrier contract while utilizing such a service.
  • Application providers currently offer over-the-top services with best-effort performance, utilizing network operator resources at no cost. Without owning the network infrastructure, application providers have no control over the resources their applications consume, but they can control the applications themselves. Emerging high data volume applications may force application providers to purchase network resources from infrastructure providers and network operators in order to keep customers engaged with the offered services without those customers being charged per data volume usage (related to their conventional carrier contract). Applications with stringent requirements may additionally purchase assured service provision by pre-defining a set of requirements that operators need to satisfy.
  • Service broker acts as a mediator between the physical network and 3rd parties, e.g. verticals, and is responsible for resource trading, negotiation and pricing. It can be realized as a function within the operator's or infrastructure provider's network, or alternatively it can be a standalone entity belonging to an independent party.

Partnerships among the aforementioned business players can be established over networking and cloud resources, network capability exposure and provision of network/user context information, support for value-added services, softwarization and resource programmability opportunities.

 


Service Management & Network Slice Control

Overview of Service Management & Network Slice Control Architecture

Network slicing provides 3rd party players, e.g. verticals, with service-oriented logical networks on top of a common infrastructure, which may consist of heterogeneous transport technologies that may stretch across multiple administrative domains. The service-oriented nature of network slicing suggests the need for a continuous process capable of analyzing the service requirements and assuring the desired performance even when the conditions of the network change or the requirements from the customer perspective evolve with time.


Fig.2: MSBN service management and network slice management processes and operations

 


The Multi-Service Broadband Network (MSBN) service management provides service abstraction to verticals, application/service providers and 3rd parties (step 2) based on the resource abstraction (step 1) received from the network slice management, and takes care of the service negotiation (step 3) once it receives a slice request, including admission control and potentially charging. Once a slice request is accepted, the MSBN service management analyzes the slice requirements and identifies the appropriate slice template, which is used to create the desired service by combining different VNFs, Physical Network Functions (PNFs), value added services, data/control plane protocols, reliability and security mechanisms, etc. (step 4). Upon composing the desired service (step 5), the MSBN service management provides its description, using e.g. YANG models, including slice specific mapping guidelines, such as configuration parameters for links, cloud platforms, etc., to the network slice management.


The network slice management instantiates a requested slice (step 7) based on the service description and guidelines received (step 6), while it takes care of the service maintenance via performance monitoring, service analysis and slice reconfiguration procedures within the limits of the allocated resources, i.e. for intra-slice (step 9). When an intra-slice reconfiguration solution is not sufficient for maintaining the desired service, the network slice management needs to contact the MSBN Service Management, requesting a service adjustment, i.e. an inter-slice reconfiguration (step 10). A service adjustment can be performed e.g., by modifying service specific parameters, such as VNFs, value added service, data/control plane protocols, etc. and/or by allocating more resources in the topology, links or cloud. To assist the MSBN service management in the service mapping process, the network slice management provides physical resource/topology abstractions (step 1), which are also used to create service abstractions. The customer/user attachment processes, which facilitate slice discovery and selection as well as support for multi-slice connectivity, are also handled by the network slice management (step 8).    
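The intra-/inter-slice decision in steps 9 and 10 can be sketched as follows. This is an illustrative sketch only: the function and KPI names are placeholders, not part of any specified interface, and the headroom check stands in for whatever feasibility test a real network slice management would apply.

```python
def maintain_slice(measured: dict, targets: dict, headroom: dict,
                   request_service_adjustment) -> str:
    """One iteration of the run-time maintenance loop (steps 9-10)."""
    # Find KPIs that violate their targets (assuming lower-is-better
    # KPIs such as latency; a real system would handle both directions).
    violations = {k for k, v in measured.items()
                  if v > targets.get(k, float("inf"))}
    if not violations:
        return "ok"
    # Step 9: intra-slice reconfiguration, possible only while every
    # violated KPI still has headroom within the allocated resources.
    if all(headroom.get(k, 0) > 0 for k in violations):
        return "intra-slice reconfiguration"
    # Step 10: escalate to the MSBN service management for an inter-slice
    # service adjustment (more resources or modified service parameters).
    request_service_adjustment(violations)
    return "inter-slice adjustment requested"
```

The escalation callback keeps the two phases decoupled: the network slice management only signals which KPIs it could not restore, and the MSBN service management decides how to adjust the service.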

 

Data models expressed, for example, in the YANG language [RFC7950] can be used to render SDN southbound interfaces to configure and obtain the operational state of physical and virtual devices, networks and services overlaid over the network. Such interfaces may use NETCONF [RFC6241], RESTCONF [RFC8040] or other protocols that provide an API for accessing data defined in YANG, using the datastore concept. Data models can provide the necessary level of abstraction at the device, network and service levels, ultimately enabling efficient orchestration and automation of the NSI to align with the needs of different applications via NSI programmability, as presented in Fig. 2.
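As a concrete illustration of rendering configuration through such interfaces, the sketch below builds a NETCONF-style `<config>` payload for a YANG-modelled slice link using only the standard library. The module namespace and leaf names are invented placeholders, not a published YANG module; a real controller would use an agreed model and a NETCONF [RFC6241] or RESTCONF [RFC8040] client to deliver the payload.

```python
import xml.etree.ElementTree as ET

NS = "urn:example:slice-link"   # hypothetical YANG module namespace

def build_edit_config(link_id: str, bandwidth_mbps: int, max_latency_ms: int) -> str:
    """Render a slice-link configuration as an XML <config> subtree."""
    config = ET.Element("config")
    link = ET.SubElement(config, f"{{{NS}}}slice-link")
    for leaf, value in (("id", link_id),
                        ("bandwidth-mbps", bandwidth_mbps),
                        ("max-latency-ms", max_latency_ms)):
        ET.SubElement(link, f"{{{NS}}}{leaf}").text = str(value)
    return ET.tostring(config, encoding="unicode")
```

The same data could equally be serialized as JSON for RESTCONF; the point is that the YANG model, not the transport, defines the abstraction.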

Topology and Orchestration Specification for Cloud Applications (TOSCA) is another data modeling standard that can be used to orchestrate Network Function Virtualization (NFV) services and applications. TOSCA models, expressed as service and orchestration templates, can be integrated with other NFV orchestration technology and tools including those that are compatible with the ETSI management and orchestration (MANO) standards for NFV as well as YANG data models.


Service Exposure, Slice Attributes & Network Slice Request    

3rd parties, verticals, application providers, etc. requiring a network slice need to acquire service abstraction and communicate their slice requests to network operators or infrastructure providers via signaling means through specific APIs. Currently, several APIs exist that can potentially be adopted, with relatively minor extensions, to acquire service abstraction and request a network slice, e.g. service management models such as YANG, RESTful APIs, SDN northbound APIs, etc.

Incoming network slice requests should reflect the corresponding service requirements including:

  • Slice performance requirements: capacity, latency, jitter, etc.
  • Slice automation and scheduling requirements: slice initiation time, slice duration, etc.
  • Service type: eMBB, URLLC, mIoT, FMC, enterprise interconnect, etc.
  • Customization: refers to fine tuning parameters with respect to the slice templates, including value-added services, virtual functions, data/control plane, security, etc.
  • Resource isolation: reflects the degree of isolation in the control/data plane and identifies the network functions that could be shared with other network slices
  • Service access parameters: location (e.g. enterprise), group of users     
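A slice request carrying the fields above could be modelled, for example, as the following structure; the field names and encoding are illustrative, not a standardized data model.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class SliceRequest:
    service_type: str                                  # e.g. "eMBB", "URLLC", "mIoT", "FMC"
    performance: dict = field(default_factory=dict)    # capacity, latency, jitter, ...
    scheduling: dict = field(default_factory=dict)     # initiation time, duration, ...
    customization: dict = field(default_factory=dict)  # VNFs, value-added services, ...
    isolation: str = "shared"                          # degree of control/data plane isolation
    access: dict = field(default_factory=dict)         # location, group of users

# Example: a dedicated low-latency slice requested for 24 hours.
req = SliceRequest(service_type="URLLC",
                   performance={"latency_ms": 5, "capacity_mbps": 100},
                   scheduling={"duration_h": 24},
                   isolation="dedicated")
```

`asdict(req)` yields a plain dictionary that could be serialized to JSON for a RESTful API or mapped onto a YANG-modelled payload.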


Network slices are often discussed in terms of their performance, scheduling and functionality, but in fact there is a fourth attribute, connectivity, which includes topology, coverage and scale. Considering each of these in turn:

  • Performance: There are several obvious performance parameters, such as guaranteed bandwidth. Other performance attributes, which need to be specified on a per-user, per-interface, per-handover-point and per-slice-aggregate basis, may include (i) peak bandwidth (at least per user), (ii) latency, (iii) jitter, (iv) packet loss, (v) packet error rate and (vi) bit error rate. Perhaps less obvious, but in many cases equally important, are: (i) availability, since a single x 9s figure is unlikely to be sufficient; depending on the vertical, the maximum duration and frequency of any loss of connectivity may be more important, and (ii) failover time, which is related to availability but covers the situation where a major node or link failure requires switchover to an alternative path. All these performance attributes need to have defined parameters and units; in some cases these will be directional, i.e. need to be given separately for upstream and downstream. The granularity may also need to be specified. Whether or not the actual values should, or could, be specified is an active debate related to Slice Types (see Sect x.y).
  • Scheduling: There are a number of ways in which slices can be instantiated. One involves specifying in advance the start and duration of the slice and would involve parameters such as: (i) slice initiation time, (ii) slice duration, (iii) slice termination time, (iv) slice periodicity and (v) event-based slice initiation/termination
  • Functionality: There are different attributes related to functionality including basic functionality, virtual network functions and value added services.
    • Basic: The functionality of a given slice can be fairly open-ended, but there are a few basic functions: (i) security, (ii) authentication, (iii) encryption and (iv) policy
    • Virtual network functions: Consider particular functions customized for a slice. Some examples of VNFs are: (i) BNG and (ii) RG.
    • Value added services: Consider particular value-added functions allocated to a slice. Some examples of value added services are: (i) firewalling, (ii) Intrusion Detection System (IDS), (iii) Network Address Translation (NAT) and (iv) parental control
  • Connectivity: Connectivity covers both topology and coverage.
    • Topology: There are a number of different possible topologies for a slice: (i) point to point (P2P), (ii) point to multipoint (P2MP), e.g. to support multicast, (iii) multipoint to point (MP2P), e.g. a sensor network, and (iv) mesh
    • Coverage: Coverage needs to be specified in different ways, again depending on the vertical: (i) geographical including land area, % of population, % of buildings, by association with other infrastructure e.g. road, railways, powerlines, and network type, e.g. mobile, fixed access, etc., (ii) scale e.g. number of end points, users, sessions, value added service instances, (iii) forwarding layer (L1, L2, L3), (iv) session support and (v) mobility support including roaming, nomadicity, handover and seamless handover

Network slices may be deployed and operated by the infrastructure provider on behalf of the slice tenant, or alternatively the slice tenant can directly program and operate the resources allocated to its slice. This depends on the business relationship between the infrastructure provider and the slice tenant, which in turn depends on:

 (i)   the degree of control that the infrastructure provider offers to the slice tenant

 (ii)  the capabilities of the tenant, e.g. some verticals may not hold resource management functions or entities capable to control and/or program the allocated slice resources


Hence, based on the service type, tenant capabilities and business model related to a network slice, the infrastructure provider may expose different levels of control to the corresponding slice tenant, including:

  • No-control restricts slice tenants from interacting with the allocated slice resources. Incoming slice requests are analyzed by the infrastructure provider, which designs and controls the network slice providing only monitoring information to the slice tenant regarding particular KPIs.   
  • Capabilities selection allows the slice tenant to select particular functions, control/data plane protocols, etc. from a catalogue offered by the infrastructure provider.  In addition, it allows monitoring of particular KPIs and may allow a slice tenant to perform a limited form of slice design and parameter configuration using a multi-choice repository offered by the infrastructure provider.   
  • Programmability allows slice tenants to create and operate the allocated slice, facilitating resource programmability using their own software functions and resource configuration parameters. Programmability of the infrastructure or platform of an allocated slice can be supported either by allowing a slice tenant to (i) integrate the allocated slice resources into its own network and then use its own network slice management or (ii) instantiate a network slice management as a VNF within the allocated network slice.
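The three exposure levels can be viewed as a per-tenant capability check that the infrastructure provider enforces; the sketch below uses illustrative operation names, not a standardized taxonomy.

```python
from enum import Enum

class ControlLevel(Enum):
    NO_CONTROL = 1              # tenant only receives KPI monitoring
    CAPABILITIES_SELECTION = 2  # tenant picks from the provider's catalogue
    PROGRAMMABILITY = 3         # tenant runs its own functions and management

# Operations permitted at each level; each level extends the one below it.
ALLOWED = {
    ControlLevel.NO_CONTROL: {"monitor"},
    ControlLevel.CAPABILITIES_SELECTION:
        {"monitor", "select_function", "configure_parameter"},
    ControlLevel.PROGRAMMABILITY:
        {"monitor", "select_function", "configure_parameter",
         "deploy_function", "reprogram"},
}

def is_allowed(level: ControlLevel, operation: str) -> bool:
    """Check whether a tenant at the given level may perform an operation."""
    return operation in ALLOWED[level]
```

Such a check would sit in the service exposure layer, rejecting tenant requests that exceed the agreed control level.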

Service & Network Slice Management: Functions & Interfaces

Network slicing may concentrate on the fixed network side, where all related processes are handled by e.g. the BBF MSBN; it may be initiated by the 3GPP mobile network, which relies on e.g. the BBF MSBN for the underlying transport network; or both, in the case of FMC network slicing. The network slicing process consists of two phases, service management and network slice management, as illustrated in Fig.3, which contains the following main functional entities:

  • Communications Service Management Function (CSMF) is responsible for receiving a slice request from 3rd parties and verticals and translating the communication service related requirements to network slice related requirements
  • Network Slice Management Function (NSMF) takes care of the slice life-cycle management of the NSI concentrating on the RAN and core networks. The related phases include slice preparation, instantiation, configuration and activation, run-time and decommission. In addition, it interacts with the transport network relaying the slice requirements from the mobile network side.      
  • Access Network Slice Management (ANSM) takes care of the slice life-cycle management of the access network slice sub-instance and corresponds with the NSMF.
  • Core Network Slice Management (CNSM) takes care of the slice life-cycle management of the core network slice sub-instance and corresponds with the NSMF.
  • Transport Network Slice Management (TNSM) takes care of the slice life-cycle management of the transport network Sub-Network Slice Instance (S-NSI). It provides the capability exposure of the transport network to the 3GPP mobile network, i.e. towards the network slice management function, and also provides the mapping of the 3GPP mobile network requirements onto the corresponding transport network.

 

 

Fig.3: Overview of the service management and network slice control architecture

 

In addition, the network slice process relies on an interface between the 3GPP NSMF and TNSM, besides the interfaces already described in [6]. The Mobile-Transport Network Slice Interface (MTNSI) deals with the transport network capability exposure towards the NSMF of the 3GPP mobile network, as well as with slice requests received from the NSMF, which indicate required parameters such as latency, delay variation, loss ratio, maximum bit rate, minimum bit rate, etc. (without being an exhaustive list). It also carries out the network slice mapping procedures from the 3GPP mobile network towards the transport network, which is shown in Fig.4 and addressed in the following steps:

  1. The network topology is collected by the TNSM, generally periodically.
  2. The resource pool is constructed, containing the currently available network resources, and is generally updated periodically as the network changes dynamically. The resource pool information is maintained in the TNSM.
  3. The network capabilities can be exposed to the NSMF by the TNSM, provided that the NSMF is trustworthy to the TNSM.
  4. NSMF can prepare an end-to-end network slice environment based on slice requirements and slice templates
  5. NSMF sends a slice request to the TNSM, indicating the required performance parameters such as bandwidth, latency and loss rate as well as isolation requirements.
  6. The TNSM checks the requirements received against the available network resources and the current network topology to determine whether the slice request can be satisfied.
  7. If the slice request can be satisfied, then the involved network elements are configured, the connections between the end points are set up, and the network slice is established.
  8. Once the network slice establishment is finished, or if the slice request cannot be satisfied (Step 6), a Request ACK is sent to the NSMF.

 

Fig.4: The procedure of network slice request and establishment.

Once a network slice is established, the slice owner or tenant, i.e. a vertical, 3rd party or service provider, is able to start using it. During the lifetime of a slice the TNSM monitors it and applies any potential modifications for maintaining the indicated service performance.
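The admission decision in steps 5 to 8 can be sketched as follows, with a simplified resource pool holding a single bandwidth budget and a fixed path latency. All names and the feasibility rule are illustrative; a real TNSM would run path computation over the collected topology.

```python
def handle_slice_request(resource_pool: dict, request: dict) -> dict:
    """TNSM-side handling of an NSMF slice request; returns the Request ACK (step 8)."""
    # Step 6: check the requirements against the available resources.
    feasible = (request["bandwidth_mbps"] <= resource_pool["bandwidth_mbps"]
                and request["latency_ms"] >= resource_pool["path_latency_ms"])
    if not feasible:
        return {"status": "rejected"}
    # Step 7: configure network elements and set up end-point connections
    # (stubbed here as a simple resource-pool update).
    resource_pool["bandwidth_mbps"] -= request["bandwidth_mbps"]
    return {"status": "established", "slice_id": request["slice_id"]}
```

Note that the resource pool is debited on establishment, so a later request that no longer fits is rejected, mirroring the admission-control role of the TNSM.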

The proposed MTNSI can potentially be supported by using L3SM [19] or L2SM [20], which aim to offer a common understanding of how the corresponding IP VPN service can be deployed over the shared infrastructure. In particular, L3SM and L2SM can be used for delivering the requirements (e.g. bandwidth, latency and loss rate) and exposing capabilities between the mobile network and the transport network, allowing flexible management of the network slices.

Slice Life-cycle Management

The network slice life-cycle management performs legacy management processes and policy provision. Hence, it makes sense to decouple it from the corresponding overlying service instance, allowing NSIs to be provisioned at scale independently of the corresponding service instance. A typical network slice life-cycle management includes the following phases, as identified by 3GPP [6] and administered by the network operator:

  • Slice preparation concentrates on the creation and verification of the network slice template and the preparation of the necessary network environment used to support the lifecycle of the network slice instance.
  • Instantiation, configuration and activation is divided broadly into (i) instantiation/configuration that carries out the resource allocation considering dedicated and shared resources, i.e. preparing the network slice instance for operation, and (ii) activation that includes processes for the network slice instance to become active handling network traffic and user context.
  • Run-time assumes that the network slice instance is ready to handle traffic related to various communication services. It focuses on data plane operations, guidance and modifications based on reporting and KPI monitoring, maintaining a closed-loop control process that takes care of upgrades, re-configuration and scaling to reflect evolving service alterations in terms of topology, capacity, network functions, etc.
  • Decommissioning deactivates and releases the allocated slice resources terminating the network slice instance.

 

Fig.5: Lifecycle management of network slices [6]
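The flow through the four phases can be sketched as a minimal state machine. The phase names follow the list above, while the strictly linear transitions are a simplification; in practice, run-time re-configuration loops back within the same phase.

```python
# Simplified linear transitions between the 3GPP life-cycle phases [6].
LIFECYCLE = {
    "preparation": "instantiation/configuration",
    "instantiation/configuration": "activation",
    "activation": "run-time",
    "run-time": "decommissioning",
}

class NetworkSliceInstance:
    def __init__(self):
        self.phase = "preparation"

    def advance(self) -> str:
        """Move the NSI to the next life-cycle phase and return it."""
        if self.phase == "decommissioning":
            raise RuntimeError("NSI already terminated")
        self.phase = LIFECYCLE[self.phase]
        return self.phase
```

Keeping the phase explicit makes it straightforward for a management system to reject operations that are invalid in the current phase, e.g. traffic handling before activation.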

The run-time process in turn can effectively manage the network slice instance considering the following management procedures: (i) fault management, (ii) performance management, (iii) configuration management and (iv) policy management. Such management procedures may follow a closed-loop paradigm, whereby related data can be discovered, polled or notified to gain an understanding of the actual resource state by comparing it with the desired resource state, which can be provided by the resource owner or requested by the client. If a difference, i.e. delta, is identified, then an adjustment should be provided considering the operator's policy, which may result in modifying the resource state, denying the request or notifying an exception. An overview of this process is provided in Fig.6, taken from [17].

Fig.6: Closed-loop control process for network management procedures [17]
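The compare-and-adjust cycle can be sketched as a single loop iteration. The policy outcomes ("modify", "deny", "exception") follow the text above, while the function name and the dictionary encoding of resource state are illustrative.

```python
def control_loop_step(actual: dict, desired: dict, policy) -> str:
    """One iteration: compare actual vs. desired state, apply operator policy."""
    # Compute the delta: every attribute whose actual value differs
    # from the desired value.
    delta = {k: v for k, v in desired.items() if actual.get(k) != v}
    if not delta:
        return "in-sync"
    # The operator's policy decides the adjustment for this delta:
    # "modify" the resource state, "deny" the request, or raise an "exception".
    action = policy(delta)
    if action == "modify":
        actual.update(delta)   # push the resource toward the desired state
    return action
```

Each management procedure (fault, performance, configuration, policy) could run such a loop over its own view of the resource state, with `policy` encapsulating the operator's rules.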

Editor Note: Another contribution is encouraged to reflect the ONF TR-526

One way to provide such closed-loop control in the run-time process is via artificial intelligence that takes advantage of user and network context information. The ETSI ISG Experiential Networked Intelligence (ENI) [ETSI_ISG_ENI] focuses on improving the operator experience, adding closed-loop artificial intelligence mechanisms based on context-aware, metadata-driven policies to more quickly recognize and incorporate new and changed knowledge and, hence, make actionable decisions. ENI will specify a set of use cases, and a generic technology-independent architecture, for a network supervisory assistant system based on the 'observe-orient-decide-act' control loop model. This model can assist decision-making systems, such as network control and management systems, to adjust the services and resources offered based on changes in user needs, environmental conditions and business goals.

The key component being defined in ETSI ISG ENI is the ENI engine, which applies analysis and machine learning technologies to enhance and optimize the network slice management and control operations and to assist the 3GPP NSMF entity in resolving any abnormal operation of each slice. Some of the ENI engine activities are listed below:

  • Analyse the collected data associated with e.g., network topology, network traffic load, service characteristics, user location and movement, VNF type and placement constraints, infrastructure capability and resource usage, etc.
  • Produce a proper context aware policy to indicate to the network slice management function when, where and how to place or adjust the network slice instance (e.g., reconfiguration, scale-in, scale-out, change the template of the network slice instance), including the network slice functions and their configurations, in order to achieve an optimized resource utilization according to the possible change of service requirements and/or the network environment.
  • Another possible scenario is one where an operator can dynamically change a given slice's resource reservation, considering that each slice may be assigned to a specific type of service or service class. Moreover, hybrid scenarios, where network slicing and resource sharing are applied at the same time, are also envisaged.

Network slice life-cycle management operations over MTNSI

The proposed MTNSI should be able to convey the requirements from the 3GPP NSMF to the transport network, while providing the transport network capabilities towards the 3GPP NSMF for carrying out requests related to network slice life-cycle management as described in section 7.4. In particular, as mentioned in 3GPP SA5, the MTNSI needs to support:

  • Instantiate S-NSI: This operation instantiates a Sub-Network Slice Instance related to the transport network.
  • Run-Time S-NSI: This operation consists of two parts:
    1. Reporting on S-NSI involves the process of acquiring and reporting certain performance parameters related to the allocated resources of an indicated Sub-Network Slice Instance.
    2. Supervision of S-NSI involves the process of acquiring and modifying, i.e. updating, re-configuring and scaling, selected resources associated with an indicated S-NSI.
  • Terminate S-NSI: This operation terminates a S-NSI related to the transport network.

For each phase of life-cycle management, the MTNSI can support the exchange of the following essential parameters, irrespective of the technology and service model used.

Instantiate Sub-Network Slice Instance

Parameter          Description
S-NSI_GC           Coverage or geographical area of the desired S-NSI
S-NSI_EP           Identifier of the end points, e.g. IP addresses, VLAN IDs
S-NSI_LL_ID        Identifies a set of logical links between two end-points
S-NSI_LL_QoS_x*    Set of QoS parameters indicating the desired performance for each logical link included in the S-NSI_LL_ID set (e.g. bandwidth, latency, jitter, etc.)
S-NSI_Iso          Identifier of the isolation level, i.e. shared or dedicated, of a S-NSI
S-NSI_Topo         Network topology and physical connectivity between end-points of the desired S-NSI, e.g. P2P, hub-spoke, full-mesh, etc.
S-NSI_Sec          Indicates the desired S-NSI security, e.g. authentication, encryption
S-NSI_Policy       Indicates the desired S-NSI policy, e.g. VPN policy, QoS classification policy
S-NSI_S_Time       Indicates the time when a S-NSI should be instantiated

*Note: The notation x can be defined with respect to a particular QoS parameter, for instance S-NSI_LL_QoS_BW for bandwidth, S-NSI_LL_QoS_L for latency, S-NSI_LL_QoS_J for jitter, etc.

S-NSI_LL_ID represents a set of logical links that interconnect 3GPP-identified end-points (S-NSI_EP), while S-NSI_Topo indicates the desired type of underlying physical topology that realizes the connectivity among the indicated end-points.

Editor Note: The notion of isolation is currently perceived as a parameter to indicate if an allocated slice can be shared among different tenants. Further contributions are encouraged to elaborate different perceptions of isolation with specific details.

 

The TNSM should issue the following messages towards the 3GPP NSMF once the Instantiation phase is finished.

Parameter      Description
S-NSI_ID       Identifier of the instantiated Sub-Network Slice Instance
S-NSI_ID_S     A confirmation that the request was successfully performed for S-NSI_ID
S-NSI_ID_E     Message indicating the type of error, based on a pre-determined list, if the request cannot be satisfied (For Further Study)
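As an illustration only, the Instantiate parameters and response messages in the tables above could be carried as structured key/value messages. The following sketch assumes a hypothetical encoding and handler: the field values, the `handle_instantiate` function and the choice of mandatory fields are illustrative, not part of any defined MTNSI protocol.

```python
import itertools

# Toy TNSM-side handler for an Instantiate request, using the parameter
# names from the tables above. Encoding and validation logic are
# illustrative assumptions.

_id_counter = itertools.count(1)

def handle_instantiate(request):
    """Check a couple of (assumed) mandatory fields and answer with the
    response parameters: S-NSI_ID plus S-NSI_ID_S on success, or
    S-NSI_ID_E on failure."""
    missing = [p for p in ("S-NSI_EP", "S-NSI_LL_ID") if p not in request]
    if missing:
        return {"S-NSI_ID_E": "missing parameters: " + ", ".join(missing)}
    return {"S-NSI_ID": "s-nsi-%d" % next(_id_counter), "S-NSI_ID_S": True}

instantiate_request = {
    "S-NSI_GC": "metro-area-1",                 # coverage of the desired S-NSI
    "S-NSI_EP": ["10.0.0.1", "10.0.0.2"],       # end-point identifiers
    "S-NSI_LL_ID": [["10.0.0.1", "10.0.0.2"]],  # logical links between end-points
    "S-NSI_LL_QoS_BW": {"ll-0": "100 Mbps"},    # per-link bandwidth target
    "S-NSI_LL_QoS_L": {"ll-0": "10 ms"},        # per-link latency target
    "S-NSI_Iso": "dedicated",                   # shared or dedicated
    "S-NSI_Topo": "P2P",
}

print(handle_instantiate(instantiate_request))  # S-NSI_ID assigned, success
print(handle_instantiate({"S-NSI_GC": "x"}))    # error: mandatory fields absent
```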

 

Run-Time Sub-Network Slice Instance

Supervision is performed by explicitly adding, removing and/or modifying logical links on an existing S-NSI for the purpose of scaling the requested network slice up or down. A logical link is abstracted towards the NSMF and has two end points as perceived by the NSMF, e.g. a logical link connecting a base station with a core network element, with the internal topology information of the transport network being hidden.

 

Parameter          Description
S-NSI_ID           Identifier of a particular Sub-Network Slice Instance
S-NSI_LL_ID        Identifies a set of logical links in the indicated S-NSI
S-NSI_LL_MQoS_x*   Modifies the set of QoS parameters indicating the desired performance for each logical link included in S-NSI_LL_ID, in terms of bandwidth, latency, jitter, etc.
S-NSI_LL_Ax*       Adds a set of logical links by identifying the end points, e.g. IP addresses or VLANs, associated with the indicated S-NSI_LL_MQoS_x*
S-NSI_LL_D         Deletes a set of logical links, as indicated by S-NSI_LL_ID, from the S-NSI
S-NSI_S_Time       Indicates the time when the supervision action should take place

 

The TNSM should issue the following messages towards the 3GPP NSMF once the Supervision phase is finished.

 

Parameter      Description
S-NSI_ID       Identifier of the supervised Sub-Network Slice Instance
S-NSI_LL_ID    Identifies a set of logical links in the indicated S-NSI
S-NSI_ID_S     A confirmation that the request was successfully performed
S-NSI_ID_E     Message indicating the type of error, based on a pre-determined list, if the request cannot be satisfied (For Further Study)

 

Reporting is performed on an indicated S-NSI or on a particular set of specified logical links.

Parameter        Description
S-NSI_ID         Identifier of a particular Sub-Network Slice Instance
S-NSI_LL_ID      Identifies a set of logical links in the indicated S-NSI
S-NSI_LL_QM_x*   Indicates the QoS parameters to monitor, e.g. bandwidth, latency, jitter, etc., for each logical link included in S-NSI_LL_ID

*Note: The notation _x can be defined with respect to a particular QoS parameter, for instance _BW for bandwidth, _L for latency and _J for jitter, respectively.

 

The TNSM shall return the following information as the result of the reporting operation, provided that no error occurred. Otherwise, an error message should indicate the type of error according to a pre-determined error list.

 

Parameter        Description
S-NSI_ID         Identifier of a particular Sub-Network Slice Instance
S-NSI_LL_ID      Identifies a set of logical links in the indicated S-NSI
S-NSI_Failure    Indicates the failure state of each logical link included in S-NSI_LL_ID
S-NSI_QR_x*      Performance report as a response to S-NSI_QM_x*
S-NSI_LL_QR_x*   Performance report as a response to S-NSI_LL_QM_x*
S-NSI_QR_E       Message indicating the type of error for S-NSI_QR_x* reporting, based on a pre-determined list (For Further Study)
S-NSI_LL_QR_E    Message indicating the type of error for S-NSI_LL_QR_x* reporting, based on a pre-determined list (For Further Study)

*Note: The notation _x can be defined with respect to a particular QoS parameter, for instance _BW for bandwidth, _L for latency and _J for jitter, respectively.

 

Terminate Sub-Network Slice Instance 

Parameter      Description
S-NSI_ID       Identifier of a particular Sub-Network Slice Instance
S-NSI_TTime    Indicates the time when the S-NSI termination should take place


The TNSM should issue the following messages towards the 3GPP NSMF once the Termination phase is finished.

Parameter      Description
S-NSI_ID       Identifier of the Sub-Network Slice Instance to terminate
S-NSI_ID_S     A confirmation that the request was successfully performed for S-NSI_ID
S-NSI_ID_E     Message indicating the type of error, based on a pre-determined list, if the request cannot be satisfied (For Further Study)


Network slice management and orchestration based on 3GPP Network Resource Model

Clause 6.3 of [22] is likely what 3GPP has so far on the information model of 5G networks. Clause 6 of [21] defines the interworking of 5G network elements as part of the life-cycle orchestration process based on that model. These two 3GPP documents are authoritative references to be used in the further detailed study and gap analysis to which the MTNSI must conform.


Multi-domain Network Slicing

Network Slicing needs to operate end-to-end, otherwise the performance can be compromised. However, in order to achieve the required coverage, this may involve more than one operator’s network, and each operator will have a different control and data plane. There are two cases: in the first, a user (i.e. a UE) simply roams into a different single-operator domain; in the more complex case, two or more domains are concatenated, which is a natural consequence of the home-routing network model. These two broad cases for multi-domain network slicing can be categorized according to [18] as:

 

  • Roaming, where a service provider may offer certain slices to end users to support particular services. When a user moves away from the home network to a network managed by another provider, the home provider can offer the same services via roaming agreements. The slice services that a user requires while roaming, i.e. for achieving the same performance as in the home network, need to be specified in the Service Level Agreement (SLA) between the two providers. This argues for standardized performance parameters, but it also removes the opportunity for slice providers to differentiate on performance.
  • Verticals that require a slice across a geographical region that cannot be addressed by the capabilities of a single service provider. In this case the entity that received the slice request from the vertical, e.g. a service provider or broker, may request and use the necessary resources and capabilities from another service provider (or even from more than one), setting up an SLA that ensures that the desired capabilities and resources can be provided.

 

In the case of verticals, multi-domain slicing can span either multiple operators or multiple domains within a single operator’s network. In both scenarios, the assumption of a single orchestrator/manager with end-to-end visibility and control over all the domains and networks may not necessarily be feasible. In the case of multi-domain slicing within a single operator, hierarchical slice management (e.g. NSMF and NSSMF) is the typical option, while in the case of multi-operator domain management, each domain should interface horizontally to exchange the slice SLA, policies and related rules. When different domains use distinct management entities, there is a need for industry-wide harmonized information models and standardized APIs.

 

For a network slice that stretches across multiple administrative domains there is a need for exchange points that perform the resource negotiation between the domains. Such exchange points need to handle three types of SLA parameters: (i) additive, e.g. delay, jitter; (ii) concave or minmax, e.g. bandwidth; and (iii) multiplicative, e.g. probability of successful transmission. Additive and multiplicative parameters need to be apportioned among the different domains, while concave or minmax parameters can be handled in the same way within each domain.
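The composition rules for the three SLA parameter types can be sketched as follows; the per-domain figures are invented for illustration:

```python
import math

# Sketch of how an exchange point could compose per-domain SLA values
# into an end-to-end figure, following the three parameter types in the
# text: additive (sum), concave/minmax (min), multiplicative (product).

domains = [
    {"delay_ms": 5, "bandwidth_mbps": 200, "delivery_prob": 0.999},
    {"delay_ms": 8, "bandwidth_mbps": 150, "delivery_prob": 0.995},
    {"delay_ms": 3, "bandwidth_mbps": 300, "delivery_prob": 0.999},
]

def end_to_end_sla(domains):
    return {
        # additive: per-domain delays accumulate along the path
        "delay_ms": sum(d["delay_ms"] for d in domains),
        # concave/minmax: the bottleneck domain sets the e2e bandwidth
        "bandwidth_mbps": min(d["bandwidth_mbps"] for d in domains),
        # multiplicative: success probabilities multiply across domains
        "delivery_prob": math.prod(d["delivery_prob"] for d in domains),
    }

sla = end_to_end_sla(domains)
print(sla)  # delay 16 ms, bandwidth 150 Mbps, delivery ~0.993
```

The apportioning problem is the inverse direction: given an end-to-end delay budget, the exchange point must split it among the domains, whereas the bandwidth request can simply be passed to each domain unchanged.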

 

Hence, there needs to be a mechanism for per-domain allocation, but how and when this is done is related to slice sessions, which are discussed in Section 8.1.

 

Control Plane enhancements for Network Slicing  

Slice Sessions

In order to provide a slice’s advertised capabilities, there needs to be some network configuration and dimensioning, and possibly routing. This depends in part on the location of the termination of the slice. This cannot be done until the service instance(s) needing the slice is activated.

This will sterilise some resources, which need to be released when that service instance is no longer active. This may place a requirement on the charging model to encourage such behaviour, e.g. time- and volume-based charging.

Different services from the same UE can have very different slice requirements e.g. IoT, HD video, voice. Therefore the UE must be able to connect to, and disconnect from, multiple slices, as and when it needs them.

However some background slice connections could be left up indefinitely, where the overhead of slice attachment outweighs the benefit of releasing the resource.

This must be linked to the applications, not the device, unless it is a single function device.

In order for an industry sector to be interested in using a Slice, they presumably expect a fairly significant level of demand. The establishment of the Slice is therefore a matter of commercial negotiation between the Network Operator and the vertical followed by configuration, which will remain fairly static, although re-dimensioning may take place at various times.

The concept of Slice Session only applies to individual UEs that use the slice, i.e. a slice usage instance.

A related issue is Slice admission control, which is discussed below.

Slice Attachment Sessions

A network slice is a virtual network that has attributes specifically designed to meet the needs of an industry vertical or service, including aspects of both quality and reliability. A slice has three distinct attributes:

  • Performance
  • Connectivity
  • Functionality

 

In order to provide these attributes, the Slice Provider needs to allocate some network resources. The way in which this allocation is done may depend on:

  • the Slice attachment model
  • the nature of the slice
  • the nature of the UE, in particular whether it is a single purpose or multi-function device
  • the charging model.

 

The capability of a given Slice will be the subject of a commercial agreement between the Slice Provider and Slice Buyer, and although it may well support dynamic scale-in/out, the Slice capability will normally be long-lasting, i.e. it will only change on commercial timescales, although this may become more dynamic over time.

 

There will be many end-users of a given slice (albeit normally without their explicit knowledge), otherwise it would not have been worth developing it. A Slice Attachment Session refers to a single end user or device attaching to, and detaching from, a given Slice, not the existence of the Slice capability as a whole. Slice Attachment Sessions allow better utilisation of network resources and facilitate flexible charging models.


Slice Attachment Admission Control

The current Slice attachment model defined in 3GPP is that a device requests attachment to one (or more) network slices when it registers on the network, and this is (somehow) preconfigured in the device.

If it is a single function device that does not consume much network resource - e.g. a simple IoT end-device - this approach makes sense. However if sessions are used to manage slice dimensioning and resources, there is the need for slice admission control. This could either be carried out when the UE attaches to the slice, or when the application that needs it is activated.

 

If this is about resource management, and attachment and activation are not done at the same time, then admission control should be done when the application is activated. Note that some applications and Slice Types may not need admission control, it is only where there is a risk of significant resource sterilisation.
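A minimal sketch of such activation-time admission control, with a plain accept/reject decision and no negotiation, might look as follows. The `SliceAdmission` class, capacities and demands are hypothetical:

```python
# Sketch of slice admission control done at application activation:
# a simple accept/reject decision with no negotiation, as argued in
# the surrounding text. Capacities and requests are illustrative.

class SliceAdmission:
    def __init__(self, capacity_mbps, needs_admission_control=True):
        self.capacity_mbps = capacity_mbps
        self.allocated_mbps = 0
        self.needs_admission_control = needs_admission_control

    def request(self, demand_mbps):
        """Return True (admit) or False (reject); never a counter-offer."""
        if not self.needs_admission_control:
            return True   # e.g. slices where a new device adds minimal load
        if self.allocated_mbps + demand_mbps > self.capacity_mbps:
            return False  # reject outright, avoiding resource sterilisation
        self.allocated_mbps += demand_mbps
        return True

    def release(self, demand_mbps):
        """Free resources when the application's session ends."""
        self.allocated_mbps -= demand_mbps

hd_video = SliceAdmission(capacity_mbps=100)
print(hd_video.request(60))   # admitted
print(hd_video.request(60))   # rejected: would exceed capacity
hd_video.release(60)
print(hd_video.request(60))   # admitted again after release
```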

 

While in principle there could be a process of negotiation, this could be very complicated, e.g. how would the application know what alternatives were acceptable? Therefore, simple rejection is preferable.

 

However in the case of a multipurpose device, and/or one that can consume a significant amount of network resource, the situation is more complex. One example of such a multi-purpose device is a Smartphone with several applications that might use different Slices e.g. for high quality video, home security and health monitoring. A car is another example of a multi-function device that could benefit from network slicing; it is important to think beyond current devices to avoid the expectation that everything can run over the top and there is no need for slicing.

 

If a Slice Attachment request is rejected, there is the need to decide if the application should simply not be allowed to start, or try to run over the default (or Vanilla) slice. Automatic fallback could be commercially risky for the following reasons:

 

  • the Vanilla Slice may lack key functionality, e.g. Security
  • if the application did not run properly, the User would blame the Service provider
  • if the application did run properly, but the User was made aware it was not using the requested Slice, they might question why they ever needed (and in some way paid extra for) the Slice capabilities

 

Therefore if admission to a Slice is denied, connectivity for that service should not automatically fall back to the Vanilla Slice. There may however be exceptions, for example when failing to connect to an HD video slice, it might be better to drop back to SD video over the Vanilla Slice, as long as the User was made aware of what was happening, and not charged for the HD.


Slice Attachment Session Requirements

  1. Slice Attachment Sessions need to be supported as they allow better utilisation of network resources, and facilitate flexible charging based on:
    1. Attachment and/or
    2. Duration and/or
    3. Data volume
  2. Slice Attachment Session establishment/tear-down need to be able to be linked to applications/service.
  3. Slice Attachment Sessions should be able to be requested at any time after device registration, not just at registration time.
  4. Devices should be able to support multiple, concurrent Slice Attachment Sessions where appropriate, and as allowed by network subscription and network policy.
  5. Slice Attachment Session requests need to be able to be denied by the Slice Provider, but there is no requirement to support explicit negotiation about possible alternatives. Note however that not all Slice types may require Session Admission Control, for example where the attachment of a new device places a minimal load on the system.
  6. A denied SAS request should not necessarily result in connection to the Vanilla slice.
  7. Multi-function end-devices should be able to automatically connect to a basic, default slice on registration.
  8. Slice Attachment requests by a device should be able to signal a subset of parameters, at attachment time, or as a modification request, but some slice parameters cannot be changed.
  9. There is a need to specify what happens if an end-device requests a different value for a fixed Slice parameter, or a parameter type that is not supported by the Slice. Possible actions include:
    1. rejection of the attachment request
    2. granting access to the Vanilla slice instead
    3. the network sends a configuration update to the end-device
  10. The 5G Core needs to consider the impact of a session attachment or modification request on other SASs on a per access line (not end-device) basis.
  11. Individual Slice Attachment Sessions should be able to be terminated before the device deregisters, but deregistration must automatically terminate all the sessions associated with that device.
  12. Slice Attachment Sessions should support both time and volume-based charging.
  13. Slice Attachment authentication/authorisation by the network Slice Provider needs to be able to be done on a per SAS basis.
  14. The Slice Provider should manage the subscription information (e.g. default or non-default) for slice attachment sessions for every end user/device in order to be able to (re)attach to a given slice or slices.
  15. The Slice Provider should manage the network policy related to slice attachment, e.g. location restriction, congestion control, etc.
  16. It is necessary to maintain good synchronisation between the slice attachment information in the end-device with subscription information held in the network.
  17. There needs to be support for simple devices that require permanent connection to a Slice, although this could be done by a session of indefinite duration.
  18. A device entering a visited network should only request slice attachment for active or activated applications.
  19. It should be possible to do performance monitoring on a per SAS basis.
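As a sketch of requirements 1 and 11 above, a Slice Attachment Session record could track attachment, duration and volume for charging, with deregistration terminating all of a device's sessions. All class names, tariffs and figures are illustrative:

```python
# Sketch of a Slice Attachment Session record supporting the charging
# models of requirement 1 (attachment, duration, volume) and the
# deregistration behaviour of requirement 11. Values are invented.

class SliceAttachmentSession:
    def __init__(self, device_id, slice_id, start_s):
        self.device_id = device_id
        self.slice_id = slice_id
        self.start_s = start_s
        self.bytes_used = 0
        self.end_s = None

    def account(self, n_bytes):
        self.bytes_used += n_bytes

    def terminate(self, end_s):
        self.end_s = end_s

    def charge(self, attach_fee, per_second, per_megabyte):
        duration = self.end_s - self.start_s
        return (attach_fee
                + per_second * duration
                + per_megabyte * self.bytes_used / 1e6)

def deregister(device_id, sessions, now_s):
    """Requirement 11: deregistration terminates all of the device's SASs."""
    for s in sessions:
        if s.device_id == device_id and s.end_s is None:
            s.terminate(now_s)

sas = SliceAttachmentSession("ue-1", "hd-video-slice", start_s=0)
sas.account(50_000_000)          # 50 MB transferred
deregister("ue-1", [sas], now_s=600)
print(round(sas.charge(attach_fee=0.10, per_second=0.001, per_megabyte=0.02), 2))
```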


The Commercial Implications of Slice Usage and How this Impacts Slice Selection Mechanisms


The main point of providing different network slices is to provide applications with network functionality and/or performance that they would not get from best-effort connectivity. An application/service provider will only pay a premium to connect to a Slice if they have to in order to deliver that application/service, and it is in their commercial interest to pay the lowest possible premium. They will therefore specify the minimum capability, over and above basic connectivity, commensurate with delivering their service.

 

3GPP seems to support the concept of offering an alternative slice if a Slice Attachment request cannot be met, but this makes little technical or commercial sense.

 

If the alternative offered Slice has lower performance or less functionality than that requested, it will not, by definition, be acceptable, as the requested Slice will have been the minimum requirement. It also begs the question of how the application knows what the capabilities of the alternative slice are, and how it assesses whether or not these are acceptable.

 

If the alternative is superior to that requested, then that would be acceptable from a performance and functionality perspective, but possibly not commercially if the cost of using that Slice was higher. The cost stack of the service will have been calculated on the basis of the advertised price of using the requested slice, and it cannot be left up to the application to make a real-time decision as to whether and how much more it is prepared to pay.

 

Therefore the more sensible approach is for the Slice attachment request to simply be accepted or denied. If the requested Slice capability cannot be provided, the request should just be denied, and no inferior alternative offered. If the Slice provider has a ‘better’ Slice, that is at least as good as that requested in every respect, the requested Slice attachment should be accepted, as long as the Slice provider is prepared to take the commercial hit of providing superior capability with no additional reward.

 

Therefore support for Slice Attachment Session negotiation is not required, with the possible exception of bandwidth, which is the subject of a further contribution.


Editor Note: There is a need to soften the judgement of the veracity of negotiation in the above text

Data Plane Enhancements & Approaches for Network Slicing

Network Data Plane Concepts & Deployment Options

The use of network slicing implies that the data plane must conform to the SLA requested by a service provider, vertical or 3rd party. To guarantee the desired performance, especially for critical communication services that require low latency and near-zero packet loss, or for enhanced broadband services with high throughput, there is a need to choose the data plane solution carefully, in addition to the admission control performed by the control plane. This is critical for delivering predictable performance while maintaining a practical economic model for the services.

The following are different options for maintaining the desired SLA with respect to the requested network slices. It should be noted that similar performance assurance options can also be applied to the services offered within a network slice, and that these can be decoupled, i.e. independent from the ones applied on a per-slice basis.

The data plane can assure the desired performance by:

  • Over-provisioning
  • Shaping traffic at ingress points
  • Applying strict prioritization of certain traffic classes
  • Duplicating data packets and propagating them over disjoint paths to enhance resiliency
  • Enabling path selection, link selection, queue selection, etc.
  • Employing hard isolation at the interface level in order to assure the desired bandwidth
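As an example of the second option above (shaping traffic at ingress points), a token-bucket shaper can be sketched as follows; the rate, burst size and packet sizes are illustrative:

```python
# Sketch of ingress traffic shaping with a token bucket, one of the
# data plane options listed above. Figures are invented for the example.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate_Bps = rate_bps / 8      # token refill rate in bytes/s
        self.burst = burst_bytes          # bucket depth (burst allowance)
        self.tokens = burst_bytes
        self.last_s = 0.0

    def conforms(self, pkt_bytes, now_s):
        """Admit the packet if enough tokens have accumulated, else mark
        it non-conforming (to be queued or dropped at the ingress point)."""
        self.tokens = min(self.burst,
                          self.tokens + (now_s - self.last_s) * self.rate_Bps)
        self.last_s = now_s
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False

# 1 Mbit/s slice ingress with a 1500-byte burst allowance
shaper = TokenBucket(rate_bps=1_000_000, burst_bytes=1500)
print(shaper.conforms(1500, now_s=0.0))    # burst allowance covers this
print(shaper.conforms(1500, now_s=0.001))  # only ~125 B refilled: shaped
print(shaper.conforms(1500, now_s=0.013))  # enough tokens refilled by now
```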

Proactive performance measurement across the whole network domain or along the traffic engineered paths can be used to demonstrate that the desired SLA is being met.

The notion of isolation is achieved (i) by introducing encapsulation, e.g. VPNs, VXLANs, etc., and (ii) by assuring that the resources allocated to a particular slice do not interfere with those allocated to other co-existing slices.

The aforementioned mechanisms can be used either independently or in combination, and each suggests a different type of coordination between the underlay and overlay networks.


Editor Note: We need to reconsider the notion of jitter in relation to the data duplication and disjoint-path point above.

VPN & VLAN Technologies  

In the context of transport networks the roots of network slicing can be traced back to Virtual Private Networks (VPNs) and Virtual Local Area Networks (VLANs), which introduced virtualized networks over a shared infrastructure. The former connects remote locations in a point-to-point or multi-point to multi-point fashion, while the latter groups Ethernet end-stations under the same broadcast domain.

 

The notion of virtualization usually defines an overlay-to-underlay relation connecting remote locations across different layers. Virtualization has a catalyst role in the convergence of multi-service transport networks. BBF documents a set of architectures for a broadband multi-service network, addressing typical infrastructures, topologies and deployment scenarios, and specifies the associated nodal requirements in TR-178.

 

Furthermore, BBF introduced a series of recommendations in the context of virtualization for the Mobile Backhaul of LTE and beyond. TR-221 concentrates on reference architectures for MPLS in the Mobile Backhaul, TR-221amd1 provides extensions for small cell backhaul and addresses further issues on the control and management planes, and TR-221amd2 looks into time and phase synchronization, seamless MPLS and full E-Tree service using VPLS. TR-331 builds on TR-221, providing the architectural basis and technical requirements needed to successfully deploy PON access nodes within the TR-221 architecture, either independently or alongside Mobile Backhaul access nodes, enabling the integration of Mobile Backhaul service with residential and/or WLAN services.

 

An insight into the technical architecture and equipment requirements for enabling Carrier Ethernet services over MPLS networks, considering E-Line, E-LAN and E-Tree, is provided in TR-224, while TR-350 focuses on implementing Ethernet services using BGP MPLS based Ethernet VPNs (EVPN) in IP/MPLS networks. MP-BGP is used for distributing the reachability of MAC addresses over the MPLS network, bringing the same operational control and scale of L3VPN to L2VPN.

The aforementioned BBF work concentrated on multi-service support but did not consider the notion of multi-tenancy, e.g. opening the network infrastructure to verticals. In addition, BBF looked into automation via EVPN using BGP MPLS. There is, however, an expectation of a service management interface responsible for triggering the establishment and maintenance of the corresponding transport layer.

 

In addition, some emerging 5G services with stringent latency requirements, e.g. URLLC, may not be supported efficiently by current VPN/VLAN technologies. Extensions are needed in order to assure a deterministic delay bound. Furthermore, BBF has not yet explored the use of virtualization to support the concurrent deployment of different base station functional splits on top of a common network infrastructure.

Access Network Sharing Models   

Network sharing leverages the benefits of network virtualization to introduce network resources assigned to different network operators, considering greenfield and brownfield deployment scenarios. TR-370 identified network sharing by means of (i) the management system and (ii) the virtual Access Node (vAN), both of which can have applicability for network slicing. The network management system introduces a simple separation between the VPNs/VLANs allocated to different tenants, while the vAN configures virtual machines, introducing isolation at the port level as well as in the computing and storage resources within the network equipment.

 

In particular, a vAN is a logical entity that represents a physical access node, or part of one, together with its virtual ports, which are mapped to physical customer ports and identified using a virtual port identifier. The mapping between physical ports and virtual ports is maintained by a mapping function in a centralized management system within the network operator (or infrastructure provider in general). This mapping function converts a physical port of the access node, in received control packets that include port information, to a virtual port related to the vAN, and conversely converts a virtual port identity in received control packets to a physical port of the access node.
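The port-mapping function described above can be sketched as a simple bidirectional table; the node identifiers and packet fields below are invented for illustration:

```python
# Sketch of the vAN port-mapping function: a centralized table that
# translates physical access-node ports to per-tenant virtual port
# identifiers in control packets, and back. Identifiers are invented.

class VirtualPortMapper:
    def __init__(self):
        self.phys_to_virt = {}   # (access_node, phys_port) -> (vAN, virt_port)
        self.virt_to_phys = {}

    def bind(self, access_node, phys_port, van_id, virt_port):
        self.phys_to_virt[(access_node, phys_port)] = (van_id, virt_port)
        self.virt_to_phys[(van_id, virt_port)] = (access_node, phys_port)

    def to_virtual(self, packet):
        """Rewrite upstream control packets: physical -> virtual port."""
        van_id, virt_port = self.phys_to_virt[(packet["node"], packet["port"])]
        return {**packet, "node": van_id, "port": virt_port}

    def to_physical(self, packet):
        """Rewrite downstream control packets: virtual -> physical port."""
        node, port = self.virt_to_phys[(packet["node"], packet["port"])]
        return {**packet, "node": node, "port": port}

mapper = VirtualPortMapper()
mapper.bind("an-1", 7, "vAN-tenantA", 1)
up = mapper.to_virtual({"node": "an-1", "port": 7, "payload": "dhcp"})
print(up)  # the tenant sees vAN-tenantA port 1, not an-1 port 7
print(mapper.to_physical(up) == {"node": "an-1", "port": 7, "payload": "dhcp"})
```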

TR-370 proposed to use the following traffic encapsulation models implemented by the physical access and aggregation nodes to differentiate various tenants:  

 

  • The VLAN approach introduces an Operator VLAN (O-VLAN) Tag added to the Ethernet frame (at the A10 reference point) as agreed between the tenant and infrastructure provider, including also the corresponding forwarding rules. Q-in-Q-in-Q tunneling introduces three tags in front of the Ethernet header that correspond to the operator, service and client respectively.
  • The MPLS approach introduces an LSP related to a particular tenant ID, with an MPLS label added and discarded at the switch adjacent to the A10 reference point. The Ethernet frame also contains the C-VLAN and S-VLAN tags related to the particular customer and service, which continue to be forwarded in the virtual network operator’s premises.
  • The VXLAN approach overlays a Layer 2 connection on top of a Layer 3 one, identified by a 24-bit segment ID known as the VXLAN Network Identifier (VNI) that is added to the Ethernet frame at the switch adjacent to the A10 reference point. The VNI and the related VXLAN outer header encapsulation are known only to the VXLAN Tunnel End Points (VTEPs).
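For the VXLAN option, the 8-byte VXLAN header defined in RFC 7348 consists of a flags byte with the VNI-valid bit set, reserved fields and the 24-bit VNI. It can be constructed as follows; the VNI value is illustrative:

```python
import struct

# Build the 8-byte VXLAN header of RFC 7348:
#   byte 0: flags (0x08 = VNI-valid 'I' bit), bytes 1-3: reserved,
#   bytes 4-6: 24-bit VNI, byte 7: reserved.

def vxlan_header(vni):
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit segment ID")
    # pack flags byte, 3 zero pad bytes, then the VNI shifted into the
    # top 24 bits of a big-endian 32-bit word (low byte reserved)
    return struct.pack("!B3xI", 0x08, vni << 8)

hdr = vxlan_header(vni=5000)
print(hdr.hex())  # 08 000000 | 001388 | 00  (5000 = 0x1388)
```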

 

It should be noted that for Q-in-Q-in-Q, the VLAN approach introduces an additional Tag (C-TAG) associated with the VLAN used by the service provider. This Tag may be added to the subscriber Ethernet frame as agreed between the subscriber (tenant) and provider, including also the corresponding forwarding rules. Another approach, known as Provider Bridging, was defined in IEEE 802.1ad and subsequently included in IEEE 802.1Q-2011. If a provider uses three tags (one to identify the service instance, an I-TAG), this may be done as defined in Provider Backbone Bridging, defined in IEEE 802.1ah-2008 and also included in IEEE 802.1Q-2011.

 

The selection of the tunneling mechanism will depend on the hardware capabilities of the physical access node. TR-101 changes the semantics of VLAN tag stacking in order to increase the number of VLANs supported, but results in an increased Ethernet frame size.

 

The MPLS approach brings a flexible and scalable network architecture. However, in a purely L2 environment it might be preferable to use the O-VLAN scheme rather than the MPLS extension. In certain implementations access nodes may not support MPLS, and adding MPLS capability can increase their complexity.


Emerging Slicing Technologies  

 

Emerging network slicing technologies are currently under development and are subject to change. Those described here are considered only in their current state.

Deterministic Networking (DetNet)

 

Deterministic Networking (DetNet) is a technology that provides the capability of bounded latency and low data loss rates on selected data flows. The DetNet QoS can be expressed in terms of:

 

  • Bounded delay and delay variation from source to destination.
  • Packet loss ratio under various assumptions as to the operational states of the nodes and links. If packet replication is used to reduce the probability of packet loss, then a related property is the probability (which may be zero) of delivery of extra copies of the replicated packet.
  • Tolerance for out-of-order packet delivery. It is worth noting that some DetNet applications are unable to tolerate any out-of-order delivery.

 

It is a distinction of DetNet that it is concerned solely with worst case values for the end-to-end latency, jitter, and mis-ordering.

 

Three techniques are used by DetNet to provide these qualities of service:

 

  • Congestion loss protection addresses latency and packet loss. It operates by allocating resources, e.g. buffer space or link bandwidth, along the path of a DetNet flow in some or all intermediate nodes. 
  • Service protection addresses packet loss due to random media errors and equipment failures. Packet replication and elimination, and packet encoding (e.g. network coding), are the mechanisms employed to provide service protection; both are constrained by the latency requirements.   
  • Explicit routes and route pinning are used to avoid temporary interruptions caused by the convergence of routing or bridging protocols, and to assist sequencing and replication/elimination.
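The service protection technique above (packet replication with duplicate elimination) can be sketched in a few lines of Python; the class name, history window size, and flow identifiers below are illustrative assumptions, not part of any DetNet specification.

```python
# Sketch of the replication/elimination idea: an eliminator node keeps a
# window of recently seen sequence numbers per DetNet flow and forwards
# only the first copy of each packet received over the replicated paths.
class Eliminator:
    def __init__(self, history=64):
        self.history = history   # how many sequence numbers to remember
        self.seen = {}           # flow_id -> set of recent seq numbers

    def accept(self, flow_id, seq):
        """Return True if this copy should be forwarded, False if it is
        a duplicate delivered over another replicated path."""
        window = self.seen.setdefault(flow_id, set())
        if seq in window:
            return False         # duplicate: eliminate
        window.add(seq)
        # bound memory: drop the oldest entry beyond the history window
        if len(window) > self.history:
            window.discard(min(window))
        return True

e = Eliminator()
# the same packet arrives over two disjoint paths; only one copy survives
assert e.accept("flow-a", 1) is True
assert e.accept("flow-a", 1) is False
assert e.accept("flow-a", 2) is True
```

The bounded history reflects the latency constraint mentioned above: an eliminator only needs to remember sequence numbers for as long as a late duplicate can still arrive.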

 

Resources, e.g. buffer space or bandwidth, not utilized by a DetNet flow are available to non-DetNet packets and may be preempted when a DetNet flow needs them. The IETF Deterministic Networking (DetNet) working group focuses on aspects related to deterministic data paths: this includes the overall architecture, data plane, data flow information models and YANG models. The DetNet working group collaborates with other IETF working groups as well as other SDOs, including IEEE.

 

 

In the context of network slicing, DetNet would be one key enabler for a data plane supporting e.g. URLLC traffic, and can be used in the following two different ways:

 

  • To configure a non-overlapping allocation of resources between different slices, where each slice is represented as a DetNet flow with deterministic behavior. In this way different DetNet flows can be configured with distinct types of deterministic behavior reflecting the particular service types, and can co-exist with other best-effort traffic flows on the same network infrastructure. 
  • Within a network slice instance, in order to distinguish various types of traffic. In this case a network slice instance can be allocated to a particular tenant; DetNet can then assure the deterministic requirements of the tenant's different services, treating each service as a DetNet flow.    
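The first option, mapping each slice to a DetNet flow with a non-overlapping resource reservation, can be sketched as a simple admission check; the link capacity, class name, and slice names below are invented for illustration.

```python
# Toy admission control: each slice is represented as a DetNet flow with a
# dedicated bandwidth reservation, and a new slice is admitted only if its
# reservation does not overlap resources already granted to other slices.
class SliceAdmission:
    def __init__(self, capacity_mbps=1000):
        self.capacity = capacity_mbps
        self.flows = {}   # slice name -> reserved bandwidth (Mbit/s)

    def admit(self, slice_name, mbps):
        reserved = sum(self.flows.values())
        if reserved + mbps > self.capacity:
            return False              # would overlap another slice's resources
        self.flows[slice_name] = mbps
        return True

a = SliceAdmission()
assert a.admit("urllc-slice", 300) is True
assert a.admit("embb-slice", 600) is True
assert a.admit("extra-slice", 200) is False   # only 100 Mbit/s remain
```

Capacity left unreserved here would, per the text above, still be usable by best-effort traffic until a DetNet flow claims it.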

 

 

Time Sensitive Networking (TSN)

 

Time Sensitive Networking (TSN) is a Task Group (TG) within IEEE 802.1 that (among other things) defines Ethernet bridging technology for deterministic communication over Ethernet. TSN comprises an umbrella of standards currently being developed by the IEEE 802.1 working group, which can be arranged into the following key component categories:

  • Time synchronization: All devices participating in real-time communication with hard time boundaries need a common time reference and hence have to synchronize their clocks. Time is typically distributed from a central source directly through the network using the IEEE 1588 Precision Time Protocol (PTP) and its corresponding profiles, i.e. IEEE 802.1AS and IEEE 802.1AS-Rev, which reduce the list of PTP options to the few critical ones relevant for particular services, e.g. home networks, automotive or Industry 4.0.  
  • Scheduling and traffic shaping: Ethernet can support different traffic classes with distinct priorities. IEEE 802.1Q defines eight priorities that are visible in the VLAN tag of a standard Ethernet frame. It should be noted that even the highest priority class is not absolute, i.e. the end-to-end latency is not guaranteed with standard Ethernet switches, since Ethernet needs to complete the processing and transmission of a packet before handling a new one. This can impact the latency of critical traffic because it may introduce non-deterministic buffering delay. Hence, the strict priority scheduling of IEEE 802.1Q needs further enhancements. IEEE 802.1Qbv specifies a time-aware scheduler that separates communication into fixed-length, pre-determined cycles assigned to one or more priority classes, granting them exclusive use to guarantee non-interrupted transmissions. This can eliminate the buffering delay related to Ethernet transmission handling, serving time-critical traffic without interruption. IEEE 802.1Qbv ensures that the Ethernet interface is not busy transmitting a frame when the scheduler changes from one class to another by introducing a guard band that restricts new frame transmissions before each time-critical window, with only ongoing transmissions completed. To mitigate the negative effects of guard bands, IEEE 802.1Qbu and IEEE 802.3br introduce frame fragmentation and pre-emption on a physical link-by-link basis, where a fragmented frame can be reassembled after a time-critical frame has been transmitted. Frame pre-emption can significantly reduce the guard band, but has to be activated on each link, for example using the Link Layer Discovery Protocol (LLDP). The combination of time synchronization and frame pre-emption is an effective way to guarantee the coexistence of different traffic classes, while assuring latency guarantees for time-critical classes.
  • Selection of communication paths, path reservation and fault-tolerance: For fault-tolerance, TSN specifies IEEE 802.1CB Frame Replication and Elimination for Reliability (FRER), which replicates frames of critical traffic across disjoint network paths, with the extra frames being eliminated by the elimination sub-function. In addition, other protocols such as High-availability Seamless Redundancy (HSR) or the Parallel Redundancy Protocol (PRP), specified in IEC 62439-3, can also be utilized. To avoid overloading the network, replication can be applied to critical traffic only. IEEE 802.1Qca focuses on the control of explicit paths, considering protection, bandwidth reservation, synchronization, etc., with e.g. a Path Computation Element (PCE) responsible for path calculations including redundant paths. IEEE 802.1Qci Per-Stream Filtering and Policing (PSFP) facilitates ingress policing and gating on a per-flow basis, preventing traffic overload that can be caused by e.g. excessive critical traffic. PSFP performs per-flow filtering to identify each flow and applies the corresponding policy; gating then coordinates flows so that frames proceed in an orderly and deterministic fashion, and metering enforces predefined bandwidth profiles.
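The IEEE 802.1Qbv time-aware scheduling described above can be illustrated with a toy gate control list; the cycle length, window boundaries, and class assignments below are assumptions chosen for the example, not values from the standard.

```python
# Toy 802.1Qbv-style gate control list: the repeating cycle is divided
# into fixed windows, each opening the transmission gates of one or more
# priority classes. A guard band (all gates closed) precedes the
# time-critical window in the next cycle so no frame straddles it.
CYCLE_US = 1000  # cycle length in microseconds (assumed value)

# (start_us, end_us, set of traffic classes whose gates are open)
GATE_CONTROL_LIST = [
    (0,   250, {7}),                      # exclusive time-critical window
    (250, 300, set()),                    # guard band: all gates closed
    (300, 1000, {0, 1, 2, 3, 4, 5, 6}),   # best-effort classes
]

def gates_open(t_us):
    """Return the set of traffic classes allowed to transmit at time t."""
    t = t_us % CYCLE_US  # the schedule repeats every cycle
    for start, end, classes in GATE_CONTROL_LIST:
        if start <= t < end:
            return classes
    return set()

assert gates_open(100) == {7}      # inside the time-critical window
assert gates_open(275) == set()    # guard band blocks new transmissions
assert 3 in gates_open(1500)       # best-effort window of the next cycle
```

With frame pre-emption (IEEE 802.1Qbu / 802.3br) active, the guard band window could be shrunk, since an in-flight best-effort frame can be suspended instead of being allowed to finish.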

 

 

| Standard/Project | Function group | Title | Status |
|---|---|---|---|
| IEEE 802.1AS-Rev | Timing and Synchronization | Timing and Synchronization for Time-Sensitive Applications | Draft 7.0 (Mar. 2018) |
| IEEE 802.1Qbv | Forwarding and Queuing | Enhancements for Scheduled Traffic | Published (Mar. 2016) |
| IEEE 802.1Qbu | Forwarding and Queuing | Frame Preemption | Published (Aug. 2016) |
| IEEE 802.1Qca | Stream Reservation (SRP) | Path Control and Reservation | Published (Mar. 2016) |
| IEEE 802.1CB | Stream Reservation (SRP) | Seamless Redundancy | Published (Sep. 2017) |
| IEEE 802.1Qcc | Stream Reservation (SRP) | Enhancements and Performance Improvements | Draft 2.3 (May 2018) |
| IEEE 802.1Qci | Forwarding and Queuing | Per-Stream Filtering and Policing | Published (Sep. 2017) |
| IEEE 802.1Qch | Forwarding and Queuing | Cyclic Queuing and Forwarding | Published (Jun. 2017) |
| IEEE 802.1CM | Vertical | Time-Sensitive Networking for Fronthaul | Published (Jun. 2018) |
| IEEE 802.1Qcr | Forwarding and Queuing | Asynchronous Traffic Shaping | Draft 0.5 (Jun. 2018) |
| IEEE 802.1CS | Stream Reservation | Local Registration Protocol | Draft 1.5 (Jun. 2018) |

 

In the context of network slicing, the TSN-defined standards will be key enablers for supporting e.g. URLLC traffic, and can be used in the following two different ways:

 

  • To configure slice-identifying stream filters and allocate the resources required for deterministic behavior
  • Within a network slice, in order to distinguish various traffic classes

Enhanced Virtual Private Networks (VPN+)

 

Enhanced Virtual Private Networks (VPN+) is an individual contribution that defines a set of attributes enriching VPNs to support 5G services, including network slicing. VPN+ is designed considering: (i) isolation in the data plane; (ii) scalability in the control plane, i.e. avoiding the introduction of per-flow state in the network; (iii) support of diverse performance guarantees; (iv) integration of network functions and value-added services. VPN+ may utilize the following underlay solutions:

 

  • FlexE creates a point-to-point Ethernet link with a specific fixed bandwidth, supporting (i) bonding of multiple links, (ii) sub-rating, which allows the use of a portion of a link, and (iii) channelization, which allows a link to carry several lower-speed links from different sources. FlexE can be used to provide hard isolation at the interface level. A tenant can then use other methods, e.g. queuing, to manage the relative priority of the corresponding traffic.
  • Dedicated Queues can reduce the negative performance influence between competing VPNs, by steering traffic to dedicated input and output queues.
  • Segment Routing (SR), over either IPv6 or MPLS, is a main component of VPN+ that provides packet instructions at the network entry point and optionally at various points inside the network, using Segment IDs (SIDs) to specify packet treatment instructions. SR steers packets along routes (strict or loose paths other than the shortest path) for various traffic engineering reasons.
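The FlexE sub-rating and channelization described above can be sketched as a calendar-slot allocation. FlexE divides a 100G PHY into twenty 5G calendar slots; the allocation function and client names below are illustrative assumptions, not FlexE procedures.

```python
# Toy FlexE-style channelization: each client (e.g. a slice) is granted
# the number of 5G calendar slots matching its requested sub-rate, giving
# hard isolation at the interface level.
SLOT_GBPS = 5
TOTAL_SLOTS = 20  # 100G PHY = 20 slots of 5G

def allocate(requests):
    """requests: {client: rate_gbps}; returns {client: [slot indices]}
    or raises ValueError if the PHY cannot carry the requested sub-rates."""
    calendar, next_slot = {}, 0
    for client, rate in requests.items():
        slots_needed = -(-rate // SLOT_GBPS)  # ceiling division
        if next_slot + slots_needed > TOTAL_SLOTS:
            raise ValueError("PHY capacity exceeded")
        calendar[client] = list(range(next_slot, next_slot + slots_needed))
        next_slot += slots_needed
    return calendar

cal = allocate({"slice-a": 25, "slice-b": 50})   # 5 + 10 slots
assert len(cal["slice-a"]) == 5
assert len(cal["slice-b"]) == 10
```

Because each tenant owns its slots outright, contention between tenants is eliminated at the physical layer; relative priorities within a tenant's own traffic are then handled by queuing, as noted above.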

 

A VPN path that travels through the underlay by other than the shortest path normally requires the controlling node to maintain state in the underlay to specify that path. With well-behaved SR, this can be achieved by introducing the necessary network link and node state, without additional per-path state.
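The per-path-state point can be illustrated with a toy model of SR forwarding: the ingress node encodes the whole explicit path as a SID list carried in the packet, and each transit node forwards using only the top SID, so no per-path forwarding entries are needed in the underlay. All SIDs and function names below are invented for illustration.

```python
# Toy Segment Routing model: the path state lives in the packet header,
# not in the transit nodes.
def encode_path(payload, sid_list):
    """Ingress: push the explicit path (list of SIDs) onto the packet."""
    return {"sids": list(sid_list), "payload": payload}

def forward(packet):
    """Transit node: consume the top SID and forward toward it, using
    only its own link/node SIDs -- no per-path state is kept here."""
    next_hop = packet["sids"].pop(0)
    return next_hop, packet

pkt = encode_path("data", ["sid-r2", "sid-r5", "sid-egress"])
hop, pkt = forward(pkt)
assert hop == "sid-r2"
assert pkt["sids"] == ["sid-r5", "sid-egress"]
```

A loose path would list only selected waypoints, with shortest-path routing filling the gaps between consecutive SIDs.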


Identify gaps for BBF MSBN





End of Broadband Forum Study Document SD-406




