SOFTWARE DEFINED NETWORKING

Cost

For such an attention-getting word, there is less to say about cost than about the other motivators. Cost has capital (CAPEX) and operational (OPEX) components, and it is driven by its companions: scale (a CAPEX driver), complexity, and stability (the OPEX drivers).

Let’s start with the obvious statement about CAPEX: for many customers (particularly service providers or large enterprises with data center operations), processing power on generic compute (COTS) hardware is very cheap in comparison to processing in their network elements. The integration costs associated with the integrated service and control cards drive some of this cost differential. Admittedly, some of it is also driven by the vendor’s margin expectation for the operating system (the control, management, and service processes), which is not always licensed separately. It’s a way for the vendor to recover its investment in intellectual property and to fund ongoing maintenance and development. This is a subtle point for the conversation going forward. While SDN will definitely reduce the hardware integration component of this cost, the component that is the vendor’s intellectual property (control or service) may be repriced to what the vendor perceives as its true value (to be tested by the market). Additionally, an integration cost will remain in the software components.[14]

Innovation

An argument can be made that there are innovation benefits from the separation of the control and data planes (the argument is stronger when the separation of the service plane is considered as well). Theoretically, separation can benefit the consumer by changing the software release model so that innovations in either plane can proceed independently of each other, as compared to the current model in which innovations in both planes are gated by the build cycle of the multipurpose integrated monolith. More relevant to the control/data separation is the ability to introduce new hardware in the forwarding plane without having to iterate the control plane (for example, support for a new physical device could be added to the data plane component via new drivers).

Stability

The truth is that when we talk about the separation of these planes in an SDN context, some subcomponents of the control plane probably cannot be centralized, and there will be a local agent (perhaps more than one) that accepts forwarding modifications and/or aggregates management information back to the central control point. In spite of these realities, separating the control and data planes may make the forwarding elements more stable by virtue of their having a smaller and less volatile codebase. The premise that a smaller codebase is generally more stable is fairly common these days. For example, a related (and popular) SDN benefit claim comes from the clean slate proposition, which posits that the gradual development of features in areas like Multiprotocol Label Switching (MPLS) followed a meandering path of feature upgrades that naturally bloats the codebases of existing implementations. This bloat leads to implementations that are overly complex and ultimately fragile. The claim is that the same functionality, implemented with centralized label distribution (emulating the distributed LDP or RSVP) and centralized knowledge of network topology, could be delivered with a codebase at least an order of magnitude smaller than currently available commercial codebases.[15] The natural claim is that in a highly prescriptive and centralized control system, the network behavior can approach that of completely static forwarding, which is arguably stable.
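To make the clean-slate claim about codebase size concrete, here is a minimal sketch of what centralized label distribution can look like when the controller already holds the full topology: a plain shortest-path computation plus a loop that allocates labels along the path and emits per-node label entries. This is an illustrative toy, not any vendor's implementation; the sample topology, the label-allocation scheme, and the idea of simply printing the entries a controller would push over its southbound protocol are all assumptions made for the example.

# Toy sketch of centralized label distribution: a controller that knows the
# full topology computes a path and derives label-swap entries for each node,
# standing in for what LDP/RSVP would negotiate hop by hop. All names here
# (TOPOLOGY, build_label_entries, etc.) are hypothetical, not a real API.
import heapq

# Topology as an adjacency map: node -> {neighbor: link_cost}
TOPOLOGY = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 1, "D": 2},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 2, "C": 1},
}

def shortest_path(topo, src, dst):
    """Plain Dijkstra over the controller's global view of the topology."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in topo[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Rebuild the path from dst back to src.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def build_label_entries(topo, src, dst, first_label=100):
    """Allocate one label per hop and return (node, in_label, out_label,
    next_hop) tuples that a controller would push to each forwarding element."""
    path = shortest_path(topo, src, dst)
    entries = []
    label = first_label
    for i, node in enumerate(path[:-1]):
        next_hop = path[i + 1]
        in_label = None if i == 0 else label                  # ingress pushes, transit swaps
        out_label = label + 1 if i < len(path) - 2 else None  # last hop forwards unlabeled (PHP-style)
        entries.append((node, in_label, out_label, next_hop))
        label += 1
    return entries

if __name__ == "__main__":
    for node, in_lbl, out_lbl, nh in build_label_entries(TOPOLOGY, "A", "D"):
        # A real controller would call its southbound API here (e.g., an
        # OpenFlow FlowMod); this sketch just prints the intended entry.
        print(f"push to {node}: in={in_lbl} out={out_lbl} next_hop={nh}")

The point is not the specific code but that the distributed signaling machinery of LDP or RSVP collapses into a path computation and a table download once topology knowledge is centralized.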
Complexity and its resulting fragility

The question of how many control planes there are and where they are located directly impacts the scale, performance, and resiliency (or lack thereof, which we refer to as fragility) of a network. Specifically, network operators plan on deploying enough devices within a network to handle some percentage of peak demand. When utilization approaches that threshold, new devices must be deployed to satisfy the demand. In traditional routing and switching systems, it’s important to understand how much localized forwarding throughput demand can be satisfied without increasing the number of managed devices, and their resulting control protocol entities, in the network. Note from our discussion that the general paradigm of switch and router design is a firmly distributed control plane model, which generally means that for each device deployed, a control plane instance is brought up to control the data plane within that chassis. The question then is this: how does each additional control plane impact the scale of the overall network control plane for such things as network convergence (i.e., the time it takes for all of the running control planes to agree on a loop-free state of the network)? The answer is that it does impact the resiliency and performance of the overall system: the greater the number of control planes, the greater the potential for additional fragility in the system. Properly tuned, however, the additional control planes can also increase the anti-fragility of the system, in that they create a system that eventually becomes consistent regardless of the conditions. Simply put, the number of protocol speakers in distributed or eventually consistent control models can create management and operations complexity (the sketch at the end of this section puts rough numbers on that growth).

An initial effort to curtail the growth of control planes was to create small clusters of systems from stand-alone elements. Each element of the cluster was bonded by a common inter-chassis data and control fabric, commonly implemented as a small, dedicated switched Ethernet network. The multichassis system took this concept a step further by providing an interconnecting fabric between the shelves, and it thus behaved as a single logical system controlled by a single control plane. Connectivity between the shelves was, however, implemented through external (network) ports, and the centralized control plane used multiple virtual control plane instances, one per shelf. It was also managed as a single logical system, in that it revealed a single IP address to the network operator, giving them one logical entity to manage. Figure 2-8 demonstrates both approaches.
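As a back-of-the-envelope illustration of how the number of protocol speakers grows, the sketch below compares a fully meshed set of distributed control plane instances against devices that each hold one session per controller. The full-mesh case (iBGP-style peering) is assumed here purely as an example; real deployments use route reflectors, confederations, and link-state areas precisely to mitigate this growth, and the numbers are illustrative only.

# Back-of-the-envelope sketch of why the number of control plane instances
# matters: in a full mesh every speaker peers with every other speaker,
# while a centralized controller needs only one session per device.
# The quadratic versus linear growth is the point; the figures are not
# measurements of any real network.

def full_mesh_sessions(n_control_planes: int) -> int:
    """Every distributed control plane instance peers with every other one."""
    return n_control_planes * (n_control_planes - 1) // 2

def centralized_sessions(n_devices: int, n_controllers: int = 1) -> int:
    """Each device keeps one session per controller (controllers may be
    replicated for redundancy)."""
    return n_devices * n_controllers

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(f"{n:>5} devices: full mesh = {full_mesh_sessions(n):>7} sessions, "
              f"controller pair = {centralized_sessions(n, 2):>5} sessions")

The quadratic versus linear growth in session state is one concrete face of the management and operations complexity described above.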