
To ensure service level and capacity as internet traffic increases, organizations need higher-speed networks

Two Options for Web Content Filtering at the Speed of Now

Because not everything the internet offers is suitable for all users, organizations use web filters to block unwanted content. However, filtering content becomes challenging as network speeds increase. Two filtering architectures are explored below, along with criteria to help you decide which option is the best fit for your organization.

How Fast Can You Filter?
In telecom networks serving hundreds of thousands of users, 100 Gbps links are being introduced to keep up with demand. The market has matured around solutions for web content filtering at 1 Gbps and 10 Gbps, but filtering at 100 Gbps poses a whole new set of challenges.

Filtering content at this speed demands substantial processing power, and traffic must be distributed across the available processing resources. This is usually achieved with hash-based 2-tuple or 5-tuple flow distribution keyed on subscriber IP addresses. In telecom core networks, subscriber IP addresses are carried inside GTP tunnels, so efficient load distribution there also requires GTP support: the hash must be computed on the inner subscriber headers, not the tunnel endpoints.
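As an illustration, hash-based flow distribution might look like the following sketch. The worker count and field names are hypothetical, and a real deployment would compute this in NIC hardware or a fast-path dataplane rather than Python:

```python
import hashlib

NUM_WORKERS = 8  # hypothetical number of processing threads


def worker_for_flow(src_ip, dst_ip, src_port=None, dst_port=None, proto=None):
    """Pick a worker via a hash of the flow's 2-tuple or 5-tuple.

    For GTP-encapsulated traffic these fields must come from the inner
    (subscriber) headers; hashing the outer tunnel endpoints would send
    most tunneled traffic to the same worker.
    """
    fields = [src_ip, dst_ip]
    if src_port is not None:  # extend the 2-tuple to a 5-tuple
        fields += [str(src_port), str(dst_port), str(proto)]
    digest = hashlib.sha256("|".join(fields).encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_WORKERS
```

Because the hash is deterministic, all packets of a flow land on the same worker, which keeps per-flow state local and avoids cross-thread locking.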

Building Filtering Capacity
There are two main ways to provide the processing resources and load distribution that high-speed filtering requires. The first option is a stacked, distributed server solution: a high-end load balancer plus standard COTS servers equipped with several 10 Gbps standard NICs. The load balancer sits in-line on the 100 Gbps link and distributes traffic to the 10 Gbps ports on the standard servers. It must support GTP and flow distribution based on subscriber IP addresses.

Because the load balancer cannot guarantee a 100 percent even load distribution, overcapacity is needed on the distribution side. A reasonable configuration uses 24 x 10 Gbps links: three standard servers, each equipped with four 2 x 10 Gbps standard NICs, together provide 240 Gbps of traffic capacity (3 x 4 x 2 x 10 Gbps). Twenty-four cables for the 10 Gbps links round out the solution.
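A quick back-of-the-envelope check of those figures, using the numbers from the example above:

```python
# Stacked-solution capacity, per the example configuration.
servers = 3
nics_per_server = 4
ports_per_nic = 2
port_speed_gbps = 10

links = servers * nics_per_server * ports_per_nic        # cables needed
total_gbps = links * port_speed_gbps                     # aggregate capacity

print(links)       # 24 x 10 Gbps links
print(total_gbps)  # 240 Gbps, headroom over a 100 Gbps full-duplex link
```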

Though the load balancer is costly, the standard COTS servers and NICs are relatively inexpensive and offset the expense. However, the solution involves many components and complex cabling, the required rack space is relatively large, and the multi-chassis design makes system management complex.

The second option consolidates load distribution, 100G network connectivity and the total processing power in a single server. This is called a single, consolidated server solution, and it requires a COTS server and two 1 x 100G Smart NICs. Since up to 200 Gbps traffic needs to be processed within the same server system, the server must be equipped with multiple cores for parallel processing. For example, a server with 48 CPU cores can run up to 96 flow processing threads in parallel using hyper-threading.

To fully use CPU cores, the Smart NIC must support load distribution to as many threads as the server system provides. Also, to ensure balanced use of CPU cores, the Smart NIC must support GTP tunneling. The Smart NIC should also support these features at full throughput and full duplex 100 Gbps traffic load, for any packet size.
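In software terms, the pattern the Smart NIC implements in hardware is one queue per flow-processing thread, with packets steered by a hash of the inner subscriber IP. A minimal sketch follows; the hash function, queue type and filtering logic are placeholders, not a description of any particular Smart NIC's API:

```python
import os
import queue
import zlib

# One queue per flow-processing thread. A 48-core server with
# hyper-threading enabled would give num_threads == 96.
num_threads = os.cpu_count() or 1
queues = [queue.Queue() for _ in range(num_threads)]


def steer(packet, subscriber_ip):
    """Emulate the Smart NIC's hash-based steering in software.

    Hashing the inner (GTP) subscriber IP keeps each subscriber's
    traffic on one thread while spreading subscribers across all threads.
    """
    idx = zlib.crc32(subscriber_ip.encode()) % num_threads
    queues[idx].put(packet)
    return idx


def filter_worker(q):
    """One of these would run pinned to each hardware thread."""
    while True:
        pkt = q.get()
        if pkt is None:  # shutdown sentinel
            break
        # content-filtering rules would be applied to pkt here
```

The steering step is exactly what must run at line rate for any packet size, which is why it belongs in the NIC rather than on the CPUs it feeds.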

This single-server solution has multiple benefits. It provides one-stop system management, with no complex dependencies between multiple chassis. Cabling is simple because there is only a single component. The footprint in the server rack is very small, reducing rack-space hosting expenses.

Determining Factors
The technical specifications for a high-speed web filtering solution are important, but so is the total cost of ownership. Here are some significant parameters for operations expenditure (OPEX) and capital expenditure (CAPEX) calculations. For OPEX, consider rack-space hosting expenses, warranty and support, and power consumption - including cooling - for servers, NICs and load balancers. CAPEX considerations include the costs of software, servers and Smart NICs or standard NICs.
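One way such a comparison might be structured is sketched below. All figures are invented placeholders for illustration, not real pricing for either architecture:

```python
def tco(capex_hw, capex_sw, annual_rack, annual_power, annual_support,
        years=5):
    """Total cost of ownership: CAPEX up front plus OPEX over the period."""
    opex = years * (annual_rack + annual_power + annual_support)
    return capex_hw + capex_sw + opex


# Hypothetical figures for the two architectures (illustrative only):
# the stacked option trades lower hardware unit prices for higher
# rack-space and power costs; the consolidated option does the reverse.
stacked = tco(capex_hw=90_000, capex_sw=40_000,
              annual_rack=12_000, annual_power=9_000, annual_support=8_000)
consolidated = tco(capex_hw=60_000, capex_sw=40_000,
                   annual_rack=3_000, annual_power=4_000, annual_support=5_000)
```

Whatever the actual numbers, the point is that OPEX accrues every year, so a multi-year horizon can reverse a comparison that looks close on CAPEX alone.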

So, which web content filtering option is right for your organization? It depends on your use case. The difference in costs between the two options will certainly be a determining factor, so consider carefully which method will best serve your needs and those of your customers. If your situation is better served by a simplified, consolidated method, take a closer look at how Smart NICs can provide the support for the speed you need.

More Stories By Sven Olav Lund

Sven Olav Lund is a Senior Product Manager at Napatech and has over 30 years of experience in the IT and Telecom industry. Prior to joining Napatech in 2006, he was a Software Architect for home media gateway products at Triple Play Technologies. From 2002 to 2004 he worked as a Software Architect for mobile phone platforms at Microcell / Flextronics ODM and later at Danish Wireless Design / Infineon AG.

As a Software Engineer, Sven Olav started his career architecting and developing software for various gateway and router products at Intel and Case Technologies. He has an MSc degree in Electrical Engineering from the Danish Technical University.


