
In this article, Stuart Fordham, the author of the book Cisco ACI Cookbook, helps us understand ACI and the APIC and explains how to create fabric policies.


Understanding ACI and the APIC

ACI is for the data center. It is a fabric (which is just a fancy name for the layout of the components) that can span data centers using OTV or similar overlay technologies, but it is not for the WAN. We can implement a similar level of programmability on our WAN links through APIC-EM (Application Policy Infrastructure Controller Enterprise Module), which uses ISR or ASR series routers along with the APIC-EM virtual machine to control and program them. APIC and APIC-EM are very similar; just the object of their focus is different. APIC-EM is outside the scope of this book, as we will be looking at data center technologies.

The APIC is our frontend. Through this, we can create and manage our policies, manage the fabric, create tenants, and troubleshoot. Most importantly, the APIC is not associated with the data path. If we lose the APIC for any reason, the fabric will continue to forward the traffic.

To give you the technical elevator pitch, ACI uses a number of APIs (application programming interfaces), such as REST (Representational State Transfer) carrying JSON (JavaScript Object Notation) or XML (eXtensible Markup Language) payloads, as well as the CLI and the GUI, to manage the fabric, and other protocols, such as OpFlex, to supply the policies to the network devices. The first set (those that manage the fabric) are referred to as northbound protocols. Northbound protocols allow lower-level network components to talk to higher-level ones. OpFlex is a southbound protocol. Southbound protocols (such as OpFlex and OpenFlow, which is another protocol you will hear about in relation to SDN) allow the controllers to push policies down to the nodes (the switches).

Figure 1
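As a quick illustration of the northbound side, here is a minimal sketch that uses Python and the requests library to authenticate against the APIC REST API and list the tenants on the fabric. The controller address and credentials are placeholders, and certificate verification is disabled purely for lab convenience, so treat it as a sketch rather than a production script.

import requests

APIC = "https://10.0.0.1"               # placeholder APIC address
USER, PASSWORD = "admin", "password"    # placeholder credentials

session = requests.Session()

# Log in: the APIC returns a session cookie, which the session object keeps for us
login = {"aaaUser": {"attributes": {"name": USER, "pwd": PASSWORD}}}
session.post(APIC + "/api/aaaLogin.json", json=login, verify=False)

# A simple northbound query: return every tenant object (class fvTenant)
reply = session.get(APIC + "/api/class/fvTenant.json", verify=False)
for mo in reply.json().get("imdata", []):
    print(mo["fvTenant"]["attributes"]["name"])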

This is a very brief introduction to the how. Now let’s look at the why. What does ACI give us that the traditional network does not?

In a multi-tenant environment, we have defined goals. The primary goal is to keep one tenant separate from another. We can achieve this in a number of ways.

We could place each tenant in its own DMZ (demilitarized zone), with firewall policies to permit or restrict traffic as required, and use VLANs to provide logical separation between tenants. This approach has two drawbacks. The first is that it places a greater onus on the firewall to direct traffic, which is fine for northbound traffic (traffic leaving the data center) but is not suitable when the majority of the traffic is east-west bound (traffic between applications within the data center; see figure 2).

Figure 2

We could use switches to provide layer-3 routing and use access lists to control and restrict traffic; these are well designed for that purpose.

The second drawback is that, in using VLANs, we are restricted to a maximum of 4,096 potential tenants (due to the 12-bit VLAN ID).

An alternative would be to use VRFs (virtual routing and forwarding). VRFs are self-contained routing tables, isolated from each other unless we instruct the router or switch to share the routes by exporting and importing route targets (RTs). This approach is much better for traffic isolation, but when we need to use shared services, such as an Internet pipe, VRFs can become much harder to keep secure.

One way around this would be to use route leaking. Instead of having a separate VRF for the Internet, the Internet-facing routes are kept in the global routing table and then leaked to both tenants. This maintains the security of the tenants, and as we are using VRFs instead of VLANs, we have a service that we can offer to more than 4,096 potential customers. However, we also have a much bigger administrative overhead: for each new tenant, we need more manual configuration, which increases our chances of human error.

ACI allows us to mitigate all of these issues.

By default, ACI tenants are completely separated from each other. To get them talking to each other, we need to create contracts, which specify what network resources they can and cannot see. There are no manual steps required to keep them separate from each other, and we can offer Internet access rapidly during the creation of the tenant. We also aren’t bound by the 4,096 VLAN limit. Communication is through VXLAN, which raises the ceiling of potential segments (per fabric) to 16 million (by using a 24-bit segment ID).
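The jump from thousands of segments to millions comes straight from the width of the identifier; a quick back-of-the-envelope check in Python shows the two ceilings mentioned above:

# 12-bit VLAN ID versus 24-bit VXLAN segment ID (VNID)
print(2 ** 12)    # 4096 possible VLANs
print(2 ** 24)    # 16777216, roughly 16 million possible VXLAN segments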

Figure 3

VXLAN is an overlay mechanism that encapsulates layer-2 frames within layer-4 UDP packets, also known as MAC-in-UDP (figure 3). Through this, we can achieve layer-2 communication across a layer-3 network. Beyond the facts that VXLAN lets tenants be placed anywhere in the data center and that the number of available segments far exceeds what the traditional VLAN approach allows, its biggest benefit is that we are no longer bound by the Spanning Tree Protocol. With STP, the redundant paths in the network are blocked (until needed). VXLAN, by contrast, uses layer-3 routing, which enables it to use equal-cost multipathing (ECMP) and link aggregation technologies to make use of all the available links, with recovery (in the event of a link failure) in the region of 125 microseconds.
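To make the MAC-in-UDP idea more concrete, the short sketch below hand-packs the 8-byte VXLAN header defined in RFC 7348 (an 8-bit flags field with the I bit set, a 24-bit VNI, and reserved bytes) and prepends it to a dummy layer-2 frame. On a real fabric the VTEP hardware does this for you; the values here are made up purely for illustration.

import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    # RFC 7348 header: flags (1 byte, I bit = 0x08), 3 reserved bytes,
    # VNI (3 bytes), 1 reserved byte -- 8 bytes in total
    header = struct.pack("!B3s3sB", 0x08, b"\x00" * 3, vni.to_bytes(3, "big"), 0)
    # The result rides inside a UDP datagram (destination port 4789)
    return header + inner_frame

dummy_frame = bytes(14)                       # stand-in for a real Ethernet frame
packet = vxlan_encapsulate(dummy_frame, vni=10010)
print(len(packet), packet[:8].hex())          # 22 bytes; first 8 are the VXLAN header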

With VXLAN, we have endpoints, referred to as VXLAN Tunnel Endpoints (VTEPs), and these can be physical or virtual switch ports. Head-End Replication (HER) is used to forward broadcast, unknown unicast, and multicast traffic, which is referred to (quite amusingly) as BUM traffic.

This 16M limit with VXLAN is more theoretical, however. In practice, we have a limit of around 1M entries in terms of MAC addresses, IPv4 addresses, and IPv6 addresses, due to the size of the TCAM (ternary content-addressable memory). The TCAM is high-speed memory used to speed up the reading of routing tables and the matching of packets against access control lists. The amount of available TCAM became a worry back in 2014, when the BGP routing table first exceeded 512 thousand routes, which was the maximum number supported by many Internet routers. Reaching 1M entries within the fabric is also pretty unlikely, but even at that scale, ACI remains scalable because the spine switches tell the leaf switches about only the routes and endpoints they need to know about. If you are lucky enough to be scaling at this kind of magnitude, however, it would be time to invest in more hardware and split the load onto separate fabrics. Still, a data center with thousands of physical hosts is very achievable.

Creating fabric policies

In this recipe, we will create an NTP policy and assign it to our pod. NTP is a good place to start, as having a common and synced time source is critical for third-party authentication (such as LDAP) and for logging.

We will use the Quick Start menu to create the NTP policy, in which we will define our NTP servers. We will then create a pod policy group and attach our NTP policy to it. Lastly, we'll create a pod profile, which calls the policy group and applies it to our pod (our fabric).

We can assign pods to different profiles, and we can share policies between policy groups. So we may have one NTP policy but different SNMP policies for different pods.

The ACI fabric is very flexible in this respect.

How to do it…

  1. From the Fabric menu, select Fabric Policies. From the Quick Start menu, select Create an NTP Policy.
  2. A new window will pop up. Here, we'll give our new policy a name and an (optional) description, and enable it. We can also define any authentication keys, if the servers use them. Clicking on Next takes us to the next page, where we specify our NTP servers.
  3. Click on the plus sign on the right-hand side, and enter the IP address or Fully Qualified Domain Name (FQDN) of the NTP server(s).
  4. We can also select a management EPG, which is useful if the NTP servers are outside of our network. Then, click on OK.
  5. Click on Finish. We can now see our custom policy under Pod Policies.

    At the moment, though, we are not using it.

  6. Clicking on Show Usage at the bottom of the screen shows that no nodes or policies are using the policy.
  7. To use the policy, we must assign it to a pod, as we can see from the Quick Start menu.
  8. Clicking on the arrow in the circle will show us a handy video on how to do this. We need to go into the policy groups under Pod Policies and create a new policy group.
  9. To create the policy group, click on the Actions menu, and select Create Pod Policy Group.
  10. Name the new policy group PoD-Policy. From here, we can attach our NTP-POLICY to PoD-Policy: click on the drop-down next to Date Time Policy, and select NTP-POLICY from the list of options.

    We can see our new policy.

  11. We have not finished yet, as we still need to create a pod profile and assign our policy group to it. The process is similar to before: we go to Profiles (under the Pod Policies menu), select Actions, and then Create Pod Profile.
  12. We give it a name and associate our policy group with it.

How it works…

Once we create a policy, we must associate it with a pod policy group. The pod policy group must then be associated with a pod profile.

We can see the results in the APIC GUI: our APIC is set to use the new profile, which will be pushed down to the spine and leaf nodes.
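If you would rather script this than click through the GUI, the same chain can be pushed through the northbound REST API. The sketch below assumes an authenticated requests session like the one shown earlier, and the class and attribute names (datetimePol, datetimeNtpProv, fabricPodPGrp, and fabricRsTimePol) reflect my reading of the ACI object model, so verify them with the API Inspector on your own fabric before relying on them.

# Assumes APIC and an authenticated 'session' as in the earlier REST sketch
ntp_policy = {
    "datetimePol": {
        "attributes": {"name": "NTP-POLICY", "adminSt": "enabled"},
        "children": [
            {"datetimeNtpProv": {"attributes": {"name": "216.239.35.4", "preferred": "yes"}}}
        ],
    }
}
session.post(APIC + "/api/mo/uni/fabric.json", json=ntp_policy, verify=False)

pod_policy_group = {
    "fabricPodPGrp": {
        "attributes": {"name": "PoD-Policy"},
        "children": [
            # The Date Time Policy drop-down in the GUI maps to this relation
            {"fabricRsTimePol": {"attributes": {"tnDatetimePolName": "NTP-POLICY"}}}
        ],
    }
}
session.post(APIC + "/api/mo/uni/fabric/funcprof.json", json=pod_policy_group, verify=False)

The pod profile that ties PoD-Policy to the fabric can be created in the same way, and Show Usage in the GUI remains the quickest way to confirm that the policy is actually being consumed.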

We can also check the NTP status from the APIC CLI, using the command show ntp.

apic1# show ntp
 nodeid     remote          refid    st    t  when   poll   reach   delay   offset   jitter
 -------- - --------------- -------- ----- -- ------ ------ ------- ------- -------- --------
 1          216.239.35.4    .INIT.   16    u  -      16     0       0.000   0.000    0.000
apic1#

Summary

In this article, we summarized ACI and the APIC and walked through creating fabric policies.
