Monday, June 22, 2015

ACI Programmability: Part 1 ACIToolkit

ACI programmability is rightfully getting a lot of attention lately thanks to a robust GUI and a RESTful API that supports both JSON and XML.  The RESTful API drastically reduces the time it takes to deploy new network policy, update existing policy, and deprovision policy for a new or existing application.  As a network engineer with 15 years in the industry, most of them spent at a CLI, I had to ask whether I should learn the GUI or learn how to leverage the RESTful API to reduce provisioning and deprovisioning times.  I decided to invest my time in learning how best to take advantage of this rich API.  That said, the ACI GUI is great for creating best-practice configurations and verifying and testing them; I can then leverage the API to automate those best-practice configurations for any new adds, moves, changes, or deletions.
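To make that concrete, here is a minimal sketch of what a policy push through the REST API looks like. The APIC address, credentials, and tenant name are placeholders, and the code only builds the JSON bodies rather than posting them; the payload shapes follow the standard aaaLogin/uni.json pattern.

```python
import json

APIC = "https://apic.example.com"   # hypothetical APIC address

# Body POSTed to {APIC}/api/aaaLogin.json to authenticate
login_body = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

# Body POSTed to {APIC}/api/mo/uni.json to create (or update) a tenant;
# the same call can deprovision it when status is set to "deleted"
tenant_body = {"fvTenant": {"attributes": {"name": "Example-Tenant"}}}

print(json.dumps(tenant_body))
```

One API call against the APIC replaces box-by-box CLI configuration, which is where the provisioning time savings come from.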

I started researching the tools Cisco currently makes available to help a CLI jockey learn how to use this API.  On https://github.com/datacenter, I came across Arya, the Cobra SDK, and the ACIToolkit as popular tools for programming ACI.  Which one should I learn first?

Arya (https://github.com/datacenter/ACI/tree/master/arya) is a code generator.  It lets me save APIC objects I created in the GUI in XML or JSON format and convert them into ACI Python SDK code, so I can automate and deploy configurations in seconds.  For example, I can take JSON or XML from APIC Visore, the API Inspector, or a REST query and generate Python source code that builds a full configuration.  There are some caveats: Arya doesn't validate configurations or perform advanced lookup logic, but it's a great way to create code from JSON or XML objects or classes that already exist.  I view it as a great helper for writing code with the Cobra SDK.  But I wasn't ready to create code yet.

Next, I looked at Cobra, the ACI Python SDK (https://github.com/datacenter/cobra).  Cobra implements the complete ACI object model and can essentially do everything needed to configure an ACI fabric.  It not only provides a Python library with native bindings for all the REST functions but also a complete copy of the object model, which means data integrity can be ensured.  This is important because it validates the configuration before executing it on the APIC.  The Cobra SDK provides methods for performing lookups and queries, as well as object creation, modification, and deletion, that match the REST methods used by the GUI.  What's cool is that I can take policy built in the GUI and use it as a programming template to build new code for super-fast deployments for other tenants and their applications.  Here's a link to the Cobra API documentation (http://cobra.readthedocs.org/en/latest/).  Although I felt I wasn't quite ready for Cobra, I knew I would want to come back and learn how to use this SDK.  If you're new to Python, here are some great places to start:
http://www.pythonlearn.com/
https://docs.python.org/2/tutorial/index.html
http://www.learnpython.org/
https://developers.google.com/edu/python/
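Going back to Cobra for a moment, here's a rough sketch of what its workflow looks like. The Cobra calls in the comments follow the pattern from the SDK's documentation but aren't executed here; the runnable part just builds the equivalent JSON that a commit would post. Names and credentials are placeholders.

```python
import json

# With the Cobra SDK, the flow looks roughly like this (per its docs):
#   ls = cobra.mit.session.LoginSession('https://apic', 'admin', 'password')
#   md = cobra.mit.access.MoDirectory(ls); md.login()
#   tenant = cobra.model.fv.Tenant(cobra.model.pol.Uni(''), 'Example-Tenant')
#   req = cobra.mit.request.ConfigRequest(); req.addMo(tenant); md.commit(req)
#
# Under the hood, that commit posts JSON shaped like this to the APIC:
config = {
    "fvTenant": {
        "attributes": {"name": "Example-Tenant", "status": "created,modified"},
        "children": [],
    }
}
print(json.dumps(config, indent=2))
```

Because Cobra carries a full copy of the object model, it can validate a tree like this before the APIC ever sees it.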

Ultimately, I decided to learn the ACIToolkit (https://github.com/datacenter/acitoolkit) first.

The ACIToolkit is very use-case oriented; for example, it knows how to build an EPG, but it doesn't know how to manipulate all of the underlying objects and class inheritances.  The ACIToolkit also provides an NX-OS-like CLI for ACI, so I can leverage some of my existing CLI skills.  Here is an example of some of the CLI commands available to configure ACI:  https://github.com/datacenter/acitoolkit/blob/master/applications/cli/clicommands.txt

Take a look at a sample ACIToolkit script for building out a simple tenant and the application network profile to support a new application deployment at https://github.com/datacenter/acitoolkit/blob/master/samples/aci_demo_contract.py.  I don't need to SSH or Telnet to a number of switches to build this configuration; instead, I leverage a single API call and the APIC distributes the policy to the fabric.  You can imagine how easily you could further customize this script to deploy other applications.
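The object hierarchy a script like that builds maps directly onto the ACI managed-object tree. As a sketch (the names here are made up, and the acitoolkit calls in the comments mirror the sample's pattern without being executed), the tenant's JSON output looks roughly like this:

```python
import json

# In the sample, acitoolkit code along these lines builds the tree:
#   tenant = Tenant('tutorial'); app = AppProfile('myapp', tenant)
#   web_epg = EPG('web', app); db_epg = EPG('db', app)
#   contract = Contract('mysql-contract', tenant)
#   db_epg.provide(contract); web_epg.consume(contract)
#
# The tenant then serializes to a nested managed-object tree roughly like:
tenant_json = {
    "fvTenant": {
        "attributes": {"name": "tutorial"},
        "children": [
            {"fvAp": {                      # application network profile
                "attributes": {"name": "myapp"},
                "children": [
                    {"fvAEPg": {"attributes": {"name": "web"}, "children": []}},
                    {"fvAEPg": {"attributes": {"name": "db"}, "children": []}},
                ],
            }},
            {"vzBrCP": {                    # the contract between the EPGs
                "attributes": {"name": "mysql-contract"},
                "children": [],
            }},
        ],
    }
}
print(json.dumps(tenant_json, indent=2))
```

A single POST of that tree to the APIC configures every leaf the tenant touches.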

If you want to evolve past the CLI samples and dive into Python, take a look at an example that leverages a simple Python script to create a tenant with a single EPG and assign it statically to two interfaces: http://goo.gl/cLTYlR
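For a sense of what a static assignment involves: the EPG is bound to a switch port through a path attachment object. This is a sketch with made-up pod, leaf, port, and VLAN values; the `fvRsPathAtt` distinguished-name format follows the ACI object model.

```python
# A static path attachment binds an EPG to a specific leaf interface.
# The tDn encodes pod, leaf node, and port; encap is the VLAN on the wire.
def static_binding(pod, node, port, vlan):
    return {
        "fvRsPathAtt": {
            "attributes": {
                "tDn": f"topology/pod-{pod}/paths-{node}/pathep-[eth{port}]",
                "encap": f"vlan-{vlan}",
            }
        }
    }

# Two interfaces on leaf 101, as in the example script (values are placeholders)
bindings = [static_binding(1, 101, "1/15", 5), static_binding(1, 101, "1/16", 5)]
for b in bindings:
    print(b["fvRsPathAtt"]["attributes"]["tDn"])
```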

You can find a list of other ACIToolkit Python applications you can start to use at https://github.com/datacenter/acitoolkit/tree/master/applications.  One of my favorites is the endpoint tracker.  It allows me to search for a MAC address or IP address anywhere within the fabric, and I can sort by tenant, application network profile, or EPG.  It's extremely flexible to have a single interface to configure, monitor, audit, and troubleshoot the fabric.
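Under the hood, that kind of lookup is a class query against the fabric's fvCEp (client endpoint) objects. Here's a sketch that just builds the query URL; the APIC address and MAC are placeholders.

```python
from urllib.parse import quote

APIC = "https://apic.example.com"  # hypothetical APIC address

def endpoint_query(mac):
    # Query all fvCEp (client endpoint) objects fabric-wide, filtered by MAC.
    # The dn of each result encodes tenant / app profile / EPG, which is what
    # lets the endpoint tracker sort by those fields.
    filt = quote(f'eq(fvCEp.mac,"{mac}")')
    return f"{APIC}/api/class/fvCEp.json?query-target-filter={filt}"

print(endpoint_query("00:50:56:01:02:03"))
```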

Lastly, you can find additional examples that build out the fabric or query information from it, leveraging ACIToolkit Python scripts, at https://github.com/datacenter/acitoolkit/tree/master/samples.  The ACIToolkit is a great way to start evolving from what I know with the CLI, move on to Python scripts, and get ready to dive into the ACI Python SDK (Cobra) next.

In summary, I chose to start with ACIToolkit because I wanted something easy to learn.

Start here and go through the documentation: http://acitoolkit.readthedocs.org/en/latest/ and let me know what you think.

You can install the ACIToolkit as a VM, as a Docker image, or by simply cloning the git repo and running it natively on your PC or Mac.
https://cisco.app.box.com/s/j0jlkl3b1m7yj6oqm3e5yazesfkiicsh
https://registry.hub.docker.com/u/dockercisco/acitoolkit/
https://registry.hub.docker.com/u/dockercisco/aci/

To get you started, here is a short video showing where to begin, plus a few sample demonstrations.  Enjoy!


For clearer video:  http://youtu.be/6pTv8L0rLfE

Tuesday, June 16, 2015

ACI Security: Part 1


Now more than ever, it is critical to choose the right security architecture to address your organization's requirements.  There is a lot of noise about micro-segmentation lately.  Instead of focusing on a subset of a security architecture, wouldn't it be great to implement a fabric-wide segmentation solution that addresses multi-hypervisor and bare-metal workloads?  Wouldn't it be valuable to firewall above and beyond L4, and to inject not only a virtual security appliance but also, optionally, a physical NGFW with deep packet inspection at 10G and 40G performance?  Wouldn't it be valuable to automate the enforcement of security policies anywhere in the datacenter across the entire application lifecycle, removing security policies when application components are removed?  Wouldn't it be valuable to achieve PCI and HIPAA compliance with full auditing capabilities and health scores for how the fabric is handling your application, including a health score from the security appliances?

Here are a few ideas to explore when researching a data center security architecture:

  1. Focus on the use case: is this about policy automation, service insertion, per-application security requirements, or compliance?
  2. Decide what is enough from a firewall perspective: port filtering vs. application-level security.
  3. Is an NGFW required, coupled with service-chaining capabilities to an IPS or a web application firewall?
  4. Are you securing a NetApp NAS or a bare-metal database?
  5. Is this part of a broader security architecture (TrustSec/security tagging/policy integration across campus, branch, and data center, or auditing)?


If you were nodding yes to any of the questions above, then ACI is an option for your organization.  ACI gives you the option of deploying a zero-trust data center security model: no default trust between application components, regardless of where an entity sits, unless a whitelist policy is explicitly defined to allow connectivity.  Of course, you don't have to deploy a zero-trust model; in fact, you can mix and match based on your security needs.


ACI policies are expressed as contracts that permit, deny, log, or redirect traffic between two endpoint groups.  Keep in mind that no IP addresses are required to implement the security policies.  Imagine two endpoints belonging to distinct endpoint groups (EPGs) connected to interfaces on the same physical or virtual switch: there is no connectivity between these endpoints unless there is an explicit whitelist policy, tied to a contract, that allows communication between the endpoint groups.  Compare this with the blacklist model of traditional network switches, which allow all traffic unless otherwise specified.
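As a concrete illustration of the whitelist model, this sketch builds the JSON for a contract that permits only TCP port 80 between a consumer and a provider EPG; everything else is implicitly denied. The names are made up, and the object shapes follow the standard vzFilter/vzBrCP/vzSubj pattern from the ACI object model.

```python
import json

# A filter matching only TCP destination port 80 (http)
web_filter = {
    "vzFilter": {
        "attributes": {"name": "allow-http"},
        "children": [
            {"vzEntry": {"attributes": {
                "name": "http", "etherT": "ip", "prot": "tcp",
                "dFromPort": "80", "dToPort": "80",
            }}}
        ],
    }
}

# The contract references that filter through a subject; EPGs then
# provide/consume the contract -- note that no IP addresses appear anywhere.
contract = {
    "vzBrCP": {
        "attributes": {"name": "web-contract"},
        "children": [
            {"vzSubj": {
                "attributes": {"name": "http-subj"},
                "children": [
                    {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "allow-http"}}}
                ],
            }}
        ],
    }
}

print(json.dumps([web_filter, contract]))
```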

Let's take a look at two options available today with ACI for securing traffic between two endpoint groups.  Keep in mind that your security model with ACI can be a combination of Option 1 and Option 2, depending on what your application security requirements dictate.


Option 1:  The ACI fabric can direct traffic to a centralized or distributed pool of virtual and/or physical next-generation firewalls, while eliminating the challenge of ACL cleanup on firewalls by automatically removing policies as applications are decommissioned.



Option 2: As an alternative, the ACI fabric can enforce a semi-stateful firewall at line rate; ACI checks the initial TCP flags for directionality.  In addition, with ACI a distributed firewall security policy no longer needs to be based on IP addresses and Layer 4 ports; rules can be based on endpoint group membership and Layer 4 ports.  Consider for a moment handing out a single IP subnet for a new three-tier application and being able to secure traffic between tiers based not on the 5-tuple but on EPG membership.  Security becomes part of the fabric, enforced at every leaf node even as workloads move across the datacenter.
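The directionality check corresponds to TCP flag matching on the filter entry itself. As a sketch (the attribute names follow the ACI filter-entry model, but the entry name and ports are placeholders), a return-direction rule can be restricted to established flows, and the stateful flag asks AVS to track full connection state:

```python
# A filter entry for return traffic, permitted only when the TCP
# "established" check passes (the semi-stateful, line-rate enforcement),
# with stateful="yes" asking AVS to track full connection state.
reverse_entry = {
    "vzEntry": {
        "attributes": {
            "name": "web-return",
            "etherT": "ip",
            "prot": "tcp",
            "sFromPort": "80",      # source port 80 on the return path
            "sToPort": "80",
            "tcpRules": "est",      # only established flows
            "stateful": "yes",      # AVS stateful enforcement
        }
    }
}
print(reverse_entry["vzEntry"]["attributes"]["tcpRules"])
```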




If it isn't obvious, ACI integrates with a broad set of security ecosystem and partner technologies, such as next-generation firewalls (Cisco, Check Point, Palo Alto Networks), IDS/IPS (Cisco), DDoS mitigation (Radware), and DNS security (Infoblox), to secure north-south and east-west application traffic.  Together with ACI, these security integrations provide management plane, control plane, and data path isolation for any workload across the data center, covering micro-segmentation use cases as well as the broader use case of fabric-wide segmentation.

ACI also supports an L4 stateful firewall with AVS, as well as endpoint group membership based on VM attributes.  Remember, you have the option to go above and beyond L4 firewalling by redirecting to a next-generation firewall for any workload.  It's important to remember that an L4 firewall has its limitations, but it does help reduce the scope of what is unprotected.  If you need deep application inspection, the ideal method is to have ACI leverage a service graph to redirect to a next-generation firewall coupled with a next-generation IPS.  To protect your assets in the datacenter, you need to consider all phases of a security architecture, not just an L4 firewall on a virtual NIC or iptables.


Over the next few weeks I will provide follow-on demonstrations highlighting ACI's security architecture.  Starting with this blog, I will highlight ACI with AVS.  The diagram below shows a lab setup with an ACI fabric integrated with UCS and a traditional Nexus 5k/2k environment.  In the video below I demonstrate the fabric providing EPG (endpoint group) semi-stateful contract enforcement, as well as AVS providing an L4 stateful firewall for traffic that has bypassed the distributed semi-stateful firewall.  The demo includes an Avalanche test tool sending thousands of port-80 flows at test hosts.  The Avalanche traffic is permitted by the contract allowing port 80 between the two EPGs, and in addition you will see state being tracked by AVS.  I also have a test host with nmap installed, from which I launch SYN and FIN floods and port scans at a virtual host in a different EPG.  You will see the fabric deny and log all non-port-80 traffic, and the SYN and FIN floods delivered to AVS, where AVS drops the traffic for not being stateful.  Hope you enjoy Part 1 on this topic; I look forward to your feedback.  Lastly, I would like to give credit to Brett Huffman for his assistance with the demonstration.