Virtualization


The traditional security model has put significant emphasis on what’s typically called the ‘external edge’ — that is, the connection between your network and any third-party network.  This is also where we draw the line between ‘trusted’ and ‘untrusted’ networks.  However you define this boundary, it becomes the focal point for any security-related tooling.  This creates some interesting challenges…

Scale – Applying security tooling at the external edge introduces some possible scale concerns.  You now have a single point in the network that has to scale to provide connectivity and security services to all of the users and applications.  While this might make sense in smaller networks, aggregating everything in one place on larger networks can be challenging.  Considering that many security tools can handle far less traffic than routers and switches, you may find that doing this all in one place introduces a bottleneck in the network.  Scaling security appliances is often a much larger task than scaling network links.

Network magic – I often joke that network engineers have to perform network magic to get all of the security tools all of the traffic they want to see.  The security industry is booming right now with thousands of different vendors offering new and different security tools. Even if you wanted to, you couldn’t possibly implement them all. This sprawl has made dedicated packet broker networks that manage spanning traffic to all of these tools almost a requirement in modern networks.  To make matters more interesting, many of the newer tools are deployed ‘inline’ making the network engineering piece of this considerably harder.

Trusted vs. Untrusted – The external edge model makes a clear delineation between which networks are considered trusted and which are considered untrusted.  Given that many of the major attacks and data breaches we read about originate from inside the company (from the ‘trusted’ network), we need to change this model.  Treating everything on your internal network as completely secure is no longer realistic.

It’s pretty clear we can’t continue focusing solely on the external edge of the network.  So where do we go from here?

Enter Skyport Systems and their SkySecure computing platform.  Skyport sells hardened servers that are the building blocks for your virtual machine infrastructure.  But it’s much more than just servers running virtual machines — it’s an entire security platform.  So how is this different from any other server running a hypervisor?

Secure Hardware – The system is built from the ground up to be entirely secure.  The system is split into two halves, an x86 system and a security co-processor.  Each half has its own dedicated TPM (Trusted Platform Module) which measures the registers of each hardware component.  When the system boots, it calls home and checks the current measurements against known-good values.  If the measurements check out, the system is allowed to start its secure boot process.  Once the system boots, the hardware measurements continue to be taken to ensure the system hasn’t been tampered with.

Security I/O Co-Processor – The security co-processor offloads many of the security-related tasks from the x86 half of the system.  It has a 40 gig flow processor and is the system’s connection point to the network.  This is a unique piece of hardware built by Skyport Systems and allows much of the security workload to stay off of the x86 compute.

Security Model – Each VM in the system is described as being its own individual DMZ.  As VMs boot on the system, each instance is given its own dedicated instance of the NIC.  There are no vSwitches or port-groups for VMs to talk directly to each other.  Each VM has to transit the security co-processor to get to another VM on the same system.  Additionally, Skyport defaults to a zero trust model.  That is, VMs can’t talk to anything without explicit rules allowing the communication.

Management – The SkySecure compute nodes are managed through a cloud-based secure portal which makes the system largely plug and play.  In addition to management, the portal also provides a secure data warehouse which exists for the lifetime of a system.  This provides an important feature – full accounting and auditing.  This means that everything from administrative tasks to attacks on the system can be reported on and correlated.  The portal is also designed to support multi-tenancy right out of the box.

Application Proxies – The system itself provides service proxies for common protocols that need to go off box.  These proxies cover everything from crypto to Active Directory and allow the system to shield the VMs from any possible protocol misuse.

All of these components (and much more) make Skyport the first of its kind.  There are some obvious benefits to this type of infrastructure.  From a network perspective, it helps reduce the need for ‘network magic’ by including many of the security tools directly in the compute platform.  The use of the security co-processor makes this almost entirely transparent to the x86 compute.  It also helps manage security scaling.  If the controls Skyport delivers meet your security requirements, you can consider excluding server traffic from inspection at your external edge.  In some scenarios, like DMZs, the Skyport system may in itself provide the majority of the security controls required.

However, as with any new technology, there will be some concerns…

Correlation – While I believe Skyport does a great job of correlating data between all of their tools on box, this will likely not entirely replace your security toolset.  This means that we still have the problem of correlating data between all the other disparate security systems.  While this is not Skyport’s problem, their portal is yet another system that needs to be added to the mix.

Compute – For Skyport to work well, it requires custom hardware.  The trend today has been to move to more ‘open’ compute platforms that are more generic in nature.  The reasoning for this trend has been mostly around cost.  The tradeoff between open compute and Skyport is likely obvious – cost vs. inherent security.  In addition, Skyport doesn’t sell you the systems outright.  The servers are leased, and Skyport takes care of refreshing the hardware over time, making sure you don’t get stuck with outdated proprietary hardware.

Brand familiarity – While I’ll be the first to admit this is wrong, lots of us are prone to brand preference.  While there are many reasons for this, one of the big problems is that we usually have a lot of time and money committed to incumbent products.  I expect most security experts will want to see comparisons between the service Skyport offers and competitive products.

All in all, I think Skyport is offering a turnkey secure compute platform.  Many of the features they offer are the first of their kind in this space, and the culmination of all of these tools on one box is more than appealing.  If you’re looking for more information on Skyport, I recommend you check out the following videos…

Introduction to Skyport and Skysecure

Skyport’s Skysecure system

Breaking the kill chain demo

Discussion with Skyport executives


I’ve recently started to play around with OpenStack and decided the best way to do so would be in my home lab.  During my first attempt, I ran into quite a few hiccups that I thought were worth documenting.  In this post, I want to talk about the prep work I needed to do before I began the OpenStack install.

For the initial build, I wanted something simple so I opted for a 3 node build.  The logical topology looks like this…

[Image: logical topology]

The physical topology looks like this…

[Image: physical topology]

It’s one of my home lab boxes: a 1U Supermicro with 8 gigs of RAM and a 4-core Intel Xeon (X3210) processor.  The hard drive is relatively tiny as well, coming in at 200 gig.  To run all of the OpenStack nodes on one server, I needed a virtualization layer, so I chose ProxMox (KVM) for this.

However, running a virtualized OpenStack environment presented some interesting challenges that I didn’t fully appreciate until I was almost done with the first build…

Nested Virtualization
You’re running a virtualization platform on a virtualized platform.  While this doesn’t seem like a huge deal in a home lab, your hardware (at least in my setup) has to support nested virtualization on the processor.  To be more specific, your VM needs to be able to load two kernel modules: kvm and kvm_intel (or kvm_amd if that’s your processor type).  In all of the VM builds I did up until this point, I found that I wasn’t able to load the proper modules…

[Image: kvm kernel modules failing to load]
ProxMox has a great article out there on this, but I’ll walk you through the steps I took to enable my hardware for nested virtualization.

The first thing to do is to SSH into the ProxMox host, and check to see if hardware assisted virtualization is enabled.  To do that, run this command…

Note: You should first check the system’s BIOS to see if Intel VT or AMD-V is disabled there.
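The command itself only survives as a screenshot; on an Intel system the check typically looks like this (a sketch — the module parameter path assumes a stock kernel with the kvm_intel module loaded):

```shell
# Check whether nested virtualization is enabled for the kvm_intel module.
# Prints Y/1 when enabled, N/0 when disabled, or a note if the module isn't loaded.
if [ -f /sys/module/kvm_intel/parameters/nested ]; then
    cat /sys/module/kvm_intel/parameters/nested
else
    echo "kvm_intel module not loaded"
fi
```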

In my case, that yielded this output…

[Image: command output showing ‘N’]
You guessed it, ‘N’ means not enabled.  To change this, we need to run this command…

Note: Most of these commands are the same for Intel and AMD.  Just replace any instance of ‘intel’ below with ‘amd’.
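The exact command was lost with the screenshot; on ProxMox the usual approach is to set the module option persistently via modprobe.d — a sketch, assuming the stock modprobe.d layout:

```
# /etc/modprobe.d/kvm-intel.conf — enable nested virtualization for kvm_intel
options kvm-intel nested=Y
```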

Then we need to reboot the ProxMox host for the setting to take effect.  Once it’s back up, you should be able to run the above command again and get the following output…

[Image: command output showing ‘Y’]
It’s also important that we make sure to set the CPU ‘type’ of the VM to ‘host’ rather than the default of ‘Default (kvm64)’…

[Image: VM CPU type set to ‘host’]
If we reboot our VM and check the kernel modules we should see that both kvm and kvm_intel are now loaded. 

[Image: lsmod output showing kvm and kvm_intel loaded]
Once the correct modules are loaded, you’ll be all set to run nested KVM.

The Network
From a network perspective, we want our hosts to logically look something like this…

[Image: VM with three NICs]
Nothing too crazy here, just a VM with 3 NICs.  While I’m used to running all sorts of crazy network topologies virtually, this one gave me slight pause.  One of the modes that OpenStack uses for getting traffic out to the physical network is dot1q (VLAN) trunking.  In most virtual deployments, the hypervisor host gets a trunk port from the physical switch containing multiple VLANs.  Those VLANs are then mapped to ports or port-groups which can be assigned to VMs.  The net effect of this is that the VMs appear on the physical network in whatever VLAN you map them into without having to do any config on the VM OS itself.  This is very much like plugging a physical server into a switch and tagging it as an access port for a particular VLAN.  That model looks something like this…

[Image: hypervisor mapping VLANs to VM port-groups]
This is the model I planned on using for the management and overlay NICs on each VM.  However, this same model does not apply when we start talking about our third NIC.  This NIC needs to be able to send traffic tagged on the VM itself.  That looks more like this…

[Image: VM tagging its own traffic]

So while the first two interfaces are easy, the third interface is entirely different since what we’re really building is a trunk within a trunk.  So the physical diagram would look more like this…

[Image: trunk within a trunk]
At first, I thought that as long as the VM NIC for the third interface (the trunk) was untagged, things would just work.  The VM would tag the traffic, the bridge on the ProxMox host wouldn’t modify the tag, and the physical switch would receive a tagged frame.  Unfortunately, I didn’t have any luck getting that to work.  Captures seemed to show that the ProxMox host was stripping the tags before forwarding them on its trunk to the physical switch.  Out of desperation, I upgraded the ProxMox host from 3.4 to 4 and the problem magically went away.  Wish I had more info on that, but that’s what fixed my issue.

So here’s what the NIC configuration for one of the VMs looks like…

[Image: VM NIC configuration]
I have 3 NICs defined for the VM.  Net0 will be in VLAN 10 but notice that I don’t specify a VLAN tag for that interface.  This is intentional in my configuration.  For better or worse, I don’t have a separate management network for the ProxMox server itself.  In addition, I manage the ProxMox server from the IP interface associated with the single bridge I have defined on the host (vmbr0)…

[Image: vmbr0 bridge configuration]
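The bridge config only shows up in a screenshot; a single-bridge setup on ProxMox generally looks something like this (interface names and addresses here are assumptions, not my actual config):

```
# Sketch of /etc/network/interfaces on the ProxMox host
auto vmbr0
iface vmbr0 inet static
    address 192.168.10.5
    netmask 255.255.255.0
    gateway 192.168.10.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```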
Normally, I’d tag the vmbr interface in VLAN 10, but that would imply that all VMs connected to that bridge would also inherently be in VLAN 10.  Since I don’t want that, I skip tagging at the bridge level and tag at the VM NIC level instead.  So back to the original question: how are these things on VLAN 10 if I’m not tagging VLAN 10?  On the physical switch, I configure the trunk port with a native VLAN of 10…

[Image: switch trunk port config with native VLAN 10]
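The switch config was captured as a screenshot; a hypothetical Cisco-style equivalent (port name and syntax assumed, your switch may differ):

```
! Trunk port facing the ProxMox host, untagged frames land in VLAN 10
interface GigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk native vlan 10
```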
What this does is tell the switch that any frames that arrive untagged should be a member of VLAN 10.  So this solves my problem and frees me up to either tag on the VM NIC (as I do with net0) or tag on the VM itself (as I’ll do with net2) while keeping all VM interfaces on a single bridge.

Summary
I can’t stress enough the importance of starting off on the right foot when building a lab like this.  Mapping all of this out before you start will save you TONS of time in the long run.  In the next post, we’re going to start building the VMs and installing the operating systems and prerequisites.  Stay tuned!


So we’ve done quite a bit with docker up to this point.  If you’ve missed the earlier posts, take a look at them here…

Getting Started with Docker
Docker essentials – Images and Containers
Docker essentials – More work with images

So I’d like to take us to the next step and talk about how to use docker files.  As we do that, we’ll also get our first exposure to how docker handles networking.  So let’s jump right in!

We saw earlier when working with images that the primary method for modifying images was to commit your container changes to an image.  This works, but it’s a bit clunky since you’re essentially starting a docker container, making changes, exiting out of it, and then committing the changes.  What if we could just run a script that would build the image for us?  Enter docker files!

Docker has the ability to build an image based on a set of instructions referred to as a docker file.  Using the docker build command, we can rather easily build a custom image and then spin up containers based upon that image.  Docker files use a fairly simple syntax where you specify a command followed by an argument.  Any line can be prefaced with a ‘#‘, making anything after it a comment.  Let’s take a look at some common commands you’ll use in docker files…

Note: Docker suggests that all commands are specified in uppercase but it is not required.  Interestingly enough, you’ll see later on that the docker file’s name itself (Dockerfile) is case sensitive.

FROM – From specifies the base image you’ll use to build your new image.

MAINTAINER – Lets you specify the author of the image.

RUN – Lets you run a command in the container.  After the run completes, the build process commits the result to the image stack.  This is important to remember, and we’ll see an example of it when we run a test later.  There are two forms of this command: one where you pass a simple shell command to docker (this is what I’ll be using), and the exec form, where you run the command directly and don’t require /bin/sh on the image.

CMD – Specifies the command that should be run when a container is built off an image.  Recall that when you create a container, you need to tell it what process to run.  CMD lets you specify this at image creation so it doesn’t need to be specified at container runtime.  You can only have one CMD per docker file, much like you can only have one command run when you launch a container.

EXPOSE – Tells docker which ports the container should expose to the host it’s running on.  For instance, in our example we’ll have two expose statements since our container will host both Apache and an SSH daemon.  When we run the container, we can tell docker to read the expose statements and publish just those ports.

ADD – Copies a file from the docker host into the image.  This is useful for copying configuration files as well as any other files you may want on the image for other purposes.  You specify the local location as well as where you want the file on the docker image.  Pretty simple.

So that’s just a taste of the available commands and examples of them but it’s all we need for our example at this point.  So let’s build an example docker file so we can see how powerful this is.
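The docker file itself was shown as a screenshot; a sketch consistent with the sections described below might look like this (package names, file paths, and the key-setup commands are assumptions on my part):

```
# Documentation stuff(s)
FROM centos:latest
MAINTAINER <your name>

# Install the EPEL repo, then Apache, SSH, and Supervisor
RUN yum -y install epel-release
RUN yum -y install httpd openssh-server supervisor

# Download the Supervisor config and the apache index page
ADD supervisord.conf /etc/supervisord.conf
ADD index.html /var/www/html/index.html

# Configure SSH host keys and set the root password to 'docker'
RUN ssh-keygen -A
RUN echo "root:docker" | chpasswd

# Expose SSH and HTTP ports
EXPOSE 22
EXPOSE 80

# Command to run supervisord in foreground
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
```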

So at this point, we should be familiar with the commands I’m using.  The instructions following each command might not make sense if you aren’t a Linux person (trust me, I’m barely a Linux person).  I broke the config down into a couple of sections, so let’s walk through each and see what I’ve done…

Documentation stuff(s)
Here I’m telling the docker file which image to use as a base image.  If you attempt to build the image without having the specified base image already on the host, docker will download it for you.  Secondly, I’m putting a brief comment in about who created and owns the image.

Install the EPEL repo
I need the EPEL repo for some of the stuff I want to install.  Namely Supervisor.  Supervisor (process is named supervisord) is a process control system written for Linux.  Essentially, it’s what allows us to run and maintain multiple processes in the docker container.

Download the Supervisor config
Recall that when you launch a container, you can only list one process.  Since we want to run more than one in this case, we need to install Supervisor into the image, provide the config file, and list the supervisord process as the one process we want to load in the container.  Supervisor is not a part of docker, but it certainly seems to be a good fit.  Check them out here.  The config file we’re downloading looks like this…
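The config file only appears in a screenshot; a sketch of a supervisord.conf matching the description below (program names and paths are assumed):

```
; Run supervisord itself in the foreground of the container
[supervisord]
nodaemon=true

; Run sshd and apache in the foreground as well
[program:sshd]
command=/usr/sbin/sshd -D

[program:httpd]
command=/usr/sbin/httpd -DFOREGROUND
```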

So this is pretty straightforward.  First, we tell the supervisord process not to run as a daemon, since it needs to run in the foreground of the container.  If it doesn’t, the container will run the process once and then quit because it finished running it.  Next, we specify the apps we want to run and indicate that we want them run in the foreground for the same reason.

Download the apache index page
Since I’m once again building this image to impress my friend Bob, I want a super cool index page for apache to serve when I spin up the image, so I’ll load it in during image creation.

Configure SSH
Since this is a brand new host, we need to do all of the normal stuff to setup SSH.  I’m hoping that some of this (besides the key creation) can get added into the base image at some point to make this easier.

Set the root password
I set the root password so I can login.  I’m sure there are better and more secure ways to do this but I do it this way here just to prove that SSH works.

Expose SSH and HTTP ports
These are the ports I want to expose on my end host.  If I list them here in the docker file, I can just pass the ‘-P’ flag to docker run and it will automatically publish these ports rather than having me specify them.  The downside is that I can’t specify a host port with the expose command, only the container port.  More on this later.

Command to run supervisord in foreground
As we already discussed, I need to do this to keep the container running with multiple processes.

So now that we have a file, let’s talk about the build process in docker.  To build the image, I use the docker build command.  Let’s give it a try and see what happens…
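The command itself only shows up in the screenshot below; based on the description it was presumably something like this (the image name is assumed):

```
# Build an image named 'webapp' from the docker file in the current directory
docker build -t webapp .
```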

[Image: docker build output (first part)]

So the first thing I do is show you that I made a folder called webapp.  In this folder are three things: the docker file, the index.html I want to copy to the image, and the supervisord configuration I want to copy to the image.  If the two files I wanted to copy were in a different location than the docker file, I would have to specify their full path in the docker file.  The next thing I do is kick off the actual build of the image.  Note that I specify the image name with the -t flag and then specify a ‘.’ at the end of the command to signify that the location of the docker file is local.  Then the build starts.  The output is huge, so this is only the first part of it.  The tail end of the output should include a success message…

[Image: docker build success message]

So it built successfully!  One thing I want to point out here is that if you look at the output, you can see that the build process is building intermediate containers for each step of the docker file.  And while it doesn’t call it out entirely, it’s also creating an image for each step of the process.  These image names can be seen on the next line after the ‘Running in <container name>’ log.  If we look at our image tree now we see…

[Image: docker image tree]

Wohah!  That’s a lot of images!  You can see that after the base image of centos:latest we have 15 user images!  Looking back at our docker file, we can see that we had 15 executable lines not counting the last CMD command.  So docker creates an image for each line in the docker file with the exception of the CMD line.  You can see how this might get out of hand with an extensively long docker file.  Hopefully at some point there will be a way to squash the user images together while maintaining the base image.

So now that we have an image, let’s run it as a container and see what we get…
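The run command was only captured in the screenshot; it likely resembled this (container and image names assumed):

```
# Run the image as a daemon (-d), publishing exposed ports to random host ports (-P)
docker run -d -P --name webapp1 webapp

# Show the running container and its port mappings
docker ps
```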

[Image: docker run output]

Notice how we passed 3 flags to the container.  The name is obvious, the ‘-d’ flag tells docker to run the container as a daemon, and the ‘-P’ flag tells it to publish the ports from the docker file to random ports on the host.  We can see these ports by looking at the running containers…

[Image: docker ps output showing port mappings]

Looking at the container, we can see that docker has mapped port 22 on the container to port 49159 on the physical host.  Additionally, port 80 on the container has been mapped to port 49160 on the physical host.  So let’s test this out and see how it works!

I’ll connect via SSH by passing the port in my connect command…

ssh -l root -p 49159 docker

Then we authenticate with the password of ‘docker’ as set in our docker file and we’re good to go…

[Image: SSH session into the container]

Notice how the hostname of the container is the same as the container ID.  Now, let’s see if our apache web server came up as we expected…

[Image: the apache index page]

There it is, you can now see Bob’s awesome home page!  Note that if you were to stop the supervisord process through the SSH connection, the container would quit.

Now, let’s send this docker file over to Bob and see what happens.  We’ll zip up the entire webapp folder and send it over to him…

[Image: zipping up the webapp folder]

Now that Bob has it, he can unpack all of the files into his own ‘webapp’ directory on his docker host…

[Image: Bob unpacking the files]

Now Bob can try and build the same image that I did…

[Image: Bob’s docker build output]

Note that Bob didn’t have the base centos image, so the first step is to download it.  After that, the rest of the build runs and we end with a success…

[Image: Bob’s successful build]

Now Bob can build a container around his image in the same manner that I did.  Once running, he can check the container to see what ports were mapped to the container services…

[Image: Bob’s docker ps output]

And let’s check the webpage real quick…

[Image: Bob’s web page]

So as we can see, the docker file is a much more portable option than sending entire image stacks when sharing information.  Docker files appear to be the best way to build images at this point, but I’ll say again that I think being able to squash user images together would be very beneficial.  Creating a new image every time a command in the file runs can add up pretty quickly.

