CoreOS


At this point we’ve deployed three hosts in our first and second CoreOS posts.  Now we can do some of the really cool stuff fleet is capable of doing on CoreOS!  Again, I’ll apologize that we’re getting ahead of ourselves here, but I really want to give you a demo of what CoreOS can do with fleet before we spend a few posts diving into the details of how it does this.  So let’s dive right back in where we left off…

We should have 3 CoreOS hosts that are clustered.  Let’s verify that by SSHing into one of the CoreOS hosts…

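Once you’re logged in, fleetctl can show us the cluster members…

fleetctl list-machines
# lists one machine per clustered host, along with its IP and metadata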

Looks good, the cluster can see all three of our hosts.  Let’s start work on deploying our first service using fleet.

Fleet works off of unit files.  This is a systemd construct and one that we’ll cover in greater detail in the upcoming systemd post.  For now, let’s look at what a fleet unit file might look like…

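Here’s a minimal sketch of webserver1.service, reconstructed from the description below (grab the exact file from the github repo in the note)…

[Unit]
Description=Web Server 1
After=docker.service
Requires=docker.service

[Service]
# launch the container, publishing port 80 on the host to port 80 in the container
ExecStart=/usr/bin/docker run --name webserver1 -p 80:80 jonlangemak/coreos:webserver1
# stop the container when the unit is stopped
ExecStop=/usr/bin/docker stop webserver1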

Note: These config files are out on my github account – https://github.com/jonlangemak/coreos/

Systemd works off of units and targets.  Suffice it to say for now that the fleet service file describes a service we’d like to run.  In this case, the ‘Unit’ chunk of the service file defines the service and what other services need to be running for this one to launch.  As you can see here, we’re relying on the docker service.  This makes sense since this service file attempts to launch a container.  The ‘Service’ chunk of the file lists specifics about how to start and stop the service.  Again, we’ll dive into this in much greater detail later, but you can see that the start command launches a container.  We can see that it’s called webserver1 and that the image is ‘jonlangemak/coreos:webserver1’.  This piece is important.  The image we want to load lives in a public repo called ‘jonlangemak/coreos’, so you should be able to access it if you want to reproduce this in your own lab.  The repo location is here…

https://registry.hub.docker.com/u/jonlangemak/coreos/

In addition to the webserver1 service file, I also have service files for webserver2 and webserver3.  Those configurations look like this…

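webserver2.service follows the same pattern; a sketch looks like this, assuming each container’s Apache listens on port 80 internally (again, the exact files are on github)…

[Unit]
Description=Web Server 2
After=docker.service
Requires=docker.service

[Service]
# publish host port 81 so this container can coexist with webserver1 on the same node
ExecStart=/usr/bin/docker run --name webserver2 -p 81:80 jonlangemak/coreos:webserver2
ExecStop=/usr/bin/docker stop webserver2

webserver3.service is identical except that it uses the webserver3 name and image and publishes host port 82.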

So let’s copy these over to one of the CoreOS hosts so we can load them into fleet…

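From my tools server, something like this does the trick (10.20.30.111 is coreOS1 in my lab)…

scp webserver*.service core@10.20.30.111:
# copies all three unit files to the core user's home directory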

Now that they’re on the host, we can load them into fleet.  Fleet refers to these service files as ‘unit’ files.  To get them into fleet we can use the ‘submit’ command…

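Run from the directory holding the files…

fleetctl submit webserver1.service webserver2.service webserver3.service
# registers the unit files with the cluster without starting anything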

Once they’re submitted we can list the unit files that fleet is aware of.  Now, all we have to do is tell fleet to launch the services.  Let’s start by launching webserver1.service…

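Both steps are one-liners with fleetctl…

fleetctl list-units
# shows the units fleet is aware of and their current state
fleetctl start webserver1.service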

Now let’s take a look and see the status of the unit we just launched…

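The same listing command shows us where it landed…

fleetctl list-units
# the ACTIVE/SUB columns show the unit's state, and the MACHINE column
# shows which host it was scheduled on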

It looks like the unit was launched on the host we’re currently on (coreOS1).  Let’s take a look at docker and see what’s going on…

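A couple of docker commands tell the story…

docker images
# shows the jonlangemak/coreos image that was just downloaded
docker ps
# shows the running webserver1 container and its port mapping (0.0.0.0:80->80/tcp)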

We can see that docker downloaded a container image called ‘jonlangemak/coreos:webserver1’.  In addition, it started the container, and it’s currently running with port 80 on the CoreOS host mapped to port 80 on the container.  Let’s check and see what that gives us…


On each of the container images I’ve loaded an Apache service that is serving up a couple of files.  The first is an index.php page that shows the overall status page.  In addition, each container image has a localpage.html file which shows a basic HTML page describing the local container.  So to quickly break it down, the container layout looks like this…

webserver1 – published on host port 80, serving index.php and localpage.html
webserver2 – published on host port 81, serving index.php and localpage.html
webserver3 – published on host port 82, serving index.php and localpage.html

Basically – This means that I can browse to each container on its respective port and see the main status page.  The status page queries all of the possible CoreOS nodes on ports 80, 81, and 82 so we can see which container is running on which host.  The reason for the different ports is so that each container can run on the same CoreOS node.  So let’s start the other 2 service units with fleet…

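Starting them is the same as before…

fleetctl start webserver2.service webserver3.service
fleetctl list-units
# each unit should now show as active, spread across the three machines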

As you can see, fleet has deployed the 2 service units we just started on the other two CoreOS nodes.  Pretty cool huh?  Let’s check our status page again…


Cool!  So fleet has deployed all 3 containers across all 3 hosts.  Before we go further, let’s break down what happened…

1 – We load the service unit file into fleet
2 – We tell fleet to start the unit
3 – Fleet picks a CoreOS node and deploys the service to it
4 – The service starts and docker is told to run the defined container image
5 – Since the container image doesn’t exist locally, docker downloads it from the public repo
6 – Once the image is downloaded, the container is started

Note that step 5 can take some time since the public repo is out on the internet.  Give docker some time to download the image.  In my experience this takes less than 2 minutes.

So now what happens if a host goes offline?  For instance, let’s shut down node CoreOS1…

Note: Since the index.php page is on all 3 containers you can pick any one to check page status.  For instance, since the webserver3 container is running on CoreOS3 (10.20.30.113) I can browse to http://10.20.30.113:82

It doesn’t take long after telling CoreOS1 to shut down that our status page reflects the container going offline…


Let’s check the fleet status on CoreOS2 and see what’s going on…

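Same command as before…

fleetctl list-units
# webserver1.service should now list a different machine in the MACHINE column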

Interesting, it says that webserver1.service is now running on CoreOS2 (10.20.30.112).  Let’s check docker…

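One more docker ps…

docker ps
# the webserver1 container is now up and running here on CoreOS2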

Sure enough!  Fleet saw CoreOS1 go down and moved the container to CoreOS2.  Let’s check the status page again…


Awesome!  So as we can see, fleet is a pretty powerful container scheduling system that keeps track of container status and reschedules as needed.  In the post dedicated to fleet we’ll dive more into some of the other options we have in regard to tuning the cluster.  For now, I just wanted to give you the chance to get this up and running on your own so you can start playing around and see how powerful CoreOS is.


Quick note: For the sake of things formatting correctly on the blog I’m using a lot of screenshots of config files rather than the actual text.  All of the text for the cloud-configs can be found on my github account here – https://github.com/jonlangemak/coreos/

Now that we have our first CoreOS host online we can start the cool stuff.  We’re going to pick up where we left off in my last post with our first installed CoreOS host which we called ‘coreOS1’.  Recall we had started with a very basic cloud-config that looked something like this…

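In sketch form it was just a hostname and a couple of SSH keys (keys truncated here; the full files are on my github account)…

#cloud-config
hostname: coreOS1
ssh_authorized_keys:
  - ssh-rsa AAAA...truncated jon@tools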

All this really did was get us up and running with a DHCP address and the base system services.  Since we’re looking to do a bit more in this post, we need to add some sections to the cloud-config.  Namely, I’m going to configure a static IP address for the host, configure and start etcd, and configure and start fleet.  So here’s what my new cloud-config for coreOS1 will look like…

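Here’s a sketch of it.  The discovery token, gateway, and DNS values below are placeholders for my lab settings, so grab the real files from my github account…

#cloud-config
hostname: coreOS1
coreos:
  etcd:
    # every host in the cluster must use the same discovery token
    discovery: https://discovery.etcd.io/<token>
    addr: 10.20.30.111:4001
    peer-addr: 10.20.30.111:7001
  fleet:
    public-ip: 10.20.30.111
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
    # the static IP is handled by dropping in a systemd-networkd unit
    - name: 00-static.network
      runtime: true
      content: |
        [Match]
        Name=ens18

        [Network]
        Address=10.20.30.111/24
        Gateway=10.20.30.1
        DNS=8.8.8.8
ssh_authorized_keys:
  - ssh-rsa AAAA...truncated jon@tools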

So there’s a lot more in this cloud-config.  This config certainly deserves some explaining.  However, in this post, I want to just get a couple more hosts online and get them clustered.  So bear with me, I promise follow-up posts are going to dig much further into CoreOS services and the cloud-config itself.  There are two items that I need to cover right now to get us going.  The first is the etcd discovery URL.  Note that the URL has a syntax of…

https://discovery.etcd.io/<token>

The <token> piece of the URL is a unique key.  This NEEDS to be the same on each host that you want to be part of the cluster.  To generate the URL, you can use this URL…

https://discovery.etcd.io/new

For now, just know that each time you build a cluster you need to generate a NEW etcd discovery URL and that all of the hosts in the same cluster need to use the same discovery URL.  Again, much more to come on this when we dive into etcd.

Note: The discovery URL is used by the hosts for clustering.  Since this is an external URL, the hosts require internet access for this to work.

The second thing we need to know for our new cloud-config is the server’s interface name.  Note that the cloud-config specifies an interface name to match on.  In our case, this is ‘ens18’.  This can be found just by doing a quick ‘ifconfig’ on the host…

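ifconfig
# note the name of the primary NIC in the output – 'ens18' on this host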

For now, that’s all we need to keep moving forward with building additional hosts.

So now that we have a more robust cloud-config, how do we get it onto the first host that we built?  The cloud-config used to build the current iteration of the host can be found in this location…
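On a host built with coreos-install, that location is…

/var/lib/coreos-install/user_data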

The file is protected so we’ll have to use the following command to edit it…
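sudo vim /var/lib/coreos-install/user_data
# the file is owned by root, hence the sudo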

You should see a copy of your original cloud-config.


Here you could manually edit the file, but I prefer to load a copy of the file that I know has been validated.  With that in mind, I would exit out of vi and run the following command to copy a new cloud-config from my local tools server…
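Something like this, with the URL pointing at wherever you host the validated copy…

sudo wget http://toolserver/coreos1-cloud-config.yml -O /var/lib/coreos-install/user_data
# 'toolserver' here is just my local web server – substitute your own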

Next we need to tell the CoreOS host to load the new cloud-configuration file.  This can be done either via a command or by simply rebooting the host.  Let’s use the cloud-init command to load the file without a reboot…

So let’s give it a try…

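sudo coreos-cloudinit --from-file=/var/lib/coreos-install/user_data
# re-processes the cloud-config without a reboot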

Now let’s check and see if the services are running as we expect…

Note: Again, indulge me here, we’re going to cover this stuff in much greater detail in future posts.

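systemd can confirm both…

systemctl status etcd
systemctl status fleet
# both should report 'active (running)'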

We’ve now verified that the two services we need for CoreOS clustering are running.  The next step is to configure our second and third hosts.  These will have identical cloud-configuration files with the exception of the hostname and IP address configuration.  The cloud-configs will look like this…

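In sketch form, coreOS2’s config only changes the hostname and the addresses (coreOS3 follows the same pattern with 10.20.30.113 and its own interface name in the [Match] section)…

#cloud-config
hostname: coreOS2
coreos:
  etcd:
    # same discovery token as coreOS1
    discovery: https://discovery.etcd.io/<token>
    addr: 10.20.30.112:4001
    peer-addr: 10.20.30.112:7001
  fleet:
    public-ip: 10.20.30.112
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
    # plus the same 00-static.network unit, using 10.20.30.112/24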

Just like we did above, ensure that you set the same discovery URL in each cloud-config and verify the network interface name.  You can see for my host coreOS3 that this is different from the others since it’s a different type of machine.

For hosts coreOS2 and coreOS3 I’m going to load the cloud-config during the install, much like we did in the first post.  If you need a refresher, pop back over to the first post and follow those directions.

<Time Passes as you install the 2nd and 3rd host>

I’ll assume at this point you have all 3 hosts online.  You should be able to log into each host independently and verify that etcd and fleet are running by issuing these two commands…
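systemctl status etcd
systemctl status fleet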

Fleet relies on etcd, so if you’re having issues with etcd it’s very likely fleet won’t be working either.  Please make sure that both services are running on each of your 3 hosts before continuing.

At this point, if all has gone well, you should be able to see if your devices are clustered by running this command…

When run, you should see something like this…

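fleetctl list-machines
MACHINE     IP            METADATA
...         10.20.30.111  -
...         10.20.30.112  -
...         10.20.30.113  -
# machine IDs elided; the important part is that all three hosts show up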

If you do, then we’ve succeeded in creating our first CoreOS cluster!  Note that fleet and etcd are distributed services, so you can run fleet and etcd commands on any of the cluster hosts and you should see the same response.  Pretty slick huh?

In the next post (coming very soon!) we’ll deploy our first application using fleet to our cluster.  After we wrap up the fleet demo we’ll be diving into some of the specifics of how all of this works with deep dives on systemd, etcd, and fleet.


Installing CoreOS

If you haven’t heard of CoreOS it’s pretty much a minimal Linux distro designed and optimized to run docker.  On top of that, it has some pretty cool services pre-installed that make clustering CoreOS pretty slick.  Before we go that far, let’s start with a simple system installation and get one CoreOS host online.  In future posts, we’ll bring up more hosts and talk about clustering. 

The easiest way to install CoreOS is to use the ‘coreos-install’ script which essentially downloads the image and copies it bit for bit onto the disk of your choosing.  The only real requirement here is that you can’t install to a disk you’re currently booted off of.  To make this simple, I used ArchLinux, a lightweight bootable Linux distro.  So let’s download that ISO and get started…

Note: I use a mix of CoreOS VMs and physical servers in my lab.  In this walkthrough I’ll be doing the install on a VM to make screenshots easier.  The only real difference between the install on either side was how I booted the ArchLinux LiveCD.  On the virtual side I just mounted the ISO and booted it.  On the physical side, I had to make a bootable USB drive and boot the server with it.  After several failed attempts at booting off of USB I finally found the USBwriter tool (http://sourceforge.net/projects/usbwriter/) which successfully wrote the ISO to USB and allowed me to boot into ArchLinux.  After that was fixed, the install process was identical.

As I mentioned I chose to use the ArchLinux distro which is available here…

https://www.archlinux.org/download/

Specifically, I used archlinux-2014.06.01-dual.iso but I don’t think it really matters which one you use.  We just need to be able to get to a Linux prompt and have network access.

Let’s get that booted up and make sure we can download the CoreOS install script…

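At the time of writing the script lives in the coreos/init repo on github…

wget https://raw.githubusercontent.com/coreos/init/master/bin/coreos-install
chmod +x coreos-install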

Looks good!  We were able to download the install script.  At this point, we could run the install and reboot into a CoreOS image.  However, you can’t locally log into a CoreOS system, so we need to provide a little bit of configuration to our CoreOS install script to make sure we can log in through SSH.  This is done through what CoreOS refers to as a ‘cloud-config’.  In later posts we’ll add quite a bit more to the cloud-config, but for now, let’s just start with this…

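In sketch form it looks like this, with the keys truncated (run your own through the validator below)…

#cloud-config
hostname: coreOS1
ssh_authorized_keys:
  - ssh-rsa AAAA...truncated jon@tools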

The cloud-config file is written in YAML.  In this example, all we are really doing is providing a hostname and setting a couple of public SSH keys.  I struggled initially with the cloud-config syntax and then found out that they have a validator tool that can check your config before you try to use it.  It’s located at…

https://coreos.com/validate/

As I mentioned earlier, you can’t log directly into the console of a CoreOS host.  You need to SSH in, and even then you need to log in using key-based authentication.  If you aren’t comfortable with how that works, see this post I wrote about it the other day…

http://www.dasblinkenlichten.com/generating-ssh-keys-to-use-for-coreos-host-connectivity/

So now that we have the cloud-config, let’s get it over to the host…

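Something like this…

wget http://toolserver/cloud-config.yml
# pull the file from wherever you host it – a local web server in my case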

I put the cloud-config file on a local web server and then just used wget to get it over to the host we’re building.  The next step is to run the installer.  We need to pass a couple of variables to the installer.  Namely, the device we want to install CoreOS on, which version of CoreOS to use, and where the cloud-config file is.  Let’s first check and see what drives we have available…

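fdisk -l
# or lsblk – either lists the local disks so you can pick the install target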

On this host the drive we want to use is ‘/dev/sda’.  Now let’s run the install command…

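./coreos-install -d /dev/sda -C stable -c cloud-config.yml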

Here we pass the ‘-d’ flag to specify the device, the ‘-C’ flag to tell it to download from the stable release channel, and the ‘-c’ flag to specify the cloud-config file to use.  The server will download the CoreOS code and then begin imaging.  In my experience this entire process takes less than 4 minutes.  When done, you should see a success message at the bottom of the screen…


Now just reboot the host and we should be in CoreOS land!


After booting, the console should tell you the IP address of the host.  Note that since we didn’t specify an IP, the CoreOS system will just use DHCP to get one off of the local LAN segment.  At this point, we should be able to SSH to the server.  Let’s try…

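ssh core@<host-ip>
# 'core' is the default CoreOS user – add -i <path-to-your-private-key>
# if your key isn't loaded in an agent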

The client prompts you to add the RSA fingerprint to the local store, and then I get prompted for the passphrase I set on my private key when I generated the key pair.  If you didn’t set a passphrase, you wouldn’t see this step.

So now we’re in!  Like I mentioned earlier, CoreOS is built to run Docker, so you’ll notice that it’s already installed…

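docker version
# prints the client and server versions, confirming the daemon is up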

To make sure things are working as expected, let’s try pulling down and spinning up a Docker container from a public repo…

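The image name below is a placeholder – any public image that runs a listening service will do…

docker pull <user>/<image>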

Now that it’s downloaded, let’s kick off a container and map port 9000 on the CoreOS host to port 90 on the container…

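docker run -d -p 9000:90 <user>/<image>
# assumes the service inside the container listens on port 90
docker ps
# the PORTS column should show 0.0.0.0:9000->90/tcp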

We can see the container running so let’s see if it worked…

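curl http://localhost:9000
# or browse to http://<host-ip>:9000 from another machine on the network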

Sure enough!  So in this post we got a single CoreOS host up and running and made sure docker worked as expected.  In the next post we’ll talk more about working with multiple CoreOS hosts and additional cloud-configuration options.

