Using a LinuxKit-Generated ISO in Docker


Lately I’ve been looking at some tools to make it easier to package containerized applications as lightweight virtual machines, to get the best of both worlds: the developer experience of a Dockerfile with the added protection of a hypervisor.

As part of this process, I’ve been digging into Juniper’s containerized routing stack called cRPD, and trying to get that into a virtual form factor, so that I can effectively have a Linux router that happens to use the Junos routing stack. I’ve worked out an approach for doing this that could, in theory, extend to all kinds of other software, including other disaggregated routing stacks, such as Free-Range Routing.

NOTE - you need access to download the cRPD software from Juniper if you wish to follow the instructions in this post, and at the time of this writing, there is no free trial for cRPD. However, a lot of us at Juniper are actively working on getting cRPD into your hands more easily, so stay tuned!

This post is written for folks like me who are looking to define their own modular, automated OS build. As a result, it assumes some existing knowledge of Linux and Docker concepts. You’ll need a few things set up in advance:

  • Docker
  • A linux-based host OS (I’m using Ubuntu)
  • A hypervisor (these instructions are for QEMU)
  • Git

What is cRPD? What is LinuxKit?

cRPD takes some of the best parts of Junos (RPD, MGD), disaggregates them, and packages them into a lightweight container image.

Because it’s so stripped down, it is not designed/able to function as a full-blown operating system, or manage network interfaces, as we’re accustomed to being able to do with a typical Junos device - it’s just a routing stack with a management process. This means that in order to turn it into a real routing element, we need to run it on an existing OS, like Linux.

Linuxkit is “a toolkit for building custom minimal, immutable Linux distributions”. It allows you to build your own lightweight Linux distribution using containers. It uses a YAML-based specification where you can specify the components that go into your distribution, like the kernel, init systems, and the userspace processes you wish to deploy inside - all running as containers, but with the added layer of true virtual isolation.

For our purposes, this is a match made in heaven - pairing the powerful, mature routing stack and management daemon/CLI of cRPD with the automated, modular, and lightweight Linux distribution built with LinuxKit, we get the best of both worlds.

Building Our Custom cRPD Image

Before we get to building our own Linux distribution, we need to make some modifications to the Docker image we get with cRPD. Out of the box, cRPD is designed to be minimal, meaning there are a number of options that are not enabled or configured by default. To get things like SSH and NETCONF working, we’ll actually build our own custom Docker image that takes the base image we’ll download from the Juniper website, and adds the relevant configurations to make it useful for our purposes.

I created the crpd-linuxkit repo with all of the files we’ll be using for this build, so start off by cloning that repository:
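Something along these lines will do it (the repository URL here is an assumption; substitute wherever you find the crpd-linuxkit repo):

```shell
# Clone the crpd-linuxkit repository and enter it
# (the URL below is illustrative; use the real repository location)
git clone https://github.com/example/crpd-linuxkit.git
cd crpd-linuxkit
```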

Next, follow the cRPD installation instructions (and specifically the section titled Download the cRPD Software ) to download a .tgz file containing the crpd image. This is a tarball that Docker will recognize and allow you to import into your local Docker image cache. Continue only once you see your image in the output of docker image ls | grep crpd.
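Importing the tarball can be sketched like this (the filename is illustrative; use the one you actually downloaded):

```shell
# Load the downloaded cRPD tarball into the local Docker image cache
docker image load -i crpd.tgz

# Verify the image is present before continuing
docker image ls | grep crpd
```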

We’re going to build a custom Docker image that uses this cRPD image as its base, so we can add things like a proper SSH configuration. This Dockerfile uses crpd:latest as its base image reference, which doesn’t yet exist. The following command will look up the ID of the image we just imported, and re-tag it as crpd:latest so we can use it in our custom build:
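A sketch of that re-tagging step might look like the following, grabbing the image ID from the docker image ls output:

```shell
# Look up the ID of the imported cRPD image (first match) and
# re-tag it as crpd:latest so our Dockerfile can reference it
CRPD_ID=$(docker image ls | grep crpd | awk '{print $3}' | head -n 1)
docker tag "$CRPD_ID" crpd:latest
```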

If you take a peek at the files in the repository, you’ll notice we have a few things that will go into our custom Docker image:

  • Dockerfile - copies our modified configuration files into the container image, and also sets up authentication options
  • launch.sh - a launch script which is executed when the container starts, to copy our configuration files into the correct location, and also start relevant services like the ssh service.
  • sshd_config - this is our modified SSH configuration, which includes a very important step of pointing the NETCONF subsystem to the appropriate path. This way, NETCONF requests will go straight to cRPD.

This walkthrough won’t go into much more detail on these files, or the others in the repository, so if you see something you don’t understand, cat its contents and read it for yourself! I added comments where I could.

These are all tied together with the Dockerfile, so you should be able to run the below to build everything:
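A typical invocation looks like this (the crpd-custom tag is an assumption of mine, not something mandated by the repo):

```shell
# Build the custom image from the repository's Dockerfile,
# tagging it so we can reference it later
docker build -t crpd-custom:latest .
```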

To be clear, our ultimate goal is to run cRPD inside a virtual machine that we create with LinuxKit, but why don’t we take a second to marvel at the fact that we now have a container image that runs the Junos routing stack! Let’s start an instance of it to play around with real quick for funsies:
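Starting a throwaway instance might look like this (container name and image tag are illustrative):

```shell
# Start a disposable cRPD container in the background;
# --privileged gives the routing stack the permissions it expects
docker run -d --rm --privileged --name crpd-test crpd-custom:latest
```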

We can interactively enter the Junos CLI with:
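Assuming the container name from the previous step, that looks like:

```shell
# Attach to the running container and launch the Junos CLI
# ("crpd-test" is an illustrative container name)
docker exec -it crpd-test cli
```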

This should start looking a lot more familiar:

However, it’s not without its oddities - for instance, there is no show interfaces terse command!

Keep in mind, this is just the Junos routing stack and management daemon. It has no control over the network interfaces themselves - that’s still firmly under the control of the underlying operating system, which in this case is my laptop, since I’m running this natively in Docker.

We can view things like the routing status for network interfaces, as that’s relevant to what cRPD is designed to do:
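For example (container name is illustrative, and cli -c runs a single operational command non-interactively):

```shell
# Query cRPD for per-interface routing state from the host
docker exec -it crpd-test cli -c "show interfaces routing"
```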

This is just the beginning. If you’re new to cRPD and just want to play with it instead of setting it up, one of the reasons I’ve been working on this is building a reproducible image for the NRE Labs curriculum. Follow along there or on Twitter, as I’m hoping to be able to publish something like this in the next few months.

Let’s exit and delete our cRPD container. From now on, we’ll be running cRPD inside a LinuxKit virtual machine.
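Assuming the illustrative container name crpd-test, cleanup is just:

```shell
# Stop the test container; the rm is a no-op if it was started with --rm
docker stop crpd-test
docker rm crpd-test 2>/dev/null || true
```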

Building the Router VM Image with LinuxKit

Okay, so we have our cRPD container image, but again, it’s not designed to function as a full-blown operating system. To actually pass traffic, we will use this container image as an “app” to be deployed in a brand-new Linux distribution that LinuxKit will create for us. The end-result is that we have a Linux-based VM that runs Junos software.

At the time of this writing, LinuxKit’s latest release is v0.7. Since I’m running on Linux, I’ll grab the precompiled binary for my platform, and pop it into /usr/local/bin:
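Something like the following (the release asset name follows the project’s usual convention, but verify it against the releases page):

```shell
# Download the precompiled linuxkit v0.7 binary for Linux/amd64
curl -fsSL -o linuxkit \
  https://github.com/linuxkit/linuxkit/releases/download/v0.7/linuxkit-linux-amd64

# Make it executable and move it into the PATH
chmod +x linuxkit
sudo mv linuxkit /usr/local/bin/
```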

LinuxKit comes with a linuxkit run command, but I already have existing scripts for running VMs the way I want, so I just want a virtual hard drive image generated that I can execute myself.

Take a look at the contents of crpd.yml - the entire manifest for defining this virtual machine, from picking a Linux kernel version, to including an init system, to finally running our cRPD image, is defined there. We can feed this filename into the linuxkit build command to build our VM.

LinuxKit is able to package to a variety of form factors. I’ll be running my VMs with QEMU, so packaging as qcow2-bios is most appropriate. This is possible with the -format flag:
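Using the manifest from the repo, the build looks like:

```shell
# Build a BIOS-bootable qcow2 disk image from the crpd.yml manifest
linuxkit build -format qcow2-bios crpd.yml
```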

Note that the above requires KVM extensions to be available and enabled, and your user to be added to the kvm group. If you run into problems here, check that KVM virtualization support is configured properly.

This may take a little time, as the linuxkit tool needs to download relevant images referenced in the build manifest, and then assemble them into a working distribution. At the end, you should end up with a file called crpd.qcow2 in your current directory.

Running a Routed Topology

We now have a virtual disk image we can use to boot instances of our cRPD virtual machine. We’ll look to start two instances, crpd1 and crpd2, and connect them via their eth1 interfaces, both connected to a bridge on the host. We’ll consider success to be that we’ve formed an OSPF adjacency, and have learned the route to crpd1’s loopback interface, and can ping it from crpd2.

So, first we’ll need to prep our host network configuration by creating a bridge and adding tap interfaces. This will allow our VMs to communicate with each other. Run the following as root:
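A minimal sketch of that host prep, with illustrative bridge and tap names, might look like:

```shell
# As root: create a bridge and one tap interface per VM,
# enslave the taps to the bridge, and bring everything up
ip link add crpd-br type bridge
ip tuntap add dev tap0 mode tap
ip tuntap add dev tap1 mode tap
ip link set tap0 master crpd-br
ip link set tap1 master crpd-br
ip link set crpd-br up
ip link set tap0 up
ip link set tap1 up
```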

Next, we need to start the virtual machines. I provided a helper script you can run to copy our disk image once for each VM we want to start, as well as running the VMs with QEMU:

Note that this runs the VMs as detached screen sessions, so either make sure screen is installed, or you can run the commands in that script in separate terminal sessions/tabs.

Don’t forget to stop these VMs with ./stop-vms.sh when you’re done with this walkthrough!

Once the script returns, you can use telnet 127.0.0.1 5000 and telnet 127.0.0.1 5001 to connect to the serial ports for crpd1 and crpd2 respectively. Once you see the below prompt (if you don’t see it after a while, try hitting Enter), the VM is booted, and cRPD should be running:

If you are still connected to the serial port, hit Ctrl+] and type quit to disconnect and return to the host shell.

At this point, we have two VMs that we’ll call crpd1 and crpd2 that are connected via their eth1 interfaces using a host bridge. So, the name of the game is to configure networking on these VMs, as well as routing within cRPD so we can have an OSPF adjacency.

First, we’ll access crpd1 by connecting via ssh with the command ssh [email protected] -p 2022 (password is Password1!):

Note that we still have a bash shell here, not the Junos CLI. This is because cRPD is built on an Ubuntu container image, so we still have familiar Linux primitives here. Better still, the container is running with NET_ADMIN permissions, so we can make network changes here, and they will apply to the VM as a whole.
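The configuration for crpd1 can be sketched as follows - all addresses and the use of a passive loopback in OSPF are illustrative choices on my part, not the repo’s exact commands:

```shell
# On crpd1 (bash): assign illustrative addresses to eth1 and loopback
ip addr add 10.0.0.1/24 dev eth1
ip addr add 1.1.1.1/32 dev lo

# Then enter the Junos CLI and configure OSPF:
cli
configure
set routing-options router-id 1.1.1.1
set protocols ospf area 0.0.0.0 interface eth1
set protocols ospf area 0.0.0.0 interface lo passive
commit and-quit
```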

Then, in another terminal session, run ssh [email protected] -p 2023 (password is Password1!) to access crpd2 and paste the following commands:
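A representative sketch for the crpd2 side, mirroring crpd1 with illustrative addresses, would be:

```shell
# On crpd2 (bash): assign illustrative addresses to eth1 and loopback
ip addr add 10.0.0.2/24 dev eth1
ip addr add 2.2.2.2/32 dev lo

# Enter the Junos CLI (we'll stay in it afterwards) and configure OSPF:
cli
configure
set routing-options router-id 2.2.2.2
set protocols ospf area 0.0.0.0 interface eth1
set protocols ospf area 0.0.0.0 interface lo passive
commit and-quit
```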

If you ran the above commands, you should still be in crpd2’s Junos CLI. You can verify that you have an established OSPF adjacency, and have learned the route to crpd1’s loopback address here:
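From the Junos CLI, the standard operational commands for this are:

```shell
# In crpd2's Junos CLI: check the adjacency and OSPF-learned routes
show ospf neighbor
show route protocol ospf
```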

Type exit to return to the bash shell, and run a ping to verify connectivity:
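Assuming 1.1.1.1 as crpd1’s loopback address (illustrative, matching the earlier configuration sketch):

```shell
# From crpd2's bash shell, ping crpd1's loopback
ping -c 3 1.1.1.1
```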

The Junos configuration we loaded into the Docker image early in this walkthrough was minimal. The point of this exercise was to develop an image configuration that could be programmed further via NETCONF at runtime.

So, for all the marbles, you can run these commands to install a Python virtualenv, install PyEZ (the Python library for Junos), and execute a python script that outputs the OSPF neighbors from crpd1. This ensures that we can communicate to the instance via NETCONF (be sure you’ve exited from your SSH session in previous steps - these are meant to be run back in the crpd-linuxkit repository you cloned earlier):
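The rough shape of those commands, back on the host in the cloned repo, is below; the script name is a placeholder of mine, so use the one actually in the repository:

```shell
# Create and activate a Python virtualenv, then install PyEZ
python3 -m venv venv
source venv/bin/activate
pip install junos-eznc

# Run the repo's NETCONF script against crpd1
# ("get_neighbors.py" is a placeholder name)
python get_neighbors.py
```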

You should see this:

There are a number of projects that could be useful as alternatives - both in the Linux build portion, as well as the disaggregated software. This post is about proposing one possible path, not that this is the only one you could choose.

In addition to using something like Free-Range Routing as your routing stack, you don’t even have to run a routing stack at all. There are a bunch of use cases where the UX of a Dockerfile can/should be married with the isolation of a hypervisor. You also don’t have to use LinuxKit for the Linux build portion - there are a number of alternatives that could be used, each with its own pros/cons:

  • Firecracker Ignite - This is a tool the folks at Weave built to add a GitOps workflow for building Firecracker images. I looked at this briefly but it seems to be fairly opinionated about how it runs things. It’s simpler than linuxkit, but it seems that I don’t have the ability to export a VM image that I can then run myself in a traditional hypervisor. This tight coupling doesn’t really give me what I want today. YMMV.
  • RancherOS - This seems to be much closer to the traditional Docker toolchain, and could very well work for this.
  • CoreOS - Might also work, though I’m not sure of the state of this project post-RH-acquisition.

After I published this blog post, a few folks gave me a few more to keep an eye on:

  • Vorteil (Don’t ask me how to pronounce it)
  • BottleRocketOS - Seems to be made by the AWS folks, so I have to assume it has something to do with Firecracker, but not sure how. Seems cool, and looks like it’s written in Rust, which is cooler.

I hope this was helpful to someone that might be thinking of doing the same things I am. For a long time, I’ve viewed the UX of a Dockerfile as something I could only get if I was willing to part with the security and isolation of virtual machines. I love that there are a number of ways in 2020 that I can bypass that tradeoff entirely.

Linuxkit is a new project presented by Docker during DockerCon 2017. If we look at the description of the project on GitHub:

A secure, portable and lean operating system built for containers

I am already feeling excited. I was an observer of the project when Justin Cormack and the other contributors were working on a private repository. I was invited as part of the ci-wg group in the CNCF, and I loved this project from the first day.

You can think about linuxkit as a builder for Linux operating systems where everything is based on containers.

It’s a project that can sit behind your continuous integration system to allow us to test on different kernel versions and distributions. You can build a light kernel with all the services that you need, and you can create different outputs runnable on cloud providers such as Google Cloud Platform, with Docker, or with QEMU.

Continuous delivery, new model

I am not really confident about Google Cloud Platform, so just to move on I am going to do some math with AWS as the provider. Let’s suppose that I have the most common continuous integration setup: one big box, always up and running, configured to support all your projects - or, if you are already doing well, you are running containers to have separated and isolated environments.

Let’s suppose that your Jenkins is running at all times on an m3.xlarge:

An m3.xlarge used 100% of every month costs $194.72.

Let’s have a dream. You have a very small server with just a frontend application for your CI, and all jobs run in separate instances, as tiny as a t2.small.

A t2.small used for only 1 hour costs $0.72.

I calculated 1 hour because it’s the minimum that you can pay, and I hope that your CI job can run in less than 1 hour. It’s easy math to calculate the number of builds that you need to run to pay as much as you were paying before.

194.72 / 0.72 ~ 270 builds every month.

If you are running fewer than 270 builds a month, you can save some money too. But you have other benefits:

  1. More jobs, more instances. Very easy to scale - easier than a Jenkins master/slave setup and so on.
  2. How many times during holidays is your Jenkins still up and running with nothing to do? During those days you are just paying for the frontend app.

And these are just the benefits of having a different setup for your continuous delivery.

LinuxKit CI implementation

There is a directory called ./test that contains some linuxkit use cases, but I am going to explain in practice how linuxkit itself is tested. Because it uses itself - awesome!

First, you need to download and compile linuxkit:
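Roughly like so (the repository path reflects the project’s current home on GitHub):

```shell
# Fetch the linuxkit source and build the binary with make
git clone https://github.com/linuxkit/linuxkit.git
cd linuxkit
make
```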

You can move it into your $PATH with make install.

At the moment the CLI is very simple; the most important commands are build and run. linuxkit is based on a YAML file that you can use to describe your kernel, with all the applications and services that you need. Let’s start with linuxkit/test/test.yml.

Linuxkit builds everything inside containers, which means that you don’t need a lot of dependencies and it’s very easy to use. It generates different outputs - in this case kernel+initrd, iso-bios, iso-efi, gcp-img - depending on the platform that you are interested in using to run your kernel.

Let me try to explain a bit how this YAML works. You can see that there are different primary sections: kernel, init, onboot, services and so on.

Pretty much all of them contain the keyword image because, as I said before, everything is applied as containers; in this example they are stored in hub.docker.com/u/mobylinux/.

The base kernel is mobylinux/kernel:4.9.x; I am just reporting what the README.md says:

  • kernel specifies a kernel Docker image, containing a kernel and a filesystem tarball, eg containing modules. The example kernels are built from kernel/
  • init is the base init process Docker image, which is unpacked as the base system, containing init, containerd, runc and a few tools. Built from pkg/init/
  • onboot are the system containers, executed sequentially in order. They should terminate quickly when done.
  • services are the system services, which normally run for the whole time the system is up
  • files are additional files to add to the image
  • outputs are descriptions of what to build, such as ISOs.
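Putting those sections together, a manifest has roughly this shape - this is an illustrative skeleton only, with image tags that are assumptions rather than pinned, known-good versions:

```yaml
# Illustrative linuxkit manifest skeleton (not a tested configuration)
kernel:
  image: "mobylinux/kernel:4.9.x"
  cmdline: "console=ttyS0"
init:
  - mobylinux/init:latest
onboot:
  - name: sysctl
    image: "mobylinux/sysctl:latest"
services:
  - name: check
    image: "mobylinux/check:latest"
files:
  - path: /etc/motd
    contents: "built with linuxkit"
outputs:
  - format: kernel+initrd
  - format: iso-bios
```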

At this point we can try it. If you are on MacOS as I was, you don’t need to install anything: one of the runners supported by linuxkit is hyperkit, which means that everything is already available in your system.

./test contains different test suites, but for now we will stay focused on the ./test/check directory. It contains a set of checks to validate how the kernel was built by LinuxKit. They are the smoke tests that run on each new pull request created on the repository, for example.

As I said, everything runs inside a container. If you look into the check directory there is a Makefile that builds a mobylinux/check image; that image is run in LinuxKit, via the test.yml file:

You can use the Makefile inside the check directory to build a new version of check; you can just use the command make.

When you have the right version of your test, we can build the image used by moby:

Part of the output is:

And if you look into the directory you can see that all these files are in the root. These files can be run from qemu, Google Cloud Platform, hyperkit and so on.

On MacOS, with this command, LinuxKit uses hyperkit to start a VM. I cannot copy-paste all the output, but you can see the hypervisor logs:

When the VM is ready, LinuxKit starts all the init and onboot containers; the logs are easy to understand, as test.yml starts containerd and runc:

The last step is the check that runs the real test suite:

The last log is the output of the check-kernel-config.sh file.

If you are on Linux you can run the same command, but by default you are going to use qemu, an open source machine emulator.

I did some tests on my Asus Zenbook with Ubuntu; when you run moby run, this is the command executed with qemu:
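The invocation is along these lines - file names and flags here are illustrative, since the exact command depends on the outputs your build produced:

```shell
# Roughly what the runner invokes under the hood on Linux
# (kernel/initrd file names are illustrative)
qemu-system-x86_64 \
  -kernel test-bzImage \
  -initrd test-initrd.img \
  -append "console=ttyS0" \
  -m 1024 \
  -nographic
```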

By default it tests on x86_64, but qemu supports a lot of other archs and devices. You can simulate an ARM board or a Raspberry Pi, for example. At the moment LinuxKit is not ready to emulate other architectures, but this is in the main scope of the project. It’s just a matter of time - it will be able to soon!

Detecting whether the build succeeded or failed is not as easy as you probably expect. The exit status inside the VM is not the one that you get on your laptop. At the moment, to understand if the code in your PR is good or bad, we are parsing the output:

Explaining how linuxkit tests itself is, at the moment, the best way to understand how it works. It is just one piece of the puzzle: if you have a look, every PR has a GitHub Status that points to a website containing the logs related to that particular build. That part is not managed by linuxkit, because linuxkit is only the builder used to create the environment. All the rest is managed by datakit. I will probably speak about it in another blogpost.

Conclusion

runc, docker, containerd, rkt, but also Prometheus, InfluxDB, Telegraf - a lot of projects support different architectures, and they need to run on different kernels with different configurations and capabilities. They need to run on your laptop, on your IBM server, and on a Raspberry Pi.

This project is in an early state, but I understand why Docker needs something like this, and other projects, as I said, will probably also get some benefit from a solution like this one. Having it open source is very good, and I am honored to be part of the amazing group that put this together. I just did some final tests and tried to understand how it’s designed and how it works. This is the result of my tests. I hope it can be helpful to start with the right mindset.

My plan is to create a configuration to test InfluxDB and play a bit with qemu to test it on different architectures and devices. Stay around - a blogpost will come!


Reviewers: Justin Cormack
