Secure Services – It’s not just for REST or SOAP anymore

A note: I have meant to write this blog post for years. I am finally knocking it off of my todo list. Hurray!

In the beginning, there was the terminal, and it was great. Once networking came around, the terminal was networked and called telnet. It was thought to be good, but it was later realized to be a disaster because it was not secure. Then came the Secure Shell (SSH), a secure replacement for telnet. Over time, SSH’s features expanded into:

  1. Port Forwarding (used as a jump box/service)
  2. Proxying network traffic
  3. X11 forwarding
  4. Secure shell environment
  5. Key based authentication
  6. Secure File Transfer (SFTP)

SSH has many different uses, and you can configure it to do some pretty extraordinary things; covering all of them is out of the scope of this blog post. A great reference on SSH can be found in this book.

One of the features that caught my attention is that it is possible to create services that live purely in the Unix environment and are incredibly secure. The attack surface is small, communication is encrypted, and your environment is sandboxed (well, as much as you make it).

Authorized Keys

Passwords are an incredibly low-effort way to unlock a system. They tend to be short, and they can be brute forced. (Even worse, they frequently cover a small space of combinations because they are human chosen.) Randomly generated keys with lots of bits were created to avoid this issue. Key-based authentication was added to SSH to allow passwordless login and to avoid sharing passwords. All of the public keys are stored in the user’s authorized_keys file.

Within the authorized_keys, each entry has the following format:

<key type> <base64-encoded public key> <comment>

Within each line, it is possible to extend the features (shell, a command to run, environment variables, etc.) of that particular login. (See the sshd man page for more details.)
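
As a sketch, an entry using a few of the options from the sshd man page might look like the following (the command path here is hypothetical, and the key is shortened):

command="/usr/local/bin/report-status",no-pty,no-port-forwarding ssh-rsa AAAA... steven@server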

To build a secure service, use the command option. You will find an example in the section labeled “Your first service.”

The setup

  1. To set up an account, you are going to need a key pair. Generate one with ssh-keygen without a passphrase. (You can use a passphrase, but it will make automation tough.)
  2. Add the entry into the authorized_keys file (~/.ssh/authorized_keys) with the ssh-copy-id command (ex. ssh-copy-id -i [location of your key] [user]@[server]).
  3. Your ~/.ssh/authorized_keys file should have entries similar to the following:
    1. cat ~/.ssh/authorized_keys
    2. ssh-rsa A……..iJu+ElF7 steven@server
      1. See the Authorized Keys section for an explanation of what this means.
  4. Test access to the service by sshing into the box as that user with that key (sample command: ssh -i [private key] user@server).
    1. The first time you connect to a server with SSH, you will get a message asking if it is ok to connect to that particular box. (This is something you will need to handle if you are automating a process as well.)
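
Putting the setup steps together, a minimal sketch (the key name, user, and host are placeholders matching the examples later in this post):

ssh-keygen -t rsa -f ~/.ssh/id_sitetest -N ""        # step 1: key pair, no passphrase
ssh-copy-id -i ~/.ssh/id_sitetest steven@localhost   # step 2: install the public key
ssh -i ~/.ssh/id_sitetest steven@localhost           # step 4: test; accept the host key prompt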

Your first service

Your first service will be an incredibly simple example. Open up the authorized_keys file that you modified in the setup section. Add the following command option in front of the ssh-rsa/ssh-dss line for the new entry.

command="echo hello the time is `date`"

The entry for that login should now look similar to the following:

command="echo hello the time is `date`" ssh-rsa A.......

Now you can make a call to the configured service as such:

ssh -i .ssh/id_sitetest steven@localhost

You will receive an output of:

hello the time is Fri Nov 11 22:28:37 CST 2016
Connection to localhost closed.

Congratulations: if you followed along with the previous instructions, you have created your first secure service. All input sent to the service and all output coming back from it is encrypted. You have control over the output format and over how you take in input. The beauty of this service is that a terminal is not left open, and only the command that is defined is run. Once the command is done, the session automatically closes.

Suggestions

How should one best develop services?

Develop a bash script that replicates the functionality you would like the server to perform before setting it up to run over SSH. This gives you an isolated environment in which to test the process before it goes live. The SSH service setup is merely a layer above the script.

How should I get input into the service?
Take in input just as you would in a bash script. I would suggest using the ‘read’ command to do this. See this guide.

A note on this: always validate the input. An attack is unlikely (assuming that the key is managed properly), but it never hurts to be defensive.
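
As a sketch, a service script might read and validate one line of input like this (the allowed requests here are hypothetical):

#!/bin/bash
# stdin arrives over the encrypted SSH channel
read -r request

# whitelist the input before acting on it
case "$request" in
  time)   date ;;
  uptime) uptime ;;
  *)      echo "unknown request" >&2; exit 1 ;;
esac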

Is it webscale?

Honestly, I do not know the answer to this question. It may be possible to do this over the web, but I am not sure how stable it would be with lots of concurrent users. It is at most as scalable as the SSH service and the underlying system.

I did a brief search to see if this was possible in client-side JavaScript, and I could not find a source showing that it was.

Can you automate the use of these secure services?

Yes. However, when you create the key, you should never add a passphrase, as that requires manual interaction. A word to the wise: keys should have a defined life cycle and should be phased out periodically. Key management will be an issue in this situation.

Project: Music Organization System

Before iTunes and online music stores/streaming options came around, you had to build up a digital music collection if you wanted to load an MP3 player. I always preferred this option. It meant that I could manage my own collection, that I wasn’t tied to a service that would eat up all of my data, and that I could listen to what I wanted. For example, in most US music services you can’t find the band Die Toten Hosen. They’re a great German band, but they haven’t hit the US market. Also, with your own collection it’s a lot easier to move it to other devices that lack direct integration (such as my car stereo).

The downside is that you have to manage the collection yourself, and a large collection can get unwieldy very quickly. Thankfully there are a few tools to help with that. I found the JThink tools (SongKong and Jaikoz) to be very helpful in keeping a collection organized.

What is it?

This project is intended to automatically standardize files into a human-friendly collection without user intervention.

What technologies are used?

  • Bash
  • SongKong (jthink.net) – for file formatting, metadata correction, and metadata improvements (from an online source)
  • FFMpeg (for media conversion)
  • Docker
  • Docker Registry

How did I solve it?

To solve this issue I did the following:

  1. Created the Dockerfile and outlined the general steps used.
  2. Identified the software dependencies.
  3. Opened up X forwarding to test out SongKong. (It’s mainly an X application, with an optional command-line tool.)
  4. Ensured that SongKong could operate from within the Docker container.
  5. Moved over the Ogg2MP3 and Flac2Mp3 scripts. (These can be found at github.com/monksy.)
  6. Created a Docker registry so that I can keep the Docker image local. (SongKong is a licensed, for-pay product.)
  7. Set up the CI pipeline with Jenkins.
  8. Created a script to run on the box managing the music collection. It uses the Docker registry to pull down this process and run the organization utility.
  9. Set up the crontab entries to run the container.
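
As a sketch of steps 8 and 9 (the registry host, image name, and mount path are hypothetical placeholders, not the exact ones I used):

#!/bin/bash
# pull the current organizer image from the private registry and run it
docker pull registry.example.com/music-organizer:latest
docker run --rm -v /mnt/music:/music registry.example.com/music-organizer:latest

# crontab entry (step 9) to run the script nightly at 2am:
# 0 2 * * * /usr/local/bin/run-music-organizer.sh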

Some of the challenges that I had while doing all of this included:

  1. The difference between RUN and ENTRYPOINT. The ENTRYPOINT instruction specifies the command that runs when the container is started; the RUN instruction runs only while the image is being built.
  2. The Jenkins Docker plugins are a little difficult to use and set up. I tried the Docker-build-step plugin; however, it included very little documentation, was very unhelpful about invalid input, and made it difficult to build and publish. Fortunately, the CloudBees Docker Build and Publish plugin was just what I was looking for.
  3. Debugging the Docker Registry was a pain. For the most part you’ll have to depend on its standard output. If that isn’t enough, do a docker exec -ti <container id> /bin/bash and look for the log files.
    1. This really needs to be improved to output what is broken and why.
    2. Bad logins to the Docker registry fall back from version 2 of the API to version 1 when something goes wrong on the version 2 request (i.e. a bad certificate file). This is frustrating.
  4. If you have a LetsEncrypt certificate to use on the Docker registry, it’s not very well documented that you should be using the fullchain certificate file. Without it, you’ll have security issues.
    1. Another note on this: it should be a lot easier to create users on the registry than to generate htpasswd files.
    2. If you generate a user access file, you have to use bcrypt as the hashing option; otherwise, the Docker registry won’t accept your credentials. (See the sketch after this list.)
  5. The storage option that I used for the collection was a network mount point. Not having the proper permissions on the server side for the child folders caused a wild goose chase, which led to studying up on the options of the mount.cifs tool (for example, the file_mode, dir_mode, noperms, and forceuid options).
  6. Reproducing the application’s environment was a little difficult, as it wasn’t clear where its private files were located.
  7. The id3 tagging command originally used no longer exists. I had to upgrade to the id3v2 tool and reformat the command usage.
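
For the bcrypt access file mentioned in item 4 above, a minimal sketch (the user name and output path are placeholders):

# -B selects bcrypt, the only scheme the registry accepts; -c creates the file
htpasswd -Bc /etc/docker/registry/htpasswd steven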

Exit Codes: Why Java Gets it Wrong

Exit Codes

The standard protocol for command line tools in Unix is based on a few things: standard out, standard in, standard error, and the exit code. The exit code is the reason why the main function of a C program has int as its return type: that value is passed back to whatever executed the application (typically the shell). The expected values of an exit code are 0 for success and anything non-zero for failure. This gives the developer a way to communicate what went wrong in a very quick fashion.
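
A quick illustration of the protocol from the shell:

false        # a command that always fails
echo $?      # prints 1, the exit code of the previous command

# && and || chain on exit codes: echo runs only if grep exits 0
grep -q root /etc/passwd && echo "found root"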

Java is a weird beast in that regard. Unless there is a JVM failure, Java will always report back a 0 exit code. This can be incredibly irritating when you want to create Java applications that are meant to be executed in a Unix environment or in a chained fashion (as the Unix philosophy intends).

The workaround for returning a non-zero exit code is to call System.exit(<code>). This has two drawbacks. Firstly, it’s a very abrupt call and can introduce issues later down the line. (It can cause confusion about why the application just exited, similar to multiple return statements in a method.) Secondly, the shutdown request to the JVM is concerning: it doesn’t attempt to wind down any other running threads or give them a chance to finish before closing. For example, resources could remain unclosed or unfinished, temporary files may not be cleaned up, and network connections could be dropped. The only way to get a notification that this is happening is to set up a shutdown hook. (That is described in the documentation for System::exit.)

Need to install the pg (Postgres gem) via Bundler?

When attempting to upgrade a copy of Gitlab that I installed from source, I ran into an issue. Gitlab is a Ruby on Rails application that uses Bundler to handle all of its dependencies. When it attempted to bring in the Postgres gem dependency, it had an issue with the sources that were available on the local system.

That required an install of Postgres-devel. On top of that, it took some time to get Bundler to recognize that the libs and headers it needed were located under /usr/pgsql-9.3/*.

To solve this, I found an answer on Stackoverflow.

bundle config build.pg --with-pg-config=/usr/pgsql-9.3/bin/pg_config

This is equivalent to setting configuration arguments on a configure script when compiling/installing an application by hand. After setting the configuration parameter, your bundle install should succeed without complaints about missing headers or the pg_config binary.
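
To verify the setting and retry the install:

bundle config build.pg   # prints the build flags now stored for the pg gem
bundle install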

Having difficulty getting your Nginx service working with the Digital Ocean install instructions?

Recently I set up a Ruby web application service using the instructions from Digital Ocean. The instructions are great; however, they do not mention how to start the Nginx process on boot.

The command to start the service on boot [under Centos] is:

sudo chkconfig --level 235 nginx on

However, their start script prevents you from doing that, as it is not a service script compatible with chkconfig. The error message given is: service nginx does not support chkconfig.

To fix it, add the following comments to the top of the script (/etc/init.d/nginx), right after the first line (#!/bin/sh):

### BEGIN INIT INFO
# Provides: nginx
# Required-Start: $local_fs $remote_fs
# Required-Stop: $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: S 0 1 6
# Short-Description: nginx initscript
# Description: nginx
### END INIT INFO
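
With the LSB header in place, register the service (the --add step is only needed if chkconfig has never seen the script before):

sudo chkconfig --add nginx
sudo chkconfig --level 235 nginx on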

Installing Maven on Centos 5 or 6/RHEL

At the moment there is no RPM package or yum install available for the latest version of Maven on Centos; the user is left to install Maven manually. To overcome this, I created a script to install the latest version (currently 3.1.1). There are still many things that should be added to the script; they’re listed in the TODO section of the documentation and may be added later.

Instructions on how to run the script, and the script itself, may be found at: https://github.com/monksy/centos-maven-install

Installing ArchLinux on an Asus UX31A?

If you’re installing Linux on an Asus UX31A laptop and intend to use the 3.10 or 3.11 kernel, you may want to take a look at this before trying to fight with X.

If you reboot and nothing shows up after GRUB, there is an issue between modesetting, the Linux 3.10/3.11 kernels, and the i915 driver. To get the laptop to boot in a non-X, non-modeset mode, append “nomodeset” to the kernel arguments via GRUB.
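
To make the workaround persistent (assuming GRUB 2; your existing argument list may differ):

# temporary: press 'e' at the GRUB menu and append nomodeset to the linux line
# persistent: edit /etc/default/grub, then regenerate the config
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"
grub-mkconfig -o /boot/grub/grub.cfg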

These folks seem to have fixed the issue directly in the kernel. [Note: You must have make, gcc, and patch installed to build the package.]

Additionally, the folks at Fedora found that you can fix the issue by enabling CSM and disabling Secure Boot.

https://ask.fedoraproject.org/question/29186/black-screen-after-upgrade-to-kernel-310/

Also, if you’re having issues with NetworkManager not connecting, it may be due to dhcpcd running in the background.

This Week I Learned [19 May 2013 Edition]

This week I learned about:

  • There is a new change to how network devices are named. This may change your network device from eth0 to something like env10p0.
    http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames
  • Archlinux is very similar to Gentoo, but it is based on a script to install everything.
  • 2to3 is a Python script that will convert a Python 2 script into a Python 3 compatible one. (See the usage example after this list.)
    • One major difference between 2 and 3 is that the print statement became a function, thus requiring parentheses.
  • It is often easier to hard-code the Python 2 environment rather than to convert the script into a Python 3 script.
    • #!/usr/bin/env python2.7 [or whatever Python 2 environment is installed] should be used in the shebang line of the script.
  • OpenVAS (Open Vulnerability Assessment System): I haven’t tried this yet, but this is a neat utility that can help you keep tabs on the software you currently have installed and the possible vulnerabilities it may have.
  • Always snapshot your VM after you finish major installs. [This saved my butt this week]
  • Ansible – This looks like a nice open-source alternative to HP Operations Orchestration.
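
A minimal 2to3 usage example (the file name is a placeholder; -w writes the conversion back to the file, leaving a .bak backup):

2to3 -w myscript.py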

Don’t let the User Fail

One of the things that has been bothering me quite a bit as of late is debugging setup issues with OpenStack. Usually this consists of tracking down errors among 12 different log files when something goes wrong. There are multiple issues at play here. Firstly, the guides for setting up OpenStack (Folsom) are not great, or are written for different environments (multi-node vs. single-node). [That being said, DevStack is pretty cool and easy to use.] Secondly, debugging a new setup while learning at the same time can be a pain.

The major issue of installing OpenStack could be solved through a dedicated install process. However, the larger issue is that the user is allowed to fail so easily. It’s similar to refusing to validate an email address prior to submission and processing on a website. My suggestion is that many of these issues can be solved with step-by-step confirmations during the process. For example, if you were to install a web framework, the installation process/script should confirm:

  1. That a server is installed
  2. The installing user has, or can be elevated to, the permissions needed to install the web application
  3. Copy the files over to the web server and start the web application
  4. The web application prompts for a configuration on the first run (I.e. Checking for writable permissions at the get go, setting up users, etc.)
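
A hedged sketch of what such pre-flight checks might look like in an install script (the server binary and deploy path are hypothetical):

#!/bin/bash
# 1. confirm a web server is installed
command -v httpd >/dev/null 2>&1 || { echo "no web server found" >&2; exit 1; }

# 2. confirm the installing user can write to the deploy directory
[ -w /var/www ] || { echo "cannot write to /var/www" >&2; exit 1; }

echo "pre-flight checks passed; continuing with the install..."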

Asking a system administrator to do all of these confirmations manually causes quite a few issues. For example, some web applications have issues with the database configuration after the web site has been put in use. Lastly, if a script fails, it should fail in a sandbox rather than in the live environment. Gentoo, for example, sandboxes ebuilds prior to installing the executables on the system.

Another thing to note: if a user is configuring a database at install time, ask for the host, confirm that the host is reachable, and then allow for a database selection.

Things I learned a week or so ago

These are a few things that I found a week or so ago but hadn’t had the time to post:

  • If the Nvidia package has difficulty finding the version number for the kernel (an issue for kernels 3.7 and higher), link the Versions.h file into /usr/src/linux/include/linux/.
  • If you have “AutoAddDevices” in your xorg.conf file, your mouse and keyboard may not be found when X starts up.
  • For Gentoo users and people who manually build their applications: if you get an error during a build, reduce the number of concurrent builds when searching for the error (i.e. reduce the -jXX argument to -j1). This helped me resolve an issue where groupware_dav in the KDEPIM-runtime package failed to build: the build was having an issue with a libsasl library and required a revdep-rebuild.
  • For those who have NVidia Optimus: you may get an error that “Cannot load glx on :0.” This causes graphics performance to be less than stellar, and KDE Desktop Effects will not operate. To resolve the issue, make sure that the Intel/Mesa graphics driver is reinstalled and switch the default OpenGL provider to xorg-server rather than NVidia. [Use the “eselect opengl set ” command to accomplish this.] Setting the OpenGL provider will not affect Bumblebee’s OpenGL switching.