Written by: Jan Ivar Beddari (@beddari)
Edited by: Nicholas Valler (@nvaller)
Introduction
ChatOps is a great idea. Done right, it creates a well-defined collaborative
space where the barriers to entry are low and sharing improvements is quick.
Because of the immediate gains in speed and ease, ChatOps implementations have
a tendency to outgrow their original constraints. If this happens, the amount
of information and interrupts a team member is expected to filter and process
might become unmanageable. To further complicate the issue, reaching that limit
is a personal experience. Some might be fine with continuously monitoring three
dashboards and five chat rooms, and still get their work done. Others are more
sensitive and might end up fighting feelings of guilt or incompetence.
Being sufficiently explicit about what and when information reaches team
members takes time to get right. For this reason, I consider shared filtering
to be an inherent attribute of ChatOps, and a very challenging problem to
solve. As humans think and reason differently given the same input, building
and encouraging collaboration around a visible ‘robot’ perhaps isn’t the best
way?
Defining the Team CLI
Taking a step back as an engineer, what alternative approaches exist that
would bring a lot of the same gains as the ChatOps pattern? We want it to be
less intrusive and not as tied to communication, hopefully increasing the
attention and value given to actual human interaction in chat rooms. To me, one
possible answer is to provide a team centric command line interface. This is
a traditional UNIX-like command line tool to run in a terminal window,
installed across all team members' environments. Doing this, we shift our focus
from sharing a centralized tool to sharing a decentralized one. In a
decentralized model, there is an (apparent) extra effort needed to signal or
interrupt the rest of the team. This makes the operation more conscious, which
is a large win.
With a distributed model, where each team member operates in their own context,
a shared cli gives the opportunity to streamline work environments beyond the
capabilities of a chatbot API.
Having decided that this is something we’d like to try, we continue defining a
requirements list:
- Command line UX similar to existing tools
- Simple to update and maintain
- Possible to extend very easily
There’s nothing special or clever about these three requirements. Simplicity is
the non-listed primary goal, using what experience we have to try getting
something working quickly. To further develop these ideas we’ll break down the
list and try to pinpoint some choices we’re making.
Command line UX similar to existing tools
Ever tried sharing a folder full of scripts using git? Scripts don't really
need docs, and by reading git commits everyone can follow along with updates to
the folder, right? No. It just does not work. Shared tooling needs constraints.
Just pushing /usr/local/bin into git will leave people frustrated at the lack
of coherency. As the cognitive load forces people into forking their own
versions of each tool or script, any gains you were aiming for by sharing them
are lost.
To overcome this we need standards. It doesn’t have to involve much work as we
already mostly agree on what a good cli UX is - something similar to well-known
tools we already use. Thus we should be able to quickly set some rules and move
on:
- A single top level command tcli is the main entry point of our tool
- All sub-commands are modules organized semantically using one of the two following syntax definitions:
  - tcli module verb arguments
  - tcli module subject verb arguments
- Use of options is not defined, but every module must implement --help
Unlike a folder of freeform scripts, this is a strict standard. But even so,
the standard is easy to understand and reason about. Its purpose is to create just
enough order and consistency to make sharing and reuse within our team
possible.
Simple to update and maintain
Updates and maintenance are arguably also a part of the UX. A distributed
tool shared across a team needs to be super simple to maintain and update. As a
guideline, anything more involved than running a single command would most
likely be off-putting. Having the update process stay out of any critical usage
paths is equally important. We can’t rely on a tool that blocks to check a
remote API for updates in the middle of a run. That would break our most valued
expectation - simplicity. To solve this with a minimal amount of code, we could
reuse some established external mechanism to do update checks.
- Updates should be as simple as possible, ideally git pull-like.
- Don't break expectations by doing calls over the network, shelling out to package managers or similar.
- Don't force updates; stay out of any critical paths.
Possible to extend very easily
Extending the tool should be as easy as possible and is crucial to its long
term success and value. Typically there’s a large amount of hidden specialist
knowledge in teams. Using a collaborative command line tool could help share
that knowledge if the barrier to entry is sufficiently low. In practice, this
means that the main tool must be able to discover and run a wide variety of
extensions or plugins delivered using different methods, even across language
platforms. A great example of this is how it is possible to extend git with
custom sub-commands just by naming them git-my-command
and placing them in
your path.
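As a quick illustration of that mechanism (the script name and directory below are made up for the example), any executable named git-<something> on your PATH becomes a git sub-command:

```shell
# create a throwaway directory holding a custom git sub-command
mkdir -p /tmp/git-ext-demo
cat > /tmp/git-ext-demo/git-hello <<'EOF'
#!/bin/sh
echo "hello from a custom sub-command"
EOF
chmod +x /tmp/git-ext-demo/git-hello

# once it is on PATH, git dispatches 'git hello' to the script
export PATH="$PATH:/tmp/git-ext-demo"
git hello
```

This is exactly the discovery model we want tcli to copy.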
Another interesting generic extension point to consider is running Docker
images as plugin modules in our tool. There’s a massive amount of tooling
already packaged that we’d be able to reuse with little effort. Just be sure to
maintain your own hub of canonical images from a secure source if you are doing
this for work.
Our final bullet point list defining goals for extensions:
- The native plugin interface must be as simple as possible
- Plugins should be discovered at runtime
- Language and platform independent external plugins are a first-class use case
Summoning a Python skeleton
Having done some thinking to define what we want to achieve, it’s time to start
writing some code. But why Python? What about Ruby, or Golang? The answer is
disappointingly simple: for the sake of building a pluggable cli tool, it does
not matter much what language we use. Choose the one that feels most
comfortable and start building. Due to our design choice to be able to plug
anything, reimplementing the top command layer in a different language later
would not be hard.
So off we go using Python. Anyone who has spent time with it will probably
recognize some of the projects listed on the http://www.pocoo.org/ site, all of
them highly valued with great documentation available. When I learned that it
also hosts a cli library called Click, I was intrigued by its description:
“Click is a Python package for creating beautiful command line interfaces in a
composable way with as little code as necessary.”
Sounds perfect for our needs, right? Again, the documentation is great as it
doesn't assume anything and provides ample examples. Let's try to get 'hello
tcli’ working!
Hello tcli!
The first thing we’ll need is a working Python dev environment. That could mean
using a virtualenv, a tool and method used for separating libraries and
Python runtimes. If just starting out, you could use virtualenvwrapper, which
further simplifies managing these envs. Of course you could also just skip all
this and go with using Vagrant, Docker or some other environment, which will be
just fine. If you need help with this step, please ask!
Let’s initialize a project, here using virtualenvwrapper:
mkvirtualenv tcli
mkdir -p ~/sysadvent/tcli/tcli
cd ~/sysadvent/tcli
git init
Then we’ll create the three files that is our skeleton implementation. First
our main function cli() that defines our topmost command:
tcli/cli.py
import click


@click.group()
def cli():
    """tcli is a modular command line tool wrapping and simplifying common
    team related tasks."""
Next, an empty file to mark the tcli sub-directory as containing Python packages:
touch tcli/__init__.py
Last we’ll add a file that describes our Python package and its dependencies:
setup.py
from setuptools import setup, find_packages

setup(
    name='tcli',
    version='0.1.0',
    packages=find_packages(),
    include_package_data=True,
    install_requires=[
        'click',
    ],
    entry_points='''
        [console_scripts]
        tcli=tcli.cli:cli
    ''',
)
The resulting file structure should look like this:
tree ~/sysadvent/
/home/beddari/sysadvent/
└── tcli
├── setup.py
└── tcli
├── cli.py
└── __init__.py
That’s all we need for our ‘hello tcli’ implementation. We’ll install our newly
crafted Python package as being editable - this just means we’ll be able to
modify its code in-place without having to rerun pip:
pip install --editable $PWD
pip will read our setup.py file and first install the minimal needed
dependencies listed in the install_requires array. You might know another
mechanism for specifying Python deps using requirements.txt, which we will not
use here. Last, it installs a wrapper executable named tcli pointing to our
cli() function inside cli.py. It does this using the configuration values
found under entry_points, which are documented in the Python Packaging User
Guide.
Be warned that Python packaging and distribution is a large and sometimes
painful subject. Outside internal dev environments I highly recommend
simplifying your life by using fpm.
That should be all. If the stars aligned correctly, we're now ready for the
inaugural tcli run in our shell. It will show a help message and exit:
(tcli) beddari@mio:~/sysadvent/tcli$ tcli
Usage: tcli [OPTIONS] COMMAND [ARGS]...
tcli is a modular command line tool wrapping and simplifying common team
related tasks.
Options:
--help Show this message and exit.
Not bad!
Adding commands
As seen above, the only thing we can do so far is specify the --help option,
which is also done by default when no arguments are given. Going back to our
design, remember that we decided to allow only two specific UX semantics in our
command syntax. Add the following code below the cli() function in cli.py:
@cli.group()
def christmas():
    """This is the christmas module."""

@christmas.command()
@click.option('--count', default=1, help='number of greetings')
@click.argument('name')
def greet(count, name):
    for x in range(count):
        click.echo('Merry Christmas %s!' % name)
At this point, we should treat the @sysadvent
team to the number of greetings we think they deserve:
tcli christmas greet --count 3 "@sysadvent team"
The keys to understanding what is going on here are the @cli.group() and
@christmas.command() lines: greet() is a command belonging to the
christmas group, which in turn belongs to our top level click group. The
Click library uses decorators - a common Python pattern - to achieve this.
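To demystify the pattern, here's a minimal sketch of decorator-based command registration. This is illustrative only, not Click's actual implementation; the Group class and its command() method are made up for the example:

```python
# Illustrative only: a tiny group object whose command() decorator
# registers functions by name, mimicking how Click wires commands.
class Group:
    def __init__(self):
        self.commands = {}

    def command(self, name=None):
        def decorator(func):
            # store the function under its own name (or an explicit one)
            self.commands[name or func.__name__] = func
            return func
        return decorator

cli = Group()

@cli.command()
def greet():
    return 'Merry Christmas!'

# the decorator registered greet() under its own name
print(cli.commands['greet']())  # prints: Merry Christmas!
```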
Spending some hours with the Click documentation we should now be able to
write quite complex command line tools, using minimal Python boilerplate code.
In our design, we defined goals for how we want to be able to extend our
command line tool, and that is where we’ll go next.
Plugging it together
The Click library is quite popular and there’s a large number of
third party extensions available. One such plugin is click-plugins, which
we’ll use to make it possible to extend our main command line script. In Python
terms, plugins can be separate packages that we’ll be able to discover and load
via setuptools entry_points. In non-Python terms this means we’ll be able to
build a plugin using a separate codebase and have it publish itself as
available for the main script.
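For reference, here is a sketch of what such an external plugin's setup.py might contain. The package and entry point names are illustrative, not taken from a real project; the key idea is publishing a Click group under the tcli.plugins entry point group:

```python
# setup.py of a hypothetical external plugin package: registering a
# Click group under 'tcli.plugins' lets the main tcli script discover it.
from setuptools import setup, find_packages

setup(
    name='tcli-myplugin',
    version='0.1.0',
    packages=find_packages(),
    install_requires=['click', 'click-plugins'],
    entry_points='''
        [tcli.plugins]
        myplugin=tcli_myplugin.plugin:myplugin
    ''',
)
```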
We want to make it possible for external Python code to register at the
module level of the UX semantics we defined earlier. To make our main tcli
script dynamically look for registered plugins at runtime we’ll need to modify
it a little:
The first 9 lines of tcli/cli.py should now look like this:
from pkg_resources import iter_entry_points

import click
from click_plugins import with_plugins


@with_plugins(iter_entry_points('tcli.plugins'))
@click.group()
def cli():
Next, we’ll need to add click-plugins to the install_requires
array in our
setup.py file. Having done that, we reinstall our project using the same
command originally used:
pip install --editable $PWD
Reinstalling is needed here because we're changing not only the code but also
the Python package setup and dependencies.
To test if our new plugin interface is working, clone and install the example
tcli-oncall project:
cd ~/sysadvent/
git clone https://github.com/beddari/tcli-oncall.git
cd tcli-oncall
pip install --editable $PWD
After installing, we have some new example dummy commands and code to play
with:
tcli oncall take "a bath"
Take a look at the setup.py and tcli_oncall/plugin.py files in this project
to see how it works.
There’s bash in my Python!
The plugin interface we defined above obviously only works for native Python
code. An important goal for us is however to integrate and run any executable
as part of our cli as long as it is useful and follows the rules we set. In
order to do that, we’ll replicate how git extensions work to add commands
that appear as if they were built-in.
We create a new file in our tcli project and add the following code (adapted
from this gist) to it:
tcli/utils.py
import os
import re
import itertools
from stat import S_IMODE, S_ISREG, ST_MODE
def is_executable_posix(path):
    """Whether the file is executable.

    Based on which.py from stdlib.
    """
    try:
        st = os.stat(path)
    except OSError:
        return None
    isregfile = S_ISREG(st[ST_MODE])
    isexemode = S_IMODE(st[ST_MODE]) & 0o111
    return bool(isregfile and isexemode)

def canonical_path(path):
    return os.path.realpath(os.path.normcase(path))
The header imports some modules we'll need, followed by two helper functions.
The first checks whether a given path is an executable file; the second
normalizes paths by resolving any symlinks in them.
Next we’ll add a function to the same file that uses these two helpers to
search through all directories in our PATH
for executables matching a regex
pattern. The function returns a list of pairs of plugin names and executables
we’ll shortly be adding as modules in our tool:
def find_plugin_executables(pattern):
    filepred = re.compile(pattern).search
    seen = set()
    plugins = []
    for dirpath in os.environ.get('PATH', '').split(os.pathsep):
        if not os.path.isdir(dirpath):
            continue
        rp = canonical_path(dirpath)
        if rp in seen:
            continue
        seen.add(rp)
        for filename in filter(filepred, os.listdir(dirpath)):
            path = os.path.join(dirpath, filename)
            if is_executable_posix(path):
                cmd = os.path.basename(path)
                name = re.search(pattern, cmd).group(1)
                plugins.append((name, cmd))
    return plugins
Back in our main cli.py, add another function and a loop that iterates
through the executables we've found to tie this together:
tcli/cli.py
import tcli.utils
from subprocess import call
def add_exec_plugin(name, cmd):
    @cli.command(name=name, context_settings=dict(
        ignore_unknown_options=True,
    ))
    @click.argument('cmd_args', nargs=-1, type=click.UNPROCESSED)
    def exec_plugin(cmd_args):
        """Discovered exec module plugin."""
        cmdline = [cmd] + list(cmd_args)
        call(cmdline)

# regex filter for matching executable filenames starting with 'tcli-'
FILTER = "^%s-(.*)$" % __package__

for name, cmd in tcli.utils.find_plugin_executables(FILTER):
    add_exec_plugin(name, cmd)
The add_exec_plugin function adds a little bit of magic: it has an inner
function exec_plugin that represents the command we are adding, dynamically.
The function stays the same every time it is added; only its variable data
changes. Perhaps surprisingly, the cmd variable is also addressable inside
the inner function. If you think this sort of thing is interesting, the topics
to read more about are scopes, namespaces and decorators.
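A minimal sketch of why the wrapping function matters: closures capture variables, so the factory gives each generated command its own cmd binding. The make_runner name and command strings here are made up for the example:

```python
# Each call to make_runner() creates a fresh scope, so every inner
# function remembers its own cmd value rather than sharing one.
def make_runner(cmd):
    def run():
        return 'running %s' % cmd
    return run

runners = [make_runner(c) for c in ('tcli-ls', 'tcli-builder')]
print(runners[0]())  # prints: running tcli-ls
print(runners[1]())  # prints: running tcli-builder
```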
With a dynamic search and load of tcli-prefixed executables in place, we
should test that it works as it should. Make a simple wrapper script in your
current directory, and remember to chmod +x it:
tcli-ls
#!/bin/bash
ls "$@"
Running the tcli command will now show a new module called 'ls' which we can
run, after adding the current directory to our PATH for the test:
export PATH=$PATH:.
tcli ls -la --color
Yay, we made ourselves a new way of calling ls. Perhaps time for a break ;-)
An old man and his Docker
As the above mechanism can be used to plug any wrapper as a module we now have
a quick way to hook Docker images as tcli modules. Here’s a simple example
that runs Packer:
tcli-builder
#!/bin/bash
sha256="95ad93dc3ba8673410d919f44e86002659f5c157fc96f1afee4d44146069e189"
docker run --rm -it "hashicorp/packer@sha256:$sha256" "$@"
The last command below should run the entrypoint from hashicorp/packer,
and we’ve reached YALI (Yet Another Layer of Indirection):
export PATH=$PATH:.
tcli
tcli builder
Hopefully it is obvious how this can be useful in a team setting. However,
creating bash wrappers for Docker isn't that great; it would be a better and
faster UX if we could discover what (local?) containers to load as tcli modules
automatically. One idea to consider is an implementation where tcli uses data
from Docker labels following Label Schema. The org.label-schema.name and
org.label-schema.description labels would be of immediate use, representing the
module command name and a single line of descriptive text, suitable for the top
level tcli --help command output. Docker has an easy-to-use Python API, so
anyone considering that as a project should start from there.
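As a sketch of that idea, the label parsing itself is plain Python. The function name and sample labels below are made up; with the Docker SDK you would feed it each image's labels dict:

```python
# Hypothetical helper: map Label Schema labels to a (command, help) pair
# suitable for registering a Docker image as a tcli module.
def module_from_labels(labels):
    name = labels.get('org.label-schema.name')
    description = labels.get('org.label-schema.description', '')
    if not name:
        return None  # image does not describe itself as a module
    return (name, description)

# sample labels, shaped like what the Docker SDK returns in image.labels
labels = {
    'org.label-schema.name': 'packer',
    'org.label-schema.description': 'Builds machine images',
}
print(module_from_labels(labels))  # prints: ('packer', 'Builds machine images')
```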
Other plugin ideas
The scope of what we could or should be doing with the team cli idea is
interesting, bring your peers in and discuss! For me however, the fact that it
runs locally, inside our personal dev envs, is a large plus.
Here’s a short list of ideas to consider where I believe a team cli could bring
advantages:
git projects management, submodules replacement, templating

tcli project list   # list your team's git repositories, with descriptions
tcli project create # templating
tcli project [build|test|deploy]

This is potentially very useful for my current team at $WORK. I'm planning to
research how to potentially do this with a control repo pattern using gitman.

Secrets management

While waiting for our local Vault implementation team to drink all of their
coffee, we can try making a consistent interface to (a subset of) the problem?
Plugging in our current solution (or non-solution) would help, at least. If you
don't already have a gpg wrapper, I'd look at blackbox.

Shared web bookmarks

tcli web list
tcli web open dashboard
tcli web open licensing

Would potentially save hours of searching in a matter of weeks ;-)

On-call management

E.g. as the example tcli-oncall Python plugin we used earlier.

Dev environment testing, reporting, management

While having distributed dev environments is something I'm a big fan of, it is
sometimes hard figuring out just WHAT your coworker is doing. Running tests in
each team member's context to verify settings, versioning and so on is very
helpful. And really, there's no need for every single one of us to have our
own, non-shared Golang binary unzip update routine.
Wait, what just happened?
We had an idea, explored it, and got something working! At this stage our team
cli can run almost anything and do so with an acceptable UX, a minimum of
consistency and very little code. Going further, we should probably add some
tests, at least to the functions in tcli.utils. Also, an even thinner design
of the core, where discovering executables is a plugin in itself, would be
better. If someone wants to help make this a real project and iron out these
wrinkles, please contact me!
You might have noticed I didn’t bring up much around the versus ChatOps
arguments again. Truth is there is not much to discuss; I just wanted to
present this idea as an alternative, and the term ChatOps gets people thinking
about the correct problem sets. A fully distributed team would most likely
try harder to avoid centralized services than others. There is quite some power
to be had by designing your main automated pipeline to act just as another
user, driving the exact same components and tooling as us non-robots.
In more descriptive, practical terms it could be you notifying your team ‘My
last build at commit# failed this way’ through standardized tooling, as
opposed to the more common model where all build pipeline logic and message
generation happens centrally.