By: Robert Treat (@robtreat2)
Edited By: Daniel “phrawzty” Maher (@phrawzty)
Like so many things in our industry, our job titles have become victims of the never-ending hype cycle. While the ideas behind terms like “DevOps” and “Site Reliability Engineering” are certainly valid, over time the ideas get lost and the terms become little more than buzzwords, amplified by a recruiting industry more concerned with its own short-term paychecks than our long-term career journeys. As jobs and workers are rebranded as DevOps and SRE, many sysadmins are left wondering if they are being left behind. Add in a heavy dose of the cloud, and a sysadmin has to wonder whether they will have a job at all in a few years. The good news is that despite all the noise, the need for sysadmins has never been stronger; you just need to see the connections between the technology you grew up on and the technology that is going to move us forward in the next several years.
It used to be that when you started a new project, you first had to determine what hardware to use, both from a physical standpoint and from a pricing standpoint. In the cloud, these concerns are still there, but they have shifted. Most people in the cloud no longer worry about sizing infrastructure correctly at the start of a project, but the tradeoff of being able to re-size VMs with relative ease is that it is also easy to oversize your instances, and in the cloud, oversized means overspent, all day, every day. At some point you need to work out the math to determine what your growth curve looks like in terms of resource needs versus dollars spent. Ever made that joke about having to bust out the slide rule to run cost comparisons between Riverbed, NetApp, and LSI? As much as they try, the cloud hasn’t made IOPS go away. Helping estimate how many IOPS an application will consume still requires a bit of math; only now you also need to know your way around EBS, and the tradeoff between general purpose SSDs and Provisioned IOPS volumes, in order to determine a reasonable IOPS-per-dollar ratio. But hey, you’ve done that before.
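To make that concrete, here is a minimal sketch of the kind of back-of-the-envelope comparison involved. The rates and ratios below are placeholder assumptions, not current AWS list prices, so substitute the numbers from your provider’s pricing page before trusting the output.

```python
# Back-of-the-envelope IOPS-per-dollar comparison between a general
# purpose SSD volume and a Provisioned IOPS volume. All prices are
# placeholder assumptions; look up your region's actual rates.

GP_PER_GB_MONTH = 0.10        # $/GB-month, general purpose SSD (assumed)
GP_IOPS_PER_GB = 3            # baseline IOPS that scale with size (assumed)
PIOPS_PER_GB_MONTH = 0.125    # $/GB-month, provisioned IOPS volume (assumed)
PIOPS_PER_IOPS_MONTH = 0.065  # $/IOPS-month at the provisioned rate (assumed)

def gp_volume(size_gb: int) -> tuple[int, float]:
    """Return (IOPS delivered, monthly cost) for a general purpose volume."""
    return size_gb * GP_IOPS_PER_GB, size_gb * GP_PER_GB_MONTH

def piops_volume(size_gb: int, iops: int) -> tuple[int, float]:
    """Return (IOPS delivered, monthly cost) for a provisioned IOPS volume."""
    return iops, size_gb * PIOPS_PER_GB_MONTH + iops * PIOPS_PER_IOPS_MONTH

for size_gb, needed_iops in [(500, 1500), (500, 6000)]:
    gp_iops, gp_cost = gp_volume(size_gb)
    pi_iops, pi_cost = piops_volume(size_gb, needed_iops)
    gp_note = "meets need" if gp_iops >= needed_iops else "falls short"
    print(f"{size_gb} GB volume, {needed_iops} IOPS needed:")
    print(f"  general purpose: {gp_iops} IOPS, ${gp_cost:.2f}/mo "
          f"({gp_iops / gp_cost:.1f} IOPS/$) -- {gp_note}")
    print(f"  provisioned:     {pi_iops} IOPS, ${pi_cost:.2f}/mo "
          f"({pi_iops / pi_cost:.1f} IOPS/$)")
```

The shape of the answer is the familiar one: the general purpose volume wins on IOPS per dollar right up until your workload needs more IOPS than its size-scaled baseline can deliver, at which point you pay the premium for provisioning.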
And that’s the thing; there are many skills which transfer like that. Scared about the new world of microservices? Don’t be - you were dealing with microservice-like architectures long before the rest of us had even heard the term. At its core, a microservice architecture is just a collection of loosely coupled services designed to meet some set of business goals. While we have not traditionally built and run applications that way, the mental leap for a sysadmin familiar with managing machines running Apache, Sendmail, OpenLDAP, and Squid is much smaller than for a developer who has only ever built complex monolithic applications. As sysadmins, we don’t think twice about individual services running on different ports, speaking different protocols, and providing completely different methods for observing their behavior; that’s just the way it is. Compare that to a development community that has wasted a generation trying to build ORMs to abstract away the concept of data storage rather than just learning to talk to another service in its own language.
This isn’t to say you can rest on your laurels. The field of Web Operations and the software that powers it is constantly changing, so you need to develop the ability to take what you already know and apply it to new software and systems. It is worth pointing out that this won’t always be easy or clean; new technology often misrepresents itself. For example, the rise of tools like Chef and Docker left many sysadmins wondering which direction to turn, but if you study these tools for a bit, you see that they follow patterns similar to old techniques. It can certainly be difficult for folks who have spent years coding on the command line to grok the syntax of a configuration management tool’s DSL, but you can certainly understand why companies want to automate systems; the idea of replacing manual labor with something automated is something we print on t-shirts just for fun. And sure, I understand how the yarn ball of recipes, resources, and roles might look like overkill to a lot of people, but I’ve also seen some crazy complex mixes of bash and Perl used as startup scripts during a PXE boot, so it’s all relative.
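Strip away the DSL and a configuration management tool is doing the same ensure-the-desired-state dance you have long scripted by hand: check the current state, change it only if needed, and report what happened. Here is a hedged Python sketch of that pattern; the package name, file path, and Debian-style commands are purely illustrative, not taken from any real recipe.

```python
#!/usr/bin/env python3
"""The desired-state pattern that CM tools wrap in a DSL: inspect,
converge only when necessary, report. All names are illustrative."""
import subprocess

def ensure_package(name: str) -> None:
    # Declare the package; install only if it is missing (Debian-style).
    check = subprocess.run(["dpkg", "-s", name], capture_output=True)
    if check.returncode != 0:
        subprocess.run(["apt-get", "install", "-y", name], check=True)
        print(f"installed {name}")
    else:
        print(f"{name} already present, nothing to do")

def ensure_line(path: str, line: str) -> None:
    # Idempotently ensure a config line exists, like a file resource.
    try:
        with open(path) as f:
            if line in f.read().splitlines():
                print(f"{path} already configured, nothing to do")
                return
    except FileNotFoundError:
        pass
    with open(path, "a") as f:
        f.write(line + "\n")
    print(f"updated {path}")

if __name__ == "__main__":
    ensure_package("ntp")  # hypothetical resource
    ensure_line("/etc/ntp.conf", "server pool.ntp.org iburst")
```

What the DSL buys you on top of this is dependency ordering, a catalog of pre-built resources, and reporting across a fleet - which is exactly why the yarn ball exists.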
When Docker first came on the scene, it also promised to revolutionize everything we know about managing systems. All the hype around containers seemed to focus on resource management and security, but the reality was mostly just a new way to package and distribute software, where by “new” I mostly just mean different. I’ve often lamented that something is lost when an operator never learns how to compile code or bypasses the experience of packaging other people’s software. The promise of Docker is that you can put together systems like a set of Legos, snapping together pre-existing pieces, but stray from the well-trodden path even a little and you’ll find all of those magic compile-time errors and strange library dependencies that you are familiar with from systems past. Like any system built on abstractions, there are assumptions (and therefore dependencies) baked in three levels deep. If you ever debated whether to rewrite an rpm spec file from scratch after half a day hacking on the distro’s spec file, trying to add in the one module you need that the maintainers didn’t… replace “rpm spec file” with “Dockerfile” and you have someone to share root beers with. Sure, the container magic is magic when it works, but the devil is in the dependencies.
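For a hedged illustration of how little it takes to stray off the path, consider a Dockerfile like the one below; the specific package is just an example of any library that compiles a C extension. On a slim base image, the naive one-liner fails for want of a missing pg_config, and you end up chasing down the same compilers and headers you once tracked down for an rpm build.

```dockerfile
# Illustrative only: one common way the lego-brick promise breaks down.
FROM python:3.11-slim

# psycopg2 builds a C extension against libpq. On a slim image, a bare
# "RUN pip install psycopg2" dies with "pg_config executable not found",
# so you have to supply the compiler and headers yourself first:
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc libpq-dev \
    && pip install psycopg2 \
    && rm -rf /var/lib/apt/lists/*
```

None of this is a complaint about Docker so much as a reminder that the dependencies didn’t go away; they just moved inside the image.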
Of course, no conversation about the role of the sysadmin would be complete without touching on the topics of networks and security. While sometimes made the purview of dedicated personnel, at some level these two areas always seem to fall back to the operations team. Understanding the different networks within your organization, the boundaries between those networks, and the who and how of traversing them has always been a part of life as a sysadmin. Unfortunately, in what should be one of the most directly applicable skillsets (networks are still networks), the current situation in cloud land has actually gotten worse; the stakes are fundamentally higher in a world where the public internet is always just a misconfiguration away. Incorrect permissions on a network file share might expose sensitive material to everyone in the company, but incorrect permissions in S3 expose those files to the world. Networking is also more complicated in the cloud world. I’ve always been pretty impressed by how far you could get with VPNs and SSH when building your own infrastructure, but with cloud providers focused on attracting enterprise clients in regulated industries, you’ll have to learn new tooling built to meet those needs, for better or worse. It can still be done; just be aware it is going to work a little differently.
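On the bright side, the same scripting habits transfer here too. As a minimal sketch, here is one way to sweep your S3 buckets for ACLs that grant access to the world using boto3; note that a real audit would also need to examine bucket policies and the account’s Public Access Block settings, which this deliberately skips.

```python
# Minimal sketch: flag S3 buckets whose ACLs grant access to everyone.
# A real audit must also check bucket policies and Public Access Block
# settings; this only covers the classic ACL misconfiguration.
import boto3

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    for grant in acl["Grants"]:
        grantee = grant.get("Grantee", {})
        if grantee.get("URI") in PUBLIC_GRANTEES:
            print(f"{name}: grants {grant['Permission']} to {grantee['URI']}")
```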
So the good news is that the role of the sysadmin isn’t going away. While the specifics may have changed, resource management, automation, packaging, network management, security, rapid response, and all the other parts of the sysadmin ethos remain critical skills that companies will need going forward. Unfortunately, that doesn’t solve the problem of companies thinking they need to hire more “DevOps Developers” (though I never see jobs for “DevOps Operators” - go figure!) and other such crazy things. As I see it, it is easy to make fun of companies that want to hire DevOps engineers because they don’t understand DevOps, but you can also look at it like hiring a network engineer or security engineer: believing you need someone who specializes in automation (and likely CM or CI/CD specifically) is not necessarily a bad thing. So the next time you’re feeling lost or wondering where your next journey may take you, remember that if you focus on the principles of running systems reliably and keep your learning grounded in fundamental computing skills, then even though the tools may change, the problems are fundamentally the same.