On IT Operations and Infrastructures


Friday 3 December 2010

Reply to the Institut Agile post: "Devops - premières rencontres et survol"

Since I wasn't able to comment directly on the Institut Agile blog, I am exceptionally posting my reply here.

--

Laurent,

Thank you for this write-up; unfortunately I wasn't able to attend this first Parisian meetup.

I have to push back a little on the impression given by your sentence "Vous aurez compris qu'on est entre techniciens..." ("You will have gathered that this is techies among themselves...").

I agree that the choice of the name devops is somewhat unfortunate, and likely to give the impression that this is a movement of technicians, for technicians.

We have come a long way since the first Devopsdays in Ghent in 2009, where the name appeared, and even at the time the name seemed too narrow, because it addressed only part of the attendees' interests.

In fact, the difficulties between dev and ops were only the tip of the iceberg, and the problems discussed were as much technical as organisational or human.

Because of the conference's international reach, and because it filled a void, the term devops (1) spread like wildfire and stuck.

While one may regret that the term is misleading, in my view this flaw is offset by the fact that there is now a label under which many of us can gather to exchange ideas.

Given the profiles of the people who attended the first Paris devops meetup, I have no doubt that most of the topics discussed were rather technical, and it is only logical that this is what comes across in your write-up; but it would be a shame, it seems to me, if it reinforced in your readers the ambiguity already attributable to the name.

In my view, devops is a movement that deals with the problems of corporate IT, which is a vast subject! Indeed, I fully share Damon Edwards' opinion: devops is not the answer to a technical problem but to a business problem (2). I invite readers who are curious, or averse to English, to read the quick introduction I posted several months ago on devops.fr (3).

Regards, Gildas Le Nadan @endemics --

1- Most often, incidentally, in a form, "DevOps", different from the one originally intended by Patrick Debois, for whom the capital letters unfortunately recall the very separation between dev and ops.

2- http://dev2ops.org/blog/2010/11/7/devops-is-not-a-technology-problem-devops-is-a-business-prob.html

3- http://www.devops.fr/ which I urgently need to update

Tuesday 27 July 2010

Devops Meetups and Devops Dojos

Devopsday USA 2010 and the first Silicon Valley Devops Meetup

In late June/early July this year I went to San Francisco for devopsday USA 2010, which I had the pleasure of co-organizing with Damon Edwards, Patrick Debois and Andrew Shafer.

I really enjoyed the experience and am glad so many people came to attend the conference (a special shout-out to the French diaspora: Alexis, Olivier, Patrice and Jérôme). I now look forward to another chance to contribute to the next events!

I was still in the Bay Area on July the 6th when the first Silicon Valley Devops Meetup was organized by Dave Nielsen in Mountain View, and I decided to attend.

Although Patrick and I had been in contact and working on presentations about "Agile and Operations" and "Continuous Deployment pipelines" months before he coined the devops term and decided to create the first Devopsdays, we don't live close enough to one another to see each other regularly, and I don't yet know enough devops-minded people locally to start regular meetups, so it was interesting for me to see what form the meetup would take (sadly I haven't managed to attend the popular London meetups yet).

The meetup started with a little discussion on the group name and on what the content and form should be for the following meetups.

The first Devopsdays was a 2-day conference with speakers in the morning and openspaces/unconference sessions in the afternoon. I felt it was a nice format, since the morning presentations would raise interest in specific subjects and fuel the afternoon debates without restricting them. (We were more constrained by time - only one day - for devopsday USA 2010 and had plenty of speakers, so we decided to have only panels and a few lightning talks to raise interest/awareness of other subjects.)

I guess I felt that I was passing on the torch somehow, and since some of the topics and discussions that took place during (and after) Devopsdays came back, it was an opportunity to share what was said and done back then. I think that the biggest benefit of the devops movement is that it enables people to share their experience with one another, and I believe this is one of the ways we can solve the problem I addressed in my first post.

One of the things from Ghent that I mentioned was the very nice experiment by Lindsay Holmwood, who proposed a 1-hour gang-development session on "cucumber as a script language". Not only was the subject cool, but there was actually concrete code produced after the session, and I believe it is great if we can not only exchange ideas but also produce something that goes in the right direction.

Even though the devops movement is very much about people, about having the right mindsets, about breaking silos and about business alignment and change management, it is also about tools. And I think that since developers and ops (and network and security and QA) people meet together during the meetups and conferences, it is also probably the right place for new tools to emerge, tools that can efficiently and elegantly solve the daily pain points and bring people together/help them concentrate on what's really important.

This is why I was really happy to see the meetup then continue with a nice presentation by Alex Honor on the "devops toolchain project". I'm glad I had the opportunity to meet Alex several times during my stay in the USA, as he too had been thinking about those issues for a long time. His work on the toolchain helps point out the gaps, the same way the "missing tools?" session during Ghent's Devopsdays did, and there is a lot to do!

Devops Dojos?

Before I was involved in the devops movement, I was very much influenced by the Agile community, thanks to my friend Raphaël Pierquin (I also met Patrick thanks to him). He is the one who introduced me to the notion of "Coding Dojos".

I'm not sure who invented Coding Dojos in the first place (it might have been Laurent Bossavit et al.), but the idea is roughly: "how come you are supposed to become a Java expert after a one-week course when it takes a lifetime of regular training to become a martial arts expert?" As a martial arts practitioner myself, I find this idea sound.

Still, while I'm sure regular training on devops ideas makes sense, I'm not sure exactly how it should be done:

  • Do we need to train on a specific problem, a specific tool or on a specific method?
  • Maybe we could do retrospectives on a problem we've had and the solution we've implemented, to see how others would have fixed it?
  • Maybe this could be an opportunity to design a tool that would solve a specific problem, or a modification on an existing tool so it would be a better fit?

If you guys have an idea about this, I'd be really interested in hearing it!

Wednesday 2 June 2010

Why I don't want a 1024x600 screen

I know this is slightly off topic, but I've been complaining a lot lately about 1024x600 screens on twitter and I probably need to explain why :)

Apparently, 4:3 was recently declared deprecated and bad, so everyone moved to 16:10 or 16:9 formats. That was alright for me as long as the resolution stayed above 1024x768, but unfortunately almost all the netbooks and tablets seem to be afflicted with a 1024x600 screen. Except Apple's iPad.

I might not want an Apple device for other reasons, but I don't think 1024x768 is a bastard size. It certainly was the standard not long ago on CRTs and LCDs. So much so, in fact, that (for better or for worse) almost all websites are designed for that size.

So what happens when I try to view a website on my 1024x600 netbook? I scroll. Vertically or horizontally. All the time. And I swear a lot too.

Watching a video on youtube is a pain unless I'm browsing full screen. Same for all the flash stuff. All in all, the user experience is really not enjoyable, and most of the time comparable to having an 800x600 screen (I even wonder if all this scrolling isn't causing me carpal tunnel syndrome on top of all the other pains). And it's not limited to browsing the web: using a regular OS (not optimized for this form factor) or editing a document is a pain too.

So, no thanks, no 1024x600 screen for me in the future. If you really insist on the 16:9/16:10 format because it gives the device a nicer form factor or is cheaper to produce, then fine, but I would gladly pay a premium for 1366x768 and avoid the broken 1024x600 resolution.

Thursday 25 March 2010

the certified DBA

Yesterday my friend and ex-colleague Iain sent me a link to this dailyWTF article. I felt the content of the article and of (most of) the comments was so wrong on so many levels that I had to write something about it...

the RH Performance Tuning course

I suspect he did this because he remembered a heated discussion I had in a team meeting with our team leader back when Iain and I worked together: our team leader was coming back from a "Red Hat Performance Tuning" course and said there were a lot of things we could do to improve the performance of our systems, including:

  • ensure that all systems had swap defined as twice the amount of RAM
  • ensure that the /tmp partitions were created on the outer part of the "spindles"

I expressed serious doubts about the validity of those assumptions in a modern IT environment.

First of all, memory is cheap nowadays and QoS matters. In most cases, a swapping server is the surest sign that it won't be able to offer the right level of service: it indicates either that there is something wrong with the software, like a memory leak, or that the server is not properly sized for the task.
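Since "is this box swapping?" is the first question to ask before touching swap sizing, here is a minimal sketch of how one might check it by parsing /proc/meminfo-style output (the sample text below is made up; the field names are the ones Linux uses):

```python
def swap_usage(meminfo_text):
    """Parse /proc/meminfo-style text and return swap usage as a fraction (0.0-1.0)."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if rest.strip():
            fields[key] = int(rest.split()[0])  # values are in kB
    total = fields.get("SwapTotal", 0)
    if total == 0:
        return 0.0  # no swap configured at all
    return (total - fields.get("SwapFree", 0)) / total

# Made-up sample; in practice read open("/proc/meminfo").read() on Linux.
sample = """MemTotal: 8048576 kB
MemFree: 1024000 kB
SwapTotal: 2097148 kB
SwapFree: 1048574 kB"""

print(swap_usage(sample))  # prints 0.5: half the swap is in use
```

A non-zero, growing figure here is the cue to look for a leak or resize the box, not to blindly apply the "twice the RAM" rule.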

The partitioning issue is very similar to the case described in the dailyWTF article. It is based on physical assumptions that are not necessarily true nowadays, especially when we are talking about partitions made on a hardware raid1 volume using multi-platter drives. In my opinion, there was no guarantee that either the firmware of the RAID controller or that of the drives would do what we think they do.

Proof versus Belief

Interestingly enough, it seems that I was wrong and that drive manufacturers do their best to keep a mapping that stays in sync with the belief system in place, as proved by the zcav tests pointed out in one of the article's comments.

What is important here is the experimental evidence as opposed to beliefs or possibly outdated knowledge.

Still, it is important to remember that the published zcav data are only valid in the context of the tests: they might not be valid for your production system with your set of drives, your raid controller and, above all, your application needs.

Of course, with enough experience with a specific application, generic rules can possibly be deduced, as long as they are methodically deduced rules and not just wild assertions.

Alas, if the conditions change, the rules are no longer valid, so you can't blindly follow them when you do performance optimization; you can only use them as hints or possible things to try: only in-situ measurements can validate a hypothesis and prove performance increases.

Which means that if you have to optimize performance, you have to use your brain and common sense, and produce reproducible test results!
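That discipline can be sketched in a few lines: time the same workload several times and compare medians, rather than trusting a single run or a rule of thumb. The workloads below are throwaway stand-ins for whatever you are actually tuning:

```python
import time
import statistics

def benchmark(workload, runs=5):
    """Run a workload several times and return the median wall-clock time in seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)  # median resists one-off outliers

# Hypothetical stand-ins for "before" and "after" a tuning change.
baseline = benchmark(lambda: sum(range(100_000)))
candidate = benchmark(lambda: sum(range(100_000)))

# Only a reproducible, significant gap justifies keeping the change.
print(f"baseline={baseline:.6f}s candidate={candidate:.6f}s")
```

Run it on the real system, under a realistic load, before and after each change: that is the in-situ measurement the rules of thumb cannot replace.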

Overcomplicated setup

With such a complicated setup, it can be difficult to measure the right thing and there can be plenty of unwanted interactions.

On the other hand, if you can't prove it makes a difference, you are just over-complicating things for the sake of it, which means the setup will be more difficult to maintain and diagnose for no provable benefit.

If it brings more performance, the benefits will still have to be evaluated against the operational risks that the complexity brought.

Not only the amount of complexity but also where you add it matters: the further down the stack you push complexity, the harder it is to change things. A configuration option in an application is easier to change (and revert) than the version of the software.

If you depend on a specific software and hardware stack for your system/application to work, you are tied down and have very limited ways to make your solution evolve or adapt. This tends to create systems where changes induce more risks.

This might not necessarily be a problem, especially if your system does not evolve much in either functionality or scale, and the risks can be mitigated by tests, but those costs must be clearly understood when the decision is taken.

Usually a lot of optimization can be done at the highest level, i.e. the user side, with limited risks and effort. However, it can't be achieved if you don't understand what you're doing, nor if you don't understand what the client applications/users are doing.

Instead of shooting in the dark by applying random recipes, talking to the users to get the Big Picture can help make the system more in sync with the actual needs, and will let you identify which paths can be explored.

Sometimes it can be as easy as spreading the load over the course of a day/week/month instead of having everyone doing their queries at the same time.

Also, provided the DB is not used by a blackbox system, there are different things that can be done either on the DB side or on the application side:

  • pruning the tables
  • optimizing the schema/indexes/queries
  • queuing/asynchronous queries
  • splitting/sharding the tables
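
As a toy illustration of the first two items (the schema, table and column names here are invented for the example), with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, day INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO events (day, payload) VALUES (?, ?)",
    [(d, f"event-{d}-{i}") for d in range(1, 11) for i in range(100)],
)

# Pruning: drop rows outside the retention window instead of scanning them forever.
conn.execute("DELETE FROM events WHERE day < 8")

# Indexing: let frequent lookups by day use an index instead of a full table scan.
conn.execute("CREATE INDEX idx_events_day ON events (day)")

remaining = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(remaining)  # prints 300: days 8, 9 and 10 remain
```

The same two moves - shrink the working set, then index the hot access path - are often worth trying before any hardware-level tuning.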

In my experience, the need to do performance tuning usually comes from a bad design and the inability to scale horizontally. The IT industry has been relying too much on the ability to scale vertically, and unfortunately, it seems that apart from the big web players, only a few companies have realized that vertical scaling is barely an option any more.

The communication problem

I believe the communication issue and lack of trust to be the most fundamental problem.

It is obvious that the company has an "Us vs Them" syndrome between DBAs and Ops, indicating a big silo problem, and I doubt this problem is limited to the Ops-DBA interaction; it probably extends to interactions between other teams as well.

I think the Ops person and his boss did the wrong thing there by sweeping things under the rug. I believe it was only pride that made the Ops guy behave the way he did, and I think this will only create more problems for him in the future.

A better approach might have been to show the DBA that there was a better way to do things at the system level, to build a trust relationship with him, and to encourage communication with the DB users.

Pouring oil on the fire will stop neither the fire from spreading nor the false beliefs...

Monday 2 March 2009

self documented agile infrastructure

In my latest position, as an IT Operations Manager, I was confronted with the classic problems of an immature Operations team: we were understaffed and in fire-fighting mode, documentation was poor (either missing or out of date, often misleading), there were almost no backups, and the team members had almost no overlap in their skillsets and were demotivated.

I couldn't afford to lose a single person from my team, as the loss of knowledge would be dire for the company; and to make things even more complicated, our CEO wanted us to be able to deploy our home-made software to remote client sites.

On the good side, one of my team members had an excellent knowledge of the home-made software, another was a good Perl developer, there was good knowledge of Suse and rpm packaging, and they had already set up a Subversion repository and a basic puppet setup.

To consolidate the knowledge and move away from manual operations, it was decided to use svn, puppet, Suse and pxe to build a self-documented agile infrastructure where anyone would be able to deploy new services.

The basic blocks

The applications were packaged using rpm and the latest valid version was stored on a file server, but all the configuration files (including those needed to build the packages) were stored in subversion.

This way, it was possible to keep track of the changes (who, why) while at the same time being able to retrieve the latest valid version with a simple 'svn co'. The svn commits were sent to all team members, which kept everyone informed of what was going on.

The recipes

The services and server setups were described in puppet and stored in subversion. The services were described in a generic manner, using templates as configuration files, so you could instantiate a new service by deploying the needed rpms and creating "on the fly" the configuration files adapted to that specific instance. The important idea was that no manual operation was needed to deploy a new service, thus making deployments perfectly reproducible.
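To give an idea of what such a recipe can look like (this is only a sketch with invented module, class, package and file names, not our actual manifests), a templated service in puppet might be declared as:

```puppet
# Hypothetical generic service class: instantiate it per node with its own settings.
class myapp::frontend ($listen_port = 8080, $backend_host = 'localhost') {
  package { 'myapp-frontend':
    ensure => installed,
  }

  # The config file is generated "on the fly" from a template for this instance.
  file { '/etc/myapp/frontend.conf':
    content => template('myapp/frontend.conf.erb'),
    require => Package['myapp-frontend'],
    notify  => Service['myapp-frontend'],
  }

  service { 'myapp-frontend':
    ensure => running,
    enable => true,
  }
}
```

Instantiating the class with different parameters then yields a new, reproducible service instance, with its configuration file rendered from the template and the service restarted whenever the configuration changes.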

Thanks to this solution, one could easily deploy a new instance of a service on either a physical or virtual machine. As we were in a j2ee world with a multi-tiered application, you could either stack several services on a machine (for development or testing for instance) or one service per machine, depending on your needs.

The nice side effect is that puppet becomes the live documentation of your systems, as it defines and enforces the active configuration! Since the puppet files are also stored in svn, it is possible to see all the changes to a file through time, with the associated comments.

The drawback of the system is that extreme care must be taken not to manually tamper with the configuration of the servers: everything MUST go through puppet, and the comments must be kept relevant.

The deployment system

The machines could be either physical or virtual, and pxe combined with kickstart was used to deploy a basic setup consisting of a minimal Suse install + puppet. Of course, the kickstart files were stored in svn. Once a server was deployed, puppet could then populate it with a set of services/configurations.

The backup server

Since a service/server could easily be reinstalled using this solution, there was no need to back them up, which is a big time and tape saver.

This way you can concentrate on backing up your application data, that is, your production dataset as well as the files on the file server and the subversion repository.

In our setup, it was decided to sync the subversion repository and the files stored on the fileserver between 2 sites. Also, thanks to the use of subversion, everyone in the team had the files on their own machine.

Disaster recovery

During the implementation, cross-dependencies between the subversion, installation, puppet, file and backup servers were considered in order to allow a complete restoration of the infrastructure, provided that we had access to the backup tapes and could reinstall the backup server manually using a Suse install media.

It was decided that the subversion, file, build and installation services would be installed on a single machine. From there, you could reinstall the puppet server via a very limited set of operations that were documented with care (basically, installing the packages and checking out the svn repository).

Once this is done, and provided all your infrastructure is described using puppet recipes, you can easily repopulate your servers in case of disaster recovery; the same mechanism could also be used to install everything on a remote site, provided you have a machine where you can bootstrap your infrastructure.

Friday 16 January 2009

On the Shortcomings Of Systems and Networks Engineers Training

As far as I know, there is no course to become a Systems and Networks Engineer, aside from courses to learn (and gain certification in) a given vendor's product. In fact, back in my university years, I remember that my teachers seemed to assume there was no interest in this kind of thing, as learning the options and caveats of a particular product was all you needed. In their eyes, algorithmic and development approaches (RAD and OO at the time) were where the real focus lay.

In my case, the situation might have been worsened by the traditional friction in France between university (where the "real, pure, academic" research is done) and the Écoles d'Ingénieur (where you learn about engineering and sometimes conduct "applied research"), but I'm not so sure the situation would have been much different in an engineering school or another country (I'd be interested in your feedback to prove me wrong!).

So, how does one become a Systems and Networks Engineer? Well, it's easy: you learn by yourself, usually starting with a small set of machines and mainly through trial and error. If you're lucky, you might benefit from someone else's experience and coaching. But still, it remains mostly an ad-hoc approach.

Of course, you quickly learn to avoid tinkering with the production platform on a Friday evening, and given enough experience you can even begin to "guesstimate" - to a greater or lesser degree of accuracy - the impact of such-and-such a modification. Then, hopefully, the number of systems you manage will increase, until eventually you find out the hard way that complexity doesn't grow linearly with the number of systems.

I would even claim that given the chance to work with different environments and large scale platforms (highly available, highly loaded web platforms; HPC clusters; heterogeneous banking environments), one might infer common rules of thumb and even have the hubris to try to find a meaning in the chaos.

The fact, however, is that I believe this ad-hoc approach to learning the job and the lack of (field proven) best-practice references to be The Source Of All Evil.

First of all, from this learning process comes an approach built on unproven beliefs, mythology and carved-in-stone rules ("one needs twice the amount of RAM as swap space"). It also makes it difficult to assess someone's ability as a Systems and Networks Engineer other than by considering her technical knowledge/certifications or previous experience in a similar position.

Secondly, the good practice of "not changing what works", forged by the trial-and-error approach, tends to encourage cruft accumulation and creates a certain reluctance to change anything at all. As a result, risk-mitigation approaches such as continuous integration and small incremental steps are replaced by "big-bang" style changes with increased risks of failure.

All in all, I believe that it has created a situation whereby IT Operations is working against the (in my eyes desirable) goal of becoming agile and business-oriented - a true competition differentiator and not just a "cost center" working in firefighting mode.

The "cost center" aspect has motivated the few approaches trying to address the lack of maturity in IT Operations: ITIL, Cobit and so on. To the best of my knowledge, they are all process-oriented and mostly address the problem from a financial perspective (ROI, risk management).

While I believe there are interesting ideas in all of them, and that cost is an important factor in the need-solution equation, I am not too convinced by the "process" approach, which limits risk but adds weight and inertia to the organisation and kills pleasure and innovation. I confess I might be too influenced by the ideas of the Agile Manifesto here, but I can't help thinking that neither Google nor Facebook used ITIL to get where they are.

I also find them too complicated to be real enablers and believe that even though they warn against it, they incite dogmatism where pragmatism should rule. Because of this, I think they fight against the exact goals they are trying to achieve.

So how can we get out of this mess?

We would definitely benefit from an increase in interest from the academic world towards IT Operations and Infrastructure realities. Consider Google's study on Hard Drives failures. Before its publication different people had wildly differing beliefs about disk failures based on factors such as: their own experience with a statistically-insignificant sample size of drives; manufacturer advertising (propaganda); luck. With a large scale, scientific study to turn to, people gained a much better understanding of the subject matter.

Naturally, courses about availability, scalability, large scale systems and networks design and management would be welcome in Universities.

But successful companies such as Google or Amazon couldn't have emerged without good IT engineering practices and a sound infrastructure (after all Amazon even sells its services now via EC2 and S3!), so, it is certainly possible today to build an IT infrastructure that makes a difference.

Then we definitely have the responsibility to learn from those leaders and spread that information around if we want IT Operations and Infrastructures to mature and serve the business and our own users (kudos here to websites such as High Scalability or Storage Mojo for their excellent work).

Undoubtedly most of the technologies those companies use to manage their infrastructures are purpose-built in-house developments that won't be published, so we as a community need to build the tools we need, in the same way developers have started open-source re-implementations of well-known building blocks such as MapReduce (for instance Hadoop).

Tools such as Luke Kanies' Puppet configuration management, rapid deployments tools such as openqrm or easily adaptable and scalable monitoring systems such as hobbit (now renamed Xymon) should be endemic to our infrastructures, yet they are sadly too often an exception.

Tuesday 13 January 2009

Yet Another Blog?

Hello there!

In this introduction post I will try to explain why on earth I've started Yet Another Blog.

For years now I've exchanged ideas about IT Infrastructure and Operations with my colleagues and friends, be they IT Ops guys or dev dudes (or even from a completely different background). I've learned a lot from those discussions and I believe my work has matured as a result.

Lately though, this flow of communication has dried up for several reasons and I've grown frustrated about it, hence the idea of this blog. Hopefully it will allow for fruitful interaction with people I know and indeed others that I don't know. People with whom I am impatient to share ideas and experience!

So, welcome aboard!

Gildas