
Tuesday, July 1, 2014

Continuous Delivery at OLX (part III)

On the tooling side, besides Octopush we developed a very simple web app called TagReporter, where we track and show everyone which version of each component is deployed on each environment (components on the rows, environments on the columns; links take the user to JIRA, GitHub or JFrog).

Additionally, we show the Release JIRA ticket that was, or is about to be, deployed LIVE, along with its current status. The last column offers different actions depending on the logged-in user's profile: devs can create tickets automatically, and Release Engineers can trigger deployments or rollbacks. The app integrates all these tools via REST APIs, providing a transactional deployment service: it calls Jenkins (triggering deploys), interacts with JIRA (creating and transitioning tickets) and reports to IRC (notifying the start and end of deployments).
Deployment scripts report back to TagReporter with the component, version and environment they have just deployed.
Since the tool saves all this information, it also provides reports of historical deploys, with all kinds of filters.
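To give an idea of the reporting step, here is a minimal sketch of what a deployment script's call back to TagReporter could look like. The endpoint path, field names and values are assumptions for illustration, not the real OLX API:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: a deployment script reporting a finished deploy
# back to TagReporter over its REST API.

# Build the JSON payload TagReporter would store for this deploy.
report_deploy() {
  local component="$1" version="$2" environment="$3"
  printf '{"component":"%s","version":"%s","environment":"%s"}' \
    "$component" "$version" "$environment"
}

payload=$(report_deploy "search-api" "1.4.2" "qa")
echo "$payload"

# A real script would then POST it, e.g.:
# curl -s -X POST -H 'Content-Type: application/json' \
#      -d "$payload" "http://tagreporter.example/api/deploys"
```

Because every script reports through the same endpoint, the tool ends up with the full deploy history for free.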
We used Hubot for the IRC interactions, not only for notifications but also to ask it things like which versions are deployed on different servers, or configuration values.
Finally, our DBA team developed a tool for tracking and running SQL scripts: devs commit scripts to Git (using pull requests to have them reviewed) and the tool runs them.
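The core idea of that tool can be sketched very roughly: scripts reviewed via pull request land in a Git checkout, and a runner applies any that have not been executed yet. The directory layout, tracking file and (commented-out) mysql invocation below are all assumptions for illustration, not the DBA team's actual implementation:

```shell
#!/usr/bin/env bash
# Rough sketch of a "run pending SQL scripts" loop.

SCRIPTS_DIR="sql-scripts"     # checkout of the Git repo with the scripts
APPLIED_LOG="applied.log"     # simple record of already-run scripts

run_pending() {
  touch "$APPLIED_LOG"
  for script in "$SCRIPTS_DIR"/*.sql; do
    [ -e "$script" ] || continue              # no scripts at all
    if ! grep -qxF "$script" "$APPLIED_LOG"; then
      echo "applying $script"
      # mysql --defaults-file=creds.cnf mydb < "$script"   # real run
      echo "$script" >> "$APPLIED_LOG"        # mark as applied
    fi
  done
}
```

Running it twice applies each script only once, which is what makes it safe to trigger on every merge.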

Continuous Delivery at OLX (part I)

A year ago we started an ambitious project at OLX: achieving Continuous Delivery. You may ask why. It's a fair question, and there's a good answer too: we needed a change, and like most changes this one was triggered by pain. It's a very common story among IT companies: every release is painful, it's a whole bunch of features and fixes that most Operations people don't know about, and it's infrequent, manual and error prone. Even though we were releasing once every two weeks, which is a lot for many companies, for us it was clearly not enough: we had several issues, rollbacks and even blackouts every time.
So basically we started meeting once a week to discuss what Continuous Delivery would look like at OLX, how to build a Pipeline, and WHAT the HELL a Pipeline even was. Anyway, many of us had read the famous book by Jez Humble and were excited about the possibilities, but we were light years away from what was described there. I mean, most teams hadn't even gotten to Continuous Integration, and our architecture didn't help either: it was basically a gigantic monolithic PHP codebase that we would check out from SVN and rsync completely onto our web servers in about 10 minutes or so.
Another issue was that the few dev teams that were continuously integrating had different Jenkins installations, each with a different set of plugins and tools. Even I, as Release Manager, had my own Jenkins, which I found very useful for some automatic deployment tasks I had already started implementing. So the first challenge was deciding what to do with our Jenkins instances. Merging the whole thing into one Jenkins would have been a mess: too many hands on the same plate, many different technologies (from different teams) and different roles as well (compiling, testing, deploying). Who would be the owner of such a tool? It was clear to us that it wasn't a great solution; I mean, for smaller companies it would be more than fine, but it doesn't scale.
So we decided to keep our Jenkins instances separate and started researching how to integrate them. Sadly, there's no plugin out there for this (none that we could find, anyway), and the Jenkins master/slave schema doesn't fit our problem, because a slave is just that: a plain, simple slave that mirrors its master's jobs.
So we used the Bash console in a Jenkins job to curl the other Jenkins and trigger the remote job. It turned out to be pretty easy, and thanks to the Jenkins API the caller job could loop until the callee finished, and then even retrieve the result, whether it failed or succeeded.
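The trigger-and-wait pattern can be sketched as a shell snippet of the kind you would paste into an "Execute shell" build step. The host, job name and credentials are placeholders; the `/build` and `/lastBuild/api/json` paths are standard Jenkins REST endpoints:

```shell
#!/usr/bin/env bash
# Sketch: trigger a job on another Jenkins and wait for its result.

# Extract the "result" field from Jenkins' lastBuild JSON.
# It is null while the build runs, then SUCCESS/FAILURE/ABORTED.
parse_result() {
  grep -o '"result":"[A-Z]*"' | head -n1 | cut -d'"' -f4
}

# Trigger the remote job, then poll until it reports a result.
trigger_and_wait() {
  local remote="$1" job="$2"
  curl -s -X POST "$remote/job/$job/build" --user "user:apitoken"
  local result=""
  while [ -z "$result" ]; do
    result=$(curl -s "$remote/job/$job/lastBuild/api/json" | parse_result)
    [ -z "$result" ] && sleep 10
  done
  echo "$result"
}

# Usage (placeholder host and job):
# result=$(trigger_and_wait "http://rm-jenkins.example:8080" "deploy-to-qa")
# [ "$result" = "SUCCESS" ]   # fail this job if the remote one failed
```

Exiting non-zero when the remote result isn't SUCCESS is what propagates a callee failure back into the caller job's status.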

Our first experiments were made possible by the Mobile team's collaboration, and this is a key part: find a team willing to experiment, to shake things up a little bit. They would build the code on each commit with their Jenkins, then call the Release Management (RM) Jenkins to get their code deployed on a QA environment, and once that succeeded they could run a battery of acceptance tests against the environment. Easy, right? Well... it took us some time to get there.
We will continue talking about the CD project at OLX in the following posts; there is a lot more to this story, including the open-sourcing of our deployment tool, Octopush!