At Industrial, we're strong proponents of Agile Project Management. However, Agile is something we take to heart across all of our internal processes, and we constantly evolve them whenever any one tool or process starts to feel a bit too square-peg-round-hole.
In this post, we'll cover what our current development workflow looks like, along with some future trends and tools that we're evaluating as alternatives, and why.
Consistent Development Environments
A key to ensuring that projects stay on budget is reducing friction during the development, QA, and release-to-production process. To do this, automation is absolutely key. As projects move between developers, both frontend and backend, the ability to spin up the project quickly is a must-have.
We also use these tools to mirror the environment we're either anticipating or recommending for the production release. If we develop and test/QA against the same operating system, software versions, etc., then the likelihood of further wasted time during the release to production is greatly reduced.
We use a combination of Vagrant and Chef to set up and install the necessary software dependencies that our projects need. Through custom recipes that we develop for each project, we specify the operating system that we're developing against; all of the dependent software, from the database to the languages we're using and their associated dependencies; and other supporting software such as caches, background job processors, etc.
We keep these custom cookbooks alongside our project repositories (more on that later) so a developer can clone our projects, and run a few commands to get the latest version of the site up-and-running in a few minutes.
For example, provided someone had Vagrant (and a few plugins), Virtualbox, and the Chef Development Kit installed, they could clone one of our repositories, run vagrant up in the root of that folder, re-source a database, and have a fully functional site in minutes.
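The sequence is short enough to script. Here's a minimal Python sketch that just enumerates the commands a developer would run; the repository URL, database name, and dump path are hypothetical, not our actual conventions:

```python
# Hypothetical helper mirroring our manual spin-up steps.
# Everything here (URLs, DB name, dump path) is illustrative.

def spinup_commands(repo_url, db_dump="backup.sql"):
    """Return the shell commands a developer would run, in order."""
    return [
        "git clone %s project && cd project" % repo_url,
        "vagrant up",  # provisions the VM via the project's Chef cookbooks
        "vagrant ssh -c 'mysql site < /vagrant/%s'" % db_dump,  # re-source the DB
    ]

for cmd in spinup_commands("git@bitbucket.org:example/site.git"):
    print(cmd)
```

In practice these steps live in a project's README, so onboarding a new developer is a copy-and-paste exercise rather than an afternoon of setup.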
As much success as we have had with Chef, we're considering Ansible as an alternative. It will still work with Vagrant, but the simplicity and explicit nature of Ansible could potentially reduce some of the day-to-day issues we encounter with Chef.
Also, HashiCorp has recently released Otto which, if successful, would further reduce the time spent spinning up new environments, as it will assume proper defaults based on the project requirements.
Another tool that we're working with is Docker. It is a particularly powerful tool for automating and managing application environments and dependencies. Especially useful for development workflows is Docker Compose. It uses Docker under the hood, but through a single file you can define all of the dependencies of your application environment and launch it with a single command.
Code Management and Workflow
As strong proponents of open source software (OSS) we rely heavily on Git for version control for each of our projects. Covering Git and its capabilities is a blog post in-and-of-itself, but for the purposes of this post we'll stick to the workflow aspect of Git and why this is important.
Our code is hosted on Bitbucket by Atlassian. Its pricing model for private repositories makes it far more economical and secure to use Bitbucket over Github, as the latter does not offer the number of private repositories we would need to support the number of projects we manage.
However, in the coming year we're planning on hosting a number of open source initiatives that we currently have underway on Github.
As for the workflow, simplicity is key. Git as a tool can be a bit daunting for new staff or when our clients want access to the code repository. For this reason, we follow the Github Flow.
The key is that the master branch is always the latest release on production. Features or hotfixes are branched separately, and only merged back into master once the developer opens a pull request and another staff member reviews the code, runs tests and verifies the site or application on a staging environment.
Once a release is made to production, the version is tagged which makes it easier to roll-up release notes, or revert back in the event of any issues encountered.
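The tagging step itself is mechanical enough to script. As a rough Python sketch (the vX.Y.Z tag format and patch-level bump are assumptions for illustration, not our mandated scheme), it boils down to bumping a version and running two git commands:

```python
# Sketch of the release-tagging step. The "v" prefix and the
# patch-level bump are illustrative assumptions.

def bump_patch(version):
    """'1.4.2' -> '1.4.3'"""
    major, minor, patch = (int(part) for part in version.split("."))
    return "%d.%d.%d" % (major, minor, patch + 1)

def tag_commands(previous_version, message):
    """Return the git commands that tag and publish a new release."""
    tag = "v" + bump_patch(previous_version)
    return [
        "git tag -a %s -m '%s'" % (tag, message),
        "git push origin %s" % tag,
    ]

for cmd in tag_commands("1.4.2", "Release: cache header hotfix"):
    print(cmd)
```

Because each production release has a tag, rolling back is a checkout of the previous tag rather than an archaeology exercise through commit history.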
Through this relative simplicity and oversight, along with testing, we ensure our clients receive the best quality code and releases, and we keep our processes in line with some of the best practices in the community.
One item that is particularly contentious among developers is their code editors, or IDEs (Integrated Development Environments). At Industrial, we don't prescribe tools for our staff, particularly ones that developers hold strong opinions about. Keyboards, laptop/desktop choices and editors are sacrosanct.
We use a combination of Notepad++, IntelliJ, Atom, Sublime Text and more: editors that are extensible and support plugins that allow us to customize our development environments at will. Failing that, we'll always rely on the ever-present vi when managing or configuring our server environments.
It must be stated, though, that with this flexibility comes the potential for differing editor defaults to make code unmaintainable across developers. Wherever possible, we adopt the coding style guides contributed to and maintained by the various communities. For example, when working with Laravel (a PHP framework that we love), we adopt the coding style it uses.
Testing, Testing and More Testing
We strongly believe in testing our software. We use a combination of automated software testing through frameworks such as PHPUnit, minitest and phantomjs, as well as building comprehensive test cases in TestRail that our QA team uses to ensure that the functional requirements of a site or application are met.
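The shape of these automated tests is the same regardless of framework. Here's an illustrative stand-in in Python's unittest (our real suites are PHPUnit and minitest; the slugify function is hypothetical): each test makes one small assertion about one behaviour:

```python
import unittest

# Hypothetical function under test: turn a page title into a URL slug.
def slugify(title):
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("About Our Team"), "about-our-team")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("Contact   Us"), "contact-us")
```

A suite of many such tests, run on every pull request, is what lets a reviewer trust that a change does only what it claims to.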
For cross-browser testing, we rely heavily on BrowserStack as it provides a really comprehensive list of current and legacy browsers and some powerful tooling to assess site compatibility with selected browsers and troubleshoot any issues identified.
As much as we would like to rely on automated testing (and, to be honest, there's never enough of it or time to do as much as we would like) the reality is that our human-led tests are essential for determining that the core project deliverables are met, and that our sites are usable, accessible and responsive.
One tool that we've had some relatively recent success with is a visual regression tool called Depicted. It allows you to compare two sites in their rendered form and display the differences between them. This is useful for ensuring that styling/markup changes at later stages of a project do not introduce issues that clients have signed-off on in earlier project builds or releases.
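The core idea is simple to sketch. Greatly simplified (real tools like Depicted operate on PNG screenshot captures; here an "image" is just a 2D list of RGB tuples so the sketch stays dependency-free), a visual diff walks both renders pixel by pixel:

```python
# Greatly simplified sketch of a visual regression diff: compare
# two rendered screenshots pixel by pixel. An "image" here is a
# 2D list of (r, g, b) tuples; real tools work on PNG captures.

def diff_pixels(before, after):
    """Yield (x, y) coordinates where the two renders differ."""
    for y, (row_a, row_b) in enumerate(zip(before, after)):
        for x, (px_a, px_b) in enumerate(zip(row_a, row_b)):
            if px_a != px_b:
                yield (x, y)

WHITE, RED = (255, 255, 255), (255, 0, 0)
before = [[WHITE, WHITE], [WHITE, WHITE]]
after = [[WHITE, RED], [WHITE, WHITE]]

changed = list(diff_pixels(before, after))
print(changed)  # the coordinates that changed between builds
```

The value isn't in the diff itself but in when it runs: against the build the client signed off on, so late styling changes can't silently regress approved pages.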
In direct contradiction to the last point, we would still love more automated testing. Automated test suites are amazingly good at picking up on regressions introduced during the entirety of a project lifecycle, not just up to launch.
For the right projects, investing time and effort into continuous integration with a fully fleshed-out test suite would prevent code-related regression issues from detracting from the more important QA process described above.
DevOps – Development and Deployment
Automating as much of your release process as possible is critical to ensuring consistency. However, it does come with the upfront cost of developing against all of the contingencies that can arise during deployment. Tools like Chef and Ansible are great for automating much of the release process, but come with some overhead and management that is not always desirable (especially Chef).
So there are two other utilities that we include in our toolchain:
- Gulp - a node.js tool for CSS preprocessing, JS transpiling, minification, live reloading, and much more.
- Fabric - a python tool for command line scripting.
The use of these tools makes for a powerful combination that allows us to do two things. First, with Gulp, we can quickly provision our development environments with libraries such as jQuery and Sass, and use these libraries to efficiently build our asset resources. Secondly, with Fabric, we can prepare our development images quickly for deployment to both staging and production environments.
While Gulp is an invaluable development tool, it begins to fall short when a project enters into deployment-ready or maintenance-and-support phase.
That is where Fabric shines. It's an open source tool developed in Python that runs commands over SSH, and the only dependency required is installing it on the machines that will facilitate the deployment. The Fabric website somewhat vaguely describes itself as:
Fabric is a Python command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
In a more practical sense, Fabric is a tool that allows for scripting of build and deployment routines. This is critically important as it means that there is an automated way to build and deploy a project, removing manual intervention and the possibility of human error from the provisioning step. Of course this means that we are able to quickly deploy and roll back a site to a known working state in the event of an emergency.
For example, we have posted a sample fabfile.py that allows us to both build an entire project with all the requisite tools, and to automatically deploy to either staging or production servers. The steps to build the project are as concise as:
```python
# …
@task
def build(composer_extra=""):
    with lcd('src/private'):
        local("composer install %s" % composer_extra)
    with lcd('src'):
        local("npm install")
        local("gulp --production")
# …
```
This simple script defines the different environments and the different tasks that need to run on them. It automates installing PHP dependencies via Composer and setting permissions on files within the web folder.
Most importantly, though, the fabfile's rsync command is what distributes the latest version of the site code.
As security is important to us, our deployment scripts rely heavily on environment variables, rather than storing credentials or paths directly in the scripts. This also allows us to generalize the scripts across projects and reduce the number of times we have to re-write them.
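In a fabfile, that pattern looks something like the following sketch; the variable names DEPLOY_HOST and DEPLOY_PATH are illustrative, not our actual conventions:

```python
import os

# Sketch of reading deployment settings from the environment
# instead of hard-coding them. Variable names are illustrative.

def deploy_target():
    """Build an rsync-style host:path target from environment variables."""
    host = os.environ.get("DEPLOY_HOST")
    path = os.environ.get("DEPLOY_PATH", "/var/www/site")
    if not host:
        # Fail loudly rather than deploying somewhere unexpected.
        raise RuntimeError("DEPLOY_HOST is not set; refusing to deploy")
    return "%s:%s" % (host, path)
```

A developer exports the variables once per shell (or keeps them in an untracked .env file), and the same script then works unchanged against staging and production.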
As mentioned in the development environments section, ensuring that our QA environment mirrors production is essential for QA. For this reason, we rely on a combination of AWS and Azure to create these environments reliably.
With Azure, creating cloud-based Windows environments is really easy and affordable, whereas Windows on AWS can sometimes be problematic.
Managing all of these new and changing environments, particularly on these providers, introduces issues with addressing and access. To solve this, we use the nearly magical DNS management services provided by Cloudflare. We can set up domains within Cloudflare, manage A/CNAME records, and have them propagate almost instantly.
Given that our development tooling is largely automated, we would like to make our staging environment builds and updates as automated as possible. We've done some work with OpsWorks, but it is something we'd like to expand on. OpsWorks uses Chef and Berkshelf, tools that power our Vagrant builds already.
In the modern internet, security is paramount. It means more than just secure code: it means secure processes, transparency, and ultimately trust between us and our clients that we have their best interests in mind.
There is enough here for another blog post, but I want to highlight some of the tools that we rely on outside of our code review process to ensure that the information and access that our clients provide us is as secure as possible, and that our projects maintain it throughout their lifetime.
The tools we use fall into two separate camps: pre-development and deployment, and post-deployment.
For pre-development and deployment, we rely on and review the secure-by-default settings that accompany the frameworks and tools that we use. This metric is one that we use when evaluating the tools we select, and we regularly review them when later versions are released or when additional tools enter our toolbox.
Our clients trust us with an enormous amount of secure data and, as partners, we take that trust seriously. We rely heavily on password managers to secure access to environment settings, credentials and more.
Post-deployment, we use penetration testing tools provided in Kali Linux to scan sites running Joomla, WordPress and Drupal. Not fully trusting one source, we also use OWASP ZAP to run all of our penetration tests against our sites, checking for known vulnerabilities such as Cross-site Scripting, Cross-site Request Forgery, and more.
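Alongside full scanner runs, some post-deployment checks are cheap enough to script ourselves. As a minimal sketch (the header set and the sample response below are illustrative, and a real check would fetch live responses), we can verify that hardening headers made it to production:

```python
# Minimal sketch of a post-deployment hardening check: confirm
# that expected security headers are present on a response. The
# header set and the sample response dict are illustrative.

EXPECTED_HEADERS = {
    "X-Frame-Options",          # clickjacking protection
    "X-Content-Type-Options",   # disables MIME sniffing
    "Content-Security-Policy",  # XSS mitigation
}

def missing_security_headers(response_headers):
    """Return the expected headers absent from a response, sorted."""
    present = {name.title() for name in response_headers}
    return sorted(h for h in EXPECTED_HEADERS if h not in present)

headers = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
print(missing_security_headers(headers))
```

A check like this won't catch an injection flaw, but it does catch the common case of a hardening config that was applied on staging and forgotten on production.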
We're pretty excited about the announcement of 1Password for Teams. As avid fans of 1Password, we're looking forward to the ability to share vaults among our team members, all with a fancy new React web front-end on the tool. Add to that the success of AgileBits as a Canadian company, and we're predicting a new tool in our toolshed!
Another project we're following is another HashiCorp success story, Vault. Since we rely on a number of shared secrets such as deployment keys, hashes and salts, it looks like a great service for storing them securely, and it would also let us take action based on this information.
There are a lot of tools listed here, but critical to the success of our projects are the ones that facilitate communications, both about individual projects and tasks and day-to-day. For task and issue management, we use JIRA. No surprise there; it is one of the most used and powerful task, release and issue tracking systems available today.
For day-to-day communications, we rely heavily on Slack. It supports team, channel and person-to-person communications both on our laptops/desktops and across all of our various mobile devices, so we can always keep in touch on the state of our projects.
Our development workflow is near and dear to us as it comprises the tools and processes that we use on a daily basis. While this post is lengthy, there is a lot of material to cover and, as mentioned, the security section is another post entirely (which may come in the future).
These development processes and tools form the heart of what we do as developers here at Industrial, and they're on our screens every day, so we take them seriously. However, as with everything we use, if a tool isn't up to the task, or something better is forthcoming or released, we will evaluate, incorporate and remove as necessary.
This constant refinement is essential in modern software and web development: the ecosystem is ever-evolving, and our ability to meet our customer and client expectations is synonymous with our ability to stay current and use the best tools available.