One of the unspoken skill sets of a developer is having command line chops. This covers service management, navigating the file system, file management and the other knowledge required to host a website or application. That usually means a Linux environment, at least with most open source tech. We're not saying you need to become a sysadmin overnight, or at all, but you'll end up needing some proficiency. You'll likely discover this when a tutorial you're following asks you to run certain commands, or assumes knowledge of the command line you don't have yet.
Below are some not-so-beginner tricks that really save time but are rarely called out, so we're making a point of them now. They're the tricks you learn over time and probably wish you'd known earlier.
Sourcing files

A lot of the time you'll see a tutorial asking you to "source" your .bash_profile, .bashrc or some other file. The author will likely just print out the command you need, and copy/pasting ensues. There are two ways this can be done. The first is with the source builtin command; the other is with its synonym, the dot. Both do the same thing on the majority of systems you'll encounter.
What it's basically doing is evaluating the contents of the file that follows the command, executed in the current context. The "current context" usually means the current terminal session the user is typing commands into. A common example is making a change to .bash_profile (usually an export statement). To make that update active right away, you'd source the file, thus executing the new changes. The next time you log in, .bash_profile would be read automatically and there'd be no need to source it again. One key thing to note here is that ./.bash_profile and
. .bash_profile aren't the same thing. The latter sources the file; the former tries to run it as an executable, which it's not.
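As a throwaway sketch of what sourcing does (the /tmp/demo_profile path and GREETING variable are made up for illustration):

```shell
# Write a tiny profile-style file that exports a variable.
echo 'export GREETING="hello"' > /tmp/demo_profile

# Both forms read and execute the file in the current shell:
source /tmp/demo_profile
echo "$GREETING"      # the variable is now set in this session

. /tmp/demo_profile   # the dot form does exactly the same thing
```

Running ./.bash_profile instead would try to execute the file as a program in a child process: it fails without execute permission, and even if it ran, any exports would disappear along with that child process.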
Getting home

A quick way to get to your home directory is to just type
cd. The tilde is used to reference home as well, and can be useful for targeting files within your home directory. But to go there, I often see
cd ~ or
cd /home/john, which isn't necessary. Another trick, to get back to the previous directory you were in, is to type
cd -.
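A quick sketch of all three shortcuts together, using /tmp as a stand-in destination:

```shell
cd /tmp     # go somewhere away from home
cd          # plain cd drops you in your home directory
cd -        # the dash jumps back to the previous directory (/tmp)
```

Note that cd - also prints the directory it switches to.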
Create aliases for long commands

Ever find yourself typing out the same long commands over and over? There's a way to define an abbreviation, or alias, for any command you want. Typically these are defined in your .bash_profile or .bashrc files so they're ready to go upon entering each terminal session. You just type the word alias, followed by the new command name you want to define, then an equals sign, then the real command in quotes. Once you've placed that into either of the two previously mentioned files, you can source those files to make the aliases active. Here's an example stack of aliases for use with Vagrant:
alias vh='vagrant halt'
alias vssh='vagrant ssh'
alias vgs='vagrant global-status'
alias vha='vagrant global-status | grep running | cut -c 1-9 | while read line; do echo $line; vagrant halt $line; done;'
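To see an alias work end to end, here's a self-contained sketch; the ll name is just an example, and the shopt line is only needed because scripts, unlike interactive shells, don't expand aliases by default:

```shell
shopt -s expand_aliases    # interactive shells have this on already
alias ll='ls -la'          # hypothetical alias for a long listing
ll /tmp | head -n 1        # now actually runs: ls -la /tmp | head -n 1
```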
Create a nested directory structure
Sometimes you need to create a new directory, or several nested directories leading up to it. Using mkdir alone, we'd get an error explaining that the parent directories leading up to the last one don't exist. To get around this, we can use the -p flag to make parent directories as needed. In this case, to make the tree folder, as well as the other three leading up to it, we'd do something like the following:
mkdir -p a/deep/directory/tree
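You can see the difference the flag makes in a scratch directory:

```shell
cd /tmp
mkdir a/deep/directory/tree 2>/dev/null \
  || echo "plain mkdir fails: parent directories don't exist yet"
mkdir -p a/deep/directory/tree   # -p creates every missing parent
ls -d a/deep/directory/tree      # the whole chain now exists
```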
Backup and re-source databases
While not strictly a generic command line tip, this one is common to web-related jobs. Given the hefty size of most databases we work on, the best approach as a general rule is to gzip the output file to keep the size down. Since the default output of mysqldump is text, we're basically just zipping that text. Then, to reload the data, we use zcat (just like the normal cat command, but for gzipped files). The output of zcat gets piped into the mysql command, which also specifies the database name to use.
mysqldump -u my_user -p my_database | gzip > my_database.sql.gz
zcat my_database.sql.gz | mysql -u my_user -p my_database
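If you don't have a database handy, you can still try the compression half of the pipeline on any text; /tmp/my_database.sql.gz below is a stand-in for a real dump:

```shell
# gzip plain text, just like we do with mysqldump's output:
echo "CREATE TABLE example (id INT);" | gzip > /tmp/my_database.sql.gz

# zcat decompresses to stdout, ready to be piped into mysql:
zcat /tmp/my_database.sql.gz
```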
Use tab, like all the time
In most shells, typing the first bit of a command name, file or file path and then hitting tab will autocomplete the rest for you or give suggestions. Try it out by typing "cd " in any directory, then hitting tab a couple of times. It'll give you all the possible options you can cd into. For commands, you can see this in action by typing the first bit of a possible command, let's say "whi", and hitting tab a couple of times gives three possibilities (at least for me on CentOS: which, while and whiptail). Before long you'll be compulsively hitting tab between everything you do, which is awesome. Here's more information about command-line completion.
Watching files as they change
Typically you'll want to watch log files while your app runs, and the tail command is the tool for this. On its own, tail just prints the last 10 lines of a specified file, but it can also follow a file as it grows, using the -f flag. A common use case is tailing web server logs. An nginx example is below, but any file can be subbed in. Tail can also be used without the follow option on a file you don't need to watch; it's convenient for logs just because it brings you right to the end of the file, hence its name! As a side note, you'll probably need sudo for log files if you're not already logged in as the root user.
tail -f /var/log/nginx/error.log
Speed up your keyboard repeat rate
This is a bit of personal preference, but it's noticeable when it's not done, especially when moving to another machine where it hasn't been. A lot of time can be saved just by increasing the press-and-hold repeat rate for your keyboard. This becomes especially useful for arrowing through long commands. There are ways to jump by word, like in other text editors, as well as to jump to the beginning and end of the line, but in general you'll want to try it and see the benefit for yourself.
Recalling commands run in the past; history
A lot of the time, you'll find yourself running the same commands over and over. If those commands are even vaguely complex, it's an annoyance to type them out every time. Thankfully there are ways to reference commands we know we've already run. The primary one is CTRL+R. This keyboard combination presents an autocomplete-style prompt that searches previously run commands in reverse chronological order. To see the full list, you can type the history command. The shortcut above, though, is the quicker and more usable way to get at known commands; the history command itself is useful for glancing over what's been happening in general.
If you find a command in the history list you'd like to run, you can use the number that gets listed beside it. Just prefix the number with an exclamation mark. An example being
!514, which re-runs whatever command number 514 was.
Lastly, you can re-run the most recent command with
!!. This is handy when you have to run a command again with sudo or some other prefix, which would look like sudo !!.
Copying files with Rsync
Whenever you find yourself needing to move files between servers, maybe from prod to a staging box, or moving an existing site to a new server, you'll want some method of transferring those files. Some people use the scp command for this. We'll skip that and recommend rsync. It's essentially scp, but next level: rsync supports many more options for fine-tuning, and it keeps track of files already transferred, which is one of its biggest selling points. This means if you're running a backup job, it'll only top up the files that are new or changed and not already in the destination. Rather than stuff a big comparison list here, you can read up on the differences between scp and rsync elsewhere if you're interested.
Local to Remote
rsync -avz --progress myfiles/ email@example.com:/home/
Remote to Local
rsync -avzh --progress firstname.lastname@example.org:/home/john/files /tmp/myfiles
Background jobs

Sometimes you start running a command only to find out it's taking way longer than you expected. You'd like to do something else in the meantime, but your terminal is occupied by that running command. The go-to solution is to just open another terminal, right? That's the brute force approach. The more elegant solution is to utilize background jobs.
You can put new commands straight into the background, giving you the terminal back. To do that, put an ampersand at the end of the command before running it. Using the database dump example from above, that would look like this:
mysqldump -u my_user -p my_database | gzip > my_database.sql.gz &
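Here's a toy version you can run anywhere; the sleep subshell stands in for the long-running dump:

```shell
# Our stand-in long-running job, sent to the background with &:
(sleep 1; echo "finished" > /tmp/bg_result) &

echo "the terminal is free immediately"
jobs    # lists the job still running in the background
wait    # block until all background jobs complete
cat /tmp/bg_result
```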
You can also put an already running command into a suspended/stopped state, giving you the terminal back, and then decide whether to put that suspended job into the background or bring it back to the foreground like it was. Taking the above command as the example again, while it's running, use the keyboard combination CTRL+Z. You'll then see something like the following:
[1]+  Stopped                 mysqldump -u my_user -p my_database | gzip > my_database.sql.gz
In this state, the job is waiting either to be put back into the foreground or to be resumed in the background. If you want to continue running it in the foreground, just typing
fg will resume it. To put it into the background, type
bg. To see all jobs, just type
jobs. Each job has an index associated with it. By default, fg and bg address the current job, the one marked with a + in the jobs list. To choose another, for example the second job, just add its number as a parameter to fg or bg, like "fg 2" or "bg 2".
Running multiple commands at once
When you know you have to run multiple commands, you don't have to wait for the first to finish before typing the next. You can use a semicolon to separate commands on the same line. This runs all the listed commands in order, with no need to sit around waiting between them.
command1; command2; command3
A variation on the above is to run multiple commands in order, but only proceed if the previous command succeeded. You can use a double ampersand for this. In this example, command2 will only run if command1 was successful:
command1 && command2
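The difference is easy to see with true and false as stand-in commands:

```shell
# ';' runs every command regardless of failures:
false; echo "runs even though false failed"

# '&&' stops at the first failure:
false && echo "this never prints"

# ...and keeps going while commands succeed:
true && echo "this prints because true succeeded"
```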
Less is more
To read a text file, you may have been told to use cat. The truth is it's not the best tool for the job and wasn't designed for reading files the way you'd want to.
You can use editors like vi or vim, but if you just want to read a file, the less command is a better choice.
Moving around in less is similar to vi/vim, so if you're already familiar with those you're set; if not, you can learn one set of keys and apply it in both places (for some things).
Reading compressed files without extracting
Files are sometimes gzip compressed to save space. Good examples of this are database dumps and log files. Rather than picking files to look at and extracting them one at a time, we have versions of common commands that deal with this layer of compression for us. Using a file called website.db.gz, we can use the following commands. Note we're using a DB file here just because database dumps are usually large text files; these commands aren't specific to that use case.
using zless to read the database file (good for confirming specifics before running an import):
zless website.db.gz
using zcat to read the database contents and pipe them into the database:
zcat website.db.gz | mysql -uroot -p website_db
using zgrep to search through the db to find a value:
zgrep -in example.com website.db.gz | less
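To try these without a real dump, build a tiny compressed file first; /tmp/website.db.gz is a throwaway example:

```shell
printf 'alpha\nvisit example.com today\nomega\n' | gzip > /tmp/website.db.gz

zcat /tmp/website.db.gz                    # prints all three lines
zgrep -in example.com /tmp/website.db.gz   # numbered, case-insensitive match
```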