Command-Line Unix Crash Course - Bowdoin College
Maintained by Sean Barker
(with credits to xkcd)


Accessing the Terminal

As a first step, you need to access a Unix terminal window. If you're on a Mac or Linux machine, you already have a Terminal application ready to go. On a Mac, the Terminal application is located at Macintosh HD/Applications/Utilities/Terminal. Open the terminal application, then type the following command to login to one of Bowdoin's Linux machines:

ssh userid@dover

where userid is replaced by your Bowdoin username (e.g., sbarker). Press return, and you should be prompted for a password. Enter your Bowdoin password and you should be logged into the server dover.

If you're on Windows, you don't have a native Unix terminal, but you can login to dover using PuTTY. Install PuTTY, then connect to dover using your Bowdoin credentials, and you'll be greeted by a terminal window, which will look something like this:

The text shown in your terminal window is called the command prompt (usually ending in a % or $, depending on your system). At the prompt, you type a Unix command, which is executed when you press enter. Any output that the command produces is shown below the command prompt. After the command finishes, another command prompt is displayed so that you can enter another command, and so forth. If you are familiar with the Python shell, this is the same idea.
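To make this concrete, here is a harmless first command to try. (The echo command isn't covered in this tutorial, but it exists on every Unix system and simply prints its arguments back to you.)

```shell
echo "hello, dover"    # the shell runs echo, which prints: hello, dover
```

Type the command at the prompt, press enter, and the output appears on the next line, followed by a fresh prompt.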

Escaping to the Command Prompt

You won't need this right now, but almost certainly will at some point: if you are ever in a terminal window without a command prompt but want or need one, type Control-C to interrupt the currently executing command and get a fresh command prompt. For example, suppose you executed a program containing an infinite loop: since you normally don't get a new prompt until the previous command has finished executing, you can use Control-C to terminate whatever is running and get back to the shell.

Navigating the Filesystem

When using the terminal, your terminal session always "exists" within a particular directory. This is called your working directory. The output of many commands depends on your working directory. Initially, your working directory is your home directory, which is typically where all of your files are stored. Each user on a machine has a home directory, which is normally accessible only by that user and no others.

To see your current working directory, execute the pwd command (print working directory):
sbarker@dover$ pwd
/home/sbarker
This says that my current working directory (my home directory) is the directory called 'sbarker', located in the directory called 'home', which is one of the top-level directories on dover.

To see the files that are in your current directory, use the ls command (list files):

sbarker@dover$ ls
Your home directory on dover is actually the same as your microwave network directory, so you should see any files that you have ever saved to your microwave space. You can also list the files in a directory other than your current directory by passing an argument to ls. For instance, try viewing the home folders of all the users on dover by listing the /home directory:
sbarker@dover$ ls /home
Note that while you can see all of the other home directories, you can't actually look inside any but your own. To see this, try to list the files inside my home directory (/home/sbarker).

Be Lazy!

The terminal, when used appropriately, can actually save you time and be much more efficient than using a GUI interface to navigate the filesystem. However, if used improperly, you can waste a lot of time typing long commands over and over again. Here are some simple time-saving features that you should immediately get in the habit of using.

Command History: The first is accessing your command history using the arrow keys. Pressing the up arrow will autofill your terminal prompt with the last command you entered. Pressing up again will move back in time again to the 2nd most recent command you entered (and so forth). Similarly, pressing down moves forward in time in your command history. A very common scenario is either rerunning a command that you recently ran, or correcting a command that you ran but mistyped (and thus did not run correctly the first time). Don't waste your time retyping commands! Instead, use the arrow keys to access your terminal history.

Tab-completion: Another very useful feature that will help you type less is tab completion. Basically, whenever you're typing a filename or directory name (like /home/sbarker), pressing tab in the middle of entry will automatically complete as much of the name as possible with the name of a file or directory that already exists. For instance, if you type only /home/sb (as an argument to a command like ls) and then hit tab, the path will autocomplete to /home/sbarker so long as no other directories exist in /home that start with the letters "sb". Try this out. Pressing tab multiple times will display all files that match what you've currently entered, while leaving your existing command fragment intact. Get in the habit of pressing tab when entering file and directory names!

To move around in the filesystem (that is, to change your current working directory), use the cd command (change directory):
sbarker@dover$ cd /home
Now your current working directory is /home. Try running ls again (without any arguments). Now change back to your home directory.

Relative vs Absolute Paths

Whenever you specify a filename or a pathname in a Unix command, you do so either using a relative path or an absolute path. An absolute path starts with a forward slash (/) and specifies all parent directories of the path in question, e.g., /home/sbarker is an absolute path.

A relative path, on the other hand, does not start with a forward slash, and refers to a pathname relative to the current working directory. So, for example, if I am currently located in /home and I try to cd to the directory named sbarker (NOT /home/sbarker), then I am saying I want to go to the directory named sbarker, located within the current working directory. Obviously, the file indicated by a relative path depends on the current working directory. In contrast, an absolute path does not depend on the current directory. If you cd to an absolute pathname, it will work the same way regardless of where you currently are in the filesystem.

In most cases, relative pathnames are used, as they usually involve less typing.

Two special and important relative pathnames are the directories 'dot' (.) and 'dot dot' (..). The . directory refers to the current working directory, while the .. directory refers to the parent directory of the current working directory. So, for example, if you are located in your home folder, running the following:

sbarker@dover$ cd ..
This would move you to the /home directory. If instead you ran the following (also starting from /home/sbarker):
sbarker@dover$ cd ../steve
This would move you from /home/sbarker to /home/steve (assuming that directory existed and that you were actually allowed to access it).
Now let's try making our own directories. First, cd back to your home directory if you aren't already there (tip: you can run the cd command without any arguments to go back to your home directory, regardless of where you currently are).

Now, let's make a new directory called unixworkshop using the mkdir command (make directory):

sbarker@dover$ mkdir unixworkshop
Note again that here we're using a relative path -- we're saying to create a directory using the relative pathname unixworkshop, which creates a directory in our home directory since that's where we currently are. We could've specified the same thing by writing an absolute path:
sbarker@dover$ mkdir /home/sbarker/unixworkshop
but this would've been much more typing than necessary.
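A quick sketch of the relative-vs-absolute distinction, using throwaway paths under /tmp rather than a real home directory:

```shell
mkdir -p /tmp/mkdemo && cd /tmp/mkdemo
mkdir unixworkshop               # relative path: created inside /tmp/mkdemo
ls                               # unixworkshop
rmdir unixworkshop               # remove it so we can recreate it
mkdir /tmp/mkdemo/unixworkshop   # absolute path: identical result, more typing
ls                               # unixworkshop
```

Both mkdir commands create the same directory; the relative form just saves keystrokes.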

Now that we have a new directory, cd into it (remember to use tab completion!) and then run pwd to verify that you're in the right place.

Working with Files

Let's start by copying an existing file into your new directory. You can copy files using the cp command, which has the basic form:
cp [sourcefile] [destfile]
This copies the first named file to the second named file. If the second argument is an existing file, that file is overwritten. If the second argument is not an existing file, that file is created. Try copying the existing file /etc/pine.conf (this is a system configuration file) to your current directory. Remember that you can use . to represent the current directory!
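A sketch of cp's behavior, using illustrative files in a scratch directory (rather than /etc/pine.conf, which may not exist on every system):

```shell
mkdir -p /tmp/cpdemo/sub && cd /tmp/cpdemo
echo "configuration goes here" > original.txt
cp original.txt copy.txt    # copy.txt did not exist, so it is created
cd sub
cp ../original.txt .        # . as the destination means "the current directory"
ls                          # original.txt
```

The second cp keeps the original filename because the destination is a directory rather than a filename.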

Now that you've made a copy of the file, let's view the contents of your new copy. A convenient utility to read a file is called less, which takes a filename as an argument and opens the file for viewing in the terminal window.

less [file]
Try opening pine.conf using less to see what's in the file.

Inside the less interface, use j/k to scroll down/up (or you can use arrow down / arrow up, but getting used to j/k will save time). To quit and return to your command prompt, type q.

Asking for Help!

Unix has lots of commands, and many of these commands also have lots of different command-line flags and options. For example, try running ls with the command-line flag -l (long listing), as follows:
ls -l
This will show the files in the current directory in an extended format that shows a lot more information, such as who owns the files, who has permissions to read the files, the file sizes, and when the files were last modified.

Due to the number of different commands and command-line options for many commands, it can be hard to keep track of them all! Luckily, there is a handy built-in manual that contains detailed information on how to use every individual command. To view the manual page for a particular command (which, most notably, describes all the command-line flags that the command accepts), use the man command, e.g.:

man ls
The manual interface behaves just like the interface for less. If you are ever in doubt as to how to use a command, consult man!

Try this as an exercise: create another directory named dir1, copy pine.conf into it, then copy the entire dir1 directory to dir2 (i.e., make an entire copy of the directory, including the enclosed pine.conf) using cp. You'll need to consult the manual for cp to determine how to copy a directory using cp!

A command that's very similar to cp is the mv (move) command, which is called in the same way as cp but simply moves a file to the desired location (as opposed to making a copy there). This is also how you can rename a file - by simply calling mv with a different destination filename than the source filename.
mv [sourcefile] [destfile]
Try renaming your pine.conf file to spruce.txt using mv. Check using the ls command that you successfully renamed the file.
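The rename idiom looks like this (the scratch directory and filenames here are illustrative):

```shell
mkdir -p /tmp/mvdemo && cd /tmp/mvdemo
echo "needles" > pine.conf
mv pine.conf spruce.txt   # same directory, new name: this is a rename
ls                        # spruce.txt
```

After the mv, pine.conf no longer exists; unlike cp, mv leaves no copy behind.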

Finally, to delete a file, use the rm (remove) command:

rm [file-to-delete]
Be very careful with the rm command -- files removed with rm do not go to any trash can and cannot easily be recovered! We don't want you to lose important data!

Try deleting dir1/pine.conf (which you should've created earlier) using rm. Now try deleting the entire dir2 directory (which includes the other copy of the file). You might need to consult man again!

Using a Command-Line Editor

So far we've just manipulated files that someone else created (or that we copied). Now let's try creating our own files! To create (or edit) files via the command line, we use a command-line text editor. This is an editor that has its entire interface within a Terminal window.

There are many command-line text editors that have various pros and cons. The two most well-known command-line editors are Vim and Emacs - many users exhibit a strong preference towards one or the other, and in some cases have less than charitable opinions of the other. Yours truly is a stalwart proponent of Vim (and is currently producing this document in a Vim window), but notes that several other members of the department have fallen victim to the Emacs epidemic.

In all seriousness, however, both Vim and Emacs are extremely powerful editors in the hands of experienced users. For novices, however, both editors exhibit a learning curve that can be frustrating. As such, my usual advice for complete beginners is to use neither vim nor emacs but to use nano, which is a much simpler editor that is more easily picked up.

To start editing a new file using nano, simply call the nano command with the desired filename of the new file to create.

nano myfile.txt
You are now in the nano interface, and can type just as you would in any other editor. Some commands are shown along the bottom of the window -- note that the caret symbol (^) refers to the control key. The most important commands in nano are ^O (control-O) to save the file and ^X (control-X) to quit nano. Nano also includes many other commands for things like cutting/pasting text and searching through the file.

New users of command-line editors (including simpler editors like nano) are often frustrated at the lack of mouse control and familiar shortcuts found in most graphical applications. Channel this frustration into learning editor commands! These editors have commands to do nearly anything you might want to do, and usually faster than using a mouse in a conventional word processor. As a concrete example, novices often just scroll through an entire line from the end using the arrow keys when they want to go to the start of the line. This is slow, frustrating, and a waste of time! In Vim, this can be done with a single keystroke (and similarly efficient commands exist for other editors). Bottom line - don't settle for doing things slowly and awkwardly. If you find yourself repeatedly losing time on editor tasks, consult Google (or ask someone) how to do it faster!

Type some text in your new document, then save and quit the editor. Verify the contents of your new file with the cat command (man cat!). To open the now existing file and change it, simply run nano myfile.txt again, which will open up the existing file in the Nano window (hopefully you didn't type myfile.txt again and either used your terminal history or tab-completion on the filename!).

We've now covered the basics of navigating in a Unix command line environment. While there are many more commands than those described above, these should be enough to get you started working with the command line.

Before starting the next part, cd back to your home directory. You can also clear the contents of the terminal window using the clear command.

Graduating from Nano

The simple nano editor is fine to start out with, but I don't recommend using it long-term, as it is quite basic and likely to ultimately frustrate you with its lack of features. Once you have gotten your bearings in the Unix environment (perhaps after spending a week or two using Nano), I strongly encourage you to graduate to either Vim or Emacs (Vim is my preference as noted above). It is well worth your time to become proficient in one of these editors, and the more time you spend using them, the more efficient you will become.

Luckily, both Vim and Emacs include built-in tutorial modes. To run the Vim tutorial, run the vimtutor command. To run the Emacs tutorial, start the editor by running emacs, then type Control-h followed by t to start the tutorial. I am a fairly knowledgeable vim reference and am happy to answer questions, while others in the department (but likely not me) can speak to emacs usage.

Of course, if you'd rather just skip Nano and start with a 'real' editor immediately, feel free! There's no particular need to work with Nano initially, other than reducing the overall learning curve as you get used to working in the command-line environment.

Compiling and Running a Program

Now let's try compiling and running a C program using the command line. First, in your home directory, create a new directory called myprogram. Cd into that directory and create a new file called hello.c. Type the following Hello World program into your new file:
#include <stdio.h>

int main() {
  printf("Hello World!\n");
  return 0;
}
Now let's compile the program by calling the standard C compiler program, gcc:
gcc -Wall -o hello hello.c
The gcc command takes a list of source files to compile (in this case, just hello.c) and outputs the compiled executable (or a list of errors if the program does not compile). In the above command, we are passing two flags in addition to the filename: -Wall (Warnings: all) says to turn on all compiler warnings, and -o hello says we want the output executable file to be named hello. If we omit the -o hello option, then gcc defaults to producing an executable named a.out (not very informative).

Note that if we are writing C++ code instead of plain C code, everything is exactly the same except that we use the g++ command instead of gcc.

To run the compiled executable, we type the name of the executable as a command:

./hello
Note that the ./ indicates that we're running a program located in the current directory.

General GCC Tips

Always compile using the -Wall flag. This flag essentially instructs the compiler to give you as much programming feedback as possible, and will often output warnings about things that aren't strictly wrong, but are still symptoms of bugs in your code. Let the compiler help you as much as it can! If you use a Makefile (see below), then you can easily automate including this flag so you don't have to worry about forgetting it.

Also, get in the habit of treating all warnings like errors, even if the warnings only appear when using the -Wall flag and not without. Warnings are very often symptoms of bugs, and even if they aren't, they're usually an indication of bad programming style or gaps in understanding. Never be satisfied by a program that produces warnings, even if it compiles in spite of them! Fix all warnings and only then try running the program.

Using a Makefile

Typing compilation commands over and over each time you change your program wastes a lot of time (and encourages you to leave off things like -Wall, which you should never do). As our programs get more complicated and involve multiple source files, the compilation command gets longer and this problem gets worse. Since our objective in using the terminal is to be lazy (that is, efficient), we can automate the build process by using a special file called a Makefile. This is a file used by the program make that says how to compile your program.

Create a file called Makefile alongside your source file and input the example contents below.

CC = gcc
CFLAGS = -Wall

hello: hello.c
    $(CC) $(CFLAGS) -o $@ hello.c

clean:
    rm -f hello
Note that the two indented lines must be indented with actual tabs (not spaces). If you copied and pasted from above, you'll likely need to fix this.

A Makefile consists of a set of targets, each of which tells make how to do something. The example above contains two targets: hello, which says how to compile the hello executable, and clean, which says how to clean up after the compiler by deleting generated files (in this case, just the hello executable). Each target has a set of commands that are executed when make is run like so:

make [target]
For example, running make clean would execute the clean target, running the rm command to delete the compiled executable. If you run make by itself (i.e., without any argument), it defaults to building the first target (which is hello in this case).

Run make clean to delete the existing executable (if it exists), then run make to build the hello executable.

One of the reasons make is useful is that it only compiles files when it actually needs to -- i.e., only if the corresponding source files have actually changed. This is the significance of the hello.c located on the same line as the hello target. This is called a dependency and tells make to recompile if and only if the file hello.c has changed since the last compilation.
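The rebuild behavior can be sketched end-to-end like this (scratch directory; the Makefile is written with printf so that the recipe lines get real tab characters; assumes gcc and make are installed):

```shell
mkdir -p /tmp/makedemo && cd /tmp/makedemo

cat > hello.c <<'EOF'
#include <stdio.h>
int main() { printf("Hello World!\n"); return 0; }
EOF

# \t in the printf format produces the required tab indentation
printf 'CC = gcc\nCFLAGS = -Wall\n\nhello: hello.c\n\t$(CC) $(CFLAGS) -o $@ hello.c\n\nclean:\n\trm -f hello\n' > Makefile

make           # hello does not exist yet: make compiles it
make           # hello is newer than hello.c: make does nothing
touch hello.c  # pretend we just edited the source
make           # hello.c changed: make recompiles
./hello        # prints: Hello World!
```

The second make invocation is the dependency check in action: nothing is rebuilt unless a dependency has changed.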

Try running make again. What happens? Now edit your source code (say by changing the message that prints) and run make again.

This is the basic idea behind Makefiles -- rather than repeatedly typing long compilation commands, we just type make to compile the program. For large projects with many files and dependencies, Makefiles can get very complicated and scary looking. Focus on producing simple Makefiles that do what you need, and try not to get bogged down in advanced features of make that you don't need! For the masochists among us, the full make documentation is available online.

Using Multiple Terminal Windows

A very common scenario in programming is that you are using a command-line editor to edit your source code, and periodically recompiling by running make and then executing the program to test. If you are doing this all in one terminal window, this is very time consuming, since you are constantly having to quit your editor to rebuild and run, then going back into the editor to make more changes.

Instead, you will have a much easier time if you open up a second terminal window (command-N on a Mac). Whenever you open up a new Terminal window, you will be located in the home folder of the local machine, so you will need to use ssh again as described in the beginning of this tutorial to log back into dover. You can then use one terminal window to keep your source code open in the editor, and the other terminal window to compile and run your code. Note that each terminal window is independent and has its own working directory, so remember you'll need to cd to the appropriate location when you open the second window.

Multiple terminal windows in general let you avoid switching back and forth between servers, working directories, etc as much, so be liberal in opening new windows. As an example, I presently have 8 different terminal windows open on my machine, in various locations and editing various files (okay, that may be an extreme example, but it illustrates the point!)

Command-Line Arguments

The standard way that information is passed to programs executed on the command line is via command-line arguments. For example, consider running a command to copy the file foo.txt to the file foo2.txt:
cp foo.txt foo2.txt
Here, the name of the program is cp, while foo.txt and foo2.txt are command-line arguments. The cp program is given these arguments and acts accordingly. Most programs expect command-line arguments to run, and may behave differently depending on how many arguments are given. For example, the normal use of the cd command is with one argument specifying the directory to change to, but you can also call cd without any arguments, which will change to your home directory regardless of your current working directory.

Many of the command-line programs that you write will also use command-line arguments. A program can read the arguments that it's passed using the argc and argv parameters to the main function that you may recognize. Here is an example of a C program that prints out the number of arguments that it is given and then prints out the first such argument (if it exists):

#include <stdio.h>

// a test of command-line arguments
int main(int argc, char** argv) {

  printf("program was called with %d arguments\n", argc - 1);
  if (argc > 1) {
    printf("the first argument is %s\n", argv[1]);
  }

  return 0;
}
Save this program as argtest.c and compile it with gcc (remember to use -Wall and -o to specify the name of the compiled executable). In the following example, I'll assume the compiled program is named argtest:
sbarker@dover$ ./argtest bowdoin computer science
program was called with 3 arguments
the first argument is bowdoin
This program demonstrates the behavior of the main function parameters - argc is the number of command line arguments passed to the program, and argv is the arguments themselves (represented as an array of char* objects, i.e., essentially a bunch of strings).

You hopefully noticed something odd in the above program - namely, that we printed argc - 1 and argv[1] instead of argc and argv[0], respectively. The reason for this is that the way command-line arguments are defined, the first argument is the name of the program itself - i.e., argtest in the above example. In other words, a program will never have an argc value of zero, since the name of the executing program will always be available as argv[0]. Since we don't really think of the name of the program itself as a true command-line argument, however, we are excluding it from consideration. In the above example, the actual value of argc is 4 (not 3).

While the argc and argv parameters of the main function are optional (note that we did not include them in the earlier Hello World example), you must include them if your program needs to make use of command-line arguments.

Using a Version Control System

An important tool in any programmer's toolchest is a good version control system (VCS). A VCS is used to track changes to documents and manage multiple users' access to the same files. Here at Bowdoin, you should mostly care about version control in the context of working on a program in a group. Conventional ways of working in a group include (1) having one person do all the coding, (2) emailing or otherwise sending the updated program files between group members every time someone makes changes, or (3) always sitting in front of the same computer together when working. All of these options are either inefficient or limiting, and using a VCS is a much more flexible approach.

As with text editors, there are several widely-used VCSes to choose from. The most popular options are Subversion (svn) and Git. This tutorial will focus on Subversion (as it is the author's VCS of choice and is supported by IT here at Bowdoin), but Git should be easy to pick up with a basic understanding of how a VCS works. My general feelings on Git are expressed in the figure to the right, which existing Git users may appreciate.

The basic setup of Subversion is shown in the figure below. A single Subversion server (which is a computer located elsewhere) holds the "master" copies of the files that we wish to collaborate on. Users (or 'clients') that wish to access the files will maintain a local copy of all the files that they can edit at will. Periodically, clients will send their updated files to the server (this is called 'committing'), which informs the server that the master copy of the files should be updated appropriately. Clients will also periodically ask the server to send them all file updates that have been sent to the server in the meantime (this is called 'updating'), and will therefore stay in sync with other clients using the same files. The important point here is that at any given time, there can be multiple copies of the same files -- the 'authoritative' copies are located on the server, and the 'working' copies are located on the client machines. The clients share their edits with each other via the server through updates and commits.


For example, suppose Client 1 and Client 2 both have local copies of a file, foo.txt, that is being shared by the Subversion server. Client 1 updates its local copy of foo.txt. At this point, Client 1 has a different version of the file from both the server and Client 2. Once finished, Client 1 commits its changes to the repository. At this point, Client 1 and the server both have the updated version of the file, but Client 2 still has the old version. Some time later, Client 2 issues an update request to the server. The server notices that the master copy of foo.txt is more recent than Client 2's local copy, so the new version (which was changed by Client 1) is sent to Client 2. Both clients are now again in sync with each other, despite never working on each other's files directly.

Let's try out Subversion using Bowdoin's Subversion server. First cd back to your home directory. Now, we want to create a local copy of the remote file repository that we can view and edit. This is called checking out the repository, and is accomplished using the command svn checkout, as such:

svn checkout [repository URL]
This command says to check out the repository located at the given URL. Enter your Bowdoin credentials (answer 'no' if you get a warning about storing your password unencrypted), and you should be able to check out the directory. Doing so should have created a directory called "public" in your current working directory (so named because that was the lowest-level directory of the URL that was checked out). This new directory is your local copy of the file repository.

Important: Checking out a repository is normally a one-time action that you do when first getting set up. While updates and commits happen often while working, you should not normally need to use the checkout command more than once (unless you are starting to work with a new repository).

Cd into your new checked-out repository. At any time, you can use the svn status command to show whatever changes you have made to your local copy of the repository that you haven't yet committed back to the server. Try running that now -- since you just checked out the repository, it should not show you anything.

Now let's try adding a new file to the repository. Create a text file and store some text in it (remember that anything you put here will be viewed by everyone else checking out the same repository!). Run svn status again and it should show that there is a file in your local repository that isn't committed (indicated by the question mark). First let's mark that file to tell Subversion that we want to add it to the repository:

svn add [filename]
After adding the file, if we run svn status again, the output shows that we have marked the file for addition to the repository. However, we still haven't actually committed our changes to the repository. For that, we need to run the svn commit command.
svn commit -m "added a new file"
This command says to actually push our local changes out to the remote repository. The -m flag is used to pass a 'commit message' (basically like a comment describing the commit). You can use any descriptive text for the commit message. If you don't specify a commit message, svn will dump you into an editor window, which unfortunately (for beginners) is vim by default. If this happens, to quit vim, type :wq and press enter.

In general, before you ever commit, you should update (i.e., pull down other clients' changes to the repository) by calling svn update:

svn update
Try updating now. If anyone else has committed changes to the repository between the time you checked it out (or your most recent svn update) and now, those changes will be downloaded and applied to your local copy. A good rule of thumb is to update frequently, as doing so minimizes the chance that you will accidentally try to apply conflicting changes (in which case Subversion will require you to make decisions about how to resolve the conflict -- i.e., whose changes should stick).

Now that you've added a new file, let's try changing an existing file. First, do another status to check that you have no outstanding changes. Since you just committed your new file, you should no longer have any changes and the status should output nothing. Now, let's edit the file you just added. Open it up in an editor and change its contents. Now do another svn status and you should see that you have pending modifications to your file. Do another commit to push these changes out to the repository.

Subversion (like other VCSes) has many more capabilities, such as undoing recent changes and rolling your files back to their state at the time of earlier commits, but the essential operations are checking out (a one-time operation), updating, and committing.

Note: When you're deleting or moving files that are part of a Subversion repository, don't use the regular rm or mv utilities -- this will confuse Subversion when it sees that the files it was expecting aren't there. Instead, use the Subversion commands svn rm and svn mv, exactly as you would with the regular Unix commands. As with every other type of change you make to a Subversion repository, you'll need to do a commit before these changes are reflected in the master copy.

Finally, one more reason to use Subversion or another VCS is backups! Since multiple copies of your files exist on different machines when you use a VCS, you are much less likely to lose data to hardware failure, accidental deletion, etc. I keep all of my important files in a Subversion repository, even those that no one else works on.

Copying Files To and From Remote Servers

Note: This section is less relevant to those of you on Windows (which is not itself a Unix environment) as opposed to Macs or Linux machines (which are both Unix environments).

If you are writing code on your own machine (e.g., using an IDE like Xcode or Eclipse) but ultimately need to run it on a Linux machine (e.g., on dover), then you will need to copy your files over from your local machine to the Linux machine once you are done developing so that you can test/submit/etc. To copy files from your local machine to dover, you would use the scp (secure copy) command. For example, to copy a file from your local machine into the unixworkshop/ directory (located in your home directory on dover), you would run the following on your local machine:
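A sketch of such a command, assuming the local file is named hello.c and that the server is reachable as dover (substitute your own filename, your Bowdoin username, and the actual hostname):

```shell
# Copy local hello.c into the remote unixworkshop/ directory
scp hello.c userid@dover:unixworkshop/
```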

Remember to substitute your real Bowdoin username for userid. The basic form of the scp command is this:
scp [sourcefile] [destfile]
where the source file and destination file are each either a local filename or a remote filename of the form userid@host:path, which refers to a file on the remote server. Try copying a file to dover in the above way using scp.

You can also copy files back from a remote machine to your local machine in the same way by flipping the order of arguments:

scp userid@dover:somefile.txt .
The above would copy somefile.txt located on dover to the current directory on the local machine. We could equivalently write the above by specifying the full destination pathname as opposed to just the directory:
scp userid@dover:somefile.txt ./somefile.txt
If you omit the filename when copying and just provide the directory, the copied file will retain the same filename as the original.

Connecting to a Server using Public Key Authentication

The most common way to authenticate when connecting to a machine is by providing a password (such as what SSH prompted you for at the beginning of this tutorial). However, many servers are configured to provide a second (and often preferable) type of authentication called public key authentication. The basic idea of public key authentication is that rather than providing a password, you authenticate by presenting a key file to the server. Using a key file instead of a password has two main benefits: one, it's more secure (an attacker might guess your password, but cannot guess your key file), and two, it's more convenient, since you don't need to type a password to login.

Your key (or 'keypair', as it's usually called) actually consists of two files - a private key (often named something like username-keypair) and a corresponding public key (often named something like username-keypair.pub). The public key is not sensitive, and is also stored on the server as a way to identify your account. The private key, on the other hand, is the file that actually authenticates you, and must be kept private.

Keep your private key safe, as it provides access to your server account without a password. Don't send anyone your private key file or leave it anywhere publicly accessible - this is like writing your password on a post-it note!

You can either create a keypair and install it on a server that you already have access to (though doing so is beyond the scope of this tutorial), or you can use a keypair that's already been provided to you to login to a preconfigured server. Assuming you already have a keypair like the one named above, you can use it to login to a server that's configured to accept your key, like so (remember to substitute your actual username, key file name, and server name):

ssh -i username-keypair userid@servername
Note that this command assumes your key file is in the current directory; otherwise, give its full path to the -i flag. Assuming your key file is accessible, this command will login to the server without prompting you for a password.

If you get an error that references the permissions on your keypair file, you can fix that using the chmod command, like so:

chmod 600 username-keypair*
Then try running ssh again.
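You can verify the fix with ls -l; after chmod 600, the permissions column should show that only you can read and write the key (the filename here is a placeholder for your actual key file):

```shell
chmod 600 username-keypair
ls -l username-keypair    # first column should read -rw-------
```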

If you are logging in from Windows, you can use PuTTY as discussed previously, but will need to configure PuTTY to use your private key file. To do so, within PuTTY, go to Connection, then SSH, then Auth, then click Browse next to 'Private key file for authentication'. Change the 'Files of type' setting to 'All files', then select your private key file (e.g., username-keypair - NOT your public key file ending in .pub). Now you should be able to login in the usual way to the server and your private key file will be used.

Some servers support authentication by password only, some by keypair only, and some support either. Now for a bit of bad news - most Bowdoin Linux machines (such as dover) are configured to disallow keypair authentication (though the author disagrees with this decision and hopes that IT will revisit it in the future). Luckily, for those of you who are or will be taking a systems course at Bowdoin, you will probably be using a separate server that is configured for keypair authentication only (i.e., no passwords).


Downloading files: Downloading files from the Internet is easy from the command line using a utility like wget. Note that some systems do not have wget installed -- if this is the case, you can use curl instead, which is similar (check the manpage, as usual!). Here is an example of downloading my homepage using wget:
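A sketch of such a download (the exact URL is an assumption - substitute the page you actually want):

```shell
# Download a page with wget (saves to a file named after the URL)
wget http://www.bowdoin.edu/~sbarker/
# Or, using curl, saving the page to a named file
curl -o index.html http://www.bowdoin.edu/~sbarker/
```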
Decompressing archives: A popular type of file archive used in Unix-land is a tarball, which is a type of compressed archive that ends in .tar.gz (or sometimes just .tgz). When you download software for Unix (such as for OS class!) it is likely to be in a tarball format. Let's look at how we can decompress a tarball at the command line. First let's download a tarball that I've placed on my website:
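The download command would be something like the following (the exact URL is an assumption - use the link actually provided):

```shell
# Fetch the example tarball from the author's website
wget http://www.bowdoin.edu/~sbarker/unixfiles.tar.gz
```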
This downloads a single file called unixfiles.tar.gz that can be expanded into all the files actually contained within the archive. To do this, we use the tar (tape archiver) command:
tar xzvf unixfiles.tar.gz
The tar command is another general purpose utility that can be used either to make new tarballs or extract existing tarballs. In the above idiomatic usage, we are saying to extract the archive (flag x), and simultaneously decompress it (flag z), showing verbose output (i.e., extra information about what the command is doing -- flag v), and finally specifying the archive filename (flag f followed by the archive name). After running this command, you will have an extracted directory containing all the original files that were packaged into the tarball.
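Going the other direction, you can also use tar to package files into a new tarball yourself; here c (create) takes the place of x (extract), and t lists an archive's contents without extracting (directory and file names here are hypothetical):

```shell
# Package the directory myfiles/ into a compressed tarball
tar czvf myfiles.tar.gz myfiles/
# List the contents of the tarball without extracting it
tar tzvf myfiles.tar.gz
```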

Last updated January 2018.