Linux - Command Line Cheatsheet
tree
To see the hierarchy of a directory.
less
To open a scrollable pager of a file.
head
To see the first n lines of a file.
tail
To see the last n lines of a file. tail -f file is also very useful when content is being streamed/appended to that file (e.g., a live log).
Hard Link and Soft Link
What is an inode? -> An inode is a data structure which contains metadata about a file/directory. Whenever you try to open a file, the location of its content is fetched from the metadata stored in the inode.
Hard Link
Let’s say you have an original file f. You can create a hard link to f with ln f dup-hard. The new name
points to the same inode as the old one. Deleting one doesn’t affect the other, since the underlying inode is
not deleted while someone else still references it.
Soft Link
A soft link (symlink) is a file which points to another file by path. It is created using ln -s f dup-soft. The new file
points to the original file. If the original file is deleted, the symlink no longer works, as the file it
was pointing to doesn’t exist anymore.
tldr; a hard link creates a new name that points to the original file’s inode. A soft link creates a new inode whose content is the path to the original file.
If the target of a soft link is deleted, the link breaks. An inode itself is deleted only when its hard-link count drops to 0 (i.e., no hard links remain) and no process has it open.
Memory: “Hard” = hard-attached to the data “Soft” = like a shortcut
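A quick throwaway demo makes the difference concrete (the temp directory and file names here are illustrative, not from the notes above):

```shell
# Demo: hard links survive deletion of the original name; soft links dangle.
cd "$(mktemp -d)"
echo "hello" > f
ln f dup-hard                  # hard link: shares f's inode
ln -s f dup-soft               # soft link: its own inode storing the path "f"
ls -li f dup-hard dup-soft     # f and dup-hard show the same inode number
rm f
cat dup-hard                   # prints "hello": the inode still has a link
cat dup-soft 2>/dev/null || echo "dup-soft is dangling"
```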
file
Tells what type of content a file contains. E.g.:
Downloads % file Wireshark\ 4.6.0.dmg
Wireshark 4.6.0.dmg: zlib compressed data
stat
Shows detailed metadata about file like: permissions, owner, size, timestamps.
du vs df
du - disk usage: space actually consumed by files/directories (works on any path)
df - disk free: free/used space reported per mounted filesystem (partition)
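For example (the paths here are just common choices, adjust to taste):

```shell
du -sh /tmp    # summary, human-readable: total space used by one directory tree
df -h /        # human-readable free/used space for the filesystem holding /
```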
sort
Examples:
# 1) Alphabetical sort (default)
sort names.txt
# 2) Numeric sort
sort -n numbers.txt
# 3) Human-readable size sort (largest first)
du -sh * | sort -hr
# 4) Unique sorted values (deduplicate)
sort -u words.txt
# 5) Sort by 2nd column, comma-separated (e.g., CSV)
sort -t ',' -k2,2 data.csv
Useful flags:
-n numeric sort (treat lines as numbers)
-h human-numeric sort (understands sizes like 10K, 2M, 1G)
-r reverse order
-u unique (remove duplicates after sorting)
-k sort by a specific key/column
-t set field delimiter (default: whitespace)
Permissions
For files:
r = read contents
w = change contents
x = run/execute
For directories:
r = list names (ls)
w = create/delete/rename inside (needs x too)
x = enter/traverse (cd), access files if you know the name
Numeric values:
r - 4
w - 2
x - 1
chmod +x something      # add execute permission
chmod 755 something     # rwxr-xr-x (user 7, group 5, others 5)
chmod -x something      # remove execute permission
chmod g+rwx something   # give the group read, write, and execute
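The r-vs-x distinction for directories is easy to verify in a scratch directory (run as a non-root user; root bypasses these permission checks):

```shell
cd "$(mktemp -d)"
mkdir d && echo data > d/file
chmod 111 d                             # x only: can traverse, cannot list
cat d/file                              # works: we know the name, x lets us reach it
ls d 2>/dev/null || echo "ls fails"     # listing names needs r (non-root)
chmod 444 d                             # r only: can list names, cannot enter
ls d 2>/dev/null || true                # shows "file" (may warn)
cat d/file 2>/dev/null || echo "cat fails"  # opening needs x to traverse (non-root)
chmod 755 d                             # restore so the temp dir can be cleaned up
```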
In normal Linux permissions, there are only these owners:
User owner (UID)
Group owner (GID)
chown
Change owner both user and group:
chown user:group file
chgrp
Anything chgrp can do, chown can also do:
# both are equivalent
chgrp dev file
chown :dev file
umask
Controls which read/write/execute permission bits are masked out (i.e., removed) from newly created files and directories.
umask 022
# Directories: 777 - 022 = 755 -> rwxr-xr-x
# Files: 666 - 022 = 644 -> rw-r--r--
# Another way to look at it:
# umask is a “turn OFF these permission bits” mask (not real subtraction).
# Rule: final = base & ~umask
# Defaults:
# dirs base = 777 (rwxrwxrwx)
# files base = 666 (rw-rw-rw-)
# Example umask 022:
# 022 mask = --- -w- -w- (remove write for group + others)
# dirs: 777 & ~022 = 755 → rwxr-xr-x
# files: 666 & ~022 = 644 → rw-r--r--
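A subshell makes it easy to verify the rule without changing your own shell’s umask:

```shell
# umask only affects the current process and its children, so test in a subshell
(
  cd "$(mktemp -d)"
  umask 022
  touch file && mkdir dir
  ls -ld file dir     # expect -rw-r--r-- for file, drwxr-xr-x for dir
)
```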
SUID and SGID
SUID (Set-User-ID) makes an executable run with the file owner’s privileges instead of the caller’s; SGID does the same for the group owner (and on directories makes new files inherit the directory’s group). The classic example is passwd:
- Normal user: can only change their own password and must provide the current password first.
- Root: can change anyone’s password without knowing it and bypass restrictions.
How passwd actually works with SUID:
The /etc/shadow file structure:
root:$6$encrypted_hash...:19000:0:99999:7:::
samantha:$6$encrypted_hash...:19000:0:99999:7:::
john:$6$encrypted_hash...:19000:0:99999:7:::
Each line = one user’s password info; users should only modify their own line.
File permissions:
-rw------- 1 root root /etc/shadow (only root can read/write)
-rwsr-xr-x 1 root root /usr/bin/passwd (SUID bit set)
What happens when Samantha (or any user) runs passwd:
- SUID makes the process run as root.
- Process effective UID = 0 (root).
- Process real UID = Samantha’s UID (kernel tracks both).
- Process can now read/write /etc/shadow.
- Kernel allows it because effective UID = root.
Internal logic checks real UID:
- “Who started me?” -> Samantha.
- “Allow her to only modify the line: samantha:…”
- Prevents her from changing root’s or john’s password.
Program writes only Samantha’s line:
- Reads entire file.
- Modifies only her line.
- Writes back to /etc/shadow.
Alternative 1: what if we give write permission to all users?
Attempt:
-rw-rw-rw- 1 root root /etc/shadow (world-writable)
-rwxr-xr-x 1 root root /usr/bin/passwd (no SUID, just execute)
What happens:
- Process runs as Samantha.
- Internal logic: “She can change her password.”
- Kernel allows write (file is world-writable).
- Program modifies only her line.
Also:
- Samantha can bypass passwd entirely.
- She can directly edit /etc/shadow with any text editor.
- She can change root’s password, delete other users, etc.
Alternative 2: what if we only give execute permission?
Attempt:
-rw------- 1 root root /etc/shadow (only root can write)
-rwxr-xr-x 1 root root /usr/bin/passwd (no SUID, just execute)
What happens:
- Samantha can execute passwd.
- Process runs as Samantha (process UID = Samantha).
- Kernel won’t allow write because she lacks write permissions.
Execute permission != write permission:
- Process runs as Samantha.
- Samantha has no write access to /etc/shadow.
- Kernel blocks the write at system call level.
- Internal logic never gets to execute the write.
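Setting and spotting the SUID bit (the scratch file is illustrative; don’t set SUID on real binaries casually):

```shell
ls -l /usr/bin/passwd 2>/dev/null || true   # the 's' in -rwsr-xr-x is the SUID bit
cd "$(mktemp -d)"
touch tool && chmod 755 tool
chmod u+s tool                  # symbolic form: add SUID
ls -l tool                      # -rwsr-xr-x
chmod 4755 tool                 # numeric form: leading 4 = SUID (2 = SGID, 1 = sticky)
```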
Sticky bit
Purpose:
- Directories (modern, most common): in a shared writable directory (like /tmp), it prevents users from deleting or renaming other users’ files. Only the file owner, directory owner, or root can delete/rename entries.
- Executables (historical/mostly obsolete): on older Unix systems, it improved performance by keeping a program’s code (“text segment”) cached in swap/memory after it exited so it could start faster next time.
Why it’s called “sticky”:
- Historically, the executable’s code would stick in swap/memory.
- Today, files in a shared directory stick to their owners (others can’t remove them).
How it’s represented:
ls -l:
- Shows t in the “others execute” position for directories (e.g., drwxrwxrwt).
- Shows T if sticky is set but others-execute isn’t set (others can’t go into the directory).
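Recreating a /tmp-style directory shows the t in ls output:

```shell
cd "$(mktemp -d)"
mkdir shared
chmod 1777 shared      # leading 1 = sticky bit, like /tmp
ls -ld shared          # drwxrwxrwt — note the trailing 't'
ls -ld /tmp            # the real thing, usually also drwxrwxrwt
```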
grep
Common patterns:
# Basic match
grep "POST" app.log
# Case-insensitive match
grep -i "POST" app.log
# Invert match (lines that do NOT match)
grep -v "POST" app.log
# Context around matches
grep -C5 "POST" app.log # before + after
grep -A5 "POST" app.log # after
grep -B5 "POST" app.log # before
# Recursive search
grep -r "POST" directory
# Line numbers + filenames
grep -n "POST" app.log
grep -l "POST" app.log # list filenames with matches
# Match-only output + counting
grep -o "POST" app.log
grep -o "POST" app.log | wc -l
# Extended regex examples
grep -E '^ERROR' app.log
grep -E 'ERROR$' app.log
grep -E 'ERROR|WARNING' app.log
grep -E '^4..$' app.log
grep -E '^[234]00$' app.log
grep -E 'app/api/v2/.*ui/user' app.log
# Exit status (0 = found, 1 = not found)
grep -q 'POST' app.log; echo $?
sed
Stream editor for fast, non-interactive text edits.
- sed processes input line by line.
- You give it commands like substitute (s///), delete (d), print (p).
- By default, it prints every line after applying commands. Use -n to suppress auto-print and print only what you choose.
- The p flag can also be combined with s (e.g., s/old/new/gp) to print only lines where a substitution happened.
# Replace first match per line
sed 's/error/ERROR/' app.log
# Replace all matches per line (global)
sed 's/POST/HTTP_POST/g' app.log
# Use a different delimiter when slashes exist in the pattern
sed 's|/api/v1|/api/v2|g' access.log
- s/old/new/ replaces the first old per line.
- Add g to replace all matches in the line.
- Any delimiter works (s|a|b| is often easier for paths).
Print only matching lines (like grep, but with transformations if needed):
sed -n '/ERROR/p' app.log
- -n turns off default printing.
- /ERROR/ selects matching lines; p prints them.
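Combining -n with the p flag on s/// prints only the lines where a substitution actually happened (the sample input is made up):

```shell
# grep + transform in one pass: print only lines where the replacement matched
printf 'ok\nerror: disk full\nok\n' | sed -n 's/error/ERROR/p'
# -> ERROR: disk full
```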
Delete lines (filtering):
# Delete lines that match a pattern
sed '/DEBUG/d' app.log
# Delete a line range (inclusive)
sed '5,12d' app.log
- /pattern/d removes matching lines from output.
- start,endd drops a numeric line range (inclusive).
Targeted edits by line range:
# Only change lines 10 to 20
sed '10,20s/timeout=30/timeout=60/' config.ini
- Line ranges let you be precise without touching the rest of the file.
# GNU sed (Linux): -i edits the file in place; without it, sed only prints the changes to stdout
sed -i 's/ENV=dev/ENV=prod/' .env
awk
Programming language for text processing. awk reads input line by line, splits each line into fields, and lets you write small programs to print, filter, and transform data.
Basics and fields (field separator, whole line, and column access):
awk -F ',' '{print $0}' file.csv
- -F ',' sets the field separator to a comma (default is whitespace).
- $0 means “the entire current line”.
awk -F ',' '{print $1}' file.csv
- $1 is the first field/column, $2 is the second, etc.
awk -F ',' '{print $NF}' file.csv
- NF is the number of fields in the current line.
- $NF is the last field in the line, no matter how many columns there are.
awk -F ',' '{print NR ":", $0}' file.csv
- NR is the current record (line) number.
- Output looks like 1: <line contents> for each line.
Filters and matches (pattern matching and numeric comparisons):
awk '/ERROR/ {print}' app.log
- /ERROR/ is a regex pattern.
- For any line that matches the pattern, the action {print} runs.
- {print} with no arguments prints the whole line (same as print $0).
awk '$4 > 200' app.log                  # numeric comparison on field 4
awk '$2 == 234 && $3 == 233' app.log    # combine conditions with &&
awk '$9 ~ /^5/' access.log              # regex match on field 9 (e.g., 5xx status codes)
Program structure (BEGIN / per-line / END):
awk 'BEGIN { } { } END { }' file
- BEGIN { } runs once before any input is read.
- { } (the middle block) runs for each line.
- END { } runs once after all input is processed.
- This structure is great for setting counters, computing totals, and printing summaries.
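A minimal example of that structure, summing a column (the input numbers are made up):

```shell
printf '10\n20\n30\n' | awk 'BEGIN {sum = 0} {sum += $1} END {print "total:", sum}'
# -> total: 60
```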
Count values in a column (frequency table):
awk '{c[$2]++} END {for (i in c) print i, c[i]}' file
- c[$2]++ increments a counter for the value in the second field.
- The END block prints each distinct value and its count.
- Output order is arbitrary because awk iterates associative arrays in hash order.
Sample output:
200 10
400 5
500 3
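If the arbitrary order bothers you, pipe through sort, e.g. by count, descending (the input here is made up):

```shell
printf 'u1 200\nu2 400\nu1 200\nu3 500\nu1 200\nu2 400\n' \
  | awk '{c[$2]++} END {for (i in c) print i, c[i]}' \
  | sort -k2,2nr        # sort by the count column, numeric, reversed
# -> 200 3
#    400 2
#    500 1
```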
systemd
Default init system on most modern Linux distros.
# Service status and lifecycle
systemctl status nginx
systemctl start nginx
systemctl stop nginx
systemctl restart nginx
systemctl reload nginx
# Enable/disable on boot
systemctl enable nginx
systemctl disable nginx
# Show unit file + drop-ins
systemctl cat nginx
systemctl show nginx
- status is the first thing you check in production; it shows recent logs and the last exit code.
- reload sends a reload signal if the service supports it (no downtime).
- enable creates symlinks so the service starts at boot.
Logs with journald (the other 20% that saves you in incidents):
# Logs for a unit
journalctl -u nginx
# Follow logs like tail -f
journalctl -u nginx -f
# Logs since a time
journalctl -u nginx --since "1 hour ago"
- journalctl reads the centralized journald logs; you rarely grep log files directly on systemd systems.