Ever feel lost in the Linux terminal? Yeah, us too. But here's the tea: mastering filter commands is like having a superpower. Let's dive in!
What Are Filter Commands Anyway?
Think of them as bouncers at a club. They let in what you want and kick out the rest. Simple as that.
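To make the bouncer analogy concrete, here's a minimal sketch of a pipeline (the sample lines are made up for the demo): each command reads stdin, keeps what it wants, and passes the rest along.

```shell
# Four log lines go in; only the "error" lines get past the bouncer,
# and wc -l counts how many made it through
printf 'boot ok\ndisk error\nnet ok\ndisk error\n' | grep 'error' | wc -l
```

That's the whole model: stdin in, stdout out, chainable forever.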
1. grep: The Ultimate Bouncer
The OG of filtering. Find text patterns like you're Sherlock Holmes.
# Find lines containing "error" in a file
grep "error" logfile.txt
# Case-insensitive search
grep -i "ERROR" logfile.txt
# Count matching lines
grep -c "error" logfile.txt
# Show line numbers
grep -n "error" logfile.txt
# Invert match (show lines WITHOUT the word)
grep -v "success" logfile.txt

Pro tip: grep is your BFF. Seriously.
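One more flag worth knowing: -E switches on extended regular expressions, so you can match alternatives in a single pass. A quick self-contained sketch (the /tmp path and log lines are invented for the demo):

```shell
# Build a tiny sample log, then match lines containing either word
printf 'error: disk full\nwarn: cpu hot\nfatal: net down\n' > /tmp/demo.log
# -E enables alternation with |; matches the error and fatal lines
grep -E 'error|fatal' /tmp/demo.log
```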
2. awk: The Data Slicer
For when you need to parse columnar data like a boss.
# Print specific columns (space-separated)
awk '{print $1, $3}' data.txt
# Filter lines where a column meets a condition
awk '$2 > 100 {print $0}' numbers.txt
# Count lines in a file
awk 'END {print NR}' file.txt
# Sum a column
awk '{sum += $1} END {print sum}' numbers.txt

Fun fact: awk was named after its creators' initials (Aho, Weinberger, Kernighan). How cool is that?
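By default awk splits fields on whitespace, but -F sets a custom field separator, which is exactly what you want for colon-separated files like /etc/passwd. A sketch on inline sample data:

```shell
# -F: tells awk to split each record on colons; $1 is then the username
printf 'alice:x:1001\nbob:x:1002\n' | awk -F: '{print $1}'
```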
3. sed: The Stream Editor
Replace, delete, or transform text like a wizard.
# Replace first occurrence on each line
sed 's/old/new/' file.txt
# Replace all occurrences
sed 's/old/new/g' file.txt
# Delete lines containing a pattern
sed '/pattern/d' file.txt
# Delete specific line numbers
sed '5d' file.txt
# Print only lines matching a pattern
sed -n '/pattern/p' file.txt

Warning: sed can be spicy. Always back up before you sed.
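That backup advice is easy to automate. With GNU sed, attaching a suffix to -i keeps the original file around (the /tmp path here is just for the demo; BSD/macOS sed handles the suffix slightly differently, so check your man page):

```shell
# Create a throwaway file to edit
printf 'old value\n' > /tmp/config.txt
# -i.bak edits in place AND saves the untouched original as config.txt.bak
sed -i.bak 's/old/new/' /tmp/config.txt
cat /tmp/config.txt       # now contains "new value"
cat /tmp/config.txt.bak   # still contains "old value"
```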
4. cut: The Minimalist
Extract columns or characters. Dead simple.
# Extract columns (colon-separated)
cut -d: -f1 /etc/passwd
# Extract character range
cut -c1-5 file.txt
# Extract by byte
cut -b1-10 file.txt

Vibe: If awk is the chef, cut is the sushi knife. Precise. Clean. Elegant.
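cut can also grab several fields at once: -f accepts comma-separated field numbers (and ranges like 1-3). A sketch on inline sample data:

```shell
# Fields 1 and 3 of each colon-separated record (username and UID here)
printf 'alice:x:1001\nbob:x:1002\n' | cut -d: -f1,3
```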
5. sort: The Organizer
Put things in order because chaos is overrated.
# Sort alphabetically
sort file.txt
# Sort numerically
sort -n numbers.txt
# Reverse sort
sort -r file.txt
# Sort by specific column
sort -k2 -n data.txt
# Remove duplicates while sorting
sort -u file.txt

6. uniq: The Duplicate Detective
Find or remove duplicate lines. It's that simple.
# Show unique lines only
uniq file.txt
# Count occurrences
uniq -c file.txt
# Show only duplicated lines
uniq -d file.txt
# Show only unique lines (no repeats)
uniq -u file.txt
# Case-insensitive
uniq -i file.txt

Remember: uniq works best on sorted files. They're like a couple: better together!
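The "better together" advice comes from how uniq actually works: it only collapses adjacent duplicate lines, so unsorted input slips repeats right past it. A quick sketch on inline data:

```shell
# Unsorted: the two "apple" lines aren't adjacent, so uniq misses them
printf 'apple\nbanana\napple\n' | uniq -c
# Sorted first: duplicates become adjacent and get counted correctly
printf 'apple\nbanana\napple\n' | sort | uniq -c
```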
7. wc: The Counter
Count words, lines, and bytes. Sounds boring but super useful.
# Count lines
wc -l file.txt
# Count words
wc -w file.txt
# Count bytes
wc -c file.txt
# Count characters
wc -m file.txt
# All the stats
wc file.txt

8. tr: The Transformer
Translate, delete, or squeeze characters.
# Convert lowercase to uppercase
tr a-z A-Z < file.txt
# Delete specific characters
tr -d '0-9' < file.txt
# Squeeze repeated characters
tr -s ' ' < file.txt
# Replace one character with another
tr ':' ',' < file.txt

Cool factor: tr only reads stdin (it takes no filename arguments), so you redirect or pipe input into it. It's like the minimalist of the group.
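Since tr is stdin-only, piping into it is just as natural as redirecting. It also understands POSIX character classes, which beats spelling out ranges. A sketch:

```shell
# Pipe in, uppercase everything via character classes
echo 'hello world' | tr '[:lower:]' '[:upper:]'
```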
9. head & tail: The Boundary Guards
Look at the beginning or end of files.
# First 10 lines
head file.txt
# First 5 lines
head -n 5 file.txt
# Last 10 lines
tail file.txt
# Last 5 lines
tail -n 5 file.txt
# Live monitoring (follow mode)
tail -f logfile.txt

Hack: tail -f is perfect for watching logs in real time. You're welcome.
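head and tail also compose: chaining them slices an arbitrary range out of the middle of a stream. A sketch using seq to generate sample input:

```shell
# Lines 4 through 6 of a 10-line stream:
# head keeps the first 6 lines, tail keeps the last 3 of those
seq 10 | head -n 6 | tail -n 3
```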
10. tee: The Splitter
Split output to both stdout AND a file.
# Write to file AND display
command | tee output.txt
# Append to file
command | tee -a output.txt
# Chain it
echo "hello" | tee file1.txt | tee file2.txt11. xargs β The Argument Builder ποΈ
Convert stdin into command arguments. Powerful stuff.
# Pass file list to a command
ls *.txt | xargs wc -l
# Execute with null delimiter
find . -name "*.tmp" -print0 | xargs -0 rm
# Limit arguments per invocation
echo -e "file1\nfile2\nfile3" | xargs -n 1 echo

Pro Tips: Chaining Magic
The real power? Pipes. Combine these bad boys:
# Find all errors, count occurrences, show top 5
grep "error" app.log | sort | uniq -c | sort -rn | head -5
# Extract usernames, sort, remove duplicates
cut -d: -f1 /etc/passwd | sort | uniq
# Find large files, list details
find . -type f -size +10M -print0 | xargs -0 ls -lh

One-Liners to Impress Your Friends
# Count total lines in all .txt files
find . -name "*.txt" -exec wc -l {} + | tail -1
# Find most frequent words in a file
tr ' ' '\n' < file.txt | sort | uniq -c | sort -rn | head -10
# Extract unique IP addresses from logs
grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' log.txt | sort -u
# Replace tabs with spaces in all Python files
find . -name "*.py" -exec sed -i 's/\t/ /g' {} +π The Golden Rules
Know your data: understand what you're filtering.
Test first: always test on a copy before using -i (in-place edit).
Pipe wisely: each pipe adds overhead, but readability > micro-optimization.
Combine tools: one tool does one thing well. Mix them!
RTFM: seriously, man grep is your friend.
Final Thoughts
Filter commands are the spice rack of Linux. Master them, and you'll cook up command-line magic every day. Start small, combine fearlessly, and soon you'll be writing one-liners that make senior devs jealous.
Now go forth and filter like a boss!
Happy filtering,
Your Friendly Neighborhood Linux Enthusiast