sed
- Stream Editor
It reads input line by line and lets you substitute, delete, or transform text streams.
sed [options] 'command' file
Essential Commands & Options
Command | Description |
---|---|
s/pattern/replacement/ | Substitute pattern with replacement (first match per line) |
s/pattern/replacement/g | Substitute all matches in line |
d | Delete line |
p | Print line |
-n | Suppress automatic printing |
-i | Edit file in place |
Example
sed 's/decline/accept/' status.txt # replace first "decline" per line with "accept"
sed 's/decline/accept/g' status.txt # replace all "decline" with "accept"
sed '/^#/d' code.py # delete lines that start with #
sed -n '/error/p' code.py # print only lines containing 'error'
Exercises
- In `server.conf`, replace all occurrences of `localhost` with `127.0.0.1`, and save changes in place.
sed -i 's/localhost/127.0.0.1/g' server.conf #edits the file in place
- In `logfile.txt`, delete all lines that start with `#`.
sed -i '/^#/d' logfile.txt
- In `users.txt`, print only lines that contain the word `admin`.
sed -n '/admin/p' users.txt #-n suppresses default output
- In `data.txt`, add the string `END` to the end of every line.
sed 's/$/END/' data.txt # $ matches the end of each line
- In `report.txt`, replace the first occurrence of `ERROR` with `WARNING` in each line, but only print lines where a substitution happened.
sed -n 's/ERROR/WARNING/p' report.txt # the p flag prints only substituted lines
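The in-place exercises above overwrite their files with no undo. A safer pattern (supported by both GNU and BSD sed, though their `-i` syntax differs in other respects) passes a suffix to `-i` so sed keeps a backup copy. The file names below are made up for the sketch:

```shell
printf 'status: decline\n' > demo.txt      # create a hypothetical sample file
sed -i.bak 's/decline/accept/' demo.txt    # edit in place, keep original as demo.txt.bak
cat demo.txt                               # now reads: status: accept
cat demo.txt.bak                           # untouched original: status: decline
rm demo.txt demo.txt.bak                   # clean up
```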
awk
- Text And Field Processing
awk
processes text line by line and splits each line into fields (columns). Great for reports, CSVs, and logs.
awk 'pattern { action }' file
Built-In Variables
Variable | Description |
---|---|
$0 | Entire Line |
$1 , $2 … | First, second field, etc. |
NF | Number of fields |
NR | Line number |
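A quick sketch of the variables above in action (the file name and contents are made up):

```shell
printf 'a b c\nd e\n' > fields.txt     # two sample lines with 3 and 2 fields
awk '{print NR, NF, $NF}' fields.txt   # line number, field count, last field
# first line prints "1 3 c", second prints "2 2 e"
rm fields.txt                          # clean up
```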
Example
awk '{print $1}' file.txt #print the first field of each line
awk -F ':' '{print $1}' /etc/passwd #print usernames
awk '$3 > 50' data.txt #print lines where third field > 50
awk -F, '{sum += $2} END {print sum}' prices.csv #sum of the 2nd field
awk '/error/' log.txt #print lines containing 'error'
Exercises
- In `report.csv` (comma-separated), print only the first and third columns.
awk -F, '{print $1 "," $3}' report.csv
- In `sales.txt` (space-separated), print all lines where the second field (column) is greater than 500.
awk '$2 > 500' sales.txt
- In `scores.txt`, calculate the average of values in the third column and print the result.
awk '{sum += $3; count++} END {print sum/count}' scores.txt
- In `/etc/passwd`, print usernames (first field) and their default shells (last field).
awk -F: '{print $1, $NF}' /etc/passwd
- In `data.csv`, print lines where the first field equals `FAILED`.
awk -F, '$1 == "FAILED"' data.csv # string comparison on the first field
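The two tools combine naturally in a pipeline: sed cleans the stream, then awk does the field arithmetic. A small sketch with a hypothetical data file:

```shell
printf '# header\nalice 10\nbob 20\n' > totals.txt           # made-up data with a comment line
sed '/^#/d' totals.txt | awk '{sum += $2} END {print sum}'   # drop comments, sum field 2: prints 30
rm totals.txt                                                # clean up
```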