Last quarter a client’s deployment script hit 900 lines with zero bash functions. Just a massive wall of sequential commands. When something broke at 2 AM, nobody could figure out which section was failing. We refactored the whole thing into functions in about two hours, and suddenly the error logs actually made sense.
If you’re writing bash scripts longer than 30 lines without functions, you’re making your life harder than it needs to be.
Two Ways to Define a Function
Bash gives you two syntax options. Both do the same thing.
The first uses the function keyword:
function check_disk {
  df -h / | tail -1
}
The second drops the keyword and adds parentheses:
check_disk() {
  df -h / | tail -1
}
I prefer the second style. It's shorter, it's the POSIX-compatible form, and it's what you'll see in most scripts out in the wild. The parentheses are purely decorative; they never hold arguments the way they would in C or Python. That trips people up constantly.
One rule that bites newcomers: the function definition must appear in the script before you call it. Bash reads top to bottom. You can’t call something you haven’t defined yet.
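A minimal sketch of that rule (the function name here is made up for illustration):

```shell
#!/bin/bash
# Calling before the definition fails with "command not found",
# so the call below is commented out:
# say_hello

say_hello() {
  echo "hello"
}

say_hello   # works: bash has already read the definition
```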
Calling Functions and Passing Arguments
Calling a function is dead simple. Just type its name:
check_disk
If you don’t call it, the code inside never runs. It just sits there.
To pass arguments, add them after the function name separated by spaces. Inside the function, grab them with $1, $2, $3, and so on — same positional parameters you’d use for script arguments.
#!/bin/bash
greet_server() {
  echo "Checking host: $1 on port $2"
}
greet_server db-prod-01 5432
Output: Checking host: db-prod-01 on port 5432
We use this pattern across our managed environments to pass hostnames and service names into reusable health-check functions. One function, dozens of targets. Way better than copy-pasting blocks of code for each box.
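Functions also get `$#` (the argument count) and `"$@"` (all arguments), which are handy for guarding against missing parameters. A hedged sketch, with a hypothetical `check_host` function:

```shell
#!/bin/bash
# Hypothetical helper: bail out early if called with too few arguments.
check_host() {
  if [[ $# -lt 2 ]]; then
    echo "usage: check_host HOST PORT" >&2
    return 1
  fi
  echo "Checking host: $1 on port $2"
}

check_host db-prod-01 5432   # prints: Checking host: db-prod-01 on port 5432
```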
Return Values Are Not What You Think
Here’s where bash gets weird. If you come from Python or JavaScript, forget what you know about return values.
Bash functions don’t return arbitrary data. The return keyword sets an exit status — an integer between 0 and 255. Zero means success. Anything else means failure. You access it through $? immediately after calling the function.
#!/bin/bash
check_if_root() {
  if [[ ${EUID} -eq 0 ]]; then
    return 0
  else
    return 1
  fi
}
if check_if_root; then
  echo "User is root!"
else
  echo "User is not root!"
fi
This works great for true/false checks. We had a manufacturing client whose backup scripts needed root for snapshot mounts — a function like this at the top saved us from cryptic failures halfway through execution.
If you skip the return statement entirely, the function returns the exit code of the last command it ran. That’s fine until the last command is something like echo, which almost always returns 0 regardless of your actual logic. Be explicit.
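Here's a sketch of that pitfall, with made-up function names. The first version's trailing echo hides the grep failure; the second captures the status before the echo runs:

```shell
#!/bin/bash
# Pitfall: echo is the last command, so the function returns 0
# even when the user doesn't exist.
user_exists_bad() {
  grep -q "^$1:" /etc/passwd
  echo "checked $1"
}

# Explicit: save grep's status, then return it after the echo.
user_exists() {
  grep -q "^$1:" /etc/passwd
  local status=$?
  echo "checked $1"
  return "$status"
}
```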
Getting Actual Data Out
Need to return a string, or a number larger than 255? You have two options.
Option 1: Command substitution. Have the function echo the value, then capture it:
get_hostname() {
  hostname -f
}
my_host=$(get_hostname)
echo "Running on: $my_host"
Option 2: Global variables. Set a variable inside the function and read it outside:
get_uptime() {
  RESULT=$(uptime -p)
}
get_uptime
echo "$RESULT"
Command substitution is cleaner. Global variables get messy fast in larger scripts.
Local Variables Matter
By default, variables inside a function are global. That means they can silently overwrite variables in the rest of your script.
function myfunc {
  a=4
  local b=4
}
a=3; b=3
myfunc
echo "$a $b" # output: 4 3
The local keyword keeps b scoped to the function. Without it, a leaks out and overwrites the outer value. I have a strong opinion here: always use local unless you specifically need a global side effect. We got called in after a client’s previous MSP wrote deployment scripts where every function stomped on shared variable names. Took us a full day to untangle.
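One related gotcha worth knowing: combining `local` with command substitution on one line masks the command's exit status, because `$?` then reflects the `local` builtin itself. A minimal sketch (function names are made up):

```shell
#!/bin/bash
# Pitfall: $? after "local out=$(false)" is local's status, not false's.
get_data_masked() {
  local out=$(false)
  echo "$?"            # prints 0, even though false failed
}

# Fix: declare first, assign separately.
get_data_checked() {
  local out
  out=$(false)
  echo "$?"            # prints 1, false's real status
}
```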
Putting It Together
Here’s a pattern we run across managed endpoints for quick service checks:
#!/bin/bash
check_service() {
  local svc="$1"
  if systemctl is-active --quiet "$svc"; then
    return 0
  else
    return 1
  fi
}
for service in nginx postgresql redis; do
  if check_service "$service"; then
    echo "[OK] $service"
  else
    echo "[FAIL] $service"
  fi
done
Clean, reusable, easy to debug at 3 AM.
Caveats Worth Knowing
Bash functions aren’t namespaced. Two functions with the same name? The second one wins silently. No warnings. In larger scripts or when sourcing multiple files, this can nuke functionality without any indication something went wrong.
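The silent-redefinition behavior is easy to demonstrate (the function name here is invented for the sketch):

```shell
#!/bin/bash
# Two definitions of the same name: the later one silently replaces
# the earlier one, with no warning.
status_report() { echo "version one"; }
status_report() { echo "version two"; }

status_report   # prints: version two
```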
Also, return values cap at 255. Try to return 256 and you get 0. I’ve seen this cause phantom “success” results in monitoring scripts. Use echo with command substitution if you need real data back.
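A quick sketch of the wraparound, since it's easy to verify in a terminal:

```shell
#!/bin/bash
# Return values are taken modulo 256, so 256 wraps to 0 and
# reads as a phantom success.
overflow() {
  return 256
}

overflow
echo "$?"   # prints: 0
```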
For comparison, PowerShell functions handle return values and parameter validation far more gracefully — we covered some of that in our PowerShell Module Management post. But bash is what’s on every Linux box by default, so you work with what you’ve got.
The Takeaway
Define functions before you call them. Pass arguments positionally. Use return for exit codes and echo with command substitution for actual data. Always scope variables with local.
If your bash scripts are turning into unreadable walls of commands, functions are the fix. And if you’re managing a fleet of boxes and need help standardizing your automation, reach out to us — we’ve cleaned up enough spaghetti scripts to know where the problems hide.
For more scripting patterns, check out our guide on automating ODBC DSN management — different language, same principle of keeping things modular and maintainable.