The Network Engineer's Guide to Python Tooling
The Python tooling landscape has evolved rapidly over the past few years. While tools like `uv` promise faster dependency management and `poetry` offers elegant packaging solutions, the real question for network engineers is: what actually works in day-to-day automation projects?
A recent discussion among network automation practitioners reveals the gap between theoretical tooling perfection and practical workflow reality. Here's what engineers are actually doing to manage Python environments, dependencies, and tooling in their network automation projects.
NOTE: This blog doesn't display code very well; feel free to view the PDF version of this post if you want: View this article as a PDF
The Core Challenge: Environment Management
The fundamental problem hasn't changed: you need your Python tools to work reliably across different environments, from your local development machine to CI/CD pipelines to production automation systems. While `uv` creates virtual environments efficiently with `uv sync`, and VS Code can auto-detect `.venv` directories, the integration isn't always seamless.
The Reality Check: While `uv run` provides a convenient way to dynamically load environments and run commands, it's not mandatory. You can still activate environments the traditional way (`source .venv/bin/activate`) and run commands normally. However, the discussion revealed that many engineers prefer approaches that eliminate the need to remember whether to activate environments manually or use tool-specific prefixes, especially when troubleshooting network issues at 2 AM, when cognitive overhead matters most.
Practical Solutions That Work
The Makefile Approach
Many experienced engineers gravitate toward Makefiles as a universal wrapper. This isn't about nostalgia—it's about consistency across the entire toolchain evolution:
```shell
# Works whether you're using pip, poetry, or uv underneath
make install   # Sets up environment
make run       # Runs your automation
make test      # Runs tests
make clean     # Cleans up
```
The key insight: the interface stays constant while the underlying tools evolve. Your repositories might contain a mix of `requirements.txt`, `pyproject.toml`, and legacy `setup.py` files as projects migrate over time. Makefiles provide a stable interface that works across this heterogeneity.
The Automatic Environment Solution
For engineers who want seamless tool integration, automatically loading environments eliminates the manual activation step entirely. Tools like `direnv` can automatically set up your environment when you enter a project directory:
```shell
# In .envrc
layout uv
# or manually:
# export VIRTUAL_ENV=.venv
# export PATH=$VIRTUAL_ENV/bin:$PATH
```
This modern approach makes every tool "just work"—VS Code picks up the right Python interpreter, `ansible-playbook` uses the right dependencies, and command-line tools work without prefixes or manual activation. Many engineers combine this with simple aliases (`alias venv='source .venv/bin/activate'`) for manual control when needed.
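Note that `layout uv` may not be part of your direnv version's built-in stdlib; if it isn't, a small function in `~/.config/direnv/direnvrc` can provide it. A sketch (the function body is an assumption, not official direnv code; `PATH_add` is direnv's stdlib helper for prepending to `PATH`):

```shell
# ~/.config/direnv/direnvrc
# Hypothetical 'layout uv' implementation for direnv versions without one.
layout_uv() {
    # Create the project venv on first entry, then put it on PATH.
    if [[ ! -d .venv ]]; then
        uv venv
    fi
    export VIRTUAL_ENV="$PWD/.venv"
    PATH_add "$VIRTUAL_ENV/bin"
}
```

With this in place, a one-line `.envrc` containing `layout uv` (plus a one-time `direnv allow`) activates the environment on every `cd` into the project.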
The Simple Script Approach
Some teams are moving toward plain bash scripts with functions:
```shell
#!/bin/bash
# run.sh

setup() {
    uv venv
    uv sync
}

test() {
    uv run pytest
}

deploy() {
    uv run ansible-playbook -i inventory playbook.yml
}

"$@"  # Call the function passed as argument
```
Usage: `./run.sh setup`, `./run.sh test`, `./run.sh deploy`. No external dependencies, works everywhere, and easy for team members to understand and modify.
The Tool Dependency Problem
Network automation projects often require more than Python packages. You might need `jq` for JSON processing, `yq` for YAML manipulation, `gh` for GitHub API interactions, or specific versions of `ansible-core`. Asking users to install these manually creates friction and version inconsistency.
The Solution: Download tools automatically to your project directory:
```makefile
download-tools:
	@mkdir -p tools
	@./scripts/fetch-tool.sh jq 1.6 linux amd64
	@./scripts/fetch-tool.sh yq 4.30.8 linux amd64
	@./scripts/fetch-tool.sh task 3.28.0 linux amd64

# Use project-local tools
run-automation: download-tools
	./tools/jq --version
	./tools/task deploy
```
This approach leverages the fact that modern tools (written in Go/Rust) produce statically linked binaries. Users run `make download-tools` once, and everything "just works" without system-level installation.
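The `fetch-tool.sh` helper itself isn't shown in the discussion; a hedged sketch of what it might look like follows. The `github.com/example/...` URL pattern is a placeholder: every project names its release assets differently, so check each tool's releases page before adapting this. The final line is a demo of the URL builder instead of the real entry point (`fetch_tool "$@"`), so the sketch runs without network access:

```shell
#!/usr/bin/env bash
# scripts/fetch-tool.sh (sketch) -- download a pinned release binary into ./tools
set -euo pipefail

# Build the release URL for a tool. The pattern below is a PLACEHOLDER:
# real projects each use their own asset naming scheme.
tool_url() {
    local tool="$1" version="$2" os="$3" arch="$4"
    echo "https://github.com/example/${tool}/releases/download/v${version}/${tool}_${os}_${arch}"
}

# Download the binary into ./tools and mark it executable.
fetch_tool() {
    local tool="$1"
    mkdir -p tools
    curl -fsSL -o "tools/${tool}" "$(tool_url "$@")"
    chmod +x "tools/${tool}"
}

# Demo: show the URL that would be fetched (no network access needed).
tool_url jq 1.6 linux amd64
```

Separating URL construction from the download keeps per-tool quirks in one place and makes the helper easy to extend.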
Development Container Reality
While development containers (devcontainers) offer consistent environments, they come with trade-offs. They excel for:
- Complex multi-language projects
- Teams that need identical environments
- Integration with cloud development platforms
But they add complexity for simple automation scripts and can be overkill for many network automation use cases.
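For teams where the trade-off does pay off, the configuration itself is small. A minimal `.devcontainer/devcontainer.json` sketch (the image tag, post-create command, and extension list are assumptions; adjust to your base image and tooling):

```json
{
    "name": "network-automation",
    "image": "mcr.microsoft.com/devcontainers/python:3.11",
    "postCreateCommand": "pip install uv && uv sync",
    "customizations": {
        "vscode": {
            "extensions": ["ms-python.python"]
        }
    }
}
```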
What Actually Matters for Network Engineers
Based on real-world usage patterns, prioritize:
- Consistency: The same commands should work across all your projects
- Simplicity: Prefer solutions that don't require learning new tools
- Reliability: Tools should work when you need them most (during outages)
- Team Adoption: If it's too complex, people won't use it
Recommendations by Use Case
For Individual Scripts: Use `uv` with shebang lines for standalone automation:
```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.11"
# dependencies = ["netmiko", "click"]
# ///
```
For Team Projects: Combine `uv` or `poetry` for dependency management with Makefiles for consistent interfaces.
For Complex Environments: Consider development containers, but ensure the complexity is justified by the use case.
For Automatic Environment Management: Use `direnv` with `layout uv` for seamless environment activation, or combine traditional activation methods with simple aliases for manual control.
The Bottom Line
The "best" Python workflow isn't about using the newest tools—it's about reducing friction for your team while maintaining reliability. Many successful network automation teams use hybrid approaches: modern dependency management under the hood with simple, consistent interfaces on top.
The key insight from practitioners: optimize for human workflow, not tool elegance. Your automation is only as reliable as your team's ability to run, modify, and troubleshoot it when things go wrong.
This post is based on community discussions and represents the collective experience and opinions of individual practitioners, including: Roman Dodin, Ryan Hamel, Jesus Illescas, Urs Baumann, Bart Dorlandt, Christian Drefke, Steinn (Steinzi) Örvar, Cristian Sirbu, and Adam Angell. Approaches should be evaluated and adapted based on your specific network environment and requirements.
The conversation continues in the Network Automation Forum community – find us on Slack or LinkedIn.