# Treat LLM prompts like Unix pipes!
TIL about a tool called Runprompt, and it completely changed how I think about integrating AI into my daily workflow.
Usually, when I want to build something with an LLM, I feel like I have to spin up a heavy “agent framework” or write a complex Python script just to handle the API calls. Runprompt takes the opposite approach: it treats prompts as simple, executable files that fit perfectly into the command line.
## How it works

The tool runs `.prompt` files (based on Google's Dotprompt format). These are just text files where you define your configuration at the top (which model to use, temperature, output format) and your actual prompt template at the bottom.
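For a feel of the format, here's a minimal sketch of an "Analyzer" prompt. The frontmatter keys (`model`, `config`, `output`) follow the Dotprompt spec; the specific model name and the `{{stdin}}` template variable are assumptions for illustration, not taken from runprompt's docs:

```
---
model: googleai/gemini-2.0-flash   # assumed model identifier
config:
  temperature: 0.2
output:
  format: json
---
You are a log analyzer. Summarize the errors in the following log and
return JSON with "severity", "summary", and "suspected_cause" fields.

{{stdin}}
```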
## Why it's a game changer

The real magic is that it adheres to the Unix philosophy: it reads from standard input and writes to standard output.

This means you can chain LLM calls together using standard pipes (`|`), just like you would with `grep` or `awk`.
- You can pipe a log file into an "Analyzer" prompt.
- You can pipe the JSON output of that analysis directly into a "Report Generator" prompt (a sketch of one follows this list).
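Purely as an illustration, the "Report Generator" side might look like this; the filename `report.prompt`, the model name, and the `{{stdin}}` variable are hypothetical:

```
---
model: googleai/gemini-2.0-flash   # assumed model identifier
output:
  format: text
---
Turn the following JSON incident analysis into a short report with a
headline, an impact assessment, and recommended next steps:

{{stdin}}
```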
It turns complex "agentic workflows" into simple one-liners:

```sh
cat diff.txt | ./runprompt review.prompt | ./runprompt summarize.prompt
```
It's a single-file Python script with zero dependencies, so there's no virtual environment to set up just to run it. It makes AI feel like another standard utility in your toolbox rather than a heavy external dependency.
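And because it reads stdin and writes stdout like everything else, it composes with the usual tools. A hypothetical example (`classify.prompt` is a made-up prompt file that emits JSON):

```sh
# Filter a log with grep, classify the errors with an LLM,
# and pretty-print the JSON result with jq.
grep ERROR app.log | ./runprompt classify.prompt | jq .
```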