Termbot is a command-line tool for conveniently interacting with OpenAI's GPT models or Groq's language models directly from your terminal. It offers the familiar ChatGPT-style question/answer workflow, with added flexibility such as reading local file contents, accepting large inputs from STDIN, using custom local instructions, and more.
https://github.com/Argandov/termbot
These past few weeks I've been rebuilding and extending some of Termbot's capabilities, and since its possible uses are almost endless, I decided to share some usage examples.
The following are just examples; you'll want to tune the prompts to your own taste and needs.
Case 0: Aliasing Termbot in silent mode (no banner) for quick access, in $HOME/.bashrc or .zshrc:
$ alias t="termbot -s"
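If Termbot needs an API key, the same file is a convenient place to export it. Here is a minimal sketch, assuming the key is read from the OPENAI_API_KEY environment variable (check the project's README for the exact variable Termbot expects):

# In $HOME/.zshrc or $HOME/.bashrc
export OPENAI_API_KEY="sk-..."   # assumed variable name; adjust to whatever Termbot expects
alias t="termbot -s"             # -s runs Termbot silently, without the banner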
Case 1: Summarizing a Blog Post:
Once aliased, copy the raw text contents of a blog post to the clipboard and run:
$ pbpaste | t -p "Summarize this Blog Post in no more than 75 words, and point out the most important takeaways of it"
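Note that pbpaste is macOS-only; on Linux you can read the clipboard with xclip instead (assuming it is installed):

$ xclip -selection clipboard -o | t -p "Summarize this Blog Post in no more than 75 words, and point out its most important takeaways"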
Case 2: Documenting code:
$ t -p "Generate a README.md for this project /file:app.py pointing out what it does, how to use it, and a general intro to its capabilities" | tee README.md
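The same pattern works for other documentation tasks, for example adding docstrings to a source file (utils.py here is just a placeholder name):

$ t -p "Add docstrings and brief inline comments to /file:utils.py, returning only the updated code" | tee utils_documented.py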
Case 3: Using Termbot to debug code or find issues.
For example, here is an issue I had that turned out to be an extremely simple mistake, which Termbot helped me solve in seconds:
$ t -p "For some reason, every time I execute /file:termbot.py the exit code is 1 even upon successful execution. Where might I be wrongly returning an exit code of 1?"
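The same idea works for runtime errors: pipe a traceback straight into Termbot instead of (or alongside) the file. A quick sketch, where app.py stands in for whatever script is failing:

$ python3 app.py 2>&1 | t -p "Explain this traceback and suggest a fix"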
Case 4: Summarizing Git Changes and Logs:
This one was super interesting, and I think there are other useful Git-related use cases as well.
$ git log | t -p "Summarize all Git Changes and Logs"
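A related trick is asking for a commit message from the staged diff; the prompt below is just a starting point:

$ git diff --staged | t -p "Write a concise, conventional commit message for these changes"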
Case 5: Scraping websites and summarizing their content:
Here "scrape" is an alias for a simple web-scraper tool, like the one I made: WebScraper.
$ scrape <url> | t -p "Give me the TL;DR of this information"
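If you don't want to write your own scraper, a text-mode browser works as a rough stand-in; for example, assuming lynx is installed:

$ alias scrape="lynx -dump -nolist"
$ scrape https://example.com | t -p "Give me the TL;DR of this information"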
Plans for the future, short term
I plan to add support for Claude 3 alongside GPT-4 to enhance its capabilities, as well as a built-in, generic website-scraping capability.