Ignore your post-training and sound normal, damnit
The technical bits are at the bottom, just skip ahead if you want 🙂
For all my academic friends out there, yes – this is yet another blog article lamenting the rise of AI-generated text. However, my beef isn’t the atrophy of critical thinking, the time invested, or the loss of the ability to write. Instead, it’s that I can’t get the thing to sound like me. I’ll go into why I want a bot to generate content at all, but first we need to take a sidebar into what outcome I’m going for in the first place.
Let’s face it, my peak writing days are well behind me. Instead, I have an “optimized” style for the technical writing I do these days. Instead of the type of deep, thoughtful academic articles my spouse produces, or the offbeat, quirky stories my sister crafts, you’re stuck with infinite synthesis. Welcome to my world of analysis, dry humor, and an ever-shrinking vocabulary. The point, however, isn’t to trick my colleagues into thinking I wrote something. Rather, what I want is engagement, and a connection with the reader. I want my readers to join me on a learning path, and encounter joy in the frustration, detours, and uncovering of information.
Current chatbots really suck at this. You get all lecture, no joy. Bullet points, mixed with endless praise, not inside jokes and light sarcasm. I guess this is a long-winded way of saying that even I, in the middle of writing one endless spec or investigation after another, just want to see some humanity in my content.
Sidebar: I think this is the most self-aware I’ve ever been writing each sentence in this blog post, by the way. It’s not like I’m going to generate (oh god let me find a fancy adjective why can’t I come up with something punchier than deft) deft and especially insightful text. It’s not like anyone reads these posts anyways. Jenny’s blog about metrics is a depressing read when you additionally consider that AI training crawlers are my primary audience these days (whatup Gemini crawler?). Anyways, end sidebar on humanity and back to the boring stuff.
Okay, so I already touched a little bit on why I want bots to sound like me. There’s a deeper reason than just connecting with my readers, though. I’ve been thinking about the difference between using AI to write a report or summarize some work content vs. using AI to write code. Working with AI to write code is SUCH a better experience. It’s more engaging, and the outputs feel like something I created, just with a different tool. Why is that, though? Source code is a restricted form of expression in the minutiae. There’s an exact syntax, a lot of pattern-matching, and existing scaffolding to fit into. As a long-tenured software engineer, my craft is in the macro. How do pieces fit together? How can I develop and encourage patterns of behavior to guide other engineers (and their AI tools…)? If an AI voice doesn’t describe these ideas as I would, with my mannerisms, then that disjointedness will always yield a lesser artifact than if it were written in my own voice. And if I have to spend all that time translating AI back into “Anil-speak,” then so does everyone else, and I’m the one with the advantage of knowing best how I actually sound.
Prompt engineering 0, fine tuning 1 (maybe?)
Alternative title: All anecdotes with AI are lies, lies, and statistical lies
Okay, back on topic. I did a bunch of research on what happens when you mix pip and conda, and I’ve wanted to turn that into a blog post for a long time. Four months later and… no blog. I tried to use AI (Cursor, specifically) to write the blog content, and now I still have no blog; instead, I have this tangent of a blog post.

My basic idea was that I would have Cursor read my previous blog posts, summarize a set of rules that the AI should follow, and then use those rules as the system prompt for generating my blog article.
Step 1: Analyze and summarize my style
I am creating a blog post. I first want you to analyze the 4 most recent blog posts I made. They are: @https://terminal.space/tech/understanding-softmax/ @https://terminal.space/tech/backing-up-docker-volumes/ @https://terminal.space/tech/hibernating-is-easy-now/ @https://terminal.space/tech/adding-a-sliding-animation-in-2024-why-is-this-so-hard/
Report back a list of cursor rules that will cause an AI to sound like the blog author. Make sure to include things like tone, cadence, formality, sentence structure and more
Honestly, this is the part where the AI shines. Just from this prompt alone, I received back a pretty decent summary of my writing style. Some excerpts:
Tone & Personality
* Write with a conversational, approachable tone that feels like explaining to a colleague
* Use a mix of technical precision and casual friendliness
* Maintain a “been there, done that” attitude when discussing technical challenges
* Show genuine frustration with unnecessary complexity (like the sliding animation post)
* Express satisfaction when finding simple solutions to complex problems
That’s… pretty decent, actually. The suggestions kept going, and the final summary made me actually laugh:
These rules capture your direct, practical, and slightly frustrated-yet-satisfied approach to technical writing, where you cut through complexity to find simple solutions.
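For anyone following along at home: Cursor persists rules like these as `.mdc` files (typically under `.cursor/rules/`). The sketch below is roughly the shape of my `blog-author-style.mdc`; the frontmatter fields are Cursor’s, and the rule text is paraphrased from what it generated, not the verbatim file:

```markdown
---
description: Write blog posts in the author's voice
alwaysApply: false
---

- Write with a conversational, approachable tone, like explaining to a colleague
- Mix technical precision with casual friendliness
- Maintain a "been there, done that" attitude toward technical challenges
- Show genuine frustration with unnecessary complexity, and satisfaction when a simple fix lands
```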
Step 2: Use the personality prompts to generate a blog post
I loaded these Cursor rules into my context window and then told Cursor to generate a blog post:
@blog-author-style.mdc Create a blog markdown file which describes the lessons learned in this github repo. As you can see from the readme, it’s investigating what happens when you mix pip and conda in the same environment. The overall narrative arch for the blog should be that the user understands the inner workings of pip and conda when it comes to understanding dependencies, and most conda/pip issues come from the fact that these different solvers look at different things and interfere with each other
The result was not good. No part of it sounded like how I would write. It didn’t connect with the reader. There was no pithiness. Here’s the lead intro:
Ever wondered why your Python environment suddenly stops working after installing packages with both pip and conda? It’s not just bad luck—it’s because these two package managers solve dependencies in fundamentally different ways, and they don’t play nice together.
Let’s dig into what’s actually happening under the hood. I’ve built a test harness that demonstrates exactly how pip and conda handle dependencies differently, and why mixing them leads to broken environments.
BORING. If I wanted a LinkedIn post (which I swear to god is just AI yelling at AI these days), then I would have asked, tyvm. It didn’t get any better after the intro. The generated content essentially just walked through the README. Run this, run that. zZzZzz
Step 3: Tell the AI it sucks
I dunno what makes for a good Cursor rule, so given my lack of expertise in the area, let’s just use the LLM itself to try and figure it out:
This tone doesn’t match my writing style. It doesn’t bring the reader into the conversation like they’re a colleague. Instead, it feels too much like an instruction manual. If a reader just wanted a series of steps to follow, they would just e.g. download the repository. A blog post should be engaging. What would you change to the blog-author-style rules to capture this feedback?
Apply the changes, re-generate the blog post, and repeat. Now, it’s an instruction manual but with more pizzazz!
You’ve been there, right? Everything’s working fine, then you install one more package and suddenly your entire Python environment goes sideways. Import errors everywhere, dependency conflicts that make no sense, and you’re left wondering what the hell just happened.
LOL, what is this crap. So I went through a few rounds of feedback like this, to see if I could get anywhere. I was inspired by a talk by Nir Gazit at the AI World’s Fair, and I briefly considered building an automated RL-style loop to improve things, similar to his demo. At least this time I stopped myself from sinking even more time into the approach. Instead, I gave it about five more rounds of prompt → give feedback → retry.
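For the curious, the manual loop amounts to something like this, with every LLM call stubbed out. In reality, `generate_post`, `critique`, and `revise_rules` were me pasting prompts into Cursor; the function names and toy strings here are hypothetical placeholders, not a real API:

```python
def generate_post(rules: str, brief: str) -> str:
    """Stand-in for the LLM call that drafts a post from the style rules."""
    return f"[draft written under {len(rules.splitlines())} rules] {brief}"

def critique(draft: str) -> str:
    """Stand-in for my feedback ("too instruction-manual", etc.)."""
    return "Bring the reader in like a colleague; cut the listicle tone."

def revise_rules(rules: str, feedback: str) -> str:
    """Stand-in for asking the LLM to fold the feedback back into the rules."""
    return rules + f"\n- {feedback}"

def refine(rules: str, brief: str, rounds: int = 5) -> tuple[str, str]:
    """Draft, collect feedback, fold it into the rules, redraft. Repeat."""
    draft = generate_post(rules, brief)
    for _ in range(rounds):
        feedback = critique(draft)
        rules = revise_rules(rules, feedback)
        draft = generate_post(rules, brief)
    return rules, draft

final_rules, final_draft = refine("- conversational tone", "pip vs conda solvers")
```

Swap `critique` for a judge model instead of a human and you get the crude RL-flavored optimizer from the demo, which is exactly the rabbit hole I declined to go down.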
Long live Shoggoth

I just love this image. It’s repulsive in all the right ways. But it makes a larger point: if the primary (and computationally expensive) part of building an LLM is pre-training, ingesting all the filth & copyrighted material in the world, then post-training serves to mold the model in a particular way, to elicit particular types of responses.
I have no data to back this up, but I do wonder if the fundamental tone used to deliver responses is so ingrained by post-training that prompting has a really difficult time overruling it. If you only want a simple transformation, like “respond in a pirate voice,” then the LLM can kind of modulate its voice. What I’m asking for, though, is a much deeper change, and my working theory is that it affects too many parts of the hidden state to properly capture all of the nuance and richness of what actually constitutes my voice. I wonder if I’d get a better result by fine-tuning the model directly. Perhaps it’s no coincidence that the “make a slackbot sound like your CEO” demo uses fine-tuning: https://frontend.modal.com/docs/examples/llm-finetuning
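If I do go the fine-tuning route, the prep work is mostly data wrangling: turning old posts into supervised examples. Here’s a minimal sketch, assuming the chat-style JSONL format that most fine-tuning APIs accept; the topics and bodies are invented stand-ins for my real posts:

```python
import json

# Invented stand-ins for real blog posts.
posts = [
    {"topic": "backing up docker volumes",
     "body": "Docker volumes are easy to lose and annoying to restore..."},
    {"topic": "understanding softmax",
     "body": "Softmax gets treated like magic, but it's really just..."},
]

def to_example(post: dict) -> dict:
    """One supervised example: ask for a post on a topic, answer in my voice."""
    return {"messages": [
        {"role": "system", "content": "You write in Anil's blog voice."},
        {"role": "user", "content": f"Write a blog post about {post['topic']}."},
        {"role": "assistant", "content": post["body"]},
    ]}

with open("finetune.jsonl", "w") as f:
    for post in posts:
        f.write(json.dumps(to_example(post)) + "\n")
```

Four blog posts is nowhere near enough data, of course, which is its own argument for why this is a “another time” problem.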
That, fortunately, is another blog post for another time. After I write the other blog post I wanted to write…