Paperwork
I’ve been working a lot with AI lately. Wait, don’t run away. I know it’s a boring and unpleasant topic, one about which everything and its opposite has already been said, but it’s a topic that can’t be ignored. And anyway, I’m only telling you a side story. So bear with me, read on.
I’ve been working a lot, I was saying, with AI, for a few months now¹, and I’ve been trying out the paid tiers of the various LLMs to figure out which one best suits the way I use them².
Among the various tasks for which I use different models is paperwork. Paperwork is the worst, most monotonous, slowest, and dullest part of my job, but someone has to do it. And I, modestly, have to do it. And with paperwork, as I was saying, AI is a natural fit³.
Paperwork needs to be read, understood, edited, checked, corrected, revised again, reread, and certified. And one of the first things you learn to do with LLMs is tell them how to edit, revise, and correct it, both formally and grammatically, in tone and in substance. In short, a second prompt, perhaps longer than the first, helps to better define the objective and correct any errors.
While I was working with Claude Pro, Anthropic’s LLM, Claude had the clever idea of writing a script to fix a whole series of recurring style issues in the document, exactly as I had flagged them. And then, while it was working to please me and modify the content according to my instructions, while the intelligence artificially refined its process to progressively clean up some recurring redundancies in the text, it suddenly stopped and showed me this message:
The script […] is too complex and risky. I’ll manually make corrections to the most important sections. But first, I’ll verify that the other changes have been applied […]
The LLM realized, the intelligence realized, that it was about to produce something that would compromise the entire document, and so it stopped and proceeded to correct it “manually” (read: with another algorithm).
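To give an idea of the kind of script involved, here is a minimal, entirely hypothetical sketch in Python: a single regex-driven pass that applies a list of style substitutions to a whole document. This is my reconstruction of the shape of such a thing, not Claude’s actual code; the file names and the rules are invented.

```python
import re

# Hypothetical style rules: each pair maps a recurring pattern to its
# preferred replacement. Invented for illustration only.
STYLE_RULES = [
    (re.compile(r"\butilize\b"), "use"),
    (re.compile(r"\s+,"), ","),    # stray whitespace before commas
    (re.compile(r"\.{3,}"), "…"),  # runs of dots collapsed to an ellipsis
]

def apply_style_rules(text: str) -> str:
    """Apply every substitution in one sweeping pass over the text.

    This is exactly the kind of blanket edit that can quietly mangle a
    document: one overly greedy pattern and every page changes at once.
    """
    for pattern, replacement in STYLE_RULES:
        text = pattern.sub(replacement, text)
    return text

if __name__ == "__main__":
    # "paperwork.txt" is an invented file name for this sketch.
    with open("paperwork.txt", encoding="utf-8") as f:
        original = f.read()
    with open("paperwork_revised.txt", "w", encoding="utf-8") as f:
        f.write(apply_style_rules(original))
```

Even in this toy version, the risk Claude flagged is plain: every rule rewrites the entire file in one go, with no review step in between, which is presumably why it fell back to targeted, section-by-section corrections instead.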
We live in a fantastic historical period.
1. For my work, but not only for my work, using AI (setting aside here any judgment on data handling, privacy, accuracy, or the ability to evaluate its output) to ease and speed up a task is essential. It makes a difference. It’s like working at 1.5x. Knowing how to exploit it is a competitive advantage that can’t be ignored. ↩︎
2. For some time, I evaluated and tested the Pro version of Duck.ai for its privacy protections. Using it for work, though, I realized it would be useful to save and manage content over time, create dedicated agents, and keep context around for future use. Duck.ai is fantastic, but it isn’t built for that. I had to do some self-analysis and make a disruptive decision about how I usually treat my data: either hand it over to the AI and, with a little caution, say goodbye to my privacy-first principle, or stay out. In the end, I gave in; it was inevitable. And for now, by being a little careful about the information I hand over to the algorithms, it’s fine as it is. ↩︎
3. To be clear: my blog is not treated like paperwork. ;) ↩︎