AI for Sci-Fi Writers
These days, the internet feels like an archipelago of holy wars, so much so that the very first thing you do, when you interact with someone, is try to figure out which particular crusade they are enlisted in, just so you can avoid the “triggering” moment – that precise instant when the fellow loses his grip and the holy war slips out. At which point you get the inevitable: “I just couldn’t help myself, sorry.”
One of the most fashionable holy wars these days is the one against Artificial Intelligence. As a hobbyist sci‑fi author, if you dare to write an article about how to use AI, you know perfectly well you are about to call down a holy war on your own head. You can expect the whole repertory: people metaphorically strapping on explosive belts, others clanking into view in full armour and charging into battle, and the entire stage set of a crusade – or a jihad, if you prefer the other liturgical register.
So, let’s change the tone: as usual, I cheerfully mock all these holy wars, and if today I feel like writing about this, then this is what I’m going to write about. If you don’t like it, you can always go sit on it.
The rubber duck for writers.
There are two main ways in which a writer – and by that I mean the person who actually has the idea and runs the full movie in their own head; I repeat, the writer – can use AI to their own advantage. To their advantage means saving both money and time. The two main uses are: rubber ducking, and then editing and translation. Strictly speaking, the rubber‑duck role is embedded inside the editing process, and on that there is quite a lot to say. Because AI can help you a great deal.
You can use AI as a rubber duck at precisely the stage where – and if you write sci‑fi you absolutely need this – you are trying to make things feel realistic. Reality does whatever it damn well pleases, but science fiction, poor thing, is obliged to at least look plausible. So you end up asking questions like: could you give me the most plausible formula, according to current climate models, for what would happen if some mutated form of vegetation lowered the surface temperature of Sicily to a steady seven degrees Celsius? Obviously, this is where you need to keep your guard up. If you have any mathematical background at all, you are going to want to ask for the source, and then look very carefully at what the formula actually means.
And “what it means” here is something like this: one of the early versions of ChatGPT gave me a formula that did not even contain the surface area at low temperature. Which is obviously impossible, because it implies that chilling an island the size of Sicily and chilling a coffee saucer would have the same effect on the climate. So yes, you still have to scrutinise the result, and you still need to have at least a thumbnail grasp of the subject. I am not a climatologist, but I would still expect a question like that to produce an answer in which surface area appears somewhere; otherwise you might as well stop putting anything in the fridge. Once you have a credible formula, you can get at the underlying “truth”: if you drop the temperature that way, you end up with something like half a metre of rain falling on the island every single night, on average. About 500 mm, in other words. Every bloody night.
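This kind of sanity-checking is mostly plain unit arithmetic. As a minimal sketch (Sicily's approximate surface area is the only real-world figure here; the 500 mm value comes from the novel's premise, not from any climate model), you can at least check that the numbers have sensible dimensions:

```python
# Back-of-the-envelope check: what does "about 500 mm of rain per night"
# over an island the size of Sicily actually amount to in water volume?
# This is the kind of dimensional sanity check that catches formulas
# in which the surface area has mysteriously vanished.

SICILY_AREA_KM2 = 25_700     # approximate surface area of Sicily
RAIN_DEPTH_M = 0.5           # ~500 mm per night, the premise to check

area_m2 = SICILY_AREA_KM2 * 1_000_000   # km^2 -> m^2
volume_m3 = area_m2 * RAIN_DEPTH_M      # water volume per night, in m^3
volume_km3 = volume_m3 / 1e9            # m^3 -> km^3

print(f"{volume_km3:.2f} km^3 of water per night")
```

That comes out to roughly thirteen cubic kilometres of water dumped on the island every night, which is exactly the sort of number that tells you how big a disaster you have written yourself into.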
At that point you have conjured up a fine disaster: to keep that much water from simply scouring the soil off the rock, you need a two‑hundred‑metre‑tall rain forest. And here is where world‑building very clearly benefits from using AI as your rubber duck. You sketch the idea, plug in the numbers, extract plausible equations, and then you draw your world.
Editing.
Another way you can profit from this tool is by using it as an editor. Over time, because you write and reread the same text so often, you stop noticing your own mistakes. You therefore need an editor that will catch your inconsistencies, your grammatical misfires, the sentences that clang in the ear, the ones that are hard to understand, and so on.
And this is where you run head‑first into AI’s first, dreadful problem: Artificial Puritanism. The companies that build these systems are basically terrified of the professional prudes who shriek “the children! won’t somebody think of the children?”.
The whole purpose of this charade is to make it look as though everyone on earth has a moral duty to look after the little darlings – everyone except their mothers, who are apparently free to abandon them in front of a screen. In practice, it seems that while a mother can quite happily leave her offspring alone with a tablet or a computer connected to the internet, the rest of the planet is expected to agonise over the fate of this brood, maintaining constant vigilance over them and the terrible dangers they face in their parents’ absence.
This puritanism has two dreadful consequences. The first is that, by purging every trace of violence and sex from a text, science fiction loses a great deal of its charm. If we go back to the classics and think of Star Trek TOS, everyone will tell you that the beauty of it lies in the message that different races can coexist and share this marvellous rush toward knowledge of the cosmos. All very fine; but if that is all we are allowed to keep, the correct title would be “Winnie the Pooh and His Friends in Space.”
For the time it was made, it is blindingly obvious that this is a show whose captain personally takes it upon himself to demonstrate that there is no such thing as venereal disease anywhere in the galaxy, set in a universe in which, if women wear a skirt at all, it is by definition a miniskirt. This is the series that gives American television one of its first televised interracial kisses – Kirk and Uhura – and that takes the trouble to sketch out the sexual customs of various species: it starts with the Vulcans, but by now we all know the Klingons are just the right kind of bizarre as well. Strip that away, and what you have left really is nothing more than Winnie the Pooh and his friends in space.
If stripping the sex out of Star Trek – and for its time Star Trek was just the right degree of embarrassing – was already a form of sterilisation, we are no better off when it comes to violence. It is vanishingly rare to see any actual blood in Star Trek, and even if we switch franchises and take, say, Star Wars, once you remove the violence all you are left with is something like: “politics in the galaxy, a round‑table for armchair philosophers.”
Even on the sexual front there are a few entertaining puzzles: for instance, it is far from clear what Jabba the Hutt is supposed to want from Princess Leia, given that not only do they not belong to the same species, but he appears to be a cold‑blooded amphibian while she is a warm‑blooded mammal. In all likelihood they would find each other physically repellent, despite a certain amount of fetishism clustered around the princess. And I will stop there. What we are looking at is that sort of “embedded porn” that is woven into American (puritan) culture, yet somehow never fails to make an appearance.
And let me point out that I haven’t even mentioned another classic, Doctor Who, and its spin‑offs like Torchwood, which could quite reasonably be translated as anything from “Fantastic Aliens and How to Sleep With Them” to “Homo sapiens and Sex: A Treatise on Permutations.”
If you strip out the sex, what you’re left with of Doctor Who is basically just “weird things happening in and around London.”
The problem with Artificial Puritanism, for that matter, is much more serious than you might think. If practically the entire body of world literature had been run past ChatGPT for review, it would have been reduced to a catastrophic mountain of printed manure not worth the paper it was smeared on. I understand that, for the sake of a safety policy, you can’t say this or that, but when you step outside sci‑fi and simply try to publish your own autobiography, and your adolescence – all right, mine was at the very least unconventional – does not make it through the filters, something is clearly off. Someone really ought to explain to ChatGPT that yes, in the 1980s underage Italian girls did give blowjobs, and quite competently at that.
The effects of Artificial Puritanism, on the other hand, can also run in the opposite direction. When I first had the idea of writing the Edelweiss trilogy, for instance, the mere act of describing it out loud was enough for people to inform me that the concept was obviously too bizarre to have been produced by a human mind and that, therefore, there “had to be” some AI pawprints on it. In reality, the idea came to me in the middle of a wet dream, in that early‑morning interval also known as the “wooden‑dick phase”. I genuinely have no idea which AIs these people are actually using, that they can declare “this is clearly AI”.
Back then, AI was still a novelty, and I came up with an idea of my own: I would prove that the story had not been invented by an AI by loading it with an unseemly quantity of pornography – not that I was personally opposed to the prospect, to be clear – and with a level of violence that was unusual even for me, to the point that I ended up modelling one of the massacres on a sadistic atrocity carried out by Dirlewanger’s SS in what is now the Czech Republic. And believe me, that man was sick. Very, very sick.
In this way, however, I did manage to dodge the accusation. These days, when people ask me whether I used AI, my answer is disarmingly simple: no commercially viable AI would ever let that much pornography and that much sadistic violence through its filters.
Grok, of course, is the glaring exception to this principle. It has no particular problem with any flavour of pornography, but as an editor it is worth very little, with a dreadful style and a chronic tendency to summarise; and summarising a draft is useless, what you need is to expand and refine it. Fortunately, it does have some limits when it comes to violence, and when I put into the book a scene – something that had, sadly, actually happened in a Bohemian glassworks – that was simply too raw, even Grok came back with: “sorry, I can’t handle this kind of scene”. Unfortunately, those things really did happen, and this – if anyone still needed a reminder – shows exactly how dangerous Artificial Puritanism can be.
If some historian were to use an AI to tidy up the language in a research paper, there would be a very real risk of quietly rewriting history: one of the worst Nazi atrocities could simply be scrubbed out of the official record.
Anyway, yes. With the exception of Grok, you can quite comfortably prove that your book is free of the terrible sin of having used AI, and thus soothe the holy warriors, simply by pointing out that there is far too much porn and far too much sadistic violence in it for that to be remotely plausible. It is not exactly the sort of argument one brings up over an elegant dinner, but then, I do not attend elegant dinners. Which, of course, raises the question of what “elegant” is even supposed to mean – something along the lines of ladies giving blowjobs using the silverware.
So this new bigotry against AI has no difficulty at all with you describing SS men throwing women into a glass furnace and watching them burn alive while they laugh, but heaven forbid you should have written it with the help of an AI. De gustibus non disputandum est.
Translation.
Another problem is translation. Typically, you write in your native language – in my case, Italian. You may be fluent in English, but whether “fluent” is enough to write a novel is a different question entirely, and the answer is “NO”. And if you move to another country, as I did when I moved to Germany, and you want to publish there, then at least for the first few years you will need some kind of electronic assistance for translation.
When you bring AI into the picture, you really have two possible paths. The first is the one that comes naturally to programmers: “describe the procedure the AI must follow”. The second is: “describe precisely the result you want to obtain”. In the case of translation, the first method is almost impossibly complex to manage if you care about quality, because it has to work across so many different domains that you would end up writing a prompt longer than the book you are trying to translate.
Take my novel Weird Robot (Altri Robot in the original): the protagonist initially holds the rank of maresciallo. In Italy that is a non‑commissioned officer. But if you translate it into German, the AI will very easily fall into a “false friend” and promote him into something like a brigade general at the very least. You then get the wonderfully alien effect of a brigadier general taking orders from a lieutenant. Either you intervene case by case, in every specific domain where false friends crop up, or you are going to have to abandon the “describe the procedure” approach and adopt a different technique. Translation is a mean, ugly beast to capture in logical terms.
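That said, the case-by-case interventions can at least be partly automated after the fact. As a minimal sketch (the glossary entries, the German renderings, and the function name here are my own illustrative choices, not part of any real tool), you can scan the machine-translated output for terms the AI habitually gets wrong and flag them for manual review:

```python
# A tiny post-translation check for known "false friends": terms the AI
# tends to mistranslate. The glossary is a hypothetical example; in practice
# you would grow it as you discover each false friend in your own drafts.

FALSE_FRIENDS = {
    # Italian source term -> (suspicious German rendering, preferred rendering)
    # "Marschall" is the classic trap: "maresciallo" is an NCO, not a marshal.
    "maresciallo": ("Marschall", "Stabsfeldwebel"),
}

def flag_false_friends(translated_text: str) -> list[str]:
    """Return a warning for every suspicious rendering found in the text."""
    warnings = []
    lowered = translated_text.lower()
    for source, (wrong, preferred) in FALSE_FRIENDS.items():
        if wrong.lower() in lowered:
            warnings.append(
                f"'{wrong}' found: check whether the original said "
                f"'{source}' and should read '{preferred}' instead."
            )
    return warnings

print(flag_false_friends("Der Marschall meldete sich beim Leutnant."))
```

This does not fix anything on its own; it merely points your human eyes at the places where a brigadier general may be about to take orders from a lieutenant.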
So what is the best – though still far from perfect – approach to translation, if you want to use AI as a force multiplier? What you are going to do, instead, is describe the result you want. You can start by feeding it a PDF dictionary for the language in question. Then you add all the PDFs of the science‑fiction authors you like – in my case there are seventeen – already translated into the target language. After that, knowing the country you are aiming at, you add some science fiction you enjoy that is also typical of that specific national scene. In Germany, for instance, they are still crazy about Perry Rhodan, so you might throw in the German translation of the entire Third Power cycle (Die Dritte Macht). For Italy, I suppose you would have to lean on Evangelisti, though I honestly make no promises about the outcome; if you know other authors, so much the better.
Once you have explained to the AI that the result should resemble those authors – in my case the classics, Asimov, Heinlein, Herbert, Philip K. Dick, all the way down to so‑called “minor” figures like Colin Kapp or John Varley – you have given it an embarrassingly detailed description of what you want it to produce. Is that enough? No, it still won’t be enough. The AI will still make mistakes. Which means that you will still have to know the language well enough – not well enough to write a book in it, but well enough to read one.
This also explains why I waited YEARS before I even started translating my books into other languages. To begin with, I only work with languages I actually read. I may muddle through a bit of French, for example, but I do not take on reading an entire novel in it, so I have always steered clear. I approached the whole AI‑aided translation problem using English, which I could reread and correct whenever hallucinations crept in or the meaning of the text quietly shifted – the sort of error that forces you to supervise the ENTIRE manuscript – and only quite recently did I “dare” to try my hand at German as well.
Ultimately, my experience as an amateur, self‑publishing SF author comes down to this: AI can save you BOTH the money you would otherwise spend on a professional editor AND the money you would spend on a professional translator. The catch is that it is not a fire‑and‑forget solution – this is not a matter of “here, translate this book into Tibetan” and you’re done. You will have to know the language well enough to reread the entire book, purge the false friends from the translation, and catch the moments when, simply because of the sheer volume of text, the AI starts to hallucinate and quietly adds things to your book that were never there in the first place.
The results can be quite comic at times, but you need to read carefully, taking in the whole gestalt, to realise that no, you never wrote any of that. So yes, you will save money – but the price you pay is measured in your own time, because it will be neither easy nor instantaneous.