Spahlman strikes back!

One of the things I have been noticing with increasing clarity over the past few years, every time “digital security” comes up, is that a growing part of the problem is not actually being solved by companies at all: it is simply being redistributed onto end users.

In practice, instead of building systems that are genuinely more secure without turning everyday life into a permanent bureaucratic procedure, a vast number of companies are offloading the operational cost of security directly onto the people who use their services.

The result is that keeping your accounts “alive” is slowly becoming a full-time job.

It is no longer just a matter of remembering a decent password, or avoiding the Nigerian attachment written in Comic Sans.

No.

It now means managing an entire personal authentication infrastructure: recovery systems, trusted devices, codes, PINs, apps, backups, emergency procedures, and pseudo-liturgical rituals that change from platform to platform, with no real standardization that normal human beings can actually understand.

For example: you go on vacation.

A theoretically simple scenario. You take your personal phone with you; perhaps you even want a detox from work, so you leave your company phone at home, along with your laptop, desktop, secondary tablet, and that old phone number used only for certain accounts, and for two weeks you live like an almost normal person.

Mistake.

Because when you come back, you may discover that your digital ecosystem has interpreted your absence as a form of heresy.

Your Apple account, for example, may decide to regard you with suspicion because you dared to turn off or ignore a device for too long. It does not necessarily lock you out, but it often begins that subtle psychological warfare made up of verifications, confirmations, codes, authentication requests, and recovery processes that seem designed by someone convinced that the average user is simultaneously an international criminal and an idiot.

And that is when the circus begins.

Password.

Then the password required to access the system that stores the passwords.

Then the PIN.

Then the second PIN.

Then the authentication app.

But careful: not always the same one.

Because some want Microsoft Authenticator, complete with a little number to type in as though you were defusing a bomb in 1997.

Others want Google Authenticator.

Still others may accept standard TOTP, perhaps, but only if Mercury is in retrograde and your browser has not decided to reinterpret the protocol according to its own local religion.
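The irony is that "standard TOTP" really is a small, fully specified algorithm (RFC 6238): an HMAC over a 30-second time counter, truncated to a few decimal digits. A minimal sketch, using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    then dynamic truncation (RFC 4226) to `digits` decimal digits."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // step)   # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                         # low nibble picks the 4-byte window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T=59
print(totp(b"12345678901234567890", for_time=59, digits=8))  # → 94287082
```

Every site that "accepts TOTP" is, in principle, running exactly this computation. The incompatibilities live entirely in the enrollment flows and the interfaces wrapped around it, not in the protocol itself.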

I, naïvely, had even convinced myself that a YubiKey might represent some kind of elegant solution: a physical key, a standard, something rational.

Naivety.

Because in the wonderful world of contemporary security, the exact same technology goes by different names on different websites, browsers, and systems, and is often implemented in ways so creatively inconsistent that a standard becomes a semantic lottery.

Every platform seems to have its own esoteric dialect of the word “security.”

And naturally, not every site accepts a normal password manager, because that would be far too simple.

Some websites have issues with password copy-and-paste, as though disabling a basic function actually increased security instead of merely increasing the amount of profanity.

Others insist on sending requests to “trusted devices” that, at that exact moment, may be 800 kilometers away, inside a backpack, powered off, or locked in a drawer.

For mysterious “compatibility reasons,” of course.

Then there are those like Revolut, which occasionally seem to wake up one morning and decide that, for no reason remotely comprehensible to the user, you must once again prove that you are yourself.

Not because anything happened.

Not because there was an attack.

Simply because yes.

Because some algorithm had a mystical vision.

And so there you are again, photographing documents, taking biometric selfies, confirming emails, SMS messages, codes, push notifications, and, presumably soon enough, blood tests.

The number of authentication factors, meanwhile, keeps growing.

At first it was a password.

Then two-factor authentication. Then three.

They are beginning to resemble razor blades: every product generation adds one more.

At this point, we are well on our way to a system in which, just to check your account balance, you will need:

a password, a PIN, an OTP, an email confirmation, facial recognition, a fingerprint, a YubiKey, a notarized certificate, a feudal oath, and perhaps an eyewitness.

Soon I will probably have to start breeding carrier pigeons to support the new Pigeon Authentication Factor, in which a certified dove will fly to my house carrying a code written on encrypted parchment.

And the truly grotesque point is not that security is useless.

Security matters.

It absolutely matters.

The problem begins when complexity is transferred almost entirely onto the user, turning protection into a form of continuous maintenance that is stressful, opaque, and often arbitrary.

Because at that point, you are no longer building secure systems.

You are simply externalizing the organizational cost of your own design failures.

And, as so often happens, the one working for free to keep the whole system standing… is the customer.


This trend is unfolding on a global scale, and rather effectively too.

By now, whenever we open something on a phone, we are never entirely certain that it will simply open and function normally.

Far more often, we are asked to verify our account.

And this does not really change even if you use password managers.

The most common browsers have their own credential menus, each trying to save your passwords itself, which frequently makes manual entry absurdly difficult, especially on mobile devices.

Meanwhile, mobile keyboards increasingly tend to distrust your password manager, treating it as something potentially suspicious, almost as though it were a possible keylogger.

As if that were not enough, websites themselves are not always particularly friendly toward password managers either.

And, as though that still were not sufficient, passwords must follow ever-changing criteria from site to site, meaning that your password manager is often obstructed precisely in what should be its most logical function: generating secure, practical credentials.
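And that "most logical function" is trivial, which makes the obstruction all the more absurd. A minimal sketch of what any password manager's generator does under the hood (the character classes and symbol set here are illustrative, not any particular site's policy):

```python
import secrets
import string

SYMBOLS = "!@#$%^&*"

def generate_password(length=16):
    """Draw characters from a pool with a CSPRNG, retrying until every
    required character class (upper, lower, digit, symbol) is present."""
    pool = string.ascii_letters + string.digits + SYMBOLS
    while True:
        pw = "".join(secrets.choice(pool) for _ in range(length))
        if (any(c.isupper() for c in pw)
                and any(c.islower() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in SYMBOLS for c in pw)):
            return pw

print(generate_password())
```

The hard part was never the generation; it is that each site imposes a slightly different pool, length limit, or forbidden-symbol list, so even this simple loop has to be re-parameterized per account.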

In the end, the result risks becoming deeply paradoxical.

You may find yourself with a nice little notebook containing all your passwords.

Written on paper.

Is all of this what security looks like?

Absolutely not.

Because the real problem is that every company, in the name of “innovation,” increasingly tends to invent its own private method of doing “security.”

And at that point, the user can do little more than find some way—somewhere—to keep track of all those strange requirements, all different, all often incompatible with one another.

Each company will insist that its own method is better.

But from the user’s perspective, precisely because they are all different, the final result is simply chaos.


Needless to say, if cybersecurity becomes a business, and that business consists of selling compliance instead of solving problems, then the problem itself can only explode.

Because at that point, the real issue is no longer making systems genuinely secure, but constructing an entire economy based on certifying that formally correct procedures are being followed.

Companies will inevitably begin, first and foremost, by forcing everyone internally to obtain useless certifications—professional, institutional, or global.

Today TISAX is fashionable. Once it was ISO 27001. Naturally, there have been many others, and there will be many more.

But in the end, what these certifications really attest to is adherence to so-called good practices, or "best practices."

Whether those best practices are actually effective is another matter entirely.

In fact, quite often they are not.

Security incidents continue.

Data leaks continue.

Compromises continue.

Breaches continue.

And this alone should strongly suggest that there is a rather substantial difference between formally following a checklist and actually making systems secure.

And so the true corporate question inevitably becomes this:

“We have no idea how to actually secure our systems, security is expensive, and now there are even penalties for data leaks, and we do not want trouble. What do we do?”

We spread the problem onto the users.

Or, to preserve the original Italian flavor more precisely, we “spalmare” it onto them—because spalmare in Italian literally means to spread something around, like butter over bread, smearing or distributing it across a surface.

And so enters Spahlman, the superhero whose special power is precisely this: spreading the problem across ordinary people, smearing operational burden onto the public, while moving at the speed of a liability waiver.
