That Silly Illusion of Ownership

Google’s decision to impose a KYC framework on developers is making a certain impression online. In practical terms, this means that Google is progressively requiring that anyone distributing Android apps — not only on the Google Play Store, but increasingly outside it as well — must be verifiably identified, with a legal name, address, real contact details, and in many cases official documents or corporate data.
For organizations, this may also include a D-U-N-S number — a formal business identifier. What has struck many people is not simply the existence of controls on the Play Store itself — which has required verification for quite some time already — but the fact that this logic is being extended more and more toward the very concept of software distribution on certified Android.
The timeline, in fact, is this: Google began early access in late 2025, opened broader registration in March 2026, and from September 2026 will begin actual enforcement in its first selected countries (Brazil, Singapore, Indonesia, and Thailand), with global expansion expected from 2027 onward.
In practice, the idea is that, progressively, in order to install apps on certified Android devices, those apps will need to be tied to a verified developer.
And so, right on cue, the usual campaign began from performative millennial idiots and counterfeit Gen Z activists who, under the cry of “your phone isn’t really yours anymore!”, launched into the same useless initiatives this kind of theatrical activism reliably produces: signature drives, public appeals, demands for European intervention, petitions, serial outrage, and that entire social-network political choreography whose real purpose often seems less about changing anything and more about being seen, noticed, perceived as still relevant within the debate.
Obviously, the most immediate response is that they do not fully believe what they are saying themselves.
Because the first point — the most banal, and the hardest to ignore — is that this is essentially the exact same logic Apple has always applied.
Apple, in fact, built its ecosystem precisely on the opposite principle to the idea of absolute openness: App Store control, strict developer identification, centralized approval, revocation power, regulated distribution, and a model in which the device is materially yours, but the system remains heavily mediated by whoever controls the platform.
And yet, no comparable mobilization comes to mind.
There were no great popular petitions, no waves of demands for the European Union to sanction Apple simply because Apple had always required precisely that same level of identification, control, and gatekeeping that is now suddenly being described as an intolerable drift the moment Google adopts a similar trajectory.
This inevitably raises a rather simple question: are we really looking at a principled battle over digital freedom, or merely the usual selective outrage — where a measure becomes scandalous only when it comes from a different actor than the one people had already grown comfortably accustomed to?
Even the slogan claiming that your phone “would no longer really be yours” is ridiculous for one very simple reason: it arrives decades too late.
It is a belated protest, and precisely for that reason it sounds more like posturing than genuine awareness.
For years — indeed, for decades — proprietary operating systems have not been “yours” in any real sense, except for the extremely limited one of physically owning the hardware they run on. And in truth, this principle does not apply only to smartphones, nor only to Android or Apple: it normally applies to virtually all modern proprietary software.
The fundamental issue is legal, not technological.
The overwhelming majority of users have never truly “bought” the software they use. Not Windows, not macOS, not iOS, not certified Android, and not even a large portion of traditional commercial software.
That software was never sold in the classical sense of the word.
It was licensed.
And this distinction — which many either ignore or pretend to ignore — changes everything.
Licensed means precisely that the product does not become yours. You do not acquire full ownership. You do not truly possess the code, nor the substantive rights over it. The owner remains Microsoft, Apple, Google, Adobe, or whoever else. You, in exchange for money — whether a one-time payment or a subscription — simply obtain permission to use a copy, within conditions unilaterally established by whoever holds ownership.
In other words: you are paying for use, not possession.
This means that limitations, revocations, forced updates, restrictions, contractual changes, and ecosystem control are not some sudden revolution of 2026.
They have been the structural normality of proprietary software for more than forty years.
In fact, this model was already consolidating in the 1980s, when the first major OEM licenses emerged and software definitively ceased to be perceived as a mere accessory product of hardware, transforming instead into an autonomous business — formalized, contractualized, and legally fortified.
From that moment onward, the paradigm changed: you are not truly buying software, you are buying a conditional right to use it.
So to feel suddenly betrayed today, as though until yesterday your phone had been a space of full individual sovereignty, requires a certain degree of historical amnesia.
The point is not that “it is no longer yours.”
The point, rather, is that in most cases it never truly was.
I do not recall ever seeing you scandalized before. If you are only discovering this now, that merely suggests you are a pile of idiots.
So let us try, instead, to talk about something more serious. Why make this move, given that billions of frauds and idiots had convinced themselves that Google and Android “belonged to them,” and that Android itself long benefited from precisely this perception of greater openness?
The problem is that attacks belonging to the family of supply chain attacks are increasing — both in frequency and in impact.
What are they?
In short, they are attacks that, rather than directly striking the final target — the user — strike the chain of distribution, development, or updating of the software that user relies on.
In practice, instead of trying to breach millions of phones one by one, attackers target whoever produces, updates, signs, distributes, or maintains the software that will ultimately end up on those devices.
It is a far more efficient logic: instead of entering through the front door of every individual victim, you poison the water supply.
This can mean compromising a developer account, infiltrating third-party libraries, inserting malicious code into apparently legitimate updates, abusing marketplaces, or creating fictitious companies and identities that publish software which appears formally compliant but is in fact designed to distribute malware, spyware, or fraud on a massive scale.
The central point is that the software, on the surface, appears trustworthy because it comes from a source the user considers legitimate.
And that is precisely what makes these attacks so dangerous: they do not merely compromise a single machine; they attack trust in the very infrastructure that distributes software.
And this does not concern only the “final product” — that is, the individual app a user downloads.
The point is that modern software development now relies almost entirely on the use of libraries, frameworks, packages, and dependencies: in other words, large blocks of code written by others, reused by programmers as foundational building bricks for constructing more complex applications without having to reinvent everything from scratch each time.
In practice, very few developers truly write every part of their software from the ground up. Nearly all of them assemble existing components: modules for authentication, image management, payments, interfaces, networking, cryptography, data analysis, advertising, and so on.
This makes development faster, cheaper, and more scalable.
But it also creates an enormous problem.
Because if someone somehow manages to contaminate one of these fundamental building blocks — a popular library, a widely used framework, a dependency distributed through repositories considered trustworthy — then they are no longer striking a single app.
They are potentially contaminating every application that uses that component.
And since the same library may be present across thousands, or even millions, of different projects, the result can become enormous: a single upstream “injection” can propagate downstream across vast, often uncontrollable numbers of platforms, companies, services, and end users.
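To give a concrete sense of scale, here is what a perfectly ordinary dependency list looks like in the Gradle build script of an Android app (Kotlin DSL). The libraries named are real and extremely common, but the app and the versions are purely illustrative: the point is that every single line silently drags in its own transitive dependencies, written and published by yet other people.

```kotlin
// build.gradle.kts of a hypothetical, perfectly ordinary Android app module.
// Versions are illustrative; the shape of the problem is not.
dependencies {
    // Direct dependencies: the bricks the developer consciously chose.
    implementation("androidx.appcompat:appcompat:1.7.0")       // UI scaffolding
    implementation("com.squareup.retrofit2:retrofit:2.11.0")   // networking
    implementation("com.squareup.okhttp3:okhttp:4.12.0")       // HTTP client
    implementation("com.google.code.gson:gson:2.11.0")         // JSON parsing
    implementation("io.coil-kt:coil:2.7.0")                     // image loading

    // Each of these pulls in its own transitive dependencies (okio, the Kotlin
    // stdlib, coroutines, lifecycle components, annotations, and so on), so a
    // handful of lines like these can easily resolve to dozens of artifacts.
    // Poison any one of them upstream, and every app that resolves it is hit.
}
```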
This is the true core of the modern supply chain attack: not infecting one target at a time, but infiltrating one of the bricks everyone uses, transforming technical trust itself into a vector of mass distribution.
In other words, the goal is no longer to poison a single meal.
To stretch the metaphor: it is to contaminate the industrial ingredient that will end up in millions of different meals.
Are there solutions?
There are mitigations, certainly. But I am not at all convinced that the path chosen by Google and Apple is automatically the worst one — at least if one examines it from the purely industrial perspective of security and risk management.
Because the reasoning of some supporters of this approach, however brutal or unpleasant it may sound, is rather simple.
How badly do we really need millions of small, unknown programmers?
Does the functioning of digital society truly depend on yet another useless, marginal, or entirely superfluous app? Do you genuinely need the one that simulates fart noises, changes your flashlight color, or replicates for the seven-hundredth time a function already performed by larger, more verified, and more controlled software?
The argument, stated bluntly, is that the romantic ideal of a totally open market where anyone can produce anything may no longer coincide with the idea of an efficient, stable, and defensible ecosystem.
Ultimately, proponents of this theory argue, the overwhelming majority of the time people spend on their phones — just like the overwhelming majority of professional use cases — is already concentrated today around a relatively narrow number of applications, services, and platforms.
Messaging. Maps. Browsers. Email. Social networks. Payments. Office suites. Streaming. Enterprise management.
Always the same categories. Often the same names.
From this point of view, the software market may already contain more than enough official, structured, identifiable companies — entities with more monitorable supply chains, compliance processes, audits, legal accountability, and response capabilities vastly superior to those of the improvised small developer.
Within this vision, then, the objective would no longer be to maximize absolute publishing freedom, but to reduce the overall risk surface by privileging larger, more verifiable, and theoretically more controllable operators.
In essence: less chaos, less fragmentation, fewer opaque actors, fewer points of entry.
More concentration — but also, according to them, more security.
In such a scenario, the software running on phones would, in practice, end up being limited to a handful of large corporations.
The underlying idea would be simple: concentrate the market in the hands of large, official, identifiable, economically solid entities — theoretically more controllable, and presumably equipped with internal processes sufficient to prevent crude infiltrations.
According to this view, if software is produced by companies large enough to be subjected to controls, audits, legal accountability, and structured review, then risk should decrease. There would be no “North Korean spies,” nor some opaque developer slipping malware into an apparently harmless app, and even if such attempts existed, these companies would at least possess the technical, organizational, and financial tools to conduct code reviews, internal security, supply-chain controls, and verification before software ever reaches the public.
Put that way, it almost sounds reassuring.
The problem is that this line of reasoning begins to creak the moment one examines it a little more closely.
For example: does “large corporation” really automatically mean “good”?
Not necessarily.
Personally, I do not honestly know whether I would use an email client written by Palantir — or one of its spin-offs, or any company too deeply intertwined with intelligence structures, surveillance apparatuses, or opaque geopolitical interests — with complete peace of mind.
Because at that point, the problem is no longer merely “could there be malware?”, but also “whose interests does the entity controlling the software I use every day actually serve?”
And so the first dogma — large company equals security equals good — already begins to wobble.
Being large does not necessarily mean being neutral, benevolent, or worthy of trust. At most, it means being more visible. But visible is not synonymous with trustworthy.
The second argument also deserves a rather brutal reconsideration: the idea that large corporations truly do everything in-house simply because “they can afford to.”
The reality is that many large corporations outsource.
They subcontract code. They externalize development. They delegate entire modules to contractors, third-party firms, remote teams, offshore consultancies, opaque supply chains, and distributed production networks.
And the moment code is fragmented, distributed, and entrusted to a long chain of actors working under downward cost pressure, real control inevitably tends to diminish.
Because on paper, you may be an impeccable multinational corporation.
But if parts of your software end up in the hands of subjects selected primarily on cost criteria — perhaps through intermediaries who themselves subcontract yet again — then the idea of a perfectly secured supply chain becomes far more theoretical than real.
In other words: corporate scale does not automatically eliminate the problem of trust.
It relocates it.
And often, it merely relocates it upward — into larger, more complex, more expensive structures that are also much harder to genuinely inspect.
Even the argument that, if they wanted to, large corporations at least COULD audit and review their software is rather misleading.
Because in reality, many companies do not develop inside some sterile, fully autonomous laboratory where every single tool is internally built, verified, and controlled.
They develop using an enormous quantity of external tools.
CI/CD services. Automated pipelines. Frameworks. Libraries. Scanners. Security tools. Plugins. Automation marketplaces.
One enormous — almost unavoidable — example is GitHub.
GitHub is not simply a place where you upload code: it is often the entire operational infrastructure within which that code is tested, validated, compiled, distributed, and approved.
Through GitHub Actions, Pull Request automation, dependency checks, code scanning, and automated workflows, it is possible to build extraordinarily sophisticated systems that, at least in theory, are supposed to control code quality, verify vulnerabilities, prevent regressions, and even detect suspicious behavior before something is merged or distributed.
By strict logic, all of this should increase security.
The problem is that these tools are themselves part of the supply chain.
And if the tool you use to verify security is compromised, then the controller itself becomes a vector of infection.
That is precisely why certain recent attacks have frightened the industry so deeply.
One of the most notable cases was the compromise of the supply chain around GitHub Actions itself, where attackers managed to strike components used in automated workflows, turning verification and automation tools into potential vehicles for leaking secrets and tokens, or for injecting malicious code.
The most widely cited example is the 2025 compromise of tj-actions/changed-files, in which the version tags of a hugely popular GitHub Action were silently re-pointed at a malicious commit, so that the Action began leaking CI secrets into the build logs of the thousands of repositories that referenced it. In practice, companies that believed they were using automation to improve control and security found that a trusted component had turned the pipeline itself into a point of exposure.
And that is exactly the point.
When the software that controls your software is compromised, the problem becomes vastly more severe.
Because now you are no longer merely infecting a final application.
You are contaminating the very process through which software is declared “secure.”
In practice: every time a pipeline checked code quality through that compromised tool, the act of verification itself became a potential attack surface.
A disaster, precisely.
Because at that point, it is no longer only the product that is struck.
It is trust in the entire validation mechanism itself that is hit.
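As a purely illustrative aside, the standard countermeasure after that incident is worth spelling out: stop trusting mutable names (a branch, a version tag that can be silently re-pointed) and pin dependencies to immutable content instead. In GitHub workflows that means referencing an Action by its full commit SHA rather than by tag; the little Kotlin sketch below expresses the same idea for any downloaded artifact, and the file name and pinned hash are of course placeholders.

```kotlin
// Minimal sketch of content pinning: refuse to use a downloaded artifact unless
// its SHA-256 digest matches the hash that was pinned when the code was reviewed.
// A version tag can be moved to point at malicious code; a content hash cannot.
import java.io.File
import java.security.MessageDigest

fun sha256Of(file: File): String {
    val digest = MessageDigest.getInstance("SHA-256")
    file.inputStream().use { input ->
        val buffer = ByteArray(8192)
        while (true) {
            val read = input.read(buffer)
            if (read < 0) break
            digest.update(buffer, 0, read)
        }
    }
    return digest.digest().joinToString("") { "%02x".format(it) }
}

fun verifyPinnedArtifact(artifact: File, pinnedSha256: String): Boolean {
    val actual = sha256Of(artifact)
    val ok = actual.equals(pinnedSha256, ignoreCase = true)
    if (!ok) {
        // The artifact changed since it was last reviewed: treat it as hostile.
        System.err.println("${artifact.name}: digest $actual does not match the pinned hash")
    }
    return ok
}
```

None of this is exotic: Gradle ships a comparable dependency-verification mechanism, and GitHub's own hardening guidance recommends pinning third-party Actions to a full commit SHA for exactly this reason. It simply relocates the trust problem, once again, to the moment you compute and record the pin.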
So no: the idea that if software — meaning your apps — were produced only by good, large, immensely powerful, and therefore automatically ultra-secure market pachyderms, everything would somehow become genuinely safer, does not make much sense to me.
It is an almost childish oversimplification.
Because corporate scale, by itself, does not eliminate the supply chain problem, does not eliminate dependence on external tools, does not eliminate outsourcing, compromises, mistakes, distorted incentives, or infiltration.
At most, it changes the scale of the problem — and often increases its impact when something goes wrong.
A pachyderm may have more money, more lawyers, more compliance, and more auditing.
But it may also have a longer, more opaque, more bureaucratic, more fragmented supply chain and therefore, in certain respects, one that is even harder to truly control.
That is why the equation “big = safe” does not automatically hold.
And so Android seeks mitigation.
Not necessarily resolution, because a perfect solution probably does not exist — but mitigation, certainly: reducing attack surface, increasing traceability, making abuse more costly, and above all attempting to contain the structural chaos that derives from an ecosystem historically far more open.
The point is that, when one looks at the real market, Apple — whether people like it or not — represents the only major industrial example of a heavily controlled mobile ecosystem that, at least in terms of software security through its official distribution channel, has shown results perceived as relatively more stable.
Not perfect. Not immune. Not morally superior.
But from the perspective of those thinking like a platform, sufficiently functional to constitute an observable model.
And so the trajectory is fairly obvious.
If the competing system appears, at least superficially, more effective at containing certain categories of risk, then it is only natural that Google would look in that direction.
Not because Apple has suddenly become some paradise of digital freedom.
But because, on the purely pragmatic level of mitigation, that is the already existing industrial model that appears to have produced at least partially better results than the chaos of far broader openness.
To this, one must add the problem of so-called “layer 8” — the human factor.
Because not every supply chain attack necessarily passes through sophisticated technical exploits. Sometimes, they pass simply through human beings under pressure.
Years ago, as became painfully clear in the now-famous case tied to the Node.js ecosystem and the event-stream library, the developer or maintainer of an extremely widespread tool can find themselves relentlessly bombarded with requests to fix bugs, improve features, resolve issues, answer tickets, endure complaints, pressure, and — inevitably — abuse.
Their software may have become a critical junction for thousands or millions of users, yet behind it there is often no large, structured corporation: there is one person, or very few people.
And when for months or years you are constantly hit with demands, urgencies, criticism, and expectations, a very simple and deeply human feeling can easily emerge: the sense that outside your door stands an enraged mob with torches and pitchforks, ready to lynch you if you do not fix everything immediately.
For those outside the industry, that image captures the point well.
You are no longer merely a developer.
You become the one who must keep the mob from breaking down the door.
To the point that many maintainers seriously begin to consider abandoning everything, walking away from the project, disappearing, or surrendering control simply to escape that pressure.
And that is precisely when the good Samaritan arrives.
The person who appears saying: “Hey, let me help. Here’s the patch you need.”
In that moment, they do not look like an invader.
They look like someone helping you keep the pitchforks at bay.
In the event-stream case, the maintainer ceded space to an apparently helpful collaborator who initially seemed to simply want to assist. Only later did it emerge that this trust had opened the door to the insertion of malicious code intended to compromise specific users.
And just like that, the software remains contaminated.
Not because someone necessarily smashed the system by force, but because they came through the front door — exploiting exhaustion, psychological pressure, and trust.
That is layer 8.
The point at which the supply chain ceases to be merely a technical matter and becomes something profoundly human.
Will it work?
No — in the sense that it is not a definitive solution, but a mitigation measure.
And that distinction is fundamental, because mitigation does not mean the problem is solved. It means, far more modestly, that one is trying to attenuate it — to reduce its frequency, impact, or attack surface.
In practical terms: you do not eliminate risk, you try to make it harder, more expensive, or less devastating.
Apple, in fact, is by no means invulnerable to supply chain assaults.
Quite the opposite.
One of the most famous cases was XcodeGhost, which emerged in 2015.
In that case, some Chinese developers downloaded compromised versions of Xcode — Apple’s official development environment — from unofficial sources, often because downloads from Apple’s own servers were slow or difficult to access locally.
The result was devastating precisely because it struck upstream.
Developers believed they were using the legitimate tool to create secure iOS apps, but in reality they were compiling software with a contaminated version of the IDE. This inserted malicious code into apparently normal applications, which were then published on the official App Store.
So the malware did not enter by bypassing Apple from the outside, as a trivial pirated APK might.
It entered through the development supply chain itself.
Perfectly “regular” apps, signed by apparently legitimate developers, ended up in the official store carrying embedded malicious components.
Among the affected apps were even highly prominent names in the Chinese market, and millions of users downloaded compromised software directly from what was considered the industry’s most tightly controlled ecosystem.
That is the point.
Even a rigidly centralized system like Apple’s can be struck if the attack occurs far enough upstream to contaminate trusted tools, processes, or components.
For this reason, even the Apple example does not prove that total control solves the problem.
If anything, it demonstrates that no supply chain is ever truly immune.
At best, some systems may react better, faster, or with lower average exposure.
But “more secure” does not mean “secure.”
It merely means “less vulnerable under certain conditions.”
As you can see, the problem is complicated.
It is not simply the usual black-and-white fantasy that performative millennial idiots and counterfeit Gen Z activists want to sell — reducing everything to social-media slogans, hysterical petitions, or ideological posturing.
We are not dealing with some childish fairy tale where on one side there is “freedom” and on the other “control.”
What we are facing instead is a far more complex, structural phenomenon — one that will in all likelihood continue to grow both in danger and in scale.
Because the more the phone becomes the center of digital life — banking, identity, communication, work, payments, documents, images, audio, authentication — the more it also becomes the ideal target.
And the more this process expands, the greater the risk that all those convenient things we do on that strange computer we still insist on calling a “phone” may become so exposed to espionage, manipulation, compromise, or surveillance that the psychological price of convenience becomes progressively less acceptable.
In other words: if the phone becomes too invasive, too fragile, or too compromised, sooner or later someone will seriously begin to wonder whether it is still worth concentrating everything inside it.
And for certain companies, that scenario is deeply frightening.
Because the real industrial nightmare is not simply selling a few fewer phones.
The true danger is that the very concept of the smartphone as a monolithic, all-encompassing, centralized, and enormously expensive object could begin to disintegrate.
That the phone might progressively return to being, above all, a connectivity node.
A hub.
An object that connects, but does not necessarily contain everything.
And then cameras and photographic systems could once again separate, becoming intelligent external modules connected when needed. Microphones and speakers could follow the same path, as has already partly happened with earbuds and wearable devices. The display itself could progressively migrate toward glasses, visors, or distributed interfaces.
At that point, the “phone” would not necessarily disappear.
But it might fragment.
It could become a distributed platform — less glamorous, less centralized, and perhaps far less suited to sustaining the premium-device model of the thousand-, two-thousand-, or three-thousand-euro smartphone as the absolute center of digital life.
And that, for giants like Apple or Google, represents an enormous strategic danger.
Because it would not merely mean changing product.
It would mean risking the loss of the current device’s economic and symbolic centrality.
Proposals, visions, and technologists imagining precisely these kinds of scenarios already exist: modular ecosystems, wearable-first architectures, distributed computing, post-smartphone paradigms.
For now, they often remain niches, experiments, or futurist visions.
But the very fact that they exist demonstrates that the current model is not inevitable.
And that is precisely why companies like Apple and Google have every interest in preventing trust in the phone from collapsing to the point where the market is pushed toward a genuine disintegration of the smartphone concept itself.
They will therefore do everything they can to keep the device sufficiently secure, sufficiently reliable, and sufficiently central to prevent that exodus.
One hopes.
Or maybe not?