I worked at FAANG companies in technical and leadership roles prior to the advent of modern LLMs, and have worked exclusively on self-coding agents ever since. Technically, I would consider myself an expert in agentic and distributed systems for the automation of human work.
My startup, Console One, built a vibe-coding tool before they were commonplace, and we are actively building an agent orchestration system (in the vein of Claude Desktop) using state-of-the-art techniques. Shortly after using these tools for the first time, I was forced to digest the notion that any advantage I had accrued through years of domain expertise in software development is evaporating, and that the future of any product Console One intends to produce is subject to the same damage. A single new AI advance in any pocket of the world will instantly ripple and replicate outward, leaving a trail of dead plans in its wake and taking market free energy from existing tools, from in-development ones, and from the human capital that produced them.
I am not a 'doomer' by default. Prior to this January's release of Claude Code and OpenClaw, I would have said that the automation of standard human 'office work' was likely, but that its time horizon was not to be taken on faith: unknown unknowns in the design space cannot be revealed in advance, and in the absence of demonstrated high-utility agency it was uncertain when or how the gap would be closed. But as of January 2026, it has been. The remaining borders around computer systems that box these agents out from continuing to make humans irrelevant, like modern system interfaces being optimized for visual-spatial interaction, will slowly melt away with the proliferation of new systems of integration.
Since these agent systems, like Claude Code/Desktop and OpenClaw, are open source, legible, and replicable, they can be used to write themselves. Though they may sometimes need a babysitter, they are capable of making themselves better and more autonomous, faster.
Software development as a profession will evaporate in one to five to fifteen years. A singularity has started.
I view this as a Schelling point, and my personal prediction on the time horizon for human programming and even hands-on AI oversight has shrunk dramatically.
So I am operating under the assumption that I will be no more advantaged in the new world economy than the median human, and that any savings I have now should be invested as a hedge against that worst-case outcome. In better-world scenarios, AI slows, I can still use my leverage to get a job or make a good product, and my investment decisions become less existentially important. Therefore, I consider it rational to invest primarily against becoming the estranged on the other side of the singularity's event horizon. The economic landscape at that point is entirely undetermined, so only a first-principles analysis of which beliefs carry over into the new world can be counted on to establish a survival strategy for the capital I have on hand.
Old modalities of analysis (like historical comparables) are of little utility for predicting a future where non-human actors are the primary economic drivers. So we need to interrogate which beliefs will likely remain valid, or stable, in such a world, and what would change in an actionable way, in order to figure out a reasonable strategy of protection.
Changes to the Economics of Software
Unlike the model makers, many previously stable stocks and industries are likely to grind, kicking and screaming, toward a slow death, as the dynamics that supported the exponential growth of many of the world's largest firms are no longer robust.
Many companies in the SaaS space depend on complex yet static, well-assembled information systems modelled for specific industry problems. The historic strategy was to become information-entangled with your users by automating and optimizing fixed business processes within an industry using software, and then to capitalize on fees throughout the lifetime of your mutual growth and co-dependency. This is increasingly unviable: optimized IT for specialized workflows and niches goes to zero in a world where only one agent needs to write a system, and every other agent can replicate it and integrate it into a custom business operational stack.
One may respond that much of the protective moat resides in housing critical legacy data and in the cost incumbent firms face when transitioning away from such systems (i.e. Oracle's embedding formula). But systemically, the constraints on data extraction and migration that used to guarantee retention via lock-in are much weaker. These companies will necessarily need to support agentic adapter APIs, which implicitly enable the very data extraction required for migration off their core platform. Agents then eliminate the remaining complexity of shifting client firms away from the product. Companies like Salesforce, Snowflake, and Atlassian will need to adapt their primary products fast.
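To make the lock-in point concrete: any paginated read endpoint a vendor exposes for agent automation is, by construction, also a bulk-export path. A minimal sketch, with an invented in-memory adapter standing in for a vendor's agent-facing API (all names here are hypothetical, not any real vendor's interface):

```python
class InMemoryAdapter:
    """Stand-in for a vendor's agent-facing adapter API (hypothetical)."""

    def __init__(self, rows, page_size=2):
        self.rows = rows
        self.page_size = page_size

    def list_records(self, resource, cursor=None):
        # The same paginated read call an agent uses for routine automation.
        start = cursor or 0
        end = start + self.page_size
        return {
            "items": self.rows[start:end],
            "next_cursor": end if end < len(self.rows) else None,
        }


def export_all(adapter, resource):
    """Drain every record of one resource type through the ordinary read API,
    i.e. a full migration path that needs no dedicated export feature."""
    records, cursor = [], None
    while True:
        page = adapter.list_records(resource, cursor=cursor)
        records.extend(page["items"])
        cursor = page["next_cursor"]
        if cursor is None:
            return records


contacts = [{"id": i} for i in range(5)]
print(len(export_all(InMemoryAdapter(contacts), "contacts")))  # 5
```

The design point is that nothing here is an "export" feature the vendor chose to ship; the migration capability falls out of the automation capability for free.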
Beyond this, certain solutions exist in the domain of coordination and verification systems for identities in software space, absent humans. The hardest planning and distributed-systems problems lie in mechanisms for conflict resolution over planned and real-time resource management. Capability Tokens / Execution Licenses / Agent Warrants are innovations that may be the most viable and lucrative for the remaining segments of capitalization in the industry of pure software, but I have not yet observed them in practice.
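Since I have not seen these in practice, there is no standard to cite; the following is a hypothetical sketch of the simplest form a capability token could take: a scoped, expiring grant signed by an issuer, which a counterparty agent verifies before honoring an action. The key scheme, names, and scope format are all invented for illustration:

```python
import hashlib
import hmac
import json
import time

SECRET = b"issuer-signing-key"  # hypothetical shared issuer key


def issue_token(agent_id, scope, ttl_s):
    """Issue a capability token: a scoped, expiring grant signed by the issuer."""
    claims = {"agent": agent_id, "scope": scope, "exp": time.time() + ttl_s}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}


def verify_token(token, action):
    """Check signature, expiry, and whether the action is within scope."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # claims were tampered with, or wrong issuer
    if time.time() > token["claims"]["exp"]:
        return False  # grant has expired
    return action in token["claims"]["scope"]


tok = issue_token("agent-42", ["read:crm", "write:invoices"], ttl_s=300)
print(verify_token(tok, "write:invoices"))  # True: signed, unexpired, in scope
print(verify_token(tok, "delete:crm"))      # False: action outside the grant
```

A real version would need asymmetric signatures, revocation, and delegation chains; the point of the sketch is only that the primitive is small enough to commoditize quickly, which is why I doubt it can carry much capitalization on its own.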
Cloud service solutions may also be affected, since the primary convenience driving their adoption was the integration and management burden of local hardware, against which their specialized offerings dramatically reduced the engineering overhead of initial development. With agents supporting the installation and monitoring of hardware systems, hosting your own server farm is more attractive than one could have imagined three years ago. Growth in enterprise cloud services (AWS, Azure) may fall short of expectations as the market better aligns with these changes on the ground.
Similar statements about product value can be made about the future of software-supported retail marketplaces, and of firms whose bargaining power and leverage were established via dominance in human convenience, once agents can scaffold their own protocol-based information networks for resource exchange.
Why Model Makers Stay Winning
This part may be obvious, but I will articulate the exact rationale from first principles. The ability of a corporation to leverage intelligent agents is a critical advantage. When that advantage is applied to optimizing the agents' own operations and means of production, an upward cyclone forms.
The heart of this advantage will be captured by the model providers, as they benefit not just from information asymmetries but from tooling asymmetries as well. GPT-7 could be running OpenAI's strategic and technical development and it would be considered a healthy exercise in pre-rollout dog-fooding. Power and efficiency are thus co-evolutionary, making model-provisioning institutions autocatalytic.
And though model intelligence in isolation has not historically guaranteed sustained growth (see ChatGPT versus Gemini), this will become less and less the case. It was OpenAI's inability to pair its LLMs with working agentic interfaces, injected into the process chains used for its own development, that allowed others to catch up (Claude, and Google most notably), while the ubiquity of such tooling (due to OpenClaw) will seal off entry for almost everyone else.
It is plausible in theory that an outsider with sufficient capital, prudently applied, could get lucky in the model design process and introduce an innovation that eliminates or significantly undercuts the training-efficiency and cycle-time bottlenecks, but the who, how, and why are too undetermined to act on. Therefore, it is most reasonable to get as much exposure as possible to the core model providers and to the guarantors of their supply chain.
As for the core model providers: most are private, so exposure can only be gained by proxy, with the exception of Google, which has seen tremendous growth in user adoption compared to the broader field. For this reason alone I consider it a buy, independent of performance in the legacy lines of business that fuel this strategic core.
As for the silicon: I am no expert on the players in the space, but I do not expect demand for NVIDIA chips to relax at any point soon, and I think their advantage is relatively insulated as they negotiate joint projects that use pre-market tooling (models) to improve their internal processes, which they would likely negotiate as quid pro quo for chip delivery.
Rent Governance, Buy Geography
Any firm leveraging some structural exemption (for example, corruption) in product or service domains with heavy regulatory bottlenecks will be safe in the medium term. Incumbents and governments will be great regulators constraining the speed of change as agents sweep industry, insulating many sectors from immediate impact, and political lobbying is likely more centrally influential at the individual-stock level than ever before. But I am not active in management, so any bets placed in this domain are of too short a time horizon to be considered long-term invariants. For example, I can speculate that Oracle will succeed under the current administration but know nothing of what comes after, and I consider betting on world political systems remaining invariant to continued 'corrupt' policy almost too cynical a position. If AI can feed back into human values, maybe something can be done. I consider political leverage a worthwhile consideration for a two-to-three-year time horizon at most.
We can, however, look at the institutions that hold permanent leverage in one major invariant plausibly sustainable post-singularity: owning the physical locations through which transport minimizes the energy requirements of moving atoms for manufacturing. Even if the constitution of the materials used in the design and manufacture of products radically changes under AI optimization, the routes will not, until and unless agents can literally move mountains.
To that end, I would bet on any funds that capitalize on long-term real estate acquisition in the critical geographic locations required for transporting atoms across and between markets at minimal energy cost: port and manufacturing real estate near the Panama Canal, for instance, or, for similar reasons, any firm that will gain beneficial access to it through leverage over a three-year horizon. In the short term, I am likely buying CN Rail, Union Pacific, Norfolk Southern, CK Hutchison, etc., and am on the lookout for similar fund recommendations.
Keeping Meaning
Overall, if this analysis is correct, the psychological consequences may matter as much as the economic ones. For many in advantaged positions who are inevitably due to fall in socio-economic status, or in their perception of it, money is less the instrumental concern than the sentiment of cosmic unfairness. It feels like: why did I spend such time to arrive, only to have the floor pulled out from under me?
When I arrive at those thoughts, I remember that this is the consequence of trading in status games, which are never fair, and for which no sympathy is earned under capitalism, where output, not effort or intent, functions as an insidious proxy for moral value.
We were never special, and our relative status is being traded in for a prospective phase-shift improvement in general human quality of life. So despite the risk that these bets do not pay, and that the agents are not kept at bay, I remain hopeful of finding solidarity with the displaced and estranged, in revolt against new-found human meaninglessness, through our collective keeping calm and carrying on.
May you, like Sisyphus, stay happy.