It’s the year of flying cars, robots, and all-digital lives. At least one of those things was a big focus at CES (robotics). In the spirit of looking back at 2025, I thought I’d share a collection of things that might be considered, explored, or abandoned in 2026.

No one should be writing interface code/domain models

I’ve been thinking about this for years. Adapting to frameworks like Dropwizard, Spring REST, Akka HTTP, and Play has been a constant pain point. Much of the work done on the interface definition layer is repetitive rather than unique to the business logic. I anticipate the objection: “What if my file upload spec requires a GET request?” These deviations create unnecessary complexity, hinder documentation, and often lead to the service being replaced with something more maintainable after a few years. Many attempts to simplify this process have produced even more complicated and specialized solutions, such as AWS AppSync. We once seemed to be moving towards more generic and consistent approaches (see JAX-RS), but I’m not sure what happened. Another example is Tapir; I’m still waiting to see if it gains traction, though I’m very supportive of the project.

What should happen? Despite its imperfections, I believe gRPC offers a strong foundation as a generic definition language that can be compiled into interface and domain-model code. With extensions such as gRPC-Gateway’s REST transcoding, it can also support integrations that require REST-based communication.

OpenAPI is effective, but its generators are inconsistent with one another. This is a key weakness that could be addressed with more robust testing and validation: establishing a common definition language would let us verify and benchmark OpenAPI generator outputs against the libraries they are intended to support.
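To make the end state concrete, here’s a minimal sketch (plain Scala, hypothetical names) of “one definition, generated artifacts on both sides”; a generator like ScalaPB already does something close to this for gRPC:

```scala
import scala.concurrent.Future

// Hypothetical sketch: the kind of code a definition-language compiler
// (gRPC plus a generator like ScalaPB, for example) would emit from a
// single service definition such as:
//
//   service UserService { rpc GetUser (GetUserRequest) returns (User); }

final case class GetUserRequest(id: String)       // generated domain model
final case class User(id: String, name: String)   // generated domain model

trait UserService {                               // generated interface
  def getUser(request: GetUserRequest): Future[User]
}

// The only code a team should have to write is the business logic:
final class UserServiceImpl extends UserService {
  def getUser(request: GetUserRequest): Future[User] =
    Future.successful(User(request.id, name = "example")) // placeholder logic
}
```

Neither the case classes nor the trait are hand-written; change the definition, regenerate, and both the interface and the domain model stay in sync.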

Splitting Boundaries in Large Systems Should Be Automated/Data-Driven

Over the last 10+ years we’ve seen systems split into microservices and the communication between them become incredibly complex. In some cases it’s been overkill, with overly fragmented services that never should have been split. We’ve also seen a resurgence of monoliths in recent years.

What I believe will happen:

Decisions regarding service boundaries will be automated, identifying where splitting provides value and where it doesn’t. Rather than deploying utilities as separate services, we may increasingly see them embedded directly into dependent services. Automating this reliably and consistently requires tooling that leverages observable metrics and service endpoint generation; a sketch of what such a metrics-driven decision could look like follows.
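As a sketch of a data-driven boundary decision (all names and thresholds here are illustrative, not from any real tool), consider scoring each service edge from observed metrics:

```scala
// Hypothetical sketch: scoring a proposed service boundary from observed
// metrics. Names and thresholds are illustrative, not from any real tool.

final case class EdgeMetrics(
  callsPerSecond: Double,   // cross-boundary call volume
  p99LatencyMs: Double,     // added network latency at the 99th percentile
  deploysPerWeekA: Double,  // how often each side ships independently
  deploysPerWeekB: Double
)

object BoundaryAdvisor {
  /** Returns true if keeping the boundary (a separate service) looks
    * worthwhile: the two sides change independently often enough to
    * justify the network cost of every call between them. */
  def keepSplit(m: EdgeMetrics): Boolean = {
    val independentChange = math.min(m.deploysPerWeekA, m.deploysPerWeekB)
    val networkCost       = m.callsPerSecond * m.p99LatencyMs
    independentChange > 1.0 && networkCost < 50000.0 // illustrative thresholds
  }
}

// Example: a chatty utility that rarely ships on its own is a merge candidate.
// BoundaryAdvisor.keepSplit(EdgeMetrics(2000, 40, 0.2, 5)) == false
```

The point isn’t these particular numbers; it’s that the inputs are observable from existing telemetry, so the decision can be recomputed continuously instead of argued once in a design review.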

Code focuses on building and transformation rather than formal domain modeling

Since the 1980s, with the rise of C++, object-oriented programming has emphasized modeling domain objects (which is valuable) and assigning them inherent behaviors (deceptively limiting). More recent languages have demonstrated a gradual shift away from strict OOP towards a functional style. As a fan of functional programming in Scala, I hope this trend gains momentum.

Functional programming separates domain structure from behavior and transformation logic.
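A minimal Scala sketch of that separation: the domain type carries structure only, and behavior lives in pure functions that transform it.

```scala
// Sketch: structure in the case class, behavior in pure functions.

final case class Order(id: String, subtotal: BigDecimal, taxRate: BigDecimal)

object OrderOps {
  /** Total due, derived from the structure rather than stored on it. */
  def total(o: Order): BigDecimal =
    o.subtotal * (BigDecimal(1) + o.taxRate)

  /** Returns a new Order; the original value is never mutated. */
  def applyDiscount(o: Order, pct: BigDecimal): Order =
    o.copy(subtotal = o.subtotal * (BigDecimal(1) - pct))
}

// Usage: transformations compose without Order knowing about any of them.
// val discounted = OrderOps.applyDiscount(Order("a-1", 100, 0.08), 0.10)
// val due        = OrderOps.total(discounted)   // 97.20
```

Contrast this with the OOP habit of putting `total` and `applyDiscount` on the object itself, which couples every new behavior to the domain class.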

Where have we seen progress in this?

  • Golang: Models domain objects through structs and adds friction to attaching behavior to them. That said, Go’s error model limits how far you can take functional-style code.
  • Java: The language (and the Java Language Specification behind it) has embraced functional paradigms familiar from Scala, with features like the Streams API, default methods, record classes (similar to case classes), and tools for writing more declarative concurrent code (parallel streams).
  • JavaScript: Its prototype-based object model, rather than classical class-based inheritance, encourages functional approaches.
  • Rust: Natively supports higher-order functions and is attracting engineers from the Scala community.

The challenge lies in overcoming varying definitions and resistance to adopting new approaches. Many developers remain attached to the traditional pattern of embedding behavior within domain objects.

We get back to structured testing

Giving LLMs ambitious tasks for large projects has been problematic. I think this will lead us in two directions:

Firstly, we may move to writing tests and letting the LLM generate the implementation code. This would give AI coding systems the ability to validate, verify, and restructure the code. For this to be realized, the test/specification code needs to become easier for humans to write. Given the current state of QA-related technologies, frameworks, and systems, I believe that will require a new language.

Secondly, strongly structured testing techniques will be needed to verify AI-generated code. AI can generate a lot of code, but it isn’t always what you want; in a way, it’s like having an untrusted “gremlin” in your codebase. Testing will be needed at many levels: regression, acceptance, and reliability. A sketch of what property-style verification could look like is below.
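As a sketch, assuming ScalaCheck and a hypothetical `dedupe` function standing in for AI-generated code: the human writes the properties the implementation must satisfy, and the harness hammers it with generated inputs.

```scala
import org.scalacheck.Prop.forAll
import org.scalacheck.Test

object GeneratedCodeSpec {
  // Hypothetical AI-generated implementation under test.
  def dedupe[A](xs: List[A]): List[A] = xs.distinct

  // Human-written properties: the contract the generated code must satisfy.
  val keepsMembership = forAll { (xs: List[Int]) =>
    dedupe(xs).toSet == xs.toSet        // nothing lost, nothing invented
  }
  val noDuplicates = forAll { (xs: List[Int]) =>
    val d = dedupe(xs)
    d.distinct == d                     // every element appears once
  }

  def main(args: Array[String]): Unit = {
    val results = List(keepsMembership, noDuplicates)
      .map(p => Test.check(Test.Parameters.default, p))
    println(results.map(_.passed))      // all true if the code holds up
  }
}
```

The properties are the durable artifact here: regenerate the implementation as often as you like, and the same contract re-verifies it.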

We Finally Give Up on Forcing “Real-Time Decisions” on Services

Creating services that handle complex rules and data is expensive, and that expense increases when real-time performance is demanded. It gets worse every time a feature is added or the service is reused in a different context. What does this mean? A push toward queuing systems, asynchronous processing, and large-scale streaming platforms.
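Here’s a minimal sketch (plain Scala, illustrative names) of that trade: the “real-time” surface shrinks to accepting work and handing back a job id, while the expensive processing happens asynchronously and the UI polls for honest status.

```scala
import java.util.UUID
import scala.collection.concurrent.TrieMap

sealed trait JobStatus
case object Queued                    extends JobStatus
case object Running                   extends JobStatus
final case class Done(result: String) extends JobStatus

object JobService {
  private val jobs = TrieMap.empty[String, JobStatus]

  /** The "real-time" surface stays tiny: accept, record, return an id. */
  def submit(payload: String): String = {
    val id = UUID.randomUUID().toString
    jobs.put(id, Queued)
    // In a real system this would publish to a queue (Kafka, SQS, ...);
    // here a worker thread stands in for the asynchronous consumer.
    new Thread(() => {
      jobs.put(id, Running)
      jobs.put(id, Done(s"processed: $payload")) // expensive rules run here
    }).start()
    id
  }

  /** What the UI polls to show honest progress instead of a timed-out spinner. */
  def status(id: String): Option[JobStatus] = jobs.get(id)
}
```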

I anticipate two key shifts:

  1. A clearer understanding of the appropriate roles and limitations of microservices, along with realistic performance expectations.
  2. A greater acceptance of communicating the status of longer-running processing to users through the UI to build trust. This shift is supported by the ongoing development of resources like the second edition of “Designing Data-Intensive Applications.”

Additionally, this way of solving the problem ends up cheaper to run with the right infrastructure (FaaS), and cheaper to maintain.

Libraries become comprehensive

I think this may be the most uncertain prediction here. JavaScript and Python introduced a wave of self-promoted libraries that riffed on other libraries, were forked out of self-interest, or were overly specialized. We may move back to a more established, reputable, and verified process for publishing reusable dependencies. Or not: Golang, a popular language, has a completely decentralized, disorganized, and loose approach to dependency management, based on raw source-code repositories.

AI Models

I don’t think AI models will get much better. They may become more space-, memory-, and parameter-efficient, and they’ll keep being refreshed with updates. The big next thing here is qualifying and validating the training data, and that’s going to be a murky and difficult path for LLMs. It only gets worse as more of the data an LLM is trained on is itself AI-generated content.

People become more withdrawn from spaces where bot-based content exists

We’re currently seeing spaces where large numbers of people gather being invaded by organizations and malicious individuals promoting products and scams. Email and text messaging are inundated with nonconsensual messages from vendors we use. I imagine this creates distrust in customers and may cause more withdrawal and quiet disconnects.

I’ve seen this with email and text marketing, mostly through shops connected to the Square marketplace. I never asked for marketing from those companies, and there seems to be no way to opt out of it through Square. Likewise, my understanding is that Facebook is filled with distracting, unasked-for AI-generated “content.”

AI Slows Down and Gets Focused on Single Tasks

AI is currently overhyped. There’s a lot of buzz around its potential to revolutionize everything and eliminate human labor, but we are starting to see the limitations of general AI and the practical constraints imposed by current models and hardware. Hopefully, for software engineers, we’ll see better tools, better integration, ways to identify flaws in languages, improvements in processes, and more trustworthy ways to deliver solutions. I think that’s the best-case scenario. Given the more likely scenario of adapt-or-die, buy-over-build, and deny-failure, I suspect we’ll see a lot of turbulence and further large-scale technical disasters.

What can we expect from this? Hopefully, specialized research into optimization and a better definition of the tasks AI is actually suited for.

Rust Continues to Grow

2026 may be the year for Rust. Who knows. It looks like a great language, and I’m learning it. Coincidence? With the economic downturn, there may be an increased desire to make systems more efficient to reduce cloud spend, and Rust may be the answer. Who knows; I just hope it isn’t Go. (Go is currently experiencing turbulence and is drifting from its original claim of being an “easy language” due to growing pains.)

In conclusion,

Do I believe all of this will come to pass in 2026? No. However, I believe these considerations are worthwhile. Software engineering is cyclical, and while past performance isn’t indicative of future results, I hope we can move towards more communication, efficiency, and the creation of truly great software.