Guided Emergence: Building an AI-Durable Organisation at The Lo & Behold Group

Rifeng Gao · February 2026

Earlier this year, we came across Claude Code and saw its potential for a business like ours. We could rapidly stress-test ideas before committing resources. The feasibility of venue concepts, menu changes, and pricing scenarios could be modeled in hours instead of days. Optimization could happen continuously in the background, analyzing bookings, missed calls, and waitlists. And we could make better decisions, with AI analyzing data from across the business and recommending where to focus.

We put together a team of five to bring AI into the business, identified for their ability to think across the whole organisation, with experience cutting across finance, operations, strategy, talent, and guest experience. On the technical front, we brought in one of the pioneers of Claude Code in Singapore to provide guidance.

The early results were exciting. But a worry kept surfacing: whatever we spent time building now might not survive the next model release. How do we invest our time and effort in a way that compounds, rather than building things that become obsolete?

We do not have a complete answer, but along the way, we found a way of thinking about it that has shaped how we build, and we wanted to share it with others on the same journey.


Guided Emergence

The first thing we had to figure out was how to work with AI’s unpredictability.

AI models are non-deterministic. Given the same input, they produce different outputs. The natural response is to add structure: constraints, validation, tighter pipelines. That instinct is often right. On its own, the unpredictability is hard to work with.

But that unpredictability is not just noise. It is also where the best outcomes live.

We experienced this building TLBG 360, an internal performance review app. The first version came back full of bugs. We started by giving the AI broad autonomy: “just fix it.” It went in circles. So we tried a different angle, directing it step by step. It was laborious. What worked was a third approach: we mapped out what the expected user journeys should look like, then set up an autonomous loop where AI agents tested the app against those journeys, recorded every bug, and a master agent fixed them, repeating until every journey worked. What surprised us was that it found its way there without being shown how.
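The loop that worked can be sketched as control flow. This is a minimal, simulated sketch: the agent calls (`run_journey_test`, `fix_bug`) are hypothetical stand-ins for prompts dispatched to tester agents and a master agent, replaced here with plain functions so the structure is runnable.

```python
def run_journey_test(journey, known_bugs):
    """Tester agent (simulated): returns the bugs this journey still exposes."""
    return [bug for bug in known_bugs if bug["journey"] == journey]

def fix_bug(bug, known_bugs):
    """Master agent (simulated): applies a fix, removing the bug from the app."""
    known_bugs.remove(bug)

def guided_fix_loop(journeys, known_bugs, max_rounds=10):
    """Repeat test -> record -> fix until every journey passes."""
    for round_no in range(1, max_rounds + 1):
        found = []
        for journey in journeys:
            found.extend(run_journey_test(journey, known_bugs))
        if not found:
            return round_no  # every journey works; report how many rounds it took
        for bug in found:
            fix_bug(bug, known_bugs)
    raise RuntimeError("journeys still failing after max_rounds")

# Illustrative journeys and bug list, not the real TLBG 360 data.
journeys = ["submit review", "approve review", "view feedback"]
bugs = [
    {"journey": "submit review", "desc": "save button disabled"},
    {"journey": "view feedback", "desc": "empty page for new hires"},
]
rounds = guided_fix_loop(journeys, bugs)
```

The point of the structure is that the outer loop and the journey definitions are ours; how each bug actually gets fixed is left to the model.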

The AI that went in circles and the AI that surprised us were the same model. The difference was the structure around it. Emergent behavior, the variance that causes hallucination and inconsistency, is also what produces creative solutions and unexpected capability.

What we have found is that emergent behavior, to be useful, needs to be guided. We have started anchoring our design principles around this idea. We call it Guided Emergence. “Emergence” because the best outcomes come from leaving room for AI to find its own path. “Guided” because without the right structure, emergence is just unpredictability.

The question we keep returning to is how to add structure that grows with the model rather than against it. The goal becomes: raise the floor without lowering the ceiling. We think about structure in three categories, based on what happens to each as models improve.

Guardrails are permanent. They prevent the worst outcomes: safety checkpoints to roll back if something goes wrong, boundaries that protect users from model errors, hard limits on what the system can touch. In our experience, the need for guardrails does not change as models improve.

Scaffolding is temporary. It compensates for what the model cannot yet do reliably: step-by-step orchestration, explicit criteria, coordination logic. Scaffolding is useful today, but it carries a risk. Structure that is hard to remove gets entangled in the architecture and gradually becomes the constraint it was meant to prevent. We have started designing scaffolding for phased retirement, so it can be shed as the model outgrows the need for it.

Opinionated architecture is the organisational knowledge that shapes what AI produces. It is the outcome definitions that describe what good looks like, the institutional knowledge that gives the model context, and the values and quality judgment that define what good means here. Unlike scaffolding, opinionated architecture does not compensate for what the model cannot do. It tells the model how the organisation operates and what it values.

Structure | Examples | As models improve
Guardrails | Rollback mechanisms, access boundaries | Unaffected. The need does not change.
Scaffolding | Step-by-step task breakdowns, prompt templates | Becomes lighter over time. Design for easy retirement.
Opinionated architecture | What “good” looks like, organisational principles | Becomes more valuable. This is where we invest.

One example is structured team memory: documenting decisions alongside the reasoning behind them, the alternatives that were considered and rejected, and how thinking evolved over time. A stronger model can read all of that and extend the reasoning to new situations, or challenge the original decisions when circumstances have changed.
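One hypothetical shape for such a record, with the fields the paragraph above names (the headings are our illustration, not a fixed standard):

```markdown
# Decision: <short title>
Date: <date> · Status: active | superseded by <link>

## What we decided
## Why we decided it (the reasoning)
## Alternatives considered and rejected, and why
## How our thinking has evolved since
```

Kept in plain files alongside the work, records like this are readable by any future model without migration.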

Another is wrapping AI capabilities with organisational principles. The base model already knows how to write code, review work, and analyze data. What it does not know is how the team thinks about quality, what the standards are, and how work gets done. That wrapper encodes judgment. A better model expresses that judgment with more sophistication.
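In practice, one place this wrapper can live is a project-level instruction file such as Claude Code's CLAUDE.md. The rules below are illustrative, not our real standards:

```markdown
## How we judge quality here
- Every recipe costing cites the supplier price list version it used.
- Flag any proposal whose gross margin falls below the venue's target before it reaches a lead.
- Guest-facing copy follows our tone guide: warm, specific, never superlative.
```

Notice that none of this teaches the model a skill; it tells the model which of its existing skills to apply, and to what standard.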

Here is the part that took us a while to fully internalize. This distinction matters more as models improve. When model capability begins to exceed what most teams designed for, opinionated architecture becomes the highest-leverage investment. Teams that over-scaffolded may never realize their own structure is holding them back. Those that invested in opinionated architecture find theirs compounding.

We experienced this firsthand. We were designing a team lead structure for orchestrating sub-agents. Within the same week, Anthropic released the Agent Teams feature that did exactly this. What we had built as scaffolding, the model now handled on its own.

Experiences like this have reinforced Guided Emergence as our foundational framework. It is also, we think, our most important piece of opinionated architecture.

The next section looks at how we are approaching AI across the business: what deserves the most care, where we are tempted to move fast but the risks give us pause, and where we see the most upside.


Where We Focus

We think about AI’s impact on the business in three universal categories: replacing current work, improving current work, and unlocking new abilities.

We are spending most of our energy on the last two. Each involves a different mix of guardrails, scaffolding, and opinionated architecture.

Replacement is the most obvious of the three, and where most businesses tend to start. Identify what humans do, find what AI can do instead, measure the savings. It is also where we find ourselves being the most careful. These are decisions about people and their roles, and getting them wrong is not something we can easily walk back. We do not yet have an answer, but we are being deliberate and considered about how we find one. We want to take the time to get this right.

Improving current work covers the largest surface area. Recipe costing, contract reviews, training materials, event proposals, social media content, SOPs. The list goes on. It extends across every department, and it is tempting to move fast on all of it.

But the risks are real, particularly two: AI touching sensitive data or taking destructive actions that cannot be undone, and teams leaning on AI output before they have the fluency to judge it.

Guardrails come first. Sensitive systems like payroll and guest data have strict access controls, and security hooks prevent destructive actions. These boundaries stay regardless of how capable the models become. For now, we are also keeping tools non-agentic until proper training and these guardrails are fully in place.
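As one concrete shape, Claude Code supports pre-tool-use hooks that can veto an action before it runs; the snippet below is an illustrative settings fragment, and the script path is hypothetical (check Anthropic's hooks documentation for the exact schema):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "./hooks/block_destructive.sh" }
        ]
      }
    ]
  }
}
```

The referenced script inspects the proposed command and exits non-zero to block anything destructive, which is what makes this a guardrail rather than a suggestion.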

Pre-built tools and training programs serve as scaffolding, helping teams start using AI effectively while they build familiarity. As teams develop fluency and models improve, this scaffolding becomes lighter.

The goal is to reach the point where each team shapes these tools around their own knowledge, standards, and way of working. When a team has defined what good looks like for their domain, that becomes opinionated architecture, and every future model gets more out of it.

Unlocking new abilities is where we are most curious. We are building our first project in this category: a venue leader advisor. The decisions venue leaders make have the biggest impact on the business, and they often depend on intuition and an incomplete picture of the data available to them. Analyses like P&L reviews, menu engineering, and manpower planning happen today, but separately, and with whatever thoroughness time allows. They do not produce a single, prioritized view of where to focus. The advisor processes operational and financial data holistically and more regularly than manual effort allowed, and compacts it into prioritized action items. That is what we think AI actually unlocks here.

Today, we provide step-by-step templates that guide each analysis, prescriptive instructions on how to apply frameworks like menu engineering or manpower planning, and explicit logic for what to evaluate in what order. These compensate for what the model cannot yet reason through on its own. As models improve, this scaffolding simplifies.
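To make "explicit logic" concrete, here is a sketch of one such framework: classic menu engineering, which sorts items into four quadrants by popularity and contribution margin relative to the menu average. The thresholds and menu data are illustrative, not our venues' numbers.

```python
def menu_engineering(items):
    """items: list of dicts with name, units_sold, and per-unit margin."""
    avg_units = sum(i["units_sold"] for i in items) / len(items)
    avg_margin = sum(i["margin"] for i in items) / len(items)
    labels = {}
    for i in items:
        popular = i["units_sold"] >= avg_units
        profitable = i["margin"] >= avg_margin
        if popular and profitable:
            labels[i["name"]] = "star"        # promote and protect
        elif popular:
            labels[i["name"]] = "plowhorse"   # popular, thin margin: re-cost or reprice
        elif profitable:
            labels[i["name"]] = "puzzle"      # profitable, unpopular: reposition
        else:
            labels[i["name"]] = "dog"         # candidate for removal
    return labels

# Illustrative menu data.
menu = [
    {"name": "laksa", "units_sold": 420, "margin": 9.0},
    {"name": "chilli crab", "units_sold": 150, "margin": 14.0},
    {"name": "kaya toast", "units_sold": 500, "margin": 3.5},
    {"name": "oyster omelette", "units_sold": 90, "margin": 4.0},
]
labels = menu_engineering(menu)
```

Today this logic lives in explicit scaffolding; as models improve, we expect to state the framework by name and let the model apply it, keeping only our judgment about what each quadrant should trigger.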

Knowing what matters to a venue leader and the business, what to prioritize and how to weigh competing demands, does not come from the model. That is encoded as opinionated architecture. A stronger model will weigh those competing demands with more nuance, and surface sharper recommendations.

Across everything we are doing with AI, replacing and improving current work have a clearer shape. Unlocking new abilities is less predictable. The venue leader advisor emerged from combining analyses that already existed in isolation. We think there are many more combinations we have not yet seen, and that is what excites us most.


The question that started this work was about what survives. The question we keep returning to now is what becomes possible. We are building a team to answer both, people who can think across the organisation and see combinations others miss. If this resonates, we would love to hear from you. Reach out.