

Opinion
What If the AI Just Did It? Where Gen AI earns its place in drinks operations
Published on 17 Apr, 2026 by Jonathan
I have lost count of the number of conversations I have had this year that begin with some version of the same question: "where should we actually be doing something with AI?" The phrasing varies. The fatigue behind it does not.
We are past the point where "we should probably do something with AI" is a strategy. The drinks companies I speak to, and I include organisations like Carlsberg Britvic in that, are well past the experimentation phase. They want to know where AI changes the economics of their operation, and where it is a distraction. They are right to be sceptical. Most of the AI conversations in our sector have led with technology and ended with a chatbot nobody uses.
The frame we find useful is this: stop thinking of AI as something people talk to, and start thinking of it as something that acts. Not a dashboard, not a search bar, not a helpful assistant waiting for input. An agent that works continuously across your operational data, takes the routine decisions itself, orchestrates the next step in the process, and only escalates to a human when judgement or authority is genuinely needed.
That distinction matters. It is the difference between installing a clever tool and actually reclaiming capacity. Here are three examples, drawn from conversations in our sector, where this is already starting to land.
An agent that manages maintenance rather than predicts it
A bottling line is a complicated piece of choreography. Fillers, labellers, pasteurisers, conveyors, each one a potential point of failure, each failure a cascade into the rest of the line. Most drinks manufacturers I speak to still approach maintenance as either scheduled, which wastes good asset life, or reactive, which means a callout at 2am and a morning of lost production. Unplanned downtime on a single line can cost tens of thousands of pounds per hour.
Most "AI for maintenance" conversations stop at prediction: the model flags something, a human decides what to do. The agentic version keeps going. It correlates sensor patterns with known failure modes, cross-references the maintenance calendar and spare parts inventory, raises the work order, books the technician, orders the part, and updates the production schedule to route around the affected line during the window. A human only sees it when the system needs approval for a cost threshold or an unusual call. The outcome is not a better alert. It is one fewer thing for the plant manager to manage.
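The decision logic described above, act autonomously up to a boundary, escalate beyond it, can be sketched in a few lines. Everything here is illustrative: the `Anomaly` fields, the action names, and the £5,000 approval threshold are assumptions for the sketch, not a real maintenance system's API.

```python
from dataclasses import dataclass

@dataclass
class Anomaly:
    asset: str
    failure_mode: str
    estimated_cost: float   # parts plus labour, in GBP (assumed unit)
    part_in_stock: bool

APPROVAL_THRESHOLD_GBP = 5000.0  # assumed: above this, a human signs off

def plan_response(a: Anomaly) -> list[str]:
    """Return the ordered actions the agent takes autonomously,
    ending with an escalation step only when judgement is needed."""
    actions = [f"raise work order for {a.asset} ({a.failure_mode})"]
    # Cross-reference spares: reserve if available, otherwise order.
    if a.part_in_stock:
        actions.append("reserve spare part from inventory")
    else:
        actions.append("order spare part")
    actions.append("book technician into maintenance window")
    actions.append("reroute production schedule around affected line")
    # The bounded-authority rule: only costs above the threshold
    # surface to a human at all.
    if a.estimated_cost > APPROVAL_THRESHOLD_GBP:
        actions.append("escalate: cost above approval threshold")
    return actions
```

The point of the sketch is the shape, not the numbers: the human appears only in the final branch, and everything before it happens without anyone being asked.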
An agent that closes the quality loop
The second use case starts from a frustration I hear almost universally: quality data is everywhere and useful nowhere. Lab systems, MES, ERP, spreadsheets on someone's laptop. When a batch is flagged, the quality team spends days reconstructing what happened from a dozen sources. By the time root cause is established, the same pattern may have repeated twice.
The chat version of AI here would be "ask questions of your quality data in plain language." Useful, but it still relies on someone thinking to ask. The agentic version monitors every batch as it is produced, correlates results across systems as the data arrives, spots pattern drift before anyone sees it, and triggers the next action autonomously. If carbonation starts trending low on Line 2, the agent holds the next batch, notifies the shift supervisor with a summary of the drift and probable causes, and logs a compliance record automatically. The quality lead sees a decision to approve or override, not a stack of reports to read.
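The carbonation example can be made concrete with a minimal drift check. The spec limit, window size, and 5% guard band below are invented for illustration; in a real line they would come from the product specification and the quality team.

```python
from statistics import mean

CARBONATION_LOWER_LIMIT = 2.6   # volumes of CO2; assumed spec minimum
WINDOW = 5                      # batches in the rolling window (assumed)

def check_drift(readings: list[float]) -> list[str]:
    """Monitor per-batch carbonation and return the agent's actions
    when the rolling mean drifts toward the lower spec limit."""
    if len(readings) < WINDOW:
        return []  # not enough history to call a trend
    window = readings[-WINDOW:]
    # "Drifting": the recent mean is within 5% of the lower limit.
    drifting = mean(window) < CARBONATION_LOWER_LIMIT * 1.05
    # "Trending down": every batch in the window is at or below the last.
    trending_down = all(b <= a for a, b in zip(window, window[1:]))
    if drifting and trending_down:
        return [
            "hold next batch on Line 2",
            "notify shift supervisor with drift summary",
            "log compliance record",
        ]
    return []
```

Note what the function returns when it fires: actions, not a report. The quality lead's first contact with the incident is a batch already on hold and a summary waiting for a decision.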
An agent that owns the field team's admin layer
The third is quieter but arguably has the fastest payback. Field teams (technicians, account managers, sales reps) spend a frustrating amount of their day on work that is not really their job: writing up visit notes, filing expense claims, chasing product specifications, preparing for customer calls, updating the CRM after a conversation. That overhead compounds across hundreds of people and thousands of interactions.
A knowledge search tool would let a technician look things up faster. An agent does the work around them. Before a visit, it prepares a briefing drawing on the customer's history, recent orders, open support tickets, and any live operational data from their equipment. After the visit, it drafts the visit note from a two-minute voice memo, updates the CRM, raises any follow-up tickets, and flags anything the account manager needs to see. The technician's job becomes what it should be: being in the room, making the judgement call, talking to the customer. The agent handles everything either side of that.
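The pre-visit briefing step is just structured assembly from systems the business already has. The data shapes and field names below are hypothetical stand-ins for whatever the CRM, order, and support systems actually expose.

```python
def prepare_briefing(customer: dict, orders: list[dict],
                     tickets: list[dict]) -> str:
    """Assemble a pre-visit briefing from CRM, order, and
    support-ticket data. All field names are illustrative."""
    lines = [f"Briefing: {customer['name']}"]
    if orders:
        # Surface the most recent order so the visit starts informed.
        latest = max(orders, key=lambda o: o["date"])
        lines.append(f"Most recent order: {latest['product']} on {latest['date']}")
    open_tickets = [t for t in tickets if t["status"] == "open"]
    lines.append(f"Open support tickets: {len(open_tickets)}")
    for t in open_tickets:
        lines.append(f"  - {t['summary']}")
    return "\n".join(lines)

briefing = prepare_briefing(
    {"name": "Acme Taproom"},
    [{"date": "2026-03-01", "product": "CO2 regulator"}],
    [{"status": "open", "summary": "flow meter drift on cellar unit"}],
)
```

The post-visit half (drafting the note from a voice memo, updating the CRM, raising follow-ups) is the same pattern run in reverse: capture once, fan out to every system that needs it.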
The common thread
Notice what these three have in common. None of them are generic Gen AI. Each one is an agent connected to real operational systems (IoT sensors, MES, ERP, CRM) with the authority to take bounded action on its own. The language model is a component, not the point. What makes the approach work is the integration with your data and your workflow, and the clarity of where the agent acts autonomously versus where it escalates.
Getting this right is not a tooling decision. It is a question of where your business will most benefit from reclaimed human capacity, what data the agent needs access to, and which decisions you are comfortable delegating. That is exactly what our Gen AI Strategic Assessment is built to answer. Four weeks, structured, ends with a prioritised roadmap your leadership can act on.
If any of the three examples above feel close to a conversation you are already having internally, I would welcome a short call to compare notes.
Jonathan Custance is CEO of Green Custard, an AWS Consulting Partner specialising in IoT, connected products, and Gen AI for the drinks sector and beyond. Find out more about the Gen AI Strategic Assessment or get in touch at info@green-custard.com.