The Scepticism Is Rational
Every Mittelstand CEO I speak with has heard the AI pitch. Usually multiple times. Usually from software vendors with no experience in industrial environments, safety-critical systems, or the specific constraints of hardware-software integration.
The scepticism that follows is not technophobia. It is pattern recognition. These are people who have watched ERP implementations fail, Industry 4.0 projects stall, and digital transformation initiatives produce dashboards nobody uses. They have learned to be suspicious of technology that promises to transform everything.
That scepticism is healthy. The problem is that it sometimes extends to cases where AI genuinely does create bounded, measurable value — and where not adopting it is a competitive disadvantage.
The Right Framework
The question is not "should we use AI?" It is two separate questions:
- Where does AI create measurable, bounded value in our specific operations?
- Where does AI introduce risk — safety, reliability, compliance, or operational — that we cannot afford?
These questions have different answers for every company. But there are patterns.
Where AI Works Well in Industrial Contexts
Structured data analysis at scale. If you have sensor data, production logs, or quality inspection records that a human analyst would need weeks to review, a well-scoped ML model can surface patterns in hours. This is not magic — it is statistics applied to data you already have. The value is real and measurable.
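To make the "statistics applied to data you already have" point concrete: sometimes the pattern-surfacing step is as simple as grouping inspection records and computing defect rates per production line and shift. The sketch below uses only the Python standard library; the record fields and values are invented for illustration, not taken from any real dataset.

```python
from collections import defaultdict

# Hypothetical quality-inspection records; the field names are illustrative.
records = [
    {"line": "A", "shift": "night", "defect": True},
    {"line": "A", "shift": "day",   "defect": False},
    {"line": "B", "shift": "night", "defect": False},
    {"line": "A", "shift": "night", "defect": True},
    {"line": "B", "shift": "day",   "defect": False},
    {"line": "A", "shift": "night", "defect": False},
]

def defect_rates(records):
    """Defect rate per (line, shift) group -- plain counting, no ML required."""
    totals = defaultdict(int)
    defects = defaultdict(int)
    for r in records:
        key = (r["line"], r["shift"])
        totals[key] += 1
        if r["defect"]:
            defects[key] += 1
    return {k: defects[k] / totals[k] for k in totals}

rates = defect_rates(records)
# Here (line A, night shift) stands out: 2 defects in 3 inspections.
```

At real scale the grouping moves into a dataframe library or an ML model, but the principle is the same: the value comes from systematically reviewing data a human analyst would never get through.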
Document processing and classification. Supplier contracts, technical specifications, compliance documents, customs paperwork. If your procurement or engineering team spends significant time reading and categorising documents, LLM-based processing can reduce that time by 60–80% for well-defined document types. The risk is low because a human reviews the output before acting on it.
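A minimal sketch of such a pipeline, with a trivial keyword stub standing in for the actual LLM call (the document types and keywords are illustrative assumptions, not a real taxonomy). The point is the structure: every output, including "unknown", is routed to a human before anything is acted on, which is what keeps the risk low.

```python
# Illustrative document types and trigger keywords -- an assumption for
# this sketch, standing in for an LLM-based classifier.
KEYWORDS = {
    "supplier_contract": ["liability", "payment terms"],
    "customs": ["tariff", "hs code"],
    "spec": ["tolerance", "datasheet"],
}

def classify(text):
    """Return (label, needs_review).

    Every document is flagged for human review before action is taken;
    the classifier only pre-sorts the queue.
    """
    lowered = text.lower()
    for label, words in KEYWORDS.items():
        if any(w in lowered for w in words):
            return label, True
    return "unknown", True  # unclassified documents always go to a human

label, needs_review = classify("Payment terms: net 60 days, liability capped")
# label == "supplier_contract", needs_review == True
```

Swapping the keyword stub for an LLM call changes the accuracy, not the architecture: the human-review gate stays in place either way.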
Anomaly detection in production. Identifying when a production line is drifting outside normal parameters before it produces defective parts. This is a well-understood application with clear ROI and manageable risk, because the AI flags anomalies for human review rather than taking autonomous action.
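A minimal version of this pattern is a rolling z-score check: flag any reading that deviates sharply from the recent history, and queue the flag for human review rather than acting on it. The window and threshold below are illustrative defaults, not recommendations for any specific process.

```python
import statistics

def flag_anomalies(readings, window=20, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away from
    the mean of the preceding `window` readings.

    Flags are returned for human review; the function takes no
    autonomous action on the production line.
    """
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history)
        if stdev and abs(readings[i] - mean) > threshold * stdev:
            flagged.append(i)
    return flagged

# A stable signal with one sudden spike at index 25:
signal = [10.0, 10.1, 9.9, 10.0, 10.2] * 5 + [14.0]
alerts = flag_anomalies(signal)  # -> [25]
```

Production systems use more robust methods (seasonal baselines, multivariate models), but the risk posture is identical: the model flags, a person decides.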
Supplier communication and translation. For companies sourcing from China, AI-assisted translation of Mandarin-language technical documentation has improved significantly. It is not a substitute for fluent human review of critical documents, but it dramatically reduces the time required to get a working understanding of a Chinese-language datasheet.
Where AI Introduces Unacceptable Risk
Safety-critical control systems. If a failure mode can injure a person or damage critical infrastructure, the verification requirements for AI-based control are currently prohibitive for most SMEs. The EU AI Act classifies these as high-risk systems with corresponding compliance obligations. This is not a reason to avoid AI forever — it is a reason to be honest about the current state of verification tooling.
Opaque decision-making in regulated processes. If you need to explain to a regulator, a customer, or an auditor why a specific decision was made, a black-box ML model is the wrong tool. Explainability is not a nice-to-have in regulated industries — it is a compliance requirement.
Replacing human judgment in novel situations. AI systems trained on historical data perform well on situations that resemble their training distribution. Industrial environments are full of novel situations — new suppliers, new materials, new failure modes. The risk of an AI system confidently producing a wrong answer in a novel situation is real and underappreciated.
The Practical Implication
The most common mistake I see is companies trying to apply AI to the wrong problems — either the highest-stakes, most complex processes (where the risk is too high) or the most visible, most hyped applications (where the ROI is unclear).
The highest-ROI AI applications in industrial SMEs are usually unglamorous: document processing, data cleaning, anomaly detection, translation. They are not on the cover of industry magazines. But they create measurable value, they are verifiable, and they do not require betting the company on a technology that is still maturing.
Start there. Build the organisational capability to evaluate, implement, and verify AI systems in low-stakes contexts. Then expand to higher-stakes applications as the tooling matures and your team develops the judgment to use it well.
That is not a conservative position. It is an engineering position.