Who is SENEC?
SENEC is one of Germany’s leading providers of home energy storage and solar solutions, part of the EnBW Group. With thousands of installations across Europe and a rapidly growing customer base, SENEC’s IT support desk handles a high volume of inbound requests — from routine configuration questions to critical system incidents.
The Challenge
As SENEC’s customer base scaled, so did inbound support volume. Most of that volume was repetitive: standard how-tos, known error codes, procedural questions that had documented answers sitting in Confluence. But every ticket still went through the same queue and required the same human effort to process, regardless of complexity.
Two things made this costly. First, routine tickets consumed time that should have gone to complex problems. Second, urgent incidents had no fast track — high-priority issues entered the same queue as routine ones, slowing the response when it mattered most.
The Solution
Incident triage and routing
Every incoming ticket is classified on arrival. High-severity and urgent cases are flagged immediately and routed directly to the right specialized team, bypassing the general queue. Routine tickets proceed to automated resolution. This routing layer alone reduced the volume of tickets requiring general agent involvement.
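The triage step above can be sketched as a small classify-then-route function. The severity labels, keyword rules, and queue names below are illustrative stand-ins (the real system uses an LLM classifier, and SENEC's actual team names are not disclosed):

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    URGENT = "urgent"
    ROUTINE = "routine"

# Hypothetical keyword rules standing in for the real LLM-based classifier.
URGENT_MARKERS = ("outage", "system down", "no power")

@dataclass
class Ticket:
    subject: str
    body: str

def classify(ticket: Ticket) -> Severity:
    text = f"{ticket.subject} {ticket.body}".lower()
    if any(marker in text for marker in URGENT_MARKERS):
        return Severity.URGENT
    return Severity.ROUTINE

def route(ticket: Ticket) -> str:
    """Urgent cases bypass the general queue; routine ones go to automation."""
    if classify(ticket) is Severity.URGENT:
        return "specialist-team"
    return "automated-resolution"
```

The key design point is that routing happens before any retrieval work, so urgent tickets never wait behind routine ones.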
Multi-step retrieval across data sources
For tickets routed to automated resolution, the agent runs a structured retrieval sequence: internal Confluence articles first, then purpose-built APIs for live system data, then an external research fallback for queries outside the knowledge base. Each step only runs if the previous one doesn't produce a confident answer, keeping latency low for the simple cases that make up the majority of volume.
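A minimal sketch of this gated fallback, assuming each retriever returns an answer plus a confidence score in [0, 1]; the threshold value and retriever names are assumptions, not SENEC's actual configuration:

```python
from typing import Callable, Optional, Tuple

# Each retriever returns (answer, confidence); names are illustrative.
Retriever = Callable[[str], Tuple[Optional[str], float]]

def answer_with_fallback(query: str,
                         retrievers: list[Retriever],
                         threshold: float = 0.8) -> Optional[str]:
    """Run retrievers in order; stop at the first confident answer.

    Later (more expensive) retrievers only run when earlier ones
    fail to clear the confidence bar.
    """
    for retrieve in retrievers:
        answer, confidence = retrieve(query)
        if answer is not None and confidence >= threshold:
            return answer
    return None  # no confident answer -> escalate to the service desk
```

In this scheme the Confluence search, live-system APIs, and external research agent would each be one `Retriever`, registered in that order.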
Clean escalation with context
When the agent can't resolve a ticket, it escalates to the service desk with a structured summary: what was retrieved, what was tried, why it didn't resolve. Agents pick up with full context, not a cold handoff.
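The handoff payload can be pictured as a small structured record. The field names below are hypothetical; they illustrate the "what was retrieved, what was tried, why it failed" shape rather than SENEC's actual schema:

```python
from dataclasses import dataclass

@dataclass
class EscalationSummary:
    """Context handed to a human agent when automation gives up."""
    ticket_id: str
    sources_consulted: list[str]   # e.g. ["confluence", "system-api"]
    attempted_answers: list[str]   # draft answers that failed the confidence bar
    reason: str                    # why no answer was confident enough

    def render(self) -> str:
        lines = [
            f"Ticket {self.ticket_id} escalated: {self.reason}",
            "Sources consulted: " + ", ".join(self.sources_consulted),
        ]
        lines += [f"- tried: {a}" for a in self.attempted_answers]
        return "\n".join(lines)
```

Because the summary travels with the ticket, the human agent starts from the agent's dead end instead of from zero.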
Impact
Looking Under the Hood
Multi-source retrieval with confidence gating. The retrieval pipeline runs in sequence: Confluence semantic search, live system APIs, external research fallback. Each layer only activates if the previous one doesn’t return a high-confidence answer. This keeps straightforward tickets fast and expensive API calls rare.
LLM selection per use case. Not every step runs on the same model. Classification and routing use a smaller, faster model optimized for latency and cost. Answer synthesis uses a higher-capability model where accuracy matters more than speed. The result is a system that’s both faster and cheaper than a single-model approach.
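One common way to implement per-step model selection is a simple configuration map consulted before each LLM call. The model identifiers below are placeholders; the models SENEC actually uses are not disclosed:

```python
# Placeholder model names: fast/cheap for classification and routing,
# higher-capability for answer synthesis.
MODEL_BY_STEP = {
    "classification": {"model": "small-fast-model", "max_tokens": 64},
    "routing":        {"model": "small-fast-model", "max_tokens": 32},
    "synthesis":      {"model": "large-capable-model", "max_tokens": 1024},
}

def model_for(step: str) -> str:
    """Look up which model a pipeline step should call."""
    return MODEL_BY_STEP[step]["model"]
```

Keeping the mapping in one place also makes it cheap to swap models per step as pricing and quality change.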
Langfuse for output evaluation. All agent responses are logged and scored via Langfuse. This gives the team ongoing visibility into answer quality, lets them catch regressions early, and creates a feedback loop for improving retrieval and prompting over time.
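The feedback loop amounts to logging each response with a quality score and then querying those records for trends and outliers. The sketch below is a minimal in-memory stand-in for that pattern, not the Langfuse API itself; in production the records would be sent to Langfuse instead:

```python
from dataclasses import dataclass
import statistics

@dataclass
class ScoredResponse:
    trace_id: str
    answer: str
    score: float  # 0..1 quality score from an evaluator

class EvalLog:
    """Minimal stand-in for an observability backend like Langfuse."""

    def __init__(self) -> None:
        self.records: list[ScoredResponse] = []

    def log(self, trace_id: str, answer: str, score: float) -> None:
        self.records.append(ScoredResponse(trace_id, answer, score))

    def mean_score(self) -> float:
        """Track overall answer quality over time."""
        return statistics.mean(r.score for r in self.records)

    def regressions(self, floor: float = 0.5) -> list[ScoredResponse]:
        """Flag low-scoring answers for human review."""
        return [r for r in self.records if r.score < floor]
```

The low-scoring answers surfaced by `regressions()` are exactly the cases that feed improvements to retrieval and prompting.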
“We partnered with nexamind to reduce the workload of low-level internal technical support tickets. They built an AI system that classifies inquiries into categories, autonomously handles relevant low-level technical questions, and leverages both our internal Confluence database and an external research agent to fill knowledge gaps. If the AI agent cannot resolve an issue, it seamlessly redirects the question to our service desk team. This solution has significantly streamlined our support process, allowing our team to focus on more complex tasks.”
Facing similar challenges at your service desk?
We'd love to understand your situation before proposing anything. Tell us what you're working on.