Tina K · Ceraluna Labs
Interview Preparation

Agentic AI
Platform Owner

Company: enercity Netz GmbH, Hannover · Job ID: J2026011 · Type: Hybrid, Full-time
Requirement Match Score
10 / 12
Education · Exceeds · PhD agent-based models + MBA equiv.
Multi-Agent Systems · Rare Match · 240k agents PhD + 4-agent LLM platform
LLM / RAG · Strong · GPT-5, LangChain, Azure OpenAI, production
AI Governance · Direct · EU AI Act at Wintershall + hands-on
Cloud & MLOps · Strong · Azure AI/ML, DevOps, Docker, MLflow
Scalable Platforms · Strong · MLOps frameworks + config-driven design
Team Leadership · Direct · 6-person team, mentored 4, budget mgmt
Enablement · Strong · 120+ staff change mgmt, conferences
Communication · Strong · Published, lectured, presented
Languages · Exact · German native, English fluent
Purview / Defender · Gap (30d ramp) · Azure ecosystem transfers; concepts known
M365 Graph API · Gap (30d ramp) · API integration experience transfers

🎤 Your 60-Second Pitch

Memorise this

"I'm a data scientist with a PhD focused on complex agent-based models — my doctoral research simulated 240,000 interacting agents on high-performance computing clusters. At Wintershall Dea, I spent four years taking AI from concept to production — including LLM-powered document intelligence, EU AI Act-compliant governance, and MLOps frameworks. I led a 6-person team and ran a change management program for 120 people.

On the side, I built a full agentic data platform from scratch — four specialised LLM agents orchestrated by 24 automated workflows, running without human intervention. That taught me what actually breaks in agentic systems: silent failures, missing authorisation gates, and governance gaps that only surface in production.

I bring the combination: enterprise AI governance from Wintershall, academic depth in agent-based systems from my PhD, and practical builder scars from operating agents solo."

🗺 Evidence Map: Requirement → Proof

Architecture, Strategy & Roadmap

Wintershall

Directed end-to-end delivery of multiple AI/ML projects. Implemented MLOps frameworks (CI/CD, monitoring, governance). Delivered LLM document intelligence on Azure — 99% accuracy on multilingual PDFs. Presented "MLOps Design Principles" at EAGE Digital 2024.

PhD

240,000-agent simulations on HPC clusters. Published research on systemic risk and failure cascades in agent networks.

CapeTownData

Multi-agent pipeline: classification → dedup → geocoding → translation. Config-driven architecture — new use cases via JSON, not code. 11 architectural decisions documented with rationale.
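The config-driven idea above can be sketched in a few lines. This is a hypothetical illustration, not the actual CapeTownData code: the step names, the `datasets` JSON shape, and `run_dataset` are all assumptions, showing only the principle that a new use case is a config entry rather than new code.

```python
# Hypothetical sketch of a config-driven pipeline: each dataset is a JSON
# entry naming its steps; adding a use case means editing config, not code.
import json

# Registry of reusable pipeline steps (illustrative implementations)
PIPELINE_STEPS = {
    "classify": lambda rows: [{**r, "category": "unknown"} for r in rows],
    "dedup": lambda rows: list({r["id"]: r for r in rows}.values()),
}

def run_dataset(config: dict, rows: list[dict]) -> list[dict]:
    """Apply the steps named in the config, in order."""
    for step in config["steps"]:
        rows = PIPELINE_STEPS[step](rows)
    return rows

# A new use case is just a JSON definition
config = json.loads('{"name": "events", "steps": ["classify", "dedup"]}')
result = run_dataset(config, [{"id": 1}, {"id": 1}, {"id": 2}])
```

The registry pattern is what makes "new use cases via JSON, not code" work: code only changes when a genuinely new step type is needed.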

AI Governance (EU AI Act, GDPR)

Wintershall

Built EU AI Act risk classification workflows. Created governance documentation for full AI model lifecycle. Delivered secure solutions in a regulated energy company.

CapeTownData

Human-in-the-loop gate: All AI-classified data arrives locked, requires explicit approval. Data minimisation: Tiered access — precise data only for authorised users, enforced by automated tests. Learned from mistakes: Almost leaked precise geographic data to free-tier users. Built 3,085 automated tests to prevent it.

Zero-Touch Operations & Self-Healing

Wintershall

MLOps frameworks with CI/CD, monitoring, governance. Anomaly detection (LSTM) for safety-critical monitoring. Docker containerised workflows.

CapeTownData

24 automated zero-touch workflows. Self-healing: DB connection recovery, stale-data guards, graceful LLM degradation. Fail-loud principle: every failure surfaces immediately. Compliance-aware deployment with dry-run flags and 3,085-test CI gate.
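The combination of graceful LLM degradation and the fail-loud principle can be sketched like this. The `call_llm` stub is an assumption standing in for a real client; the point is that the pipeline keeps running on a deterministic fallback while the failure is surfaced immediately instead of being swallowed.

```python
# Sketch of "graceful degradation, fail loud": on LLM failure, fall back
# to a deterministic default so the pipeline survives, but log the error
# loudly so the failure is never silent.
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("agent")

def call_llm(text: str) -> str:
    raise ConnectionError("LLM endpoint unreachable")  # simulated outage

def classify(text: str) -> tuple[str, bool]:
    """Return (label, degraded). Never silently swallow the error."""
    try:
        return call_llm(text), False
    except Exception:
        log.exception("LLM classification failed; using fallback")  # fail loud
        return "unclassified", True                                 # degrade gracefully

label, degraded = classify("some record")
```

Returning the `degraded` flag alongside the label lets downstream monitoring count degraded runs, so an outage shows up in metrics as well as logs.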

LLM/Agent Frameworks & RAG

Wintershall

LLM document intelligence (GPT-5, LangChain, Hugging Face, Azure OpenAI). RAG in practice — retrieval + generation for enterprise knowledge.

PhD

240k-agent simulations. Published: how information propagates through agent networks.

CapeTownData

4 production LLM agents. Cascade architecture: rules first (90% of cases), AI last. Cost-aware: $0.001–0.005/run with safety limits.
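The rules-first cascade can be illustrated with a short sketch. The rule table and the `llm_classify` stub are assumptions; the pattern is that cheap deterministic rules resolve the common cases and only the long tail pays for an LLM call.

```python
# Minimal sketch of a cascade: deterministic rules first, AI last.
RULES = {
    "restaurant": "food",
    "school": "education",
}

def llm_classify(name: str) -> str:
    return "other"  # stand-in for the expensive LLM call

def classify(name: str) -> tuple[str, str]:
    """Return (category, source) so cost and coverage can be audited."""
    for keyword, category in RULES.items():
        if keyword in name.lower():
            return category, "rule"   # the common ~90% stops here
    return llm_classify(name), "llm"  # AI only for the long tail

assert classify("Corner Restaurant") == ("food", "rule")
```

Tagging each result with its source ("rule" vs "llm") is what makes the cost claim auditable: the share of LLM calls per run is a single aggregation away.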

Team Leadership & External Experts

Wintershall

Led 6-person cross-functional team. Mentored 4 data scientists. Worked with Azure/Microsoft partners.

Econometrix

Managed project budgets, resource planning, P&L across sectors. On time, within financial targets.

Enablement & Change Management

Wintershall

Change management for 120+ staff. Agile ceremonies, sprint reviews, workshops.

Academic

University lecturer at UCT. Conference presenter (ADIPEC 2022, EAGE Digital 2024). Published researcher.

Your 5 Differentiators

What other candidates probably don't have:

01
PhD literally on agent-based models
240,000 agents on HPC clusters. Published research on agent network behaviour, failure cascades, emergent dynamics. Most AI platform candidates have never studied agent theory at this depth.
02
Enterprise governance + founder scars
EU AI Act compliance at Wintershall (the proper way). Built governance from scratch in own project (the hard way). You know what the framework says AND what actually breaks.
03
Change management at scale
120+ people, AI adoption program. Most technical architects can't do organisational change. Critical for Agent 365 adoption across business units.
04
Energy sector domain
4 years at Wintershall Dea. enercity is energy. You understand the domain, the risk appetite, the regulatory environment. No ramp-up needed on industry context.
05
Three layers of agent experience: academic + enterprise + founder
PhD (agent theory, 240k agents) → Wintershall (enterprise LLM production, Azure, governance) → CapeTownData (built 4 LLM agents from scratch, 24 zero-touch workflows, made every governance mistake personally). This combination is extremely rare.

💬 7 Interview Questions & Answers

1 "Tell us about your experience with agent-based systems."
"I have three layers. In my PhD, I built complex agent-based models with 240,000 agents on HPC clusters — studying how agents interact, how information spreads, how failures cascade. At Wintershall, I delivered LLM-powered AI systems in production — document intelligence with GPT-5 on Azure, 99% accuracy on multilingual PDFs. In my own project, I built a full agentic platform from scratch — four specialised LLM agents, 24 automated workflows. That taught me what actually breaks: silent failures, missing authorisation gates, cost runaway, privacy leaks. That combination — academic foundations, enterprise delivery, founder-level scars — is what I'd bring to Agent 365."
2 "Tell us about a time an AI system failed in production."

Pick one based on vibe

Enterprise story (safer, shows team leadership):

"At Wintershall, our LSTM anomaly detection for well integrity flagged too many false positives. Operators stopped trusting the alerts — dangerous in safety-critical monitoring. We redesigned the alerting layer with confidence thresholds and physics-based validation. Lesson: false positives train people to ignore alerts. In agentic systems, trust is everything."

Founder story (bolder, shows hands-on depth):

"My LLM deduplication agent found 17 duplicate groups but removed zero. I built the authorisation flag in the code but forgot to wire it through the orchestration layer. The agent could detect but wasn't authorised to act. That's a fundamental agentic AI failure mode — the gap between capability and authorisation. Now every new capability gets traced end-to-end before it ships."
3 "How do you handle AI governance and the EU AI Act?"
"At Wintershall, I built EU AI Act risk classification workflows and governance documentation in a regulated energy company. Compliance wasn't optional. In my own platform, I learned governance from the other side — by making the mistakes. I almost leaked precise data to unauthorised users. That taught me: governance isn't a document — it's automated checks that run on every deployment. For enercity: the regulatory framework from Wintershall, enforced through automated compliance in the pipeline. Governance should be code, not paperwork."
4 "How would you approach building Agent 365's architecture?"
"Four principles: 1. Cascade, don't replace. Rules handle simple cases, AI handles what rules can't. 90% without an LLM call. Costs down, reliability up. 2. Governance gates at every action. Agents recommend; execution requires authorisation matching the user's Entra ID role and the action's risk level. 3. Config-driven extensibility. New use cases = JSON definition, not code. What the agent accesses, what it can do, who approves. Scales from 1 to 50 agents. 4. Fail-loud monitoring. Silent failures are catastrophic. Every agent gets health checks, failure surfaces immediately. First 90 days: map M365 stack to these patterns, build one reference agent with full governance, use it as template."
5 "You don't have Purview/Defender experience. How will you ramp up?"
"Correct. But I've been in the Azure ecosystem for four years: Azure AI/ML, Azure OpenAI, Azure Functions, Azure DevOps. And I've built the concepts Purview and Defender implement: data classification, DLP, identity governance, audit logging. The principles are identical — I need the specific product interfaces. With my Azure foundation, 30 days: Microsoft Learn for foundations, then a hands-on prototype demonstrating governance patterns I already know, implemented in the M365 stack."
6 "How do you balance innovation with compliance?"
"Make compliance invisible to developers. At Wintershall, governance checks were in the CI/CD pipeline — you couldn't deploy without passing them. In my project, 3,085 tests verify privacy boundaries on every change. Innovation slows when compliance is manual review. It stays fast when compliance is automated pipeline gates. For Agent 365: bake Purview classification, DLP rules, Entra ID checks into the deployment pipeline. Developers build agents; the pipeline ensures compliance. Guardrails invisible until you try to break them."
7 "How would you drive AI adoption with business units?"
"At Wintershall, I ran change management for 120+ people. Biggest lesson: people adopt tools that solve their specific pain, not tools that are 'strategically important.' 1. Listen first. Meet each unit. Find their most painful repetitive process. Don't pitch Agent 365 — ask what wastes their time. 2. Build one quick win. Highest pain, lowest risk. MVP in 2–3 weeks. 3. Let success spread. First unit = reference case. Others come asking. The turning point at Wintershall: we automated a document task that took hours, got it to 99% accuracy. After that, teams came to us with ideas."

🚩 Red Flag Preparation

⚠ "Your last title was Senior Data Scientist, not Platform Owner."
"The title was Senior Data Scientist. The scope was platform-level: I designed the MLOps framework, led the team, drove governance strategy, ran change management for 120 people. And in my own project, I am literally the platform owner — architecture, governance, monitoring, deployment, end-to-end. The transition is from implicit platform ownership to explicit."
⚠ "You've been out of corporate employment since November 2025."
"I used the time to build a production agentic platform from scratch — not a tutorial, a real product: 4 LLM agents, 24 automated workflows, 3,085 tests, tiered data governance. I wanted hands-on founder experience with agentic AI before stepping into a platform ownership role. I'm sharper now than when I left Wintershall — because I've done every part of the stack myself."
⚠ "Your Azure experience is with OpenAI, not Purview/Defender."
"Correct. But three things: First, I've been in the Azure ecosystem for 4 years — AI/ML, OpenAI, Functions, DevOps. The ecosystem knowledge transfers. Second, I've built the concepts these tools implement — classification, DLP, identity governance, audit trails. Third: 30 days. With my Azure foundation and Microsoft Learn, I'll be productive. I've ramped on new Microsoft tooling before — the principles transfer, only the interface changes."

🔤 Vocabulary Translation

Use their language. Map your experience to their words:

Your Experience → Say This
240k-agent PhD simulations → Complex multi-agent systems at scale
Wintershall MLOps CI/CD → Automated validation pipelines with compliance gates
LLM doc intelligence on Azure → Enterprise LLM deployment on the Microsoft stack
EU AI Act risk classification → Regulatory-compliant AI governance framework
Change mgmt for 120 staff → Organisational AI enablement and adoption
4-tier dedup cascade → Deterministic-first, AI-last agent architecture
publish_locked pattern → Human-in-the-loop governance gate
Tiered data access → Role-based data classification with DLP enforcement
24 automated workflows → Zero-touch agent orchestration
3,085 automated tests → Automated compliance validation in CI/CD
Graceful LLM degradation → Resilient agent design — AI enhances, never blocks
Config-driven datasets.json → Scalable, config-driven platform extensibility
Econometrix budget mgmt → External vendor coordination with P&L accountability

📅 Your 90-Day Plan

Days 1–30
Learn & Map
  • Deep-dive M365 security: Purview, Defender, Entra ID, Graph API
  • Map current Agent 365 architecture and governance posture
  • Interview stakeholders: product, compliance, security, business units
  • Document gaps. Identify quick-win use case for first MVP.
Days 31–60
Build & Prove
  • Build reference agent with full governance: Entra ID → Purview → DLP → Defender → audit trail
  • Establish agent lifecycle: develop → validate → deploy → monitor → retire
  • Automated compliance checks in deployment pipeline
  • First business unit engagement: pain point → MVP design
Days 61–90
Scale & Enable
  • Generalise reference agent into reusable template
  • First enablement workshop with pilot business unit
  • Publish Agent 365 governance playbook
  • Present 6-month roadmap to leadership, prioritised by business value

🤔 Questions to Ask Them

  1. "How mature is Agent 365 today?" — Agents in production, or greenfield?
  2. "Biggest pain point?" — Reliability, compliance, scaling, or adoption?
  3. "Purview/Defender for AI workloads?" — Already configured, or part of what I'd build?
  4. "Success at 6 months?" — Governance framework, number of agents, or adoption?
  5. "External experts?" — Microsoft partners, consultancies, or freelancers?
  6. "Team interaction?" — How does this role work with M365 engineering and Cloud & Security?
  7. "Strategy ownership?" — Executing existing AI strategy, or shaping it?

Cover Letter — Key Paragraph

Dear Sir or Madam,

As a PhD in business informatics with a focus on complex agent-based models (240,000 agents on HPC clusters) and four years of experience developing production AI systems at Wintershall Dea, I bring exactly the combination this role requires: a deep understanding of multi-agent systems, practical experience with LLM pipelines on Azure (GPT-5, LangChain, RAG), and proven EU AI Act-compliant governance in a regulated energy company.

At Wintershall Dea, I implemented MLOps frameworks, led a 6-person team, and ran a change management program for 120+ employees. In parallel, as a founder I built a complete agentic AI platform: four specialised LLM agents, orchestrated by 24 automated zero-touch workflows. This hands-on experience showed me where agentic systems actually fail: silent outages, missing authorisation gates, and governance gaps that only surface when it is too late.

I would like to bring this combination of enterprise experience, academic depth, and founder pragmatism to enercity as Agentic AI Platform Owner.