Description
Join as the first engineer and researcher behind GenAI Protect, building LLM-powered discovery, classification and policy-enforcement services that secure enterprise generative-AI usage.
Responsibilities
- Design and implement scalable inference services (Python/FastAPI, Triton, ONNX) with GPU orchestration (a minimal illustrative sketch follows this list).
- Develop and fine-tune transformer models for prompt inspection, PII/red-flag detection and intent analysis.
- Collect datasets, build evaluation pipelines and drive continuous model improvement.
- Collaborate with Threat-Intel and Product to respond to emerging GenAI attack vectors.
- Publish internal/external tech blogs; help hire and mentor future AI-security engineers.
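As a rough illustration of the kind of prompt-inspection service described in the first two bullets, the sketch below wires a FastAPI endpoint to an off-the-shelf Hugging Face token-classification model. The endpoint path, the placeholder model (dslim/bert-base-NER), and the confidence threshold are hypothetical choices for this example, not a description of GenAI Protect's actual stack, which would serve fine-tuned models via Triton/ONNX under GPU orchestration.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Placeholder model for illustration only; a production service would serve a
# fine-tuned classifier through Triton/ONNX with GPU orchestration.
pii_detector = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",
)

class PromptIn(BaseModel):
    text: str

@app.post("/inspect")
def inspect(prompt: PromptIn):
    # Flag named entities the toy model detects (people, orgs, locations)
    # as stand-ins for the PII/red-flag categories a real policy would define.
    entities = pii_detector(prompt.text)
    findings = [
        {"type": e["entity_group"], "text": e["word"], "score": float(e["score"])}
        for e in entities
        if e["score"] >= 0.80  # hypothetical confidence threshold
    ]
    return {"allowed": not findings, "findings": findings}
```

Assuming the file is saved as app.py, the service can be started with `uvicorn app:app` and queried by POSTing JSON such as `{"text": "..."}` to /inspect.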
Desired Background
- 8+ years of software/ML engineering; deep hands-on experience with PyTorch/TensorFlow and NLP.
- Experience deploying ML in low-latency SaaS environments under compliance constraints.
- Strong background in data privacy, DLP or AI-security research; publications/patents a plus.
- Self-starter comfortable defining architecture, tooling and roadmap from scratch.