Senior AI Engineer (RAG/LLM)

  • On-site: Palo Alto, California, United States
  • Engineering

Join Trustero as a Senior AI Engineer! Focus on leveraging LLMs for reasoning and implementing RAG pipelines. Collaborate with the team to enhance AI-driven compliance solutions. In-office, Palo Alto, CA.

Job description

About Trustero

Trustero is an advanced AI application, purpose-built for the Security and Compliance vertical. Our patented AI agents can accurately and consistently handle the most time-consuming jobs in Governance, Risk, and Compliance, such as performing gap analysis, providing remediation guidance, automating questionnaires, collecting and mapping evidence, and more, saving companies hundreds of thousands of dollars and returning hundreds of valuable working hours each month.


Role Overview

As a Senior AI Engineer at Trustero, you will play a crucial role in optimizing and evaluating large language models (LLMs) for reasoning and decision-making processes, as well as implementing RAG pipelines and agentic architectures and workflows.

You will work closely with our engineering team to implement cutting-edge LLM/RAG techniques that enhance the intelligence and performance of our AI-driven Governance, Risk, and Compliance platform.

The ideal candidate has hands-on experience integrating LLMs and working with RAG pipelines, and is passionate about applying these technologies to deliver tangible impact. Familiarity with NLP and ML methods—particularly text classification—or experience fine-tuning LLMs for specific industry verticals would be highly desirable.

Salary Range: $150,000 - $220,000 USD per year, plus stock options, based on experience and qualifications.

Key Responsibilities

  • LLM Integration & Reasoning

    • Develop LLM-based reasoning techniques, including automated decision-making and context understanding.

    • Integrate LLM solutions into the core product to enhance user experiences and streamline compliance workflows.

  • RAG Pipeline & Optimization

    • Chunk large documents and build efficient indexes for retrieval-augmented generation.

    • Run queries against RAG databases and vector stores to retrieve relevant context.

    • Re-rank and optimize retrieved results to provide the highest-quality context for LLM queries (an illustrative sketch of this flow appears after this list).

  • Collaboration & Engineering

    • Collaborate with the engineering team to integrate LLM solutions into our infrastructure, ensuring compatibility and scalability.

    • Partner with product managers and engineers to align functionality with the product vision and deliver tangible user value.

    • Conduct code reviews, write clean, maintainable code, and follow software engineering best practices.

  • Continuous Improvement

    • Continuously experiment with cutting-edge ML techniques and frameworks, particularly those related to LLMs and emerging technologies.

    • Monitor and evaluate LLM system performance in production, iterating on solutions and model selection to ensure high reliability.
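
To make the RAG responsibilities above concrete, here is a minimal, self-contained Python sketch of the chunk → index → retrieve → re-rank flow. The embed and rerank functions are toy placeholders (a production pipeline would use a real embedding model, a vector store, and a cross-encoder or LLM-based re-ranker, for example via LlamaIndex or LangChain); this is an assumed illustration of the general technique, not Trustero's implementation.

```python
# Illustrative sketch only: chunk_document, embed, and rerank are hypothetical
# stand-ins for a production chunker, embedding model, and re-ranker.
from dataclasses import dataclass
import math


@dataclass
class Chunk:
    doc_id: str
    text: str
    vector: list[float]


def chunk_document(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


def embed(text: str) -> list[float]:
    """Toy bag-of-words embedding; a real pipeline would call an embedding model."""
    vec = [0.0] * 64
    for token in text.lower().split():
        vec[hash(token) % 64] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def build_index(docs: dict[str, str]) -> list[Chunk]:
    """Chunk and embed every document into an in-memory stand-in for a vector store."""
    return [
        Chunk(doc_id, piece, embed(piece))
        for doc_id, text in docs.items()
        for piece in chunk_document(text)
    ]


def retrieve(index: list[Chunk], query: str, k: int = 10) -> list[Chunk]:
    """First-pass retrieval by cosine similarity (vectors are already normalized)."""
    q = embed(query)
    return sorted(index, key=lambda c: sum(a * b for a, b in zip(c.vector, q)), reverse=True)[:k]


def rerank(candidates: list[Chunk], query: str, top_n: int = 3) -> list[Chunk]:
    """Second-pass re-ranking; keyword overlap stands in for a cross-encoder or LLM judge."""
    q_terms = set(query.lower().split())
    return sorted(candidates, key=lambda c: len(q_terms & set(c.text.lower().split())), reverse=True)[:top_n]


if __name__ == "__main__":
    docs = {
        "access_policy": "Access reviews are performed quarterly by the security team.",
        "gap_report": "The gap analysis found missing evidence for access review controls.",
    }
    question = "How often are access reviews performed?"
    top_chunks = rerank(retrieve(build_index(docs), question), question)
    context = "\n\n".join(c.text for c in top_chunks)  # grounding context passed to the LLM prompt
```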

Job requirements

Requirements

  • 5+ years of software engineering experience with a focus on ML, NLP, LLMs, or RAG.

  • Bachelor’s degree in Computer Science, Software Engineering, or a related field (advanced degrees are a plus).

  • Proven track record with large language models (LLMs), applying them to reasoning and decision-making tasks.

  • Expertise in LLM pipelines and RAG frameworks (e.g., Haystack, LlamaIndex, LangChain) and strong programming skills in Python or similar languages.

  • Solid understanding of cloud platforms (AWS, GCP, or Azure) for production deployment and performance monitoring.

  • Excellent collaboration and communication skills, able to work effectively with cross-functional engineering teams.

  • Strong problem-solving abilities, adept at navigating complex technical challenges in a fast-paced environment.

Preferred Qualifications

  • Proficiency in Go, TypeScript, gRPC, and Protocol Buffers (Protobuf).

  • Hands-on experience with LLM model tuning and performance benchmarking.

  • Familiarity with NLP and ML techniques for text classification.

  • Exposure to compliance, governance, or security-related platforms.

  • Knowledge of microservices architecture, DevOps, and containerization tools (Docker, Kubernetes).
