Braze Innovation & Technology Culture
Braze Employee Perspectives
How do your teams stay ahead of emerging technologies or frameworks?
Teams are encouraged to experiment with new technologies during quarterly hackathons, which lets them explore and learn new approaches while contributing to Braze’s products and goals. One example of an innovative project from the last hackathon is an observability pipeline that feeds metrics from our Kubernetes-based job runner through Datadog into a custom Streamlit visualization tool, which flags overprovisioned ML pipeline jobs and lets us right-size our infrastructure more accurately.
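For a flavor of what that dashboard involves, here is a minimal sketch of the Streamlit side, assuming job-level metrics (requested versus peak CPU and memory) have already been exported from Datadog into a CSV. The file name and column names are illustrative stand-ins, not Braze’s actual schema.

```python
# Minimal sketch of an "overprovisioned jobs" dashboard. It assumes per-job
# resource metrics (requested vs. peak-used CPU and memory) were already
# exported from Datadog; file and column names below are illustrative only.
import pandas as pd
import streamlit as st

st.title("Overprovisioned ML pipeline jobs")

# Hypothetical export: one row per job with requested and observed peak usage.
jobs = pd.read_csv("job_resource_metrics.csv")
# Expected columns: job, cpu_request, cpu_peak, mem_request_gb, mem_peak_gb

# Utilization = what the job actually used relative to what it reserved.
jobs["cpu_util"] = jobs["cpu_peak"] / jobs["cpu_request"]
jobs["mem_util"] = jobs["mem_peak_gb"] / jobs["mem_request_gb"]

# Flag jobs using less than a configurable fraction of their reservation.
threshold = st.slider("Utilization threshold", 0.1, 1.0, 0.5)
overprovisioned = jobs[(jobs["cpu_util"] < threshold) | (jobs["mem_util"] < threshold)]

st.metric("Jobs flagged", len(overprovisioned))
st.dataframe(overprovisioned.sort_values("cpu_util"))
st.bar_chart(overprovisioned.set_index("job")[["cpu_util", "mem_util"]])
```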
Our product and engineering teams are actively experimenting with agentic coding techniques in their daily work. This is a fast-moving field, with technologies such as the Model Context Protocol, Retrieval-Augmented Generation, and agent skills developing quickly. We hold a biweekly AI lunch-and-learn where we share experiences and best practices: for example, multi-agent workflows, different RAG approaches, and lessons from applying the latest models (via Cursor and command-line interfaces) to our codebase.
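As a toy illustration of the retrieval step those RAG discussions revolve around (not any specific Braze pipeline), the sketch below picks the notes most relevant to a question with TF-IDF similarity and splices them into a prompt. The documents are made up, and the final LLM call is intentionally left out.

```python
# Toy illustration of the retrieval step in Retrieval-Augmented Generation:
# pick the documents most relevant to a question and splice them into the prompt.
# The documents are made up, and the actual LLM call is intentionally left out.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The job runner schedules ML pipeline jobs on Kubernetes with per-job resource requests.",
    "Decisioning Studio uses reinforcement learning to choose message variants per user.",
    "Hackathon projects are demoed at the end of each quarter.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question by TF-IDF cosine similarity."""
    vectorizer = TfidfVectorizer().fit(docs + [question])
    scores = cosine_similarity(vectorizer.transform([question]), vectorizer.transform(docs)).ravel()
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(question: str) -> str:
    """Assemble the augmented prompt that would be sent to whichever model is being evaluated."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How are ML jobs scheduled?"))
```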
Finally, it’s helpful to simply experiment with agentic tools hands-on, both in personal “toy projects” and in our core products. We have a very active Slack channel (#vibe-coding) where engineers learn from each other and share experiences and resources.
Can you share a recent example of an innovative project or tech adoption?
BrazeAI Decisioning Studio™ is a platform that leverages Reinforcement Learning to automate and optimize customer interactions. In simple terms, instead of a marketer guessing which message version is best or running slow, manual A/B tests, an RL agent continuously learns from user engagement (clicks, conversions) to dynamically serve the optimal content at the optimal time via the optimal channel. However, configuring RL environments is notoriously difficult; it requires precise definitions of states, actions, and reward functions.
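For intuition on the kind of learning loop involved, here is a minimal Thompson-sampling bandit that picks among message variants and updates from click feedback. It is an illustration only, not Decisioning Studio’s implementation, which also conditions on user context, send time, and channel.

```python
# Minimal Thompson-sampling bandit over message variants, updated from click feedback.
# Intuition-building sketch only, not Decisioning Studio's actual models.
import numpy as np

rng = np.random.default_rng(42)

class MessageBandit:
    def __init__(self, variants):
        self.variants = variants
        # Beta(1, 1) prior on each variant's click-through rate.
        self.successes = np.ones(len(variants))
        self.failures = np.ones(len(variants))

    def choose(self) -> int:
        """Sample a plausible CTR for each variant and send the current best guess."""
        samples = rng.beta(self.successes, self.failures)
        return int(samples.argmax())

    def update(self, arm: int, clicked: bool) -> None:
        """Fold the observed engagement back into that variant's posterior."""
        if clicked:
            self.successes[arm] += 1
        else:
            self.failures[arm] += 1

# Simulated campaign: variant "subject_b" has the highest true click-through rate.
true_ctr = [0.02, 0.05, 0.03]
bandit = MessageBandit(["subject_a", "subject_b", "subject_c"])
for _ in range(10_000):
    arm = bandit.choose()
    bandit.update(arm, clicked=rng.random() < true_ctr[arm])

print("sends per variant:", (bandit.successes + bandit.failures - 2).astype(int))
```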
To help our forward-deployed data scientists configure Decisioning Studio for new customers, we developed the BrazeAI Decisioning Assistant, an internal agentic application designed to act as a co-pilot for setting up and maintaining these complex ML configurations. Unlike a standard RAG chatbot that only retrieves documentation, this assistant creates a bridge between LLMs and our runtime environment. It can actively verify proposed configurations against known standards, execute SQL queries to analyze model performance, and autonomously diagnose issues by interpreting real-time data logs. The goal is to shift our forward-deployed service posture from manual configuration and troubleshooting to automated, intelligent verification.
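Schematically, the pattern looks something like the sketch below: the assistant exposes a small set of tools, and a dispatcher executes the model’s proposed calls against the runtime environment and returns the results. The tool names, the required-keys check, and the SQLite stand-in are hypothetical, not the Decisioning Assistant’s actual internals.

```python
# Schematic of the tool-calling pattern behind such an assistant: the LLM proposes
# a tool call, a dispatcher runs it, and the result feeds the next reasoning step.
# Tool names, checks, and the SQLite stand-in are hypothetical placeholders.
import json
import sqlite3

REQUIRED_KEYS = {"actions", "reward_signal", "decision_window_hours"}  # illustrative standard

def verify_config(config_json: str) -> dict:
    """Check a proposed RL configuration against known structural standards."""
    config = json.loads(config_json)
    missing = sorted(REQUIRED_KEYS - config.keys())
    return {"valid": not missing, "missing_keys": missing}

def run_sql(query: str) -> list:
    """Run a read-only query against the metrics store (SQLite here as a stand-in)."""
    if not query.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT queries are allowed")
    with sqlite3.connect("model_metrics.db") as conn:
        return conn.execute(query).fetchall()

TOOLS = {"verify_config": verify_config, "run_sql": run_sql}

def handle_tool_call(name: str, arguments: dict):
    """Dispatch a tool call proposed by the LLM and return its result to the agent loop."""
    return TOOLS[name](**arguments)

# Example: the model asks to validate a draft configuration before deployment.
print(handle_tool_call("verify_config", {"config_json": '{"actions": ["email_a", "email_b"]}'}))
```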
How does your culture support experimentation and learning?
Primarily, by tackling tough problems and building exciting AI products. Productizing reinforcement learning at scale for AI decisioning is a cutting-edge challenge that requires significant research and experimentation. That makes our product interesting for engineers to work on, in addition to being valuable for our customers. Outside of the core product work, we encourage learning via regular hackathons, provide a generous learning stipend to spend on materials, courses, and conferences, and try to match engineers and applied scientists to areas of work that are particularly interesting to them. Right now, we’re tackling scalability by optimizing contextual bandit algorithms in Spark + Scala, building a next-generation marketer UI for Decisioning Studio inside the Braze Platform, and investigating how to apply causal inference techniques so our AI models learn faster from limited data. Simply executing on our ambitious roadmap requires a lot of learning and growth, which my team and I enjoy.
