Responsibilities
Design and deploy AI agent workflows to automate recurring analytical tasks, data summarization, and insight generation pipelines.
Evaluate and integrate LLM-based tools into the data team's workflow, assessing their reliability, accuracy, and fitness for analytical use cases.
Collaborate with product and business teams to define analytical questions, success metrics, and KPIs.
Build and maintain the analytics foundation using SQL and dbt, enabling reliable reporting and self-serve analytics.
Design, build, and maintain Tableau dashboards that bring metrics to life and support day-to-day decision-making.
Perform A/B testing and experimentation, including experiment design, statistical inference, significance testing, and result interpretation.
Perform ad-hoc, exploratory, and statistical analyses to uncover insights and validate hypotheses.
Communicate findings clearly to both technical and non-technical stakeholders, translating data into actionable recommendations.
Partner with stakeholders to iterate on metrics, dashboards, and analyses as business needs evolve.
Requirements
Hands-on experience with AI agent frameworks (e.g., LangChain, LlamaIndex, CrewAI, or similar) and demonstrated ability to build and deploy agentic systems in a production or near-production context.
Proven experience with prompt engineering and with evaluating LLM outputs for data-related tasks such as automated reporting, anomaly narration, or natural language querying.
Experience orchestrating multi-step AI pipelines that combine LLMs with structured data sources, APIs, or internal tooling.
Strong SQL and Python skills for data analysis and modeling.
Experience with dbt for analytics engineering workflows.
Experience building dashboards in Tableau (or similar BI tools).
Solid foundation in statistics, experimentation, and hypothesis testing.
Ability to work cross-functionally and communicate insights effectively.