The Sr. Data Architect will design, build, and deploy generative AI and data integration solutions leveraging Databricks, PyTorch, and modern ML techniques. The role requires deep technical expertise in building AI/ML pipelines and embedding models, and in integrating ServiceNow data into enterprise-scale architectures.
This is a highly visible role in which the architect will own the areas outlined below.
Key Responsibilities:
- Develop and deploy generative AI solutions and embedding pipelines using Databricks.
- Engineer ML workflows and data pipelines in Databricks, integrating with enterprise data platforms.
- Analyze and model ServiceNow data to extract insights and drive automation.
- Collaborate with cross-functional teams to understand business needs and translate them into technical solutions.
- Present and explain past projects hosted on GitHub, including walkthroughs of code and architecture.
- Communicate findings and recommendations clearly to both technical and non-technical stakeholders.
Required Skills & Experience:
- Strong experience with PyTorch, transformers, and embedding techniques.
- Strong experience building chat-completion workflows across multiple LLMs (GPT, Gemini, Claude), including iterative prompt tuning.
- Hands-on expertise with Databricks (including Unity Catalog, MLflow, and notebooks).
- Excellent communication and presentation skills.
- Strong experience with GitHub (commits, pull requests, Databricks integration).
- Active GitHub portfolio with at least one relevant project demonstrating generative AI or NLP capabilities.
- Ability to lead technical walkthroughs and articulate design decisions.
- Experience deploying models/code in a production setting.
Preferred Qualifications:
- Databricks certifications (e.g., Data Engineer Associate, Machine Learning Professional, Generative AI Engineer Associate).
- Familiarity with ServiceNow data models.
- Familiarity with RAG architecture and vector databases.
- Background in enterprise data architecture and governance.