Charter:
Be a founding member of a team building the first accurate AI systems for drug toxicity prediction, replacing lab and animal experiments
About Axiom and the role:
We’re building AI systems for drug safety and toxicity assessment. Drug toxicity causes roughly half of all drug program failures; by tackling it, we can help drug discovery teams across the industry bring new medicines to patients far faster. We’re looking for a data engineer who’s excited to own the pipelines, systems, and tooling that turn raw chemical, biological, and clinical data into ML-ready training data and customer-ready insights. You’ll work closely with our ML, lab, and product teams to build LLM-driven literature research and data platforms, scale inference for image and graph neural networks, automate ETL from diverse sources, and ensure the integrity of datasets that drive critical decisions both internally and externally. This role is ideal for someone who wants to build clean, reliable systems that directly impact the success of hundreds of drug programs.
What we’re looking for:
We want to hire people who inspire us and level up the entire team. They should be high-energy, high-agency, and have great taste for what matters. They should run a relentless “observe, orient, decide, act” loop, constantly identifying what needs to happen and getting it done. They need to be technically excellent, obsessive masters of their craft, with a deep curiosity that keeps them at the frontier of tech and helps them interface between AI, engineering, product, biology, chemistry, and business. They could work in big tech, but it wouldn’t satisfy them. They want to go on an adventure that will be brutally challenging, and to share in the rewards and satisfaction at its end.
What you will be doing:
Build and maintain the core data systems for Axiom’s research platform, including ingestion, processing, storage, and serving
Work with scientists to understand their data needs and create simple APIs for accessing chemical and biological datasets
Architect LLM systems to curate, clean, and analyze human clinical trial data, and build the evaluation and observability tooling for those systems
Develop distributed systems to run large-scale LLM jobs that clean and curate biological and clinical data
Set up quality checks, testing tools, and monitoring systems to ensure data and model outputs stay accurate and reliable
Expertise that gets us interested:
Leading large-scale data platform buildouts serving multiple internal teams or external users
Designing and maintaining high-throughput data systems capable of processing petabytes of data
Building AI- or LLM-powered data systems, particularly for research workflows and retrieval use cases
Gathering technical requirements from end users and translating them into effective data infrastructure
Taking ownership of data systems at an early-stage startup and significantly boosting team productivity
Key criteria:
Strong proficiency in Python and core data libraries such as Pandas, NumPy, and the broader Python data ecosystem
Hands-on experience building distributed systems from scratch using tools like Kubernetes, Slurm, Modal, Anyscale, Ray, Daft, Dask, or Spark
Passion for large-scale data processing and building systems for high-performance computation
Enthusiasm for collaborating with researchers on complex scientific and technical problems
Comfort working in fast-changing environments with evolving research needs
Solid DevOps background, including CI/CD systems, cloud platforms (AWS, GCP, Azure), Terraform, and compute provisioning
Deep, obsessive curiosity about both the science and the business driving the work