Oracle Analytics is used by customers worldwide to uncover deep insights about their business, improve collaboration around a single, secure view of all relevant data, and increase agility by quickly spotting patterns and powering data-driven decisions with AI and machine learning.
Oracle Fusion Data Intelligence (FDI) is the next generation of Oracle Fusion Analytics Warehouse, built for Oracle Fusion Cloud Applications. It brings together business data, ready-to-use analytics, and prebuilt AI and machine learning (ML) models to deliver deeper insights and turn decisions into actionable results faster.
The backbone of FDI is the lights-out data pipeline that manages the data warehouse for all customers. For details about the product, visit
https://docs.oracle.com/en/cloud/saas/analytics/25r3/index.html
The FDI Pipeline Data Model team defines the application development language and uses it to build applications that deliver analytic data models for sources such as Fusion, NetSuite, and Salesforce.
As a member of the team, you’ll build and support scalable data models for analytics. You’ll have opportunities to learn and gain hands-on experience with business processes, data processing, semantics, the modern data stack, and AI-driven development using large language models and intelligent agents. You’ll also collaborate on developing language processors, user interfaces, and automation solutions that showcase your creative thinking, all within an agile, innovative environment that values your growth.
You will:
- Learn from senior engineers how to translate business process and analytics requirements into effective data model designs.
- Build, test, and maintain scalable data models and pipelines (see the illustrative sketch after this list).
- Work with QA to validate data accuracy and resolve issues as they arise.
- Troubleshoot and fix routine data or performance problems with support from experienced teammates.
- Create and maintain technical documentation for team reference, knowledge sharing, and even customer-facing use.
- Contribute to automation, language processing, and user interface projects within the Pipeline team.
- Stay curious and keep learning about new tools, technologies, and approaches in data engineering and AI.
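For a flavor of the day-to-day work, here is a minimal, illustrative PySpark sketch of the kind of pipeline transformation the role involves: joining raw facts to a dimension and aggregating them into an analytic model table. The table names, columns, and data below are hypothetical, not part of the actual FDI data model.

```python
# Illustrative sketch only: hypothetical tables and columns, not the FDI data model.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("illustrative_pipeline").getOrCreate()

# Hypothetical raw facts and a dimension; a real pipeline would read these
# from extracted source data rather than inline rows.
orders = spark.createDataFrame(
    [(1, "US", 120.0), (2, "US", 80.0), (3, "DE", 50.0)],
    ["order_id", "country_code", "amount"],
)
countries = spark.createDataFrame(
    [("US", "United States"), ("DE", "Germany")],
    ["country_code", "country_name"],
)

# Conform the facts to the dimension, then aggregate into a model table.
revenue_by_country = (
    orders.join(countries, "country_code", "left")
    .groupBy("country_name")
    .agg(
        F.sum("amount").alias("total_revenue"),
        F.count("order_id").alias("order_count"),
    )
)

revenue_by_country.show()
```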
Qualifications:
You have:
- BS or MS degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- 3+ years of experience developing and delivering data products, data pipelines, or software/data engineering solutions.
- Strong programming skills in Python/PySpark/Scala/Java and SQL.
- Solid experience with cloud databases, data modeling concepts, ETL processes, and at least one public cloud platform (Oracle Cloud, AWS, Azure, or Google Cloud), especially its data services.
- Good understanding of data management and software design fundamentals.
- Proven problem-solving and analytical skills with attention to detail.
- Ability to effectively communicate technical concepts verbally and through design and documentation.
- Interest in or exposure to data science, machine learning, or AI-driven development tools is a plus.
- Willingness and enthusiasm to learn about business processes, data processing, semantics, and the modern data stack.
- Positive attitude, curiosity, and eagerness to contribute new ideas while thriving in an inclusive, innovative, and collaborative team culture.