Job Roles and Responsibilities:
1. Data Pipeline Design and Development: Collaborate with data scientists,
analysts, and other stakeholders to understand data requirements. Design,
develop, and implement efficient data pipelines that integrate diverse data
sources and transform raw data into usable formats.
2. Cloud Data Platforms: Apply your expertise with cloud-based data
platforms, especially Azure, to architect, deploy, and maintain scalable
and reliable data infrastructure.
3. Databricks Expertise: Demonstrate a strong understanding of Databricks and
its capabilities, using it as a primary platform for big data processing,
analytics, and machine learning.
4. ETL (Extract, Transform, Load) Processes: Implement ETL processes to
extract data from diverse sources, transform it into suitable formats, and
load it into the data warehouse or analytical systems.
5. Data Security and Compliance: Ensure data security and compliance with
data privacy regulations throughout the data engineering process.
6. Collaboration: Work closely with cross-functional teams, including data
scientists, analysts, and business stakeholders, to understand data
requirements, share insights, and provide technical support.
7. Troubleshooting and Issue Resolution: Proactively identify data-related
issues and provide timely resolutions to maintain smooth data operations.
8. Documentation: Maintain comprehensive documentation for data pipelines,
architecture, and processes so that team members can readily understand
and contribute to data engineering initiatives.
Experience: 4 to 7 Years
Location: Bangalore, Hyderabad, or Noida
Work Mode: Hybrid
Notice Period: 15 Days or Less