Sr Officer-Data Engineer

Job Description

Role Purpose

Responsible for developing, constructing, testing, and maintaining robust data pipelines and architectures that support the collection, storage, and processing of data. You will work closely with data scientists, analysts, and business teams to ensure that the data infrastructure is optimized for analytics and reporting, enabling data-driven decision-making across the organization.

Scope of Work

Operational Delivery Excellence

Key Activities:
  • Deliver and lead project services to customers.
  • Provide consultation to the team as a data engineering expert.
Deliverables:
  • Services delivered to customers successfully per the Statement of Work (SoW).

Strategic Cloud Delivery Management

Key Activities:
  • Conduct regular assessments of data engineering performance and identify areas for improvement.
Deliverables:
  • Assessment and improvement plan reports.

Client and Stakeholder Engagement

Key Activities:
  • Prepare and present data engineering performance reports to stakeholders.
Deliverables:
  • Client feedback gathered and used to improve service delivery.

Minimum Requirements

Qualifications

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field; a master’s degree is a plus.
  • Experience working with real-time data processing (e.g., Apache Kafka, Kinesis).
  • Familiarity with machine learning and data science workflows.
  • Cloud certifications (AWS Certified Solutions Architect, Microsoft Azure Data Engineer, etc.).

Experience

  • 3+ years of experience as a Data Engineer, Data Analyst, or in a similar role.
  • Experience with large-scale data processing and cloud-based data platforms (e.g., AWS, Azure, Google Cloud).
  • Proficiency in SQL and working knowledge of NoSQL databases (e.g., MongoDB, Cassandra).
  • Experience with ETL frameworks and tools (e.g., Apache Airflow, Talend, Informatica).
  • Familiarity with data warehousing technologies (e.g., Snowflake, Redshift, BigQuery).
  • Understanding of data modeling and design patterns.

Skills

  • Proficiency in Python, Java, Scala, or similar programming languages.
  • Hands-on experience with distributed computing frameworks (e.g., Hadoop, Spark).
  • Knowledge of version control systems (e.g., Git).
  • Familiarity with data visualization and reporting tools (e.g., Tableau, Power BI) is a plus.

Key Responsibilities

  • Design, implement, and maintain scalable and reliable data pipelines to extract, transform, and load (ETL) large datasets from multiple sources (a minimal pipeline sketch follows this list).
  • Optimize and automate data processes to handle high-volume, real-time, and batch data.
  • Integrate data from various external and internal sources, ensuring that the data is accurate, accessible, and usable for downstream users.
  • Work with different data storage solutions (e.g., relational databases, NoSQL databases, data lakes).
  • Build and manage data warehouses, ensuring data integrity, consistency, and availability.
  • Ensure that data models and structures are optimized for reporting and analytics use cases.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver relevant data solutions.
  • Work closely with software engineers to design and implement data infrastructure solutions for applications.
  • Develop processes for monitoring data quality and integrity across various systems and databases.
  • Troubleshoot and resolve issues related to data availability, accuracy, and performance.
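
As a concrete illustration of the first responsibility above, here is a minimal sketch of a scheduled ETL pipeline written with Apache Airflow's TaskFlow API (Airflow is one of the ETL tools listed under Experience). The DAG name, the sample rows, and the warehouse step are hypothetical placeholders for illustration, not part of this role's actual stack.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_sales_etl():
    """Hypothetical daily ETL: extract raw rows, clean them, load them."""

    @task
    def extract() -> list[dict]:
        # A real pipeline would pull from an operational database or API;
        # hardcoded rows keep this sketch self-contained and runnable.
        return [
            {"order_id": 1, "amount": "120.50"},
            {"order_id": 2, "amount": "75.00"},
        ]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Normalize types so downstream reporting gets consistent data.
        return [
            {"order_id": r["order_id"], "amount": float(r["amount"])}
            for r in rows
        ]

    @task
    def load(rows: list[dict]) -> None:
        # A real pipeline would upsert into a warehouse such as Snowflake,
        # Redshift, or BigQuery; here we only log the row count.
        print(f"Loaded {len(rows)} rows into the analytics warehouse")

    load(transform(extract()))


daily_sales_etl()
```

In practice, the extract and load steps would use Airflow provider hooks for the real sources and the target warehouse; the sketch only shows the shape of a daily ETL DAG with dependencies expressed through task outputs.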