Job Description
Job Title: Data Engineer (Contract)
Skills: Python, AWS, Snowflake, MySQL, PySpark, Power BI, REST APIs, DevOps, Azure
About The Role
We are seeking an experienced Data Engineer with expertise in Snowflake, AWS, Python, MySQL, DevOps, PySpark, Power BI, REST APIs, and Azure. In this role, you will design, build, and optimize scalable data pipelines and infrastructure to support analytics, reporting, and business intelligence needs. This is a contract-based opportunity.
Key Responsibilities
Develop and maintain scalable ETL/ELT pipelines using PySpark and other data processing frameworks.
Design and optimize data models in Snowflake and MySQL for high-performance analytics.
Implement and manage cloud-based data solutions on AWS.
Develop, integrate, and manage REST APIs for data ingestion and processing.
Collaborate with DevOps teams to automate deployments and ensure data pipeline reliability.
Create Power BI dashboards and reports for data visualization and business insights.
Monitor and troubleshoot data workflows, ensuring data accuracy and efficiency.
Enforce data security, governance, and compliance best practices.
Requirements
5+ years of experience in Data Engineering or a related field.
Strong expertise in Snowflake, AWS, and Python.
Hands-on experience with PySpark for big data processing.
Proficiency in SQL and database management (MySQL, Snowflake).
Experience working with DevOps tools and CI/CD pipelines for data engineering.
Familiarity with REST APIs for data integration and automation.
Strong knowledge of Power BI for data visualization.
Ability to work with both structured and unstructured datasets and to design efficient, reliable data flows.
Nice to Have
Experience with Kafka or Kinesis for real-time data streaming.
Knowledge of Infrastructure as Code (IaC) tools such as Terraform.
Exposure to machine learning pipelines and data science workflows.