We offer:
- Participation in interesting and demanding projects
- Flexible working hours
- A great, non-corporate atmosphere
- Stable employment conditions (contract of employment or B2B contract)
- Opportunity to develop your career
- Attractive benefits package
- Remote work
Your responsibilities:
- Working with ETL solutions on AWS
- Building real-time and batch data pipelines and managing multiple datasets (see the illustrative sketch after this list)
- Migrating data from on-premises sources into AWS storage
- Using programming languages such as Python effectively for optimization
- Ensuring data processes run smoothly and reliably
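For illustration only, the sketch below shows the kind of batch pipeline these responsibilities describe: a minimal AWS Glue PySpark job that reads a table from the Glue Data Catalog, applies a simple transform, and writes partitioned Parquet back to S3. The database, table, column, and bucket names are hypothetical placeholders, not part of this role's actual stack.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import DropNullFields
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a crawled source table from the Glue Data Catalog
# ("sales_db" / "raw_orders" are hypothetical names)
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",
    table_name="raw_orders",
)

# Drop fields that contain only null values, then rename a column
cleaned = DropNullFields.apply(frame=source)
renamed = cleaned.rename_field("order_ts", "order_timestamp")

# Write the curated data back to S3 as Parquet, partitioned by date
# (bucket and prefix are hypothetical)
glue_context.write_dynamic_frame.from_options(
    frame=renamed,
    connection_type="s3",
    connection_options={
        "path": "s3://example-bucket/curated/orders/",
        "partitionKeys": ["order_date"],
    },
    format="parquet",
)

job.commit()
```

In practice, a job like this would typically be scheduled with Glue triggers or orchestrated with Step Functions, which the requirements below also mention.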
We are looking for you if you have:
- A minimum of 8 years’ experience working with cloud data platforms, especially AWS
- Proven experience as a Data Engineer, Data Scientist or Data Analyst
- Experience in building data pipelines using Python and AWS Glue
- Experience with Scala, Java, and R is a plus
- Familiarity with Amazon Web Services (AWS) – EC2, EMR, S3, Athena, Glue, Redshift, etc.
- Experience using data frameworks (e.g., Spark, Hive, HBase) and related tools
- Experience with AWS services such as Step Functions, Lambda, DMS, RDS, S3, EC2, Glue, EMR, Athena, etc.
- Exposure to ETL process management
- Excellent communication and presentation skills, with the ability to interact with people within your own team or department at the operational level
- BSc/BA in Computer Science, an engineering field (STEM), or equivalent experience; a graduate degree in Data Science or another quantitative field is preferred
- AWS Certifications are a plus