Specialize in Python, SQL, and cloud technologies, designing, developing, and supporting data pipelines, warehouses, and related data infrastructure.
Design, build, and maintain big data systems with stakeholder requirements in mind.
Develop data processing solutions using technologies such as Hadoop, Spark, and Kafka, ensuring data accuracy, consistency, and integrity.
Collaborate with data scientists and analysts to produce customer data analytics, models, and algorithms to be deployed.
Ensure compliance with data privacy and security regulations during development and stay up-to-date with new big data trends.
Be a part of a pioneering team working with new and exciting technologies
Highly flexible working conditions
Experience working with big data technologies such as Spark (preferred), Hadoop, or Kafka.
Strong programming skills in Python and SQL, plus experience with tools such as Terraform or Docker. PySpark is a plus.
Familiarity with data warehousing concepts and technologies such as BigQuery (preferred), Redshift, or Snowflake.
Experience with data visualization tools such as Power BI (preferred), Tableau, or Qlik Sense.
Proven experience in working with databases such as MSSQL (preferred), Postgres, and NoSQL.
Strong problem-solving skills.
A global conglomerate, our client is a household name in property, healthcare, retail, and other industries. Publicly listed since the early 1980s, our client strives to use data to stay ahead of competitors and drive organizational success.