to design, build, and optimize scalable data pipelines and cloud-based data platforms. The ideal candidate will have strong hands-on experience with AWS services, ETL/ELT frameworks, PySpark, and modern data warehousing tools such as Snowflake and Amazon Redshift.
Key Responsibilities
Design, develop, and maintain scalable ETL/ELT pipelines using AWS Glue, PySpark, and other AWS data services.
Build and manage data ingestion, transformation, and processing workflows for structured and unstructured data.
Develop dbt models and implement best practices for data transformation and version control.
Implement and optimize cloud data warehouses such as Snowflake and Amazon Redshift.
Ensure high data quality, governance, and security across all data pipelines.
Collaborate with data analysts, data scientists, and business stakeholders to deliver reliable datasets.
Monitor data pipelines, troubleshoot issues, and improve performance.
Work with a broad range of AWS data services.
Technical Skills
Strong SQL experience for data modeling and transformations
Experience with CI/CD and version control (Git, AWS CodePipeline, etc.)
Soft Skills
Excellent communication and problem-solving abilities
Ability to work in agile, fast-paced environments
Strong analytical and debugging skills
Preferred Qualifications
Bachelor's/Master's in Computer Science, Engineering, or related field
Experience with data lake/lakehouse architectures
Knowledge of containerization (Docker) or orchestration (Airflow) is a plus
Job Type: Full-time
Pay: RM10,000.00 - RM15,000.00 per month
Benefits:
Health insurance
Work Location: In person