You will design and build scalable data infrastructure supporting AI-driven automation and analytics systems. You will develop reliable pipelines, manage large datasets, and ensure high data quality for real-time decision-making across industrial applications.
Key Responsibilities
Build and maintain data pipelines for real-time and batch processing.
Design, implement, and optimize data lakes and warehouses.
Integrate Kafka, Spark Streaming, and other event-driven architectures.
Ensure data accuracy, quality, and performance optimization.
Collaborate with software, AI, and IoT teams to support analytics workflows.
Automate data validation and monitoring processes.
Support visualization and reporting via Power BI, Tableau, or similar tools.
Continuously improve data systems for cost, reliability, and scalability.
Requirements
Bachelor's Degree in Computer Science, Engineering, or related field.
3-6 years' experience in data engineering or data infrastructure.
Proficient in Python, SQL, and Spark for ETL and analytics pipelines.
Experience with Spark Streaming, Kafka, Delta Lake, Databricks, or Azure Synapse.
Familiar with cloud platforms (AWS, GCP, or Azure).
Strong analytical mindset and attention to detail.
Excellent teamwork and communication skills.
Why Join Us?
Work on large-scale AI and IoT data projects driving real-world efficiency.
Exposure to industrial automation and smart system innovation.
Fast-paced and collaborative environment.
Competitive salary and clear growth pathways.
Job Type: Full-time
Pay: Up to RM17,000.00 per month
Work Location: In person