Senior Data Engineer

Kuala Lumpur, Malaysia

Job Description

Grow your career with us
Here at Averis, our common purpose is to improve lives by developing resources sustainably. Our people are crucial in helping us realise our vision to be one of the best Global Business Solution (GBS) organisations, supporting our customers in creating value for the Community, Country, Climate, Customer and Company.
Hello, we are Averis!
Averis Information Technology is a global IT services organization headquartered in Kuala Lumpur that designs and delivers IT solutions for large enterprises to drive economies of scale and business transformation. Our core areas of expertise are infrastructure and networking, ERP and procurement systems, cybersecurity and data governance, and digital workplace and end-user management. Our journey started in 2006, and today more than 300 IT professionals serve major customers in resource-based manufacturing industries such as paper, packaging and tissue, edible oils, and energy, which collectively encompass $35B of assets and 80,000 employees across 32 locations globally.
Role Overview
We are hiring a Senior Data Platform Engineer to join our growing data team. This role focuses on building and maintaining robust, scalable, and secure data platform infrastructure, with a strong emphasis on DataOps, data quality, observability, and automation. You will work extensively with Databricks on AWS, and contribute to the reliability and efficiency of data pipelines powering analytics and decision-making across the enterprise.
Key Responsibilities

  • Design, provision, and maintain secure and scalable data infrastructure using Databricks on AWS, supporting both batch and streaming workloads.
  • Collaborate with Data Engineers, Architects, and Product Owners to implement DataOps best practices, including CI/CD pipelines using GitHub Actions.
  • Implement and manage data quality and observability toolchains (e.g., Great Expectations, Databricks expectations framework).
  • Build and maintain reusable components for data ingestion, transformation, orchestration, and metadata management.
  • Automate infrastructure and resource provisioning using Terraform for AWS services and Databricks workspaces.
  • Integrate monitoring, alerting, and logging solutions for data pipelines, clusters, and jobs via CloudWatch, Databricks monitoring, and Prometheus/Grafana.
  • Enforce governance and compliance controls including IAM, audit logging, lineage tracking, and secure data access management in Databricks.
  • Optimize compute and storage resource usage through job tuning, cluster configuration, and cost-aware architecture decisions.
  • Lead post-incident analysis and RCA for platform reliability and data pipeline failures.
  • Maintain team documentation in Confluence and track work using Jira.
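The data quality and observability work described above (e.g. with Great Expectations or the Databricks expectations framework) comes down to declarative assertions evaluated over each batch of data. A minimal, library-free Python sketch of the idea, where `check` and `run_checks` are illustrative helper names, not a real library API:

```python
# Minimal sketch of declarative data-quality checks, in the spirit of
# Great Expectations / Databricks expectations. The helper names below
# (check, run_checks) are illustrative, not a real framework API.

def check(name, predicate):
    """Pair a row-level predicate with a human-readable check name."""
    return (name, predicate)

def run_checks(rows, checks):
    """Return {check_name: number_of_failing_rows} for a batch of rows."""
    return {
        name: sum(1 for row in rows if not predicate(row))
        for name, predicate in checks
    }

# Example batch, as it might land from an ingestion job.
rows = [
    {"order_id": 1, "amount": 120.0},
    {"order_id": 2, "amount": -5.0},     # violates non-negative amount
    {"order_id": None, "amount": 40.0},  # violates non-null key
]

checks = [
    check("order_id_not_null", lambda r: r["order_id"] is not None),
    check("amount_non_negative", lambda r: r["amount"] is None or r["amount"] >= 0),
]

failures = run_checks(rows, checks)
print(failures)  # {'order_id_not_null': 1, 'amount_non_negative': 1}
```

In a real pipeline these results would feed the monitoring and alerting stack (CloudWatch, Prometheus/Grafana) so a failing check can page the on-call engineer rather than silently corrupting downstream tables.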
Technical Skills
  • Platform: Strong experience with Databricks (including Unity Catalog, Delta Lake, Job APIs) on AWS (S3, IAM, EC2, VPC, CloudWatch).
  • Infrastructure as Code: Proficiency with Terraform for AWS and Databricks workspace management.
  • DataOps: Familiarity with GitHub Actions, dbt, and orchestration tools such as Airflow.
  • Monitoring & Observability: Experience with Databricks built-in monitoring, CloudWatch, Prometheus, Grafana, and data quality frameworks.
  • Automation & Scripting: Strong in Python, Bash, or PowerShell for scripting automation and utilities.
  • Containerization (Optional): Knowledge of Docker and Kubernetes, especially for orchestration of auxiliary services.
  • Governance & Security: Good understanding of IAM, data encryption, role-based access control, and regulatory compliance (e.g., GDPR, PDPA).
  • Version Control & Collaboration: Proficiency in GitHub, Confluence, and Jira as daily workflow tools.
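To illustrate the Databricks Job APIs mentioned above: scheduled jobs are defined as JSON documents submitted to the Jobs API. The sketch below assembles a Jobs API 2.1-style job definition; the field names follow the public API shape, but the specific job, notebook path, and cluster settings are hypothetical examples, not a drop-in payload:

```python
import json

# Hypothetical sketch of a Databricks Jobs API 2.1-style job definition.
# The job name, notebook path, and cluster sizing are invented examples.

job_spec = {
    "name": "nightly_ingest",
    "tasks": [
        {
            "task_key": "ingest_orders",
            "notebook_task": {"notebook_path": "/Repos/data/ingest_orders"},
            "job_cluster_key": "etl_cluster",
        }
    ],
    "job_clusters": [
        {
            "job_cluster_key": "etl_cluster",
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "m5.xlarge",  # AWS instance type
                "num_workers": 2,
            },
        }
    ],
    "schedule": {
        "quartz_cron_expression": "0 0 2 * * ?",  # 02:00 daily
        "timezone_id": "Asia/Kuala_Lumpur",
    },
}

# Serialize for submission (e.g. via the /api/2.1/jobs/create endpoint).
payload = json.dumps(job_spec, indent=2)
print(payload)
```

In practice, definitions like this are kept in version control and applied through Terraform or CI/CD (GitHub Actions) rather than posted by hand, which is what keeps job configuration reviewable and reproducible.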
Soft Skills
  • Team Collaboration: Proven ability to work cross-functionally with engineers, product owners, and stakeholders.
  • Problem Solving: Deep troubleshooting skills across infrastructure, platform, and data layers.
  • Documentation: Ability to maintain high-quality documentation and share knowledge effectively.
  • Agility: Comfortable working in agile, iterative environments with frequent delivery cycles.
Experience
  • Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
  • Professional Experience: Minimum of 5 years in data platform engineering or DevOps for data ecosystems.
  • Project Experience: Hands-on involvement in building or scaling a cloud-based enterprise data platform, preferably with Databricks and AWS.
When you send us your resume and personal details, you are deemed to have consented to our keeping and storing your information in our database. All information you provide is used only for the recruitment process. Averis will only collect, use, process or disclose personal information where and when allowed to under applicable laws.
Only shortlisted candidates will be contacted for an interview. We endeavour to respond to every applicant. However, if you receive no response from us within 60 days, please consider your application for this specific position unsuccessful. We may contact you in the future if there are opportunities that match your qualifications and experience. Thank you for considering a career with Averis.



Job Detail

  • Job Id
    JD1372781
  • Total Positions
    1
  • Job Type
    Full Time
  • Employment Status
    Permanent
  • Job Location
    Kuala Lumpur, Malaysia