Required Skills & Qualifications
Bachelor's, Master's, or PhD in Computer Science, Data Engineering, or a related discipline.
5-7 years of experience in data engineering and distributed data systems.
Strong hands-on experience with Apache Hive, HBase, Kafka, Solr, Elasticsearch.
Proficient in data architecture, data modelling, and pipeline scheduling/orchestration.
Operational experience with Data Mesh, Data Product development, and hybrid cloud data platforms.
Familiarity with CRM systems and data sourcing/mapping strategies.
Proficient in managing metadata, glossary, and lineage tools like Apache Atlas.
Proven experience in generating large-scale batch files using Spark and Hive.
Strong understanding of document-based data models and the transformation of relational schemas into document-oriented structures.
Expertise in data administration, modelling, mapping, collection, and distribution.
Strong understanding of business workflows to support metadata governance.
Hands-on experience with analytics and DWH tools (e.g., SAS, Oracle, MS SQL, Python, R).
Familiarity with data modelling tools (e.g., ERWIN) and enterprise databases (Oracle, IBM DB2, MS SQL, Hadoop, Object Store).
Experience working across hybrid cloud environments (e.g., AWS, Azure Data Factory).
In-depth knowledge of ETL/ELT processes and automation frameworks.
Analytical thinker with strong problem-solving and communication skills.
Able to collaborate effectively across technical and business teams.
Proven ability to deliver high-quality outcomes.
Job Types: Full-time, Permanent
Pay: Up to RM12,000.00 per month
Work Location: In person