In this role, you will collaborate closely with the Data Science team to gather data from diverse sources, develop scalable data pipelines, and support advanced analytics initiatives. You will play a key role in building and maintaining the data platform infrastructure, enabling real-time and batch data processing, and supporting machine learning operations.
Desired Skills and Experience
Skills & Abilities
Strong analytical and problem-solving skills with the ability to interpret and visualize data effectively.
Eagerness to explore Big Data technologies and environments.
Passion for leveraging data to influence business decisions and tell compelling stories.
Ability to communicate highly technical concepts to non-technical stakeholders.
Strong collaboration and relationship-building skills across internal and external teams.
Familiarity with data governance principles, data quality standards, and best practices.
Education & Experience
Bachelor's degree in Computer Science or a related field (required).
Hands-on programming experience in Python, Scala, or Java (Spark preferred).
Experience with relational and/or NoSQL databases, including modeling and writing complex queries.
Practical experience with Big Data tools and frameworks such as Databricks, Azure Data Factory, Spark, and PySpark.
Exposure to public cloud platforms (Azure, AWS, or GCP) preferred.
Experience with large-scale distributed systems, data pipelines, and data processing.
Familiarity with cloud data warehouses.
Knowledge of CI/CD pipelines and Infrastructure-as-Code (IaC) is an advantage.
What You Will Be Doing
Collaborate with product owners, managers, and engineers to define scope and Minimum Viable Products (MVPs).
Design, build, and maintain scalable and robust data pipelines integrating data from diverse sources, APIs, and applications.
Apply modern data architecture patterns (e.g., microservices, event-driven, data lake) to ensure scalability and performance.
Perform data mapping, establish data lineage, and document information flows for observability and traceability.
Partner with analytics stakeholders and data scientists to streamline data acquisition and curation processes.
Monitor, optimize, and troubleshoot data pipeline performance issues, coordinating resolution with relevant teams.
Research and implement new tools and techniques to enhance the data platform, including proof-of-concept development.
Support MLOps teams with deployment and optimization of machine learning models for batch, streaming, and API scenarios.
Architect and manage the data platform infrastructure, ensuring high availability, scalability, and security using IaC and CI/CD practices.
Who We Are:
Steelcase is a global design and thought leader in the world of work. Along with our expansive community of brands, we design and manufacture innovative furnishings and solutions to help people do their best work in the many places where work happens.
Why People Choose to Work with Us:
At Steelcase, we put people at the center of everything we do. We believe work can bring meaning and purpose to life, and we support our employees in all aspects of their journey. Together, we make a lasting impact through our work and our communities.
What Matters to Us:
More than qualifications, we value talent, potential, and diverse perspectives. We welcome applicants who are open-minded, respectful, and comfortable interacting with people different from themselves, building mutual respect and positive relationships across our global community.
#LI-NA2
#data_engineer