Data Engineer
Context
As part of our ambition to accelerate our Data Transformation, we are looking for an experienced Data Engineer to support the design and delivery of data products across various business domains. The role contributes directly to building a strong, scalable data foundation of high-quality, reusable data products, in line with evolving business priorities.
The Data Engineer will:
- Build and maintain robust data pipelines that integrate multiple data sources.
- Embed Data Governance by design to ensure data quality, consistency and compliance.
- Align with our data architecture principles and contribute to the implementation of our data landscape and Data Product architecture.
- Collaborate with business-oriented, cross-functional teams to deliver new data products across use cases and projects, in line with business priorities.
General description
The Data Engineer works mainly on engineering batch and near-real-time end-to-end data pipelines on our cloud-based data platform (AWS), processing data into the data lake and/or data warehouse (Snowflake).
Data is processed to build domain-oriented data products for AI/ML, BI and data integration use cases.
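For illustration only, a minimal sketch of what such a batch pipeline can look like in PySpark, assuming hypothetical S3 paths, table names and Snowflake connection settings (the Snowflake Spark connector would need to be available on the classpath):

```python
# Minimal batch pipeline sketch: S3 (data lake) -> transform -> Snowflake (warehouse).
# All paths, credentials and table names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-batch-pipeline").getOrCreate()

# Ingest raw landing data from the data lake.
raw = spark.read.parquet("s3://example-data-lake/landing/orders/")

# Basic cleansing plus a derived business-level measure.
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
)

# Load into the warehouse via the Snowflake Spark connector
# (connection options shown with placeholder values).
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "PIPELINE_USER",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "SALES",
    "sfWarehouse": "LOAD_WH",
}
(orders.write.format("net.snowflake.spark.snowflake")
       .options(**sf_options)
       .option("dbtable", "FCT_ORDERS")
       .mode("overwrite")
       .save())
```

In practice such a pipeline is parameterised, scheduled and covered by tests rather than hard-coded as above.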
You'll be part of agile, business-oriented scrum teams delivering tangible business value while respecting architectural and engineering best practices.
Way-of-working keywords: agile principles, continuous improvement, peer review through pull requests, clean code, testing.
Good listening and communication skills are required.
Main responsibilities
- Structuring and modelling data, e.g. schema design, dimensional and Data Vault modelling, ... (see the sketch after this list).
- Creating technical designs for data products that support AI, BI and/or data integration use cases.
- Building, maintaining and running E2E data pipelines according to defined architectural guidelines, development patterns and standards.
- Defining and upholding development standards, guidelines & best practices.
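To give a flavour of the modelling work, here is a minimal Data Vault-style hub load in PySpark; all table, column and source names are hypothetical:

```python
# Data Vault-style hub load sketch (hypothetical names throughout):
# derive a hash key from the business key, keep load metadata,
# and insert only business keys not yet present in the hub.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("hub-customer-load").getOrCreate()

staged = spark.read.parquet("s3://example-data-lake/staging/customers/")
hub = spark.read.parquet("s3://example-data-lake/vault/hub_customer/")

new_keys = (
    staged.select("customer_number").dropDuplicates()
          .join(hub, on="customer_number", how="left_anti")  # unseen keys only
          .withColumn("hub_customer_hk",
                      F.sha2(F.col("customer_number").cast("string"), 256))
          .withColumn("load_dts", F.current_timestamp())
          .withColumn("record_source", F.lit("crm_export"))
)

new_keys.write.mode("append").parquet("s3://example-data-lake/vault/hub_customer/")
```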
Skills & experience
Hands-on, proven experience with ("the basics"):
- Advanced SQL
- Data modelling, dimensional modelling, Data Vault modelling (strong experience required)
- Building data pipelines with dbt and PySpark
- Data processing in an AWS cloud environment: AWS services (S3, Glue, Athena, AppFlow, DMS, …) and IAM (see the Athena sketch after this list)
- Agile delivery, CI/CD, Git
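As an example of working with these services, a small sketch that queries the data lake through Athena with boto3; the database, query and output location are hypothetical placeholders:

```python
# Sketch: run an Athena query against the data lake and wait for the result.
# Database, table and output location are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

execution = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(net_amount) FROM orders GROUP BY order_date",
    QueryExecutionContext={"Database": "example_lake_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes (simplified; production code would bound retries).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows) - 1} rows")  # first row holds the column headers
```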
Experience with, notions of, or a keen interest in ("nice to have"):
- Data ingestion (e.g. CDC, API, ...); a minimal CDC sketch follows this list
- Event-based data processing
- Data product architectures
- Data catalogue and data lineage tooling (e.g. DataHub)
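For the CDC point above, a minimal sketch of applying a change feed in PySpark, assuming hypothetical keys, operation codes ('I'/'U'/'D') and paths; a real implementation would typically use a table format with MERGE support (e.g. Delta or Iceberg) instead:

```python
# Sketch: apply a CDC feed (insert/update/delete events) to a target table,
# keeping only the latest event per business key. All names are hypothetical.
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("cdc-apply").getOrCreate()

changes = spark.read.parquet("s3://example-data-lake/cdc/customers/")
target = spark.read.parquet("s3://example-data-lake/curated/customers/")

# Deduplicate the feed: one row per key, the most recent change wins.
latest = (
    changes.withColumn(
        "rn",
        F.row_number().over(
            Window.partitionBy("customer_id").orderBy(F.col("change_ts").desc())
        ),
    )
    .filter("rn = 1")
    .drop("rn")
)

# Drop keys touched by the feed, then re-insert everything that is not a delete
# (assumes the remaining columns line up with the target schema).
merged = (
    target.join(latest.select("customer_id"), "customer_id", "left_anti")
    .unionByName(latest.filter("op != 'D'").drop("op", "change_ts"))
)

# Write to a new location: Spark cannot overwrite a path it is reading from.
merged.write.mode("overwrite").parquet("s3://example-data-lake/curated/customers_v2/")
```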
Fluent in English (spoken and written); Dutch or French is a plus.