Data Engineer (40220)
Join a dynamic team as a Senior Data Engineer. You’ll build scalable data models from complex datasets, enrich them with domain-specific business logic, and deliver results through Power BI and Excel. The role calls for deep experience with Python, Spark, and SQL, plus strong Databricks and Delta Lake skills. You should be confident working within Azure (Data Factory, ADLS, Cosmos DB), practicing CI/CD, versioning your work, and deploying with Docker. Knowledge of RDBMS and NoSQL data stores is required; familiarity with AI/ML or Kubernetes is a plus. You’ll help build reliable data foundations for critical decisions. Apply today.
🚀 Project
- handling large datasets in a cloud-based environment for fast, efficient processing
- allowing users to update mapping tables and add business logic (DAX) to enhance data models
- deploying enhanced models to the cloud for secure analysis in tools such as Excel and Power BI
🎯 Skills
- advanced Python knowledge and software engineering skills
- proven experience with Spark and SQL for data engineering and analysis
- extensive experience with Databricks for data ingestion and transformation, including its latest features
- familiarity with Delta Lake and data warehousing concepts
- hands-on experience with cloud services (Azure required; AWS or GCP a plus)
- experience with Azure Data Factory, Azure Cosmos DB, Azure Functions, Event Store, ADLS, and Key Vault
- proficiency in RDBMS/NoSQL data stores and appropriate use cases
- experience with Data as Code: version control, small and regular commits, unit tests, CI/CD, packaging, and familiarity with containerization tools such as Docker
- knowledge of reporting tools like Power BI
- solid understanding of the software development life cycle
- strong communication, interpersonal and presentation skills
💡 Nice to have
- experience with ML/AI
- Kubernetes