Data Engineer
(3-month contract)
Location
Solihull, West Midlands
Hybrid
Salary
£350+ per day
Employment Type
Contractor
Our Client
Global Financial Solutions Provider
Specialty
Personal Loans, Revolving Credit, Loan Consolidation Solutions, Payment Facilities/Short-Term Credit, Real Estate Credit, In-Store Credit, Retail Credit Cards, Car Leasing Services
Industry
Financial Services
Company Size
20,000+ Employees
Aubay's Take
Our client is a leading finance provider in the UK and Europe and is part of one of the world’s largest financial groups. In a fast-paced, digital-first world, our client provides responsible consumer finance solutions in both B2B and B2C capacities, offering a variety of creative strategies and adaptable lending choices that convert aspirations into reality. Their vision is to be a driver of positive change and to provide helpful, affordable finance options that customers can trust and use every day. Over the last 50 years, our client has built a portfolio of over 27 million customers and employs more than 20,000 people globally, with a team of around 700 here in the UK. With a firm focus on creating a leading working environment and a strong emphasis on inclusivity and CSR, our client has been recognised as one of the top employers in the UK for three consecutive years.

Role Summary
Aubay is seeking a skilled Data Engineer to join a local data squad within a leading financial services organisation in the UK. The successful candidate will play a key role in maintaining and enhancing the organisation’s DataHub, ensuring the stability, accuracy, and security of data services. This role involves supporting day-to-day business needs, optimising data pipelines, and contributing to the migration and modernisation of infrastructure into cloud-based environments. This is a hands-on position ideal for a technically strong Data Engineer with experience in big data technologies, cloud environments, and scalable data pipeline design.
Required Skills and Experience:
- Strong experience in data engineering, with proven ability to design, build, and optimise data pipelines.
- Proficiency in Scala and Spark for data transformation and enrichment operations (an illustrative sketch follows this list).
- Experience with SQL and structured databases.
- Solid knowledge of Hadoop, HDFS, and cloud object storage (e.g., S3, COS).
- Experience with orchestration and scheduling tools such as Apache Airflow.
- Hands-on experience with CI/CD tools (e.g., GitLab, Jenkins).
- Understanding of software development lifecycle (SDLC) and Agile methodologies.
- Strong troubleshooting and problem-solving skills in complex data environments.
- Experience producing clear technical documentation for specifications and operational use.
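As a purely illustrative example of the Scala and Spark work described above, a minimal enrichment job might look like the following sketch. The dataset names, columns, object-storage paths, and business rule are hypothetical, not taken from the client's actual DataHub.

```scala
// Minimal, illustrative Spark/Scala enrichment job.
// Dataset names, columns, paths, and the business rule below are hypothetical.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object LoanEnrichmentJob {

  // Join loans with customer attributes and apply an example business rule.
  def enrich(loans: DataFrame, customers: DataFrame): DataFrame =
    loans
      .join(customers, Seq("customer_id"), "left")
      .withColumn("is_high_value", col("principal") > lit(25000))
      .filter(col("status").isNotNull) // basic data-quality guard

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("loan-enrichment")
      .getOrCreate()

    val loans     = spark.read.parquet("s3a://raw-layer/loans/")     // hypothetical paths
    val customers = spark.read.parquet("s3a://raw-layer/customers/")

    enrich(loans, customers)
      .write
      .mode("overwrite")
      .parquet("s3a://curated-layer/loans_enriched/")

    spark.stop()
  }
}
```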
Desired Skills and Experience:
- Familiarity with containerisation and Kubernetes orchestration.
- Knowledge of data virtualisation tools such as Dremio.
- Exposure to streaming processes (Kafka or similar); an illustrative sketch follows this list.
- Experience with Elasticsearch and Kibana.
- Awareness of data science/AI platforms (e.g., Dataiku).
- Knowledge of secrets management tools such as HashiCorp Vault.
- Working knowledge of the banking or financial services industry.
- French language skills would be an advantage.
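For the streaming exposure mentioned above, a minimal Spark Structured Streaming read from Kafka might look like the sketch below (Scala). Broker addresses, the topic name, and landing paths are hypothetical, and the job assumes the spark-sql-kafka connector is available on the classpath.

```scala
// Illustrative Spark Structured Streaming ingestion from Kafka.
// Brokers, topic, and output locations below are hypothetical.
import org.apache.spark.sql.SparkSession

object PaymentsStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("payments-stream")
      .getOrCreate()

    // Read raw events from a Kafka topic as a streaming DataFrame.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092,broker-2:9092")
      .option("subscribe", "payments")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

    // Land the stream in object storage as Parquet, with checkpointing for recovery.
    val query = events.writeStream
      .format("parquet")
      .option("path", "s3a://raw-layer/payments_stream/")
      .option("checkpointLocation", "s3a://raw-layer/_checkpoints/payments_stream/")
      .start()

    query.awaitTermination()
  }
}
```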
Role Responsibilities:
- Maintain and support the DataHub, ensuring minimal disruption and meeting agreed uptime requirements.
- Troubleshoot and resolve data-related issues, discrepancies, and anomalies.
- Modify existing code and implement improvements to optimise performance, maintainability, and scalability.
- Design, build, and optimise pipelines to enrich and transform large data volumes with complex business rules.
- Integrate data from multiple sources and formats into the Raw Layer of the DataHub.
- Ensure consistency, accuracy, and security of data infrastructure, following best practices in data engineering.
- Set up CI/CD pipelines to automate deployments, testing, and development workflows.
- Develop and execute unit and validation tests to guarantee the accuracy and integrity of delivered code (an illustrative test sketch follows this list).
- Implement and manage scheduling processes for pipeline execution using tools such as Airflow.
- Support the migration of existing Hadoop infrastructure into cloud-based services (Kubernetes, Spark as a Service, Airflow as a Service, COS).
- Produce clear technical documentation to ensure knowledge sharing and continuity.
- Work closely with cross-functional squads, including business and IT teams, to ensure solutions meet business requirements.
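To give a flavour of the unit and validation testing responsibility listed above, a ScalaTest suite exercising a simple transformation against a local SparkSession might look like the following sketch. The rule under test and the expected values are hypothetical; in practice such tests would typically run automatically in the CI/CD pipeline (e.g., GitLab or Jenkins) before deployment.

```scala
// Illustrative unit test for a Spark transformation using ScalaTest.
// The rule under test and the expected values are hypothetical examples.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.scalatest.funsuite.AnyFunSuite

class EnrichmentSpec extends AnyFunSuite {

  private val spark = SparkSession.builder()
    .master("local[2]") // run against an in-process Spark session
    .appName("enrichment-tests")
    .getOrCreate()

  import spark.implicits._

  test("high-value flag is set only for principals above the threshold") {
    val input = Seq(
      ("c-1", 30000.0),
      ("c-2", 10000.0)
    ).toDF("customer_id", "principal")

    val result = input
      .withColumn("is_high_value", col("principal") > lit(25000.0))
      .collect()
      .map(r => (r.getString(0), r.getBoolean(2)))
      .toMap

    assert(result("c-1"))  // 30,000 exceeds the threshold
    assert(!result("c-2")) // 10,000 does not
  }
}
```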