Vacancy detail

Quantitative Developer (Azure, PySpark, Databricks)

£80,000 – £150,000 + bonus and market-leading benefits

Location: City of London, London, United Kingdom Type: Permanent

The ideal candidate will have experience working closely with quantitative analysts and data engineers to build and maintain a range of tools. You will help implement robust ETL pipelines to automate and optimise the processing of raw data from a number of third-party vendors, running a suite of tests to ensure reliable and efficient services. You will build and maintain these pipelines using Airflow, Databricks, Docker and Kubernetes, and will support quantitative researchers with tools to make efficient use of cloud compute services. The ideal candidate will have extensive experience with Azure services (e.g. databases/data lakes, distributed compute) and with industry-standard technologies and practices such as PySpark and Cython.
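To give a flavour of the work, a vendor-data transform step in one of these ETL pipelines might look like the minimal pure-Python sketch below. All names (`RawRow`, `PriceRecord`, the `sym`/`px` field names) are hypothetical, chosen only for illustration; a real pipeline here would more likely use PySpark DataFrames on Databricks.

```python
from dataclasses import dataclass

# Hypothetical raw record shape as delivered by a third-party vendor.
RawRow = dict

@dataclass(frozen=True)
class PriceRecord:
    symbol: str
    close: float

def transform(raw_rows: list[RawRow]) -> list[PriceRecord]:
    """Clean raw vendor rows: drop incomplete rows, normalise symbols,
    and coerce prices to floats."""
    out = []
    for row in raw_rows:
        if row.get("sym") is None or row.get("px") is None:
            continue  # skip incomplete vendor rows rather than fail the batch
        out.append(PriceRecord(symbol=row["sym"].strip().upper(),
                               close=float(row["px"])))
    return out

# Example: one clean row survives, the incomplete row is dropped.
records = transform([{"sym": " aapl ", "px": "187.5"},
                     {"sym": None, "px": "1.0"}])
```

The same drop-or-normalise logic maps directly onto a PySpark `filter`/`withColumn` chain once the data volume demands distributed compute.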

Adaptability, a growth mindset, and the ability to critically evaluate the right languages, tools, services and products on the market for the task at hand are crucial for every member of the team. A flat hierarchy means everybody's opinions carry equal weight; we will look to you for strong input on all things relating to code optimisation, and expect you to form the bridge between the Engineering and Quant teams.

Principal Responsibilities:

  • Work with quantitative researchers to operationalise (CI/CD integration), optimise (speed and memory) and manage (e.g. configurations, code profiling) quant packages
  • Support quant researchers with implementing extensive unit tests for their quant libraries
  • Build and maintain data pipelines in a number of development environments
  • Develop tools to support portfolio management & quantitative research
  • Implement monitoring and alerting for new and existing pipelines
  • Investigate and resolve operational problems, which may involve bugs or performance issues across the full stack (e.g. task scheduling, monitoring, script code and operational systems)
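The unit-testing responsibility above might look like the following sketch: a small quant-library primitive plus pytest-style tests covering both the happy path and the error path. `rolling_mean` is a hypothetical example function, not part of any library named in this vacancy.

```python
def rolling_mean(xs: list[float], window: int) -> list[float]:
    """Simple rolling mean -- the kind of quant-library primitive
    that benefits from exhaustive unit tests."""
    if window <= 0:
        raise ValueError("window must be positive")
    return [sum(xs[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(xs))]

def test_rolling_mean_basic():
    # Windows of size 2 over four points yield three averages.
    assert rolling_mean([1.0, 2.0, 3.0, 4.0], 2) == [1.5, 2.5, 3.5]

def test_rolling_mean_rejects_bad_window():
    # Invalid parameters should fail loudly, not return garbage.
    try:
        rolling_mean([1.0], 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for window=0")
```

Tests like these are what allow the CI/CD integration mentioned above to gate releases automatically.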

Essential Knowledge, Skills, Qualifications and Experience:

  • Extensive experience with Azure services, PySpark, Databricks and other big data tools, plus Python extensions for code optimisation (e.g. Cython)
  • Experience with task orchestration tools like Airflow and developing optimised ETLs
  • Solid knowledge of software development best practices and creating user-friendly solutions
  • Experience building and maintaining CI/CD pipelines
  • Hands-on experience with SQL relational databases and modern data storage solutions (data lakes, NoSQL databases, etc.)
  • Knowledge of container services and tools such as Docker and Kubernetes
  • Experience working with quantitative researchers and an understanding of finance
  • Knowledge of API solutions (e.g. REST)

Reference: AMC/AMO/DM1048686
