Description
BioCatch is the leader in Behavioral Biometrics, a technology that leverages machine learning to analyze an online user's physical and cognitive digital behavior to protect individuals online. BioCatch's mission is to unlock the power of behavior and deliver actionable insights to create a digital world where identity, trust, and ease coexist. Today, 32 of the world's 100 largest banks and 210 total financial institutions rely on BioCatch Connect™ to combat fraud, facilitate digital transformation, and grow customer relationships. BioCatch's Client Innovation Board, an industry-led initiative including American Express, Barclays, Citi Ventures, and National Australia Bank, helps BioCatch identify creative and cutting-edge ways to leverage the unique attributes of behavior for fraud prevention. With over a decade of analyzing data, more than 80 registered patents, and unparalleled experience, BioCatch continues to innovate to solve tomorrow's problems. For more information, please visit www.biocatch.com.
Main responsibilities:
- Set the direction of our data architecture and determine the right tools for each job. We collaborate on the requirements, and then you call the shots on what gets built.
- Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
- Monitor and optimize the team's cloud costs.
- Design and construct monitoring tools to ensure the efficiency and reliability of data processes.
- Implement CI/CD for data workflows (see the illustrative sketch below).
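By way of illustration only (not part of the role description), the sketch below shows the kind of data workflow, with a built-in quality gate, that this role owns end to end. It assumes a recent Airflow 2.x from our stack; the DAG id, the extract/validate callables, and the row-count check are hypothetical placeholders.

# Illustrative sketch only: a daily batch workflow with a data-quality gate.
# Assumes a recent Airflow 2.x; DAG id and callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(**context):
    # Placeholder: pull a day's worth of events from an upstream source.
    return 42  # pretend this is the extracted row count (pushed to XCom)


def validate_row_counts(ti, **context):
    # Fail the run (and let the usual monitoring/alerting pick it up)
    # if the extract produced no rows.
    rows = ti.xcom_pull(task_ids="extract_events")
    if not rows:
        raise ValueError("extract_events produced 0 rows")


with DAG(
    dag_id="example_daily_events",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    validate = PythonOperator(task_id="validate_row_counts", python_callable=validate_row_counts)
    extract >> validate

In practice, workflows like this are what the CI/CD pipelines above would test and deploy, and what the monitoring tools above would observe.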
Requirements
- 5+ years of experience in data engineering and big data at large scale. - Must
- Extensive experience with the modern data stack - Must:
  - Snowflake, Delta Lake, Iceberg, BigQuery, Redshift
  - Kafka, RabbitMQ, or similar for real-time data processing
  - PySpark, Databricks
- Strong software development background with Python/OOP and hands-on experience in building large-scale data pipelines. - Must
- Hands-on experience with Docker and Kubernetes. - Must
- Expertise in ETL development, data modeling, and data warehousing best practices.
- Knowledge of monitoring and observability tooling (Datadog, Prometheus, ELK, etc.).
- Experience with infrastructure as code, deployment automation, and CI/CD practices, using tools such as Helm, ArgoCD, Terraform, GitHub Actions, and Jenkins.
Our stack: Azure, GCP, Databricks, Snowflake, Airflow, Spark, Kafka, Kubernetes, Neo4j, Aerospike, ELK, Datadog, microservices, Python, SQL
Your stack: Proven back-end software engineering skills, the ability to think for yourself and challenge common assumptions, a commitment to high-quality execution, and an embrace of collaboration.
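For candidates curious what hands-on work in this stack can look like, here is a hedged, minimal PySpark Structured Streaming sketch (not an actual BioCatch pipeline) that reads events from Kafka and appends them to a Delta table. The broker address, topic name, and output paths are placeholders, and the Kafka and Delta integrations are assumed to be available on the cluster (e.g. on Databricks).

# Illustrative sketch only: Kafka -> Delta ingestion with PySpark Structured Streaming.
# Assumes the Kafka and Delta integrations are available (e.g. a Databricks cluster);
# the broker address, topic name, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "user-events")                # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# The Kafka source exposes key/value as binary; keep the payload plus ingestion metadata.
events = raw.select(
    F.col("key").cast("string").alias("event_key"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("ingested_at"),
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/user-events")  # placeholder path
    .outputMode("append")
    .start("/tmp/tables/user_events")                              # placeholder path
)
query.awaitTermination()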