
Senior Data Engineer (Scala)

We’re looking for a Senior Scala / Data Engineer for one of our clients — a company operating in a data-heavy, distributed systems environment.

Vladyslav Kravets · Published on May 8, 2026

Build data-intensive systems that actually scale. Own distributed processing, not just services. Shape how large-scale data flows power products.

This role is for someone who takes full ownership of distributed data systems — from architecture to performance. You’ll work on complex, large-scale data processing challenges where efficiency, scalability, and fault tolerance are critical.

What you'll do:

  • Build and optimize distributed data processing systems using Scala;
  • Design and implement high-throughput, low-latency data pipelines;
  • Work with big data frameworks (e.g., Spark) and distributed architectures;
  • Develop and maintain scalable backend services for data-intensive workloads;
  • Optimize performance of data processing jobs and infrastructure;
  • Ensure fault tolerance and resilience of distributed systems;
  • Implement monitoring, logging, and observability for data pipelines;
  • Collaborate with engineering teams to design scalable system architectures;
  • Contribute to cross-functional backend and data engineering efforts;
  • Ensure high standards of code quality, performance, and reliability.

What we expect from you:

  • Strong experience with Scala in production environments;
  • Proven experience as a Senior Data Engineer or Backend Engineer in data-heavy systems;
  • Strong understanding of distributed systems and parallel processing;
  • Experience with big data technologies (Spark, Kafka, etc.);
  • Strong knowledge of JVM ecosystem and performance tuning;
  • Experience building high-load, scalable systems;
  • Strong system design and architectural thinking;
  • Ability to work independently and take ownership;
  • Strong problem-solving skills in complex, distributed environments;
  • English level: B2+ (C1 preferred);
  • Availability to work in an EU time zone.

Nice to have:

  • Experience with functional programming paradigms;
  • Experience with cloud infrastructure (AWS / GCP);
  • Experience with stream processing frameworks (Flink, Akka Streams);
  • Background in low-latency or real-time systems;
  • Experience in niche or hard-to-hire tech domains.

We offer:

  • Six-month full-time contract engagement;
  • Remote work with overlap in US business hours;
  • High-impact project with direct influence on production AI systems;
  • Close collaboration with client’s product and engineering teams;
  • Opportunity to build core infrastructure for real-world AI applications — not prototypes.