
Data Engineer

ABOUT NAVIXY

Navixy is a leading IoT and telematics platform provider, empowering businesses with real-time visibility into their assets, vehicles, and operations. Our solutions span global fleet tracking, smart logistics, advanced geospatial analytics, and more. We harness data from sensors, GPS devices, and machine telemetry to deliver actionable insights that optimize efficiency and drive innovation.

We are on a mission to revolutionize how businesses leverage IoT data – turning raw streams of geospatial and time-series information into predictive intelligence and AI-driven decision-making. Join us to reshape industries and define new standards in data and analytics.


ROLE OVERVIEW

As a Data Engineer at Navixy, you will work alongside our Head of Data and a passionate team of data professionals to build and maintain our modern data infrastructure. Your focus will be on designing and optimizing data pipelines, ensuring reliable ingestion, transformation, and storage of vast amounts of IoT and telematics data. You’ll collaborate closely with data scientists, analysts, and software engineers to enable cutting-edge analytics, predictive models, and business intelligence solutions.


KEY RESPONSIBILITIES

Data pipelines and orchestration

  • Design, develop, and maintain ETL/ELT workflows using tools such as Apache Airflow or Apache NiFi (or comparable orchestration platforms)
  • Implement CDC (Change Data Capture) processes using technologies like Debezium (or comparable tools) to ensure real-time data updates

Data storage and warehousing

  • Manage and optimize data storage on object stores like Amazon S3 (or equivalent services from Azure/GCP)
  • Implement columnar analytical stores like ClickHouse (or comparable technologies such as Snowflake, Apache Druid, or Parquet-based data lakes) to enable high-performance analytics
  • Collaborate on setting up and maintaining data warehouse solutions, integrating with tools like Databricks or Apache Spark

Data transformation and modeling

  • Develop and maintain data transformation pipelines using frameworks such as dbt (or similar SQL-based transformation tools)
  • Implement best practices for data modeling, including schema design, partitioning strategies, and performance tuning
  • Work with PySpark for large-scale data processing and distributed computing

Real-time and batch data processing

  • Collaborate on both streaming and batch data ingestion/processing solutions, ensuring reliable and scalable pipelines
  • Integrate real-time data flows for IoT devices, leveraging popular stream-processing frameworks when necessary

Performance optimization and reliability

  • Monitor and troubleshoot data pipelines, ensuring minimal downtime and high reliability
  • Implement robust logging, alerting, and testing strategies to maintain data quality and operational excellence
  • Conduct performance tuning of queries, pipelines, and database systems

Collaboration and continuous improvement

  • Work closely with data scientists, analysts, and other engineers to translate data requirements into scalable technical solutions
  • Participate in code reviews, architecture discussions, and knowledge-sharing sessions to foster a high-performing data culture
  • Stay current with emerging data engineering technologies and propose innovative solutions to drive Navixy’s analytics capabilities forward


WHAT YOU’LL BRING

  • BS or MS in Computer Science, Data Science, or Mathematics
  • 5+ years of professional data engineering experience
  • Proficient in Python and SQL, with a solid understanding of data structures and algorithms
  • Experience building and orchestrating data pipelines with Apache Airflow, NiFi, or similar tools
  • Familiarity with CDC technologies like Debezium (or equivalents) for real-time data replication
  • Hands-on experience with Apache Spark, Databricks, or comparable big data processing frameworks
  • Working knowledge of columnar databases (e.g., ClickHouse, Snowflake) and data lakes
  • Exposure to dbt (or similar SQL-based data transformation tools)
  • Understanding of DevOps principles, CI/CD pipelines, and version control (Git)
  • Bonus: Experience with geospatial analytics or time-series data in an IoT setting
  • Able to work effectively in a cross-functional, multi-lingual team environment
  • Fluent in Russian and English, with the ability to convey complex technical concepts to non-technical stakeholders
  • Eagerness to take initiative and ensure high-quality deliverables


WHY JOIN NAVIXY

  • Impactful work: Contribute to an IoT platform that processes and analyzes massive streams of real-time data, influencing industries worldwide
  • Cutting-edge tech: Collaborate with a modern, evolving data stack
  • Growth: Expand your skills in a supportive environment that encourages experimentation, learning, and professional advancement
  • Flexibility: Enjoy a hybrid work arrangement that promotes work-life balance
  • Global reach: Join a diverse team pushing the boundaries of IoT innovation for a global client base

If you are passionate about building reliable data pipelines, orchestrating large-scale data workflows, and driving excellence in analytics, we would love to hear from you.


WHAT WE OFFER

  • Location: Belgrade, Serbia.
  • Conditions: Hybrid work format with a 40-hour workweek (meeting in a coworking space once a week).
  • Employment: Contract-based.
  • Development: Opportunities for workshops, conferences, professional courses, and corporate English classes.
  • Well-being: Access to psychological support service through “Yasno.”
  • Transparency and Growth: A culture of open feedback, with vertical and horizontal growth opportunities.
  • Impact and Openness: See the results of your work and receive timely feedback from the team, your Lead, and HR.
  • Corporate Culture: Online and offline events, team-building, experience exchange, and a young, vibrant team of professionals.

Send your CV to: [email protected]