




**Job Summary:** We are seeking a Data Engineer to design, build, and scale a data platform that drives product, operations, and customer decisions, ensuring the quality and accessibility of insights.

**Key Highlights:**

1. Real impact from day one and growth with the data team.
2. A collaborative environment with autonomy and the opportunity to propose ideas.
3. Access to modern tools and an up-to-date technology stack.

**You will be responsible for creating and maintaining an architecture where:**

* BigQuery serves as the source of truth
* Data flows reliably from operational systems
* Internal teams and customers can access clear, actionable insights
* The platform is prepared for Machine Learning and Artificial Intelligence usage

### **Responsibilities**

**Data Architecture:**

* Design and evolve a BigQuery data warehouse as the single source of truth
* Design and build a Data Lake for ingestion and storage of raw data
* Design scalable, analytics- and ML-oriented architectures
* Define business-oriented data models (fact/dimension tables, data marts)
* Lead architectural decisions for scalability and cost efficiency

**Pipelines and Processing:**

* Build robust ETL/ELT pipelines using:
  + Apache Spark (transformations)
  + Airflow (orchestration)
* Integrate multiple data sources:
  + PostgreSQL
  + MongoDB
  + Firebase
* Design incremental pipelines and efficient MERGE processes in BigQuery

**Source of Truth & Data Quality:**

* Ensure consistency between operational systems and BigQuery
* Implement data quality validations, monitoring, and alerts
* Define data SLAs and failure recovery strategies

**Data Lifecycle and Optimization:**

* Implement TTL and cleanup strategies in operational databases
* Optimize storage and processing costs
* Eliminate redundancies

**Analytics and Reporting:**

* Build datasets ready for analysis and BI
* Enable dashboards and
  reporting for:
  + Internal teams (product, growth, ops)
  + Customers (restaurants)
* Collaborate on key business metrics (sales, inventory, performance)

**Collaboration:**

* Work closely with product engineering teams on event instrumentation
* Support stakeholders with ad-hoc analysis
* Participate in cross-team technical decisions

#### **Technology Stack**

* Cloud: GCP (BigQuery, Cloud Storage, Dataflow, Cloud Functions)
* Data Lake: Cloud Storage, columnar formats (Parquet)
* Data Warehouse: BigQuery
* Processing: Spark
* Orchestration: Airflow
* Databases: PostgreSQL, MongoDB, Firebase
* Languages: Python, SQL
* BI

#### **Requirements**

* 4+ years of experience in Data Engineering
* Solid experience designing Data Lakes and modern Data Warehouses
* Hands-on experience with GCP and BigQuery
* Advanced proficiency in SQL and data modeling
* Experience with Spark and Airflow
* Experience with Python for data processing
* Experience working with large-scale data volumes
* Ability to work with high autonomy in a startup environment

#### **Nice-to-Have**

* Experience building data platforms from scratch
* Experience preparing data for ML and AI use cases
* Knowledge of feature stores or ML pipelines
* Experience with streaming (Kafka, Pub/Sub)
* Experience in B2B SaaS or foodtech
* Experience optimizing BigQuery costs

#### **What We Offer**

* Real impact from day one: you’ll be a key member of the data team.
* A collaborative environment with significant autonomy and space to propose ideas.
* Opportunity to grow alongside the data team as the company scales.
* Access to modern tools and an up-to-date technology stack.
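For candidates unfamiliar with the "efficient MERGE processes in BigQuery" mentioned under Pipelines and Processing, here is a minimal Python sketch of the pattern: staged delta rows are upserted into a target table with a single `MERGE` statement. All table and column names (`analytics.orders`, `order_id`, etc.) are hypothetical examples, and in practice the rendered SQL would be submitted via the BigQuery client rather than printed.

```python
# Sketch: render a BigQuery MERGE statement for an incremental upsert.
# Table and column names below are illustrative, not from a real schema.

def render_incremental_merge(target: str, staging: str, key: str, cols: list[str]) -> str:
    """Build a MERGE that upserts staged delta rows into the target table."""
    set_clause = ", ".join(f"T.{c} = S.{c}" for c in cols)
    col_list = ", ".join([key, *cols])
    src_list = ", ".join(f"S.{c}" for c in [key, *cols])
    return (
        f"MERGE `{target}` T\n"
        f"USING `{staging}` S\n"
        f"ON T.{key} = S.{key}\n"
        f"WHEN MATCHED THEN UPDATE SET {set_clause}\n"
        f"WHEN NOT MATCHED THEN INSERT ({col_list}) VALUES ({src_list})"
    )

sql = render_incremental_merge(
    "analytics.orders", "staging.orders_delta",
    key="order_id", cols=["status", "total", "updated_at"],
)
print(sql)
```

This pattern keeps incremental loads idempotent: re-running the same staged batch updates matched rows instead of duplicating them.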
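Similarly, the "data quality validations, monitoring, and alerts" responsibility might look like the sketch below: simple null-rate and freshness checks over a batch of loaded rows. The field names, thresholds, and sample rows are all hypothetical; real checks would run against BigQuery tables and feed an alerting system.

```python
# Sketch of basic data quality validations: null-rate and freshness checks.
# Field names, thresholds, and sample rows are hypothetical.
from datetime import datetime, timedelta, timezone

def null_rate(rows: list[dict], field: str) -> float:
    """Fraction of rows where `field` is missing or None."""
    if not rows:
        return 0.0
    return sum(1 for r in rows if r.get(field) is None) / len(rows)

def is_fresh(rows: list[dict], field: str, max_lag: timedelta, now=None) -> bool:
    """True if the newest timestamp in `field` is within `max_lag` of now."""
    now = now or datetime.now(timezone.utc)
    newest = max(r[field] for r in rows)
    return now - newest <= max_lag

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
rows = [
    {"order_id": 1, "total": 10.0, "loaded_at": now - timedelta(minutes=5)},
    {"order_id": 2, "total": None, "loaded_at": now - timedelta(minutes=90)},
]
assert null_rate(rows, "total") == 0.5
assert is_fresh(rows, "loaded_at", timedelta(hours=1), now=now)
```

Checks like these are the building blocks behind the data SLAs and alerting mentioned above.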


