
Senior Data Engineer

CLT (Permanent) · On-site · São Paulo-SP · Confidential Company

* Salary: R$ 11,000 to R$ 20,000 per month (estimated)

* The amount shown is an estimate based on public data and market references. We do not guarantee that this is the salary offered for this specific position.

Area: Information Technology

Level: Senior

Job details

  • Full-time

Qualifications

  • Management
  • Data Lake
  • Encryption
  • AWS Certification
  • Git
  • English
  • Databases
  • SQL
  • Docker
  • Software Development
  • ETL
  • Data Warehouse
  • Python

Full job description

Responsibilities

  • Design and implement robust, scalable data pipelines that ensure data accuracy and availability across multiple platforms (a minimal pipeline sketch follows this list).
  • Work on the data integration of various Confidential platforms and products worldwide.
  • Contribute to data pipeline development, performance, quality, monitoring, and maintenance.
  • Optimize existing data workflows and databases for performance and scalability, using best practices and cutting-edge tools.
  • Identify opportunities for improvement in our products, business, and architecture through the strategic use of data.
  • Lead data engineering projects, serving as a technical reference and providing guidance to team members (e.g., through code reviews).
  • Collaborate actively with data scientists, analysts, and product teams to understand data needs and deliver high-quality solutions.
  • Advocate for Data Engineering best practices both inside and outside the team.
  • Stay abreast of emerging technologies and industry trends to contribute innovative ideas to our data strategy.
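
To make the pipeline work described above concrete, here is a minimal sketch of a daily batch pipeline using Airflow, one of the orchestration tools listed under Must Have. It assumes Airflow 2.x with the TaskFlow API; the DAG id, sample rows, and target table are hypothetical placeholders, not part of this posting.

```python
# Minimal sketch of a daily batch pipeline on Airflow 2.x (TaskFlow API).
# The DAG id, sample rows, and target table are hypothetical placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def orders_daily_pipeline():
    @task
    def extract() -> list[dict]:
        # A real pipeline would pull from an API, database, or landing zone.
        return [{"order_id": 1, "amount": "10.50"}, {"order_id": 2, "amount": "bad"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Cast amounts and drop malformed rows to protect data accuracy.
        clean = []
        for row in rows:
            try:
                clean.append({**row, "amount": float(row["amount"])})
            except (KeyError, ValueError):
                continue  # production code would log or quarantine these rows
        return clean

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder for a warehouse write (e.g., a COPY into the target table).
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


orders_daily_pipeline()
```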

Must Have

  • Solid experience as a Data Engineer or in a related role.
  • Strong knowledge of software development (e.g., Python, Spark, Git, CI/CD, Docker).
  • Expertise in SQL to query data and build ETL/ELT processes (see the sketch after this list).
  • Proficiency in designing modern data pipelines and architectures.
  • Experience working with Data Lake and Data Warehouse concepts, using best practices to structure and store large volumes of data.
  • Ability to troubleshoot and optimize data pipelines for performance and reliability.
  • Experience with data pipeline creation/orchestration tools (e.g., Airflow, Dagster).
  • Hands-on knowledge of cloud environments (e.g., AWS or GCP).
  • Experience using Databricks or database technologies such as Snowflake or Delta Lake.
  • Knowledge of different data architectures (e.g., Data Lake, Data Mesh, Data Fabric).
  • Fluent English for effective communication with technical and business stakeholders.
  • Demonstrated ability to collaborate effectively and communicate complex ideas clearly.
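
As an illustration of the Python/Spark/SQL stack named above, the sketch below shows a small ELT-style transform: raw JSON is loaded, cleaned with plain SQL, and written out as a partitioned Parquet table. The paths, table, and column names are hypothetical.

```python
# Minimal PySpark sketch of an ELT-style transform: load raw JSON, clean it
# with plain SQL, and write a partitioned Parquet table. The paths and
# column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_elt_sketch").getOrCreate()

# Extract/Load: read the raw landing-zone data (assumed location).
raw = spark.read.json("s3a://landing/orders/")
raw.createOrReplaceTempView("orders_raw")

# Transform: deduplicate and cast types with SQL.
clean = spark.sql("""
    SELECT DISTINCT
           order_id,
           CAST(amount AS DOUBLE)   AS amount,
           CAST(created_at AS DATE) AS order_date
    FROM orders_raw
    WHERE order_id IS NOT NULL
""")

# Publish a partitioned table for downstream consumers.
clean.write.mode("overwrite").partitionBy("order_date").parquet("s3a://lake/orders_clean/")
```

With the delta-spark package installed, the same write could target Delta Lake via `clean.write.format("delta")` instead of `.parquet(...)`.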

Nice to Have

  • Experience with real-time data processing and related tools/frameworks.
  • Experience designing and implementing new data architectures.
  • Experience using any data governance/management tool.
  • Experience using a tool (e.g., Dremio) to create a virtualization layer.
  • Knowledge of data security best practices (encryption, access controls); a small encryption sketch follows this list.
  • Ability to organize and break down complex projects/initiatives into manageable tasks.
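
As one example of the encryption practice mentioned above, here is a minimal sketch using the Fernet API from Python's cryptography package to protect a sensitive field before storage. The record layout is hypothetical, and in practice the key would be loaded from a secrets manager rather than generated inline.

```python
# Minimal sketch: symmetric encryption of a sensitive field with Fernet,
# from the `cryptography` package, before the record is written to storage.
# The record layout is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # hypothetical: fetch from a secrets manager instead
cipher = Fernet(key)

record = {"user_id": 42, "email": "user@example.com"}

# Encrypt only the sensitive column; the rest stays queryable in plain text.
record["email"] = cipher.encrypt(record["email"].encode()).decode()

# An authorized consumer holding the key can recover the original value.
assert cipher.decrypt(record["email"].encode()).decode() == "user@example.com"
```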