
Software Engineer (Apache Hadoop) - Remote Work

Contract: CLT (permanent) · On-site (local): Curitiba, PR · Company: Confidential (sign up to view)

* Salary: R$3,000 to R$6,000 per month (estimated)

* The amount shown is an estimate calculated from public data and market references. We do not guarantee that this is the salary offered for this specific position.

Field: Information Technology

Level: Junior

At Confidencial (Registered Users Only)®, we've been leading the way in technology projects for over 15 years. We deliver cutting-edge solutions to giants like Google and the most innovative startups in Silicon Valley.

Our diverse 4,000+ team, composed of the world's Top 1% of tech talent, works remotely on roles that drive significant impact worldwide.

When you apply for this position, you're taking the first step in a process that goes beyond the ordinary. We aim to align your passions and skills with our vacancies, setting you on a path to exceptional career development and success.

As a Software Engineer specializing in Apache Hadoop, you will manage and develop large-scale data solutions on Hadoop clusters. You will play a key role in designing robust data storage and processing workflows, ensuring high performance and scalability for enterprise-level data environments.

What You'll Do:

  • Design, deploy, and manage production-grade Hadoop clusters, focusing on HDFS storage and YARN resource management.
  • Develop and optimize efficient data processing jobs using MapReduce and Hive to handle petabyte-scale datasets.
  • Maintain and scale the broader Hadoop ecosystem of tools to ensure seamless data integration and availability.
  • Collaborate with data teams to implement best practices for data partitioning, compression, and cluster performance tuning.
  • Ensure system reliability and data integrity through proactive monitoring and troubleshooting of distributed components.
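To illustrate the processing model behind these responsibilities, here is a minimal sketch of the MapReduce word-count pattern in plain Python. The data and function names are hypothetical; a production job would run on a Hadoop cluster through the Java MapReduce API or Hadoop Streaming, with the shuffle handled by the framework itself.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every input record.
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: sort and group intermediate pairs by key, mimicking what
    # Hadoop does between the map and reduce phases.
    return groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0))

def reduce_phase(grouped):
    # Reduce: sum the counts for each distinct word.
    return {word: sum(count for _, count in group) for word, group in grouped}

# Hypothetical input lines standing in for HDFS file blocks.
records = ["big data on Hadoop", "data pipelines on YARN"]
counts = reduce_phase(shuffle(map_phase(records)))
print(counts)  # e.g. {'big': 1, 'data': 2, ...}
```

The same map → shuffle → reduce structure underlies Hive queries as well: Hive compiles SQL into jobs of exactly this shape for execution on YARN.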

What we are looking for:

  • 4+ years of experience in Software Engineering, Data Engineering, or Big Data infrastructure.
  • Proven expertise in managing and developing on Hadoop clusters and HDFS storage.
  • Proficiency in MapReduce processing and YARN resource management for large-scale data workloads.
  • Hands-on experience with the Hadoop ecosystem, including Hive.
  • Advanced proficiency in English.

How we make your work (and your life) easier:

  • 100% remote work (from anywhere).
  • Excellent compensation in USD, or your local currency if preferred.
  • Hardware and software setup for you to work from home.
  • Flexible hours: create your own schedule.
  • Paid parental leave, vacations, and national holidays.
  • Innovative and multicultural work environment: collaborate and learn from the global Top 1% of talent.
  • Supportive environment with mentorship, promotions, skill development, and diverse growth opportunities.

Join a global team where your unique talents can truly thrive and make a significant impact!

Apply now!

