Brazil - Remote: Back-end Developer

Rio de Janeiro, Rio de Janeiro, Brazil

Job Description

We are looking for the right people — people who want to innovate, achieve, grow and lead. We attract and retain the best talent by investing in our employees and empowering them to develop themselves and their careers. Experience the challenges, rewards and opportunity of working for one of the world’s largest providers of products and services to the global energy industry.

Job Duties

We are building a platform that merges real-time physics, structured knowledge, and autonomous intelligence. We are looking for developers ready to tackle one (or more) of these challenges:

  • Ingestion: Handling massive streams of sensor data with minimal latency.
  • Intelligence: Making LLMs deterministic and reliable within robust multi-agent systems.
  • Context: Modeling complex ontologies to map thousands of physical assets.

What You’ll Do

You will join a high-performance team, contributing to the Core Backend while focusing on one of the following distinct tracks:

Core Responsibilities

  • High-Performance APIs: Build low-latency Python services (FastAPI) to serve live data to frontend and AI models.
  • System Reliability: Debug complex concurrency issues and ensure production reliability in distributed systems.
  • Rapid Delivery: Adopt a "deliver fast" mentality without compromising on code quality, testing, or API design standards.

Possible tracks within the project (you can contribute to one or more):

Streaming & High-Throughput Data 

  • Build Streaming Pipelines: Design scalable services using Kafka and Spark/Flink to process raw sensor data in real-time.
  • Time-Series Optimization: Optimize database schemas (TimescaleDB) to enable fast historical data retrieval and implement algorithmic checks to validate sensor readings.
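
The "algorithmic checks to validate sensor readings" mentioned above could be as simple as a rolling z-score spike filter. This is an illustrative sketch only (function names and thresholds are hypothetical, not part of the actual platform):

```python
from collections import deque
from statistics import mean, stdev

def spike_filter(readings, window=5, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard
    deviations from the trailing window of trusted values -- a basic
    sanity check before raw sensor data enters a pipeline."""
    history = deque(maxlen=window)
    flags = []
    for value in readings:
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            is_spike = sigma > 0 and abs(value - mu) > threshold * sigma
        else:
            is_spike = False  # not enough history to judge yet
        flags.append(is_spike)
        if not is_spike:      # only trusted readings update the window
            history.append(value)
    return flags

# A 500.0 spike in an otherwise steady ~10.0 signal gets flagged:
# spike_filter([10.0, 10.1, 9.9, 10.0, 500.0, 10.2])
# -> [False, False, False, False, True, False]
```

In production this kind of check would typically run inside a Flink or Spark operator rather than plain Python, but the logic is the same.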

GenAI & Autonomous Agents

  • Build Autonomous Agents: Deploy stateful agents (using LangGraph) that plan tasks, query Knowledge Graphs, and execute tools without hallucinating.
  • Advanced RAG: Build Graph-RAG pipelines that combine semantic search with structured knowledge traversal for grounded answers.
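
At its core, a stateful tool-calling agent like those described above alternates between asking a planner for an action and executing the named tool. The sketch below uses a hypothetical deterministic planner in place of a real model client (which would sit behind LangGraph in practice) so the control flow is visible:

```python
def run_agent(llm, tools, question, max_steps=5):
    """Loop: ask the planner for an action, execute the named tool,
    record the observation, repeat until the planner answers directly."""
    scratchpad = []
    for _ in range(max_steps):
        # Planner returns {"tool": ..., "args": ...} or {"answer": ...}
        action = llm(question, scratchpad)
        if "answer" in action:
            return action["answer"]
        if action["tool"] not in tools:
            # Refuse unknown tools instead of guessing -- this is one
            # guardrail against hallucinated tool calls.
            scratchpad.append(("error", f"unknown tool {action['tool']}"))
            continue
        result = tools[action["tool"]](**action["args"])
        scratchpad.append((action["tool"], result))
    return None  # planner never produced a final answer

# Toy planner: look the asset up first, then answer from the observation.
def toy_planner(question, scratchpad):
    if not scratchpad:
        return {"tool": "lookup_asset", "args": {"asset_id": "pump-7"}}
    return {"answer": f"pump-7 status: {scratchpad[-1][1]}"}

tools = {"lookup_asset": lambda asset_id: "operational"}
```

A real implementation adds persistence, retries, and graph-backed grounding, but grounding answers in tool observations rather than free generation is the core idea.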

Graph & Knowledge Engineering

  • Knowledge Graph Engineering: Design domain ontologies in Neo4j, defining relationships between assets, documents, and time-series data.
  • Search Infrastructure: Implement Hybrid Retrieval logic combining Vector Search, Full-Text Search, and Graph traversal.
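
One common way to merge ranked lists from vector search, full-text search, and graph traversal is Reciprocal Rank Fusion (RRF). This is a generic sketch, not tied to any particular database:

```python
def rrf_fuse(ranked_lists, k=60):
    """Fuse several ranked result lists into one: each document earns
    1 / (k + rank) for every list it appears in, summed across lists,
    so documents ranked well by multiple retrievers rise to the top."""
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "d2" appears in all three lists, so it outranks everything else:
vector    = ["d1", "d2", "d3"]
fulltext  = ["d2", "d4"]
graph     = ["d2", "d1"]
```

The constant `k` damps the influence of top ranks; 60 is the value commonly cited in the RRF literature, not a project-specific choice.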

Qualifications

Must Haves:

  • Completed Bachelor's degree in Computer Science, Engineering, or a related field.
  • 5+ years of experience in Python Backend development.
  • Advanced English communication skills.
  • Computer Science fundamentals: Data Structures and Algorithms.
  • Strong experience building and documenting REST APIs (FastAPI).

Good to Have:

  • Streaming: Proficiency with Streaming Technologies (Kafka, Flink, Spark) and Time-Series Databases (TimescaleDB, InfluxDB).
  • GenAI: Practical experience building applications with LLMs, Agentic Frameworks (LangGraph), and Vector Databases.
  • Graph: Hands-on experience with Graph Databases (Neo4j/Cypher), SQL database design, and Hybrid Search strategies.
  • Background in Heavy Industry, O&G, or IoT data (MQTT, OPC UA).
  • Experience with Local LLMs (Ollama) for privacy-focused deployments.
  • Familiarity with Data Lineage or Metadata management.

Knowledge, Skills, and Abilities

We use a modern, high-performance stack. You should be proficient in the Core Backend stack and deeply knowledgeable in your chosen track.

  • Core Backend: Python 3.12+ (FastAPI, Pydantic), Polars, Docker, Kubernetes.
  • Streaming: Apache Kafka, Flink, Spark Streaming, TimescaleDB, TigerData.
  • GenAI: LangGraph, LangChain, LiteLLM, Azure OpenAI/Anthropic/Local SLMs.
  • Graph/Data: Neo4j (Cypher), PostgreSQL (pgvector, Full-Text Search).

Desired Profile

  • Problem Solver: You dig into logs to find the root cause of a data spike, a silent failure, or a disconnected node in a graph.
  • Modern Pythonista: You are up-to-date with modern Python async patterns and typing, ensuring code is high-performance and maintainable.
  • Performance-Aware: You understand Big O notation and how to optimize code for CPU and memory efficiency.

Halliburton is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, disability, genetic information, pregnancy, citizenship, marital status, sex/gender, sexual preference/orientation, gender identity, age, veteran status, national origin, or any other status protected by law or regulation.

Location

Fully remote position.

Job Details

Requisition Number: 205554 
Experience Level: Entry-Level
Job Family: Engineering/Science/Technology
Product Service Line: Landmark Software & Services 
Full Time / Part Time: Full-time
Employee Group: Temporary

Compensation Information
Compensation is competitive and commensurate with experience.

Apply

Job ID: 205554
Date posted: 01/22/2026
Category: Engineering/Science/Technology

