Kafka / Flink Architect

Job opening at Remote Job

Location

Remote Job

Address

Remote Job

Employment

Full Time

Qualification

Bachelor Of Engineering - Bachelor Of Technology (B.E./B.Tech.)

Payment

1,500,000 to 2,000,000

Date Posted

25 Aug 2025

HR

Ayushi Chouhan

Contact

ayushi.chouhan@white-force.in

Mobile

91094 72707


Job description

Hello,


We are looking for a Kafka / Flink Architect.


Position Overview:

We are looking for a highly skilled and experienced Kafka/Flink Architect to lead the design and development of scalable, robust real-time data streaming and processing platforms. As the architect, you will define the technical strategy and architecture for low-latency data streaming applications built on Kafka and Apache Flink, ensuring alignment with business requirements. You will act as a technical leader, collaborating with multiple teams to design enterprise-grade, scalable, and resilient real-time data platforms.

Key Responsibilities:

Architect and Design Event-Driven Systems:

  1. Lead and architect real-time event-streaming data pipelines using Kafka and Apache Flink.
  2. Define best practices, patterns, and guidelines for a distributed, event-driven architecture.
  3. Establish a scalable architecture that supports use cases such as streaming analytics, event sourcing, and change data capture (CDC).

Platform Implementation and Delivery:

  1. Design and implement Kafka topics, partitions, and retention policies for optimal performance.
  2. Architect Flink job pipelines to handle event-stream transformations, joins, aggregations, and windowing operations (a minimal sketch follows this list).
  3. Oversee the integration of Kafka and Flink with other data systems like data lakes, relational databases, and NoSQL databases.
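
As a rough illustration of the Flink pipeline work described in item 2 above, here is a minimal sketch of a DataStream job that consumes a Kafka topic and computes a keyed, windowed count. The broker address, topic name, and event format are placeholder assumptions, and the connector shown (KafkaSource) is the one available in recent Flink releases; exact APIs vary by version.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class ClickCountJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical broker and topic names -- replace with real values.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")
                .setTopics("clicks")
                .setGroupId("click-counter")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-clicks")
           // Count occurrences of each distinct record per one-minute window.
           .map(value -> Tuple2.of(value, 1))
           .returns(Types.TUPLE(Types.STRING, Types.INT))
           .keyBy(t -> t.f0)
           .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
           .sum(1)
           .print();

        env.execute("click-count");
    }
}
```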

Thought Leadership:

  1. Act as the SME (Subject Matter Expert) for Kafka and Apache Flink, driving innovation and championing best practices for real-time data streaming solutions.
  2. Keep up to date with the latest trends and technologies in the streaming data ecosystem.
  3. Provide technical mentorship and guidance to developers, engineers, and data teams.

System Scalability and Optimization:

  1. Design resilient, fault-tolerant, and highly available distributed systems.
  2. Ensure the scalability of Kafka architectures by managing broker configurations, partition strategies, and replication factors (see the sketch after this list).
  3. Optimize Flink job performance for throughput and latency, leveraging parallelism and resource-efficient strategies.
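
As a purely illustrative companion to item 2 above, partition counts and replication factors can be managed programmatically with Kafka's AdminClient. The topic name, partition count, replication factor, and retention value below are assumptions for the sketch, not recommended settings:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CreateOrdersTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");

        try (Admin admin = Admin.create(props)) {
            // 12 partitions for consumer parallelism; replication factor 3 for fault tolerance.
            NewTopic orders = new NewTopic("orders", 12, (short) 3)
                    .configs(Map.of(
                            TopicConfig.RETENTION_MS_CONFIG, "604800000",    // 7 days
                            TopicConfig.MIN_IN_SYNC_REPLICAS_CONFIG, "2"));  // durability floor
            admin.createTopics(List.of(orders)).all().get();  // block until the broker confirms
        }
    }
}
```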

Monitoring and Governance:

  1. Implement observability, monitoring, alerting, and logging solutions for the Kafka and Flink ecosystems.
  2. Define and oversee security policies for Kafka (e.g., role-based access controls, encryption, and authentication mechanisms); a minimal sketch follows this list.
  3. Establish data governance and schema evolution strategies, and enforce usage patterns to maintain a well-structured streaming data architecture.
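
For the role-based access control mentioned in item 2, Kafka expresses permissions as ACL bindings; here is a minimal sketch using the AdminClient. The principal, topic, and broker names are hypothetical, and this assumes the cluster has an authorizer enabled:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class GrantReadAccess {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");

        try (Admin admin = Admin.create(props)) {
            // Allow a hypothetical analytics service to read the "orders" topic from any host.
            AclBinding readOrders = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "orders", PatternType.LITERAL),
                    new AccessControlEntry("User:analytics", "*",
                            AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(List.of(readOrders)).all().get();
        }
    }
}
```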

Collaboration and Stakeholder Management:

  1. Partner with product owners, stakeholders, and business units to understand requirements and convert them into scalable technical solutions.
  2. Collaborate across engineering teams to drive the organization’s real-time data strategy.
  3. Present technical concepts, architecture diagrams, and actionable recommendations to senior management and leadership teams.

Cloud and DevOps:

  1. Integrate Kafka and Flink with CI/CD pipelines for seamless deployment and testing.
  2. Leverage containerization tools for deploying and managing data streaming infrastructure.

Required Skills and Expertise:

Kafka Expertise:

  1. Advanced understanding of Kafka’s architecture, including brokers, topics, partitions, producers, and consumers (a minimal producer sketch follows this list).
  2. Hands-on experience building and designing solutions with Kafka’s ecosystem (e.g., Kafka Streams, Schema Registry, Kafka Connect, KSQL).
  3. Proven experience designing and optimizing Kafka clusters for high availability, fault tolerance, and low latency.
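
Purely as an illustration of the producer side mentioned in item 1, a minimal Java producer against an assumed broker and topic (all names are hypothetical; a production setup would also tune batching, retries, and serialization):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for in-sync replicas

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The key determines the partition, so events for one order stay ordered.
            producer.send(new ProducerRecord<>("orders", "o-42", "{\"amount\": 99.5}"));
        } // close() flushes any buffered records
    }
}
```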

Apache Flink Expertise:

  1. Strong experience working with Apache Flink to build distributed data processing and analytics pipelines.
  2. Deep knowledge of Flink’s APIs (DataStream, DataSet, and Table API) and stream processing features such as stateful processing, windowing, and time semantics.
  3. Experience optimizing Flink jobs for resource efficiency and performance in large-scale environments.


Big Data and Distributed Systems:

  1. Deep understanding of distributed systems principles, including consistency, availability, fault tolerance, and scalability.
  2. Hands-on experience with big data technologies such as Spark and Elasticsearch.

Programming and Engineering Expertise:

  1. Proficiency in languages such as Java or Python for building Kafka/Flink applications.
  2. Solid knowledge of serialization formats such as Avro, Protobuf, or JSON for working with structured data in Kafka (see the sketch after this list).
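
To make the serialization point concrete, here is a small sketch using plain Apache Avro with an invented schema; real Kafka deployments commonly pair Avro with a schema registry serializer rather than hand-rolled encoding:

```java
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class AvroExample {
    // Hypothetical schema for an order event.
    private static final String SCHEMA_JSON =
            "{\"type\": \"record\", \"name\": \"Order\", \"fields\": ["
          + " {\"name\": \"id\", \"type\": \"string\"},"
          + " {\"name\": \"amount\", \"type\": \"double\"}]}";

    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(SCHEMA_JSON);

        GenericRecord order = new GenericData.Record(schema);
        order.put("id", "o-42");
        order.put("amount", 99.5);

        // Encode to the compact Avro binary form, suitable as a Kafka record value.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(order, encoder);
        encoder.flush();
        System.out.println("Serialized " + out.size() + " bytes");
    }
}
```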

Leadership and Architectural Design:

  1. Proven track record of designing and delivering enterprise-scale, production-ready streaming platforms.
  2. Ability to articulate architectural decisions, trade-offs, and best practices.
  3. Experience collaborating with cross-functional teams and mentoring engineering teams.

Cloud and DevOps:

  1. Extensive experience deploying Kafka and Flink solutions in cloud environments.
  2. Knowledge of containerization for managing microservices and event-driven architectures.
  3. Familiarity with CI/CD tools such as Azure Pipelines.


Interested candidates may share their CV at ayushi.chouhan@white-force.in or contact 9109472707.


Job requirements

  • Experience: 7 to 10 years
  • Education: Bachelor of Engineering - Bachelor of Technology (B.E./B.Tech.)
  • Specialization: CS / IT...
  • Skills:
  • Industry Type: IT-Software / Software Services
  • Status: Not Disclosed.

Company Name : Whiteforce

Website

About Company

Whiteforce