Job description
Hello,
We are looking for a Kafka/Flink Architect.
Position Overview:
We are looking for a highly skilled and experienced Kafka/Flink Architect to lead the design and development of scalable, robust real-time data streaming and processing platforms. As an architect, you will be responsible for defining the technical strategy and architecture for low-latency data streaming applications using Kafka and Apache Flink, while ensuring alignment with business requirements. You will act as a technical leader, collaborating with multiple teams to design enterprise-grade, scalable, and resilient real-time data platforms.
Key Responsibilities:
Architect and Design Event-Driven Systems:
- Lead and architect real-time event-streaming data pipelines using Kafka and Apache Flink.
- Define best practices, patterns, and guidelines for a distributed, event-driven architecture.
- Establish a scalable architecture that supports use cases such as streaming analytics, event sourcing, and change data capture (CDC).
Platform Implementation and Delivery:
- Design and implement Kafka topics, partitions, and retention policies for optimal performance.
- Architect Flink job pipelines to handle event stream transformations, joins, aggregations, and windowing operations.
- Oversee the integration of Kafka and Flink with other data systems like data lakes, relational databases, and NoSQL databases.
Thought Leadership:
- Act as the Subject Matter Expert (SME) for Kafka and Apache Flink, driving innovation and championing best practices for real-time data streaming solutions.
- Keep up to date with the latest trends and technologies in the streaming data ecosystem.
- Provide technical mentorship and guidance to developers, engineers, and data teams.
System Scalability and Optimization:
- Design resilient, fault-tolerant, and highly available distributed systems.
- Ensure the scalability of Kafka architectures by managing broker configurations, partition strategies, and replication factors.
- Optimize Flink job performance for throughput and latency, leveraging parallelism and resource-efficient strategies.
Monitoring and Governance:
- Implement observability, monitoring, alerting, and logging solutions for the Kafka and Flink ecosystems.
- Define and oversee security policies for Kafka (e.g., role-based access controls, encryption, and authentication mechanisms).
- Establish data governance, schema evolution strategies, and enforce usage patterns to maintain a well-structured streaming data architecture.
Collaboration and Stakeholder Management:
- Partner with product owners, stakeholders, and business units to understand requirements and convert them into scalable technical solutions.
- Collaborate across engineering teams to drive the organization’s real-time data strategy.
- Present technical concepts, architecture diagrams, and actionable recommendations to senior management and leadership teams.
Cloud and DevOps:
- Integrate Kafka and Flink with CI/CD pipelines for seamless deployment and testing.
- Leverage containerization tools for deploying and managing data streaming infrastructure.
Required Skills and Expertise:
Kafka Expertise:
- Advanced understanding of Kafka’s architecture, including brokers, topics, partitions, producers, and consumers.
- Hands-on experience designing and building solutions across the Kafka ecosystem (e.g., Kafka Streams, Schema Registry, Kafka Connect, KSQL).
- Proven experience designing and optimizing Kafka clusters for high availability, fault tolerance, and low latency.
Apache Flink Expertise:
- Strong experience working with Apache Flink for building distributed data processing and analytics pipelines.
- Deep knowledge of Flink’s APIs (DataStream, DataSet, and Table API) and stream processing features such as stateful processing, windowing, and time semantics.
- Experience optimizing Flink jobs for resource efficiency and performance in large-scale environments.
Big Data and Distributed Systems:
- Deep understanding of distributed systems principles, including consistency, availability, fault tolerance, and scalability.
- Hands-on experience with big data technologies such as Spark and Elasticsearch.
Programming and Engineering Expertise:
- Proficiency in languages such as Java or Python for building Kafka/Flink applications.
- Solid knowledge of serialization formats such as Avro, Protobuf, or JSON for working with structured data in Kafka.
Leadership and Architectural Design:
- Proven track record of designing and delivering enterprise-scale, production-ready streaming platforms.
- Ability to articulate architectural decisions, trade-offs, and best practices.
- Experience collaborating with cross-functional teams and mentoring engineering teams.
Cloud and DevOps:
- Extensive experience deploying Kafka and Flink solutions in cloud environments.
- Knowledge of containerization for managing microservices and event-driven architectures.
- Familiarity with CI/CD tools like Azure Pipelines.
Interested candidates can share their CV at ayushi.chouhan@white-force.in or call 9109472707.