Senior Data Engineer

Bengaluru, Karnataka, India | Engineering | Full-time

Apply

MoEngage is an intelligent customer engagement platform, built for customer-obsessed marketers and product owners. We enable hyper-personalization at scale across multiple channels like mobile push, email, in-app, web push, on-site messages, and SMS. With AI-powered automation and optimization, brands can analyze audience behavior and engage consumers with personalized communication at every touchpoint across their lifecycle.
Fortune 500 brands and enterprises across 35 countries, such as Deutsche Telekom, Samsung, Ally Financial, Vodafone, and McAfee, along with internet-first brands such as Flipkart, Ola, OYO, and Bigbasket, use MoEngage to orchestrate their cross-channel campaigns and engage efficiently with their customers, sending 80 billion messages to 900 million consumers every month!
Our vision is to build the world’s most trusted customer engagement platform for the mobile-first world.
We promise to care about your customers as much as you do, which is reflected in our top ratings for service and support in the Gartner Magic Quadrant, Gartner Peer Insights, and G2 Summer Reports. We have also been recognized as one of the 25 Highest Rated Private Cloud Computing Companies To Work For in a list released by Battery Ventures, a global investment firm, based on Glassdoor feedback in which employees reported the highest levels of satisfaction at work during the first six months of the pandemic.

About the Data Engineering Team at MoEngage:
Join our Data Engineering team, where your expertise will play a critical role in architecting and operating the data infrastructure that powers real-time data ingestion, a large-scale data lake, and Kafka clusters that handle more than a million messages per second. Our team owns high-volume user and event data and business-critical pipelines, ensuring they are not only robust and high-performing but also scalable and efficient enough to meet the demands of our dynamic data environment.

Key Requirements:
  • Experience: At least 3-4 years in the data engineering field, with a demonstrated track record of managing data infrastructure and pipelines.
  • Programming Skills: Expertise in at least one high-level programming language, with a strong preference for candidates proficient in Java and Python.
  • Cloud Infrastructure: Proven experience in setting up, maintaining, and optimizing data infrastructures in cloud environments, particularly on AWS or Azure.
  • Tech Stack Proficiency: Hands-on experience with a variety of data technologies including, but not limited to:
    • Kafka for event streaming and stream processing
    • Kubernetes for container orchestration
    • AWS S3, Athena, and Glue for storage and ETL services
    • Spark for large-scale data processing
    • Debezium for change data capture
    • Apache Airflow for workflow management
  • Data Processing: Demonstrable skills in cleansing and standardizing data from diverse sources such as Kafka streams and databases.
  • Query Optimization: Proficient in tuning queries against large datasets, minimizing processing times without sacrificing result quality.
  • Problem-Solving Abilities: An analytical mindset with robust problem-solving skills, essential for identifying and addressing issues during data integration and schema evolution.
  • Cost Optimization Expertise: A keen eye for cost-saving opportunities without compromising system efficiency. Capable of architecting solutions that are robust, scalable, and cost-effective, ensuring optimal resource utilization and avoiding unnecessary expense in cloud and data-processing environments.