Data Engineer
Pune, Maharashtra
Joining: Immediate
Type: Full-Time
Experience Level: 2 – 5 Years
About Us:
Turinton Consulting is the brainchild of Vikrant Labde and Nikhil Ambekar, the duo who previously founded Cuelogic Inc., a global outsourced product development firm. Cuelogic's success led to its acquisition by LTIMindtree, the publicly traded, $4 billion IT giant. With a proven track record of engineering cutting-edge data and AI products, Cuelogic's teams have partnered with companies across the spectrum, from high-growth startups and unicorns to Fortune 500 firms. Now, with Turinton Consulting, we are channeling this expertise into building state-of-the-art AI and ML solutions that empower organizations to harness the full potential of artificial intelligence. As part of our ambitious growth, we are seeking a Data Engineer to help architect and build the scalable data platforms that power these advanced analytics and AI capabilities.
Job Summary:
As a Data Engineer, you will be responsible for architecting and
overseeing the development of scalable, high-performance data systems that can
handle massive amounts of data efficiently. You will leverage your deep technical
expertise in the Modern Data Stack, including hyperscale cloud environments,
Snowflake, Databricks, and other key tools, to build robust and scalable data
solutions. You will work closely with cross-functional teams to design and implement
data architectures that meet the needs of our clients and enable advanced analytics
and AI capabilities.
Key Responsibilities:
- Architect and Design Big Data Systems:
  - Lead the architecture, design, and implementation of large-scale Big Data systems using the Modern Data Stack.
  - Develop scalable and secure data architectures that support advanced analytics, AI, and machine learning workloads.
  - Ensure data architectures meet high standards for performance, reliability, and scalability.
- Hands-on Development and Implementation:
  - Actively participate in the hands-on development of data solutions, including data ingestion, processing, storage, and analytics.
  - Build and optimize data pipelines using tools like Apache Spark, Databricks, and other ETL/ELT frameworks.
  - Design and implement data models and schemas optimized for Snowflake and other cloud data platforms.
- Cloud Infrastructure and Hyperscalers:
  - Architect and manage cloud-based data infrastructure on hyperscale platforms such as AWS, Azure, or Google Cloud.
  - Leverage cloud-native tools and services to build robust and scalable data ecosystems.
- Collaboration and Leadership:
  - Support aligned sales teams on Big Data opportunities (pre-sales).
  - Work closely with data engineers, data scientists, and other stakeholders to ensure that data architectures align with business objectives.
  - Provide technical leadership and mentorship to junior team members, fostering a culture of innovation and excellence.
  - Collaborate with the DevOps team to implement CI/CD pipelines for data solutions, troubleshoot issues, and ensure smooth deployments.
- Innovation and Continuous Improvement:
  - Stay up to date with the latest trends and advancements in Big Data, cloud technologies, and the Modern Data Stack.
  - Continuously evaluate and integrate new tools and technologies that can enhance the performance and capabilities of data systems.
Qualifications:
- Experience:
  - 2 – 5 years of experience in designing and implementing large-scale Big Data systems, with hands-on experience working with the Modern Data Stack.
  - Proven experience in architecting and building data solutions on hyperscale cloud platforms (AWS, Azure, Google Cloud).
  - Extensive hands-on experience with Snowflake, Databricks, Apache Spark, and other key tools in the Modern Data Stack.
  - Strong background in data modeling, ETL/ELT processes, and data pipeline development.
- Technical Skills:
  - Deep understanding of cloud-based data storage solutions, including data lakes, data warehouses, real-time streaming, and event-driven architectures.
  - Expertise in data integration, processing, and transformation using tools like Apache Spark, Databricks, and dbt.
  - Proficiency in SQL and programming languages such as Python, Java, or Scala, with experience in optimizing queries and scripts for large datasets.
  - Strong knowledge of data governance, security, and compliance best practices.
  - Familiarity with cloud platforms like AWS, Azure, or Google Cloud.
  - Knowledge of microservices architecture and design patterns is a plus.
- Soft Skills:
  - Excellent problem-solving skills and the ability to think critically and strategically about data architecture and design.
  - Strong communication skills, with the ability to articulate complex technical concepts to both technical and non-technical stakeholders.
  - Proven leadership and mentoring abilities, with a passion for fostering talent and driving technical excellence.
- Education:
  - Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
Why Join Us:
- Be part of a cutting-edge team driving innovation in Big Data and AI.
- Work closely with the company's senior leadership.
- Work on challenging and impactful projects that push the boundaries of technology.