Big Data Engineer - Java/Python/Hadoop/Scala (6-10 Yrs) Chennai (Systems/Product Software) by Getinz
Job Summary
Salary - Negotiable
Job type - Full-time, in-office
Employment type - On company payroll
XpatJobs (registered since November 2017) posted this job 11 days ago
Criteria required for the job
Minimum experience - Fresher
Who can apply - Both male and female
Job Description
Desired profile:
- Minimum 6 years of experience in Big Data technologies.
- Background experience in any one of the following databases is mandatory: MySQL, Oracle, PostgreSQL, MS SQL.
- Must have set up a big data analytics platform from a data warehouse using Java, Hadoop, Pig, Hive, NoSQL DBs, Spark, AWS Redshift.
- Should have worked with at least one programming/scripting language such as Java, Python, or Scala.
- Experience with AWS/Azure or related cloud-based solutions.
- Hands-on experience with scripting on Unix/Linux.
- Must have experience installing DWH (data warehouse) solutions.
- Good to have: experience with BI tools such as Tableau and QlikView.
- Exemplary technical knowledge with good communication skills.
- An excellent team player, capable of handling an individual-contributor role when required.
- A quick learner with boundless spirit to work.
- Ready to take ownership of and responsibility for related tasks and modules.
- Adaptable to new cutting-edge technologies and flexible to learn.
- Good at decision-making and handling crunch situations.
Roles and Responsibilities:
- Design, implement, and support data models and ETLs that provide structured and timely access to large datasets.
- Build fault-tolerant, self-healing, adaptive, and highly accurate big data solutions.
- Performance-tune queries running over billions of rows in AWS Redshift/Hive/Spark.
- Create and maintain optimal data pipeline architecture.
- Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders, including the executive, product, data, and design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for the analytics and data science teams that help them build and optimize our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Required Skills: English
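To illustrate the kind of ETL work the responsibilities above describe, here is a minimal, stdlib-only Python sketch of an extract-transform-load step. All table and column names are hypothetical, and SQLite stands in for the Redshift/Hive/Spark engines a real pipeline at this scale would target:

```python
import sqlite3

# Extract: load raw event rows into a staging table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (customer_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [("c1", 10.0), ("c1", 5.0), ("c2", 7.5)],
)

# Transform + load: aggregate spend per customer into a reporting table,
# the sort of derived dataset an analytics tool would query.
conn.execute(
    """CREATE TABLE customer_spend AS
       SELECT customer_id, SUM(amount) AS total_spend
       FROM raw_events
       GROUP BY customer_id"""
)

rows = conn.execute(
    "SELECT customer_id, total_spend FROM customer_spend ORDER BY customer_id"
).fetchall()
print(rows)  # [('c1', 15.0), ('c2', 7.5)]
conn.close()
```

The same extract/transform/load shape carries over to Spark or Redshift; only the engine and the scale change.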
Company Profile
Posted by XpatJobs on behalf of Getinz
Contact XpatJobs
Address: Chennai, Tamil Nadu, India