
Need someone with Spark, Scala, Hadoop and Hive knowledge

₹12500-37500 INR

Closed
Posted over 1 year ago


Paid on delivery
Anyone with knowledge of the following technologies can bid:
- Programming languages: Scala (must), Python as well
- Hands-on experience with Spark
- Hands-on experience with the Hadoop ecosystem: Hive, Sqoop, SQL queries, Unix
- Cloud experience on Cloudera or AWS
- Oozie workflows
- Experience creating CI/CD pipelines
- Unit/JUnit testing, integration or end-to-end testing
- Kafka
Project ID: 34535500

About the project

7 proposals
Remote project
Active 2 years ago

7 freelancers are bidding on average ₹22,786 INR for this job
Hello. I am an expert in Big Data analytics with 3 years of experience in this field. I have worked as a data analyst on a big data project team, handling data sources (structured and unstructured), data cleaning, data mining, and ETL using Apache Spark (Python, Scala). I have extensive experience in data warehousing with Hive and HBase, and with Apache Kafka. I am also skilled in data visualization with Tableau, Pentaho, and Grafana. So I think I am a perfect fit for your project. If you want to hire me, please contact me on chat. Thanks.
₹25,000 INR in 7 days
0.0 (0 reviews)
I have knowledge in software testing, including web and mobile testing, and have handled 4 testing projects. I also have experience in automation testing for web and mobile using Appium.
₹25,000 INR in 7 days
0.0 (0 reviews)
Hi, I have 8 years of experience in the IT industry and 3+ years of experience with big data technologies like Spark, Scala, Hadoop, Kafka, Unix, Hive, SQL, JUnit, etc. I would be a good fit for your requirement. Thanks, Kopal
₹25,000 INR in 15 days
0.0 (0 reviews)
Hello, I can help you with the project. I am a computer engineer and Big Data expert. Throughout my professional career I have developed expertise in implementing data storage, transformation, and visualization solutions, with highlights including the development of real-time flows and service APIs to support business needs. Going into detail, I have the following experience:
- Architecture design based on Cloudera/Hortonworks distributions and Azure Cloud
- Distributed data storage and processing: HDFS, Hive, Impala, and Kudu
- NoSQL databases: MongoDB, Couchbase, Cassandra
- Data ingestion in Big Data environments: Apache NiFi, Flume, Kafka
- In-memory data processing for batch and streaming: Apache Spark
- User tools: Hue, Oozie, and Sqoop
- Indexing and search: ELK, SOLR
- Machine learning project methodology for predictive models
- Programming languages: Java, Kotlin, Scala, Python, R, C++, C, VHDL, JavaScript, HTML, AngularJS
- Data processing libraries: NumPy, Pandas, scikit-learn
- Neural network tools: TensorFlow and Keras
- Anaconda and Jupyter Notebook
- Azure, Azure Data Factory, Databricks
₹25,000 INR in 7 days
0.0 (0 reviews)
Currently working as a Hadoop administrator on the Cloudera distribution for 3 clusters, ranging from POC to PROD. Involved in designing and deploying a multitude of applications utilising almost all of the AWS stack. Responsible for cluster maintenance, commissioning and decommissioning data nodes, cluster monitoring and troubleshooting, and managing and reviewing data backups and Hadoop log files. Architecture design and implementation of Hadoop deployment, configuration management, backup, and disaster recovery systems and procedures. Responsible for cluster availability and experienced in on-call support. Experienced in setting up projects and volumes for new Hadoop projects. Involved in snapshots and HDFS data backup to maintain backups of cluster data, including remotely. Experience importing and exporting data from databases like MySQL and Oracle. Good knowledge of Kerberos security.
Responsibilities:
- Installing, configuring, and monitoring Linux
- Installing, upgrading, and managing Hadoop YARN clusters
- Configuring Hadoop high availability
- MapReduce performance tuning
- Managing and reviewing Hadoop and HBase log files
- Provisioning, installing, configuring, monitoring, and maintaining HDFS, YARN, HBase, Flume, Sqoop, Oozie, Pig, and Hive
- Recovering from node failures and troubleshooting common Hadoop cluster issues
- Scripting Hadoop package installation and configuration
₹22,000 INR in 7 days
0.0 (0 reviews)

About the client

UJJAIN, India
5.0
40
Payment method verified
Member since Oct 27, 2018

Client verification

Freelancer ® is a registered Trademark of Freelancer Technology Pty Limited (ACN 142 189 759)
Copyright © 2024 Freelancer Technology Pty Limited (ACN 142 189 759)