Hadoop sales roles require strong programming skills in Java, Python, and Scala, along with strong verbal and written communication skills for working with customers and partners. The role demands a good understanding of analysis, design, coding, and testing, and includes crafting enterprise data solutions for large organizations.



Hire Hadoop Salespeople

    9 jobs found, prices in USD
    Big data analysis using Hadoop, NiFi (6 days left)
    VERIFIED

    Can you work with big data using NiFi, Kafka, Sqoop, and Hadoop?

    $20 (Avg Bid)
    1 bid

    - Must be available full-time EXCLUSIVELY (9 am-6 pm on weekdays, IST).
    - Project duration: 6-12 months.
    - Remote/WFH.
    JD: "Minimum 5+ years of relevant experience in microservices development. Strong in the technologies below: Spring Boot, Spring Cloud, Java 8 and above, Postgres/Oracle, AWS/GCP, Git. Significant development experience in Java focused on data, APIs & microservices. Strong understanding of data and data integration concepts (SQL, data governance, enterprise patterns). Proven problem-solving skills and the ability to propose new solutions. Experience with development best practices (TDD/BDD, CI/CD, and other DevSecOps concepts). APIs and related concepts (OData, GraphQL, API gateways, OAuth, Swagger, etc.). Excellent written & verbal communication skills are essential. Good e...

    $9 / hr (Avg Bid)
    2 bids
    Data Analysis using Map/Reduce -- 2 (2 days left)
    VERIFIED

    One of the advantages of cloud computing is its ability to deal with very large data sets while keeping response times reasonable. The map/reduce paradigm is typically used for these problems, in contrast to the RDBMS approach to storing, managing, and manipulating data: an ad hoc analysis of a large data set does not require designing a schema and loading the data into an RDBMS first. Hadoop is a widely used open-source map/reduce platform, and Hadoop Map/Reduce is a software framework for writing applications that process vast amounts of data in parallel on large clusters. In this project, you will use the IMDb (Internet Movie Database) dataset and develop programs that extract interesting insights from it using the Hadoop map/reduce paradigm. Please use the following links f...
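
    As a concrete illustration of the map/reduce pattern this posting describes, below is a minimal Hadoop Streaming sketch in Python that computes the average rating per year. The tab-separated input layout (title_id, title, year, rating) is an assumption for illustration; the actual IMDb files linked in the project will differ.

    # mapper.py -- emits "year<TAB>rating" for each valid input row.
    # Assumed tab-separated columns: title_id, title, year, rating.
    import sys

    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 4:
            continue  # skip malformed rows
        year, rating = fields[2], fields[3]
        try:
            print(f"{year}\t{float(rating)}")
        except ValueError:
            pass  # header row or non-numeric rating

    # reducer.py -- streaming input arrives sorted by key, so a running
    # total per year is enough to compute each average.
    import sys

    current_year, total, count = None, 0.0, 0
    for line in sys.stdin:
        year, rating = line.rstrip("\n").split("\t")
        if year != current_year and current_year is not None:
            print(f"{current_year}\t{total / count:.2f}")
            total, count = 0.0, 0
        current_year = year
        total += float(rating)
        count += 1
    if current_year is not None:
        print(f"{current_year}\t{total / count:.2f}")

    A run would look roughly like: hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar -files mapper.py,reducer.py -mapper "python3 mapper.py" -reducer "python3 reducer.py" -input <imdb-data> -output <output-dir>; the streaming jar path varies by installation.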

    $104 (Avg Bid)
    7 bids
    Big Data (2 days left)

    Experience providing guidance on big data technologies (MapReduce, Hadoop, Spark, Cassandra)

    $16 (Avg Bid)
    4 bids

    Looking for someone who has at least 4-5 years of experience in the big data field and hands-on experience with PySpark, HDFS, Hive, Impala, shell scripting, SQL, HQL, and a scheduling tool like Autosys or Airflow. This is a long-term project, and we will pay on a monthly basis.
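
    For context on the stack named above, here is a minimal PySpark sketch of a typical daily job on such a project: read raw data from HDFS, aggregate it, and publish a Hive table that Impala can then query. The paths, database, and column names are assumptions for illustration only.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hive support lets saveAsTable register the result in the metastore.
    spark = (SparkSession.builder
             .appName("daily_orders_rollup")   # hypothetical job name
             .enableHiveSupport()
             .getOrCreate())

    # Raw events on HDFS; the path is an illustrative assumption.
    orders = spark.read.parquet("hdfs:///data/raw/orders")

    # Roll up order count and revenue per calendar day.
    daily = (orders
             .groupBy(F.to_date("order_ts").alias("order_date"))
             .agg(F.count("*").alias("orders"),
                  F.sum("amount").alias("revenue")))

    # Overwrite the Hive rollup table; Impala sees the new data after a
    # REFRESH or INVALIDATE METADATA on the table.
    daily.write.mode("overwrite").saveAsTable("analytics.daily_orders")

    spark.stop()

    In practice a scheduler such as Autosys or Airflow would launch this script through spark-submit on a fixed cadence.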

    $1136 (Avg Bid)
    11 bids

    Looking for someone who has in-depth knowledge of Unix/shell scripting (bash), PySpark, HDFS, Hive, Impala, and a scheduling tool such as CA Automic, Control-M, or Autosys.
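
    Because this posting pairs shell scripting with enterprise schedulers, a common pattern is a thin wrapper the scheduler calls, which submits the Spark job and surfaces a clean exit code for alerting. A hedged Python sketch of that wrapper follows; the job path and spark-submit options are illustrative assumptions.

    #!/usr/bin/env python3
    # Wrapper a scheduler (CA Automic / Control-M / Autosys) might invoke:
    # submit the PySpark job and propagate its exit code.
    import subprocess
    import sys

    CMD = [
        "spark-submit",
        "--master", "yarn",
        "--deploy-mode", "cluster",
        "/opt/jobs/daily_rollup.py",   # hypothetical job script
    ]

    result = subprocess.run(CMD)
    if result.returncode != 0:
        print(f"spark-submit failed with code {result.returncode}",
              file=sys.stderr)
    sys.exit(result.returncode)  # non-zero exit triggers the scheduler alert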

    $1126 (Avg Bid)
    14 bids
    Cloudera Disk Expansion/Addition of Name Nodes & Data Nodes (11 hours left)
    VERIFIED

    - Cloudera HDFS disk space expansion needed on 3 existing data nodes on VMware ESXi
    - Need to add 3 additional NameNodes
    - Need to add 3 additional DataNodes
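
    A capacity check before and after work like this is a sensible acceptance test. The sketch below shells out to hdfs dfsadmin -report and extracts the configured capacity and live-DataNode count; the report wording varies between Hadoop versions, so the patterns may need adjusting, and the command requires HDFS admin rights.

    #!/usr/bin/env python3
    # Parse "hdfs dfsadmin -report" for a quick before/after comparison.
    import re
    import subprocess

    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout

    capacity = re.search(r"Configured Capacity:\s*(\d+)", report)
    live = re.search(r"Live datanodes\s*\((\d+)\)", report)
    print("Configured capacity (bytes):", capacity.group(1) if capacity else "n/a")
    print("Live DataNodes:", live.group(1) if live else "n/a")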

    $200 (Avg Bid)
    1 bid
    HiBench build and run (2 hours left)
    VERIFIED

    Need help building and running HiBench with Hadoop, Spark, etc. Regards, VSR

    $15 (Avg Bid)
    2 bids

    Python ETL Developer / Data Engineer
    - Parse unstructured and semi-structured data such as XML.
    - Design and develop efficient mappings and workflows to load data into Data Marts.
    - Map XML DTD schemas in Python (customized table definitions).
    - Write efficient queries and reports in Hive or Impala to extract data on an ad hoc basis for analysis.
    - Identify performance bottlenecks in ETL jobs and tune them by enhancing or redesigning the jobs.
    - Be responsible for performance tuning of ETL mappings and queries.
    - Import tables and all necessary lookup tables to facilitate the ETL process for the daily XML files as well as the very large (multi-terabyte) historical XML data files.
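
    To make the XML-parsing requirement concrete, here is a small Python sketch of the usual streaming approach: walk a large XML file with ElementTree.iterparse so multi-terabyte history files never have to fit in memory, and emit tab-delimited rows that a Hive external table or LOAD DATA step can consume. The record tag and field names are invented for illustration.

    #!/usr/bin/env python3
    # Stream an XML file and write one tab-delimited row per record.
    import csv
    import sys
    import xml.etree.ElementTree as ET

    def xml_to_rows(xml_path, out_path):
        with open(out_path, "w", newline="") as out:
            writer = csv.writer(out, delimiter="\t")
            # iterparse yields elements as they close, keeping memory flat
            for _, elem in ET.iterparse(xml_path, events=("end",)):
                if elem.tag == "order":            # hypothetical record tag
                    writer.writerow([
                        elem.findtext("id", ""),
                        elem.findtext("customer", ""),
                        elem.findtext("amount", ""),
                    ])
                    elem.clear()                   # release the parsed subtree

    if __name__ == "__main__":
        xml_to_rows(sys.argv[1], sys.argv[2])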

    $174 (Avg Bid)
    6 bids
