Sr IT Architect (R1061331) in Bangalore, India at IQVIA™

Date Posted: 2/7/2020

Job Description

IQVIA™ is the leading human data science company focused on helping healthcare clients find unparalleled insights and better solutions for patients. Formed through the merger of IMS Health and Quintiles, IQVIA offers a broad range of solutions that harness the power of healthcare data, domain expertise, transformative technology, and advanced analytics to drive healthcare forward.

This requirement is for Hadoop experts:
Should have a good understanding of and background in the big data ecosystem (Hadoop, Kafka, Kudu, ZooKeeper, etc.)
Should have a good understanding of Hadoop security (preferably Apache Sentry).
Working knowledge of and experience with a wide range of big data components such as HDFS, MapReduce, Sqoop, Flume, Pig, Hive, and HBase.
Excellent understanding/knowledge of Hadoop architecture (1.x and 2.x) and its components, such as HDFS, YARN, JobTracker, TaskTracker, NameNode, DataNode, and the MapReduce paradigm.
Provide solutions for cluster capacity planning and create roadmaps for Hadoop cluster deployment.
Responsibilities also include any coding involved in Hadoop ecosystem technologies and assisting other developers.
Experience installing, configuring, supporting, and managing Hadoop clusters using Apache Hadoop, Cloudera Distribution Hadoop (CDH 5.x, preferred), and Hortonworks Data Platform (HDP 2.x) on Amazon Web Services (AWS).
Good working knowledge of Hadoop security mechanisms such as Kerberos and Sentry.
Experienced in Cloudera installation, configuration, and deployment on Linux distributions.
Commissioning and Decommissioning of nodes as required
Managing and monitoring Hadoop services such as the NameNode, DataNodes, and YARN.
Experienced in loading data into the cluster from dynamically generated files using Flume, from RDBMSs using Sqoop, and from the local file system.
Performance tuning and resolving Hadoop issues via the CLI or web UI.
Troubleshooting Hadoop cluster runtime errors and ensuring they do not recur.
Accountable for storage and volume management of Hadoop clusters.
Ensuring that the Hadoop cluster is up and running all the time (High availability, big data cluster etc.)
Evaluating Hadoop infrastructure requirements and designing/deploying solutions.
Backup and recovery tasks: creating snapshot policies and backup schedules, and recovering from node failures.
Working experience installing the various components and daemons of the Hadoop ecosystem.
Responsible for configuring alerts for the various services running in the Hadoop ecosystem.
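The administrative duties above (Sqoop/Flume ingestion, snapshot-based backup, node decommissioning) correspond to a handful of standard Hadoop CLI operations. The following is a minimal sketch only; all hostnames, paths, database names, and the exclude-file location are placeholders, and exact flags may vary by CDH/HDP version:

```shell
# Ingest a table from an RDBMS into HDFS with Sqoop
# (connection string and credentials are placeholders)
sqoop import \
  --connect jdbc:mysql://db-host:3306/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /data/raw/orders

# Load a file from the local file system into the cluster
hdfs dfs -put /tmp/export.csv /data/raw/

# Enable and take an HDFS snapshot for backup/recovery
hdfs dfsadmin -allowSnapshot /data/raw
hdfs dfs -createSnapshot /data/raw nightly-backup

# Decommission a DataNode: list it in the configured exclude
# file, then ask the NameNode and ResourceManager to re-read it
echo "worker-07.example.com" >> /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -refreshNodes
yarn rmadmin -refreshNodes
```

These commands assume a Kerberized cluster would first require a valid ticket (`kinit`), and that the exclude file is the one referenced by `dfs.hosts.exclude` in `hdfs-site.xml`.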

Join Us

Making a positive impact on human health takes insight, curiosity, and intellectual courage. It takes brave minds, pushing the boundaries to transform healthcare. Regardless of your role, you will have the opportunity to play an important part in helping our clients drive healthcare forward and ultimately improve outcomes for patients.

Forge a career with greater purpose, make an impact, and never stop learning.

Job ID: R1061331

