Sr DevOps Engineer (R1077687) in Warsaw, PL at IQVIA™

Date Posted: 11/27/2019

Job Snapshot

  • Employee Type:
    Full-Time
  • Location:
    Warsaw, PL
  • Experience:
    Not Specified
  • Date Posted:
    11/27/2019
  • Job ID:
    R1077687

Job Description

IQVIA™ is the leading human data science company focused on helping healthcare clients find unparalleled insights and better solutions for patients. Formed through the merger of IMS Health and Quintiles, IQVIA offers a broad range of solutions that harness the power of healthcare data, domain expertise, transformative technology, and advanced analytics to drive healthcare forward.

Join us on our exciting journey!

Senior DevOps Engineer – Warsaw

Real-World & Analytics Solutions (RWAS Technology).

We are seeking a DevOps Engineer to join our Predictive Analytics (PA) team in Warsaw. You will play an important role in scaling and automating our development activities, working closely with our wider team who are collectively building innovative machine learning solutions, addressing some of the most pressing issues in healthcare, such as under-diagnosis of rare diseases and identifying patients at high risk of disease progression.

The PA team is currently based in London and Philadelphia. You will be one of the first hires into a new software engineering function in Warsaw. Initially there will be a single Warsaw-based scrum team, with the expectation that we will build additional scrum teams over the next one to two years. The cross-functional team will comprise both software engineering and DevOps experts, with daily collaboration expected between the two. There will also be an ongoing need to collaborate closely with colleagues in London and Philadelphia.

Working with petabytes of data, modern distributed systems, advanced data science models and challenging requests in an agile environment, you will help to set the standards for the automation and scalability of our data-driven analytics products. This crucial role will involve identifying opportunities to scale every aspect of what we do, including data processing, increasing adoption and utilisation of internally developed tools, and other data engineering activities that will help our team maximise our efficiency in delivering data science projects. You will have the opportunity to provide technical guidance to data scientists in the UK and US delivery teams, and with your team you will set the technical standards around how we work.

You will be responsible for creating robust processes and mechanisms around the tools that we develop and use, making the deployment and use of our tools fast and simple for data engineers and data scientists alike. Amongst other things, these tools will include:

  • Data engineering tools to enable complex querying and diverse feature engineering tasks on very large data (hundreds of millions of patients) from a Hadoop environment using Python and PySpark.
  • Analytical pipelines to support a range of analytical functions, from data inspection through to advanced machine learning solutions.
  • Proof-of-concept work to validate the use of new technologies and, if successful, team-wide roll-outs that will allow us to scale more effectively and tackle more complex problems.

Responsibilities

  • Plan, conduct, and coordinate the application of continuous build and deployment delivery mechanisms, leveraging innovative programmatic solutions and automation.
  • Proactively identify and drive automation of tasks associated with end-to-end deployment delivery.
  • Support and work alongside Agile development teams.
  • Introduce best practices and tools to optimally automate deployment processes.
  • Work closely with the engineering team to ensure stability, reliability and performance of the environment.
  • Engage in the team’s agile practices such as daily stand-ups, sprint planning, sprint refinements, and retrospectives; work to fortnightly sprints and be proactive in suggesting improvements to team processes that will make the team a stronger unit.
  • Work with your team lead to maintain a healthy scrum backlog, engaging in story refinement sessions and contributing your own ideas.

Our ideal candidate will have:

  • 3+ years of experience with medium to large-scale Linux production environments.
  • Strong Linux administration skills.
  • Strong scripting and automation skills (Python, Perl, Bash).
  • Experience with automation/configuration management tools such as Puppet, Chef, Ansible or similar.
  • Experience with continuous integration systems (preferably Jenkins).
  • Experience with Docker.
  • Proven experience of working with an Agile team and helping develop a functioning DevOps process.
  • Practical experience with the Hadoop ecosystem, including tools such as YARN, Hive, Impala, and HDFS, and some knowledge of Hadoop cluster architecture.
  • Fluency in English (spoken and written).

We would also appreciate if you have some of the following:

  • Experience working as part of, or in support of, a data science team with a machine learning focus.
  • Some experience with cloud technologies (AWS, Azure, or OpenStack).

Join Us

Making a positive impact on human health takes insight, curiosity, and intellectual courage. It takes brave minds, pushing the boundaries to transform healthcare. Regardless of your role, you will have the opportunity to play an important part in helping our clients drive healthcare forward and ultimately improve outcomes for patients.

Forge a career with greater purpose, make an impact, and never stop learning.


