Data Engineer - US based Pharma MNC

Job description

  • Create and maintain optimal data pipeline architecture for ETL/ELT into structured data
  • Assemble large, complex data sets that meet functional and non-functional business requirements; create and maintain multi-dimensional models such as star and snowflake schemas, including normalization, de-normalization, and joining of datasets
  • Expert-level experience creating fact tables and dimension tables and ingesting datasets into cloud-based tools; job scheduling and automation experience is a must
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Set up and maintain data ingestion, streaming, scheduling, and job-monitoring automation; connectivity between Lambda, Glue, S3, Redshift, and Power BI must be maintained for uninterrupted automation
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and "big data" technologies such as AWS and Google Cloud
  • Build analytics tools that use the data pipeline to provide actionable insight into customer acquisition, operational efficiency, and other key business performance metrics
  • Work with cross-functional teams, including external consultants and IT teams, to assist with data-related technical issues and support their data infrastructure needs
  • Create data tools for analytics and data science team members that help them build and optimize our product into an innovative industry leader


  • 4-8 years of in-depth, hands-on experience in data warehousing (Redshift or any OLAP system) to support business/data analytics and business intelligence (BI)
  • Advanced SQL knowledge and query-authoring experience with relational databases, plus working familiarity with a variety of databases and cloud data warehouses
  • Data model development: creating additional dimensions and facts, building views and procedures, and enabling programmability to facilitate automation
  • Experience compressing data into Parquet to improve processing; fine-tuned SQL programming skills required
  • Experience building and optimizing "big data" pipelines, architectures, and data sets
  • Experience performing root-cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Strong analytic skills for working with structured and unstructured datasets
  • Experience manipulating, processing, and extracting value from large, unrelated datasets


  • Excellent Compensation

  • Global Exposure
  • Visible Career Path
  • General Shift
  • 5 Days Work
Other notes
For more related job opportunities visit