The candidate should have 4-7 years of experience with Data Warehousing Systems involving various Databases, ETL and BI tools, in particular Hadoop HDFS and Google Cloud Platform (GCP).
The candidate should be highly proficient in Software Releases using tools like Jenkins and Ansible, Source Code Administration and System Upgrades, and should have a good understanding of ETL development and administration.
The candidate should have good Administrative and Connectivity experience with the following databases: Teradata, Oracle, MySQL and SQL Server.
Hands-on experience administering, supporting and upgrading Hadoop (HDFS) systems using tools like YARN (Hadoop Resource Management).
Highly skilled in Apache Airflow, including DAG creation, dependency management and orchestration (an illustrative DAG sketch follows this list).
Hands-on experience with distributed open-source query engines like Hive, Presto, Impala and Spark.
A good understanding of Linux operating systems and their administration.
Extensive scripting experience in Bash and Python.
Hands-on experience of Networking, Security, LDAP and other administrative functions within multi-tier, high-availability cloud (GCP) and on-premises (Linux and Windows) enterprise environments.
Demonstrable experience of ETL technologies like ODI, Informatica, Apache Airflow and SSAS.
Demonstrable experience of SQL and PL/SQL.
Experience working with big data sets and master data management (MDM).
High attention to detail and an appreciation of the importance of data integrity.
A highly motivated self-starter.
Experience working in an Agile environment.
A good understanding of BI Reporting, Modelling and Analytics using technologies like SSAS Tabular Modelling and Power BI would be advantageous.
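
By way of illustration only, the sketch below shows the kind of Airflow DAG creation and dependency management referred to above. It assumes Apache Airflow 2.x; the DAG id, schedule and task commands are hypothetical placeholders, not part of any specific system described here.

# A minimal sketch of an Apache Airflow 2.x DAG (hypothetical names and commands).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_etl_pipeline",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",         # run once per day
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo 'extract data'")
    transform = BashOperator(task_id="transform", bash_command="echo 'transform data'")
    load = BashOperator(task_id="load", bash_command="echo 'load data'")

    # Dependency management: extract runs first, then transform, then load.
    extract >> transform >> load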