Multiple positions. Duties include:
- Design and develop Ab Initio graphs and Unix shell scripts per ETL testing requirements.
- Run Unix shell scripts and one-time PL/SQL scripts to insert and update values in various tables.
- Use Python to unload and transfer high volumes of data.
- Perform data validation, data integrity and database performance checks, field size validation, check-constraint verification, and data manipulation and updates using SQL (see the first sketch below).
- Analyze large data sets by running Hive queries; implement partitioning, dynamic partitioning, and bucketing in Hive (see the second sketch below).
- Perform data analysis and quality checks on loads to the staging and operational data store areas for the different source systems.
- Design and implement end-to-end data pipelines on AWS using services such as S3, Glue, Redshift, and Lambda, enabling data integration and transformation.
- Develop ETL processes using Python and AWS Glue to extract data from various sources (see the third sketch below).
- Extract data from Oracle databases, JSON, XML, Excel, and flat files and load the results into the file system (see the fourth sketch below).
- Prepare test cases per requirements and monitor SIT, UAT, and PROD batches after development is complete.

Requires a bachelor's degree in any computer or engineering field, or a foreign equivalent, plus 3 years of IT industry experience; or a master's degree in any computer or engineering field, or a foreign equivalent, plus 1 year of IT industry experience.
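First sketch: a minimal, illustrative example of the SQL-based validation duty, assuming an Oracle source reachable through the cx_Oracle driver. The credentials, DSN, table names, column names, and the 100-character width are all hypothetical placeholders, not details from this posting.

```python
import cx_Oracle

# Connection details are placeholders for illustration only.
conn = cx_Oracle.connect("app_user", "app_password", "db-host/ORCLPDB1")
cur = conn.cursor()

# Field-size validation: flag rows whose value exceeds the declared width.
cur.execute("""
    SELECT COUNT(*)
    FROM customer_stg
    WHERE LENGTH(customer_name) > 100
""")
oversized = cur.fetchone()[0]

# Check-constraint style validation: amounts must be non-negative.
cur.execute("""
    SELECT COUNT(*)
    FROM orders_stg
    WHERE order_amount < 0
""")
negative = cur.fetchone()[0]

print(f"oversized names: {oversized}, negative amounts: {negative}")
cur.close()
conn.close()
```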
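Second sketch: one way the Hive partitioning and bucketing duty might look, assuming a Hive-enabled SparkSession. The table and column names (sales_raw, sales_part, region, customer_id) are hypothetical.

```python
from pyspark.sql import SparkSession

# Assumes a Hive metastore is available; table/column names are hypothetical.
spark = (SparkSession.builder
         .appName("hive-partition-bucket-sketch")
         .enableHiveSupport()
         .getOrCreate())

# Allow dynamic partition inserts (partition values come from the data).
spark.sql("SET hive.exec.dynamic.partition = true")
spark.sql("SET hive.exec.dynamic.partition.mode = nonstrict")

# Target table: partitioned by region, bucketed by customer_id into 8 buckets.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_part (
        order_id    BIGINT,
        customer_id BIGINT,
        amount      DECIMAL(12,2)
    )
    PARTITIONED BY (region STRING)
    CLUSTERED BY (customer_id) INTO 8 BUCKETS
    STORED AS ORC
""")

# Dynamic-partition insert: each distinct region lands in its own partition.
spark.sql("""
    INSERT OVERWRITE TABLE sales_part PARTITION (region)
    SELECT order_id, customer_id, amount, region
    FROM sales_raw
""")
```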
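Third sketch: a hedged outline of the Python/AWS Glue ETL duty, assuming the code runs inside a Glue job environment where the awsglue library is provided. The S3 bucket names, paths, and field mappings are hypothetical.

```python
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job boilerplate; JOB_NAME is supplied by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read JSON source files from S3 (bucket and path are hypothetical).
source = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-bucket/raw/orders/"]},
    format="json",
)

# Transform: rename and cast fields to match the target schema.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("orderId", "string", "order_id", "long"),
        ("amount", "string", "amount", "double"),
    ],
)

# Load: write Parquet to S3 for downstream Redshift COPY / Spectrum access.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)

job.commit()
```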
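Fourth sketch: the multi-format extraction duty expressed with pandas readers, one per source format named above. The file paths, sheet name, and delimiter are hypothetical, and pd.read_xml requires pandas 1.3 or newer.

```python
import pandas as pd

# File paths are hypothetical; each reader targets one source format.
orders_json = pd.read_json("input/orders.json", lines=True)
orders_xml = pd.read_xml("input/orders.xml")            # pandas >= 1.3
orders_xlsx = pd.read_excel("input/orders.xlsx", sheet_name="orders")
orders_flat = pd.read_csv("input/orders.dat", sep="|")  # pipe-delimited flat file

# Land everything on the file system in one common format.
for name, df in [("json", orders_json), ("xml", orders_xml),
                 ("xlsx", orders_xlsx), ("flat", orders_flat)]:
    df.to_csv(f"landing/orders_{name}.csv", index=False)
```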
Requires 1 year of experience in Ab Initio 4.0.2.3, Conduct>It, Express>It, ACE, BRE, m-hub, and DQE; Hadoop ecosystem components such as HDFS and Hive; MS SQL Server, Oracle 11g/10g/9i, Teradata, DB2, MySQL, MongoDB, and Exadata; data modeling, including star schema and snowflake modeling, fact and dimension tables, and physical and logical data modeling with Erwin and Oracle Designer; UNIX shell scripting, SQL, and PL/SQL; UNIX, Linux, and Windows XP/NT; Toad, SQL*Plus, and SQL Developer; Autosys, Control-M, and Control Center; AWS and Azure; MS Office, MS Excel, and MS Visio; JIRA and ServiceNow; and data profiling, data cleansing, and data validation.