Data Engineer: Big Data
Company Name: IBM
Job Location: Pune, Maharashtra
Job Summary:
Introduction
At IBM, work is more than a job – it’s a calling: To build.
To design.
To code.
To consult.
To think along with clients and sell.
To make markets.
To invent.
To collaborate.
Not just to do something better, but to attempt things you’ve never thought possible.
Are you ready to lead in this new era of technology and solve some of the world’s most challenging problems? If so, let’s talk.
Your Role and Responsibilities
As a Data Engineer, you will develop and move data from operational and external environments to the business intelligence environment using Ab Initio software. Skills include designing and developing extract, transform, and load (ETL) processes.
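The extract, transform, and load flow mentioned above can be sketched in plain Python. This is a minimal illustration only, not Ab Initio code: the source records, field names, and in-memory "warehouse" are all hypothetical, and an Ab Initio graph would express the same stages with graphical components instead of functions.

```python
# Hypothetical ETL sketch: extract raw operational records, transform them
# for the business intelligence environment, and load them into a target.

def extract():
    # Extract: pull raw records from an operational source (stubbed here).
    return [
        {"id": 1, "amount": "120.50", "region": "west"},
        {"id": 2, "amount": "75.00", "region": "east"},
    ]

def transform(rows):
    # Transform: cast string amounts to floats and normalise region codes.
    return [
        {"id": r["id"], "amount": float(r["amount"]), "region": r["region"].upper()}
        for r in rows
    ]

def load(rows, target):
    # Load: append the cleaned rows into the target store (a list here).
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
```

In a production pipeline each stage would read from and write to real systems (files, queues, database tables) rather than in-memory lists, but the stage boundaries stay the same.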
Responsibilities:
Coordinate with multiple technical teams to ensure proper integration of functions, and identify and define the system enhancements needed to deploy new products and process improvements
Provide expertise in the area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint
Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
If you thrive in a dynamic, collaborative workplace, IBM provides an environment where you will be challenged and inspired every single day.
And if you relish the freedom to bring creative, thoughtful solutions to the table, there’s no limit to what you can accomplish here.
Required Technical and Professional Expertise
Minimum of 1-2 years of experience in Ab Initio development
Working knowledge of Ab Initio components, including PLAN, EME, and MHUB
Basic knowledge of metaprogramming, with awareness of at least one scheduler tool (CA7/ESP) for job monitoring
Basic knowledge of Ab Initio dependency analysis and lineage.
Basic knowledge of JCL and Unix commands, with work experience in Unix shell scripting
Good knowledge of SQL (Teradata/Oracle) and of Hadoop, Hive, Parquet, and Avro
Ability to design, test, debug and migrate Ab Initio code
Proven analytical and problem-solving skills
Solid understanding of data warehousing and slowly changing dimensions (SCD)
Able to identify and fix bugs
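The slowly changing dimension (SCD) knowledge listed above is commonly tested with Type 2 dimensions, where history is preserved by versioning rows instead of overwriting them. The sketch below illustrates that idea in Python; the key, column names, and in-memory dimension "table" are all hypothetical, and real warehouses would implement this in SQL or an ETL tool such as Ab Initio.

```python
from datetime import date

# SCD Type 2 sketch: when a tracked attribute changes, close out the
# current row (set its end_date) and insert a new versioned row, so the
# full history of the dimension is preserved.

def scd2_upsert(dim, key, attrs, as_of):
    # Find the current (open-ended) version of this business key, if any.
    current = next(
        (r for r in dim if r["key"] == key and r["end_date"] is None), None
    )
    if current and all(current[k] == v for k, v in attrs.items()):
        return dim  # no attribute change: keep the current version
    if current:
        current["end_date"] = as_of  # expire the old version
    dim.append({"key": key, **attrs, "start_date": as_of, "end_date": None})
    return dim

dim = []
scd2_upsert(dim, "C1", {"city": "Pune"}, date(2023, 1, 1))
scd2_upsert(dim, "C1", {"city": "Mumbai"}, date(2023, 6, 1))
# dim now holds two rows for C1: the Pune row closed on 2023-06-01,
# and an open-ended Mumbai row.
```

Type 1 (overwrite) and Type 3 (previous-value column) dimensions are simpler variants of the same update decision.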
Preferred Technical and Professional Expertise