Hadoop Data And Information Specialist

Details of the offer

The Hadoop Data and Information Specialist will be responsible for designing, developing and supporting application solutions, with a primary focus on Hadoop, for a Financial Services environment. This role will be vital in building new data pipelines from various structured and unstructured sources into Hadoop. The successful candidate must be eager to wear multiple hats and be capable of picking up new technologies at a fast pace.


- Build a strong conceptual understanding of the context and business requirements; understand business needs and high-level designs, produce low-level design documents, and implement code in accordance with best practices.
- Hadoop development and implementation.
- Loading from disparate data sets.
- Pre-processing using Hive and Pig.
- Ability to perform data quality checks in a methodical manner to understand how to accurately utilize client data.
- Expert-level programming skills in the Hadoop ecosystem are required to meet the challenges of advanced data manipulation, complicated programming logic and large data volumes.
- Ability to communicate results and methodology to the project team and clients, and to work in an offshore/onshore model.
- Ability to meet deadlines and thrive in a banking environment.
- Provide solutions for data-driven applications involving large and complex data, including reconciliation and test cases.
- Understand the customer's business processes and the pain areas that need attention.
- Source system understanding and analysis.
- Solution Architecture for the entire flow from source to end reporting data marts.
- Design conceptual and physical data models for a global data warehouse in the Hadoop world (ETL versus ELT).
- High-level and low-level design for ETL components in Hadoop.
- Test prototypes and oversee handover to operational teams.
- Propose best practices/standards.
- Hands-on work with Sqoop, MapReduce, Hive transformations and combiners.
- Build monitoring and testing mechanisms around Sqoop jobs and data transformations.
- Continuously improve the current Hadoop setup in terms of scalability, reliability and monitoring.
- Analyse and enhance the architecture of the current implementation.
- Build and manage customer relationships
- Manage personal delivery on projects and enhancements
- Ensure personal service level agreement standards are met
- Implement initiatives to improve application performance
- Ensure quality of programming code
- Translate business requirements into system requirements
- Design and document robust, scalable solutions according to set standards
- Ensure accuracy of code and adherence to requirements
- Ensure all production changes are managed within the release cycle
- Participate in the development of key standards
- Seek new ways to optimise or innovate in the use of technology.
- Ensure personal adherence to agreed governance procedures
- Proactively identify and manage risks
- Design and implement effective cross-functional business intelligence systems and processes.
- Be analytical, with an even mix of business acumen and technical capability.
- Work with analysts, managers and executives to understand business needs, and with source owners to understand the data sources.
- Bring innate curiosity and analytical capability, with a passion for learning.
- Translate business and technical requirements into efficient sustainable solutions
- Be able to do gap and impact analysis on requirements.
- Accept coaching and mentoring from senior developers/architects
- Do QA on designs and development and completed projects as required
- Complete documentation of requirements and development according to defined standards.
- Deliver the necessary documentation where required.
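For context, the MapReduce and combiner work listed above follows the classic map/shuffle/reduce pattern. The following is a minimal pure-Python sketch of that pattern for illustration only; it is not actual Hadoop code, and the function names are hypothetical:

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every input record.
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Shuffle: group all values by key, as Hadoop does between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: sum the counts per word (a combiner would do the same locally
    # on each mapper's output before the shuffle).
    return {word: sum(counts) for word, counts in grouped.items()}

records = ["big data", "big pipelines"]
result = reduce_phase(shuffle_phase(map_phase(records)))
# result == {"big": 2, "data": 1, "pipelines": 1}
```

In a real Hadoop job the same map and reduce logic would be expressed as Java Mapper/Reducer classes or as Hive/Pig transformations, with the framework handling the shuffle.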


- Tertiary qualifications with majors in at least one of the following: Computer Science, Information Systems or similar
- Certification in Hadoop Development.
- Strong experience in Hadoop – Hive, Pig, Spark, Impala, Oozie, Sqoop and MapReduce
- Writing high-performance, reliable and maintainable code.
- Ability to write MapReduce jobs.
- Good knowledge of database structures, theories, principles, and practices.
- Ability to write Pig Latin scripts.
- Hands on experience in HiveQL.
- Familiarity with data loading tools like Flume, Sqoop and Kafka.
- Knowledge of workflow/schedulers like Oozie.
- Analytical and problem-solving skills applied to the Big Data domain.
- Proven understanding of Hadoop, HBase, Hive and Pig.
- Good aptitude in multi-threading and concurrency concepts.
- Must have Java experience.
- Financial Services experience.
- In-depth knowledge of Data Warehouse and Big Data best practices.
- Knowledge and technical appreciation of the interconnectivities and interfaces between various technical platforms, operating systems and processes.
- Good understanding of ITIL.
- Must understand the need to align the IT and business strategies.
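The responsibilities above also mention methodical data quality checks. As a sketch of what such a check might cover (in plain Python for illustration; in practice this would typically run as a Hive or Spark job, and the helper name is hypothetical):

```python
def quality_report(rows, required_fields):
    """Count nulls per required field and exact duplicate rows.

    `rows` is a list of dicts; `required_fields` lists the columns
    that must be populated. Illustrative helper only.
    """
    null_counts = {field: 0 for field in required_fields}
    seen, duplicates = set(), 0
    for row in rows:
        for field in required_fields:
            if row.get(field) in (None, ""):
                null_counts[field] += 1
        # Use a canonical tuple of the row's items as a duplicate key.
        key = tuple(sorted(row.items(), key=lambda kv: kv[0]))
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"null_counts": null_counts, "duplicates": duplicates}

rows = [
    {"id": 1, "amount": 100},
    {"id": 1, "amount": 100},   # exact duplicate
    {"id": 2, "amount": None},  # missing amount
]
report = quality_report(rows, ["id", "amount"])
# report["duplicates"] == 1 and report["null_counts"]["amount"] == 1
```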

Source: Executiveplacements

