
Big Data Engineer

Work for a global high-tech travel company based in central London as a Big Data Engineer. Enjoy a great, friendly, and collaborative working environment at the UK's #1 Place to Work, as recognised by Glassdoor.

General Responsibilities for Big Data Engineer: 
- Develop components of databases, data schema, data storage, data queries, data transformations, and data warehousing applications. 
- Drive technical direction for small to mid-sized projects. 
- Assess business rules and collaborate internally and with business owners to understand technical requirements and implement analytical and technical solutions. 
- Provide alternative solutions to a given problem. Contribute to advancing the team's design methodology and quality programming practices. 
- Effectively resolve problems and roadblocks as they occur, and consistently follow through on details to drive issues to closure. 
- Write documentation and communicate database design. Collaborate with program management and testing peers in the development of assigned components. Participate in, and provide input to, requirements definition. 
- Actively participate in group technology reviews to critique work of self and others. 

Desired experience and skills: 

We do not believe in matching against a list of buzzwords - we look for smart people with good general programming skills as we believe that creative developers can learn new technologies quickly and well. 

However, it wouldn't hurt if you have experience with some of the following (or at least an interest in learning them): 

• At least 3 years of Java or Scala programming. 
• The Hadoop ecosystem, including Hive, MapReduce, Azkaban, and Presto. 
• Agile development methodologies, including Scrum, code reviews, and pair programming. 
• Object-oriented design and development. 
• Performance and scalability tuning, algorithms, and computational complexity. 
• Data warehousing and ETL development. 
• SQL. 
• Open-source libraries and tools such as Spring, Maven, Guava, Apache Commons, Eclipse, Git, Jira, and Jenkins. 
• AWS services (S3, EC2, EMR, Lambda, SNS, etc.). 
• All things Linux (bash scripting, grep, sed, awk, etc.) or Python. 
• An MS/BS degree in computer science or a related discipline is nice but not essential. 
• Processing massive structured and unstructured data sets.