Do you want the chance to be one of the engineers dedicated to running advanced ETL jobs across 100+ machines, bridging the gap between Business Intelligence and Data Engineering, and enabling trivago's BI teams to deliver insights, data and reports to internal and external stakeholders?

Our Advertiser Relations - BI team is looking for a skilled and courageous engineer who is not daunted by the prospect of working with such huge data volumes and complex structures, but cannot wait to get started! In this role, you will work closely with both Data Scientists and other Data Engineers to continuously enhance our data-driven decision making.

If this sounds like you, then read on…

What you'll do:
- Design, implement and optimize data pipelines to produce clean and unified data from multiple data sources.
- Maintain and optimize existing data transformation processes.
- Research technologies and build software to support Data Scientists and other team members in their daily work.
- Act as an intermediary between Data Scientists and other Data Engineers: you will need to "translate" the technical and business jargon and find new solutions for data processing challenges.
What you'll definitely need:
- Good SQL skills. You are able to optimize queries executed in distributed systems by improving query plans and carefully thinking about data partitioning and compression.
- Experience with at least one other programming language in a data-related context (e.g. Python, Java). You can read, understand and improve other people's code.
- You are able to experiment with new technologies and acquire new skills to find clever solutions to the unique challenges we will encounter along the way.
- Fluency in English (our company language).
What we'd love you to have:
- Around 1-2 years' work experience in Analytics/Business Intelligence, Software Engineering/Web Development or another role-relevant field.
- A degree in Computer Science, Software Engineering or a related field.
- Familiarity with the Hadoop stack (Oozie, YARN, Hive, Impala).
- Familiarity with version control systems (particularly with Git).
Why the role is cool:
- The chance to move from data to big data. This role offers the opportunity to work with the latest big data infrastructure on our own Hadoop cluster, processing terabytes of data in both real-time and batch workloads.
- Gain experience with the fundamentals of big data and develop your skills, with the opportunity to explore further ways of improving our data processing and handling.
- Work alongside a team of experienced data engineers who experiment with and deploy production workflows, and be an integral part of trivago's data pipeline.
- Work with streaming technologies, such as stream processors, and get involved with teams working on our cloud infrastructure (AWS and Google technologies).
Life at trivago is:
- The opportunity for self-driven individuals to have a direct impact on the business.
- The freedom to embrace small-scale failures as a path to large-scale success.
- The belief that factual proof, not opinion, determines the way forward.
- The chance to develop personally and professionally due to a strong feedback culture and access to training and workshops.
- A unique culture with a strong sense of community and an agile, international work environment.
- Thriving on a campus that supports your health and happiness with world-class ergonomics, 30+ sports and a multi-cuisine cafeteria to satisfy your inner foodie.
- Flexibility for all employees to contribute value and maintain a healthy work-life balance.
- To find out more about life at trivago follow us on Facebook - @lifeattrivago.
- This is a fixed-term position of 1 year with a long-term perspective.
- trivago N.V. is an equal opportunity employer. Applications from individuals with disabilities are welcome.