Careers

Job Title: Data Engineer


Description

We are seeking a Data Engineer who thrives in a fast-paced, agile development environment! The role reports to the CTO within the Engineering Team and partners closely with our Analytics Team. You will problem-solve, build, and support our data platform and analytics engine. Qualified candidates will have experience with major data solutions such as Hadoop, Spark, MapReduce, Pig, and Hive.

Why Work For Us?

  • When you feel well, you do well! We offer rich benefits, including vision, dental, and medical coverage paid at 100% for employees.
  • Top Venture Partners are Accelerating our Growth: We completed a Series A round of funding led by one of the top venture capital firms in the nation, with participation from existing investors.
  • Contribute to an inspiring workplace! Partake in company culture collaborations, social gatherings, and enjoy the amenities at our Denver office.
  • In a hard-charging work environment, it’s good to take time for yourself! We offer unlimited PTO to provide the flexibility to take time off for vacation, illness, family obligations, or whatever else life has in store.
  • Competitive salary, bonus, and equity.


Minimum Qualifications / Skillsets

Responsibilities

  1. Work with ETL tools to build a robust, high volume data pipeline using Hadoop ecosystem or other scalable technology stack
  2. Evaluate programs/scripts in languages like Python, C#, Java, etc. and work with ETL tools such as Talend, Informatica, and/or Pentaho
  3. Develop a practical roadmap for an enterprise-wide BI reporting and analytics platform
  4. Define the overall BI data architecture, including ETL processes, data marts and data lakes
  5. Oversee data acquisition flow and building data pipeline
  6. Perform data modeling, administration, configuration management, monitoring, debugging, and performance tuning
  7. Handle structured datasets, and potentially semi-structured and unstructured datasets
  8. Translate existing code base to ETL specification with the help of current code owners

Qualifications

Please apply only if you meet the majority of the requirements below:

Required

  1. 5+ years of overall programming/data analytics experience
  2. Ability to understand and evaluate existing code written in programming/scripting languages like Python, C#, Java, etc.
  3. 3+ years of experience working with ETL tools such as Talend, Informatica, and/or Pentaho
  4. 1–2 years of hands-on experience with Hadoop, Spark, MapReduce, and Pig
  5. Experience with at least one of the large cloud-computing infrastructure solutions like Azure or Amazon Web Services
  6. Ability to benchmark systems, analyze system bottlenecks and propose solutions to eliminate them
  7. Experience assessing pros and cons of various technologies and platforms
  8. Ability to document use cases, solutions and recommendations
  9. Excellent written and verbal communication skills and the ability to explain technical concepts in plain language
  10. Experience effectively teaming with project managers and multi-disciplinary team members in the design, planning and governance of technical projects
  11. Based in Denver
  12. Bachelor’s degree in computer science or related field

Preferred

  1. Exposure to distributed data processing (e.g. Hortonworks, Cloudera)
  2. Experience working with an analytical data pipeline and teaming with data scientists
  3. Strong grasp of data pipelines, the difference between batch and real-time data processing, and familiarity with the Lambda Architecture’s batch, speed, and serving layers
  4. Ability to build a strong data pipeline team