CoverWallet, an Aon company
About the job
CoverWallet, an Aon company, is the leading digital insurance platform for small and medium-sized businesses. We are dedicated to making insurance simple, fast and convenient so that businesses around the world can get the protection they need and get back to what matters most – growing and managing their business. Powered by deep analytics, thoughtful design, and state-of-the-art technology, CoverWallet is reinventing the $200 billion commercial insurance market for small and medium-sized businesses.
CoverWallet is the easiest way for businesses to understand, buy, and manage insurance online and has been recognized as a CNBC Upstart 100, won the Best Insurtech Solution from the Benzinga Awards and was named “One of the Most Entrepreneurial Companies in America” by Entrepreneur Magazine.
We have built an incredible team of ~350 people (60% in New York and 40% between Madrid and Valencia).
As a part of Aon, a leading global professional services firm providing a broad range of risk, retirement and health solutions with 50,000 colleagues in 120 countries, CoverWallet has the mentality and culture of a high-growth startup with the backing and support of a global multinational company.
This position will be based in Madrid, Valencia or Sevilla.
About the role
You will work with business stakeholders and our Engineering team (mostly Data Scientists, Data Analysts, Backend Engineers and SREs) to shape the overall design of the data architecture that supports CoverWallet's growth, applying your skills in designing, developing, testing, maintaining and optimizing data management systems.
Essential Job Functions:
- Work in a multi-cloud environment (GCP/AWS) with cutting-edge technologies such as Apache Airflow, Pub/Sub, Redshift, Kubernetes, MongoDB and AWS Lambda, among others, using Python as your main programming language (but we are open to others).
- Develop, maintain and optimize high-quality, reliable and robust data pipelines on Kubernetes and Airflow that convert data streams into valuable information.
- Model and architect our data infrastructure, building real-time streaming and batch processing solutions (right now we are working with Google Cloud Pub/Sub and Amazon Aurora).
- Design and deploy high-load microservices with strong quality requirements and high business value for the development of Machine Learning models and data tools.
- Keep constantly improving your technical knowledge and expertise by implementing solutions based on open-source projects (e.g. dbt).
Requirements:
- Bachelor’s Degree and/or 5+ years working in data or software engineering environments.
- Solid background in data warehouse (DWH) solutions such as Amazon Redshift.
- Expertise in SQL/NoSQL databases (PostgreSQL, MongoDB, Redis, etc.).
- Coding proficiency in Python is a must; proficiency in at least one additional programming language (Scala, Go or Ruby) is highly desirable.
- Experience working with bash scripting, Docker, Kubernetes and Linux environments on a daily basis.
- Experience with AWS/GCP stack or another cloud computing platform is highly valuable.
- Strong written and verbal communication, presentation, and technical writing skills.
- Team player, technology passionate and self-motivated individual.
- Strong communication skills in English.
What we offer:
- The opportunity to disrupt one of the biggest industries, in one of the most developed digital markets in the world.
- Great offices in New York, Madrid, and near the beach in Valencia.
- Competitive and flexible compensation (meal vouchers, transport card, childcare vouchers, and external training).
- Company-paid Life and Accident Insurance, and Medical insurance as benefits.
- 23 days of vacation per year.
To apply for this job, please visit www.linkedin.com.