We are honu.ai, a team of highly experienced and ambitious technologists, on a mission to build the world's most loved, trusted and intelligent platform for small businesses... and empower business owners with superhuman decision-making capabilities.
The technology we are building is novel and we will be pushing the boundaries of what has been done in this space.
We are currently in stealth mode and are backed by Europe's top investors (early backers of TransferWise, Tide, Coinbase, CityMapper, Cazoo, Nested, Habito, Cleo, King, Snyk, and others).
We are looking for a highly versatile, polymath engineer with strong foundations in numerical methods, ML, and statistical techniques - who has experience being part of the initial data team in a startup. You will be the first data science hire in the company and be part of the core team of 12 people building the first iteration of a unique product.
You will report directly to the founder, working closely with him and the tech team to build the various intelligent components of the platform. The problems you will be solving will be diverse and cover a wide spectrum of complexity. You will contribute to building the data infrastructure alongside the engineers, and you will need to exercise sound judgement and an 80/20 approach in a rapidly evolving environment.
As part of the core team, you will also be expected to spend time getting well versed in the business context and understanding our clients, so as to gain a good intuition for the problem space.
We’re taking a unique approach to the tech we are building, and people who bring diversity of experience and a track record of creative solutions are particularly welcome. All levels of seniority welcome.
- Degree in a technical or quantitative discipline like statistics, mathematics, physics, electrical engineering, or computer science (all levels welcome, from bachelor’s to doctorate)
- Strong systems thinker.
- At least 3 years of professional work experience, including at least 1 year at a startup.
- Experience building ETL pipelines. You should be able to operate to a large degree autonomously, while knowing you have the support of engineers when needed.
- Experience in at least three of the following: deep learning, operational research, risk modelling, probabilistic graphical models, robotics, Gaussian processes, Bayesian methods.
- Experience independently building models and deploying them in production at scale.
- Proficiency in Python, including the pandas, scikit-learn, and NumPy libraries. Experience with PyTorch/TensorFlow.
- Experience connecting to external APIs to ingest third-party data.
- Experience in data engineering is a plus.
- Experience across diverse sectors is a plus.
- Essentials: a 'getting things done' attitude, full ownership of your work, always striving for excellence and going the extra mile, strong attention to detail, and clear communication.
- You value honest feedback and creating an environment where it flourishes. You're comfortably organised amongst chaos.
- You're aligned with the mission of the company.
Our initial stack is mainly Python with a bit of Go on the back-end, and React and TypeScript on the front-end, running on AWS. We have a pragmatic, non-dogmatic approach and will refine the tech stack and find the right tools for the job as the product evolves. Scientific computing will be mostly implemented in Python, for now.
How to Apply
Email your CV, and any relevant links to firstname.lastname@example.org
Our Process (1-2 weeks)
- Introductory phone call with founder (30 mins)
- Deeper dive with founder (1 hour)
- Meet the wider team (2-2.5 hours)
We are growing fast and will be sure to respect your time and expectations each step of the way.