Factories are highly complex systems. In some cases, manufacturing happens too fast for humans to keep up with; in others, workers face dangerous, physically demanding, and repetitive tasks all day long. In either case, at Overview we help workers by providing factories with state-of-the-art AI that acts as an extra set of eyes, reducing repetitive tasks and helping factories increase their efficiency. We don't just identify production issues in real time: our customers reshape their production processes around Overview's platform.
We are a Y Combinator-backed startup based in San Francisco, California. We are a small, hybrid-remote team of awesome engineers working on an end-to-end platform, from camera/sensor installations in factories to AI R&D. We provide clients with a simple, reliable, and powerful platform that lets them monitor their factories, making everyone's job easier thanks to advanced deep learning and computer vision.
We are looking for someone who believes in our mission and can help us innovate during this time of rapid growth: a Deep Learning Data Scientist (Research) who wants to join our team of passionate people striving to get the best out of our customers' data. Improving the accuracy and speed of our models allows us to better help our customers.
Responsibilities
- Own an entire CV/DL module in our pipeline
- Research and improve our core DL models
- Identify what kind of data, and how much of it, to generate in order to get the desired results
- Add to Overview's toolbox of state-of-the-art algorithms
- Use small datasets to generate strong results
- Backtest new model architectures on one-of-a-kind datasets
- Relentlessly stay on top of state-of-the-art computer vision papers
Helpful Experience
- Having participated in Kaggle or other DL competitions
- Having worked on CV/DL research projects
- Having taken an existing model and improved its accuracy through a variety of methods
- Having worked with PyTorch and/or TensorFlow
- Having implemented state-of-the-art papers just for fun
- Having worked on compressing networks / drastically reducing inference times and comparing the results (see the sketch below)
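As a flavor of that last bullet, here is a minimal sketch (not Overview's actual code; the model and input shapes are hypothetical) of one common compression approach: dynamic int8 quantization in PyTorch, with a simple before/after timing comparison.

```python
import time
import torch
import torch.nn as nn

# Hypothetical stand-in model; in practice this would be a real CV/DL module.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(224 * 224 * 3, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
).eval()

# Dynamically quantize the Linear layers to int8 weights.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def bench(m, runs=50):
    """Average CPU inference time per forward pass, in seconds."""
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        m(x)  # warm-up
        start = time.perf_counter()
        for _ in range(runs):
            m(x)
    return (time.perf_counter() - start) / runs

print(f"fp32: {bench(model) * 1e3:.2f} ms/inference")
print(f"int8: {bench(quantized) * 1e3:.2f} ms/inference")
```

The same before/after benchmarking pattern applies to other techniques mentioned above, such as pruning or swapping in a smaller architecture.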