Engineering - San Mateo, United States
The engineering team at SendBird is solving some of the biggest challenges related to building reliable, feature-rich, and scalable real-time conversational experiences across different platforms globally.
The challenges range from building a platform that can scale to some of the largest user bases across distributed environments with optimal latency, to creating a feature-rich yet lightweight and high-performance client-side SDK, to building products and services that help customers adopt real-time conversational technologies more rapidly. We believe the next critical step for us is investing in data and building up our ability to leverage machine learning and data science to add more value for our customers and our product.
You will participate in building the best real-time conversational products and solutions possible. You are expected to learn and expand your engineering knowledge and experience to build a world-class product that solves our customers' difficult problems and makes it as easy as possible for them to harness the power of real-time chat.
IN THIS ROLE, YOU WILL
- Lead the development of analytics and machine learning products, services, and tools in Python, Java, and Scala
- Leverage your intuitive feel for products, envision how a tool or approach could best fit product needs, and present your approach effectively
- Design distributed, high-volume ETL data pipelines that power SendBird analytics and machine learning products
- Participate in building production services using open-source technologies such as Kafka, Spark, and Elasticsearch, and AWS cloud infrastructure such as EMR, Kinesis, Aurora, S3, Athena, and Redshift
- Aggregate, normalize, and process various types of data from disparate sources to gain perspective on user behavior, prevent and detect fraud, detect system anomalies, and build forecasting models
- Collaborate with other teams and work cross-functionally for data-related product initiatives
YOU WILL HAVE
- 5+ years of academic or professional experience in data science, building ETL pipelines, or building machine learning features in production
- Strong analytic skills related to working with unstructured datasets
- Working knowledge of message queuing, stream processing, and highly scalable data stores
- Fluency in several programming languages such as Python, Java, or Scala
- Ability to find the optimal solution given resource constraints, with a clear understanding of under-engineering and over-engineering
- Passion for working with data at scale
NICE TO HAVE
- Work experience in building natural language processing products
- Work experience with the AWS data pipeline ecosystem
- Familiarity with Spark and Hadoop
- Understanding of RDBMS, NoSQL, and distributed databases
PERKS
- Pick your new laptop!
- 4 weeks PTO!
- 99.99% Paid Benefits
- 12 US Paid Holidays
- Fun working environment!
- Flexible work schedule
- Opportunity to work for one of the hottest startups on the planet!