Senior Software Engineer [Remote for US & Canada ONLY] at Planet
100% remote position (in US or Canada) (Posted Sep 24 2019)
About the company
Welcome to Planet. We believe in using space to help life on Earth.
Planet designs, builds, and operates the largest constellation of imaging satellites in history. This constellation delivers an unprecedented dataset of empirical information via a revolutionary cloud-based platform to decision-makers in commercial, environmental, and humanitarian sectors. We are a space company and a data company rolled into one.
Customers and users across the globe use Planet's data and machine learning-powered analytics to develop new technologies, drive revenue, power research, and solve our world’s toughest challenges.
Because we control every component, from hardware design and manufacturing to data processing and software engineering, our office is a truly inspiring mix of experts from a variety of domains.
We have a people-centric approach toward culture and community and we are iterating in a way that puts our team members first and prepares our company for growth.
Join Planet and be a part of our mission to change the way people see the world.
*Salary will depend on market rates for the candidate's physical location, paid in CAD or USD
Planet’s Search Team owns the systems used to store and access imagery from our constellation of satellites. These systems are primarily responsible for providing external customer access to the continuous feed of imagery we receive from the satellites. This team’s contributions will enable any number of new techniques to understand our changing world.
This role is responsible for the infrastructure and software that make up the search, storage, and indexing layer providing access to imagery for our consumers. Our tech stack consists of Go, Python, Elasticsearch, Bigtable, PostgreSQL, and Kubernetes running on Google Cloud. At Planet, our teams are a blend of pragmatic operators and software craftspeople. Planet is looking for a developer who specializes in large backend data services. Some backend services at Planet are distributed systems (e.g. Elasticsearch) and require pragmatic engineering with data-driven decision making. The ideal candidate will be able to apply sound engineering principles, operational discipline, and mature automation to our services.
The Search Team is highly distributed and you will thrive in an environment of remote work and asynchronous communication. You're expected to have strong written communication skills and be able to develop working relationships with coworkers in locations across several time zones.
As an Engineer on the Search Team you will:
- Improve reliability and scalability by resolving edge cases, studying failure modes, and writing tests
- Evolve customer-facing data search services with an emphasis on scale and customer requirements
- Work to enable efficient and rapid access to our variety of new and growing data sets
- Manage underlying persistence layers in Bigtable and indexing in Elasticsearch
- Use GCP tools like Cloud Pub/Sub, Cloud Dataflow, BigQuery, and Cloud Storage with the Go/Python Google SDKs
- Own the operation of these services by measuring performance, creating alerts and runbooks, and responding to incidents and performance anomalies
- Participate in an on-call rotation in support of our team’s services
Skills & requirements
This role is 100% remote; candidates must be located within the US or Canada.
Strong back-end Python experience, along with equally strong Go application development experience, is required to be considered for this role.
You may be a fit for this role if you:
- Have strong programming skills in Python and Go (back-end Python and Go app development experience are must-haves)
- Have experience building services that leverage cloud-based infrastructure and tooling such as AWS or GCP
- Have experience building, operating, and optimizing Elasticsearch clusters
- Have experience working with monitoring tools like Prometheus, InfluxDB, or equivalent
- Have experience with SQL databases (PostgreSQL or MySQL) and NoSQL databases (e.g. Bigtable, Redis, HBase, etc.) and understand when to use each
- Have experience with large shared codebases and with continuous integration and deployment workflows and tooling such as GitLab CI and, especially, Docker
- Have managed networking for a high-traffic website (thousands to tens of thousands of reqs/sec) using technologies like nginx, Envoy, or HAProxy
- Have maintained infrastructure with Kubernetes, Terraform, and Ansible
- Have a deep understanding of the Linux operating system
Education & Experience
- Bachelor's or Master's degree in Information Systems, Computer Science, Engineering, or equivalent job-related experience, with 4+ years of experience as a Software Engineer (or similar title)
- Excellent interpersonal and communication skills, both written and oral