Backend Python Developer
(Work Remotely from anywhere in the World)
- Must be able to design and communicate the external and internal architectural views of well-encapsulated systems (e.g. Service-Oriented Architecture, Docker-based services, microservices) using patterns and tools such as architecture/design patterns and sequence diagrams.
- Must have experience with 'Big Data' technologies such as:
- Elasticsearch, NoSQL stores, Kafka, columnar databases, dataflow or pipeline systems, graph data stores.
- Should have experience designing, implementing, and operating data-intensive, distributed systems. (The book Designing Data-Intensive Applications is a good reference.)
- Should embrace the discipline of Site Reliability Engineering.
- Should have experience using Continuous Integration and Continuous Deployment (CI/CD), with an emphasis on a well-maintained testing pyramid.
- Should have experience designing or implementing APIs and data models, including how to scale them out, make them highly available, and map them to storage systems.
- Should have experience with multiple software stacks, have opinions and preferences, and not be married to a specific stack.
- Should have experience designing and operating software in a Cloud Provider such as AWS, Azure, or GCP.
- Might have experience using Feature or Release Toggles as a code branching strategy.
- Might have experience designing, modifying, and operating multi-tenant systems.
- Might know about algorithm development for intensive pipeline processing systems.
- Might understand how to design and develop from a security perspective.
- Might know how to identify, select, and extend 3rd-party components (commercial or open source) that provide operational leverage but do not constrain our product and engineering creativity.
What You Will Build:
- Event bus and event sourcing capabilities that provide business and engineering leverage and efficiencies.
- Highly scalable, extremely performant search systems.
- Transactional or eventually consistent stores that provide well-encapsulated domain-object semantics.
- Orchestrated scale-out data pipelines that can leverage serverless and containerized compute while balancing cost, latency, and duration.
- Algorithmically intensive data engines that operate on streaming, large, or multi-tenant datasets.
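The feature/release-toggle approach mentioned above can be sketched in a few lines of Python. This is a minimal illustration, not any specific library's API; the names (FeatureFlags, is_enabled, the "new_ranker" flag) are hypothetical:

```python
# Minimal feature-toggle sketch. In practice, flags would typically be
# loaded from a config service or environment rather than a plain dict.

class FeatureFlags:
    def __init__(self, flags):
        self._flags = dict(flags)

    def is_enabled(self, name, default=False):
        # Unknown or stale flags fall back to a safe default
        # instead of raising, so removed toggles never break the service.
        return self._flags.get(name, default)


def render_search(flags: FeatureFlags, query: str) -> str:
    # Both code paths ship on the same branch; the toggle decides at
    # runtime which one runs, replacing a long-lived feature branch.
    if flags.is_enabled("new_ranker"):
        return f"new-ranker results for {query!r}"
    return f"legacy results for {query!r}"


flags = FeatureFlags({"new_ranker": True})
print(render_search(flags, "python"))
```

The point of the pattern is that enabling, disabling, or rolling back a feature becomes a configuration change rather than a merge or redeploy.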
How To Apply:
Click the Apply button below to submit an application for this position.
- Salary Offer: To be discussed
- Experience Level: 1-2 Years, 3-5 Years, 5-8 Years, 8-10 Years, 10+ Years
- Education Level: Not required
- Working Hours: To be discussed
- Closing Date: June 18, 2019
- Job Application: Via custom application page