Sifchain
Data Pipeline Engineer
As a Data Pipeline Engineer, you would have a hand in the design, implementation, deployment, and support of our Sifchain Data Service project. As a member of the Data Service team, you would work on foundational data infrastructure.
You’d improve Sifchain’s external visibility and help third parties leverage our Data Service. You’d help architect real-time microservices that capture Sifchain’s blockchain events, and build the systems behind our external APIs, where users can monitor network health and other analytics over time.
As a Sifchain team member, you would be responsible for creating technically viable software alongside a team of senior engineers specializing in DevOps, distributed systems, system architecture, testing, and related fields. You would collaborate with some of the most diligent minds in the cryptocurrency industry on product direction, both on the core Sifchain team and among its partners, investors, and advisors. As an early team member, you must feel comfortable working in a fast-paced environment where solutions aren’t already predefined.
Prior experience with blockchain projects is helpful, but we are primarily interested in your capacity to grow into the role. You should have experience developing high-quality backend architecture and at least a passing knowledge of how those architectural principles apply to blockchain data services.
We are looking for individuals who are passionate about being at the forefront of a new technological paradigm and can lead the design and development of scalable applications.
Responsibilities:
- Build and support a data pipeline platform that allows a customer’s behavioral data to directly impact their individualized experiences on Sifchain.
- Learn to evaluate multiple technical approaches and drive consensus with your engineering peers
- Use data to solve real-world problems and assist both internal and external partners with data integrations
- Ensure the Core Engineering/Product team has access to the data and tooling they need for direct customer use
- Develop with sound testing and debugging practices
- Create technical documentation and well-commented code for open-source consumption
- Participate in open-source development on shared resources with external development teams
Qualifications:
- Experience with data modeling, data warehousing, and building data pipelines
- Experience in SQL and in building time-series databases (e.g., Postgres)
- Knowledge of data management fundamentals and data storage principles
- Knowledge of distributed systems as they pertain to data storage and computing
- Proficiency in at least one modern scripting or programming language, such as Python or NodeJS
- Proven success in communicating with users, other technical teams, and senior management to collect requirements, describe data modeling decisions, and explain data engineering strategy
- Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations
- Strong understanding of distributed systems and RESTful APIs
- Opinionated about tooling and curious about new trends and technologies in the software development world
- Independence and self-motivation
- 4+ years of engineering experience
Bonus Points:
- Experience building and maintaining large-scale and/or real-time data processing pipelines using Kafka, Hadoop, Hive, Storm, or ZooKeeper
- Experience with large-scale distributed storage and database systems (SQL or NoSQL, e.g., MySQL or Cassandra)
- Background in academic economics or finance
- Familiarity with Cosmos, Tendermint, or Thorchain
- Familiarity with Rust and/or Golang
- Experience in small startup environments
- Experience with a distributed team / remote work