Data Engineer Job October 2025
Consensys, a pioneering blockchain and web3 software company, is seeking an experienced Data Engineer to join its Data Team. Operating within the Office of the COO, the team builds and optimizes the analytical infrastructure that empowers the entire organization. If you are passionate about designing robust data pipelines, ensuring data quality, and enabling insightful analytics in the dynamic Web3 ecosystem, this is a unique opportunity at a company that has been at the forefront of blockchain innovation since 2014.
Posted: October 24, 2025
Application Deadline: November 24, 2025
Full-Time / Remote-Friendly
📄 Job Description
- Designing, building, and maintaining robust data pipelines that integrate sources across the business.
- Collaborating closely with analysts, business stakeholders, and other engineering teams to gather requirements, align timelines, and ensure successful delivery.
- Documenting data pipelines, best practices, and processes to support onboarding and knowledge sharing.
- Developing and optimizing data models to deliver trusted, structured, and business-ready data.
- Ensuring data quality, security, and governance are embedded in all pipelines and systems.
- Orchestrating and monitoring pipeline execution for reliability and scalability.
- Deploying and managing infrastructure as code.
- Building and tuning big data pipelines using SQL, Python, and distributed processing frameworks.
- Working with cloud data warehouses (Snowflake, BigQuery, Redshift) to enable insights and analytics.
- Maintaining and updating reporting solutions and user dashboards.
- Automating workflows and improving CI/CD pipelines to reduce manual processes.
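The responsibilities above center on extract-transform-load pipelines with embedded data-quality checks. As a minimal, purely illustrative sketch (standard library only; the CSV schema, `events` table, and quality rule are hypothetical, not part of Consensys's actual stack):

```python
import csv
import io
import sqlite3

# Illustrative ETL: extract raw CSV, transform rows, load into a
# warehouse-style table, with a basic quality rule applied in transform.
# All names here (the CSV columns, the `events` table) are hypothetical.

RAW_CSV = """user_id,event,amount
1,swap,10.5
2,transfer,3.0
1,swap,7.25
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the raw CSV into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cast types and drop rows failing a quality check."""
    out = []
    for r in rows:
        amount = float(r["amount"])
        if amount < 0:  # embedded data-quality rule: no negative amounts
            continue
        out.append((int(r["user_id"]), r["event"], amount))
    return out

def load(rows: list[tuple], conn: sqlite3.Connection) -> int:
    """Load: write business-ready rows into a warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (user_id INT, event TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW_CSV)), conn)
print(loaded)  # → 3
```

In production these stages would run against real sources and a cloud warehouse, with orchestration and monitoring layered on top.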
📌 Requirements
- Over 6 years of proven experience as a Data Engineer.
- Strong SQL skills and extensive experience with cloud warehouses (e.g., Snowflake, BigQuery, Redshift).
- Hands-on experience with transformation and orchestration tools (e.g., dbt, Airflow, Dagster).
- Comfort with Python or other scripting languages for ETL and automation.
- Familiarity with data governance and metadata management (e.g., DataHub).
- Experience deploying and managing infrastructure as code (e.g., Terraform, Pulumi).
- Exposure to data integration and ingestion tools (e.g., Airbyte, Segment).
- Experience with big data and distributed processing (e.g., Apache Spark, AWS EMR, S3).
- Exposure to open-source projects.
- Experience maintaining and improving reporting solutions and dashboards (e.g., Preset/Superset, Cube.dev).
- Familiarity with CI/CD practices and automation (e.g., GitHub Actions).
- A collaborative mindset and eagerness to work with both technical and non-technical colleagues in a remote-friendly environment.
📝 How to Apply
❓ Frequently Asked Questions
Q1: What is the application deadline for this Data Engineer position?
The closing date for applications is **November 24, 2025**. We encourage all interested candidates to apply as soon as possible.
Q2: Is this a remote or onsite position?
This Data Engineer role is **remote-friendly**, allowing for flexibility in location while fostering a collaborative environment.
Q3: What is the expected salary range for this role?
For US-based candidates, the salary range for this position is **$156,000 – $187,000 USD**. This range does not include bonus, equity, or other benefits.
Q4: What key technical skills are essential for this role?
Essential skills include strong SQL proficiency, experience with cloud data warehouses (e.g., Snowflake, BigQuery), transformation and orchestration tools (e.g., dbt, Airflow), and Python for ETL and automation.