Remote/Hybrid
4–7+ years
Hands-On SQL
About BuildingBlocks
Whether delivering outstanding client work or helping build our company from the inside, our diverse, talented team shares a single mission: to build beautiful experiences that positively impact people's lives and the businesses we serve.
Our culture is more purpose than work, blending the pursuit of genuine human connection with a meticulous approach to the things we make, all with an optimistic spirit. We're a company young enough to be fun and rebellious, yet bold and driven enough to make a huge impact on the industry.
Role Overview
We’re looking for a hands-on Data Engineer to own the data layer of customer go-lives — ensuring migrations are validated, analytics pipelines are hardened, and business dashboards are powered by accurate, performant data. You’ll be responsible for validating and signing off on end-to-end data migrations, building high-quality SQL models, and implementing automated data quality checks to catch issues early.
This is a highly technical, impact-driven role focused on migration testing, SQL performance tuning, and data quality automation, aligned with AWS and industry best practices.
Key Responsibilities
- End-to-End Migration Validation: Design and execute functional and performance validation for data migrations, including parity, nullability, PK/FK, duplication, and sampling checks, with complete documentation and sign-off aligned to AWS migration testing guidelines (a parity-check sketch follows this list).
- Advanced SQL Development: Write and optimize analytical SQL (CTEs, window functions, incremental loads). Use EXPLAIN plans to tune query performance and ensure indexes and statistics support BI workloads (see the EXPLAIN ANALYZE sketch after this list).
- Automated Data Quality Frameworks: Implement and maintain data validation frameworks using Great Expectations, Deequ, or similar tools. Automate validation and publish Data Docs to ensure transparency across teams (a Great Expectations sketch appears below).
- Modeling & Documentation (dbt): If using dbt, build models with tests, exposures, and documentation to ensure traceability between dashboards and upstream data sources.
- Orchestration & Reliability: Productionize data validation and transformation jobs within Airflow DAGs, ensuring well-defined SLAs, alerts, and reliable pipeline operations (a minimal DAG sketch follows below).
- (Optional) Cloud Data Engineering: Build incremental pipelines and optimize batch processing for Snowflake (Streams & Tasks) or PostgreSQL, ensuring performance and cost efficiency (see the Streams & Tasks sketch below).
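To make the migration-validation work concrete, here is a minimal sketch of automated parity checks between a source and a target PostgreSQL database using psycopg2. The orders table, its columns, and the connection strings are hypothetical placeholders, not part of this posting.

```python
# Minimal sketch of post-migration parity checks, assuming PostgreSQL on both
# sides and psycopg2; the "orders" table and DSNs below are hypothetical.
import psycopg2

CHECKS = {
    "row_count": "SELECT COUNT(*) FROM orders",
    "null_ids": "SELECT COUNT(*) FROM orders WHERE order_id IS NULL",
    "dup_ids": (
        "SELECT COUNT(*) FROM "
        "(SELECT order_id FROM orders GROUP BY order_id HAVING COUNT(*) > 1) d"
    ),
    "amount_sum": "SELECT COALESCE(SUM(amount), 0) FROM orders",
}

def run_checks(dsn: str) -> dict:
    """Run each check query and collect one scalar result per check."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        results = {}
        for name, sql in CHECKS.items():
            cur.execute(sql)
            results[name] = cur.fetchone()[0]
        return results

source = run_checks("postgresql://user:pass@source-host/appdb")  # hypothetical DSN
target = run_checks("postgresql://user:pass@target-host/appdb")  # hypothetical DSN

# Surface every mismatch so the migration cannot be signed off silently.
for name in CHECKS:
    status = "OK" if source[name] == target[name] else "MISMATCH"
    print(f"{name}: source={source[name]} target={target[name]} -> {status}")
```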
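In the same spirit, a sketch of plan-driven tuning for an analytical window-function query: EXPLAIN (ANALYZE, BUFFERS) executes the statement and reports actual timings and buffer usage, which is what informs index and statistics work. Table and column names are again hypothetical.

```python
# Minimal sketch of inspecting a PostgreSQL query plan for a CTE plus
# window-function query; the schema is hypothetical.
import psycopg2

QUERY = """
WITH daily AS (
    SELECT customer_id,
           order_date,
           SUM(amount) AS day_total
    FROM orders
    GROUP BY customer_id, order_date
)
SELECT customer_id,
       order_date,
       day_total,
       SUM(day_total) OVER (
           PARTITION BY customer_id
           ORDER BY order_date
       ) AS running_total
FROM daily
"""

with psycopg2.connect("postgresql://user:pass@host/appdb") as conn:  # hypothetical DSN
    with conn.cursor() as cur:
        # EXPLAIN (ANALYZE, BUFFERS) runs the query and returns the plan as
        # rows of text, one line per row.
        cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + QUERY)
        for (line,) in cur.fetchall():
            print(line)
```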
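For the data quality bullet, a sketch using Great Expectations' legacy pandas-backed API (recent GE releases use a different, data-context-based entry point); the CSV path and column names are hypothetical.

```python
# Minimal sketch of automated data quality checks with the legacy
# Great Expectations pandas API; file and columns are hypothetical.
import great_expectations as ge

df = ge.read_csv("exports/orders.csv")  # a GE-wrapped pandas DataFrame

# Attach expectations mirroring the migration checks above.
df.expect_column_values_to_not_be_null("order_id")
df.expect_column_values_to_be_unique("order_id")
df.expect_column_values_to_be_between("amount", min_value=0)

# validate() runs every expectation attached above and returns a result
# object that can feed Data Docs or fail a pipeline run.
results = df.validate()
if not results["success"]:
    raise SystemExit(f"Data quality checks failed: {results}")
```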
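For orchestration, a minimal Airflow DAG sketch, assuming a recent Airflow 2.x, that schedules the checks nightly with retries and an SLA; the dag_id, schedule, and callable body are hypothetical.

```python
# Minimal sketch of productionizing validation inside an Airflow 2.x DAG;
# the DAG name, schedule, and task body are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_migration_checks():
    # In a real DAG this would invoke the parity and quality checks above.
    print("running parity and data quality checks")

default_args = {
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
    "sla": timedelta(hours=1),  # trigger an SLA-miss alert past one hour
}

with DAG(
    dag_id="nightly_data_validation",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # nightly at 02:00
    catchup=False,
    default_args=default_args,
) as dag:
    PythonOperator(
        task_id="run_migration_checks",
        python_callable=run_migration_checks,
    )
```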
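Finally, for the optional Snowflake work, a sketch of an incremental pipeline built from a Stream (change capture) and a Task (scheduled merge), issued through the Snowflake Python connector; the account details and object names are hypothetical.

```python
# Minimal sketch of a Snowflake Streams & Tasks incremental pipeline;
# credentials, warehouse, and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",  # hypothetical
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# The stream records inserts/updates/deletes on the raw table since the
# last time it was consumed.
cur.execute("CREATE STREAM IF NOT EXISTS orders_stream ON TABLE raw_orders")

# The task wakes on a schedule, but only runs when the stream has data,
# so each batch processes only changed rows instead of a full reload.
cur.execute("""
CREATE TASK IF NOT EXISTS merge_orders
  WAREHOUSE = ETL_WH
  SCHEDULE = '15 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
  INSERT INTO orders_clean
  SELECT order_id, customer_id, amount
  FROM orders_stream
  WHERE METADATA$ACTION = 'INSERT'
""")
cur.execute("ALTER TASK merge_orders RESUME")  # tasks are created suspended
```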
Minimum Qualifications
- Experience: 4–7+ years as a Data Engineer or Analytics Engineer.
- SQL Expertise: Advanced proficiency in SQL and strong RDBMS fundamentals (PostgreSQL required), with proven experience in query tuning using EXPLAIN/ANALYZE.
- Migration Validation: Hands-on experience designing and executing data migration validation (parity, integrity, and performance testing).
- Tooling Knowledge: Experience with one or more of the following — dbt, Great Expectations or Deequ/PyDeequ, Airflow.
- Version Control: Comfortable with Git-based workflows and CI/CD integration.
Nice to Have
- Experience with Snowflake (Streams, Tasks, cost optimization, and warehouse tuning).
- Exposure to BI tools such as Looker, Power BI, Tableau, or Metabase.
- Working knowledge of Python for lightweight data transformations and validation frameworks.