BI Data Engineer - REMOTE

Rolling Meadows, Illinois

Open to Remote

Full Time

$145k - $160k

Motion has partnered with a premier client to fill a full-time, fully REMOTE BI Data Engineer position. This is a great opportunity to expand your career and work with a well-known company in the greater Chicago area. Do you get excited working on Azure cloud platforms, specifically ingesting data using Azure Data Factory (ADF)? Are you experienced with Snowflake, Databricks, SQL, and Python in enterprise data warehouse environments? This position may be for you.

Required Skills & Experience

  • A relevant technical BS Degree in Information Technology and 5 years of relevant professional experience implementing well-architected data pipelines that are dynamically scalable, highly available, fault-tolerant, and reliable for analytics and platform solutions
  • 3+ years of data engineering experience leveraging technologies such as Snowflake, Azure Data Factory, ADLS Gen 2, Logic Apps, Azure Functions, Databricks, Apache Spark, Scala, Synapse, SQL Server
  • Understanding of the pros, cons, and best practices of implementing a data lake using Microsoft Azure Data Lake Storage
  • Experience structuring a data lake for reliability, security, and performance
  • 5 years writing SQL and T-SQL queries against any RDBMS, including query optimization and performance tuning
  • Experience implementing ETL for data warehouse and business intelligence solutions
  • Working experience with Python and PowerShell scripting
  • Ability to read and write effective, modular, dynamic, parameterized, and robust code, and to establish and follow existing code standards and the ETL framework (see the sketch after this list)
  • Strong analytical, problem-solving, and troubleshooting abilities, including experience performing root cause analysis
  • Good understanding of unit testing, software change management, and software release management
  • Experience working within an agile team and in-depth knowledge of agile processes and principles
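
As a rough, hypothetical illustration of the modular, parameterized code style this role calls for, a minimal Python sketch might look like the following. The configuration fields, storage path, and table name are invented for the example and are not the client's actual ETL framework.

    from dataclasses import dataclass

    # Hypothetical, minimal sketch of a parameterized ingestion step; names and
    # defaults are illustrative only, not the client's actual ETL framework.
    @dataclass
    class IngestConfig:
        source_path: str          # e.g. an ADLS Gen 2 folder
        target_table: str         # e.g. a Snowflake or Synapse staging table
        load_date: str            # partition/watermark parameter
        file_format: str = "parquet"

    def build_copy_statement(cfg: IngestConfig) -> str:
        """Render a parameterized load statement from the config (illustrative)."""
        return (
            f"COPY INTO {cfg.target_table} "
            f"FROM '{cfg.source_path}/{cfg.load_date}' "
            f"FILE_FORMAT = (TYPE = {cfg.file_format.upper()})"
        )

    if __name__ == "__main__":
        cfg = IngestConfig(
            source_path="abfss://raw@examplelake.dfs.core.windows.net/sales",
            target_table="ANALYTICS.STG_SALES",
            load_date="2024-01-31",
        )
        # The rendered statement would be executed by the pipeline orchestrator.
        print(build_copy_statement(cfg))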


What You Will Be Doing

  • Build the infrastructure required for optimal ETL/ELT pipelines to ingest data from a wide variety of data sources using Microsoft Azure technologies such as Azure Data Factory and Databricks.
  • Construct and maintain enterprise-level integrations using the Snowflake platform, Azure Synapse, Azure SQL, and SQL Server.
  • Design ETL pipelines and reusable components to implement specified business requirements. Troubleshoot and optimize ETL code: interpret ETL logs, perform data validation, understand the benefits and drawbacks of parallelism, use expressions and variable scoping properly, apply commonly used transforms, event handlers, and logging providers, and optimize surrogate key generation and inconsistent data type handling (see the sketch after this list).
  • Create data tools for data analytics and data science team members to deliver actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Conduct code reviews and performance analysis, and participate in technical design
  • Orchestrate large, complex data sets that meet functional/non-functional business requirements.
  • Seek out, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability.
  • Partner with data and analytics talent to strive for greater functionality in our data systems.
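
As a minimal, hypothetical sketch of two of the transform concerns named above, surrogate key generation and inconsistent data type handling, the example below assumes a pandas-based transform step with invented column and table names; it is illustrative only, not the client's implementation.

    import pandas as pd

    def clean_and_key(raw: pd.DataFrame, start_key: int = 1) -> pd.DataFrame:
        """Illustrative transform: coerce inconsistent types, then assign surrogate keys."""
        dim = raw.copy()

        # Inconsistent data type handling: source "amount" may arrive as strings,
        # blanks, or numbers; coerce to numeric and flag rows that fail validation.
        dim["amount"] = pd.to_numeric(dim["amount"], errors="coerce")
        dim["is_valid"] = dim["amount"].notna()

        # Surrogate key generation: deduplicate on the natural key and assign a
        # monotonically increasing integer key independent of source identifiers.
        dim = dim.drop_duplicates(subset=["customer_code"]).reset_index(drop=True)
        dim["customer_sk"] = range(start_key, start_key + len(dim))
        return dim

    if __name__ == "__main__":
        raw = pd.DataFrame(
            {"customer_code": ["C1", "C2", "C2"], "amount": ["10.5", "n/a", "7"]}
        )
        print(clean_and_key(raw))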

Applicants must be currently authorized to work in the US on a full-time basis now and in the future.

Posted by: Aaron Rontal


Specialization: Business Intelligence