Lead Data Architect (Remote)

Lands' End

Dodgeville Wisconsin

United States

Customer Service / Call Center

We are looking for a visionary Lead Data Architect to join our growing team of data science and data engineering experts. You will be responsible for expanding and optimizing our data and data pipeline architecture, as well as critical data flows. You are an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. This is a leadership role that requires a hands-on individual with experience building complex data platforms for large companies, supporting a variety of applications.

As the lead member of the data engineering team, you will support data scientists and data analysts on data initiatives and ensure that optimal data delivery architecture is consistent across ongoing projects. You are visionary, self-directed and comfortable supporting the data needs of multiple teams, systems and products. In addition, you will be excited by the prospect of optimizing or even re-designing the company’s data architecture to support next-generation architectures and processes.

To help with your efforts, you can expect a comprehensive technology stack, talented co-workers, an AWS-based analytics environment with a broad array of online and offline data, and a data science team developing algorithms to enhance business performance and the customer experience using agile development processes. Thanks to this dedication, Lands’ End is #19 on the NRF’s list of fastest-growing retailers, an Internet Retailer top 100 company, and a great part of your future.

Responsibilities

  • Set the architectural vision for marketing and analytics
  • Coordinate efforts with the EDW team, DevOps, etc.
  • Create and maintain an optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional and non-functional business requirements
  • Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using SQL and AWS Glue technologies
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
  • Continue the effort to create an optimal data science operational platform leveraging data tools like a metadata catalog and analytic tools including a curated Python library
  • Keep data separated and secure across national boundaries
  • Create data tools for analytics and data science team members that help them build and optimize our product into an innovative industry leader
  • Share technical knowledge and mentor other staff members within the broader analytics and e-commerce communities

Qualifications

  • 5+ years of experience productionizing applications and Big Data platforms
  • 5+ years of experience in a Data or Software Engineering role, with a BS in Computer Science or a related field (applicable graduate work a plus)
  • Advanced SQL knowledge, including query authoring, with experience working with relational databases and working familiarity with a variety of other databases
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Strong analytical skills related to working with unstructured datasets
  • Experience building processes supporting data transformation, data structures, metadata, dependency and workload management
  • A successful history of manipulating, processing and extracting value from large, disconnected datasets
  • Experience with architecting streaming ingestion using tools like Kafka or Kinesis
  • Working knowledge of message queuing and highly scalable ‘big data’ data stores
  • Strong project management, communication and organizational skills
  • You should also have experience using the following software and tools:
      • 5+ years of experience with Linux
      • 4+ years of experience with object-oriented/object-function scripting languages: Python, Scala, Java 8, etc.
      • 3+ years of experience with big data tools: Apache Spark, Presto, Impala, etc.
      • Experience with relational SQL and NoSQL databases, including Redshift, Postgres/Netezza, MySQL and Elasticsearch
      • Experience with AWS cloud services: EC2, EMR, EKS (Kubernetes), S3, Elasticache, Lambda, API Gateway; Glue ETL preferred
      • Experience with infrastructure as code using Terraform
      • Experience with BI tools: Kibana, Apache Superset, Tableau, Grafana, etc.
      • Experience with RESTful APIs and Docker preferred