Technical Data Operations Engineer - Remote

PlaceIQ


United States

Engineering

Description

About the Company:

PlaceIQ is a leading data and technology provider that powers critical business and marketing decisions with location data, analytics and insights. An early industry pioneer, PlaceIQ has become the standard for fueling better decisions by marketers, analysts and publishers through powerful, location-based consumer insights, real-world measurement and attribution.

About the Engineering Organization:

PlaceIQ pipelines turn hundreds of terabytes of raw location data into valuable products. This role will be responsible for our overall technical operations strategy, will develop systems and software that help increase site reliability, and will interface and collaborate directly with our application and data engineering teams, infrastructure teams, data scientists, product managers, and our client service and operations teams. This is a great opportunity for someone looking to take the next step in their career.

Please note: our headquarters is located in New York City, but this role can be remote anywhere in the US.

The role will include, but is not limited to:

  • Overall operational responsibility for systems and processes that generate and deliver products to clients, including reporting and monitoring of these processes
  • Support/tune/troubleshoot data pipelines that generate our suite of data products
  • Provide support for client data artifact generation and delivery, including troubleshooting failures and delays as well as supporting ad hoc needs around client deliveries
  • Provide first line of defense for data defect investigation when clients detect anomalies or issues in the data they receive
  • Monitor all jobs and identify opportunities for efficiency in scheduling, dependencies, and functionality
  • Advance our continuous integration practices by automating the build, test, and deploy life cycle
  • Work with infrastructure on overall job scheduling to maximize efficiency and usage of available resources
Relevant Experience:

  • Experience with programming languages such as Java and Scala
  • Experience with Spark, Scala, and/or the Hadoop ecosystem is a major plus
  • 3-4 years of prior experience in a tech ops role, preferably in a big data environment
  • Ability to be flexible in the face of changing priorities
  • Good communicator with a demonstrated history of cross-functional collaboration