Purpose of Position:
- Support product development and operations by creating and supporting automated solutions
for collecting, curating, analyzing, visualizing, and integrating data
- Coordinate with business users, operations, and the data analytics team to understand their data
needs and develop applications, automated tools, and reports to meet those needs.
- Identify and implement processes and standards that improve the analysis, storage, and curation
of data products provided to our customers.
- Perform analysis, transformation, and loading of large data sets sourced from in-house and
external data repositories.
- Continually expand skills and knowledge in the areas of new data management and analysis
technologies.
Knowledge, Skills, and Abilities:
- Bachelor's degree in Computer Science, Information Technology, Information Management,
Mathematics, or comparable programming experience. A Master's degree is a plus
- 2+ years' paid work experience as a Data Engineer
- Highly proficient in developing programs in Java or Groovy
- 2+ years' paid work experience with Spark/Hadoop
- 2+ years' paid work experience with AWS (EMR, S3, Lambda, EC2, Glue, RDS)
- 2+ years' paid work experience with SQL (MySQL is a plus) and NoSQL databases
- Experience with Elasticsearch is a plus
- Experience with Python is a plus
- Proficient in installing, troubleshooting, and administering applications running on Linux
- Experience with GitHub or other version control systems
- Experience with Scala (Zeppelin) is a plus
- Experience with Airflow or other ETL tools is a plus
- Ability to interpret client/customer requests and propose technical solutions.
- Ability to assess and maintain the quality of code structure and design.
- Ability to plan, design, and coordinate software installation, testing, and deployment
- Good client-facing presentation skills
- Certification or verified training in one or more of the following technologies/products is a plus: AWS,
Elasticsearch, Apache Spark
Synerfac
Ambler Pennsylvania
United States
Information Technology