Data Warehouse Analyst/Epic Clarity/Caboodle-Remote

Engage Partners Inc.

Philadelphia Pennsylvania

United States

Warehousing / Logistics
(No Timezone Provided)

  • Fully remote; approved states: FL, IA, LA, ME, MD, MA, MN, NH, TN, VA, and WI (the new hire must be located in one of these states). Candidates willing to relocate to the Portland, Maine area will also be considered; a hybrid remote schedule is available.
  • Permanent, full-time employee (FTE) only; this is not a contract role.
  • Salary + Benefits
  • Epic data experience (Clarity and Caboodle) *** strongly preferred
  • Experience creating custom Caboodle data models *** strongly preferred
  • The new hire will be expected to customize the Epic Caboodle data warehouse, which requires being Epic certified or proficient in Epic Clarity and Caboodle (multiple certifications). Candidates without Epic certification must specifically have owned Epic ETL processes.


***Skills being sought: SSIS; advanced SQL and experience with relational databases; SQL query authoring and working familiarity with a variety of database platforms; schema design and dimensional data modeling (Kimball preferred); star schemas; and use of APIs for data integration. Keywords: Data Engineer, ETL Developer, Data Modeler.


The ideal candidate will:

• Proactively identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

• Support our business intelligence developers, data architects, data analysts, and data scientists on data initiatives, and ensure that optimal data delivery architecture is consistent across ongoing projects.

• Partner with Health Care Organization analytics teams to design or improve data models that feed business intelligence tools.

• Transform data into a format that can be easily analyzed by data professionals (data analysts, BI developers, data scientists).

• Create and maintain an optimal data pipeline architecture.

• Coordinate with Health Care Organization data professionals to create and augment unique data infrastructure.

• Build the infrastructure required for optimal extraction, transformation, and loading of data from a variety of data sources using SQL technologies (a T-SQL sketch follows this list).

• Work with stakeholders, including data analysts, business intelligence professionals, and executive teams, to assist them with data-related technical issues.

• Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity.

• Implement processes and systems to monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.

• Design and evaluate open-source and vendor tools for data lineage.

• Apply excellent analytic skills to work with unstructured datasets.

• Use your knowledge of different programming languages to code and update data systems.
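
A minimal T-SQL sketch of the incremental-load pattern several of the bullets above describe, given the posting's SQL Server/SSIS focus. All table and column names (StgPatient, DimPatient, and so on) are hypothetical illustrations, not actual Epic Clarity or Caboodle structures:

    -- Hypothetical staging table, populated by an extract step (e.g., an SSIS data flow).
    CREATE TABLE dbo.StgPatient (
        PatientId VARCHAR(20) NOT NULL PRIMARY KEY,  -- natural/business key
        BirthDate DATE        NULL,
        Sex       CHAR(1)     NULL
    );

    -- Hypothetical warehouse dimension table.
    CREATE TABLE dbo.DimPatient (
        PatientKey INT IDENTITY(1,1) PRIMARY KEY,    -- surrogate key
        PatientId  VARCHAR(20) NOT NULL,
        BirthDate  DATE        NULL,
        Sex        CHAR(1)     NULL,
        UpdatedAt  DATETIME2   NOT NULL DEFAULT SYSUTCDATETIME()
    );

    -- Incremental load: update rows that already exist, insert new ones.
    MERGE dbo.DimPatient AS tgt
    USING dbo.StgPatient AS src
        ON tgt.PatientId = src.PatientId
    WHEN MATCHED THEN
        UPDATE SET tgt.BirthDate = src.BirthDate,
                   tgt.Sex       = src.Sex,
                   tgt.UpdatedAt = SYSUTCDATETIME()
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (PatientId, BirthDate, Sex)
        VALUES (src.PatientId, src.BirthDate, src.Sex);

In practice a step like this would typically run inside an SSIS package or a stored procedure, with the update branch narrowed to rows whose attributes actually changed.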


The ideal candidate will also leverage their technical knowledge in:

• SSIS

• Advanced SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of database platforms.

• Schema design and dimensional data modeling (Kimball preferred)

• Star schemas (see the star-schema sketch after this list)

• Use of APIs for data integration
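
To ground the dimensional-modeling items above, a hedged sketch of a Kimball-style star schema in T-SQL; the entities (DimDate, DimDepartment, FactVisit) and measures are illustrative assumptions, not part of any real Caboodle model:

    -- Conformed date dimension, keyed as yyyymmdd integers.
    CREATE TABLE dbo.DimDate (
        DateKey    INT      NOT NULL PRIMARY KEY,    -- e.g., 20240131
        FullDate   DATE     NOT NULL,
        FiscalYear SMALLINT NOT NULL
    );

    -- Hypothetical department dimension.
    CREATE TABLE dbo.DimDepartment (
        DepartmentKey  INT IDENTITY(1,1) PRIMARY KEY,  -- surrogate key
        DepartmentName VARCHAR(100) NOT NULL
    );

    -- Fact table at the grain of one row per visit, with additive measures.
    CREATE TABLE dbo.FactVisit (
        VisitKey      BIGINT IDENTITY(1,1) PRIMARY KEY,
        DateKey       INT NOT NULL REFERENCES dbo.DimDate (DateKey),
        DepartmentKey INT NOT NULL REFERENCES dbo.DimDepartment (DepartmentKey),
        VisitCount    INT           NOT NULL DEFAULT 1,
        TotalCharges  DECIMAL(12,2) NULL
    );

    -- Typical star-schema query: measures from the fact, slicers from the dimensions.
    SELECT d.FiscalYear,
           dep.DepartmentName,
           SUM(f.VisitCount)   AS Visits,
           SUM(f.TotalCharges) AS Charges
    FROM dbo.FactVisit f
    JOIN dbo.DimDate d ON d.DateKey = f.DateKey
    JOIN dbo.DimDepartment dep ON dep.DepartmentKey = f.DepartmentKey
    GROUP BY d.FiscalYear, dep.DepartmentName;

The fact table carries additive measures at a single declared grain (one row per visit), while the dimensions carry the descriptive attributes used for slicing; that separation is what the Kimball references in this posting point to.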


Preferred candidates may have:

• A graduate degree in information systems, informatics, statistics, computer science or another quantitative field (or equivalent work experience).

• Epic data experience (Clarity and Caboodle) *** strongly preferred

• Experience creating custom Caboodle data models *** strongly preferred

• Experience with modern BI tools such as Power BI (preferred), Tableau or QlikView/Sense

• Experience with SSAS and SSRS

• Experience with big data tools: Hadoop, Spark, Kafka, etc.

• Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.

• Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

• Experience with AWS cloud services such as EC2, EMR, RDS, and Redshift

• Experience with stream-processing systems: Storm, Spark Streaming, etc.

• Experience with object-oriented or functional scripting languages: Python, Java, C++, Scala, etc.

• Data Engineer Certificate

• Strong analytic skills related to working with unstructured datasets.

• Experience with or knowledge of Agile Software Development methodologies


Minimum Knowledge, Skills, and Abilities Required

1. Experience in integrating data from multiple source systems.

2. Experience identifying potential solutions and designing, developing, documenting, testing, and maintaining databases/applications.

3. Requires 2 to 5 years of knowledge and experience in data warehousing (ETL, ODS, star schemas, OLAP, dimensional modeling, etc.), plus relational database experience (preferably SQL Server) and working knowledge of SQL.

4. Demonstrated experience working with users to identify information system needs.

5. Bachelor's degree in Information Technology or equivalent work experience.

6. Ability to identify potential solutions and take initiative.

7. Must be able to work independently and within a team and handle sensitive data professionally.

8. Ability to communicate effectively and work collaboratively with disparate clients.

9. Ability to independently lead or facilitate meetings.

10. Must be able to work on multiple projects concurrently and work within deadlines.
