
Enterprise Analytics Office Data Science Internship Program (Computational Research Specialization)

Are you passionate about high-performance computing? Nationwide is seeking individuals with experience working in large-scale, petascale/exascale computing environments to join the Enterprise Analytics Office as Data Science Interns. The internship runs from approximately mid-May through mid-August of 2021, depending on schedules.

We are dedicated to working with business partners to develop integrated, end-to-end solutions that contribute to the company’s bottom line. As a High-Performance Computing Data Science Intern, you’ll have the opportunity to work with real-world data in cloud-computing environments, build efficient, optimized solutions using large volumes of data, and deliver your findings to both technical and non-technical audiences.

Interested candidates must understand application design and programming principles and be able to code, test, debug, optimize, document, and maintain complex application programs. They should also be able to work on most phases of the development life cycle, with instruction and guidance provided as needed.

Desired Qualifications
The ideal candidate should have:
·        Background (undergraduate or graduate level) in Computer Science, Computer/Software Engineering, Mathematics, or a related field
·        Experience with batch, micro-batch, streaming, and distributed processing platforms
·        Experience with distributed processing frameworks
·        Experience with distributed storage and database platforms
·        Experience with one or more programming languages in the context of data integration, data management, or analytics
·        Experience working in collaborative, team-based, multidisciplinary environments

Additional desirable qualifications include:
·        Experience working within Amazon Web Services (AWS) cloud computing environments
·        Experience with terabytes, petabytes, or even exabytes of data
·        Experience with platforms such as Flink, Hadoop, Kafka, Spark, Hudi, or Storm
·        Familiarity with geospatial datasets and services, including mobile device location data and GPS tracks
·        Experience with a Git-based version control system such as GitHub or Bitbucket
·        Experience with developing and implementing APIs (particularly REST and streaming APIs)
·        Experience with GPU-accelerated frameworks or libraries

Special consideration will be given to candidates with graduation dates in 2021.