Python Developer - Helena, MT

  • Deloitte Consulting
  • Jan 11, 2022
Full time · Computer Science · Engineering · Information Technology

Job Description

Responsibilities
  • Function as an integrator between business needs and technology solutions, helping to create technology solutions that meet clients' requirements.
  • Develop and test solutions that align with clients' systems strategy, requirements, and design, and support system implementation.
  • Manage the data pipeline process from acquisition to ingestion, storage, and provisioning of data to the point of impact by modernizing and enabling new capabilities.
  • Facilitate data integration on traditional and Hadoop environments by assessing clients' enterprise IT environments.
  • Guide clients toward the future-state IT environment needed to support their long-term business goals.
  • Enhance business drivers through enterprise-scale applications that enable visualization, consumption, and monetization of both structured and unstructured data.

The Team
Deloitte Consulting's Analytics & Cognitive offering leverages the power of analytics, robotics, and cognitive technologies to uncover hidden relationships from vast troves of data, create and manage large-scale organizational intelligence, and generate insights that catalyze growth and efficiencies.

Qualifications

Required

  • Bachelor's degree, preferably in Computer Science, Information Technology, Computer Engineering, or related IT discipline; or equivalent experience
  • 3+ years of strong technical experience with Hadoop (Cloudera distribution), Spark with Scala or Python programming, and Hive tuning, bucketing, partitioning, UDFs, and UDAFs
  • Experience with a NoSQL database such as HBase, MongoDB, or Cassandra
  • Experience and knowledge working with Kafka, Spark Streaming, Sqoop, Oozie, Airflow, Control-M, Presto, NoSQL, and SQL
  • Expert-level usage of Jenkins and GitHub is preferred
  • Strong technical skills including understanding of software development principles
  • Hands-on programming experience
  • Travel up to 20% annually (while 20% travel is a requirement of the role, non-essential travel has been suspended due to COVID-19 until further notice)
  • Must live within a commutable distance of one of the following locations: Rosslyn, Los Angeles, Boston, McLean, Orange County, Alexandria, Washington DC, Chicago, Dallas, Houston, Denver, Philadelphia, Minneapolis, Sacramento, Baltimore, San Diego, Atlanta, Austin, Boca Raton, Charlotte, Huntsville, Miami, Milwaukee, New Orleans, Portland, Raleigh, Richmond, Rochester, Orlando, Mechanicsburg, Pittsburgh, Tampa, Kansas City, Cleveland, Cincinnati, Phoenix, Harrisburg, Indianapolis, St. Louis, Tallahassee, Grand Rapids, Tulsa, Louisville, Columbus, Greensboro, San Antonio, Nashville, Dayton, Salt Lake City, Memphis, Jacksonville, Las Vegas, Hermitage, Des Moines, Davenport, San Juan, Birmingham
  • Limited immigration sponsorship may be available


Preferred

  • Experience with data lake and data hub implementations
  • Knowledge of AWS or Azure platforms
  • Knowledge of techniques for designing Hadoop-based file layouts optimized to meet business needs
  • Ability to translate business requirements into logical and physical file structure designs
  • Ability to build and test solutions in an agile delivery manner
  • Ability to articulate reasons behind the design choices being made
  • Bachelor of Science in Computer Science, Engineering, or MIS, or equivalent experience
  • Any big data certification is a plus