Blue Sky Clarity

Big Data Platform Reliability Engineer, Boston, 110k to 140k base

Location: Boston, MA
Salary: $110,000 - $140,000
Job Type: Direct Hire
Date: Jan 09, 2019
Job ID: 2649770

Our downtown Boston based client is seeking a Big Data Platform Reliability Engineer to ensure that its development and production data infrastructure remains available and stable, performs at scale, is extensible, handles the required user and transaction volumes, absorbs peak spikes, and is secure and recoverable — in short, everything that keeps their business working like a fine-tuned machine.

Responsibilities:
  • Responsible for implementation and ongoing administration of big data cloud infrastructure.
  • Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
  • Working with data delivery teams to set up new Hadoop users, including creating Linux accounts, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users.
  • Performance tuning of big data cloud clusters and associated big data routines. 
  • Diligently teaming with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality and availability.
  • Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
  • Serving as the point of contact for vendor escalations.
  • Maintaining the database, including data modelling, design, and implementation; software installation and configuration; database backup and recovery; database connectivity and security; performance monitoring and tuning; cloud infrastructure management, including space allocation requirements; software patches and upgrades; and automation of manual tasks.
  • Data warehouse (DWH) development, plus data stewardship and tuning tasks such as ensuring referential integrity, enforcing primary keys, performing data restatements, and loading large data volumes in a timely manner.
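The Hadoop user-onboarding duties above (Linux accounts, Kerberos principals, HDFS access checks) might look roughly like the following sketch. Every name here — the user `jdoe`, the group `hadoopusers`, the realm `EXAMPLE.COM`, and the keytab path — is a placeholder, and the exact commands depend on how the cluster's Kerberos and HDFS are configured:

```shell
# 1. Create the Linux account on the gateway node
#    (assumes a pre-existing "hadoopusers" group)
sudo useradd -m -G hadoopusers jdoe

# 2. Create a Kerberos principal and export a keytab for the user
#    (realm EXAMPLE.COM and keytab path are assumptions)
sudo kadmin.local -q "addprinc -randkey jdoe@EXAMPLE.COM"
sudo kadmin.local -q "ktadd -k /etc/security/keytabs/jdoe.keytab jdoe@EXAMPLE.COM"

# 3. Provision an HDFS home directory, acting as the hdfs superuser
sudo -u hdfs hdfs dfs -mkdir /user/jdoe
sudo -u hdfs hdfs dfs -chown jdoe:hadoopusers /user/jdoe

# 4. Verify access as the new user
kinit -kt /etc/security/keytabs/jdoe.keytab jdoe@EXAMPLE.COM
hdfs dfs -ls /user/jdoe
```

Testing Hive, Pig, and MapReduce access would follow the same pattern: authenticate with `kinit`, then run a trivial job or query as the new user.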
 
Qualifications
  • BS or MS in Computer Science, Mathematics, Physics or other STEM studies
  • Significant experience managing big data infrastructure, preferably in a high-volume, low-latency environment
  • General operational expertise, including strong troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking
  • Hadoop stack skills (HBase, Hive, Pig, Mahout, etc.)
  • Ability to deploy a Hadoop cluster, add and remove nodes, track jobs, monitor critical parts of the cluster, configure NameNode high availability, schedule and configure the cluster, and take backups
  • Good knowledge of Linux; Bash, Python, Perl, or another scripting language
  • Familiarity with open source configuration management and deployment tools such as Puppet or Chef
  • Knowledge of troubleshooting core Java applications is a plus