Big Data Architect – 1 Position

Experience Required: 5+ years.
Education

Bachelor's degree – Computer Information Technology, Computer Science, or Management

Master's degree preferred – Computer Information Technology, Computer Science, or Management

Summary:

The Big Data Architect leads the architecture and design of the Client's Data Platform in alignment with the Client's vision of providing a unified data solution: aggregating customer and healthcare operational data and providing easy access to powerful insights. This position is responsible for the technical and security architecture of the software applications as well as the supporting infrastructure, and will help drive quality, reliable, secure applications using industry-standard best practices.

Primary Duties and Responsibilities:
  • Lead a development team of big data designers, developers, data scientists, and DevOps
  • Implement a big data enterprise warehouse, BI, and analytics system using Hive, Spark, Redshift, EMR (Hadoop), and S3
  • Develop and maintain processes to acquire, analyze, store, cleanse, and transform large datasets using tools like Spark, Kafka, Sqoop, Hive, NiFi, and MiNiFi
  • Provide recommendations, technical direction, and leadership for the selection and incorporation of new technologies into the Hadoop ecosystem
  • Participate in regular status meetings to track progress, resolve issues, mitigate risks and escalate concerns in a timely manner
  • Contribute to the development, review, and maintenance of product requirements documents, technical design documents, and functional specifications
  • Help design innovative, customer-centric solutions based on a deep knowledge of large-scale, data-driven technology and the healthcare industry
  • Help develop and maintain enterprise data standards, best practices, security policies, and governance processes for the Hadoop ecosystem
Experience & Skills:
  • Strong communication skills, intellectual curiosity, and the ability to learn and work in a team environment; self-motivated and passionate, with strong leadership skills
  • 5+ years of experience in large systems analysis and development, addressing unique issues of architecture and data management
  • Experience working at the highest technical level of all phases of systems analysis and development, across the full scope of the system development life cycle
  • 4+ years of related experience on data warehousing and business intelligence projects
  • 3+ years implementation or development experience with the Hadoop ecosystem
  • Working knowledge of the entire big data development stack
  • Experience handling very large data sets (tens of terabytes and up preferred)
  • Experience with secure RESTful Web Services
  • Highly proficient with Java/Scala application development
  • Expert in Apache Spark infrastructure and development
  • Experience with Sqoop, Spark, Hive, Kafka, and Hadoop
  • Experience with automated testing for Big Data platforms
  • Experience with best practices for data integration, transformation, governance, and data quality
  • Experience with developing, designing, and coding, completing programming and documentation, and performing testing for complex ETL applications (Spark & Scala preferred)
  • Experience with Agile software development process and development of best practices
  • Experience with big data text mining and big data analytics preferred
  • Understanding of Big Data Architecture along with tools being used on the Hadoop ecosystem
  • Ability to lead tool suite selection and lead proofs of concepts
  • Ability to share ideas among a collaborative team and drive the team based on technical expertise and sharing best practices from past experience