Candidate Information
Title Big Data Architect
Target Location US-NJ-Jersey City

Candidate's Name
Data Pipeline Engineer | Chief of Data Engineering | Principal Data Architect
Address Jersey City, New Jersey, Street Address, United States

Phone PHONE NUMBER AVAILABLE
E-mail EMAIL AVAILABLE


Summary

I orchestrate data workflows with technologies such as Apache Beam and Luigi, ensuring efficient data
movement and ready access to data for strategic insight and decision-making. Beyond building pipelines,
I monitor system performance, resolve issues proactively, and continuously optimize data frameworks for
robustness and scalability. Working closely with business stakeholders, I adapt to changing needs and
design tailored data solutions, and I champion data stewardship and analytical excellence so that data
becomes a dependable resource for ongoing growth and innovation.


Websites, Portfolios, Profiles
https://LINKEDIN LINK AVAILABLE


Skills
My proficiency spans several domains, including ETL processes, Python or Java programming, and data
warehousing technologies such as Amazon Redshift or Snowflake. I have a deep understanding of SQL and
NoSQL databases such as MySQL and MongoDB, along with a comprehensive grasp of big data technologies
like Hadoop, Spark, and Kafka. I am also well versed in data governance principles, security protocols,
and cloud platforms such as AWS and Azure. Strong problem-solving skills and analytical acumen allow me
to navigate complex data challenges effectively, while clear communication and collaboration foster
seamless teamwork across diverse teams. By delivering reliable data solutions and enabling informed
decision-making, I contribute to the success and growth of the organization.
Work History
2021-01 - Current   Big Data Architect
                    Founders Village
                    I've driven multiple initiatives focused on refining data infrastructure to boost
                    operational efficiencies and quality assurance practices. Using my deep
                    knowledge in data engineering, I crafted and deployed scalable data
                     pipelines with technologies such as Apache Spark and Hadoop, enabling
                     real-time ingestion and analysis of data. My proficiency in database systems,
                    including MySQL and MongoDB, enabled me to create durable data storage
                    solutions adept at managing the extensive data produced by water treatment
                    activities. By utilizing cloud services like AWS and Azure, I designed economical
                    and robust data storage frameworks that ensure constant availability and are
                    primed for disaster recovery situations. Furthermore, I developed advanced
                    data analytics platforms using tools such as Apache Kafka and Apache Flink,
                    providing stakeholders with valuable insights from continuous data streams.
                    Through rigorous system optimization and performance enhancement
                    strategies, I improved the efficiency and capacity of data processing systems,
                    significantly supporting the smooth functioning of water treatment operations in
                    compliance with strict regulatory requirements.
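
As one illustration of the pipeline work described in this role, the sketch below shows a minimal
PySpark batch job that ingests raw readings from object storage, cleanses them, and writes hourly
aggregates; the bucket paths, column names, and schema are illustrative assumptions rather than
details taken from this position.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Illustrative sketch only: paths, column names, and window sizes are assumptions.
    spark = (SparkSession.builder
             .appName("sensor-ingest")
             .getOrCreate())

    # Read raw sensor readings landed in cloud object storage (path is hypothetical).
    raw = spark.read.json("s3a://example-bucket/raw/sensor-readings/")

    # Basic cleansing: drop malformed rows and normalise the event timestamp.
    clean = (raw
             .dropna(subset=["sensor_id", "event_time"])
             .withColumn("event_time", F.to_timestamp("event_time")))

    # Hourly aggregates that downstream dashboards could query.
    hourly = (clean
              .groupBy("sensor_id", F.window("event_time", "1 hour"))
              .agg(F.avg("value").alias("avg_value"),
                   F.count("*").alias("reading_count")))

    # Persist as partitioned Parquet for cheap, durable storage.
    hourly.write.mode("overwrite").partitionBy("sensor_id").parquet(
        "s3a://example-bucket/curated/sensor-hourly/")

    spark.stop()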

2015-07 - 2020-10   Senior Big Data Engineer
                    Rolustech
                    I led the development and refinement of sophisticated data pipelines,
                    employing advanced technologies to efficiently manage large datasets.
                    Leveraging my expertise in Hadoop ecosystem tools like HDFS, MapReduce,
                    and Spark, I enhanced data processing capabilities, significantly improving
                    system performance and scalability. I integrated streaming data sources to
                    support real-time analytics, enabling more dynamic business strategies. Utilizing
                    SQL and NoSQL databases, including MongoDB, I developed scalable storage
                    solutions that met specific industry needs. My proficiency in cloud platforms,
                    such as AWS and Azure, facilitated the effective migration of data infrastructure
                    to the cloud, optimizing costs and flexibility. I also ensured data security and
                    compliance by implementing rigorous encryption and access control measures
                    across data pipelines. Through continuous system tuning and monitoring with
                    tools like Apache Kafka, I maintained high reliability and efficiency of data
                    operations, ensuring the availability of actionable insights for the organization.
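
A minimal Spark Structured Streaming sketch of the kind of real-time analytics integration mentioned
above, consuming events from Kafka and computing windowed totals; the broker address, topic name,
schema, and checkpoint path are hypothetical and not taken from this role.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    # Illustrative sketch only: topic, schema, and paths are assumptions.
    spark = SparkSession.builder.appName("stream-analytics").getOrCreate()

    event_schema = StructType([
        StructField("order_id", StringType()),
        StructField("amount", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    # Consume JSON events from a Kafka topic (broker address is hypothetical).
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "orders")
              .load()
              .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
              .select("e.*"))

    # Windowed revenue totals with a watermark to bound late-arriving data.
    revenue = (events
               .withWatermark("event_time", "10 minutes")
               .groupBy(F.window("event_time", "5 minutes"))
               .agg(F.sum("amount").alias("revenue")))

    # Write continuously to a sink analysts can query; console sink here for brevity.
    query = (revenue.writeStream
             .outputMode("update")
             .option("checkpointLocation", "/tmp/checkpoints/orders")
             .format("console")
             .start())

    query.awaitTermination()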

2012-07 - 2015-05   Data Architect
                    Fourspan Technologies
                    I managed comprehensive data integration efforts, using ETL methods and
                    tools such as Informatica and Talend to consolidate disparate data sets. I
                    safeguarded data integrity across numerous municipal systems and improved
                    database performance through sophisticated SQL query tuning. Using tools
                    like Hadoop and Apache Spark, I put data warehousing solutions in place
                    that made it straightforward to store and rapidly retrieve data for municipal
                    analytics and reporting. In addition, I created and managed scalable data
                    pipelines with Apache Airflow and Python to automate data ingestion and
                    processing, improving operational reliability and effectiveness for
                    municipal operations.
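
A minimal Apache Airflow sketch of the kind of automated ingestion pipeline described above; the DAG
id, schedule, and task bodies are illustrative stand-ins rather than the actual municipal pipelines.

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Illustrative sketch only: the DAG id, schedule, and task logic are assumptions.

    def extract(**context):
        # Pull the day's records from a source system (stubbed out here).
        print("extracting records")

    def transform(**context):
        # Apply cleansing and business rules before loading.
        print("transforming records")

    def load(**context):
        # Load curated data into the warehouse.
        print("loading records")

    default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

    with DAG(
        dag_id="municipal_daily_etl",
        start_date=datetime(2015, 1, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args=default_args,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        # Ingestion runs strictly before transformation and loading.
        t_extract >> t_transform >> t_load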


Education
            Bachelor of Science: Computer Science
            Education University
