Data Engineer Senior Resume - Bloomfield, NJ
Candidate Information
Title: Data Engineer Senior
Target Location: US-NJ-Bloomfield
Candidate's Name
Sr. Data Engineer
E-mail: EMAIL AVAILABLE | Phone: PHONE NUMBER AVAILABLE | LinkedIn ID

PROFESSIONAL SUMMARY:
Senior Data Engineer with over 10 years of expertise in Python, Pandas, NumPy, and related data manipulation libraries. Specialized in building sophisticated data transformation and analysis pipelines that maintain the precision and integrity of data. Proficient in diverse database technologies such as MySQL, DynamoDB, PostgreSQL, and MongoDB, ensuring seamless data management. Extensive experience with big data technologies such as Hadoop, Spark, and PySpark, enabling efficient processing of large datasets. Skilled in leveraging AWS and Azure cloud services to architect scalable, cost-effective data solutions. Expert in ETL tools such as Informatica, AWS Glue, Azure Data Factory, and SSIS. Proficient with containerization technologies such as Docker and Kubernetes. Well-versed in version control and CI/CD practices, using Git, Jenkins, and GitHub for streamlined workflows. Experienced in data visualization with Tableau, Power BI, and Looker, and in Infrastructure as Code (IaC) using Terraform and AWS CloudFormation. Strong background in log analysis and monitoring solutions, ensuring robust performance of data pipelines. Highly experienced in Agile and Scrum methodologies, using Jira for efficient project management.
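As a concrete illustration of the transformation-pipeline work summarized above, below is a minimal, hypothetical Pandas sketch of a single cleaning and validation step; the file name and column names are illustrative assumptions rather than details from any actual project.

```python
import pandas as pd


def transform_orders(raw_path: str) -> pd.DataFrame:
    """Illustrative cleaning/validation step for a hypothetical orders feed."""
    df = pd.read_csv(raw_path, parse_dates=["order_date"])

    # Basic integrity checks: drop exact duplicates and rows missing a key.
    df = df.drop_duplicates().dropna(subset=["order_id", "customer_id"])

    # Normalize types and guard against invalid numeric values.
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df = df[df["amount"].notna() & (df["amount"] >= 0)]

    # Derive an analysis-friendly column for monthly reporting.
    df["order_month"] = df["order_date"].dt.to_period("M").astype(str)
    return df


if __name__ == "__main__":
    cleaned = transform_orders("orders.csv")  # hypothetical input file
    print(cleaned.head())
```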
TECHNICAL SKILLS:
Languages and Frameworks: Python, SQL, Django, Flask, FastAPI
Web Development: HTML, Bootstrap, jQuery, Vue.js, AJAX
Database Technologies: Oracle, MongoDB, PostgreSQL, DynamoDB, RDS
Cloud Services: AWS, Azure
Infrastructure as Code and Deployment: AWS CloudFormation, ARM Templates, Terraform, Ansible
CI/CD: Jenkins, GitHub Actions, AWS CodePipeline, GitLab CI/CD, Azure Pipelines
Containers: Docker, Amazon ECS, OpenShift, Kubernetes, Maven
Version Control: Git, GitHub, Bitbucket
Testing: PyUnit, Selenium, Jest, PyTest, Mockito
Monitoring and Logging: AWS CloudWatch, Azure Monitor, Datadog, ELK Stack, New Relic, Splunk
Data Integration: AWS Glue, Kinesis, Azure Data Factory, Informatica, SSIS
Machine Learning and Data Analysis: NumPy, Pandas, SQLAlchemy, Apache Spark SQL, TensorFlow, PyTorch, Looker
Web Servers and Application Servers: Nginx, WebLogic, WebSphere, Gunicorn (WSGI server)
IDEs: Visual Studio Code, Eclipse, IntelliJ
Project Management and Collaboration: Agile, Kanban, Scrum, JIRA, Confluence
Web Services: RESTful APIs, GraphQL, SOAP, WSDL
Message Queuing: Apache Kafka, Apache ActiveMQ, AWS SQS, Azure Service Bus
Data Formats: JSON, XML, CSV, Parquet, YAML, Avro
Security and Visualization: Tableau, Power BI, Matplotlib, OAuth, JWT
Operating Systems: Windows, Linux, UNIX, macOS

WORK EXPERIENCE:

Verizon, Dallas, TX    November 2022 - Present
Sr. Data Engineer
Verizon is a leading player in the telecommunications industry, known for its extensive network infrastructure, technological innovation, and commitment to providing reliable communication services to its customers.
Responsibilities:
Leveraged Python, Pandas, NumPy, and various data manipulation libraries to develop sophisticated data transformation and analysis pipelines, ensuring data accuracy and integrity.
Worked on relational databases, including MySQL, and NoSQL databases such as DynamoDB, optimizing data structures and query performance to meet stringent business requirements.
Designed and managed data processing workflows on Elastic MapReduce (EMR) clusters, incorporating ETL processes to handle large-scale datasets efficiently.
Utilized Informatica to build and maintain ETL pipelines, transforming data from diverse sources into a consistent format for further analysis and reporting.
Developed and maintained RESTful APIs for seamless data integration and accessibility, employing JSON and XML for data interchange and ensuring backward compatibility.
Utilized AWS services, including EC2, S3, Lambda, and Step Functions, to create scalable and cost-effective data pipelines, automating data ingestion, transformation, and orchestration.
Implemented stringent data access controls and user permissions using AWS IAM, ensuring data security and compliance with industry standards and regulations.
Managed data lakes using AWS Lake Formation, enabling efficient storage, cataloging, and access for large volumes of structured and unstructured data.
Managed data storage and processing on Hadoop and HDFS, utilizing PySpark, Hive, Pig, and Spark for complex data transformations and analytics (a brief PySpark sketch follows this section).
Worked on Amazon Redshift and Snowflake data warehouse design and performance optimization, facilitating rapid querying and reporting for business users.
Employed Git for version control and Jenkins for CI/CD to ensure code quality and automation in the data engineering pipeline.
Worked on data security and compliance initiatives, implementing encryption, access controls, and auditing mechanisms to protect sensitive data.
Defined infrastructure as code (IaC) using AWS CloudFormation templates to provision and manage AWS resources, enhancing infrastructure reliability and scalability.
Utilized Docker and Kubernetes for containerization and orchestration of data engineering applications, ensuring portability and scalability across environments.
Collaborated with data visualization teams, using Looker to create interactive dashboards and reporting tools that empowered stakeholders with data-driven insights.
Implemented log aggregation and analysis solutions using Splunk to monitor and troubleshoot data pipeline performance, ensuring robust data operations.
Thrived in Agile and Scrum environments, using Jira for sprint planning and project management to deliver data engineering projects on time and within scope.
Environment: Python, MySQL, DynamoDB, Elastic MapReduce, ETL, Informatica, REST API, JSON, XML, AWS, EC2, S3, Lambda, Step Functions, IAM, AWS Lake Formation, Hadoop, HDFS, PySpark, Hive, Pig, Spark, Amazon Redshift, Snowflake, Pandas, NumPy, Git, Jenkins, CloudFormation, Docker, Kubernetes, Looker, Splunk, Agile, Scrum, Jira.
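To make the EMR/PySpark bullet above concrete, the following is a minimal, hypothetical sketch of the kind of batch transformation such a workflow might run; the S3 paths, schema, and column names are illustrative assumptions, not details of the actual Verizon environment.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal, illustrative PySpark aggregation of the kind run on an EMR cluster.
spark = SparkSession.builder.appName("usage-rollup-sketch").getOrCreate()

events = (
    spark.read.parquet("s3://example-bucket/raw/usage_events/")  # hypothetical path
    .filter(F.col("event_type").isNotNull())
)

daily_usage = (
    events.withColumn("event_date", F.to_date("event_ts"))
    .groupBy("account_id", "event_date")
    .agg(
        F.count("*").alias("event_count"),
        F.sum("bytes_used").alias("total_bytes"),
    )
)

# Write partitioned output back to the data lake for downstream reporting.
daily_usage.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_usage/"
)
spark.stop()
```

On EMR, a job like this would typically be packaged and submitted with spark-submit as a cluster step, with the ETL scheduling handled by the orchestration layer described above.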
Apps Associates, India    September 2018 - July 2022
Data Engineer
Apps Associates is a reputable IT consulting firm with a strong focus on cloud migration, application development, and managed services.
Responsibilities:
Designed and implemented ETL pipelines using Azure Data Factory to ingest, transform, and load data from various sources into Azure Blob Storage and Azure SQL Data Warehouse (a brief pipeline-trigger sketch follows this section).
Configured and managed Azure Service Bus topics and subscriptions for real-time data streaming and event-driven processing.
Led migration efforts from on-premises infrastructure to Microsoft Azure, ensuring a smooth transition while adhering to security best practices.
Applied familiarity with Azure Security Center to optimize security measures tailored to specific project requirements.
Developed and maintained HDInsight clusters for distributed data processing, leveraging technologies such as Hadoop, Spark, Hive, and Pig.
Performed data cleansing, enrichment, and transformation using Azure Data Factory to ensure data quality and consistency in downstream analytics.
Created and maintained RESTful APIs for data access and integration, enabling seamless communication between systems.
Collaborated with cross-functional teams to design and implement SSIS data integration solutions, facilitating data flow between on-premises and cloud platforms.
Managed and optimized HDInsight clusters for scalable and efficient data storage and processing.
Extracted, transformed, and loaded data from Oracle and MongoDB databases into cloud-based data stores.
Version-controlled code using Azure Repos and automated build and deployment processes with Azure DevOps and Jenkins.
Implemented and maintained the ELK (Elasticsearch, Logstash, Kibana) stack for log analysis and monitoring.
Collaborated with data analysts and business users to create Power BI dashboards and reports for data visualization and insights.
Used Terraform for infrastructure as code (IaC) to efficiently provision and manage cloud resources.
Handled data in various formats, including XML and JSON, and developed parsers and converters as needed.
Conducted performance tuning and optimization of data pipelines and queries for improved efficiency and cost-effectiveness.
Monitored data pipelines for consistency, accuracy, and reliability, implementing error handling and alerting mechanisms.
Worked in an Agile and Scrum environment, actively participating in sprint planning, daily stand-ups, and retrospectives, using Azure Boards for project management.
Environment: Python, Azure, ARM templates, Azure Data Factory, Azure Blob Storage, Dataflow, Pub/Sub, DataProc, Dataprep, REST API, ETL, SSIS, Hadoop, Spark, Hive, Pig, XML, JSON, Terraform, Oracle, MongoDB, Bitbucket, Maven, ELK, Scrum, Agile, Jenkins, Jira, Tableau.
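As an illustration of the Azure Data Factory orchestration described above, here is a minimal sketch that triggers and checks an ADF pipeline run with the azure-mgmt-datafactory SDK; the subscription, resource group, factory, pipeline name, and parameter values are hypothetical placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Hypothetical identifiers; replace with real values for an actual factory.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "rg-data-platform"
FACTORY_NAME = "adf-example"
PIPELINE_NAME = "pl_ingest_sales"


def trigger_pipeline_run() -> str:
    """Start an ADF pipeline run and print its current status."""
    client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # Kick off the pipeline with an illustrative run-time parameter.
    run = client.pipelines.create_run(
        RESOURCE_GROUP,
        FACTORY_NAME,
        PIPELINE_NAME,
        parameters={"load_date": "2022-01-01"},
    )

    # Fetch the run record; a real job would poll until a terminal status.
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    print(f"Run {run.run_id} is {status}")
    return run.run_id


if __name__ == "__main__":
    trigger_pipeline_run()
```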
Anblicks, India    July 2017 - September 2018
Data Analyst
Anblicks is a reputable IT services company with a strong focus on data analytics, cloud migration, AI, and digital transformation.
Responsibilities:
Utilized statistical concepts such as mean, median, and standard deviation to analyze and interpret data for actionable insights.
Conducted data analysis and manipulation using Python, R, and SQL, performing tasks such as data cleaning, transformation, and visualization.
Employed Python libraries such as NumPy and Pandas for data manipulation and exploratory data analysis (EDA).
Assisted in designing and implementing Extract, Transform, Load (ETL) processes, including using tools such as Talend to extract data from various sources and prepare it for analysis.
Worked with relational databases such as Oracle and Teradata, writing SQL and PL/SQL queries to retrieve and transform data.
Gained exposure to cloud platforms such as AWS for data storage and processing.
Collaborated on enterprise data warehousing (EDW) projects to store and manage large volumes of data efficiently.
Utilized Hive and Spark for big data processing and analysis, working with large datasets.
Developed and maintained data pipelines, ensuring efficient data flow from source to destination.
Wrote shell scripts to automate data-related tasks and processes.
Worked with XML data formats for data extraction and integration.
Created data visualizations and reports using Matplotlib to communicate findings effectively.
Participated in Agile development methodologies and used tools such as JIRA for project management and issue tracking.
Environment: Python, R, SQL, PL/SQL, NumPy, Pandas, ETL, Talend, Oracle, AWS, Hive, Spark, EDW, Shell Scripting, XML, Matplotlib, Teradata, Agile, JIRA.

Mobulous Technologies, India    February 2014 - June 2017
Python Developer
Mobulous Technologies is a reputable mobile app development company known for its expertise, innovation, and commitment to clients worldwide.
Responsibilities:
Developed web applications using Python and Flask for efficient and scalable back-end solutions (a brief Flask sketch appears at the end of this resume).
Implemented HTML, CSS, and JavaScript to create user-friendly and responsive front-end interfaces.
Utilized PostgreSQL and PL/SQL for database design, optimization, and maintenance, ensuring data integrity and performance.
Integrated jQuery and Vue.js to enhance web application interactivity and user experience.
Implemented AJAX for asynchronous data exchange between the front end and back end, improving application responsiveness.
Leveraged AWS services, including EC2, S3, and Lambda, for deployment, storage, and serverless computing, optimizing application scalability and reliability.
Configured and utilized ActiveMQ for reliable messaging between distributed application components.
Worked with XML for data interchange between systems, enabling seamless integration and communication. Utilized PyTest to develop thorough unit tests.
Collaborated in an Agile environment using JIRA for project management, issue tracking, and sprint planning.
Managed code versioning and collaboration using GitHub. Utilized Maven for project build automation and Jenkins for continuous integration, streamlining the development and deployment process.
Deployed applications on OpenShift for scalability, reliability, and efficient container orchestration.
Monitored and optimized application performance using New Relic, ensuring optimal user experience and efficient resource utilization.
Worked with the WebLogic application server to host and deploy enterprise-level applications, ensuring stability and performance.
Developed and debugged code using the IntelliJ IDE for efficient, high-quality software development.
Environment: Python, Flask, JavaScript, PostgreSQL, PL/SQL, jQuery, Vue.js, AJAX, AWS, EC2, S3, Lambda, ActiveMQ, XML, PyTest, JIRA, GitHub, Maven, Jenkins, OpenShift, New Relic, WebLogic, IntelliJ.

EDUCATION:
Bachelor's in Information Technology (CSE), JNTU Hyderabad, 2012
Master's in Information Technology Management, Campbellsville University, Kentucky, 2023

CERTIFICATIONS:
AWS Certified Cloud Practitioner, issued Mar 2024, expires Mar 2027.
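For reference, the Flask back-end work described in the Mobulous Technologies role could look roughly like the minimal, hypothetical sketch below; the routes and fields are illustrative, and an in-memory list stands in for the PostgreSQL database mentioned in that role.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
_tasks = []  # in-memory stand-in for a real PostgreSQL-backed store


@app.get("/tasks")
def list_tasks():
    # Return all tasks as JSON.
    return jsonify(_tasks)


@app.post("/tasks")
def create_task():
    # Accept a JSON payload and append a new task record.
    payload = request.get_json(silent=True) or {}
    task = {"id": len(_tasks) + 1, "title": payload.get("title", "untitled")}
    _tasks.append(task)
    return jsonify(task), 201


if __name__ == "__main__":
    app.run(debug=True)
```

Running this file serves the API locally; a production deployment would typically sit behind Gunicorn and Nginx, as listed in the skills section.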
