Jobvertise

Big Data Engineer (Data bricks)
Location:
US-GA-Atlanta
Jobcode:
3611473
Apply Online
or email this job to apply later

Job Title: Big Data Engineer (Databricks)
Location: Atlanta, GA
Visa: USC/GC
Duration: 24+ months
Shift: This position is 95% remote; candidates must still come onsite for mandatory meetings and events.
Notes: Databricks, Apache Kafka, and Spark Streaming experience MUST appear on the resume. Local candidates are always preferred over non-locals.

Job Description:
Client Corporation is a Fortune 300 transportation company specializing in freight railroading. We operate approximately 21,000 route miles in 22 states and the District of Columbia, serve every major container port in the eastern United States, and provide efficient connections to other rail carriers. Client has the most extensive intermodal network in the East and is a major transporter of coal and industrial products.

Client Corporation is currently seeking an experienced Big Data Engineer for its Midtown office in Atlanta, GA. The successful candidate must have Big Data engineering experience and must demonstrate an affinity for working with others to create successful solutions. Join a smart, highly skilled team with a passion for technology, where you will work on our state-of-the-art Big Data platforms. The candidate must be a strong communicator, both written and verbal, with experience working with business areas to translate their data needs and data questions into project requirements. The candidate will participate in all phases of the Data Engineering life cycle and will, independently and collaboratively, write project requirements, architect solutions, and perform data ingestion development and support duties.
Skills and Experience:

Required:
- 6+ years of overall IT experience
- 3+ years of experience with high-velocity, high-volume stream processing using Apache Kafka and Spark Streaming
- Experience with real-time data processing and streaming techniques using Spark Structured Streaming and Kafka
- Deep knowledge of troubleshooting and tuning Spark applications
- 3+ years of experience with data ingestion from message queues (Tibco, IBM, etc.) and different file formats across different platforms (JSON, XML, CSV)
- 3+ years of experience with Big Data tools/technologies such as Hadoop, Spark, Spark SQL, Kafka, Sqoop, Hive, S3, and HDFS
- 3+ years of experience building, testing, and optimizing Big Data ingestion pipelines, architectures, and data sets
- 2+ years of experience with Python (and/or Scala) and PySpark/Scala-Spark
- 3+ years of experience with cloud platforms, e.g. AWS, GCP
- 3+ years of experience with database solutions such as Kudu/Impala, Delta Lake, Snowflake, or BigQuery
- 2+ years of experience with NoSQL databases, including HBase and/or Cassandra
- Experience successfully building and deploying a new data platform on Azure/AWS
- Experience with Azure/AWS serverless technologies such as S3, Kinesis/MSK, Lambda, and Glue
- Strong knowledge of messaging platforms such as Kafka, Amazon MSK, and TIBCO EMS or IBM MQ Series
- Experience with the Databricks UI, managing Databricks notebooks, Delta Lake with Python, Delta Lake with Spark SQL, Delta Live Tables, and Unity Catalog
- Knowledge of the Unix/Linux platform and shell scripting is a must
- Strong analytical and problem-solving skills

Preferred (Not Required):
- Strong SQL skills with the ability to write intermediate-complexity queries
- Strong understanding of relational and dimensional modeling
- Experience with Git code versioning software
- Experience with REST APIs and web services
- Good business analysis and requirements gathering/writing skills

Education:
Bachelor's degree required, preferably in Information Systems, Computer Science, Computer Information Systems, or a related field.

Dazzletek Inc




Sr Data Engineer
New York, NY
Must be very senior/architect level Solid understanding of Data Modeling (can build without ETL tools) Kimball methodology SQL savvy DBT Devops knowle...
Posted more than a week ago



Data Engineer (Java, Spark, cloud technology) | Hybrid - Sunnyvale
Sunnyvale, CA
Description: Designs, develops, and implements Hadoop ecosystem-based applications to support business requirements. Follows approved life cycle method...
Posted more than a week ago



Azure Data Engineer - Only Locals
Dallas, TX
10 years of experience 4+ recent Azure experience Azure Data Factory and Python/Pyspark experience Azure Certification is nice to have...
Posted more than a week ago



Remote - Data Engineer/Python Developer
Remote
KFORCE URGENT REQUIREMENT Looking for candidates regarding the following: POSITION Data Engineer/Python Developer LOCATION Remote DURATION 12 months p...
Posted more than a week ago



AWS Data engineer
Warren, NJ
Title: AWS Data Engineer Location: Warren, NJ Duration: 6 months Position type: W2 contract. Required Skills & Experience: AWS (the dataset from the C...
Posted more than a week ago

