Big Data Artificial Intelligence Developer - Onsite at Houston (US-TX-Houston), Jobcode: 3601849
Job Title: Big Data Artificial Intelligence Developer
Location: Houston, TX (Day 1 Onsite)
Duration: Long-term contract
Experience Required: 3-5 years

Job Summary:
We are seeking a skilled Big Data AI Developer to join our Big Data team. The role involves designing and deploying AI models using machine learning and deep learning techniques within big data environments. You will work with distributed computing technologies to create scalable and efficient AI solutions.

Key Responsibilities:
- Develop AI models using supervised, unsupervised, and semi-supervised learning techniques.
- Design and implement end-to-end machine learning pipelines, from data ingestion and preprocessing to model training, evaluation, and deployment.
- Use TensorFlow, PyTorch, and other frameworks to build deep learning models for computer vision, NLP, and other AI domains.
- Manage and optimize Spark ML and Flink ML jobs within distributed environments for large-scale machine learning tasks.
- Implement NLP and computer vision algorithms to solve complex data analytics problems.
- Apply graph-based machine learning with Neo4j to uncover insights from connected data.
- Work with the Cloudera suite to maintain and manage big data workflows, ensuring compatibility with AI model requirements.
- Optimize data storage and processing using technologies such as Kafka, HDFS, HBase, Kudu, and Cloudera Machine Learning.
- Collaborate with infrastructure teams to deploy models using cloud-native technologies and Kubernetes orchestration.
- Develop custom data models and algorithms, applying advanced Python programming skills to challenging data science problems.
- Stay current with AI research and implement novel algorithms that contribute to business goals.
- Collaborate with stakeholders to understand business challenges and translate them into technical solutions.
- Ensure models comply with data privacy and security regulations, applying best practices in data governance.
- Run AI models on GPUs and tune models to take advantage of distributed GPUs.
- Design and implement robust MLOps (Machine Learning Operations) workflows that automate the machine learning lifecycle, from data collection and model development to deployment and monitoring, ensuring continuous integration and delivery (CI/CD) for AI products.

Technical Qualifications:
- Strong expertise in machine learning and deep learning with frameworks such as TensorFlow and PyTorch.
- Experience with distributed data processing frameworks, particularly Spark ML and Flink ML.
- Proficiency in Python for data science, with a solid understanding of AI and machine learning libraries and toolkits.
- Experience with RAPIDS and GPU-aware scheduling.
- Familiarity with Cloudera Data Services and the broader Cloudera suite in a big data context.
- Proven ability to leverage Kubernetes for deploying scalable AI applications.
- Knowledge of advanced areas of AI, such as large language models (LLMs), and the ability to apply them to real-world scenarios.
- Experience with computer vision, NLP, and graph-based machine learning techniques.
- Understanding of data management practices and ETL processes within distributed environments.

Education: Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field.

Certifications: Relevant certifications in AI, big data technologies, or Kubernetes are advantageous.

Soft Skills:
- Excellent problem-solving ability and critical thinking skills.
- Strong communication skills for conveying complex technical concepts effectively.
- Team-oriented mindset with experience working in collaborative environments.
- Commitment to continuous learning and staying abreast of emerging AI technologies.
Sage IT Inc