Candidate Information
Title: Data Scientist / Machine Learning Engineer
Target Location: US-FL-Tampa

Full Name: Candidate's Name | Data Scientist / Machine Learning Engineer
Email: EMAIL AVAILABLE | Phone: PHONE NUMBER AVAILABLE

Professional Summary:
- 15+ years of experience across all aspects of analytical projects, covering end-to-end implementation and value transformation of machine learning and deep learning concepts as a Data Scientist.
- Strong experience with a focus on big data, technical leadership, deep learning, machine learning, image processing, and AI.
- Very good hands-on experience in Spark Core, Spark SQL, Spark Streaming, and Spark machine learning using the Scala and Python programming languages.
- Good knowledge of key Oracle performance-related features such as the Query Optimizer, Execution Plans, and Indexes.
- Expert in core AI-related software architecture and engineering disciplines, including NLP, deep learning, and the simulation of human-like reasoning.
- Worked on Vertex 6.0 to 8.0 migration.
- Good understanding of model validation processes and optimizations.
- Excellent understanding of both traditional statistical modeling and machine learning techniques and algorithms, such as regression, clustering, ensembling (random forest, gradient boosting), and deep learning (neural networks).
- Proficient in understanding and analyzing business requirements, building predictive models, designing experiments, testing hypotheses, and interpreting statistical results into actionable insights and recommendations.
- Fluent in Python with working knowledge of ML and statistical libraries (e.g., Scikit-learn, Pandas).
- Experienced in processing real-time data and building ML pipelines end to end.
- Very strong in Python, statistical analysis, tools, and modeling.
- Experienced in working with large datasets and deep learning algorithms using Apache Spark and TensorFlow.
- Wrote templates for AWS infrastructure as code using Terraform to build staging and production environments, and defined Terraform modules such as Compute, Network, Operations, and Users for reuse across environments.
- Good knowledge of recurrent neural networks, LSTM networks, and word2vec.
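The ensembling methods noted above (random forest, gradient boosting) can be sketched in a few lines of scikit-learn. This is a minimal illustration on a synthetic dataset, not code from any project listed here; dataset and hyperparameters are arbitrary.

```python
# Illustrative sketch: fitting two ensemble classifiers on a toy dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

for model in (RandomForestClassifier(n_estimators=100, random_state=42),
              GradientBoostingClassifier(n_estimators=100, random_state=42)):
    model.fit(X_tr, y_tr)                       # bagged trees vs. boosted trees
    print(type(model).__name__, round(model.score(X_te, y_te), 3))
```

Random forests average many independently grown trees, while gradient boosting fits trees sequentially to the residual errors; both are listed in the summary above.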
- Understood business pain points and used them as input for the new platform architecture document.
- Developed highly scalable classifiers and tools by leveraging machine learning, Apache Spark, and deep learning.
- Built a Terragrunt project to keep Terraform configuration files DRY while working with multiple Terraform modules; worked with Terraform templates to automate Azure IaaS virtual machines and deployed virtual machine scale sets in the production environment.
- Designed, built, and delivered the operational and management tools, frameworks, and processes for the Data Lake, and drove the implementations into the Data Lake Cloud Operations team.
- Performed multiple disaster recovery strategies to the AWS cloud from existing physical or virtual data centers, private clouds, or other public clouds.
- Proficient code-writing capability in major programming languages such as Python, R, Java, and Scala.
- Experienced in using deep learning to solve problems in image and video analysis.
- Good understanding of Apache Spark features and its advantages over MapReduce and traditional systems.
- Solid understanding of RDD operations in Apache Spark, i.e., transformations and actions, persistence (caching), accumulators, and broadcast variables.
- Enhanced the ETL architecture, created mappings using Informatica PowerCenter, and loaded the data into the data warehouse.
- Experienced in real-time processing using Apache Spark and Kafka.
- Good working experience with NoSQL databases such as Cassandra and MongoDB.
- Delivered multiple end-to-end big data analytical solutions on distributed systems such as Apache Spark.
- Experienced in leveraging DevOps techniques and practices such as Continuous Integration, Continuous Deployment, Test Automation, and Build Automation.
- Hands-on experience leading delivery through Agile methodologies; understanding of object-oriented programming.
- Experienced in managing code on GitHub.
- Familiar with the concepts of MVC, JDBC, and RESTful services.
- Familiar with build tools such as Maven and SBT.

Technical Skills:
Development Technologies: Scikit-learn, Keras, Azure ML, Python, Spark MLlib, PySpark, Scala, Spark, SQL, Shell Scripting, DataFrames, Datasets, RDDs, React JS, Java Spring Boot, Generative AI, Pandas, NumPy, Vertex AI, Matplotlib, PyTorch, Seaborn, AI architecture, and Bokeh-Scala.
Distributed Frameworks: Spark MLlib, Hadoop MapReduce, AWS EC2, AWS ECS, Spark, Delta Lake, Hive, Beeline, HDFS, AWS S3, Impala, and Sqoop.
Scheduling/Deployments: GitHub Actions, GitLab Runners, Jenkins, JARs, Pickle, Joblib, Drill, Oozie, AutoSys, and crontab.
Reporting and Presentation: Apache Zeppelin, Jupyter Notebooks, Matplotlib, Seaborn, Bokeh-Scala, Tableau, Arcadia, Microsoft Office (PowerPoint, Word, and Excel), and GitHub.
IDEs, FTP, and SSH Tools: PyCharm, Jupyter/IPython, Power BI, Spyder, Eclipse, IntelliJ, DbVisualizer, Xshell, PuTTY, FileZilla, and WinSCP.
Coding Technologies and Build Tools: Column transformers and pipelines; bag-of-words and TF-IDF; one-hot encoding, frequency encoding, and ordinal/label encoding; general, KNN, and iterative imputers; Flask, Postman, Gunicorn, Flasgger, Docker, Terraform, SBT, and Maven.
File Formats: Images (EasyOCR, Tesseract, and other proprietary MICR products), PDFs (poppler-utils, pdf2image, easyocr, etc.), YAML, JSON, XML (Beautiful Soup), structured (database tables, delimiter-separated values), semi-structured (JSON, XML, HTML, etc.), compressed (Gzip, Snappy, LZO, etc.), and binary (Sequence, Avro, Parquet, ORC, etc.).
Operating Systems: Mac, Ubuntu, Tibco, Linux/Red Hat, and Windows.

PROFESSIONAL EXPERIENCE:
Client: Kennesaw State University, Kennesaw, GA                                05/2023 - Present
Role: Senior Data Scientist / Machine Learning Engineer / Data Analyst
Responsibilities:
- Trained and built deep learning models (ANN, CNN, etc.) for classification and regression using neural networks in Scikit-learn and TensorFlow (Keras) via Python.
- Successfully addressed challenges related to imbalanced datasets by implementing techniques such as oversampling, undersampling, and generating synthetic samples.
- Computed learning-curve parameters (training sizes, training scores, and validation scores) for various regression models via K-fold cross-validation to identify bias and variance tendencies when analyzing overfitting and underfitting in the selected model.
- Loaded data into Oracle tables using SQL*Loader.
- Built a process on a SageMaker stack that employs services such as S3, AWS Glue for ETL tasks, Lambda triggers, and Redshift data warehousing, coded in Python with Jupyter notebooks; used Jira for team collaboration.
- SAR writing and transaction monitoring.
- Built use cases and worked in Jupyter Notebook for data cleaning: converted data into a structured format, removed outliers, dropped irrelevant columns and missing values, and imputed missing values with statistical methods.
- Worked with libraries such as NumPy, Pandas, Scikit-learn, Matplotlib, Seaborn, and psycopg2.
- Extracted data from an Oracle database and spreadsheets, staged it in a single place, and applied business logic to load it into the central Oracle database.
- Researched reinforcement learning and control (TensorFlow, Torch) and machine learning models (Scikit-learn).
- Built an AI/ML model training process that starts when AWS Glue preprocesses incoming S3 data, then invokes an AWS Step Function to have SageMaker prepare for NTM model training.
- Worked on multiple AI and machine learning programs within product suites, including a predictive analytics service (baselining and forecasting of performance and security KPIs) and a security analytics anomaly detection service (clustering of devices based on behavior over time), using NLP, LSTM, Kubeflow, Docker, AWS SageMaker, and AWS Greengrass.
- Took part in project management activities: project plans, prioritizing activities, managing engagements, assigning resources, planning budgets, managing expectations, and aligning stakeholders.
- Managed data storage and processing pipelines in GCP for serving AI and ML services in production, development, and testing using SQL, Spark, and Python.
- Developed full-stack web applications using React JS and Java Spring Boot, seamlessly integrating ML models into user-friendly interfaces.
- Served as a strong advocate to management for adopting Azure cloud data solutions such as Data Lake Analytics, Data Factory, Azure Databricks, Azure Kafka with HDInsight, Power BI, and machine learning.
- Programmed in Python to prototype and deploy machine learning, deep learning, predictive, probabilistic, and statistical models, along with user interface development.
- Responsible for designing and deploying new ELK clusters (Elasticsearch, Logstash, Kibana, Beats, Kafka, ZooKeeper, etc.).
- Worked with development and QA teams to design ingestion pipelines and integration APIs, and provided Elasticsearch tuning based on application needs.
- Supported the design of data structures, Azure Data Catalog, Databricks, Notebooks, Azure Data Factory, Data Lake, Blob Storage, HDInsight, and the data warehouse; created and administered instances, assigned users, and revoked privileges.
- Worked on CentOS 7 and Linux to access AWS EC2 instances.
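The learning-curve analysis via K-fold cross-validation described above can be sketched with scikit-learn's `learning_curve` helper. This is an illustrative example on synthetic data; the estimator and sizes are arbitrary choices, not details from the engagement.

```python
# Illustrative sketch: learning curves via 5-fold cross-validation to
# inspect bias/variance tendencies. Data here is synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import learning_curve

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=200)

train_sizes, train_scores, valid_scores = learning_curve(
    Ridge(alpha=1.0), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),  # fractions of the training split
    cv=5,                                  # 5-fold cross-validation
)

# A large gap between mean train and validation scores suggests high
# variance (overfitting); low scores on both suggest high bias (underfitting).
print(train_scores.mean(axis=1))
print(valid_scores.mean(axis=1))
```

Plotting the two mean curves against `train_sizes` gives the usual learning-curve diagnostic for over- and underfitting.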
- Used Generative Adversarial Networks in generative AI, which consist of two models, the generator and the discriminator: the generator creates synthetic data, and the discriminator tries to distinguish it from real data.
- Created customized Tableau dashboards for daily/weekly/monthly reporting purposes.
- Performed unit testing, provided system test support, and validated and monitored deliverables in production.
- Automated the ML model-building process by building data pipelines and integrating them with the data cleaning process.
- Used PuTTY to access the AWS EC2 instances for training the models with production-phase data.
- Understanding of ethical considerations in AI, including bias mitigation, fairness, and transparency.
- Integrated the AWS server with GitLab to push the model into production and monitor its performance.
- Collaborated on machine learning model building within the team through GitLab integration.
- Used Jira for project management: reporting and analysis, workflow customization, issue/task management, and project customization, helping teams of all types manage work.
- Utilized big data tools for MLOps, such as GCP, BigQuery, and Dataproc for streamlining data lakes, and AutoML for automating the model-building process.
- Designed data visualizations to present complex analysis and insights to customers with Tableau and related tools.
- Implemented novel iterative development procedures in JupyterLab-based AI Notebooks.
- Developed a generic script for the regulatory documents.
- Used Python's ElementTree (ET) to parse XML derived from PDF files.
- Accessed data stored in SQLite3 data files using Python, extracted the metadata, tables, and table data, and converted the tables to corresponding CSV files.
- Used XML tags and attributes to isolate headings, side-headings, and subheadings into rows of a CSV file.
- Used text mining and NLP techniques to find the sentiment about the organization.
- Continuously improved the quality of generated output in generative AI, though challenges remain, such as maintaining diversity and reducing bias in generated samples.
- Deployed a spam detection model and performed sentiment analysis of customer product reviews using NLP techniques.
- Developed and implemented predictive models of user behavior data on websites, URL categorization, social network analysis, social mining, and search content based on large-scale machine learning.
- Developed predictive models on large-scale datasets to address various business problems by leveraging advanced statistical modeling, machine learning, and deep learning.
- Used Pandas, NumPy, Seaborn, Matplotlib, Scikit-learn, SciPy, and NLTK alongside R for developing machine learning algorithms.
- Used the R programming language for graphically critiquing the datasets and gaining insights into the nature of the data.
- Researched deep learning for implementing NLP, clustering, and neural networks; visualized and presented the results using interactive dashboards.
- Involved in the transfer of files from GitHub to DSX.
- Used Dremio on AWS as a query engine for faster joins and complex queries over an AWS S3 bucket.
- Developed ETL pipelines in and out of the data warehouse using a combination of Python, Dremio, and Snowflake; used SnowSQL to write SQL queries against Snowflake.
- Used Beautiful Soup for web scraping (parsing the data).
- Developed code to capture descriptions under the headings of the index section into the description column of each CSV row.
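The SQLite-to-CSV extraction described above can be sketched with only the Python standard library: enumerate the tables via `sqlite_master`, then dump each to its own CSV. The database contents and file names below are made up for the demo.

```python
# Minimal sketch: list tables in a SQLite file and export each to CSV.
# "demo.db" and the "headings" table are hypothetical demo data.
import csv
import sqlite3

conn = sqlite3.connect("demo.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS headings (id INTEGER, title TEXT)")
cur.executemany("INSERT INTO headings VALUES (?, ?)", [(1, "Intro"), (2, "Scope")])
conn.commit()

# sqlite_master holds the schema metadata, including table names.
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
for (table,) in cur.fetchall():
    rows = cur.execute(f"SELECT * FROM {table}").fetchall()
    header = [col[0] for col in cur.description]   # column names
    with open(f"{table}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)

conn.close()
```

Each table lands in a `<table>.csv` file with a header row, mirroring the "tables to respective CSV tables" step.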
- Used other Python libraries such as PDFMiner, PyPDF2, PDFQuery, and sqlite3.
- Trained and served ML pipelines using MLOps practices, which aim to deploy and maintain ML systems in production reliably and efficiently.

Client: Equifax Inc, Alpharetta, GA                                11/2022 - 05/2023
Role: Senior Data Scientist / Machine Learning Engineer
Responsibilities:
- Incident and problem management, coordinating resolution of data movement disruptions.
- Diverse experience in Anti-Money Laundering (AML), Know Your Customer (KYC), Client Onboarding (Business As Usual), and banking and financial services.
- Risk identification and evaluation; managing and improving internal controls that mitigate risks.
- Familiar with SOX compliance and Basel II standards; worked on data analysis of Mantas Anti-Money Laundering (AML) compliance data and supported user requirements.
- Involved in analysis of business requirements, design and development of high-level and low-level designs, and unit and integration testing.
- Performed exploratory data analysis, data visualization, and feature selection using Python and Apache Spark.
- Scaled Scikit-learn machine learning algorithms using Apache Spark.
- Used techniques such as Fast Fourier Transforms, convolutional neural networks, and deep learning.
- Utilized GCP resources, namely BigQuery, Cloud Composer, Compute Engine, Kubernetes clusters, and GCP storage buckets, for building the production ML pipeline.
- Expertise in statistical analysis, text mining, supervised learning, unsupervised learning, and reinforcement learning.
- Used Terraform to write infrastructure as code and created Terraform scripts for EC2 instances, Elastic Load Balancers, and S3 buckets.
- Created an enterprise data lake on the Azure cloud, consolidating various data sources from GCP and AWS to establish a "single version of truth" and enable efficient analytics capabilities.
- Developed and productionized various versions of machine learning and natural language processing models in novel microservices architectures for batch scoring and real-time API serving of predictions.
- Built a flow in which AWS Batch reads the processed S3 data and applies feature engineering, then instructs SageMaker to create the NTM model and its corresponding inference endpoint.
- Efficiently accessed data via multiple vectors (e.g., NFS, FTP, SSH, SQL, Sqoop, Flume, Spark).
- Deployed ML pipelines in production using Docker and Kubeflow, and wrote automated test cases for maintenance and monitoring.
- Good knowledge of data warehouse and MPP database concepts, with hands-on experience in Amazon Redshift and Actian Matrix and Vector databases.
- Developed report layouts for suspicious activity and pattern analysis under AML regulations.
- Performed and documented associated collaboration mechanisms such as stand-ups and sprints according to Agile development principles, acting as the Scrum Master.
- Developed a new data schema for the data consumption store for the machine learning and AI models to shorten processing time, using SQL, Hadoop, and cloud services.
- Developed novel and efficient reporting architectures to report KPIs to relevant internal and external stakeholders using Google Data Studio and Tableau Server.
- Deployed a PyTorch sentiment analysis model and created a gateway for accessing it from a website; used t-SNE and bag-of-words, and deployed the model using Amazon SageMaker.
- Developed web-based applications using Python, Django, PyTorch, Bootstrap, HTML, and Angular.
- Deployed machine learning models using Python, AWS SageMaker, Lambda, and API Gateway.
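One of the techniques mentioned in this section, the Fast Fourier Transform, can be sketched with NumPy: recover the dominant frequency of a noisy sampled signal. The signal, sampling rate, and noise level below are synthetic assumptions for illustration only.

```python
# Hedged FFT sketch: find the dominant frequency of a noisy 50 Hz sine.
import numpy as np

fs = 1000                      # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)    # one second of samples
signal = (np.sin(2 * np.pi * 50 * t)
          + 0.2 * np.random.default_rng(1).normal(size=t.size))

spectrum = np.fft.rfft(signal)             # FFT of a real-valued signal
freqs = np.fft.rfftfreq(t.size, d=1 / fs)  # frequency bin centers
dominant = freqs[np.argmax(np.abs(spectrum))]
print(dominant)  # → 50.0
```

The peak of the magnitude spectrum lands on the 50 Hz bin despite the added noise, which is the usual starting point for FFT-based feature extraction.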
- Developed and implemented predictive models of user behavior data on websites, URL categorization, social network analysis, social mining, and search content based on large-scale machine learning.
- Wrote scripts in Python using Apache Spark and the Elasticsearch engine to create dashboards visualized in Grafana.
- Led development of Natural Language Processing (NLP) initiatives with chatbots and virtual assistants.
- Worked to increase clients' understanding of the DataRobot software package for their machine learning needs.
- Updated and installed Vertex Returns and O Series.
- Featured solutions architect for implementing a hybrid cloud environment using Azure and Azure Stack and migrating existing workloads to the cloud; also used Azure AI and machine learning to predict scalable outcomes and possible scenarios across a realm of directories and workloads.
- Used Oracle JDeveloper to support Java, JSP, and HTML code used in modules.
- Designed, built, and managed the ELK (Elasticsearch, Logstash, and Kibana) cluster for centralized logging and search functionality for the app.
- Used PySpark DataFrames to read text, CSV, and image data from HDFS, S3, and Hive; used Apache Zeppelin for visualization of big data.
- Designed experiments based on Latin hypercube sampling to generate data and train a simulated model of the environment for a reinforcement learning model.
- Involved in planning and securing Data Science tools such as DataRobot, Alteryx, and Dataiku.
- Used Spark Streaming to load the trained model to predict real-time data from Kafka.
- Identified and documented functional, non-functional, and other related business decisions for implementing Actimize SAM to comply with AML regulations.
- Built a web application that picks up data stored in MongoDB and stored the results back in MongoDB.
- Fully automated job scheduling, monitoring, and cluster management without human intervention using Airflow.
- Built Apache Spark as a web service using Flask; worked with input file formats such as ORC, Parquet, JSON, and Avro.
- Wrote Spark SQL UDFs and Hive UDFs.
- Participated in and contributed to AI research projects, showcasing a dedication to advancing the field.
- Optimized Spark code using performance tuning of Apache Spark.
- Optimized machine learning algorithms based on need; created features to train algorithms.
- Used Amazon Elastic MapReduce (EMR) to process a huge number of datasets using Apache Spark and TensorFlow.

Client: Flatiron, New York, NY                                11/2019 - 09/2022
Role: Senior Data Scientist / Machine Learning Engineer / Data Analyst
Responsibilities:
- Responsible for applying machine learning techniques (regression/classification) to predict outcomes.
- Experienced in machine learning regression algorithms such as simple, multiple, and polynomial regression, SVR (Support Vector Regression), decision tree regression, and random forest regression.
- AML transaction monitoring system implementation, AML remediation, and mitigation of process and controls risk.
- Built a flow in which AWS Athena retrieves the new S3 data and adds it to its output table; training for both the KB and Group Name prediction is handled by SageMaker and the supervised BlazingText algorithm.
- Tech stack: Python 2.7, PyCharm, Anaconda, pandas, NumPy, unittest, R, and Oracle.
- Developed large data sets from structured and unstructured data; performed data mining.
- Partnered with modelers to develop data frame requirements for projects.
- Performed ad-hoc reporting, customer profiling, and segmentation using R/Python.
- Tracked various campaigns, generating customer profiling analysis and data manipulation.
- Provided Python programming, with detailed direction, in the execution of data analysis that contributed to the final project deliverables; responsible for data mining.
- Analyzed large datasets to answer business questions by generating reports and outcomes.
- Conceived an enterprise-level reference architecture for artificial intelligence that seamlessly integrated machine learning (ML), robotic process automation (RPA), natural language understanding (NLU), and operational decision management (ODM).
- Retrieved data from the database through SQL as per business requirements.
- Created, maintained, modified, and optimized SQL Server databases.
- Connected Azure data stores, remote databases, and remote devices; administered identity and access control for all others.
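Among the regression families listed above, polynomial regression has a particularly compact sketch with NumPy's `polyfit`. The quadratic ground truth and noise level below are invented for the demo, not project data.

```python
# Illustrative sketch: polynomial (degree-2) regression on synthetic data.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-3, 3, 100)
# Hypothetical ground truth: y = 2x^2 - x + 0.5, plus a little noise.
y = 2.0 * x**2 - 1.0 * x + 0.5 + rng.normal(scale=0.1, size=x.size)

coeffs = np.polyfit(x, y, deg=2)   # least-squares fit of a*x^2 + b*x + c
print(np.round(coeffs, 2))
```

With low noise, the recovered coefficients land close to the generating values (2.0, -1.0, 0.5); higher-degree fits follow the same pattern but risk overfitting.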
- Understood the business problem, built the hypothesis, and validated it using the data.
- Implemented 11g and upgraded the existing database from Oracle 9i to Oracle 11g.
- Used the AWS SageMaker machine learning service to transform data by creating and using SageMaker notebooks.

Client: Central Bank, OJSC "Aiyl Bank", Bishkek, Kyrgyzstan                11/2004 - 07/2018
Role: Data Scientist / Head Researcher
Responsibilities:
- Collaborated with internal stakeholders to understand business challenges and develop analytical solutions to optimize business processes.
- Performed analysis using industry-leading text mining, data mining, and analytical tools and open-source software.
- Worked on strategic R&D projects to build cutting-edge ML, predictive analytics, and NLP solutions across securities trading, regulatory surveillance, fraud detection, risk analytics, and operations automation.
- SRE/PM: applied a pilot-to-production model to improve slow AI/data science applications, working to accelerate AI projects moving to production in Azure and GCP; leveraged Azure Kubernetes (AKS) and Cloud Scheduler, resulting in higher-quality suggestions, better ad insertion fees, and a 15% increase in partner ad revenue; used H2O.ai (ML DevOps, Driverless AI), Looker, Azure ML, GCP Bigtable, BigQuery, Cloud SQL, ATP, supply chain, and SAP integration (R/3, SCM, CM, BW).
- Researched and documented advanced solutions architectures for other enterprise customers, including building CI/CD pipelines and a robust monitoring system for model performance.
- Skilled in monitoring servers using Nagios and CloudWatch, and using the EFK stack (Elasticsearch, Fluentd, Kibana).
- Leveraged GCP machine learning and artificial intelligence to develop a predictive analytics engine served through an API: a microservices-based, containerized, fully scalable service.
- Performed predictive maintenance and behavior modeling for manufacturing and healthcare IoT data using LSTMs and a microservices-based architecture for machine learning applications; implemented XGBoost.
- Used MATLAB and C/C++ with OpenCV, with SVMs, neural networks, and random forests as classifiers.
- Generated graphical reports using the Python packages NumPy and Matplotlib; built various graphs for business decision-making using the Python Matplotlib library.
- Built a deep learning network using TensorFlow on the data and reduced wafer scrap by 15% by predicting the likelihood of wafer damage; a combination of z-plot features, image features (pigmentation), and probe features is used.
- Used Natural Language Processing (NLP) to preprocess the data, determine the number of words and topics in emails, and form clusters of words.
- Worked on information extraction and NLP algorithms coupled with deep learning (ANN, CNN) using Theano, Keras, and TensorFlow.
- Responsible for implementing monitoring solutions in Ansible, Terraform, Docker, and Jenkins.
- Automated Datadog dashboards across the stack through Terraform scripts.
- Wrote Scikit-learn-based machine learning algorithms for building POCs on sample datasets.
- Analyzed structured, semi-structured, and unstructured datasets using MapReduce and Apache Spark.
- Implemented an end-to-end lambda architecture to analyze streaming and batch datasets.
- Monitored ML-based applications for performance issues with ML-centric capabilities such as data drift analysis, model-specific metrics, and alerts using DataRobot MLOps.
- Converted Mahout's machine learning algorithms to RDD-based Apache Spark MLlib to improve performance.
- Ran all 36 queries in Matrix, Vector, Teradata, AWS Redshift, Snowflake, MemSQL, and Microsoft Azure.
- Used deep reinforcement learning to determine the optimal control trajectory in closed-loop sequential control for cost minimization.
- Built a smart state-of-charge monitor for electric vehicles based on a recurrent neural network and Seq2Seq forecasting.
- Built multiple machine learning features using Python, Scala, and Java based on need.
- Developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
- Wrote Terraform scripts from scratch for building Dev, Staging, Prod, and DR environments.
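The MapReduce jobs above were written in Java on Hadoop; as a language-neutral illustration, the same map → shuffle → reduce data flow can be sketched in a few lines of Python on a toy word-count corpus (the classic MapReduce example, with made-up input lines).

```python
# Pure-Python sketch of the MapReduce pattern: word count on a toy corpus.
from collections import defaultdict

lines = ["big data big models", "data pipelines", "big pipelines"]

# Map: emit (word, 1) pairs for every word in every line.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle: group emitted values by key.
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: sum the grouped counts for each key.
counts = {word: sum(vals) for word, vals in groups.items()}
print(counts)  # → {'big': 3, 'data': 2, 'models': 1, 'pipelines': 2}
```

On Hadoop the shuffle step is handled by the framework between the map and reduce tasks; here it is made explicit with a dictionary of lists.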
- Performance tuning of Oracle databases and user applications.
- Used Amazon Elastic MapReduce (EMR) to process a huge number of datasets using Apache Spark and TensorFlow.
- Lead Data Scientist for the development of machine learning and NLP engines utilizing health population data.
- Strong domain knowledge in the areas of CDD, OFAC, transaction monitoring, fraud, and suspicious activity reporting.
- Involved in loading data from RDBMS and weblogs into HDFS using Sqoop and Flume.
- Involved in building a complex streaming data pipeline using Kafka and Apache Spark.
- Worked on loading data from MySQL to HBase where necessary using Sqoop; wrote Hive UDFs based on need.
- Exported the result set from Hive to MySQL using Sqoop after processing the data; optimized Hive queries.
- Wrote Terraform scripts for CloudWatch alerts.
- Used MLOps and DevOps practices for building and deploying solutions.

TECHNICAL PROJECTS
- Applied Binary Classification Competition with Atlanticus (Spring 2024), 1st prize winner
- Applied Analytics Project with Delta Air Lines (Spring 2024), 1st prize winner
- Conference of State Bank Supervisors Data Analytics Competition, Kennesaw, USA (April 27, 2023), 2nd prize winner

EDUCATION
Kennesaw State University, Kennesaw, GA                08/2022 - expected 06/2026
Doctor of Philosophy in Data Science and Analytics
New York, NY                11/2019 - 03/2020
Immersive Data Science program
Columbia University in the City of New York, New York, NY                07/2011 - 08/2012
Master of Public Administration in Economic Policy Management
Winner of the Joint Japan/World Bank Graduate Scholarship Program
Saint-Petersburg State University of Economics & Finance, Saint-Petersburg, Russia                09/1999 - 07/2004
Bachelor of Arts and Master's in Forecasting and Strategic Management

Certifications and Special Training
- IMF course on Monetary Policy Analysis, Joint Vienna Institute, Vienna, Austria (Dec 9, 2013 - Dec 20, 2013)
- IMF course on Macroeconomic Forecasting, Joint Vienna Institute, Vienna, Austria (Apr 21, 2014 - May 2, 2014)
- Business conditions and macroeconomic forecasting seminar, Paris, France (Jan 24, 2011 - Jan 27, 2011)
- Course on Economic Development Strategy for CIS Countries, Seoul, Korea (Aug 8, 2010 - Aug 28, 2010)
- Macroeconomic modeling in the Central Bank, Moscow, Russia (Jul 22, 2010 - Jul 25, 2010)
- Applied economic policy course, Joint Vienna Institute, Vienna, Austria (May 25, 2009 - Aug 31, 2009)
- IMF course on Financial Programming and Policies, Washington D.C., USA (Apr 27, 2009 - May 8, 2009)
- Instruments of Financial Market course, Gerzensee, Switzerland (Sep 1, 2008 - Sep 20, 2008)
- Principles and Practices of Islamic Economics and Banking course, Baku, Azerbaijan (Apr 7, 2008 - Apr 11, 2008)
- Financial Instruments, Deutsche Bundesbank seminar (Oct 22, 2007 - Oct 26, 2007)
