Candidate's Name
Street Address Jeffrey Street, Iowa City, IA Street Address, USA
EMAIL AVAILABLE
PHONE NUMBER AVAILABLE

SUMMARY OF QUALIFICATIONS
Research and development consultant with deep experience in theoretical and statistical analyses for research programs and products. Highly proficient in supporting research teams that provide the psychometrics and analytics infrastructure for a variety of projects. Proven record of developing and implementing computer programs for educational measurement and statistics that have remained in use for many years.

Expertise: Over 25 years of studying, applying, and implementing statistical and analytical methods for theoretical and technical issues in Educational Measurement and Statistics, covering both existing and new testing products/services, including proposing more efficient and reliable methods for analyzing data and reporting results.

Research: Interests include test security, responses and response times, item response theory, automated test assembly, DIF analysis, item selection and item exposure control procedures, item- and cluster-level (testlet) adaptation, and restricted- and unrestricted-maximum information.

Computer Programming: Experienced in developing computer programs in R, Python, SQL, C/C++, Visual Basic, and Fortran. Developed software applications for ACT's projects and research: Computerized Adaptive Test, Automated Test Assembly, Common Item Preparation, Common Item Equating, Classification Consistency, Log Linear Bivariate, Beta Binomial Bivariate, Error Bands, and Delta Plot.

Emerging Technology: Extensively trained in current analytical and computing skills, including Data Mining, Data Science, Machine Learning, Big Data, TensorFlow, PyTorch, PySpark, and Jenkins.

PROFESSIONAL ACTIVITIES AND ACHIEVEMENTS

Indonesia Open University, Pondok Cabe, Tangsel, Banten, Indonesia. 2020-2024
Research and Development Consultant, Indonesia Cyber Education Institute (November 1, 2021 - May 1, 2024)
Responsible for ensuring that the university and the institute deliver world-class online courses within five years.
Measurement Consultant, Examination Center (March 2, 2020 - October 30, 2021)
Developed and implemented several research studies and software applications for the Examination Center.

ACT, Inc., Iowa City, Iowa 2009-2020
Senior Psychometrician, Research Analytics Infrastructure Dept. (2018-2020)
Worked extensively in the Zeppelin and Databricks platforms to support ACT's digital reporting, using them to develop analytical measurement computer programs in R and Python.
Accomplishments:
Developed a Classical Item Analysis program in Python that performs classical-test-theory analysis of items and test forms for multiple-choice items.
Supported research teams with computer programs that made production work more efficient and accurate.
Proposed a method for detecting answer copying at an early stage of a computerized test.
Successfully completed five certified Databricks analytics courses; passed the Python course with a score of 98%.
Completed online analytics courses in Data Science, Machine Learning, TensorFlow, PyTorch, PySpark, and Jenkins.

Senior Psychometrician, Measurement Research Dept. (2016-2018)
Responsible for conducting equating, scaling, item calibration, item/form analysis, DIF analyses, test security, and automated test assembly.
Accomplishments:
Developed Sibtest_act.
The program was written in R as a modification of the Simultaneous Item Bias Test (SIBTEST) implementation in the Multidimensional Item Response Theory (mirt) package (Chalmers, 2016). SIBTEST is a classical method for detecting Differential Item Functioning (DIF) through a regression approach and can be used with dichotomous and polytomous data.
Developed Effective Response Time. Written in R and based on a research paper by Meijer and Sotaridona (2006), the program uses responses and response times to detect data irregularities on computerized tests and/or computerized adaptive tests with multiple-choice items. Graphical results are provided in addition to the text output files.
Developed automatic calibration and reporting for one of ACT's products, reducing working time by about 80%.
Sole author of one paper and first author of two papers presented at NCME conferences.

Psychometrician II, Measurement Research Dept. (2013-2016)
Worked with various teams to provide the statistical analyses and computer programs they needed. Used R, C, Visual Fortran, and SAS to perform a variety of psychometric work, including item calibration, cheating analysis, research papers, and daily quality control.
Accomplishments:
Developed Omega IndexHW. This R package detects answer copying on multiple test forms and/or computerized adaptive tests consisting of multiple-choice items. It extends the Omega index (Wollack, 1996), which can only detect answer copying on a single test form.
Developed Common Item Equating. This Visual Fortran package contains methods for equating two test forms under nonequivalent-groups designs with anchor items/tests. Equating types include identity, mean, linear, and equipercentile; equating methods include synthetic, nominal weights, Tucker, Levine observed score, Levine true score, and Braun/Holland.
First author of two papers presented at NCME conferences and sole author of one submitted research paper.

Research Associate, Measurement Research Dept. (2009-2013)
Developed computer programs to support psychometric tasks. Used C/C++, Visual Fortran, and SAS to perform psychometric work including item calibration, equating, scaling, automated test assembly, classification consistency, log linear bivariate analysis, and daily quality control.
Accomplishments:
Developed Delta Plots. This Fortran program automatically plots delta points, the error ranges of equating under an item response theory model.
Modified Common Item Preparation. This C++ program computes summary statistics and raw score distributions for two test forms with anchor items/tests; its output serves as the input file for common item equating.
Modified Classification Consistency. This C++ program estimates classification consistency and accuracy indices under three psychometric models: the two-parameter beta binomial, the four-parameter beta binomial, and the three-parameter logistic item response theory models.
Modified Log Linear Bivariate. This C++ program provides a bivariate distribution of anchor and non-anchor items of a test form under the log linear model.
Modified Beta Binomial Bivariate.
This C++ program provides a bivariate distribution of anchor and non-anchor items of a test form under the beta binomial model.

Center for Educational Assessment (CEA), Jakarta, Indonesia 2006-2009
Director, Item Bank (2006-2009)
Responsible for developing statistical measurement procedures and computer programs for new and existing test programs and for providing technical measurement consultation and support to CEA staff.
Accomplishments:
Published two research reports and several newspaper articles.
Ranked third among more than 200 selected Indonesian government research associates in 2008.
Worked as a part-time lecturer in the UHAMKA Educational Assessment Program and the University of Indonesia Psychology Department.

ACT, Inc., Iowa City, Iowa 1999-2006
Research Associate, Measurement Research Dept. (2003-2006)
Accomplishments:
Hired as a full-time employee at ACT, Inc. about one year before receiving the Ph.D. diploma.
Developed Computerized Adaptive Test. This Fortran program examines many related factors in a computerized adaptive test, such as item response models, item/testlet adaptation, test lengths, starting ability levels, and targeted probabilities of correct responses.

Research Assistant, Measurement Research Dept. (1999-2003)
Provided support, analyzed data, and developed computer programs.
Accomplishments:
Presented a research paper as first author for the first time at the NCME 2000 Conference.

Research Assistant, University of Iowa (1997-1999)
Supported my academic adviser in conducting academic research and publishing research papers.

ACADEMIC BACKGROUND
Ph.D., Educational Measurement and Statistics, The University of Iowa
M.A., Educational Measurement and Statistics, The University of Iowa
Strata I, Applied Mathematics, Bandung Institute of Technology

Publications
Widiatmo, H. & Suryanto, A. (2023). Developing Automatic Item Generation. Frontiers in Education, section Assessment and Applied Measurement.
Widiatmo, H. (2022). Methods in a CAT for Selecting Items Targeted at a Higher Probability of Correct Response. 2022 International Conference on Assessment and Learning (ICAL).
Widiatmo, H. (2008). Index Objektifitas Ujian Nasional SMP 2008. Laporan Ilmiah Pusat Penilaian Pendidikan, Badan Penelitian dan Pengembangan Pendidikan Nasional, Jakarta, Indonesia.
Widiatmo, H. (2008). Index Objektifitas Ujian Nasional SMA 2008. Laporan Ilmiah Pusat Penilaian Pendidikan, Badan Penelitian dan Pengembangan Pendidikan Nasional, Jakarta, Indonesia.
Widiatmo, H. (2008). Index Objektifitas Ujian Nasional SMK 2008. Laporan Ilmiah Pusat Penilaian Pendidikan, Badan Penelitian dan Pengembangan Pendidikan Nasional, Jakarta, Indonesia.
Widiatmo, H. (2007). Index Objektifitas Ujian Nasional SMP 2007. Laporan Ilmiah Pusat Penilaian Pendidikan, Badan Penelitian dan Pengembangan Pendidikan Nasional, Jakarta, Indonesia.
Widiatmo, H. (2007). Index Objektifitas Ujian Nasional SMA 2008. Laporan Ilmiah Pusat Penilaian Pendidikan, Badan Penelitian dan Pengembangan Pendidikan Nasional, Jakarta, Indonesia.
Widiatmo, H. (2007). Index Objektifitas Ujian Nasional SMK 2007. Laporan Ilmiah Pusat Penilaian Pendidikan, Badan Penelitian dan Pengembangan Pendidikan Nasional, Jakarta, Indonesia.
Widiatmo, H. (2006). Index Objektifitas Ujian Nasional SMP 2007. Laporan Ilmiah Pusat Penilaian Pendidikan, Badan Penelitian dan Pengembangan Pendidikan Nasional, Jakarta, Indonesia.
Widiatmo, H. (2006). Index Objektifitas Ujian Nasional SMA 2006.
Laporan Ilmiah Pusat Penilaian Pendidikan, Badan Penelitian dan Pengembangan Pendidikan Nasional, Jakarta, Indonesia.
Widiatmo, H. (2006). A method for detecting answer-copying for multiple-choice items (a case study at junior high schools in Garut City). Widyariset, Vol. 9/4, Lembaga Ilmu Pengetahuan Indonesia, Jakarta, Indonesia.
Widiatmo, H. (2006). A method of score conversion for the Indonesia National Exam. Buletin Puspendik, Jakarta, Indonesia.
Widiatmo, H. (2004). Developing a computerized adaptive test (CAT) design of the Cognitive Abilities Test (CogAT): Fixed- versus variable-starting test, item- versus cluster-level adaptation, and restricted- versus unrestricted-maximum information. Doctoral dissertation, The University of Iowa.
Yi, Q., Hanson, B.A., Widiatmo, H., & Harris, D.J. (2000). SPRT vs. CMT in computerized classification tests. MRD project.
Widiatmo, H. (1998). Comparability of the computerized and paper-and-pencil versions of the Iowa Tests of Educational Development (ITED) from a G-Theory view. Unpublished master's equivalent thesis, The University of Iowa.

Paper Presentations
Widiatmo, H. (2022). Improving the Goodness-of-Fit by Considering Responses and Response Times. Paper presented at the International Conference on Assessment and Learning, Denpasar, Bali, Indonesia.
Widiatmo, H. (2022). Methods in a CAT for Selecting Items Targeted at a Higher Probability of Correct Response. Paper presented at the International Conference on Assessment and Learning, Denpasar, Bali, Indonesia.
Widiatmo, H. & Suryanto, A. (2022). Developing Automatic Item Generation. Paper presented at the International Conference on Innovation in Open and Distance Learning, Denpasar, Bali, Indonesia.
Widiatmo, H. (2021). Detecting Data Irregularities by Considering Responses and Response Time. Paper presented at the International Conference on Innovation in Open and Distance Learning, Jakarta, Indonesia.
Widiatmo, H. & Huang, C. (2017). Detecting Answer Copying With or Without Response Time. Paper presented at the Annual Meeting of the National Council on Measurement in Education, San Antonio, TX.
Widiatmo, H. (2016). Using Response Times and Responses for Excluding Data Irregularities. Paper presented at the 2016 Annual Meeting of the National Council on Measurement in Education, Washington, DC.
Widiatmo, H. (2015). Develop a CAT Design for a Test That Measures Multilevel Grades. Proposal submitted to the 2016 Annual Meeting of the American Educational Research Association, Washington, DC.
Widiatmo, H. & Wright, D. B. (2015). Comparing two item response models that incorporate response times. Paper presented at the Annual Meeting of the National Council on Measurement in Education, Chicago, IL.
Yi, Q., Widiatmo, H., Hanson, B.A., Ban, J.C., & Harris, D.J. (2001). Impact of scoring options for Not Reached Items in CAT. Paper presented at the Annual Meeting of the National Council on Measurement in Education, Seattle, WA.
Widiatmo, H., & Hanson, B.A. (2000). Describing a procedure for computing the standard error of equating of a composite score using the bootstrap. Paper presented at the Annual Meeting of the National Council on Measurement in Education, New Orleans, LA.
Yi, Q., Hanson, B.A., Widiatmo, H., & Harris, D.J. (2000). Procedures of screening for "affected" common items in the common-item nonequivalent group design. Paper presented at the Annual Meeting of the National Council on Measurement in Education, New Orleans, LA.
Yi, Q., Hanson, B.A., Widiatmo, H., & Harris, D.J. (1999).
Empirical examination of common item p-value differences in the common-item nonequivalent group design. Paper presented at the Annual Meeting of the Florida Educational Research Association, Deerfield Beach, FL.
Vispoel, W. P., Hendrickson, A. B., Bleiler, T., Widiatmo, H., Sharairi, S., & Ihrig, D. (1999). Limiting answer review and change on computerized adaptive vocabulary tests: Psychometric and attitudinal results. Paper presented at the Annual Meeting of the National Council on Measurement in Education, Montreal, Canada.

GRADUATE COURSE WORK
Measurement
Theory and Technique in Educational Measurement
Educational Measurement and Evaluation Using Standardized Instruments
Construction and Use of Evaluation Instruments
Item Response Theory
Equating and Scaling of Educational Tests
Scaling Methods
Generalizability Theory
Topics in Educational Measurement and Statistics (Computerized Adaptive Tests)
Statistics
Mathematical Statistics I & II
Correlation and Regression
Experimental Design and Analysis
Introduction to Multivariate Statistics
Factor Analysis and Structural Equation Models
Non-parametric Statistics Methods
Seminar Research on Data Methodology
Computer
Programming with Pascal
Programming with C/C++