Candidate's Name
SUMMARY

Machine learning applications:
- Generative AI: Anthropic, AI21 Labs, Cohere, Stability AI, and Amazon Titan foundation models. Various indexing techniques and data structures (k-d trees, ball trees, approximate nearest neighbor (ANN) algorithms) and vector search methods (Facebook AI Similarity Search (FAISS) and cosine similarity search).
- Supervised learning: Decision Tree (DT), K-Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM) and Naïve Bayesian (NB) methods
- Deep learning: CNN, U-Net, Faster R-CNN, generation of adaptive mixed-traffic data sets
- Unsupervised learning: clustering of test inputs (Mini-batch k-Means, PCA, k-Means, BIRCH methods)
- Reinforcement learning: value-based, policy-based and actor-critic models

Quantum applications:
- Quantum machine learning: building the convolutional layer in a Quantum CNN (QCNN), Hybrid Classical Quantum (HCQ) model
- Quantum computing: quantum circuit design using the IBM Qiskit SDK, quantum phase estimation algorithms, Shor's Algorithm, RSA-encrypted and Elliptic Curve-encrypted keys
- Post-quantum: NIST 800-53; NIST-recommended algorithms: Kyber Key-Encapsulation Mechanism (KEM) and key exchange algorithms, and Dilithium digital signature

Computer vision applications:
- Edge-assisted AR: object detection, depth and pose estimation, DNN inferencing on cloud, captured RGB data transmission to cloud
- Six Degrees of Freedom (6DoF) Virtual Reality (VR): foreground and background segmentation, disparity and depth estimation, foreground object tracking, relative motion estimation between camera and moving objects, translation/rotation estimation, 6DoF rendering (PCL, mesh)
- Disparity and depth estimation in fisheye video: SGBM, scale-space disparity estimation, Belief Propagation (BP), Graph Cut (GC), disparity-to-depth transform
- 360 stereo video algorithm research and 360 stereo generation using fisheye video frames
- 3D formulations: 2D-to-3D, anaglyph 3D, stereoscopic 3D, auto-stereoscopic 3D
- Feature detection, tracking, shape and motion: Harris, Harris-Laplacian, SIFT, SURF, KLT, Structure from Motion (SfM), Kalman filter, Extended Kalman filter, face recognition, gesture
- Depth and disparity estimation: stereo triangulation, SAD, correlation, similarity, IR laser (Kinect and SoftKinetic), background segmentation, occlusion detection and recovery

Camera applications:
- Fisheye camera calibration: manual and auto calibration
- Research of 3D stereo cameras, sensors and fisheye lenses
- Camera calibration, stereo calibration, stereo rectification

Video coding:
- Virtual Reality video compression (H.264, H.265 and AV1), broadcasting (HLS, jitter, frame rate control), coding noise reduction (blocking, ringing), brightness control, color enhancement
- Object detection and tracking using fisheye video sequences
- Video coding standards: H.266/VVC, AV1, H.265/HEVC, H.264/AVC, MPEGs, VP6, VP8, VP9, VC-1, SVC, MVC, H.263, MPEG-2 TS
- Audio coding standards: AAC, AC3, G.722, G.726, G.728, G.729, MPEG-4 Structured Audio
- Visual quality enhancement and measurement, low/high-pass filter design, coding artifact removal (de-blocking, de-ringing, mosquito noise), edge detection/enhancement, error resilience and concealment, rate control, motion estimation, color processing and enhancement

Algorithm optimizations:
- OpenMP, multicore job distribution, OpenCL implementations with GPUs

Video encoding optimization:
- Ada Lovelace family GPUs and NVIDIA Video Codec SDK 12.1 with H.264, HEVC and AV1 encoding
- Performed split-frame encoding tests using multi-view video inputs and proposed a new split-frame encoding methodology

Multimedia communication applications:
- Video quality evaluation under impairments over DOCSIS and CBRS networks and the Internet
- ALVR 3D video streaming: optimized the encoder on the server, proposed a rate control algorithm
- Bandwidth reduction algorithms: filter-based super-resolution algorithms (bicubic, nearest neighbor, Lanczos), deep learning based super resolution (DLSS) approaches and Digital Harmonic Keyframe
- Video conferencing, chatting, streaming, broadcasting, transcoding, IPTV
- Multimedia communication networks and real-time protocols: IP networks, DOCSIS, CBRS, wireless networks, PacketCable networks, broadband networks, RTP, RTCP, SIP, H.323, HLS, SOAP, REST

EDUCATION

Ph.D. in Electrical and Computer Engineering, Illinois Institute of Technology
M.S. in Electrical and Computer Engineering, Illinois Institute of Technology
M.S. in Electrical Engineering, University of California, Los Angeles
B.S. in Electrical Engineering, Seoul National University, Seoul, Korea

TECHNICAL SKILLS

Software skills: Python, C, C++, Visual Basic, Visual C++, Java, JavaScript, XML, OOD, OOP, SSE SIMD intrinsics, TI DSP SIMD, Code Composer Studio, Visual Studio, MATLAB, ClearCase, assemblers
Open source: OpenCV, OpenGL, OpenCL, OpenNI, OpenMP, OpenMAX, OpenVX, OpenVINO
Operating systems: Linux, Unix, Windows, TI DSP/BIOS, Android, VxWorks, pSOS, VMware

PROFESSIONAL EXPERIENCE

Charter Communications, Greenwood Village, CO    May 2019 - July 2024
Principal Engineer

Worked as a principal engineer on the research and development of generative AI, deep learning, supervised learning, unsupervised learning, reinforcement learning, quantum machine learning, quantum computing, post-quantum cryptography, real-time video communications, ALVR 3D video streaming, edge-assisted AR, bandwidth reduction algorithms, and video quality measurements over DOCSIS, CBRS and Charter production networks.

Generative AI: using Amazon Bedrock via APIs, we tested a wide range of foundation models (FMs) to find the model best suited for our use case. FMs from Anthropic, AI21 Labs, Cohere, Stability AI, and Amazon Titan were used.
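The cosine-similarity search named here can be sketched in plain NumPy (FAISS accelerates exactly this kind of query over large indexes; the embeddings below are purely illustrative):

```python
import numpy as np

def cosine_top_k(query, index, k=2):
    """Return indices of the k index vectors most similar to query."""
    q = query / np.linalg.norm(query)
    v = index / np.linalg.norm(index, axis=1, keepdims=True)
    sims = v @ q                      # cosine similarity of each row vs query
    return np.argsort(-sims)[:k]      # indices sorted by descending similarity

# toy "index" of four 3-d embeddings (illustrative only)
index = np.array([[1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.05, 0.0])
print(cosine_top_k(query, index))     # -> [0 1]
```

Normalizing the vectors first turns an inner-product search into a cosine-similarity search, which is also how cosine metrics are typically handled on top of inner-product indexes.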
Various indexing techniques and data structures, such as k-d trees, ball trees, approximate nearest neighbor (ANN) algorithms, Facebook AI Similarity Search (FAISS) and cosine similarity search, were tested to speed up the search process.

Deep learning: to achieve low-latency processing of time-sensitive traffic by classifying incoming traffic types (e.g., to low-latency DOCSIS or to normal DOCSIS), CNN and U-Net models were built and tested. For training, testing and validation, adaptive mixed-traffic data sets of seven traffic types (video conference, video stream, online game, cloud gaming, real-time stream, file download, file upload) were generated, with the mixing rates of the traffic types configurable through YAML. Loss comparisons between training and validation, accuracies, MeanIoU, the confusion matrix and the full classification report were used to evaluate U-Net model performance.

Unsupervised learning: Internet traffic was collected from end users and marked with application labels using a localized operational packet-level classification. Found a trade-off between clustering accuracy and processing time, and found the Mini-batch k-Means method to be the best among several clustering methods (PCA, k-Means, BIRCH).

Supervised learning: tested Decision Tree (DT), K-Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM) and Naïve Bayesian (NB) methods. Based on comparisons of classification accuracy and processing time, Decision Tree was selected to construct the hybrid (clustering + supervised learning) classification model.

Reinforcement learning (RL): performed RL application research in the physical, networking and application layers of 5G-Advanced and 6G wireless networks.
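The value-based family can be illustrated with tabular Q-learning on a toy chain environment (a hedged sketch: the environment, rewards and hyperparameters below are invented for illustration and are unrelated to the wireless use case):

```python
import numpy as np

# Tiny 5-state chain: the agent starts at state 0 and is rewarded on
# reaching state 4.  Entirely illustrative toy environment.
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))   # tabular action-value function
alpha, gamma = 0.5, 0.9
rng = np.random.default_rng(0)

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for _ in range(2000):                 # episodes with a random behaviour policy
    s = 0
    for _ in range(30):
        a = int(rng.integers(n_actions))   # Q-learning is off-policy
        s2, r = step(s, a)
        # value-based update: bootstrap from the best next-state value
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s == n_states - 1:
            break

print(np.argmax(Q, axis=1))           # greedy policy read off the learned Q
```

Policy-based and actor-critic methods differ in that they parameterize the policy directly (with the critic supplying a learned value estimate in the actor-critic case) instead of reading the policy off a value table like this.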
Advantages and disadvantages of the value-based, policy-based and actor-critic models of RL were reviewed for applications in all three layers.

Quantum machine learning (QML): to address the limitations of machine learning on classical computers, implemented and tested QML applications such as traffic classification. All unitary operations on qubits are formulated within Hilbert space as compositions of a set of gates consisting of all one-qubit quantum gates. To build the convolutional layer in the QCNN, a two-qubit unitary is applied to neighboring qubits. Convolutional layers of eight qubits and a QCNN training circuit for eight mixed-traffic data sets were proposed. The Hybrid Classical Quantum (HCQ) model was proposed to classify eight traffic types using eight-qubit unitary operations in quantum computing with preprocessing in classical computing. In the HCQ preprocessing, a configurable random traffic data mixing method is presented. The HCQ model is described in an SCTE 2024 paper.

Quantum computing: quantum phase estimation algorithms (Kitaev's algorithm, Iterative Quantum Phase Estimation (IQPE), Statistical Quantum Phase Estimation (SQPE), and phase estimation based on the inverse QFT (IQFT)) for Shor's Algorithm were implemented and tested to find the best quantum phase estimation algorithm for breaking RSA and Elliptic Curve encryption keys. Performed several quantum circuit designs using the IBM Qiskit SDK.

Post-quantum: the Kyber Key-Encapsulation Mechanism (KEM) and key exchange algorithms were reviewed, and the NIST-released Python code was built and tested using the Known Answer Test (KAT) files recommended by NIST. Key generation, encapsulation and decapsulation, including Module Learning With Errors (MLWE) with lattice methods and the tweaked Fujisaki-Okamoto (FO) transform, were also tested. The post-quantum Dilithium digital signature was also built and tested.
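Kyber and Dilithium both build on the number-theoretic transform for fast polynomial arithmetic; a toy cyclic NTT over Z_17 can be sketched as follows (the real schemes use a negacyclic variant with q = 3329 or q = 8380417 and n = 256, so this is only the underlying idea):

```python
# Toy number-theoretic transform (NTT) over Z_17 with n = 8.
Q, N = 17, 8
W = pow(3, 2, Q)          # 3 is a primitive root mod 17, so 3^2 has order 8

def ntt(a, w=W):
    """Naive O(n^2) forward transform: X[k] = sum_j a[j] * w^(j*k) mod Q."""
    return [sum(a[j] * pow(w, j * k, Q) for j in range(N)) % Q
            for k in range(N)]

def intt(x):
    """Inverse transform: use w^-1 and scale by N^-1 mod Q."""
    n_inv = pow(N, -1, Q)             # 8^-1 mod 17
    w_inv = pow(W, -1, Q)
    return [(n_inv * v) % Q for v in ntt(x, w_inv)]

coeffs = [1, 2, 3, 4, 0, 0, 0, 0]     # toy polynomial coefficients
assert intt(ntt(coeffs)) == coeffs    # round trip recovers the input
```

Pointwise multiplication in the transform domain then gives polynomial convolution in O(n log n) when a butterfly NTT replaces this naive version, which is what makes the lattice arithmetic in these schemes fast.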
The core algorithms of Dilithium, such as the Number Theoretic Transform (NTT), Montgomery reduction and the Chinese Remainder Theorem (CRT), were reviewed and tested.

Edge-assisted AR: the captured video consists of raw RGB frames, so transmitting raw captured frames to the edge server in real time is impossible over current networks. Proposed a new video data compression methodology to handle real-time video transmission, and a deep neural network (DNN) approach to detect objects in the edge server. Due to the transmission latencies (upstream and downstream) and the processing latency in the edge server, detected objects can be mismatched during rendering on the mobile device; proposed a fast-tracking method to compensate for these latencies.

Real-time 3D video communications: researched and verified hardware-based encoding for real-time video encoding on NVIDIA GPUs. Compared Ampere family GPUs (H.264 and HEVC) with Ada Lovelace family GPUs (H.264, HEVC and AV1) and found that the coding efficiency of AV1 is much better than HEVC or H.264. The NVIDIA video codec's explicit split-frame encoding is very useful for real-time 3D video encoding of the left- and right-view frames. Proposed a 3D video frame split in which each part is processed in parallel by multiple NVENCs on the chip, resulting in a significant speedup compared to sequential encoding.

ALVR 3D video streaming: analyzed the total, transport and decode latencies of the ALVR platform and reviewed the causes of outstanding latency issues. Optimized the encoder on the server (improved encoding time by 30%). Proposed a rate control algorithm to reduce the fluctuation of encoding time and reduce jitter. Reviewed the over-rendering algorithm and proposed a generalized FOV estimation formula.

Bandwidth reduction algorithms: Digital Harmonic Keyframe was evaluated for bandwidth savings (i.e., 50%) using objective video quality comparisons of non-Keyframe and Keyframe versions of Charter live streams.
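Objective comparisons of this kind rest on metrics such as PSNR (alongside VMAF and SSIM); a minimal PSNR for 8-bit frames, with illustrative values:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two 8-bit frames, in dB."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# toy 4x4 luma frames: identical except for one corrupted pixel
ref = np.zeros((4, 4), dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 16
print(round(psnr(ref, noisy), 2))     # PSNR in dB for the corrupted frame
```

Identical frames give infinite PSNR; in practice the metric is averaged per frame over the Y (and sometimes U/V) planes of the decoded stream against the reference.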
The saved bandwidth (6:1 QAM ratio vs. 12:1 QAM ratio) can be used for other Internet services. The VMAF estimation process includes de-interlacing for interlaced video, resolution and frame rate transcoding, and reference frame synchronization between the non-Keyframe and Keyframe streams. Both filter-based super-resolution algorithms (bicubic, nearest neighbor, Lanczos) and deep learning based super resolution (DLSS) approaches were tested and compared: the DLSS approaches produce better PSNR, but the filter approaches produce better SSIM.

Video quality tests under impairments: the objective of the Super Bowl test was to find the correlation between STVA Mean Opinion Score (MOS) and Domos Quality of Outcome (QoO). Super Bowl live streams were transmitted from the New York STVA server to the CTECH CMTS, and from the CTECH CMTS to the FG-1 lab. Four impairment simulation scenarios (latency only; latency and jitter; packet loss; constant latency and jitter) were run during the entire Super Bowl game, and STVA MOS scores were compared with QoO scores. Found that the STVA scores in the latency-only, latency-and-jitter, and constant-latency-and-jitter scenarios correlate with the Domos scores; during the packet loss simulations, however, STVA MOS scores were uncorrelated with QoO scores.

Video encoding optimization: Ada Lovelace family GPUs and NVIDIA Video Codec SDK 12.1 were used to optimize H.264, HEVC and AV1 encoding. Performed AV1 and HEVC codec performance comparisons with NVIDIA GPU-based encoding; the benchmark performance results were H.264 (1.00x), H.265 (1.18x) and AV1 (1.42x). Performed split-frame encoding tests using multi-view video inputs and proposed a new split-frame encoding methodology.

ForeSightSports, San Diego, CA    February 2019 - March 2019
Sr. Software Architect

Short project: worked as a senior software architect building algorithms for detection, matching and tracking of moving objects in sports-event applications.

NextVR, Newport Beach, CA    May 2015 - January 2019
Sr. Architect

Worked as a senior architect building algorithms for 3D Virtual Reality multimedia services and implementing real-time 3D stereo broadcasting of NBA, golf, NFL, WWE, Oculus Venue, NASCAR, Kentucky horse racing, Youth Winter Olympic Games, and many concert events. 3D stereo 360 stitching algorithms were researched and implemented using fisheye left- and right-view frames.

Six Degrees of Freedom (6DoF) in Virtual Reality: the proposed processes are foreground and background segmentation, disparity and depth estimation, foreground object tracking, translation and rotation estimation, relative motion estimation between camera and moving objects, and 6DoF blending.

Algorithm optimizations: OpenMP, multicore job distribution, OpenCL implementations with GPUs.

Disparity and depth estimation in fisheye video: SGBM, scale-space disparity estimation, Belief Propagation (BP), Graph Cut (GC), disparity-to-depth transform.

Translation and rotation estimation for Augmented Reality (AR) and 6DoF: tested algorithms include Direct Linear Transformation (DLT), Perspective-n-Point (PnP), Efficient Perspective-n-Point (EPnP), Uncalibrated PnP (UPnP) and a quaternion approach.

Object detection and matching: AKAZE, ORB and BRISK were reviewed and compared using 3D NBA game fisheye video sequences; AKAZE and ORB gave similar results.

Object tracking: the Kernelized Correlation Filter (KCF) tracker, Discriminative Correlation Filter (DCF) tracker and MOSSE tracker were reviewed and tested. DCF-CSRT generates higher object tracking accuracy but slower FPS throughput; KCF produces faster FPS throughput but slightly lower object tracking accuracy.
MOSSE is not as accurate as CSRT or KCF, but it is very fast.

3D stereo 360 stitching: four fisheye stereo pairs are used, and the related algorithms are feature detection (ORB, SURF, SIFT), feature matching, camera intrinsic parameter estimation, bundle adjustment, wave correction, image warping, exposure compensation, seam finding, multi-band blending, and 360 visual quality enhancement. The most challenging issue is that seam locations change dynamically with moving objects; to resolve this, a new stitching algorithm was proposed and demonstrated.

Fisheye camera calibration: manual calibration and auto calibration.

Fisheye rectification and correction: for the 360 stitching, a spherical projection algorithm was proposed. Using fisheye ray mapping functions (equidistance, equisolid angle, stereographic), fisheye correction algorithms were proposed and implemented.

3D Virtual Reality video capturing, compression and broadcasting:
- Video capturing
  - Cameras: RED, F-55, Talos and Blackmagic
  - Canon, Entaniya and iZugar fisheye lenses; plenoptics
  - Camera distortions: radial distortion, tangential distortion
- Compression
  - H.264 and AAC were used for 3D stereo broadcasting; H.265, AV1 and MP3 were tested
  - Coding parameter settings for many different 3D stereo broadcasting services; H.265 tile-based (object-based) 3D stereo broadcasting was tested
  - Motion estimation algorithms in the fisheye boundary areas
  - Bit rate control and high frame rates in 4K and 8K video
  - Audio and video synchronization
- Video quality enhancement filters
  - Coding noise reduction: blocking in the dark areas of concert events and the Youth Olympics
  - Brightness control
  - Color enhancement

EO Technics, Schaumburg, IL    May 2014 - 2015
Consultant

Worked as a consultant proposing business models related to IR laser depth camera applications and an architecture for object detection and recognition.

Proposed business models:
- 3D formulations for auto-stereoscopic (glasses-free) 3D
- Computer vision control applications (gesture)
- Interactive digital signage
- Computational photography
- Object detection and recognition

SIFT-based feature detection is used to find distinctive keypoints that are invariant to location, scale and rotation, and robust to affine transformations and illumination changes. The best match for each keypoint is found by identifying its nearest neighbor in the database of keypoints from training images; feature matching selects the keypoint with the minimum Euclidean distance between invariant descriptor vectors (Best-Bin-First algorithm). To increase the robustness of object identification, the generalized Hough transform using an R-table is applied: it identifies clusters of features that vote for the same object pose, each keypoint voting for the object poses consistent with its location, scale and orientation. For pose determination, a least-squares estimate of the affine projection parameters relating the training image to the input image is applied, and Bayesian analysis decides whether a matching object is accepted or rejected.

ooVoo LLC, New York, NY    December 2012 - April 2014
Senior Technology Specialist

Primary responsibilities were the research and development of assigned projects related to foreground and background segmentation using depth information, face recognition algorithm formulation, filter design for Android and iOS, anaglyph 3D filter design, and error-resilient algorithm design for ooVoo video communication using the RaptorQ forward error correction scheme.
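In its simplest form, depth-based foreground/background segmentation thresholds the depth map; the values below are hypothetical (the actual work drew the depth stream from the Intel Perceptual SDK):

```python
import numpy as np

def depth_segment(depth_mm, fg_max_mm=1200):
    """Boolean foreground mask: pixels closer than fg_max_mm to the camera."""
    return depth_mm < fg_max_mm

# toy 3x4 depth map in millimetres: a near subject against a far wall
depth = np.array([[3000, 3000,  800, 3000],
                  [3000,  900,  850, 3000],
                  [3000,  950,  900, 3000]])
mask = depth_segment(depth)
print(mask.astype(int))
```

A real pipeline would clean this mask up (morphology, temporal smoothing, edge refinement against the RGB frame), which is exactly where the moving-edge errors described below come from.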
In addition, performed a manager role developing the visual quality measurement algorithms with the Poly Tech research team.

Foreground and background segmentation algorithm design: the main goal of this project was to segment foreground and background in video frames for ooVoo video communication. Many existing foreground/background segmentation algorithms, such as the OpenCV, Apple and Google releases, generate many foreground segmentation errors at moving edges or strong background edges. To overcome these errors, depth-based segmentation algorithms using the Intel Perceptual SDK were proposed.

Object detection and tracking algorithm design: Harris-Laplacian, SIFT and SURF were compared in scale, rotation and affine invariance tests; the Hessian-based approaches (SIFT and SURF) were better than Harris-Laplacian. For keypoint tracking, KLT was tested first, but it has limitations such as requiring brightness constancy. The Kalman filter was then reviewed; the Extended Kalman filter outperformed the linear Kalman filter and KLT in occluded and non-linear-movement areas.

DirectShow filter design for the background replacement application: the ooVoo Windows application uses a DirectShow filter graph for multimedia processing. To integrate the background replacement algorithm and the Intel PCSDK into the ooVoo Windows application, the following DirectShow work was implemented: modification of the ooVoo DirectShow filter graph for the Intel PCSDK's depth and video inputs, CSource filter design in CBase filters, constructor design, query interface design, buffer size decision, fill-buffer design and IAMStreamConfig interface design.

Face recognition algorithm research: the ooVoo face recognition system requires both real-time operation and high face recognition accuracy.
However, the major challenges of face recognition, such as facial expression, pose variations, illumination variations and occluding objects, must be resolved to achieve both real-time operation and high accuracy. Researched both image-based and video-based face recognition algorithms: an adaptive Elastic Bunch Graph Matching (EBGM) algorithm was proposed for image-based face recognition, and an adaptive dictionary-based algorithm was proposed for video inputs. All related review documents and new proposals were published. Testing reference code included the Intel PCSDK, OpenCV and other open-source code.

Android multimedia design with OpenMAX-IL, GStreamer and Stagefright: Android user apps use the Application Framework's Java classes, which provide intuitive interfaces for manipulating different types of media; manipulating more varied media content, however, requires more complicated framework components. The advantages of GStreamer and OpenMAX-IL implementations in the Android multimedia framework have already been proven by many vendors. Thus, researched and tested the JNI interfaces, IPC calls, the Stagefright media player components that communicate with OMX via IPC invocations, gst-openmax, RPC DSP communications and Snapdragon SoCs.

Adaptive filter designs on Android and iOS: the ooVoo video communication applications support Google Android and Apple iOS. To produce six different presentations of the captured video, the corresponding filter algorithms were created and implemented in the ooVoo applications on Android smartphones (Samsung Galaxy, HTC, etc.) and Apple iPhones.

Anaglyph 3D filter designs for Windows and Android applications: an anaglyph 3D image is obtained from two stereoscopic views, a left view and a right view. The main challenge in anaglyph 3D filter implementation is reducing the three major degradations: ghosting, retinal rivalry and wrong colors.
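The basic red-cyan merge underlying such a filter takes the red channel from the left view and the green/blue channels from the right; the ghosting, retinal-rivalry and color corrections sit on top of this (toy frames below):

```python
import numpy as np

def anaglyph(left_rgb, right_rgb):
    """Basic red-cyan anaglyph: R from the left view, G and B from the right."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]     # channel 0 = red
    return out

# toy 2x2 24-bit RGB frames (illustrative flat colors)
left  = np.full((2, 2, 3), (200,  10,  10), dtype=np.uint8)   # reddish left view
right = np.full((2, 2, 3), ( 10, 150, 220), dtype=np.uint8)   # cyan-ish right view
merged = anaglyph(left, right)
print(merged[0, 0])                    # -> [200 150 220]
```

Production anaglyph filters replace this per-channel copy with a 3x3 color matrix per eye (e.g. Dubois-style matrices) precisely to tame the wrong-color and ghosting artifacts named above.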
The input data formats of the anaglyph 3D filter vary with the actual application: the left-view and right-view inputs may come from cameras, from file formats such as AVI or MP4, or from YUV sources, but 24-bit RGB data is used for the color separation and merging algorithms of the anaglyph 3D filter.

Error-resilient algorithm design for ooVoo video communication: ooVoo communication systems provide continuous multimedia services over IP and wireless networks. Wireless devices such as Apple iOS and Android phones in particular suffered serious packet losses in certain environments, yet end users do not want any disconnection under any circumstances. Thus, the ooVoo system added retransmission and forward error correction, such as Qualcomm's RaptorQ. My role was to review RaptorQ and identify the expected implementation issues over the existing ooVoo communication systems.

Quartics, Irvine, CA    November 2009 - November 2012
Principal Engineer

Primary responsibilities were the research and development of assigned projects related to Quartics multimedia applications such as multiview rendering, 3D depth extraction, disparity estimation, multiview coding, streaming analysis, post-processing, rate control, video conferencing, H.265, H.264, scalable video coding, WMV9 and VC-1. Also responsible for resolving customer issues: power reduction for ACER, rate control for video conference customers and post-processing for Smart Cable. OpenCV, OpenGL, OpenMP and OpenCL were used for 3D, depth extraction, multiview rendering and camera calibration. Software development for most projects used C/C++ and assembler on Unix, Linux and Windows platforms.

Multiview rendering: performed for glasses-free 3D TV (autostereoscopic) and digital signage applications; nine views are constructed from a single 2D frame.
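The SAD-based disparity estimation used across these projects can be sketched as 1-D block matching along a scanline (synthetic strips; real pipelines match 2-D blocks with sub-pixel refinement and occlusion handling):

```python
import numpy as np

def sad_disparity(left, right, x, block=3, max_d=4):
    """Disparity at column x of a scanline pair by SAD block matching."""
    ref = left[x : x + block]
    costs = [np.abs(ref - right[x - d : x - d + block]).sum()
             for d in range(max_d + 1)]
    return int(np.argmin(costs))       # disparity with the lowest SAD cost

# synthetic scanlines: the pattern in `left` appears 2 pixels left in `right`
left  = np.array([0, 0, 0, 0, 9, 7, 5, 0, 0, 0], dtype=np.int32)
right = np.array([0, 0, 9, 7, 5, 0, 0, 0, 0, 0], dtype=np.int32)
print(sad_disparity(left, right, x=4))  # -> 2
```

The correlation- and similarity-based variants mentioned in this résumé swap the SAD cost for a normalized-correlation or similarity score while keeping the same search loop.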
Proposed multiview forward and backward rendering algorithms and implemented C/C++ code for disparity estimation and for forward and backward rendering. A 2D-to-3D conversion algorithm (the DDD algorithm) was adopted to generate the left and right views (the 3D stereo pair). To achieve real-time multiview rendering, the disparity estimation and the forward and backward renderings were optimized; based on the real-time analysis, the optimized nine-view rendering ran in real time on a 400 MHz Quartics DSP.

3D depth extraction:
- Camera calibration: homography between the model plane and its image, intrinsic and extrinsic camera parameter estimation, and a maximum likelihood estimator in the least-squares sense
- Depth estimation using stereo triangulation: stereo calibration, stereo rectification, disparity estimation with occlusion detection and recovery, and depth estimation from disparity and camera parameters
- Real-time depth estimation: the Microsoft Kinect depth sensor with pseudorandom pattern analysis and the Quartics depth sensor with adaptive infrared speckle analysis
- Non-real-time depth estimation with structured light: Gray-code illumination, phase-shift illumination and hybrid illumination

Disparity estimation: three algorithms were implemented and compared: SAD-based, normalized-correlation-based and similarity-based disparity estimation.

Occlusion detection and recovery: occluded areas are detected from mismatches between the forward-estimated and backward-estimated disparity. Two recovery algorithms for occluded areas were proposed; both use the direction, magnitude and slope of the disparity.

Multiview coding:
Multiview encoder: researched the MVC prediction structure, random access analysis, motion estimation with disparity compensation, MVC coding profiles, MVC coding tools, ARF filter design, rate control, etc.
Wrote the MVC overview document, "Overview of Multi-View Coding", and reviewed JM 17.2 (the MVC extension of the JM code base) and the JMVC code.

Multiview decoder: reviewed 3D frame interleaving, view-parallel processing, management of MVC reference lists, adaptive reference lists, etc. Wrote the MVC decoder analysis document, "Decoder Design of H.264 Multi-View Coding Extension".

2D video-to-3D conversion: studied the DDD 2D-to-3D conversion algorithm and tested the DDD C code. Reviewed the video quality and corrected the disparity calculation formula in the QVU 2D-to-3D conversion. Also reviewed the DDD depth estimation and its depth-to-disparity conversion formula, and compared DDD depth-to-disparity with 3D depth-to-disparity.

Streaming analysis: identified and applied key quality metrics for streaming video (Netflix, Hulu, YouTube, etc.) to measure and estimate current and future deficiencies for customer education, competitive evaluation, post-processing evaluation, etc. Objective and subjective evaluations of QVU post-processing for improving the streaming video experience were performed using these metrics. The following objective-quality tools were implemented: edginess distortion measurements of Y, U and V (ITU-T J.247 Annex B), correlation measurements (ITU-T J.247 Annex C), VQM measurements based on blocking and blurriness (ITU-T J.247 Annex D), similarity (SSIM) measurements based on the MSU algorithm, ITS VQM, MSU VQM, and the cumulative probability of blur detection (CPBD) metric.

Post-processing for the Quartics Smart Cable project: the α and β tables of the H.264 in-loop filter work well for defining block boundary strength, but using these tables requires the QP value, and Smart Cable, unlike an H.264 encoder or decoder, does not have QP information. Thus, two new QP estimation algorithms using the macroblocks of each blocky frame were proposed; both algorithms were implemented and tested on Smart Cable.
Verified the new QP estimator by comparing PSNR and visual quality against the JM 16.2 code base. Wrote a design document covering the algorithms, test results and verification of the QP estimator.

Research of scalable video coding (SVC): researched the scalabilities (temporal, spatial and quality), profiles and levels, the SVC bit stream format, and SVC rate control. Performed benchmark coding-efficiency comparison tests using the JSVM and JM code bases; the inter-layer predictions of intra, motion and residual were studied rigorously. Led the India office engineers working on the feasibility of an SVC implementation on QVU, and wrote an SVC analysis document, "Analysis of Scalable Video Coding Implementation on QVU". In addition, researched motion partitioning, motion vector estimation algorithms, full-pixel and sub-pixel motion search algorithms, motion vector prediction algorithms and inter-layer motion vector prediction, and wrote a motion estimation document, "Motion Estimation Algorithm in JSVM (SVC)".

Quartics Video Unit (QVU) video conferencing: implemented full-duplex 720p video conferencing. Tested and reviewed the visual quality in CBR mode, and proposed a new rate control to enhance visual quality within the bit rate budget. Compared against Rebecca's video conference. Performed an intensive VC-1 QVU reference code review and tests to find the feasibility of VC-1 (QVU
