
ICDAR WML 2019

2nd International Workshop on Machine Learning

Venue

Room no: CB11.04.400

University of Technology Sydney

PO Box 123

Broadway NSW 2007

Sydney, Australia

Scope and Motivation

Since 2010, when the annual ImageNet competition was launched and research teams began submitting programs that classify and detect objects, machine learning has gained significant popularity. Today, machine learning, and deep learning in particular, is remarkably effective at making predictions from the large amounts of data now available. It has many applications in computer vision and pattern recognition, including document analysis and medical image analysis. To facilitate innovative collaboration and engagement between the document analysis community and related research communities, such as computer vision and image analysis, we are organizing this machine learning workshop to be held before the ICDAR conference.

The topics of interest of this workshop include, but are not limited to:


Programme



Keynote Speaker




Prof. Yi Yang, Faculty of Engineering and Information Technology,
University of Technology Sydney (UTS), Australia.
https://www.uts.edu.au/staff/yi.yang

Title: Deep neural networks for large-scale video classification and localization

Abstract: Our work focuses on designing deep neural networks that give agents a human-like visual understanding of the world. To leverage temporal dynamics in video for complex action recognition, we studied 3D convolutional neural networks for more efficient video classification. We also worked on interactions between humans and objects in constrained environments, e.g., kitchens, where the videos are egocentric and contain subtle motion changes. Beyond recognition, we worked on accurate temporal localization in video for large-scale clip-level retrieval, which has many real-world applications, e.g., online video search systems. A more challenging task is predicting the future from past frames, which requires more sophisticated reasoning and intelligence. To accurately predict future frames, we designed CubicLSTM for better spatio-temporal modelling. Our work significantly improves the state of the art on many real-world datasets.
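To make the idea of "3D convolutions for video classification" concrete, the sketch below shows a minimal, purely illustrative 3D-CNN classifier in PyTorch. It is not the keynote speaker's model: the class name Tiny3DCNN, the layer sizes, and the class count are hypothetical placeholders; it only demonstrates how a 3D kernel slides over frames as well as pixels so that temporal dynamics enter the representation.

```python
# Minimal, illustrative 3D-CNN video classifier (PyTorch).
# NOT the keynote speaker's model; sizes and class count are arbitrary.
import torch
import torch.nn as nn


class Tiny3DCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Conv3d convolves over time as well as space, which is how
            # temporal dynamics are captured for action recognition.
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),      # halve frames, height, width
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),          # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, frames, height, width)
        x = self.features(clip)
        return self.classifier(x.flatten(1))


if __name__ == "__main__":
    model = Tiny3DCNN(num_classes=10)
    dummy_clip = torch.randn(2, 3, 16, 112, 112)  # 2 clips of 16 RGB frames
    print(model(dummy_clip).shape)                # torch.Size([2, 10])
```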

Organizing Committee


General Chairs:


Program Chairs:


Organizing Chair:


Program Committee:

List of Camera-Ready Papers


Haoran Liu and Anna Zhu, "Synthesizing Scene Text Images for Recognition with Style Transfer"
Zongyi Liu, "A Deep Neural Network to Detect Keyboard Regions and Recognize Isolated Characters"
Changjie Wu, Zi-Rui Wang, Jun Du, Jianshu Zhang and Jiaming Wang, "Joint Spatial and Radical Analysis Network For Distorted Chinese Character Recognition"
Qingqing Wang, Wenjing Jia, Sean He, Yue Lu, Michael Blumenstein, Ye Huang and Shujing Lyu, "ReELFA: A Scene Text Recognizer with Encoded Location and Focused Attention"
Christopher Tensmeyer and Tony Martinez, "Robust Keypoint Regression"
Hiroki Tanioka, "A Fast Content-Based Image Retrieval Method Using Deep Visual Features"
Jonathan Chung and Thomas Delteil, "A Computationally Efficient Pipeline Approach to Full Page Offline Handwriting Text Recognition"
Hongjian Zhan, Yue Lu and Umapada Pal, "CNN-based Hindi Numeral String Recognition for Indian Postal Automation"
Eman Eman, Syed Saqib Bukhari and Andreas Dengel, "Cursive Script Textline Image Transformation for Improving OCR Accuracy"
Manuel Carbonell, Joan Mas, Mauricio Villegas, Alicia Fornés and Josep Llados, "End-to-End Handwritten Text Detection and Transcription in Full Pages"
Rajkumar Saini, Pradeep Kumar, Shweta Patidar, Partha Roy and Marcus Liwicki, "Trilingual 3D Script Identification and Recognition using Leap Motion Sensor"
Romain Karpinski and Abdel Belaid, "Semi-supervised learning through adversary networks for baseline detection"
Chandra Sekhar, Anoushka Doctor, Prerana Mukherjee and Viswanath Pulabaigari, "A Light weight and Hybrid Deep Learning Model based Online Signature Verification"
Denis Coquenet, Yann Soullard, Clément Chatelain and Thierry Paquet, "Have convolutions already made recurrence obsolete for unconstrained handwritten text recognition?"
Qi Song, Rui Zhang, Yongsheng Zhou, Qianyi Jiang, Xi Liu, Haozong Wang and Dong Wang, "Reading Chinese Scene Text with Arbitrary Arrangement based on Character Spotting"
Ishani Joshi, Purvi Koringa and Suman Mitra, "Word embeddings in Low Resource Gujarati Language"
Marco Wrzalik and Dirk Krechel, "Balanced Word Clusters for Interpretable Document Representation"
Martin Holeček, Antonin Hoskovec, Petr Baudis and Pavel Klinger, "Table understanding in structured documents"
Khurram Azeem Hashmi, Rakshith Bymana Ponnappa, Saqib Bukhari and Andreas Dengel, "Feedback Learning: Automating the Process of Correcting and Completing the Extracted Information"
Divya Srivastava and Gaurav Harit, "Associating field components in heterogeneous handwritten form images using Graph Autoencoder"
Lina Zheng, Ting Zhang and Xinguo Yu, "Recognition of Handwritten Chemical Organic Ring Structure Symbols Using Convolutional Neural Networks"
Viviana Beltrán, Nicholas Journet, Mickaël Coustaty and Antoine Doucet, "Semantic Text Recognition via Visual Question Answering"
Vinay Pondenkandath, Michele Alberti, Michael Diatta, Rolf Ingold and Marcus Liwicki, "Historical Document Synthesis With Generative Adversarial Networks"
Xianbiao Qi, Yihao Chen, Rong Xiao, Chun-Guang Li, Qin Zou and Shuguang Cui, "A Novel Joint Character Categorization and Localization Approach for Character-Level Scene Text Recognition"
Vinodh Kumar Ravindranath, Devashish Deshpande, K Venkata Vijay Girish, Darshan Patel, Neel Jambhekar and Vikash Singh, "Inferring structure and meaning of semi-structured documents by using a Gibbs sampling based approach"
Chetan Ralekar, Shubham Choudhary, Tapan Gandhi and Santanu Chaudhury, "Intelligent Identification of Ornamental Devanagari Characters Inspired by Visual Fixations"
Mohammad Mohsin Reza, Syed Saqib Bukhari and Andreas Dengel, "Table Localization and Segmentation using GAN and CNN"
Amandus Krantz and Florian Westphal, "Cluster-based Sample Selection for Document Image Binarization"
Chun-Chieh Chang, Ashish Arora, Leibny Paola Garcia Perera, David Etter, Daniel Povey and Sanjeev Khudanpur, "Optical Character Recognition with Chinese and Korean Character Decomposition"

Paper Submission

Paper Submission Instruction

ICDAR-WML 2019 will follow a single-blind review process.
Authors may include their names and affiliations in the manuscript.

Paper Format and Length

Papers should be formatted using the style files and instructions provided in the IEEE paper formatting template. Accepted papers will be allocated 6 pages in the proceedings, with the option of purchasing up to 2 extra pages for AUD 100 per page, payable after paper acceptance and at the time of registration. The length of the submitted manuscript should match that intended for final publication; therefore, if you are unwilling or unable to pay the extra charge, limit your paper to 6 pages. Otherwise, the page limit is 8 pages.

Camera-Ready Submission

All camera-ready submissions and IEEE copyright forms will be handled electronically
via the CPS website (https://ieeecps.org/#!/auth/login?ak=1&pid=6I1Fdb6d1mA5MeDNOVruyL).

The due date of camera-ready submission is extended to Aug. 7th, 2019.

Please note the following when submitting your paper.

1) During the submission process, the system will ask you to enter a paper ID. Please enter the workshop name abbreviation followed by the paper ID from your initial submission in the form.

2) If you have already submitted your paper via the submission page for the main conference, which opened earlier, please resubmit your paper using this link.


Contact

For any other information, you may contact the ICDAR WML 2019 Secretary by email at icdarwml@gmail.com
or the ICDAR WML 2019 Chair by email at umapada_pal@yahoo.com