Pdf expert text recognition

Scene text recognition (STR) involves the task of reading text in cropped images of natural scenes. Conventional models in STR employ a convolutional neural network (CNN) followed by a recurrent neural network in an encoder-decoder framework. In recent times, the transformer architecture has been widely adopted in STR, as it shows strong capability in capturing the long-term dependency that appears to be prominent in scene text images. Many researchers have utilized the transformer as part of a hybrid CNN-transformer encoder, often followed by a transformer decoder. However, such methods only make use of the long-term dependency mid-way through the encoding process. Although the vision transformer (ViT) is able to capture such dependency at an early stage, its utilization remains largely unexploited in STR. This work proposes the use of a transformer-only model as a simple baseline, which outperforms hybrid CNN-transformer models. Furthermore, two key areas for improvement were identified. Firstly, the first decoded character has the lowest prediction accuracy. Secondly, images of different original aspect ratios react differently to the patch resolution, while ViT employs only one fixed patch resolution. To explore these areas, the Pure Transformer with Integrated Experts (PTIE) is proposed. PTIE is a transformer model that can process multiple patch resolutions and decode in both the original and reverse character orders. It is examined on 7 commonly used benchmarks and compared with over 20 state-of-the-art methods. The experimental results show that the proposed method outperforms them and obtains state-of-the-art results on most benchmarks.
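To make the multi-resolution idea concrete, here is a minimal PyTorch sketch of embedding the same image at two patch resolutions before a shared transformer encoder. The class name, layer sizes, and the 8x4 / 4x8 patch shapes are illustrative assumptions, not the paper's actual implementation; positional embeddings and other details are omitted.

```python
# Sketch only: embed one image with two patch resolutions, then encode jointly.
# All sizes and names are assumptions for illustration, not PTIE's real code.
import torch
import torch.nn as nn


def patchify(img: torch.Tensor, ph: int, pw: int) -> torch.Tensor:
    """Split a (B, C, H, W) image into flattened (B, N, C*ph*pw) patches."""
    b, c, h, w = img.shape
    patches = img.unfold(2, ph, ph).unfold(3, pw, pw)  # B, C, H/ph, W/pw, ph, pw
    patches = patches.permute(0, 2, 3, 1, 4, 5).contiguous()
    return patches.view(b, -1, c * ph * pw)


class TwoResolutionEncoder(nn.Module):
    """Project tall (8x4) and wide (4x8) patches of the same image into one
    token sequence and run a shared transformer encoder over it."""

    def __init__(self, d_model: int = 256):
        super().__init__()
        self.proj_tall = nn.Linear(3 * 8 * 4, d_model)
        self.proj_wide = nn.Linear(3 * 4 * 8, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        tok_tall = self.proj_tall(patchify(img, 8, 4))
        tok_wide = self.proj_wide(patchify(img, 4, 8))
        # Positional embeddings omitted for brevity.
        return self.encoder(torch.cat([tok_tall, tok_wide], dim=1))
```

For a 32x128 RGB crop, `patchify(img, 8, 4)` and `patchify(img, 4, 8)` each yield 128 tokens of dimension 96, so the encoder sees both a "tall-patch" and a "wide-patch" view of the same text image.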
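The dual-order decoding can be sketched the same way: decode one hypothesis in reading order and one reversed, then keep the more confident result. The confidence rule below (sum of per-step log max-probabilities) and the function names are assumptions for illustration; the paper's exact way of combining the expert outputs may differ.

```python
# Sketch only: pick between a forward and a reversed decoding hypothesis.
import torch


def choose_reading(logits_fwd: torch.Tensor, logits_bwd: torch.Tensor) -> torch.Tensor:
    """logits_*: (T, vocab) per-step scores from the two decoding orders."""

    def seq_confidence(logits: torch.Tensor) -> torch.Tensor:
        # Sum of log max-probabilities over the sequence (assumed heuristic).
        probs = logits.softmax(dim=-1)
        return probs.max(dim=-1).values.log().sum()

    ids_fwd = logits_fwd.argmax(dim=-1)
    ids_bwd = logits_bwd.argmax(dim=-1).flip(0)  # un-reverse to reading order
    if seq_confidence(logits_fwd) >= seq_confidence(logits_bwd):
        return ids_fwd
    return ids_bwd
```

A reversed-order decoder sees the last character first, so its earliest predictions cover exactly the positions where the forward decoder is weakest, which is why choosing between (or fusing) the two hypotheses can help with the first-character accuracy issue noted above.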