Non-redundant text generation is the task of producing a sequence of text without repetition. Redundant text degrades decision-making over the document stream, so it is necessary to generate text that is both non-redundant and novel. Classic supervised learning models have difficulty identifying redundancy and discovering new terms. Supervised models augmented with unlabeled data, i.e., semi-supervised models, are better suited to non-redundant generation with novelty.
Redundancy in text is identified with a semi-supervised learning model, which combines a small labeled dataset for recognizing repetitive text with a large amount of unlabeled data for discovering new text that compensates for the redundancy. Deep semi-supervised learning effectively exploits both labeled and unlabeled data through a deep neural network trained on large-scale datasets. A deep semi-supervised model for non-redundant text generation produces an effective sequence of text and outperforms a purely supervised model.
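As a hedged illustration of this labeled-plus-unlabeled setup, the Python sketch below self-trains a simple redundancy detector: a handful of labeled texts fit an initial decision threshold, and confident predictions on an unlabeled pool are adopted as pseudo-labels. The `redundancy_score` feature, the thresholding rule, and all data are illustrative assumptions, not the model described above.

```python
def redundancy_score(text):
    """Share of repeated tokens in the text (0.0 = all tokens unique)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return 1.0 - len(set(tokens)) / len(tokens)

def fit_threshold(examples):
    """Decision threshold: midpoint between the two class means.
    `examples` is a list of (text, is_redundant) pairs."""
    red = [redundancy_score(t) for t, y in examples if y]
    non = [redundancy_score(t) for t, y in examples if not y]
    return (sum(red) / len(red) + sum(non) / len(non)) / 2.0

def self_train(labeled, unlabeled, margin=0.1, rounds=3):
    """Self-training loop: pseudo-label unlabeled texts whose score is
    far enough from the current threshold, then refit the threshold."""
    data = list(labeled)
    for _ in range(rounds):
        thr = fit_threshold(data)
        remaining = []
        for text in unlabeled:
            score = redundancy_score(text)
            if abs(score - thr) > margin:          # confident prediction
                data.append((text, score > thr))   # adopt as pseudo-label
            else:
                remaining.append(text)             # keep for a later round
        unlabeled = remaining
    return fit_threshold(data)
```

The unlabeled pool shifts the threshold toward the data distribution, mirroring how the large unlabeled corpus compensates for the small labeled set in the semi-supervised setting.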
• In Natural Language Processing (NLP), non-redundant text generation is a challenging task because redundancy influences decision-making over the document stream.
• A semi-supervised learning strategy is developed for text generation under low-resource constraints and addresses the training process of deep learning models.
• With increasing research interest in data-efficient deep learning algorithms, the deep semi-supervised learning paradigm has emerged and possesses wide applications, fused into different systems and learning approaches.
• A semi-supervised learning strategy with a deep neural network is the preferred way to generate text without redundancy.
• Capturing long-term, and especially prevailing, sequential patterns helps produce relevant and non-redundant event sentences.
• Hence, a deep reinforcement learning framework with hierarchical cross-attention, multi-task learning, and a multi-topic dynamic memory network is developed to tackle the relevance and non-redundancy challenges in real-time text summarization.
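Decoding-time n-gram blocking is one standard mechanism for enforcing non-redundancy in generated text; the sketch below illustrates that general technique, not the framework described above. At each step it bans any token that would complete an n-gram already present in the output; the toy `rank_fn` stands in for a trained language model and is an assumption.

```python
def banned_next_tokens(generated, n=3):
    """Tokens that would repeat an n-gram already present in `generated`."""
    if len(generated) < n - 1:
        return set()
    prefix = tuple(generated[-(n - 1):])  # last n-1 tokens of the output
    banned = set()
    for i in range(len(generated) - n + 1):
        # If this earlier (n-1)-gram matches the current prefix, the token
        # that followed it would recreate a full n-gram: ban it.
        if tuple(generated[i:i + n - 1]) == prefix:
            banned.add(generated[i + n - 1])
    return banned

def generate(rank_fn, steps, n=3):
    """Greedy decoding with no-repeat n-gram blocking.

    `rank_fn(context)` returns candidate tokens in preference order;
    here it is a stand-in for a trained language model.
    """
    out = []
    for _ in range(steps):
        banned = banned_next_tokens(out, n)
        for token in rank_fn(out):
            if token not in banned:
                out.append(token)
                break
    return out
```

Even a degenerate ranker that always prefers the same token is forced to diversify: `generate(lambda ctx: ["a", "b", "c", "d"], 10)` yields a sequence in which no trigram occurs twice.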