A word embedding is a learned representation for text in which words with similar meaning have a similar representation. In deep learning, word embedding methods compute distributed representations of words, also known as word embeddings, in the form of continuous vectors. Word representation methods based on the distributional hypothesis fall into three main categories: matrix-based distributed representation, cluster-based distributed representation, and neural-network-based distributed representation. Each word in a text document is mapped to one vector, and the vector values are learned in a way that resembles training a neural network.

Embedding Layer - A word embedding learned jointly with a neural network on a natural language processing task, such as text classification, language modeling, or question answering. It requires the text documents to be preprocessed so that each word is one-hot encoded.

Word2Vec - Word2Vec is a statistical method for efficiently learning a standalone word embedding from a text corpus. The two learning models used in the word2vec approach are the Continuous Bag-of-Words (CBOW) model and the Continuous Skip-Gram model.

GloVe - The GloVe algorithm extends the word2vec method for efficiently learning word vectors. Techniques that provide embeddings for out-of-vocabulary words include fastText and MorphoRNN. Context-based word embedding techniques using deep learning include ELMo, OpenAI GPT, and BERT. The neural network architectures applied to natural language processing with word embeddings are recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and gated recurrent units (GRUs). Deep-learning-based word embedding techniques attract great attention and are widely used in many applications such as text classification, knowledge mining, question answering, and smart Internet of Things systems, among others. Recent advances of word embeddings in deep learning include named entity recognition, math-word embedding, and location prediction.
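To make the one-hot preprocessing that an embedding layer requires concrete, the sketch below uses a toy vocabulary and randomly initialized vectors (purely illustrative, not a real trained model). It shows that multiplying a one-hot vector by an embedding matrix amounts to selecting one row of that matrix:

```python
import random

# Toy vocabulary: each word is assigned an integer index.
vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}

def one_hot(word, vocab):
    """Return the one-hot vector for `word` over the vocabulary."""
    vec = [0.0] * len(vocab)
    vec[vocab[word]] = 1.0
    return vec

# An embedding layer is a trainable matrix with one row per word.
# In a real model these values are learned; here they are random.
dim = 3
random.seed(0)
embedding = [[random.uniform(-1, 1) for _ in range(dim)] for _ in vocab]

def embed(word, vocab, embedding):
    """Dense vector lookup for `word` (equivalent to one_hot @ matrix)."""
    return embedding[vocab[word]]

# one_hot("cat", vocab) -> [0.0, 1.0, 0.0, 0.0]
# embed("cat", vocab, embedding) -> the 3-dimensional row for "cat"
```

During training, only the rows of the embedding matrix corresponding to words in the current batch receive gradient updates, which is why the lookup formulation is preferred over an explicit matrix multiplication.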
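The CBOW and Skip-Gram models differ in the direction of prediction: CBOW predicts a center word from its surrounding context, while skip-gram predicts each context word from the center word. A minimal sketch of how each turns a token sequence into training pairs (helper names are illustrative, not a library API):

```python
def skipgram_pairs(tokens, window=2):
    """Skip-gram training pairs: each center word predicts every
    word within `window` positions of it."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

def cbow_pairs(tokens, window=2):
    """CBOW training pairs: the surrounding context words together
    predict the center word."""
    pairs = []
    for i, center in enumerate(tokens):
        context = [tokens[j]
                   for j in range(max(0, i - window),
                                  min(len(tokens), i + window + 1))
                   if j != i]
        pairs.append((context, center))
    return pairs

sentence = ["the", "cat", "sat", "on", "the", "mat"]
# skipgram_pairs(sentence) yields pairs like ("sat", "cat"), ("sat", "on"), ...
# cbow_pairs(sentence) yields pairs like (["cat", "on"], "sat"), ...
```

In the actual word2vec implementation these pairs feed a shallow network trained with negative sampling or hierarchical softmax; the learned input weights become the word vectors.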
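The claim that words with similar meaning receive similar representations is typically checked with cosine similarity between their vectors. A minimal sketch using hand-picked, hypothetical 3-dimensional vectors (for illustration only, not output of a trained model):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors; values near
    1.0 indicate the vectors (and ideally the words) are similar."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical vectors chosen so that "cat" and "kitten" point in
# roughly the same direction while "car" does not.
vec = {
    "cat":    [0.9, 0.1, 0.2],
    "kitten": [0.85, 0.15, 0.25],
    "car":    [0.1, 0.9, 0.3],
}
# cosine_similarity(vec["cat"], vec["kitten"]) is much higher than
# cosine_similarity(vec["cat"], vec["car"])
```

This same measure underlies common downstream uses of embeddings, such as nearest-neighbor word lookup and word analogy evaluation.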